FEA Sidekick Synthesis

This synthesis consolidates five related notes documenting a startup concept for an AI-powered assistant for finite element analysis (FEA) workflows. The product, FEA Sidekick, targets a clear gap in the simulation market: the labour-intensive, error-prone meshing phase that consumes disproportionate engineering time.


I. Product Vision and Scope

Core Concept

FEA Sidekick is an AI “sidekick” that accelerates simulation workflows through model-aware meshing. Unlike end-to-end AI solvers that attempt to replace validated FEA (unacceptable in regulated industries), the sidekick enhances existing workflows by intelligently guiding mesh refinement based on model semantics.

Key Differentiators

  • Solver-agnostic approach: Converts commercial solver model decks into an open, intermediate representation that works across platforms
  • Engineer-first philosophy: The engineer retains ownership and accountability; the AI assists rather than replaces
  • Interpretable outputs: Refinement recommendations trace to specific model features, not opaque model weights

The founder's background directly informs this positioning: first-hand experience in automotive production engineering and aerospace FEA research.


II. The Problem

The Pain

As documented in the problem statement, FEA workflows are broken in practice:

  • Meshing is the bottleneck: In complex assemblies, generating a mesh of satisfactory quality can take longer than the solve itself. A crash simulation may solve in days, but meshing routinely takes over a week.
  • Mesh quality drives everything downstream: Poor meshing is the single most common cause of inaccurate results and wasted compute. Engineers iterate manually, often re-meshing multiple times before a valid run.
  • Multi-physics models compound the problem: Each parametric study requires a week or more of solver time, and the mesh must be right before the solve starts, or the cost is sunk.

Who Feels It

Aerospace and automotive OEMs and their Tier 1 suppliers — organisations running large numbers of non-linear or contact-heavy FEA models. The buyer is the simulation team and CAE manager, not IT procurement or C-suite, which shortens the sales cycle.

Why Now

Three converging forces make this the right moment:

  1. GNN-based mesh learning has matured: Graph neural networks can now tokenise 3D geometry and learn refinement policies from structured simulation data.
  2. Open-source solver ecosystems are production-ready: The product does not need to displace entrenched commercial solvers; it interoperates with them.
  3. Compute costs are rising: Every re-run due to poor meshing is a measurable cost in engineer-hours and cloud spend.

Why Not End-to-End AI

End-to-end AI solvers cannot be accepted in regulated industries due to certification requirements. FEA Sidekick wraps around the solver the engineer already uses, producing interpretable, auditable outputs. The wedge is model-aware meshing — reducing the most expensive manual step without breaking certification workflows.


III. Technical Architecture

The Model Graph

At the heart of FEA Sidekick lies the “model graph” — a linked data structure connecting:

  • Syntax-level model elements (sets, surfaces, steps, boundary conditions, loads)
  • Material definitions and contact interfaces
  • Underlying mesh connectivity

This graph enables semantic understanding of where accuracy matters: load introduction regions, constraint boundaries, material transitions, and contact interfaces receive refinement priority, while low-gradient interior regions are coarsened to reduce wasted degrees of freedom.
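The linkage can be sketched as a plain data structure. This is a minimal illustration, not the product's actual schema; all feature names and element IDs below are hypothetical placeholders:

```python
# Minimal model-graph sketch: semantic features point at the mesh
# elements they touch, so refinement priority can be looked up per element.
# Feature names and element IDs are illustrative placeholders.

# Semantic layer: model features and the element sets they reference
features = {
    "load_intro":     {"kind": "load",                "elements": {101, 102, 103}},
    "clamp_boundary": {"kind": "bc",                  "elements": {201, 202}},
    "steel_to_weld":  {"kind": "material_transition", "elements": {301}},
    "interior_bulk":  {"kind": "interior",            "elements": {401, 402, 403}},
}

# Refinement policy keyed on feature kind (higher = refine more aggressively)
PRIORITY = {"load": 3, "bc": 3, "material_transition": 2, "interior": 0}

def element_priority(elem_id: int) -> int:
    """Highest refinement priority among features touching this element."""
    return max(
        (PRIORITY[f["kind"]] for f in features.values() if elem_id in f["elements"]),
        default=0,
    )

print(element_priority(101))  # load introduction region: refine
print(element_priority(401))  # low-gradient interior: coarsen
```

Because every recommendation resolves to a named feature in this structure, the "interpretable outputs" differentiator falls out of the design: a refinement decision is always traceable to a set, load, or interface the engineer recognises.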

Technology Stack

The proof of concept relies on a carefully selected tool stack:

  • Parsing: py-tree-sitter for extracting model semantics from CalculiX/Abaqus input decks
  • Mesh generation: gmsh Python API for programmatic mesh refinement via size fields
  • Model construction: pygccx for automated CalculiX model generation
  • Machine learning: PyTorch Geometric for GNN-based refinement policies (seed phase)

IV. Technical Roadmap

The technical roadmap defines three phases of increasing ambition:

Phase 1: Model-Aware Meshing

The proof of concept is the near-term wedge. Four milestones define the PoC:

  1. Parse CalculiX/Abaqus-style input decks via tree-sitter into an open model format capturing syntax and semantics
  2. Build a linked model graph connecting syntax/semantics to mesh connectivity
  3. Produce adaptive meshing recommendations prioritising critical regions
  4. Close the loop with CalculiX: remesh, rerun, and quantify gains
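Milestone 1 can be illustrated without the tree-sitter grammar itself. The pure-Python stand-in below shows the target structure — keyword cards with parameters and data lines — for a toy Abaqus/CalculiX-style deck; a real grammar must also handle continuations, comments, and includes:

```python
# Simplified stand-in for the parsing milestone: split an Abaqus/CalculiX-
# style input deck into keyword cards, each with its parameters and data
# lines. Illustrative only; the real parser uses a tree-sitter grammar.

DECK = """\
*NODE
1, 0.0, 0.0, 0.0
2, 1.0, 0.0, 0.0
*ELEMENT, TYPE=C3D8
1, 1, 2, 3, 4, 5, 6, 7, 8
*BOUNDARY
1, 1, 3, 0.0
"""

def parse_deck(text: str) -> list[dict]:
    cards = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("**"):      # skip blanks and comments
            continue
        if line.startswith("*"):                   # keyword line opens a card
            keyword, *params = [p.strip() for p in line.lstrip("*").split(",")]
            cards.append({"keyword": keyword.upper(),
                          "params": dict(p.split("=") for p in params),
                          "data": []})
        else:                                      # data line joins the open card
            cards[-1]["data"].append([v.strip() for v in line.split(",")])
    return cards

cards = parse_deck(DECK)
print([c["keyword"] for c in cards])   # ['NODE', 'ELEMENT', 'BOUNDARY']
print(cards[1]["params"])              # {'TYPE': 'C3D8'}
```

The resulting card list is exactly what milestone 2 consumes: each card's sets, element types, and boundary references become nodes in the model graph.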

Phase 2: The Claude Code Moment for FEA

Build an AI agent that understands the model and automates labour-intensive tasks (meshing, contact definition, boundary-condition specification). The architecture comprises two tiers:

  • Physics-informed model layer: Uses GNNs to tokenise 3D geometry and a tree-sitter grammar to construct model trees, predicting optimal meshing strategies and anticipating convergence issues
  • Conversational LLM layer: Interacts with the user via MCP (Model Context Protocol), calling into the physics-informed model layer for validated suggestions

The model graph constrains which changes the LLM can propose, preventing the generation of syntactically valid but physically nonsensical input decks. Future extensions include pre-processor integration via native Python APIs (Abaqus CAE, HyperMesh, ANSA) and surrogate/reduced-order FEA model recommendations for applications like WAAM multi-physics eigenstrain calibration.
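The constraint mechanism can be sketched as a validation gate between the two tiers. Everything below is hypothetical — entity names, edit kinds, and the rule table are placeholders for the real model-graph checks:

```python
# Sketch of "the model graph constrains the LLM": a proposed edit is
# accepted only if it names an entity that exists in the graph AND the
# edit kind is physically meaningful for that entity class. Entity
# names, edit kinds, and rules are hypothetical placeholders.

MODEL_GRAPH = {
    "sets":     {"NALL", "LOAD_FACE"},
    "surfaces": {"CONTACT_TOP"},
    "steps":    {"STEP_1"},
}

# Which entity class each edit kind is allowed to target
ALLOWED = {
    "refine_mesh":  "sets",
    "add_pressure": "surfaces",
    "extend_time":  "steps",
}

def validate_proposal(edit_kind: str, target: str) -> bool:
    """Reject edits naming unknown entities or mismatching entity classes."""
    entity_class = ALLOWED.get(edit_kind)
    return entity_class is not None and target in MODEL_GRAPH[entity_class]

print(validate_proposal("add_pressure", "CONTACT_TOP"))   # valid target
print(validate_proposal("add_pressure", "NO_SUCH_SURF"))  # unknown surface
print(validate_proposal("refine_mesh", "CONTACT_TOP"))    # wrong entity class
```

An LLM can only hallucinate within the vocabulary this gate admits: a syntactically plausible edit that references a nonexistent surface, or applies pressure to a node set, never reaches the input deck.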

Phase 3: Solver-Level Acceleration

The most ambitious objective: accelerate the solver in real time through AI-driven adaptive re-meshing and Jacobian/tangent initialisation. If a physics-informed model can predict the displacement field at the next increment to within the basin of quadratic convergence for Newton-Raphson, the solver reaches convergence in significantly fewer iterations. This requires tight integration with solver developers through partnerships.
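The claim is easiest to see on a scalar residual. In the toy below, the initial guess stands in for an AI-predicted displacement field: a start inside the quadratic basin reaches the same tolerance in far fewer Newton iterations than a cold start. The residual function is arbitrary, chosen only for illustration:

```python
# Scalar illustration of the Phase 3 claim: Newton-Raphson converges in
# far fewer iterations when the initial guess (standing in for an
# AI-predicted displacement field) already lies inside the basin of
# quadratic convergence. The residual is an arbitrary toy function.

def newton_iterations(x0: float, tol: float = 1e-12, max_it: int = 100) -> int:
    f  = lambda x: x**3 - 2.0 * x - 5.0   # residual
    df = lambda x: 3.0 * x**2 - 2.0       # tangent ("Jacobian")
    x, it = x0, 0
    while abs(f(x)) > tol and it < max_it:
        x -= f(x) / df(x)                 # Newton update
        it += 1
    return it

cold = newton_iterations(10.0)  # poor initial guess: slow creep to the basin
warm = newton_iterations(2.1)   # predicted guess near the root (~2.0946)
print(cold, warm)
```

In an implicit FEA solve, each such iteration is a full linear system assembly and solve, so shaving even a few iterations per increment compounds across thousands of increments.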


V. Market Positioning and Rationale

Competitive Advantage

Rather than competing with entrenched solver vendors (Ansys, Dassault, Siemens), FEA Sidekick interoperates with them. This positioning dramatically lowers adoption barriers inside large organisations where introducing new solvers requires navigating procurement, IT security, and validation processes.

Domain Expertise as Market Signal

The founder’s background provides three critical signals for pre-seed investors:

  1. Domain expertise: Automotive production experience combined with Cranfield University aerospace FEA research
  2. Market access: Existing relationships with GE Aerospace and Airbus
  3. Timing: Convergence of GNN-based mesh learning, open-source solver ecosystems, and rising compute costs

VI. The Proof of Concept

Two-Phase Development

The proof of concept splits into distinct phases aligned with funding milestones:

Phase 1: Pre-Seed Demo (Weeks 1–8)

A heuristic model-aware meshing demonstration requiring no machine learning:

  1. Automate CalculiX model creation for a WAAM deposited wall geometry
  2. Develop tree-sitter parser for CalculiX input files
  3. Construct the model graph linking semantics to mesh connectivity
  4. Implement heuristic size field generation from model features
  5. Close the loop: remesh, rerun, and quantify improvement
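Milestone 4's heuristic can be sketched as a distance-based size function: fine near critical features from the model graph, coarse in the bulk. The characteristic lengths and feature coordinates below are illustrative, not calibrated; a function of this shape maps naturally onto a background size field in the gmsh Python API:

```python
# Sketch of heuristic size-field generation (milestone 4): target element
# size ramps from a fine value at a critical feature to a coarse value
# beyond an influence radius. All lengths and coordinates are
# illustrative, not calibrated for the WAAM wall geometry.

import math

H_MIN, H_MAX, D_INFL = 0.5, 5.0, 20.0   # fine size, coarse size, influence radius

# Critical feature locations extracted from the model graph (illustrative)
FEATURE_POINTS = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0)]

def target_size(p: tuple[float, float, float]) -> float:
    """Linear ramp from H_MIN at a feature to H_MAX beyond D_INFL."""
    d = min(math.dist(p, q) for q in FEATURE_POINTS)
    t = min(d / D_INFL, 1.0)
    return H_MIN + t * (H_MAX - H_MIN)

print(target_size((0.0, 0.0, 0.0)))    # on a feature: fine (0.5)
print(target_size((100.0, 0.0, 0.0)))  # far field: coarse (5.0)
```

Because the function is cheap and deterministic, the full loop — size field, remesh, rerun, compare — can be driven automatically, which is precisely what generates the labelled data for the seed-phase GNN.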

The WAAM geometry is deliberately chosen for its relevance to GE Aerospace and Airbus while remaining tractable for a proof of concept.

Phase 2: Seed Milestone (Post-Funding)

Replace the heuristic with a trained GNN refinement policy:

  1. Generate labelled training data from Phase 1 automated runs
  2. Train GNN policy using PyTorch Geometric
  3. Benchmark against baseline and heuristic results
  4. Extend to non-linear and thermo-mechanical models

Success Metrics

The benchmark definition requires improvement across three metrics:

  • DOF count: Reduction while maintaining <1% error in peak von Mises stress
  • Time to convergence: Wall-clock solver runtime
  • Iteration count: For non-linear extensions
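The acceptance test implied by these metrics can be written down directly. The 1% stress-error threshold comes from the text; the baseline and candidate numbers are invented for illustration:

```python
# Direct encoding of the success metrics: a candidate mesh passes only if
# it reduces DOF count, keeps peak von Mises stress within 1% of the
# baseline, and does not slow the solve. All numbers are illustrative.

def passes_benchmark(base: dict, cand: dict) -> bool:
    stress_err = abs(cand["peak_vm"] - base["peak_vm"]) / base["peak_vm"]
    return (
        cand["dof"] < base["dof"]             # fewer degrees of freedom
        and stress_err < 0.01                 # <1% peak von Mises error
        and cand["wall_s"] <= base["wall_s"]  # no slower than baseline
    )

baseline  = {"dof": 1_200_000, "peak_vm": 412.0, "wall_s": 5400.0}
candidate = {"dof":   800_000, "peak_vm": 410.5, "wall_s": 3100.0}
print(passes_benchmark(baseline, candidate))  # True
```

Pinning the pass/fail rule in code this early keeps the pre-seed demo honest: the heuristic and the later GNN policy are scored against the same gate.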

VII. Long-Term Product Evolution

Future Capabilities

The product roadmap extends beyond meshing:

  • GNN-based refinement policies: Learned error indicators for adaptive remeshing
  • Format converters: Additional commercial solver support (Nastran, Ansys) on the same intermediate representation
  • Data factory: Automated variant generation and multi-fidelity simulation runs for robust training data
  • Solver integration: Partnership-driven iteration-level acceleration (Jacobian initialization, tangent methods)
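The converter item above amounts to a registry pattern: each solver format registers a reader that emits the same intermediate representation, so the model graph and meshing tooling are written once. The reader bodies below are illustrative stubs, not real parsers:

```python
# Sketch of the format-converter idea: each solver format registers a
# reader that emits a shared intermediate representation (IR), so
# downstream tooling is solver-agnostic. Reader bodies are stubs.

READERS = {}

def reader(fmt: str):
    """Decorator registering a deck reader for one solver format."""
    def wrap(fn):
        READERS[fmt] = fn
        return fn
    return wrap

@reader("calculix")
def read_ccx(text: str) -> dict:
    # Stub IR: a real reader would emit cards, sets, materials, steps
    return {"format": "calculix", "cards": text.count("*")}

@reader("nastran")
def read_nastran(text: str) -> dict:
    # Stub IR: Nastran bulk data is line-oriented, one entry per line
    return {"format": "nastran", "cards": len(text.splitlines())}

def to_ir(fmt: str, text: str) -> dict:
    """Dispatch a deck to the registered reader for its format."""
    return READERS[fmt](text)

print(to_ir("calculix", "*NODE\n*ELEMENT")["cards"])  # 2
```

Adding Ansys or any further format then means adding one reader, with no changes to the meshing or model-graph layers.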

Physics-Informed LLM Integration

The long-term vision includes conversational interaction via Model Context Protocol (MCP). However, this requires a physics-informed backbone — the LLM handles natural language while the underlying model graph constrains changes to physically valid options. This addresses the failure mode of bare LLM assistants: generating syntactically valid but physically nonsensical models.


VIII. Pre-Development Requirements

The proof of concept plan and founder background notes identify critical prerequisites:

  1. GPL v2 strategy: Interface with CalculiX at the I/O level only, avoiding copyleft obligations on the product's own code
  2. University IP clearance: Written NoC from Cranfield University
  3. Employment contract review: Confirm IP assignment clauses
  4. Immigration confirmation: Global Talent Visa permits company directorship

Customer Discovery

Before any code is written, the plan calls for gathering concrete evidence from GE Aerospace and Airbus contacts: meshing cycle costs, re-run frequency due to quality issues, and the value customers would place on a 30% reduction in meshing cycle time.


IX. Key Insights

  1. Regulatory reality drives product scope: End-to-end AI solvers cannot displace validated FEA in aerospace and automotive industries due to certification requirements — the wedge is augmentation, not replacement.

  2. Meshing is the bottleneck: Poor mesh quality is the single most common cause of inaccurate results and wasted compute in FEA workflows.

  3. Interoperability beats displacement: Wrapping around existing validated solvers lowers adoption barriers compared to introducing new solvers in enterprise environments.

  4. Interpretability is non-negotiable: In regulated industries, accountability requires auditable recommendations traceable to model features.

  5. Phased approach de-risks development: The heuristic-to-ML progression validates the core model graph concept before investing in training infrastructure.

  6. Dual-layer architecture prevents AI failure modes: The physics-informed model graph constrains LLM proposals, eliminating the generation of syntactically valid but physically nonsensical models.


Source Notes