Insights · April 21, 2026

Why SPICE lies to you. And what we did about it.

Matt Berggren, CEO, Neurocad Inc.


Every electrical engineer learns SPICE at some point. You set up that 741 op-amp, fight the netlist, hold your breath. If the simulation gods are smiling, a Bode plot materializes on screen. A rite of passage. Proof you made it.

Then you spend the next decade trying to avoid doing it again.

I spent years as an FAE at Altium, meeting engineers on-site, hearing them out over lukewarm coffee and deli sandwiches. When SPICE came up, there was a particular kind of exhaustion in their voice that said,

"I know it will lie to me and I don't always know why."

Setting it up correctly took long enough. Trusting the result took longer. And when it blew up anyway, the error message assumed you had a PhD in numerical methods sitting next to your coffee. Hard to debug what you do not fully understand. Harder still when the tool will not tell you what actually went wrong.

That frustration stayed with me. When we started building Neurocad, it was one of the problems we were determined not to leave alone. 

In short: SPICE fails because of how it handles geometry, and the fix required building something most EDA companies have never attempted.

The geometry behind SPICE non-convergence

Newton-Raphson (the algorithm at the heart of most SPICE solvers) is an iterative root-finding method. It takes an initial guess, follows the tangent at that point, and repeats until it lands on a solution. It is elegant. It is also fragile in a very specific way: whether you converge at all depends on the quality of your starting conditions.

Get the initial guess wrong, miss the tangent, and you are not going to converge. You will accumulate errors across every integration step until you have broken outside the reference frame of the actual curve entirely. You are in what I privately think of as the mathematical nether regions, like the ‘Minus World’ in Super Mario, and you are not getting back.
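
The failure mode is easy to reproduce outside of any circuit simulator. Here is a minimal Newton-Raphson sketch in Python (a toy, not any SPICE engine's actual implementation) finding the root of tanh(x): one starting point converges quickly, the other overshoots the tangent and escapes into those nether regions.

```python
# Newton-Raphson on f(x) = tanh(x). From a good guess it converges to the
# root at 0; from a bad one the tangent step overshoots and the iterates
# blow up. Illustrative sketch only.
import math

def newton(f, df, x0, iters=20):
    x = x0
    for _ in range(iters):
        try:
            fx, dfx = f(x), df(x)
        except OverflowError:
            return None  # iterates escaped the curve's reference frame
        if dfx == 0:
            return None  # flat tangent: no step possible
        x = x - fx / dfx
    return x

f  = math.tanh
df = lambda x: 1.0 / math.cosh(x) ** 2

print(newton(f, df, 0.5))  # good guess: converges to the root at 0
print(newton(f, df, 1.5))  # bad guess: overshoot, divergence, None
```

The circuit model (the function) is identical in both runs; only the starting condition differs. That is the whole point.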

What is that curve? A spline. And here is where engineers who came up through electronics and never crossed into geometry may get confused: a spline is not a line. It is a piecewise polynomial, an approximation of a continuous function bent to a reference frame by a control polygon, weights, and knots, the same concept shipbuilders used to shape wooden hulls centuries before floating-point units existed. The sailboat hull and the SPICE convergence failure share the same mathematical ancestry.
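
For readers who have never met one in code, here is a minimal sketch of de Boor's algorithm, the standard way to evaluate a B-spline from its control polygon and knot vector. The knot vector and control values below are made up for illustration.

```python
# de Boor's algorithm: a smooth B-spline curve evaluated from a control
# "polygon" (scalar control values here, for brevity), a knot vector, and
# a degree p. The data below is invented for illustration.

def de_boor(k, x, t, c, p=3):
    """Evaluate the degree-p B-spline at parameter x, where t[k] <= x < t[k+1]."""
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            num = x - t[j + k - p]
            den = t[j + 1 + k - r] - t[j + k - p]
            alpha = num / den
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]   # clamped cubic knot vector
ctrl  = [0.0, 1.0, 3.0, 4.0, 2.0, 5.0]   # control polygon values
print(de_boor(3, 0.0, knots, ctrl))      # clamped start: equals ctrl[0]
print(de_boor(4, 1.5, knots, ctrl))      # interior point: a blend of nearby controls
```

Every evaluated point is a weighted blend of nearby control values; the smooth curve never exists except through that polygon, its knots, and the reference frame they define.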

Non-convergence is as much a topology problem as it is a circuit design problem. Topology is not something the circuit analysis curriculum teaches.

Understanding what triggers your ITL4 or GMIN stepping errors is, at some level, understanding the topology of complex curves. All of that calculus, and yet you spent more time discretizing solutions to homework problems than you did learning about the shape of the curve and its direct parallel to the CAD software used to design your circuit. The same system of equations that makes parametric modeling deterministic, the same reasoning that drives a constraint-based CAD model, also governs whether your solver stays inside the curve or breaks out of it entirely.

What that means in practice: when the geometry is not handled correctly, intent does not propagate. The reasoning behind the circuit, the constraints, the relationships, the decisions that made the model valid, gets lost at exactly the moment the solver needs it most. It is also the problem Neurocad was built to fix.

What building a parametric modeling kernel reveals

When we built the Neurocad modeling kernel, we were working with numbers at a decimal precision electronics engineers have never needed to consider. Getting a watertight BREP (a Boundary Representation, the mathematical structure underlying solid models in SolidWorks, Fusion 360, and every Parasolid-backed modeler) requires approaching problems closer to Navier-Stokes and fluid dynamics than to computer graphics. This is model-based engineering at its most fundamental. A complete mathematical definition of intent, constraints, and behavior that downstream tools can interrogate without losing fidelity.

Surface modeling at G5 continuity (geometric continuity through the fifth derivative, well beyond ordinary curvature continuity) exceeds the precision requirements of most electrical engineering work by a considerable margin. As an EE-turned-CAD dude, I was humbled to realize how valuable that precision has proven when debugging analog simulations. What the kernel produces is structured engineering data: geometry that carries its own reasoning and does not break the moment something upstream changes. Mechanical modeling has always operated at a higher order of geometric precision than its electrical counterpart. Surviving the crossing between those two domains requires a kernel that understands both.

The applicable insight: the same discipline that makes a BREP stable is the discipline that makes a simulation converge. 

You must be confident in your initial conditions before Newton-Raphson takes its first step. The tangency at step one is the decision you live with through the entire analysis.

We defer Newton-Raphson dependency until initial conditions are established with the precision the kernel provides. More Laplace than most of us have looked at since graduate school, but that is the work.
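
To make the idea concrete without claiming anything about Neurocad's internals, here is a toy illustration of why initial conditions dominate: a 5 V source driving a diode through 100 Ω. Cold-start Newton overshoots into the diode exponential and exhausts its iteration budget, while source stepping (the classic SPICE fallback: ramp the source and warm-start each solve from the previous solution) converges comfortably. All component values and the iteration limit are illustrative.

```python
# A 5 V source, a 100 ohm resistor, and a diode to ground. Solve KCL at the
# diode node: IS*(exp(v/VT) - 1) = (vs - v)/R. Values are illustrative.
import math

IS, VT, R = 1e-12, 0.025, 100.0

def newton_node_voltage(vs, v0, maxiter=50, tol=1e-9):
    """Newton-Raphson on the node equation; None on iteration-limit failure."""
    v = v0
    for _ in range(maxiter):
        f  = IS * (math.exp(v / VT) - 1.0) - (vs - v) / R
        df = IS / VT * math.exp(v / VT) + 1.0 / R
        step = f / df
        v -= step
        if abs(step) < tol:
            return v
    return None  # budget exhausted: the ITL-style failure

# Cold start at v = 0 with the full 5 V applied: the first tangent overshoots
# to ~5 V, then Newton crawls back roughly VT per iteration and runs out.
cold = newton_node_voltage(5.0, 0.0)

# Source stepping: ramp vs in 20 steps, warm-starting from the last answer.
v = 0.0
for i in range(1, 21):
    v = newton_node_voltage(5.0 * i / 20, v)

print(cold)  # None: cold start fails within the budget
print(v)     # ramped solve lands near the expected ~0.6 V diode drop
```

Same circuit, same solver, same tolerance; only the quality of the starting conditions changed. Establish those first and the iteration becomes almost boring.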

What this means for you

SPICE is not the problem. SPICE speaks truth. That is the most unforgiving thing about it: garbage in, garbage out. The problem has always been the setup tax, the trust deficit, and what happens after analysis when you have to reconstruct the circuit in CAD anyway.

So why all the noise about building a kernel? Because Neurocad is unifying the mathematical foundation of all three domains: electronics, 3D modeling, and analysis. What the product does, how it feels, and how it performs all rest on that foundation.

Today, Neurocad AI has a complete view of the field, 30+ years of expertise across all three domains, and the tools to play any position. The hardest part of reasoning is connecting intent, execution, and validation without losing the reasoning at each boundary: carrying your design intent and objectives from whatever raw ingredients you start with, through design and simulation, into the tools you use today. That chain has never been intact. Neurocad is what makes it intact.

Where we are headed is the natural extension of the principles that underpin electronic design automation, applied at the simulation layer. The compounding workflow: model to circuit to simulation, recurse to a higher-order model, iterating upward. Automated. Zero re-entry. Moving the workflow boundary from transcription to true system-level intelligence. The datasheet-to-CAD problem was a ledge, but it was not the only one. Now we breach the boundary between simulation and native synthesis.

Why we built the intent layer for engineering

Neurocad is built from thousands of customer encounters, on site, in your offices: engineers telling us, with varying degrees of frustration and resignation, “I don’t need more features, I need a solution to the real problem.”

SPICE may not have come up as much as the PCB library management grind or as loudly as the ECAD-to-MCAD handoff, but it came up enough. Engineers trusted it when it worked. They just never got a good explanation for why it sometimes didn't. And no one ever automated the setup work that precedes it. We heard and understood that problem, then built a kernel. The rest is in motion.

"The industry built the problem. Engineers paid for it. We are done with both."

Neurocad is the intent layer for engineering: the infrastructure between what documentation says and what CAD tools need, applied from the first datasheet to the last simulation pass.

If any part of these challenges feels familiar, Neurocad was built for you.

Neurocad™ is open to a small cohort for early access. Engineers working in multi-tool environments can apply at neurocad.com


Neurocad™ is built by engineers who spent their careers inside the workflows this platform is designed to fix. Previously at Accel EDA, Altium, Autodesk, Meta, Microsoft, HP, and Siemens building tools used by millions of designers, engineers, and consumers worldwide.


Neurocad™ is the intent layer connecting AI intent to physics-compliant, DRC-valid CAD output across Altium, SolidWorks, and adjacent engineering environments.




FAQ: Why SPICE lies to you

Why does SPICE non-convergence happen?

It is a geometry problem, not a model problem. Newton-Raphson linearizes the circuit's nonlinear equations at a guessed operating point and iterates toward a solution. If the initial tangent point is wrong, the solver accumulates error at every step until it has broken outside the reference frame of the curve. It cannot recover. ITL4 errors, GMIN stepping failures, and ABSTOL violations are symptoms of that geometric breakdown. The circuit model is often fine. The starting conditions are not.

What is Newton-Raphson and why does it matter for simulation?

Newton-Raphson is an iterative root-finding algorithm that steps from an initial guess toward a solution along successive tangents. In SPICE, it solves the nonlinear equations describing circuit behavior at each time step. The fragility is geometric: every subsequent iteration depends on the tangency of the first. Get the starting conditions wrong and the solver drifts outside the mathematical boundary of the actual curve. No amount of model refinement fixes that. Understanding Newton-Raphson as a geometric operation is the key to understanding why SPICE behaves the way it does.

What does topology have to do with SPICE simulation?

More than circuit analysis courses teach. A spline is a piecewise polynomial, an approximation of a continuous function governed by a control polygon, weights, knots, and a reference frame. The same topological principles that keep a parametric CAD model stable under revision also determine whether a SPICE solver stays inside the curve it is following. When design intent does not propagate correctly through that geometric structure, the solver fails. Topology explains why. It also informed how Neurocad built its modeling kernel.

What is a parametric modeling kernel and why does it matter for simulation?

A parametric modeling kernel is the mathematical engine underlying constraint-driven CAD tools like SolidWorks, Fusion 360, and CATIA. It stores intent and constraints alongside geometry, so models stay valid when upstream inputs change. Building one requires working at geometric precision levels most electronics engineers have never needed: G5 continuity, watertight BREP construction, structured engineering data that carries its own reasoning. The insight: the same discipline that makes a BREP stable makes a simulation converge. Both require establishing precise initial conditions before iterating. Neurocad's kernel applies that discipline to simulation setup.

What is design intent extraction and how does it apply to SPICE?

Design intent extraction recovers the reasoning behind an engineering artifact from the documentation that describes it: the constraints, relationships, and decisions that made the design valid. In simulation, intent loss occurs when that reasoning does not propagate into the solver. The solver gets geometry without the context governing it. Neurocad's intent review step surfaces what the system understood from a source document before anything is generated. Engineers confirm the interpretation before the simulation pipeline begins. That is design intent extraction applied to the setup problem.

What is zero re-entry and how does it apply to SPICE workflows?

In SPICE workflows specifically, re-entry happens twice. Before simulation, engineers manually set up netlists, node assignments, and initial conditions from documentation the tool cannot read directly. After simulation, results have to be reconstructed into CAD because no automated path exists between simulation output and native design assets. Zero re-entry means engineering intent moves directly into the tools that act on it without manual reconstruction at either boundary. Eliminating both is the zero re-entry problem at the simulation layer. It is the same problem Neurocad is already solving at the datasheet-to-CAD boundary, extended now to the boundary between simulation and native synthesis.

How does Neurocad address SPICE non-convergence?

By deferring Newton-Raphson dependency until initial conditions are established with kernel-level precision. Classical SPICE makes an initial guess and commits to it. Neurocad applies the same geometric discipline used in high-continuity mechanical surface modeling: establish rigorous tangency before subdivision begins. The result is a simulation setup significantly more resistant to the geometric discontinuities that cause non-convergence in conventional solvers. Automated SPICE simulation is on Neurocad's roadmap, built directly on the kernel capabilities available in Early Access today.

What is ratiometric inference and why does it matter for modeling and simulation?

Ratiometric inference is Neurocad's proprietary capability for resolving ambiguous or incomplete dimensions in engineering documentation before geometry is generated. Most real-world datasheets and package drawings contain incomplete information. Conventional tools fail on that ambiguity or require manual intervention. Ratiometric inference uses proportional reasoning and manufacturing tolerance standards to resolve those gaps. The geometry produced reflects the intent of the original documentation, not a best guess. In simulation, imprecise initial geometry propagates error into every downstream step. Resolving ambiguity at the source prevents intent loss at scale.

What is the intent layer for engineering?

The infrastructure between what engineering documentation says and what CAD tools need. Applied to SPICE, it means simulation setup, model generation, and circuit-to-CAD handoff happen without re-entry at any boundary. For the full argument on why this matters at the program and organization level, see "The decade ahead is going to break engineering teams that aren't ready."

