Published on Mar 3, 2026
Why LLMs are fundamentally problematic in safety-critical systems

Felix Schaller

Large Language Models are powerful tools for text generation, but they introduce structural risks when deployed in safety-critical or regulated environments.
Key issues include:
* Non-deterministic behavior
* Lack of formal guarantees
* Hallucinations without verifiable causal chains
* No direct path to certification under ISO 26262 or ISO 21448 (SOTIF)
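The first point can be made concrete. LLM outputs are produced by sampling from a probability distribution over tokens, so the same prompt can yield different answers across runs; only in the degenerate greedy case (temperature approaching zero) does decoding become a deterministic function of its inputs. The toy sketch below simulates this with an invented four-word vocabulary and made-up logits; it is illustrative only, not a real decoder.

```python
import math
import random

VOCAB = ["brake", "accelerate", "steer", "stop"]
LOGITS = [2.1, 1.9, 0.5, 2.0]  # invented next-token scores

def softmax(logits, temperature):
    """Convert logits to a probability distribution at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(rng, temperature=1.0):
    """Stochastic decoding: output depends on the random draw."""
    probs = softmax(LOGITS, temperature)
    return rng.choices(VOCAB, weights=probs, k=1)[0]

def greedy_token():
    """Greedy decoding: a pure function of the logits, always the same."""
    return VOCAB[max(range(len(LOGITS)), key=lambda i: LOGITS[i])]
```

Even greedy decoding only removes sampling noise; it does not provide the formal guarantees or causal traceability that the other bullet points refer to.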
In domains like automotive, aerospace, and defense, systems must be:
* Deterministic
* Verifiable
* Traceable
* Certifiable
This is why purely statistical AI approaches struggle in these contexts.
Alternatives exist:
Model-based, symbolic, and declarative AI approaches can transform natural language into formal representations that support reasoning, verification, and compliance with safety standards.
The future of AI in regulated industries is not about replacing engineering rigor — it’s about restoring it.
