Insurance
InsurTech 2.0 is collapsing — most of the startups that raised on "AI-first insurance" burned through capital and failed or are being quietly absorbed by incumbents. What is emerging from the wreckage is more interesting: parametric AI underwriting, embedded insurance via API, and agent-first claims processing that handles FNOL to payment without human intervention. The carriers that win will be those that treat AI governance as an engineering requirement under the NAIC FACTS framework, not a compliance afterthought.
Insurance is the industry where agentic AI delivers the most immediate, measurable ROI — and where deployment failures carry the heaviest regulatory consequences. The InsurTech 2.0 wave burned billions proving that technology enthusiasm without actuarial discipline produces bad loss ratios. The survivors, and the incumbents adopting their technology, understand exactly which workflows AI can automate safely and which ones still require a human in the loop.
What AI Is Actually Changing
The near-term impact is concentrated in three areas: claims triage, document processing, and fraud detection. Claims triage routes incoming claims to the right adjuster or to straight-through processing based on complexity scoring, coverage type, and fraud risk indicators. Document processing handles the unstructured data problem: policy applications, medical records, repair estimates, and contractor invoices that previously required manual data entry. Fraud detection applies pattern analysis at a scale and speed that human investigators cannot match — and unlike rule-based systems, ML fraud models adapt as fraud patterns evolve.
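As a concrete illustration, here is a minimal triage routing sketch in Python; the route names, thresholds, and `ClaimFeatures` fields are illustrative assumptions, not a production rubric.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    STRAIGHT_THROUGH = "straight_through"   # auto-adjudicate and pay
    STANDARD_ADJUSTER = "standard_adjuster"
    COMPLEX_CLAIMS = "complex_claims"
    SIU_REFERRAL = "siu_referral"           # special investigations unit


@dataclass
class ClaimFeatures:
    coverage_type: str        # e.g. "auto_collision", "property_water"
    estimated_amount: float   # estimated loss in USD
    fraud_score: float        # 0.0 - 1.0, from the fraud model
    complexity_score: float   # 0.0 - 1.0, from the triage model
    has_injury: bool


def route_claim(c: ClaimFeatures) -> Route:
    """Illustrative routing policy: fraud first, then complexity, then amount."""
    if c.fraud_score >= 0.85:
        return Route.SIU_REFERRAL
    if c.has_injury or c.complexity_score >= 0.7:
        return Route.COMPLEX_CLAIMS
    if c.estimated_amount <= 5_000 and c.complexity_score < 0.3:
        return Route.STRAIGHT_THROUGH
    return Route.STANDARD_ADJUSTER
```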
Parametric underwriting is the structural change with the longest tail. Pay-automatically-when-conditions-are-met products eliminate the claims process entirely for qualifying events. The actuarial AI revolution — ML models outperforming traditional actuarial tables on loss prediction — is enabling parametric pricing that was not feasible with manual actuarial approaches. The engineering challenge is the real-time data pipeline, not the model.
Where the Architecture Breaks
The technical problem that most insurance AI projects underestimate is the integration layer between modern inference infrastructure and legacy policy administration systems. AI models run at millisecond scale. Legacy policy administration systems — many built on COBOL mainframes in the 1980s and 1990s — were designed for batch processing. The mismatch between real-time inference and batch-oriented core systems is where most production deployments develop problems.
- Real-time fraud scoring requires features from claims history databases that are batch-updated nightly — stale features produce stale scores
- Embedded insurance APIs need sub-second response times from policy systems designed for overnight batch processing
- Agent-first FNOL workflows need bi-directional state management with ClaimCenter or Duck Creek — the APIs exist but the latency assumptions were not designed for agentic loops
- Parametric trigger pipelines need data freshness guarantees that batch-oriented core systems cannot provide without a real-time facade layer
The solution pattern is consistent: build a real-time data facade in front of the legacy system, replicate the high-velocity features to a low-latency store (Redis, DynamoDB, Snowflake dynamic tables), and let the legacy system remain the system of record for regulatory compliance while the AI layer operates against the replicated data.
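A minimal sketch of that replication and read path using Redis via redis-py; the key layout, field names, and staleness budget are assumptions, and the change-data-capture consumer that calls `replicate_claim_features` is left out.

```python
import json
import time

import redis  # redis-py; assumes a Redis instance is reachable

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def replicate_claim_features(policy_id: str, features: dict) -> None:
    """Called whenever the legacy system emits an update; the legacy
    system remains the system of record, this is only a read replica."""
    r.hset(
        f"features:claims:{policy_id}",
        mapping={"payload": json.dumps(features), "as_of": str(time.time())},
    )


def read_features(policy_id: str, max_staleness_s: float = 300.0) -> dict | None:
    """Low-latency read path for the inference layer, with a freshness guard."""
    row = r.hgetall(f"features:claims:{policy_id}")
    if not row or time.time() - float(row["as_of"]) > max_staleness_s:
        return None  # signal the caller to fall back or defer the decision
    return json.loads(row["payload"])
```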
The Regulatory Engineering Problem
The NAIC FACTS framework — fairness, accountability, compliance, transparency, security — reads like a governance checklist but is actually a set of architectural requirements. Transparency means every automated decision must produce a structured audit record that explains the inputs, the model version, and the output reasoning. Accountability means there is an identified human responsible for each AI system in production. Compliance means the model outputs must not disparately impact protected classes under applicable state law.
| Requirement | What It Means for Engineering |
|---|---|
| Transparency | Every automated decision stores structured justification — inputs, model version, feature values, output reasoning |
| Accountability | Model registry with identified human owners, change approval workflows, version pinning in production |
| Fairness | Disparate impact testing across protected class proxies before deployment and on production traffic samples |
| Security | Model serving infrastructure isolated from core system write paths, adversarial input detection |
| Compliance | Adverse action notices generated automatically when coverage is declined or modified |
These are not insurmountable requirements — they are design constraints that need to be in the architecture from the start. Retrofitting explainability into a model that has been in production for six months is difficult and expensive. Building it in from the beginning adds modest complexity and pays off immediately at the first market conduct examination.
Building Compliant AI Infrastructure for Insurance
Before any AI system touches underwriting or claims, build the model registry: version tracking, human owner assignment, approval workflows. This is the accountability layer the NAIC requires.
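A minimal sketch of what a registry record can capture, with an in-memory dict standing in for the real backing store; the field names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class ModelRegistryEntry:
    model_id: str             # e.g. "fraud-scoring"
    version: str              # version pinned in production
    owner: str                # identified human accountable for the model
    approved_by: str          # change-approval sign-off
    approved_at: datetime
    intended_use: str         # e.g. "claims fraud triage, personal auto"
    training_data_ref: str    # pointer to the dataset snapshot / lineage
    fairness_review_ref: str  # pointer to disparate impact test results


# In-memory stand-in; a real deployment backs this with a database
# and exposes it through approval workflows.
REGISTRY: dict[tuple[str, str], ModelRegistryEntry] = {}


def register(entry: ModelRegistryEntry) -> None:
    REGISTRY[(entry.model_id, entry.version)] = entry


def resolve_production_model(model_id: str, pinned_version: str) -> ModelRegistryEntry:
    """Inference services resolve models only through the registry, never by file path."""
    return REGISTRY[(model_id, pinned_version)]
```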
Real-time inference requires real-time features. Build a feature store that tracks data freshness and alerts when features exceed acceptable staleness thresholds — especially for fraud detection and parametric triggers.
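A sketch of the staleness check with per-feature-group budgets; the group names and thresholds are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass
import time

# Acceptable staleness per feature group, in seconds (illustrative values).
STALENESS_THRESHOLDS_S = {
    "fraud_velocity_features": 60,        # near-real-time for fraud scoring
    "parametric_trigger_feeds": 30,       # sensor/weather data behind payout triggers
    "claims_history_aggregates": 86_400,  # daily refresh is acceptable here
}


@dataclass
class FeatureGroupStatus:
    name: str
    last_updated_epoch: float


def stale_feature_groups(statuses: list[FeatureGroupStatus]) -> list[str]:
    """Returns feature groups that have exceeded their freshness budget;
    a monitoring job would alert the owning team for anything returned here."""
    now = time.time()
    return [
        s.name
        for s in statuses
        if now - s.last_updated_epoch > STALENESS_THRESHOLDS_S.get(s.name, 3600)
    ]
```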
Every automated decision writes a structured record to an append-only log: inputs, model ID, output, timestamp. This feeds both regulatory audit requirements and model monitoring.
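A minimal sketch of the record shape, written as JSON lines for illustration; a production deployment would target an append-only store with retention controls, but the fields are the important part.

```python
import json
from datetime import datetime, timezone


def record_decision(log_path: str, *, decision_id: str, model_id: str,
                    model_version: str, inputs: dict, output: dict) -> None:
    """Appends one structured decision record per line; feeds both the
    regulatory audit trail and model monitoring."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,   # feature values the model actually saw
        "output": output,   # score, decision, and reason codes
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
```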
Define the exact conditions that route an AI decision to human review. Document them. Test them. The handoff protocol is where agentic systems most often fail in production.
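One way to express those conditions as testable code, with illustrative thresholds; the point is that the escalation policy is versioned and unit-tested rather than buried in a prompt or a runbook.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str          # e.g. "approve_payment", "decline_coverage"
    confidence: float    # model confidence, 0.0 - 1.0
    amount: float        # payment or exposure amount in USD
    fraud_score: float
    is_adverse: bool     # any decline, reduction, or coverage modification


def requires_human_review(d: Decision) -> tuple[bool, str]:
    """Illustrative escalation policy; thresholds are assumptions."""
    if d.is_adverse:
        return True, "adverse_action"   # adverse decisions always get a human
    if d.fraud_score >= 0.6:
        return True, "fraud_indicator"
    if d.amount > 25_000:
        return True, "amount_threshold"
    if d.confidence < 0.8:
        return True, "low_confidence"
    return False, "auto"
```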
- Legacy policy administration systems (many still COBOL-based) have no native API surface — every integration requires wrapper layers that become the actual maintenance burden
- The NAIC Model Bulletin on AI requires explainable, auditable decisions for underwriting and claims — black-box models are a regulatory liability that grows with every automated adverse decision
- Parametric products need real-time IoT and weather data feeds tied directly to payout triggers — the freshness gap between sensor data and claims initiation is where disputes arise
- State-by-state rate filings: a single product change may require 50+ regulatory submissions with different schemas, timelines, and approval processes
- Catastrophe modeling now feeds real-time reinsurance pricing via climate AI — latency and data freshness matter at a level old actuarial pipelines were never designed for
- Embedded insurance distribution via third-party platforms creates API contract complexity that traditional carrier systems were not architected to handle at quote-and-bind speed
- Regulatory complexity is structural: 50 separate state regulators, different filing schemas, different AI governance requirements — this cannot be abstracted away
- Claims processing timelines are legally mandated in most states — system reliability and uptime are compliance requirements, not just SLA targets
- The actuarial AI revolution is real — ML models are outperforming traditional actuarial tables on loss prediction, but the explainability requirement means you cannot just deploy a gradient boosted model and call it done
- The AI explainability mandate is not theoretical — carriers have faced regulatory action for automated adverse underwriting decisions they could not explain at examination
- Distribution complexity spans captive agents, independent agents, MGAs, embedded APIs, and direct channels — each has different API integration requirements and data access controls
InsurTech 2.0 Collapse Is a Signal, Not a Setback
Lemonade, Hippo, Root — the InsurTech darlings of the 2019-2022 period — are cautionary tales in capital allocation, not proof that AI does not belong in insurance. They burned cash on customer acquisition and underpriced risk. What they proved is that technology alone does not change the fundamental actuarial and regulatory reality of the industry. The carriers absorbing these businesses are inheriting their technology stacks and their distribution, while discarding the "move fast" attitude that ignored loss ratios. The real AI transformation in insurance is happening inside incumbents who understand that Guidewire with an AI layer is more defensible than a greenfield insurtech with better UX.
Parametric Products Change the Engineering Problem Entirely
Traditional claims require an adjuster to assess a loss. Parametric products pay automatically when a defined condition is met — a wind speed threshold, a rainfall measurement, an earthquake magnitude — without any claims process at all. The engineering problem shifts from AI-assisted adjudication to real-time data pipeline integrity: if the IoT sensor data or weather feed that triggers payment is stale, wrong, or manipulated, you pay incorrectly. Building parametric products requires treating the data ingestion layer as the risk control layer, with the same auditability you would apply to an underwriting model.
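A minimal sketch of a trigger evaluation that carries its own audit trail; the feed name, metric, threshold, and freshness window are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class SensorReading:
    source: str           # e.g. "weather_station_123" (illustrative feed name)
    metric: str           # e.g. "wind_speed_mph"
    value: float
    observed_at: datetime  # timezone-aware observation time


def evaluate_trigger(reading: SensorReading, *, threshold: float,
                     max_age: timedelta = timedelta(minutes=10)) -> dict:
    """Returns a structured evaluation rather than a bare boolean, so the
    payout decision records its source, value, and freshness."""
    age = datetime.now(timezone.utc) - reading.observed_at
    fresh = age <= max_age
    return {
        "triggered": fresh and reading.value >= threshold,
        "reason": "ok" if fresh else "stale_feed",  # stale data never pays out
        "source": reading.source,
        "metric": reading.metric,
        "value": reading.value,
        "observed_at": reading.observed_at.isoformat(),
        "threshold": threshold,
    }
```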
Agent-First FNOL Is Where the Adjuster Workforce Transition Begins
AI agents handling first notice of loss — receiving the claim, collecting documentation, running fraud screening, determining coverage, and initiating payment — can process straightforward property and auto claims end-to-end without human intervention. The human adjuster role shifts to exception handling, complex coverage disputes, and litigation oversight. This is not a future state; it is in production at several carriers. The workforce transition is real, and carriers that do not design the human handoff protocols are creating gaps that increase litigation exposure when edge cases hit the agentic pipeline without a clean escalation path.
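A sketch of the pipeline shape reduced to its control flow; the step functions and escalation detail are assumptions, and the point is that every step has an explicit exit to a human queue.

```python
from typing import Callable

# Each step receives the current claim state and returns ("continue", new_state)
# or ("escalate", {"reason": ...}); step names and shapes are illustrative.
Step = Callable[[dict], tuple[str, dict]]


def run_fnol_pipeline(claim: dict, steps: list[Step]) -> dict:
    """Processes straightforward claims end-to-end; any step the agent cannot
    complete confidently exits immediately to a human queue with the reason."""
    for step in steps:
        outcome, detail = step(claim)
        if outcome == "escalate":
            return {"status": "human_review", "detail": detail, "claim": claim}
        claim = detail  # updated claim state flows to the next step
    return {"status": "paid", "claim": claim}


# Illustrative step ordering for a simple auto claim:
# steps = [collect_documents, screen_fraud, determine_coverage, initiate_payment]
```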
Insurance in the U.S. is regulated state-by-state under the McCarran-Ferguson Act, coordinated through NAIC model laws. The NAIC Model Bulletin on Use of Artificial Intelligence Systems (adopted by 24+ states as of early 2026) requires carriers to implement AI governance programs covering the FACTS framework: fairness, accountability, compliance, transparency, and security. Carriers must document model lineage, maintain audit trails for automated decisions, and provide consumer-facing explanations for adverse actions. NAIC Regulatory Notice 24-09 extends these obligations to generative AI use cases. The NAIC Insurance Data Security Model Law (#668) requires comprehensive cybersecurity programs. State market conduct examinations now routinely audit AI governance documentation alongside financial filings. The cautionary cases are already public: carriers that deployed black-box underwriting models have faced regulatory action when they could not explain adverse decisions to examiners.
- Agent-first FNOL: end-to-end automated processing for straightforward claims, with humans handling only exceptions and disputes
- Parametric products expanding beyond catastrophe coverage into agriculture, travel, and SMB business interruption — all requiring real-time data pipeline infrastructure
- NAIC AI governance adoption accelerating — majority of states expected to adopt the Model Bulletin by late 2026, making FACTS compliance a baseline carrier requirement
- Embedded insurance distribution growing through API partnerships — coverage sold inside fintechs, e-commerce platforms, and gig economy apps requires real-time quote-and-bind
- Climate AI feeding catastrophe models: satellite imagery, IoT sensors, and ML-based exposure modeling replacing annual property surveys
- InsurTech consolidation: incumbents acquiring distressed InsurTech 2.0 survivors for technology and distribution, not for their underwriting models
- Deploying black-box underwriting models without explainability infrastructure — creates NAIC FACTS regulatory liability that compounds with every automated adverse decision
- Building AI fraud detection on batch-updated feature stores — the freshness gap between data update and inference is where sophisticated fraud exploits the system
- Treating embedded distribution as a UI problem — it is an API contract and data freshness problem that requires changes to core system architecture
- Full platform replacement over incremental modernization — large carrier core system replacements routinely exceed budgets and timelines by multiples
- Ignoring the independent agent channel when building digital-first experiences — in commercial lines, independent agents still control the majority of distribution volume
We build insurance systems that treat explainability and auditability as first-class engineering requirements, not bolt-on compliance features. Every automated decision in our systems — underwriting, claims triage, fraud scoring — produces structured justification output that satisfies NAIC FACTS requirements. We design for incremental modernization: strangler-fig patterns over legacy policy administration systems, not rip-and-replace projects that take five years and fail at the finish line. For parametric products, we build the IoT-to-payout pipeline with the data freshness and audit trail that regulators expect. Our team has direct integration experience with Guidewire, Duck Creek, and Verisk data pipelines.
Ready to build for Insurance?
We bring domain expertise, not just engineering hours.
Start a Conversation
Free 30-minute scoping call. No obligation.
