Sentinel Core

Crash Geometry Expert

AGT-MOT-001 reconstructs the three-dimensional geometry of vehicle crashes from submitted photographs and runs Finite Element Analysis (FEA) simulations to validate whether the observed damage pattern is physically consistent with the claimant's narrative of how the accident occurred. Using OpenCV Structure from Motion (SfM) to build a 3D point cloud from multi-angle photos, the Blender Python API for mesh reconstruction, and FEniCS to simulate impact forces, the agent determines the direction of the impact force, the approximate speed at impact, and whether the metal deformation pattern is consistent with the declared collision trajectory. A staged accident — where a vehicle is deliberately damaged by a stationary impact rather than a moving collision — produces distinctive geometric and material deformation signatures that this agent can detect.

Tech Stack

Python 3.11 Runtime
OpenCV 4.x + SfM Structure from Motion — 3D point cloud reconstruction from 2D photos
Open3D 0.17 Point cloud processing, surface normal estimation, mesh cleaning
Blender 4.x Python API 3D mesh reconstruction and damage topology modelling
FEniCS / DOLFINx Finite Element Analysis — impact force simulation on mesh
NumPy / SciPy Numerical computation, signal processing for FEA results
PyTorch 2.x CNN for damage classification from 2D photo input
Meshlab Mesh simplification and quality metrics

Input

Multiple photographs of the damaged vehicle from different angles, plus the claimant's written narrative of the accident.

Accepted Formats

JPEG, PNG, MP4 (frame extraction)

Fields

Name | Type | Req | Description
damage_photos | array&lt;binary&gt; | Yes | At least 3 photographs of the damaged vehicle from different angles (front, side, rear, damage close-up)
accident_narrative | string | Yes | Claimant's text description of how the accident occurred, including impact direction and speed estimate
vehicle_make_model | string | No | Vehicle make/model for accessing the reference mesh and material properties from the vehicle database
declared_speed_kmh | float | No | Declared speed at impact in km/h
declared_impact_point | string | No | Declared impact point (front, rear, left-side, right-side, corner)

Output

3D reconstruction quality metrics, FEA simulation results, and physical consistency verdict.

Format:

JSON

Fields

Name | Type | Description
reconstruction_quality | float | Quality score of the 3D reconstruction (0.0–1.0 based on point cloud density and surface error)
detected_impact_direction | string | Agent-inferred impact direction from 3D deformation geometry
detected_impact_speed_kmh | object | Estimated speed range: {min, max, confidence}
fea_stress_map | string | Base64-encoded PNG of the FEA Von Mises stress distribution on the vehicle mesh
deformation_consistency_score | float | Physical consistency of damage pattern with declared collision scenario (0.0–1.0)
anomaly_indicators | array&lt;string&gt; | Detected anomalies: STATIC_IMPACT_PATTERN, INCORRECT_FORCE_DIRECTION, MULTIPLE_EVENT_SIGNATURES, PRIOR_DAMAGE_OVERLAP
flags | array&lt;string&gt; | FLAG_STAGED_CRASH, FLAG_DIRECTION_MISMATCH, FLAG_SPEED_INCONSISTENCY, FLAG_PRIOR_DAMAGE
risk_score | float | Normalised risk contribution 0.0–1.0
verdict | string | PASS | FLAG | INCONCLUSIVE

Example Response

{
  "reconstruction_quality": 0.78,
  "detected_impact_direction": "stationary_front_compression",
  "detected_impact_speed_kmh": {"min": 5, "max": 15, "confidence": 0.71},
  "deformation_consistency_score": 0.21,
  "anomaly_indicators": ["STATIC_IMPACT_PATTERN"],
  "flags": ["FLAG_STAGED_CRASH", "FLAG_SPEED_INCONSISTENCY"],
  "risk_score": 0.85,
  "verdict": "FLAG"
}

How It Works

Vehicle crash geometry analysis is grounded in the physical laws governing material deformation. When a vehicle is struck by another vehicle at speed, the metal crumples in predictable ways — the deformation pattern encodes the impact direction, speed, and force distribution. An experienced accident reconstruction expert can read a crash scene and determine whether it is consistent with the described events.

AGT-MOT-001 automates this forensic expertise using a pipeline that progresses from photogrammetry through 3D reconstruction to physics simulation.

The Structure from Motion pipeline converts 2D photographs into a 3D point cloud by exploiting parallax — the same physical point appears in slightly different positions in photos taken from different angles. By matching these correspondences and solving the camera geometry, the algorithm recovers both the camera positions and a 3D model of the scene.

This 3D model is then compared against the undamaged manufacturer reference mesh to produce a per-vertex deformation field — essentially a map of how every part of the vehicle body has moved from its original position. This deformation field is the input to the Finite Element Analysis.

FEA is the same mathematical framework used by automotive engineers to design crash safety systems. Running it in reverse (inverse FEA) allows the agent to ask: 'What force vector, applied at what point on this vehicle, would produce this observed deformation pattern?' The answer is a reconstructed impact vector — magnitude, direction, and point of application.

This reconstructed impact vector is then compared against the claimant's narrative. If the narrative says 'frontal collision at 80 km/h' but the FEA says 'stationary compression at 15 km/h from the left side', the discrepancy is flagged. This type of physics-based evidence is extremely difficult to contest because it is grounded in objective physical laws.

Thinking Steps

1. Photo Ingestion & Quality Assessment

Load all submitted photographs. Assess quality: blur detection (Laplacian variance), exposure check (histogram analysis), and coverage check (ensure photos cover the declared damage area). Require minimum 3 photos for SfM reconstruction.

Insufficient photo coverage is a common fraud tactic — deliberately photographing only undamaged areas to avoid reconstruction.
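The blur gate in this step can be sketched with a plain NumPy Laplacian. This is a simplified stand-in for the production OpenCV check (cv2.Laplacian), and the threshold of 100 is illustrative, not a calibrated value:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response; low values indicate blur."""
    # 4-neighbour Laplacian evaluated on the image interior.
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:]
           - 4.0 * gray[1:-1, 1:-1])
    return float(lap.var())

def is_sharp(gray: np.ndarray, threshold: float = 100.0) -> bool:
    return laplacian_variance(gray.astype(np.float64)) >= threshold

# A high-frequency pattern scores far above a featureless (blurred) frame.
sharp_img = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
blurred_img = np.full((64, 64), 128.0)
```

A heavily blurred photo has almost no high-frequency content, so its Laplacian response collapses toward zero variance while a focused photo of crumpled metal scores orders of magnitude higher.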

2. 3D Reconstruction via Structure from Motion

Use OpenCV SfM to detect SIFT/ORB keypoints across all photos, match features across views, compute the fundamental matrix, and recover camera poses. Triangulate 3D points to build a sparse point cloud. Densify with multi-view stereo (MVS) to get surface detail.

SfM quality degrades with fewer than 5 photos or when photos are taken with less than 10 degrees of angular separation between views. The agent reports reconstruction_quality to flag insufficient inputs.
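The triangulation at the heart of SfM can be illustrated with the standard linear (DLT) method. This NumPy sketch uses two hypothetical noise-free cameras with identity intrinsics; the production pipeline uses OpenCV's calibrated equivalents:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # nullspace of A is the 3D point
    X = Vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean

# Two cameras: the second translated 1 unit along x (the parallax baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free correspondences the recovered point matches the true point exactly; with real photos, reprojection error from this step feeds the reconstruction_quality score.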

3. Damage Region Segmentation

Apply a CNN trained on 50,000 labelled vehicle damage images to segment the damaged regions in each 2D photo. Project the segmentation masks into 3D space using the reconstructed camera poses to identify which 3D points correspond to damaged surfaces.

The segmentation model distinguishes fresh damage (bright metal edges, paint cracks) from old damage (oxidation, faded edges) to detect prior damage overlap.
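The projection of 2D masks into 3D reduces to a pinhole projection test per point. A minimal NumPy sketch, where the intrinsics K, the pose, and the mask layout are all made up for illustration:

```python
import numpy as np

def points_in_mask(points, K, R, t, mask):
    """Boolean selector for 3D points whose projection lands inside a
    binary 2D damage mask (True = pixel segmented as damaged)."""
    cam = R @ points.T + t[:, None]        # world -> camera frame
    uv = K @ cam                           # camera -> homogeneous pixels
    uv = (uv[:2] / uv[2]).T                # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hit = np.zeros(len(points), dtype=bool)
    hit[inside] = mask[v[inside], u[inside]]
    return hit

K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
mask = np.zeros((64, 64), dtype=bool)
mask[30:40, 30:40] = True                  # hypothetical damaged region
pts = np.array([[0.0, 0.0, 1.0],           # projects to (32, 32): inside
                [0.2, 0.0, 1.0]])          # projects to (52, 32): outside
hit = points_in_mask(pts, K, R, t, mask)
```

Running this per camera and taking the union across views labels the subset of the point cloud that belongs to damaged surfaces.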

4. Vehicle Mesh Retrieval & Alignment

Retrieve the manufacturer reference mesh for the declared vehicle make/model from the internal 3D vehicle database (200+ vehicle models). Use ICP (Iterative Closest Point) to register the damaged point cloud against the reference mesh, then compute per-vertex deformation vectors.

The deformation vector field is the core data structure that feeds into FEA — it represents how each part of the vehicle surface has moved from its original position.

5. Finite Element Analysis Simulation

Import the deformation data into FEniCS. Apply the material properties of automotive steel (Young's modulus, yield strength, density). Run an inverse FEA to determine what impact force vector (magnitude + direction) would produce the observed deformation field. Compare the reconstructed force vector against the declared impact scenario.

Inverse FEA is computationally expensive (~30–120 seconds) but produces court-admissible physics evidence.
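In the linear-elastic regime the idea behind inverse FEA reduces to recovering the load from K u = f. A toy 1-D spring chain stands in for the vehicle stiffness matrix here; the production solve in FEniCS is nonlinear and vastly larger, so this only illustrates the principle:

```python
import numpy as np

# Toy 1-D spring chain standing in for the vehicle stiffness matrix K.
# In the linear-elastic regime K u = f, so an observed displacement
# field u determines the load that produced it: f = K u.
n, k = 6, 2.0e5                            # nodes, spring stiffness [N/m]
K = np.zeros((n, n))
for i in range(n - 1):                     # assemble one spring per segment
    K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
K[0, :] = 0.0
K[0, 0] = 1.0                              # clamp node 0 (boundary condition)

f_true = np.zeros(n)
f_true[-1] = 4.0e3                         # 4 kN pushed onto the free end
u = np.linalg.solve(K, f_true)             # forward solve: the deformation
f_recovered = K @ u                        # inverse step: recover the load
```

Each spring carries the full 4 kN, stretching 0.02 m, so the free end displaces 0.1 m; applying K to that displacement field recovers the original load exactly. Real crash metal deforms plastically, which is why the production inverse solve is iterative and expensive.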

6. Staged Crash Pattern Detection

Analyse the deformation pattern for signatures of staged crashes: uniform deformation depth (indicating slow, controlled impact against a wall vs. dynamic collision), symmetric damage inconsistent with the declared glancing impact, deformation lines suggesting a stationary press rather than a dynamic impact.

A genuine 80 km/h frontal collision produces a characteristic progressive crush zone with accordion folding; a 10 km/h garage-wall impact produces a very different deformation profile even when the surface damage looks superficially similar.
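One such signature check can be sketched from the crush-depth profile alone: a controlled static press yields near-uniform depth front to rear, while a dynamic collision shows a progressive gradient. The threshold and the sample profiles below are illustrative, not the production calibration:

```python
import numpy as np

def crush_signature(depths) -> str:
    """Classify a front-to-rear crush-depth profile (metres).
    A near-uniform profile suggests a slow, controlled static press;
    a steep gradient suggests a progressive dynamic crush."""
    depths = np.asarray(depths, dtype=float)
    spread = depths.std() / (depths.mean() + 1e-9)   # coefficient of variation
    return "STATIC_IMPACT_PATTERN" if spread < 0.15 else "DYNAMIC_COLLISION"

static_press = np.array([0.21, 0.20, 0.22, 0.21, 0.20])   # wall-push profile
dynamic_hit = np.array([0.45, 0.33, 0.22, 0.12, 0.05])    # progressive crush
```

The coefficient of variation is a deliberately simple statistic; the agent's real detector also considers fold geometry and symmetry, but the separation between the two profiles is already stark at this level.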

7. Narrative Cross-Validation

Parse the accident_narrative using NLP to extract: impact point, direction, speed estimate, road surface, weather. Cross-validate each claim against the FEA results. Flag any contradiction between the physics-derived impact vector and the narrative.

Common narrative-physics mismatches: 'rear-ended at 60 km/h' but FEA shows low-speed side impact; 'T-bone collision' but damage is entirely frontal.
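A rule-based fallback for this extraction can be sketched with regular expressions. This is a minimal stand-in for the spaCy pipeline, and the patterns are illustrative:

```python
import re

def parse_narrative(text: str) -> dict:
    """Rule-based extraction of the declared impact point and speed."""
    claims = {}
    speed = re.search(r"(\d+(?:\.\d+)?)\s*km/?h", text, re.IGNORECASE)
    if speed:
        claims["declared_speed_kmh"] = float(speed.group(1))
    for point in ("front", "rear", "left", "right", "corner"):
        if re.search(rf"\b{point}", text, re.IGNORECASE):
            claims["declared_impact_point"] = point
            break
    if re.search(r"rear[- ]?ended", text, re.IGNORECASE):
        claims["declared_impact_point"] = "rear"
    return claims

claims = parse_narrative("I was rear-ended at roughly 60 km/h on the motorway.")
```

The extracted claims dict is what PhysicsValidator compares against the FEA-derived impact vector and speed range.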

Thinking Tree

  • Root Question: Is the vehicle damage physically consistent with the declared accident scenario?
    • 3D reconstruction quality sufficient?
      • Quality ≥ 0.5 → proceed to FEA
      • Quality < 0.5 → INCONCLUSIVE (insufficient photos)
    • FEA impact vector reconstruction
      • Reconstructed direction matches declared → PASS (direction)
      • Direction mismatch > 30° → FLAG_DIRECTION_MISMATCH
    • Speed consistency check
      • FEA speed estimate consistent with declared speed → PASS
      • FEA speed estimate significantly lower → FLAG_SPEED_INCONSISTENCY
    • Staged crash pattern detection
      • Uniform static deformation signature detected → FLAG_STAGED_CRASH
      • Dynamic impact deformation signature → PASS
    • Prior damage overlap
      • Old damage detected in declared new damage area → FLAG_PRIOR_DAMAGE
      • All damage appears recent → PASS
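The tree bottoms out in a simple aggregation rule, sketched here as a hypothetical helper: poor reconstruction short-circuits to INCONCLUSIVE before any physics check is trusted, and any raised flag dominates a PASS.

```python
def aggregate_verdict(reconstruction_quality: float, flags: list) -> str:
    """Fold the branch outcomes of the thinking tree into a final verdict."""
    if reconstruction_quality < 0.5:
        return "INCONCLUSIVE"          # physics checks are not trustworthy
    return "FLAG" if flags else "PASS"
```

In the example response above, quality 0.78 with two flags raised yields FLAG under exactly this rule.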

Decision Tree

d1: Are at least 3 photos available and reconstruction quality ≥ 0.5?
    Yes → d2 | No → inconclusive

d2: FEA reconstructed impact direction within 30° of declared direction?
    Yes → d3 | No → flag_direction

d3: FEA speed estimate consistent with declared speed (within ±40%)?
    Yes → d4 | No → flag_speed

d4: Deformation pattern matches dynamic collision signature (not static)?
    Yes → d5 | No → flag_staged

d5: No prior/pre-existing damage detected in claimed damage zone?
    Yes → pass | No → flag_prior

flag_direction: FLAG — DIRECTION_MISMATCH: FEA impact vector contradicts declared collision direction
flag_speed: FLAG — SPEED_INCONSISTENCY: Material deformation indicates much lower speed than declared
flag_staged: FLAG — STAGED_CRASH: Deformation pattern consistent with controlled static impact, not traffic collision
flag_prior: FLAG — PRIOR_DAMAGE: Pre-existing damage detected in the claimed new damage area
inconclusive: INCONCLUSIVE — Insufficient photo coverage for reliable 3D reconstruction
pass: PASS — Crash geometry and deformation physics consistent with declared accident

Technical Design

Architecture

AGT-MOT-001 is an async FastAPI microservice with a long-running task queue (Celery + Redis) for FEA computations. Photo ingestion and SfM are synchronous and fast (<5 seconds). FEA simulation is dispatched as an async task and results are polled. The Blender Python API runs in a subprocess to avoid GIL contention. Vehicle reference meshes are cached in a local mesh database.
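The dispatch-and-poll pattern can be illustrated with stdlib primitives standing in for Celery + Redis. Both run_inverse_fea and the claim id are placeholders; the real task is the long FEniCS solve:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_inverse_fea(claim_id: str) -> dict:
    """Placeholder for the 30-120 s FEniCS solve handed to the worker."""
    time.sleep(0.1)                        # stand-in for the long computation
    return {"claim_id": claim_id, "impact_force_n": 4.0e3}

# Dispatch-and-poll: the request handler submits the job and returns
# immediately; a later poll collects future.result() when it is done.
pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(run_inverse_fea, "claim-001")
result = future.result(timeout=5)          # what the poll sees once done
pool.shutdown()
```

Swapping the executor for a Celery task queue adds persistence and cross-process distribution, but the request/poll contract the API exposes is the same.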

Components

Component | Role | Technology
PhotoQualityAssessor | Checks blur, exposure, and coverage of submitted photos | OpenCV Laplacian + histogram analysis
SfMReconstructor | Builds 3D point cloud from multi-view photos | OpenCV SfM + COLMAP
DamageSegmenter | CNN to segment fresh vs old damage in 2D photos | PyTorch Mask R-CNN
MeshAligner | Registers damaged point cloud to reference mesh | Open3D ICP registration
FEASimulator | Inverse FEA to reconstruct impact force vector | FEniCS DOLFINx
BlenderMeshBuilder | Converts point cloud to watertight mesh for FEA | Blender Python API (bpy)
NarrativeParser | Extracts impact scenario claims from narrative text | spaCy NER + rule-based extraction
PhysicsValidator | Compares FEA results against narrative claims | Pure Python + SciPy stats

Architecture Diagram

┌──────────────────────────────────┐
│  POST /analyze                   │
│  (photos[] + narrative)          │
└──────────────┬───────────────────┘
               │
               ▼
┌──────────────────────────────────┐
│      PhotoQualityAssessor        │
└──────────────┬───────────────────┘
               │
        ┌──────┴──────┐
        ▼             ▼
┌──────────────┐ ┌────────────────┐
│SfM           │ │ NarrativeParser│
│Reconstructor │ │  (NLP)         │
└──────┬───────┘ └──────┬─────────┘
       │                │
       ▼                │
┌──────────────┐        │
│ Damage       │        │
│ Segmenter    │        │
└──────┬───────┘        │
       │                │
       ▼                │
┌──────────────┐        │
│ MeshAligner  │        │
│ (ICP reg.)   │        │
└──────┬───────┘        │
       │                │
       ▼                │
┌──────────────┐        │
│BlenderMesh   │        │
│Builder       │        │
└──────┬───────┘        │
       │                │
       ▼                │
┌──────────────┐        │
│ FEASimulator │        │
│ (FEniCS)     │        │
└──────┬───────┘        │
       │                │
       └──────┬─────────┘
              │
              ▼
   ┌────────────────────┐
   │  PhysicsValidator  │
   └────────────────────┘

Data Flow

API Gateway → PhotoQualityAssessor | Raw photo binaries
PhotoQualityAssessor → SfMReconstructor | Validated photos + quality flags
SfMReconstructor → DamageSegmenter | Calibrated image set + camera poses
DamageSegmenter → MeshAligner | 3D damage region point cloud
MeshAligner → BlenderMeshBuilder | Aligned deformation vectors
BlenderMeshBuilder → FEASimulator | Watertight mesh + deformation field
FEASimulator → PhysicsValidator | Impact force vector + stress map
NarrativeParser → PhysicsValidator | Declared impact direction, speed, point
PhysicsValidator → API Gateway | JSON verdict + FEA stress map PNG