Sentinel Core

Anatomical Logic Analysis

AGT-MED-005 applies deep learning computer vision to X-ray and MRI radiological images to perform automated lesion localisation and anatomical consistency verification. The agent uses a fine-tuned ResNet-50 convolutional neural network with MONAI medical imaging augmentations to identify bone structures, detect lesions or fractures, and precisely localise them in anatomical coordinates. The localised findings are then compared against the claimant's declared pain location and the treating physician's diagnosis. A claim asserting a left femur fracture that the X-ray shows no evidence of, or an X-ray showing pathology in one anatomical region while the claim reports injury in another, is flagged as a potential fraud or identity substitution (the X-ray may belong to a different patient).

Tech Stack

Python 3.11 Runtime
PyTorch 2.x CNN training and inference framework
MONAI 1.x Medical imaging transforms, augmentations, and metrics
ResNet-50 CNN backbone for anatomical landmark detection and lesion localisation
pydicom 2.x DICOM file parsing and metadata extraction
SimpleITK Medical image processing and registration
OpenCV 4.x Bounding box rendering and annotation
ONNX Runtime Optimised model serving

Input

A radiological image (X-ray or MRI) in DICOM or common image format, plus declared anatomical region and clinical description.

Accepted Formats

DICOM (.dcm), JPEG, PNG, NIfTI (.nii)

Fields

Name Type Req Description
image_file binary Yes Radiological image file (DICOM preferred for metadata integrity)
declared_region string Yes Claimant-declared anatomical region of injury (e.g. 'left_knee', 'lumbar_spine', 'right_wrist')
declared_finding string No Physician-declared finding (e.g. 'fracture', 'disc_herniation', 'ligament_tear')
image_modality string No Imaging modality: XRAY | MRI | CT (auto-detected from DICOM if not provided)
patient_id_hash string No Hashed patient ID from DICOM header for identity cross-reference

Output

Detected anatomical region, localised findings with confidence scores, and consistency verdict against declared information.

Format:

JSON

Fields

Name Type Description
detected_body_region string CNN-detected anatomical region from the image content
detected_findings array<object> List of detected pathologies: {finding_type, confidence, bounding_box, anatomical_location}
region_match boolean Whether detected body region matches declared region
finding_match boolean | null Whether detected pathology matches declared finding (null if no finding declared)
dicom_patient_id string | null Patient ID from DICOM header (for identity verification)
dicom_study_date string | null Study date from DICOM metadata
annotated_image_b64 string Annotated image with bounding boxes, base64 encoded
flags array<string> FLAG_REGION_MISMATCH, FLAG_NO_PATHOLOGY, FLAG_FINDING_MISMATCH, FLAG_IDENTITY_SWAP
risk_score float Normalised risk contribution 0.0–1.0
verdict string PASS | FLAG | INCONCLUSIVE

Example Response

{
  "detected_body_region": "right_knee",
  "detected_findings": [
    {"finding_type": "medial_meniscus_tear", "confidence": 0.82, "bounding_box": [412, 188, 150, 132], "anatomical_location": "right_knee_medial"}
  ],
  "region_match": false,
  "finding_match": false,
  "dicom_patient_id": "PAT-8827",
  "dicom_study_date": "2024-06-10",
  "flags": ["FLAG_REGION_MISMATCH"],
  "risk_score": 0.83,
  "verdict": "FLAG"
}

How It Works

Radiological image fraud in health insurance takes several forms: submitting X-rays that belong to another patient, exaggerating the severity of a finding, or claiming injury to a body part that shows no pathology.

AGT-MED-005 addresses all of these with a multi-stage deep learning pipeline. The first stage uses DICOM metadata extraction to get objective information about what the image actually shows — the BodyPartExamined tag is normally populated at acquisition time by the modality equipment, making it independent of what the claimant says (the tag is optional in the DICOM standard, so the pipeline must also handle its absence).

The second stage applies a two-model CNN pipeline. The first model classifies the radiological image into one of 42 anatomical regions based purely on the visual content. This catches cases where the DICOM metadata has been modified or where the claimant is submitting an image without DICOM headers. The second model, selected based on the detected anatomical region, performs pathology detection to find and localise any abnormalities.

The cross-validation logic then compares four things: (1) declared body region vs. DICOM BodyPartExamined, (2) declared body region vs. CNN-detected body region, (3) declared finding vs. CNN-detected finding, and (4) declared patient identity vs. DICOM PatientID.

Any mismatch in these comparisons triggers a specific flag. The combination of DICOM metadata (objective, machine-generated) and CNN visual analysis (independent of any text the claimant provides) makes this a robust dual-verification system.
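The four comparisons can be sketched as a pure-Python flag accumulator. This is an illustrative sketch, not the agent's actual internals; the function name, argument names, and finding-dict shape are assumptions based on the output schema above.

```python
def cross_validate(declared_region, declared_finding, declared_pid,
                   dicom_body_part, dicom_pid, detected_region, detected_findings):
    """Accumulate fraud flags from the four declared-vs-detected comparisons.

    detected_findings: list of dicts like
        {"finding_type": ..., "anatomical_location": ...}
    Any argument may be None when the corresponding source is unavailable
    (e.g. a JPEG upload has no DICOM header).
    """
    flags = []

    # (1) + (2): declared region vs DICOM tag, and vs CNN-detected region
    if ((dicom_body_part is not None and dicom_body_part != declared_region)
            or detected_region != declared_region):
        flags.append("FLAG_REGION_MISMATCH")

    # (3): declared finding vs CNN-detected findings
    if declared_finding is not None:
        if not detected_findings:
            flags.append("FLAG_NO_PATHOLOGY")
        elif not any(f["finding_type"] == declared_finding for f in detected_findings):
            flags.append("FLAG_FINDING_MISMATCH")

    # (4): declared identity vs DICOM PatientID
    if dicom_pid is not None and declared_pid is not None and dicom_pid != declared_pid:
        flags.append("FLAG_IDENTITY_SWAP")

    return flags
```

Note that each flag is independent: a single submission can trigger several flags at once, which is why the output schema declares `flags` as an array.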

All detections are annotated on the original image with bounding boxes and returned as a base64-encoded image for adjudicator review — human medical expertise remains the final arbiter.

Thinking Steps

1

DICOM Parsing & Metadata Extraction

Parse the radiological file with pydicom. Extract DICOM header fields: PatientID, StudyDate, Modality, BodyPartExamined, StudyDescription. These metadata fields provide ground truth about what body part was actually scanned, independent of the claimant's declaration.

The DICOM BodyPartExamined tag is typically set by the radiology technician at the scanning machine — it is not trivially modified by the patient and provides an independent body-part record.
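A minimal extraction sketch: in production `ds = pydicom.dcmread(path)` would supply the dataset, and pydicom exposes header tags as attributes on it, so the helper below only assumes attribute-style access and treats every tag as potentially absent.

```python
# Header fields used downstream for cross-validation.
DICOM_TAGS = ("PatientID", "StudyDate", "Modality",
              "BodyPartExamined", "StudyDescription")

def extract_metadata(ds):
    """Pull cross-validation fields from a DICOM dataset.

    `ds` is typically the result of pydicom.dcmread(); tags missing
    from the header come back as None rather than raising.
    """
    return {tag: getattr(ds, tag, None) for tag in DICOM_TAGS}
```

Returning None for absent tags matters because several of these tags are optional: the decision logic must distinguish "tag says something different" from "tag not present".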

2

Image Preprocessing & Windowing

Apply DICOM window centre/width for optimal contrast (e.g. bone window for X-rays). Normalise pixel values to [0,1]. Apply MONAI spatial transforms: spacing normalisation, orientation standardisation, and intensity clipping.

Proper windowing is critical: the same chest X-ray viewed with bone window settings renders tissue contrast very differently from soft tissue settings, and an incorrectly windowed image can hide or exaggerate findings.
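The linear window mapping can be sketched with NumPy. This is the simplified linear form (DICOM's VOI LUT definition adds a half-pixel offset), and the example window values in the test are illustrative, not taken from the agent's configuration.

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map raw pixel values to [0, 1] with a linear window centre/width.

    Values below (center - width/2) clip to 0.0, values above
    (center + width/2) clip to 1.0; in between the mapping is linear.
    """
    lower = center - width / 2.0
    scaled = (np.asarray(pixels, dtype=np.float32) - lower) / float(width)
    return np.clip(scaled, 0.0, 1.0)
```

The centre/width pair usually comes straight from the DICOM WindowCenter and WindowWidth tags, with a modality-appropriate preset (e.g. a wide bone window) as a fallback when they are absent.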

3

Body Region Classification

Pass the preprocessed image through a body-region classifier (ResNet-50 with 42 output classes for anatomical regions). The top-1 predicted class is the detected body region. Compare against the declared_region field.

The body region classifier was trained on 280,000 radiological images across 42 anatomical classes and achieves 97.3% top-1 accuracy on the validation set.
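The logits-to-region step can be sketched as follows, assuming an ONNX Runtime session has already produced the 42-element logit vector; the label list and tensor name here are hypothetical stand-ins.

```python
import numpy as np

# Illustrative subset — the real model has 42 anatomical classes.
REGION_LABELS = ["left_knee", "right_knee", "lumbar_spine", "right_wrist"]

def classify_region(logits, labels=REGION_LABELS):
    """Softmax the classifier logits and return (top-1 label, confidence)."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()                       # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    top = int(probs.argmax())
    return labels[top], float(probs[top])

# In the service, logits would come from ONNX Runtime, e.g.:
#   logits = session.run(None, {"input": image_tensor})[0][0]
```

The top-1 confidence is worth keeping alongside the label: a low-confidence region prediction is a natural trigger for the INCONCLUSIVE verdict rather than a hard flag.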

4

Lesion / Fracture Detection

Run the pathology detection network (region-specific ResNet-50 fine-tuned for that body region) to identify and localise any pathological findings: fractures, dislocations, effusions, disc herniations, tumours, or normal findings. Output bounding boxes and confidence scores.

Each body region has its own specialised detection model. The lumbar spine model, for example, was fine-tuned on 18,000 MRI scans annotated by radiologists.

5

Declared vs. Detected Cross-Validation

Compare detected_body_region against declared_region (flag if mismatch). Compare detected findings against declared_finding (flag if claim reports fracture but no fracture is detected, or if pathology is in a different location than declared).

A false negative rate of ~5% for fracture detection means the agent should not be used as the sole reason to deny a claim — it raises a flag for human radiologist review.

6

Identity Consistency Check

Extract PatientID from DICOM header and compare against the claimant's policy record. If the IDs differ, flag FLAG_IDENTITY_SWAP — the claimant may be submitting another patient's radiological image.

De-identified DICOM files (where PatientID has been removed) are noted as INCONCLUSIVE rather than flagged — privacy-compliant de-identification is legitimate.
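The identity check, including the de-identified path, can be sketched as below. SHA-256 stands in for whatever hashing scheme the policy system actually uses — that choice is an assumption, as is the function name.

```python
import hashlib

def check_identity(dicom_patient_id, policy_patient_id_hash):
    """Compare the DICOM PatientID against the claimant's hashed policy ID.

    Returns "INCONCLUSIVE" for de-identified files (no PatientID),
    "FLAG_IDENTITY_SWAP" when the IDs differ, "PASS" when they agree.
    SHA-256 is an assumed hashing scheme, not a documented one.
    """
    if not dicom_patient_id:
        # Privacy-compliant de-identification is legitimate — note, don't flag.
        return "INCONCLUSIVE"
    digest = hashlib.sha256(dicom_patient_id.encode("utf-8")).hexdigest()
    return "PASS" if digest == policy_patient_id_hash else "FLAG_IDENTITY_SWAP"
```

Comparing hashes rather than raw IDs keeps the claimant's patient identifier out of the agent's request payload, matching the `patient_id_hash` input field above.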

Thinking Tree

  • Root Question: Does the radiological image support the claimed injury at the declared location?
    • DICOM metadata extraction
      • BodyPartExamined matches declared region → proceed
      • BodyPartExamined differs from declared → preliminary mismatch signal
      • No DICOM metadata (JPEG/PNG) → rely on CNN only
    • CNN body region classification
      • Detected region matches declared → proceed to pathology
      • Detected region differs → FLAG_REGION_MISMATCH
    • Pathology detection
      • Finding detected at declared location → PASS
      • Finding detected at different location → FLAG_FINDING_MISMATCH
      • No pathology detected despite declared injury → FLAG_NO_PATHOLOGY
    • DICOM patient identity check
      • PatientID matches policy record → no concern
      • PatientID differs from claimant → FLAG_IDENTITY_SWAP

Decision Tree

d1: Is DICOM BodyPartExamined available?
    Yes → d2    No → d3

d2: DICOM BodyPartExamined matches declared region?
    Yes → d3    No → flag_region

d3: CNN-detected body region matches declared region?
    Yes → d4    No → flag_region

d4: Declared finding detected in the image?
    Yes → d5    No → flag_no_path

d5: DICOM PatientID matches claimant policy record?
    Yes → pass    No → flag_identity

flag_region: FLAG — REGION_MISMATCH: Image does not show the declared body part
flag_no_path: FLAG — NO_PATHOLOGY: Declared injury not visible in radiological image
flag_identity: FLAG — IDENTITY_SWAP: DICOM patient ID does not match claimant
pass: PASS — Image consistent with declared injury, region, and identity
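The decision tree maps directly onto a first-hit traversal. This is a sketch of the tree's control flow (node order follows d1–d5); the function name and arguments are illustrative.

```python
def traverse(bpe, declared_region, cnn_region, finding_detected,
             dicom_pid, policy_pid):
    """Walk decision nodes d1..d5 and return the first terminal outcome."""
    # d1/d2: when BodyPartExamined is present it must match the declaration;
    # when absent (e.g. JPEG upload) fall through to the CNN check (d3).
    if bpe is not None and bpe != declared_region:
        return "FLAG_REGION_MISMATCH"
    # d3: CNN-detected region must match the declaration
    if cnn_region != declared_region:
        return "FLAG_REGION_MISMATCH"
    # d4: the declared finding must actually be visible in the image
    if not finding_detected:
        return "FLAG_NO_PATHOLOGY"
    # d5: DICOM patient identity, when present, must match the policy record
    if dicom_pid is not None and dicom_pid != policy_pid:
        return "FLAG_IDENTITY_SWAP"
    return "PASS"
```

Note this traversal returns only the first failing node; the deployed agent's `flags` array, by contrast, can carry several flags for one submission.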

Technical Design

Architecture

AGT-MED-005 is an async FastAPI microservice. DICOM parsing runs on CPU (<100 ms). CNN inference uses ONNX Runtime with the body-region classifier running first to select the appropriate pathology model, then the pathology model runs. Total inference time is approximately 400–800 ms per image on CPU. Models are loaded at startup from a model registry and cached in memory.

Components

Component Role Technology
DICOMParser Extracts pixel data and metadata from DICOM files pydicom 2.x
ImagePreprocessor Applies windowing, normalisation, and spatial transforms MONAI + SimpleITK
BodyRegionClassifier 42-class CNN to identify anatomical region ONNX Runtime + ResNet-50
PathologyDetector Region-specific CNN for lesion/fracture detection and localisation ONNX Runtime + ResNet-50 fine-tuned
MetadataValidator Cross-references DICOM metadata against declared information Python dict comparison
AnnotationRenderer Draws bounding boxes on image for adjudicator review OpenCV 4.x
VerdictAssembler Combines all check results into final verdict Pure Python

Architecture Diagram

┌───────────────────────────────┐
│  POST /analyze                │
│  (image + declared metadata)  │
└───────────────┬───────────────┘
                │
                ▼
┌───────────────────────────────┐
│        DICOMParser            │
│  (pixel data + header tags)   │
└──────────┬────────────────────┘
           │
           ▼
┌───────────────────────────────┐
│      ImagePreprocessor        │
│  (window / normalise / align) │
└──────────┬────────────────────┘
           │
     ┌─────┴──────┐
     ▼            ▼
┌──────────┐  ┌──────────────────────┐
│ Body     │  │  MetadataValidator   │
│ Region   │  │  (DICOM vs declared) │
│Classifier│  └──────────┬───────────┘
└──────┬───┘             │
       │                 │
       ▼                 │
┌──────────────┐         │
│  Pathology   │         │
│  Detector    │         │
└──────┬───────┘         │
       │                 │
       └────────┬────────┘
                │
                ▼
  ┌─────────────────────────┐
  │    AnnotationRenderer   │
  └─────────────┬───────────┘
                │
                ▼
     ┌───────────────────┐
     │  VerdictAssembler │
     └───────────────────┘

Data Flow

API Gateway → DICOMParser | Raw DICOM or image binary
DICOMParser → ImagePreprocessor | Pixel array + metadata dict
ImagePreprocessor → BodyRegionClassifier | Normalised image tensor
BodyRegionClassifier → PathologyDetector | Detected region class + model selector
DICOMParser → MetadataValidator | DICOM BodyPartExamined, PatientID, StudyDate
PathologyDetector → AnnotationRenderer | Bounding boxes + finding labels
MetadataValidator → VerdictAssembler | Metadata match flags
AnnotationRenderer → VerdictAssembler | Annotated image + detection results
VerdictAssembler → API Gateway | JSON verdict + base64 annotated image