
Project

Explainable AI Diagnostic Framework

Open‑source diagnostic tooling that wraps common XAI methods into a unified workflow for model interpretability.

PyTorch · FastAPI · XAI · Python
Role: Applied ML engineer
Reading time: 2 min read

Highlights

  • Unified Integrated Gradients, Grad‑CAM, and LRP behind a single, consistent diagnostic API.
  • Built a FastAPI backend that serves explanations for image and point‑cloud models via a unified adapter.
  • Generated interactive dashboards and PDF reports to make explanations usable by non‑ML stakeholders.

Problem

Teams I worked with were experimenting with increasingly complex models (vision transformers, point‑cloud networks), but:

  • Explanations lived in scattered notebooks with one‑off scripts.
  • Each new model family required bespoke explanation code.
  • Sharing results with non‑technical collaborators (e.g., clinicians, civil engineers) was painful.

Constraints

  • Support both image and point‑cloud models without rewriting every explainer.
  • Make explanations reproducible and easy to compare across runs.
  • Keep the system simple enough that other teams can extend it.

Approach

I built a diagnostic framework that standardizes how we request and consume explanations:

  • Wrapped Integrated Gradients, Grad‑CAM, and Layer‑wise Relevance Propagation (LRP) behind a shared interface (sketched after this list).
  • Implemented an adapter layer so both image and point‑cloud models can plug into the same pipeline.
  • Exposed the functionality through a FastAPI service, letting downstream tools request explanations via HTTP.
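
The write‑up doesn't publish the actual interface, so here is a minimal sketch of what the shared explainer contract and adapter layer could look like. The class names (`ModelAdapter`, `Explainer`, `IntegratedGradientsExplainer`) and the choice of Captum as the attribution backend are assumptions, not the project's real API.

```python
# Minimal sketch of the shared explainer interface (illustrative names;
# Captum is assumed as the attribution backend, which the write-up does not confirm).
from abc import ABC, abstractmethod

import torch
from captum.attr import IntegratedGradients


class ModelAdapter(ABC):
    """Normalizes image and point-cloud models to a common callable form."""

    @abstractmethod
    def forward(self, batch: torch.Tensor) -> torch.Tensor:
        """Return class logits for a preprocessed input batch."""


class Explainer(ABC):
    """Contract every wrapped XAI method implements."""

    def __init__(self, adapter: ModelAdapter):
        self.adapter = adapter

    @abstractmethod
    def explain(self, batch: torch.Tensor, target: int) -> torch.Tensor:
        """Return relevance scores with the same shape as `batch`."""


class IntegratedGradientsExplainer(Explainer):
    def explain(self, batch: torch.Tensor, target: int) -> torch.Tensor:
        # Integrated Gradients over the adapter's forward pass.
        ig = IntegratedGradients(self.adapter.forward)
        return ig.attribute(batch, target=target)
```

Grad‑CAM and LRP wrappers follow the same pattern, so downstream code only ever calls `explain()`.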

The pipeline works as follows (a minimal service sketch follows the steps):

  1. Receive a model ID, input sample, and explanation type (e.g., Grad‑CAM).
  2. Load the appropriate model and adapter.
  3. Run the selected XAI algorithm to generate saliency maps or relevance scores.
  4. Package results as both machine‑readable artifacts and human‑friendly visualizations.
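
To make these steps concrete, here is a hedged sketch of how the FastAPI endpoint might be wired together. The route, request schema (including the explicit `target` field), and the registry layout are illustrative; the real service contract isn't shown in this write‑up.

```python
# Sketch of the explanation endpoint. Route, request schema, and registry
# layout are illustrative, not the project's actual API.
import torch
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical registries, populated at startup with adapters and the
# Explainer classes from the interface sketch above.
ADAPTERS: dict = {}    # model_id -> ModelAdapter
EXPLAINERS: dict = {}  # explanation_type -> Explainer subclass


class ExplanationRequest(BaseModel):
    model_id: str
    explanation_type: str      # e.g. "grad_cam", "integrated_gradients", "lrp"
    input_sample: list[float]  # flattened input; the adapter knows the shape
    target: int                # class index to explain


@app.post("/explanations")
def create_explanation(req: ExplanationRequest):
    # 1-2. Resolve the model's adapter and the requested explainer.
    adapter = ADAPTERS.get(req.model_id)
    explainer_cls = EXPLAINERS.get(req.explanation_type)
    if adapter is None or explainer_cls is None:
        raise HTTPException(status_code=404, detail="unknown model or explainer")

    # 3. Run the selected XAI algorithm through the shared interface,
    #    reshaping to the adapter's expected input shape (hypothetical attribute).
    batch = torch.tensor(req.input_sample).reshape(adapter.input_shape)
    relevance = explainer_cls(adapter).explain(batch, target=req.target)

    # 4. Return a machine-readable artifact; visualizations are rendered downstream.
    return {
        "model_id": req.model_id,
        "explanation_type": req.explanation_type,
        "relevance": relevance.detach().cpu().numpy().tolist(),
    }
```

Keeping the response machine‑readable lets the dashboards and report generator described below consume the same artifact.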

On top of the API, I added:

  • Interactive dashboards for exploring explanations across samples and models.
  • PDF report generation so domain experts can review results without touching code (a report sketch follows this list).
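
The write‑up doesn't say how the PDF reports are rendered; one plausible approach, sketched below, uses matplotlib's `PdfPages` to lay out one input/explanation pair per page. The function name and page layout are assumptions.

```python
# Sketch of a PDF report generator for image explanations. The use of
# matplotlib's PdfPages and the overlay styling are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages


def write_report(path: str, samples: list) -> None:
    """Each sample is a tuple (image HxWx3 in [0, 1], saliency HxW, caption)."""
    with PdfPages(path) as pdf:
        for image, saliency, caption in samples:
            fig, (ax_img, ax_sal) = plt.subplots(1, 2, figsize=(8, 4))
            ax_img.imshow(image)
            ax_img.set_title("Input")
            # Overlay the normalized saliency map on the input image.
            ax_sal.imshow(image)
            ax_sal.imshow(saliency / (saliency.max() + 1e-8), cmap="jet", alpha=0.5)
            ax_sal.set_title("Explanation")
            for ax in (ax_img, ax_sal):
                ax.axis("off")
            fig.suptitle(caption)
            pdf.savefig(fig)   # one page per sample
            plt.close(fig)
```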

Tradeoffs

  • Supporting multiple model types increases abstraction layers, so I kept the core XAI implementations thin and well‑documented.
  • Generating rich visualizations and reports adds compute overhead, but dramatically improves how explanations are consumed.
  • A custom framework is more work up front than using standalone libraries, but it standardizes workflows across projects.

Results

  • A reusable interpretability service that can be dropped into new ML projects.
  • Consistent, high‑quality explanations for both image and point‑cloud models, reducing duplicated notebook code.
  • Better collaboration with non‑ML stakeholders through dashboards and reports they can actually use.