AffectLog's Trustworthy AI (ALT-AI) – Design Document – Prometheus-X Components & Services

AffectLog's Trustworthy AI (ALT-AI) provides a set of tools for explaining, visualizing, and understanding complex machine learning models. It aims to facilitate model transparency and interpretability, and to aid compliance with emerging regulatory standards (e.g., GDPR, EU AI Act). ALT-AI helps data scientists, analysts, and stakeholders interpret model predictions, identify feature importance, assess fairness, and evaluate whether models align with ethical and legal requirements.

Technical Usage Scenarios & Features

ALT-AI supports both global (overall model behavior) and local (individual prediction) explanations, helping users interpret predictions, identify influential features, and assess fairness, as illustrated in the sketch below.

The toolbox is designed to be flexible and scalable, while prioritizing privacy, security, and compliance.
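To make the global/local distinction concrete, here is a minimal sketch using InterpretML's Explainable Boosting Machine (the "EBM class" referenced in the test specification below). The toy dataset and variable names are illustrative assumptions, not part of the ALT-AI API:

```python
# Minimal sketch of global vs. local explanations with InterpretML's EBM.
# The dataset here is a placeholder for illustration, not ALT-AI's data.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: overall per-feature importance across the model.
global_exp = ebm.explain_global()
print(global_exp.data()["names"][:5])
print(global_exp.data()["scores"][:5])

# Local explanation: feature contributions for a single prediction.
local_exp = ebm.explain_local(X_test.iloc[:1], y_test.iloc[:1])
print(local_exp.data(0))
```

The names/scores structure produced by the global explanation is the same shape that the /explain route returns in the test specification below.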

Features/Main Functionalities

Technical Usage Scenarios

Requirements

Timeline: Feasibility discussions (e.g., integration with Decentralized AI Training BB) are tentatively planned for Q1 2025. After these discussions, a more precise project timeline and roadmap will be established. A high-level work plan has been shared with the relevant Building Block (BB) and Work Package leader for consideration.

Integrations

Direct Integrations with Other BBs

Integrations via Connector

Relevant Standards

Input / Output Data

Supported Model Types

Supported Data Formats

Architecture

ALT-AI comprises several components.

(See classDiagram-v1.1.png for a class diagram and sequenceDiagram-v1.1.png for the system's dynamic behavior.)

Configuration and Deployment Settings

Third Party Components & Licenses

Implementation Details

ALT-AI is built with flexibility, compliance, and scalability in mind. Integration feasibility with the Decentralized AI Training BB will be assessed in Q1 2025, after which a detailed roadmap will be provided.

Partners & Roles

Usage In The Dataspace

Leveraging AffectLog for Organizational Skill Gap Analysis

ALT-AI can interpret models used for organizational skill gap analysis, clarifying which features drive predicted skill shortages and verifying that those models behave fairly across groups (a minimal fairness-check sketch follows). When combined with decentralized training, privacy is strengthened because raw training data never has to leave its source.
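As one hedged illustration of such a fairness check, the sketch below computes a simple demographic-parity gap from model outputs; the prediction values and group labels are assumptions for illustration, not part of ALT-AI:

```python
# Demographic-parity sketch: compare positive-prediction rates across groups.
# `predictions` and `groups` are illustrative stand-ins for real model output
# and a protected attribute column.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])             # 1 = predicted shortage
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])  # protected attribute

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())
print(f"Positive rates per group: {rates}; demographic-parity gap: {gap:.2f}")
```

A gap near zero suggests the model flags shortages at similar rates across groups; larger gaps warrant closer review of the features driving the difference.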

OpenAPI Specification

Future iterations may provide an OpenAPI spec for model submission, explanation retrieval, and compliance reporting.
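As a preview of what such a spec could cover, the skeleton below describes the two routes exercised in the test specification. It is a hypothetical sketch expressed as a Python dict (OpenAPI documents are JSON-compatible), not a published ALT-AI contract:

```python
# Hypothetical OpenAPI 3.0 skeleton for the /predict and /explain routes.
# Illustrative only; not an official ALT-AI specification.
openapi_spec = {
    "openapi": "3.0.3",
    "info": {"title": "ALT-AI API (draft)", "version": "0.1.0"},
    "paths": {
        "/predict": {
            "post": {
                "summary": "Classify a single record from 14 feature values",
                "requestBody": {
                    "required": True,
                    "content": {"application/json": {"schema": {
                        "type": "object",
                        "properties": {"features": {"type": "array",
                                                    "minItems": 14}},
                    }}},
                },
                "responses": {"200": {"description": "Single-item prediction list"}},
            }
        },
        "/explain": {
            "get": {
                "summary": "Global explanation: feature names and scores",
                "responses": {"200": {"description":
                    "Object with explanation.names and explanation.scores"}},
            }
        },
    },
}
```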


Test Specification

Newly Added Test Definitions

The following table covers the core endpoints tested for acceptance, focusing on verifying correct functionality of the /predict and /explain routes. These definitions are technology-independent: they can be executed manually (e.g., using curl) or via the provided unittest harness. A minimal harness sketch follows the table.

| Test Case | Test Description | Prerequisites | Inputs | Expected Outcome |
|---|---|---|---|---|
| Test #1 | /predict – 14 features. Ensures correct binary classification for exactly 14 features. | 1. ALT-AI app running (local or Docker).<br>2. Model preloaded (the EBM class).<br>3. No authentication required. | JSON with 14 feature values, e.g.:<br>`{"features": [30, "State-gov", 141297, "Bachelors", 13, "Married-civ-spouse", "Prof-specialty", "Husband", "Asian-Pac-Islander", "Male", 0, 0, 40, "India"]}` | 1. HTTP 200 response.<br>2. Body: `{"prediction": ["some_class"]}` (a list with a single item).<br>3. Verifiable by checking the JSON structure and ensuring only one prediction is returned (e.g. `">50K"` or 0/1). |
| Test #2 | /predict – 15 features. Verifies the extra feature is discarded, leaving 14 for the model. | 1. ALT-AI app running.<br>2. Same environment as Test #1.<br>3. The code is expected to log a warning upon receiving 15 features. | JSON with 15 feature values, the 15th typically `">50K"`, e.g.:<br>`{"features": [30, "State-gov", 141297, "Bachelors", 13, "Married-civ-spouse", "Prof-specialty", "Husband", "Asian-Pac-Islander", "Male", 0, 0, 40, "India", ">50K"]}` | 1. HTTP 200 response.<br>2. Body: `{"prediction": ["some_class"]}` with a single item.<br>3. Console (expected) to warn "Received 15 features; ... removing it."<br>4. Verifiable by checking that the model used only the first 14. |
| Test #3 | /explain. Verifies the returned global explanation includes names and scores. | 1. ALT-AI app running.<br>2. Model is trained in memory. | No input (GET). | 1. HTTP 200 response.<br>2. JSON body includes `{"explanation": {"names": [...], "scores": [...]}}`.<br>3. Verifiable by checking that names and scores are arrays of length > 0. |
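The three cases above can be automated with Python's standard unittest module. The sketch below assumes the app is already listening on localhost:5002 (as in the manual example that follows); it uses only the standard library, and the class and helper names are illustrative rather than taken from the shipped harness:

```python
# Minimal acceptance-test sketch for the /predict and /explain routes.
# Assumes the ALT-AI app is already running on localhost:5002.
import json
import unittest
import urllib.request

BASE_URL = "http://localhost:5002"

FEATURES_14 = [30, "State-gov", 141297, "Bachelors", 13, "Married-civ-spouse",
               "Prof-specialty", "Husband", "Asian-Pac-Islander", "Male",
               0, 0, 40, "India"]

class AltAiAcceptanceTests(unittest.TestCase):
    def _post_predict(self, features):
        """POST a feature list to /predict; return (status, parsed body)."""
        req = urllib.request.Request(
            f"{BASE_URL}/predict",
            data=json.dumps({"features": features}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status, json.load(resp)

    def test_predict_14_features(self):  # Test #1
        status, body = self._post_predict(FEATURES_14)
        self.assertEqual(status, 200)
        self.assertEqual(len(body["prediction"]), 1)

    def test_predict_15_features_extra_dropped(self):  # Test #2
        status, body = self._post_predict(FEATURES_14 + [">50K"])
        self.assertEqual(status, 200)
        self.assertEqual(len(body["prediction"]), 1)

    def test_explain_names_and_scores(self):  # Test #3
        with urllib.request.urlopen(f"{BASE_URL}/explain") as resp:
            status, body = resp.status, json.load(resp)
        self.assertEqual(status, 200)
        self.assertGreater(len(body["explanation"]["names"]), 0)
        self.assertGreater(len(body["explanation"]["scores"]), 0)

if __name__ == "__main__":
    unittest.main()
```

Note that the Test #2 console warning is verified by inspecting the server log, not by the HTTP response, so the sketch checks only the response contract.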

Manual Execution Example

  1. Set up the ALT-AI application:

    • pip install -r requirements.txt
    • python app.py (listens on port 5002)
  2. Test #1 (14 features):

    curl -X POST http://localhost:5002/predict \
         -H "Content-Type: application/json" \
         -d '{"features":[30,"State-gov",141297,"Bachelors",13,"Married-civ-spouse","Prof-specialty","Husband","Asian-Pac-Islander","Male",0,0,40,"India"]}'
    
    • Expect {"prediction":["..."]} with one item.
  3. Test #2 (15 features):

    curl -X POST http://localhost:5002/predict \
         -H "Content-Type: application/json" \
         -d '{"features":[30,"State-gov",141297,"Bachelors",13,"Married-civ-spouse","Prof-specialty","Husband","Asian-Pac-Islander","Male",0,0,40,"India",">50K"]}'
    
    • Expect HTTP 200 and a single-item prediction.
    • Also expect a console warning “Received 15 features...”.
  4. Test #3 (/explain):

    curl -X GET http://localhost:5002/explain
    
    • Expect {"explanation":{"names":[...],"scores":[...]}}.

Disclaimers