Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions for handling real-world datasets and workflows.
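For orientation, a minimal sketch of the package's documented `TabularExplainer` entry point; the dataset and model here are arbitrary stand-ins, not anything the repository prescribes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from interpret_community import TabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

# TabularExplainer selects a suitable SHAP-based explainer for the model type.
explainer = TabularExplainer(model, data.data, features=list(data.feature_names))
global_explanation = explainer.explain_global(data.data)

# Top features ranked by aggregated global importance.
print(global_explanation.get_ranked_global_names()[:5])
print(global_explanation.get_ranked_global_values()[:5])
```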
An interpretable framework for inferring nonlinear multivariate Granger causality based on self-explaining neural networks.
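The repository's method is neural and multivariate; as a point of reference only, here is a sketch of the classical linear pairwise Granger test via statsmodels, which is a different (and much simpler) technique than the self-explaining network approach:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy series: y is driven by the previous value of x, so x should
# Granger-cause y but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.1, size=500)

# Convention: tests whether the second column Granger-causes the first.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```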
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop learning, and Visual Analytics.
A list of research papers on explainable machine learning.
Code for Surgical Skill Assessment via Video Semantic Aggregation (MICCAI 2022)
Comprehensible Convolutional Neural Networks via Guided Concept Learning
Explainable Boosting Machines
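EBMs are the flagship glassbox model of the interpret package; a minimal, standard usage sketch (the dataset choice is arbitrary):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is a GAM: one boosted shape function per feature (plus optional
# pairwise interactions), so each prediction decomposes into additive terms.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

print(accuracy_score(y_test, ebm.predict(X_test)))
ebm_global = ebm.explain_global()  # per-feature shape functions and importances
```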
A Python implementation of Word Mover's Distance that decomposes document-level WMD into word-level WMD for interpretable sociocultural NLP.
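The decomposition idea can be illustrated directly: solve the WMD optimal-transport problem, then attribute the transport cost back to individual words. This is a from-scratch sketch with SciPy, not the repository's code; the helper name and inputs are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def word_level_wmd(emb_a, w_a, emb_b, w_b):
    """Minimal WMD with a per-word cost decomposition (hypothetical helper).

    emb_a, emb_b: (n, d) / (m, d) word-embedding matrices.
    w_a, w_b: normalized word weights for each document (each sums to 1).
    """
    n, m = len(w_a), len(w_b)
    # Euclidean cost between every pair of word embeddings.
    cost = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=2)

    # Linear program over the flattened transport matrix T (row-major).
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                 # each source word ships exactly w_a[i]
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                 # each target word receives exactly w_b[j]
        A_eq[n + j, j::m] = 1.0
    res = linprog(cost.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([w_a, w_b]), method="highs")

    T = res.x.reshape(n, m)
    per_word = (T * cost).sum(axis=1)  # document WMD attributed to each source word
    return per_word.sum(), per_word

# Toy usage with random stand-in embeddings for a 3-word and a 4-word document.
rng = np.random.default_rng(0)
total, per_word = word_level_wmd(rng.normal(size=(3, 5)), np.full(3, 1 / 3),
                                 rng.normal(size=(4, 5)), np.full(4, 1 / 4))
print(total, per_word)
```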
An interpretable system that models the future of work as an equilibrium under AI-driven forces. Instead of predicting job loss, it decomposes workforce disruption into automation pressure, adaptability, skill transferability, demand, and AI augmentation to explain stability, tension, and transition paths by 2030.
An analytical essay on why prediction-based models fail in reflexive, unstable systems. This article argues that accuracy collapses when models influence behavior, and proposes equilibrium and force-based modeling as a more robust framework for understanding pressure, instability, and transitions in AI-shaped systems.
Visual Intelligence is a desktop app that extracts text from images and PDFs in Turkish and English using Tesseract OCR. It offers advanced features like text summarization, keyword extraction, table and QR/barcode detection. The app has a modern, user-friendly interface built with TailwindCSS and Vanilla JS.
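The OCR step itself is standard Tesseract; a minimal sketch of the equivalent call through the pytesseract bindings (the app's actual internals may differ, and the file path is a placeholder):

```python
from PIL import Image
import pytesseract

# Requires a local Tesseract install with the Turkish and English
# language packs ("tur" and "eng"); "scan.png" is a placeholder path.
text = pytesseract.image_to_string(Image.open("scan.png"), lang="tur+eng")
print(text)
```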