2022. Keywords: software fairness; bias-free software design; visualization; training; software testing; decision making; machine learning; debugging; software; data models

Fairkit-learn: A Fairness Evaluation and Comparison Toolkit

B. Johnson; Y. Brun

Advances in how we build and use software, specifically the integration of machine learning into decision making, have led to widespread concern about model and software fairness. We present fairkit-learn, an interactive Python toolkit designed to help data scientists reason about and understand model fairness. We outline how fairkit-learn supports model training, evaluation, and comparison, and describe its potential benefits compared to state-of-the-art tools. Fairkit-learn is open source at https://go.gmu.edu/fairkit-learn/.
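To make the kind of evaluation the abstract describes concrete, here is a minimal sketch of a fairness comparison in plain scikit-learn and NumPy. It does not use fairkit-learn's own API; the synthetic data, the model choice, and the demographic-parity metric are all illustrative assumptions, showing one metric a toolkit like this could report when comparing models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data with a binary sensitive attribute (illustrative only)
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                      # sensitive attribute (0 or 1)
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5  # features correlated with group
y = (x.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

# Train a model on features plus the sensitive attribute
features = np.column_stack([x, group])
model = LogisticRegression().fit(features, y)
pred = model.predict(features)

# Demographic parity difference: gap in positive-prediction rates across groups
rates = [pred[group == g].mean() for g in (0, 1)]
dp_diff = abs(rates[0] - rates[1])
print(f"positive rate by group: {rates[0]:.2f} vs {rates[1]:.2f}; gap = {dp_diff:.2f}")
```

A toolkit would compute several such metrics (e.g., equalized odds, predictive parity) across candidate models and visualize the accuracy-fairness trade-offs, rather than reporting a single number as above.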

Added 2026-04-21