BIAS AUDIT FRAMEWORKS: DEVELOPING TOOLS FOR EARLY DETECTION OF ALGORITHMIC BIAS IN AI DEVELOPMENT
Abstract
Algorithmic bias in artificial intelligence (AI) systems continues to pose significant ethical and societal challenges, especially in critical domains such as healthcare, education, and finance. Current approaches to bias mitigation often fail to provide a holistic, proactive solution that integrates fairness, accountability, and transparency into the AI development lifecycle. This study introduces a Bias Audit Framework designed to detect and mitigate algorithmic bias during the early stages of AI development. The framework comprises four core components: Data Bias Assessment, Model Bias Evaluation, Developer Awareness and Training, and Continuous Monitoring and Feedback. A healthcare dataset was used as a case study to evaluate the framework's efficacy. Initially, a logistic regression model trained on the imbalanced dataset achieved high overall performance (accuracy: 85%; precision: 0.89; recall: 0.83) but exhibited fairness issues: the Disparate Impact Ratio (DIR) was 0.67 and the Equal Opportunity Difference (EOD) was 0.13, reflecting gender bias. After the Bias Audit Framework was applied (including oversampling, data augmentation, and threshold optimization), the model was retrained. Its performance remained robust (accuracy: ~84–85%; precision: ~0.88; recall: ~0.88), while fairness improved markedly: female recall increased to 0.88, reducing the EOD to approximately 0, and the DIR improved to 0.85–0.95, indicating a more balanced and equitable model. By equipping developers with practical tools and emphasizing interdisciplinary collaboration, the framework provides a systematic and ethical approach to addressing algorithmic bias. These findings underscore the importance of embedding bias mitigation practices into all stages of AI development to foster equitable and trustworthy AI systems.
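The two fairness metrics reported above are standard and straightforward to reproduce. As a minimal sketch (the abstract does not specify the paper's own implementation), the following Python snippet computes the Disparate Impact Ratio and Equal Opportunity Difference from binary predictions and a binary sensitive attribute; the array names and group labels are hypothetical.

```python
import numpy as np

def disparate_impact_ratio(y_pred, sensitive, unprivileged, privileged):
    """DIR = P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged).
    Values near 1.0 indicate parity; the common four-fifths rule
    flags values below 0.8 (the paper's initial DIR of 0.67 fails it)."""
    rate_u = y_pred[sensitive == unprivileged].mean()
    rate_p = y_pred[sensitive == privileged].mean()
    return rate_u / rate_p

def equal_opportunity_difference(y_true, y_pred, sensitive, unprivileged, privileged):
    """EOD = TPR(privileged) - TPR(unprivileged); a value of 0 means
    equal recall on the positive class across the two groups."""
    def tpr(group):
        mask = (sensitive == group) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(privileged) - tpr(unprivileged)

# Toy usage with hypothetical arrays: 1 = positive outcome,
# sensitive attribute coded as 'F' / 'M'.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
sex    = np.array(['F', 'F', 'F', 'F', 'M', 'M', 'M', 'M'])

print(disparate_impact_ratio(y_pred, sex, 'F', 'M'))
print(equal_opportunity_difference(y_true, y_pred, sex, 'F', 'M'))
```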
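Likewise, two of the mitigation steps named in the abstract (oversampling, then threshold optimization on the retrained model) can be illustrated with standard tooling. The sketch below is an assumed reconstruction on synthetic data, not the authors' pipeline, and omits the data augmentation step: it oversamples the under-represented group with imbalanced-learn, retrains a scikit-learn logistic regression, and sweeps the decision threshold to minimize the recall gap between groups, i.e., to drive the EOD toward zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn

# Hypothetical data: X (features), y (labels), sex (sensitive attribute).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
sex = rng.choice(['F', 'M'], size=1000, p=[0.3, 0.7])  # imbalanced groups

# Step 1: oversample so the minority group is equally represented.
# Resampling on the sensitive attribute keeps each duplicated row's label,
# so the label column is stacked onto X before resampling.
ros = RandomOverSampler(random_state=0)
Xy_bal, sex_bal = ros.fit_resample(np.column_stack([X, y]), sex)
X_res, y_res = Xy_bal[:, :-1], Xy_bal[:, -1].astype(int)

# Step 2: retrain the model on the balanced data.
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

# Step 3: threshold optimization. Sweep the decision threshold and keep
# the one minimizing the recall (TPR) gap between groups; in practice
# this should be done on a held-out validation set, not the full data.
proba = clf.predict_proba(X)[:, 1]

def tpr(pred, group):
    mask = (sex == group) & (y == 1)
    return pred[mask].mean()

best_t, best_gap = 0.5, np.inf
for t in np.linspace(0.1, 0.9, 81):
    pred = (proba >= t).astype(int)
    gap = abs(tpr(pred, 'F') - tpr(pred, 'M'))
    if gap < best_gap:
        best_t, best_gap = t, gap

print(f"chosen threshold = {best_t:.2f}, recall gap = {best_gap:.3f}")
```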