Download Full Text (364 KB)

Description

Artificial Intelligence (AI) is increasingly being deployed in sensitive environments where malfunctioning electronic devices or chemical leaks can lead to catastrophic consequences. AI models are trained to process and analyze images and data from various detectors and sensors, playing a critical role in threat detection and safety assessments. However, ensuring these models remain fair and unbiased is a significant challenge. Bias in AI systems can stem from historical data, flawed algorithms, or systemic design issues, potentially leading to inaccurate assessments that compromise security and decision-making. The objective of this ongoing research is to identify and develop methodologies for ensuring AI fairness in high-risk environments. This involves examining bias detection techniques, implementing fairness-aware AI architectures, and proposing best practices for mitigating algorithmic bias. By addressing these challenges, AI can be leveraged more effectively in sensitive facility environments, reducing the risks associated with biased decision-making and enhancing overall security and operational integrity.
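
The abstract does not specify which bias detection techniques are under study. As an illustrative sketch only (not from the source), the following Python snippet computes two common group-fairness diagnostics, the demographic parity gap and the disparate impact ratio, for a hypothetical binary threat-detection classifier; the prediction and group arrays are assumptions for demonstration.

import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in alert (positive-prediction) rates between two groups.

    y_pred : 0/1 model decisions (e.g., 1 = flagged as a potential threat)
    group  : 0/1 group or sensor-context labels (hypothetical)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # alert rate for group 0
    rate_b = y_pred[group == 1].mean()  # alert rate for group 1
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred, group):
    """Ratio of the lower alert rate to the higher one (1.0 = perfectly balanced)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Hypothetical predictions from a sensor-image threat classifier
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))   # 0.5
print(disparate_impact_ratio(y_pred, group))   # ~0.33

A gap near 0 (or a ratio near 1.0) suggests the model flags both groups at similar rates; large deviations are one signal that further bias mitigation may be needed.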

Publication Date

4-1-2025

Keywords

Sensitive Environment AI, Threat Detection, Bias Detection Technique, Fairness-Aware AI

AI Fairness: Investigating and Developing AI Models to Mitigate Bias and Enhance Fairness
