Measuring Algorithmic Bias in Violence Detection Systems by Quantifying Anomalies Across Skin Tones

Department

College of Engineering

Document Type

Oral Presentation

Publication Date

4-17-2026

Abstract

Automated Violence Detection Systems (AVDS) are increasingly deployed in sensitive environments (schools, policing), operating on the assumption that object detection algorithms are objective. However, industry-standard models (e.g., YOLOv8) are suspected of disproportionately misclassifying darker-skinned individuals as armed. This bias stems from "underestimation bias" [1] (unrepresentative training data) and "negative legacy" [1] (subjective labeling of aggressive behavior). While current research focuses on detecting weapons, this project introduces an intervention: Object Discrimination Training. We investigate whether explicitly training models on "confusing objects" (phones, wallets) held by diverse hands reduces false positives more effectively than standard binary (weapon/no-weapon) training.
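As a rough illustration of the comparison the abstract describes, the Python sketch below sets up a binary (weapon-only) configuration and an Object Discrimination Training configuration with Ultralytics YOLOv8, then measures the false-positive rate of "armed" detections on unarmed subjects per skin-tone group. All dataset paths, class lists, hyperparameters, and the skin-tone grouping are illustrative assumptions, not the project's actual setup.

from pathlib import Path
from collections import Counter

import yaml
from ultralytics import YOLO

# Binary baseline: only "weapon" is labeled; hand-held objects such as
# phones and wallets fall into implicit background. (In practice this
# run would use label files stripped to class 0 only.)
binary_cfg = {
    "path": "datasets/avds",        # hypothetical dataset root
    "train": "images/train",
    "val": "images/val",
    "names": {0: "weapon"},
}

# Object Discrimination Training: confusing objects become explicit
# classes, forcing the model to learn the weapon/phone/wallet boundary
# instead of lumping those objects into background.
odt_cfg = {
    **binary_cfg,
    "names": {0: "weapon", 1: "phone", 2: "wallet", 3: "hand"},
}

models = {}
for name, cfg in [("binary", binary_cfg), ("odt", odt_cfg)]:
    cfg_path = Path(f"{name}.yaml")
    cfg_path.write_text(yaml.safe_dump(cfg))
    model = YOLO("yolov8n.pt")      # small pretrained checkpoint
    model.train(data=str(cfg_path), epochs=50, imgsz=640, name=name)
    models[name] = model

def fp_rate_by_group(model, images, groups):
    """False-positive rate of 'armed' detections, per skin-tone group.

    images: paths of validation images showing unarmed subjects.
    groups: parallel skin-tone labels (e.g., Fitzpatrick I-VI).
    """
    fp, total = Counter(), Counter()
    for img, group in zip(images, groups):
        total[group] += 1
        result = model.predict(img, verbose=False)[0]
        # Class 0 is "weapon" in both configs; any weapon box on an
        # unarmed subject counts as a false positive.
        if any(int(c) == 0 for c in result.boxes.cls):
            fp[group] += 1
    return {g: fp[g] / total[g] for g in total}

Comparing fp_rate_by_group between the two models, and across skin-tone groups within each model, yields both the disparity measure the title refers to and the effect of Object Discrimination Training on it.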

