A high school in Baltimore County, Maryland, recently experienced a troubling incident when an AI-powered security system mistakenly identified a student’s bag of snacks as a firearm, triggering a serious law-enforcement response. The system, part of a campus-safety tool deployed across the school district, flagged a crumpled Doritos bag in a student’s hand as a possible weapon. Within minutes, police arrived, ordered the student to the ground, and handcuffed and searched him before discovering that the “weapon” was nothing more than a snack bag.
The student recounted sitting with friends after practice, finishing a bag of Doritos, and putting the empty, folded bag in his pocket. The AI system, which monitors surveillance cameras, interpreted his posture and the object in his hand as a threat and triggered an alert to school officials and law enforcement. The school district confirmed there was no weapon, and the alert was cancelled shortly after officers arrived. Still, the family says the experience left the student shaken and questioning his safety.
School and district officials have expressed regret over the incident, describing it as “truly unfortunate” and offering counseling support for the students involved. The AI-detection software provider, whose alerts are integrated into school camera systems, has not publicly addressed the misclassification in detail. The event has reignited debate over automated surveillance in schools, especially when AI systems make high-stakes decisions in seconds based on image recognition.
Critics argue the incident raises deeper questions: Are these systems accurate enough to handle ambiguous real-world scenarios? What safeguards exist to ensure misdetections don’t lead to trauma? And who bears responsibility when the technology fails? Proponents of school-based AI safety systems say they speed up threat detection, but incidents like this show that speed without accuracy or human oversight can undermine student trust and safety.
For families, educators, and policymakers, the takeaway is stark: deploying AI for safety is not just about technology. It demands rigorous testing, transparency, fallback protocols, and clear communication with students and staff about what happens when an alert fires. Until such frameworks are mature, even well-intentioned systems can produce serious misfires, turning tools meant to protect into sources of fear.
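To make the idea of a “fallback protocol” concrete, here is a minimal sketch of how a human-in-the-loop gate between an AI detection and a police dispatch might be structured. Everything in it, including the class names, thresholds, and labels, is hypothetical and is not drawn from the vendor’s actual system; it simply illustrates the principle that a model score alone should never dispatch officers without a human confirming the alert first.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch only: these names and thresholds are illustrative,
# not any vendor's real API or configuration.

class Action(Enum):
    IGNORE = auto()        # low-confidence hit, log it and move on
    HUMAN_REVIEW = auto()  # route the frame to a trained reviewer first
    DISPATCH = auto()      # contact law enforcement only after human confirmation

@dataclass
class Detection:
    label: str                       # e.g. "firearm"
    confidence: float                # model score in [0, 1]
    confirmed_by_human: bool = False # set by a reviewer, never by the model

def triage(det: Detection,
           review_threshold: float = 0.60,
           dispatch_threshold: float = 0.90) -> Action:
    """Decide what to do with a single detection.

    Even a very high model score never triggers a dispatch on its own;
    a human reviewer must confirm the frame before police are notified.
    """
    if det.label != "firearm" or det.confidence < review_threshold:
        return Action.IGNORE
    if det.confirmed_by_human and det.confidence >= dispatch_threshold:
        return Action.DISPATCH
    return Action.HUMAN_REVIEW

# Example: a crumpled snack bag scored 0.72 goes to a reviewer, not to 911.
print(triage(Detection(label="firearm", confidence=0.72)))    # Action.HUMAN_REVIEW
print(triage(Detection(label="firearm", confidence=0.95,
                       confirmed_by_human=True)))             # Action.DISPATCH
```

The point of the sketch is the ordering of checks: a confirmation step sits between detection and dispatch, so a misclassified snack bag ends with a reviewer glancing at a camera frame rather than a student in handcuffs.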