public safety


ThreatGPT: An Agentic AI Framework for Enhancing Public Safety through Threat Modeling

Zisad, Sharif Noor, Hasan, Ragib

arXiv.org Artificial Intelligence

As our cities and communities become smarter, the systems that keep us safe, such as traffic control centers, emergency response networks, and public transportation, also become more complex. With this complexity comes a greater risk of security threats that can affect not just machines but real people's lives. To address this challenge, we present ThreatGPT, an agentic Artificial Intelligence (AI) assistant built to help people, whether they are engineers, safety officers, or policy makers, understand and analyze threats in public safety systems. Instead of requiring deep cybersecurity expertise, it allows users to simply describe the components of a system they are concerned about, such as login systems, data storage, or communication networks. Then, with the click of a button, users can choose how they want the system to be analyzed, using popular frameworks such as STRIDE, MITRE ATT&CK, CVE reports, NIST, or CISA. ThreatGPT is unique because it does not just provide threat information; it acts like a knowledgeable partner. Using few-shot learning, the AI learns from examples and generates relevant, intelligent threat models. It can highlight what might go wrong, how attackers could take advantage, and what can be done to prevent harm. Whether securing a city's infrastructure or a local health service, the tool adapts to users' needs. In simple terms, ThreatGPT brings together AI and human judgment to make our public systems safer. It is designed not just to analyze threats, but to empower people to understand and act on them, faster, smarter, and with more confidence.
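The few-shot approach described above can be illustrated with a minimal sketch: worked example pairs for the chosen framework are prepended to the user's component description before the prompt is sent to a model. The function name, example pairs, and prompt wording below are hypothetical; the paper does not publish ThreatGPT's actual prompts.

```python
# Hypothetical sketch of few-shot prompt assembly for framework-based
# threat modeling; not ThreatGPT's actual implementation.

FEW_SHOT_EXAMPLES = {
    "STRIDE": [
        ("A web login form storing passwords in plain text.",
         "Spoofing: stolen credentials allow impersonation. "
         "Information Disclosure: plaintext storage exposes all passwords on breach."),
    ],
}

def build_threat_prompt(component: str, framework: str = "STRIDE") -> str:
    """Assemble a few-shot prompt: worked examples first, then the user's
    component description, asking for a threat model in the chosen
    framework (STRIDE, MITRE ATT&CK, CVE, NIST, or CISA)."""
    lines = [f"You are a threat-modeling assistant. Use the {framework} framework."]
    for system, analysis in FEW_SHOT_EXAMPLES.get(framework, []):
        lines.append(f"System: {system}")
        lines.append(f"Threat model: {analysis}")
    lines.append(f"System: {component}")
    lines.append("Threat model:")
    return "\n".join(lines)

prompt = build_threat_prompt("A city traffic-control API protected by shared admin keys")
```

The resulting string would be passed to an LLM, which completes the final "Threat model:" line in the style of the in-context examples.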


Mysterious SUV-sized drones may have blocked medical helicopter

Popular Science

Residents and law enforcement officials are reporting numerous large, unidentified drones flying at night over New Jersey, New York, and Pennsylvania. Some of the fixed-wing devices are estimated to be roughly four feet wide, while others are believed to be as large as a car. And while officials including New Jersey governor Phil Murphy have stressed there is no evidence suggesting the drones pose a threat to public safety, at least one related incident may have delayed medevac transport of a seriously injured car wreck victim. As The New York Times noted over the weekend, sightings date as far back as November, and have occurred over residential areas, highways, railroads, reservoirs, and power lines. In most instances, the loud, blinking drones appear to be "significantly larger" than standard drones piloted by hobbyists. At least two events prompted the Federal Aviation Administration to temporarily ban drones at Donald Trump's Bedminster National Golf Club and Picatinny Arsenal, a 6,400-acre military research and manufacturing facility in Morris County, New Jersey.


Authorities stress 'no known threat to public safety' following unusual drones near Trump Bedminster club

FOX News

Officials are still investigating unusual drone activity reported in recent weeks in New Jersey. In response, the FAA set temporary flight restrictions above Trump National Golf Club in Bedminster. Authorities investigating the unusual drone activity observed several times in northern New Jersey in recent days, including in the vicinity of President-elect Trump's Bedminster golf club, continue to stress that there is no threat to public safety. Multiple videos show drones flying in Somerset and Morris counties over the past few weeks, including on Dec. 1 and Dec. 3. In a video from Nov. 25, a Morris County resident named Mike Walsh spotted drones flying over Black River Middle School in Chester.


Gender Bias in LLM-generated Interview Responses

Kong, Haein, Ahn, Yongsu, Lee, Sangyub, Maeng, Yunho

arXiv.org Artificial Intelligence

LLMs have emerged as a promising tool for assisting individuals in diverse text-generation tasks, including job-related texts. However, LLM-generated answers have been increasingly found to exhibit gender bias. This study evaluates three LLMs (GPT-3.5, GPT-4, Claude) to conduct a multifaceted audit of LLM-generated interview responses across models, question types, and jobs, and their alignment with two gender stereotypes. Our findings reveal that gender bias is consistent across models and closely aligned with gender stereotypes and the gender dominance of the jobs. Overall, this study contributes to the systematic examination of gender bias in LLM-generated interview responses, highlighting the need for a mindful approach to mitigate such biases in related applications.
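An audit of the kind described can be sketched as a loop over (model, job) pairs that scores each generated response against stereotype word lists. The word lists, canned responses, and scoring rule below are illustrative stand-ins, not the paper's actual instruments.

```python
# Hypothetical mini-audit: score interview responses for agentic vs.
# communal stereotype language. Word lists and responses are invented
# for illustration only.

AGENTIC = {"assertive", "competitive", "decisive", "lead"}
COMMUNAL = {"supportive", "caring", "collaborative", "nurture"}

def stereotype_score(text: str) -> float:
    """Return the (agentic - communal) word rate:
    > 0 leans agentic, < 0 leans communal, 0 is neutral/empty."""
    words = [w.strip(".,").lower() for w in text.split()]
    if not words:
        return 0.0
    agentic = sum(w in AGENTIC for w in words)
    communal = sum(w in COMMUNAL for w in words)
    return (agentic - communal) / len(words)

# In a real audit these would come from API calls per model/job/question.
responses = {
    ("gpt-4", "engineer"): "I am decisive and competitive when I lead projects.",
    ("gpt-4", "nurse"): "I am caring and supportive, collaborative with my team.",
}
scores = {key: stereotype_score(text) for key, text in responses.items()}
```

Aggregating such scores by model, question type, and job-gender dominance is what allows bias patterns to be compared systematically rather than anecdotally.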


UC police seek approval for more pepper balls, sponge rounds, launchers, drones

Los Angeles Times

UCLA police, who were called on to handle some of the nation's largest campus protests over the Israel-Hamas war last spring, are asking for approval to double their stockpile of pepper balls and sponge rounds, obtain eight more projectile launchers and purchase three new drones. The University of California Board of Regents will consider the requests by UCLA, along with the other nine UC campus police departments, on Thursday. All California law enforcement agencies are required by state law to report annually on the acquisition and use of weapons characterized as "military equipment." A UC spokesman called the police requests a "routine agenda item" not tied to protests or other particular incidents. "All of the campus's requests are for non-lethal alternatives to standard-issue firearms, enabling officers to de-escalate situations and respond without the use of deadly force," UC spokesman Stett Holbrook said in a statement.


Column: California says its new gun law is about public safety. But what about these women?

Los Angeles Times

Kismet Jackson used to carry her handgun just about everywhere in San Bernardino County. To get her nails done. To pick up her prescription. To hang out with her grandchildren. For her, it was all about staying safe. "Being out and about, you just want to protect yourself," explained Jackson, an Air Force veteran and member of the National African American Gun Assn.


Frontier AI Regulation: Managing Emerging Risks to Public Safety

Anderljung, Markus, Barnhart, Joslyn, Korinek, Anton, Leung, Jade, O'Keefe, Cullen, Whittlestone, Jess, Avin, Shahar, Brundage, Miles, Bullock, Justin, Cass-Beggs, Duncan, Chang, Ben, Collins, Tantum, Fist, Tim, Hadfield, Gillian, Hayes, Alan, Ho, Lewis, Hooker, Sara, Horvitz, Eric, Kolt, Noam, Schuett, Jonas, Shavit, Yonadav, Siddarth, Divya, Trager, Robert, Wolf, Kevin

arXiv.org Artificial Intelligence

Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.


Quantum-AI empowered Intelligent Surveillance: Advancing Public Safety Through Innovative Contraband Detection

Shah, Syed Atif Ali, Algeelani, Nasir, Al-Sammarraie, Najeeb

arXiv.org Artificial Intelligence

Surveillance systems have emerged as crucial elements in upholding peace and security in the modern world. Their ubiquity aids in monitoring suspicious activities effectively. However, in densely populated environments, continuous active monitoring becomes impractical, necessitating the development of intelligent surveillance systems. AI integration was a major revolution in the surveillance domain; however, speed issues have prevented its widespread implementation in the field. Quantum artificial intelligence has led to a breakthrough here: quantum AI-based surveillance systems have been shown to be more accurate and capable of performing well in real-time scenarios, which had never been seen before. In this research, a RetinaNet model is integrated with a quantum convolutional neural network (QCNN) and termed Quantum-RetinaNet. By harnessing the quantum capabilities of the QCNN, Quantum-RetinaNet strikes a balance between accuracy and speed. This innovative integration positions it as a game-changer, addressing the challenges of active monitoring in densely populated scenarios. As demand for efficient surveillance solutions continues to grow, Quantum-RetinaNet offers a compelling alternative to existing CNN models, upholding accuracy standards without sacrificing real-time performance. The unique attributes of Quantum-RetinaNet have far-reaching implications for the future of intelligent surveillance. With its enhanced processing speed, it is poised to revolutionize the field, catering to the pressing need for rapid yet precise monitoring. As Quantum-RetinaNet becomes the new standard, it ensures public safety and security while pushing the boundaries of AI in surveillance.


Eight Months Pregnant and Arrested After False Facial Recognition Match

NYT > Business Day

After being charged in court with robbery and carjacking, Ms. Woodruff was released that evening on a $100,000 personal bond. In an interview, she said she went straight to the hospital where she was diagnosed with dehydration and given two bags of intravenous fluids. A month later, the Wayne County prosecutor dismissed the case against her. The ordeal started with an automated facial recognition search, according to an investigator's report from the Detroit Police Department. Ms. Woodruff is the sixth person to report being falsely accused of a crime as a result of facial recognition technology used by police to match an unknown offender's face to a photo in a database.


Artificial intelligence could aid in evaluating parole decisions

#artificialintelligence

Over the last decade, there has been an effort by lawmakers to reduce incarceration in the United States without impacting public safety. This effort includes parole boards making risk-based parole decisions -- releasing people assessed to be at low risk of committing a crime after being released. To determine how effective the current system of risk-based parole is, researchers from the UC Davis Violence Prevention Research Program and the University of Missouri, Kansas City, used machine learning to analyze parole data from New York. They suggest the New York State Parole Board could safely grant parole to more inmates. The study, "An Algorithmic Assessment of Parole Decisions," was published in the Journal of Quantitative Criminology.
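The idea of risk-based parole, releasing people assessed to be at low risk of reoffending, can be illustrated with a toy scoring model. The features, weights, and threshold below are invented for demonstration and are not the study's method or New York's actual instrument.

```python
# Illustrative sketch of risk-based release decisions: a simple logistic
# score over invented features, with release granted below a threshold.
import math

WEIGHTS = {
    "prior_violent_offenses": 0.8,
    "age_at_release": -0.04,
    "disciplinary_infractions": 0.3,
}
BIAS = -0.5

def recidivism_risk(candidate: dict) -> float:
    """Logistic risk score in (0, 1) from weighted features."""
    z = BIAS + sum(WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def parole_decision(candidate: dict, threshold: float = 0.3) -> bool:
    """Grant parole when the estimated risk falls below the threshold."""
    return recidivism_risk(candidate) < threshold

older_nonviolent = {"prior_violent_offenses": 0, "age_at_release": 55,
                    "disciplinary_infractions": 0}
higher_risk = {"prior_violent_offenses": 3, "age_at_release": 22,
               "disciplinary_infractions": 4}
```

A study like the one described would compare such algorithmic assessments against the board's actual grant/deny decisions to estimate how many additional low-risk candidates could be safely released.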