
The Silicon Reasonable Person: Can AI Predict How Ordinary People Judge Reasonableness?

Arbel, Yonathan A.

arXiv.org Artificial Intelligence

In everyday life, people make countless reasonableness judgments that determine appropriate behavior in various contexts. Predicting these judgments challenges the legal system, as judges' intuitions may not align with broader societal views. This Article investigates whether large language models (LLMs) can learn to identify patterns driving human reasonableness judgments. Using randomized controlled trials comparing humans and models across multiple legal contexts with over 10,000 simulated judgments, we demonstrate that certain models capture not just surface-level responses but potentially their underlying decisional architecture. Strikingly, these systems prioritize social cues over economic efficiency in negligence determinations, mirroring human behavior despite contradicting textbook treatments. These findings suggest practical applications: judges could calibrate intuitions against broader patterns, lawmakers could test policy interpretations, and resource-constrained litigants could preview argument reception. As AI agents increasingly make autonomous real-world decisions, understanding whether they've internalized recognizable ethical frameworks becomes essential for anticipating their behavior.


Uber's Self-Driving Car Killed Someone. Why Isn't Uber Being Charged?

Slate

Autonomous vehicle design involves an almost incomprehensible combination of engineering tasks including sensor fusion, path planning, and predictive modeling of human behavior. But despite the best efforts to consider all possible real world outcomes, things can go awry. More than two and a half years ago, in Tempe, Arizona, an Uber "self-driving" car crashed into pedestrian Elaine Herzberg, killing her. In mid-September, the safety driver behind the wheel of that car, Rafaela Vasquez, was charged with negligent homicide. Uber's test vehicle was driving 39 mph when it struck Herzberg. Uber's sensors detected her six seconds before impact but determined that the object sensed was a false positive.


A brave new world: Can robots be sued?

#artificialintelligence

Case study: Some car makers, including Volvo, Google, and Mercedes, have already said they would accept full liability for their vehicles' actions when they are in autonomous mode. Even without such a pledge, it's likely that manufacturers would end up paying if their autonomous car caused harm. If the offending car were considered a defective product, its maker could be held liable under strict product-design standards, potentially leading to class-action lawsuits and expensive product recalls -- like those Takata faced for its dangerous airbags. Another possibility: going deeper into the system, the AI itself could be held responsible, according to Gabriel Hallevy, a law professor at Ono Academic College in Israel, who wrote a book about AI and criminal negligence. Even then, its programmer or manufacturer could be found negligent as well, or even an accomplice to a crime.


'Socially-cooperative' cars are part of the future of driverless vehicles, says CMU professor - TechRepublic

#artificialintelligence

Driverless cars are our future, with nearly every automaker racing to create its own version of autonomous vehicles. But autonomous systems still have a long way to go -- and the cues and signals that human drivers know instinctively are not second nature for our machines. To find solutions for how vehicles with autonomous features can drive safely on the road, professor John Dolan, a principal systems scientist in the Robotics Institute at Carnegie Mellon University, studies how humans communicate and coordinate with these machines, helping them understand how to complete complex tasks on the road.


Can You Program Ethics Into a Self-Driving Car?

#artificialintelligence

A drunken man walking along a sidewalk at night trips and falls directly in front of a driverless car, which strikes him square on, killing him instantly. Had a human been at the wheel, the death would have been considered an accident because the pedestrian was clearly at fault and no reasonable person could have swerved in time. But the "reasonable person" legal standard for driver negligence disappeared back in the 2020s, when the proliferation of driverless cars reduced crash rates by 90 percent. Now the standard is that of the reasonable robot. The victim's family sues the vehicle manufacturer on that ground, claiming that, although the car didn't have time to brake, it could have swerved around the pedestrian, crossing the double yellow line and colliding with the empty driverless vehicle in the next lane.