Judges in England and Wales Given Cautious Approval to Use AI in Writing Legal Opinions

TIME - Tech

England's 1,000-year-old legal system -- still steeped in traditions that include wearing wigs and robes -- has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings. The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn't be used for research or legal analyses because the technology can fabricate information and provide misleading, inaccurate and biased information. "Judges do not need to shun the careful use of AI," said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. "But they must ensure that they protect confidence and take full personal responsibility for everything they produce." At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry -- and society in general -- react to a rapidly advancing technology alternately portrayed as a panacea and a menace.


OpenAI investors considering suing board after CEO Altman's firing: Sources

Al Jazeera

Some investors in OpenAI, the creator of ChatGPT, are exploring legal recourse against the company's board, sources familiar with the matter have told the Reuters news agency, after the directors removed CEO Sam Altman and sparked a potential mass exodus of employees. Sources said investors are working with legal advisers to study their options. It was not immediately clear if these investors will sue OpenAI. Investors worry they could lose hundreds of millions of dollars they invested in OpenAI, a crown jewel in some of their portfolios, with the potential collapse of the hottest startup in the rapidly growing generative AI sector. OpenAI did not respond to a request for comment.


Google's antitrust showdown with US could 'dramatically change' competition

Al Jazeera

A landmark trial currently under way in Washington may well decide the future of the internet. In the dock is Google, the world's largest search engine. The United States Department of Justice has accused the search giant of muscling its way to dominance by paying other companies like Apple to be the default search engine on their devices. "Google pays billions of dollars each year to distributors -- including popular-device manufacturers such as Apple, LG, Motorola, and Samsung … to secure default status for its general search engine," the Justice Department's complaint says. This, the DOJ thinks, chokes off competition that includes other search engines like Microsoft's Bing, and privately held DuckDuckGo.


ChatGPT falsely accuses law professor of sex assault

#artificialintelligence

ChatGPT has falsely accused a law professor of sexually harassing one of his students in a case that has highlighted the dangers of AI defaming people. Jonathan Turley, of George Washington University, said the allegation was made by the chatbot during research done by another professor. The AI claimed he made sexually suggestive comments and attempted to touch a student during a class trip to Alaska. It cited an article from The Washington Post as evidence. Turley wrote in USA Today: "It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone."


ChatGPT falsely accuses a law professor of a SEX ATTACK against students

Daily Mail - Science & tech

A law professor has been falsely accused of sexually harassing a student in reputation-ruining misinformation shared by ChatGPT, it has been alleged. US criminal defence attorney Jonathan Turley has raised fears over the dangers of artificial intelligence (AI) after being wrongly accused of unwanted sexual behaviour on an Alaska trip he never went on. To jump to this conclusion, it was claimed that ChatGPT relied on a cited Washington Post article that had never been written, quoting a statement that was never issued by the newspaper. The chatbot also believed that the 'incident' took place while the professor was working in a faculty he had never been employed in. In a tweet, the George Washington University professor said: 'Yesterday, President Joe Biden declared that "it remains to be seen" whether Artificial Intelligence (AI) is "dangerous."'


Can a chatbot earn a JD? This one averaged C-plus on law school exams

#artificialintelligence

An artificial intelligence tool called ChatGPT averaged a C-plus on exams at the University of Minnesota Law School, according to four law professors who gave it a try. The law professors used ChatGPT to answer the questions and then blindly graded the answers, along with answers by real students, report Reuters and Insider. The average C-plus grade was still below that of law students, who had a B-plus average. And ChatGPT's performance, while earning passing grades, was at or near the bottom of the class.


Landmark trial involving Tesla autopilot weighs if 'man or machine' at fault

The Guardian

Tesla will play a major role in a manslaughter trial this week over a fatal crash caused by a vehicle operating on autopilot, in what could be a defining case for the self-driving car industry. At the trial's heart is the question of who is legally responsible for a vehicle that can drive – or partially drive – itself. Kevin George Aziz Riad is on trial for his role in a 2019 crash. Police say Riad exited a freeway in southern California in a Tesla Model S, ran a red light and crashed into a Honda Civic, killing Gilberto Lopez and Maria Guadalupe Nieves-Lopez. Tesla's autopilot system, which can control speed, braking and steering, was engaged at the time of the crash that killed the couple, who were on their first date.


What if an Artificial Intelligence program actually becomes sentient?

#artificialintelligence

Silicon Valley is abuzz about artificial intelligence - software programs that can draw or illustrate or chat almost like a person. One Google engineer actually thought a computer program had gained sentience. A lot of AI experts, though, say there is no ghost in the machine. But what if it were true? That would introduce many legal and ethical questions.


A firm proposes Taser-armed drones to stop school shootings

NPR Technology

This photo provided by Axon Enterprise depicts a conceptual design, through a computer-generated rendering, of a Taser drone. Taser developer Axon said this week it is working to build drones armed with the electric stunning weapons that could fly in schools and "help prevent the next Uvalde, Sandy Hook, or Columbine." But its own technology advisers quickly panned the idea as a dangerous fantasy. The publicly traded company, which sells Tasers and police body cameras, floated the idea of a new police drone product last year to its artificial intelligence ethics board, a group of well-respected experts in technology, policing and privacy. Some of them expressed reservations about weaponizing drones in over-policed communities of color.


A Startup Will Nix Algorithms Built on Ill-Gotten Facial Data

WIRED

Late last year, San Francisco face-recognition startup Everalbum won a $2 million contract with the Air Force to provide "AI-driven access control." Monday, another arm of the US government dealt the company a setback. The Federal Trade Commission said Everalbum had agreed to settle charges that it had applied face-recognition technology to images uploaded to a photo app without users' permission and retained them after telling users they would be deleted. The startup used millions of the photos to develop technology offered to government agencies and other customers under the brand Paravision. Paravision, as the company is now known, agreed to delete the data collected inappropriately.