Judges in England and Wales Given Cautious Approval to Use AI in Writing Legal Opinions
England's 1,000-year-old legal system -- still steeped in traditions that include wearing wigs and robes -- has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings. The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn't be used for research or legal analyses because the technology can fabricate information and provide misleading, inaccurate and biased information. "Judges do not need to shun the careful use of AI," said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. "But they must ensure that they protect confidence and take full personal responsibility for everything they produce." At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry -- and society in general -- react to a rapidly advancing technology alternately portrayed as a panacea and a menace.
- Europe > United Kingdom > Wales (0.64)
- North America > United States > Pennsylvania (0.05)
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > Greater London > London (0.05)
- Law > Government & the Courts (1.00)
- Government > Regional Government > North America Government > United States Government (0.30)
OpenAI investors considering suing board after CEO Altman's firing: Sources
Some investors in OpenAI, the creator of ChatGPT, are exploring legal recourse against the company's board, sources familiar with the matter have told the Reuters news agency, after the directors removed CEO Sam Altman and sparked a potential mass exodus of employees. Sources said investors are working with legal advisers to study their options. It was not immediately clear if these investors will sue OpenAI. Investors worry they could lose hundreds of millions of dollars they invested in OpenAI, a crown jewel in some of their portfolios, with the potential collapse of the hottest startup in the rapidly growing generative AI sector. OpenAI did not respond to a request for comment.
- North America > United States > Nebraska (0.06)
- North America > United States > Connecticut (0.06)
- Law (1.00)
- Banking & Finance > Trading (0.38)
- Banking & Finance > Capital Markets (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Google's antitrust showdown with US could 'dramatically change' competition
A landmark trial currently under way in Washington may well decide the future of the internet. In the dock is Google, the world's largest search engine. The United States Department of Justice has accused the search giant of muscling its way to dominance by paying other companies like Apple to be the default search engine on their devices. "Google pays billions of dollars each year to distributors -- including popular-device manufacturers such as Apple, LG, Motorola, and Samsung … to secure default status for its general search engine," the Justice Department's complaint says. This, the DOJ thinks, chokes off competition that includes other search engines like Microsoft's Bing, and privately held DuckDuckGo.
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Information Management > Search (1.00)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence (1.00)
ChatGPT falsely accuses law professor of sex assault
ChatGPT has falsely accused a law professor of sexually harassing one of his students in a case that has highlighted the dangers of AI defaming people. Jonathan Turley, of George Washington University, said the allegation was made by the chatbot during research done by another professor. The AI claimed he made sexually suggestive comments and attempted to touch a student during a class trip to Alaska. It cited an article from The Washington Post as evidence. Turley wrote in USA Today: "It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone."
- Law > Criminal Law (0.77)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
ChatGPT falsely accuses a law professor of a SEX ATTACK against students
A law professor has been falsely accused of sexually harassing a student in reputation-ruining misinformation shared by ChatGPT, it has been alleged. US criminal defence attorney Jonathan Turley has raised fears over the dangers of artificial intelligence (AI) after being wrongly accused of unwanted sexual behaviour on an Alaska trip he never went on. To reach this conclusion, ChatGPT reportedly relied on a cited Washington Post article that had never been written, quoting a statement that was never issued by the newspaper. The chatbot also claimed that the 'incident' took place while the professor was working at a faculty where he had never been employed. In a tweet, the George Washington University professor said: 'Yesterday, President Joe Biden declared that "it remains to be seen" whether Artificial Intelligence (AI) is "dangerous."'
- North America > United States > Alaska (0.26)
- North America > United States > California (0.05)
- Media > News (1.00)
- Law (1.00)
- Education > Educational Setting > Higher Education (0.51)
- (2 more...)
Can a chatbot earn a JD? This one averaged C-plus on law school exams
An artificial intelligence tool called ChatGPT averaged a C-plus on exams at the University of Minnesota Law School, according to four law professors who gave it a try. The law professors used ChatGPT to answer the questions and then blindly graded the answers, along with answers by real students, report Reuters and Insider. The average C-plus grade was still below that of law students, who had a B-plus average. And ChatGPT's performance, while earning passing grades, was at or near the bottom of the class. The professors' findings are available here.
- Law (1.00)
- Education > Educational Setting > Higher Education (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
Landmark trial involving Tesla autopilot weighs if 'man or machine' at fault
Tesla will play a major role in a manslaughter trial this week over a fatal crash caused by a vehicle operating on autopilot, in what could be a defining case for the self-driving car industry. At the trial's heart is the question of who is legally responsible for a vehicle that can drive – or partially drive – itself. Kevin George Aziz Riad is on trial for his role in a 2019 crash. Police say Riad exited a freeway in southern California in a Tesla Model S, ran a red light and crashed into a Honda Civic, killing Gilberto Lopez and Maria Guadalupe Nieves-Lopez. Tesla's autopilot system, which can control speed, braking and steering, was engaged at the time of the crash that killed the couple, who were on their first date.
- North America > United States > California (0.58)
- North America > United States > South Carolina (0.06)
- North America > United States > New York (0.06)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
- Government > Regional Government > North America Government > United States Government (0.51)
What if an Artificial Intelligence program actually becomes sentient?
Silicon Valley is abuzz about artificial intelligence - software programs that can draw or illustrate or chat almost like a person. One Google engineer actually thought a computer program had gained sentience. A lot of AI experts, though, say there is no ghost in the machine. But what if it were true? That would introduce many legal and ethical questions.
- North America > United States > California (0.26)
- North America > United States > North Carolina > Orange County > Chapel Hill (0.06)
- Law (0.57)
- Information Technology (0.57)
- Education (0.37)
A firm proposes Taser-armed drones to stop school shootings
This photo provided by Axon Enterprise depicts a conceptual design through a computer-generated rendering of a Taser drone. (Axon Enterprise, Inc. via AP) Taser developer Axon said this week it is working to build drones armed with the electric stunning weapons that could fly in schools and "help prevent the next Uvalde, Sandy Hook, or Columbine." But its own technology advisers quickly panned the idea as a dangerous fantasy. The publicly traded company, which sells Tasers and police body cameras, floated the idea of a new police drone product last year to its artificial intelligence ethics board, a group of well-respected experts in technology, policing and privacy. Some of them expressed reservations about weaponizing drones in over-policed communities of color.
- North America > United States > Texas > Uvalde County > Uvalde (0.27)
- North America > United States > New York (0.05)
A Startup Will Nix Algorithms Built on Ill-Gotten Facial Data
Late last year, San Francisco face-recognition startup Everalbum won a $2 million contract with the Air Force to provide "AI-driven access control." Monday, another arm of the US government dealt the company a setback. The Federal Trade Commission said Everalbum had agreed to settle charges that it had applied face-recognition technology to images uploaded to a photo app without users' permission and retained them after telling users they would be deleted. The startup used millions of the photos to develop technology offered to government agencies and other customers under the brand Paravision. Paravision, as the company is now known, agreed to delete the data collected inappropriately.