Brundage
The Lawsuit That Could Reshape the AI Industry Is Going to Trial
Two artificial intelligence heavyweights will face off in court this spring, in a case that could have far-reaching consequences for the future of AI. A judge ruled on Thursday that Elon Musk's lawsuit against Sam Altman, Microsoft, and other OpenAI co-founders can proceed to a jury trial, dismissing OpenAI's attempts to get the case thrown out. The lawsuit relates to the early days of OpenAI, which started as a nonprofit funded by around $38 million in donations from Musk.
- North America > United States > District of Columbia > Washington (0.05)
- Europe > France (0.05)
- Africa (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
A.I. and the Future of Cheating
Whether you were a straight-A student at university or more a student of beer pong, it's extremely unlikely that your fondest memories of college took place in an examination hall. Beyond being generally miserable, exams exacerbate anxiety and other mental health issues, and they do a poor job of assessing skills like critical thinking and creativity. Time-pressured tests serve as the key filter for several prestigious professions and universities, and, some argue, for no good reason. Given this sad state of affairs, it should be welcome news that supervised exams and tests are slowly falling out of vogue. Headmasters and professors have urged that more flexible, less time-pressured assessments, such as essays and written assignments, replace them.
These AI bots created their own language to talk to each other
It is now table stakes for artificial intelligence algorithms to "learn" about the world around them. The next level: for AI bots to learn how to talk to each other -- and to develop their own shared language. New research released last week by OpenAI, the artificial intelligence nonprofit lab founded by Elon Musk and Y Combinator president Sam Altman, details how the lab is training AI bots to create their own language through trial and error as they move around a set environment. This differs from how artificial intelligence algorithms typically learn -- from large sets of data, such as learning to recognize a dog by ingesting thousands of pictures of dogs. The world the researchers created for the AI bots to learn in is a computer simulation of a simple, two-dimensional white square.
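The "trial and error" at the heart of this work is reinforcement learning: agents receive a shared reward when communication succeeds, so symbols acquire meaning only because they help earn that reward. Below is a minimal, self-contained sketch of the idea -- a speaker/listener coordination game trained with tabular Q-learning. It is not OpenAI's environment or code; the vocabulary size, learning rate, and training schedule are all illustrative assumptions.

```python
import random

# Toy emergent communication: a "speaker" sees which of three landmarks
# is the goal and emits one symbol; a "listener" hears only the symbol
# and picks a landmark. Both get reward 1 on success. All names and
# numbers here are illustrative, not OpenAI's actual setup.

N_LANDMARKS, N_SYMBOLS = 3, 3
speaker_q = [[0.0] * N_SYMBOLS for _ in range(N_LANDMARKS)]   # Q[goal][symbol]
listener_q = [[0.0] * N_LANDMARKS for _ in range(N_SYMBOLS)]  # Q[symbol][landmark]

def act(q_row, eps):
    """Epsilon-greedy choice over one row of a Q-table."""
    if random.random() < eps:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

ALPHA = 0.1
for step in range(20_000):
    eps = max(0.05, 1.0 - step / 10_000)     # decay exploration over time
    goal = random.randrange(N_LANDMARKS)     # environment picks a target
    symbol = act(speaker_q[goal], eps)       # speaker "talks"
    guess = act(listener_q[symbol], eps)     # listener interprets
    reward = 1.0 if guess == goal else 0.0   # shared reward: pure trial and error
    speaker_q[goal][symbol] += ALPHA * (reward - speaker_q[goal][symbol])
    listener_q[symbol][guess] += ALPHA * (reward - listener_q[symbol][guess])

# After training, each goal tends to map to its own symbol: a tiny
# invented "language" that no one designed by hand.
for goal in range(N_LANDMARKS):
    s = max(range(N_SYMBOLS), key=lambda a: speaker_q[goal][a])
    pick = max(range(N_LANDMARKS), key=lambda a: listener_q[s][a])
    print(f"goal {goal} -> symbol {s} -> listener picks landmark {pick}")
```

The key design point is that no mapping from symbols to meanings is ever specified: any consistent convention the two agents stumble into is reinforced, which is why the resulting "language" is theirs rather than the programmers'.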
- North America > United States > California > Alameda County > Berkeley (0.06)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.42)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.30)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.30)
OpenAI Said Its Code Was Risky. Two Grads Re-Created It Anyway
In February, an artificial intelligence lab cofounded by Elon Musk informed the world that its latest breakthrough was too risky to release to the public. OpenAI claimed it had made language software so fluent at generating text that it might be adapted to crank out fake news or spam. On Thursday, two recent master's graduates in computer science released what they say is a re-creation of OpenAI's withheld software onto the internet for anyone to download and use. Aaron Gokaslan, 23, and Vanya Cohen, 24, say they aren't out to cause havoc and don't believe such software poses much risk to society yet. The pair say their release was intended to show that you don't have to be an elite lab rich in dollars and PhDs to create this kind of software: They used an estimated $50,000 worth of free cloud computing from Google, which hands out credits to academic institutions.
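The software at issue was a large language model that writes text one token at a time, repeatedly sampling from a predicted distribution over the next word. OpenAI's GPT-2 weights were eventually released in full, so the kind of generation the article describes can be reproduced today in a few lines. The sketch below uses the Hugging Face transformers library with the public "gpt2" checkpoint; the prompt and sampling settings are illustrative choices, not the replication recipe Gokaslan and Cohen used.

```python
# Minimal autoregressive text generation with the since-released GPT-2
# weights (requires the `transformers` and `torch` packages).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample 50 new tokens; top_k and temperature trade coherence for variety.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That the whole pipeline fits in a dozen lines today underscores the graduates' point: the barrier was compute and data, not secret knowledge.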
- Media > News (0.38)
- Education > Educational Setting > K-12 Education (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.97)
AI researchers debate the ethics of sharing potentially harmful programs
A recent decision by the research lab OpenAI to limit the release of a new algorithm has caused controversy in the AI community. The nonprofit said it decided not to share the full version of the program, a text-generation algorithm named GPT-2, due to concerns over "malicious applications." But many AI researchers have criticized the decision, accusing the lab of exaggerating the danger posed by the work and of inadvertently stoking "mass hysteria" about AI in the process. The debate has been wide-ranging and sometimes contentious. It even turned into a bit of a meme among AI researchers, who joked that they, too, had made an amazing breakthrough in the lab, but that the results were too dangerous to share.
- Government (0.71)
- Media > News (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.39)
A Google program can pass as a human on the phone. Should it be required to tell people it's a machine?
Google's artificial-intelligence assistant sounds almost exactly like a human when it calls the salon to book a woman's hair appointment. It responds to questions, negotiates timing and thanks the receptionist for her help. It even says "um" and "mm-hmm." What it doesn't say, however, is that it's a machine -- and the receptionist doesn't show any sign that she can tell. Google's unveiling on Tuesday of Duplex -- an automated voice assistant that can book restaurant reservations, check opening hours and accomplish other tasks over the phone -- has thrown a spotlight on how advanced AI can now carry on conversations that are so lifelike that even a human listener can be fooled. The technology, debuted at Google's I/O developer conference, could be a huge convenience for anyone who hates picking up the phone.
- North America > United States > Massachusetts (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Information Technology > Services (0.50)
- Information Technology > Security & Privacy (0.31)
The pros and cons of AI
Science fiction books and movies have largely formed the public's worldview of artificial intelligence, often clouding the truth on where we stand with the technology. Many are under the impression that "the machines" will eventually eliminate our jobs, police human beings and take over mankind; others think AI will only enhance our lives. One thing's for certain: everybody's got a take on the matter. ASU Now enlisted two scholars -- Subbarao Kambhampati and Miles Brundage -- to have a discussion on the pros and cons of AI, which has increasingly become a part of our everyday lives. Kambhampati, a professor of computer science in Arizona State University's Ira A. Fulton Schools of Engineering, works in artificial intelligence and focuses on planning and decision-making, especially in the context of human-machine collaboration.
- North America > United States > Arizona (0.25)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Information Technology > Artificial Intelligence > Robots (0.72)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.71)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.71)
- Information Technology > Artificial Intelligence > Machine Learning (0.47)
Artificial intelligence poses risks of misuse by hackers, say researchers
Frankfurt: Rapid advances in artificial intelligence are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns. The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities, along with privacy and military experts, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers. The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years. "We all agree there are a lot of positive applications of AI," said Miles Brundage, a research fellow at Oxford's Future of Humanity Institute.
- Government (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Addiction Disorder (0.40)
- Information Technology > Security & Privacy (0.39)
- Transportation > Ground > Road (0.37)
The downside of AI: bad guys can use it too – and they already are
FRANKFURT (Reuters) – Rapid advances in artificial intelligence (AI) are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns. The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities, along with privacy and military experts, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers. The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years. "We all agree there are a lot of positive applications of AI," said Miles Brundage, a research fellow at Oxford's Future of Humanity Institute.
- Asia (0.40)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- Government (1.00)
- Information Technology > Security & Privacy (0.39)