The dangers of so-called AI experts believing their own hype
Demis Hassabis, CEO of Google DeepMind and a Nobel prizewinner for his role in developing the AlphaFold AI algorithm for predicting protein structures, made an astonishing claim on 60 Minutes in April. With the help of AI like AlphaFold, he said, the end of all disease is within reach, "maybe within the next decade or so". With that, the interview moved on. To those actually working on drug development and curing disease, this claim is laughable. According to medicinal chemist Derek Lowe, who has worked for decades on drug discovery, Hassabis's statements "make me want to spend some time staring silently out the window, mouthing unintelligible words to myself".
Two Paths for A.I.
Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He'd become convinced that the company wasn't prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in "alignment," he told me--the suite of techniques used to insure that A.I. acts in accordance with human commands and values--were lagging behind gains in intelligence.
'The Stakes Are Incredibly High.' Two Former OpenAI Employees On the Need for Whistleblower Protections
This could be a costly interview for William Saunders. The former safety researcher resigned from OpenAI in February, and--like many other departing employees--signed a non-disparagement agreement in order to keep the right to sell his equity in the company. Although he says OpenAI has since told him that it does not intend to enforce the agreement, and has made similar public commitments, he is still taking a risk by speaking out. "By speaking to you I might never be able to access vested equity worth millions of dollars," he tells TIME. "But I think it's more important to have a public dialogue about what is happening at these AGI companies."
Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections
A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic have signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter, which was published on Tuesday, says. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." The letter comes just a couple of weeks after a Vox investigation revealed that OpenAI had attempted to muzzle recently departed employees by forcing them to choose between signing an aggressive non-disparagement agreement and risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and said it had been removed from recent exit documentation, though it's unclear whether it remains in force for some employees. The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo.
OpenAI Employees Warn of a Culture of Risk and Retaliation
A group of current and former OpenAI employees have issued a public letter warning that the company and its rivals are building artificial intelligence that poses undue risks, without sufficient oversight, while muzzling employees who might witness irresponsible activity. "These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," reads the letter published at righttowarn.ai. "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable." The letter calls for all AI companies, not just OpenAI, to commit to not punishing employees who speak out about their activities. It also calls for companies to establish "verifiable" ways for workers to provide anonymous feedback on company practices.