toner
US attacks on science and research a 'great gift' to China on artificial intelligence, former OpenAI board member says
The US administration's targeting of academic research and international students is a "great gift" to China in the race to compete on artificial intelligence, former OpenAI board member Helen Toner has said. The director of strategy at Georgetown's Center for Security and Emerging Technology (CSET) joined the board of OpenAI in 2021 after a career studying AI and the relationship between the United States and China. Toner, a 33-year-old University of Melbourne graduate, was on the board for two years until a falling out with co-founder Sam Altman in 2023. Altman was fired by the board over claims that he was not "consistently candid" in his communications and that the board lacked confidence in his ability to lead. In the chaotic days that followed, Altman was re-hired, and three members of the board, including Toner, were ousted instead.
- North America > United States (1.00)
- Asia > China (0.88)
- Oceania > Australia (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.92)
These Startups Are Building Advanced AI Models Without Data Centers
Researchers have trained a new kind of large language model (LLM) using GPUs dotted across the world and fed private as well as public data--a move that suggests that the dominant way of building artificial intelligence could be disrupted. Flower AI and Vana, two startups pursuing unconventional approaches to building AI, worked together to create the new model, called Collective-1. Flower created techniques that allow training to be spread across hundreds of computers connected over the internet. The company's technology is already used by some firms to train AI models without needing to pool compute resources or data. Vana provided sources of data including private messages from X, Reddit, and Telegram.
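The approach described here, spreading training across independently owned machines so that neither compute nor raw data ever has to be pooled, is broadly in the spirit of federated learning. As a rough illustration of that general idea only (this is not Flower's actual API, nor the Collective-1 training recipe), here is a minimal federated-averaging sketch in Python: each simulated node runs a few gradient steps on its own local data, and only the averaged model weights are shared with a coordinator.

```python
# Minimal federated-averaging sketch (illustrative only; not Flower's API or
# the Collective-1 recipe). Each "node" keeps its data local and shares only
# model weights, which a coordinator averages once per communication round.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n=200, d=5):
    """Synthetic private dataset for one node: y = X @ w_true + noise."""
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.01, steps=20):
    """A few steps of gradient descent on this node's data only."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

nodes = [make_local_data() for _ in range(10)]   # 10 machines; data never pooled
w_global = np.zeros(5)

for _ in range(30):                              # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in nodes]
    w_global = np.mean(local_weights, axis=0)    # coordinator averages weights

print("learned weights:", np.round(w_global, 2))
```

In a real deployment the "nodes" would be GPUs on different networks and the averaging step would happen over the internet, which is the coordination problem Flower's tooling is built to handle.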
Silicon Valley Takes Artificial General Intelligence Seriously--Washington Must Too
Artificial General Intelligence--machines that can learn and perform any cognitive task that a human can--has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it's an impending reality that demands our immediate attention. On Sept. 17, during a Senate Judiciary Subcommittee hearing titled "Oversight of AI: Insiders' Perspectives," whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University's Center for Security and Emerging Technology, testified that "the biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence." She added that leading AI companies such as OpenAI, Google, and Anthropic are "treating building AGI as an entirely serious goal."
- Government (0.93)
- Information Technology > Security & Privacy (0.30)
- Information Technology > Artificial Intelligence > Cognitive Science (0.84)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.78)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.62)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.52)
AI Doomers Had Their Big Moment
Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline. The deep-learning revolution was drawing new converts to the cause.
- North America > United States > New Mexico (0.04)
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (3 more...)
ToNER: Type-oriented Named Entity Recognition with Generative Language Model
Jiang, Guochao, Luo, Ziqin, Shi, Yuchen, Wang, Dixuan, Liang, Jiaqing, Yang, Deqing
In recent years, fine-tuned generative models have proven more powerful than earlier tagging-based or span-based models on the named entity recognition (NER) task. It has also been shown that entity-related information, such as entity types, can prompt a model to perform NER better. However, it is difficult to know in advance which entity types actually appear in a given sentence, and feeding the model too many candidate types inevitably distracts it. To exploit the benefit of entity types for NER, in this paper we propose a novel NER framework based on a generative model, namely ToNER. In ToNER, a type-matching model first identifies the entity types most likely to appear in the sentence. We then add a multiple binary classification task to fine-tune the generative model's encoder, yielding a refined representation of the input sentence. Moreover, we add an auxiliary entity-type discovery task that further fine-tunes the model to produce more accurate outputs. Extensive experiments on several NER benchmarks verify the effectiveness of the type-oriented strategies proposed in ToNER.
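The two-stage pipeline the abstract describes, first predicting which entity types are likely present and then conditioning a generative model on them, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released code: the `match_types` stand-in, the prompt format, the type inventory, and the `t5-small` backbone are all assumptions, and the real system fine-tunes its matcher and generator on NER data.

```python
# Illustrative sketch of the two-stage ToNER idea described in the abstract
# (not the authors' implementation). Stage 1: a type-matching model scores
# which entity types are likely present. Stage 2: the selected types are
# prepended to the input as a prompt for a generative (seq2seq) NER model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

TYPE_INVENTORY = ["person", "organization", "location", "miscellaneous"]  # assumed

def match_types(sentence: str, threshold: float = 0.5) -> list[str]:
    """Stand-in type matcher. In ToNER this is a learned model that scores
    each candidate entity type; here the scores are hard-coded for illustration."""
    fake_scores = {t: 0.9 if t in ("person", "organization") else 0.1
                   for t in TYPE_INVENTORY}
    return [t for t, s in fake_scores.items() if s >= threshold]

# Placeholder backbone; a real system would use a generator fine-tuned for NER.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

sentence = "Helen Toner joined the board of OpenAI in 2021."
types = match_types(sentence)

# Prompt format is an assumption: giving the candidate types up front narrows
# what the generator has to produce, which is the core intuition in the abstract.
prompt = f"extract entities of types [{', '.join(types)}]: {sentence}"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```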
OpenAI's board allegedly learned about ChatGPT launch on Twitter
Helen Toner, one of OpenAI's former board members who was responsible for firing CEO Sam Altman last year, revealed that the company's board didn't know about the launch of ChatGPT until it was released in November 2022. "[The] board was not informed in advance of that," Toner said on Tuesday on a podcast called The Ted AI Show. "We learned about ChatGPT on Twitter." Toner's comments came just two days after she criticized the way OpenAI was governed in an Economist piece published on Sunday that she co-wrote with Tasha McCauley, another former OpenAI board member. This is the first time that Toner has spoken openly about the circumstances that led to Altman's dramatic ouster from the company he co-founded in 2015, and his quick reinstatement following protests from employees.
- Oceania > Fiji (0.06)
- North America > United States > California (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI's directors have been anything but open. What the hell happened?
The OpenAI farce has moved at such speed in the past week that it is easy to forget that nobody has yet said in clear terms why Sam Altman – the returning chief executive and all-round genius, according to his vocal fanclub – was fired in the first place. Since we are constantly told, not least by Altman himself, that the worst outcome from the adoption of artificial general intelligence could be "lights out for all of us", somebody needs to find a voice here. If the old board judged, for example, that Altman was unfit for the job because he was taking OpenAI down a reckless path, lights-wise, there would plainly be an obligation to speak up. Or, if the fear is unfounded, the architects of the failed boardroom coup could do everybody a favour and say so. Saying nothing useful, especially when your previous stance has been that transparency and safety go hand in hand, is indefensible.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.87)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.87)
What the Firing and Rehiring of Sam Altman Actually Means
Folks, if you predicted on Friday that the closely watched OpenAI power struggle would end in the most pointless-seeming way possible … well, just look. Late Tuesday night, four days after CEO Sam Altman's shocking ouster from the A.I. company, we found ourselves (mostly) back where we started: Altman is returning to OpenAI as its CEO, albeit not to its board of directors; Greg Brockman is once again president of OpenAI, but also will not be a member of the board; Mira Murati, who briefly took the helm as interim CEO, is just regular ol' CTO again; the three researchers who'd stepped down Friday in solidarity with Altman and Brockman are either back at the company or requesting to return; Altman & co. will once again operate with the backing of Microsoft, not as direct employees of the Big Tech pioneer. When it comes to the Main Characters of this saga and their loyalists, it seems most everyone's pretty happy. "[W]e are so back," Brockman exclaimed, sharing a selfie with his smiling team (who celebrated, according to the Information's Erin Woo, by setting off a false fire alarm at OpenAI HQ). Twitch co-founder Emmett Shear is no longer interim CEO but is "deeply pleased by this result, after 72 very intense hours of work," and is "glad to have been a part of the solution."
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Sam Altman's ouster at OpenAI exposes growing rift in AI industry
Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for the Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in that movement.
- Information Technology > Communications > Social Media (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.40)
Why Biden's AI Executive Order Only Goes So Far
President Biden this week signed a sweeping Executive Order on artificial intelligence that seeks to tackle threats posed by the technology, but some experts say the regulation has left questions unanswered about how it could work in practice. The order tasks agencies with rethinking their approach to AI and aims to address threats relating to national security, competition and consumer privacy, while promoting innovation, competition, and the use of AI for public services. One of the most significant elements of the order is the requirement for companies developing the most powerful AI models to disclose the results of safety tests. On Tuesday, Secretary of Commerce Gina Raimondo told CNBC that under the Executive Order "the President directs the Commerce Department to require companies to tell us: what are the safety precautions they're putting in place and to allow us to judge whether that's enough. And we plan to hold these companies accountable."