The New ChatGPT Resets the AI Race
Yesterday evening, Sam Altman shared an image of the Death Star on X. There was no caption on the picture, which showed the world-destroying Star Wars space station rising over an Earth-like planet, but his audience understood the context. In fewer than 24 hours, OpenAI would release an AI model intended to wipe out all the rest. That model, GPT-5, launched earlier today with all the requisite fanfare. In an announcement video, Altman said that the product will serve as a "legitimate Ph.D.-level expert in anything – any area you need, on demand – that can help you with whatever your goals are."
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.44)
Foundation's new season has dramatic potential – but sadly falls flat
Mel Brooks and Carl Reiner used to spend every evening watching movies. Their favourites were cheesy – the type of film where someone says, "Secure the perimeter!" Why do I mention this in the context of Foundation? Because this adaptation of Isaac Asimov's novels started out as a thought-provoking series, but is now a "Secure the perimeter!" kind of show. It has been two years since Foundation last aired, so if you have forgotten where we left off, that is understandable.
- Media > Film (0.38)
- Leisure & Entertainment (0.38)
Inside OpenAI's empire: A conversation with Karen Hao
These are our subscriber-only events where you get to listen in to conversations between editors and reporters. Now, I'm delighted to say we've got an absolute cracker of an event today. I'm very happy to have our prodigal daughter, Karen Hao, a fabulous AI journalist, here with us to talk about her new book. Hello, Karen, how are you doing?

Thank you so much for having me back, Niall.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.53)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.53)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.53)
The Download: how AI could improve construction site safety, and our Roundtables conversation with Karen Hao
More than 1,000 construction workers die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls. A new AI tool called Safety AI could help to change that. It analyzes the progress made on a construction site each day and flags conditions that violate Occupational Safety and Health Administration rules, with what its creator Philip Lorenzo claims is 95% accuracy. Lorenzo says Safety AI is the first of several emerging AI construction safety tools to use generative AI to flag safety violations. But as that 95% success rate suggests, Safety AI is not a flawless, all-knowing intelligence.
Roundtables: Inside OpenAI's Empire with Karen Hao
AI journalist Karen Hao's book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, tells the story of OpenAI's rise to power and its far-reaching impact all over the world. Hear from Karen Hao, former MIT Technology Review senior editor, and executive editor Niall Firth for a conversation exploring the AI arms race, what it means for all of us, and where it's headed.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The Download: how AI can improve a city, and inside OpenAI's empire
Bright LEDs could spell the end of dark skies

Scientists have known for years that light pollution is growing and can harm both humans and wildlife. In people, increased exposure to light at night disrupts sleep cycles and has been linked to cancer and cardiovascular disease, while wildlife suffers interruptions to reproductive patterns and heightened danger. Astronomers, policymakers, and lighting professionals are all working to find ways to reduce light pollution. Many of them advocate installing light-emitting diodes, or LEDs, in outdoor fixtures such as city streetlights, mainly for their ability to direct light to a targeted area. But the high initial investment and the durability of modern LEDs mean cities need to get the transition right the first time, or potentially face decades of consequences.
- Asia > Middle East > Syria > Damascus Governorate > Damascus (0.09)
- Asia > Middle East > Syria > Aleppo Governorate > Aleppo (0.09)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.40)
OpenAI: The power and the pride
There is no question that OpenAI pulled off something historic with its release of ChatGPT, built on GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution. In Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Karen Hao tells the story of the company's rise to power and its far-reaching impact all over the world.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Inside OpenAI's Empire
OpenAI started as a non-profit dedicated to building safe A.I. Now, they're obsessed with building artificial general intelligence by any means necessary – even if they don't quite know what that is.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.77)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.77)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.77)
'Every person that clashed with him has left': the rise, fall and spectacular comeback of Sam Altman
The short-lived firing of Sam Altman, the CEO of possibly the world's most important AI company, was sensational. Some of the OpenAI board members who sacked him believed that nothing less than the future of humanity was at stake if the organisation continued under Altman. Imagine Succession, with added apocalypse vibes. In early November 2023, after three weeks of secret calls and varying degrees of paranoia, the OpenAI board agreed: Altman had to go. After his removal, Altman's most loyal staff resigned, and others signed an open letter calling for his reinstatement.
- North America > United States > California (0.16)
- South America (0.14)
- Law (1.00)
- Information Technology (0.97)
- Health & Medicine > Consumer Health (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.79)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.65)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.64)
More is More: Addition Bias in Large Language Models
Santagata, Luca, De Nobili, Cristiano
In this paper, we investigate the presence of additive bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math$\Sigma$tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40% to 75.10% of cases when asked to improve its own or others' writing. These results indicate that, similar to humans, LLMs exhibit a marked additive bias, which may have implications when LLMs are used at scale. Additive bias could increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
- North America > United States (0.14)
- Pacific Ocean > North Pacific Ocean > Puget Sound (0.04)
- Europe > Italy > Trentino-Alto Adige/Südtirol > Trentino Province > Trento (0.04)
- (5 more...)
- Research Report > New Finding (0.86)
- Research Report > Experimental Study (0.54)
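The measurement the abstract describes (labelling each model revision as additive or subtractive, then reporting the additive share) can be sketched as follows. This is a minimal illustration under assumed conventions: the word-count comparison, function names, and toy data are this sketch's own choices, not the authors' actual code or tasks.

```python
def classify_edit(original: str, revised: str) -> str:
    """Label a model's revision as 'additive', 'subtractive', or 'neutral'
    by comparing word counts before and after the edit (an assumed proxy)."""
    before, after = len(original.split()), len(revised.split())
    if after > before:
        return "additive"
    if after < before:
        return "subtractive"
    return "neutral"

def additive_bias_rate(pairs) -> float:
    """Fraction of non-neutral edits that were additive, analogous to the
    percentages the paper reports (e.g. 97.85% for Llama 3.1)."""
    labels = [classify_edit(o, r) for o, r in pairs]
    decisive = [label for label in labels if label != "neutral"]
    return sum(label == "additive" for label in decisive) / len(decisive)

# Toy (original, revised) pairs: two revisions add words, one removes them.
pairs = [
    ("step on no pets", "step on no pets on pets"),
    ("a long redundant summary", "a summary"),
    ("racecar level", "racecar level kayak"),
]
print(additive_bias_rate(pairs))  # 2/3, since 2 of 3 decisive edits added words
```

On real model outputs, `pairs` would hold the prompt-and-response texts from a task such as the summarization-improvement experiment, and a rate well above 0.5 would indicate additive bias.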