Thousands of four-wheeled robots will soon drop off burritos, salads and other food orders placed with Uber Eats. Robotics company Serve has been working with the delivery giant since 2021, and the firms announced Tuesday that they are ready to unleash 2,000 self-driving bots in major US cities starting in 2026. The small AI-powered machines can carry up to 50 pounds of merchandise for 25 miles on a single charge, which Serve said is enough power for dozens of deliveries in one day. Select customers who place food orders via the Uber Eats app may be offered the option of having their orders delivered by a Serve robot. The partnership will provide customers with contact-free deliveries.
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.
The drone attack on Moscow early Tuesday morning showed that the war is real and near, not just for Ukrainians but also for Russians, a message that can't be good for Vladimir Putin. At least eight drones flew over Russia's capital in the wee hours, almost certainly launched by Ukraine (or perhaps by Russian rebels sympathetic to Ukraine's cause). The Kremlin claims that air-defense crews shot down or electronically jammed all the drones and that the damage done to a few apartment buildings was caused by metal shards of the disabled airframes as they fell from the sky. Even if this claim is true, it doesn't matter. The attack demonstrates that Russia's skies are porous, that Russian civilians are vulnerable.
A group of leading technology experts from across the globe has warned that artificial intelligence should be treated as a societal risk and prioritised in the same class as pandemics and nuclear war. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concern over regulation and the risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives of Google's DeepMind, the ChatGPT developer OpenAI and the AI startup Anthropic. The statement comes as global leaders and industry experts, including OpenAI's own leadership, have called for regulation of the technology amid existential fears that it could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several who are developing the technology. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, whose company created ChatGPT and who has called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear wars. Altman was joined by other well-known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and executives from Microsoft and Google.
On Tuesday morning, at least eight attack drones entered Moscow's airspace before being shot down by the city's air defences, a few hitting residential buildings on the way down. The Russian government accused Ukraine of a "terrorist attack", which Kyiv officials wryly denied. "You know, we are being drawn into the era of artificial intelligence. Perhaps not all drones are ready to attack Ukraine and want to return to their creators and ask them questions like: 'Why are you sending us [to hit] the children of Ukraine? In Kyiv?'" Ukrainian presidential adviser Mykhailo Podolyak said on the YouTube breakfast show of exiled Russian journalist Alexander Plushev.
The question is no longer "What can ChatGPT do?" It's "What should I share with it?" Internet users are generally aware of the risks of data breaches and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions to avoid. OpenAI only recently announced a new privacy feature that lets ChatGPT users disable chat history, preventing their conversations from being used to improve and refine the model. "It's a step in the right direction," said Nader Henein, a privacy research VP at Gartner who has two decades of experience in corporate cybersecurity and data protection.
It's the fantasy of Stable Diffusion image addicts, ChatGPT tinkerers, and everyone else who can't get enough of the new crop of AI content generation toys -- I mean tools: to get rich just by playing around with AI. "Ladies and gentleman it is happening: 'Prompt engineer' is now a job title, and it pays between 175k to 300k a year." So claims TikTok user @startingname in a calm but definitive tone in his March TikTok post. To qualify for the job, he says, one just has to "spend time in the algorithm." In other words, it's a six-figure job for people who enjoy tinkering with generative AI. The New York Times called prompt engineering "a skill that those who play around with ChatGPT long enough can add to their résumés."
With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from people involved in the field, such as Elon Musk, about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively confirming those fears: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted by the Center for AI Safety, an organization with the mission "to reduce societal-scale risks from artificial intelligence," according to its website. Signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis.