Thousands of four-wheeled robots will soon drop off burritos, salads and other food orders placed with Uber Eats. Robotics company Serve has been working with the delivery giant since 2021, and the firms announced Tuesday they are ready to unleash 2,000 self-driving bots in major US cities starting in 2026. The small AI-powered machines can carry up to 50 pounds of merchandise for 25 miles on a single charge, which Serve said is enough power for dozens of deliveries in one day. Select customers who place food orders via the Uber Eats app may be offered the option of having their orders delivered by a Serve robot. The partnership will provide customers with contact-free deliveries.
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.
An AI capability has taken the internet by storm but assuredly not in the manner its creators had hoped. Basically, everyone is laughing at AI's ability -- or lack thereof -- to "expand" the background of classic art. It all started with a few different AI-focused accounts on Twitter posting expanded versions of classic art where, you guessed it, AI filled in the background of famous artwork. What if the Mona Lisa zoomed out a bit and had a much wider field of view that included a Middle Earth-looking castle-ish thing? This idea hints at the thing that AI Bros -- and they are often bros -- don't understand about art.
The drone attack on Moscow early Tuesday morning showed that the war is real and near, not just for Ukrainians but also for Russians--a message that can't be good for Vladimir Putin. At least eight drones flew over Russia's capital in the wee hours, almost certainly launched by Ukraine (or perhaps by Russian rebels sympathetic to Ukraine's cause). The Kremlin claims that air-defense crews shot down or electronically jammed all the drones and that the damage done to a few apartment buildings was caused by metal shards of the disabled airframes as they fell from the sky. Even if this claim is true, it doesn't matter. The attack demonstrates that Russia's skies are porous, that Russian civilians are vulnerable.
A group of leading technology experts from across the globe has warned that artificial intelligence should be considered a societal risk and prioritised in the same class as pandemics and nuclear war. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and the risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives of Google's DeepMind, the ChatGPT developer OpenAI and the AI startup Anthropic. The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have called for regulation of the technology amid existential fears that it could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
In recent months, artificial intelligence has transformed how we work and play, giving almost anyone the ability to write code, create art, and even make investments. For professional and hobbyist users alike, generative AI tools such as ChatGPT offer advanced capabilities to create decent-quality content from a simple prompt given by the user. As Microsoft adds GPT-4 to Bing, OpenAI adds Bing to ChatGPT, and Bard switches to PaLM 2, keeping up with all the latest AI tools can get confusing, to say the least. Knowing which of the three most popular AI chatbots is best to write code, generate text, or help build resumes is challenging, so we'll break down the biggest differences so you can choose one that fits your needs. To help determine which AI chatbot gives more accurate answers, I'm going to use a simple prompt to compare the three: "I have 5 oranges today, I ate 3 oranges last week. How many oranges do I have left?"
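For reference, the orange prompt is a trick question: the three oranges were eaten last week, before today's count of five, so nothing should be subtracted. The arithmetic can be sketched in a few lines (a minimal illustration; the variable names are mine, not from the article):

```python
# Trick question: the 3 oranges eaten last week are already excluded
# from today's count of 5, so no subtraction is needed.
oranges_today = 5
eaten_last_week = 3  # irrelevant to the current total
oranges_left = oranges_today
print(oranges_left)  # 5
```

A chatbot that naively computes 5 - 3 = 2 fails the test; the correct answer is 5.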
A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several developing the tech. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, creator of ChatGPT, who has called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear wars. Altman was joined by other known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic and executives from Microsoft and Google.
On Tuesday morning, at least eight attack drones entered Moscow's airspace before being shot down by the city's air defences, a few hitting residential buildings on the way down. The Russian government accused Ukraine of a "terrorist attack", which Kyiv officials wryly denied. "You know, we are being drawn into the era of artificial intelligence. Perhaps not all drones are ready to attack Ukraine and want to return to their creators and ask them questions like: 'Why are you sending us [to hit] the children of Ukraine? In Kyiv?'" Ukrainian presidential adviser Mykhailo Podolyak said on the YouTube breakfast show of exiled Russian journalist Alexander Plushev.
The question is no longer "What can ChatGPT do?" It's "What should I share with it?" Internet users are generally aware of the risks of possible data breaches, and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions to avoid. OpenAI only recently announced a new privacy feature which lets ChatGPT users disable chat history, preventing conversations from being used to improve and refine the model. "It's a step in the right direction," said Nader Henein, a privacy research VP at Gartner who has two decades of experience in corporate cybersecurity and data protection.
When ChatGPT kicked off a generative AI frenzy in November 2022, companies quickly took note of its massive potential to increase work productivity. ChatGPT, Bing, and Bard are all free, standalone AI chatbots for general-purpose use (with varying degrees of success). But then there are generative AI tools that are integrated into the work software you already use. Because they can access emails, CRMs, documents, and other work-related info (still getting used to that), generative AI tools for work can basically function as your personal assistant. Thanks to large language models like GPT-4, these tools can summarize and explain text, draft copy, compose emails, transcribe meetings, create agendas and notes, and even analyze spreadsheet data.