A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several who are developing the technology. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, whose company created ChatGPT and who has called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear war. Altman was joined by other well-known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic and executives from Microsoft and Google.
Sam Altman, the CEO of artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation of the technology 'to mitigate' its risks. A stranger in a coffee shop can watch you and learn virtually everything about you, where you've been, and even predict your movements "with greater ease and precision than ever before," experts say. All they would need is a photo and advanced artificial intelligence technology that already exists, said Kevin Baragona, a founder of DeepAI.org. "There are services online that can use a photo of you, and I can find everything. Every instance of your face on the internet, every place you've been and use that for stalker-type purposes," Baragona told Fox News Digital.
Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls "2am brain", a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. "It's just like baking," she says. "You can't force it, you can't turn the temperature up, you can't make it go faster. It will take however long it takes. And when it's done baking, it will present itself."
Elizabeth Holmes convinced investors and patients that she had a prototype of a microsampling machine that could run a wide range of relatively accurate tests using a fraction of the volume of blood usually required. She lied; the Edison and miniLab devices didn't work. Worse still, the company knew they didn't work but continued to give patients inaccurate information about their health, including telling healthy pregnant women that they were having miscarriages and producing false positives on cancer and HIV screenings. But Holmes, who has to report to prison by May 30, was convicted of defrauding investors; she wasn't convicted of defrauding patients. This is because the principles of ethics for disclosure to investors, and the legal mechanisms used to take action against fraudsters like Holmes, are well developed; no comparably mature framework protected the patients she misled.
Rishi Sunak is scrambling to update the government's approach to regulating artificial intelligence, amid warnings that the industry poses an existential risk to humanity unless countries radically change how they allow the technology to be developed. The prime minister and his officials are looking at ways to tighten the UK's regulation of cutting-edge technology, as industry figures warn that the government's AI white paper, published just two months ago, is already out of date. Government sources have told the Guardian the prime minister is increasingly concerned about the risks posed by AI, only weeks after his chancellor, Jeremy Hunt, said he wanted the UK to "win the race" to develop the technology. Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator. Meanwhile, Conservative and Labour MPs are calling on the prime minister to pass a separate bill that could create the UK's first AI-focused watchdog.
As the artificial intelligence frenzy builds, a sudden consensus has formed. While there's a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane. Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI's most influential avatar of the moment, OpenAI CEO Sam Altman. "I think if this technology goes wrong, it can go quite wrong," he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. "We want to work with the government to prevent that from happening."
An updated roadmap will focus federal investments in AI research and development (R&D). The National AI R&D Strategic Plan has been updated for the first time since 2019 and outlines priorities and goals for federal investments in AI R&D. The executive summary of the document notes: "The federal government must place people and communities at the center by investing in responsible R&D that serves the public good, protects people's rights and safety, and advances democratic values. This update to the National AI R&D Strategic Plan is a roadmap for driving progress toward that goal." The plan reaffirms the eight strategies from the 2019 plan and adds a ninth.
Microsoft has called for the US federal government to create a new agency specifically focused on regulating AI, Bloomberg reports. In a Washington, DC speech attended by some members of Congress and non-governmental organizations, Microsoft vice chair and president Brad Smith remarked that "the rule of law and a commitment to democracy has kept technology in its proper place" and should do so again with AI. Another part of Microsoft's "blueprint" for regulating AI involves mandating redundant AI circuit breakers: fail-safes that would allow algorithms to be shut down quickly. Smith also strongly suggested that President Biden create and sign an executive order requiring any federal agencies working with AI tools to follow the National Institute of Standards and Technology's (NIST) risk management framework. He added that Microsoft would also adhere to NIST's guidelines and publish a yearly AI report for transparency.
The "Godfather of A.I.," Geoffrey Hinton, quit Google out of fear that his former employer intends to deploy artificial intelligence in ways that will harm human beings. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton recently told The New York Times. But stomping out the door does nothing to atone for his own actions, and it certainly does nothing to protect conservatives – who are the primary target of A.I. programmers – from being canceled. Here are five things to know as the battle over A.I. turns hot: Elon Musk recently revealed that Google co-founder Larry Page and other Silicon Valley leaders want AI to establish a "digital god" that "would understand everything in the world."