The Leak That Has Big Tech and Regulators Panicked

Slate

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.


AI should be 'a global priority alongside pandemics and nuclear war', new letter states

Daily Mail - Science & tech

A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several who are developing the technology. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, creator of ChatGPT, who has called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear war. Altman was joined by other well-known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and executives from Microsoft and Google.


AI Threat Placed on Par With Pandemics, Nuclear War

WSJ.com: WSJD - Technology

Tech executives and artificial-intelligence scientists are sounding the alarm about AI, saying in a joint statement Tuesday that the technology poses an extinction risk as great as that of pandemics and nuclear war.


Who is watching you? AI can stalk unsuspecting victims with 'ease and precision': experts

FOX News

Sam Altman, the CEO of artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation of the technology 'to mitigate' its risks. A stranger in a coffee shop can watch you, learn virtually everything about you and where you've been, and even predict your movements "with greater ease and precision than ever before," experts say. All that stranger would need is a photo and advanced artificial intelligence technology that already exists, said Kevin Baragona, a founder of DeepAI.org. "There are services online that can use a photo of you, and I can find everything. Every instance of your face on the internet, every place you've been and use that for stalker-type purposes," Baragona told Fox News Digital.


'I do not think ethical surveillance can exist': Rumman Chowdhury on accountability in AI

The Guardian

Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls "2am brain", a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. "It's just like baking," she says. "You can't force it, you can't turn the temperature up, you can't make it go faster. It will take however long it takes. And when it's done baking, it will present itself."


How to Stop the Elizabeth Holmes of A.I.

Slate

Elizabeth Holmes convinced investors and patients that she had a prototype of a microsampling machine that could run a wide range of relatively accurate tests using a fraction of the volume of blood usually required. She lied; the Edison and miniLab devices didn't work. Worse still, the company was aware they didn't work, but continued to give patients inaccurate information about their health, including telling healthy pregnant women that they were having miscarriages and producing false positives on cancer and HIV screenings. But Holmes, who has to report to prison by May 30, was convicted of defrauding investors; she wasn't convicted of defrauding patients. This is because the principles of ethics for disclosure to investors, and the legal mechanisms used to take action against fraudsters like Holmes, are well developed.


Rishi Sunak races to tighten rules for AI amid fears of existential risk

The Guardian

Rishi Sunak is scrambling to update the government's approach to regulating artificial intelligence, amid warnings that the industry poses an existential risk to humanity unless countries radically change how they allow the technology to be developed. The prime minister and his officials are looking at ways to tighten the UK's regulation of cutting-edge technology, as industry figures warn the government's AI white paper, published just two months ago, is already out of date. Government sources have told the Guardian the prime minister is increasingly concerned about the risks posed by AI, only weeks after his chancellor, Jeremy Hunt, said he wanted the UK to "win the race" to develop the technology. Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator. Meanwhile Conservative and Labour MPs are calling on the prime minister to pass a separate bill that could create the UK's first AI-focused watchdog.


Everyone Wants to Regulate AI. No One Can Agree How

WIRED

As the artificial intelligence frenzy builds, a sudden consensus has formed. While there's a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane. Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI's most influential avatar of the moment, OpenAI CEO Sam Altman. "I think if this technology goes wrong, it can go quite wrong," he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. "We want to work with the government to prevent that from happening."


Latest AI announcements from the US Government include updated strategic plan

AIHub

The US government has released an updated roadmap to focus federal investments in AI research and development (R&D). The National AI R&D Strategic Plan, updated for the first time since 2019, outlines priorities and goals for federal investments in AI R&D. The executive summary of the document notes: "The federal government must place people and communities at the center by investing in responsible R&D that serves the public good, protects people's rights and safety, and advances democratic values. This update to the National AI R&D Strategic Plan is a roadmap for driving progress toward that goal." The plan reaffirms the eight strategies from the 2019 plan and adds a ninth.


Microsoft appeals for a new US agency to regulate AI

Engadget

Microsoft has called for the US federal government to create a new agency specifically focused on regulating AI, Bloomberg reports. In a speech in Washington, DC, attended by some members of Congress and non-governmental organizations, Microsoft vice chair and president Brad Smith remarked that "the rule of law and a commitment to democracy has kept technology in its proper place" and should do so again with AI. Another part of Microsoft's "blueprint" for regulating AI involves mandating redundant AI circuit breakers, a fail-safe that would allow algorithms to be shut down quickly. Smith also strongly suggested that President Biden create and sign an executive order requiring any federal agency that uses AI tools to follow the National Institute of Standards and Technology's (NIST) risk management framework. He added that Microsoft would also adhere to NIST's guidelines and publish a yearly AI report for transparency.