How AI Can Be Regulated Like Nuclear Energy
Prominent AI researchers and public figures have repeatedly dominated headlines by comparing the risks of AI to the existential and safety risks posed by the coming of the nuclear age. From statements that AI should be regulated like nuclear energy, to declarations paralleling the risk of human extinction with that of nuclear war, the analogies drawn between AI and nuclear technology have been consistent. The argument for such extinction risk hinges on the hypothetical and unproven prospect of an Artificial General Intelligence (AGI) imminently arising from current Large Language Models (e.g., ChatGPT), which would necessitate increased caution in their creation and deployment. Sam Altman, the CEO of OpenAI, has even referenced the well-established nuclear practice of "licensing," which some deem anti-competitive. He has called for the creation of a federal agency that could grant licenses to create AI models above a certain threshold of capability.
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (1.00)
Big Tech's Stranglehold on Artificial Intelligence Must Be Regulated
Google CEO Sundar Pichai has suggested--more than once--that artificial intelligence (AI) will affect humanity's development more profoundly than humanity's harnessing of fire. He was speaking, of course, of AI as a technology that gives machines or software the ability to mimic human intelligence to complete ever more complex tasks with little or no human input at all. You may laugh Pichai's comparison off as the usual Silicon Valley hype, but the company's dealmakers aren't laughing. Since 2007, Google has bought at least 30 AI companies working on everything from image recognition to more human-sounding computer voices--more than any of its Big Tech peers. One of these acquisitions, DeepMind, which Google bought in 2014, just announced that it can predict the structure of every protein in the human body from the DNA of cells--an achievement that could fire up numerous breakthroughs in biological and medical research.
- Health & Medicine (1.00)
- Information Technology > Services (0.49)
- Government > Regional Government > North America Government > United States Government (0.48)
Elon Musk Warns That All A.I. Must Be Regulated, Even at Tesla - Digital Trends
Tesla CEO Elon Musk thinks that organizations developing artificial intelligence should be regulated, including his own companies. Musk tweeted his thoughts on A.I. on Monday night, February 17, in response to an article written about research company OpenAI, which was once backed by Musk himself. "OpenA.I. should be more open imo," Musk tweeted. "All orgs developing advanced A.I. should be regulated, including Tesla." Musk also said that both individual governments and global organizations should handle the regulation of A.I.
IBM: Face Recognition Tech Should be Regulated, Not Banned
IBM weighed in Nov 5 on the policy debate over facial recognition technology, arguing against an outright ban but calling for "precision regulation" to protect privacy and civil liberties. In a white paper posted on its website, the US computing giant said policymakers should understand that "not all technology lumped under the umbrella of 'facial recognition' is the same". IBM said uneasiness about artificial intelligence technology which can use face scans for identification was reasonable. "However, blanket bans on technology are not the answer to concerns around specific use cases," said the paper by IBM chief privacy officer Christina Montgomery and Ryan Hagemann, co-director of the IBM Policy Lab. "Casting such a wide regulatory net runs the very real risk of cutting us off from the many – and potentially life-saving – benefits these technologies offer."
- Information Technology > Security & Privacy (1.00)
- Law > Civil Rights & Constitutional Law (0.79)
AI Is Like Encryption: It Can't Be Regulated Out Of Existence
As the public becomes increasingly aware of the dangers of AI algorithmic bias and concerned over surveillance and militaristic applications of deep learning, there have been a growing number of calls for AI regulation. Whether new laws governing AI fairness or policies constraining the use of autonomous weapons systems, the challenge confronting policymakers is that AI is very much like encryption: it is not a single controlled algorithm that can be regulated, it is a portfolio of techniques that no single country controls and which are being advanced every day by researchers all across the world. The almost unimaginably rapid progression of deep learning over the past half-decade into every corner of modern life has ushered in profoundly existential questions about how to ensure accurate, fair and beneficial use of this rapidly evolving technology. When it comes to biased algorithms, the fundamental fairness of current AI systems has been largely left to market forces. In turn, basic economics has ensured that free but heavily biased data wins over costly but minimally biased data.
- Government > Military (0.52)
- Law > Statutes (0.40)
How Should AI Be Regulated?
New technologies often bring calls for new regulation. A current example is artificial intelligence (AI)--the creation of machines that think and act in ways that resemble human intelligence. There are plenty of AI optimists and AI pessimists. Both camps see the need for government intervention. Microsoft founder Bill Gates, who believes AI will "allow us to produce a lot more goods and services with less labor," foresees labor force dislocations and has suggested a robot tax.
- North America > Mexico (0.29)
- North America > Canada (0.29)
- North America > United States > Indiana > Monroe County > Bloomington (0.05)
- Asia > China (0.05)
- Transportation (1.00)
- Law > Statutes (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.51)
Point: Should AI Technology Be Regulated?
Government regulation is necessary to prevent harm. But regulation is also a blunt and slow-moving instrument that is easily subject to political interference and distortion. When applied to fast-moving fields like AI, misplaced regulations have the potential to stifle innovation and derail the enormous potential benefits that AI can bring in vehicle safety, improved productivity, and much more. We certainly do not want rules hastily cobbled together as a knee-jerk response to a popular outcry against AI stoked by alarmists such as Elon Musk (who has urged U.S. governors to regulate AI "before it's too late"). To address this conundrum, I propose a middle way: that we avoid regulating AI research, but move to regulate AI applications in arenas such as transportation, medicine, politics, and entertainment.
- Law > Statutes (1.00)
- Government (1.00)
Should AI For Marketing Be Regulated?
The release of the most recent Blade Runner film has further fueled the long-standing debate as to whether artificial intelligence (AI) should be subject to regulation. One domain that has been quick to adopt AI technology is marketing. There's little debate that AI will fundamentally alter the marketing landscape. Already, the use of chatbots--the likes of Apple's Siri, Google Assistant, Amazon's Echo, Microsoft's Cortana, etc.--has empowered marketers to increase and optimize engagement with consumers. AI has also enabled marketers to more effectively target consumers and develop more relevant personalized content.
Op-ed: Should Artificial Intelligence Be Regulated? - Future of Life Institute
Should artificial intelligence be regulated? And if so, what should those regulations look like? These are difficult questions to answer for any technology still in development stages – regulations, like those on the food, pharmaceutical, automobile and airline industries, are typically applied after something bad has happened, not in anticipation of a technology becoming dangerous. But AI has been evolving so quickly, and the impact of AI technology has the potential to be so great that many prefer not to wait and learn from mistakes, but to plan ahead and regulate proactively. In the near term, issues concerning job losses, autonomous vehicles, AI- and algorithmic-decision making, and "bots" driving social media require attention by policymakers, just as many new technologies do. In the longer term, though, possible AI impacts span the full spectrum of benefits and risks to humanity – from the possible development of a more utopic society to the potential extinction of human civilization.
- Government (1.00)
- Law > Statutes (0.30)
Should Artificial Intelligence Be Regulated?
Should artificial intelligence be regulated? I understand how recent advances and the associated hype can be scary for people, especially since doomsday scenarios related to AI have been part of our popular culture for many decades. I also understand, to address one of Ben Y. Zhao's concerns, that my opinion might come across as that of a "dismissive insider". However, I think there are at least three good reasons not to regulate AI.
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Asia > North Korea (0.05)
- Asia > China (0.05)
- Law (1.00)
- Government (1.00)