As businesses, consumers and government agencies look for ways to take advantage of artificial intelligence tools, experts this week called on Congress to craft AI regulations addressing challenges facing the technology. AI concerns run the gamut from bias in algorithms that could affect decisions such as who is selected for housing and employment opportunities, to deepfakes: AI-generated images and audio that imitate real people's appearances and voices. Yet AI has also led to the development of lifesaving drugs, advanced manufacturing and self-driving cars. Indeed, the increased adoption of artificial intelligence has led to the rapid growth of advanced technology in "virtually every sector," said Sen. Gary Peters (D-Mich.), chairman of the U.S. Senate Committee on Homeland Security and Governmental Affairs. Peters spoke during a committee hearing on AI risks and opportunities Wednesday.
As tech companies continue to leverage the powers of artificial intelligence, U.S. regulators worry that the technology's capabilities will outpace existing laws and provisions. As a result, the U.S. Chamber of Commerce has called for AI to be regulated. U.S. lawmakers say that without proper legislative oversight, AI could become a national security risk or a hindrance to educational integrity. Little legislation currently exists to regulate AI, which is a significant concern for U.S. policymakers. As with other transformative technologies, AI's dangers and pitfalls tend to outpace the laws meant to address them.
An artificial intelligence boom is taking over Silicon Valley, with hi-tech firms racing to develop everything from self-driving cars to chatbots capable of writing poetry. Yet AI could also spread conspiracy theories and lies even more quickly than the internet already does – fueling political polarization, hate, violence and mental illness in young people. It could undermine national security with deepfakes. In recent weeks, members of Congress have sounded the alarm over the dangers of AI but no bill has been proposed to protect individuals or stop the development of AI's most threatening aspects. Most lawmakers don't even know what AI is, according to Representative Jay Obernolte, the only member of Congress with a master's degree in artificial intelligence.
Jets can be flown by A.I. and can even take off, land and participate in dogfights. Yes, you read the headline correctly. The United States Defense Department recently confirmed that artificial intelligence successfully flew a jet similar to an F-16 for a total of 17 hours, spread across a series of 12 flights in December 2022 at Edwards Air Force Base in Kern County, California. The Defense Department used an experimental plane called the Vista X-62A for the flights.
AI could benefit society, but it could also become a monster.
Representative Ted Lieu, Democrat of California, wrote in a guest essay in The New York Times in January that he was "freaked out" by the ability of the ChatGPT chatbot to mimic human writers. Another Democrat, Representative Jake Auchincloss of Massachusetts, gave a one-minute speech -- written by a chatbot -- calling for regulation of A.I. But even as lawmakers put a spotlight on the technology, few are taking action on it. No bill has been proposed to protect individuals or thwart the development of A.I.'s potentially dangerous aspects. And legislation introduced in recent years to curb A.I. applications like facial recognition has withered in Congress.
"By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible A.I.," she said. In the regulatory vacuum, the European Union has taken a leadership role. In 2021, E.U. policymakers proposed a law focused on regulating the A.I. technologies that might create the most harm, such as facial recognition and applications linked to critical public infrastructure like the water supply. The measure, which is expected to be passed as soon as this year, would require makers of A.I. to conduct risk assessments of how their applications could affect health, safety and individual rights, like freedom of expression. Companies that violated the law could be fined up to 6 percent of their global revenue, which could total billions of dollars for the world's largest tech platforms.
For corporate America, the biggest trend to latch onto at the moment is artificial intelligence, stoked by the popularity of ChatGPT. But worries about the dangers of widespread A.I. use are growing as well. There's one big hitch: Governments -- notably Washington -- haven't kept pace with regulations for the technology. That could lead to dire consequences: "By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible A.I.," Carly Kind, the director of the Ada Lovelace Institute, a policy research group, told The Times. Washington has been largely hands-off on A.I. rules, even as several lawmakers have pushed to tighten oversight.
Panic over AI among voice actors is coming to a head, with social media brimming with daily posts on the topic, despite very little real-world evidence of synthetic voices affecting the bottom line of working pros, or even amateurs for that matter. There's a supposition among the masses that because the technology is improving, its ascension is inevitable, and that by definition it will supplant human voice actors to a highly disruptive degree. It's easy to get caught up in the terror, but worst-case scenarios rarely play out as imagined. Now, there's no question that numerous companies and platforms want AI voiceover to be an Earth-shattering thing. And, inevitably, we are going to start seeing even well-known casting platforms offer AI voices against or alongside their human talent. Many voice actors are busy creating their own voice clones, which they expect to make available through their websites, casting platforms, or through the platforms of the companies creating these artificial voices for them.
Many agree on what responsible, ethical AI looks like -- at least at a zoomed-out level. But outlining key goals, like privacy and fairness, is only the first step. Policymakers need to determine whether existing laws and voluntary guidance are powerful enough tools to enforce good behavior, or if new regulations and authorities are necessary. And organizations will need to plan for how they can shift their culture and practices to ensure they're following responsible AI advice. That could be important for compliance purposes or simply for preserving customer trust.