Microsoft has called for the US federal government to create a new agency specifically focused on regulating AI, Bloomberg reports. In a speech in Washington, DC, attended by some members of Congress and non-governmental organizations, Microsoft vice chair and president Brad Smith remarked that "the rule of law and a commitment to democracy has kept technology in its proper place" and should do so again with AI. Another part of Microsoft's "blueprint" for regulating AI involves mandating redundant AI circuit breakers, a fail-safe that would allow algorithms to be shut down quickly. Smith also strongly suggested that President Biden sign an executive order requiring any federal agency that uses AI tools to follow the National Institute of Standards and Technology's (NIST) risk management framework. He added that Microsoft would also adhere to NIST's guidelines and publish a yearly AI report for transparency.
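Neither Microsoft nor the report spells out how such "circuit breakers" would actually be built. Purely as an illustration of the general fail-safe idea the term suggests, the short Python sketch below (all names hypothetical, not Microsoft's design) shows a kill switch that an operator or automated monitor can trip to block any further model calls.

```python
# Hypothetical sketch only: illustrates a generic "circuit breaker" fail-safe,
# not any implementation described by Microsoft or in the article.
import threading


class AICircuitBreaker:
    """Kill switch wrapped around an AI system's inference calls."""

    def __init__(self):
        self._tripped = threading.Event()

    def trip(self, reason: str) -> None:
        # An operator or monitoring check calls this to halt the system.
        print(f"Circuit breaker tripped: {reason}")
        self._tripped.set()

    def guarded_call(self, model_fn, *args, **kwargs):
        # Refuse to run the model once the breaker has been tripped.
        if self._tripped.is_set():
            raise RuntimeError("AI system halted by circuit breaker")
        return model_fn(*args, **kwargs)


if __name__ == "__main__":
    breaker = AICircuitBreaker()
    # Stand-in for a real model call.
    print(breaker.guarded_call(lambda prompt: f"model output for: {prompt}", "hello"))
    breaker.trip("manual shutdown requested")
    try:
        breaker.guarded_call(lambda prompt: prompt, "should not run")
    except RuntimeError as err:
        print(err)
```

The key design point, consistent with the "redundant" wording in Microsoft's blueprint, is that the shutdown path sits outside the model itself, so it can be duplicated and triggered independently of the algorithm it controls.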
The White House has met with AI executives, released an AI bill of rights and an AI risk management framework, but who should run the show? WASHINGTON, D.C. – Congressional lawmakers agreed that AI needs federal oversight, but several were skeptical that President Biden or Vice President Kamala Harris were capable of leading the effort. "I wouldn't trust Joe Biden and Kamala Harris to be able to successfully operate an iPhone, much less be a key focal point of AI policy," Florida Rep. Matt Gaetz told Fox News. "That said, there are some leading minds in the Democratic Party here on the Hill who I think are evaluating these issues with great thoughtfulness: Ted Lieu, Ro Khanna." Rep. Matt Gaetz said neither Biden nor Harris should run the White House's AI efforts.
Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. Sam Altman, OpenAI CEO, said the company could stop all operations in the European Union if it can't comply with impending artificial intelligence regulations. During a stop on what he has called the "OpenAI world tour," Altman spoke at University College London about the company's advancements and was asked about the EU's proposed AI regulations. The CEO explained that OpenAI has issues with how the regulations are currently written. According to Time, the regulations, which are still being revised, may designate the company's ChatGPT and GPT-4 as "high risk," requiring increased safety compliance.
OpenAI CEO Sam Altman took questions from reporters after his congressional hearing, including defining "scary AI." Artificial intelligence could become so powerful that it replaces professional experts "in most domains" within the next decade, Altman warned. The chief of the AI lab behind popular platforms such as ChatGPT published a blog post this week with two other OpenAI leaders, Greg Brockman and Ilya Sutskever, warning that "we must mitigate the risks of today's AI technology." "It's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," reads the post, which was published on OpenAI's website. "In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the post continued. Altman and his fellow OpenAI executives compared artificial intelligence to nuclear energy and synthetic biology, arguing that regulations must be handled with "special treatment and coordination" to be effective. They suggested that a version of the International Atomic Energy Agency will be needed to regulate the "superintelligence" technology. "Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc," they wrote. Altman appeared before Congress this month to discuss how to regulate artificial intelligence, saying he welcomes U.S. leaders to craft such rules. Following the hearing, Altman provided examples of "scary AI" to Fox News Digital, which included systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he said. "I think these are all scary."
D.C. residents said they don't trust Vice President Kamala Harris to lead the White House's response to artificial intelligence. WASHINGTON, D.C. – Vice President Kamala Harris wouldn't be able to effectively run the White House's response to artificial intelligence if she's charged with leading it, some residents of the nation's capital told Fox News. "I don't know if Kamala Harris has the background and the tech knowledge to really get a grasp on what AI can do and what its capabilities are, to be able to wrangle it in a space that is safe for everyone and not just beneficial for large corporations," Eric told Fox News. Vice President Kamala Harris has been involved with the White House's AI efforts. But another D.C. local, Marlena, said: "I definitely trust her on the task force. She's a brilliant woman, extraordinarily accomplished."
Displaying bias, foreign adversaries like China becoming dominant, and outsmarting humans were all top artificial intelligence concerns for members of Congress. WASHINGTON, D.C. – Congressional lawmakers spouted an array of concerns about artificial intelligence after OpenAI CEO Sam Altman told a Senate subcommittee that he saw problems the technology could create. "The overall risk is allowing China to win the AI race, because obviously, China would use the technology to further their aims of global ambition and to export their model of total techno-totalitarian control, which is nightmarish and would make Orwell blush," Republican Rep. Mike Gallagher said. "The other risk is that we don't maintain control of the technology, somehow it escapes our control." OpenAI CEO Sam Altman told a Senate subcommittee Tuesday that he had concerns about artificial intelligence's possibilities.
'Media Buzz' host Howard Kurtz joins 'Sunday Night in America with Trey Gowdy' to discuss a poll claiming Americans are blaming the media for divisiveness in America. Be afraid, be very afraid. That's the message that is starting to dominate the media's many channels when it comes to artificial intelligence. And it's not just prognosticators but such voices as Elon Musk and the godfather of AI who are saying an apocalyptic future may loom in the distance. I'm not hitting the panic button yet, but the sheer velocity of what AI is either able to achieve or is moving toward achieving seems to increase exponentially each week.
OpenAI CEO Sam Altman, head of the artificial intelligence lab behind ChatGPT, took questions from reporters after his congressional hearing, including his definition of "scary AI." Altman testified before Congress in Washington, D.C., this week about regulating artificial intelligence as well as his personal fears over the tech and what "scary" AI systems mean to him. Fox News Digital asked OpenAI's wildly popular chatbot, ChatGPT, to also weigh in on examples of "scary" artificial intelligence systems, and it offered six hypothetical instances of how AI could be weaponized or have potentially harmful impacts on society. When asked by Fox News Digital on Tuesday after his testimony before a Senate Judiciary subcommittee, Altman gave examples of "scary AI" that included systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he continued. "I think these are all scary. These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important."
YouTube's recommendation algorithm continues to direct young video game fans down dark paths of violent and dangerous content, a report has found, years after critics first raised concerns about the system. A report from the Tech Transparency Project (TTP), a Washington DC-based nonprofit, observed the effects of the video sharing site's recommendation algorithms on a spread of accounts identified as those of boys aged nine and 14. Researchers created four fake "user accounts" – two identified as nine-year-old boys and two identified as 14-year-old boys – and used them to watch exclusively gaming-related videos, albeit not always strictly age-appropriate ones, in an attempt to come up with an accurate cross-section of what a real child and teenager would be looking at. For the nine-year-old, that included videos for games such as Roblox and Lego Star Wars, but also the horror game Five Nights at Freddy's, set in a parody of the Chuck E Cheese restaurant chain. For the 14-year-old, the playlist "consisted primarily of videos of first-person shooter games like Grand Theft Auto, Halo and Red Dead Redemption".