How's AI self-regulation going?
But AI nerds may remember that exactly a year ago, on July 21, 2023, Biden was posing with seven top tech executives at the White House. He had just negotiated a deal in which they agreed to eight voluntary commitments, the most prescriptive rules targeted at the AI sector at that time. A lot can change in a year! The voluntary commitments were hailed as much-needed guidance for an AI sector that was building powerful technology with few guardrails. Since then, eight more companies have signed the commitments, and the White House has issued an executive order that expands on them, for example by requiring developers to share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security.
From tort law to cheating, what is ChatGPT's future in higher education?
Berkeley experts in artificial intelligence are studying how tools like ChatGPT will transform everything from admissions screening and research to writing college essays. ChatGPT passed the bar exam, first with a mediocre score and then with a ranking among the top tier of newly minted lawyers. It scored better than 90% of SAT takers. It nearly aced the verbal section of the GRE, though it has room for improvement in AP Composition. In the months since the machine-learning interface ChatGPT debuted, hundreds of headlines and hot takes have swirled about how artificial intelligence will overhaul everything from health care and business to legal affairs and shopping.
- Law (1.00)
- Education > Educational Setting > Higher Education (0.85)
ChatGPT is suddenly everywhere. Are we ready?
For a product that its own creators, in a fit of marketing pique, once declared "too dangerous" to release to the general public, OpenAI's ChatGPT seems to be everywhere these days. The versatile automated text generation (ATG) system, capable of outputting copy that is nearly indistinguishable from a human writer's work, is officially still in beta but has already been put to dozens of novel uses, some of which extend far beyond the roles ChatGPT was originally intended for, like the time it simulated an operational Linux shell or the time it passed the entrance exam to Wharton Business School. The hype around ChatGPT is understandably high, with myriad startups looking to license the technology for everything from conversing with historical figures to talking with historical literature, from learning other languages to generating exercise routines and restaurant reviews. But these technical advancements come with a slew of opportunities for misuse and outright harm. And if our previous ham-fisted attempts at handling the spread of deepfake video and audio technologies are any indication, we're dangerously underprepared for the havoc that at-scale, automated disinformation production will wreak upon our society.
- North America > United States > Pennsylvania (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > Orange County > Irvine (0.04)
- Media (1.00)
- Information Technology > Security & Privacy (1.00)
- Education (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.52)
How social media recommendation algorithms help spread hate
Last week, the United States Senate played host to a number of social media company VPs during hearings on the potential dangers presented by algorithmic bias and amplification. While that meeting almost immediately broke down into a partisan circus of grandstanding and grievance-airing, Democratic senators did manage to focus a bit on how these recommendation algorithms might contribute to the spread of online misinformation and extremist ideologies. The issues and pitfalls presented by social algorithms are well known and have been well documented. So, really, what are we going to do about it? "So I think in order to answer that question, there's something critical that needs to happen: we need more independent researchers being able to analyze platforms and their behavior," Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget. Social media companies "know that they need to be more transparent in what's happening on their platforms, but I'm of the firm belief that, in order for that transparency to be genuine, there needs to be collaboration between the platforms and independent peer reviewed, empirical research."
- Media (1.00)
- Law Enforcement & Public Safety (1.00)
- Law (1.00)
- (2 more...)
The EU's proposed AI laws would regulate robot surgeons but not the military
While US lawmakers muddle through yet another congressional hearing on the dangers posed by algorithmic bias in social media, the European Commission (essentially the executive branch of the EU) has unveiled a sweeping regulatory framework that, if adopted, could have global implications for the future of AI development. After extensive meetings with advocacy groups and other stakeholders, the EC released both the first European Strategy on AI and Coordinated Plan on AI in 2018. Those were followed in 2019 by the Guidelines for Trustworthy AI, then in 2020 by the Commission's White Paper on AI and Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. Just as with its ambitious General Data Protection Regulation (GDPR) in 2018, the Commission is seeking to establish a basic level of public trust in the technology, grounded in stringent user and data privacy protections as well as safeguards against its potential misuse. "Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights," the Commission wrote in its draft regulations.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.89)
How State Politics Is Playing a Huge Role in Artificial Intelligence
New York Gov. Andrew Cuomo signed legislation in late July to create a temporary state commission that will examine how artificial intelligence impacts his state. In doing so, New York joined Vermont, Alabama, and Washington in establishing an A.I. task force that will examine the cutting-edge technology and then make recommendations about how it should be regulated. The groups vary in their missions, but the general message is the same: the companies pushing A.I., the brains behind innovations like robotics and facial recognition software, can't necessarily be trusted to do what's in the best interest of state residents. Brandie Nonnecke, founding director of the University of California's Center for Information Technology Research in the Interest of Society (CITRIS) Policy Lab, says that task forces could help keep state lawmakers up to date about the technology. The end result, she says, will be better-written bills that don't get stuck in legislative purgatory.
- North America > United States > New York (0.46)
- North America > United States > Vermont (0.25)
- Asia > China > Guangdong Province (0.15)
- (4 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.97)
- Government > Regional Government > North America Government > United States Government (0.70)