'The Creator' Review: It's AI That Wants to Save Humanity

WIRED

There's a bill in the US Congress right now to stop AI from gaining control of nuclear weapons, and roughly a dozen militaries around the world are investigating the possibilities of autonomous weaponry. That's why watching The Creator, a movie set roughly 40 years from now, feels surreal, jarring, and oddly welcome. From Metropolis to Terminator, sci-fi has taught us to fear the AI revolt. This one instead wonders what would happen if AI became so empathetic toward humanity that it wanted to save people from themselves. In writer-director Gareth Edwards' latest, war has laid waste to both humans and robots.


The NSA has a new security center specifically for guarding against AI

Engadget

The National Security Agency (NSA) is starting a dedicated artificial intelligence security center, as reported by the AP. The move comes as the government increasingly relies on AI, integrating multiple algorithms into defense and intelligence systems. The security center will work to protect these systems from theft and sabotage, in addition to safeguarding the country from external AI-based threats. The move was announced Thursday by outgoing NSA director General Paul Nakasone, who says the division will operate under the umbrella of the existing Cybersecurity Collaboration Center.


Netanyahu warns of potential 'eruption of AI-driven wars' that could lead to 'unimaginable' consequences

FOX News

Addressing the United Nations General Assembly in New York last week, Israeli Prime Minister Benjamin Netanyahu warned that the world is on the cusp of an artificial intelligence revolution that could launch nations into prosperous times or lead to all-out destruction fueled by devastating high-tech wars. "The AI revolution is progressing at lightning speed," Netanyahu said in his speech. "It took centuries for humanity to adapt to the agricultural revolution. It took decades to adapt to the industrial revolution. We may have but a few years to adapt to the AI revolution."


New AI poll reveals elites are way out of step with the rest of us

FOX News

Kara Frederick, tech director at the Heritage Foundation, discusses the need for regulations on artificial intelligence as lawmakers and tech titans debate the potential risks. Artificial intelligence (AI) is shaping up to be yet another public policy issue on which elites are out of step with the public. In Silicon Valley, Congress and the Biden administration, leaders are buzzing about AI. For those in the Valley, killer robots with lasers for eyes dominate the conversation. The Beltway, in contrast, is spending a lot of political capital on bias in algorithms. And yet the general public isn't primarily worried either about machines gaining control or about algorithms being biased.


These new tools could make AI vision systems less biased

MIT Technology Review

Traditionally, skin-tone bias in computer vision is measured using the Fitzpatrick scale, which grades skin tones from light to dark. The scale was originally developed to measure the tanning of white skin but has since been widely adopted as a tool to determine ethnicity, says William Thong, an AI ethics researcher at Sony. It is used to measure bias in computer vision systems by, for example, comparing how accurate AI models are for people with light and dark skin. But describing people's skin with a one-dimensional scale is misleading, says Alice Xiang, the global head of AI ethics at Sony. By classifying people into groups on this coarse scale, researchers miss biases that affect, for example, Asian people, who are underrepresented in Western AI data sets and can fall into both light-skinned and dark-skinned categories.
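The per-group accuracy comparison Thong describes is straightforward to express in code. Here is a minimal sketch of such an audit; the function name, group labels, and toy data are hypothetical illustrations, not from the article or Sony's tools:

```python
# Minimal sketch: measuring skin-tone bias by comparing a model's
# accuracy across annotated groups. All names and data are hypothetical.
from collections import defaultdict

def accuracy_by_group(groups, predictions, labels):
    """Return the model's accuracy for each skin-tone group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in zip(groups, predictions, labels):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy data: a one-dimensional light/dark annotation.
groups      = ["light", "light", "dark", "dark", "dark"]
predictions = [1, 0, 1, 0, 0]
labels      = [1, 0, 1, 1, 1]
print(accuracy_by_group(groups, predictions, labels))
# {'light': 1.0, 'dark': 0.333...}
```

Xiang's objection maps directly onto the `groups` annotation: a single light-to-dark axis can average away disparities that a second dimension, such as hue, would expose.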


An inside look at Congress's first AI regulation forum

MIT Technology Review

The AI Insight Forums were announced a few months ago by Senate Majority Leader Chuck Schumer as part of his "SAFE Innovation" initiative, which is really a set of principles for AI legislation in the United States. The invite list was heavily skewed toward Big Tech execs, including CEOs of AI companies, though a few civil society and AI ethics researchers were included too. Coverage of the meeting thus far has put a particular emphasis on the reportedly unanimous agreement about the need for AI regulation, and on issues raised by Elon Musk and others about the "civilizational risks" created by AI. (This tracker from Tech Policy Press is pretty handy if you want to know more.) But to really dig below the surface, I caught up with one of the other attendees, Inioluwa Deborah Raji, who gave me an inside look at how the first meeting went, the pernicious myths she needed to debunk, and where disagreements could be felt in the room. Raji is a researcher at the University of California, Berkeley, and a fellow at Mozilla.


No 10 worried AI could be used to create advanced weapons that escape human control

The Guardian

Concerns that criminals or terrorists could use artificial intelligence to cause mass destruction will dominate discussion at a summit of world leaders, as anxiety grows in Downing Street about the power of the next generation of technological advances. British officials are touring the world ahead of an AI safety summit at Bletchley Park in November as they look to build consensus over a joint statement that would warn about the dangers of rogue actors using the technology to cause death on a large scale. Some of those around the prime minister, Rishi Sunak, worry the technology will soon be powerful enough to help individuals create bioweapons or to evade human control altogether. Officials have become increasingly concerned about such possibilities, and about the need for regulation to mitigate them, after recent discussions with senior technology executives. Last week, the scientist behind a landmark letter calling for a pause in developing powerful AI systems said tech executives privately agreed with the concept of a hiatus but felt they were locked into an AI arms race with rivals.


AI bias might not be a threat and here's why

FOX News

OpenAI, developer of the ChatGPT language model, is the best-funded and largest AI platform company, with over $10 billion in funding at a valuation of nearly $30 billion. Microsoft uses OpenAI's technology, but Google, Meta, Apple and Amazon have their own AI platforms, and there are hundreds of other AI startups in Silicon Valley. Will industry forces drive one of these to become a monopoly? When Google started its search business, there were already a dozen search platforms such as Yahoo, AltaVista, Excite and InfoSeek. Many observers asked whether we needed Google as yet another search engine.


Your right to be forgotten in the age of AI

AIHub

Earlier this year, ChatGPT was briefly banned in Italy over a suspected privacy breach. To help overturn the ban, the chatbot's parent company, OpenAI, committed to providing a way for citizens to object to the use of their personal data to train artificial intelligence (AI) models. The right to be forgotten (RTBF) plays an important role in the online privacy law of some countries, giving individuals the right to ask technology companies to delete their personal data. It was established via a landmark 2014 case in the European Union (EU) involving search engines.


How the U.N. Plans to Shape the Future of AI

TIME - Tech

As the United Nations General Assembly gathered this week in New York, the U.N. Secretary-General's envoy on technology, Amandeep Gill, hosted an event titled Governing AI for Humanity, where participants discussed the risks that AI might pose and the challenges of achieving international cooperation on artificial intelligence. Secretary-General António Guterres and Gill have said they believe that a new U.N. agency will be required to help the world cooperate in managing this powerful technology. But the issues that the new entity would seek to address and its structure are yet to be determined, and some observers say that ambitious plans for global cooperation like this rarely get the required support of powerful nations. Gill has led efforts to make advanced forms of technology safer before. He was chair of the Group of Governmental Experts of the Convention on Certain Conventional Weapons when the Campaign to Stop Killer Robots, which sought to compel governments to outlaw the development of lethal autonomous weapons systems, failed to gain traction with global superpowers including the U.S. and Russia.