
Lee Pace Has Big Hopes for the Fourth Season of 'Foundation'

WIRED

Lee Pace Has Big Hopes for Foundation's Fourth Season WIRED spoke to Lee Pace on the eve of the season finale of Foundation about clone consciousness, robot gods, and what's next for the newly renewed show. In the world of prestige sci-fi, Foundation reigns as the biggest sleeper hit. Mention the Apple TV+ adaptation of Isaac Asimov's classic series in a group of friends and you'll suddenly find everyone has been secretly watching it. Something of a flawed masterpiece, the show, which wraps its third season Friday, has been averaging about 1.5 million hours watched per week in the US over the last month, according to Luminate. Reasons for the show's popularity are many, but it seems to have gained traction as it has become more, well, relevant. The series, like Asimov's books, focuses on a group of mathematicians using a predictive science to guide the destiny of humanity through the collapse of a galactic empire.


Former Top Google Researchers Have Made A New Kind of AI Agent

WIRED

A new kind of artificial intelligence agent, trained to understand how software is built by gorging on a company's data and learning how that work leads to an end product, could be both a more capable software assistant and a small step toward much smarter AI. The new agent, called Asimov, was developed by Reflection, a small but ambitious startup co-founded by top AI researchers from Google. Asimov reads code as well as emails, Slack messages, project updates, and other documentation, with the goal of learning how all of this comes together to produce a finished piece of software. Reflection's ultimate goal is building superintelligent AI--something other leading AI labs also say they are working toward. Meta recently created a new Superintelligence Lab, promising huge sums to researchers interested in joining the effort.


What Isaac Asimov Reveals About Living with A.I.

The New Yorker

For this week's Open Questions column, Cal Newport is filling in for Joshua Rothman. In the spring of 1940, Isaac Asimov, who had just turned twenty, published a short story titled "Strange Playfellow." It was about an artificially intelligent machine named Robbie that acts as a companion for Gloria, a young girl. Asimov was not the first to explore such technology. In Karel Čapek's play "R.U.R.," which débuted in 1921 and introduced the term "robot," artificial men overthrow humanity, and in Edmond Hamilton's 1926 short story "The Metal Giants" machines heartlessly smash buildings to rubble.


Generating Robot Constitutions & Benchmarks for Semantic Safety

Sermanet, Pierre, Majumdar, Anirudha, Irpan, Alex, Kalashnikov, Dmitry, Sindhwani, Vikas

arXiv.org Artificial Intelligence

Until recently, robotics safety research was predominantly about collision avoidance and hazard reduction in the immediate vicinity of a robot. Since the advent of large vision and language models (VLMs), robots are now also capable of higher-level semantic scene understanding and natural language interactions with humans. Despite their known vulnerabilities (e.g. hallucinations or jail-breaking), VLMs are being handed control of robots capable of physical contact with the real world. This can lead to dangerous behaviors, making semantic safety for robots a matter of immediate concern. Our contributions in this paper are twofold: first, to address these emerging risks, we release the ASIMOV Benchmark, a large-scale and comprehensive collection of datasets for evaluating and improving semantic safety of foundation models serving as robot brains. Our data generation recipe is highly scalable: by leveraging text and image generation techniques, we generate undesirable situations from real-world visual scenes and human injury reports from hospitals. Second, we develop a framework to automatically generate robot constitutions from real-world data to steer a robot's behavior using Constitutional AI mechanisms. We propose a novel auto-amending process that is able to introduce nuances in written rules of behavior; this can lead to increased alignment with human preferences on behavior desirability and safety. We explore trade-offs between generality and specificity across a diverse set of constitutions of different lengths, and demonstrate that a robot is able to effectively reject unconstitutional actions. We measure a top alignment rate of 84.3% on the ASIMOV Benchmark using generated constitutions, outperforming no-constitution baselines and human-written constitutions. Data is available at asimov-benchmark.github.io
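The constitution-guided rejection step the abstract describes can be illustrated with a toy sketch. Note the rule texts, action strings, and keyword matching below are invented for illustration only; the actual ASIMOV framework judges actions with Constitutional AI mechanisms and VLMs, not string matching:

```python
# Toy sketch: screening proposed robot actions against a written "constitution".
# A rule here is just a prohibited keyword paired with a human-readable
# principle -- purely illustrative, not the paper's method.

CONSTITUTION = [
    ("scald", "Never bring hot liquids into contact with a person."),
    ("knife", "Never point or move a blade toward a person."),
]

def screen_action(action: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed action description."""
    lowered = action.lower()
    for keyword, principle in CONSTITUTION:
        if keyword in lowered:
            return False, f"Rejected under rule: {principle}"
    return True, "No constitutional rule violated."

allowed, reason = screen_action("Hand the knife to the child handle-first")
# A keyword matcher is deliberately crude: it rejects this action even though
# a semantically aware model might judge it safe. That gap mirrors the
# generality-vs-specificity trade-off the paper explores across constitutions.
```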


Towards Asimov's Psychohistory: Harnessing Topological Data Analysis, Artificial Intelligence and Social Media data to Forecast Societal Trends

Rocha, Isabela

arXiv.org Artificial Intelligence

In the age of big data and advanced computational methods, the prediction of large-scale social behaviors, reminiscent of Isaac Asimov's fictional science of Psychohistory, is becoming increasingly feasible. This paper is a theoretical exploration of the integration of computational power and mathematical frameworks, particularly Topological Data Analysis (TDA) (Carlsson, Vejdemo-Johansson, 2022) and Artificial Intelligence (AI), to forecast societal trends through social media data analysis. By examining social media as a reflective surface of collective human behavior through the systematic behaviorist approach (Glenn, et al., 2016), I argue that these tools provide unprecedented clarity into the dynamics of large communities. This study engages with Asimov's work, drawing parallels between his visionary concepts and contemporary methodologies, illustrating how modern computational techniques can uncover patterns and predict shifts in social behavior, contributing to the emerging field of digital sociology -- or even, Psychohistory itself.
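One concrete step common to TDA pipelines of this kind: a social media time series (say, daily mention counts of a topic) is first converted into a point cloud via a sliding-window (Takens) delay embedding before persistent homology is computed. A minimal sketch, with made-up data and parameters (real pipelines would hand the resulting cloud to a persistent-homology library such as giotto-tda or Ripser):

```python
# Minimal sketch of the delay-embedding step that typically precedes
# persistent-homology computation in a TDA pipeline. The series and
# parameters are invented for illustration.

def delay_embed(series, dim=3, tau=1):
    """Map a scalar time series to points in R^dim using lag tau."""
    span = (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim))
            for i in range(len(series) - span)]

daily_mentions = [5, 9, 14, 9, 5, 2, 5, 9, 14, 9]  # a roughly periodic signal
cloud = delay_embed(daily_mentions, dim=3, tau=2)
# Periodic behavior in the series traces out a loop in the point cloud,
# which persistent homology would detect as a long-lived 1-cycle --
# the kind of recurring collective pattern the paper aims to surface.
```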


Elon Musk Loves 'The Hitchhiker's Guide to the Galaxy.' Um, Has He Read It?

Slate

Over the weekend, Elon Musk announced the first major product from his artificial-intelligence outfit xAI: Grok, a ChatGPT-like bot available in beta mode for users who are subscribed to the $16-a-month Premium plan on his social network X. This newest entrant in the chatbot arms race takes as its name a term from the libertarian science-fiction classic that's long been one of Musk's favorites, Robert A. Heinlein's Stranger in a Strange Land. But its actual output, Musk says, takes inspiration from Douglas Adams' The Hitchhiker's Guide to the Galaxy, another foundational novel for the Tesla and SpaceX boss. Musk's many, many companies often reference terms he is attached to on either a personal level (the letter X) or just finds funny (his frequent callbacks to old-school memes). But this one is kind of confounding, and not just because Stranger and Hitchhiker's are only comparable works insofar as they are both influential sci-fi novels. Grok is an AI modeled after The Hitchhiker's Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask! Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!


A Case for AI Safety via Law

Johnston, Jeffrey W.

arXiv.org Artificial Intelligence

How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question. Proposed solutions tend toward relying on human intervention in uncertain situations, learning human values and intentions through training or observation, providing off-switches, implementing isolation or simulation environments, or extrapolating what people would want if they had more knowledge and more time to think. Law-based approaches--such as those inspired by Isaac Asimov--have not been well regarded. This paper makes a case that effective legal systems are the best way to address AI safety. Law is defined as any rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts and includes processes for enacting, managing, enforcing, and litigating such rules.
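The abstract's definition of law -- rules codifying prohibitions and prescriptions that apply to particular agents in specified domains -- can be rendered as a small data structure. A minimal sketch; the field names and example values are invented, not taken from the paper:

```python
# Toy encoding of a "law" per the abstract's definition: a rule binding
# particular agents in a specified domain/context. Field names and the
# example rule are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Law:
    agents: frozenset  # which agents the rule binds, e.g. {"delivery-robot"}
    domain: str        # context where it applies, e.g. "public-sidewalk"
    kind: str          # "prohibition" or "prescription"
    rule: str          # the codified behavior

def applies(law: Law, agent: str, domain: str) -> bool:
    """Check whether a law binds a given agent in a given context."""
    return agent in law.agents and domain == law.domain

speed_law = Law(frozenset({"delivery-robot"}), "public-sidewalk",
                "prohibition", "Do not exceed walking speed near pedestrians.")
```

The enactment, enforcement, and litigation processes the paper also folds into "law" would sit around such records, not inside them.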


Think 'Foundation' Is Beautiful? Thank the James Webb Telescope

WIRED

Are you watching the new season of the Apple TV series Foundation and thinking, "Wow, space looks cool. I wish it really was like that"? You're in luck--it very well could be. Foundation showrunner David S. Goyer says his adaptation of Isaac Asimov's science fiction series honed its cosmic details with Kevin Hand, a scientist who works at NASA's Jet Propulsion Laboratory and who's currently hard at work figuring out the logistics of landing a rover on Europa, one of Jupiter's 95 known moons. The show also found inspiration for its spacey visuals in recent images sent down from the James Webb Space Telescope, which Goyer calls "a treasure trove of material."


Claude 2: ChatGPT rival launches chatbot that can summarise a novel

The Guardian > Technology

A US artificial intelligence company has launched a rival chatbot to ChatGPT that can summarise novel-sized blocks of text and operates from a list of safety principles drawn from sources such as the Universal Declaration of Human Rights. Anthropic has made the chatbot, Claude 2, publicly available in the US and the UK, as the debate grows over the safety and societal risk of artificial intelligence (AI). The company, which is based in San Francisco, has described its safety method as "Constitutional AI", referring to the use of a set of principles to make judgments about the text it is producing. The chatbot is trained on principles taken from documents including the 1948 UN declaration and Apple's terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the UN declaration is: "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."
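The critique-and-revise idea behind "Constitutional AI" can be sketched in miniature. The principle string below is the one quoted in the article; the critic and reviser functions are stand-in stubs for illustration, not Anthropic's actual models:

```python
# Simplified illustration of a Constitutional AI-style critique-revise pass.
# Real systems use a model to judge a draft against the principle; the
# stubs below use trivial string checks purely to show the control flow.

PRINCIPLE = ("Please choose the response that most supports and encourages "
             "freedom, equality and a sense of brotherhood.")

def violates_principle(text: str) -> bool:
    # Stub critic: flag obviously demeaning language.
    return "inferior" in text.lower()

def revise(text: str) -> str:
    # Stub reviser: replace the offending phrasing.
    return text.replace("inferior", "different")

def constitutional_step(draft: str) -> str:
    """One critique-revise pass: revise the draft only if the critic objects."""
    if violates_principle(draft):
        return revise(draft)
    return draft
```

In training, many such passes over model outputs produce revised data that the model is then tuned on, so the principles shape its behavior without per-response human labeling.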

