Gizmodo is 20 years old! To celebrate the anniversary, we're looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11, I felt spiritually and existentially lost. It's hard to believe now, but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber, and I haven't set foot in a church since, aside from the occasional wedding or baptism. I didn't realize it at the time, but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didn't survive this mental reboot and return to form, but it did lead me to some very positive places, resulting in my adoption of secular Buddhism, meditation, and a decade-long stint with vegetarianism.
This article was published in The Stream (July 6, 2022) under the title "The Church of Artificial Intelligence of the Future" and is republished with permission. There is a church that worships artificial intelligence (AI). Zealots believe that an extraordinary AI future is inevitable. The technology is not here yet, but we are assured that it's coming. We will have the ability to be uploaded onto a computer and thereby achieve immortality. You will be reborn into a new, immortal silicon body.
But what happens when artificial intelligence is biased? What if it makes mistakes on important decisions -- from who gets a job interview or a mortgage to who gets arrested and how much time they ultimately serve for a crime? "These everyday decisions can greatly affect the trajectories of our lives and increasingly, they're being made not by people, but by machines," said UC Davis computer science professor Ian Davidson. A growing body of research, including Davidson's, indicates that bias in artificial intelligence can lead to biased outcomes, especially for minority populations and women. Facial recognition technologies, for example, have come under increasing scrutiny because they've been shown to detect white faces more accurately than the faces of people with darker skin.
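One way researchers surface the kind of disparity described above is a per-group accuracy audit: score the same model's predictions separately for each demographic group and compare. The sketch below is a minimal illustration with synthetic data; the group labels, numbers, and helper name are all invented for the example, not drawn from any real study.

```python
# Minimal sketch of a per-group accuracy audit for a classifier,
# using synthetic data (all names and numbers are illustrative).
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: fraction of correct predictions}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Synthetic face-detection results: the detector either finds ("face")
# or misses ("none") a face that is actually present in every image.
results = [
    ("group_a", "face", "face"), ("group_a", "face", "face"),
    ("group_a", "face", "face"), ("group_a", "none", "face"),
    ("group_b", "face", "face"), ("group_b", "none", "face"),
    ("group_b", "none", "face"), ("group_b", "none", "face"),
]
rates = accuracy_by_group(results)  # e.g. {"group_a": 0.75, "group_b": 0.25}
```

A gap like the one in this toy output (0.75 vs. 0.25) is exactly the pattern the facial-recognition studies report, which is why auditing accuracy per group rather than in aggregate matters.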
Who are the inventors of patents? Since George Washington signed the first patent in 1790, the United States has issued patents to people of various ages, ethnicities, and genders, with some patent inventors being as young as two when they filed. The varied backgrounds of these inventors stem from the United States Patent and Trademark Office's ("USPTO") broad definition of an inventor, which provides that "inventor" means "the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention." But what happens when the inventor is a machine? This is the exact issue Dr. Stephen Thaler sought to resolve with the USPTO as well as other worldwide patent offices.
When discussing Artificial Intelligence (AI), a common debate is whether AI poses an existential threat. Answering that question requires understanding the technology behind Machine Learning (ML) and recognizing the human tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI. To understand ANI, you need only recognize that every AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty: autonomous vehicles, for example, use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess; even if the chess program continuously improves itself through reinforcement learning, it will never be able to operate an autonomous vehicle.
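The narrowness described above is easy to see in code. Below is a minimal sketch of tabular Q-learning, the kind of reinforcement learning a game-playing program might use to improve itself, applied to a toy "corridor" game rather than chess (all parameters and the game itself are illustrative). The point is that everything the program learns lives in a lookup table keyed to this one game's states; the table is meaningless for any other task, which is exactly why an ANI system cannot transfer its skill elsewhere.

```python
import random

random.seed(0)

# Toy game: walk a 1-D corridor of 5 states from state 0 to the goal at 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                   # self-improvement loop: play episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned policy: best action per state -- meaningful ONLY in this game.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

After training, the policy moves right from every non-goal state. No amount of extra training on this game produces knowledge about driving, or chess, or anything outside the corridor: the agent's entire "intelligence" is a table indexed by this game's states.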
Over the last several years, legal scholars have engaged in an ongoing discussion and debate over a potential Legal Singularity that might someday occur: a law-domain offshoot of the Artificial Intelligence (AI) realm's many decades of deliberation about an overarching, generalized technological singularity (referred to classically as The Singularity). This paper examines the postulated Legal Singularity and proffers that such AI and Law cogitations can be enriched by the three facets addressed herein: (1) dovetailing additionally salient considerations of The Singularity into the Legal Singularity, (2) making use of an in-depth and innovative multidimensional parametric analysis of the Legal Singularity as posited in this paper, and (3) aligning and unifying the Legal Singularity with the Levels of Autonomy (LoA) associated with AI Legal Reasoning (AILR) as propounded in this paper.
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
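The technique above — constraining the model by writing the opening words of the answer yourself — is just string construction before the API call. Here is a minimal sketch; only the quoted "My second grader…" fragment comes from the passage above, while the helper name, the surrounding framing text, and the default prefix are my own illustrative choices.

```python
# Sketch of "constraining by imitation": end the prompt with the first
# words of the desired output so the model continues in that mode.
# Framing text beyond the quoted fragment is illustrative, not canonical.
def build_summary_prompt(passage, target_prefix="It means that"):
    return (
        'My second grader asked me what this passage means:\n\n'
        f'"""{passage}"""\n\n'
        'I rephrased it for him, in plain language a second grader '
        f'can understand:\n\n"""{target_prefix}'
    )

prompt = build_summary_prompt(
    "The mitochondria is the powerhouse of the cell."
)
# The prompt deliberately ends mid-answer ('...It means that'), so any
# completion the model produces is forced to continue the summary.
```

Because the prompt ends mid-sentence inside an open quotation, the model's most plausible continuation is the rest of the plain-language explanation rather than a pivot into some other completion mode.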
America's intelligence collectors are already using AI in ways big and small: to scan the news for dangerous developments, send alerts to ships about rapidly changing conditions, and speed up the NSA's regulatory compliance efforts. But before the IC can use AI to its full potential, it must be hardened against attack, and the humans who use it -- analysts, policymakers, and leaders -- must better understand how advanced AI systems reach their conclusions. Dean Souleles is working to put AI into practice at different points across the U.S. intelligence community, in line with the ODNI's year-old strategy. The chief technology advisor to the principal deputy to the Director of National Intelligence wasn't allowed to discuss everything that he's doing, but he could talk about a few examples.