

What Is Artificial Intelligence (AI)?

#artificialintelligence

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.


Harness the Power of AI Automation to Optimize Storytelling

#artificialintelligence

Many agencies are still in the early stages of collecting data and learning how to use it to drive value through AI automation. Yet a 2018 study found that 49 percent of those surveyed agreed artificial intelligence (AI) and automation will change the way we work, and 31 percent believed they'd already seen the benefits. But organizations are also dealing with the fear that AI and automation could cause job losses across many sectors. Technology advances will undoubtedly alter current roles, as well as create new ones.


How to prepare students for the rise of artificial intelligence in the workforce

#artificialintelligence

The future impacts of artificial intelligence (AI) on society and the labour force have been studied and reported extensively. In a recent book, AI Superpowers, Kai-Fu Lee, former president of Google China, wrote that it will be technically and economically viable to automate 40 to 50 per cent of current jobs with AI and automation over the next 15 years. Artificial intelligence refers to computer systems that collect, interpret and learn from external data to achieve specific goals and tasks. Unlike the natural intelligence displayed by humans and animals, it is an artificial form of intelligence demonstrated by machines. This has raised questions about the ethics of AI decision-making and the impacts of AI in the workplace.
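
To make that definition concrete, here is a minimal sketch, assuming nothing from the article itself, of a system that collects external data, interprets it, and learns from it to achieve a goal; all data and numbers below are synthetic and purely illustrative.

```python
# Minimal sketch of "learning from external data": a one-variable
# least-squares model whose predictions improve as data accumulates.
# The data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# "External data": noisy observations of a hidden linear relationship.
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=100)

# "Learning": estimate slope and intercept from the observations.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# "Achieving a goal": predict an unseen case from what was learned.
print(f"learned model: y ~ {slope:.2f}*x + {intercept:.2f}")
print(f"prediction at x=5: {slope * 5 + intercept:.2f}")
```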


Are we risking a planetary AI intelligence explosion?

#artificialintelligence

In 2015, a Harvard artificial intelligence (AI) and statistics researcher offered a grim vision of the AI world to come. It was recently recirculated by the Boston-based Future of Life Institute, which is "working to mitigate existential risks facing humanity." Viktoriya Krakovna begins by quoting an early authority, computer scientist I. J. Good, who said in 1965, "An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind." Krakovna argues that even if that doesn't happen, existential risk looms: the incentives are to continue improving AI systems until they hit physical limits on intelligence, and those limits (if they exist at all) are likely to be above human intelligence in many respects. Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, independent of their specific objectives or design.
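
Good's argument has a simple geometric structure, which a toy calculation (my own illustration, not Krakovna's) makes explicit: if each generation of system designs a successor a constant factor more capable, capability overshoots the human level quickly and stops only at whatever physical cap exists.

```python
# Toy model of Good's recursive-improvement argument (illustration only,
# not from Krakovna's post): each system designs a successor that is a
# constant factor more capable, until a physical ceiling is reached.
HUMAN_LEVEL = 1.0      # capability normalized to human level
IMPROVEMENT = 1.5      # hypothetical per-generation gain factor
PHYSICAL_CAP = 1000.0  # hypothetical physical limit on intelligence

capability = HUMAN_LEVEL
generation = 0
while capability < PHYSICAL_CAP:
    capability = min(capability * IMPROVEMENT, PHYSICAL_CAP)
    generation += 1
    print(f"generation {generation}: {capability:.1f}x human level")

# With these invented numbers the cap is reached in 18 generations;
# the point is only that geometric growth overshoots human level fast.
```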


How Wearable AI Will Amplify Human Intelligence

#artificialintelligence

Imagine that your team is meeting to decide whether to continue an expensive marketing campaign. After a few minutes, it becomes clear that nobody has the metrics on hand to make the decision. You chime in with a solution and ask Amazon's virtual assistant Alexa to back you up with information: "Alexa, how many users did we convert to customers last month with Campaign A?" and Alexa responds with the answer. You just amplified your team's intelligence with AI. But this is just the tip of the iceberg.
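
The scenario maps onto a custom Alexa skill. Below is a minimal sketch using the Alexa Skills Kit SDK for Python; the intent name `CampaignMetricsIntent`, the `campaign` slot, and the `lookup_conversions` backend are hypothetical stand-ins, not anything from the article.

```python
# Minimal sketch of an Alexa custom-skill handler that answers a
# metrics question. The intent name, slot name, and lookup_conversions()
# backend are hypothetical; the ASK SDK usage follows the standard pattern.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

def lookup_conversions(campaign: str) -> int:
    """Placeholder for a real query against the team's analytics store."""
    return {"campaign a": 1240}.get(campaign.lower(), 0)

class CampaignMetricsHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("CampaignMetricsIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        campaign = slots["campaign"].value or "Campaign A"
        count = lookup_conversions(campaign)
        speech = f"{campaign} converted {count} users to customers last month."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(CampaignMetricsHandler())
handler = sb.lambda_handler()  # entry point when deployed on AWS Lambda
```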


Ethics, Digitisation and Machine Intelligence

#artificialintelligence

The wheel of technological innovation keeps spinning, and it is picking up speed. Recently, the digitisation discourse has been conducted under a new, yet old, term: artificial intelligence. The discourse is often marked by fears: AI seems to cause ethical problems, to be a high-risk technology. The starting point for these concerns is usually autonomous weapons, the morally unsolvable dilemmas of autonomous driving, or the (in the opinion of very few) imminent arrival of conscious AI systems with a supposedly inevitable urge to take control of the world. On the other hand, there is a very optimistic discourse that emphasizes the opportunities: prosperity, so one argument goes, can only be secured by keeping pace with the technology economically.


The Comedian Is in the Machine. AI Is Now Learning Puns

#artificialintelligence

Here's a groaner for you: The greyhound stopped to get a hare cut. A pun generator might not sound like serious work for an artificial intelligence researcher; it seems more like the sort of thing knocked out over a weekend to delight the labmates come Monday. But for He He, who designed just that during her postdoc at Stanford, it's an entry point to a devilish problem in machine learning. He's aim is to build AI that's natural and fun to talk to: bots that don't just read us the news or tell us the weather, but can crack jokes or compose a poem, even tell a compelling story. But getting there, she says, runs up against the limits of how AI typically learns.
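
He's published approach is a learned model, but the basic trick behind a pun like "hare cut" can be sketched as homophone substitution: take a stock phrase and swap in a sound-alike word. The homophone table and templates below are hand-made toy data, not her system.

```python
# Naive homophone-substitution pun generator (an illustration of the
# basic idea only, not He He's actual learned model). The templates
# and homophone table are hand-made toy data.
HOMOPHONES = {
    "hair": "hare",   # topic: rabbits / greyhound racing
    "time": "thyme",  # topic: cooking
}

TEMPLATES = [
    "The greyhound stopped to get a {hair} cut.",
    "The chef always arrives just in {time}.",
]

def punify(template: str) -> str:
    # Replace each placeholder with its sound-alike, yielding the pun.
    return template.format(**HOMOPHONES)

for t in TEMPLATES:
    print(punify(t))
# -> The greyhound stopped to get a hare cut.
# -> The chef always arrives just in thyme.
```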


Will Artificial Intelligence Enhance or Hack Humanity?

#artificialintelligence

This week, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence. The event was hosted by the Stanford Center for Ethics and Society, the Stanford Institute for Human-Centered Artificial Intelligence, and the Stanford Humanities Center. A transcript of the event follows, and a video is posted below. Nicholas Thompson: Thank you, Stanford, for inviting us all here. I want this conversation to have three parts: First, lay out where we are; then talk about some of the choices we have to make now; and last, talk about some advice for all the wonderful people in the hall. Yuval, the last time we talked, you said many, many brilliant things, but one that stuck out was a line where you said, "We are not just in a technological crisis. We are in a philosophical crisis." So explain what you meant and explain how it ties to AI. Let's get going with a note of ...


Ethics of Artificial Intelligence Demarcations

arXiv.org Artificial Intelligence

In this paper we present a set of key demarcations that are particularly important when discussing the ethical and societal issues of current AI research and applications. Properly distinguishing between Artificial General Intelligence and weak AI, between symbolic and connectionist AI, and between AI methods, data, and applications is a prerequisite for an informed debate. Such demarcations would not only facilitate much-needed discussion of the ethics of current AI technologies and research; sufficiently establishing them would also enhance knowledge-sharing and support rigor in interdisciplinary research between the technical and social sciences.
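
One way to see how such demarcations support an informed debate is to make them explicit when tagging a concern. The sketch below is my own toy encoding of the paper's axes, not the authors' formalism.

```python
# Toy encoding of the paper's demarcation axes (my own sketch, not the
# authors' formalism): tagging an ethics concern with where it sits on
# each axis makes explicit which kind of "AI" is being debated.
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    WEAK_AI = "weak/narrow AI"
    AGI = "artificial general intelligence"

class Paradigm(Enum):
    SYMBOLIC = "symbolic"
    CONNECTIONIST = "connectionist"

class Layer(Enum):
    METHOD = "method"
    DATA = "data"
    APPLICATION = "application"

@dataclass
class EthicsConcern:
    description: str
    scope: Scope
    paradigm: Paradigm
    layer: Layer

# Example: bias in a deployed face-recognition system is a weak-AI,
# connectionist, data-level concern, not an AGI takeover scenario.
concern = EthicsConcern(
    "demographic bias in face recognition",
    Scope.WEAK_AI, Paradigm.CONNECTIONIST, Layer.DATA,
)
print(f"{concern.description}: {concern.scope.value}, "
      f"{concern.paradigm.value}, {concern.layer.value}")
```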


In defense of the black box

Science

The science fiction writer Douglas Adams imagined the greatest computer ever built, Deep Thought, programmed to answer the deepest question ever asked: the Great Question of Life, the Universe, and Everything. After 7.5 million years of processing, Deep Thought revealed its answer: Forty-two (1). As artificial intelligence (AI) systems enter every sector of human endeavor, including science, engineering, and health, humanity is confronted by the same conundrum that Adams encapsulated so succinctly: What good is knowing the answer when it is unclear why it is the answer? What good is a black box? In an informal survey of my colleagues in the physical sciences and engineering, the top reason for not using AI methods such as deep learning, voiced by a substantial majority, was that they did not know how to interpret the results.
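
One standard reply to the interpretability worry is post-hoc analysis of a fitted model. As a minimal sketch (my own example, not from the article), permutation importance scores each input feature by how much shuffling it degrades the model's accuracy, giving at least a coarse answer to "why is this the answer?"

```python
# Minimal sketch of one post-hoc interpretability tool: permutation
# importance, which scores each feature by how much shuffling it hurts
# a fitted model. The example and dataset are illustrative, not from
# the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features whose shuffling hurts accuracy the most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```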