Language Is The Next Great Frontier In AI

#artificialintelligence

Johannes Gutenberg's printing press, introduced in the fifteenth century, transformed society ... through language. The creation of machines that can understand language may have an even greater impact. Language is the cornerstone of human intelligence. The emergence of language was the most important intellectual development in our species' history. It is through language that we formulate thoughts and communicate them to one another. Language enables us to reason abstractly, to develop complex ideas about what the world is and could be, and to build on these ideas across generations and geographies. Almost nothing about modern civilization would be possible without language. Building machines that can understand language has thus been a central goal of the field of artificial intelligence dating back to its earliest days. It has proven maddeningly elusive.


Why Computers Don't Need to Match Human Intelligence

#artificialintelligence

Speech and language are central to human intelligence, communication, and cognitive processes. Understanding natural language is often viewed as the greatest AI challenge--one that, if solved, could take machines much closer to human intelligence. In 2019, Microsoft and Alibaba announced that they had built enhancements to a Google technology that beat humans in a natural language processing (NLP) task called reading comprehension. This news was somewhat obscure, but I considered this a major breakthrough because I remembered what had happened four years earlier. In 2015, researchers from Microsoft and Google developed systems based on Geoff Hinton's and Yann LeCun's inventions that beat humans in image recognition. I predicted at the time that computer vision applications would blossom, and my firm made investments in about a dozen companies building computer-vision applications or products.


AI Is Harder Than We Think: 4 Key Fallacies in AI Research

#artificialintelligence

Artificial intelligence has been all over the headlines for nearly a decade, as systems have made quick progress in long-standing AI challenges like image recognition, natural language processing, and games. Tech companies have woven machine learning algorithms into search and recommendation engines and facial recognition systems, and OpenAI's GPT-3 and DeepMind's AlphaFold promise even more practical applications, from writing to coding to scientific discoveries. Indeed, we're in the midst of an AI spring, with investment in the technology burgeoning and an overriding sentiment of optimism and possibility about what it can accomplish and when. This time may feel different from previous AI springs due to the aforementioned practical applications and the proliferation of narrow AI into technologies many of us use every day--like our smartphones, TVs, cars, and vacuum cleaners, to name just a few. But it's also possible that we're riding a wave of short-term progress in AI that will soon become part of the ebb and flow in advancement, funding, and sentiment that has characterized the field since its founding in 1956. AI has fallen short of many predictions made over the last few decades; 2020, for example, was heralded by many as the year self-driving cars would start filling up roads, seamlessly ferrying passengers around as they sat back and enjoyed the ride.


Transformation Driven Visual Reasoning

arXiv.org Artificial Intelligence

This paper defines a new visual reasoning paradigm by introducing an important factor, i.e., transformation. The motivation comes from the fact that most existing visual reasoning tasks, such as CLEVR in VQA, are defined solely to test how well the machine understands the concepts and relations within static settings, like one image. We argue that this kind of state-driven visual reasoning approach has limitations in reflecting whether the machine has the ability to infer the dynamics between different states, which has been shown to be as important as state-level reasoning for human cognition in Piaget's theory. To tackle this problem, we propose a novel transformation-driven visual reasoning task. Given both the initial and final states, the target is to infer the corresponding single-step or multi-step transformation, represented as a triplet (object, attribute, value) or a sequence of triplets, respectively. Following this definition, a new dataset, namely TRANCE, is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views). Experimental results show that state-of-the-art visual reasoning models perform well on Basic but are still far from human-level intelligence on Event and View. We believe the proposed new paradigm will boost the development of machine visual reasoning. More advanced methods and real data need to be investigated in this direction. Code is available at: https://github.com/hughplay/TVR.
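The triplet representation is easy to picture in code. Below is a minimal, hypothetical Python sketch of how a Basic (single-step) and an Event (multi-step) sample might be encoded as (object, attribute, value) triplets; the class and field names are illustrative assumptions, not the actual data format released in the TVR repository.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: names are assumptions, not the TVR repo's schema.
@dataclass
class Transformation:
    obj: str        # which object changed, e.g. "small red rubber cube"
    attribute: str  # which attribute changed, e.g. "color"
    value: str      # the attribute's value in the final state, e.g. "blue"

# Basic setting: a single-step transformation, i.e. one triplet.
basic: List[Transformation] = [
    Transformation("small red rubber cube", "color", "blue"),
]

# Event setting: a multi-step transformation, i.e. an ordered sequence of triplets.
event: List[Transformation] = [
    Transformation("small red rubber cube", "color", "blue"),
    Transformation("large metal sphere", "position", "behind the cylinder"),
]

for step in event:
    print(f"{step.obj}: {step.attribute} -> {step.value}")
```

Under this encoding, a model is given the initial and final states and must recover one such triplet (Basic) or the whole sequence (Event and View).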


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
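To make the tactic concrete, here is a small, hypothetical Python sketch of that kind of prompt construction. It only assembles the prompt string and calls no API; the passage, the lead-in words, and the `build_summarization_prompt` helper are illustrative assumptions, not the article's exact prompt.

```python
# Hypothetical sketch of the tactic described above: show the model the
# desired format and write the first few words of the target output yourself,
# so the completion is constrained toward a summary rather than another mode.
def build_summarization_prompt(passage: str, answer_lead_in: str) -> str:
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        "I rephrased it for him, in plain language a second grader can understand:\n\n"
        f'"{answer_lead_in}'  # leaving the quote open invites the model to finish it
    )

prompt = build_summarization_prompt(
    passage="Jupiter is the fifth planet from the Sun and the largest in the Solar System.",
    answer_lead_in="Jupiter is a really big planet that",
)
print(prompt)  # this string would then be sent to the language model as the prompt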


Why AI needs more human intelligence if it's to succeed

#artificialintelligence

I got suckered into watching "The Robot Will See You Now" during Channel 4's November week of robot programmes. Shoddy though it was, it conveyed more of the truth than the more technically correct offerings. Its premise: a family of hand-picked reality TV stereotypes was given access to robot Jess, complete with a "cute" flat glass face that looked like a thermostat display, and consulted it about their relationship problems and other worries. Channel 4 admitted Jess operates "with some human assistance", which could well have meant somebody sitting in the next room speaking into a microphone, but Jess was immediately recognisable to me as ELIZA in a smart new plastic shell. ELIZA, for those too young to know, was one of the first AI natural language programs, written in 1964 by Joseph Weizenbaum at MIT – and 16 years later by me while learning Lisp.
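For readers who never met ELIZA, the following is a minimal keyword-and-reflection sketch in Python (an illustrative approximation, not Weizenbaum's original program or its scripts) showing how little machinery such a "therapist" needs.

```python
import re

# Toy ELIZA-style responder (illustrative only): match a keyword pattern in
# the user's utterance and echo part of it back inside a canned template.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I am feeling anxious"))        # -> How long have you been feeling anxious?
print(respond("My relationship worries me"))  # -> Tell me more about your relationship.
```

A real ELIZA script also swaps pronouns ("my" to "your") and ranks keywords, but the principle is the same pattern matching and canned reflection.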


A Quick Introduction to AI

#artificialintelligence

Artificial Intelligence (AI) has the potential to revolutionize human civilization and will affect industries, companies, and how we live our lives. To understand the motivation behind Artificial Intelligence, let us compare some of the differences between traditional computer programs and human intelligence. Normal humans share the same intellectual mechanisms, and differences in intelligence are related to "quantitative biochemical and physiological conditions." Computer programs, on the other hand, have plenty of speed and memory, but their abilities corresponding to those intellectual mechanisms depend entirely on the skill of the programmers and on what they put into the programs. Traditionally, computing has been used to perform mechanical computations with fixed procedures.


Focus on Artificial Intelligence and its sub-disciplines

#artificialintelligence

"The last 10 years have been about building a world that is mobile-first, turning our phones into remote controls for our lives. But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available." What we call AI is Artificial Intelligence, a term created in 1956 by John McCarthy, an assistant at Dartmouth College, which describes a machine capable of performing tasks usually requiring human intelligence. The field of Artificial Intelligence is so wide that it is difficult to measure its density. This area covers different disciplines including understanding, calculation, reasoning, learning, perception and natural language dialogue.