licklider
Homo Cyberneticus: The Era of Human-AI Integration
Author Keywords: HCI vision; human-augmentation; human-AI integration
Neo: "Can you fly that thing?" In the movie "The Matrix," Trinity responds to Neo right before having the helicopter's maneuverability downloaded. Will such a future come? The idea that technology enhances humanity has a long history: "There may be found many Mechanical Inventions to improve ..." GUIs were tools to realize that goal. In that regard, J.C.R. Licklider's "Man-Computer Symbiosis" [12] is worth reviewing. Here, symbiosis means "living together in intimate association, or even close union, of two dissimilar organisms."
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
- Health & Medicine > Therapeutic Area (0.71)
- Media > Film (0.55)
The Intelligence Enigma: Balancing the Power Between Humans and Machines
Empowering the human is a piece of the puzzle often missing from the fast-paced tech world but remains one of the most important drivers of success and true disruption. Think about the people behind the companies creating or using the most innovative technologies--even the biggest businesses rely on human creativity and emotional intelligence as much as they rely on technological development to survive, let alone thrive, in the digital age. Such digital advancements are usually discussed in the context of technology and the sheer computational power of the machine. But what many business leaders fail to understand is that machines can't solve problems alone. Machines are the enabler, but without situational context and logic, these technologies can never serve as a replacement for humans. So, what exactly does this mean for the future of humans and intelligent technology?
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.05)
- Europe > Germany (0.05)
Untold History of AI: The DARPA Dreamer Who Aimed for Cyborg Intelligence
The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI--how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. At 10:30pm on 29 October 1969, a graduate student at UCLA sent a two-letter message from an SDS Sigma 7 computer to another machine a few hundred miles away at the Stanford Research Institute in Menlo Park. The student had meant to send "LOGIN," but the packet switching network supporting the transmission of the message, the ARPANET, crashed before the whole message could be typed out.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
The Cloud Imperative: The Foundation of a Truly Intelligent Enterprise
In 1963, the American psychologist and computer scientist J. C. R. Licklider wrote a series of forward-thinking, perhaps even visionary, memos that he addressed to the "Members and Affiliates of the Intergalactic Computer Network." Just over half a century later, his foresight sits at the center of a global transformation as organizations strive to become truly intelligent. This transformation is both driven and underpinned by cloud computing technology – itself arguably representing a further iteration of Licklider's early theories. Simply put, cloud computing makes it possible for users to access data, applications, and services over the Internet and ultimately provides the foundation for the intelligent enterprise that is ready for tomorrow's world. The cloud eliminates the need for costly hardware, such as hard drives and servers – and gives users the ability to work from anywhere. As such, cloud software offers several distinct advantages that help sharpen a company's competitive edge: it speeds up processes, makes them easier, and above all smarter – all aimed at realizing a company's intelligent enterprise.
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.55)
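As a rough illustration of the access pattern described in the cloud excerpt above (data, applications, and services reached over the Internet rather than from local hardware), here is a minimal Python sketch; the endpoint URL, token, and report ID are hypothetical placeholders, not any real service's API.

```python
# A minimal sketch of the cloud access pattern: a document retrieved from a
# remote service over HTTP instead of from a local disk or on-premise server.
# The endpoint and credential below are hypothetical placeholders.

import requests

API_BASE = "https://api.example-cloud.com/v1"   # hypothetical cloud service
API_TOKEN = "replace-with-your-token"           # hypothetical credential

def fetch_report(report_id: str) -> dict:
    """Retrieve a document from the cloud service rather than local hardware."""
    response = requests.get(
        f"{API_BASE}/reports/{report_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The same call works from any machine with Internet access,
    # which is the "work from anywhere" property the excerpt mentions.
    print(fetch_report("quarterly-summary"))
```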
Machine Learning and Misinformation
Communication is an essential pillar of society. Humanity's progression over the past millennium was largely driven by the development and evolution of communication as a tool for distributing siloed thoughts from one individual to others. Communication is naively defined as content and the mode of transmission -- symbols manifested as images, language transmitted through speech and writing, digital files sent through the internet. These are methods through which we communicate thoughts, ideas, facts, and opinions. New forms of communication emerge to expand the lexicon of thought and reduce the friction required to create and transmit content.
Intelligence amplification - Wikipedia
Intelligence amplification (IA) (also referred to as cognitive augmentation and machine augmented intelligence) refers to the effective use of information technology in augmenting human intelligence. The idea was first proposed in the 1950s and 1960s by cybernetics and early computer pioneers. IA is sometimes contrasted with AI (artificial intelligence), that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot. AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as it needs technology merely as an extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information processing capabilities of the human mind (see extended mind and distributed cognition).
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Social Media (0.40)
- Information Technology > Communications > Collaboration (0.40)
- Information Technology > Communications > Networks (0.36)
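To make the IA/AI contrast in the excerpt above concrete, here is a minimal human-in-the-loop sketch in Python: the machine only narrows and orders the options, and the person makes the final call. The ranking heuristic and the candidate notes are illustrative assumptions, not taken from the Wikipedia article.

```python
# A minimal sketch of intelligence amplification (IA) as a human-in-the-loop
# workflow, as opposed to a fully autonomous AI system. The similarity-based
# ranking and the sample notes are illustrative assumptions.

from difflib import SequenceMatcher

def rank_candidates(query: str, candidates: list[str]) -> list[str]:
    """Machine step: order reference material by rough similarity to the query."""
    return sorted(
        candidates,
        key=lambda text: SequenceMatcher(None, query.lower(), text.lower()).ratio(),
        reverse=True,
    )

def amplified_decision(query: str, candidates: list[str]) -> str:
    """Human step: the tool narrows the options; the person makes the final call."""
    shortlist = rank_candidates(query, candidates)[:3]
    for i, option in enumerate(shortlist, start=1):
        print(f"{i}. {option}")
    choice = input("Pick the most relevant option (1-3): ")
    return shortlist[int(choice) - 1]

if __name__ == "__main__":
    notes = [
        "Licklider, Man-Computer Symbiosis (1960)",
        "Engelbart, Augmenting Human Intellect (1962)",
        "Minsky, Steps Toward Artificial Intelligence (1961)",
    ]
    print(amplified_decision("human augmentation", notes))
```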
Cognitive collaboration
Although artificial intelligence (AI) has experienced a number of "springs" and "winters" in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts. The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! champions. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars.1 All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. Regarding the latter topic, Elon Musk has described AI as "our biggest existential threat." Stephen Hawking warned that "The development of full artificial intelligence could spell the end of the human race." In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a kind of technological "singularity" at which point the general cognitive abilities of computers exceed those of humans.2 Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to "outthink" us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.
- North America > United States > Wisconsin (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Leisure & Entertainment > Games > Chess (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- (2 more...)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Games > Chess (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
People and computers need each other - CNN.com
Shyam Sankar: Some say computers could attain artificial intelligence superior to humans. A more realistic approach is to envision computers aided by human intelligence, he says. Computers can spot patterns from the past but can't anticipate as people can, he says. Sankar: Human thought, aided by computer power, can make sense of "big data." In 1997, Garry Kasparov was defeated by IBM's Deep Blue supercomputer. It seemed like a watershed moment, recalling the rise of the machines long prophesied in science fiction. Yet in 2005, a freestyle chess tournament featured teams of humans partnering with computers in various combinations. Shockingly, two amateurs using three fairly weak laptops emerged victorious, beating grand masters and supercomputers in turn. This contrast is fittingly emblematic of two great visionaries of computer science, Marvin Minsky and J.C.R. Licklider.
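A minimal sketch of the division of labor Sankar describes, under the assumption that "spotting patterns from the past" can be reduced to a simple statistical flag: the machine surfaces deviations from historical data, and a person supplies the judgment about what they mean. The threshold and sample figures are illustrative.

```python
# A toy version of computer-aided human analysis: the machine flags deviations
# from the historical pattern; a human decides what the flagged items mean.
# The z-score threshold and the sample data are illustrative assumptions.

from statistics import mean, stdev

def flag_outliers(values: list[float], z_threshold: float = 2.0) -> list[int]:
    """Machine step: mark indices whose values deviate strongly from the rest."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > z_threshold * sigma]

if __name__ == "__main__":
    daily_transactions = [102, 98, 105, 97, 101, 430, 99, 103]  # illustrative data
    for i in flag_outliers(daily_transactions):
        # Human step: an analyst decides whether each flagged day is fraud,
        # a promotion, or a data error -- the judgment the machine cannot make.
        print(f"Day {i}: {daily_transactions[i]} transactions -- needs human review")
```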
Learning to trust artificial intelligence systems accountability, com…
Cognitive systems generate not just answers to numerical problems, but hypotheses, reasoned arguments and recommendations about more complex -- and meaningful -- bodies of data. What's more, cognitive systems can make sense of the 80 percent of the world's data that computer scientists call "unstructured." This enables them to keep pace with the volume, complexity and unpredictability of information and systems in the modern world. None of this involves either sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand -- and act upon -- the complex systems of our society. This augmented intelligence is the necessary next step in our ability to harness technology in the pursuit of knowledge, to further our expertise and to improve the human condition. That is why it represents not just a new technology, but the dawn of a new era of technology, business and society: the Cognitive Era. The success of cognitive computing will not be measured by Turing tests or a computer's ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved. It's not surprising that the public's imagination has been ignited by Artificial Intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse and frustrated by its slow development. But like so many advanced technologies that were conceived before their time, Artificial Intelligence has come to be widely misunderstood -- co-opted by Hollywood, mischaracterized by the media, portrayed as everything from savior to scourge of humanity. Those of us engaged in serious information science and in its application in the real world of business and society understand the enormous potential of intelligent systems. The future of such technology -- which we believe will be cognitive, not "artificial" -- has very different characteristics from those generally attributed to AI, spawning different kinds of technological, scientific and societal challenges and opportunities, with different requirements for governance, policy and management. Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans naturally. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. They are made possible by advances in a number of scientific fields over the past half-century, and are different in important ways from the information systems that preceded them. Here at IBM, we have been working on the foundations of cognitive computing technology for decades, combining more than a dozen disciplines of advanced computer science with 100 years of business expertise. Now we are seeing firsthand its potential to transform businesses, governments and society.
- North America > Canada > Ontario > Toronto (0.05)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Law (0.96)
- (2 more...)
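As a toy illustration of "systems that learn from their interactions with us rather than being explicitly programmed" in the excerpt above, here is a minimal Python sketch of a relevance filter whose keyword weights are adjusted by user feedback; the data, update rule, and class name are illustrative assumptions, not IBM's cognitive-computing technology.

```python
# A minimal sketch of learning from interactions instead of hand-written rules:
# each piece of user feedback nudges keyword weights toward the user's judgment.
# The update rule and examples are illustrative assumptions.

from collections import defaultdict

class FeedbackFilter:
    def __init__(self, learning_rate: float = 0.5):
        self.weights: dict[str, float] = defaultdict(float)
        self.learning_rate = learning_rate

    def score(self, text: str) -> float:
        """Sum the learned weights of the words in the text."""
        return sum(self.weights[w] for w in text.lower().split())

    def give_feedback(self, text: str, relevant: bool) -> None:
        """Each interaction adjusts the weights; no explicit rules are written."""
        direction = 1.0 if relevant else -1.0
        for word in text.lower().split():
            self.weights[word] += direction * self.learning_rate

if __name__ == "__main__":
    f = FeedbackFilter()
    f.give_feedback("oncology trial results", relevant=True)
    f.give_feedback("celebrity gossip roundup", relevant=False)
    print(f.score("new oncology results"))   # positive: learned as relevant
    print(f.score("gossip roundup"))         # negative: learned as irrelevant
```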