

The dream of augmented humans endures, despite skepticism

The Japan Times

PARIS - Brain implants, longer lives, genetically modified humans: For the prophets of "transhumanism" -- the scientifically assisted evolution of humans beyond our current limitations -- it is just a matter of time. But many scientists insist that some problems are not so easily solved. Sooner or later, they argue, the movement that crystallized in the can-do culture of 1980s California will hit the brick wall of the scientifically impossible. The most recent controversy was in November, when Chinese scientist He Jiankui claimed to have created the world's first genetically edited babies, who he said were HIV-resistant. The backlash from the scientific community led to his work being suspended, as questions were raised not just about the quality of the science, but the ethics of the research.


Machine Learning: Arthur Samuel, Artificial Intelligence (AI) & Big Data

#artificialintelligence

Pioneered by Arthur Samuel, machine learning is a big-data discipline of artificial intelligence that replaces the tedious task of understanding a problem well enough to write an explicit program for it, a task that can take far longer or prove virtually impossible. Techopedia defines the discipline of machine learning as "an artificial intelligence (AI) discipline geared toward the technological development of human knowledge. Machine learning allows computers to handle new situations via analysis, self-training, observation and experience. Machine learning facilitates the continuous advancement of computing through exposure to new scenarios, testing and adaptation, while employing pattern and trend detection for improved decisions in subsequent (though not identical) situations." In 1959, IBM employee Arthur Samuel wanted to teach a computer to play checkers.
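A rough, hypothetical illustration of that shift (a minimal sketch in plain Python; nothing here comes from Samuel's checkers program): instead of hand-writing a decision rule, a tiny "learner" estimates one from labelled examples, and the learned threshold then plays the role of the program nobody had to write explicitly.

    # Illustrative only: learn a decision rule from examples instead of coding it by hand.
    # The data points and the 0/1 labels are invented for this sketch.
    examples = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]

    def learn_threshold(data):
        """Pick the cut-off that misclassifies the fewest training examples."""
        best_t, best_errors = None, len(data) + 1
        for t in sorted(x for x, _ in data):
            errors = sum((x >= t) != bool(label) for x, label in data)
            if errors < best_errors:
                best_t, best_errors = t, errors
        return best_t

    threshold = learn_threshold(examples)        # "trained" from the data
    predict = lambda x: int(x >= threshold)      # the learned "program"
    print(threshold, predict(2.5), predict(7.5)) # 6.0 0 1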


Ethical implications of artificial intelligence

#artificialintelligence

In the race to adopt rapidly developing technologies, organisations run the risk of overlooking potential ethical implications. And that could produce unwelcome results, especially in artificial intelligence (AI) systems that employ machine learning. Machine learning is a subset of AI in which computer systems are taught to learn on their own. Algorithms allow the computer to analyse data to detect patterns and gain knowledge or abilities without having to be specifically programmed. It is this type of technology that empowers voice-enabled assistants such as Apple's Siri or the Google Assistant, among myriad other uses.


Coding skills won't save your job -- but the humanities will

#artificialintelligence

Coding boot camps are becoming almost as popular as college degrees: Code schools graduated more than 22,000 students in 2017 alone. The bet for many is that coding and computer programming will save their jobs from automation, and there is a resulting wave of emphasis on STEM skills. But while a basic understanding of computer science may always be valuable, it is not a future-proof skill. If people want a skill set that can adapt and ride the wave of workplace automation, they should look to the humanities. Knowledge of human culture and history allows us to shape the direction in which technology is developed, identifying which problems it should solve and which real-world concerns should be considered throughout the process.


A Look at the Most Used Terminology Around Artificial Intelligence - 7wData

#artificialintelligence

Artificial Intelligence (AI), once only present in science fiction, is now a science reality manifesting itself in every industry. It raises questions about how we should explore the possibilities of AI for our organizations, institutions, homes, and cities. But what do we really mean when we speak about AI? In general, AI is a broad field of science encompassing much more than just computer science; it also draws on psychology, philosophy, linguistics, and other areas.


Opinion: Lies, damned lies, and artificial intelligence

#artificialintelligence

Algorithms are as biased as the data they feed on. And all data are biased. Even "official" statistics cannot be assumed to stand for objective, eternal "facts". The figures that governments publish represent society as it is now, through the lens of what those assembling the data consider to be relevant and important. The categories and classifications used to make sense of the data are not neutral.
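A minimal sketch of that point (the groups, outcomes, and counts below are invented for illustration, not drawn from any official statistics): a model that merely learns historical base rates will turn whatever skew the records contain into its decision rule.

    # Illustrative only: a "model" that learns majority outcomes per group
    # simply reproduces the skew present in its training records.
    from collections import Counter

    def learn_majority(records):
        """Return the most common outcome for each group in the training data."""
        counts = {}
        for group, outcome in records:
            counts.setdefault(group, Counter())[outcome] += 1
        return {g: c.most_common(1)[0][0] for g, c in counts.items()}

    # Hypothetical records in which group "B" was mostly logged as "deny",
    # an artefact of how the data were assembled, not of group B itself.
    training = ([("A", "approve")] * 80 + [("A", "deny")] * 20
                + [("B", "approve")] * 30 + [("B", "deny")] * 70)

    print(learn_majority(training))   # {'A': 'approve', 'B': 'deny'} - the skew becomes the rule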


U.S. sees quantum computing and AI as 'emerging threats'

#artificialintelligence

The U.S. government may be planning for a quantum computing workforce (it passed a bill to foster an active quantum computing industry in September 2018), yet that hasn't stopped the security community from regarding quantum computing as an 'emerging threat', alongside certain forms of artificial intelligence. The warning comes in a white paper from the U.S. Government Accountability Office titled "Long-Range Emerging Threats Facing the United States As Identified by Federal Agencies." In it, federal agencies identified 26 long-term threats across four categories, among them Adversaries' Political and Military Advancements (e.g., China's increasing ability to match the U.S. military's strength) and Dual-Use Technologies (e.g., self-driving cars might be developed for private use, but militaries can use them too). Within this, the future of "dual-use technologies" took center stage, according to TechCrunch.


Artificial intelligence and the limits of the machine model - Resilience

#artificialintelligence

In his bestselling book, Up the Organization, former Avis president Robert Townsend captured the problem of automation precisely. Writing at a time when the vast paper systems of corporate America were being transferred to computers, he warned that it was important first to make sure that a company's paper systems are actually effective and accurate. "Otherwise," he quipped, "your new computer will just speed up the mess." Today, we are faced with a new wave of optimism about the prospects of what is called artificial intelligence (AI). It is important to parse these words carefully for they will tell you why artificial intelligence as it is currently conceived will very likely "just speed up the mess."


If tech experts worry about artificial intelligence, shouldn't you as well? John Naughton

#artificialintelligence

Fifty years ago last Sunday, a computer engineer named Douglas Engelbart gave a live demonstration in San Francisco that changed the computer industry and, indirectly, the world. In the auditorium, several hundred entranced geeks watched as he used something called a "mouse" and a special keypad to manipulate structured documents and showed how people in different physical locations could work collaboratively on shared files, online. It was, said Steven Levy, a tech historian who was present, "the mother of all demos". "As windows open and shut and their contents reshuffled," he wrote, "the audience stared into the maw of cyberspace. Engelbart, with a no-hands mic, talked them through, a calm voice from Mission Control as the truly final frontier whizzed before their eyes."


We're thinking about the Turing Test all wrong

#artificialintelligence

In 1950, five years before computer scientist John McCarthy would coin the term "artificial intelligence," mathematician Alan Turing famously posited: Can machines think? To answer, Turing devised a simple test, known as the "Imitation Game": a machine passes as "intelligent" if, during a text-based chat, it can fool us into believing it is human. Since then, his eponymous Turing Test has inspired countless competitions, fierce philosophical debates, media frenzies, and epic sci-fi plots from Westworld to Ex Machina to Her--not to mention copious criticism from academia. But Turing Test detractors who believe that "winning" the Imitation Game has "little practical significance for artificial intelligence" are missing the finer point contained in Turing's premise: that the fundamental defining feature of human intelligence is language.