If you are looking for an answer to the question 'What is Artificial Intelligence?' and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Artificial intelligence has been used to generate an endless stream of death metal music, playing on YouTube 24 hours a day. The creation comes from two US programmers who built a virtual band known as 'Dadabot'. Its creators are now letting the technology play forever via a live stream called 'Relentless Doppelganger' on the video-sharing platform, with the aim of continuing until 'infinity'. Dadabot, and its continual supply of music, was trained on a large amount of music from Canadian death metal band Archspire.
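Dadabot's actual system is a neural network trained on raw audio, which is far too heavy to reproduce here. As a much-simplified sketch of the same idea — train a model on a corpus, then sample from it indefinitely — here is a character-level Markov chain; the toy corpus and all names are illustrative, not part of Dadabot:

```python
import random
from collections import defaultdict

def build_model(corpus: str, order: int = 3) -> dict:
    """Map each length-`order` context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model: dict, length: int, seed: str, order: int = 3) -> str:
    """Sample a stream one character at a time; in principle this can run forever."""
    out = seed
    while len(out) < length:
        choices = model.get(out[-order:])
        if not choices:  # dead end: restart from a random known context
            out += random.choice(list(model))
            continue
        out += random.choice(choices)
    return out

# Toy "training data" standing in for hours of death metal audio.
corpus = "relentless riffs and relentless drums and relentless riffs "
model = build_model(corpus, order=3)
stream = generate(model, length=120, seed="rel")
```

A real audio model such as the one behind 'Relentless Doppelganger' predicts the next audio sample rather than the next character, but the generate-forever loop is the same shape.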
Last year, I participated in a discussion of The Human Use of Human Beings, Norbert Wiener's groundbreaking book on cybernetics. Out of that grew what I now consider a manifesto against the growing singularity movement, which posits that artificial intelligence, or AI, will supersede and eventually displace us humans. The notion of singularity – which includes the idea that AI, with its exponential growth, will supersede humans, making everything we humans have done and will do insignificant – is a religion created mostly by people who have designed and successfully deployed computation to solve problems previously considered impossibly complex for machines. They have found a perfect partner in digital computation: a seemingly knowable, controllable, machine-based system of thinking and creating that is rapidly increasing in its ability to harness and process complexity and, in the process, bestowing wealth and power on those who have mastered it. In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a feedback loop lacking in self-regulation (although #techwontbuild, #metoo and #timesup are forcing some reflection).
A Japanese tech start-up is using deep learning to teach a pair of machines a job that is simple for a human but surprisingly tricky for a robot: cleaning a bedroom. Though it may seem basic, albeit tedious, to a person, robots find this type of job complicated, and the start-up is teaching AI how to deal with the disorder and chaos of a child's room. Deep learning is where algorithms, inspired by the human brain, learn from large amounts of data so they are able to perform complex tasks. Some tasks, like welding car chassis in exactly the same way day after day, are easy for robots: the process is repetitive, and the machines do not suffer from boredom the way disgruntled employees do.
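The "learn from data" idea behind deep learning can be shown at toy scale. This is not the start-up's system — just a minimal two-layer network, trained by backpropagation on the classic XOR pattern (a task no single linear rule can solve), using NumPy; all sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR. The label is 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 8 hidden units -> 1 output, with biases.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: chain rule through both sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

A room-cleaning robot replaces the four-row table above with camera images and the single output with grasp or motion commands, but the train-until-the-error-shrinks loop is the same.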
Before computers, no sane person would have set out to count gender pronouns in 4,000 novels, but the results can be revealing, as MIT's new digital humanities program recently discovered. Launched with a $1.3 million grant from the Andrew W. Mellon Foundation, the Program in Digital Humanities brings computation together with humanities research, with the goal of building a community "fluent in both languages," says Michael Scott Cuthbert, associate professor of music, Music21 inventor, and director of digital humanities at MIT. "In the past, it has been somewhat rare, and extremely rare beyond MIT, for humanists to be fully equipped to frame questions in ways that are easy to put in computer science terms, and equally rare for computer scientists to be deeply educated in humanities research. There has been a communications gap," Cuthbert says. While traditional digital humanities programs attempt to provide humanities scholars with some computational skills, the situation at MIT is different: Most MIT students already have or are learning basic programming skills, and all MIT undergraduates also take some humanities classes. Cuthbert believes this difference will make MIT's program a great success.
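The pronoun-counting study mentioned above is, computationally, a small task once the novels are in plain text. A minimal sketch (the pronoun sets and sample sentence are illustrative; a real study would also handle possessives, reflexives, and messier tokenization):

```python
import re
from collections import Counter

# Illustrative pronoun sets, not the ones used in the MIT study.
MALE = {"he", "him", "his", "himself"}
FEMALE = {"she", "her", "hers", "herself"}

def pronoun_counts(text: str) -> Counter:
    """Tally gendered pronouns in one text."""
    counts = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in MALE:
            counts["male"] += 1
        elif word in FEMALE:
            counts["female"] += 1
    return counts

sample = "She said he would bring his notes, but she kept hers."
counts = pronoun_counts(sample)
```

Run over 4,000 novels, the same dozen lines yield the kind of corpus-scale pattern no sane person would count by hand.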
From automated to autonomous and now cognitive, a paradigm shift is taking place in the design principles of machines, matter, methods and more. So, what are the principles of cognitive design? And are they centered on the future of humanity? From tabulating systems to programmable systems and now to cognitive systems, the evolution of computing allows humans to move beyond numbers and data to knowledge and intelligence. It is no longer about the replacement of man with machine, but rather about intelligence augmentation.
[Image caption: Hera uses infrared to scan an impact crater.] Judging by the valuations of companies such as Waymo, Lyft and Uber, humanity is placing a big bet on self-driving cars as the future of transportation. But the future of humanity itself may rest on the hopes of self-driving spacecraft. The European Space Agency is currently developing a self-driving craft for its Hera planetary defense mission to the Didymos asteroid, which could happen as soon as 2023. "If you think self-driving cars are the future on Earth, then Hera is the pioneer of autonomy in deep space," Paolo Martino, lead systems engineer of ESA's proposed Hera mission, said in a statement.
The rapid progress of artificial intelligence (AI) is prompting intense speculation about its dual-use applications and security risks. From autonomous weapons systems (AWS) to facial recognition technology to decision-making algorithms, each emerging application of artificial intelligence brings with it both good and bad. It is this dual nature of artificial intelligence that brings enormous security risks not only to individuals and to entities across nations (governments, industries, organizations, and academia, or NGIOA) but also to the future of humanity. The reality is that any new AI innovation might be used for both beneficial and harmful purposes: any single algorithm that provides important economic applications might also lead to the production of unprecedented weapons of mass destruction on a scale that is difficult to fathom. As a result, concerns about artificial intelligence-based automation are growing.
What skills, ideas, and experiences should students expect to leave college with? The annual celebration of learning is named after the late Margaret MacVicar, the first dean for undergraduate education and the founder of the Undergraduate Research Opportunities Program (UROP). Vice Chancellor Ian Waitz hosted the afternoon's festivities and began by introducing the 2019 MacVicar Faculty Fellows: Ford Professor of Economics Joshua Angrist, computer science professor Erik Demaine, anthropology professor Graham Jones, and comparative media studies professor T.L. Taylor. Each was honored for their contributions to undergraduate education and selected through nominations from their colleagues and students. This year, four faculty members and three students were asked to present three-minute lightning talks on what is important to today's learners.
When Stanford announced a new artificial intelligence institute, the university said the "designers of AI must be broadly representative of humanity" and unveiled 120 faculty and tech leaders partnering on the initiative. Some were quick to notice that not a single member of this "representative" group appeared to be black. The backlash was swift, sparking discussion on the severe lack of diversity across the AI field. But the problems surrounding representation extend far beyond exclusion and prejudice in academia. Major tech corporations have launched AI "ethics" boards that not only lack diversity, but sometimes include powerful people with interests that don't align with the ethics mission.