Amazon Alexa: "Sorry to hear that. Talking to a friend, listening to music or taking a walk might help. I hope you feel better soon." Alexa's artificial intelligence-infused heart may be in the right place, but there's only so far it or any AI can go to comfort someone who is alone. All the same, Alexa's response raises questions about just what kind of role an AI can play to "cure" loneliness, especially among the elderly.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Jellyfish are about 95% water, making them some of the most diaphanous, delicate animals on the planet. But the remaining 5% has yielded important scientific discoveries, like green fluorescent protein (GFP), now used extensively by scientists to study gene expression, and life-cycle reversal that could hold the keys to combating aging. Jellyfish may very well harbor other, potentially life-changing secrets, but the difficulty of collecting them has severely limited the study of such "forgotten fauna." The sampling tools available to marine biologists on remotely operated vehicles (ROVs) were largely developed for the marine oil and gas industries, and are much better suited to grasping and manipulating rocks and heavy equipment than jellies, often shredding them to pieces in attempts to capture them. Now, a new technology developed by researchers at Harvard's Wyss Institute for Biologically Inspired Engineering, John A. Paulson School of Engineering and Applied Sciences (SEAS), and Baruch College at CUNY offers a novel solution to that problem in the form of an ultra-soft, underwater gripper that uses hydraulic pressure to gently but firmly wrap its fettuccine-like fingers around a single jellyfish, then release it without causing harm.
Artificial intelligence (AI), machine learning (ML), autonomous systems, robotic process automation, chatbots, augmented and mixed reality and many other buzzwords are flying around water coolers and leadership team meetings across enterprises. This buzz reflects both the interest in these technologies and their potential benefits to organizations or institutions (in the case of higher education), as well as the question of how they can be adopted successfully to gain an advantage in the already very competitive higher education business. Part of AI is what is called unconscious AI. What does this really mean, and what are the different perspectives on unconscious AI? To explore unconscious AI, we first must understand what AI is and what different approaches are taken by technology providers and consumers to make AI effective and useful in daily life.
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Artificial intelligence, it seems, is infiltrating every corner of higher education. From improving the efficiency of sprinkler systems to supporting students with virtual teaching assistants, AI has quickly become a near-ubiquitous presence on some campuses. Colleges and universities are being asked to do more with less as they grapple with shifting demographics and the need to not just respond to, but also anticipate, the needs of today's students. And early returns suggest that AI can play a role in helping institutions tackle pernicious challenges -- from "summer melt" to student engagement -- and enable students to navigate the complexity of financial aid, admissions, campus life and course scheduling. In response, a growing number of products are touting AI and machine learning as part of their sales pitch.
Two Princeton University computer science professors will lead a new Google AI lab opening in January in the town of Princeton. The lab is expected to expand New Jersey's burgeoning innovation ecosystem by building a collaborative effort to advance research in artificial intelligence. The lab, at 1 Palmer Square, will start with a small number of faculty members, graduate and undergraduate student researchers, recent graduates and software engineers. The lab builds on several years of close collaboration between Google and professors Elad Hazan and Yoram Singer, who will split their time working for Google and Princeton. The work in the lab will focus on a discipline within artificial intelligence known as machine learning, in which computers learn from existing information and develop the ability to draw conclusions and make decisions in new situations that were not in the original data.
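That definition of machine learning can be made concrete with a toy example. The sketch below (illustrative only, unrelated to the lab's actual research) uses a one-nearest-neighbor classifier: it "learns" from labeled examples and then classifies a point it has never seen, which is the generalization the article describes. All names and data here are hypothetical.

```python
# Hypothetical sketch of supervised machine learning: a 1-nearest-neighbor
# classifier learns from existing labeled data, then makes a decision
# about a new situation that was not in the original data.

def nearest_neighbor_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: sq_distance(example[0], point))
    return closest[1]

# Existing information: 2-D points with known labels.
training_data = [
    ((1.0, 1.0), "low"),
    ((1.2, 0.8), "low"),
    ((9.0, 9.5), "high"),
    ((8.7, 9.1), "high"),
]

# A new situation not present in the training data:
print(nearest_neighbor_predict(training_data, (8.0, 8.0)))  # -> high
```

The "learning" here is trivial (memorize the examples), but the principle scales: modern systems fit far richer functions to far more data, and the measure of success is the same, correct behavior on inputs the system never saw during training.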
On March 18, 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system--artificial intelligence--was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system's programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg's death? "Artificial intelligence" refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches.
Welcome to the club if you are still behind the artificial intelligence curve. This is the last chapter of my AI series, and I hope it has shed a humble light on the linchpin of the Fourth Industrial Revolution (4IR). Included below are links to previous installments. You do not want to miss the mini-documentary in part 3. Keep the following quotes in mind as I prognosticate today on AI jobs for the near term. "I have all the tools and gadgets. I tell my son, who is a producer.