Opinion

#artificialintelligence

Among the many unique experiences of reporting on A.I. is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down. What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms.


Opinion

#artificialintelligence

In October, the White House released a 70-plus-page document called the "Blueprint for an A.I. Bill of Rights." The document's ambition was sweeping. It called for the right for individuals to "opt out" from automated systems in favor of human ones, the right to a clear explanation of why a given A.I. system made the decision it did, and the right for the public to give input on how A.I. systems are developed and deployed. The blueprint is nonbinding. But if it did become law, it would transform how A.I. systems would need to be devised. And, for that reason, it raises an important set of questions: What does a public vision for A.I. actually look like?


Opinion

#artificialintelligence

Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits. I didn't write the above paragraph.


Opinion

#artificialintelligence

A mountain man buys his first chain saw. He comes back to the store a week later complaining that it cuts down only two trees a day when he was told it would cut down 20. The service person says, "Well, let's see what the trouble is," and starts it up. The mountain man jumps back and asks, "What's that noise?" (He'd been sawing without the engine on.) I feel like that mountain man when it comes to ChatGPT, the powerful new artificial intelligence chatbot that seemingly everyone is experimenting with.


Opinion

#artificialintelligence

ChatGPT makes an irresistible first impression. It's got a devastating sense of humor, a stunning capacity for dead-on mimicry, and it can rhyme like nobody's business. Then there is its overwhelming reasonableness. When ChatGPT fails the Turing test, it's usually because it refuses to offer its own opinion on just about anything. When was the last time real people on the internet declined to tell you what they really think?


Opinion

#artificialintelligence

Plato mourned the invention of the alphabet, worried that the use of text would threaten traditional memory-based arts of rhetoric. In his dialogue "Phaedrus," arguing through the voice of Thamus, an Egyptian king, Plato claimed the use of this more modern technology would create "forgetfulness in the learners' souls, because they will not use their memories," that it would impart "not truth but only the semblance of truth" and that those who adopt it would "appear to be omniscient and will generally know nothing," with "the show of wisdom without the reality." If Plato were alive today, would he say similar things about ChatGPT? ChatGPT, a conversational artificial intelligence program released recently by OpenAI, isn't just another entry in the artificial intelligence hype cycle. It's a significant advancement that can produce articles in response to open-ended questions that are comparable to good high school essays.


Opinion

AI Magazine

AI Magazine Volume 18, Number 2 (1997) (© AAAI)

WASA -- World Aeronautics & Space Administration
Executive Summary of Committee Report on Disaster Investigation, Incident #362
Date: 4/1/2002

Analysis of records downloaded from the 2001 Jupiter Orbital Black Parallelepiped Investigation Mission indicates that the basic source of failure was excessive emotional stress in the HAL computer, leading to a previously unknown condition now called Computational Paranoia. This in turn was an unforeseen side effect of the design of the HAL-9000 series. HAL was given a genuine personality, enabling it to act as an onboard psychiatric advisor, colleague, and confidante to the human crew members. As a consequence, much of HAL's perceptual software was devoted to reading subtleties of facial expression, unconscious intonation stresses, and other emotional signals. Its performance at empathy and emotional insight was at least two orders of magnitude (as measured by the Kraft-Ebbing-Rachmaninoff method) better than that of the rest of the crew.


The Long-Term Effects of Secondary Sensing

AI Magazine

To integrate robotics into society, it is first necessary to measure and analyze current societal responses to areas within robotics. This article is the second in a continuing series of reports on the societal effects of various aspects of robotics. In my previous article, I discussed the problems of sensor abuse and outlined a program of treatment. However, despite the wide dissemination of that article, there are still numerous empty beds at the Susan Calvin Clinic for the Prevention of Sensor Abuse. Sensor abuse continues unabated despite strong evidence that there is a better way.


Opinion

AI Magazine

One of the major problems faced by businesses in the 1990s is how to produce environmentally friendly products and stay profitable. A pioneering consortium at Carnegie Mellon University (CMU) is using AI, combined with operations research, environmental science, public policy, and other disciplines, to build tools for green engineering. Green engineering is an approach to product development that balances environmental compatibility against economic profitability. It looks at the entire life cycle of the product, from design to disposal, and seeks to extend this life cycle through remanufacturing, reusing, and recycling products and components. Today, industrial solutions to environmental problems focus largely on recycling, figuring out how to dispose of products at the end of their useful lives.



AI Magazine

Editorial, AI Magazine Volume 11, Number 2 (1990) (© AAAI)

In this issue, Luc Steels takes a new and insightful look at knowledge-based systems and provides a synthesis of several different approaches to analyzing expertise. It's a long article but, in my opinion, an important one. I recommend it to anyone with an interest in knowledge-level analysis of expert systems. On the same general topic of expert systems but from a different perspective is the article by Rob Weitz, who proposes a methodology for forecasting the impact of expert systems on the workplace over the near term. Finally, James Hendler, Austin Tate, and Mark Drummond present an extensive survey of AI systems and techniques for plan generation.