The author and publisher of this book have used their best efforts in preparing this book and the programs contained in it. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs. Please submit any corrections to email@example.com.
It has been reported, and in today's fake-news environment the report may not be true, though it certainly could be, that Elon Musk said he was more afraid of AI than of nukes. Given his stature in the business world, which may be crumbling, he still brings to the forefront of news and business thinking the idea that AI could bring about a real, man-made global extinction event. Yet what he was really talking about is not some terrible global brain that kills everyone, but something more like a virus that kills us from within. In the following companion to my other article, I will explore how algorithms, or rules of thumb, could kill everything around us, humans included. That is, you don't have to kill humans directly; all you have to do is kill the oxygen, the water, or the food, or create viruses, like a super flu, that kill us.
"The greatest bar to wise action and the greatest source of fear is ignorance. A tiny candle gives misleading light and throws huge and ominous shadows. The sun at noon gives great light and throws no shadows. It is time to get this whole problem of men and machines under a blazing noonday beam. Computers will never rob man of his initiative or replace the need for his creative thinking.
It's not surprising that the public's imagination has been ignited by Artificial Intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse and frustrated by its slow development. But like so many advanced technologies that were conceived before their time, Artificial Intelligence has come to be widely misunderstood -- co-opted by Hollywood, mischaracterized by the media, portrayed as everything from savior to scourge of humanity.

Those of us engaged in serious information science and in its application in the real world of business and society understand the enormous potential of intelligent systems. The future of such technology -- which we believe will be cognitive, not "artificial" -- has very different characteristics from those generally attributed to AI, spawning different kinds of technological, scientific and societal challenges and opportunities, with different requirements for governance, policy and management.

Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans naturally. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. They are made possible by advances in a number of scientific fields over the past half-century, and are different in important ways from the information systems that preceded them. They generate not just answers to numerical problems, but hypotheses, reasoned arguments and recommendations about more complex -- and meaningful -- bodies of data. What's more, cognitive systems can make sense of the 80 percent of the world's data that computer scientists call "unstructured." This enables them to keep pace with the volume, complexity and unpredictability of information and systems in the modern world.

None of this involves either sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand -- and act upon -- the complex systems of our society. This augmented intelligence is the necessary next step in our ability to harness technology in the pursuit of knowledge, to further our expertise and to improve the human condition. That is why it represents not just a new technology, but the dawn of a new era of technology, business and society: the Cognitive Era.

The success of cognitive computing will not be measured by Turing tests or a computer's ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved. Here at IBM, we have been working on the foundations of cognitive computing technology for decades, combining more than a dozen disciplines of advanced computer science with 100 years of business expertise.