Talking Digital Future: Artificial Intelligence

#artificialintelligence

I chose artificial intelligence as my next topic, as it can be considered one of the best-known technologies, and the one people imagine when they talk about the future. But the right question would be: What is artificial intelligence? Artificial intelligence is not something that just happened in 2015 and 2016. It has been around as an idea for a hundred years, but as a science, we started seeing developments in the 1950s. So, this is quite an old tech topic already, but because of the kinds of technology we have access to today -- specifically, processing performance and storage -- we're starting to see significant leaps in AI development. When I started the course entitled "Foundations of the Fourth Industrial Revolution (Industry 4.0)," I got deeper into the topic of artificial intelligence. One of the differences between the third industrial revolution -- defined by the microchip and digitization -- and the fourth industrial revolution is its scope and velocity, its breakthroughs in medicine and biology, and the widespread use of artificial intelligence across our society. Thus, AI is not only a product of Industry 4.0 but also an impetus for why the fourth industrial revolution is happening now and will continue. I think there are two ways to understand AI: the first is to try to give a quick definition of what it is, but the second is to also think about what it is not.


3 human qualities digital technology can't replace in the future economy: experience, values and judgement

#artificialintelligence

Some very intelligent people – including Stephen Hawking, Elon Musk and Bill Gates – seem to have been seduced by the idea that, because computers are becoming ever-faster calculating devices, at some point relatively soon we will reach and pass a "singularity" at which computers will become "more intelligent" than humans. Some are terrified that a society of intelligent computers will (perhaps violently) replace the human race, echoing films such as The Terminator; others – very controversially – see the development of such technologies as an opportunity to evolve into a "post-human" species. Already, some prominent technologists, including Tim O'Reilly, are arguing that we should replace current models of public services, not just in infrastructure but in human services such as social care and education, with "algorithmic regulation". Algorithmic regulation proposes that the role of human decision-makers and policy-makers be replaced by automated systems that compare the outcomes of public services to desired objectives through the measurement of data, and make automatic adjustments to address any discrepancies. Not only does that approach cede far too much control over people's lives to technology; it fundamentally misunderstands what technology is capable of doing. For both ethical and scientific reasons, in human domains technology should support us in taking decisions about our lives; it should not take them for us. At the MIT Sloan Initiative on the Digital Economy last week I got a chance to discuss some of these issues with Andy McAfee and Erik Brynjolfsson, authors of "The Second Machine Age", recently highlighted by Bloomberg as one of the top books of 2014. Andy and Erik compare the current transformation of our world by digital technology to the last great transformation, the Industrial Revolution.
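
The excerpt above gives a verbal description of how algorithmic regulation is supposed to work: measure outcomes, compare them to an objective, adjust automatically. As a purely illustrative sketch of that feedback loop -- the metric, the proportional adjustment rule, and every name and number below are my own assumptions, not anything O'Reilly has specified -- it might look like this in Python:

```python
# Minimal sketch (an assumption for illustration, not O'Reilly's actual
# proposal) of an "algorithmic regulation" loop: measure service outcomes,
# compare them to a desired objective, and adjust a policy setting.

def measure_outcome(observations: list[float]) -> float:
    """Aggregate observed service outcomes into one metric (here: the mean)."""
    return sum(observations) / len(observations)

def regulate(setting: float, observed: float, target: float,
             gain: float = 0.1) -> float:
    """Proportional adjustment: nudge the setting toward the target outcome."""
    return setting + gain * (target - observed)

# Hypothetical example: a resourcing level regulated against a target score.
setting = 1.0
for batch in ([0.62, 0.58, 0.66], [0.70, 0.74, 0.69]):
    observed = measure_outcome(batch)
    setting = regulate(setting, observed, target=0.8)
    print(f"observed={observed:.2f} -> new setting={setting:.3f}")
```

Even this toy version makes the article's criticism concrete: every judgement in the loop (which metric to measure, which target to aim for, how aggressively to adjust) is a human policy choice hidden inside the automation.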


The Fourth Industrial Revolution – Patrick Stepanek – Medium

#artificialintelligence

Each technological revolution has created new environments for human beings. Our societies are shaped by the infrastructure we use to survive. In the modern age, we are seeing distinct changes in the ways people, businesses and governments operate. The first innovations of the initial industrial revolution occurred in the United Kingdom. In the late 18th and early 19th centuries, Britain restricted the sale of its machinery and kept its intellectual property from leaving the boundaries of the powerful nation. Once that knowledge reached Belgium, it spread like wildfire around the world.