"The construction of computer programs that simulate aspects of social behaviour can contribute to the understanding of social processes."
– Nigel Gilbert. Computational Social Science: Agent-based social simulation. Centre for Research on Social Simulation, University of Surrey, Guildford, UK. 6 November 2005; revised and updated 20 May 2007.
We have never been closer to the future than we are now. News spreads across the media about robots taking over our jobs and driverless cars hitting the road with outstanding driving proficiency, while at the same time, virtual assistants make us feel a little less lonely by telling us jokes and spending time with us. In fact, Siri, Alexa, and Cortana have something machines didn't have before: a simulated human consciousness capable of keeping up conversations with humans without being uncovered. AI is now at its most advanced stage of development ever, but… do we need to worry about how smart the robots are getting? Will we ever need to?
Over the last few months, I've been catching up more systematically on what's been happening in machine learning and AI research in the last 5 years or so, and noticed that a lot of people are starting to talk about neural nets developing a 'mental' representation of the problem at hand. As someone who thinks about mental representations a lot, this struck me as odd, because what was being described for the machine learning algorithms did not seem to match what else we know about mental representations. I had been formulating this post when I was pointed to this interview with Judea Pearl. "That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it's still a curve-fitting exercise, albeit complex and nontrivial."
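Pearl's "curve-fitting" characterization can be made concrete with a toy example. The sketch below fits a polynomial to noisy samples of a function; the point is only that, like a trained network, the result is a set of parameters chosen so a curve passes close to the data. The choice of `sin`, the noise level, and the polynomial degree are arbitrary illustrations, not anything from Pearl's interview.

```python
# Minimal curve-fitting illustration: adjust parameters (polynomial
# coefficients) so the resulting curve passes close to observed data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)  # noisy observations

coeffs = np.polyfit(x, y, deg=5)   # "training": fit a degree-5 polynomial
y_hat = np.polyval(coeffs, x)      # the fitted curve's predictions

mse = float(np.mean((y - y_hat) ** 2))
print(f"mean squared error: {mse:.4f}")
```

However skillful the fit, nothing in `coeffs` encodes *why* the data look the way they do, which is the gap Pearl is pointing at.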
Two years ago, Google made an internal video that didn't stay internal for long. Recently obtained by The Verge, it tells the speculative story of how the technology giant might develop a universal model of human behavior by collecting as much data from people as possible. The video, titled "The Selfish Ledger," is a thought experiment that shows how a major institution like Google could make use of the complex data profile built up by each person as they buy, browse, and communicate online. Then, in keeping with tech monoliths' disregard for data privacy, the video suggests the following: What if the ledger could be given a volition or purpose, rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information?
We are on the verge of another revolution in health care: deeply personalized medicine. It's the next computerized step in tailoring medical treatments and drugs to your specific body, your unique anatomy, the specific ways your body works and doesn't, and the way you live your life and keep healthy. But we may soon run into problems of ethics and personal privacy that could make the recent furor over Facebook and data mining look small by comparison. Personalized health and wellness comes from the intersection of improved body-worn sensors, data science, computational physiology, individually customized health assistance and -- if necessary -- highly targeted medical treatment, all coming together at once. As a computer scientist with an interest in complex biological systems -- such as the human body -- I have been working for some time toward this future alongside medical researchers, physicians, and health practitioners.
A new artificial-intelligence algorithm enables a 3D model of a person to be created in just a few seconds from video of their features. Artificial intelligence is used in video games and virtual reality to create 3D models of people and objects, but typically special equipment is required during filming in order to turn video of someone into a 3D figure. New video software is able to convert the footage into a model in seconds from just one angle. A minute-and-a-half-long video shows how the algorithm transforms images of men and women into 3D characters after they turn in place, Science Magazine reported.
Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people's bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle. The system has three stages.
If the environment is included in the simulation, this will require additional computing power -- how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed -- only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don't notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need only extend to the narrow band of properties that we can observe from our planet or solar system spacecraft.
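The cost argument above amounts to a level-of-detail rule: simulate each region only at the fidelity an observer could actually probe. The sketch below is a toy illustration of that rule; the fidelity tiers, distance thresholds, and example regions are all invented for this sketch, not estimates from the text.

```python
# Toy level-of-detail selector: choose a simulation fidelity for each
# region based on how closely a simulated observer can inspect it.
# Tiers and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    observer_distance_m: float  # distance to the nearest simulated observer

def fidelity(region: Region) -> str:
    """Return a coarse fidelity tier for a region (illustrative thresholds)."""
    d = region.observer_distance_m
    if d < 1e3:          # within direct human experience: full detail
        return "fine-grained"
    if d < 1e7:          # reachable by instruments or spacecraft
        return "macroscopic"
    return "compressed"  # distant objects: store only observable properties

regions = [
    Region("city street", 1.0),
    Region("Earth's inner core", 6.4e6),
    Region("Andromeda galaxy", 2.4e22),
]
for r in regions:
    print(f"{r.name}: {fidelity(r)}")
```

The design choice mirrors the text: Earth's deep interior gets only macroscopic treatment, and distant galaxies keep just the narrow band of properties visible from here.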
So you want to build a cognitive application, but you want it to be great. You want it to be useful, exciting, and inspiring -- in essence, to create a truly cognitive experience. You might be wondering: what is a cognitive experience? Should the application I'm designing be cognitive?