Simulation of Human Behavior

Talespin's virtual human platform uses VR and AI to teach employees soft skills


Training employees to perform specific tasks isn't difficult, but building their soft skills -- their interactions with management, fellow employees, and customers -- can be more challenging, particularly if there aren't people around to practice with. Virtual reality training company Talespin announced today that it is leveraging AI to tackle that challenge, using a new "virtual human platform" to create realistic simulations for employee training. Unlike traditional training, which might consist of passively watching a video or answering canned multiple-choice questions, Talespin's system has a trainee interact with a virtual human powered by AI, speech recognition, and natural language processing. Because the interactions use VR headsets and controllers, the hardware can track a trainee's gaze, body movement, and facial expressions during the session. Talespin's virtual character converses realistically, guiding trainees through branching narratives with natural mannerisms and believable speech.
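The "branching narrative" structure such systems rely on can be sketched as a simple dialogue graph, where the intent recognized in the trainee's utterance selects the next node. This is a generic illustration of the concept only; the node names and keywords are invented and do not reflect Talespin's actual implementation:

```python
# Minimal branching-dialogue sketch: each node holds a prompt from the
# virtual human and maps keywords found in the trainee's reply to a
# next node. (Hypothetical scenario; names and keywords are invented.)
DIALOGUE = {
    "start": ("I'm really unhappy with my last review.", {
        "listen": "empathize",
        "defend": "escalate",
    }),
    "empathize": ("Thank you for hearing me out.", {}),
    "escalate": ("You're not even listening to me!", {}),
}

def step(node, trainee_reply):
    """Advance the dialogue based on a keyword found in the reply."""
    prompt, branches = DIALOGUE[node]
    for keyword, nxt in branches.items():
        if keyword in trainee_reply.lower():
            return nxt
    return node  # no recognized intent: stay on this node and re-prompt

state = "start"
state = step(state, "I want to listen and understand your concerns.")
print(state)  # -> empathize
```

In a real system, the keyword match would be replaced by a speech-recognition and intent-classification pipeline, but the graph traversal itself works the same way.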

Virtual Humans


There is an interesting move underway to establish a pan-European AI research federation -- a sort of decentralised CERN for AI. From their website: "CLAIRE is an initiative by the European AI community that seeks to strengthen European excellence in AI research and innovation. To achieve this, CLAIRE proposes the establishment of a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe that achieves 'brand recognition' similar to CERN." "The CLAIRE initiative aims to establish a pan-European network of Centres of Excellence in AI, strategically located throughout Europe, and a new, central facility with state-of-the-art, 'Google-scale', CERN-like infrastructure -- the CLAIRE Hub -- that will promote new and existing talent and provide a focal point for exchange and interaction of researchers at all stages of their careers, across all areas of AI. The CLAIRE Hub will not be an elitist AI institute with permanent scientific staff, but an environment where Europe's brightest minds in AI meet and work for limited periods of time. This will increase the flow of knowledge among European researchers and back to their home institutions."

Detroit auto show models -- the human ones -- embrace their changing role in the #MeToo era

The Japan Times

DETROIT - Every year at the Detroit auto show, good-looking women -- and men -- are deployed by the carmakers to present their new vehicles. But with the shock wave created by the #MeToo movement still reverberating across the U.S., there are fewer auto show models of the human variety -- and they are not just pretty faces. The "product specialists" still have picture-perfect smiles, but they can also tick off each car's features and prices with such assurance that the iPads they carry for reference seem merely decorative. Auto companies are also making sure their fleets of specialists are ethnically and physically diverse. Perched on stilettos, Priscilla Tejeda is working for Toyota.

CERN Project Sees Orders-of-Magnitude Speedup with AI Approach


An award-winning effort at CERN has demonstrated the potential to significantly change how the physics-based modeling and simulation communities view machine learning. The CERN team demonstrated that AI-based models can act as orders-of-magnitude-faster replacements for computationally expensive tasks in simulation, while maintaining a remarkable level of accuracy. Dr. Federico Carminati (Project Coordinator, CERN) points out, "This work demonstrates the potential of 'black box' machine-learning models in physics-based simulations." A poster describing this work was awarded the prize for best poster in the category 'programming models and systems software' at ISC'18. This recognizes the importance of the work, which was carried out by Dr. Federico Carminati, Gul Rukh Khattak, and Dr. Sofia Vallecorsa at CERN, as well as Jean-Roch Vlimant at Caltech.
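The core idea of using a trained model as a fast replacement for an expensive simulation can be shown in miniature. The sketch below, which is a generic surrogate-model illustration and not the CERN team's actual method or code, trains a tiny two-layer network on samples from a deliberately slow "simulation" function and then evaluates the cheap approximation instead:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a costly physics simulation (hypothetical):
    # artificially slowed down to mimic per-event compute cost.
    time.sleep(0.001)
    return np.sin(3 * x) + 0.5 * x

# Run the expensive simulation once to collect training data.
x_train = rng.uniform(-1, 1, size=(256, 1))
y_train = np.array([expensive_simulation(float(x)) for x in x_train]).reshape(-1, 1)

# Tiny MLP surrogate: 1 -> 32 -> 1 with tanh activation, trained by
# full-batch gradient descent on mean-squared error.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(3000):
    h = np.tanh(x_train @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y_train
    # Backpropagate through the two layers.
    gW2 = h.T @ err / len(x_train); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x_train.T @ dh / len(x_train); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(x):
    h = np.tanh(np.array([[x]]) @ W1 + b1)
    return float(h @ W2 + b2)

# The surrogate trades a small approximation error for evaluations
# that skip the expensive simulation entirely.
print(abs(surrogate(0.3) - expensive_simulation(0.3)))
```

The CERN work reportedly applied this idea at far larger scale, with deep generative models standing in for detector simulation; the principle -- pay the simulation cost once for training data, then amortize it across millions of fast model evaluations -- is the same.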

AI Ethics: When Robots Outsmart Humans


We have never been closer to the future than we are now. News is spreading across the media about robots taking over our jobs and driverless cars hitting the road with outstanding driving proficiency, while virtual assistants make us feel a bit less lonely, telling us jokes and spending time with us. In fact, Siri, Alexa, and Cortana have something machines didn't have before: a simulated human consciousness capable of keeping up a conversation with humans without being found out. AI is now at its most advanced stage of development ever, but… do we need to worry about how smart the robots are getting? Will we ever need to?

Does machine learning produce mental representations?


Over the last few months, I've been catching up more systematically on what's been happening in machine learning and AI research over the last five years or so, and I noticed that a lot of people are starting to talk about a neural net developing a 'mental' representation of the problem at hand. As someone who's preoccupied with mental representations a lot, this struck me as odd, because what was being described for the machine learning algorithms did not seem to match what else we know about mental representations. I had been formulating this post when I was pointed to this interview with Judea Pearl: "That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it's still a curve-fitting exercise, albeit complex and nontrivial."

[D] Does (or can) machine learning produce mental representations? • r/MachineLearning


I tried to show that in my post. One problem is that the current inputs lose a lot of the information that human mental representations draw on, since NNs only work on vectors of numbers.
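A concrete way to see the information loss the comment describes is the simplest numeric encoding of words, one-hot vectors. This is a toy illustration (not an example from the post itself): the encoding makes every word an orthogonal unit vector, so relations a human representation carries, such as "cat" being more like "dog" than like "justice", are simply absent from the input.

```python
import numpy as np

# Hypothetical three-word vocabulary; one-hot encoding turns each word
# into an orthogonal unit vector.
vocab = ["cat", "dog", "justice"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

def distance(a, b):
    """Euclidean distance between the encodings of two words."""
    return float(np.linalg.norm(one_hot[a] - one_hot[b]))

# Semantically, "cat" is closer to "dog" than to "justice", but the
# one-hot vectors are all equally far apart: the input encoding has
# already discarded that relational information.
print(distance("cat", "dog"), distance("cat", "justice"))
```

Learned embeddings recover some such structure from co-occurrence statistics, but only whatever structure is present in the training signal, which is part of the gap the post is pointing at.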

Google shows how to theoretically control users' behavior based on their data


Two years ago, Google made an internal video that didn't stay internal for long. Recently obtained by The Verge, it tells the speculative story of how the technology giant might develop a universal model of human behavior by collecting as much data from people as possible. The video, titled "The Selfish Ledger," is a thought experiment that shows how a major institution like Google could make use of the complex data profile built up by each person as they buy, browse, and communicate online. Then, true to form for tech monoliths' disregard for data privacy, the video suggests the following: What if the ledger could be given a volition or purpose, rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information?

The virtual human is here -- how much are you willing to share about yourself with the world?

FOX News

We are on the verge of another revolution in health care: deeply personalized medicine. It's the next computerized step in tailoring medical treatments and drugs to your specific body, your unique anatomy, the specific ways your body works and doesn't, and the way you live your life and stay healthy. But we may soon run into problems of ethics and personal privacy that could make the recent furor over Facebook and data mining look small by comparison. Personalized health and wellness comes from the intersection of improved body-worn sensors, data science, computational physiology, individually customized health assistance and -- if necessary -- highly targeted medical treatment, all coming together at once. As a computer scientist with an interest in complex biological systems -- such as the human body -- I have been working for some time toward this future alongside medical researchers, physicians, and health practitioners.

Incredible moment artificial intelligence software creates a 3D model of a person in just seconds

Daily Mail

A new artificial intelligence algorithm enables a 3D model of a person to be created in just a few seconds from video of their features. Artificial intelligence is used in video games and virtual reality to create 3D models of people and objects, but it typically requires special equipment during filming to turn the video of someone into a 3D figure. The new software is able to produce the model in seconds from footage shot from just one angle. A minute-and-a-half-long video shows how the algorithm transforms images of men and women into 3D characters as they turn in place, Science Magazine reported.