Simulation of Human Behavior


The Differences Between AI and Machine Learning - AI Time Journal

#artificialintelligence

Contrary to what mass media might have you believe, artificial intelligence (AI) is not a new concept. AI was first mathematically conceptualized in 1950 by Alan Turing, a British polymath. Turing proposed that machines could use available information and logic to solve problems and make decisions the same way humans do. Although no tangible program came out of Turing's speculations, Allen Newell, Cliff Shaw, and Herbert Simon soon proved that AI was not simply science fiction. In 1955, Newell, Shaw, and Simon created the first "artificial intelligence" program, Logic Theorist.


What's an AI-Powered Virtual Human?

#artificialintelligence

According to the "Digital Virtual Human Depth Industry Report", the overall market size of China's digital virtual human industry will reach 270 billion by 2030. A digital virtual human has the appearance of a real person, down to the fineness of the skin. It behaves like a human, expressing itself through language, facial expressions, and body movements, and it can interact with people in real time in a way that is almost indistinguishable from a real person. The mainstream technical routes for virtual digital humans divide into AI-driven and human-driven approaches. A human-driven digital human is operated by a real person: the operator watches the user's live video feed and responds in real time, while a motion capture system maps the operator's expressions and movements onto the virtual human's image, so that the avatar interacts with the user.
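
For intuition, here is a minimal Python sketch of that human-driven relay loop. All three helper functions are hypothetical stand-ins, since the article names no specific motion-capture SDK, renderer, or streaming stack:

```python
import time

# Minimal relay loop for a human-driven digital human. The helpers
# below are hypothetical placeholders, not any real product's API.

def capture_operator_pose() -> dict:
    """Read the operator's current expression and body pose (mocap)."""
    return {"expression": "smile", "gesture": "wave"}

def render_avatar_frame(pose: dict) -> str:
    """Retarget the captured pose onto the virtual human's image."""
    return f"frame<{pose['expression']}, {pose['gesture']}>"

def stream_to_user(frame: str) -> None:
    """Deliver the rendered frame to the end user in real time."""
    print("streaming", frame)

def human_driven_avatar(fps: int = 30, max_frames: int = 90) -> None:
    """Operator sees the user's video; the user sees the driven avatar."""
    budget = 1.0 / fps
    for _ in range(max_frames):  # bounded loop, just for the sketch
        start = time.monotonic()
        frame = render_avatar_frame(capture_operator_pose())
        stream_to_user(frame)
        # Sleep off the remaining frame budget to stay near real time.
        time.sleep(max(0.0, budget - (time.monotonic() - start)))
```

The point of the loop is the tight capture-retarget-stream cycle: any latency the operator's motion capture adds is latency the user sees in the avatar.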


'Good Night Oppy': How a documentary captures the human-robot bond

Christian Science Monitor | Science

Mars rovers Opportunity and Spirit departed Earth in 2003. Upon successfully touching down on the red planet, they were only expected to last about 90 days. The scientists and engineers at NASA were flabbergasted that the pair survived for many years. In his latest documentary, "Good Night Oppy," director Ryan White examines the doting relationship between the control room crew members – people from across the globe – and their robotic progeny. It's a story of gumption: When a machine gets mired in quicksand 140 million miles away, how do you rescue it?


Group Dynamics: Psychology of Group Behavior

#artificialintelligence

Inclusion and Identity – Learn how we internalize group values and goals and how our social groups become part of the way we identify ourselves. We'll explore how these identity processes influence our behavior and how they can lead to a sense of group cohesion.
Group Formation Principles – Learn what types of people are attracted to group settings and what factors contribute to attraction and relationship formation. We'll also explore the different individual motivations that drive people into group settings and ways of overcoming social anxiety and loneliness.
Group Development and Group Cohesion – Learn how all groups go through a predictable set of stages and how these stages influence behavior.


Physics-Based Simulation and the Future of the Metaverse

#artificialintelligence

Some of the world's biggest companies are going all-in on the metaverse. One you may not know about is Ansys, a US public company that makes engineering simulation software and has been around since 1970. Dr. Prith Banerjee is its Chief Technology Officer, and I spoke to him last week about his vision for the metaverse -- and specifically, why he thinks the metaverse can't reach its full potential without "optimum physics-based modeling and simulation." Ansys, it turns out, already has a number of partnerships with companies building the metaverse -- including global telecoms companies, microchip and GPU manufacturers, data center and storage companies, and "all the cloud providers," according to Banerjee. He said that Ansys offers these customers a mix of hardware and software expertise, covering everything from building hardware to designing structural and electromagnetic systems.


Nvidia Unveils Virtual Human Builder for Metaverse Characters - Voicebot.ai

#artificialintelligence

Nvidia has introduced a new platform for building virtual beings to interact with in the digital realms of the metaverse, which Nvidia refers to as its Omniverse. The Nvidia Omniverse Avatar Cloud Engine (ACE) provides a collection of AI models and related tools for users to design the AI creations that will populate their virtual worlds, including synthetic voices and visual media. The cloud-based ACE catalog streamlines building virtual beings and applies Nvidia's computing power to setting up and embedding the AI avatars in digital worlds. The resulting synthetic being can converse in multiple languages, offer recommendations based on the conversation, and even perceive its digital environment well enough to interact with objects around it. The system uses Nvidia's Unified Compute Framework of software products, including the Riva speech AI technology and NeMo Megatron for natural language understanding with large language models.
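
The article does not detail ACE's actual APIs, so the Python sketch below only illustrates the speech-in, speech-out loop it describes. Every function here is a hypothetical placeholder for the roles Riva (speech recognition and synthesis) and NeMo Megatron (language understanding) play inside ACE, not Nvidia's real interfaces:

```python
# Schematic conversational loop: speech recognition -> language
# model -> speech synthesis. All functions are toy placeholders.

def transcribe(audio: bytes) -> str:
    """Speech-to-text placeholder (the role Riva ASR plays in ACE)."""
    return audio.decode("utf-8", errors="ignore")

def generate_reply(user_text: str, history: list[str]) -> str:
    """LLM placeholder (NeMo Megatron's role): a canned recommendation."""
    return f"Based on {user_text!r}, here is a recommendation."

def synthesize(text: str) -> bytes:
    """Text-to-speech placeholder (the role Riva TTS plays in ACE)."""
    return text.encode("utf-8")

def avatar_turn(audio_in: bytes, history: list[str]) -> bytes:
    """One conversational turn of an ACE-style avatar: hear, think, speak."""
    user_text = transcribe(audio_in)
    reply = generate_reply(user_text, history)
    history.extend([user_text, reply])  # keep context for later turns
    return synthesize(reply)

# Example: one turn of dialogue.
history: list[str] = []
print(avatar_turn(b"what should I watch tonight?", history))
```

Keeping the running `history` is what lets such an avatar offer recommendations "based on the conversation" rather than treating each utterance in isolation.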


The cognitive dissonance of watching the end of Roe unfold online

MIT Technology Review

"This is it," said SCOTUSblog media editor Katie Barlow on TikTok, posting live from outside the court. Barlow was one of the few correspondents on camera the moment the opinion was released. She was silent for a few seconds, glancing down at her phone, nodding, before looking up again and succinctly announcing the crux of it: "The Constitution does not confer a right to abortion." A reader on TikTok commented that it was hard to watch live as Barlow silently read the opinion, "to see the reality of the decision wash over you," adding: "Thank you for your work." It was a fitting way to enter the official post-Roe age: on platforms that can feel so personal to their publics, even as history unfolds.


How are Realistic Virtual Humans made?

#artificialintelligence

The "metaverse" will be built based on realistic virtual persons, and this foundation will facilitate distant presence, cooperation, education, and entertainment. To make this possible, new 3D virtual human creation tools must be developed that are easy to use and can be easily animated. Traditionally, this has required a lot of time and money spent by the AI artist. Because of this, these methods are not scalable. Allowing people to build their own avatars from one or more photographs is a more realistic solution.


New Electronics - Altair and LG embrace AI-based simulation for product development

#artificialintelligence

Together, Altair and LG will promote research and development and build a simulation platform. The two companies will share information in priority fields of research, including computer-aided engineering (CAE), data analytics, automation, and more. In addition, they plan to build a more advanced digital transformation development environment by integrating LG's product development technology with Altair's simulation and AI technology. The companies will also cooperate on CAE/automation platform development and on the digital twin technology LG uses to develop products. "Altair has the advanced simulation, high-performance computing, and data analytics technology to support manufacturing companies as they develop products quickly and efficiently," said Sam Mahalingam, chief technology officer, Altair.


AFAFed -- Protocol analysis

#artificialintelligence

In this paper, we design AFAFed, analyze its convergence properties, and address its implementation aspects. AFAFed is a novel Asynchronous Fair Adaptive Federated learning framework for stream-oriented IoT application environments, which are characterized by time-varying operating conditions, heterogeneous resource-limited devices (i.e., coworkers), non-i.i.d. local training data, and unreliable communication links. The key novelty of AFAFed is the synergistic co-design of: (i) two sets of adaptively tuned tolerance thresholds and fairness coefficients at the coworkers and the central server, respectively; and (ii) a distributed adaptive mechanism that allows each coworker to adaptively tune its own communication rate. The convergence of AFAFed under (possibly) non-convex loss functions is guaranteed by a set of new analytical bounds, which formally unveil how the AFAFed convergence rate is affected by a number of Federated Learning (FL) parameters, such as the first and second moments of the per-coworker number of consecutive model updates, data skewness, communication packet-loss probability, and the maximum/minimum values of the (adaptively tuned) mixing coefficient used for model aggregation.
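
The abstract does not give AFAFed's actual update rules, so the Python sketch below only gestures at the idea: an asynchronous server that damps stale updates and up-weights rarely heard coworkers through a bounded mixing coefficient. The staleness decay and fairness weighting here are illustrative assumptions, not the authors' formulas:

```python
import numpy as np

class AsyncFairServer:
    """Schematic asynchronous federated server with fairness weights.

    Illustrative only: the staleness decay and fairness update below
    are assumed forms, not AFAFed's actual adaptive mechanisms.
    """

    def __init__(self, model_dim: int, n_coworkers: int):
        self.global_model = np.zeros(model_dim)
        self.version = 0                      # global model version counter
        self.updates = np.zeros(n_coworkers)  # per-coworker update counts

    def receive(self, k: int, local_model: np.ndarray,
                base_version: int) -> np.ndarray:
        """Asynchronously merge coworker k's model into the global one."""
        staleness = self.version - base_version
        alpha = 1.0 / (1.0 + staleness)          # damp stale contributions
        fairness = 1.0 / (1.0 + self.updates[k])  # favor rare contributors
        # Bounded mixing coefficient, echoing the max/min values that
        # appear in the paper's convergence bounds.
        mix = np.clip(alpha * fairness, 0.05, 0.5)
        self.global_model = (1 - mix) * self.global_model + mix * local_model
        self.updates[k] += 1
        self.version += 1
        return self.global_model
```

Because coworkers report in at different, time-varying rates, some bounded, adaptively weighted mixing of this kind is what keeps fast devices from dominating the aggregate while stale updates are prevented from dragging the model backwards.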