Jean-Michel Besnier is a French philosopher who teaches at Sorbonne University in Paris. His research focuses on the philosophical and ethical impact of science and technology on individual and collective representations and imagination. We met with him to talk about the consequences of the explosion of robotics and artificial intelligence (AI) in the healthcare sector, especially since the beginning of the Covid-19 pandemic.

MedicalExpo e-magazine: Can you give us your definition of artificial intelligence?

Jean-Michel Besnier: I have the same definition as everyone else. I am more attentive to the conceptual extension of the notion of artificial intelligence, which originally referred to something rather simple: devices capable of solving problems automatically, by algorithm.
He is working on improving the performance and safety of our machine learning models. Originally from Zagreb, Croatia, Vilim studied electrical engineering and computer science for his bachelor's degree. He became increasingly intrigued by neuroscience, which became the focus of his master's degree at LMU in Munich. Vilim stayed on in Munich to obtain his PhD at the Max Planck Institute of Neurobiology, where he investigated neural processing for motor control with whole-brain imaging. Machine learning has always been a part of Vilim's work and studies, and he is excited to focus on it completely in his new role.
Aging (spelled ageing in British English) is the process of becoming older. It involves a series of functional changes that appear over time and are not the result of illness or accident, but occur as a consequence of accumulating disorders in the body's structures and functions. It is an unpreventable chronological, social and biological process, genetically determined and environmentally modulated. Let's see now how aging and life expectancy are related. Among mammals, life expectancy varies hugely, ranging from 3–4 years in small rodents to as long as 150–200 years in bowhead whales. As for us humans, we can potentially live for around 120 years, and an international research team has just identified more than 2,000 new genes linked to longevity in humans (linked to DNA repair, coagulation and the inflammatory response) in an evolutionary comparative genomics study that included 57 species of mammals.
Summary: A new AI algorithm can detect behavioral symptoms associated with anxiety with over 90% accuracy. Researchers are using artificial intelligence (AI) to detect behavioral signs of anxiety with more than 90 percent accuracy, and suggest that AI could have future applications for addressing mental health and well-being. Their research is published in the journal Pervasive and Mobile Computing. "In the two years since the onset of COVID-19, and one climate disaster after another, more and more people are experiencing anxiety," says Simon Fraser University visiting professor and social psychologist Gulnaz Anjum. "Our research appears to show that AI could provide a highly reliable measurement for recognizing the signs that someone is anxious."
About the Company: Founded in 2021 by Sahaj Garg (CTO) and Tanay Kothari (CEO), Wispr AI is a developer of next-generation neural interfaces for seamless interaction with immersive technology. The company is working with leading neuroscientists, hardware engineers, machine learning researchers, and product engineers to bring frontier technology to the mass consumer market. Wispr AI is looking to use deliberate thought as digital input, allowing users to interface in a seamless manner with an increasingly digital world. The startup is doing this by combining the latest technologies in the fields of deep learning, electrical interfaces, and neuroscience.
China is pursuing what its leaders call a "first-mover advantage" in artificial intelligence (AI), facilitated by a state-backed plan to achieve breakthroughs by modeling human cognition. While not unique to China, the research warrants concern since it raises the bar on AI safety, leverages ongoing U.S. research, and exposes U.S. deficiencies in tracking foreign technological threats. The article begins with a review of the statutory basis for China's AI-brain program, examines related scholarship, and analyzes the supporting science. China's advantages are discussed along with the implications of this brain-inspired research. Recommendations to address our concerns are offered in conclusion. All claims are based on primary Chinese data.[1]

Analysts familiar with China's technical development programs understand that in China things happen by plan, and that China is not reticent about announcing these plans. On July 8, 2017, China's State Council released its "New Generation AI Development Plan"[2] to advance Chinese artificial intelligence in three stages, at the end of which, in 2030, China would lead the world in AI theory, technology, and applications.[3]
Wispr AI, the neurotechnology company aimed at developing the next generation of human-computer interfaces, announced it has closed $4.6 million in seed funding co-led by New Enterprise Associates (NEA) and 8VC. Additional participants in the financing include CTRL-Labs CSO & Co-founder Josh Duyan, Berkeley Neuroscience Professor & iota Biosciences Co-CEO Jose Carmena, Warby Parker CEO Dave Gilboa, Stanford NLP Professor Chris Manning, Salesforce Chief Scientist Richard Socher, Nesos CTO Vivek Sharma and Whoop Founder & CEO Will Ahmed. Wispr AI plans to use the funding to accelerate development of the first functional thought-powered digital interface. The company is bringing together a formidable team of world-class neuroscientists, hardware engineers, ML engineers and product engineers who are passionate about changing the world. Wispr AI is building a wearable that can convert deliberate thought into action and high-bandwidth digital input.
In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically neural signals called spike trains, can be fed into machine-learning algorithms to improve the usability of brain-computer interfaces (BCI). BCI systems work by analyzing a person's brain signals and translating that neural activity into commands, allowing the user to control digital devices like computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome -- when a person is fully conscious but unable to move or communicate. Various forms of BCI are already available, from caps that measure brain signals to devices implanted in brain tissues.
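The pipeline described above, in which spike trains are decoded into commands, can be illustrated with a minimal sketch. This is not the paper's method: the Bernoulli approximation to a Poisson spike generator, the threshold decoder, and the 20 Hz cutoff are all illustrative assumptions.

```python
import random

def synthetic_spike_train(rate_hz, duration_s=1.0, bin_s=0.01, seed=None):
    """Generate a synthetic spike train as a list of 0/1 time bins,
    using a Bernoulli approximation to a Poisson process."""
    rng = random.Random(seed)
    n_bins = int(duration_s / bin_s)
    p = rate_hz * bin_s  # spike probability per bin (assumes p << 1)
    return [1 if rng.random() < p else 0 for _ in range(n_bins)]

def decode(train, threshold_hz=20, duration_s=1.0):
    """Toy decoder: map the observed firing rate to a binary command."""
    rate = sum(train) / duration_s
    return "move" if rate > threshold_hz else "rest"

# A high-rate neuron should decode to "move", a low-rate one to "rest".
fast = synthetic_spike_train(rate_hz=40, seed=1)
slow = synthetic_spike_train(rate_hz=5, seed=2)
print(decode(fast), decode(slow))
```

A real BCI decoder is trained on many channels of recorded (or, as in the paper, synthetic) spike trains, but the structure is the same: neural activity in, a command out.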
Primary malignancies in adult brains are fatal worldwide. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image-analysis tasks, including distinguishing tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rational interpretations that are an essential step toward translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate explainability into the training process itself.
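One common way to visualize what a trained model relies on is occlusion sensitivity: hide part of the input and measure how much the model's output drops. Below is a minimal sketch under stated assumptions; the single-pixel "model" is a toy stand-in, not any real tumor classifier.

```python
def occlusion_saliency(model, image, baseline=0.0):
    """Occlusion sensitivity: zero out each pixel in turn and record
    how much the model's score drops. Large drops mark pixels the
    model depends on."""
    base_score = model(image)
    h, w = len(image), len(image[0])
    saliency = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            occluded = [row[:] for row in image]  # copy the image
            occluded[i][j] = baseline             # occlude one pixel
            saliency[i][j] = base_score - model(occluded)
    return saliency

# Toy "model" (an assumption, not a real classifier): it scores an
# image by the brightness of its centre pixel.
model = lambda img: img[1][1]
image = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
sal = occlusion_saliency(model, image)
print(sal[1][1])  # only the centre pixel affects this model's score
```

In practice such maps are computed patch-by-patch over MRI slices and overlaid on the scan, so clinicians can check whether high-saliency regions coincide with the lesion.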