

3D-printed brain implant 'could be used to treat human patients with paralysis'

Daily Mail - Science & tech

Scientists are creating 3D-printed brain chips that could be used to treat nervous system conditions, including paralysis, by detecting and firing electrical signals. The chip has been developed and successfully tested in animals, and researchers are now hopeful it can be adapted for use in humans. It will also be able to connect to a computer, offering a host of next-generation medical benefits, scientists say. Linking the human brain to a computer is usually the stuff of science fiction writers and filmmakers, but moves are underway to make the technology a reality. Last month, Elon Musk hosted a high-profile event presenting the latest developments in his own brain-chip venture, Neuralink.


Challenges of Comparing Human and Machine Perception

#artificialintelligence

Deep neural networks (DNNs) have become very successful in artificial intelligence, and they now directly influence our lives through image recognition, automated machine translation, precision medicine, and many other applications. There are also striking parallels between these modern artificial algorithms and biological brains: the two systems resemble each other in function (both can solve surprisingly complex tasks) and in anatomical structure (both contain many hierarchically organized neurons). Given these apparent similarities, many questions arise: How similar are human and machine vision, really? Can we understand human vision by studying machine vision?
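
The "hierarchically organized neurons" point is easy to make concrete. Below is a minimal, illustrative sketch (not from the article) of a small convolutional network whose successive layers build increasingly abstract visual features, loosely analogous to the stages of the primate visual hierarchy; the layer sizes and class names are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    """Illustrative hierarchical network: each stage consumes the previous
    stage's output, loosely mirroring early-to-late visual areas."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges/colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures/parts
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # high-level object features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

# A batch of four 32x32 RGB images yields four class-score vectors.
logits = TinyVisionNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

It is this layer-on-layer structure, not any single component, that grounds the comparison with biological vision the article discusses.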


AI supported test for very early signs of glaucoma progression - Neuroscience News

#artificialintelligence

Summary: A new artificial intelligence algorithm can detect the progression of glaucoma up to 18 months earlier than conventional methods. A new test can detect glaucoma progression 18 months earlier than the current gold-standard method, according to results from a UCL-sponsored clinical trial. The technology, supported by an artificial intelligence (AI) algorithm, could help accelerate clinical trials and may eventually be used in detection and diagnostics, according to the Wellcome-funded study published today in Expert Review of Molecular Diagnostics. Lead researcher Professor Francesca Cordeiro (UCL Institute of Ophthalmology, Imperial College London, and Western Eye Hospital Imperial College Healthcare NHS Trust) said: "We have developed a quick, automated and highly sensitive way to identify which people with glaucoma are at risk of rapid progression to blindness." Glaucoma, the leading global cause of irreversible blindness, affects over 60 million people, a figure predicted to double by 2040 as the global population ages.


How we remember could help AI be less forgetful

#artificialintelligence

A brain mechanism known as "replay" inspired researchers at Baylor College of Medicine to develop a new method for protecting deep neural networks, the workhorses of artificial intelligence (AI), from forgetting what they have previously learned. The study, in the current edition of Nature Communications, has implications for both neuroscience and deep learning. Deep neural networks are the main drivers behind the recent rapid progress in AI, and they are extremely good at learning to solve individual tasks. However, when they are trained on a new task, they typically lose the ability to solve previously learned tasks entirely, a problem known as catastrophic forgetting.
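
The article gives no implementation detail, and the Baylor study develops a brain-inspired, generative variant of replay, but the basic idea is straightforward to sketch: interleave examples from earlier tasks with the new task's data so that gradient updates do not overwrite old knowledge. The buffer scheme, model interface, and hyperparameters below are illustrative assumptions, not the authors' method.

```python
import random
import torch
import torch.nn as nn

def train_with_replay(model, optimizer, new_task_loader, replay_buffer,
                      buffer_capacity=1000, replay_batch=32):
    """One pass over a new task, mixing stored examples from earlier
    tasks ("replay") into each batch so the network does not
    catastrophically forget them."""
    loss_fn = nn.CrossEntropyLoss()
    for x_new, y_new in new_task_loader:
        x, y = x_new, y_new
        if replay_buffer:
            # Sample old (input, label) pairs and append them to the batch.
            old = random.sample(replay_buffer,
                                min(replay_batch, len(replay_buffer)))
            x = torch.cat([x_new, torch.stack([xo for xo, _ in old])])
            y = torch.cat([y_new, torch.stack([yo for _, yo in old])])
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        # Store some new examples for future replay (randomly overwrite
        # old slots once the buffer is full -- a simplified reservoir).
        for xi, yi in zip(x_new, y_new):
            if len(replay_buffer) < buffer_capacity:
                replay_buffer.append((xi, yi))
            else:
                replay_buffer[random.randrange(buffer_capacity)] = (xi, yi)
```

Generative replay, the brain-inspired variant the study builds on, replaces the stored buffer with samples drawn from a learned generative model of past tasks.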


Artificial intelligence in COVID-19 drug repurposing

#artificialintelligence

One study estimated that pharmaceutical companies spent US$2.6 billion in 2015, up from $802 million in 2003, to develop a single new chemical entity approved by the US Food and Drug Administration (FDA) (N Engl J Med 2015; 372: 1877-1879). The increasing cost of drug development is driven by the large volume of compounds that must be tested in preclinical stages and the high proportion of randomised controlled trials (RCTs) that find no clinical benefit or run into toxicity issues. Given the high attrition rates, substantial costs, and slow pace of de-novo drug discovery, repurposing known drugs can help improve efficacy while minimising side effects in clinical trials. As the Nobel Prize-winning pharmacologist Sir James Black said, "The most fruitful basis for the discovery of a new drug is to start with an old drug": new uses for old drugs.


BERG To Present Discovery/Validation Of Biomarkers Associated With Survival In Pancreatic Ductal Adenocarcinoma (PDAC) Treated With BPM 31510-IV At The European Society For Medical Oncology (ESMO) 2020 Congress

#artificialintelligence

BERG, a clinical-stage biotech that employs artificial intelligence (AI) to investigate diseases and develop innovative treatments, today announced two major medical/clinical research developments on pancreatic ductal adenocarcinoma (PDAC) to be presented virtually at the European Society for Medical Oncology (ESMO) 2020 Congress, taking place September 19-21, 2020. The first study, entitled "Project Survival: High Fidelity Longitudinal Phenotypic and Multi-omic Characterization of Pancreatic Ductal Adenocarcinoma (PDAC) for Biomarker Discovery", is the culmination of the largest existing high-fidelity characterization of pancreatic cancer from a phenotypic/adaptive multi-omic perspective. BERG's Interrogative Biology platform was employed to identify causal relationships between existing pancreatic cancer therapies and changes in proteomic, metabolic and lipidomic responses across 253 treatment interventions and 211 progression events. The research cohort included PDAC patients across disease stages, from early to locally advanced to metastatic, to yield the most accurate characterization of the evolution of the disease. Throughout the course of the study, 470,000 clinical data points were gathered.


AI Weekly: Cutting-edge language models can produce convincing misinformation if we don't stop them

#artificialintelligence

It's been three months since OpenAI launched an API underpinned by cutting-edge language model GPT-3, and it continues to be the subject of fascination within the AI community and beyond. Portland State University computer science professor Melanie Mitchell found evidence that GPT-3 can make primitive analogies, and Columbia University's Raphaël Millière asked GPT-3 to compose a response to the philosophical essays written about it. But as the U.S. presidential election nears, there's growing concern among academics that tools like GPT-3 could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies' Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3's strength in generating "informational," "influential" text could be leveraged to "radicalize individuals into violent far-right extremist ideologies and behaviors." Bots are increasingly being used around the world to sow the seeds of unrest, either through the spread of misinformation or the amplification of controversial points of view.


AI researchers devise failure detection method for safety-critical machine learning

#artificialintelligence

Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for efficiently estimating the failure rates of safety-critical machine learning systems. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, these systems can cause serious injury or death when they fail. Problems with such machine learning systems can also cause financially costly events, like SpaceX missing its landing pad. The researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks of deploying complex machine learning systems in safety-critical environments. In a paper titled "Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems," recently published on arXiv, the authors assert their approach can satisfy both the public's right to know that a system has been rigorously tested and an organization's desire to treat AI models like trade secrets.
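
The bridge-sampling machinery itself is beyond a news summary, but the problem it attacks is easy to illustrate: when failures are rare, naive Monte Carlo almost never observes one, whereas sampling from a distribution tilted toward the failure region and reweighting recovers an accurate estimate. The one-dimensional toy failure model and single Gaussian proposal below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fails(x):
    """Toy safety metric: the system 'fails' when x exceeds 5 standard
    deviations under the nominal N(0, 1) operating distribution."""
    return x > 5.0

n = 1_000_000

# Naive Monte Carlo: unbiased, but with P(failure) ~ 2.9e-7 a million
# samples will usually contain zero failures.
p_naive = fails(rng.standard_normal(n)).mean()

# Importance sampling: draw from a proposal centered on the failure
# boundary, then reweight by the likelihood ratio nominal/proposal.
mu = 5.0
z = rng.normal(mu, 1.0, n)
weights = np.exp(0.5 * (z - mu) ** 2 - 0.5 * z ** 2)  # N(0,1) pdf / N(mu,1) pdf
p_is = (fails(z) * weights).mean()

print(f"naive Monte Carlo:   {p_naive:.2e}")  # often 0.00e+00
print(f"importance sampling: {p_is:.2e}")     # ~2.9e-7, near the true value
```

Roughly speaking, the paper's approach replaces the hand-picked proposal above with a sequence of intermediate distributions explored by MCMC, which is what makes it practical for high-dimensional, black-box systems.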


Predicting the frequencies of drug side effects

#artificialintelligence

A central issue in drug risk-benefit assessment is identifying the frequencies of side effects in humans. Currently, these frequencies are determined experimentally in randomised controlled clinical trials. We present a machine learning framework for computationally predicting the frequencies of drug side effects. Our matrix decomposition algorithm learns latent signatures of drugs and side effects that are both reproducible and biologically interpretable. We show the usefulness of our approach on 759 structurally and therapeutically diverse drugs and 994 side effects from all human physiological systems. The approach can be applied to any drug for which a small number of side-effect frequencies have been identified, in order to predict the frequencies of further, as-yet-unidentified side effects. We also show that the model is informative of the biology underlying drug activity: individual components of the drug signatures relate to the distinct anatomical categories of the drugs and to their specific routes of administration.
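
The abstract names the ingredients (a matrix decomposition, latent signatures, a handful of known frequencies per drug) without the details, so the following is only a generic sketch of that recipe: masked non-negative matrix factorization of a drug × side-effect frequency matrix, with predictions read off the reconstruction. All data and dimensions are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy drug x side-effect matrix of frequency classes (e.g. 1 = very rare
# ... 5 = very frequent); 0 marks unobserved pairs we want to predict.
R = rng.integers(0, 6, size=(50, 80)).astype(float)
mask = R > 0

k = 10                             # number of latent signatures
W = rng.random((R.shape[0], k))    # drug signatures
H = rng.random((k, R.shape[1]))    # side-effect signatures

# Masked non-negative matrix factorization via multiplicative updates:
# only observed entries contribute to the reconstruction error.
for _ in range(500):
    WH = W @ H
    W *= ((mask * R) @ H.T) / ((mask * WH) @ H.T + 1e-9)
    WH = W @ H
    H *= (W.T @ (mask * R)) / (W.T @ (mask * WH) + 1e-9)

R_hat = W @ H  # reconstruction; unobserved positions are the predictions
print("observed-entry RMSE:", np.sqrt(((R - R_hat)[mask] ** 2).mean()))
```

Rows of W play the role of the paper's drug signatures and columns of H the side-effect signatures; non-negativity is what makes the individual components candidates for biological interpretation.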


AI With Human-Like Characteristics Is Viewed Differently, Study Finds

#artificialintelligence

Edmond de Belamy, a portrait generated by an algorithm, sold at Christie's for $432,500. The use of artificial intelligence to create art raises questions about how credit and responsibility should be allocated. In a recent study, MIT Media Lab researcher and Ph.D. student Zivvy Epstein and MIT Sloan School of Management Prof. David Rand found that the answers depend on the extent to which people view AI as human: the more people humanize AI, the greater the responsibility they allocate to the technology. "The language we use to talk about AI can impact the way people think not just about AI itself, but about all of the stakeholders involved. In the art world, AI has the potential to become a major force in artistic endeavors, and understanding how people perceive its role can affect the future of art," says Rand. Epstein notes, "The way we allocate responsibility is complicated when AI is involved. AI is simply a tool created and used by humans, but when we describe it with human characteristics, people tend to view it very differently. It can be seen more as an agent with independent thought and the ability to create."