Neuralink has announced that the U.S. Food and Drug Administration (FDA) has approved the launch of its first clinical study in humans. "We are excited to share that we have received the FDA's approval to launch our first-in-human clinical study!" Neuralink's official Twitter account wrote on Thursday. "This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people." The neurotechnology company isn't recruiting test subjects just yet, and hasn't released any information on exactly what the clinical trial will involve. Even so, fans of Neuralink founder Elon Musk are already chomping at the bit to implant questionable experimental technology in their grey matter. Neuralink aims to develop implantable devices that will let people control computers with their brains, as well as restore vision or mobility to people with disabilities.
Neuralink, Elon Musk's brain-implant company, said on Thursday it had received a green light from the US Food and Drug Administration (FDA) to kickstart its first in-human clinical study, a critical milestone after earlier struggles to gain approval. Musk has predicted on at least four occasions since 2019 that his medical device company would begin human trials for a brain implant to treat severe conditions such as paralysis and blindness. Yet the company, founded in 2016, only sought FDA approval in early 2022 – and the agency rejected the application, seven current and former employees told Reuters in March. The FDA had pointed out several concerns to Neuralink that needed to be addressed before sanctioning human trials, according to the employees. Major issues involved the lithium battery of the device, the possibility of the implant's wires migrating within the brain and the challenge of safely extracting the device without damaging brain tissue.
Turns out Elon Musk's FDA prediction was only off by about a month. After reportedly denying the company's overtures in March, the FDA approved Neuralink's application to begin human trials of its prototype Link brain-computer interface (BCI) on Thursday. Founded in 2016, Neuralink aims to commercialize BCIs in wide-ranging medical and therapeutic applications -- from stroke and spinal cord injury (SCI) rehabilitation, to neural prosthetic controls, to the capacity "to rewind memories or download them into robots," as Neuralink CEO Elon Musk promised in 2020. BCIs essentially translate the analog electrical impulses of your brain (monitored using hair-thin electrodes delicately threaded into that grey matter) into the digital 1s and 0s that computers understand. Since a BCI needs to be surgically installed in a patient's noggin, the FDA -- which regulates such technologies -- requires that companies conduct rigorous safety testing before giving its approval for commercial use.
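To make that analog-to-digital translation a little more concrete, here is a minimal sketch of a threshold spike detector running over a simulated electrode trace. Everything in it -- the signal, sampling rate, spike amplitudes, and threshold rule -- is an invented assumption for illustration; real BCI pipelines involve far more sophisticated amplification, filtering, and spike sorting.

```python
import numpy as np

# Toy illustration of the analog-to-digital step described above: sample an
# electrode's voltage trace and threshold it into binary spike events. The
# signal, sampling rate, and threshold are all invented for illustration.

rng = np.random.default_rng(42)
fs = 10_000                       # samples per second (assumed)
t = np.arange(0, 0.1, 1 / fs)     # 100 ms of simulated "recording"

# Fake electrode trace: background noise plus a few injected spikes.
voltage = rng.normal(0, 5e-6, size=t.size)      # ~5 uV noise floor
for spike_time in (0.012, 0.045, 0.078):
    voltage[int(spike_time * fs)] += 60e-6      # ~60 uV spikes

# Threshold crossing turns the analog trace into 1s and 0s.
threshold = 5 * voltage.std()
spikes = (voltage > threshold).astype(int)

print(f"detected {spikes.sum()} spike samples out of {spikes.size}")
```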
Medical experts have issued a fresh call to halt the development of artificial intelligence (AI), warning it poses an 'existential threat' to people. A team of five doctors and global health policy experts from across four continents said there were three ways in which the tech could wipe out humans. First is the risk that AI will help amplify authoritarian tactics like surveillance and disinformation. 'The ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras,' they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power. Second, the group warns that AI can accelerate mass murder via the expanded use of Lethal Autonomous Weapon Systems (LAWS).
The Biden administration is investing $140 million in responsible AI research and development, a grant that will increase the number of national AI research institutes. These institutes are focused on advancing artificial intelligence research in areas ranging from public health to cybersecurity. The investment is just a fraction of the billions that private-sector companies are pouring into advancing the technology; Microsoft previously invested $10 billion in OpenAI.
Artificial intelligence (AI) holds the promise of fueling explosive economic growth and improving public health and welfare in profound ways -- but only if we let it. To make progress on AI innovation and its governance, America needs a better approach than the all-or-nothing extremes driving today's public discourse. America does not need a convoluted new regulatory bureaucracy or a thicket of new rules for AI. We are on the cusp of untold advances in nearly every field thanks to AI. Our success depends on using flexible governance and practical solutions to avoid diminishing the pro-innovation model central to U.S. success in the technology sector.
Should artificial intelligence or machine learning (AI/ML) be allowed to alter FDA-approved software in medical devices? If so, where should the guardrails be set? The discussions and debates surrounding AI/ML are heated; some believe the technology may destroy humanity, while others look forward to the speed of advancement it will allow. The FDA is getting out ahead of this debate. This week the agency drafted a list of "guiding principles" intended to begin developing best practices for machine learning within medical devices. A new framework envisioned by the FDA includes a "predetermined change control plan" in premarket submissions. This plan would include the types of anticipated modifications, referred to as "Software as a Medical Device Pre-Specifications." The FDA calls the associated methodology, used to implement those changes in a measured and controlled approach that manages risk, the "Algorithm Change Protocol."
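For a sense of what such a plan might look like in practice, here is a minimal sketch that encodes the two components as structured data. The schema, field names, and thresholds are illustrative assumptions made for exposition; the FDA's draft guidance describes these concepts, not a concrete file format.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a "predetermined change control plan" as structured
# data. The schema and values below are assumptions, not an FDA schema.

@dataclass
class SaMDPreSpecifications:
    """Types of anticipated modifications the manufacturer pre-declares."""
    retraining_on_new_data: bool = True
    performance_floors: dict = field(default_factory=lambda: {
        "sensitivity": 0.90,   # assumed minimum the updated model must keep
        "specificity": 0.85,
    })

@dataclass
class AlgorithmChangeProtocol:
    """How pre-declared changes are implemented and risk-managed."""
    validation_data: str = "held-out multi-site test set"   # placeholder
    rollback_trigger: str = "any metric below its pre-specified floor"

@dataclass
class ChangeControlPlan:
    pre_specifications: SaMDPreSpecifications
    change_protocol: AlgorithmChangeProtocol

plan = ChangeControlPlan(SaMDPreSpecifications(), AlgorithmChangeProtocol())
print(plan.pre_specifications.performance_floors)
```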
We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus "collapsing" any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve. The goal is to keep as many arms in the "good" state as possible by planning a limited budget of actions per round. Such Collapsing Bandits are natural models for many healthcare domains in which health workers must simultaneously monitor patients and deliver interventions in a way that maximizes the health of their patient cohort. Our main contributions are as follows: (i) Building on the Whittle index technique for RMABs, we derive conditions under which the Collapsing Bandits problem is indexable. Our derivation hinges on novel conditions that characterize when the optimal policies may take the form of either "forward" or "reverse" threshold policies.
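Since the abstract packs in a lot of machinery, a small simulation may help fix ideas. Below is a minimal sketch of the Collapsing Bandits dynamics under a naive myopic policy; the transition probabilities, budget, and the assumption that acting improves transitions are all invented for illustration, and the paper's actual contribution is the Whittle-index threshold policies, not this heuristic.

```python
import numpy as np

# Minimal simulation of the Collapsing Bandits dynamics: playing an arm
# fully observes (collapses) its binary state; passive arms go unobserved,
# so only a belief over their state can be propagated. The myopic policy
# and all probabilities below are illustrative assumptions.

rng = np.random.default_rng(0)
N, T, BUDGET = 5, 200, 2              # arms, rounds, actions per round

# P[i, s] = Pr(arm i is in the good state next round | current state s),
# with separate dynamics for passive and active (played) arms.
P_passive = rng.uniform(0.1, 0.6, size=(N, 2))
P_active = np.clip(P_passive + 0.3, 0.0, 0.99)   # assumed: acting helps

state = rng.integers(0, 2, size=N)    # hidden binary states, 1 = "good"
belief = np.full(N, 0.5)              # Pr(arm is in the good state)

total_good = 0
for _ in range(T):
    # Myopic stand-in policy: spend the budget on the arms believed most
    # likely to be in the bad state.
    played = np.zeros(N, dtype=bool)
    played[np.argsort(belief)[:BUDGET]] = True

    # Playing an arm collapses its belief onto the observed true state.
    belief[played] = state[played].astype(float)

    # Hidden states transition under active or passive dynamics.
    p_good = np.where(played, P_active[np.arange(N), state],
                      P_passive[np.arange(N), state])
    state = (rng.random(N) < p_good).astype(int)

    # Beliefs of all arms evolve under the same (known) dynamics.
    Pmat = np.where(played[:, None], P_active, P_passive)
    belief = belief * Pmat[:, 1] + (1 - belief) * Pmat[:, 0]

    total_good += state.sum()

print(f"average arms in the good state per round: {total_good / T:.2f}")
```

The key structural feature of the setting is visible in the belief update: a played arm's belief snaps to 0 or 1, while a passive arm's belief drifts under the Markov dynamics until the arm is next played.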