Neuralink has announced that the U.S. Food and Drug Administration (FDA) has approved the launch of its first clinical study in humans. "We are excited to share that we have received the FDA's approval to launch our first-in-human clinical study!" Neuralink's official Twitter account wrote on Thursday. "This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people." The neurotechnology company isn't recruiting test subjects just yet, and hasn't released any information on exactly what the clinical trial will involve. Even so, fans of Neuralink founder Elon Musk are already chomping at the bit to implant questionable experimental technology in their grey matter. Neuralink aims to develop implantable devices that will let people control computers with their brain, as well as restore vision or mobility to people with disabilities.
Neuralink, Elon Musk's brain-implant company, said on Thursday it had received a green light from the US Food and Drug Administration (FDA) to kick-start its first-in-human clinical study, a critical milestone after earlier struggles to gain approval. Musk has predicted on at least four occasions since 2019 that his medical device company would begin human trials for a brain implant to treat severe conditions such as paralysis and blindness. Yet the company, founded in 2016, only sought FDA approval in early 2022 – and the agency rejected the application, seven current and former employees told Reuters in March. The FDA had pointed out several concerns to Neuralink that needed to be addressed before sanctioning human trials, according to the employees. Major issues involved the lithium battery of the device, the possibility of the implant's wires migrating within the brain and the challenge of safely extracting the device without damaging brain tissue.
Turns out Elon Musk's FDA prediction was only off by about a month. After reportedly denying the company's overtures in March, the FDA approved Neuralink's application to begin human trials of its prototype Link brain-computer interface (BCI) on Thursday. Founded in 2016, Neuralink aims to commercialize BCIs in wide-ranging medical and therapeutic applications -- from stroke and spinal cord injury (SCI) rehabilitation, to neural prosthetic controls, to the capacity "to rewind memories or download them into robots," Neuralink CEO Elon Musk promised in 2020. BCIs essentially translate the analog electrical impulses of your brain (monitored using hair-thin electrodes delicately threaded into that grey matter) into the digital 1s and 0s that computers understand. Since that BCI needs to be surgically installed in a patient's noggin, the FDA -- which regulates such technologies -- requires that companies conduct rigorous safety testing before giving its approval for commercial use.
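To make that analog-to-digital translation concrete, here is a minimal toy sketch in Python. It is not Neuralink's actual pipeline; the sample rate, spike threshold, and simulated voltage trace are all invented for illustration, and real BCI signal chains are far more sophisticated.

```python
# Toy sketch of the analog-to-digital step a BCI performs.
# NOT Neuralink's pipeline: the sample rate, threshold, and
# simulated signal below are hypothetical values for illustration.
import numpy as np

SAMPLE_RATE_HZ = 20_000      # hypothetical per-electrode sampling rate
SPIKE_THRESHOLD_UV = 50.0    # hypothetical spike threshold (microvolts)

def digitize(analog_uv: np.ndarray, bits: int = 10) -> np.ndarray:
    """Quantize an analog voltage trace (microvolts) into integer ADC codes."""
    lo, hi = analog_uv.min(), analog_uv.max()
    levels = 2 ** bits
    return np.round((analog_uv - lo) / (hi - lo) * (levels - 1)).astype(int)

def detect_spikes(analog_uv: np.ndarray) -> np.ndarray:
    """Return sample indices where the trace first rises above the threshold."""
    above = analog_uv > SPIKE_THRESHOLD_UV
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# One simulated second of electrode noise with three injected "spikes".
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 10.0, SAMPLE_RATE_HZ)
trace[[5_000, 12_000, 17_500]] += 120.0

codes = digitize(trace)
print(f"{len(detect_spikes(trace))} spikes detected; first codes: {codes[:5]}")
```

The two steps mirror the paragraph above: quantization turns the continuous voltage into the discrete numbers computers understand, and threshold crossing is one crude stand-in for how firing events can be picked out of that stream.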
Doctors believe artificial intelligence (AI) is now saving lives, after a major advancement in breast cancer screenings. AI is detecting early signs of the disease, in some cases years before doctors would find the cancer on a traditional scan.

California's reparations task force is recommending, as part of its set of proposals to make amends for slavery and anti-Black racism, that state lawmakers address what it calls "racially biased" AI used in health care. The task force, created by state legislation signed by Gov. Gavin Newsom in 2020, formally approved its final recommendations to the California Legislature last weekend; the Legislature will decide whether to enact the measures and send them to the governor's desk to be signed into law. The recommendations include several health care proposals, among them measures concerning medical AI, which the task force describes as "racially biased" and contributing to alleged systemic racism against Black Californians.
Medical experts have issued a fresh call to halt the development of artificial intelligence (AI), warning it poses an 'existential threat' to people. A team of five doctors and global health policy experts from across four continents said there were three ways in which the tech could wipe out humans. First is the risk that AI will help amplify authoritarian tactics like surveillance and disinformation. 'The ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras,' they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power. Second, the group warns that AI can accelerate mass murder via the expanded use of Lethal Autonomous Weapon Systems (LAWS).
The Biden administration is investing $140 million in responsible AI research and development, a grant that will increase the number of national AI research institutes. These institutes focus on advancing artificial intelligence research in areas ranging from public health to cybersecurity. The investment is just a fraction of the billions that private-sector companies are pouring into advancing the technology; Microsoft alone previously invested $10 billion in OpenAI.
Dukes and Jackson, both with No Left Turn in Education, said parents should be concerned about how AI is being used in schools, and what information it may gather on students.

Educators at over 120 districts across the country are implementing a pervasive school curriculum that has been denounced by opponents as an effort to manipulate children's values and beliefs and replace parents as the primary moral authority in their children's lives, with many critics specifically pointing to similarities with programs from the Centers for Disease Control and Prevention (CDC) as a major point of contention. The School Superintendents Association (AASA), with the help of superintendents, board members and school administrators, is implementing the Learning 2025 program, which calls for an equity-focused, "holistic redesign" of the United States' public education system by 2025, in districts across the country. The parents' advocacy group, No Left Turn in Education (NLTE), is sounding the alarm about the curriculum's alleged ties to the CDC, especially since Learning 2025 outlines its plans as a solution to the fallout of the COVID-19 pandemic.

Learning 2025 frequently references the idea of a "Whole Child" educational framework to promote the notion that school districts should focus on a collective, whole-community vision that is strikingly similar to the Whole School, Whole Community, Whole Child (WSCC) educational framework devised by the CDC. Both programs place a strong emphasis on students' and teachers' social and emotional health, including employee wellness programs, as well as psychological and social services like school-based health and counseling centers.
Artificial intelligence (AI) holds the promise of fueling explosive economic growth and improving public health and welfare in profound ways--but only if we let it. To make progress on AI innovation and its governance, America needs a better approach than the all-or-nothing extremes driving today's public discourse. America does not need a convoluted new regulatory bureaucracy or a thicket of new rules for AI. We are on the cusp of untold advances in nearly every field thanks to AI. Our success depends on using flexible governance and practical solutions to avoid diminishing the pro-innovation model central to U.S. success in the technology sector.