A man unable to speak after a stroke has produced sentences through a system that reads electrical signals from speech production areas of his brain, researchers report this week. The approach has previously been used in nondisabled volunteers to reconstruct spoken or imagined sentences. But this first demonstration in a person who is paralyzed “tackles really the main issue that was left to be tackled—bringing this to the patients that really need it,” says Christian Herff, a computer scientist at Maastricht University who was not involved in the new work.

The participant had a stroke more than a decade ago that left him with anarthria—an inability to control the muscles involved in speech. Because his limbs are also paralyzed, he communicates by selecting letters on a screen using small movements of his head, producing roughly five words per minute.

To enable faster, more natural communication, neurosurgeon Edward Chang of the University of California, San Francisco, tested an approach that uses a computational model known as a deep-learning algorithm to interpret patterns of brain activity in the sensorimotor cortex, a brain region involved in producing speech (Science, 4 January 2019). The approach has so far been tested in volunteers who have electrodes surgically implanted for nonresearch reasons, such as to monitor epileptic seizures.

In the new study, Chang's team temporarily removed a portion of the participant's skull and laid a thin sheet of electrodes smaller than a credit card directly over his sensorimotor cortex. To “train” a computer algorithm to associate brain activity patterns with the onset of speech and with particular words, the team needed reliable information about what the man intended to say and when. So the researchers repeatedly presented one of 50 words on a screen and asked the man to attempt to say it on cue.
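The cued-word training task can be pictured as fitting a classifier that maps a window of neural features to one of the candidate words. The sketch below is purely illustrative: the study used a deep-learning model, whereas this toy uses a nearest-centroid rule, and the vocabulary subset, feature layout, and simulated signals are all assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch only (not the study's model): map a window of neural
# features to an attempted word with a nearest-centroid classifier.
rng = np.random.default_rng(0)
VOCAB = ["bring", "my", "glasses", "please", "water"]  # stand-in for the 50-word set
N_FEATURES = 16  # e.g., band-power features, one per electrode

# Simulate labeled training trials: each attempted word evokes a
# characteristic (noisy) pattern of cortical activity.
prototypes = rng.normal(size=(len(VOCAB), N_FEATURES))

def simulate_trial(word_idx, noise=0.3):
    return prototypes[word_idx] + noise * rng.normal(size=N_FEATURES)

X = np.array([simulate_trial(i % len(VOCAB)) for i in range(200)])
y = np.array([i % len(VOCAB) for i in range(200)])

# "Training" here is just averaging each word's trials into a centroid.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(VOCAB))])

def classify(features):
    """Return the index of the vocabulary word whose centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

print(VOCAB[classify(simulate_trial(2))])
```

In practice the cued presentations give the labeled (word, brain-activity) pairs this kind of supervised training needs; the real system replaces the centroid rule with a deep network over time-resolved cortical signals.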
Once the algorithm was trained with data from the individual word task, the man tried to read sentences built from the same set of 50 words, such as “Bring my glasses, please.” To improve the algorithm's guesses, the researchers added a processing component called a natural language model, which uses common word sequences to predict the likely next word in a sentence. With that approach, the system only got about 25% of the words in a sentence wrong, they report this week in The New England Journal of Medicine. That's “pretty impressive,” says Stephanie Riès-Cornou, a neuroscientist at San Diego State University. (The error rate for chance performance would be 92%.)

Because the brain reorganizes over time, it wasn't clear that speech production areas would give interpretable signals after more than 10 years of anarthria, notes Anne-Lise Giraud, a neuroscientist at the University of Geneva. The signals' preservation “is surprising,” she says. And Herff says the team made a “gigantic” step by generating sentences as the man was attempting to speak rather than from previously recorded brain data, as most studies have done.

With the new approach, the man could produce sentences at a rate of up to 18 words per minute, Chang says. That's roughly comparable to the speed achieved with another brain-computer interface, described in Nature in May. That system decoded individual letters from activity in a brain area responsible for planning hand movements while a person who was paralyzed imagined handwriting.

These speeds are still far from the 120 to 180 words per minute typical of conversational English, Riès-Cornou notes, but they far exceed what the participant can achieve with his head-controlled device. The system isn't ready for use in everyday life, Chang notes. Future improvements will include expanding its repertoire of words and making it wireless, so the user isn't tethered to a computer roughly the size of a minifridge.
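The effect of adding a natural language model can be illustrated with a toy Viterbi decoder that combines per-word neural evidence with bigram word-sequence probabilities. Everything below (the four-word vocabulary, the probabilities, the function names) is hypothetical, not the authors' code; it only shows how sequence priors can break ties the neural classifier leaves ambiguous.

```python
import numpy as np

# Toy sketch: rescore noisy per-word neural evidence with a bigram
# language model via Viterbi search. All numbers are invented.
VOCAB = ["bring", "my", "glasses", "please"]
V = len(VOCAB)

# Hypothetical bigram probabilities P(next word | previous word).
bigram = np.full((V, V), 0.1)
bigram[0, 1] = 0.7   # "bring" -> "my"
bigram[1, 2] = 0.7   # "my" -> "glasses"
bigram[2, 3] = 0.7   # "glasses" -> "please"
bigram /= bigram.sum(axis=1, keepdims=True)  # normalize rows

def decode(evidence, bigram):
    """Viterbi search: evidence[t, w] = P(neural data at step t | word w)."""
    T = evidence.shape[0]
    logp = np.log(evidence[0])           # best log-score ending in each word
    back = np.zeros((T, V), dtype=int)   # backpointers
    for t in range(1, T):
        scores = logp[:, None] + np.log(bigram) + np.log(evidence[t])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]          # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [VOCAB[w] for w in reversed(path)]

# Step 2 is ambiguous between "glasses" and "please"; the language
# model resolves it in favor of the common word sequence.
evidence = np.array([
    [0.90, 0.03, 0.04, 0.03],   # clearly "bring"
    [0.05, 0.85, 0.05, 0.05],   # clearly "my"
    [0.05, 0.05, 0.45, 0.45],   # ambiguous
    [0.05, 0.05, 0.05, 0.85],   # clearly "please"
])
print(decode(evidence, bigram))
```

The design point is that the sequence prior multiplies into the per-word evidence, so a word the classifier finds only marginally more likely can still lose to one the language model strongly expects.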
Source: http://www.sciencemag.org/content/363/6422/14
Proteins are the minions of life, working alone or together to build, manage, fuel, protect, and eventually destroy cells. To function, these long chains of amino acids twist and fold and intertwine into complex shapes that can be slow, even impossible, to decipher. Scientists have dreamed of simply predicting a protein's shape from its amino acid sequence—an ability that would open a world of insights into the workings of life. “This problem has been around for 50 years; lots of people have broken their head on it,” says John Moult, a structural biologist at the University of Maryland, Shady Grove. But a practical solution is within their grasp.

Several months ago, in a result hailed as a turning point, computational biologists showed that artificial intelligence (AI) could accurately predict protein shapes. Now, David Baker and Minkyung Baek at the University of Washington, Seattle, and their colleagues have made AI-based structure prediction more powerful and accessible. Their method, described online in Science this week, works on not just simple proteins, but also complexes of proteins, and its creators have made their computer code freely available. Since the method was posted online last month, the team has used it to model more than 4500 protein sequences submitted by other researchers.

Savvas Savvides, a structural biologist at Ghent University, had tried six times to model a problematic protein. He says Baker's and Baek's program, called RoseTTAFold, “paved the way to a structure solution.”

In fall of 2020, DeepMind, a U.K.-based AI company owned by Google, wowed the field with its structure predictions in a biennial competition (Science, 4 December 2020). Called Critical Assessment of Protein Structure Prediction (CASP), the competition uses structures newly determined using laborious lab techniques such as x-ray crystallography as benchmarks.
DeepMind's program, AlphaFold2, did “really extraordinary things [predicting] protein structures with atomic accuracy,” says Moult, who organizes CASP. But for many structural biologists, AlphaFold2 was a tease: “Incredibly exciting but also very frustrating,” says David Agard, a structural biophysicist at the University of California, San Francisco. DeepMind has yet to publish its method and computer code for others to take advantage of. In mid-June, 3 days after the Baker lab posted its RoseTTAFold preprint, Demis Hassabis, DeepMind's CEO, tweeted that AlphaFold2's details were under review at a publication and the company would provide “broad free access to AlphaFold for the scientific community.”

DeepMind's 30-minute presentation at CASP was enough to inspire Baek to develop her own approach. Like AlphaFold2, it uses AI's ability to discern patterns in vast databases of examples, generating ever more informed and accurate iterations as it learns. When given a new protein to model, RoseTTAFold proceeds along multiple “tracks.” One compares the protein's amino acid sequence with all similar sequences in protein databases. Another predicts pairwise interactions between amino acids within the protein, and a third compiles the putative 3D structure. The program bounces among the tracks to refine the model, using the output of each one to update the others.

DeepMind's approach, although still under wraps, involves just two tracks, Baek and others believe. Gira Bhabha, a cell and structural biologist at New York University School of Medicine, says both methods work well. “Both the DeepMind and Baker lab advances are phenomenal and will change how we can use protein structure predictions to advance biology,” she says.
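The track-bouncing idea described above can be caricatured as three coupled states that repeatedly mix in each other's current output until they agree. This toy uses plain vectors and weighted averaging, not the attention networks RoseTTAFold actually uses, and every name and number in it is an assumption; it only illustrates the iterative information exchange among tracks.

```python
import numpy as np

# Toy caricature of multi-track refinement (NOT RoseTTAFold's code):
# three per-residue "track" states that repeatedly update using the
# current output of the other two, converging toward a consensus.
def refine(sequence_len, n_iter=5, seed=0):
    rng = np.random.default_rng(seed)
    seq = rng.normal(size=sequence_len)     # sequence-track state
    pair = rng.normal(size=sequence_len)    # stand-in for pairwise-track state
    coords = rng.normal(size=sequence_len)  # stand-in for 3D-track state
    for _ in range(n_iter):
        # Each track's update is a convex mix of itself and the other tracks.
        seq = 0.5 * seq + 0.25 * pair + 0.25 * coords
        pair = 0.5 * pair + 0.25 * seq + 0.25 * coords
        coords = 0.5 * coords + 0.25 * seq + 0.25 * pair
    return seq, pair, coords

seq, pair, coords = refine(8)
# After a few rounds of exchange, the three tracks disagree far less
# than their random starting states did.
print(float(np.abs(seq - coords).max()))
```

The real system replaces the averaging with learned neural updates, but the control flow is the same shape: no track is finalized on its own; each round of refinement propagates what one track has inferred into the others.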
A DeepMind spokesperson wrote in an email, “It's great to see examples such as this where the protein folding community is building on AlphaFold to work towards our shared goal of increasing our understanding of structural biology.”

But AlphaFold2 solved the structures of only single proteins, whereas RoseTTAFold has also predicted complexes, such as the structure of the immune molecule interleukin-12 latched onto its receptor. Many biological functions depend on protein-protein interactions, says Torsten Schwede, a computational structural biologist at the University of Basel. “The ability to handle protein-protein complexes directly from sequence information makes it extremely attractive for many questions in biomedical research.”

Baker concedes that, in general, AlphaFold2's structures are more accurate. But Savvides says the Baker lab's approach better captures “the essence and particularities of protein structure,” such as identifying strings of atoms sticking out of the sides of the protein—features key to interactions between proteins. Agard adds that Baker's and Baek's approach is faster and requires less computing power than DeepMind's, which relied on Google's massive servers. However, the DeepMind spokesperson wrote that its latest algorithm is more than 16 times as fast as the one it used at CASP in 2020. As a result, she wrote, “It's not clear to us that the system being described is an advance in speed.”

On 1 June, Baker and Baek began to challenge their method by asking researchers to send in their most baffling protein sequences. Fifty-six head scratchers arrived in the first month, all of which now have predicted structures. Agard's group sent in an amino acid sequence with no known similar proteins. Within hours, his group got a protein model back “that probably saved us a year of work,” Agard says. Now, he and his team know where to mutate the protein to test ideas about how it functions.
Because Baek's and Baker's group has released its computer code on the web, others can improve on it; the code has been downloaded 250 times since 1 July. “Many researchers will build their own structure prediction methods upon Baker's work,” says Jinbo Xu, a computational structural biologist at the Toyota Technological Institute at Chicago. Moult agrees: “When there's a breakthrough like this, 2 years later, everyone is doing it as well if not better than before.”

Source: http://www.sciencemag.org/content/370/6521/1144
In a medical first, researchers harnessed the brainwaves of a paralyzed man unable to speak and turned what he intended to say into sentences on a computer screen. It will take years of additional research, but the study, reported Wednesday, marks an important step toward one day restoring more natural communication for people who can't talk because of injury or illness. "Most of us take for granted how easily we communicate through speech," said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work. "It's exciting to think we're at the very beginning of a new chapter, a new field" to ease the devastation of patients who have lost that ability. Today, people who can't speak or write because of paralysis have very limited ways of communicating.
A severely paralyzed man has been able to communicate using a new type of technology that translates signals from his brain to his vocal tract directly into words that appear on a screen. Developed by researchers at UC San Francisco, the technique is a more natural way for people with speech loss to communicate than other methods we've seen to date. So far, neuroprosthetic technology has allowed paralyzed users to type out just one letter at a time, a process that can be slow and laborious. It also tapped parts of the brain that control the arm or hand, a system that's not necessarily intuitive for the subject. The UCSF system, however, uses an implant that's placed directly on the part of the brain dedicated to speech.
An Australian femtech company with US headquarters in San Francisco announced new technology to help couples get pregnant via artificial intelligence-assisted in vitro fertilization (IVF). Life Whisperer, the fertility arm of Presagen, a global artificial intelligence company, announced in a press release new women's health technology applying artificial intelligence to the IVF embryo selection process. IVF clinics around the world can add the platform to help doctors select the healthiest embryos with the best chance of success. Embryo selection is an important part of the IVF process, in which the healthiest embryos are chosen for implantation.
To understand the promise and peril of artificial intelligence for food safety, consider the story of Larry Brilliant. Brilliant is a self-described "spiritual seeker," "social change addict," and "rock doc." During his medical internship in 1969, he responded to a San Francisco Chronicle columnist's call for medical help to Native Americans then occupying Alcatraz. Then came Warner Bros.' call to have him join the cast of Medicine Ball Caravan, a sort-of sequel to Woodstock Nation. That caravan ultimately led to a detour to India, where Brilliant spent 2 years studying at the foot of the Himalayas in a monastery under guru Neem Karoli Baba. Toward the end of the stay, Karoli Baba informed Brilliant of his calling: join the World Health Organization (WHO) and eradicate smallpox. He joined the WHO as a medical health officer, as a part of a team making over 1 billion house calls collectively. In 1977, he observed the last human with smallpox, leading WHO to declare the disease eradicated. After a decade battling smallpox, Brilliant went on to establish and lead foundations and start-up companies, and serve as a professor of international health at the University of Michigan. As one corporate brand manager wrote, "There are stories that are so incredible that not even the creative minds that fuel Hollywood could write them with a straight face."
Fox News correspondent Claudia Cowan joins 'Your World' with the details from San Francisco. This is a rush transcript from "Your World with Neil Cavuto," June 8, 2021. This copy may not be in its final form and may be updated.

NEIL CAVUTO, FOX NEWS ANCHOR: How about some good news to kick off things, like herd immunity happening in a lot of parts of this country, including in San Francisco, where close to eight out of 10 residents older than 12 years old have already had at least one vaccination shot? It reads similarly in other cities, like Philadelphia, 67.4 percent have been vaccinated, in Denver, close to 70 percent, in San Diego, north of 65 percent, and, in New York City, more than 52 percent. And this is "Your World." And FOX on top of vaccinations that are surprisingly robust across a country that is rapidly leading the world in finally putting a spike in this horrific, horrific disease. Now, the implications of all of this are being weighed in the medical community, as well as the political community, as to how much longer term this means we get to, well, herd immunity, if we even need to get to that, technically, at the rate we're going. Let's go to Claudia Cowan following all of this in San Francisco -- Claudia.

CLAUDIA COWAN, FOX NEWS CORRESPONDENT: The City by the Bay is on the cusp of herd immunity, which means that the coronavirus is having trouble finding new hosts. The city is reporting that nearly 80 percent of teens and adults have been vaccinated with at least one dose against COVID-19, while 68 percent are fully vaccinated. The number of new cases is the lowest since the city shut down in March of 2020. And no one has died of COVID in over a month. San Francisco pushed people to get the shot while infections, hospitalizations, and death rates were low. Officials believe that made a world of difference. While there is some debate over what exactly constitutes herd immunity, one expert says the numbers here are among the best in the country.
MONICA GANDHI, UNIVERSITY OF CALIFORNIA, SAN FRANCISCO: And there are places, like in the Bay Area, that are up to 76, 77 percent. So, we are doing great in terms of high vaccination rates, high immunity, low cases, low hospitalizations, low deaths, low test positivity rate.
Inside a 13th-floor boardroom in downtown San Francisco, the atmosphere was tense. It was November 2015, and Databricks, a two-year-old software company started by a group of seven Berkeley researchers, was long on buzz but short on revenue. The directors awkwardly broached subjects that had been rehashed time and again. The startup had been trying to raise funds for five months, but venture capitalists were keeping it at arm's length, wary of its paltry sales. Seeing no other option, NEA partner Pete Sonsini, an existing investor, raised his hand to save the company with an emergency $30 million injection. Founding CEO Ion Stoica had agreed to step aside and return to his professorship at the University of California, Berkeley. The obvious move was to bring in a seasoned Silicon Valley executive, which is exactly what Databricks' chief competitor Snowflake did twice on its way to a software-record $33 billion IPO in September 2020.
A San Francisco health tech startup that offers artificial intelligence tools to automate operations for health care providers is moving its headquarters to Nashville. A five-person team at DARVIS, an acronym for Data Analytic Real-World Visual Intelligence System, will move into a MetroCenter office space at 240 Great Circle Road in June. The company also has international offices in Germany, the United Kingdom and Pakistan. DARVIS offers technology that automates clinical workflow, from determining whether personnel are wearing appropriate personal protective equipment, to tracking the condition and availability of hospital beds throughout a facility. The AI-powered tech can also keep track of medical inventory and assess medical equipment sanitation.