AI Weekly: Cutting-edge language models can produce convincing misinformation if we don't stop them


It's been three months since OpenAI launched an API underpinned by cutting-edge language model GPT-3, and it continues to be the subject of fascination within the AI community and beyond. Portland State University computer science professor Melanie Mitchell found evidence that GPT-3 can make primitive analogies, and Columbia University's Raphaël Millière asked GPT-3 to compose a response to the philosophical essays written about it. But as the U.S. presidential election nears, there's growing concern among academics that tools like GPT-3 could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies' Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3's strength in generating "informational," "influential" text could be leveraged to "radicalize individuals into violent far-right extremist ideologies and behaviors." Bots are increasingly being used around the world to sow the seeds of unrest, either through the spread of misinformation or the amplification of controversial points of view.

AI researchers devise failure detection method for safety-critical machine learning


Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for uncovering failures of safety-critical machine learning systems and efficiently estimating how often they occur. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, failures in safety-critical systems can result in serious injury or death. Problems with such machine learning systems can also cause financially costly events, like SpaceX missing its landing pad. Researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks of deploying complex machine learning systems in safety-critical environments. In a paper titled "Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems," recently published on arXiv, the authors assert their approach can satisfy both the public's right to know that a system has been rigorously tested and an organization's desire to treat AI models like trade secrets.
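Why efficient estimation matters here can be seen from the baseline such methods improve on: naive Monte Carlo needs on the order of one-over-the-failure-probability simulated rollouts just to observe a single rare failure. A minimal sketch of that baseline, where the simulator and its 0.1% failure rate are invented for illustration and are not taken from the paper:

```python
import random

def simulate(rng):
    # Hypothetical stand-in for one simulated rollout of a
    # safety-critical system; returns True if the rollout fails.
    # A real evaluation would run the actual autonomous system.
    return rng.random() < 1e-3  # assumed true failure rate of 0.1%

def naive_failure_rate(n_rollouts, seed=0):
    # Plain Monte Carlo: count failures over n independent rollouts.
    # For rare failures this is wasteful -- most rollouts succeed and
    # carry no information, which is the inefficiency that sampling
    # methods like bridge sampling are designed to reduce.
    rng = random.Random(seed)
    failures = sum(simulate(rng) for _ in range(n_rollouts))
    return failures / n_rollouts

print(naive_failure_rate(100_000))
```

Even with 100,000 rollouts, the estimate of a 0.1% failure rate rests on only about a hundred observed failures, so its relative error is large; driving that error down with plain sampling quickly becomes computationally prohibitive.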

Artificial intelligence system developed to help better select embryos for implantation


For many people who are struggling to conceive, in-vitro fertilization (IVF) can offer a life-changing solution. But the average success rate for IVF is only about 30 percent. Investigators from Brigham and Women's Hospital and Massachusetts General Hospital are developing an artificial intelligence system with the goal of improving IVF success by helping embryologists objectively select the embryos most likely to result in a healthy birth. Using thousands of example embryo images and deep learning, the team developed a system that identified the embryos with the highest potential for success significantly better than 15 experienced embryologists from five different fertility centers across the United States. Results of their study are published in eLife.

Species-specific segmentation clock periods are due to differential biochemical reaction speeds


Many animals display similarities in their organization (body axis, organ systems, and so on). However, they can display vastly different life spans and thus must accommodate different developmental time scales. Two studies now compare human and mouse development (see the Perspective by Iwata and Vanderhaeghen). Matsuda et al. studied the mechanism by which the human segmentation clock displays an oscillation period of 5 to 6 hours, whereas the mouse period is 2 to 3 hours. They found that biochemical reactions, including protein degradation and delays in gene expression processes, were slower in human cells compared with their mouse counterparts. Rayon et al. looked at the developmental tempo of mouse and human embryonic stem cells as they differentiate to motor neurons in vitro. Neither the sensitivity of cells to signals nor the sequence of gene-regulatory elements could explain the differing pace of differentiation. Instead, a twofold increase in protein stability and cell cycle duration in human cells compared with mouse cells was correlated with the twofold slower rate of human differentiation. These studies show that global biochemical rates play a major role in setting the pace of development. Science, this issue p. [1450][1], p. [eaba7667][2]; see also p. [1431][3]

Although mechanisms of embryonic development are similar between mice and humans, the time scale is generally slower in humans. To investigate these interspecies differences in development, we recapitulate murine and human segmentation clocks that display 2- to 3-hour and 5- to 6-hour oscillation periods, respectively. Our interspecies genome-swapping analyses indicate that the period difference is not due to sequence differences in the HES7 locus, the core gene of the segmentation clock. Instead, we demonstrate that multiple biochemical reactions of HES7, including the degradation and expression delays, are slower in human cells than they are in mouse cells. With the measured biochemical parameters, our mathematical model accounts for the two- to threefold period difference between the species. We propose that cell-autonomous differences in biochemical reaction speeds underlie temporal differences in development between species.

[1]: /lookup/doi/10.1126/science.aba7668 [2]: /lookup/doi/10.1126/science.aba7667 [3]: /lookup/doi/10.1126/science.abe0953

Researchers say artificial intelligence and machine learning could enhance scientific peer review


As the COVID-19 pandemic has swept the world, researchers have published hundreds of papers each week reporting their findings--many of which have not undergone a thorough peer review process to gauge their reliability. In some cases, poorly validated research has massively influenced public policy, as when a French team reported COVID patients were cured by a combination of hydroxychloroquine and azithromycin. The claim was widely publicized, and soon U.S. patients were prescribed these drugs under an emergency use authorization. Further research involving larger numbers of patients has cast serious doubts on these claims, however. With so much COVID-related information being released each week, how can researchers, clinicians and policymakers keep up?

NASA weighs mission to Venus after recent discovery of possible life

The Japan Times

Washington – NASA is considering approving by next April up to two planetary science missions from four proposals under review, including one to Venus that scientists involved in the project said could help determine whether that planet harbors life. An international research team on Monday described evidence of potential microbes residing in the harshly acidic Venusian clouds: traces of phosphine, a gas that on Earth is produced by bacteria inhabiting oxygen-free environments. The finding provides potentially strong evidence of life beyond Earth. The U.S. space agency in February shortlisted four proposed missions that are now being reviewed by a NASA panel, two of which would involve robotic probes to Venus. One of those, called DAVINCI, would send a probe into the Venusian atmosphere.

A Closer Look at the Generalization Gap in Large Batch Training of Neural Networks


Deep learning architectures such as recurrent neural networks and convolutional neural networks have seen many significant improvements and have been applied in the fields of computer vision, speech recognition, natural language processing, audio recognition and more. The most commonly used optimization method for training these highly complex deep neural networks (DNNs) is stochastic gradient descent (SGD) or some variant of it. DNNs, however, typically have non-convex objective functions, which are difficult to optimize with SGD. Thus SGD, at best, finds a local minimum of the objective function. Although the solutions SGD finds are only local minima, they have produced great end results in practice.
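The local-minimum behavior described above is easy to see on a toy problem. Below is a minimal sketch in plain Python, where the one-dimensional objective, learning rate, and noise level are all illustrative assumptions: Gaussian noise stands in for minibatch gradient variance, and the iterate settles at one of the function's local minima rather than at a guaranteed global optimum.

```python
import random

def grad(x):
    # Gradient of the non-convex function f(x) = x**4 - 3*x**2 + x,
    # which has two local minima of different depths (near x ~ 1.1
    # and x ~ -1.3) separated by a local maximum.
    return 4 * x**3 - 6 * x + 1

def sgd(x0, lr=0.01, steps=5000, noise=0.1, seed=0):
    # Plain SGD: follow a noisy gradient estimate. The added Gaussian
    # noise plays the role of the variance introduced by estimating
    # the gradient from a random minibatch.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x -= lr * (grad(x) + rng.gauss(0.0, noise))
    return x

# The iterate ends near a stationary point (grad ~ 0), but *which*
# local minimum it reaches depends on the starting point and the noise.
print(sgd(x0=2.0))
print(sgd(x0=-2.0))
```

Starting from x0 = 2.0 the run descends into the right-hand minimum, while x0 = -2.0 descends into the left-hand one: both are valid SGD solutions, and neither is guaranteed to be the global minimum of f.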

A next-gen safety algorithm may be the future of self-driving cars


Taking a snooze at your car's steering wheel as it hurtles down a freeway is a terrible idea, but with the continued expansion of self-driving car technology, getting a few extra minutes of sleep on your commute may not remain a pipe dream for much longer. But there are more than a few issues with this vision of the future. Chief among them is that people don't trust self-driving cars, at least not enough to confidently put their lives in vehicles' "hands." Research from computer scientists in Germany could swing public opinion, however. Armed with a new algorithm, autonomous vehicles would be able to make safety calls in real time.

Researchers classify the most effective flirtatious cues including slight smiles and head tilts

Daily Mail - Science & tech

When it comes to flirting, it's all in a look. Women give specific facial cues when they're flirting, according to researchers at the University of Kansas. The team used the Facial Action Coding System (FACS) to identify the most recognizable flirtatious facial expressions. The technology, which describes facial movements, showed the most effective flirting cues include a head turned to one side and tilted down slightly, a slight smile, and eyes turned forward toward the target. Flirting is one of the key components of human sexuality, but it's only just begun to be analyzed by scientists.

Tech Behind NASA's ML Model To Predict Hurricane Intensity


With parts of the world dealing with the adverse effects of hurricanes and intense tropical cyclones, it has become imperative for researchers and scientists to develop ways to predict and analyse these hurricane patterns. Thus, in an attempt to forecast future hurricane intensity, scientists at NASA's Jet Propulsion Laboratory in Southern California have proposed a machine learning model that they claim can accurately predict rapid-intensification events. The critical factor in understanding the intensity of a hurricane is wind speed. Traditionally, it has been a challenge to predict the severity of a storm or hurricane while it is brewing. However, NASA's new ML model can improve the accuracy of these predictions and provide better results.