Transformer architectures have become the building blocks for many state-of-the-art natural language processing (NLP) models. While transformers are certainly powerful, researchers' understanding of how they actually work remains limited. This is problematic given the lack of transparency and the possibility of biases being inherited from training data and algorithms, which could cause models to produce unfair or incorrect predictions. In the paper "Transformer Visualization via Dictionary Learning: Contextualized Embedding as a Linear Superposition of Transformer Factors", a team led by Yann LeCun from Facebook AI Research, UC Berkeley and New York University leverages dictionary learning techniques to provide detailed visualizations of transformer representations, along with insights into the semantic structures (such as word-level disambiguation, sentence-level pattern formation, and long-range dependencies) that transformers capture. Previous attempts to open up this "black box" include direct visualization and, more recently, "probing tasks" designed to interpret transformer models.
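The core idea, writing a contextualized embedding as a sparse linear superposition of dictionary atoms (the paper's "transformer factors"), can be sketched with a toy numpy example. Everything below is an illustrative assumption, not the paper's actual method or code: the dimensions, the random dictionary, and the greedy matching-pursuit solver are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dictionary of d "transformer factors" (atoms) in R^n,
# with unit-norm columns. In the paper these would be learned from
# real contextualized embeddings; here they are random.
n, d, k = 16, 40, 3          # embedding dim, dictionary size, sparsity
D = rng.normal(size=(n, d))
D /= np.linalg.norm(D, axis=0)

# Synthesize an "embedding" as a sparse superposition of k atoms.
true_idx = rng.choice(d, size=k, replace=False)
x = D[:, true_idx] @ rng.uniform(1.0, 2.0, size=k)

def matching_pursuit(x, D, k):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the current residual and subtract its contribution."""
    residual = x.copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(k):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coefs[j] += corr[j]
        residual -= corr[j] * D[:, j]
    return coefs

a = matching_pursuit(x, D, k)
print("atoms used:", np.flatnonzero(np.abs(a) > 1e-6))
print("reconstruction error:", np.linalg.norm(D @ a - x))
```

In the paper, the atoms are learned from real transformer activations and the resulting sparse codes are what get visualized; here everything is synthetic, purely to show the superposition-and-recovery mechanics.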
Fusion reactor technologies are well-positioned to contribute to our future power needs in a safe and sustainable manner. Numerical models can provide researchers with information on the behavior of the fusion plasma, as well as valuable insight into the effectiveness of reactor design and operation. However, modeling the large number of plasma interactions requires specialized models that are too slow to usefully inform reactor design and operation. Aaron Ho from the Science and Technology of Nuclear Fusion group in the department of Applied Physics at Eindhoven University of Technology has explored the use of machine learning approaches to speed up the numerical simulation of core plasma turbulent transport. Ho defended his thesis on March 17th.
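The surrogate-modelling idea behind this kind of speed-up can be sketched in a few lines: run the slow code offline to build a training set, fit a fast approximation, and query the approximation instead of the code. The toy "simulation", the polynomial surrogate, and the parameter range below are all illustrative assumptions; Ho's work trains neural networks on large databases of turbulence-simulation output, which this sketch does not reproduce.

```python
import numpy as np

# Illustrative stand-in for an expensive turbulence simulation: maps a
# single normalized plasma parameter to a transport flux. The real codes
# take many parameters and are far slower; this is purely a sketch.
def slow_simulation(x):
    return np.tanh(3.0 * (x - 0.5)) + 0.1 * x**2

# Offline: run the slow code on a modest training set...
X_train = np.linspace(0.0, 1.0, 50)
y_train = slow_simulation(X_train)

# ...and fit a cheap surrogate. (A polynomial fit stands in here for the
# neural-network surrogates used in practice.)
surrogate = np.poly1d(np.polyfit(X_train, y_train, deg=7))

# Online: the surrogate answers new queries without rerunning the code.
X_new = np.random.default_rng(1).uniform(0.0, 1.0, 200)
max_err = float(np.max(np.abs(surrogate(X_new) - slow_simulation(X_new))))
print(f"max surrogate error on new inputs: {max_err:.5f}")
```

The pay-off is that each surrogate evaluation costs microseconds, so the model can sit inside an integrated reactor-design loop where the full simulation could not.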
It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate. Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased. A team of researchers have created a website – emojify.info – where members of the public can try the technology for themselves. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context. Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.
The first wireless commands to a computer have been demonstrated in a breakthrough for people with paralysis. The system is able to transmit brain signals at "single-neuron resolution and in full broadband fidelity", say researchers at Brown University in the US. A clinical trial of the BrainGate technology involved a small transmitter that connects to a person's brain motor cortex. Trial participants with paralysis used the system to control a tablet computer, the journal IEEE Transactions on Biomedical Engineering reports. The participants achieved typing speeds and point-and-click accuracy similar to those they had managed with wired systems.
A football player wears a vest holding a GPS sensor; the data captured feed into an algorithm. Credit: Matthew Ashton/AMA/Corbis via Getty

In 2005, 17-year-old aspiring footballer Alessio Rossi tore two ligaments in his right ankle during training for lower-league Italian football club USD Olginatese. The injury ended his dream of playing at the highest level. Today, Rossi is a postdoctoral researcher at the University of Pisa, Italy, where he collects and analyses reams of data to help prevent players at top teams from picking up injuries of their own. When Rossi was playing, his coaches' instincts and experience were all they had to go on when predicting whether he might get injured.
Yes, but: In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels as well as photos of people's faces obtained without consent. The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%.
A microscopic, living robot that can heal and power itself has been created out of frog skin cells. Xenobots, named after the frog species Xenopus laevis that the cells come from, were first described last year. Now the team behind the robots has improved their design and demonstrated new capabilities. To create the spherical xenobots, Michael Levin at Tufts University in Massachusetts and his colleagues extracted tissue from 24-hour-old frog embryos, which formed into spheroid structures after minimal physical manipulation. Where the previous version relied on the contraction of heart muscle cells to push off surfaces and move forward, the new xenobots swim around faster, self-propelled by hair-like structures on their surface.
The current boom in artificial intelligence can be traced back to 2012 and a breakthrough during a competition built around ImageNet, a set of 14 million labeled images. In the competition, a method called deep learning, which involves feeding examples to a giant simulated neural network, proved dramatically better at identifying objects in images than other approaches. That kick-started interest in using AI to solve different problems. But research revealed this week shows that ImageNet and nine other key AI data sets contain many errors. Researchers at MIT compared how an AI algorithm trained on the data interprets an image with the label that was applied to it.
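The comparison the MIT team performed can be illustrated with a simplified sketch: flag examples where a model's confident prediction disagrees with the dataset's given label. This is only a caricature of the study's actual "confident learning" methodology (which also uses human verification); the data, class count, and confidence threshold below are made up for illustration.

```python
import numpy as np

# Hypothetical model outputs: per-example predicted class probabilities.
pred_probs = np.array([
    [0.95, 0.03, 0.02],   # confidently class 0
    [0.10, 0.85, 0.05],   # confidently class 1
    [0.92, 0.05, 0.03],   # confidently class 0 ...
    [0.40, 0.35, 0.25],   # not confident -> never flagged
])
given_labels = np.array([0, 1, 2, 1])  # example 2 is labeled 2 but looks like 0

def flag_label_issues(pred_probs, labels, threshold=0.9):
    """Flag examples where the model assigns >= threshold probability
    to a class that disagrees with the given label."""
    pred = pred_probs.argmax(axis=1)
    conf = pred_probs.max(axis=1)
    return np.flatnonzero((pred != labels) & (conf >= threshold))

issues = flag_label_issues(pred_probs, given_labels)
print("flagged indices:", issues)           # example index 2 is flagged
error_rate = len(issues) / len(given_labels)
print(f"estimated label error rate: {error_rate:.1%}")
```

Applied over a whole test set, this kind of count is what yields an aggregate figure like ImageNet's estimated 5.8% label error rate.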
Lizards that climb need to be both fast and stable to avoid predation and find food. A robot made to mimic their movements has now shown how the rotation of their legs and the speed with which they move up vertical surfaces help them climb efficiently. "Most lizards look a lot like other lizards," says Christofer Clemente at the University of the Sunshine Coast, Australia. To find out why, Clemente and his team built a robot based on a lizard's body plan to explore its efficiency. It is about 24 centimetres long, and its legs and feet are programmed to mimic the gait of climbing lizards.