'Dibling is the antidote to robotic, structured & predictable football'
"In a world and industry which is becoming more commercialised, over-sanitised, robotic, structured and predictable, Tyler's greatest strength is the opposite to all of that."

That's quite the sell for Southampton's 19-year-old midfield star Tyler Dibling, especially given his basic Premier League career numbers amount to 25 appearances, 1,540 minutes played, two goals and zero assists. But that gushing description from one senior source at the club, speaking to BBC Sport anonymously, hints at an emerging talent interesting a host of top clubs, and why there are unsubstantiated reports of a 100m price tag on his head.

With the Saints facing immediate relegation back to the Championship, Dibling's future is likely to be one of the summer's more interesting sagas, with Manchester United, Arsenal, Tottenham and Bayern Munich all reportedly chasing his signature. Another source close to the club suggested Southampton turned down previously unreported bids of 35m from Tottenham and 30m from RB Leipzig in January, with the club valuing Dibling at 55m at the start of the winter window.

Southampton have not commented on those rumours, but what is known is that Dibling is one of the lowest-paid players in Southampton's squad and has a deal that expires in 2027, after the club triggered a 12-month extension option. He signed his last contract in December 2023, when he had played just five minutes of senior football. The England Under-21 international has so far resisted the club's offers of a new deal in what has been a breakthrough season for him, despite a wretched campaign which could still see Southampton relegated with the Premier League's lowest-ever points total. His dribbles completed per game (2.34) and fouls won per game (2.57) place him in the top 10.

"He's the most fearless player I've ever worked with," former Saints Under-21 head coach Adam Asghar tells BBC Sport. "He's totally unique to anything I've seen before."
ChimpACT: A Longitudinal Dataset for Understanding Chimpanzee Behaviors
Stephan P. Kaufhold, Jack Terwilliger
Understanding the behavior of non-human primates is crucial for improving animal welfare, modeling social behavior, and gaining insights into distinctively human and phylogenetically shared behaviors. However, the lack of datasets on non-human primate behavior hinders in-depth exploration of primate social interactions, posing challenges to research on our closest living relatives. To address these limitations, we present ChimpACT, a comprehensive dataset for quantifying the longitudinal behavior and social relations of chimpanzees within a social group. Spanning from 2015 to 2018, ChimpACT features videos of a group of over 20 chimpanzees residing at the Leipzig Zoo, Germany, with a particular focus on documenting the developmental trajectory of one young male, Azibo. ChimpACT is both comprehensive and challenging, consisting of 163 videos with a cumulative 160,500 frames, each richly annotated with detection, identification, pose estimation, and fine-grained spatiotemporal behavior labels. We benchmark representative methods of three tracks on ChimpACT: (i) tracking and identification, (ii) pose estimation, and (iii) spatiotemporal action detection of the chimpanzees. Our experiments reveal that ChimpACT offers ample opportunities for both devising new methods and adapting existing ones to solve fundamental computer vision tasks applied to chimpanzee groups, such as detection, pose estimation, and behavior analysis.
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks.
Vision-Braille: An End-to-End Tool for Chinese Braille Image-to-Text Translation
Alan Wu, Ye Yuan, Ming Zhang
Visually impaired people are a large group who can only use braille for reading and writing, yet the lack of dedicated educational resources is the bottleneck in educating them. Educational equity is a reflection of the level of social civilization, cultural equality, and individual dignity, so facilitating and improving lifelong learning channels for the visually impaired is of great significance. Their written braille homework or exam papers cannot be understood by sighted teachers, because of the lack of a highly accurate braille translation system, especially in Chinese, which has tone marks. Braille writers often omit tone marks to save space, leading to confusion when braille with the same consonants and vowels is translated into Chinese. Previous algorithms were insufficient in extracting contextual information, resulting in low accuracy of braille translations into Chinese. This project fine-tuned the mT5 model, an encoder-decoder architecture, for braille-to-Chinese character conversion, creating a training set of braille and corresponding Chinese text from the Leipzig Corpora. With a curriculum-learning fine-tuning method, it significantly reduced the confusion in braille, achieving $62.4$ and $62.3$ BLEU scores on the validation and test sets. By incorporating the braille recognition algorithm, this project is the first publicly available braille translation system and can benefit many visually impaired students and families who are preparing for the Chinese College Test, helping to propel their college dreams. There is a demo on our homepage\footnote{\url{https://vision-braille.com/}}.
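The tone-mark problem the abstract describes can be made concrete with a toy sketch: once tones are dropped, one braille syllable maps to several Chinese characters. The mapping and function below are purely illustrative (not from the Vision-Braille system).

```python
# Toy example of the ambiguity caused by omitted tone marks: the pinyin
# syllable "ma", stripped of its tone digit, matches several characters.
# This tiny dictionary is invented for illustration only.
TONED = {
    "ma1": "妈",  # mother
    "ma2": "麻",  # hemp
    "ma3": "马",  # horse
    "ma4": "骂",  # to scold
}

def candidates(toneless):
    """Return every character whose pinyin matches once tone digits are dropped."""
    return [char for pinyin, char in TONED.items()
            if pinyin.rstrip("12345") == toneless]

print(candidates("ma"))  # four equally plausible characters
```

A context-free lookup cannot choose among the four candidates; this is the ambiguity a sequence-to-sequence model like mT5 resolves by conditioning on the surrounding text.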
What if we could just ask AI to be less biased?
Researchers don't know why text- and image-generating AI models self-correct for some biases after simply being asked to do so. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it's a white man with glasses. Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities. Although I've written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are.
That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like "CEO" or "director." And the bias problem runs even deeper than you might think into the broader world created by AI. These models are built by American companies and trained on North American data, and thus when they're asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.
After 35 years of Final Fantasy, what's next for composer Nobuo Uematsu?
Uematsu has always been passionate about performing the music he's written for video games onstage. While video game concerts have been taking place in Japan since 1987, when Koichi Sugiyama filled the Suntory Hall in Tokyo with his music from Dragon Quest on the NES, it wasn't until 2003 that Uematsu's music was performed onstage in the West. The success of Thomas Böcker's Symphonic Game Music Concert in Leipzig, Germany, spawned a symphony concert series that awakened Uematsu to the global popularity of Final Fantasy music concerts.
Chimpanzees hunt for fruit in video game to test navigation skills
Chimpanzees in a zoo have been trained to use a touchscreen to navigate a virtual environment and seek out objects. Studies like this could help us learn more about how our close relatives find their way around in the jungle. "There's a lot of research on the navigation of birds and bees," says Matthias Allritz at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. "But we know very little about the navigation of most primate species." This is largely because chimpanzees are difficult to track in the wild, Allritz says.
Boundary between noise and information applied to filtering neural network weight matrices
Max Staats, Matthias Thamm, Bernd Rosenow
Institut für Theoretische Physik, Universität Leipzig, Brüderstrasse 16, 04103 Leipzig, Germany (Dated: June 9, 2022)

Deep neural networks have been successfully applied to a broad range of problems where overparametrization yields weight matrices which are partially random. A comparison of weight matrix singular vectors to the Porter-Thomas distribution suggests that there is a boundary between randomness and learned information in the singular value spectrum. Inspired by this finding, we introduce an algorithm for noise filtering, which both removes small singular values and reduces the magnitude of large singular values to counteract the effect of level repulsion between the noise and the information part of the spectrum. For networks trained in the presence of label noise, we indeed find that the generalization performance improves significantly due to noise filtering.

Introduction: In recent years, deep neural networks (DNNs) have proven to be powerful tools for solving a broad range of problems. Singular vectors with small singular values agree with the RMT prediction, while vectors with large singular values significantly deviate.
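The filtering idea in the abstract (drop small singular values, then shrink the large ones to counteract level repulsion) can be sketched with plain NumPy. The cutoff and the linear shrinkage rule below are illustrative assumptions, not the paper's exact prescription, and the "trained" matrix is simulated as noise plus a planted rank-one signal.

```python
import numpy as np

def filter_weights(W, cutoff, shrink=0.9):
    """Denoise a weight matrix via its SVD: zero out singular values below
    `cutoff`, and pull the surviving ones back toward the cutoff to
    compensate for level repulsion. Cutoff/shrinkage here are illustrative."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    keep = s >= cutoff
    s_filtered = np.where(keep, cutoff + shrink * (s - cutoff), 0.0)
    return U @ np.diag(s_filtered) @ Vt

rng = np.random.default_rng(0)
# Random (noise-like) 64x32 matrix, entries scaled so bulk singular values are O(1)
W = rng.normal(size=(64, 32)) / np.sqrt(32)
# Planted "learned information": a large rank-one component
W += 5.0 * np.outer(rng.normal(size=64), rng.normal(size=32)) / 8
W_clean = filter_weights(W, cutoff=3.0)
print(np.linalg.matrix_rank(W), "->", np.linalg.matrix_rank(W_clean))
```

With these parameters the noise bulk sits below the cutoff (the Marchenko-Pastur edge for this shape is about 2.4), so filtering collapses the full-rank matrix onto its planted signal direction.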
Artificial Intelligence Uses a Computer Chip Designed for Video Games. Does That Matter?
Participants sit at computers to play a video game at the 2019 DreamHack video gaming festival in Leipzig, Germany. GPUs were originally designed for the video gaming industry because they are particularly good at matrix arithmetic. As AI and machine learning become more and more widespread in the global economy, there is an increasing focus on the hardware that drives them. Currently, nearly all AI systems run on a chip known as a GPU that was designed for video gaming. Are current chip designs fit for purpose in an AI future, or is a new type of chip needed?
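The article's point that GPUs suit AI because they excel at matrix arithmetic can be seen in one line of code: the forward pass of a dense neural-network layer is a single matrix multiplication. The sizes and values below are arbitrary illustration.

```python
import numpy as np

# A dense neural-network layer is, at its core, one matrix multiplication:
# the same kind of arithmetic GPUs were built to do fast for 3D graphics.
batch, d_in, d_out = 4, 8, 3
x = np.ones((batch, d_in))        # a batch of 4 input vectors
W = np.full((d_in, d_out), 0.5)   # layer weights
b = np.zeros(d_out)               # bias
y = x @ W + b                     # forward pass = matrix arithmetic
print(y.shape, y[0, 0])           # (4, 3) 4.0
```

Because every output element is an independent dot product, the whole operation parallelizes across thousands of GPU cores, which is why the same chips that rendered game scenes now train and run neural networks.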