Fractal Patterns May Illuminate the Success of Next-Token Prediction
We study the fractal structure of language, aiming to provide a precise formalism for quantifying properties that may have been previously suspected but not formally shown. We establish that language is: (1) self-similar, exhibiting complexities at all levels of granularity, with no particular characteristic context length, and (2) long-range dependent (LRD), with a Hurst parameter of approximately 0.7. Based on these findings, we argue that short-term patterns/dependencies in language, such as in paragraphs, mirror the patterns/dependencies over larger scopes, like entire documents. This may shed some light on how next-token prediction can capture the structure of text across multiple levels of granularity, from words and clauses to broader contexts and intents. In addition, we carry out an extensive analysis across different domains and architectures, showing that fractal parameters are robust. Finally, we demonstrate that the tiny variations in fractal parameters seen across LLMs improve upon perplexity-based bits-per-byte (BPB) in predicting their downstream performance. We hope these findings offer a fresh perspective on language and the mechanisms underlying the success of LLMs.
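A Hurst parameter like the 0.7 cited above is classically estimated with rescaled-range (R/S) analysis: compute the range of cumulative deviations over windows of growing size, normalize by the window's standard deviation, and read the Hurst exponent off the log-log slope. The sketch below is a minimal pure-Python illustration of that generic technique, not the paper's actual estimation procedure; the function name `hurst_rs` and the window-doubling scheme are assumptions made for the example.

```python
import math
import random

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a numeric series via rescaled-range (R/S) analysis."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_window
    while size <= n // 2:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            # Cumulative deviations from the window mean
            z, cum = [], 0.0
            for x in chunk:
                cum += x - mean
                z.append(cum)
            r = max(z) - min(z)  # range of cumulative deviations
            s = math.sqrt(sum((x - mean) ** 2 for x in chunk) / size)  # window std dev
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
        size *= 2
    # Least-squares slope of log(R/S) against log(window size) gives H
    m = len(log_sizes)
    sx, sy = sum(log_sizes), sum(log_rs)
    sxx = sum(x * x for x in log_sizes)
    sxy = sum(x * y for x, y in zip(log_sizes, log_rs))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
print(round(hurst_rs(white), 2))  # memoryless noise: estimate near 0.5
```

On this reading, H ≈ 0.5 indicates no memory, while the paper's H ≈ 0.7 for language indicates long-range dependence: deviations at short scales persist over much larger scopes.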
This Is the First Time Scientists Have Seen Decision-Making in a Brain
Twelve laboratories around the world have joined forces to map neuronal activity in a mouse's brain as it makes decisions. Neuroscientists from around the world have worked in parallel to map, for the first time, the entire brain activity of mice while they were making decisions. This achievement involved using electrodes inserted inside the brain to simultaneously record the activity of more than half a million neurons distributed across 95 percent of the rodents' brain volume.
Active Illumination for Visual Ego-Motion Estimation in the Dark
Crocetti, Francesco, Dionigi, Alberto, Brilli, Raffaele, Costante, Gabriele, Valigi, Paolo
In this paper, we propose a novel active illumination framework to enhance the performance of VO and V-SLAM algorithms in low-light conditions. The developed approach dynamically controls a moving light source to illuminate highly textured areas, thereby improving feature extraction and tracking. Specifically, a detector block, which incorporates a deep learning-based enhancing network, identifies regions with relevant features. A pan-tilt controller then guides the light beam toward these areas, so as to provide information-rich images to the ego-motion estimation algorithm. Experimental results on a real robotic platform demonstrate the effectiveness of the proposed method, showing a reduction in the pose estimation error of up to 75% with respect to a traditional fixed-lighting technique.
I. INTRODUCTION
Vision-based pose estimation is one of the most widespread strategies for achieving mobile robot localization. Several effective Visual Odometry (VO) and Visual SLAM (V-SLAM) approaches have flourished over the last decades [1], and the recent emergence of visual-inertial techniques has shown even more impressive results [2], [3]. The effectiveness of VO and V-SLAM solutions depends on the capability to extract robust and highly descriptive visual features.
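The pan-tilt steering step can be illustrated with a toy pinhole-camera sketch: map the pixel centroid of a detected high-texture region to pan/tilt angles via the camera's field of view. This is a generic small-angle approximation, not the authors' controller; the field-of-view values and the linear mapping are assumptions made for the example.

```python
def pan_tilt_to_target(cx, cy, img_w, img_h, hfov_deg=60.0, vfov_deg=40.0):
    """Map a pixel target (cx, cy) to pan/tilt angles under a pinhole-camera,
    small-angle assumption. hfov_deg/vfov_deg are the camera's fields of view."""
    # Normalized offsets from the image center, in [-0.5, 0.5]
    nx = cx / img_w - 0.5
    ny = cy / img_h - 0.5
    pan = nx * hfov_deg
    tilt = -ny * vfov_deg  # image y grows downward, so tilting up is positive
    return pan, tilt

# A target in the upper-right quadrant of a 1280x720 frame:
print(pan_tilt_to_target(960, 270, 1280, 720))  # (15.0, 5.0): pan right, tilt up
```

In a real system these angles would feed a closed-loop pan-tilt servo rather than being applied open-loop, since lens distortion and the light source's offset from the camera both break the linear mapping.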
These Brilliant BenQ ScreenBar Lamps Are My Favorite WFH Accessory
I review a lot of home office gear for my job, which means my workstation is in flux. A new desk today, another office chair tomorrow--you get the idea. You may have heard of BenQ before--the Taiwanese company makes excellent monitors and projectors--but this lamp is my number one work-from-home accessory. The ScreenBar doesn't take up any desk space, because it hangs over your computer monitor and brilliantly illuminates the desktop. It's a simple little thing, but it brings me a lot of joy, and the company has been iterating on it, with the latest version being the ScreenBar Pro. I'm here to tell you that you should probably get one for your home office.
data.ai launches two innovative products
The addition of App IQ and IAP SKU (In-App Purchase SKU) will provide insights to drive effective consumer strategies. Digital success requires engaging with consumers where they spend the vast majority of their time: mobile. The challenge is that mobile app store categories are antiquated, causing enterprise teams to spend precious bandwidth on onerous research and manual analysis of competitors. App IQ illuminates the digital landscape by providing an industry-first, robust taxonomy (19 genres / 152 subgenres) combining both app stores. Enterprises can now identify new partnership opportunities and competitive threats, and quickly react to the ever-changing landscape.
Eye test uses AI to predict macular degeneration
A new eye test that uses artificial intelligence (AI) to study retina scans can predict age-related macular degeneration (AMD) three years before symptoms start. The first part of the 'pioneering' test, developed by researchers at University College London, is called DARC. DARC involves injecting dye into a person's bloodstream to illuminate 'stressed' endothelial cells in the retina, so they appear bright white under a fluorescent camera. These 'stressed' retinal cells could lead to abnormalities and later leaking blood vessels – causing AMD, which can severely compromise the central field of vision. The second part of the test uses an AI algorithm, trained to detect whether the highlighted white spots are around the macula – which indicates high AMD risk.
Why organizations might want to design and train less-than-perfect AI
These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered. Each of these AI systems also keeps humans in the decision-making loop. That's because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers). In addition, giving too much authority to AI systems can unintentionally reduce human motivation.
Giant larvacean could help the battle against climate change
A strange sea creature that lives 1,000 feet below the surface encased in a giant bubble of mucus may be key to removing carbon dioxide from the atmosphere. These bubble-houses are discarded and replaced regularly as the animal grows in size and its filters become clogged with particles. Once discarded, they sink to the seafloor and encapsulate the carbon for good, preventing it from re-entering the atmosphere. Larvaceans also capture and dispose of microplastics in this way, which can come from clothing and cosmetics and are often ingested by other marine species. Researchers used a system of lasers mounted on a 12,000-pound robot to map the giant larvacean's delicate body in a series of 3D images.
Machine learning microscope adapts lighting to improve diagnosis
Engineers at Duke University have developed a microscope that adapts its lighting angles, colors and patterns while teaching itself the optimal settings needed to complete a given diagnostic task. In the initial proof-of-concept study, the microscope simultaneously developed a lighting pattern and classification system that allowed it to quickly identify red blood cells infected by the malaria parasite more accurately than trained physicians and other machine learning approaches. The results appear online on November 19 in the journal Biomedical Optics Express. "A standard microscope illuminates a sample with the same amount of light coming from all directions, and that lighting has been optimized for human eyes over hundreds of years," said Roarke Horstmeyer, assistant professor of biomedical engineering at Duke. "But computers can see things humans can't," Horstmeyer said. "So not only have we redesigned the hardware to provide a diverse range of lighting options, we've allowed the microscope to optimize the illumination for itself."