UC Berkeley


At-home test works like coffee rings to spot serious illness faster

FOX News

Have you ever noticed how a spilled cup of coffee leaves behind a telltale brown ring? While those stains might be annoying, the science behind them, known as the coffee ring effect, has sparked innovations in health technology. UC Berkeley researchers recently turned this everyday phenomenon into a breakthrough medical test, making rapid and reliable disease detection as easy as brewing your morning coffee. Curious how a simple coffee stain could inspire cutting-edge diagnostics and revolutionize at-home testing?


AI Agents Are Getting Better at Writing Code--and Hacking It as Well

WIRED

The latest artificial intelligence models are not only remarkably good at software engineering--new research shows they are getting ever better at finding bugs in software, too. AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open-source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or "zero-day," vulnerabilities. "Many of these vulnerabilities are critical," says Dawn Song, a professor at UC Berkeley who led the work. Many experts expect AI models to become formidable cybersecurity weapons.


Have scientists discovered a new colour called 'olo'?

Al Jazeera

A team of scientists claims to have discovered a new colour that humans cannot see without the help of technology. The researchers, based in the United States, said they were able to "experience" the colour, which they named "olo", by firing laser pulses into their eyes using a device named after the Wizard of Oz. Olo cannot be seen with the naked eye, but the five people who have seen it describe it as being similar to teal. Professors from the University of California, Berkeley and the University of Washington School of Medicine published an article in the journal Science Advances on April 18 in which they put forth their discovery of a hue beyond the gamut of human vision. They explained that they had devised a technique called Oz, which can "trick" the human eye into seeing olo.


From eating rocks to putting glue on your pizza and smoking while pregnant, here's what Google's new AI tool is (incorrectly) telling users to do

Daily Mail - Science & tech

It looks like Google's latest attempt at making people's lives easier with artificial intelligence (AI) is backfiring. The tech giant's new tool, 'AI Overviews', gives users AI-powered summaries of search results on Chrome, Firefox and the Google app browser. But since it started rolling out this month, people have noticed that it's returning incorrect statements and suggestions – many of which are dangerous. Among them, it claims you can 'use gasoline to make a spicy spaghetti dish', eat rocks and put glue on your pizza. In response to the search 'cheese not sticking to pizza', Google suggests adding 'non-toxic glue' to the sauce 'to give it more tackiness'.


The Audio-Visual BatVision Dataset for Research on Sight and Sound

Brunetto, Amandine, Hornauer, Sascha, Yu, Stella X., Moutarde, Fabien

arXiv.org Artificial Intelligence

Vision research has shown remarkable success in understanding our world, propelled by datasets of images and videos. Sensor data from radar, LiDAR and cameras has supported research in robotics and autonomous driving for at least a decade. However, visual sensors may fail in some conditions, and sound has recently shown potential to complement them. Simulated room impulse responses (RIR) in 3D apartment models became a benchmark dataset for the community, fostering a range of audio-visual research. In simulation, depth can be predicted from sound by learning bat-like perception with a neural network. Concurrently, the same was achieved in reality by using RGB-D images and echoes of chirping sounds. Biomimicking bat perception is an exciting new direction but needs dedicated datasets to explore its potential. Therefore, we collected the BatVision dataset to provide large-scale echoes in complex real-world scenes to the community. We equipped a robot with a speaker to emit chirps and a binaural microphone to record their echoes. Synchronized RGB-D images from the same perspective provide visual labels of traversed spaces. We sampled locations ranging from modern US office spaces to historic French university grounds, indoor and outdoor, with large architectural variety. This dataset will allow research on robot echolocation, general audio-visual tasks and sound phenomena unavailable in simulated data. We show promising results for audio-only depth prediction and show how state-of-the-art work developed for simulated data can also succeed on our dataset. The data can be downloaded at https://forms.gle/W6xtshMgoXGZDwsE7
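To make the chirp-and-echo setup concrete, here is a minimal sketch of how a binaural echo recording could be turned into a spectrogram input for audio-only depth prediction. This is an illustration only, not BatVision's actual preprocessing pipeline: the window sizes, chirp parameters and the synthetic interaural delay are all assumptions made up for the example.

```python
import numpy as np

def stft_spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    # rfft over each frame -> (frames, freq_bins); transpose to (freq, time)
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic stand-in for a recorded binaural echo: a linear up-chirp,
# with the right ear delayed by a few samples (hypothetical values).
sr, dur = 44100, 0.1
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
chirp = np.sin(2 * np.pi * (1000 + 20000 * t) * t)
left, right = chirp, np.roll(chirp, 40)

# Stack both ears into one (channels, freq, time) tensor, the kind of
# input a depth-from-sound network could consume alongside RGB-D labels.
spec = np.stack([stft_spectrogram(left), stft_spectrogram(right)])
print(spec.shape)  # (2, 129, 33)
```

A real pipeline would pair each such spectrogram with the synchronized depth map as the regression target; the point here is just the shape of the audio input.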


Will AGI Systems Undermine Human Control? OpenAI, UC Berkeley & Oxford U Explore the Alignment Problem

#artificialintelligence

In the new paper The Alignment Problem From a Deep Learning Perspective, a research team from OpenAI, UC Berkeley and the University of Oxford examines the alignment problem with regard to deep learning, identifying potential issues and how we might mitigate them.


Tired of laundry folding? AI breaks the robot folding speed record

#artificialintelligence

While it's possible that someone out there enjoys folding clothes, it's probably not a beloved pastime. Accordingly, researchers at UC Berkeley's AUTOLAB have developed SpeedFolding, a new robotic method of folding garments at record speed (for a robot). Using machine vision, a neural network called BiManual Manipulation Network (BiMaMa-Net), and a pair of industrial robot arms, SpeedFolding can fold 30–40 randomly positioned garments per hour, usually finishing each within two minutes. While that rate does not sound impressive compared to a human, previous robotic garment-folding methods reached only "3-6 FPH" (that's "folds per hour"), according to the researchers in a paper submitted for presentation at IROS 2022 next week in Kyoto. Speed achievement aside, the paper is worth a read for how the researchers describe the garment-folding problem in technical terms.


TensorFlow Machine Learning Projects: Build 13 real-world projects with advanced numerical computations using the Python ecosystem: Jain, Ankit, Fandango, Armando, Kapoor, Amita: 9781789132212: Amazon.com: Books

#artificialintelligence

Ankit currently works as a Senior Research Scientist at Uber AI Labs, the machine learning research arm of Uber. His work primarily involves the application of deep learning methods to a variety of Uber's problems, ranging from forecasting and food delivery to self-driving cars. Previously, he worked in a variety of data science roles at Bank of America, Facebook and other startups. Additionally, he has been a featured speaker at many of the top AI conferences and universities across the US, including UC Berkeley and the O'Reilly AI Conference. He completed his MS at UC Berkeley and his BS at IIT Bombay (India).


Meet Colossal-AI Team at SC22 and Other 3 Renowned International Conferences

#artificialintelligence

Recently, the Colossal-AI team, which developed a unified deep learning system for the big-model era, has been invited to deliver keynote speeches at a series of notable international conferences, including SuperComputing 2022 (SC22), the Open Data Science Conference (ODSC), the World Artificial Intelligence Conference (WAIC), and AWS Summit. At these events, the team will share its latest work in High Performance Computing (HPC) and Artificial Intelligence (AI). Follow us and stay tuned! SC (formerly Supercomputing), the International Conference for High Performance Computing, Networking, Storage and Analysis, is an annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. SC brings together the world's top research institutions and companies in the computer industry to share cutting-edge developments and innovations in HPC, networking, storage and analysis that will unlock new solutions and change our world.


Astronomers propose new theory for observing far-off worlds – TechCrunch

#artificialintelligence

Machine learning models are increasingly augmenting human processes, either performing repetitious tasks faster or providing some systematic insight that helps put human knowledge in perspective. Astronomers at UC Berkeley were surprised to find both happen after modeling gravitational microlensing events, leading to a new unified theory for the phenomenon. Gravitational lensing occurs when light from far-off stars and other stellar objects bends around a nearer one directly between it and the observer, briefly giving a brighter -- but distorted -- view of the farther one. Depending on how the light bends (and what we know about the distant object), we can also learn a lot about the star, planet or system that the light is bending around. For example, a momentary spike in brightness suggests a planetary body transiting the line of sight, and anomalies like these have been used to spot thousands of exoplanets. Interpreting such readings is complicated by "degeneracies" -- cases where different lens configurations fit the same light curve equally well.
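The brightening described above follows a well-known shape: for a single point lens, the magnification depends only on the source's angular separation from the lens in Einstein-radius units (the Paczynski curve). The sketch below plots a smooth, anomaly-free event; the event parameters (u0, the time grid) are made up purely for illustration.

```python
import numpy as np

def magnification(u):
    """Point-source, point-lens (Paczynski) magnification
    for impact parameter u in Einstein-radius units."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Source track past the lens: closest approach u0 at t = 0,
# time measured in Einstein-radius crossing times (hypothetical values).
t = np.linspace(-2, 2, 401)
u0 = 0.1
u = np.sqrt(u0**2 + t**2)

A = magnification(u)
print(round(A.max(), 2))  # peak brightness at closest approach
```

A planetary companion would superimpose a short-lived spike on this smooth curve; degenerate solutions arise because several distinct lens geometries can reproduce that spike equally well.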