Data is eating the world, and there are numerous indicators of its ubiquitous presence in our lives and of how it makes businesses and consumers both anxious and animated. Data dominates our deeds, debates, and dreams.

"Covid has only accelerated the digital transformation, and automation is the cornerstone of digital transformation services"--Daniel Dines, co-founder and CEO of UiPath, a Robotic Process Automation (RPA) startup whose revenues increased 81% in 2020 and whose April 20, 2021, IPO valued it at $36 billion

"…the whole reason [AI] takes so long in the first place is that it's not easy"--Erik Brynjolfsson, director of the Stanford Digital Economy Lab

"When the [NFT] bubble bursts, it's not going to wipe out this technology. It's just going to wipe out the junk"--Beeple (artist Mike Winkelmann, whose NFT-certified digital mosaic sold for $69 million)

"Data is now at the center of global trade… Digital technologies trafficking in data now enable, and in some cases have replaced, traditional trade in goods and services… The global economy has become a perpetual motion machine of data: it consumes it, processes it, and produces ever more quantities of it"--David H. McCormick and Matthew J. Slaughter

"We've been talking about home robots coming for a long time, and all we have so far is the vacuum cleaner"--Jeff Burnstein, President, Association for Advancing Automation

"As a supply-chain provider, as a logistics provider, we are very much in the data business"--Mario Harik, CIO, XPO Logistics

"People are getting confused about the meaning of AI in discussions of technology trends--that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans. We don't have that, but people are talking as if we do"--Michael Jordan, University of California, Berkeley
When children first learn to crawl, walk, and run, it is a process full of trial and error -- expressed with frustrated cries and bumped heads. This tender learning process from early childhood may seem like an innately human experience, but it's actually remarkably similar to what engineers at the University of California, Berkeley put their bipedal robot Cassie through in order to teach it to walk. Dancing and fighting robots, like those made by Boston Dynamics (and the parodies they have inspired), have taken the internet by storm in the past few years. But what these videos don't show are the finely tuned, choreographed movements often lurking in their code. Zhongyu Li is a Ph.D. candidate at the University of California, Berkeley, studying robotic locomotion.
BERKELEY, California – The fatal crash of a Tesla with no one apparently behind the wheel has cast a new light on the safety of semi-autonomous vehicles and the nebulous U.S. regulatory terrain they navigate. Police in Harris County, Texas, said a Tesla Model S smashed into a tree on Saturday at high speed after failing to negotiate a bend and burst into flames, killing one occupant found in the front passenger seat and the owner in the back seat. Tesla Chief Executive Elon Musk tweeted on Monday that preliminary data downloaded by Tesla indicate the vehicle was not operating on Autopilot and was not part of the automaker's "Full Self-Driving" (FSD) system. Tesla's Autopilot and FSD, as well as the growing number of similar semi-autonomous driving functions in cars made by other automakers, present a challenge to officials responsible for motor vehicle and highway safety. The U.S. federal road safety authority, the National Highway Traffic Safety Administration (NHTSA), has yet to issue specific regulations or performance standards for semi-autonomous systems such as Autopilot, or for fully autonomous vehicles (AVs).
A pioneer in machine learning has argued that the technology is best placed to augment human intelligence and bemoaned 'confusion' over the meaning of artificial intelligence (AI). Michael I. Jordan, a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley, told the IEEE that while science-fiction discussions around AI were 'fun', they were also a 'distraction.' "There's not been enough focus on the real problem, which is building planetary-scale machine learning-based systems that actually work, deliver value to humans, and do not amplify inequities," said Jordan, in an article by IEEE Spectrum author Kathy Pretz. Jordan, whose awards include the IEEE John von Neumann Medal, awarded last year for his contributions to machine learning and data science, wrote an article entitled 'Artificial Intelligence: The Revolution Hasn't Happened Yet', first published in July 2019 but last updated at the start of this year. With various contributors thanked at the foot of the article – including one Jeff Bezos – Jordan outlined the rationale for caution.
A four-legged, robotic guide dog system that can safely lead blind people around obstacles and through narrow passages has been developed by US researchers. Just like a real assistance canine, the bot guides its user by means of a leash -- which it can pull taut but also allow to go slack in order to better lead the handler around tight turns. The setup -- built on a robot design called a mini cheetah -- features a laser-ranging system to map out its surroundings and a camera to track the human it is guiding. Given an end point to reach, the machine maps out a simple route, adapting its course as it progresses to accommodate obstacles and the handler's movements. The robot has the potential to cut down on the time and expense of training guide dogs -- although a robot would lack the mental and social benefits of a real animal. According to lead researcher and roboticist Zhongyu Li of the University of California, Berkeley, the training of mechanical guide dogs would be scalable.
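The taut-versus-slack leash behavior described above can be illustrated with a toy controller. Everything here -- the function name, the angle threshold, the decision rule -- is a hypothetical sketch for illustration, not the Berkeley team's actual code.

```python
import math

def leash_command(robot_heading, bearing_to_waypoint, turn_threshold=0.6):
    """Toy leash policy: keep the leash taut when the robot is roughly
    aligned with its next waypoint, and slacken it when a sharp turn is
    coming so the handler can swing wide. Angles are in radians; the
    threshold is an illustrative guess, not a published parameter."""
    # Smallest signed angle between current heading and required bearing.
    error = (bearing_to_waypoint - robot_heading + math.pi) % (2 * math.pi) - math.pi
    return "slack" if abs(error) > turn_threshold else "taut"

print(leash_command(0.0, 0.1))  # nearly straight ahead -> "taut"
print(leash_command(0.0, 1.5))  # sharp left turn coming -> "slack"
```

In the real system, the decision would come from the robot's planned path and the measured leash tension rather than a single angle, but the idea of switching between taut and slack modes is the same.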
Hany Farid, a digital forensics expert at UC Berkeley, says the dangers in sophisticated phony videos called "deepfakes" are amplified in their potential to travel rapidly across social media. The videos, uploaded to TikTok in recent weeks by the account @deeptomcruise, have raised new fears over the proliferation of convincing deepfakes -- the nickname for media generated by artificial intelligence technology showing phony events that often seem realistic enough to dupe an audience. Farid, a professor at the University of California, Berkeley, told NPR's All Things Considered that the Cruise videos demonstrate a step up in the technology's evolving sophistication. "This is clearly a new category of deepfake that we have not seen before," said Farid, who researches digital forensics and misinformation.
Mathematics is the foundation of countless sciences, allowing us to model things like planetary orbits, atomic motion, signal frequencies, protein folding, and more. Moreover, it's a valuable testbed for the ability to problem solve, because it requires problem solvers to analyze a challenge, pick out good methods, and chain them together to produce an answer. It's revealing, then, that as sophisticated as machine learning models are today, even state-of-the-art models struggle to answer the bulk of math problems correctly. A new study published by researchers at the University of California, Berkeley finds that large language models including OpenAI's GPT-3 can only complete 2.9% to 6.9% of problems from a dataset of more than 12,500 math problems. The coauthors believe that new algorithmic advancements will likely be needed to give models stronger problem-solving skills.
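Scores like the 2.9% to 6.9% reported above come from checking each model answer against a reference solution. A deliberately simplified sketch of that kind of exact-match grading follows; real math benchmarks normalize notation (e.g., equivalent LaTeX forms) before comparing, which this toy version does not.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of problems where the model's final answer string
    matches the reference exactly (after trimming whitespace).
    Illustrative only: actual benchmark graders normalize the
    answers' notation before comparison."""
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical model outputs versus ground-truth answers.
preds = ["42", "3.14", "x^2"]
refs = ["42", "2.718", "x^2"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 correct
```

Because only the final answer is checked, a model can reason incorrectly and still be marked right by luck -- one reason benchmark designers also inspect the generated step-by-step solutions.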
Marisa Johnson's six-year-old daughter was just learning to read independently when her Alameda, California, school shut down last year. Without solid literacy skills, and with lots of time stuck at home, the girl is spending much more time playing video games and watching shows than reading books. "She's definitely reading less," Johnson says. "The only way we can be alone among ourselves is with screens." As many parents know, screen time has ballooned during the pandemic.
Most people in AI don't care too much about the details, says Jeff Hawkins, a neuroscientist and tech entrepreneur. He wants to change that. Hawkins has straddled the two worlds of neuroscience and AI for nearly 40 years. In 1986, after a few years as a software engineer at Intel, he turned up at the University of California, Berkeley, to start a PhD in neuroscience, hoping to figure out how intelligence worked. But his ambition hit a wall when he was told there was nobody there to help him with such a big-picture project. Frustrated, he swapped Berkeley for Silicon Valley and in 1992 founded Palm Computing, which developed the PalmPilot--a precursor to today's smartphones.
Technology that measures emotions based on biometric indicators such as facial movements, tone of voice or body movements is increasingly being marketed in China, researchers say, despite concerns about its accuracy and wider human rights implications. Drawing upon artificial intelligence, the tools range from cameras to help police monitor a suspect's face during an interrogation to eye-tracking devices in schools that identify students who are not paying attention. A report released this week from U.K.-based human rights group Article 19 identified dozens of companies offering such tools in the education, public security and transportation sectors in China. "We believe that their design, development, deployment, sale and transfers should be banned due to the racist foundations and fundamental incompatibility with human rights," said Vidushi Marda, a senior program officer at Article 19. Human emotions cannot be reliably measured and quantified by technology tools, said Shazeda Ahmed, a doctoral candidate studying cybersecurity at the University of California, Berkeley and the report's co-author. Such systems can perpetuate bias, especially those sold to police that purport to identify criminality based on biometric indicators, she added.