Many Tesla fans view the electric carmaker as a world leader in self-driving technology. CEO Elon Musk himself has repeatedly claimed that the company is less than two years away from perfecting fully self-driving technology. But in an interview with Germany's Manager magazine, Waymo CEO John Krafcik dismissed the idea that Tesla is a Waymo competitor and argued that Tesla's current strategy is unlikely to ever produce a fully self-driving system. "For us, Tesla is not a competitor at all," Krafcik said. "We manufacture a completely autonomous driving system. Tesla is an automaker that is developing a really good driver assistance system."
According to the National Oceanic and Atmospheric Administration (NOAA), more than 80% of the ocean "remains unmapped, unobserved, and unexplored" – despite constituting more than 70% of the planet's surface. Now, a pair of Navy veterans are looking to change that with a line of autonomous robot vehicles that will plumb the ocean's depths in search of big data for the company's clients. "The company really started when Joe [Wolfel] and I first got together, which was back in 2004," said Judson Kauffman, who shares the CEO role with Wolfel, in an interview with Datanami. "We met in [Navy] SEAL training together, and ended up being assigned the same unit, and then went into combat together and became very close friends." There, they developed the idea for Terradepth, which "stemmed from some knowledge that we gained in the Navy" – really, Kauffman said, "just of how ignorant humanity is of what's underwater, what's in the sea." "It was shocking to learn how little we know, how little the U.S. Navy knew," he continued – and the more they dug into the issue after their time in the Navy, the more surprised they were.
His research activities include assistive robotics, robot adaptation, human-robot interactions and grasping of deformables. We spoke about some of the projects he is involved in and his plans for future work. The SOCRATES project is about quality of interaction between robots and users, and our role is focussed on adapting the robot behaviour to user needs. We have concentrated on a very nice use case: cognitive training of patients with mild dementia. We are working with a day-care facility in Barcelona and asked whether we could provide some technological help for the caregivers.
Advertising as a sector is notorious for major paradigm shifts. That's because the shell game of grabbing consumers' attention never stops: programmatic advertising gave way to influencers, influencers to branded content, and so on, with the game always rolling ahead as savvy marketers break new terrain and legions follow behind in a desperate bid for ears and eyes. Not surprising, then, that the winds are shifting yet again, and this time the leading edge of the industry is turning its attention to AI. I caught up with Sheri Bachstein, Global Head of Watson Advertising and The Weather Company, to discuss the transformative impact AI will have on the advertising game, as well as what we can expect in terms of adoption in traditional advertising and untested ecosystems like AR/VR. Me: What role will AI play in marketing in the years ahead?
On a semi-weekly basis, we compile a collection of the best long-form stories on tech, tech culture and more. We've collected a list of the best selections from 2020 for you to revisit -- or enjoy for the first time -- as we finish up one dumpster fire of a year. One of the biggest sports stories of the year broke in mid-January. Major League Baseball determined the Houston Astros used various methods, including video feeds, to steal signs from the opposition during the team's 2017 championship season -- including the World Series. MLB found that the team continued to do so during the 2018 season, too.
It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.
The following week, she took part in several workshops at NeurIPS, the largest annual AI research conference, which over 20,000 people attended this year. It was "therapeutic," she says, to see how the community she'd helped build showed up and supported one another. Now, another week later, she's just winding down and catching her breath--and trying to make sense of it all. On Monday, December 14, I caught up with Gebru via Zoom. She recounted what happened during her time at Google, reflected on what it meant for the field and AI ethics research, and gave parting words of advice to those who want to keep holding tech companies accountable.
Earlier this fall, A.I. ethicist Timnit Gebru submitted a paper for consideration at an academic conference about predictive language models: their environmental cost, how they could learn racist and sexist language, and how they could spread misinformation. Since she was working for Google, the company first wanted to review the paper--which Gebru wrote with several of her colleagues--and sign off on it. She was then told by senior managers that the paper didn't meet Google's publication bar, and that she should retract it or remove the names of Google employees. Gebru wanted more clarity on why they wanted it retracted and said that if Google couldn't provide that information, she would resign. This kicked off a few days of wrangling and several intense emails--until a manager emailed Gebru's boss, saying they had accepted her resignation.
"Where Is the Future?" is a series of interviews with industry leaders considering the potential and complexity of technology on the horizon. Two summers ago, Courtenay Cotton led a workshop on machine learning that I attended with a New York–based group called the Women and Surveillance Initiative. It was a welcome introduction to the subject and a rare opportunity to cut through the hype to understand both the value of machine learning and the complications of this field of research. In our recent interview, Cotton, who now works as lead data scientist at n-Join, once again offered her clear thinking on machine learning and where it is headed. What kind of problems is machine learning designed to solve?
Asia is better placed to leverage the current business environment and drive the value of data as the region accelerates its 5G rollout. Businesses also realise they need data to facilitate planning and maintenance, whether in human resources, inventory, or finance. Asia had put in far greater investment in 5G and had been more aggressive in rolling out these next-generation networks. This meant it was better positioned to take advantage of the current circumstances brought about by the global pandemic, said Irfan Khan, SAP's president of platform and technologies. Businesses here also could access a broad and robust ecosystem of developers, including citizen developers, who tapped new tools to rapidly create workloads, Khan said in an interview with ZDNet.