Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android's upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone's microphone (or an external microphone), and relies on the Google Cloud Speech API. The source code is now available on GitHub, and the company hopes open-sourcing it will let any developer deliver captions for long-form conversations.
What I saw didn't look very much like the future -- or at least the automated one you might imagine. The offices could have been call centers or payment processing centers. One was a timeworn former apartment building in the middle of a low-income residential neighborhood in western Kolkata that teemed with pedestrians, auto rickshaws and street vendors. In facilities like the one I visited in Bhubaneswar and in other cities in India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines. Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries.
Privacy campaigners have warned of an "epidemic" of facial recognition use in shopping centres, museums, conference centres and other private spaces around the UK. An investigation by Big Brother Watch (BBW), which tracks the use of surveillance, has found that private companies are spearheading a rollout of the controversial technology. The group published its findings a day after the information commissioner, Elizabeth Denham, announced she was opening an investigation into the use of facial recognition in a major new shopping development in central London. Sadiq Khan, the mayor of London, has already raised questions about the legality of the use of facial recognition at the 27-hectare (67-acre) Granary Square site in King's Cross after its owners admitted using the technology "in the interests of public safety". BBW said its investigation had found that sites across the country were using facial recognition, often without warning visitors.
Japan has told the United States it is ready to provide its robot technology for use in dismantling nuclear and uranium enrichment facilities in North Korea as Washington and Pyongyang pursue further denuclearization talks, government sources said Friday. As Japan turns to the remotely controlled robots it has developed to decommission reactors crippled by the triple core meltdown in 2011 at the Fukushima No. 1 power plant, it believes the same technology can be used in North Korea, according to the sources. The offer is part of Japan's efforts to make its own contribution to the denuclearization talks amid concern that Tokyo could be left out of the loop as the United States and North Korea step up diplomacy. Tokyo has already told Washington it would shoulder part of the costs of any International Atomic Energy Agency inspections of North Korean facilities and dispatch its own nuclear experts to help. The scrapping of nuclear facilities, such as the Yongbyon complex, which has a graphite-moderated reactor, will come into focus in forthcoming working-level talks between Washington and Pyongyang.
Martin Spano is the author of Artificial Intelligence in a Nutshell, a book that explores the often-mystified subject of artificial intelligence (AI) in simple, non-technical language. Spano's passion for AI began after he watched 2001: A Space Odyssey, but he insists this ever-changing technology is not just a subject for sci-fi novels and movies; artificial intelligence is present in our everyday lives. Alex Krizhevsky was born in Ukraine but has lived most of his life in Canada. After finishing his undergraduate studies, he continued as a postgraduate under the supervision of Geoffrey Hinton, the legendary computer scientist and cognitive psychologist and one of the foremost advocates of using artificial neural networks for artificial intelligence. Krizhevsky stumbled upon an algorithm by Hinton that ran on graphics cards instead of conventional processors.
The UK's privacy watchdog has opened an investigation into the use of facial recognition cameras in a busy part of central London. The information commissioner, Elizabeth Denham, announced she would look into the technology being used in Granary Square, close to King's Cross station. Two days ago the mayor of London, Sadiq Khan, wrote to the development's owner demanding to know whether the company believed its use of facial recognition software in its CCTV systems was legal. The Information Commissioner's Office (ICO) said it was "deeply concerned about the growing use of facial recognition technology in public spaces" and was seeking detailed information about how it is used. "Scanning people's faces as they lawfully go about their daily lives in order to identify them is a potential threat to privacy that should concern us all," Denham said.
Alphabet's DeepMind lost $572 million last year. DeepMind, likely the world's largest research-focused artificial intelligence operation, is losing money fast: more than $1 billion in the past three years, with more than $1 billion in debt due in the next 12 months. Does this mean that AI is falling apart? Gary Marcus is the founder and CEO of Robust.AI and a professor of psychology and neural science at NYU.
The two companies have butted heads for years, and it's likely they'll continue to do so -- Spotify's protest web page (in which Spotify details accusations that Apple engages in anticompetitive behavior) is just one example of hurt feelings. But despite the mutual dislike, Apple and Spotify are reportedly in talks to integrate Spotify more tightly with Siri, Apple's digital assistant. The companies are "discussing a plan" that would let iPhone users ask Siri to play music with Spotify, instead of requiring them to manually navigate to whatever song, album, or playlist they want to hear via the third-party app. The Information's report on this potential change cites three anonymous sources who are "familiar with the discussions." Neither company confirmed the report when contacted by Fast Company.
A robotic ship from the University of New Hampshire's Marine School that can map the ocean floor is part of the latest effort to find out what happened to famed pilot Amelia Earhart, who disappeared over the Pacific Ocean eight decades ago. The autonomous vessel, known as BEN, the Bathymetric Explorer and Navigator, will be mapping the seafloor near the island where Earhart sent her last radio transmission. The area is too deep for divers and too shallow for safe navigation by deep-water sonar systems. Maps produced by BEN will be used to target later dives by remotely operated vehicles, searching for remnants of Earhart's plane. The work is part of the mission led by oceanographer Robert Ballard, best known for finding the wreck of the Titanic, to look into the disappearance of Earhart in 1937.