But Austin Huang, Associate Director and the Biomedical Data Science lead in Pfizer's Genome Sciences and Technologies group in Kendall Square, Cambridge, Massachusetts, explains that "the methods that companies like Google and Facebook use to study large, complex datasets can also be used to help predict disease and possible treatment outcomes in human health data." If the ultimate goal of a self-driving car is to navigate a busy city street, in pharmaceutical research the goal is to navigate the connections between a potential treatment and its effectiveness against a disease. And if other fields of AI are any indication, he says, "when breakthroughs happen, change can follow very quickly," likening it to a "tipping point." To enable AI to reach those kinds of breakthroughs, it's important to teach computers how to "think" abstractly when discovering patterns in large datasets.
Despite the recent emergence of browser-based transcription aids, transcription remains an area of drudgery in the modern Western economy where machines can't quite squeeze human beings out of the equation. That was true until last year, when Microsoft built one that could. Automatic speech recognition, or ASR, has gripped the firm's chief speech scientist, Xuedong Huang, since he entered a doctoral program at the University of Edinburgh in Scotland. Huang and his colleagues used their software to transcribe the NIST 2000 CTS test set, a bundle of recorded conversations that has served as the benchmark for speech recognition work for more than 20 years.
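Systems evaluated on benchmarks like the NIST 2000 CTS set are typically scored by word error rate (WER): the edit distance between the reference transcript and the system's output, divided by the reference length. Here is a minimal sketch of that metric; the example sentences are invented for illustration.

```python
# Word error rate: minimum insertions, deletions, and substitutions needed
# to turn the hypothesis into the reference, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over 6 words
```

A lower WER means a better transcription; human transcribers on conversational speech hover in the mid-single digits, which is the bar such benchmark results are measured against.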
Below, he tells GovInsider how the island state is exploring plans to predict train disruptions, turn every vehicle into a traffic sensor, and use artificial intelligence to secure all this infrastructure. As a solution, the government is looking to use drones and artificial intelligence to predict when trains and tracks will need repairs. Outside of public transport, Singapore is trialling autonomous trucks to transport containers at ports. This mentality has been shaken up by the Government Technology Agency's centralisation under the Prime Minister's Office, he believes.
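Predicting when tracks need repairs generally comes down to flagging sensor readings that deviate from their historical pattern. The sketch below is a generic anomaly detector (not Singapore's actual system), with invented vibration readings and an invented threshold.

```python
# Minimal predictive-maintenance sketch: flag track-vibration readings that
# deviate sharply from the historical mean (z-score threshold). The sensor
# values and threshold are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

history = [1.01, 0.98, 1.02, 0.99, 1.00, 1.03, 0.97, 5.80]  # last reading spikes
print(flag_anomalies(history))  # index of the anomalous reading
```

A production system would use far richer models, but the principle is the same: learn what "normal" looks like, then schedule maintenance when readings stray from it.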
And although the pace may have slowed, the number of transistors that could fit per square inch did continue to increase, doubling not every year but every 18 months instead. Dennard scaling -- named after Robert H. Dennard, who co-authored the concept -- states that even as transistors become smaller, power density remains constant, so a chip's power consumption stays proportional to its area. On NVIDIA's end, Huang assures that the company's venture into artificial intelligence and deep learning will keep it ahead even with the death of Moore's Law. That's not to say, though, that NVIDIA will stop making its GPUs more powerful.
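Both claims reduce to simple arithmetic. An 18-month doubling period compounds to roughly a hundredfold increase per decade, and Dennard scaling says a linear shrink cuts area and power by the same factor, leaving power density unchanged. A quick sketch, with an illustrative shrink factor:

```python
# Transistor growth under an 18-month doubling period: after t months the
# count scales by 2 ** (t / 18).
def growth_factor(months: float, doubling_period: float = 18.0) -> float:
    return 2 ** (months / doubling_period)

print(growth_factor(120))  # 10 years at 18-month doubling: roughly 100x

# Dennard scaling: shrinking a transistor linearly by factor k cuts both its
# area and its power by about k**2, so power / area stays constant.
k = 0.7                      # 30% linear shrink per generation (illustrative)
area_scale = k ** 2
power_scale = k ** 2
print(power_scale / area_scale)  # power density unchanged: 1.0
```

The breakdown of Dennard scaling around the mid-2000s, not the transistor count itself, is why clock speeds stalled and chipmakers turned to parallelism -- the opening that GPUs exploited.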
These simulators, most recently announced by Nvidia as a project called Isaac's Lab but also pioneered by Alphabet's DeepMind and Elon Musk's OpenAI, are 3D spaces with physics just like reality, where virtual objects act the same way as their physical counterparts. "We imagine that one of these days, we'll be able to go into the Holodeck, design a product, design the factory that's going to make the product, and design the robots that's going to make the factory that makes the products." Alphabet's DeepMind has had similar ideas: The AI research lab is best known for applying its AI to games -- notably AlphaGo, which continues to beat human world champions at Go -- but it has also built AI that beats video games, from Atari titles to StarCraft. While Nvidia's Isaac's Lab is meant to help build robots and products that perform specific tasks in the real world, DeepMind's Lab is geared more towards research: finding ways to build AI that can learn about its surroundings with little input.
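The appeal of such simulators is that an agent can accumulate experience without touching a physical robot. As a toy illustration of that idea (not Nvidia's or DeepMind's actual systems), here is tabular Q-learning on a one-dimensional simulated track; every detail -- track length, rewards, hyperparameters -- is invented.

```python
import random

# Toy analogue of "training in simulation": a Q-learning agent learns to
# walk right along a 1-D track to reach a goal, entirely inside the
# simulated loop below. All numbers are invented for illustration.
random.seed(0)
N = 6                                # states 0..5, goal at state 5
q = [[0.0, 0.0] for _ in range(N)]   # Q-values; actions: 0 = left, 1 = right

for _ in range(1000):                # simulated episodes, no real robot needed
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection (10% exploration)
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else -0.01        # reward only at the goal
        q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
        s = s2

policy = [max((0, 1), key=lambda x: q[s][x]) for s in range(N - 1)]
print(policy)  # learned policy: step right in every state
```

Scaling the same loop up to 3D physics and realistic sensors is what turns a simulator into a training ground for real robots: the costly trial-and-error happens virtually, and only the learned behavior is transferred to hardware.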
The Chinese Association for Artificial Intelligence, meanwhile, is pushing for a higher academic status for AI courses such as machine learning and computer vision, which today are often nestled within computer science departments. For $131, anyone interested can take a class in deep learning--the AI approach that powers AlphaGo--offered by a Beijing-based company called ChinaHadoop. At Tsinghua University, a machine-learning course is capped at 60 students, but sometimes as many as 120 students show up, says Jie Tang, an associate professor there who studies machine learning and data mining. The summit will "trigger a new round of thinking and discussion regarding AI" in China, adds Minlie Huang, an associate professor at Tsinghua University who specializes in deep learning and natural-language processing.
The other components of the strategy revolve around showcasing cutting-edge AI applications across sectors, building partnerships with other companies, nurturing technology start-ups and helping build an AI ecosystem. TensorFlow is Google's deep learning framework, while CUDA (Compute Unified Device Architecture) is a free software platform from Nvidia that enables users to program GPUs. Further, public cloud services providers such as Alibaba Group Holding Ltd, Amazon Web Services, Baidu Inc., Facebook, Google, IBM, Microsoft and Tencent Holdings Ltd use Nvidia GPUs in their data centres, prompting Nvidia to launch its GPU Cloud platform, which integrates deep learning frameworks, software libraries, drivers and the operating system. Nvidia also worked with SAP SE to develop a product called Brand Impact--a fully automated and scalable video analytics service for brands, media agencies and media production companies.
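To make concrete what a deep learning framework provides: at its core is a gradient-descent loop, with the gradients derived automatically and the arithmetic dispatched to GPUs (through CUDA, in Nvidia's case). Below is that loop in miniature, with the gradients written out by hand in plain Python; the data and hyperparameters are made up, and this is a sketch of the principle, not TensorFlow's API.

```python
# What a deep-learning framework automates, in miniature: fit y = w*x + b by
# gradient descent on mean squared error. A framework would derive these
# gradients automatically and run the arithmetic on a GPU; here they are
# hand-written. Data and learning rate are invented.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]          # generated by y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    # dL/dw and dL/db for L = mean((w*x + b - y)^2)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # recovers roughly 2.0 and 1.0
```

A deep network replaces the single line `w*x + b` with millions of parameters across many layers, which is why automatic differentiation and GPU acceleration stop being conveniences and become necessities.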
The new chip has 21 billion transistors, and it is an order of magnitude more powerful than the 15-billion-transistor Pascal-based processor that Nvidia announced a year ago. He noted that deep learning neural network research began to pay off about five years ago, when researchers started using graphics processing units (GPUs) to process data in parallel and train neural networks quickly. And this year, Nvidia plans to train 100,000 developers to use deep learning. Volta has 12 times the Tensor FLOPs for deep learning training compared to last year's Pascal-based processor.
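The connection between GPUs and training speed is easy to see with a flop count: the bulk of a neural network's work is matrix multiplication, and an m x k by k x n multiply costs about 2*m*n*k floating-point operations. The layer sizes and throughput figures below are illustrative assumptions, not Volta's actual specifications.

```python
# Why training leans on parallel hardware: one dense layer is a matrix
# multiply, costing one multiply and one add per output element per inner
# dimension. Layer sizes and device rates are invented for illustration.
def matmul_flops(m: int, k: int, n: int) -> int:
    return 2 * m * n * k

batch, hidden_in, hidden_out = 256, 4096, 4096
flops_per_layer = matmul_flops(batch, hidden_in, hidden_out)
print(f"{flops_per_layer / 1e9:.1f} GFLOPs per layer per forward pass")

# Assumed throughputs: 10 TFLOP/s (GPU-class) vs 0.1 TFLOP/s (CPU-class).
for name, rate in [("GPU", 10e12), ("CPU", 0.1e12)]:
    print(name, f"{flops_per_layer / rate * 1e3:.2f} ms per layer")
```

Multiply that single-layer cost by dozens of layers, forward and backward passes, and millions of training steps, and a 12x jump in tensor throughput translates directly into days rather than weeks of training.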
One data center provider that specializes in hosting infrastructure for Deep Learning told us most of their customers hadn't yet deployed their AI applications in production. If your on-premises Deep Learning infrastructure will do a lot of training – the computationally intensive work of teaching neural networks things like speech and image recognition – prepare for power-hungry servers with lots of GPUs on every motherboard. Inferencing is not particularly difficult to handle on-premises, but one big question for the data center manager to answer is how close inferencing servers have to be to where input data originates. If your corporate data centers are in Ashburn, Virginia, but your Machine Learning application has to provide real-time suggestions to users in Dallas or Portland, chances are you'll need some inferencing servers in or near Dallas and Portland to make the experience actually feel real-time.
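The geography question is ultimately about propagation delay: no matter how fast the model runs, the speed of light in fiber puts a floor under round-trip time. A back-of-envelope sketch, where the city distances and fiber speed are rough approximations:

```python
# Back-of-envelope network latency: light in fiber travels at roughly
# 200,000 km/s, so round-trip propagation time is at least
# 2 * distance / speed. Distances are rough great-circle figures.
def min_rtt_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    return 2 * distance_km / fiber_speed_km_s * 1000

for city, km in [("Dallas", 1900), ("Portland", 3700)]:
    print(f"Ashburn -> {city}: >= {min_rtt_ms(km):.1f} ms round trip")
```

Real routes add switching, queuing, and indirect paths on top of that floor, so tens of milliseconds of unavoidable delay is exactly why latency-sensitive inferencing ends up deployed near the users it serves.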
"We invented a computing model called GPU accelerated computing, and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and its applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the world's biggest developer event focused on artificial intelligence.