New computational algorithms make it possible to build neural networks with many input nodes and many layers; it is this scale that distinguishes the "deep learning" of these networks from previous work on artificial neural nets.
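To make "many layers" concrete, here is a minimal sketch of a forward pass through a stack of fully connected layers in plain Python. The weights are made-up toy values, not a trained model, and the function names are ours, not from the article.

```python
import math

def layer(inputs, weights, biases):
    # one fully connected layer with a sigmoid nonlinearity
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

def forward(inputs, layers):
    # "deep" just means the signal passes through many such layers in turn
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

toy_layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0, -1.0]], [0.0]),                   # layer 2: 2 inputs -> 1 unit
]
out = forward([1.0, 2.0], toy_layers)
print(len(out))  # 1
```

A real deep network stacks dozens of such layers and learns the weights from data; the structure, though, is exactly this repeated multiply-and-squash.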
[Image caption: A representation of a deep learning neural network designed to intelligently extract text-based information from cancer pathology reports.]

Despite steady progress in detection and treatment in recent decades, cancer remains the second leading cause of death in the United States, cutting short the lives of approximately 500,000 people each year. To better understand and combat this disease, medical researchers rely on cancer registry programs--a national network of organizations that systematically collect demographic and clinical information related to the diagnosis, treatment, and history of cancer incidence in the United States. This surveillance effort, coordinated by the National Cancer Institute (NCI) and the Centers for Disease Control and Prevention, enables researchers and clinicians to monitor cancer cases at the national, state, and local levels. Much of this data is drawn from electronic, text-based clinical reports that must be manually curated--a time-intensive process--before it can be used in research.
To compete in today's data-driven world, organizations need to accelerate the digital transformation process that puts technology at the heart of products, services and operations. Digital transformation enables both private and public entities to provide better outcomes and experiences for the people they serve -- from smarter vehicles to personalized healthcare, from customized shopping experiences to the prevention of credit card fraud. A common thread to these and countless other digital transformation use cases is artificial intelligence. AI applications and their underlying technologies, including machine learning and deep learning, enable organizations to train systems to use massive amounts of data to sense, learn, reason, make predictions and evolve. Under the hood, the engine that makes it all go is the blazingly fast processing power of high-performance computing (HPC) clusters.
Use of GPUs is fast expanding beyond the 3D video game realm, offering numerous benefits for enterprise as well as industrial applications. With deep learning taking center stage in the Industry 4.0 revolution, GPU and x86 CPU manufacturers are ensuring that solution developers are not short of options when it comes to choosing the right silicon for their products. So let's review what a GPU can do differently from a CPU and vice versa, and how the two make the perfect couple in the world of robot surgeons, cryptocurrencies, smart factories, and self-driving cars, taking each in turn and discussing its basic differentiating characteristics. The central processing unit (CPU) of a computer is often referred to as its brain, where all the processing and multitasking take place.
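The core difference can be sketched in plain Python: a CPU core runs one instruction stream, element by element, while a GPU applies the same operation across many data "lanes" at once. The code below only mimics that contrast (the lane-splitting and all names are our illustration, not how real GPU hardware is programmed):

```python
def cpu_style_sum_of_squares(xs):
    # sequential style: one element at a time, like a single CPU core
    total = 0.0
    for x in xs:
        total += x * x
    return total

def gpu_style_sum_of_squares(xs, lanes=4):
    # data-parallel style: split the data into "lanes" that each apply
    # the SAME operation, then combine the partial results
    chunks = [xs[i::lanes] for i in range(lanes)]
    partials = [sum(x * x for x in chunk) for chunk in chunks]
    return sum(partials)

data = [1.0, 2.0, 3.0, 4.0]
print(cpu_style_sum_of_squares(data))  # 30.0
print(gpu_style_sum_of_squares(data))  # 30.0
```

Both return the same answer; the point is that the second formulation exposes work that thousands of GPU lanes could execute simultaneously, which is why deep learning's matrix-heavy math maps so well onto GPUs.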
If you follow developments in cloud architecture, you may have been hearing a lot recently about the importance of an "intelligent cloud" and an "intelligent edge." Cloud providers who have traditionally focused on providing infrastructure and software have begun to realize that there is only so much value they can drive through these as-a-service offerings, and it is no surprise that the word "cognitive" has begun to creep into more marketing and speechifying on cloud. But it's important for developers and data scientists to be able to distinguish between the marketing and the reality of a truly cognitive cloud. IBM is leading in artificial intelligence, with Watson's deep domain expertise helping clients of every size, across all industries, every day. Watson -- which is available only on the IBM Cloud -- has the full range of cognitive technology -- machine learning, AI, and cognitive computing -- because that's what is needed for decision making and transformative business outcomes.
This proclamation, from NVIDIA co-founder, president, and CEO Jensen Huang at the GPU Technology Conference (GTC), held from March 26 to March 29 in San Jose, Calif., only hints at the company's growing impact on state-of-the-art computing. Nvidia's physical products are accelerators (for third-party hardware) and the company's own GPU-powered workstations and servers. [Image caption: Jensen Huang, co-founder, president, and CEO of Nvidia, presents the sweep of the company's growing AI platform at GTC 2018 in San Jose, Calif.] On the hardware front, the headlines from GTC built on the foundation of Nvidia's graphics processing unit advances. If the "feeds and speeds" stats mean nothing to you, let's put them into the context of real workloads.
The 10th edition of the $9.71-billion NVIDIA Corporation's annual GPU Technology Conference (GTC 2018) for GPU developers opened on Tuesday to an audience of 8,500, where its founder, president, and CEO, Jensen Huang, unveiled a series of advances to its deep learning computing platform. For over two hours, Huang took the audience through some "amazing graphics, amazing science, amazing AI and amazing robots." Introducing NVIDIA RTX technology, which runs on a Quadro GV100 processor, he said: "This technology is the most important advance in computer graphics in 15 years as we can now bring real-time ray tracing to the market. Virtually everyone is adopting it." Elaborating on its relevance, he said the gaming industry, which makes 400 games a year, uses ray tracing to render entire games in advance.
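At the heart of any ray tracer -- offline or real-time -- is a geometric test: does a ray fired from the camera hit an object, and if so, how far away? Below is a minimal ray-sphere intersection in plain Python, purely as an illustration of that core computation; it is in no way Nvidia's RTX implementation, and all names are ours.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None."""
    # solve |origin + t*direction - center|^2 = radius^2 for t
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None

# A ray down the z-axis hits a unit sphere centered 5 units away at t = 4
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Real-time ray tracing means running billions of such tests (plus shading and light bounces) every second, which is why dedicated GPU hardware matters.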
Nvidia Corp. has advanced deep learning techniques, but now it's looking to take AI technology into new areas: putting self-driving cars into virtual reality instead of on our roads, and setting its sights on Hollywood and hospitals. Over the past few years, Nvidia has made inroads into equipping cars with the computer hardware that gives them self-driving capability. That move has become so crucial that Nvidia (NVDA) shares fell more than 6% in recent trading as the company kicked off its GPU Technology Conference in San Jose, Calif., after it confirmed that it is suspending real-world testing following a recent fatality in Arizona involving one of Uber Technologies Inc.'s self-driving cars. In his keynote address Tuesday morning, Chief Executive Jensen Huang did not mention the halt, but did show off a potential solution to the problem of testing self-driving automobiles on public roads. Huang showed off a simulator that allows companies to test their self-driving systems in a virtual environment, providing the opportunity to drive billions of miles in a year without endangering pedestrians.
Nvidia has unveiled several updates to its deep-learning computing platform, including an absurdly powerful GPU and supercomputer. At this year's GPU Technology Conference in San Jose, Nvidia CEO Jensen Huang unveiled the DGX-2, a new computer for researchers who are "pushing the outer limits of deep-learning research and computing" to train artificial intelligence. The computer, which will ship later this year, is the world's first system to sport a whopping two petaflops of performance. For some perspective: a MacBook Pro might have around one teraflop. A petaflop is one thousand teraflops.
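The arithmetic behind that comparison is simple enough to spell out (the DGX-2 figure is from the article; the ~1-teraflop laptop is the article's rough estimate, not a measured number):

```python
TERAFLOPS_PER_PETAFLOP = 1_000  # 10^15 flops vs 10^12 flops

dgx2_petaflops = 2
dgx2_teraflops = dgx2_petaflops * TERAFLOPS_PER_PETAFLOP
macbook_teraflops = 1  # the article's ballpark figure

print(dgx2_teraflops)                       # 2000
print(dgx2_teraflops // macbook_teraflops)  # 2000 -- roughly 2,000 laptops' worth
```

In other words, the DGX-2's quoted peak is on the order of two thousand of those laptops working in concert.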
The computer chip industry over the last couple of decades has seen its innovation stem from just a few top players like Intel, AMD, NVIDIA, and Qualcomm. In this same time span, the VC industry has shown waning interest in start-up companies that made computer chips. The risk was just too great; how could a start-up compete with a behemoth like Intel, which made the CPUs that ran more than 80% of the world's PCs? In areas where Intel wasn't the dominant force, companies like Qualcomm and NVIDIA filled that role in the smartphone and gaming markets. The recent resurgence of the field of artificial intelligence (AI) has upended this status quo.
Over at the Lenovo Blog, Dr. Bhushan Desam writes that the company just updated its LiCO tools to accelerate AI deployment and development for Enterprise and HPC implementations. The newly updated Lenovo Intelligent Computing Orchestration (LiCO) tools are designed to overcome recurring pain points for enterprise customers and others implementing multi-user environments using clusters for both HPC workflows and AI development. LiCO simplifies resource management and makes launching AI training jobs in clusters easy. LiCO currently supports multiple AI frameworks, including TensorFlow, Caffe, Intel Caffe, and MXNet. Additionally, multiple versions of those AI frameworks can easily be maintained and managed using Singularity containers.
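The post doesn't show LiCO's own interface, but the container pattern it describes can be illustrated with plain Singularity. A definition file pins one framework version per image (the image name and tag below are assumptions for illustration, not from the post):

```
Bootstrap: docker
From: tensorflow/tensorflow:1.5.0
```

Building one image per framework version (e.g. `singularity build tf-1.5.sif tf-1.5.def`) lets several versions coexist side by side on the same cluster, which is the maintenance pattern the post attributes to LiCO's use of Singularity containers.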