NVIDIA's Holodeck helps engineers and designers build and interact with photorealistic people, objects and robots in a fully simulated environment. For instance, NVIDIA showed how the engineers who built the Koenigsegg supercar could explore the car "at scale and in full visual fidelity" and consult in real time on design changes. With Holodeck, NVIDIA is taking on Microsoft and its HoloLens in the enterprise and design arena -- though the latter AR system is more about letting engineers interact with real and virtual objects at the same time. Much of this very advanced tech is bound to trickle down to consumers, hopefully making VR and AR good enough to actually become popular.
Neural networks apply computational resources to machine learning problems that reduce to linear algebra over very large matrices, iterating until the model makes statistically accurate decisions. Most of the machine learning models in operation today -- for natural language or image recognition, for example -- started in academia and were further developed by large, well-staffed research and engineering teams at Google, Facebook, IBM and Microsoft. Enterprise machine learning experts and data scientists will have to start from scratch with research and iterate to build new high-accuracy models. It is a specialty business because an enterprise needs four characteristics not necessarily found together: a large corpus of data for training, highly skilled data scientists and machine learning experts, a strategic problem that machine learning can solve, and a reason not to use Google's or Amazon's pay-as-you-go offerings.
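The "iterating over large matrices" description can be sketched in miniature. Below is a hypothetical two-layer network trained on the XOR problem in NumPy; the architecture, learning rate, and iteration count are illustrative choices, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a task that requires a nonlinear model.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight matrices: the "very large matrices" of real networks, in miniature.
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)        # hidden layer: matrix multiply + nonlinearity
    return h, sigmoid(h @ W2)  # output layer: another matrix multiply

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(5000):  # iterate to shrink the prediction error
    h, out = forward(X)
    # Backpropagation: gradient of the squared error w.r.t. each weight matrix.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

At production scale these same matrix multiplications run over millions of parameters, which is why the GPU hardware discussed throughout this piece matters so much.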
Intel's Nervana-based chip will likely clock in at 30 teraflops by mid-2017. In late-breaking news, AMD has revealed its new AMD Instinct line of Deep Learning accelerators. In the old days, we had monolithic DL systems with single analytic objective functions. Deep Learning and Unsupervised Learning systems are likely new kinds of systems that we have never encountered before.
Though its biggest business is still PC graphics and gaming, Nvidia sells AI processors to internet and tech companies engaged in cloud computing. They include enterprise software providers like Salesforce.com (CRM), internet giant Facebook, tech consultancy Accenture (ACN), Alphabet's (GOOGL) Google, Microsoft and lesser-known names like Splunk (SPLK). Steven Milunovich, an analyst at UBS, says web-connected devices, generally called the "Internet of Things," will require AI to process real-world data gathered from sensor networks, suggesting a very large market. Salesforce.com is in the early stages of embedding AI tools in existing cloud products to make them more predictive, analysts say.
H2O.ai and Nvidia today announced that they have partnered to take machine learning and deep learning algorithms to the enterprise on Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. The company made the announcement at Nvidia's GPU Tech event in San Jose, Calif. H2O.ai will offer its machine learning algorithms in a newly minted GPU edition and its Deep Water product on Nvidia GPUs. H2O.ai is also a founding member of the GPU Open Analytics initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU-edition machine learning algorithms are compatible with the GPU Data Frame, the open in-GPU-memory data frame.
ERP deployments led by SAP and others were a direct result of the Business Process re-engineering phenomenon. How would you rethink Enterprise business processes using Augmented Intelligence? Deep Learning involves automatic feature detection: the model learns its features from the data rather than relying on hand-engineered ones. I am exploring these ideas in more detail as part of my work on the Enterprise AI lab we are launching in London and Berlin in partnership with UPM and Nvidia, both for Enterprises and Cities.
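The automatic-feature-detection point can be illustrated with a deliberately tiny, hypothetical example (not from the article): rather than hand-coding an edge-detecting filter, a least-squares fit recovers one from labelled signals -- the same learn-the-features-from-data principle that deep networks scale up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step signals: the "feature" that matters is the location of the jump.
def make_signal():
    edge = rng.integers(2, 14)
    s = np.zeros(16)
    s[edge:] = 1.0
    s += rng.normal(0.0, 0.05, 16)  # a little measurement noise
    return s, edge

# Targets: 1 at the edge position, 0 everywhere else.
signals, targets = [], []
for _ in range(200):
    s, edge = make_signal()
    t = np.zeros(15)
    t[edge - 1] = 1.0
    signals.append(s)
    targets.append(t)

# Learn a length-2 filter w so the sliding dot product
# w[0]*s[i] + w[1]*s[i+1] lights up at the edge. Nobody tells the
# model "take a difference" -- least squares discovers it from data.
A, b = [], []
for s, t in zip(signals, targets):
    for i in range(15):
        A.append([s[i], s[i + 1]])
        b.append(t[i])
w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
```

The fitted filter comes out close to [-1, 1], i.e. a difference (edge-detecting) kernel that was learned, not designed -- a two-parameter analogue of the feature hierarchies deep networks learn automatically.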
Investors continued to awaken to NVIDIA's enviable long-term growth potential in 2016, extending the rally in its shares. As part of the transaction, shareholders of the non-enterprise business will receive a one-time $1.5 billion dividend, roughly 5 times the company's 2016 cash dividend payments. So the powerful rally Computer Sciences shares enjoyed in 2016 seems more likely to have been fueled by the excitement surrounding the merger than by the favorable long-term outlook for its business. The Motley Fool recommends Accenture.
NVIDIA's graphics processing (GPU) technology has been one of the biggest beneficiaries of the rise of specialized computing, gaining traction with workloads in supercomputing, artificial intelligence (AI) and connected cars. NVIDIA believes the DGX-1's combination of compute power and energy efficiency can democratize AI computing hardware, most of which currently runs in research labs or the hyperscale data centers of cloud providers, who deliver machine learning as a service. The leading cloud players have embraced GPUs, as Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM all offer GPU cloud servers. Intel recently introduced a hybrid accelerator that combines traditional Intel CPUs with field-programmable gate arrays (FPGAs), semiconductors that can be reprogrammed to perform specialized computing tasks.
It boasts an end-to-end AI platform, a similar car platform, and GPUs specifically designed for deep learning. NVIDIA announced it would be collaborating with Microsoft (NASDAQ: MSFT) on the design of an upcoming server chip, including and optimizing an enterprise AI framework. Additionally, NVIDIA and IBM announced a deep learning software toolkit, called IBM PowerAI, which would run on the optimized server. In a separate partnership, Baidu's cloud platform and mapping technology would be combined with NVIDIA's self-driving computing platform.