AMD Wins Another Cloud Provider With Baidu ABC Services

#artificialintelligence

Advanced Micro Devices celebrated another victory in the cloud market yesterday. Chinese Artificial Intelligence (AI) and search giant Baidu announced the immediate availability of AI, big data and cloud computing (ABC) services based on single-socket servers powered by EPYC. This announcement follows Microsoft's announcement last week that its new L-Series storage-optimized virtual machines would be powered by EPYC. Hewlett Packard Enterprise also recently announced the availability of the ProLiant DL385 Gen10 server platform for virtualized infrastructure, powered by EPYC as well. It was not entirely surprising that AMD secured Baidu as a customer--Baidu publicly announced support for EPYC at AMD's launch event back in June.


What goes into the right storage for AI? - IBM IT Infrastructure Blog

#artificialintelligence

Artificial intelligence (AI), machine learning and cognitive analytics are having a tremendous impact in areas ranging from medical diagnostics to self-driving cars. AI systems are highly dependent on enormous volumes of data--both at rest in repositories and in motion in real time--to learn from experience, make connections and arrive at critical business decisions. Usage of AI is also expected to expand significantly in the not-so-distant future. As a result, having the right storage to support the massive amounts of data required for AI workloads is an important consideration for an increasing number of organizations.

Availability: When a business leader uses AI for critical tasks such as understanding how best to run their manufacturing process or to optimize their supply chain, they cannot afford to risk any loss of availability in the supporting storage system.


Rebuilding the enterprise with A.I. HCL Blogs

#artificialintelligence

Change is a constant in the world of business. That is why, at any given point in modern history, enterprises are dealing with one "transformational trend" or another. In the 80s and 90s it was computing; in the 2000s it was the internet, followed by mobility, cloud, and now the latest mantra: digitalization. In a way, it is this constant need to evolve, change, and improve on the status quo, with all the tools available to us, that defines us as a human race. The only way is forward, and progress is limited only by our own inventiveness.


Amazon brings machine learning to "everyday developers" » Banking Technology

#artificialintelligence

Amazon Web Services (AWS) is looking to bring machine learning (ML) to ordinary developers, launching the SageMaker service to simplify building applications, reports Enterprise Cloud News (Banking Technology's sister publication). ML is too complicated for ordinary developers, AWS CEO Andy Jassy said in a keynote at the AWS re:Invent event. "If you want to enable most enterprises and companies to be able to use ML in an expansive way, we have to solve the problem of making it accessible to everyday developers and scientists," he said. Amazon has a long history of ML, Jassy says. "We've been doing ML at Amazon for 20 years," he said.


NFL bets on AWS for machine learning

ZDNet

The National Football League will use Amazon Web Services as its standard machine learning and analytics provider to boost the performance of the league's player statistics platform. The announcement is just the latest customer win AWS has touted at its re:Invent conference this week, following similar cloud deals with Time Warner and Intuit. AWS also announced new cloud deals with the Walt Disney Company and Expedia on Wednesday. Amazon said the NFL will use AWS' machine learning and data analytics services to improve the statistical capabilities and performance of the league's Next Gen Stats platform, which tracks players on the field and captures stats such as speed, rushes and passes. AWS will also become an "Official Technology Provider" of the NFL.
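Stats like player speed are derived from raw tracking coordinates. As a rough illustration only (not the NFL's or AWS's actual pipeline, whose details are not public in this article), peak speed can be estimated from timestamped position samples:

```python
import math

def estimate_speed(samples):
    """Estimate peak speed (yards/sec) from timestamped tracking samples.

    `samples` is a time-ordered list of (t_seconds, x_yards, y_yards)
    tuples. This is a toy sketch of the idea, not the Next Gen Stats
    implementation.
    """
    peak = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        dist = math.hypot(x1 - x0, y1 - y0)  # straight-line distance moved
        peak = max(peak, dist / dt)
    return peak

# A player covering 5 yards in 0.5 s, then 2 yards in 0.5 s, peaks at 10 yd/s.
track = [(0.0, 0.0, 0.0), (0.5, 5.0, 0.0), (1.0, 5.0, 2.0)]
print(estimate_speed(track))  # 10.0
```

In practice such per-frame metrics would be computed at scale across every player and play, which is the kind of workload the article says is moving to AWS' analytics services.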


HPE pushes toward autonomous data center with InfoSight AI recommendation engine

#artificialintelligence

HPE is adding an AI-based recommendation engine to the InfoSight predictive analytics platform for flash storage, taking another step toward what it calls the autonomous data center, where systems modify themselves to run more efficiently. The ultimate goal is to simplify and automate infrastructure management in order to cut operating expenses. HPE acquired InfoSight as part of its $1 billion deal earlier this year for Nimble Storage, a maker of all-flash and hybrid flash storage products. Along with the announcement of the new recommendation engine, HPE also said Tuesday that it is extending InfoSight to work with the 3PAR high-end storage technology it acquired in 2010. HPE says that is only the beginning of what it is doing to develop InfoSight's ability to monitor infrastructure, predict possible problems and recommend ways to enhance performance.


HPE introduces new set of artificial intelligence platforms and services - ET CIO

#artificialintelligence

Bengaluru: Hewlett Packard Enterprise (HPE) today announced new purpose-built platforms and services capabilities to help companies simplify the adoption of Artificial Intelligence, with an initial focus on a key subset of AI known as deep learning. Inspired by the human brain, deep learning is typically implemented for challenging tasks such as image and facial recognition, image classification and voice recognition. To take advantage of deep learning, enterprises need a high-performance compute infrastructure to build and train learning models that can manage large volumes of data to recognize patterns in audio, images, videos, text and sensor data. Many organizations lack several integral requirements to implement deep learning, including expertise and resources; sophisticated and tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems. To help customers overcome these challenges and realize the potential of AI, HPE is announcing the following offerings:

• HPE's Rapid Software Development for AI: HPE introduced an integrated hardware and software solution, purpose-built for high performance computing and deep learning applications.
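The training the article refers to boils down to iteratively adjusting model weights to reduce prediction error on data, which is why compute infrastructure matters at scale. A minimal, framework-free sketch of that loop (a single linear unit fit by gradient descent on a toy task; real deep learning stacks like those HPE targets involve many layers and GPUs, which this deliberately omits):

```python
# Toy task: learn y = 2*x + 1 from noise-free samples via gradient descent.
data = [(x, 2.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0]]

w, b = 0.0, 0.0   # untrained parameters
lr = 0.05         # learning rate

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y      # derivative of 0.5*err**2 with respect to pred
        w -= lr * err * x   # step each parameter down its gradient
        b -= lr * err

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Scaling this same pattern to millions of parameters and examples is what drives the demand for the GPU-dense, tightly integrated systems described above.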


IBM Introduces New Software to Ease Adoption of AI, Machine Learning and Deep Learning - insideBIGDATA

#artificialintelligence

IBM announced new software to deliver faster time to insight for high performance data analytics (HPDA) workloads, such as Spark, TensorFlow and Caffe, for AI, Machine Learning and Deep Learning. Based on the same software that will be deployed for the Department of Energy's CORAL Supercomputer Project at both Oak Ridge and Lawrence Livermore, IBM will enable new solutions for any enterprise running HPDA workloads. New to this launch is Deep Learning Impact (DLI), a set of software tools to help users develop AI models with the leading open source deep learning frameworks, like TensorFlow and Caffe. The DLI tools are complementary to the PowerAI deep learning enterprise software distribution already available from IBM. Also new is web access and simplified user interfaces for IBM Spectrum LSF Suites, combining a powerful workload management platform with the flexibility of remote access.


Lenovo says AI crucial for enterprise as it announces new tech for training machine-learning systems

ZDNet

Lenovo has announced new hardware and software for firms building machine-learning systems, as the Chinese tech giant doubles down on AI. Lenovo expects firms will increasingly rely on AI systems to make rapid decisions based on the vast amount of data being generated, predicting that 44 trillion gigabytes of data will exist by 2020. To serve the fast-growing market, Lenovo today announced new hardware and software for streamlining machine learning on high-performance computer systems. The ThinkSystem SD530, a two-socket server in a 0.5U rack form factor, is now available with the latest NVIDIA GPU accelerators and Intel Xeon Scalable family CPUs. By including the option of adding NVIDIA's Tesla V100 GPU accelerator, Lenovo is giving businesses the ability to massively boost the performance of AI-related tasks.