Results


Developing the AI future

#artificialintelligence

Artificial Intelligence (AI) is starting to change how many businesses operate. The ability to process and deliver data accurately, and faster than any human could, is already transforming how we do everything from studying diseases and understanding road traffic behaviour to managing finances and predicting weather patterns. For business leaders, AI's potential could be fundamental to future growth. With so much on offer and at stake, the question is no longer simply what AI is capable of, but where AI can best be used to deliver immediate business benefits. According to Forrester, 70% of enterprises will be implementing AI in some way over the next year.


What goes into the right storage for AI? - IBM IT Infrastructure Blog

#artificialintelligence

Artificial intelligence (AI), machine learning and cognitive analytics are having a tremendous impact in areas ranging from medical diagnostics to self-driving cars. AI systems are highly dependent on enormous volumes of data--both at rest in repositories and in motion in real time--to learn from experience, make connections and arrive at critical business decisions. Use of AI is also expected to expand significantly in the not-so-distant future. As a result, having the right storage to support the massive amounts of data required for AI workloads is an important consideration for a growing number of organizations. Availability: when business leaders use AI for critical tasks such as deciding how best to run a manufacturing process or optimize a supply chain, they cannot afford to risk any loss of availability in the supporting storage system.


AI and HPC: Inferencing, Platforms & Infrastructure

#artificialintelligence

This feature continues our series of articles that survey the landscape of HPC and AI. This post focuses on inferencing, platforms, and infrastructure at the convergence of HPC and AI. Inferencing is the operation that makes data-derived models valuable, because those models can predict future outcomes and perform recognition tasks better than humans. Inferencing works because once the model is trained (meaning the bumpy surface has been fitted), the ANN can interpolate between known points on the surface to make correct predictions for data points it has never seen before--that is, points that were not in the original training data. Without getting too technical: during inferencing, ANNs perform this interpolation on a nonlinear (bumpy) surface, which means they can outperform the straight-line interpolation of a conventional linear method.
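The interpolation point can be made concrete with a toy sketch (not from the article): given the same sparse training points on a "bumpy" surface, a nonlinear fit -- here a polynomial, standing in for a trained ANN -- recovers unseen points far more accurately than straight-line interpolation. The surface, sample counts and polynomial degree are all illustrative assumptions.

```python
import numpy as np

# "Bumpy" ground-truth surface both methods must approximate.
def surface(x):
    return np.sin(x)

# Sparse training points -- the only data either method sees.
x_train = np.linspace(0.0, 2.0 * np.pi, 12)
y_train = surface(x_train)

# Unseen query points lying between the training samples.
x_test = np.linspace(0.0, 2.0 * np.pi, 500)
y_true = surface(x_test)

# Conventional linear method: straight-line interpolation between known points.
y_lin = np.interp(x_test, x_train, y_train)

# Nonlinear stand-in for the trained ANN: a degree-9 polynomial fit.
# (Rescale x to [-1, 1] first to keep the fit well conditioned.)
t_train = (x_train - np.pi) / np.pi
t_test = (x_test - np.pi) / np.pi
coeffs = np.polyfit(t_train, y_train, deg=9)
y_nonlin = np.polyval(coeffs, t_test)

err_lin = np.max(np.abs(y_lin - y_true))
err_nonlin = np.max(np.abs(y_nonlin - y_true))
print(f"linear interpolation max error:     {err_lin:.4f}")
print(f"nonlinear (stand-in ANN) max error: {err_nonlin:.4f}")
```

On this surface the straight-line error is dominated by the curvature between samples, while the nonlinear fit tracks the bumps -- the same reason an ANN's learned nonlinear surface interpolates better than a linear method.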


HPE pushes toward autonomous data center with InfoSight AI recommendation engine

#artificialintelligence

HPE is adding an AI-based recommendation engine to the InfoSight predictive analytics platform for flash storage, taking another step toward what it calls the autonomous data center, where systems modify themselves to run more efficiently. The ultimate goal is to simplify and automate infrastructure management in order to cut operating expenses. HPE acquired InfoSight as part of its $1 billion deal earlier this year for Nimble Storage, a maker of all-flash and hybrid flash storage products. Along with announcing the new recommendation engine, HPE said Tuesday that it is also extending InfoSight to work with the 3PAR high-end storage technology it acquired in 2010. HPE says this is only the beginning of what it is doing to develop InfoSight's ability to monitor infrastructure, predict possible problems and recommend ways to enhance performance.


HPE introduces new set of artificial intelligence platforms and services - ET CIO

#artificialintelligence

Bengaluru: Hewlett Packard Enterprise (HPE) today announced new purpose-built platforms and services capabilities to help companies simplify the adoption of Artificial Intelligence, with an initial focus on a key subset of AI known as deep learning. Inspired by the human brain, deep learning is typically implemented for challenging tasks such as image and facial recognition, image classification and voice recognition. To take advantage of deep learning, enterprises need a high-performance compute infrastructure to build and train learning models that can manage large volumes of data to recognize patterns in audio, images, videos, text and sensor data. Many organizations lack several integral requirements for implementing deep learning, including expertise and resources; sophisticated, tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems. To help customers overcome these challenges and realize the potential of AI, HPE is announcing the following offerings: • HPE Rapid Software Installation for AI: an integrated hardware and software solution, purpose-built for high-performance computing and deep learning applications.


AIOps tools portend automated infrastructure management

#artificialintelligence

Automated infrastructure management took a step forward with the emergence of AIOps monitoring tools that use machine learning to proactively identify infrastructure problems. Orchestration tools are becoming increasingly popular as part of the DevOps process because they let admins focus on more critical tasks rather than the routine steps it takes to move a workflow along. Our experts analyze the top solutions in the market, namely: Automic, Ayehu, BMC Control-M, CA, Cisco, IBM, Micro Focus, Microsoft, ServiceNow, and VMware.
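At its simplest, the "machine learning to proactively identify infrastructure problems" idea behind these tools amounts to flagging metric samples that deviate sharply from recent behaviour. The sketch below is an illustrative stand-in, not any vendor's implementation: a rolling z-score detector over a latency series, with window size and threshold chosen arbitrarily for the example.

```python
from collections import deque
import math

def rolling_zscore_anomalies(samples, window=10, threshold=4.0):
    """Return indices whose value deviates from the trailing window's
    mean by more than `threshold` standard deviations -- a toy
    stand-in for an AIOps anomaly detector."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Steady disk latency (ms) with one sudden spike at index 15.
latency = [10, 11, 10, 12, 11, 10, 11, 12, 10, 11,
           11, 10, 12, 11, 10, 95, 11, 10, 12, 11]
print(rolling_zscore_anomalies(latency))  # → [15]
```

Production AIOps platforms layer far more on top (seasonality models, cross-metric correlation, remediation recommendations), but the proactive-identification core is this kind of statistical deviation test running continuously over telemetry.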


HPE Introduces New Set of Artificial Intelligence Platforms and Services

#artificialintelligence

HPE Rapid Software Installation for AI: HPE introduced an integrated hardware and software solution, purpose-built for high-performance computing and deep learning applications. Built on the HPE Apollo 6500 system in collaboration with Bright Computing to enable rapid deep learning application development, the solution includes pre-configured deep learning software frameworks, libraries, automated software updates and cluster management optimized for deep learning, and supports NVIDIA Tesla V100 GPUs. HPE Deep Learning Cookbook: Built by the AI Research team at Hewlett Packard Labs, the Deep Learning Cookbook is a set of tools to guide customers in selecting the best hardware and software environment for different deep learning tasks. These tools help enterprises estimate the performance of various hardware platforms, characterize the most popular deep learning frameworks, and select the ideal hardware and software stacks to fit their individual needs. The Deep Learning Cookbook can also be used to validate the performance and tune the configuration of already-purchased hardware and software stacks.


Artificial Intelligence Impacts Business: the AI-Business Revolution - Datamation

#artificialintelligence

Artificial intelligence and business, it seems, is a marriage that is all but inevitable. Artificial intelligence is intelligence built into computing systems, or as MIT professor Marvin Minsky put it, "the science of making machines do those things that would be considered intelligent if they were done by people." In sum, this concept of extending the "intelligence" of systems is the core of how AI enables critical advantage for businesses. People use it every day in their personal and professional lives. What is new are the business offerings made possible by two major factors: 1) a massive increase in computer processing speeds at reasonable cost, and 2) massive amounts of rich data for mining and analysis.


The AI revolution in HPC - IBM Systems Blog: In the Making

#artificialintelligence

In a few months, when the HPC community gathers in Denver for SuperComputing 2017, I expect it will become clear that the supercomputing field is poised to take the next giant step in its evolutionary path. For decades, the HPC community has spoken longingly of efficiently steering simulations, improving the interpretation of complex model outputs and building more efficient, representative models of complex phenomena. Now we are beginning to see these desires realized, as researchers and commercial enterprises demonstrate the utility of melding AI with HPC in products and approaches across a broad spectrum of problems and industries. IBM has been focused on merging AI and HPC for some time. Our recently announced PowerAI Vision is a natural adjunct to HPC simulations producing visual outputs.

