Want optimized AI? Rethink your storage infrastructure and data pipeline

#artificialintelligence

Most discussions of AI infrastructure start and end with compute hardware -- the GPUs, general-purpose CPUs, FPGAs, and tensor processing units responsible for training complex algorithms and making predictions based on those models. But AI also demands a lot from your storage. Keeping a potent compute engine well-utilized requires feeding it with vast amounts of information as fast as possible. Anything less and you clog the works and create bottlenecks. Optimizing an AI solution for capacity and cost, while scaling for growth, means taking a fresh look at its data pipeline.
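
As a rough illustration of what "feeding the compute engine" means in practice, the sketch below uses a PyTorch DataLoader to overlap storage reads with accelerator work via parallel workers and prefetching. The dataset, batch size, and worker counts are illustrative assumptions, not details from the article.

```python
# Minimal sketch (assumed setup, not from the article): keep a GPU-class
# compute engine busy by overlapping storage I/O with computation.
import torch
from torch.utils.data import DataLoader, Dataset

class DiskBackedDataset(Dataset):
    """Stand-in for a dataset whose samples live on shared storage."""
    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        # A real pipeline would read and decode a file here; a random
        # tensor stands in for the storage read.
        return torch.randn(3, 224, 224), idx % 1000

if __name__ == "__main__":
    loader = DataLoader(
        DiskBackedDataset(),
        batch_size=256,
        num_workers=8,       # parallel readers hide storage latency
        pin_memory=True,     # faster, asynchronous host-to-device copies
        prefetch_factor=4,   # each worker keeps 4 batches staged ahead
    )
    for images, labels in loader:
        # The training step runs here; if the accelerator ever idles
        # waiting on this loop, storage (not compute) is the bottleneck.
        pass
```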


insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads - insideBIGDATA

#artificialintelligence

Artificial Intelligence (AI) and Deep Learning (DL) represent some of the most demanding workloads in modern computing history, presenting unique challenges to compute, storage and network resources. In this technology guide, insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads, we'll see how traditional file storage technologies and protocols like NFS starve AI workloads of data, reducing application performance and impeding business innovation. A state-of-the-art AI-enabled data center should concurrently and efficiently service the entire spectrum of activities involved in DL workflows, including data ingest, data transformation, training, inference, and model evaluation, as the sketch below illustrates. The intended audience for this important new technology guide includes enterprise thought leaders (CIOs, director-level IT, etc.), along with data scientists and data engineers who are seeking guidance on infrastructure and specialized hardware for AI and DL. The emphasis of the guide is "real world" applications, workloads, and present-day challenges.
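
As one hedged way to picture "concurrently servicing" those workflow stages, the sketch below overlaps ingest and transformation with a training consumer using a bounded queue. The stage functions, names, and sizes are made up for illustration; a real deployment would use a framework's input pipeline, but the shape of the overlap is the same.

```python
# Minimal sketch (illustrative stage names, not from the guide):
# ingest -> transform -> train stages overlapped via a bounded queue,
# so slow storage reads don't serialize the whole DL workflow.
import queue
import threading

batches = queue.Queue(maxsize=16)  # bounded: applies backpressure to ingest

def ingest_and_transform(num_batches):
    for i in range(num_batches):
        raw = f"raw-batch-{i}"      # stand-in for a storage read
        transformed = raw.upper()   # stand-in for decode/augment work
        batches.put(transformed)    # blocks if training falls behind
    batches.put(None)               # sentinel: no more data

def train():
    while True:
        batch = batches.get()
        if batch is None:
            break
        # A real training step (forward/backward pass) would run here.
        print(f"trained on {batch}")

producer = threading.Thread(target=ingest_and_transform, args=(4,))
producer.start()
train()
producer.join()
```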


Storage strategies for machine learning and AI workloads

#artificialintelligence

Businesses are increasingly using data assets to sharpen their competitiveness and drive greater revenue, and machine learning and AI tools are a growing part of that strategy. But AI workloads have significantly different storage and computing needs than generic workloads: they require huge amounts of data both to build and train models and to keep them running in production. When it comes to storage for these workloads, high performance and long-term retention are the most important concerns.


Delivering Healthcare Innovation In A Heartbeat - Information Technology

#artificialintelligence

Artificial intelligence (AI) and analytics are providing clinicians and researchers with actionable insights, from early detection to end-of-life care, and are changing the way research is done and diagnoses are made. However, unlocking this data treasure trove is not a simple exercise for any healthcare organisation. With Asia-Pacific (APAC) expected to become the global leader in IoT spending, according to IDC, healthcare is unsurprisingly becoming increasingly connected in the region. However, it is this connectivity that adds complexity to the data challenge. Healthcare data is now growing at a rate of 48 per cent every year.
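
A quick back-of-the-envelope check of what that growth rate implies for capacity planning (assuming simple annual compounding, which the article does not spell out):

```python
# Doubling time implied by 48% annual data growth, via compound growth:
# volume * (1.48)^t = 2 * volume  =>  t = ln(2) / ln(1.48)
import math

annual_growth = 0.48
doubling_time_years = math.log(2) / math.log(1 + annual_growth)
print(f"{doubling_time_years:.2f} years")  # ~1.77 years, about 21 months
```

At 48 per cent a year, the data volume doubles roughly every 21 months, which is why the storage side of the healthcare data challenge compounds so quickly.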