Big Data Analytics Infrastructure

@machinelearnbot

Recent surveys suggest the number one investment area for both private and public organizations is the design and building of a modern data warehouse (DW) / business intelligence (BI) / data analytics architecture that provides a flexible, multi-faceted analytical ecosystem. The goal is to leverage both internal and external data to obtain valuable, actionable insights that allow the organization to make better decisions. Unfortunately, the sheer number of recent DW / BI / data analytics innovations, themes, and architectural paths is causing confusion. The "Big Data" and "Hadoop" hype is leading many organizations to roll out Hadoop / MapReduce systems to dump data into, without a big-picture information management strategy or an understanding of how the pieces of a data analytics ecosystem fit together to optimize decision-making. This has even produced a new word: "Hadump," meaning data dumped into Hadoop with no plan.


Hadoop as a Data Management Hub

@machinelearnbot

Forward-looking organizations should consider Hadoop as a data management hub. New features in Hadoop 2.0, most notably YARN, which separates resource management from the MapReduce processing engine, provide greater flexibility and manageability, allowing firms to move more workloads to Hadoop efficiently and to build analytics models over a much larger data pool. In financial services, for example, many organizations have been experimenting with Hadoop projects. With densely deployed rack-mountable blade servers and massively parallel processing, the economics of setting up Hadoop clusters are quite compelling. At its core, Hadoop is an open-source, general-purpose data storage and processing framework.
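To make that "storage and processing" claim concrete, here is a minimal word count written for Hadoop Streaming in Python. This is an illustrative sketch, not the approach of any article above: the jar path, HDFS paths, and file name are placeholders to adapt to your cluster.

```python
#!/usr/bin/env python3
"""Minimal Hadoop Streaming word count (illustrative sketch).

Submit roughly as follows (jar and HDFS paths are placeholders):
  hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
      -files wordcount.py \
      -input /data/raw/logs -output /data/out/wordcount \
      -mapper "python3 wordcount.py map" \
      -reducer "python3 wordcount.py reduce"
"""
import sys

def mapper():
    # Emit one (word, 1) pair per word; the framework sorts by key between phases.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped and sorted by word, so a running total suffices.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

The same file can be sanity-checked without a cluster: `cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce` mimics the map, shuffle-sort, and reduce phases locally.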


Everything a Data Scientist Should Know About Data Management - KDnuggets

#artificialintelligence

To be a real "full-stack" data scientist, or what many bloggers and employers call a "unicorn," you have to master every step of the data science process, all the way from storing your data to putting your finished product (typically a predictive model) into production. But the bulk of data science training focuses on machine/deep learning techniques; data management knowledge is often treated as an afterthought. Data science students usually learn modeling skills on processed, cleaned data in text files stored on their laptops, ignoring how the data sausage is made. Students often don't realize that in industry settings, getting raw data from various sources ready for modeling is usually 80% of the work. And because enterprise projects usually involve a massive amount of data that a local machine is not equipped to handle, the entire modeling process often takes place in the cloud, with most of the applications and databases hosted on servers in data centers elsewhere. Even after students land jobs as data scientists, data management often becomes something a separate data engineering team takes care of. As a result, too many data scientists know too little about data storage and infrastructure, often to the detriment of their ability to make the right decisions in their jobs. The goal of this article is to provide a roadmap of what a data scientist in 2019 should know about data management, from types of databases, where and how data is stored and processed, to the current commercial options, so that aspiring "unicorns" can dive deeper on their own, or at least learn enough to sound like one at interviews and cocktail parties.
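As a small, hedged illustration of that 80% (pulling raw data from a database and shaping it for modeling), here is a sketch using Python's built-in sqlite3 with pandas; the database file, tables, and columns are hypothetical stand-ins for a real enterprise warehouse.

```python
import sqlite3

import pandas as pd

# Stand-in for a production warehouse connection (hypothetical file).
conn = sqlite3.connect("warehouse.db")

# Join raw customer and event tables into one frame (hypothetical schema).
df = pd.read_sql_query(
    """
    SELECT c.customer_id, c.signup_date, e.event_type, e.event_ts
    FROM customers AS c
    JOIN events AS e ON e.customer_id = c.customer_id
    """,
    conn,
)

# Typical cleanup before any model sees the data: fix types, drop duplicates,
# handle missing values.
df["event_ts"] = pd.to_datetime(df["event_ts"], errors="coerce")
df = df.drop_duplicates().dropna(subset=["event_ts"])

# One simple feature per customer, ready to join into a modeling table.
features = df.groupby("customer_id").size().rename("event_count").reset_index()
print(features.head())
conn.close()
```

In an enterprise setting the connection would point at a cloud-hosted warehouse rather than a local file, but the shape of the work (query, clean, aggregate) is the same.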


Hadoop vs. Spark – An Accurate Question? - techsocialnetwork

#artificialintelligence

I just googled "Hadoop vs. Spark" and got nearly 35 million results. That's because Hadoop and Spark are two of the most prominent distributed systems for processing data on the market today, and a hot subject for organizations planning their big data analytics: Which big data software should you choose? Which is the best big data framework? How do Hadoop and Spark stack up?
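For a concrete sense of the comparison, here is the canonical word count expressed in PySpark; a minimal sketch that assumes a running Spark installation, with a placeholder input path.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///data/raw/logs")  # placeholder path
    .flatMap(lambda line: line.split())    # split lines into words (map side)
    .map(lambda word: (word, 1))           # pair each word with a count of 1
    .reduceByKey(lambda a, b: a + b)       # shuffle and sum counts per word
)

for word, n in counts.take(10):
    print(word, n)
spark.stop()
```

The chained transformations run in memory where possible, which is the usual headline difference from classic MapReduce, which persists intermediate results between stages.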


What Is Big Data?

#artificialintelligence

Big Data is the ocean of information we swim in every day: vast zettabytes of data flowing from our computers, mobile devices, and machine sensors. Organizations use this data to drive decisions, improve processes and policies, and create customer-centric products, services, and experiences. Big Data is "big" not just because of its volume, but also because of the variety and complexity of its nature; typically, it exceeds the capacity of traditional databases to capture, manage, and process it. And Big Data can come from anywhere or anything on earth that we can monitor digitally.