Other organizations can leverage business data to drive data-informed project management, allowing business leaders to determine more accurately how long certain operations will take and how much they will cost. The fundamentals of these technologies are rooted in data-driven algorithms that enable machines to develop learned responses or predictive capabilities. As a result, AI and machine learning bring data, big data, that requires resources to be allocated: not only specialists such as programmers, but also on-premises resources such as storage, server CPUs, and networking bandwidth, as well as cloud-hosted storage services. As businesses develop their digital transformation strategies and look to create a unique competitive advantage, AI and machine learning are increasingly considered the keys to unlocking the value of an organization's accumulated data.
For example, for personalized recommendations, we have been working with learning-to-rank methods that learn individual rankings over item sets.

Figure 1: Typical data science workflow, starting with raw data that is turned into features and fed into learning algorithms, resulting in a model that is applied to future data.

In practice, this pipeline is iterated and improved many times: trying out different features, different forms of preprocessing, different learning methods, or even going back to the source and trying to add more data sources. Perhaps the main difference between production systems and data science systems is that production systems are real-time systems that run continuously.
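To make the learning-to-rank idea concrete, here is a minimal pairwise sketch in plain NumPy. The item features, hidden relevance weights, and pair construction are all synthetic assumptions for illustration (not the method of any particular system): a linear scorer is fit by gradient ascent on a pairwise logistic likelihood over feature differences, so that preferred items end up with higher scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (an assumption for illustration): 50 items with 4
# features each, and a hidden "true" relevance weight vector.
X = rng.normal(size=(50, 4))
w_true = np.array([2.0, -1.0, 0.5, 0.0])
true_scores = X @ w_true

# Pairwise preference data: item i is preferred over item j whenever
# its true score is higher. The label is always "prefer the first",
# encoded as the feature difference x_i - x_j.
pairs = [(i, j) for i in range(50) for j in range(50)
         if true_scores[i] > true_scores[j]]
D = np.array([X[i] - X[j] for i, j in pairs])

# Fit a linear scorer by gradient ascent on the pairwise logistic
# likelihood: maximize sum of log sigmoid(w . (x_i - x_j)).
w = np.zeros(4)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-D @ w))      # P(first item preferred)
    w += 0.1 * D.T @ (1.0 - p) / len(D)   # log-likelihood gradient step

# Ranking items by X @ w now approximates the true preference order.
pred_scores = X @ w
```

Swapping in different features or preprocessing here is exactly the iteration loop the workflow describes: only the construction of `X` and `D` changes, the learner stays the same.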
In another example of disruption through AI, travel companies have begun using behavioral data and predictive analytics to customize brand experiences based on individuals' preferences and patterns. Automating IT functions alone reduces expenses by 14 to 28 percent, so companies that launch using automated services quickly establish a financial advantage over larger, legacy-burdened competitors. Some tech experts believe that the current generation of applied AI systems, such as predictive analytics, will give small businesses advantages through increased automation and efficiency. New BI platforms offer data visualization, customer relationship management programs, and other critical BI services.
There is no shortage of attention lately on the "Internet of Things". As a case in point, see the "Developing Innovation and Growing the Internet of Things Act", or "DIGIT Act" (S. 2607), a bill introduced in the Senate on March 1, 2016 and amended on September 28, 2016, "to ensure appropriate spectrum planning and inter-agency coordination to support the Internet of Things". A companion bill, H.R. 5117, was introduced in the House of Representatives on April 28, 2016. However, since there is no "internet" dedicated to "things", it is fair to say that the Internet of Things does not exist as such. We are left with a definitional vacuum, but it is belaboring the obvious to note that there is no dearth of attempts around the world to fill the gap. Perhaps, as a helpful shortcut, we could view the expression as a metaphor that captures the arrival of almost anything and everything, until now out of scope, into the communications space.
Human resources departments are rarely, if ever, thought of as cutting edge when it comes to the use of technology. A closer look, however, shows the implementation of new technologies, including solutions powered by Artificial Intelligence (AI), in almost every aspect of the talent function. According to a recent Towers Watson HR Service Delivery and Technology Survey, HR professionals are overhauling their structures to improve quality and efficiency, with 33% of the group spending significantly more on technology in the last year. HR's investment in new technology has also spurred the creation of new data sources. Data around employee productivity, wellness, manager effectiveness, and a host of other activities is quickly dwarfing the data set that HR has traditionally relied on.
Today we can collect lots and lots of performance data. We build beautiful dashboards and even have fancy query languages to access and transform the data. Still, performance data remains a secret language that only a handful of people understand. The more digital a business becomes, the more stakeholders take an interest in this data, including how it relates to business outcomes. Some of these people have never used a monitoring tool before.
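One way to translate raw performance data into terms non-specialist stakeholders understand is to summarize it as percentiles and SLO attainment. The sketch below is illustrative only: the lognormal samples stand in for real response times, and the 50 ms threshold `SLO_MS` is a hypothetical service-level objective, not one from any real system.

```python
import random

random.seed(1)
# Synthetic stand-in for real monitoring data: 10,000 response
# times (milliseconds) drawn from a lognormal distribution.
samples = [random.lognormvariate(3.0, 0.5) for _ in range(10_000)]

def percentile(data, q):
    """Nearest-rank percentile, q in (0, 100]."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(q / 100 * len(s)) - 1))
    return s[k]

SLO_MS = 50.0  # hypothetical service-level objective

# A business-readable report: typical and tail latency, plus the
# share of requests that met the objective.
report = {
    "p50_ms": percentile(samples, 50),
    "p95_ms": percentile(samples, 95),
    "p99_ms": percentile(samples, 99),
    "within_slo": sum(t <= SLO_MS for t in samples) / len(samples),
}
```

A dashboard built on a summary like `report` answers the business question ("what fraction of users got an acceptable experience?") rather than exposing raw time series that only monitoring specialists can read.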
BigDL is a distributed deep learning library for Apache Spark; with BigDL, users can write their deep learning applications as standard Spark programs, which can run directly on top of existing Spark or Hadoop clusters. Modeled after Torch, BigDL provides comprehensive support for deep learning, including numeric computing (via Tensor) and high-level neural networks; in addition, users can load pre-trained Caffe or Torch models into Spark programs using BigDL. To achieve high performance, BigDL uses Intel MKL and multi-threaded programming in each Spark task. Consequently, it is orders of magnitude faster than out-of-the-box open source Caffe, Torch, or TensorFlow on a single-node Xeon. BigDL can efficiently scale out to perform data analytics at "Big Data scale" by leveraging Apache Spark (a lightning-fast distributed data processing framework), as well as efficient implementations of synchronous SGD and all-reduce communications on Spark.
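To see the synchronous SGD with all-reduce pattern without a Spark cluster, here is a minimal NumPy simulation; this is a sketch of the communication pattern only, not BigDL's actual API. Four simulated "workers" each hold a data shard and a replica of the parameters; every step, each computes a local gradient, the gradients are averaged (the role of all-reduce), and every replica applies the identical update.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear-regression task (an assumption for illustration),
# sharded across 4 simulated workers as in data-parallel training.
N_WORKERS, N, D = 4, 400, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=N)
shards = [(X[i::N_WORKERS], y[i::N_WORKERS]) for i in range(N_WORKERS)]

def local_gradient(w, Xs, ys):
    # Mean-squared-error gradient over this worker's partition only.
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

def all_reduce_mean(grads):
    # Stand-in for all-reduce: every worker ends up holding the
    # element-wise average of all workers' gradients.
    return np.mean(grads, axis=0)

w = np.zeros(D)  # model replica (identical on every worker)
for step in range(300):
    # In a real cluster these run in parallel, one per worker.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    # Synchronous update: all replicas apply the same averaged gradient,
    # so they stay bit-identical without a parameter server.
    w -= 0.05 * all_reduce_mean(grads)
```

Because every replica sees the same averaged gradient each step, the simulation converges exactly as single-machine SGD on the full dataset would, which is the point of the synchronous design.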
As data scientists, we are aware that bias exists in the world. We read up on stories about how cognitive biases can affect decision-making. We know that, for instance, a resume with a white-sounding name will receive a different response than the same resume with a black-sounding name, and that writers of performance reviews use different language to describe contributions by women and men in the workplace. We read stories in the news about ageism in healthcare and racism in mortgage lending. Data scientists are problem solvers at heart, and we love our data and our algorithms, which sometimes seem to work like magic. So we may be inclined to try to solve these problems of human bias by turning the decisions over to machines.
No longer was it an esoteric discipline commanded by the few, the proud, the data scientists. Now it was, in theory, everyone's business. Machine learning's power and promise, and all that surrounded and supported it, moved more firmly into the enterprise development mainstream. That movement revolved around three trends: new and improved tool kits for machine learning, better hardware (and easier access to it), and more cloud-hosted, as-a-service variants of machine learning that provided both open source and proprietary tools. Once upon a time, if you wanted to implement machine learning in an app, you had to roll the algorithms yourself.