If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
You have worked for weeks on building your machine learning system, and its performance is not something you are satisfied with. You can think of multiple ways to improve your algorithm's performance: collect more data, add more hidden units, add more layers, change the network architecture, change the basic algorithm, and so on. But which of these will give the best improvement on your system? You can either try them all, investing a lot of time to find out what works for you, or you can use the following tips drawn from Ng's experience.
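The "try them all" option can at least be organized systematically. Here is a minimal, schematic sketch of such an experiment loop in Python; `evaluate` is a hypothetical stand-in for a real train-and-validate run, and the scores it returns are made up purely so the sketch is self-contained:

```python
# Schematic experiment loop: try each candidate change and keep the best.
CANDIDATES = [
    "collect more data",
    "add more hidden units",
    "add more layers",
    "change the architecture",
]

def evaluate(change):
    # Placeholder: in practice this would train a model with the change
    # applied and return its validation-set accuracy. The numbers below
    # are invented for illustration.
    fake_scores = {
        "collect more data": 0.91,
        "add more hidden units": 0.88,
        "add more layers": 0.87,
        "change the architecture": 0.89,
    }
    return fake_scores[change]

def best_change(candidates):
    # Run every candidate experiment and return the winner.
    results = {c: evaluate(c) for c in candidates}
    return max(results, key=results.get)

print(best_change(CANDIDATES))
```

The point of the sketch is the shape of the loop, not the numbers: each candidate change becomes one controlled experiment, and the decision is made from measured validation performance rather than intuition.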
After decades of struggle and disappointing results, Artificial Intelligence (AI) is finally coming into its own. Recent advances in computational power, mathematical refinements enabling the creation of much deeper neural networks, and dramatic improvements in techniques used to train machine learning systems have all combined to create applications with real practical value. But what is Artificial Intelligence? In the broadest terms, AI is the attempt to create human-level intelligence in machines. This is not something we've achieved, and some argue we never will (though I wouldn't bet against human innovation).
Companies running AI applications often need as much computing muscle as researchers who use supercomputers do. IBM's latest system is aimed at both audiences. The company last week introduced its first server powered by the new Power9 processor designed for AI and high-performance computing. The powerful technologies inside have already attracted the likes of Google and the US Department of Energy as customers. The new IBM Power System AC922 is equipped with two Power9 CPUs and from two to six NVIDIA Tesla V100 GPUs.
As a digital analyst or marketer, you know the importance of analytical decision making. Go to any industry conference, blog, or meetup, or even just read the popular press, and you will hear and see topics like machine learning, artificial intelligence, and predictive analytics everywhere. Because many of us don't come from a technical or statistical background, this can be both a little confusing and intimidating. But don't sweat it: in this post, I will try to clear up some of this confusion by introducing a simple yet powerful framework – the intelligent agent – which will help link these new ideas with familiar tools and concepts like A/B testing and optimization. Note: the intelligent agent framework is the guiding principle of Russell and Norvig's excellent text Artificial Intelligence: A Modern Approach – it's an awesome book, and I recommend that anyone who wants to learn more get a copy or check out their online AI course.
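To make the link to A/B testing concrete, here is a toy intelligent agent in Python. It follows the agent loop: perceive (observe conversion feedback), update its beliefs, and act (pick a variant), using a simple epsilon-greedy rule. Everything here – class names, conversion rates, the simulated environment – is invented for illustration:

```python
import random

class ABTestAgent:
    """A tiny intelligent agent: act, perceive feedback, update beliefs."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}  # times each variant was shown
        self.wins = {v: 0 for v in variants}   # conversions per variant

    def act(self):
        # Explore occasionally; otherwise exploit the best-looking variant.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        return max(self.shows,
                   key=lambda v: self.wins[v] / max(self.shows[v], 1))

    def perceive(self, variant, converted):
        # Update beliefs from the environment's feedback.
        self.shows[variant] += 1
        self.wins[variant] += int(converted)

# Simulated environment: variant "B" truly converts more often.
random.seed(0)
agent = ABTestAgent(["A", "B"])
true_rates = {"A": 0.05, "B": 0.15}
for _ in range(5000):
    v = agent.act()
    agent.perceive(v, random.random() < true_rates[v])

best = max(agent.shows, key=lambda v: agent.wins[v] / max(agent.shows[v], 1))
print(best)
```

Notice that this is just an A/B test that allocates traffic adaptively: the familiar experiment becomes an agent once it closes the loop between observation and action.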
Personalized learning, which tailors educational content to the unique needs of individual students, has become a huge component of K–12 education. A growing number of college educators are embracing the trend, taking advantage of data analytics and artificial intelligence to deliver just-right, just-in-time learning to their students. Data-driven insights are becoming integral to business and financial decision-making by institutional leaders, and educators are quickly finding ways to leverage analytics to increase student retention. Applying data analytics to adaptive learning programs is proving to be another smart application. In adaptive learning, educators collect data on various aspects of student performance -- from engagement with course content to exam performance -- and tailor material to each student's knowledge level and ideal learning style.
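One illustrative way such tailoring can work – a toy rule of my own, not any particular product's algorithm – is to track a student's recent answers and step the difficulty of the next item up or down accordingly:

```python
def next_difficulty(current, recent_correct, step=1, lo=1, hi=10):
    """Toy adaptive-learning rule: adjust item difficulty from recent results.

    current        -- current difficulty level (int in [lo, hi])
    recent_correct -- booleans for the student's last few answers
    """
    if not recent_correct:
        return current
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:            # mastering the material: step up
        return min(current + step, hi)
    if accuracy <= 0.4:            # struggling: step down
        return max(current - step, lo)
    return current                 # in the sweet spot: hold steady

print(next_difficulty(5, [True, True, True, True, False]))    # 6
print(next_difficulty(5, [False, False, True, False, False])) # 4
```

Real adaptive-learning systems model far more than a rolling accuracy window, but the feedback loop – measure performance, then adjust the material – is the same.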
Machine learning (ML) powers an increasing number of the applications and services that we use daily. For organizations that are beginning to leverage datasets to generate business insights, the next step after you've developed and trained your model is deploying it in a production scenario. That could mean integration directly within an application or website, or it may mean making the model available as a service. As ML continues to mature, the emphasis shifts from development towards deployment: you need to move from developing models to real-world production scenarios concerned with issues of inference performance, scaling, load balancing, training time, reproducibility, and visibility. In previous posts, we've explored the ability to save and load trained models with TensorFlow so that they can be served for inference.
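The save-then-serve pattern is framework-independent: persist the trained model as an artifact, then load it in a separate serving process that only runs inference. Here is a minimal stdlib sketch with a stand-in model class (TensorFlow itself uses its SavedModel format rather than pickle, so treat this purely as an illustration of the pattern):

```python
import os
import pickle
import tempfile

class TinyLinearModel:
    """Stand-in for a trained model: y = w*x + b."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def predict(self, x):
        return self.w * x + self.b

# "Training side": fit the parameters (hard-coded here) and save the artifact.
model = TinyLinearModel(w=2.0, b=1.0)
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# "Serving side": a real serving process would load the artifact once at
# startup and answer inference requests from it.
with open(path, "rb") as f:
    served = pickle.load(f)
print(served.predict(3.0))  # 7.0
```

Separating the artifact from the training code is what makes the deployment concerns above – scaling, load balancing, reproducibility – tractable: many identical serving replicas can load the same immutable artifact.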
This work was done in collaboration with Ding Ding and Sergey Ermolin from Intel. In recent years, the scale of the datasets and models used in deep learning has increased dramatically. Although larger datasets and models can improve accuracy in many AI applications, they often take much longer to train on a single machine. Yet distributing training across large clusters is still uncommon with today's popular deep learning frameworks, in contrast to the Big Data world, where distributed processing has long been standard: it is often hard to gain access to a large GPU cluster, and popular DL frameworks lack convenient facilities for distributed training. By leveraging the cluster distribution capabilities of Apache Spark, BigDL successfully performs very large-scale distributed training and inference.
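The core idea behind data-parallel distributed training of this kind can be sketched without a cluster: each worker computes a gradient on its own shard of the data, and the driver averages the shards' gradients into a single update. A toy Python version for a 1-D least-squares model follows – illustrative only, not BigDL's or Spark's actual implementation:

```python
def gradient(w, shard):
    # dL/dw for L = mean((w*x - y)^2) over this worker's data shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_step(w, shards, lr=0.01):
    # Each simulated "worker" computes a local gradient;
    # the driver averages them into one update.
    grads = [gradient(w, shard) for shard in shards]
    return w - lr * sum(grads) / len(grads)

# Data generated from y = 3x, split round-robin across 4 simulated workers.
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(100):
    w = distributed_step(w, shards)
print(round(w, 3))  # converges to 3.0
```

The averaging step is exactly the communication that a real cluster must perform each iteration, which is why access to fast interconnects and convenient distribution facilities matters so much in practice.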
Change is a constant in the world of business. That is why, at any given point in modern history, enterprises are dealing with some "transformational trend" or other. In the 80s and 90s it was computing; in the 2000s it was the Internet, followed by Mobility, Cloud, and now the latest mantra: Digitalization. In a way, it is this constant need to evolve, change, and improve on the status quo, with all the tools available to us, that defines us as a human race. The only way is forward, and progress is limited only by our own inventiveness.
In this special guest feature from Scientific Computing World, David Yip, HPC and Storage Business Development at OCF, provides his take on the place of GPU technology in HPC. There was an interesting story published earlier this week in which NVIDIA's founder and CEO, Jensen Huang, said: 'As advanced parallel-instruction architectures for CPU can be barely worked out by designers, GPUs will soon replace CPUs'. There are only so many processing cores you can fit on a single CPU chip. Some optimized applications take advantage of multiple cores, but CPUs are typically used for sequential, serial processing (although Intel is doing an excellent job of adding more and more cores to its CPUs and getting developers to program multicore systems). By contrast, a GPU has a massively parallel architecture consisting of many thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
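The architectural contrast can be shown schematically: a CPU-style loop walks the data one element at a time on one powerful core, while the GPU model assigns one lightweight thread per element and runs them concurrently. A toy Python simulation of the two styles follows (real GPU code would be written in CUDA or OpenCL; a thread pool here merely mimics the one-thread-per-element idea):

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_style(data, f):
    # One powerful core: process elements sequentially, in order.
    out = []
    for x in data:
        out.append(f(x))
    return out

def gpu_style(data, f, workers=8):
    # Many small cores: each element is handed to its own worker,
    # mimicking the GPU's one-thread-per-element execution model.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, data))  # results come back in input order

square = lambda x: x * x
data = list(range(8))
print(cpu_style(data, square))  # [0, 1, 4, 9, 16, 25, 36, 49]
print(gpu_style(data, square))  # same result, computed concurrently
```

Both styles produce identical results; the difference is purely in how the work is scheduled, which is why data-parallel workloads like those in deep learning map so naturally onto GPUs.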