"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
ML Model Explainability (sometimes referred to as Model Interpretability or ML Model Transparency) is a fundamental pillar of AI Quality. It is impossible to trust a machine learning model without understanding how and why it makes its decisions, and whether these decisions are justified. Peering into ML models is absolutely necessary before deploying them in the wild, where a poorly understood model can not only fail to achieve its objective, but also cause negative business or social impacts, or encounter regulatory trouble. Explainability is also an important backbone to other trustworthy ML pillars like fairness and stability. Yet "explainability" is often a broad and sometimes confusing concept.
This article is intended for data scientists who are considering deep learning algorithms and want to know more about the drawbacks of implementing these types of models in their work. Deep learning algorithms have many benefits, are powerful, and can be fun to show off. However, there are times when you should avoid them. I will discuss those cases below, so keep reading if you would like a deeper dive into deep learning. Because other algorithms have been around longer, they have extensive documentation, including examples and functions that make interpretability easier.
Cloud adoption is accelerating fast as enterprises modernize. But are there better ways of utilizing the full potential of cloud computing? Leaving behind the constraints of a single cloud computing platform, you will find other arrangements such as hybrid and multi-cloud computing. The annual RightScale State of the Cloud Report suggests that 90% of respondents believe multi-cloud is already the most common pattern among businesses and enterprises. So, let's delve into understanding multi-cloud for modern enterprises.
I'm sure you've heard it somewhere. AI can detect skin cancer!… AI can beat the champion of Go!… Many people, including myself, believe that Artificial Intelligence (AI) is going to be the next big thing to take our society by storm in the coming years. Yet, what if we have the wrong forecast?
Machine Learning offers a wide variety of dimensionality reduction techniques, and dimensionality reduction is one of the most important topics in the Data Science field. In this article, I will present one of the most widely used of these techniques: Principal Component Analysis (PCA). But first, we need to understand what dimensionality reduction is and why it is so crucial. Dimensionality reduction, also known as dimension reduction, is the transformation of data from a high-dimensional space to a low-dimensional space in such a way that the low-dimensional representation retains meaningful properties of the original data, ideally with a dimensionality close to the data's intrinsic dimension.
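To make the idea concrete, here is a minimal sketch of PCA via the singular value decomposition: center the data, compute the SVD, and project onto the directions of greatest variance. The function name `pca` and the toy data are my own illustration, not code from the article; a real workflow would typically use a library implementation such as scikit-learn's `PCA`.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top n_components principal components.

    A minimal sketch: center the data, take the SVD of the centered
    matrix, and keep the directions of greatest variance.
    """
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]
    return X_centered @ components.T

# Toy data: 3-D points that lie near a 2-D plane, so two
# components capture almost all of the variance.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 3)) + rng.normal(scale=0.01, size=(100, 3))
X_reduced = pca(X, n_components=2)
print(X_reduced.shape)  # (100, 2)
```

The projection keeps the two coordinates along which the cloud of points spreads the most, which is exactly the "meaningful properties" that dimensionality reduction tries to preserve.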
OpenAI has recently published an important work focused on the alignment problem: the problem of ensuring that general-purpose AI and machine learning systems align with human intentions. The "Paperclip Maximizer" is a famous example of alignment gone wrong. To test scalable alignment methods, OpenAI trained a model to summarize entire books, as described in their blog on KDnuggets: Scaling human oversight of AI systems for difficult tasks – OpenAI approach. The OpenAI model works by first summarizing small sections of a book, then summarizing those summaries into a higher-level summary, and so on. The results were impressive, so we asked OpenAI to summarize two top KDnuggets blogs from last year; here are the summaries.
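The "summarize, then summarize the summaries" pattern can be sketched in a few lines. This is a toy illustration of the recursive structure only, not OpenAI's code: the `summarize` stand-in just keeps the first sentence of its input, where a real system would call a trained language model.

```python
def summarize(text: str) -> str:
    # Stand-in for a model call: keep only the first sentence.
    return text.split(". ")[0].rstrip(".") + "."

def recursive_summarize(chunks, group_size=3):
    """Summarize each chunk, then group the summaries and recurse
    until a single top-level summary remains."""
    summaries = [summarize(c) for c in chunks]
    if len(summaries) == 1:
        return summaries[0]
    grouped = [
        " ".join(summaries[i:i + group_size])
        for i in range(0, len(summaries), group_size)
    ]
    return recursive_summarize(grouped, group_size)
```

Each pass shrinks the text by roughly a factor of `group_size`, which is what lets the approach scale from book sections up to a whole-book summary.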
Last year, Charlene Xia '17, SM '20 found herself at a crossroads. She was finishing up her master's degree in media arts and sciences from the MIT Media Lab and had just submitted applications to doctoral degree programs. All Xia could do was sit and wait. In the meantime, she narrowed down her career options, regardless of whether she was accepted to any program. "I had two thoughts: I'm either going to get a PhD to work on a project that protects our planet, or I'm going to start a restaurant," recalls Xia.
We have already covered how AI is integral to Alphabet, but we left out Google. As AI starts to power all Google products, Google deserves its own focus. We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. From smartphone assistants to image recognition and translation, a myriad of AI features hides within the Google apps you use daily.