Education


10 Ways AI Is Transforming Enterprise Software - InformationWeek

#artificialintelligence

If you are currently in the market for almost any kind of enterprise software, you will almost certainly run across at least one vendor claiming that its product includes artificial intelligence (AI) capabilities. Of course, some of these claims are no more than marketing hyperbole, or "AI washing." However, in many cases, software makers truly are integrating new capabilities related to analytics, vision, natural language, or other areas that deserve the AI label. The market researchers at IDC have gone so far as to call AI "inescapable." Similarly, Omdia Tractica predicted that worldwide revenue from AI software will climb from $10.1 billion in 2018 to $126.0 billion in 2025, led in large part by advancements in deep learning technology.


A Beginner's Guide to Face Recognition with OpenCV in Python - Sefik Ilkin Serengil

#artificialintelligence

OpenCV has become a de facto standard for image processing work, and the library also offers some legacy techniques for face recognition. Local binary patterns histograms (LBPH), EigenFace, and FisherFace methods are all included in the package. These conventional face recognition algorithms are no longer state of the art; nowadays, CNN-based deep learning approaches outperform these older methods.
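As a rough illustration of the legacy API the article refers to, here is a minimal LBPH sketch. It assumes the opencv-contrib-python package is installed and uses random arrays as stand-ins for pre-cropped grayscale face images; in a real pipeline these would come from a face-detection and cropping step.

```python
# Minimal LBPH face recognition sketch with OpenCV (assumes opencv-contrib-python).
# train_faces, train_labels, and test_face are placeholders for real data.
import cv2
import numpy as np

# Placeholder data: random 100x100 grayscale "faces" with integer identity labels.
train_faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
train_labels = np.array([0, 0, 1, 1], dtype=np.int32)
test_face = train_faces[0]

# Create and train the Local Binary Patterns Histograms recognizer.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(train_faces, train_labels)

# predict() returns the best-matching label and a distance-like confidence
# (lower means a closer match).
label, confidence = recognizer.predict(test_face)
print(f"predicted id: {label}, confidence: {confidence:.1f}")
```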


The Complete Supervised Machine Learning Models in Python

#artificialintelligence

The Complete Supervised Machine Learning Models in Python, 4.6 (46 ratings). Course ratings are calculated from individual students' ratings and a variety of other signals, such as the age and reliability of each rating, to ensure that they reflect course quality fairly and accurately. In this course, you are going to learn all types of supervised machine learning models, implemented in Python. The math behind every model is very important; without it, you can never become a good data scientist. That is why I have covered the math behind every model in the intuition part of each model.
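For a concrete flavour of what "a supervised model implemented in Python" looks like, here is a minimal sketch using scikit-learn's logistic regression on a built-in dataset. The dataset and model choice are illustrative only and are not taken from the course.

```python
# A minimal supervised learning sketch: fit one classifier and report test accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)   # higher max_iter so the solver converges
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```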


Google to invest Rs 75,000 cr to boost Digital India drive - Express Computer

#artificialintelligence

Google and Alphabet CEO Sundar Pichai has announced a Google for India Digitisation Fund, through which the company will invest Rs 75,000 crore, or approximately $10 billion, over the next five to seven years to drive digital transformation in the country. "We'll do this through a mix of equity investments, partnership investments, and operational, infrastructure and ecosystem investments. This is a reflection of our confidence in the future of India and its digital economy," Pichai said during the Google for India virtual conference. The fund will focus on four areas that are important to India's digitisation. These include enabling affordable access to the internet and to information for every Indian in their own language, and building new products and services that are deeply relevant to India's unique needs, including consumer tech, education, health and agriculture.


Machine learning will help to grow artificial organs – IAM Network

#artificialintelligence

Researchers in Moscow and the United States have discovered how to use machine learning to help grow artificial organs, with an initial focus on tackling blindness. Researchers from the Moscow Institute of Physics and Technology, the Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during the process of their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify the cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and for conducting research into new drugs. The study was published in Frontiers in Cellular Neuroscience. How would this enable easier organ growth? It would allow the applications of the technology to expand into multiple fields, including drug discovery and the development of cell replacement therapies to treat blindness. In multicellular organisms, the cells making up different organs and tissues are not the same.


Embrace Uncertainty in Machine Learning Models to Maximize Business Value - Covail

#artificialintelligence

"All models are wrong, but some are useful." As this famous aphorism from George Box reminds us, no model is ever going to be 100% accurate, and if one appears to be, run for the hills! Rather, models should be evaluated by their impact on the bottom line, or how useful they are to the business. In this blog post, we will explore a way to make models more useful: embracing and leveraging uncertainty to maximize business results. Much of the time, business users want a single number to represent the 'goodness' of a model, but machine learning models can tell us much more than a single number like accuracy.
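As a hedged illustration of "more than a single number", the sketch below inspects per-prediction class probabilities from a scikit-learn classifier and flags low-confidence predictions for special handling. The synthetic data, the classifier choice, and the 0.6 threshold are purely illustrative and not taken from the blog post.

```python
# Going beyond one accuracy number: use predicted probabilities to flag
# uncertain predictions so they can be routed to a human or a fallback rule.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = clf.predict_proba(X_test)       # class probabilities, not just hard labels
confidence = proba.max(axis=1)          # how sure the model is for each prediction
uncertain = confidence < 0.6            # illustrative business threshold

print("overall accuracy:", clf.score(X_test, y_test))
print("fraction flagged as uncertain:", uncertain.mean())
```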


Boundary thickness and robustness in learning models

#artificialintelligence

Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., as measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as against so-called out-of-distribution (OOD) transforms, and we show that many commonly used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to so-called mixup training.
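Since the abstract ties boundary thickness to mixup training, the following minimal sketch shows the standard mixup augmentation step (convex combinations of paired inputs and their one-hot labels). It is a generic illustration under assumed toy shapes and Beta parameter, not code from the paper.

```python
# Generic mixup-style augmentation: each example is replaced by a convex
# combination of itself and a randomly paired partner, labels included.
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, rng=None):
    """Mix a batch of inputs and one-hot labels with a Beta(alpha, alpha) weight."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))        # pair each example with a random partner
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Toy usage: 8 examples with 32 features and 3 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))
y = np.eye(3)[rng.integers(0, 3, size=8)]
x_mix, y_mix = mixup_batch(x, y, rng=rng)
print(x_mix.shape, y_mix.shape)
```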


Fujitsu Develops AI Tech for High-Dimensional Data Without Labeled Training Data

#artificialintelligence

In recent years, there has been a surge in demand for AI-driven big data analysis across various business fields. AI is also expected to help detect anomalies in data, revealing things like unauthorized attempts to access networks, or abnormalities in medical data such as thyroid values or arrhythmia readings. The data used in many business operations is high-dimensional. As the number of dimensions increases, the complexity of the calculations required to accurately characterize the data grows exponentially, a phenomenon widely known as the "Curse of Dimensionality"(1). In recent years, reducing the dimensions of the input data using deep learning has been identified as a promising way to avoid this problem. However, because the dimensions are reduced without considering the data's distribution and probability of occurrence after the reduction, the characteristics of the data are not accurately captured, which limits the AI's recognition accuracy and can lead to misjudgments (Figure 1). Solving these problems and accurately capturing the distribution and probability of high-dimensional data remain important issues in the AI field.
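As a generic illustration of the deep-learning-based dimensionality reduction the article describes as a baseline, here is a minimal PyTorch autoencoder sketch that compresses high-dimensional inputs into a low-dimensional code. This is not Fujitsu's technology; all layer sizes and the random data are placeholders.

```python
# Generic autoencoder for dimensionality reduction: encode to a small code,
# decode back, and train to minimize reconstruction error.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)            # low-dimensional representation
        return self.decoder(code), code

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)                  # stand-in batch of high-dimensional data

for _ in range(5):                        # a few toy training steps
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("reconstruction loss:", loss.item())
```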


Is your model overfitting? Or maybe underfitting? An example using a neural network in python

#artificialintelligence

Underfitting means that our ML model can neither model the training data nor generalize to new unseen data. A model that underfits the data will have poor performance on the training data. For example, in a scenario where someone would use a linear model to capture non-linear trends in the data, the model would underfit the data. A textbook case of underfitting is when the model's error on both the training and test sets (i.e. during training and testing) is very high. It is obvious that there is a trade-off between overfitting and underfitting.
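To make the diagnosis concrete, here is a minimal sketch that compares training and test scores of a very small versus a much larger scikit-learn neural network on synthetic data. The data, network sizes, and settings are illustrative only; the rule of thumb from the article is that both scores low suggests underfitting, while a high training score with a much lower test score suggests overfitting.

```python
# Compare train vs. test scores for a tiny and a large MLP to spot
# underfitting (both low) or overfitting (large train/test gap).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for hidden in [(2,), (256, 256)]:         # a very small vs. a much larger network
    mlp = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)
    print(hidden,
          "train:", round(mlp.score(X_train, y_train), 3),
          "test:", round(mlp.score(X_test, y_test), 3))
```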


Take a deep dive into AI with this $35 training bundle

#artificialintelligence

It's not an exaggeration to say that when it comes to the future of human progress, nothing is more important than Artificial Intelligence (AI). Although often associated only with everyday examples such as self-driving cars and Google search rankings, AI is in fact the driving force behind virtually every major and minor technology that is bringing people together and solving humanity's problems. You'd be hard-pressed to find an industry that hasn't embraced AI in some shape or form, and our reliance on this field is only going to grow in the coming years as microchips become more powerful and quantum computing becomes more accessible. So it should go without saying that if you're truly interested in staying ahead of the curve in an AI-driven world, you'll need at least a baseline understanding of the methodologies, programming languages, and platforms used by AI professionals around the world. That can be an understandably intimidating prospect for anyone who doesn't already have years of experience in tech or programming, but the good news is that you can master the basics, and even some of the more advanced elements of AI and its various implications, without spending an obscene amount of time or money on a traditional education.