Machine Learning

Deep Learning at Scale with PyTorch, Azure Databricks, and Azure Machine Learning


PyTorch is a popular open source machine learning framework, well suited to deep learning applications such as computer vision and natural language processing. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. Delta Lake is an open source storage layer that brings reliability to data lakes. Azure Databricks is the first-party Databricks service on Azure, providing massive-scale data engineering and collaborative data science.
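The teaser names the stack but not what deep learning with PyTorch actually looks like. As a minimal, hedged sketch (synthetic data and a toy model, none of it from the article or tied to Databricks), here is a basic PyTorch training loop of the kind such a pipeline would scale up:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic regression data, a toy stand-in for a real dataset.
X = torch.randn(256, 10)
w_true = torch.randn(10, 1)
y = X @ w_true + 0.1 * torch.randn(256, 1)

# A small feed-forward network and a standard optimizer.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

first_loss = None
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()  # loss before any parameter update
final_loss = loss.item()
```

In a distributed setting, the same loop would be wrapped with a data-parallel strategy and its metrics logged to an experiment tracker such as MLflow.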

Fujitsu Develops AI Tech for High-Dimensional Data Without Labeled Training Data


In recent years, there has been a surge in demand for AI-driven big data analysis in various business fields. AI is also expected to help detect anomalies in data, revealing things like unauthorized attempts to access networks, or abnormalities in medical data such as thyroid values or arrhythmia readings. Data used in many business operations is high-dimensional. As the number of dimensions increases, the complexity of the calculations required to accurately characterize the data increases exponentially, a phenomenon widely known as the "Curse of Dimensionality"(1). In recent years, reducing the dimensions of input data with deep learning has emerged as a promising way to avoid this problem. However, because the dimensions are reduced without considering the data's distribution and probability of occurrence after the reduction, the characteristics of the data are not accurately captured, which limits the AI's recognition accuracy and can lead to misjudgments (Figure 1). Solving these problems and accurately capturing the distribution and probability of high-dimensional data remains an important open issue in the AI field.
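The "Curse of Dimensionality" the article cites has a concrete, checkable symptom: as dimension grows, distances between random points concentrate, so the contrast between the nearest and farthest neighbour shrinks and distance-based analysis loses discriminative power. A small numeric illustration (my own toy example, not Fujitsu's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n=500):
    """Relative spread between farthest and nearest neighbour distances
    from a random query to n uniform points in the unit hypercube."""
    points = rng.random((n, dim))
    query = rng.random(dim)
    d = np.linalg.norm(points - query, axis=1)
    return (d.max() - d.min()) / d.min()

low_dim_contrast = distance_contrast(2)      # large: neighbours are distinguishable
high_dim_contrast = distance_contrast(1000)  # small: all points look equally far
```

This concentration effect is one reason dimensionality reduction is attractive, and also why a reduction that ignores the data's distribution, as the article warns, can discard exactly the structure the AI needs.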

Approximation spaces of deep neural networks


We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when increasing the complexity budget. Using results from classical approximation theory, we show that this class can be endowed with a (quasi)-norm that makes it a linear function space, called approximation space. We establish that allowing the networks to have certain types of "skip connections" does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, if these networks are sufficiently deep.
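The abstract's construction follows the classical approximation-theory pattern it references. As a sketch of the standard definitions (notation assumed for illustration, not copied from the paper):

```latex
% Error of best approximation by networks of complexity at most n,
% where \Sigma_n is the set of networks with at most n connections (or neurons):
E_n(f) \;=\; \inf_{g \in \Sigma_n} \| f - g \|.

% Approximation space with rate \alpha > 0 and fine index q, equipped with the quasi-norm
\| f \|_{A^\alpha_q}
  \;=\; \Big( \sum_{n \ge 1} \big[\, n^\alpha \, E_n(f) \,\big]^q \, \tfrac{1}{n} \Big)^{1/q},
\qquad
A^\alpha_q \;=\; \bigl\{\, f : \| f \|_{A^\alpha_q} < \infty \,\bigr\}.
```

Membership in $A^\alpha_q$ thus encodes that the best-approximation error decays roughly like $n^{-\alpha}$ as the complexity budget $n$ grows, which is the decay-rate class the abstract describes.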

Is your model overfitting? Or maybe underfitting? An example using a neural network in python


Underfitting means that our ML model can neither model the training data nor generalize to new, unseen data. A model that underfits will have poor performance even on the training data. For example, using a linear model to capture non-linear trends in the data will underfit it. The textbook symptom of underfitting is that the model's error is high on both the training and test sets. There is thus a trade-off between overfitting and underfitting.
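The linear-model-on-nonlinear-data scenario described above can be reproduced in a few lines. This is a generic sketch (synthetic sine data, models of my choosing, not necessarily the article's exact example): the linear model shows the textbook underfitting symptom, high error on the training data itself, while a small neural network fits the trend:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=400)  # non-linear trend + noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear model underfits: it cannot bend to the sine shape,
# so its error is high on BOTH splits.
linear = LinearRegression().fit(X_train, y_train)
lin_train_mse = mean_squared_error(y_train, linear.predict(X_train))
lin_test_mse = mean_squared_error(y_test, linear.predict(X_test))

# A small neural network has enough capacity to capture the trend.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X_train, y_train)
mlp_train_mse = mean_squared_error(y_train, mlp.predict(X_train))
```

Comparing `lin_train_mse` against `mlp_train_mse` makes the diagnosis concrete: high training error is the giveaway of underfitting, not just high test error.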

Supercharge Your Shallow ML Models With Hummingbird


Since the most recent resurgence of deep learning in 2012, the lion's share of new ML libraries and frameworks have been created. The ones that have stood the test of time (PyTorch, TensorFlow, ONNX, etc.) are backed by massive corporations and likely aren't going away anytime soon. This also presents a problem, however, as the deep learning community has diverged from popular traditional ML software libraries like scikit-learn, XGBoost, and LightGBM. When it comes time for companies to bring multiple models with different software and hardware assumptions into production, things get…hairy. Using microservices in Kubernetes can solve the design pattern issue to an extent by keeping things decoupled…if that's even what you want?
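Hummingbird's pitch is to close exactly this gap: it compiles trained "shallow" models into tensor computations so they can run on the same deep learning runtimes (e.g. PyTorch) as everything else. The sketch below illustrates the underlying idea by hand for a logistic regression, translating the fitted scikit-learn model into pure tensor algebra; this is a toy illustration of the concept, not Hummingbird's actual API (which, as I understand it, automates this for many model types via a single `convert` call; check its docs):

```python
import torch
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a "shallow" scikit-learn model.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

# Re-express its prediction as tensor algebra (what Hummingbird automates):
W = torch.tensor(clf.coef_)        # (1, 5) weight matrix
b = torch.tensor(clf.intercept_)   # (1,) bias
Xt = torch.tensor(X)

logits = Xt @ W.T + b                         # batched, CPU- or GPU-friendly
tensor_pred = (torch.sigmoid(logits) > 0.5).numpy().ravel().astype(int)
```

Once the model lives as tensor ops, it inherits the deep learning stack's batching, hardware acceleration, and serving tooling, which is how Hummingbird lets one production path serve both worlds.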

ICML 2020 Test of Time award


The International Conference on Machine Learning (ICML) Test of Time award is given to a paper from ICML ten years ago that has had significant impact. This year the award goes to Niranjan Srinivas, Andreas Krause, Sham Kakade and Matthias Seeger for their work "Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design". This paper brought together the fields of Bayesian optimization, bandits and experimental design by analyzing Gaussian process bandit optimization, giving a novel approach to derive finite-sample regret bounds in terms of a mutual information gain quantity. This paper has had profound impact over the past ten years, including the method itself, the proof techniques used, and the practical results. These have all enriched our community by sparking creativity in myriad subsequent works, ranging from theory to practice.
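The awarded paper's central algorithm, GP-UCB, picks each query point by maximizing an upper confidence bound built from a Gaussian process posterior: mean plus a multiple of the predictive standard deviation. A minimal sketch of that loop on a toy 1-D reward function (my own illustration using scikit-learn's GP, with an ad hoc constant exploration weight rather than the paper's theoretically derived schedule):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    return -(x - 2.0) ** 2  # unknown reward, maximized at x = 2

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 5.0, 101).reshape(-1, 1)

# Start from one random observation, then select points by UCB.
X_obs = [candidates[rng.integers(len(candidates))]]
y_obs = [f(X_obs[0][0])]
beta = 2.0  # exploration weight (the paper derives a schedule for this)

for t in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=1e-6, normalize_y=True)
    gp.fit(np.array(X_obs), np.array(y_obs))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + beta * std           # optimism in the face of uncertainty
    x_next = candidates[int(np.argmax(ucb))]
    X_obs.append(x_next)
    y_obs.append(f(x_next[0]))

best_x = float(X_obs[int(np.argmax(y_obs))][0])
```

The paper's contribution was to bound the cumulative regret of this kind of rule in terms of the mutual information gained about f, tying the bandit view to experimental design.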

Machine Learning Can Help Detect Misinformation Online


As social media is increasingly being used as people's primary source for news online, there is a rising threat from the spread of malign and false information. With an absence of human editors in news feeds and a growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyber bullying activity, terrorist propaganda, and political reputation boosting or smearing campaigns.
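The article does not detail RAND Europe's method, but the generic machine-learning approach to this kind of detection is supervised text classification: learn from labeled examples which linguistic features mark junk or malign content. A deliberately tiny, hand-made sketch of that pattern (the corpus, labels, and model choice are all my own illustration, not the study's):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; label 1 = junk/malign style, 0 = neutral reporting.
docs = [
    "SHOCKING secret the government does not want you to know",
    "You won't believe this miracle cure doctors hate",
    "Share before they delete this explosive hidden truth",
    "Anonymous insiders expose outrageous conspiracy cover up",
    "The committee published its quarterly budget report today",
    "Researchers released a peer reviewed study on vaccination rates",
    "The council approved funding for a new public library",
    "Officials announced the election results after counting votes",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)
train_acc = model.score(docs, labels)
```

A real system would need large, carefully labeled datasets, evaluation on held-out data, and features beyond word frequencies (network behaviour, account activity), which is where studies like this one focus their effort.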

How To Create An AI (Artificial Intelligence) Model


Lemonade is one of this year's hottest IPOs, and a key reason is the company's heavy investment in AI (Artificial Intelligence). The company has used this technology to develop bots that handle the purchase of policies and the management of claims. So how does a company like this create AI models? Well, as should be no surprise, it is complex and susceptible to failure.

Take a deep dive into AI with this $35 training bundle


It's not an exaggeration to say that when it comes to the future of human progress, nothing is more important than Artificial Intelligence (AI). Although often associated only with everyday applications such as self-driving cars and Google search rankings, AI is in fact the driving force behind virtually every major and minor technology that's bringing people together and solving humanity's problems. You'd be hard-pressed to find an industry that hasn't embraced AI in some shape or form, and our reliance on this field is only going to grow in the coming years, as microchips become more powerful and quantum computing becomes more accessible. So it should go without saying that if you're truly interested in staying ahead of the curve in an AI-driven world, you'll need at least a baseline understanding of the methodologies, programming languages, and platforms used by AI professionals around the world. This can be an understandably intimidating reality for anyone who doesn't already have years of experience in tech or programming, but the good news is that you can master the basics, and even some of the more advanced elements of AI, without spending an obscene amount of time or money on a traditional education.

Council Post: AI In Lending: Fad Or Future?


Dmitry Dolgorukov is the Co-Founder and CRO of HES Fintech, a leader in providing financial institutions with intelligent lending platforms. Artificial intelligence has survived the early stages of the maturity cycle and reached the plateau of productivity, to the extent that Andrew Ng claimed, "AI is the new electricity." Stanford University research indicates that the number of active AI-based startups increased by 1,400% between 2000 and 2017. In this regard, Forbes cites research findings revealing that AI-associated startups attract up to 50% more funding than "ordinary" technology companies. If you want an analogy, it's the Gold Rush, but digital.