Regression during model updates


Suppose you have a prediction system h1 (for example, a photo tagger) whose output is consumed in the real world (for example, tagging the photos on your phone). You then train a system h2 whose aggregate metrics suggest it is better than h1. Consider an unlabeled dataset D of examples (a pool of all user photos). A prediction update is the process in which h2 is used to score the examples in D and replace the predictions previously provided by h1. The problem is that even though h2 is better than h1 globally, we have not determined whether h2 is significantly worse for some users or for specific patterns of examples.
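One common way to quantify this kind of regression is the "negative flip" rate: the fraction of examples that h1 got right but h2 gets wrong, computed overall and per user. The sketch below is a minimal illustration of that idea, assuming a labeled audit set with per-example group (user) identifiers; the function names and data shapes are ours, not from the article.

```python
import numpy as np

def negative_flip_rate(y_true, pred_h1, pred_h2):
    """Fraction of examples that h1 classified correctly but h2 gets wrong."""
    y_true, pred_h1, pred_h2 = map(np.asarray, (y_true, pred_h1, pred_h2))
    flips = (pred_h1 == y_true) & (pred_h2 != y_true)
    return float(flips.mean())

def per_group_flip_rate(y_true, pred_h1, pred_h2, groups):
    """Negative flip rate computed separately per group (e.g., per user),
    to surface users for whom the update is a regression even when the
    aggregate metric improves."""
    y_true, pred_h1, pred_h2, groups = map(
        np.asarray, (y_true, pred_h1, pred_h2, groups))
    return {
        g: negative_flip_rate(y_true[groups == g],
                              pred_h1[groups == g],
                              pred_h2[groups == g])
        for g in np.unique(groups)
    }
```

A near-zero aggregate flip rate can still hide a high per-group rate, which is exactly the failure mode the paragraph above describes.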

Apple considers using ML to make augmented reality more useful


A patent from Apple suggests the company is considering how machine learning can make augmented reality (AR) more useful. Most current AR applications are somewhat gimmicky, with barely a handful having achieved any form of mass adoption. Apple's decision to introduce LiDAR in its recent devices has given AR a boost, but it's clear that more needs to be done to make applications more useful. A newly filed patent suggests that Apple is exploring how machine learning can be used to automatically (or "automagically," the company would probably say) detect objects in AR. The first proposed use of the technology would be for Apple's own Measure app. Measure's previously dubious accuracy improved greatly after Apple introduced LiDAR, but most people probably just grabbed an actual tape measure unless they were truly stuck without one available.

DEELIG: A Deep Learning Approach to Predict Protein-Ligand Binding Affinity - Docwire News


Protein-ligand binding prediction has extensive biological significance. Binding affinity helps in understanding the degree of protein-ligand interaction and is a useful measure in drug design. Protein-ligand docking using virtual screening and molecular dynamics simulations is required to predict the binding affinity of a ligand to its cognate receptor, and performing such analyses across the entire chemical space of small molecules requires intense computational power. Recent developments in deep learning have enabled us to make sense of massive, complex data sets; the strength of the approach lies in the model's ability to "learn" intrinsic patterns in the data.

Real-time Interpretation: The next frontier in radiology AI - MedCity News


In the nine years since AlexNet spawned the age of deep learning, artificial intelligence (AI) has made significant technological progress in medical imaging, with more than 80 deep-learning algorithms approved by the U.S. FDA since 2012 for clinical applications in image detection and measurement. A 2020 survey found that more than 82% of imaging providers believe AI will improve diagnostic imaging over the next 10 years, and the market for AI in medical imaging is expected to grow 10-fold in the same period. Despite this optimistic outlook, AI still falls short of widespread clinical adoption in radiology. A 2020 survey by the American College of Radiology (ACR) revealed that only about a third of radiologists use AI, mostly to enhance image detection and interpretation; of the two-thirds who did not use AI, the majority said they saw no benefit to it. In fact, most radiologists would say that AI has not transformed image reading or improved their practices.

This useful AI-powered writing tool is on sale for 94% off


TL;DR: A lifetime subscription to the Rytr AI Writing Tool is on sale for £54.53 as of July 25, saving you 94% on the list price. Rytr is an intuitive, AI-powered writing tool that can create high-quality content for you. All you have to do is feed it some relevant information -- like topic, tone, and format -- and in return, it will deliver content to fit your needs. Rytr is easily accessible in any browser (via desktop or mobile). Once you open it up, you'll choose your use case from over 25 different categories, including emails, Facebook ads, blog text, landing pages, captions, product descriptions, taglines, headlines, and more.

Tenure track Assistant Professor in Machine Learning


An applicant who has been awarded a doctoral degree, or who has equivalent academic expertise, is qualified for this appointment. Priority shall be given to candidates who were awarded their doctoral degree, or achieved equivalent academic expertise, no more than five years before the application deadline for employment as assistant professor. A person who was awarded a doctoral degree, or achieved equivalent expertise, at an earlier date may nevertheless be considered in special circumstances. Special circumstances here include sick leave, parental leave, and other similar situations.

Grounds for assessment

As grounds for assessment when appointing an assistant professor, the level of proficiency required to qualify for the appointment shall apply.

Why our fears of job-killing robots are overblown


The more general point is that computer algorithms will have a devil of a time predicting which jobs are most at risk of being replaced by computers, since they have no comprehension of the skills required to do a particular job successfully. In one study that was widely covered (including by The Washington Post, The Economist, Ars Technica, and The Verge), Oxford University researchers used the U.S. Department of Labor's O*NET database, which assesses the importance of various skill competencies for hundreds of occupations. For example, using a scale of 0 to 100, O*NET gauges finger dexterity to be more important for dentists (81) than for locksmiths (72) or barbers (60). The Oxford researchers then coded each of 70 occupations as either automatable or not and correlated these yes/no assessments with O*NET's scores for nine skill categories. Using these statistical correlations, the researchers then estimated the probability of computerization for 702 occupations.
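The procedure described above amounts to fitting a classifier on a small hand-coded sample and extrapolating probabilities to every occupation. The sketch below is a toy reconstruction of that design under stated assumptions: a plain gradient-descent logistic regression on a single skill score, with entirely made-up numbers (they are NOT real O*NET data, and the actual study used nine skill categories and a Gaussian process classifier rather than this simplified model).

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Plain gradient-descent logistic regression (no regularization)."""
    X = np.column_stack([np.ones(len(X)), X])      # add a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)           # average gradient step
    return w

def automation_probability(w, skill_scores):
    """Estimated probability of computerization for one occupation."""
    x = np.concatenate([[1.0], skill_scores])
    return float(1.0 / (1.0 + np.exp(-x @ w)))

# Hand-coded training sample: finger-dexterity score (scaled to 0-1) and a
# yes/no automatable label for six hypothetical occupations, assuming that
# high-dexterity work is harder to automate.
dexterity = np.array([[0.20], [0.30], [0.40], [0.60], [0.72], [0.81]])
automatable = np.array([1, 1, 1, 0, 0, 0])

w = fit_logistic(dexterity, automatable)
print(automation_probability(w, [0.25]))   # low-dexterity job: high estimate
print(automation_probability(w, [0.75]))   # high-dexterity job: low estimate
```

The critique in the paragraph above applies directly to this setup: the model only sees numeric skill scores, so any skill that matters for a job but is poorly captured by those scores is invisible to the extrapolation.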

Artificial intelligence, its diffusion and uses in manufacturing


Using artificial intelligence (AI) and other digital technologies in manufacturing, and other areas of production, is essential for raising labour …

AI strawberries and blockchain chicken: how digital agriculture could rescue global food security


In May 2020, with technical support from the UN FAO, China Agricultural University and Chinese e-commerce platform Pinduoduo hosted a "smart agriculture competition". Three teams of top strawberry growers – the Traditional teams – and four teams of scientific AI experts – the Technology teams – took part in a strawberry-growing competition in the province of Yunnan, China, billed as an agricultural version of the historic match between Go champion Lee Sedol and Google DeepMind's AlphaGo. At the beginning, the Traditional teams were expected to draw best practices from their collective planting and agricultural experience. And they did – for a while. They led in efficient production for a few months before the Technology teams gradually caught up, employing internet-enabled devices (such as intelligent sensors), data analysis and fully digital greenhouse automation.

SambaNova Systems and ScaleWorX Enter Historic Partnership to Drive Artificial Intelligence …


ScaleWorX, which provides businesses with infrastructure solutions for data-intensive computing and AI, will now offer innovative and best-in-class AI …