Our second example deals with a more challenging problem: the recognition of hand-printed letters of the alphabet. The characters that people print in the ordinary course of filling out forms and questionnaires are surprisingly varied. Gaps abound where continuous lines might be expected; curves and sharp angles appear interchangeably; there is almost every imaginable distortion of slant, shape and size. Even human readers cannot always identify such characters; their error rate is about 3 per cent on randomly selected letters and numbers, seen out of context.
– from Oliver G. Selfridge and Ulric Neisser, "Pattern Recognition by Machine," in Computers & Thought, Edward A. Feigenbaum and Julian Feldman (Eds.), MIT Press, Cambridge, MA, USA, 1963, pp. 8–30.
Research firm Gartner estimates the market for hyperautomation-enabling technologies will reach $596 billion in 2022, up nearly 24% from $481.6 billion in 2020. Gartner expects significant growth for technology that enables organizations to rapidly identify, vet, and automate as many processes as possible, and says it will become a "condition of survival" for enterprises. Hyperautomation-enabling technologies include robotic process automation (RPA), low-code application platforms (LCAP), AI, and virtual assistants. As organizations look for ways to automate the digitization and structuring of data and content, technologies that automate content ingestion, such as signature verification tools, optical character recognition, document ingestion, conversational AI, and natural language technology (NLT), will be in high demand. For example, these tools could be used to automate the process of digitizing and sorting paper records.
Many of the most successful and widely used machine learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million people in the US alone earn money each month by doing work on these platforms. Around 250,000 of them earn at least three quarters of their income this way. Yet although many of them work for some of the richest AI labs in the world, they are paid below minimum wage and given no opportunities to develop their skills.
The magnitude of the Covid-19 pandemic will largely depend on how quickly safe and effective vaccines and treatments can be developed and tested. Many assume a widely available vaccine is years away, if it arrives at all. Others believe that a 12- to 18-month development cycle is a given. Our best bet to reduce even that record-breaking timeline is to use artificial intelligence. The problem is twofold: discovering the right set of molecules among billions of possibilities, and then waiting for clinical trials. These processes ordinarily take several years, but AI holds the key to radically shortening both.
Machine learning has advanced radically over the past 10 years, and machine learning algorithms now achieve human-level performance or better on a number of tasks, including face recognition [31], optical character recognition [8], object recognition [29], and playing the game Go [26]. Yet machine learning algorithms that exceed human performance in naturally occurring scenarios often fail dramatically when an adversary is able to modify their input data, even subtly. Machine learning is already used for many highly important applications and will be used in even more applications of even greater importance in the near future. Search algorithms, automated financial trading algorithms, data analytics, autonomous vehicles, and malware detection are all critically dependent on the underlying machine learning algorithms that interpret their respective domain inputs to provide intelligent outputs that facilitate the decision-making process of users or automated systems. As machine learning is used in more contexts where malicious adversaries have an incentive to interfere with the operation of a given machine learning system, it is increasingly important to provide protections, or "robustness guarantees," against adversarial manipulation. The modern generation of machine learning services is a result of nearly 50 years of research and development in artificial intelligence, the study of computational algorithms and systems that reason about their environment to make predictions [25]. Most modern machine learning, a subfield of artificial intelligence, can as used in production essentially be understood as applied function approximation: when there is some mapping from an input x to an output y that is difficult for a programmer to describe through explicit code, a machine learning algorithm can learn an approximation of the mapping by analyzing a dataset containing several examples of inputs and their corresponding outputs.
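The "machine learning as function approximation" framing above can be illustrated with a minimal, self-contained sketch: rather than a deep network, it fits a one-parameter-per-coefficient linear model by closed-form least squares to example (x, y) pairs drawn from a hidden mapping. The data and the hidden mapping y = 2x + 1 are invented for illustration; the source does not describe a specific model.

```python
# Minimal sketch: learn an approximation of an unknown mapping
# from example (input, output) pairs, as described in the text.

def fit_linear(xs, ys):
    """Closed-form least-squares fit of a 1-D linear model y ~ w*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var            # slope that minimizes squared error
    b = mean_y - w * mean_x  # intercept that minimizes squared error
    return w, b

# Hypothetical dataset: examples drawn from the hidden mapping y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = fit_linear(xs, ys)
prediction = w * 10.0 + b    # use the learned approximation on an unseen input
```

The same idea scales up: a deep network replaces the linear form with a far more flexible function family, but the workflow of fitting parameters to example input-output pairs is unchanged.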
Google's image-classification system, Inception, has been trained with millions of labeled images [28]. It can classify images as cats, dogs, airplanes, boats, or more complex concepts, with accuracy on par with or better than that of humans. Increases in the size of machine learning models and in their accuracy are the result of recent advances in machine learning algorithms [17], particularly in deep learning [7]. One focus of the machine learning research community has been on developing models that make accurate predictions, as progress was in part measured by results on benchmark datasets. In this context, accuracy denotes the fraction of test inputs that a model processes correctly: the proportion of images that an object-recognition algorithm recognizes as belonging to the correct class, and the proportion of executables that a malware detector correctly designates as benign or malicious. The estimate of a model's accuracy varies greatly with the choice of the dataset used to compute the estimate.
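The definition of accuracy given above is simple enough to state directly in code. This is a generic sketch with invented class labels; the function name and data are not taken from the source.

```python
# Accuracy as defined in the text: the fraction of test inputs
# that a model processes correctly.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical outputs of an image classifier versus ground truth.
preds = ["cat", "dog", "boat", "cat", "airplane"]
truth = ["cat", "dog", "cat",  "cat", "airplane"]

score = accuracy(preds, truth)  # 4 of 5 correct
```

Note how the final sentence of the passage follows from this definition: swap in a different test set (`truth` and the corresponding inputs) and the same model can score very differently.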
But a new project could change all that. Known as In Codice Ratio, it uses a combination of artificial intelligence and optical-character-recognition (OCR) software to scour these neglected texts and make their transcripts available for the very first time. If successful, the technology could also open up untold numbers of other documents at historical archives around the world. OCR has been used to scan books and other printed documents for years, but it's not well suited for the material in the Secret Archives. Traditional OCR breaks words down into a series of letter-images by looking for the spaces between letters.
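The segmentation step that traditional OCR relies on, splitting a word image wherever a blank column separates two letters, can be sketched in a few lines. This is an illustrative simplification on a toy binary image, not the In Codice Ratio method or any particular OCR engine's implementation.

```python
# Sketch of letter segmentation by whitespace, the step traditional
# OCR uses: scan a binary image column by column and split wherever
# an all-background column (a gap between letters) appears.

def segment_columns(image):
    """image: list of rows of 0 (background) / 1 (ink).
    Returns (start, end) column ranges that contain ink."""
    width = len(image[0])
    ink = [any(row[c] for row in image) for c in range(width)]
    segments, start = [], None
    for c, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = c                      # a letter begins
        elif not has_ink and start is not None:
            segments.append((start, c))    # a gap ends the letter
            start = None
    if start is not None:
        segments.append((start, width))    # letter runs to the edge
    return segments

# Toy image: two one-column "letters" separated by a blank column.
img = [
    [1, 0, 1],
    [1, 0, 1],
]
letters = segment_columns(img)  # [(0, 1), (2, 3)]
```

This also shows why the approach breaks down on the Secret Archives' material: in cursive or degraded handwriting the blank columns between letters simply are not there, so the segmenter returns one long run instead of individual characters.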