Machine Learning


Deploying and Hosting a Machine Learning Model with FastAPI and Heroku

#artificialintelligence

This tutorial looks at how to deploy a machine learning model for predicting stock prices into production on Heroku as a RESTful API using FastAPI.
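As a rough illustration of the pattern the tutorial describes, a FastAPI service can load a serialized model at startup and expose a prediction route; the file name, ticker parameter, and model.predict() signature below are hypothetical stand-ins, not the tutorial's actual code.

    # minimal_api.py -- illustrative sketch only
    import pickle
    from fastapi import FastAPI

    app = FastAPI()

    # Hypothetical pre-trained model serialized alongside the app.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.get("/predict/{ticker}")
    def predict(ticker: str, days: int = 7):
        # model.predict() is assumed to return a list of forecast prices.
        forecast = model.predict(ticker, horizon=days)
        return {"ticker": ticker, "days": days, "forecast": forecast}

    # Run locally with: uvicorn minimal_api:app --reload
    # On Heroku, a Procfile entry such as
    #   web: uvicorn minimal_api:app --host 0.0.0.0 --port $PORT
    # starts the same app.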


Jumio BrandVoice: 5 Ways To Keep AI Bias Out Of Online Identity Verification

#artificialintelligence

When bias becomes embedded in machine learning models, it can have an adverse impact on our daily lives. It's exhibited in the form of exclusion, such as certain groups being denied loans or not being able to use the technology. As AI continues to become more a part of our lives, the risks from bias only grow larger. In the context of facial recognition, demographic traits such as race, age, gender, socioeconomic factors, and even the quality of the camera/device can impact software's ability to compare one face to a database of faces. In these types of surveillance, the quality and robustness of the underlying database is what can fuel bias in the AI models.


Train Your Custom Deep Learning Model in AWS SageMaker

#artificialintelligence

If you are someone like me who does not want to set up an at-home server to train your deep learning model, this article is for you. In that case, cloud-based machine learning infrastructure is likely your best option. I will go over the step-by-step process of how to do this in AWS SageMaker. Amazon SageMaker comes with a good number of pre-trained models. These models are prebuilt Docker images in AWS.
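The core of that step-by-step process typically comes down to pointing a SageMaker Estimator at a training container image and an S3 dataset. The sketch below shows that pattern with the SageMaker Python SDK; the ECR image URI, IAM role ARN, and bucket names are placeholders, not values from the article.

    # Sketch of launching a custom training job with the SageMaker Python SDK.
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()

    estimator = Estimator(
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-dl-image:latest",  # hypothetical ECR image
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical execution role
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        output_path="s3://my-bucket/model-artifacts/",  # hypothetical output bucket
        sagemaker_session=session,
    )

    # Training data staged in S3; SageMaker mounts it inside the container.
    estimator.fit({"training": "s3://my-bucket/training-data/"})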


Fast Oriented Text Spotting with a Unified Network (FOTS)

#artificialintelligence

Text detection and recognition (also known as text spotting) in an image is a useful and challenging problem that deep learning researchers have been working on for many years because of its practical applications in fields like document scanning, robot navigation, and image retrieval. So far, almost all methods have consisted of two separate stages: 1) text detection and 2) text recognition. Text detection finds where text is located in a given image, and text recognition then recognizes the characters within those detected regions. Because of these two stages, two separate models had to be trained, so prediction time was higher and the models were not suitable for real-time applications. In contrast, FOTS solves this two-stage problem with a unified, end-to-end trainable model that detects and recognizes text simultaneously.
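To make the contrast concrete, a unified text spotter shares one convolutional backbone between a detection head and a recognition head, so a single backward pass trains both tasks. The PyTorch-style sketch below only illustrates that structure; it is not the FOTS implementation, and the layer sizes and channel counts are arbitrary.

    # Illustrative shared-backbone text spotter (not the actual FOTS code).
    import torch
    import torch.nn as nn

    class UnifiedTextSpotter(nn.Module):
        def __init__(self, num_chars=95):
            super().__init__()
            # Shared feature extractor used by both tasks.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            )
            # Detection head: per-pixel text score and box geometry (channel count illustrative).
            self.detect_head = nn.Conv2d(128, 5, 1)
            # Recognition head: per-location character logits.
            self.recognize_head = nn.Conv2d(128, num_chars, 1)

        def forward(self, images):
            feats = self.backbone(images)
            detections = self.detect_head(feats)
            char_logits = self.recognize_head(feats)
            # One backward pass updates the shared backbone from both losses,
            # which is what makes the model end-to-end trainable.
            return detections, char_logits

    spotter = UnifiedTextSpotter()
    boxes, chars = spotter(torch.randn(1, 3, 256, 256))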


How COVID-19 Broke AI, And Why AI May Break Again

#artificialintelligence

A woman is frustrated by the answer given by a digital assistant. The last year has seen no shortage of unprecedented circumstances. All aspects of our lives, from work to travel to shopping, have changed. During this massive disruption, we have (unfortunately) learned why ML Ops, the practice of running machine learning (ML) in production and managing the ML lifecycle, should not be an afterthought but rather a critical element of getting value from AI. Figure 1 below shows a simplified example of an AI model in action. The model is first trained on data, past examples of its environment, and then put into the real world to make predictions on new inputs, which are implicitly assumed to be sufficiently similar to the training examples.
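One practical consequence of that assumption is that production ML Ops pipelines monitor whether incoming data still resembles the training data. The toy check below is purely illustrative (synthetic data, an arbitrary threshold) and simply compares one feature's training and live distributions with a two-sample test.

    # Toy drift check: flag a feature whose live distribution diverges from training.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # "past examples of the environment"
    live_feature = rng.normal(loc=0.8, scale=1.3, size=5000)   # shifted post-disruption data

    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:
        print(f"Possible drift detected (KS statistic={stat:.3f}); consider retraining.")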


Machine learning could aid mental health diagnoses: Study - ET CIO

#artificialintelligence

Washington: Researchers from the University of Birmingham have developed a way of using machine learning to accurately identify patients with a mix of psychotic and depressive symptoms. The findings of the research were published in the journal 'Schizophrenia Bulletin'. Patients with depression or psychosis rarely experience symptoms of purely one illness or the other. Historically, this has meant that mental health clinicians give a diagnosis of a 'primary' illness with secondary symptoms. Making an accurate diagnosis is a big challenge for clinicians, and diagnoses often do not accurately reflect the complexity of individual experience or indeed neurobiology.


Applications of Artificial Intelligence for Retinopathy of Prematurity Screening - Docwire News

#artificialintelligence

OBJECTIVES: Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors.
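For readers unfamiliar with the evaluation metrics named in this abstract, the sketch below shows how an area under the ROC curve, sensitivity, and specificity are commonly computed with scikit-learn; the labels, scores, and threshold are made-up stand-ins, not study data.

    # Illustrative computation of AUC, sensitivity, and specificity (synthetic data).
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = treatment-requiring ROP
    y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.4, 0.9, 0.2, 0.6])   # model severity scores

    auc = roc_auc_score(y_true, y_score)

    threshold = 0.5  # arbitrary operating point for illustration
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")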


Artificial Intelligence Begins to Realize Its Potential

#artificialintelligence

In my previous column, I looked at the problem of artificial intelligences forcing hardware to consume too much power, which could lead to an unsustainable spike in demand at data centers in this country by 2025. To test their appetites for power, I employed several advanced artificial intelligences, as well as their close cousins: machine learning, cognitive computing, deep learning, and advanced expert-system technology. For that column, I only measured how much power they consumed, but my original intention was to actually test them out to show some of the innovative things the technology was accomplishing. I am circling back to that effort now. For many years we have been reporting on the technology of artificial intelligence: how it is being built out and made more efficient, and how it can be paired with other technologies, like quantum computing, to become even more accurate.


Google's deep learning finds a critical path in AI chips

#artificialintelligence

Characteristic of many AI chips are parallel, identical processing elements, here called "PEs," for performing masses of simple math operations, chiefly the vector-matrix multiplications that are the workhorse of neural net processing. A year ago, ZDNet spoke with Google Brain director Jeff Dean about how the company is using artificial intelligence to advance its internal development of custom chips that accelerate its software. Dean noted that deep learning forms of artificial intelligence can in some cases make better decisions than humans about how to lay out circuitry on a chip. This month, Google unveiled to the world one of those research projects, called Apollo, in a paper posted on the arXiv preprint server, "Apollo: Transferable Architecture Exploration," and a companion blog post by lead author Amir Yazdanbakhsh. Apollo represents an intriguing development that moves past what Dean hinted at in his formal address a year ago at the International Solid-State Circuits Conference and in his remarks to ZDNet.
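For context on what those PEs actually accelerate, the vector-matrix multiplication mentioned above is the same operation at the heart of a dense neural-network layer; the tiny NumPy sketch below, with arbitrary sizes, shows that workload.

    # The core operation an AI accelerator's PEs parallelize: y = xW + b.
    import numpy as np

    x = np.random.rand(1, 512)     # input activation vector
    W = np.random.rand(512, 1024)  # layer weight matrix
    b = np.random.rand(1024)       # bias

    y = x @ W + b                  # one dense layer = one vector-matrix multiply
    print(y.shape)                 # (1, 1024)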


Hot papers on arXiv from the past month – February 2021

AIHub

Abstract: Conceptual abstraction and analogy-making are key abilities underlying humans' abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.