Information Technology


Detroit police chief cops to 96-percent facial recognition error rate

#artificialintelligence

Detroit's police chief admitted on Monday that facial recognition technology used by the department misidentifies suspects about 96 percent of the time. It's an eye-opening admission given that the Detroit Police Department is facing criticism for arresting a man based on a bogus match from facial recognition software. Last week, the ACLU filed a complaint with the Detroit Police Department on behalf of Robert Williams, a Black man who was wrongfully arrested for stealing five watches worth $3,800 from a luxury retail store. Investigators first identified Williams by doing a facial recognition search with software from a company called DataWorks Plus. Under police questioning, Williams pointed out that the grainy surveillance footage obtained by police didn't actually look like him.


Global Big Data Conference

#artificialintelligence

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists published a blog post arguing that academic researchers should stop pursuing work that tries to predict the likelihood that an individual will commit a criminal act based on variables like crime statistics and facial scans. The post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of face recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the post's authors attribute to inequities in the criminal justice system. The justice system produces biased data, the Coalition for Critical Technology argues, and algorithms trained on that data therefore propagate those biases. The coalition also argues that the very notion of "criminality" is often based on race, so research on these technologies assumes a neutrality in the algorithms that does not in fact exist.


MIT and Toyota release innovative dataset to accelerate autonomous driving research

#artificialintelligence

The following was issued as a joint release from the MIT AgeLab and Toyota Collaborative Safety Research Center. How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations? These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg. Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information. "In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modeling that enables the next generation of assisted driving and automotive safety technologies," says Bryan Reimer, principal researcher.


AI Can Help Make Supply Chains Sustainable

#artificialintelligence

Few issues are as important to businesses today as sustainability. Because the modern consumer cares about the environment, companies need to meet higher expectations for eco-friendly practices. Supply chains, in particular, have a lot of room to improve. It's no secret that logistics chains aren't exactly eco-friendly: they account for more than 80% of carbon emissions globally. The modern business world can't exist without supply chains, but the natural world won't exist in the same way if they don't improve. The good news is there's an . . .


Global Big Data Conference

#artificialintelligence

The world is in the midst of a historical turning point. The COVID-19 pandemic has effectively halted life as we once knew it and left the open question, "what will our world look like when 'normal' life resumes?" While we don't have a crystal ball that lets us peer into the future, history has given us a template for what to expect. Past pandemics have shaped politics, crashed economies, spurred revolutions and produced other profound societal transformations. In the 14th century, the bubonic plague killed more than 60 percent of Europe's population – a dramatic population decline that actually improved living standards for the survivors and marked the decline of serfdom.


11 Essential Neural Network Architectures, Visualized & Explained

#artificialintelligence

The perceptron is the most basic of all neural networks and a fundamental building block of more complex ones. It simply connects an input cell to an output cell. The feed-forward network is a collection of perceptrons, with three fundamental types of layers -- input layers, hidden layers, and output layers. At each connection, the signal from the previous layer is multiplied by a weight, added to a bias, and passed through an activation function. Feed-forward networks use backpropagation to iteratively update the parameters until they achieve the desired performance.
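As a minimal sketch of the pattern described above (library choice, layer sizes, and toy data are my own assumptions, not the article's), here is a small feed-forward network in PyTorch with one hidden layer, trained by backpropagation:

```python
# Minimal feed-forward network sketch in PyTorch (assumed library, not from the article).
import torch
import torch.nn as nn

# Three fundamental layer types: input (4 features), one hidden layer, and an output layer.
model = nn.Sequential(
    nn.Linear(4, 8),   # weight * input + bias
    nn.ReLU(),         # activation function
    nn.Linear(8, 1),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(32, 4)   # toy inputs
y = torch.randn(32, 1)   # toy targets

for epoch in range(100):         # iteratively update the parameters
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()              # backpropagation computes the gradients
    optimizer.step()             # gradient step updates weights and biases
```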


Roadmap to Natural Language Processing (NLP)

#artificialintelligence

Natural Language Processing (NLP) is the area of research in Artificial Intelligence focused on processing and using text and speech data to build smart machines and generate insights. One of today's most interesting NLP applications is creating machines able to discuss complex topics with humans; IBM Project Debater represents one of the most successful approaches in this area so far. Common preprocessing techniques can be easily applied to different types of text using standard Python NLP libraries such as NLTK and spaCy. Additionally, to capture the syntax and structure of our text, we can make use of techniques such as Part-of-Speech (POS) tagging and shallow parsing (Figure 1).
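As a small illustration of the libraries the article mentions, the sketch below runs basic tokenization and POS tagging with NLTK and spaCy; the sample sentence, resource names, and the en_core_web_sm model are assumptions on my part rather than code from the article:

```python
# Hedged sketch of standard preprocessing and POS tagging with NLTK and spaCy.
import nltk
import spacy

text = "IBM Project Debater can discuss complex topics with humans."

# NLTK: tokenization and POS tagging (tokenizer and tagger resources may need downloading).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))

# spaCy: tokenization, lemmatization, and POS tags in one pipeline pass.
nlp = spacy.load("en_core_web_sm")  # small English model, installed separately
doc = nlp(text)
print([(token.text, token.lemma_, token.pos_) for token in doc])
```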


Global Big Data Conference

#artificialintelligence

Artificial intelligence is beginning to be usefully deployed in almost every industry, from customer call centers and finance to drug research. Yet the field is also plagued by relentless hype, opaque jargon, and esoteric technology, making it difficult for outsiders to identify the most interesting companies. To cut through the spin, Forbes partnered with venture firms Sequoia Capital and Meritech Capital to create our second annual AI 50, a list of private, U.S.-based companies that are using artificial intelligence in meaningful business-oriented ways. To be included, companies had to be privately held and focused on techniques like machine learning (where systems learn from data to improve on tasks), natural language processing (which enables programs to "understand" written or spoken language), or computer vision (which relates to how machines "see"). The list was compiled through a submission process open to any AI company in the U.S. The application asked companies to provide details on their technology, business model, customers, and financials such as funding, valuation and revenue history (companies had the option to submit information confidentially, to encourage greater transparency).


R&D Roundup: Tech giants unveil breakthroughs at computer vision summit – TechCrunch

#artificialintelligence

Computer vision summit CVPR has just (virtually) taken place, and like other CV-focused conferences, there are quite a few interesting papers -- more than I could possibly write up individually, in fact, so I've collected the most promising ones from major companies here. Facebook, Google, Amazon and Microsoft all shared papers at the conference -- and others did too, I'm sure -- but I'm sticking to the big hitters for this column. Redmond has the most interesting papers this year, in my opinion, because they cover several nonobvious real-life needs. One is documenting that shoebox we, or perhaps our parents, filled with old 3x5s and other film photos.


Deploy Machine Learning Pipeline on AWS Fargate - KDnuggets

#artificialintelligence

In our last post on deploying a machine learning pipeline in the cloud, we demonstrated how to develop a machine learning pipeline in PyCaret, containerize it with Docker, and serve it as a web application using Google Kubernetes Engine. If you haven't heard about PyCaret before, please read this announcement to learn more. In this tutorial, we will use the same machine learning pipeline and Flask app that we built and deployed previously. This time we will demonstrate how to containerize the pipeline and deploy it serverlessly using AWS Fargate. This tutorial covers the entire workflow: building a Docker image locally, uploading it to Amazon Elastic Container Registry, creating a cluster, and then defining and executing a task using AWS-managed infrastructure, i.e., AWS Fargate.
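For context, the containerized service in a setup like this often boils down to a small Flask app that loads the saved PyCaret pipeline and exposes a prediction endpoint. The sketch below is a hedged illustration, not the tutorial's actual code: the model name, route, payload shape, and port are assumptions.

```python
# Hedged sketch of a Flask scoring endpoint for a saved PyCaret pipeline.
# Model name, route, and payload shape are assumptions, not the tutorial's code.
import pandas as pd
from flask import Flask, request
from pycaret.classification import load_model, predict_model

app = Flask(__name__)
pipeline = load_model("deployment_model")  # loads deployment_model.pkl from disk

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON array of feature records, e.g. [{"age": 42, "income": 50000}]
    data = pd.DataFrame(request.get_json())
    predictions = predict_model(pipeline, data=data)
    # Return all columns, since the prediction column name varies across PyCaret versions
    return predictions.to_json(orient="records")

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the container port mapped in the Fargate task definition is reachable
    app.run(host="0.0.0.0", port=5000)
```

The same image built from this app can be pushed to Amazon Elastic Container Registry and run as a Fargate task, which is the workflow the tutorial walks through.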