Machine Learning


When AI Systems Fail: Introducing the AI Incident Database - The Partnership on AI

#artificialintelligence

Governments, corporations, and individuals are increasingly deploying intelligent systems to safety-critical problem areas, such as transportation, energy, health care, and law enforcement, as well as challenging social system domains such as recruiting. Failures of these systems pose serious risks to life and wellbeing, but even well-intentioned intelligent system developers fail to imagine what can go wrong when their systems are deployed in the real world. These failures can lead to dire consequences, some of which we've already witnessed: a trading algorithm causing a market "flash crash" in 2010, an autonomous car killing a pedestrian in 2018, and a facial recognition system causing the wrongful arrest of an innocent person in 2019. Worse, the artificial intelligence community has no formal systems or processes whereby practitioners can discover and learn from the mistakes of the past, especially since there is no widely used, centralized place to collect information about what has gone wrong previously. Avoiding repeated AI failures requires making past failures known.


Artificial Intelligence Against Corruption

#artificialintelligence

Corruption grows when accountability is low -- it is hard to imagine a politician abusing their power for personal gain if they knew for certain that they would get caught and punished. This is why improving accountability is a winning strategy for fighting corruption, and Artificial Intelligence technology can help us do that. Whether we realize it or not, AI technologies that spot wrongdoing are already all around us. Credit card companies, for example, have been using them for years -- if your card is used in strange countries, to buy strange products, in a price range that is strange for your normal behavior, the company's AI models are likely to flag the transaction as suspicious. And they do so incredibly fast, across millions and millions of transactions every day.
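To make that concrete, here is a minimal sketch of this kind of anomaly flagging using scikit-learn's IsolationForest. The transaction features, numbers, and threshold below are made up for illustration; no card issuer's actual model is implied.

# Minimal sketch of transaction anomaly flagging with an isolation forest.
# The features and data here are illustrative, not a real issuer's model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history: [amount_usd, distance_from_home_km, hour_of_day]
normal = rng.normal(loc=[40, 10, 14], scale=[15, 5, 3], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new transaction: large amount, far from home, at 3 a.m.
suspicious = np.array([[900, 8000, 3]])
print(model.predict(suspicious))  # -1 means "flag as anomalous"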


Role Of AI And Machine Learning In Logistics Industry

#artificialintelligence

We see rapid technological development in big data, algorithms, connectivity, cloud computing, and processing power every day. These new technologies have made the performance, accessibility, and costs of AI more favourable than ever before. The introduction of modern technologies such as artificial intelligence, machine learning, and blockchain has transformed the unorganised and fragmented logistics sector. These technologies bring changes to the logistics industry such as predictive analytics, autonomous vehicles, and smart roads. Artificial intelligence and machine learning are reaching more and more industries and spheres of our lives, and logistics is no exception.


A definitive explanation to Hinge Loss for Support Vector Machines.

#artificialintelligence

NOTE: This article assumes that you are familiar with how an SVM operates. If this is not the case for you, be sure to check out my previous article, which breaks down the SVM algorithm from first principles and also includes a coded implementation of the algorithm from scratch! I have seen lots of articles and blog posts on the Hinge Loss and how it works. However, I find most of them to be quite vague, not giving a clear explanation of what exactly the function does and what it is. Instead, most of the time an unclear graph is shown and the reader is left bewildered.
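For reference, the function itself is compact: for a label y in {-1, +1} and a raw SVM score f(x), the hinge loss is max(0, 1 - y*f(x)) -- zero when the example sits on the correct side of the margin, growing linearly as it violates the margin. A minimal NumPy sketch:

# Hinge loss for a binary SVM: labels y in {-1, +1}, raw scores f(x).
# Zero loss when the prediction clears the margin (y * f(x) >= 1);
# grows linearly as the score falls short of or violates the margin.
import numpy as np

def hinge_loss(y, scores):
    return np.maximum(0, 1 - y * scores)

y = np.array([1, 1, -1, -1])
scores = np.array([2.0, 0.4, -3.0, 0.7])   # raw decision values w.x + b
print(hinge_loss(y, scores))               # [0.  0.6 0.  1.7]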


Building Fair Machine Learning: Interview with the Co-Founders of Fairly AI

#artificialintelligence

David Van Bruwaene and Fion Lee-Madan are the co-founders of Fairly AI, a Waterloo-Toronto Founder Institute portfolio company. Fairly AI is a tool for organizations to audit their artificial intelligence (AI) systems across all business units, to eliminate bias, protect privacy, and ensure transparency of automated decisions. Fairness in machine learning is a research topic of increasing interest to laypeople and non-technologists, because the implications that "biased" artificial intelligence can have on society are enormous. For example, if industries like housing, lending, education, or human resources utilize AI in their decision-making -- trained on historical data that included variables such as gender, ethnicity, or disability -- the AI may learn to replicate that input data's statistical regularities. If there was a pattern of discrimination in the "input data," then there will likely be a discriminatory pattern in the "output data," resulting in machine learning that is not 'fair.'
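One common way auditors quantify this kind of output bias is to compare positive-outcome rates across groups (demographic parity). A toy sketch of that check follows; the data and the four-fifths threshold are illustrative conventions, not Fairly AI's actual audit method.

# Illustrative fairness check: compare positive-outcome rates across groups
# (demographic parity). The data and the 0.8 "four-fifths" threshold are
# hypothetical, not Fairly AI's actual method.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)                      # A: 0.75, B: 0.25
print(rates.min() / rates.max())  # 0.33 -- well below the 0.8 rule of thumb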


How to approach AutoML as a data scientist

#artificialintelligence

In the past five years, one trend that has made AI more accessible and acted as the driving force behind several companies is automated machine learning (AutoML). Many companies, such as H2O.ai, DataRobot, Google, and SparkCognition, have created tools that automate the process of training machine learning models. All the user has to do is upload the data and select a few configuration options; the AutoML tool then automatically tries different machine learning models and hyperparameter combinations and comes up with the best ones. Does this mean that we no longer need to hire data scientists? Not at all: AutoML makes the jobs of data scientists just a little easier by automating a small part of the data science workflow.
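The core loop these products automate can be sketched in a few lines of plain scikit-learn -- this is not H2O's or DataRobot's API, just an illustration of searching over model families and hyperparameters and keeping the best cross-validated candidate.

# Toy version of what an AutoML tool automates: try several model families
# and hyperparameter settings, keep the best by cross-validated score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5).fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model, best_score)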


Building natural conversation flows using context management in Amazon Lex

#artificialintelligence

Understanding the direction and context of an ever-evolving conversation is beneficial to building natural, human-like conversational interfaces. Being able to classify utterances as the conversation develops requires managing context across multiple turns. Consider a caller who asks their financial planner for insights regarding their monthly expenses: "What were my expenses this year?" They may also ask for more granular information, such as "How about for last month?" As the conversation progresses, the bot needs to understand if the context is changing and adjust its responses accordingly.
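In Lex V2, this kind of context carry-over is typically done by having the fulfillment Lambda return active contexts in its response, so that a follow-up intent configured with that context as an input context can be matched. A rough sketch follows; the intent and context names ("CheckExpenses", "expenses_ctx") and the reply text are made up, and the field layout follows the Lex V2 Lambda response format, so treat the details as illustrative.

# Rough sketch of a Lex V2 Lambda fulfillment handler that sets an output
# context so a follow-up such as "How about for last month?" can be routed
# to an intent that lists "expenses_ctx" as an input context.
# Intent/context names and the reply are hypothetical, for illustration.
def lambda_handler(event, context):
    return {
        "sessionState": {
            "activeContexts": [{
                "name": "expenses_ctx",
                "timeToLive": {"timeToLiveInSeconds": 300, "turnsToLive": 5},
                "contextAttributes": {"lastPeriodQueried": "this_year"},
            }],
            "dialogAction": {"type": "Close"},
            "intent": {"name": "CheckExpenses", "state": "Fulfilled"},
        },
        "messages": [{
            "contentType": "PlainText",
            "content": "Here are your expenses for this year.",
        }],
    }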


Humanizing AI: How to Close the Trust Gap in Healthcare - InformationWeek

#artificialintelligence

Physician turnover in the United States, due to burnout and related factors, was conservatively estimated to cost the US healthcare system some $4.6 billion annually, according to a 2019 Annals of Internal Medicine study. The results reflect a familiar dynamic, where too many doctors are buried in paperwork that takes time away from being with patients. Just five months after this study was publicized, Harvard Business Review published "How AI in the Exam Room Could Reduce Physician Burnout," examining multiple artificial intelligence initiatives that may streamline providers' administrative tasks, thus reducing burnout. Still, barriers to trust in AI solutions remain, highlighted by 2020 KPMG International survey findings that only 35% of leaders have a high degree of trust in AI-powered data analytics within their own organizations. This lack of confidence, even in their own AI-driven solutions, underscores the significant trust gap between decision-makers and technology in the current digital era.


Data Scientists Can Help Inform Pandemic Policy With Innovative Ways To Use AI

#artificialintelligence

It's been almost one year since the Covid-19 pandemic started. Data scientists worldwide have been analyzing data gathered during the pandemic to inform policies. As we have seen, policymaking has not been straightforward. This time of social isolation has also been an opportunity for policymakers to figure out the right approach to making sense of the data and to gain flexibility in community-based policy decisions. On Nov 17th, 2020, XPrize and Cognizant announced their Pandemic Response Challenge.


Model Compression via Pruning

#artificialintelligence

To obtain fast and accurate inference on edge devices, a model has to be optimized for real-time inference. Even fine-tuned state-of-the-art models are large: VGG16/19 and ResNet50 have 138 million and 23 million parameters respectively, and inference is often expensive on resource-constrained devices. Previously I've talked about one model compression technique called "Knowledge Distillation," which uses a smaller student network to mimic the performance of a larger teacher network (the student and teacher have different network architectures). Today, the focus will be on "Pruning," a model compression technique that allows us to compress the model to a smaller size with zero or marginal loss of accuracy. In short, pruning eliminates weights with low magnitude, i.e., weights that do not contribute much to the final model performance.
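As a concrete example, PyTorch ships low-magnitude pruning out of the box in torch.nn.utils.prune; a minimal sketch, where the layer shape and the 30% sparsity level are arbitrary choices for illustration:

# Minimal sketch of low-magnitude weight pruning with PyTorch's built-in
# pruning utilities; the layer and 30% sparsity level are arbitrary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~30% of weights are now zero

# Make the pruning permanent (drop the mask, bake zeros into the tensor).
prune.remove(layer, "weight")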