Singapore to establish AI framework for 'fairness' credit scoring metrics

ZDNet

Singapore has kicked off efforts to develop a framework to ensure the "responsible" adoption of artificial intelligence (AI) and data analytics in credit risk scoring and customer marketing. Two teams comprising banks and industry players have been tasked with establishing metrics that can help financial institutions assess the "fairness" of their AI and data analytics tools in these use cases. The Monetary Authority of Singapore (MAS) said a whitepaper detailing the metrics would be published by year-end, along with open source code to enable financial institutions to adopt them. These organisations would then be able to integrate the open source code into their own IT systems to assess the fairness of their AI applications, the industry regulator said in a statement Friday. It added that the open source code would be deployed on API Exchange (APIX), an online global marketplace and sandbox that enables fintech and FSI companies to integrate and test applications via a cloud-based platform.
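MAS has not yet published the metrics or the accompanying open source code, so any concrete example here is necessarily an assumption. As a rough illustration of the kind of check such a framework might standardize, the sketch below computes a demographic parity gap for a credit scoring model: the difference in approval rates between two groups. The function name and the choice of metric are illustrative only, not part of the MAS framework.

```python
import numpy as np

def demographic_parity_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between two groups.

    approved: binary array, 1 if the credit application was approved.
    group:    binary array marking membership in a protected group.
    A gap near 0 suggests the model approves both groups at similar rates.
    (Illustrative metric only -- not the MAS framework's definition.)
    """
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return float(abs(rate_a - rate_b))

# Toy example: approval decisions for eight applicants.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(approved, group))  # 0.5 -> a large disparity
```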


Google's federated analytics method could analyze end user data without invading privacy

#artificialintelligence

In a blog post today, Google laid out the concept of federated analytics, a practice of applying data science methods to the analysis of raw data that is stored locally on edge devices. As the tech giant explains, it works by running local computations over a device's data and making only the aggregated results -- not the data from the particular device -- available to authorized engineers. Federated analytics is closely related to federated learning, an AI technique that trains an algorithm across multiple devices holding local samples, but it supports only basic data science needs -- "federated learning lite," in effect. The approach lets companies analyze user behavior in a privacy-preserving, secure way, which could lead to better products. Google for its part uses federated techniques to power Gboard's word suggestions and Android Messages' Smart Reply feature.
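The post's core mechanic -- run the computation on-device and share only aggregates -- can be sketched in a few lines. The example below is a toy illustration of that flow, not Google's implementation; the minimum-device threshold is a crude stand-in for the differential-privacy safeguards a real deployment would use.

```python
from collections import Counter

def local_word_counts(device_messages):
    """Runs on-device: computes an aggregate over local raw data."""
    counts = Counter()
    for msg in device_messages:
        counts.update(msg.lower().split())
    return counts  # only this aggregate leaves the device

def federated_aggregate(per_device_counts, min_devices=3):
    """Runs centrally: merges per-device aggregates.

    Words reported by fewer than `min_devices` devices are dropped --
    a crude stand-in for real privacy thresholds (illustrative only).
    """
    total, seen_on = Counter(), Counter()
    for counts in per_device_counts:
        total.update(counts)
        seen_on.update(counts.keys())
    return {w: c for w, c in total.items() if seen_on[w] >= min_devices}

devices = [["hello there", "ok thanks"], ["hello again"], ["ok hello"], ["thanks hello"]]
print(federated_aggregate([local_word_counts(d) for d in devices]))  # {'hello': 4}
```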


Measuring the Effectiveness of AI in the SOC

#artificialintelligence

In a previous blog post, I covered some of the challenges encountered by security operations centers (SOCs) and how leveraging artificial intelligence (AI) can help alleviate them, including the cybersecurity skills shortage, unaddressed security risks and long dwell times. According to ISACA's State of Cybersecurity Report, 78 percent of respondents expect the demand for technical cybersecurity roles to increase in the future. The report also notes that the effects of the skills shortage are likely to get worse. This is where AI can step in and lighten the load considerably. At a time of tight budgets and closely scrutinized IT spend, any new expenditure must have a solid business justification.


Chatbots in a nutshell - The Digital Transformation People

#artificialintelligence

Marketing scientist Kevin Gray asks Dr. Anna Farzindar of the University of Southern California about chatbots and the ways they are used. Is there a formal definition you prefer? Conversational or dialog agents are designed to communicate with us in human language. These software agents are deployed everywhere around us: when talking to your car, communicating with robots, or using a personal assistant such as Alexa, Cortana, Siri or Google Assistant on any device or smartphone. The term "chatbot" is often used in industry for conversational agents that can be integrated into any online messaging application.


Your Ultimate Data Science Statistics & Mathematics Cheat Sheet

#artificialintelligence

Classifier metrics are used to evaluate the performance of machine learning classifiers -- models that assign each example to one of several discrete categories. A confusion matrix is a table that compares a classifier's predictions against the true labels. It contains four cells, one for each combination of a predicted positive or negative and an actual positive or negative. Many classifier metrics are derived from the confusion matrix, so it's helpful to keep an image of it in mind. Sensitivity/recall is the proportion of actual positives that the classifier correctly identifies.
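A minimal sketch of these definitions, assuming binary labels encoded as 0/1 (hand-rolled for illustration; in practice you would likely reach for a library implementation such as scikit-learn's):

```python
import numpy as np

def confusion_cells(y_true, y_pred):
    """The four cells: each combination of predicted and actual class."""
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    return tp, fp, fn, tn

y_true = np.array([1, 1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1])
tp, fp, fn, tn = confusion_cells(y_true, y_pred)
recall = tp / (tp + fn)  # sensitivity: share of actual positives found
print(tp, fp, fn, tn, recall)  # 3 1 1 1 0.75
```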


Hyperparameter Searches with Spell

#artificialintelligence

Model hyperparameters are free parameters that control different aspects of a model's learning process. Hyperparameter search is the process of finding the hyperparameter values that yield the most performant model. Spell lets you automate hyperparameter searches with the spell hyper command. For an interactive, runnable tutorial on hyperparameter search, refer to our blog post, "An introduction to hyperparameter search with CIFAR10". The spell hyper command kicks off your hyperparameter search.
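The blog post linked above is the authoritative reference for the spell hyper command itself; the plain-Python sketch below only illustrates the grid-search idea the command automates. The search space and the dummy objective are made up for illustration.

```python
from itertools import product

# Hypothetical search space: the kinds of values a hyperparameter
# search sweeps over (illustrative only, not Spell's API).
grid = {"lr": [1e-3, 1e-2], "batch_size": [32, 64]}

def train_and_evaluate(lr, batch_size):
    """Stand-in for a real training run; returns a validation score."""
    return 1.0 - lr * batch_size  # dummy objective for illustration

best_score, best_params = float("-inf"), None
for lr, bs in product(grid["lr"], grid["batch_size"]):
    score = train_and_evaluate(lr, bs)
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "batch_size": bs}

print(best_params, best_score)  # {'lr': 0.001, 'batch_size': 32} 0.968
```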


Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic

#artificialintelligence

SARS-CoV-2 has upended modern health care, leaving health systems struggling to cope. Addressing a fast-moving and uncontrolled disease requires an equally efficient method of discovery, development and administration. Health care solutions driven by artificial intelligence (AI) and machine learning provide such an answer. AI-enabled health care is not "the medicine of the future," nor does it mean robot doctors rolling room to room in hospitals treating patients. Far from some future Jetsons-like fantasy, AI is poised to make impactful and urgent contributions to the current health care ecosystem.


Council Post: AI-Led Operations: A Way For Enterprises To Scale With Foresight

#artificialintelligence

Things move quickly in business today. And organizations are increasingly using data to move their businesses forward. IT systems are generating a growing variety, velocity and volume of data. This both creates challenges and opens up new opportunities. Taking advantage of data to reach business goals requires the ability to scale IT operations quickly.


Researchers measure reliability, confidence for next-gen AI

#artificialintelligence

A team of Army and industry researchers has developed a metric for neural networks -- computing systems modeled loosely after the human brain -- that could assess the reliability and confidence of the next generation of artificial intelligence and machine learning algorithms. Deep neural networks, or DNNs, are a form of machine learning that uses training data to learn. Once trained, they can make predictions when given new information or inputs; however, they can be easily deceived if the new information falls too far outside their training data. Researchers said that, given the diversity of information in training data and of potential new inputs, coming up with a solution is challenging. "This opens a new research opportunity to create the next generation of algorithms that are robust and resilient," said Dr. Brian Jalaian, a scientist at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory.
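The article does not disclose the team's metric. A common baseline for this kind of confidence scoring, shown below purely for illustration, is the entropy of a network's softmax output: flat, high-entropy predictions often (though not reliably) accompany inputs far from the training distribution.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher = less confident.

    A simple baseline for flagging unreliable predictions -- not the
    metric the Army/industry team developed.
    """
    p = softmax(logits)
    return float(-np.sum(p * np.log(p + 1e-12)))

print(predictive_entropy(np.array([9.0, 0.5, 0.1])))  # peaked -> low entropy
print(predictive_entropy(np.array([1.1, 1.0, 0.9])))  # flat   -> high entropy
```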


A Speech-To-Text Practitioner's Criticisms of Industry and Academia

#artificialintelligence

I really like the expression "being bitten by the SOTA bug". In a nutshell, it means that when a large group of people focuses on pursuing a top result on some abstract metric, the metric loses its meaning (a classic manifestation of Goodhart's Law). The exact reason this happens differs from case to case and can be very technical, but in ML what usually occurs is that models overfit to hidden intrinsic qualities of the datasets used to calculate the metrics. For example, in CV such patterns are usually clusters of visually similar images. A small, idealistic, under-the-radar community pursuing an academic or scientific goal is much less prone to falling victim to Goodhart's Law than a larger, more popular one. Once a certain degree of popularity is reached, the community starts pursuing metrics or virtue signalling (showing off one's moral values when no real effort is required), and real progress stops until some crisis arrives. This is what it means to be bitten by the SOTA bug. For example, in the field of Natural Language Processing this attitude has led to irrational over-investment in huge models optimized on public academic benchmarks, but the usefulness of such "progress" is very limited for a number of reasons: