Building trust in AI: Transparent models for better decisions
AI is becoming part of our daily lives, from approving loans to diagnosing diseases. Model outputs, driven by smart algorithms and data, are used to make increasingly important decisions. But if we can't understand these decisions, how can we trust them? One approach to making AI decisions more understandable is to use inherently interpretable models: models designed so that consumers of their outputs can infer the model's behaviour directly from its parameters. Popular inherently interpretable models include decision trees and linear regression.
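The idea of inferring a model's behaviour from its parameters can be sketched with a tiny linear regression. The loan-scoring scenario, feature names, and data below are all invented for illustration; this is a minimal sketch assuming NumPy, not any particular production system:

```python
# Hypothetical example: a tiny linear model for a loan-approval score.
# All feature names and numbers are made up for illustration.
import numpy as np

# Two features per applicant: income (tens of thousands) and debt ratio.
X = np.array([
    [3.0, 0.4],
    [5.0, 0.2],
    [7.0, 0.1],
    [4.0, 0.5],
    [6.0, 0.3],
])
y = np.array([0.5, 0.8, 0.95, 0.45, 0.75])  # approval scores

# Fit by ordinary least squares, with a column of ones for the intercept.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, w_income, w_debt = coef

# The learned weights ARE the explanation: a positive weight on income
# means higher income raises the predicted score, and a negative weight
# on debt ratio lowers it -- readable directly from the parameters.
print(f"intercept={intercept:.2f}, "
      f"income weight={w_income:.2f}, debt weight={w_debt:.2f}")
```

On this toy data the fitted income weight comes out positive and the debt-ratio weight negative, so a loan officer can state exactly how each input moved the score; that direct readability is what a deep neural network, with millions of entangled parameters, does not offer.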
Building Trust Through Voice: How Vocal Tone Impacts User Perception of Attractiveness of Voice Assistants
Sabid Bin Habib Pias, Alicia Freel, Ran Huang, Donald Williamson, Minjeong Kim, Apu Kapadia
Voice Assistants (VAs) are popular for simple tasks, but users are often hesitant to use them for complex activities like online shopping. We explored whether vocal characteristics, such as the VA's vocal tone, can make VAs seem more attractive and trustworthy to users for complex tasks. Our findings show that the tone of the VA's voice significantly impacts its perceived attractiveness and trustworthiness. Participants in our experiment were more likely to be attracted to VAs with positive or neutral tones and ultimately trusted the VAs they found more attractive. We conclude that a VA's perceived trustworthiness can be enhanced through thoughtful voice design incorporating a variety of vocal tones.
- North America > United States > Indiana (0.07)
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > Ohio (0.04)
- North America > United States > Massachusetts > Barnstable County > Falmouth (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area (0.93)
- Information Technology (0.89)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.94)
- Information Technology > Artificial Intelligence > Machine Learning (0.68)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (0.67)
Strengthening trust in machine-learning models
Probabilistic machine learning methods are becoming increasingly powerful tools in data analysis, informing a range of critical decisions across disciplines and applications, from forecasting election results to predicting the impact of microloans on addressing poverty. This class of methods uses sophisticated concepts from probability theory to handle uncertainty in decision-making. But the math is only one piece of the puzzle in determining their accuracy and effectiveness. In a typical data analysis, researchers make many subjective choices, or potentially introduce human error, that must also be assessed in order to cultivate users' trust in the quality of decisions based on these methods. To address this issue, MIT computer scientist Tamara Broderick, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems (LIDS), and a team of researchers have developed a classification system--a "taxonomy of trust"--that defines where trust might break down in a data analysis and identifies strategies to strengthen trust at each step.
- North America > Mexico (0.06)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > United States > Kentucky (0.05)
- (2 more...)
Achieving Individual -- and Organizational -- Value With AI
At Land O'Lakes, a member-owned cooperative agribusiness, farmers are using data and artificial intelligence to make smarter decisions. Over the past 30 years, corn farmers have used advances in bioengineering, chemicals, and analytics to boost their average yields by 50%, from 120 to 180 bushels per acre. Those advances pale in comparison with the future corn yields that data and AI will make possible: demonstrations promise to triple that average -- to 540 bushels per acre -- by the end of this decade. Farmers don't have to wait that long to see some of those benefits, however. Through extensive experimentation and complex algorithms, Land O'Lakes is already providing AI-driven recommendations to help individual farmers become more productive.
Why conversational AI needs to feel more human, not sound more human
With more and more of our lives spent online, it's no wonder today's headlines are rife with stories that sound like science fiction plot lines blurring the lines between human and computer. Earlier this year, a Google engineer voiced concerns that its chatbot model LaMDA had become sentient. Shortly after, the head of global affairs for Meta published a lengthy article about the impactful experiences that immersive technology makes possible, which featured an image of a man playing chess with a humanoid hologram -- implying technology can generate and replace that kind of human-to-human experience.
Building Trust in AI To Ensure Equitable Solutions
Your smartphone can feel like a lifeline, helping you navigate a new town or delivering an urgent message to a friend. Many people have a funny or embarrassing anecdote about an autocorrected text message or a roundabout route to a destination. But these artificial intelligence (AI) flaws exist on a spectrum, from minor inconveniences to unfair treatment or even risk to human life. The people who create and use these AI technologies are also imperfect; we have our own biases, whether we are aware of them or not. Unconscious bias can influence our decisions and lead to unintended consequences; overt prejudice can result in the unethical and harmful exploitation of AI technologies.
- Energy > Renewable (1.00)
- Information Technology (0.97)
- Energy > Power Industry (0.96)
- Government > Regional Government > North America Government > United States Government (0.48)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.84)
- Information Technology > Communications > Mobile (0.55)
Global Big Data Conference
How can humans learn to trust self-healing machines? See how teams can build trust in machines through "truth and proof": evidence-driven AIOps tools. IT systems are only getting more complex, with greater pressure to solve issues faster and demonstrate value consistently. Issues within systems, which dev teams could once handle on their own, sprout up too fast and too often for direct human intervention. Artificial intelligence for IT Operations (AIOps) tools exist today to deliver automated monitoring and solution development, "no humans required" -- significantly easing dev teams' many burdens.
- Information Technology > Artificial Intelligence (0.43)
- Information Technology > Data Science > Data Mining > Big Data (0.40)
Building Trust with Responsible AI
Artificial Intelligence is being used in almost every aspect of life. To some, AI symbolizes growth and productivity, but it also raises questions about the fairness, privacy, and security of these systems. Many legitimate concerns exist, including biased decisions, labor displacement, and lax security. The stakes are especially high when AI controls physical machines: self-driving automobiles, for example, can cause injury or death if they make mistakes.
- Transportation > Passenger (0.64)
- Transportation > Ground > Road (0.64)
You Don't Trust AI? How to Overcome Your Fears
In a recent episode of Star Trek: Discovery, the crew struggled with the question of how to trust their newly sentient ship's computer Zora. The issue of trust came to a head when Zora made a unilateral decision the crew didn't like. In the face of such insubordination, is there any way the crew could trust Zora to follow the chain of command? Today's AI is many years away from suddenly waking up sentient, but the question of trust is front and center in every professional's mind. If there's a chance that some AI-driven software might get an answer wrong – either clearly incorrect or perhaps more perniciously, subtly biased – then how can we ever trust it?