Ethical AI in Retail: Consumer Privacy and Fairness
The adoption of artificial intelligence (AI) in retail has significantly transformed the industry, enabling more personalized services and more efficient operations. However, the rapid implementation of AI technologies raises ethical concerns, particularly regarding consumer privacy and fairness. This study aims to analyze the ethical challenges of AI applications in retail, explore ways retailers can implement AI technologies ethically while remaining competitive, and provide recommendations on ethical AI practices. A descriptive survey design was used to collect data from 300 respondents across major e-commerce platforms. Data were analyzed using descriptive statistics, including percentages and mean scores. Findings show a high level of concern among consumers regarding the amount of personal data collected by AI-driven retail applications, with many expressing a lack of trust in how their data are managed. Fairness emerged as another major issue: a majority believe AI systems do not treat consumers equally, raising concerns about algorithmic bias. The study also found that AI can enhance business competitiveness and efficiency without compromising ethical principles such as data privacy and fairness. Data privacy and transparency were highlighted as critical areas where retailers need to focus their efforts, indicating a strong demand for stricter data protection protocols and ongoing scrutiny of AI systems. The study concludes that retailers must prioritize transparency, fairness, and data protection when deploying AI systems, and recommends ensuring transparency in AI processes, conducting regular audits to address biases, incorporating consumer feedback in AI development, and emphasizing consumer data privacy.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Services > e-Commerce Services (0.48)
Enhancing transparency in AI-powered customer engagement
This paper addresses the critical challenge of building consumer trust in AI-powered customer engagement by emphasising the necessity for transparency and accountability. Despite the potential of AI to revolutionise business operations and enhance customer experiences, widespread concerns about misinformation and the opacity of AI decision-making processes hinder trust. Surveys highlight a significant lack of awareness among consumers regarding their interactions with AI, alongside apprehensions about bias and fairness in AI algorithms. The paper advocates for the development of explainable AI models that are transparent and understandable to both consumers and organisational leaders, thereby mitigating potential biases and ensuring ethical use. It underscores the importance of organisational commitment to transparency practices beyond mere regulatory compliance, including fostering a culture of accountability, prioritising clear data policies and maintaining active engagement with stakeholders. By adopting a holistic approach to transparency and explainability, businesses can cultivate trust in AI technologies, bridging the gap between technological innovation and consumer acceptance, and paving the way for more ethical and effective AI-powered customer engagements. KEYWORDS: artificial intelligence (AI), transparency
- Europe (0.14)
- North America > United States > New York (0.05)
- North America > United States > Virginia > Arlington County > Arlington (0.04)
- (2 more...)
- Overview (0.46)
- Research Report (0.40)
- Law (1.00)
- Information Technology (1.00)
- Banking & Finance (1.00)
- Government > Regional Government > North America Government > United States Government (0.47)
The role of organisational culture in data privacy and transparency
In an era of mass personalisation and technological innovation, organisations increasingly need to make responsible use of consumer data part of their organisational culture. Since the GDPR came into force in May 2018, there have been some encouraging findings (as I have discussed before) indicating that consumers are increasingly willing to share their data in exchange for personalised services and improved experiences. In addition, marketers are more confident about their reputation in the eyes of consumers. However, there is still a long way to go to improve consumer trust in marketing and to show how data can be used as a force for good. Recent Adobe research reveals that over 75 per cent of UK consumers are concerned about how companies use their data.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
Insurtech: what is it and what does it mean for insurance? - Economics Observatory
The basic idea of insurance is based on risk transfer and has been around, in some form, for thousands of years. For example, Chinese merchants travelling through treacherous rivers over two thousand years ago would spread their goods across many vessels to avoid losing everything if a single vessel were to capsize. The use of public storehouse granaries for the purpose of communal protection in the event of a famine is another example. Modern-day insurance can be traced back to events such as the Great Fire of London in 1666. This prompted the development of property insurance, the establishment of Edward Lloyd's London coffee shop (which became a central place for marine insurance to develop and eventually the famous Lloyd's of London insurance market) and the founding of the Amicable Society for a Perpetual Assurance Office in 1706 and the Equitable Life Assurance Society in 1762.
Chatbot or human? Either way, what matters for customer trust is 'perceived humanness'
The helpful person guiding you through your online purchase might not be a person at all. As artificial intelligence and natural language processing advance, we often don't know if we are talking to a person or an AI-powered chatbot, says Tom Kelleher, Ph.D., an advertising professor in the University of Florida's College of Journalism and Communications. What matters more than who (or what) is on the other side of the chat, Kelleher has found, is the perceived humanness of the interaction. With text-based bots becoming ubiquitous and AI-powered voice systems emerging, consumers of everything from shoes to insurance may find themselves talking to non-humans. Companies will have to decide when bots are appropriate and effective and when they're not.
- North America > United States > Connecticut (0.06)
- North America > United States > California (0.06)
On the podcast: Autonomous finance's obstacles and opportunities
Autonomous finance uses AI to make financial decisions on behalf of consumers without the need for direct human input. The service has become especially relevant over the last year as consumers have struggled to maintain financial health during the COVID-19 pandemic. In this episode, Paul Condra, head of emerging technology research, and Robert Le, senior emerging tech analyst, discuss how autonomous finance helps consumers better manage their financial health and performance, as well as the challenges for the technology, including computing costs, consumer trust, regulations and transaction categorization. Listen to all of Season 3 and subscribe to get future episodes of "In Visible Capital" on Apple Podcasts, Spotify, Google Podcasts or wherever you listen. For inquiries, please contact us at podcast@pitchbook.com.

Transcript

Adam Lewis: Welcome back to "In Visible Capital," a show that discusses the inner workings of the private markets. Today, we'll be sharing a fascinating conversation on autonomous finance from a recent webinar with Paul Condra, our head of emerging tech research, and Robert Le, a senior emerging tech analyst who focuses on fintech and insurtech.

Adam: Alec, would you believe it if I told you that you could purchase a robot to run your personal finances and wealth management?

Alexander: Well, normally, Adam, the skeptic in me would say that that's probably just a little impossible-sounding. The Silicon Valley fintech mavens, you never know what they're going to come up with. The fact is that millions of dollars of venture capital are being bet on apps that can do all of those things and more.
- North America > United States > California (0.24)
- Europe > United Kingdom (0.14)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Financial Services (1.00)
- Information Technology > Security & Privacy (0.68)
- Information Technology > e-Commerce > Financial Technology (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence (1.00)
Kitchens in the cloud: AI is restoring consumer trust in the food delivery business - Microsoft Stories India
Before COVID-19 struck India, Rajesh Agrawal and his wife, Meenakshi, would often get food from restaurants delivered to their home. A weekly treat of chicken tikka masala or lamb biryani would be a break from the vegetarian dishes they cook at home. It's been nearly a year since the Agrawals stopped ordering in food from their favorite restaurants. "There's no way to tell how clean and hygienic the restaurant kitchens are really," Mr. Agrawal says. "Sure, the government has released processes for restaurants during the pandemic. But we can't be certain that they're following those, can we?"
- Asia > India (0.66)
- Asia > Singapore (0.05)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.05)
- (3 more...)
Banks look at 'explainable' AI systems to boost consumer trust - Roll Call
Banks and other financial firms are investing in "explainable" artificial intelligence that lets auditors and analysts trace how decisions about loans and other services are made by financial technologies, experts say. The increasing use of software with AI capabilities such as machine learning and data mining has automated banking operations, increasing efficiency and providing more services. But privacy and civil liberties groups contend that this has come at a cost, with bias in AI algorithms leading to discrimination, such as loans or other services being denied on the basis of sex or ethnicity. This perception of algorithmic bias is a big problem for banks, which are investing in technical solutions to address it, Moutusi Sau, an analyst at research and advisory company Gartner Inc., told CQ Roll Call. The underlying issue is known as the black box problem with AI systems: software decision-making processes that are often opaque to humans, making it difficult or impossible to determine how a decision was made.
- Banking & Finance (1.00)
- Law > Civil Rights & Constitutional Law (0.61)
- Government > Military (0.38)