
How to De-Bias Artificial Intelligence in Banking

#artificialintelligence

Michelle Palomera, Global Head of Banking and Capital Markets at Rightpoint, has experience with this. With over 25 years of experience in customer and digital consulting, Michelle combines practical industry and technology knowledge with a personalised style when working directly with clients and team members. Her extensive knowledge of financial services, which spans consumer, buy-side/wealth, commercial and institutional banking, helps clients develop strategies for new revenue channels as well as launch new businesses through digital products and services. Here she explains how to de-bias AI in banking. When bias becomes embedded in AI software, financial institutions may unfairly reward certain groups over others, make bad decisions, issue false positives and diminish their opportunities. This will ultimately result in poor customer experience, decreased revenues and increased costs and risks.
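The summary does not spell out a de-biasing method, so as a minimal sketch of one common first step, the snippet below checks whether a model's approval rates differ across applicant groups (a demographic-parity-style comparison). The column names and data are hypothetical.

```python
# Minimal sketch: compare loan-approval rates across groups to surface
# potential bias in a model's decisions. Column names and data are hypothetical.
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, approved_col: str) -> float:
    """Return the gap between the highest and lowest approval rates across groups."""
    rates = df.groupby(group_col)[approved_col].mean()
    return float(rates.max() - rates.min())

# Example: decisions produced by a (hypothetical) credit model.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   1,   0,   1],
})

gap = approval_rate_gap(decisions, "applicant_group", "approved")
print(f"Approval-rate gap between groups: {gap:.2f}")
```

A large gap is not proof of unfairness on its own, but it is the kind of signal that would prompt a closer audit of the model and its training data.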


How to Increase Group Insurance Sales with AI - Global IQX

#artificialintelligence

During peak business periods for group carriers, such as open enrollment in the United States, artificial intelligence can be leveraged to increase group insurance sales by streamlining quoting, optimizing resources, automating manual tasks and eliminating duplication of effort before and during enrollment. Peak enrollment period is here once again as group and voluntary benefits providers put their remote work arrangements to the test in what will be an unusually demanding season. This year has been the year of digital transformation in the insurance industry, and 2020's challenges will inspire new approaches and digitization within carrier ecosystems. Fortunately, insurers can use AI and predictive analytics to increase group insurance sales. AI can help carriers streamline quoting and enrollment, optimize resources, and automate manual tasks.


AI systems can easily create fake faces

#artificialintelligence

There is no shortage of businesses that sell fake people, whether as characters in a video game or to help websites appear more diverse. By choosing different values for features such as eye size, AI systems can alter the entire generated image. Simulated people have also been put to work as fake spies infiltrating the intelligence community and even as online harassers.
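As a rough illustration of the mechanism described, here is a sketch of latent-space editing as used in face generators such as GANs. The `generate_face` function and `EYE_SIZE_DIRECTION` vector are hypothetical stand-ins for a trained generator and a learned feature direction, not a real model.

```python
# Sketch of latent-space editing in a face generator (e.g., a GAN).
# `generate_face` and `EYE_SIZE_DIRECTION` are hypothetical placeholders
# standing in for a trained generator and a learned feature direction.
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

def generate_face(latent: np.ndarray) -> np.ndarray:
    """Placeholder for a trained generator mapping a latent vector to an image."""
    return np.zeros((256, 256, 3))  # stand-in output

EYE_SIZE_DIRECTION = rng.normal(size=LATENT_DIM)  # stand-in feature direction

z = rng.normal(size=LATENT_DIM)         # a random "identity"
face = generate_face(z)                 # baseline synthetic face
face_bigger_eyes = generate_face(z + 2.0 * EYE_SIZE_DIRECTION)  # same identity, nudged feature
```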


5 Foundational Pillars for Ensuring Responsible AI

#artificialintelligence

We are seeing overwhelming growth in AI/ML systems built to process the oceans of data generated in the new digital economy. However, with this growth comes a need to seriously consider the ethical and legal implications of AI. As we entrust increasingly sophisticated and important tasks to AI systems, such as automatic loan approval, we must be absolutely certain that these systems are responsible and trustworthy. Reducing bias in AI has become a massive area of focus for many researchers and carries huge ethical implications, as does the amount of autonomy that we give these systems. The concept of Responsible AI is an important framework that can help build trust in your AI deployments.


Researchers developed 'explainable' AI to help diagnose and treat at-risk children

#artificialintelligence

A pair of researchers from Oak Ridge National Laboratory have developed an "explainable" AI system designed to aid medical professionals in the diagnosis and treatment of children and adults who've experienced childhood adversity. While this is a decidedly narrow use case, the nuts and bolts behind this AI have particularly interesting implications for the machine learning field as a whole. It also represents the first real data-driven solution to the outstanding problem of empowering general medical practitioners with expert-level domain diagnostic skills -- an impressive feat in itself. Let's start with some background. Adverse childhood experiences (ACEs) are a well-studied class of medically relevant environmental factors whose lifelong effects, especially on people in minority communities, have been thoroughly documented. While the symptoms and outcomes are often difficult to diagnose and predict, the most common interventions are usually easy to employ.
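The summary does not describe the researchers' actual technique, so as a generic illustration of what "explainable" can mean in practice, here is a short sketch using permutation importance, a standard model-agnostic way to rank how much each input feature drives a model's predictions. The data here is synthetic and the model is an ordinary scikit-learn classifier, not the system discussed above.

```python
# Illustration only: a generic model-agnostic explanation technique
# (permutation importance), shown on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)  # synthetic data
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```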


Google is testing an AI system to help vision-impaired people run races

Engadget

Google is testing an artificial intelligence system designed to help blind and vision-impaired people run races by themselves. Project Guideline, which is an early-phase research program, is an attempt to give those people more independence. They wouldn't necessarily need to rely on a tethered human guide or a guide dog to help them around a course. To use the system, a runner attaches an Android phone to a Google-designed harness that goes around their waist, according to VentureBeat. A Project Guideline app can use the phone's camera to track a guideline that's been laid down on a course. The app then sends audio cues to bone-conducting headphones when a runner veers away from the line -- the sound will get louder in one ear the further they stray to the side.
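A minimal sketch of the cue logic described above, mapping how far the runner has drifted from the line to per-ear volume. The offset scale and the choice of which ear sounds for which direction are assumptions for illustration, not Project Guideline's actual behaviour.

```python
# Sketch: the farther the runner drifts from the guideline, the louder the
# audio cue in the ear on that side. Scale and side assignment are assumed.

def audio_cue_volumes(offset_m: float, max_offset_m: float = 1.0) -> tuple[float, float]:
    """Return (left_volume, right_volume) in [0, 1] for a lateral offset in metres.

    Negative offsets mean the runner has drifted left of the line,
    positive offsets mean they have drifted right.
    """
    level = min(abs(offset_m) / max_offset_m, 1.0)
    if offset_m < 0:
        return (level, 0.0)   # drifting left -> cue in the left ear
    return (0.0, level)       # drifting right -> cue in the right ear

print(audio_cue_volumes(-0.3))  # slight drift to the left
print(audio_cue_volumes(0.9))   # strong drift to the right
```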


Time may be right for professionalizing artificial intelligence practices

ZDNet

With so much riding on the performance and accuracy of artificial intelligence algorithms -- from medical diagnoses to legal advice to financial planning -- there have been calls for the "professionalization" of AI developers, through mechanisms such as certifications and accreditations, all the way up to government mandates. After all, it is argued, healthcare professionals, lawyers and financial advisors all require varying levels of certification, so why shouldn't the people creating the AI systems that could replace the advice of these professionals also be verified? "For example, you understand that architects, electricians and other construction professionals know how to build a house," says Fernando Lucini, global lead in data science and machine learning engineering at Accenture. "They've had requisite training and understand their roles and responsibilities, safety standards and protocols to follow throughout the construction process. It's unlikely that you'd trust a 'citizen architect' to build your home, in the same way that you wouldn't visit a 'citizen doctor' when you get sick."


UN and Europol Warn of Growing AI Cyber-Threat

#artificialintelligence

Cyber-criminals are just getting started with their malicious targeting and abuse of artificial intelligence (AI), according to a new report from Europol and the UN. Compiled with help from Trend Micro, the Malicious Uses and Abuses of Artificial Intelligence report predicts that AI will in future be used both as an attack vector and as an attack surface. In effect, that means cyber-criminals are looking for ways to use AI tools in attacks, but also for methods to compromise or sabotage existing AI systems, like those used in image and voice recognition and malware detection. The report warned that, while deepfakes are the most talked-about malicious use of AI, there are many other use cases that could be under development. These include machine learning or AI systems designed to produce highly convincing and customized social engineering content at scale, or perhaps to automatically identify the high-value systems and data in a compromised network that should be exfiltrated.


Making Sense of the AI Landscape

#artificialintelligence

As AI tools become more commonplace, many businesses find themselves playing catch up when it comes to incorporating these new systems into their existing infrastructure. And that's more than understandable -- these tools are highly varied, often poorly understood, and constantly evolving. To start making sense of the AI landscape and determine how your business will need to adapt, the first thing to understand is that the term "AI" in fact covers a huge spectrum of different things. In a study presented in the forthcoming book Artificial Intelligence for Sustainable Value Creation, we mapped out how more than 800 different AI systems were being used across 14 industries. Based on our analysis, these systems fell into four distinct categories: systems that complete rote tasks with limited ethical implications, systems that complete rote tasks that do have an ethical component, systems that complete creative tasks with limited ethical implications, and systems that require both creativity and ethical decision-making.
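A toy sketch of that four-way categorization: each system is placed by whether its task is rote or creative and whether it carries significant ethical implications. The example systems and their labels are invented for illustration.

```python
# Toy sketch of the four categories described above, keyed on two dimensions:
# is the task creative, and does it carry significant ethical implications?

def categorize(creative: bool, ethical: bool) -> str:
    if not creative and not ethical:
        return "rote task, limited ethical implications"
    if not creative and ethical:
        return "rote task with an ethical component"
    if creative and not ethical:
        return "creative task, limited ethical implications"
    return "requires both creativity and ethical decision-making"

# Invented examples, for illustration only.
examples = {
    "invoice data-entry bot":      (False, False),
    "automatic loan approval":     (False, True),
    "music recommendation engine": (True,  False),
    "patient triage assistant":    (True,  True),
}
for name, (creative, ethical) in examples.items():
    print(f"{name}: {categorize(creative, ethical)}")
```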


Don't Fear Artificial General Intelligence

#artificialintelligence

AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs. Google Translate helps us understand foreign language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps.