The fight against fraud has always been a messy business, but it's especially grisly in the digital age. To keep ahead of the cybercriminals, investment in technology – particularly artificial intelligence – is paramount, says Ajay Bhalla, president of cyber and intelligence solutions at Mastercard. Since the opening salvo of the coronavirus crisis, cybercriminals have launched increasingly sophisticated attacks across a multitude of channels, taking advantage of heightened emotions and poor online security. Some £1.26 billion was lost to financial fraud in the UK in 2020, according to UK Finance, a trade association, while there was a 43% year-on-year explosion in internet banking fraud losses. The banking industry managed to stop some £1.6 billion of fraud over the course of the year, equivalent to £6.73 in every £10 of attempted fraud.
In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems. A.I. is celebrated for its superior accuracy, efficiency, and objectivity in comparison to humans.
The author is Chief Technology Officer at Integrity Management Services, Inc., where she leads cutting-edge AI solutions for clients. Artificial intelligence is ubiquitous today. Most of us do not know where AI is being used and are unaware of the biased decisions that some of these algorithms produce. There are AI tools that claim to infer "criminality" from face images, race from facial expressions and emotion from eye movements. Many of these technologies are increasingly used in applications that affect credit card checks, fraud detection, criminal justice decisions, hiring practices, healthcare outcomes, education and lifestyle decisions, and even the spread of misinformation.
Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of clear-cut regulations, many of these organizations don't know with any certainty whether those AI models will one day run afoul of new AI rules. Ted Kwartler, vice president of Trusted AI at DataRobot, talked with VentureBeat about why it's critical for AI models to make predictions "humbly" to ensure they don't drift or one day run afoul of government regulations. This interview has been edited for brevity and clarity. VentureBeat: Why do we need AI to be humble?
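One common way to make a model predict "humbly" (a sketch of the general idea, not DataRobot's actual implementation) is to have it abstain and defer to a human reviewer whenever its top-class confidence falls below a threshold. The threshold value and function names below are invented for illustration.

```python
# Illustrative sketch of "humble" prediction via confidence thresholding.
# A prediction is only trusted when the model's top class probability
# clears a threshold; everything else is flagged for human review.

def humble_predict(probabilities, threshold=0.8):
    """For each class-probability vector, return (predicted_class, confident).

    If the highest probability is below `threshold`, `confident` is False
    and the case should be routed to a human reviewer.
    """
    results = []
    for probs in probabilities:
        top = max(range(len(probs)), key=lambda i: probs[i])
        confident = probs[top] >= threshold
        results.append((top, confident))
    return results

# A clear-cut case is accepted; a borderline one is deferred to a human.
preds = humble_predict([[0.95, 0.05], [0.55, 0.45]])
print(preds)
```

In production systems this same pattern often pairs with drift monitoring: deferred cases accumulate as labeled data, which reveals when the input distribution has shifted away from what the model was trained on.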
The US Securities and Exchange Commission (SEC) has charged a man from Florida for his alleged role in three separate securities fraud scams. Edgar Radjabli, a former dentist, controlled Apis Capital Management LLC, marketed as an advisory firm that the SEC says was unregistered. Through this company, Radjabli, as managing partner, allegedly controlled Apis Tokens, an offering touted as the "first tokenized hedge fund" and built on the Stellar platform. Apis Tokens were pitched as a way for investors to access the ACM Market Neutral Volatility Strategy fund by converting cryptocurrency, including Bitcoin (BTC) and Ethereum (ETH), into Apis Tokens and stakes in the fund.
Content monitoring through AI technologies, smart cameras for facial identification and DNA profiling algorithms are among the techniques witnessing a surge throughout the world. These technologies provide us with data that appears reliable and trustworthy, but questions about their accuracy remain a matter of debate. Let's look at a recent case to understand the apprehension. Recently, in two separate judgments, a judge from the Appellate Division of the Superior Court of New Jersey and a federal judge in Pennsylvania in the United States ordered prosecutors to hand over the source code of TrueAllele by Cybergenetics. The software program ran DNA data recovered from a gun through complex statistical algorithms to compute the probability that a specific person's DNA was present.
Applied AI is put to work in various forms, depending on its purpose. These forms include natural language generation, chatbots, speech or image recognition, and sentiment analysis. The technology has become so omnipresent that it has even made its way into CRM platforms, enabling better customer handling and increased customer satisfaction. Marketing uses applied AI to target the right advertisement to the right audience; education uses it to shape curricula; law enforcement uses chatbots for threat detection; finance uses it to analyze trading trends; manufacturing uses it for logistical support; and healthcare uses it for early detection and disease diagnosis, amongst many other uses.
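To make one of these forms concrete, sentiment analysis in its simplest incarnation can be done with a lexicon lookup. The word lists and weights below are invented for demonstration; production systems typically use trained models rather than hand-built lexicons.

```python
# Illustrative only: a toy lexicon-based sentiment scorer, one of the
# simplest forms of the sentiment analysis mentioned above.

POSITIVE = {"good", "great", "excellent", "satisfied", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "unhappy", "slow"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("Great product, very satisfied!"))  # positive score
print(sentiment_score("Terrible and slow support."))      # negative score
```

A CRM platform might apply a score like this to incoming support tickets to route unhappy customers to senior agents first.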
The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that will foster increased trust in its adoption. Similar to the EU's General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR went on to spearhead data privacy regulations across the United States and around the world.
A new AI-based chatbot designed to give identity crime victims after-hours help was also built with future B2B applications in mind, including helping employees report a cyberattack when the IT or security team is unavailable. The chatbot, known as ViViAN, is a new service currently undergoing beta testing by the Identity Theft Resource Center (ITRC), leveraging technology developed by its partner SAS Institute. Thanks to ViViAN, individuals do not have to wait until normal ITRC business hours to report an incident; rather, they can lodge their complaints with the chatbot and receive reassurance and guidance on the immediate next steps they should take. All communications with ViViAN are later followed up by a live agent when one becomes available. This way, victims are able to act swiftly when their data is at stake and time is of the essence.