When Algorithms Rule, Values Can Wither

#artificialintelligence

Interest in the possibilities afforded by algorithms and big data continues to blossom as early adopters gain benefits from AI systems that automate decisions as varied as making customer recommendations, screening job applicants, detecting fraud, and optimizing logistical routes.1 But when AI applications fail, they can do so quite spectacularly.2 Consider the recent example of Australia's "robodebt" scandal.3 In 2015, the Australian government established its Income Compliance Program with the goal of clawing back unemployment and disability benefits that had been paid to recipients inappropriately. It set out to identify overpayments by analyzing discrepancies between the annual income that individuals reported and the income assessed by the Australian Tax Office.
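The core flaw reported in the robodebt program was income averaging: annual tax-office income was smoothed evenly across fortnights and compared with fortnightly self-reports, so anyone with irregular earnings could be flagged for a debt they did not owe. A minimal sketch of that flawed logic (function names, figures, and thresholds here are illustrative, not the program's actual code):

```python
# Hypothetical sketch of income-averaging, the logic at the heart of the
# "robodebt" controversy. All names and numbers are illustrative.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Spread annual income evenly across fortnights (the flawed assumption)."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_ato_income, reported_fortnightly):
    """Flag fortnights where reported income falls below the annual average."""
    avg = averaged_fortnightly_income(annual_ato_income)
    return [(i, avg - r) for i, r in enumerate(reported_fortnightly) if r < avg]

# A seasonal worker who earned all of $26,000 in 13 fortnights and truthfully
# reported zero income while unemployed is still flagged for the idle half:
worker = [2000.0] * 13 + [0.0] * 13
flags = flag_discrepancies(26_000, worker)
# Every zero-income fortnight shows a spurious $1,000 "underreporting" gap.
```

The sketch makes the failure mode concrete: the averaging step assumes steady income, so the system manufactures discrepancies precisely for the people most likely to need benefits.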


Why Your Board Needs a Plan for AI Oversight

#artificialintelligence

We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI -- a discussion that's relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it's time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization's overall mission and risk management. According to McKinsey's 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations "comprehensively identify and prioritize" the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, Fit for the Future: An Urgent Imperative for Board Leadership, 86% of board members "fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years."1


Meet modern compliance: Using AI and data to manage business risk better

#artificialintelligence

In June 2020, when the U.S. Department of Justice (DoJ) issued updated guidance on how to evaluate corporate compliance programs, it came with a clear mandate to companies: Compliance programs must use robust technology and data analytics to assess their own actions and those of any third parties they do business with, from the point of engagement onward. At the very least, companies are expected to be able to explain the rationale for using third parties, whether they have relationships with foreign officials, and any potential risks to their reputation. This is a compliance game-changer. Historically, organizations could argue that they simply did not have the information available to identify potential compliance dissonance across their networks: the "needle in a haystack" defense. Organizations are now expected to show that they are leveraging data and applying modern analytics to draw insights and navigate the risks across their entire business network.


Top Five Data Privacy Issues that Artificial Intelligence and Machine Learning Startups Need to Know - insideBIGDATA

#artificialintelligence

In this special guest feature, Joseph E. Mutschelknaus, a director in Sterne Kessler's Electronics Practice Group, addresses some of the top data privacy compliance issues that startups dealing with AI and ML applications face. He also assists with district court litigation and licensing issues. Based in Washington, D.C., and renowned for more than four decades for dedication to the protection, transfer, and enforcement of intellectual property rights, Sterne, Kessler, Goldstein & Fox is one of the most highly regarded intellectual property specialty law firms in the world. Last year, the Federal Trade Commission (FTC) hit both Facebook and Google with record fines relating to their handling of personal data. The California Consumer Privacy Act (CCPA), which is widely viewed as the toughest privacy law in the U.S., came online this year.


The Digital Twin and P&L of One - JD Supra

#artificialintelligence

Innovation in compliance can come in many forms. One such form was described by Vincent M. Walden, Managing Director at Alvarez and Marsal Holdings, LLC (A&M), in his article entitled "Profit & Loss-of-One" (P&L-of-One). In it, Walden detailed how he and his then colleagues at Ernst & Young (EY) worked in conjunction with the General Electric (GE) compliance function to "improve compliance by using forensic data analytics to provide behavioral insights to their compliance program." They did this through the innovative use of "digital twins," which Walden described as "digital replicas of physical assets that organizations can use for multiple purposes such as the maintenance of power generation equipment, jet engines and heavy machinery." In a more expansive definition, the consulting firm Gartner, Inc. described "digital twins" as dynamic software models of physical things or systems.


Attacking Artificial Intelligence: AI's Security Vulnerability and What Policymakers Can Do About It

#artificialintelligence

Artificial intelligence systems can be attacked. The methods underpinning the state-of-the-art artificial intelligence systems are systematically vulnerable to a new type of cybersecurity attack called an "artificial intelligence attack." Using this attack, adversaries can manipulate these systems in order to alter their behavior to serve a malicious end goal. As artificial intelligence systems are further integrated into critical components of society, these artificial intelligence attacks represent an emerging and systematic vulnerability with the potential to have significant effects on the security of the country. These "AI attacks" are fundamentally different from traditional cyberattacks. Unlike traditional cyberattacks that are caused by "bugs" or human mistakes in code, AI attacks are enabled by inherent limitations in the underlying AI algorithms that currently cannot be fixed. Further, AI attacks fundamentally expand the set of entities that can be used to execute ...
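The distinction the article draws, between attacks exploiting bugs and attacks exploiting the algorithms themselves, can be made concrete with a toy adversarial example: even a correctly implemented linear classifier can be flipped by a small, deliberately crafted input perturbation. The model, weights, and step size below are invented for illustration only:

```python
# Toy illustration of an "AI attack": no bug is exploited, yet a tiny
# crafted perturbation flips the classifier's decision. All numbers
# are invented for illustration.

def score(w, x):
    """Linear model: positive score -> class A, negative -> class B."""
    return sum(wi * xi for wi, xi in zip(w, x))

def adversarial_perturb(w, x, eps):
    """Nudge each feature against the sign of its weight (an FGSM-style
    step): the input barely changes, but the score drops maximally."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]     # a trained (toy) model
x = [0.2, 0.1, 0.1]      # clean input, scored as class A
x_adv = adversarial_perturb(w, x, eps=0.1)

print(score(w, x))       # positive: class A
print(score(w, x_adv))   # negative: class B, yet no feature moved more than 0.1
```

Because the vulnerability lives in the geometry of the learned decision boundary rather than in any line of code, patching the software does not remove it, which is the article's central point.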


Artificial Intelligence and Money Laundering: Would AI Catch Marty Byrde? - JD Supra

#artificialintelligence

In the popular Netflix series Ozark, money launderer Marty Byrde expends a lot of time and energy mitigating the risks that relate to his work, including his drug cartel client, a pair of farmers, the local pastor, and his own employee and her relatives--but financial regulators never appear to be a blip on his radar. Would the series turn out differently if Marty's bank had used artificial intelligence to examine his deposits? The feds may be hoping for a plot twist. Recently, several federal agencies jointly encouraged banks to consider developing new technologies, particularly AI technologies, in order to help protect the financial system against money laundering and terrorist financing. Banks are now encouraged to "consider, evaluate, and, where appropriate, responsibly implement innovative approaches to meet their Bank Secrecy Act/Anti-Money Laundering compliance obligations" by agencies including the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency.
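One simple pattern such monitoring systems look for is "structuring": repeated cash deposits kept just under the $10,000 currency-transaction-report threshold of the Bank Secrecy Act. Real AML systems use far richer features and machine-learned models; this sketch uses only the statutory reporting line plus illustrative parameters of my own choosing:

```python
# Toy sketch of one red flag an AML monitoring system might raise:
# "structuring" -- several deposits clustered just below the $10,000
# CTR threshold. The `near` band and `min_count` are illustrative.

CTR_THRESHOLD = 10_000

def looks_structured(deposits, near=0.9, min_count=3):
    """Flag if multiple deposits cluster just below the CTR threshold."""
    near_threshold = [d for d in deposits
                      if near * CTR_THRESHOLD <= d < CTR_THRESHOLD]
    return len(near_threshold) >= min_count

print(looks_structured([9500, 9800, 9900, 400]))  # True: classic structuring
print(looks_structured([12000, 300, 4500]))       # False: the large deposit gets reported, not hidden
```

An AI-based system would generalize this hand-written rule, learning deposit-timing and counterparty patterns instead of a fixed band, which is precisely the kind of innovation the agencies are encouraging banks to evaluate.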