regulation


As AI gains enterprise 'citizenship,' it needs a foundation in trust, Accenture exec says

#artificialintelligence

As AI systems gain increasing "citizenship" in the enterprise, organizations have to ensure the "brains in a box" are adequately trained with enough data sets and content to establish a "bedrock in trust," said Mike Redding, managing director of Strategic Technology Innovation for Accenture, speaking Friday at an Accenture Technology Vision event in Washington, D.C. Deployment of AI is complicated, particularly in legacy environments. This challenge has helped give rise to the popularity of robotic process automation (RPA), which is considered "entry level AI," Redding said. Companies can deploy a digital "agent" to an employee's desktop to augment and streamline tasks, such as those requiring the processing of nuanced regulations. As AI is rolled out, however, companies will need to understand what goes into these systems before they are deployed in order to ensure trust. With the GDPR deadline approaching, companies will soon have to explain what goes into AI, Redding said.


Artificial intelligence, robots and a human touch Letters

#artificialintelligence

Elon Musk's comment that humans are underrated (Humans replace robots at flagging Tesla plant, 17 April) doesn't come as much of a surprise, even though his company is at the forefront of the technological revolution. Across industries, CEOs are wrestling with the balance between humans and increasingly cost-effective and advanced robots and artificial intelligence. However, as Mr Musk has discovered, the complexity of getting a machine to cover every possibility results in a large web of interconnected elements that can overcomplicate the underlying problem. This is why so many organisations fail when they try to automate everything they do. Three key mistakes I see time and again in these situations are missing the data basics, applying the wrong strategy, and losing the human touch.


U.S. Banks Should Seek New Solutions - Not Reduced Expectations - In...

#artificialintelligence

The Clearing House recently issued a report that proposed softening Anti-Money Laundering (AML) and Bank Secrecy Act (BSA) regulations. The proposed revamp of rules at this time is both understandable and predictable. Fines totaling hundreds of millions of dollars are now commonplace. Reacting to increased regulations and related fines, covered financial institutions have been spending billions of dollars annually to reduce the likelihood of actions being taken against them. Despite increasing regulatory burdens, growing fines, and requisite increases in compliance spending, the money laundering problem is not abating.


The U.S. Needs a New Paradigm for Data Governance

@machinelearnbot

The U.S. Senate and House hearings last week on Facebook's use of data and foreign interference in the U.S. election raised important challenges concerning data privacy, security, ethics, transparency, and responsibility. They also illuminated what could become a vast chasm between traditional privacy and security laws and regulations and rapidly evolving internet-related business models and activities. To help close this gap, technologists need to seriously reevaluate their relationship with government. Here are four ways to start. Help to increase tech literacy in Washington.


Canada to Facebook: 'The time of self-regulation is over'

Mashable

Canadians are famous for saying 'sorry,' but Mark Zuckerberg's apologies are not ringing true for the northern nation. The Canadian House of Commons is conducting formal hearings on the 'Breach of personal information involving Cambridge Analytica and Facebook.' On Tuesday, it kicked off two days of political and expert questioning in its Standing Committee on Access to Information, Privacy and Ethics (ETHI). Facebook Canada's Global Director and Head of Public Policy, Kevin Chan, is slated to appear on Thursday at 8:45 AM. On Tuesday, Daniel Therrien, privacy commissioner of Canada, spoke with the committee.


UK report urges action to combat AI bias

#artificialintelligence

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament. "The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct," the committee writes, chiming with plenty of extant commentary around algorithmic accountability. "It is essential that ethics take centre stage in AI's development and use," adds committee chairman, Lord Clement-Jones, in a statement. "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences." The report also calls for the government to take urgent steps to help foster "the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions" -- recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.
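As a rough illustration of what such an auditing tool might do, the sketch below (in Python) checks whether the groups in a training set appear in roughly the proportions found in a reference population. The attribute name, benchmark shares and tolerance are illustrative assumptions only, not anything specified in the Lords report.

# A minimal sketch of the kind of training-data audit the report calls for.
# The attribute, benchmark shares and tolerance are illustrative assumptions.
from collections import Counter

def audit_representation(records, attribute, population_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference population share by more than `tolerance`."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    findings = []
    for group, expected_share in population_shares.items():
        observed_share = counts.get(group, 0) / total if total else 0.0
        if abs(observed_share - expected_share) > tolerance:
            findings.append((group, observed_share, expected_share))
    return findings

# Hypothetical usage: training records with a 'gender' field, audited against
# made-up census-style benchmark shares.
records = [{"gender": "female"}, {"gender": "male"}, {"gender": "male"},
           {"gender": "male"}, {"gender": "female"}, {"gender": "male"}]
benchmarks = {"female": 0.51, "male": 0.49}
for group, observed, expected in audit_representation(records, "gender", benchmarks):
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population")

A production-grade audit would obviously cover many attributes and intersectional groups, but the shape of the check is the same.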


No need to regulate robots, say peers

#artificialintelligence

New developments in artificial intelligence do not yet need specific new laws to control possible harmful effects, a landmark inquiry by peers recommends today. However, the House of Lords Select Committee on Artificial Intelligence's 180-page report proposes that the government draft an international ethical code, which would include a ban on autonomous weapons, the so-called 'killer robots'. In researching the report, the Lords' investigation took evidence from a wide range of ethical and legal experts, including the Law Society, law firms and Gazette columnist Joanna Goodman, as well as figures from industry and academia. Its overall finding was that the UK is in a strong position to lead developments, with its 'constellation of legal, ethical, financial and linguistic strengths'. However, committee chair Lord Clement-Jones (DLA Piper partner Timothy Clement-Jones) noted that: 'AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these.'


GDPR and the Paradox of Interpretability

@machinelearnbot

Summary: GDPR carries many new data and privacy requirements, including a "right to explanation". On the surface this appears to be similar to US rules for regulated industries. We examine why this is actually a penalty and not a benefit for the individual, and offer some insight into the actual wording of the GDPR regulation, which also offers some relief. GDPR is now just about 60 days away, and there's plenty to pay attention to, especially in getting and maintaining permission to use a subscriber's data. If you're just starting out in the EU, there are some new third-party offerings that promise to keep track of things for you (Integris, Kogni, and Waterline all emphasized this feature at the Strata Data San Jose conference this month).
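For readers wondering what an "explanation" of an automated decision might look like in practice, the Python fragment below shows one common approach for a simple additive scoring model: report how much each input pushed the decision either way. The weights, feature names and threshold are invented for illustration; the GDPR text does not prescribe any particular technique.

# A minimal sketch of a per-decision explanation for a simple additive scoring
# model. Weights, features and threshold are illustrative assumptions only.
import math

WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "years_at_address": 0.3}
BIAS = -0.5
THRESHOLD = 0.5  # approve when the predicted probability exceeds this

def explain_decision(applicant):
    # Each feature's contribution to the score is its weight times its value.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    decision = "approved" if probability > THRESHOLD else "declined"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return decision, probability, ranked

decision, probability, ranked = explain_decision(
    {"income": 1.2, "existing_debt": 0.9, "years_at_address": 2.0})
print(f"Decision: {decision} (p={probability:.2f})")
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.2f}")

The interpretability trade-off discussed in the article arises because a model this transparent is rarely the most accurate one available.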


Martech Archives - Marketing Technology

#artificialintelligence

If you're like the unbreakable Kimmy Schmidt and got stuck in a bomb shelter in 2017, it may be both a blessing and a curse that you missed the machine learning for marketing media frenzy. Machine learning showed up everywhere, rivaling electricity's systemic emergence a century ago, allegedly injecting sage-like wisdom into everything from sales forecasting tools to email subject line generators. But buildup and hype aside, real progress was made in using machine learning for marketing purposes, reaching high-impact areas as unprecedented investments poured in. More resources supporting great minds pushed forward innovation in areas like image recognition, voice technologies, and natural language generation (NLG). And savvy brands that mindfully wired these into marketing applications boosted performance, in some cases realizing 400 percent ROI.


Cambridge Analytica scandal 'highlights need for AI regulation'

The Guardian

Britain needs to lead the way on artificial intelligence regulation, in order to prevent companies such as Cambridge Analytica setting precedents for dangerous and unethical use of the technology, the head of the House of Lords select committee on AI has warned. The Cambridge Analytica scandal, Lord Clement-Jones said, reinforced the committee's findings, released on Monday in the report "AI in the UK: ready, willing and able?" "These principles do come to life a little bit when you think about the Cambridge Analytica situation," he told the Guardian. "Whether or not the data analytics they carried out was actually using AI … It gives an example of where it's important that we do have strong intelligibility of what the hell is going on with our data." Clement-Jones added: "With the whole business in [the US] Congress and Cambridge Analytica, the political climate in the west now is much riper in terms of people agreeing to … a more public response to the ethics and so on involved. It isn't just going to be left to Silicon Valley to decide the principles."