Issues: AI-Alerts


Fintech Infographic of the Week: Ethical AI - Fintech Hong Kong

#artificialintelligence

Artificial intelligence (AI) is set to play a key role in the future of financial services and, more broadly, in what UBS and the World Economic Forum refer to as the "Fourth Industrial Revolution." The global economy is on the cusp of profound changes driven by "extreme automation" and "extreme connectivity." In this changing economic landscape, AI is expected to be a pervasive feature, making it possible to automate some of the skills that formerly only humans possessed. In the financial services industry in particular, there has been a lot of noise around the potential of AI, and funding data suggest that investors are excited about the impact the technology could have across the industry: VC-backed fintech AI companies raised approximately US$2.22 billion in 2018, nearly double 2017's record.


The future of AI research is in Africa

#artificialintelligence

In 2016, the Johannesburg team at IBM Research discovered that the process of reporting cancer data to the government, which used it to inform national health policies, took four years after diagnosis in hospitals. In the US, the equivalent data collection and analysis takes only two years. The additional lag turned out to be due in part to the unstructured nature of the hospitals' pathology reports. Human experts were reading each case and classifying it into one of 42 different cancer types, but the free-form text on the reports made this very time-consuming. So the researchers went to work on a machine-learning model that could label the reports automatically.
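
The article does not describe IBM's model, but the task it outlines, labeling free-form pathology text with one of a fixed set of cancer types, is a standard text-classification problem. A minimal sketch of that kind of pipeline might look like the following; the example report snippets, the three labels, and the TF-IDF-plus-logistic-regression approach are all illustrative assumptions, not IBM's actual method.

```python
# Illustrative sketch only: the article does not specify IBM's model.
# It shows a generic approach to labeling free-form pathology text
# with one of N cancer types, using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: free-form report text paired with a
# cancer-type label assigned by a human expert.
reports = [
    "infiltrating ductal carcinoma identified in left breast biopsy",
    "biopsy consistent with squamous cell carcinoma of the cervix",
    "sections show diffuse large B-cell lymphoma",
]
labels = ["breast", "cervical", "lymphoma"]  # 3 of the 42 types, for brevity

# Bag-of-words features (TF-IDF over unigrams and bigrams) feeding
# a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# A trained model can then label new reports automatically.
print(model.predict(["core biopsy: invasive ductal carcinoma, right breast"]))
```

In practice a model like this would be trained on many thousands of labeled reports and evaluated per class before replacing manual review, but the basic structure, text in, cancer-type label out, is the same.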


Microsoft President Brad Smith Discusses The Ethics Of Artificial Intelligence

NPR Technology

NPR's Audie Cornish talks with Microsoft President Brad Smith about why he thinks the government should regulate artificial intelligence, especially facial recognition technology.


Hey Google, sorry you lost your ethics council, so we made one for you

MIT Technology Review

After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC, a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing. How did things go so wrong? And can Google put them right?


The Tricky Ethics of Google's Cloud Ambitions

WIRED

Google's attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company's cloud computing conference, building on his predecessor's strategy of emphasizing Google's strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its own AI projects, and those of its customers, to ethical reviews. Those reviews have caused Google to turn away some business.


UK, US and Russia among those opposing killer robot ban

#artificialintelligence

The UK government is among a group of countries attempting to thwart plans to formulate and impose a pre-emptive ban on killer robots. Delegates have been meeting at the UN in Geneva all week to discuss potential restrictions under international law on so-called lethal autonomous weapons systems, which use artificial intelligence to help decide when, and whom, to kill. Most states taking part, particularly those from the global south, support either a total ban or strict legal regulation governing their development and deployment, a position backed by the UN secretary general, António Guterres, who has described machines empowered to kill as "morally repugnant". But the UK is among a group of states, including Australia, Israel, Russia and the US, speaking forcefully against legal regulation. Because the discussions operate on a consensus basis, their objections are preventing any progress on regulation.


Call to Ban Killer Robots in Wars

#artificialintelligence

A scientific coalition is urging a ban on the development of weapons governed by artificial intelligence (AI), warning they may malfunction unpredictably and kill innocent people. The coalition has established the Campaign to Stop Killer Robots to lobby for an international accord. Human Rights Watch's Mary Wareham said autonomous weapons "are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly, and land on their own; robotic sentries that can identify movement."


World calls for international treaty to stop killer robots before rogue states acquire them

The Independent - Tech

There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found that over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, while only around a fifth backed them. The figures showed growing public support for a treaty to regulate these controversial new technologies, one already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.


The EU Should Not Regulate Artificial Intelligence As A Separate Technology

#artificialintelligence

A report from the recent conference on Computers, Privacy and Data Protection suggested that the European Commission is "considering the possibility of legislating for Artificial Intelligence." Karolina Mojzesowicz, deputy head of the Data Protection Unit at the European Commission, said that the Commission is "assessing whether national and EU frameworks are fit for purpose for the new challenges." The Commission is exploring, for instance, whether to specify "how big a margin of error is acceptable in automated decisions and machine learning." The vehicle for this regulatory effort appears to be the draft Ethics Guidelines developed by a high-level expert group; the comment period on the draft closed on February 1, and a final report is due in March.


Beheaded in Philadelphia, punched in Silicon Valley and smeared with barbecue sauce in San Francisco: Why do humans hurt robots?

The Independent - Tech

A hitchhiking robot was beheaded in Philadelphia. A security robot was punched to the ground in Silicon Valley. Another security bot, in San Francisco, was covered in a tarp and smeared with barbecue sauce. Why do people lash out at robots, particularly those built to resemble humans? It is a global phenomenon. In a mall in Osaka, Japan, three boys beat a humanoid robot with all their strength. In Moscow, a man attacked a teaching robot named Alantim with a baseball bat, kicking it to the ground, while the robot pleaded for help.