Social Issues


The code of ethics for AI and chatbots that every brand should follow - Watson

#artificialintelligence

Key Points:
– Businesses often overlook important issues related to the morals and ethics of chatbots and AI
– Customers need to know when they are communicating with a machine and not an actual human
– Ownership of information shared with a bot is another key ethical consideration and can create intellectual property issues
– The privacy and protection of user data is paramount in today's interconnected world

(You can also listen to The Modern Customer Podcast with Rob High here.) Businesses are rapidly waking up to the need for chatbots and other self-service technology. From automating basic communications and customer service to reducing call center costs and providing a platform for conversational commerce, chatbots offer many new opportunities to delight and better serve consumers. Chatbots can offer 24/7 customer service, rapidly engaging users and answering their queries whenever they arrive. Millennials in particular are impatient when engaging with brands and expect real-time responses.


Sequoia Backs Graphcore as the Future of Artificial Intelligence Processors

#artificialintelligence

Graphcore has today announced a $50 million Series C funding round led by Sequoia Capital as the machine intelligence company prepares to ship its first Intelligence Processing Unit (IPU) products to early access customers at the start of 2018. The Series C round enables Graphcore to significantly accelerate growth to meet the expected global demand for its machine intelligence processor. The funding will be dedicated to scaling up production, building a community of developers around the Poplar software platform, driving Graphcore's extended product roadmap, and investing in its Palo Alto-based US team to help support customers. Nigel Toon, CEO at Graphcore, said: "Efficient AI processing power is rapidly becoming the most sought-after resource in the technological world. We believe our IPU technology will become the worldwide standard for machine intelligence compute."


'Killer robots' will start slaughtering people if they're not banned soon, AI expert warns

The Independent

An artificial intelligence expert has called for countries to ban so-called "killer robots" before activists' warnings against them become a reality. The Campaign to Stop Killer Robots recently released a short film in which autonomous weapons are used to carry out mass killings with frightening efficiency, while people struggle to work out how to combat them. A United Nations panel discussed the issue last week, but does not plan to meet again until next year. Toby Walsh, Scientia Professor of AI at UNSW Sydney, says he's "confident" that killer robots will be banned, but is worried that the decision could take a long time to make. "[The] arms race has happened [and] is happening today," he said at the UN, reports AFP.


Call for ban on 'killer robots' - but are they really on the way?

#artificialintelligence

"ROBOTS ARE NOT taking over the world", was the message given this week during United Nations talks on the issue of autonomous weapons. That's according to the diplomat leading the first official talks on the issue, the Convention on Conventional Weapons (CCW), as they sought to ease criticism over slow progress towards restricting the use of so-called "killer robots". The United Nations was wrapping up an initial five days of discussions on weapons systems that can identify and destroy targets without human control, which experts say will soon be battle ready. The meeting of the CCW marked an initial step towards an agreed set of rules governing such weapons. But activists warned that time was running out and that the glacial pace of the UN-brokered discussions was not responding to an arms race already underway.


Chief scientist Alan Finkel calls for ethical AI stamp

#artificialintelligence

Australia's Chief Scientist, Alan Finkel, has called on governments and businesses across the world to consider developing a regulatory framework for artificial intelligence devices, ranging from the likes of Apple's Siri to weaponised drones. Dr Finkel, who was speaking at the Creative Innovation Global conference, said he was optimistic about AI, but that an ethical stamp needed to be developed, similar to a Fair Trade label, in order to give consumers trust that the AI in a device had been developed according to specified global standards. "Two years ago I published an article in Cosmos magazine calling for a global accord [on weaponised drones]. In the same year, more than 3000 AI and robotics researchers signed an open letter urging the leaders of the world to take action to prevent a global arms race," he said. "On the other end of the spectrum are tools in everyday use, such as social media platforms and smartphones."


Satya Nadella talks about the future of Artificial Intelligence, Mixed Reality

#artificialintelligence

Satya Nadella spoke at India Today Conclave 2017, where he talked about the spread of digital technology in the Indian landscape. Nadella shared his experience of how India changed from being a service provider to using its own IT prowess across various sectors. During his session at the Conclave, Nadella shared a few instances where artificial intelligence and mixed reality have already started making major impacts in the fields of health and business development. Nadella spoke about the numerous opportunities that come with technological advancements. However, he believes that with tremendous opportunity comes tremendous responsibility.


The future of work: Technology, jobs and augmented intelligence

#artificialintelligence

Work as we know it is in a state of flux. Technology is imposing rapid change, and the rise of automation capabilities and artificial intelligence is the chief catalyst. As Salesforce's Futurist, I spend a lot of time forward-thinking and analysing trend data, and have shared my thoughts on what this technological change means for the future of work and how to navigate it. There's a lot of angst in the world right now that the rise of smart technologies is going to disemploy vast numbers of people. I appreciate why there's anxiety, but if we look at history as a predictor of the future, this simplistic idea that 'technology steals jobs' is unfounded.


When algorithms discriminate: Robotics, AI and ethics

Al Jazeera

We live in an age of rapid technological advances where artificial intelligence (AI) is a reality, not a science fiction fantasy. Every day we rely on algorithms to communicate, do our banking online, book a holiday - even introduce us to potential partners. Driverless cars and robots may be the headline makers, but AI is being used for everything from diagnosing illnesses to helping police predict crime hot spots. As machines become more advanced, how does society keep pace when deciding the ethics and regulations governing technology? Al Jazeera talks to Stephen Roberts, professor of Machine Learning at the University of Oxford, United Kingdom, on the role machine learning plays in our lives today - and in the future.


UN panel to debate 'killer robots' and other AI weapons

#artificialintelligence

A United Nations panel agreed Friday to consider guidelines and potential limitations for military uses of artificial intelligence amid concerns from human rights groups and other leaders that so-called "killer robots" could pose a long-term, lethal threat to humanity. Advocacy groups warned about the threats posed by such "killer robots" and aired a chilling video illustrating their possible uses on the sidelines of the first formal U.N. meeting of government experts on Lethal Autonomous Weapons Systems this week. More than 80 countries took part. Ambassador Amandeep Gill of India, who chaired the gathering, said participants plan to meet again in 2018. He said ideas discussed this week included the creation of a legally binding instrument, a code of conduct, or a technology review process.


Panel aims to pull plug on killer robots

Boston Herald

A U.N. panel agreed yesterday to move ahead with talks to define and possibly set limits on weapons that can kill without human involvement, as human rights groups said governments are moving too slowly to keep up with advances in artificial intelligence that could put computers in control one day.