

Do you want a home with artificial intelligence?

#artificialintelligence

There are plenty of horror and thriller films about a home turning against its occupants, and perhaps that is why many people are anxious about current AI and smart-home advancements. Consider the potential of home AI: ideally, a home AI system could learn your routine, set reminders for you, activate alarms, and even suggest ways to make your routine more efficient. It could also track your diet and make recommendations to ensure you get the right amount of nutrients each day. Even so, it seems like only a matter of time before your home AI system starts making mistakes.


We can't ban killer robots – it's already too late | Philip Ball

#artificialintelligence

One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or Laws, in the language of international treaties) is to say: shouldn't you have thought about that sooner? There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. Besides, there's a continuum between drone warfare, soldier-enhancement technologies and Laws that can't be broken down into "man versus machine". By all means let's try to curb our worst impulses to beat ploughshares into swords, but telling the international arms trade that it can't make killer robots is like telling soft-drinks manufacturers that they can't make orangeade.


Sorry, Banning 'Killer Robots' Just Isn't Practical

WIRED

That's not because it's impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention banning chemical weapons, for example. In 2015, the UK government responded to calls for a ban on autonomous weapons by saying there was no need for one, and that existing international law was sufficient. It has not suggested it would be open to an international agreement banning autonomous weapons.


Killer robots: Experts warn of 'third revolution in warfare' - BBC News

#artificialintelligence

In a letter to the organisation, artificial intelligence (AI) leaders, including billionaire Elon Musk, warn of "a third revolution in warfare". The letter says "lethal autonomous" technology is a "Pandora's box", adding that time is of the essence. Along with Tesla co-founder and chief executive Mr Musk, the technology leaders include Mustafa Suleyman, Google's DeepMind co-founder. A potential ban on the development of "killer robot" technology has previously been discussed by UN committees.


Elon Musk joins other experts in call for global ban on killer robots

FOX News

Tesla CEO Elon Musk and other leading artificial intelligence experts have called on the United Nations for a global ban on the use of killer robots, a category that includes drones, tanks and machine guns, The Guardian reported on Sunday. The experts call autonomous weapons "morally wrong." The report said the experts hope to add killer robots to the U.N.'s list of banned weapons, which includes chemical weapons and intentionally blinding laser weapons. In a July 15 speech at the National Governors Association Summer Meeting in Rhode Island, Musk said the government needs to proactively regulate artificial intelligence before there is no turning back, describing it as the "biggest risk we face as a civilization."


Elon Musk on artificial intelligence: If you're not concerned, you should be

#artificialintelligence

After Mr Musk called AI "a fundamental existential risk for human civilisation", Facebook founder Mark Zuckerberg branded his views as "negative" and "pretty irresponsible".


Musk, tech experts want U.N. to ban killer robots

USATODAY

A group of technology experts including Tesla and SpaceX CEO Elon Musk is warning the United Nations about the potential threat posed by autonomous weapons. In an open letter addressed to the U.N.'s Convention on Certain Conventional Weapons, 116 founders and CEOs of robotics and artificial intelligence companies call for a ban on "killer robot" weapons. The signatories applauded the U.N. for creating a Group of Governmental Experts (GGE) to consider lethal autonomous weapon systems.


I was worried about artificial intelligence--until it saved my life

#artificialintelligence

I was thankful for the AI that saved my life, and then that very same algorithm changed my son's potential career path. Fearing for their own jobs and their children's future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society. After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways.


The importance of building ethics into artificial intelligence

Mashable

A crucial step toward building a secure and thriving AI industry is collectively defining what ethical AI means, both for the people developing the technology and for the people using it. At Sage, we define ethical AI as the creation of intelligent machines that work and react like humans, built with the ability to autonomously conduct, support or manage business activity across disciplines in a responsible and accountable way. The industry should therefore focus on developing and growing a diverse talent pool that can build AI technologies to enhance business operations and address specific workplace issues, while ensuring those technologies remain accountable. Hopefully, AI's human co-workers, including the people actually building the technology, will learn vital AI management skills, adopt strong ethics and hold themselves more accountable in the process.