Speculation is rife that the EU might soon introduce a new set of rules and regulations aimed at artificial intelligence developers. FREMONT, CA: After rolling out comprehensive data privacy regulations, the EU is now considering norms to regulate the development of Artificial Intelligence (AI) solutions. AI has been a revelation and is among the most widely used of the advanced technologies. Although beneficial, there have been apprehensions about the potential misuse of the technology, owing to its far-reaching capabilities. These apprehensions may well be what is driving the EU to put a new AI policy on the cards.
Self-driving cars are one of the high-risk artificial intelligence applications the European Union wants to regulate. The European Commission today unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China. The commission will draft new laws, including a ban on "black box" AI systems that humans can't interpret, to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference announcing the plan that the goal is to promote "trust, not fear." The plan also includes measures to update the European Union's 2018 AI strategy and pump billions into R&D over the next decade.
In the near future, threat actors will begin using artificial intelligence (AI) to craft malware expressly designed to evade your next-gen cyber defenses. On October 3rd, BlackBerry Cylance Staff Data Scientist Michael Slawinski and Security Engineer Josh Fu reviewed the current state of AI, assessed its future, and considered the central role cyber resilience will play in the arms race between attackers and defenders. You won't need a PhD in math to appreciate the significance of the data science Josh and Michael shared during this important webinar. We also invite you to stay current on the latest cybersecurity news, trends, and research by checking out our schedule of on-demand and live webinars.
The Defense Department has officially adopted a set of principles to ensure ethical artificial intelligence adoption, but much work is needed on the implementation front, senior DOD tech officials told reporters Feb. 24. The five principles [see sidebar], which are based on the recommendations of the Defense Innovation Board's 15-month study on the matter, represent a first step, setting out generalized intentions around AI use and adoption: being responsible, equitable, traceable, reliable, and governable. DOD released the principles during a news briefing Feb. 24. Those AI ethical guidelines will likely be woven into a little bit of everything, including cyber, from data collection to testing, DOD CIO Dana Deasy told reporters. "We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used, and you can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending," Deasy said.
"Although I was not directly involved in speeding up the video footage recognition, I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan." The former Google engineer predicts autonomous weapons currently in development pose a far greater risk to humanity than remote-controlled drones. She outlined how external forces, ranging from changing weather systems to machines being unable to work out complex human behaviour, might throw killer robots off course, with potentially fatal consequences. She told The Guardian: "You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software, or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn't have the discernment or common sense that the human touch has."
Today we launched Neural, our new home for human-centric AI news and analysis. While we're celebrating the culmination of years of hard work from our behind-the-scenes staff, I'm taking the day to quietly contemplate the future of AI journalism. The Guardian's Oscar Schwartz wrote an article in 2018 titled "The discourse is unhinged: how the media gets AI alarmingly wrong." In it, he discusses the 2017 hype-explosion surrounding Facebook's AI research lab developing a pair of chat bots that created a short-hand language for negotiating. In reality, the chat bots' behavior was remarkable but not entirely unexpected.
Accenture Labs Dublin is looking for a Post-Doctoral researcher in the domain of Graph Representation Learning and Explainable AI. You will be in charge of designing interpretable machine learning models to infer knowledge from a graph of clinical, genomic, and behavioural data. Explanations will use a wide range of techniques, such as rules derived from the deep learning models, gradient-based attribution methods, or graph-based explanations based on network analysis. The PostDoc position is for 3 years. You will join a multi-partner project whose goal is to identify factors that can cause the development of new medical conditions and worsen the quality of life of cancer survivors.
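To give a flavour of one of the techniques the posting mentions, here is a minimal, self-contained sketch of gradient-based input attribution. It is not from the posting or the project: the toy logistic model, its weights, and all names here are hypothetical, chosen only to show the idea that a feature's saliency can be read off the gradient of the model's output with respect to that feature.

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient_attribution(weights, x):
    """Gradient of sigmoid(w . x) with respect to each input feature.

    For a logistic model p = sigmoid(sum_i w_i * x_i), the input
    gradient is dp/dx_i = p * (1 - p) * w_i; its magnitude serves as
    a simple saliency score for feature i.
    """
    z = sum(w * xi for w, xi in zip(weights, x))
    p = sigmoid(z)
    return [p * (1.0 - p) * w for w in weights]

# Toy example: two hypothetical clinical features with made-up weights.
weights = [1.5, -0.8]
x = [0.4, 1.2]
scores = input_gradient_attribution(weights, x)
```

In practice, deep models compute these gradients via automatic differentiation rather than a closed form, but the interpretation is the same: features with larger-magnitude gradients contribute more strongly to the prediction.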