World-first study uses artificial intelligence to map the risks of ovarian cancer in women
The University of South Australia will lead a world-first study, using artificial intelligence, to map the risks of the most fatal reproductive cancer in women worldwide so it can be detected and treated earlier. Internationally renowned nutritional epidemiologist Professor Elina Hypponen and a team from UniSA's Australian Centre for Precision Health have been awarded $1.2 million by the Federal Government to map the genetic and physical risks of ovarian cancer, based on the health records of 273,000 women from the UK Biobank database. A machine learning model, which automatically analyses the data to identify patterns of risk, is expected to accurately predict which women will develop ovarian cancer in the next 15 years. Ovarian cancer is usually diagnosed very late due to vague symptoms and few known causes, with a five-year survival rate of less than 30 per cent for women with late-stage cancer. Genes, diet and lifestyle all come into play, and the researchers say a computational approach will narrow down those most at risk.
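The article does not say which model the UniSA team will use. Purely as an illustration of the general approach described – fitting a model to cohort records so that it outputs a probability of developing the disease within a time window – here is a toy logistic-regression sketch. Every feature name, weight and data point below is invented; none of it comes from the study or the UK Biobank.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic cohort: each record is a hypothetical feature vector
# (genetic_risk_score, bmi_norm, age_norm), all scaled to 0-1, with a
# label of 1 if the (simulated) outcome occurred within the window.
def make_record():
    g, b, a = random.random(), random.random(), random.random()
    # Simulated ground truth: risk rises with all three features.
    label = 1 if random.random() < sigmoid(3 * g + 2 * b + 2 * a - 4) else 0
    return ([g, b, a], label)

data = [make_record() for _ in range(2000)]

# Fit logistic regression by plain gradient descent on the log-loss.
w = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5
for _ in range(300):
    gw = [0.0, 0.0, 0.0]
    gb = 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias)
        err = p - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    n = len(data)
    for i in range(3):
        w[i] -= lr * gw[i] / n
    bias -= lr * gb / n

def predict_risk(x):
    """Return the model's estimated probability of the outcome for x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias)

# A high-risk profile should score above a low-risk one.
print(predict_risk([0.9, 0.8, 0.9]) > predict_risk([0.1, 0.1, 0.1]))
```

A real study on UK Biobank records would involve thousands of genetic and clinical variables, careful validation, and far more sophisticated models; this sketch only shows the shape of the prediction task: features in, calibrated risk probability out.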
Why AI will be inhuman
According to F-Secure Vice President of Artificial Intelligence Matti Aksela, there's a common misconception that 'advanced' AI should mimic human intelligence – an assumption Project Blackfin aims to challenge. "People's expectation that 'advanced' machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do. Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do," said Aksela, Head of F-Secure's Artificial Intelligence Center of Excellence. "We created Project Blackfin to help us reach that next level of understanding about what AI can achieve." Project Blackfin is a research initiative conceptualised by Aksela's cross-disciplinary team of artificial intelligence and cyber security researchers, mathematicians, data scientists, machine learning experts, and engineers.
AI may open dangerous new frontiers in geopolitics
The dawn of true artificial intelligence will provoke an international security crisis, according to F-Secure chief research officer and security industry heavyweight, Mikko Hypponen. Speaking to Computer Weekly in October 2019 during an event at the company's Helsinki headquarters, Hypponen said that although true AI is a long way off – in cyber security it is largely restricted to machine learning for threat modelling to assist human analysts – the potential danger is real, and should be considered today. "I believe the most likely way for superhuman intelligence to be generated will be through human brain simulators, which is really hard to do – it's going to take 20 to 30 years to get there," said Hypponen. "But if something like that, or some other mechanism of generating superhuman levels of intelligence, becomes a reality, it will absolutely become a catalyst for an international crisis. It will increase the likelihood of conflict."
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.34)
Attacks against AI systems are a growing concern
Cyber attackers currently focus most of their efforts on manipulating existing artificial intelligence (AI) systems for malicious purposes, instead of creating new attacks that use machine learning. That is the key finding of a report by the Sherpa consortium, an EU-funded project founded in 2018 to study the impact of AI on ethics and human rights, supported by 11 organisations in six countries, including the UK. However, the report notes that attackers have access to machine learning techniques, and AI-enabled cyber attacks will be a reality soon, according to Mikko Hypponen, chief research officer at IT security company F-Secure, a member of the Sherpa consortium. The continuing game of "cat and mouse" between attackers and defenders will reach a whole new level when both sides are using AI, said Hypponen, and defenders will have to adapt quickly as soon as they see the first AI-enabled attacks emerging. But despite the claims of some security suppliers, Hypponen told Computer Weekly in a recent interview that no criminal groups appear to be using AI to conduct cyber attacks.
- Research Report (0.52)
- Overview > Growing Problem (0.40)
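The Sherpa report's central worry – attackers manipulating existing AI systems rather than building their own – can be pictured with a toy evasion attack. Everything below (the stand-in "model", its features, weights and threshold) is invented for illustration and does not come from the report:

```python
# Toy illustration of manipulating a deployed model: the attacker never
# touches the model's code, only probes its decisions and nudges an input
# until the decision flips.

# Stand-in "deployed model": a linear malware score over two hypothetical
# features (file entropy, count of suspicious API calls), both scaled 0-1.
WEIGHTS = (2.0, 3.0)
THRESHOLD = 2.5  # score >= THRESHOLD -> flagged as malicious

def model_score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def is_flagged(features):
    return model_score(features) >= THRESHOLD

def evade(features, step=0.05, budget=40):
    """Greedily reduce the feature the model weighs most heavily until
    the sample is no longer flagged, or the perturbation budget runs out."""
    x = list(features)
    for _ in range(budget):
        if not is_flagged(x):
            break
        # Attack the highest-weighted feature that can still be reduced.
        i = max(range(len(x)), key=lambda j: WEIGHTS[j] if x[j] > 0 else -1)
        x[i] = max(0.0, x[i] - step)
    return x

sample = [0.9, 0.8]          # flagged by the model
evaded = evade(sample)
print(is_flagged(sample), is_flagged(evaded))  # True False
```

The point of the sketch is that the attacker breaks nothing: they only query the model's decisions and adjust the input until it slips under the threshold, which is why defenders of deployed machine learning systems worry about query access as much as code access.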
What Do Cyber Security Experts Fear Most?
Three leading cyber security researchers sat down for a panel at the IP Expo in London yesterday to air their catastrophic predictions about the biggest infosec threats in the coming years. Nightmare scenarios regarding autonomous killer robots, terrorists taking control of fleets of autonomous vehicles, AI writing its own malware, and the threat posed by IoT devices all quickly followed. Rik Ferguson, global VP of security research at Trend Micro, started off by stating that an area "ripe for innovation in the security and criminal landscape" is artificial intelligence and machine learning. "One thing I find scary is the fact that we have a petition to the UN from 120 leading academics to outlaw autonomous weaponry," he said. "We are already in Skynet, that is the world we live in. So I have no doubt attackers will start using AI to build autonomous attack machinery online, as well as physical autonomous weaponry."
- North America > United States > Nevada (0.05)
- Asia > Middle East > Syria (0.05)
Self-driving cars to be targeted by hackers
First it was our computers, then it was our phones, and now experts have warned hackers will soon be targeting our cars. Self-driving car technology is improving so quickly that some experts believe it will be mainstream within the next five years, meaning hacking will probably become a problem. A security expert has told MailOnline that cyber criminals may take control of a car and hold it for ransom to extort money from owners. 'There's no question whether autonomous cars can be hacked or not,' Mikko Hypponen, chief research officer of cyber security firm F-Secure, told MailOnline. The Insurance Information Institute estimates that by 2030, 25 per cent of all cars sold will be autonomous. At the end of last year, Elon Musk told Fortune that Tesla Motors is two years away from achieving a fully autonomous self-driving car.
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- North America > United States > California > Santa Clara County > Mountain View (0.05)
- Transportation > Ground > Road (1.00)
- Information Technology > Security & Privacy (1.00)
- Automobiles & Trucks (1.00)