Artificial intelligence and national security: Integrating online data

#artificialintelligence

Artificial intelligence (AI) is now a major priority for government and defense worldwide -- one that some countries, such as China and Russia, consider the new global arms race. AI has the potential to support a number of national and international security initiatives, from cybersecurity to logistics and counter-terrorism. The overwhelming amount of public data available online is crucial for supporting a number of these use cases. These sources include unstructured social media data from both fringe and mainstream platforms, as well as deep and dark web data. While valuable, these sources are not always easily accessible through commercial threat intelligence platforms.


Parliament leads the way on first set of EU rules for Artificial Intelligence

#artificialintelligence

The European Parliament is among the first institutions to put forward recommendations on what AI rules should include with regard to ethics, liability and intellectual property rights. These recommendations will pave the way for the EU to become a global leader in the development of AI. The Commission's legislative proposal is expected early next year. The legislative initiative by Iban García del Blanco (S&D, ES) urges the EU Commission to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including software, algorithms and data. It was adopted with 559 votes in favour, 44 against and 88 abstentions.


Disturbing deepfake tool on popular messaging app Telegram is forging NUDE images of underage girls

Daily Mail - Science & tech

Photos underage girls share on their social media accounts are being faked to appear nude and shared on the messaging app Telegram, a new report has discovered. The disturbing images are created using a simple 'deepfake' bot that can virtually remove clothes using artificial intelligence, according to report authors Sensity. More than 100,000 non-consensual sexual images of 10,000 women and girls created using the bot were shared online between July 2019 and 2020. The majority of the victims were private individuals whose photos were taken from social media; all were women and some looked 'visibly underage', Sensity said. Sensity says what makes this bot particularly alarming is how easy it is to use: the user simply uploads an image of a girl and clicks a few buttons, and the bot uses its 'neural network' to determine what would be under the clothes and produce a nude image. This form of 'deepfake porn' isn't new; the technology behind this bot is suspected to be based on a tool released last year called DeepNude.


Regulating AI – is the current legislation capable of dealing with AI? -- FCAI

#artificialintelligence

How does the law regulate artificial intelligence (AI)? How do we ensure AI applications comply with existing legal rules and principles? Is new regulation needed and, if so, what type of regulation? These questions have gained increasing importance as AI deployment has increased across various sectors of our societies. The adoption of new technological solutions has raised legislators' concerns about the protection of fundamental rights, both nationally in Finland and at the EU level. However, finding these answers is not easy. And the answers we find may be frustrating, ranging from the typical "it depends" to the self-evident "it's complicated", followed by the slightly more optimistic "we don't know yet".


Robot judges will replace humans in the courtroom 'in 50 years'

Daily Mail - Science & tech

Robots that analyse a defendant's body language for signs of guilt will replace judges by the year 2070, according to an artificial intelligence expert. Terence Mauri, a writer and speaker on AI, believes the machines will be able to detect physical and psychological signs of dishonesty with 99.9 per cent accuracy. He claims they will be polite, speak every known language fluently and will be able to detect signs of lying that a human could not. Robot judges will have cameras that capture and identify irregular speech patterns, unusually high increases in body temperature, and hand and eye movements. Terence Mauri is an AI expert, author and founder of Hack Future Lab, a global think tank.


How AI Chatbot helps Children to Fight Online Abuse

#artificialintelligence

Thousands of children face sexual exploitation online every year. According to CyberTipline's 2019 report, India accounted for 11.7% of all suspected child sexual offenses, with 1,987,430 cases. One in five Indian children aged between 8 and 17 is cyberbullied every 16 minutes. While social media and other digital platforms have given abusers another avenue to hurt victims, technology can also help to reduce the harm. A thriving member of Oracle for Startups, BotSupply is using its artificial intelligence (AI) chatbot technology to support Save the Children Denmark's SletDet, or 'Erase It', initiative, which is designed to address digital sexual offenses.


The impact of AI on business and society

#artificialintelligence

Artificial intelligence, or AI, has long been the object of excitement and fear. In July, the Financial Times Future Forum think-tank convened a panel of experts to discuss the realities of AI -- what it can and cannot do, and what it may mean for the future. Entitled "The Impact of Artificial Intelligence on Business and Society", the event, hosted by John Thornhill, the innovation editor of the FT, featured Kriti Sharma, founder of AI for Good UK, Michael Wooldridge, professor of computer science at Oxford University, and Vivienne Ming, co-founder of Socos Labs. For the purposes of the discussion, AI was defined as "any machine that does things a brain can do". Intelligent machines under that definition still have many limitations: we are a long way from the sophisticated cyborgs depicted in the Terminator films. Such machines are not yet self-aware and they cannot understand context, especially in language. Operationally, too, they are limited by the historical data from which they learn, and restricted to functioning within set parameters. Rose Luckin, professor at University College London Knowledge Lab and author of Machine Learning and Human Intelligence, points out that AlphaGo, the computer that beat a professional (human) player of the board game Go, cannot diagnose cancer or drive a car.


Twitter Data-Breach Case Won't Be Resolved Before Year's End, Ireland's Regulator Says

WSJ.com: WSJD - Technology

Helen Dixon, head of Ireland's Data Protection Commission, in May submitted a draft decision to more than two dozen of the bloc's privacy regulators for review, as required under the law. Eleven regulators objected to the proposed ruling, triggering a lengthy dispute-resolution process, she said. The contents of the draft decision haven't been disclosed. Twitter's European operations are based in Dublin. "It's a long process," Ms. Dixon said at The Wall Street Journal's virtual CIO Network conference.


Germany Wants EU to Double Down on Idea That Would Hinder the AI Economy

#artificialintelligence

The European Commission has proposed strictly regulating AI systems that meet two conditions: they are used in sectors where significant risks are likely to occur, and they are used in a manner likely to create such risks. But Germany has called on the EU to abandon this proposal, arguing that tougher rules should apply to all sectors that use AI and even to AI applications that do not pose a significant risk. This is not the first time Germany has called for stricter regulation of AI, but now that Germany has taken over the EU Council presidency, its perspective is likely to have more influence on the Commission's regulatory choices. Following Germany's advice, however, would have far-reaching negative implications for innovation in the EU. First, imposing stricter rules on lower-risk AI systems would achieve little in the way of consumer protection, because these systems already pose little risk to consumers and existing consumer protection laws apply. It does not make sense to require AI-powered dating apps to undergo the same level of scrutiny as credit-scoring tools.