New ID crime chatbot could have future B2B cyber applications

#artificialintelligence

A new AI-based chatbot that helps identity crime victims get after-hours support was also designed with future B2B applications in mind, including helping employees report a cyberattack when the IT or security team is unavailable. The chatbot, ViViAN, is currently undergoing beta testing by the Identity Theft Resource Center (ITRC) and leverages technology developed by its partner SAS Institute. Thanks to ViViAN, individuals do not have to wait until normal ITRC business hours to report an incident; they can lodge their complaints with the chatbot and receive reassurance and guidance on the immediate next steps to take. All communications with ViViAN are later followed up by a live agent when one becomes available. At the very least, victims are able to act swiftly when their data is at stake and time is of the essence.


Expect an Orwellian future if AI isn't kept in check, Microsoft exec says

#artificialintelligence

Artificial intelligence could lead to an Orwellian future if laws to protect the public aren't enacted soon, according to Microsoft President Brad Smith. Smith made the comments to the BBC news program "Panorama" on May 26, during an episode focused on the potential dangers of artificial intelligence (AI) and the race between the United States and China to develop the technology. The warning comes about a month after the European Union released draft regulations attempting to set limits on how AI can be used. There are few similar efforts in the United States, where legislation has largely focused on limiting regulation and promoting AI for national security purposes. "I'm constantly reminded of George Orwell's lessons in his book '1984,'" Smith said.


Everything You Need To Know About CAIR

#artificialintelligence

Did you know India has had a dedicated centre for robotics since 1986? The Centre for Artificial Intelligence and Robotics (CAIR) started with just three staff in a tiny office in Bengaluru. Today, the centre has more than 300 employees. CAIR is involved in research and development in AI, robotics, command and control, networking, and information and communication security, along with the development of mission-critical products for battlefield communication and management systems. CAIR was appraised at Capability Maturity Model Integration (CMMI) Maturity Level 2 in 2014 and holds ISO 9001:2015 certification.


AI warning: Life will be like Orwell's 1984 'without curbs on AI'

#artificialintelligence

Life could become like George Orwell's 1984 within three years if laws aren't introduced to protect the public from artificial intelligence, Microsoft president Brad Smith has warned. Smith predicts that the kind of controlled, mass-surveillance society portrayed by Orwell in his 1949 dystopian novel 'could come to pass in 2024' if more isn't done to curb the spread of AI. It is going to be difficult for lawmakers to catch up with rapidly advancing artificial intelligence and surveillance technology, he told BBC Panorama during a special exploring China's increasing use of AI to monitor its citizens. The Microsoft president said: 'If we don't enact the laws that will protect the public in the future, we are going to find the technology racing ahead.' Facial recognition software works by matching real-time images to a previous photograph of a person.
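The matching step described above, comparing a live camera image to a stored photograph, is typically done by reducing each face to a numeric embedding and measuring their similarity. The sketch below is purely illustrative, not any vendor's actual system: the embeddings are assumed to come from some trained face-recognition network, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def is_same_person(live_embedding, stored_embedding, threshold=0.6):
    """Illustrative face match: compare the embedding of a face from a
    live camera frame against the embedding of a stored photograph
    using cosine similarity. Real systems compute these embeddings
    with a trained neural network and tune the threshold carefully;
    0.6 here is only a placeholder."""
    a = np.asarray(live_embedding, dtype=float)
    b = np.asarray(stored_embedding, dtype=float)
    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return bool(cosine >= threshold)
```

Identical embeddings yield a similarity of 1.0 (a match); orthogonal embeddings yield 0.0 (no match).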



Cybersecurity 101: Protect your privacy from hackers, spies, and the government

#artificialintelligence

"I have nothing to hide" was once the standard response to surveillance programs utilizing cameras, border checks, and casual questioning by law enforcement. Privacy used to be considered a concept generally respected in many countries with a few changes to rules and regulations here and there often made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- compliments of the National Security Agency (NSA) and Edward Snowden's whistleblowing -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common factor of our daily lives, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to mobile devices and the Internet of Things (IoT) products installed in our homes have the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept and can encompass such a wide range of information that it is worth briefly breaking down different collections before examining how each area is relevant to your privacy and security. A roundup of the best software and apps for Windows and Mac computers, as well as iOS and Android devices, to keep yourself safe from malware and viruses. 
Personally identifiable information, known as PII, can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All of this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans in your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.


The Impact of Artificial Intelligence on the IC

#artificialintelligence

Ian Fitzgerald is an M.A. student in International Security at George Mason University with research interests in great power competition, cyber warfare, emerging technologies, Russia, and China. ACADEMIC INCUBATOR -- The explosion of data available to today's analysts creates a compelling need to integrate artificial intelligence (AI) into intelligence work. The objective of the Intelligence Community (IC) is to analyze, connect, apply context, infer meaning, and ultimately make analytical judgments based on that data. The data explosion offers an incredible source of potential information, but it also creates issues for the IC. Today's intelligence analysts find themselves moving from an information-scarce environment to one with an information surplus.


Machine Learning, Ethics, and Open Source Licensing (Part I/II)

#artificialintelligence

The unprecedented interest, investment, and deployment of machine learning across many aspects of our lives in the past decade has come with a cost. Although there has been some movement towards moderating machine learning where it has been genuinely harmful, it's becoming increasingly clear that existing approaches suffer significant shortcomings. Nevertheless, there still exist new directions that hold potential for meaningfully addressing the harms of machine learning. In particular, new approaches to licensing the code and models that underlie these systems have the potential to create a meaningful impact on how they affect our world. This is Part I of a two-part essay.


Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.


Macro-Average: Rare Types Are Important Too

arXiv.org Artificial Intelligence

While traditional corpus-level evaluation metrics for machine translation (MT) correlate well with fluency, they struggle to reflect adequacy. Model-based MT metrics trained on segment-level human judgments have emerged as an attractive replacement due to strong correlation results. These models, however, require potentially expensive re-training for new domains and languages. Furthermore, their decisions are inherently non-transparent and appear to reflect unwelcome biases. We explore the simple type-based classifier metric, MacroF1, and study its applicability to MT evaluation. We find that MacroF1 is competitive on direct assessment, and outperforms others in indicating downstream cross-lingual information retrieval task performance. Further, we show that MacroF1 can be used to effectively compare supervised and unsupervised neural machine translation, and reveal significant qualitative differences in the methods' outputs.
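The idea behind a type-based macro average is that every word type contributes equally to the score, so rare types count as much as frequent ones. The sketch below is a minimal illustration of that principle for a single hypothesis/reference pair, not the paper's exact implementation (which evaluates MacroF1 across a full test set with its own tokenization).

```python
from collections import Counter

def macro_f1(hypothesis: str, reference: str) -> float:
    """Macro-averaged F1 over word types: compute per-type precision,
    recall, and F1 from clipped count matches, then average F1
    unweighted so rare types matter as much as frequent ones."""
    hyp_counts = Counter(hypothesis.split())
    ref_counts = Counter(reference.split())
    f1_scores = []
    for t in set(hyp_counts) | set(ref_counts):
        match = min(hyp_counts[t], ref_counts[t])  # clipped matches
        precision = match / hyp_counts[t] if hyp_counts[t] else 0.0
        recall = match / ref_counts[t] if ref_counts[t] else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

A corpus-frequency-weighted (micro) average would let frequent types like function words dominate; the unweighted mean here is what makes the metric sensitive to rare-type adequacy.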