If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In a recent podcast, Oxford mathematician John Lennox answered questions posed by Walter Bradley Center director Robert J. Marks about his new book, 2084, including questions about how the loss of privacy could wind up really harming us:

Robert J. Marks: It's been said that AI is the new electricity. You have addressed some of the potential negative uses of artificial intelligence, or the negative impacts of artificial intelligence, but, expanding on that, what are some of the big threats that you see in the use of AI technology in the near future?

John Lennox: Well, the threats are best explained by comparing them with the advantages. Let's take a very simple and practical example, which is extremely useful. That is in the field of x-ray technology.
In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the Internet. Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one's photos by the people present in them using Google's own image recognition technology. Protecting digital privacy today therefore extends beyond stopping humans from seeing the photos; it also means preventing machines from harvesting personal data from images. The frontiers of privacy protection now need to be extended to include machines.
On June 30, US Secretary of State Mike Pompeo's address to the UN Security Council calling for an arms embargo on Iran to be extended was expected to dominate the international news agenda. However, Iran's judiciary stole the morning's headlines, having issued an arrest warrant for Donald Trump the day before. Tehran prosecutor Ali Alqasimehr said on Monday that Trump, along with more than 30 others accused of involvement in the January 3 drone attack that killed Iran's top general, Qassem Soleimani, faces "murder and terrorism charges". The prosecutor added that Tehran had asked Interpol for help in detaining the US president. The same day, the US special envoy for Iran, Brian Hook, denounced the warrant as a "propaganda stunt" at a press conference in the Saudi capital, Riyadh.
Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', and such attacks have a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, this had increased to an exposure of 4.1 billion records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will, inevitably, take advantage of AI, and such a move will increase threats to digital security and increase the volume and sophistication of cyber attacks.
AI is becoming mainstream, embedded into more and more applications of everyday life. From healthcare and finance to transportation and energy, the opportunities appear endless. Every sector is ripe with opportunities to save time, money, and other resources, and AI provides many of the solutions. Yet critical questions about AI security remain unanswered. How are IT organizations managing AI security as it scales to the enterprise, and do they have the audit functionality to answer regulators' questions?
The Mount Sinai Health System has received an award from Microsoft AI for Health to support the work of a new data science center dedicated to COVID-19 research. The Mount Sinai COVID Informatics Center (MSCIC) brings together leaders from entities across Mount Sinai, including the Hasso Plattner Institute for Digital Health, the Department of Genetics and Genomic Sciences, and the BioMedical Engineering and Imaging Institute. "This partnership with Microsoft provides us with cloud resources that will accelerate our discovery, translation and implementation of digital tools in the fight against COVID-19," said Robbie Freeman, MSN, RN, vice president of Clinical Innovation at The Mount Sinai Hospital. "Through this collaboration with AI for Health, we are leveraging the expertise of the Mount Sinai Health System in delivering world-class patient care and the Azure cloud to bring our AI-enabled products from bench to bedside." The philanthropic Microsoft AI for Health Grant will support the care of patients with the coronavirus, enabling the Center to develop tools using artificial intelligence (AI) that enhance care and evidence-based medicine for treating COVID-19 patients.
Artificial intelligence is incredibly important in the new age of cyberwarfare. Hackers use AI frequently to conduct more vicious attacks. At the same time, cybersecurity experts are using AI to bolster their defenses. AI has become more important than ever during the COVID-19 crisis. Cybercrimes are up, so artificial intelligence and other big data tools are essential to thwart cybercriminals.
In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying these properties is complicated by the field's rapid development and by highly complex deployments in health care, financial trading, transportation, and translation, among others. Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.[1] Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling malware to adapt rapidly to the restrictions imposed by countermeasures and security controls.[2]
Bottom Line: Real-time analysis of remote video feeds is rapidly improving thanks to AI, increasing the accuracy of remote equipment and facility monitoring. Agriculture, construction, oil & gas, utilities, and critical infrastructure all need to merge cybersecurity and physical security to adapt to an increasingly complex threatscape. The top priority needs to be improving the accuracy, insight, and speed of response to remote threats that AI-based video recognition systems provide. Machine learning techniques, as part of a broader AI strategy, are proving effective at identifying anomalies and threats in real time from video, often correlating them back to cyber threats that are part of an orchestrated attack on remote facilities. The future of remote security monitoring is being defined by rapid advances in supervised, unsupervised, and reinforcement machine learning algorithms and their contributions to AI-based visual recognition systems.
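The anomaly-flagging idea behind such monitoring systems can be illustrated in miniature: reduce each video frame to a simple statistic and flag frames that deviate sharply from the feed's baseline. The sketch below is a deliberately simplified illustration, not any vendor's actual pipeline; the per-frame brightness readings are hypothetical values standing in for real camera output.

```python
from statistics import mean, stdev

def flag_anomalous_frames(brightness, threshold=2.0):
    """Return indices of frames whose mean brightness deviates more than
    `threshold` standard deviations from the feed's overall baseline."""
    mu = mean(brightness)
    sigma = stdev(brightness)
    return [i for i, b in enumerate(brightness)
            if sigma > 0 and abs(b - mu) > threshold * sigma]

# Hypothetical per-frame mean-brightness readings from a remote camera;
# frame 5 is far darker than the rest (e.g. a covered or blinded lens).
feed = [120, 122, 119, 121, 120, 14, 118, 121, 120, 122]
print(flag_anomalous_frames(feed))  # [5]
```

Production systems replace the brightness statistic with learned features from a visual recognition model and use more robust detectors, but the underlying pattern, a baseline plus a deviation threshold, is the same.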
Artificial super-intelligence (ASI) is a software-based system with intellectual powers beyond those of humans across an almost comprehensive range of categories and fields of endeavor. The reality is that AI has been with us for a long time now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening artificial intelligence system in the movies, it is the malevolence of the system, coupled with the power of some machine, that scares people. However, it still behaves in fundamentally human ways. The kind of AI that prevails today can be described as artificial functional intelligence (AFI): systems programmed to perform a specific role and to do so as well as or better than a human.