Artificial intelligence can be defined as "the ability of an artifact to imitate intelligent human behavior" or, more simply, the intelligence exhibited by a computer or machine that enables it to perform tasks that appear intelligent to human observers (Russell & Norvig, 2010). AI can be broken down into two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). ANI refers to the ability of a machine or computer program to perform one particular task at an extremely high level, or to learn that task faster than any other machine; the most famous example of ANI is Deep Blue, which defeated Garry Kasparov at chess in 1997. AGI refers to the idea that a computer or machine could one day exhibit intelligent behavior equal to that of humans across any given field, such as language, motor skills, and social interaction; this would be similar in scope and complexity to natural intelligence. A typical benchmark offered for AGI is an educated seven-year-old child.
The one glaring gap in the Commonwealth government's AI strategy and action plan is a process for developing a coordinated governance framework around the development, use and procurement of AI services within Commonwealth government agencies. This is where the NSW Government has taken a clear lead, issuing a mandatory customer service circular to which all NSW Government agencies must adhere. It provides practical guidance on adhering to principles, assessing risk, managing data, sourcing AI solutions, meeting legal obligations and more.
In 1984, Heathkit presented HERO Jr. as the first robot that could be used in households to perform a variety of tasks, such as guarding people's homes, setting reminders, and even playing games. Following this development, many companies launched affordable "smart robots" for use within the household. Some of these technologies, like Google Home, Amazon Echo and Roomba, have become household staples; meanwhile, other products such as Jibo, Anki, and Kuri failed despite having all the necessary resources. Why were these robots discontinued? The simple answer is that most of these personal robots do not work well, but this is not necessarily because we lack the technological capacity to build highly functional robots.
In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. "There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?" Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, "To eliminate it."
This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum. The current conversation around AI, ethics and the benefits for our global community is a heated one. The combination of high stakes and a complex, rapidly adopted technology has created a very real state of urgency and intensity around this discussion. Promoters of the technology love to position AI as a welcome disruptor that could bring about a global revolution. It is all too easy to get caught up in the hype and create a situation in which the world does not fully benefit from the development of AI technology.
My wife and I were recently driving in Virginia, amazed yet again that the GPS technology on our phones could guide us through a thicket of highways, around road accidents, and toward our precise destination. The artificial intelligence (AI) behind the soothing voice telling us where to turn has replaced passenger-seat navigators, maps, even traffic updates on the radio. How on earth did we survive before this technology arrived in our lives? We survived, of course, but were quite literally lost some of the time. My reverie was interrupted by a toll booth. It was empty, as were all the other booths at this particular toll plaza.
These are some of the outcomes that AI developers fear will come from their work, according to a new report issued today by the Deloitte AI Institute and the U.S. Chamber of Commerce. Titled "Investing in trustworthy AI," the 82-page report from Deloitte and the Chamber Technology Engagement Center sought to identify the concerns that technology experts have when it comes to the adoption of AI, as well as highlight the impact that government investment in AI can have on the emerging technology. Algorithmic bias and a lack of humans in decision loops are concerns for about two-thirds of the 250 people who participated in the survey. Another 60% identified "rogue or unanticipated behavior" of autonomous agents as a threat, while 56% said the lack of explainability of algorithms was a concern. "Perceived, and actual, discrimination by AI systems undermines the confidence individuals have in whether they are being given a fair opportunity when AI is involved," the report stated.
Like many leaders in my industry, I am running an artificial intelligence company at a time when the ethical use of the technology is top of mind. I have spoken with a number of leaders who believe that in order to innovate -- or, I should say, to reach the speed with which we want to innovate -- we must first consider our responsibility to create technology that can be leveraged ethically. You can't simply say you have no responsibility for how the end user uses your platform. One of my first priorities as CEO was to grab this lightning rod and lean into the social impact I believe my company creates. I made it a point not to hide behind the "technicalities" of everyday life.
Transparency into how AI works can be headache-inducing for organizations that incorporate the technology into their daily operations. So, what can they do to put their concerns about explainable artificial intelligence (AI) requirements to rest? AI's far-reaching advantages across industries are well known by now. We are aware of how artificial intelligence helps thousands of companies around the world by speeding up their operations and allowing them to use their personnel more imaginatively. Additionally, the long-term cost and data-security benefits of incorporating AI have been documented countless times by tech columnists and bloggers.