If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In this new century, the core opportunity - and challenge - facing corporate managers, as well as those investing in their activities, is not just to harness new technology to develop and deliver attractive products and services in ways that generate a healthy profit. It is also to do so in ways that contribute to meaningful progress against the ethical obligations imposed by the UN's Sustainable Development Goals. Daunting as it is, this obligation is made more difficult still -- in both practical and ethical terms -- by the very nature of the digital/AI technology on which such progress depends. For the first time in human history, that technology makes it possible to take human beings out of critical decision loops entirely. The ethical questions thus raised far outrun the settled wisdom embodied in yesterday's laws, regulations, and cultural practices.
In October a German group published its position on ethics and AI. With the new European Commission tipped to put forward a comprehensive AI policy in its first 100 days in office, the proposals have come under close scrutiny. Klaus Müller, CEO of the German consumer rights organisation vzbv, explained: "We expect this report – which was drafted by experts for the German Ministries of the interior and justice/consumers – to influence the plans of the European Commission. At the presentation of the report people already said that its findings should now become part of the EU-level debate." There are several reasons to take the report seriously.
Artificial Intelligence is a hot topic, and many organizations are now starting to exploit these technologies; at the same time, there are many concerns about the impact this will have on society. Governance sets the framework within which organizations conduct their business in a way that manages risk and compliance and ensures an ethical approach. AI has the potential to improve governance and reduce costs, but it also creates challenges that themselves need to be governed. The concept of AI is not new, but cloud computing has provided the access to data and the computing power needed to turn it into a practical reality. However, while there are some legitimate concerns, the current state of AI is still a long way from the science fiction portrayal of a threat to humanity.
We all know that artificial intelligence and machine learning will magically solve all your business problems, while Alexa makes you a dry martini and takes out the garbage, right? Wait a minute--lofty promises and fanciful fantasies around AI haven't been realized broadly in the banking industry, or many others for that matter. AI is encountering challenges in healthcare and even at Google, AI can raise controversy. So let's take a closer look: Why do AI and machine learning (ML) projects fail, and what should you do to steer clear of the pitfalls? The biggest problem blocking AI and ML projects centers on underlying data, says Bassam Chaptini, chief technology officer at Unqork, an enterprise software company that caters to the financial services and insurance industries.
In 2016 The Washington Post unleashed a new reporter on the world, an artificial intelligence (AI) system called Heliograf. In its first year, it churned out 300 short reports on the Rio Olympics, followed by 500 brief articles about the presidential election, which clocked up pretty good engagement online. Meanwhile, pharmaceutical companies are increasingly turning to AI to drastically speed up the process of discovering new drugs, analysing huge quantities of data to come up with new molecules that could potentially have a therapeutic effect. It's moves like these that have led some to suggest that, one day at least, AIs might be deemed owners of copyright or other intellectual property (IP). However, according to most legal and technology experts, this scenario is a long way off.
With so many questions surrounding artificial intelligence's effect on the workplace and workforce, one wonders whether future Labor Day celebrations will take on new meaning. Employers in Illinois may face these questions sooner than others following passage of a new Illinois law that regulates the use of artificial intelligence ("AI") to analyze and evaluate job applicants' video interviews. The Artificial Intelligence Video Interview Act imposes duties of transparency, consent and data destruction on organizations using AI to evaluate interviewees for jobs that are "based in" Illinois. The measure, passed unanimously in the Illinois legislature and approved by the Governor in early August, becomes effective January 1, 2020. Applying AI-based analytics to job interviews is an increasingly common practice.
The policies promoted by the European Union posit a close connection between artificial intelligence and open data. In this regard, as we have highlighted, open data is essential for the proper functioning of artificial intelligence: algorithms must be fed with data whose quality and availability are essential both for their continuous improvement and for auditing their correct operation. Artificial intelligence entails more sophisticated data processing, since it requires greater precision, currency, and quality, and the data must be obtained from very diverse sources to improve the quality of the algorithms' results. A further difficulty is that processing is carried out automatically and must offer precise answers immediately in the face of changing circumstances. A dynamic perspective is therefore needed, one that justifies the need for data to be offered not only in open, machine-readable formats, but also with the highest levels of precision and disaggregation.
The Ethics Guidelines for Trustworthy AI provide an assessment list that operationalises the key requirements and offers guidance on implementing them in practice. This assessment list will undergo a piloting process: all stakeholders are invited to test the assessment list and provide practical feedback on how it can be improved. This feedback will allow for a better understanding of how the assessment list, which aims to offer guidance for all AI applications, can be implemented within an organisation. It will also indicate where specific tailoring of the assessment list is needed given AI's context-specificity. All interested stakeholders can participate in the piloting process and start testing the assessment list.
These posts represent my personal views on enterprise governance, regulatory compliance, and legal or ethical issues that arise in digital transformation projects powered by the cloud and artificial intelligence. Unless otherwise indicated, they do not represent the official views of Microsoft. If you follow enterprise legal and compliance issues as I do, you have surely heard the claim that AI is transforming the way corporate legal departments and the law firms that serve them operate. I'm certainly not going to contradict this claim. In fact, I see new evidence for it nearly every week.
A significant part of the literature in deontic logic revolves around the discussion of puzzles and paradoxes which show that certain logical systems are not acceptable--typically, this happens with deontic KD, i.e., Standard Deontic Logic (SDL)--or which suggest that obligations and permissions should enjoy some desirable properties. One well-known puzzle is the so-called Free Choice Permission paradox, which originated in the following remark by von Wright in [23, p. 21]: "On an ordinary understanding of the phrase 'it is permitted that', the formula 'P(p ∨ q)' seems to entail 'Pp ∧ Pq'. If I say to somebody 'you may work or relax' I normally mean that the person addressed has my permission to work and also my permission to relax. It is up to him to choose between the two alternatives." Usually, this intuition is formalised by the following schema: P(p ∨ q) → (Pp ∧ Pq) (FCP). Many problems have been discussed in the literature around FCP: for a comprehensive overview, discussion, and some solutions, see [11, 14, 20]. Three basic difficulties can be identified, among others [11, p. 43]: - Problem 1: Permission Explosion Problem - "That if anything is permissible, then everything is, and thus it would also be a theorem that nothing is obligatory", for example "If you may order a soup, then it is not true that you ought to pay the bill";
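The free-choice intuition can be made concrete with a toy possible-worlds check. In the standard deontic reading, P(φ) means "φ holds in at least one permitted world" -- under that reading, P(p ∨ q) does not entail Pp ∧ Pq, which is why FCP has to be postulated as an extra schema rather than derived. The following sketch is purely illustrative (the world representation and function names are assumptions for this example, not drawn from the literature cited above):

```python
# Toy semantic check for the Free Choice Permission schema.
# A "world" is a dict assigning truth values to propositions;
# P(formula) holds iff formula is true in SOME permitted world.

def P(formula, permitted_worlds):
    """Standard deontic permission: true in at least one permitted world."""
    return any(formula(w) for w in permitted_worlds)

# A single permitted world in which p is true but q is false.
permitted = [{"p": True, "q": False}]

p = lambda w: w["p"]
q = lambda w: w["q"]
p_or_q = lambda w: w["p"] or w["q"]

# P(p ∨ q) holds, because the permitted world satisfies p ∨ q ...
print(P(p_or_q, permitted))                  # True
# ... but Pp ∧ Pq fails, because no permitted world satisfies q.
print(P(p, permitted) and P(q, permitted))   # False
```

So in this semantics the antecedent of FCP can be true while its consequent is false; adopting FCP as an axiom therefore changes the logic, and it is exactly that change which opens the door to the Permission Explosion Problem discussed above.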