If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The policies promoted by the European Union posit an intimate connection between artificial intelligence and open data. In this regard, as we highlighted, open data is essential for the proper functioning of artificial intelligence: algorithms must be fed with data whose quality and availability are essential both for their continuous improvement and for auditing their correct operation. Artificial intelligence entails more sophisticated data processing, since it requires greater precision, currency, and quality, and the data must be obtained from very diverse sources to improve the quality of the algorithms' results. An added difficulty is that processing is carried out automatically and must offer precise answers immediately in the face of changing circumstances. A dynamic perspective is therefore needed, one that justifies offering data not only in open, machine-readable formats but also with the highest levels of precision and disaggregation.
The Ethics Guidelines for Trustworthy AI provide an assessment list that operationalises the key requirements and offers guidance on implementing them in practice. This assessment list will undergo a piloting process: all stakeholders are invited to test the assessment list and provide practical feedback on how it can be improved. This feedback will allow for a better understanding of how the assessment list, which aims to offer guidance for all AI applications, can be implemented within an organisation. It will also indicate where specific tailoring of the assessment list is needed, given AI's context-specificity. All interested stakeholders can participate in the piloting process and start testing the assessment list.
These posts represent my personal views on enterprise governance, regulatory compliance, and legal or ethical issues that arise in digital transformation projects powered by the cloud and artificial intelligence. Unless otherwise indicated, they do not represent the official views of Microsoft. If you follow enterprise legal and compliance issues as I do, you have surely heard the claim that AI is transforming the way corporate legal departments and the law firms that serve them operate. I'm certainly not going to contradict this claim. In fact, I see new evidence for it nearly every week.
A significant part of the literature in deontic logic revolves around discussions of puzzles and paradoxes which show that certain logical systems are not acceptable (typically, this happens with deontic KD, i.e., Standard Deontic Logic, SDL) or which suggest that obligations and permissions should enjoy some desirable properties. One well-known puzzle is the so-called Free Choice Permission paradox, which originated in the following remark by von Wright in [23, p. 21]: "On an ordinary understanding of the phrase 'it is permitted that', the formula 'P(p ∨ q)' seems to entail 'Pp ∧ Pq'. If I say to somebody 'you may work or relax', I normally mean that the person addressed has my permission to work and also my permission to relax. It is up to him to choose between the two alternatives." Usually, this intuition is formalised by the following schema:

P(p ∨ q) → (Pp ∧ Pq) (FCP)

Many problems have been discussed in the literature around FCP: for a comprehensive overview, discussion, and some solutions, see [11, 14, 20]. Three basic difficulties can be identified, among others [11, p. 43]:

- Problem 1: Permission Explosion Problem: "that if anything is permissible, then everything is, and thus it would also be a theorem that nothing is obligatory"; for example, "If you may order a soup, then it is not true that you ought to pay the bill";
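Why FCP is not already a theorem of SDL can be seen in a toy Kripke-style model. The following sketch (all names are illustrative, not drawn from the literature cited above) treats SDL permission as truth at some deontically ideal world, and exhibits a countermodel where P(p ∨ q) holds but Pp ∧ Pq fails:

```python
# A minimal countermodel showing that Free Choice Permission,
# P(p ∨ q) → (Pp ∧ Pq), is NOT a theorem of Standard Deontic Logic.
# The deontically ideal (accessible) worlds are a list, and each
# world is a dict assigning truth values to atomic propositions.

ideal_worlds = [
    {"p": True, "q": False},  # the only ideal world: work, don't relax
]

def permitted(formula):
    """SDL permission: the formula holds in at least one ideal world."""
    return any(formula(w) for w in ideal_worlds)

p = lambda w: w["p"]
q = lambda w: w["q"]
p_or_q = lambda w: w["p"] or w["q"]

print(permitted(p_or_q))              # True:  P(p ∨ q)
print(permitted(p) and permitted(q))  # False: Pp ∧ Pq fails, so FCP fails
```

Adding FCP as an axiom on top of SDL is exactly what triggers difficulties such as the Permission Explosion Problem discussed above.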
Artificial intelligence (AI) algorithms are generally hungry for data, and the trend is accelerating. A new breed of AI approaches, called lifelong learning machines, is being designed to pull in data continually and indefinitely. But this is already happening with other AI approaches, albeit with human intervention. A steady stream of data is the fuel for coveted results. And with the ever-increasing importance of data, the stakes of data bias are growing ever higher.
As natural language uses a diverse and often vague way to express ideas, identifying a norm conflict and its causes in contracts is a challenging task. The ever larger number of contracts currently being generated necessitates a fast and reliable process to identify norm conflicts. However, since such contracts are written in natural language, traditional revision methods involve contract makers reading the contract and identifying conflicting points between norms. Such a method requires huge human effort and may not guarantee a revision that eliminates all conflicts. In response, we provide three contributions towards automatically identifying and classifying potential conflicts between norms in contracts.

While most social norms are informal, they are often formalized by companies in contracts to regulate trades of goods and services. When poorly written, contracts may contain normative conflicts resulting from opposing deontic meanings or contradicting specifications. As contracts tend to be long and contain many norms, manually identifying such conflicts requires human effort, which is time-consuming and error-prone. Automating such a task benefits contract makers by increasing productivity.
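As a toy illustration only (not the authors' actual contributions), one narrow class of conflict, opposing deontic modals attached to the same party and action, can be flagged with simple pattern matching; every name and pattern below is hypothetical:

```python
import re

# A toy detector for one simple class of normative conflict: two
# norms that attach opposing deontic modals ("shall" vs. "shall not",
# "may" vs. "may not") to the same party and the same action.

MODALS = {"shall": "obligation", "shall not": "prohibition",
          "may": "permission", "may not": "prohibition"}

def parse_norm(sentence):
    """Split a norm sentence into (party, deontic type, action)."""
    m = re.match(r"(?i)the (\w+) (shall not|may not|shall|may) (.+?)\.?$",
                 sentence.strip())
    if not m:
        return None
    party, modal, action = m.groups()
    return party.lower(), MODALS[modal.lower()], action.lower()

def conflicts(norm_a, norm_b):
    """True when the two norms impose opposing deontic types on the
    same party and action."""
    a, b = parse_norm(norm_a), parse_norm(norm_b)
    if not a or not b or a[0] != b[0] or a[2] != b[2]:
        return False
    return {a[1], b[1]} in ({"obligation", "prohibition"},
                            {"permission", "prohibition"})

print(conflicts("The buyer shall insure the goods.",
                "The buyer shall not insure the goods."))   # True
print(conflicts("The buyer may insure the goods.",
                "The seller shall not insure the goods."))  # False
```

Real contract language is far too varied for string matching, which is precisely why the abstract argues for an automated, learned approach.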
The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. "Facial recognition is the plutonium of AI", it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read. The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera.
Halfway through my week without Google, my wife mentions that she would like to go out to see a film that evening, and I agree to deal with the logistics. In what I initially think is an inspired move, I drop by the local cinema on the way home and scribble down all the film times in my notebook. Then my wife insists on going to a different cinema. "Can I do this by phone?" "Is 118 still a thing?" Turns out it is, and an expensive one: £2.50 a call, plus 75p a minute, plus a 55p access charge from my mobile provider. But more than a million people a year still use the service, and it even offers a text facility that answers questions – although you're essentially just asking someone to Google something for you and text you back, for £3.50 a go. Before I started this experiment, when I tried to imagine what it would be like to take a break from Google, what I was really trying to remember was how my life worked all those years before it started.
Mechanized theorem proving is becoming the basis of reliable systems programming and rigorous mathematics. Despite decades of progress in proof automation, writing mechanized proofs still requires engineers' expertise and remains labor-intensive. Recently, researchers have extracted heuristics of interactive proof development from existing large proof corpora using supervised learning. However, such existing proof corpora present only one way of proving each conjecture, while there are often multiple equally effective ways to prove one conjecture. In this abstract, we identify challenges in discovering heuristics for automatic proof search and propose our novel approach to improve heuristics of automatic proof search in Isabelle/HOL using evolutionary computation.
The area of formal ethics is experiencing a shift from a unique or standard approach to normative reasoning, as exemplified by so-called standard deontic logic, to a variety of application-specific theories. However, the adequate handling of normative concepts such as obligation, permission, prohibition, and moral commitment is challenging, as illustrated by the notorious paradoxes of deontic logic. In this article we introduce an approach to design and evaluate theories of normative reasoning. In particular, we present a formal framework based on higher-order logic and a design methodology, and we discuss tool support. Moreover, we illustrate the approach with an example implementation, demonstrate different ways of using it, and discuss how the design of normative theories is now made accessible to non-specialist users and developers.
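The "evaluate" half of designing a normative theory can be illustrated in miniature. The following sketch is not the article's higher-order-logic framework (which relies on automated provers); it merely mimics the evaluation step by brute-forcing a candidate deontic principle, the D axiom of SDL, over every tiny two-world model:

```python
# Brute-force evaluation of a candidate deontic principle over all
# two-world Kripke models, in the spirit of testing a normative
# theory before adopting it.

WORLDS = [0, 1]

def obligatory(formula, ideal):
    return all(formula(w) for w in ideal)  # true at every ideal world

def permitted(formula, ideal):
    return any(formula(w) for w in ideal)  # true at some ideal world

def all_models():
    """Every choice of ideal-world set and valuation of p over WORLDS."""
    for ideal_mask in range(4):            # subsets of WORLDS
        ideal = [w for w in WORLDS if ideal_mask >> w & 1]
        for val_mask in range(4):          # valuations of p
            p = lambda w, m=val_mask: bool(m >> w & 1)
            yield ideal, p

def d_axiom(ideal, p):
    """The D axiom: O p -> P p."""
    return (not obligatory(p, ideal)) or permitted(p, ideal)

# D holds whenever at least one world is ideal, and fails only in the
# degenerate model with no ideal worlds (a non-serial relation):
print(all(d_axiom(i, p) for i, p in all_models() if i))  # True
print(all(d_axiom(i, p) for i, p in all_models()))       # False
```

Scaling such checks beyond toy models is exactly what motivates embedding normative theories in higher-order logic and delegating the search for theorems and countermodels to automated tools.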