The topic of artificial intelligence has become indispensable in today's world. More and more companies are developing innovative solutions that aim to simplify people's lives with the help of machine learning, big data, and digital assistants. Some players are particularly active here and invest considerable sums in AI and related technologies. We would like to introduce these to you below. When people think of artificial intelligence, they often imagine human-like robots that perfect human-machine interaction.
The work of an Italian mathematician in the 1930s may hold the key to epidemic modeling. That's because models that try to replicate reality in all its detail have proven hard to steer during this crisis, leading to poor predictions despite noble and urgent efforts to recalibrate them. On the other hand, overly stylized compartmental models have run headlong into paradoxes, such as Sweden's pursuit of herd immunity. This approach is represented in the picture above. The important thing to note is that we are not attempting to find a model that is close to the truth, only one that is close to the orbit. This will make a lot more sense after Section 1, I promise.
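To make the "overly stylized compartmental model" concrete, here is a minimal sketch of the classic SIR model (the parameter values are illustrative assumptions, not fitted to any real epidemic):

```python
# Classic SIR compartmental model, integrated with simple Euler steps.
# beta (transmission rate) and gamma (recovery rate) are illustrative;
# s, i, r are the susceptible, infected, and recovered fractions.
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    s, i, r = s0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        trajectory.append((s, i, r))
    return trajectory

traj = sir()
peak_infected = max(i for _, i, _ in traj)
```

The entire epidemic is compressed into two rate constants, which is exactly why such models are tractable to steer and exactly why they can miss reality.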
Artificial intelligence has become an integral part of everything from medical diagnostics technology to systems that analyze electoral candidates and provide accurate information to voters. However, there are still many AI skeptics, not least those who question the role of AI in the justice system. Many legal leaders and institutions are curious about the efficiency benefits AI brings to the field. But the big question is: can AI help make the judicial system fairer? Many claim that the United States' judicial system is one of the most robust in the world.
Machine learning algorithms are now synonymous with finding patterns in data, but not all patterns are suitable for statistics-based, data-driven techniques, for example when these patterns don't have explicitly labelled targets to learn from. In some cases, these patterns can be expressed precisely as a rule. Reasoning is the process of matching rule-based patterns, or verifying that they don't exist, in a graph. Because these patterns are found with deductive logic, they can be found more efficiently and interpreted more easily than machine learning patterns, which are induced from the data. This article will introduce some common patterns and how you can express them in the rule language Datalog using RDFox, a knowledge graph and semantic reasoning engine developed by Oxford Semantic Technologies.
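As a hedged illustration of what "matching rule-based patterns with deductive logic" means, the following sketch evaluates the textbook Datalog reachability rule in plain Python (standing in for an engine such as RDFox; the graph and predicate names are hypothetical):

```python
# Naive bottom-up evaluation of the Datalog-style rule
#   path(x, z) :- path(x, y), path(y, z).
# starting from base facts path(x, y) :- edge(x, y),
# applied repeatedly until a fixed point is reached.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    closure = set(edges)
    while True:
        derived = {(x, z)
                   for (x, y) in closure
                   for (y2, z) in closure
                   if y == y2} - closure
        if not derived:          # fixed point: no rule fires any more
            return closure
        closure |= derived

paths = transitive_closure(edges)
```

Every derived fact such as `("a", "d")` follows deductively from the rule and the base facts, so the result is exact and each conclusion carries its own explanation, in contrast to a pattern induced statistically from data.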
In the latest in this series of posts, researchers from the EU-funded COMPRISE project write about privacy issues associated with voice assistants. They propose possible ways to maintain the privacy of users whilst ensuring that manufacturers can still access the quality usage data vital for improving the functionality of their products. "Tell me what you read and I tell you who you are." – Pierre de La Gorce. Voice assistants, such as Alexa, Siri, or Google Assistant, are becoming increasingly popular. Some users are, however, worried about their vocal interactions with these devices being stored in the cloud, together with a textual transcript of every spoken word. But is there an actual threat associated with the collection of these data?
Many applications of Bayesian data analysis involve sensitive information such as personal documents or medical records, motivating methods which ensure that privacy is protected. We introduce a general privacy-preserving framework for Variational Bayes (VB), a widely used optimization-based Bayesian inference method. Our framework respects differential privacy, the gold-standard privacy criterion, and encompasses a large class of probabilistic models, called the Conjugate Exponential (CE) family. We observe that we can straightforwardly privatise VB's approximate posterior distributions for models in the CE family, by perturbing the expected sufficient statistics of the complete-data likelihood. For a broadly-used class of non-CE models, those with binomial likelihoods, we show how to bring such models into the CE family, such that inferences in the modified model resemble the private variational Bayes algorithm as closely as possible, using the Pólya-Gamma data augmentation scheme. The iterative nature of variational Bayes presents a further challenge since iterations increase the amount of noise needed. We overcome this by combining: (1) an improved composition method for differential privacy, called the moments accountant, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models including latent Dirichlet allocation, Bayesian logistic regression, and sigmoid belief networks, evaluated on real-world datasets.
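The core move the abstract describes, perturbing expected sufficient statistics to obtain differential privacy, can be sketched with the standard Gaussian mechanism (the sensitivity bound, privacy parameters, and statistics below are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

# Gaussian mechanism: perturb the expected sufficient statistics of the
# complete-data likelihood with noise calibrated to (epsilon, delta)-DP.
# Assumes each record's contribution has L2 norm at most `sensitivity`.
def privatise_statistics(stats, epsilon=1.0, delta=1e-5, sensitivity=1.0,
                         rng=None):
    rng = rng or np.random.default_rng(0)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return stats + rng.normal(0.0, sigma, size=stats.shape)

suff_stats = np.array([10.0, 4.0, 2.5])   # toy expected sufficient statistics
noisy = privatise_statistics(suff_stats)
```

Because VB iterates, this perturbation would be applied at every update, which is why the abstract's moments accountant and mini-batch subsampling matter: they keep the total noise added across iterations manageable.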
Markov decision processes are of major interest in the planning community as well as in the model checking community. But in spite of the similarity in the considered formal models, the development of new techniques and methods happened largely independently in both communities. This work is intended as a beginning to unite the two research branches. We consider goal-reachability analysis as a common basis between both communities. The core of this paper is the translation from Jani, an overarching input language for quantitative model checkers, into the probabilistic planning domain definition language (PPDDL), and vice versa from PPDDL into Jani. These translations allow the creation of an overarching benchmark collection, including existing case studies from the model checking community, as well as benchmarks from the international probabilistic planning competitions (IPPC). We use this benchmark set as a basis for an extensive empirical comparison of various approaches from the model checking community, variants of value iteration, and MDP heuristic search algorithms developed by the AI planning community. On a per benchmark domain basis, techniques from one community can achieve state-of-the-art performance in benchmarks of the other community. Across all benchmark domains of one community, the performance comparison is however in favor of the solvers and algorithms of that particular community. Reasons are the design of the benchmarks, as well as tool-related limitations. Our translation methods and benchmark collection foster cross-fertilization between both communities, pointing out specific opportunities for widening the scope of solvers to different kinds of models, as well as for exchanging and adopting algorithms across communities.
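Value iteration for goal reachability, the shared baseline both communities compare against, can be sketched on a toy MDP (the states, actions, and probabilities below are an illustrative example, not a benchmark from the paper):

```python
# Value iteration for maximum goal-reachability probability in a toy MDP.
# trans[s][a] is a list of (probability, successor) pairs; "goal" and
# "trap" are absorbing states with fixed values 1 and 0.
trans = {
    "s0": {"risky": [(0.8, "goal"), (0.2, "trap")],
           "wait":  [(1.0, "s0")]},
}

def value_iteration(trans, eps=1e-8):
    v = {s: 0.0 for s in trans}
    v["goal"], v["trap"] = 1.0, 0.0
    while True:
        delta = 0.0
        for s, actions in trans.items():
            best = max(sum(p * v[t] for p, t in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:          # converged to within eps
            return v

values = value_iteration(trans)
```

Heuristic search algorithms from the planning community (e.g. LRTDP-style methods) compute the same fixed point but restrict the Bellman updates to states reachable under promising policies, which is where the cross-community comparison gets interesting.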
The problem of achieving common understanding between agents that use different vocabularies has been mainly addressed by techniques that assume the existence of shared external elements, such as a meta-language or a physical environment. In this article, we consider agents that use different vocabularies and only share knowledge of how to perform a task, given by the specification of an interaction protocol. We present a framework that lets agents learn a vocabulary alignment from the experience of interacting. Unlike previous work in this direction, we use open protocols that constrain possible actions instead of defining procedures, making our approach more general. We present two techniques that can be used either to learn an alignment from scratch or to repair an existent one, and we evaluate their performance experimentally.
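One simple way to picture learning an alignment from interaction experience is a frequency-count sketch (a deliberately crude stand-in, not the article's actual techniques; the vocabularies and interaction log are hypothetical):

```python
from collections import Counter, defaultdict

# Toy sketch: an agent records which foreign word co-occurred with which
# of its own words in interactions that succeeded, then aligns each
# foreign word to its most frequent counterpart.
def learn_alignment(successful_interactions):
    counts = defaultdict(Counter)
    for foreign_word, own_word in successful_interactions:
        counts[foreign_word][own_word] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

experience = [("gato", "cat"), ("gato", "cat"), ("gato", "dog"),
              ("perro", "dog")]
alignment = learn_alignment(experience)
```

The article's setting is harder than this sketch suggests: an open protocol only constrains which actions are possible, so the evidence an interaction provides about word meanings is indirect, and the same machinery must also support repairing a partially wrong alignment rather than learning one from scratch.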
The worldwide fight against COVID-19 continues to challenge AI experts. The Pope issued a challenge for AI experts to develop an "ethical algorithm" that would ensure fairness. Some new AI research shows how people are feeling about the virus. Other researchers are experimenting with the use of sound to detect the virus. Shortly before the Vatican closed due to the virus, members of the Pontifical Academy for Life, which researches bioethics and Catholic moral theology, worked on getting a commitment from AI developers to write an "ethical" algorithm in each AI system, according to an account in SSPX.news, the communication agency of the Society of St. Pius, based in Paris. "Following the example of electricity, AI is not necessary to perform a specific action, it is rather intended to change the way, the mode with which we carry out our daily actions," stated Fr.