With the ever-increasing amount of available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a "new" computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with relative improvements between 38% and 99% over industry-level algorithms. Moreover, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and by enabling the analysis of the characteristics of those groups.
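As an illustration of the prediction step, consider a mixed-membership block model in which user u belongs to user groups with weights theta_u, item i to item groups with weights eta_i, and each group pair (k, l) has its own rating distribution. The sketch below is our own illustrative code, not the authors': the dimensions and the way models are "sampled" (random draws rather than posterior MCMC samples) are assumptions, but it shows how averaging predictions over an ensemble of models yields a posterior-predictive rating distribution.

```python
# Minimal sketch of rating prediction under a mixed-membership stochastic
# block model (MMSBM). All parameters and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_ITEMS, K, L, R = 50, 40, 3, 3, 5  # R discrete rating values

def random_model():
    """Draw one candidate model; a Bayesian treatment would sample these
    from the posterior (e.g. via MCMC) instead of a flat Dirichlet prior."""
    theta = rng.dirichlet(np.ones(K), size=N_USERS)   # user group memberships
    eta = rng.dirichlet(np.ones(L), size=N_ITEMS)     # item group memberships
    p = rng.dirichlet(np.ones(R), size=(K, L))        # p[k, l, r]: rating probs
    return theta, eta, p

def rating_distribution(theta, eta, p, u, i):
    # P(r | u, i) = sum_{k,l} theta_u[k] * eta_i[l] * p[k, l, r]
    return np.einsum("k,l,klr->r", theta[u], eta[i], p)

# Average over the model ensemble rather than committing to a single fit.
models = [random_model() for _ in range(100)]
pred = np.mean([rating_distribution(*m, u=0, i=0) for m in models], axis=0)
print("predicted rating probabilities:", np.round(pred, 3))
print("expected rating:", np.round(pred @ np.arange(1, R + 1), 2))
```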
We consider a group of Bayesian agents who try to estimate a state of the world $\theta$ through interaction on a social network. Each agent $v$ initially receives a private measurement of $\theta$: a number $S_v$ drawn from a Gaussian distribution with mean $\theta$ and standard deviation one. Then, in each discrete time step, each agent reveals its estimate of $\theta$ to its neighbors and, observing its neighbors' actions, updates its belief using Bayes' law. This process aggregates information efficiently, in the sense that all the agents converge to the belief that they would hold had they had access to all the private measurements. We show that this process is computationally efficient, so that each agent's calculation can be easily carried out. We also show that on any graph the process converges after at most $2N \cdot D$ steps, where $N$ is the number of agents and $D$ is the diameter of the network. Finally, we show that on trees and on distance-transitive graphs the process converges after $D$ steps, and that it preserves privacy, so that agents learn very little about the private signals of most other agents, despite the efficient aggregation of information. Our results extend those in an unpublished manuscript of the first and last authors.
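On trees, the convergence claim above has a simple interpretation: after t rounds, an agent's estimate coincides with the average of the private signals within graph distance t of it, so all agents agree on the global average after D rounds. The toy simulation below assumes that characterization and is our own sketch, not the paper's algorithm; the graph and random seed are arbitrary.

```python
# Toy illustration of Gaussian estimator aggregation on a tree, assuming
# that after t rounds each agent's estimate equals the mean of the private
# signals within graph distance t (a sketch, not the paper's update rule).
import numpy as np
from collections import deque

def distances_from(adj, source):
    """BFS distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

rng = np.random.default_rng(1)
theta = 2.0
# A small tree with diameter D = 3: a star with a pendant path.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
signals = {v: rng.normal(theta, 1.0) for v in adj}   # S_v ~ N(theta, 1)
dist = {v: distances_from(adj, v) for v in adj}

D = 3
for t in range(D + 1):
    estimates = {v: np.mean([signals[w] for w in adj if dist[v][w] <= t])
                 for v in adj}
    print(f"t={t}:", {v: round(e, 3) for v, e in estimates.items()})
# At t = D every estimate equals the mean of all private signals, i.e. the
# belief an agent would hold with access to every measurement.
```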
Planning in real time offers several benefits over the more typical techniques of implementing Non-Player Character (NPC) behavior with scripts or finite state machines. NPCs that plan their actions dynamically are better equipped to handle unexpected situations. The modular nature of the goals and actions that make up the plan facilitates reuse, sharing, and maintenance of behavioral building blocks. These benefits, however, come at the cost of CPU cycles. In order to simultaneously plan for several NPCs in real time, while continuing to share the processor with the physics, animation, and rendering systems, careful consideration must be given to the supporting architecture. The architecture must support distributed processing and caching of costly calculations. These considerations have impacts that stretch beyond the architecture of the planner, and affect the agent architecture as a whole. This paper describes lessons learned while implementing real-time planning for NPCs in F.E.A.R., a AAA first-person shooter shipping for PC in 2005.
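At its core, a goal-oriented action planner searches over modular actions with symbolic preconditions and effects until the goal conditions are satisfied. The sketch below is a minimal Python illustration of that idea only; the shipped F.E.A.R. planner is a C++ system embedded in a larger agent architecture, and the action names, costs, and the forward uniform-cost search used here are illustrative assumptions rather than a reproduction of it.

```python
# Minimal goal-oriented action planning (GOAP) sketch: uniform-cost search
# over world states, where each action has preconditions, effects, and a cost.
from heapq import heappush, heappop

ACTIONS = {
    # name: (preconditions, effects, cost) -- all hypothetical examples
    "DrawWeapon": ({"weapon_armed": False}, {"weapon_armed": True}, 1),
    "LoadWeapon": ({"weapon_armed": True, "weapon_loaded": False},
                   {"weapon_loaded": True}, 1),
    "Attack":     ({"weapon_armed": True, "weapon_loaded": True},
                   {"target_dead": True}, 2),
}

def applicable(state, conditions):
    """True if every condition holds in the state (absent keys read False)."""
    return all(state.get(k, False) == v for k, v in conditions.items())

def plan(start, goal):
    """Search forward from the start state until the goal conditions hold."""
    frontier = [(0, tuple(sorted(start.items())), [])]
    seen = set()
    while frontier:
        cost, state_t, steps = heappop(frontier)
        state = dict(state_t)
        if applicable(state, goal):
            return steps
        if state_t in seen:
            continue
        seen.add(state_t)
        for name, (pre, eff, c) in ACTIONS.items():
            if applicable(state, pre):
                nxt = {**state, **eff}  # apply the action's effects
                heappush(frontier,
                         (cost + c, tuple(sorted(nxt.items())), steps + [name]))
    return None  # no action sequence reaches the goal

print(plan({"weapon_armed": False, "weapon_loaded": False},
           {"target_dead": True}))
# -> ['DrawWeapon', 'LoadWeapon', 'Attack']
```

The modularity benefit described above is visible even in this toy version: adding or removing an entry in ACTIONS changes the behaviors available to every NPC without touching any scripted sequence.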
In part, the critics of AI are driven by the knowledge that 'white collar jobs' are the ones that are now under threat. Business leaders are frequently confronted by notions of job-killing automation and headlines on variations of the theme that "Robots Will Steal Our Jobs." Elon Musk, CEO of Tesla, Silicon Valley figurehead, and champion of technology-driven innovation, even goes a step further by suggesting that AI is a fundamental threat to human civilisation. The robot on the assembly line is now a familiar image. AI in middle management is new.
The complete part of the earthquake frequency-magnitude distribution (FMD), above the completeness magnitude mc, is well described by the Gutenberg-Richter law. The parameter mc, however, varies in space due to the seismic network configuration, yielding a convoluted FMD shape below max(mc). This paper investigates the shape of the generalized FMD (GFMD), which may be described as a mixture of elemental FMDs (eFMDs) defined as asymmetric Laplace distributions with mode mc [Mignan, 2012, https://doi.org/10.1029/2012JB009347]. An asymmetric Laplace mixture model (GFMD-ALMM) is thus proposed, with its parameters (detection parameter kappa, Gutenberg-Richter beta-value, mc distribution, as well as number K and weights w of the eFMD components) estimated using a semi-supervised hard expectation-maximization approach with BIC penalties for model complexity. The performance of the proposed method is analysed, with encouraging results: kappa, beta, and the mc distribution range are retrieved for different GFMD shapes in simulations, as well as in regional catalogues (southern and northern California, Nevada, Taiwan, France), in a global catalogue, and in an aftershock sequence (Christchurch, New Zealand). We find max(mc) to be conservative compared to other methods, and kappa = k/log(10) = 3 in most catalogues (compared to beta = b/log(10) = 1), but also that biases in kappa and beta may occur when rounding errors are present below completeness. By modelling different FMD shapes in an autonomous manner, the GFMD-ALMM opens the door to new statistical analyses in the realm of incomplete seismicity data, which could in theory improve earthquake forecasting by considering roughly ten times more events.
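To make the elemental building block concrete, the sketch below evaluates an asymmetric Laplace eFMD (exponential increase at rate kappa below mc, Gutenberg-Richter exponential decay at rate beta above it) and a weighted GFMD mixture. The component list and parameter values are illustrative only, loosely echoing the kappa = 3 and beta = 1 values reported above; this is not the GFMD-ALMM estimation code.

```python
# Sketch of the elemental FMD as an asymmetric Laplace density with mode mc
# (after Mignan, 2012), and of the GFMD as a weighted mixture of eFMDs.
import numpy as np

def efmd(m, mc, kappa, beta):
    """Asymmetric Laplace density: rate kappa below mc (detection), rate
    beta above mc (Gutenberg-Richter decay); both are natural-log rates."""
    m = np.asarray(m, dtype=float)
    norm = kappa * beta / (kappa + beta)  # makes the density integrate to 1
    return norm * np.where(m <= mc,
                           np.exp(kappa * (m - mc)),
                           np.exp(-beta * (m - mc)))

def gfmd(m, components, weights):
    """GFMD as a mixture of eFMDs; components = [(mc, kappa, beta), ...]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * efmd(m, *comp) for wi, comp in zip(w, components))

m = np.linspace(0.0, 6.0, 601)
# Two hypothetical components sharing kappa = 3 and beta = 1.
density = gfmd(m, [(1.5, 3.0, 1.0), (2.5, 3.0, 1.0)], [0.4, 0.6])
# Riemann-sum check; slightly below 1 since the tails extend past [0, 6].
print("approximate integral:", round(float(density.sum() * (m[1] - m[0])), 3))
```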