Elizabeth Holmes convinced investors and patients that she had a prototype of a microsampling machine that could run a wide range of relatively accurate tests using a fraction of the volume of blood usually required. She lied; the Edison and miniLab devices didn't work. Worse still, the company was aware they didn't work, but continued to give patients inaccurate information about their health, including telling healthy pregnant women that they were having miscarriages and producing false positives on cancer and HIV screenings. But Holmes, who has to report to prison by May 30, was convicted of defrauding investors; she wasn't convicted of defrauding patients. This is because the principles of ethics for disclosure to investors, and the legal mechanisms used to take action against fraudsters like Holmes, are well developed.
From Armageddon to The Day After Tomorrow, there have been plenty of Hollywood movies about how our world might end. But if there is to be a global apocalypse, what might be to blame for wiping out all life on Earth? A wandering black hole, a giant asteroid impact and nuclear war could all trigger such disaster, as could the rise of killer robots or the reversal of our planet's magnetic field. Many of these might seem far-fetched, but with the Doomsday Clock being placed at a record 90 seconds to midnight this year – and scientists warning that humanity's continued existence is at greater risk than ever before – the threat is now all too real. So how exactly would these devastating possibilities come about?
Chris Winfield, founder of Understanding A.I., tells 'Fox & Friends Weekend' host Will Cain about a study showing patients preferred medical answers from artificial intelligence over doctors. When it comes to answering medical questions, can ChatGPT do a better job than human doctors? It appears to be possible, according to the results of a new study published in JAMA Internal Medicine, led by researchers from the University of California San Diego. The researchers compiled a random sample of nearly 200 medical questions that patients posted on Reddit, a popular social discussion website, for doctors to answer. Next, they entered the questions into ChatGPT (OpenAI's artificial intelligence chatbot) and recorded its response.
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the subgraph of G induced by the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership.
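The sampling design just described can be sketched in a few lines. Everything below (the block count, connection-probability matrix, and sample sizes) is a hypothetical toy configuration for illustration, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SBM: K = 2 blocks with assumed connection probabilities.
n, K = 200, 2
blocks = rng.integers(0, K, size=n)
P = np.array([[0.10, 0.02],
              [0.02, 0.08]])  # P[a, b] = edge probability between blocks a and b

# Draw a symmetric adjacency matrix A for the population graph G.
probs = P[blocks[:, None], blocks[None, :]]
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = upper | upper.T

# Sample W uniformly at random; the analyst observes the induced
# subgraph G(W) plus each sampled vertex's total degree in G and its block.
m = 40
W = rng.choice(n, size=m, replace=False)
G_W = A[np.ix_(W, W)]             # induced subgraph on W
total_degrees = A[W].sum(axis=1)  # total degree of each sampled vertex in G
observed_blocks = blocks[W]
```

Note that each sampled vertex's total degree exceeds its degree within G(W), and it is exactly this excess (edges into the unobserved part of V) that carries information about the unknown number of vertices.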
Missing records are a perennial problem in analysis of complex data of all types, when the target of inference is some function of the full data law. In simple cases, where data is missing at random or completely at random [15], well-known adjustments exist that result in consistent estimators of target quantities. Assumptions underlying these estimators are generally not realistic in practical missing data problems. Unfortunately, consistent estimators in more complex cases where data is missing not at random, and where no ordering on variables induces monotonicity of missingness status, are not known in general, with some notable exceptions [13, 18, 16]. In this paper, we propose a general class of consistent estimators for cases where data is missing not at random, and missingness status is non-monotonic. Our estimators, which are generalized inverse probability weighting estimators, make no assumptions on the underlying full data law, but instead place independence restrictions, and certain other fairly mild assumptions, on the distribution of missingness status conditional on the data. The assumptions we place on the distribution of missingness status conditional on the data can be viewed as a version of a conditional Markov random field (MRF) corresponding to a chain graph. Assumptions embedded in our model permit identification from the observed data law, and admit a natural fitting procedure based on the pseudo likelihood approach of [2]. We illustrate our approach with a simple simulation study, and an analysis of risk of premature birth in women in Botswana exposed to highly active anti-retroviral therapy.
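As background, the basic inverse probability weighting idea underlying these estimators can be illustrated in a deliberately simplified setting where missingness depends only on an always-observed covariate (a MAR-style restriction, far simpler than the nonmonotone MNAR models the paper treats). All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy full-data law (hypothetical): X is always observed, Y is sometimes missing.
n = 50_000
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)

# Missingness indicator R depends only on X, so P(R = 1 | X) is identified.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + X)))  # true propensity of being observed
R = rng.random(n) < p_obs

# The naive complete-case mean is biased because observed units have
# systematically larger X (and hence larger Y); the IPW estimator
# reweights each observed Y by 1 / P(R = 1 | X) to recover E[Y] = 0.
naive = Y[R].mean()
ipw = np.mean(R * Y / p_obs)
```

In practice the propensity `p_obs` must itself be estimated; in the paper's nonmonotone MNAR setting this is where the chain-graph model and the pseudo-likelihood fitting procedure come in.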
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
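The variance-corrected criterion can be illustrated in its simple plug-in form. Note the plug-in penalty below is not convex in the model parameters; the paper's contribution is precisely a convex surrogate for it via distributionally robust optimization and empirical likelihood. The loss vectors here are synthetic:

```python
import numpy as np

def variance_corrected_risk(losses, rho=1.0):
    """Empirical risk plus a variance penalty: mean + sqrt(rho * var / n).
    This is the plug-in form of the criterion the paper's convex
    surrogate upper-bounds; rho trades off mean loss against variance."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    return losses.mean() + np.sqrt(rho * losses.var(ddof=1) / n)

# Two hypothetical predictors with identical mean loss on a sample of 200:
# the low-variance one is preferred under the corrected criterion,
# even though plain empirical risk minimization cannot distinguish them.
stable = np.full(200, 0.30)            # constant loss 0.30
erratic = np.tile([0.0, 0.60], 100)    # same mean loss, high variance
```

Favoring `stable` here reflects the bias-variance balancing described above: among predictors with similar training risk, the one whose losses concentrate gives a tighter certificate on out-of-sample performance.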
We study the problem of off-policy policy evaluation (OPPE) in RL. In contrast to prior work, we consider how to estimate both the individual policy value and average policy value accurately. We draw inspiration from recent work in causal reasoning, and propose a new finite sample generalization error bound for value estimates from MDP models. Using this upper bound as an objective, we develop a learning algorithm of an MDP model with a balanced representation, and show that our approach can yield substantially lower MSE in common synthetic benchmarks and a HIV treatment simulation domain.
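For context, importance sampling, a standard OPPE baseline (and not the model-based approach proposed here), can be sketched on a toy one-step problem. The policies and reward means below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-action problem treated as a one-step MDP. A behavior (logging)
# policy collects data; we estimate the value of a different target
# policy by reweighting logged rewards.
true_means = np.array([0.2, 0.8])   # assumed mean reward per action
behavior = np.array([0.7, 0.3])     # logging policy
target = np.array([0.1, 0.9])       # policy we want to evaluate

n = 100_000
actions = rng.choice(2, size=n, p=behavior)
rewards = rng.normal(true_means[actions], 0.1)

# Importance weights correct for the mismatch between the two policies.
weights = target[actions] / behavior[actions]
is_estimate = np.mean(weights * rewards)
true_value = float(target @ true_means)  # 0.1*0.2 + 0.9*0.8 = 0.74
```

The variance of such weight-based estimators grows quickly with horizon and policy mismatch, which is one motivation for the model-based value estimates (with a finite-sample generalization error bound) pursued in the paper.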
The assumption that data samples are independent and identically distributed (iid) is standard in many areas of statistics and machine learning. Nevertheless, in some settings, such as social networks, infectious disease modeling, and reasoning with spatial and temporal data, this assumption is false. An extensive literature exists on making causal inferences under the iid assumption [17, 11, 26, 21], even when unobserved confounding bias may be present. But, as pointed out in [19], causal inference in non-iid contexts is challenging due to the presence of both unobserved confounding and data dependence. In this paper we develop a general theory describing when causal inferences are possible in such scenarios. We use segregated graphs [20], a generalization of latent projection mixed graphs [28], to represent causal models of this type and provide a complete algorithm for nonparametric identification in these models. We then demonstrate how statistical inference may be performed on causal parameters identified by this algorithm. In particular, we consider cases where only a single sample is available for parts of the model due to full interference, i.e., all units are pathwise dependent and neighbors' treatments affect each others' outcomes [24]. We apply these techniques to a synthetic data set which considers users sharing fake news articles given the structure of their social network, user activity levels, and baseline demographics and socioeconomic covariates.
We thank all the reviewers for the constructive feedback. R1: "fairly limited in terms of applicability... the ability to extend this work to more general settings?" The task simulates sequential decision making in HIV treatment. We show results in Table 1 (HIV treatment results, averaged over 3 random seeds), where MOPO outperforms BEAR and achieves almost the buffer max score.
NAM shape functions learned on the MIMIC-II dataset to predict mortality risk using medical features (shown on the x-axis) collected during the stay in the ICU. Low values on the y-axis indicate a low risk of mortality. Figure A.1 shows 16 of the shape functions learned by the NAM for the MIMIC-II dataset [38] to predict mortality in intensive care units (ICUs). The plot for HIV/AIDS shows that patients with AIDS have a lower risk of ICU mortality. While this might seem counter-intuitive, we confirmed with doctors that this is probably correct: among the various reasons why one might be admitted to the ICU, AIDS is a relatively treatable illness and is one of the less risky reasons for ICU admission.