Data privacy has become one of the top concerns in machine learning with deep neural networks, since there is an increasing demand to train deep net models on distributed, private data sets. For example, hospitals are now training their automated diagnosis systems on private patient data [LST+16, LS17, DFLRP+18], and advertisement providers are collecting users' online trajectories to optimize their learning-based recommendation algorithms [CAS16, YHC+18]. These private data, however, are usually decentralized in nature, and policies such as the Health Insurance Portability and Accountability Act (HIPAA) [Act96] and the California Consumer Privacy Act (CCPA) [Leg18] restrict the exchange of raw data among distributed users. Various schemes have been proposed for privacy-sensitive deep learning with distributed private data, in which model updates [KMY+16] or hidden-layer representations [VGSR18] are exchanged instead of the raw data. However, recent research has identified that even if the raw data are kept private, sharing the model updates or hidden-layer activations can still leak sensitive information about the input, which we refer to as the victim.
We are often interested in clustering objects that evolve over time and identifying solutions to the clustering problem for every time step. Evolutionary clustering provides insight into cluster evolution and temporal changes in cluster memberships while enabling performance superior to that achieved by independently clustering data collected at different time points. In this paper we introduce evolutionary affinity propagation (EAP), an evolutionary clustering algorithm that groups data points by exchanging messages on a factor graph. EAP promotes temporal smoothness of the solution to clustering time-evolving data by linking the nodes of the factor graph that are associated with adjacent data snapshots, and introduces consensus nodes to enable cluster tracking and identification of cluster births and deaths. Unlike existing evolutionary clustering methods that require additional processing to approximate the number of clusters or match them across time, EAP determines the number of clusters and tracks them automatically. A comparison with existing methods on simulated and experimental data demonstrates the effectiveness of the proposed EAP algorithm.
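EAP builds on the message-passing updates of standard (static) affinity propagation. As a point of reference, here is a minimal NumPy sketch of that static base algorithm only — the consensus nodes and cross-snapshot links that define EAP are omitted, and the data and parameter values are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, max_iter=200):
    """Static affinity propagation: exchange responsibility and availability
    messages on a similarity matrix S until a set of exemplars emerges."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i,k)
    A = np.zeros((n, n))  # availabilities a(i,k)
    for _ in range(max_iter):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        max_other = np.repeat(first[:, None], n, axis=1)
        max_other[np.arange(n), idx] = second
        R = damping * R + (1 - damping) * (S - max_other)
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - np.diag(R))  # a(k,k): sum of positive r(i',k)
        A = damping * A + (1 - damping) * Anew
    exemplars = np.where(np.diag(A + R) > 0)[0]
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # exemplars represent themselves
    return exemplars, labels

# Two well-separated toy clusters; similarity = negative squared distance.
pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
                [10, 10], [10, 11], [11, 10], [11, 11], [10.5, 10.5]], float)
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S[S < 0]))  # preference = median similarity
exemplars, labels = affinity_propagation(S)
```

Note that the number of clusters is not an input: it falls out of the diagonal "preference" values, which is the property EAP exploits to determine and track cluster counts automatically across snapshots.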
Humana Inc. has employed artificial intelligence to come up with persuasive language in emails sent to customers to encourage more of them to get flu shots--and it is seeing higher open and click-through rates. The Louisville, Ky.-based health insurer serves more than 16 million customers, including four million Medicare Advantage members. Medicare Advantage plans are administered by private insurers. These plans typically offer lower out-of-pocket costs than traditional government-run Medicare in exchange for members using...
UnitedHealth Group used technology that may have kept sick black patients from receiving high-quality care. New York's state departments of financial services and health sent a letter to UnitedHealth Group over its use of an algorithm that researchers found to be racially biased. Per the Wall Street Journal, the missive is an initial step in a larger investigation. The algorithm in question, Impact Pro, identifies which patients would benefit from complex health procedures; between 2013 and 2015 it favored treating healthier white patients over sicker black ones, according to a study published in the prestigious journal Science. New York lawmakers deemed the use of this discriminatory technology "unlawful," and asked UnitedHealth to either demonstrate that the algorithm is not biased or stop using Impact Pro immediately.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Artificial intelligence could one day be used to tailor education to the needs of each individual child. Credit: Suzanne Kreiter/The Boston Globe/Getty

People produce more than 2.5 quintillion bytes of data each day. Businesses are harnessing these riches using artificial intelligence (AI) to add trillions of dollars in value to goods and services each year. Amazon dispatches items it anticipates customers will buy to regional hubs before they are purchased. Thanks to the vast extractive might of Google and Facebook, every bakery and bicycle shop is the beneficiary of personalized targeted advertising. But governments have been slow to apply AI to hone their policies and services.
Understanding the effect of a particular treatment or policy pertains to many areas of interest -- ranging from political economics and marketing to health care and personalized treatment studies. In this paper, we develop a non-parametric, model-free test for detecting the effects of treatment over time that extends widely used Synthetic Control tests. The test is built on counterfactual predictions arising from many learning algorithms. In the Neyman-Rubin potential outcome framework with possible carry-over effects, we show that the proposed test is asymptotically consistent for stationary, beta-mixing processes. We do not assume that the class of learners necessarily captures the correct model. We also discuss estimates of the average treatment effect, and we provide regret bounds on the predictive performance. To the best of our knowledge, this is the first set of results that allow, for example, any Random Forest to be useful for provably valid statistical inference in the Synthetic Control setting. In experiments, we show that our Synthetic Learner is substantially more powerful than classical methods based on Synthetic Control or Difference-in-Differences, especially in the presence of non-linear outcome models.
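To make the counterfactual-prediction idea concrete, here is a toy sketch, not the paper's exact test: a learner is fit on the pre-treatment period to predict the treated unit from donor units, the ratio of post- to pre-treatment prediction error is the statistic, and placebo runs on the donors give a reference distribution. Plain least squares stands in for an arbitrary learner (the paper allows, e.g., Random Forests); the simulated panel and effect size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, T0 = 60, 40                                   # total periods; treatment at T0
factor = np.cumsum(rng.normal(size=T))           # common trend shared by units
loadings = rng.uniform(0.5, 1.5, size=5)
donors = factor[:, None] * loadings + rng.normal(scale=0.3, size=(T, 5))
treated = donors @ np.array([0.3, 0.2, 0.2, 0.2, 0.1]) \
          + rng.normal(scale=0.3, size=T)
treated[T0:] += 3.0                              # treatment effect post-T0

def ratio_stat(y, X, T0):
    """Fit the learner (here: least squares) on the pre-period only and
    compare post-period to pre-period prediction errors."""
    w, *_ = np.linalg.lstsq(X[:T0], y[:T0], rcond=None)
    resid = y - X @ w
    return np.abs(resid[T0:]).mean() / np.abs(resid[:T0]).mean()

stat = ratio_stat(treated, donors, T0)

# Placebo distribution: rerun the procedure pretending each donor was treated.
placebos = [ratio_stat(donors[:, j], np.delete(donors, j, axis=1), T0)
            for j in range(donors.shape[1])]
p_value = (1 + sum(s >= stat for s in placebos)) / (1 + len(placebos))
```

With only five donors the smallest attainable placebo p-value is 1/6, which illustrates why the paper's asymptotic consistency results for dependent (beta-mixing) data, rather than this finite placebo comparison, carry the formal inferential weight.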
With the passage of the Chronic Care Act, Medicare Advantage plans have been scrambling to figure out how to offer supplemental benefits to their members. Passed as part of a Bipartisan Budget Act last year, the Chronic Care Act promotes the use of benefits that maintain health or keep a beneficiary's health from deteriorating, and the benefits don't have to be health-related. Instead, they can include help with social determinants of health such as housing, nutrition and transportation. Under the act, the supplemental benefits can also be tailored to the qualifying individual: the same benefits don't have to be offered to every beneficiary.
But by 2017, that price tag had ballooned to $3.5 trillion flowing to and from insurers, Medicare and Medicaid via patient premiums and claims payouts to healthcare providers and drug companies. All told, keeping the U.S. healthcare system spinning took six billion insurance-related transactions (an increase of 1.2 billion transactions from 2016), according to the nonprofit Council for Affordable Quality Healthcare. Could artificial intelligence (AI) technologies help control the industry's rising costs and tsunami of paperwork? Insurers could save up to $7 billion over 18 months using AI-driven technologies by streamlining administrative processes, according to a recent Accenture study. By automating routine business tasks alone, the study projects that health insurers could save $15 million per 100 full-time employees.
The distribution of health care payments to insurance plans has substantial consequences for social policy. Risk adjustment formulas predict spending in health insurance markets in order to provide fair benefits and health care coverage for all enrollees, regardless of their health status. Unfortunately, current risk adjustment formulas are known to undercompensate payments to health insurers for specific groups of enrollees (by underpredicting their spending). Much of the algorithmic fairness literature on group fairness to date has focused on classifiers and binary outcomes. To improve risk adjustment formulas for undercompensated groups, we expand on concepts from the statistics, computer science, and health economics literature to develop new fair regression methods for continuous outcomes by building fairness considerations directly into the objective function. We additionally propose a novel measure of fairness, while asserting that a suite of metrics is necessary in order to evaluate risk adjustment formulas more fully. Our data application using the IBM MarketScan Research Databases and simulation studies demonstrate that these new fair regression methods may lead to massive improvements in group fairness with only small reductions in overall fit.
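The general idea of building fairness into a regression objective can be sketched in a few lines. The following is an illustrative assumption, not the paper's estimator or data: spending is underpredicted for a group whose cost driver is only noisily observed, and a quadratic penalty on the group's mean residual is added to the least-squares objective, which still admits a closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = (rng.random(n) < 0.3).astype(float)       # undercompensated group indicator
x1 = rng.normal(size=n)
x2 = z + rng.normal(scale=0.5, size=n)        # noisy proxy for the cost driver
X = np.column_stack([np.ones(n), x1, x2])     # observed features (z itself unused)
y = 1.0 + x1 + 2.0 * z + rng.normal(scale=0.5, size=n)  # simulated spending

a = z / z.sum()   # so that a @ (y - X w) is the group's mean residual

def fit(X, y, lam=0.0):
    """Minimize ||y - Xw||^2 + lam * (group mean residual)^2.
    The stationarity condition gives the closed form
    w = (X'MX)^{-1} X'My with M = I + lam * a a'."""
    M = np.eye(len(y)) + lam * np.outer(a, a)
    return np.linalg.solve(X.T @ M @ X, X.T @ M @ y)

w_ols = fit(X, y)                 # plain least squares
w_fair = fit(X, y, lam=1e5)       # fairness term in the objective

m_ols = a @ (y - X @ w_ols)       # group undercompensation (mean residual)
m_fair = a @ (y - X @ w_fair)
```

Here the ordinary fit leaves a positive mean residual for the group (its spending is underpredicted), while the penalized fit drives that mean residual toward zero at some cost in overall fit, mirroring the fairness/fit trade-off the abstract describes.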