AI's carbon footprint problem

#artificialintelligence

For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power and can generate high volumes of climate-changing carbon emissions. A study last year found that training an off-the-shelf AI language-processing system produced 1,400 pounds of emissions--about the amount produced by flying one person roundtrip between New York and San Francisco. The full suite of experiments needed to build and train that AI language system from scratch can generate even more: up to 78,000 pounds, depending on the source of power. But there are ways to make machine learning cleaner and greener, a movement that has been called "Green AI." Some algorithms are less power-hungry than others, for example, and many training sessions can be moved to remote locations that get most of their power from renewable sources.
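Estimates like the ones above typically come from a simple energy-times-carbon-intensity calculation. The sketch below is only illustrative (every number in it is a hypothetical placeholder, not a figure from the study cited above), but it shows the arithmetic and why moving training to a grid powered by renewables changes the result so much:

```python
# Back-of-the-envelope estimate of training emissions, in the spirit of the
# accounting described above. All inputs are illustrative placeholders, not
# figures from the study; real accounting needs measured power draw and the
# actual carbon intensity of the local grid.

def training_emissions_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Estimate CO2-equivalent emissions (kg) for one training run.

    energy (kWh) = GPUs x per-GPU power (kW) x hours x data-center PUE
    emissions    = energy x grid carbon intensity (kg CO2e / kWh)
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 8 GPUs at 0.3 kW each for 72 hours, PUE 1.6.
# A coal-heavy grid (~0.7 kg/kWh) vs. a mostly renewable one (~0.05 kg/kWh):
print(training_emissions_kg(8, 0.3, 72, 1.6, 0.7))   # ~193 kg CO2e
print(training_emissions_kg(8, 0.3, 72, 1.6, 0.05))  # ~14 kg CO2e
```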


How artificial intelligence can save journalism

#artificialintelligence

The economic fallout from the COVID-19 pandemic has caused an unprecedented crisis in journalism that could decimate media organizations around the world. The future of journalism -- and its survival -- could lie in artificial intelligence (AI). AI refers "to intelligent machines that learn from experience and perform tasks like humans," according to Francesco Marconi, a professor of journalism at Columbia University in New York, who has just published a book on the subject: Newsmakers: Artificial Intelligence and the Future of Journalism. Marconi was head of the media lab at the Wall Street Journal and the Associated Press, one of the largest news organizations in the world. His thesis is clear and incontrovertible: the journalism world is not keeping pace with the evolution of new technologies.


Council Post: Building Ethical And Responsible AI Systems Does Not Start With Technology Teams

#artificialintelligence

Chief Technology Officer at Integrity Management Services, Inc., where she leads cutting-edge technology (AI) solutions for clients. In his book Talking to Strangers, Malcolm Gladwell discusses an AI experiment that looked at 554,689 bail hearings conducted by New York City judges. As one online publication noted, "Of the more than 400,000 people released, over 40% either failed to appear at their subsequent trials or were arrested for another crime." However, decisions recommended by the machine learning algorithm on whom to detain or release would have resulted in 25% fewer crimes. This is an example of an AI system that is less biased than a human.
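The comparison behind that claim can be sketched very loosely in code: hold the overall release rate fixed and ask whether releasing by model-ranked risk would produce fewer failures than the judges' actual choices. Everything below is synthetic and deliberately simplified; the toy judges decide independently of risk, which overstates the algorithm's advantage, and the real study handled the counterfactuals far more carefully.

```python
# Hypothetical sketch of the comparison described above: keep the release
# rate fixed and ask whether ranking defendants by a model's risk score
# would lead to fewer failures (re-arrest or non-appearance) than the
# judges' actual decisions. The data are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
risk_score = rng.uniform(size=n)                    # model-estimated risk
failed = rng.uniform(size=n) < risk_score * 0.6     # higher risk -> more failures
judge_released = rng.uniform(size=n) < 0.73         # judges' (toy) release decisions

release_rate = judge_released.mean()
# Algorithm: release the lowest-risk defendants at the same overall rate.
threshold = np.quantile(risk_score, release_rate)
algo_released = risk_score <= threshold

judge_failures = (judge_released & failed).sum()
algo_failures = (algo_released & failed).sum()
print(judge_failures, algo_failures)  # in this toy setup the algorithm releases fewer who fail
```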


Nearly half of all coronavirus deaths in US occurred inside nursing homes: report

FOX News

Nearly half of all deaths from the coronavirus in the United States have occurred at nursing homes and other long-term care facilities, according to new reports. The share of deaths from such facilities has been disproportionate to their share of cases: while only 11 percent of positive cases – around 282,000 – have occurred at nursing homes and long-term care facilities, about 43 percent of deaths – some 54,000 – have come from them, the New York Times reported. The Times cited its own database for the numbers, arguing that some states and the federal government have not provided comprehensive data. The numbers are based on "official confirmations from states, counties and the facilities themselves," as well as some data provided by the federal government, the newspaper said. According to the data, the states with the highest number of deaths in nursing homes were New York, New Jersey, Massachusetts and Pennsylvania – all of which recorded more than 4,000 deaths in nursing homes.
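A quick back-of-the-envelope check, using only the percentages and counts quoted above (and therefore only approximate), shows how different the implied fatality rates are inside and outside such facilities:

```python
# Rough arithmetic derived solely from the figures quoted above; the
# underlying counts are acknowledged to be incomplete, so treat these as
# order-of-magnitude estimates, not official statistics.
facility_cases, facility_deaths = 282_000, 54_000
total_cases = facility_cases / 0.11      # ~2.56 million cases overall
total_deaths = facility_deaths / 0.43    # ~126,000 deaths overall

cfr_facility = facility_deaths / facility_cases
cfr_elsewhere = (total_deaths - facility_deaths) / (total_cases - facility_cases)
print(f"{cfr_facility:.1%} vs {cfr_elsewhere:.1%}")  # roughly 19% vs 3%
```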


Florida congressman: Spike in coronavirus cases 'obviously' an indication of increased testing

FOX News

Florida closed bars just three weeks after allowing them to reopen, after the state reported more than 24,000 new coronavirus cases over the past week. Rep. Greg Steube, R-Fla., pushed back on New York's decision to quarantine travelers from Florida, arguing that the Sunshine State's spike in coronavirus cases is the result of increased testing. "I mean, the first thing is there's 200,000 tests a week that we're doing in Florida so, obviously, you're going to see a spike in the number of positive cases," Steube said during "Cavuto Live" on Saturday. His comments came amid concerns that states like Florida reopened too quickly and were seeing a higher number of cases as a result. Vice President Mike Pence, who leads the White House's coronavirus task force, has maintained that the nation is experiencing the effects of higher testing rates, not hasty reopenings.


The use of artificial intelligence in medicine is growing rapidly – IAM Network

#artificialintelligence

Everyday use of artificial intelligence for health diagnosis could still be years away, but the field is robust right now. "We still have a lot of unknowns in terms of generalizing and validation of these systems before we can start using them as standard of care," Dr. Matthew Hanna, a pathologist at Memorial Sloan Kettering Cancer Center in New York City, told United Press International earlier this month. On the one hand, this is not surprising: the history of artificial intelligence (AI) is a history of overcommitment and underdelivery in real-world "production" environments. On closer inspection, though, medicine is a domain where AI is proving genuinely useful, in contrast to many others, and its use is set to grow rapidly. The UPI article highlights patients' desire to see a human doctor, and their distrust of a machine's grasp of subtleties, as principal factors in choosing a person over an AI. It also points to the additional long-term testing needed before autonomous AI diagnostic systems can be widely deployed.


Machine learning will mean more drug ads, and hopefully better outcomes, says ad-tech firm DeepIntent

ZDNet

If you've ever visited a doctor, you may find yourself receiving more ads for drugs in coming years. Advertising by Big Pharma directly to consumers is a small portion of the total online advertising market but may increase as new advertising tools, some of them using machine learning, are employed by the drug companies. "Pharma is about 18% of national GDP and only 3% of digital advertising, that's pretty astonishing," Christopher Paquette, CEO of New York-based DeepIntent Technologies, told ZDNet in a telephone interview. DeepIntent, founded just over four years ago, is part of publicly traded advertising technology firm Propel Media. "There is a $20 billion opportunity to unlock this digital advertising," said Paquette.


"Explaining" machine learning reveals policy challenges

Science

There is a growing demand to be able to “explain” machine learning (ML) systems' decisions and actions to human users, particularly when used in contexts where decisions have substantial implications for those affected and where there is a requirement for political accountability or legal compliance (1). Explainability is often discussed as a technical challenge in designing ML systems and decision procedures, to improve understanding of what is typically a “black box” phenomenon. But some of the most difficult challenges are nontechnical and raise questions about the broader accountability of organizations using ML in their decision-making. One reason for this is that many decisions by ML systems may exhibit bias, as systemic biases in society lead to biases in data used by the systems (2). But there is another reason, less widely appreciated. Because the quantities that ML systems seek to optimize have to be specified by their users, explainable ML will force policy-makers to be more explicit about their objectives, and thus about their values and political choices, exposing policy trade-offs that may have previously only been implicit and obscured. As the use of ML in policy spreads, there may have to be public debate that makes explicit the value judgments or weights to be used. Merely technical approaches to “explaining” ML will often only be effective if the systems are deployed by trustworthy and accountable organizations.
The promise of ML is that it could lead to better decisions, yet concerns have been raised about its use in policy contexts such as criminal justice and policing. A fundamental element of the demand for explainability is for explanation of what the system is “trying to achieve.” Most policy decision-making makes extensive use of constructive ambiguity to pursue shared objectives with sufficient political consensus. There is thus a tension between political or policy decisions, which trade off multiple (often incommensurable) aims and interests, and ML, typically a utilitarian maximizer of what is ultimately a single quantity and which typically entails explicit weighting of decision criteria. We focus on public policy decision-making using ML algorithms that learn the relationships between data inputs and decision outputs.
As a first step, policy-makers need to decide among a number of possible meanings of explainability. These range from causal accounts and post hoc interpretations of decisions (3) to assurance that outcomes are reliable or fair in terms of the specified objectives for the system (4). For example, the explainability requirements for ML systems used by local authorities to determine benefit payments will differ greatly from those required for the enforcement of competition policy with respect to pricing by online merchants. Each of the specific meanings of explainability has different technical requirements, which will imply choices about where efficiency and cost might need to be sacrificed to deliver both explainability and the desired outcomes. Choosing which meaning is relevant is far from a technical question (though what can be provided depends on what is technically feasible). Thus, those seeking explainability will need to specify, in terms translatable to how ML systems operate, what exactly they mean, and what kind of evidence would satisfy their demand (5). It must also be possible to monitor whatever explanations are provided, and there must be practical methods to enforce compliance.
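To see how narrow even a satisfied technical demand can be, here is a minimal sketch of one post hoc reading of explainability mentioned above: scoring which inputs a trained model relies on. The dataset, model, and choice of permutation importance are illustrative assumptions of this sketch, not methods drawn from the article; the point is that such an "explanation" answers a technical question while leaving the institutional ones untouched.

```python
# Minimal post hoc "explanation" on synthetic data: how much does held-out
# accuracy drop when each input feature is shuffled? A high drop means the
# model relies on that feature. This says nothing about whether the model's
# objective, or the organization deploying it, is trustworthy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")  # mean accuracy drop when feature i is shuffled
```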
Furthermore, policy institutions starting to deploy algorithmic or ML-based decision systems, such as the police, courts, and government agencies, are operating in the context of declining trust in some aspects of public life. This context is important for understanding demands for explainability, as these may in part reflect broader legitimacy demands of the policy-making process. If an organization is not trusted, its automated decision procedures will likely also be distrusted. This implies a broader need for trustworthy processes and institutions, for “intelligent accountability” as the result of informed and independent scrutiny, communicated clearly to the public (6). Satisfying the demand for explainability implies testing the trustworthiness of the organizations using ML systems to make decisions affecting individuals. Evaluation requires comparing outcomes against a benchmark, which can be the baseline situation, or a specified desired outcome.
Taking the demand for explainability as a demand for accountability, the promise of ML is that it could lead to more legitimate and better decisions than humans can make, on some measure. Potential benefits are clearly demonstrable in some forms of medical diagnosis (7) or monitoring attempted financial fraud (8). In these domains, there is general agreement on a straightforward quantity to optimize, and the incentives of principals (citizens or customers) and agents (public or corporate decision-makers) are aligned. Public concern about the use of ML focuses on other domains, such as marketing or policing, where there may be less agreement about (or trust in) the aim of either the ML system or the organization using it.
These concerns highlight a key challenge posed by the use of ML in policy decisions, which is that ML processes are almost always set up to optimize an objective function; this optimization goal can be described in anthropomorphic terms as the “intention” of the system. Yet there is often little or no explicit discussion by policy-makers when considering using ML systems about what conflicting goals, benefits, and risks may trade off against each other as a result. One reason for this is that it is inherently challenging to specify a concrete objective function in sociopolitical domains (9). For example, like current ML systems, economists' decisions are informed by estimates of statistical relationships between directly observable and unobservable variables, derived from data generated by a complex environment. Yet economic policies such as tax changes often fail to take into account all relevant factors in the decision environment, or likely behavior changes, in specifying the objective function (10). The use of ML systems in other policy contexts will expand the scope of such unintended consequences.
Given that the dominant paradigm of machine learning is based on optimization, the use of ML in policy decisions thus speaks to a fundamental debate about social welfare. From the perspective of ethical theories, ML is largely consequentialist: A machine system is configured on the basis of its ability to achieve a desired outcome. Conventional policy analysis is similarly typically based on consequentialist economic social welfare criteria.
The well-known impossibility theorems in social choice theory (11) establish that when the goal is to aggregate individual choices under a set of reasonable social decision rules, it is impossible to satisfy a set of desirable criteria simultaneously, and thus impossible to achieve a set of desired outcomes by optimizing a single quantity. Critics of consequentialist economic policy analysis argue that people have multidimensional, probably incommensurable, and possibly contradictory objectives, so that imposing utilitarian decision-making procedures will conflict both with reality and with ethical intuitions (12). Nevertheless, policy choices are made, so there has always been an unavoidable, albeit often implicit, trade-off or weighting of different objectives (12). For example, cost-benefit analysis can incorporate environmental and cultural, as well as financial, considerations, but converts all of these into monetary values. Any choice made when there are multiple interests or trade-offs will imply weights on the different components.
As these trade-offs are codified into ML objective functions, the weights given to competing objectives comprise a first-line characterization of how conflicts will be resolved. Using ML systems in political contexts is extending the use of optimization; progress in making these ML systems more understandable to policy-makers will make the de facto choices between competing objectives more explicit than they have been previously (13). Greater explainability is therefore likely to lead to a more explicit political, not wholly technical, debate.
Distilling concrete, unambiguous objectives in this way may turn out to be extremely challenging, for ambiguity about objectives is often useful in policy-making precisely because it blurs uncomfortable conflicts of interest. In many domains, policies generally emerge as a pragmatic compromise between fundamentally conflicting aims. For example, people who disagree about whether the justice system should be retributive or rehabilitative may well be able to agree on specific sentencing policies. Such incompletely theorized agreements “play an important function in any well-functioning democracy consisting of a heterogeneous population” (14, p. 1738). The omission of discussion of ultimate aims can make it easier to achieve consensus on difficult issues. As there is some (limited) scope to interpret means to achieve the objective with flexibility, the “weighting” of different fundamental aims remains implicit, and diverse political communities can make progress.
An optimistic conclusion would be that being forced by the use of ML systems to be more explicit about policy objectives could promote useful debate leading in the long run to more considered outcomes. ML systems can be used to explore choices and outcomes on different counterfactual high-level objectives, such as retribution or rehabilitation in justice, enabling considered human judgments. However, it may in practice be impossible to specify what we collectively truly want in rigid code. For example, many local governments do not seem to be engaging in public consultation when they adopt predictive ML systems, such as to flag “troubled” families that are likely to need interventions.
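As a purely illustrative sketch of the point above about weights encoding trade-offs (toy data and two stand-in objectives, not a real welfare function or any system discussed in the article), the snippet below scalarizes two competing aims and shows that the policy chosen by optimization depends on the weights assigned to them.

```python
# Minimal sketch: once competing aims are folded into a single quantity to
# optimize, the chosen weights *are* the policy position. The two aims here
# are toy stand-ins: prediction error and a selection-rate gap between two
# groups, evaluated over candidate decision thresholds.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)
scores = np.clip(rng.normal(0.45 + 0.15 * group, 0.20, size=n), 0, 1)
outcome = rng.uniform(size=n) < scores                  # toy ground truth

def evaluate(threshold):
    decision = scores >= threshold
    error = np.mean(decision != outcome)                # aim 1: accuracy
    gap = abs(decision[group == 0].mean()
              - decision[group == 1].mean())            # aim 2: parity
    return error, gap

thresholds = np.linspace(0.05, 0.95, 19)
for w_error, w_gap in [(1.0, 0.0), (1.0, 5.0)]:         # two different weightings
    losses = [w_error * e + w_gap * g for e, g in map(evaluate, thresholds)]
    best = thresholds[int(np.argmin(losses))]
    print(f"weights ({w_error}, {w_gap}) -> chosen threshold {best:.2f}")
# Same data, same rule: the chosen threshold typically shifts once the
# parity term carries enough weight, making the trade-off explicit.
```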
Although steps such as explicitly adding uncertainty to the ML objective might address this challenge of imperfectly specified objectives in future, ML systems are unable at present to offer wisely moderated solutions to ambiguous objectives (15). Human decision-makers can make use of common sense or tacit knowledge, and often override decisions indicated by an economic model or other formal policy analysis, and they will be able to do the same when assisted by ML. Yet, demanding that ML systems be explainable is likely to make the trade-offs between different objectives far more explicit than has been the norm previously.
Ultimately, the use of explainable ML systems in the public sector will make a broader debate about social objectives and social justice newly salient. Providing explanations requires being transparent about the systems' objectives (forcing clarity about choices and trade-offs previously often made implicitly) and how their predictions or decisions draw on patterns revealed by a fundamentally biased social and institutional system. Moreover, whereas democratic political systems often look to resolve conflicts through constructive ambiguity (in other words, the failure to explain), ML systems may require ambiguous objectives to be resolved unequivocally. So, although the need for explainability certainly poses technical challenges, it poses political challenges too, which have not to date been widely acknowledged. Yet, the increasing scope of ML, and progress in delivering explainability, in politically salient areas of policy could shine a helpful spotlight on the conflicting aims and the implicit trade-offs in policy decisions, just as it already has on the biases in existing social and economic systems.
References
1. B. Dattner, T. Chamorro-Premuzic, R. Buchband, L. Schittler, "The legal and ethical implications of using AI in hiring," Harv. Bus. Rev., 25 April 2019.
2. R. Richardson, J. Schultz, K. Crawford, New York Univ. Law Rev. 192, 204 (2019).
3. Z. Lipton, "The mythos of model interpretability" (2017).
4. T. Miller, "Explanation in artificial intelligence: Insights from the social sciences" (2018).
5. P. Madumal, T. Miller, L. Sonenberg, F. Vetere, "A grounded interaction protocol for explainable artificial intelligence" (2019).
6. O. O'Neill, Int. J. Philos. Stud. 26, 293 (2018).
7. J. De Fauw et al., Nat. Med. 24, 1342 (2018).
8. S. Aziz, M. Dowling, in Disrupting Finance: FinTech and Strategy in the 21st Century, T. Lynn, G. Mooney, P. Rosati, M. Cummins, Eds. (Palgrave, 2019), pp. 33–50.
9. P. Samuelson, Foundations of Economic Analysis (Harvard University Press, 1979), chap. 8, pp. 203–252.
10. J. Le Grand, Br. J. Polit. Sci. 21, 423 (1991).
11. A. Sen, Am. Econ. Rev. 89, 349 (1999).
12. E. Anderson, Value in Ethics and Economics (Harvard Univ. Press, 1993).
13. S. Grover, C. Pulice, G. I. Simari, V. S. Subrahmanian, IEEE Trans. Comput. Soc. Syst. 6, 350 (2019).
14. C. R. Sunstein, Harv. Law Rev. 108, 1733 (1995).
15. M. Hildebrandt, Smart Technologies and the End(s) of Law (Edward Elgar, 2016).
Acknowledgments: We are grateful to M. Kenny and N. Rabinowitz for helpful comments. A.W. acknowledges support from the David MacKay Newton research fellowship at Darwin College, The Alan Turing Institute under EPSRC grants EP/N510129/1 and TU/B/000074, the Leverhulme Trust via CFI, and the Centre for Data Ethics and Innovation.


Artist Uses Artificial Intelligence to Transform Data Into Mesmerizing Art – TechEBlog

#artificialintelligence

Photo credit: AI Artists. Unlike other artists, Refik Anadol transforms pools of data into mesmerizing art. That's right: when Anadol comes across an interesting data set, he processes it into swirling visualizations of how computers see the world, using artificial intelligence-powered machine learning algorithms. These techniques are used to filter and/or expand the raw material, which is then shown on large screens or projected onto walls and entire buildings. In the video above, we see Machine Hallucination, a 360-degree video installation composed of 10 million photos of New York. These images were processed by machine learning to group photos and morph between them, resulting in flickering images of the city as recorded by many different people.
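Anadol's actual pipeline is not described in the article, so the following is only a hedged sketch of the generic "group and morph" idea it mentions: embed each photo with a pretrained network, cluster the embeddings, and move between clusters for smooth transitions. The model, libraries, and the 'photo_paths' list are assumptions made purely for illustration.

```python
# Generic sketch of "group photos and morph between them": embeddings from a
# pretrained network, clustered with k-means. Not Anadol's actual method.
import torch
from torchvision import models, transforms
from sklearn.cluster import KMeans
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # keep the 512-d penultimate features
backbone.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(paths):
    """Return one feature vector per image file."""
    with torch.no_grad():
        batch = torch.stack([prep(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).numpy()

def group(embeddings, n_groups=10):
    """Cluster the embeddings; each cluster is one visual 'theme' to morph between."""
    return KMeans(n_clusters=n_groups, n_init=10).fit(embeddings)

# Usage (photo_paths is a placeholder for a real list of image files):
#   clusters = group(embed(photo_paths))
# A simple "morph" then walks a line between two cluster centres in embedding
# space, showing at each step the photo whose embedding is nearest that point.
```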


Scientists want 'Minority Report' pre-crime face recognition AI stopped

#artificialintelligence

Over 1,500 researchers across multiple fields have banded together to openly reject the use of technology to predict crime, arguing it would reproduce injustices and cause real harm. The Coalition for Critical Technology wrote an open letter to Springer Verlag in Germany to express grave concerns about a newly developed automated facial recognition system created by a group of scientists from Harrisburg University, Pennsylvania. Springer's Nature Research Book Series intends to publish an article by the Harrisburg scientists titled "A Deep Neural Network Model to Predict Criminality Using Image Processing." The coalition wants the publication of the study – and others in a similar vein – to be rescinded, arguing the paper makes claims based on unsound scientific premises, research and methods. The software, developed by New York Police Department veteran and PhD student Jonathan Korn along with professors Nathaniel Ashby and Roozbeh Sadeghian, is claimed to predict criminality with 80 per cent accuracy and no racial bias.