
The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

arXiv.org Artificial Intelligence

In 1950, Alan Turing proposed an imitation game as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans. So an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. Furthermore, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.


Converting Laws to Programs

Communications of the ACM

You would think something as numerical as income tax law would be similar to mathematical logic, but it is not, Protzenko says, because it is not written with the precision and clarity that would "make it amenable to a very mathematical reading of it." For example, the law does not specify whether a number should be rounded to whole cents. "The law won't tell you what you're supposed to do with rounding numbers, and that can lead to ambiguity and a lack of specification of what's supposed to happen," he says. Healthcare law is also very complex. Faisal Khan, senior legal counsel at healthcare law firm Nixon Gwilt Law in Vienna, VA, says, "Software for HIPAA compliance must incorporate algorithms that target and hit on all the top-level statutory requirements and implementing regulations." To make that happen, Khan says, "There must be compliance-related input from a team, as many of the regulations essentially function as guidelines for companies to adhere to." That means a process or ...
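To make the rounding ambiguity concrete, here is a minimal Python sketch (not from the article, and using hypothetical figures) showing how a statute that is silent on rounding leaves several defensible implementations, each producing a different tax amount in whole cents.

```python
# A sketch of the ambiguity Protzenko describes: if the law does not say how
# to round, "round half up", banker's rounding, and truncation are all
# plausible readings, and they can disagree at half-cent boundaries.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_DOWN

# Hypothetical computation: a 7.25% rate applied to a $10.00 base.
base = Decimal("10.00")
rate = Decimal("0.0725")
raw = base * rate  # 0.725000 -- the statute says nothing about cents

for label, mode in [
    ("round half up (common intuition)", ROUND_HALF_UP),
    ("banker's rounding (a frequent language default)", ROUND_HALF_EVEN),
    ("truncate toward zero", ROUND_DOWN),
]:
    print(label, raw.quantize(Decimal("0.01"), rounding=mode))
# Prints 0.73 for half-up but 0.72 for the other two conventions.
```

Which of these conventions the statute intends is exactly the "lack of specification" Protzenko points to.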


AI's Future Doesn't Have to Be Dystopian - Boston Review

#artificialintelligence

Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society. Worse still, the weakening of democracy makes formulating solutions to the adverse labor market and distributional effects of AI much more difficult. These dangers have only multiplied during the COVID-19 crisis. Lockdowns, social distancing, and workers' vulnerability to the virus have given an additional boost to the drive for automation, with the majority of U.S. businesses reporting plans for more automation.


AI's Future Doesn't Have to Be Dystopian

#artificialintelligence

The direction of AI development is not preordained. It can be altered to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms--if we modify our approach. Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society.


Here are 6 major issues facing healthcare in 2019, according to PwC

#artificialintelligence

The U.S. healthcare industry is looking less like a special case, a large segment of the U.S. economy with its own unique quirks, and is beginning to behave like other industries, according to "Top health industry issues of 2019: The New Health Economy comes of age," the 13th annual healthcare report from consulting giant PwC. So for its latest report, rather than focusing on issues only U.S. health organizations face, PwC's Health Research Institute is, for the first time, examining how healthcare is adapting to factors common to all industries: deals, business and tax strategy, risk and regulatory issues, workforce trends, and digital transformation. The details may be specific to healthcare, but the business issues are shared with many other parts of the economy. In 2019, new entrants and biopharmaceutical and medical device companies will bring to market new digital therapies and connected health services that can help patients make behavioral changes, give providers real-time therapeutic insights, and give insurers and employers new tools to more effectively manage beneficiaries' health, the PwC report said. "The arrival of digital therapeutics – an emerging health discipline that uses technology to augment or even replace active drugs in disease treatment – is reshaping the landscape for new medicines, product reimbursement and regulatory oversight," PwC said. "This means that new data sharing processes and payment models will be established to integrate these products into the broader treatment arsenal and regulatory structure for drug and device approvals." As digital therapeutics and connected devices have transitioned from concept to reality, investors poured $12.5 billion into digital health ventures in 2017 and 2018, PwC reported.


The 50 big ideas for 2018

@machinelearnbot

If 2017 left you breathless, exhausted by unexpected headlines, then brace yourself. The coming year may bring even more turbulent change, according to the CEOs, academics, economists and other bold thinkers we consulted for our annual peek at the year ahead.