Dark Reading

#artificialintelligence

For the past couple of years, renowned technologist and researcher Bruce Schneier has been studying how societal systems can be hacked, specifically the rules of financial markets, laws, and the tax code. That led him to his latest examination of the potential unintended consequences of artificial intelligence on society: how AI systems themselves, which he refers to as "AIs," could evolve such that they automatically, and inadvertently, abuse societal systems. "It's AIs as the hacker," he says, rather than hackers hacking AI systems. Schneier will discuss his AI hacker research in a keynote address on Monday at the 2021 RSA Conference, which, due to the pandemic, is being held online rather than in person in San Francisco. The AI topic is based on a recent essay he wrote for the Cyber Project and the Council on the Responsible Use of AI at the Belfer Center for Science and International Affairs at Harvard Kennedy School.


AI's Future Doesn't Have to Be Dystopian

#artificialintelligence

The direction of AI development is not preordained. It can be altered to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms, if we modify our approach. Artificial intelligence (AI) is not likely to make humans redundant, nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades: revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society.


AI might not have rights, but it could pay taxes

#artificialintelligence

Tax laws, for example, don't currently take automated workers into account. While human employees contribute payroll and income taxes, an automated "employee" doesn't, Abbott noted. Governments could therefore lose out on a substantial amount of tax revenue as AI becomes more prevalent and displaces more human workers. Granted, that argument only works if displaced employees don't find other jobs; Abbott predicted that this may indeed happen if AI becomes smarter at a rate that outpaces people's ability to learn new skills or find job training.
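
To make the revenue argument concrete, here is a rough back-of-the-envelope sketch. The salary and effective income tax rate are illustrative assumptions, not figures from Abbott's analysis; only the FICA payroll rates are actual US figures.

```python
# Illustrative sketch of tax revenue lost when a human worker is
# replaced by an automated system. Salary and income tax rate are
# hypothetical assumptions for illustration.

salary = 50_000.00              # hypothetical annual salary of the displaced worker
employer_payroll_rate = 0.0765  # US employer FICA share (6.2% SS + 1.45% Medicare)
employee_payroll_rate = 0.0765  # employee FICA share
effective_income_tax = 0.12     # assumed effective federal income tax rate

human_worker_revenue = salary * (
    employer_payroll_rate + employee_payroll_rate + effective_income_tax
)
automated_worker_revenue = 0.0  # no payroll or income tax on a machine today

print(f"Annual tax revenue from the human worker:  ${human_worker_revenue:,.2f}")
print(f"Annual tax revenue from the automated one: ${automated_worker_revenue:,.2f}")
# -> roughly $13,650 in foregone revenue per displaced worker,
#    under these assumptions
```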


Tax Knowledge Graph for a Smarter and More Personalized TurboTax

arXiv.org Artificial Intelligence

Most knowledge graph use cases are data-centric, focusing on representing data entities and their semantic relationships. There are no published success stories of representing large-scale, complicated business logic with knowledge graph technologies. In this paper, we share our innovative and practical approach to representing complicated U.S. and Canadian income tax compliance logic (calculations and rules) via a large-scale knowledge graph. We cover how the Tax Knowledge Graph is constructed and automated, how it is used to calculate tax refunds, reasoned over to find missing information, and navigated to explain the calculated results. The Tax Knowledge Graph has helped transform Intuit's flagship TurboTax product into a smart and personalized experience, accelerating and automating the tax preparation process while instilling confidence in millions of customers.
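
The paper does not reproduce Intuit's graph schema, but the core idea of encoding tax calculations as a dependency graph that can be evaluated, probed for missing inputs, and traced for explanations can be sketched in a few lines. The node names and formulas below are simplified stand-ins, not TurboTax's actual logic.

```python
# Minimal sketch of tax logic as a dependency graph: each derived node
# declares the nodes it depends on and a formula. Node names and formulas
# are simplified illustrations, not Intuit's actual schema.

RULES = {
    "taxable_income": (["wages", "deductions"], lambda w, d: max(w - d, 0)),
    "tax_owed":       (["taxable_income"], lambda t: 0.10 * t),  # flat rate for illustration
    "refund":         (["withholding", "tax_owed"], lambda wh, owed: wh - owed),
}

def evaluate(node, facts, trace):
    """Recursively evaluate a node, recording the path for explanations."""
    if node in facts:                       # a user-supplied leaf value
        return facts[node]
    deps, formula = RULES[node]
    values = [evaluate(d, facts, trace) for d in deps]
    result = formula(*values)
    trace.append(f"{node} = f({', '.join(deps)}) = {result}")
    return result

def missing_inputs(node, facts):
    """Walk the graph to find leaf inputs the user still needs to provide."""
    if node in facts:
        return set()
    if node not in RULES:                   # a leaf with no value yet
        return {node}
    deps, _ = RULES[node]
    return set().union(*(missing_inputs(d, facts) for d in deps))

facts = {"wages": 60_000, "deductions": 12_000}
print(missing_inputs("refund", facts))      # -> {'withholding'}: ask the user for it

facts["withholding"] = 5_500
trace = []
print("refund:", evaluate("refund", facts, trace))
print("\n".join(trace))                     # navigable explanation of the result
```

The same structure supports all three uses the paper describes: forward evaluation computes the refund, the missing-input walk drives interview questions, and the recorded trace explains the result.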


Taxing Robots Won't Help Workers or Create Jobs

#artificialintelligence

The debate over automation has been overshadowed by more immediate economic problems created by the coronavirus crisis. But when things return to some semblance of normality, it's sure to crop up again and may well play a role in how a recovery takes shape. The basic question is whether automation is good or bad for average workers. The latest salvo against the robots comes from economists Daron Acemoglu, Andrea Manera, and Pascual Restrepo. In a recent National Bureau of Economic Research paper entitled "Does the US Tax Code Favor Automation?," they argue that taxes are higher on labor than on capital equipment, causing companies to invest too much in machines and not enough in manpower.
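
The distortion the authors describe can be illustrated with a toy calculation. All rates below are hypothetical assumptions for illustration, not figures from the NBER paper.

```python
# Illustrative sketch of the argument in Acemoglu, Manera, and Restrepo:
# when labor is taxed more heavily than capital, a firm can prefer a
# machine even when the worker is the cheaper option pre-tax.
# All rates are hypothetical, not figures from the paper.

worker_cost = 50_000.0   # pre-tax annual cost of a worker
machine_cost = 52_000.0  # pre-tax annualized cost of an equivalent machine

labor_tax = 0.25         # assumed effective tax wedge on labor
capital_tax = 0.05       # assumed effective tax on capital equipment

after_tax_worker = worker_cost * (1 + labor_tax)      # 62,500
after_tax_machine = machine_cost * (1 + capital_tax)  # 54,600

print(f"Worker costs the firm  ${after_tax_worker:,.0f} after taxes")
print(f"Machine costs the firm ${after_tax_machine:,.0f} after taxes")
# The machine wins after taxes despite being more expensive pre-tax:
# this is the sense in which the tax code can "favor automation".
```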


A Knowledge Graph for Assessing Aggressive Tax Planning Strategies

arXiv.org Artificial Intelligence

The taxation of multinational companies is a complex field, since it is influenced by the legislation of several states. Laws in different states may have unforeseen interaction effects, which multinational companies can exploit to minimize their taxes, a practice known as tax planning. In this paper, we present a knowledge graph of multinational companies and their relationships, comprising almost 1.5M business entities. We show that commonly known tax planning strategies can be formulated as subgraph queries against that graph, which allows for identifying companies that use certain strategies. Moreover, we demonstrate that we can identify anomalies in the graph that hint at potential tax planning strategies, and we show how to enhance those analyses by incorporating information from Wikidata using federated queries.
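
The paper's actual ontology and queries are not reproduced here, but the idea of expressing a tax planning pattern as a subgraph query can be sketched on a toy RDF graph. The entities, predicates, and the simple "conduit" ownership pattern below are illustrative assumptions, not the paper's schema.

```python
# Toy sketch of the subgraph-query idea: companies and ownership edges in
# an RDF graph, queried for a simple "conduit" ownership chain.
# Requires: pip install rdflib
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/tax/")
g = Graph()

# A parent in one state owns a conduit in a low-tax state,
# which in turn owns the operating subsidiary.
for s, p, o in [
    (EX.AcmeUS, RDF.type, EX.Company), (EX.AcmeUS, EX.registeredIn, Literal("US")),
    (EX.AcmeNL, RDF.type, EX.Company), (EX.AcmeNL, EX.registeredIn, Literal("NL")),
    (EX.AcmeDE, RDF.type, EX.Company), (EX.AcmeDE, EX.registeredIn, Literal("DE")),
    (EX.AcmeUS, EX.owns, EX.AcmeNL),
    (EX.AcmeNL, EX.owns, EX.AcmeDE),
]:
    g.add((s, p, o))

# A known planning pattern expressed as a subgraph query.
results = g.query("""
    PREFIX ex: <http://example.org/tax/>
    SELECT ?parent ?conduit ?subsidiary WHERE {
        ?parent  ex:owns ?conduit .
        ?conduit ex:owns ?subsidiary .
        ?conduit ex:registeredIn "NL" .
    }
""")
for parent, conduit, subsidiary in results:
    print(f"{parent} -> {conduit} -> {subsidiary}")
```

A federated query, as the abstract mentions, would extend the WHERE clause with a SERVICE block pointing at the Wikidata SPARQL endpoint to pull in external facts about the matched companies.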


Can AI model economic choices?

#artificialintelligence

Tax policy analysis is a well-developed field with a robust body of research and extensive modeling infrastructure across think tanks and government agencies. Because tax policy affects everyone, and especially affects wealthy people, it gets a lot of attention and research funding (notably from foundations such as those of Peter G. Peterson and the Koch brothers). In addition to empirical studies, organizations like the Urban-Brookings Tax Policy Center and the Joint Committee on Taxation produce microsimulations of tax policy to comprehensively model thousands of policy levers. However, because it is difficult to predict how people will react to changing public policy, these models are limited in how well they account for individual behavioral responses. Although it is far from certain, artificial intelligence (AI) might be able to help address this notable deficiency in tax policy analysis, and recent work has highlighted the possibility.
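
The gap the article points to can be made concrete with a toy microsimulation. The sketch below compares revenue from a rate increase under a static model versus one with a behavioral response, using a single assumed elasticity of taxable income. All incomes, rates, and the elasticity are hypothetical, not from TPC or JCT models.

```python
# Toy microsimulation contrasting a static revenue estimate with one that
# includes a behavioral response. Numbers are hypothetical.

incomes = [30_000, 60_000, 120_000, 500_000]  # a tiny synthetic population
old_rate, new_rate = 0.30, 0.35
elasticity = 0.25  # assumed elasticity of taxable income w.r.t. net-of-tax rate

def revenue(rate, behavioral):
    total = 0.0
    for y in incomes:
        if behavioral:
            # taxable income shrinks as the net-of-tax share (1 - rate) falls
            y = y * ((1 - rate) / (1 - old_rate)) ** elasticity
        total += rate * y
    return total

static = revenue(new_rate, behavioral=False)
dynamic = revenue(new_rate, behavioral=True)
print(f"Static estimate:     ${static:,.0f}")
print(f"Behavioral estimate: ${dynamic:,.0f}")  # lower: taxpayers respond
```

AI-based models of economic choice would aim to replace the single assumed elasticity with learned, heterogeneous responses across taxpayers.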


Natural language processing for word sense disambiguation and information extraction

arXiv.org Artificial Intelligence

This research work deals with Natural Language Processing (NLP) and the extraction of essential information in an explicit form. The most common information management strategies are Document Retrieval (DR) and Information Filtering. DR systems work like combine harvesters, bringing back useful material from vast fields of raw text. With a large amount of potentially useful information in hand, an Information Extraction (IE) system can then transform the raw material by refining and reducing it to the germ of the original text. A Document Retrieval system collects the relevant documents carrying the required information from a repository of texts. An IE system then transforms them into information that is more readily digested and analyzed: it isolates relevant text fragments, extracts relevant information from the fragments, and then arranges the targeted information in a coherent framework. The thesis presents a new approach to Word Sense Disambiguation using a thesaurus; illustrative examples support the effectiveness of this approach for speedy and effective disambiguation. A Document Retrieval method based on Fuzzy Logic is described and its application illustrated. A question-answering system demonstrates the extraction of information from the retrieved text documents. The process of extracting information to answer a query is considerably simplified by using a Structured Description Language (SDL), which is based on the cardinal question forms who, what, when, where, and why. The thesis concludes with a novel strategy, based on the Dempster-Shafer theory of evidential reasoning, for document retrieval and information extraction. This strategy relaxes many limitations inherent in the Bayesian probabilistic approach.
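
The thesis's own thesaurus-based disambiguation method is not reproduced here, but the same family of techniques is exemplified by the classic Lesk algorithm over WordNet, which picks the sense whose dictionary gloss best overlaps the surrounding context. The sketch below is that standard technique, not the thesis's algorithm.

```python
# Sketch of dictionary/thesaurus-based word sense disambiguation using a
# simplified Lesk algorithm over WordNet (a standard technique in the same
# family as the thesis's thesaurus approach).
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def simple_lesk(context_words, ambiguous_word):
    """Pick the sense whose gloss overlaps most with the context."""
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(ambiguous_word):
        gloss = set(sense.definition().lower().split())
        overlap = len(gloss & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sentence = "I deposited the money at the bank before noon".split()
sense = simple_lesk(sentence, "bank")
print(sense, "->", sense.definition())
# typically resolves to the financial-institution sense of "bank"
```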


Generative Learning of Counterfactual for Synthetic Control Applications in Econometrics

arXiv.org Machine Learning

A common statistical problem in econometrics is to estimate the impact of a treatment on a treated unit, given a control sample with untreated outcomes. Here we develop a generative learning approach to this problem, learning the probability distribution of the data, which can be used for downstream tasks such as post-treatment counterfactual prediction and hypothesis testing. We use control samples to transform the data to a Gaussian and homoscedastic form and then perform Gaussian process analysis in Fourier space, evaluating the optimal Gaussian kernel via non-parametric power spectrum estimation. We combine this Gaussian prior with the data likelihood given by the pre-treatment data of the single unit to obtain the synthetic prediction of the unit post-treatment, which minimizes the error variance of the synthetic prediction. Given the generative model, the minimum-variance counterfactual is unique and comes with an associated error covariance matrix. We extend this basic formalism to include correlations of the primary variable with other covariates of interest. Given the probabilistic description of the generative model, we can compare the synthetic data prediction with the real data to ask whether the treatment had a statistically significant impact. For this purpose we develop a hypothesis testing approach and evaluate the Bayes factor. We apply the method to the well-studied example of the 1988 California (CA) tobacco sales tax, and we perform a placebo analysis using control states to validate our methodology. Our hypothesis test suggests 5.8:1 odds in favor of the CA tobacco sales tax having had an impact on tobacco sales, a value at least three times higher than that for any of the 38 control states.
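
The paper's implementation works in Fourier space with a non-parametrically estimated kernel; the sketch below shows only the generic idea of a Gaussian-process counterfactual fit to pre-treatment data, using synthetic data and an off-the-shelf RBF kernel rather than the paper's method.

```python
# Simplified illustration of a GP-based counterfactual: fit the treated
# unit's pre-treatment series, then predict post-treatment with error bars.
# Synthetic data and a generic RBF kernel; the paper instead works in
# Fourier space with a non-parametrically estimated power spectrum.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(40, dtype=float)                    # 40 periods; treatment at t = 30
y = 100 - 0.8 * t + rng.normal(0, 1.0, t.size)    # synthetic pre-trend
y[30:] -= 8.0                                     # synthetic treatment effect

pre, post = t[:30], t[30:]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1.0),
                              normalize_y=True)
gp.fit(pre.reshape(-1, 1), y[:30])

# Counterfactual: what the GP predicts in the absence of treatment.
# (A generic RBF kernel reverts toward the mean when extrapolating,
# so this estimate is rough.)
mean, std = gp.predict(post.reshape(-1, 1), return_std=True)
effect = y[30:] - mean
z = effect.mean() / (std.mean() / np.sqrt(post.size))
print(f"Estimated average effect: {effect.mean():.2f} (z ~ {z:.1f})")
# A large |z| suggests the post-treatment data are inconsistent with the
# no-treatment counterfactual; the paper formalizes this comparison with a
# Bayes-factor hypothesis test instead of this crude z-score.
```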


The case for taxing robots -- or not (MIT Sloan)

#artificialintelligence

Does your Roomba need a W-2? Probably not, but it's an amusing thought when debating the more serious question of whether a robot should have to pay taxes, and how that would work. During the June MIT Technology Review EmTech Next event, two experts argued both sides of the question before an audience at the MIT Media Lab in Cambridge, Massachusetts. Ryan Abbott, professor of law and health sciences at the University of Surrey, argued in favor of taxing robots, while Ryan Avent, economics columnist for The Economist, argued against the idea. Both agreed there needs to be a shift in tax burden from labor to capital; Avent, however, carried the most audience votes by the end of the debate.