Does Palantir See Too Much?

#artificialintelligence

On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire." The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.


Parliament leads the way on first set of EU rules for Artificial Intelligence

#artificialintelligence

The European Parliament is among the first institutions to put forward recommendations on what AI rules should include with regard to ethics, liability and intellectual property rights. These recommendations will pave the way for the EU to become a global leader in the development of AI. The Commission's legislative proposal is expected early next year. The legislative initiative by Iban García del Blanco (S&D, ES) urges the EU Commission to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including software, algorithms and data. It was adopted with 559 votes in favour, 44 against, and 88 abstentions.


Germany Wants EU to Double Down on Idea That Would Hinder the AI Economy

#artificialintelligence

The European Commission has proposed strictly regulating AI systems that meet two conditions: they are used both in sectors and in a manner where significant risks are likely to occur. But Germany has called on the EU to abandon this proposal, arguing that tougher rules should apply to all sectors that use AI, and even to AI applications that do not pose a significant risk. This is not the first time that Germany has called for stricter regulation of AI, but as Germany has taken over the EU Council presidency, its perspective is likely to have more influence on the Commission's regulatory choices. Yet following Germany's advice would have far-reaching negative implications for innovation in the EU. First, imposing stricter rules on lower-risk AI systems would achieve little in the way of consumer protection, because these systems already pose little risk to consumers and existing consumer protection laws apply. It does not make sense, for example, to require AI-powered dating apps to undergo the same level of scrutiny as credit-scoring tools.


UK regulators set up AI insights forum

#artificialintelligence

UK regulators are staging the first meeting of the Artificial Intelligence Public Private Forum, a quarterly talking shop to gauge the impact of AI in financial services.


EU challenges for an AI human-centric approach: lessons learnt from ECAI 2020

AIHub

During this period of progressive development and deployment of artificial intelligence, discussion of the ethical, legal, socio-economic and cultural implications of its use is increasing. What are the challenges and the strategy, and what values can Europe bring to this domain? During the European Conference on AI (ECAI 2020), two special panel sessions discussed the challenges for AI made in the European Union, the shape of future research and industry, and the strategy to retain talent and compete with other world powers. This article collects some of the main messages from these two sessions, which included AI experts from leading European organisations and networks. Since the publication of European directives and guidance, such as the EC White Paper on AI and the Trustworthy AI Guidelines, Europe has been laying the foundation for its future vision of AI. The European strategy for AI builds on the well-known and accepted principles of the Charter of Fundamental Rights of the European Union and the Universal Declaration of Human Rights to define a human-centric approach, whose primary purpose is to enhance human capabilities and societal well-being.


Ethical Machine Learning in Health Care

arXiv.org Artificial Intelligence

The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.


Preserving Integrity in Online Social Networks

arXiv.org Artificial Intelligence

Online social networks provide a platform for sharing information and free expression. However, these networks are also used for malicious purposes, such as distributing misinformation and hate speech, selling illegal drugs, and coordinating sex trafficking or child exploitation. This paper surveys the state of the art in keeping online platforms and their users safe from such harm, also known as the problem of preserving integrity. This survey comes from the perspective of having to combat a broad spectrum of integrity violations at Facebook. We highlight the techniques that have proven useful in practice and that deserve additional attention from the academic community. Instead of discussing the many individual violation types, we identify key aspects of the social-media ecosystem, each of which is common to a wide variety of violation types. Furthermore, each of these components represents an area for research and development, and the innovations that are found can be applied widely.


Artificial intelligence and intellectual property: call for views

#artificialintelligence

Intellectual property rewards people for creativity and innovation, and it is crucial to the proper functioning of an innovative economy. The UK is rated as one of the best IP environments in the world. To keep it that way, we are keen to look ahead to the challenges that new technologies bring and to make sure the UK's IP environment is adapted to accommodate them.


Active Fairness Instead of Unawareness

arXiv.org Artificial Intelligence

The risk that AI systems could promote discrimination by reproducing and reinforcing unwanted bias in data has been broadly discussed in research and society. Many current legal standards demand that sensitive attributes be removed from data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes in order to observe and control any kind of discrimination, thus leading to fair results. Systematic, unequal treatment of individuals based on their membership of a sensitive group is considered discrimination.
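The contrast between the two approaches is easy to make concrete. Below is a minimal Python sketch, not taken from the paper: the function name, metric and data are illustrative. It keeps the sensitive attribute and uses it to measure a demographic-parity gap in a model's decisions, exactly the kind of disparity that becomes unobservable once the attribute is discarded.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Return per-group positive-decision rates and the largest gap between them.

    y_pred    : binary model decisions (0/1), one per individual
    sensitive : group label of each individual (the sensitive attribute)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    # Positive-decision rate within each sensitive group
    rates = {g: float(y_pred[sensitive == g].mean()) for g in np.unique(sensitive)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical decisions for eight individuals in two groups, "a" and "b"
rates, gap = demographic_parity_gap(
    y_pred=[1, 0, 1, 1, 0, 0, 1, 0],
    sensitive=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5 -- group "a" is selected three times as often as group "b"
```

Under "fairness through unawareness" the sensitive column would be dropped before training, yet a correlated proxy attribute (a postcode, say) could reproduce the same gap, and without the sensitive attribute there would be no way left to detect it.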


The Radicalization Risks of GPT-3 and Advanced Neural Language Models

arXiv.org Artificial Intelligence

In this paper, we expand on our previous research into the potential for abuse of generative language models by assessing GPT-3. Experimenting with prompts representative of different types of extremist narrative, structures of social interaction, and radical ideologies, we find that GPT-3 demonstrates significant improvement over its predecessor, GPT-2, in generating extremist texts. We also show GPT-3's strength in generating text that accurately emulates interactive, informational, and influential content that could be used to radicalize individuals into violent far-right extremist ideologies and behaviors. While OpenAI's preventative measures are strong, the possibility of unregulated copycat technology represents a significant risk of large-scale online radicalization and recruitment; thus, in the absence of safeguards, successful and efficient weaponization that requires little experimentation is likely. AI stakeholders, the policymaking community, and governments should begin investing as soon as possible in building social norms, public policy, and educational initiatives to preempt an influx of machine-generated disinformation and propaganda. Mitigation will require effective policy and partnerships across industry, government, and civil society.