How big tech got so big: Hundreds of acquisitions

Washington Post - Technology News

You're probably reading this on a browser built by Apple or Google. If you're on a smartphone, it's almost certain those two companies built the operating system. You probably arrived from a link posted on Apple News, Google News or a social media site like Facebook. And when this page loaded, it, like many others on the Internet, connected to one of Amazon's ubiquitous data centers. Amazon, Apple, Facebook and Google -- known as the Big 4 -- now dominate many facets of our lives. But they didn't get there alone. They acquired hundreds of companies over decades to propel them to become some of the most powerful tech behemoths in the world.


Tinder users will soon be able to access a background check database

Engadget

The owner of the massive dating apps Tinder and Match has just announced a new partnership to help keep its users safe. Match Group, which owns Tinder, Match, OkCupid, Hinge and several other services, has made an investment in Garbo, a non-profit, female-founded background check platform. As part of the deal, Garbo's platform will be available to people using Match Group apps, starting with Tinder later this year. If you're not familiar with Garbo, it was founded by Kathryn Kosmides, a "survivor of gender-based violence" who wanted to make it easier to find information about people you may connect with online. Garbo's platform aggregates numerous data sources to provide details on an individual, including "arrests, convictions, restraining orders, harassment, and other violent crimes."


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits

arXiv.org Artificial Intelligence

While algorithm audits are growing rapidly in commonality and public importance, relatively little scholarly work has gone toward synthesizing prior work and strategizing future research in the area. This systematic literature review aims to do just that, following PRISMA guidelines in a review of over 500 English articles that yielded 62 algorithm audit studies. The studies are synthesized and organized primarily by behavior (discrimination, distortion, exploitation, and misjudgement), with codes also provided for domain (e.g. search, vision, advertising, etc.), organization (e.g. Google, Facebook, Amazon, etc.), and audit method (e.g. sock puppet, direct scrape, crowdsourcing, etc.). The review shows how previous audit studies have exposed public-facing algorithms exhibiting problematic behavior, such as search algorithms culpable of distortion and advertising algorithms culpable of discrimination. Based on the studies reviewed, it also suggests some behaviors (e.g. discrimination on the basis of intersectional identities), domains (e.g. advertising algorithms), methods (e.g. code auditing), and organizations (e.g. Twitter, TikTok, LinkedIn) that call for future audit attention. The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research working toward algorithmic justice.


Socially Responsible AI Algorithms: Issues, Purposes, and Challenges

arXiv.org Artificial Intelligence

In the current era, people and society have grown increasingly reliant on Artificial Intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks of oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, health care, services, and so on. Technologists and AI researchers have a responsibility to develop trustworthy AI systems. They have responded with substantial efforts to design more responsible AI algorithms. However, existing technical solutions are narrow in scope and have been primarily directed towards algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect major aspects of AI that potentially cause AI's indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and designing. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally there are calls for technology to be made more humane and human-compatible. For example, Stuart Russell argues for this in his book Human Compatible, and the Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how it can fill the current gaps and lead to better solutions.


How Joe Biden Could Help Internet Companies Moderate Harmful Content

The New Yorker

After the buyer used the weapon to kill his estranged wife and two others, the site successfully invoked Section 230 to avoid liability. More recently, Grindr, a dating app, took cover behind Section 230 when Matthew Herrick, an actor in New York, sued the site as a result of false profiles that were created by an ex-boyfriend. The profiles, which included Herrick's home and work addresses, suggested that Herrick had rape fantasies, and that any resistance he put up was part of the fantasy. As a consequence, hundreds of men showed up at his apartment door or at his workplace, at all hours, month after month, forcibly demanding sex. "You look at that law, and it seems very narrow," Herrick's lawyer, Carrie Goldberg, told me.


Massachusetts man charged with kidnapping, assaulting woman he met on Tinder

FOX News

Tinder, the most popular dating app in the world, bans users under the age of 18, but that hasn't stopped teens from signing up. A Massachusetts man is accused of kidnapping and assaulting a woman he met on Tinder, threatening to kill her and her child if she went to the cops, authorities said. Peter Bozier, 28, was arrested Tuesday during a traffic stop in Sudbury after the victim told investigators she was severely beaten and strangled while being held against her will at Bozier's home, police said. The victim said the harrowing ordeal began a day earlier, police spokesman Lt. Robert Grady told the MetroWest Daily News. Grady said the woman managed to "release herself from the situation" and then went to a hospital in Burlington, where hospital staffers contacted police, the newspaper reported.


Google antitrust: Just how much do you actually use it? Way more than you think

USATODAY - Tech Top Stories

Google's influence in our lives is overwhelming, which is perhaps one of the reasons the Department of Justice and several state attorneys general banded together to file an antitrust lawsuit against the company. But just how wide is Google's reach? We decided to take a look, and the results may surprise you. Start with the fact that Google ads are all over the Internet, and despite the initial stated goal of "organizing the world's information," the Alphabet unit is designed to have more ads appear, to keep the earnings up. In its most recent earnings report, Alphabet reported $38.30 billion in revenue for Google.


CIPR AI in PR ethics guide

#artificialintelligence

Ethics Guide to Artificial Intelligence in PR (UK edition). In May 2020 the Wall Street Journal reported that 64 per cent of all signups to extremist groups on Facebook were due to Facebook's own recommendation algorithms. There could hardly be a simpler case study in the question of AI and ethics, the intersection of what is technically possible and what is morally desirable. CIPR members who find an automated/AI system used by their organisation perpetrating such online harms have a professional responsibility to try to prevent it. For all PR professionals, this is a fundamental requirement of the ability to practise ethically. The question is: if you worked at Facebook, what would you do? If you're not sure, this guide will help you work out your answer. – Alastair McCapra, Chief Executive Officer, CIPR. Artificial Intelligence is quickly becoming an essential technology for ...