In 2018, William Frederick Keck III pleaded guilty in a court in Manassas, Virginia, to possession with intent to distribute cannabis. He served three months in prison, then began a three-year probation. He was required to wear a GPS ankle monitor before his trial and then to report for random drug tests after his release. Eventually, the state reduced his level of monitoring to scheduled meetings with his probation officer. Finally, after continued good behavior, Keck's probation officer moved him to Virginia's lowest level of monitoring: an app on his smartphone.
"I don't use Facebook anymore," she said. I was leading a usability session for the design of a new mobile app when she stunned me with that statement. It was a few years back, when I was a design research lead at IDEO and we were working on a service design project for a telecommunications company. The design concept we were showing her had a feature at once innocuous and ubiquitous -- the ability to log in using Facebook. But the young woman, somewhere in her twenties or thirties, balked at that feature and went on to tell me why she no longer trusted the social network. This session took place, of course, in the aftermath of the 2016 presidential election, in which a man whom many regarded as a television spectacle at best and a grandiose charlatan at worst had just won our highest office. Now, in 2020, our democracy remains intact.
Amazon founder Jeff Bezos announced on Tuesday that he will step down as CEO later this year and become executive chairman of the company's board. He described the move in a letter to employees as an opportunity for him to focus on "new products and early initiatives" and his various pet projects, like his space-flight company Blue Origin and the Washington Post. In Bezos's stead, longtime Amazon executive Andy Jassy will become the new CEO. So, who exactly is that guy? Jassy joined Amazon in 1997, three years after its founding.
The first wave of artificial intelligence (AI) has already replaced humans in repetitive physical tasks like inspecting equipment, manufacturing goods, repairing things, and crunching numbers. That shift started way back with the Industrial Revolution. It gave rise to our current Thinking Economy, where employment and wages are tied more closely to workers' abilities to process, analyze and interpret information to make decisions and solve problems … Just as the Industrial Revolution automated physical tasks by decreasing the value of human strength and increasing the value of human cognition, AI is now reshaping the landscape and ushering in a Feeling Economy. What characterizes this emerging economy? Consider, for example, the role of a financial analyst, which seems pretty quantitative and thinking-oriented.
We propose an instrumental variable (IV) selection procedure which combines the agglomerative hierarchical clustering method and the Hansen-Sargan overidentification test for selecting valid instruments for IV estimation from a large set of candidate instruments. Some of the instruments may be invalid in the sense that they may fail the exclusion restriction. We show that under the plurality rule, our method can achieve oracle selection and estimation results. Compared with previous IV selection methods, our method has the advantages that it deals effectively with the weak-instruments problem and extends easily to settings with multiple endogenous regressors and heterogeneous treatment effects. We conduct Monte Carlo simulations to examine the performance of our method, and compare it with two existing methods: the Hard Thresholding method (HT) and the Confidence Interval method (CIM). The simulation results show that our method achieves oracle selection and estimation results in both single and multiple endogenous regressor settings in large samples when all the instruments are strong. Our method also works well when some of the candidate instruments are weak, outperforming HT and CIM. We apply our method to the estimation of the effect of immigration on wages in the US.
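The two building blocks the abstract relies on -- 2SLS estimation and the Sargan overidentification statistic -- can be illustrated numerically. The following is a minimal numpy sketch, not the paper's procedure: the data-generating process, the single endogenous regressor, and the function names are all illustrative assumptions, and the clustering-based selection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy data: one endogenous regressor x, three valid instruments in Z.
# The shared error term e makes x correlated with the structural error,
# so OLS would be biased but 2SLS with Z is consistent.
Z = rng.normal(size=(n, 3))
e = rng.normal(size=n)
x = Z @ np.array([1.0, 1.0, 1.0]) + e + rng.normal(size=n)
y = 0.5 * x + e  # true causal effect is 0.5

def two_sls(y, x, Z):
    """2SLS with a single endogenous regressor (no intercept;
    all variables are mean zero by construction here)."""
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # first-stage fit
    return (x_hat @ y) / (x_hat @ x)                  # second stage

def sargan_stat(y, x, Z, beta):
    """Sargan overidentification statistic: n * R^2 from regressing
    the 2SLS residuals on the instruments. Under the null that all
    instruments are valid, it is asymptotically chi-squared with
    (number of instruments - number of endogenous regressors) df."""
    u = y - beta * x
    u_hat = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    r2 = (u_hat @ u_hat) / (u @ u)
    return len(y) * r2

beta = two_sls(y, x, Z)        # should land near the true value 0.5
J = sargan_stat(y, x, Z, beta) # small J: no evidence against validity
```

With all three instruments valid, the estimate recovers the true effect and the J statistic stays in the bulk of the chi-squared(2) distribution; an invalid instrument (one entering y directly) would inflate J, which is the signal the paper's selection procedure exploits.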
America's first confirmed wrongful arrest by facial recognition technology happened in January 2020. Robert Williams, a Black man, was arrested in his driveway just outside Detroit, with his wife and young daughter watching. He spent the night in jail. The next day, in the questioning room, a detective slid across the table to Williams a picture of a different Black man who had been caught on video stealing watches from the boutique Shinola. "Is this you?" he asked.
Outgoing US President Donald Trump's political campaign for a second presidential term may have been in full swing late last year, but that didn't stop him from continuing his relentless onslaught against the H-1B visa, which has long been dominated by Indian IT companies. The recent rules, part of an overall effort to curb nearly all forms of immigration, have come as a further headache to US companies desperate for global talent at affordable prices -- especially in hot areas like artificial intelligence (AI) and robotics, which are now beginning to see applications across a wide spectrum of industries and companies. Under the latest round of rules, companies will have to pay entry-level H-1B workers salaries at the 45th percentile of wages for that profession, up from the prior level of the 17th percentile. Higher-skilled workers would see the floor rise from the 67th to the 95th percentile. In other words, there are pretty huge pay bumps across the board.
From Google's commitment to never pursue AI applications that might cause harm, to Microsoft's "AI principles", to IBM's defense of fairness and transparency in all algorithmic matters: big tech is promoting a responsible AI agenda, and it seems companies large and small are following its lead. While in 2019 a mere 5% of organizations had come up with an ethics charter that framed how AI systems should be developed and used, the proportion jumped to 45% in 2020. Key words such as "human agency", "governance", "accountability" or "non-discrimination" are becoming central components of many companies' AI values. The concept of responsible technology, it would seem, is slowly making its way from the conference room into the boardroom. This renewed interest in ethics, despite the topic's complex and often abstract dimensions, has been largely motivated by various pushes from both governments and citizens to regulate the use of algorithms.
OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry. This week, Kantrowitz sits down with Meredith Whittaker, an A.I. researcher who helped lead Google's employee walkout in 2018. This interview, which took place at World Summit A.I., has been edited for length and clarity. To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast. When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, "You should speak to the people who were critical of the social web long before the film." One name stood out: Meredith Whittaker. An A.I. researcher and former Big Tech employee, Whittaker helped lead Google's walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film but also about the future of workplace activism inside tech companies, at a moment when some are questioning whether it belongs at all. Alex Kantrowitz: It seems like your perspective on The Social Dilemma is a little bit different from Tristan's.
On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. 
and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire." The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.