Desperate times call for desperate measures, and no other big tech company is feeling the heat more than Meta Platforms Inc. A report published by The Wall Street Journal last week revealed a strict new policy the company has imposed on some employees, asking them either to find new positions elsewhere within the company or face termination. Meta has announced that it plans to cut costs by 10%. The results it released for the previous quarter looked grim: the company had lost close to 50% of its value by the second quarter of this year.
AI has the potential to deliver enormous business value for organizations, and its adoption has been sped up by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and the artificial intelligence software market will reach $37 billion by the same year. But there is growing concern around AI bias -- situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm. I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.
Amazon might face some political opposition in its bid to acquire iRobot. Democrats including Senator Elizabeth Warren and House Representatives Jesus Garcia, Pramila Jayapal, Mondaire Jones, Katie Porter and Mark Pocan have asked the Federal Trade Commission (FTC) to oppose the purchase of the Roomba creator. The members of Congress pointed to Amazon's history of technology buyouts to support their case, arguing that the company snaps up competitors to eliminate them. Amazon killed sales of Kiva Systems' robots after the 2012 acquisition and used them exclusively in its warehouses, for instance. The 2017 and 2018 acquisitions of Blink and Ring reportedly helped Amazon dominate US video doorbell sales, while the internet retailer has also faced multiple accusations of abusing third-party seller data to launch rival products and promote them above others.
Automation of the labor force has long been feared. In 2017, a website sprang up to answer a question long on the minds of many: Will robots take my job? The creators based it on Bureau of Labor Statistics data and a 2013 research paper from Oxford University about "the susceptibility of jobs to computerization." Things have moved quickly since; even the term "computerization" now sounds desperately out of date. If you plug "journalist" into the site's search bar, for example, the site reveals an "automation risk score" of 9 percent.
We have all started to realize that the rapid development of AI is really going to change the world we live in. AI is no longer just a branch of computer science; it has escaped from research labs with the development of "AI systems" – "software that, for human-defined purposes, generates content, predictions, recommendations or decisions influencing the environments with which they interact" (the European Union's definition). The governance of these AI systems – with all the nuances of ethics, control, and regulation – has become a crucial issue, as their development today lies in the hands of a few digital empires, the GAFA-NATU-BATX, which have become the masters of real societal choices about automation and the "rationalization" of the world. The complex fabric intersecting AI, ethics, and law is thus woven in the power relations – and connivance – between states and tech giants. But citizen engagement has become necessary in order to assert imperatives other than a technological solutionism in which "everything that can be connected will be connected and streamlined."
In the last few months we have seen promising developments in establishing safeguards for AI. These include a landmark EU regulation proposal on AI that prohibits unacceptable AI uses and imposes mandatory disclosures and evaluations for high-risk systems, an algorithmic transparency standard launched by the UK government, mandatory audits for AI hiring tech in New York City, and a draft AI Risk Management Framework developed by NIST at the request of the US Congress, to name a few. That being said, we are still in the early days of AI regulation. There is a long road ahead to minimize the harms that algorithmic systems can cause. In this article series, I explore different topics related to the responsible use of AI and its societal implications.
Alphabet subsidiary and precision health company Verily recently announced a breakthrough in its AI drug discovery GPCR research collaboration with Sosei Heptares. A mere six months ago, Verily launched the study with Sosei Heptares – a global leader in GPCR structure-based drug design – with the aim of "prioritis[ing] protein targets for therapeutic targeting in immune-mediated disease". Now, Verily has announced that early results from its "next generation immune mapping technology", the Immune Profiler platform, have already identified "more effective therapeutic options against G protein-coupled receptors (GPCR) in autoimmune and other immune-mediated diseases". The companies hope that in the year to come those targets will advance to validation, hit generation, and lead selection. With approximately one third of all current FDA-approved drugs targeting GPCRs, Verily and Sosei Heptares are looking to expedite GPCR research not only in immunology but also in gastroenterology and immuno-oncology, and the latest data bode well for the future development of therapeutic options in these areas.
A novel New York City law that penalizes employers for bias in artificial intelligence hiring tools is leaving companies scrambling to audit their AI programs before the law takes effect in January. The law, which requires employers to conduct an independent audit of the automated tools they use, marks the first time employers in the US will face heightened legal requirements if they wish to use any such automated decision-making tools. Such tools -- which can range from algorithms built to find ideal candidates to software that assesses body language -- have faced scrutiny in recent years for their potential to perpetuate bias against protected groups. But without guidance from the city, employers aren't clear what, exactly, is expected of them and how to prepare. "Notably, the law does not define who or what is meant by an 'independent auditor,'" said Danielle J. Moss, a partner at Gibson Dunn & Crutcher LLP.
Trying to overcome untoward human biases by replacing them with AI is not as straightforward as it might seem. Humans have got to know their limitations. You might recall the famous line about knowing our limitations as grittily uttered by the character Dirty Harry in the 1973 movie Magnum Force (per the spoken words of actor Clint Eastwood in his memorable role as Inspector Harry Callahan). The overall notion is that we sometimes tend to overlook our own limits and get ourselves into hot water accordingly. Whether due to hubris, egocentrism, or simple blindness to our own capabilities, the precept of being aware of and explicitly accounting for our proclivities and shortcomings is abundantly sensible and helpful. Let's add a new twist to that sage piece of advice. Artificial Intelligence (AI) has got to know its limitations. What do I mean by that variant of the venerated catchphrase? It turns out that the initial rush to get modern-day AI into use as a hopeful solver of the world's problems has become sullied and altogether muddied by the realization that today's AI has some rather severe limitations. We went from the uplifting headlines of AI For Good and have increasingly found ourselves mired in AI For Bad. You see, many AI systems have been developed and fielded with all sorts of untoward racial and gender biases, and a myriad of other such appalling inequities.
Teenagers deserve to grow, develop, and experiment, says Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), a nonprofit advocacy group. They should be able to test or abandon ideas "while being free from the chilling effects of being watched or having information from their youth used against them later when they apply to college or apply for a job." She called for the Federal Trade Commission (FTC) to make rules to protect the digital privacy of teens. Hye Jung Han, the author of a Human Rights Watch report about education companies selling personal information to data brokers, wants a ban on personal data-fueled advertising to children. "Commercial interests and surveillance should never override a child's best interests or their fundamental rights, because children are priceless, not products," she said.