The Download: AI's end of life decisions, and green investing
End-of-life decisions can be extremely upsetting for surrogates--the people who have to make those calls on behalf of another person. Friends or family members may disagree over what's best for their loved one, which can lead to distressing situations. David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial intelligence-based tool that can help surrogates predict what the patients themselves would want in any given situation. Wendler hopes to start building their tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won't be simple.
Bandits for Online Calibration: An Application to Content Moderation on Social Media Platforms
Avadhanula, Vashist, Baki, Omar Abdul, Bastani, Hamsa, Bastani, Osbert, Gocmen, Caner, Haimovich, Daniel, Hwang, Darren, Karamshuk, Dima, Leeper, Thomas, Ma, Jiayuan, Macnamara, Gregory, Mullett, Jake, Palow, Christopher, Park, Sung, Rajagopal, Varun S, Schaeffer, Kevin, Shah, Parikshit, Sinha, Deeksha, Stier-Moses, Nicolas, Xu, Peng
We describe the current content moderation strategy employed by Meta to remove policy-violating content from its platforms. Meta relies on both handcrafted and learned risk models to flag potentially violating content for human review. Our approach aggregates these risk models into a single ranking score, calibrating them to prioritize more reliable risk models. A key challenge is that violation trends change over time, affecting which risk models are most reliable. We use a contextual bandit to update the calibration in response to such trends. Our system additionally handles production challenges such as retrained and newly introduced risk models. Our approach increases Meta's top-line metric for measuring the effectiveness of its content moderation strategy by 13%.
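The core idea in the abstract — aggregating several risk-model scores into one ranking score, with calibration weights updated by a bandit as reliability shifts — can be sketched minimally. This is an illustrative epsilon-greedy toy, not Meta's system; the class name, update rule, and parameters are all assumptions made for the example.

```python
import random

class CalibrationBandit:
    """Illustrative sketch: learn which risk models are currently most
    reliable and weight their scores accordingly. Epsilon-greedy stands
    in for the contextual bandit described in the abstract."""

    def __init__(self, n_models, epsilon=0.1, lr=0.05, seed=0):
        self.weights = [1.0 / n_models] * n_models  # start uniform
        self.epsilon = epsilon  # exploration rate
        self.lr = lr            # step size for weight updates
        self.rng = random.Random(seed)

    def rank_score(self, model_scores):
        """Aggregate per-model risk scores into a single ranking score."""
        return sum(w * s for w, s in zip(self.weights, model_scores))

    def update(self, model_scores, violated):
        """After human review of an item, reward each model in proportion
        to how well its score agreed with the outcome; occasionally
        perturb weights to keep exploring as trends drift."""
        for i, s in enumerate(model_scores):
            agreement = s if violated else (1.0 - s)
            self.weights[i] += self.lr * (agreement - 0.5)
            if self.rng.random() < self.epsilon:
                self.weights[i] += self.lr * self.rng.uniform(-0.5, 0.5)
        # Renormalize so weights stay positive and sum to 1.
        total = sum(max(w, 1e-6) for w in self.weights)
        self.weights = [max(w, 1e-6) / total for w in self.weights]
```

As feedback accumulates, a model whose scores consistently track human-review outcomes gains weight, so its opinion counts for more in the aggregated ranking score — the same mechanism the paper uses to cope with changing violation trends.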
- Oceania > Australia (0.04)
- North America > United States > Pennsylvania (0.04)
- Information Technology (0.69)
- Leisure & Entertainment > Sports > Soccer (0.46)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.50)
Twitter: Spiros Margaris on AI and global privacy laws top tweet Q4 2021
Verdict lists five of the most popular tweets on artificial intelligence (AI) in Q4 2021 based on data from GlobalData's Technology Influencer Platform. The top tweets are ranked by total engagements (likes and retweets) received on tweets from more than 150 AI experts tracked by the platform during the fourth quarter (Q4) of 2021. Spiros Margaris, board member of venture capital firm Margaris Ventures, shared an article on how companies are looking to deploy AI while also complying with data regulations and trends. Companies are focusing on how consumer data can be used while protecting that data and building trust in their personalised services. Global technology company IBM, for instance, launched a data fabric solution, which allows consumers to have a complete view of their data, irrespective of where the data resides.
He got Facebook hooked on AI. Now he can't fix its misinformation addiction
The Cambridge Analytica scandal would kick off Facebook's largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump's favor. Millions began deleting the app; employees left in protest; the company's market capitalization plunged by more than $100 billion after its July earnings call. In the ensuing months, Mark Zuckerberg began his own round of apologies. He apologized for not taking "a broad enough view" of Facebook's responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy. Finally, Mike Schroepfer, Facebook's chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company's algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI. Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook's position as an AI powerhouse. In his six years at Facebook, he'd created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he'd diffused those algorithms across the company. Now his mandate would be to make them less harmful. Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms.
- Asia > Russia (0.34)
- Asia > Myanmar (0.04)
- North America > United States > New York (0.04)
Facebook wants to help build AI that can remember everything for you
On Friday, Facebook announced new AI research that could help pave the way for a significant change in how artificial intelligence -- and some devices that incorporate this technology -- functions in our daily lives. The company announced a real-world sound simulator that will let researchers train AI systems in virtual three-dimensional spaces with sounds that mimic those that occur indoors, opening up the possibility that an AI assistant may one day help you track down a smartphone ringing in a distant room. Facebook also unveiled an indoor mapping tool meant to help AI systems better understand and recall details about indoor spaces, such as how many chairs are in a dining room or whether a cup is on a counter. This isn't something you can do with technology as it is today. Smart speakers generally can't "see" the world around them, and computers are not nearly as good as humans at finding their way around indoor spaces.
Facebook's AI for detecting hate speech is facing its biggest challenge yet
The single most amazing thing about Facebook is how vast it is. But while more than two and a half billion people find value in the service, this scale is also Facebook's biggest downfall. Controlling what happens in that vast digital space is nearly impossible, especially for a company that historically hasn't been very responsible about managing the possible harms implicit in its technology. Only in 2017--13 years into its history--did Facebook seriously begin facing up to the fact that its platform could be used to deliver toxic speech, propaganda, and misinformation directly to the brains of millions of people. Various flavors of toxic stuff can be found all over Facebook, from bullying and child trafficking to the rumors, hate, and fakery that helped Donald Trump become president in 2016. In the past few years, Facebook has invested heavily in measures to control this kind of toxic content. It has mainly outsourced its content moderation to a small army of reviewers in contract shops around the world. But content moderators can't begin to weed through all the harmful content, and the traffickers of such stuff are constantly evolving new ways of evading them.
- Media (1.00)
- Information Technology > Services (1.00)
Deepfakes aren't very good--nor are the tools to detect them – Ars Technica
The best deepfake detector to emerge from a major Facebook-led effort to combat the altered videos would only catch about two-thirds of them. In September, as speculation about the danger of deepfakes grew, Facebook challenged artificial intelligence wizards to develop techniques for detecting deepfake videos. In January, the company also banned deepfakes used to spread misinformation. Facebook's Deepfake Detection Challenge, in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was run through Kaggle, a platform for coding contests that is owned by Google. It provided a vast collection of face-swap videos: 100,000 deepfake clips, created by Facebook using paid actors, on which entrants tested their detection algorithms.
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
Facebook announces the winner of its Deepfake Detection Challenge
In September of 2019, Facebook launched its Deepfake Detection Challenge (DFDC) -- a public contest to develop autonomous algorithmic detection systems to combat the emerging threat of deepfake videos. After nearly a year, the social media platform announced the winners of the challenge, drawn from a pool of more than 2,000 global competitors. Deepfakes present a unique challenge to social media platforms. Capable of being produced with little more than a consumer-grade GPU and software downloadable from the internet, they let individuals quickly and easily create fraudulent video clips whose subjects appear to say or do things that they actually didn't.
Facebook just released a database of 100,000 deepfakes to teach AI how to spot them
Social-media companies are concerned that deepfakes could soon flood their sites. But detecting them automatically is hard. To address the problem, Facebook wants to use AI to help fight back against AI-generated fakes. To train AIs to spot manipulated videos, it is releasing the largest-ever data set of deepfakes: more than 100,000 clips produced using 3,426 actors and a range of existing face-swapping techniques. "Deepfakes are currently not a big issue," says Facebook's CTO, Mike Schroepfer.
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)