It was December 2020, and she was being invited into a pilot program providing guaranteed income--a direct cash transfer with no strings attached. For Softky, it was a lifeline. "For the first time in a long time, I felt like I could … take a deep breath, start saving, and see myself in the future," she says. The idea of "just giving people money" has been in and out of the news since becoming a favored cause for many high-profile Silicon Valley entrepreneurs, including Twitter's Jack Dorsey, Facebook cofounders Mark Zuckerberg and (separately) Chris Hughes, and Singularity University's Peter Diamandis. They proposed a universal basic income as a solution to the job losses and social conflict that would be wrought by automation and artificial intelligence--the very technologies their own companies create.
Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. What is new is how quickly malicious actors can spread disinformation when the world is tightly connected across social networks and internet news sites. We can give up on the problem and rely on the platforms themselves to fact-check stories or posts and screen out disinformation--or we can build new tools to help people identify disinformation as soon as it crosses their screens. Preslav Nakov is a computer scientist at the Qatar Computing Research Institute in Doha specializing in speech and language processing. He leads a project using machine learning to assess the reliability of media sources. That allows his team to gather news articles alongside signals about their trustworthiness and political biases, all in a Google News-like format. "You cannot possibly fact-check every single claim in the world," Nakov explains. Instead, focus on the source. "I like to say that you can fact-check the fake news before it was even written." His team's tool, called the Tanbih News Aggregator, is available in Arabic and English and gathers articles in areas such as business, politics, sports, science and technology, and covid-19. Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next. This podcast was produced in partnership with the Qatar Foundation. "Even the best AI for spotting fake news is still terrible," MIT Technology Review, October 3, 2018
Amazon's book recommendation algorithms that help customers discover new titles may have a dark side. A new report from the Institute for Strategic Dialogue says these algorithms steer people to books about conspiracy theories and extremism, sometimes introducing them to the work of conspiracy theorists who've been banned by other online platforms. People browsing a book about one conspiracy on Amazon are likely to get suggestions for more books on that topic, as well as books about other conspiracy theories, from QAnon to COVID-19 vaccines, the report found. Other features, such as auto-complete in the search bar and content suggestions for the same or similar authors, can also lead users down an extremist rabbit hole, said Chloe Colliver, head of digital policy and strategy at ISD. The pattern is similar to problems observed on other major online platforms such as Google's YouTube, whose algorithms have been found to direct users to extreme content, sucking them into violent ideologies.
How to Unplug from the World-Gobbling Machine One year ago, on March 19, 2020, I took a walk in my neighborhood park, seeking to clear my mind. The governor had declared a state of emergency 11 days earlier in response to the WHO's declaration of a global coronavirus pandemic. In the week that followed, I voluntarily moved my counseling practice to video-only sessions, doing my part to participate in the "two weeks to flatten the curve" campaign that had spread virally via social media and other means of internet delivery. After all, "We're all in this together," I thought. But during my first week of teletherapy sessions, a new, involuntary form of curve-flattening had begun to sweep the country, starting in the California Bay Area, in imitation of the Chinese and Italian lockdowns. Earlier that day the California governor had extended this lockdown to cover the entire state by executive decree. It seemed only a matter of time before the Oregon governor would follow suit (she did ...
A new machine-learning program accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time--a tool that could someday help public health officials combat misinformation online. A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories. Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods. The study used anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and to provide context for each through the first five months of the pandemic.
Scientists have developed a new machine learning tool that can identify Covid-19-related conspiracy theories on social media and predict how they evolved over time, an advance which may lead to better ways for public health officials to fight misinformation online. The study, published in the Journal of Medical Internet Research, analysed anonymised Twitter data to characterise four Covid-19 conspiracy theory themes – such as one that erroneously claims the Bill and Melinda Gates Foundation engineered or has malicious intent related to the pandemic. Using the AI tool's analysis of more than 1.8 million tweets that contained Covid-19 keywords, the scientists from the Los Alamos National Laboratory in the US categorised the posts as misinformation or not, and provided context for each of these conspiracy theories through the first five months of the pandemic. "From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand labeled several hundred tweets in each conspiracy theory category to construct training sets," explained Dax Gerts, a computer scientist and co-author of the study from the Los Alamos National Laboratory. The four major themes examined in the study were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered or has "malicious intent" related to Covid-19; that the novel coronavirus was bioengineered or was developed in a laboratory; and that vaccines for Covid-19, which were still in development during the study period, would be dangerous.
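The pattern-filtering step Gerts describes can be illustrated with a minimal sketch: keyword patterns pull theme-matched subsets out of a tweet stream, and those subsets are what would later be hand-labeled to build training sets. The patterns and theme names below are hypothetical illustrations, not the study's actual filters.

```python
import re

# Hypothetical keyword patterns, one per conspiracy-theory theme from the
# study; the study's real filters are not reproduced here.
THEME_PATTERNS = {
    "5g": re.compile(r"\b5g\b.*\b(tower|radiation|spread)", re.I),
    "gates": re.compile(r"\bgates\b.*\b(foundation|microchip|vaccine)", re.I),
    "lab_origin": re.compile(r"\b(bioengineer\w*|lab(oratory)?[- ]?(leak|made|origin))", re.I),
    "vaccine_danger": re.compile(r"\bvaccine\w*\b.*\b(danger\w*|unsafe|untested)", re.I),
}

def filter_by_theme(tweets):
    """Group tweets into theme-matched subsets for later hand-labeling.

    A tweet can land in more than one subset if it matches several patterns.
    """
    subsets = {theme: [] for theme in THEME_PATTERNS}
    for tweet in tweets:
        for theme, pattern in THEME_PATTERNS.items():
            if pattern.search(tweet):
                subsets[theme].append(tweet)
    return subsets
```

A run over a handful of example tweets would route each into the matching bucket (or none), leaving a human annotator with a much smaller, theme-focused pile to label than 1.8 million raw tweets.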
Obviously, the coronavirus has had a devastating impact on our world. The transformative effects of COVID-19 are immense, but they aren't all negative. When faced with the reality of everything being remote, from work to entertainment to education to connecting with friends, the technologies that propel the 4th Industrial Revolution offered solutions to maintain some normality in business and life. As more companies relied on these technologies to continue operations, the things that had held digital transformation back in the past were challenged. As a result, as Satya Nadella, CEO of Microsoft, said, "We've seen two years' worth of digital transformation in two months."
Facebook is dipping its toes further into the world of dating. The company's New Product Experimentation (NPE) Team, which creates experimental apps, has released a video speed-dating app called Sparked. It sounds a little like both Chatroulette and the video chat features that major dating apps have added since the onset of the COVID-19 pandemic. You'll go on a rapid-fire series of four-minute dates and, if you and the other person enjoy your time together, you can go on a 10-minute second date. At that point, you'll need to exchange contact details if you want to stay in touch.
The world celebrated Women's History Month in March, and it is a timely moment for us to look at the forces that will shape gender parity in the future. Even as the pandemic accelerates digitization and the future of work, artificial intelligence (AI) stands out as a potentially helpful--or hurtful--tool in the equity agenda. McKinsey recorded a podcast in collaboration with Citi that dives into how gender bias is reflected in AI, why we must consciously debias our machine-human interfaces, and how AI can be a positive force for gender parity. Ioana Niculcea: Before we start the conversation, I think it's important for us to spend a moment assessing the amount of change that has taken place with regard to AI, and how the pace of that change has accelerated over the past few years. And many people argue that in light of the current COVID-19 circumstance, we'll feel further acceleration as people move toward digitization. I spent the past eight years in financial services, and it all started with data. Datafication of the industry was sort of the point of origin. And we often hear that over 90 percent of the data that we have today was created over the past two years. You hear things like every minute, there are over one million Facebook logins and 4.5 million YouTube videos being streamed, or 17,000 different Uber rides. There's a lot of data, and it's often said that only about 1 percent of it is being analyzed today.