The list of concerns around the use of artificial intelligence seems to grow with every passing week. Issues around bias, AI-generated deepfake videos and audio, misinformation, governmental surveillance, security, and the technology's failure to properly identify even the simplest of objects have created a cacophony of concern about its long-term future. One software company recently released a study showing that only 25% of consumers would trust a decision made by systems using AI, and a report commissioned by KPMG International found that a mere 35% of information technology leaders had a high level of trust in their own organizations' analytics. It's a bumpy journey for AI as the technology world embarks on a new decade, and key practitioners in the space are well aware that trust will ultimately determine how widely and quickly the technology is adopted throughout the world. "We want to build an ecosystem of trust," Francesca Rossi, AI ethics global leader at IBM Corp., said at the digital EmTech Digital conference on Monday.
Researchers are utilising artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos and disinformation online in the 2020 US election. The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections. According to the study, published in the journal Bulletin of the Atomic Scientists, the scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. "Memes are easy to create and even easier to share. When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm," said study researcher Tim Weninger, Associate Professor at the University of Notre Dame in the US.
Researchers at the University of Notre Dame are using artificial intelligence to develop an early warning system that will identify manipulated images, deepfake videos and disinformation online. The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections. The scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. "Memes are easy to create and even easier to share," said Tim Weninger, associate professor in the Department of Computer Science and Engineering at Notre Dame. "When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm."
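The article does not describe the Notre Dame pipeline in detail, but one common building block of content-based image retrieval is a perceptual "average hash": downscale an image to a tiny grayscale grid, set one bit per pixel based on whether it is above the mean, and compare hashes by Hamming distance so that lightly edited copies of the same meme land close together. The sketch below is a toy illustration of that general technique, not the researchers' actual system; the 8x8 grids stand in for downscaled images.

```python
# Toy sketch of content-based image retrieval via average hashing.
# The 8x8 integer grids below are hypothetical stand-ins for
# downscaled grayscale images (real pipelines would decode and
# resize actual image files first).

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

meme = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
# A lightly edited copy (e.g. recompressed or slightly brightened)
variant = [[min(255, p + 3) for p in row] for row in meme]
# A completely different image
unrelated = [[(255 - r * c) % 256 for c in range(8)] for r in range(8)]

h = average_hash(meme)
print(hamming(h, average_hash(variant)))    # small distance: likely the same meme
print(hamming(h, average_hash(unrelated)))  # large distance: a different image
```

Because the hash depends only on each pixel's relation to the image mean, uniform brightness shifts and mild recompression leave it nearly unchanged, which is what makes near-duplicate retrieval across social networks tractable at scale.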
Our 2020 presidential candidates will be questioned about their stance on artificial intelligence (AI) policy, especially with regard to the job displacement AI could cause in manufacturing, transportation, and other industries. Over-regulating AI could hand technical superiority to countries like China and Russia, with ripple effects on America's GDP and even threats to national security. But under-regulation could lead to a massive consolidation of power among a handful of American technology companies, millions of jobs lost without replacement planning, and algorithms that show bias based on age, race, gender, and more. We're certain to hear statements about upskilling -- the process of helping displaced workers acquire new skills so they can find other employment -- and about taxing robots to slow down job loss. But the candidates will need to offer up more than a few soundbites.
After the surprising results of the 2016 presidential election, I wanted to better understand the socio-economic and cultural factors that played a role in voting behavior. With the election results in the books, I thought it would be fun to reverse-engineer a predictive model of voting behavior from some of the widely available county-level data sets. The predictions are driven by a random forest classification model that has been tuned and trained on 71 distinct county-level attributes; on real data, the model has a predictive accuracy of 94.6% and an ROC AUC score of 96%. This also makes counterfactuals easy to explore: if you want to answer the question "how could the election have been different if the percentage of people with at least a bachelor's degree had been 2% higher nationwide?" you can simply toggle that parameter up to 1.02 and click "Submit" to find out.
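The workflow described above can be sketched in a few lines with scikit-learn: fit a random forest on tabular features, score it with accuracy and ROC AUC, then rescale one feature to ask a counterfactual question. Everything below is a hypothetical stand-in: the data is synthetic, feature 0 merely plays the role of "bachelor's-degree share," and none of it reflects the author's actual 71-attribute dataset.

```python
# Hypothetical sketch of the random-forest voting model and the
# "toggle a feature to 1.02" counterfactual. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000  # pretend each row is a county
X = rng.normal(size=(n, 5))  # stand-ins for county-level attributes
# Make the outcome depend partly on feature 0 ("education share")
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, model.predict(X_te))
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Counterfactual: scale the "education" feature up 2% everywhere,
# then count how many predicted outcomes flip.
X_cf = X_te.copy()
X_cf[:, 0] *= 1.02
flipped = int((model.predict(X_cf) != model.predict(X_te)).sum())
print(acc, auc, flipped)
```

The counterfactual step is the interesting part: because the forest is already trained, re-scoring a perturbed copy of the inputs is cheap, which is what makes an interactive "toggle and Submit" interface feasible.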
The world's tech industry will be shaped by China, artificial intelligence, cancel culture, and a number of other trends, according to the Future Today Institute's 2020 Tech Trends Report. Now in its 13th year, the document is put together by the institute and its director Amy Webb, who is also a professor at New York University's Stern School of Business. The report attempts to recognize connections between tech and future uncertainties like the outcome of the 2020 U.S. presidential election, as well as the spread of epidemics like coronavirus. Among the major trends in the report: the 2020s will be the synthetic decade. "Soon, we will produce 'designer' molecules in a range of host cells on demand and at scale, which will lead to transformational improvements in vaccine production, tissue production and medical treatments. Scientists will start to build entire human chromosomes, and they will design programmable proteins," the report reads.
Stahlman joins Team Human to discuss how artificial intelligence has become the new ground for human interaction, and why navigating it will require us to retrieve our uniquely human senses. "We will only become fully human if we learn to take responsibility for our actions." Further, he discusses the shift from a television environment to a digital environment and what that means for our collective sensibilities. Rushkoff discusses Super Tuesday results and looks at the figure and ground of the presidential race and what we can do in our local communities to create change.
While this ban on technically manipulated videos of political figures isn't new and has been in place since the last presidential election in 2016, it illustrates just how difficult it has become for the public (and organisations) to verify a person's true identity online. A deepfake today uses AI to combine existing images and audio of a person to replicate both their face and voice. Essentially, it can impersonate a real person, making them appear to say words they have never spoken – hence the fear when it comes to general elections and politics being skewed by doctored videos. Worryingly, the number of deepfakes online has nearly doubled in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later. While the majority of these are porn-related, the problem isn't solely confined to this space.
"The Five" discussed the media reaction to reports on Russia's involvement or prospective involvement in the 2020 presidential election Monday, with particular focus on cable news channels CNN and MSNBC. "In terms of these talking heads on TV, the makeup-wearing misery mongers, you're never, ever, ever going to hear them apologize for getting it wrong literally for the last four years," Fox Business Network's Dagen McDowell said. "Because in their arrogance and insecurity, they'll never be able to admit that they are tools for Putin and also fools." A U.S. intelligence official told Fox News Sunday that contrary to numerous recent media reports, there is no evidence to suggest that Russia is making a specific "play" to boost President Trump's reelection bid. The official added that top election security official Shelby Pierson, who briefed Congress on Russian election interference efforts earlier this month, may have overstated intelligence regarding the issue.