IBM's Watson Assistant can now field election questions

#artificialintelligence

Ahead of the U.S. presidential election on November 3, IBM today announced it's working with states to put information into the hands of potential voters. Using the AI and natural language processing capabilities of Watson Assistant, IBM says it's helping field voter queries online and via phone by advising people on polling place locations, voting hours, procedures for requesting mail-in ballots, and deadlines. Research from the Pew Research Center indicates that nearly half of all U.S. voters expect to have difficulties casting a ballot due to the coronavirus pandemic. In a recent NPR/PBS NewsHour/Marist Poll, 41% of those surveyed said they believed the U.S. is not very prepared or not at all prepared to keep November's election safe and secure. IBM's election-focused Watson Assistant offering taps Watson Discovery to surface information about voting logistics from federal, state, and county websites; local news reports; and government documents.
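
For a sense of what such an integration looks like in practice, here is a minimal sketch that sends a voter-style question to a Watson Assistant instance using IBM's ibm-watson Python SDK. The API key, service URL, assistant ID, and sample question are all placeholders, and this generic Assistant call only approximates the election-specific offering described above; the excerpt doesn't detail how IBM wires Watson Discovery into it.

```python
# Minimal sketch: querying a Watson Assistant instance with a voter-style question.
# Credentials, service URL, and assistant ID below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY")  # placeholder
assistant = AssistantV2(version="2020-04-01", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")  # region-specific

ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

# Each conversation happens inside a session.
session_id = assistant.create_session(
    assistant_id=ASSISTANT_ID
).get_result()["session_id"]

# Ask a typical voter question; the assistant's configured skills and any
# search integration (e.g., Watson Discovery) determine what comes back.
response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session_id,
    input={"message_type": "text", "text": "Where is my polling place?"},
).get_result()

# Print any plain-text answers returned by the assistant.
for item in response["output"]["generic"]:
    if item.get("response_type") == "text":
        print(item["text"])

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session_id)
```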


AI Weekly: Cutting-edge language models can produce convincing misinformation if we don't stop them

#artificialintelligence

It's been three months since OpenAI launched an API underpinned by cutting-edge language model GPT-3, and it continues to be the subject of fascination within the AI community and beyond. Portland State University computer science professor Melanie Mitchell found evidence that GPT-3 can make primitive analogies, and Columbia University's Raphaël Millière asked GPT-3 to compose a response to the philosophical essays written about it. But as the U.S. presidential election nears, there's growing concern among academics that tools like GPT-3 could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies' Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3's strength in generating "informational," "influential" text could be leveraged to "radicalize individuals into violent far-right extremist ideologies and behaviors." Bots are increasingly being used around the world to sow the seeds of unrest, either through the spread of misinformation or the amplification of controversial points of view.
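
Part of the concern is how little effort such generation takes. As a rough illustration, the sketch below calls the completions endpoint of the 2020-era openai Python package, the API referenced above; the engine name, prompt, and sampling parameters are illustrative assumptions rather than anything OpenAI or the CTEC paper specifies.

```python
# Minimal sketch: generating text with the 2020-era OpenAI completions API.
# The engine name, prompt, and parameters are illustrative; an API key is required.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",  # the GPT-3 base engine exposed by the 2020 API
    prompt="Summarize, in one sentence, how absentee ballots are counted:",
    max_tokens=60,
    temperature=0.7,   # higher values produce more varied completions
)

print(response.choices[0].text.strip())
```

A single short prompt returns fluent, on-topic text, which is precisely why researchers worry the same interface could be pointed at misinformation at scale.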


Mozilla wants to understand your weird YouTube recommendations

ZDNet

From cute cat videos to sourdough bread recipes: sometimes, it feels like the algorithm behind YouTube's "Up Next" section knows the user better than the user knows themselves. Often, that same algorithm leads the viewer down a rabbit hole. How often have you lost hours clicking through one suggested video after another, each time promising yourself that this one would be the last? The scenario gets thornier when the system steers the user toward conspiracy theory videos and other forms of extreme content, as some have complained. To get an idea of how often this happens, and how, the non-profit Mozilla Foundation has launched a new browser extension that lets users report YouTube recommendations they wish they hadn't ended up watching.


Voice assistants are doing a poor job of conveying information about voting

#artificialintelligence

Over 111.8 million people in the U.S. talk to voice assistants like Siri, Alexa, and Google Assistant every month, eMarketer estimates. Tens of millions of those people use assistants as data-finding tools, with the Global Web Index reporting that 25% of adults regularly perform voice searches on smartphones. But while voice assistants can answer questions about pop culture and world events like a pro, preliminary evidence suggests they struggle to supply information about elections. In a test of popular assistants' abilities to provide accurate, localized context concerning the upcoming U.S. presidential election, VentureBeat asked Alexa, Siri, and Google Assistant a set of standardized questions about procedures, deadlines, and misconceptions about voting. In general, the assistants fared relatively poorly, often answering questions with information about voting in other states or punting questions to the web instead of answering them directly. In light of historic misinformation efforts around the election, the shortcomings have the potential to sow confusion or hamper get-out-the-vote efforts -- especially among those with accessibility challenges who rely heavily on voice assistants.


Faked videos shore up false beliefs about Biden's mental health

#artificialintelligence

From Ronald Reagan in 1984 to Bob Dole in 1996 and even Hillary Clinton in 2016, candidate health has become a common theme across recent U.S. presidential campaigns. The issue is poised to take on added significance this fall. No matter who wins, the U.S. is set to inaugurate its oldest president by a wide margin. The Trump campaign and its surrogates have seized on Democratic nominee Joe Biden's age and have been painting him as mentally unfit for the presidency. Videos of Biden falling asleep during an interview, misspeaking about the dangers of "Joe Biden's America" and appearing lost during a campaign event have bolstered the belief, particularly among Trump supporters, that Biden is in cognitive decline.


Iran warns US against 'strategic mistake' after Trump's threat

Al Jazeera

Iran has warned the United States against making a "strategic mistake" after President Donald Trump threatened Tehran over reports it planned to avenge the killing of top general Qassem Soleimani. "We hope that they do not make a new strategic mistake and certainly in the case of any strategic mistake, they will witness Iran's decisive response," government spokesman Ali Rabiei told a televised news conference on Tuesday. Trump on Monday promised that any attack by Iran would be met with a response "1,000 times greater in magnitude," after reports said Iran planned to avenge Soleimani's killing in a US drone attack in January this year. A US media report, quoting unnamed officials, said an alleged Iranian plot to assassinate the US ambassador to South Africa was planned before the presidential election in November. "According to press reports, Iran may be planning an assassination, or other attack, against the United States in retaliation for the killing of terrorist leader Soleimani," Trump tweeted.


Can AI Predict the 2020 Election?

#artificialintelligence

The outcome of the 2020 US Presidential election is becoming less and less predictable by the day. Will a vaccine be available by November? How many people will (be able to) vote? There's not even agreement on how many swing states there are -- 6, 10, 11, perhaps 12 or more? There are many opinionated arguments to be made, but there's hardly a rigorous way to analyze how unprecedented current events will impact voting habits.


Deepfake Fiascos Of 2020 That Made Headlines

#artificialintelligence

Deepfakes are indeed scary and have managed to strike a nerve for many, especially those victimised by this sophisticated technology. Not only has the technology become a worldwide concern because of its impact on election campaigns, but it has also made people anxious about the criminal activity associated with it. Easily accessible deepfake-making tools, combined with advances in GANs, have made it relatively easy for bad actors to create these eerie, unreal AI-generated videos and images. This improvement and accessibility have in turn increased the number of deepfake incidents in recent times. Some are so convincing that they can pass for the original videos. One such incident showcased one of the stranger applications of deepfakes, in which artificial intelligence was used to manipulate audio content -- a less familiar variant known as an audio deepfake scam.


'Video Authenticator' is Microsoft's answer to Deepfake detection

#artificialintelligence

Deepfakes are a class of synthetic media generated by AI and represent another dark side of technology -- this form of artificial intelligence stole headlines last year when a LinkedIn persona by the name of Katie Jones appeared on the platform and started connecting with the who's who of the political elite in Washington, D.C. It was alarming how deep learning created a lifelike image of a person and then penetrated social media, spreading misinformation. With the U.S. presidential election looming, lawmakers in the country are worried that deepfakes could greatly jeopardize the transparency of the democratic process. Many of the leading tech companies have been asked for help and are working on tools that can detect such synthetic media. Global software giant Microsoft has now released two new tools that can spot whether a given piece of media has been artificially manipulated.


Microsoft's New Deepfake Detector Puts Reality To The Test - Liwaiwai

#artificialintelligence

The upcoming US presidential election seems set to be something of a mess--to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn't shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn't be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines. So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn't pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent.