Ever since I was a boy, I have been fascinated by the idea of miniaturization. I read Isaac Asimov's Fantastic Voyage and then, when I finally got my hands on the movie, I probably watched it a dozen times. The premise was that a team of scientists was miniaturized to the point where they could be injected into a person and perform surgery from the inside. Another movie with a similar premise was Innerspace, starring the incredibly well-matched team of Martin Short and Dennis Quaid. There was also the whole Honey, I Shrunk the Kids series of movies and TV shows, and I ate them up as well.
In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
On the morning of November 9, 2016, the world woke up to the shocking outcome of the U.S. Presidential election: Donald Trump was the 45th President of the United States of America. It was an unexpected event that still has tremendous consequences all over the world. Today, we know that a minority of social bots--automated social media accounts mimicking humans--played a central role in spreading divisive messages and disinformation, possibly contributing to Trump's victory.16,19 In the aftermath of the 2016 U.S. elections, the world started to realize the gravity of widespread deception in social media. Since then, we have witnessed the emergence of a strident dissonance between the multitude of efforts to detect and remove bots and the increasing effects these malicious actors seem to have on our societies.27,29 This paradox raises a burning question: What strategies should we enforce in order to stop this social bot pandemic? In these times--during the run-up to the 2020 U.S. elections--the question appears more crucial than ever, particularly in light of recently reported tampering with the electoral debate by thousands of AI-powered accounts.a What struck social, political, and economic analysts after 2016--deception and automation--has been a matter of study for computer scientists since at least 2010. Via a longitudinal analysis, we discuss the main trends of research in the fight against bots, the major results that have been achieved, and the factors that make this never-ending battle so challenging. Capitalizing on the lessons learned from our extensive analysis, we suggest possible innovations that could give us the upper hand against deception and manipulation. Studying a decade of endeavors in social bot detection can also inform strategies for detecting and mitigating the effects of other--more recent--forms of online deception, such as strategic information operations and political trolls.
The RCMP awarded a new social media monitoring contract Sept. 2 to a U.S. company that uses artificial intelligence to track what's said on the web. Virginia-based Babel Street says its software can instantly translate between 200 languages and filter social media content by geographic areas and by sentiments expressed.
Artificial Intelligence and Machine Learning (AI & ML) and Sentiment Analysis are said to "predict the future through analysing the past" – the Holy Grail of the finance sector. They can replicate cognitive decisions made by humans yet avoid the behavioural biases inherent in human decision-making. Processing news and social media data, classifying (market) sentiment, and measuring how it impacts financial markets is a growing area of research. The field has recently progressed further with many new "alternative" data sources, such as email receipts, credit/debit card transactions, weather, geo-location, satellite data, Twitter, microblogs, and search engine results. AI & ML are gaining adoption in the financial services industry, especially in the context of compliance, investment decisions, and risk management.
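As a minimal illustration of the kind of pipeline described above, the sketch below scores news headlines against a toy sentiment lexicon and aggregates the scores into a coarse market-sentiment label. The lexicon, the example headlines, and the thresholds are all invented for illustration; real systems use far richer models and data sources.

```python
# Toy lexicon-based sentiment scorer for financial headlines.
# The word lists and thresholds are illustrative assumptions, not a real model.
POSITIVE = {"beats", "growth", "surge", "upgrade", "record"}
NEGATIVE = {"miss", "loss", "downgrade", "fraud", "slump"}

def score_headline(headline: str) -> int:
    """Return +1 per positive word and -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def market_sentiment(headlines: list[str]) -> str:
    """Aggregate headline scores into a coarse sentiment label."""
    total = sum(score_headline(h) for h in headlines)
    if total > 0:
        return "bullish"
    if total < 0:
        return "bearish"
    return "neutral"

headlines = [
    "Acme beats earnings estimates, shares surge",
    "Rival firm reports quarterly loss",
]
print(market_sentiment(headlines))  # -> bullish (+2 from headline 1, -1 from headline 2)
```

A bag-of-words lexicon like this captures none of the negation, sarcasm, or context that make financial text hard, which is precisely why the field has moved toward learned models and alternative data.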
UN Global Pulse is a United Nations initiative doing pioneering work at the nexus between development aid and technology. It is conducting research into the potential of Big Data and artificial intelligence in addition to supporting other UN agencies in the implementation of projects. In response to the pandemic, governments around the world have grown increasingly interested in data-focused models that might be able to forecast the spread of COVID-19 infections and the effectiveness of planned strategies along with their possible side effects. The innovation team at the UN Refugee Agency (UNHCR) has already developed a number of experimental approaches to gather clues about possible future migration events. Data analysts, for example, have used open source weather data and Facebook postings from migrant traffickers for clues about smuggling prices, the most frequently used routes and assembly points.
AI Alignment through anthropology: How social science can steer AI towards better outcomes
Guest article by Anna Leggett, Senior Research Consultant, and Morgan Williams, Junior Consultant, Stripe Partners.
If an advanced AI system were instructed to make paper clips, or to fetch coffee, we would not want it to
On Saturday, a user tested for racial bias in Twitter's AI photo-cropping tool using two strips of photos. Twitter automatically crops pictures attached to a tweet, showing the full picture only after a user clicks on it, to keep the tweet compact. Taking two photos--one of Mitch McConnell, a white US senator, and the other of former US president Barack Obama, who is Black--each placed at opposite ends of a long white strip, the user checked which face the microblogging platform would display in the tweet's preview. In all his attempts, the user found that the cropping algorithm displayed McConnell's photo over Obama's, even after changing secondary features that could affect the order.
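Twitter's cropper has been reported to preview the most "salient" region of an image. The sketch below simulates the position-swap test described above under that assumption: a stub saliency model assigns each face a fixed, invented score, the cropper previews the highest-scoring face, and swapping the vertical order checks whether the choice tracks position or the face itself. The face names and scores are illustrative assumptions, not Twitter's actual model or values.

```python
# Sketch of the position-swap test. Twitter's real cropper uses a learned
# saliency model; this stub assigns fixed (invented) scores per face.

def saliency(face: str) -> float:
    """Stub saliency model: fixed scores per face (illustrative only)."""
    scores = {"mcconnell": 0.9, "obama": 0.7}
    return scores[face]

def chosen_crop(strip: list[str]) -> str:
    """A saliency-based cropper previews the highest-scoring face."""
    return max(strip, key=saliency)

# Swap the vertical order of the two faces. A cropper that favors one face
# will pick that same face in both orientations, as the user observed.
top_first = chosen_crop(["mcconnell", "obama"])
bottom_first = chosen_crop(["obama", "mcconnell"])
print(top_first, bottom_first)  # -> mcconnell mcconnell
```

The point of the swap is to rule out a purely positional explanation: if the preview always favored the top of the strip, reversing the order would flip the result.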
The graph represents a network of 2,121 Twitter users whose tweets in the requested range contained "#iiot", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Tuesday, 22 September 2020 at 21:13 UTC. The requested start date was Tuesday, 22 September 2020 at 00:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 2-day, 16-hour, 57-minute period from Saturday, 19 September 2020 at 07:03 UTC to Tuesday, 22 September 2020 at 00:00 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.
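The network described above can be reconstructed from raw tweets as a directed graph with an edge from each tweet's author to every user that tweet replies to or mentions. The sketch below builds such a graph with only the standard library; the tweet records and field names are invented examples, not the NodeXL data itself.

```python
# Build a reply/mention network as a directed graph (stdlib only).
# The tweet records are invented; the real dataset holds 2,121 users' #iiot tweets.
tweets = [
    {"author": "alice", "mentions": ["bob", "carol"]},
    {"author": "bob", "mentions": ["alice"]},
    {"author": "dave", "mentions": []},  # tweeted but mentioned no one
]

nodes: set[str] = set()
edges: set[tuple[str, str]] = set()
for t in tweets:
    nodes.add(t["author"])                # every author is a vertex...
    for target in t["mentions"]:
        nodes.add(target)                 # ...and so is every mentioned user
        edges.add((t["author"], target))  # edge: author -> replied-to/mentioned user

print(len(nodes), len(edges))  # -> 4 3
```

Using sets deduplicates repeated mentions between the same pair of users; a weighted variant would instead count them, which is closer to how NodeXL sizes its edges.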