Ever since I was a boy, I have been fascinated by the idea of miniaturization. I read Isaac Asimov's Fantastic Voyage and then, when I finally got my hands on the movie, I probably watched it a dozen times. The premise was that a team of scientists was miniaturized to the point where they could be injected into a person and perform surgery from the inside. Another movie with a similar premise was Innerspace, starring the incredibly well-matched team of Martin Short and Dennis Quaid. There was also the whole Honey, I Shrunk the Kids series of movies and TV shows, and I ate them up as well.
The way we work has changed and continues to change. People now work remotely while remaining part of their teams, regardless of location. With this shift, traditional training methods, which are restrictive and costly, have become less relevant. One of the challenges teachers face is providing customized learning that caters to the needs of every student. Because different students have different requirements, even teaching a single student is an arduous task: the teacher must find the right curriculum to meet that student's needs.
Shopping from home during the coronavirus pandemic has driven a surge in online retail transactions. That, in turn, has produced a big spike in support calls, posing problems especially for larger e-tailers. Fortunately, chatbot technology has been there to take on the brunt of this burden. Recent research by Digital360Commerce quantifies the spike at a whopping 426 percent increase in chatbot-driven customer service sessions in April 2020 compared with the preceding February. The challenge for human service agents is that, despite the ease with which most voice over IP (VoIP) call centers claim they can support agents working in distributed environments (at home, for instance), most support services aren't set up that way; only their central call centers have the network infrastructure to handle the new volume of support calls.
In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
On the morning of November 9, 2016, the world woke up to the shocking outcome of the U.S. Presidential election: Donald Trump was the 45th President of the United States of America, an unexpected event that still has tremendous consequences all over the world. Today, we know that a minority of social bots--automated social media accounts mimicking humans--played a central role in spreading divisive messages and disinformation, possibly contributing to Trump's victory.16,19 In the aftermath of the 2016 U.S. elections, the world started to realize the gravity of widespread deception in social media. Since then, we have witnessed the emergence of a strident dissonance between the multitude of efforts for detecting and removing bots and the increasing effects these malicious actors seem to have on our societies.27,29 This paradox raises a burning question: What strategies should we enforce in order to stop this social bot pandemic? In these times--during the run-up to the 2020 U.S. elections--the question appears more crucial than ever, particularly in light of the recently reported tampering with the electoral debate by thousands of AI-powered accounts.a What struck social, political, and economic analysts after 2016--deception and automation--has been a matter of study for computer scientists since at least 2010. Via a longitudinal analysis, we discuss the main trends of research in the fight against bots, the major results that were achieved, and the factors that make this never-ending battle so challenging. Capitalizing on lessons learned from our extensive analysis, we suggest possible innovations that could give us the upper hand against deception and manipulation. Studying a decade of endeavors in social bot detection can also inform strategies for detecting and mitigating the effects of other--more recent--forms of online deception, such as strategic information operations and political trolls.
The IoT is getting smarter. Companies are incorporating artificial intelligence--in particular, machine learning--into their Internet of Things applications and seeing capabilities grow, including improving operational efficiency and helping avoid unplanned downtime. With a wave of investment, a raft of new products, and a rising tide of enterprise deployments, artificial intelligence is making a splash in the Internet of Things (IoT). Companies crafting an IoT strategy, evaluating a potential new IoT project, or seeking to get more value from an existing IoT deployment may want to explore a role for AI. Artificial intelligence is playing a growing role in IoT applications and deployments, a shift apparent in the behavior of companies operating in this area.
The RCMP awarded a new social media monitoring contract Sept. 2 to a U.S. company that uses artificial intelligence to track what's said on the web. Virginia-based Babel Street says its software can instantly translate between 200 languages and filter social media content by geographic areas and by sentiments expressed.
Whenever you use a free application, website, or service, the companies behind it gain large amounts of information about you and then package you with other users of similar ages and interests to be sold to advertisers. This process, called data mining, is how Google generated a staggering $134.81 billion in advertising revenue in 2019 alone. With advertising accounting for over 70% of Google's revenue, the company has no option but to try to convince us that we should not merely tolerate its data collection and mining but accept it, because of its many advantages. Your phone is your personal assistant, and the more information about you it is fed, the more it can do for you. Would you care that your data is being collected if Google could use it to make things easier for you?
In April 2020, Cynet launched the world's first Incident Response Challenge to test and reward the skills of incident response professionals. The Challenge consisted of 25 incidents of increasing difficulty, all inspired by real-life scenarios, that required participants to go beyond the textbook solution and think outside the box. Over 2,500 IR professionals competed to be recognized as the top incident responders. Now that the competition is over (though the challenge website remains open for anyone who wants to practice solving the challenges), Cynet has made the detailed solutions available as a free resource for knowledge and inspiration. The thought process and detailed steps for solving each challenge will serve as a training aid and knowledge base for incident responders.