Most Shocking Deepfake Videos Of 2021


Only, it was a deepfake. So was the video of Donald Trump taunting Belgium for remaining in the Paris climate agreement, and Barack Obama's public service announcement posted by Buzzfeed. These striking examples of deepfakes are the 21st century's answer to Photoshopped images and videos. Deepfakes, a form of synthetic media, use artificial intelligence (AI) -- specifically deep learning -- to replace a person in an image or video with someone else. One reason deepfake technology is so widely applied to celebrities is that these personalities have a large number of pictures available on the internet for AI models to train and learn from.

Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges Artificial Intelligence

When 5G began its commercialisation journey around 2020, discussion of the vision for 6G also surfaced. Researchers expect 6G to offer higher bandwidth, broader coverage, greater reliability and energy efficiency, lower latency, and, more importantly, an integrated "human-centric" network system powered by artificial intelligence (AI). Such a 6G network will lead to an enormous number of automated decisions being made every second. These decisions can range widely, from network resource allocation to collision avoidance for self-driving cars. However, the risk of losing control over decision-making may increase as high-speed, data-intensive AI decision-making moves beyond designers' and users' comprehension. Promising explainable AI (XAI) methods can mitigate such risks by enhancing the transparency of the black-box AI decision-making process. This survey paper highlights the need for XAI in every aspect of the upcoming 6G age, including 6G technologies (e.g., intelligent radio, zero-touch network management) and 6G use cases (e.g., Industry 5.0). Moreover, we summarise the lessons learned from recent attempts and outline important research challenges in applying XAI to building 6G systems. This research aligns with goals 9, 11, 16, and 17 of the United Nations Sustainable Development Goals (UN-SDG): promoting innovation and building infrastructure, sustainable and inclusive human settlements, advancing justice and strong institutions, and fostering partnership at the global level.

Trustworthy AI: From Principles to Practices Artificial Intelligence

Fast-developing artificial intelligence (AI) technology has enabled various applied systems to be deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these aspects. To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, noting the need for a paradigm shift towards comprehensively trustworthy AI systems.

On the Opportunities and Risks of Foundation Models Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

Deepfake: A new formula for Phishing?


Phishing is the practice of making one site appear to be another, deceiving users into mistaking the attacker's site for the one they intend to use. It has enabled countless fraudulent transactions and other criminal activities. Now imagine what happens if the person you think you are looking at in an online video is not that person at all. It is a digitally rendered copy of the person -- and this time it is not just a still image but a moving, talking video with features almost indistinguishable from the person it is supposed to be. Read on to find out more about what I'm talking about.

The AI Index 2021 Annual Report Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.

Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020) Artificial Intelligence

The TriRhenaTech alliance presents a collection of accepted papers from the cancelled tri-national 'Upper-Rhine Artificial Intelligence Symposium' planned for 13th May 2020 in Karlsruhe. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.

From ImageNet to Image Classification: Contextualizing Progress on Benchmarks Machine Learning

Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline. In this work, we use human studies to investigate the consequences of employing such a pipeline, focusing on the popular ImageNet dataset. We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset---including the introduction of biases that state-of-the-art models exploit. Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for. Finally, our findings emphasize the need to augment our current model training and evaluation toolkit to take such misalignments into account. To facilitate further research, we release our refined ImageNet annotations at

What are deepfakes – and how can you spot them?


Have you seen Barack Obama call Donald Trump a "complete dipshit", or Mark Zuckerberg brag about having "total control of billions of people's stolen data", or witnessed Jon Snow's moving apology for the dismal ending to Game of Thrones? Answer yes and you've seen a deepfake. The 21st century's answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician's mouth, star in your favourite movie, or dance like a pro? Then it's time to make a deepfake.
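The deep-learning face swap behind most deepfakes can be sketched in a few lines: a single shared encoder learns identity-agnostic features (pose, expression), while a separate decoder per identity reconstructs that identity's face; swapping means encoding person A's face and decoding it with person B's decoder. The sketch below is a minimal illustration of that architecture, assuming made-up layer sizes and random untrained weights in place of a real trained network.

```python
import numpy as np

# Minimal sketch of the classic deepfake autoencoder face swap.
# One shared encoder, one decoder per identity; all dimensions
# and weights here are illustrative assumptions, not a trained model.

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64    # flattened 64x64 grayscale face (assumption)
LATENT_DIM = 128      # size of the shared latent code (assumption)

def layer(in_dim, out_dim):
    # Random weights stand in for parameters a real system would train.
    return rng.standard_normal((in_dim, out_dim)) * 0.01

W_enc = layer(FACE_DIM, LATENT_DIM)     # shared encoder for both identities
W_dec_a = layer(LATENT_DIM, FACE_DIM)   # decoder trained on person A's faces
W_dec_b = layer(LATENT_DIM, FACE_DIM)   # decoder trained on person B's faces

def encode(face):
    # Map a face to an identity-agnostic latent code (pose, expression).
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    # Render a face in the decoder's identity from the latent code.
    return latent @ W_dec

def swap(face_of_a):
    # The swap: A's expression and pose, rendered with B's decoder.
    return decode(encode(face_of_a), W_dec_b)

face_a = rng.standard_normal(FACE_DIM)
swapped = swap(face_a)
print(swapped.shape)  # (4096,)
```

During training, each decoder is optimized to reconstruct only its own identity's faces from the shared latent space, which is what lets the latent code carry expression but not identity; that training loop is omitted here for brevity.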

Hackers or state actors could use 'deepfake' medium with devastating consequences

The Japan Times

WASHINGTON - If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don't adjust your television set -- you may just be witnessing the future of "fake news." "Deepfake" videos that manipulate reality are becoming more sophisticated due to advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences. As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors. "We're not quite to the stage where we are seeing deepfakes weaponized, but that moment is coming," said Robert Chesney, a University of Texas law professor who has researched the topic. Chesney argues that deepfakes could add to the current turmoil over disinformation and influence operations.