Collaborating Authors


AI Weekly: Algorithms, accountability, and regulating Big Tech


This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation. The end of liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word "algorithm" alone was used more than 50 times. Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.

Artificial Intelligence (AI) And Countering Hybrid Warfare - OpEd - Eurasia Review


Artificial Intelligence (AI) is the use of advanced technology to create systems capable of performing complex tasks that would otherwise require human intelligence. According to a project document titled Countering Hybrid Warfare from the Multinational Capability Development Campaign (MCDC), hybrid warfare is the synchronized use of multiple instruments of power, tailored to specific vulnerabilities across the full spectrum of societal functions, in order to achieve synergistic effects. Hybrid warfare blends intelligence systems and advanced technologies with irregular fighting styles that disregard state structures and compliance with the laws of armed conflict. The term warfare captures the adversarial, enduring, serious, and hostile nature of the challenge. It also describes the ability of hybrid aggressors to create war-like effects and consequences by weaponizing non-military means. Hybrid warfare techniques are used to create conditions that make future conventional aggression more effective.

A hip-fired electromagnetic anti-drone rifle


A new anti-drone kit billed as the Swiss Army Knife of drone defenses just debuted from French company CERBAIR. The drone detection and mitigation tool--the business end of which is a hip-fired electromagnetic rifle--is emblematic of a growing urgency to develop security tools for guarding against rogue drone attacks. The growing prevalence and sophistication of drones have created a serious obstacle for law enforcement. Commercially available drones can be used to threaten government officials and carry out attacks during public gatherings and events. A joint multi-agency threat assessment issued prior to then-incoming President Biden's inauguration listed drones as a potential threat.

Causal Discovery of a River Network from its Extremes Machine Learning

Causal inference for extremes has only been considered during the past few years. The journal Extremes convincingly documents that observations of climate extremes such as floods, hurricanes, and droughts, as well as man-made catastrophes like industrial fires, terrorist attacks, and crashes of financial markets, have been a focus of research. On the other hand, assessing the causality of risks is a fundamental problem. Rare events are often interconnected; for example, floods propagate through a river network, and credit markets might fail due to endogenous systemic risk propagation. Hence, it is necessary to understand not only the dependencies between rare events but also their causal structure.

Hitting the Books: What do we want our AI-powered future to look like?


Once the shining city on a hill that the rest of the world looked to for leadership and guidance, America's moral high ground has steadily eroded in recent decades -- and rapidly accelerated since Trump's corrupt, self-dealing tenure in the White House began. Our corporations, and the technologies they develop, are certainly no better. Amazon treats its workers like indentured servants at best, Facebook algorithms actively promote genocide overseas and fascism here in the States, and Google doesn't even try to live up to its own maxim of "don't be evil" anymore. In her upcoming book, The Power of Ethics: How to Make Good Choices in a Complicated World, Susan Liautaud, Chair of Council of the London School of Economics and Political Science, lays out an ambitious four-step plan to recalibrate our skewed moral compass, illustrating how effective ethical decision-making can be used to counter the damage done by those in power and create a better, fairer, and more equitable world for everyone. In the excerpt below, Liautaud explores the "blurring boundaries" of human-AI relations and how we can ensure that this emerging technology is used for humanity's benefit rather than just becoming another Microsoft Tay.

A Distributional Approach to Controlled Text Generation Artificial Intelligence

We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LMs). This view allows us to define, in a single formal framework, both "pointwise" and "distributional" constraints over the target LM -- to our knowledge, this is the first approach with such generality -- while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train the target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints, showing the advantages of our approach over a set of baselines in terms of obtaining a controlled LM that balances constraint satisfaction with divergence from the initial LM (GPT-2). We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Through an ablation study, we show the effectiveness of our adaptive technique in obtaining faster convergence.
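As a rough illustration of the pointwise-constraint case, the sketch below builds the optimal target distribution P(x) proportional to a(x)·b(x), where a is the base LM and b is a binary constraint, and checks its KL divergence from a. The four-sequence "LM", probabilities, and constraint are invented stand-ins for illustration; the paper's actual method additionally trains an autoregressive LM toward this target via an adaptive Policy Gradient variant, which is not sketched here.

```python
import math

# Toy base "LM" a(x): a distribution over four sequences
# (a stand-in for GPT-2; values are invented for illustration).
base = {"the cat sat": 0.4, "the dog ran": 0.3,
        "a cat slept": 0.2, "a dog barked": 0.1}

# Pointwise constraint b(x) in {0, 1}: the sequence must mention "cat".
def b(x):
    return 1.0 if "cat" in x else 0.0

# Optimal EBM target: P(x) proportional to a(x) * b(x). Normalizing
# yields the unique distribution that satisfies the constraint while
# minimizing KL divergence from the base distribution a.
Z = sum(p * b(x) for x, p in base.items())
target = {x: p * b(x) / Z for x, p in base.items()}

# KL(P || a), summed over the support of P.
kl = sum(q * math.log(q / base[x]) for x, q in target.items() if q > 0)
```

Under this toy setup the constrained sequences keep their relative base probabilities (2/3 and 1/3 here), which is exactly the minimal-KL behavior the EBM formulation guarantees.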

See how drones gave Azerbaijan upper hand


The Azerbaijan defense ministry has released videos that it claims show drone attacks on the Armenian military in the Nagorno-Karabakh region earlier this month. Videos of the drone strikes have been posted on Azerbaijan's defense ministry website and social media every day. Since September, Azerbaijan has deployed several different types of missile-firing drones in the conflict with Armenia. Missile-firing drones are now produced in many countries and have been used in battles, including a U.S. drone strike that killed Iran's top general Qassem Soleimani at Baghdad airport last January. Since the September 11 terrorist attacks, unmanned combat weapons of various types have been increasingly used by the U.S. military in its war on terror.

Candace Owens slams intelligence agencies over allowing domestic terror to run rampant

FOX News

Trump 2020 communications director Tim Murtaugh weighs in on 'America's News HQ.' Conservative activist Candace Owens on Sunday leveled harsh criticism against U.S. intelligence agencies for their supposed inability to root out domestic terrorism while simultaneously being able to "take out" terrorists overseas. "We're supposed to believe that our intelligence agencies can track and take out an Iranian terrorist (Soleimani) overnight but they can't manage to get to the root of ANTIFA and black lives matter-- well-funded domestic terrorist cells that have been operating unchecked for YEARS," she tweeted. Iranian Gen. Qasem Soleimani, the head of the Islamic Revolutionary Guard Corps Quds Forces, was killed in a U.S. drone strike in Baghdad, Iraq on Jan. 3. Administration officials said the strike, authorized by President Trump, was conducted to deter imminent attacks on U.S. interests. Owens' comments follow an evening of unrest that came after the president's supporters were purportedly attacked at the so-called Million MAGA March in Washington, D.C. on Saturday. Many were quick to condemn the media's apparent lack of interest in covering the violence directed at supporters of the president.



Advanced technologies are increasingly used in criminal activities. Identifying, preventing, and fighting modern crime demands the implementation of pioneering technologies and methods. The EU-funded AIDA project is focusing on cybercrime and terrorism by addressing specific issues faced by law enforcement agencies (LEAs), using pioneering machine learning and artificial intelligence methods. The project will deliver a descriptive and predictive data analytics platform and related tools to prevent, identify, analyse, and combat cybercrime and terrorist activities. The platform builds on fundamental Big Data analytics technology, augmented with AI and deep learning techniques and expanded and tailored with additional crime-specific capabilities and tools.