Collaborating Authors

 sadeghi


The Download: political chatbot persuasion, and gene editing adverts

MIT Technology Review

Plus: The metaverse's future looks murkier than ever. Chatting with a politically biased AI model is more effective than political ads at nudging both Democrats and Republicans to support presidential candidates of the opposing party, new research shows. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate--in fact, the researchers found, the most persuasive models said the most untrue things. The findings are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. The fear that elections could be overwhelmed by realistic, AI-generated fake media has gone mainstream--and for good reason.


The ads that sell the sizzle of genetic trait discrimination

MIT Technology Review

A startup's ads for controversial embryo tests hit the New York City subway. One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com. Inside the station, every surface was wrapped with more ads--babies on turnstiles, on staircases, on banners overhead. Behind the campaign is Nucleus, a young, attention-seeking genetic software company that says it can analyze genetic tests on IVF embryos to score them for 2,000 traits and disease risks, letting parents pick some and reject others. This is possible, the company says, because of how our DNA shapes us, sometimes powerfully. To its founder's mind, an ad for embryo screening should be as accessible as one for makeup.


The race to make the perfect baby is creating an ethical mess

MIT Technology Review

A new field of science claims to be able to predict aesthetic traits, intelligence, and even moral character in embryos. Is this the next step in human evolution or something more dangerous? Consider, if you will, the translucent blob in the eye of a microscope: a human blastocyst, the biological specimen that emerges just five days or so after a fateful encounter between egg and sperm. This bundle of cells, about the size of a grain of sand pulled from a powdery white Caribbean beach, contains the coiled potential of a future life: 46 chromosomes, thousands of genes, and roughly six billion base pairs of DNA--an instruction manual to assemble a one-of-a-kind human. Now imagine a laser pulse snipping a hole in the blastocyst's outermost shell so a handful of cells can be suctioned up by a microscopic pipette. This is the moment, thanks to advances in genetic sequencing technology, when it becomes possible to read virtually that entire instruction manual. An emerging field of science seeks to use the analysis pulled from that procedure to predict what kind of person that embryo might become. Some parents turn to these tests to avoid passing on devastating genetic disorders that run in their families. A much smaller group, driven by dreams of Ivy League diplomas or attractive, well-behaved offspring, is willing to pay tens of thousands of dollars to optimize for intelligence, appearance, and personality. Some of the most eager early boosters of this technology are members of the Silicon Valley elite, including tech billionaires like Elon Musk, Peter Thiel, and Coinbase CEO Brian Armstrong. Embryo selection is less like a build-a-baby workshop and more like a store where parents can shop for their future children from several available models--complete with stat cards. But customers of the companies emerging to provide it to the public may not be getting what they're paying for. Genetics experts have been highlighting the potential deficiencies of this testing for years.


From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments

Trapp, Anna, Sadeghi, Mersedeh, Vogelsang, Andreas

arXiv.org Artificial Intelligence

Abstract--Explainability is increasingly seen as an essential feature of rule-based smart environments. While counterfactual explanations, which describe what could have been done differently to achieve a desired outcome, are a powerful tool in eXplainable AI (XAI), no established methods exist for generating them in these rule-based domains. In this paper, we present the first formalization and implementation of counterfactual explanations tailored to this domain, implemented as a plugin that extends an existing explanation engine for smart environments. We conducted a user study (N=17) to evaluate our generated counterfactuals against traditional causal explanations. The results show that user preference is highly contextual: causal explanations are favored for their linguistic simplicity and in time-pressured situations, while counterfactuals are preferred for their actionable content, particularly when a user wants to resolve a problem. Our work contributes a practical framework for a new type of explanation in smart environments and provides empirical evidence to guide the choice of when each explanation type is most effective. Smart environments, such as smart homes, offices, and buildings, integrate sensor-enabled devices to support users in decision-making, monitoring, and managing abnormal situations [1], [2]. The rapid adoption of these environments is fueled by advances in the Internet of Things (IoT) and Artificial Intelligence (AI), decreasing device costs, and improved system integration [3]-[5]. Rule-based systems are a prevalent approach for implementing automation in smart environments, executing predefined rules when certain conditions are met [6], [7].
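
To make the idea concrete, here is a minimal sketch of counterfactual generation over condition-action rules. This is not the paper's algorithm: the rule encoding, the brute-force search over fact changes, and all names (`RULES`, `counterfactuals`, the example facts) are illustrative assumptions.

```python
# Sketch only: counterfactuals for a toy rule-based smart environment.
from itertools import combinations

# Each rule: (name, conditions that must all hold, action it triggers).
RULES = [
    ("night_light", {"motion": True, "daylight": False}, "light_on"),
    ("eco_mode",    {"occupied": False},                 "heating_off"),
]

def fired_actions(state):
    """Actions produced by every rule whose conditions match the state."""
    return {action for _, conds, action in RULES
            if all(state.get(k) == v for k, v in conds.items())}

def counterfactuals(state, desired_action, max_changes=2):
    """Minimal sets of fact changes under which desired_action would fire.

    Candidate values come from the conditions of rules that yield the
    desired action; smaller change sets are tried first, so the search
    stops at the first size that succeeds (a minimal foil).
    """
    candidates = {k: v
                  for _, conds, action in RULES if action == desired_action
                  for k, v in conds.items() if state.get(k) != v}
    for size in range(1, max_changes + 1):
        found = []
        for keys in combinations(candidates, size):
            foil = dict(state, **{k: candidates[k] for k in keys})
            if desired_action in fired_actions(foil):
                found.append({k: (state.get(k), candidates[k]) for k in keys})
        if found:
            return found
    return []

# "Why is the light off?" -> the foil: had it not been daylight, it would be on.
state = {"motion": True, "daylight": True, "occupied": True}
print(counterfactuals(state, "light_on"))   # [{'daylight': (True, False)}]
```

Each returned change set reads directly as a counterfactual of the kind the paper studies: "had daylight been off, the light would have turned on."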


Two charged over US tech used in deadly drone attack on soldiers in Jordan

Al Jazeera

An Iranian-American citizen and a Swiss-Iranian dual national have been arrested and charged by United States authorities with allegedly exporting sensitive technology to Iran that was used in a deadly drone attack on American forces based in Jordan. The Islamic Resistance in Iraq, an umbrella group of Iran-backed fighters, is alleged to have carried out the drone attack that killed three US soldiers and wounded 47 others at a US military outpost in Jordan, near the Syrian border, in January. Federal prosecutors in Boston on Monday charged 38-year-old Mohammad Abedininajafabadi, known as Mohammad Abedini, the co-founder of an Iran-based company, and Mahdi Sadeghi, 42, an employee of Massachusetts-based semiconductor manufacturer Analog Devices, with conspiring to violate US export laws. Abedini, a dual citizen of Switzerland and Iran, was arrested in Milan, Italy, at the request of the US government, which will seek his extradition. Sadeghi, an Iranian-born naturalised US citizen who lives in Natick, Massachusetts, was also arrested.


A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification

Sadeghi, Zahra, Matwin, Stan

arXiv.org Artificial Intelligence

In the era of deep learning and the rapid advancement of powerful Artificial Intelligence (AI) models consisting of numerous layers and millions of parameters, the demand for understanding the decision-making process of black-box models is on the rise. Explainable AI is a growing trend that seeks to uncover the inner workings of AI systems through computational analysis, shedding light on the decision-making process. It has been applied across a variety of data types, such as video [1], text [2], AIS [3], causal [4], and genomic data [5], and across applications such as art [6], medicine [7], finance [8], and education [9]. Explainability methods can be broadly divided into model-agnostic (model-free) and model-specific approaches. Model-agnostic methods can be applied to any trained machine learning model regardless of the learning mechanism and model architecture. Rule-based methods [10] and sensitivity analysis are two common approaches in this category.
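
As a concrete, minimal instance of model-agnostic sensitivity analysis on digit classification, the sketch below uses permutation importance from scikit-learn: shuffle one input feature at a time and measure how much the model's score degrades. This is only one simple global measure among the many methods such a review compares; the model choice and hyperparameters here are arbitrary assumptions.

```python
# Sketch: a simple global sensitivity measure on the scikit-learn digits data.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 8x8 pixel intensities, 64 features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global sensitivity of accuracy to each input pixel: permute one feature
# at a time on the test set and record the drop in score.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"pixel {i} (row {i // 8}, col {i % 8}): "
          f"mean importance {result.importances_mean[i]:.4f}")
```

Because the procedure only queries the trained model's predictions, it applies unchanged to any classifier, which is exactly what makes it model-agnostic.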


Forward-Backward Knowledge Distillation for Continual Clustering

Sadeghi, Mohammadreza, Wang, Zihan, Armanfard, Narges

arXiv.org Artificial Intelligence

Unsupervised Continual Learning (UCL) is a burgeoning field in machine learning, focusing on enabling neural networks to sequentially learn tasks without explicit label information. Catastrophic Forgetting (CF), where models forget previously learned tasks upon learning new ones, poses a significant challenge in continual learning, especially in UCL, where labels are not accessible. CF mitigation strategies, such as knowledge distillation and replay buffers, often face memory inefficiency and privacy issues. Although current research in UCL has endeavored to refine data representations and address CF in streaming data contexts, there is a noticeable lack of algorithms specifically designed for unsupervised clustering. To fill this gap, we introduce the concept of Unsupervised Continual Clustering (UCC) and propose Forward-Backward Knowledge Distillation for unsupervised Continual Clustering (FBCC) to counteract CF within the context of UCC. FBCC employs a single continual learner (the "teacher") with a cluster projector, along with multiple student models, to address the CF issue. The proposed method consists of two phases: Forward Knowledge Distillation, where the teacher learns new clusters while retaining knowledge from previous tasks with guidance from specialized student models, and Backward Knowledge Distillation, where a student model mimics the teacher's behavior to retain task-specific knowledge, aiding the teacher in subsequent tasks. FBCC marks a pioneering approach to UCC, demonstrating enhanced performance and memory efficiency in clustering across various tasks, outperforming the application of clustering algorithms to the latent space of state-of-the-art UCL algorithms.
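
The abstract does not give the objectives, but the core mechanism it describes, one model mimicking another's soft cluster assignments, can be sketched as a standard distillation loss. Everything below (the KL direction, the temperature, the toy logits) is an assumption for illustration, not FBCC's actual formulation.

```python
# Sketch: a generic teacher-student distillation loss over cluster assignments.
import torch
import torch.nn.functional as F

def cluster_distillation_loss(student_logits, teacher_logits, tau=2.0):
    """KL(teacher || student) between softened cluster-assignment distributions."""
    t = F.softmax(teacher_logits / tau, dim=1)      # teacher's soft assignments
    s = F.log_softmax(student_logits / tau, dim=1)  # student's log-probabilities
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

# Backward-phase intuition: a student mimics the (frozen) teacher on the
# current task, preserving task-specific knowledge for later tasks.
student_logits = torch.randn(32, 10, requires_grad=True)  # batch x clusters
with torch.no_grad():
    teacher_logits = torch.randn(32, 10)
loss = cluster_distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```

In the forward phase the roles would flip, with the students' retained knowledge constraining the teacher as it learns new clusters; the abstract leaves those loss terms unspecified.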


Deep Learning Helps Predict Traffic Crashes Before They Happen - Liwaiwai

#artificialintelligence

Today's world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. For many of our road-related advancements--GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs--our safety measures haven't quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B. To get ahead of the uncertainty inherent to crashes, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence (QCAI) developed a deep learning model that predicts very high-resolution crash risk maps. Fed on a combination of historical crash data, road maps, satellite imagery, and GPS traces, the risk maps describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes. Typically, these types of risk maps are captured at much lower resolutions that hover around hundreds of meters, which means glossing over crucial details since the roads become blurred together.
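
The underlying formulation, a map of expected crash counts per grid cell, and the reason resolution matters can both be shown with a toy rasterization. The synthetic data and the simple histogram baseline below are assumptions for illustration; the CSAIL/QCAI system is a deep network combining several input layers.

```python
# Toy illustration of grid-based crash risk maps at two resolutions.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical crash coordinates (meters) in a 1 km x 1 km area,
# concentrated along a north-south "road" near x = 500 m.
crashes = np.column_stack([rng.normal(500, 8, 300), rng.uniform(0, 1000, 300)])

def risk_map(points, area=1000.0, cell=5.0):
    """Expected crash counts per cell: here, a 2-D histogram of past crashes."""
    bins = int(area / cell)
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, area], [0, area]])
    return grid

fine = risk_map(crashes, cell=5.0)      # high resolution: the road stands out
coarse = risk_map(crashes, cell=200.0)  # low resolution: detail blurs away
print("fine grid peak cell count:  ", fine.max())
print("coarse grid peak cell count:", coarse.max(), "(road blurred into one cell)")
```

At a 200 m cell size the entire road falls inside a single column of cells, which is the "roads become blurred together" problem the high-resolution model is meant to avoid.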


Predicting Traffic Crashes Before They Happen With Artificial Intelligence

#artificialintelligence

A deep model was trained on historical crash data, road maps, satellite imagery, and GPS to enable high-resolution crash maps that could lead to safer roads. Today's world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. For many of our road-related advancements -- GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs -- our safety measures haven't quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B. To get ahead of the uncertainty inherent to crashes, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash risk maps. Fed on a combination of historical crash data, road maps, satellite imagery, and GPS traces, the risk maps describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes.

