

AI's Next Frontier? An Algorithm for Consciousness

WIRED

Some of the world's most interesting thinkers about thinking think they might've cracked machine sentience. And I think they might be onto something. As a journalist who covers AI, I hear from countless people who seem utterly convinced that ChatGPT, Claude, or some other chatbot has achieved "sentience." The Turing test was aced a while back, yes, but unlike rote intelligence, these things are not so easily pinned down. Large language models will claim to think for themselves, even describe inner torments or profess undying loves, but such statements don't imply interiority.


Enhancing Rumor Detection Methods with Propagation Structure Infused Language Model

Cui, Chaoqun, Li, Siyuan, Ma, Kunkun, Jia, Caiyan

arXiv.org Artificial Intelligence

Pretrained Language Models (PLMs) have excelled in various Natural Language Processing tasks, benefiting from large-scale pretraining and the self-attention mechanism's ability to capture long-range dependencies. However, their performance on social media tasks like rumor detection remains suboptimal. We attribute this to mismatches between pretraining corpora and social texts, inadequate handling of unique social symbols, and pretraining tasks ill-suited to modeling the user engagements implicit in propagation structures. To address these issues, we propose a continued-pretraining strategy called Post Engagement Prediction (PEP) that infuses information from propagation structures into PLMs. PEP trains models to predict root, branch, and parent relations between posts, capturing the interactions of stance and sentiment crucial for rumor detection. We also curate and release a large-scale Twitter corpus, TwitterCorpus (269GB of text), and two unlabeled claim-conversation datasets with propagation structures (UTwitter and UWeibo). Using these resources and the PEP strategy, we train a Twitter-tailored PLM called SoLM. Extensive experiments demonstrate that PEP significantly boosts rumor detection performance across universal and social media PLMs, even in few-shot scenarios. On benchmark datasets, PEP improves baseline models by 1.0-3.7% accuracy, enabling them to outperform current state-of-the-art methods on multiple datasets. SoLM alone, without high-level modules, also achieves competitive results, highlighting the strategy's effectiveness in learning discriminative post-interaction features.
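The relation-prediction idea behind PEP can be sketched as a pair-labeling step: given a conversation's reply tree, derive a root/branch/parent label for each ordered pair of posts, which a classifier head on a PLM would then be trained to predict. The function name, the parent-pointer encoding, and the exact label precedence are assumptions for illustration, not the paper's actual construction.

```python
# Toy labeling step for a PEP-style objective (illustrative sketch, not
# the paper's exact recipe). A conversation tree is given as a dict
# mapping post id -> parent post id (the root maps to None).

def pep_pairs(parent):
    """Yield (a, b, relation) for every ordered pair of distinct posts.

    relation is "parent" if a directly replies to b, "root" if b is the
    source claim of a's thread, "branch" if b lies elsewhere on a's
    reply chain, and "none" otherwise.
    """
    def ancestors(x):
        chain = []
        while parent[x] is not None:
            x = parent[x]
            chain.append(x)      # nearest ancestor first, root last
        return chain

    pairs = []
    for a in parent:
        anc = ancestors(a)
        for b in parent:
            if a == b:
                continue
            if parent[a] == b:
                rel = "parent"   # direct reply
            elif anc and b == anc[-1]:
                rel = "root"     # thread's source claim
            elif b in anc:
                rel = "branch"   # on a's reply chain, but not adjacent
            else:
                rel = "none"
            pairs.append((a, b, rel))
    return pairs
```

These labeled pairs would then supply the supervision signal for the continued-pretraining heads; the stance/sentiment interactions the abstract mentions come "for free" from the post text attached to each pair.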


Structured Object Language Modeling (SoLM): Native Structured Objects Generation Conforming to Complex Schemas with Self-Supervised Denoising

Tavanaei, Amir, Koo, Kee Kiat, Ceker, Hayreddin, Jiang, Shaobai, Li, Qi, Han, Julien, Bouyarmane, Karim

arXiv.org Artificial Intelligence

In this paper, we study the problem of generating structured objects that conform to a complex schema, with intricate dependencies between the different components (facets) of the object. The facets of the object (attributes, fields, columns, properties) can be a mix of short, structured, type-constrained facts and long natural-language descriptions. The object has to be self-consistent between the different facets in the redundant information it carries (relative consistency), while being grounded with respect to world knowledge (absolute consistency). We frame the problem as a Language Modeling problem (Structured Object Language Modeling) and train an LLM to perform the task natively, without requiring instructions or prompt-engineering. We propose a self-supervised denoising method to train the model from an existing dataset of such objects. The input query can be the existing object itself, in which case the model acts as a regenerator (completing, correcting, and normalizing the input), or it can be any unstructured blurb to be structured. We show that self-supervised denoising training provides a strong baseline, and that additional supervised fine-tuning with a small number of human demonstrations leads to further improvement. Experimental results show that the proposed method matches or outperforms prompt-engineered general-purpose state-of-the-art LLMs (Claude 3, Mixtral-8x7B), while being an order of magnitude more cost-efficient.
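A minimal sketch of the self-supervised denoising setup described above: take an existing structured object, apply random corruptions (drop a facet, blank a value), and train a seq2seq LLM to regenerate the clean object from the noisy one. The function name, JSON serialization, and corruption mix are assumptions for illustration; the paper's actual noising scheme may differ.

```python
# Build one (noisy_input, clean_target) training pair for denoising-style
# training of a structured-object regenerator. Illustrative sketch only.
import json
import random

def make_denoising_example(obj, rng, p_drop=0.3, p_blank=0.2):
    """Return (noisy_input, clean_target) as JSON strings.

    Each facet is independently dropped (model must re-infer it),
    blanked (model must fill it), or kept as grounding context.
    """
    noisy = {}
    for key, value in obj.items():
        r = rng.random()
        if r < p_drop:
            continue               # facet deleted entirely
        if r < p_drop + p_blank:
            noisy[key] = ""        # facet present but emptied
        else:
            noisy[key] = value     # facet kept verbatim
    return json.dumps(noisy, sort_keys=True), json.dumps(obj, sort_keys=True)
```

Because the target is always the original object, no human labels are needed; the redundancy between facets is what lets the model learn the relative-consistency constraints the abstract describes.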


LF-3PM: a LiDAR-based Framework for Perception-aware Planning with Perturbation-induced Metric

Chai, Kaixin, Xu, Long, Wang, Qianhao, Xu, Chao, Yin, Peng, Gao, Fei

arXiv.org Artificial Intelligence

Just as humans can become disoriented in featureless deserts or thick fogs, not all environments are conducive to the Localization Accuracy and Stability (LAS) of autonomous robots. This paper introduces an efficient framework designed to enhance LiDAR-based LAS through strategic trajectory generation, known as Perception-aware Planning. Unlike vision-based frameworks, a LiDAR-based one requires different considerations due to unique sensor attributes. Our approach focuses on two main aspects. First, we assess the impact of LiDAR observations on LAS, introducing a perturbation-induced metric to provide a comprehensive and reliable evaluation of LiDAR observations. Second, we aim to improve motion planning efficiency: by creating a Static Observation Loss Map (SOLM) as an intermediary, we decouple the time-intensive evaluation phase from motion planning, significantly accelerating the planning process. In the experimental section, we demonstrate the effectiveness of the proposed metric across various scenes and the characteristics of trajectories guided by different metrics. Ultimately, our framework is tested in a real-world scenario, enabling the robot to actively choose topologies and orientations preferable for localization. The source code is accessible at https://github.com/ZJU-FAST-Lab/LF-3PM.


Forget Covid – is artificial intelligence the real threat to humanity?

#artificialintelligence

The former Google X executive Mo Gawdat is making the rounds with his new book Scary Smart, which casts artificial intelligence as much a force of nature as Covid. Indeed, he sees AI as nothing less than the next evolutionary step on this planet. For Gawdat, it's clear: the capacity of these machines to learn from data and experience is on an exponential curve (which doesn't just gently ascend but eventually shoots into the sky). At some singular point – probably aided by the unimaginable calculating power of quantum computing, and apparently by the end of the decade – we will be in the presence of massively superior beings. Gawdat wants us – indeed, warns us – to think of them as "our children", with a voracious appetite for learning from their environment.


A Sexy Theory of Consciousness Gets All Up in Your Feelings

WIRED

Neuroscience should be the sexiest of the sciences. To study it is to study the very stuff that makes stuff studiable in the first place. Then you look at an fMRI scan and realize it's all, actually, amazingly boring. This bit lights up when that thing happens--so what? A functional map of the brain tells us almost nothing about what it feels like to be alive. Even certain neuroscientists have an axon to grind with this "objective," "cognitivist" way of thinking.


Consciousness Is Just a Feeling - Issue 98: Mind

Nautilus

When he was a boy, Mark Solms obsessed over big existential questions. What happens when I die? What makes me who I am? He went on to study neuroscience but soon discovered that neuropsychology had no patience for such open-ended questions about the psyche. So Solms did something unheard of for a budding scientist. He reclaimed Freud as a founding father of neuroscience and launched a new field, neuropsychoanalysis. Solms had one other obstacle in his path. Born in Namibia, where his father worked for a South African diamond mining company, he grew up under apartheid in South Africa. Solms later worked at a hospital in Soweto, where a military occupation tried to clamp down on protesters. "Once you reach the end of your studies, you're required to join the very same army whose victims I was looking after," he told me.