48aedb8880cab8c45637abc7493ecddd-AuthorFeedback.pdf

Neural Information Processing Systems

We would like to thank each of the reviewers for reading our manuscript and providing very useful feedback. Below, we address the key points in detail. Calculating Eq. 6 requires evaluating the ELBO bound on each edge. Moreover, vGraph is efficient and scalable compared to classical community-detection methods. We have updated the manuscript to make this clearer.



Want a Different Kind of Work Trip? Try a Robot Hotel

WIRED

Upon arrival at Japan's Henn na Hotel, you are greeted by a pair of receptionists nodding from behind the front desk as you check in at a tablet. A quiet grace emanates from their serene smiles, confident gaze, and perfect porcelain skin. Say "good evening," and they may blink. Ask for the weather report, and they may reply, "Tomorrow's weather is fine and 25C." They wear pristine white uniforms, blue silk scarves, and white caps that sit perfectly atop their glossy black bobs.


Information Structure in Mappings: An Approach to Learning, Representation, and Generalisation

Conklin, Henry

arXiv.org Artificial Intelligence

Despite the remarkable success of large-scale neural networks, we still lack a unified notation for thinking about and describing their representational spaces. We lack methods to reliably describe how their representations are structured, how that structure emerges over training, and what kinds of structures are desirable. This thesis introduces quantitative methods for identifying systematic structure in a mapping between spaces, and leverages them to understand how deep-learning models learn to represent information, what representational structures drive generalisation, and how design decisions condition the structures that emerge. To do this I identify structural primitives present in a mapping, along with information-theoretic quantifications of each. These allow us to analyse learning, structure, and generalisation across multi-agent reinforcement learning models, sequence-to-sequence models trained on a single task, and Large Language Models. I also introduce a novel, performant approach to estimating the entropy of a vector space, which allows this analysis to be applied to models ranging in size from 1 million to 12 billion parameters. The experiments here shed light on how large-scale distributed models of cognition learn, while allowing us to draw parallels between those systems and their human analogs. They show how the structures of language, and the constraints that give rise to them, in many ways parallel the kinds of structures that drive the performance of contemporary neural networks.
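The abstract does not describe the thesis's own entropy estimator. As an illustrative stand-in, a standard k-nearest-neighbour (Kozachenko-Leonenko) differential-entropy estimator over a set of representation vectors can be sketched as follows; the function name and sample data are purely for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln, digamma

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko differential-entropy estimate (in nats)
    for an (n, d) array of points x, using the k-th nearest neighbour."""
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th nearest neighbour (index 0 is the point itself)
    r = tree.query(x, k=k + 1)[0][:, k]
    # log volume of the d-dimensional unit ball
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))

rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 2))          # 2-D standard Gaussian
est = knn_entropy(samples)
true_h = 0.5 * 2 * np.log(2 * np.pi * np.e)   # analytic value, ~2.84 nats
```

Estimators of this family scale roughly as the cost of a k-d tree query, which is one reason nearest-neighbour approaches are attractive when the vectors come from models with millions or billions of parameters.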


Conversation Kernels: A Flexible Mechanism to Learn Relevant Context for Online Conversation Understanding

Agarwal, Vibhor, Gupta, Arjoo, De, Suparna, Sastry, Nishanth

arXiv.org Artificial Intelligence

Understanding online conversations has attracted research attention with the growth of social networks and online discussion forums. Content analysis of posts and replies in online conversations is difficult because each individual utterance is usually short and may implicitly refer to other posts within the same conversation. Understanding an individual post therefore requires capturing the conversational context and dependencies between different parts of a conversation tree, and then encoding those dependencies between posts and comments/replies into the language model. To this end, we propose a general-purpose mechanism to discover appropriate conversational context for various aspects of an online post in a conversation, such as whether it is informative, insightful, interesting or funny. Specifically, we design two families of Conversation Kernels, which explore different parts of the neighborhood of a post in the tree representing the conversation and thereby build relevant conversational context appropriate for each task being considered. We apply our method to conversations crawled from slashdot.org.
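The abstract does not specify how the two kernel families traverse the conversation tree. A minimal sketch of the general idea, with hypothetical function names and a toy tree, might gather context from a post's ancestors in one case and from its siblings in the other:

```python
# A conversation tree as a dict: post id -> (parent id, text).
# These two traversals are illustrative only; the paper's actual
# kernel definitions may weight or select neighbourhoods differently.

def ancestor_context(conv, post_id, depth=2):
    """Walk up the reply chain, collecting up to `depth` ancestor posts."""
    context = []
    parent = conv[post_id][0]
    while parent is not None and len(context) < depth:
        context.append(conv[parent][1])
        parent = conv[parent][0]
    return context

def sibling_context(conv, post_id):
    """Collect posts that reply to the same parent as `post_id`."""
    parent = conv[post_id][0]
    return [text for pid, (p, text) in conv.items()
            if p == parent and pid != post_id]

conv = {
    "a": (None, "original post"),
    "b": ("a", "first reply"),
    "c": ("a", "second reply"),
    "d": ("b", "reply to first reply"),
}
```

For example, `ancestor_context(conv, "d")` collects the chain of posts that `d` (directly or indirectly) replies to, while `sibling_context(conv, "c")` collects the other replies to the same parent.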


Characterizing AI Agents for Alignment and Governance

Kasirzadeh, Atoosa, Gabriel, Iason

arXiv.org Artificial Intelligence

The creation of effective governance mechanisms for AI agents requires a deeper understanding of their core properties and how these properties relate to questions surrounding the deployment and operation of agents in the world. This paper provides a characterization of AI agents that focuses on four dimensions: autonomy, efficacy, goal complexity, and generality. We propose different gradations for each dimension, and argue that each dimension raises unique questions about the design, operation, and governance of these systems. Moreover, we draw upon this framework to construct "agentic profiles" for different kinds of AI agents. These profiles help to illuminate cross-cutting technical and non-technical governance challenges posed by different classes of AI agents, ranging from narrow task-specific assistants to highly autonomous general-purpose systems. By mapping out key axes of variation and continuity, this framework provides developers, policymakers, and members of the public with the opportunity to develop governance approaches that better align with collective societal goals.
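The paper's concrete gradations are not given in this abstract. One hypothetical way to encode an "agentic profile" over the four dimensions, with purely illustrative 0-3 scales and a toy composite score, is:

```python
from dataclasses import dataclass

# The gradation scales and the composite score below are illustrative
# inventions, not the paper's actual levels or methodology.
@dataclass
class AgenticProfile:
    autonomy: int         # 0 = fully supervised .. 3 = self-directed
    efficacy: int         # 0 = advisory only .. 3 = acts directly on the world
    goal_complexity: int  # 0 = single fixed task .. 3 = open-ended goals
    generality: int       # 0 = narrow domain .. 3 = general purpose

    def governance_load(self) -> int:
        """Crude composite: higher totals suggest heavier oversight needs."""
        return (self.autonomy + self.efficacy
                + self.goal_complexity + self.generality)

narrow_assistant = AgenticProfile(autonomy=0, efficacy=1,
                                  goal_complexity=0, generality=0)
autonomous_system = AgenticProfile(autonomy=3, efficacy=3,
                                   goal_complexity=3, generality=3)
```

The point of such a profile is comparative: a narrow task-specific assistant and a highly autonomous general-purpose system occupy very different positions along the same four axes, which is what lets cross-cutting governance questions be posed uniformly.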


Elon Musk wants to use AI to run US gov't, but experts say 'very bad' idea

Al Jazeera

Is Elon Musk planning to use artificial intelligence to run the US government? That seems to be his plan, but experts say it is a "very bad idea". Musk has fired tens of thousands of federal government employees through his Department of Government Efficiency (DOGE), and he reportedly requires the remaining workers to send the department a weekly email with five bullet points describing what they accomplished that week. Since that will no doubt flood DOGE with hundreds of thousands of such emails, Musk is relying on artificial intelligence to process the responses and help determine who should remain employed. Part of that plan, reportedly, is also to replace many government workers with AI systems.


Can We Program Our Cells?

#artificialintelligence

Making living cells blink fluorescently like party lights may sound frivolous. But the demonstration that it's possible could be a step toward someday programming our body's immune cells to attack cancers more effectively and safely. That's the promise of the field called synthetic biology. While molecular biologists strip cells down to their component genes and molecules to see how they work, synthetic biologists tinker with cells to get them to perform new feats -- discovering new secrets about how life works in the process. Listen on Apple Podcasts, Spotify, Google Podcasts, Stitcher, TuneIn or your favorite podcasting app, or you can stream it from Quanta.

Steve Strogatz (00:03): I'm Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in science and math today. In this episode, we're going to be talking about synthetic biology. Simply put, we could say that synthetic biology is a fusion of biology, especially molecular biology, and engineering. The distinctive thing about it is that it treats cells as programmable devices. It's a kind of tinker-toy approach that builds circuits, not out of wires and switches like we're used to, but out of biological components, like proteins and genes. But the approach also holds promise for illuminating how life works at the deepest level. It's one thing to strip cells apart to see how they work. But it's another thing to tinker with cells to try to get them to perform new tricks, which is something that my guest, Michael Elowitz, does. For example, a while back, he engineered cells to blink on and off like Christmas lights. Michael Elowitz is a professor of biology and biological engineering at Caltech and the Howard Hughes Medical Institute.

Elowitz: It's great to be here.

Strogatz (01:53): So let's talk about the foundational idea of synthetic biology. I mentioned it in the intro -- that living cells can be thought of as programmable devices. In synthetic biology, it seems like you guys have this philosophy that you can learn about cells by building functionality into cells yourselves.


AI is cognitive automation, not cognitive autonomy

#artificialintelligence

The way we think about AI is shaped by works of science-fiction. In the big picture, fiction provides the conceptual building blocks we use to make sense of the long-term significance of "thinking machines" for our civilization and even our species. Zooming in, fiction provides the familiar narrative frame leveraged by the media coverage of new AI-powered product releases. As a result, the dominant view in the popular imagination today is that AI is about creating artificial minds, agents with a will of their own. These agents, since they possess a similar kind of autonomy as their human creators, may decide to pursue their own goals, and eventually turn against humans.


The Power Of AI To Take On Tasks Traditionally Associated With "Knowledge Work" - AI Summary

#artificialintelligence

ChatGPT is a new AI tool that can tell stories and write code. It has the potential to take over certain roles traditionally held by humans, such as copywriting, answering customer service inquiries, writing news reports, and creating legal documents. As AI continues to improve, more and more current jobs will be threatened by automation. But AI presents opportunities as well and will create new jobs and different kinds of organizations. The question isn't whether AI will be good enough to take on more cognitive tasks but rather how we'll adjust.