collaborator
- North America > United States (0.04)
- North America > Canada (0.04)
- Consumer Products & Services > Travel (0.94)
- Transportation > Passenger (0.68)
- Asia > China > Jilin Province (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
Jeff Goldblum should make a film about this legendary mathematician
Paul Erdős was one of the most prolific mathematicians to ever live, known for showing up at the door of others in the field and declaring they should host and feed him while they do maths together. I come to you with something a little different for my latest maths column: a plea to Hollywood to make a comedy biopic about one of the greatest mathematicians of all time, Paul Erdős. Why is Erdős (pronounced "air-dish") deserving of such acclaim? With almost 1500 papers to his name, he is probably the most prolific mathematician that ever lived, and possibly that will ever live. Unsurprisingly, with that many papers, he is known for his work across many areas of maths, from probability to number theory to graph theory.
- North America > United States > California > Los Angeles County > Los Angeles (0.15)
- Europe > Hungary (0.05)
- Asia > Japan (0.05)
- Health & Medicine (0.49)
- Media (0.48)
- Marketing (0.42)
Inside OpenAI's big play for science
An exclusive conversation with Kevin Weil, head of OpenAI for Science, a new in-house team that wants to make scientists more productive. In the three years since ChatGPT's explosive debut, OpenAI's technology has upended a remarkable range of everyday activities at home, at work, in schools: anywhere people have a browser open or a phone out, which is everywhere. Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them. The last couple of months have seen a slew of social media posts and academic publications in which mathematicians, physicists, biologists, and others have described how LLMs (and OpenAI's GPT-5 in particular) have helped them make a discovery or nudged them toward a solution they might otherwise have missed. In part, OpenAI for Science was set up to engage with this community.
- North America > United States > Massachusetts (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > China (0.04)
- Education (0.48)
- Health & Medicine (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Collaborative Learning via Prediction Consensus
We consider a collaborative learning setting where the goal of each agent is to improve their own model by leveraging the expertise of collaborators, in addition to their own training data. To facilitate the exchange of expertise among agents, we propose a distillation-based method leveraging shared unlabeled auxiliary data, which is pseudo-labeled by the collective. Central to our method is a trust weighting scheme that serves to adaptively weigh the influence of each collaborator on the pseudo-labels until a consensus on how to label the auxiliary data is reached. We demonstrate empirically that our collaboration scheme is able to significantly boost individual models' performance in the target domain from which the auxiliary data is sampled. At the same time, it can provably mitigate the negative impact of bad models on the collective. By design, our method adeptly accommodates heterogeneity in model architectures and substantially reduces communication overhead compared to typical collaborative learning methods.
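The trust-weighted consensus loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration only: the function name, the agreement-based trust update, and the fixed round count are assumptions, not the paper's actual method.

```python
import numpy as np

def consensus_pseudo_labels(agent_probs, n_rounds=10):
    """Iteratively form trust-weighted pseudo-labels on shared unlabeled data.

    agent_probs: list of (n_samples, n_classes) prediction arrays,
    one per collaborating agent, on the shared auxiliary data.
    """
    n_agents = len(agent_probs)
    trust = np.ones(n_agents) / n_agents              # uniform initial trust
    for _ in range(n_rounds):
        # Trust-weighted average of agent predictions is the current consensus.
        consensus = sum(w * p for w, p in zip(trust, agent_probs))
        labels = consensus.argmax(axis=1)
        # Agents that agree with the consensus earn more trust: score each
        # agent by the mean probability it assigns to the consensus label.
        agreement = np.array([p[np.arange(len(labels)), labels].mean()
                              for p in agent_probs])
        trust = agreement / agreement.sum()           # renormalize
    return consensus.argmax(axis=1), trust
```

Under this scheme a poorly calibrated or adversarial agent drifts toward low trust, which is the mechanism the abstract credits with mitigating the impact of bad models on the collective.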
Semantic Chain-of-Trust: Autonomous Trust Orchestration for Collaborator Selection via Hypergraph-Aided Agentic AI
Zhu, Botao; Wang, Xianbin; Niyato, Dusit
The effective completion of tasks in collaborative systems hinges on task-specific trust evaluations of potential devices for distributed collaboration. Due to the independent operation of the devices involved, the dynamic evolution of their mutual relationships, and complex situation-related impacts on trust evaluation, effectively assessing devices' trust for collaborator selection is challenging. To overcome this challenge, we propose a semantic chain-of-trust model implemented with agentic AI and hypergraphs for supporting effective collaborator selection. We first introduce a concept of semantic trust, specifically designed to assess collaborators along multiple semantic dimensions for a more accurate representation of their trustworthiness. To facilitate intelligent evaluation, an agentic AI system is deployed on each device, empowering it to autonomously perform necessary operations, including device state detection, trust-related data collection, semantic extraction, and task-specific resource evaluation, to derive a semantic trust representation for each collaborator. In addition, each device leverages a hypergraph to dynamically manage potential collaborators according to different levels of semantic trust, enabling fast one-hop collaborator selection. Furthermore, adjacent trusted devices autonomously form a chain through the hypergraph structure, supporting multi-hop collaborator selection. Experimental results demonstrate that the proposed semantic chain-of-trust achieves 100% accuracy in trust evaluation based on historical collaborations, enabling intelligent, resource-efficient, and precise collaborator selection.
- North America > Canada (0.04)
- Asia > Singapore (0.04)
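The hypergraph bookkeeping described in the chain-of-trust abstract can be illustrated with a minimal sketch: each hyperedge groups all neighbors that share one semantic trust level, so one-hop selection reduces to a lookup of the highest non-empty level. The class name, level encoding, and selection rule below are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

class TrustHypergraph:
    """Group neighboring devices into hyperedges by semantic trust level."""

    def __init__(self):
        self.levels = defaultdict(set)   # trust level -> set of device ids

    def update(self, device, level):
        # A device belongs to exactly one trust-level hyperedge at a time,
        # so remove it from any previous edge before re-inserting.
        for members in self.levels.values():
            members.discard(device)
        self.levels[level].add(device)

    def select_one_hop(self):
        # Fast one-hop selection: scan trust levels from highest to lowest
        # and return any device in the first non-empty hyperedge.
        for level in sorted(self.levels, reverse=True):
            if self.levels[level]:
                return next(iter(self.levels[level])), level
        return None, None
```

Multi-hop selection would then chain these structures: a selected one-hop collaborator consults its own hypergraph to extend the trusted chain, as the abstract describes.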
Can adding light sensors to nerve cells switch off pain, epilepsy, and other disorders?
In the past 20 years, mice with glowing cables sprouting from their heads have become a staple of neuroscience. They reflect the rise of optogenetics, in which neurons are engineered to contain light-sensitive proteins called opsins, allowing pulses of light to turn them on or off. The method has powered thousands of basic experiments into the brain circuits that drive behavior and underlie disease. As this research tool matured, hopes arose for using it as a treatment, too. Compared with the electrical or magnetic brain stimulation approaches already in use, optogenetics offers a way to more precisely target and manipulate the exact cell types underlying brain disorders.
- North America > United States > Missouri (0.05)
- North America > United States > California > San Diego County > San Diego (0.05)
Generative AI in Sociological Research: State of the Discipline
Alvero, AJ; Stoltz, Dustin S.; Stuhler, Oscar; Taylor, Marshall
Generative artificial intelligence (GenAI) has garnered considerable attention for its potential utility in research and scholarship. A growing body of work in sociology and related fields demonstrates both the potential advantages and risks of GenAI, but these studies are largely proof-of-concept or specific audits of models and products. We know comparatively little about how sociologists actually use GenAI in their research practices and how they view its present and future role in the discipline. In this paper, we describe the current landscape of GenAI use in sociological research based on a survey of authors in 50 sociology journals. Our sample includes both computational sociologists and non-computational sociologists and their collaborators. We find that sociologists primarily use GenAI to assist with writing tasks: revising, summarizing, editing, and translating their own work. Respondents report that GenAI saves time and that they are curious about its capabilities, but they do not currently feel strong institutional or field-level pressure to adopt it. Overall, respondents are wary of GenAI's social and environmental impacts and express low levels of trust in its outputs, but many believe that GenAI tools will improve over the next several years. We do not find large differences between computational and non-computational scholars in terms of GenAI use, attitudes, and concern; nor do we find strong patterns by familiarity or frequency of use. We discuss what these findings suggest about the future of GenAI in sociology and highlight challenges for developing shared norms around its use in research practice.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Mexico (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (1.00)
- Education (0.88)
- Government (0.68)
- Law > Environmental Law (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Generation (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The Workflow as Medium: A Framework for Navigating Human-AI Co-Creation
This paper introduces the Creative Intelligence Loop (CIL), a novel socio-technical framework for responsible human-AI co-creation. Rooted in the 'Workflow as Medium' paradigm, the CIL proposes a disciplined structure for dynamic human-AI collaboration, guiding the strategic integration of diverse AI teammates who function as collaborators while the human remains the final arbiter for ethical alignment and creative integrity. The CIL was empirically demonstrated through the practice-led creation of two graphic novellas, investigating how AI could serve as an effective creative colleague within a subjective medium lacking objective metrics. The process required navigating multifaceted challenges including AI's 'jagged frontier' of capabilities, sycophancy, and attention-scarce feedback environments. This prompted iterative refinement of teaming practices, yielding emergent strategies: a multi-faceted critique system integrating adversarial AI roles to counter sycophancy, and prioritizing 'feedback-ready' concrete artifacts to elicit essential human critique. The resulting graphic novellas analyze distinct socio-technical governance failures: 'The Steward' examines benevolent AI paternalism in smart cities, illustrating how algorithmic hubris can erode freedom; 'Fork the Vote' probes democratic legitimacy by comparing centralized AI opacity with emergent collusion in federated networks. This work contributes a self-improving framework for responsible human-AI co-creation and two graphic novellas designed to foster AI literacy and dialogue through accessible narrative analysis of AI's societal implications.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia (0.04)
- North America > United States > New York (0.04)
- Workflow (1.00)
- Research Report > Experimental Study (1.00)
- Overview (0.92)
- Research Report > New Finding (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)