Towards Effective Planning Strategies for Dynamic Opinion Networks
In this study, we investigate the under-explored problem of intervention planning for disseminating accurate information within dynamic opinion networks by leveraging learning strategies. Intervention planning involves identifying key nodes (search) and exerting control (e.g., disseminating accurate or official information through those nodes) to mitigate the influence of misinformation. However, as the network size increases, the problem becomes computationally intractable. To address this, we first introduce a ranking algorithm to identify key nodes for disseminating accurate information, which facilitates the training of neural network (NN) classifiers that provide generalized solutions to the search and planning problems. Second, we mitigate the complexity of label generation--which becomes challenging as the network grows--by developing a reinforcement learning (RL)-based centralized dynamic planning framework.
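The abstract does not specify the ranking algorithm, so the following is only a minimal illustrative sketch of the general idea of ranking key nodes for seeding accurate information, using plain degree centrality as a stand-in heuristic; the function name and the edge-list format are assumptions, not the paper's method.

```python
from collections import defaultdict

def rank_seed_nodes(edges, k):
    """Rank nodes by degree centrality and return the top-k as candidate
    seeds for disseminating accurate information. Degree centrality is a
    generic heuristic here, NOT the ranking algorithm from the paper."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Higher degree first; ties broken by node id for determinism.
    return sorted(degree, key=lambda n: (-degree[n], n))[:k]

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
print(rank_seed_nodes(edges, 2))  # → [0, 1] (node 0 has degree 3)
```

In a real planner such a static ranking would only bootstrap label generation; the RL framework described above would then adapt the choice of nodes as opinions evolve.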
Don't blindly trust what AI tells you, says Google's Sundar Pichai
People should not blindly trust everything AI tools tell them, the boss of Google's parent company Alphabet told the BBC. In an exclusive interview, chief executive Sundar Pichai said that AI models are prone to errors and urged people to use them alongside other tools. Mr Pichai said this highlighted the importance of having a rich information ecosystem rather than relying solely on AI technology. This, he said, is why people also use Google search, and why Google has other products that are more grounded in providing accurate information. While AI tools are helpful for creative writing, Mr Pichai said, people have to learn to use them for what they're good at, and not blindly trust everything they say.
- South America (0.15)
- North America > Central America (0.15)
- Asia > Taiwan (0.07)
- (16 more...)
- Leisure & Entertainment (1.00)
- Media > Film (0.49)
- Government > Regional Government > Europe Government > United Kingdom Government (0.30)
FRABench and UFEval: Unified Fine-grained Evaluation with Task and Aspect Generalization
Hong, Shibo, Ying, Jiahao, Liang, Haiyuan, Zhang, Mengdi, Kuang, Jun, Zhang, Jiazheng, Cao, Yixin
Evaluating open-ended outputs of Multimodal Large Language Models has become a bottleneck as model capabilities, task diversity, and modalities rapidly expand. Existing "MLLM-as-a-Judge" evaluators, though promising, remain constrained to specific tasks and aspects. In this paper, we argue that, on one hand, given the interconnected nature of aspects, learning specific aspects can generalize to unseen aspects; on the other hand, jointly learning to assess multiple visual aspects and tasks may foster a synergistic effect. To this end, we propose UFEval, the first unified fine-grained evaluator with task and aspect generalization for four evaluation tasks -- Natural Language Generation, Image Understanding, Image Generation, and Interleaved Text-and-Image Generation. However, training such a unified evaluator is hindered by the lack of a large-scale, multi-modal, aspect-level resource. To address this gap, we introduce FRABench, a comprehensive fine-grained evaluation dataset. Specifically, (1) we first construct a hierarchical aspect taxonomy encompassing 112 distinct aspects across the aforementioned four tasks. (2) Based on this taxonomy, we create FRABench, comprising 60.4k pairwise samples with 325k evaluation labels obtained from a combination of human and GPT-4o annotations. (3) Finally, leveraging FRABench, we develop UFEval, a unified fine-grained evaluator. Experiments show that learning on specific aspects enables UFEval to generalize to unseen aspects, and jointly learning to assess diverse visual tasks and aspects yields substantial mutual benefits.
- Europe > North Macedonia > Southwestern Statistical Region > Ohrid Municipality > Ohrid (0.04)
- Europe > France (0.04)
- Asia > Singapore (0.04)
- Africa > Central Africa (0.04)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Law (0.67)
- (2 more...)
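To make the notion of aspect-level pairwise labels concrete, here is a toy sketch of how per-aspect scores for two candidate outputs could be turned into pairwise preference labels; the function name, score format, and aspect names are illustrative assumptions, not the FRABench annotation scheme.

```python
def aspect_preference(scores_a, scores_b):
    """scores_a / scores_b: {aspect: numeric score} for outputs A and B.
    Returns a per-aspect pairwise label ('A', 'B', or 'tie') for every
    aspect scored on both sides -- a toy stand-in for aspect-level
    pairwise evaluation labels, not the dataset's actual pipeline."""
    labels = {}
    for aspect in scores_a.keys() & scores_b.keys():
        if scores_a[aspect] > scores_b[aspect]:
            labels[aspect] = "A"
        elif scores_a[aspect] < scores_b[aspect]:
            labels[aspect] = "B"
        else:
            labels[aspect] = "tie"
    return labels

print(aspect_preference({"fluency": 4, "faithfulness": 2},
                        {"fluency": 3, "faithfulness": 2}))
```

A pairwise sample can thus carry several labels at once (one per aspect), which is how 60.4k pairs can yield 325k evaluation labels.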
Analyzing the temporal dynamics of linguistic features contained in misinformation
Consumption of misinformation can lead to negative consequences that impact the individual and society. To help mitigate the influence of misinformation on human beliefs, algorithmic labels providing context about content accuracy and source reliability have been developed. Since the linguistic features used by algorithms to estimate information accuracy can change across time, it is important to understand their temporal dynamics. As a result, this study uses natural language processing to analyze PolitiFact statements spanning 2010 to 2024 and quantify how the sources and linguistic features of misinformation change between five-year time periods. The results show that statement sentiment has decreased significantly over time, reflecting a generally more negative tone in PolitiFact statements. Moreover, statements associated with misinformation exhibit significantly lower sentiment than accurate information. Additional analysis shows that recent time periods are dominated by sources from online social networks and other digital forums, such as blogs and viral images, that contain high levels of misinformation with negative sentiment. In contrast, most statements during early time periods are attributed to individual sources (i.e., politicians) that are relatively balanced in accuracy ratings and contain statements with neutral or positive sentiment. Named-entity recognition reveals that presidential incumbents and candidates are relatively more prevalent in statements containing misinformation, while US states tend to be present in accurate information. Finally, entity labels associated with people and organizations are more common in misinformation, while accurate statements are more likely to contain numeric entity labels, such as percentages and dates.
- North America > United States (1.00)
- Africa (0.28)
- Europe > United Kingdom (0.14)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.69)
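The temporal analysis described above can be sketched in miniature: bucket statements into five-year periods and compare average sentiment per period. The crude word-list scorer below is an assumption for illustration only; the study would use a proper NLP sentiment model, and the lexicons and function names here are invented.

```python
from collections import defaultdict

# Tiny illustrative lexicons -- a real analysis would use a trained model.
POS = {"good", "accurate", "true", "honest", "improve"}
NEG = {"false", "fake", "lie", "bad", "fraud"}

def sentiment(statement):
    """Crude lexicon sentiment: (#positive - #negative) / #tokens."""
    tokens = statement.lower().split()
    if not tokens:
        return 0.0
    score = sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)
    return score / len(tokens)

def mean_sentiment_by_period(rows):
    """rows: (year, statement) pairs; buckets into five-year periods
    (0: 2010-14, 1: 2015-19, 2: 2020-24) and averages sentiment."""
    totals, counts = defaultdict(float), defaultdict(int)
    for year, text in rows:
        period = (year - 2010) // 5
        totals[period] += sentiment(text)
        counts[period] += 1
    return {p: totals[p] / counts[p] for p in totals}
```

Comparing the resulting per-period means is how a downward sentiment trend like the one reported above would show up.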
Characteristics of Political Misinformation Over the Past Decade
Although misinformation tends to spread online, it can have serious real-world consequences. In order to develop automated tools to detect and mitigate the impact of misinformation, researchers must leverage algorithms that can adapt to the modality (text, images, and video), the source, and the content of the false information. However, these characteristics tend to change dynamically across time, making it challenging to develop robust algorithms to fight misinformation spread. Therefore, this paper uses natural language processing to find common characteristics of political misinformation over a twelve-year period. The results show that misinformation has increased dramatically in recent years and is increasingly shared from sources whose primary modalities are text and images (e.g., Facebook and Instagram), although video-sharing sources containing misinformation are starting to increase (e.g., TikTok). Moreover, it was discovered that statements expressing misinformation contain more negative sentiment than accurate information. However, the sentiment associated with both accurate and inaccurate information has trended downward, indicating a generally more negative tone in political statements across time. Finally, recurring misinformation categories were uncovered that occur over multiple years, which may imply that people tend to share inaccurate statements about information they fear or do not understand (Science and Medicine, Crime, Religion), topics that impact them directly (Policy, Election Integrity, Economic), or public figures who are salient in their daily lives. Together, it is hoped that these insights will assist researchers in developing algorithms that are temporally invariant and capable of detecting and mitigating misinformation across time.
- North America > United States > New York (0.04)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- North America > United States > Massachusetts (0.04)
- (4 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
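Finding "recurring misinformation categories that occur over multiple years" reduces to a simple counting problem, sketched below; the record format, function name, and threshold are assumptions for illustration, not the paper's procedure.

```python
from collections import defaultdict

def recurring_categories(records, min_years=3):
    """records: (year, category) pairs for statements rated as
    misinformation. Returns categories that appear in at least
    `min_years` distinct years -- i.e., the recurring ones."""
    years_seen = defaultdict(set)
    for year, category in records:
        years_seen[category].add(year)
    return sorted(c for c, ys in years_seen.items() if len(ys) >= min_years)

records = [(2012, "Election Integrity"), (2016, "Election Integrity"),
           (2020, "Election Integrity"), (2020, "Economic"), (2021, "Economic")]
print(recurring_categories(records))  # → ['Election Integrity']
```

Counting distinct years rather than raw mentions keeps a single viral year from masquerading as a long-running theme.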
OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this' - ABC News
The CEO behind the company that created ChatGPT believes artificial intelligence technology will reshape society as we know it. He believes it comes with real dangers, but can also be "the greatest technology humanity has yet developed" to drastically improve our lives. "We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this." Altman sat down for an exclusive interview with ABC News' chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 -- the latest iteration of the AI language model.
- Media > News (0.86)
- Government (0.72)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.66)
AI model GPT-3 (dis)informs us better than humans
Spitale, Giovanni, Biller-Andorno, Nikola, Germani, Federico
Artificial intelligence is changing the way we create and evaluate information, and this is happening during an infodemic, which has been having dramatic effects on global health. In this paper we evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it has been written by a Twitter user or by the AI model GPT-3. Our results show that GPT-3 is a double-edged sword: in comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. We also show that humans cannot distinguish tweets generated by GPT-3 from tweets written by human users. Starting from our results, we reflect on the dangers of AI for disinformation, and on how we can improve information campaigns to benefit global health.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States (0.05)
- Oceania > Australia (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Education (1.00)
- Media > News (1.00)
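The claim that "humans cannot distinguish tweets generated by GPT-3 from tweets written by human users" is operationally a statement about discrimination accuracy hovering near chance. A minimal sketch of that metric, with an invented judgment format (this is not the study's analysis code):

```python
def discrimination_accuracy(judgments):
    """judgments: (guessed_synthetic, actually_synthetic) boolean pairs,
    one per tweet judged by a rater. An accuracy near 0.5 on a balanced
    set means raters cannot tell organic from AI-written tweets."""
    correct = sum(guess == truth for guess, truth in judgments)
    return correct / len(judgments)

judgments = [(True, True), (False, True), (True, False), (False, False)]
print(discrimination_accuracy(judgments))  # → 0.5, i.e., chance level
```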
AI Is Terrible at Detecting Misinformation. It Doesn't Have to Be. - Nautilus
Elon Musk has said he wants to make Twitter "the most accurate source of information in the world." I am not convinced that he means it, but whether he does or not, he's going to have to work on the problem; a lot of advertisers have already made that pretty clear. If he does nothing, they are out. And Musk has continued to tweet in ways that seem to indicate that he is generally on board with some kind of content moderation. The tech journalist Kara Swisher has speculated that Musk wants AI to help; on Twitter she wrote, rather plausibly, that Musk "is hoping to build an AI system that replaces [fired moderators] that will not work well now but will presumably get better."
- Media > News (0.79)
- Information Technology > Services (0.70)
Why do Modern Networks Require AIOps?
Over the past decade, network operations teams have had to deal with a number of issues in their networks--from increased complexity to more distributed environments. With AIOps, you can start optimizing your networks now and prepare for the future. AIOps lets you manage your network like never before. According to Gartner, AIOps combines big data and machine learning to automate IT operations processes such as event correlation, anomaly detection, and causality determination. It can be defined as the application of machine learning (ML) and data science to IT operations problems.
- Information Technology > Communications > Networks (0.73)
- Information Technology > Artificial Intelligence > Machine Learning (0.71)
- Information Technology > Data Science > Data Mining (0.55)
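Of the AIOps processes named above, anomaly detection is the easiest to illustrate. The z-score detector below is a deliberately minimal stand-in (real AIOps platforms use far richer models); the metric name and threshold are assumptions for the example.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag samples more than `threshold` population standard deviations
    from the mean -- a minimal stand-in for AIOps anomaly detection."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

latency_ms = [10, 11, 9, 10, 12, 10, 95, 11, 10, 9]
print(zscore_anomalies(latency_ms))  # → [6] (the 95 ms spike)
```

Event correlation and causality determination would then ask why index 6 spiked, e.g., by aligning the anomaly timestamp with deploys or config-change events.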