

Creation of the Estonian Subjectivity Dataset: Assessing the Degree of Subjectivity on a Scale

Gailit, Karl Gustav, Muischnek, Kadri, Sirts, Kairit

arXiv.org Artificial Intelligence

This article presents the creation of an Estonian-language dataset for document-level subjectivity, analyzes the resulting annotations, and reports an initial experiment on automatic subjectivity analysis using a large language model (LLM). The dataset comprises 1,000 documents (300 journalistic articles and 700 randomly selected web texts), each rated for subjectivity on a continuous scale from 0 (fully objective) to 100 (fully subjective) by four annotators. As the inter-annotator correlations were moderate, with some texts receiving scores at opposite ends of the scale, a subset of texts with the most divergent scores was re-annotated, improving the inter-annotator correlation. In addition to human annotations, the dataset includes scores generated by GPT-5 as an experiment in annotation automation. These scores were similar to those of the human annotators; however, several differences emerged, suggesting that while LLM-based automatic subjectivity scoring is feasible, it is not an interchangeable alternative to human annotation, and its suitability depends on the intended application.


The LA Times published an op-ed warning of AI's dangers. It also published its AI tool's reply

The Guardian

Beneath a recent Los Angeles Times opinion piece about the dangers of artificial intelligence, there is now an AI-generated response about how AI will make storytelling more democratic. "Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon," the co-directors of the Archival Producers Alliance, Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli, wrote on 1 March. Published over the Academy Awards weekend, their comment piece focused on the specific dangers of AI-generated footage within documentary film, and the possibility that unregulated use of AI could shatter viewers' "faith in the veracity of visuals". On Monday, the Los Angeles Times's just-debuted AI tool, "Insight", labeled this argument as politically "center-left" and provided four "different views on the topic" underneath.


'The Simpsons' star fears AI could rip off his work, but says there's one thing it cannot recreate

FOX News

AI expert Marva Bailer explains to Fox News Digital why Hank Azaria's opinion piece about humanity and AI matters. "The Simpsons" star Hank Azaria has voiced his fears over artificial intelligence in a new opinion piece. The actor, who has been with the show since 1989, wrote an opinion essay for The New York Times, worrying that AI "will be able to recreate the sounds of the more than 100 voices I created for characters on 'The Simpsons.'" He continued, "It makes me sad to think about it. Not to mention, it seems just plain wrong to steal my likeness or sound -- or anyone else's."


A list of resources, articles, and opinion pieces relating to generative AI models – September 2024 update

AIHub

We've collected some of the articles, opinion pieces, videos and resources relating to generative AI models. We periodically update this list to add further resources of interest. This article represents the fifth in the series.


ArguMentor: Augmenting User Experiences with Counter-Perspectives

Pitre, Priya, Luther, Kurt

arXiv.org Artificial Intelligence

Opinion pieces (or op-eds) can provide valuable perspectives, but they often represent only one side of a story, which can make readers susceptible to confirmation bias and echo chambers. Exposure to different perspectives can help readers overcome these obstacles and form more robust, nuanced views on important societal issues. We designed ArguMentor, a human-AI collaboration system that highlights claims in opinion pieces, identifies counter-arguments for them using an LLM, and generates a context-based summary grounded in current events. It further enhances user understanding through additional features such as a Q&A bot (which answers user questions pertaining to the text), DebateMe (an agent with which users can argue either side of the piece), and highlighting (where users can highlight a word or passage to get its definition or context). Our evaluation shows that participants can generate more arguments and counter-arguments and hold, on average, more moderate views after engaging with the system.


A list of resources, articles, and opinion pieces relating to generative AI models – February 2024 update

AIHub

We've collected some of the articles, opinion pieces, videos and resources relating to generative AI models. We periodically update this list to add further resources of interest. This article represents the fourth in the series.


A list of resources, articles, and opinion pieces relating to large language models – August 2023 update

AIHub

We've collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series.


A list of resources, articles, and opinion pieces relating to large language models

AIHub

We've collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the first update.


A list of resources, articles, and opinion pieces relating to large language models

AIHub

We've collected some of the articles, opinion pieces, videos and resources relating to large language models. Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest.


DeepMind researcher claims new AI could lead to AGI, says 'game is over'

#artificialintelligence

According to Dr Nando de Freitas, a lead researcher at Google's DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes. In response to an opinion piece penned by yours truly, the scientist posted a thread on Twitter that began with what's perhaps the boldest statement we've seen from anyone at DeepMind concerning its current progress toward AGI: "It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N" https://t.co/UJxSLZGc71 Solving these scaling challenges is what will deliver AGI. Research focused on these problems, e.g. S4 for greater memory, is needed. Rich Sutton is right too, but the AI lesson ain't bitter but rather sweet.