AI-generated news should carry 'nutrition' labels, thinktank says

The Guardian

AI-generated news should carry "nutrition" labels and tech companies must pay publishers for the content they use, according to a left-of-centre thinktank, amid rising use of the technology as a source for current affairs. The Institute for Public Policy Research (IPPR) said AI firms were rapidly emerging as the new "gatekeepers" of the internet and that intervention was needed to create a healthy AI news environment. It recommended standardised labels for AI-generated news, showing what information had been used to create those answers, including peer-reviewed studies and articles from professional news organisations.


Generative AI is already being used in journalism – here's how people feel about it

AIHub

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. A new report published this week finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools. The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France). Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.


Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation

Kaffee, Lucie-Aimée, Atanasova, Pepa, Rogers, Anna

arXiv.org Artificial Intelligence

The rapid adoption of AI across diverse domains has led to the development of organisational guidelines that vary significantly, even within the same sector. This paper examines AI policies in two domains, news organisations and universities, to understand how bottom-up governance approaches shape AI usage and oversight. By analysing these policies, we identify key areas of convergence and divergence in how organisations address risks such as bias, privacy, misinformation, and accountability. We then explore the implications of these findings for international AI legislation, particularly the EU AI Act, highlighting gaps where practical policy insights could inform regulatory refinements. Our analysis reveals that organisational policies often address issues such as AI literacy, disclosure practices, and environmental impact, areas that are underdeveloped in existing international frameworks. We argue that lessons from domain-specific AI policies can contribute to more adaptive and effective AI governance at the global level. This study provides actionable recommendations for policymakers seeking to bridge the gap between local AI practices and international regulations.


OpenAI, Microsoft sued by news nonprofit for copyright infringement

Al Jazeera

The Center for Investigative Reporting (CIR), which publishes Mother Jones and Reveal, said on Thursday that it had filed the lawsuit accusing the tech firms of using its content without permission in a "rebuke to artificial intelligence and its exploitative practices". "OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organisations that license our material," Monika Bauerlein, CEO of the Center for Investigative Reporting, said in a statement. "The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it." OpenAI and Microsoft did not immediately respond to requests for comment. OpenAI's ChatGPT chatbot relies on vast quantities of information scraped from the internet, including news sites, to respond to users' queries.


Google fined €250m in France for breaching intellectual property rules

The Guardian

Google has been fined €250m (£213m) by French regulators for breaching an agreement over paying media companies for reproducing their content online. France's competition watchdog said on Wednesday that it was fining the US tech company for breaches linked to intellectual property rules related to news media publishers. The regulator also cited concerns about Google's AI service. The competition authority said Google's AI-powered chatbot Bard – since rebranded as Gemini – was trained on content from publishers and news agencies without notifying them. The watchdog said in a statement that the fine was for "failing to respect commitments made in 2022" and accused Google of not negotiating in "good faith" with news publishers on how much to compensate them for use of their content.


Google developing AI tools to help journalists report the news

Al Jazeera

Google is developing artificial intelligence-enabled tools to help journalists research and write news articles, a development that is likely to rattle nerves across the media industry after years of painful job cuts. Google is working with media outlets, particularly with small publishers, to provide AI-powered tools to assist journalists with "options for headlines or different writing styles", the California-based tech giant said on Thursday. "Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we're making assistive tools available for people in Gmail and in Google Docs," Google spokeswoman Jenn Crider said in a statement, which described the company's "earliest stages of exploring ideas". "Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles." The global media industry has been decimated by successive rounds of layoffs amid a collapse in print advertising revenues, with US newsrooms alone shedding a record 17,436 jobs in the first five months of 2023.


ChatGPT is making up fake Guardian articles. Here's how we're responding

#artificialintelligence

Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem we'd identified?


What newsroom leaders think about the future of journalism and AI

#artificialintelligence

Jack Clark, co-chair of the 2022 AI Index Report published by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), has declared that "2021 was the year that AI went from an emerging technology to a mature technology: we're no longer dealing with a speculative part of scientific research, but instead something that has real-world impact, both positive and negative". The report highlights that private sector investment in AI doubled in that year. Is the news industry adopting AI at a similar pace? And what impact is AI having on journalism? JournalismAI gathered a group of news media executives from around the world in a private seminar at the International Journalism Festival to discuss their AI hopes, fears and strategies.


AI Academy for Small Newsrooms

#artificialintelligence

This FREE online programme offers a deep-dive into the potential of artificial intelligence to journalists and media professionals from small newsrooms. It is designed by the JournalismAI team at the London School of Economics and Political Science (LSE) and powered by the Google News Initiative. The Academy is a 6-week online programme that starts in September 2021 and, in its first pilot edition, it is designed for 20 participants from small news organisations (fewer than 50 employees) in the EMEA region (Europe, Middle East and Africa). In line with JournalismAI's mission to inform media organisations about the potential offered by AI-powered technologies and to foster debate about the ethical, editorial, and social impact of AI on journalism, the Academy aims to support small newsrooms that want to learn how AI can be used to support their journalism. The programme combines a series of masterclasses given by experts working at the intersection of journalism and artificial intelligence with opportunities for discussion among participants.


Artificial intelligence and journalism: a race with machines

#artificialintelligence

Artificial Intelligence (AI) is a somewhat catch-all term that refers to the different possibilities offered by recent technological developments. From machine learning to natural language processing, news organisations can use AI to automate a huge number of tasks that make up the chain of journalistic production, including detecting, extracting and verifying data, producing stories and graphics, publishing (with sorting, selection and prioritisation filters) and automatically tagging articles. These systems offer numerous advantages: speed in executing complex procedures based on large volumes of data; support for journalistic routines through alerts on events and the provision of draft texts to be supplemented with contextual information; an expansion of media coverage to areas that were previously either not covered or not well covered (the results of matches between 'small' sports clubs, for example); optimisation of real-time news coverage; strengthening a media outlet's ties with its audiences by providing them with personalised context according to their location or preferences; and more. But there is a flipside to the coin: the efficiency of these systems depends on the availability and the quality of the data fed into them. The principle of garbage in, garbage out (GIGO), tried and tested in the IT world, essentially states that without reliable, accurate and precise input, it is impossible to obtain reliable, accurate and precise output.