Artificial Intelligence tools: risks and opportunities for journalism

Artificial Intelligence (AI) has been finding its way into everyday use for some time, but the launch of ChatGPT in November 2022, which became the fastest-growing app of all time, has made this take real flight. With it, concerns about the impact of AI systems on our societies have also become more prominent. At Free Press Unlimited, we look specifically at the risks and opportunities AI offers for journalism.

ChatGPT can write an article on this topic after receiving only a few sentences as input. Does that make the job of a writer obsolete? Put simply, this is one of the key concerns that generative AI tools like ChatGPT raise, especially in the field of journalism.

Independent media have already struggled as social media took over the role of news distributor, draining the revenue stream that came from advertising. Now a tool like ChatGPT can take on much of the work that journalism involves: it can search through available information incredibly fast and construct it into a news article. Does this affect the need for journalists and the professionalism of the journalistic profession?

First of all, it is important to note that ChatGPT has a significant limitation: its last knowledge update was in September 2021. It therefore cannot provide information on anything published on the internet after that date; it does not, for example, know about the full-scale Russian invasion of Ukraine in February 2022. This limitation alone keeps the work of a journalist very necessary: to report on and analyse current affairs.

Opportunities

ChatGPT also brings opportunities for the profession. It can assist journalists by automating repetitive tasks, such as transcribing interviews or summarising reports. This frees up time for journalists to focus on more critical aspects of their work, such as investigative reporting and analysis.
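To make this concrete, below is a minimal sketch of what automated summarisation could look like, using OpenAI's Python library. The model name, the file name and the prompt are illustrative assumptions, not a setup prescribed by this article, and any output would still require the human review discussed later in this piece.

from openai import OpenAI

# Assumptions (illustrative only): the `openai` package is installed,
# the OPENAI_API_KEY environment variable is set, and "report.txt" is a
# plain-text report small enough to fit in a single request.
client = OpenAI()

with open("report.txt", encoding="utf-8") as f:
    report = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You summarise reports for journalists, factually and concisely."},
        {"role": "user",
         "content": "Summarise the key facts of this report in five bullet points:\n\n" + report},
    ],
)

# The result is a draft summary, to be checked by a human editor.
print(response.choices[0].message.content)

Even with this kind of automation, the experiment described below shows that human editing remains indispensable.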

AI can also be used to perform more in-depth research for investigative journalism stories, because it can quickly scan vast amounts of data and identify relevant information. These advantages also increase the speed of news production and distribution and decrease costs, making news more accessible.

Our Eastern European office has experimented with using ChatGPT to edit an article about the pollution of the Black Sea in Ukraine caused by the bombing and fatal damage at the Kakhovka Hydropower Plant. The overall experience was positive, because it significantly reduced the working time. However, after the text was processed with artificial intelligence, a human editor was still needed to proofread it, ensure that important aspects were not missed, and make stylistic improvements. In the end, the adapted text was republished by four partners.

Risks

An important risk of generative AI, in the way that it functions now, is that it can jeopardise our access to reliable information. The model behind ChatGPT generates its answers from vast amounts of text gathered from the internet. In doing so it does not distinguish between reliable and unreliable information, so its answers can be based on dis- or misinformation. It also does not disclose sources, making the information very difficult to verify. Using these systems therefore demands a high level of digital and media literacy. If not carefully trained and monitored, AI tools can produce or amplify dis- and misinformation, leading to the spread of inaccurate information and public confusion.

Then there is the issue of accountability. When a news outlet publishes an article, there is an actual person or organisation behind it that can be held accountable, and there are checks and balances in place. In the case of tools like ChatGPT, this is lacking. Accountability becomes a critical issue when the line between human-generated and AI-generated content blurs: because content is not attributed to its sources, public trust in journalism can be undermined.

Lastly, because sources are not disclosed, ChatGPT most likely makes use of information from journalistic articles without crediting them. This compromises intellectual property and fails to recognise the important work that journalists do. It also creates a resource problem, on top of the viability issues that journalists already face.

Profit-driven use of AI

Reporters Without Borders has also raised concerns about the emergence of AI-generated websites that resemble real media sites and siphon off their ad revenue. They refer to research by NewsGuard, a firm that specialises in evaluating the reliability of online news and information, which has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software.

This new development is very concerning for several reasons. First of all, it is solely profit-driven: the goal is to maximise clicks with as little effort as possible, in order to optimise profit. It eliminates an ethical approach to news and creates a flood of questionable articles. The sheer volume of this content makes content from reliable, independent media less visible, and with that it diminishes people's access to reliable information. On top of that, these articles sometimes contain harmful, false information. For example, one article claimed that Joe Biden had died, and another falsely reported that Ukraine had claimed to have killed 3,870 Russian soldiers in a single attack.

This way of using AI in journalism raises ethical concerns, such as the need to disclose when an article, or part of it, is generated by an AI model. Transparency about the use of AI is vital to ensure that readers are aware of the information's source and know to be critical of its content.

Safeguarding access to reliable information

At Free Press Unlimited we are carefully following the developments around generative AI tools like ChatGPT. We strongly believe in the added value of the analytic eye of a journalist, whose sole purpose is to bring reliable information to the public, when reporting the news. This remains important, perhaps even more so, with the emergence of these AI tools. We consider their advantages for the work of journalists, and take this up with our partners, but are above all vigilant about the threats they pose to people's access to reliable information.

Transparency, regulation and accountability are key moving forward. Fundamental rights need to be incorporated into AI systems. When released onto the market irresponsibly, without proper public debate, government oversight and regulation, AI tools can and will easily be abused. At this moment we observe an astonishing lack of accountability from the companies that put these tools on the market. In addition, a clear vision from governments on regulation is missing. The longer this is put off, the more power is handed to the digital platforms that distribute and use AI tools.

Making sure that independent journalism remains viable, trusted and visible in society is essential. It is important to strike a balance between the use of AI technologies and the preservation of the essential human elements that underpin journalism's integrity and purpose. By harnessing the strengths of tools like ChatGPT while retaining the journalistic values of accuracy, fairness and accountability, the journalism industry can adapt and evolve, ultimately serving the public with reliable and independent information in the digital age.

This article was written with the help of ChatGPT. The question posed was: Can you write an article that describes the effects of ChatGPT on the journalism work field and the advantages and disadvantages of the existence of ChatGPT for journalism?
