2024-07-09
AI writes the news, but is it any good?

Recent research shows that AI-based search tools and chatbots, while offering convenience and instant answers, can inadvertently impair both the visibility and the reliability of news content. Artificial intelligence is rapidly reshaping our digital interactions, but can we fully trust it? These findings raise important questions about the future of information dissemination and the role of traditional news organizations in a digital world increasingly dominated by AI.
The revolution in search using artificial intelligence: a double-edged sword
As technology giants such as Google, Microsoft and OpenAI continue to improve their AI-powered search capabilities, users are increasingly turning to these tools for quick information. However, a study by researcher Merja Myllylahti revealed a worrying trend: the diversity of news sources appearing in search results is shrinking, while the number of non-traditional, potentially less reliable sources is growing.
The study, which analyzed data from three-month periods in 2023 and 2024, found that although traditional news outlets still feature prominently in search results, there has been a sharp rise in the "other sources" category. This category often includes industry forums, press releases and other non-journalistic content, raising questions about the quality and reliability of material presented as news.
Even more worrying is the performance of AI chatbots such as Google Gemini and Microsoft Copilot. When asked to summarize the day's top news in New Zealand, these chatbots often served up outdated stories, failed to link to specific articles, and in some cases could not even identify the sources of the information they provided. This lack of transparency and accuracy poses a serious threat to an informed public.
The dilemma of news organizations
While AI companies scramble to find enough data to train their models, news organizations find themselves at a crossroads. Some, such as News Corp, the Financial Times and Germany's Axel Springer, have struck commercial agreements with AI companies, allowing their content to be used for model training. Others, including the New York Times and Alden Global Capital, have taken a more combative stance, filing lawsuits against Microsoft and OpenAI for alleged copyright infringement.
New Zealand news publisher Stuff has taken a defensive position, blocking ChatGPT from using its stories. That decision appears to have come at a cost, however, as Stuff's content has reportedly become less visible in Google and Microsoft search results.
These divergent strategies highlight the balancing act news organizations face. On one hand, partnering with AI companies can provide a new revenue stream and potentially increase the visibility of their content. On the other, it risks losing control over how their journalism is presented and used, potentially undermining their brand and credibility.
Implications for democracy and public information
The potential consequences of these changes reach far beyond the business models of news organizations. As AI-powered search engines increasingly offer summaries and answers without citing sources, journalism's key role in providing context, analysis and original reporting risks being diminished.
Moreover, if users grow accustomed to receiving information without clear attribution or any way to verify sources, existing misinformation problems could worsen and public confidence in legitimate news outlets could erode further. Such a scenario poses a serious threat to the informed citizenry that a functioning democracy requires.
Policy responses and directions for the future
In light of these issues, policymakers are grappling with how to regulate the interaction between AI and news. The New Zealand government, for example, is advancing its Fair Digital News Bargaining Bill, which would require tech giants like Google and Meta to pay news companies for their content. However, the decision to leave artificial intelligence out of the current version of the bill has drawn criticism from some quarters.
As AI technology continues to evolve, it is clear that a more comprehensive approach will be needed. This may mean not only updating existing laws, but also developing new frameworks capable of keeping pace with rapid technological change.
Looking ahead: the future of news in a world of artificial intelligence
As we navigate this new terrain, several key questions arise:
• How can we ensure visibility and trust in quality journalism in an AI-dominated search environment?
• What role should AI companies play in supporting the news ecosystem they rely on for training data?
• How can we balance the benefits of accessing information using artificial intelligence with the need for transparency and accountability?
• What skills will citizens need to critically evaluate information in the era of artificial intelligence-generated content?
Addressing these questions will require ongoing cooperation between technologists, journalists, policymakers and the public. As artificial intelligence continues to change how information is accessed and consumed, ensuring the continued viability of a free and independent press remains more important than ever for the health of our democratic societies.
The story of artificial intelligence and news is still being written, and its next chapters will have profound implications for how we understand our world and make decisions as citizens. As we move forward, striking a balance between technological innovation and the preservation of reliable, diverse sources of information will be one of the defining challenges of our time.