AI in Journalism and its Ethical Implications

AI is slowly penetrating almost every aspect of our lives, raising voices of awe and admiration as well as of concern and fear. With this unprecedented technology in our hands, it is time to address certain ethical questions before we surrender ourselves completely to machines that may one day outrun and replace us.

One of the most controversial areas where Artificial Intelligence is already in use is journalism. While the automation of various tasks (data analysis, content selection, translation, and personalized news recommendation) and the resulting time savings are undoubted benefits, the ethics of using AI for actual content generation and article writing deserves careful consideration. Are we ready to tackle the new challenges?

AI in journalism and the risk of content bias

Good journalism prides itself on being:

  • Thoroughly researched and accurate
  • Clear and communicative
  • Objective
  • Fair

Journalists as we know them have always been expected to determine what data is factual and to show their “story” from all angles. Do AI systems know how to do that? Let’s remember that they rely entirely on the information fed into them, and they can only be as unbiased as the data we provide.

For AI-generated content to be bias-free, it is crucial that we feed the machines data from varied sources, reflecting different points of view and experiences. Otherwise, the problems we have been trying so hard to eradicate from our society (discrimination, racism, xenophobia, stereotypes) will be perpetuated.
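To make the “various sources” point concrete, here is a minimal sketch in Python of one crude check a newsroom could run before training; the corpus format and the source labels are hypothetical, and real bias auditing goes far beyond counting outlets.

    from collections import Counter
    from math import log2

    def source_entropy(articles):
        """Shannon entropy (in bits) of the source distribution.

        Higher values mean the corpus is spread more evenly across
        sources; 0 means everything comes from a single outlet.
        """
        counts = Counter(article["source"] for article in articles)
        total = sum(counts.values())
        return -sum((n / total) * log2(n / total) for n in counts.values())

    # Hypothetical corpus: each item records where the text came from.
    corpus = [
        {"source": "wire_service", "text": "..."},
        {"source": "wire_service", "text": "..."},
        {"source": "local_paper",  "text": "..."},
        {"source": "foreign_desk", "text": "..."},
    ]
    print(f"Source diversity: {source_entropy(corpus):.2f} bits")

Entropy over source labels says nothing about the viewpoints within a source, but even this crude number makes the claim testable: a corpus dominated by one outlet will score close to zero.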

Can AI be held accountable for the content it produces?

Traditionally, behind every piece of content released to the public stood the name of the person who wrote it or the company that published it. In case of a controversy, mistake, or lie, they could always be held accountable. Can automated content-generating systems be held to the same standard?

Accountability in the era of Artificial Intelligence is becoming increasingly difficult to assign, because these systems rely on data supplied by one party and algorithms written by another. Whom do we blame, then, if we find the output inaccurate, disrespectful, or defamatory? In such cases, are publishing companies willing to stand by their artificial “journalists”? Are software developers ready to take the blame for a faulty algorithm they created? Let’s ask ourselves these questions before it is too late.

The need for transparency in the use of AI in journalism

The efficiency of AI systems has enabled us to considerably reduce both the time and the effort behind content writing. A machine takes only a few seconds to produce what a human would need several hours for. Understandably, the temptation to take full credit for that output is strong.

Can we, however, turn a blind eye to the ethical dilemma here? Doesn’t the reader have the right to know that the content they are offered is not human-generated? This information is particularly relevant when assessing the accuracy and credibility of what they read. It seems only fair to disclose which parts of the content you publish under your own name were produced with the help of an AI algorithm.
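Technically, such a disclosure is trivial; the hard part is the editorial will to make it. A minimal sketch, assuming a hypothetical content-management model with a single flag per article:

    from dataclasses import dataclass

    @dataclass
    class Article:
        headline: str
        body: str
        ai_assisted: bool  # set in the newsroom, never inferred

    def byline(article: Article, author: str) -> str:
        """Render the byline readers see, with an explicit AI note."""
        note = " (written with AI assistance)" if article.ai_assisted else ""
        return f"By {author}{note}"

    story = Article("City budget passes", "...", ai_assisted=True)
    print(byline(story, "Jane Doe"))  # By Jane Doe (written with AI assistance)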

Transparency about AI use in journalism should also include making your readers aware of the potential flaws and limitations that this new technology still has. Faster doesn’t necessarily mean better, at least not yet.

Machine-created content and the right to privacy

Privacy has always been an important issue in this industry, and skilled journalists are well trained to operate within certain boundaries. However, as Artificial Intelligence takes its first steps in content writing, some of us fear that it does not yet have a good grip on what is and is not allowed where privacy is concerned.

This concern is all the more urgent given the huge amount of information AI systems can access and extract. Social media accounts, police and court records, surveillance camera footage – if there is something to dig up, you can be sure an AI algorithm will dig it up. These systems still need to be taught about privacy protection, responsible use of information, and data anonymization. Otherwise, publishers will be flooded with lawsuits.
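Some of that teaching can begin with very simple guardrails. The sketch below, with deliberately narrow, illustrative patterns, redacts two obvious identifier types before a text ever reaches a generation model; real anonymization requires much more than a pair of regular expressions.

    import re

    # Illustrative patterns only; production systems use trained
    # entity recognizers rather than regexes.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Replace obvious personal identifiers with placeholder tags."""
        text = EMAIL.sub("[EMAIL REDACTED]", text)
        return PHONE.sub("[PHONE REDACTED]", text)

    print(redact("Reach the witness at jane.doe@example.com or +1 555 010 1234."))
    # Reach the witness at [EMAIL REDACTED] or [PHONE REDACTED].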

AI and job displacement in journalism

One of humanity’s longstanding fears is that Artificial Intelligence will steal our jobs. What used to look like a distant future is quickly becoming our reality as AI systems grow more advanced and efficient at numerous tasks. It is time we asked ourselves whether journalism (or any other economic activity, for that matter) should be judged only in terms of profitability and cost-efficiency, or whether steps should be taken to protect the jobs and economic security of human journalists.

If we want to preserve responsible journalism, the ethical concerns surrounding the use of AI cannot be ignored. That means the media, as well as software developers, must engage not only in celebrating the opportunities the new technology has to offer but also in a thorough evaluation of its potential threats.