It is a complex issue to deal with, but one that is fundamental to understanding our future: will artificial intelligence really take over journalism? With the evolution of search algorithms, new technologies and machine learning, developers have already created AI with incredible potential, such as GPT-3 (Generative Pre-trained Transformer 3), the third version of the text-generation system developed by the non-profit organization OpenAI, which was also used by "The Guardian" to write an essay from scratch on hot topics at the intersection of technology and ethics.
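For readers curious about what "prompting" such a system looks like in practice, here is a minimal, hypothetical sketch. GPT-3 itself is only accessible through OpenAI's commercial API, so the example uses the freely available GPT-2 model via the Hugging Face transformers library as a stand-in; the prompt and settings are invented for illustration.

```python
# Minimal sketch: prompting a pre-trained text generator for a draft paragraph.
# GPT-2 is used here as a freely available stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change journalism because"
drafts = generator(prompt, max_new_tokens=80, num_return_sequences=1)

# The output is raw machine text: a human editor still has to verify,
# cut and reorganize it, much as The Guardian did with its GPT-3 essay.
print(drafts[0]["generated_text"])
```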
In light of the results obtained, many journalists and experts have become more than a little concerned about the implications that the use of such a tool could have in publishing, often wondering what its impact on the world of information and on jobs might be.
To find the answers we need, however, we first have to understand the current state of journalism, or rather how it is perceived by readers, journalism students and some writers for blogs and online newspapers.
A discourse on method: between in-depth reporting and clickbait
In this analysis we will limit ourselves to the Italian situation, without citing specific journalists or newspapers, in order to understand how journalism is perceived at a national level, starting in particular from the opinion of readers.
A 2017 survey by the Pew Research Center on the relationship between news outlets and political alignment in Italy opens precisely with data on news consumption and trust in the media in the country: although about 75% of respondents consider Italian news outlets (online, print or television) important for reporting the main news, 71% do not trust them.
The reasons vary: on the one hand, respondents pointed to the political alignment and lack of objectivity of the journalists in question, analyzed in a series of polls included in the same Pew Research study. On the other hand, some argued that what makes journalism so unreliable is the way articles are written and published, especially when it comes to quick-to-read news.
This opinion is voiced daily, especially by young people on social networks, so much so that there are Facebook pages and groups dedicated to sharing and criticizing (sometimes quite harshly) clickbait articles or pieces considered decidedly useless and unprofessional.
Substitute or helper?
Many experts and researchers have already tried to answer these questions, generally asking what a future of journalism shaped by algorithms and AI would look like. One such attempt, well regarded by several specialists, is Newsmakers: Artificial Intelligence and the Future of Journalism by Francesco Marconi, a professor at Columbia University in New York who has headed the media labs of the Wall Street Journal and the Associated Press.
His thesis is clear: the world of journalism is not in step
with the evolution of new technologies; therefore, newsrooms should take
advantage of what AI can offer by creating a new business model.
For Marconi, artificial intelligence should become the heart of journalism, but not as a replacement for journalists.
We mentioned earlier the GPT-3 article published by the British newspaper The Guardian: although of high quality, it still required human support in finding sources, in defining the instructions to be followed, and in the final revision of the piece, which involved cuts and the reorganization of some paragraphs. The time and cost of producing the editorial were lower than those required by a human journalist, but the final product was not automatically better.
According to Marconi, AI should replace human journalists only in around 8-12% of their work, and only in specific tasks such as finding sources, correcting articles and writing news based on statistical or numerical data: results in football, basketball and other sports, agreements between companies, local weather forecasts. That share of editorial effort would instead be redirected towards purely human content such as interviews, analyses, in-depth features, and investigative and field journalism.
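As a concrete illustration of what writing news "based on statistical data" can mean in practice, here is a toy Python sketch of template-driven generation from structured match data. The data model, team names and wording are invented purely for illustration and do not describe Marconi's proposal or any newsroom's actual system.

```python
# Toy sketch of template-driven "robot journalism": turning structured match
# data into a short news brief. Teams, scores and names are invented.
from dataclasses import dataclass, field

@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int
    scorers: list = field(default_factory=list)

def write_brief(m: MatchResult) -> str:
    # Pick a headline template based on the outcome.
    if m.home_goals > m.away_goals:
        headline = f"{m.home} beat {m.away} {m.home_goals}-{m.away_goals}"
    elif m.home_goals < m.away_goals:
        headline = f"{m.away} win {m.away_goals}-{m.home_goals} away to {m.home}"
    else:
        headline = f"{m.home} and {m.away} draw {m.home_goals}-{m.away_goals}"
    # Add a body sentence from the structured data.
    if m.scorers:
        body = f"Goals came from {', '.join(m.scorers)}."
    else:
        body = "Neither side managed to find the net."
    return f"{headline}. {body}"

print(write_brief(MatchResult("Hometown FC", "Riverside United", 2, 1,
                              ["Rossi", "Bianchi", "Verdi"])))
```

Real systems are far more sophisticated, but the principle is the same: the machine fills templates from structured data, while judgment about what matters stays with the editor.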
There are already news organizations that rely on AI for similar content and more: see the case of The Canadian Press, which uses it for translations and for articles on weather and sports, or that of the French news agency Agence France-Presse, where AI is used to detect which photos have been manipulated and which can be used in editorial content.
In still other cases it has become a fundamental tool for transcribing audio and video, for handling extremely large databases and complex data, or for analyzing fake news and deepfakes. But it is still the human being who checks this data, analyzes it, contextualizes it and reports it in the right way.
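As an example of the transcription use case, the following is a minimal sketch assuming the open-source Whisper speech-recognition library; the model size and the file name "interview.mp3" are placeholders, not a reference to any newsroom's actual pipeline.

```python
# Minimal sketch of AI-assisted transcription, assuming the open-source
# Whisper library (pip install openai-whisper). "interview.mp3" is a placeholder.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("interview.mp3")  # returns text plus timestamped segments

# The transcript is only a starting point: the journalist still checks names,
# quotes and context before anything is published.
print(result["text"])
```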
And what do young journalists think? For them it is a double-edged sword: having an AI like GPT-3 in the newsroom can teach you to write well, but it should not become an editor in place of a human; it would be much better to have an algorithm that checks the news, helps find sources and performs the tasks mentioned above.
The human and artificial danger: bias
In this long process, however, there is an element we have not yet dealt with: bias, the set of prejudices, innate or learned, that affects the final product. To give a simple example, a newspaper aligned with a specific political orientation will report more news about the politicians of that camp than about those of opposing parties, perhaps lending more legitimacy to their points of view. This factor is already present in today's journalism and removing it completely will be practically impossible, but it can also be found in several artificial intelligence systems, especially in the facial recognition algorithms used to predict crime.
In June, a group of 1,700 experts called the Coalition for Critical Technology was even formed; in an open letter published on Medium it asked that no further studies in favor of this technology be published, because "there is no way to develop a system capable of predicting crime without this mechanism being subject to bias, precisely because the notion of crime is itself naturally subject to prejudice".
In the world of journalism, or of research tout court, the problem could arise in the same way: since the algorithms are written by human developers, there is always a risk that bias will alter the analysis of the data. For this reason too, AI cannot be left to write news on its own; rather, it is the human editor who will have to control this tool according to their needs.
And here a vicious circle could arise, since the AI should in turn check that the human being does not skew the article with their own personal biases.
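To make the point about inherited bias concrete, here is a deliberately simplified Python sketch using NumPy and scikit-learn: a model trained on labels that were historically skewed against one group reproduces that skew even when both groups present identical evidence. All data and numbers are synthetic and purely illustrative; they do not come from any of the systems discussed above.

```python
# Toy illustration: a model trained on historically skewed labels reproduces
# that skew. All data here is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # a sensitive attribute (0 or 1)
signal = rng.normal(size=n)            # the only genuinely informative feature

# "Historical" labels are biased: group 1 was flagged more often at equal signal.
labels = (signal + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([signal, group]), labels)

# Evaluate both groups on the *same* signal values: any gap that remains
# is bias inherited from the training data, not evidence in the data itself.
test_signal = rng.normal(size=2_000)
for g in (0, 1):
    X = np.column_stack([test_signal, np.full_like(test_signal, g)])
    print(f"group {g}: flagged in {model.predict(X).mean():.0%} of identical cases")
```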
A professional issue not to be underestimated
Therefore, while the initial question may be difficult to answer definitively, since the "hype" around new technologies is counterbalanced by paranoia over the ethical and economic issue of possible job losses, we can try to answer another question: is fully human journalism better, or journalism supported by AI? Allowing ourselves a little "cynicism" about the economic problem, it would be better to have journalism able to offer the world the most in-depth news, free from political and racial bias and based on authoritative sources.
To achieve this goal, AI support would be needed not only to help the human editor search for sources and draft and correct the article, but also to monitor the editor's activity; a further step would be to scrutinize the work of the developer of the algorithm itself, perhaps through an impartial international regulator, to ensure that the entire final apparatus is as fair and correct as possible.
In short, it would be necessary to find the right middle ground, guaranteeing benefits both to the editor and to the newspaper's management, while placing the greatest focus on offering the reader the best possible experience.
Currently, however, this does not appear to be the objective that most national and international newspapers pursue for their readers.
Between hype and paranoia, there is still an opportunity
To quote part of the opening of the report written by Charlie Beckett for the Google News Initiative and the London School of Economics and Political Science: "No, robots will not take the place of journalism. Yes, machines may soon be able to do a lot of routine journalistic work. But the reality and potential of artificial intelligence (AI), machine learning and data processing is to give journalists new powers of discovery, creation and connection. [...] Algorithms will power the systems, but the human touch - the reporter's intuition and judgment - will be a key element. Can the news industry seize this opportunity?"
There are many problems on both sides, which is why understanding how to exploit artificial intelligence without harming either the journalist or the product of their work is, and will long remain, a challenge. Despite this, with the right mindset it will be possible to confront the economic, ethical and editorial threat of AI and turn it into a tool capable of enhancing research skills, leaving it to the human writer to analyze the sources and cover the issue at hand as thoroughly and professionally as possible.
Finally, the danger of "clickbait corruption" should not be underestimated: it is often exploited to excess to entice users to read news or editorials and share them on social networks, whether to criticize the bait headline or to discuss the article.
An ethical threat therefore already exists, generates no small amount of controversy, and is shaped by the activity of human beings and by algorithms that have already been tested and are in use. To seize the opportunity of artificial intelligence, then, we must first "fix the most dangerous bug" in current journalism.