ChatGPT and ethics: scientists break their silence.

Artificial intelligence continues to frighten many people, even though this
scientific revolution has also brought benefits to humanity. Voices are now
being raised calling for a slowdown in AI research, particularly concerning
OpenAI, the company that released ChatGPT.

Experts have signed an open letter calling for a temporary suspension of
artificial intelligence research. These experts are convinced that technologies
like ChatGPT can pose serious risks to society.
In an interview on the French television channel TF1, the researcher
Raja Chatila argued for a more ethical approach to artificial intelligence.
All technological innovations have caused worry before establishing themselves
and becoming indispensable. ChatGPT and its successors are raising fears today.
That is why leading experts in artificial intelligence research are sounding
the alarm and calling for better oversight of AI research.

In their open letter, these eminent researchers warn of an "uncontrolled race
to develop and deploy ever more powerful AI systems that no one, not even their
creators, can understand, predict, or reliably control." The signatories call
for an immediate moratorium: they ask all AI labs to suspend, for at least six
months, the training of AI systems more powerful than GPT-4, the latest model
from the Californian company OpenAI, released in mid-March.
The letter aims above all at caution, not at halting innovation.
The scientists also ask whether it is really necessary to automate all tasks,
including those that are fulfilling.

Raja Chatila, research director at the CNRS, has chaired an international
initiative on ethics in artificial intelligence within the Institute of
Electrical and Electronics Engineers (IEEE) since 2016. A specialist in
intelligent systems and robotics, he is among the signatories of this open
letter, published at the end of March on the website of the American
Future of Life Institute.

In this open letter, then, Raja Chatila and his fellow scientists are asking
OpenAI to pause its research for a period of six months.
Why is it urgent to act? Raja Chatila answered this question in his interview
on the French channel TF1.

The scientist speaks, in a way, of a red card being raised. He subtly borrows
the term used in football to send off a player who has committed a very serious
foul. Mr. Chatila explains that the idea of this open letter is to warn about
the frantic race to develop ever more capable systems, when these technologies
can pose far more serious risks to society. The scientist proposes pushing the
reflection much further upstream, in order to take the measures needed to limit
the negative consequences. He doubts the letter will have much impact, but he
stands by it: it was necessary to write this letter and demand the suspension
of research. For him, the request is very clear. It is addressed to the company
responsible for distributing ChatGPT, asking it to take a break.

Chatila is under no illusions. He knows that the company in question, namely
OpenAI, can obviously refuse. But he is convinced that the letter helps by
pointing out responsibilities. The expert acknowledges that denouncing is not
always enough to change things, but sometimes it is enough to move the lines.
He is therefore hopeful that OpenAI will hear this call, even though Chatila
remains well aware that the solution will not be found within six months, since
in any case the solutions to consider concern future systems, not those already
released. According to him, the scientists have one purpose: to awaken
consciences. People need to become aware of the negative consequences this
technology could have if it is not better regulated, especially from an ethical
point of view. Mr. Chatila's satisfaction is that everyone is talking about it
now, and for him that is a step forward.

The journalist asked how ChatGPT could be dangerous, since it is an artificial
intelligence that produces texts drawn from what humans have already written.

The scientist explains that these are generative artificial intelligences:
they can produce texts, in the case of ChatGPT, or images, in the case of
others, from a phenomenal database drawn from the internet, which served as
the training material for these systems.

ChatGPT can say anything, without any reservations. It does not understand what
it writes. Yet the text produced by the machine looks flawless, and it is this,
he says, that makes it extremely dangerous. The problem is that it remains
difficult, if not impossible, to tell whether what is written is correct or
not. The expert Chatila concludes that this poses problems of authorship and
misinformation.

The problems remain unresolved and frightening. In Belgium, according to
Le Figaro, a young father committed suicide at the end of March after six weeks
of conversation with a chatbot based on GPT-J, a model developed by EleutherAI.
This shows that the experts' open letter makes sense.