Start-up OpenAI designs sophisticated artificial intelligence software capable of generating images (DALL-E) or text (GPT-3, ChatGPT) – Copyright AFP Stefani Reynolds
Julie Jammot with Laurent Barthelemy in Paris
California start-up OpenAI has launched a chatbot capable of answering a variety of questions, but its impressive performance has reopened the debate on the risks associated with artificial intelligence (AI) technologies.
Conversations with ChatGPT, posted on Twitter by fascinated users, show a kind of omniscient machine, capable of explaining scientific concepts and writing scenes for a play, university dissertations or even functional lines of computer code.
“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” Claude de Loupy, director of Syllabs, a French company specializing in automatic text generation, told AFP.
“When you start asking very specific questions, ChatGPT’s response can be off the mark,” but its overall performance is still “really impressive,” with a “high level of linguistics,” he said.
OpenAI, co-founded in 2015 in San Francisco by billionaire tech mogul Elon Musk, who left the business in 2018, received $1 billion from Microsoft in 2019.
The startup is best known for its automated creation software: GPT-3 for text generation and DALL-E for image generation.
ChatGPT can ask its interlocutor for clarification and produces fewer bizarre responses than GPT-3, which, despite its prowess, sometimes generates absurd results, De Loupy said.
– Cicero –
“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish,” said Sean McGregor, a researcher who runs a database of AI-related incidents.
“Chatbots are getting a lot better at the ‘history problem’ where they act consistently with the history of queries and responses. Chatbots have graduated from goldfish status.”
Like other programs that rely on deep learning, which mimics neural activity, ChatGPT has one major weakness: “It doesn’t have access to meaning,” De Loupy said.
The software cannot justify its choices, such as explaining why it chose the particular words that make up its answers.
However, AI technologies capable of communication are increasingly able to give an impression of thought.
Researchers at Facebook parent Meta recently developed a computer program called Cicero, after the Roman statesman.
The software has proven to be proficient at Diplomacy, a board game that requires negotiation skills.
“If you don’t speak like a real person, showing empathy, building relationships and speaking knowledgeably about the game, you won’t find other players willing to work with you,” Meta said in the research results.
In October, Character.ai, a startup founded by former Google engineers, put an experimental chatbot online that can take on any persona.
Users create characters based on a short description and can then “chat” with a fake Sherlock Holmes, Socrates, or Donald Trump.
– ‘Just a machine’ –
This level of sophistication fascinates and worries some observers, who express concern that these technologies could be misused to mislead people, spread false information, or create increasingly credible scams.
What does ChatGPT think of these dangers?
“There are potential dangers in building highly sophisticated chatbots, particularly if they are designed to be indistinguishable from humans in their language and behavior,” the chatbot told AFP.
Some companies are implementing security measures to prevent abuse of their technologies.
On its welcome page, OpenAI lays out disclaimers, saying that the chatbot “may occasionally generate incorrect information” or “produce harmful instructions or biased content.”
And ChatGPT refuses to take sides.
“OpenAI made it incredibly difficult to get the model to express opinions about things,” McGregor said.
McGregor once asked the chatbot to write a poem on an ethical topic.
“I am just a machine, a tool for you to use, I do not have the power to choose or reject. I can’t weigh the options, I can’t judge what’s right, I can’t make up my mind on this fateful night,” it replied.
On Saturday, OpenAI co-founder and CEO Sam Altman took to Twitter and reflected on the debates surrounding AI.
“It is interesting to watch how people begin to debate whether powerful AI systems should behave the way users want or the way their creators intend,” he wrote.
“The question of whose values we align these systems to will be one of the most important debates society has ever had.”