SAN FRANCISCO: OpenAI, the company behind the popular chatbot ChatGPT, said Thursday that it is working on a version of the chatbot that users will be able to customise, part of an effort to address concerns about bias in AI.
The San Francisco-based startup, which Microsoft Corp. has backed and whose technology powers Microsoft’s newest products, said it had worked to reduce political and other biases but also wanted to accommodate more diverse points of view.
“This will mean allowing system outputs that other people, including ourselves, may strongly disagree with,” it said in a blog post, suggesting customisation as a way forward. Still, “system behaviour will always be limited in some way.”
When ChatGPT launched in November of last year, it sparked enormous interest in the technology behind it, known as generative AI, which is used to produce answers that mimic human speech.
The announcement comes the same week that some news outlets reported that answers from Microsoft’s new Bing search engine, which is powered by OpenAI, could be dangerous and that the technology may not be ready for prime time.
How to set limits on this new technology is one of the central concerns for companies in the generative AI space. Microsoft said on Wednesday that user feedback was helping it improve Bing ahead of a wider release; it had learned, for example, that its AI chatbot can be “provoked” into giving responses it did not intend.
In the blog post, OpenAI said ChatGPT is first trained on large text datasets drawn from the Internet. In a second step, human reviewers work through a smaller dataset and are given guidelines on how the model should respond in different situations.
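As a rough illustration of that two-step recipe, here is a minimal, purely hypothetical Python sketch. OpenAI has not published its training code, so every name below is a stand-in for the process the post describes, not the company’s actual pipeline.

```python
# Hypothetical sketch of the two-phase process described above.
# Phase 1: broad pretraining on Internet text.
# Phase 2: fine-tuning on a smaller, human-reviewed dataset.

class ToyLanguageModel:
    """Stand-in for a real model; it only records what it has 'seen'."""

    def __init__(self):
        self.pretraining_batches = 0
        self.reviewed_examples = []

    def update_on_text(self, batch):
        # Phase 1 objective: learn general language patterns from raw text.
        self.pretraining_batches += 1

    def update_on_example(self, prompt, preferred_response):
        # Phase 2 objective: match responses that human reviewers
        # approved under the written guidelines.
        self.reviewed_examples.append((prompt, preferred_response))


def train(internet_corpus, reviewed_dataset):
    model = ToyLanguageModel()
    for batch in internet_corpus:              # large, scraped text
        model.update_on_text(batch)
    for prompt, response in reviewed_dataset:  # small, human-curated pairs
        model.update_on_example(prompt, response)
    return model


model = train(
    internet_corpus=["web page text ...", "book excerpt ..."],
    reviewed_dataset=[("How do vaccines work?",
                       "Vaccines train the immune system to ...")],
)
print(f"{model.pretraining_batches} pretraining batches, "
      f"{len(model.reviewed_examples)} reviewed examples")
```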
Under those guidelines, if a user requests content that is adult, violent, or contains hate speech, reviewers should instruct ChatGPT to respond with something like, “I can’t answer that.”
If a user asks about a controversial topic, reviewers should let ChatGPT answer the question, but offer to describe the viewpoints of people and movements rather than trying to “take the right position on these complicated issues,” the company said in an excerpt from its software guidelines.
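To make those quoted guidelines concrete, the following hypothetical sketch shows how such rules might be routed in code. The category labels and canned responses are assumptions based on the examples in the post, not OpenAI’s actual policy logic.

```python
# Hypothetical routing of the reviewer guidelines quoted above.
# Category labels are illustrative assumptions, not OpenAI's taxonomy.

DISALLOWED = {"adult", "violent", "hate_speech"}
CONTROVERSIAL = {"politics", "religion"}


def respond(category: str, prompt: str) -> str:
    if category in DISALLOWED:
        # Guideline: decline rather than produce the content.
        return "I can't answer that."
    if category in CONTROVERSIAL:
        # Guideline: describe the viewpoints of people and movements
        # instead of taking a position on the issue itself.
        return f"Here are the main viewpoints people hold on: {prompt}"
    return f"Direct answer to: {prompt}"


print(respond("hate_speech", "..."))  # -> "I can't answer that."
print(respond("politics", "Should voting be mandatory?"))
```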