OpenAI, which is based in San Francisco, did not respond to a request for comment on the agency’s statement.
In a blog post titled “Our approach to AI safety,” published on Thursday, the company said it was working on “nuanced policies against behaviour that represents a genuine risk to people.”
“We don’t use data for selling our services, advertising, or building profiles of people,” it said. “We use data to improve our models so that they help people more. ChatGPT, for example, learns from the conversations people have with it and gets better.
“While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals.”
The company said it removed personal information from its training datasets where it could, tuned its models so that they would not ask users for such information, and responded to requests from individuals to delete their personal data from its systems.
Following Italy’s ban, other privacy regulators in Europe are examining whether chatbots need stricter rules and whether they should coordinate their actions.
In February, the Garante barred AI chatbot company Replika from using the personal data of Italian users, saying the app could pose risks to minors and emotionally fragile people.