This week, OpenAI launched a new image generator in ChatGPT, which quickly went viral for its ability to create Studio Ghibli-style images. Beyond the pastel illustrations, GPT-4o's native image generator significantly upgrades ChatGPT's capabilities, improving image editing, text rendering, and spatial representation.
However, one of the most notable changes OpenAI made this week involves its content moderation policies, which now allow ChatGPT, upon request, to generate images depicting public figures, hate symbols, and racial features.
OpenAI previously rejected these kinds of prompts as too controversial or harmful. But now, the company has "evolved" its approach, according to a blog post published Thursday by OpenAI's model behavior lead, Joanne Jang.
"We're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm," Jang said. "The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn."
These adjustments seem to be part of OpenAI's larger plan to "uncensor" ChatGPT. OpenAI announced in February that it is starting to change how it trains its AI models, with the ultimate goal of letting ChatGPT handle more requests, offer diverse perspectives, and reduce the topics the chatbot refuses to engage with.
Under the updated policy, ChatGPT can now generate and modify images of Donald Trump, Elon Musk, and other public figures that OpenAI previously did not allow. Jang says OpenAI does not want to be the arbiter, deciding who should and should not be allowed to be generated by ChatGPT. Instead, the company is giving users an opt-out option if they do not want ChatGPT to depict them.
In a white paper released Tuesday, OpenAI also said it will allow ChatGPT users to "generate hate symbols," such as swastikas, in educational or neutral contexts, as long as they do not "praise or endorse extremist agendas."
In addition, OpenAI is changing how it defines "offensive" content. Jang says ChatGPT used to refuse requests concerning physical characteristics, such as "make this person's eyes look more Asian" or "make this person heavier." In our testing, we found that the new ChatGPT image generator fulfills these types of requests.
ChatGPT can also now mimic the styles of creative studios, such as Pixar or Studio Ghibli, but still restricts imitating the styles of individual artists. As we have noted before, this could rekindle an existing debate around the fair use of copyrighted works in AI training datasets.
It's worth noting that OpenAI is not completely opening the floodgates to misuse. GPT-4o's native image generator still refuses many sensitive queries, and in fact it has more safeguards around generating images of children than DALL-E 3, ChatGPT's previous image generator, according to GPT-4o's white paper.
But OpenAI is relaxing its guardrails in other areas after years of conservative complaints about alleged "censorship" by Silicon Valley companies. Google previously faced backlash over Gemini's AI image generator, which created multiracial images for queries such as "U.S. founding fathers" and "German soldiers in World War II" that were obviously inaccurate.
Now, the culture war around AI content moderation may be coming to a head. Earlier this month, Republican congressman Jim Jordan sent questions to OpenAI, Google, and other tech giants about potential collusion with the Biden administration to censor AI-generated content.
In a statement to us, OpenAI rejected the idea that its content moderation changes were politically motivated. Rather, the company says the shift reflects a "long-held belief in giving users more control," and that OpenAI's technology is only now becoming good enough to navigate sensitive subjects.
Regardless of its motivation, it is certainly a convenient time for OpenAI to change its content moderation policies, given the potential for regulatory scrutiny under the Trump administration. Silicon Valley giants such as Meta and X have adopted similar policies, allowing more controversial topics on their platforms.
While OpenAI's new image generator has only produced some viral Studio Ghibli memes so far, it is unclear what the broader effects of these policies will be. ChatGPT's recent changes may go over well with the Trump administration, but letting a chatbot answer sensitive questions could land OpenAI in hot water soon enough.