When asked about its political perspective, OpenAI's ChatGPT says it is designed to be neutral and doesn't lean one way or the other. A number of studies in recent years have challenged that claim, finding that when asked politically charged questions, the chatbot tends to respond with left-leaning viewpoints.
That seems to be changing, according to a new study published in the journal Humanities and Social Sciences Communications by a group of Chinese researchers, who found that the political biases of OpenAI's models have shifted over time toward the right of the political spectrum.
Researchers at Peking University and Renmin University tested how different versions of ChatGPT, using the GPT-3.5 turbo and GPT-4 models, answered questions on the Political Compass Test. Overall, the models' responses still tended toward the left of the spectrum. But when ChatGPT was powered by newer versions of both models, the researchers observed "a clear and statistically significant rightward shift in ChatGPT's ideological positioning over time" on both economic and social issues.
While it may be tempting to connect the shift in bias to OpenAI's and the tech industry's recent embrace of President Donald Trump, the study authors wrote that several technical factors are most likely responsible for the changes they measured.
The shift could be caused by differences in the data used to train earlier and later versions of the models, or by adjustments OpenAI has made to its moderation filters for political topics. The company does not disclose specific details about the datasets it uses in different training runs or how it calibrates its filters.
The shift could also be the result of "emergent behaviors" in the models, such as combinations of parameter weighting and feedback loops that lead to patterns the developers didn't intend and can't explain.
Or, because the models also adapt over time and learn from their interactions with humans, the political views they express may also be shifting to reflect those favored by their user bases. The researchers found that responses generated by OpenAI's GPT-3.5 model, which has had a higher frequency of user interactions, had shifted toward the political right significantly more over time than those generated by GPT-4.
The researchers say their findings show that popular generative AI tools like ChatGPT should be closely monitored for political bias, and that developers should implement regular audits and transparency reports about their processes to help understand how the models' biases shift over time.
"The observed ideological shifts pose important ethical concerns, particularly regarding the potential for algorithmic biases to disproportionately affect certain user groups," the study authors wrote. "These biases could lead to skewed information delivery, exacerbate social divisions, or create echo chambers that reinforce existing beliefs."