The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members, and that introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”
The news comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”
“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.
The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher says.
“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”
Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident in which one of Google’s models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, a highly unlikely scenario. In addition to Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for altering the political leanings of large language models, as reported by britcommerce.
A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.
Since January, Musk’s so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment believed to be hostile to those who might oppose the Trump administration’s aims. Some government departments, such as the Department of Education, have archived and deleted documents that mention DEI. DOGE has also targeted NIST, the parent organization of AISI, in recent weeks. Dozens of employees have been fired.