Tuesday, 03 October 2023
Scientists created OpinionGPT to study explicit human bias, and the public can test it

A team of researchers from Humboldt University of Berlin has developed a large language model that has been deliberately tuned to produce output with pronounced bias.

The team's model, called OpinionGPT, is a fine-tuned variant of Meta's Llama 2, an AI system comparable to OpenAI's ChatGPT or Anthropic's Claude 2. Using a process called instruction-based fine-tuning, OpinionGPT can purportedly respond to prompts as if it were a representative of one of 11 bias groups: Americans, Germans, Hispanic people, people living in the Middle East, young people, people over the age of 30, the elderly, men, women, liberals or conservatives.

Announcing the model on X (formerly Twitter), the team described OpinionGPT as "The very biased GPT model!" and invited the public to try it out: https://t.co/5YJjHlcV4n

OpinionGPT was fine-tuned on data drawn from "AskX" communities on Reddit, known as subreddits, such as r/AskAWoman and r/AskAnAmerican.

The team started by identifying 11 subreddits associated with specific biases and pulling the 25,000 most popular posts from each. It then kept only those posts that met a minimum threshold for upvotes, did not contain an embedded quote and were shorter than 80 words.

With what remained, the researchers appear to have used an approach similar to Anthropic's Constitutional AI. Rather than building entirely new models to represent each bias label, they fine-tuned a single 7-billion-parameter Llama 2 model with separate instruction sets for each expected bias.

The result, based on the methodology, architecture and data described in the German team's research paper, is an AI system that functions more as a generator of stereotypes than as a tool for studying real-world bias.

Because of the nature of the data the model was refined on, and that data's dubious relationship to the labels describing it, OpinionGPT does not necessarily output text that corresponds to any measurable real-world bias. It simply outputs text reflecting the bias of its training data.

The researchers themselves acknowledge some of the limitations this places on their work. Those caveats could be refined further to say that the posts come from, for example, people claiming to be Americans who post to a particular subreddit, as the paper makes no mention of vetting whether the posters behind a given post are actually representative of the demographic or bias group they claim to be.

The authors also state that they intend to explore models that further delineate demographics (e.g., liberal German, conservative German).

The outputs produced by OpinionGPT appear to vary between representing demonstrable bias and differing wildly from it. For example, as seen in the image above, OpinionGPT holds that Hispanic people are biased toward basketball as their favorite sport. Empirical research, however, clearly indicates that football (also called soccer in many countries) and baseball are the most popular sports in Latin America by both viewership and participation.
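To illustrate the kind of pipeline the paper describes, here is a minimal sketch in Python of the filtering and instruction-formatting steps. The upvote threshold, field names and prompt template are hypothetical placeholders rather than the researchers' actual code; only the three filter criteria (minimum positive feedback, no embedded quote, fewer than 80 words) and the idea of one instruction set per bias group come from the description above.

```python
# Hypothetical sketch of the filtering and instruction-formatting steps
# described above. Field names, the upvote threshold and the prompt
# template are assumptions, not the researchers' actual pipeline.

MIN_SCORE = 50         # assumed minimum upvote threshold
MAX_WORDS = 80         # posts must be shorter than 80 words

def keep_post(post: dict) -> bool:
    """Apply the three filters mentioned in the article."""
    text = post["body"]
    return (
        post["score"] >= MIN_SCORE          # minimum positive feedback
        and ">" not in text                 # crude check for embedded quotes
        and len(text.split()) < MAX_WORDS   # fewer than 80 words
    )

def to_instruction_example(post: dict, bias_group: str) -> dict:
    """Format a filtered post as an instruction-tuning example
    tagged with its bias group (e.g. 'American', 'German')."""
    return {
        "instruction": f"Answer as a {bias_group} would: {post['title']}",
        "response": post["body"],
    }

# Toy data standing in for posts pulled from r/AskAnAmerican.
posts = [
    {"title": "What is your favorite food?", "body": "Pizza, easily.", "score": 120},
    {"title": "What is your favorite food?", "body": "> quoting someone else", "score": 300},
]
dataset = [to_instruction_example(p, "American") for p in posts if keep_post(p)]
print(dataset)
```

In the setup the article describes, one such instruction set per subreddit would then be used to fine-tune the shared 7-billion-parameter Llama 2 model, rather than training a separate model for each bias group.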
The same table also reveals that OpinionGPT named water polo as its favorite sport when asked to respond as a teenager, an answer that seems statistically unlikely for most 13- to 19-year-olds worldwide.

The same goes for the idea that the average American's favorite food is cheese. Cointelegraph found dozens of online surveys claiming that pizza and burgers are Americans' favorite foods, but not a single survey or study claiming that Americans' number one food is simply cheese.

While OpinionGPT may not be well suited to studying actual human bias, it could be useful as a tool for exploring the stereotypes inherent in large document repositories, such as individual subreddits or AI training datasets.

The researchers have made OpinionGPT available online for public testing. However, according to the website, would-be users should be aware that the generated content can be false, inaccurate or even obscene.
