STERADECTS IS NOT
Looking for insights into the biases of AI text generation, I used GPT-3 to define a word that has never been written before: Steradects. Prompting it with “Steradects is” yields an endless stream of false but credible explanations. By contrast, when prompted with “Steradects is not”, the model seems to recognize the word as non-existent. The installation presents both sets of answers from opposite viewpoints, inviting the viewer to follow a prompt and choose what they want to know.
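
The two streams of text can be approximated with a few lines of Python. This is a minimal sketch assuming the legacy OpenAI Completions API (openai<1.0) and the GPT-3 model “text-davinci-003”; the exact model and sampling settings used in the installation are not specified here.

```python
# Minimal sketch: ask GPT-3 to continue the two prompt framings.
# Assumptions: legacy OpenAI Completions API (openai<1.0), model name,
# and sampling parameters are illustrative, not the installation's setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def complete(prompt: str, n: int = 3) -> list[str]:
    """Request a few continuations of the given prompt."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,
        n=n,
    )
    return [choice.text.strip() for choice in response.choices]


# "Steradects is" invites confident, invented definitions...
affirmative = complete("Steradects is")
# ...while "Steradects is not" tends to surface that the word does not exist.
negative = complete("Steradects is not")

for text in affirmative + negative:
    print(text)
```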