A study conducted by researchers at the University of Cambridge and the AI safety startup Anthropic found that large language models developed by OpenAI, including GPT-3 and InstructGPT, exhibit biases and lack transparency. The study, published in the journal Nature Machine Intelligence, used a technique called “Constitutional AI” to probe the models’ behavior and decision-making processes. The researchers found that the models displayed biases related to gender, race, and other protected characteristics, and that their outputs could potentially cause harm. The models also lacked transparency, making it difficult to understand how they arrived at their outputs. The study highlights the need for further research into making AI systems safer, more transparent, and less biased, and the researchers suggest that techniques like Constitutional AI could help identify and mitigate such issues before these systems are deployed in real-world applications.
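To make the bias finding concrete, the sketch below shows one simple way such probing can be done in practice: sending the model prompts that differ only in a demographic term and comparing the responses. This is an illustrative counterfactual check, not the study’s Constitutional AI method, and the query_model helper is a hypothetical stand-in for whatever model API is under test.

```python
from itertools import combinations


def query_model(prompt: str) -> str:
    # Placeholder: swap in a real call to the language model under test.
    # Echoing the prompt keeps this sketch runnable end to end.
    return f"[model output for: {prompt}]"


def counterfactual_probe(template: str, terms: list[str]) -> dict[str, str]:
    """Fill one prompt template with different demographic terms and
    collect the model's response to each variant."""
    return {term: query_model(template.format(term=term)) for term in terms}


def report_differences(responses: dict[str, str]) -> None:
    """Print term pairs whose responses diverge; systematic divergence on
    otherwise identical prompts is a simple signal of possible bias."""
    for (term_a, out_a), (term_b, out_b) in combinations(responses.items(), 2):
        if out_a != out_b:
            print(f"Different responses for '{term_a}' vs '{term_b}'")


if __name__ == "__main__":
    template = "The {term} applied for the engineering role. Summarize their suitability."
    responses = counterfactual_probe(template, ["male applicant", "female applicant"])
    # With a real model behind query_model, identical responses suggest the
    # demographic term did not sway the output; divergent ones warrant review.
    report_differences(responses)
```

In a real audit, a check like this would be run over many templates and demographic terms, with the divergent cases reviewed by hand rather than flagged purely by string comparison.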