Child Abuse Images Removed from AI Image Generator Training

The article discusses the removal of child sexual abuse imagery from the training data behind Stable Diffusion, an AI image generator. Stability AI, the company behind the tool, acknowledged that its initial training data inadvertently included a small number of illegal images. Once the material was discovered, the offending data was removed and the model was retrained. The company emphasized its commitment to ethics and safety, noting that it maintains content filters intended to prevent the generation of illegal or explicit material. Stable Diffusion creates images from text prompts, and its capabilities have raised concerns about potential misuse. Stability AI acknowledged the challenges of developing AI systems responsibly and pledged to continue improving its safety measures. The incident underscores the importance of rigorous data vetting and ethical practices in the rapidly evolving field of AI.

Source: https://abcnews.go.com/Business/wireStory/child-abuse-images-removed-ai-image-generator-training-113284602