AI images scandalized a California elementary school. Now the state is pushing new safeguards – The Mercury News



By Khari Johnson, CalMatters

In December, fourth graders in a class at Delevan Drive Elementary School in Los Angeles were given a homework assignment: Write a book report about Pippi Longstocking, then draw or use artificial intelligence to make a book cover.

When Jody Hughes’ daughter asked Adobe Express for Education, graphic design software provided by her teacher, to generate an image of “long stockings a red headed girl with braids sticking straight out,” it produced nothing resembling the Swedish children’s book character she had accurately described. Instead, using recently added artificial intelligence features, it generated sexualized imagery of women in lingerie and bikinis. Hughes quickly contacted other parents, who said they were able to reproduce similar results on their own school-issued Chromebook computers. Days later, the parent group Schools Beyond Screens told the LA school board it opposed further use of the Adobe software.

The incident raised questions not only about the LA school district’s use of a particular AI product but also about guidelines state administrators provide to schools throughout California on how to safely adopt the technology. A few weeks after the incident, the state Department of Education published a new edition of the guidelines, which it had been working on for several months with help from a group of 50 teachers, administrators, and experts. The revision came in response to instructions from the Legislature, which passed two laws in 2024 telling the department, essentially, to get a handle on AI’s rapid spread among students, teachers and administrators.

Critics wonder if the guidelines would have helped avoid what parents referred to as Pippigate; the controversy, they say, provides evidence that districts, schools, and parents, who often lack the time or resources to ensure that software tools don’t produce harmful output, need more support from the state. The guidelines, they add, are also too vague in places and don’t do enough to define guardrails for how teachers use AI in the classroom.

The issues with the guidelines call into question whether the department can effectively respond to instructions from elected officials on how to safeguard a technology that, according to the guidelines themselves, can leave children isolated and with narrowed perspectives.
