
Teenage girls face epidemic of fake nudity in schools | GuyWhoKnowsThings


Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex in Westfield, New Jersey, with a scoreboard outside proudly welcoming visitors from the school's "Home of the Blue Devils" sports teams.

But for Dorota Mani, things were far from business as usual.

In October, some 10th-grade girls at Westfield High School, including Mani's 14-year-old daughter Francesca, alerted administrators that boys in their class had used artificial intelligence software to fabricate sexually explicit images of them and were circulating the fake photos. Five months later, the Manis and other families say, the district has done little to publicly address the manipulated images or update school policies to prevent exploitative use of AI.

"It seems as if the Westfield High School administration and the district are engaging in a master class to make this incident vanish into thin air," Ms. Mani, founder of a local preschool, admonished board members during the meeting.

In a statement, the school district said it had opened an “immediate investigation” upon learning of the incident, had immediately notified and consulted with police and had provided group counseling to the sophomore class.

“All school districts are grappling with the challenges and impact of artificial intelligence and other technologies available to students anytime, anywhere,” Raymond Gonzalez, superintendent of Westfield Public Schools, said in the statement.

Surprised last year by the sudden popularity of AI-powered chatbots like ChatGPT, schools across the United States rushed to rein in the text-generating bots in an effort to prevent student cheating. Now, a more alarming AI phenomenon, image generation, is shaking up schools.

Boys in several states have used widely available "nudification" apps to alter real, identifiable photographs of their clothed female classmates, shown attending events such as school proms, into graphic, convincing-looking images of the girls with AI-generated exposed breasts and genitals. In some cases, boys shared the fake images in the school cafeteria, on the school bus or through group chats on platforms such as Snapchat and Instagram, according to school and police reports.

These types of digitally altered images, known as "deepfakes" or "deepnudes," can have devastating consequences. Child sexual exploitation experts say the use of non-consensual AI-generated images to harass, humiliate and intimidate young women can damage their mental health, reputation and physical safety, as well as pose risks to their university and career prospects. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking AI-generated images of identifiable minors engaging in sexually explicit conduct.

However, students' use of exploitative AI applications in schools is so new that some districts seem less prepared to address it than others. That can make safeguards precarious for students.

"This phenomenon has occurred very suddenly and may be catching many school districts off guard and not knowing what to do," said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who writes about legal issues related to computer-generated child sexual abuse imagery.

At Issaquah High School near Seattle last fall, a police detective investigating complaints from parents about explicit AI-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to the police, according to a report from the Issaquah Police Department. The school official then asked "what he was supposed to report," according to the police document, prompting the detective to inform him that schools are required by law to report sexual abuse, including possible child sexual abuse material. The school later reported the incident to Child Protective Services, according to the police report. (The New York Times obtained the police report through a public records request.)

In a statement, the Issaquah School District said it had spoken with students, families and police as part of its investigation into the deepfakes. The district also "shared our empathy," the statement said, and provided support to the affected students.

The statement added that the district had reported the "AI-generated fake images to Child Protective Services out of an abundance of caution," and noted that "according to our legal team, we are not required to report fake images to law enforcement."

At Beverly Vista High School in Beverly Hills, California, administrators contacted police in February after learning that five boys had created and shared explicit AI-generated images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said it was prohibited by California education code from confirming whether the expelled students were the students who had fabricated the images.)

Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools should not allow students to create and circulate sexually explicit images of their peers.

“That's extreme bullying when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and violating” to the girls and their families. “It's something we absolutely will not tolerate here.”

Schools in the small, prosperous communities of Beverly Hills and Westfield were among the first to publicly acknowledge deepfake incidents. The details of the cases, described in district communications with parents, school board meetings, legislative hearings and court filings, illustrate the variability of schools' responses.

The Westfield incident began last summer when a high school boy sent a friend request on Instagram to a 15-year-old classmate who had a private account, according to a lawsuit against the boy and his parents filed by the girl and her family. (The Manis said they are not involved in the lawsuit.)

After she accepted the request, the boy copied photos of her and several other schoolmates from their social media accounts, according to court documents. He then used an artificial intelligence app to fabricate sexually explicit, "fully identifiable" images of the girls and shared them with schoolmates through a Snapchat group, the court documents say.

Westfield High began investigating in late October. While administrators discreetly took some boys aside for questioning, Francesca Mani said, they summoned her and other 10th-grade girls who had been subjected to the deepfakes to the school office by announcing their names over the school intercom.

That week, Mary Asfendis, principal of Westfield High, sent an email to parents alerting them to "a situation that has resulted in widespread misinformation." The email went on to describe the deepfakes as a "very serious incident." She also said that despite students' concerns about possible image sharing, the school believed "all images created have been deleted and are not circulating."

Dorota Mani said Westfield administrators had told her the district suspended the student accused of fabricating the images for a day or two.

Shortly after, she and her daughter began speaking publicly about the incident, urging school districts, state legislators, and Congress to enact laws and policies specifically banning explicit deepfakes.

“We have to start updating our school policy,” Francesca Mani, now 15, said in a recent interview. “Because if the school had AI policies, students like me would have been protected.”

Parents, including Dorota Mani, also filed harassment complaints with Westfield High last fall over the explicit images. However, during the March meeting, Mani told school board members that the high school had not yet provided parents with an official report about the incident.

Westfield Public Schools said it could not comment on any disciplinary action for reasons of student confidentiality. In a statement, Dr. González, the superintendent, said the district was strengthening its efforts “by educating our students and establishing clear guidelines to ensure these new technologies are used responsibly.”

Beverly Hills schools have taken a stronger public stance.

When administrators learned in February that eighth-graders at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message (subject: "Shocking misuse of artificial intelligence") to all district parents, staff, and middle and high school students. The message urged community members to share information with the school to help ensure that students' "disruptive and inappropriate" use of AI "is stopped immediately."

It also warned that the district was prepared to impose severe punishments. "Any student found creating, disseminating, or possessing AI-generated images of this nature will face disciplinary action," including a recommendation for expulsion, the message said.

Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because abuse of AI was making students feel unsafe in schools.

“You hear a lot about physical security in schools,” he said. “But what isn't talked about is this invasion of students' personal and emotional safety.”
