The Intersection of AI and Sexism in South Korea
Artificial intelligence was created to advance society, yet here it is, a machine churning out child pornography. What's worse: the culprit is likely to be someone you know. And it's worse than you think.
The AI craze has been amplified by corporations and media alike following the release of 2023's most popular tools, such as ChatGPT, CharacterAI, Quillbot, and more. But with every new development, its evil counterpart grows in tandem. Cryptocurrency, created to be free from bureaucratic and governmental control, now fosters an environment for illegal activity. Plastic, originally made to replace paper and reduce deforestation, now surpasses it as a pollutant. Even GMOs, designed to stimulate food production and crop yields, now spark controversy over health concerns and environmental risks. Similarly, AI technologies, created to improve efficiency and accessibility, face scrutiny for potential job displacement, biased algorithms, and privacy violations.
All of this, though, has already been extensively discussed in the media, taught in classrooms, and, frankly, begun to feel repetitive.
That is, until I got a phone call from my cousin, a high school student back in Korea.
“It must’ve been someone in my class… There’s no way anyone could get a hold of her pictures. Her account is private.”
South Korea has a long-standing history of online sex crimes, dating back to the early 2010s. Molka, roughly translated as "secret camera," refers to heavily camouflaged cameras hidden in crevices in bathrooms, hotel rooms, and public transportation that stream live footage to viewers. This was followed by a cybersex trafficking ring, led by a college student, that drew over 60,000 viewers. Between 2018 and 2021, the student blackmailed and coerced targets, including children as young as 11, into producing exploitative videos across multiple Telegram chat rooms.
In September 2024, reports of a new kind of crime, deepfake pornography, took Korean news by storm. There has been a rapid rise in nauseating cases of students creating fake pornographic videos of their peers. All it takes is five full-face photos and an AI tool to produce a sexually exploitative video within minutes. The suspects are often those closest by: young boys across the classroom targeting fellow students and teachers, colleagues at the workplace, extended family members. The uncertainty over who could have created a deepfake, and who could be a victim, has created unease across the country. "For teenagers, deepfakes have become part of their culture; they're seen as a game or a prank," said a counselor for young sex offenders. One Telegram chat room notorious for producing and distributing deepfake pornography reportedly had around 220,000 members.
My cousin's classmates have themselves been victims of blackmail by anonymous users threatening them with hyperrealistic deepfakes. The only solution? To take down every image on social media with a clear view of one's face.
How is it that the “solution” to a problem created by a select group of warped individuals is to hide away in fear?
Sickening reports like these are now routine for the South Korean public. The disappointment and rage felt by the community, fueled by stories from loved ones and warnings sent out by school administrations, should not be normalized. The root issue? Korea's deeply ingrained sexism, which ranks far worse than in most OECD countries, seems to be worsening with time. A viral chart released by the Financial Times visualizes this divide.
The word "feminist" has been stigmatized in Korea. The mere mention of sexism can make a room go cold. But how can one not be a feminist living in a country like this? And how can regulators and AI developers prepare to combat this escalating issue? Questions remain, but one thing is clear: the conversation about sexism in Korea is far from over, and it must evolve alongside advances in AI.
Regulators and developers have a responsibility to ensure that AI is not used in ways that harm the community. Addressing these complex issues requires open dialogue, education, regulatory change, and a shift in mindsets. Only by confronting these challenges head-on can progress be made.
Featured image by Financial Times.