AI-generated real-person child porn sold in Japan, report says

TOKYO, June 3, 2024—It’s happening in Japan too, and not surprisingly, the authorities are sitting on their hands and doing little.

AI training data that can be used to produce images closely resembling real children is being sold online, the Yomiuri daily reported June 2.

The fine-tuning datasets include images of Japanese former child celebrities. Sexual images closely resembling the children were being sold on a separate website, and it is believed that the data was used to create them.

Regulating the trade of this type of data is said to be difficult under Japan’s law against child prostitution and child pornography, and experts are calling for the establishment of legislation to address the problem.

Learning from a vast amount of image data, AI image generators produce elaborate, photorealistic images from simple written instructions. Generating an image that closely resembles a specific person is difficult with just the input of their name, but some AI image generators can create images closely resembling a particular person if trained on fine-tuning datasets containing dozens of images of that person. Such fine-tuning data is known by names such as “LoRA” (low-rank adaptation).

The Yomiuri Shimbun identified an English-language website selling fine-tuning datasets on several former child celebrities who have worked in Japan and abroad. The site listed the names of actual personalities in the descriptions of the data, and each was being sold for the equivalent of $3 in crypto assets. The site also offered data on adult women.

A separate Japanese site was selling sexual images closely resembling Japanese child celebrities on whom fine-tuning data was available to buy, and the images were marked as having been produced by AI. According to some experts, the characteristics of the images indicate that they were likely produced using fine-tuning data.

AI image generators can quickly produce a massive number of images, and the subject’s pose and facial expression can be freely set. If fine-tuning datasets on real people are circulated, sexually explicit images that closely resemble not only real children but also real adults may spread widely.

According to the Justice Ministry, the law against child prostitution and child pornography is applicable only when a child victim exists. Court precedents show that even computer-generated images can be subject to regulations if they appear to depict a specific real child. However, some experts say that the law may not apply unless the physical and facial features of the generated images closely resemble a particular existing child. Therefore, whether AI-generated sexual images of children can be regulated remains to be seen.

“The finding uncovered the fact that generative AI and fine-tuning datasets are being misused to violate the rights of children,” said Prof. Takashi Nagase of Kanazawa University, a former judge well versed in issues related to online speech and expression. “The fear is that the damage may be spreading under the surface. This is a situation unanticipated by the current law, so it’s necessary to consider establishing relevant legislation to regulate AI-generated sexual images of children.”

#####

Japan makes first arrest for AI-generated computer viruses

TOKYO, May 28, 2024—Police May 27 arrested a 25-year-old man for producing an AI-generated computer virus, the first such arrest in the country, the daily Yomiuri reported on its website May 28.

The unidentified man manufactured the viruses on his PCs and smartphones using several interactive generative artificial intelligence programs that are publicly available on the internet, a violation of the 2011 amendment to the Criminal Law, enacted to address the advancement of information processing, which prohibits the creation of computer viruses.

His viruses had properties such as encrypting target data so the owner could not access it and demanding payment in crypto assets to unlock the data, the newspaper quoted police as explaining. The man admitted to having manufactured the ransomware, telling the police that he intended to make money by defrauding targets.

ChatGPT and other major generative AI programs refuse to provide answers related to crime, police said, but some interactive AI programs publicly accessible on the internet give answers that can be used to build ransomware. The man prompted those programs without disclosing his intention to manufacture a virus, obtaining design information and source code for data encryption and ransom demands.

The police did not give details on which provisions of the law the man infringed on.

Violations of the law are punishable by fines ranging from 500,000 yen ($3,200) to 5 million yen and/or up to five years of imprisonment.

The National Police Agency’s annual report on crime, released in February 2024, took note of growing cybercrime. Damage from internet banking crimes surged 465 percent in 2023 compared with 2022, it said. Ransomware cases decreased more than 10 percent in 2023, but the report cautioned against complacency. Potentially damaging access attempts in cyberspace jumped nearly 20 percent, it said.

###