A Tipping Point in Online Child Abuse

In 2025, new data show, the volume of child pornography online was likely larger than at any other point in history. A record 312,030 reports of confirmed child pornography were investigated last year by the Internet Watch Foundation, a U.K.-based organization that works around the globe to identify and remove such material from the web.

This is concerning in and of itself. It means that the overall volume of child porn detected on the internet grew by 7 percent since 2024, when the previous record had been set. But also alarming is the tremendous increase in child porn, and in particular videos, generated by AI. At first blush, the proliferation of AI-generated depictions of child sexual abuse may leave the misimpression that no children were harmed. This is not the case. AI-generated, abusive images and videos feature and victimize real children—either because models were trained on existing child porn, or because AI was used to manipulate real photos and videos.

Today, the IWF reported that it found 3,440 AI-generated videos of child sex abuse in 2025; the year before, it found just 13. Social media, encrypted messaging, and dark-web forums have been fueling a steady rise in child-sexual-abuse material for years, and now generative AI has dramatically exacerbated the problem. Another awful record will very likely be set in 2026.

Of the thousands of AI-generated videos of child sex abuse the IWF discovered in 2025, nearly two-thirds were classified as “Category A”—the most severe category, which includes penetration, sexual torture, and bestiality. Another 30 percent were Category B, which depict nonpenetrative sexual acts. With this relatively new technology, “criminals essentially can have their own child sexual abuse machines to make whatever they want to see,” Kerry Smith, the IWF’s chief executive, said in a statement.

[Read: High school is becoming a cesspool of sexually explicit deepfakes]

The volume of AI-generated images of child sex abuse has been rising since at least 2023. For instance, the IWF found that over just a one-month span in early 2024, on just a single dark-web forum, users uploaded more than 3,000 AI-generated images of child sex abuse. In early 2025, the digital-safety nonprofit Thorn reported that among a sample of 700-plus U.S. teenagers it surveyed, 12 percent knew someone who had been victimized by “deepfake nudes.” The proliferation of AI-generated videos depicting child sex abuse lagged behind such photos because AI video-generating tools were far less photorealistic than image generators. “When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers,” Josh Thomas, an IWF spokesperson, told me. That has changed.

Last year, OpenAI released the Sora 2 model, Google released Veo 3, and xAI put out Grok Imagine. Meanwhile, other organizations have produced many highly advanced, open-source AI video-generating models. These open-source tools are generally free for anyone to use and have far fewer, if any, safeguards. There are almost certainly AI-generated videos and images of child sex abuse that authorities will never detect, because they are created and stored on personal computers; instead of having to find and download such material online, potentially exposing oneself to law enforcement, abusers can operate in secrecy.

OpenAI, Google, Anthropic, and several other top AI labs have joined an initiative to prevent AI-enabled child sex abuse, and all of the major labs say they have measures in place to stop the use of their tools for such purposes. Still, safeguards can be broken. In the first half of 2025, OpenAI reported more than 75,000 depictions of child sex abuse or child endangerment on its platforms to the National Center for Missing & Exploited Children, more than double the number of reports from the second half of 2024. A spokesperson for OpenAI told me that the firm designs its products to prohibit creating or distributing “content that exploits or harms children” and takes “action when violations occur.” The company reports all instances of child sex abuse to NCMEC and bans associated accounts. (OpenAI has a corporate partnership with The Atlantic.)

The advancement and ease of use of AI video generators, in other words, offer an entry point for abuse. This dynamic became clear in recent weeks, as people used Grok, Elon Musk’s AI model, to generate what were likely hundreds of thousands of nonconsensual sexualized images, primarily of women and children, in public on his social-media platform, X. (Musk insisted that he was “not aware of any naked underage images generated by Grok” and blamed users for making illegal requests; meanwhile, his employees quietly rolled back aspects of the tool.) While scouring the dark web, the IWF found that, in some cases, people had apparently used Grok to create abusive depictions of 11-to-13-year-old children that were then fed into more permissive tools to generate even darker, more explicit content. “Easy availability of this material will only embolden those with a sexual interest in children” and “fuel its commercialisation,” Smith said in the IWF’s press release. (Yesterday, the X safety team said it had restricted the ability to generate images of users in revealing clothing and that it works with law enforcement “as necessary.”)

[Read: Elon Musk cannot get away with this]

There are signs that the crisis of AI-generated child sex abuse will worsen. While more and more nations, including the United Kingdom and the United States, are passing laws that make generating and publishing such material illegal, actually prosecuting criminals is slow. Silicon Valley, meanwhile, continues to move at a breakneck pace.

Any number of new digital technologies have been used to harass and exploit people; the age of AI sex abuse was predictable a decade ago, yet it has begun nonetheless. AI executives, engineers, and pundits are fond of saying that today’s AI models are the least effective they will ever be. By the same token, AI’s ability to abuse children may only get worse from here.
