Trapping AI

“This is a methodically structured poisoning mechanism designed to feed nonsensical data to persistent bots and aggressive “AI” scrapers that circumvent robots.txt directives.”

Picture: a screenshot of a section of a page generated by babble, filled with useless text and dozens of links that drag crawlers ever deeper into the tarpit.

https://algorithmic-sabotage.github.io/asrg/trapping-ai
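The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the actual babble code: each page is deterministic nonsense derived from its URL path, and every link on it leads only to another tarpit page, so a scraper that ignores robots.txt wanders ever deeper without reaching real content.

```python
import hashlib
import random

# Minimal babble-style tarpit sketch (hypothetical; word list and page
# shape are assumptions, not the project's actual implementation).
WORDS = ("lattice", "murmur", "gradient", "sieve", "ember", "quorum",
         "tarpit", "vellum", "static", "drift", "anchor", "plume")

def tarpit_page(path: str, n_words: int = 200, n_links: int = 12) -> str:
    # Seed the RNG from the path so every URL yields a stable page:
    # a revisiting crawler sees consistent (but useless) content.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    text = " ".join(rng.choice(WORDS) for _ in range(n_words))
    # Every link points one level deeper into the trap, so the crawl
    # frontier grows without bound instead of terminating.
    links = "".join(
        f'<a href="{path.rstrip("/")}/{rng.randrange(10**6)}">more</a> '
        for _ in range(n_links)
    )
    return f"<html><body><p>{text}</p><p>{links}</p></body></html>"
```

Because the pages are a pure function of their path, such a maze can be pre-rendered and hosted statically, e.g. on GitHub or Codeberg Pages.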

“To escalate our active drive within the “Trapping AI” project—since static deployment via GitHub Pages (or Codeberg Pages), as described above, limits both the range of activity and the potential for damage due to the absence of an actively controlled server environment—we are advancing toward a dynamic approach: one that leverages a strategically offensive methodology to facilitate targeted poisoning and corruption of data within the operational workflows of artificial intelligence (AI) systems.”
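The dynamic approach the quote gestures at can be sketched with the standard-library HTTP server. This is an assumed illustration of the general idea, not the ASRG code: unlike a static Pages deployment, a live server synthesizes a fresh poisoned page for every request and could additionally throttle, log, or fingerprint the scraper on the fly.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical dynamic tarpit endpoint: every path resolves to generated
# filler plus a link deeper into the trap; there is no real content.
class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = (
            f"<html><body><p>noise for {self.path}</p>"
            f'<a href="{self.path.rstrip("/")}/deeper">deeper</a>'
            "</body></html>"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    # Bind to an ephemeral port and fetch one trap page as a demo.
    server = ThreadingHTTPServer(("127.0.0.1", 0), TarpitHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/trap"
    print(urllib.request.urlopen(url).read().decode())
    server.shutdown()
```

Because the handler runs per request, a real deployment could also slow responses to waste crawler time, or vary the generated text to degrade scraped training data, which is exactly the room for maneuver that static hosting lacks.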

https://algorithmic-sabotage.github.io/asrg/posts/sabot-in-the-age-of-ai

“This formulated list diligently records strategically offensive methodologies and purposefully orchestrated tactics intended to facilitate (algorithmic) sabotage, including the deliberate disruption of systems and processes, alongside the targeted poisoning or corruption of data within the operational workflows of artificial intelligence (AI) systems. These approaches seek to destabilize critical mechanisms, undermine foundational structures, and challenge the overall reliability, functionality, and integrity of AI-driven frameworks.”