Chiao-Lin Yu (Steven Meow)
“When vibe coding meets vibe hacking, the underground economy democratizes in ways we never anticipated. We’ll share our methodology for fingerprinting AI-assisted crime infrastructure, discuss the ethical boundaries of counter-operations, and demonstrate how to build sustainable threat intelligence pipelines when your adversary can redeploy in 5 minutes. This talk proves that in 2025, the real exploit isn’t a zero-day; it’s zero-understanding.
“Our journey began with a simple question: why are so many people losing money to fake convenience-store delivery websites? The answer led us through two distinct criminal architectures, both exhibiting characteristics of large language model–assisted development.”

“Throughout both systems, we observed telltale signs of AI-generated code: verbose documentation in unexpected languages, inconsistent coding patterns, textbook-like naming conventions, and theoretical security implementations. Even the UI revealed LLM fingerprints: overly polished component layouts, placeholder text patterns, and design choices that felt distinctly ‘tutorial-like.’ These weren’t experienced developers; they were operators deploying what LLMs gave them without understanding the internals.
The irony? We used AI extensively too: for data parsing, pattern recognition, attack surface mapping, and intelligence queries. The difference was intentionality: we understood what the output meant.”
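The fingerprint signals described above (verbose comments, mixed naming conventions, placeholder text) lend themselves to simple static heuristics. The sketch below is an illustrative approximation under assumed signal names and thresholds, not the speakers' actual methodology:

```python
import re

def _comment_ratio(src: str) -> float:
    """Fraction of non-blank lines that are comments (verbose-docs signal)."""
    lines = [l for l in src.splitlines() if l.strip()]
    if not lines:
        return 0.0
    comments = [l for l in lines if l.strip().startswith(("#", "//", "/*", "*"))]
    return len(comments) / len(lines)

def _has_mixed_naming(src: str) -> bool:
    # Flags files mixing snake_case and camelCase assignments, a pattern
    # often seen when code is stitched together from multiple generations.
    snake = re.search(r"\b[a-z]+_[a-z_]+\s*=", src)
    camel = re.search(r"\b[a-z]+[A-Z][a-zA-Z]*\s*=", src)
    return bool(snake and camel)

# Hypothetical signal set; the 0.4 comment-ratio threshold is an assumption.
SIGNALS = {
    "verbose_comments": lambda src: _comment_ratio(src) > 0.4,
    "mixed_naming": _has_mixed_naming,
    "placeholder_text": lambda src: bool(re.search(
        r"your[_ ](api[_ ]key|password|domain)|lorem ipsum|TODO: implement",
        src, re.IGNORECASE)),
}

def fingerprint(src: str) -> list[str]:
    """Return the names of LLM-fingerprint signals that fire on a source file."""
    return [name for name, test in SIGNALS.items() if test(src)]
```

In practice such heuristics only triage candidates; individually, each signal also fires on tutorial code written by humans, so they are useful as a combined score rather than a verdict.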





