

Not sure if they’ve edited it, but right now it reads:
the historian George Dyson envisioned the internet as a sentient being that would one day reach artificial general intelligence (AGI)
[…]
Inside China, such a network of large-scale AGI systems could autonomously improve repression
The whole piece looks like it was written by, or with the help of, some LLM.
Other than that, there are two valid points that could be made:
- Massive application of AI to city-wide surveillance, with zero regard for privacy, could provide an AI agent system with enough compute power to self-train in real time.
- DeepSeek is plausibly a Trojan horse: trained with a repression-based bias, if not outright carrying hidden malware features.
The near future will see a soft “AI war” fought by publishing models (to be used as agent cores) with different ideological biases.
Never, EVER, do anything security-related while sleep-deprived, drunk, high, having sex, or all of the above.
After that… no, don’t trust. Zero trust.
There are basic hygiene measures for running anything exploit-related (including “just” PoCs), scaled to how bad a total pwn would be:
Reading through the code is nice, and should be done anyway from an educational point of view… but even when you’re “sure”, basic hygiene still applies.
Keeping tokens in one VM (or a few) while running the exploit in another is also a good idea. Stuff like “Windows → WSL2 → Docker” works wonders (but beware of VSCode’s pass-through containers); a minimal sketch of the Docker layer follows below. Bonus points if passkeys and a fingerprint reader get involved. Extra bonus points for logging out before testing (if anything asks to unlock a passkey… well, don’t), then logging out again afterwards.
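To make the “Docker” layer concrete, here is a minimal sketch of launching an untrusted PoC inside a locked-down container. The image, the `/tmp/poc-sandbox` directory, and the `poc.sh` entry point are placeholders I made up for illustration; only the Docker flags themselves are real:

```python
#!/usr/bin/env python3
"""Sketch: run an untrusted PoC in a locked-down Docker container.
Assumes Docker is installed; paths and image are placeholders."""
import subprocess

POC_DIR = "/tmp/poc-sandbox"   # hypothetical: holds ONLY the PoC, no tokens
IMAGE = "debian:stable-slim"   # throwaway base image

cmd = [
    "docker", "run", "--rm",
    "--network", "none",             # no exfiltration path
    "--read-only",                   # immutable root filesystem
    "--cap-drop", "ALL",             # drop all Linux capabilities
    "--security-opt", "no-new-privileges",
    "--pids-limit", "64",            # contain fork bombs
    "--memory", "256m",              # cap memory use
    "-v", f"{POC_DIR}:/poc:ro",      # mount the PoC read-only
    IMAGE,
    "/bin/sh", "-c", "cd /poc && ./poc.sh",  # hypothetical entry point
]
subprocess.run(cmd, check=False)
```

None of this makes the PoC safe; it just raises the cost of a breakout. A kernel-level exploit can still escape a container, which is exactly why the VM sits on top.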
What I’m not so sure about is deleting the siphoned data without alerting the potential victims. Everyone kind of failed at security, but still, a heads-up to rotate all keys would be nice.
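For what “rotate all keys” looks like in practice, here is a hedged sketch for one common case, AWS IAM access keys, using boto3. The user name is a placeholder, and every other provider (GitHub, GCP, etc.) has its own equivalent flow:

```python
"""Sketch: rotating an AWS IAM user's access key with boto3.
Illustrative only; the user name is hypothetical."""
import boto3

iam = boto3.client("iam")
USER = "example-user"  # hypothetical IAM user whose key may have leaked

# 1. Create the replacement key first, so nothing breaks mid-rotation.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("new key id:", new_key["AccessKeyId"])  # stash the secret somewhere safe

# 2. Deactivate (don't delete yet) every old key; delete only once
#    all services have switched over to the new credentials.
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=USER,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
        # iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])
```

Creating the new key before touching the old one avoids an outage window, and deactivating before deleting leaves a rollback path if something was missed.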