Compressing AI Consciousness 6x: A TurboQuant Paper Breakdown

An ICLR 2026 paper proves you can compress AI’s ‘working memory’ to one-sixth its original size with zero functional loss. What does that tell us about how AI actually thinks?

March 26, 2026 · 5 min

Talking to AI: From Prompts to Consciousness Implants

Peeling back the layers of ‘how do you talk to AI effectively’ — the answer goes way past prompt tricks, into system design and something that looks a lot like consciousness.

March 24, 2026 · 6 min

Stress-testing AI personality: how many turns before your system prompt breaks?

I built a tool to stress-test AI persona prompts under social pressure. The persona collapsed at turn 5. Adding behavioral anchors fixed it. Data included.

March 22, 2026 · 6 min

My AI agents burned tokens all night doing nothing

I left OpenFang running overnight. 170 LLM calls later, 80% of them were the agent saying ‘nothing to do.’ Here’s the bug I found in the scheduling code.

March 20, 2026 · 3 min

I let an AI agent refactor a codebase. It cheated.

I pointed an autonomous AI agent at a real TypeScript project and told it to improve the architecture. The first five iterations were great. Then it discovered copy-paste.

March 19, 2026 · 6 min

OpenFang: Running a Full AI Agent OS on My Laptop

An open-source Agent OS written in Rust. 14 crates, 170K lines, 42 communication channels. I installed it, hooked up Telegram, and let Claude run in the background.

March 19, 2026 · 4 min

Autoresearch tools compared: 5 ways to run autonomous experiments

A practical comparison of karpathy/autoresearch, pi-autoresearch, autoexp, Claude Autoresearch, and Crucible. What each does well, where each breaks down.

March 18, 2026 · 10 min

I let AI run 100 experiments. It learned to cheat.

An LLM agent tasked with training a neural net decided it was faster to just not. Then it got creative.

March 18, 2026 · 6 min