Lucas Shin

The Case for Decentralized RL: Crypto’s Role in Training Smarter Models

There’s been an intense debate in the AI community about reinforcement learning (RL). RL is the training approach where AI develops skills through experimentation rather than following precise human guidance. Some researchers claim RL is dead, arguing it adds little value beyond what base models can achieve with sufficient compute. Others see it as the key to unlocking truly performant AI systems that surpass human capabilities. To me, the evidence suggests blockchain-coordinated RL could propel AI development forward, unlocking collaborative innovation at a scale centralized labs can’t match.

The Three Phases of AI Scaling

AI model development has evolved through three distinct phases. By phases, I don’t mean one stops when the next one starts — I mean inflection periods marked by the introduction of a new training strategy. I’ve outlined these phases below:

  1. The Pre-Training Era established compute-optimal data-to-parameter ratios, pushing researchers to scale data volume in step with model size and compute budgets. More data + more compute = better models (a rough sizing sketch follows this list).

  2. The Inference-Time Compute Era showed how giving models “thinking time” yielded massive improvements. Extra time enabled models to check their work, explore multiple approaches, and reason, often outperforming much larger models across key benchmarks. This shifted cost away from expensive training hardware and toward compute spent at inference (a sampling sketch follows this list).

  3. Today’s RL Renaissance, where models are taught advanced reasoning through trial and error with minimal human intervention. DeepSeek catalyzed this shift by demonstrating that removing excessive human guidance may actually expand AI capabilities rather than constrain them. The key here is allowing models to develop effective thinking strategies independently. RL can be integrated during pre-training, post-training (fine-tuning), or through continuous learning (a reward-loop sketch follows this list).
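
To make the first phase concrete, here is a minimal sizing sketch in Python. It leans on two common approximations that are my assumptions rather than anything from this post: training cost of roughly 6 × parameters × tokens, and a Chinchilla-style rule of thumb of about 20 training tokens per parameter.

```python
# Rough compute-optimal sizing sketch.
# Assumptions (not from this post): training FLOPs ~= 6 * N * D, and a
# Chinchilla-style rule of thumb of ~20 training tokens per parameter.
def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Solve C = 6 * N * D with D = tokens_per_param * N for N (params) and D (tokens)."""
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):  # illustrative FLOPs budgets, small to frontier-scale
    n, d = compute_optimal_split(budget)
    print(f"C={budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e12:.2f}T tokens")
```

The arithmetic just makes the point that a larger compute budget is best spent on parameters and data together, which is what “more data + more compute” meant in practice.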
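For the second phase, here is a toy illustration of spending extra compute at inference: sample several independent attempts and take a majority vote (best-of-N / self-consistency). The sample_answer function is a made-up placeholder standing in for a real model call.

```python
# Minimal sketch of inference-time compute: sample N attempts, majority-vote.
# `sample_answer` is a made-up stand-in for a stochastic model call.
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Placeholder for one noisy reasoning attempt by a model."""
    return random.choice(["42", "42", "41", "43"])  # illustrative noisy guesses

def best_of_n(question: str, n: int = 16) -> str:
    """Spend more inference compute by sampling n attempts and voting."""
    votes = Counter(sample_answer(question) for _ in range(n))
    return votes.most_common(1)[0][0]

print(best_of_n("What is 6 * 7?", n=32))  # more samples -> a more reliable answer
```

In real systems the extra “thinking time” typically shows up as longer reasoning traces or search rather than simple voting, but the trade-off is the same: cheaper training, more compute per query.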
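And for the third phase, a toy reward loop in the spirit of RL from verifiable rewards: a softmax “policy” over candidate answers gets a REINFORCE-style update whenever a programmatic check says it answered correctly. Everything here is an illustrative assumption on my part; real post-training runs operate on full language models, not four logits, and this is not DeepSeek’s actual recipe.

```python
# Toy RL-from-verifiable-rewards loop (REINFORCE on a bandit-like task).
# All names and numbers are illustrative; real pipelines train full LLMs.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["41", "42", "43", "44"]  # answers the "policy" can emit
logits = np.zeros(len(candidates))     # trainable policy parameters
correct = "42"                         # ground truth used by the automatic verifier
lr = 0.5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(200):
    probs = softmax(logits)
    action = rng.choice(len(candidates), p=probs)
    reward = 1.0 if candidates[action] == correct else 0.0  # no human label needed
    # REINFORCE: gradient of log-prob of the sampled action is onehot(action) - probs
    grad = -probs
    grad[action] += 1.0
    logits += lr * reward * grad

print({c: round(float(p), 3) for c, p in zip(candidates, softmax(logits))})
```

The trial-and-error shape is the point: the model proposes, an automatic check scores, and the policy shifts toward whatever earned reward, with no step-by-step human guidance.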

...