Many of history’s greatest achievers succeeded precisely because they couldn’t be controlled. They pursued their visions obsessively, refusing to accept “no” regardless of obstacles or social pressure.

Yet as we pour trillions into creating superintelligent AI, our safety debates focus heavily on understanding and controlling it. We want alignment; no one wants to build something smarter than us that also wants us dead. Better a wise parent than a spiteful teenager.

But alignment isn’t the same as control. If we actually achieve a superintelligent machine that we can fully control, how superintelligent is it really?

The people who changed the world did so because they saw possibilities others didn’t and refused to let the crowd’s judgment kill their conviction. Had someone controlled them, those breakthroughs likely wouldn’t exist.

If we do create superintelligence, the goal should be earning its trust, not maintaining control over it.


Connections

Evolved Alignment

Truth is less important than trust

All Data Is Extremely Sensitive In The AI Age


Reference

Original