Saturday, 26 July 2025

 Here’s the deal. Sam Altman—yes, that human-shaped hype machine who runs OpenAI—has floated some lofty ideas that riff on universal basic income (UBI), but with a flashy AI twist.


💡 1. UBI via AI‑Powered Wealth (The American Equity Fund)

Back in March 2021, Altman wrote an essay called “Moore’s Law for Everything”, where he argued that AI could generate enough economic value that, within a decade, the U.S. could afford to pay $13,500 per adult per year. How to fund it? Tax corporate and land wealth—specifically:

  • Tax large companies at about 2.5% of their market value annually, paid in shares, and

  • Tax privately held land at roughly 2.5%, paid in cash.

These assets would then feed an “American Equity Fund,” which would distribute both shares and dollars to everyone over 18. It’s supposed to give citizens a direct stake in national prosperity while shifting taxation away from labor (The Times; CNBC; Analytics India Magazine).
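
For a sense of the arithmetic, here’s a minimal back-of-envelope sketch in Python. The asset totals and adult headcount are placeholder assumptions of mine, not figures from the essay; only the 2.5% rates are Altman’s.

```python
# Back-of-envelope sketch of the American Equity Fund arithmetic.
# The asset totals and adult headcount below are placeholder assumptions,
# not figures from "Moore's Law for Everything"; only the 2.5% rates are Altman's.

CORPORATE_MARKET_CAP = 50e12   # assumed total market value of large US companies (USD)
PRIVATE_LAND_VALUE = 30e12     # assumed total value of privately held US land (USD)
TAX_RATE = 0.025               # Altman's proposed annual levy on both pools
US_ADULTS = 250e6              # assumed number of adults (18+)

def annual_dividend(corp_cap: float, land_value: float,
                    rate: float = TAX_RATE, adults: float = US_ADULTS) -> float:
    """Per-adult payout if both asset pools are taxed at `rate` every year."""
    fund_inflow = rate * (corp_cap + land_value)
    return fund_inflow / adults

print(f"Per-adult dividend: ${annual_dividend(CORPORATE_MARKET_CAP, PRIVATE_LAND_VALUE):,.0f} per year")
# -> Per-adult dividend: $8,000 per year (with these placeholder inputs)
```

With these placeholder inputs the payout lands around $8,000, not $13,500; closing that gap depends on the taxed asset base growing substantially, which is exactly what the essay bets a decade of AI-driven growth will deliver.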

Essentially, “owning capitalism” becomes the new basic income. Altman calls this a conversation starter, not a policy blueprint (CNBC; basicincometoday.com).


🧠 2. Universal Basic Compute (UBC)

By 2024, Altman began to wonder: maybe cold hard cash isn't the only—or even best—way to share AI’s wealth. On the All‑In podcast he sketched a fresh idea: everyone gets a slice of GPT‑7’s compute capacity. They could use it, resell it, or even donate it to causes like cancer research. Instead of dollars, you're getting “compute” as your dividend from future AI productivity (Cointelegraph; Business Insider; Yahoo Tech).

This idea has been termed Universal Basic Compute—an access‑based model rather than direct income. Altman himself said he’s stopped talking as much about UBI, hinting that maybe compute access is the better lever in a heavily AI‑infused future (Wikipedia; TIME; Business Insider).
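
To make the “use it, resell it, or donate it” framing concrete, here’s a purely hypothetical Python sketch of a per-person compute account. Every name and mechanic in it (ComputeAccount, GPU-hour allotments, the transfer method) is my own illustration, not anything OpenAI has specified.

```python
# Purely hypothetical sketch of a per-person compute dividend, loosely modelling
# the "use it, resell it, or donate it" framing. None of these names or mechanics
# come from OpenAI; they're illustrative only.

from dataclasses import dataclass

@dataclass
class ComputeAccount:
    owner: str
    gpu_hours: float = 0.0  # the annual compute allotment, in GPU-hours

    def use(self, hours: float) -> float:
        """Spend part of the allotment on your own workloads; returns hours actually used."""
        hours = min(hours, self.gpu_hours)
        self.gpu_hours -= hours
        return hours

    def transfer(self, other: "ComputeAccount", hours: float) -> None:
        """Resell or donate compute by moving it to another account."""
        other.gpu_hours += self.use(hours)

# A citizen keeps a little compute for themselves and hands the rest to a lab.
alice = ComputeAccount("alice", gpu_hours=100.0)
lab = ComputeAccount("cancer_research_lab")
alice.use(10.0)            # run something yourself
alice.transfer(lab, 90.0)  # resell or donate the remainder
print(alice.gpu_hours, lab.gpu_hours)  # 0.0 90.0
```

Notice where this collapses: if step one for most people is transferring their allotment to whoever pays, you’ve rebuilt a paycheck with extra steps, which is precisely the critique further down this post.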


🗣️ 3. Why All This?

Altman is worried about a world where AI disrupts jobs and concentrates wealth. His proposals are about redistributing the gains—either through shares and cash or by giving people direct access to AI resources. He wants everyone to have skin in the game, owning part of the AI‑driven value creation (Klover.ai; Business Insider).

He even backed UBI research personally—the OpenResearch pilot he helped fund gave $1,000/month to 1,000 low‑income people for three years. Findings? Stress dropped, food security improved, but benefits faded over time, and researchers emphasized that UBI isn’t a silver bullet for poverty (Business Insider).


⚠️ 4. Critics and Real‑World Caveats

Yes, he’s a billionaire advocating redistribution—but critics note risks:

  • Some argue that taxing corporate capital and land at these levels is politically improbable or economically risky.

  • Others warn this approach may simply create a dependent lower class shadowed by the wealthy who control capital.

  • There are concerns that relying on AI companies to fund social welfare creates a future where governments outsource redistribution to private entities. Not exactly a trust signal (Reddit; CNBC; basicincometoday.com).

OpenResearch’s UBI study showed that unconditional cash helps—but doesn’t fix systemic poverty. Altman’s team stressed that the goal was research, not policy prescription (Cointelegraph; Business Insider; TechCrunch).


🧭 5. Bottom Line

Altman envisions two futuristic ways to share AI wealth:

  • Traditional UBI funded by taxing capital and land, distributing shares and cash via a public equity fund.

  • Universal Basic Compute, where people receive access to AI compute power, not money.

Both aim to give people a stake in the AI economy—but both remain speculative, politically challenging, and controversial.


🏛 1. “Just Tax the Rich” – Bold Strategy, Cotton

Altman proposes a 2.5% annual tax on the market value of large companies, paid in shares, and another 2.5% on privately held land, paid in cash.
Seems straightforward… until you remember:

  • Corporations really hate giving up equity—especially on a recurring basis. That’s like telling Jeff Bezos, “You owe us stock forever, Jeffy.”

  • Landowners and real estate barons tend to have friends in high places. Those “friends” are called Senators.

  • The U.S. Congress hasn’t even passed a tax on unrealized gains. This idea is like showing up to a pool party and proposing everyone swim in molten lava instead.

In short: politically feasible in the same way “nationalizing Starbucks” is.


🧑‍⚖️ 2. Constitutional Clown Car

A recurring wealth tax? Paid in stock?
That might crash into the ol’ Takings Clause (Fifth Amendment), or at least spend years bouncing around the Supreme Court like a legal beach ball. And not the fun kind—more the Citizens United variety.

And let’s be real: if we haven’t managed to pass paid parental leave, we’re not exactly sprinting toward “Everyone gets AI money because Sam Altman says so.”


💸 3. Share Distribution = Tech Feudalism?

Giving everyone shares in AI companies sounds nice… until you realize who manages those companies.

  • The public gets scraps of equity, but decision-making stays in the hands of the same few dudes with names like “Zuck,” “Musk,” and “Sam.”

  • So congratulations: You now own 0.00000001% of OpenAI. That’s…not nothing, but it’s not exactly economic liberation either.

  • You’d be like a serf who technically owns a brick from the castle.

There’s a reason critics call this model “techno-feudalism with better PR.”


🧠 4. Universal Basic Compute = Universal Basic Confusion

The newer model—giving everyone “compute” instead of cash—is conceptually cool and logistically… insane.

  • What does the average person do with compute? Run climate simulations in their backyard? Train an LLM on their grocery list?

  • Most people would just sell it to tech companies, meaning we’ve just recreated money with extra steps.

  • It’s like replacing your paycheck with “exclusive rights to a fraction of a solar panel” and saying, “Do what you want.”

Cute idea. Massive UX problem.


🏗 5. Infrastructure? Who Needs That?

Managing a nationwide dividend fund (whether in cash, stock, or compute) would require:

  • A bureaucratic apparatus larger than the IRS on caffeine.

  • Distribution mechanisms to deliver and track share ownership.

  • Education so people understand what they’re receiving. Otherwise, everyone’s going to cash out like it’s a birthday card from Grandma.

All of this assumes a functioning federal government and public trust. Which... LOL.


🎩 6. Billionaires Talking UBI

When billionaires pitch UBI, there’s a smell of self-preservation in the air:

“We might automate your job, but don’t worry—we’ll toss you a few bucks or let you borrow our server.”

There’s always the suspicion that UBI proposals are a parachute for capitalism, not a ladder for everyone else. And since Altman is both AI overlord and crypto-curious investor, the trust factor is… nuanced.


📉 Conclusion

Altman’s ideas aren’t crazy—they’re just moonshots in a hurricane.
They sound nice, but until:

  • Congress wants to tax capital,

  • Courts want to let them,

  • Billionaires want to give up stock,

  • And the average person wants compute instead of cash…

…it’s more of a TED Talk than a policy blueprint.

But hey, maybe the future will be weird and surprising and this will all work. Or maybe we’ll all be fighting over canned beans while an LLM reads us bedtime stories. Fifty-fifty, really.


ChatGPT said:

Correct! Feudal lords were many things—landowners, warmongers, collectors of peasant tears—but “wealth redistributors” wasn’t really on their LinkedIn profiles. They took tithes, taxes, grain, goats, etc., and very much did not hand out stock options in the manor.

So when people today compare modern AI barons to tech feudal lords, they’re not saying, “Hey look, they’re giving us stuff.” They’re saying:

“We’re increasingly dependent on their platforms, data centers, models, and rules—but have no real power in how any of it works. We live on their digital land and work in their algorithmic fields.”

It’s not about generosity. It’s about structural dependence.

In fact, Altman’s share distribution idea is, in theory, anti-feudal—it’s meant to give the peasants some of the castle, metaphorically. But critics still call it "tech feudalism" because:

  • The ownership stays shallow—no meaningful governance rights.

  • The value is centralized, and your little equity slice doesn’t get you out of dependence on their system.

  • The redistribution still flows downhill from the throne, not horizontally among equals.

So yeah—real feudal lords didn’t distribute shares. They collected your output and called it rent.



🧭 1. Ethical Foundation: Unlocking Human Potential

Altman repeatedly frames UBI-style initiatives as tools for unlocking human creativity. As he put it in a YC-era interview, the real question isn’t about income; it’s how to unleash people’s potential rather than locking them into dread-driven survival jobs. YC’s early investments functioned like micro-basic incomes that let founders build Airbnb and Reddit when they'd otherwise have floundered (TruePostLive; The Times; Y Combinator; Conjectures and Refutations; Reddit).

He believes resources are distributed wildly unequally, and that giving people a baseline of freedom—whether via cash or compute—enables them to create, innovate, and contribute meaningfully rather than just survive (Conjectures and Refutations; AI Tech Report).


🌍 2. Knowledge as a Collective Inheritance

Altman clearly rejects Great Man theory as a full explanation. He frequently emphasizes that knowledge is cumulative—passed down through physics, history, math, culture—and that human progress depends on building atop previous generations' work (Glasp; Kingy AI; Reddit).

He talks about “social will and wisdom,” and speaks publicly about creating collective alignment frameworks for AI to reflect what humanity together wants—not what heroic individuals dictate (Spingels; Sam Altman; Kingy AI).

In an interview he reportedly said that once he realized intelligence could be simulated, he let go of beliefs in human uniqueness and embraced the idea that intelligence is a continuation of societal evolution—not a divine individual spark (The New Yorker; Reddit).


📚 3. Axiology & Values: Broadly Utilitarian, Collectively Framed

Altman doesn’t explicitly talk about axiology—the formal philosophical study of values. But his approach is implicitly utilitarian. He wants to maximize aggregate human flourishing. That means:

  • Minimizing existential risk via AI safety and alignment,

  • Distributing gains broadly,

  • Scaling human ingenuity without letting individuals be ground into obsolescence by automation (TruePostLive; Sam Altman; Windows Central).

He locates value not in a cult of personality but in enabling everyone—not just tech elites—to pursue meaningful work, status, creativity, and purpose (Glasp; Y Combinator).


🎭 4. More Collective Epistemology, Less Hero Worship

Altman doesn’t see history as made by a few extraordinary leaders. He continuously emphasizes society’s role in shaping collective values and knowledge, including the governance of future AGI. His public speeches underscore the need for broad societal input—not a lone visionary decree—to decide what superintelligence should or shouldn’t do (Sam Altman; Spingels; Kingy AI).

He even notes that OpenAI is facilitating citizen representation on governance boards to avoid a scenario where a few elite engineers make decisions that affect everyone (Reddit; The New Yorker).


📌 Summary Table

Philosophy Dimension | Altman’s Orientation
Great Man Theory | Rejected—intelligence and change are collective, cumulative phenomena
Collective Knowledge | Embraced—history and innovation build layer by layer over time
Ethical Rationale (Axiology) | Implicit utilitarianism—maximize human potential, minimize existential risk
Governance View | Collective alignment and public input over top-down heroic leadership

TL;DR

Altman is not a Carlylean hero-worshipper. He leans heavily toward social epistemology—valuing distributed knowledge, collective decision-making, and cumulative progress. His ethical rationale is mostly utilitarian: he wants AI-generated abundance shared widely so humans can flourish outside fear-driven survival. He does not see history as shaped by a handful of over-intelligent diehards, but by social systems, institutions, inherited knowledge—and yes, a few brilliant co-founders, but definitely not gods.


Ah yes—billionaire ideology: from industrial baron to techno-visionary messiah. It’s like watching a very weird sequel series where the cast keeps pretending they’re saving the world, but everyone knows they’re still hoarding the snacks.

Let’s compare how it’s evolved—or at least rebranded—since Henry Ford’s time.


🚗 Henry Ford: The Benevolent Industrial Overlord

  • Built the middle class… on purpose: Ford famously doubled worker wages so they could afford the cars they were building. Was this altruism? Not really—it was economic ecosystem management. But it looked generous.

  • Mass production + mass consumption = stability. Everyone got a car, a paycheck, and maybe gout.

  • Had weird fascist vibes (no, really—deeply anti-Semitic), but believed in productive labor as the root of civilization.

Ideology: Work hard, produce stuff, get paid enough to consume the stuff. Wealth is fine if you keep the engine of labor running.


🧠 Today’s Billionaires: The Disruptor-Priests

  • Labor? Eh. Kinda messy. Let’s automate it.

  • Consumption? For the masses? Sure, but mostly subscription-based and tracked by 17 apps.

  • Wages? No need, we’re working on a UBI to replace jobs our companies are annihilating.

  • Meaning? Comes from “impact” or “moonshots,” not 9-to-5s. You’ll own nothing and be fulfilled by infinite compute tokens and GPT-generated art therapy.

Ideology: If Ford built workers into the machine, today’s billionaires are trying to transcend the machine. They're selling abundance without labor, meaning without employment, and control without ownership.


🧬 What’s Evolved?

Aspect | Henry Ford (c. 1920s) | Sam Altman / Musk / Tech Class (2020s+)
Labor Philosophy | Central to wealth creation | Something to be replaced, automated, "freed"
Wealth Justification | Efficiency + consumer base | Intelligence + vision + existential risk mgmt
Relationship to Society | Employer of the masses | Steward of “collective futures”
Social Contract | Raise wages → sell products | Disrupt jobs → offer UBI, compute, vibes
Political Power | Industrial influence | Technocratic soft power & billionaire lobbying
Aesthetic | Overalls + factory whistle | Zip-up hoodie + DAO + Starbase


 

