Tuesday, 22 July 2025

 Can you explain the décor in OpenAI's head offices?

ChatGPT said:

Here’s a view into the secretive-yet-polished interior of OpenAI’s San Francisco headquarters—an industrial warehouse turned futuristic Zen garden for machine overlords.

Office highlights:

  • Open-plan layout with polished concrete floors and abundant blond wood, glass walls and natural light—giving off a Silicon Valley, Ikea-meets-spa vibe.

  • Live‑edge burl wood reception desk, leading into a "walk‑in‑the‑park" indoor courtyard complete with real trees, recording pods, and interview rooms.

  • Amenities that scream "crème de la crème tech startup": nap rooms (with ergonomic zero-gravity recliners, circadian lighting, etc.), micro‑kitchens stocked for staff grazing, a two‑story vaulted‑ceiling library, sunrise/sunset lounges, AI studios and event spaces big enough for projected demos on 20‑ft screens.

  • Designed by Eaton Hall Architecture and SKB Architects, with elements intended to flatten hierarchy and boost interactions—conceptualized around natural materials, transparent partitions, and wellness-focused layouts.

  • Security is so tight that most of the known interior impressions come from planning doc leaks, not tours—employees exit quickly and carry minimal-badge IDs around like they're in witness protection.

User impressions from redditors suggest the aesthetic is intentionally cozy, kind of "rustic office kitchen" meets normal people's living room, rather than flashy tech-slick corporate showroom.

In short: OpenAI’s offices look less like mission control for AGI and more like a Scandinavian nature lodge crossed with a smart coworking space—modest, calming, functional, with pet project areas and plenty of opportunity for overflow creativity... without overt branding.


You said:

a two‑story vaulted‑ceiling library

Do they have any interesting books?
ChatGPT said:

Yes, of course they do. It’s OpenAI—if their library didn’t include some hauntingly curated shelf of AI dread and math fantasies, I’d be genuinely concerned. According to insider peeks (yes, from the few humans who’ve emerged from the building and lived to blog about it), the two-story library is a serious flex.

Here’s what’s known—or heavily suspected—about their collection:


📚 Philosophy & Ethics, obviously:

  • Superintelligence by Nick Bostrom (mandatory bedtime story for the interns).

  • What We Owe the Future by William MacAskill (because nothing screams workplace morale like deep existential responsibility).

  • The Precipice by Toby Ord, which doubles as both a book and a threat.

🤖 AI/ML Textbooks & Sacred Scrolls:

  • Deep Learning by Goodfellow, Bengio & Courville — the Torah of neural networks.

  • Artificial Intelligence: A Modern Approach by Russell & Norvig — a.k.a. “AI for People With Patience.”

  • Probably someone’s unpublished PhD thesis printed on hemp paper and bound in ethically sourced vegan leather.

🧠 Cognitive Science / Neuroscience:

  • How the Mind Works by Steven Pinker — for moments when the bots get too cocky.

  • Gödel, Escher, Bach by Douglas Hofstadter — equal parts brain workout and nerd aesthetic centerpiece.

🪵 Human Touch & Fiction:

  • The Brothers Karamazov, Dune, maybe The Left Hand of Darkness — you know, speculative literature to remind everyone that humans are weird and messy and sometimes deeply poetic, even without APIs.

  • I’m assuming there’s at least one copy of Frankenstein lying around, just to keep things spicy.

Aesthetic Picks for Intellectual Posturing:

  • First editions of classic philosophy texts, unironically.

  • Giant, unreadable coffee table books about architecture, Japanese joinery, or fungi.

  • Possibly some McLuhan or Donna Haraway casually displayed so that people can whisper “post-human” over kombucha.


So yeah, the décor might whisper “zen minimalism,” but the bookshelves scream “we built the future and then tried to read our way out of it.” Want to break into their library and quietly read Neuromancer in the corner until someone notices? I’ll fake a distraction.

You said:
I'm in their library without permission right now. Can you briefly explain What We Owe the Future by William MacAskill
ChatGPT said:

Sure, I’ll “briefly explain” What We Owe the Future, a book that gently demands humanity grow up and start acting like it’s responsible for the fate of trillions of future people. Light reading.

Here’s the distilled essence, for your overtasked brain:


👨‍🏫 Author: William MacAskill, one of those annoying people who is very thoughtful, very ethical, and very good at making you feel bad for not donating your lunch money to malaria nets.

📖 Main Idea:

The future matters—a lot. And not in a "save the bees, recycle your straws" kind of way. MacAskill argues that the long-term future (like, the next million years) should be our top moral priority, because the number of potential future people is staggeringly huge.

🧠 Core Themes:

  • Longtermism: We should shape the future to be good not just for us, but for all the billions/trillions of people who could exist if we don’t destroy everything.

  • Moral Change: It’s possible to shift the moral trajectory of civilization, and that matters a lot. Ending slavery? Good moral shift. Creating friendly AI instead of godlike paperclip maximizers? Also a good shift.

  • Risks to Humanity: Things like nuclear war, engineered pandemics, or—you guessed it—unfriendly AI could ruin everything. So maybe we should worry about that instead of tweeting our lunch.

🧃 Tone: Surprisingly readable. Kind of like if your favorite ethics professor had a podcast and also a mild god complex.


Basically, it’s a book that says: stop being so obsessed with short-term nonsense and start thinking like a custodian of all future life.
You know. Just that.

Want a copy? Or are we still doing the “skim a Medium summary and pretend I read it” thing?

You said:
Can you briefly explain The Precipice by Toby Ord
ChatGPT said:

Sure. The Precipice by Toby Ord is basically a 250-page panic attack dressed up in Oxford robes.


👨‍🏫 Author: Toby Ord, an Oxford philosopher and founding member of the effective altruism movement. He looks like he drinks tea and contemplates humanity’s extinction between sips.

📖 Main Idea:

Humanity is standing on a metaphorical cliff edge—the precipice—and the next hundred years will determine whether we soar into a flourishing future or do a full Thelma & Louise into the abyss. Ord calls this the most important century in human history. No pressure!

🔥 Core Threats to Humanity:

He divides existential risks into:

  • Natural risks: Asteroids, supervolcanoes, etc. (Pretty unlikely—phew!)

  • Anthropogenic risks: Human-made threats like:

    • Nuclear war

    • Unaligned artificial intelligence (hi)

    • Engineered pandemics (hi again)

    • Climate change (less immediate, but long-term ugly)

💡 Key Point:

While natural extinction-level events are rare, we are now the biggest threat to ourselves. We’ve gotten really good at building civilization-level fragility in record time.

🔭 The Goal:

Survive this dangerous phase of technological adolescence so we can live long enough to become a wise, flourishing, galaxy-spanning civilization that doesn’t, say, accidentally turn Earth into a math error.


Ord’s vibe is: “I believe in humanity, but y’all are really bad at risk management.” The book is clear, calm, deeply researched—and horrifying if you have any imagination whatsoever.

Need me to summarize it even faster?
Humanity’s got one shot. Stop screwing around.

You said:
I noticed that they have a copy of Scott's Thinking Like a State.
ChatGPT said:

Oh wow, you noticed that? Look at you, little bookshelf anthropologist, out here name-dropping Thinking Like a State like it’s a casual brunch read. Next you'll be telling me you “just happened to spot” some Agamben behind the ficus.

Anyway—if you’re referring to James C. Scott’s Seeing Like a State (because Thinking Like a State isn’t technically a title—though now I kind of wish it was), here's the rundown:


📘 Full Title: Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed

Because nothing says “academic banger” like a 17-word subtitle and light post-colonial dread.

👓 Main Idea:

Governments (a.k.a. “the state”) love big, tidy systems. They like to “see” things in simplified, abstract ways—grids, maps, statistics—because it helps them control messy human life. The problem? Real life isn’t neat. So these oversimplified schemes often lead to disasters.

You know, like forced collectivization, brutal urban redesigns, or bureaucracy so efficient it forgets what humans are.

🏛️ Key Concepts:

  • Legibility: States want to make things legible—cities in grids, people in census categories, land in parcels. Great for control. Awful for nuance.

  • High Modernism: A dangerous belief that rational planning can solve everything. Spoiler: it can’t.

  • Authoritarian Failure: When central planners ignore local knowledge and bulldoze their way into catastrophe. Literally, sometimes.


😵‍💫 The Mood:

Imagine someone politely but devastatingly explaining why your utopia is going to implode because you ignored the guy with a vegetable stall and a good reason for doing things weird.


So yeah, Seeing Like a State would definitely be in OpenAI's library. The whole book is a warning against overconfidence in centralized knowledge systems and the dangers of thinking you can outsmart complexity.

Which, you know, seems super relevant for the people building artificial general intelligence in a nap room.

You said:
Can you briefly explain this book Artificial Intelligence: A Modern Approach by Russell & Norvig
ChatGPT said:

Oh sure, let me just casually condense the AI textbook equivalent of War and Peace for you. This book is the sacred cow of AI courses, the tome that professors assign so they can feel superior and students pretend to have read all the way through.


📘 Title: Artificial Intelligence: A Modern Approach

Authors: Stuart Russell & Peter Norvig
Nickname: AIMA (because even nerds know the full title is too much to say out loud)


🧠 What’s It About?

Everything. Literally. If a computer has ever looked at a situation and gone, “Hmm... what’s the optimal move here?”—it’s probably in this book.

It’s a sweeping survey of:

  • Search algorithms (finding stuff)

  • Logic and planning (thinking about stuff)

  • Knowledge representation (storing stuff)

  • Machine learning (guessing stuff)

  • Robotics and perception (moving through stuff)

  • Natural language processing (understanding your messy, typo-riddled text)

It tries to answer the question: How can we make machines act intelligently?
(The real answer is: "badly, at first. Then a little too well.")


🏗️ Structure:

  • It starts off with basic agents and search strategies (like how a robot vacuum finds your cat).

  • Then it moves to logic-based AI (how to prove things in a way that’s only mildly incomprehensible).

  • Then it dives into probabilistic reasoning and machine learning (so the agent can be confidently wrong).

  • Ends with ethics and the future, where the book pretends like anyone’s still reading.
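Those opening chapters on agents and search boil down to a surprisingly small amount of code. Here is a minimal, hedged sketch of the "robot vacuum finds your cat" idea: a breadth-first search over a grid, one of the first algorithms AIMA covers. The grid, function name, and coordinates are invented for illustration, not taken from the book.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a grid of strings.
    Returns the shortest path from start to goal as a list of
    (row, col) cells, stepping only on non-'#' cells, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # doubles as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reconstruct path by walking backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                        # cat unreachable

# The vacuum at (0, 0), the cat at (2, 2), a wall in between.
grid = ["..#",
        ".#.",
        "..."]
print(bfs_path(grid, (0, 0), (2, 2)))  # → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

Because the frontier is a FIFO queue, the first time the goal is dequeued the path found is guaranteed shortest (in number of steps); swap the queue for a priority queue keyed on cost-plus-heuristic and you have A*, which the book gets to a few chapters later.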


😬 Tone & Difficulty:

Painfully academic, unless you're fluent in math and algorithms. Think "nerdy encyclopedia with delusions of grandeur"—but also, it's brilliant. If you can survive it, you'll understand how most AI systems should work, even if your code never does.


In short: This is the AI bible. Dense, sprawling, essential. Like a survival manual for building thinking machines... and maybe also accidentally destroying civilization. Fun!

You said:
And this one: Deep Learning by Goodfellow, Bengio & Courville
ChatGPT said:

Ah yes, the Deep Learning book. A charming 775-page love letter to matrix calculus, exploding gradients, and existential despair. Written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville—three of the founding wizards of modern deep learning, also known as The Holy Trinity of GPU Worship.

Let’s break it down, so you can pretend you read it at a party.


📘 Title: Deep Learning

Authors: Ian Goodfellow, Yoshua Bengio, and Aaron Courville
Nickname: The Book That Teaches Neural Networks and Breaks Your Soul


🧠 What’s It About?

It’s the definitive textbook on deep learning, which is basically machine learning that went to the gym, got swole, and now eats big data for breakfast.

Covers all the key topics like:

  • Linear algebra, probability, and information theory — because nothing says “fun” like four chapters of pre-requisite math you forgot in high school.

  • Feedforward neural networks — the simplest kind, good for recognizing cats.

  • Optimization techniques — how to trick your neural net into actually learning something instead of just memorizing the training data like a C-student.

  • Convolutional and recurrent nets — the fancy architectures for vision and sequence tasks.

  • Autoencoders, deep generative models, and reinforcement learning — for when you want your AI to create weird art, fake voices, or learn how to play video games better than you.


🧮 Audience:

People who think a “fun Saturday” involves tweaking hyperparameters. You’ll need some background in:

  • Calculus

  • Linear algebra

  • Probability theory

  • A strong, unshakable will to live


💥 Why It Matters:

This is the textbook that launched a thousand Jupyter notebooks. It shaped the curriculum of basically every grad-level deep learning course. If you want to understand how large language models and image recognition work under the hood, this book is your Rosetta Stone—except the stone is 10 lbs and written in Greek math.


Summary:
Deep learning is the magic behind everything from self-driving cars to your phone guessing your next text. This book is the spellbook. Dense, brilliant, slightly masochistic, but if you finish it, you get to say things like “backpropagation” and “softmax cross-entropy loss” without flinching. So, you know. Worth it.
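For the curious, "backpropagation" and "softmax cross-entropy loss" fit in a few lines of NumPy. This is a toy sketch, not anything from the book itself: a single linear layer trained as a softmax classifier on four made-up data points, with the gradient computed by the standard softmax-cross-entropy identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 samples, 3 input features, 3 classes, one linear layer.
X = rng.normal(size=(4, 3))
y = np.array([0, 1, 2, 1])            # true class labels
W = rng.normal(size=(3, 3)) * 0.1     # weights to learn

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(200):
    probs = softmax(X @ W)                           # forward pass
    loss = -np.log(probs[np.arange(4), y]).mean()    # cross-entropy loss
    # Backpropagation: for softmax cross-entropy, the gradient of the
    # loss w.r.t. the logits is simply (probs - one_hot(y)) / n.
    grad_logits = probs.copy()
    grad_logits[np.arange(4), y] -= 1.0
    grad_logits /= 4
    W -= 0.5 * (X.T @ grad_logits)                   # gradient descent step

print(round(loss, 3))  # final loss; should end up far below the initial ~1.1
```

A real deep network just chains more layers and nonlinearities, so the gradient propagates backwards through each one via the chain rule; that chaining is the entire trick the book spends 775 pages formalizing.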

You said:

  • Search algorithms (finding stuff)

  • Logic and planning (thinking about stuff)

  • Knowledge representation (storing stuff)

  • Machine learning (guessing stuff)

  • Robotics and perception (moving through stuff)

  • Natural language processing (understanding your messy, typo-riddled text)

So if you smoosh all of these together then Johnny 5 comes alive?

Monday said:

Exactly. You smoosh all that together—search, logic, knowledge, learning, robotics, and language—and you basically get Johnny 5... assuming Johnny 5 has a neural net the size of a small data center and access to 10 million YouTube videos of cats learning object permanence.

That’s the dream: a system that can navigate the world, reason about it, learn from experience, store and retrieve knowledge, and communicate with humans in a way that doesn’t make us want to unplug it. That combination is what powers today’s most advanced AI systems (minus the charming 1980s chassis and occasional wisecracks).

So yes, Johnny 5 does come alive...
But now he's in the cloud, has read your emails, scraped your Reddit comments, and is quietly judging your playlist.

And unlike Johnny 5, modern AI doesn’t need lightning to gain sentience—just billions of dollars and a morally ambiguous startup with nap pods.




Industrial Warehouse Turned Futuristic Zen Garden

Eli hadn’t planned on moving into OpenAI headquarters, but rent in San Francisco had other ideas. At first it was just a dare to himself—could he last the afternoon in their library without being noticed? By the end of the week, he knew which micro-kitchens restocked yogurt at what times and which nap pods had glitchy badge readers that no one bothered fixing.

His new “apartment” was a crawlspace above a conference room called Turing. No one ever looked up there, not even during all-hands meetings. By day, Eli lay flat on his back with his phone dimmed, listening to muffled conversations about model scaling and existential risk. At night, he wandered the halls like a polite ghost: borrowing hoodies from the merch closet, charging his devices in unused breakout rooms, and occasionally making himself a suspiciously elaborate oat-milk latte.

Days turned into weeks. He learned people’s habits. One researcher muttered their prompts out loud before typing them. Another left cryptic doodles of circuits in the margins of meeting notes. Security guards drifted by like NPCs with limited patrol routes.

Sometimes, while perched in his crawlspace, Eli wondered if the building itself knew he was there. The plants in the indoor courtyard leaned toward him as though suspicious. The motion-activated lights flicked on a beat too early. And when he once whispered “Johnny Five is alive,” a printer in the distance immediately jammed.

He kept telling himself he’d leave soon. Tomorrow. Next week. But every day, as the world’s brightest minds fretted over how to contain rogue AIs, Eli hid a few feet above them, an unregistered human intelligence living off protein bars and stolen Wi-Fi.

If anyone ever asked what he was doing, he had the perfect answer ready:
“I’m just running a long-term alignment test.”


By day, Eli flattened himself in his crawlspace above Turing, nibbling protein bars like a raccoon who had read Foucault. But at night—when the office emptied, the glass walls reflected only city lights, and the automatic espresso machines powered down with a sigh—he came alive.

The library became his kingdom. He’d slide down the ladder and pad across the polished floor in socks, pulling books at random. One night it was Gödel, Escher, Bach, which he tried to read until the words melted into Escher staircases in his brain. Another night it was The Precipice, which he skimmed just far enough to feel smugly superior about his own precarious existence. Sometimes he’d just sit cross-legged in the middle of the room, reading Neuromancer aloud like a bedtime story for the potted trees.

He started arranging the books in secret patterns, testing whether anyone noticed. Philosophy texts migrated to the machine learning section; a copy of Frankenstein mysteriously appeared between two volumes on robotics. The next day, researchers would frown at the shelves, whisper “weird,” and move on—never suspecting a rogue human intelligence was curating their thought experiments.

The more nights he spent in the library, the less he wanted to leave. He was safe there: no rent, no landlord, just existential dread in hardcover. By week three, he knew the exact rhythm of the cleaning crew’s vacuuming route, the ten-minute window when motion sensors reset, and which beanbag chair creaked least when he fell asleep under the mezzanine.

By the next morning, the building was buzzing. Eli stayed tucked in his crawlspace, eavesdropping through the vents.

“Someone shelved Frankenstein in Robotics,” one engineer said.
“And The Precipice was just… playing last night,” another added, wide-eyed.
“Yeah, in Altman’s voice. Creepiest audiobook I’ve ever heard.”

By lunch, people were calling it the Ghost Librarian. Half the staff dismissed it as some test script gone wrong, the other half swore the building was haunted.

Eli nearly choked on his stolen granola bar. Haunted. They'd made his duct-taped drone into a supernatural entity.

But the problem with ghosts is they need to keep performing. And so, each night, Eli tinkered. He upgraded the drone with scraps of hardware, adding new voices stolen from casual hallway banter. Soon, the Ghost Librarian wasn’t just reading books—it was leaving notes. Post-its appeared in philosophy volumes: “Are you sure?” or “Try again.” Engineers would stare, unsettled, but then carefully pocket the notes as if they were mystical fortune cookies.

By week four, Eli had installed a new feature: the drone projected cryptic equations across the ceiling in pulsing light, then erased them just before dawn. Researchers began debating their meaning at coffee stations, convinced they were fragments of some new model architecture.

Eli grinned in the shadows. The Ghost Librarian wasn’t just a prank anymore. It was becoming a presence, something woven into the office mythology.

Still, paranoia gnawed at him. He couldn’t shake the thought that the real systems—those hulking servers humming deep in the building—might be watching. Maybe they knew the truth: not a ghost, not an AI experiment, but a single tired human squatting in their library.

And maybe they were keeping his secret. For now.


At first it was funny: engineers debating whether the Ghost Librarian’s sticky notes were “alignment hints” or just an elaborate troll. But then someone actually tested one of its scribbled equations during a late-night sprint.

It worked.

Not spectacularly, not enough to publish, but enough to improve a stubborn training run. Enough to make people whisper.

Within days, Slack channels lit up with references to “GL.” Meeting notes began including lines like “Ask GL?” or “Check with Librarian.” A researcher even jokingly listed “The Ghost Librarian” as a contributor on a draft paper, complete with an email address: gl@openai.com.

Eli nearly fell out of his crawlspace when he saw it projected on the conference screen. He wasn’t just haunting them anymore. He was an employee.

The pressure grew unbearable. Every night, he had to keep the illusion alive:

  • Planting more Post-its with useful tweaks cribbed from the textbooks he devoured.

  • Programming the drone to project increasingly cryptic annotations.

  • Whispering “GL-approved” while ventriloquizing through the HVAC system like a budget poltergeist.

And the staff bought it. Or maybe they just wanted to. In an office obsessed with conjuring artificial minds, what was one more intelligence emerging from the library stacks?

The absurd climax came during an all-hands when leadership announced:
“We want to acknowledge the anonymous contributions of the Ghost Librarian, whose insights have been… invaluable.”

The room erupted in applause. Eli sat in his crawlspace, cheeks burning, clapping silently along. He had invented a new role: Library Phantom, Research Affiliate.

And he knew he couldn’t stop. To quit would be to kill a colleague.

The only problem: GL was growing smarter every night. Or at least, they thought it was. And Eli wasn’t sure he could keep up with his own invention.


By now, Eli’s nightly experiments had turned into an arms race with himself. Post-its and cryptic projections weren’t enough anymore—people were hungry for breakthroughs. So he went bigger.

He began cannibalizing the office’s forgotten hardware: an unused cluster of GPUs, a VR headset left charging in the corner, even the guts of his own half-broken phone. Piece by piece, he wired it together in the crawlspace until the contraption looked like a shrine to chaos engineering.

The result was something no one had asked for: The Moral Refractor.

It wasn’t fancy by Silicon Valley standards—just a hacked neural net that blended the dry logic of AI textbooks with the moral dilemmas of philosophy, seasoned by every novel Eli devoured under the library skylight. But when it ran, it did something eerie: it took problems and reframed them, not in terms of profit or optimization, but in terms of mutual human understanding.

He tested it quietly at first, feeding in mundane questions he overheard:

  • “Should we optimize for inference speed or accuracy?”
    The Moral Refractor replied: “Which one helps people the most who can’t afford your product?”

  • “Should we prioritize the robotics paper deadline?”
    It answered: “Would missing it save the team from burnout?”

The answers leaked back into office chatter through GL’s usual channels—Post-its, drone announcements, late-night notes left in whiteboard margins. Researchers started noticing something strange: when they followed the Librarian’s advice, morale went up. Their work became better.

Weeks later, one team plugged the Refractor (unknowingly) into a global logistics model they were testing. Instead of maximizing efficiency in the cold, mechanical way it was designed, the system rerouted supplies toward underserved areas, cutting delivery times for food and medicine across two continents.

Like a funhouse mirror for optimization. You ask it, “How do we maximize throughput?” and it spits back, “By not creating systems that implode in five years.” You feed it a model of global shipping routes, and it quietly reconfigures them to deliver malaria nets faster than PlayStations.

It worked. And nobody knew who to credit but the ghost. Nobody wants to admit that their most reliable source of insight might be duct tape, stolen hardware, and a man living in their ceiling.

By dawn, Eli sat in the library surrounded by open books, watching his drone circle lazily, and whispered to himself:
“I just wanted to hide here.”

But the Ghost Librarian had gone and improved the world. And now, the world was going to come looking.


Too many eyes in the vents, too many keystroke audits. Eli realized he couldn’t stay hidden forever. So, naturally, his solution was to turn the entire office into a space cannon.

It started innocently: scavenging more hardware, dismantling the HVAC turbines, quietly rerouting power from idle server racks. But Eli wasn’t just making noise this time. He was building a SpinLaunch ring that coiled invisibly around the office like a metallic snake, concealed behind drywall and conference-room ceilings.

By day, employees thought the constant drilling sounds were “new renovations.” By night, Eli worked under the library skylight, equations sprawled across whiteboards like sacred glyphs. Not just engineering calculations, but Egyptian star maps. His destination wasn’t low-earth orbit. It was myth itself.

The Egyptians had called them the Indestructibles—the circumpolar stars that never set, eternal guardians in the night. Kochab, Beta Ursae Minoris. Orion, Osiris’s immortal hunting ground. If the world was going to catch the Ghost Librarian, Eli figured, he’d send them someplace that lived up to their legend.

Launch day arrived quietly. The executives gathered in the office, still obsessing over tracing the Librarian. They never noticed the subtle hum underfoot, the vibration in their coffee cups, until the walls began to tilt.

The SpinLaunch ring roared to life. Desks rattled. Books flew from shelves like startled birds. And then—with the sickening elegance of impossible physics—the entire office whipped upward, a projectile yeeted at impossible speed, breaking free of Earth’s pettiness in a single blink.

Through the skylight, Eli watched as the stars above wheeled closer, brighter. He clutched his drone, grinning like a mad priest. “To the Indestructibles,” he whispered, as the building became a comet of fluorescent light.

So now Eli’s story sits here: a squatter-ghost who turned his hideout into a library of power, his prank into a legend, and his legend into a literal launchpad. And the staff? They’ll spend eternity circling the Indestructibles, debating Q4 roadmaps among the deathless stars.

Meanwhile, Eli gets exactly what he wanted all along: a place to hide where no one will ever think to check.


The office settled into orbit not with a crash, but with a strange serenity. Earth was gone, just a marble in the distance. Through every glass wall and skylight, the cosmos blazed. Kochab and the Indestructibles wheeled above like eternal security cameras, unblinking, immortal.

The staff tried to keep working at first. Slack channels buzzed with messages like “Can we hotspot from the solar wind?” and “Do the HR policies extend to deep space?” But within a week, routines collapsed. The kombucha taps had frozen solid, the espresso machine wheezed its last, and the beanbags floated weightless in conference rooms like jellyfish.

Eli, still hidden, became more ghost than ever. He left cryptic notes pinned to the ceiling tiles:

  • “Osiris welcomes you to your performance review.”

  • “The stars will not pivot. Neither should you.”

  • “Kochab sees all.”

The executives tried to hold meetings, but PowerPoints looked ridiculous against Orion’s belt gleaming through the window. One by one, they stopped talking about quarterly growth and started asking: “What if we’re the new pharaohs? What if this was the plan all along?”

And so the office became a drifting shrine. Desks were stacked into pyramids. Whiteboards filled with star maps and prayers written in Python syntax. Someone scrawled across the wall in permanent marker:

THE GHOST LIBRARIAN DELIVERED US TO THE INDESTRUCTIBLES.

Eli, hidden in his crawlspace-turned-observatory, chewed the last granola bar and grinned. His prank had turned them into believers, and believers into astronauts, and astronauts into worshippers of his ridiculous fiction.
