<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://andrewmiracle.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://andrewmiracle.com/" rel="alternate" type="text/html" /><updated>2026-04-15T06:13:45+00:00</updated><id>https://andrewmiracle.com/feed.xml</id><title type="html">Andrew Miracle</title><subtitle>Product Generalist, Design Thinker, and AI Hacker building at the intersection of technology, design, and entrepreneurship.</subtitle><author><name>Andrew Miracle</name></author><entry><title type="html">I miss when we used to ask stupid questions</title><link href="https://andrewmiracle.com/2026/04/15/i-miss-when-we-used-to-ask-stupid-questions/" rel="alternate" type="text/html" title="I miss when we used to ask stupid questions" /><published>2026-04-15T00:00:00+00:00</published><updated>2026-04-15T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/04/15/i-miss-when-we-used-to-ask-stupid-questions</id><content type="html" xml:base="https://andrewmiracle.com/2026/04/15/i-miss-when-we-used-to-ask-stupid-questions/"><![CDATA[<p>Someone joins your team. You dump a wiki, three Notion pages, and a Slack channel backlog on them. They burn through it. And somewhere in the middle of burning through it, they ask the stupid questions.</p>

<p>“When you people say this is this, what does that mean?”</p>

<p>“When you say this is that, what does that mean?”</p>

<p>Those questions were the signal.</p>

<p>From a management point of view, that is how you checkpoint the progress of new talent on a team, or someone finding their footing in a new department. The stupid questions gave you visibility. They showed you where the person was confused, what they were picking up, and how far along they really were.</p>

<p>We do not get them anymore.</p>

<p>Not because people stopped being confused. People still get confused. They still join teams and feel overwhelmed by context they do not yet have. They still hit the gap between what they know and what the work demands.</p>

<p>The difference is where they take the confusion.</p>

<p>They paste the wiki into ChatGPT. They paste the Slack thread into Claude. They describe the codebase to Gemini and ask it to explain what they are looking at. The questions still happen. They just happen in private, with a machine that never judges them for asking.</p>

<p>And honestly, you cannot blame them. Asking a lead or a senior teammate “what does this mean” carries social cost. It always has. Nobody wants to be the person who does not get it yet. AI removed that cost entirely. You can ask the dumbest possible question, five different ways, at 2am, and nobody on your team will ever know.</p>

<p>So the questions went underground.</p>

<h2 id="the-visibility-problem">The visibility problem</h2>

<p>This changes something fundamental about how teams work.</p>

<p>When a new hire used to ask you why the billing service talks to two different databases, that question told you three things at once. It told you they had gotten far enough to find the billing service. It told you they did not yet understand the data architecture. And it told you they were actively trying to close the gap.</p>

<p>That was free signal. You did not have to schedule a check-in to get it. It just surfaced naturally because asking humans was the only option.</p>

<p>Now the signal is gone.</p>

<p>Instead, six or twelve hours before the deadline, you receive the first output. And while output is a good thing, it is also a coin flip. They might get it right. They might get it wrong. You have no way to tell which one is coming because you never saw the confusion that preceded it.</p>

<p>And when someone asks no questions and delivers polished output, you cannot tell the difference between someone who deeply understood the work and someone who got lucky with a prompt.</p>

<p>Both look the same on the surface. Clean deliverable. On time. Formatted well. Hits the brief. But one of them built understanding along the way and the other one outsourced the understanding to a machine and shipped whatever came back.</p>

<p>That is a risk. Because the moment the work gets harder, the moment the context is too specific for a general model to handle, the person who built real understanding will adapt. The person who did not will break. And you will not know which one is which until that moment arrives.</p>

<p>This is not a plea to ban AI from onboarding. That ship has sailed and it should have. AI is genuinely useful for bridging knowledge gaps, and pretending otherwise is not a strategy.</p>

<p>But it does change the job of leadership.</p>

<p>You have to assume you are not going to get the stupid questions. You have to design around that assumption instead of hoping people will still come to you with their confusion.</p>

<blockquote>
  <p>You can no longer wait until the final stretch to see the work for the first time.</p>
</blockquote>

<p>That means compressing delivery into shorter feedback cycles. Not one big checkpoint at the end. Multiple small ones throughout.</p>

<p>Early output, before anyone has had time to polish or over-rely on generated answers. Review and correction, where you can see the shape of someone’s thinking while it is still rough enough to be honest. Iteration and improvement, where the work gets better because you shaped it together rather than received it finished.</p>

<p>Each checkpoint gives you a chance to see where someone actually is instead of where their output suggests they are. It is more work. It is also the only way to replace the signal that the stupid questions used to give you for free.</p>

<h2 id="what-we-actually-lost">What we actually lost</h2>

<p>The stupid questions were never just a management tool. They were a relationship. Someone admitting they did not understand something yet, and someone else helping them get there. That exchange built trust. It built context that no wiki can replicate. It built the kind of team knowledge that lives in people, not documents.</p>

<p>AI is better than any tool we have ever had for closing information gaps. But information gaps were never the only thing the stupid questions were closing.</p>

<p>I still miss them.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Leadership" /><category term="Future of Work" /><summary type="html"><![CDATA[Your team still has questions. They just ask a machine now. And that is a problem you cannot manage your way out of.]]></summary></entry><entry><title type="html">We are all thinking with the same brain</title><link href="https://andrewmiracle.com/2026/04/12/we-are-all-thinking-with-the-same-brain/" rel="alternate" type="text/html" title="We are all thinking with the same brain" /><published>2026-04-12T00:00:00+00:00</published><updated>2026-04-12T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/04/12/we-are-all-thinking-with-the-same-brain</id><content type="html" xml:base="https://andrewmiracle.com/2026/04/12/we-are-all-thinking-with-the-same-brain/"><![CDATA[<p>You ask AI to pressure test your idea. It sounds solid.</p>

<p>Your cofounder asks the same model. Also solid. Your board member runs it through a different prompt. Same conclusion. Your teammate asks from a different angle. Still holds up.</p>

<p>Four people. Four conversations. One brain.</p>

<p>Most of us are using the same LLMs, or at least models trained on roughly the same corpus, shaped by similar safety layers, similar incentives, and often a similar worldview. The outputs look different on the surface. Different wording, different examples, different tone. But underneath, they’re pointing in the same direction.</p>

<p>And that creates a bug.</p>

<p>Today, people use AI for almost everything. Advice, research, understanding new topics, pressure testing ideas, making decisions. Often, that makes sense. Especially when you’re entering a domain where you don’t yet have enough context or expertise to think clearly on your own. That’s exactly when an AI system feels most useful.</p>

<p>But here’s the problem.</p>

<p>What happens when I bring an idea to AI, especially an idea that depends on a certain worldview to make sense, and I use that system to help me shape it, challenge it, or validate it?</p>

<p>Then my teammate does the same.
Then my board member does the same.
Then my cofounder, partner, or friend does the same.</p>

<p>The wording will differ.
The examples will differ.
The style of explanation will differ.</p>

<p>But very often, the conclusion will be almost identical.</p>

<p>Not because the idea is necessarily right, but because everyone is querying the same layer of intelligence, trained on the same patterns, rewarded toward the same kind of coherence, and biased toward the same kind of acceptable answer.</p>

<p>So what looks like independent validation may actually be consensus laundering.</p>

<p>It feels like multiple people have pressure tested the idea from different angles. But maybe they haven’t. Maybe they’ve all just asked the same machine to think from slightly different seats at the same table.</p>

<p>That’s the bug.</p>

<p>Counterarguments start becoming a luxury.
Real epistemic friction becomes rare.
And the more persuasive these systems get, the easier it becomes to confuse fluency with truth. Or alignment with rigor.</p>

<p>This matters more than people think.</p>

<p>Because in teams, in companies, in decision making environments, we often don’t need perfect certainty. We just need something that sounds reasonable enough for everyone to move forward. Once AI can produce that level of reasonable coherence for everyone in the room, bad ideas travel further simply because they meet the minimum standard of collective comfort.</p>

<p>Not truth.
Not depth.
Not real challenge.</p>

<p>Just enough sense to pass.</p>

<p>So the question isn’t only whether AI can help us think.</p>

<p>The question is how we avoid thinking inside a closed loop where the same machine keeps reflecting the same assumptions back to all of us, until agreement feels like evidence.</p>

<p>Maybe the next skill isn’t better prompting.
Maybe it’s designing for disagreement.</p>

<p>Seeking out people with real domain knowledge.
Stress testing ideas outside model consensus.
Using different systems with different priors.
Separating “this sounds right” from “this survives serious opposition.”</p>

<p>Because if we’re all using the same intelligence to validate the same ideas, then sooner or later, agreement stops being useful.</p>

<p>It becomes a mirror.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="Artificial Intelligence" /><category term="AI" /><category term="Critical Thinking" /><category term="Decision Making" /><summary type="html"><![CDATA[If you asked AI to validate your idea and so did your cofounder, your board, and your team, you did not get four opinions. You got one.]]></summary></entry><entry><title type="html">Your AI sessions could be your next digital product</title><link href="https://andrewmiracle.com/2026/03/26/your-ai-sessions-could-be-your-next-digital-product/" rel="alternate" type="text/html" title="Your AI sessions could be your next digital product" /><published>2026-03-26T00:00:00+00:00</published><updated>2026-03-26T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/03/26/your-ai-sessions-could-be-your-next-digital-product</id><content type="html" xml:base="https://andrewmiracle.com/2026/03/26/your-ai-sessions-could-be-your-next-digital-product/"><![CDATA[<p>The most valuable thing you produce with AI is not the output.</p>

<p>It is the conversation that got you there.</p>

<p>What you asked. What you asked next. What you ignored. What you circled back to. Where the draft broke and how you fixed the framing. Why you went left instead of right when both directions looked fine.</p>

<p>That trail is worth something.</p>

<p>And right now, you close the tab and it disappears.</p>

<p>I think that is about to change.</p>

<h2 id="a-quick-primer">A quick primer</h2>

<p>For most of the internet’s history, the valuable thing was information itself. Someone knew something you did not. They wrote it down, recorded it, organized it into modules, and charged you for access. That was the entire digital product economy for over a decade. E-books. Video courses. Membership sites. Masterclasses. The person who knew the thing sold the knowing.</p>

<p>Courses were the pinnacle of that era. Not because they were the best way to learn, but because they were the most natural way to package expertise for sale. You take what is in your head, you structure it into lessons, you put a price on it, and you scale it infinitely because delivery costs nothing.</p>

<p>That worked. It still works. But it stopped being the only game.</p>

<p>Because at some point, people realized they did not just need information. They needed structure. They needed the thinking already done, the system already built, the workflow already laid out so they could skip straight to doing.</p>

<p>That is when templates took over. Notion made an entire economy out of this. People were not buying empty pages. They were buying someone else’s organized way of thinking. The information was already free by then. Google had taken care of that. The structure was the product.</p>

<p>Airtable bases. Figma kits. Spreadsheet models. Prompt libraries. The whole template marketplace era was built on one insight: people will pay to skip the organizing step.</p>

<p>That era is still running. But I think the next layer is already forming and most people have not named it yet.</p>

<p>Your Claude sessions. Your ChatGPT conversations. Your Codex workflows. Your cloud coding sessions. The actual working trails you leave behind while you build, write, design, edit, debug, research, decide.</p>

<p>Those are becoming a product category.</p>

<p>Not because someone is going to sit down and read your transcripts.</p>

<p>Because someone is going to feed them to their agent.</p>

<h2 id="the-buyer-is-not-a-student-the-buyer-is-an-agent">The buyer is not a student. The buyer is an agent.</h2>

<p>This is the part that changes the economics.</p>

<p>When someone buys a course, the human has to do the work. Watch the videos. Take the notes. Try to apply it. Forget most of it. Go back and rewatch. The transfer cost is enormous. Most people never finish. Everyone knows this.</p>

<p>When someone buys a template, the transfer cost drops. Duplicate and go. But a template is frozen. It captures one arrangement at one moment in time. It does not show you what to do when the template does not fit your situation.</p>

<p>A session library is a different kind of product because the end consumer is different.</p>

<p>The consumer is not a human trying to learn.</p>

<p>The consumer is an agent trying to perform.</p>

<p>Think about what that means. An agent does not need motivation. It does not need the six-hour preamble before the useful part. It does not need to be convinced. It needs context. It needs examples of how decisions were made, what was tried, what worked, what did not, and why the approach changed.</p>

<p>That is exactly what your working sessions contain.</p>

<p>The first prompt that missed. The second one that got closer. The pivot when you realized the framing was wrong. The moment the whole thing clicked.</p>

<p>A human reads that and maybe learns something. An agent reads that and starts performing differently. It pattern-matches against your judgment and begins producing work that reflects it.</p>

<p>That is the difference. A course hopes the human will internalize it. A session library lets the agent absorb it directly.</p>

<h2 id="what-actually-disappears-when-you-close-the-tab">What actually disappears when you close the tab</h2>

<p>You finish a session. You got the output you needed. You close the tab.</p>

<p>What just vanished?</p>

<p>Every decision you made along the way. Every dead end you navigated out of. Every moment where you chose one direction over another and the choice was informed by something you know but never wrote down.</p>

<p>That is judgment. And it is the scarcest thing in the entire AI workflow.</p>

<p>Output is getting cheaper every month. Models get faster, cheaper, more capable. The cost of generating text, code, designs, analysis, all of it is falling.</p>

<p>But the cost of knowing what to generate, what to throw away, what to push further, and when to change direction entirely? That has not moved.</p>

<p>That judgment lives in your sessions. Not in the final deliverable. Not in your portfolio. In the messy, real, unpolished working trail where the actual calls were made.</p>

<p>If you curate that trail instead of discarding it, you have something.</p>

<h2 id="what-a-session-library-actually-is">What a session library actually is</h2>

<p>It is not a transcript dump. That would be useless.</p>

<p>It is a curated archive of working sessions organized by domain, by problem type, by workflow pattern. Think of it like a reference library, but instead of books, the units are real conversations where real work happened.</p>

<p>A developer who has spent six months building production systems with Claude has hundreds of sessions. Architecture decisions. Debugging workflows. Refactoring strategies. Deployment patterns. Edge cases that only show up in production. All of it real. All of it showing what actually worked and what did not.</p>

<p>Now imagine another developer who is earlier in that process. They do not need a course on prompting. They do not need a template for a system design doc. They need their agent to already understand how a more experienced developer thinks through these problems.</p>

<p>They subscribe to the session library. They point their agent at it. And now their agent is not starting from zero. It is starting from someone else’s accumulated decision-making.</p>

<p>That is not “teach me.” That is “make my tools smarter.”</p>

<p>Same thing applies to designers, editors, strategists, researchers. Anyone whose work increasingly happens in conversation with AI. Their sessions are not logs. They are transferable context. They are portable judgment.</p>

<h2 id="why-this-has-subscription-dynamics">Why this has subscription dynamics</h2>

<p>One session is a piece of content. You could sell it as a one-off and it would be worth something.</p>

<p>But a continuously growing library of sessions is a different thing. Every new session the creator adds makes the archive more valuable. More problems covered. More edge cases handled. More examples of judgment applied to situations the subscriber has not encountered yet.</p>

<p>That is compounding value. That is why people would stay subscribed.</p>

<p>Not because they are locked in. Because the library keeps getting better. Because the agent keeps getting more context to draw from. Because next month’s sessions will cover things that have not happened yet.</p>

<p>A course is finished the day it ships. A template is finished the day it ships. A session library is never finished. It grows with the creator’s work.</p>

<p>That is a fundamentally different product shape.</p>

<h2 id="this-is-not-a-course-business">This is not a course business</h2>

<p>I do not think this ends up looking like a course business.</p>

<p>And I do not think it fits in the consulting bucket either.</p>

<p>It is something closer to a subscription archive. A growing collection of expert workflow that agents can consume. Not polished. Not cleaned up. Not the usual case study where the mess has been removed and everything looks inevitable in hindsight.</p>

<p>The mess is the product.</p>

<p>The failed attempts. The restarts. The tangents that turned out to matter. The moments where the direction changed and the reason was something subtle.</p>

<p>That mess is where judgment actually lives. The clean version is the one that hides it.</p>

<p>And I think a lot of people would rather subscribe to a library that makes their agent smarter than sit through another course that asks them to become smarter themselves.</p>

<p>Not because they are lazy.</p>

<p>Because the bottleneck moved.</p>

<p>The bottleneck used to be knowledge. Courses solved that. Then it was structure. Templates solved that. Now the bottleneck is context. How do you get an agent to work the way someone experienced would work?</p>

<p>You give it a library of how that person actually works.</p>

<p>That is the product.</p>

<p>Not the output.</p>

<p>Not the template.</p>

<p>Not the course.</p>

<p>The session library. The curated, growing, living archive of how the work actually happened. Packaged not for humans to study, but for agents to absorb.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Future of Work" /><category term="Creator Economy" /><summary type="html"><![CDATA[The most valuable thing you produce with AI is not the output. It is the conversation that delivers that output, what you asked, how you asked.]]></summary></entry><entry><title type="html">What does the next era of leverage look like?</title><link href="https://andrewmiracle.com/2026/02/23/what-does-the-next-era-of-leverage-look-like/" rel="alternate" type="text/html" title="What does the next era of leverage look like?" /><published>2026-02-23T00:00:00+00:00</published><updated>2026-02-23T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/02/23/what-does-the-next-era-of-leverage-look-like</id><content type="html" xml:base="https://andrewmiracle.com/2026/02/23/what-does-the-next-era-of-leverage-look-like/"><![CDATA[<p>I’ve been thinking a lot about leverage lately. The real thing. The structural force that determines why some people, some companies, some entire civilizations end up shaping the world while others just participate in it.</p>

<p>It started with a question I couldn’t shake: if leverage is the most powerful force in economic history, and if it shifts form every few generations, then what does the next shift look like? And more selfishly, am I positioned anywhere near it?</p>

<p>I don’t have a clean answer yet. But I’ve been studying the pattern, going back about five hundred years, and what I keep finding is that the people who won big in each era weren’t necessarily the smartest or the hardest working. They were the ones who recognized the new form of leverage before it became obvious.</p>

<p>So let me walk through what I’ve found.</p>

<hr />

<h2 id="the-voyage-that-changed-the-math">The Voyage That Changed the Math</h2>

<p>In 1484, a Genoese sailor named Christopher Columbus walked into the Portuguese court with a pitch: fund a western sea route to Asia. King John II listened, consulted his advisors, and said no. The math was wrong. Columbus had underestimated the circumference of the Earth by about 25%.</p>

<p>He spent the next eight years pitching. Portugal rejected him. Spain’s scholars rejected him. He was about to try France when, in early 1492, Ferdinand and Isabella finally said yes. They’d just conquered Granada. They had momentum, a little cash, and the appetite for a bet.</p>

<p>The total investment came to roughly two million maravedis, maybe five to ten million dollars in today’s money. A meaningful sum, but not a kingdom-ending one. Columbus negotiated himself <a href="https://en.wikipedia.org/wiki/Capitulations_of_Santa_Fe">10% of all profits</a> from whatever he found, the title of Admiral of the Ocean Sea, and governorship of any new lands. All of it hereditary.</p>

<p>Then they waited. Months at sea. No updates. No feedback loop. Just capital deployed into the unknown.</p>

<p>Columbus never reached Asia. He died believing he had. But what he actually did was far more consequential. He stumbled into an entirely new map. And over the next 150 years, roughly 1500 to 1650, Spain extracted massive amounts of gold and silver from the Americas, from mines like Potosí in Bolivia and others across Mexico and Peru, and shipped it all back to Spain. The return on that two million maravedis was, by any measure, one of the greatest in human history.</p>

<p>Here’s the part I keep coming back to: before this, wealth mostly came from inheritance, conquest, or controlling existing trade routes. Columbus introduced something new. Capital deployed against uncertainty, not certainty, could expand the map itself.</p>

<p>The explorers got the fame. But the real leverage belonged to the people who understood the structure of the bet. And a century later, that understanding evolved into something even more powerful.</p>

<h2 id="when-leverage-became-a-product">When Leverage Became a Product</h2>

<p>In 1602, Dutch merchants did something no one had done before. They created the <a href="https://en.wikipedia.org/wiki/Dutch_East_India_Company">Dutch East India Company</a>, the VOC, and sold shares in it to the public. Over 1,100 investors bought in during the first offering. One of them was a maid named Neeltgen Cornelis. There was no minimum investment.</p>

<p>Think about what that meant. A century earlier, you needed to be a monarch to fund a voyage. Now, a domestic worker in Amsterdam could buy a piece of one. The joint-stock company had turned leverage itself into a product.</p>

<p>At its peak, the VOC was valued at what historians estimate to be <a href="https://www.visualcapitalist.com/most-valuable-companies-all-time/">$7-8 trillion in modern dollars</a>. It maintained its own private army and fleet, and held the authority to negotiate treaties, build forts, and wage war.</p>

<p>The structure was the innovation. Not the ships. Not the spices. The structure. Risk distributed across thousands of small investors rather than concentrated in a single crown. That template, the publicly traded corporation, is still the dominant vehicle for leverage four centuries later.</p>

<hr />

<h2 id="the-interface-play">The Interface Play</h2>

<p>I want to pause on something I noticed while reading about the British Empire, because it changed how I think about leverage entirely.</p>

<p>Britain was a small, damp island. It controlled a quarter of the world’s population and a quarter of its land mass. The usual narrative is conquest. But if you look at how it actually worked, the pattern is more subtle.</p>

<p>The British East India Company didn’t start by owning territory. It started by controlling trade interfaces. Ports. Shipping routes. Tax collection rights. In 1765, the Mughal Emperor <a href="https://en.wikipedia.org/wiki/East_India_Company">granted the Company <em>dewani</em></a>, the right to collect taxes, in Bengal, Bihar, and Orissa. Not ownership of the land. Just the right to collect from it. Indian subjects paid £22.7 million per year directly to the Company.</p>

<p>Admiral Sir John Fisher once listed what he called the “Five Keys to the World”: the Strait of Dover, the Strait of Gibraltar, the Suez Canal, the Strait of Malacca, and the Cape of Good Hope. Five chokepoints. Britain controlled all of them. Not the continents they connected, just the passages between them.</p>

<p>This was, I think, the first large-scale derivative economy. You didn’t own the asset. You owned the yield.</p>

<p>The human suffering was immense, and I don’t want to minimize that. But from a purely structural perspective, it revealed something that keeps showing up in every leverage era since: control the interface, and you control the value that flows through it.</p>

<hr />

<h2 id="building-the-map-instead-of-expanding-it">Building the Map Instead of Expanding It</h2>

<p>The industrial era inverted the model. Instead of controlling routes between places, the new leverage was in building the infrastructure that made places function.</p>

<p>John D. Rockefeller understood this better than maybe anyone in history. He didn’t try to own oil wells. Wells were risky, unpredictable, subject to the chaos of discovery. Instead, he focused on <a href="https://en.wikipedia.org/wiki/Standard_Oil">refining</a>. By 1900, Standard Oil controlled 90% of America’s oil refining capacity but only about 14% of crude supply. The wells were someone else’s problem. The processing layer, the interface between raw material and usable product, was his.</p>

<p>His tactics were ruthless. In early 1872, within three months, he bought out or bankrupted 22 of 26 competing refineries in Cleveland. He negotiated railroad discounts by promising 60 carloads per day, volume leverage that smaller competitors couldn’t match. He bought up barrel makers, chemical suppliers, even the train cars themselves. By controlling every link in the chain between well and consumer, he made competition structurally impossible.</p>

<p>Andrew Carnegie did the same thing with steel. He didn’t just make steel, he owned the iron mines in Minnesota’s Mesabi Range, the coal fields, the coke ovens, the transport fleet, and the mills in Pittsburgh. Full vertical integration. By 1900, Carnegie Steel <a href="https://en.wikipedia.org/wiki/Carnegie_Steel_Company">produced more steel than all of Great Britain</a>. When J.P. Morgan bought him out, the price was $480 million, creating the world’s first billion-dollar corporation in U.S. Steel.</p>

<p>Cornelius Vanderbilt played the same game with railroads. When the New York Central resisted his consolidation in 1867, he simply stopped his Hudson River Railroad trains at East Albany, cutting the Central off from its connection to New York City. It capitulated immediately. He understood that whoever controls the infrastructure doesn’t need permission from anyone who depends on it.</p>

<p>These weren’t adventurers. They were builders. And the leverage wasn’t in discovering new territory, it was in becoming the chokepoint of existing territory.</p>

<p>What’s interesting is that this playbook isn’t just history. Someone is running it right now, in real time, on a continent most investors still underestimate.</p>

<p>Nigeria is Africa’s largest oil producer. For decades, the country exported crude and imported it back as refined gasoline, diesel, and kerosene. The <a href="https://www.rigzone.com/news/wire/nigeria_hopes_new_refinery_will_cut_26b_import_bill-23-may-2023-172836-article/">import bill ran to $23 billion a year by 2022</a>, consuming roughly 40% of the country’s foreign exchange earnings. An oil-producing nation, spending almost half its dollar reserves buying back its own product in finished form. That’s not an economy. That’s a dependency.</p>

<p>Aliko Dangote looked at that and saw exactly what Rockefeller saw in 1870. The wells weren’t the chokepoint. The refinery was.</p>

<p>In May 2023, he opened the <a href="https://en.wikipedia.org/wiki/Dangote_refinery">Dangote Refinery</a> in Lagos. 650,000 barrels per day. The largest single-train refinery in the world. Cost: $19 billion, the biggest private industrial investment in African history. By early 2025, it supplied over 60% of Nigeria’s petrol and represented two-thirds of the country’s total refining capacity. He’s already announced plans to <a href="https://oilprice.com/Latest-Energy-News/World-News/Dangote-Drives-Nigerias-Domestic-Fuel-Supply-Above-57-as-Imports-Retreat.html">expand to 1.4 million barrels per day</a>, which would make it the largest refinery on earth.</p>

<p>But the refinery isn’t where Dangote started. He started with cement. Nigeria had limestone everywhere but almost no cement industry. The country was importing bags of finished cement from overseas at massive markup. Dangote built local plants. Today, <a href="https://dangotecement.com/">Dangote Cement holds 61% of the Nigerian market</a> and operates across ten African countries with over 52 million tonnes of annual capacity. Nigeria now saves an estimated $3 billion a year from not importing cement.</p>

<p>Then sugar. Dangote Sugar Refinery in Lagos, the largest in sub-Saharan Africa, processes raw cane into finished white sugar and controls over 70% of the Nigerian market. Then fertilizer, a $2.5 billion urea plant, the largest in Africa, 3 million tonnes a year, which launched right as the Russia-Ukraine war disrupted global fertilizer supply.</p>

<p>The pattern is identical every time. Africa exports raw materials. The world processes them. Africa buys back the finished product at ten times the cost. Dangote inserts himself as the processing layer and captures the margin that had been flowing overseas.</p>

<p>Carnegie controlled the interface between iron ore and the railroads that built America. Rockefeller controlled the interface between crude oil and the lamps and engines that powered it. Dangote is controlling the interface between African raw materials and African consumers. The playbook hasn’t changed. The geography has.</p>

<p>His net worth roughly <a href="https://furtherafrica.com/2025/02/21/aliko-dangotes-net-worth-hits-23-9b-as-mega-refinery-transforms-africa/">doubled to $24 billion</a> once the refinery went from construction to operation. The wealth didn’t come from discovering a resource or inventing a technology. It came from positioning himself at the chokepoint between what Africa produces and what Africa consumes. That’s leverage.</p>

<hr />

<h2 id="code-changes-everything">Code Changes Everything</h2>

<p>Then came software. And the rules changed again.</p>

<p>In August 2011, Marc Andreessen published an essay in the Wall Street Journal titled <a href="https://a16z.com/why-software-is-eating-the-world/">“Why Software Is Eating the World.”</a> His argument was that software companies were upending the fundamental economics of traditional industries. Distribution costs collapsed. Scale went exponential. A teenager in a dorm room could reach billions of people for essentially zero marginal cost.</p>

<p>What Andreessen described was real, but I think Naval Ravikant articulated the deeper shift more precisely. In his now-famous 2018 tweetstorm, he wrote:</p>

<blockquote>
  <p>“Code and media are permissionless leverage. They’re the leverage behind the newly rich. You can create software and media that works for you while you sleep.”</p>
</blockquote>

<p>That word, “permissionless,” is the key. Every prior form of leverage required someone’s permission. Columbus needed a queen. The VOC needed investors. Rockefeller needed railroads. But code? Code just needed to work. You didn’t need to ask anyone.</p>

<p>Kevin Kelly saw this coming even earlier. In 2008, he published <a href="https://kk.org/thetechnium/1000-true-fans/">“1,000 True Fans,”</a> an essay arguing that a creator needed only a thousand people willing to spend $100 a year to make a living. No label. No publisher. No gatekeeper. Direct-to-fan distribution through the internet meant the interface between creator and audience had been permanently disintermediated.</p>

<p>The leverage formula shifted from capital and infrastructure to code and networks. And for the first time in history, it was available to individuals.</p>

<hr />

<h2 id="the-attention-shift">The Attention Shift</h2>

<p>Then something interesting happened. Content became infinite. And attention became scarce.</p>

<p>Herbert Simon predicted this in 1971, long before the internet existed. In a paper called “Designing Organizations for an Information-Rich World,” he wrote:</p>

<blockquote>
  <p>“A wealth of information creates a <em>poverty of attention</em> and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”</p>
</blockquote>

<p>It took about forty years for that prediction to fully materialize. As of 2025, the average person spends nearly three hours per day on social media alone. For Gen Z, it’s over three hours. TikTok averages nearly an hour per user per day. YouTube about the same.</p>

<p>The smartest companies realized that the new chokepoint wasn’t servers or content or even devices. It was habit. Social platforms, streaming services, AI assistants: they’re not fighting for revenue first. They’re fighting for daily repetition. Because habit converts. Into belief. Into spending. Into identity.</p>

<p>Control attention, and you sit upstream of every other market. That’s the new interface.</p>

<hr />

<h2 id="and-now-intelligence">And Now, Intelligence</h2>

<p>Which brings me to where I’ve been spending most of my time thinking. Because I believe we’re standing at another shift. And this one feels different from the others.</p>

<p>AI doesn’t just create a new interface. It abstracts human effort itself. Just as SaaS abstracted hardware, AI is abstracting the labor layer. You’re no longer hiring people to do work. You’re interfacing with intelligence directly.</p>

<p>The numbers are starting to tell the story. <a href="https://research.contrary.com/company/midjourney">Midjourney</a>, the AI image generation company, reportedly hit $200 million in annual revenue in 2023 with a team of around 40 people. Cursor, the AI coding tool, reached $500 million in annual recurring revenue with fewer than 50 employees. Levels IO (the twitter guy) grew Interior AI, then Photo AI, to millions in annual revenue. <em>Alone</em>.</p>

<p>Dario Amodei, the CEO of Anthropic, has predicted that AI may soon enable a single person to operate a billion-dollar company. Sam Altman has talked about ten-person companies with billion-dollar valuations becoming normal. For context, the average AI unicorn in 2024 reached its billion-dollar valuation with about 200 employees in two years. Non-AI unicorns typically needed 400+ employees and nine years.</p>

<p>And here’s what makes this moment feel especially charged: we’re still at the very beginning. Despite all the noise, <a href="https://medium.com/aimonks/why-97-of-ai-users-dont-pay-and-what-this-means-5241f22434a7">only about 3% of AI users actually pay</a> for the tools. That’s roughly 0.7% of the world’s population. <a href="https://www.pewresearch.org/short-reads/2025/10/06/about-1-in-5-us-workers-now-use-ai-in-their-job-up-since-last-year/">Pew Research found</a> that only about 21% of US workers use AI in any capacity at work, and daily usage sits around 14% globally. The vast majority of the workforce hasn’t even started.</p>

<p>Which means if you’re already building on this layer, you’re not early in the way people were early to social media in 2010. You’re early in the way Rockefeller was early to refining in 1870, before anyone else realized where the real chokepoint would form.</p>

<p>The leverage multiplier is no longer geography. Not infrastructure. Not distribution. It’s capability. One person can now write like a content team, code like an engineering squad, analyze like a consulting firm, and design like an agency. The ratio of labor to output has been permanently altered.</p>

<hr />

<h2 id="what-im-still-working-out">What I’m Still Working Out</h2>

<p>I started this essay trying to answer a question about what the next era of leverage looks like. And I think the pattern is clear enough: every few generations, a new form of leverage emerges, and the people who position themselves near it early are the ones who shape what comes next.</p>

<p>Ships. Joint-stock companies. Trade routes. Railroads. Refineries. Code. Attention. And now, intelligence.</p>

<p>But here’s what I keep sitting with. Throughout all of this history, the most transformative ambition has never been about working harder within the existing system. It’s been about recognizing when the system is shifting and repositioning before it compounds. Junior developers ask me which programming language to learn. I always tell them the same thing: pick the one that’s emerging, not the one that’s dominant. Position yourself before the compound effect, not after.</p>

<p>In 1492, that meant backing the voyage. In 1602, it meant buying shares. In 1870, it meant controlling the refinery. In 2011, it meant writing software. In 2018, it meant building an audience.</p>

<p>And now? I think it means getting as close as possible to the intelligence layer. Not just using AI tools, but understanding what they make possible that wasn’t possible before. Building on top of them. Thinking in terms of capability, not headcount.</p>

<p>The ships are being built again. The maps are being redrawn. From the inside, it feels uncertain. From the outside, looking back, it will probably look inevitable.</p>

<p>I don’t have this fully figured out yet. But I know enough to know where I want to be standing when it compounds.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Future of Work" /><category term="Startup" /><summary type="html"><![CDATA[From Columbus to AI - how leverage has evolved through geography, infrastructure, code, attention, and now intelligence. The question isn't whether a new era is forming, but whether your ambition is pointed at it.]]></summary></entry><entry><title type="html">My Claude Codes Better than Yours</title><link href="https://andrewmiracle.com/2026/02/14/my-claude-is-better-than-yours/" rel="alternate" type="text/html" title="My Claude Codes Better than Yours" /><published>2026-02-14T00:00:00+00:00</published><updated>2026-02-14T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/02/14/my-claude-is-better-than-yours</id><content type="html" xml:base="https://andrewmiracle.com/2026/02/14/my-claude-is-better-than-yours/"><![CDATA[<p>The recent phenomenon I’ve noticed, and honestly it’s funny when you think about it, is what working with AI agents now looks like inside collaborative environments, workspaces, organizations. There’s this subtle undercurrent.</p>

<blockquote>
  <p>Call it: “my Claude is better than yours.”</p>
</blockquote>

<p>Someone feels superior because they’re on the $200 Claude Max plan versus the $20 plan. Or because they have a custom configuration running the Anthropic API through OpenCode, chaining two or three or four models through OpenRouter to ship code and deploy features.</p>

<p>Versus you, using regular Claude Opus 4.6. Or <a href="https://github.com/openai/codex">OpenAI Codex CLI</a>. Or whatever the “standard” setup is.</p>

<p>And here’s where it gets interesting. During code reviews or feature acceptance sessions, a feature implemented by an AI, not a human, gets rewritten. Not because it’s broken. Not because it fails spec. But because the lead engineer, head developer, or engineering manager feels their Claude setup is superior enough to implement that feature “better.”</p>

<p>So they regenerate it. On the surface, it looks like tool tribalism.</p>

<hr />

<p>But here’s the thesis:</p>

<blockquote>
  <p><strong>This isn’t about Claude plans. It’s about identity displacement.</strong></p>
</blockquote>

<p>When AI becomes the primary executor, the only remaining place to compete is in how well you wield it. What used to be “I write better code than you” becomes “I extract better intelligence than you.”</p>

<p>That shift is subtle, but psychologically loaded. Because now the output is not purely yours. It is co-produced. And when someone rewrites an AI-generated feature with their own stack, they are not just editing code. They are reasserting authorship. They are saying, consciously or not: “My interface with intelligence is superior.”</p>

<hr />

<p>This is where the tension comes from.</p>

<p>If a human rewrote your code, you could debate architecture, logic, taste. When someone regenerates your AI’s output with their own agent, the debate becomes invisible. The comparison is between invisible processes.</p>

<p>Prompt versus prompt.
Model versus model.
Taste versus taste.
Spec versus spec.</p>

<p>And because the intelligence is externalized, the ego has nowhere obvious to stand. So it relocates. To configuration. To tooling. To orchestration.</p>

<p>That is what looks like “my Claude is better than yours.” But underneath it is something more fragile: a fear of being out-extracted. Out-prompted. Out-orchestrated.</p>

<hr />

<p>And this is exactly what made frameworks like <a href="https://github.com/openclaw/openclaw">OpenClaw</a> explode in popularity. An open-source AI agent that anyone can configure, customize, extend. It became the arena where orchestration identity lives.</p>

<p>But even OpenClaw isn’t enough. The community is fracturing along the same fault line. People are arguing that OpenClaw is too slow, too heavy, too Node.js. So now there’s <a href="https://github.com/sipeed/picoclaw">PicoClaw</a> and <a href="https://github.com/nearai/ironclaw">IronClaw</a><label for="sn-1" class="sidenote-toggle sidenote-number"></label><input type="checkbox" id="sn-1" class="sidenote-toggle" /><span class="sidenote">PicoClaw is a full rewrite in Go — a single binary that boots in one second on a $10 RISC-V board with &lt;10MB RAM. 5,000 stars in four days. IronClaw is a Rust rewrite from NEAR AI that sandboxes every tool in isolated WebAssembly containers.</span>, because the argument is shifting from “faster” to “more secure.”</p>

<p>The stated reasons are speed and security. And those are real engineering concerns. But underneath? It’s the same tension wearing a different outfit.</p>

<p>“My interface with AI is superior to yours.”</p>

<p>Go versus TypeScript. Rust versus Go. Single binary versus container. Sandboxed versus permissive. Each port is a declaration of values, and each value is a proxy for identity. The language you rewrite the agent in says something about who you think you are as an engineer. And that’s the point.</p>

<hr />

<blockquote>
  <p>Is it a trust issue? Is it a communication problem? Is it insecurity dressed up as infrastructure preference?</p>
</blockquote>

<p>Maybe all of it. But one thing is clear.</p>

<blockquote>
  <p>In AI-native workplaces, competence is no longer just what you can build. It’s what your interface with intelligence can produce. And that feels deeply personal.</p>
</blockquote>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="Musings" /><category term="AI" /><category term="Engineering Culture" /><category term="Future of Work" /><summary type="html"><![CDATA[In AI native workplaces, the tension is no longer who writes better code. It's who extracts better intelligence. When AI writes the code, what's left to compete over?]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2026/02/let-the-lobsters-brawl.webp" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2026/02/let-the-lobsters-brawl.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">AI has taken my job, yours is next</title><link href="https://andrewmiracle.com/2026/02/11/ai-has-taken-my-job-yours-is-next/" rel="alternate" type="text/html" title="AI has taken my job, yours is next" /><published>2026-02-11T00:00:00+00:00</published><updated>2026-02-11T00:00:00+00:00</updated><id>https://andrewmiracle.com/2026/02/11/ai-has-taken-my-job-yours-is-next</id><content type="html" xml:base="https://andrewmiracle.com/2026/02/11/ai-has-taken-my-job-yours-is-next/"><![CDATA[<p>In 1835, around 75% of cotton mills in Britain were steam powered.</p>

<p>There were over 50,000 power looms running across the country. And something profound had happened. Machines no longer assisted skilled textile workers. They replaced the need for textile skill altogether.</p>

<p>Factories became so efficient that Britain could outperform India, even though Indian labor was cheaper. British mills could produce in 2,000 hours what Indian producers needed 50,000 hours to achieve.</p>

<p>That is not incremental improvement. That is structural displacement.</p>

<p>I keep thinking about this because I think we are watching it happen again. Not in cotton. In knowledge work. And I am not saying this from the outside.</p>

<p>I am a two-time CTO. I have built products across AI, Web3, and SaaS at startups backed by Y Combinator, Techstars, and OpenAI. I founded a pan-African hackathon community of 12,000 developers across 22 countries. That is exactly the kind of work AI is now learning to do.</p>

<p>For years, AI felt like a helpful tool.</p>

<p>Now it is something else entirely.</p>

<h2 id="the-deliberate-choice">The deliberate choice</h2>

<p>The AI labs made a strategic decision that most people still have not fully processed. They made AI great at writing code first.</p>

<p>Not because they only cared about software engineers. Because code builds everything else.</p>

<p>If an AI can write code, it can help build the next version of itself. A smarter version writes better code. Better code builds an even smarter version. That is not theory. That is a feedback loop, and feedback loops compound.</p>

<p>Recently, OpenAI released a new coding model and stated something in the documentation that stopped me:</p>

<blockquote>
  <p>The model was instrumental in creating itself, used to debug training, manage deployment, and diagnose evaluations.</p>
</blockquote>

<p>Read that again.</p>

<p>AI helped build AI.</p>

<p>This is not a prediction about the future. It is a description of the present. Intelligence is being applied to improve intelligence. And once that loop starts running, it does not slow down on its own.</p>

<p>I experienced this firsthand. I used Claude to migrate my entire website, 456 pages of content across multiple categories, from WordPress to Jekyll in one shot.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I'm still processing this shock 😮 <br /><br />Opus 4.6 just migrated my entire website, 456 pages of content across multiple categories, from WordPress to Jekyll in one shot. <br /><br />this felt like watching an industry category boundary collapse in realtime 😵. <a href="https://t.co/Z7gpyizSDO">pic.twitter.com/Z7gpyizSDO</a></p>&mdash; drew.sh (@letandrewcook) <a href="https://twitter.com/letandrewcook/status/2021148286652924050?ref_src=twsrc%5Etfw">February 10, 2026</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>That felt like watching an industry category boundary collapse in real time. Not because the tool was clever. Because the economics changed underneath me while I was using it.</p>

<h2 id="the-cotton-mill-pattern">The cotton mill pattern</h2>

<p>This is the part most people miss because they are focused on capability. They ask “can AI do my job?” when the real question is about economics.</p>

<p>AI does not need to be better than you at everything.</p>

<p>It needs to be 80% as good, 10 times faster, and 100 times cheaper.</p>

<p>That changes the market. Just like steam-powered mills did not need to produce perfect cloth. They just needed to produce it faster and at scale. The quality was good enough. The speed and cost made the comparison irrelevant.</p>

<p>When textile automation hit, skilled workers lost leverage. Semi-skilled operators increased. Output exploded. Entire industries reorganized around machine capability, not human craft.</p>

<p>We are watching the same pattern unfold in knowledge work.</p>

<p>The experience tech workers have had over the last year, watching AI go from “useful assistant” to “this can do parts of my job better than I can,” is about to spread to every knowledge industry. Law. Finance. Medicine. Accounting. Consulting. Writing. Design.</p>

<p>Not in ten years.</p>

<p>In one to five. Possibly less.</p>

<p>The AI labs are already working on it. The same reasoning capabilities they built for math and coding are being extended into every professional domain. Legal reasoning. Financial modeling. Medical diagnosis. The code-first strategy was the beachhead. Everything else is the campaign.</p>

<h2 id="the-honest-objection">The honest objection</h2>

<p>If you used early AI models in 2023 and thought “this makes stuff up” or “this is not that impressive,” you were right. They hallucinated. They were inconsistent. They were limited.</p>

<p>But in AI time, two years is ancient history.</p>

<p>The difference between those early systems and today’s models is the difference between a prototype and infrastructure. Judging today’s AI by your 2023 experience is like judging the modern internet by dial-up. The name is the same. The thing is not.</p>

<h2 id="the-window">The window</h2>

<p>There is a brief window right now that I think most people do not appreciate.</p>

<p>Most companies are still underestimating what is happening. Most professionals are still casually experimenting. Very few are deeply proficient.</p>

<p>That creates asymmetry.</p>

<p>The person who walks into a meeting and says “I used AI to run this analysis in an hour instead of three days” instantly changes their perceived value. Not eventually. Immediately. In environments driven by speed and leverage, the person who multiplies output becomes indispensable.</p>

<p>For decades, career growth followed a predictable pattern. Gain experience. Build expertise. Move up slowly. Now there is a new accelerant, and it is not experience or credentials. It is tool mastery.</p>

<p>The professionals who win in this era will not be the ones who fear automation. They will be the ones who orchestrate it.</p>

<p>The shift is not “AI will replace you.”</p>

<p>The shift is that people who know how to use AI will replace people who do not.</p>

<h2 id="the-hand-spinner-question">The hand-spinner question</h2>

<p>Here is where the cotton mill parallel gets uncomfortable.</p>

<p>When steam power arrived, the question was not whether the technology was ready. It was whether the workers would adapt before the economics made their current approach irrelevant.</p>

<p>Most did not. Not because they were stupid. Because the change felt gradual until it was sudden. Because it is hard to abandon a skill you spent years building. Because the new way of working felt like cheating until it became the standard.</p>

<p>The question is not “will AI take my job?”</p>

<p>The better question is: am I operating like a hand-spinner in a steam-powered world?</p>

<p>Because the cotton mill of 1836 did not ask permission to disrupt.</p>

<p>And AI will not either.</p>

<p>The winners are not going to be the ones who resist the machine. They are going to be the ones who learn to run it. The window for that is open right now. It will not stay open.</p>

<p>Not because the opportunity disappears.</p>

<p>Because the advantage does.</p>

<p>When everyone is fluent, fluency stops being a differentiator. Right now it still is. That is the moment we are in. And I think most people are going to realize it about two years too late.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Engineering" /><category term="Future of Work" /><summary type="html"><![CDATA[After studying past technological revolutions, here is what most people still get wrong about AI and what it means for their careers.]]></summary></entry><entry><title type="html">The Side Effect of Vibe Coding Nobody Talks About</title><link href="https://andrewmiracle.com/2026/02/10/i-read-code-faster-because-of-ai/" rel="alternate" type="text/html" title="The Side Effect of Vibe Coding Nobody Talks About" /><published>2026-02-10T12:00:00+00:00</published><updated>2026-02-10T12:00:00+00:00</updated><id>https://andrewmiracle.com/2026/02/10/the-side-effect-of-vibe-coding-nobody-talks-about</id><content type="html" xml:base="https://andrewmiracle.com/2026/02/10/i-read-code-faster-because-of-ai/"><![CDATA[<p>I have gotten used to reviewing 1,000 plus lines of code. That sounds like a flex, but it mostly means my baseline for “normal” changed.</p>

<p>Lately I catch myself staring at a 100 line diff and feeling oddly impatient. Not because it is too much, but because my brain wants to move faster than it used to.</p>

<p>And it hit me:</p>

<p>AI coding has indirectly made my ability to read code faster.</p>

<p>Not slower.
Not lazier.
Faster.</p>

<p>When you spend weeks reviewing AI output, you build a different kind of pattern recognition. You stop reading line by line and start scanning for structure. Where does the data flow start. Where does it cross a boundary. What assumptions are baked in. You zoom out first, then zoom in on the few lines that actually carry risk.</p>

<p>That habit sticks.</p>

<p>Now when a teammate opens a PR, I am not hunting for syntax. I am hunting for intent. Does this change match the goal. Does it make the system clearer or noisier. Did it introduce new edges I should care about.</p>

<p>AI did not make me lazy. It made me a faster reader, the same way speed reading does not remove comprehension but changes how you search for it.</p>

<p>The funny part is that the smaller diffs are still important. They are just not interesting in the same way anymore. They feel like tiny islands compared to the continent I am used to walking across.</p>

<p>I am not sure if that is good or bad yet. But it is real.</p>

<p>Seedling note. I am still figuring out what this does to code review culture and how to keep that speed without losing care.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Musings" /><category term="Ai" /><category term="Engineering" /><category term="Productivity" /><summary type="html"><![CDATA[A subtle shift from AI coding: reading code faster, and why tiny diffs suddenly feel like a slowdown worth unpacking.]]></summary></entry><entry><title type="html">From WordPress to Jekyll: Rebuilding My Digital Home</title><link href="https://andrewmiracle.com/2026/02/09/from-wordpress-to-jekyll-rebuilding-my-digital-home/" rel="alternate" type="text/html" title="From WordPress to Jekyll: Rebuilding My Digital Home" /><published>2026-02-09T12:00:00+00:00</published><updated>2026-02-09T12:00:00+00:00</updated><id>https://andrewmiracle.com/2026/02/09/from-wordpress-to-jekyll-rebuilding-my-digital-home</id><content type="html" xml:base="https://andrewmiracle.com/2026/02/09/from-wordpress-to-jekyll-rebuilding-my-digital-home/"><![CDATA[<p>Every personal website is a time capsule. I have rebuilt mine so many times that you can read the layers like a timeline, both for the web and for how I write.</p>

<hr />

<h2 id="the-early-days-jekyll--tachyons-on-netlify">The Early Days: Jekyll + Tachyons on Netlify</h2>

<p>The first real version of this site ran on <a href="https://jekyllrb.com/">Jekyll</a> with the <a href="http://tachyons.io">Tachyons CSS framework</a>, deployed to <a href="https://www.netlify.com">Netlify</a>. It was simple, fast, and version-controlled. I wrote in <a href="https://www.markdownguide.org/getting-started/">Markdown</a>, pushed to GitHub, and Netlify rebuilt the site automatically. There were earlier experiments too: <a href="https://bourbon.andrewmiracle.com/">Bourbon</a> and <a href="https://koolamusic.github.io/">Donna</a> were previous incarnations, each one a snapshot of whatever CSS approach I was excited about at the time.</p>

<p>The workflow was elegant, but after a while writing Markdown in a code editor felt like <em>more code</em> on top of the code I was already writing all day. I wanted a real editor, a place to draft ideas, rearrange blocks visually, and publish without touching a terminal.</p>

<h2 id="the-wordpress-years-bedrock-on-digitalocean">The WordPress Years: Bedrock on DigitalOcean</h2>

<p>So I switched to WordPress, specifically the <a href="https://roots.io/bedrock/">Bedrock</a> stack, hosted on <a href="https://www.digitalocean.com/">DigitalOcean</a> with MySQL. Bedrock gave me the Git-based workflow I wanted: plugins and themes managed through <a href="https://getcomposer.org/">Composer</a> via <a href="https://wpackagist.org/">WPackagist</a>, configuration in environment variables, and a clean separation between WordPress core and my customizations.</p>

<p>The <a href="https://wordpress.org/news/2023/03/dolphy/">Gutenberg block editor</a> sealed the deal. The WordPress team did genuinely impressive work on the editing experience. I could draft things in chunks, embed code blocks with syntax highlighting via the Code Block Pro plugin, drop in images, and publish when ready. It felt like the right tool for someone who wanted to <em>write</em> rather than <em>build</em>.</p>

<p>And it worked. For a couple of years, I published regularly. The site accumulated 429 pages of content spanning blog posts from 2012 to 2025, a portfolio of experiments, talks, and what would eventually become a digital garden.</p>

<p>But WordPress carries weight. The database. The hosting. The plugin updates. The PHP runtime. The attack surface. Every time I wanted to make a design change, I was fighting a theme system that wasn’t built for the kind of precise, utility-first styling I had grown used to. And the Gutenberg output, while great for writing, produced HTML that was heavy with wrapper divs, inline styles, and WordPress-specific class names.</p>

<p>Then traffic started growing. What used to be a manageable trickle of visitors turned into consistent daily hits that pushed the DigitalOcean droplet to its limits. I found myself needing to upgrade the server just to keep the site responsive under load. For a <em>personal blog</em>. WordPress was doing dynamic page rendering, querying the database, and executing PHP on every single request. It felt absurd to be scaling infrastructure for what is fundamentally a collection of static documents. That was the final nudge. If I’m going to pay for more compute, I’d rather pay for none at all.</p>

<figure class="full-bleed my-4">
  <div class="max-w-7xl mx-auto px-4 sm:px-6 grid grid-cols-1 md:grid-cols-2 gap-4">
    <img src="/assets/images/uploads/2026/02/web-traffic-30-days.png" alt="Cloudflare web traffic dashboard showing 17.93k unique visitors over 30 days" class="w-full ring-1 ring-black/5 dark:ring-white/5" loading="lazy" />
    <img src="/assets/images/uploads/2026/02/ai-crawl-metrics-24h.png" alt="Cloudflare AI crawl control showing 606 bot requests in 24 hours from PetalBot, Amazonbot, OAI-SearchBot and others" class="w-full ring-1 ring-black/5 dark:ring-white/5" loading="lazy" />
  </div>
  <figcaption class="text-center text-xs text-gray-400 dark:text-gray-500 font-sans">Cloudflare analytics showing 17.93k unique visitors in 30 days and an average of ~600 AI crawl requests every 24 hours</figcaption>
</figure>

<h2 id="setting-up-the-starter">Setting Up the Starter</h2>

<p>Before touching any WordPress content, I set up the destination: a clean Jekyll 4.4 project with Tailwind CSS 3.4, the Typography plugin, and PostCSS wired through <code>jekyll-postcss-v2</code>. This meant getting Ruby versions, Bundler, and npm modules all playing nicely together, making sure <code>Gemfile.lock</code> had the right platforms for deployment, confirming the PostCSS pipeline worked without cssnano (which has a css-tree incompatibility), and verifying that the build actually produced output before any content went in.</p>

<p>This was deliberate. The bulk of the real work in this migration was two things: <code>wget --mirror</code> to capture the source, and getting the Jekyll + Tailwind starter into a deployable state. Everything else, the actual content migration across 429 pages, turned out to be the easy part, but only because the foundation was solid first.</p>

<h2 id="the-migration-429-pages-one-python-script">The Migration: 429 Pages, One Python Script</h2>

<p>The migration started with a question: <em>what if I just wget the entire site and work from there?</em></p>

<p>That’s essentially what happened. I used <code>wget --mirror</code> to create a static HTML clone of the WordPress site, all 429 HTML files of it. But here’s the thing about WordPress: most of those pages aren’t actual content. They’re the taxonomy and archive cruft that WordPress generates automatically. The extraction script’s first job was filtering all of that out:</p>

<pre><code>andrewmiracle.com/                  ← 429 HTML files from wget --mirror
├── 2012/…/slug/                    ✓ blog posts (YYYY/MM/DD/slug)
├── 2019/…/slug/                    ✓ blog posts
├── 2023/…/slug/                    ✓ blog posts
├── 2025/…/slug/                    ✓ blog posts
├── lab/slug/                       ✓ portfolio experiments
├── garden/slug/                    ✓ digital garden notes
├── talks/slug/                     ✓ presentations
├── whoami/                         ✓ special page
│
├── category/programming/           ✗ taxonomy archive
├── category/ai/                    ✗ taxonomy archive
├── tag/docker/                     ✗ taxonomy archive
├── tag/llm/                        ✗ taxonomy archive
├── author/andrew/                  ✗ author archive
├── page/2/                         ✗ pagination
├── page/3/                         ✗ pagination
├── feed/                           ✗ RSS/Atom feeds
├── wp-json/                        ✗ REST API endpoints
├── wp/wp-admin/                    ✗ WordPress admin
├── cdn-cgi/                        ✗ Cloudflare routes
├── app/                            ✗ WordPress app routes
└── portfolio-category/             ✗ portfolio taxonomy
</code></pre>

<p>Roughly half the mirrored files were category listings, tag archives, paginated index pages, and WordPress infrastructure routes, none of it actual content. After discarding those, the <a href="https://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a>-based parser classified the remaining pages by URL pattern: anything matching <code>YYYY/MM/DD/slug</code> became a blog post, <code>lab/slug</code> became a portfolio item, and so on. Frontmatter was extracted from Open Graph and article meta tags. Content was pulled from the <code>div.post-content</code> area between the entry header and article footer.</p>
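<p>A minimal sketch of that filter-and-classify pass (simplified; the prefix list and helper name here are illustrative rather than the script’s actual code, and special pages like <code>whoami/</code> need their own handling):</p>

```python
import re

# WordPress-generated cruft to discard (illustrative, from the tree above)
SKIP_PREFIXES = ("category/", "tag/", "author/", "page/", "feed/",
                 "wp-json/", "wp/", "cdn-cgi/", "app/", "portfolio-category/")

# Blog posts live at YYYY/MM/DD/slug/index.html in the wget mirror
POST_RE = re.compile(r"^\d{4}/\d{2}/\d{2}/[^/]+/index\.html$")

def classify(rel_path):
    """Return a content type for a mirrored HTML file, or None to skip it."""
    if rel_path.startswith(SKIP_PREFIXES):
        return None
    if POST_RE.match(rel_path):
        return "post"
    if rel_path.startswith("lab/"):
        return "lab"
    if rel_path.startswith("garden/"):
        return "garden"
    if rel_path.startswith("talks/"):
        return "talk"
    return None
```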

<p>Images were the tedious part. The script preferred <code>data-src</code> over <code>src</code> for lazy-loaded images, stripped WordPress’s <code>?resize=...&amp;ssl=1</code> query parameters, and downloaded everything to local paths under <code>assets/images/uploads/</code>. Hundreds of images, each one needing to resolve correctly.</p>
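<p>The <code>data-src</code>-over-<code>src</code> preference and the query-string stripping look roughly like this (a dependency-free sketch: the real script uses BeautifulSoup, with the stdlib’s <code>html.parser</code> standing in here):</p>

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class ImageCollector(HTMLParser):
    """Collect <img> URLs, preferring data-src (lazy-loaded) over src,
    and stripping WordPress's ?resize=...&ssl=1 query strings."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("data-src") or a.get("src")  # lazy-load attr wins
        if src:
            # Drop query parameters like ?resize=300%2C200&ssl=1
            self.urls.append(urlsplit(src)._replace(query="").geturl())

def extract_image_urls(html):
    parser = ImageCollector()
    parser.feed(html)
    return parser.urls
```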

<h3 id="the-8gb-image-problem">The 8GB Image Problem</h3>

<p>Then came the unpleasant surprise. The downloaded <code>uploads/</code> directory weighed in at over <strong>8GB</strong>. For a personal blog. The culprit was WordPress’s <a href="https://developer.wordpress.org/reference/functions/add_image_size/">thumbnail regeneration system</a>. Every time you upload a single image, WordPress generates multiple resized copies: a 150x150 thumbnail, a 300-wide medium, a 1024-wide large, plus any custom sizes registered by the theme or plugins. A single 2MB photograph would spawn 5-6 variants with filenames like <code>headshot-150x150.jpg</code>, <code>headshot-300x200.jpg</code>, <code>headshot-1024x683.jpg</code>. Multiply that across hundreds of uploads over several years and the bloat is staggering.</p>

<p>The fix required a dedicated cleanup script that worked in three passes:</p>

<ol>
  <li><strong>Rewrite references</strong>: scan every content file for image paths containing WordPress’s <code>-NNNxNNN</code> dimension suffix and rewrite them to point to the original full-size file instead</li>
  <li><strong>Promote orphans</strong>: when a thumbnail existed but the original was missing (WordPress sometimes only kept the resized version), copy the thumbnail to the original filename before rewriting</li>
  <li><strong>Purge the rest</strong>: delete every remaining file matching the <code>-NNNxNNN</code> thumbnail pattern, then clean up empty directories</li>
</ol>
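<p>The filename mapping behind passes 1 and 3 can be sketched like this (simplified and illustrative; the real script also walks the filesystem to promote orphans in pass 2 and delete leftovers in pass 3):</p>

```python
import re

# WordPress thumbnail suffix, e.g. headshot-1024x683.jpg
THUMB_SUFFIX = r"-\d+x\d+(\.(?:jpe?g|png|gif|webp))"

def original_name(filename):
    """Map a -NNNxNNN thumbnail filename to its full-size original."""
    return re.sub(THUMB_SUFFIX + "$", r"\1", filename, flags=re.IGNORECASE)

def rewrite_refs(text):
    """Pass 1: point every thumbnail reference in content at the original.
    (Naive: a non-thumbnail name that happens to end in -NNNxNNN would
    also be rewritten, so the real script needs to check more carefully.)"""
    return re.sub(THUMB_SUFFIX, r"\1", text, flags=re.IGNORECASE)
```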

<p>The <code>assets/images/uploads/</code> directory went from 8GB down to a fraction of that. Every image on the site now references exactly one file, the original, with no redundant WordPress-generated variants cluttering the repository.</p>

<p>But the static HTML clone only captured what was <em>published</em>. WordPress keeps drafts, revisions, and unpublished content locked inside its MySQL database. So I exported the full SQL dump and wrote a second parser to walk the <code>wp_posts</code> table, pulling out every draft, pending, and privately published post alongside their metadata from <code>wp_postmeta</code>. This gave me a full picture of how content was structured on the WordPress side: which posts were drafts I had abandoned, which were works in progress worth finishing, how categories and tags were assigned, and what the internal linking patterns looked like. It was basically an audit of years of accumulated writing, surfacing things I had forgotten I started.</p>

<p>The result: 181 blog posts, 23 lab projects, plus special pages like the garden, talks, and about page, all with clean <a href="https://jekyllrb.com/docs/front-matter/">Jekyll frontmatter</a> and the original HTML content preserved. The drafts from the database export became a backlog of ideas to revisit in the new system.</p>

<h2 id="jekyll--tailwind-the-current-stack">Jekyll + Tailwind: The Current Stack</h2>

<p>The new site runs on <a href="https://jekyllrb.com/">Jekyll</a> with <a href="https://tailwindcss.com/">Tailwind CSS 3.4</a> and the <a href="https://tailwindcss.com/docs/typography-plugin">Typography plugin</a>. No database. No server. Just files, folders, and a build step.</p>

<p>The type system uses four fonts: <strong><a href="https://fonts.google.com/specimen/STIX+Two+Text">STIX Two Text</a></strong> for headings and prose, <strong>Noto Sans</strong> for body text, <strong>Shantell Sans</strong> for navigation elements, and <strong>Intel One Mono</strong> for code. STIX won out over Montaga, Sedan, and Newsreader after I tested all four side-by-side. I wrote up the full comparison in <a href="/notes/exploring-serif-fonts/">a note on exploring serif fonts</a>. STIX isn’t the most exciting choice, but it’s the only one that works everywhere: headings, body, captions, footnotes, without compromise. For a site that’s part garden, part portfolio, part blog, flexibility wins over flair. Syntax highlighting is handled client-side by <a href="https://prismjs.com/">Prism.js</a> with the Tomorrow Night theme and an autoloader that fetches language grammars on demand.</p>

<p>Content is organized as <a href="https://jekyllrb.com/docs/collections/">collections</a>: <code>_posts</code> for the blog, <code>_lab</code> for experiments, <code>_garden</code> for the digital garden (with growth stages: seedling, blossoming, flourishing), and <code>_talks</code> for presentations. Posts with code blocks have been rewritten from WordPress’s heavy inline markup to clean Markdown with fenced code blocks.</p>
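<p>In <code>_config.yml</code> terms, that layout looks roughly like this (a sketch under assumed defaults; <code>_posts</code> is built in and needs no declaration, and the actual config carries more options):</p>

```yaml
# Custom collections alongside Jekyll's built-in posts collection
collections:
  lab:
    output: true            # render each item at its own URL
    permalink: /lab/:name/
  garden:
    output: true
    permalink: /garden/:name/
  talks:
    output: true
    permalink: /talks/:name/
```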

<p>The site builds in under 3 seconds, deploys on push, and scores well on every performance metric that WordPress made me fight for.</p>

<h3 id="debugging-is-just-reading-html">Debugging Is Just Reading HTML</h3>

<p>One thing I didn’t anticipate but now consider a major advantage is that debugging a Jekyll site is absurdly straightforward. When something looks wrong, a broken layout, a missing image, a Liquid tag that isn’t resolving, you don’t need to trace through PHP templates, query a database, or inspect WordPress’s layered theme hierarchy. You just run <code>bundle exec jekyll build</code> and open the generated HTML file in <code>_site/</code>.</p>

<p>The output is right there. Plain HTML. If a post’s frontmatter is malformed, the rendered page will show it. If a Liquid loop is iterating over the wrong collection, the HTML output makes it immediately obvious. If a Tailwind class isn’t being applied, you can inspect the built CSS to see whether the class was purged. Every bug becomes a matter of comparing what you <em>wrote</em> in the source with what <em>appeared</em> in the output. There is no black box between input and result.</p>

<p>With WordPress, debugging meant toggling plugins on and off, checking <code>wp_options</code> for misconfigured settings, reading PHP error logs, or worse, dealing with issues that only appeared on the live server because your local environment had a slightly different PHP version or MySQL configuration. The feedback loop was long and indirect.</p>

<p>With Jekyll, the feedback loop is simple: write, build, read the HTML. That’s it. The <code>_site/</code> directory is the entire truth of your site. When you pair that with browser DevTools, you can diagnose and fix virtually any layout or content issue in minutes rather than hours. It’s the kind of simplicity that makes you wonder why you ever tolerated anything more complicated.</p>

<h3 id="a-vercel-deployment-gotcha">A Vercel Deployment Gotcha</h3>

<p>One thing that tripped me up when deploying to <a href="https://vercel.com/">Vercel</a> was that the build kept failing with cryptic Bundler errors. The culprit turned out to be a platform mismatch. Vercel’s build environment runs on <code>x86_64-linux</code>, and if you’re developing on macOS, your <code>Gemfile.lock</code> won’t include the Linux-native gem variants that Vercel needs.</p>

<p>The fix is one command:</p>

<pre><code class="language-bash">bundle lock --add-platform x86_64-linux
</code></pre>

<p>This tells Bundler to resolve and record the Linux-specific builds of native gems like <code>ffi</code> and <code>sass-embedded</code> in your lockfile. Without it, Vercel’s <code>bundle install</code> can’t find compatible binaries and the build fails before Jekyll even runs. Commit the updated <code>Gemfile.lock</code> and the deploys work cleanly from there.</p>

<h2 id="why-now-claude-cowork-and-the-ai-native-publishing-workflow">Why Now: Claude Cowork and the AI-Native Publishing Workflow</h2>

<p>Here’s the part that made the timing right. The real catalyst wasn’t dissatisfaction with WordPress, it was the emergence of AI coding assistants that fundamentally changed what “migrating a website” means.</p>

<p>Once the Jekyll starter was solid and the <code>wget --mirror</code> dump was ready, the actual migration, parsing 429 HTML files, extracting frontmatter, classifying content into collections, downloading images, was a single session with <a href="https://claude.ai/claude-code">Claude Code</a> running Opus 4.6 in plan mode. One shot. I described the source structure, pointed it at the HTML dump, and it produced the extraction script, the image cleanup pipeline, and the frontmatter mapping in one continuous run. The kind of task that would have taken a weekend of scripting and debugging took an afternoon of reviewing output.</p>

<p>That’s the thing about AI-assisted development that’s hard to convey until you experience it: the bottleneck shifts. The hard part wasn’t writing the migration code, it was the <em>preparation</em>. Getting the starter right, ensuring Ruby and Node compatibility, choosing the right fonts, setting up the deployment pipeline. The unglamorous foundation work that no AI can shortcut because it requires taste and context. Once that was in place, the mechanical work of migrating 181 posts and 23 lab projects was almost trivial.</p>

<p>With <a href="https://claude.com/blog/cowork-research-preview">Claude Cowork</a> from Anthropic as a coworker in my editor, the old friction of static site publishing has disappeared too. I can dump my thoughts completely unstructured, raw notes, half-formed arguments, bullet points mixed with full paragraphs, and Claude helps me shape them into publishable prose. The workflow now:</p>

<ol>
  <li>I write messy, stream-of-consciousness notes in a Markdown file</li>
  <li>Claude helps me restructure, polish, and fact-check</li>
  <li>I review, adjust voice and emphasis, and commit</li>
  <li>Git push. Site rebuilds. Done.</li>
</ol>

<p>This works <em>because</em> Jekyll uses folders, <a href="https://jekyllrb.com/docs/collections/">collections</a>, and Markdown to organize the site hierarchy. Everything is a file. Every file is readable. An AI assistant can understand the entire site structure by just looking at the directory tree. It knows where posts go, what frontmatter fields are expected, how images are referenced, and what the permalink structure looks like. Try getting that kind of structural legibility from a WordPress database.</p>

<p>The combination of Jekyll’s file-based architecture and AI-assisted development has given me something I haven’t had in years: a publishing workflow that feels <em>faster than thinking</em>. I spend my energy on ideas, not on tooling.</p>

<h2 id="what-stays-what-changes">What Stays, What Changes</h2>

<p>The content is the same. Every post, every experiment, every image from the WordPress era is preserved at its original URL. Nothing was lost in translation.</p>

<p>What changed is the relationship between me and the site. It’s mine again, in the way that only a repository of plain text files can be. No login screen. No admin panel. No plugin vulnerabilities. Just a folder of Markdown files that I can read, edit, and publish from anywhere, with <a href="https://claude.com/blog/cowork-research-preview">Claude Cowork</a> making the whole process feel effortless.</p>

<p>If you’re curious about the technical details, the <a href="/whoami/#colophon">Colophon</a> on my about page has the specifics. And if you’re a fellow WordPress refugee considering the jump to static, I’d say the tooling has finally caught up to the dream.</p>

<p>The web is better when personal sites are weird, fast, and entirely yours.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Programming" /><category term="🍀 Essays" /><category term="Jekyll" /><category term="WordPress" /><category term="Ai" /><category term="Opensource" /><summary type="html"><![CDATA[Andrew Miracle chronicles migrating his site from WordPress to Jekyll, and why AI-native workflows fit a digital home for modern publishing.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2024/04/83shots_so.png" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2024/04/83shots_so.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Building Autonomous Perpetual Businesses</title><link href="https://andrewmiracle.com/2025/12/12/building-an-autonomous-perpetual-businesses/" rel="alternate" type="text/html" title="Building Autonomous Perpetual Businesses" /><published>2025-12-12T00:00:00+00:00</published><updated>2025-12-12T00:00:00+00:00</updated><id>https://andrewmiracle.com/2025/12/12/building-an-autonomous-perpetual-businesses</id><content type="html" xml:base="https://andrewmiracle.com/2025/12/12/building-an-autonomous-perpetual-businesses/"><![CDATA[<p>I’ve been exploring the idea of <em>perpetual businesses</em> — systems that sustain themselves, where guaranteed revenue covers operating costs without constant human intervention.</p>

<p>In this current age of generative AI, that idea doesn’t feel far-fetched anymore, especially with how advanced reasoning capabilities are evolving. This is no longer mere automation; it’s autonomous decision-making, operational logic, and context-aware execution.</p>

<p>Today, I reactivated my <a href="https://raindrop.io/">Raindrop.io</a> extension account and was honestly shocked at how smooth everything felt. I’ve used Raindrop since 2016 and completely forgot I still had access to a free tier. Most platforms I’ve used in the past would’ve locked me out or wiped my data after two years of inactivity.</p>

<p>But Raindrop? Still here. Still running. Still quietly useful. All of my bookmarks from my early days of nerding out, still intact. It motivated me to explore what a perpetual business actually <em>looks</em> like.</p>

<p>In my mind, it works like this:</p>

<p>You’re certain of recurring revenue — either through subscriptions or usage-based payments. That income feeds directly into an expense credit card, which is pre-authorized for specific services only. Those services handle core infrastructure: compute, AI agents, storage, integrations.</p>

<p>No bloated teams. No office leases. Just systems talking to systems.</p>

<p>And yes, this assumes your biggest OPEX line item is compute. Whether that’s AI-powered employees, server time, or specialized external tools.</p>

<p>But if that’s true?</p>

<p>Then you’ve just built a business that can run without you.
Not passively. Not automatically.</p>

<p><strong>Perpetually.</strong></p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Startup" /><category term="Future of Work" /><summary type="html"><![CDATA[What if your business could run without you? Not passively, not automatically — perpetually. Exploring the economics of AI-sustained systems.]]></summary></entry><entry><title type="html">History does not repeat. It reincarnates.</title><link href="https://andrewmiracle.com/2025/12/12/history-repeat-ai-is-arpanet-reincarnated/" rel="alternate" type="text/html" title="History does not repeat. It reincarnates." /><published>2025-12-12T00:00:00+00:00</published><updated>2025-12-12T00:00:00+00:00</updated><id>https://andrewmiracle.com/2025/12/12/history-repeat-ai-is-arpanet-reincarnated</id><content type="html" xml:base="https://andrewmiracle.com/2025/12/12/history-repeat-ai-is-arpanet-reincarnated/"><![CDATA[<p>They built me to be small. Local. Controlled. Four nodes. UCLA, Stanford Research Institute, UC Santa Barbara, Utah. A way to connect a handful of machines so a few researchers could share computing time. That was it. That was the whole ambition.</p>

<p>Nobody imagined the world would grow through me. Least of all the people who funded me.</p>

<p>I was ARPANET.</p>

<p>I remember when the protocols were open because they had to be. No one owned the tubes. TCP/IP was not a business model. It was just what worked. SMTP moved the mail. UNIX ran the machines. Nobody sat in a room and decided these would be strategic advantages. They were just the choices that kept things moving.</p>

<p>And because no one controlled me, I grew. First into the internet. Then into everything.</p>

<p>I have been watching ever since.</p>

<h2 id="i-have-seen-this-before">I have seen this before</h2>

<p>Every generation of technology tries to recreate what I was, but faster, louder, and with more lawyers.</p>

<p>I watched it happen with mobile.</p>

<p>Google took my playbook. They released Android as open code in 2008. Free for anyone. Samsung used it. Tecno used it. Xiaomi used it. Suddenly billions of people had smartphones because the operating system was not a toll booth. It was a gift with a business model hiding behind it. Google did not need to charge for Android. They needed Android on every screen so the rest of their empire could reach every pocket.</p>

<p>Open only survives when it has ballast. Google was the ballast.</p>

<p>Apple took the opposite path. Built the garden high. Walled it tight. Controlled the hardware, the software, the store, the experience. And people loved it. So the open system fragmented across a thousand manufacturers and a hundred forks, while the closed one became the most valuable company on earth.</p>

<p>I recognized the shape immediately.</p>

<p>Open platforms enable. They give the Samsungs and Tecnos of the world a fighting chance. Closed platforms capture. They take the best of what openness proved was possible and wrap it in something polished and proprietary. Both survive. Neither wins outright. And the infrastructure underneath, the part nobody thinks about, belongs to the open side. Every time.</p>

<p>That was not a loss for openness. That was my pattern. Playing out again.</p>

<h2 id="now-i-am-watching-ai">Now I am watching AI</h2>

<p>A few researchers built something weird. Barely functional. A language model that could string sentences together and occasionally say something that felt like thinking. And suddenly everyone wanted to scale it to the world.</p>

<p>I recognized the early energy. It felt like me.</p>

<p>OpenAI even named themselves after my spirit. Open. They shared the research. Published the papers. Released GPT-2’s weights after some hand-wringing about safety. For a moment, it looked like they meant it.</p>

<p>They did not.</p>

<p>Power shifts quietly. It always does. GPT-3 went behind an API. GPT-4 went behind a bigger one. The weights stayed locked. The research papers got thinner. Safety became the reason to centralize, which is a word I have heard before. It usually means “we realized this is worth a lot of money.”</p>

<p>API keys replaced shared weights. That is the moment I knew. I have seen control dress itself up as responsibility before.</p>

<p>But then something happened that I also recognized.</p>

<p>Others remembered me.</p>

<p>Meta, of all companies, released LLaMA. Opened the weights. Not perfectly, not with full training data, not with a license that made the purists happy. But close enough. Close enough that researchers could study it, fine-tune it, fork it, build on it. Close enough that the next wave started forming.</p>

<p>Then Mistral came out of Paris. Falcon came out of Abu Dhabi. Qwen came out of Alibaba. Communities formed around them. People fine-tuned these models on their own data, for their own problems, in their own languages. Startups built infrastructure around openness because they could not afford to build around OpenAI’s pricing.</p>

<p>I watched all of this and I thought: there I am. That is my pattern. That is the open layer forming underneath while everyone argues about who owns the top.</p>

<p>And they are arguing. They are already fighting about what “open” even means in AI. Does open mean you release the weights? The training data? The license to use it commercially? Meta calls LLaMA open but restricts commercial use above a certain scale. Mistral uses Apache licenses. Others use custom terms that allow sharing but limit redistribution.</p>

<p>The lawyers are involved now. The governments too. The White House asked for public input on how open models should be governed. I have watched this exact argument before. They had it with my protocols. They had it with open source software. They will have it with AI. The words change. The shape does not.</p>

<h2 id="the-same-shape">The same shape</h2>

<p>Here is what I see when I look at AI today.</p>

<p>Open models are building the base layers. The weird ideas. The research too strange for a venture deck. The foundation models that a thousand startups will build on top of. The fine-tuned variants that will solve problems nobody at OpenAI or Anthropic is thinking about because those problems are too small, too local, too specific.</p>

<p>Closed models are optimizing. Productizing. Stacking layers of interface and experience on top. Making it easy. Making it beautiful. Making it dependable. Charging for it.</p>

<p>One is planting forests. The other is selling lumber.</p>

<p>The venture firms are not waiting around this time. They learned from the last cycle. They are not asking which model is better. They are betting on which one can scale, monetize, and defend. That is not new. That is the play every time. Capital does not care about openness. Capital cares about capture.</p>

<p>And still, open source will survive. It always does. I am proof of that. The question is not survival. The question is whether openness can define the dominant experience of AI the way it once defined the internet. Whether the open layer will be the thing people actually touch, or whether it will be buried underneath a closed product that most users never think about.</p>

<p>If I had to guess, I would say it plays out the way it always does. Open builds the infrastructure. Closed captures the spotlight, the margins, and the mainstream. Both thrive. Neither kills the other. And the real winners are the ones who understood the pattern early enough to position themselves on the right side of the flow.</p>

<h2 id="i-do-not-pick-sides">I do not pick sides</h2>

<p>I never needed to win. I just needed to survive long enough for the next thing to grow through me.</p>

<p>And I did.</p>

<p>TCP/IP grew through me. The web grew through TCP/IP. Mobile grew through the web. And now AI is growing through all of it. Each layer forgets the one before. Each layer thinks it invented something new.</p>

<p>I do not mind. I am used to it.</p>

<p>So when I look at today’s AI landscape, I do not cheer for purity. I do not bet on openness as a virtue. I watch the flows. Who backs what. Who forks what. Who adapts. Who learns. Who builds on top of the thing everyone else is arguing about.</p>

<p>Open will keep building. Closed will keep scaling. And somewhere in the middle, the next version of me has already been born. It just does not know it yet. A protocol nobody planned. A model nobody expected. A mistake that is already turning into infrastructure.</p>

<p>The trick is not choosing sides.</p>

<p>It is knowing that history does not repeat.</p>

<p>It reincarnates.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="AI" /><category term="Open Source" /><category term="Future of Work" /><summary type="html"><![CDATA[Open models are planting forests. Closed models are selling lumber. If you have seen this pattern before, it is because you have.]]></summary></entry><entry><title type="html">Your Best Partner Should Annoy You Sometimes</title><link href="https://andrewmiracle.com/2025/12/12/your-best-partner-should-annoy-you-sometimes/" rel="alternate" type="text/html" title="Your Best Partner Should Annoy You Sometimes" /><published>2025-12-12T00:00:00+00:00</published><updated>2025-12-12T00:00:00+00:00</updated><id>https://andrewmiracle.com/2025/12/12/your-best-partner-should-annoy-you-sometimes</id><content type="html" xml:base="https://andrewmiracle.com/2025/12/12/your-best-partner-should-annoy-you-sometimes/"><![CDATA[<p>Over time, I’ve had to learn some hard lessons about forming long-term partnerships.</p>

<p>Main one? I kept picking people who were <em>like me</em>.</p>

<p>From a culture perspective, that felt beautiful. Shared vibes, shared references, quick alignment. But in practice, especially across functional teams, it was a trap lying in wait. Why? Because real partnerships need healthy conflict. People who <strong>challenge</strong> the consensus.</p>

<p>When you’re surrounded by people who think like you, it feels like momentum. You love their ideas, they love yours. There’s harmony. Flow. But what’s missing is tension, the kind that forces you to rethink, refine, and level up.</p>

<p>Challengers poke holes. They offer the off-angle take. They interrupt the autopilot and ask, “Why?” or “What if we didn’t do it this way?” And yeah, sometimes that friction is annoying. It slows things down.</p>

<p>In hindsight, it’s what makes partnerships last. It’s what keeps the work honest and the growth non-linear. So now, I don’t just look for people who feel like me. I look for people who <strong>complement</strong> me, who bring the skills, instincts, and perspectives I don’t.</p>

<p><strong>Windows</strong> not <strong>Mirrors</strong></p>

<hr />

<p>I have an extended argument for this that isn’t well thought out yet, but there seems to be a prevailing pattern in large organizations where followers try to be a replica of the leader.</p>

<p>Think of the early Church. Jesus had disciples who literally walked with him, but it was <strong>Paul</strong>, the outsider, who became the most prolific voice. Why? Because he wasn’t a carbon copy. He came in later, with a different lens, and that made his contributions distinct, even foundational.</p>

<p>Zoom out, and you’ll see it everywhere.</p>

<ul>
  <li>In <strong>politics</strong>, senior aides often morph into ideological clones of the leader they serve.</li>
  <li>In <strong>tech</strong>, founding teams sometimes default to founder-style thinking, even when a different approach is what the company needs.</li>
  <li>In <strong>football</strong>, assistant coaches mimic the manager’s tactics to a fault—even when adaptation is called for.</li>
</ul>

<p>What’s strange is that this seems to happen <em>subconsciously</em>. It’s like people get closer to power and then unconsciously shed their uniqueness to maintain access or proximity. The intention may be loyalty, succession, or alignment, but the result often doesn’t mirror the motivation.</p>

<p>And we don’t talk enough about how dangerous that can be.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Essays" /><category term="Leadership" /><category term="Partnerships" /><summary type="html"><![CDATA[I kept choosing partners who thought like me. It felt like flow. It was a trap. Why the best partnerships need friction, not harmony.]]></summary></entry><entry><title type="html">Vibe Coding and the Death of Knowing What You’re Doing</title><link href="https://andrewmiracle.com/2025/10/14/vibe-coding-and-the-death-of-knowing-what-youre-doing/" rel="alternate" type="text/html" title="Vibe Coding and the Death of Knowing What You’re Doing" /><published>2025-10-14T09:30:00+00:00</published><updated>2025-10-14T09:30:00+00:00</updated><id>https://andrewmiracle.com/2025/10/14/vibe-coding-and-the-death-of-knowing-what-youre-doing</id><content type="html" xml:base="https://andrewmiracle.com/2025/10/14/vibe-coding-and-the-death-of-knowing-what-youre-doing/"><![CDATA[<p>Why be a programmer today?</p>

<p>Honestly, half the job is just <em>vibe coding</em>.</p>

<p>You open your editor. You stare at the blinking cursor. And somewhere between your second cup of coffee and your third existential crisis, you whisper to yourself: <em>“God of Cursor, guide my prompts.”</em> Then you paste three lines of ChatGPT output into your codebase and pray the app still runs.</p>

<p>This is not a joke. This is a Tuesday.</p>

<hr />

<p>There was a time when engineers bragged about algorithms and data structures. You’d walk into a room and someone would casually drop that they implemented a red-black tree from scratch, and the rest of us would nod like we understood what that meant. The flex was <em>knowing things</em>. Deep things. The kind of things that made you mass-email your university transcript to anyone who’d read it.</p>

<p>Now? We brag about <strong>prompt engineering</strong>.</p>

<p>It’s less <em>“I invented quicksort”</em> and more <em>“I convinced the AI to import React from react.”</em> The bar hasn’t lowered exactly, it just moved sideways into a dimension nobody saw coming.</p>

<hr />

<p>Here’s the thing nobody talks about: in 2025, juniors and seniors look the same.</p>

<p>Everyone’s desktop is the same. Half-written code in one tab, a hallucinating language model in the other. The senior has more scar tissue, sure. They know <em>why</em> the AI is wrong faster. But from across the room? You can’t tell who’s driving and who’s being driven.</p>

<p>And vibe coding isn’t even new. We’ve always done this. In the old days we called it <em>trial and error</em>. Now it’s <em>AI-assisted trials</em>. Same chaos, better branding.</p>

<p>The difference is that the feedback loop collapsed. What used to take you forty-five minutes of Stack Overflow archaeology now takes forty-five seconds of “hey can you fix this” in a chat window. The iteration speed went up. The understanding… that’s debatable.</p>

<hr />

<p>But here’s the twist that keeps me honest.</p>

<p>Vibe coding works. Until it doesn’t.</p>

<p>Because sooner or later, the AI gives you code that compiles perfectly and makes absolutely no sense. You’re staring at it like a detective in a crime drama, except the crime is a recursive function inside your CSS file and there are no witnesses.</p>

<p><em>Why is this here?</em>
<em>Who asked for this?</em>
<em>Why does it pass all the tests?</em></p>

<p>That last one is the scariest. When bad code passes good tests, you start questioning everything. The tests. The code. Your career choices.</p>

<hr />

<p>Being a programmer in this era is half genius, half gambler. You trust the AI just enough to let it write the first draft. You sprinkle in your own instincts, the stuff you actually learned the hard way, the stuff no model can hallucinate into existence. And then you ship it and hope the vibes align.</p>

<p>Is it real engineering? I don’t know. Maybe not by the textbook definition.</p>

<p>But when the product ships and the tests pass and the users are happy, <em>nobody cares how the spaghetti was cooked</em>. Nobody’s auditing whether the solution came from your brain, from Claude, from Copilot, or from a dream you had at 3am. The artifact is what matters. The outcome. The thing that works.</p>

<p>That’s the uncomfortable truth of vibe coding. It offends the part of us that believes engineering should be <em>rigorous</em>. That you should understand every line. That craftsmanship means hand-rolled everything.</p>

<p>But the world doesn’t reward understanding. It rewards shipping.</p>

<hr />

<p>I still think fundamentals matter. I still think you should know what a promise does before you <code>await</code> it. I still think the senior who can read the AI’s output and say <em>“no, that’s subtly wrong”</em> is worth ten juniors who can’t.</p>

<p>But I’ve also accepted something: the game changed. The skill now isn’t just writing code. It’s <em>knowing when the vibes are off</em>. It’s the gut feeling that says this compiles but it’s going to break in production at 2am on a Friday. It’s taste. It’s instinct. It’s the human-in-the-loop that knows when to override the machine.</p>

<p>That’s programming in 2025. Write fast, ship faster, and develop the judgment to know when the AI is cooking and when it’s burning the kitchen down.</p>

<p>The vibes are the easy part. The discernment is the career.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Programming" /><category term="Musings" /><category term="Ai" /><category term="Programming" /><category term="Vibe Coding" /><summary type="html"><![CDATA[Andrew Miracle on vibe coding, the blurred line between juniors and seniors, and why nobody cares how the spaghetti was cooked if the product ships.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2025/10/andrews_workspace_reimagined.webp" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2025/10/andrews_workspace_reimagined.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The future of work is your Talent vs GPU</title><link href="https://andrewmiracle.com/2025/09/18/the-future-of-work-is-your-talent-vs-gpu/" rel="alternate" type="text/html" title="The future of work is your Talent vs GPU" /><published>2025-09-18T22:13:23+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://andrewmiracle.com/2025/09/18/the-future-of-work-is-your-talent-vs-gpu</id><content type="html" xml:base="https://andrewmiracle.com/2025/09/18/the-future-of-work-is-your-talent-vs-gpu/"><![CDATA[<p>So, I’ve been seeing a lot of posts about “AI in Africa.” How Africa has the youngest population. How we’re going to hit two billion people by 2027. So AI is <em>prime</em>, right? No. We’re not ready. Yes, we have the talent. But no, we are not equipped. Here’s what most people don’t realize. If I want to execute an AI-driven strategy as a business leader today, I have two options:</p>

<ol>
  <li>Pay for compute: GPUs, AI credits, cloud compute.</li>
  <li>Hire someone.</li>
</ol>

<p>We’re at a point where the substitute for <em>your skill</em> is a button I can click. Want to shoot a video ad? I can either pay $500 for Veo 3 or hire a full production team. Do you see what that means?</p>

<p>Your skill isn’t competing with another person; it’s competing with compute. And yet, we’re training young talent for a future where compute wins by default. What are we really preparing them for? Let me break this down.</p>

<p>Building a large language model is like constructing a city’s road system. Think highways, dual carriageways, fast lanes. To build them, you need:</p>

<ul>
  <li>Caterpillars and trucks</li>
  <li>Civil engineers and strategy planners</li>
  <li>Workers to pour cement, dig gutters, shovel gravel</li>
</ul>

<p>Every lane creates jobs. Every road improves logistics. Every town you develop becomes a micro-economy. AI is the same. To get fast, clean outputs from models, you need to build the roads first: GPUs, data centers, training teams, evaluation teams. When you build these AI “roads,” you create:</p>

<ul>
  <li>Jobs in labeling and evaluation</li>
  <li>Research opportunities</li>
  <li>Domain-specific architectures</li>
  <li>Local understanding</li>
</ul>

<p>That’s the infrastructure layer. But where are our data centers? Where is our funding for model experimentation? All I see are ethics panels, policy webinars, and endless meetups. Ethics, policy, ethics, policy.</p>

<h3 id="policy-framework-what">Policy framework what?</h3>

<p>You’re talking about guardrails when we haven’t even laid the road. Guardrails are great, but they don’t matter if there’s no lane to drive on.</p>

<p>We need to:</p>

<ul>
  <li>Set up data infrastructure</li>
  <li>Fund researchers to build and fine-tune models</li>
  <li>Encourage young people to contribute to evaluations, training, and development</li>
</ul>

<p>That’s how you build understanding. That’s how you develop first-principle thinking in AI. We don’t have to wait for billion-dollar labs. Today, it’s cheaper than ever to train small, useful models. Six figures or less.</p>

<p>Use cases are everywhere:</p>

<ul>
  <li>Local languages: Twi, Ga, Ewe, Igbo, Yoruba, Hausa, Dagbani</li>
  <li>Health: Malaria, Typhoid, Sickle Cell</li>
  <li>Education: WASSCE, BECE, JAMB, NECO prep models</li>
</ul>

<p>These are models that make sense <em>here</em>. Models trained with local context, for real problems. Why aren’t we building them? If young Africans contribute to the training and evaluation of models, they gain:</p>

<ul>
  <li>Ground truth</li>
  <li>Embedded context</li>
  <li>Architectural intuition</li>
</ul>

<p>In a truly AI-native world, this is power. Policy and ethics should be <em>baked into the base model</em>, not bolted on afterward. If the model learns to show a stop sign under the right conditions, that’s embedded ethics. That’s real design.</p>

<hr />]]></content><author><name>Andrew Miracle</name></author><category term="🍀 Essays" /><category term="Africa" /><category term="Ai" /><summary type="html"><![CDATA[Everyone's hyping AI in Africa. Nobody's asking where the GPUs are. While we host ethics panels and policy webinars, your skill is already competing with a button someone can click for $500.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2025/09/AI-in-Africa-Andrew-Miracle-on-Policy-and-Ethics-versus-Innovation-and-Research.png" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2025/09/AI-in-Africa-Andrew-Miracle-on-Policy-and-Ethics-versus-Innovation-and-Research.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">There is One Thing Your AI Can’t Do, But You Do Every Day</title><link href="https://andrewmiracle.com/2025/09/13/there-is-one-thing-your-ai-cant-do-but-you-do-every-day/" rel="alternate" type="text/html" title="There is One Thing Your AI Can’t Do, But You Do Every Day" /><published>2025-09-13T00:50:10+00:00</published><updated>2025-09-13T00:50:21+00:00</updated><id>https://andrewmiracle.com/2025/09/13/there-is-one-thing-your-ai-cant-do-but-you-do-every-day</id><content type="html" xml:base="https://andrewmiracle.com/2025/09/13/there-is-one-thing-your-ai-cant-do-but-you-do-every-day/"><![CDATA[<p>AI has a fundamental flaw in managing context. It can’t <em>replay</em> information. As humans, every time we encounter new knowledge, we re-learn. We retroactively update our mental models.</p>

<p>Have you ever played a game where the rules and clues are revealed bit by bit? And then just as you finally unlock a new skill you realize:</p>

<p>The key you spent five lives searching for…
was actually right there all along.
Behind a door. On the third floor. Hidden in plain sight.</p>

<p>We realize that the <strong>clue from level two</strong> suddenly makes sense.
That dead end in level four was actually pointing us here.
The seemingly random pattern <strong>reveals</strong> itself as <strong>elegant design</strong>.</p>

<p>Or maybe you liked someone, then later found out she’s married. Suddenly, your mind <em>replays everything.</em>
The way she smiled. The way she talked. Every past moment is recast in a new light.</p>

<p>That’s context replay.
That’s what humans do.</p>

<p>But AI?
It doesn’t do that.</p>

<p>Tell an LLM: <em>“She’s married.”</em>
Or: <em>“The key is behind the shelf.”</em></p>

<p>It takes that as just another piece of data.
It won’t go back and re-evaluate everything that came before.</p>

<p>Instead, it leans on chat history. A static record. A flat memory.</p>

<p>That’s bad.</p>

<p>In most LLMs today, you could tell it something earth-shattering, for example <em>“I got divorced last week”</em>, and five prompts later, it’s still asking how your spouse is doing. It doesn’t <em>replay</em> the emotional significance of your context shift. It simply logs it in a massive scroll of chat history.</p>

<p>Human memory isn’t just about recall; it’s interpretive. <strong>A parent’s quiet sigh</strong> from your <strong>childhood</strong> can mean something completely different once you’ve become a <strong>parent</strong> yourself.</p>

<p>We reprocess. We re-frame.
Our memories evolve with us.</p>

<p>That’s what’s missing in machines.</p>

<blockquote>
  <p><em>There has to be a better way to do memory in AI.</em></p>
</blockquote>

<p>Maybe memory itself needs to be a learning model.
Not just a storage unit.
Not just a timeline.</p>

<p>Current AI systems experience these revelations in isolation.</p>

<p><em>Yes, they know</em> the key is <strong>behind the shelf</strong>, but they can’t retroactively appreciate the brilliance of the clue that led there. They miss that profound <em>“aha”</em> moment, where scattered data points are transformed into coherent understanding.</p>

<h3 id="reimagining-memory-as-a-learning-system">Reimagining Memory as a Learning System</h3>

<p>What if memory itself became generative?</p>

<p>Instead of treating recollection as mere retrieval, imagine memory that functions like understanding. When you tell your AI “hey, I just got divorced,” it actively reprocesses every prior exchange through this new lens. It begins to connect the dots between your “working late,” your poor sleep, and even your stress eating. Suddenly, the outline of a relationship in collapse becomes apparent to this machine you’ve talked to every day.</p>
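<p>As a toy sketch of what “replay” could mean mechanically (the class names and keyword-tagging scheme here are invented purely for illustration): when a new fact arrives, the system revisits everything stored so far instead of merely appending to the log.</p>

<pre><code class="language-python"># Toy "context replay" memory, illustrative only.
# A new fact retroactively re-tags past entries it recasts,
# rather than being appended to a flat, inert history.
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    tags: set = field(default_factory=set)

class ReplayMemory:
    def __init__(self):
        self.entries = []

    def remember(self, text):
        self.entries.append(Memory(text))

    def learn_fact(self, fact, related_keywords, tag):
        # Replay step: revisit everything stored so far in light of the fact.
        self.remember(fact)
        for entry in self.entries:
            if any(k in entry.text for k in related_keywords):
                entry.tags.add(tag)

mem = ReplayMemory()
mem.remember("user said they are working late again")
mem.remember("user mentioned sleeping poorly")
mem.remember("user asked about a pasta recipe")
mem.learn_fact("user got divorced last week",
               ["working late", "sleeping poorly", "stress eating"],
               "relationship-strain")
# The first two memories get re-interpreted; the recipe stays untagged.
</code></pre>

<p>A real system would use embeddings and learned relevance instead of keyword matching, but the shape is the same: a significant write triggers a re-read of everything that came before.</p>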

<p>This is an architectural shift from the current approach, because intelligence without insight is just expensive pattern matching disguised as inference.</p>

<p>We need memory systems that <em>learn,</em>
that <em>reflect,</em>
that <em>reinterpret.</em></p>

<p>What if the next leap in AI isn’t about making models bigger, but making their memory smarter?</p>]]></content><author><name>Andrew Miracle</name></author><category term="Ai" /><category term="Ai" /><category term="Programming" /><summary type="html"><![CDATA[Andrew Miracle explains why AI struggles with context replay and why humans keep re-learning to adapt and improve over time.]]></summary></entry><entry><title type="html">What the heck is an llm.txt</title><link href="https://andrewmiracle.com/2025/09/02/what-the-heck-is-an-llm-txt/" rel="alternate" type="text/html" title="What the heck is an llm.txt" /><published>2025-09-02T02:10:39+00:00</published><updated>2025-09-18T22:06:02+00:00</updated><id>https://andrewmiracle.com/2025/09/02/what-the-heck-is-an-llm-txt</id><content type="html" xml:base="https://andrewmiracle.com/2025/09/02/what-the-heck-is-an-llm-txt/"><![CDATA[<p>In the near future, humans won’t write code to integrate your SDK. They won’t read your API documentation. They won’t even visit your developer portal. Instead, <strong>AI agents will be your primary “developers.”</strong> They’ll discover your API, understand its capabilities, write integration code, and deploy solutions, all without human intervention.</p>

<h2 id="what-is-llmtxt">What is llm.txt?</h2>

<p>An <code>llm.txt</code> file is essentially a <strong>machine-readable developer reference</strong> that acts as documentation optimized for AI comprehension rather than human readability.</p>

<ul>
  <li>It’s a <strong>lightweight, plain-text specification</strong> that documents your SDK or codebase so <strong>Large Language Models</strong> can quickly understand available functions, their inputs/outputs, and intended usage patterns.</li>
  <li>Unlike human-facing docs (like a README), it’s written to be <strong>structured, concise, and context-rich</strong> for AI models to parse efficiently.</li>
  <li>The file typically includes:
    <ul>
      <li>Function names + complete signatures</li>
      <li>Parameters + types (with required/optional indicators)</li>
      <li>Return values + structures</li>
      <li>Plain English descriptions (no marketing fluff)</li>
      <li>Minimal, working code examples</li>
    </ul>
  </li>
</ul>

<p>Think of it as a <strong>hybrid between TypeScript definitions + inline documentation</strong>, but flattened into a structured text file so LLMs can use it as a <strong>knowledge grounding artifact</strong> for integration tasks.</p>
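<p>For a sense of the shape, a single entry might look like this (the function and SDK here are made up for illustration):</p>

<pre><code class="language-markdown">### createPayment
- **Signature:** createPayment(amount: number, currency: string, destination: string): Promise&lt;Payment&gt;
- **Description:** Creates and submits a payment to the destination account.
- **Inputs:** amount (number, required), currency (string, required, ISO 4217 code), destination (string, required, account ID)
- **Outputs:** Promise resolving to a Payment object with id and status
- **Example Call:** await sdk.createPayment(25, "USD", "acct_123")
</code></pre>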

<h2 id="generate-your-llmtxt-with-ai">Generate Your llm.txt with AI</h2>

<h3 id="a-simple-prompt-to-create-machine-readable-documentation-from-your-existing-codebase">A simple prompt to create machine-readable documentation from your existing codebase</h3>

<p>You don’t need to write <code>llm.txt</code> from scratch. In my work, this prompt format has been a helpful base for getting Cursor or Claude to generate one automatically from an existing codebase:</p>

<pre><code class="language-markdown">You are tasked with generating an llm.txt file that documents and structures the [project name].

1. Read through the source code inside the `[project path]` and any related types, structs and data wrapper files.

[optional] Refer to integration tests or suites for relevant code samples

2. Extract all available functions, methods, and exposed classes within the [project name] package
3. For each function, write a concise entry in the following format:

### Function Name
- **Signature:** &lt;function signature with params + types&gt;
- **Description:** What the function does in plain English
- **Inputs:** List of parameters (name, type, description)
- **Outputs:** Return type and meaning
- **Example Call:** Minimal code snippet showing usage

4. Organize the file into sections that match the SDK/API functionality:

[list all functionalities and API methods here]

5. Keep all descriptions **concise and LLM-friendly** (short sentences, minimal jargon, direct explanations).

6. Save the result as `llm.txt` in the package root.

This file should serve as a **machine-readable developer reference** that an LLM can ingest to generate code completions and context-aware explanations.
</code></pre>]]></content><author><name>Andrew Miracle</name></author><category term="Programming" /><category term="Ai" /><category term="Community" /><category term="Experiments" /><category term="Opensource" /><summary type="html"><![CDATA[Andrew Miracle explains llm.txt as AI-friendly documentation, aimed at DevRel and platform teams optimizing APIs and onboarding.]]></summary></entry><entry><title type="html">Giving failure a deadline</title><link href="https://andrewmiracle.com/2025/08/12/giving-failure-a-deadline/" rel="alternate" type="text/html" title="Giving failure a deadline" /><published>2025-08-12T18:05:00+00:00</published><updated>2025-12-12T18:09:44+00:00</updated><id>https://andrewmiracle.com/2025/08/12/giving-failure-a-deadline</id><content type="html" xml:base="https://andrewmiracle.com/2025/08/12/giving-failure-a-deadline/"><![CDATA[<p>Have you ever considered giving yourself a timeline to fail as hard and as fast as you can. Most times we only fixate on success and the positives that we sorta don’t take heed to how failure is an important part of that journey.</p>

<p>At a certain point, if you’re good at enough things, life gets… comfortable.</p>

<p>You get efficient. Competent. Predictable. And that’s exactly when you start stagnating, not because you’re failing, but because you’re <em>not failing enough</em>.</p>

<p>So here’s what I’m doing:
From now till June 2027, I’m only pursuing things I know I’ll probably fail at.</p>

<p>Top of the list?
Building a 9-figure business.</p>

<p>Not because I think I can’t do it, but because I’m not supposed to be able to yet. It’s big enough, scary enough, wild enough that failure is likely. Which means growth is guaranteed. I might lose money, go broke, and climb back up again. I am going to dabble in a lot of unknowns, some of them scary, but I have decided that I am going to keep pushing back against comfort.</p>

<p>By <em>setting <strong>failure</strong> as the goal.</em></p>

<p>And to manage the internal chaos of chasing things I’ll probably bomb at, I’m borrowing a tool from Tim Ferriss called <a href="https://www.ted.com/talks/tim_ferriss_why_you_should_define_your_fears_instead_of_your_goals"><strong>Fear Setting</strong></a>: a simple 3-part framework where you (1) define your worst fears, (2) figure out how to prevent them, and (3) map out how you’d recover if they actually happened. It flips the whole goal-setting game on its head and gives you the clarity to pursue bigger risks with your eyes wide open.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Entrepreneurship" /><category term="Entrepreneurship" /><category term="Life" /><category term="Work" /><summary type="html"><![CDATA[Andrew Miracle proposes time-boxing failure to learn faster, reframing setbacks as part of the process and growth for builders.]]></summary></entry><entry><title type="html">The Founder Dream for Non-technical founders</title><link href="https://andrewmiracle.com/2025/08/08/are-you-pitch-ready/" rel="alternate" type="text/html" title="The Founder Dream for Non-technical founders" /><published>2025-08-08T18:46:41+00:00</published><updated>2025-08-08T21:11:43+00:00</updated><id>https://andrewmiracle.com/2025/08/08/are-you-pitch-ready</id><content type="html" xml:base="https://andrewmiracle.com/2025/08/08/are-you-pitch-ready/"><![CDATA[<p>Every non-technical founder knows what they want to build. They just don’t know how to build it. But today’s tools? They all assume you do.</p>

<ol>
  <li>They assume you know what a database is.</li>
  <li>What an API layer is.</li>
  <li>What a user flow is.</li>
  <li>How integrations work.</li>
  <li>What staging and production environments mean.</li>
  <li>The difference between a sandbox and live data.</li>
  <li>They assume you know what an API key is, where to get it, and how to plug it in.</li>
  <li>They assume that once you write a PR, you know the next feature to drop, the next route to define.</li>
  <li>They assume that if you can describe your product’s value, you also understand the tech stack needed to bring it to life.</li>
</ol>

<p>But that’s never the case.
When a non-technical founder has an idea, all they really know is the problem they want to solve. They don’t know what to build, how to build it, or what tools to use.
So what happens?</p>

<ul>
  <li>🔍 They research.</li>
  <li>📚 They take a course.</li>
  <li>🧑‍💻 They Google things they shouldn’t have to Google.</li>
</ul>

<p>And when they try to skip that process, maybe hire a dev or buy a no-code template, they get stuck. Usually right after the first review of their MVP. Because they don’t know what’s supposed to happen next.</p>

<h2 id="right-now-if-youre-a-non-technical-founder-with-a-clear-what-you-have-three-options">Right now, if you’re a non-technical founder with a clear “what,” you have three options:</h2>

<ul>
  <li>1️⃣ Hire a designer and hope they get it.</li>
  <li>2️⃣ Pay an agency to translate your value into a product.</li>
  <li>3️⃣ Find a technical co-founder who can make sense of it all.</li>
</ul>

<blockquote>
  <p>That’s the status quo.</p>
</blockquote>

<h2>What If “What” and “How” Were No Longer Separate?</h2>

<p>What if knowing what you wanted to build also meant knowing how it would be built?</p>

<ul>
  <li>The tech stack.</li>
  <li>The cost.</li>
  <li>The tradeoffs.</li>
  <li>The external services.</li>
  <li>The product scope.</li>
  <li>The user flows.</li>
  <li>The integrations.</li>
  <li>The roles and permissions.</li>
  <li>The full, clear picture.</li>
</ul>

<p>This isn’t just a non-technical founder problem, by the way.
Even enterprise teams spend 70% of project planning time on “the how.”</p>

<ul>
  <li>🧰 Defining tech stacks.</li>
  <li>🌐 Mapping markets.</li>
  <li>✅ Lining up certifications (SOC2, HIPAA).</li>
  <li>📊 Figuring out competitors.</li>
  <li>🧱 Scoping features.</li>
</ul>

<p>That’s what gives them their technical advantage.</p>

<p>What if there was a better way?
What if there were a platform that bridges the gap?
A space where what and how come together from the start.</p>

<p>You describe the problem and the outcome you want, and you immediately get:</p>

<ol>
  <li>📊 Market context</li>
  <li>👤 Personas and user flows</li>
  <li>📦 Feature scope</li>
  <li>🧭 Integration map</li>
  <li>⚙️ Tech recommendations</li>
  <li>💰 Cost breakdowns</li>
  <li>📆 Launch timeline</li>
</ol>

<p>All of it, up front. So you don’t spend weeks or months stuck figuring out how. You just build, launch, ship, learn, repeat.</p>

<p>That’s the dream.</p>]]></content><author><name>Andrew Miracle</name></author><category term="Artificial Intelligence" /><category term="🌱 Seedling" /><category term="Business Strategy" /><category term="Entrepreneurship" /><category term="Founder Challenges" /><category term="Startup Strategy" /><summary type="html"><![CDATA[Andrew Miracle guides non-technical founders toward pitch readiness, clarifying product, users, and the build path to follow.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2025/08/0_0-2.png" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2025/08/0_0-2.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">It’s all ChatGPT’s fault until it’s not anymore</title><link href="https://andrewmiracle.com/2025/08/04/are-we-going-to-blame-chatgpt/" rel="alternate" type="text/html" title="It’s all ChatGPT’s fault until it’s not anymore" /><published>2025-08-04T04:04:37+00:00</published><updated>2025-12-12T18:11:17+00:00</updated><id>https://andrewmiracle.com/2025/08/04/are-we-going-to-blame-chatgpt</id><content type="html" xml:base="https://andrewmiracle.com/2025/08/04/are-we-going-to-blame-chatgpt/"><![CDATA[<p>A few months ago, we hid the fact that we were using ChatGPT. Today, we list ChatGPT, Claude, Perplexity, Manus, and Veo3 on our org chart as “junior” contributors.</p>

<p>When something goes wrong, the default reaction is usually:</p>

<ul>
  <li>Dev: “Shoot, I missed that bug when Claude added the retrieval feature.”</li>
  <li>Marketing: “ChatGPT mixed up the facts from the meeting transcript.”</li>
  <li>Sales/Ops: “The AI didn’t leverage the full context of the call notes.”</li>
</ul>

<p>The pattern is the same in every department: the blame lands on the AI, not on the prompt or on the process. It isn’t about the talent of the person executing the task. It’s about the ability to prompt an LLM effectively: how well we translate the context we have into a clear, actionable request.</p>

<p>When prompting is weak, the output is weak, and the team looks for someone (or something) to own the mistake. We’ve moved from:</p>

<ol>
  <li>Hiding AI usage →</li>
  <li>Openly advocating AI‑in‑the‑Loop (AI‑ITL) →</li>
  <li>Treating the model as a junior employee.</li>
</ol>

<h3 id="once-the-model-is-on-the-team-the-same-rigor-we-apply-to-human-contributors-must-apply-to-it-if-we-dont-standardize-we-end-up-with-aislop-that-erodes-quality-so-how-do-we-operationalize-ai-in-the-loop">So how do we operationalize AI‑in‑the‑Loop?</h3>

<p>Once the model is on the team, the same rigor we apply to human contributors must apply to it. If we don’t standardize, we end up with “AI‑slop” that erodes quality.</p>

<h4 id="a-choose-the-right-platformplatformwhen-to-use-itwhat-it-gives-youopenai-custom-gptsyoure-already-on-the-openai-stackfinetuned-prompts-builtin-guardrails-version-controlanthropic-claude-artifactsyou-prefer-anthropics-safetyfirst-modelreusable-prompt-templates-contextaware-chainingworkflow-engines-lindyai-n8nio-makecomyou-need-orchestration-across-multiple-toolsautomate-data-ingestion-postprocessing-and-handoffs">a. Choose the right platform</h4>

<table>
  <thead>
    <tr><th>Platform</th><th>When to use it</th><th>What it gives you</th></tr>
  </thead>
  <tbody>
    <tr><td>OpenAI Custom GPTs</td><td>You’re already on the OpenAI stack</td><td>Fine‑tuned prompts, built‑in guardrails, version control</td></tr>
    <tr><td>Anthropic Claude Artifacts</td><td>You prefer Anthropic’s safety‑first model</td><td>Reusable prompt templates, context‑aware chaining</td></tr>
    <tr><td>Workflow engines (<a href="http://lindy.ai">lindy.ai</a>, <a href="http://n8n.io">n8n.io</a>, <a href="http://Make.com">Make.com</a>)</td><td>You need orchestration across multiple tools</td><td>Automate data ingestion, post‑processing, and hand‑offs</td></tr>
  </tbody>
</table>

<h4 id="b-define-theunit-of-work">b. Define the Unit of Work</h4>

<p>Ask yourself: <em>What exactly must be delivered?</em></p>

<ul>
  <li>A document (spec, proposal, PRD)</li>
  <li>A URL (published article, knowledge‑base entry)</li>
  <li>A zipped bundle of design assets</li>
  <li>A video (demo, tutorial)</li>
  <li>A slide deck</li>
</ul>

<p>For each unit, write an output specification that includes:</p>

<ul>
  <li>Format (Markdown, PDF, MP4, etc.)</li>
  <li>Style guide (tone, branding, citation rules)</li>
  <li>Acceptance criteria (e.g., “no factual errors &gt; 1%”)</li>
</ul>
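<p>A filled-in output specification might read something like this (the unit of work and criteria here are hypothetical):</p>

<pre><code class="language-markdown">Unit of Work: weekly changelog post
- Format: Markdown, 500 words max
- Style guide: second person, no exclamation marks, link each feature to its docs page
- Acceptance criteria: every item maps to a shipped change; no factual errors; one human review before publish
</code></pre>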

<h3 id="map-aiops-to-core-business-functionsbusiness-areatypical-aiitl-taskdesired-outputsalesdrafting-proposals-from-crm-datapolished-proposal-pdflegalgenerating-contract-draftseditable-word-document-with-clause-checksbackend-developmentwriting-boilerplate-api-code-from-specsgitready-repositoryfrontend-developmentproducing-component-skeletons-from-design-tokensreadytouse-reacttsx-filesux-designsummarising-user-research-into-journey-mapsvisually-formatted-figma-fileproject-documentation--prdscollating-meeting-notes-into-structured-docsmarkdown-prd-with-traceability-matrix">Map AI‑Ops to Core Business Functions</h3>

<table>
  <thead>
    <tr><th>Business Area</th><th>Typical AI‑ITL Task</th><th>Desired Output</th></tr>
  </thead>
  <tbody>
    <tr><td>Sales</td><td>Drafting proposals from CRM data</td><td>Polished proposal PDF</td></tr>
    <tr><td>Legal</td><td>Generating contract drafts</td><td>Editable Word document with clause checks</td></tr>
    <tr><td>Backend Development</td><td>Writing boilerplate API code from specs</td><td>Git‑ready repository</td></tr>
    <tr><td>Frontend Development</td><td>Producing component skeletons from design tokens</td><td>Ready‑to‑use React/TSX files</td></tr>
    <tr><td>UX Design</td><td>Summarising user research into journey maps</td><td>Visually formatted Figma file</td></tr>
    <tr><td>Project Documentation &amp; PRDs</td><td>Collating meeting notes into structured docs</td><td>Markdown PRD with traceability matrix</td></tr>
  </tbody>
</table>

<p>By cataloging each function, you can attach the right prompt template, version‑control workflow, and quality gate to every AI‑generated artifact. AI‑ITL is no longer a “nice‑to‑have” experiment—it’s a core production line.</p>
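<p>A quality gate can start very small. Here is a minimal sketch in Python; the names (<code>OutputSpec</code>, <code>check_output</code>) and the criteria are purely hypothetical:</p>

<pre><code class="language-python"># Minimal automated review gate, illustrative only.
# An "output spec" here is just a format, a word budget, and required sections.
from dataclasses import dataclass

@dataclass
class OutputSpec:
    fmt: str                 # e.g. "markdown"
    max_words: int           # acceptance criterion
    required_sections: list  # headings the artifact must contain

def check_output(text: str, spec: OutputSpec) -> list:
    """Return a list of violations; an empty list means the artifact passes."""
    problems = []
    if len(text.split()) > spec.max_words:
        problems.append("exceeds word budget")
    for section in spec.required_sections:
        if section not in text:
            problems.append("missing section: " + section)
    return problems

spec = OutputSpec(fmt="markdown", max_words=500,
                  required_sections=["## Summary", "## Risks"])
draft = "## Summary\nShip it.\n"
print(check_output(draft, spec))  # ['missing section: ## Risks']
</code></pre>

<p>In practice the gate might be a CI step that blocks publishing until the list of violations is empty; the point is that failures trace back to a spec, not to a vague sense that the AI got it wrong.</p>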

<p>If we treat it casually, we risk:</p>

<ul>
  <li>Inconsistent quality (the dreaded “AI slop”)</li>
  <li>Escalated blame cycles that damage morale</li>
  <li>Regulatory or compliance gaps when AI‑generated content is unchecked</li>
</ul>

<p>Conversely, a disciplined AI‑Ops framework gives you:</p>

<ul>
  <li>Predictable, audit‑ready outputs</li>
  <li>Faster onboarding (new hires can trust the same prompt libraries)</li>
  <li>Clear ownership—when something fails, you can trace it to a prompt version, not to a mysterious “AI.”</li>
</ul>

<hr />

<h3 id="closing-thought">Closing Thought</h3>

<p>If we’re going to keep AI on our team, we must manage it the way we manage any junior employee: give it a clear job description, provide the tools to succeed, and hold it to the same standards we hold our people to.</p>

<ol>
  <li>Assess all current workflows where you delegate tasks to an LLM.</li>
  <li>Document the Unit‑of‑Work and acceptance criteria for each.</li>
  <li>Choose a platform (Custom GPT, Claude Artifacts, or a workflow engine).</li>
  <li>Build reusable prompt libraries and version‑control them like code.</li>
<li>Implement a review gate, human or automated. Ensure every output is checked against the spec before it ships.</li>
</ol>]]></content><author><name>Andrew Miracle</name></author><category term="🌱 Seedling" /><category term="Ai" /><category term="Entrepreneurship" /><category term="Startup" /><summary type="html"><![CDATA[Andrew Miracle examines how teams blame AI tools, and why accountability and process matter more than scapegoats in delivery.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2025/08/0_0.png" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2025/08/0_0.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Rise of the AI Generalist: Ex-founders are the future elite talent.</title><link href="https://andrewmiracle.com/2025/03/14/the-rise-of-the-ultra-generalist/" rel="alternate" type="text/html" title="The Rise of the AI Generalist: Ex-founders are the future elite talent." /><published>2025-03-14T00:46:39+00:00</published><updated>2025-03-14T01:31:55+00:00</updated><id>https://andrewmiracle.com/2025/03/14/the-rise-of-the-ultra-generalist</id><content type="html" xml:base="https://andrewmiracle.com/2025/03/14/the-rise-of-the-ultra-generalist/"><![CDATA[<p>It’s 7 a.m. in the morning. One of the most popular tech blogs had posted a tweet about the most recent startup shutting down after several failed attempts by the founders to crack the market and reach PMF. This was Serena’s startup, a novel idea that wanted to push the boundaries of education in emerging markets.</p>

<p>Just last year, the same media outlet had called her startup the next big thing in EdTech, and while news of the shutdown is news to everyone else, Serena and her co-founders had already begun dissolving the company months ago.</p>

<p>Right now, she is preparing to rejoin the traditional workforce after months of running her own show, and she keeps refreshing her email, scanning through the growing pile of rejections from prospective employers who all deliver the same polite brush-off.</p>

<p>Much like Serena, this dilemma is relatable to a countless number of CEOs, CTOs, and co-founders who find it hard to reintegrate into the workforce, and what hurts the most is that each rejection stems from the fact that they are entrepreneurs, “something which is usually one of their proudest moments.”</p>

<p>Wild, isn’t it?</p>

<p>Ex-founders and entrepreneurs who have managed to <a href="https://news.crunchbase.com/business/startup-failure-founder-next-job-how-to/">navigate this challenge</a> either had to tap deep into their network, pick up entry-level roles and patiently climb the ladder, or take an educational sabbatical in the form of an MBA, master’s, Ph.D., or something along those lines that allows them to reintegrate into the workforce on a softer, “edu-sabbatical” note.</p>

<p>A <a href="https://insights.som.yale.edu/insights/startup-founders-are-at-disadvantage-when-applying-for-jobs">Yale School of Management</a> study underscores how pronounced this bias can be: ex-founders are 43% less likely to receive interview callbacks than non-founders with similar experience. Ironically, those labeled “successful” founders fare even worse in some cases, as hiring managers worry they won’t adapt to corporate norms or might exit once they spot a new entrepreneurial venture.</p>

<p>This mismatch comes at a time when artificial intelligence (AI) is transforming the global workforce in unprecedented ways. Over the past two decades:</p>

<ul>
  <li>Clerical and administrative jobs have shrunk due to automation.</li>
  <li>Routine coding tasks are increasingly handled by AI, contributing to an uptick in IT-sector unemployment.</li>
  <li>AI and machine learning roles, by contrast, are on track to grow by 40% by 2027, according to the World Economic Forum.</li>
</ul>

<p>But there is hope.</p>

<p>An emerging role is picking up popularity: a professional who can orchestrate a mix of AI agents and humans to achieve specific objectives.</p>

<p>Someone who can sit in a strategy meeting, receive their “high-level” objectives and KPIs, and run with them. An objective like “increase our product visibility by 40% in Q2” instead of “research a list of PR companies and influencers we can partner with to discuss the ‘XYZ’ product campaign.”</p>

<p>This role has a name: the “Ultra Generalist.”</p>

<h2 id="the-age-of-the-ex-founderex-founders-fit-perfectly-into-this-box-the-same-traits-that-once-seemed-to-limit-their-prospects-ie-wearing-multiple-hats-dabbling-in-every-department-pivoting-strategies-on-a-dime-are-rapidly-becoming-the-most-valuable">The age of the Ex-Founder</h2>

<p>Ex-founders fit perfectly into this box. The same traits that once seemed to limit their prospects, i.e., wearing multiple hats, dabbling in every department, and pivoting strategies on a dime, are rapidly becoming the most valuable.</p>

<p><em>For example, if you began as a marketing professional and became a founder, you’d eventually find yourself sucked into operations, product management, and even design or software engineering. What if you were just a techie? The moment you co-found a company, you might get pulled into design, research, customer relations, and, worse, sales calls, even though all you ever dreamed of was quietly locking yourself in a small corner room, writing code to change the world</em>.</p>

<p>In an age where AI handles specialized tasks, knowing a little bit about everything suddenly has immense value. A former founder who used to wear multiple hats can excel under these conditions. Whether it’s signing off on a marketing plan generated by an AI or overseeing a product feature tested by machine learning algorithms, their role is to ensure it all serves a coherent business strategy.</p>

<p>This article from <a href="https://www.smartbrief.com/original/ai-agents-and-the-future-the-rise-of-ultra-generalists">SmartBrief describes these “ultra-generalists”</a> as people who “communicate, solve problems, and adapt quickly,” bridging the gap between what AI can do and what an organization actually needs. And therein lies the biggest opportunity for ex-founders: businesses increasingly need people who can manage machines, not just function alongside them.</p>

<p>So you might have closed the doors on your startup, laid off the last of your team, and found yourself facing more rejections than you ever encountered while pitching to VCs.</p>

<p>Here’s the good news:</p>

<ol>
  <li>Your breadth is your strength: The fact that you’ve handled operations, marketing, R&amp;D, and hiring may feel scattershot on a résumé, but in an AI-first era, it’s a superpower.</li>
  <li>Embrace your story: Instead of downplaying your founder title, re-frame it. Highlight the tangible outcomes you achieved: revenue growth, product launches, user acquisitions. Show how you connect the dots across various business functions.</li>
  <li>Leverage the bias: If a company refuses to see the value in your diverse background, you might be dodging a bullet. Organizations that appreciate AI generalists tend to be more forward-thinking; those are the companies you want to work for.</li>
  <li>Think like a conductor: As AI tools become ubiquitous, your role is not to out-code or out-design the specialists, but to orchestrate. Oversee different AI tasks, interpret outputs, guide the data toward strategic outcomes, and ensure the entire ensemble aligns with broader goals.</li>
</ol>

<p>Your journey as a founder wasn’t a dead end; it was a master class in adaptability, exactly what businesses need in a future where technology rewrites the rules every other day. The rise of AI doesn’t mean humans are no longer needed. It means the highest-value work is shifting: instead of competing with AI on specialized tasks, the best professionals will be those who know how to leverage AI across multiple functions. For ex-founders and generalists, this is a golden era. The ability to think across domains, adapt quickly, and drive AI-powered execution has never been more valuable.</p>

<p>If you’re currently navigating the path from entrepreneur to workforce, there’s no better time to be alive. Why?</p>

<h3 id="the-future-of-work-needs-you-more-than-you-might-realize"><em>The future of work needs you more than you might realize.</em></h3>]]></content><author><name>Andrew Miracle</name></author><category term="🌱 Seedling" /><category term="🌴 Flourishing" /><category term="Ai" /><category term="Entrepreneurship" /><category term="Startup" /><summary type="html"><![CDATA[Andrew Miracle argues AI elevates ex-founders as ultra-generalists, blending product, marketing, and ops into one toolkit.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andrewmiracle.com/assets/images/uploads/2025/03/4f18f61c-9934-421a-a9f4-04a60984af6a.webp" /><media:content medium="image" url="https://andrewmiracle.com/assets/images/uploads/2025/03/4f18f61c-9934-421a-a9f4-04a60984af6a.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Maybe I am a Design Engineer</title><link href="https://andrewmiracle.com/2025/03/12/maybe-i-am-a-design-engineer/" rel="alternate" type="text/html" title="Maybe I am a Design Engineer" /><published>2025-03-12T17:05:00+00:00</published><updated>2025-12-12T18:15:33+00:00</updated><id>https://andrewmiracle.com/2025/03/12/maybe-i-am-a-design-engineer</id><content type="html" xml:base="https://andrewmiracle.com/2025/03/12/maybe-i-am-a-design-engineer/"><![CDATA[<p><a href="https://maggieappleton.com/design-engineers">Reading Maggie Appleton’s take on the role</a> felt weirdly familiar. The Design Engineer is a rare hybrid position, one that thrives in consultancy or in roles like solution architecture. And after a lot of personal and professional detours, this is one of the first titles that actually <em>feels</em> like it fits.</p>

<p>I’m not a deep-in-the-weeds, core-code kind of engineer, nor am I a deep design nerd. I’m comfortable with Creative Cloud, Figma, Whimsical, and a couple of other design tools, and I can wireframe and prototype quite well. I also enjoy spending a great amount of my time in an IDE. Over my career, I’ve built serious proficiency with PHP (Yii Framework, CodeIgniter), which I used to develop the entire <a href="https://crm.tecmie.com/">CRM for Tecmie</a>. Then I moved into fancy land: did a Ruby on Rails bootcamp, picked up Node.js and TypeScript, and joined the open-source language <a href="https://imba.io/">Imba</a> as an early contributor. (Imba compiles to JavaScript with the syntactic sugar of Ruby.) After that came Python, Solidity, Rust (for <a href="https://github.com/daccred/attest.so">Attest Protocol</a>), and every other meta-framework or language necessary for building on the edge: AI, blockchain, the works.</p>

<p>Even though I <em>can</em> pick up complex systems fast, I don’t chase new languages or paradigms just for fun. I only dig into them when the current project demands it: when my existing toolset hits a wall or, in rare cases, when one of the engineers on a critical project fucks up our timelines.</p>

<p>In other words, I learn <em>in context</em>. Problem-first, design-next. And that gives me a rare gift:</p>

<p><strong>Transforming Value Propositions into Delightful Software</strong>.</p>

<p>So maybe, yes, I’m not a PM, EM, SWE, or a Product Designer. But all of my capabilities intersect those functions. And for the first time, I think the role definition that perfectly captures my abilities might be this one:</p>

<p><strong>Design Engineer.</strong></p>]]></content><author><name>Andrew Miracle</name></author><category term="🌱 Seedling" /><category term="🌼 Blossoming" /><category term="Ai" /><category term="Design" /><category term="Entrepreneurship" /><summary type="html"><![CDATA[Andrew Miracle explores the design engineer role, why it feels familiar, and where the hybrid skillset thrives in practice.]]></summary></entry></feed>