AI, Art & Mortgages

I want to start by acknowledging that this is a topic that directly affects people’s livelihoods. Real people are losing real work to generative AI right now, and that matters. I’m not going to pretend this is purely an abstract or anonymous philosophical debate. Also, I’ve enjoyed every Sanderson book I’ve read and have no beef with him; he’s simply made himself a target here by communicating his position clearly.

That said, I’ve been struggling with this topic because I can’t find a clean position. The conversation around AI and art tends toward extremes: either it’s theft and should be banned, or it’s a tool like any other and everyone should embrace it. I’m not comfortable on either end. There are too many layers and angles, and I think flattening them into a simple take does a disservice to everyone involved.

The clearest version of the anti-AI argument I’ve encountered comes from Brandon Sanderson. His thesis, roughly: the struggle is the art. The book you write isn’t really the product, it’s a “receipt” proving you did the work. You become an artist by writing bad books until you write good ones. The process of creation changes you, and that transformation is the actual art. LLMs can’t grow, can’t struggle, can’t be changed by what they make. So they can’t make art.

It’s a thoughtful position. But I think it’s also circular. He’s defined art as the process of struggle, but the audience doesn’t experience your struggle. They experience the output. Nobody listening to an album knows or cares whether it took a week or three years to record it. They care if it moves them. When I read Mistborn (which I enjoyed!), I’m not feeling Sanderson’s growth journey from White Sand Prime through six unpublished novels that I never read. I’m feeling the story he eventually learned to tell.

“Put in the work” is real advice and I believe in it deeply. But the work is how you get good, not why the result matters to anyone else. Those are different things. Conflating them feels like asking the audience to subsidize your growth journey.

Subsidy

And maybe that’s what some of the anger is actually about. AI threatens the subsidy.

The middle tier of creative work (background music, stock photography, commercial illustration, session gigs) was never really about profound artistic growth. It was a way to pay the mortgage while developing your craft on nights and weekends. You do the pedestrian work that keeps the lights on, and that buys you time to make the art you actually care about. AI competes in that middle tier directly, and it’s winning.

That’s a real economic disruption, and I don’t want to minimize it. But framing it as “AI can’t make art because it doesn’t struggle” is a philosophical dodge of an economic problem.

That model isn’t ancient. It’s maybe 50-80 years old. The session musician, the stock photographer, the commercial illustrator working on their novel at night, these are 20th century inventions. Before that, you had patrons, or you were wealthy, or you just didn’t make art professionally. The “starving artist” is a well-known trope, but the “starving artist who does commercial work to fund their real art” is a much more recent arrangement. But there were also far fewer artists, with a lot more gatekeeping, so I’m not arguing that everything was great before then either.

“I did it myself”

There’s also the provenance argument, that AI is trained on copyrighted work without consent or compensation. And that’s a real concern. But virtually all musicians learned to play and write by listening to and studying other musicians. There’s no system to track that provenance or pay royalties unless it’s a nearly-direct copy. The line between “learned from” and “trained on” is blurrier than it feels.

That said, I don’t want to dismiss the emotional weight here. Feeding your art and creativity into a machine with no credit—while some corporation profits from it—is a tough hit to the ego, not just the bank account. That’s a legitimately hard thing to get past, and I hope we find a better solution for it. The current arrangement feels extractive in ways that don’t sit right, even if I can’t articulate exactly where the line should be.

Sanderson said “I did it myself” referring to his first novel, which he hand-wrote on paper. This feels cringeworthy to me, because in no way did he do it himself. That first novel had thousands of contributors, from his parents and teachers to the stories he read, the conversations he had about it, the movies he watched, and so on.

This connects to something my thoughts keep coming back to: we’re always in the middle. Most people like to think of their place in a creative effort as the beginning or the end; the origin of something new, or the final word on something complete. But nobody starts from zero. The most original ideas are still cued by experiences. The most original inventions are still spurred by problems. Your inputs came from somewhere.

And it goes the other direction too. If we write the book, people still need to read it. If we compose the song, someone still needs to hear it. Our outputs are someone else’s inputs, often without permission, credit, or compensation. The chain keeps going.

Sanderson’s framing puts the artist at the center as the origin point of authentic creation, forged through struggle. But if we’re all in the middle, if every artist is just transforming their inputs into outputs that become someone else’s inputs, then the question of whether the transformer “struggled” feels less central. The chain of influence extends in both directions, through every artist who ever lived, and will continue through whatever comes next.

Starving Engineers

And then there’s the scope problem. Generated music is bad but generated code is fine? Generated paintings are theft but generated infographics are helpful? The reactions seem to track with how much cultural romance we attach to the craft. Software engineering has no “starving engineer” mythology. Nobody thinks “I suffered for my art” while debugging a race condition. So when AI writes code, it’s a tool. When it writes songs, it’s an existential threat.

Photography is worth remembering here. In the 1800s, critics argued photography wasn’t art because it merely captured what already existed. Some said copyright should go to the subject, or even to God, not the photographer. It was too easy, just thoughtlessly press a button.

But over time, people figured out that taking a photo wasn’t a mundane task. Good photographers could be in the same place with the same equipment and consistently create images that moved people. The tool became a medium. Mastery emerged.

I think AI will follow a similar path. Right now most people are still tinkering, with mixed results. But we’re starting to see glimpses of people getting genuinely good at it, comfortable enough that they can do things most people can’t, or never thought of. They’ll convey ideas and emotions in new ways. They’ll be drawing on the collective contributions of thousands of generations of prior artists, just like every artist always has.

I don’t have a clean conclusion here, and I’m not sure anyone should right now. The displacement is real. The ethical questions around training data are real. The cultural anxiety about what counts as “real” art is real. I can’t join the strong positions on either side, because I think we’re very early in a journey that will outlive all of us.

What I am is cautiously optimistic. The history of art is full of new tools that were rejected as cheating until people learned to master them. The history of technology is full of painful transitions that looked like apocalypses at the time and turned out to be recalibrations. I suspect this is one of those. I hope so, anyway. We won’t know for a while yet.

Building at the speed of … builds

I’ve been thinking about build speed lately, usually while waiting for builds, and I think the thing that’s underappreciated isn’t the raw numbers, it’s that different speeds are qualitatively different experiences. Faster is always better, but it’s far from a linear relationship.

Working on a package that builds in 100ms is basically invisible. You don’t even notice it’s happening. The feedback loop is so tight that it feels like the code is just doing what you told it to do. You’re in conversation with the machine and you are the bottleneck, which is the goal.

At 10 seconds, it’s disruptive, but if the tooling is set up well you can stay in flow. You wait. You’re still there when it finishes. You might even find a bit of rhythm or cadence here, and get a little thrill from the anticipation, like hitting a long fly ball and watching to see if it makes it out.

At a minute, it’s more like someone tapping you on the shoulder to ask a question. Your attention wobbles. You notice you could use a coffee, or you tab over to email to check something “real quick.” Five minutes later you come back and the build failed two minutes ago. Now you’re reloading context.

At 10 minutes, it changes your whole relationship with the work. You start actively avoiding triggering builds. You’re trying to see how far you can get while holding your breath. If it fails at 9:30 you’re genuinely frustrated, and maybe you’ll just go find something else to do for a while.

The reason I think this matters is that people tend to look at build optimization as a spreadsheet exercise: spend 8 hours to save 30 seconds, amortize across however many builds, calculate break-even. Even if the math works out, it feels tedious, and while the other coders might thank you for a 5% reduction, the suits won’t.
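
For illustration, that spreadsheet exercise looks something like this; every number here is made up:

```typescript
// Hypothetical break-even math for a build optimization.
const hoursSpentOptimizing = 8;     // engineering time invested
const secondsSavedPerBuild = 30;
const teamBuildsPerDay = 20;        // made-up team-wide figure

const secondsSavedPerDay = secondsSavedPerBuild * teamBuildsPerDay;
const daysToBreakEven = (hoursSpentOptimizing * 3600) / secondsSavedPerDay;

console.log(`Breaks even after ~${Math.round(daysToBreakEven)} working days`);
// => Breaks even after ~48 working days
```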

I think that exercise misses the point entirely. The less quantifiable stuff pays back almost immediately. You’re more focused. You’re doing better work. You’re just happier. A developer who’s been trained by their feedback loop to flinch isn’t going to produce the same work as one who can iterate freely.

But AI

There’s an argument to be made that AI changes this calculus, that it doesn’t matter anymore because the AI is doing the building in the background and will let you know when it’s done. But I think it actually makes build speed more important, not less.

Since flow state and focus matter less with async coding, the math actually becomes meaningful, and the small wins compound even further. If you’re coding at 1x speed and building every 10 minutes, and the build takes 2 minutes, you’re spending about 20% of your time waiting on builds. Annoying, but manageable.

Now imagine an AI coding at 10x. It wants to build every minute to verify its work. But the build still takes 2 minutes. Suddenly 66% of the time is spent building. The AI isn’t going to get frustrated and check its email, but it’s also not doing useful work during that time. And if you’ve got multiple agents running in parallel, that bottleneck adds up and leaves even more open loops to manage.
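
Here’s that arithmetic as a tiny sketch, counting a cycle as coding time plus build time (the numbers are the made-up ones above):

```typescript
// Fraction of each edit-build cycle spent waiting on the build.
function buildOverhead(codingMinutes: number, buildMinutes: number): number {
  return buildMinutes / (codingMinutes + buildMinutes);
}

console.log(buildOverhead(10, 2)); // human at 1x: ~0.17, call it 20%
console.log(buildOverhead(1, 2));  // AI at 10x: ~0.67, two-thirds waiting
```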

When you speed up one part of a pipeline, the bottleneck shifts somewhere else. AI sped up the coding. Now the build is often the bottleneck. If anything, that’s an argument for investing more in build speed than we did before: the returns are even higher when you’re trying to iterate faster.

The Middle

I think most people like to think of their place in a creative effort as the beginning or the end, or even both, but the reality is that we’re always in the middle.

The most original ideas are still cued by experiences. The most original inventions are still spurred by problems. Nobody starts from zero, and I don’t just mean privilege or connections (though those count too). I mean the basic fact that your inputs came from somewhere, just like you did.

And it goes the other direction too. If we ship the software, people still need to use it. If we build the house, someone still needs to live in it. Our outputs are someone else’s inputs. The chain keeps going.

Once you accept this, something shifts. You don’t need to credit every influence or take responsibility for everything that happens downstream, but being aware that they exist opens your eyes. You start to see how your work can go places you didn’t expect, inform decisions you weren’t part of, generate ideas you won’t be around for. And it lets you rewind. If your “great idea” doesn’t work, it was always just one link in a chain, and you can go back and try a different path.

I think this actually reinforces your contribution rather than reducing it. We tend to put new ideas and great results on a pedestal and treat everything in between as an unavoidable burden. But if it’s all in between, if there is no pristine beginning or triumphant end, just the middle, then you have permission to appreciate and invest in the whole process. This may not get you on the front page or in the corner office, but I think it’s a clearer path to fulfillment and happiness.

Look Up First

If you put an architect or designer in an unfamiliar space and take the blindfold off, the first thing they’re likely to do is look up. They’re looking for load-bearing walls. For structure. For constraints.

They might not even be able to tell at a glance. But it’s so important that it’s worth a try. Why? Because it instantly reduces the number of possibilities from near-infinite to something tangible. Their pattern recognition finds some surface area to grab onto. Ideas start to get bounced at the door, and the ones that don’t get bounced find space to flourish.

Point an experienced engineer at a codebase and they’re doing the same thing. What frameworks are you using, and which parts are you actually using? What are people depending on? What are your interfaces, your standards?

Not what version of the language it’s written in. Not tabs versus spaces. Those matter eventually, but not on day one. On day one, you need to know the shape of the thing, not the texture.

You don’t need the full blueprint in your head, but you need to know the important parts. You need to know which metaphorical walls you can drill into or knock down and which carry the roof or have plumbing and will blow your budget if you open them up.

One powerful way to do this: look at the abstractions the system has made. You can see which ones held up and which needed workarounds or patching. Which added value. What customers are relying on. What you can and can’t move or remove. The abstractions that survived are load-bearing now, whether they were good bets or just bets that got stuck.

If you can’t point to which code is load-bearing in your system, you don’t understand your system—even if your goal is to tear it down.

52 Word Review: One Battle After Another

One Battle After Another was well-written, well-paced, well-cast, well-acted and well-shot, and yet felt completely forgettable and somehow unoriginal.  There were echoes of Tarantino, Terminator 2, Easy Rider and many others, but that’s all it felt like.  This might be an accomplishment for other directors but for Anderson it was a disappointment.

The Fantasy of Always and Never

One of the patterns I picked up during my freelancing years was to gently probe any time a client made an absolute statement. “We always do X” or “Y never happens.” These were usually non-technical clients describing their business processes, and I needed to understand them well enough to build software around them.

Honestly, I can’t remember a single one of those statements holding up to any scrutiny. Most fell apart with a single question. “What if Z happens?” “Oh, in that case, we do this other thing.” Almost none of them survived two or three follow-ups. It wasn’t that people were lying or even wrong; they just had a mental model of how things worked that was cleaner than reality.

This matters a lot when you’re building abstractions. When you create a shared component or a unified data model, you’re betting that these things really are the same. That they’ll change together. That the commonality you see today will hold tomorrow. Sandi Metz said, “Duplication is far cheaper than the wrong abstraction.” I’d take it further: you need to be really certain an abstraction is right, or it’s probably wrong.

Abstractions and DRY are, in a sense, intentional single points of failure. That’s not to say they’re bad (I build them all the time), but it’s worth keeping in mind. You are hitching a lot to the same post. If you try to abstract physical addresses to cover all addresses everywhere, you’re left with basically zero rules because they’re all broken somewhere. Same for names, medical histories, pretty much any taxonomy that involves humans.

So now when I’m looking at a potential abstraction, I try to pressure-test it. “Users always have an email address” is fragile; there’s probably an exception lurking somewhere. “Users usually have an email address, and we handle it gracefully when they don’t” is something you can build on. If your abstraction can’t survive that kind of flex, it might not be an abstraction at all, just a few things that happen to look similar, until they don’t.
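
Here’s a minimal TypeScript sketch of that difference (the type names and the fallback are hypothetical, just to show the shape):

```typescript
// Fragile: bakes "always" into the type. The first user without an
// email breaks every call site that assumed one.
interface FragileUser {
  id: string;
  email: string; // "users always have an email address"
}

// Flexible: models "usually" and forces callers to handle the gap.
interface User {
  id: string;
  email?: string; // "users usually have an email address"
}

// Stubs standing in for real delivery mechanisms.
function sendEmail(address: string): void { /* ... */ }
function notifyInApp(user: User): void { /* ... */ }

function contact(user: User): void {
  if (user.email !== undefined) {
    sendEmail(user.email);
  } else {
    notifyInApp(user); // the graceful path for the exception
  }
}
```

The optional field is the pressure test made permanent: every caller has to answer “what if there’s no email?” before the code compiles.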

Baseball Hall of Fame 2026

Carlos Beltrán – Weak Yes
Andruw Jones – Weak No
Chase Utley – No
Alex Rodriguez – Strong Yes
Manny Ramirez – Strong Yes
Andy Pettitte – No
Félix Hernández – Weak No
Bobby Abreu – No
Jimmy Rollins – Weak No
Omar Vizquel – Weak No
Dustin Pedroia – No?
Mark Buehrle – No
Francisco Rodríguez – No
David Wright – No
Torii Hunter – Weak No
Cole Hamels – No
Ryan Braun – No
Alex Gordon – No
Shin-Soo Choo – No
Edwin Encarnación – No
Howie Kendrick – No
Nick Markakis – No
Hunter Pence – No
Gio Gonzalez – No
Matt Kemp – No
Daniel Murphy – No
Rick Porcello – No

I’ve added strong and weak to a few this year. For the weak ones, I could probably be convinced to switch my vote, or might change it with a different set of options on the ballot.

If my vote counted, I would hate to actually vote against Pedroia; he was one of my favorite players and had a HOF-trajectory career that was cut short by injury.

Where I’m going against the grain: Jones was very good, but just not best-of-the-best. Utley was very popular, but had only a few good years early in his career. Pettitte was a reliable workhorse. Both had one meaningful league-leading stat in their whole careers, and if they were on small-market teams we’d barely remember them.

Hernández just doesn’t quite make the cut for me, along with Hunter. Rollins and Vizquel are tough calls because they were both good contributors, great defenders, and important pieces of their teams.

My New Editor

Google has a concept called “Readability,” which usually means you can write and review code in a certain language in the accepted style. When I first joined Google it seemed bureaucratic, but I think it strikes a good balance: in the languages I have readability in, I feel pretty confident I can write things that incorporate the style decisions, and in the languages I don’t, I don’t have that confidence.

A couple of months ago I had an idea that this concept might be useful at a higher level. Instead of Go or TypeScript, what if there was Readability for concepts like security, or maintainability? I started chatting with AI and eventually ended up with what has been a very enjoyable project. It’s not “readability for maintainability” like I thought, nor is it about certification or iron-clad rules, but it’s become a growing repository of thoughts, stories, and observations. I’ll get more into the process in the future.

I’ve gone back and forth about sharing it here, not because it’s embarrassing but because of the AI aspect. I think AI has been an absolutely fantastic tool for editing and organizing these thoughts and notes, and for prompting me with topics to write on, and I love being able to just write a draft and have it come out as something built a bit better. Much of it is my words exactly, but rearranged or glued together into a better structure.

So far, 100% of this blog, including this post, has been hand-written by me. I’ve used AI to review the past few posts and made some revisions based on its feedback, but none of it was written by an AI. I’m now in a position where my choices are to share AI-assisted content, rewrite it myself to satisfy some arbitrary rule, or not share it at all. I don’t like the last option, and the second seems kind of foolish, so I’m going with the first, and I’m going to tag these posts as #ai-assist for transparency. Where I’ve only used it for feedback, I’ll use #ai-review.

The AI isn’t doing anything a good editor wouldn’t do. All of the thoughts, examples, principles and stories are mine (or credited where due).  To me, that’s not slop.  I hope people judge the content based on what it’s saying rather than the tools involved, but I respect the sensitivity people have towards these tools and don’t want to misrepresent things.

Personal Computing Returns

I’ve been doing a lot of AI-assisted coding both at work and at home lately, and am noticing what I think is a positive trend. I’m working on projects that I’ve wanted to do for a while, years in some cases, but never did. I never did them because they just didn’t seem worth the effort. But now, as I become a better vibe coder, that effort has dropped rapidly, while the value remains the same. If anything, the value might actually be higher, because I can take a project beyond MVP level and get it to be really useful.

Case in point: I do a lot of DIY renovation work and woodworking (though not enough of the latter). I use a lot of screws and other hardware, and it can be very disruptive to run out. I try to stay organized and restock pre-emptively, but things slip through. What if there was an app that was purpose-built for tracking this, that made checking and updating inventory as simple as possible, and made it easy to restock? Even better, what if it was written exactly around how I track my screws, and had all of the defaults set to the best values for me? Better still, what if it felt like the person who wrote it really understood my workflow and removed every unnecessary click or delay?

Screenshot of a vibe-coded screw inventory app.

Anyone familiar with app development knows that once you get into the domain-specific details and UX polish necessary to take something from good to great, the time really skyrockets. Screws have different attributes than nails, or hinges, or braces, or lumber. People do things in different ways, and if you miss one use case, they won’t use it. If you cover everything, it’s hard to use and doesn’t feel magical for anyone. You could knock out a very basic version in a few nights, maybe 10 hours, but this wouldn’t do much more than a spreadsheet, which is probably what you’ll go back to as soon as you find some bug or realize you need to refactor something. To make this thing delightful you’re likely in the 50-100 hour range, which is maybe embarrassing when you tell your friends you just spent a month of free time writing an app to keep track of how many screws you have in your basement.
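
To give a feel for how fast that detail piles up, here’s a hypothetical slice of the screw model (the fields are my guesses at what matters, not a real spec):

```typescript
// Attributes specific to screws; nails, hinges, braces, and lumber
// would each need their own shape, which is where the hours go.
interface ScrewStock {
  headType: "flat" | "pan" | "truss" | "hex";
  drive: "phillips" | "torx" | "square" | "slotted";
  size: string;          // e.g. "#8" or "M4"
  lengthInches: number;
  material: "steel" | "stainless" | "brass" | "coated";
  quantityOnHand: number;
  restockAt: number;     // the default that has to be right for me
}

const deckScrews: ScrewStock = {
  headType: "flat",
  drive: "torx",
  size: "#8",
  lengthInches: 2.5,
  material: "coated",
  quantityOnHand: 140,
  restockAt: 50,
};
```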

With the current crop of tools like Claude Code and Gemini CLI, that MVP takes 20 minutes, and you can do it while watching the Red Sox. Another hour and it’s in production and starting to accrue some nice-to-have features, even if the Rays played spoiler and beat the Sox. It works great on desktop and mobile, and it fits safely on the free tiers of services like Firebase and Vercel, so it’s basically maintenance-free. One more hour while you’re poking around YouTube and you’ve got a fairly polished tool you’re going to use for a while.

I think most people probably have a deep well of things they’d like to have that never made any financial sense and probably aren’t interesting to anyone else. We’ve probably self-censored a lot of these ideas, so we’re not even coming up with as many as we could. But when the time/cost drops by 90% or more, and you can take something from good to great, and have it tailored exactly to you, it’s a whole new experience.

The term “personal computing” went out of style decades ago, and now it feels like we’re all doing the same things in the same way with the same apps, but maybe it’s time to start thinking for ourselves again?