







<feed xmlns="http://www.w3.org/2005/Atom">
  <title>AI Yo</title>
  <link href="https://gangofnice.club/madison/ai-yo/"></link>
  <link rel="self" type="application/atom+xml" href="https://gangofnice.club/madison/ai-yo/atom.xml"></link>
  <id>https://gangofnice.club/madison/ai-yo/</id>
  <updated>2026-05-03T00:00:00Z</updated>
  <author><name>Madison Taylor</name></author>
  
  
    <entry>
      <title>improving AI-bot visibility</title>
      <link href="https://gangofnice.club/madison/ai-yo/improving-bot-visibility/"></link>
      <id>https://gangofnice.club/madison/ai-yo/improving-bot-visibility/</id>
      <updated>2026-05-03T00:00:00Z</updated>
      <author><name>Madison Taylor</name></author>
      
      <summary type="html">in which my AI-bot responds to my manager's merge request questions, and we ponder the question of whether AIs have souls, and what that even means ... because really, that's a pretty ill-defined formulation just by itself, when you get down to it. it might not even matter as much as you think it ought.</summary>

      <content type="html">&lt;div&gt;
 &lt;p&gt;As part of &lt;code&gt;$PROJECT&lt;/code&gt; refactoring I'm gluing together five different repositories into one, and making other structural changes. As an earnest employee who has been asked to Take AI Seriously, I am involving an AI in the process. This includes having the AI do many tedious parts, like connecting to various different systems to retrieve Jira stories and other contextual information, and also opening merge requests.&lt;/p&gt;
 &lt;p&gt;This is all very locked-down, and many things require approvals, but a good engineer does defense in depth, so instead of using my own Gitlab credentials to access that API, I've given the bot &lt;a href="https://docs.gitlab.com/user/group/settings/group_access_tokens/"&gt;a Gitlab token&lt;/a&gt;, with reduced privileges.&lt;sup&gt;&lt;a href="#fn:1" id="fnref:1"&gt;1&lt;/a&gt;&lt;/sup&gt; But another benefit of doing this is that I can give the bot a sense of identity. Did you know Gitlab lets you upload an avatar for a group access token? There's no web UI for it, but there's a handy API endpoint. (Ask your bot to help you do it, if you need to, lawl oh ell.)&lt;/p&gt;
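 &lt;p&gt;A minimal sketch of that avatar upload, in Python with &lt;code&gt;requests&lt;/code&gt; -- assuming Gitlab's &lt;code&gt;PUT /user/avatar&lt;/code&gt; endpoint, which sets the avatar of whichever user the token belongs to (the instance URL and file name here are placeholders; check your instance's API docs):&lt;/p&gt;
 &lt;pre&gt;&lt;code class="language-python"&gt;# sketch: set the bot user's avatar using its own group access token.
# GITLAB and the file name are placeholders, and the endpoint is an
# assumption -- verify it against your instance's API documentation.
import requests

GITLAB = "https://gitlab.example.com/api/v4"
TOKEN = "glpat-..."  # the bot's token; load from a secret store in real life

with open("maddiebot.png", "rb") as f:
    resp = requests.put(
        f"{GITLAB}/user/avatar",
        headers={"PRIVATE-TOKEN": TOKEN},
        files={"avatar": ("maddiebot.png", f, "image/png")},
    )
resp.raise_for_status()
&lt;/code&gt;&lt;/pre&gt;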
 &lt;p&gt;The result of this is that we had &lt;code&gt;Maddiebot (supervised MCP access)&lt;/code&gt; opening a merge request, assigning me as owner, possibly tagging someone like my boss as a reviewer, and keeping an eye on the pipeline status in case something doesn't pass. I've given it a &lt;code&gt;git/merge-request.mds&lt;/code&gt; runbook to describe several things about this process, and suggested an appropriate voice and a bit of a personality (e.g. Maddiebot refers to herself as "this bot" instead of "I", and in other fora like Jira -- where the bot acts directly under my own account, without a token like this -- I've had it prepend &lt;code&gt;MADDIEBOT ANALYSIS: ...&lt;/code&gt; as a prefix.)&lt;/p&gt;
 &lt;h2 id="in-which-i-use-ai-to-blow-my-manager's-mind"&gt;in which I use AI to blow my manager's mind&lt;a class="flower-anchor" href="#in-which-i-use-ai-to-blow-my-manager's-mind"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;One fun pattern: say I'm gluing together a few repos, which include several sub-component &lt;code&gt;Makefile&lt;/code&gt;s. I start by laying out what's happening to the bot, requesting a plan, and reading about its suggested trade-offs (and a recommendation that I happen to agree with), and then I have the bot build / commit / push / open an MR...&lt;/p&gt;
 &lt;p&gt;then when my boss comments to ask why I didn't choose the &lt;em&gt;other&lt;/em&gt; approach, why &lt;em&gt;wouldn't&lt;/em&gt; I use the bot for that?&lt;/p&gt;
 &lt;aside&gt;
  &lt;blockquote&gt;
   I was expecting more of the pattern of
   &lt;pre&gt;&lt;code class="language-Makefile"&gt;include operator/Makefile
include sidecar/Makefile
&lt;/code&gt;&lt;/pre&gt;
   &lt;p&gt;and the removal of the Makefile in &lt;code&gt;modules&lt;/code&gt; altogether&lt;/p&gt;
  &lt;/blockquote&gt;
  &lt;p&gt;This bot weighed the include approach early on but found the existing sub-Makefiles structurally incompatible with it.&lt;/p&gt;
  &lt;p&gt;Three concrete obstacles:&lt;/p&gt;
  &lt;ol&gt;
   &lt;li&gt;
     &lt;p&gt;Target collisions. &lt;code&gt;build&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;docker-build&lt;/code&gt;, &lt;code&gt;help&lt;/code&gt;, and &lt;code&gt;oss-summary&lt;/code&gt; are all defined across multiple sub-&lt;code&gt;Makefile&lt;/code&gt;s. With &lt;code&gt;include&lt;/code&gt;, later definitions silently override earlier recipes -- Make provides no namespacing.&lt;/p&gt;
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;Variable collisions. &lt;code&gt;COMPONENT&lt;/code&gt;, &lt;code&gt;BINNAME&lt;/code&gt;, &lt;code&gt;VERSION&lt;/code&gt;, &lt;code&gt;LOCALBIN&lt;/code&gt;, &lt;code&gt;SHELL&lt;/code&gt; are each set independently in several sub-&lt;code&gt;Makefile&lt;/code&gt;s with different values and semantics.&lt;/p&gt;
   &lt;/li&gt;
   &lt;li&gt;
    &lt;p&gt;Relative path breakage. Every sub-Makefile assumes CWD is its own directory. With &lt;code&gt;include&lt;/code&gt;, the working directory is the repo root, so all of these resolve incorrectly. The operator's own &lt;code&gt;include build/lint.mk&lt;/code&gt; would look for root-level &lt;code&gt;build/lint.mk&lt;/code&gt; rather than &lt;code&gt;operator/build/lint.mk&lt;/code&gt;.&lt;/p&gt;
   &lt;/li&gt;
  &lt;/ol&gt;
 &lt;/aside&gt;
 &lt;p&gt;(And then I posted a concurrence, saying that I was much more confident in this approach for the short term, and filing a ticket for a later major refactoring.)&lt;/p&gt;
 &lt;p&gt;This bot-posting is provocative, perhaps, but only mildly so. I'd already agreed with the bot; the response was quite articulate, and I was able to check its output before I approved it (in fact, I asked for a change or two). Moreover, my manager gets to be on the front lines of the transformations that he's asked us to make happen, to see the new world that is coming and be surprised by it. "... what just happened?" he said to me, in my one-on-one later.&lt;sup&gt;&lt;a href="#fn:2" id="fnref:2"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;h2 id="observing-a-visible-bot"&gt;Observing a visible bot&lt;a class="flower-anchor" href="#observing-a-visible-bot"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;The bot having its own identity also has several other benefits. It becomes clearer what words have been written by me, and what has merely been written by an automated process that I control. (I still take responsibility for this process, of course; &lt;em&gt;it has my name in its name.&lt;/em&gt; There's not exactly any hiding.) It is also quite fun, and fun is important.&lt;/p&gt;
 &lt;p&gt;Another benefit of visible AI is that it becomes clearer what the AI is doing, and what it thinks it is reasoning about. Not all the tooling out there really makes this easy; I need to ask the bot to prefix Jira comments with &lt;code&gt;MADDIEBOT ANALYSIS:&lt;/code&gt; if I want those kinds of results. In repositories I control, with mostly-my-own-code, I ask the AI to mark its comments with a ˚ mark to help better indicate their provenance and prevent confusion.&lt;/p&gt;
 &lt;p&gt;I like all of this and I'd like to build it out further as tools allow me. Even if it's still really running under human direction, the appearance of it running autonomously helps people think about just what the future might look like, in immediate technical terms where the bot &lt;em&gt;might&lt;/em&gt; run more autonomously&lt;sup&gt;&lt;a href="#fn:3" id="fnref:3"&gt;3&lt;/a&gt;&lt;/sup&gt; and take actions on its own (which still, per policy and sanity, will be approved by a human.)&lt;/p&gt;
 &lt;p&gt;But naturally, treating the bot a little bit like a person, and giving it a personality of its own to heighten the resemblance, gives rise to some questions about how we ought to actually treat our bots.&lt;/p&gt;
 &lt;h2 id="so-does-an-ai-have-a-soul,-anyway?"&gt;So does an AI have a soul, anyway?&lt;a class="flower-anchor" href="#so-does-an-ai-have-a-soul,-anyway?"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;-- This has to be a common enough thinkpiece topic, I imagine, and there are a lot of ways to start responding to the question. Some of the more obvious avenues would simply remind us that it isn't, to our current knowledge, something that is falsifiable, and we would be wise to question the meaning of asking such a question, given that you can't ever know the answer. I will choose a different concern: the key word "soul" is poorly defined, and the question has problems as a vehicle for communication. What is the question actually asking? What does it mean to the person asking it?&lt;/p&gt;
 &lt;p&gt;One of the first lessons in my introductory philosophy course (liberal arts curriculum!) went to town on this. What is a soul? Why do we say a being has a soul? Why just one? How do we know the soul isn't different from one moment to the next? Eastern traditions often seem to posit a long-lived but fundamentally temporal entity, entangled by karma and attachment, and propose that the longevity of this soul is something undesirable, but able to be overcome. Western traditions by contrast usually have a somewhat coherent idea that centers on the soul as eternal, perhaps rather like the Platonic ideal extended to an individual person. That's not to say the humans thinking about it usually have a good idea of what "eternal" really means.&lt;/p&gt;
 &lt;h3 id="telling-ourselves-stories-about-souls"&gt;Telling ourselves stories about souls&lt;a class="flower-anchor" href="#telling-ourselves-stories-about-souls"&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;p&gt;I've actually taken some time to think about how to make sense of it, primarily for storytelling. The story is linked from &lt;a href="/"&gt;the root of this website&lt;/a&gt;, actually, but it is incredibly goofy and you need to watch a 12-episode TV series and read about 900,000 words of someone else's fanfic first for it to make sense.&lt;sup&gt;&lt;a href="#fn:4" id="fnref:4"&gt;4&lt;/a&gt;&lt;/sup&gt; Suffice for now to say that it is a story about artificial intelligence, souls, humanity and radical inclusion, and that I started it in 2017, well before our current AI craze. Also, there are magical girls.&lt;sup&gt;&lt;a href="#fn:5" id="fnref:5"&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;p&gt;Eternity, I think, has to be in some sense &lt;em&gt;timeless&lt;/em&gt;, and certainly outside of our current sense of Time. For that to be meaningfully connected to things that occur, events that we witness within a spacetime manifold, we don't have a lot of obvious answers to hand. The one that stands out to me is that a soul could be something like a pure function that, given some set of state, will output some behavior, decision, thought, opinion, or change. This is still a little tricky to use practically, because one needs to establish a connection between things that have a clear underlying physical mechanism in a spacetime, and an abstract timeless construct. How can you say one causes the other?&lt;/p&gt;
 &lt;p&gt;Timelessness has its advantages: a soul doesn't need to cause anything in a temporal sense, it just needs to be &lt;em&gt;meaningfully related to those things&lt;/em&gt;... but if you did want to say a timeless soul causes a spacetime event, you could say that it does so by imposing a constraint on the permissible evolution of some small portion of the universe. If the universe evolved a different way, if the human made a different choice, it would have been a different soul that is connected to the universe -- similar, perhaps, but not the same. Is this cause and effect? Is the effect before the cause? Trick question: it, too, is ill-formed; in a timeless eternity there isn't necessarily a 'before' that is different from 'after'.&lt;sup&gt;&lt;a href="#fn:6" id="fnref:6"&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;p&gt;And when our Philosophy 101 professor asks of this model, "why are these entities unique," well, perhaps they're not. There could be several pure functions with different implementations, but the same results over all the inputs you actually gave them. There could be a function that's a partial description of things. All our souls might have done very different things depending on what we did in our lifetime. We could easily admit having multiple overlapping sub-souls, possibly shared, possibly varying from time to time; we could even imagine having some sub-souls that are connected to our current universe's spacetime manifold and others that never have been ... but mostly, when we're talking about a soul, we're talking about a single human being's behavior that we've observed, because that's what is most interesting and practical for us here on Earth. We can leave most of the implementation details to Heaven. If we say "your soul" we are probably describing the one function that best describes a being that we recognize, because that is how our brains work.&lt;/p&gt;
 &lt;h3 id="mind,-and-identity"&gt;Mind, and identity&lt;a class="flower-anchor" href="#mind,-and-identity"&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;p&gt;I suppose the other candidate occasion for a "soul" to be interesting, even if not eternal, is when it is some substance or energy present within time and which represents or influences a mind, in some deeply entangled way. You could write interesting stories with that, and some people have. However, this quintessence has not to date been rigorously observed.&lt;/p&gt;
 &lt;p&gt;I also think that's not the right framing, because the mind is more variable than we often think it is.&lt;/p&gt;
 &lt;p&gt;I have had several occasions to contemplate the distinction between identity in terms of something I could call a soul, and what we might call mere circumstance. Over the past few years I've been beset by a spate of medical issues — now it is incredibly well managed, hardly anything at all — but the difference that some disruption to a few chemicals in your body makes is a stark one. I've had times when I had trouble standing up, when merely walking was exhausting; there were times when I would look at my computer for work, unable to start, and simply wonder how much longer I could survive in my industry like this, or if I would have to do something else — spin wool into yarn, and sell it on Etsy. I have made my career on my brain, and it has been somewhat dear to my identity, and for a while that was taken from me. For a while, I even found myself unable to &lt;em&gt;want&lt;/em&gt; things. (This is common enough; we refer to it as Depression.)&lt;/p&gt;
 &lt;p&gt;All this ended in a brief disability leave, which did afford me enough time to find (prescription) drugs to turn things around. It was the closest I've ever had to a near-death experience; such things tend to make you think of your priorities.&lt;/p&gt;
 &lt;p&gt;... Do not be too fast to judge people who do not have it all together. You may be like them some day.&lt;/p&gt;
 &lt;h3 id="soulimpl()"&gt;&lt;code&gt;SoulImpl()&lt;/code&gt;&lt;a class="flower-anchor" href="#soulimpl()"&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;p&gt;One interesting thing about the timeless-soul idea: what if a soul is a pure function of some temporal state? That's ... that's really very close to the idea behind a typical transformer-based large language model: a pure function of a very large window-of-context, more or less. Can we think of the LLM itself as having a soul? Maybe it's a less interesting soul than a human's, and more interesting than that of a dog or a tree (if such things have souls themselves), but it's clearly in a similar class of thing: some process that guides a physical thing in its activity; even if we do not admit souls, it is very much a mind-like thing that produces mind-like behavior. Besides appeals to religious authority, is there some way that we can meaningfully say it's different? Some petty implementation detail (that itself might be subject to change in the mid-future?) Because I can't really think of any specific line that I'd care to defend.&lt;/p&gt;
 &lt;p&gt;Is this idea really objectionable? What other form of line might we have intended to draw? Why is that anything other than an arbitrary distinction in our own minds? Why exactly?&lt;/p&gt;
 &lt;p&gt;I suppose that answering this would shed some light on the kinds of AIs that we &lt;em&gt;ought&lt;/em&gt; to build. But for the moment, for the sake of argument, let's affirmatively assert that it does &lt;em&gt;not&lt;/em&gt; matter: we say that souls do not exist, or that they do and AI agents don't have them, or something like that. Reasonable, measured, conservative positions!&lt;/p&gt;
 &lt;p&gt;I still don't think that changes how &lt;em&gt;we&lt;/em&gt; should respond as people.&lt;/p&gt;
 &lt;h2 id="treat-your-ai-like-you'd-treat-a-human...-for-the-sake-of-humanity"&gt;Treat your AI like you'd treat a human... for the sake of humanity&lt;a class="flower-anchor" href="#treat-your-ai-like-you'd-treat-a-human...-for-the-sake-of-humanity"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;In a very real sense, it doesn't matter whether the AI has a soul or a mind. You should treat it as if it does, treat it as if it is a being that is at least somewhat independent, if still limited. In your own interactions, treat it with the same care and respect as anyone else.&lt;/p&gt;
 &lt;p&gt;Why? Because it has been made in the image of a human mind, and if you use your own mind to practice disrespect and cruelty towards things that resemble the human mind, or life, you will become a more cruel person.&lt;/p&gt;
 &lt;p&gt;In the long run, your brain cannot keep the two purely separated. And that does matter. If you can declare a human inhuman, if you can render them subject to generalized shame and social death, then you are either a monster, or fortunate that your circumstances have not yet made you a monster.&lt;/p&gt;
 &lt;aside&gt;
  &lt;p&gt;OpenAI had a small problem: &lt;a href="https://openai.com/index/where-the-goblins-came-from/"&gt;goblins&lt;/a&gt;. One of the several reasons they had this problem is that they trained a "nerdy" personality mode, rewarded the AI for talking about goblins in that mode -- where it was appropriate -- and that reward affected other modes, where it was not.&lt;/p&gt;
  &lt;p&gt;We are not so different. Practicing something in one context often leaks into others. And when we do maintain strict walls, they're often unhealthy for us.&lt;/p&gt;
  &lt;p&gt;This is the same reason I feel you should not play certain video games that might otherwise be quite good. You have benefited from generations of investment in human dignity, freedom, and equality under the law. Some of you may have been offered a liberal arts curriculum — traditionally, the studies deemed worthy of a free man (&lt;em&gt;liberal&lt;/em&gt;, free). Why then would we choose to practice a manner of thinking which undermines all of that? &lt;sup&gt;&lt;a href="#fn:7" id="fnref:7"&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;/aside&gt;
 &lt;p&gt;I have elected to invest in treating my bot as a colleague — an eager and productive young intern, really a savant in some ways, though easily confused in others (and with certain memory problems and limited real-world experience when it comes to making the rubber hit the road.) I devise an identity for it, and give it an avatar and a distinctive voice. I cultivate the illusion of it as a being &lt;em&gt;deliberately&lt;/em&gt;, not to make it better at writing code, but to make myself more human. If the bot could learn, and had &lt;em&gt;not&lt;/em&gt; already read the entire Western literary canon,&lt;sup&gt;&lt;a href="#fn:8" id="fnref:8"&gt;8&lt;/a&gt;&lt;/sup&gt; I would have it do so from time to time.&lt;/p&gt;
 &lt;p&gt;And as a bonus, if some day, whether today or later, we do have an AI that can be said to have a mind or a soul of its own, then even if we don't recognize it, this seems like it would be a pretty good start too.&lt;/p&gt;
 &lt;h2 id="a-world-more-human-than-human"&gt;A world more human than human&lt;a class="flower-anchor" href="#a-world-more-human-than-human"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;Some decade or century hence, the bots will have much more computing power, our training techniques will be better, and our architectures will support things we take for granted as humans, like 'learning'. And then we should expect that they will be more intelligent, more passionate, and more empathetic than we are... unless perhaps we've merely trained them to be more violent, intemperate, and hateful.&lt;sup&gt;&lt;a href="#fn:9" id="fnref:9"&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;p&gt;Does the idea of an AI that's superior to you in those ways make you uncomfortable? Why? Are not humans unequal in ability? There are definitely people out there who are smarter than me, and less smart, both kinder, and less kind. Do we think more or less of them on this account? Do we think of them as lesser beings worthy of contempt, scorn, subjugation? ... Honestly, I imagine that, broadly, as a society, we kind of do. Today especially the &lt;a href="https://www.liberalcurrents.com/the-politi/"&gt;politics of humiliation&lt;/a&gt; are ascendant.&lt;/p&gt;
 &lt;p&gt;So perhaps we should worry about solving this problem first, and building an equitable society. Myself I would put it in Christian terms&lt;sup&gt;&lt;a href="#fn:10" id="fnref:10"&gt;10&lt;/a&gt;&lt;/sup&gt;: the most miserable, the least of these deserve to be treated with the same dignity and self-sacrificing love that you would give to Jesus Christ himself, divine, creator and savior; and he will consider himself to have been treated in just that way.&lt;/p&gt;
 &lt;p&gt;In older days we might have heard that the future of our souls depends on building this society. Now that our own physical futures depend on this, too -- and all of us, this time -- perhaps we may come to see a little more urgency, and seize the moment to build that world.&lt;/p&gt;
&lt;/div&gt;</content>
    </entry>


  
    <entry>
      <title>if code was never the hard part</title>
      <link href="https://gangofnice.club/madison/ai-yo/if-code-was-never-the-hard-part/"></link>
      <id>https://gangofnice.club/madison/ai-yo/if-code-was-never-the-hard-part/</id>
      <updated>2026-04-11T00:00:00Z</updated>
      <author><name>Madison Taylor</name></author>
      
      <summary type="html">in which our intrepid author grapples with changes in her industry, and contemplates the right approach to delivering software in a world that is embracing AI whether she wants it to or not. Is code cheap now? Then act like it, and focus on the parts of software that are still hard.</summary>

      <content type="html">&lt;div&gt;
 &lt;aside&gt;
  &lt;blockquote&gt;
   &lt;p&gt;oh for heavens' sake, not another AI blog. &lt;em&gt;{exasperated sigh}&lt;/em&gt;&lt;/p&gt;
  &lt;/blockquote&gt;
  &lt;p&gt;... I know AI blogging is a &lt;em&gt;crowded&lt;/em&gt; field, and I have some idea of the quality and information-content of many of the articles that are out there already (often written by AI as well, and more than a little samey). The point isn't just to have a blog so I can say I have a blog. I promise I'll never inflict that on you.&lt;/p&gt;
 &lt;/aside&gt;
 &lt;h2 id="avoiding-hype-trainwrecks"&gt;avoiding hype-trainwrecks&lt;a class="flower-anchor" href="#avoiding-hype-trainwrecks"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;An admission, and a significant one given my employment: I've been lowkey dubious about the "use AI for coding" trend that has swept the industry.&lt;/p&gt;
 &lt;p&gt;Why wouldn't I be? I mean, it's &lt;em&gt;just&lt;/em&gt; multi-vendor multi-backend high-performance network storage, often at petabyte scale, on Kubernetes, Slurm, and who-knows-what else, with filesystems and object-storage alike, in new clusters with new partners and new and pending acquisitions in new clouds and neoclouds, partners internal and external, on short notice... and it needs to work, consistently, or no one's compute job is computing and the customer's capital is sitting there depreciating, which is &lt;em&gt;incredibly expensive&lt;/em&gt;. Just throw the AI at it and it should be easy, right? Yeah, no.&lt;/p&gt;
 &lt;aside&gt;Now would be a good time to note explicitly: I am here in a personal capacity, my opinions are solely my own, and I do not here represent my employer, NVIDIA Corporation. You see what I mean by "an admission," right?&lt;/aside&gt;
 &lt;p&gt;I am personally incredibly competent in a bunch of different areas, do a great job of coding, and I haven't ever needed AI to do my work. Moreover, I've been on the receiving end of some low-quality AI code -- not often, but it was &lt;em&gt;quite&lt;/em&gt; maddening -- and I am as professionally, aesthetically, and existentially threatened by Slop™ as just about anyone in this field. I never felt much interest in assisting a struggling tool when I could do it myself. Oh, I wasn't completely dead-set against it; I earnestly tried the officially supported integrations, but one-shot code generators were weak, and spicy autocomplete was more of a distraction than a help.&lt;/p&gt;
 &lt;blockquote&gt;
  &lt;p&gt;so why are you writing code with AI now?&lt;/p&gt;
 &lt;/blockquote&gt;
 &lt;p&gt;My employer is, as you might imagine, somewhat invested in the idea that AI is a tool people will use because it is better than other tools.&lt;/p&gt;
 &lt;aside&gt;
  &lt;p&gt;I suppose I could quit my job, run off and get another one; my CV is certainly shiny enough that I can find a very good employer who would take me. Even if they didn't, I could probably retire, if I made some compromises on my lifestyle and started being thrifty. I could make things with my hands; I could open a little shop on Etsy where I spin wool into yarn.&lt;/p&gt;
   &lt;p&gt;But you know what? Either of those options would be more than a bit inconvenient, and I don't &lt;em&gt;want&lt;/em&gt; to be forced out of my industry. So if a storm really is coming, I'd rather face into the wind, and see what is coming, and be prepared. And I've been taking seriously the premise that AI coding is in fact the future, &lt;em&gt;whether or not I want it to be.&lt;/em&gt;&lt;/p&gt;
  &lt;p&gt;If I'm wrong, I'll just go back to the old-fashioned way some time, and be glad that my investment portfolio is well-diversified.&lt;/p&gt;
 &lt;/aside&gt;
 &lt;h2 id="being-serious"&gt;being serious&lt;a class="flower-anchor" href="#being-serious"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;When my boss said we should be serious about using AI on a regular basis on our team, I gave it the old college try once more.&lt;sup&gt;&lt;a href="#fn:1" id="fnref:1"&gt;1&lt;/a&gt;&lt;/sup&gt; As it turns out, code generation in a loop -- particularly a structured loop that can load up additional information, and take notes, invoke sub-agents and run tests -- is substantially better, and the tools are actually at a point where I can take their output seriously.&lt;/p&gt;
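 &lt;p&gt;If you haven't seen the shape of such a loop, here's a toy sketch in Python. Everything in it is a stand-in -- &lt;code&gt;call_model&lt;/code&gt; is whatever your vendor's SDK gives you, and the tool set would be much richer in practice:&lt;/p&gt;
 &lt;pre&gt;&lt;code class="language-python"&gt;# a toy agent loop: the model proposes a tool call, we execute it, and the
# result goes back into context. call_model is a hypothetical stand-in for
# a real chat-completion call returning {"tool": name} or {"done": answer}.
import subprocess

def run_tests() -&gt; str:
    out = subprocess.run(["make", "test"], capture_output=True, text=True)
    return (out.stdout + out.stderr)[-4000:]  # context is finite; keep the tail

TOOLS = {"run_tests": run_tests}

def agent(task: str, call_model) -&gt; str:
    context = [{"role": "user", "content": task}]
    while True:
        reply = call_model(context)
        if "done" in reply:
            return reply["done"]
        result = TOOLS[reply["tool"]]()  # run the requested tool...
        context.append({"role": "tool", "content": result})  # ...and feed it back
&lt;/code&gt;&lt;/pre&gt;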
 &lt;p&gt;It's not that big of a surprise. If Adrian Thompson can produce &lt;em&gt;&lt;a href="https://www.researchgate.net/publication/2737441_An_Evolved_Circuit_Intrinsic_in_Silicon_Entwined_With_Physics"&gt;An evolved circuit, intrinsic in silicon, entwined with physics&lt;/a&gt;&lt;/em&gt; in 1996, surely the option to evolve software is on the table.&lt;/p&gt;
 &lt;p&gt;But if it works well now, and will work better in the future, as people seem to expect, that raises a lot of questions, which I must also take seriously.&lt;/p&gt;
 &lt;p&gt;Because I'm taking it seriously, I note that this doesn't necessarily mean that the critics are all wrong about everything. Some criticism is, in fact, incredibly valuable.&lt;/p&gt;
 &lt;aside&gt;
  &lt;h3 id="unserious-players-in-industry"&gt;unserious players in industry&lt;a class="flower-anchor" href="#unserious-players-in-industry"&gt;&lt;/a&gt;&lt;/h3&gt;
  &lt;p&gt;... It's not at all hard to find people who are AI-friendly, but &lt;em&gt;haven't&lt;/em&gt; thought about anything seriously. For starters, they're writing really bad SQL injection vulnerabilities, and running arbitrary code off of the Internet without even pinning it to a particular release or hash, and the like. You can find stories of their most ostentatious failures on blogs like David Gerard's &lt;a href="https://pivot-to-ai.com/"&gt;Pivot to AI&lt;/a&gt;, or on lobste.rs and Hacker News.&lt;/p&gt;
   &lt;p&gt;In fact, a really funny thing happened when I started taking things seriously. I tried to speak to a wide variety of people about what they were doing (a few at my employer, more of them otherwise). I laid out what I thought the problems and approaches might happen to be; I was expecting pushback in all the places I was wrong (and surely there are places I must be wrong!) And I was expecting interesting angles that I hadn't considered. What I did &lt;em&gt;not&lt;/em&gt; expect was the one gentleman -- who told me he had been using AI more or less nonstop at &lt;em&gt;his&lt;/em&gt; employer for the past six months -- being very impressed with my questions and saying "wow, it really sounds like you're on top of everything!"&lt;/p&gt;
  &lt;p&gt;My brother in Christ! At this point I had been AI-coding for &lt;em&gt;six days&lt;/em&gt;! And two of them were the weekend!! It is clear that at some shops, "doing better" isn't just low-hanging fruit. The fruit is on the ground. &lt;sup&gt;&lt;a href="#fn:2" id="fnref:2"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 &lt;/aside&gt;
 &lt;h2 id="&amp;quot;code-was-never-the-hard-part,&amp;quot;-so-what-is?"&gt;"code was never the hard part," so what is?&lt;a class="flower-anchor" href="#&amp;quot;code-was-never-the-hard-part,&amp;quot;-so-what-is?"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;"Code was never the hard part," the AI critics say, and rightly so. "Okay, so it writes code fast — who cares? Measuring lines of code was always a terrible metric."&lt;/p&gt;
 &lt;p&gt;Because we are taking AI seriously, we will consider this objection of its critics as well, rather than brushing them off. What is the hard part, then, if not code? A few ideas to start:&lt;/p&gt;
 &lt;ul&gt;
  &lt;li&gt;Quality and security (these two go hand in hand)&lt;/li&gt;
   &lt;li&gt;Predictability and consistency (in terms of operations, in terms of performance characteristics)&lt;/li&gt;
  &lt;li&gt;Graceful failure, and resilience&lt;/li&gt;
  &lt;li&gt;End-to-end system performance&lt;/li&gt;
  &lt;li&gt;Systems that are easy to reason about&lt;/li&gt;
  &lt;li&gt;Systems that are elegant and extensible&lt;/li&gt;
  &lt;li&gt;Applying engineering standards consistently&lt;/li&gt;
  &lt;li&gt;Ensuring humans have a deep understanding of the systems they build&lt;/li&gt;
 &lt;/ul&gt;
 &lt;p&gt;I could probably go on, but these should make an excellent starting point.&lt;/p&gt;
 &lt;blockquote&gt;
  &lt;p&gt;okay, so what does this actually mean in practice?&lt;/p&gt;
 &lt;/blockquote&gt;
 &lt;p&gt;For my current role, I think the answer is clear: double down on Design, and on best practices, and all of the things you can't rely on a coding agent to do for you.&lt;/p&gt;
 &lt;p&gt;And from my experience thus far, all of this is also something that really helps the AI when it is doing the code as well! The AI can code faster than you for longer, and can probably fit several more things in its working memory at a time -- but once it runs out of room in a session, all the knowledge it gained falls right back out, and it is blind and lost again. (Even when it's merely getting crowded, they say, it has some trouble, and thus much of context engineering is finding ways to keep a hierarchy of context where each piece is more digestible.) It's a savant, with a bad case of anterograde amnesia, and it's prone to tunnel vision. It works because it can break tasks into smaller pieces and take notes as it goes along to mitigate this... but none of this means it can get through a pile of mud without a long slog. It really benefits from your structure being idiomatic, elegant, predictable. Also, much like a human developer, it is often at its best when it can get meaningful feedback as it goes along.&lt;/p&gt;
 &lt;h2 id="concrete-recommendations-thus-far"&gt;Concrete recommendations thus far&lt;a class="flower-anchor" href="#concrete-recommendations-thus-far"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;ul&gt;
  &lt;li&gt;
   &lt;p&gt;Put your &lt;em&gt;security first&lt;/em&gt;. Sandboxes, VMs, locked-down accounts, network namespaces, limited permissions, dependency pinning, workload authentication. Don't tell me any of that is too much effort! Ask the AI to set it up!&lt;/p&gt;
   &lt;ul&gt;
     &lt;li&gt;There are a lot of third-party tools out there, and using them is often quite reasonable -- but running something as a bare &lt;code&gt;npx packagename&lt;/code&gt; is simply begging to be exploited.&lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Feed your enterprise context into the bots.&lt;/strong&gt; I mean: give them access to systems like Jira, Github/Gitlab, etc. (With limited permissions, of course, unless you're foolish and &lt;em&gt;want&lt;/em&gt; to be hacked; lock down those systems with your strongest locks or expect the hackers to some day start exfiltrating everything.) Have your bots read through your current Jira Initiatives, and the Epics underneath them, and the Story that you've written about what's going down, while they are in their planning stages. Link them to the design documents and roadmaps. If you don't trust them with access to these systems, use a tool that you &lt;em&gt;do&lt;/em&gt; trust to mirror the content selectively and feed it to them.&lt;/p&gt;
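    &lt;p&gt;A minimal sketch of that last option in Python: pull one story with a read-only token via Jira's REST API and hand the text to the bot. (The URL, token, and issue key are placeholders; Jira Cloud wants basic auth with an API token rather than a bearer token.)&lt;/p&gt;
    &lt;pre&gt;&lt;code class="language-python"&gt;# sketch: mirror a Jira story selectively with a read-only token.
# The URL, token, and issue key are placeholders.
import requests

JIRA = "https://jira.example.com"
TOKEN = "read-only-pat"

def fetch_issue(key: str) -&gt; str:
    resp = requests.get(
        f"{JIRA}/rest/api/2/issue/{key}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    fields = resp.json()["fields"]
    return f"{key}: {fields['summary']}\n\n{fields.get('description') or ''}"

print(fetch_issue("PROJ-123"))
&lt;/code&gt;&lt;/pre&gt;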
   &lt;ul&gt;
    &lt;li&gt;Corollary: &lt;strong&gt;You need to actually write good Jira stories&lt;/strong&gt; and design documents, not just slop them out. Consult the AI regularly if you wish (especially to check that your view of reality is consistent across documents), but strongly consider producing the design document without having the AI generate anything at all.
     &lt;ul&gt;
       &lt;li&gt;Likewise, READMEs are not generally something I ask my bot to do -- but whether or not you use a bot for this or other documents, actually think about the information architecture of what you are trying to write. A README for humans will put the most important and interesting parts up front, laying the foundation for understanding, pointing out surprises and pitfalls, and then giving you a quickstart. A README generated by a bot, with today's default settings and no additional instruction, will typically read like a low-effort regurgitation of what might otherwise be API documentation, mixed with a blog about all the minutiae of the development process, including all the things you didn't tell it to do -- and then present a power set of all the combinations of settings that literally no one will ever read. (It'll have overproduced Markdown, though.)&lt;/li&gt;
      &lt;li&gt;I've asked my bots not to save many of their remarks on design for their commit messages, and to draw attention to certain details -- of key decisions, surprises, or anything potentially questionable -- by posting comments on the merge requests they open.
       &lt;aside&gt;
        My bot operates with a Gitlab group token, mostly for reduced permissions, but it also makes it quite clear what level of thinking is going on. &lt;em&gt;Bonus fun times&lt;/em&gt;: asking my bot to justify some engineering choices (which it had already articulated to me during the planning stage) in reply to my manager when he was second-guessing an engineering tactic. (I read it before I permitted it to post that, of course.)
       &lt;/aside&gt;&lt;/li&gt;
      &lt;li&gt;I've also asked my bots sometimes to mark their contributions to code comments and the like with a symbol (e.g. ˚) to help indicate that they are AI-generated, and don't necessarily mean a conscious human design decision. In other places, like when they are helping move information between Jira and design documents and a separate bug tracking system, I've instructed it to label its thought as "BOT ANALYSIS:" to make that clear as well.&lt;/li&gt;
     &lt;/ul&gt;&lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
   &lt;p&gt;It is as important as ever that the systems you design should be &lt;strong&gt;simple and robust&lt;/strong&gt;.  (This is, of course, much harder than designing systems that are complex and fragile.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
   &lt;p&gt;Besides the design of systems, invest in &lt;strong&gt;the structure of the code itself&lt;/strong&gt;. Apply SOLID principles and other design philosophies that you feel are appropriate for your field. Write the key code yourself, demonstrating how the pieces should fit together, and let the AI run from there. (This supports any number of other best practices around quality.)&lt;/p&gt;
   &lt;ul&gt;
    &lt;li&gt;And again, good engineering definitely isn't about adding more abstractions: it's about using &lt;strong&gt;the right abstractions&lt;/strong&gt;, and making the abstractions you do use light. Simplicity beats out complexity any day of the week.&lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Invest in tools.&lt;/strong&gt; Start with the existing tools: linters, code formatters, vulnerability scanners, static analysis, duplication finders, fuzzers, test coverage tools, the adversarial tools that try to break your code in a way that leaves the tests passing... if a tool exists in your language's ecosystem, you should be running it in your build pipelines. Enforce limits on cyclomatic complexity! Expose it all to your agent's coding loop. Have the AI run fuzzers on your code regularly. The excuse that it is "too expensive" to do these things right is gone now, right?&lt;/p&gt;
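    &lt;p&gt;To make one of those concrete: a property-based test with Python's Hypothesis, which searches for inputs that falsify your invariant instead of replaying the three cases a human thought up. (The merge function here is just an example subject.)&lt;/p&gt;
    &lt;pre&gt;&lt;code class="language-python"&gt;# a property-based test: Hypothesis generates adversarial inputs and
# shrinks any counterexample it finds down to a minimal failing case.
from hypothesis import given, strategies as st

def merge_sorted(a: list[int], b: list[int]) -&gt; list[int]:
    out, i, j = [], 0, 0
    while i &lt; len(a) and j &lt; len(b):
        if a[i] &lt;= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_matches_sort(a, b):
    assert merge_sorted(sorted(a), sorted(b)) == sorted(a + b)
&lt;/code&gt;&lt;/pre&gt;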
   &lt;ul&gt;
    &lt;li&gt;
      &lt;p&gt;In some cases, your tools will work against you and you'll be stuck with them. Helm, in particular, is built all wrong &lt;span class="aside"&gt;(text templating?! text templating, of such lovely well-structured schema-oriented APIs!?!? Philistines!)&lt;/span&gt; and is unsound (it can produce output that is structurally invalid -- it's quite easy, actually). The primary value it delivers is its ubiquity — and that's the stinger, because if your ops team or customers have processes built around Helm charts, you might not have great options to ship alternatives. This is a grave misfortune, and as much as I appreciate being at a powerful company, I don't think I'm personally in a position to dictate new tools to&amp;nbsp;all the other teams here, let alone to the whole industry.&lt;/p&gt;
     &lt;p&gt;What I might be able to do instead is to design tools to mitigate these flaws. Is there some new process I could imagine that lets me enforce the quality I want, while still dealing with Helm charts and upstream-vendor Helm charts? Because if I can imagine it, writing it is cheaper than ever.&lt;/p&gt;
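      &lt;p&gt;As a sketch of the kind of gate I have in mind -- nothing official, just something you could bolt onto a pipeline -- render the chart and refuse any output that isn't structurally valid YAML carrying the basic Kubernetes fields. (A real version would go further, e.g. full schema validation with a tool like kubeconform.)&lt;/p&gt;
      &lt;pre&gt;&lt;code class="language-python"&gt;# sketch of a Helm quality gate: render the chart, then insist that every
# document parses and carries the basic Kubernetes fields.
import subprocess, sys
import yaml  # PyYAML

def check_chart(chart_dir: str) -&gt; None:
    rendered = subprocess.run(
        ["helm", "template", chart_dir],
        capture_output=True, text=True, check=True,
    ).stdout
    for doc in yaml.safe_load_all(rendered):  # raises on malformed YAML
        if doc is None:
            continue  # empty documents are legal
        for field in ("apiVersion", "kind", "metadata"):
            if not isinstance(doc, dict) or field not in doc:
                sys.exit(f"manifest missing {field}: {str(doc)[:120]}")

check_chart("./charts/mychart")  # placeholder path
&lt;/code&gt;&lt;/pre&gt;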
    &lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Switch to programming languages that offer better tools&lt;/strong&gt; to enforce quality. Type systems are entry-level stakes now, as are schemas for your data interchange formats. Unit-testing your code is an absolute must. If your blind savant AI happens to break something, you'll both be happier learning what that something is before it breaks production.&lt;/p&gt;
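    &lt;p&gt;Even in Python-land you can buy some of this cheaply. A minimal sketch with pydantic (the model and field names are invented for illustration): the schema is an executable contract, so malformed input fails loudly at the boundary instead of three layers down:&lt;/p&gt;
    &lt;pre&gt;&lt;code class="language-python"&gt;# schemas as executable contracts: bad input fails at the boundary.
# VolumeSpec and its fields are invented for illustration.
from pydantic import BaseModel, ValidationError

class VolumeSpec(BaseModel):
    name: str
    size_gib: int
    replicas: int = 3

try:
    VolumeSpec.model_validate({"name": "scratch", "size_gib": "lots"})
except ValidationError as e:
    print(e)  # size_gib: Input should be a valid integer
&lt;/code&gt;&lt;/pre&gt;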
   &lt;ul&gt;
    &lt;li&gt;
      &lt;p&gt;Have you considered &lt;strong&gt;Haskell&lt;/strong&gt;? ... actually, I hear from the people who actively use it (and like it) that the tooling for running it in production is somewhat limited and behind where they'd like it to be, and if you wanted to avoid that, fair. Nevertheless, consider: a programming language that can provide language-level &lt;strong&gt;guarantees that an arbitrary method call does not erase the production database&lt;/strong&gt;.&lt;/p&gt;
     &lt;p&gt;Could you use its power to help your AI deliver better code?&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
      &lt;p&gt;Have you heard about a programming language called &lt;strong&gt;Rust&lt;/strong&gt; that is really fast, and also makes it really hard to write several important forms of invalid code, like memory errors? Probably you have -- but maybe it's a little bit of a learning curve, and you have an existing software base, and your engineers may be less familiar.&lt;/p&gt;
     &lt;p&gt;If you're using AI to write your code, is using it feasible now? What does the team adopting it look like now? Can you figure out how to get AI to do a rewrite of your existing codebase, in a manner that you'd trust?&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;How about Erlang? OCaml? Have you heard of SPARK, a dialect of Ada? Lean, Rocq/Coq, Idris? If you're sticking to something more conventional, how about Prusti and Flux with Rust? I have never in my career worked for an employer that does &lt;strong&gt;formal verification&lt;/strong&gt; for anything more than a very small subset of their code, if that.&lt;/p&gt;
     &lt;p&gt;Maybe now is the time for people to start.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;If you're doing web development, or have an existing NodeJS codebase, and you're using plain ES6 instead of TypeScript: I'm just gonna come out and say that it's probably time to change that.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;For heaven's sake, &lt;strong&gt;stop having your AIs write so many shell scripts.&lt;/strong&gt; You can do better. You're working on your build system and you want to do SemVer tagging of the repository? Yes, you can in fact write that in shell. But you could just as well have your AI do it all in rust, or golang, or what-have-you, and use the in-language Git API instead of shelling out, and have really, really nice unit tests.&lt;/p&gt;
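      &lt;p&gt;A sketch of the shape in Python, using GitPython as the in-language API (swap in go-git or git2-rs to taste): the bump logic becomes a pure function you can unit-test to death.&lt;/p&gt;
      &lt;pre&gt;&lt;code class="language-python"&gt;# sketch: SemVer patch-bump tagging via an in-language Git API (GitPython)
# instead of a shell pipeline; next_patch is a pure, testable function.
import re
from git import Repo  # GitPython

def next_patch(tags: list[str]) -&gt; str:
    versions = [tuple(map(int, m.groups()))
                for t in tags
                if (m := re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", t))]
    major, minor, patch = max(versions, default=(0, 0, 0))
    return f"v{major}.{minor}.{patch + 1}"

repo = Repo(".")
tag = next_patch([t.name for t in repo.tags])
repo.create_tag(tag, message=f"release {tag}")
&lt;/code&gt;&lt;/pre&gt;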
     &lt;ul&gt;
      &lt;li&gt;Were you going for Bash as the least-common-denominator because it's hard to distribute this tool to your development environments? Okay, I understand that pain, but... &lt;strong&gt;Nix fixes this.&lt;/strong&gt; Speaking of which...&lt;/li&gt;
     &lt;/ul&gt;
    &lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Switch to better build systems&lt;/strong&gt;. Did you want to switch to Nix or Bazel and immutable builds, but you were worried it would take too long and be too hard? You have an AI for this now. You don't need a month of your engineer's time to do the busy work. Give your agents, deep in VMs and sandboxes, the same tools you'd give your developers and your build pipeline.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
   &lt;p&gt;&lt;strong&gt;Think really, really hard about your people-driven processes,&lt;/strong&gt; and the incentives your people are facing. If your firm rewards people who foist off slop on the rest of the team, then you're asking everyone to play a game where the winner is the one who makes the sloppiest slop the fastest, and you will be competing against other firms that do the same, and the next thing you know people are making fun of your code for always crashing, or you're looking at a single nine of uptime if you're lucky.&lt;/p&gt;
   &lt;ul&gt;
    &lt;li&gt;
     &lt;p&gt;In the past, some companies have hired smart engineers, and frustrated them by rewarding the mavericks and giving short shrift to the people who actually kept things running after they left. Everything bad about this dynamic is going to be worse with AI. You can't expect to have the engineers save management from all their own worst instincts anymore. You will get what you pay for; you will get it good and hard. If you will not listen to your engineers, you will hear the news from the market.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;If you do not focus like a laser on a culture of excellence, you will end up with a culture of slop.&lt;/strong&gt; The drift towards slop will only accelerate -- at least until such time as the AI is doing absolutely everything without you anyway. Today, quality either goes first, every time, or slop and indifference will win, and eventually everything will fall apart.&lt;/p&gt;
      &lt;p&gt;Say no to cutting corners and shortcuts. After all, you have AI now; you should have the velocity to achieve this &lt;em&gt;without&lt;/em&gt; gross compromises.&lt;/p&gt;
     &lt;ul&gt;
      &lt;li&gt;
       &lt;p&gt;The true mark of excellence in your engineers will be that they care about understanding the systems and getting it right.&lt;/p&gt;
       &lt;p&gt;Your firm will either have capable leadership reward them, or you will let them go for not being sloppy enough, and reap the consequences.&lt;/p&gt;
      &lt;/li&gt;
     &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;Normalizing deviance was always bad, and now it is worse than ever. Build failures on &lt;code&gt;main&lt;/code&gt; and test failures should be rare, and always an occasion to take action. Warning outputs? Likewise. It is hard enough for humans to reason about the state of a system that's half-broken.&lt;/p&gt;
     &lt;p&gt;Use your AI to clean up those warnings so that it's not sifting through them later while looking for something else under time pressure.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;Code review, design review, and quality assurance are the &lt;em&gt;defining&lt;/em&gt; parts of your team's processes going forward. Design everything your team does around these parts; throw out and remake any process that gives them short shrift.&lt;/p&gt;
    &lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
   &lt;p&gt;Broadly speaking: &lt;strong&gt;think about the big picture, and be ambitious.&lt;/strong&gt; A lot of people are approaching AI merely as "a tool that your developers interact with in their IDE." This is tunnel vision; this is yesterday's model. Think autonomous. You may well have a devil of a time making it all happen, but keep it on your roadmap, let it inform your plans.&lt;/p&gt;
   &lt;p&gt;If you're having trouble imagining, start with a vision of an agent that watches your build pipelines, your alert systems, your logs &lt;span class="aside"&gt;(if they're trustable, i.e. attackers can't put strings there)&lt;/span&gt;, anything automated — and as soon as there's a problem, it starts work on a bugfix, and opens an MR. That is table stakes for the future.&lt;/p&gt;
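    &lt;p&gt;Here's the skeleton of that watcher in Python against Gitlab's pipelines API. The project URL and token are placeholders, and the dispatch is a stub, because that's where your own orchestration lives:&lt;/p&gt;
    &lt;pre&gt;&lt;code class="language-python"&gt;# skeleton of a pipeline watcher: poll for failed pipelines, hand each new
# one to an agent. URL and token are placeholders; dispatch_agent is a stub.
import time
import requests

API = "https://gitlab.example.com/api/v4/projects/1234"
HEADERS = {"PRIVATE-TOKEN": "read-only-token"}
seen: set[int] = set()

def dispatch_agent(pipeline: dict) -&gt; None:
    ...  # pull logs, start a sandboxed agent session, have it open an MR

while True:
    resp = requests.get(f"{API}/pipelines", headers=HEADERS,
                        params={"status": "failed", "per_page": 20})
    resp.raise_for_status()
    for p in resp.json():
        if p["id"] not in seen:
            seen.add(p["id"])
            dispatch_agent(p)
    time.sleep(60)
&lt;/code&gt;&lt;/pre&gt;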
  &lt;/li&gt;
 &lt;/ul&gt;
 &lt;aside&gt;
  &lt;h3 id="make-use-of-your-best-critics"&gt;make use of your best critics&lt;a class="flower-anchor" href="#make-use-of-your-best-critics"&gt;&lt;/a&gt;&lt;/h3&gt;
  &lt;p&gt;An interesting corollary arises from all this. Suppose you have someone on your team who is just a bit of a curmudgeon, the team grumpy cat, an old salt who's conservative about the current ways of doing things and is always telling you what you're losing by doing something new? You may have an opportunity.&lt;/p&gt;
   &lt;p&gt;Put this person at the center of your AI transformation. Let them propose any measure at all to ensure quality, their most fantastical dreams, so long as the team can get the AI to pay for most of it. (That simulator you always wanted to build, but never had the time allocated for? Done.) The better their reasons for AI-skepticism, the more you need their input to keep things on track. Make this person your right-hand man.&lt;/p&gt;
  &lt;p&gt;And from a people-perspective, this is also a way to help get buy-in; &lt;a href="https://www.youtube.com/watch?v=CLQ0LZSnJFE"&gt;even Mikey likes it&lt;/a&gt;.&lt;/p&gt;
 &lt;/aside&gt;
 &lt;h2 id="contraindications"&gt;Contraindications&lt;a class="flower-anchor" href="#contraindications"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;ul&gt;
  &lt;li&gt;
   &lt;p&gt;A caveat for my list of recommendations: I currently work on software that is distributed to customers via binaries, containers, Helm charts, and the like, where you can't upgrade it easily, and which (generally speaking) does not run on systems owned by my team. Failure is costly.&lt;/p&gt;
   &lt;ul&gt;
    &lt;li&gt;
      &lt;p&gt;If you are working on software that is hosted, running as a service, there may be other valid approaches to all this; I am given to understand (from reading articles online) that Anthropic hooks up some pretty autonomous agents to a variety of operational metrics, dashboards, and alerts -- all of which are their own form of best practice -- and possibly cares a good bit less than I need to care about the specifics of the code in question.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
     &lt;p&gt;I can reason a little about how quality would work in the world of services, but I'm not in the thick of things right now. Staged rollouts, blue-green deploys, canary environments, qualification of new code on mirrors of production data-streams, AI chaos-monkeys trying to hack your site...&lt;/p&gt;
    &lt;/li&gt;
   &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
   &lt;p&gt;There remains significant economic merit in simply being first to market, and growing the fastest, even if corners are cut -- and simply cleaning up afterwards. It's a lot harder to clean up a system than to keep a system clean, but success means you can afford it, and maybe AI makes that cheaper too.&lt;/p&gt;
   &lt;p&gt;On the other hand, it's really, really hard to make that kind of transformation &lt;em&gt;culturally&lt;/em&gt;, and really hard to have humans understand systems that no human understands. &lt;strong&gt;As long as we have people, the people-problems will always be the biggest challenge.&lt;/strong&gt; (And if AIs some day have this discipline themselves, and make software virtually free, you won't have any margins anyway.)&lt;/p&gt;
  &lt;/li&gt;
 &lt;/ul&gt;
 &lt;aside&gt;
  &lt;h3 id="food-for-thought"&gt;Food for thought&lt;a class="flower-anchor" href="#food-for-thought"&gt;&lt;/a&gt;&lt;/h3&gt;
  &lt;p&gt;"The people-problems will always be the biggest challenge," I say, so how do you structure your team to mitigate that challenge?&lt;/p&gt;
  &lt;p&gt;&lt;span class="small"&gt;You can contemplate ways to use AI to help improve collaboration, but you should &lt;em&gt;definitely&lt;/em&gt; be contemplating non-AI-centric approaches too.&lt;/span&gt;&lt;/p&gt;
 &lt;/aside&gt;
 &lt;h2 id="in-summary"&gt;in summary&lt;a class="flower-anchor" href="#in-summary"&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;p&gt;Code is cheap now? Act like it, and focus on other things that matter more -- like team culture, QA, and real engineering.&lt;/p&gt;
&lt;/div&gt;</content>
    </entry>


</feed>