<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on Non-Functional Blog</title><link>https://non-functional.net/tags/ai/</link><description>Recent content in AI on Non-Functional Blog</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 30 Apr 2026 14:05:54 +0100</lastBuildDate><atom:link href="https://non-functional.net/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Corporate slop</title><link>https://non-functional.net/posts/2026-04-30-corporate-slop/</link><pubDate>Thu, 30 Apr 2026 14:05:54 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-30-corporate-slop/</guid><description>&lt;p&gt;A great article from the always authentic and enjoyable &lt;em&gt;World&amp;rsquo;s Greatest Newsletter&lt;/em&gt; (ignore the name) by the Raw Signal Group.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.rawsignal.ca/newsletter-archive/the-people-who-care-are-having-the-hardest-time/" target="_blank" rel="noreferrer"&gt;&amp;ldquo;Businesspeople, we&amp;rsquo;ve entered a weird moment when caring about the organization and your craft is a liability. And when pressed for details on why caring less seems appealing, the answers are dark.&amp;rdquo;&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Incident Residue</title><link>https://non-functional.net/posts/2026-04-23-incident-residue/</link><pubDate>Thu, 23 Apr 2026 23:41:35 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-23-incident-residue/</guid><description>&lt;p&gt;I&amp;rsquo;ve been thinking for a while about how incident response is going to
change, and how it has already changed since the pre-ML days.
&lt;a href="https://www.linkedin.com/in/toddunder/" target="_blank" rel="noreferrer"&gt;Todd Underwood&lt;/a&gt; did a great
chapter in &lt;a href="https://www.oreilly.com/library/view/reliable-machine-learning/9781098106218/ch11.html" target="_blank" rel="noreferrer"&gt;Reliable Machine
Learning&lt;/a&gt;
which tried to illustrate how IR changes in the modern world. In
brief, it becomes harder both to investigate what&amp;rsquo;s going on and to follow the standard
troubleshooting approach of building a mental model of what&amp;rsquo;s happened
when you no longer have a causally strong relationship between actions and outcomes.
It&amp;rsquo;s also going to involve a lot more coordination between different groups, as ML will
typically pull in data from across the business to an unprecedented extent.&lt;/p&gt;
&lt;p&gt;But I came across this today - thanks to &lt;a href="https://www.linkedin.com/in/dobbse" target="_blank" rel="noreferrer"&gt;Eric
Dobbs&lt;/a&gt; in
&lt;a href="https://resilienceinsoftware.org/" target="_blank" rel="noreferrer"&gt;RISF&lt;/a&gt; - which talks about one
likely feature of the future that hasn&amp;rsquo;t gotten much attention outside
leading edge circles, and that&amp;rsquo;s the fact that as AI SRE systems
hoover up the easier tasks, the harder tasks will be the only ones
that are left: the &lt;a href="https://www.linkedin.com/pulse/what-ai-incident-response-leaves-behind-uptime-labs-tmdve/" target="_blank" rel="noreferrer"&gt;&amp;ldquo;left behind&amp;rdquo;
issue&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Most folks who look at this have pointed out that as the easier issues
go away, it&amp;rsquo;s harder to train on what remains, and (modulo learning
styles) I think that&amp;rsquo;s true; what I think is less explored is how IR changes
when you actually &lt;em&gt;can&amp;rsquo;t&lt;/em&gt; construct a model of how the system works by
asking a sufficiently aware human. We will, in short, become dependent on
the same tools that created the additional complexity to penetrate and
resolve that complexity in real time, every time there&amp;rsquo;s an incident.&lt;/p&gt;
&lt;p&gt;We should bear that in mind when we think about how to staff, and what
to pay for, in the domain of incident response. The stuff that&amp;rsquo;s left
behind - the incident residue - is the stickiest of all.&lt;/p&gt;</description></item><item><title>Komodor doing an AI SRE summit</title><link>https://non-functional.net/posts/2026-04-22-komodor-doing-an-ai-sre-summit/</link><pubDate>Wed, 22 Apr 2026 18:47:19 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-22-komodor-doing-an-ai-sre-summit/</guid><description>&lt;p&gt;The AI SRE space is, as of the time of writing, absolutely insane. At some point in 2025, I counted the number of
players and the amount of money rushing into the space - over 20 players and more than a billion dollars, if you counted
all the funding numbers I&amp;rsquo;d found plus what incumbents in e.g. Cloud said they were going to invest in the space.
It may well turn out to be one of those situations where it&amp;rsquo;s easy to make a &lt;em&gt;prima facie&lt;/em&gt; argument that the problem
space is big, almost everyone &amp;ldquo;suffers from it&amp;rdquo;, and that it&amp;rsquo;s easy to make progress (given the current state of agentic development, and so on),
but it&amp;rsquo;s quite hard to deliver something that actually makes a difference and, more importantly, that isn&amp;rsquo;t just everyone
else&amp;rsquo;s three foundation models in a trenchcoat.&lt;/p&gt;
&lt;p&gt;Earlier in my career there were very similar conversations about mobile phone providers (really operators), who quickly
became seen as essentially commoditised - everyone would pick from a similar set of network gear provided by a small set
of manufacturers, the handsets were mostly commoditised, and so on. Ultimately they did what a lot of businesses in similar
positions did, which is to attempt to differentiate themselves on price, branding/marketing, or customer service. There may well be a similar
effect playing out in this market too.&lt;/p&gt;
&lt;p&gt;In unrelated events, I see that Komodor are organising &lt;a href="https://komodor.com/ai-sre-summit-2026/" target="_blank" rel="noreferrer"&gt;an AI SRE summit&lt;/a&gt; and that
the speaker list looks interesting, though I wonder precisely how vendor neutral it&amp;rsquo;s going to be.&lt;/p&gt;</description></item><item><title>Bot traffic on the web</title><link>https://non-functional.net/posts/2026-04-21-bot-traffic-on-the-web/</link><pubDate>Tue, 21 Apr 2026 21:22:22 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-21-bot-traffic-on-the-web/</guid><description>&lt;p&gt;From college mate Ian&amp;rsquo;s time at a &lt;a href="https://world.hey.com/ian.mulvany/cloudflare-connect-on-tour-london-notes-69a37b5a" target="_blank" rel="noreferrer"&gt;Cloudflare
session&lt;/a&gt;,
we learn that bot traffic is 50% of overall web traffic, and AI agent traffic is circa 7%.&lt;/p&gt;
&lt;p&gt;It seems likely both of those numbers will go up.&lt;/p&gt;</description></item><item><title>The Revenge of K-shaped Engineering</title><link>https://non-functional.net/posts/2026-04-21-the-revenge-of-k-shaped-engineering/</link><pubDate>Tue, 21 Apr 2026 16:23:42 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-21-the-revenge-of-k-shaped-engineering/</guid><description>&lt;p&gt;The incomparable &lt;a href="https://ethanding.substack.com/p/claude-code-is-not-making-your-product" target="_blank" rel="noreferrer"&gt;Ethan Ding on the disjunction&lt;/a&gt;
that most of us are feeling right now. Claude speeds up certain things - quite a lot - but it also slows us down. We need to
have a more accurate model of what&amp;rsquo;s happening to software, and this is an accessible primer on one possible scenario.&lt;/p&gt;
&lt;p&gt;(I also loved &lt;a href="https://ethanding.substack.com/p/levered-beta-is-all-you-need" target="_blank" rel="noreferrer"&gt;his piece on levered beta&lt;/a&gt;, which I think played out
essentially as he wrote.)&lt;/p&gt;</description></item><item><title>SRE Book Second Edition Early Release</title><link>https://non-functional.net/posts/2026-04-20-sre-book-second-edition-early-release/</link><pubDate>Mon, 20 Apr 2026 19:06:49 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-20-sre-book-second-edition-early-release/</guid><description>&lt;p&gt;As an author, I strongly dislike O&amp;rsquo;Reilly&amp;rsquo;s Early Release model,
since my stuff gets poked at before it&amp;rsquo;s ready. As a reader, I
strongly like O&amp;rsquo;Reilly&amp;rsquo;s Early Release model, since I can poke at
other people&amp;rsquo;s stuff before it&amp;rsquo;s ready!&lt;/p&gt;
&lt;p&gt;O&amp;rsquo;Reilly&amp;rsquo;s Safari platform is hosting the latest chapters on &lt;a href="https://www.oreilly.com/library/view/site-reliability-engineering/9798341607675/ch03.html" target="_blank" rel="noreferrer"&gt;STPA&lt;/a&gt; and &lt;a href="https://www.oreilly.com/library/view/site-reliability-engineering/9798341607675/ch04.html" target="_blank" rel="noreferrer"&gt;AI for SRE&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Opus 4.7 Model Card and Mythos Preview</title><link>https://non-functional.net/posts/2026-04-20-opus-4-7-model-card-and-zvi/</link><pubDate>Mon, 20 Apr 2026 18:56:03 +0100</pubDate><guid>https://non-functional.net/posts/2026-04-20-opus-4-7-model-card-and-zvi/</guid><description>&lt;p&gt;I strongly suspect that Zvi doesn&amp;rsquo;t need more inbound links, but his latest
&lt;a href="https://thezvi.substack.com/p/opus-47-part-1-the-model-card" target="_blank" rel="noreferrer"&gt;model card assessment&lt;/a&gt;
(which is, as usual, very well written) has a couple of notable quotes:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;So yeah, none of that sounds great. It all sounds like the types of thing that, if you caught a human doing them even once, that would be a very bad sign, and in several cases you would obviously have to fire them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Check out the examples of Mythos Preview attempting (and in some cases succeeding, only to be caught by the human at the last moment) to escape containment.&lt;/p&gt;</description></item></channel></rss>