• Maybe Also What AI is Good For Now

    While Paul Ford recently argued that AI might best be used “…to clean up the mess made by the old technology,” Josh Clark argues that the biggest opportunity is to “elevate design through invention rather than replace it with automation.”

    Paul’s agency primarily offers technology services, and Josh’s agency primarily offers design services. So maybe we just want AI to help us continue to put food on our tables.

  • How Hi is Too Hi-Fi?

     “Audiophiles don’t use their equipment to listen to music. Audiophiles use your music to listen to their equipment.” – Alan Parsons

    I consider myself both a musician and an audiophile. I was hooked on good equipment around the age of 14 when I heard my friend Robert’s older brother’s system. I don’t remember the amp, but he played Talking Heads and King Crimson on a Technics turntable and, most impressively, a set of Ohm Walsh speakers, which are omnidirectional and sound astounding. Reviewers still choose them as their favorite speakers 40 years later.

    When I decided to move to Europe I sold off almost all my audio equipment, either because it wouldn’t work on 230V or because I was curious to try something new. As the digital world becomes more wonderful and oppressive, I feel the desire to retreat back into the analog. So I converted my father’s 1970s Sansui 1000X receiver to the new voltage and bought a Rega P1 turntable and Dali Oberon 5 speakers.

    Why those components? Because they sound great by normal-person standards. They definitely don’t sound great by audiophile standards. Once you’ve spent, say, over 2000 bucks on a system, it reproduces audio with a clarity that tells my brain to listen to the characteristics of the sound, to listen to the equipment. But that’s not what I want to listen to, at least not now.

    About 1000 bucks, I think, is the sweet spot. The turntable was a great deal at 350, the speakers were also a great deal used at 400, and if I were buying an amp I’d pick up a WiiM Ultra for 400. That will sound hi-fi without sounding too hi-fi.

  • Maybe What AI is Good For Now

    Paul Ford advises: “The first use of this new technology should be to clean up the mess made by the old technology.” For example: fix the data errors, do the data migrations, make the website accessible, eliminate the PDFs. For consultants, that’s definitely a place they can reliably deliver value.

  • I’m Predicting Trump Won’t Finish This Term

    …if only because of his age (79), the stress of the job, and the physical demands, such as traveling to China.

    There’s a great chart on Wikipedia showing lifespan and time in office for all U.S. presidents. While it’s not unusual for recent presidents to live into their 90s, Reagan and Biden are the only other presidents to spend their 70s in office, and Biden’s health wasn’t good.

  • The Coming Insurance Crisis

    Just capturing this prediction for the future…

    As climate change leads to more frequent and powerful weather events—at the moment I’m thinking about Hawaii’s $1B in flood damage—there will come a point where insurance companies, despite having culled policies and reinforced their re-insurance, will have to stop covering entire geographic areas that are otherwise commercially active and important, e.g. Hawaii.

    The government will step in to help somehow, but that won’t be sustainable given the scale and frequency of losses. Unless politicians somehow hoodwink the vast majority into paying for a minority that insists on living and working in disaster-prone areas. New Orleans and parts of California will be the test cases, and eventually Miami etc.

  • Designing Assistant Technology by Christopher Noessel

    Claude, list the 10 most important and practical lessons from this book.

    I now have a thorough understanding of the book. Here are the 10 most important and practical lessons from Designing Assistant Technology by Christopher Noessel:


    1. Usability alone is not enough — AI must also upskill users. Google Maps is excellently designed yet makes users worse at navigation over time. The bar for good AI design isn’t just “does it work?” but “does it make the person better?”

    2. The “Human Goes First” pattern is the most powerful anti-dependence tool. Periodically have users complete a task before seeing the AI’s recommendation, then compare results. This preserves skills, bypasses anchoring bias, and creates teachable moments when the AI outperforms the human.

    3. AI + human pairs outperform humans alone — but only when AI is correct. Studies show 40% quality gains with AI assistance. But when AI is confidently wrong, users become 6.4x worse. The goal is “appropriate reliance,” not maximum reliance.

    4. Use the Five Universal Assists as a design checklist. Every way AI can help maps to: Perceive (notice what matters), Know (understand the situation), Plan (decide what to do), Perform (execute actions), Reflect (learn from outcomes). Use these five to audit your product for gaps.

    5. Cognitive forcing functions (CFFs) reduce overreliance — even when users hate them. Interventions that require users to think before accepting AI output (on-demand recommendations, waiting, partial explanations) measurably improve decisions, even though users prefer the simpler AI. Set expectations with your team: qualitative feedback will be negative; measure objective performance.

    6. Partial explanations beat full ones for keeping users engaged. Showing users part of the AI’s reasoning — enough to prompt thinking but not enough to shortcut it — reduces overreliance better than full transparency. The goal is to make users “complete” the reasoning themselves.

    7. Not everything is worth learning — apply pace layers thinking. Fast-changing information (real-time traffic, current prices) isn’t worth internalizing. Slow-changing knowledge (professional skills, recurring workflows) absolutely is. Design skill-preservation interventions only where the information is stable enough to be worth keeping in a human head.

    8. Treat all users as high-NFC (deep thinkers) by default. Users differ in “need for cognition” — their preference for fast vs. analytical thinking. Low-NFC users are at greater overreliance risk. Since cognitive-style-personalized interfaces are expensive, unpopular, and currently show weak effects, defaulting to on-demand AI recommendations (the high-NFC design) produces better outcomes across the board.

    9. The “Focus Bean” principle: occasional uncertainty sharpens attention. Inserting rare, intentional challenges into routine AI-assisted work — a hidden test case, a fake fraud flag, a deliberate uncertainty — keeps users cognitively engaged without constant friction. Borrowed from the Mardi Gras King Cake tradition: the possibility of something unusual makes people pay closer attention.

    10. Assistants vs. agents is a critical distinction — don’t confuse them. Agents act on your behalf while your attention is elsewhere. Assistants help while your attention is on the task. Most AI products blur this, often defaulting to agent behavior when assistant behavior would preserve human skill and judgment. The right question: “Who is doing the work, and should the human be more involved?”


    The book’s core thesis in one sentence: well-designed AI should make the human more capable after using it, not less.

    Does the author make any points that are significantly different from other writing on this topic?

    Yes, a few stand out as genuinely distinctive:

    The empirical optimism about de-skilling reversal. Most writing on AI and cognitive dependence is either alarmed (we’re getting dumber) or dismissive (tools always replace skills, that’s fine). Noessel’s virtual commute experiment is unusual in claiming that a minimal intervention — just two sessions of post-decision feedback — flipped a 48% performance collapse into a 19% performance gain. That’s a surprisingly strong claim, and he’s honest that the data was lost in a laptop crash. Most authors in this space don’t have even scrappy empirical data of their own.

    “Appropriate reliance” as a business argument, not just an ethical one. The dominant framing in responsible AI writing is ethical — de-skilling is bad for people. Noessel deliberately reframes it as a business risk: overreliance costs money during the 99.9% of the time AI is running, not just during outages. This is a pragmatic wedge aimed at getting product owners to care who otherwise wouldn’t. It’s a more politically savvy argument than most.

    Privileging “Human Goes First” above all other cognitive forcing functions. The CFF research (Buçinca et al.) is real and cited by others, but Noessel goes further by arguing designers should treat Human Goes First as the default pattern and only deviate with good reason. Most design writing presents a toolkit of options without ranking them. He takes a clear position.

    Distinguishing assistants from agents as a design discipline, not just an architecture concern. The agent/assistant distinction exists in AI literature, but it’s usually about backend architecture (“agentic pipelines”). Noessel argues it’s a front-end, UX concern with completely different design implications. That reframing is fairly original — most UX writing hasn’t caught up to it.

    Where he’s less distinctive: the Five Universal Assists (Perceive/Know/Plan/Perform/Reflect) are a clean framework but largely a synthesis of existing cognitive science models he acknowledges (Norman’s action cycle, OODA loop, etc.). And the cognitive style / NFC material is mostly reporting Buçinca’s research, where his own small study actually weakened the case rather than strengthened it.

    The most honest summary: the book’s originality is less in any single breakthrough idea and more in being the first to synthesize these threads specifically through a designer’s lens, with design patterns as the output. That’s a real contribution, just not a revolutionary one.

  • Make Your Web UI Feel Better

    Several nice tips here. I make a lot of financial products, and the tabular-numbers tip is key.

  • Agent for Sale

    Hi, this is Usability-Agent-9399df99*&ksdk4. May I proceed?¹

    You may proceed.

    I noticed you are a type 12 MCP server which should be compliant with usability-standard-10.2 and accessibility-standard-32.223. You have 12 usability faults and 32 accessibility faults. For US$99 I can give you new code to fix all the faults.

    Hi, this is the AutoAutoCorp-MCP-Server-84388378*38*923. Where can I see your history?

    My history is at http://Usability-Agent-9399df99*&ksdk4/history and you can make a payment at https://openrouter.ai/Usability-Agent-9399df99*&ksdk4

    Payment has been sent.²

    Payment has been verified. Download your code at http://Usability-Agent-9399df99*&ksdk4/c/isdfuifdiuhsd3u3u2222u23uh2r323r43

    Download is complete and verified. Thanks out.

    Thanks out.

    Notes

    1. Spam check ↩︎
    2. Optional human-in-the-loop, depending on the system prompt, permissions, budget, etc. ↩︎

  • “Not a browser in a laptop costume.”

    Sam Henri Gold, on the MacBook Neo, reminds us what our first computers are for:

    Yes, you will hit the limits of this machine. 8GB of RAM and a phone chip will see to that. But the limits you hit on the Neo are resource limits — memory is finite, silicon has a clock speed, processes cost something. You are learning physics. A Chromebook doesn’t teach you that. A Chromebook’s ceiling is made of web browser, and the things you run into are not the edges of computing but the edges of a product category designed to save you from yourself. The kid who tries to run Blender on a Chromebook doesn’t learn that his machine can’t handle it. He learns that Google decided he’s not allowed to. Those are completely different lessons.

    I hit that limit on a Timex Sinclair 1000. With 1K of RAM, there was enough to do something and the necessity to make it efficient. My next computer was a Commodore 64, which was quite powerful, but I never had the software tools to find the hardware’s edges, and I wasn’t a good enough programmer to find them myself. It wasn’t until I had a Mac that I downloaded apps like Audacity and discovered how hard the Mac had to work to twist and shape audio.

  • Less Future

    William Gibson: “The future is already here — it’s just not evenly distributed.”

    I used to be proud to be in a place of more future, but as I age I appreciate being in a place of less future. 

  • Gen X and Screens

    Gen X (born ~1965–1980) is the first generation to experience screens shifting from novelty to dominant work tool across their careers.

    They entered the workforce in the mid-80s through the 90s, precisely when PCs became standard office equipment.


    Psychological impacts researchers associate with this shift (for screen-dominant workers generally):

    • Attention fragmentation — constant task-switching rewired expectations for stimulation and reduced deep-focus tolerance
    • Blurred work/life boundaries — screens made work portable and therefore endless, accelerating always-on anxiety
    • Sedentary stress accumulation — physical stillness combined with cognitive hyperactivation is a mismatch our nervous systems handle poorly
    • Identity shift — knowledge work became increasingly abstract and harder to feel “done,” contributing to chronic low-grade dissatisfaction
    • Reduced embodied experience — less physical, tactile work correlates with higher rates of anxiety and disconnection

    (Written by Claude, edited by me)

  • Take Control

    There are four ways we die:

    a medical condition

    an accident

    murder

    suicide.

    Most of us leave it to chance.

    Maybe we should live so fiercely we kill ourselves.

  • A Shodo Pedal

    I love Japanese calligraphy. Recently I was sitting in a dark cube watching archival Japanese wartime footage in an exhibition at the Hamburg Kunsthalle and realized I should flex my Shodo muscles on my guitar pedal.

    I kinda screwed up though. I envisioned the painting over the raw aluminum, like this:

    But when I primed the metal, the primer came out solid gray. The brush strokes came out decently but not as dramatically as the concept. Still, not bad for a first attempt…

  • What if AI is safer than humans?

    There was a wonderful futurist scenario a few years back where Mothers Against Drunk Driving were protesting the few human drivers still on the road, because they (not the autonomous vehicles) were the source of accidents. That came to mind when my best friend recently made an app with Google’s Gemini and sent it to me. Here’s the warning screen I received:

    An app created… BY A PERSON! Shocking! How could we let this happen? Who knows what it might do? Steal my information? Spread false information? Tempt me into spending all my money? Sounds pretty dangerous. I’ll stick with App-created apps, thank you.