This recent Klarna story is fascinating:
The fintech firm Klarna is severing its relationships with two of the biggest enterprise software providers in favor of automating its services with AI. And the company says it could potentially eliminate more.
Klarna co-founder and CEO Sebastian Siemiatkowski recently explained the rationale in a conference call, the financial outlet Seeking Alpha reported. Klarna is no longer using Salesforce, a platform that aggregates and packages sales and marketing data for businesses. The company has also removed the HR and hiring platform Workday from its tech stack.
The skepticism in the article feels valid. But let’s pretend for a moment there’s something here. How might they “shut down Salesforce”? I imagine they could move their data elsewhere, sic an LLM on it, and then query it. “Show me customers with purchases under $1MM.” And “Add a note that we followed up with Pat Peterson at Vandalay yesterday about the integration…”
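To make the idea concrete, here's a minimal sketch of what "querying your data with an LLM" might look like. Everything in it is invented for illustration: the customers table, the sample rows, and the `llm_to_sql` function, which in a real system would call a model to translate the natural-language request into SQL. Here it's stubbed with a hardcoded translation so the example runs.

```python
import sqlite3

# Pretend this is the data exported out of the CRM.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, total_purchases REAL)")
db.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Vandalay Industries", 250_000.0),
     ("Pennypacker LLC", 1_500_000.0)],
)

def llm_to_sql(request: str) -> str:
    # Stand-in for a model call: in practice an LLM would generate this
    # SQL from the request text. Hardcoded here so the sketch is runnable.
    if "under $1MM" in request:
        return "SELECT name FROM customers WHERE total_purchases < 1000000"
    raise NotImplementedError(request)

rows = db.execute(llm_to_sql("Show me customers with purchases under $1MM")).fetchall()
print(rows)  # only the customer below the $1MM threshold comes back
```

The interesting design question isn't the SQL generation, which is well-trodden by now, but what replaces all the layouts and list views once the query becomes the interface.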
From a design perspective, it feels like replacing the layouts and navigation of Salesforce with queries (and if you’ve ever used or tried to design for Salesforce, that’s not necessarily a bad thing). When I imagine what that might feel like as an experience, one of the few touchpoints I have is Star Trek. Their AI was pretty badass, but they also had a bunch of screens they referred to silently. The Enterprise software designer must have had some principles that dictated when they needed screens and when they could rely on spoken queries. And now that we’ve reached the point where people are ditching their screens for AI (or at least considering it), we’ll probably need those guidelines soon enough.
As homework, I pulled up Make It So: Interaction Design Lessons from Science Fiction by Nathan Shedroff & Christopher Noessel. Brilliant as they are, they’ve already distilled these lessons into clear guidelines, for example:
Lesson: put information in the channel it fits best
Should a bit of system information be conveyed through audio or spoken language? Though every piece of content needs to be considered in its particular context of use, a good rule of thumb is to put peripheral information in peripheral channels. Since Captain Kirk could feel confident that his communicator was on when he heard voices coming from it, and off when he heard nothing or got no response, the double beeps that signal opening and closing communications can be considered peripheral and can be signaled as system sounds instead of a voice that would convey the same information. Conversely, if the system needs to convey that 10 seconds remain before the whole place is gonna blow, it could be conveyed as a rising tone, but the information is important enough that the discreteness and omnidirectionality of language is required.