Hi Everyone 👋🏽

If you're new here, welcome to Growth Imperatives, an ongoing curation of found ideas that deconstruct the current world of design and ask how we can imagine a new one.

This week is about questioning the promises AI advocates sell us—hyper-productivity, unlimited creativity, and...text message summarization, to name a few. While some of these solutions might be useful, what are we willing to give up in return? And is the return even worth it in the first place?


Is uncertainty such a bad thing?

We live in a unique time where the powers that be in Silicon Valley—and, by proxy, the design industry—implore us to believe that any content is better than slow or no content. Even if that content is both made and consumed by machines. They push us to ignore the unproductive, questioning, and inefficient side of creativity.

But what kind of world do we get when we continually scratch this itch for ROI? What's the point of using AI to make a thoughtful and hard thing thoughtless and easy?

This week's main article is about what technologies like AI (and social media and web 2.0) give their inventors and what they take away from the rest of us.

Read below 👇🏽

In his book Non-things, the philosopher Byung-Chul Han draws a distinction between two styles of reading: the pornographic and the erotic. The pornographic reader “is looking for something to be uncovered.” He wants to get to the point, as expeditiously as possible. The erotic reader takes pleasure in the act of reading itself. He “lingers” with the words. “The words are the skin, and the skin does not enclose a meaning.” I would broaden Han’s distinction to describe perception in general. The pornographic mind is concerned only with what can be made explicit, what can be turned into information. It seeks to pierce the obscuring veils of mystery and wonder, beauty and ambiguity, to get to the gist of the matter. The erotic mind likes the veils. It sees them not as obscuring but as pleasurable and even revelatory.

The mind of the LLM is purely pornographic. It excels at the shallow, formulaic crafts of summary and mimicry. The tactile and the sensual are beyond its ken. The only meaning it knows is that which can be rendered explicitly. For a machine, such narrow-mindedness is a strength, essential to the efficient production of practical outputs. One looks to an LLM to pierce the veils, not linger on them. But when we substitute the LLM’s dead speech for our own living speech, we also adopt its point of view. Our mind becomes pornographic in its desire for naked information.

Read → Dead Labor, Dead Speech by Nicholas Carr


🔮 Visions

Three small ideas to help challenge your thinking:

Why are we assuming that people want more, faster? Has anyone ever said, “if only I could make unlimited presentations”?

What if we want to craft one presentation, but do it beautifully?

What if we actually, genuinely love the in-between moments when we return to a draft after days have passed, sharpening one word here, adding a better verb there?

Great software products aren’t simply a collection of buttons, icons, and menus. They shape how we think and who we aspire to be.

The problem isn’t that machines are becoming more human-like, it’s that humans are becoming more machine-like in an effort to keep up.

→ Sari Azout in What does slow AI look like?

[The idea that “AI will solve climate change”] is not merely foolish but dangerous—it’s another means of persuading otherwise smart people that immediate action isn’t necessary, that technological advancements are a trump card, that an all hands on deck effort to slash emissions and transition to proven renewable technologies isn’t necessary right now. It’s techno-utopianism of the worst kind; the kind that saps the will to act.

→ Brian Merchant in AI will never solve this

AI-produced things “sort of suck” not merely because they are inherently derivative and often erroneous; they suck because AI is only ever a simulation of care, and it improves by allowing people to be more careless. AI is fundamentally “artificial intentionality” rather than “artificial intelligence.”

Tech companies seem to hope that they can make a brute-force case that “having intention” is inconvenient, just as they continually try to persuade users that interacting with other people is inconvenient (rather than the point of life).

→ Rob Horning in Artificial intentionality


That's all for this week! Thanks for reading.

If you got something from this newsletter (or not), please let me know using the feedback buttons below! Every nudge in the right direction is a massive help.