Starting to feel a sense of deja-vu scrolling through all the news about AI? Does it all seem to be the same stuff, over and over, across LinkedIn, X, Discord, Reddit?
Ever wondered what is going on? Why does it feel like you have seen it all, heard it all?
Short answer ~ "the world is full of isomorphisms (coercive, mimetic, normative) & these forces shape the technology landscape".
So, what the heck am I talking about?
Longer answer follows, but feel free to take my short answer, paste it into any LLM and ask it to contextualise it for the field of "AI" ~ it will most likely give you something close to what follows; skip my take to save time :)
Isomorphism, in this context, is the process by which organisations or systems adopt similar structures, practices, or behaviours over time, often due to external pressures rather than internal needs or efficiency. For example, most universities end up with courses, subjects, lectures, labs, exams, assignments, internships, discipline areas, focus groups ~ the structure is similar across universities, even if some give these things slightly different names. It’s a way of explaining why institutions in similar environments end up looking and acting alike.
There are forces that pressure organisations (or groups of people) to converge on what will feel like a single, accepted form. These show up a lot in tech, with interesting effects across the ecosystem (the institutional isomorphism framework was first put forward by DiMaggio & Powell).
In AI, the same forces are at play across three broad forms.
Coercive Isomorphism (Constraint from above) ~ A top-down pressure from external mandates. Consider GDPR, AI ethics principles, or CCPA forcing specific protocols. These manifest as UI design choices and even AI model architectures. Compliance becomes a non-negotiable design constraint before a single line of code is written. The challenge is no longer about novelty, but about innovating within these pre-defined, and often rigid, guardrails.
Mimetic Isomorphism (Pull of the leader) ~ In a field defined by high uncertainty or ridiculous velocity, like we see in AI, the default strategy is often to copy the market leader. When a firm like OpenAI or Google puts out a new recipe or tactic, the industry reflexively mimics it. Model Context Protocol (MCP), with all its security gaps, still takes off; skills.md & agents.md move like wildfire across the tech landscape; agent protocols from Google get a significant amount of attention ~ and tech media covers all of these, further amplifying the trend. Copying feels like a safer, more rational bet, but it is an innovation trap, & longer term we get technofeudalism (even though no one wants it or voted for it). While the ecosystem is busy iterating on the leader's position, breakthroughs demand divergence, not just incremental refinement.
Normative Isomorphism (Pressure from within) ~ This force stems from shared professional norms, shaped by top universities/think-tanks, influential arXiv papers, and conference groupthink. We internalise a "standard toolkit" as the "correct" way to engineer, even if (sometimes) it makes no sense in terms of the trade-offs being made. This shared baseline is powerful, yet drives us all into a cognitive rut.
My feel is that the best ideas in AI are likely to come from merging or re-mixing different disciplines ~ yet this approach is not given sufficient weight, & hence groupthink overrides sense. As engineers, we are the architects of the future, but isomorphisms act as an invisible resistance & shaping force.
Just knowing about these forces helps. So feel free to question constraints without breaking them, resist the urge to imitate, and aim to challenge the orthodoxy of our training (within common-sense norms).
Now more than ever we need to make progress in interesting ways, and that will not come from convergence, but from deliberate, informed divergence.
My definition of creativity is distance from our past & from the history of the field. Push to go deeper ~ there are many more interesting things we as humans can still do, even more so with the augmentation offered by recent advances in AI.