Reflections from the 15 April 2025 unconference on SE in the age of AI (Melbourne, Australia)
Will AI change how we build software?
When we gathered thirty practitioners, researchers, and engineering leaders in one room, the question on everyone's mind was no longer "Will AI change how we build software?" Rather, it was: "How fast can we redesign the entire socio-technical system so humans, machines, and metrics co-evolve responsibly?"
I've always viewed engineering as a living ecosystem where value must be feasible (technically sound and achievable), viable (aligned to business reality), and sustainable (financially, technically, ethically, and environmentally durable). AI doesn't replace this triad; it stretches it, acting like any other technical constraint but with unprecedented reach and speed.
The workshop surfaced exactly where this stretch feels most acute.
Hiring: The collapsing signal problem
LLMs now draft resumes, optimise cover letters, and help rehearse interview answers. The net result? Our traditional signals of competence are dissolving. We heard unease about "portfolio projects" that juniors can't explain, and debates about banning AI tools in technical tests even though those same tools now define modern workflows.
The core issue: When every part of the pipeline can be delegated to an LLM, the observable output is no longer a proxy for the latent capability we're trying to measure. We need new sensors.
Evaluate prompt literacy and critical-thinking traces alongside raw programming competence
Ask candidates to critique or improve AI-generated code fragments, mirroring real work to reveal judgment (a sample fragment follows this list)
Treat AI fluency as a positive signal while anchoring it in the candidate's understanding of risk, bias, and system constraints
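To make the second idea concrete, here is a hypothetical fragment of the kind an interviewer might hand over, seeded with plausible-but-flawed patterns that models often produce (the functions and scenario are invented for illustration):

```python
# Hypothetical interview artefact: "AI-generated" code for a candidate to critique.
# A strong candidate should spot the issues flagged in the comments.

def merge_configs(base, overrides={}):  # mutable default argument, shared across calls
    """Merge override settings into a base configuration dict."""
    for key in overrides:
        base[key] = overrides[key]  # silently mutates the caller's dict in place
    return base

def load_settings(path):
    try:
        with open(path) as f:
            return eval(f.read())  # eval on file contents is an injection risk; json.load is safer
    except Exception:  # blanket except hides real failures (missing file vs. bad syntax)
        return {}
```

The goal is not a rewrite under time pressure; it is hearing the candidate explain why each pattern is risky and how they would steer the model away from it.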
Juniors and the lost apprenticeship
We grow engineers through "exposure to pain"—and I use this emotional metaphor intentionally. Staring at stack traces, stepping through seg-faults, re-architecting when v1 collapses, working through obscure stakeholder requests that everyone wishes they'd understood months earlier. AI short-circuits this grind, and the risk is an entire cohort whose intuition never fully forms.
The deeper concern: Every time we outsource a cognitive micro-struggle to a model, we withdraw a tiny deposit from the bank of hard-won tacit knowledge. Accumulate enough withdrawals and the organisation becomes fragile. Worse, we reduce the velocity of tacit knowledge accumulation across the entire field.
Shadow prompts: juniors attempt the task first, then compare their approach with how the model tackled it
Mentor review of AI output: treating the LLM as a noisy junior and the human junior as its editor and critic
Deliberate friction zones: code areas where AI suggestions are disabled so humans must wrestle with design trade-offs
Redefining the “good engineer”
Speed has been automated; what remains scarce is sense-making. Participants converged on some emergent core skills.
Ask better questions (problem framing)
Filter signal from synthetic noise (critique and synthesis)
Compose systems, not snippets (architectural reasoning)
These map neatly onto the functional stack observed in my own work: Reason and Plan gain a premium while Generate becomes commoditised. But here's the paradox: if engineers do not generate enough code themselves, they lose the critical ability to build tacit knowledge through practice.
The best engineers may need to become AI conductors: orchestrating models and agents while maintaining deep technical judgment about what makes systems robust, maintainable, and valuable.
Speed, metrics, and the illusion of progress
Five pull requests an hour that turn the GitHub contribution graph dark green look impressive until you realise they are shallow reformats. AI can inflate any vanity metric: commits, lines changed, words written. Soon enough, large organisations will inadvertently create metrics around "tasks allocated to agents," "tokens consumed," or "which engineer uses async agent allocation most effectively." Leaderboards will emerge as side effects of the logs and dashboards now embedded in our IDEs.
Reality check: If a number can be gamed (which is easier than ever with AI tools), it probably wasn't measuring value in the first place.
Design delta: how much a change improves modularity or reduces cognitive load (see my blog post on AI Trust debt; a crude proxy is sketched below)
Decision traceability: clarity of trade-offs and risk assumptions captured alongside code
Collaborative impact: frequency and quality of knowledge transferred across the team
AI can help surface these richer signals, but only if we teach it what “quality” means in our specific context. Notice that all these recommendations focus on human-centric outcomes that are not directly observable by AI systems.
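As one minimal sketch of what a "design delta" sensor could look like, assuming we accept branch counting as a crude stand-in for cognitive load (the metric and names here are illustrative, not a recommendation):

```python
import ast

# Crude "design delta" proxy: compare decision-point counts between two
# versions of a Python module. Branching is only a rough stand-in for
# cognitive load, but it measures the *shape* of a change rather than
# its size, unlike lines-changed vanity metrics.

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def branch_count(source: str) -> int:
    """Count decision points in a module's AST."""
    tree = ast.parse(source)
    return sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def design_delta(before: str, after: str) -> int:
    """Negative values suggest the change reduced branching complexity."""
    return branch_count(after) - branch_count(before)

if __name__ == "__main__":
    v1 = "def sign(x):\n    if x > 0:\n        return 1\n    else:\n        return -1\n"
    v2 = "def sign(x):\n    return 1 if x > 0 else -1\n"
    print(design_delta(v1, v2))  # -1: the refactor removed a branch statement
```

A production version would need richer structural measures and, crucially, human calibration of what "better" means in context, which is exactly the teaching problem described above.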
Culture is the hardest interface
Tool adoption is racing ahead of cultural adaptation. The session surfaced stories of juniors defaulting to ChatGPT for every query, seniors resisting "auto-pilot coding" on principle, and staff engineers quietly running experiments in the background.
Shared rituals are still effective: brown-bag demos, prompt libraries, "failure-file" talks where teams share what went wrong with AI-assisted development. Culture, after all, is the error-tolerance buffer of any system.
The integration challenge: Different experience levels interact with AI differently, creating new forms of knowledge gaps and potential friction within teams.
Closing reflections
Software engineering has always balanced creativity with the engineering of reliable structures, developing as a craft that navigates complex trade-offs while accumulating tacit knowledge and wisdom. Large language models represent a significant disruption, but the core nature of our field remains unchanged.
We are learning to architect feedback loops where humans and machines co-create and co-mature; we have a co-cognitor to work with. The tools, processes, systems, and culture for integrating this capability are still developing. The engineering community has always been remarkably adaptive, and I'm confident we'll respond well to this challenge.
This unconference did not give tidy answers, nor should it have. What it offered was a sharper view of the complex system we are now part of, and the humility to iterate on both code and culture. It left me with valuable reflection and (surprisingly) confidence that we can steward our craft, so the "age of AI" becomes less of a disruption and more of an evolution toward the next generation of great engineering. Four priorities emerged from the day:
Re-instrument hiring: Move from resume artefacts to evidence of systemic thinking and AI stewardship
Redesign learning loops: Embed friction and reflection so tacit craft survives in AI-accelerated workflows
Codify cultural scaffolds: Normalise experimentation, critique, and collective sense-making
Align with sustainability: Each acceleration step consumes energy (not just electricity, but human, political, and cultural energy) and shapes talent pipelines; we need to factor these externalities into our definition of done
The path forward isn't about choosing between human creativity and machine capability; it is about designing systems where both can flourish.

Rajesh Vasa, April 2025