The traditional career ladder, that predictable, tenure-based progression from junior analyst to senior manager, is collapsing. Entry-level hiring has declined by roughly 50% since 2019, not because organisations need fewer people, but because artificial intelligence has automated the tasks that once served as training rungs. Summarising reports, cleaning data, scheduling meetings, drafting correspondence: these were never just menial work. They were the apprenticeship through which newcomers learned how organisations actually function. When AI absorbs this stratum of activity, it does not simply eliminate jobs; it eliminates a developmental pathway.
This essay argues that in this transformed landscape, a particular psychological disposition, commonly called High Agency, combined with AI fluency becomes the primary determinant of professional survival and flourishing. But I want to push beyond the motivational rhetoric that often accompanies such claims. High Agency, properly understood, draws on a rich tradition of psychological research from Rotter to Bandura to Seligman. And when we examine its deepest implications, we find ourselves in dialogue with ancient wisdom traditions, particularly the Bhagavad Gita's concept of Karma Yoga, which offer both a refinement of and a corrective to contemporary notions of personal efficacy.
I. The Psychology of Agency: From Locus of Control to Learned Optimism
The concept of agency has been theorized from multiple angles in twentieth-century psychology, each perspective illuminating a different facet of how individuals navigate uncertainty and create outcomes in their lives.
Julian Rotter's (1966) construct of locus of control provides the foundational framework. Individuals with an internal locus of control believe that outcomes result primarily from their own actions, abilities, and decisions. Those with an external locus attribute outcomes to luck, fate, powerful others, or systemic forces beyond their influence. This distinction matters enormously for behavior: internals persist longer at difficult tasks, seek more information before making decisions, and show greater initiative in ambiguous situations (Rotter, 1966). The High Agency individual exhibits an extreme internal locus—a conviction that nearly everything falls within their circle of potential influence.
Albert Bandura's (1977, 1997) theory of self-efficacy adds specificity to this picture. Where locus of control addresses general beliefs about causation, self-efficacy concerns one's confidence in executing specific behaviors to produce specific outcomes. Bandura demonstrated that self-efficacy beliefs predict performance across domains—from athletic achievement to academic persistence to recovery from illness. Crucially, self-efficacy is not mere confidence; it must be grounded in genuine capability or it produces overconfidence and failure. The High Agency individual maintains strong domain-general self-efficacy while remaining calibrated to reality.
Martin Seligman's research on learned helplessness (Seligman, 1972) and its antidote, learned optimism (Seligman, 1990), reveals what happens when agency collapses. When organisms experience repeated uncontrollable negative events, they often cease attempting to escape even when escape becomes possible. The pattern generalizes to humans facing chronic unemployment, abusive relationships, or organisational dysfunction. Learned optimism involves cultivating an explanatory style that treats setbacks as temporary, specific, and external while viewing successes as permanent, pervasive, and personal—essentially, rebuilding the internal locus of control that helplessness erodes.
Carol Dweck's (2006) distinction between fixed and growth mindsets operationalises agency at the level of learning and development. Those with fixed mindsets view abilities as static endowments; effort signals inadequacy, and failure threatens identity. Those with growth mindsets view abilities as developable through dedication and practice; effort is the path to mastery, and failure provides information for improvement. The High Agency individual embodies an extreme growth mindset, reframing every blocker as what we might call a "skill issue", a temporary gap between current capability and required capability that effort and learning can close.
Several additional constructs round out this psychological picture. Suzanne Kobasa's (1979) concept of psychological hardiness emphasises commitment (engagement with life activities), control (belief in one's influence over events), and challenge (viewing change as opportunity rather than threat). Aaron Antonovsky's (1979) sense of coherence highlights the importance of comprehensibility (the world makes sense), manageability (resources are available to meet demands), and meaningfulness (life's challenges are worth engaging). Viktor Frankl's (1946) logotherapy locates the primary human motivation in the will to meaning: the drive to find purpose even in suffering. Jonathan Haidt's (2006) synthesis in The Happiness Hypothesis draws on both ancient philosophy and modern psychology to suggest that meaning emerges from vital engagement between a skilled self and challenging activities.
What emerges from this convergence of traditions is a composite portrait of High Agency: an internal locus of control, strong and calibrated self-efficacy, an explanatory style that resists learned helplessness, a growth mindset that treats obstacles as learning opportunities, psychological hardiness in the face of stress, a coherent sense that one's actions matter, and an orientation toward meaning rather than mere hedonic comfort.
II. AI as Activation Energy Reducer
The concept of activation energy comes from chemistry, where it denotes the minimum energy required for a reaction to occur. The metaphor transfers readily to human behavior: every action we might take has an associated activation energy—the effort, skill, time, and resources required to initiate it. Much of what prevents people from translating intentions into actions is not lack of desire but prohibitively high activation energy.
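To make the borrowed concept concrete (a textbook aside rather than part of the essay's argument), the Arrhenius relation from chemistry states the dependence explicitly:

$$ k = A \, e^{-E_a / RT} $$

where $k$ is the reaction rate constant, $A$ a pre-exponential factor, $E_a$ the activation energy, $R$ the gas constant, and $T$ the temperature. Because $E_a$ sits in the exponent, even a modest reduction in activation energy yields a disproportionately large increase in how often the reaction occurs; the metaphorical claim here is the analogous non-linearity for human action, not a formal model.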
Consider the aspiring entrepreneur who has identified a market opportunity. Before AI, pursuing that opportunity required either possessing or hiring diverse skills: market research, financial modeling, legal compliance, design, development, marketing copy, customer service scripts. Each skill gap represented activation energy that had to be overcome through years of learning or thousands of dollars in contractor fees. Many viable ideas died not because they lacked merit but because their activation energy exceeded the founder's available resources.
AI fundamentally changes this calculus. Large language models can draft legal documents, write functional code, generate marketing copy, analyse financial projections, and research competitive landscapes. Yes, they have gaps and their output is not perfect, but it is good enough to dramatically reduce the activation energy required for a high-agency individual to move from intention to prototype. In effect, AI can act as a "jet-pack" for agency: it does not replace human direction, but it amplifies the force that human intention can exert on the world.
The psychological literature on self-determination theory (SDT) (Ryan & Deci, 2000) helps explain why this matters beyond mere efficiency. SDT posits three fundamental psychological needs: autonomy (feeling volitional and self-directed), competence (feeling effective and capable), and relatedness (feeling connected to others). AI tools satisfy the competence need by extending what an individual can accomplish; they satisfy the autonomy need by reducing dependence on gatekeepers, specialists, and organizational hierarchies. The phenomenological experience of using AI well is often one of expanded possibility—a sense that actions previously foreclosed are now available.
Mihaly Csikszentmihalyi's (1990) research on flow states offers another lens. Flow occurs when skill level matches challenge level: too little challenge produces boredom, too much produces anxiety. AI can function as a dynamic skill augmentation system, allowing individuals to take on challenges that would otherwise exceed their capabilities while maintaining the balance necessary for flow. The writer tackling a technical subject, the developer entering an unfamiliar codebase, the analyst confronting a new statistical technique: each can use AI to remain in the flow channel rather than collapsing into anxiety or avoidance.
A useful metric for this dynamic is the say/do ratio: the correspondence between stated intentions and completed actions. Traditional career paths featured long gaps between saying and doing: "I want to start a company" might take years to actualise. AI compresses this gap. The high-agency individual who says "I will build a prototype this week" can now actually do so. This compression has profound psychological effects: it builds self-efficacy through rapid feedback loops, it prevents the erosion of motivation that occurs during long delays, and it allows for faster iteration and learning.
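Stated as a deliberately minimal definition, with the counting window left unspecified as an assumption rather than something this essay prescribes:

$$ \text{say/do ratio} = \frac{\text{commitments completed}}{\text{commitments stated}} $$

A ratio approaching 1 over a short window is the behavioural signature of the compression described above.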
III. Scale-Free Principles: Agency Across Levels
A striking feature of the agency-AI interaction is that the same principle, activation energy reduction enabling expanded action, operates at multiple scales. This scale-free quality suggests we are observing something fundamental rather than incidental.
The Micro Scale: Zero-Latency Learning
At the level of individual tasks, the traditional response to a skill gap was either to route around it (finding someone else to do that part) or to invest time in learning (taking a course, reading documentation, practicing). Both approaches introduced latency: a delay between identifying a need and meeting it.
AI collapses this latency toward zero. The programmer who does not know the regex syntax for a particular pattern can query an AI and receive working code in seconds. The analyst unfamiliar with a statistical technique can get an explanation calibrated to their level and an implementation they can adapt. The writer struggling to organize an argument can have their draft restructured with commentary. This is not the elimination of learning; the high-agency individual still seeks to understand and internalize. Rather, it is the elimination of learning as a mandatory precondition for action.
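As a purely illustrative sketch of the kind of answer such a query might return (the pattern, function name, and sample text below are hypothetical examples, not anything drawn from this essay):

```python
import re

# Hypothetical task: pull ISO-format dates (YYYY-MM-DD) out of free text,
# the sort of regex a developer might not remember offhand.
ISO_DATE = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

def extract_dates(text: str) -> list[str]:
    """Return every ISO-formatted date found in the input text."""
    return [match.group(0) for match in ISO_DATE.finditer(text)]

if __name__ == "__main__":
    sample = "Kickoff on 2024-03-15, review on 2024-04-01, launch date TBD."
    print(extract_dates(sample))  # ['2024-03-15', '2024-04-01']
```

The point is not the snippet itself but the latency: what once required a detour through documentation arrives in seconds, calibrated to the immediate need, leaving the human to verify and adapt rather than to recall.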
Saras Sarasvathy's (2001) research on effectuation in entrepreneurship describes how expert entrepreneurs think: rather than starting with a goal and acquiring resources to achieve it (causal reasoning), they start with available means and imagine possible ends (effectual reasoning). AI dramatically expands the set of available means for any given individual, enabling more ambitious effectual reasoning. What can I build with what I have, when "what I have" now includes an AI that can code, write, research, and analyze?
The Meso Scale: Permissionless Execution
At the level of projects and careers, traditional paths involved extensive permission seeking and coordination costs. Launching a product required assembling a team, raising capital, building organisational infrastructure. Each step involved convincing others: investors, employees, partners, customers.
AI enables what we might call the substitution of compute cost for coordination cost. Solo founders are building companies with $15 million in annual recurring revenue (ARR) without traditional teams. The constraint has shifted from "can you assemble the resources?" to "can you direct the intelligence?" This is permissionless execution in its purest form: the ability to create value without organisational sanction.
Atul Gawande's (2009) work on checklists in The Checklist Manifesto offers an interesting parallel. Gawande showed that complex professional work often fails not because of ignorance but because of ineptitude, that is, the failure to correctly apply what we know. Checklists externalise cognitive load, ensuring consistent execution. AI can function as a dynamic checklist system, not just reminding us of steps but executing them, catching errors, and filling gaps. The high-agency individual becomes less a solo performer and more a director orchestrating AI capabilities.
The Macro Scale: Meritocratic Volatility
At the level of economy and society, the implications are profound and unsettling. Traditional organisational structures (titles, tenures, hierarchies) are static maps of a territory that AI is rapidly reshaping. These structures assumed that capability accrued slowly through experience and that position was a reasonable proxy for contribution.
When AI can amplify the output of anyone with agency and direction, capability becomes more volatile. A junior employee with high agency and AI fluency can outproduce a senior manager clinging to pre-AI workflows. Value flows to nodes: specifically, to the people who can effectively direct intelligence to solve problems, regardless of their organisational label. Titles become lag indicators, documenting what someone did rather than predicting what they can do.
This creates what Nassim Taleb (2012) would recognize as increased antifragility potential for some and increased fragility for others. Those who embrace volatility, who position themselves to benefit from uncertainty through high agency and adaptability, gain from the turbulence. Those who depend on stable structures, who assume that past patterns will continue, that credentials will protect them, that seniority confers security, may find their positions increasingly precarious. The system is selecting for a particular kind of person: not the most credentialed, not the most experienced, but the most agentic and adaptive.
IV. The Charioteer's Correction: Agency Through the Lens of Karma Yoga
Thus far, I have presented High Agency as an unqualified good, a disposition to cultivate and amplify with AI tools. But there is a shadow side to this picture that becomes visible when we place it in dialogue with the Bhagavad Gita's teachings on action.
The Gita's central teaching on action appears in Chapter 2, verse 47: "Karmanye vadhikaraste ma phaleshu kadachana" – you have the right to action alone, never to its fruits. This single verse contains a profound reframing of agency that both validates and corrects the modern High Agency ideal.
Consider the contrast along four dimensions:
Locus of Control: Absolute vs. Nuanced Internal
Modern High Agency doctrine often proclaims: "Everything is inside my circle of control." The Gita offers a more nuanced position: you control the action (karma), but never the fruit (phala). This is not fatalism. The Gita emphatically rejects inaction (Chapter 3 condemns those who sit idle claiming transcendence while secretly harboring desires). Rather, it is epistemic humility about causation. Complex systems have emergent properties; outcomes depend on countless factors beyond any individual's control or even knowledge. The High Agency individual who believes they control outcomes is epistemologically mistaken, and this mistake has psychological consequences.
View of Self: The Doer vs. The Instrument
The modern High Agency individual positions themselves as the doer (karta), the prime mover bending reality to their will. The Gita's Chapter 11 offers a startling alternative: Arjuna (the warrior receiving this wisdom) is told he is merely the instrument (nimitta matram), the occasion through which larger forces work. This is not passivity; Arjuna must still act with full commitment. But it is a different relationship to action, one in which the ego does not claim authorship of outcomes.
Motivation: Outcome-Oriented vs. Duty-Oriented
Contemporary High Agency rhetoric is frankly outcome-oriented: billion-dollar valuations, promotions, influence, impact. The Gita proposes action performed for its own sake as dharma (one's duty appropriate to one's role and capacities), executed without attachment to success or failure. This is not the elimination of goals but the subordination of attachment to them. The high-agency entrepreneur builds the company because building is their dharma, not because of the exit.
Psychological Risk: Ego Inflation vs. Equanimity
Here is the shadow side of High Agency: if you believe you control outcomes, then when outcomes are bad, it is your fault. Total responsibility creates total anxiety. The person who identifies their self-worth with their outcomes rides an exhausting roller coaster: elated at success, devastated at failure, perpetually comparing themselves to others. This is what the Gita would call rajasic action, action fueled by passion, desire, and ego-attachment. It works, but it is fragile (a variation of the fragility Taleb cautions against in his work on antifragility). When the market crashes, when health fails, when the key employee quits, when reality refuses to bend, the rajasic agent breaks because their identity was fused with control.
The Gita proposes equanimity (samatvam), evenness of mind in success and failure, as the psychological hallmark of sattvic (good) action. Not indifference (which would be tamasic, the inertia and passivity the Gita condemns), but a kind of engaged detachment. Act with full energy and skill, but release the outcome. This is the meaning of "Yoga-sthah kuru karmani" – "established in yoga, perform action" (Chapter 2, verse 48).
V. Synthesis: Sattvic Agency for the AI Age
What emerges from this dialogue is not a rejection of High Agency but a refinement of it. The synthesis I propose might be called Sattvic Agency or Detached High Agency: maintaining the internal locus of control and the bias toward action while releasing attachment to outcomes and ego-identification with results.
This synthesis has practical implications for how we use AI. The rajasic agent uses AI to "get what I want", to amplify their desires, to accelerate their outcomes, to expand their reach. The sattvic agent uses AI to perfect the execution of duty—to do work worthy of their capacities, to serve purposes larger than themselves, to create value that benefits others.
This reframing does not reduce effectiveness. The Gita is explicit that sattvic action produces better outcomes than rajasic action precisely because it is uncontaminated by ego-distortion. The entrepreneur detached from outcomes makes clearer decisions than the one whose identity depends on the next funding round. The creator who works for the work's sake produces better art than the one desperately seeking validation. The researcher who follows questions without attachment discovers more than the one fixated on publication counts.
If AI allows you to do the work of ten people, do it because that is your capacity for contribution, not because you are entitled to the rewards of ten people. This subtle shift transforms the psychological relationship to AI-augmented productivity. Instead of anxiety about whether the amplification will pay off, there is a kind of joyful engagement with expanded capability. Instead of comparison with others who might be using AI more effectively, there is focus on one's own dharma. Instead of the exhausting identification of self with output, there is the lightness of action performed as offering.
It is worth noting that this synthesis is not unique to Eastern philosophy. Marcus Aurelius, in his Meditations, repeatedly emphasizes the distinction between what is "up to us" (our judgments, impulses, desires, aversions) and what is not (external outcomes, others' opinions, fate). The Stoic injunction to "focus on what you can control" closely parallels the Gita's teaching, though without the metaphysical framework of dharma and yoga. The ancient Greek philosophers, Aristotle on eudaimonia as activity in accordance with virtue, Epicurus on the sufficiency of simple pleasures, similarly cautioned against the heroic ego that believes it can master fortune. These traditions converge on a wisdom about human agency that the modern productivity culture has largely forgotten.
VI. Implications for Pedagogy and Career Development
If the foregoing analysis is correct, it has significant implications for how we educate and advise the next generation of professionals.
From Just-in-Case to Just-in-Time Learning
Traditional education operates on a stockpiling model: learn facts, methods, and frameworks now because you might need them later. This made sense when knowledge acquisition had high friction. When AI can provide specific knowledge on demand, the premium shifts to meta-learning: the ability to rapidly acquire and apply new knowledge in context.
The pedagogical implication: teach less content and more process. Do not teach "how to code in Python"; teach "how to build a working system that solves a real problem, using Python and AI tools as necessary." Assess not on syntax recall but on the complexity of problems successfully tackled. This does not mean abandoning fundamentals; understanding principles enables better AI collaboration. It does mean subordinating fundamentals to application.
From Toil to Taste
When AI handles execution (the coding, the drafting, the data cleaning), the human contribution shifts toward what we might call taste: judgment about what is worth building, discernment about quality, ethical reasoning about consequences. This is what the Gita calls Buddhi Yoga, the yoga of intellect or discernment.
The pedagogical implication: assess students on the quality of questions they ask AI, not just the answers they produce. Train aesthetic judgment by exposing students to exemplars of excellence and having them articulate why certain solutions are better than others. Cultivate ethical reasoning by confronting students with dilemmas where technical capability outpaces wisdom.
Teaching Detached High Agency
There is a danger that High Agency, taught without the correction provided by deep thinkers across the world (the Gita, Aristotle), produces a generation of anxious strivers: people who believe they should be able to control everything and berate themselves when they cannot. The antidote is explicitly teaching detachment from outcomes alongside bias toward action.
One potential practical approach: "fail fast" sprints where students must prototype five ideas in a week using AI tools. The rapid cycle prevents attachment to any single outcome; the volume teaches that most ideas fail and that is fine; the velocity cultivates the habit of action without the luxury of perfectionism. Students learn experientially what the Gita teaches philosophically: that action matters, outcomes do not define you, and the only way out is through.
The Career of the System Architect
Career advice for the AI age: do not try to be the best at any single tool or technique, because AI will commoditize tool-level expertise. Instead, develop the integrative capacity to weave multiple threads—technical, aesthetic, ethical, strategic—into coherent systems.
AI lowers the barrier to entry for components but raises the ceiling for systems. Anyone can prompt an AI to generate code, design an interface, or draft a strategy. Far fewer can architect a product that works technically, delights aesthetically, behaves ethically, and succeeds strategically. The career of the future belongs to the system architect who uses AI agents as their construction crew but retains the vision, judgment, and integration that no AI can yet provide.
VII. Conclusion: The Jet-Pack and the Charioteer
The title of this essay juxtaposes two images: the jet-pack (AI as amplifier of human intention) and the charioteer (the Gita's metaphor for the self directing the horses of the senses). Both images concern agency—the capacity to translate will into action—but they suggest different relationships to that capacity.
The jet-pack promises power, speed, escape from friction. Strap it on and go farther, faster. This is the seductive promise of AI-augmented High Agency: you can do more, be more, achieve more. It is not false. The activation energy reduction is real. The expanded possibility space is real. The high-agency individual with AI fluency genuinely can create value at scales previously impossible.
But the charioteer offers a deeper wisdom. In the Gita's image, Arjuna stands in his chariot before the great battle, Krishna at the reins. Arjuna has agency; he must choose to fight or flee. Yet he is not the source of his agency. He is part of a larger system, playing a role in a drama that exceeds his comprehension. His task is to act with skill and commitment while releasing attachment to the outcome.
The synthesis I have proposed takes the jet-pack and places it under the charioteer's guidance. Use AI to lower activation energy, to bridge skill gaps, to amplify contribution. Cultivate the internal locus of control, the self-efficacy, the growth mindset, the hardiness that psychological research shows predicts success. But hold these capabilities with sattvic equanimity: doing the work because it is your dharma, releasing the fruits to forces larger than yourself, finding identity in the quality of action rather than the quantity of outcome.
This is not a prescription for passivity. The Gita emphatically rejects inaction. It is a prescription for sustainable excellence: the kind of agency that does not burn out because it is not fueled by ego, does not collapse under failure because identity is not fused with outcomes, and does not corrupt because no attachment to results is allowed to justify compromising the quality of means.
The career ladder has collapsed (or, to put it more politely, is undergoing substantial transformation). The path forward is not to find a new ladder but to develop the capacity to navigate unstructured terrain. That capacity has a psychological foundation (High Agency) and a technological amplifier (AI). But its deepest ground is the wisdom of the ancients, which long predates either psychology or technology: a tradition that teaches us to act with full energy and skill while remaining equanimous about results, to direct the horses of capability while remembering we are not the source of the chariot.
Coda: Expanding the Gita Thread
This essay has necessarily compressed the Gita's teaching into a few pages of contrast and synthesis. A fuller treatment would explore several threads that merit their own extended analysis:
The Three Gunas as Diagnostic Framework: The Gita's phenomenology of action types (sattvic, rajasic, tamasic) as a framework for evaluating AI use cases. Specifically, which applications promote clarity and service, which amplify desire and anxiety, which enable avoidance and inertia.
Svadharma in the Age of Infinite Possibility: The concept of svadharma (one's own duty) as a criterion for career choice when AI makes almost anything possible. How do we discern our particular calling when the constraints that once narrowed choice have dissolved?
Karma Yoga and Jnana Yoga: The relationship between the yoga of action and the yoga of knowledge in understanding AI as extended cognition. What does it mean to "know" when knowledge can be queried on demand?
Abhyasa and Vairagya: The Gita's complementary disciplines of abhyasa (persistent practice) and vairagya (dispassion) as the training regimen for the AI-augmented professional. How do we cultivate skill while releasing attachment?
The Stoic-Vedantic Convergence: A deeper exploration of the parallel insights in Marcus Aurelius, Epictetus, and the Gita, and what their convergence suggests about perennial wisdom regarding human agency.
These deserve their own treatment, which I hope to provide in a subsequent essay.
References
Antonovsky, A. (1979). Health, stress, and coping. Jossey-Bass.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.
Bandura, A. (1997). Self-efficacy: The exercise of control. W.H. Freeman.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. Harper & Row.
Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.
Frankl, V. E. (1946/2006). Man's search for meaning. Beacon Press.
Gawande, A. (2009). The checklist manifesto: How to get things right. Metropolitan Books.
Haidt, J. (2006). The happiness hypothesis: Finding modern truth in ancient wisdom. Basic Books.
Kobasa, S. C. (1979). Stressful life events, personality, and health: An inquiry into hardiness. Journal of Personality and Social Psychology, 37(1), 1–11.
Marcus Aurelius. (c. 170–180 CE/2002). Meditations (G. Hays, Trans.). Modern Library.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80(1), 1–28.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.
Sarasvathy, S. D. (2001). Causation and effectuation: Toward a theoretical shift from economic inevitability to entrepreneurial contingency. Academy of Management Review, 26(2), 243–263.
Seligman, M. E. P. (1972). Learned helplessness. Annual Review of Medicine, 23(1), 407–412.
Seligman, M. E. P. (1990). Learned optimism: How to change your mind and your life. Knopf.
Taleb, N. N. (2012). Antifragile: Things that gain from disorder. Random House.
The Bhagavad Gita. (c. 200 BCE–200 CE/1994). (E. Easwaran, Trans.). Nilgiri Press.