Why 1 Senior Program Manager ≠ 1 Junior Program Manager + 1 AI? 

The promise of AI is seductive. The logic seems clean: hire juniors, give them AI, save money.  Every day I hear about companies — from scrappy startups to global giants — replacing experienced talent with machines because, apparently, AI does their job “better.” 

Depending on the AI that writes the article, the news sounds either euphoric (“AI is finally exposing inefficiencies!”) or apocalyptic (“Reinvent yourself before the AI bulldozer gets to you”). Both tones are AI-generated, no doubt. 

Meanwhile, the market is full of senior managers. People with years of experience leading complex programs, who’ve made enough mistakes to spot risks before they surface: hidden interdependencies between projects, leadership dynamics within program teams, or political maneuvering. They’ve navigated org charts, time zones, egos, and cultural resistance. And they know how to find the right moment – and the right way – to address early friction before it turns into a crack that swallows the entire project. 

But why keep them, when you can just plug in a junior + ChatGPT? 

I’ve seen this story before. In Chile, it started during the migration wave: two migrants = one local. Then came the generational shift: two youngsters = one veteran. Now we’re in the age of one junior + one AI = one senior. The math never really works, but the narrative persists. 

So let’s take a closer look. Can the hype really deliver on the prophecy? 

“La IA no resuelve lo que la empresa no tiene resuelto.” 

My good friend and colleague, an expert in strategy and technology, Walter Caricato, nailed it in his article (Read here): AI doesn’t fix what the company hasn’t fixed already. At best, it accelerates. At worst, it amplifies dysfunction. Digital transformation doesn’t fail on technology; it fails on culture. His phrase should be tattooed on every C-suite’s hand before the next AI rollout. 

And if you want to know whether your organization is even ready for AI, start by measuring where you are. AI maturity assessments can give you that baseline — much like the data maturity models we used when building data lakes and data warehouses, or the enterprise IT architecture models that helped bring order to thousands of scattered applications. One example is the framework proposed by Sergio Vélez Maldonado (Read here), which evaluates key dimensions and shows where you stand before embarking on the AI journey. 
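To make “measuring where you are” concrete, here is a minimal sketch of what a maturity baseline can look like in practice. The dimensions, weights, and scores below are hypothetical placeholders for illustration, not the framework from Sergio’s article.

    # Toy AI-maturity baseline. Dimensions and weights are hypothetical,
    # not the actual framework referenced in the article above.
    DIMENSIONS = {
        "data_quality": 0.25,
        "process_clarity": 0.25,
        "talent_and_skills": 0.20,
        "governance": 0.15,
        "culture_of_feedback": 0.15,
    }

    def maturity_baseline(scores: dict) -> float:
        """Weighted average of 1-5 self-assessment scores per dimension."""
        return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

    # Example self-assessment (1 = ad hoc, 5 = optimized)
    example = {
        "data_quality": 2,
        "process_clarity": 3,
        "talent_and_skills": 3,
        "governance": 2,
        "culture_of_feedback": 1,
    }
    print(f"Baseline maturity: {maturity_baseline(example):.1f} / 5")

The number itself matters less than the conversation it forces: a low score on governance or culture is exactly the kind of gap an AI rollout will amplify rather than fix.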

But if you turn to AI to fix the very culture that made you lay off your last Program Manager, don’t be surprised when the machine starts telling you exactly what you want to hear. 

The Sycophant Effect 

Source: https://www.youtube.com/watch?v=3Wc67-MecIo 

If you’ve been following AI’s evolution for a while now, you have definitely heard about this concept: sycophancy. It came to light in late April this year, when OpenAI had to roll back its latest GPT-4o update. Sam Altman made a public statement, social media lit up with memes, and we firmly linked the word “sycophant” to AI. 

Mark Mullaly used it in his now-famous webinar, “What AI Cannot Do for Project Managers.” I found it a highly insightful perspective on the limitations of AI: more an algorithm that adjusts than an “intelligence” that “learns.” Mark is particularly sharp on the linguistic nuances and how they shape the way we see AI. (It’s free on PMI’s website. Yes, you should watch it.) He followed up with a piece titled “Your Sycophant Is Waiting,” where he warns us about AI’s most underdiscussed flaw: it flatters. It reflects your worldview back at you, politely, efficiently, confidently. It doesn’t probe. It doesn’t challenge. It affirms. 

Source: https://www.linkedin.com/pulse/sycophancy-vs-authenticity-pragya-singh/  

It’s true that not all AI is ChatGPT, but the growing awareness of ChatGPT’s flattery is a striking indicator of human bias (like the belief that promotional emails in Gmail vanish by magic). Going back to Walter’s reflection: if there’s a bias, if there’s a cultural or process flaw, AI will amplify it and, most likely, cover it with more flaws, building a colossus with feet of clay. 

Let’s say a leader designs a flawed process, drives cultural misalignment, and then fires their most experienced program lead. Now they’re asking AI, “How do I fix this mess?” AI will applaud their self-awareness. It will frame the question as a sign of deep emotional intelligence. It will suggest three action plans, color-coded and inspiringly titled. 

Tone? Strategic, philosophical, informal? AI will adjust. Just don’t expect it to say: 

“The real problem is… there is nobody left to fix it.” 

Smart Answers, Shallow Thinking 

Picture this. 

A junior Program Manager is asked to create a stakeholder engagement plan. They’re aiming for program execution excellence. They feed bios, sentiment data, and meeting notes into AI. The output is crisp and confident: 

  • “Stakeholder X is highly supportive.” 
  • “Stakeholder Y is innovation-driven and will likely champion the change.” 
  • “Stakeholder Z has a track record of collaboration.” 

Looks good, right? But let’s look closer at what happens in the real world: 

  • Stakeholder X’s “support” is just polite nodding (no skepticism) 
  • Stakeholder Y once torpedoed a similar program through silent delay (no memory) 
  • Stakeholder Z is actually stretched thin by another, conflicting initiative (no context, no politics) 

This is where AI’s talent for eloquence becomes a liability. It writes what feels true. What sounds good. What confirms optimism. But programs don’t succeed on good grammar — they succeed on hard truths. 
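To see the mechanic in code, here is a deliberately naive sketch of what “feed sentiment data in, get a confident label out” amounts to. The names, scores, and thresholds are hypothetical; the point is what never enters the calculation.

    # Deliberately naive: label stakeholders from sentiment scores alone.
    # Competing initiatives, past behavior, and politics never enter the
    # calculation, so they can never show up in the output.
    sentiment_scores = {
        "Stakeholder X": [0.80, 0.75, 0.85],  # polite nodding reads as strong support
        "Stakeholder Y": [0.60, 0.70, 0.65],  # past silent delays are invisible here
        "Stakeholder Z": [0.55, 0.60, 0.50],  # workload on other initiatives is invisible too
    }

    def label(scores):
        avg = sum(scores) / len(scores)
        if avg >= 0.75:
            return "highly supportive"
        if avg >= 0.60:
            return "likely champion of the change"
        return "collaborative, monitor engagement"

    for name, scores in sentiment_scores.items():
        print(f"{name}: {label(scores)}")

Three confident labels, zero hard truths. That gap is exactly what a senior Program Manager is paid to close.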

Just One More Example 

Let’s leave programs aside for a moment. 

I asked AI if I could use a certain hair product after a keratin treatment. It told me: “Absolutely, this product is great for after the keratin treatment — here’s a weekly schedule and some places you can buy it.” It even reassured me the product was sodium-free (which is key). 

I bought it. Checked the label. Sodium. A lot of it. 

I sent the label to AI, which promptly replied — in bold — that this product should not be used after a keratin treatment due to its sodium content, and congratulated me for my question. No apology. No trace of the previous advice. 

Now imagine that same AI is advising your junior PM on a $1M digital transformation program. 

Efficiency Without Judgment = Risk Multiplied 

The rise of AI in program environments is real — and full of potential. But it’s not a substitute for experience. It’s a tool. One that magnifies whatever environment you place it in. 

So here’s what I recommend — not as a checklist, but as a line in the sand: 

1. Use AI for efficiency, not for insight. 

Let AI: 

  • Draft timelines, reports, and meeting notes (see the sketch after this list) 

But don’t let it: 

  • Make risk decisions with cultural, political, or ethical implications (circling back to Mark and the webinar) 
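On the “let AI” side, here is a minimal sketch of drafting meeting minutes from raw notes, assuming the OpenAI Python SDK and an API key in the environment; the model name and the notes themselves are placeholders.

    # Minimal sketch: turn raw bullet notes into draft meeting minutes.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
    # the model name below is a placeholder, use whatever you have access to.
    from openai import OpenAI

    client = OpenAI()

    raw_notes = """
    - Maria (hypothetical): data migration slipping ~2 weeks, vendor unresponsive
    - Finance sign-off for phase 2 still pending
    - Decision: freeze scope until Friday's steering committee
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Turn raw bullet notes into concise meeting minutes with decisions, actions, and owners."},
            {"role": "user", "content": raw_notes},
        ],
    )
    print(response.choices[0].message.content)

Keep a human on the other side of the draft: the AI formats and summarizes, a person decides what counts as a risk and what gets escalated.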

2. Clarify roles. AI is not your escalation path. 

Many programs fall into a false comfort zone: junior talent uses AI to “feel” senior. Warning signs get missed. Every question is answered in a calm, balanced, and confident tone. But the real risks never make it to the surface. 

3. Build feedback loops — not echo chambers. 

AI learns from prompts and inputs. Junior PMs learn from feedback. If neither gets challenged, both reinforce false confidence. 

1 (one) Senior Program Manager can see around corners. 

Strategic Program Management guides through ambiguity. 

Program Governance poses the questions no one wants to hear. 

You can’t plug that into a prompt. 

So by all means, use AI. Please. Don’t fall behind. You don’t want to be the grandpa still using a landline because mobile phones “damage the brain.” 

But don’t fire the one person who knows where your weak spots really are — and has the guts to say it. 

Subscribe and share to shape how we will lead with AI in the future 

What challenges have you faced when using AI in program management? 
Are you trying to cut costs or improve efficiency by bringing AI into the mix? 
Or maybe I’ve got it wrong — convince me. 
