How to use AI to level up executive presence and improve product strategy
10 tips and examples to help you become a 10x PM.
Recently, an ex-Google PM and founder friend told me: “If models aren’t doing 10× your lift, you’re choosing 10× the pain.”
He was right. Since then I’ve treated AI as an infinite chief-of-staff. The more context I pour in, the higher the altitude I fly at. If you don’t want to be left behind, I advise you to come up to speed quickly.
However, beware: these tools are trained on your data, on people’s data, meaning the output is a stochastic simulation of the human spirit. While they are quite helpful, responses can be “average,” so add a little real humanity, and most importantly “you,” into the result. They generate; you validate!
With that, below are the ten moves that upgraded my executive presence, sharpened my product strategy, and gave me an almost unfair productivity edge. They’re distilled from hundreds of hours of experimenting with ChatGPT, Claude, and Gemini, plus a range of prototyping tools, as well as a few late-night “why didn’t anyone teach me this?” epiphanies.
Feel free to remix, but please use them. Your future self (and every engineer waiting on your specs) will thank you.
1. Rapid-Fire Prototypes: Show, Show, SHOW!
Why
A working prototype instantly tilts the room toward evidence: stakeholders can click, inspect, and argue with real outputs instead of imaginary ones. The faster you surface those tangible artefacts, the sooner execs reveal true concerns (data quality, latency, UX polish) and the sooner engineering can size real effort.
How I run it
I craft prompts by starting with business outcomes and user journeys, then detail the technical flow, data transformations, and expected outputs. My prompts specify the complete system architecture, from raw inputs through processing logic to API endpoints (that’s where a technical background helps). I include concrete requirements (libraries, file structure, evaluation metrics) and guardrails (performance constraints, testing coverage). The more detailed you are, the better the output. This structured approach enables AI agents to generate production-ready code scaffolding that I can immediately refine with Cursor, test, and deploy. The result: functional prototypes within hours instead of weeks, enabling rapid iteration based on real user feedback rather than lengthy planning cycles. I believe this will complement the PRD in the future.
Prompt Example:
For example, suppose I want to show a new feature that detects equipment failures before they happen in manufacturing plants, reducing downtime and maintenance costs through real-time IoT sensor monitoring that flags abnormal vibrations, temperatures, and other sensor inputs.
With it I can show the user journey and the impact on the customer, and demonstrate the end goals to engineering. Of course this is not production-ready, but it gets the message across to technical and non-technical folks alike.
ROLE
You are a senior data‐scientist + MLOps engineer tasked with delivering a runnable, containerised anomaly-detection prototype.
GOAL
Create a repo that ingests raw IoT sensor data, performs
• rigorous schema validation & cleaning
• feature engineering (rolling means/std, first-order deltas, cross-sensor ratios)
• exploratory data analysis with summary visuals
• anomaly detection via Isolation Forest (CPU-only)
• model evaluation appropriate to *unsupervised* learning
• two FastAPI routes:
– POST /train → retrain from a new CSV or mounted file
– POST /predict → score a JSON payload and return {anomaly: bool, score: float}
LIBRARIES (hard requirements)
Python 3.11 · Pandas · Scikit-learn · FastAPI · Uvicorn · Joblib · PyArrow · Matplotlib
*Keep third-party deps ≤ 10 and enumerate in README.*
DATA FLOW
• Ship a `/data` folder with `iot_sensors.csv` (≈1 MB synthetic demo) having columns:
device_id, ts, temp_C, volt, current, vib_x, vib_y, vib_z, pressure, humidity
• At start-up, the training script reads `$DATA_PATH`; if unset it falls back to `/data/iot_sensors.csv`.
• Schema validator must flag missing cols, type mismatches, NaNs, timestamp gaps, and sensor drift (simple z-score on rolling window).
• `POST /train` body: `{ "csv_url": "<https-wheretostore>" , "contamination": 0.02 }`
– Streams the file → appends to Parquet store → engineers features → tunes IsolationForest (respecting contamination %) → persists `model-<date>.joblib` + `metrics.json` → hot-swaps `/predict` model.
EVALUATION (unsupervised)
If the input CSV **does NOT** contain a label column:
• Produce plots of anomaly-score distribution vs. contamination threshold
• Compute mean/median score, IQR, and percentage above threshold
• Show silhouette score of Isolation Forest embedding (optional)
• Save a summary table in `metrics.json`.
If the CSV **includes** `is_anomaly` labels (0/1):
• Additionally compute precision, recall, and PR-curve, clarifying that labels are *only* for evaluation, not training.
MLOps / CI
• Unit tests via PyTest: schema validation, `/predict` on known benign & synthetic outlier payloads, `/train` happy-path + bad URL path.
• Provide GitHub Actions workflow running tests + flake8.
• Cold start (train + API ready) ≤ 60 s on a 2-CPU container.
DELIVERABLE STRUCTURE (render as Markdown code blocks)
/src FastAPI app + ML pipeline
/tests PyTest suites
/notebooks eda.ipynb (reproduce feature engineering + training)
/data seed CSV
/artifacts (empty; filled at runtime)
Dockerfile
docker-compose.yml
.gitignore
README.md local run, Docker, cloud deploy (Render/Vercel) + tuning tips
.github/workflows/ci.yml (or Makefile)
GUARDRAILS
1. Ask up to **two** clarifying questions if any requirement is unclear.
2. Docstrings + type hints throughout `/src`.
3. No GPU code; if added in future, isolate behind config flag.
OUTPUT
Return the full file tree as nested Markdown code blocks, ready to copy-paste to disk.
After the code, append a one-page “next steps” memo (feature store integration, model registry, monitoring).
BEGIN.
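To make the “simple z-score on rolling window” drift check from the schema-validator requirement concrete, here is a minimal sketch using only the Python standard library. The window size and threshold are illustrative assumptions, not values from the prompt:

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_flags(values, window=30, threshold=3.0):
    """Flag readings whose z-score against the trailing window exceeds the threshold."""
    buf = deque(maxlen=window)
    flags = []
    for v in values:
        if len(buf) >= 2 and stdev(buf) > 0:
            z = (v - mean(buf)) / stdev(buf)
            flags.append(abs(z) > threshold)
        else:
            flags.append(False)  # not enough history to judge yet
        buf.append(v)
    return flags

# A stable temperature signal with one sudden spike: only the spike is flagged.
readings = [20.0, 20.1, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 35.0, 20.1]
print(rolling_zscore_flags(readings, window=5))  # True only at the 35.0 spike
```

In the real repo this logic would run per sensor column inside the schema validator; the Isolation Forest handles the multivariate cases this univariate check misses.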
2. Strategy “Red Team”: Pre-empt the Tough Questions
Why
When you present, the most important thing is to answer the tough questions about hidden risks. Running a private AI red team lets you surface every objection (ROI doubts, tech bottlenecks, competitive blind spots, and more) before the meeting. That turns the final review from a rewrite cycle into quick assent and preserves your credibility as a PM who comes armed with arguments.
How I run it
I spin up a dedicated project, drop in the draft deck, our north-star memo, and a one-page critique framework. First prompt: “Play CFO and shred this.” The model lists weaknesses and the exact data it needs. I supply numbers, re-prompt for a second pass, then repeat once more with “Play CTO.” Each loop tightens the story and helps me uncover flaws.
Prompt Example:
ROLE
Act as the CFO performing a pre-board teardown.
CONTEXT
Files in project: Product_Strategy.pdf, North_Star.md, Critique_Framework.pdf
TASK
1. Grade each slide on:
• Revenue realism
• Competitive edge
• Feasibility
• Execution risk
• Alignment with north star
2. For every weakness, list:
• Pushback you’d raise
• Exact data or benchmark needed to fix it
• Severity (High / Med / Low)
RETURN in concise Markdown. Stop; await my data before re-scoring.
3. Brain-Dump ➜ Board-Ready Narrative
Why
Great ideas die in “blank-slide paralysis.” By letting an AI co-writer shape your raw thoughts into a coherent arc (hook, tension, proof, ask) you stay in flow and discover gaps before executives do. The model also throws back smart follow-up questions, turning a one-way dump into a guided brainstorming session.
How I run it
I paste unfiltered notes into a fresh chat inside a “Narratives” project. First ask for a five-section outline. Then I prompt, “Give me three questions that would sharpen novelty or emotional punch.” I answer only the ones that resonate, feed them back, and request a full draft. Two loops later the piece reads like it took a full writing sprint, when really it was 30 minutes of conversational refinement.
Prompt Example:
ROLE
You are a product-story coach.
GOAL
Craft a one-page narrative that wins VP approval for Feature X.
INPUT <<<Your Brain Dump>>>
STEP 1 – Create a five-section outline: hook, problem, insight, solution, ask.
STEP 2 – List three probing questions that would boost novelty, clarity, or emotion. Focus on areas I haven’t fully explored, and draw connections between different ideas I shared.
PAUSE for my answers.
STEP 3 – Integrate my answers and deliver the full draft in crisp, persuasive prose.
4. Copy That Converts — Internally and Externally
Why
A headline, Slack update, or error toast can swing activation, stakeholder trust, and even funding decisions. The difference between “meh” and “must-click” often lives in the six words of your tagline. An AI copy room lets you iterate on tone, clarity, and persuasion faster than any peer review, ensuring every launch blurb or exec note lands on the first try.
How I run it
Inside a “Messaging Lab” project I role-play three voices (content designer, growth PM, and a skeptical customer). I feed the model the current line plus context (persona, channel, desired action). It spits back multiple styles; I A/B test the top two, ship the winner, archive the chat, and keep the prompt ready for the next launch.
Prompt Example:
ROLE (to change per ICP Target and Product context)
1) Clarity-obsessed Content Designer
2) Activation-driven Growth PM
3) Skeptical Power User
CONTEXT
Product: Virtual Power Plant program participation
Channel: In-app pop-up at first login
Goal: Drive users to click “Enroll Now”
TASK
Rewrite the headline + subhead five ways, each ≤12 words.
Tones needed: visionary, urgent, playful, analytical, minimal.
After each variant, add one-sentence rationale (why it should convert).
INPUT LINE
“Enroll now and start saving on your electric BILL.”
5. Build a Personal + Boss “Operating System”
Why
Executive presence is 50 % insight, 50 % resonance. When you tailor every ask to your manager’s decision style (data-first, vision-first, risk-first, or whatever you’ve analyzed), you bypass debate and land on “yes” faster. Storing both your own and your boss’s work-OS in an AI sandbox turns the model into a rehearsal partner that predicts objections and suggests the framing that will click.
How I run it
I create a “Relationship OS” project. File #1: my profile (Enneagram, DISC, recurring feedback themes). File #2: my manager’s operating manual, public talks, favorite articles. Before a high-stakes chat I prompt, “Play my boss. React to this proposal.” The model answers in her voice and lists the three clarifications she’ll want. I tighten the pitch, walk into the meeting, and watch the real conversation unfold with high confidence.
With a little research you can even do this during an interview, if you build it for the hiring manager. I might do a full article on this later.
Prompt Example:
ROLE
• You have two personas loaded:
– ME: Enneagram 7, ENTP, values speed & upside
– BOSS (Maria): Enneagram 1, ISTJ, values user trust, hates hype
FILES
• /maria_operating_manual.pdf
• /northstar_mission.md
SCENARIO
I want Maria’s green-light to roll out a new “Daily Streak” push-notification feature in our user app.
TASK
1. Draft a 3-minute verbal pitch that:
• Opens with a user-trust statistic (to satisfy Maria)
• Shows how Daily Streaks increase 30-day retention (our KPI)
• Addresses her likely concerns: notification fatigue & privacy optics
2. List the top 3 questions Maria will still ask.
3. Suggest one follow-up artifact (mock, brief, or metric) that will close the decision loop.
DELIVER pitch script, questions, and follow-up suggestion in Markdown bullets.
6. Memory Hygiene: Control What the Model Remembers
Why
Your AI gets sloppy when yesterday’s problem-solving advice bleeds into today’s board prep. Mixed contexts cause hallucinations, leak sensitive data, and force extra clarifications. By controlling what the model stores, you keep outputs crisp, private, and on-brand.
How I run it
I use three lanes: TEMP chats for throw-away ideation, PROJECT threads for scoped work, and ARCHIVE once a thread is final. Weekly, I prune stale memory blocks so only living documents remain. The result: every new prompt starts with a clean slate, yet past gold is a click away, and each thread stays focused on its specific task.
Structure Example:
SYSTEM INSTRUCTIONS — Chat Memory Manager
RULES
1. Messages tagged #TEMP: do NOT store in long-term memory.
2. Messages tagged #ARCHIVE: remove entire thread from active memory, mark retrievable.
3. Messages without tags: store only if they add lasting project value.
ACK FORMAT
Reply with:
🕑 “Temp noted” → when #TEMP stored short-term only
🗄️ “Archived” → when #ARCHIVE processed
📌 “Saved” → when message added to project memory
BEGIN enforcing these rules immediately.
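As an illustration only, the decision table above behaves like this tiny dispatcher. Real chat tools manage memory internally; this sketch just mirrors the three rules and their acknowledgment formats:

```python
def route_message(text: str, adds_lasting_value: bool = False) -> str:
    """Mirror the memory rules: #TEMP -> short-term only, #ARCHIVE -> retrievable
    archive, untagged -> saved only when it adds lasting project value."""
    if "#TEMP" in text:
        return "🕑 Temp noted"
    if "#ARCHIVE" in text:
        return "🗄️ Archived"
    return "📌 Saved" if adds_lasting_value else "(not stored)"

print(route_message("#TEMP quick idea: gamify onboarding"))             # 🕑 Temp noted
print(route_message("#ARCHIVE Q2 retro thread"))                        # 🗄️ Archived
print(route_message("Final PRD v3 approved", adds_lasting_value=True))  # 📌 Saved
```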
7. Framework Injection: Teach the Model to Think Like a Consultant
Why
Large models know “a bit about everything,” but real leverage comes when they think with the exact framework your org swears by (AARRR funnel, Jobs-to-Be-Done, HEART, you name it). Uploading that source once means every future prompt delivers consultant-grade structure instead of generic lists, saving hours of re-formatting and signaling rigor to stakeholders.
How I run it
Inside a “Framework Vault” project I drop the canonical PDF or slide deck. First chat: “Summarize the framework in your own words.” I sanity-check the summary; if it’s right, I lock the instruction: “Use only this framework unless I override.” From then on, the model auto-formats user-journey maps, experiment backlogs, or feature briefs exactly in that mold, with no need for manual re-alignment.
Prompt Example:
ROLE
Product strategist for a skill-learning mobile app.
CONTEXT
• AARRR_Funnel.pdf uploaded in this project
• Company OKR: boost weekly active users (WAU) +20 % by Q4
TASK
1. Using ONLY the AARRR funnel, draft a 4-week experiment plan.
• 1 activation test
• 1 retention test
• 1 referral test
2. For each experiment include:
– Hypothesis
– Key metric & target lift
– Minimal viable change (≤1 sprint)
– Success / fail next step
FORMAT
Return in Markdown table ready for an exec review.
8. Deep-Research Agent: Instant Dossier on Markets, Competitors, People
Why
Pitch meetings and roadmap bets hinge on fresh intel. Relying on month-old market reports means you miss the podcast quote your competitor dropped last night or the spike in Glassdoor reviews hinting at a pivot. A targeted AI research pass turns scattered web crumbs into a tight dossier, so you reference real-time facts and surprise the room with relevance.
How I run it
I launch a blank “Intel Sweep” chat and instruct the model to generate Google dorks, social-media handles, and SEC query strings, then go fetch. Once the links return, I ask for a synthesis: three insights, two warnings, one oddball signal. Finally, I store the digest in the project and archive the raw crawl, keeping memory lean but context accessible.
Prompt Example:
ROLE
Competitive-intel analyst.
TARGET
“HabitHero” (top competitor to our wellness-tracking app).
TASK
1. Scrape the public web for the past 60 days using smart queries:
• site:yourcompetitorsite.com filetype:pdf OR ppt
• “yourcompetitor” AND (“launch” OR “beta” OR “retention”) last 60d
• GitHub repos, App Store reviews, Glassdoor comments
2. Return a concise digest:
• 3 strategic moves (product, pricing, partnerships)
• 2 red flags we can exploit
• 1 surprising insight that could inform our Q3 roadmap
3. Provide source links inline for each point.
LIMIT
Output ≤ 300 words; no marketing fluff—just actionable facts.
9. Conflict & Feedback Coach
Why
We’ve all been there: a disagreement with engineering over specs, or with a colleague over product vision, can derail velocity faster than any bug. A neutral AI mediator helps you unpack emotions, surface root needs, and script a solution-focused meeting so you can resolve the conflict proactively.
How I run it
In a “Tough Talks” project I paste the friction summary (people, issue, stakes, context, history). First prompt asks the model to label emotions and hidden interests. Next I request a 30-minute agenda with talking points and outcome goals. After the meeting I feed back reality vs. plan; the model suggests follow-ups, keeping the relationship on course.
Prompt Example:
ROLE
Mediator-GPT for conflict resolution.
PARTIES
• Me – PM for an energy management app, ENTP, values speed + experimentation
• Jordan – Lead iOS dev, ISFJ, values stability + clear scope
CONFLICT
Jordan is blocking my “Stories 2.0” experiment, citing tech debt and push-notification load. Launch deadline is in two weeks; tension is rising in Slack. (Expand and be very detailed here.)
TASK
1. Summarize each person’s likely feelings and underlying needs (≤3 bullets each).
2. Draft a 30-minute meeting agenda that:
• Opens with shared goals
• Surfaces Jordan’s concerns without blame
• Aligns on must-have vs nice-to-have scope
• Ends with a written next-step owner + deadline
3. Suggest a neutral follow-up artifact (doc, metric, or prototype) to ensure accountability.
FORMAT: Markdown bullets.
10. Prompt Factory: Let AI Build Prompts For You
Why
Every time you reinvent a prompt, you burn minutes, and remember: the goal is to optimize your time so you can work on what matters: your craft. Turning strong chats into reusable templates standardizes quality across the team. A living library of battle-tested prompts is like unit tests for your thinking: speed with consistency.
How I run it
When a chat produces gold, I tag it #TEMPLATE in a “Prompt Forge” project. Once a week I ask AI to distil all #TEMPLATE threads into fill-in-the-blank blueprints (role, goal, constraints, format). Those go into a shared Notion page and a snippets plug-in so the whole product org can summon them in two keystrokes.
Prompt Example:
ROLE
Prompt-Extraction Bot.
INPUT
<<<PASTE 2–3 exemplary chat transcripts that produced great output and that you know were very informative and helpful in your life / work>>>
TASK
1. Detect common structure—roles stated, context blocks, constraints, output format.
2. Produce a reusable template with placeholders, e.g.:
<ROLE>
GOAL: <GOAL>
CONTEXT: <KEY FILES / DATA>
CONSTRAINTS: <LIST>
OUTPUT FORMAT: <SCHEMA>
3. Add two robustness tweaks:
• Ask clarifying questions if GOAL is vague.
• Refuse tasks that violate CONSTRAINTS.
RETURN
• Completed template inside a Markdown code block.
• 3-sentence tip on when this template is most effective.
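To show what one of those fill-in-the-blank blueprints can look like once extracted, here is a sketch using Python’s stdlib `string.Template`. The placeholder names and sample values are my own illustration, not output from the extraction bot:

```python
from string import Template

# Reusable blueprint with the structure the extraction bot produces:
# role, goal, context, constraints, output format, plus the two robustness tweaks.
BLUEPRINT = Template("""\
ROLE: $role
GOAL: $goal
CONTEXT: $context
CONSTRAINTS: $constraints
OUTPUT FORMAT: $output_format
If GOAL is vague, ask clarifying questions before answering.
Refuse any task that violates CONSTRAINTS.""")

prompt = BLUEPRINT.substitute(
    role="Competitive-intel analyst",
    goal="Summarize HabitHero's last 60 days of public moves",
    context="App Store reviews, Glassdoor comments, press releases",
    constraints="<= 300 words, sources linked inline",
    output_format="Markdown bullets",
)
print(prompt)
```

A snippets plug-in or Notion page can store the same template as plain text; the code form just makes it trivial to validate that every placeholder gets filled before the prompt ships.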
Closing Thought
AI isn’t coming for product managers who treat it like an infinite graduate associate; it’s coming with them. Each play above shifts repetitive labor into the past, freeing you for the human skills no model owns: choosing the right bet, inspiring the team, and exercising judgment when data is uncertain and you need to trust your gut.
Adopt a couple of these and you’ll notice fewer rewrite loops and faster, more positive replies. Adopt them all and you won’t just look like a top-1% PM; you’ll operate like one.
Now go teach your co-pilot what “great” looks like and let it keep you there.