Improving Software Acquisition Outcomes for Marine Corps Program Leaders
Equipping Colonels and PMs to Lead Sustainable, Marine-Aligned, and AI-Aware Software Programs
The workshop directly supports the priorities laid out in two major 2025 directives:
- The Secretary of Defense memo on Directing Modern Software Acquisition to Maximize Lethality (March 2025), which mandates faster, more responsive software acquisition using the Software Acquisition Pathway (SWP).
- The Presidential Executive Order on Modernizing Defense Acquisitions (April 2025), which calls for flexible contracting, integration of commercial innovation, and reform of defense software development to prioritize speed, quality, and impact.
Marine Corps leaders trained in this workshop will be better equipped to implement these mandates, accelerate operational software readiness, and deliver durable solutions aligned with mission needs.
Modern Marine Corps missions increasingly depend on adaptable, resilient software systems that can evolve rapidly and perform reliably under mission-critical conditions. However, many acquisition efforts fail—not from lack of effort, but due to outdated processes, misplaced incentives, and over-reliance on frameworks that obscure technical risk. This workshop prepares Marine Corps O-6 leaders to meet the software challenges of modern warfare through outcome-oriented oversight, user-centered development, and sustainable technical strategy.
Audience
Active-duty Marine Corps O-6 officers in command, oversight, or software acquisition leadership roles
Format
3 Days | Half-day sessions (~4 hours/day)
In-person instruction with online labs and group exercises
Goals
- Equip senior leaders to oversee software acquisition efforts with clarity and outcome orientation
- Dispel harmful myths introduced by process-heavy frameworks
- Demonstrate how to iterate toward clarity before investing in engineering
- Prevent vendor lock-in through maintainable code and transparent architecture
- Provide a leadership-level framework for delivering scalable, sustainable warfighting software
Agenda
Day 1
Leadership Role, Failure Patterns & Rapid Product Validation
1.1 Leadership Mindset
- Understand recurring failure patterns in Marine Corps software programs.
- Reframe the PM's and senior leader's role: outcomes over process adherence.
- Identify the "Process Mind Virus": overreliance on Agile, SAFe, DevSecOps, and Lean.
- Reject shallow compliance in favor of real learning and sustainability.
1.2 Core Leadership Responsibilities
- Product clarity
- Real user engagement
- Timely investment
- Code and architecture transparency
1.3 Iterate Before You Build
- Learn how to use low‑cost RIMs (Realistic Interactive Mockups) and RIPs (Realistic Interactive Prototypes).
- Enforce real user validation before engineering investment; eliminate surrogates and proxies for field‑user feedback.
- Connect rapid iteration with Marine Corps environments and PMO dynamics.
- Assess how to structure and support product teams in Marine Corps programs.
Key Takeaway: You are accountable for mission outcomes—not for whether a team is “Agile.” Lead iteration, discovery, and maintainability, and never let end users see a line of code that hasn’t already been validated in a RIM/RIP.
Day 2
Contracting for Outcomes, Technical Talent Growth & Oversight
2.1  Acquisition Strategy
- Why Time‑and‑Materials contracts work against good outcomes.
- Preferred alternatives: use the Software Acquisition Pathway (SWP) with Commercial Solutions Openings (CSOs) and Other Transactions (OTs) as default methods, supported by outcome‑focused Statements of Objectives (SOOs) instead of traditional Statements of Work (SOWs).
2.2  Code Health & Technical Debt
- Recognize how unreadable code and opaque architecture create vendor lock‑in.
- Treat technical debt as a leading indicator of program risk.
- Introduce AI code‑maintainability analytics for leadership visibility.
- Leverage AI tools to accelerate maintainability reviews and refactoring of legacy code (a minimal analytics sketch follows this list).
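To make "code‑maintainability analytics for leadership visibility" concrete, the sketch below scans a repository and flags low‑maintainability files and overly complex functions for a leadership dashboard. It is a minimal sketch assuming the open‑source Python package radon; the thresholds are invented for illustration, and plain static analysis stands in here for the AI‑driven analytics covered in the session.

```python
"""Repository maintainability snapshot for a leadership review.

A minimal sketch, assuming the open-source `radon` package
(pip install radon). Thresholds are illustrative, not doctrinal.
"""
from pathlib import Path

from radon.complexity import cc_rank, cc_visit
from radon.metrics import mi_visit

MI_FLOOR = 65     # assumed floor: flag files below this maintainability index
CC_CEILING = 10   # assumed ceiling: flag functions above this cyclomatic complexity


def scan(repo_root: str) -> list[tuple[str, float, list[str]]]:
    """Return (file, maintainability index, risky functions) for flagged files."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        source = path.read_text(encoding="utf-8", errors="ignore")
        try:
            mi = mi_visit(source, multi=True)  # 0-100, higher is healthier
            risky = [
                f"{block.name} (CC={block.complexity}, rank {cc_rank(block.complexity)})"
                for block in cc_visit(source)
                if block.complexity > CC_CEILING
            ]
        except SyntaxError:
            continue  # skip files radon cannot parse
        if mi < MI_FLOOR or risky:
            findings.append((str(path), mi, risky))
    return findings


if __name__ == "__main__":
    for file, mi, risky in scan("."):
        flagged = f"; risky functions: {', '.join(risky)}" if risky else ""
        print(f"{file}: MI={mi:.1f}{flagged}")
```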
2.3  Secrets to Achieving High‑Quality Software
- High‑quality software starts with clearly defined quality goals (performance, security, and other “‑ilities”); a worked example follows this list.
- These goals must be agreed upon early and drive architectural direction.
- They filter vendor proposals and internal design decisions.
- When quality goals are unclear, no amount of rework will reliably produce high‑quality software.
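As a worked example of quality goals defined clearly enough to drive architecture and filter proposals, the sketch below expresses a handful of "‑ilities" as measurable targets. The goal names, metrics, and numbers are assumptions for illustration, not prescribed values.

```python
"""Quality goals written as measurable, testable targets.

A minimal sketch; every name and number here is an illustrative
example a program would negotiate, not a prescribed value.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class QualityGoal:
    name: str    # the "-ility" being pinned down
    metric: str  # how it will be measured
    target: str  # the agreed, testable threshold


QUALITY_GOALS = [
    QualityGoal("performance", "p95 response time under field load", "<= 200 ms"),
    QualityGoal("security", "open critical/high findings at release", "0"),
    QualityGoal("availability", "uptime across the mission window", ">= 99.9%"),
    QualityGoal("maintainability", "maintainability index of delivered code", ">= 70"),
]

# The same list filters both sides: vendor proposals are scored against
# these targets, and internal design reviews check architecture against them.
for goal in QUALITY_GOALS:
    print(f"{goal.name}: {goal.metric} {goal.target}")
```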
2.4  Growing Technical Talent
- Appoint and support coding‑craftsmanship mentors to raise code quality.
- Use code reviews to teach; emphasize simplicity over cleverness and skill over certifications; evaluate engineers by outcomes, not titles.
2.5  Oversight & Metrics Workshop
- Spot shallow progress: velocity obsession, demo theater, “done” theater.
- Implement oversight based on:
  - Maintainability scoring
  - RIM/RIP validation status
  - Refactor and bug‑fix trends
- Create metrics that track (a scorecard sketch follows this list):
  - Validation, clarity, and rework due to failed discovery
  - Quality of tests and code health
  - Maintainability improvement over time
  - Waste from improper discovery and poor testing
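Here is a minimal sketch of what such an oversight scorecard could look like in practice. The field names, thresholds, and red/amber/green roll‑up are illustrative assumptions, not a mandated metric set.

```python
"""Illustrative oversight scorecard for a leadership review.

A minimal sketch; field names and thresholds are assumptions for
demonstration, not a mandated USMC metric set.
"""
from dataclasses import dataclass


@dataclass
class ProgramScorecard:
    maintainability_index: float            # e.g., a 0-100 static-analysis score
    rim_rip_validated_pct: float            # % of features user-validated in a RIM/RIP
    refactor_to_feature_ratio: float        # refactoring commits / feature commits
    rework_from_failed_discovery_pct: float # % of effort redoing unvalidated work

    def status(self) -> str:
        """Roll up to a red/amber/green call for a program review."""
        red = (self.maintainability_index < 50
               or self.rim_rip_validated_pct < 50
               or self.rework_from_failed_discovery_pct > 30)
        amber = (self.maintainability_index < 70
                 or self.rim_rip_validated_pct < 80
                 or self.refactor_to_feature_ratio < 0.1)
        return "RED" if red else "AMBER" if amber else "GREEN"


# Example review input (fabricated numbers for illustration only)
card = ProgramScorecard(
    maintainability_index=62.0,
    rim_rip_validated_pct=75.0,
    refactor_to_feature_ratio=0.15,
    rework_from_failed_discovery_pct=12.0,
)
print(card.status())  # -> "AMBER"
```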
2.6  Risk Management Strategies
- Detect risk early in accelerated acquisition cycles.
- Use leadership questioning to expose hidden delivery risk.
- Balance speed and sustainability without micromanaging.
Key Takeaway: Culture follows inspection. Track code health, discovery quality, and user validation, and your teams will deliver better software faster.
Day 3
AI‑Enabled Force, High‑Rigor Practices & Continuous Improvement
3.1  Leading in an AI‑Enabled Environment
- Establish AI capability as a core leadership responsibility.
- Drive AI adoption with command‑level expectations and guardrails; make AI tools part of your team’s everyday development workflow.
- Issue policy: AI accelerates delivery but never replaces judgment; demand architectural clarity and traceability.
- Prevent unchecked AI use in safety‑critical or ambiguous areas.
- Draft an AI Oversight Memo setting out what is encouraged, required, and off‑limits.
3.2  AI Tooling Guide
- What to adopt:
  - Full‑context AI prototyping tools
  - AI assistants for test generation, refactoring, and documentation
  - Maintainability analytics for code reviews and oversight
- Where to adopt:
  - Early prototyping, legacy‑code refactoring, test expansion
  - Teams with strong review discipline and craftsmanship mentors
- How to adopt without compromising maintainability:
  - Require human review and refactor passes
  - Use rubrics to evaluate AI‑generated code (a rubric sketch follows this list)
  - Track maintainability metrics across AI‑augmented workflows
  - Apply the same rigor to AI code as to human code
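As one way to put "use rubrics to evaluate AI‑generated code" into practice, the sketch below weights a handful of review criteria into a single score. The criteria and weights are assumptions a program would tailor, not a published standard.

```python
"""Illustrative review rubric for AI-generated code.

A minimal sketch; the criteria and weights are assumptions a program
would tailor during adoption, not a published standard.
"""

# Each criterion is scored 0-5 by a human reviewer during code review.
RUBRIC = {
    "readability":       0.25,  # would a new team member follow it?
    "test_coverage":     0.25,  # meaningful tests, not generated stubs
    "architectural_fit": 0.20,  # matches the agreed design and interfaces
    "traceability":      0.15,  # requirement/ticket linkage is clear
    "security_review":   0.15,  # no unvetted dependencies or unsafe patterns
}


def rubric_score(scores: dict[str, int]) -> float:
    """Weighted 0-5 score; merge only if every criterion was scored."""
    missing = RUBRIC.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return sum(RUBRIC[name] * scores[name] for name in RUBRIC)


# Example: one reviewer's scores for an AI-generated change (illustrative)
print(rubric_score({"readability": 4, "test_coverage": 3,
                    "architectural_fit": 5, "traceability": 4,
                    "security_review": 4}))  # -> 3.95
```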
3.3  High‑Consequence Software Requires High‑Rigor Practices
- Identify critical modules tied to mission success, safety, or interoperability.
- Require enhanced design clarity, traceability, and maintainability from the start.
- Apply the Testing Rectangle model: heavy emphasis on integration and end‑to‑end tests (a CI‑gate sketch follows this list).
- Enforce independent review gates and stricter release criteria for these modules.
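Here is a minimal sketch of a CI gate that enforces the Testing Rectangle's emphasis on integration and end‑to‑end tests for critical modules. The directory layout (tests/unit, tests/integration, tests/e2e) and the 50% threshold are assumptions for illustration.

```python
"""Illustrative CI gate for the Testing Rectangle.

A minimal sketch: fail the build when critical modules lack enough
integration/end-to-end coverage. The directory convention and the
threshold below are assumptions for demonstration.
"""
import sys
from pathlib import Path

# Assumed convention: tests live under tests/unit, tests/integration, tests/e2e
MIN_INTEGRATION_E2E_SHARE = 0.5  # rectangle, not pyramid: at least half the tests


def test_counts(test_root: str = "tests") -> dict[str, int]:
    """Count test files of each kind under the assumed directory layout."""
    root = Path(test_root)
    return {kind: sum(1 for _ in (root / kind).rglob("test_*.py"))
            for kind in ("unit", "integration", "e2e")}


def gate() -> int:
    counts = test_counts()
    total = sum(counts.values()) or 1  # avoid division by zero
    share = (counts["integration"] + counts["e2e"]) / total
    print(f"test mix: {counts}, integration+e2e share = {share:.0%}")
    if share < MIN_INTEGRATION_E2E_SHARE:
        print("FAIL: high-consequence modules need more integration/e2e tests")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(gate())
```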
3.4  Sustaining Excellence & Continuous Refactoring
- Continuous refactoring pipelines powered by AI suggestions and mentor oversight keep debt low and learning high.
Key Takeaway: AI is essential to both speed and quality. Lead its adoption with clear guardrails, enforce maintainability, and apply extra rigor where failure costs lives.