The DCMA Trap
The DCMA 14-point assessment has become the gold standard for schedule health. And it should be — it catches real problems. Missing logic, negative float, hard constraints, high-duration activities, dangling starts and finishes. Run the metrics, check the thresholds, produce the report.
But I’ve reviewed schedules that pass all 14 points and are still fundamentally broken.
On a $3.4B transportation program last year, the contractor's schedule scored green across every DCMA metric. No negative float. Fewer than 5% of incomplete activities missing predecessors or successors. Constraint ratio under threshold. The schedule looked healthy on paper. In practice, nobody on the project team trusted it. Progress updates were three weeks behind. Resource loading had no connection to the actual workforce on site. The WBS structure made it impossible to map costs to schedule activities.
A clean DCMA score tells you the schedule is technically compliant. It doesn’t tell you the schedule is useful.
What the 14 Points Miss
1. Calendar Consistency
This one creates subtle problems that compound over time.
I’ve opened P6 databases with 47 different calendars. Activities on a 5-day work week driving successors on a 7-day calendar. Holiday calendars that haven’t been updated since 2022. A “shutdown calendar” someone created for a turnaround that’s now accidentally assigned to 200 construction activities.
The DCMA assessment doesn’t evaluate whether your calendars make sense together. It checks logic and float — it doesn’t check whether a 5-day predecessor feeding a 7-day successor is creating phantom float that masks a real scheduling conflict.
What to check: Pull a calendar report from P6. List every calendar in use, the number of activities assigned to each, and the working hours per week. Flag any calendar that hasn’t been reviewed in the past 12 months. Verify that holiday dates are current. Confirm that calendar assignments align with the actual work patterns of the crews performing the work.
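Most of that first pass can be scripted against the calendar export. A minimal sketch, assuming you've dumped the calendar list to CSV; the file name and column headers here are placeholders, so match them to whatever your P6 export actually produces:

```python
import csv
from datetime import datetime, timedelta

# Hypothetical export: one row per calendar with its activity count,
# weekly working hours, and the date it was last reviewed.
STALE_AFTER = timedelta(days=365)
today = datetime.today()

with open("calendars.csv", newline="") as f:
    for row in csv.DictReader(f):
        name = row["calendar_name"]
        activity_count = int(row["activity_count"])
        hours_per_week = float(row["hours_per_week"])
        last_reviewed = datetime.strptime(row["last_reviewed"], "%Y-%m-%d")

        if activity_count == 0:
            print(f"UNUSED: {name} has no activities assigned")
        if today - last_reviewed > STALE_AFTER:
            print(f"STALE: {name} not reviewed since {last_reviewed:%Y-%m-%d}")
        # 40 and 56 stand in for 5x8 and 7x8 patterns; adjust to your site.
        if hours_per_week not in (40.0, 56.0):
            print(f"ODD HOURS: {name} works {hours_per_week} hours/week")
```

The cross-calendar check (a 5-day predecessor feeding a 7-day successor) needs the relationship table as well, but the approach is the same: join each link to the calendars on both ends and flag mismatches.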
2. WBS Structure Quality
The WBS is the skeleton of the schedule. If it’s malformed, everything built on top of it — reporting, earned value, resource aggregation — is compromised.
I’ve seen a WBS that was 12 levels deep on the mechanical branch and 2 levels deep on electrical. Work packages that didn’t align with cost accounts, making earned value calculations meaningless. Summary levels created because “that’s how the old schedule was set up” with no actual purpose.
What to check: Map the schedule's WBS against the project's WBS dictionary (if one exists). Verify that WBS levels are consistent across disciplines. Confirm that the lowest WBS level corresponds to controllable work packages. Check that every WBS node has at least one activity; empty WBS branches are clutter.
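The depth-consistency check lends itself to a quick script. A sketch, assuming an activity export with a dotted WBS path per row; the column names are mine, not P6's:

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per activity with its full WBS path,
# e.g. "PROJ.MECH.PIPE.WP-104".
activities_per_node = defaultdict(int)

with open("activities.csv", newline="") as f:
    for row in csv.DictReader(f):
        activities_per_node[row["wbs_path"]] += 1

# Max depth per top-level branch: a 12-level mechanical branch sitting
# next to a 2-level electrical branch shows up immediately.
depth_by_branch = defaultdict(int)
for path in activities_per_node:
    parts = path.split(".")
    branch = parts[1] if len(parts) > 1 else parts[0]
    depth_by_branch[branch] = max(depth_by_branch[branch], len(parts))

for branch, depth in sorted(depth_by_branch.items()):
    print(f"{branch}: max depth {depth}")
```

Empty WBS branches won't appear in an activity export at all; finding them means pulling the WBS table separately and diffing it against the nodes that carry activities.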
3. Resource Rate Validation
Resources loaded at $0/hour. Rates that haven’t been updated since the original baseline was set three years ago. Equipment resources carrying labor rates. A “TBD” resource assigned to 400 activities.
DCMA doesn’t evaluate resource integrity. It checks whether resources are loaded (and even that only loosely). It doesn’t check whether the loading is meaningful.
On an oil and gas project, I found the schedule showed 120,000 labor hours remaining. The project controls team was reporting 85,000 hours to complete based on their independent estimate. The delta came from rate table errors and duplicate resource assignments that had accumulated over 18 months of schedule updates. Nobody had reconciled the schedule’s resource data against the cost system in over a year.
What to check: Export resource assignments and rates. Compare total budgeted hours by discipline against the cost estimate. Flag resources with $0 rates, duplicate assignments on the same activity, and any resource whose rate hasn’t changed since the baseline.
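A sketch of that export analysis, assuming an assignment-level CSV; the column names (price_per_unit, budgeted_units, discipline) are illustrative, not a guaranteed P6 layout:

```python
import csv
from collections import Counter, defaultdict

zero_rate = []
assignments = Counter()
hours_by_discipline = defaultdict(float)

with open("assignments.csv", newline="") as f:
    for row in csv.DictReader(f):
        if float(row["price_per_unit"]) == 0.0:
            zero_rate.append((row["resource_id"], row["activity_id"]))
        # Duplicate detection: same resource on the same activity twice.
        assignments[(row["resource_id"], row["activity_id"])] += 1
        hours_by_discipline[row["discipline"]] += float(row["budgeted_units"])

print(f"{len(zero_rate)} assignments priced at $0/hour")
for (res, act), n in assignments.items():
    if n > 1:
        print(f"DUPLICATE: resource {res} assigned {n}x to activity {act}")
for disc, hours in sorted(hours_by_discipline.items()):
    # Compare these totals line by line against the cost estimate.
    print(f"{disc}: {hours:,.0f} budgeted hours")
```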
4. Baseline Integrity
This is where schedule forensics meets schedule health.
Activities baselined with dates that precede the contract notice to proceed. Baseline durations of zero on activities that clearly require work. A “current baseline” that was set six months into execution by copying the current schedule — effectively erasing all variance.
I reviewed a schedule where the contractor had re-baselined four times in two years. Each time, the justification was “approved scope change.” Each time, it conveniently eliminated months of negative variance. The DCMA metrics looked fine against the latest baseline. The original project completion date had slipped 14 months, and you couldn’t see it without digging into the baseline history.
What to check: Compare the current baseline against the original baseline (BL-0). Document every re-baseline event with its justification. Verify that baseline dates fall within the contractual period of performance. Check for activities with a baseline finish before the current data date but zero percent complete: are they genuinely not started, or are they missing progress?
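The mechanical parts of that review can be scripted. A sketch, assuming a baseline-comparison export with BL-0 dates and current percent complete; the dates and column names are placeholders:

```python
import csv
from datetime import datetime

DATA_DATE = datetime(2025, 1, 1)  # placeholder: use the schedule's data date
NTP = datetime(2023, 6, 1)        # placeholder: contractual notice to proceed

def d(s):
    return datetime.strptime(s, "%Y-%m-%d")

with open("baseline_compare.csv", newline="") as f:
    for row in csv.DictReader(f):
        aid = row["activity_id"]
        if d(row["bl0_start"]) < NTP:
            print(f"{aid}: baselined before notice to proceed")
        if float(row["bl_duration_days"]) == 0 and row["activity_type"] == "Task":
            print(f"{aid}: zero baseline duration on a task")
        # Baseline finish already behind the data date with no progress:
        # either genuinely late to start, or missing status updates.
        if d(row["bl_finish"]) < DATA_DATE and float(row["pct_complete"]) == 0:
            print(f"{aid}: baseline finish passed, still 0% complete")
```

The re-baseline history itself can't be recovered from a single export; it has to come from a documented change log, which is exactly why recording every re-baseline event matters.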
5. Coding Structure
Activity codes are the filtering and reporting backbone of a P6 schedule. When they’re inconsistent, every report that depends on them is unreliable.
The pattern is always the same: someone set up the activity codes during project mobilization. That person left. The next scheduler added their own codes. A third scheduler ignored codes entirely. Two years in, you have 15 activity code types, 6 of which are actually used, and none of which are consistently assigned.
What to check: List all activity code types and values. For each code type, measure the percentage of activities that have a value assigned. Anything below 90% assigned is suspect. Look for code values with fewer than 5 activities — they’re often typos or abandoned categories. Verify that code assignments align with the WBS structure and don’t create contradictions.
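Coverage is easy to measure once the codes are exported. A sketch, assuming one row per activity per code type, with a blank value where nothing is assigned; as elsewhere, the file and column names are stand-ins:

```python
import csv
from collections import defaultdict

total_activities = set()
assigned = defaultdict(set)
value_counts = defaultdict(int)

with open("activity_codes.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_activities.add(row["activity_id"])
        if row["code_value"]:
            assigned[row["code_type"]].add(row["activity_id"])
            value_counts[(row["code_type"], row["code_value"])] += 1

for code_type, acts in sorted(assigned.items()):
    pct = 100 * len(acts) / len(total_activities)
    flag = "  <- below 90%, suspect" if pct < 90 else ""
    print(f"{code_type}: {pct:.0f}% of activities coded{flag}")

for (ctype, value), n in sorted(value_counts.items()):
    if n < 5:
        print(f"SPARSE: {ctype}={value} on only {n} activities (typo or abandoned?)")
```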
6. Constraint Logic
DCMA checks for hard constraints and measures the constraint ratio. That’s necessary. But the real question isn’t “how many constraints exist?” — it’s “why is each constraint there?”
Mandatory constraints that exist because the scheduler couldn’t figure out the logic. “Start On” constraints that were set during the bidding phase and never updated after contract award. “Finish No Later Than” constraints on every milestone because the project manager wanted to “hold the dates.”
I’ve cleaned up schedules where removing unjustified constraints revealed 60+ days of negative float that had been invisible. The schedule had been reporting green. The project was actually two months behind.
What to check: Export every constraint. Categorize each one: contractual milestone, external dependency, regulatory requirement, or “unknown.” Every constraint in the “unknown” category needs investigation. If more than 20% of your constraints can’t be justified, you have a logic integrity problem.
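The categorization itself is manual work, but the bookkeeping isn't. A sketch, assuming you maintain a justification column against the constraint export as you investigate each one (P6 won't give you that column; you build it):

```python
import csv
from collections import Counter

counts = Counter()
with open("constraints.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Expected values: contractual, external, regulatory, unknown.
        counts[row.get("justification") or "unknown"] += 1

total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({100 * n / total:.0f}%)")

if total and 100 * counts["unknown"] / total > 20:
    print("WARNING: over 20% of constraints unjustified: logic integrity problem")
```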
7. Progress Methodology
How is percent complete being calculated? This question alone has uncovered more schedule problems than any DCMA metric.
Duration percent complete, P6's default for tasks, is computed from durations alone: original duration minus remaining duration, divided by original duration. As the data date advances and remaining duration burns down, the percentage climbs whether or not work is actually happening. Physical percent complete requires manual input. Some schedulers use duration percent for everything because it's automatic. The result: an activity that started 6 months ago with a 12-month original duration shows 50% complete regardless of actual progress.
On a power generation project, I found 300+ activities using duration percent complete where physical percent would have been appropriate. The schedule was showing 67% complete. The field team’s physical progress assessment was 54%. That 13-point gap represented roughly $40M in earned value misstatement.
What to check: Run a report on percent complete type by activity. Flag any activity using duration percent that has a duration longer than 20 days — these almost certainly need physical percent or units percent. Check for activities showing actual start dates but 0% complete, or 100% complete without an actual finish date. Both indicate data integrity failures.
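All three flags reduce to row-level tests on an activity export. A sketch with illustrative column names:

```python
import csv

with open("progress.csv", newline="") as f:
    for row in csv.DictReader(f):
        aid = row["activity_id"]
        pct = float(row["pct_complete"])
        duration = float(row["duration_days"])

        # Long activities on duration percent almost certainly need
        # physical or units percent instead.
        if row["pct_type"] == "Duration" and duration > 20:
            print(f"{aid}: duration % on a {duration:.0f}-day activity")
        # Started but showing no progress, or finished without an actual
        # finish: data integrity failures, not judgment calls.
        if row["actual_start"] and pct == 0:
            print(f"{aid}: actual start recorded but 0% complete")
        if pct == 100 and not row["actual_finish"]:
            print(f"{aid}: 100% complete without an actual finish date")
```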
The Organizational Layer
The technical checks above will find problems in the schedule data. But a schedule doesn’t exist in isolation — it exists within an organizational context that determines whether those problems get fixed or get worse.
Governance Maturity
Who owns the schedule? Not in theory — in practice. Who has the authority to approve a logic change, add an activity, or modify a constraint? On healthy projects, the answer is clear. On unhealthy ones, everyone and no one.
Process Compliance
Is there a scheduling procedure? Is anyone following it? I’ve reviewed scheduling procedures that were 80 pages long and had never been opened after approval. The procedure is only useful if it’s practical enough that schedulers actually follow it.
User Competency
Can the schedulers explain their logic? I ask this question in every health check. Pick an activity at random and ask the scheduler why it has the predecessors and successors it does. If they can't explain it without opening the schedule, the logic was probably built mechanically rather than thoughtfully.
Change Management
How are scope changes reflected in the schedule? Is there a formal process for adding activities, or does the scheduler just add them when the project manager mentions them in a meeting? Are deleted activities actually removed, or are they sitting at 0% complete with a “Not Started” status?
Building a Health Check Program
A one-time schedule health check finds problems. A health check program prevents them.
Monthly: Automated Metrics
Run the DCMA 14 points plus a handful of additional automated checks — calendar consistency, coding coverage, constraint categorization. These can be scripted or pulled from P6’s built-in schedule check feature. Generate a dashboard. Track trends. A schedule that’s getting worse month over month needs intervention before it’s a crisis.
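One way to wire the monthly pass together, strictly as a sketch: run each check as a function that returns a finding count, append the counts to a history file, and flag anything trending worse. The check bodies here are placeholders standing in for the scripts sketched in the sections above:

```python
import csv
import datetime
import pathlib

def check_stale_calendars():
    return 3  # placeholder: wire in the real check

def check_missing_code_assignments():
    return 120  # placeholder

CHECKS = {
    "stale_calendars": check_stale_calendars,
    "missing_code_assignments": check_missing_code_assignments,
}

HISTORY = pathlib.Path("health_history.csv")
today = datetime.date.today().isoformat()

# Load the last known counts; later rows overwrite earlier ones,
# so we end up with the most recent value per metric.
previous = {}
if HISTORY.exists():
    with HISTORY.open(newline="") as f:
        for row in csv.DictReader(f):
            previous[row["metric"]] = int(row["count"])

with HISTORY.open("a", newline="") as f:
    writer = csv.writer(f)
    if HISTORY.stat().st_size == 0:
        writer.writerow(["date", "metric", "count"])
    for metric, fn in CHECKS.items():
        count = fn()
        writer.writerow([today, metric, count])
        if metric in previous and count > previous[metric]:
            print(f"WORSENING: {metric} went {previous[metric]} -> {count}")
```

The point isn't the script; it's that trend data exists at all, so deterioration is visible before the quarterly deep dive.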
Quarterly: Deep Dive
A manual review of the items above — WBS structure, resource integrity, baseline fidelity, progress methodology. This requires a qualified scheduler spending 2-3 days with the schedule data, not just running reports. Interview the scheduling team. Walk through recent logic changes. Compare the schedule’s story against the project’s actual status.
Annually: Organizational Assessment
Evaluate the governance, process, and competency dimensions. Are scheduling procedures current? Are they being followed? Do schedulers need training? Does the organization have the right tools and templates?
The Health Check Report
A useful health check report has four sections:
- Executive Summary — Overall health rating (Red/Amber/Green), top 3 findings, recommended immediate actions.
- Metric Dashboard — DCMA 14 points plus extended metrics, with trend data showing direction of travel.
- Detailed Findings — Each issue documented with severity, evidence (specific activities or data), root cause, and recommendation.
- Improvement Roadmap — Prioritized actions with effort estimates and responsible parties.
The report should be direct. “The schedule has 847 activities without activity code assignments, making discipline-level reporting unreliable” is more useful than “activity coding could be improved.” Specificity creates accountability.
The Standard Worth Aiming For
A schedule that passes the 14 points is a schedule that meets minimum standards. A healthy schedule is one that people actually trust and use to make decisions.
I’ve worked on projects where the weekly scheduling meeting was the most productive hour of the week — because the schedule reflected reality, the team understood it, and decisions could be made with confidence. I’ve also sat in meetings where the schedule was projected on the wall and systematically ignored because everyone in the room knew it was fiction.
The difference wasn’t the DCMA score. It was everything else.