The DCMA 14-Point Schedule Assessment: A Practitioner's Guide for Primavera P6

A practical guide to the DCMA 14-point schedule health assessment in Oracle Primavera P6, with real-world thresholds, common failures, and remediation strategies.

The DCMA 14-Point Assessment is the closest thing our industry has to a universal standard for schedule quality. Originally developed by the Defense Contract Management Agency for evaluating contractor schedules on U.S. defense programs, it’s now used across oil & gas, infrastructure, and construction worldwide — often by organizations that have never touched a defense contract.

I’ve run hundreds of these assessments on schedules ranging from 200 activities to 60,000+. The 14 checks are straightforward in theory, but the gap between “passing the assessment” and “having a healthy schedule” is wider than most people realize.

Here’s each metric, what it actually catches, and what to do when you fail.

1. Logic — Missing Predecessors and Successors

Threshold: Less than 5% of activities missing logic (predecessors or successors)

What it measures: Activities that have no predecessor, no successor, or neither. These are “dangling” activities — disconnected from the schedule’s network logic.

How to check in P6: Add the Predecessors and Successors columns (Activities > Columns), then create a filter under Activities > Filters where “Predecessors” is empty OR “Successors” is empty. Exclude milestones as appropriate — start milestones won’t have predecessors, finish milestones won’t have successors. The scheduling log (F9 > View Log) also reports counts of activities without predecessors or successors.

What typically fails: Level of Effort (LOE) activities and WBS summary tasks often show up here. So do activities added late in the project by someone who didn’t bother linking them. On a defense program I supported, we found 300+ unlinked activities that a subcontractor had bulk-imported from an Excel file without any relationships.

How to fix it: Don’t just slap Start-to-Start and Finish-to-Finish relationships on everything to clear the count. Each activity needs a legitimate logical tie to the network. If an activity truly doesn’t depend on anything and nothing depends on it, question whether it belongs in the schedule at all.
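If you export the activity and relationship tables (from an XER file or an Excel export), the count is easy to script. A minimal Python sketch, assuming illustrative field names rather than P6’s exact column headers:

```python
def dangling_activities(activities, relationships):
    """Return IDs of activities missing a predecessor or a successor.

    `activities`: dicts with "id" and "type" (field names illustrative).
    `relationships`: dicts with "predecessor_id" and "successor_id".
    Start milestones are excused on the predecessor side, finish
    milestones on the successor side.
    """
    has_pred = {r["successor_id"] for r in relationships}
    has_succ = {r["predecessor_id"] for r in relationships}
    flagged = []
    for act in activities:
        missing_pred = (act["id"] not in has_pred
                        and act["type"] != "Start Milestone")
        missing_succ = (act["id"] not in has_succ
                        and act["type"] != "Finish Milestone")
        if missing_pred or missing_succ:
            flagged.append(act["id"])
    return flagged
```

Divide the flagged count by the total activity count to get the metric; the DCMA target is under 5%.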

2. Leads

Threshold: 0% — no leads allowed

What it measures: Negative lag on relationships. A lead means Activity B starts before the logic says it should, which undermines the entire purpose of having a network schedule.

How to check in P6: Go to the Activity Relationships view (Details pane > Relationships tab). Sort or filter by Lag where the value is less than 0. You can also run a schedule report filtered to negative lag values.

What typically fails: This one’s binary — you either have leads or you don’t. They usually creep in when schedulers try to model overlap between activities and reach for negative lag instead of restructuring the logic.

How to fix it: Replace the lead with proper logic. If Activity B can start 5 days before Activity A finishes, break Activity A into two pieces, or use a Start-to-Start relationship with positive lag. Every lead has a legitimate logic alternative.

3. Lags

Threshold: Less than 5% of relationships should have lag

What it measures: Positive lag on relationships. Lag itself isn’t always wrong — curing concrete genuinely takes time — but excessive lag is often used to manipulate float or force dates.

How to check in P6: Same approach as leads — filter the Relationships view for Lag > 0. Count total relationships with lag vs. total relationships.
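The same export lends itself to a quick script. A sketch with an illustrative "lag" field in days (a raw XER export stores relationship lag in the TASKPRED table, typically in hours):

```python
def lag_percentage(relationships):
    """Percent of relationships carrying positive lag (DCMA target: < 5%).

    Each relationship dict needs a numeric "lag" value; the field name
    is illustrative, not P6's exact export column.
    """
    if not relationships:
        return 0.0
    with_lag = sum(1 for r in relationships if r["lag"] > 0)
    return 100.0 * with_lag / len(relationships)
```

Flipping the comparison to r["lag"] < 0 gives the leads count for metric 2 from the same data.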

What typically fails: Schedules where the planner used lag instead of creating explicit waiting/curing/drying activities. I’ve seen schedules with 30-day lags everywhere because the scheduler didn’t want to create “Wait for Permit Approval” activities.

How to fix it: For legitimate waiting periods, create a separate activity (e.g., “Concrete Curing — Area 7”) with its own duration. This makes the schedule readable and gives you something to track. Reserve lag for genuinely fixed, non-resource-driven delays.

4. Relationship Types

Threshold: At least 90% Finish-to-Start (FS) relationships

What it measures: The ratio of FS relationships to all others (SS, FF, SF). FS is the most intuitive and auditable relationship type.

How to check in P6: You’ll need to export relationship data or use a BI tool. P6 doesn’t have a built-in dashboard for relationship type distribution. Export to Excel, pivot on relationship type, done.

What typically fails: On a defense program I supported, we had 47,000 activities and the scheduler was using Start-to-Start with lag for everything — the FS percentage was around 40%. The logic was technically valid but nearly impossible for anyone else to review or maintain.
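If Excel pivots get old, collections.Counter does the same job. A sketch assuming relationship types have already been normalized to FS/SS/FF/SF (a raw XER export encodes them with prefixes such as PR_FS, PR_SS):

```python
from collections import Counter

def relationship_mix(relationships):
    """Distribution of relationship types and the FS percentage
    (DCMA target: at least 90% FS)."""
    counts = Counter(r["type"] for r in relationships)
    total = sum(counts.values())
    fs_pct = 100.0 * counts.get("FS", 0) / total if total else 0.0
    return counts, fs_pct
```

The full Counter is worth keeping: a nontrivial SF count is almost always worth a conversation with the scheduler.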

How to fix it: SS+FF pairs can often be replaced with FS by decomposing activities into smaller pieces. It takes effort, but the schedule becomes dramatically easier to maintain. Start-to-Finish relationships should almost never appear — I’ve seen exactly two legitimate uses in 15 years.

5. Hard Constraints

Threshold: Less than 5% of activities have hard constraints

What it measures: Constraints that override CPM logic — “Must Start On,” “Must Finish On,” “Start On or Before,” etc. These prevent the scheduling engine from doing its job.

How to check in P6: Add the “Constraint” column to your activity table. Filter for anything other than “As Soon As Possible” (or “As Late As Possible” for activities that genuinely need it). In Activities > Columns, add Primary Constraint and Secondary Constraint.
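Once the constraint column is exported, the percentage is a one-liner. A sketch; which types count as “hard” varies by organization, and the labels differ by tool (P6 calls the strictest ones Mandatory Start and Mandatory Finish), so treat the set below as illustrative:

```python
# Constraint types treated as hard for this check -- adjust to your
# tool's labels and your organization's definition.
HARD_CONSTRAINTS = {"Must Start On", "Must Finish On",
                    "Mandatory Start", "Mandatory Finish",
                    "Start On", "Finish On",
                    "Start On or Before", "Finish On or Before"}

def hard_constraint_percentage(activities):
    """Percent of activities carrying a hard constraint (target: < 5%)."""
    if not activities:
        return 0.0
    hard = sum(1 for a in activities
               if a.get("primary_constraint") in HARD_CONSTRAINTS)
    return 100.0 * hard / len(activities)
```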

What typically fails: Schedules imported from Microsoft Project are the worst offenders. MSP applies constraints automatically in ways that don’t translate cleanly to P6. I’ve inherited MSP-converted schedules where every single activity had a “Start No Earlier Than” constraint matching its baseline date.

How to fix it: Replace hard constraints with soft ones (“Start On or After” instead of “Must Start On”) or, better yet, with logic. If an activity can’t start before a certain date because of a permit, create a milestone for the permit date with a constraint, and link everything else with logic.

6. High Float

Threshold: Less than 5% of activities with Total Float > 44 working days

What it measures: Activities with excessive float usually indicate missing logic. If a task can slip two months without affecting the end date, it’s probably not connected to the critical path correctly.

How to check in P6: Filter activities where Total Float > 44 days (about 2 calendar months). Exclude completed activities.
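Against an export, the check looks like this (field names illustrative, float in working days):

```python
def high_float_percentage(activities, threshold=44):
    """Percent of incomplete activities whose total float exceeds the
    threshold (DCMA default: 44 working days, target < 5%)."""
    incomplete = [a for a in activities if a["status"] != "Completed"]
    if not incomplete:
        return 0.0
    high = sum(1 for a in incomplete if a["total_float"] > threshold)
    return 100.0 * high / len(incomplete)
```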

What typically fails: Early-phase activities that were linked to project start but not properly connected to downstream work. Also common: procurement activities that are linked to a milestone but not to the installation work they support.

How to fix it: Trace the logic path from each high-float activity forward. Somewhere between it and the project finish, there’s a missing link. Add the relationship that actually exists in reality.

7. Negative Float

Threshold: 0 activities with negative float

What it measures: Activities where the schedule math says you’ve already run out of time. Negative float means the CPM calculation produces a late finish before the early finish.

How to check in P6: Filter for Total Float < 0. This is usually immediately visible in the Gantt chart as well.

What typically fails: Negative float typically comes from constraints or imposed finish dates that conflict with the schedule logic. It’s a symptom, not a root cause.

How to fix it: Identify the constraint or imposed date causing the negative float. Either the constraint needs to move, the logic needs to change, or the project team needs to acknowledge the schedule shows a late delivery. Don’t just remove the constraint to make the assessment pass — that hides real information.

8. High Duration

Threshold: Less than 5% of incomplete activities with Original Duration > 44 working days

What it measures: Activities that span more than roughly two months. Long-duration activities mask detail and make it harder to measure progress accurately.

How to check in P6: Filter for Original Duration > 44 and Status not equal to “Completed.”

What typically fails: Procurement activities (“Fabricate Equipment” with a 120-day duration) and design activities (“Complete Detailed Engineering”) are the usual culprits.

How to fix it: Decompose long activities into sub-activities. “Fabricate Equipment” becomes “Submit Fabrication Drawings,” “Fabrication — Phase 1,” “Factory Acceptance Test,” etc. Each piece should be independently measurable.

9. Resources

Threshold: All activities should be resource-loaded

What it measures: Whether the schedule has been resource-loaded, connecting activities to the people, equipment, and costs needed to execute them.

How to check in P6: Look at the Resource Assignments tab in the Details pane. Filter for activities where Budgeted Units or Budgeted Cost equals 0.

What typically fails: This is the metric most organizations struggle with, especially in early project phases. Many planning teams maintain the schedule and cost estimate as separate artifacts and never integrate them.

How to fix it: This isn’t a quick fix — it’s a fundamental planning practice. Start by loading key resource types (craft labor, major equipment) even if you can’t get to 100% coverage. The value of resource loading is in identifying conflicts and constraints, not in hitting a checkbox.

10. Missed Tasks

Threshold: Less than 5% of activities due to finish by the data date that finished late or haven’t finished

What it measures: Activities that should have started or finished (per the baseline or status date) but haven’t. This is a direct measure of execution performance.

How to check in P6: Compare Baseline Start/Finish to Actual Start/Finish. Filter for activities where Baseline Finish < Data Date and Status is not “Completed,” then add completed activities whose Actual Finish is later than their Baseline Finish; those count as missed too.
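A sketch that flags both open activities past their baseline finish and activities that completed late, assuming illustrative field names and dates as comparable values (e.g., datetime.date):

```python
def missed_tasks(activities, data_date):
    """IDs of activities that were baselined to finish by the data date
    but are either not complete or finished later than planned."""
    missed = []
    for a in activities:
        if a["baseline_finish"] > data_date:
            continue  # not yet due per the baseline
        late_or_open = (a["status"] != "Completed"
                        or a["actual_finish"] > a["baseline_finish"])
        if late_or_open:
            missed.append(a["id"])
    return missed
```

Dividing by the count of activities due by the data date gives the metric against the 5% threshold.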

What typically fails: Every schedule that hasn’t been statused in the last month. Also schedules where the project is behind but nobody has updated the remaining durations to reflect reality.

How to fix it: This is an execution problem, not a scheduling problem. Either the work is behind (address it operationally) or the schedule isn’t being updated (address the statusing process). Rebaselining just to clear missed tasks is dishonest — don’t do it unless the scope has genuinely changed.

11. Critical Path Test

Threshold: A valid critical path must exist and be testable

What it measures: Whether the schedule has a continuous critical path from the current data date to the project completion milestone, and whether extending a critical activity actually delays the project end date.

How to check in P6: Run the scheduler, note the project finish date. Then add 20 working days to a critical activity’s remaining duration, reschedule, and verify the finish date moved by approximately 20 days. If it didn’t move — or moved by a wildly different amount — the critical path has integrity issues.

What typically fails: Schedules with constraints on near-critical activities, multiple calendars causing date shifts, and schedules where the “critical path” is actually controlled by a constraint rather than by network logic.

How to fix it: This requires careful debugging. Check for constraints, open ends, and calendar mismatches along the critical path. The goal is a clean, logic-driven critical path with no interruptions.

12. Critical Path Length Index (CPLI)

Threshold: Greater than 0.95

What it measures: CPLI = (Critical Path Length + Total Float) / Critical Path Length. It indicates whether the project can finish on time given the remaining work and available float.

A CPLI below 0.95 means you’d need to compress the critical path by more than 5% to meet the contractual end date — a significant recovery effort.

How to check in P6: This requires manual calculation. Critical Path Length = remaining duration of the longest path. Total Float = the total float on the project completion milestone. Do the math.
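The math is simple enough to script once you have read the two inputs off the schedule:

```python
def cpli(critical_path_length, total_float):
    """Critical Path Length Index (DCMA target: > 0.95).

    critical_path_length: remaining working days on the longest path.
    total_float: total float on the project completion milestone
    (negative when the schedule forecasts a late finish).
    """
    return (critical_path_length + total_float) / critical_path_length
```

A 100-day critical path with -10 days of float yields a CPLI of 0.90: you would need to recover 10% of the remaining path to finish on time.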

What typically fails: Projects that are behind schedule, obviously. But also projects where the critical path has been artificially shortened by removing logic or adding constraints.

13. Baseline Execution Index (BEI)

Threshold: Greater than 0.95

What it measures: BEI = Number of tasks actually completed / Number of tasks that should be complete per the baseline. It’s a simple ratio of actual vs. planned task completion, and it can exceed 1.0 when work finishes ahead of plan.

How to check in P6: Count activities with Baseline Finish <= Data Date (should be done). Count all activities actually completed as of the data date, including any that finished ahead of the baseline. Divide.
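As a sketch, with illustrative field names (note the numerator counts every completed activity, including those finished ahead of their baseline dates, which is why BEI can exceed 1.0):

```python
def bei(activities, data_date):
    """Baseline Execution Index (DCMA target: > 0.95)."""
    planned = sum(1 for a in activities
                  if a["baseline_finish"] <= data_date)
    completed = sum(1 for a in activities
                    if a["status"] == "Completed")
    return completed / planned if planned else 1.0
```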

What typically fails: This metric is brutally honest. On most projects past the 30% mark, BEI drops below 0.95 because early-phase optimism baked into the baseline didn’t survive contact with reality.

How to fix it: You can’t retroactively fix BEI — it’s a trailing indicator. What you can do is use it to drive realistic remaining duration estimates and avoid compounding the optimism problem going forward.

14. Duration Change

Threshold: Less than 5% of completed activities where Actual Duration differs significantly from Baseline Duration without justification

What it measures: Whether completed task durations are reasonable compared to what was originally planned. Large deviations suggest either poor estimating or scope changes that weren’t properly managed.

How to check in P6: Compare Original Duration vs. At Completion Duration for completed activities. Flag anything where the ratio is outside a reasonable band (typically 50%-150% of baseline).
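A sketch of the ratio check, with an illustrative 50%-150% band and field names that are stand-ins for the exported columns:

```python
def duration_outliers(activities, low=0.5, high=1.5):
    """Completed activities whose actual duration fell outside a band
    around the original duration; returns (id, ratio) pairs."""
    flagged = []
    for a in activities:
        if a["status"] != "Completed" or not a["original_duration"]:
            continue  # skip incomplete work and zero-duration items
        ratio = a["actual_duration"] / a["original_duration"]
        if not (low <= ratio <= high):
            flagged.append((a["id"], round(ratio, 2)))
    return flagged
```

The band is a judgment call; the point is to produce a short list for conversation, not an automatic verdict.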

What typically fails: Activities that were poorly scoped from the start, and activities where the scheduler just entered the actual finish date without adjusting remaining duration during execution.

Which Metrics Actually Matter?

Here’s what I tell clients: the DCMA 14-point assessment is a floor, not a ceiling.

The metrics organizations obsess over — leads (metric 2) and relationship types (metric 4) — are the easiest to fix and the least indicative of real schedule health. You can have zero leads, 95% FS relationships, and still have a schedule that’s completely disconnected from reality.

The metrics that actually predict project outcomes:

  • Critical Path Test (11): If the critical path isn’t valid, nothing else matters.
  • BEI (13): This tells you whether the team is actually executing to plan.
  • Missed Tasks (10): Combined with BEI, this paints the real picture of execution performance.
  • Logic (1): Missing logic is the single most common source of unreliable schedules.

Metrics 2, 3, and 4 are hygiene checks. They should pass, but passing them doesn’t make your schedule good.

Beyond the 14 Points

The DCMA assessment doesn’t check several things that matter enormously in practice:

  • Calendar integrity: I’ve seen schedules with 15 different calendars, half of them wrong, creating phantom float and date anomalies that no assessment metric catches.
  • Activity coding consistency: If WBS codes, activity codes, and cost accounts aren’t aligned, your schedule can pass all 14 points and still be useless for earned value or progress reporting.
  • Out-of-sequence progress: The assessment doesn’t penalize retained logic vs. progress override, or activities progressing out of sequence. On complex programs, out-of-sequence progress above 10% is a serious red flag.
  • Resource leveling realism: You can pass the resource check (metric 9) with placeholder resources. The assessment doesn’t verify that your resource histogram is actually achievable or that you’ve resolved over-allocations.
  • Schedule margin and risk: The 14 points say nothing about whether the schedule has adequate margin for uncertainty. A schedule with zero float and perfect logic is still a bad plan.

The 14-point assessment is a useful screening tool. Pass it. But don’t mistake a passing score for a healthy schedule. The real work starts after the checklist.
