Good Analysis Starts with Reading the Schedule

All too often we hear that a project has a schedule, but not a plan. As project schedules grow more complex, it’s more important than ever to understand the underlying story. Metric-based software that aggregates details into dashboard-level indicators can provide some insight into schedules, but little insight into the plans they model. We believe it’s impossible for dashboards to provide adequate perspective into project plans. Understanding plans requires context that can only be gained by reading schedules. Metric-based software has encouraged too much focus on quantitative analysis of schedules at the expense of qualitative understanding. Analysts who deliver need software that provides a deeper perspective.

We encourage you to analyze your schedule updates with our secure, no-obligation demo.

Schedule metrics should be considered self-evident

Aggregated metrics, most commonly represented by the DCMA 14-Point Assessment, are often used for diagnosing schedules at the enterprise level. These diagnostics are important for identifying risk trends across a large number of projects, but they should be considered self-evident at the project level, where analysis should be proactive. Stakeholders should expect more meaningful evidence that management is invested in planning and controlling their project. Schedules result from collaborative narratives unique to each project, and analysis requires context beyond aggregatable data. For example, a project’s dashboard may indicate good schedule performance despite an emerging bottleneck or improperly integrated scope. It’s time to shift perspective away from “some information is better than no information”.

Drill-down analysis is isolated from context

Metric-based assessments are attractive because they are technical and measurable against benchmarks. Metrics that indicate potential problems can be drilled down to view the underlying data (e.g., changed logic), but evaluating isolated data, with no context of upstream causes and downstream effects, doesn’t tell a story. These drill-down reviews can best be described as not seeing the forest for the trees, often leaving analysts facing limitations, including:

  • Metrics are not able to provide evidence of properly modeled networks that resulted from project collaboration.
  • Schedule integrity and performance indicators are not measurable against the whole schedule, including key milestones.
  • Schedule quality metrics frequently promote superficial fixes to the project plan.
  • Dashboards cannot indicate if schedules are rational and planned in logical sequence.

Our software reconstructs schedules to make them easier to read and analyze

Even though we provide metrics that summarize the overall schedule, as traditional software does, we believe real compliance and accountability depend on understanding the integrity of the plan. Our technology allows users to read project schedules naturally during planning efforts, and then analyze causes and effects during forensic efforts. Cause-and-effect analysis is only possible when the entire schedule network is charted, enabling you to compare each schedule update to expose root causes and as-sequenced effects on the plan. Plan analysis promotes a better, collective understanding of the project and improves the modeling of how the team can get it done.

Our technology advances schedule analysis

Improving the plan and evaluating the true quality of a schedule require project collaboration, not reliance on dashboards that are expected to tell the story. Metric-based analysis software evaluates changes between lists of activities and relationships, but it cannot evaluate changes to the underlying plan. Now that plan-based analysis is achievable, schedule-based analytics becomes a routine consequence of the all-inclusive context. Dashboards provide information, but often fail to tell enough of the story. Project management must understand their schedule, and analysis should ensure each component contributes to the overall story of the plan.

The Windows Analysis Technique

The ability to analyze concurrent causes and effects between schedule updates is integral for reducing claim settlement costs and controlling project risk. Of the four methods identified in AACE International’s Recommended Practice No. 29R-03, we regard the dynamic logic, observational method as the most untapped source for advancing forensic schedule analysis (FSA). (We define FSA as the retrospective analysis of CPM models to investigate series of causes and effects between project states.) The dynamic logic, observational method allows specialists to tell the story for each chapter of a project milestone, which is why FPM has chosen to place it at the core of our software’s interface.

Although the method is the most objective for apportioning delays, analysts struggle to achieve exhaustive results because it requires considerable effort to perform. Automation solves this challenge, reducing effort by 90% while providing the best defense against methods that require biased selection of events for additive simulations. The dynamic logic, observational method is rather intuitive, and we encourage you to see just how natural it is by reviewing your schedule updates with our secure, no-obligation demo.

29R-03 Hierarchy of the Dynamic Logic, Observational Method [1]

29R-03 opened the door for automation and software development by thoroughly organizing the four FSA methods into hierarchical classifications and providing guidelines without prescribing uniform definitions or specific procedures. Rather than defining the method ourselves, let’s step through each of the hierarchical layers for the dynamic logic, observational method. This will provide insight into why we believe it is the most effective forensic approach for both time-based disputes and project schedule management.

A. Layer 1 (Timing): Retrospective

Analysis is retrospective when it occurs after impacting events are known (not necessarily after projects have completed). Retrospective analysis is advantageous in that it provides the “full benefit of hindsight” for investigating causes of schedule delays and reductions. 29R-03 states, “If as-built documentation is available, the best evidence rule demands that all factual investigations use the as-built as the primary source of analysis.” [1]

B. Layer 2 (Basic Methods): Observational

Observational methods involve no intervention by experts to model what-if scenarios “beyond mere observation”. Any modifications should only improve upon the as-built quality of the schedule. To the extent possible, the best forensic analysis uses as-is information prepared within the period of events being investigated. This ensures all related information includes time-appropriate context.

C. Layer 3 (Specific Methods): Dynamic Logic Observation

Dynamic logic methods observe shifting critical paths between schedule updates. The windows analysis technique, as explained below, exposes these shifts by comparing paths at the beginning of an analysis period to paths at the end of the period. These comparisons expose logic-based changes that cause schedule delays and reductions to contractual milestones.

Applying the Dynamic Logic, Observational Method with the Windows Analysis Technique

The windows analysis technique applies this method to forensically observe series of causes and effects through the life of a project. Our software automates this technique by exposing all shifts between concurrent paths within their ranked and logical context.

Windows analysis compares two project snapshots to evaluate schedule deviations over time, where a snapshot is represented by a schedule update or baseline. The technique begins by dividing the project duration into interim analysis periods, commonly referred to as windows. Each window is then framed by two schedule updates with data dates that correspond with the beginning or end of the analysis period. The update with the earlier data date is the forward snapshot and is structured at the beginning of the window, and the later update is the backward snapshot, structured at the end of the window.

The forward snapshot is considered the as-planned schedule, and the backward snapshot is considered the as-built schedule of the analysis window. The backward snapshot becomes the forward snapshot for the subsequent window, and each window is reviewed in isolation with analysis usually proceeding forward in time.

The terms ‘as-planned’ and ‘as-built’ can be misleading in the context of interim windows. They may suggest analysis is focused only on events between the data dates of the snapshots (i.e., events that were statused during the window period). In fact, all schedule changes between snapshots are equally relevant in a window analysis, including dynamic shifts on either side of the data date of the backward snapshot.
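The window construction described above can be sketched in a few lines of Python. This is a minimal illustration: the Snapshot type, names, and dates are assumptions for the example, not part of any particular scheduling tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Snapshot:
    """A schedule update or baseline, identified by its data date."""
    name: str
    data_date: date

def build_windows(snapshots):
    """Pair consecutive snapshots into analysis windows: the earlier
    update frames the start of each window (the forward, as-planned
    view) and the later update frames the end (the backward, as-built
    view). The backward snapshot of one window becomes the forward
    snapshot of the next."""
    ordered = sorted(snapshots, key=lambda s: s.data_date)
    return list(zip(ordered, ordered[1:]))

updates = [
    Snapshot("Baseline", date(2024, 1, 1)),
    Snapshot("Update 01", date(2024, 2, 1)),
    Snapshot("Update 02", date(2024, 3, 1)),
]
for forward, backward in build_windows(updates):
    print(f"{forward.name} -> {backward.name}")
```

Note how each snapshot except the first and last appears in two windows, once as the backward view and once as the forward view.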

Windows analysis uniquely allows both snapshots to be used as a baseline against each other, providing both a forward-looking and backward-looking analysis:

Benefits of the Backward-Looking Analysis include:

  • revealing concurrent impacts for assessing apportionment,
  • differentiating productivity delays from start delays, and
  • providing as-built information over the duration of the analysis window, including tasks that were added, started, completed, and/or delayed.

Benefits of the Forward-Looking Analysis include:

  • revealing the net effect of schedule reductions against contract milestones, and
  • identifying schedule modifications intended to increase float such as logic and duration changes.

Understanding the Unique Value of the Forward-Looking Analysis

The forward-looking analysis provides a valuable, if unconventional, added benefit to FSA. Comparing the forward snapshot against a baseline whose data date lies in its future uniquely exposes schedule reductions and mitigations.

Schedule deviations are typically identified through a backward-looking analysis that compares a later schedule update to an earlier baseline, but backward-looking analyses are better for isolating delays than schedule reductions. Critical paths in the as-planned schedule often shift to become non-critical, falling out of the near-critical range in the as-built schedule. The only way to see these reduction shifts without reviewing a substantial number of paths is to perform a forward-looking analysis that shows how critical paths become less critical in a future baseline.

This is where our software really demonstrates its value. Without automation, windows analysis can include only critical and near-critical paths at best. Our software includes the longest path of every activity to each contractual milestone, allowing us to pre-evaluate and prioritize all dynamic logic through a window, even when paths shift in criticality.
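As a minimal sketch of the forward-looking comparison, the snippet below flags paths whose float grows in a later baseline. The activity names, total-float values, and near-critical threshold are illustrative assumptions only, not our ranking algorithm.

```python
def reduction_shifts(forward_float, backward_float, near_critical=5):
    """Find activities that were critical or near-critical in the
    forward snapshot but show more total float in the later baseline:
    the reduction shifts a backward-looking comparison tends to miss."""
    return {
        act: (tf, backward_float[act])
        for act, tf in forward_float.items()
        if act in backward_float
        and tf <= near_critical
        and backward_float[act] > tf
    }

# Total float in days per activity for each snapshot (illustrative).
forward_tf = {"A": 0, "B": 2, "C": 20}
backward_tf = {"A": 0, "B": 12, "C": 20}
print(reduction_shifts(forward_tf, backward_tf))  # {'B': (2, 12)}
```

Activity B was near-critical in the forward snapshot but gained ten days of float in the later baseline, the signature of a schedule reduction that a purely backward-looking delay analysis would overlook.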


The incremental nature of the windows analysis technique exposes a pattern of events through each update period. Since paths often dynamically shift between schedule updates, FPM’s windows analysis provides logic-based investigation of every discrete schedule delay and reduction. Incremental deviations can then be accumulated through each update period to calculate net impacts to project milestones. This is why we say that the dynamic logic, observational method allows analysts to tell the unaltered story for each chapter of a project milestone, making it the most untapped source for advancing FSA.

Experience our secure, no-obligation, free cloud-based demo, or email us for more information.


  1. Hoshino, Kenji P., CFCC PSP; Livengood, John C., CFCC PSP; Carson, Christopher W., PSP, “Forensic Schedule Analysis”, AACE International Recommended Practice No. 29R-03, AACE International, 2011.

Data Date Precision

FPM’s Advanced Forensic Scheduling (AFS) software captures every driving start and all relevant paths between the data date and each milestone using proprietary algorithms. The benefits of this approach include:

  • Overcoming the limitations of float-based analysis
  • Determining concurrent and near-concurrent impacts
  • Respecting the importance of the data date
  • Clearly delineating as-built and forecast segments

Identifying how and when contractual milestones are impacted is the basis of forensic schedule analysis. Schedules are often mitigated within the same update where impacts occur, making float-based analysis unreliable. Even for correctly maintained half-step updates, demonstrating cause and effect within CPM networks is difficult. This is due to the prospect of concurrent and near-concurrent impacts, along with multiple contractual milestones. If all root causes can be identified, then each effect on associated milestones can be demonstrated through pathfinding.

It’s important to understand that root causes of schedule impacts are usually found at the data date – where driving starts of paths are either in progress or waiting to start. Because driving starts are where concurrent delays, disruptions, and pacing exist, we built flexibility into AFS that provides a clear picture of the data date.

The data date delineates actual events from forecasted events. During an update period, forecasted events are either re-forecasted, modified, or actualized to the as-built side of the data date. Oftentimes activities are both created and actualized within the same window. These momentary activities can represent delay events and are easily overlooked during analysis, as they are immediately recorded to the as-built schedule. The as-built schedule is fundamentally different from the forecast side of the schedule. Where forecasted durations represent work, as-built durations and lags often represent un-modeled delay events or inactivity.
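A minimal sketch of how momentary activities might be flagged, assuming each snapshot is reduced to a set of activity IDs and a map of actual finish dates. All IDs and dates here are illustrative assumptions, not AFS internals.

```python
from datetime import date

def momentary_activities(forward_ids, backward_finishes, backward_data_date):
    """Flag activities that are absent from the forward snapshot yet
    appear in the backward snapshot already completed: created and
    actualized within the same window, so easily overlooked."""
    return [
        act_id
        for act_id, actual_finish in backward_finishes.items()
        if act_id not in forward_ids
        and actual_finish is not None
        and actual_finish <= backward_data_date
    ]

forward_ids = {"A100", "A110"}
backward_finishes = {
    "A100": date(2024, 2, 10),  # existed before, completed this window
    "A110": None,               # still in progress
    "A120": date(2024, 2, 20),  # created and actualized this window
}
print(momentary_activities(forward_ids, backward_finishes, date(2024, 3, 1)))
```

Only A120 is flagged: it never appeared on the forecast side of any snapshot, so its delay-event potential would be invisible without comparing snapshot membership.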

AFS captures every driving start to ensure all longest paths between the data date and each milestone are processed for ranking and analysis. AFS provides further clarity around the data date by returning as-built segments that show progress through the duration of the analysis window. These as-built segments show preceding impacts, where the driving start’s variance represents the culmination of chained root causes, resulting in an exhaustive and accurate analysis.

v2 Release Notes

We are proud to have released a major update to our cloud-based Advanced Forensic Scheduling software.

The following are some of the highlights. For a complete list of historical release notes, view the FPM Wiki.

The Demo is available and updated with the latest version. Import a collection of XERs that you are struggling to analyze to see the recent updates. Jump right into your analysis with any Microsoft or Google account.

User Interface Improvements

Various UI improvements across the product, including:

  1. Tables now exportable to CSV files
  2. Re-arranged the Windows page to better use existing space
  3. Added informational messages when the AFS algorithms are updated and an existing project collection has not been processed using the updated algorithms. Re-processing the collection is optional.
  4. Improved formatting of the Gantt chart
  5. Added direct links to wiki help pages in some appropriate areas

Project Collection Configuration Improvements

  1. Users can now re-process XERs instead of deleting and re-importing
  2. XER files saved in “UTF-8 with BOM” format now supported
  3. Now detecting and displaying schedule calculation settings, and warnings when these settings are missing from the XER
  4. Improved information about how to process a single snapshot against itself (typically we operate at the Window level, requiring at least two projects; now allowing a single project to be analyzed)
  5. Added progress bars for XER imports and Project Collection processing
  6. Display Project Title (long name)

Schedule Analysis

  1. Significant improvements to Relationship Free Float calculation, including calculating RFF down to the minute based on exact daily working periods
  2. Include holidays and exceptions from base calendars
  3. Added additional filters for path selection, such as “Critical”, “Near-Critical”, “Top 10”, etc.
  4. Now expiring lag when appropriate to more accurately calculate Relationship Free Float; generally improved lag handling
  5. Exposed experimental Driving flags for activity Starts, Finishes, and Relationships.  These flags indicate how an Activity or Relationship is being driven (e.g., DataDate, Actualized, Remaining Lag)
  6. Improved Out of Sequence detection
  7. Improved Satisfied relationship detection
  8. Improvements to the way “Driving Starts” are selected for path ranking and display
  9. Beta: released a project dashboard and planned-earned chart for user feedback


Infrastructure and Performance

  1. Implemented a Service Fabric backend cluster to operate the processing algorithms, providing better scalability and uptime
  2. Improved performance of XER import and analysis
  3. Implemented global deadlock resilience
  4. Continuing to improve FPM Wiki help pages

A Better Definition of Longest Path

We provide the only intuitive view of CPM schedules, while improving upon the definitions of both critical path and longest path. Our software uniquely targets both completion milestones and start activities and ensures the complete longest path of every activity is returned for ranking. This allows us to associate every dynamic impact to each discrete cause. We encourage you to analyze your schedule updates with our secure, no-obligation demo.

Industry definitions for critical path and longest path can cause confusion because they are often used interchangeably. Furthermore, the definition for critical path includes notable assumptions that make it ambiguous in practice. AACEi’s Recommended Practice 49R-06 defines the critical path of a schedule, similar to PMI’s PMBOK Guide, as “the longest logical path through the CPM network and consists of those activities that determine the shortest time for project completion” [1]. Both describe the critical path as the longest path that “determines the shortest possible project duration” [2], where a delay to an activity of the path will delay project completion.

These definitions assume there is only one critical path of a CPM network that spans the full duration, but schedules can have multiple critical paths as a result of any combination of four conditions:

  1. concurrent paths with the same duration;
  2. constrained milestones or activities;
  3. multiple calendars; and/or
  4. critical paths of individual schedules that are combined into an integrated master schedule, producing either a new critical path or sets of critical paths.

These real-life exceptions to the critical path definition make the concept of the longest path necessary. Longest path shares the same definition as critical path, with the additional condition that a schedule’s longest path must start at the first activity of the schedule and complete at the last activity. The longest path may differ from the critical path when the critical path includes interim constraints and does not drive project completion. Because of this, the longest path can show more float than the “least float critical path” [3] of the same schedule. Like critical paths, a schedule can have multiple concurrent longest paths with the same duration and float.

When the least float critical path is constrained ahead of project completion, the longest path becomes the path that determines the project’s duration. Even then, a delay to an activity of the longest path will not necessarily delay project completion, though such cases are uncommon. For example, a longest path can include different calendars and constraints that create relationship free float between major project phases. Despite this exception, the longest path of a schedule is always the continuous chain between the first and last activity with the least relationship free float.
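As a simplified illustration of relationship free float, assume a single calendar, finish-to-start logic, and early dates expressed in working days from the data date. The activity IDs and values are illustrative only.

```python
# Early dates in working days from the data date (illustrative values).
early_finish = {"A": 10, "B": 14}
early_start = {"B": 10, "C": 16}

def relationship_free_float(pred, succ, lag=0):
    """Working periods between a predecessor's early finish (plus lag)
    and its successor's early start. Zero marks a driving relationship;
    chaining minimum-RFF relationships backward from the last activity
    traces the continuous chain of the longest path."""
    return early_start[succ] - (early_finish[pred] + lag)

print(relationship_free_float("A", "B"))  # 0: A drives B
print(relationship_free_float("B", "C"))  # 2 working days of free float
```

In a real multi-calendar schedule the subtraction must honor each calendar's working periods, which is exactly why total float alone cannot identify the driving chain.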


AFS returns both critical and longest paths, but ranks longest above critical because of their unique forensic value in emphasizing start targets at the data date. Because more than just the first and last activity of a schedule are evaluated, we define longest path as the continuous chain between a specific set of activities having the lowest relationship free float. An activity set includes a start target that is either at the data date or constrained after the data date, a specific evaluation activity, and a finish target such as a contractual milestone.


The longest path of an evaluation activity may not be its most critical path because the start and finish targets may exist beyond constraints that delineate path segments with higher total float. In these cases, AFS will return the critical path of the evaluation activity as a separate path where the interim constraints are defined as the start and/or finish targets. This allows analysts to compare critical paths alongside longest paths to better understand the effects of constraints.

In the AACEi paper, PS.07: When is the Critical Path Not the Most Critical Path?, Mr. Woolf presents the following discussion: “Because we believe that the definition for the term, path, should make reference to the path’s point of terminus, as well as its point of origin, we are faced with this difficult question: Where does a path begin and end? If we cannot define a path’s starting or ending points, how can we begin to define the term, critical-path?” [3].

AFS complements Mr. Woolf’s discussion by targeting both points of “origin” and “terminus”. This approach removes ambiguity by constructing both longest and critical paths with individual context and accurate, exhaustive ranking. AFS retains context of the data date and contractual milestones, fully exhibiting the practical difference between longest and critical paths.


[1] Carson, C.W., & Winter, R.M. (2010) AACE International Recommended Practice No. 49R-06: Identifying the Critical Path.

[2] Project Management Institute. (2013) A Guide to the Project Management Body of Knowledge, PMBOK Guide, 5 ed., 155.

[3] Woolf, M. (2008) When is the Critical Path Not the Most Critical Path? AACE International Transactions, PS.07.

Primavera P6 Calendar Settings

Where Primavera P6 can create confusion by using standard calendar settings, our software provides relationship free float in days based on defined working periods. Calendars in P6 provide flexibility for managing schedules with different standard work weeks and disparate work exceptions, requiring users to maintain two types of calendar settings:

  1. Hours Per Time Period is used for converting a day into standard hours and allows users to manage schedules at the day granularity.
  2. Work Hours defines periods when work can occur, which P6 uses to allocate durations during CPM calculations.

The user is responsible for managing these independent settings consistently, or CPM results will be incorrect. For example, suppose Task A is scheduled on a standard 8-hour-per-day calendar, but the user modifies the Work Hours of two specific calendar days to reflect 12-hour days.

Although schedulers refer to task durations at the day granularity, P6 stores and calculates at the hour granularity. P6 then uses Hours Per Time Period to convert calculation outputs from hours back to days for the user, based on the User Preferences settings.

P6’s use of Hours Per Time Period creates confusion when converting CPM output such as float, because P6 fails to consider any modified Work Hours. In the Task A example above, P6 converts hours back to days based on the standard 8 hours per day and does not consider the two 12-hour days.

We believe P6 should use the Hours Per Time Period setting only to convert user-defined durations and not for converting calculation outputs such as float. Instead, P6 should calculate based on specific Work Hours.

Task A’s output from P6 varies greatly depending on the standard calendar settings and the modified assigned hours. P6 would convert 24 hours of relationship free float into three days, based on the standard 8-hour work day setting, even though two of those days had working hours set to 12. Based on the full scope of the calendar, the relationship free float is actually two days, making the task more critical than P6 reports.
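The Task A arithmetic can be sketched as follows. The conversion routine is a simplified illustration that consumes float day by day against each day's actual Work Hours; it is not P6's behavior or our exact implementation.

```python
def float_in_days(float_hours, work_hours_by_day):
    """Convert float from hours to days by walking the actual Work
    Hours of each successive calendar day, rather than dividing by
    the standard Hours Per Time Period."""
    remaining = float_hours
    days = 0.0
    for hours in work_hours_by_day:
        if remaining <= 0:
            break
        consumed = min(remaining, hours)
        days += consumed / hours  # fraction of that working day used
        remaining -= consumed
    return days

# Standard conversion: 24 h / 8 h per day = 3 days, as P6 reports.
# Calendar-aware: the two modified 12-hour days absorb all 24 hours.
print(float_in_days(24, [12, 12, 8, 8]))  # 2.0
print(float_in_days(24, [8, 8, 8]))       # 3.0
```

The same 24 hours of float spans three standard days but only two modified days, which is the one-day difference in criticality described above.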

Rather than converting to days after calculation, our software calculates relationship free float in days alongside the hour-level calculation, based on the full scope of the working ranges.