What to Measure Before Buying Scheduling Software: Your Pre-Implementation Baseline

Most manufacturers buy scheduling software to fix a problem they can feel: jobs shipping late, floors in chaos, schedulers working overtime every week. The pain is real. But when leadership asks six months later whether the investment paid off, most manufacturers cannot answer with data — because they never measured where they started.
This is one of the most common and most preventable mistakes in scheduling software implementations. Without a pre-implementation baseline, "did it work?" becomes a matter of opinion. With one, it becomes a matter of fact.
After 35 years of helping manufacturers implement scheduling software, User Solutions has seen what separates successful implementations from disappointing ones. The manufacturers who get full value from scheduling software almost always did two things: they measured carefully before they started, and they set realistic improvement targets before they went live. This guide shows you exactly how to do both.
Why the Baseline Matters More Than the Software
Scheduling software vendors will show you impressive ROI claims — 30% OTD improvement, 25% WIP reduction, $500K in annual savings. Those numbers may be accurate for some customers. But they are meaningless for your operation unless you know where you are starting.
Consider two manufacturers who both achieve 85% on-time delivery after implementing the same software. Manufacturer A started at 75% OTD — a 10-point improvement worth roughly $300K/year in penalty charges avoided and expedite fees eliminated. Manufacturer B started at 83% OTD — a 2-point improvement that, in isolation, would not justify the investment. Both can claim "85% OTD after implementation." Only one can claim the software was worth it.
The baseline also protects you internally. When a new scheduling system is live and the floor is still adjusting, there will be a difficult 30–60 day period where things feel worse before they feel better. If you have a baseline, you can put that turbulence in context — "our OTD was 74% before; it's 71% right now, but our WIP is already down 12% and overtime is trending down." Without a baseline, that 71% looks like failure.
The Six Metrics to Measure Before Implementation
These six metrics give you a complete picture of your scheduling performance. Each can be measured with data you already have — if you know where to look.
1. On-Time Delivery Rate (%)
What it measures: The percentage of jobs or orders shipped on or before the customer-promised date.
How to measure it: Pull all completed work orders from your ERP for the past 90 days. For each, compare the customer promise date (or internal ship date commitment) to the actual ship date. Count orders shipped on time divided by total orders shipped.
Watch out for: Promise date gaming — if your team routinely builds extra days into promises to make the number look better, your OTD metric is measuring promise conservatism, not scheduling performance. Note whether this dynamic exists, because scheduling software may actually tighten your promises (a good thing) in ways that temporarily lower your OTD metric.
What good looks like: Best-in-class job shops run 92–95% OTD. Average manufacturers in complex custom work run 75–85%. Below 70% typically indicates a scheduling problem, not just a capacity problem.
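The calculation above is a straightforward count. As a minimal sketch — all order dates here are invented, and in practice the pairs would come from your ERP's completed-order export — the OTD computation might look like:

```python
from datetime import date

# Hypothetical records: (customer promise date, actual ship date).
# In practice, export these from your ERP for the last 90 days.
orders = [
    (date(2024, 3, 10), date(2024, 3, 9)),   # shipped early -> on time
    (date(2024, 3, 12), date(2024, 3, 12)),  # shipped on the day -> on time
    (date(2024, 3, 15), date(2024, 3, 18)),  # shipped late
    (date(2024, 3, 20), date(2024, 3, 19)),  # shipped early -> on time
]

def on_time_delivery_rate(orders):
    """Percentage of orders shipped on or before the promise date."""
    on_time = sum(1 for promise, shipped in orders if shipped <= promise)
    return 100.0 * on_time / len(orders)

print(f"OTD: {on_time_delivery_rate(orders):.1f}%")  # 3 of 4 -> 75.0%
```

Run this once against your historical data and save the script alongside your baseline document, so the post-implementation measurement uses exactly the same logic.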
2. Average Manufacturing Lead Time
What it measures: The average calendar days from order release (or work order creation) to shipment.
How to measure it: For the same 90-day sample, subtract work order release date from actual ship date. Calculate the average and the median (median is more useful if you have a few very long jobs skewing the average).
Watch out for: Jobs that sat in queue before being released to the floor. If your scheduling process delays release significantly, measure from order receipt, not order release — you want to capture the full lead time your customer experiences.
What good looks like: Highly variable by industry and product complexity. The goal is not a specific number but a reduction in variance — a shop that averages 12 days with a standard deviation of 2 days is performing better than a shop that averages 10 days with a standard deviation of 8 days.
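Since the point is to track both the center and the spread, compute mean, median, and standard deviation together. A short sketch with made-up lead times (note the single long outlier job pulling the mean above the median):

```python
import statistics

# Hypothetical lead times in calendar days (ship date minus release date)
# for a sample of completed work orders. WO-7 was a stalled outlier.
lead_times = [9, 11, 12, 10, 13, 11, 34, 12, 10, 11]

mean_lt = statistics.mean(lead_times)      # skewed upward by the outlier
median_lt = statistics.median(lead_times)  # robust to the outlier
stdev_lt = statistics.stdev(lead_times)    # your variance baseline

print(f"mean={mean_lt:.1f}d  median={median_lt:.1f}d  stdev={stdev_lt:.1f}d")
```

Record all three in your baseline: post-implementation, a falling standard deviation is often the earliest sign the schedule is working, even before the average moves.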
3. WIP Inventory Value
What it measures: The dollar value of all open work orders currently in process on the shop floor, valued at material cost.
How to measure it: Pull the open work order list from your ERP. For each open order, capture the material cost to date (what has been issued). Sum these values. This is your WIP snapshot.
Watch out for: Work orders that have been open for an unusually long time ("zombie WIP") — jobs that started but stalled due to missing material, tooling issues, or being deprioritized for a rush job. Flag these separately, as scheduling software should eliminate their root cause.
What good looks like: WIP as a percentage of monthly revenue varies widely by industry. What you are looking for post-implementation is a reduction, not an absolute target. A 15–20% WIP reduction is common in the first year of effective scheduling software deployment.
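The snapshot is a simple sum, but it is worth flagging zombie WIP in the same pass. A sketch with hypothetical order data — the 60-day cutoff is an assumption you should tune to your own typical lead times:

```python
# Hypothetical open work orders: (order_id, material cost issued, days open).
open_orders = [
    ("WO-1001", 4200.0, 12),
    ("WO-1002", 1800.0, 95),   # stalled for months -> zombie candidate
    ("WO-1003", 7600.0, 6),
]

ZOMBIE_THRESHOLD_DAYS = 60  # assumed cutoff; set relative to your lead times

total_wip = sum(cost for _, cost, _ in open_orders)
zombie_wip = sum(cost for _, cost, days in open_orders
                 if days > ZOMBIE_THRESHOLD_DAYS)

print(f"WIP snapshot: ${total_wip:,.0f} "
      f"(${zombie_wip:,.0f} open > {ZOMBIE_THRESHOLD_DAYS} days)")
```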
4. Overtime Hours Per Week
What it measures: The average weekly overtime hours paid, across all direct labor employees.
How to measure it: Pull from payroll or your time-tracking system. Calculate the 13-week rolling average of overtime hours per week.
Watch out for: Planned overtime (weekend runs for large orders) vs. unplanned overtime (staying late because jobs are behind). Scheduling software primarily reduces unplanned overtime. If you can separate these in your data, do so — unplanned overtime is your baseline, not total overtime.
What good looks like: Manufacturers with reactive scheduling typically run 10–20% of their direct labor hours as overtime. Well-scheduled operations run 3–8%. The cost difference is significant: at a $28/hour fully-loaded labor rate, cutting 200 overtime hours per week saves over $290K annually.
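The 13-week average and the savings arithmetic above can be sketched in a few lines. The weekly hours below are invented; the $28 rate and 200-hour example match the figures in this section:

```python
# Hypothetical weekly overtime hours for the last 13 weeks (from payroll).
weekly_ot_hours = [210, 195, 220, 180, 205, 190, 215,
                   200, 185, 210, 195, 205, 190]

rolling_avg = sum(weekly_ot_hours) / len(weekly_ot_hours)

LOADED_RATE = 28.0   # fully-loaded $/hour, as in the example above
hours_cut = 200      # weekly overtime hours eliminated

annual_savings = hours_cut * LOADED_RATE * 52

print(f"13-week OT average: {rolling_avg:.0f} hrs/week")
print(f"Cutting {hours_cut} hrs/week saves ${annual_savings:,.0f}/year")
```

Note that 200 hours x $28 x 52 weeks works out to $291,200 — the "over $290K" figure cited above.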
5. Schedule Adherence Rate (%)
What it measures: The percentage of operations or work orders completed in the sequence and timing the scheduler planned, without being moved, bumped, or expedited.
How to measure it: This requires scheduler involvement. For four weeks, have the scheduler record when jobs are moved out of planned sequence (expedited rush, material not ready, machine down). Calculate: (jobs completed as scheduled) / (total jobs scheduled). This metric does not exist in most ERP systems — you will need to track it manually.
Watch out for: If your current scheduling is largely informal (your scheduler "keeps it in their head"), this metric may not be measurable with historical data. In that case, run a 30-day measurement period before implementation starts and use that as your baseline.
What good looks like: Above 85% schedule adherence is considered strong. Below 70% indicates the schedule is effectively fictional — jobs are being moved constantly and the plan is not being followed.
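Since this metric is tracked manually, even a flat log kept by the scheduler is enough. A sketch of what the four-week log and the adherence calculation might look like (job IDs and reasons are illustrative):

```python
# Hypothetical scheduler log: (job_id, completed as scheduled?, reason if moved).
log = [
    ("J-101", True,  None),
    ("J-102", False, "expedited rush"),
    ("J-103", True,  None),
    ("J-104", False, "material not ready"),
    ("J-105", True,  None),
]

adherence = 100.0 * sum(1 for _, ok, _ in log if ok) / len(log)
move_reasons = [reason for _, ok, reason in log if not ok]

print(f"Schedule adherence: {adherence:.0f}%  (moved: {move_reasons})")
```

The reasons column is as valuable as the percentage: it tells you whether the schedule is being broken by rush orders, material shortages, or machine downtime, which shapes what you ask the software to fix.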
6. Expedite Rate (%)
What it measures: The percentage of orders that required expediting — emergency prioritization, premium freight, or other special handling — to meet the customer commitment.
How to measure it: Review the last 90 days of shipping records and flag orders that required expedite action (ask the shipping or customer service team to help — they know which orders were "hot"). Divide by total orders shipped.
Watch out for: Expediting often becomes invisible when it is constant. A shop that expedites 30% of its orders may have normalized the practice to the point where it is not flagged as expediting. Ask the scheduler directly: "How many jobs do you touch every week because someone called you asking where their order is?"
What good looks like: Below 5% expedite rate is considered controlled. Above 15% indicates systemic scheduling failure.
Setting Realistic Improvement Targets
Once you have your baseline, you need improvement targets for two purposes: justifying the investment to leadership, and measuring success after go-live. Here is how to set targets that are ambitious but defensible.
Rule 1: Use the gap to best-in-class, not the vendor's marketing claims. If your OTD is 76% and best-in-class is 93%, you have 17 points of room to improve. Setting a target of 85% in year one is conservative and achievable. Setting a target of 93% in year one is aggressive and likely to fail.
Rule 2: Expect the biggest gains in year one, smaller gains in years two and three. Scheduling software delivers the largest improvements in the first 6–12 months as chaos is replaced by structure. After that, gains require process discipline and continuous improvement, not just software.
Rule 3: Quantify targets in dollars, not just percentages. For leadership approval, translate every metric improvement into dollars:
- Each 1% OTD improvement = X fewer penalty charges + Y fewer expedite freight costs
- Each hour of overtime eliminated = fully-loaded labor rate × hours × weeks
- Each dollar of WIP reduction = freed working capital × your cost of capital
A target of "improve OTD from 76% to 85%" is less compelling than "improve OTD from 76% to 85%, eliminating approximately $180K in annual penalty charges and $60K in expedite freight."
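The dollar translation is simple arithmetic once you have per-unit values. A sketch using figures consistent with the example above — every input here is illustrative and must come from your own baseline and finance team:

```python
# Hypothetical per-unit dollar values (replace with your own figures).
penalty_per_otd_point = 20000.0   # annual penalties avoided per OTD point
freight_per_otd_point = 6667.0    # annual expedite freight per OTD point
otd_points_gained = 9             # e.g. 76% -> 85%

ot_hours_cut_per_week = 50        # unplanned OT hours eliminated weekly
loaded_rate = 28.0                # fully-loaded $/hour

wip_reduction = 150000.0          # WIP dollars freed
cost_of_capital = 0.08            # your company's cost of capital

otd_value = otd_points_gained * (penalty_per_otd_point + freight_per_otd_point)
ot_value = ot_hours_cut_per_week * loaded_rate * 52
wip_value = wip_reduction * cost_of_capital

print(f"OTD: ${otd_value:,.0f}  OT: ${ot_value:,.0f}  WIP: ${wip_value:,.0f}")
```

Presenting each metric this way lets leadership see exactly which improvement carries the most dollar weight in your operation.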
The 90-Day Post-Implementation Measurement Plan
Go-live is not the finish line — it is the start of your measurement period. Here is a simple plan for the first 90 days.
Days 1–30: Collect data but expect volatility. The team is learning the new system, the floor is adjusting, and some metrics will temporarily worsen. This is normal. Document issues and the actions taken to resolve them.
Days 31–60: Begin comparing to baseline. Most metrics should be trending in the right direction even if they have not hit targets. Look especially at WIP and overtime, which tend to respond faster than OTD.
Days 61–90: Conduct a formal baseline-vs-actual review. Calculate the delta on all six metrics. Quantify the financial impact. Identify the one or two metrics that have not moved as expected and diagnose why — often this is a process discipline issue rather than a software issue.
Day 90 review package for leadership: A single page with the six metrics in a before-vs-after table, the dollar impact of the deltas, and a 90-day rolling trend chart. Keep it simple — the numbers should tell the story without elaborate explanations.
A Simple Baseline Measurement Template
Use this table to document your baseline before implementation begins.
| Metric | Measurement Period | Baseline Value | Data Source | Measured By | Target (12-month) |
|---|---|---|---|---|---|
| On-time delivery % | Last 90 days | __ % | ERP work orders | Operations manager | __ % |
| Average lead time (days) | Last 90 days | __ days | ERP work orders | Operations manager | __ days |
| WIP inventory value ($) | Snapshot date | $__ | ERP open orders | Accounting | $__ |
| Weekly overtime hours | Last 13 weeks | __ hrs/week | Payroll | HR / Accounting | __ hrs/week |
| Schedule adherence % | Next 30 days | __ % | Scheduler log | Lead scheduler | __ % |
| Expedite rate % | Last 90 days | __ % | Shipping records | Customer service | __ % |
Fill in every cell before your implementation begins. Store this document where it will survive the transition — not in the old system, not in someone's email. A shared drive folder labeled "Scheduling Software Implementation" with a version-controlled spreadsheet works fine.
Start your implementation the right way. Contact User Solutions to learn how RMDB has helped manufacturers at GE, Cummins, and BAE Systems build measurable, provable improvements in scheduling performance over 35+ years. For the broader decision framework, read our guide to choosing production scheduling software and our analysis of scheduling software total cost of ownership.
Expert Q&A: Deep Dive
Q: Our data systems are fragmented — ERP for some things, spreadsheets for others. How do I get a reliable baseline when I can't trust the numbers?
A: Start with what you can observe directly rather than what you can extract from systems. On-time delivery: pull completed work orders from your ERP for the last 90 days and manually check promise date vs. ship date. Lead time: sample 50 recent jobs and calculate actual calendar days from order to ship. WIP inventory: a physical count of open work orders on the floor, valued at material cost, takes one afternoon. Imperfect data collected consistently is more useful than waiting for perfect data that never arrives. Document your measurement method precisely so you apply the same method post-implementation.
Q: My CEO wants to see ROI projections before approving the purchase. What improvement assumptions are defensible?
A: Use industry benchmarks as your floor, not your ceiling. The Manufacturing Institute reports that best-in-class job shops run 92–95% OTD; average performers run 75–82%. If you are at 76%, projecting an improvement to 85% within 12 months is conservative and defensible. For overtime, software-driven scheduling typically reduces unplanned overtime by 15–25% in the first year. For WIP, visible scheduling reduces excess WIP by 10–20% by eliminating the buffer stockpiling that informal scheduling creates. Build your ROI model on the conservative end of these ranges. It is better to exceed conservative projections than to miss aggressive ones.
Ready to Transform Your Production Scheduling?
User Solutions has been helping manufacturers optimize their production schedules for over 35 years. One-time license, 5-day implementation.

User Solutions Team
Manufacturing Software Experts
User Solutions has been developing production planning and scheduling software for manufacturers since 1991. Our team combines 35+ years of manufacturing software expertise with deep industry knowledge to help factories optimize their operations.
Related Articles

Change Management for Scheduling Software Adoption
Guide to managing organizational change when implementing manufacturing scheduling software. Covers resistance, stakeholder buy-in, communication, and adoption strategies.

One-Time License vs. SaaS for Manufacturing Software (2026)
Compare one-time license and SaaS pricing models for manufacturing software. Covers TCO analysis, pros and cons, ITAR implications, and which model fits your operation.

25 Questions to Ask Scheduling Software Vendors Before You Buy
Essential questions to ask manufacturing scheduling software vendors. Covers functionality, implementation, pricing, support, compliance, and red flags to watch for.
