Operator Switching Dashboard
Measuring switch-order quality, peer-anchored fairness, and organizational safety — all without zero-sum performance comparisons.
Problem
Switching is the single most consequential activity a Distribution Control Center (DCC) operator performs. A clean switch order keeps crews safe. A flawed one — wrong sequence, missing tag, mis-identified device — can mean an arc flash, an unplanned outage, or a crew member standing in the wrong place. Errors are low-frequency, high-consequence, which is precisely the category that training and post-event audits don’t catch — both are reactive. By the time the audit happens, the order has already landed in the field.
Across the industry, every DCC operates with the same blind spot: no real-time visibility into who is writing switch orders, how often those orders survive peer review without correction, whether workload is concentrated on a small number of operators, or whether a particular operator’s recent pattern is starting to drift before the next near-miss event.
Two specific failure modes compounded the problem at OG&E before the dashboard existed:
- Workload imbalance. Most switching was being written by the same few operators. They felt overwhelmed; other operators weren’t improving because they weren’t getting the reps.
- Metrics that drove bad behavior. Ranking operators by raw counts rewarded writing more orders (even with high error rates), “reviewing” more (even without finding errors), and revising orders that didn’t need revision. The scoreboard was actively misaligned with safety.
A dashboard would only help if it was anchored to a metric that actually predicted operational quality, scored against a baseline that adapted as the team improved, and surfaced workload and safety together.
Approach
Three design choices drove the work; they are also what peer utilities at the S.E.E. DOCC Spring 2026 conference engaged with most.
1. Safety-first weighting in the composite score
The composite score that ranks operators is intentionally tilted toward error-prevention behaviors, not raw throughput:
- 50% Volume — equal-share workload distribution, capped at 100 once a fair share of the work is met. This rewards reaching parity; it does not reward writing the most. Above-fair-share contribution doesn’t earn additional credit.
- 35% Writer Quality — pass-through rate on written orders (defined below).
- 15% Reviewer Quality — catch rate during peer review: how often a reviewer’s session actually corrected a problem before the order reached the field.
That weighting choice is the design point. A productivity-maximizing weighting would invert it — heavy on volume, light on quality. The dashboard makes the opposite trade explicit.
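As a sketch of how that weighting composes (the fair-share cap and the normalization details here are illustrative assumptions, not the production computation):

```python
# Hypothetical sketch of the 50 / 35 / 15 composite weighting described above.
# The fair-share volume cap and normalization details are illustrative assumptions.

def volume_score(orders_written: int, total_orders: int, active_operators: int) -> float:
    """Equal-share volume: reaching a fair share of the cohort's orders earns
    full credit (100); writing more than a fair share earns nothing extra."""
    fair_share = total_orders / active_operators
    if fair_share == 0:
        return 0.0
    return min(orders_written / fair_share, 1.0) * 100


def composite_score(volume: float, writer_quality: float, reviewer_quality: float) -> float:
    """Safety-first weighting: 50% volume (capped), 35% writer quality
    (pass-through %), 15% reviewer quality (review catch rate). Inputs are 0-100."""
    return 0.50 * volume + 0.35 * writer_quality + 0.15 * reviewer_quality


# Example: an operator at fair-share volume, 80% pass-through, 40% catch rate.
score = composite_score(volume_score(120, 1200, 10), 80.0, 40.0)
print(round(score, 1))  # 0.50*100 + 0.35*80 + 0.15*40 = 84.0
```

The cap is what keeps the volume component from re-creating the old throughput race: once an operator reaches parity, only quality moves their score.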
2. Pass-through percentage as the primary writer-quality proxy
Pass-through % is the percentage of orders an operator writes that survive peer review without correction before reaching the field.
Pass-through = (Orders not revised by someone else) ÷ (Orders written) × 100
The metric is the single best leading indicator of safety the dashboard can measure. A clean pass-through means the instruction that hits the field is the instruction the writer intended — accurate, complete, safely sequenced. A degrading pass-through means drift is starting before any near-miss event has been triggered.
Thresholds are tier-banded: ≥ 75% reads as strong, 60–74% acceptable, and < 60% warrants supervisor follow-up. The cut points themselves are not fixed; they are computed per control cohort (see the MAD-based anchoring below).
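A minimal sketch of the computation, using hypothetical field names rather than the production schema:

```python
# Illustrative pass-through % computation. Field names are hypothetical; only
# orders that have completed peer review are passed in (see Stack notes on
# lifecycle-aware eligibility).

def pass_through_pct(orders: list[dict]) -> float | None:
    """Share of an operator's written orders that survived peer review
    without being revised by someone else."""
    written = len(orders)
    if written == 0:
        return None  # no eligible orders, no score
    clean = sum(1 for o in orders if not o["revised_by_other"])
    return clean / written * 100


orders = [
    {"id": "SW-101", "revised_by_other": False},
    {"id": "SW-102", "revised_by_other": True},
    {"id": "SW-103", "revised_by_other": False},
    {"id": "SW-104", "revised_by_other": False},
]
print(pass_through_pct(orders))  # 75.0
```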
3. MAD-based peer anchoring
The third design choice is the one that made the dashboard replicable for peer utilities rather than locked to OG&E’s headcount: thresholds are not fixed targets, and they’re not mean-based — they’re computed as median ± 1 MAD (Median Absolute Deviation) per control cohort (C1 DSOs vs. C2 operators).
That choice does three things:
- Robust to outliers. A single super-performer or a single struggling operator doesn’t distort the cohort’s bar. Mean-based thresholds and percentile thresholds both fail this test in different ways.
- No zero-sum competition. “Above peers” / “on target” / “below peers” tiers measure each operator against the cohort’s actual current distribution. Everyone can be “on target” if the cohort has tightened up.
- The bar rises with the team. As writing quality improves across the cohort, the median rises, the MAD shrinks, and the bar moves up automatically. Nobody has to re-define the goal.
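A minimal sketch of the banding, assuming the cohort's pass-through scores are already computed (the use of Python's statistics module is an implementation assumption):

```python
# Sketch of MAD-based peer anchoring: band each operator against
# median ± 1 MAD of the cohort's scores. Numbers are illustrative.
from statistics import median


def mad_band(scores: list[float]) -> tuple[float, float]:
    """Return (lower, upper) = cohort median -/+ one median absolute deviation."""
    m = median(scores)
    mad = median(abs(s - m) for s in scores)
    return m - mad, m + mad


def tier(score: float, lower: float, upper: float) -> str:
    if score > upper:
        return "Above peers"
    if score < lower:
        return "Below peers"
    return "On target"


cohort = [82.0, 78.0, 85.0, 74.0, 80.0, 79.0]  # one cohort's pass-through %, made up
lo, hi = mad_band(cohort)                       # median 79.5, MAD 2.0 -> band 77.5 to 81.5
for s in cohort:
    print(s, tier(s, lo, hi))
```

Because the band is recomputed from the cohort's current distribution, everyone can land "On target" as the cohort tightens; no one is forced into a bottom tier.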
What I built
Four KPI surfaces, plus a deep audit and metric-computation layer underneath them:
- Pass-through % per operator — split by control type (D orders vs. M orders), tier-banded, filterable by time period.
- Composite scoring tiers — “Above peers,” “On target,” “Below peers” — color-coded, with the underlying volume / writer / reviewer scores expandable for drill-down.
- Organizational KPIs — org-wide pass-through median, count of operators currently in the “Below peers” tier, total orders written in the selected year, year-over-year pace of orders written across the cohort.
- Workload equity surfaces — total active operators, distribution of orders per operator across the cohort, with concentration risk surfaced when one or two operators carry the bulk of the writing.
- Bad-Order UG tracking — a leading-indicator log of switch orders flagged as requiring rework, de-energized-permit holds, or field corrections. Surfaces operational drift before it becomes incident-grade.
- Order-level audit drill-down for supervisors — full audit trail of every order an operator touched, filterable by role, month, and control type. A supervisor can land on an operator’s tier and trace the underlying orders that produced it.
- Lifecycle-aware metric computation — Draft orders don’t count against pass-through; only orders that have actually moved through peer review and into a post-field state contribute to the writer’s score. Pre-field state, in-progress, completed, cancelled, sent-to-field — each handled distinctly.
Results
- Presented at the Southeastern Electric Exchange (S.E.E.) DOCC Committee Spring Meeting, April 2026. Peer-utility leaders engaged deeply with the safety-first weighting and the MAD-based peer-anchoring approach. The questions after the session were consistently of the form “how would we adapt this for our DCC?” — not “does this work?”
- In production at OG&E DCC. Supervisors and operators are using the dashboard in real performance conversations, not just for retrospective review. The dashboard sits inside Nighthawk, accessible to anyone with the appropriate RBAC scope.
- Ongoing iteration. Supervisor feedback is informing the next round of features — primarily around drill-down ergonomics, peer-comparison views inside individual operator pages, and tighter integration with the AI-assisted switching writer (a separate work order under the AI Integration umbrella).
Why this pattern transfers
The dashboard is engineered for OG&E’s DCC, but the management problem it solves is universal. Any operations control center where high-consequence, written work products flow from operator → peer review → field has the same blind spot:
- Refinery and chemical-plant control rooms — written procedures and lock-out/tag-out documents
- Air traffic control and transportation dispatch — flight plans, train movement authorities
- Healthcare order entry — physician orders flowing to pharmacy, nursing, lab
- Trading and risk operations — written trade tickets and risk-acknowledgment documents
In each of those contexts, pass-through-from-author-to-execution is a measurable leading indicator of safety. MAD-based peer anchoring keeps the comparison fair across team-size changes and shifts. Safety-weighted scoring tilts the incentive away from raw productivity. The pattern transfers; only the source data and the action-language change.
Stack notes
Two non-obvious design choices that make the dashboard tractable:
MAD instead of mean / percentile thresholding. Most internal scorecards default to mean-based thresholds or fixed percentile bands. Both have failure modes: mean is pulled by outliers; percentile bands are arithmetically zero-sum (someone is always in the bottom decile, even if the bottom decile is excellent). MAD-based thresholds avoid both. The math is simple — median, then median-of-absolute-deviations-from-the-median — and the result is robust at small cohort sizes (which most DCCs are).
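A quick numerical illustration of that robustness claim, with made-up scores:

```python
# Made-up cohort with one struggling outlier: the mean-based bar moves a lot,
# the median/MAD bar barely moves.
from statistics import mean, median


def mad(xs: list[float]) -> float:
    m = median(xs)
    return median(abs(x - m) for x in xs)


cohort = [78, 80, 81, 82, 84]
with_outlier = cohort + [35]

print(mean(cohort), mean(with_outlier))      # 81.0 -> ~73.3: mean drops ~8 points
print(median(cohort), median(with_outlier))  # 81 -> 80.5: median barely moves
print(mad(cohort), mad(with_outlier))        # 1 -> 2.0: the band stays tight
```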
Lifecycle-aware metric computation. Draft orders, in-progress orders, completed-but-pre-field orders, and post-field orders each contribute differently to the metrics. Draft orders that never get released shouldn’t penalize pass-through (the writer caught the problem themselves). Only orders that have moved through peer review and into a post-field state are eligible to count toward the writer’s pass-through score. The implementation reuses Nighthawk’s existing lifecycle helpers, so the same state semantics apply consistently across the dashboard.
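A sketch of that eligibility filter, with hypothetical state names standing in for Nighthawk's actual lifecycle helpers:

```python
# Hypothetical eligibility filter: only peer-reviewed orders that reached a
# post-field state count toward pass-through. State names are illustrative,
# not Nighthawk's real state machine.

POST_FIELD_STATES = {"sent-to-field", "field-complete"}


def eligible_for_pass_through(order: dict) -> bool:
    return order["peer_reviewed"] and order["state"] in POST_FIELD_STATES


orders = [
    {"id": "SW-201", "state": "draft", "peer_reviewed": False},               # writer caught it; not scored
    {"id": "SW-202", "state": "field-complete", "peer_reviewed": True},       # scored
    {"id": "SW-203", "state": "cancelled", "peer_reviewed": True},            # not scored
    {"id": "SW-204", "state": "completed-pre-field", "peer_reviewed": True},  # not yet scored
]
print([o["id"] for o in orders if eligible_for_pass_through(o)])  # ['SW-202']
```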