José Macías Silva
AI Engineer · Grid Operations
Aerospace-trained systems engineer building AI for the electric grid.
I lead the AI model integration effort at OG&E — connecting AWS Bedrock and Anthropic Claude models to Nighthawk, the in-house operations platform whose five-year roadmap I authored.
Selected case studies at OG&E.
Each card maps to an entry on the resume — a mix of shipped applications, platform strategy, a cross-functional data-SME role, and a peer-utility conference talk. Click any card for the full case study with custom diagrams and the technical detail behind the work.
AWS Bedrock + Anthropic Claude models — building the first production AI capability inside Grid Operations.
Leading AI Integration at OG&E
Building the secure API gateway, role-aware context manager, and per-query audit-logging layer to connect Bedrock to OG&E's operational data. Four production-bound LLM capabilities in flight.
Read the case study →
An in-house workforce-scheduling platform built in 10 weeks against an $800K vendor quote.
Resource Availability Suite
Enterprise on-call and leave-management platform linked to Azure AD, ADMS, and CAD. Projects $284K–$310K in annual operations savings; a vendor quoted $800K for the same scope.
Read the case study →
The strategy that takes fragmented operational tooling to a unified AI-native platform.
Nighthawk Platform — 5-Year Roadmap
Authored the 5-year platform roadmap that took Grid Operations from 8+ disconnected legacy tools to a unified operations platform with SSO, RBAC, and a multi-database architecture. Phase 2 layers production AI; Phase 3 introduces agentic workflows under a 4-tier governance framework.
Read the case study →
A grid-wide algorithm — built, validated, and handed over — that ranks every overhead lateral by undergrounding ROI.
Strategic Undergrounding Analytics
A Google Maps-based web app that maps OG&E's entire grid infrastructure and ranks every overhead single-phase lateral as an undergrounding candidate using a composite ROI / vegetation / reliability / infrastructure score.
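A composite score of this shape is typically a weighted blend of normalized factor scores. A minimal sketch (the weights and lateral IDs below are illustrative placeholders, not OG&E's actual values):

```python
def composite_score(roi, vegetation, reliability, infrastructure,
                    weights=(0.40, 0.20, 0.25, 0.15)):
    """Blend four normalized [0, 1] factor scores into one ranking value.

    The weights here are illustrative placeholders, not production values.
    """
    factors = (roi, vegetation, reliability, infrastructure)
    if any(not 0.0 <= f <= 1.0 for f in factors):
        raise ValueError("factor scores must be normalized to [0, 1]")
    return sum(w * f for w, f in zip(weights, factors))

# Rank hypothetical laterals from best to worst undergrounding candidate
laterals = {
    "LAT-001": composite_score(0.9, 0.7, 0.6, 0.8),
    "LAT-002": composite_score(0.4, 0.9, 0.8, 0.5),
}
ranked = sorted(laterals, key=laterals.get, reverse=True)
```

Normalizing every factor to the same [0, 1] range before weighting keeps any single raw metric (say, dollar-denominated ROI) from dominating the blend.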
Read the case study →
Measuring switch-order quality, peer-anchored fairness, and organizational safety — all without zero-sum performance comparisons.
Operator Switching Dashboard
A safety-first analytics dashboard for DCC operators and supervisors that measures pass-through percentage as the leading indicator of switch-order quality before the order reaches the field. Peer-anchored scoring uses MAD (Median Absolute Deviation) thresholds per control cohort, so the bar adapts as the team improves. Presented at the S.E.E. DOCC Spring 2026 conference, where it drew sustained questions from peer-utility leaders.
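MAD-based peer anchoring can be sketched as follows (a minimal illustration with made-up pass-through rates and an assumed k multiplier; the dashboard's actual parameters aren't shown here):

```python
import statistics

def mad(values):
    """Median absolute deviation: median of |x - median(x)|."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def cohort_floor(rates, k=2.0):
    """Peer-anchored floor for a control cohort: median minus k * MAD.

    Because the floor is anchored to the cohort's own median, it rises
    automatically as the team's pass-through rates improve.
    """
    return statistics.median(rates) - k * mad(rates)

# Illustrative pass-through percentages for one cohort
cohort = [92.0, 88.0, 95.0, 90.0, 85.0, 93.0]
floor = cohort_floor(cohort)                # 91.0 - 2.0 * 2.5 = 86.0
flagged = [r for r in cohort if r < floor]  # [85.0]
```

Using the median and MAD rather than mean and standard deviation keeps one outlier from dragging the whole cohort's bar down, which is what makes the comparison non-zero-sum.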
Read the case study →
Validated future-state schema against five downstream consumers without engaging external contractors.
ADMS Upgrade — Data SME
Served as data SME on OG&E's ADMS upgrade. Validated future-state schema and data flow against Oracle, SAP HANA, iDashboard, PowerBI, and Nighthawk. Performing this scope internally rather than via contractors generated $49,031 in documented cost avoidance and a zero-disruption cutover.
Read the case study →
Re-implemented a legacy C# desktop SCADA tool as a web application accessible to anyone on the OG&E intranet.
Circuit Operations App — Legacy Modernization
Modernized OG&E's legacy SCADA operations and alarm-management desktop app (C#) into a web application hosted on the Nighthawk platform. Removed the per-workstation install dependency. Expanded access from a handful of installed users to anyone on the OG&E intranet.
Read the case study →
Co-presented OG&E's AI methodology at the Southeastern Electric Exchange peer-utility forum.
AI at OG&E — S.E.E. DOCC Committee Presentation
Presented "AI at OG&E — Transforming Grid Operations" at the Southeastern Electric Exchange (S.E.E.) DOCC Committee Spring Meeting, April 2026, to peer utility leaders. Demonstrated the Nighthawk platform, three production applications built using the seven-step AI-assisted development methodology, and the design choices behind them.
Read the case study →
Where the AI passion started.
It started in 2018, on a freeway, in a Tesla on Autopilot.
The first wide turn was unnerving. Letting a vehicle steer itself — knowing a failure mode meant consequences for my life and others — is the kind of nervous-system event you don’t forget. But as the miles accumulated, I started noticing patterns: where the system was confident, where it hesitated, which lane markings tripped it up, which merges it handled cleanly. The fear didn’t disappear — it converted into something more useful. A working model of when to trust the system and when to take back the wheel.
By the end of several long trips, Autopilot had saved me from a few potential collisions and, just as importantly, reduced the fatigue that makes human failure modes more likely. Two sets of eyes. Two sets of judgment. One overall safer drive. That experience reshaped how I think about AI in production: the right relationship with a powerful system isn’t blind trust or blanket skepticism — it’s pattern recognition. Understand its strengths, respect its weaknesses, design the workflow so the human stays accountable for the consequences.
Two years later, during the COVID lockdown, I was building out Nighthawk and burning long evenings on a question that had been bouncing around since my business-analyst years: what if I had a super-intelligent version of myself that remembered every query, every operational quirk, every piece of institutional context — and could generate analytical work in a fraction of the time? I started researching ML, and that's where I found Rasa — an open-source Python framework for building conversational AI agents with intent classification, entity extraction, slot-filling, and dialog management. Pre-LLM, this was real ML work: hand-curated training data, custom dialog flows, the whole pipeline. I built an internal NLU agent on it for fun. It was the first time I'd built something that could understand intent rather than just match strings.
When OpenAI released ChatGPT, I signed up the day it opened. The first hour with it crystallized something I'd been chasing through Rasa: this was the leverage I'd been looking for as an engineer, as a business analyst, and honestly as someone juggling more projects than time. The interaction model had shifted from "describe every rule to the system" to "describe the goal and collaborate."
The most memorable thing I asked it for early on wasn’t an engineering problem — it was a structured nutrition plan and a training schedule for a marathon. The plan worked. It rekindled my love for running, made me measurably healthier, and convinced me that this collaboration model wasn’t just for code. It was a lever you could pull on any part of life worth investing in.
Today, my daily driver is Claude Code powered by Anthropic’s frontier Claude models. The fidelity of the collaboration keeps improving — it’s closer to working with a sharp colleague than operating a tool. I believe this is the technology of the century.
But the lesson from the Tesla still applies. I can see the patterns. I can see where the model is confident, where it drifts, where the failure modes are subtle enough to slip past a casual reviewer. Leveraging AI well means designing for its strengths and engineering around its weaknesses — and being the one in the room who flags the safety and security implications before they become incidents.
That’s the thread. From a steering wheel I learned to trust slowly, to a production AI integration I’m building inside a regulated utility under a four-tier governance framework. Same posture. Different stakes.
Career goals. Personal goals. Problems no one yet knows how to frame — the use cases keep expanding, and the same posture works across all of them. I see the vision. I have the passion. The work I’d choose if no one were paying me to do it happens to be the work I’m doing. That’s why I love this.
The systems-engineering thread.
My day job is leading AI model integration at OG&E — connecting AWS Bedrock and Anthropic Claude models to Nighthawk, the in-house operations platform I authored the five-year roadmap for. Four production-bound LLM capabilities are in flight: AI quality-checking field-crew cause codes against CAD notes; AI validating switching orders against procedure as operators write them; a plain-language interface for our resource-availability system; and a "describe what you need, get a dashboard" generation tool open to anyone. Each one targets a specific drag on reliability, cost, safety, or regulatory compliance.
I also author the labor-cost models and capital-justification artifacts that translate each AI capability into a fundable capital work order. The bridging role between AI engineering and capital approval is rare in regulated industries — most engineers don't speak capital accounting, and most operations teams can't quantify AI-project effort. It's the workstream that determines whether AI ideas get funded or stall.
Before the AI work, I built Nighthawk itself — fifteen-plus applications consolidating eight legacy tools, four production databases unified under SSO, Kubernetes deployment, three-tier RBAC. The workforce-scheduling system on top of it projects $284K–$310K in annual operations savings; a vendor had quoted us $800K to build the same scope, and the same internal team shipped it in ten weeks for roughly $40K.
Before grid operations, six years in aerospace operations — SIOP for a $200M+ product line at Parker Meggitt (formerly Meggitt Safety Systems), multi-ERP integration, qualification-test design at Pacific Scientific and Pacific MultiTech. B.S. Aerospace Engineering, Cal Poly Pomona. Certificate in Management for Technical Professionals, UC Irvine.
The systems-engineering thread runs straight through it all.
Let’s talk.
Energy, healthcare, finance, manufacturing, logistics, retail, public sector, aerospace, software — whatever the industry. If AI integration, systems engineering, or turning fragmented operations into a unified platform is on your plate, I’d like to hear about it. I learn new domains fast and bring the same delivery discipline to every one.
Full-time roles
Operating as Lead AI Engineer today. Open to Lead, Principal, or Staff Engineer roles, Engineering Management, and Director / Head-of-AI positions where the work is strategic and consequential. Any industry. Remote, hybrid, or relocation considered.
Email me →
Selective contract engagements
I take on a small number of advisory or build engagements per year — any industry where AI integration, platform consolidation, or capital-justification authoring is the bottleneck. Six- to twelve-week scopes preferred.
Start a conversation →
Networking & collaboration
Always happy to swap notes, trade intros, or compare playbooks — AI engineering, grid operations, capital justification in regulated industries, aerospace, or systems thinking in general. If you want to team up on something, invite me to speak, or just say hello, send a note.
Say hello →