How AI Became Part of Our Work

December 4, 2025
AI & Governance

[Cover image: a robotic hand and a human hand reaching toward each other, with a glowing digital sphere and light trails between their fingertips against a sunset sky.]

When people ask how Strategic Systems adopted AI, they usually expect to hear about a roadmap or a major initiative. It was much messier and far more practical than that. We didn’t start with a platform strategy. We started with two tools: ChatGPT and Gamma. ChatGPT was where the thinking began. Gamma was where we tried to turn that thinking into something presentable. For a while, that pairing worked well enough. We were moving faster, shaping ideas quicker, and compressing work that used to take days into hours.

We eventually hit the edges of what Gamma was good at, so we moved on to Genspark. The change wasn’t about one tool being “better” than another. It was about learning that whatever we used needed to fit how we work, not the other way around. AI didn’t come into Strategic Systems as a strategy document or a formal rollout. It showed up as a practical response to running a business that was becoming more complex by the month. Between talent services, infrastructure work, application development, advanced analytics, and learning, the pace was increasing while tolerance for bad decisions kept shrinking.

At first, the benefits were obvious but modest. We wrote faster, found information more easily, and summarized documents without as much overhead. Useful, but not transformative. The real change started when we stopped waiting for the work to be clean before involving AI. We brought it into the middle of unfinished thinking: draft plans, half-formed ideas, debates that hadn’t settled yet. It became where we tested logic before it had consequences. We used it to challenge assumptions and stress ideas before they hardened.

Over time, the effect showed up quietly. Meetings became more focused. Writing sharpened. Weak thinking collapsed sooner. Good ideas traveled further before hitting resistance. We weren’t just saving time. We were catching problems earlier when they were easier to fix.

It also became clear that the way we were working no longer fit neatly into separate buckets.

What became obvious inside the company was that adoption, governance, enablement, data architecture, and operating design were not separate conversations. If one moved, the others moved with it. EDGE became the structure around that reality, not because it looked good on a slide, but because it reflected how the work functioned. As that thinking matured, it began showing up in the things we built for ourselves.

SERVE started to connect sales, estimation, and delivery. Once AI became part of that flow, the platform changed in real ways. Estimates became more consistent. Documentation existed when it was supposed to. Patterns across deals surfaced sooner. Issues stopped hiding until late in a project, when they were expensive to fix. At the same time, we were wrestling with a different question. How do you move AI from the executive tier into daily work without it becoming something people ignore? That work turned into SAI. Not as a product, but as a way of working. It helped translate experimentation into habits teams could rely on. The effort shifted from selling features to helping people build confidence and judgment alongside the technology.

That is also why EAT exists. AI does not live by itself. It intersects with automation, analytics, workflows, and infrastructure. People do not feel “AI.” They experience whether work is simpler or harder. EAT became our way of pulling those pieces into one system instead of letting them drift into separate initiatives.

Along the way, something else became clear. What mattered most was not which tools we used, but what stayed with us as the tools changed. The context we built up. The expectations around preparation. The habits around testing thinking instead of assuming it was right. The shared understanding of what “good” work looked like. Changing tools was easy. Rebuilding that was not.

We also learned the hard way that AI does not fix unclear thinking. It reflects it. If strategy is fuzzy, AI produces better-written confusion. If leadership avoids decisions, AI makes avoidance look organized. Used well, it sharpens thinking. Used poorly, it gives confusion better formatting.

Eventually, AI stopped being treated like software and started being treated as part of how leadership works. It did not replace thinking. It raised the visibility of weak thinking. Ambiguity stood out faster. People came into conversations prepared, or it became obvious very quickly when they were not.

There was no rollout plan. No company-wide reset. AI simply became part of the normal flow of work, the same way shared documents and messaging once did. You stopped noticing it until you imagined trying to operate without it. What surprised us most was how quickly the conversation stopped being about tools at all. The work shifted to how decisions were made, how ideas were tested, and how much ambiguity we were willing to tolerate before acting. When output is no longer scarce, advantage starts to show up in quieter places. In judgment. In clarity. In knowing when to push and when to walk away.

Working this way has not solved every problem in the business. What it has done is change how quickly issues surface and how directly we deal with them. Decisions get tested earlier. Weak assumptions do not last as long. And the gap between knowing something is wrong and doing something about it keeps getting smaller. For us at Strategic Systems, that has been the real value of both widely available AI and the bespoke AI we have built.