AI Automation in Test & Measurement: Why the Industry Is Overdue for a Reckoning

There’s a strange paradox at the heart of the Test & Measurement industry.

The entire purpose of T&M is to tell you the truth about something. Whether it’s the performance of a power amplifier, the signal integrity of a communications system, or the thermal behavior of a semiconductor under load — T&M equipment exists to remove ambiguity. It’s the machinery of certainty.

And yet, the process of running that equipment — setting up tests, executing them, interpreting results, making decisions — has remained stubbornly human-dependent, repetitive, and slow. The irony is sharp: the industry built to measure everything has been surprisingly slow to measure the cost of its own inefficiency.

That’s starting to change.

The Problem Isn’t the Hardware

Modern T&M hardware is extraordinary. Signal analyzers with 110 GHz bandwidth. Oscilloscopes that digitize at terasamples per second. Vector network analyzers that characterize devices across thousands of frequency points in milliseconds. The physics being harnessed here is genuinely impressive.

But then you watch what happens when an engineer actually uses this equipment.

They open a script written in Python — or worse, LabVIEW — that hasn’t been touched in three years. They wrestle with instrument drivers. They manually sweep through test configurations, wait for results, export a CSV, open Excel, and build a report by hand. Or they spend two days writing new test code from scratch because the existing code doesn’t quite fit the new DUT.

The bottleneck in T&M has never been measurement speed. It’s been everything surrounding the measurement.

Why Automation Has Always Been Hard Here

T&M automation is uniquely difficult for a few reasons that don’t get enough credit.

Instrument diversity. The T&M ecosystem is a zoo. SCPI-based instruments from five different manufacturers, each with their own quirks. Legacy GPIB devices running alongside modern LAN instruments. VISA layers, IVI drivers, and proprietary APIs all living in the same test rack. Writing automation that works reliably across this landscape has historically required deep, specialized expertise.
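To make the zoo concrete, here is a minimal sketch of the kind of thin abstraction layer that lets test logic ignore dialect differences. The vendor classes and SCPI command trees below are illustrative, not tied to any real product:

```python
from abc import ABC, abstractmethod

class SpectrumAnalyzer(ABC):
    """Vendor-neutral interface the test logic programs against."""

    @abstractmethod
    def set_center_frequency_hz(self, hz: float) -> str:
        """Return the vendor-specific SCPI command to send."""

class VendorA(SpectrumAnalyzer):
    def set_center_frequency_hz(self, hz: float) -> str:
        return f":SENSe:FREQuency:CENTer {hz}"

class VendorB(SpectrumAnalyzer):
    # Same intent, different command tree and units convention.
    def set_center_frequency_hz(self, hz: float) -> str:
        return f"FREQ:CENT {hz / 1e6} MHZ"

def configure(analyzer: SpectrumAnalyzer, hz: float) -> str:
    # The calling test stays identical regardless of which box is on the bench.
    return analyzer.set_center_frequency_hz(hz)
```

In a real rack this layer would sit on top of VISA or a vendor API; the point is that quirk-handling lives in one place instead of being re-solved in every script.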

Test intent is complex. A test isn’t just “apply stimulus, measure response.” It’s a series of decisions. Is this measurement good enough? Does this result warrant a deeper look? Should we halt the sequence or continue? That decision logic is usually buried inside human heads or poorly documented SOPs — not in code.
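That decision logic can live in code rather than in heads. A minimal sketch, with limits and guard band invented purely for illustration:

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    INVESTIGATE = "investigate"  # in spec, but close enough to warrant a deeper look
    HALT = "halt"                # out of spec: stop the sequence

def judge(measured_dbm: float, limit_dbm: float, guard_db: float = 1.0) -> Verdict:
    """Encode 'is this measurement good enough?' as an explicit rule."""
    if measured_dbm > limit_dbm:
        return Verdict.HALT
    if measured_dbm > limit_dbm - guard_db:
        return Verdict.INVESTIGATE
    return Verdict.PASS
```

Once the decision is expressed this way, it can be reviewed, versioned, and handed to a new engineer instead of walking out the door with the original author.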

Institutional knowledge is siloed. The engineer who built the test knows what the edge cases are. The new engineer who inherited it doesn’t. When the original author leaves, that knowledge walks out the door. The code is rarely self-documenting enough to compensate.

The stakes are high. You’re not testing a web form. You’re qualifying hardware that might go into aircraft, defense systems, or medical devices. Getting it wrong has consequences. So engineers are conservative, change-averse, and rightly skeptical of automation that feels like a black box.

Where AI Changes the Equation

The wave of AI capability that’s washed over software development in the last few years is now cresting on T&M, and it’s arriving with some genuinely useful properties.

The most immediate value isn’t some futuristic autonomous test system. It’s much more practical: AI that can understand test intent and generate the scaffolding.

When an engineer describes what they need to measure — in plain language — a well-designed AI system can translate that into instrument commands, sequencing logic, and result validation. Not perfectly. Not without review. But fast enough that the starting point is no longer a blank file, and the iteration cycle compresses dramatically.

Think about what that actually means:

  • A new engineer can be productive on day one instead of month three
  • Test coverage expands because writing new tests is no longer prohibitively expensive
  • Legacy scripts get documented, explained, and refactored by a system that can read them and reason about them
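The translation step described above can be sketched as a structured intent that compiles into reviewable instrument commands. Everything here (the spec format, the function name, the SCPI tree) is a hypothetical illustration of the shape of the output, not a real API:

```python
def compile_sweep(spec: dict) -> list[str]:
    """Compile a structured sweep description into an ordered SCPI command list.

    The spec might be emitted by an AI assistant from a plain-language
    request; the engineer reviews the commands before anything runs.
    """
    cmds = [
        f":SENSe:FREQuency:STARt {spec['start_hz']}",
        f":SENSe:FREQuency:STOP {spec['stop_hz']}",
        f":SENSe:SWEep:POINts {spec['points']}",
        ":INITiate:IMMediate",
    ]
    return cmds

# Example: "sweep 1 to 2 GHz with 401 points"
sweep_cmds = compile_sweep({"start_hz": 1e9, "stop_hz": 2e9, "points": 401})
```

The value isn't that this is hard to write; it's that the starting point is no longer a blank file.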

But the deeper opportunity is in closed-loop test intelligence — AI that doesn’t just generate test code, but watches the results in real time and makes decisions. Adaptive test sequences that spend more time where the anomalies are. Automatic root cause classification. Results that don’t just tell you what happened, but contextualize it against thousands of previous runs.
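One minimal form of that adaptive behavior, refining the sweep wherever adjacent samples jump, might look like this (a sketch, with the jump threshold standing in for a real anomaly criterion):

```python
def adaptive_points(freqs: list[float], values: list[float],
                    jump_threshold: float) -> list[float]:
    """Return midpoint frequencies to re-measure wherever the response
    changes sharply, so the next pass spends its time where the anomaly is."""
    extra = []
    for i in range(len(freqs) - 1):
        if abs(values[i + 1] - values[i]) > jump_threshold:
            extra.append((freqs[i] + freqs[i + 1]) / 2)
    return extra
```

A closed-loop system would iterate this: measure, densify around the interesting region, and measure again, instead of sweeping a fixed grid.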

This is where T&M goes from being a bottleneck to being a competitive accelerant.

What Glue Is Building

At Glue, we’ve been deep in this problem for a while now — and I mean deep in the specific, unsexy, difficult-to-generalize parts of it.

The T&M software stack has a fundamental architecture problem: it was built for a world where humans were always in the loop, always present, always interpreting. The data formats assume a human will open a file. The interfaces assume a human will click through them. The test logic assumes a human will decide what to do with the results.

Glue is building for the world where the loop closes automatically.

Our approach centers on automated test as a first-class software artifact. Not a script. Not a LabVIEW VI that lives on one engineer’s workstation. A structured, version-controlled, AI-augmented test that knows what it’s testing, why, and what good looks like.

A few things that make this concretely different:

Test authoring with AI assistance. Instead of starting from scratch or cargo-culting old code, engineers describe what they need. The system generates a starting point that speaks to the instruments in the rack, validates against known-good baselines, and documents its own logic. Engineers review, refine, and own it — but they’re not building it from zero.

Instrument abstraction that actually works. One of the harder infrastructure problems in T&M automation is making the same test logic work regardless of which specific instrument is on the bench. Glue handles this at the platform level so engineers don’t have to solve it per-project.

Results that mean something downstream. A CSV export is not a test result. It’s raw data. Glue generates structured, searchable, contextualized results that connect to the rest of your engineering workflow — design data, previous runs, pass/fail criteria, traceability to requirements.
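The difference between raw data and a result can be shown in a few lines. A sketch of a structured record (field names and the "REQ-142" identifier are hypothetical, invented for illustration):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TestResult:
    test_id: str
    requirement: str     # traceability back to the requirement it verifies
    measured: float
    limit: float
    units: str
    passed: bool
    baseline_runs: int   # how many previous runs this was judged against

# A result that carries its own context, not just a number in a CSV cell.
result = TestResult("pa_gain_01", "REQ-142", 31.7, 30.0, "dB", True, 212)
record = json.dumps(asdict(result))
```

Because the record is structured, downstream tooling can search it, diff it against previous runs, and trace it to requirements without a human opening a spreadsheet.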

AI-assisted anomaly detection. When something looks wrong, the system flags it, classifies it, and surfaces the relevant history. Instead of an engineer staring at a waveform trying to remember if this is how it looked last quarter, the system tells you.
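The simplest version of "flag it against the relevant history" is a statistical outlier check. This z-score sketch is a stand-in for the richer classification described above, not Glue's actual method:

```python
from statistics import mean, stdev

def flag_anomaly(history: list[float], new_value: float,
                 z_limit: float = 3.0) -> bool:
    """Flag a reading that sits more than z_limit standard deviations
    from the historical mean of previous runs."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_limit
```

The point is the workflow change: the system compares against the last thousand runs automatically, instead of an engineer trying to remember last quarter's waveform.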

The Resistance Is Real (And Understandable)

I want to be honest about something: adoption here is not going to be frictionless.

T&M engineers are, correctly, skeptical of automation promises. They’ve seen too many “revolutionary” test platforms that required more work to maintain than the code they replaced. They’ve been burned by vendor lock-in, by systems that worked perfectly in the demo and fell apart in production, by automation that made simple things complicated.

That skepticism is healthy. The answer isn’t to dismiss it — it’s to earn trust incrementally. Start with the test authoring assist, where the human stays in the loop and the AI is clearly a tool, not an oracle. Show that the output is readable, maintainable, and improvable. Build confidence in the platform before asking engineers to trust it with anything critical.

The path to autonomous test isn’t a leap. It’s a ramp.

Why This Matters Beyond Efficiency

Here’s the thing that I keep coming back to: T&M is a forcing function for the entire hardware development cycle.

When testing is slow, expensive, and human-bottlenecked, it happens less often. Designs get validated late. Problems get caught late. Iteration cycles stretch. Time-to-market suffers. And in industries where hardware is the product — defense, aerospace, semiconductor, telecom — that’s not just an inconvenience. It’s a fundamental constraint on how fast the industry can move.

AI-automated test doesn’t just make existing test processes faster. It changes what’s possible to test. You can run tests that would have been prohibitively expensive to automate manually. You can run them continuously, not just at program milestones. You can close the feedback loop between hardware behavior and design decisions in hours instead of weeks.

That’s not incremental improvement. That’s a different kind of product development.

The T&M industry is overdue for this reckoning. The hardware got fast a long time ago. The software around it is finally catching up.


David Sulpy is the founder and CEO of Glue, a software company building AI-powered test and measurement automation for the modern hardware development cycle.