The Last Technician - Why Maintainability Must Be Rethought for Autonomous Workers
A discipline built around the human body: As a discipline, maintenance has always been inseparable from the human body. From the earliest industrial systems to today’s complex plants, maintainability has been defined—often unconsciously—by what a trained person can physically and cognitively do.
The human body is the reference model: how far an arm can reach, how long a person can stay in a confined space, how much force can be applied safely, how information is perceived through sight, sound, touch, and even smell.
Designing-for-maintenance evolved from this reality. Accessibility meant physical access for people. Visibility meant line of sight for the human eye. Safety meant protecting the human body from energy, height, heat, chemicals, and motion. Procedures were written as step-by-step narratives intended to be read, interpreted, and sometimes adapted by technicians on site.
This human-centric foundation may have been a limitation, but it was also a necessity: humans were the only possible maintainers. Standards, best practices, and engineering culture were all built on that understanding.
The world changed while maintainability stood still: Over the last decade, something fundamental has shifted—not suddenly, not dramatically, but slowly and inevitably. Inspection tasks began to migrate away from humans and were handed to machines. At first, it was justified by safety: “We will only use robots where humans should not go.” Then by efficiency: “We will use drones because scaffolding is expensive.” Then by data quality: “Robots collect better, more consistent data.”
Slowly, inspection by machines stopped being an exceptional case and became a default option.
Today, robots inspect assets more frequently than humans ever could. They operate at night, in bad weather, during production. They enter confined spaces without permits, climb structures without scaffolding, and record conditions continuously rather than episodically.
Yet the assets they inspect remain stubbornly human-oriented.
The result is a growing misalignment: maintenance execution is changing faster than maintenance design.
Robots are present, but they are guests in a human house: Robotic systems are now common in maintenance environments, but they are still treated as visitors rather than residents. They are deployed in plants and tunnels, on roofs, and in areas that were never designed with robotic autonomy in mind. Their presence is tolerated, sometimes welcomed, but rarely anticipated at the design stage.
This matters because robots do not adapt to environments the way humans do. A technician entering a poorly designed maintenance space compensates instinctively: adjusting posture, changing sequence, improvising tools, interpreting ambiguous signals. These adaptations are invisible to standards, because they live in human experience.
Robots cannot compensate in this way. Every ambiguity becomes a failure mode. Every inconsistency becomes a risk. Every undocumented modification becomes a potential dead end.
What humans absorb effortlessly, robots must resolve explicitly—and often cannot.

There is a moment in the movie Blade Runner when replicants—bioengineered humans—move through a world that is technologically advanced and carefully engineered, but fundamentally not designed for them. Doors open too slowly, interfaces feel indifferent, and the environment tolerates their presence without acknowledging their needs.
That world is replicated in industry today. Robots navigate plants that operate perfectly well for humans yet remain quietly hostile to autonomous actors. The space speaks the wrong language for robots to understand.
Like replicants, robots are not limited by capability but by environments that assume someone else—the human—is still in control. No amount of artificial intelligence can fully compensate for an asset that was never designed to recognize its new caretaker.
Inspection is solved; meaning is not: From a technical perspective, inspection is no longer the hard part. Industry has largely solved the problem of sensing. Cameras, thermal sensors, lidar, ultrasound, vibration probes, and gas detectors provide unprecedented visibility into asset conditions. And robots do not “miss” inspections because of fatigue, weather, or scheduling conflicts.
However, sensing is not understanding.
Traditional maintainability assumes a human interpreter. Someone looks at the data, integrates them with experience, recalls previous cases, and decides whether action is required. This decision process is informal, contextual, and difficult to codify, but it works because humans are good at ambiguity.
Robotic systems, however, require explicit criteria. They need to know what constitutes normal, degraded, and unacceptable states. They need thresholds, confidence levels, and decision logic. When assets are not designed to expose their condition clearly and consistently, autonomy cannot progress beyond observation.
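The difference between human and robotic interpretation can be made concrete. A minimal sketch in Python of the explicit decision logic described above—the class names, threshold values, and the bearing-temperature example are illustrative assumptions, not taken from any real asset or standard:

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"
    UNACCEPTABLE = "unacceptable"
    UNKNOWN = "unknown"

@dataclass(frozen=True)
class Criterion:
    """Explicit condition thresholds an asset would have to publish."""
    degraded_above: float       # measured value above which state is degraded
    unacceptable_above: float   # value above which state is unacceptable
    min_confidence: float       # below this, the robot must defer to a human

def classify(value: float, confidence: float, c: Criterion) -> State:
    """Map a sensor reading to a state using only explicit criteria."""
    if confidence < c.min_confidence:
        # Autonomy cannot progress beyond observation: escalate to a human.
        return State.UNKNOWN
    if value > c.unacceptable_above:
        return State.UNACCEPTABLE
    if value > c.degraded_above:
        return State.DEGRADED
    return State.NORMAL

# Hypothetical bearing-temperature criterion (values in °C, purely illustrative).
bearing_temp = Criterion(degraded_above=70.0, unacceptable_above=90.0,
                         min_confidence=0.8)
print(classify(82.0, 0.95, bearing_temp))  # State.DEGRADED
```

A human inspector carries these thresholds implicitly, in experience; a robot can act only if someone has written them down.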
This is why many robotic maintenance initiatives plateau. Robots gather data, but humans still decide and act. The bottleneck is not technology—it is asset intelligibility.
In 2001: A Space Odyssey, HAL 9000, the spaceship’s onboard AI computer, suffers a catastrophic failure. HAL does not fail because it lacks intelligence. It fails because it is asked to operate within a system built on incomplete, contradictory, and human-centric assumptions. HAL sees everything, monitors everything, and calculates relentlessly—yet its understanding is fatally misaligned with the reality it is meant to manage.
Modern robotic inspection systems face a less dramatic version of the same dilemma. They observe more than any human inspector ever could, yet they are expected to infer meaning from assets lacking explicit logic. Data are abundant, but context remains implicit.
When we say “the robot did not understand,” what we often mean is that the asset never explained itself. Autonomous systems do not need better algorithms.
They need assets that are explicit, interpretable, and honest by design.
What the 2025 standard says—and what it does not: The updated maintainability standard published in 2025 consolidates decades of practice. It describes maintainability as an inherent property of design. It emphasizes accessibility, testability, maintenance time, skill levels, procedures, and support. It provides guidance on planning, analysis, and verification.
But it remains silent on a critical question: Who is performing maintenance?
By not explicitly addressing the performer, the standard implicitly assumes maintenance is still carried out by humans. Accessibility is human accessibility. Testability is human-operated testability. Procedures are written for human execution. Human analysis is central.
This silence is understandable. Standards follow established practice, and large-scale autonomous maintenance is still emerging. But the silence is also revealing. It highlights a growing gap between formal guidance and operational reality.
Assets are increasingly maintained by machines, yet maintainability is still defined as if humans were in the loop. This gap will not close on its own.
The limits of ergonomics in an autonomous world: Ergonomics is one of the great achievements of maintenance engineering. The emphasis on reducing physical strain, improving safety, and designing for human comfort has prevented countless injuries and saved many lives. Wherever humans are involved, ergonomics will be essential.
But ergonomics does not translate directly into robotic effectiveness.
A layout that is comfortable for a human may be confusing for a robot. A component that is “easy to reach” by hand may be unreachable by a manipulator with fixed degrees of freedom. A reflective surface that poses no issue to human vision may blind a machine’s vision system. A label readable at arm’s length may be invisible to a camera operating at a fixed distance and angle.
Robotic maintainability requires a shift from ergonomic thinking to semantic and geometric clarity.
From accessibility to legibility:
Accessibility answers the question of physical reach. Legibility answers the question of understanding.
For a robot, an asset must be legible in multiple dimensions. Components must be uniquely identifiable without relying on context, something humans infer automatically. States must be observable in a way that sensors can interpret reliably. Interfaces must signal how they can be interacted with, without the need for trial and error.
Legibility is not accidental. It must be designed. It requires deliberate choices about geometry, markings, contrast, interface standardization, and spatial organization.
An asset that is legible to machines reduces uncertainty, simplifies autonomy, and increases reliability. Interestingly, such assets often become clearer for humans as well—but clarity for humans alone is no longer sufficient.
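What machine legibility means in practice can be sketched as a machine-readable component descriptor. Everything a technician would infer from context is stated explicitly. All identifiers, fields, and values here are hypothetical illustrations, not a real schema:

```python
import json

# Hypothetical descriptor for one component of a legible asset.
# A robot reading this needs no trial and error and no contextual inference.
component = {
    "id": "VLV-1043",  # globally unique, never dependent on surrounding context
    "marker": {        # fiducial marker so cameras identify it unambiguously
        "type": "AprilTag", "family": "tag36h11", "tag_id": 1043,
    },
    "interface": {     # how the component can be interacted with
        "kind": "quarter-turn valve",
        "torque_Nm": {"min": 5, "max": 12},
        "open_position_deg": 0,
        "closed_position_deg": 90,
    },
    "observable_states": ["open", "closed", "leaking"],  # sensor-interpretable
    "pose_reference": "plant-frame/zone-B/anchor-7",     # where to find it
}

print(json.dumps(component, indent=2))
```

The point is not the format but the discipline: identity, state, and interaction rules are designed into the asset rather than left for the maintainer to infer.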
Intervention exposes every hidden assumption: Robotic inspection is one thing—intervention is another. Robotic intervention remains rare, not because robots lack dexterity but because assets demand human intuition.
When a robot is asked to act—to turn a valve, connect a service line, tighten a fastener, or replace a module—it encounters the accumulated assumptions of human-centered design. Tactile feedback, variable force requirements, informal alignment cues, and undocumented dependencies all surface at once.
Simply stated, if a maintenance task is difficult to teleoperate reliably, it will be impossible to automate safely.
This shifts responsibility upstream. Intervention success depends less on robot capability and more on asset design maturity.
If the earlier examples concern how systems perceive and interpret reality, intervention exposes a different vulnerability: what happens when execution exceeds the assumptions embedded in design. The Titanic did not fail because it was poorly designed or carelessly built. On the contrary, it represented the best engineering practices of its time and complied with all existing standards. The failure occurred because the system was designed around a set of fragile assumptions: that damage would be limited, that deviations would remain within known bounds, and that the operating context would behave as expected.
Many industrial assets behave in the same way when robots attempt intervention. As long as human intuition compensates for small inconsistencies, undocumented dependencies, and design shortcuts, the system appears robust. Once that intuition is removed, those same assumptions are exposed mercilessly.
Robots do not panic, but neither do they absorb uncertainty the way humans do. When robotic intervention fails, it is rarely because the robot is incapable. It is because the asset was designed to operate safely only as long as someone was there to absorb uncertainty—until the assumptions were exceeded.
When robots attempt intervention, small design ambiguities suddenly become absolute blockers. Interfaces that “work fine for technicians” fail when intuition is removed. What humans manage through experience becomes unmanageable when execution must be precise and repeatable.
The digital twin becomes the operational authority: In an autonomous world, models are the key to successful maintenance.
The digital twin is no longer a descriptive artefact. It is a navigational reference, a semantic map, and a decision framework. Robots rely on it to know where they are, what they are looking at, and what actions are permitted.
This elevates configuration management to a safety-critical function. Deviations between physical reality and digital representation are no longer inconveniences; they are sources of operational risk.
Humans notice when something “does not look right.” But robots trust their models. When the model is wrong, confidence becomes dangerous.
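The safety-critical nature of model deviation can be illustrated with a minimal gate: before acting, the robot compares the twin’s expected geometry with what it actually observes, and escalates rather than proceeds when they disagree. The function names, coordinates, and the 5 cm tolerance are illustrative assumptions:

```python
import math

def pose_deviation(expected_xy: tuple, observed_xy: tuple) -> float:
    """Euclidean distance between modelled and observed positions (metres)."""
    return math.dist(expected_xy, observed_xy)

def authorize_action(expected_xy: tuple, observed_xy: tuple,
                     tolerance_m: float = 0.05) -> tuple:
    """Block autonomous action when reality and the digital twin disagree.

    A technician would shrug off a valve sitting 20 cm from where the
    drawing says it is; a robot that trusts its model must not.
    """
    dev = pose_deviation(expected_xy, observed_xy)
    if dev > tolerance_m:
        return False, f"deviation {dev:.3f} m exceeds {tolerance_m} m: escalate"
    return True, "model and reality agree: proceed"

# Twin says the flange is at (12.40, 3.10); the robot observes (12.43, 3.11).
ok, msg = authorize_action((12.40, 3.10), (12.43, 3.11))
print(ok, msg)  # True model and reality agree: proceed
```

Under this discipline, a stale model does not merely degrade performance; it actively withholds authorization, which is what elevates configuration management to a safety function.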
Preventing failure without heroics: Traditional maintenance celebrates intervention under pressure. A failure occurs, a team responds, and production is restored. Stories like these shape organizational identity and reinforce reactive behavior.
Robotic maintenance offers a different narrative. Continuous inspection, early detection, and small corrective actions prevent failures from becoming major events. Work happens quietly, often invisibly.
This is not a loss of professionalism. It is a sign of maturity.
Reliability without drama may be boring, but it is optimal.
A strategic choice—hiding behind technical discussions: Many discussions about robotic maintenance focus on technology: sensors, AI, navigation, manipulation. These are definitely important, but they distract from a deeper question.
The real question is whether organizations are willing to redesign assets for non-human maintainers.
If assets continue to be designed primarily for humans, robots will remain peripheral tools—useful but limited. If assets are designed with autonomous execution in mind, robots will become first-class maintenance actors.
This decision influences availability, safety, life-cycle cost, and ultimately competitiveness.
This is not an operational choice. It is a design philosophy.
Standards lag behind reality—and that is normal: Standards are not visionary. Instead, they formalize consensus after practice stabilizes. The fact that current maintainability standards do not address robotic maintainability is not a failure but a signal that practice is moving faster than formalization.
Practitioners, asset owners, designers, and researchers must articulate what is missing, demonstrate what works, and push the discipline forward.
If we wait for standards to lead, we will wait in vain.
The closing image: A plant is operating. There is no emergency. No alarms. No rush.
A robotic system moves through an asset, observes, interprets, intervenes if needed, and updates its own model. Humans oversee the system, review decisions, and improve designs—but they are not exposed to danger.
Nothing remarkable happens. And that is exactly what modern maintainability should achieve. Maintainability has not disappeared. It has changed its primary actor. To succeed, assets need to thrive quietly, not struggle loudly.
Text: Prof. Diego Galar
Photos: SHUTTERSTOCK, gettyimages