The End of Industrial Automation (As We Know It)

By Harry Forbes
Category: ARC Report Abstract

Executive Overview

A major technology disruption is approaching in the world of industrial automation.  This disruption has the potential to render long-held supplier business models obsolete.  It will also require suppliers of all types to adopt emerging software technologies and practices from the cloud computing domain.  This disruption will be driven almost entirely by new software technology rather than hardware.  While the disruption will first impact continuous process automation, the power and value of these new software technologies will drive them and their new business models into many other areas, including much of factory automation and the Industrial IoT.

The technology driving this disruption is the “containerization” of software applications.  Software container technology originated decades ago in UNIX operating systems and continues to develop rapidly today, driven by the vast Linux open source ecosystem and by major cloud computing service suppliers.

Software containers provide two major values to software developers and end users. First, they provide automated means to deploy and manage multiple distributed applications across any number of machines, physical or virtual. Second, a container software development process creates a repository of “container images” – software deliverables that can be created collaboratively and include all the artifacts required for running an application within a specific machine environment.

The development of container images has created a powerful new abstraction that isolates applications from the heterogeneous CPUs, operating systems, software versions, and environments in which they run during production. In addition, because container images are scoped to contain a single application, containers shift the focus of developers from managing machines to managing applications.  This greatly improves application deployment and visibility.  Container development, deployment, and orchestration software tools have matured phenomenally during the last five to ten years.  They now far surpass traditional embedded system software technology in their capability to deliver and manage distributed and high availability applications – such as the automation applications of tomorrow’s distributed control systems (DCS). This is why the effective use of container deployment and orchestration software is likely to be a critical success factor for future process automation systems.
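
As a minimal sketch of this idea (the file names, base image, and application are hypothetical, not drawn from any actual product), the following Dockerfile-style recipe describes how a container image for a small control application could be built.  Each instruction adds a layer to the image:

    # Base layer: operating system userland plus a Python runtime
    FROM python:3.11-slim

    # Middle layer: the application's declared library dependencies
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # Top layer: the application itself, the part that changes most often
    COPY control_app.py /app/control_app.py

    # How the application starts inside the container
    CMD ["python", "/app/control_app.py"]

The resulting image bundles the runtime, the libraries, and the application into a single deliverable, so the same artifact runs identically on any machine with a compatible container runtime.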

End of a Vertically Integrated Business Model

Automation hardware and software architecture at the control level (ISA 95 levels 1 and 2) has not changed fundamentally since the introduction of PLCs and DCSs in the 1970s. Then, as today, the industrial automation market structure revolved around bundled automation hardware and software.  This structure is very much like the minicomputer market of the 1970s, where each of the major computer suppliers (Digital Equipment, Data General, Prime, HP, etc.) came to market with their own distinct set of bundled applications and software tools, as well as their own set of channel partners and independent software vendors (ISVs).

The formation of the largely end user-focused Open Process Automation Forum (a Forum of The Open Group) and its support by much of the process automation supplier community suggests that this highly vertically integrated business model is already threatened and may be reaching its endpoint.

The End of the Traditional Embedded System Model

The end is also near for the traditional market for embedded systems and for traditional embedded system software development. A transformation is needed in the way that networked (distributed) embedded systems – and industrial embedded systems in particular – are developed, deployed, and managed through their lifecycle.

At present, much embedded software (even if it originates in open source) includes many proprietary elements.  Today, it is increasingly important to update embedded software for security reasons, but this is often difficult or impossible to do.  As a result, cybersecurity has become a chronic problem for embedded systems, especially in the consumer electronics segments. The traditional embedded model creates other problems, too. Applications are inflexible. The operating systems and software tool chains are fragmented. Development speed is slow. Hardware/software integration remains problematic. And worse, there's a relatively small base of experienced embedded system software developers.

While the embedded system software market has certainly felt the impact of Linux, other aspects of the traditional embedded market have persisted for many years. The market will need to change technology to move into an era of the Industrial IoT, Industry 4.0, smart manufacturing, and the like. Future industrial embedded systems will require that the entire software stack be automatically upgradable – from the application at the top to underlying operating systems and hypervisors.

So, if the current technology stack for industrial automation no longer serves well, what is going to replace it?  Based on ARC Advisory Group’s domain knowledge, we make some observations, interpret them, and project the implications for both embedded systems and industrial automation.

Container Software Disrupts Many Industries

Recently, ARC had a discussion with a C-level executive from the process automation business who was pessimistic about the prospects for open process automation systems.  According to this executive, “Our customers do not want to live in a plant where they have to manage their application software across thousands of devices from many different suppliers.”

He believed that it is difficult enough to manage a plant’s automation software across a far smaller number of controllers made by only a few vendors. Managing an application for a unit or plant that would probably involve hundreds of hardware modules from many different vendors, running application software from different suppliers, would require major advances in technology.  Such systems would have to be vastly better at deploying and orchestrating their software than even the newest process automation systems running in plants today.

And yet, the industrial automation industry and several other major industries are now creating reference architectures or even specific solutions for highly distributed automation and functionality.  These initiatives have many things in common. 

For example, the global automotive industry is now developing a reference architecture that will encompass on-car features for safety, vehicle autonomy, remote services, infotainment, and convenience.  This includes defining a specific set of Linux software known as Automotive Grade Linux. In telecommunications, another massive industry, companies are developing architectures that virtualize processing functions both at the base of cell towers and in the telecom central office switches.  These two initiatives are called NFV (network function virtualization) and CORD (Central Office Re-architected as a Datacenter).

The challenges of digitization in huge industries like automotive, telecommunications, and industrial automation will require management of remotely deployed software at large scale and over a long lifecycle. This will require breakthroughs (as opposed to incremental improvements) in the way industrial software is developed, deployed, and maintained on all types of hardware, including embedded devices.

The good news is that these breakthroughs already exist. The capability to deploy and manage distributed applications in real time is available now in today's leading open source tools for software container management and orchestration. Ideally, these same tools will be incorporated into the emerging open process automation initiatives.

Convergence in Software Development

Historically, software development has been divided into three major classes of software: enterprise software (which runs the business), embedded software (which runs inside of things), and, most recently, cloud software (which runs on third-party cloud resources).  While there’s some overlap between these, each also has some unique development and tooling aspects.

Over the next five years or so, it appears likely that cloud software development technology will come to dominate the other forms of development, and that the three will largely converge.  Furthermore, this convergence will occur at a pace driven by the rapid development of open source software, not at the much slower software development pace typical of today’s industrial automation industry.

The first phase of this convergence will be the development of so-called cloud-native software, marking the convergence of conventional enterprise software and cloud software. The deployment and orchestration software tools used for these tasks will become commonplace and well known. This will drive the second phase of convergence when cloud-native software in scaled-down forms addresses the requirements of embedded system software – the “bread and butter” of the Internet of Things (IoT), especially the Industrial IoT.

The industrial automation world can see some hints of this already in the way applications are deployed now using Docker and Linux containers in many industrial products introduced during the last two years. Besides this, an emerging cloud technology called unikernels combines the small-footprint disciplines of embedded software development with cloud execution platforms. While unikernels are presently mainly a research area, this is likely to change. In fact, venture capital is already being invested in firms that are developing unikernel products for both the enterprise and Industrial IoT markets.

Furthermore, cloud computing is a $250 billion business with plenty of R&D funding. It is growing rapidly and is dominated by huge software firms such as Amazon, Alibaba, Microsoft, Google, and IBM. These giant firms are competing fiercely for a huge and growing business. It seems reasonable to expect that the open source software technology used in the cloud computing business will evolve rapidly and broadly.

Scaled-down Clouds for Industrial Automation

ARC believes that it is likely that in five years all software development will use cloud software development methods.  Within the software “food chain,” if “software is eating the world,” then the software that is eating the software development world is cloud software development and tooling. Even in the insular and specialized world of embedded software, development is likely to be overtaken and swallowed by the current and future cloud software technologies.

Figure: Cloud Software Development Technologies Eventually Will Consume All Others (with software container technology)

This is not to say that all applications will run in the cloud. Rather, the software development and deployment technologies used by cloud software will overtake and dominate the other forms of software development.  There are two possible reasons for this.

The first reason is that, although huge, cloud computing is a relatively new and still-evolving industry. Today’s hot cloud computing software technologies are as recent as cloud computing itself: OpenStack, Cloud Foundry, Docker, and Kubernetes have all been released as open source in the last five to ten years.  Experts on cloud computing agree that the current cloud computing model contains significant redundancy and can be greatly improved. Therefore, one should expect rapid and sustained technology development in this area. Fine, but that's the cloud. Why should the industrial automation market be concerned with this?

That leads to the second reason to expect cloud software development to dominate: cloud software technologies can scale down to much smaller systems, the type of systems that are the concern of industrial automation and of the Open Process Automation Forum.  Let’s examine a critical example: that of container software.

Containers and Industrial Automation

What is the significance of containers? One can see why these are important for the cloud, and maybe for the enterprise, but why are they important for applications like industrial automation?

Let's step back and review the history of containers. Container technology started in the late 1990s when companies like Sun Microsystems were building very large UNIX servers.  From the standpoint of operation and administration it was easier to partition the applications within these huge machines rather than run all the processes together.  The original concept behind containers was to create partitions between applications to make it simpler to manage large servers.

The downside of container technology was that implementing it required highly skilled UNIX system administrators.  For major companies of the time (Sun Microsystems, Oracle, SAP, and others) this did not present a problem. But the need for deep UNIX administration expertise and large servers limited the use of containers to relatively few companies.

What changed? Why are containers now so popular? As the container technology became available in Linux, a venture-backed startup developed software tools that greatly simplified the creation, deployment, and operation of Linux containers. This software, named Docker, was released to the open source community five years ago.

As a result, during the last five years interest in containers has exploded both in the cloud and enterprise computing spaces. Furthermore, during the last couple of years, many industrial automation products have also been delivered with Docker included to enable system integrators and end users to easily integrate their own applications with these products.

Earlier, we mentioned the NFV and CORD initiatives for telecommunications. In the telecom industry, full digitization will move all analog signal processing to the very edge of the network. All the other services from the cell tower through the central office will be performed in software and done digitally. The result will be a highly flexible infrastructure enabling new services to be added without changing the hardware. How do telecom network operators envision being able to manage such a massive software-defined infrastructure with real-time requirements over a lifespan of many years? Ask them about this and about the role of container software in their plans and you will learn that containers are critical to their vision.

Finally, and this is perhaps the most important point, container software can scale down to very small systems. The Docker runtime software has been ported to small ARM core single board computers and is being used now to support Industrial IoT applications and services.

Docker and Kubernetes

As an added bonus, the open source software world has largely converged on a single container orchestration tool (orchestration in this context refers to software that manages deployments of containers).  This open source container orchestration software tool, Kubernetes, originated within Google, a company that has demonstrated reliability and success with its SaaS products implemented using containers.  Today Kubernetes is perhaps the hottest open source project in the enterprise software space.  ARC believes that it could be worthwhile for those with plant operations or engineering experience to discuss its potential applicability to future automation architectures with IT experts in their own organizations. After all, Docker and Kubernetes are not just technologies for managing and building containers. Their strong suit is that they are also technologies for deploying distributed applications in heterogeneous environments – such as industrial automation.

The term “container” refers to more than a file for storing or transferring applications; it also refers to features built into the operating system kernel.  These features create a “contained” environment for running code on the kernel, in which the code perceives that it is the only application.  Strictly speaking, a container is an execution-time experience, not a distribution-time experience.

The software artifacts that are packaged into a file and distributed as the basis for creating a container are called a “container image,” which can be a confusing term.  Container images are the software deliverables created using Docker or another container development tool.

When using container images (as opposed to software package managers) software distribution and deployment into production is user-driven and application-driven rather than supplier-driven.  Containers can greatly reduce the integration burden on end users of the software, enabling them to focus much more on their applications while letting the builder of the container image focus on the target system dependencies and configurations.

When supplier rather than user needs drive software distribution (as is the case with software package managers), the end user or system integrator is responsible for assembling the correct version of the dependencies and for setting all the runtime configuration parameters correctly.  Software delivered via a package manager is a modular unit of software to be integrated.  A container image, on the other hand, is a single but fully integrated application from the end user perspective.

As noted earlier, for networked devices that are embedded systems, “the end is near.” Going forward, high-value devices and services will be remotely monitored, managed, updated, and serviced, and their software stacks will need to be serviced with extremely high levels of automation.  Hence the need for Docker and containers in future embedded systems, including embedded systems within industrial automation.

Containers and Orchestration for Industrial Automation

Containers offer several important benefits that can make industrial automation systems simpler and more valuable to their end users and suppliers alike when compared to the embedded system technologies used in present-day industrial automation.  These include:

  • Containers focus on applications as opposed to machines.  This effectively decouples application development from the management of the compute, storage, and network systems where the applications run.  This provides value over the life of a system by enabling application developers and system support specialists to work in a decoupled fashion, with far less interference.
  • Creating and managing repositories of container images represents a solution to the complexities introduced by variations in hardware, processor architecture, operating system, and supporting software dependencies for distributed systems.
  • The “layering” property of container image files enables suppliers and end users to efficiently create and iteratively enhance a repository. This recognizes that some (“low layer”) aspects of a container image will be used for long periods in many applications, while other (“high layer”) aspects of a specific application may be subject to frequent development or change.
  • Developers can create container development and deployment environments that enforce specific work and testing processes before an application can be deployed into a production environment.

Container orchestration is equally important for many reasons. These include:

  • Orchestration provides capabilities for greatly simplified system scaling and management.
  • Orchestration provides a declarative distributed system configuration that describes the desired state of the distributed system. The orchestrator operates to implement and maintain this desired state. This enables a distributed system to achieve high application availability and properties such as “self-healing,” since the orchestrator acts to restore the desired system state when part of the distributed system is disturbed or disrupted (see the sketch after this list).
  • Declarative system configuration is less error-prone than imperative configuration tools, which must execute to take corrective action, requiring the troubleshooting of executables.
  • Declarative system configuration combined with a version control system makes the roll-back of a recent system change a trivial matter.  Software package managers, embedded system technologies, and existing DCS products simply cannot do this.
  • Building the orchestrator around containers shifts the primary operational focus to application performance, which is what end users care about.  Because each container is an application, there is no need to filter out signals or logs from many different applications to focus a diagnosis on a particular application.
  • Focusing on applications rather than on machines carries ripple-effect benefits in many other areas.  For example, it makes it much easier to build, deploy, and maintain applications correctly and segregate application runtime issues from machine issues.
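
To make the declarative model concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the application name, labels, and image reference are hypothetical, used only for illustration. The manifest declares a desired state, and the orchestrator continuously works to keep the running system matching it:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flow-control-app          # hypothetical application name
    spec:
      replicas: 2                     # desired state: two running copies for availability
      selector:
        matchLabels:
          app: flow-control
      template:
        metadata:
          labels:
            app: flow-control
        spec:
          containers:
          - name: flow-control
            image: registry.example.com/flow-control:1.4.2   # hypothetical container image

If a node fails or a container crashes, the orchestrator observes the divergence from this declared state and restores it. And because the manifest is plain text, keeping it under version control makes rolling back a recent change as simple as re-applying the previous revision.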

Containers and Open Process Automation

Why should all this concern the Open Process Automation Forum? Because in the world of industrial automation, especially process automation, applications are changed frequently, often daily. A large process automation installation will require minor changes to its control application configurations almost every day.  Control configuration is the day-to-day stuff of work in the operational technology (OT) world of process automation.  New measurements are added, new control schemes tested, tuning parameters or alarm limits adjusted, and so on.

This situation of constant change is why process industry end users have so much intellectual property invested in the control applications (configurations) running in their DCSs. These applications represent literally man-years of work spent engineering and adjusting control systems to serve the physical plant and its operational objectives. Unfortunately, that intellectual property is today captured in control languages and idioms that are often not machine-readable and are always highly proprietary. Standardized container and orchestration tools offer an exit from this dead end.

Containers for Application Deployment

Yet, today's DCSs do make it easy for end users to make frequent control application changes.  At the very least, a successful open process automation system will need to support this same level of performance (and probably much higher) with respect to minor engineering changes. Deploying applications onto target systems (especially smaller target systems as envisioned for the OPAF distributed control node) will be a critically important task, and one that will be performed daily for the entire installed life of open automation systems.
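
As an illustrative sketch of how such frequent changes could be rolled out safely (the parameter values below are assumptions, not OPAF requirements), a Kubernetes Deployment can declare an update strategy that replaces a running application gradually, never taking all copies offline at once:

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never reduce the number of running copies during an update
          maxSurge: 1         # start one new copy before retiring an old one

Under a declared policy of this kind, a minor daily application change becomes a controlled, observable, and reversible operation rather than a manual procedure.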

A figure in the Open Process Automation technical reference model shows the solution lifecycle and work processes for an automation system, highlighting the work processes that pertain to application deployment. Note that these are all critical work processes that touch the open process automation components that are operating the production equipment.

To summarize, this mature, container-based software deployment technology is highly standardized, widely used, available in open source, works on a variety of platforms, and has been field proven in installations from the very largest (Google) to the very smallest (Raspberry Pi).  Since the stated goal of The Open Group is not to develop new standards, but rather to reference existing standards where they are both established and relevant, ARC questions why the Open Process Automation Forum would consider specifying a different approach or possibly even choose to create a different software deployment technology for use in open process automation systems.

DCN Interfaces

The Open Process Automation architecture defines interfaces from the view of the distributed control node (DCN).  Three of these interfaces are particularly important:

  1. The configuration management interface, which defines services and information models used to manage Distributed Control Framework (DCF) configurations for use by configuration management tools.
  2. The application management interface, which defines the services and information models used to manage applications in a DCF by application management tools.
  3. The application services interface, which defines the services and information models used by applications to access framework services including system services.

As they specify these interfaces, the Open Process Automation Forum members should compare the required functionality point-by-point with the functionality already available within Docker and Kubernetes. Much of the desired functionality is already available to serve (containerized) applications using these existing and mature software tools.

To choose not to employ container technology in a future open automation architecture means that OPAF-compliant systems will have to compete against future products that will use these very powerful software tools. Automation suppliers should become aware of these capabilities, if only to understand the future competitive landscape and what innovations current or agile new competitors might bring to market.

Table of Contents

  • Executive Overview
  • End of a Vertically Integrated Business Model
  • Container Software Disrupts Many Industries
  • Convergence in Software Development
  • Docker and Kubernetes
  • Containers and Open Process Automation
  • Lift and Shift - The ELCN
  • Recent Examples of Containers in Industrial Automation
  • Summary
  • Recommendations

 

ARC Advisory Group clients can view the complete report at the ARC Client Portal.

If you would like to buy this report or obtain information about how to become a client, please Contact Us.

 
