CONTROL, AUTOMATION, ASSET MANAGEMENT: The leading edge in managing plant digital assets

EUCI/Pearl Street/CCJ collaboration carries forward a rich tradition

Fifteen years ago, when Pearl Street Inc president Jason Makansi was editor-in-chief of Power magazine, and CCJ’s editor Bob Schwieger was its publisher, they launched an industry conference titled “Power Plant IT,” a follow-on editorial product from a special report Makansi researched and authored, “Information Technology for Power Plant Management,” published in June 1996. Power held that conference three years in a row.

After Makansi left the magazine, he continued to work and do research in this area through Pearl Street’s client engagements, organizing sessions at other industry conferences, and playing a leadership role in the annual ISA Power Industry Symposium.

The EUCI Power Generation Summit, “Managing the Digitally Integrated Power Plant,” carries on this tradition. Mark your calendar for next year’s event, to be held in Dallas, Feb 26-28, 2014.

Emerson Process Management’s Power and Water Solutions served as the overall sponsor of this year’s event in New Orleans. This is significant because its president, Bob Yeager, was one of the earliest supporters of Power Plant IT. PAS Inc and AlertEnterprise also were sponsors of the 2013 EUCI event.

Control, automation, and asset management systems, often referred to as digital assets, occupy a peculiar place today. As with many other systems at combined-cycle plants, DCS and software vendors are taking on more of the responsibility for making sure the equipment works. Data and knowledge propagation, proliferating network connections, and cybersecurity have created a growing interface with corporate IT staff, outsource services vendors, and even regulators (through emissions monitoring) and safety/Hazop systems.

Many large owner/operators have added centralized fleet monitoring and performance centers and also rely on vendor remote monitoring for key subsystems—such as the gas turbine/generator. Plants continue to add additional digital software and hardware to (1) make use of the available data; (2) obtain advanced diagnostic capability; (3) allow remote work access, share knowledge, and rationalize staff, processes, and procedures; and/or (4) fix unintended consequences of the main digital systems—such as data fog, alarm management, cybersecurity, and operator graphical interfaces on the screens.

In one sense, the physical assets remain fixed but the digital assets reside in places too numerous to count—PCs, laptops, cell phones, iPads, personal digital assistants (PDAs), and other devices. Integrating and managing a plant’s digital assets has become one of the most vexing challenges facing today’s owner/operators. Jason Makansi, president, Pearl Street Inc, St. Louis, Mo, calls the digital assets the “brain,” to distinguish them from the “brawn” of the iron and steel of the boiler, turbine, pumps, piping, valves, heat exchangers, etc. The “gray matter” is scattered far and wide these days, he says.

At the first EUCI-Pearl Street Inc Power Generation Summit, “Managing the Digitally Integrated Power Plant,” 75 experts gathered in New Orleans, February 27-March 1, to discuss these and other facets of digital assets. Many of the speakers were leading practitioners of digital asset integration and pioneers in specific areas. CCJ was a media sponsor for the event, which was so well received that it will be held again next year (sidebar). Material presented and discussed that is of value to combined-cycle owner/operators is summarized below.

The big picture

Here are the major themes that emerged from the conference:

1. Configuration management is coming to your plant. In the previous chapter of the continuing saga of NERC’s cybersecurity standards, it appeared as if many, if not most, powerplants would be considered non-critical assets and therefore escape the most onerous compliance actions. However, one of the latest revisions, published in January 2013, applies a low-, medium-, or high-criticality definition. The upshot, say NERC compliance experts, is that configuration management is becoming a necessary feature of plant automation and knowledge systems.

Defining configuration management is not straightforward. In this context, it refers to the ability of a plant to document its last known “state,” such as before or during a grid incident, and therefore be able to recover from a cyber-event. If you have nuclear plants in your fleet, ask your nuke colleagues; they’ve been dealing with configuration management for years.
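In practice, the core of configuration management is a documented baseline you can compare against. Here is a minimal Python sketch of that idea: hash each device’s configuration text to record a baseline “state,” then flag any device whose configuration has drifted. The device names and settings are hypothetical stand-ins, not an actual plant inventory.

```python
import hashlib

def snapshot(configs):
    """Record a baseline 'state': a hash of each device's configuration text."""
    return {name: hashlib.sha256(text.encode()).hexdigest()
            for name, text in configs.items()}

def drift(baseline, configs):
    """Return devices whose configuration no longer matches the baseline."""
    current = snapshot(configs)
    return sorted(name for name in baseline
                  if current.get(name) != baseline[name])

# Hypothetical device configurations (stand-ins for real controller settings)
before = {"feedwater_plc": "setpoint=1250;mode=auto",
          "burner_mgmt": "purge_time=300;interlock=on"}
baseline = snapshot(before)

# One device's settings change after the baseline was taken
after = dict(before, burner_mgmt="purge_time=120;interlock=off")
print(drift(baseline, after))  # -> ['burner_mgmt']
```

A real system would pull configurations from controllers and workstations automatically and timestamp every snapshot, but the principle is simply baseline-and-compare.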

2. Get to know prognostics. Monitoring and diagnostics is like your father’s Oldsmobile; the next logical step is prognostics. Wordsmiths will note it’s the root of prognosticate, as in predicting the future: interpreting the tarot cards, calling your astrologer or your stockbroker. In this case, it makes a world of sense. By recording and storing large amounts of data, correlating data points, and identifying and comparing patterns under different operating conditions using sophisticated algorithms, software really can indicate, if not predict, the near future, such as the next five minutes or the next hour.

Advanced prognostics are being embedded in today’s control and automation systems, or are available as outsourced services from specialist firms or OEMs. These systems provide early warnings and alerts of serious problems. Owner/operators have reported significant savings by avoiding catastrophic events.
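The pattern behind these tools can be illustrated simply. The sketch below is a rough Python illustration, not any vendor’s algorithm: it learns what “normal” looks like from historical data (mean and scatter per parameter) and flags readings that drift well outside that band, even though they remain below any fixed alarm limit. The tag names and values are hypothetical.

```python
import statistics

def learn_normal(history):
    """Learn 'normal' behavior: mean and standard deviation per parameter."""
    return {p: (statistics.mean(vals), statistics.stdev(vals))
            for p, vals in history.items()}

def early_alerts(model, reading, n_sigma=3.0):
    """Flag parameters drifting beyond n_sigma from learned normal,
    typically well below the control system's fixed alarm limits."""
    alerts = []
    for p, value in reading.items():
        mean, sd = model[p]
        if sd > 0 and abs(value - mean) > n_sigma * sd:
            alerts.append(p)
    return alerts

# Hypothetical baseline data for two monitored parameters
history = {"brg_temp_F": [182, 184, 183, 181, 185, 183],
           "vibration_mils": [1.1, 1.2, 1.0, 1.1, 1.2, 1.1]}
model = learn_normal(history)

# A reading still far from any alarm limit, but abnormal for this unit
print(early_alerts(model, {"brg_temp_F": 196, "vibration_mils": 1.1}))
# -> ['brg_temp_F']
```

Commercial systems go much further, correlating dozens of parameters against plant-specific models, but the early-warning logic is the same: compare current behavior to learned normal, not to a fixed trip point.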

3. Wireless applications are growing. Some experts are still not comfortable with wireless in the plant environment. First, there is the potential for interference with other communication protocols. Second, wireless presents a separate and potentially higher cybersecurity risk than hardwired systems. Owner/operators with deep wireless experience insist that these challenges are being met; it’s the perception of wireless among those with no experience that challenges the industry.

Because hard-wiring is costly, wireless systems allow plants to deploy advanced sensors, instruments, and M&D strategies that simply could not be justified otherwise. Some researchers still think cell phones impair brain function, yet the vast majority of Americans seem to use them without incident. The benefits of wireless deployments are simply too compelling to be held hostage by the challenges of perception.

4. Plants are rethinking operator interfaces. Data fog is an unintended consequence of digital systems. The challenge is magnified when owner/operators standardize on plant control systems and no one thinks to ask real live plant operators for their input during the design phase. How plant data, trends, and graphics are presented to the operator is being overhauled. This activity started with alarms—too many, too often, too disorganized—from the latest DCS. Now, alarm management has become a function in its own right at many plants.

5. Virtualization is making computer hardware more productive. You probably have to be a real IT geek to comprehend this one. Suffice it to say that virtualization is a way to apply software that makes your computer hardware—servers, PCs, controllers, PLCs—more efficient. From the plant’s perspective, think of it as being able to run multiple operating systems on one hardware platform. One generating station recently consolidated more than a dozen physical servers into three that run 16 virtual servers. Another way to view it is that virtualization helps you deal with the fact that control-system hardware has a 12- to 18-yr obsolescence cycle, while IT equipment typically has a lifetime of three to five years.

Today you can have your plant simulator running with the same models as the DCS. In other words, you can train at a plant simulator that is running in parallel with the plant’s DCS using real-time or recent historical plant operating data.

6. The real plant and the digital plant are converging. This is where the brain analogy really stands out. Parallel-running simulators are just one example. With today’s diagnostics and prognostics, you can replay and analyze plant transients the next day (or anytime in the future, theoretically) and peer into the future by running “what if” scenarios using actual plant operating data. Plants become self-aware, or self-actualized. Other process industries are employing 3-D visualization technologies to organize and manage all plant data and information. These screen graphics are developed from laser scans of the actual plant equipment in the as-built state.

Operators soon will be looking into a virtual reality world, a faithful representation of the actual plant, rather than crude diagrams or computer-generated graphics. As the plant moves forward from its initial design state after startup, operations, maintenance, and management will be based on the real-time, current state of the physical plant, captured digitally.

The lexicon is hard to digest—virtualization, self-actualization, prognostics, configuration management. It’s enough to make an academic out of a practicing powerplant engineer. But behind these words are powerful techniques and technologies that are helping owner/operators with the perennial goals of faster, better, cheaper. And safer. The following summaries of presentations from plant managers attest to that.

Rethink operator interface

One of the most thought-provoking presentations at the first EUCI-Pearl Street Power Generation Summit, “Managing the Digitally Integrated Power Plant,” held earlier this year, was given by Harvey Ivey, Manager of I&C Systems and Field Support for The Southern Company. He showed a screen shot from a typical powerplant DCS (Fig 1) and asked, rhetorically, “Is this boiler operating properly?” Ivey is leading a program at Southern to rethink how operators receive and respond to information at dozens of the utility’s power stations and to provide a level of standardization for new facilities and retrofits to existing units across the Southern fleet.

Called PowerGraphiX™, it incorporates these core principles:

• Arrange hierarchy, navigation, and graphics based on how operators think and the tasks they perform, not on a typical piping and instrumentation diagram (P&ID).

• Limit use of color to enhance communication, not hinder it.

• Assist the operator’s natural ability to detect trends and recognize patterns.

• Provide navigation from the overview/big picture down through important sub-systems (for example, fuel, water) and specific pieces of equipment to interlocks and diagnostics.

• Convey information, not just data.

• Apply ergonomic principles.


The result is that operators will view screens that look like Fig 2, not like Fig 1. The control rooms at Southern power stations will look more streamlined, with two 70-in overview wall-mounted screens for Level 1 information, and six desktop screens for Level 2 (system) and Level 3 (equipment) information (Fig 3). Level 4 information includes interlocks and diagnostics.

Standardizing in this way, said Ivey, doesn’t just improve operator performance; it also reduces the total number of purchased screens, along with engineering, licensing, and cybersecurity costs. The total number of operator screens will be reduced from between 300 and 600 to around 75. Ivey noted that there’s no additional cost to build graphics this way; it costs at least as much to build a poor graphic as a good one, he said.

Rationalizing alarms is another aspect of operator interfaces getting much attention these days. Dan Martian, senior production engineer at Minnkota Power Cooperative Inc, showed how the Milton R Young coal-fired plant managed to control alarm flooding, disable nuisance alarms, and make it practical for operators to respond with adequate time to the most important alarms.

Essentially, the plant follows the guidelines outlined in the flow chart and table (Figs 4, 5), which apply regardless of the type of station. The principles are derived from “Alarm Management: A Comprehensive Guide,” second edition, by Bill Hollifield and Eddie Habibi, published by ISA. The authors are executives with Houston-based PAS Inc. One of the guiding principles is to distinguish between an event, something that does not require an operator action, and an alarm, something that does.


Martian reports that, for one plant subsystem, the selective non-catalytic reduction (SNCR) NOx emissions control, the plant has reduced alarms by 96%. In one week before rationalizing alarms, operators had to consider 5615 alarms! In a week afterwards, that number was reduced to 214. Plant personnel are now able to focus on the 20 most-frequent alarms.
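The guiding principle above, events logged but alarms annunciated, lends itself to a simple illustration. This Python sketch (with hypothetical tags, not Young’s actual alarm list) filters a weekly annunciation log down to true alarms and checks the arithmetic behind the reported 96% reduction.

```python
def rationalize(annunciations):
    """Keep only true alarms (operator action required); the rest are
    events to be logged, not annunciated."""
    return [a for a in annunciations if a["action_required"]]

# Hypothetical weekly annunciation log
log = [{"tag": "SNCR_FLOW_LO", "action_required": True},
       {"tag": "SNCR_PUMP_SWAP", "action_required": False},  # routine event
       {"tag": "NOX_HI", "action_required": True},
       {"tag": "LANCE_SEQ_STEP", "action_required": False}]  # routine event

print(len(rationalize(log)))  # -> 2

# The reduction reported for the SNCR subsystem: 5615 alarms down to 214
reduction = (5615 - 214) / 5615
print(f"{reduction:.0%}")  # -> 96%
```

The hard work, of course, is in deciding which annunciations truly require operator action; that is what the rationalization process documented in Figs 4 and 5 is for.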

Think prognostics

Diagnostics can tell you what’s wrong. Prognostics anticipate what will go wrong, so that you can take actions to prevent significant events. One way to think about prognostics is that they give you an early alert before an alarm is triggered.

Moh Saleh, an engineering O&M manager at Salt River Project, described the multi-level monitoring systems used at the Desert Basin combined-cycle plant near Phoenix. They include two “foundational” systems: the OSI PI data infrastructure, which ties together the DCS, the Mark V gas-turbine/generator controls, vibration monitoring systems, and others; and the SmartSignal (now GE Intelligent Platforms) EPI Center, which pulls data from PI and analyzes patterns based on plant models built by SmartSignal and verified by the plant.

SmartSignal detects deviations well below control-system alarm levels, Saleh noted. Plant personnel access this prognostic capability 24/7 while SmartSignal specialists monitor the plant remotely during normal business hours. Monitored parameters are compared (at 10-min intervals) to what is “normal,” based on correlations among parameters using historical data and learned system behavior.

Other monitoring systems used regularly include:

• The combustion turbine OEM’s remote diagnostic center which, among other things, compares data to other units in the model-number fleet to analyze patterns and deviations, and is linked back to parts inventory.

• GP Strategies Corp’s EtaPro™ performance monitoring system which tracks heat-rate losses and quantifies the impact of failures on plant efficiency and outages.

• Chromalloy Gas Turbine LLC’s Tiger®, a knowledge-based turbine condition monitoring system, which allows data replay.

• SmartSignal’s Cycle Watch, which tracks the gas turbine’s startup “signature” by correlating vibration, bearing temperatures, combustion parameters, blade path and exhaust temperatures, valve positions, and auxiliary systems.

Saleh notes that these systems are particularly useful because Desert Basin “cycles all over the place” but was designed for base-load operation. Each GT experiences 250-300 starts per year. One of the many benefits of monitoring is that it provides informed support for decisions to shut down units.

Juan Ontiveros, plant manager of the University of Texas at Austin’s Carl J Eckhardt Heating and Power Complex, indicated that the university is demonstrating a plant health index from BNF Technology Inc, a Korean firm. It is described as a sensor-less, condition-based monitoring (CBM) system for plant equipment using statistical modeling techniques.


Regardless of what plant you run, you probably have computer hardware that can be rationalized for better performance. Mark Thompson from Basin Electric Power Co-op’s Leland Olds Generating Station discussed a powerplant application of virtualization, essentially using software that mimics hardware to replace physical computing hardware (Fig 6).


The big dichotomy with modern control systems is that the control-system hardware—the stuff out in the plant—is on a 12- to 18-yr replacement cycle, while the computer hardware replacement cycle is three to five years. Virtualization allows you to consolidate computers and make them more productive. By inserting a “virtual machine” layer (software) between the physical machine (PC or server) and its operating system, that physical machine can run multiple applications and multiple operating systems. VMware Inc, Palo Alto, Calif, supplied the virtual-machine layer to Leland Olds; the product, a thin layer of software known in IT parlance as a hypervisor, does the mimicking.

In late 2009, the plant experienced failures of physical servers in the Rockwell Automation PLC platform, raising reliability concerns. The servers were installed in the 2003-2006 timeframe as part of an overhaul of the system and a move to a client/server architecture. The system originally included 13 servers. The hypervisor allowed the plant to consolidate to three physical servers running 16 virtual servers. Two of the physical servers actually run the plant; the third backs up and maintains the system.
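Server consolidation of this kind is, at bottom, a packing problem: fit many lightly loaded virtual servers onto a few physical hosts. Here is a hedged Python sketch, with hypothetical utilization figures rather than Leland Olds data, using simple first-fit packing:

```python
def consolidate(loads, host_capacity):
    """First-fit packing: assign each virtual server's load to the first
    physical host with room, adding hosts as needed."""
    hosts = []
    for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host["free"] >= load:
                host["guests"].append(name)
                host["free"] -= load
                break
        else:  # no existing host has room; stand up a new one
            hosts.append({"guests": [name], "free": host_capacity - load})
    return hosts

# Hypothetical average utilization, each as a fraction of one physical server
loads = {f"srv{i:02d}": u for i, u in enumerate(
    [0.15, 0.10, 0.20, 0.05, 0.12, 0.08, 0.18, 0.07,
     0.10, 0.06, 0.14, 0.09, 0.11, 0.05, 0.16, 0.13], 1)}

hosts = consolidate(loads, host_capacity=0.85)
print(len(hosts))  # -> 3
```

With these made-up loads, 16 virtual servers pack onto three physical hosts, mirroring the ratio Thompson described; a production hypervisor also balances memory, I/O, and failover headroom, not just CPU.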

Basin Electric had never done anything like this before. Based on the success of this initial project, according to Thompson, the plant is going further. It has deployed virtualization for the plant simulators and next plans to consolidate the plant control consoles.

Significant cost savings can be obtained, especially for older plant assets, Thompson said, in addition to the following benefits: running fewer servers reduces power consumption and conserves space, different operating systems can be run on the same platform, virtual servers are easier to rebuild than physical ones, and the plant can better leverage internal IT and computer resources. ccj