Onsite

Multiple organizations create new challenges for plant operators

By Team-CCJ | April 19, 2022 | 0 Comments

Once upon a time, a vertically integrated utility was responsible for electricity supply—soup to nuts, that is, from the powerplant to the customer’s meter. Today, responsibility for over half of US power production rests with non-utility entities. Like most big industry shifts, this transition has led to greater efficiencies in some areas, challenges in others.

For operating entities, some of those challenges are created by having multiple organizations responsible for different aspects of the equipment during different phases of the facility’s life. Gaps in scope and duties have to be identified and addressed before they cause heartache. Chances are, you’re going to learn this stuff only by visiting the facilities and talking to the boots on the ground, the people who know the facts firsthand.

CCJ ONsite has been chronicling progress at the 725-MW Woodbridge Energy Center, Woodbridge, NJ, owned by CPV Shore Investment, LP, Osaka Gas USA Corporation, Toyota Tsusho Shore, LLC, and John Hancock Life Insurance Corp, and operated by Consolidated Asset Management Services (CAMS).  The 2 × 1 combined cycle, constructed on an EPC basis by Kiewit Corp, is anchored by two of the first 7FA.05 gas turbines to achieve commercial status.

In November 2017, CCJ editors stopped in to visit with Ken Earl, plant manager, and Michael Armstrong, plant engineer, to get the latest rundown on this pioneering facility. Addressing the gaps among the different responsible parties emerged as a theme of the discussion.

GT models, oversight. First and foremost, the Dot 05 engines are meeting all of their performance parameters. “No issues yet,” reports Earl, “although there is plenty of margin in the performance guarantees.”

One of the features of the GTs is model-based controls, which automatically adjust the machines’ (and therefore the plant’s) operation based on real-time variations in input parameters (inlet air temperature, for example) and output requirements (such as load signals from AGC). “The units are constantly making internal adjustments, which enhances flexibility.”

However, because the calculations are performed inside the model, it is difficult for the plant operators to “see” what parameters are controlling performance and to diagnose issues, which forces them to rely on the OEM. GE considers the model proprietary, along with the Dot 05 hardware technology.

The gap here is that GE has visibility inside the GT model, but not the overall plant; operators can see the plant, but not inside the GT. Earl considers this a temporary issue, as plant staff create their own models based on comprehensive operating data. “The machines are thinking on their feet,” Earl observed. “We just have to catch up; we need to know if we are getting everything we can out of the units at any particular operating condition.”

As automation and model-based control proliferates, owner/operators should be cognizant of this gap in writing contracts and outlining scope boundaries. After all, the vendor may own the models, but the plant owns its data and is responsible for facility performance.

BOP challenges. Earl further indicates that balance-of-plant (BOP) performance “is where we want to be”: 92% equivalent availability with a forced-outage rate of less than 2%.

Despite the attention Earl and company paid to heat tracing, it still came back to bite the plant staff. Critically, Woodbridge is designed as a completely outdoor facility in a region that can experience some brutal winter weather. In 2016, during early ops, CAMS worked diligently to make sure the heat-tracing systems were complete and adequate. Even so, a heater inside an enclosure (for a level transmitter) failed and shut down the plant.

Here, the gap is between the heat tracing system designer and the heat trace system installer. As a means to further ensure heat trace effectiveness, CAMS, along with CPV, contracted with Thermon Manufacturing Co to audit the equipment, turn everything on, inspect each individual circuit, make sure the electrical wiring was configured correctly in the panels, measure current draws, and properly set the alarms. Many installation errors were discovered, especially in programming.

This isn’t your father’s electrical heat trace system either. There are 17 heat trace control panels throughout the plant, with dozens of individual circuits tied into each one. The circuits include high/low current alarms and information that helps anticipate failures. False alarms in mild weather were common and had to be addressed, since the alarms are directed to the control room and operators have to go to the local panel to address them.

Even with the audit and rework and $150,000 additional expenditure, Earl indicates the heat tracing is the type of system that “demands constant attention.”
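The per-circuit alarm logic described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, current limits, and mild-weather cutoff are assumptions, not values from the Woodbridge panels.

```python
def heat_trace_alarm(current_a, low_limit_a, high_limit_a, ambient_f,
                     suppress_low_above_f=50.0):
    """Evaluate one heat-trace circuit; return an alarm string or None.

    Hypothetical logic: low-current alarms are suppressed in mild weather
    (above suppress_low_above_f), when a de-energized or lightly loaded
    circuit is expected -- one way to cut nuisance alarms like those the
    Woodbridge operators had to chase down at the local panels.
    """
    if current_a > high_limit_a:
        return "HIGH_CURRENT"      # possible short or wet insulation
    if current_a < low_limit_a:
        if ambient_f >= suppress_low_above_f:
            return None            # mild weather: suppress the low alarm
        return "LOW_CURRENT"       # possible burned-out heating element
    return None
```

With this gating, a 0.2-A reading on a warm day raises nothing, while the same reading in freezing weather flags a likely failed element for the control room.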

The blowdown system improvements are working well. Essentially, the logic in an over-complicated control loop had to be modified to reduce the amount of quench water (purchased from the local municipal water authority) used to cool the blowdown. The logic was set to protect the blowdown-tank-sump outlet piping from possible high-temperature excursions by controlling the blowdown-tank outlet to a low temperature.

However, the tank can handle temperatures far higher than the logic would allow, so the loop was wasting water by over-quenching. The outcome: the plant reduced purchased-water consumption by 125 gpm by addressing a gap between the designer’s urge to over-protect and the owner/operator’s need to eliminate needless expense.
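The fix amounts to quenching only when the tank actually needs protection. A minimal sketch of such a loop, with hypothetical temperatures, gain, and flow limit (none taken from the plant):

```python
def quench_flow_gpm(blowdown_temp_f, tank_limit_f=200.0,
                    gain_gpm_per_degf=5.0, max_gpm=300.0):
    """Proportional quench-water demand for a blowdown tank (sketch).

    Water is added only when the blowdown temperature exceeds what the
    tank can tolerate, rather than holding the outlet to a needlessly
    low setpoint -- the over-quenching pattern the plant corrected.
    All numbers here are illustrative assumptions.
    """
    error = blowdown_temp_f - tank_limit_f
    if error <= 0:
        return 0.0  # tank can take it: no purchased water spent
    return min(gain_gpm_per_degf * error, max_gpm)
```

Raising the effective setpoint to the tank's real limit is what turns the 125-gpm savings loose: below the limit the loop simply stops buying water.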

What the plant cannot control is the quality of the water delivered to the cooling tower, effluent from the county wastewater treatment facility. “We still deal with variability of the solids content in the source water, but it’s not proving to be an issue.” The variations tend to be seasonal in nature. Earl had flagged this as a question mark in CCJ’s earlier reporting because there are no bounds on the quality of the water delivered.

Codifying best practices. CPV has contracted with EP3 LLC to implement a software platform which creates a living best-practices and lessons-learned database shareable among its facilities. The fleet owner employs several different operating services firms and the EP3 Quad C software will help identify, organize, manage, and disseminate best practices among the facilities and service firms. This program also illustrates how gaps among different operating entities and a parent owner/operator can be managed, and will be the subject of a separate article in the coming weeks.

Visible-dye liquid penetrant said best for identifying radial tip cracks in 7FA R0 and R1 rotor blades

By Team-CCJ | April 19, 2022 | 0 Comments

Mike Hoogsteden, Advanced Turbine Support’s director of field services, called to say his company has informed customers that it believes visible-dye liquid penetrant (PT) is the most dependable in-situ inspection process for identifying radial tip cracks in 7FA Stage 0 and Stage 1 rotor blades. The OEM recommends in Technical Information Letter 1509-R3, “F-Class Front-End Compressor Inspections,” that owner/operators check for R0 and R1 tip cracking in regular borescope inspections. The TIL explains why blade tip distress occurs and how to mitigate it.

Advanced Turbine Support has deep experience in compressor borescope inspections, citing the finding of more than 265 cracked rotor blades since 2001 using PT. Hoogsteden said the superiority of PT over eddy current (ET) for identifying the presence of radial tip cracks (photos) in 1509 inspections recently was confirmed when his company was called in to validate the findings of another vendor that had used ET. In this case, the engine owner/operator wanted a second opinion on the findings before it decided on possible corrective action. Second opinions are standard practice for some owners in critical situations.

Advanced Turbine Support’s inspectors confirmed the two cracks found with ET, but also identified three additional cracks that went undetected. “Any of these missed cracks could have led to a catastrophic compressor failure prior to the next annual inspection,” Hoogsteden said. Based on this result and previous experience, he recommends PT as a best practice for identifying radial tip cracks and for immediately conducting in-situ blend repairs of cracks found.

Generators: Lessons learned, best practices shared at GUG 2017 (Part 3)

By Team-CCJ | April 19, 2022 | 0 Comments

CCJ ONsite’s coverage of technical highlights from the Generator Users Group’s Third Annual Conference, held in Phoenix, Aug 27-30, 2017, concludes below with the third and final installment of lessons learned and best practices shared among attendees.

Presentations and discussions are arranged in these five sections:

    • Stator frames and magnetic cores.
    • Stator windings and bus systems.
    • Fields and excitation systems.
    • Operation and monitoring.
    • General topics (read on).

Links are provided to each of the first four sections in case you missed them. Summaries of the presentations in Section 5, which appear below, were prepared by IEEE Fellow Clyde V Maughan, president, Maughan Generator Consultants, who supplied the muscle to get the GUG off the ground in late 2015. Users wanting to dig deeper into any presentation can access the PowerPoint in the Power Users library. Note that Power Users Group is the umbrella organization serving the generator, steam turbine, combined cycle, 7F, and controls users groups.

Members of the GUG steering committee, all of whom have been with the volunteer organization since startup, are the following:

2018 Chair: Ryan Harrison PEng, lead generator engineer, ATCO Power (Canada).
2018 Vice Chair: Dave Fischli, generator program manager, Duke Energy.
Immediate Past Chair: Kent N Smith, manager of generator engineering, Duke Energy.
John Demcko, lead excitation engineer, Arizona Public Service Co.
Joe Riebau, senior manager of electrical engineering and NERC, Exelon Power.
Jagadeesh Srirama, generator engineer, NV Energy.

Mark your calendar now and be sure to attend the Fourth Annual Conference and Vendor Fair of the Generator Users Group, Aug 27-30, 2018, at the Louisville Marriott Downtown in Louisville, Ky. Follow the agenda, in development, on the organization’s website. 

General topics

    • Moisture ingress and storage mechanisms in large generators, Neil Kilpatrick, GenMet LLC
    • Generator layup, Dhruv Bhatnagar, GE
    • Practical experience in implementing NERC standard PRC-019, Douglas Selin, Arizona Public Service Co
    • Generator maintenance considerations and robotics, Dan Tragesser and Chris Markman, GE
    • Hydrogen seal-oil experience, Dhruv Bhatnagar, GE
    • Coordinated frequency response, Thor Honda, Emerson 

Moisture ingress and storage mechanisms in large generators

Neil Kilpatrick, principal, GenMet LLC, integrated more than four decades of metallurgical knowledge into his presentation, covering several aspects of moisture ingress in generators: problems created, moisture opportunities, capillary basics, examples of planar capillaries in generator construction, damage mechanisms affected by moisture storage, and why it is so difficult to dry out these machines.

As an example of a problem with moisture ingress and storage, a large generator located in the South (think humid) was found with water actually running out from under the ID of the retaining rings. The cause was condensation on the rotor inner surfaces and planar capillaries and connected surfaces internal to the winding.

Large generators normally are dry under operating conditions. When open and cooled to ambient temperature, there’s a tendency for moisture to accumulate on and in insulation materials. The usual remedy is to apply heat and ventilation in order to dry out the winding; this can be a lengthy process.

There are numerous moisture opportunities related to inadequate protection during shipment, storage, standby, and maintenance. Even during operation, there are opportunities for condensation from gas coolers, cooler leaks, and frame flooding. Outdoor units are a particular challenge given their exposure to weather. Hydrogen-cooled units have lower exposure than air-cooled units because of the controlled operating environment.

Capillaries behave the same whether horizontal or vertical. With dry air at their ends, the capillaries contain only air. Increase the humidity and water starts to condense inside the smallest capillaries; this begins at about 92% relative humidity (metal temperature relative to dew-point temperature). As the air at the capillary ends nears saturation, condensation spreads into progressively larger capillaries. At saturation, condensation occurs on free surfaces, pooling begins, and the capillaries fill.
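The condensation threshold can be checked from two measurements, metal temperature and ambient dew point. The 92% onset figure is from the presentation; the Magnus approximation and everything else below are illustrative assumptions:

```python
import math

def sat_vapor_pressure_hpa(temp_c):
    """Magnus approximation to saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def surface_rh_percent(metal_temp_c, dew_point_c):
    """Relative humidity 'seen' at a metal surface, computed from the
    surface temperature and the ambient dew point."""
    return 100.0 * (sat_vapor_pressure_hpa(dew_point_c)
                    / sat_vapor_pressure_hpa(metal_temp_c))

def capillary_condensation_risk(metal_temp_c, dew_point_c, onset_rh=92.0):
    """True when the ~92% RH capillary-condensation threshold cited in
    the presentation is reached at the surface."""
    return surface_rh_percent(metal_temp_c, dew_point_c) >= onset_rh
```

A rotor at 21C under a 20C ambient dew point is already past the capillary threshold, while warming the same metal to 30C drops the surface RH to roughly half, which is the whole argument for heat and ventilation during dry-out.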

There are numerous capillaries on both the rotor and stator. On the rotor there are capillaries between turns and on both faces of the slot liner (Fig E1). On the stator, there are capillaries in the spaces between core laminations and the spaces between the bar surface on the fillers and core iron (Fig E2).

Damage mechanisms of moisture affect both metals and insulation. For generators which still have nonmagnetic retaining rings susceptible to stress corrosion, crack initiation and crack propagation occur under wet conditions. Note that retaining rings are under high stress at standstill and all other conditions. With long-term wet conditions, rust will form on steel surfaces which are bare and/or porous. Rust is hygroscopic and will retain moisture—more opportunity for water storage.

On insulation, the major concern is for moisture on insulating surfaces. Typically, wet conditions in generators will result in low resistance to ground, and this must be corrected before return to service.

The issue of the difficulty in drying out a generator is interesting. A generator in operation tends to be inherently dry, because of the high temperature and high ventilation flow. On shutdown, there is no ventilation flow, so the entire machine becomes a large number of stagnant zones. Any stagnant zones that have some moisture content tend to become saturated. Capillary condensation will work to fill all the connected capillaries.

If the open machine is exposed to humid conditions, then the daily dew-point cycle may result in periods when the dew-point temperature is greater than the metal temperature. Condensation will occur, and the machine will take on water as long as condensation continues.

A filled capillary is relatively stable at moderate ambient temperatures and stagnant conditions. There is almost no driving force to evaporate water back out into a stagnant atmosphere at the same temperature. A significant increase in metal temperature will increase the evaporation rate by producing a decrease in local relative humidity. Significant ventilation flow should also break up the stagnant zones with the rapid inflow of dry air. For the rotor, a significant increase in shaft speed will provide a G loading which will tend to centrifuge water out of the rotor.

It is always better to keep a dry machine dry than to dry out a wet machine. For maintenance and layup conditions, it is important to make sure that capillary-condensation conditions cannot occur. Prevention can include maintaining some ventilation flow of dry air throughout the machine and maintaining temperature well above the ambient dew point; staying below 80% relative humidity provides a healthy margin. For long-term layup, develop a system which combines fail-safe sealing, monitoring, and drying. A nitrogen blanket or dry-gas feed might be considered.

Generator layup

Dhruv Bhatnagar, GE’s technical leader for generator-fleet risk management, provided the OEM’s guidelines for unit layup during non-operational conditions. Stator and rotor recommendations are the following:

    • Stator layup for days. No recommendations for H2-cooled units if the hydrogen is pressurized. For liquid-cooled stators, the stator cooling-water system (SCWS) should be operational, or shut down with water drained from the winding for any layup of more than 48 hours. For air-cooled units, or H2-cooled units that are depressurized, turn on space heaters to prevent condensation.
    • Stator layup for weeks or months. For air-cooled units, turn on space heaters to prevent condensation; same for H2-cooled units, but depressurize before turning on space heaters. H2-cooled units not purged should reduce gas pressure to 0.5 psig to minimize consumption. For liquid-cooled units, the winding and SCWS should be drained and vacuum-dried.
    • Rotor layup for days. Rotor should be at rest with the pole axis in the vertical direction. Coat all exposed shaft surfaces with light lubricating oil.
    • Rotor layup for weeks or months. Rotor should be at rest with the pole axis in the vertical direction. Megger field monthly and trend insulation resistance. A low megger indicates moisture in the generator. Inspect exposed shaft surfaces and the collector rings to ensure that the oil film is adequate.
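The monthly megger trending in the rotor checklist might be automated along these lines. A sketch only: the 100-megohm floor and function name are hypothetical, not OEM figures.

```python
def field_layup_check(megohm_readings, min_megohm=100.0):
    """Screen a series of monthly field megger readings during layup.

    Flags moisture if the latest insulation resistance is below a
    (hypothetical) floor, or if the last three readings are steadily
    falling -- the trend the GE guideline says to watch.
    """
    latest = megohm_readings[-1]
    falling = len(megohm_readings) >= 3 and all(
        earlier > later
        for earlier, later in zip(megohm_readings[-3:], megohm_readings[-2:]))
    return "INVESTIGATE_MOISTURE" if latest < min_megohm or falling else "OK"
```

The trend matters as much as the absolute value: a reading that is still above the floor but dropping month over month is an early warning that condensation is getting into the winding.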

Similar recommendations were provided for collector systems, seal-oil systems, and coolers.

In addition, the following case studies related to improper storage were discussed:

Case No. 1. Unit was in a planned outage (turbine upgrade). During restart after the outage, the unit tripped on stator differential protection. Upon inspection, damage was noted on the turbine-end series loop caps (Fig E3). The failure was attributed to condensation on stator windings.

Case No. 2. Unit was shut down because of grid issues. Upon restart, a field ground alarm was activated and the unit was shut down again. Inspectors noted rust had accumulated on the rotor and exciter components because of condensation and improper layup (Figs E4 and E5).

Implementing NERC Standard PRC-019

Douglas Selin, PE, consulting engineer, Arizona Public Service Co, provided an overview of the NERC standard and the process APS uses to implement PRC-019 across its fleet of generators. A review of the standard outlined the functional entities required to comply with the mandate, the applicable facilities, and the individual requirements which involve coordinating voltage-regulator controls with the protection system and the capabilities of the equipment (generators or synchronous condensers). The time requirements for implementing the standard on a fleet of generators also were presented.

The evaluation process used by APS includes the following five steps:

    • Identify all of the voltage-regulator limiter and protection functions for a given generator.
    • Identify all of the generator relay protection functions enabled.
    • Determine what coordination must be evaluated based on a comparison of the items identified in Steps 1 and 2 above.
    • Perform the needed evaluation and modify settings such that they coordinate.
    • Document the results in a formal report.
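Steps 1 through 3 of that process reduce to a set intersection. A sketch, with hypothetical function names and a coordination table that in practice is unit-specific:

```python
def coordination_pairs(avr_limiters, relay_functions, known_pairs):
    """Screen for the limiter/relay coordination studies a unit needs.

    Given the limiter functions inventoried on the voltage regulator
    (Step 1) and the enabled protection functions (Step 2), return the
    limiter/relay pairs that must be evaluated (Step 3). known_pairs
    maps each limiter to the relay functions it must coordinate with;
    the table below is an illustrative assumption.
    """
    limiters = set(avr_limiters)
    relays = set(relay_functions)
    return sorted(
        (lim, rel)
        for lim, rels in known_pairs.items()
        if lim in limiters
        for rel in rels
        if rel in relays
    )

# Example pairings commonly evaluated under PRC-019: the overexcitation
# limiter (OEL) against field overcurrent protection, the V/Hz limiter
# against the 24 relay, the underexcitation limiter (UEL) against the
# 40 (loss-of-field) relay.
PAIRS = {"OEL": ["field_overcurrent"], "V/Hz_limiter": ["24"], "UEL": ["40"]}
todo = coordination_pairs(["OEL", "UEL"], ["field_overcurrent", "24"], PAIRS)
# Only the OEL/field-overcurrent pair needs study on this hypothetical unit.
```

Steps 4 and 5, the actual settings evaluation and the formal report, remain engineering work; the screening just keeps the study list complete and auditable.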

Several methods of demonstrating how the coordination can be reviewed, visualized, and documented were presented for most of the voltage-regulator functions that would be encountered in such an evaluation. A summary list of learnings was offered to enhance the efficiency of the evaluation process.

The effort needed to perform the coordination analysis outlined is a requirement for all generator owners. It has the benefit of improving power-system reliability by avoiding unnecessary unit trips: Generator voltage regulators act to mitigate undesirable operating conditions before relays trip the unit. 

Generator maintenance considerations and robotics

Revision L of GEK 103566, perhaps better known by number than its title, “Creating an Effective Maintenance Program,” was reviewed by Dan Tragesser and Chris Markman to help owners operate their generators safely and reliably. Tragesser manages technical risk for GE’s Global Generator Product Service Engineering; Markman is product manager for generator inspections.

Six key areas were discussed by the duo—including Rev L updates, rotor removal recommendations, inspection and maintenance intervals, how intervals are determined, examples of intervals, and rotor and retaining-ring life management. GEK 103566, which was said to contain information of importance to users, can be obtained from your GE rep.

Robotic inspections were the next topic with specific references made to the OEM’s retaining-ring scanner (Fig E6) and air-gap robots (Fig E7). The speakers said robotic inspections were performed on 512 units between 2011 and 2016, with 8% having significant findings (defined as rotor removal required for repair) and half of those deferred to next outage. There were three forced outages associated with the 20 rotors pulled. Two had rotor grounds and one was forced out by a negative sequence current with arc strikes.

The speakers said electrical tests conducted on the rotors removed typically confirmed the findings of other tests or conditions—such as shorted turn and vibration. Data show robots typically find the same defects as visual inspections with the rotor out. Discussion regarding robot inspections that resulted in a rotor pull noted that more than 50% of these could have been planned for with better operations data review and outage management.

Relative to reliability, GE reported four cases of MAGIC (Miniature Air Gap Inspection Crawler) robots losing parts—including two burst bearings (encapsulated bearings are now used) and three fastener issues (redesigned). Relative to robots getting stuck, GE has emergency retrieval capability built into designs.

Also discussed was the stator cooling water system with focus on copper oxide buildup and removal. 

Hydrogen seal-oil experience  

GE’s Dhruv Bhatnagar returned to the podium to address the challenges associated with seal-oil systems and how to mitigate them. Challenges include the following:

    • The seal rings themselves. Damage, contamination, improper assembly, and cocked seals all can lead to operational issues—including oil ingress.
    • Float traps require manual bypass during every startup/shutdown. Improper procedures are conducive to seal-oil ingress.
    • Oil contamination of the hydrogen control panel.

Seal-oil-system mechanisms and effects were the next topic. Mechanisms include cocked seals, loss of seal oil, damaged anti-rotation pin, contaminants, damaged seals, generator pressurization, clogged drain lines, improper assembly, and misoperation. Resulting effects include higher total and hydrogen-side seal-oil flow, improper liquid-level detector alarms, and high float-trap oil level.

Checklists for disassembly and reassembly followed:

Disassembly:

    • Measure rotor position from the outboard oil-deflector fit to the shaft.
    • Measure the distance between the hydrogen-seal casing and the rotor shaft.
    • Determine “as-found” seal clearances.
    • Inspect seals and ensure they are not out-of-round.

Reassembly:

    • Inspect seals and ensure they are not out-of-round.
    • Check for any foreign material between the inner oil deflector and hydrogen-seal casing.
    • Check the vertical face of the end shield between the upper and lower halves for any steps across the horizontal joint.
    • Perform a blue check and ensure 100% contact.
    • Check for any RTV that may have squeezed from between the upper and lower halves of the end shield; remove any RTV material on the horizontal joint of the lower-half casing.
    • Ensure the seal-oil inlet feed and gas-side seal-oil drain in the end shield are clear.

The presentation closed with a case study of a unit that was offline, but pressurized and with seal-oil system in operation, when a blackout occurred. The DC system came online, but the site lost seal-oil differential pressure (DP). By the time DP was restored, the unit had dropped 10 psi in hydrogen pressure. The decrease in seal-oil DP allowed oil ingress.

The operator received multiple liquid-level detector alarms, and low and low-low lube-oil alarms. Site personnel tried to start up the unit next day but were unable to build lube-oil header pressure. Personnel purged and inspected the generator, which was flooded with lube oil. Air-side seals and shaft surfaces were found to have rub marks (Figs E8 and E9); seals were out of round. 

Coordinated frequency response  

Emerson’s Thor Honda, an expert on the modernization of mechanical and electronic turbine controls, discussed the challenges associated with injecting into the grid large amounts of intermittent power produced by renewable resources. This new and evolving paradigm in electric generation has highlighted the need for synchronous turbine/generators to help stabilize system frequency.

The questions then arise: How do synchronous powerplants respond to system frequency disruptions, and what changes may need to be made to comply with frequency-response codes and standards? Synchronous generators add rotating inertia and have governors which detect frequency disruptions and raise/lower output to quickly balance generation to load (called “droop” control, or primary frequency response).

These questions become more acute because tax credits and rapidly declining costs are driving ever more massive amounts of renewables power into existing transmission systems. A lower percentage of synchronous generators means less inertial response to frequency disruptions, and less inertial response means more turbine/generator response is needed. Synchronized turbine/generator droop control must give a sustained response to minimize the magnitude of frequency disruptions and maintain reliability.
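Droop control itself is a simple proportional law: with 5% droop, a frequency error of 5% of nominal calls for a 100% change in output, opposite in sign to the error. A sketch, with the deadband value assumed from common NERC guidance rather than taken from the presentation:

```python
def droop_response_mw(freq_hz, p_rated_mw, droop=0.05,
                      f_nom_hz=60.0, deadband_hz=0.036):
    """Primary frequency response of a governor on droop control (sketch).

    Under-frequency produces a positive (raise-output) response, and
    vice versa. The +/-36-mHz deadband is an assumed value based on
    NERC primary-frequency-response guidance.
    """
    error = freq_hz - f_nom_hz
    if abs(error) <= deadband_hz:
        return 0.0
    # Act only on the portion of the error beyond the deadband.
    error -= deadband_hz if error > 0 else -deadband_hz
    return -(error / f_nom_hz) / droop * p_rated_mw
```

On a 500-MW unit, a sag to 59.7 Hz calls for roughly 44 MW of additional output; the "sustained response" issue NERC flagged is about outer controls quietly withdrawing exactly this contribution.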

NERC’s 2012 Frequency Response Initiative Report found that only 30% of the generators online provide primary frequency control, and that two-thirds of those that did respond exhibited “withdrawal” or “squelching” of the response; the culprit is outer closed-loop control, such as AGC, pulling units back to their setpoints. Since only 10% of the units online were sustaining their expected primary-frequency-control capability, a reliability issue arises: Balancing authorities (BAs) get a significant portion of frequency response from load and can neither predict nor control that response (load’s inertial contribution cannot be accurately predicted).

These issues have encouraged NERC and industry efforts to improve frequency reliability, thereby making the need for government regulations less likely. One step in that direction is GE Technical Information Letter 1961, “Steam Turbine Governor Studies to Meet NERC Frequency Response Advisory,” which was supported by a webinar.

Honda closed his presentation with these recommendations for owner/operators:

    • Verify unit-specific requirements with your BA.
    • If operating in closed-loop automatic generation control (AGC), biasing may be required to pass BA compliance criteria.
    • When implementing AGC bias, keep the following in mind:
      • AGC will not negate droop impact on site output, which may have economic considerations.
      • Ensure AGC bias is accurate and enabled accordingly.

COMMENTARY: Utilities embrace digital technology within the same old business models

By Team-CCJ | April 19, 2022 | 0 Comments

Spending two full days at the DistribuTECH® 2018 Conference & Exposition in San Antonio, Tex, in January, was an opportunity for the editors of GRiDToday and CCJ ONsite to assess progress as the electricity industry “transforms” from centralized to distributed assets with customer service at the core of the enterprise. Or so all the executives were saying.

The concept of a “smart city” was certainly a prevalent theme, an evolution in buzzy phrases from smart meter followed by smart grid, as was the notion of a “digital utility.” During the keynote talks, an executive of a large municipal utility noted that the company is uniquely suited to be a smart city. An executive from the largest state-owned public utility in the country told the audience it is working to become the first “all-digital utility.” Both have built and are managing significant technology development centers to achieve these goals.

Of course, no one defined what exactly is a smart city or a digital utility, nor did anyone broach what the term smart city might imply about rural areas.

Later in one of the mega-sessions, a representative from one of the country’s largest investor-owned utilities said they were “looking to replace existing assets with new, smarter, and better ones.” This utility is, by the way, one of the largest coal-based utilities, too, which tells you something. The traditional utility business model is to earn a regulated rate-of-return investing in assets over a long time horizon. Smart cities and an “all digital utility” are grand central planning strategies which utilities are comfortable with, even if they are designed for “decentralized” infrastructure.

The muni executive gave an insightful company factoid: Last year the utility had 8470 MW of capacity in operation; in 2020 they expect to reduce that to 7880 MW, while adding 650 jobs. Clearly, those workers are not destined for the powerplants or transmission network. The executive said the company considers the community it serves “energy advisors.” That’s an advisory committee meeting one would probably make any excuse to avoid.

Not your father’s utility bill. The customer-services technology platforms being implemented are nothing like a paper utility billing statement delivered monthly. The hardware model, according to presentations and what was being exhibited on the floor, is the smart phone; the customer experience model is Amazon, and the engagement is intended to be 24/7/365.

Descriptions and demonstrations of these customer/utility interfaces were truly dazzling, with systems controlling a two-way transactional electricity flow interface with the utility, rooftop solar, smart thermostat and HVAC, behind-the-meter storage, electric-vehicle charging station, and more. Customers can chart and alter (or not) behavior, costs, and revenue (if they are selling power back) patterns through data visibility.

Demand side management (DSM), in other words, has come a long way from the utility subsidizing the replacement of an old refrigerator with an efficient one, while the ratepayer plugs in the old one in the basement to keep the beer cold. That was DSM circa 1970s.

Still early days. Despite the promise and potential, the smart-digital transformation is still in its infancy. Utility representatives generally talked more about their “initiatives” and future plans, their technology development programs, and results from initial demo facilities than they did of replicable commercial projects. As one example, the muni utility mentioned above has only one microgrid currently in operation. Meanwhile, the engineers and technocrats grappled with the nuts and bolts of making this stuff work.

Across several sessions addressing microgrid challenges and lessons learned, what was clear is that the one function that essentially distinguishes a microgrid from a conventional power and electrical system serving, say, an industrial facility—the capability to “island” (safely disconnect) from the utility or larger grid in response to a disturbance, keep operating, and then automatically reconnect when the disturbance is cleared—is still the greatest challenge.

In one presentation, the microgrid manager noted that “it was a challenge going from island operation back to the grid-connected operating state,” and that seamless transitions work but you may need to shed some load. That actually doesn’t sound like a seamless transition. Another microgrid operator was not able to demonstrate transitioning from islanded to grid-connected operation. In that project, one problem was that the battery-storage communication system couldn’t react fast enough to synchronize the battery to utility frequency.

Other challenges mentioned by several with demo-project experience included:

    • Accommodating cloud-induced variability with solar PV systems.
    • Contracting for, and integrating, state-of-the-art components from multiple suppliers into a coherent system design.
    • Understanding and complying with building codes and standards and new standards for microgrid components and grid interfaces, which are still evolving.
    • Control and communications protocols and cybersecurity issues among different subsystems (storage, PV, microturbine/generator, etc).
    • Loop testing hardware with real-time digital simulation (described by one presenter as critical).
    • Handling reactive load profiles and providing reactive power support (one presenter mentioning the application of “smart” inverters).

Finally, several C-suite challenges were noted, perhaps unintentionally. For example, a representative of a large nuclear-based utility with a national footprint said they were "pursuing a customer and energy services business model" while also noting that "the customer experience [stuff] is only 5% of the utility's annual expenditures." Protecting current revenue sources while pursuing hot new growth areas is always a fundamental challenge for executives of large companies.

One executive noted that “the future is in energy storage.” However, it’s also true that reliable, affordable storage is what could decouple customers from the utility grid, reduce the need for purchased utility power to the emergency backup category, and allow new market entrants to manage a customer’s residential or commercial energy infrastructure. Thus, the real race may be to pay off existing “dumb” assets and retain ratepayers with dazzling new services delivered through an Amazon-like interface before Amazon and others like it dis-intermediate the utility altogether.

Getting top performance from your air-cooled condenser

By Team-CCJ | April 19, 2022 | 0 Comments

Air-cooled condensers (ACCs), like their water-cooled (WCC) cousins, get little respect from plant personnel. These heat exchangers typically are viewed as being of secondary importance compared to the steam turbine/generators they serve. They certainly are not eye candy. The most popular version of the former looks like a huge elevated stage with spindly legs; the latter may be described best as a steel box filled with thousands of small-diameter tubes. Both are just kind of "there." A pet rock comes to mind.

And what could go wrong with either? Not much, compared to the steam turbine/generators and most other plant equipment. But remember that a healthy condenser is critical to achieving top Rankine cycle efficiency and maximum profit. This puts condensers under the watchful eye of the financial folks.

Powerplant O&M staffs are most familiar with WCCs and much has been written about how to keep them at peak performance. For a short refresher, look back at the work done by your colleagues at Talen Energy’s Nueces Bay and Barney M Davis Energy Centers to earn one of CCJ’s 2017 Best of the Best (Practices) Awards.

Much less has been published on how to squeeze more performance from ACCs. One reason: There are far fewer ACCs than there are WCCs, hence the funds available to support product improvement have been somewhat limited to date. NV Energy believed dry cooling technology could be advanced at an affordable cost by fostering collaboration among owner/operators of ACC-equipped powerplants and it launched the ACC Users Group nine years ago with assistance from the editors of CCJ and CCJ ONsite.

The library of presentations from the organization’s meetings available on its website may be the most comprehensive collection of technical material on ACCs available publicly.

The ninth annual meeting of the ACC UG, held Oct 3 – 5, 2017 in Las Vegas, featured specifics of performance issues and measurements, equipment challenges and solutions, system chemistry, and design details that could be discussed and shared among users, engineering firms, equipment suppliers, and academics, and taken home for the benefit of operating plant personnel, owner/operators, and researchers. CCJ ONsite’s Consulting Editor Steve Stultz filed the following report; his coverage continues in the next issue of this letter.

Looking back over the years to the first meeting in 2009, the editors opined about what they viewed as a quantum leap in technology from then to now. Also, that because of water’s generally much higher value in areas outside the US, technology improvements—such as those to improve fan performance, mitigate gearbox issues, etc—are coming from places like South Africa, Morocco, Italy, Germany, Netherlands, and Spain.

Performance improvement. Huub Hubregtse of Netherlands' ACC Team talked about the need for good operational data on heat transfer, fans, leak rate, etc, to decide what modifications or adjustments might be required to achieve top performance of the heat rejection system. He stressed the value of engineering expertise for interpreting the data.

The plant DCS records turbine backpressure, steam temperature at the turbine exhaust, condensate temperature, extraction temperature, steam flow, and ambient temperature at the weather station. With this information, Hubregtse continued, the performance calculation method (a thermal design model) can calculate turbine backpressure as it should be. When compared with actual or historical backpressure, differences indicate either performance loss or performance improvement.
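The comparison Hubregtse describes—design-model backpressure versus measured backpressure—can be sketched in a few lines. The linear "design curve" below is a stand-in with made-up coefficients, not the ACC Team's actual thermal model; only the deviation logic reflects the method described.

```python
# Illustrative sketch: a thermal design model predicts what turbine
# backpressure "should be" for current conditions; the gap versus the
# measured value flags performance loss (or gain). Coefficients are
# purely illustrative stand-ins for a real design model.

def expected_backpressure(ambient_f, steam_flow_pct, base_inhga=2.0):
    """Toy design curve: backpressure rises with ambient temperature
    and with steam (heat) load."""
    return base_inhga + 0.03 * (ambient_f - 59.0) + 0.01 * (steam_flow_pct - 100.0)

def backpressure_deviation(measured_inhga, ambient_f, steam_flow_pct):
    """Positive result = measured backpressure higher than design
    = performance loss (fouling, air in-leakage, fan issues, etc)."""
    return measured_inhga - expected_backpressure(ambient_f, steam_flow_pct)

dev = backpressure_deviation(3.1, ambient_f=75.0, steam_flow_pct=100.0)
print(f"{dev:+.2f} in. HgA vs design")  # +0.62 -> investigate
```

Trending this deviation over time, rather than the raw backpressure, separates genuine degradation from ordinary weather and load swings.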

He went on to say that fan-performance determination requires the following data:

    • Actual air flow from the fan.
    • Static pressure in the plenum.
    • Static pressure at the suction side of the fan.
    • Fan power.

Air-side performance is evaluated based on the temperature rise of the cooling air across the finned heat-exchanger tubes. The hot air temperature is commonly measured at the outlet of the bundles (50 positions). Air flow should be measured from the fan bridge in at least four directions.
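The air-side check above reduces to a simple heat balance, Q = m·cp·ΔT, applied to the averaged bundle-outlet readings. A minimal sketch, with illustrative numbers:

```python
# Air-side heat duty from measured air flow and the temperature rise
# across the finned bundles (Q = m_dot * cp * dT). Values illustrative.

CP_AIR = 1.006  # kJ/kg-K, dry air near ambient conditions

def air_side_duty_kw(air_flow_kg_s, t_in_c, t_out_c):
    """Heat rejected to the cooling air, in kW."""
    return air_flow_kg_s * CP_AIR * (t_out_c - t_in_c)

# Average the many bundle-outlet readings (the presentation cites ~50
# positions); an abbreviated sample is used here.
outlet_readings_c = [38.2, 39.1, 37.8, 38.6]
t_out = sum(outlet_readings_c) / len(outlet_readings_c)

print(f"{air_side_duty_kw(1500.0, 25.0, t_out):.0f} kW rejected")
```

Comparing this air-side duty against the steam-side heat rejection is one cross-check on both the flow measurement and the temperature survey.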

Hubregtse also discussed the adverse impact of air in-leakage on ACC performance and stressed the need to inspect for leaks and eliminate them. He said performance losses of up to 10% have been attributed to in-leakage.

ACC fouling has two effects, he added. First, air flow is restricted, resulting in a higher static pressure than the original design. Second, the heat-transfer coefficient of the finned tubes is reduced by insulation layers (fouling) on the surface. Thus, ACC performance tests should be made after tube cleaning.

Details on air in-leakage at three ACCs in Mexico and two in the UK were provided by InterGen’s Oscar Hernandez, a member of the user group’s steering committee. At one plant in Mexico, instrumentation detected a change in the dissolved-oxygen concentration leading to repair of a spray nozzle. In another, dissolved oxygen again triggered an investigation leading to the finding of a steam-turbine gland seal out of position. For both, credit was given to accurate online instrumentation and continuous chemistry monitoring.

A lengthy list of key indicators beyond chemical parameters was reviewed, highlighting such items as increase in backpressure, decrease in condensate temperature, and loss of ACC vacuum. This was followed by a list of common air in-leakage sources including missing hardware, penetrations, welds, turbine shaft seals, expansion joints, pump seals, manways, and valves. Thermal imaging was recommended, looking for black spots.

Structural Integrity’s Barry Dooley, a member of the ACC UG steering committee mentioned that IAPWS is reviewing the subject of air in-leakage for one of the organization’s Technical Guidance Documents planned for 2018. Dooley is executive secretary of the international body of experts.

Tube cleaning technology has advanced significantly over the last decade. AX Systems’ Romain Pennel presented on an automated cleaning system developed by the French company. His case study for a small waste-to-energy plant in the UK equipped with two A-frame ACCs illustrated the value of cleaning. Output had decreased by 3 MW before cleaning reclaimed most of that loss.

Pennel began with an overview of fouling mechanisms, showing how they create an isolation film and limit air flow through the fins, thereby reducing heat transfer. This includes fouling from the natural environment (such as pollen or sand) and from industrial sources (fiber, dust, and oil). The result: Reductions in vacuum, turbine steam flow, and power generation.

ACC configurations and challenges covered by the presenter included flat, A-frame, and V-frame. Examples of what not to do with regard to external cleaning of heat-transfer surfaces included the following:

    • Don’t use manual high-pressure spray equipment: It’s easy to bend fins when the spray head is not perpendicular to the tube surface.
    • Avoid sandblasting. Risks include fin damage and removal of any tube coating—aluminum, for example.
    • Say “no” to use of a bicarbonate solution for cleaning. Risk is the electrolysis effect between aluminum and NaHCO3.

EPRI’s Andy Howell, chairman of the user group’s steering committee, followed Pennel with a status report on the users group’s tube-cleaning guidance document, “Finned-tube heat exchanger tube cleaning.” The chemist reviewed an outline of ACC.02, which covers operational factors limiting ACC efficiency (such as ambient temperature and degree of external tube fouling), cleaning frequency, foulant removal using water, air, and dry ice, and much more.

He urged all attendees to participate in the development of the document with suggestions, reviews, and comments.

Of fans and wind. Failure to meet performance expectations often can be traced to wind effects and fan issues—such as marginal design, in both cases. Ockert Augustyn of Eskom, South Africa’s largest utility (produces about 95% of the country’s electricity), discussed operating performance of the world’s largest ACC at Medupi Power Station. It will be home to six 794-MW steam units; three were operating at the time of the meeting.

Eskom operates large ACCs at four multi-unit installations. Fans measuring 30 ft in diameter number between 48 and 64 per condenser; platform heights range from 145 to 195 ft. Water restrictions dictate use of ACCs.

Augustyn stressed that all ACC performance requirements are specified by the purchaser; the supplier is responsible for compliance and design. However, he pointed out that the supplier can be at an advantage because performance testing is not conducted at high-wind conditions.

Augustyn noted these risks for the purchaser: A supplier might be reluctant to add safety margins or other features that would make its offering less competitive, and the purchaser may not be able to disqualify offers or justify higher costs if all suppliers meet the specification. More important, performance characteristics in windy conditions are essentially unknown until after commissioning—too late for design changes.

Because most suppliers are in compliance with specifications, the advantage is theirs, and the purchaser needs to be knowledgeable of all potential risks and limitations.

Operational experience at Eskom shows there can be significant capacity loss during adverse weather conditions (high temperature and high wind speed, in particular). At the utility’s older Matimba site, CODs of its six 665-MW units extended from 1988 to 1993. A dozen vacuum-related unit trips attributed to wind occurred during the first seven years of operation. In 2016 alone, there were multiple cases of load losses exceeding 1000 MW.

Planning for Medupi drew upon Eskom’s experience at Matimba and its other plants:

    • Atmospheric conditions were based on 130-ft elevation above grade.
    • Design wind speed was 20 mph in any direction.
    • Wind-wall height was extended to the top of the steam duct.
    • An 8-ft-wide solid walkway was placed around the entire platform perimeter (for maintenance work, but also to reduce hot air recirculation).
    • Wind cross on ground level was 33% of fan inlet height.
    • Performance guarantees were verified by CFD before construction.

Also required was an increased gap between the ACC and turbine building. At Matimba, the two structures kissed; at Medupi they were 165 ft apart. Plus, the at-grade wind walls at Medupi were extended from Matimba’s 33 ft to 47 ft. Augustyn explained: “We went with what we knew, then made things stronger and bigger.”

The first unit at Medupi entered commercial operation in October 2015 and has experienced no vacuum-related load losses. ACC performance comparisons of Medupi and Matimba benefit from the close proximity of the two plants.

Wind screen design. Cosimo Bianchini, of Italy’s Ergon Research, shared his knowledge on the use of CFD analysis for optimal wind-screen positioning. He captured the attention of attendees with this factoid: Wind’s impact on the ACC is significant, reducing net power by 10% or more for each 22 mph of wind speed. Two common sources of wind-induced ACC losses, he continued, are fan performance degradation and recirculation of hot air into downwind fan inlets.

Bianchini went on to describe his overall modeling strategy with detail befitting a CFD specialist. He pointed out one of the advantages of modeling: The ability to test variations. In this case, 11 mitigation devices (screen plans) were tested by combining various suspended and ground-up designs. The optimal configuration, a compromise between performance and cost, was the cruciform fabric screen (30% open area) and suspended vertical screens around the ACC walls.

The conclusions: Wind screens can mitigate wind losses, showing a gain of up to 14% at 22 mph. This recovery factor starts decreasing at intermediate wind speeds. Actual flow rate depends on wind-screen configuration, wind speed, and wind direction.

Efficient fan design. Augustyn returned to the podium to discuss the “learning experience” for Eskom at Matimba. The plant had suffered historic losses in windy months, but the utility’s engineering team concluded it was not economically feasible to eliminate them entirely. Typical annual production from the plant is 24,000 GWh. Vacuum-related losses in 2016 totaled 350,000 MWh, or less than 1.5% of total production.

Eskom initiated its loss-mitigation efforts with a thorough review of previous CFD work, including placement of wind screens. This led to a detailed look at fan performance.

Aerodynamic design was reviewed with CFD, keeping the same duty point. Static efficiency was set at 60%, and a steep curve was established to protect against wind.

South Africa’s Stellenbosch Univ was invited to participate in the study and a consortium was established to design, manufacture, install, and commission a high-efficiency 30-ft-diam fan. Consortium members included Kelvion (Germany),  Enexio (Germany), ECILIMP Termosolar (Spain), Soltigua™ (Italy), IRESEN (Morocco), Waterleau Group NV (Netherlands), and Notus Fan Engineering (South Africa). Funding was provided by the European Union through its Horizon 2020 Research and Innovation Program.

A unique manufacturing process offered consistent weight distribution. When eight blades were weight-tested, there was only a 500-g difference (less than 1%) between the heaviest and lightest airfoils in the group. There also was a 50% weight reduction with the new design. Blade angle settings achieved increased volume flow and reduced fan power consumption.

Aerodynamic improvements included the following:

    • The new fan consumes 15% to 20% less power than the existing fan for similar flow displacement.
    • Volume flow rates can be increased by 10% to 20%.
    • Cells have greater protection against wind effects.

Structural improvements:

    • Blades are not resonating, thereby greatly reducing vibrational loads on the gearboxes.
    • Blade shape and structure are consistent, making blades interchangeable without negative effects.

Vegetation challenge? Consider goats

By Team-CCJ | April 19, 2022 | 0 Comments

Sharing ideas, and solutions to problems encountered in operating and maintaining powerplants, is why users groups were formed by owner/operators. Talen Energy is a big supporter of these self-help all-volunteer organizations, its personnel actively participating in several—including the Combined Cycle and 501G Users Groups.

An idea sure to generate discussion at upcoming meetings comes from Colleen Dolan and Regina Chan and their colleagues at Talen’s Athens Generating Plant, operated by NAES Corp. The three-unit, 1080-MW, 501G-powered facility is located 30 miles south of Albany, NY, a stone’s throw from the Hudson River.

Plant facilities include two storm-water basins designed to safely capture and release natural and facility discharge into federally protected wetlands. The man-made ponds are monitored by the New York State Pollutant Discharge Elimination System Permit Program.

Since commissioning in 2004, the ponds, located in opposite corners of the site and designated the northeast and southeast ponds, have become overrun with vegetation, mainly cattails and sumacs. Such vegetation is often found at power-generation facilities, especially those surrounded by woods and wetlands.

Storm-water ponds become inefficient when overcome with vegetation. And when pond berms attract vegetation (other than grass), root systems can undermine the berms causing leaks. At Athens, the cattails also attracted muskrats, which began making holes in the berms. Plus, obstructed visibility of the site’s perimeter was a security concern. Thus, vegetation control became a critical part of facility maintenance.

Proper pond maintenance is critical for both operation and environmental conservation. Generally, storm water refers to runoff that does not soak into the ground and can travel into waterways—in the case of Athens, eventually to the Hudson River. Storm-water runoff collects pollutants and debris as it travels, enabling concentrations of materials that can damage lakes, rivers, wetlands, and other water bodies. Thus it can have negative impacts on animal and plant life, as well as on sources of potable water.

Because containment and controlled discharge are part of Athens’ state permit program, vegetation must be controlled.

Cattails, aquatic perennial plants, overwhelmed the Athens ponds. Three species of sumacs (Staghorn, Smooth and Poison) also appeared in and around the ponds. These shrubs and small trees produce flowers with dense pinnacles and fruit. Sumacs can reach a height of 30 ft.

The northeast pond is designed to collect and release water from natural sources and from the Athens facility. When vegetation became an apparent problem, the pond’s discharge pipe was closed for long periods in an attempt to raise the water level where cattail growth would not be induced. The area also was brush-hogged. These two actions significantly reduced the need for vegetation removal and maintenance. But the pond berm and surrounding area soon became an alternative growth area, and uncontrolled sprouting became a new concern. Extensive mowing is the current viable solution.

The southeast pond (map) also is designed to carry, collect, and release, but the facility has not yet been required to open its discharge pipe. Water collected in this pond is from rain and occasional release from transformer containments. Therefore, the level normally is low, providing an ideal environment for vegetation growth within the pond. Both cattails and sumac have flourished and have invaded the berms and surrounding areas. Brush hogging, extensive mowing and manual labor have not decreased the problems. Some cleanup activities actually have increased the progression of these plants.

Actions and alternatives. Biocides and herbicides were never an option for vegetation control. Alternatives were discussed with the New York State Department of Environmental Conservation, and the parties concluded that mechanical removal would be the most appropriate. But Athens had attempted mechanical means since 2011 and these had proven neither economic nor efficient. The vegetation would return.

Sometimes the best solutions are relatively simple and, in this case, local.

Athens personnel contacted a local farmer who raises goats. They are herbivores, naturally clearing land with their insatiable appetites, and capability to ingest a wide variety of plants. Their natural craving includes cattails and sumac. For the more tree-like sumac species, goats will eat the bark or use it to clean their horns, in addition to eating the leaves. This prevents new shoots from growing.

After surveying the southeast pond area, the farmer was willing to locate a small herd of 10 goats in a fenced area for eight weeks. He would visit daily to bring water, check animal health, and oversee progress. This became a trial run to determine whether or not the goats would be comfortable and willing to feed on the overgrowing vegetation. Goats were placed in four different areas of the pond throughout the eight weeks to test their appetites and adjustment to the environment. The results:

Area 1. The southern area of the southeast pond is a steep hill that meets the Conrail Railroad tracks east of the Athens facility. This zone is filled with sumac and had grown into a miniature forest. The view of the tracks (and security perimeter) was obscured by the dense canopy of leaves and thick branches. This area was chosen as the first test because of its abundance of sumac.

Within 48 hours, the goats had made significant progress, eating away at the sumac leaves and cleaning their horns on the bark. Within 10 days, the view of the railway was significantly improved. Approximately half of the trees were stripped bare as the goats targeted the staghorn and smooth sumac. They also ate the grass, flowers, and weeds.

The goats concentrated on the steep hill. The untouched vegetation consisted of plants the goats could not ingest, plants too high to reach, poison sumac, or staghorn with white flowers (young sumac getting ready to bloom).

Area 2. The goats were moved to the second zone after 10 days. This was the pond area containing cattails and surrounded by sumacs and other vegetation, and was the area of most concern to the facility (pond discharge pipe).

However, because of poor weather and standing water in the pond, the goats did not venture into the base but instead only grazed on the sides, eating the sumac trees.  Their sensitivity to water overruled their preference for cattails, reducing their overall impact. They ate approximately 35% of the area, consuming leaves and breaking down barks, impeding future growth.

Area 3. After 14 days in Area 2, the goats were moved to a zone densely populated with shrubs and weeds on the hill and with staghorn and smooth sumac at the base. This was the smallest test area, bordering a larger natural habitat. Thick vegetation made it difficult to monitor progress. The goats cleaned approximately 25% of the vegetation, primarily on the hill. Rain accumulated on the ground floor where other sumac resided, and the goats were soon moved to the next area.

Area 4. The final test zone was the north region of the southeast pond. This area is also a hill that meets with an adjacent fuel oil tank. It is not filled with cattails but instead consists of tall sumacs, shrubs, bushes, and common weeds. The goats could graze and then rest under the tall trees during hot days.

Approximately 40% of the area was cleared by the feeding. Another 10% was eliminated through sun exposure as the goats stripped the bark and ate the roots. Their preference for sumac reduced any significant impact on other shrubs; they would break the shrub leaves and pull on branches but would not consume them. The farmer became concerned for their nutrition and they only remained for one week.

Results. This trial seemed to benefit the facility, the goats, and the farmer. For the plant, approximately 40% of the overall vegetation in the test pond area was cleared. It was environmentally sound; the vegetation was reduced and nutrients were restored back to the ground. It was also economical, providing a sensible program with positive results. For the farmer, the goats were well fed and grew quickly.

Roadblocks were identifiable. Water in Areas 2 and 3 limited the goat herd’s potential to clear away cattails. Dislike for young, tall staghorn sumac in Areas 1 and 4 limited overall clearing of sumac. But the trial provided first-hand experience and useful data for future trials and programs. Many roadblocks (water accumulation, for example) can be reduced with early planning.

Goat trials will resume in summer 2018 at other plant areas.

Real-time optimization pushes out into distribution networks

By Team-CCJ | April 19, 2022 | 0 Comments

Optimization of the utility production and delivery system at the University of Texas, Austin (UT-Austin) is the poster child for a work in progress. After all, you don’t optimize a 90-year-old powerplant, one with a mix of really old boilers and state-of-the-art gas turbine/HRSGs, overnight or even in a few years. It’s also a glimpse into the new world of microgrid integration with utility grids, although you’d hardly call this one “micro.”

Five years ago, CCJ ONsite editors visited the Carl J Eckhardt Heating & Power Complex to report on the plant’s use of state-of-the-art optimization and analytics software and numerous major equipment upgrades and additions. The plant delivers electricity, chilled water, hot water, and steam to a campus of more than 160 buildings serving over 50,000 students.

The editors stopped in during a jaunt through central Texas in January 2018 to get some detail around presentations given by Associate VP Juan Ontiveros and Associate Director Roberto Del Real at last year’s Ovation™ Users’ Group Conference, held in Pittsburgh. “We’ve now optimized the distribution network, not just the production facilities,” they said.

Critical pieces of the most recent optimization effort are the upgrade of the Ovation Electrical Control Energy Management System (ECEMS), dozens of hardwired sensors, and a new chilled-water storage tank. The ECEMS replaces manual control of the four major electrical tie-lines and regulates the campus grid/municipal grid interface with these two overarching goals in mind:

    • Don’t exceed the campus’ 25-MW peak demand charge.
    • Maintain the grid interface at net zero without exceeding the responsiveness of the turbine/generators and while maintaining campus demand.

Simply, the controller monitors the purchased power during each 15-min demand period. The logic calculates the amount of purchased power (MW) in excess of the demand limit. When this number is greater than zero, it is the responsibility of the generator load-control program and load-shedding program to ensure that the demand limit is not exceeded.

Campus internal generation is increased by the amount calculated by this logic. This also gives the campus the ability to sell power back to the grid when the opportunity is presented. Real-time internal production costs and current grid market prices are used to constantly balance internal generation, purchased power, and power sales. At the same time, the ECEMS monitors hundreds of discrete load points with the capability to remove load in less than 100 milliseconds (ms).
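The demand-limit calculation described in the preceding two paragraphs can be sketched in a few lines. This is a deliberately simplified illustration; the actual Ovation ECEMS logic, interval accounting, and load-shed priorities are of course far more involved, and the numbers below are made up.

```python
# Simplified sketch of the 15-min demand-limit logic: purchased power
# above the limit must be covered by raising internal generation (or,
# failing that, by shedding load). Values are illustrative.

DEMAND_LIMIT_MW = 25.0  # campus peak-demand charge threshold

def generation_raise_needed(campus_load_mw, internal_gen_mw):
    """MW of extra internal generation (or load shed) needed so that
    purchased power stays at or below the demand limit."""
    purchased_mw = campus_load_mw - internal_gen_mw
    excess = purchased_mw - DEMAND_LIMIT_MW
    return max(0.0, excess)

print(generation_raise_needed(campus_load_mw=58.0, internal_gen_mw=30.0))  # 3.0
print(generation_raise_needed(campus_load_mw=50.0, internal_gen_mw=30.0))  # 0.0
```

In practice the same comparison runs against real-time production cost versus market price, so the controller also knows when over-generating to sell power is worth more than simply holding the limit.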

UT-Austin utilities production and delivery: Pertinent details

  • 140 MW of internal generation capability, 60 MW of peak demand.
  • 5 million gal of chilled-water storage capacity, two tanks, five chilling stations with 18 chillers total, six miles of piping.
  • 60,000 tons of cooling capacity, 33,000 tons of peak demand.
  • Four electrical grid ties to the municipal utility, 69-kV transmission feeds, 32 miles of electrical duct banks, internal electricity generated and distributed at 12 kV and 4160 V.
  • 18-million ft2 of building space served.
  • Two gas turbine/generators with HRSGs, four gas-fired boilers feeding four steam turbine/generators at 425 psig/710F.
  • 160-psig steam distributed through the campus heating grid.
  • Two independent gas main feeds into the plant.
  • Close to 1000 meters monitoring electricity usage, steam, chilled water, and domestic water.
  • Overall annual efficiency of 84% to 87% based on Btu generated divided by Btu purchased, making it one of the most efficient district heating and cooling systems in operation, and the first university to be certified PEER (Performance Excellence in Electricity Renewal) by the United States Green Building Council.

UT-Austin also has optimized the chilled-water grid so that it uses the least amount of pumping power to meet demand. This has reduced peak pumping from 70,000 gal/min to 40,000. Differential pressure (dP) between supply and return lines, measured at key buildings on campus, is updated in real time while the pumping speed gets adjusted to follow a low-dP set point in endless iterations with a rate-of-change of 0.25 dP/min.

This low differential pressure has the benefit of satisfying campus demand with less pumping power. The lower-speed flow set points also are optimized in other chilling stations while keeping one assigned for controlling dP. Pumps are equipped with variable-frequency drives (VFDs), which lower speed for maximum accuracy and controllability. Because pumping horsepower varies with the cube of rpm, reducing flow has a dramatic effect on power consumption. But it also has a positive impact on O&M: Over-pumping can lift the seats in the control valves and lead to other maintenance impacts.
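The cube-law claim above can be checked with the pump affinity laws: flow scales with speed, power with speed cubed. Applying that first-order relationship to the reported peak-flow reduction from 70,000 to 40,000 gal/min (and ignoring changes in system head and pump efficiency, which a real evaluation would include):

```python
# First-order pump affinity-law sketch: P ~ N^3 and Q ~ N, so the
# relative power after a flow reduction is (Q_new/Q_old)^3.
# Ignores system-head and efficiency changes; illustrative only.

def affinity_power_ratio(flow_new, flow_old):
    """Relative pumping power after a speed/flow reduction."""
    return (flow_new / flow_old) ** 3

ratio = affinity_power_ratio(40_000, 70_000)
print(f"Pumping power falls to {ratio:.1%} of original")  # ~18.7%
```

Even as a rough estimate, cutting flow by 43% leaving less than a fifth of the original pumping power shows why the dP-based speed control pays off so quickly.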

The plant is equipped with 18 electrical chillers but now, because of the optimization effort, occasionally only one chiller unit is needed during some winter periods to serve the entire campus. With the dramatic drop in peak cooling load came a drop in steam flow to produce the required electricity. The plant no longer has to over-deliver cooling at too low of a temperature in winter; now it is able to set the chilled water system at 44F, rather than 39F. This temperature reset follows outside air temperature linearly between 80F and 55F.
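The linear reset described above can be sketched as a simple interpolation. The pairing of endpoints—44F setpoint at 55F outside air and below, sliding to 39F at 80F and above—is our reading of the text, not a published control curve, so treat the numbers as assumptions.

```python
# Sketch of a linear chilled-water setpoint reset vs outside-air temp.
# Endpoint pairing (55F OAT -> 44F CHW, 80F OAT -> 39F CHW) is assumed.

def chw_setpoint_f(oat_f, oat_lo=55.0, oat_hi=80.0, sp_warm=44.0, sp_cold=39.0):
    """Chilled-water supply setpoint, F, reset linearly with outside air."""
    if oat_f <= oat_lo:
        return sp_warm   # mild weather: warmer water is sufficient
    if oat_f >= oat_hi:
        return sp_cold   # hot weather: full design supply temperature
    frac = (oat_f - oat_lo) / (oat_hi - oat_lo)
    return sp_warm + frac * (sp_cold - sp_warm)

print(chw_setpoint_f(55.0))   # 44.0
print(chw_setpoint_f(67.5))   # 41.5 (midpoint)
print(chw_setpoint_f(90.0))   # 39.0
```

Raising the supply setpoint in cool weather lifts chiller evaporator temperature, which is where the reported reduction in chiller (and hence steam) demand comes from.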

Electrical production efficiency is maximized when plant output stays below 70 MW, noted Del Real and Ontiveros. Campus peak load is 60 MW but the powerplant has a maximum production capacity of 140 MW. By adding a 6-million-gal chilled-water storage tank, they explained, they can “shift” 10 MW during the night, make and store chilled water, and “baseline” the most efficient gas turbine/generator. Cutting out a chiller in the summer is an easy way to shed electric load, with stored chilled water from the off-hours making up the difference.

Thus, another goal of the overall strategy is to run the most efficient turbine/generator, the LM2500 installed in 2010, eight months of the year, as opposed to four earlier.

The Ovation ECEMS controller gives the plant the capability to “island under any configuration of generators” (the defining characteristic of a “microgrid”). Said Ontiveros: “During islanding, load shedding becomes really important and the chilled-water storage acts like a big battery—it allows us to ramp the generators comfortably.”

Having 70 variable-frequency drives (VFDs) with inverters, as well as numerous other inverters operating in the system, brings its own unintended consequences: Harmonics and low power factor because of non-linear loads and inductive motors, respectively. The ECEMS includes an automatic VAR control system, which summons VARs out of the generators to maintain a minimum 95% power factor. Advisory control also is a key feature, helping the operator decide which capacitor bank should be brought online for voltage control. The city tie is kept at net unity power factor.
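The 95% power-factor target above implies a simple back-of-envelope calculation: given real power and current reactive power, how many MVAr must the generators (or capacitor banks) supply to reach the target? The values below are illustrative, not plant data.

```python
# Back-of-envelope VAR-correction sketch: MVAr that must be compensated
# so the power factor reaches a target value. Values are illustrative.
import math

def var_correction_needed(p_mw, q_mvar, pf_target=0.95):
    """MVAr to compensate (positive) so the power factor reaches target."""
    q_allowed = p_mw * math.tan(math.acos(pf_target))
    return max(0.0, q_mvar - q_allowed)

# 50 MW at 40 MVAr is roughly 0.78 power factor; correcting to 0.95:
print(f"{var_correction_needed(50.0, 40.0):.1f} MVAr")  # ~23.6 MVAr
```

The ECEMS makes essentially this trade continuously, splitting the correction between generator excitation and staged capacitor banks.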

Said Del Real: “The grid connection is what controls our grid frequency. If separation from the city grid occurs, and if the load were to exceed generation, we would then need to shed load fast enough to ‘beat’ the disturbance, requiring a 100-ms response.”
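One common way to meet a deadline that tight is a precomputed priority table: which feeders to trip is decided ahead of time (and continuously refreshed), so the real-time action is just opening pre-armed breakers. A hypothetical sketch of the selection step, with made-up feeder names and ratings:

```python
def arm_load_shed(gen_mw, load_mw, feeders):
    """Pick the lowest-priority feeders whose combined load covers the
    generation deficit. feeders: (name, mw, priority) tuples, where a
    higher priority number means 'shed last.' All values are hypothetical."""
    deficit = load_mw - gen_mw
    to_shed = []
    for name, mw, _prio in sorted(feeders, key=lambda f: f[2]):
        if deficit <= 0:
            break
        to_shed.append(name)
        deficit -= mw
    return to_shed
```

With 12 MW of generation against an 18-MW load, for instance, the sketch arms the two lowest-priority feeders, leaving critical loads untouched.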

Thus, the ECEMS continuously analyzes and updates campus electrical demand, advises the operator on when to stage capacitor banks in and out of service, monitors generator load profiles, and regulates city-purchased power while maintaining appropriate power factors in a system made more delicate by the presence of dozens of sensitive loads.

Views into the “real-time energy dashboard” built into a campus mapping app allow operators to drill down to the building level (though buildings are not sub-metered), identify meters that are not working, monitor all utility flows, and even auto-text outage information to building custodians. Meters are also used to automate billing to individual customers.

On the plant management side, another consequence is that the facility is saving significant money on fuel because it operates at optimized production and delivery points, but fixed costs are higher because of the need to maintain the new equipment and demand-side strategies. As with many owner/operators, campus administration loves the cost savings, but investing them back into the facility? Not so much.

Yet the performance quest continues as a work in progress. Del Real said they are investigating transformer efficiency monitoring and “right-sizing” transformer capacity (transformers tend to be oversized) as the next steps in optimization.

Doosan takes a high-profile position in aftermarket services

By Team-CCJ | April 19, 2022 | 0 Comments

In today’s topsy-turvy world of electric-power generation and delivery anything can happen, and it does. Recall that only a couple of years ago, the OEM with the largest position in the generation sector purchased a major competitor and tried to take the industry’s aftermarket business by storm—worldwide. That might have led some to conclude that other companies would shy away from investments in outage, maintenance, and repair activities and that there would be fewer options for powerplant owner/operators.

Not true. One example: Doosan Heavy Industries & Construction Co (Doosan), a 123-year-old company with deep ties in power generation, purchased privately held ACT Independent Turbo Services July 26, 2017, renaming it Doosan Turbomachinery Services (DTS) and establishing an aftermarket presence in North America to serve the world. The Korean firm did not have a formal gas-turbine repair group before acquiring ACT.

Doosan had a small footprint in the North American electric-power market at the time, which is why you might not be familiar with the company. But it has been a force in Asia, Europe and the Middle East for decades. Doosan may have been best known here for its fossil-fired boiler and combustion systems expertise and products gained through the company’s acquisition of Mitsui’s Babcock Energy Ltd in 2006.

More than 90% of Doosan’s $16.4-billion revenue stream (2016) is generated by its infrastructure support activities divided among six business groups: EPC, turbine/generator, nuclear, power service, water, and castings and forgings. Doosan Turbomachinery Services reports directly to James Kim, director of the gas-turbine business unit in the turbine/generator business group.

Gas turbines. Doosan has a gas-turbine manufacturing history that dates back to 1991, when it was licensed by GE for its Frame 6B. In 2007, it also was licensed to produce gas turbines by Mitsubishi Hitachi Power Systems for its M501F and M501G.

Today the company is transforming itself into an OEM, expecting to introduce a 270-MW (60 Hz) engine to owner/operators in 2022. Full-scale/full-load tests are scheduled for mid-2019 to mid-2020 in Doosan’s Changwon (Korea) manufacturing facility. Development of a 340-MW (60 Hz) H-class engine is well underway with design completion anticipated about a year after commercial introduction of the F-class machine. Previously the company developed and tested a 5-MW gas turbine/generator which was offered commercially in 2009.

Changwon is an “A to Z” manufacturing facility for gas turbines, as it is for steamers, handling everything from the production of castings and forgings through engine assembly and testing. A big plus for gas-turbine customers is the facility’s capability to manufacture turbine and compressor discs for all legacy engines. Inconel discs are under development.

Changwon’s hot-parts shop, equipped for manufacturing and repairs, does coatings, heat treatment, machining, welding, inspection, etc. The full-speed/full-load test rig has more than 3000 sensors to verify gas-turbine operational stability, structural integrity, and performance.

DTS, which opened its doors as Advanced Combustion Technology in 1996, expanded dramatically in both size and capability in the second decade of the new millennium and today has a global footprint among the industry’s leaders in aftermarket gas-turbine services (Sidebar 1).

1. Milestones in DTS’s history

  • 1996—Advanced Combustion Technology (ACT) opens a 35,000-ft2 shop dedicated to the repair and rejuvenation of mature-frame nozzles/vanes, blades/buckets, transitions, liners/baskets.
  • 2006—Fuel-nozzle flow test and repair facility added.
  • 2010—ACT was sold to private-equity firm and renamed ACT Independent Turbo Services.
  • 2011—Billy Coleman named CEO and president; certified as a Tier-1 advanced F-class component repair facility; 6-axis rapid-hole EDM, 3-axis sinker/ram EDM, and Schenck moment weigh machine commissioned.
  • 2012—Phase 1 rotor shop opened in La Porte. Three Schenck balance machines and a Schenck-ESI static disc balancer installed.
  • 2013—Certified to ISO 9001/OHSAS 18001; Phase 2 rotor shop expansion completed and 100-ton crane added; state-of-the-art stacking pit commissioned.
  • 2014—Repair/upgrade solution developed for 501F two-piece exhaust system.
  • 2015—Phase 3 rotor shop expansion completed; certified as a Tier-1 F-class rotor shop.
  • 2016—Consolidated operations at La Porte.
  • 2017—Two coating booths commissioned; Doosan acquired ACT and renamed it Doosan Turbomachinery Services.
  • 2018—Discs for 7EA manufactured.

Future plans include a cold-rotor coating facility and final expansion of the rotor building to 420 ft long.

Industrial gas turbine (IGT) component repairs were the company’s bread-and-butter until 2012. The editors visited the original shop of the founding owner during a user-group meeting in the mid-2000s and the expanded facilities in the same location near Houston’s William P Hobby Airport early in this decade after the company was purchased by an investment group.

The new, high-tech La Porte shop, toured by the editors in February 2018, dwarfs its predecessors with more than 100,000 ft2 of production facilities on an 11-acre site with room to expand (Sidebar 2). ACT had begun transitioning to La Porte under the direction of industry veteran Billy Coleman in 2012 with the opening of what it called its Phase 1 rotor shop.

2. La Porte facilities

Doosan’s La Porte complex is divided into five active shop areas, with a sixth set aside for expansion when necessary:

  • Rotor bay with dual 30-ton cranes.
  • Rotor high-bays 1 and 2, each with a 100-ton crane and dual 30-ton hooks.
  • Component repair shop with 30-, 10-, and dual 5-ton cranes.
  • Thermal spray coating area with dual 30-ton cranes.

Particularly important to gas-turbine owner/operators are the rotor and heavy-mechanical capabilities added as part of ACT’s expansion in the 2011 to 2017 period before the sale to Doosan, and since. Today, the company has a Tier-1 F-class rating by major power producers and ISO 9001 and OHSAS 18001 certifications.

The La Porte team’s accomplishments in rotor repair are a source of great pride at DTS. Forty GE rotors, Frame 3s through 7FAs, have been repaired there already—including five 7FAs in 2017, two with complete unstack/re-stack (Fig 1). One rotor had a long history of problems and was running with up to 6 mils vibration. Repairs reduced that to 1.2 mils, validating staff capabilities.

Doosan’s first 501F rotor was at La Porte during CCJ OnSite’s visit. Replacement of the torque tube and air separator were among the action items. Other noteworthy 501F work involves the exhaust system where issues abound, at least according to presentations and discussion at user-group meetings. Seven two-piece exhaust systems, overhauled and upgraded with Doosan’s zero-hour mod, have been returned to service (Fig 2).

The management team (Fig 3) stressed DTS’s wide range of repair experience on both new and mature engine technologies and its work in developing innovative engineered solutions to solve chronic issues facing users. The following stops were included in the shop tour:

    • Component repair.
    • Rotor unstacking/restacking, de-blading/re-blading.
    • Heavy mechanical—exhaust sections, for example—including rounding and jacking capability, dimensional correction, weld repairs, heavy-duty lifting beams and rigging, specialized fixtures.
    • New parts, including 7EA and 7FA rotor discs.
    • Cleaning and surface treatment conducted in large drive-through blast booth.
    • Rotor balancing up to 175,000 lb.
    • Thermal spray booth.
    • Machining centers with lathes, boring mills (vertical and horizontal), blade tip and surface grinder, etc.
    • Electrical discharge machining (EDM) and CNC.
    • Welding booth.
    • Heat treatment.
    • Sonic-nozzle flow testing.

‘Self-inflicted logic forcing’ associated with the Mark V remains a mystery

By Team-CCJ | April 19, 2022 | 0 Comments

A year has gone by since Abel Rochwarger, chief engineer at Gas Turbine Controls (GTC), shared with CCJ ONsite’s editorial team his report on an incident in which the Mark V control system on a GE F-class gas turbine inexplicably shut down all the unit’s lube-oil pumps, causing extensive damage. Rochwarger said the customer’s team investigating the incident found that the malfunction was “logic forcing without operator intervention”—hence the term “self-inflicted logic forcing.” CCJ’s editorial staff followed up with Abel for an update just ahead of the 7F Users Group’s annual meeting, where GTC will be exhibiting Wednesday, May 9.

The OEM obviously took notice of the Mark V malfunction: Sixteen days after CCJ ONsite’s publication, it released Product Service Information Bulletin (PSIB) 20170519A, “Mark V Communication Interface Overload—Loss of Lube Oil.” According to the PSIB, the OEM’s team simulated the site conditions in a laboratory environment and was able to confirm the self-inflicted logic forcing.

In Rochwarger’s opinion, the tests made a positive contribution to the collective knowledge—for example, by dispelling the suspicion of a possible cyber attack. But they also proved, he said, “there is a bug buried deep—and latent—in the core of the control system, which under some conditions, will manifest itself as it did in the incident described last year.” An analogy, in PC-user’s terms, the chief engineer continued, “If the Mark V configuration and sequence is the software, the bug lurks in the operating system.”

Rochwarger believes the bug is likely to remain, because the OEM probably cannot allocate any valuable engineering resources to eliminate a bug in a mature and discontinued product like the Mark V. Rochwarger recommends that users familiarize themselves with the PSIB, which provides a list of guidelines to prevent, in Rochwarger’s words, the “haywire scenario.”

So, what should a prudent operator realistically consider to “quarantine” the bug? First, suggests Rochwarger: Assess the risk. The PSIB provides excellent guidelines for doing this, he says. Second: Evaluate the situation with the incumbent parties, and define the appropriate protective measures for the site. It should be noted that, for many Mark V operators, a very reasonable conclusion for their situation will be that no action is required. After all, the Mark V has been running in hundreds of plants, with hundreds of thousands of successful operating hours since the early 1990s, and last year’s incident was the first self-inflicted-logic-forcing event ever registered. Third (if required): Implement PSIB recommendations that apply, and consider eventually adding some “foolproof,” hardwired protective measures.

The primary concern is, at a minimum, to keep the DC emergency pumps (lube and seal oil) running even if the Mark V goes “haywire.” This can be accomplished by implementing certain hardwired logic modifications in the motor control center (MCC). In the case of gas turbines, an additional modification can be implemented to ensure that the control sequence that cycles the DC emergency pumps in case of a complete AC failure stays intact.

Rochwarger told the editors that GTC’s service team can provide an assessment, and develop, implement, commission, and test these modifications to ensure the foregoing protection measures are satisfied. He estimated it would take about four days onsite during a shutdown to implement the hardwired mods.

Finally, the editors asked GTC’s chief engineer if the Mark V bug might have carried over to subsequent versions of the system; Abel wouldn’t hazard a guess, and commented, “This event took us all, literally all, by surprise. And, although this is a question for GE, since it happened over a year ago, knowing first-hand GE’s prudent and conservative approach, combined with their commitment to the highest quality standards, we would expect that they carried out their due diligence to ensure the integrity of their newer products. So, without any further communications, the logical conclusion is: no news is good news.”

The future is nothing without the past

By Team-CCJ | April 19, 2022 | 0 Comments

It’s no secret that many of the electric power industry’s most capable engineers and technicians have retired recently or will do so in the next couple of years. But not Dave Lucier, founder and general manager of PAL Turbine Services LLC, headquartered near GE’s Schenectady works. He recently celebrated 50 years of service to the industry and has no intention of calling it quits.

Gas turbines and Lucier have been “connected” almost since these machines were first used to generate power in the US. He was introduced to GTs in the early sixties by the mechanical engineering faculty at UMass Amherst, graduating only a few months after the Northeast Blackout in November 1965.

GE hired Lucier and enrolled him in its fledgling field engineering program. The crush of new business demanded capable, travel-flexible engineers for installs, startups, overhauls, and troubleshooting. It was a fossil steam world at that time and very few in the industry knew much about gas turbines. The OEM’s Schenectady school turned out some of the finest GT field engineers in the world.

Lucier is passionate about gas turbines and recently shared his knowledge with the editors on the first three decades of their development, focusing on GE engines for power generation. The beginning: In June 1949, Oklahoma Gas & Electric Co launched the gas-turbine era on this side of the Atlantic by starting up a 3.5-MW GE Frame 3 at the utility’s Belle Isle Station.

What’s interesting about Belle Isle is that the gas turbine was part of a combined cycle; it was not a simple-cycle machine as you might have expected. Also interesting was that this and other early combined cycles married GTs and conventional boilers, most often the gas turbines substituting for forced-draft fans. At Belle Isle, the oxygen-rich exhaust gas was used for preheating feedwater.

The first pre-engineered combined cycle integrating the Brayton (gas turbine) and Rankine cycles (HRSG and steam turbine) was installed by the Ottawa (Kans) Municipal Electric Dept in 1967. This 11-MW single-shaft unit, powered by a GE Frame 3, is still in service today. About a year later, when Lucier graduated from the GE field engineering program, the first STAG™ (for STeam And Gas) combined cycle equipped with a Frame 5 was started up by the Wolverine Electric Power Supply Co-op Inc in Cadillac, Mich. This unit, also still active, was rated 21 MW.

Gas turbines not just for power generation. The turbojet engine came into prominence just as World War II was ending. Renowned British design engineer Sir Frank Whittle developed an engine in the postwar period that is reminiscent of early GE multi-combustor gas turbines. GE and others were looking for applications for this new technology in the late 1940s, and Whittle was said to have conferred with engineers in Schenectady during the war.

There were at least three obvious commercial applications for gas turbines: planes, trains, and automobiles. Companies like GE, Pratt & Whitney, and Rolls-Royce responded to meet the demand for jet engines in global air travel. The Union Pacific Railroad experimented with gas turbine/generators on about 50 locomotives operating in the West, beginning in 1948. Power was generated for the drive motors on the wheels. But the high-frequency whine from the compressors was viewed as unacceptable when passing through cities and towns. Compressor noise attenuation technology came later, when gas turbines were applied to land-based powerplants.

The gas turbine also had some success in automobile applications. The first GT-powered auto Lucier remembers was the Jet One roadster, introduced in the early 1950s in England. Poor gas mileage was an early issue for those wanting to drive long distances. It wasn’t until the regenerative cycle was refined for auto applications that the idea of a gas-turbine-powered car became more acceptable. Even so, compressor whine was objectionable to most street observers as GT cars passed by.

On the oval, the Granatelli-sponsored gas-turbine race car nearly won the Indianapolis 500 in 1967. A failed part in the transmission (nothing to do with the engine) took it out of contention after it had led nearly the entire race. Because it didn’t win, few people remember just how competitive the turbine-powered car was. Another reason is that the rules were changed the following year, so turbine cars couldn’t meet the displacement requirements versus piston engines.

The first “peakers.” In the small Vermont town of Rutland, circa 1951, three innovative Frame 3 gas turbine/generators were installed under one roof. Rated only 4 MW each, the intercooled, regenerative-cycle, two-shaft engines with fuel regulator controls would start on No. 2 distillate oil and transfer to a heavier fuel oil during operation. The final unit was retired in the early 1990s.

The first “bubble.” In the early 1960s, GE was selling only a handful of Frame 5 so-called “package power plants” annually. Then came the Northeast Blackout, and as the 1960s came to a close, the OEM was shipping over 100 units per year from Schenectady. The power industry recognized the GT as a good (and necessary) product for black-start, emergency, and peaking applications. Installation time was short once a flat foundation with rebar and anchor bolts was poured and cured. Some units became operational within a few months.

Larger gas turbines were desired, but GE didn’t have one—yet. Instead, the OEM sold Frame 5s in two- and four-unit power-block configurations to electric utilities from Brooklyn to San Diego. A few examples: Chicago’s ComEd bought 68 units, New York’s ConEd 48, Washington’s Potomac Electric 16, and San Diego Gas & Electric eight. ComEd put its units around Chicago and in nearby towns; Potomac Electric installed its GTs to provide emergency power for the Capitol and White House.

Unique among these early projects was the 32 ConEd Frame 5s installed on four barges in Gowanus Bay—factory-assembled and pretested. Each eight-unit barge was equipped with a control house, power transformer, and black-start capability. The 32-engine installation could deliver 622 MW when operating on distillate oil with 95F inlet air. Four captive 1-million-gal fuel-oil barges moored to the turbine barges enabled full power production for 50 hours with 40F ambient air.

After the blackout, it took GE about five years to develop and manufacture the first Frame 7A in Schenectady. This prototype was sold to Long Island Lighting Co. Problem was, the Frame 7 didn’t have a diesel starting engine, so it wasn’t a black-start machine. Nor was it capable of “dead-bus” starts. But it did approach the targeted output.

Subsequent Frame 7B, 7C, and 7E gas turbines were made at GE’s new facility in Greenville, SC, beginning in the early 1970s. Frame 5s were manufactured in Schenectady into the late 1980s.

In 1978, GE-Schenectady developed the Frame 6B, looking to capture an emerging co-generation market. Those applications were located near a “steam host”—a plant requiring steam for process. Once the pressure dropped to a useful level (say 200 psig), it was used in the host’s process. Pressure reduction often was accomplished using a small steam turbine/generator, adding to the plant’s electric production capability. Cycle efficiency approached 50%.
