The most recent HRSG Forum, conducted virtually July 21, 2022, broadly addressed high-pressure (HP) evaporator deposits, and specifically under-deposit corrosion (UDC), a leading cause of HRSG tube failures (HTF).
NEXT VIRTUAL HRSG FORUM: OCT 13
The first presentation, by Barry Dooley, Structural Integrity Associates Inc, focused on why UDC occurs, how to detect and analyze it, and its role, along with other mechanisms, in contributing to HTF, and explained why and when removing these deposits is important. The second presentation, “So I Need to Chemical Clean My HRSG. . .Now What?” delivered by Doug Hubbard, an independent consultant, was chock full of practical guidance for the onsite user community. Both presentations are available for viewing on the HRSG Forum website and below.
Hubbard’s years of experience, with one of the nation’s largest electric-utility powerplant owner/operators, were etched into every slide, wisdom suitable for a laminated pocket guide of bullet points one pulls out during all stages of chemical cleaning.
Example: “Waste disposal can be up to half the cost of the cleaning,” he said. Those 20,000-gal frac tanks that the cleaning contractor had delivered to your site? “Inspect every one of them,” Hubbard urged, “and make sure they were thoroughly cleaned, and if not, send them back.” If you don’t, then you just took ownership of any nasty residue from the last place those tanks were used, such as the oil/gas fields.
Good questions to ask: What do you do if you fill up your frac tanks? How do I handle the material if it is deemed hazardous waste? How do I manage a spill after the tank leaves the site? To think through these scenarios, Hubbard doesn’t just say to “involve” your environmental subject-matter expert; that person must be “intimately involved” in the entire project, from initial planning to project closure.
Resist the temptation to cut a few hours from the cleaning process, even if the higher-ups are pressuring you in the interest of getting back online quickly. In fact, Hubbard recommends doubling the time for chemicals to circulate from what is indicated by the static test (usually done by your contractor)—for example, from six to 12 hours—especially if it’s a cleaning for UDC and hydrogen damage.
You’ve just spent extensive man-hours and dollars planning the cleaning, installing temporary piping, pumps, heat exchangers, etc—and now you want to save three hours of outage time and risk subverting the result? Where’s the logic?
One good reason to allow ample margin in circulation time: You won’t really know if your tube sample is representative of all the corrosion deposits on the HP-evaporator tube surfaces.
You must ensure that the unit is “fully flushed” to avoid an aggravating startup. Passivation of the unit, done chemically after the cleaning, provides only weak protection of the surfaces until the system is restored during a normal startup. For this reason, the unit must be returned to service immediately after cleaning and run at full drum pressure for at least 76 hours (an empirical value from Hubbard’s experience) to maintain the passivation layer.
It is critically important to “hydro” the unit prior to circulating the chemicals, he warns. “Leaks are manageable when you are passing only water through; they don’t get better as cleaning proceeds,” Hubbard noted wryly. Don’t assume your valves will hold either, he added. If you do have a leaking valve during cleaning, remember to flush the other side.
Three types of cleanings delineated by Hubbard are (1) pre-operational (to remove mill scale, cutting oils, etc), (2) high-deposit iron removal, and (3) UDC/H₂ damage cleaning. Each cleaning has its own characteristics with respect to solvent selection, circulation times, discharge material, etc. Cleaning time will be appreciably higher if you have any corrosion cells.
Determining which level of cleaning you need is based on analysis of the deposits and static and dynamic tests with the deposit material. A static test involves taking a 1-in.-tube cross section, placing it in a beaker with solvent and stir bar on a hot plate, and timing how long it takes for the deposit to break up and fall to the bottom. In a dynamic test, the tube sample is placed in a test loop and solvent is pumped around to determine how long it takes for the corrosion products to break up.
Cleaning generates hydrogen gas, so make sure the system is properly vented of gases and air at the high points at least every hour. An “enormous quantity of fumes can get into the steam turbine,” he cautioned, if you don’t backfill the superheater with ammoniated condensate.
Other bullet points for that pocket guide:
- Don’t assume all the deposits are iron, even if that might be the case the majority of the time.
- Flows should be between 2 ft/sec (minimum flow for the cleaning solvent) and 5 ft/sec (maximum flow for the corrosion inhibitor).
- Color-code a system flow diagram, mark dead legs, valves, and the cleaning path (fluid entry point and exit point).
- Walk down the system as the cleaning proceeds.
- Cover storm drains.
- Add 5-10 deg F to the cleaning process temperature as extra margin; remember that the fluid temperature is not the tube surface temperature.
- Pull samples while cleaning is taking place to “check on what’s going on.”
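Several of Hubbard’s rules of thumb lend themselves to a quick pre-job sanity check. A minimal sketch: the 2- to 5-ft/sec band comes from the bullets above, while the function names, flow rate, and pipe size are hypothetical examples, not anything Hubbard presented. It converts a pump’s flow rate to average velocity in the temporary cleaning piping and flags values outside the band:

```python
import math

def velocity_fps(flow_gpm: float, pipe_id_in: float) -> float:
    """Average velocity (ft/sec) for a given flow (gal/min) in a pipe
    of the stated inside diameter (inches)."""
    area_ft2 = math.pi * (pipe_id_in / 12.0) ** 2 / 4.0  # flow area, ft^2
    flow_cfs = flow_gpm / 448.831                        # 448.831 gpm = 1 ft^3/sec
    return flow_cfs / area_ft2

def check_cleaning_flow(flow_gpm: float, pipe_id_in: float) -> str:
    """Flag velocities outside the 2-5 ft/sec band cited above:
    below 2 ft/sec the solvent flow is too slow; above 5 ft/sec
    the corrosion inhibitor's limit is exceeded."""
    v = velocity_fps(flow_gpm, pipe_id_in)
    if v < 2.0:
        return f"{v:.1f} ft/sec: below the 2-ft/sec solvent minimum"
    if v > 5.0:
        return f"{v:.1f} ft/sec: above the 5-ft/sec inhibitor maximum"
    return f"{v:.1f} ft/sec: within the 2-5 ft/sec band"

# Hypothetical case: 400 gpm through 6-in. Sch 40 temporary piping (ID ~6.065 in.)
print(check_cleaning_flow(400, 6.065))  # → 4.4 ft/sec: within the 2-5 ft/sec band
```

The same check can be rerun for each temporary-piping run, since velocity changes wherever the pipe size does.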
Evaporator failure mechanisms. Dooley’s slides first placed UDC in the context of other leading chemically induced HRSG reliability issues and contributors to failures, then went on to show resplendent examples of UDC, UDC with acid phosphate corrosion (APC), and pictures of deposits under various treatment regimes.
Using the term “Repeat Cycle Chemistry Situations (RCCS),” Dooley explained that damage and resulting failures arise when a plant has experienced two or more of these RCCS: Corrosion products, HRSG evaporator deposition, contaminant ingress (no reaction), drum carryover, lack of shutdown protection, inadequate alarmed on-line instrumentation, not challenging the status quo, non-optimum chemical cleaning, and high level of air in-leakage. RCCS analysis methodology is based on over 260 plant assessments worldwide.
If you want to benchmark your plant’s situation, Dooley’s slides also include categories within several of the nine RCCSs listed above. For example, under “contaminants,” Dooley notes “high number of condenser tube leaks” and “continuing to operate when contaminant levels exceed action and shutdown limits.”
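Dooley’s two-or-more threshold is easy to self-assess. The sketch below is illustrative only: the nine category names are taken from the list above, but the `assess` helper and the sample answers are hypothetical, not part of Dooley’s methodology.

```python
# The nine Repeat Cycle Chemistry Situations (RCCS) from Dooley's presentation
RCCS = [
    "Corrosion products",
    "HRSG evaporator deposition",
    "Contaminant ingress",
    "Drum carryover",
    "Lack of shutdown protection",
    "Inadequate alarmed on-line instrumentation",
    "Not challenging the status quo",
    "Non-optimum chemical cleaning",
    "High level of air in-leakage",
]

def assess(answers: dict) -> list:
    """Return the list of RCCS the plant reports experiencing."""
    return [name for name in RCCS if answers.get(name, False)]

# Hypothetical self-assessment for an example plant
found = assess({"Drum carryover": True, "High level of air in-leakage": True})
print(f"RCCS present: {len(found)} of {len(RCCS)}")
if len(found) >= 2:
    print("Two or more RCCS: the damage/failure pattern Dooley describes applies.")
```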
The balance of the recorded presentation discusses why taking tube samples is critical, how to analyze tubes for deposits and damage, and what the morphologies look like under close examination by high-powered lab instruments. A detailed chemical and physical analysis of the deposits and their gradations, distribution, and loading, along with tube condition, is required before determining how to mitigate or prevent damage in the future. Several slides of tube samples show the consequences of poor chemical cleanings and/or application of film-forming substances.
During the Q&A, Dooley noted that 65% of the plants assessed experience online-instrumentation issues, but also that operators often are not trained to recognize and respond to an exceedance event.
Technical guidance documents are available from IAPWS at no cost for many of the topics Dooley addressed.
With outage season in full swing, Greg McAuley, CTO of TRS Services and lead consultant at sister company GMW Consulting, shares his decades of industry experience below in the post-outage checklist he developed for owner/operators of gas and steam turbine/generators. Post-outage reviews and records are particularly important, McAuley says, because you already paid for the knowledge gained and don’t want to repeat any errors in judgement that might have been made.
Priority No. 1 after celebrating your outage success is to ensure all vendors—in particular, those service providers participating in the inspection, repair, upgrade, etc, of HRSGs, high-energy piping systems, and gas and steam turbine/generators—have completed and submitted their outage reports while memories are still fresh. McAuley suggests you have the reports within a month or two after work is completed.
Reports received, the next step is obvious: Read the reports thoroughly, jotting down questions for follow-up with vendors, noting where clarifications are necessary. Next, load pertinent data into your CMMS and other systems/databases. Review recommendations made by service providers, factoring pertinent comments into the plan for your next outage.
In parallel with your outage review, ensure all components removed from plant equipment are accounted for and their locations are known. Aim for completing the post-outage inventory in less than six months. The timeline may sound generous but it really is not, because the following work is involved:
- Repair versus scrap assessment. Segregate parts damaged beyond repair, and those that have reached their retirement dates based on service hours or starts, from the repairable spares.
- Schedule repairs. Decide what parts to repair, and when. McAuley suggests that if you do not have a risk-based inventory management plan, this would be a good time to develop one. Keep an open mind on schedule: Depending on inventory and power demand, you may want to delay or accelerate repairs of critical spares.
- Develop the necessary RFQs.
- Evaluate the quotes received and award inspections. McAuley notes that peer and/or consultant review can help here because no two repair vendors think or act alike.
- Follow the repair effort closely, paying particular attention to assure specifications and dimensions are met and verified by internal or external experts.
- Return repaired parts to inventory.
- Don’t forget the consumables. A list of consumable hardware requiring replenishment should be provided by the maintenance contractor or warehouse supervisor. This can be done on the same time schedule as capital parts.
Safety. The goal for incorporating safety lessons learned into the plant’s procedures should be three months or less. Pay particular attention to the following:
- LOTO issues.
- Performance of dedicated safety personnel.
- Rigging and lifting assessments; factor findings into future lift plans.
- Results of safety audits.
- Results of toolbox meeting audits.
- Communicate audit results and resulting procedural changes to staff.
Outage performance. Review how everyone performed during the outage and how plant and contractor personnel can improve going forward. Budget up to three months for this effort.
- Review and discuss the outage scorecard.
- Reassess manpower requirements.
- Identify bottlenecks encountered.
- How did safety impact schedule?
Outage financial analysis. Four to six months typically is adequate for this segment of the outage evaluation.
- Quantify and qualify extra work: Should it have been part of the original work scope?
- Manpower loading and tooling: Could the work have been done with fewer people? Would more personnel have been better?
- Quantify surprises: Could they have been avoided by spending less up front than they ultimately cost?
Post-outage data analysis. Findings can impact several of the other analyses being conducted, so this should have a high priority. Aim to have this effort finished within a month after outage completion.
- Remaining or new equipment issues—for example, oil or air leaks.
Planning the next outage. Begin this task immediately and never stop.
- Base work scope on the type of outage.
- Identify additional scope from the results of the outage just completed.
- Incorporate safety and other lessons learned in RFQ documentation.
- Develop RFQs while experiences are fresh in everyone’s mind. Waiting contributes to forgetting and missing opportunities.
- Modify your plans as new information/data is uncovered or produced.
What do you do when your gas turbine experiences a forced outage (FO), or as Sulzer’s Jim Neurohr and Michael Andrepont put it, a “bump in the night”? Hopefully, your plant already has a plan in place for such emergencies, and the presenters hope it includes calling Sulzer and putting its full-service shop in LaPorte, Tex, to work on your behalf.
The underlying message of the webinar, “A Bump in the Night: GT Forced-Outage Response,” hosted by CCJ Online, July 28, 2022, was to dramatize the benefits of responding to the FO by contracting one service firm with a single point of contact between site/owner and the shop, rather than trying to manage multiple service firms.
Neurohr and Andrepont imagined the emergency trip of a 7EA machine operating at baseload in a combined cycle, with significant damage to the compressor’s fifth stage, and collateral damage to downstream components, discovered through borescope inspection after the machine cooled down. The two then proceeded to illustrate every major step (sidebar), and many minor ones, between exclamations of “What the heck” in the control room and a gleaming, fully repaired (and perhaps even upgraded) rotor returned to the site weeks later.
10 critical steps in Sulzer’s inspection/repair process
- Receive parts, record, and ID
- Perform incoming visual and dimensional inspections
- Grit blasting/strip coating
- NDT components
- Generate repair quote
- Pre-weld heat treat, NDT, weld prep
- Weld repair, NDT, dimensional check
- Post-weld heat treatment, NDT
- Final dimensions, customer witness
- Prep, ship, provide Turbodocs
The photo journey depicted in the presentation includes lifting of the casing top, onsite data and evidence preserving and gathering, lifting and transporting the rotor, in-shop rotor inspection and evaluation (turbine and compressor), rotor unstacking, root-cause failure analysis (RCFA), rotor overhaul plan and commercial proposal, repairs, coatings, replacement parts, rotor rebuild, rotor bolt stretching, cooling flow testing, final balancing, the findings of the extensive metallurgical inspection and analysis and RCFA, install of the refurbished rotor, and testing and tuning prior to the machine’s return to commercial service.
As part of a response to an attendee question, the presenters noted that they can assist the owner/operator in working with their insurance carrier to cover a non-OEM refurbished machine.
Participants also were treated to a veritable tour of the LaPorte service facility, first built in 1973, which now boasts 500 employees and a full complement of state-of-the-art inspection, evaluation, repair, coating, testing, and balancing facilities.
These days, replacement parts can be difficult to procure, especially one-offs. Sulzer stocks many of the replacement parts, is increasing its on-hand inventory, and/or can manufacture them on short notice to keep the project moving forward. Upgraded components “in many cases can run longer than OEM recommended intervals,” the pair said.
Most of the questions were practical in nature, but not readily answerable because each FO or turbine wreck is unique. RCFAs are time consuming and involve “heavy analytics,” the specialists noted. Not all RCFA findings are “conclusive,” but “generally, we’ve seen whatever failure mode it is before.”
In a response to a question about fuel nozzle and combustor damage, the Sulzer duo reported that, year to date, three 7EAs have experienced failures of transition pieces in DLN units, new failure modes are being observed, and nozzles are wearing faster and showing new damage indications.
In addition to the best practices published on CCJ Onsite earlier this week, here’s a new batch of ideas owner/operators of any gas turbine can employ at their facility.
ESSENTIAL POWER NEWINGTON (EPN), managed by Tom Fallon and operated by Cogentrix Energy Power Management, is a merchant facility that runs primarily on natural gas, and over the past decade has transitioned from consistent daily cycling to more seasonal operation. The plant’s 10-cell mechanical-draft cooling tower takes brackish makeup water from the tidal Piscataqua River. Two 50%-capacity main circulating-water pumps serve the steam turbine’s main condenser and Newington’s closed cooling-water system. These pumps are protected by primary and secondary removable debris screens located between the tower basin and the pump pits.
During long offline periods, EPN was experiencing more frequent and more severe fouling of the circ-water pumps’ debris screens than previously. Plus, staff observed appreciable sediment and microbiological fouling of the condenser. Further, during online periods, especially on hot summer days, condenser backpressure increased, reducing steam-turbine output when most needed.
The site’s debris-screen cleanings require contracting for mobile-crane and vacuum-truck services—an added cost, along with the increased potential for personnel injuries during the cleaning process.
Root cause of the fouling: Excessive biological growth in the cooling-tower basin and circ-water system. The site’s policies and procedures had been to dose the basin with sodium hypochlorite while the plant was online. But with operation curtailed, personnel realized a different approach was needed.
A comprehensive study determined the existing online dosing regimen needed augmentation with offline dosing. This would allow the presence of a measurable chlorine residual in the circ-water system when EPN was not operating. Subsequently, personnel began a thorough review of the procedural, administrative, and engineering changes required to allow for the successful implementation of new practices.
Of primary concern was ensuring that no measurable amount of chlorine would be discharged inadvertently (a violation of the plant’s environmental permit) over a potential offline period of several days and multiple shift changes.
Operationally, this presented some very real concerns and adjustments: Cooling-tower basin level would have to be reduced significantly during offline dosing to accommodate the plant’s approved low-volume waste streams; permit-limited tower salinity would have to be kept at manageable levels without the option of tower blowdown; and sodium hypochlorite dosing would have to be more conservative. All this while keeping Newington ready to run.
The operating procedure was updated to account for the new shutdown dosing regimen, along with an associated checklist that remains active for the dosing period. This allows site personnel to identify where they are in the process at any moment, and provides guidance on the next steps.
Conspicuous signage now is displayed at the control-room console prohibiting the operator from opening the tower blowdown valve from the commencement of dosing until free chlorine is below the detectable limit, which might be days later.
In addition to these administrative controls, an engineering control was implemented through the use of a DCS software lock (inhibit function) on the cooling-tower blowdown valve, which is enabled at the start of a dose, creating an additional step to reduce the likelihood of inadvertently opening the valve when chlorine is still present.
Once these processes were fully in place and personnel trained, a steady program of cooling-tower dosing began during the offline periods when the plant was not selected to run in the market.
Results: One of the first things staff noticed was the significant reduction in chlorine consumption, residual chlorine being more effective with the new dosing system because it remains in the system longer than previously. This allowed operators to reduce the chlorine dosing-pump run times from what was once typically up to an hour, down to as little as 10 to 15 minutes for the same measurable residual.
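The pump-runtime arithmetic behind that result is simple. The sketch below is hypothetical throughout: EPN’s actual basin volume, chlorine demand, metering-pump rate, and bleach strength were not published, so the numbers are stand-ins chosen only to illustrate how a 10- to 15-minute run time can deliver a target residual.

```python
def dosing_runtime_min(basin_gal: float, applied_dose_mg_l: float,
                       soln_avail_cl_mg_l: float, pump_gpm: float) -> float:
    """Minutes of metering-pump run time to apply a chlorine dose to a basin.

    applied_dose_mg_l: target free residual plus the water's chlorine demand.
    soln_avail_cl_mg_l: available chlorine in the hypochlorite solution
    (12.5 trade percent bleach is about 125,000 mg/L available chlorine).
    """
    soln_gal = basin_gal * applied_dose_mg_l / soln_avail_cl_mg_l  # gal of solution
    return soln_gal / pump_gpm

# Hypothetical numbers: 100,000-gal basin, 3.0-mg/L applied dose
# (1.0-mg/L target residual + 2.0-mg/L demand), 12.5% bleach, 0.2-gpm pump
print(f"{dosing_runtime_min(100_000, 3.0, 125_000, 0.2):.0f} min")  # → 12 min
```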
Also found was that during the first outage after implementation, a full cleaning of the cooling-tower basin was no longer needed. This was a significant savings on contractors, pressure washers, rented vacuum trucks, and outage timelines.
The number of cooling-tower-pump debris-screen cleanings also went down to just infrequent occasions in the fall when leaves are introduced into the basin. This was an enormous cost saving.
The other benefit noted was a consistently lower backpressure on the condenser while online in summer. During the high demands of the summer peak season, that can really make the difference in meeting the plant’s dispatch targets.
During these periods of change in powerplant operational profiles, it’s the realization and implementation of the simple changes that can make a significant difference to the material and economic condition of the plant. Although the change described above was quick to identify, it was fully studied and implemented to comply with all environmental permitting and operational requirements associated with the systems involved.
Overall, the new dosing process has realized significant cost saving for the plant, with no impact to the environment and with minimal investment. While a change such as dosing time may seem minor, sometimes it really is the small changes that make the biggest impacts.
How to remove air from your GT water-injection system
EPN operates primarily on natural gas with sporadic operation on fuel oil in winter. When running on oil, demineralized water is injected into the combustors to reduce NOx emissions. Water is supplied at up to 250 gpm per unit from the demineralized-water storage tank to a high-pressure, variable-speed injection pump located alongside each gas turbine.
The water-supply piping system, approximately 1000 ft long, has over a dozen elevation changes from 6 ft below grade to 70 ft above, between the forwarding pumps and the inlet flange to each injection pump.
During long offline periods, air accumulates in the piping and is trapped at numerous high points in the system. When water injection is initiated, air is carried into the injection pumps, causing pressure transients that have, at times, shut down the water injection system and the gas turbines.
Over the years, personnel added numerous manual vents, as well as several small automatic air-release devices. However, access to and operation of the vents has been problematic, especially in winter, given their number and the outdoor location of several. While manual venting had reduced the frequency of events, it had not eliminated the problem and numerous manual operations were required to complete fuel-oil transfers.
The vents also had to be left open for extended periods as the entrained air traveled through the system. As GT load and water-injection flow increased, new pockets of air would appear at vent points. Despite an intricate procedure and numerous vent points, it was still commonplace for water-injection pressure transients to disrupt operation after long periods of system layup.
Staff elected to install a high-volume vortex air separator, with an automatic air release, in the demin-water forwarding piping at the inlet to each water-injection skid. Detailed engineering was performed by site and corporate field services personnel. The vessel and piping were sized for 96% air-removal efficiency at full flow. The majority of the piping was prefabricated and the separators were installed in four days during a planned outage. Operating procedures and applicable drawings were updated and operations staff was trained in the revised operating procedures.
The site completed its full-load liquid-fuel operation audit without any water-injection-system or gas-turbine trips caused by forwarding-system air entrainment or pressure transients. Success of the system changes was verified in the winter as the site operated on oil over a dozen times with no events related to air entrainment. This change has reduced manual operator intervention during fuel-oil transfers and has greatly improved liquid-fuel-system reliability. Reduced venting, and the elimination of continuous manual venting, also has cut demin-water consumption.
EMERGENCY RESPONSE TRAINING AT RUMFORD POWER. The plant, a process-safety-management site that uses anhydrous ammonia for gas-turbine inlet cooling, is a participating member of the local emergency planning committee (LEPC).
Not having a facility-based hazmat team, Rumford would rely on its relationships with the local fire department and hazardous response unit in the event of an uncontrolled ammonia release. Staff annually participated in a table-top exercise (TTX), gathering members of the plant, fire, police, hazmat, paramedics, local regional dispatch, and the hospital to talk through potential scenarios and go over the process of what a response to those situations would look like.
The event created camaraderie between the plant and first responders and served as a knowledge-share, usually generating follow-up actions or plant tours to show first responders the site and where they would be responding in an emergency. A unique learning situation was created by the Covid-19 pandemic, when an in-person TTX was not realistic.
Zoom to the rescue! Rather than taking a “pass” on the year, organizers held the TTX with all personnel participating remotely. Zoom was the platform chosen because its breakout rooms support side discussions.
The TTX scenario, developed in collaboration between the plant and the LEPC deputy director, was split into the following two modules:
Module 1: The two Rumford Power employees on duty begin the startup process at the facility. One employee is in the control room, one outside. The employee in the control room notices a vehicle approach the gate, where it sits for several minutes. The employee calls 9-1-1 to report the suspicious vehicle.
While the employee is on the phone with 9-1-1, the vehicle drives through the gate and into the plant. The employee reports seeing, on camera, the driver’s arm outside the vehicle window holding what looks like a handgun.
Module 2: The vehicle makes multiple laps around the facility, and then leaves the property. Upon reviewing the camera footage, the employee notices a vapor cloud near the anhydrous ammonia lines outside the facility. Soon after, the south ammonia ceiling detector begins to alarm (150 ppm) indicating a leak.
The scenario presented several items for discussion: from suspicious activity, to security breach, to anhydrous ammonia release; plus, it tested communication and coordination between the plant and multiple first responders. Once the meeting commenced, the “scenario” was presented to plant employees onsite that day, a 9-1-1 call was simulated with the regional communications center, and the event played out testing reactions and emergency response procedures in place both onsite, and with the first responders.
Breakout rooms were created so plant personnel and first responders could congregate to (1) develop initial action items before reconvening, (2) simulate standing up an incident-command structure, and (3) gather plant information and share it with first responders so action plans could be developed.
The two modules were played out back-to-back, with initial communication made from the plant to the regional communications center. Following the initial trigger of the site’s “Emergency Response Plan for Suspicious Activity” and the resulting police response, site personnel answered questions to provide information to the first responders.
Once the initial module was completed, staff reviewing plant security footage to gather more information on the gate breach discovered a potential anhydrous-ammonia vapor cloud. This expanded the interagency response, bringing fire and hazmat units into the planning.
The after-action report and improvement plan revolved around two main core capabilities: operational coordination and operational communication.
Coordination strengths included the following:
- Quick notifications to mutual aid resources and supporting organizations while mobilizing needed resources.
- Quick notification of the hospital allowed setup of mass-decontamination capability, if needed, for patient surge or chemical-exposure treatment.
Areas of improvement included these:
- Increased visibility and familiarity for first responders. Facility tours were provided for all first responders. Rumford Fire had many training experiences at the plant, but the Rumford Police Dept had not. Thus, familiarization training was provided to all PD staff. Large signs were made, at the suggestion of first responders, to label the sides of the building (A, B, C, D) and to number doors. These action items were completed by Rumford employees.
- Harden the plant as a safe place of refuge in the event of a facility breach. Rumford follows CIP-003 protocols for physical security with fence, gates, and locked doors. As an added measure, deadbolts were strategically located so plant staff could create a safe room if the site’s outer security level were breached.
Communication strengths from the TTX included the results of the simulated 9-1-1 call with dispatchers: Walking through all the questions in the playbook showed plant staff what would be asked should the scenario arise. The Zoom breakout rooms also were deemed a benefit, creating the “isolation” of a real event rather than having everyone at the same table as in years past. The plant had previously purchased a two-way radio to communicate directly with first responders should the need arise; a test was conducted with first responders to confirm proper operation and programming of the radio.
As a result of these activities and outcomes, the plant was named Oxford County LEPC Facility of the Year.
EMPIRE GENERATING COMPANY, managed by Chet Szymanski and operated by NAES Corp, uses gray-water effluent from a local municipality’s sewage treatment plant as incoming process water for both the steam-generating and cooling-water systems. Water quality, specifically total dissolved solids (measured as conductivity), varies greatly depending on the time of year and the quantity of water received. During high-conductivity events, even with the primary filtering mechanism operating at optimum efficiency, the plant experienced carryover of impurities to its other systems.
To minimize the carryover effects and to ensure that the water is properly disinfected prior to being used as a cooling medium, a 600,000-gal chlorine contact chamber is located upstream of the cooling tower.
Over more than 10 years of operation, the treatment system proved extremely effective at removing carryover material from the makeup water. However, the system design made no provision for removing the accumulated material from the chamber, and the ever-increasing amount of waste reduced system effectiveness over time. To ensure operation as designed, an effective method for removing the semi-solid waste had to be developed.
The goal was to both clean out the sludge and recapture as much of the water as possible. Note that a previous attempt to remove semi-solids by vacuum truck proved unsuccessful because of poor performance and high cost.
Plant personnel and the water-treatment vendor identified geotubes (a permeable system of engineered textiles specifically designed for dewatering high-moisture-content sludge and sediment), in conjunction with an approved flocculant, as the most effective solution. Automatic sump pumps transfer recovered water to the plant’s cooling tower. The semi-solid material remaining, characterized as non-hazardous for waste disposal, is sent to a local landfill.
HUNTERSTOWN GENERATING STATION, managed by Competitive Power Ventures and operated by NAES Corp, ensures outage LOTO accuracy and completeness by mapping procedures on drawings. Plant Manager Thomas Hart said the challenge was to clearly identify the LOTO scope and to provide the means to present and communicate the safety plan to all outage employees, including outside contractors.
Hunterstown requires that all outage LOTOs be mapped out on electrical and mechanical drawings. Locations of work must be delineated in the drawing package and isolation points highlighted, demonstrating the relationship between workscope and LOTO boundary.
This proven process makes it easy to demonstrate, through a drawing review and equipment walkdown, that the site's LOTO program is effective and that the specific workscope a contractor will perform is compatible with the LOTO scope applied to the equipment and presented in the LOTO package (including drawings). The result is clarity for contractors, confidence in the site's safety programs, and fewer contractor requests to expand the LOTO boundary.
Feedback from contractors has been positive. One example: GE said, “Hunterstown’s LOTO program is unmatched in providing clear understanding to the scope of the LOTO and how the work we will be performing is covered under the LOTO hung on the equipment in the field. With our GE procedures, we are required to walk-down every LOTO and Hunterstown’s program provides assurance that we can clearly see on paper in the marked-up drawings how what we see in the equipment walk-down provides protection for our workforce.”
MARCUS HOOK ENERGY CENTER, operated by GE, has four hydrogen-cooled generators: three GE 7FH2 machines, one Toshiba TAKS. Hydrogen is supplied by a local firm in 16-cylinder cradles; delivery is by truck. Consumption of the 7FH2 generators is 500 ft³/day by design; the TAKS unit uses 1236 ft³/day. The 16-packs were moved to each generator by forklift. Challenge was to reduce H₂ consumption, thereby reducing cylinder handling and improving system safety.
GE operations personnel used a spreadsheet to track 16-pack movements and hydrogen use. All generators, save one on the gas-turbine units, were consuming well above design. The operations team worked systematically to locate the leaks, which were found to be the following:
- GT2 generator. A ½-in. poppet check valve in the hydrogen supply line was found leaking and replaced.
- GT1 generator. A bad gage caused H₂ supply pressure to run high and a small relief valve unseated as a result. The gage was replaced.
- ST generator. The scavenging valve for the hydrogen purity meter was allowing excessive flow and that was corrected.
Results of the corrective steps taken by staff: A 50% reduction in 16-pack changes and hydrogen use. All four generators now are consuming hydrogen at a rate below design, reducing cylinder handling and onsite traffic, as well as the plant’s risk profile. Expectation is the combined improvements will save about $50k annually. Also under consideration: Reducing onsite storage of the 16-cylinder packs, thereby increasing savings.
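The spreadsheet logic the operations team used can be sketched in a few lines: convert 16-pack changes over a period into a daily consumption rate and flag any generator running above its design rate. The sketch below is illustrative only; the per-pack volume and the example log values are assumptions, not Marcus Hook data.

```python
# Illustrative sketch of the 16-pack tracking spreadsheet. Design rates are
# from the article; PACK_FT3 is an assumed cylinder capacity, not plant data.
DESIGN_FT3_PER_DAY = {"GT1": 500, "GT2": 500, "GT3": 500, "ST": 1236}
PACK_FT3 = 16 * 291  # assumed ~291 ft3 per cylinder, 16 cylinders per cradle

def daily_rate(packs_used: int, days: float) -> float:
    """Average hydrogen consumption, ft3/day, inferred from pack changes."""
    return packs_used * PACK_FT3 / days

def flag_leaks(log: dict) -> dict:
    """log maps unit -> (packs_used, days); returns units above design."""
    flagged = {}
    for unit, (packs, days) in log.items():
        rate = daily_rate(packs, days)
        if rate > DESIGN_FT3_PER_DAY[unit]:
            flagged[unit] = round(rate, 1)
    return flagged

# Hypothetical example: GT1 burned two packs in a week -- well above design.
print(flag_leaks({"GT1": (2, 7), "ST": (1, 7)}))
```

Tracking at this level is enough to spot which unit is leaking; finding the leak itself still takes the systematic walkdown the team performed.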
RUMFORD POWER, managed by Justin Castagna and operated by Cogentrix Energy Power Management, faced changing markets that reduced run opportunities. Maintenance-staff response time to call-ins at the plant’s rural Maine location usually was half an hour or more. Given Rumford’s low capacity factor, response time was critical to resolving issues and returning the plant to operation: There was little opportunity to make up for the unavailability by extending week-long runs as was done in the early 2000s.
Through the plant’s RCA process, which is implemented for every forced-outage event, staff identified common areas of improvement that with a little training would enable others (for example, onsite operations personnel) to troubleshoot and repair once they better understood the affected system. This effort could help reduce the plant’s equivalent forced-outage rate (eFOR).
The plant already was using online training modules for OSHA and process safety management training. Based on the successes gained, staff decided to grow the training library.
Looking back over the past five years of common themes or problems for call-ins at the site, the three areas that routinely came up in troubleshooting events during off-hours, or when the plant was in a forced outage, were the following:
- Fuel-gas letdown adjustment.
- CEMS calibration-gas bottle replacement.
- Breaker racking.
The fuel-gas letdown adjustment was simply a response to a common issue the plant had in 2017-2018. Repeat failures of the gas letdown system caused starting-reliability issues, including a short period of forced-outage time. Maintenance technicians knew the process of checking and setting the Emerson (Fisher) 310A-32A pressure regulator, but the operators had little involvement in the process.
The training module created gave the background of the theory behind the “wide-open monitor” method of control and use of a second downstream pressure regulator as the controlling valve. Support documentation included cutaway images of the valve with descriptions, the four basic steps in setting the letdowns, etc.
CEMS-gas bottle replacement is a common task at the plant, performed primarily by I&C technicians. Operators assisted and swapped bottles, but there was no formal training or instructions for the task—just on-the-job training. Although a calibration-gas bottle might need swapping on a Sunday afternoon only once in months, or even years, the training module eliminates the need to call someone in on a day off.
Breaker racking was a training module motivated by an earlier arc-flash study. Modifications were made with new relays installed on the 480-V switchgear to reduce incident energy levels, and recommendations were made to purchase remote racking devices as an extra precaution to remove personnel from the potential “line of fire.” Often during troubleshooting efforts in support of a forced outage, breakers must be racked off the buss and LOTOed.
The training module developed shares the PPE requirements for the task, the equipment and tools necessary, and guidance on determining which racking method (manual or remote) is best and how to perform each. While this module had no direct impact on forced-outage reduction, the site identified it as a best practice for sharing all information about the breakers themselves, the racking devices, PPE, and the process for confirming zero energy before maintaining plant equipment.
These three examples of knowledge-sharing, along with site-specific web-based training, help provide both process theory and real-world experience. Result: eFOR improved from 0.47% to 0.28% in the first year of the program’s implementation.
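For readers less familiar with the metric, eFOR weighs forced-outage and equivalent forced derated hours against service hours. A minimal sketch following the commonly cited NERC GADS-style formula; the input hours below are illustrative, not Rumford data:

```python
# Hedged sketch of the equivalent forced-outage rate (eFOR) calculation,
# per the commonly used NERC GADS-style formula.
def efor_pct(foh: float, efdh: float, sh: float) -> float:
    """foh = forced-outage hours, efdh = equivalent forced derated hours,
    sh = service hours. Returns eFOR as a percent."""
    return 100.0 * (foh + efdh) / (sh + foh)

# Illustrative numbers: 4 forced-outage hours plus 1 equivalent derated
# hour against 1060 service hours works out to about 0.47%.
print(round(efor_pct(4, 1, 1060), 2))
```

Even a few avoided call-in hours per year move the needle at this scale, which is why the training modules paid off.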
As new hires replace a retiring workforce, the training presents real-world examples and expectations. New personnel are given a database of information the plant has gained over the last 20 years, enabling them to grow their skill sets and be better prepared for events as they arise.
Brent Converse, the plant engineer at Old Dominion Electric Cooperative’s Wildcat Point Generation Facility, has three decades of experience in the operation and maintenance of powerplant equipment—including advanced-class gas and steam turbines. Among the most important lessons Converse has learned over his career: “Stay ahead of the curve” on things having a significant impact on reliability and performance, and pay close attention to detail.
The editors asked Converse to share an example during a telephone interview. He chose the lube- and hydraulic-oil conditioning systems for his plant’s three turbine/generators. Background: Wildcat Point’s Mitsubishi Power 501GAC gas turbines have separate sumps for control and lube oil—the former 150 gal, the latter about 6900. Both systems are charged with Idemitsu’s Daphne Super Turbine Oil MG32.
The Alstom steamer has a nominal 6600-gal combination sump for control and lube oil; the Alstom hydrogen-cooled generator a separate sump for seal oil. The fluid common to both systems is Mobil DTE 746.
Converse, who has been at Wildcat Point since before commissioning, said the first oil conditioning systems installed were C.C. Jensen Inc’s off-line (a/k/a kidney loop) HDU fine-filter solution for the gas-turbine control-oil sumps (figure).
The plant engineer monitors control oil quarterly, judging its condition primarily based on the results of RPVOT, RULER, and acid-number tests. The HDU can mitigate fluid degradation conducive to the formation of acids and insoluble oxidation products that could impede the operation of components critical to turbine control.
However, antioxidant depletion makes it necessary to replace control oil every two years, or so. “Having had no unit trips attributed to control/lube-oil issues since commissioning, such attention to detail has paid dividends,” says Converse.
The plant engineer’s familiarity with the Jensen system was a catalyst for a discussion with the company’s technical manager, Axel Wegner, at a user group meeting. The result was two-fold: Installation of Jensen’s varnish removal unit (VRU) on both gas turbines and its PTU-type Filter Separator on the steam turbine.
More recently an HDU-type Fine Filter, similar to that serving the gas-turbine control-oil system, was installed on the ST generator seal oil system. The major difference between the two is that the latter is explosion-proof.
Why kidney loop. Kidney loop or offline filters benefit users because they are independent of the fluid system and always in service at an optimal flow rate—thereby avoiding pressure fluctuations and other disturbances that might otherwise negatively impact rotating equipment. Plus, they achieve very fine filtration. As the illustration shows, turbine fluids are withdrawn from the lowest point in the oil reservoir, removing sediment in the process.
All of the Jensen offline filters installed at Wildcat Point—HDU, PTU, and VRU—are arranged in this manner. Likewise, all are designed to (1) remove particles 3 microns absolute and larger, (2) absorb up to 1 gal of water per filter insert, and (3) remove insoluble varnish. This performance is achieved by a stack of two or more filter inserts, such as those shown in the photo, made primarily of compressed wood cellulose and cotton linters.
Composition of the filter inserts may vary from the standard Jensen offering where special requirements warrant. This has not been necessary at Wildcat Point. Wegner notes that independent laboratory tests show the Jensen filter inserts do not affect the phenolic and aminic antioxidant additive packages typically used in hydraulic/lube oil formulations used in powerplants.
Applications. The HDU and its standard filter insert are designed for use in all applications except those where water ingress is expected. Special inserts are available for acidity reduction, dissolved-water removal, and soluble-varnish removal.
The PTU is used mostly with steam-turbine lube-oil systems (and diesel-oil filtration systems) because of its ability to remove large quantities of water from the turbine fluid and discharge it automatically.
Wegner says the PTU has advantages over dehydrators often used in ST control- and lube-oil systems for this application—including lower capital and operating costs, ability to remove particulates and varnish, and faster water removal. However, dehydrators may be beneficial in systems charged with fluids having poor demulsibility (don’t shed water easily). Centrifuges, a third alternative for water removal, are limited in their ability to work reliably over long periods and to reduce moisture levels to the 20 ppm often recommended today.
Jensen’s VRU typically serves on large gas and steam turbines where large amounts of soluble varnish must be removed from lube oil. A specially designed filter insert removes dissolved and suspended soft contaminants by polar attraction and alerts on varnish saturation by high pressure. The varnish-free oil produced cleans all system components it comes in contact with, ultimately reducing the level of varnish in the full charge of fluid to near zero.
Jensen recommends that its filtration systems run 24/7/365 and that the filter inserts be changed annually, when the pressure drop across the filter exceeds the recommended limit, or when oil analysis requires a change—whichever comes first.
A high pressure drop requiring new filter inserts can be caused by a leak exceeding the water holding capacity of the filter or by a high concentration of particulates in the turbine fluid. Wear and tear of mating parts and rusting of carbon steel components—such as the oil reservoir—contribute to the latter. Wildcat Point avoids rust to the degree possible with oil sumps made of stainless steel.
Regarding control of particulates, Wegner says that begins when the oil is purchased. He urges filtering new oil or specifying a required ISO cleanliness level at that time to avoid receiving a fluid that may be dirtier than that called for by manufacturers of servos and other hydraulic control components. Based on his experience, you can expect an average contamination level of about 19/17/14 unless you write a tighter spec.
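The 19/17/14 figure Wegner cites is an ISO 4406 cleanliness code: three range codes for particle counts per milliliter at sizes of 4, 6, and 14 microns and larger, where each code step roughly doubles the allowed count (upper bound of roughly 0.01 × 2^code particles/mL). A minimal sketch of that mapping, ignoring the small rounding the published tables apply:

```python
import math

def iso_range_code(particles_per_ml: float) -> int:
    # Each ISO 4406 range code doubles the count ceiling:
    # upper bound ~= 0.01 * 2**code particles/mL (approximation;
    # the published tables round these bounds slightly).
    return max(1, math.ceil(math.log2(particles_per_ml / 0.01)))

def iso4406_code(c4: float, c6: float, c14: float) -> str:
    # Counts per mL at >=4 um, >=6 um, >=14 um -> "R4/R6/R14".
    return "/".join(str(iso_range_code(c)) for c in (c4, c6, c14))

# Illustrative counts: roughly 5000, 1300, and 160 particles/mL
# map to the 19/17/14 level cited in the article.
print(iso4406_code(5000, 1300, 160))
```

A spec one or two codes tighter at each size halves or quarters the allowed particle count, which is the practical leverage behind writing cleanliness into the purchase order.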
Extended warranties and service. There’s only so much plant personnel can handle today given busy schedules, new operating regimes, etc. Converse said Wildcat Point opted for Jensen’s extended warranties and service to keep equipment running the way it should, maintaining turbine fluids within recommended specifications.
Jensen changes out filter elements annually and performs all PMs called for in its O&M instructions. Any interim issues are responded to in timely fashion and there have been no delays in service and in the receipt of necessary parts. As noted earlier, “no failed starts” is the objective and the fluid systems for the principal rotating equipment at Wildcat Point continue to meet these expectations.
Consolidated Asset Management Services (CAMS), one of the electric-power industry’s leading integrated services firms offering comprehensive asset management, O&M, financial services, and compliance and consulting solutions, shares its considerable experience with those considering a unit purchase in “8 Pitfalls to Avoid When Buying a Power Generation Asset,” available online at no cost.
Evidence of CAMS’ accomplishments includes the company’s success in CCJ’s annual Best Practices Awards program. Over the last decade, the nominal hundred generating units in its portfolio (totaling about 40 GW of capacity) have earned more than 75 awards—including the Best of the Best citation in 2022 to St. Charles Energy Center, which it operates for Competitive Power Ventures.
The authors of “8 Pitfalls” note in their introduction that “evaluating a power asset becomes increasingly difficult without a sound understanding of power operations and fundamental asset-management optimization capabilities.” Highlights of the guidance document include the recommendations summarized below.
Engage an independent engineer to conduct an unbiased assessment of the asset, as required by lenders for their underwriting of the potential acquisition. But because this report may be generic in nature, focusing on the general commercial viability of the project (for example, compliance with permits), the prospective buyer is encouraged to dig deeper into plant specifics—including O&M history.
The authors suggest having an O&M partner throughout all phases of the acquisition process. “Working with an O&M provider to assess a facility’s O&M needs will create a realistic, actionable game plan for the future. With this approach there are fewer surprises and the buyer has a solid foundation to optimize.”
Account for existing corporate support and its replacement cost. If the generating unit of interest is included in a portfolio of assets, beware the possibility that costly corporate support services may not be included in the seller’s model. Example: The cost of regulatory support to assure compliance with NERC and other requirements. The seller may make generic assumptions as to the cost of providing these services in its model, but such simplifications can leave the potential buyer with unbudgeted costs after the generating unit is acquired.
Develop a custom O&M budget for the facility of interest. Avoid the temptation of “adjusting” the plant’s existing budget: Build a realistic operating budget from the ground up by having experienced personnel review each line item.
Additional revenue opportunities. While the custom O&M budget will help control the expense side of the operating statement, the buyer should consider investigating the potential for additional revenue opportunities to further increase profitability. A few areas to consider:
- Is the plant capturing its full value in the marketplace? Look into electrical interconnections and corresponding agreements to be sure they are consistent with the actual capability of the facility.
- Is the area on which the plant is built fully utilized? If not, can the excess be sold?
- Is there unused inventory that can be liquidated?
Evaluate plant personnel. During site visits, discussions with plant staff can be informative and provide insights into operational and plant issues that might not be identified otherwise. While sellers typically do not want to give prospective buyers access to plant staff, buyers should insist on it. One outcome of personnel due diligence is an optimal staffing plan to ensure safe, reliable operation based on current and future requirements.
Property tax evaluation. A powerplant typically is a significant revenue resource for the local taxing authority. Given the many special tax abatements and other grants that could apply to the taxable value, it is in your best interest to validate the assessment and explore ways to improve asset value.
IT requirements. Powerplants require extensive IT connections and equipment. Plus, cybersecurity requirements, telemetry, systems control, emissions, fuel monitoring, RTO/ISO communications, and software licenses all must be accounted for. A comprehensive audit of inherited systems and devices is important. One reason is that it’s typical for a seller to not upgrade aging equipment when a facility is for sale. An IT audit can identify unforeseen security and reliability issues while contributing to the development of a good IT process for the facility.
Support after the sale. Acquiring a power-generation asset is a team effort that takes many months of hard work. But keep in mind that realizing the full potential of your investment takes another team of experts with the capability to optimize all aspects of the resource and the marketplace.
The IAPWS Australasian Boiler and HRSG Users Group (ABHUG) returns to in-person meetings this year, hosting its 2022 conference November 15-17, at the Brisbane Convention & Exhibition Centre.
Presentations and open discussion sessions will focus on plant operation, maintenance, layup, cycle chemistry, and materials. Participants include users, consultants, and equipment/services providers. Attendees are encouraged to explain specific technical problems with their HRSGs, conventional boilers, steam turbines, and valves and then participate with others willing to share their experiences and debate solutions.
Several workshops will be conducted during this event—including one on the expanding use of film-forming substances (FFS) to protect metal surfaces in powerplant steam/condensate and cooling-water systems against corrosion and other degradation mechanisms.
Visit the conference website for names of participating companies/speakers, program updates, registration link, sponsorship opportunities, and other details.
2021 Conference Highlights
Gauge the value of this meeting to your organization by reviewing the highlights of last year’s program, below, compiled by CCJ’s Consulting Editor Steve Stultz.
ABHUG 2021 was a virtual event, conducted on three consecutive mornings from Brisbane, Australia, and elsewhere, joined by more than 110 participants worldwide. It included 14 technical presentations with a blend of cycle chemistry, mechanical, and control experiences and issues across combined-cycle/HRSG plants and conventional boilers.
ABHUG organization and content are tied closely to its supporting organizations, which include the European HRSG Forum and various national groups of the International Association for the Properties of Water and Steam (IAPWS). Also closely aligned is the annual US-based HRSG Forum, making this part of a worldwide HRSG/combined-cycle information exchange.
Bob Anderson (Competitive Power Resources) and Barry Dooley (Structural Integrity Associates and IAPWS), the moderators for ABHUG 2021, will serve in that capacity again this year.
“Nothing’s for free.” CleanCo, Queensland’s publicly owned clean-energy generator, discussed the changes that have taken place at its Swanbank E Power Station (first commissioned by Stanwell in 2002) over the years. The facility’s transformation from baseload to a market-driven profile required continuous effort, long-term commitment, and careful attention to system impacts.
The 380-MW Swanbank E 1 × 1 combined cycle, based on GT26 gas turbine technology, has always been one of Australia’s most efficient thermal plants. The original design profile was daily operation with high output as needed, and low-load operation in the evenings.
But in 2014, the plant was put into cold storage, primarily to benefit from the increasing world-market value of its gas entitlements. Return-to-service dates then experienced delays, and long-term preservation techniques became a major new challenge.
CleanCo reopened Swanbank E in 2021 with the goal of helping Queensland transition to renewables. As the company’s Matt Sands explained: “With these operational changes, we’ve had to understand the unit a lot better. We’ve had to start pushing the boundaries and finding ways to meet the new market-driven demand curve.
“We’re having to re-learn the impacts of starting every day: thermal fatigue in the HRSG, drain and blowdown operations, and total system impacts, to maximize the life of the HRSG, turbines, and plant.
“And to do this, you have to understand that nothing’s for free. When you are starting and stopping the unit, it is going to cost you somewhere.” Operation has shifted to daily starts to support renewables, generally 3 to 11 p.m., Monday through Friday.
Sands listed examples such as offline-induced corrosion, thermal recycling, equivalent operating hours on the machines, and boiler/system chemistry. “Nothing’s for free, and we need to know all of the pinch points,” he repeated.
Swanbank E’s original design ramp rate was 11.5 MW/min. To meet the new market demands, trials were set at 30 MW/min. The plant achieved that rate and currently operates at 17.5 MW/min.
The latest effort is the Swanbank Fast Start Project, where there have been more teething issues. Intent is to reach minimum load in half the normal time of 4.5 hours. “If we get there sooner, we save a lot of fuel,” Sands stressed. But again, nothing is free. So he discussed some notable impacts on the steam/water cycle and steam-turbine temperature control.
He talked about improved temperature probes and a new economizer bypass to increase range, help condition water, and achieve higher temperatures. He also offered details about drum-level control.
Sands then discussed low-load operation. “We are still testing, and looking at the impacts on the gas turbine, steam turbine, HRSG, and system water chemistry. Future projects will look at amines for wet storage, header crack monitoring, steam-turbine control-valve throttling, and flow issues with attemperator spray and OTC control valves.”
Minimal chemistry instrumentation is a recurring conference topic because of its importance and less-than-optimum global observance. It focuses on instrumentation requirements for proper system water-chemistry control and monitoring. Chemistry-influenced failures and potential personnel safety issues are increasingly common with today’s changing operating profiles.
Kirk Buecher, Mettler-Toledo (US), offered a well-organized, informative review of all important measurements, instrumentation, and rationale—including important optional measurements and equipment.
“Too many plants are under-instrumented today,” he told attendees. “So, let’s look at the minimums of both equipment level and redundancy.” Some redundancy, Buecher explained, can offer good cross-checks of other equipment.
“This is a non-commercial presentation, and we’ll look at the options,” he said. He went on to list the principal drivers of his information including the IAPWS Power Cycle Chemistry Working Group as well as EPRI, VGB, ASME, ASTM, and other reliable sources. OEMs and academia are heavily involved.
So, he provided the following overview:
- Every plant should have at least a minimum level of instrumentation (MLI) which can uniquely identify (pinpoint) the key parameters and drivers to each and every failure/damage mechanism that can occur.
- Redundancy: The MLI does not only analyze specific chemistry locally; it needs to provide sensitivity analysis for the cycle (holistic view) in the event of a defective or out-of-service instrument. Thus, an instrument within the minimum key level is backed by other instrumentation or verification technologies. In a serious contamination event, the operator does not need to take time to validate the reading of an individual instrument.
- Equally important, all MLI parameters should be audibly alarmed in the control room or on the distributed control system.
An interesting direct quote here: “Pre-Covid, I visited somewhere between 40 and 50 plants a year all over the world and I’m still surprised by how many sensors, transmitters, and analyzers are just standalone. They’re not connected to anything. So, if you’re not right next to the transmitter or the analyzer when the alarm goes off, hours could pass before anybody notices the problem,” he explained.
Buecher followed with a detail-rich presentation on specific and direct conductivity, cation (acid) conductivity, pH, dissolved oxygen, sodium, phosphate, and oxidation/reduction potential. He included some equipment new to the market. He then added common reasons for monitoring and measuring degassed cation conductivity, silica, and TOC.
Catch problems early. Buecher went on to stress proper measurement locations and tying optional measurements to plant history. He focused on makeup water, condensate, feedwater, drum/evaporator monitoring, and steam-turbine protection.
Buecher reminded attendees that measurement at the condensate pump discharge gives first warning of condenser leaks, regeneration chemicals from the makeup plant, or contaminated condensate from the storage system. It also is critical for detecting air in-leakage.
At this same location, measuring cation conductivity gives a rapid alert to the ingress of corrosive anions. For seawater-cooled plants without condensate polishing, measuring for sodium is critical. Measuring pH and conductivity are optional, but also helpful for confirming other information. Measuring degassed cation conductivity can clearly indicate whether or not an increase in cation conductivity is from CO2.
He completed this session with makeup water, feedwater with and without polisher, HRSG evaporator water (running AVT, caustic treatment, phosphate), and main and reheat steam.
Update on erosion of HP bypass valves. An important and recurring problem at many HRSG/combined-cycle plants is erosion of seating surfaces in HP bypass pressure-control valves (PCV), attributed to ingestion of wet steam and water. Seat/plug damage results in leaking steam that overheats downstream carbon-steel pipe. Operators must then open the valve to minimum to enable safe attemperator operation to cool the pipe. Manual desuperheater operation will cause even more liner and pipe damage.
Co-chair Bob Anderson explained how water can end up in the valves, including that from condensate which forms while warming the HP steam pipework. Common solutions are ensuring the HPSH is drained during startup (before steam flow), ensuring interstage and final attemperators do not leak, reviewing DCS data to make sure superheat is available when the PCV is opened (may require additional surface-mounted thermocouples), and avoiding HP bypass operation during layup.
As noted during AHUG 2018, a change in PCV materials or design will not solve the erosion problem (CCJ No. 59, 4Q/2018, p 62, “Workshop 1: Steam-turbine bypass”). A change in operating practice can perhaps reduce the problem, but installation of new or larger pipe-warming drains may be needed.
Anderson’s presentation explained how to achieve warming steam flow for cold lag startups in 2 × 1, 3 × 1, and 4 × 1 systems, perhaps by enlarging/installing a warmup drain upstream of the HP isolation valve.
FEA, remaining life. For those who tuned in for a deep dive into finite element analysis and creep-fatigue crack growth, that came from Daniel Blanks, senior structural integrity engineer at Quest Integrity.
Blanks did not stop with what is involved in analysis; he showed attendees how to undertake assessments with step-by-step narratives and graphics. His background statement: “The shift away from baseload to flexible operation can result in an increase in damage to plant components, in particular thick-walled boiler components.”
The basic scope of his studies:
- Remaining life of an initial tube ligament crack is used to compare the effects of the various flexible operation modes on header components.
- Creep-fatigue crack growth is computed under baseload, two-shift, and low-load modes of operation.
He covered risk to headers and was very clear about his technical approach. Participants saw heat-transfer thermal models, stress analyses, and the details of creep-fatigue crack growth. This led to computation of remaining-life assessment for the various modes of operation.
- For all headers considered during flexible operations studies, two-shift operation resulted in the fastest creep-fatigue crack growth, resulting in the shortest remaining lifetimes for those components.
- For cases where remaining lives were shortest (less than 20 years), the dominant crack growth mechanism was fatigue; creep crack growth contributed very little.
- Low-load modes of operation generally promoted faster creep-fatigue crack growth than baseload operation; however, it was generally slow enough to remain acceptable over the future service life of the component (30 years).
- Across all headers, baseload operation is very benign, with limited creep-fatigue crack growth.
And his conclusions:
- Using a combined FEA and creep-fatigue crack growth approach can provide an understanding of the effects flexible operations may have on header components, by comparing the remaining life of an initial crack.
- For some components, two-shifting can be very damaging, resulting in rapid creep-fatigue crack growth.
- If considering transition to flexible operation, wide-ranging thermocouple coverage and adequate testing of transient modes are essential to constructing a valid model for assessing remaining life.
ABHUG 2022 is chaired by Barry Dooley, Structural Integrity Associates (UK), and Bob Anderson, Competitive Power Resources (US). Steering committee members in addition to Dooley and Anderson are the following:
- David Addison, Thermal Chemistry, New Zealand*
- Matthew Sands, CleanCo, Queensland**
- Russell Coade, HRL Technology Group, Victoria*
- Michael Drew, Australian Nuclear Science & Technology Organisation (ANSTO), NSW*
- Armand du Randt, Genesis Energy, New Zealand*
- Stuart Mann, AGL, Victoria**
- Keith Newman, Synergy, Western Australia**
- Charles Thomas, Quest Integrity, New Zealand**
** Energy provider
To understand the true value of a user group like the Generator Users Group, you must attend and participate in its collaborative, technical environment of generator experts. With the 2022 conference just one month away in San Antonio, the following summaries illustrate the unique content disseminated and absorbed by attendees.
The 2021 conference of the aired back in July virtually, and like all GUG conferences, the content lives on. Presentation slide decks and many recordings by users and vendors—but not the OEMs—are available to registered owner/operators in the GUG Conference Archive section of the Power Users website at www.powerusers.org.
A goal of all user groups operating under the Power Users umbrella is to disseminate technical material presented at its conferences to the managers, engineers, and technicians who would benefit most from it by improving plant performance. CCJ is working with Power Users to facilitate access to this information.
Fact: Not everyone is interested in everything presented at a meeting, and very few, if any, with equipment responsibilities today have the time to “pan” for information that might help them grow in their chosen profession.
CCJ and Power Users believe they can help in this regard by enabling readers to quickly locate technical material conducive to making better O&M decisions. As you read through the summaries here, bear in mind they are not intended to provide the “answers” you might be looking for, but rather point you to presentations by experts who can.
Think of this recap as a “TV Guide” of sorts for the web. To our knowledge, no one previously has provided such a service to the power industry. Now you can peruse a meeting’s content in sound bites and locate relevant details in a minute or two. Efficiency!
We apologize in advance for possibly overstepping our editorial bounds by identifying presentations we believe to be excellent for a particular reason (exceptional photos, for example). Our opinions are based on years of attendance at user-group meetings and in hallway discussions with knowledgeable plant personnel like you.
If you have any thoughts to share on our approach here, we would welcome them. Please drop an email to Scott Schwieger at firstname.lastname@example.org.
Rotor tooth-top cracking
- A valuable paper for all who have responsibilities involving rotor operation and maintenance.
- Two Westinghouse 68-MVA generators that after 42 years of moderate-duty operation were found with many serious problems—including extensive tooth-top cracking of the rotor-body forging (Fig 1, left).
- Two highly skilled engineers developed ingenious repairs (Fig 1, right) that are described with many excellent photos.
Radiographic inspection of phase straps (TIL-1965)
- Valuable and interesting study of use of radiographic inspection to replace OEM recommendation of stripping insulation to look for cracking of copper leads (Fig 2).
- Findings seem to indicate successful results: No cracks found, time and cost saving significant.
- But some complications, for example: (1) the large isolation area roped off may interfere with other work, and (2) stripping of insulation is still required if cracks are found.
Stator-winding resistance tests: Troubleshooting and investigation
- Excellent discussion of high DLRO (digital low-resistance ohmmeter) readings found on stator-winding copper:
- One was a true reading that led to corrective action on an incorrectly assembled bolted joint (Fig 3). This action likely prevented a catastrophic failure.
- The other was attributed to improper “homemade” DLRO leads.
- The message: Use care and good judgment when interpreting instrumentation readings.
Generator failure and subsequent extent of condition inspections
- Many slides describing extensive problems (Fig 4) found on seven combined-cycle generators with only a few years of operation.
- Problems ranged from small ones, such as minor greasing in stator windings, to large ones, such as partial core restacking, stator rewinds, and field rewinds.
- An instructive presentation.
Stator-winding collateral damage resulting from an isophase-bus fault event
- A valuable, well-illustrated presentation describing an accurate analysis of a catastrophic isophase/stator-winding failure (Fig 5).
- Well worth spending some time with this presentation following the logic illustrated in the analysis and resolution of a huge event.
Generator-field main-lead failures, MD&A
- An excellent tutorial on field-winding main-lead design, failures (Fig 6) and repairs.
- Highly valuable reading for anyone responsible for field operation and maintenance.
- Many excellent photographs and drawings.
Generator-stator core looseness and failures, AGT Services Inc
- A detailed review of serious core-looseness problems on modern GE generators for large gas and steam turbines—specifically, outside space block (OSSB) migration on 390H and 450H GTs and 324 STGs (Fig 7, left); trailing-edge end-iron looseness on 7FH2s (typically on peaking/high-cycling units); tooth loss on 7FH2s built by Mitsubishi Power (Fig 7, right).
- Concludes that “Modern GE stators need to be evaluated for core tightness frequently.”
- Very important reading for anyone responsible for reliability of the models of GE generators identified in the first bullet point.
Generator-stator bolted and brazed connection inspections, AGT Services Inc
- Tutorial on connection inspection, with emphasis on bolted designs.
- Many photographs illustrating good and poor connections (Fig 8).
- Valuable presentation for anyone with responsibilities relating to stator-winding reliability.
Unintended magnetism and corrective measures for improved maintenance programs and risk reduction, MPS Gaussbusters
- Comprehensive review of unintended (residual) magnetism in turbine/generator components.
- Many photos and sketches illustrating components and problems (Fig 9).
- A good presentation for anyone involved in overall equipment operation and repairs.
Generator retaining-ring failure case studies, MD&A
- Well-illustrated retaining-ring design and function tutorial. No retaining-ring failures.
- Discussion of Nomex tab rivet failures on certain GE generator designs (Fig 10).
- Some useful information for those responsible for operation and maintenance of generator fields.
Stator-core condition assessment with CPC 100, Omicron
- Good tutorial on stator design, duties, and fault measurement (Fig 11).
- Emphasis on measurement with CPC 100 and at higher frequency, 400 Hz.
- Valuable presentation for anyone with responsibilities for stator-core maintenance.
Rewind versus repair when commercial and technical aspects are diametrically opposed, National Electric Coil
- Detailed summary of decision process, commercial versus technical, on two stator winding failures.
- Details of the cleaning and repair work are well illustrated (Fig 12).
- Valuable reading for plant personnel responsible for generator maintenance.
Successfully rewinding GVPI generator stators, National Electric Coil
- Valuable semi-tutorial presentation on rewinding of GVPI stators (Fig 13).
- Discussion of relative merits of GVPI vs hard-coil windings.
- Useful presentation for anyone with responsibilities relating to generators.
DIG DEEPER: SAN ANTONIO, AUGUST 28 – SEPTEMBER 1
The eighth annual meeting of the Generator Users Group returns to in-person conferencing at the Power Users mega event in the San Antonio Marriott Rivercenter, August 29 through September 1, following virtual meetings in 2021 and 2020. From 2015 to 2019 in-person meetings were conducted in Las Vegas, San Antonio, Phoenix, Louisville, and St. Louis.
2022 conference overview
The technical program for the upcoming meeting was developed by an all-volunteer steering committee of electrical engineers and managers (sidebar)—many with decades of relevant experience. An overview of the presentations scheduled for the week beginning August 29 follows. All sessions are user-only; presenting vendors are allowed in the room only when it is their time to present.
Expectation is that most of this year’s presentations will be made available to owner/operators through the Power Users website a few months from now. Slide decks and many presentation recordings from previous meetings already are accessible to registered users in the Conference Archives. If you are not registered, sign up now at www.powerusers.org: It’s easy and there’s no charge.
Monday, August 29. The first morning features a training session on generator insulation systems, with Siemens, GE, and NEC participating. User presentations dominate the afternoon. Topics are vendor quality, stator-winding manufacturing issues, the sharing of lessons learned by one utility, a stator-winding installation case study on the importance of correct alignment of endwinding support system components, and diagnostics for outage planning. A roundtable discussion on materials, manufacturing, and installation challenges closes out the day’s classroom program. Doors to the vendor fair open at 5:30.
Tuesday, August 30. Vendor presentations dominate the program until the afternoon refreshment break. The lineup is as follows:
- HV stator-coil rewind, National Electric Coil.
- Accurate partial-discharge evaluation; new-bar pre-qualification, AES Kinectrics.
- Advances in fiberoptic core and coil monitoring, B-Phase.
- Isophase-bus anti-condensation measures, Electrical Builders Inc (EBI).
- CO2 purge, Airgas, an Air Liquide company.
- Troubleshooting plan using EMSA—a case study, Cutsforth Inc.
- HV bushings: Inspection, testing, manufacturing, and failure examples, MD&A.
- 7F generator issues, AGT Services Inc.
Three user presentations follow the afternoon break: Fixators are not as fixed as you’d expect, flex connectors, and main- and neutral-lead mod for Siemens modular units. A roundtable discussion on common failures and maintenance issues closes out the day’s activities.
Wednesday, August 31. Siemens presents in the morning, GE in the afternoon. Siemens topics include the following: Generator monitoring (case studies), FASTWedge, major findings in the fleet, enhancements for robotic inspections, optical flux probe, pole crossover (tutorial), generator purging, flexible connectors (tutorial).
GE’s presentation topics: Hydrogen leakage investigations, TIL-2260—MELCO-built core iron looseness, TIL-2337—Rotor core migration, TIL-2323—H65/H84 nozzle-assembly design update, H65/H84 seal-ring RCA, supply chain issues, collector-ring flashover prevention, air-cooled collector megger issues, generator fan design, robot development updates.
Thursday, September 1. User presentations and a roundtable discussion are featured until the meeting concludes at noon. Presentation topics include these: Bus-duct circulating-current issues, case history of a sudden field excitation increase, fleet issues seen by one utility, and shaft grounding-brush issues.
Steering committee, 2022
Dave Fischli, generator program manager, Duke Energy
Andres Olivares, generator specialist, Calpine
Jeff Phelps, principal engineer/generator SME, Southern Company
Joe Riebau, senior manager—electrical engineering and NERC, Constellation Power
Craig Spencer, director of outage services, Calpine
Jagadeesh Srirama, senior electrical engineer, NV Energy
Advisor: Jane Hutt, webmaster, International Generator Technical Community