Airports need to check their critical systems to make sure they are able to cope with rising temperatures.
It’s no secret that airports around the world depend on electrical and electronic components to run their critical processes 24/7.
Everything from reservations and check-in, to baggage handling, air traffic control (ATC) and control rooms, noise monitoring and apron navigation, and even systems such as airside lighting, depends on automation and IT equipment such as servers, UPS batteries, PLCs and inverter drives.
Without the continued smooth and efficient operation of these processes, things would grind to a halt pretty quickly – and not just at that airport, but at other airports too, through the knock-on effect of any disruption and delays.
Providing the optimum conditions for this equipment’s safe and continued operation should surely then be considered essential?
Let me take one critical element, which is the environment – in particular the temperature – inside the enclosure in which all the components are installed. This kind of electrical equipment is very sensitive to heat and, as we come into summer and ambient temperatures start to rise, it’s a good time for airports to check their critical systems and processes and see what temperatures they are operating in. If the temperatures are too high, it could badly affect their operation. Even more worryingly, if they overheat, the components will trip and, ultimately, fail, which means the processes they control could stop without warning. Not what you want on a busy departure day in the August heat.
Despite the obvious risk and the impact it would have on the airport, this is still, apparently, insufficient reason for some airports to run these checks and, where necessary, install cooling equipment to protect their systems. The cost of the equipment to cool a server, plus the energy required to operate it effectively, may simply be weighed against the prolonged service life of the installed components. Viewed in that narrow context, cooling can look unnecessary.
To me, that rather misses a more pressing point.
To base the decision whether or not to install climate control purely on the cost of component replacement ignores the far greater potential cost of systems’ downtime, and the huge reputational impact such failure could have for the airport – as well as the chaos that will ensue.
It’s a no-brainer, surely?
Options for cooling a system
As a rule of thumb, it’s accepted that keeping the temperature inside a cabinet below a maximum of 35 degrees Celsius should prevent most electrical equipment from tripping and extend its life expectancy.
Heat within an enclosure is produced by the installed equipment itself and also by the environment it’s sited in. A drive with a rated output of 150kW can produce as much as 4.5kW of heat, much (or even all) of which will be trapped within the enclosure. Similarly, if you put an enclosure in a hot location (say, in full sun), it’s going to warm up. The effect of this can be reduced by using double-walled enclosures which, by their construction, simply reduce solar gain.
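The drive example above is simple arithmetic: the heat dissipated into the enclosure is the rated output multiplied by a loss fraction, around 3 per cent in this case (150kW × 0.03 = 4.5kW). A minimal sketch of that sum – the function name and default loss figure are my own illustration, not a manufacturer’s specification:

```python
def drive_heat_loss_kw(rated_output_kw, loss_fraction=0.03):
    """Estimate the heat (kW) a drive dissipates into its enclosure.

    The 3% default loss fraction is inferred from the article's example:
    a 150 kW drive producing as much as 4.5 kW of heat. Real losses vary
    by device and load, so check the manufacturer's datasheet.
    """
    return rated_output_kw * loss_fraction

print(drive_heat_loss_kw(150))  # the article's 150 kW drive example
```

Summing this figure across every installed component gives a first estimate of the cooling duty the enclosure needs, before any allowance for solar gain.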
Where the equipment is located in relatively cool surroundings, ventilating systems such as fan-and-filter units (or air-to-air heat exchangers if the air isn’t particularly clean) can be used to remove the heat that, in this instance, is largely created by the equipment itself.
If the atmosphere outside the enclosure is particularly warm, dusty, oily or humid, then a refrigerant or water-based cooling solution such as cooling units, air-to-water heat exchangers or Liquid Cooling Packages (LCPs), may be a better alternative.
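The selection logic in the two paragraphs above can be captured as a rough decision sketch. This is a heuristic only, using my own function and label names – real equipment selection needs a proper thermal survey and sizing calculation:

```python
def suggest_enclosure_cooling(ambient_hot, air_dirty, air_oily_or_humid):
    """Rough first-pass cooling choice, following the article's rules of thumb.

    ambient_hot:       surroundings are particularly warm
    air_dirty:         air carries dust but is otherwise benign
    air_oily_or_humid: air carries oil mist or high humidity
    """
    if ambient_hot or air_oily_or_humid:
        # Closed-loop solution: cooling unit, air-to-water
        # heat exchanger, or Liquid Cooling Package (LCP)
        return "refrigerant or water-based cooling"
    if air_dirty:
        # Closed-loop air exchange keeps contaminants out of the enclosure
        return "air-to-air heat exchanger"
    # Cool, clean surroundings: simple ventilation is enough
    return "fan-and-filter unit"

print(suggest_enclosure_cooling(False, False, False))
print(suggest_enclosure_cooling(True, False, False))
```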
IT equipment has its own specialist cooling systems. The datacentres that house the equipment needed for ATC and support control room operations can be cooled through:
- Evaporative cooling
- Computer room air conditioning (CRAC) units, or
- Liquid Cooling Packages (LCPs)

LCPs are in-row cooling devices that are integrated between server racks and direct air through them more effectively than room-based schemes. They can remove up to 55kW of heat and have been used to cool equipment for ATC applications that is being tested prior to installation in the main datacentre.
Cold and/or hot aisle containment may also be used to more effectively manage air flow and further improve the efficiency of datacentre cooling solutions.
Updating cooling equipment
It’s also worth appreciating that the energy used to cool an electrical enclosure or server rack is almost always far less than that consumed by the equipment installed in it.
Recent innovations have dramatically improved the efficacy of enclosure cooling units. The use of speed-controlled components and heat pipe technology in the refrigerant circuit has had a significant impact on the financial argument over whether or not to install enclosure cooling.
Theoretical energy savings of 70 per cent have been surpassed in practice, with 95 per cent being achieved at a manufacturer in the aerospace industry, lending weight to the argument for replacing conventional cooling units.
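To put those percentages in concrete terms, here is the underlying sum. The 8,000kWh annual consumption figure is purely illustrative (my own assumption, not from the article); the 70 and 95 per cent saving fractions are the figures quoted above:

```python
def annual_saving_kwh(old_unit_kwh_per_year, saving_fraction):
    """Energy saved per year by replacing an older cooling unit.

    old_unit_kwh_per_year: assumed annual consumption of the old unit
    saving_fraction:       e.g. 0.70 theoretical, 0.95 best observed
    """
    return old_unit_kwh_per_year * saving_fraction

# Illustrative only: assume an older unit draws 8,000 kWh/year
print(annual_saving_kwh(8000, 0.70))
print(annual_saving_kwh(8000, 0.95))
```

Multiplying the saving by the local electricity tariff gives a simple payback estimate for a replacement unit.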
Audit your systems to uncover issues
The suggestion that ‘the system seems to be working fine, so why change it?’ is one I’ve heard many times, but it’s not a reason to do nothing. The effects of insufficient cooling are typically not apparent immediately, so airport operators may be unknowingly storing up problems for the future – although they might wonder why their components fail more often in the heat.
A sensible first step would be to get a specialist company to run checks on your systems, review the service requirements of any cooling solutions, recommend a solution if none is present, and undertake thermal surveys on enclosures.
Enclosure cooling might not eradicate equipment failures completely, but it makes them less frequent and allows a more managed approach to component replacement, so that unplanned downtime is kept to a minimum.
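A first pass at the kind of audit described above could be as simple as checking logged enclosure temperatures against the 35 degrees Celsius rule of thumb from earlier. A sketch, assuming temperature logs are available per enclosure (the data format and names here are invented for illustration):

```python
MAX_SAFE_C = 35.0  # rule-of-thumb ceiling from earlier in the article

def flag_hot_enclosures(readings):
    """readings: dict mapping enclosure name -> list of logged temps (deg C).

    Returns the names of enclosures whose peak logged temperature
    breaches the 35 C ceiling and so warrant a thermal survey.
    """
    return [name for name, temps in readings.items()
            if max(temps) > MAX_SAFE_C]

# Invented example data
logs = {
    "check-in servers": [28.5, 31.0, 33.2],
    "baggage PLC panel": [30.1, 36.4, 38.0],  # runs too hot
}
print(flag_hot_enclosures(logs))
```

Anything this flags is a candidate for the specialist thermal survey, rather than proof of a fault in itself.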