Server Room Cooling Basics: Temperature and Humidity Targets (and Why 18 to 27°C Matters)
If you are responsible for a server room, comms room, or small data hall, your job is not to make the room “feel cool”. Your job is to keep equipment operating within safe environmental limits, continuously, and with enough resilience that a single fault does not become downtime.
This guide explains what the 18 to 27°C target really means, how to measure it properly, what humidity targets are trying to prevent, and how to design or improve airflow and cooling so rack inlets stay stable. You will also get practical checklists for sizing inputs, monitoring points, commissioning, and maintenance.
What “server room cooling” actually means
Server rooms behave differently from offices.
Office cooling is often driven by people and solar gain. Server room cooling is driven by electrical load. When IT equipment is powered on, it produces heat regardless of the season. That means you typically need cooling all year, including winter, even when the rest of the building would rather be heating.
This is why server room cooling is best treated as a controlled technical environment, not a comfort space. The risk profile is different, too. A warm meeting room is uncomfortable. A warm rack inlet can cause throttling, unexpected shutdowns, hardware damage over time, or a cascade of failures during peak load.
What you are protecting: inlet conditions, not “room average”
The most common mistake in small server rooms is controlling a single wall thermostat somewhere “convenient”. The equipment does not care about that reading. It cares about the air it pulls into the front of the servers and network gear.
Your cooling system, airflow layout, and monitoring should therefore focus on the air entering racks (or cabinets), and on preventing recirculation where hot exhaust air sneaks back into the inlet side.
Typical failure patterns you can avoid with the right basics
- Hotspots: overall room temperature seems acceptable, but one rack inlet runs hot due to poor airflow paths.
- Short-circuiting: cold supply air returns directly to the unit without passing through equipment, while racks starve.
- Humidity extremes: very dry air increases electrostatic discharge risk, while high moisture increases condensation and corrosion risk.
- Single point of failure: one cooling unit or one condensate route fails, and the room temperature rises quickly.
Targets first: the 18 to 27°C headline and how to apply it
Why 18 to 27°C matters
Industry guidance has evolved. Earlier recommendations were often tighter. Modern guidance recognises that reliable operation can be maintained across a broader range, provided the environment is controlled, and the equipment class and layout are understood.
The practical takeaway is simple: treat 18 to 27°C as the widely referenced recommended operating band for IT equipment air intake, then design your monitoring and airflow so rack inlets stay within that range even as load changes.
If you want to read the underlying guidance directly, see the ASHRAE TC9.9 guidance on recommended ranges.
Recommended vs allowable: do not confuse the two
Many documents show both “recommended” and “allowable” conditions. This is where teams get into trouble.
- Recommended is the band you should aim to hold as normal operation because it supports stability and reduces risk.
- Allowable is more about tolerance. It may be acceptable for excursions or specific equipment classes, but it is not a sensible everyday operating target for most mixed environments.
In practical terms, if your controls are set to the upper edge of “allowable” with limited monitoring, you might still be technically “within spec” while creating hotspot and reliability problems. Keep “allowable” in your pocket for risk assessment, not as a comfort blanket.
Where to measure temperature (the detail that changes everything)
If you only take one action from this guide, take this one: measure and control at the point that matters.
Guidance for ICT rooms explicitly notes that temperature should be measured at the front of racks in the cold aisle. That aligns with how server cooling actually works: front-to-back airflow through equipment.
Practical implementation options:
- Entry-level: place calibrated sensors at the front of key racks (top, middle, and bottom positions), then trend readings.
- Better: add multiple inlet sensors across the cold aisle, especially for dense racks or known hotspots.
- Best: integrate environmental monitoring into your building controls or DCIM-style monitoring so alarms trigger on inlet conditions, not room averages.
Turning targets into operational limits
Targets only work if they lead to action. Instead of a single “temperature is high” alarm, define limits that reflect how your room behaves:
- Normal operating band: the range you expect rack inlets to remain within most of the time.
- Warning threshold: an early indicator that load has changed or airflow has degraded.
- Critical threshold: a point where you trigger escalation, investigate immediately, and consider temporary mitigation (load shedding, failover, or additional cooling).
Your exact thresholds depend on equipment mix, redundancy, and how quickly the room temperature rises after a fault. The important point is that limits should be based on rack inlet measurements and trend behaviour, not on a single thermostat reading.
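To make this concrete, here is a minimal sketch of inlet alarm classification in Python. The warning and critical values below are illustrative assumptions, not recommendations; derive yours from your equipment mix, trend data, and how quickly the room heats after a fault.

```python
# Minimal sketch of inlet alarm classification. Threshold values are
# illustrative assumptions only; set yours from trend data and fault tests.

NORMAL_MAX = 27.0    # upper edge of the recommended inlet band (degC)
WARNING_MAX = 30.0   # example early-warning threshold (assumption)

def classify_inlet(temp_c: float) -> str:
    """Return an alarm state for a single rack inlet reading."""
    if temp_c <= NORMAL_MAX:
        return "normal"
    if temp_c <= WARNING_MAX:
        return "warning"   # load change or airflow degradation: investigate
    return "critical"      # escalate and consider mitigation (load shed, failover)

# Alarm on the worst inlet reading, never the room average.
readings = {"rack-A-top": 24.5, "rack-B-top": 29.1, "rack-C-mid": 25.0}
worst = max(readings, key=readings.get)
print(worst, readings[worst], classify_inlet(readings[worst]))
# rack-B-top 29.1 warning
```

Note the design choice: the alarm keys off the worst inlet, not the average, which is exactly the distinction this guide keeps making.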
Humidity control without myths: dew point, RH, and what “safe” looks like
Why both low and high humidity are risky
Humidity is routinely ignored in small server rooms because “the air feels fine”. That approach is risky.
- Too dry: increases the likelihood of electrostatic discharge events, particularly when people are moving around and touching equipment.
- Too humid: increases the risk of condensation during cooling transients and can accelerate corrosion and contamination issues over time.
Humidity risk is not just about comfort. It is about electrical reliability and long-term asset protection.
Dew point vs relative humidity: use the right mental model
Relative humidity (RH) changes when temperature changes, even if the moisture content stays the same. Dew point is often a better indicator of actual moisture content in the air.
A simple way to think about it:
- RH is “how full of water vapour the air is” relative to what it could hold at that temperature.
- Dew point is the temperature at which moisture would begin to condense.
In practical server room terms, if you control only temperature, RH can drift. If you control only RH without understanding dew point, you can still end up with conditions that increase condensation risk during transient events.
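If your sensors report temperature and RH but not dew point, you can approximate it yourself. Here is a minimal sketch using the widely published Magnus approximation (the coefficients below are common published values; the accuracy is adequate for monitoring, not for calibration):

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Approximate dew point (degC) from temperature and relative humidity
    using the Magnus formula. Good enough for trending, not metrology."""
    a, b = 17.62, 243.12  # Magnus coefficients, valid roughly -45 to +60 degC
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# The same 50% RH at two temperatures corresponds to two different
# moisture contents (dew points) - which is why RH alone can mislead.
print(round(dew_point_c(27.0, 50.0), 1))  # ~15.7 degC
print(round(dew_point_c(20.0, 50.0), 1))  # ~9.3 degC
```

The example output makes the point from the mental model above: identical RH readings at different temperatures represent different amounts of moisture, so trending dew point gives you a steadier picture during transients.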
Practical humidity targets you can actually operate to
You do not need to memorise complicated psychrometrics to set sensible controls. A practical interpretation of common guidance is:
| Parameter | Practical target concept | Why it matters |
| --- | --- | --- |
| Temperature (rack inlet) | Keep rack inlet temperatures within the recommended 18 to 27°C band | Reduces hotspot risk and supports reliable operation |
| Upper moisture limit | Keep moisture controlled so you avoid high RH and high dew point conditions | Reduces condensation and corrosion risk |
| Lower moisture limit | Avoid very dry conditions | Reduces electrostatic discharge risk |
If you need an external reference for the relationship between recommended ranges and humidity limits, the SEAI guide includes a useful practical summary of air intake temperature and moisture parameters for ICT rooms: SEAI server room energy efficiency guidance.
How cooling systems change humidity (often unintentionally)
Even if you never installed humidification equipment, your server room humidity can still swing because:
- Cooling removes moisture: when air passes over cold coils, water can condense out (dehumidification).
- Outside air infiltration: leaky doors, cable penetrations, or shared ceiling voids can introduce humid air.
- Overcooling then reheating: some control strategies dry the air, then heat it back up, which can create very low RH.
The practical message: treat humidity as a monitored variable. If you see repeated drift outside sensible bands, address the causes (infiltration, setpoints, control strategy) rather than chasing symptoms.
Heat load and sizing basics (without overcomplicating it)
Start with the heat source: IT electrical load
In a typical server room, most of the electrical energy consumed by IT equipment becomes heat in the room. That makes electrical load a sensible starting point for estimating cooling demand.
A practical sizing conversation usually needs three numbers:
- Current IT load: measured or estimated from UPS, PDUs, or sub-metering.
- Expected growth: realistic forecast over the next 12 to 36 months (or your refresh cycle).
- Resilience requirement: what happens if one unit fails (and how quickly you must recover).
What else contributes to the load (often overlooked)
Small rooms can be distorted by “non-IT” loads that are ignored in quick estimates:
- UPS losses: UPS systems and power distribution equipment generate heat.
- People and lighting: intermittent, but can matter in small rooms with poor airflow.
- Neighbouring heat gains: risers, plantrooms, or roof spaces can add summer gains.
If your room is close to the limit already, these “extras” are often the difference between stable operation and repeated high-temperature alarms.
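To show how these inputs combine, here is a back-of-envelope sizing sketch. Every number in it is an example assumption; a survey supplies the real values, and a designer confirms the unit selection and redundancy approach.

```python
# Rough-order sizing sketch. All inputs are example assumptions; a proper
# survey replaces every number here. Units are kW throughout.

it_load_kw = 12.0         # measured from UPS output / PDUs (assumption)
growth_factor = 1.3       # e.g. 30% growth over the planning horizon
ups_loss_fraction = 0.08  # UPS + distribution losses as a fraction of IT load
other_gains_kw = 1.0      # people, lighting, neighbouring spaces (allowance)

design_heat_kw = it_load_kw * growth_factor * (1 + ups_loss_fraction) + other_gains_kw
print(f"Design heat load: {design_heat_kw:.1f} kW")  # ~17.8 kW

# N+1 example: two units, each sized to carry the full load alone,
# so a single unit failure does not exceed capacity.
units = 2
unit_capacity_kw = design_heat_kw  # each unit covers the whole room
print(f"{units} units of {unit_capacity_kw:.1f} kW each (N+1)")
```

The arithmetic is trivial on purpose: the value of the exercise is in measuring the inputs honestly, which is what the survey below is for.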
Why a survey changes the answer (and usually saves money)
The biggest mistakes in server room upgrades come from guessing. A good survey does not just count racks. It identifies:
- Actual load and how it is distributed (one dense rack vs many light racks)
- Airflow paths, obstructions, and recirculation zones
- Where sensors should be placed to reflect true inlet conditions
- What redundancy approach is realistic in your space
- Constraints: noise, external unit siting, condensate routes, power availability
If you want a structured starting point for a local site assessment, use the free survey request form to outline your current setup and constraints before anyone quotes solutions blindly.
Airflow design that prevents hotspots
Hot aisle and cold aisle fundamentals
The simplest airflow design principle is separation: keep cool supply air on the inlet side and hot exhaust air on the discharge side. Guidance for ICT rooms describes arranging racks in alternating rows with cold air intakes facing one way and hot exhausts facing the other, creating cold aisles and hot aisles.
This is not “data centre theatre”. It is how you stop equipment from breathing its own exhaust.
Containment and small-room equivalents
Full containment is not always practical in a small server room, but you can still apply the same logic:
- Blanking panels: fit them in racks to stop hot air from recirculating through empty U spaces.
- Cable management: seal large penetrations that allow hot air to bypass back into the cold side.
- Door discipline: keep doors closed and limit propping open, especially when outside air is humid.
- Keep returns clear: avoid stacking boxes or spare kit where return air needs to flow.
Supply and return paths: avoid short-circuiting
Many cooling units can deliver enough capacity on paper, yet still fail to keep the rack inlets stable because supply and return air mix in the wrong places.
Common patterns to watch for:
- Supply too high, returns too low: cold air never reaches the rack inlets properly.
- Return too close to supply: the unit “sees” cold air and ramps down while racks run hot.
- No defined pathway: air takes the easiest route, not the route you intended.
A good design forces supply air through the cold side first and pulls return air from the hot side, even in a small space.
Cooling system options for server rooms
First question: is it an IT closet or a true server room?
Not every IT space needs the same solution. One practical dividing line is load and density.
Guidance for ICT closets suggests that once loads exceed a certain point, a dedicated cooling system becomes the sensible recommendation rather than trying to borrow comfort cooling from adjacent office systems. The same guidance also cautions against ceiling-mounted split arrangements in certain configurations because mixing of hot and cold air streams reduces effectiveness.
Split systems and small-room setups
In smaller spaces, a properly designed split system can work well if it is:
- selected and positioned to support cold aisle delivery and hot aisle return paths
- controlled using inlet-representative sensors (not just a wall thermostat)
- installed with reliable condensate drainage and alarms
- maintained on a schedule that fits a 24/7 duty
If you are considering or upgrading this approach, start with a specialist server room assessment rather than a standard comfort cooling quote. The server room installation pathway differs in how targets, controls, and redundancy are designed. See server room air conditioning and climate control options.
Multi-split, VRF/VRV, and why controls matter more than the badge
Multi-split and VRF/VRV systems can be suitable where you need multiple indoor units, zoning, or staged capacity. However, the success factor is rarely the brand name. It is:
- how well the system maintains rack inlet temperatures under varying load
- how it behaves during partial failure (and whether remaining units can stabilise conditions)
- whether you can monitor and alarm the right points, with useful trend data
A “big” system with poor airflow design will still produce hotspots. Do not buy capacity before you fix air distribution.
Close control and higher-density rooms
As density increases, close control approaches (and in-row options) become more relevant because they are designed for tight environmental control at the equipment intake. In these environments, you normally pair:
- clear separation of cold and hot air streams (containment or strong aisle discipline)
- multiple sensors and trend-based control (not single-point control)
- resilience planning (how you handle unit failure)
If you are approaching these design problems, you are already beyond “generic air conditioning”. You want a server-room specific design conversation.
Reliability: redundancy, monitoring, and failure planning
Resilience in plain English: what you should decide up front
Most downtime incidents are not “mystery failures”. They are predictable outcomes of a room that was designed with no failure planning.
Decide these points explicitly:
- What happens if the main cooling unit fails? How long until inlets exceed the target? (A rough-estimate sketch follows this list.)
- Can you tolerate a single failure? If not, you need redundancy or temporary mitigation.
- How is failure detected? Alarm on inlet temperature and unit status, not on “it feels warm”.
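To put rough numbers on "how long until inlets exceed the target", here is a worst-case sketch that considers only the room's air volume. All inputs are example assumptions, and real rooms gain extra time from the thermal mass of equipment and building fabric, so treat the output as a floor, not a prediction.

```python
# Rough air-only estimate of temperature rise after total cooling failure.
# Ignores thermal mass of equipment and fabric and assumes well-mixed air,
# so it understates the real time available. All inputs are assumptions.

room_volume_m3 = 40.0   # e.g. a 4 m x 4 m x 2.5 m room
heat_load_kw = 10.0     # total heat rejected into the room
allowed_rise_c = 5.0    # e.g. from a 25 degC inlet up to a 30 degC limit

AIR_DENSITY = 1.2       # kg/m3, approximate at room conditions
AIR_CP = 1005.0         # J/(kg.K), specific heat capacity of air

energy_j = room_volume_m3 * AIR_DENSITY * AIR_CP * allowed_rise_c
seconds = energy_j / (heat_load_kw * 1000.0)
print(f"Air-only time to +{allowed_rise_c:.0f} degC: ~{seconds:.0f} s")  # ~24 s
```

Even allowing for the thermal mass the sketch ignores, the answer is minutes rather than hours, which is why failure detection and escalation must be automatic.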
Monitoring points that give you actionable alarms
A minimal but effective monitoring setup typically includes:
- Rack inlet temperature sensors: across the cold aisle and at representative heights.
- Humidity trend: so you can detect drift and respond before extremes develop.
- Condensate and leak risk: alarms for high water level or leaks, especially around pumps and drainage routes.
- Unit status: run/fault state and, where possible, alerting on abnormal cycling.
The aim is not “more data”. It is a smaller number of signals that reliably indicate risk to equipment inlet conditions.
Operational discipline: small changes that prevent big incidents
- Change control: treat rack moves and cable changes as airflow changes, because they are.
- Housekeeping: keep returns and supply routes clear, and reduce dust load.
- Door management: prevent uncontrolled infiltration, especially in humid weather.
- Spare parts strategy: filters, condensate pump components, and control spares reduce recovery time.
If you want examples of how a structured approach translates into real-world installations and long-term support, review the case studies across Bristol and the South West.
Commissioning and maintenance for stable performance
Commissioning checklist (what you should verify)
Commissioning is where server room cooling succeeds or fails. A basic commissioning checklist should cover:
- Sensor placement: verify inlet sensors represent true equipment intake temperatures.
- Control logic: confirm the unit responds correctly to inlet changes and does not short-cycle.
- Airflow validation: confirm cold air reaches inlets and hot air returns to the correct side.
- Condensate routing: confirm falls, pumps (if used), and alarm behaviour.
- Alarm testing: verify alerts reach the right people with the right severity levels.
Maintenance tasks that prevent common incidents
Server room cooling runs harder than typical comfort systems. Maintenance is not a “nice to have”.
A sensible maintenance plan will typically include:
- filter inspection and replacement, appropriate to dust conditions and duty cycle
- coil cleanliness checks to prevent reduced airflow and degraded heat transfer
- drain and condensate pump checks to prevent water leaks
- controls and sensor verification, so you are not flying blind on bad readings
If you are operating reactively (only calling when something fails), that is often a sign you have no resilience margin. Planned servicing reduces the chance that a basic issue becomes downtime. See service and maintenance support.
When planned maintenance becomes the sensible default
If your server room supports critical operations, your default position should be that environmental control is a managed asset. Planned maintenance becomes particularly important when:
- there is no spare cooling capacity during a failure
- you have had repeated issues with leaks, alarms, or performance drift
- the room runs close to the upper temperature threshold during normal operation
Common mistakes and fast fixes
“The room temperature is fine”, but the racks still overheat
This is almost always airflow, measurement, or both. Fixes:
- move monitoring to the rack inlets and trend it
- check for recirculation paths (empty rack spaces, open cable cut-outs)
- confirm supply air reaches the cold side, not the return
Portable AC as a permanent strategy
Portable units are often added during emergencies and then never removed. Risks include inadequate capacity, unreliable drainage, and poor airflow delivery. If a portable unit is still in place months later, treat that as a sign that the room needs a proper assessment and a designed solution.
Condensate issues and avoidable water damage
Water incidents are usually caused by simple failures: blocked drains, poorly positioned pumps, or no alarm strategy. Fixes:
- ensure drainage routes are designed for the duty cycle
- use alarms on high-level conditions where failure would be costly
- include drainage and pump checks in routine maintenance
No trend data, no early warning
If you cannot see temperature and humidity trends, you will only discover problems when they are already severe. Even a basic logging approach changes this, because you can identify gradual airflow degradation or seasonal humidity drift early.
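A logging approach does not need a DCIM platform to be useful. Here is a minimal sketch of drift detection on hourly inlet readings; the window lengths and threshold are illustrative assumptions to tune against your own trend data.

```python
# Minimal sketch: spot gradual drift by comparing a recent average against
# a longer baseline. Window sizes and threshold are assumptions; tune them.

from statistics import mean

def drifting(history: list[float], recent_n: int = 24,
             baseline_n: int = 24 * 7, threshold_c: float = 1.0) -> bool:
    """True if the recent average inlet temperature has crept above the
    longer-term baseline by more than the threshold (degC)."""
    if len(history) < baseline_n + recent_n:
        return False  # not enough data yet
    baseline = mean(history[-(baseline_n + recent_n):-recent_n])
    recent = mean(history[-recent_n:])
    return (recent - baseline) > threshold_c

# Hourly readings: a stable week, then a slow climb (e.g. a clogging filter).
readings = [24.0] * (24 * 7) + [24.0 + 0.1 * i for i in range(24)]
print(drifting(readings))  # True: investigate before it becomes an alarm
```

The point is not this particular heuristic; it is that any trended inlet data lets you catch slow degradation long before a single-point alarm would fire.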
Planning your next step (survey, upgrade, or new install)
What to prepare before a site survey
A faster, more accurate survey happens when you prepare these items:
- rack list and layout (including high-density racks)
- available electrical data (UPS output, PDU readings, sub-metering if available)
- current issues (hotspot locations, leak history, nuisance alarms)
- operational constraints (working hours, access, noise limits, external unit siting constraints)
- growth expectations (new hardware, expansion plans, refresh cycle)
Minimising disruption during installation
A server room upgrade is usually an operational project, not just an installation project. Good planning includes:
- agreeing on downtime windows (or designing temporary cooling and staging)
- sequencing work so critical racks remain protected
- commissioning in a way that proves performance at the rack inlets
Acceptance criteria and documentation you should ask for
Before you sign off on any work, ask for:
- as-fitted drawings or clear documentation of equipment and controls
- sensor locations and what they represent (inlet vs room average)
- alarm settings and escalation paths
- a maintenance plan aligned to the duty cycle
If you want to discuss a server room cooling upgrade, survey, or resilience plan in Bristol or the South West, use the contact page to outline your room size, load, and risk concerns.
Summary
Server room cooling is about protecting equipment inlet conditions, not making the space “feel cold”. Treat 18 to 27°C as the key recommended temperature band for rack inlets, and build your monitoring and airflow layout around the cold aisle and rack-front measurements. Control humidity to avoid extremes, because both very dry and very moist air increase risk in different ways. Most persistent issues come from poor airflow paths, incorrect sensor placement, or lack of maintenance rather than from a lack of nominal cooling capacity. If your room is above the small-closet scale or close to its limits, a structured survey is the fastest way to reduce downtime risk and avoid spending on the wrong fix.
Frequently Asked Questions
What temperature should a server room be?
Focus on the temperature at the front of the racks (the inlet side). A commonly referenced recommended band is 18 to 27°C at the equipment air intake. Avoid relying on a single wall thermostat.
Where should temperature sensors be placed?
Place sensors at rack inlets in the cold aisle, ideally across multiple racks and heights. This reflects what the equipment actually experiences and helps you detect hotspots early.
Do I need to control humidity in a small server room?
At a minimum, monitor it. Humidity can drift due to cooling coil behaviour and infiltration. Very dry air can increase electrostatic risk, and high moisture increases condensation and corrosion risk.
Is a ceiling-mounted split system OK for a server room?
It can be problematic if it causes mixing of hot and cold air streams, leading to poor inlet control. The airflow design and sensor strategy matter as much as the unit capacity.
When should we move to planned maintenance?
If the room supports critical operations, if you have had repeat faults (leaks, alarms, performance drift), or if you have little margin during a failure, planned maintenance is usually the safer operating model.
How do I know if I need a dedicated server room cooling system?
If loads are significant, the room runs near the upper limit, or the business impact of downtime is high, a dedicated design and resilience plan is usually justified. A site survey is the quickest way to confirm.