To stop data centers from turning into places of chaos and waste, follow the three tenets of infrastructure planning: measure the present state, represent it accurately, and control future changes.
The data center is a complex entity. It is complex because it has many layers, each of which presents a melee of challenges of its own. From the layer that addresses the environment, through layers of real estate and physical distribution, to layers that handle platforms, infrastructure services and applications, the complexities are huge.
The data center is also always changing.
This can create challenging situations. The need of the hour is to enable controlled change in the data center, says Henry Hsu, Director, Global Power & Architecture, Raritan Inc., USA.
From proper planning to chaos
There’s an interesting behavior that data center operators demonstrate, he says. During the planning and construction of the data center, operators take a very systematic, orderly and planned approach. Detailed analyses are performed to assess kW/MW capacity, the design of the raised floor, the right cooling technology to use, how environmental variables will affect the building, and so on. The building itself will not see a whole lot of change over the years.
What does change over the years is the technology inside it. As these changes happen, the way they are addressed tends to be more casual. Decisions on cabinets, network and space requirements do not get the detailed, planned approach that the building did.
And then, as technology changes come in on a weekly and then daily basis, things take on a haphazard mode, says Hsu. Server and network connectivity provisioning, asset decommissioning, asset moves, adds and changes, growth of storage: these changes come in frequently and are dealt with in a manner that is hardly systematic or holistic.
This casual approach only leads to chaos and waste. Many data center operators appreciate this, but they themselves are overwhelmed. They have limited personnel, constrained funding, and a constant stream of projects. So they end up employing “sufficient”, de facto processes, says Hsu. This typically exacerbates the chaotic nature of data center management.
They resort to tools such as Visio or Microsoft Excel to do all their planning. Some work with a simple asset database. There are security processes in place, but as the number of changes goes up, many do not get enforced.
Inability to optimally deploy change
What does this mean? The data gathered is hardly real-time or accurate. Managers do not get to see the current state of operations. Often they need to walk the data center aisles to look at the existing connections and loads, making assessments based on assumptions and decisions based on little data.
What is the power demand or capacity? What are the dependencies in the power chain? What are the details of each asset: where is it located, how is it being utilized, and who is the owner? How much rack space is available? What kind of network connectivity does the asset have? How much weight is it handling? What are the environmental parameters: temperature, humidity, air flow?
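The questions above map naturally onto a per-asset record that a DCIM-style tool would keep current. A minimal sketch in Python; all field names here are hypothetical and purely for illustration, not from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One tracked data center asset (hypothetical schema)."""
    asset_id: str
    owner: str             # who is responsible for the asset
    location: str          # e.g. "Row 4, Rack 12, U20"
    rack_units: int        # vertical rack space consumed, in U
    weight_kg: float       # contributes to rack/floor weight load
    power_draw_w: float    # measured power demand, not nameplate
    power_feeds: list[str] = field(default_factory=list)    # upstream PDU/UPS chain
    network_ports: list[str] = field(default_factory=list)  # switch-port connections

@dataclass
class RackEnvironment:
    """Environmental parameters sampled at the rack (hypothetical schema)."""
    temperature_c: float
    humidity_pct: float
    airflow_cfm: float
```

Keeping records like these accurate and up to date, rather than scattered across spreadsheets, is precisely what lets a manager answer the questions above without walking the aisles.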
If a data center manager doesn’t have accurate information about each of these, the decisions they make are far from optimal. And if a manager makes sub-optimal decisions and sub-optimal changes in the present state, how can one expect the data center’s future to be great, asks Hsu. He says the primary inhibitor to a data center’s future is frequently the data center’s present state.
Towards better infrastructure planning
So how can data center operators work so that the future isn’t a story of bigger chaos than today? Hsu speaks of three key tenets to follow: measure the present state, represent it accurately, and control future changes.
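As a toy illustration of how the three tenets fit together (a sketch with entirely hypothetical names, not any vendor’s API): measurement feeds the model, the model is kept in sync, and every change is gated against it.

```python
def measure(rack_sensor_readings_w):
    """Tenet 1: capture the actual load from instrumentation, not assumptions."""
    return sum(rack_sensor_readings_w)  # total measured watts on the rack

def represent(model, rack_id, measured_w):
    """Tenet 2: keep the capacity model in sync with what was measured."""
    model[rack_id]["used_w"] = measured_w
    return model

def control(model, rack_id, new_asset_w):
    """Tenet 3: approve an add only if the up-to-date model shows headroom."""
    rack = model[rack_id]
    return rack["used_w"] + new_asset_w <= rack["capacity_w"]

# Usage: measure three servers, update the model, then gate two proposed adds.
model = {"rack-12": {"capacity_w": 5000, "used_w": 0}}
model = represent(model, "rack-12", measure([1200, 900, 750]))
control(model, "rack-12", 800)   # True: 2850 + 800 W fits within 5000 W
control(model, "rack-12", 2500)  # False: the add would exceed rack capacity
```

The point of the sketch is the ordering: without the measure and represent steps, the control step is just guesswork on stale data, which is exactly the failure mode Hsu describes.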
Being able to accurately understand the operational capacity of your data center has an extremely high ROI, says Hsu (see image).
Data centers are complex and ever changing. But taking the right planned approach and using the right set of tools can turn the facility from a place of chaos into one of optimized efficiency.