Siemon 10G ip™ Data Centre Solution: Data Centre Trends, Components, Planning, Equipment and Cabling
The term data centre means different things to different people. Some would argue that the data centre is simply the room where the servers are stored; others visualize quite a different picture. It is true that at one time the data centre was little more than a secure server room. However, with today's technological advances and data-centric businesses, the term is better expressed as a "mission critical data centre". Business models have come full circle from centralized data centres to decentralized and now back to centralized. Businesses realize that data is their strongest asset and, as such, are making strides to assure its availability, security and redundancy.
The data centre concept has also grown into its own business model. Companies that provide redundant and offsite storage for other companies are building state-of-the-art facilities on a global scale. At the heart of these facilities is the IT infrastructure. This paper addresses the infrastructures and components of a data centre. Whether a company implements all or part of these components, one core element always remains: the cabling system infrastructure. This planning guide is designed to provide you with a basic roadmap for your data centre.
Data Centre Trends
According to Infonetics Research's latest North American data centre market study, combined data centre services and products are projected to grow 47%, from $10.6 billion to $15.6 billion, between 2003 and 2007. Data centres can represent 50% of an organization's IT budget. These data centres house the data for ERP (Enterprise Resource Planning) applications, e-commerce applications, SCM (Supply Chain Management), CAD/CAM, rich media, video/voice/data convergence and B2B (Business to Business) applications, along with the back-office applications on which companies run. The communications mechanisms for these applications vary, but the critical element of data uptime does not change. According to IT Week (January 12, 2005), a survey of 80 large US companies conducted by analyst firm Infonetics the previous year indicated an average of 501 hours of network downtime per year, which cost them almost four percent of their revenue, totalling millions of dollars. In separate research, analyst firm Gartner estimated that a typical business experiencing an average of 87 hours of downtime a year can incur total losses exceeding $3.6m.
It is not difficult to see that downtime translates directly into dollars, and lots of them. Companies that provide data centre components and equipment are sensitive to this and have made great strides in providing viable, robust solutions for companies' growing data stores and requirements.
Components of a Data Centre
Data centres comprise high-speed, high-demand networking and communication systems capable of handling the traffic for SANs (Storage Area Networks), NAS (Network Attached Storage), file/application/web server farms and other components located in the controlled environment. Environmental control covers humidity, flood, electrical, temperature and fire protection and, of course, physical access. Communication in and out of the data centre is provided by WAN, CAN/MAN and LAN links in a variety of configurations, depending upon the needs of the particular centre.
A properly designed data centre will provide availability, accessibility, scalability and reliability 24 hours a day, 7 days a week, 365 days per year, minus any scheduled downtime for maintenance. Telephone companies strive for 99.999% uptime, and the data centre is no different.
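To put the 99.999% figure in perspective, the downtime budget implied by an availability target is simple arithmetic. A minimal sketch (the availability values shown are illustrative targets, not figures from this paper):

```python
# Allowable unscheduled downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability: float) -> float:
    """Return the allowable downtime per year, in minutes."""
    return MINUTES_PER_YEAR * (1 - availability)

for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.3%} uptime -> {downtime_minutes(target):.1f} min/year")
```

At "five nines" the entire annual downtime budget is roughly five minutes, which is why unplanned outages measured in hours are so costly.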
There are two basic types of data centres: corporate and institutional data centres (CDCs) and Internet Data Centres (IDCs). CDCs are maintained and operated from within the corporation, while IDCs are operated by Internet Service Providers (ISPs). ISPs provide third-party web sites, colocation facilities and other data services for companies, such as outsourced email.
Critical data centres are monitored by a NOC (Network Operations Centre), which may be in-house or outsourced to a third party. The NOC is the first place outages are detected and the starting point for corrective action. NOCs are generally staffed during the data centre's hours of operation; in 24 x 7 data centres, the NOC is an around-the-clock department. Equipment monitoring devices advise the NOC of problems such as overheating, equipment outages and component failure via a set of triggers that can be configured on the equipment, or via third-party monitoring software that watches all of the equipment.
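The trigger mechanism described above can be sketched as a simple threshold check: each metric has a configured limit, and any reading over its limit raises an alert to the NOC. The metric names, thresholds and readings below are hypothetical examples, not a real monitoring product's API:

```python
# Hypothetical per-metric alert thresholds (illustrative values only).
THRESHOLDS = {"temperature_c": 30.0, "humidity_pct": 60.0}

def evaluate(readings: dict) -> list:
    """Return alert messages for any reading that exceeds its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{metric}={value} exceeds {limit}")
    return alerts

# A cabinet sensor reporting high temperature but normal humidity:
print(evaluate({"temperature_c": 34.5, "humidity_pct": 45.0}))
```

Real monitoring suites add escalation, deduplication and notification on top, but the core of a trigger is this comparison.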
Data Centre Planning and Design Guideline
Data centre planning has become something of a specialty in the architectural world. Most architectural firms either have an RCDD (Registered Communications Distribution Designer) on staff, or engage one as a consultant, to assist with the specialized equipment not addressed by their electrical and mechanical engineers. The equipment housed within the centre is complex, each piece with specific requirements for heating, cooling, power budgets and spatial considerations. A typical data centre contains the following components:
- Computing and network infrastructure products (cabling, fibre, and electronics)
- NOC or NOC communications and monitoring
- Power distribution, generation and conditioning systems
- Uninterruptible Power Supplies, generators
- Environmental control and HVAC systems
- Fire Detection and Suppression systems (typically halon or other non-water suppression)
- Physical security and access control (prevention, allowance and logging)
- Circuit breaker protection (lightning protection in some cases)
- Proper lighting
- Minimum 8.5' (2.6 m) ceiling height
- Racks, cable management and cabinets for equipment
- Pathway: Raised access flooring and/or overhead cable tray
- Carrier circuits and equipment
- Telecommunications equipment
- Proper clearances around all equipment, termination panels and racks
Data centres must be carefully planned prior to building to assure compliance with all applicable codes and standards. Design considerations include site and location selection; space, power and cooling capacity planning; floor loading; access and security; environmental cleanliness; hazard avoidance; and growth. To calculate these needs, the architect and RCDD must know the components that will be housed in the data centre, including all electronics, cabling, computers, racks, etc. To produce this list it is important to predict the number of users, application types and platforms, rack units required for rack-mount equipment and, most importantly, expected growth.
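A back-of-the-envelope version of this sizing exercise is to total the rack units of planned equipment, apply a growth factor, and divide by the usable capacity of a rack. The input figures below are hypothetical planning numbers, not recommendations:

```python
import math

def racks_needed(equipment_ru: int, growth_factor: float = 1.5,
                 usable_ru_per_rack: int = 42) -> int:
    """Racks required for current equipment plus projected growth headroom."""
    return math.ceil(equipment_ru * growth_factor / usable_ru_per_rack)

# 120 RU of equipment today, planning for 50% growth in standard 42U racks:
print(racks_needed(120))  # -> 5
```

The same arithmetic extends naturally to power and cooling: multiply per-rack-unit load estimates by the grown RU count before sizing UPS and HVAC capacity.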
Anticipating growth and technological change can be something of a "crystal ball" prediction. With the possible combinations of storage islands, application islands, server platforms and electronic components growing factorially, planning is as important to a data centre as cabling is to a network. The data centre will take on a life of its own and should be able to respond to growth and changes in equipment, standards and demands, all while remaining manageable and, of course, reliable. Larger data centres are designed in tiers or zones (sometimes on different floors), with each tier performing different functions, generally with different security levels. Redundancy may be provided between different tiers or different geographic locations, depending on the needs of the facility's users.
In an effort to conserve space and lower costs within data centres, KVM switches have been on the market for quite some time. KVM (Keyboard, Video and Mouse) switches allow a single keyboard, monitor and mouse to control multiple servers in a rack, or the newer blade servers entering the market. Newer versions of these switches allow this control to happen remotely as well as locally through the switch.
SAN (Storage Area Network) and NAS (Network Attached Storage) devices have made sharing disk storage between servers or over the network a faster and easier alternative to older server-mirroring technologies. These devices can be attached via Fibre Channel, SCSI or network cabling. IP-based products are becoming prevalent, allowing communications between storage devices and network components to be either IP based or tunnelled through IP. This makes these solutions far more scalable and reliable than their predecessors.
Another plus in the data centre world is that electronics are becoming smaller and more compact thereby conserving space on the data centre floor. This can be seen in telecommunications switching equipment, servers, UPS solutions and various other components within the data centre. Single chassis switches equipped with blades for various tasks replace the older versions where an entire switch unit was needed for each function. Servers and rack mounted appliance servers are also smaller than their counterparts of old.
Data Centre Cabling System Considerations
The TIA TR-42.1.1 working group was tasked with developing a "Telecommunications Infrastructure Standard for Internet Data Centers." The scope of the working group included topologies and performance for copper and fibre cabling, and other aspects of the IT infrastructure that enable these facilities to rapidly deploy new technologies. Although the standard was published prior to the requirements for 10GBASE-T, its design practices remain solid for new technologies. The TIA/EIA has since adopted TIA/EIA-942, the "Telecommunications Infrastructure Standard for Data Centers", whose requirements consider the need for flexibility, scalability, reliability and space management (source: www.tiaonline.org). The National Electrical Code (NEC), in Article 645 "Information Technology Equipment", and the National Fire Protection Association, in NFPA 75 "Standard for the Protection of Information Technology Equipment", have also addressed these important factors. While these standards provide guidelines, specific design elements will vary with each data centre and its housed equipment. General considerations that apply to all data centres include:
- Standards based open systems
- High performance and high bandwidth with growth factors incorporated
- Support for storage devices (i.e. Fibre channel, SCSI or NAS)
- Support for convergence with growth factors incorporated
- High quality, reliability and scalability
- High capacity and density
- Flexibility and expandability with easy access for moves, adds and changes
- BAS, voice, video, CCTV and other low voltage systems
- Incorporation of Data Centre security and monitoring systems
Cabling may be copper (UTP, F/UTP, S/FTP) or fibre (singlemode/multimode), depending on the interface of the equipment to which it connects. In many cases a combination of several media types will be used. It is in an end user's best interest to run cabling that accommodates growth during the first cabling implementation: pricing can be negotiated on a project basis, saving money, and later moves, adds and changes can be costly and increase the risk of bringing down critical components that are in use. Typical practice allows for dark fibre (unused strands) to be run along with the active fibre. Equipment may be active or passive.
Data centres contain highly consolidated networks and equipment, and this high consolidation requires high-density cabling systems. Cabling pathways in the data centre generally consist of a combination of access under a raised flooring system and overhead cable tray. Raised floors provide aesthetic benefits along with heat management and easy access to the hidden cables. Cables under a raised floor should be run in raceways (cabling channels) to protect them from power cables, security devices and fire suppression systems that may be run in the same environment. Power cables can be run either in conduit or in power raceways and should respect the minimum separation distances outlined in industry standard specifications. Pathways help assure that air pressure is maintained throughout the remainder of the data centre, facilitate future moves, adds and changes, and ensure that cables are properly supported, removing the likelihood of damage or degradation of performance.
The fibre cabling pathway and management in the data centre should be provided by a dedicated duct system. This provides a safe, protective method for routing and storing optical fibre patch cords, pigtails and riser cables among fibre distribution frames, panels, splice cabinets and termination equipment. Fibre carries different stress and bend radius requirements than copper because it carries light rather than electrical signals, so planning is required to assure that proper space allowances are provided.
Enclosures and Racks
Equipment enclosures and rack space should be a very early consideration in the design process. Identifying the equipment and the number of rack units it uses will determine the number of racks needed for installation. Rack-mounted equipment is expressed in xRU, with x representing the number of rack units (one rack unit is 1-3/4" of rack space). Some equipment also carries buffer or airflow requirements for separation from other equipment. Racks are standardized on a 19" equipment mounting width; larger widths and cabinets are available.
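The RU convention above makes vertical space planning a direct multiplication. A minimal sketch (the 42U rack height is a common figure, and the equipment list is hypothetical):

```python
# One rack unit (RU) is 1.75 inches of vertical mounting space.
RU_INCHES = 1.75

def stack_height_inches(rack_units: int) -> float:
    """Vertical space consumed by equipment totalling the given rack units."""
    return rack_units * RU_INCHES

# A full standard 42U rack of mounting space:
print(stack_height_inches(42))  # -> 73.5 inches

# Hypothetical stack: two 4RU servers, a 2RU switch and a 1RU patch panel.
print(stack_height_inches(4 + 4 + 2 + 1))
```

Totalling RU per rack this way, before ordering, is what prevents the "one rack unit short" surprise during installation.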
All racks/cabinets should be properly labeled, as should all equipment contained therein, being careful not to label the spaces with any information that could pose a security risk. In most compliance-related industries it is now a requirement that networks be fully documented and that the documentation be maintained. TIA-942 suggests the use of a grid system so that each cabinet can be identified by its position on the flooring grid. Equipment enclosures and racks should contain the required cabling and should utilize wire management. Racks should be placed to allow 4' from the centre of the rack to the wall behind, with a minimum clearance of 3' in front; should equipment be contained in the rack, a 6' clearance should be allowed. ANSI/TIA/EIA standards and the NEC should all be consulted for proper placement of all components within the data centre. In raised-floor environments, equipment enclosure and rack placement should also consider floor tile layout in order to prevent a "land-locked" situation. Cabinet enclosures will have varied positions and clearances due to cabinet size and airflow requirements, but typically 4' is maintained in front of the cabinet (two full tiles), with one full tile plus the remaining tile space comprising the clearance at the rear.
Cabling systems for the data centre
10G ip™ is available today and provides cabling solutions for data centres. 10G 6A™ UTP or F/UTP, TERA® category 7/Class F and XGLO™ fibre offer the best-performing 10 Gigabit cabling solutions available, assuring the longest lifecycle possible and eliminating the need to recable, with its additional labour costs and downtime risks, in data centre environments. These systems are available as field-installable/field-terminable systems or as factory-preterminated trunking cable assemblies.
Siemon's 10G 6A™ systems are the world's best category 6A systems, with linear performance and usable bandwidth to 500 MHz. They support the just-published IEEE 802.3an 10GBASE-T standard while remaining backward compatible with today's category 5e equipment specifications. The 10G 6A™ and TERA® systems typically include the following products:
- 10G 6A™ MAX® Modules
- 10G 6A™ MAX® patch panels
- 10G 6A™ S210® series connecting blocks
- 10G 6A™ MC® patch cords
- 10G 6A™ qualified cables
- TERA® MAX® Modules
- TERA® HD® patch panels
- TERA® MC® patch cords
- TERA® qualified cables
The XGLO™ fibre-optic cabling solution meets the IEEE 802.3ae standard. This laser-optimized fibre supports existing 10 Gigabit Ethernet equipment. The XGLO™ cabling system typically includes the following products:
- Rack Mount Optical Fibre Interconnect Centre (RIC)
- Rack Mount Fibre Connect Panel (FCP3-DWR)
- Wall Mount Interconnect Centre (SWIC3)
- Quick-Pack™ Adapter Panels
- CT®, MAX®, SM® and FOB2 series work area adapters
- XGLO™ 10 Gigabit Optical Fibre Jumpers & Pigtails
- XGLO™ qualified fibre cables
Related Documents
- De-Mystifying Cabling Specifications From Cat 5e to Cat 7A
- Grounding for Screened and Shielded Network Cabling
- Increased Savings with Shielding - The Hidden Costs of Category 6A UTP Systems
- Network Cabling Lifecycles and Total Cost of Ownership
- Network Cable Sharing in Commercial Building Environments
- Power Over Ethernet Applications
- Selecting a Structured Cabling Vendor - A Balanced Scorecard for the Best Value
Rev. B 11/28/06