Lawrence Livermore Selects Siemon Pre-Engineered Cabling Solutions for New Data Center
A simpler cabling solution.
Lawrence Livermore National Laboratory (LLNL) is world-renowned as a premier applied science laboratory. Since 1952, this Northern California facility has operated as part of the US National Nuclear Security Administration and has been home to a remarkable list of discoveries and innovations.
But behind LLNL's cutting-edge research activities is the "business" end of things - administrative offices, staff and an IT infrastructure much like that of any major, information-centric business enterprise. Recently, LLNL designed and built a new data center to support these IT needs. The facility was designed not only to service LLNL's own administrative IT needs, but also to serve as a co-location facility for enterprises wishing to move their data center operations to the site.
As a co-location data center, LLNL offered unique tenant benefits. Much of the high-level security covering the overall LLNL facility would extend to the hosted data center spaces. And, unlike many hosted sites, LLNL would offer its tenants a move-in-ready space with a high-performance "plug and play" physical infrastructure. This second benefit posed a design challenge: most of the equipment decisions would be made by the eventual tenants, creating the possibility of a wide variety of channel layouts. The final design would need to be flexible enough to adapt easily to client needs.
"It was an interesting challenge," explained Jim Herbert, LLNL's Data Center Cabling Manager. "We had to design an entire data center based on the anticipated needs of unspecified future clients. High availability business processing and data capabilities were the central criteria, but we knew we had to build in a great deal of scalability and managed adaptability."
Mindful of these primary requirements, LLNL commissioned Ron Hughes, President of California Data Center Design Group (CDCDG), to design the facility, implementing cutting-edge "modular" design practices in anticipation of the facility's internal growth potential. "The modular design incorporates an infrastructure backbone that can be expanded rather than rebuilt once existing load or growth potential is realized," stated Hughes. "Flexibility and growth were at the forefront of the mechanical, electrical, and telecommunication design concepts."
CDCDG's modular design met other LLNL criteria as well. Once installed, the infrastructure was simple enough to allow small-scale installation phases and ongoing moves, adds and changes (MACs) to be performed by in-house LLNL IT staff members. Although an outside contractor performed the largest installation phases, the facility's strict security clearance procedures made outside support for subsequent MAC work impractical.
Additionally, because the data center was designed as a co-location facility that would essentially be "sold" to various executives, its visual appeal had to match its performance capabilities. "Aesthetics were a big part of the design challenge," added Herbert. "A prospective tenant can show up at any time, and the facility has to appear every bit as organized as it actually is. No matter how many adjustments we make to the cabling plant, it has to look as neat as the day we opened the doors."
Pre-engineered problem solvers.
While the long-term goal for the project was to develop an infrastructure that could be largely managed by internal staff, LLNL and CDCDG worked closely with Arkatype, a Laguna Hills, CA-based infrastructure consulting and installation company, during the initial implementation phases. "LLNL's critical need for reliable performance was actually very straightforward. The challenge was a reliable cabling plant that could be easily moved and changed by internal staff without jeopardizing system performance," explained Arkatype's Michael Cantrell. "While the staff was highly technical, a system that required them to perform time-consuming and craft-intensive field terminations introduced a likely failure point."
After reviewing the CDCDG design as well as LLNL's specific goals and needs, Arkatype's Cantrell suggested pre-engineered cabling solutions for all permanent data center links. After a thorough review of available options, LLNL agreed: horizontal channels would be supported by Siemon Premium 6 pre-terminated copper trunking cable assemblies, and backbone duties would be handled by Siemon's 10Gb/s-capable XGLO plug and play fiber optic cabling system.
Siemon copper trunk cables consist of six individual cabling channels, each terminated at both ends with MAX outlets. These channels are contained in an overall industrial mesh sheath, which protects and organizes the cabling during installation and later MAC work. Because they terminate in individual outlets, Siemon trunks present a smaller pulling profile than bulky cassette-based alternatives, allowing installation in tight pathways and smaller cabinet openings. The individual modules then simply snap into a wide array of Siemon patch panel solutions.
The XGLO 10Gb/s plug and play fiber solution utilizes a combination of pre-terminated and tested fiber modules and simple MPO fiber connectivity. Up to 12 fiber connections can be quickly deployed by plugging a single MPO connector into an XGLO plug and play module and snapping the module into any of Siemon's fiber optic enclosures. With two MPO connectors, an individual plug and play module can support up to 24 connections.
A best practice approach.
These Siemon pre-engineered cabling solutions met all of LLNL's core needs, including the critical need for high availability and performance. Like all Siemon trunking cables, the Premium 6 copper assemblies used in the data center's horizontal channels are factory terminated and tested, with full test reports included with each assembly. Likewise, the XGLO plug and play modules and connectors are fully tested and performance-validated before leaving the factory.
The high-quality factory terminations provide an interlinked benefit between LLNL's need for performance and its need for simplicity. By eliminating onsite terminations, they removed the performance variability inherent in field-terminated links, increasing the overall reliability of the facility. With fully tested connectivity serving active equipment connections, cabling-related downtime and slowdowns could be considerably reduced.
Moreover, field terminations require highly trained technicians to ensure performance. With pre-engineered cabling, internal LLNL IT would be able to simply and quickly deploy high-performance permanent links.
Beyond eliminating field terminations, the copper trunking cables and fiber plug and play modules offered other benefits in deployment simplicity and speed. Both product sets follow a "made-to-fit" approach: LLNL was able to order the exact lengths and configurations required, dress the assemblies into their pathways and plug them in. This significantly reduced onsite cable installation time and disruption, cutting roughly 75% from a traditional field-terminated installation schedule.
As a co-location facility, the modular configuration is expected to provide future management benefits as well. Servicing the varying needs of tenant connectivity will require a great deal of flexibility and scalability in the cabling plant. The modular links provided by pre-terminated solutions can be easily moved to where connectivity is required, without the need to re-terminate. And in the likely event that additional channels are required, new trunking cables can be added with minimal disruption, using internal LLNL resources.
Moreover, LLNL feels that pre-terminated links will assist in the long-term management of the cabling plant. Because all data center links will be consistently deployed with a common product set, pathways are far less likely to become disorganized through ill-managed additions of individually field-terminated links. Poorly planned MACs, most often the result of unchecked individual field-terminated channels, are an extremely common source of data center cable management issues. Troubleshooting, channel tracing and the orderly management of MACs are all simplified with a pre-engineered solution.
Along with simplified management, Siemon assemblies helped LLNL create a consistent aesthetic appeal - a benefit of significant importance in a co-location facility. LLNL's IT staff will, in essence, sell the benefits of the data center to prospective tenants, many of whom may not be IT experts. According to Herbert, "Nice and neat goes a long way. It makes it easy to communicate the quality of the facility to decision makers that may not be well-versed in data center infrastructure."
Ron Hughes, President of California Data Center Design Group, has been involved in the design, construction and operation of data centers for over 20 years. In the last 8 years, his company has managed the design of over 2,500,000 square feet of data center space in the US, Europe, Asia, the Middle East and Mexico. www.cdcdg.com
Michael Cantrell has been involved in the design, implementation and certification of information transport systems for 14 years. As founder and CEO of Arkatype, he has been responsible for over 750,000 square feet of data center infrastructure deployment and 1,200,000 square feet of campus and office environment backbone and structured cabling. With travel and experience in 84 countries, Mr. Cantrell has been recognized as an industry leader in the areas of network design, data center layout, and network infrastructure integration. www.arkatype.com