
Note:
The following technical article was current at the time it was published. However, due to changing technologies and standards updates, some of the information contained in this article may no longer be accurate or up to date.

Facts and Fallacies of Testing Next Generation Cabling

Introduction

This presentation discusses the issue of field testing structured cabling. Testing standards are developed for test devices to ensure a uniform minimum performance across devices, and they describe test procedures that enable meaningful comparisons between results. Performance standards, which include specifications for components, links and channels, reference these test standards.

For copper cabling, a number of parameters have been defined to describe the performance of cabling components, links and channels. Collectively, they present a picture of the quality of the communication channel and its ability to deliver signals for different applications under different conditions.

Standards such as AS/NZS 3080, TIA/EIA 568 and ISO/IEC 11801 include specifications for links, channels and components, as well as design and installation guidance.

Installation methods have a big influence on the performance of high speed communication channels.

The standards provide guidance for installation of copper cabling, separation from EMI sources and guidance on immunity and susceptibility. Temperature effects are also considered.

Vendor Warranties

Component manufacturers provide various long term warranties for installed cabling.

Testing installed cabling is vital in assessing the performance of warrantied installations.

The Siemon Company provides a number of long term warranties, such as System 5e, Premium 5e, System 6 and System 7. These can be of either 16 or 20 years' duration.

Link or channel models can be selected by the end-user to be covered by the warranty.

An applications warranty is provided only for the channel model. Additionally, parts and labour are covered for the entire term.

The Siemon Company requires 100% testing for a warranty, not sample testing. No marginal passes are accepted. This gives the end-user an automatic headroom equal to the measurement accuracy tolerance defined in the testing standards. All applications defined by the standards to be supported by the relevant cabling channel system will be covered by the Siemon Company warranty for the duration of the warranty.

Standards for Field Test Devices

Field test devices need to comply with standard-defined specifications. These ensure meaningful test results that give confidence to the end-user that the installed cabling will support the intended applications.

These specifications are identified as Level 2e (Class D / Category 5e) and Level 3 (Class E / Category 6).

Link testing is probably the hardest specification to comply with, especially for Class E / Category 6. This is a result of the very small margins between component specifications and link specifications, which leave very little room for error by the installer.

Marginal test results arise from the inherent inaccuracy of the test device near the specified limit. In this region, the device cannot determine with confidence whether a measurement is a pass or a fail. The Siemon Company does not recognise marginal passes.
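As an illustration of how a marginal region works, a result falling within the tester's accuracy band around the limit cannot be called with confidence. The thresholds and the "*PASS"/"*FAIL" labels below are illustrative conventions (used by several testers), not part of any standard:

```python
def classify(measured_db, limit_db, accuracy_db):
    """Classify a loss measurement against a limit line.

    A result within +/- accuracy_db of the limit cannot be called
    with confidence, so it is reported as marginal. For loss
    parameters such as NEXT loss, a higher dB value is better.
    """
    if measured_db >= limit_db + accuracy_db:
        return "PASS"
    if measured_db <= limit_db - accuracy_db:
        return "FAIL"
    # Inside the accuracy band: marginal, signed by which side of
    # the limit the measurement fell on
    return "*PASS" if measured_db >= limit_db else "*FAIL"

print(classify(40.0, 35.0, 1.5))  # comfortably above the limit
print(classify(35.5, 35.0, 1.5))  # inside the accuracy band: marginal
```

Requiring clear passes only, as Siemon does, effectively shifts the acceptance line by the accuracy band, which is the "automatic headroom" referred to above.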

Calibration

Calibration of the test device is essential if the measurements are to have any meaning.

The device needs to be calibrated against a more accurate test instrument, such as a network analyzer, over the frequency band of interest. Field calibration is also possible, though this generally has nothing to do with measurement accuracy. The Network Analyzer is a test instrument recognised in the industry as a highly accurate device capable of measuring small signal amplitudes over a wide frequency range.

Measurements are conducted with both test instruments and a scatter plot is produced. The field test instrument is adjusted as necessary to ensure meaningful and repeatable test measurements.

Field Test Device Specifications

Specifications for field test instruments in draft TIA/EIA-568-B.2-1 (Category 6) define the following parameters:

  1. The dynamic range for NEXT loss and FEXT loss is 65 dB maximum + 3 dB.
  2. The dynamic range for PSNEXT loss and PSFEXT loss is 62 dB maximum + 3 dB.
  3. Dynamic accuracy requirements shall be tested up to the specified dynamic range for NEXT loss and FEXT loss.
  4. Dynamic accuracy ELFEXT assumes a dynamic accuracy requirement of ± 0.75 dB for FEXT loss, which shall be tested, and that dynamic accuracy performance for attenuation and FEXT loss add to the ELFEXT requirement.
  5. The verification of residual NEXT loss and FEXT loss is up to 85 dB maximum. It is assumed that the frequency response changes at a rate of 20 dB/decade.
  6. The verification of output signal balance and common mode rejection is up to 60 dB maximum. It is assumed that the frequency response changes at a rate of 20 dB/decade.
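Items 5 and 6 assume the verification limit rolls off at 20 dB/decade, i.e. the limit drops 20 dB for every tenfold increase in frequency. As an illustration (the reference values are examples, not standard text):

```python
import math

def scaled_limit_db(limit_at_ref_db, f_ref_mhz, f_mhz):
    """Scale a verification limit that rolls off at 20 dB/decade.

    A 20 dB/decade slope means the limit decreases by 20 dB for
    every tenfold increase in frequency above the reference.
    """
    return limit_at_ref_db - 20 * math.log10(f_mhz / f_ref_mhz)

# Example: an 85 dB residual NEXT verification limit at 1 MHz,
# evaluated one decade higher at 10 MHz
print(scaled_limit_db(85.0, 1.0, 10.0))  # 65.0
```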

For the following transmission parameters, a frequency resolution is defined that determines the minimum number of tests that the test device needs to perform.

Parameters: insertion loss; NEXT loss; PSNEXT loss; ELFEXT; PSELFEXT; Return loss

Frequency resolution, 1 MHz to 250 MHz:

Frequency Band          Frequency Resolution
1 MHz to 31.25 MHz      150 kHz
31.25 MHz to 100 MHz    250 kHz
100 MHz to 250 MHz      500 kHz
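The resolution table translates directly into a minimum number of test frequencies per parameter. A rough calculation (an illustration of the arithmetic, not a figure quoted by the standard):

```python
def minimum_steps(bands):
    """Minimum number of frequency steps implied by a resolution table.

    Each band is (start_mhz, stop_mhz, resolution_mhz); dividing the
    band width by the resolution gives the number of steps needed so
    that adjacent test frequencies are no further apart than the
    stated resolution.
    """
    return sum(round((stop - start) / res) for start, stop, res in bands)

# Resolution table above: 150 kHz, 250 kHz and 500 kHz bands
bands = [(1.0, 31.25, 0.150), (31.25, 100.0, 0.250), (100.0, 250.0, 0.500)]
print(minimum_steps(bands))
```

This works out to several hundred measurement points per parameter per pair combination, which is why DSP-based testers sweep rather than step manually.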

Field test device specification parameters include dynamic accuracy, source/load return loss, random noise floor, residual NEXT, residual FEXT and output signal balance.

Common field test failures

Testing links and/or channels at an installed site can be very demanding. The installed cabling needs to be tested to the specified standard using suitable test equipment. Common test parameter failures in the field include wire-map, return loss, NEXT and length. Test devices provide detailed test reports and diagnostic tools to aid in fault finding and rectification.

When a transmission parameter fails, it is important to note the frequency of failure. This usually leads to identifying the component that caused the link to fail test. Also, plots can suggest the fault location.

Case Study

An actual case involving a number of problems was a Category 6 installation in a multi-storey building. S210 punch-down hardware was used in the telecommunications room and at consolidation points, with modular outlets at the telecommunications outlets (TOs). Category 6 cable was used throughout. Level 3 test devices were used, with the Permanent Link Category 6 test specification selected. A high number of marginal passes was recorded, involving the following parameters:

Parameter(s): NEXT, Return Loss.

Employing sensible fault-finding techniques led to the conclusion that these marginal passes were attributable to the following causes:

NEXT: mainly attributed to termination techniques.
Return Loss: harder to resolve. Test cords contributed, as did cable slack at the consolidation points. Device calibration also resolved some problems.

In general, there are a number of causes of test failures. These include cable, connecting hardware, excessive untwisting of pairs, exceeding the minimum bending radius, and excessive cable stress. Additionally, test devices may inadvertently contribute to test failures as a result of loss of calibration, worn-out test cords, low batteries and selection of the wrong test standard.

Field Test Device Brands

A number of test device manufacturers operate in the Australian market. These include:

Fluke Networks   DSPxx series          Digital signal processing techniques using FFT and inverse FFT algorithms
Microtest        OmniScanner series    Analog signals used to stimulate and analyse the response for certain parameters
Agilent          HP350, Wirescope 155

Optical Fibre Testing

Unlike copper installations, there are essentially only two parameters that need to be controlled in an optical fibre installation. These are attenuation (insertion loss) and return loss. Although bandwidth is an essential characteristic, this parameter is unaffected by the installation process.

Factors Affecting Performance

Optical fibre attenuation is caused by a number of factors. Some are internal to the fibre, others are dependent on external factors. Installation techniques have a great influence on the loss in a fibre communication channel. Excessive bending of the fibre core (on pathways and at termination points) causes light to escape from the core, resulting in higher losses. This is called macro-bending. Micro-bends are small bends that generally occur at spots of high stress, such as point deformation of the fibre coating. These are very hard to detect and can be caused by sudden temperature changes or the presence of particles under heat shrink sleeves, such as in splice protectors.

Attenuation Test Requirements and Procedure

Once the installation is complete, all optical fibres should be tested for attenuation (insertion loss). TIA/EIA 568 B specifies the One Reference Jumper method (ANSI/TIA/EIA-526-14A method B) for measuring attenuation in a structured cabling application. This is a power meter and light source test.

For horizontal cabling, this standard requires a maximum insertion loss of 2.0 dB, independent of fibre type and test wavelength.

For backbone cabling, the TIA/EIA 568 B standard requires that a link loss budget (LLB) be estimated. The LLB for passive cabling components is the sum of the mated connector pair loss at each end, the optical fibre loss at a given wavelength for the installed length, and the splice losses, if any. This estimate is compared against the test results obtained in the field to determine the quality of the installation; the measured loss shall always be less than the estimated loss.
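The budget arithmetic can be sketched as follows. The per-component coefficients in the defaults are illustrative maxima commonly used with TIA/EIA-568-B (0.75 dB per mated connector pair, 0.3 dB per splice, 3.5 dB/km for multimode fibre at 850 nm); substitute the values for the components actually installed:

```python
def link_loss_budget(length_km, fibre_loss_db_per_km,
                     n_connector_pairs, n_splices,
                     connector_loss_db=0.75, splice_loss_db=0.3):
    """Estimate a passive link loss budget (LLB) in dB.

    LLB = fibre loss over the installed length
        + loss of each mated connector pair
        + loss of each splice
    """
    return (length_km * fibre_loss_db_per_km
            + n_connector_pairs * connector_loss_db
            + n_splices * splice_loss_db)

# 1.5 km of multimode fibre at 850 nm, two mated pairs, one splice
print(link_loss_budget(1.5, 3.5, 2, 1))
```

A field measurement exceeding this estimate indicates a problem with the installation, not merely a tight margin.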

Test Method TIA/EIA 526-14A

TIA/EIA 568 B specifies the One Reference Jumper method (ANSI/TIA/EIA-526-14A method B) for measuring attenuation in a structured cabling application. The procedure described is as follows:

a)  A test cord, T1, supplied by the test device manufacturer, is connected between a light source and a power meter. The two devices are turned on and set to the appropriate wavelength (850 nm or 1300 nm). The power, M1, is recorded and used as the reference for subsequent comparison.

b)  The test cord, T1, is disconnected from the power meter (if the light source were disconnected, the reference would be lost). Another pre-tested, low-loss optical fibre cord, T2, is connected to the referenced test cord via an adaptor at one end; the other end is connected to the power meter. The loss of the new cord is measured and shall be less than 0.75 dB.

c)  Disconnect the two cords at the adaptor interface. Insert the link under test between the two cords and measure the new power level (M2). The difference between the reference measurement M1 and M2 is the attenuation, or insertion loss, of the link under test.
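The arithmetic behind step c) is a simple subtraction of dBm readings; the example power levels below are hypothetical:

```python
def insertion_loss_db(reference_dbm, measured_dbm):
    """Insertion loss of the link under test.

    With both readings in dBm, the loss in dB is the reference
    power M1 minus the power M2 measured through the link.
    """
    return reference_dbm - measured_dbm

m1 = -20.0   # reference power through test cord T1 (dBm)
m2 = -21.6   # power measured with the link under test inserted (dBm)
loss = insertion_loss_db(m1, m2)
print(loss)
assert loss >= 0, "a negative loss indicates a referencing error"
```

A negative result means the reference was taken incorrectly, which is why, as noted below, measured loss values should always be positive.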

This method is suitable for short links and patch cords, where the connector loss is the most significant loss in the link. For long optical fibre links, another test method is used, where the optical fibre loss is the more significant, compared to the connector loss.

Testing of optical fibre links needs to be done carefully. It is important to reference the first test cord properly. Measured loss values should always be positive.

OTDR attenuation testing is not recommended as this device is intended for fault finding and length measurements. An OTDR attenuation measurement of short links will always result in a small value being recorded. This is because the OTDR uses a laser source, which underfills the multimode core. Also, only one connector at a time is tested, when an OTDR is used to measure attenuation. This increases the cost of testing.

The test methods specified in ISO/IEC 11801 and AS/NZS 3080 call up the IEC 61280-4-1 method for multimode fibre and the IEC 61280-4-2 method for single-mode fibre. The Australian and New Zealand standard follows the ISO standard and specifies the maximum attenuation values for horizontal channels, intra-building backbones and campus backbones.

Conclusion

In a perfect world, testing is not required. In the real world, testing is the ONLY way transmission channels can be assessed for performance to established criteria.

The observed and the observer together determine the outcome of the test process!