Port Hardware
The ports on the Ixia load modules provide high-speed transmit, capture, and statistics operation. The discussion which follows is broken down into a number of areas:
- Types of Ports: The different types of networking technology supported by Ixia load modules
- Port Transmit Capabilities: Facilities for generating data traffic
- Streams and Flows: A set of packets, which may be grouped into bursts
- Bursts and the Inter-Burst Gap (IBG): A number of packets
- Packets and the Inter-Packet Gap (IPG): Individual frames/packets of data
- Frame Data: The construction of data within a frame/packet
- Port Data Capture Capabilities: Facilities for capturing data received on a port
- Port Statistics Capabilities: Facilities for obtaining statistics on each port
Types of Ports
The types of load module ports that Ixia offers are divided into these broad categories:
Only the currently available Ixia load modules are discussed in this chapter. Subsequent chapters in this manual discuss all supported load modules and their optional features.
Ethernet
Ethernet modules are provided with various feature combinations, as mentioned in the following list:
- Speed combinations: 10 Mbps, 100 Mbps, and 1000 Mbps
- Auto negotiation
- Pause control
- With and without on-board processors, also called Port CPUs (PCPUs). Load modules without processors only allow for very limited routing protocol emulation
- Power over Ethernet (Described in Power over Ethernet)
- External connections including the following:
Power over Ethernet
The Power over Ethernet (PoE) load modules (PLM1000T4-PD and LSM1000POE4-02) are special purpose, 4-channel electronic loads. They are intended to be used in conjunction with Ixia ethernet traffic generator/analyzer load modules to test devices that conform to IEEE std 802.3af.
A PoE load module provides the hardware interface required to test the Power Sourcing Equipment (PSE) of a 802.3af compliant device by simulating a Powered Device (PD).
Power Sourcing Equipment (PSE)
A PSE is any equipment that provides the power to a single link Ethernet Network section. The PSE's main functions are to search the link section for a powered device (PD), optionally classify the PD, supply power to the link section (only if a PD is detected), monitor the power on the link section, and remove power when it is no longer requested or required.
There are two power sourcing methods for PoE: Alternative A and Alternative B.
PSEs may be placed in two locations with respect to the link segment, either coincident with the DTE/Repeater, or midspan. A PSE that is coincident with the DTE/Repeater is an `Endpoint PSE.' A PSE that is located within a link segment that is distinctly separate from and between the Media Dependent Interfaces (MDIs) is a `Midspan PSE.'
Endpoint PSEs may support either Alternative A, B, or both. Endpoint PSEs can be compatible with 10BASE-T, 100BASE-X, and/or 1000BASE-T.
Midspan PSEs must use Alternative B. Midspan PSEs are limited to operation with 10BASE-T and 100BASE-TX systems. Operation of Midspan PSEs on 1000BASE-T systems is beyond the scope of PoE.
Powered Devices (PD)
A powered device either draws power or requests power by participating in the PD detection algorithm. A device that is capable of becoming a PD may or may not have the ability to draw power from an alternate power source and, if doing so, may or may not require power from the PSE.
One PoE Load Module emulates up to four PDs. The PoE Load Module (PLM) has eight RJ-45 interfaces; four of them are used as PD-emulated ports, with each having its own corresponding interface that connects to a port on any Ixia 10/100/1000 copper-based Ethernet load module (including all copper-based TXS and Optixia load modules).
The following figure demonstrates how the PoE modules use an Ethernet card to transmit and receive data streams.
Figure: Data Traffic over PoE Set Up
The emulated PD device can `piggy-back' a signal from a different load module along the cable connected to the PSE from which it draws power. In this manner, the emulated PD can mimic a device that generates traffic, such as an IP phone.
Discovery Process
The main purpose of discovery is to prevent damage to existing Ethernet equipment. The Power Sourcing Equipment (PSE) examines the Ethernet cables by applying a small current-limited voltage to the cable and checking for the presence of a 25K ohm resistor in the remote Powered Device (PD). Only if the resistor is present is the full 48V applied (and this is still current-limited to prevent damage to cables and equipment in fault conditions). The Powered Device must continue to draw a minimum current or the PSE removes the power and the discovery process begins again.
Figure: Discovery Process Voltage
There is also an optional extension to the discovery process where a PD may indicate to the PSE its maximum power requirements, called classification. Once there is power applied to the PD, normal transactions/data transfer occurs. During this period, the PD sends back a maintain power signature (MPS) to signal the PSE to continue to provide power.
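The resistance-signature check described above can be sketched as follows. This is an illustrative Python fragment only: the two probe points and the acceptance window around the 25K ohm signature are assumptions made for this example, not values reproduced from IEEE 802.3af.
```python
# Illustrative sketch of PD detection: the PSE probes the cable at two
# current-limited voltages and infers the signature resistance from the
# deltas. The tolerance window below is an assumption for this example.

NOMINAL_SIGNATURE_OHMS = 25_000
TOLERANCE_OHMS = 1_500            # assumed acceptance window

def signature_resistance(v1, i1, v2, i2):
    """Estimate the PD signature resistance from two probe points (volts, amps)."""
    return (v2 - v1) / (i2 - i1)

def pd_detected(v1, i1, v2, i2):
    r = signature_resistance(v1, i1, v2, i2)
    return abs(r - NOMINAL_SIGNATURE_OHMS) <= TOLERANCE_OHMS

# A 25K ohm signature probed at roughly 4 V and 8 V:
print(pd_detected(4.0, 0.00016, 8.0, 0.00032))   # True -> apply the full 48V
```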
PoE Acquisition Tests
During the course of testing with the PoE module, it may be necessary to measure the amplitude of the incoming current. The PoE module can measure amplitude versus time in the following two ways:
- Time test: The amount of time that elapses between a Start and Stop incoming current measurement.
- Amplitude test: The amplitude of the current after a set amount of time from a Start incoming current setting.
In both scenarios, a Start trigger is set, indicating when the test should commence based on an incoming current value (in either DC Volts or DC Amps).
In a Time test, a Stop trigger is also set (in either DC Volts or DC Amps) indicating when the test is over. Once the Stop trigger is reached, the amount of time between the Start and Stop trigger is measured (in microseconds) and the result is reported.
In an amplitude test, an Amplitude Delay time is set (in microseconds), which is the amount of time to wait after the Start trigger is reached before ending the test. The amplitude at the end of the Amplitude Delay time is measured and is reported.
Both Start and Stop triggers must also have a defined Slope type, either positive or negative. A positive slope is equivalent to rising current, while a negative slope is equivalent to decreasing current. A current condition must agree with both the amplitude setting and the Slope type to satisfy the trigger condition.
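The two acquisition tests can be summarized with the following Python sketch, which operates on a list of (time, amplitude) samples. The function and parameter names are illustrative; the actual module performs the triggering in hardware.
```python
# Hedged sketch of the Time and Amplitude acquisition tests described above.

def crossed(prev, curr, level, slope):
    """True when the signal crosses 'level' with the requested Slope type."""
    if slope == "positive":               # rising current
        return prev < level <= curr
    return prev > level >= curr           # negative slope: decreasing current

def time_test(samples, start_level, start_slope, stop_level, stop_slope):
    """Microseconds elapsed between the Start and Stop trigger conditions."""
    start_t = None
    for (t0, a0), (t1, a1) in zip(samples, samples[1:]):
        if start_t is None and crossed(a0, a1, start_level, start_slope):
            start_t = t1
        elif start_t is not None and crossed(a0, a1, stop_level, stop_slope):
            return t1 - start_t
    return None                           # Stop trigger never reached

def amplitude_test(samples, start_level, start_slope, delay_us):
    """Amplitude measured 'delay_us' microseconds after the Start trigger."""
    start_t = None
    for (t0, a0), (t1, a1) in zip(samples, samples[1:]):
        if start_t is None and crossed(a0, a1, start_level, start_slope):
            start_t = t1
        elif start_t is not None and t1 >= start_t + delay_us:
            return a1
    return None
```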
An example of a Time test is shown in the following figure.
Figure: PoE Time Acquisition Example
An example of an Amplitude test is shown in the following figure.
Figure: PoE Amplitude Acquisition Example
10GE
The 10 Gigabit Ethernet (10GE) family of load modules implements five of the seven IEEE 802.3ae compliant interfaces that run at 10 Gbit/second. Several of the load modules may also be software switched to OC192 operation.
The 10 GE load modules are provided with various feature combinations, as mentioned in the following list:
- Interfaces types: LAN, WAN, XAUI, and XENPAK
- Interface connectors: SC singlemode (LAN and WAN), SC multimode (LAN), LC singlemode/multimode, XFP, XAUI, and XENPAK
- Reach: Short, long, and extended
- Wavelengths: 850 nm, 1310 nm, 1550 nm
The relationship of the logical structures for the different 10 Gigabit types is shown in the diagram (adapted from the 802.3ae standard) in the following figure.
Figure: IEEE 802.3ae Draft 10 Gigabit Architecture
For 10GE XAUI and 10GE XENPAK modules, a Status message contains a 4-byte ordered set with a Sequence control character plus three data characters (in hex), distributed across the four lanes, as shown in the following figure. Four Sequence ordered sets are defined in IEEE 802.3ae, but only two of these, Local Fault and Remote Fault, are currently in use; the other two are reserved for future use.
Figure: 10GE XAUI/XENPAK Sequence Ordered Sets
XAUI Interfaces
The 10 Gigabit XAUI interface has been defined in the IEEE draft specification P802.3ae by the 10 Gigabit Ethernet Task Force (10GEA). XAUI stands for `X' (the Roman Numeral for 10, as in `10 Gigabit'), plus `AUI' or Attachment Unit Interface originally defined for Ethernet.
The original Ethernet standard was defined in IEEE 802.3, and included the MAC layer, frame size, and other `standard' Ethernet characteristics. IEEE 802.3z defined the Gigabit Ethernet standard. IEEE 802.3ae defines a simplified version of SONET framing to carry native Ethernet-framed traffic over high-speed fiber networks, allowing 10 Gbps native Ethernet traffic to interwork smoothly with 9.6 Gbps SONET at the OC-192c rate over WAN and MAN links. The 10GE XAUI has a XAUI interface for connecting to another XAUI interface, such as on a DUT. A comparison of the IEEE P802.3ae model for XAUI and the OSI model is shown in the following figure.
Figure: IEEE P802.3ae Architecture for 10GE XAUI
Lane Skew
The Lane Skew feature provides the ability to independently delay one or more of the four XAUI lanes. The resolution of the skew is 3.2 nanoseconds (ns), which consists of 10 Unit Intervals (UIs), each of which is 320 picoseconds (ps). Each UI is equivalent to the amount of time required to transmit one XAUI bit at 3.125 Gbps.
Lane Skew allows a XAUI lane to be skewed by as much as 310 UI (99.2ns) with respect to the other three lanes. To effectively use this feature, the four lanes should be set to different skew values. Setting all four lanes to zero is equivalent to setting all four lanes to +80 UI. In both cases, the lanes are synchronous and there is no lane skew. When lane skewing is enabled, /A/, /K/, and /R/ codes are inserted into the data stream BEFORE the lanes are skewed. The principle behind lane skewing is shown in the diagrams in the following images.
Figure: XAUI Lane Skewing Lane Skew Disabled

Figure: XAUI Lane Skewing Lane Skew Enabled

Link Fault Signaling
Link Fault Signaling is defined in Section 46 of the IEEE 802.3ae specification for 10 Gigabit Ethernet. When the feature is enabled, four statistics are added to the list in Statistic View for the port. One is for monitoring the Link Fault State; two for providing a count of the Local Faults and Remote Faults; and the last one is for indicating the state of error insertion, whether or not it is ongoing.
Link Fault Signaling originates with the PHY sending an indication of a local fault condition in the link being used as a path for MAC data. In the typical scenario, the Reconciliation Sublayer (RS) which had been receiving the data receives this Local Fault status, and then sends a Remote Fault status to the RS which was sending the data. Upon receipt of this Remote Fault status message, the sending RS terminates transmission of MAC data, sending only `Idle' control characters until the link fault is resolved.
For the 10GE LAN and LAN-M serial modules, the Physical Coding Sublayer (PCS) of the PHY handles the transition from 64-bit data to 66-bit `Blocks.' The 64 bits of data are scrambled, and then a 2-bit synchronization (sync) header is attached before transmission. This process is reversed by the PHY at the receiving end.
Link Fault Signaling for the 10GE XAUI/XENPAK is handled differently across the four-lane XAUI optional XGMII extender layer, which uses 8B/10B encoding.
Examples of Link Fault Signaling Error Insertion
The examples in this figure are described in the following table:
Case | Conditions |
---|---|
Case 1 | Contiguous Bad Blocks = 2 (the minimum). Contiguous Good Blocks = 2 (the minimum). Send Type A ordered sets. Loop 1x. |
Case 2 | Contiguous Bad Blocks = 2 (the minimum). Contiguous Good Blocks = 2 (the minimum). Send Type A ordered sets. Loop continuously. |
Case 3 | Contiguous Bad Blocks = 2 (the minimum). Contiguous Good Blocks = 2 (the minimum). Send alternate ordered set types. Loop 1x. |
Case 4 | Contiguous Bad Blocks = 2 (the minimum). Contiguous Good Blocks = 2 (the minimum). Send alternate ordered set types. Loop continuously. |
Link Alarm Status Interrupt (LASI)
The link alarm status is an active low output from the XENPAK module that is used to indicate a possible link problem as seen by the transceiver. Control registers are provided so that LASI may be programmed to assert only for specific fault conditions.
Efficient use of XENPAK and its specific registers requires an end-user system to recognize a connected transceiver as being of the XENPAK type. An Organizationally Unique Identifier (OUI) is used as the means of identifying a port as XENPAK, and also to communicate the device in which the XENPAK specific registers are located.
Ixia's XENPAK module allows for setting whether or not LASI monitoring is enabled, what register configurations to use, and the OUI. The XENPAK module can use the following registers:
- Rx Alarm Control (Register 0x9003): It can be programmed to assert only when specific receive path fault condition(s) are present.
- Tx Alarm Control (Register 0x9001): It can be programmed to assert only when specific transmit path fault condition(s) are present.
- LASI Control (Register 0x9002): A LASI control register that allows global masking of the Rx Alarm and Tx Alarm.
You can control the registers by setting a series of sixteen bits for each register. The register bits and their usage are described in the following tables.
Rx Alarm Control (Register 0x9003) bits:
Bits | Description | Default |
---|---|---|
15 - 11 | Reserved | 0 |
10 | Vendor Specific | N/A (vendor setting) |
9 | WIS Local Fault Enable | 1 (when implemented) |
8 - 6 | Vendor Specific | N/A (vendor setting) |
5 | Receive Optical Power Fault Enable | 1 (when implemented) |
4 | PMA/PMD Receiver Local Fault Enable | 1 (when implemented) |
3 | PCS Receive Local Fault Enable | 1 |
2 - 1 | Vendor Specific | N/A (vendor setting) |
0 | PHY XS Receive Local Fault Enable | 1 |
Tx Alarm Control (Register 0x9001) bits:
Bits | Description | Default |
---|---|---|
15 - 11 | Reserved | 0 |
10 | Vendor Specific | N/A (vendor setting) |
9 | Laser Bias Current Fault Enable | 1 (when implemented) |
8 | Laser Temperature Fault Enable | 1 (when implemented) |
7 | Laser Output Power Fault Enable | 1 (when implemented) |
6 | Transmitter Fault Enable | 1 |
5 | Vendor Specific | N/A (vendor setting) |
4 | PMA/PMD Transmitter Local Fault Enable | 1 (when implemented) |
3 | PCS Transmit Local Fault Enable | 1 |
2 - 1 | Vendor Specific | N/A (vendor setting) |
0 | PHY XS Transmit Local Fault Enable | 1 |
LASI Control (Register 0x9002) bits:
Bits | Description | Default |
---|---|---|
15 - 8 | Reserved | 0 |
7 - 3 | Vendor Specific | 0 (when implemented) |
2 | Rx Alarm Enable | 0 |
1 | Tx Alarm Enable | 0 |
0 | LS Alarm Enable | 0 |
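As an illustration of how the table values combine into a register word, the following Python sketch assembles the 16-bit LASI Control value from the three enable bits listed above. The function is a hypothetical helper, not part of IxExplorer or the Tcl APIs.
```python
# Compose the LASI Control word (bit 2 = Rx Alarm Enable, bit 1 = Tx Alarm
# Enable, bit 0 = LS Alarm Enable); all other bits default to 0.

def lasi_control_word(rx_alarm=False, tx_alarm=False, ls_alarm=False):
    word = 0
    if rx_alarm:
        word |= 1 << 2
    if tx_alarm:
        word |= 1 << 1
    if ls_alarm:
        word |= 1 << 0
    return word

print(f"0x{lasi_control_word(rx_alarm=True, ls_alarm=True):04X}")  # 0x0005
```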
For more detailed information on LASI, see the online document XENPAK MSA Rev. 3.
40GE and 100GE
For theoretical information, refer to 40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview White Paper, published by Ethernet Alliance, November, 2008. This white paper may be obtained through the Internet.
http://www.ethernetalliance.org/wp-content/uploads/2011/10/document_files_40G_100G_Tech_overview.pdf
400GE
The 400 Gigabit Ethernet (400GE) family of load modules meets the growing bandwidth requirements of ever-evolving data networks. 400GE addresses the broad range of bandwidth requirements for key application areas such as cloud-scale data centers, Internet exchanges, co-location services, wireless infrastructure, service provider and operator networks, and video distribution infrastructure.
This family of load modules is capable of 200GE, 100GE, and 50GE fan-outs.
SONET/POS
SONET/POS modules are provided with various feature combinations:
- Different speeds: OC3, OC12, OC48, OC192, Fibre Channel, 2x Fibre Channel, and Gigabit Ethernet.
- Interfaces: SC singlemode and multimode (OC3, OC12, OC192), SC singlemode (OC48), no optical transceiver, SFP LC singlemode (Unframed BERT) and custom interface.
- Reach: short, intermediate, and long.
- Wavelengths: 850nm, 1310nm and 1550nm.
- Local processor support. All SONET/POS load modules include a local processor, but the power of the processor and amount of memory varies.
- Variable clocking (OC48 only), see Variable Rate Clocking
- Concatenated or channelized SONET operation, see SONET Operation
- Error insertion, see Error Insertion
- BERT: Bit Error Rate Testing both framed and unframed, see BERT
- DCC: Data Communication Channel, see DCC Data Communications Channel.
- RPR: Resilient Packet Ring, see RPR Resilient Packet Ring.
- GFP: Generic Framing Procedure, see GFP Generic Framing Procedure.
- PPP: Point to Point protocol, see PPP Protocol Negotiation.
- HDLC: High-Level Data Link Control, see HDLC.
- Frame Relay: see Frame Relay.
- DSCP: see DSCP Differentiated Services Code Point.
Variable Rate Clocking
The OC48 VAR allows a variation of +/- 100 parts per million (ppm) from the clock source's nominal frequency, through a DC voltage input into the BNC jack marked `DC IN' on the front panel. The frequency may be monitored through the BNC marked `Freq Monitor.'
SONET Operation
A Synchronous Optical NETwork/Synchronous Digital Hierarchy (SONET/SDH) frame is based on the Synchronous Transport Signal-1 (STS-1) frame, whose structure is shown in the figure below. Transmission of SONET frames of this size corresponds to Optical Carrier level 1 (OC-1).
An OC-3c consists of three OC-1/STS-1 frames multiplexed together at the octet level. OC-12c, OC-48c, and OC-192c are formed from higher multiples of the basic OC-1 format. The suffix `c' indicates that the basic frames are concatenated to form the larger frame.
Ixia supports both concatenated (with the `c') and channelized (without the `c') interfaces. Concatenated interfaces send and receive data in a single stream of data. Channelized interfaces send and receive data in multiple independent streams.
Figure: Generated Frame Contents SONET STS-1 Frame

The contents of the SONET STS-1 frame are described in the following table.
The SONET STS-1 frame is transmitted at a rate of 51.84 Mbps, with 49.5 Mbps reserved for the frame payload. A SONET frame is transmitted in 125 microseconds, with transmission starting with Row 1, Byte 1 at the upper left of the frame, and proceeding by row from top to bottom, and within each row from left to right.
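The 51.84 Mbps figure follows directly from the frame geometry (9 rows by 90 columns of one octet, one frame every 125 microseconds), as the short calculation below shows.
```python
# STS-1 rate check: 9 rows x 90 columns x 8 bits x 8000 frames/second.
rows, cols = 9, 90
frames_per_second = 1 / 125e-6                       # 8000 frames/s
line_rate = rows * cols * 8 * frames_per_second
print(line_rate / 1e6)                               # 51.84 Mbps

# Removing the 3 transport-overhead columns and the 1 path-overhead column
# leaves 9 x 86 octets of payload capacity per frame.
payload_rate = rows * (cols - 4) * 8 * frames_per_second
print(payload_rate / 1e6)                            # 49.536 Mbps (the ~49.5 Mbps above)
```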
The section, line, and path overhead elements are related to the manner in which SONET frames are transmitted, as shown in the following figure.
Figure: Example Diagram of SONET Levels and Network Elements

Error Insertion
A variety of deliberate errors may be inserted in SONET frames in the section, line or path areas of a frame. The errors which may be inserted vary by particular load module. Errors may be inserted continuously or periodically as shown in the following figure.
Figure: SONET Error Insertion

An error may be inserted in one of two manners:
- Continuous: Each SONET frame receives the error.
- Periodic: A number of errors are inserted in consecutive frames, and the pattern is repeated based on a number of frames or a period of time. Predefined periods are available, or you may define your own.
Each error may be individually inserted continuously or periodically. Errors may be inserted on a one time basis over a number of frames as well.
DCC Data Communications Channel
The data communication channel is a feature of SONET networks which uses the DCC bytes in the transport overhead of each frame. This is used for control, monitoring and provisioning of SONET connections. Ixia ports treat the DCC as a data stream which `piggy-backs' on the normal SONET stream. The DCC and normal (referred to as the SPE - Synchronous Payload Envelope) streams can be transmitted independently or at the same time.
A number of different techniques are available for transmitting DCC and SPE data, utilizing Ixia streams and flows (see Streams and Flows) and the advanced stream scheduler (see Advanced Streams).
SRP Spatial Reuse Protocol
The Spatial Reuse Protocol (SRP) was developed by Cisco for use with ring-based media. It derives its name from the spatial reuse properties of the packet handling procedure. This optical transport technology combines the bandwidth-efficient and service-rich capabilities of IP routing with the bandwidth-rich, self-healing capabilities of fiber rings to deliver fundamental cost and functionality advantages over existing solutions. In SRP mode, the usual POS header (PPP, and so forth) is replaced by the SRP header.
SRP networks use two counter-rotating rings. One Ixia port may be used to participate in one of the rings; two may be used to simultaneously participate in both rings. Ixia supports SRP on both OC48 and OC192 interfaces.
In SRP mode, SRP packets can be captured and analyzed. The IxExplorer capture view provides packet analysis that decodes SRP packets. The Ixia hardware also collects SRP-specific statistics and performs filtering based on SRP header contents.
Any of the following SRP packet types may be generated in a data stream, along with normal IPv4 traffic:
- SRP Discovery
- SRP ARP
- SRP IPS (Intelligent Protection Switching)
RPR Resilient Packet Ring
Ixia's optional Resilient Packet Ring (RPR) implementation is available on the OC-48c and OC-192c POS load modules. RPR is a proposed industry standard for MAC Control on Metropolitan Area Networks (MANs), defined by IEEE P802.17. This feature provides a cost-effective method to optimize the transport of bursty traffic, such as IP, over existing ring topologies.
A diagram showing a simplified model of an RPR network is shown in the following figure. It is made up of two counter-rotating `ringlets,' with nodes called `stations' supporting MAC Clients that exchange data and control information with remote peers on the ring. Up to 255 nodes can be supported on the ring structure.
Figure: RPR Ring Network Diagram

The RPR topology discovery is handled by a MAC sublayer, and a protection function maintains network connectivity in the event of a station or span failure. The structure of the RPR layers, compared to the OSI model, is illustrated in a diagram based on IEEE 802.17, shown in the following figure.
Figure: RPR Layers

A diagram of the layers associated with an RPR Station is shown in the following figure.
Figure: RPR Layer Diagram

The Ixia implementation allows for the configuration and transmission of the following types of RPR frames:
- RPR Fairness Frames: The RPR Fairness Algorithm (FA) is used to manage congestion on the ringlets in an RPR network. Fairness frames are sent periodically to advertise bandwidth usage parameters to other nodes in the network to maintain weighted fair share distributions of bandwidth. The messages are sent in the direction opposite to the data flow, and therefore, on the other ringlet. A diagram of the RPR Fairness Frame, per IEEE 802.17/D2.1, is shown in the following figure.
- RPR Topology Discovery. Two types of messages are used:
- RPR Topology Discovery Message: for the discovery of the physical topology.
- RPR Topology Extended Status Message: for the transmission of additional information from a node concerning bandwidth and other configuration options. This format uses TLV (Type-Length-Value) options, including:
- Weight
- Total reserved bandwidth
- Neighbor address
- Individual reserved bandwidth
- Station name
- Vendor specific data
- RPR Protection Switching Message: used to support automatic, rapid switching of traffic in the presence of a ring failure.
- RPR Operations, Administration and Management (OAM). Three messages are supported:
- Echo Request and Response messages
- Flush Frames
- Vendor specific message
Figure: RPR Fairness Frame Format

A diagram of the baseRingControl byte, part of the Ring Control header for all types of RPR frames, is shown in the following figure.
Figure: RPR baseRingControl Byte

GFP Generic Framing Procedure
GFP provides a generic mechanism to adapt traffic from higher-layer client signals over a transport network. Currently, two modes of client signal adaptation are defined for GFP.
- A PDU-oriented adaptation mode, referred to as Frame-Mapped GFP (GFP-F, for traffic such as IP/PPP or Ethernet MAC).
- A block-code oriented adaptation mode, referred to as Transparent GFP (GFP-T, for traffic such as Fibre Channel or ESCON/SBCON).
In the Frame-Mapped adaptation mode, the Client/GFP adaptation function operates at the data link (or higher) layer of the client signal. Client PDU visibility is required, which is obtained when the client PDUs are received from either the data layer network or a bridge, switch, or router function in a transport network element.
For the Transparent adaptation mode, the Client/GFP adaptation function operates on the coded character stream, rather than on the incoming client PDUs. Processing of the incoming code word space for the client signal is required.
Two kinds of GFP frames are defined: GFP client frames and GFP control frames. GFP also supports a flexible (payload) header extension mechanism to facilitate the adaptation of GFP for use with diverse transport mechanisms.
GFP uses a modified version of the Header Error Check (HEC) algorithm to provide GFP frame delineation. The frame delineation algorithm used in GFP differs from HEC in two basic ways:
- The algorithm uses the PDU Length Indicator field of the GFP Core Header to find the end of the GFP frame.
- HEC field calculation uses a 16-bit polynomial and, consequently, generates a two-octet cHEC field.
A diagram of the format for a GFP frame is shown in the following figure.
Figure: GFP Frame Elements

The sections of the GFP frame are described in the following list:
- Payload Length Indicator (PLI): The two-octet PLI field contains a binary number representing the number of octets in the GFP Payload Area. The absolute minimum value of the PLI field in a GFP client frame is 4 octets. PLI values 0-3 are reserved for GFP control frame usage.
- Core Header Error Control (cHEC): The two-octet Core Header Error Control field contains a CRC-16 error control code that protects the integrity of the contents of the Core Header by enabling both single-bit error correction and multi-bit error detection.
- Type Header Error Control (tHEC): The two-octet Type Header Error Control field contains a CRC-16 error control code that protects the integrity of the contents of the Type field by enabling both single-bit error correction and multi-bit error detection.
- Extension Header Error Control (eHEC): The two-octet Extension Header Error Control field contains a CRC-16 error control code that protects the integrity of the contents of the extension headers by enabling both single-bit error correction (optional) and multi-bit error detection.
- Connection Identification (CID): The CID is an 8-bit binary number used to indicate one of 256 communications channels at a GFP termination point.
- Payload: The GFP Payload Area, which consists of all octets in the GFP frame after the GFP Core Header, is used to convey higher layer specific protocol information. This variable length area may include from 4 to 65,535 octets. The GFP Payload Area consists of two common components:
- A Payload Header and a Payload Information field
- An optional Payload FCS (pFCS) field
- Frame Check Sequence (FCS): The GFP Payload FCS is an optional, four-octet long, frame check sequence. It contains a CRC-32 sequence that protects the contents of the GFP Payload Information field. A value of 1 in the PFI bit within the Type field identifies the presence of the payload FCS field.
Practical GFP MTU sizes for the GFP Payload Area are application specific.
GFP frame delineation is performed based on the correlation between the first two octets of the GFP frame and the embedded two-octet cHEC field. The following figure shows the state diagram for the GFP frame delineation method.
Figure: GFP State Transitions

The state diagram works as follows:
- In the HUNT state, the GFP process performs frame delineation by searching octets for a correctly formatted Core Header over the last received sequence of four octets. Once a correct cHEC match is detected in the candidate Payload Length Indicator (PLI) and cHEC fields, a candidate GFP frame is identified and the receive process enters the PRESYNC state.
- In the PRESYNC state, the GFP process performs frame delineation by checking frames for a correct cHEC match in the presumed Core Header of the next candidate GFP frame. The PLI field in the Core Header of the preceding GFP frame is used to find the beginning of the next candidate GFP frame. The process repeats until a set number of consecutive correct cHECs are confirmed, at which point the process enters the SYNC state. If an incorrect cHEC is detected, the process returns to the HUNT state.
- In the SYNC state, the GFP process performs frame delineation by checking for a correct cHEC match on the next candidate GFP frame. The PLI field in the Core Header of the preceding GFP frame is used to find the beginning of the next candidate GFP frame. Frame delineation is lost whenever multiple bit errors are detected in the Core Header by the cHEC. In this case, a GFP Loss of Frame Delineation event is declared, the framing process returns to the HUNT state, and a client Server Signal Failure (SSF) is indicated to the client adaptation process.
- Idle GFP frames participate in the delineation process and are then discarded.
Robustness against false delineation in the resynchronization process depends on the value of DELTA. A value of DELTA = 1 is suggested. Frame delineation acquisition speed can be improved by the implementation of multiple `virtual framers,' whereby the GFP process remains in the HUNT state and a separate PRESYNC substate is spawned for each candidate GFP frame detected in the incoming octet stream.
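The delineation logic can be summarized with the following minimal Python sketch. It abstracts the cHEC computation behind a per-candidate-frame boolean and collapses the octet-by-octet HUNT search, so it illustrates only the state transitions.
```python
# HUNT / PRESYNC / SYNC state transitions; 'delta' consecutive correct cHECs
# move the process from PRESYNC to SYNC (DELTA = 1 as suggested above).

def delineate(chec_results, delta=1):
    """chec_results: iterable of booleans, True if a candidate frame's cHEC
    matched. Yields the state in which each candidate was examined."""
    state, good = "HUNT", 0
    for ok in chec_results:
        yield state
        if state == "HUNT":
            if ok:                       # candidate Core Header found
                state, good = "PRESYNC", 0
        elif state == "PRESYNC":
            if not ok:
                state = "HUNT"
            else:
                good += 1
                if good >= delta:
                    state = "SYNC"
        elif not ok:                     # SYNC: uncorrectable cHEC mismatch
            state = "HUNT"               # loss of frame delineation

print(list(delineate([True, True, True, False, True, True])))
# ['HUNT', 'PRESYNC', 'SYNC', 'SYNC', 'HUNT', 'PRESYNC']
```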
Scrambling of the GFP Payload Area is required to provide security against payload information replicating the scrambling word (or its inverse) of a frame-synchronous scrambler (such as those used in the SDH RS layer or in an OTN OPUk channel). The following figure illustrates the scrambler and descrambler processes.
Figure: GFP Scrambling

All octets in the GFP Payload Area are scrambled using an x^43 + 1 self-synchronous scrambler. Scrambling is done in network bit order.
At the source adaptation process, scrambling is enabled starting at the first transmitted octet after the cHEC field, and is disabled after the last transmitted octet of the GFP frame. When the scrambler or descrambler is disabled, its state is retained. The scrambler or descrambler state at the beginning of a GFP frame Payload Area is thus the last 43 Payload Area bits of the GFP frame transmitted in that channel immediately before the current GFP frame.
The activation of the sink adaptation process descrambler also depends on the present state of the cHEC check algorithm:
- In the HUNT and PRESYNC states, the descrambler is disabled.
- In the SYNC state, the descrambler is enabled only for the octets between the cHEC field and the end of the candidate GFP frame.
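A minimal Python sketch of the x^43 + 1 self-synchronous scrambler and descrambler, operating bit by bit, is shown below. It ignores the enable/disable windows and delineation states described above and simply demonstrates the scrambling relation.
```python
# Scrambled bit = data bit XOR the scrambled bit transmitted 43 bits earlier;
# the descrambler XORs each received bit with the bit received 43 bits earlier.

def scramble(bits, state=None):
    state = list(state) if state else [0] * 43
    out = []
    for b in bits:
        s = b ^ state[0]
        out.append(s)
        state = state[1:] + [s]          # retain the last 43 scrambled bits
    return out, state

def descramble(bits, state=None):
    state = list(state) if state else [0] * 43
    out = []
    for b in bits:
        out.append(b ^ state[0])
        state = state[1:] + [b]          # state is built from received bits
    return out, state

data = [1, 0, 1, 1, 0, 0, 1] * 10
scrambled, _ = scramble(data)
recovered, _ = descramble(scrambled)
assert recovered == data                 # self-synchronous: no seed exchange
```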
CDL Converged Data Link
10GE LAN, 10GE XAUI, 10GE XENPAK, 10GE WAN, and 10GE WAN UNIPHY modules all support the Cisco CDL preamble format.
The Converged Data Link (CDL) specification was developed to provide a standard method of implementing operation, administration, maintenance, and provisioning (OAM&P) in Ethernet packet-based optical networks without using a SONET/SDH layer.
PPP Protocol Negotiation
The Point-to-Point Protocol (PPP) is widely used to establish, configure and monitor peer-to-peer communication links. A PPP session is established in a number of steps, with each step completing before the next one starts. The steps, or layers, are:
- Physical: a physical layer link is established.
- Link Control Protocol (LCP): establishes the basic communications parameters for the line, including the Maximum Receive Unit (MRU), type of authentication to be used and type of compression to be used.
- Link quality determination and authentication. These are optional processes. Quality determination is the responsibility of PPP Link Quality Monitoring (LQM) Protocol. Once initiated, this process may continue throughout the life of the link. Authentication is performed at this stage only. There are multiple protocols which may be employed in this process; the most common of these are PAP and CHAP.
- Network Control Protocol (NCP): establishes which network protocols (such as IP, OSI, MPLS) are to be carried over the link and the parameters associated with the protocols. The protocols which support this NCP negotiation are called IPCP, OSINLCP, and MPLSCP, respectively.
- Network traffic and sustaining PPP control. The link has been established and traffic corresponding to previously negotiated network protocols may now flow. Also, PPP control traffic may continue to flow, as may be required by LQM, PPP keepalive operations, and so forth.
All implementations of PPP must support the Link Control Protocol (LCP), which negotiates the fundamental characteristics of the data link and constitutes the first exchange of information over an opening link. Physical link characteristics (media type, transfer rate, and so forth) are not controlled by PPP.
The Ixia PPP implementation supports LCP, IPCP, MPLSCP, and OSINLCP. When PPP is enabled on a given port, LCP and at least one of the NCPs must complete successfully over that port before it is administratively `up' and therefore ready for general traffic to flow.
Each Ixia POS port implements a subset of the LCP, LQM, and NCP protocols. Each of the protocols is of the same basic format. For any connection, separate negotiations are performed for each direction. Each party sends a Configure-Request message to the other, with options and parameters proposing some form of configuration. The receiving party may respond with one of three messages:
- Configure-Reject: The receiving party does not recognize or prohibits one or more of the suggested options. It returns the problematic options to the sender.
- Configure-NAK: The receiving party understands all of the options, but finds one or more of the associated parameters unacceptable. It returns the problematic options, with alternative parameters, to the sender.
- Configure-ACK: The receiving party finds the options and parameters acceptable.
For the Configure-Reject and Configure-NAK requests, the sending party is expected to reply with an alternative Configure-Request.
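The three possible responses can be sketched as a simple decision, as in the Python fragment below. The option names and acceptance rules are placeholders for illustration, not Ixia's negotiation policy.
```python
# Decide how to answer a Configure-Request: Reject unknown options, NAK
# understood options with unacceptable values, otherwise ACK.

def handle_configure_request(options, known):
    """options: {option: proposed_value} from the peer.
    known: {option: (is_acceptable_predicate, suggested_alternative)}."""
    rejected = {o: v for o, v in options.items() if o not in known}
    if rejected:
        return "Configure-Reject", rejected
    to_nak = {o: known[o][1] for o, v in options.items() if not known[o][0](v)}
    if to_nak:
        return "Configure-NAK", to_nak
    return "Configure-ACK", options

policy = {"MRU": (lambda v: 26 < v <= 2000, 1500)}
print(handle_configure_request({"MRU": 4000}, policy))  # NAK, suggesting 1500
print(handle_configure_request({"MRU": 1500}, policy))  # ACK
```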
The Ixia port may be configured to immediately start negotiating after the physical link comes up, or to passively wait for the peer to start the negotiation. Ixia ports both send and respond to PPP keepalive messages called echo requests.
LCP Link Control Protocol Options
The following sections outline the parameters associated with the Link Control Protocol. LCP includes a number of possible command types, which are assigned option numbers in the pertinent RFCs. Note that PPP parameters are typically independently negotiated for each direction on the link.
Numerous RFCs are associated with LCP, but the most important RFCs are RFC 1661 and RFC 1662. The HDLC/PPP header sequence for LCP is FF 03 C0 21.
During the LCP phase of negotiation, the Ixia port makes available the following options:
- Maximum Receive Unit: This LCP parameter (actually the set of Maximum Receive Unit (MRU) and Maximum Transmit Unit (MTU)) determines the maximum allowed size of any frame sent across the link subsequent to LCP completion. To be fully standards-compliant, an implementation must not send a frame of length greater than its MTU + 4 bytes + CRC length. For instance, if the negotiated MTU for a port is 2000 and a 32-bit CRC is in use, no frame larger than 2008 bytes should ever be sent out that port. Packets that are larger are expected to be fragmented before transmission or to be dropped. The Ixia port's MTU is the peer's MRU following LCP negotiation. Strictly speaking, the receiving side can assume that received frames are not greater than the MRU. In practice, however, an implementation should be capable of accepting larger frames. If a peer rejects this option altogether, the negotiated setting defaults to 1,500. Regardless of the negotiated MRU, all implementations must be capable of accepting frames with an information field of at least 1,500 bytes.
- Asynchronous Control Character Map: ACCM is only really pertinent to asynchronous links. On asynchronous lines, certain characters sent over the wire can have special meaning to one or more receiving entities. For instance, common implementations of the widely used XON/XOFF flow control protocol assign the ASCII DC3 character (0x13) to XOFF. On such a link, an embedded data byte that happens to have the value 0x13 would be misinterpreted by a receiver as an XOFF command, and cause suspension of reception. To avoid this problem, the 0x13 character embedded in the data could be sent through an `escape sequence' which consists of preceding the data character with a dedicated tag character and modifying the data character itself.
- Magic Number: A magic number is a 32-bit value, ideally numerically random, which is placed in certain PPP control packets. Magic numbers aid in detection of looped links. If a received PPP packet that includes a magic number matches a previously transmitted packet, including magic number, the link is probably looped.
For the transmit direction portion of the negotiation, the peer sends the Ixia port its configuration request. The Ixia port accepts and acknowledges the peer's requested MRU as long as it is less than or equal to the specified user's desired transmit value (but greater than 26). For the receive direction portion of the negotiation, the Ixia port sends a configuration request based on the user's desired value. Generally, the Ixia port accepts what the peer desires (if it acknowledges the request, then the user value is used, or if the peer sends a Configure-Nak with another value the Ixia port uses that value as long as it is valid). This approach is used to maximize the probability of successful negotiation.
The Asynchronous Control Character Map (ACCM) LCP parameter allows independent designation of each character in the range 0x00 thru 0x1F as a control character. A control character is sent/received with a preceding `control-escape' character (0x7D). When the 0x7D is seen in the received data stream, the 0x7D is dropped and the next character is exclusive-or'd with 0x20 to get the original transmitted character. ACCM negotiation consists of exchanging masks between peers to reach an agreement as to which characters are treated as special control characters on transmission and reception. For example, sending a mask of 0xFFFFFFFF means all characters in the range 0x00 thru 0x1F are sent with escape sequences; a mask of 0 means no special handling, so all characters are arbitrary data.
Packet over SONET is an octet-synchronous medium. If the link is direct between POS peers, neither side should be generating control-escapes. (Exceptions to this are bytes 0x7D and 0x7E: the former is the special control escape character itself; the latter is the start/end frame marker. Escaping of these two characters is generally handled directly by physical layer hardware). On links in which there is some kind of intermediate asynchronous media, it is required that whatever device performs the asynchronous to synchronous conversion must also take care of any special character handling, isolating this from any POS port. See RFC 1662, sections 4.1 and 6.
If ACCM negotiation is enabled, the Ixia port advertises an ACCM mask of 0 to its peer in its LCP configuration request. The Ixia port accepts whatever the peer puts forth, but does not act on the results. Regardless of the final negotiated settings for receive and transmit ACCM, the Ixia port does not send escape control sequences nor does it expect to receive them. This is the nature of a synchronous PPP medium, such as POS.
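For reference, the escape/unescape transformation that ACCM negotiates (but that a POS port never actually applies) looks like the following Python sketch.
```python
# Characters selected by the 32-bit ACCM mask, plus 0x7D and 0x7E, are sent
# as 0x7D followed by the character XORed with 0x20; reception reverses this.

FLAG, ESC = 0x7E, 0x7D

def escape(data, accm=0xFFFFFFFF):
    out = bytearray()
    for b in data:
        if (b < 0x20 and (accm >> b) & 1) or b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def unescape(data):
    out, pending = bytearray(), False
    for b in data:
        if pending:
            out.append(b ^ 0x20)
            pending = False
        elif b == ESC:
            pending = True
        else:
            out.append(b)
    return bytes(out)

frame = bytes([0x13, 0x41, 0x7E, 0x00])          # 0x13 is ASCII DC3 (XOFF)
assert unescape(escape(frame)) == frame
print(escape(frame).hex())                        # 7d33417d5e7d20
```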
IxExplorer and the Tcl APIs allow global enable/disable of magic number negotiation. If the `Use Magic Number' feature is not enabled, the Ixia port does not request a magic number of its peer and rejects the option if the peer requests it. If the check box is selected, the port attempts to negotiate the magic number. The result of the bi-directional negotiation process is displayed in the fields for transmit and receive: an indication of whether magic number is enabled is written in the field for the corresponding direction.
NCP Network Control Protocols
- IPCP: Internet Protocol Control Protocol Options for IPv4. IPCP includes three command types, which are assigned option numbers in the pertinent RFCs. Note that PPP parameters are typically independently negotiated for each direction on the link.
The sender of an IPCP Configure-Request may either include its own IP address, to provide this information to its remote peer, or may send 0.0.0.0 as an IP address, which requests that the remote peer assign an IP address for the local node. The receiver may refuse the requested IP address and attempt to specify one for the peer to use by responding with a Configure-NAK that carries a different address.
The Ixia implementation provides minimal configuration of this parameter. You must specify the local IP address of the unit, and the peer must provide its own IP address. The Ixia port accepts any IP address the peer wishes to use as long as it is a valid address (for example, not all 0's). The Ixia port expects the peer to accept its address. If, however, the peer specifies a different address for use, the port acknowledges that address but does not actually notify you that this has happened. The Ixia port accepts a situation in which local and peer addresses are the same following negotiation.
- IPv6CP: Internet Protocol Control Protocol Options for IPv6. IPv6CP includes three command types, which are assigned option numbers in the pertinent RFCs. Note that PPP parameters are typically independently negotiated for each direction on the link.
A PPP peer may determine its IPv6 interface address by one of three methods:
- Generate its own address.
- Suggest its own address to its peer, but allow the peer to override that value.
- Require that the peer designate an address.
In any of these cases, the Configure-Request must contain a tentative interface-identifier to send to the peer that is both unique to the link and, if possible, consistently reproducible.
The Ixia PPP implementation of IPv6CP is such that the negotiation mode of the local endpoint may be configured in one of three modes:
- Local may: the local peer may suggest an Interface Identifier (IID), but must allow a Configure-NAK with an alternate address to override its setting.
- Local must: the local peer must set the IID, which the peer must accept.
- Peer must: the peer must supply the IID. This is accomplished by sending an all-zero tentative IID.
The peer endpoint may be configured in one of three modes:
- Peer may: the remote peer may suggest an IID, but must allow a Configure-NAK with an alternate address to override its setting.
- Peer must: the remote peer must set the IID, which the local peer accepts.
- Local must: the local peer must supply the IID.
One IID can be sent in each Configure-Request. A zero value may be sent, in which case the peer may send an IID in its response. Either node on the link can provide the valid, non-zero IID values for itself and its peer.
The tentative or assigned IID (in the Peer - Local Must case) may be assigned from one of four sources:
- Last Negotiated: the last negotiated interface-identifier.
- MAC Based: an address derived from the port's MAC address.
- IPv6: an IPv6 format address.
- Random: a randomly generated value.
See IPv6 Interface Identifiers (IIDs) below for more information.
- OSI Network Layer Control Protocol (OSINLCP): A single option is provided for this NCP protocol. If a non-zero value for alignment has been negotiated, subsequent ISO traffic (for example, IS-IS) arrives or is sent with 1 to 3 zero pads inserted after the protocol header, as per RFC 1377.
- MPLS Network Control Protocol (MPLSCP): No options are currently available for this protocol setup.
IPv6 Interface Identifiers (IIDs)
IIDs comprise part of an IPv6 address, as shown in the following figure for a link-local IPv6 address.
Note: The IPv6 Interface Identifier is equivalent to EUI-64 Id in the Protocol Interfaces window.
Figure: IPv6 Address Format Link-Local Address

The IPv6 Interface Identifier is derived from the 48-bit IEEE 802 MAC address or the 64-bit IEEE EUI-64 identifier. The EUI-64 is the extended unique identifier formed from the 24-bit company ID assigned by the IEEE Registration Authority, plus a 40-bit company-assigned extension identifier, as shown in the following figure.
Figure: IEEE EUI-64 Format

To create the Modified EUI-64 Interface Identifier, the value of the universal/local bit is inverted from `0' (which indicates global scope in the company ID) to `1' (which indicates global scope in the IPv6 Identifier). For Ethernet, the 48-bit MAC address may be encapsulated to form the IPv6 Identifier. In this case, two bytes `FF FE' are inserted between the company ID and the vendor-supplied ID, and the universal/local bit is set to `1' to indicate global scope. An example of an Interface Identifier based on a MAC address is shown in the following figure.
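A short Python sketch of this construction, assuming a colon-separated 48-bit MAC address as input, is shown below.
```python
# Build a Modified EUI-64 Interface Identifier from a 48-bit MAC address:
# insert FF FE between the company ID and the extension, then invert the
# universal/local bit (bit 0x02 of the first octet).

def iid_from_mac(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    eui64 = b[:3] + bytearray([0xFF, 0xFE]) + b[3:]
    eui64[0] ^= 0x02                              # flip the u/l bit
    return ":".join(f"{x:02x}" for x in eui64)

print(iid_from_mac("00:a0:c9:12:34:56"))          # 02:a0:c9:ff:fe:12:34:56
```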
Figure: Example Encapsulated MAC in IPv6 Interface Identifier

Retry Parameters
During the process of negotiation, the port uses three Retry parameters. RFC 1661 specifies the interpretation for all of the parameters.
HDLC
Both standard and Cisco proprietary forms of HDLC (High-level Data Link Control) are supported.
Frame Relay
Packets may be wrapped in frame relay headers. The DLCI (Data Link Connection Identifier) may be set to a fixed value or varied algorithmically.
DSCP Differentiated Services Code Point
Differentiated Services (DiffServ) is a model in which traffic is treated by intermediate systems with relative priorities based on the type of service (ToS) field. Defined in RFC 2474 and RFC 2475, the DiffServ standard supersedes the original specification for defining packet priority described in RFC 791. DiffServ increases the number of definable priority levels by reallocating bits of an IP packet for priority marking.
The DiffServ architecture defines the DiffServ (DS) field, which supersedes the ToS field in IPv4 to make Per-Hop Behavior (PHB) decisions about packet classification and traffic conditioning functions, such as metering, marking, shaping, and policing.
Based on DSCP or IP precedence, traffic can be put into a particular service class. Packets within a service class are treated the same way.
The six most significant bits of the DiffServ field are called the Differential Services Code Point (DSCP).
The DiffServ fields in the packet are organized as shown in the following figure. These fields replace the TOS fields in the IP packet header.
Figure: DiffServ Fields

The DiffServ standard utilizes the same precedence bits (the most significant bits are DS5, DS4, and DS3) as TOS for priority setting, but further clarifies the definitions, offering finer granularity through the use of the next three bits in the DSCP. DiffServ reorganizes and renames the precedence levels (still defined by the three most significant bits of the DSCP) into these categories (the levels are explained in greater detail in this document). The following table shows the eight categories.
Precedence Level | Description |
---|---|
7 | Stays the same (link layer and routing protocol keep alive) |
6 | Stays the same (used for IP routing protocols) |
5 | Expedited Forwarding (EF) |
4 | Class 4 |
3 | Class 3 |
2 | Class 2 |
1 | Class 1 |
0 | Best Effort |
With this system, a device prioritizes traffic by class first. Then it differentiates and prioritizes same-class traffic, taking the drop probability into account.
The DiffServ standard does not specify a precise definition of `low,' `medium,' and `high' drop probability. Not all devices recognize the DiffServ (DS2 and DS1) settings; and even when these settings are recognized, they do not necessarily trigger the same PHB forwarding action at each network node. Each node implements its own response based on how it is configured.
The Assured Forwarding (AF) PHB group is a means for a provider DS domain to offer different levels of forwarding assurances for IP packets received from a customer DS domain. Four AF classes are defined, and each AF class is allocated a certain amount of forwarding resources (buffer space and bandwidth) in each DS node.
Classes 1 to 4 are referred to as AF classes. The following table illustrates the DSCP coding for specifying the AF class with the probability. Bits DS5, DS4, and DS3 define the class, while bits DS2 and DS1 specify the drop probability. Bit DS0 is always zero.
Drop | Class 1 | Class 2 | Class 3 | Class 4 |
---|---|---|---|---|
Low | 001010 AF11 DSCP 10 | 010010 AF21 DSCP 18 | 011010 AF31 DSCP 26 | 100010 AF41 DSCP 34 |
Medium | 001100 AF12 DSCP 12 | 010100 AF22 DSCP 20 | 011100 AF32 DSCP 28 | 100100 AF42 DSCP 36 |
High | 001110 AF13 DSCP 14 | 010110 AF23 DSCP 22 | 011110 AF33 DSCP 30 | 100110 AF43 DSCP 38 |
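The codepoints in the table follow a simple bit pattern: the AF class occupies bits DS5-DS3, the drop precedence occupies bits DS2-DS1, and DS0 is zero, so the DSCP value can be computed directly.
```python
# DSCP for an AF codepoint: (class << 3) | (drop << 1), with DS0 = 0.

def af_dscp(af_class: int, drop: int) -> int:
    """af_class: 1-4; drop: 1 (low), 2 (medium), 3 (high)."""
    return (af_class << 3) | (drop << 1)

for c in range(1, 5):
    print([f"AF{c}{d}={af_dscp(c, d)}" for d in range(1, 4)])
# AF11=10 ... AF43=38, matching the table above
```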
ATM
The ATM load module enables high performance testing of routers and broadband aggregation devices such as DSLAMs and PPPoE termination systems.
The ATM module is provided with various feature combinations:
- Interfaces: pluggable PHYs:
- 1310nm multimode optics with dual -SC connectors
- SFP socket
- Speeds: OC3 and OC12
- Encapsulations:
- LLC/SNAP
- LLC/NLPID
- LLC Bridged Ethernet
- LLC Bridged Ethernet without FCS
- VC Mux Routed
- VC Mux Bridged Ethernet
- VC Mux Bridged Ethernet without FCS
- Multiple independent data streams
ATM is a point-to-point, connection-oriented protocol that carries traffic over `virtual connections/circuits' (VCs), in contrast to Ethernet connectionless LAN traffic. ATM traffic is segmented into 53-byte cells (with a 48-byte payload), and allows traffic from different Virtual Circuits to be interleaved (multiplexed). Ixia's ATM module allows up to 4096 transmit streams per port, shared across up to 15 interleaved VCs.
To allow the use of a larger, more convenient payload size, such as that for Ethernet frames, ATM Adaptation Layer 5 (AAL5) was developed. It is defined in ITU-T Recommendation I.363.5, and applies to the Broadband Integrated Services Digital Network (B-ISDN). It maps the ATM layer to higher layers. The Common Part Convergence Sublayer-Service Data Unit (CPCS-SDU) described in this document can be considered an IP or Ethernet packet. The entire CPCS-PDU (CPCS-SDU plus PAD and trailer) is segmented into sections which are sent as the payload of ATM cells, as shown in the following figure, based on ITU-T I.363.5.
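The padding and segmentation step can be sketched as follows in Python. The trailer layout (CPCS-UU, CPI, Length, CRC-32) follows I.363.5, but the CRC here is left as a zero placeholder, so this is an illustration of the cell-size arithmetic only.
```python
# Pad the CPCS-SDU so that SDU + PAD + 8-byte trailer is a multiple of 48,
# then slice the CPCS-PDU into 48-byte ATM cell payloads.

def aal5_cells(sdu: bytes):
    pad_len = (-(len(sdu) + 8)) % 48
    trailer = bytes([0x00, 0x00])                 # CPCS-UU, CPI
    trailer += len(sdu).to_bytes(2, "big")        # Length of the SDU
    trailer += bytes(4)                           # CRC-32 placeholder (zeros)
    pdu = sdu + bytes(pad_len) + trailer
    return [pdu[i:i + 48] for i in range(0, len(pdu), 48)]

cells = aal5_cells(b"x" * 100)                    # 100 + 36 pad + 8 = 144 octets
print(len(cells), {len(c) for c in cells})        # 3 cells, each 48 octets
```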
Figure: Segmentation into ATM Cells

The Interface Type can be set to UNI (User-to-Network Interface) format or NNI (Network-to-Node Interface, also known as Network-to-Network Interface) format. The 5-byte ATM cell header is different for each of the two interfaces, as shown in the following figure.
Figure: ATM Cell Header for UNI and NNI

ATM OAM Cells
OAM cells are used for operation, administration, and maintenance of ATM networks. They operate on ATM's physical layer and are not recognized by higher layers. Operation, Administration, and Maintenance (OAM) performs standard loopback (end-to-end or segment) and fault detection and notification Alarm Indication Signal (AIS) and Remote Defect Identification (RDI) for each connection. It also maintains a group of timers for the OAM functions. When there is an OAM state change such as loopback failure, OAM software notifies the connection management software.
The ITU-T considers an ATM network to consist of five flow levels. These levels are illustrated in the following figure.
Figure: Maintenance Levels

The lower three flow levels are specific to the nature of the physical connection. The ITU-T recommendation briefly describes the relationship between the physical layer OAM capabilities and the ATM layer OAM.
From an ATM viewpoint, the most important flows are known as the F4 and F5 flows. The F4 flow is at the virtual path (VP) level. The F5 flow is at the virtual channel (VC) level. When OAM is enabled on an F4 or F5 flow, special OAM cells are inserted into the user traffic.
Four types of OAM cells are defined to support the management of VP/VC connections:
- Fault Management OAM cells. These OAM cells are used to indicate failure conditions. They can be used to indicate a discontinuity in VP/VC connection or may be used to perform checks on connections to isolate problems.
- Performance Management OAM cells. These cells are used to monitor performance (QoS) parameters such as cell block ratio, cell loss ratio and incorrectly inserted cells on VP/VC connections.
- Activation-deactivation OAM cells. These OAM cells are used to activate and deactivate the generation and processing of OAM cells, specifically continuity check (CC) and performance management (PM) cells.
- System management OAM cells. These OAM cells can be used to maintain and control various functions between end-user equipment. Their content is not specified by I.610, and they are limited to end-to-end flows.
The general format of an OAM cell is shown in the following figure.
Figure: OAM Cell Format
The header indicates which VCC or VPC an OAM cell belongs to. The cell payload is divided into five fields. The OAM-type and Function-type fields are used to distinguish the type of OAM cell. The Function Specific field contains information pertinent to that cell type. A 10-bit Cyclic Redundancy Check (CRC) is at the end of all OAM cells. This error detection code is used to ensure that management systems do not make erroneous decisions based on corrupted OAM cell information.
Ixia ATM modules allow you to configure Fault Management and Activation/Deactivation OAM cells.
BERT
Bit Error Rate Test (BERT) load modules are packaged as both an option to OC48, POS, 10GE, CFP8 400GE and QSFP-DD 400GE load modules and as BERT-only load modules. As opposed to all other types of testing performed by Ixia hardware and software, BERT tests operate at the physical layer, also referred to as OSI Layer 1. POS frames are constructed using specific pseudo-random patterns, with or without inserted errors. The receive circuitry locks on to the received pattern and checks for errors in those patterns.
Both unframed and framed BERT testing is available. Framed testing can be performed in both concatenated and channelized modes with some load modules.
The patterns inserted within the POS frames are based on the ITU-T O.151 specification. They consist of repeatable, pseudo-random data patterns of different bit-lengths which are designed to test error and jitter conditions. Other constant and user-defined patterns may also be applied. Furthermore, you may control the addition of deliberate errors to the data pattern. The inserted errors are limited to 32-bits in length and may be interspersed with non-errored patterns and repeated for a count. This is illustrated in the following figure. In the figure, an error pattern of n bits occurs every m bits for a count of 4. This error is inserted at the beginning of each POS data block within a frame.
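The periodic insertion described here amounts to XOR-ing an n-bit error pattern into the transmitted bit stream every m bits for a given count, as in the Python sketch below (a simplified model of the hardware behavior).
```python
# XOR an n-bit error pattern into a bit list every m bits, 'count' times.

def insert_errors(bits, error_pattern, m, count):
    out = list(bits)
    for k in range(count):
        start = k * m
        for i, e in enumerate(error_pattern):
            if start + i < len(out):
                out[start + i] ^= e
    return out

clean = [0] * 32
print(insert_errors(clean, [1, 1], m=8, count=4))
# Errored bits appear at offsets 0-1, 8-9, 16-17, and 24-25.
```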
Figure: BERT Inserted Error Pattern

Errors in received BERT traffic are visible through the measured statistics, which are based on readings at one-second intervals. The statistics related to BERT are described in the Available Statistics appendix associated with the Ixia Hardware Guide and some other manuals.
Available/Unavailable Seconds
Reception of POS signals can be divided into two types of periods, depending on the current state: `Available' or `Unavailable,' as shown in the following figure. The seconds occurring during an unavailable period are termed Unavailable Seconds (UAS); those occurring during an available period are termed Available Seconds (AS).
Figure: BERT Unavailable/Available Periods

These periods are defined by the error condition of the data stream. When 10 consecutive Severely Errored Seconds (SESs) (A in the figure) are received, the receiving interface triggers an Unavailable Period. The period remains in the Unavailable state (B in the figure) until a string of 10 consecutive non-SESs is received (D in the figure), which triggers the beginning of the Available state. The string of consecutive non-SESs at C in the figure was less than 10 seconds, which was insufficient to trigger a change to the Available state.
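The transition rule can be modeled with the small Python sketch below. It reports each second as it is observed; the retroactive reclassification of the ten triggering seconds that the relevant ITU-T recommendations require is omitted for simplicity.
```python
# Ten consecutive SESs enter the Unavailable state; ten consecutive non-SESs
# return to the Available state.

def classify_seconds(ses_flags, threshold=10):
    """ses_flags: iterable of booleans, True if that second was an SES.
    Yields 'AS' (Available Second) or 'UAS' (Unavailable Second)."""
    available, run = True, 0
    for ses in ses_flags:
        if available:
            run = run + 1 if ses else 0          # count consecutive SESs
            if run >= threshold:
                available, run = False, 0
        else:
            run = run + 1 if not ses else 0      # count consecutive non-SESs
            if run >= threshold:
                available, run = True, 0
        yield "AS" if available else "UAS"

seconds = [True] * 12 + [False] * 12
print(list(classify_seconds(seconds)))
```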