Port Hardware

The ports on the Ixia load modules provide high-speed transmit, capture, and statistics operation. The discussion that follows is broken down into a number of areas:

Types of Ports

The types of load module ports that Ixia offers are divided into these broad categories:

Only the currently available Ixia load modules are discussed in this chapter. Subsequent chapters in this manual discuss all supported load modules and their optional features.

Ethernet

Ethernet modules are provided with various feature combinations, as mentioned in the following list:

Power over Ethernet

The Power over Ethernet (PoE) load modules (PLM1000T4-PD and LSM1000POE4-02) are special purpose, 4-channel electronic loads. They are intended to be used in conjunction with Ixia Ethernet traffic generator/analyzer load modules to test devices that conform to IEEE Std 802.3af.

A PoE load module provides the hardware interface required to test the Power Sourcing Equipment (PSE) of an 802.3af-compliant device by simulating a Powered Device (PD).

Power Sourcing Equipment (PSE)

A PSE is any equipment that provides the power to a single link Ethernet Network section. The PSE's main functions are to search the link section for a powered device (PD), optionally classify the PD, supply power to the link section (only if a PD is detected), monitor the power on the link section, and remove power when it is no longer requested or required.

There are two power sourcing methods for PoE: Alternative A and Alternative B.

PSEs may be placed in two locations with respect to the link segment, either coincident with the DTE/Repeater, or midspan. A PSE that is coincident with the DTE/Repeater is an `Endpoint PSE.' A PSE that is located within a link segment that is distinctly separate from and between the Media Dependent Interfaces (MDIs) is a `Midspan PSE.'

Endpoint PSEs may support Alternative A, Alternative B, or both. Endpoint PSEs can be compatible with 10BASE-T, 100BASE-X, and/or 1000BASE-T.

Midspan PSEs must use Alternative B. Midspan PSEs are limited to operation with 10BASE-T and 100BASE-TX systems. Operation of Midspan PSEs on 1000BASE-T systems is beyond the scope of PoE.

Powered Devices (PD)

A powered device either draws power or requests power by participating in the PD detection algorithm. A device that is capable of becoming a PD may or may not have the ability to draw power from an alternate power source and, if doing so, may or may not require power from the PSE.

One PoE Load Module emulates up to four PDs. The PoE Load Module (PLM) has eight RJ-45 interfaces: four are used as PD-emulation ports, and each has its own corresponding interface that connects to a port on any Ixia 10/100/1000 copper-based Ethernet load module (including all copper-based TXS and Optixia load modules).

The following figure demonstrates how the PoE modules use an Ethernet card to transmit and receive data streams.

Figure: Data Traffic over PoE Set Up

The emulated PD device can `piggy-back' a signal from a different load module along the cable connected to the PSE from which it draws power. In this manner, the emulated PD can mimic a device that generates traffic, such as an IP phone.

Discovery Process

The main purpose of discovery is to prevent damage to existing Ethernet equipment. The Power Sourcing Equipment (PSE) examines the Ethernet cable by applying a small current-limited voltage and checking for the presence of a 25 kohm resistor in the remote Powered Device (PD). Only if the resistor is present is the full 48 V applied (still current-limited to prevent damage to cables and equipment under fault conditions). The Powered Device must continue to draw a minimum current or the PSE removes the power and the discovery process begins again.
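The detection decision can be sketched as follows. This is an illustrative model only, not Ixia's implementation: the function names are invented, and the 19-26.5 kohm acceptance window is the commonly cited 802.3af valid-signature range.

```python
def signature_ohms(v_applied, i_measured):
    """Signature resistance seen by the PSE probe (diode offsets ignored)."""
    return v_applied / i_measured

def pd_detected(v_applied, i_measured,
                r_min=19_000, r_max=26_500):  # assumed 802.3af signature window
    """True if the load looks like the 25 kohm PD signature resistor."""
    r = signature_ohms(v_applied, i_measured)
    return r_min <= r <= r_max

# A 25 kohm signature probed at 5 V draws 200 uA and is accepted:
assert pd_detected(5.0, 5.0 / 25_000)
# A 10 kohm termination (for example, legacy equipment) is rejected,
# so the full 48 V is never applied to it:
assert not pd_detected(5.0, 5.0 / 10_000)
```

Only after this check passes does the PSE ramp up to the full operating voltage.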

Figure: Discovery Process Voltage

There is also an optional extension to the discovery process where a PD may indicate to the PSE its maximum power requirements, called classification. Once there is power applied to the PD, normal transactions/data transfer occurs. During this period, the PD sends back a maintain power signature (MPS) to signal the PSE to continue to provide power.

PoE Acquisition Tests

During the course of testing with the PoE module, it may be necessary to measure the amplitude of the incoming current. The PoE module can measure amplitude versus time in the following two ways:

In both scenarios, a Start trigger is set, indicating when the test should commence based on an incoming current value (in either DC Volts or DC Amps).

In a Time test, a Stop trigger is also set (in either DC Volts or DC Amps) indicating when the test is over. Once the Stop trigger is reached, the amount of time between the Start and Stop trigger is measured (in microseconds) and the result is reported.

In an amplitude test, an Amplitude Delay time is set (in microseconds), which is the amount of time to wait after the Start trigger is reached before ending the test. The amplitude at the end of the Amplitude Delay time is measured and is reported.

Both Start and Stop triggers must also have a defined Slope type, either positive or negative. A positive slope is equivalent to rising current, while a negative slope is equivalent to decreasing current. A current condition must agree with both the amplitude setting and the Slope type to satisfy the trigger condition.
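The trigger and Time-test behavior described above can be sketched in a few lines. This is an illustrative model, not product code: the function names and the fixed 1-microsecond sampling period are assumptions.

```python
def trigger_met(prev, curr, threshold, slope):
    """A sample pair satisfies a trigger only when it crosses the threshold
    in the configured direction: 'positive' = rising, 'negative' = falling."""
    if slope == "positive":
        return prev < threshold <= curr
    if slope == "negative":
        return prev > threshold >= curr
    raise ValueError(slope)

def time_test_us(samples, start, stop, period_us=1.0):
    """Microseconds between the Start and Stop trigger conditions,
    or None if either trigger is never satisfied."""
    start_i = stop_i = None
    for i in range(1, len(samples)):
        if start_i is None:
            if trigger_met(samples[i - 1], samples[i], *start):
                start_i = i
        elif trigger_met(samples[i - 1], samples[i], *stop):
            stop_i = i
            break
    if start_i is None or stop_i is None:
        return None
    return (stop_i - start_i) * period_us

# Current rises through 0.4 A (Start, positive slope), then falls
# through 0.2 A (Stop, negative slope) three samples later:
samples = [0.0, 0.2, 0.5, 0.5, 0.3, 0.1]
assert time_test_us(samples, (0.4, "positive"), (0.2, "negative")) == 3.0
```

An Amplitude test would instead read the sample at a fixed delay after the Start trigger.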

An example of a Time test is shown in the following figure.

Figure: PoE Time Acquisition Example

An example of an Amplitude test is shown in the following figure.

Figure: PoE Amplitude Acquisition Example

10GE

The 10 Gigabit Ethernet (10GE) family of load modules implements five of the seven IEEE 802.3ae-compliant interfaces that run at 10 Gbits/second. Several of the load modules may also be software switched to OC192 operation.

The 10GE load modules are provided with various feature combinations, as mentioned in the following list:

The relationship of the logical structures for the different 10 Gigabit types is shown in the diagram (adapted from the 802.3ae standard) in the following figure.

Figure: IEEE 802.3ae Draft 10 Gigabit Architecture

For 10GE XAUI and 10GE XENPAK modules, a Status message contains a 4-byte ordered set with a Sequence control character plus three data characters (in hex), distributed across the four lanes, as shown in the following figure. Four Sequence ordered sets are defined in IEEE 802.3ae, but only two of these (Local Fault and Remote Fault) are currently in use; the other two are reserved for future use.

Figure: 10GE XAUI/XENPAK Sequence Ordered Sets

XAUI Interfaces

The 10 Gigabit XAUI interface is defined in the IEEE draft specification P802.3ae by the IEEE 10 Gigabit Ethernet Task Force. XAUI stands for `X' (the Roman numeral for 10, as in `10 Gigabit'), plus `AUI,' the Attachment Unit Interface originally defined for Ethernet.

The original Ethernet standard was defined in IEEE 802.3, and included the MAC layer, frame size, and other `standard' Ethernet characteristics. IEEE 802.3z defined the Gigabit standard. IEEE 802.3ae defines a simplified version of SONET framing to carry native Ethernet-framed traffic over high-speed fiber networks, allowing 10 Gbps native Ethernet traffic to interwork smoothly with 9.6 Gbps SONET at the OC-192c rate over WAN and MAN links. The 10GE XAUI module has a XAUI interface for connecting to another XAUI interface, such as on a DUT. A comparison of the IEEE P802.3ae model for XAUI and the OSI model is shown in the following figure.

Figure: IEEE P802.3ae Architecture for 10GE XAUI

Lane Skew

The Lane Skew feature provides the ability to independently delay one or more of the four XAUI lanes. The resolution of the skew is 3.2 nanoseconds (ns), which consists of 10 Unit Intervals (UIs), each of which is 320 picoseconds (ps). Each UI is equivalent to the amount of time required to transmit one XAUI bit at 3.125 Gbps.

Lane Skew allows a XAUI lane to be skewed by as much as 310 UI (99.2ns) with respect to the other three lanes. To effectively use this feature, the four lanes should be set to different skew values. Setting all four lanes to zero is equivalent to setting all four lanes to +80 UI. In both cases, the lanes are synchronous and there is no lane skew. When lane skewing is enabled, /A/, /K/, and /R/ codes are inserted into the data stream BEFORE the lanes are skewed. The principle behind lane skewing is shown in the diagrams in the following images.
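The skew arithmetic above is easy to verify: one UI is 1/3.125 GHz = 320 ps, the resolution step is 10 UI = 3.2 ns, and the maximum of 310 UI works out to 99.2 ns. A small check (helper name invented for illustration):

```python
UI_PS = 320      # one XAUI unit interval at 3.125 Gbps, in picoseconds
STEP_UI = 10     # skew resolution: 10 UI = 3.2 ns

def skew_ns(units):
    """Convert a lane-skew setting in UIs to nanoseconds.
    The setting must be a multiple of the 10-UI resolution step."""
    assert units % STEP_UI == 0 and 0 <= units <= 310 + STEP_UI
    return units * UI_PS / 1000.0

assert skew_ns(10) == 3.2     # one resolution step
assert skew_ns(310) == 99.2   # maximum skew quoted above
```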

Figure: XAUI Lane Skewing Lane Skew Disabled

Figure: XAUI Lane Skewing Lane Skew Enabled

Link Fault Signaling

Link Fault Signaling is defined in Section 46 of the IEEE 802.3ae specification for 10 Gigabit Ethernet. When the feature is enabled, four statistics are added to the list in Statistic View for the port: one monitors the Link Fault State; two count the Local Faults and Remote Faults; and the last indicates whether error insertion is ongoing.

Link Fault Signaling originates with the PHY sending an indication of a local fault condition in the link being used as a path for MAC data. In the typical scenario, the Reconciliation Sublayer (RS) that had been receiving the data receives this Local Fault status, and then sends a Remote Fault status to the RS that was sending the data. Upon receipt of this Remote Fault status message, the sending RS terminates transmission of MAC data, sending only `Idle' control characters until the link fault is resolved.

For the 10GE LAN and LAN-M serial modules, the Physical Coding Sublayer (PCS) of the PHY handles the conversion of 64-bit data into 66-bit `blocks.' The 64 bits of data are scrambled, and then a 2-bit synchronization (sync) header is attached before transmission. This process is reversed by the PHY at the receiving end.
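The sync-header step can be sketched as follows. This is an illustrative model, not the Ixia implementation: the function names are invented, and the scrambling stage (x^58 + x^39 + 1 in 802.3ae) is omitted so only the 64-to-66-bit framing is shown.

```python
SYNC_DATA, SYNC_CTRL = 0b01, 0b10   # the two valid 2-bit sync headers

def frame_66b(payload64: int, is_control: bool) -> int:
    """Attach the 2-bit sync header to a (nominally scrambled) 64-bit
    payload, producing the 66-bit block the PCS transmits."""
    hdr = SYNC_CTRL if is_control else SYNC_DATA
    return (hdr << 64) | (payload64 & (2**64 - 1))

def deframe_66b(block66: int):
    """Strip the sync header; headers 00 and 11 are invalid on the wire."""
    hdr, payload = block66 >> 64, block66 & (2**64 - 1)
    if hdr not in (SYNC_DATA, SYNC_CTRL):
        raise ValueError("sync header error")
    return payload, hdr == SYNC_CTRL

blk = frame_66b(0xDEADBEEF, is_control=False)
assert deframe_66b(blk) == (0xDEADBEEF, False)
```

The two-bit header is also what gives the receiver a transition to lock onto, since 00 and 11 never occur.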

Link Fault Signaling for the 10GE XAUI/XENPAK is handled differently across the four-lane XAUI optional XGMII extender layer, which uses 8B/10B encoding.

Examples of Link Fault Signaling Error Insertion

The examples in this figure are described in the following table:

Cases for Example
Case     Conditions

Case 1   Contiguous Bad Blocks = 2 (the minimum); Contiguous Good Blocks = 2 (the minimum); send Type A ordered sets; loop 1x.

Case 2   Contiguous Bad Blocks = 2 (the minimum); Contiguous Good Blocks = 2 (the minimum); send Type A ordered sets; loop continuously.

Case 3   Contiguous Bad Blocks = 2 (the minimum); Contiguous Good Blocks = 2 (the minimum); send alternate ordered set types; loop 1x.

Case 4   Contiguous Bad Blocks = 2 (the minimum); Contiguous Good Blocks = 2 (the minimum); send alternate ordered set types; loop continuously.

Link Alarm Status Interrupt (LASI)

The link alarm status is an active low output from the XENPAK module that is used to indicate a possible link problem as seen by the transceiver. Control registers are provided so that LASI may be programmed to assert only for specific fault conditions.

Efficient use of XENPAK and its specific registers requires an end-user system to recognize a connected transceiver as being of the XENPAK type. An Organizationally Unique Identifier (OUI) is used as the means of identifying a port as XENPAK, and also to identify the device in which the XENPAK-specific registers are located.

Ixia's XENPAK module allows for setting whether or not LASI monitoring is enabled, what register configurations to use, and the OUI. The XENPAK module can use the following registers:

You can control the registers by setting a series of sixteen bits for each register. The register bits and their usage are described in the following tables.

Rx Alarm Control
Bits      Description                               Default

15 - 11   Reserved                                  0
10        Vendor Specific                           N/A (vendor setting)
9         WIS Local Fault Enable                    1 (when implemented)
8 - 6     Vendor Specific                           N/A (vendor setting)
5         Receive Optical Power Fault Enable        1 (when implemented)
4         PMA/PMD Receiver Local Fault Enable       1 (when implemented)
3         PCS Receive Local Fault Enable            1
2 - 1     Vendor Specific                           N/A (vendor setting)
0         PHY XS Receive Local Fault Enable         1

Tx Alarm Control
Bits      Description                               Default

15 - 11   Reserved                                  0
10        Vendor Specific                           N/A (vendor setting)
9         Laser Bias Current Fault Enable           1 (when implemented)
8         Laser Temperature Fault Enable            1 (when implemented)
7         Laser Output Power Fault Enable           1 (when implemented)
6         Transmitter Fault Enable                  1
5         Vendor Specific                           N/A (vendor setting)
4         PMA/PMD Transmitter Local Fault Enable    1 (when implemented)
3         PCS Transmit Local Fault Enable           1
2 - 1     Vendor Specific                           N/A (vendor setting)
0         PHY XS Transmit Local Fault Enable        1

LASI Control
Bits      Description                               Default

15 - 8    Reserved                                  0
7 - 3     Vendor Specific                           0 (when implemented)
2         Rx Alarm Enable                           0
1         Tx Alarm Enable                           0
0         LS Alarm Enable                           0
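Register values like the LASI Control word above are built by setting individual bits. A minimal sketch (function name invented; bit positions taken from the LASI Control table):

```python
# Bit positions from the LASI Control register layout.
RX_ALARM_EN, TX_ALARM_EN, LS_ALARM_EN = 2, 1, 0

def lasi_control(rx=False, tx=False, ls=False) -> int:
    """Pack the three enable flags into the 16-bit LASI Control value.
    All other bits are left at their table default of 0."""
    return (rx << RX_ALARM_EN) | (tx << TX_ALARM_EN) | (ls << LS_ALARM_EN)

assert lasi_control() == 0x0000                 # the all-defaults value
assert lasi_control(rx=True, ls=True) == 0x0005  # bits 2 and 0 set
```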

For more detailed information on LASI, see the online document XENPAK MSA Rev. 3.

40GE and 100GE

For theoretical information, refer to 40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview White Paper, published by Ethernet Alliance, November, 2008. This white paper may be obtained through the Internet.

http://www.ethernetalliance.org/wp-content/uploads/2011/10/document_files_40G_100G_Tech_overview.pdf

400GE

The 400 Gigabit Ethernet (400GE) family of load modules meets the growing bandwidth requirements of ever-evolving data networks. 400GE addresses the broad range of bandwidth requirements for key application areas such as cloud-scale data centers, Internet exchanges, co-location services, wireless infrastructure, service provider and operator networks, and video distribution infrastructure.

This family of load modules is capable of 200GE, 100GE, and 50GE fan-outs.

SONET/POS

SONET/POS modules are provided with various feature combinations:

Variable Rate Clocking

The OC48 VAR allows a variation of +/- 100 parts per million (ppm) from the clock source's nominal frequency, through a DC voltage input into the BNC jack marked `DC IN' on the front panel. The frequency may be monitored through the BNC marked `Freq Monitor.'

SONET Operation

A Synchronous Optical NETwork/Synchronous Digital Hierarchy (SONET/SDH) frame is based on the Synchronous Transport Signal-1 (STS-1) frame, whose structure is shown in the figure below. Transmission of SONET frames of this size corresponds to the Optical Carrier level 1 (OC-1).

An OC-3c consists of three OC-1/STS-1 frames multiplexed together at the octet level. OC-12c, OC-48c, and OC-192c are formed from higher multiples of the basic OC-1 format. The suffix `c' indicates that the basic frames are concatenated to form the larger frame.

Ixia supports both concatenated (with the `c') and channelized (without the `c') interfaces. Concatenated interfaces send and receive data in a single stream of data. Channelized interfaces send and receive data in multiple independent streams.

Figure: Generated Frame Contents SONET STS-1 Frame

The contents of the SONET STS-1 frame are described in the following table.

SONET STS-1 Frame Contents
Section Description

Section Overhead (SOH)

Consists of 9 bytes which include information relating to performance monitoring of the STS-n signal, and framing.

Line Overhead (LOH)

Consists of 18 bytes which include information relating to performance monitoring of the individual STS-1s, protection switching information, and line alarm indication signals.

Transport Overhead (TOH)

Consists of a combination of the Section Overhead and Line Overhead sections of the STS-1 frame.

Path Overhead (POH)

Part of the Synchronous Payload Envelope (SPE), contains information on the contents of the SPE, and handles quality monitoring.

Synchronous Payload Envelope (SPE)

Contains the payload information, the packets which are being transmitted, and includes the Path Overhead bytes.

Payload Capacity

Part of the SPE, and contains the packets being transmitted.

The SONET STS-1 frame is transmitted at a rate of 51.84 Mbps, with 49.5 Mbps reserved for the frame payload. A SONET frame is transmitted in 125 microseconds, with transmission starting at Row 1, Byte 1 at the upper left of the frame and proceeding row by row from top to bottom, left to right.
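These rates follow directly from the frame geometry: 9 rows of 90 bytes, sent 8000 times per second (once every 125 microseconds). A small illustrative calculation:

```python
ROWS, COLS = 9, 90          # STS-1 frame: 9 rows x 90 columns of bytes
FRAMES_PER_SEC = 8000       # one frame every 125 microseconds

bits_per_frame = ROWS * COLS * 8
line_rate = bits_per_frame * FRAMES_PER_SEC      # bits per second

assert bits_per_frame == 6480
assert line_rate == 51_840_000                   # 51.84 Mbps, as stated above

# OC-n line rates are simple multiples of the OC-1/STS-1 rate:
assert line_rate * 3 == 155_520_000              # OC-3
assert line_rate * 48 == 2_488_320_000           # OC-48
```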

The section, line, and path overhead elements are related to the manner in which SONET frames are transmitted, as shown in the following figure.

Figure: Example Diagram of SONET Levels and Network Elements

Error Insertion

A variety of deliberate errors may be inserted in SONET frames in the section, line or path areas of a frame. The errors which may be inserted vary by particular load module. Errors may be inserted continuously or periodically as shown in the following figure.

Figure: SONET Error Insertion

An error may be inserted in one of two manners:

Each error may be individually inserted continuously or periodically. Errors may be inserted on a one time basis over a number of frames as well.

DCC Data Communications Channel

The data communication channel is a feature of SONET networks which uses the DCC bytes in the transport overhead of each frame. This is used for control, monitoring and provisioning of SONET connections. Ixia ports treat the DCC as a data stream which `piggy-backs' on the normal SONET stream. The DCC and normal (referred to as the SPE - Synchronous Payload Envelope) streams can be transmitted independently or at the same time.

A number of different techniques are available for transmitting DCC and SPE data, utilizing Ixia streams and flows (see Streams and Flows) and the advanced stream scheduler (see Advanced Streams).

SRP Spatial Reuse Protocol

The Spatial Reuse Protocol (SRP) was developed by Cisco for use with ring-based media. It derives its name from the spatial reuse properties of the packet handling procedure. This optical transport technology combines the bandwidth-efficient and service-rich capabilities of IP routing with the bandwidth-rich, self-healing capabilities of fiber rings to deliver fundamental cost and functionality advantages over existing solutions. In SRP mode, the usual POS header (PPP, and so forth) is replaced by the SRP header.

SRP networks use two counter-rotating rings. One Ixia port may be used to participate in one of the rings; two may be used to simultaneously participate in both rings. Ixia supports SRP on both OC48 and OC192 interfaces.

In SRP mode, SRP packets can be captured and analyzed. The IxExplorer capture view provides packet analysis that understands SRP packets. The Ixia hardware also collects SRP-specific statistics and performs filtering based on SRP header contents.

Any of the following SRP packet types may be generated in a data stream, along with normal IPv4 traffic:

RPR Resilient Packet Ring

Ixia's optional Resilient Packet Ring (RPR) implementation is available on the OC-48c and OC-192c POS load modules. RPR is a proposed industry standard for MAC Control on Metropolitan Area Networks (MANs), defined by IEEE P802.17. This feature provides a cost-effective method to optimize the transport of bursty traffic, such as IP, over existing ring topologies.

A diagram showing a simplified model of an RPR network is shown in the following figure. It is made up of two counter-rotating `ringlets,' with nodes called `stations' supporting MAC Clients that exchange data and control information with remote peers on the ring. Up to 255 nodes can be supported on the ring structure.

Figure: RPR Ring Network Diagram

The RPR topology discovery is handled by a MAC sublayer, and a protection function maintains network connectivity in the event of a station or span failure. The structure of the RPR layers, compared to the OSI model, is illustrated in a diagram based on IEEE 802.17, shown in the following figure.

Figure: RPR Layers

A diagram of the layers associated with an RPR Station is shown in the following figure.

Figure: RPR Layer Diagram

The Ixia implementation allows for the configuration and transmission of the following types of RPR frames:

GFP Generic Framing Procedure

GFP provides a generic mechanism to adapt traffic from higher-layer client signals over a transport network. Currently, two modes of client signal adaptation are defined for GFP.

In the Frame-Mapped adaptation mode, the Client/GFP adaptation function operates at the data link (or higher) layer of the client signal. Client PDU visibility is required, which is obtained when the client PDUs are received from either the data layer network or a bridge, switch, or router function in a transport network element.

For the Transparent adaptation mode, the Client/GFP adaptation function operates on the coded character stream, rather than on the incoming client PDUs. Processing of the incoming code word space for the client signal is required.

Two kinds of GFP frames are defined: GFP client frames and GFP control frames. GFP also supports a flexible (payload) header extension mechanism to facilitate the adaptation of GFP for use with diverse transport mechanisms.

GFP uses a modified version of the Header Error Check (HEC) algorithm to provide GFP frame delineation. The frame delineation algorithm used in GFP differs from HEC in two basic ways:

A diagram of the format for a GFP frame is shown in the following figure.

Figure: GFP Frame Elements

The sections of the GFP frame are described in the following list:

GFP frame delineation is performed based on the correlation between the first two octets of the GFP frame and the embedded two-octet cHEC field. The following figure shows the state diagram for the GFP frame delineation method.

Figure: GFP State Transitions

The state diagram works as follows:

  1. In the HUNT state, the GFP process performs frame delineation by searching octets for a correctly formatted Core Header over the last received sequence of four octets. Once a correct cHEC match is detected in the candidate Payload Length Indicator (PLI) and cHEC fields, a candidate GFP frame is identified and the receive process enters the PRESYNC state.
  2. In the PRESYNC state, the GFP process performs frame delineation by checking frames for a correct cHEC match in the presumed Core Header of the next candidate GFP frame. The PLI field in the Core Header of the preceding GFP frame is used to find the beginning of the next candidate GFP frame. The process repeats until a set number of consecutive correct cHECs are confirmed, at which point the process enters the SYNC state. If an incorrect cHEC is detected, the process returns to the HUNT state.
  3. In the SYNC state, the GFP process performs frame delineation by checking for a correct cHEC match on the next candidate GFP frame. The PLI field in the Core Header of the preceding GFP frame is used to find the beginning of the next candidate GFP frame. Frame delineation is lost whenever multiple bit errors are detected in the Core Header by the cHEC. In this case, a GFP Loss of Frame Delineation event is declared, the framing process returns to the HUNT state, and a client Server Signal Failure (SSF) is indicated to the client adaptation process.
  4. Idle GFP frames participate in the delineation process and are then discarded.

Robustness against false delineation in the resynchronization process depends on the value of DELTA. A value of DELTA = 1 is suggested. Frame delineation acquisition speed can be improved by the implementation of multiple `virtual framers,' whereby the GFP process remains in the HUNT state and a separate PRESYNC substate is spawned for each candidate GFP frame detected in the incoming octet stream.
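The HUNT/PRESYNC/SYNC behavior in steps 1-3 can be sketched as a small state machine. This is illustrative only: `chec_ok` abstracts the cHEC check on each candidate frame (in SYNC, the real process tolerates correctable single-bit errors and only drops delineation on multi-bit Core Header errors).

```python
HUNT, PRESYNC, SYNC = "HUNT", "PRESYNC", "SYNC"
DELTA = 1   # consecutive correct cHECs needed to leave PRESYNC (suggested value)

def next_state(state, chec_ok, presync_count=0):
    """One transition of the GFP delineation process.
    Returns (new_state, new_presync_count)."""
    if state == HUNT:
        # A correct cHEC identifies a candidate frame.
        return (PRESYNC, 0) if chec_ok else (HUNT, 0)
    if state == PRESYNC:
        if not chec_ok:
            return (HUNT, 0)
        presync_count += 1
        return (SYNC, 0) if presync_count >= DELTA else (PRESYNC, presync_count)
    # SYNC: delineation is lost when the Core Header check fails.
    return (SYNC, 0) if chec_ok else (HUNT, 0)

state, n = HUNT, 0
for ok in (True, True, True, False):   # two good frames lock; an error drops
    state, n = next_state(state, ok, n)
assert state == HUNT
```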

Scrambling of the GFP Payload Area is required to provide security against payload information replicating the scrambling word (or its inverse) of a frame-synchronous scrambler (such as those used in the SDH RS layer or in an OTN OPUk channel). The following figure illustrates the scrambler and descrambler processes.

Figure: GFP Scrambling

All octets in the GFP Payload Area are scrambled using an x^43 + 1 self-synchronous scrambler. Scrambling is done in network bit order.

At the source adaptation process, scrambling is enabled starting at the first transmitted octet after the cHEC field, and is disabled after the last transmitted octet of the GFP frame. When the scrambler or descrambler is disabled, its state is retained. The scrambler or descrambler state at the beginning of a GFP frame's Payload Area is thus the last 43 Payload Area bits of the GFP frame transmitted in that channel immediately before the current frame.
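The x^43 + 1 self-synchronous scrambler is simple to model at the bit level: each output bit is the input bit XORed with the scrambler's own output 43 bits earlier, and the descrambler uses the received bits as its delayed feedback, so it resynchronizes automatically. A sketch (function names invented; an all-zero initial state is assumed for the round trip, whereas in practice the state carries over between frames as described above):

```python
from collections import deque
import random

def scramble(bits, state=None):
    """x^43 + 1 self-synchronous scrambler over a list of 0/1 bits."""
    s = deque(state or [0] * 43, maxlen=43)
    out = []
    for b in bits:
        o = b ^ s[0]       # XOR with the output bit 43 positions back
        out.append(o)
        s.append(o)        # feedback of the *scrambled* bit
    return out

def descramble(bits, state=None):
    """Inverse operation; feedback uses the *received* (scrambled) bits."""
    s = deque(state or [0] * 43, maxlen=43)
    out = []
    for b in bits:
        out.append(b ^ s[0])
        s.append(b)
    return out

data = [random.randint(0, 1) for _ in range(200)]
assert descramble(scramble(data)) == data   # round trip with matching states
```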

The activation of the sink adaptation process descrambler also depends on the present state of the cHEC check algorithm:

CDL Converged Data Link

10GE LAN, 10GE XAUI, 10GE XENPAK, 10GE WAN, and 10GE WAN UNIPHY modules all support the Cisco CDL preamble format.

The Converged Data Link (CDL) specification was developed to provide a standard method of implementing operation, administration, maintenance, and provisioning (OAM&P) in Ethernet packet-based optical networks without using a SONET/SDH layer.

PPP Protocol Negotiation

The Point-to-Point Protocol (PPP) is widely used to establish, configure and monitor peer-to-peer communication links. A PPP session is established in a number of steps, with each step completing before the next one starts. The steps, or layers, are:

  1. Physical: a physical layer link is established.
  2. Link Control Protocol (LCP): establishes the basic communications parameters for the line, including the Maximum Receive Unit (MRU), type of authentication to be used and type of compression to be used.
  3. Link quality determination and authentication. These are optional processes. Quality determination is the responsibility of PPP Link Quality Monitoring (LQM) Protocol. Once initiated, this process may continue throughout the life of the link. Authentication is performed at this stage only. There are multiple protocols which may be employed in this process; the most common of these are PAP and CHAP.
  4. Network Control Protocol (NCP): establishes which network protocols (such as IP, OSI, MPLS) are to be carried over the link and the parameters associated with the protocols. The protocols which support this NCP negotiation are called IPCP, OSINLCP, and MPLSCP, respectively.
  5. Network traffic and sustaining PPP control. The link has been established and traffic corresponding to previously negotiated network protocols may now flow. Also, PPP control traffic may continue to flow, as may be required by LQM, PPP keepalive operations, and so forth.

All implementations of PPP must support the Link Control Protocol (LCP), which negotiates the fundamental characteristics of the data link and constitutes the first exchange of information over an opening link. Physical link characteristics (media type, transfer rate, and so forth) are not controlled by PPP.

The Ixia PPP implementation supports LCP, IPCP, MPLSCP, and OSINLCP. When PPP is enabled on a given port, LCP and at least one of the NCPs must complete successfully over that port before it is administratively `up' and ready for general traffic to flow.

Each Ixia POS port implements a subset of the LCP, LQM, and NCP protocols. Each of the protocols is of the same basic format. For any connection, separate negotiations are performed for each direction. Each party sends a Configure-Request message to the other, with options and parameters proposing some form of configuration. The receiving party may respond with one of three messages:

For the Configure-Reject and Configure-NAK responses, the sending party is expected to reply with an alternative Configure-Request.
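The receiving party's decision logic can be sketched as follows. This is an illustrative model only: real LCP options are binary type-length-value fields per RFC 1661, and the option names and predicates here are invented.

```python
def respond(request, supported, acceptable):
    """Classify a Configure-Request: reject options we do not recognize,
    NAK recognized options whose proposed values we cannot accept,
    otherwise ACK the request as a whole.
    'request' maps option name -> proposed value;
    'acceptable' maps option name -> predicate on the value."""
    unknown = {o for o in request if o not in supported}
    if unknown:
        return ("Configure-Reject", unknown)
    bad = {o: v for o, v in request.items() if not acceptable[o](v)}
    if bad:
        return ("Configure-Nak", bad)
    return ("Configure-Ack", request)

supported = {"MRU", "Magic-Number"}
acceptable = {"MRU": lambda v: 64 <= v <= 1500,
              "Magic-Number": lambda v: v != 0}

assert respond({"MRU": 1500}, supported, acceptable)[0] == "Configure-Ack"
assert respond({"MRU": 9000}, supported, acceptable)[0] == "Configure-Nak"
assert respond({"Quality": 1}, supported, acceptable)[0] == "Configure-Reject"
```

After a Reject or Nak, the original sender retries with an adjusted Configure-Request, as noted above.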

The Ixia port may be configured to immediately start negotiating after the physical link comes up, or to passively wait for the peer to start the negotiation. Ixia ports both send and respond to PPP keepalive messages called echo requests.

LCP Link Control Protocol Options

The following sections outline the parameters associated with the Link Control Protocol. LCP includes a number of possible command types, which are assigned option numbers in the pertinent RFCs. Note that PPP parameters are typically independently negotiated for each direction on the link.

Numerous RFCs are associated with LCP, but the most important RFCs are RFC 1661 and RFC 1662. The HDLC/PPP header sequence for LCP is FF 03 C0 21.

During the LCP phase of negotiation, the Ixia port makes available the following options:

NCP Network Control Protocols
IPv6 Interface Identifiers (IIDs)

IIDs comprise part of an IPv6 address, as shown in the following figure for a link-local IPv6 address.

Note: The IPv6 Interface Identifier is equivalent to EUI-64 Id in the Protocol Interfaces window.

Figure: IPv6 Address Format Link-Local Address

The IPv6 Interface Identifier is derived from the 48-bit IEEE 802 MAC address or the 64-bit IEEE EUI-64 identifier. The EUI-64 is the extended unique identifier formed from the 24-bit company ID assigned by the IEEE Registration Authority, plus a 40-bit company-assigned extension identifier, as shown in the following figure.

Figure: IEEE EUI-64 Format

To create the Modified EUI-64 Interface Identifier, the value of the universal/local bit is inverted from `0' (which indicates global scope in the company ID) to `1' (which indicates global scope in the IPv6 Identifier). For Ethernet, the 48-bit MAC address may be encapsulated to form the IPv6 Identifier. In this case, two bytes `FF FE' are inserted between the company ID and the vendor-supplied ID, and the universal/local bit is set to `1' to indicate global scope. An example of an Interface Identifier based on a MAC address is shown in the following figure.
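The MAC-to-Interface-Identifier construction described above is mechanical: insert FF FE between the 3-byte company ID and the 3-byte vendor-supplied ID, and flip the universal/local bit (bit 1 of the first byte). A sketch with an arbitrary MAC address (function name invented):

```python
def mac_to_iid(mac: str) -> bytes:
    """Form the Modified EUI-64 IPv6 Interface Identifier from a
    48-bit MAC address written as colon-separated hex bytes."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    assert len(b) == 6
    # Flip the universal/local bit, then splice in FF FE mid-address.
    return bytes([b[0] ^ 0x02]) + bytes(b[1:3]) + b"\xff\xfe" + bytes(b[3:])

iid = mac_to_iid("00:90:27:17:fc:0f")
assert iid.hex() == "029027fffe17fc0f"
```

The resulting 64 bits form the low half of, for example, a link-local address (fe80:: plus the IID).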

Figure: Example Encapsulated MAC in IPv6 Interface Identifier

Retry Parameters

During the process of negotiation, the port uses three Retry parameters. RFC 1661 specifies the interpretation for all of the parameters.

HDLC

Both standard and Cisco proprietary forms of HDLC (High-level Data Link Control) are supported.

Frame Relay

Packets may be wrapped in frame relay headers. The DLCI (Data Link Connection Identifier) may be set to a fixed value or varied algorithmically.

DSCP Differentiated Services Code Point

Differentiated Services (DiffServ) is a model in which traffic is treated by intermediate systems with relative priorities based on the type of service (ToS) field. Defined in RFC 2474 and RFC 2475, the DiffServ standard supersedes the original specification for defining packet priority described in RFC 791. DiffServ increases the number of definable priority levels by reallocating bits of an IP packet for priority marking.

The DiffServ architecture defines the DiffServ (DS) field, which supersedes the ToS field in IPv4 to make Per-Hop Behavior (PHB) decisions about packet classification and traffic conditioning functions, such as metering, marking, shaping, and policing.

Based on DSCP or IP precedence, traffic can be put into a particular service class. Packets within a service class are treated the same way.

The six most significant bits of the DiffServ field are called the Differentiated Services Code Point (DSCP).

The DiffServ fields in the packet are organized as shown in the following figure. These fields replace the ToS field in the IP packet header.

Figure: DiffServ Fields

The DiffServ standard utilizes the same precedence bits (the three most significant bits: DS5, DS4, and DS3) as ToS for priority setting, but further clarifies the definitions, offering finer granularity through the use of the next three bits in the DSCP. DiffServ reorganizes and renames the precedence levels (still defined by the three most significant bits of the DSCP) into these categories (the levels are explained in greater detail in this document). The following table shows the eight categories.

DSCP Categories
Precedence Level   Description

7                  Stays the same (link layer and routing protocol keep-alive)
6                  Stays the same (used for IP routing protocols)
5                  Expedited Forwarding (EF)
4                  Class 4
3                  Class 3
2                  Class 2
1                  Class 1
0                  Best Effort

With this system, a device prioritizes traffic by class first. Then it differentiates and prioritizes same-class traffic, taking the drop probability into account.
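The bit positions described above can be sketched in code. The following minimal illustration (the function name is ours, not part of any Ixia API) pulls the DSCP, the class bits, and the drop-probability bits out of an IPv4 ToS/DS byte:

```python
def decode_dscp(tos_byte):
    """Extract the DSCP (top six bits) from an IPv4 ToS/DS byte and
    report the class (DS5..DS3) and drop-probability (DS2..DS1) bits."""
    dscp = (tos_byte >> 2) & 0x3F        # DS5..DS0
    class_bits = dscp >> 3               # DS5..DS3, the old precedence bits
    drop_bits = (dscp >> 1) & 0x3        # DS2..DS1, drop probability for AF
    return dscp, class_bits, drop_bits

# AF11 is DSCP 10 (binary 001010); shifted into a ToS byte it is 0x28.
print(decode_dscp(0x28))   # → (10, 1, 1)
```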

The DiffServ standard does not specify a precise definition of `low,' `medium,' and `high' drop probability. Not all devices recognize the DiffServ (DS2 and DS1) settings; and even when these settings are recognized, they do not necessarily trigger the same PHB forwarding action at each network node. Each node implements its own response based on how it is configured.

The Assured Forwarding (AF) PHB group is a means for a provider DS domain to offer different levels of forwarding assurance for IP packets received from a customer DS domain. Four AF classes are defined; within each DS node, each AF class is allocated a certain amount of forwarding resources (buffer space and bandwidth).

Classes 1 to 4 are referred to as AF classes. The following table illustrates the DSCP coding for specifying the AF class with the drop probability. Bits DS5, DS4, and DS3 define the class, while bits DS2 and DS1 specify the drop probability. Bit DS0 is always zero.

Drop Precedence for Classes

Drop     Class 1                Class 2                Class 3                Class 4
Low      001010 AF11 (DSCP 10)  010010 AF21 (DSCP 18)  011010 AF31 (DSCP 26)  100010 AF41 (DSCP 34)
Medium   001100 AF12 (DSCP 12)  010100 AF22 (DSCP 20)  011100 AF32 (DSCP 28)  100100 AF42 (DSCP 36)
High     001110 AF13 (DSCP 14)  010110 AF23 (DSCP 22)  011110 AF33 (DSCP 30)  100110 AF43 (DSCP 38)
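Because bits DS5 through DS3 carry the class and bits DS2 through DS1 the drop precedence (with DS0 always zero), every DSCP value in the table follows from one small formula. A hypothetical helper (the name is ours) illustrating the arithmetic:

```python
def af_dscp(af_class, drop_precedence):
    """DSCP value for Assured Forwarding codepoint AF<class><drop>.
    Bits DS5..DS3 carry the class and DS2..DS1 the drop precedence,
    with DS0 zero, so DSCP = class*8 + drop*2."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return af_class * 8 + drop_precedence * 2

# Reproduce table entries: AF11→10, AF22→20, AF33→30, AF43→38.
print([af_dscp(c, d) for c, d in [(1, 1), (2, 2), (3, 3), (4, 3)]])  # → [10, 20, 30, 38]
```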

ATM

The ATM load module enables high performance testing of routers and broadband aggregation devices such as DSLAMs and PPPoE termination systems.

The ATM module is provided with various feature combinations:

ATM is a point-to-point, connection-oriented protocol that carries traffic over `virtual connections/circuits' (VCs), in contrast to Ethernet connectionless LAN traffic. ATM traffic is segmented into 53-byte cells (with a 48-byte payload), and allows traffic from different Virtual Circuits to be interleaved (multiplexed). Ixia's ATM module allows up to 4096 transmit streams per port, shared across up to 15 interleaved VCs.

To allow the use of a larger, more convenient payload size, such as that for Ethernet frames, ATM Adaptation Layer 5 (AAL5) was developed. It is defined in ITU-T Recommendation I.363.5, and applies to the Broadband Integrated Services Digital Network (B-ISDN). It maps the ATM layer to higher layers. The Common Part Convergence Sublayer-Service Data Unit (CPCS-SDU) described in this document can be considered an IP or Ethernet packet. The entire CPCS-PDU (CPCS-SDU plus PAD and trailer) is segmented into sections which are sent as the payload of ATM cells, as shown in the following figure, based on ITU-T I.363.5.

Figure: Segmentation into ATM Cells
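The segmentation arithmetic can be illustrated with a short sketch, assuming the standard 48-byte cell payload and 8-byte CPCS trailer (the function name is illustrative, not an Ixia API):

```python
def aal5_cells(sdu_len):
    """Number of 48-byte ATM cell payloads needed to carry an AAL5
    CPCS-SDU of sdu_len bytes, plus the PAD length. The 8-byte CPCS
    trailer must land at the end of the last cell, so the CPCS-PDU is
    padded up to a multiple of 48 bytes."""
    TRAILER = 8
    total = sdu_len + TRAILER
    cells = (total + 47) // 48           # round up to whole cells
    pad = cells * 48 - total
    return cells, pad

# A 1500-byte Ethernet-sized SDU: 1500 + 8 = 1508 → 32 cells, 28 PAD bytes.
print(aal5_cells(1500))   # → (32, 28)
```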

The Interface Type can be set to UNI (User-to-Network Interface) format or NNI (Network-to-Node Interface, also known as Network-to-Network Interface) format. The 5-byte ATM cell header is different for each of the two interfaces, as shown in the following figure.

Figure: ATM Cell Header for UNI and NNI
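The difference between the two header formats lies only in how the first twelve bits are split: UNI carries a 4-bit GFC and an 8-bit VPI, while NNI absorbs the GFC into a 12-bit VPI. A sketch of packing the first four header bytes (HEC omitted; the function name is ours):

```python
def cell_header(vpi, vci, pti=0, clp=0, gfc=0, nni=False):
    """First four bytes of an ATM cell header (the fifth byte, HEC,
    is omitted). UNI: GFC(4) VPI(8) VCI(16) PTI(3) CLP(1).
    NNI: VPI(12) VCI(16) PTI(3) CLP(1)."""
    if nni:
        word = (vpi & 0xFFF) << 20
    else:
        word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20)
    word |= ((vci & 0xFFFF) << 4) | ((pti & 0x7) << 1) | (clp & 0x1)
    return word.to_bytes(4, "big")

print(cell_header(vpi=1, vci=32).hex())   # → 00100200
```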

ATM OAM Cells

OAM cells are used for operation, administration, and maintenance of ATM networks. They operate on ATM's physical layer and are not recognized by higher layers. Operation, Administration, and Maintenance (OAM) performs standard loopback (end-to-end or segment) and fault detection and notification, using the Alarm Indication Signal (AIS) and Remote Defect Indication (RDI), for each connection. It also maintains a group of timers for the OAM functions. When there is an OAM state change, such as a loopback failure, OAM software notifies the connection management software.

The ITU-T considers an ATM network to consist of five flow levels. These levels are illustrated in the following figure.

Figure: Maintenance Levels

The lower three flow levels are specific to the nature of the physical connection. The ITU-T recommendation briefly describes the relationship between the physical layer OAM capabilities and the ATM layer OAM.

From an ATM viewpoint, the most important flows are known as the F4 and F5 flows. The F4 flow is at the virtual path (VP) level. The F5 flow is at the virtual channel (VC) level. When OAM is enabled on an F4 or F5 flow, special OAM cells are inserted into the user traffic.

Four types of OAM cells are defined to support the management of VP/VC connections:

The general format of an OAM cell is shown in the following figure.

Figure: OAM Cell Format

The header indicates the VCC or VPC to which an OAM cell belongs. The cell payload is divided into five fields. The OAM-type and Function-type fields distinguish the type of OAM cell. The Function Specific field contains information pertinent to that cell type. A 10-bit Cyclic Redundancy Check (CRC) is at the end of all OAM cells. This error detection code is used to ensure that management systems do not make erroneous decisions based on corrupted OAM cell information.
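The 10-bit CRC uses the generator polynomial x^10 + x^9 + x^5 + x^4 + x + 1. A minimal, unoptimized bit-serial sketch of the computation (the function name is ours; the CRC field itself is taken as zero while computing):

```python
def crc10(data: bytes) -> int:
    """Bit-serial CRC-10 over a byte string, MSB first, initial value 0,
    generator x^10 + x^9 + x^5 + x^4 + x + 1 (0x233 with the x^10
    term dropped)."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            fb = ((crc >> 9) & 1) ^ ((byte >> i) & 1)
            crc = (crc << 1) & 0x3FF
            if fb:
                crc ^= 0x233
    return crc

print(hex(crc10(b"\x01")))   # → 0x233
```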

Ixia ATM modules allow configuration of Fault Management and Activation/Deactivation OAM cells.

BERT

Bit Error Rate Test (BERT) load modules are packaged both as an option to OC48, POS, 10GE, CFP8 400GE, and QSFP-DD 400GE load modules and as BERT-only load modules. Unlike all other types of testing performed by Ixia hardware and software, BERT tests operate at the physical layer, also referred to as OSI Layer 1. POS frames are constructed using specific pseudo-random patterns, with or without inserted errors. The receive circuitry locks on to the received pattern and checks for errors in those patterns.

Both unframed and framed BERT testing is available. Framed testing can be performed in both concatenated and channelized modes with some load modules.

The patterns inserted within the POS frames are based on the ITU-T O.151 specification. They consist of repeatable, pseudo-random data patterns of different bit-lengths which are designed to test error and jitter conditions. Other constant and user-defined patterns may also be applied. Furthermore, you may control the addition of deliberate errors to the data pattern. The inserted errors are limited to 32-bits in length and may be interspersed with non-errored patterns and repeated for a count. This is illustrated in the following figure. In the figure, an error pattern of n bits occurs every m bits for a count of 4. This error is inserted at the beginning of each POS data block within a frame.

Figure: BERT Inserted Error Pattern
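As an illustration of the pseudo-random patterns involved, the 2^23−1 sequence of ITU-T O.151 can be produced with a simple linear-feedback shift register. This is a sketch under stated assumptions (a Fibonacci LFSR for polynomial x^23 + x^18 + 1; the output inversion that O.151 specifies is omitted, and the function name is ours):

```python
def prbs23(nbits, seed=0x7FFFFF):
    """First nbits of the 2^23-1 pseudo-random bit sequence generated by
    the polynomial x^23 + x^18 + 1, starting from an all-ones register."""
    state = seed & 0x7FFFFF
    out = []
    for _ in range(nbits):
        # feedback is the XOR of stages 23 and 18 (bits 22 and 17, 0-indexed)
        fb = ((state >> 22) ^ (state >> 17)) & 1
        out.append((state >> 22) & 1)    # output the oldest stage
        state = ((state << 1) | fb) & 0x7FFFFF
    return out

# From the all-ones seed, the first 23 output bits are all ones.
print(prbs23(30))
```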

Errors in received BERT traffic are visible through the measured statistics, which are based on readings at one-second intervals. The statistics related to BERT are described in the Available Statistics appendix associated with the Ixia Hardware Guide and some other manuals.

Available/Unavailable Seconds

Reception of POS signals can be divided into two types of periods, depending on the current state: `Available' or `Unavailable,' as shown in the following figure. The seconds occurring during an unavailable period are termed Unavailable Seconds (UAS); those occurring during an available period are termed Available Seconds (AS).

Figure: BERT Unavailable/Available Periods

These periods are defined by the error condition of the data stream. When 10 consecutive Severely Errored Seconds (SESs) are received (A in the figure), the receiving interface triggers an Unavailable Period. The period remains in the Unavailable state (B in the figure) until a string of 10 consecutive non-SESs is received (D in the figure), which triggers the beginning of the Available state. The string of consecutive non-SESs at C in the figure lasted less than 10 seconds, which was insufficient to trigger a change to the Available state.
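The 10-second rule above can be modeled as a two-state machine. A simplified sketch (the function name is ours; following the description above, the 10 trigger SESs are counted as Unavailable Seconds and the 10 clean exit seconds as Available Seconds):

```python
def count_uas(ses):
    """ses is a list of booleans, one per second (True = Severely Errored
    Second). Returns the number of Unavailable Seconds (UAS): 10
    consecutive SESs start an unavailable period (those 10 included);
    10 consecutive non-SESs end it (those 10 relabeled as available)."""
    uas = 0
    unavailable = False
    run = 0                      # length of the run contradicting the state
    for s in ses:
        if unavailable:
            run = run + 1 if not s else 0
            uas += 1
            if run == 10:        # 10 clean seconds: exit, relabel them as AS
                unavailable = False
                uas -= 10
                run = 0
        else:
            run = run + 1 if s else 0
            if run == 10:        # 10 SESs: enter, count them retroactively
                unavailable = True
                uas += 10
                run = 0
    return uas

# 12 SESs then 15 clean seconds → the 12 SESs count as UAS.
print(count_uas([True] * 12 + [False] * 15))   # → 12
```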