Ethernet uplink card design based on IXP1200 network processor
Abstract: The Ethernet uplink card is a board in a DSLAM device based on ATM technology; through it, the DSLAM device can connect directly to the IP network. However, because ATM-to-IP conversion consumes significant resources, the uplink card can easily become a bottleneck for the entire system. This paper proposes an Ethernet uplink card design based on the IXP1200 network processor. The design exploits the powerful data processing capability and high flexibility of the IXP1200 to achieve line-speed data processing, and new functions can be added as needed.

Keywords: Ethernet uplink card; network processor; ATM; Ethernet; microcode; IXP1200

With the rapid development of network communication technology, broadband access has become a hot topic in telecommunications access technology. Because early broadband technology was based on ATM, the core chips and line interface chips provided by major manufacturers were all ATM-based. Data networks, however, are mainly based on TCP/IP. Therefore, to integrate ATM and TCP/IP, ATM-to-Ethernet conversion must be provided on the DSLAM device. This conversion requires a large amount of data processing and can easily create a system bottleneck. The uplink card design aims to solve the problem of high-speed forwarding between ATM cells and Ethernet frames in DSLAM equipment. This paper proposes an uplink card design based on the IXP1200 network processor and analyzes its implementation in detail.

1. Main Features of the IXP1200 Network Processor

A network processor is a hardware-programmable device, typically a chip, designed specifically for processing network data packets.
Through optimization of the hardware architecture and instruction set, a network processor not only provides hardware support for line-speed packet processing but also offers great system flexibility. The IXP1200 is a high-end network processor manufactured by Intel Corporation and is a core product of the IXA (Internet Exchange Architecture). Its internal structure is shown in Figure 1. It contains a StrongARM core clocked at up to 232 MHz, six RISC programmable microengines (each with four hardware threads), a 64-bit IX Bus clocked at up to 104 MHz, a 32-bit SRAM interface unit (operating at half the core frequency), a 64-bit SDRAM interface unit (operating at half the core frequency), and a 32-bit PCI bus interface unit clocked at up to 66 MHz. The IXP1200 is connected to the IX Bus through the FBI interface unit. An integrated development environment, supporting both assembly and C, is available for application development on the microengines.

(1) StrongARM core. The StrongARM core performs the main functions of a CPU: it boots the system, manages and controls the other units of the network processor, processes packets that the microengines cannot handle, and deals with exception conditions.

(2) Microengines. Each microengine is a programmable 32-bit RISC processor whose instruction set is designed specifically for network and communication applications. By programming each thread, packet forwarding and processing can be performed independently, without StrongARM core intervention. This reduces the load on the StrongARM core and makes the microengines particularly suitable for high-speed data processing and forwarding.

(3) SDRAM unit. The SDRAM unit provides the interface between the IXP1200 and SDRAM, supporting up to 256 MB of SDRAM.
Although SDRAM is slower to access, it offers large capacity, so it is used to store large data structures (such as packets and routing tables) and can hold operating system code at runtime.

(4) SRAM unit. The SRAM unit provides a general bus interface for several kinds of devices: up to 8 MB of SSRAM; the FLASH, E-PROM, or BootROM from which the StrongARM core executes code after reset; and slow-port devices such as CAMs, encryption devices, and the control/status interfaces of MAC or PHY chips. SRAM is faster to access but smaller than SDRAM, and is mainly used for data structures that require fast access, such as lookup tables and buffer descriptors.

(5) PCI unit. The PCI unit provides an interface to PCI devices and can be used to download the operating system and configuration programs.

(6) FBI unit. The hash unit, IX Bus interface, and Scratchpad memory in Figure 1 are collectively referred to as the FBI unit. The IXP1200 connects to the IX Bus through the FBI unit, which handles the transmission and reception of packets between the peripherals and the IXP1200, so that the microengine threads can access and forward those packets. The StrongARM core can also access these packets for exception handling or upper-layer protocol processing.

2. Design Scheme of the Ethernet Uplink Card

The basic function of the Ethernet uplink card is to forward between ATM cells and Ethernet frames.
That is, after receiving the ATM cell stream from the core card over the LVDS interface, the card converts it into Ethernet frames according to the encapsulation protocol (such as RFC 1483 bridging), establishes the corresponding MAC-address-to-ATM-PVC mapping, and sends the frames to the IP network through the Ethernet uplink port. In the other direction, it receives Ethernet frames from the IP network on the Ethernet uplink port, converts them into ATM cell streams according to the established MAC-address-to-PVC mapping, and sends them to the core card over the LVDS interface.

In the uplink card, the forwarding between ATM cells and Ethernet frames is handled by the microengines inside the network processor. To prevent the Ethernet uplink card from becoming a network bottleneck, the microengines must process packets (Ethernet frames or ATM cells) at line speed; that is, they must finish processing the current packet before the next one arrives. The maximum allowable processing time per packet must therefore be less than the inter-packet interval. During the design, the specific functions of the Ethernet uplink card should be allocated sensibly across the hardware resources of the IXP1200 to maximize system performance. In this design, the Ethernet uplink card performs six main tasks: ATM reception processing, CRC-32 verification, Ethernet transmission, Ethernet reception processing, CRC-32 generation, and ATM transmission. Since the IXP1200 has six microengines, assigning one task to each microengine and building a multi-stage pipeline yields excellent processing results. Figure 2 illustrates the task allocation scheme for the six microengines of the IXP1200 network processor.
The entire processing flow can be divided into two directions: uplink (ATM-to-Ethernet conversion) and downlink (Ethernet-to-ATM conversion). In the uplink direction, the ATM receive engine reassembles received ATM cells into AAL5 PDUs, converts them into Ethernet frames according to the encapsulation protocol, establishes the corresponding MAC-address-to-PVC mapping, and places them on the CRC-32 check queue. The CRC-32 check engine then verifies the CRC of the PDUs in the queue and places them on the Ethernet transmit queue. The Ethernet transmit engine's main task is to send the Ethernet frames in that queue out through the Ethernet uplink port. In the downlink direction, the Ethernet receive engine receives Ethernet frames from the Ethernet uplink port, encapsulates them into AAL5 PDUs, places them on the CRC-32 generation queue, and at the same time looks up the ATM cell header from the established MAC-address-to-PVC mapping. The CRC-32 generation engine then generates the CRC for the PDUs in the queue and places them on the UBR queue. Finally, the ATM transmit engine segments the PDUs into ATM cells and sends them out through the ATM port.

3. Hardware Design of the Ethernet Uplink Card

Figure 3 shows the hardware circuit of the Ethernet uplink card, which mainly comprises four parts: an Ethernet processing unit, an IXP1200 network processing unit, an FPGA control logic unit, and an ATM and LVDS backplane bus processing unit.

3.1 Ethernet Processing Unit

The Ethernet processing unit is the uplink-side interface of the card, used to connect to data network devices such as routers or Layer 3 switches. It mainly includes an RJ45 connector, a transformer isolation circuit, an LXT9763 Ethernet physical layer chip, and an IXF440 MAC layer chip. The RJ45 connector and transformer isolation circuit are standard circuits for an Ethernet interface.
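The MAC-address-to-PVC mapping used in both directions can be sketched in plain C. This only illustrates the logic, not the actual microcode: the table size, the names, and the trivial hash below are hypothetical (on the real card the table would live in SRAM and the IXP1200's hash unit would compute the index), and hash collisions simply overwrite the old entry instead of being chained.

```c
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 256  /* hypothetical table size */

struct map_entry {
    uint8_t  mac[6];    /* learned Ethernet MAC address */
    uint16_t vpi, vci;  /* ATM PVC it maps to */
    int      valid;
};

static struct map_entry table[TABLE_SIZE];

/* Trivial illustrative hash over the MAC address. */
static unsigned hash_mac(const uint8_t mac[6]) {
    unsigned h = 0;
    for (int i = 0; i < 6; i++) h = h * 31 + mac[i];
    return h % TABLE_SIZE;
}

/* Uplink direction: learn the source MAC of a reassembled frame
   together with the PVC it arrived on. */
void learn(const uint8_t mac[6], uint16_t vpi, uint16_t vci) {
    struct map_entry *e = &table[hash_mac(mac)];
    memcpy(e->mac, mac, 6);
    e->vpi = vpi; e->vci = vci; e->valid = 1;
}

/* Downlink direction: find the PVC for a destination MAC.
   Returns 1 on a hit; 0 means the frame would be dropped or
   handed to the StrongARM core for exception handling. */
int lookup(const uint8_t mac[6], uint16_t *vpi, uint16_t *vci) {
    struct map_entry *e = &table[hash_mac(mac)];
    if (e->valid && memcmp(e->mac, mac, 6) == 0) {
        *vpi = e->vpi; *vci = e->vci;
        return 1;
    }
    return 0;
}
```

The same table serves both pipelines: the ATM receive engine calls the learning path while reassembling frames, and the Ethernet receive engine calls the lookup path to pick the cell header for segmentation.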
The LXT9763 mainly implements the physical layer functions described in the 802.3 standard and connects to the IXF440 chip through the MII bus. The IXF440 chip mainly implements the MAC layer functions described in the 802.3 standard and also provides the IX Bus interface to the network processor; on the IX Bus, this chip acts as a SLAVE device.

3.2 IXP1200 Network Processing Unit

The IXP1200 network processing unit is the core of the entire Ethernet uplink card. It connects to the external chips via the IX Bus, on which it acts as the MASTER device, and all processing software runs inside the network processor. The unit consists of the IXP1200 network processor and external memory chips (SDRAM, SRAM, and Flash). The SDRAM and SRAM are shared memory units. SDRAM can be accessed directly by the IXP1200's StrongARM core, by the microengines, and by devices on the PCI bus, which supports fast data movement between SDRAM and the microengines, the IX Bus, and the PCI bus. SRAM has faster access times than SDRAM and is typically used to store tables requiring rapid lookup, improving performance.

3.3 FPGA Control Logic Unit

In Intel's network processor solutions, the external data interface is the IX Bus, a proprietary bus defined by Intel, whereas the external interface of the ATM chip used on the Ethernet uplink card is the standard UTOPIA bus. To interconnect the chips, an FPGA is therefore used to convert between the IX Bus and the UTOPIA bus: an IX Bus SLAVE interface is implemented on the IX Bus side, and a UTOPIA SLAVE interface on the ATM side. This FPGA control logic unit provides the physical-layer glue for the ATM-to-Ethernet frame conversion, and its implementation is crucial to the design of the Ethernet uplink card.
3.4 ATM and LVDS Backplane Bus Unit

This unit provides the seamless connection between the network processor unit on the Ethernet uplink card and the ATM backplane. Since the core design of the DSLAM equipment is based on ATM technology, this unit is required to interconnect the network processor unit with the ATM-based system. The other boards in the DSLAM system mainly implement ATM switching and the ADSL line interfaces. The backplane is a high-speed differential bus based on LVDS, whose interference immunity is crucial for high-density DSLAM equipment. The uplink card connects to the high-speed LVDS bus through the ATM physical layer chip, allowing the card to be integrated seamlessly into the system.

4. Software Design of the Ethernet Uplink Card

The software of the Ethernet uplink card runs primarily on the IXP1200 network processor. To facilitate development on the IXP1200, Intel has released a highly integrated and powerful development toolkit, SDK 2.0. It includes the IXP1200 Developer Workbench, an integrated development tool designed for writing symbolic microcode, with an assembler and optimizer, and it provides a hardware-free IXP1200 simulator that supports simulation and debugging in software mode, offering a friendly interface and debugging environment. Software development for the IXP1200 takes place at two levels. The first level is the high-level software: the management software, routing protocol software, and other system tasks running on the IXP1200's StrongARM core. This part usually requires an embedded operating system, and current development is mainly based on Linux.
The second level is the low-level software, which runs on the six microengines and performs fast packet processing, including fast packet forwarding and basic Layer 2 protocol processing. This part is implemented in microcode, and special attention must be paid to code optimization, i.e., completing the processing in as few instructions as possible, since each microengine provides only 2 KB of code store. Each microengine contains four threads, forming hardware multithreading. Because a microengine contains a large number of GPRs and SRAM and SDRAM transfer registers, each thread has its own dedicated register set when relative addressing mode is used, which greatly accelerates thread switching. An important principle in IXP1200 microcode design is that when a thread is waiting for a resource, it should be swapped out so that other threads can use the microengine's processing power. This rapid switching ensures that each thread makes full use of the microengine and prevents processor cycles from being wasted on a waiting thread. The microcode is organized according to this principle.

Figure 4 shows the main flowchart of the high-level software. Its purpose is to initialize the hardware and software, load the microcode into the six microengines of the network processor, and start them running. The microcode of the low-level software is divided into two parts, with tasks allocated as discussed above for the six microengines, again in two directions: ATM to Ethernet and Ethernet to ATM. Figure 5 shows the microcode flowchart.
5. Conclusion

The Ethernet uplink card based on the IXP1200 network processor presented in this paper has been successfully applied in DSLAM equipment, solving the problem of high-speed interconnection between DSLAM equipment and IP networks. Testing shows that the card performs well and the system operates stably.