Programmable DSPs can implement both existing codecs and future codec standards. New codec standards currently appear roughly every two years, each demanding more DSP cycles, so choosing a DSP platform with a compatible development roadmap (such as ZSP) is crucial: future system requirements can then be met through system upgrades rather than redesign.

Multimedia integrates selected elements (including text, audio, still images, video, and graphics) into a single media object. Streaming technology transmits these objects in real time while they are being read, listened to, or viewed. Before RealAudio shipped the first commercial streaming media product in 1995, most internet media files had to be downloaded in full before playback; now playback can begin during transmission, without waiting for a streaming audio clip to finish downloading. Streaming data is sent from the server and received and presented in real time by the client, which can begin playing audio/video once its receive buffer holds enough data to prevent underflow. Network-based streaming is typically implemented with dedicated multimedia servers, which transmit data continuously, without bursts or long pauses, so clients need only minimal buffering before playback begins.

Audio/video compression algorithms already available on embedded DSPs are key to the real-time performance that streaming requires. These algorithms are called codecs because they both encode and decode digital data. Although streaming is most commonly associated with distributed computer networks, other forms of digital communication require it as well: digital audio broadcasting (such as DRM, XM Satellite Radio, and Sirius Satellite Radio), digital broadcast television (such as DirecTV and South Korea's T-DMB), 3GPP handsets, and Bluetooth handsets all rely on codecs that meet streaming requirements.
Codecs are also very useful in non-streaming applications such as storage compression. Standardized codecs provide the highest degree of interoperability. Streaming audio standards include MPEG1/2 Layer 3 (MP3), Dolby Digital AC-3, MPEG2 AAC, WMA, and Ogg Vorbis. Common video compression standards include MPEG2, MPEG4 SP/ASP, MPEG4 AVC/H.264, and WMV.

MP3, the standard audio codec

Originally a shorthand for MPEG1 Layer 3, MP3 has in everyday use come to cover Layer 1, Layer 2, and MPEG2.5, an extension developed by the Fraunhofer Institute. MP3 is one of the most widely recognized codecs, with the largest user base of any internet codec. To reach near-CD audio quality on demanding content, however, bitrates above 192 kbps are required.

MPEG1, Part 3 (ISO/IEC 11172-3): Defines two-channel encoding and decoding at sampling rates of 32, 44.1, or 48 kHz and coding rates from 32 to 384 kbps. The standard describes three related methods, Layers I, II, and III; Layer III offers the highest compression ratio but also the highest complexity.

MPEG2, Part 3 (ISO/IEC 13818-3): Adds two important improvements to MPEG1. First, the need for lower bitrates is met by standardizing the "Low Sampling Frequency" (LSF) extension, which adds 16, 22.05, and 24 kHz sampling rates; second, the MPEG1 modes are extended to support up to 12 audio channels. Fraunhofer's MPEG2.5 extension halves the MPEG2 sampling frequencies again, offering 8, 11.025, and 12 kHz.

Dolby Digital (AC-3)

Dolby Digital currently has the largest user base of any multichannel codec. By encoding multiple channels into a single coded object, it achieves high-quality, low-complexity audio compression.
Although the algorithm is independent of the number of coded channels, current implementations follow SMPTE's recommended 5.1-channel configuration: five full-bandwidth channels, left, center, right, left surround, and right surround, plus one limited-bandwidth low-frequency effects (LFE) channel for bass. Dolby Digital supports flexible playback: 1 to 5.1 channels, 32, 44.1, or 48 kHz sampling rates, and bitrates from 32 to 640 kbps. Decoded audio automatically adapts to the playback system, providing the best sound quality available from any audio configuration.

aacPlus series codecs

Coding Technologies has developed a series of codecs widely adopted by international standards organizations. MPEG2 AAC provides near-CD quality at 128 kbps, even for particularly complex content. aacPlus v1 has been standardized by the DVD Forum, DVB, Digital Radio Mondiale, 3GPP2, and ISMA, among others. aacPlus v2 entered commercial use in late 2004 and has been designated a high-quality audio codec in 3GPP; all components of aacPlus v2 are integral parts of the MPEG-4 audio specification.

AAC: The aacPlus codecs are all built around the AAC core described in MPEG2 Part 7 (ISO/IEC 13818-7). AAC offers sampling rates of 8, 11.025, 12, 16, 22.05, 24, 32, 44.1, 48, 64, 88.2, or 96 kHz, and up to 48 audio channels at bitrates of up to 288 kbps per channel. It defines three closely related profiles: Low Complexity, Main, and Scalable Sampling Rate (SSR). Low Complexity AAC (AAC-LC) requires comparatively few processor resources and is therefore the profile typically used in embedded applications.

MPEG4, Part 3 (ISO/IEC 14496-3): Adds the Perceptual Noise Substitution (PNS) tool to MPEG2 AAC, defining MPEG4 AAC. PNS simplifies the representation of noise-like signals by encoding them parametrically. PNS should not be confused with Temporal Noise Shaping (TNS), which appears in both MPEG2 and MPEG4 AAC.
aacPlus v1: Sometimes referred to as High-Efficiency AAC (HE-AAC), this codec combines the basic AAC codec with Spectral Band Replication (SBR). SBR is a bandwidth-extension technique that lets almost any audio codec maintain sound quality at roughly a 30% lower bitrate: the upper half of the bandwidth is represented by a small set of encoding parameters added to the coded lower half. SBR can also be applied to other codecs; combining SBR with MP3, for example, produces the MP3Pro codec.

aacPlus v2: Adding Parametric Stereo (PS) to aacPlus v1 yields aacPlus v2. PS reconstructs the right channel from the left channel plus a few additional encoding parameters, further reducing the bitrate. aacPlus v2 achieves DVD 5.1-channel quality at 160 kbps, near-CD stereo quality at 48 kbps, excellent stereo at 32 kbps, entertainment-quality stereo at 24 kbps, and high-quality mono below 16 kbps. This efficiency enables new applications in mobile digital broadcasting.

WMA

WMA is a widely used family of audio codecs in Microsoft's licensed Windows Media series. The latest versions are WMA9, WMA9 Professional, WMA9 Lossless, WMA9 Voice, and WMA9 Variable Bit Rate (VBR). WMA9 is the member most common in embedded applications; it offers 16-bit two-channel audio at up to 320 kbps with sampling rates up to 48 kHz. Professional supports 24-bit audio, 96 kHz sampling rates, and up to 7.1 channels at 128 to 768 kbps. As with Dolby Digital, decoded audio automatically adapts to the playback system, providing the best sound quality available from any speaker configuration. Lossless targets CD archiving, with compression ratios between 2:1 and 3:1. Voice compresses speech to 20 kbps. Although VBR is not ideal for most streaming applications, both WMA9 and Professional can encode at variable bitrates; Lossless always uses VBR.
Ogg Vorbis: An open-source, royalty-free codec offering audio quality similar to MP3. "Ogg" is the container format; "Vorbis" is the audio codec. Because it eliminates the per-game licensing fees associated with MP3 game music, Ogg Vorbis is increasingly used by computer game developers.

Standard Video Codecs

The Joint Video Team (JVT) brings together the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). VCEG develops voluntary standards for advanced moving-picture coding in conversational and non-conversational audio/video applications. MPEG develops international standards for the compression, encoding, decompression, processing, and coded representation of moving pictures, audio, and combinations of the two, serving a wide range of applications. In short, collaboration between these two groups has produced the most popular video standards, including ITU-T H.262/MPEG2 and H.264/MPEG4 AVC.

MPEG2 Video/H.262: MPEG2 video (ISO/IEC 13818-2), also known as ITU-T H.262, is currently the most widely used video coding standard in consumer video devices. It is used for digital television broadcasting, terrestrial, cable, and direct satellite, at standard-definition resolutions such as 720x576 at 25 fps (PAL) or 720x480 at 30 fps (NTSC). It is also the mandatory codec for DVD-Video.

MPEG4-SP/ASP: ISO/IEC 14496-2 describes the MPEG4 Simple Profile (SP) and Advanced Simple Profile (ASP). SP targets next-generation portable terminals and narrowband internet use. ASP adds several tools that improve coding efficiency by a factor of 1.5 to 2. Both are gaining acceptance in the market.

MPEG4-AVC/ITU-T H.264: AVC is a multimedia standard developed jointly by ISO/MPEG and the ITU-T. It offers higher compression ratios, better video quality, and greater error resilience than MPEG2, making it a promising candidate for internet broadcasting and mobile communications.
WMV9/SMPTE VC-1: WMV9 is Microsoft's multimedia standard, with support for streaming, variable bitrate, and error resilience comparable to MPEG4-AVC/H.264. Besides home computers, WMV9 is now used in cinemas for digital projection. Movie encoding can use constant bitrate (CBR, 7 to 12 Mbps) or variable bitrate (VBR) at DVD resolution (720x480).

Embedded DSP Streaming Solutions

DSPs have become ideal for streaming codecs for several reasons. First, the variety of codecs and constantly evolving standards demand a programmable solution. Second, most codecs are computationally intensive, and DSPs are designed for efficient mathematical operations. Third, power consumption and cost matter greatly in mobile streaming, and DSP cores offer the best combination of low power and low cost.

Typical audio/video streaming systems use both internal and external memory. Internal memory is fast and runs at the DSP core clock speed; external memory is slower and cheaper. Encoding/decoding instructions are stored in external memory but downloaded to internal memory for execution. Because video stream data is so large, it is usually kept off-chip unless absolutely necessary, while audio stream data can be placed on- or off-chip as convenient; additional IP modules can be attached to the SoC system bus as needed.

Backward-Compatible DSP Platform

New codec standards appear roughly every two years, each requiring more DSP cycles, so a DSP platform that evolves along a compatibility roadmap is crucial: future system requirements can be met through system upgrades rather than redesign. ZSP provides the flexibility and performance necessary to keep pace with ever-changing multimedia standards.
LSI Logic's ZSP product division offers a full range of synthesizable, software-compatible DSP cores along with extensive audio/video standard code; all cores on the product roadmap are code-compatible, and a broad network of third-party partners ensures that new standards become available quickly. ZSP-based audio/video systems can therefore adapt easily to emerging audio/video standards.

Each ZSP generation (G1/G2/G3) is based on an easily programmable architecture. The ZSP core is specifically optimized for low power, making it well suited to mobile applications such as personal audio/video players, and its 16/32-bit data paths provide the computational and control performance that high-quality audio and video processing requires. The G2 core adds a powerful coprocessor interface that lets hardware accelerators be embedded in the core's execution pipeline. Accelerators can be loosely or tightly coupled to that pipeline; tightly coupled accelerators behave like instruction-set extensions of ZSP, making them easy to program and use, even from C code. The ZSP core is also served by an excellent compiler that supports both efficient assembly development and efficient compilation of C code; together with the readily available standard code, this ensures the fastest possible system design and implementation.