Embedded audio system design based on S3C2410 and UDA134l

1 Introduction

In recent years, embedded digital audio products have gained increasing popularity among consumers. In consumer electronics such as MP3 players and mobile phones, user expectations have long since moved beyond simple calls and basic text handling; high-quality audio is a major current development trend. The embedded audio system presented here divides into a hardware design and a software design. The hardware adopts an audio architecture based on the IIS bus. On the software side, embedded Linux is a completely open and free operating system: it supports many hardware architectures, has an efficient and stable kernel, open source code, and comprehensive development tools, giving developers an excellent development environment. This paper constructs an embedded audio system from Samsung's S3C2410 microprocessor and Philips' UDA1341 stereo audio CODEC, presents the design of the relevant hardware circuits, and describes the implementation of the audio driver on the Linux 2.4 kernel.

2 Introduction to the ARM920T and S3C2410

The ARM920T is one of the ARM family of microprocessor cores. It uses a five-stage pipeline and adds the Thumb instruction set, EmbeddedICE debug support, and a Harvard bus architecture. In the same manufacturing process, its performance is more than twice that of the ARM7TDMI. The S3C2410 is an ARM920T-core microprocessor manufactured by Samsung in a 0.18 μm process. It has a separate 16 KB instruction cache, a 16 KB data cache, and an MMU, which allows developers to port Linux directly to target systems built around this processor.

3 Hardware Framework Based on the IIS Bus

The IIS (Inter-IC Sound) bus is a serial digital audio bus protocol proposed by Philips.
It is a multimedia-oriented audio bus dedicated to data transmission between audio devices, providing a standard connection to codecs for digital stereo. The IIS bus carries only audio data; other signals (such as control signals) must be transmitted separately. To minimize pin count, IIS uses only three serial lines: a time-division-multiplexed data line, a word select line, and a clock line.

The hardware part of the audio system chiefly concerns the connection between the CPU and the CODEC. This system uses the Philips UDA1341, an audio CODEC built around the IIS audio bus. It supports the IIS data format, uses bitstream conversion technology for signal processing, and provides a programmable gain amplifier (PGA) and a digital automatic gain control (AGC). The S3C2410 has a built-in IIS bus interface that connects directly to an 8/16-bit stereo CODEC, and its FIFO channels can be served by DMA instead of interrupts, enabling simultaneous transmission and reception. The IIS interface has three operating modes, selected through the IISCON register. The hardware framework in this article uses the transmit-and-receive mode: the IIS data lines receive and transmit audio data simultaneously through two DMA channels, with DMA service requests raised automatically by the FIFO-ready logic. The S3C2410 contains a four-channel DMA controller connecting the system bus (AHB) and the peripheral bus (APB); Table 1 lists the request sources of each channel. Full-duplex audio transfer therefore uses channels 1 and 2 of the S3C2410: channel 1 for received data and channel 2 for transmitted data.
The S3C2410's DMA controller has no built-in DMA storage area, so the program must allocate a DMA buffer for the audio device in memory; the data to be played or recorded is placed into this buffer directly by DMA. As shown in Figure 1, the S3C2410's IIS bus signals connect directly to the UDA1341's IIS signals, while the L3 interface pins L3MODE, L3CLOCK, and L3DATA connect to the S3C2410's general-purpose output pins GPB1, GPB2, and GPB3, respectively.

The UDA1341 provides two sets of audio signal inputs, each with left and right channels. As Figure 2 shows, the two input paths are processed quite differently inside the UDA1341: the first input is sampled after passing through a 0 dB/6 dB switch and then fed to the digital mixer; the second input first passes through the programmable gain amplifier (PGA), is then sampled, and the sampled data passes through the digital automatic gain control (AGC) before reaching the digital mixer. The hardware design selects the second input: since the goal is to adjust the input volume in software, the second path is the clear choice, because its AGC can be controlled over the L3 bus interface. In addition, selecting the second path allows the microphone input signal to be amplified on-chip by the PGA. Because the IIS bus carries only audio data, the UDA1341 also provides a built-in L3 bus interface for control signals. The L3 interface acts as the mixer control interface, governing the bass and volume of the input/output audio. It connects to three general-purpose I/O pins of the S3C2410, which simulate the complete timing and protocol of the L3 bus in software.
One point deserves emphasis: the L3 bus clock is not continuous. It emits a burst of eight clock cycles only when there is data on the data line; otherwise the clock line stays high.

4 Implementation of the Audio Driver under Linux

Device drivers are the interface between the operating system kernel and the hardware, shielding applications from hardware details. As part of the kernel, a driver mainly performs the following functions: initializing and releasing the device; managing the device, including setting parameters in real time and providing an interface for device operations; reading the data that applications send to the device file and returning the data they request; and detecting and handling device errors. An audio driver mainly controls the hardware to move audio streams and presents standard audio interfaces to the upper layers. The driver designed by the author provides two standard interfaces: the DSP (digital sound processing) interface, responsible for audio data transfer, i.e., playing digital sound files and recording; and the mixer interface, responsible for mixing control of the audio output, such as volume and bass/treble adjustment. These two interfaces correspond to the device files /dev/dsp and /dev/mixer, respectively. The driver implementation divides into initialization, device opening, the DSP driver, the MIXER driver, and device release.

4.1 Initialization and Device Opening

Device initialization sets up the UDA1341 volume, sampling frequency, L3 interface, and so on, and registers the device. The `audio_init(void)` function performs the following steps: initializing the S3C2410 control ports (GPB1-GPB3); allocating DMA channels for the device; initializing the UDA1341; and registering the audio and mixer devices.
The `open()` function opens the device. It sets up the IIS and L3 buses; prepares parameters such as the number of channels and the sample width and passes them to the device; computes the buffer size from the sampling parameters; and allocates a DMA buffer of the appropriate size for the device to use.

4.2 DSP Driver Implementation

The DSP driver implements the transfer of audio data, i.e., the data paths for playback and recording. It also provides an `ioctl` to control the sampling rates of the DAC and ADC in the UDA1341. Sampling-rate control amounts to reading and writing the sampling-rate control register inside the UDA1341, so the main work of the driver is controlling the transfer of audio data. The driver uses a static `audio_state` structure to describe the state of the whole audio system; its most important members are the two data-stream structures `audio_in` and `audio_out`, which describe the input and output audio streams, respectively. The driver implements audio input and output (recording and playback) through operations on `audio_in` and `audio_out`, so the heart of the driver is the design of this data-stream structure. It should hold the audio buffer information, the DMA-related information, the semaphores used, and the address of the FIFO entry register. To improve throughput, the system uses DMA to place the audio to be played or recorded directly into a DMA buffer in kernel memory. Since the S3C2410's DMA controller has no built-in storage area, the driver must allocate this DMA buffer for the audio device in memory, and setting up the buffer properly is crucial. Take the `write()` function as an example: because audio data is usually large, a small buffer overflows easily, so a large buffer is required.
However, filling a large buffer forces the CPU to process a large amount of data at once, which takes a long time and introduces latency. The author therefore uses a multi-buffer mechanism, dividing the buffer into multiple data fragments whose number and size are recorded in the data-stream structure. A large block of data is thus split into several smaller fragments, and each fragment can be sent out by DMA as soon as it has been processed. The `read()` path works similarly: it processes data as it arrives from DMA instead of waiting for the large buffer to fill. An `ioctl` interface is also provided so that the upper layer can tune the size and number of fragments to the precision (data rate) of the audio for optimal transfer.

4.3 MIXER Driver Implementation

The MIXER driver only controls mixing effects and performs no read/write operations, so its file-operations structure implements a single `ioctl` call, through which the upper layer sets the CODEC's mixing effects. The driver centers on a structure, `struct uda1341_codec`, which describes the basic information of the CODEC and implements the CODEC register read/write functions and the mixing-control functions. The `ioctl` in the MIXER file-operations structure calls the mixing-control functions in `uda1341_codec`.

4.4 Device Release

Device release is handled by the `close()` function, which uses the device number obtained at registration and frees the system resources held by the driver, such as the DMA channels and the buffers.

5 Conclusion

This paper has described the construction of an IIS-bus audio system in an embedded environment, realizing audio playback and recording. In particular, it covered the CODEC hardware connection to the Samsung S3C2410 microprocessor and the implementation of the audio driver under embedded Linux.
The system has been implemented on an S3C2410-based development platform, where it plays and captures audio successfully with good results.