
Design of an Embedded Image Transmission System Based on Linux and S3C2410

2026-04-06
1. Introduction

Obtaining image data from a monitoring site efficiently has long been a challenging problem. The traditional approach uses a CCD camera to acquire video from the site; it is easy to implement but costly. With the growing adoption of ARM-series processors and the rapid development of Linux-based embedded technology, it has become practical to use the TCP/IP protocol stack built into Linux for remote monitoring and image transmission. This paper proposes such a method: a commonly available Vimicro-series USB camera captures image data at the site, the image is acquired through the Video4Linux programming interface in the Linux kernel, and the result is transmitted over the Internet to a host PC for storage and display.

2. Hardware System Design

The hardware functional block diagram of the system is shown in Figure 1. The CPU is the Samsung S3C2410, a 32-bit microcontroller built around ARM's ARM920T core. It integrates independent 16KB instruction and 16KB data caches, an LCD controller, a memory controller, a NAND flash controller, 3 UARTs, 4 DMA channels, 4 timers with PWM, parallel I/O ports, an 8-channel 10-bit ADC, a touchscreen interface, I2C and I2S interfaces, 2 USB controllers, and 2 SPI ports, with a maximum clock frequency of 203MHz. The platform is further expanded with 4MB of 16-bit Flash and 8MB of 32-bit SDRAM, and an Ethernet port is added through a DM9000E Ethernet controller. A UART provides serial communication with the host machine over RS232, and a USB host interface allows an external USB camera to feed image data into an input buffer. The buffered data is processed and finally sent to the Internet through the network port for reception and storage on the PC.

3. Software System Design

The software adopts a client/server (C/S) model, with the S3C2410 platform as the server and the PC as the client. The server's main task is to send the captured image data onto the network; the client's main task is to receive that data and save it to a file. The two implementations are discussed below.

3.1 Server-Side Software System Design

3.1.1 Establishing the Host Development Environment

A PC running Red Hat 9.0 serves as the host machine. The development environment established on it mainly comprises the selection and installation of the cross-compiler and the configuration of NFS and TFTP servers. Because an embedded target board lacks the resources to run development and debugging tools locally, cross-development is used: the cross-compiler, assembler, and linker on the host produce an executable binary, which is then downloaded to the target machine for execution. The cross-compiler used here is arm-linux-gcc; installation details are not elaborated. To ease debugging and image downloading, the host should run NFS and TFTP servers. Note that TFTP server support requires choosing a full installation when installing Red Hat 9.0; otherwise, tftp-server-0.32-4.i386.rpm and tftp-0.32-4.i386.rpm from the third CD must be installed on the host manually.

3.1.2 Implementation of the Camera Driver

The system uses a common USB camera based on Vimicro's ZC0301P chip, a key feature of which is hardware JPEG encoding.
The driver implementation centers on providing the basic I/O interface functions (open, read, write, close), interrupt handling, memory mapping, and the ioctl function for I/O channel control, all registered in the driver's `struct file_operations`. When an application issues system calls such as open, close, read, or write on the device file, the Linux kernel dispatches to the driver's functions through this structure. Generic drivers for this type of camera are now available online: usb-2.4.31.patch.gz can be downloaded from the relevant websites and applied to the corresponding location in the kernel tree. For some kernel versions, applying the patch produces Config.in.rej and Makefile.rej; in that case, the rejected hunks in these two files simply need to be merged by hand into the corresponding Config.in and Makefile.

3.1.3 Linux Kernel Configuration

For a Linux system that has already undergone a basic port, the following points deserve attention when configuring the kernel:
1) Because the Video4Linux programming interface is used, Video for Linux must be selected, and it is best compiled directly into the kernel rather than built as a loadable module;
2) USB Support, OHCI, and UHCI must be selected, and the matching camera chosen under USB Multimedia devices within USB Support. For this system, select USB SPCA5XX Sunplus Vimicro Sonix Cameras and configure it as a module;
3) After configuring the kernel, run make dep, make zImage, and make modules. This generates spca5xx.o in the corresponding spca5xx directory. The module can be mounted on the target board via NFS or added to a directory in the root file system ramdisk; running insmod spca5xx.o on the target board then detects the camera.
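The build-and-load sequence for the camera module can be sketched as shell commands; the kernel-tree module path and the NFS export directory (`/opt/nfsroot`) are illustrative, not taken from the article:

```shell
# On the host, inside the configured 2.4-series kernel source tree:
make dep && make zImage && make modules

# Copy the generated camera module into the NFS-exported directory
# (the module path within the tree depends on where the patch was applied)
cp drivers/usb/spca5xx/spca5xx.o /opt/nfsroot/

# On the target board, load the module so the camera is detected:
insmod spca5xx.o
```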
3.1.4 Writing the Server-Side Application

With the driver and kernel configuration complete, the application is written. The program is first compiled and linked on the host with the cross-compiler, and the resulting executable is mounted on the target board via NFS for debugging; once debugged, it is built into the file system ramdisk. The implementation proceeds as follows:
1) Initialize basic device information.
2) Open the device file, read the basic device and signal-source information, set `video_mmap`, allocate a buffer for the defined frame structure, and initialize the thread mutex.
3) Create an image acquisition thread. Its thread function reads data from the device via memory mapping, locks the mutex, fills in each field of the frame structure, and unlocks the mutex, looping indefinitely.
4) Create a connection-oriented socket, bind it to a port, and listen on that port.
5) When a connection arrives, create an image sending thread. Its thread function sends a frame of data from the buffer onto the network once it determines that the client has read the previous data; this is also an infinite loop.
6) Synchronize the two threads.
7) On exit, close the socket and release the allocated resources.
The program thus consists of three main parts: image acquisition, network image transmission, and multithreaded control. These three parts are introduced below.
In the image acquisition part, a data structure is defined whose main members are:
* `video_capability`: basic device information (device name, maximum and minimum supported resolutions, signal-source information)
* `video_channel`: attributes of each signal source
* `video_mbuf`: information about frames mapped with mmap
* `video_buffer`: the lowest-level description of the buffer
* `video_mmap`: used for mmap
* `pthread_mutex_t`: the thread mutex

There are two methods for capturing images: reading the device file directly, and memory mapping. This paper uses the latter: the device file is mapped into the process's address space, so the process can access the image data like ordinary memory, which improves efficiency. The two main capture calls are `ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap))`, which on success starts the capture of one image, leaving completion to be checked separately, and `ioctl(vd->fd, VIDIOCSYNC, &frame)`, which on success indicates that a frame has been fully captured, after which the next frame can be started.

Network transmission is ordinary Linux socket programming; the main calls are socket creation, port binding, listening, accepting, reading, and writing, whose definitions and usage can be found in the relevant documentation. Note that to send a frame correctly, the defined frame structure must be single-byte aligned, which is achieved by appending `__attribute__((packed))` to the structure definition.

The main multithreading functions used are mutex initialization (`pthread_mutex_init`), locking (`pthread_mutex_lock`), unlocking (`pthread_mutex_unlock`), destruction (`pthread_mutex_destroy`), thread creation (`pthread_create`), and thread synchronization (`pthread_join`).
In addition, to better synchronize the two threads, the program also uses parts of the semaphore mechanism; for space reasons, the definitions and usage of these functions are left to the relevant references.

3.2 Client-Side Software System Design

The client is built on a PC, with a receiving-end interface designed in Visual C++ 6.0. The receiver mainly reads data from the network buffer and saves it as a file named after the time the data was received. Figure 2 shows the result of running the program with an image acquisition interval of 1 second; the image size is 320 pixels. Note that because the server sends data with single-byte alignment, the client must receive it with single-byte alignment as well. Under Windows this is done by placing #pragma pack(1) before the frame structure definition and #pragma pack() after it.

Figure 2. Client program execution result (one image acquired per second)

4. Conclusion

This paper has presented a concrete implementation of an embedded image acquisition and transmission system based on the S3C2410 platform and Linux, together with experimental results. The experiments show that the system completes image acquisition and transmission successfully: the captured images are clear, and the server runs stably without disconnecting or shutting down. The system can be applied to industrial site monitoring and can also be integrated with other systems, such as access control, to capture important images of door opening and closing events.