
Design of a Video Surveillance System Based on DM642

2026-04-06
Abstract: Addressing the limitations of traditional PC-based video surveillance systems, this paper proposes a novel embedded remote video surveillance system based on the TMS320DM642 DSP. The overall system structure is introduced, and the hardware design of the field-end embedded system and the implementation of the monitoring center software are described in detail. Compared with traditional video surveillance systems, this solution offers low cost, small size, good stability, and high reliability.

Keywords: Video surveillance, TMS320DM642, DirectShow, Ethernet

[b][align=center]Design of the Video Monitor-Control System Based on DM642
Lu Gen_feng, Luan Chun_xu, Wang Miao, Xiong Lie_bin[/align][/b]

1. Introduction

With the development of computer networks, communication technology, and embedded processors, embedded remote video surveillance systems have emerged. The current mainstream market is still dominated by PC-based surveillance terminals. Although these are easy to operate on site, they are not very stable, their video front ends are complex, and their reliability is limited. Embedded network video surveillance terminals can make up for these shortcomings.
An embedded video encoder combines powerful video encoding and network communication functions: video compression and network transmission are concentrated in a very small device that can be connected directly to a local or wide area network. Such a system is small, inexpensive, stable, and offers good real-time performance [1], making it well suited to a wide range of monitoring sites, with broad development prospects and market space.

This paper presents the design and implementation of an Ethernet-based embedded video surveillance system consisting of a front-end embedded video terminal and a remote monitoring center. A TI DSP is used to build the embedded video acquisition, compression, and transmission system. This platform provides video capture, encoding, and network communication, supports network video transmission and network management, and also offers video saving and playback. The remote terminal software of the monitoring center was developed with Microsoft's DirectShow technology.

In remote monitoring, the volume of raw video data is enormous. Taking a 352×288 image as an example, transmitting 30 frames of 24-bit true-color video per second would require roughly 73 Mbps of bandwidth, which is almost impossible to sustain over an ordinary network. This design therefore implements H.264 compression and decompression of the video data, which greatly relieves the network transmission pressure.

2. System Overall Design Scheme

2.1 System Overall Framework

The system consists of a front-end embedded monitoring module and a remote PC monitoring center. The front end compresses the video captured on site and transmits the resulting bitstream to the remote surveillance terminal via Ethernet.
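The raw-bandwidth figure for the CIF example above can be verified with simple arithmetic. The sketch below is illustrative only (the function name is my own); the resolution, 24-bit color depth, and 30 fps are the values quoted in the text:

```cpp
#include <cstdint>

// Uncompressed video bit rate in Mbps:
// width * height * bytes-per-pixel * frames-per-second * 8 bits.
// For 352x288, 24-bit true color (3 bytes/pixel), 30 fps this gives
// about 73 Mbps, far beyond what an ordinary LAN link can sustain.
double raw_bitrate_mbps(int width, int height, int bytes_per_pixel, int fps) {
    return static_cast<double>(width) * height * bytes_per_pixel * fps * 8.0 / 1e6;
}
```

At the roughly 100:1 compression ratio reported in the tests later in the paper, this raw rate drops to well under 1 Mbps, consistent with the measured bandwidth usage of less than 2 Mbps.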
The remote center decodes and plays the received video stream. The remote terminal can also control the rotation of the pan-tilt unit and adjust camera parameters through its interface to change the monitored area. The system block diagram is shown in Figure 1.

[align=center] Figure 1 Overall Block Diagram of Video Surveillance System[/align]

2.2 Hardware Design of the Front-End Network Monitoring Module

The front-end embedded monitoring module is built around TI's TMS320DM642, a DSP designed specifically for video applications. The DM642 contains six arithmetic logic units, each able to perform two 16-bit or four 8-bit additions, subtractions, comparisons, or shifts per clock cycle. At a clock frequency of 600 MHz, the DM642 can perform 2.4 billion 16-bit or 4.8 billion 8-bit multiply-accumulate operations per second [4], giving it a strong advantage in multi-channel video and image processing. The DM642 also adds many peripherals and interfaces to the C64x core. The hardware block diagram of the system is shown in Figure 2. The minimum system consists of the DM642, SDRAM (4M×64 bit), and Flash (4M×8 bit).

The front end connects to three CCD cameras through TI's TVP5150 video decoder chip, which interfaces seamlessly with the DM642's video ports (VP). The analog signal from each camera is converted by the TVP5150 into a BT.656 digital video signal and fed into a VP port. The DM642 uses EDMA to move the captured YUV data from the VP port into system memory, where it is compressed with the H.264 algorithm. The resulting bitstream is passed over the EMAC's MII interface to an external PHY chip (LXT971) and transmitted to the remote monitoring center over Ethernet.
The PC's monitoring software receives and plays the video data, completing the network video monitoring function.

[align=center] Figure 2 System Hardware Block Diagram[/align]

2.3 Front-End Network Monitoring Module Software Design

The DSP software uses the RF-5 framework to integrate the H.264 encoding library H264lib. Before entering the DSP/BIOS scheduler, the program initializes the modules to be used: (1) the DM642 and the system board; (2) the RF-5 modules; (3) the capture channel. After initialization, the system runs four threads and one channel under the DSP/BIOS scheduler. taskVideoCap, taskH264Encode, and taskNetwork have high priority, while taskControl has the lowest. These four threads form the core of the system: they continuously acquire video from the underlying video driver, encode it to H.264, and transmit it over the network to remote users for display. taskVideoCap, taskH264Encode, and taskNetwork synchronize and communicate through the synchronized communication module (SCOM), while taskControl and taskH264Encode communicate through a mailbox (MBX). The overall software flowchart is shown in Figure 3.

[align=center] Figure 3 Overall flowchart of embedded system software[/align]

3. Implementation of the Remote Monitoring Center Software

3.1 DirectShow Technology

The monitoring center software is based on Microsoft's DirectShow, a member of the DirectX family. DirectShow provides a complete solution for high-performance multimedia applications on the Windows platform, such as playback of various media files and audio/video capture [2].
In DirectShow, an application builds a Filter Graph appropriate to its purpose and then controls the whole data-processing pipeline through the Filter Graph Manager. While the Filter Graph runs, DirectShow can deliver events to the application as messages, enabling interaction between the application and DirectShow. The DirectShow architecture is shown in Figure 4.

[align=center] Figure 4 DirectShow architecture[/align]

DirectShow is modular: each functional module is a COM component called a filter. DirectShow supplies a series of standard filters for application development, and developers can also write their own filters to extend it. Each filter takes part in data processing under the management of the Filter Graph, linked in a specific order to perform the user-defined function. Filters fall into three general types:

(1) Source filters, which introduce data into the filter graph; the source may be a file, the network, a camera, and so on.

(2) Transform filters, which take an input stream, process the data, and generate an output stream; this processing includes encoding and decoding, format conversion, and compression and decompression.

(3) Renderer filters, which sit at the last stage of the filter graph; they receive data and submit it to an output device [2].

3.2 Real-Time Playback of Network Video Streams Using DirectShow

The system software implements several functional filters and connects them into a complete chain, as shown in Figure 5.
[align=center] Figure 5 Network Playback FilterGraph[/align]

The NetRecv filter is the network-receiving source filter. It inherits from CBaseFilter, which already provides the basic characteristics and framework of a filter, and uses push mode to push data received from the network to the next filter. Socket-related operations such as creation, listening, and connection are kept at the application layer; when needed, the connected socket handle is simply passed to the NetRecv filter through its interface, and internally the filter uses that externally supplied socket to receive data.

Because the video data arriving from the remote embedded terminal is an H.264 compressed bitstream, a transform filter, the H.264 decoding filter, is required for real-time playback. It inherits from CTransformFilter and is built on T.264, an open-source H.264 codec developed jointly by a Chinese video-coding community. It receives the H.264 bitstream from the NetRecv filter, decodes it into YUV video, and sends it through its output pin to the renderer filter for playback.
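Before decoding, the byte stream handed over by the receiving filter has to be split into individual H.264 NAL units at their Annex-B start codes (0x000001 or 0x00000001). The helper below is a minimal illustrative sketch; the function name and buffering strategy are my assumptions, not the paper's code:

```cpp
#include <cstdint>
#include <cstddef>
#include <utility>
#include <vector>

// Split an H.264 Annex-B byte stream into NAL unit payloads by locating
// the 3-byte 0x000001 start codes; a preceding zero byte marks the
// 4-byte 0x00000001 form. Illustrative sketch only.
std::vector<std::vector<uint8_t>> split_nal_units(const std::vector<uint8_t>& buf) {
    // For each start code, record where it begins and where its payload begins.
    std::vector<std::pair<size_t, size_t>> marks; // {start-code pos, payload pos}
    for (size_t i = 0; i + 2 < buf.size(); ++i) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            size_t sc = (i > 0 && buf[i - 1] == 0) ? i - 1 : i; // 4-byte form
            marks.push_back({sc, i + 3});
            i += 2; // skip past this start code
        }
    }
    std::vector<std::vector<uint8_t>> units;
    for (size_t k = 0; k < marks.size(); ++k) {
        size_t end = (k + 1 < marks.size()) ? marks[k + 1].first : buf.size();
        units.emplace_back(buf.begin() + marks[k].second, buf.begin() + end);
    }
    return units;
}
```

Each returned unit can then be pushed downstream as one media sample, so the decoder always sees whole NAL units regardless of how TCP segmented the stream.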
The overall implementation proceeds as follows:

(1) Construct the video playback FilterGraph object m_VideoGraph.

(2) Create the NetRecv filter, the H.264 decode filter, and the renderer filter, and add all three to m_VideoGraph.

(3) Pass the socket responsible for receiving video data to the NetRecv filter so that it can receive the network video stream.

(4) Reset the video parameters according to the first data received, for example:

mPreferredMt.SetSubtype(&MEDIASUBTYPE_YUY2); // set the media type to packed YUV 4:2:2 (YUY2)
info.AvgTimePerFrame = 400000; // 400,000 × 100 ns = 40 ms, i.e. 25 frames/s
info.bmiHeader.biWidth = n_Width; // image width
info.bmiHeader.biHeight = n_Height; // image height
info.bmiHeader.biSizeImage = n_Width * n_Height * 2; // image size (2 bytes/pixel for YUY2)
info.bmiHeader.biCompression = mmioFOURCC('Y','U','Y','2');

(5) After the video format is set, notify the application to complete the connection of all filters and call m_VideoGraph->Run().

(6) When the amount of data received from the network exceeds a threshold, notify the event window to start playing the video.

3.3 Network Video Transmission Strategy

The network code is written with WinSock. Sockets provide two transmission methods. TCP is connection-oriented: through its handshake it provides reliable data transmission, but it is slower and places a heavier load on the system. UDP is connectionless and offers no delivery guarantee of its own; it cannot ensure that image data will not be lost, but it is simple and fast [3]. Video transmission has its own characteristics: losing key compressed image information (the H.264 bitstream) can paralyze decoding, whereas partial loss of control information has no serious effect on the system.
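Because TCP delivers an undifferentiated byte stream, the receiver must recover message boundaries itself and verify integrity. The sketch below shows one way to frame a packet with a type byte, a length field, and a CRC-16 trailer; the exact field layout and the CRC-16-CCITT polynomial are my assumptions, since the paper does not specify them:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF), bitwise form.
uint16_t crc16_ccitt(const uint8_t* data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;
        for (int b = 0; b < 8; ++b)
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc;
}

// Assumed frame layout: [type:1][length:2 big-endian][payload][crc:2 big-endian].
std::vector<uint8_t> make_frame(uint8_t type, const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> f;
    f.push_back(type);
    f.push_back(static_cast<uint8_t>(payload.size() >> 8));
    f.push_back(static_cast<uint8_t>(payload.size() & 0xFF));
    f.insert(f.end(), payload.begin(), payload.end());
    uint16_t crc = crc16_ccitt(f.data(), f.size()); // CRC over header + payload
    f.push_back(static_cast<uint8_t>(crc >> 8));
    f.push_back(static_cast<uint8_t>(crc & 0xFF));
    return f;
}

// Verify a received frame by recomputing the CRC over everything but the trailer.
bool frame_ok(const std::vector<uint8_t>& f) {
    if (f.size() < 5) return false; // too short for header + CRC
    uint16_t rx = static_cast<uint16_t>((f[f.size() - 2] << 8) | f[f.size() - 1]);
    return crc16_ccitt(f.data(), f.size() - 2) == rx;
}
```

The length field lets the receiver consume exactly one frame from the stream before dispatching on the type byte, mirroring the type-then-payload-then-CRC processing the paper describes.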
Tailored to these characteristics, the system transmits video images over TCP, ensuring error-free delivery of the core information, while UDP is used for control-information exchange between the monitoring center and the remote terminals. Reception is stream-oriented, and both the TCP and UDP packets have their own defined structures.

When the monitoring center receives a TCP packet, it first examines the data type. Two types are defined: format data and video data. Format data carries the configuration of the front-end system, such as pixel settings, image size, and compression type; the center uses it to configure its own running state and parameters. The payload is the body of the packet, containing the actual format data or media data, which is processed accordingly. Finally, the packet is checked with CRC-16.

UDP packets are used mainly to carry control commands. The program first reads the command type, such as a remote-terminal connection request, remote system configuration, or PTZ control, and then the data parameters, which hold the specifics of the command, for example the angle through which the PTZ unit should rotate. This enables real-time exchange of command information between the field end and the remote end.

4. Test Results

The monitoring system was tested in a local area network using point-to-point transmission, with the image size set to 352×288. The tests show that the system achieves a compression ratio of roughly 100:1 on low-motion video, with a network delay of about 3 seconds and bandwidth usage below 2 Mbps.
In practice the system transmits smoothly, keeps real-time delay at an acceptable level, and effectively fulfills the purpose of video monitoring, meeting the needs of most current video surveillance applications.

5. Conclusion

This paper has presented a complete implementation scheme for an embedded video monitoring front end based on the DM642 and a remote monitoring center based on DirectShow. Tests show that the system is stable, uses little bandwidth, is easy to extend and upgrade, and can operate in harsh monitoring environments, giving it broad application prospects.

References

[1] Chen Wenxiang, Meng Limin. Design of a new embedded video surveillance system [J]. Electronic Components Application, 2008, 10(2).
[2] Lu Qiming. DirectShow Development Guide [M]. Beijing: Tsinghua University Press, 2003.
[3] Song Kun, Liu Ruining, Ma Wenqiang. Visual C++ Video Technology Solution Handbook [M]. Beijing: Posts & Telecom Press, 2008.
[4] Hao Hongwei, Wang Shumin, Li Yuan. Design and optimization of embedded video surveillance system based on DM642 [J]. Microcomputer Application, 2008, 39(3).