1. Introduction
With the continuous development of modern control technology, the requirements for intelligent lighting control are becoming increasingly stringent. Adopting an intelligent lighting control system not only provides a variety of artistic lighting effects but also brings benefits such as energy savings and reduced operating costs.
In large lighting environments such as libraries, shopping malls, indoor sports fields, and corridors, the area is often divided into occupied and unoccupied zones. If all zones have the same brightness, the lighting in unoccupied zones becomes ineffective. However, if an intelligent lighting control system can detect the location of people and dynamically determine occupied and unoccupied zones, it can provide normal, higher-brightness lighting in occupied zones and reduce or turn off lights in unoccupied zones. As people move, the system dynamically adjusts the effective lighting area to reduce ineffective lighting, thus saving energy while ensuring good lighting results.
Accurate human location determination is a prerequisite for intelligent lighting. Currently, some researchers have proposed methods using infrared and laser detection, RFID cards combined with identity recognition, and floor pressure sensors to determine human location. However, these methods face challenges in large lighting areas, such as the lack of suitable sensor installation locations and complex wiring, making implementation difficult and hindering further reliability improvements. In reality, video surveillance cameras are widely used in large lighting applications. If these video surveillance images can be fully utilized, combined with digital image processing technology to extract human images and determine their location, truly intelligent lighting can be achieved. This paper employs digital video image target localization and tracking technology and PLC-Bus technology to construct an intelligent lighting control system. This system directly determines human location from video images, and the lamp switching control signals are transmitted directly through power lines, eliminating the need for additional wiring. Therefore, it effectively overcomes many difficulties in sensor installation and wiring, achieving automatic illuminance adjustment, automatic lamp switching, and good control of localized lighting areas, while also maintaining high reliability.
2. Composition of the Intelligent Lighting Control System
As shown in Figure 1, the system consists of three functional parts: an image acquisition module, an image processing module, and a lighting control module.
Figure 1. Composition of the intelligent lighting control system
2.1 Image Acquisition Module
The image acquisition module mainly consists of a camera and an optical glass lens. The camera uses the Hyundai HV7131R from South Korea, which is one of the better performing mainstream products currently available. The HV7131R uses a 0.3µm CMOS process, has 300,000 effective pixels, consumes less than 90mW of power, and features exposure control, gain control, and white balance processing. It has a maximum frame rate of 30fps@VGA. By setting the HV7131R's internal registers through a standard I2C interface, users can adjust image exposure time, resolution, frame rate, RGB gain, mirroring, etc., and output 10-bit raw RGB data.
The optical glass lens is a telephoto lens with a 20° field of view, designed for imaging distances of several meters to tens of meters. The camera should be installed so that it can observe the entire monitored area; during installation and angle adjustment, the monitored area is therefore generally set to start from the bottom of the image. Furthermore, to avoid severe distortion of the extracted human images, the camera should have as steep a downward viewing angle as possible.
2.2 Image Processing Module
This module consists of a DSP and a data buffer. The DSP is TI's TMS320LF2407. Its main functions include: autonomous operation from power-on, initialization, setting the camera registers through the I2C interface, preprocessing the monitoring images of the lighting area captured by the image acquisition module, extracting human-body edges from the images, calculating the human body's position, and making lighting control decisions.
2.3 Lighting Control Module and PLC Bus Technology
The lighting control module employs a distributed control approach, enabling decentralized control and centralized management of lighting fixtures throughout the monitored area. The host DSP makes lighting equipment control decisions based on image analysis results, while the lower-level lighting controllers receive communication commands from the host computer, controlling the switching on and off of the corresponding lights and providing dimming functionality. Control commands issued by the host DSP are transmitted to the lower-level lighting controllers for execution via PLC Bus.
PLCBus technology is a new power line carrier communication technology developed in recent years by ATS Power Line Communication GmbH (ATS., CO). The biggest advantage of this technology is that control signals are transmitted through the power lines, thus eliminating the need for additional control wiring, saving significant amounts of wiring materials, and making the control system easy to install and maintain.
A PLC-BUS system mainly consists of three parts: a transmitter, a receiver, and supporting system equipment. The transmitter's primary function is to transmit PLC-BUS control signals to the receiver via the power line. By controlling the receiver, the system indirectly controls lights and electrical appliances. The receiver's main function is to receive PLC-BUS control signals from the power line and execute relevant control commands to achieve the desired control of lights and appliances. Supporting system equipment includes signal converters, three-phase couplers, and wave absorbers, primarily used to assist the transmitter and receiver in achieving the control objectives.
PLCBus uses pulse position modulation (PPM), transmitting signals as short electrical pulses placed in four fixed time slots, with the sine wave of the mains voltage serving as the synchronization signal.
On a 50 Hz power line, 200 bits of data can be transmitted per second. This rate is far too low for broadband data such as computer network traffic, but it is entirely sufficient for transmitting control actions and commands.
Due to the simplicity of PLCBus's PPM communication method, the receiver can easily reconstruct the PLCBus code. PLCBus uses two types of address codes for received data: the NID (Network ID) and the DID (Destination ID). Each is 8 bits long, so together they can form up to 2^16 = 65,536 different addresses, controlling 65,536 different devices.
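The NID/DID addressing scheme can be illustrated with a small sketch; the helper functions below are hypothetical, for illustration only, and are not part of any real PLCBus API:

```python
def pack_address(nid: int, did: int) -> int:
    """Combine an 8-bit NID and an 8-bit DID into one 16-bit address."""
    if not (0 <= nid <= 0xFF and 0 <= did <= 0xFF):
        raise ValueError("NID and DID must each fit in 8 bits")
    return (nid << 8) | did

def unpack_address(addr: int) -> tuple[int, int]:
    """Recover (NID, DID) from a 16-bit address."""
    return (addr >> 8) & 0xFF, addr & 0xFF

# 8 + 8 address bits give 2**16 = 65536 distinct device addresses.
print(pack_address(0x12, 0x34))  # -> 4660 (0x1234)
```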
Key features of PLCBus:
(1) No wiring required, just plug and play.
PLCBus technology primarily transmits control signals via power lines, eliminating the need for rewiring and making it suitable for intelligent control projects in all existing or under-installation lighting locations.
(2) Super speed, instant control and instant display.
PLCBus can transmit 10 complete instructions per second, with each instruction being executed within 0.1 seconds on average, making it almost instantaneous.
(3) Two-way communication and status feedback.
The PLCBus hardware, software, and protocol support bidirectional communication, so controlled lighting fixtures can accurately report their on/off status and confirm whether control commands were executed correctly. Moreover, a PLCBus receiver or transmitter costs only about 40% more than the corresponding X-10 component, making it good value for the added capability.
(4) Good compatibility and wide applicability.
PLCBus technology devices are compatible with X-10, CEBus, and LonWorks devices without any signal interference.
Lighting control can be divided into dimming and non-dimming control. For fluorescent lamps, dimming can be achieved with an OSRAM dimmable electronic ballast: the controller outputs a 0–10 V DC signal as the ballast's control signal, allowing the lamp's luminous flux to be adjusted over a range of 1% to 100%.

Dimming of incandescent lamps can be achieved with a phase-shift trigger and a random-fire (non-zero-crossing) solid-state relay. Applying a control signal to the relay's control terminal turns on the AC load immediately; when this control signal is a phase-shiftable pulse synchronized with the AC mains, the load voltage can be regulated smoothly over a 180° range. Based on the magnitude of the control voltage, the phase-shift trigger outputs a wide pulse, phase-shiftable within the 180° range and synchronized with the mains voltage at twice the mains frequency, to drive the solid-state relay and achieve phase-shift voltage regulation. Used alone, the random-fire solid-state relay can therefore connect or disconnect the lighting circuit; combined with a phase-shift trigger, it enables dimming of incandescent lamps.
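As a rough illustration of phase-shift voltage regulation (not the paper's controller code), the sketch below maps a 0–10 V control signal linearly onto a firing angle across the 180° range and computes the resulting RMS output fraction of a phase-controlled sine wave; the linear control-to-angle mapping is an assumption:

```python
import math

def firing_angle(control_v: float) -> float:
    """0 V -> alpha = pi (lamp off); 10 V -> alpha = 0 (full output)."""
    control_v = min(max(control_v, 0.0), 10.0)
    return math.pi * (1.0 - control_v / 10.0)

def rms_fraction(alpha: float) -> float:
    """RMS output as a fraction of full RMS for firing angle alpha (rad)."""
    # Standard result for phase-angle control of a sinusoidal supply;
    # max() guards against tiny negative values from floating-point error.
    return math.sqrt(max(0.0, 1 - alpha / math.pi
                         + math.sin(2 * alpha) / (2 * math.pi)))

for v in (0.0, 5.0, 10.0):
    print(f"{v:4.1f} V -> {100 * rms_fraction(firing_angle(v)):5.1f} % RMS")
```

At 5 V the firing angle is 90°, giving sqrt(0.5) ≈ 70.7% of full RMS voltage, which is why the perceived dimming curve is not linear in the control voltage.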
3. Human Target Dynamic Positioning Technology in Intelligent Lighting Control
Video surveillance images are two-dimensional projections of a three-dimensional illuminated scene. While they cannot perfectly reflect the actual three-dimensional scene, there is a certain projection relationship between the two, and the video image will change accordingly when the three-dimensional scene changes. Furthermore, the scene in a continuous video stream is continuous; if there is no human movement in the illuminated area, the changes between consecutive frames are minimal. Conversely, human movement causes frame differences. Therefore, dynamic human target detection against a static background in the illuminated area can be achieved using inter-frame change detection.
Human dynamic target detection based on static background mainly consists of three parts: image preprocessing, human dynamic target extraction, and human position determination.
3.1 Image Preprocessing
Digital images of illuminated areas captured by cameras contain significant noise, which must be filtered out first. Many noise-removal methods exist; median filtering is a commonly used non-linear signal processing technique. A sliding template moves point by point across the image, the gray values of the pixels within the template are sorted, and the median of those values is taken as the gray value of the template's center pixel. This method effectively suppresses random noise while providing good protection for image contours and edges. Furthermore, median filtering preserves step signals, largely maintains the image spectrum after filtering, and effectively removes salt-and-pepper noise.
The digital image of the illuminated area is M×N pixels in size, with grayscale values f(x, y). A four-neighbor median filter gives good results; the grayscale value g(x, y) of the filtered image is:

g(x, y) = Med{ f(x, y), f(x−1, y), f(x+1, y), f(x, y−1), f(x, y+1) }
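The four-neighbor median filter described above can be sketched in a few lines of pure Python; leaving border pixels unfiltered is an illustrative simplification:

```python
from statistics import median

def median_filter_4n(img):
    """Five-point (center + four-neighbor) median filter on a 2-D list."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; border pixels stay unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median([img[y][x],
                                img[y - 1][x], img[y + 1][x],
                                img[y][x - 1], img[y][x + 1]])
    return out

# A single salt-noise pixel (255) in a flat region is removed:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_4n(img)[1][1])  # -> 10
```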
3.2 Target Change Detection
When a human target appears or moves within the monitoring field of view, it causes changes in the pixel grayscale values between consecutive frames, resulting in frame differences. The frame difference for the target area is greater than that for the background area. Therefore, calculating the frame difference to determine whether a change has occurred is a common method for target change detection. The simplest algorithm is the absolute value method of frame differences.
For the detected image sequence f(x, y, t), the cumulative number of changed pixels is calculated as:

d_k(x, y) = 1, if |f(x, y, t2) − α·f(x, y, t1)| > T; otherwise d_k(x, y) = 0

D_k = Σx Σy d_k(x, y)

In the formulas: D_k is the cumulative number of changed pixels; f(x, y, t1) and f(x, y, t2) are the gray values of pixel (x, y) in the neighboring frames at times t1 and t2; α is a suppression coefficient that compensates for illumination differences between neighboring frames; N is the number of pixels in the detection area; and T is the gray threshold, whose value determines the sensitivity of dynamic target detection.
The criterion for determining whether the target has changed is:

D_k / N > D  ⇒  a change has occurred; otherwise, no change.
Here, D is the set threshold. The algorithm is simple, and because the judgment condition takes the effect of changing lighting conditions into account, the method adapts to illumination changes to a certain degree. It also suppresses, to some extent, misjudgments caused by small interfering moving targets, improving detection accuracy.
3.3 Image Edge Extraction
The most fundamental feature of an image is its edges, which refer to the set of pixels in an image whose grayscale values vary significantly. Edges are crucial for human object detection and segmentation. Image edge detection has always been a hot topic and a challenge in image processing, mainly because edges and noise are both high-frequency signals, making them difficult to separate. Among current edge detection algorithms, the Sobel image edge detection algorithm, as a classic example, has been widely used in many fields due to its low computational cost and high speed.
Because images exhibit abrupt changes in grayscale near edges, the Sobel edge detection method works directly on the original image's grayscale. It detects edges by examining the grayscale changes of each pixel within a 3×3 neighborhood and locating the maximum of the first derivative near the edge. The gradient magnitude is described mathematically as:

G(x, y) = √(Gx² + Gy²)

where

Gx = [f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)]
Gy = [f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)]
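The Sobel gradient magnitude for a single interior pixel can be computed with the standard 3×3 kernels, as in this sketch:

```python
import math

# Standard 3x3 Sobel kernels for the horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y) of a 2-D gray image."""
    gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy)

# A vertical step edge between gray 0 and gray 100:
img = [[0, 0, 100, 100] for _ in range(4)]
print(sobel_magnitude(img, 1, 1))  # -> 400.0
```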
3.4 Image Segmentation and Human Body Location Determination
In large-scale lighting scenarios under intelligent control, the coexistence of illuminated and unilluminated areas produces monitoring images with uneven backgrounds: background grayscale varies significantly across the field of view, so a single global threshold cannot segment the human target correctly; it would misclassify background pixels as target. The human eye, however, perceives a target through the localized grayscale difference between the target and its immediate background. Based on this principle, the entire field of view is divided into many equal, sufficiently small blocks, so that grayscale variation within each block is minimal. For each block, the mean, maximum, and minimum gray values are calculated, along with a corresponding local threshold and threshold difference, and the block is segmented with its local threshold. If a target is present, its position and threshold difference within that block are recorded. After the whole field of view has been processed, the block with the largest threshold difference is taken as the candidate target point, and window tracking is performed there. Because the window is already small, background unevenness has minimal impact.
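The block-wise procedure above might be sketched as follows; the per-block threshold (max + min)/2 and the spread max − min are illustrative assumptions for the local threshold and threshold difference, which the text does not specify exactly:

```python
def candidate_block(img, bs):
    """Split img into bs-by-bs blocks; return the block with the largest
    gray-level spread as (spread, (row, col) origin, local_threshold)."""
    h, w = len(img), len(img[0])
    best = None
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            block = [img[y][x]
                     for y in range(by, min(by + bs, h))
                     for x in range(bx, min(bx + bs, w))]
            lo, hi = min(block), max(block)
            cand = (hi - lo, (by, bx), (hi + lo) / 2)
            if best is None or cand[0] > best[0]:
                best = cand
    return best

img = [[20] * 8 for _ in range(8)]
img[5][6] = 180                      # bright target in one block
spread, origin, thr = candidate_block(img, 4)
print(origin, thr)                   # -> (4, 4) 100.0
```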
After the human figure is segmented, its centroid position (x̄, ȳ) can be calculated using the following formula:

x̄ = [Σx Σy x·f(x, y)] / [Σx Σy f(x, y)],  ȳ = [Σx Σy y·f(x, y)] / [Σx Σy f(x, y)]

where the sums run over x = 1…m and y = 1…n, m and n are the window dimensions, and f(x, y) is the binarized image.
By dividing the monitored lighting area into a two-dimensional array of zones and calculating the human body's centroid position, the specific zone containing the centroid can be determined. Corresponding lighting control decisions can then be made, controlling the switching and brightness of the lights at the human body's position.
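A minimal sketch of the centroid calculation and of mapping the centroid into a coarse grid of lighting zones (the grid dimensions and the linear zone mapping are illustrative assumptions):

```python
def centroid(win):
    """Centroid of a binarized window (2-D list of 0/1 values)."""
    m, n = len(win), len(win[0])
    mass = sum(win[y][x] for y in range(m) for x in range(n))
    if mass == 0:
        return None  # no target pixels in the window
    cx = sum(x * win[y][x] for y in range(m) for x in range(n)) / mass
    cy = sum(y * win[y][x] for y in range(m) for x in range(n)) / mass
    return cx, cy

def zone_of(cx, cy, img_w, img_h, zones_x, zones_y):
    """Map an image-plane centroid to a (column, row) lighting zone."""
    return int(cx * zones_x / img_w), int(cy * zones_y / img_h)

win = [[0, 1, 1],
       [0, 1, 1],
       [0, 0, 0]]
cx, cy = centroid(win)
print(cx, cy)                        # -> 1.5 0.5
print(zone_of(cx, cy, 3, 3, 2, 2))   # -> (1, 0)
```

The lights assigned to the returned zone would then be driven at full brightness while neighboring zones are dimmed.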
4. Conclusion
Because the intelligent lighting control system obtains its input signal from video images of the lighting area using digital image processing technology, and transmits its output control signals by power-line carrier communication (PLCBus), the system based on target positioning and PLCBus technology offers simple wiring, high reliability, and easy maintenance.
Furthermore, the control decisions can be designed to be highly user-friendly, enabling automatic adjustment of illuminance, automatic switching of lights, and control of lighting in occupied areas. This system provides a comfortable, scientific, and economical lighting environment, representing an important development direction for advanced lighting control.