Real-time Zoom Tracking for DM36x-based IP Network Camera

  • Cong, Bui Duy (School of Electronic Engineering, Soongsil University) ;
  • Seol, Tae In (School of Electronic Engineering, Soongsil University) ;
  • Chung, Sun-Tae (School of Electronic Engineering, Soongsil University) ;
  • Kang, HoSeok (School of Electronic Engineering, Soongsil University) ;
  • Cho, Seongwon (Dept. of Electrical Engineering, Hongik University)
  • Received : 2013.09.01
  • Accepted : 2013.09.30
  • Published : 2013.11.30

Abstract

Zoom tracking is the automatic adjustment of the focus motor in response to zoom motor movements so as to keep an object of interest in focus. It is typically achieved by moving the zoom and focus motors in a zoom lens module so as to follow the so-called "trace curve", which gives the in-focus focus motor positions versus the zoom motor positions for a specific object distance. Thus, zoom tracking can be implemented simply by storing all the trace curve data in memory and following the closest trace curve. However, this approach is often prohibitive in practice because of its large memory requirement. Many other zoom tracking methods, such as GZT and AZT, have been proposed to avoid the large memory requirement, but at the cost of deteriorated performance. In this paper, we propose a new zoom tracking method for DM36x-based IP network cameras, called the 'Approximate Feedback Zoom Tracking (AFZT)' method, which avoids the large memory requirement by approximating nearby trace curves yet achieves better zoom tracking accuracy than GZT or AZT by utilizing the focus value as feedback information. Experiments on a real implementation show that the proposed zoom tracking method improves tracking performance and works in real time.

1. INTRODUCTION

Zoom tracking keeps an object of interest in focus during zoom operation, and is an important function in video surveillance, particularly in traffic management and security monitoring. Zoom tracking is typically achieved by moving the zoom and focus motors in a zoom lens module so as to follow the so-called "trace curve" (Figure 2), which gives the in-focus focus motor positions versus the zoom motor positions for various object distances. Zoom tracking can be implemented simply by storing all the trace curve data in memory and moving the focus and zoom motors along the closest trace curve. However, this approach is often prohibitive in practice because of its large memory requirement. Many other zoom tracking methods, such as GZT and AZT, have been proposed to avoid the large memory requirement, but with deteriorated zoom tracking performance. In this paper, we propose a new zoom tracking method called the 'Approximate Feedback Zoom Tracking (AFZT)' method, which does not require a large memory and achieves better zoom tracking accuracy than GZT and AZT. The proposed method first approximates all trace curve data by five representative trace curves and stores them in memory beforehand. At the start of zoom operation, it selects appropriate upper-bound and lower-bound trace curves among the five stored representative curves around the initial zoom and focus motor positions, and calculates an estimate of the right trace curve by linear interpolation between the two bounds, in the same way as GZT. During zoom operation, it adaptively adjusts the focus motor position or revises the estimated trace curve by utilizing focus value information obtained from the hardware autofocus engine, thereby reducing tracking errors.

Fig. 1. Simple Zoom Lens System.

Fig. 2. Trace curves of a 12x zoom lens module for various object distances ranging from the near end (0.6 m) to the far end (infinity).

Experiments on a real implementation show that the proposed zoom tracking method improves tracking performance and successfully works for full-HD video (1920×1080p) in real time at 30 fps.

The rest of the paper is organized as follows. Background for this paper is briefly introduced in Section II. Section III describes the proposed Approximate Feedback Zoom Tracking method. Section IV explains an actual hardware implementation of the proposed AFZT method on a DM36x-based IP network camera. Section V presents the experimental results and discussion, and lastly, conclusions are given in Section VI.

 

2. BACKGROUNDS

2.1 Zoom Lens System and Zoom Tracking

A simple scheme for a zoom lens system divides the assembly into two parts (Figure 1): a focusing lens, similar to a standard fixed-focal-length photographic lens, preceded by an afocal zoom system, an arrangement of fixed and movable lens elements that does not focus the light but alters the size of a beam of light travelling through it, and thus the overall magnification of the lens system.

The focusing lens is responsible for achieving focus, while the zoom ratio is adjusted by moving one or more zoom lens elements in the afocal zoom system. The zoom lens elements and the focusing lens are driven by a zoom motor and a focus motor, respectively.

Zoom tracking is normally achieved by following the so-called trace curves (Figure 2). Trace curves give the in-focus focus motor positions over all possible zoom motor positions for various object distances.

Thus, the accuracy of any zoom tracking method depends on how well the true trace curve can be estimated; trace curve estimation is therefore the most critical component of any zoom tracking method. It should be noted that the only information available for trace curve estimation is the in-focus focus motor position at the initial zoom motor position. Moreover, the trace curve data provided by the manufacturer are obtained at design and testing time, and each manufactured zoom lens system can have trace curve data that differ from the designed data due to several factors.

2.2 Related Works on Zoom tracking

Several zoom tracking methods have been proposed in the literature [1-5]. These methods differ mainly in how they estimate the trace curve.

In the look-up table method [1], the simplest and earliest zoom tracking technique, all trace curve data for various object distances are stored in memory in the form of a look-up table. An estimate of the right trace curve is obtained by choosing the closest curve from the stored trace curves. A drawback of this approach is that it needs a large memory to store the look-up table, which limits its usage in embedded devices. Moreover, no mechanism is provided for choosing the right trace curve when the zoom lens component moves in the tele-angle direction. To reduce the necessary memory, many other approaches have been suggested, such as geometric zoom tracking (GZT) [1], adaptive zoom tracking (AZT) [1,6], reduced zoom tracking (RDZT) [2], relational zoom tracking (RLZT) [3], predictive zoom tracking (PZT) [4], and feedback zoom tracking (FZT) [5].

The GZT approach calculates an estimate of the right trace curve via linear interpolation based only on the two trace curves of the nearest and farthest objects. A drawback of this approach is that the offset between the estimated and real trace curves gradually increases as the zoom moves from wide-angle to tele-angle. This weakness of GZT is lessened by the AZT method, which incorporates a recalibration procedure at the boundary zoom (motor) position where the trace curve changes from linear to non-linear. At this boundary zoom position, an auto-focusing operation is performed, which can be viewed as a recalibration; after auto-focusing there, the focus motor is moved according to the trace curve estimate updated using GZT. The AZT method improves the tracking accuracy but still leaves sizable room for improvement.

In order to reduce the look-up table data to be stored, RDZT first cuts the zoom step range into nine zoom positions and divides each trace curve into two linear regions (one around the wide end and one around the far tele end) and a nonlinear region between them, and retains the complete look-up table data only in the nonlinear region. In the linear regions, RDZT stores reduced look-up table data by exploiting linearity.

The RLZT and PZT methods were proposed later to improve the estimation accuracy through machine learning. Both RLZT and PZT require a significant amount of a priori knowledge for training, and it is not always convenient to obtain these a priori trace curves in practical use. Furthermore, errors in the learning step also affect the estimation. Because a change of lens or scene often requires additional time for re-training, the adaptability of these two algorithms is relatively poor.

The FZT method integrates geometric trace curve estimation with feedback control for trace curve revision, and is novel in the sense that it is the first to utilize feedback information from the focus value to revise the trace curve at each probing time. However, its revision-distance feedback control mechanism is not shown to achieve convergence, so it cannot guarantee more precise zoom tracking performance, even though the reported experimental data show better performance than other zoom tracking algorithms such as GZT, AZT, RLZT, and PZT. Also, since FZT probes the focus value at two focus motor positions (up and down) at each probe time, it incurs extra tracking time and causes jitter in the zoom tracking motion at probe time.

In [7], we presented a zoom tracking method called 'approximation zoom tracking', which reduces the required memory space by approximating trace curves, but with degraded performance. In this paper, we propose a novel zoom tracking method called 'Approximate Feedback Zoom Tracking', which avoids the large memory requirement by approximating nearby trace curves yet achieves better zoom tracking accuracy than GZT or AZT by utilizing the focus value as feedback information. Even though our proposed AFZT approach utilizes the focus value as feedback information as FZT does, it differs from FZT in at least two ways, as explained in more detail in Section 3.3.

2.3 DM36x, AF HW Engine, and Auto focusing

The TMS320DM36x Digital Media System-on-Chip (DMSoC) [8] is a highly integrated, programmable platform for digital still/video cameras and other mobile imaging devices. The DM36x provides an AF HW engine [9]. This AF engine extracts green pixel data from the RGB raw data of the image sensor, subtracts 128 from the extracted green data, and feeds the result to a block of two IIR filters to calculate the FV (focus value). The DM36x AF HW engine computes the focus value through IIR filtering as in equation (1):

FV(l, s) = Σi Σj | (hAF ∗ g)(i, j, l, s) |        (1)

where FV(l,s) denotes the focus value for the l-th focus window image data taken at focus motor position s, computed by the horizontal AF filter, an Infinite Impulse Response (IIR) filter; g(i,j,l,s) is the green component of the l-th focus window image at focus motor position s; hAF(j) is the impulse response of the IIR filter; i and j are the row and column pixel coordinates; l indexes the focus window; ∗ denotes convolution along the row direction; and the summation runs over the pixels of the l-th focus window.
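As a rough illustration, the per-window focus value computation can be sketched in Python. The 3-tap high-pass kernel below is a hypothetical stand-in for the DM36x's configurable filter coefficients; only the structure (subtract 128, filter horizontally, sum absolute responses) follows the description above.

```python
def focus_value(green, taps=(-1.0, 2.0, -1.0)):
    """Sum of |horizontally filtered green| over one focus window.

    `green` is a 2D list of raw green pixel values; `taps` is an
    illustrative high-pass FIR kernel standing in for the AF
    engine's configurable IIR coefficients.
    """
    fv = 0.0
    k = len(taps)
    for row in green:
        shifted = [p - 128 for p in row]      # engine subtracts 128 first
        for j in range(k - 1, len(shifted)):
            acc = sum(taps[m] * shifted[j - m] for m in range(k))
            fv += abs(acc)                    # accumulate filter energy
    return fv
```

A flat (textureless) window yields FV = 0, while a window containing an edge yields a positive FV, matching the sharpness interpretation in the text.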

Figure 3 shows an example of focus value graph along focus motor positions at a fixed zoom motor position obtained from a DM36x-based AF/AZ IP network camera.

Fig. 3. A focus value graph along focus motor positions at a fixed zoom motor position.

The focus value calculated by equation (1) represents the sharpness of the image frame. Since a higher FV means a sharper image frame, one can implement an autofocus algorithm simply by iteratively adjusting the focus motor and searching for the focus motor position that produces the maximum FV. In-focus is achieved at the position of the highest focus value.
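The global-search autofocus described above can be sketched as follows; `measure_fv` is a placeholder for the camera routine that moves the focus motor to a position and reads back the hardware focus value.

```python
def auto_focus(measure_fv, focus_positions):
    """Global-search autofocus: step through every candidate focus
    motor position and return the one with the highest focus value."""
    best_pos, best_fv = None, float("-inf")
    for pos in focus_positions:
        fv = measure_fv(pos)          # move motor, capture frame, read FV
        if fv > best_fv:
            best_pos, best_fv = pos, fv
    return best_pos, best_fv
```

This exhaustive scan is what the paper's "global search algorithm" performs at the start of zoom operation; faster hill-climbing variants trade robustness for speed.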

 

3. The Proposed Zoom Tracking Method: Approximate Feedback Zoom Tracking

The proposed approximate feedback zoom tracking consists of three stages: 1) Construction of approximate representative trace curves, 2) Estimation of the right trace curve at the start of zoom operation, 3) Focus control during zoom operation.

3.1 Construction of approximate representative trace curves

The complete trace curve data set is large, so it is not desirable to store all of it in an embedded system. GZT stores just the two extreme curves (the trace curves of the nearest object and the farthest (infinity) object) and estimates the right trace curve for zoom tracking by linear interpolation, which causes a large offset error while moving toward tele-angle.

In order to save memory, we also do not keep all the data. We separate the trace curves into three groups and, from these, calculate the 1st, 2nd, and 3rd group representative trace curves as illustrated in Figure 4, and also keep the two extreme trace curves, those of the farthest (infinity) and nearest objects. In total, five curves are used to estimate the right trace curve for zoom tracking, instead of the two extreme trace curves as in GZT. The 1st group representative trace curve is calculated by averaging the top three trace curves, for object distances 0.6 m, 0.8 m, and 1 m. The 2nd and 3rd group representative trace curves are calculated in the same way from the middle five object distances (1.2 m, 1.5 m, 2 m, 2.5 m, and 3 m) and from the bottom four object distances (5 m, 8 m, 10 m, and infinity), respectively.
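The grouping-and-averaging step might be sketched as follows; the curve representation (a list of focus motor positions indexed by zoom position) and the grouping argument are illustrative, mirroring the paper's three distance groups.

```python
def average_curve(curves):
    """Element-wise average of trace curves, where each curve is a
    list of focus motor positions indexed by zoom motor position."""
    return [sum(vals) / len(vals) for vals in zip(*curves)]

def build_representatives(curves_by_distance, groups):
    """Build one representative trace curve per distance group by
    averaging the member curves of that group."""
    return [average_curve([curves_by_distance[d] for d in group])
            for group in groups]
```

The five stored curves would then be the three group representatives plus the unmodified nearest and infinity curves.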

Fig. 4. Reduction of trace curves by approximate representative trace curves.

3.2 Trace Curve Estimation

At the start of zoom operation, auto-focusing is performed by the global search algorithm described in Section 2.3. Based on the focus motor position of the maximum focus value at the current zoom motor position, appropriate upper-bound and lower-bound trace curves are selected among the five approximate representative trace curves stored in memory, and an estimate of the right trace curve is computed from the selected bounds as follows.

The estimate of the current in-focus motor position for the current zoom motor position is calculated using linear interpolation, as in GZT:

F(zc) = FL(zc) + R · Dc        (2)

where F(zc) and FL(zc) denote the estimate of the focus motor position and the focus motor position on the lower-bound curve at the current zoom position zc, respectively. R is the focus ratio, defined as ds/Ds, where Ds is the difference between the focus motor positions on the upper-bound and lower-bound trace curves at the starting zoom position zs, and ds is the difference between the focus motor position of the maximum focus value and the focus motor position on the lower-bound trace curve at zs. Dc is the difference between the upper and lower curves at the current zoom position zc.
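A minimal sketch of this interpolation, with `lower` and `upper` as hypothetical zoom-to-focus lookup tables for the selected bound curves:

```python
def estimate_focus(z_c, z_s, f_init, lower, upper):
    """GZT-style estimate of the in-focus motor position at zoom
    position z_c.

    `lower` / `upper` map zoom positions to focus motor positions on
    the lower- and upper-bound trace curves; `f_init` is the in-focus
    position found by autofocus at the starting zoom position z_s.
    """
    D_s = upper[z_s] - lower[z_s]          # bound gap at the start
    d_s = f_init - lower[z_s]              # offset above the lower bound
    R = d_s / D_s if D_s else 0.0          # focus ratio, R = d_s / D_s
    D_c = upper[z_c] - lower[z_c]          # bound gap at current zoom
    return lower[z_c] + R * D_c
```

Because AFZT picks tight group-representative bounds rather than the two extreme curves, the interpolation error stays smaller than in plain GZT.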

3.3 Focus control

Fig. 5. Illustration of trace curve estimation.

The initial estimated trace curve calculated by equation (2) cannot be applied unchanged throughout the zoom operation, since the focus ratio R shows non-linear characteristics as the zoom changes from wide-angle to tele-angle, resulting in large estimation errors. Also, even when the estimated trace curve is precise, if the object of interest moves or is switched during the zoom operation, following the initial estimated trace curve may fail to keep it in focus, since the object distance changes during zooming. Thus, in order to compensate for the estimation error, either the focus motor movement needs to be adjusted adaptively or the estimated trace curve needs to be revised during zoom operation.

As explained in Section 2.3, the focus value represents the sharpness of the image. Figure 6 illustrates a focus value graph with respect to focus motor positions and zoom motor positions, acquired from our DM368-based IP network camera equipped with a 12x zoom lens. The highest focus values lie on the peak ridge of the surface, and sharpness decreases gradually down the hillsides; the peak line is the real trace curve for the object in the experiment. Away from the trace curve, the focus value drops off steeply on both sides.

Fig. 6. A focus value graph with respect to focus motor positions and zoom motor positions.

The focus values of in-focus images are much higher than those of out-of-focus images. Thus, as in FZT [5], the proposed zoom tracking method utilizes the focus value as feedback information to adjust the focus motor movement or to revise the estimated trace curve during zoom operation.

While the zoom lens is moving from wide-angle toward tele-angle and the focus lens is moving along the estimated trace curve, an adjustment should be applied to the focus motor movement if the current focus value at the current focus and zoom motor positions differs considerably from the maximum focus value. If successive adjustments reach a local maximum focus value, the estimated trace curve needs to be revised.

We denote the focus value FV(fk, zk) at focus motor position fk and zoom motor position zk of zoom motor step k as FV(k), the current focus value as FVcur, and the previous focus value as FVpre. During zoom operation, the zoom motor movement size is fixed and denoted Δz, so that zk+1 = zk + Δz. If step k is the current step, then FVcur = FV(k) and FVpre = FV(k-1). FVmax denotes the reference maximum focus value: at zoom start time, FVmax is set to the maximum focus value found through the global search algorithm, and during the adjustment stage, FVmax is updated to the (local) maximum focus value across several zoom steps.

Now, the focus control stage of the proposed zoom tracking method works as follows.

At the current step k, if |FV(k) - FVmax| ≤ ThH, the focus motor should next be moved following the current estimated trace curve. If |FV(k) - FVmax| > ThH and FVcur < FVpre, the current focus motor moving direction is a direction of decreasing focus value, so the next focus motor move should be taken in the backward direction. If |FV(k) - FVmax| > ThH and FVcur > FVpre, the current moving direction is a direction of increasing focus value, so the next move should keep the same direction. The focus motor movement step size during the adjustment process is taken as a constant, denoted Δf.
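The per-step decision rule above can be sketched as follows; the threshold `th_h` (ThH) and step `df` (Δf) are the quantities named in the text, and the return convention (0 meaning "follow the estimated curve") is illustrative.

```python
def next_focus_move(fv_cur, fv_prev, fv_max, th_h, direction, df):
    """Decide the next focus motor move during zoom tracking.

    Returns (move, direction): move == 0 means follow the estimated
    trace curve; otherwise move is +/- df along the given direction.
    """
    if abs(fv_cur - fv_max) <= th_h:
        return 0, direction              # close to FVmax: stay on curve
    if fv_cur < fv_prev:
        direction = -direction           # FV decreasing: reverse motion
    return direction * df, direction     # FV increasing: keep direction
```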

If FV(n-1) ≤ ⋯ ≤ FV(n) > FV(n+1), then FV(n), the local maximum focus value across several zoom steps, is set as the new FVmax. During the adjustment process, once the focus motor position with the newly updated FVmax is detected, the proposed method first decides the lower-bound and upper-bound trace curves from the five stored approximate representative curves, and a new estimate of the right trace curve is calculated according to equation (2) based on the decided bounds. The newly estimated trace curve is then adopted for the subsequent zoom tracking. The black arrows in Figure 7 show the actual focus and zoom motor trace during the focus control stage.
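The local-maximum test that triggers re-estimation might look like the following, applied to a sliding history of recent focus value samples:

```python
def is_local_max(fv_hist):
    """True when the last-but-one sample is a local maximum,
    i.e. FV(n-1) <= FV(n) > FV(n+1) over the most recent samples;
    this is the condition that triggers trace curve re-estimation."""
    if len(fv_hist) < 3:
        return False
    a, b, c = fv_hist[-3:]
    return a <= b and b > c
```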

Fig. 7. Illustration of the proposed focus control.

Even though our proposed AFZT approach utilizes the focus value as feedback information as FZT does, it differs from FZT in at least two ways. First, AFZT revises the estimated trace curve when the focus value reaches a local maximum, where there is a high probability of being near the right trace curve, so that achieving convergence (compensating the tracking error) is more likely; before revision, AFZT adaptively adjusts the focus motor movement as necessary. Second, AFZT utilizes tighter nearby upper-bound and lower-bound trace curves for estimating the right trace curve, whereas FZT uses the two extreme farthest and nearest trace curves. Figure 8 summarizes the workflow of the proposed approximate feedback zoom tracking method.

Fig. 8. The workflow of the proposed Approximate Feedback Zoom Tracking method.

 

4. Real-time Implementation

Our proposed approximate feedback zoom tracking method has been implemented on an AF/AZ DM368-based IP network camera, as shown in Figure 9.

Fig. 9. The implemented DM368-based AF/AZ IP network camera.

The implemented DM368-based AF/AZ IP network camera consists of a 12x Optologics EEL005 zoom lens [10], a 5-Mpixel Aptina MT9P031 CMOS sensor [11], and a zoom lens control hardware board with an Atmel AT89C51 microcontroller. The Pelco-D protocol [12] is implemented for communication between the AT89C51 and the DM368 over an RS232 port. Focus value calculation is implemented using the AF HW engine for 1920×1080 images and runs at 30 fps.
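Pelco-D frames have a simple 7-byte layout, which can be sketched as follows; the 'zoom tele' example assumes the standard command2 bit assignment and is illustrative rather than a transcription of our firmware.

```python
def pelco_d_frame(address, cmd1, cmd2, data1, data2):
    """Build a 7-byte Pelco-D frame: sync byte (0xFF), camera
    address, two command bytes, two data bytes, and a checksum
    equal to the modulo-256 sum of bytes 2-6."""
    body = [address, cmd1, cmd2, data1, data2]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

# Illustrative: 'zoom tele' (command2 bit 0x20) for camera address 1.
frame = pelco_d_frame(0x01, 0x00, 0x20, 0x00, 0x00)
```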

 

5. Experimental Results and Discussion

The performance of the developed zoom tracking method was compared with the conventional zoom tracking methods GZT and AZT in terms of tracking accuracy, which provides a quantitative measure of the offset between an estimated and a true trace curve for a given object distance. Performance measures for tracking a stationary object during zoom operation were collected from 100 distinct scenes under different lighting conditions and various object distances. The object distances were set to 1, 3, 5, 7, and 10 m, and for each distance samples were obtained with GZT, AZT, and the proposed AFZT. Figure 10 shows an example trace curve for a 3 m stationary object acquired using the proposed AFZT method; the real trace curve was obtained by running the global search auto-focusing function at each zoom motor position. The estimated trace curve was observed to tightly fit the real trace curve, with several small fluctuations.

Fig. 10. The proposed AFZT trace curve for a 3 m stationary object.

Table 1 summarizes the overall tracking accuracy of the developed method compared with the existing GZT and AZT approaches. From the experimental data in Table 1, one can see that the proposed AFZT exhibits better tracking accuracy than the traditional GZT and AZT methods.

Table 1. Tracking accuracy for stationary objects

Even though we cannot directly compare our proposed method with FZT in [5], since the two use different zoom lens modules, the experimental data in Table 1 of [5] suggest that the tracking accuracy of our proposed AFZT is better than that of FZT.

Experiments also verified that our proposed zoom tracking method, AFZT, successfully works for full-HD (1920×1080p) video in real time at 30 fps.

Figure 11 shows sample images captured during zoom operation with our implemented AF/AZ IP network camera.

Fig. 11. Resulting images during zoom operation from wide-angle to tele-angle.

 

6. CONCLUSIONS

In this paper, we proposed a new zoom tracking method called the 'Approximate Feedback Zoom Tracking (AFZT)' method, which does not require a large memory and achieves better zoom tracking accuracy than GZT and AZT. The proposed method first approximates all trace curve data by five representative trace curves and stores them in memory beforehand. At the start of zoom operation, it selects appropriate upper-bound and lower-bound trace curves among the five stored representative curves around the initial zoom and focus motor positions, and calculates an estimate of the right trace curve using linear interpolation between the two bounds. During zoom operation, it adaptively adjusts the focus motor position or revises the estimated trace curve by utilizing focus value information obtained from the hardware autofocus engine, thereby reducing tracking errors.

Experiments on a real implementation show that the proposed zoom tracking method achieves better tracking accuracy than conventional zoom tracking methods such as GZT and AZT, and successfully works for full-HD video (1920×1080p) in real time at 30 fps.

Further algorithm improvements through experiments under various environments are currently in progress, and the upgraded algorithm is being implemented for a DM8147-based IP network camera (the next-generation IP network camera) equipped with a Tamron 30x zoom lens module; the results will be reported later.

References

  1. V. Peddigari, N. Kehtarnavaz, S.Y. Lee, and G. Cook, "Real-time Implementation of Zoom Tracking on TI DM Processor," Proc. of SPIE-IS&T Electronic Imaging, Vol. 5671, pp. 8-18, 2005.
  2. Chia-Hao Chang and Chiou-Shann Fuh, "Auto Focus Using Adaptive Step Size Search and Zoom Tracking Algorithm," Artificial Intelligence and Applications, Vol.1, Issue 1, pp. 22-30, 2005.
  3. V. Peddigari and N. Kehtarnavaz, "A Relational Approach to Zoom Tracking for Digital Still Cameras," IEEE Trans. Consumer Electronics, Vol. 51, Issue 4, pp. 1051- 1059, 2005. https://doi.org/10.1109/TCE.2005.1561824
  4. V. Peddigari and N. Kehtarnavaz, "Real-time Predictive Zoom Tracking for Digital Still Cameras," Journal of Real-Time Image Processing, Vol. 2, Issue 1, pp. 45-54, 2007. https://doi.org/10.1007/s11554-007-0036-y
  5. Tengyue Zou, Xiaoqi Tang, Bao Song, Jin Wang, and Jihong Chen, "Robust Feedback Zoom Tracking for Digital Video Surveillance," Sensors, Vol. 12, Issue 6, pp. 8073-8099, 2012. https://doi.org/10.3390/s120608073
  6. Yoon Kim, June-Sok Lee, Jae-Hwan Jeong, and Sung-Jea Ko, "A Video System with Adaptive Zoom Tracking," Conf. of IEEK, Vol. 25, Issue 1, pp. 56-57, 2002.
  7. Bui Duy Cong, Tae in Seol, Sun-Tae Chung, HoSeok Kang, and Seongwon Cho, "Realtime Implementation of Zoom Tracking on DM36x- based IP Network Camera," Conf. of Korea Multimedia Society, Vol. 16, Issue 1, pp.123-126, 2013.
  8. TMS320DM368 Digital Media System-on- Chip Datasheet, TI, 2010.
  9. TMS320DM357 DMSoC Video Processing Front End (VPFE) User's Guide, SPRUFG8A, 2009.
  10. EEL005 Lens Specification, optologics, 2012.
  11. MT9P031, 1/2.5-Inch 5Mp Digital Image Sensor Datasheet, 2005.
  12. Pelco-D Protocol Manual, http://cvs.ru/files/pelco-d.pdf, 2003.
