• Title/Summary/Keyword: feature-based tracking


A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.17-31
    • /
    • 2006
  • In this paper, passport recognition and face verification methods are proposed that automatically recognize passport codes and discriminate forged passports, in order to improve the efficiency and systematic control of immigration management. Adjusting the slant is very important for character recognition and face verification, since a slanted passport image can introduce various unwanted effects into the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected. The angle is adjusted using the slant of the horizontal straight line that connects the centers of thickness of the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The code strings are binarized by applying a repeated binarization method to the extracted code-string area. The code strings are restored by applying a CDM mask to the binary string area, and individual codes are extracted with the 8-neighborhood contour tracking algorithm. In the proposed RBF network, an enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is applied to the middle layer, using a fuzzy logic connection operator. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and that of the facial image from a database constructed with the PCA algorithm. Tests with forged passports and passports containing slanted images show that the proposed method is effective in recognizing passport codes and verifying facial images.

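To make the face-verification step above concrete, here is a minimal PCA sketch (not the authors' enhanced fuzzy ART/RBF pipeline): training faces are projected onto eigenfaces and the passport face is accepted when its feature vector is close enough to the database face. The array shapes, component count, and threshold are illustrative assumptions.

```python
import numpy as np

def fit_pca(train_faces, n_components=20):
    """train_faces: (n_samples, n_pixels) array of flattened face images (assumed input)."""
    mean = train_faces.mean(axis=0)
    # Eigenfaces are the right singular vectors of the centered training matrix.
    _, _, vt = np.linalg.svd(train_faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, axes):
    # Feature vector of one flattened face image in eigenface space.
    return (face - mean) @ axes.T

def verify(passport_face, db_face, mean, axes, threshold=0.9):
    a, b = project(passport_face, mean, axes), project(db_face, mean, axes)
    similarity = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return similarity >= threshold  # accept when the two feature vectors are similar enough
```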

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul;Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.12
    • /
    • pp.546-554
    • /
    • 2019
  • Typically, closed-circuit television (CCTV) monitoring is mainly used after the fact (i.e., to provide evidence once an incident has occurred), but by using a streaming video feed, machine learning, and advanced image recognition techniques, current technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that delivers similarity information about a suspect (or moving object), extracted via CCTV at one location, to a monitoring agent so that the selected suspect or object can continue to be tracked when it moves out of range and into the view of another CCTV camera. To improve the operating efficiency of local government CCTV control centers, we describe here the partial automation of a CCTV control system that currently relies upon monitoring by human agents. We envisage an integrated crime prevention service, incorporating the cooperative CCTV network suggested in this study, that citizens can readily experience in ways such as determining a precise individual location in real time and providing a crime prevention service linked to smartphones and/or crime prevention/safety information.
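
The camera-to-camera handoff described above could be prototyped, in its simplest form, with an appearance-similarity check between the suspect crop from one camera and candidate detections from the next. The HSV-histogram descriptor, bin counts, and similarity threshold below are illustrative assumptions, not the authors' model.

```python
import cv2
import numpy as np

def appearance_signature(crop_bgr):
    # HSV color histogram: a simple, somewhat illumination-tolerant appearance descriptor.
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

def best_match(suspect_signature, candidate_crops, min_similarity=0.6):
    # Pick the detection in the next camera that looks most like the tracked suspect.
    if not candidate_crops:
        return None
    scores = [cv2.compareHist(suspect_signature, appearance_signature(c), cv2.HISTCMP_CORREL)
              for c in candidate_crops]
    best = int(np.argmax(scores))
    return best if scores[best] >= min_similarity else None  # None: suspect not re-identified
```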

Real-time Control of Biological Animal Wastewater Treatment Process and Stability of Control Parameters (생물학적 축산폐수 처리공정의 자동제어 방법 및 제어 인자의 안정성)

  • Kim, W.Y.;Jung, J.H.;Ra, C.S.
    • Journal of Animal Science and Technology
    • /
    • v.46 no.2
    • /
    • pp.251-260
    • /
    • 2004
  • The feasibility and stability of ORP, pH(mV), and DO as real-time control parameters for the SBR process were evaluated in this study. During operation, the NBP (nitrogen break point) and NKP (nitrate knee point), which reveal the biological and chemical changes of pollutants, were clearly observed on the ORP and pH(mV)-time profiles, and these control points were easily detected by tracking the moving slope changes (MSC). However, when the balance of the aeration rate to the loading rate, or to the OUR (oxygen uptake rate), was not optimally maintained, either a false NBP appeared on the ORP and DO curves before the real NBP, or the characteristic NBP feature disappeared from the ORP curve. Even under those conditions, however, a very distinct NBP was found on the pH(mV)-time profile, and stable detection of that point was feasible by tracking the MSC. These results suggest that pH(mV) is a better real-time control parameter for the aerobic process than ORP and DO. Meanwhile, as a real-time control parameter for the anoxic process, ORP was more stable and useful than the others. Based on these results, stable real-time control of the process can be achieved by using the ORP and pH(mV) parameters in combination rather than separately. With this real-time control technology, complete removal of pollutants was always ensured despite variations in wastewater and operating conditions, and optimization of treatment time and capacity was feasible.
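
As a rough illustration of detecting a break point by tracking the moving slope change (MSC) on a pH(mV)- or ORP-time profile, the sketch below fits a sliding-window slope to a 1-D series of readings and flags the largest slope change; the window size and jump threshold are assumed values, not those used in the study.

```python
import numpy as np

def moving_slope(signal, window=10):
    # Least-squares slope of each sliding window of readings.
    x = np.arange(window)
    return np.array([np.polyfit(x, signal[i:i + window], 1)[0]
                     for i in range(len(signal) - window + 1)])

def detect_break_point(signal, window=10, min_jump=0.5):
    # Sample index of the largest moving slope change, e.g. a candidate NBP/NKP.
    slopes = moving_slope(np.asarray(signal, dtype=float), window)
    change = np.abs(np.diff(slopes))
    idx = int(np.argmax(change))
    return idx + window if change[idx] >= min_jump else None
```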

A Study on the Features of Visual-Information Acquirement Shown at Searching of Spatial Information - With the Experiment of Observing the Space of Hall in Subway Station - (공간정보의 탐색과정에 나타난 시각정보획득특성에 관한 연구 - 지하철 홀 공간의 주시실험을 대상으로 -)

  • Kim, Jong-Ha
    • Korean Institute of Interior Design Journal
    • /
    • v.23 no.2
    • /
    • pp.90-98
    • /
    • 2014
  • This study analyzed the meaning of observation time in the course of acquiring information from subjects who observed the hall space of subway stations, in order to identify which spatial information was excluded and what characterized intensive searching. The following results were obtained by analyzing the searching process and interpreting information acquisition in terms of observation area and time. First, based on the general definition of observation time, the rationale for analyzing the features of spatial-information acquisition according to the subjects' observation time was established. The decrease in the analysis data reflected the decrease in observation time during the process of perceiving and recognizing spatial information, which showed that observation was concentrated on the center of the space while the bottom of the scene (in particular, the bottom right) was largely excluded. Second, while observing the hall space of the subway stations, the subjects focused most on the upper left-center area and the signs at the right exit, followed by both sides horizontally and the clock at the top. Third, the analysis of consecutive observation frequency enabled a comparison of changes in observation concentration by area. The difference in time by area produced data from which changes in the content of spatial searching during the search process could be inferred. Fourth, as the observation frequency in area I changed from three times to six times to nine times, the observation time spent in that area increased, which showed the process of moving from perception to recognition of information as attention was concentrated through visual information. This makes it possible to understand that more time was spent on the information to be acquired while unnecessary surrounding information was excluded.
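
The area-by-area observation-time analysis described above can be illustrated with a small helper that accumulates fixation durations per named area of interest; the record format (x, y, duration) and the example area boxes are hypothetical, not the study's actual coordinates.

```python
from collections import defaultdict

def observation_time_by_area(fixations, areas):
    """fixations: iterable of (x, y, duration_seconds); areas: name -> (x0, y0, x1, y1) box."""
    totals = defaultdict(float)
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration   # time spent observing this area
                break
    return dict(totals)

# Hypothetical areas of interest, loosely matching those named in the abstract.
areas = {"upper_left_center": (0, 0, 400, 300), "right_exit_sign": (800, 0, 1200, 300)}
```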

Realistic Seeing Through Method and Device Through Adaptive Registration between Building Space and Telepresence Indoor Environment

  • Lee, Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.1
    • /
    • pp.101-107
    • /
    • 2020
  • We propose a realistic seeing-through visualization method for mixed reality environments. When a user wants to see a specific location beyond a wall in an indoor environment, the proposed system recognizes and registers the selected area using environment modelling and feature-based tracking. The selected area is then diminished and the target location is visualized in real time. With the proposed seeing-through method, a user can understand the spatial relationships of the building and can easily find the target location. We conducted a user study comparing the seeing-through method to a conventional indoor navigation service in order to investigate its potential. The proposed method was evaluated in terms of navigation time against the conventional approach and enabled users to reach target locations 30% faster.
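
The feature-based tracking and registration step mentioned above could look roughly like the following OpenCV sketch, which matches ORB features between a reference image of the selected wall area and the current camera frame and estimates a homography for overlaying the seen-through content; this is an illustrative reconstruction, not the paper's implementation.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_selected_area(reference_gray, frame_gray):
    # Detect and match binary features between the stored reference view and the live frame.
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)
    if des_ref is None or des_frm is None:
        return None
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return None
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Homography that maps reference-area coordinates into the current frame.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```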

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.1
    • /
    • pp.101-110
    • /
    • 2004
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions need to occur naturally. It is desirable for a robot to carry out human following as one such human-affinitive movement. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used in order to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using the Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
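
The Kalman-filter compensation step mentioned above can be sketched as a standard constant-velocity filter over the measured (x, y) position of the walking human; the sampling interval and noise covariances below are assumed values, not those of the intelligent-space system.

```python
import numpy as np

dt = 0.1                                    # sampling interval in seconds (assumed)
F = np.array([[1, 0, dt, 0],                # state transition for state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the (x, y) position is measured
Q = np.eye(4) * 0.01                        # process noise covariance (assumed)
R = np.eye(2) * 0.25                        # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict the next state, then correct it with the measurement z = [x_meas, y_meas].
    x = F @ x
    P = F @ P @ F.T + Q
    innovation = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P                             # x[2:] holds the estimated walking velocity
```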

Real-Time Multiple Face Detection Using Active Illumination (능동적 조명을 이용한 실시간 복합 얼굴 검출)

  • 한준희;심재창;설증보;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.155-160
    • /
    • 2003
  • This paper presents a multiple face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs that correspond to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time, moving the camera so that the tracked face remains centered in the image.

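The active-illumination idea behind the pupil detector can be illustrated by differencing a bright-pupil frame (on-axis illumination, retro-reflective pupils) and a dark-pupil frame, then thresholding the result into candidate blobs; the threshold and blob-size limits are assumptions, and pairing the candidates into faces is left to the heuristic rules the abstract mentions.

```python
import cv2

def pupil_candidates(bright_gray, dark_gray, thresh=40, min_area=5, max_area=400):
    # Pupils stay bright in the difference image because of retro-reflection.
    diff = cv2.subtract(bright_gray, dark_gray)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        if min_area <= cv2.contourArea(contour) <= max_area:
            m = cv2.moments(contour)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers   # candidate pupil locations, to be filtered and paired into faces
```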

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.16 no.5
    • /
    • pp.621-625
    • /
    • 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using the camera of mobile devices such as smartphones and PDAs. When the camera moves according to the user's hand gesture, global optical flow is generated. Therefore, robust hand movement estimation is possible by considering the dominant optical flow, based on a histogram analysis of motion direction. A continuous hand gesture is segmented into unit gestures by motion state estimation using the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during the movement states and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM shows an 82% recognition rate for 14 hand gestures.
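
The dominant-optical-flow estimation described above can be sketched with dense Farneback flow and a magnitude-weighted histogram of motion directions; the flow parameters and the minimum-magnitude threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray, bins=16, min_magnitude=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])   # per-pixel speed and direction
    moving = mag > min_magnitude                              # ignore near-static pixels
    if not np.any(moving):
        return None, 0.0
    hist, edges = np.histogram(ang[moving], bins=bins, range=(0, 2 * np.pi),
                               weights=mag[moving])
    peak = int(np.argmax(hist))
    direction = (edges[peak] + edges[peak + 1]) / 2           # dominant direction in radians
    return direction, float(mag[moving].mean())
```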

Target Position Estimation using Wireless Sensor Node Signal Processing based on Lifting Scheme Wavelet Transform (리프팅 스킴 웨이블릿 변환 기반의 무선 센서 노드 신호처리를 이용한 표적 위치 추정)

  • Cha, Dae-Hyun;Lee, Tae-Young;Hong, Jin-Keun;Han, Kun-Hui;Hwang, Chan-Sik
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.4
    • /
    • pp.1272-1277
    • /
    • 2010
  • A wireless sensor network for target detection and tracking must provide a variety of signal-processing capabilities. Wireless sensor nodes need lightweight signal-processing algorithms because of energy and communication-bandwidth constraints. The general signal-processing chain of a wireless sensor node consists of de-noising, received-signal-strength computation, feature extraction, and signal compression. The lifetime of the wireless sensor network and the performance of target detection and classification depend on this sensor-node signal processing. In this paper, we propose an energy-efficient signal-processing algorithm using the lifting-scheme wavelet transform. The proposed method estimates the target position accurately.
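
As a rough picture of the kind of lightweight transform a sensor node could run, the sketch below implements one level of a lifting-scheme (Haar) wavelet: a predict step on the odd samples and an update step on the even samples, which is fully invertible. It is a generic illustration rather than the paper's algorithm.

```python
import numpy as np

def haar_lifting_forward(signal):
    # One decomposition level; the signal length is assumed to be even.
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even              # predict: each odd sample predicted from its even neighbor
    approx = even + detail / 2       # update: approximation keeps the local mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    # Exactly undoes the forward transform.
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```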

Vehicle Speed Measurement using SAD Algorithm (SAD 알고리즘을 이용한 차량 속도 측정)

  • Park, Seong-Il;Moon, Jong-Dae;Ko, Young-Hyuk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.73-79
    • /
    • 2014
  • In this paper, we propose a mechanism that can measure traffic flow and vehicle speed on highways as well as ordinary roads by using video and image processing to detect and track cars in a video sequence. The proposed mechanism uses the first few frames of the video stream to estimate the background image. The visual tracking system is a simple algorithm based on the sum of absolute frame differences. It subtracts the background from each video frame to produce foreground images. By thresholding and performing morphological closing on each foreground image, the mechanism produces binary feature images, which are shown in the threshold window. By measuring the distance between the "first white line" mark and the "second white line" mark that the vehicle passes, it is possible to find the car's position. Average velocity is defined as the change in position of an object divided by the time over which the change takes place. The results of the proposed mechanism agree well with the measured data, and the results are displayed in real time.
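
A minimal sketch of the frame-differencing pipeline described above, assuming a precomputed background image, a fixed frame rate, and a known distance between the two white-line marks (all illustrative values): the foreground mask comes from an absolute difference, thresholding, and morphological closing, and the speed from the frame interval between line crossings.

```python
import cv2
import numpy as np

def foreground_mask(frame_gray, background_gray, thresh=30):
    # Absolute frame difference against the estimated background, then clean up the mask.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def speed_kmh(frame_at_first_line, frame_at_second_line, fps=30.0, line_gap_m=10.0):
    # Average speed = distance between the marks / time taken to travel between them.
    elapsed_s = (frame_at_second_line - frame_at_first_line) / fps
    return (line_gap_m / elapsed_s) * 3.6   # convert m/s to km/h
```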