• Title/Abstract/Keyword: Feature Tracking

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul; Shin, Gi-Han
    • Journal of Internet Computing and Services / v.10 no.4 / pp.55-70 / 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that adds direct transitions between emotional states, without passing through the neutral state, to the traditional transition model. For the localization of facial features in the video sequence we exploit template matching and optical flow. The facial feature displacements traced by the optical flow serve as input parameters to the HMMs for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
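
As a rough illustration of the pipeline described above, the sketch below tracks facial feature points with pyramidal Lucas-Kanade optical flow and scores the resulting displacement sequence against one HMM per emotion. The OpenCV and hmmlearn calls, the emotion set, and the 4-state models are illustrative assumptions, not the authors' implementation.

```python
# Sketch: per-frame facial feature displacements via Lucas-Kanade optical
# flow, scored against per-emotion HMMs (each trained offline on labeled
# displacement sequences). All names and parameters are illustrative.
import cv2
import numpy as np
from hmmlearn import hmm

def feature_displacements(frames, init_points):
    """Track feature points across frames; return per-frame displacement vectors."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = init_points.astype(np.float32).reshape(-1, 1, 2)
    seq = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        seq.append((nxt - pts).reshape(-1, 2).flatten())  # displacement features
        prev, pts = gray, nxt
    return np.array(seq)

# One HMM per emotional state; direct transitions between emotions are then
# a matter of which model wins on successive windows, no neutral detour needed.
models = {e: hmm.GaussianHMM(n_components=4) for e in
          ["neutral", "happy", "sad", "angry", "surprised"]}

def classify(observations):
    """Pick the emotion whose HMM assigns the sequence maximum likelihood."""
    return max(models, key=lambda e: models[e].score(observations))
```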

Methodology for Evaluating Real-time Rear-end Collision Risks based on Vehicle Trajectory Data Extracted from Video Image Tracking (영상기반 실시간 후미추돌 위험도 분석기법 개발)

  • O, Cheol; Jo, Jeong-Il; Kim, Jun-Hyeong; O, Ju-Taek
    • Journal of Korean Society of Transportation / v.25 no.5 / pp.173-182 / 2007
  • An innovative feature of this study is a methodology for evaluating safety performance in real time based on vehicle trajectory data extracted from video images. The essence of evaluating safety performance is to capture unsafe car-following events between individual vehicles traveling through the surveillance area. The proposed methodology applies two indices, a real-time safety index (RSI) based on the concept of safe stopping distance and the time-to-collision (TTC), to the evaluation of safety performance. The outcomes are expected to be highly useful in developing a new generation of video image processing (VIP) based traffic detection systems capable of producing safety performance measurements. Relevant technical challenges for such detection systems are also discussed.
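
The TTC and safe-stopping-distance concepts named in the abstract have standard textbook forms; the minimal sketch below uses those generic definitions (the paper's exact RSI formulation may differ), with speeds in m/s and distances in meters.

```python
# Sketch of the two indices on a single car-following pair; the reaction
# time and deceleration values are typical assumptions, not the paper's.

def time_to_collision(gap_m, v_follower, v_leader):
    """TTC (s): time to close the gap at the current relative speed."""
    closing_speed = v_follower - v_leader
    return gap_m / closing_speed if closing_speed > 0 else float("inf")

def stopping_distance(v, reaction_time=1.0, decel=3.4):
    """Distance (m) to stop from speed v: reaction travel plus braking."""
    return v * reaction_time + v ** 2 / (2 * decel)

def rear_end_risk(gap_m, v_follower, v_leader):
    """Unsafe when the gap is shorter than the extra stopping distance the
    follower needs relative to the leader (safe-stopping-distance concept)."""
    required_margin = stopping_distance(v_follower) - stopping_distance(v_leader)
    return gap_m < required_margin
```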

Analyzing Human's Motion Pattern Using Sensor Fusion in Complex Spatial Environments (복잡행동환경에서의 센서융합기반 행동패턴 분석)

  • Tark, Han-Ho; Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.6 / pp.597-602 / 2014
  • We propose a hybrid sensing system for human tracking. The system uses laser scanners and image sensors and is applicable to wide, crowded areas such as university hallways. Concretely, human tracking with laser scanners forms the base, and image sensors are used for human identification when the laser scanners lose a person due to occlusion, entering a room, or going up stairs. We developed a human identification method for this system, which works as follows: 1. Best-shot images (human images that show a person's features clearly) are obtained with the help of the position and direction data from the laser scanners. 2. Identification is performed by calculating the correlation between the color histograms of best-shot images. Estimating best-shot images makes identification possible even in crowded scenes. An experiment in a station demonstrated the effectiveness of this method.
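
Step 2 of the identification method, correlating color histograms of best-shot images, can be illustrated with OpenCV's histogram utilities; the HSV color space, bin counts, and acceptance threshold below are illustrative choices rather than the authors' settings.

```python
# Sketch: re-identify a person by correlating color histograms of best-shot
# crops (step 2 of the method); all parameter values are illustrative.
import cv2

def color_signature(bgr_crop):
    """Normalized 2D hue-saturation histogram of a best-shot image."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_person(crop_a, crop_b, threshold=0.8):
    """Histogram correlation above the threshold -> treat as the same person."""
    score = cv2.compareHist(color_signature(crop_a), color_signature(crop_b),
                            cv2.HISTCMP_CORREL)
    return score >= threshold
```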

Development of Tracking Equipment for Real-Time Multiple Face Detection (실시간 복합 얼굴 검출을 위한 추적 장치 개발)

  • 나상동; 송선희; 나하선; 김천석; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.8 / pp.1823-1830 / 2003
  • This paper presents a multiple-face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs that correspond to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face controls a pan-tilt servo mechanism in real time, moving the camera so that the tracked face always stays centered in the image.
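
The heuristic grouping of pupil candidates into faces might look like the following sketch, which keeps pairs that are roughly horizontal and at a plausible inter-pupil distance; all thresholds are illustrative assumptions, not the paper's rules.

```python
# Sketch: pair pupil candidates into face hypotheses with simple geometric
# heuristics; distance and tilt limits are illustrative (pixel units).
from itertools import combinations
from math import atan2, degrees, hypot

def pair_pupils(candidates, min_dist=30, max_dist=120, max_tilt_deg=20):
    """candidates: iterable of (x, y) pupil detections; returns plausible pairs."""
    pairs = []
    for (x1, y1), (x2, y2) in combinations(candidates, 2):
        dist = hypot(x2 - x1, y2 - y1)
        tilt = abs(degrees(atan2(y2 - y1, x2 - x1)))
        tilt = min(tilt, 180 - tilt)  # ignore left/right ordering of the pair
        if min_dist <= dist <= max_dist and tilt <= max_tilt_deg:
            pairs.append(((x1, y1), (x2, y2)))
    return pairs
```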

Real-time Montage System Design using Contents Based Image Retrieval (내용 기반 영상 검색을 이용한 실시간 몽타주 시스템 설계)

  • Choi, Hyeon-Seok; Bae, Seong-Joon; Kim, Tae-Yong; Choi, Jong-Soo
    • Archives of design research / v.19 no.2 s.64 / pp.313-322 / 2006
  • In this paper, we introduce a content-based image retrieval system that helps a user find needed images more easily and reconfigures the images automatically. With this system, we try to realize the language of the motion picture, that is, montage, from the viewpoint of the user. The real-time montage system introduced in this paper uses the Discrete Fourier Transform; through it, the system extracts the features of a selected image and compares its similarity with images in the database, which makes retrieval fast and effective. We also acquire the user's movement images by camera tracking in real time, and the acquired movement images are automatically recomposed with the images the user selected. In this way, we obtain an easy and fast image recomposition that matches the user's intention. The system is a new-media design (entertainment) tool that encourages users to participate: the user is not just a passive consumer of one-way image channels but an active subject of image reproduction. It is expected to be a foundation for a new style of user-centered, media-based entertainment.
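
One plausible reading of the DFT-based matching step is to compare normalized log-magnitude spectra, as sketched below; the working size and the correlation measure are assumptions, not the paper's specification.

```python
# Sketch: rank database images against a query by correlating normalized
# log-magnitude DFT spectra; the 64x64 working size is illustrative.
import cv2
import numpy as np

def dft_signature(gray, size=(64, 64)):
    """Centered log-magnitude spectrum, normalized to zero mean, unit variance."""
    img = cv2.resize(gray, size).astype(np.float32)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    sig = np.log1p(mag).flatten()
    return (sig - sig.mean()) / (sig.std() + 1e-9)

def spectrum_similarity(gray_a, gray_b):
    """Normalized correlation of two spectra (1.0 means identical)."""
    a, b = dft_signature(gray_a), dft_signature(gray_b)
    return float(np.dot(a, b) / a.size)

def best_match(query_gray, database_grays):
    """Return the database image whose spectrum best matches the query."""
    return max(database_grays, key=lambda g: spectrum_similarity(query_gray, g))
```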

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul; Kim, Yoon-Kyoung; Bea, Min-Kyoung; Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements under opposite emotion stimuli are analyzed by image processing of Kinect facial images. To induce two opposite emotion pairs, "Sad - Excitement" and "Contentment - Angry", which are positioned opposite each other on Russell's 2D emotion model, both visual and auditory stimuli are given to subjects. First, 31 main points are chosen among the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points are analyzed; a local minimum shift matching method is used to handle the problem of non-linear facial movement. In the results, right-side and left-side facial movements occurred in the "Sad" and "Excitement" emotions, respectively. Left-side facial movement occurred comparatively more in the "Contentment" emotion. In contrast, both left- and right-side movements occurred in the "Angry" emotion.
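
A simplified stand-in for the pixel-change measurement around the 31 main points is sketched below, using plain patch differencing between frames; the paper's local minimum shift matching, which compensates for non-linear facial movement, is not reproduced here.

```python
# Sketch: mean absolute pixel change in small patches around tracked feature
# points, as a crude proxy for per-region facial movement. Assumes points
# lie at least `half` pixels inside the image border; patch size illustrative.
import numpy as np

def patch_changes(prev_gray, cur_gray, points, half=8):
    """Return the mean |difference| in a (2*half)^2 patch around each point."""
    changes = []
    for x, y in points:
        x, y = int(x), int(y)
        p = prev_gray[y - half:y + half, x - half:x + half].astype(np.int16)
        c = cur_gray[y - half:y + half, x - half:x + half].astype(np.int16)
        changes.append(float(np.abs(c - p).mean()))
    # Averaging changes over left-side vs right-side points lateralizes the
    # movement, mirroring the paper's left/right comparison.
    return changes
```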

A Study on Tracking Method for Command and Control Framework Tools (명령 제어 프레임워크 (Command and Control Framework) 도구 추적 방안에 대한 연구)

  • Hyeok-Ju Gwon; Jin Kwak
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.5 / pp.721-736 / 2023
  • The Command and Control Framework was developed for penetration testing and education purposes, but threat actors such as cybercrime groups abuse it. From a cyber threat hunting perspective, identifying Command and Control Framework servers, as well as proactive responses such as blocking those servers, can contribute to risk management. Therefore, this paper proposes a methodology for tracking the Command and Control Framework in advance. The methodology consists of four steps: collecting a list of Command and Control Framework-related servers, emulating staged delivery, extracting botnet configurations, and collecting certificates from which features are extracted. Additionally, experiments are conducted by applying the proposed methodology to Cobalt Strike, a commercial Command and Control Framework. The beacons and certificates collected in the experiments are shared to establish a basis for responding to cyber threats originating from the Command and Control Framework.
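
The certificate-collection step might be implemented along these lines: connect to a candidate server, pull its TLS leaf certificate, and fingerprint it for later matching against known Command and Control Framework defaults. The host and port are placeholders, and verification is skipped on the assumption that such servers commonly present self-signed certificates.

```python
# Sketch: grab the DER-encoded leaf certificate of a candidate C2 server and
# compute its SHA-256 fingerprint for feature extraction. Illustrative only.
import hashlib
import socket
import ssl

def grab_certificate(host, port=443, timeout=5.0):
    """Return (der_bytes, sha256_hex) for the server's leaf certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # candidate C2 hosts rarely match names
    ctx.verify_mode = ssl.CERT_NONE     # self-signed certificates are expected
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return der, hashlib.sha256(der).hexdigest()
```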

Design and Implementation of Eye-Gaze Estimation Algorithm based on Extraction of Eye Contour and Pupil Region (눈 윤곽선과 눈동자 영역 추출 기반 시선 추정 알고리즘의 설계 및 구현)

  • Yum, Hyosub; Hong, Min; Choi, Yoo-Joo
    • The Journal of Korean Association of Computer Education / v.17 no.2 / pp.107-113 / 2014
  • In this study, we design and implement an eye-gaze estimation system based on the extraction of the eye contour and pupil region. In order to extract the eye contour and pupil region effectively, face candidate regions are extracted first. For face detection, a YCbCr value range for typical Asian face color was defined through a preliminary study of Asian face images. The biggest skin-color region is taken as the face candidate region, and the eye regions are extracted by applying contour and color feature analysis to the upper 50% of the face candidate region. The detected eye region is divided into three segments, and the pupil pixels in each segment are counted. The eye gaze is classified into one of three directions, left, center, or right, according to the number of pupil pixels in the three segments. In experiments using 5,616 images of 20 test subjects, the eye gaze was estimated with about 91 percent accuracy.
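
Two steps of this pipeline, skin-color thresholding and three-segment gaze classification, can be sketched as follows; every threshold value is illustrative, and note that OpenCV stores the channels in YCrCb order.

```python
# Sketch: YCbCr skin-color mask for face candidates, then gaze direction by
# counting dark (pupil) pixels in left/center/right thirds of the eye region.
import cv2
import numpy as np

def skin_mask(bgr):
    """Binary mask of skin-colored pixels; range bounds are illustrative."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def gaze_direction(eye_gray, pupil_threshold=60):
    """Classify gaze as left/center/right from pupil-pixel counts."""
    pupil = eye_gray < pupil_threshold           # dark pixels taken as pupil
    thirds = np.array_split(pupil, 3, axis=1)    # three vertical segments
    counts = [int(t.sum()) for t in thirds]
    return ("left", "center", "right")[int(np.argmax(counts))]
```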

Automatic Detection of Dissimilar Regions through Multiple Feature Analysis (다중의 특징 분석을 통한 비 유사 영역의 자동적인 검출)

  • Jang, Seok-Woo; Jung, Myunghee
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.2 / pp.160-166 / 2020
  • As mobile hardware technology develops, many kinds of applications are being developed with it, and there is an increasing demand to verify automatically that the interfaces of these applications work correctly. In this paper, we describe a method for accurately detecting faulty images from applications by comparing the major characteristics of input color images. Our method first extracts the major characteristics of the input image, then calculates the differences in the extracted features, and decides whether the test image is a normal image or a faulty image dissimilar to the reference image. Experimental results show that the suggested approach robustly distinguishes similar and dissimilar images by comparing the major characteristics of input color images. The suggested method is expected to be useful in many real application areas related to computer vision, such as video indexing, object detection and tracking, and image surveillance.
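
A minimal sketch of the compare-major-features idea follows, using a color histogram, edge density, and mean luminance as stand-in features; the features and thresholds actually used in the paper may differ.

```python
# Sketch: declare a test capture faulty when cheap global features diverge
# from the reference image; feature choices and tolerances are illustrative.
import cv2
import numpy as np

def major_features(bgr):
    """Color histogram, edge density, and mean luminance of an image."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3).flatten()
    hist /= hist.sum() + 1e-9
    edge_density = float(cv2.Canny(gray, 100, 200).mean())
    return hist, edge_density, float(gray.mean())

def is_faulty(test_bgr, ref_bgr, hist_tol=0.5, edge_tol=10.0, lum_tol=25.0):
    """Dissimilar if any feature difference exceeds its tolerance."""
    th, te, tl = major_features(test_bgr)
    rh, re, rl = major_features(ref_bgr)
    hist_diff = float(np.abs(th - rh).sum())   # L1 distance between histograms
    return hist_diff > hist_tol or abs(te - re) > edge_tol or abs(tl - rl) > lum_tol
```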

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee; Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.187-194 / 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. In order to meet the real-time requirement and make full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeats both an additional inlier-set refinement and a motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and an implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
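
One frame-to-frame step of a feature-based sparse RGB-D odometry pipeline of this kind can be sketched as follows; ORB features, brute-force matching, and RANSAC PnP are common choices assumed here (not necessarily the authors'), and the intrinsic matrix K is taken as known.

```python
# Sketch: match ORB features between consecutive frames, lift the previous
# frame's keypoints to 3D using its depth image, and solve the relative
# camera pose with RANSAC PnP; the inlier set doubles as a quality measure.
import cv2
import numpy as np

def odometry_step(prev_gray, prev_depth, cur_gray, K):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = float(prev_depth[int(v), int(u)])
        if z <= 0:
            continue                      # skip features without valid depth
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float32), np.asarray(pts2d, np.float32), K, None)
    return rvec, tvec, inliers            # relative motion and the inlier set
```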