• Title/Summary/Keyword: Optical Flow Method


Spatial Multilevel Optical Flow Architecture for Motion Estimation of Stationary Objects with Moving Camera (공간 다중레벨 Optical Flow 구조를 사용한 이동 카메라에 인식된 고정물체의 움직임 추정)

  • Fuentes, Alvaro;Park, Jongbin;Yoon, Sook;Park, Dong Sun
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.53-54 / 2018
  • This paper introduces an approach to detect motion areas of stationary objects when the camera moves slightly in the scene, by computing optical flow. The flow field is computed with two five-level pyramidal architectures, built by down-sampling the images by half at each level. Optical flow is computed at each level of the two pyramids, and a warping process combines the information into a final flow field after edge-smoothness and outlier-reduction steps. We then convert the flow vectors, by magnitude and angle, into a color map using a pseudo-color palette. Experimental results on the Middlebury optical flow dataset demonstrate the effectiveness of our method compared to other approaches.

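The coarse-to-fine idea summarized in the abstract above can be sketched in a few lines of OpenCV: build two image pyramids by halving, estimate flow from the coarsest level downward while upsampling the previous estimate, and render the result as a pseudo-color map (angle as hue, magnitude as brightness). This is only a generic approximation, not the authors' implementation; the Farneback estimator, the file names, and all parameter values are assumptions.

```python
# Generic coarse-to-fine pyramidal optical flow with HSV pseudo-color output.
import cv2
import numpy as np

LEVELS = 5

def build_pyramid(img, levels=LEVELS):
    """Down-sample by half at each level; index 0 is the full resolution."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def coarse_to_fine_flow(img0, img1, levels=LEVELS):
    pyr0, pyr1 = build_pyramid(img0, levels), build_pyramid(img1, levels)
    flow = None
    for lvl in reversed(range(levels)):              # coarsest -> finest
        a, b = pyr0[lvl], pyr1[lvl]
        if flow is None:
            flags = 0
        else:
            # upsample the previous estimate and double the vectors
            flow = 2.0 * cv2.resize(flow, (a.shape[1], a.shape[0]))
            flags = cv2.OPTFLOW_USE_INITIAL_FLOW
        flow = cv2.calcOpticalFlowFarneback(
            a, b, flow, pyr_scale=0.5, levels=1, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=flags)
    return flow

def flow_to_color(flow):
    """Angle -> hue, magnitude -> brightness (a simple pseudo-color palette)."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("flow_color.png", flow_to_color(coarse_to_fine_flow(img0, img1)))
```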

A Study on 2D/3D image Conversion Method using Optical flow of Level Simplified and Noise Reduction (Optical flow의 레벨 간소화와 잡음제거를 이용한 2D/3D 변환기법 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Eun, Jong-Won;Kim, Jin-Soo;Lee, Sang-Hun
    • Proceedings of the KAIS Fall Conference / 2011.12b / pp.441-444 / 2011
  • This paper studies how to reduce the computational load of the optical flow used for depth-map generation in 2D/3D image conversion by simplifying its levels, and how to remove image noise using the eigenvectors of objects. Optical flow is a motion estimation algorithm that yields the displacement vector of each pixel between two frames and is more accurate than algorithms such as block matching. However, conventional optical flow suffers from long computation times and is sensitive to camera movement and illumination changes. To address this, we propose a level-simplification step that shortens the computation time and a noise-removal method that applies optical flow only to regions having eigenvectors. With the proposed method, 2D images were converted into 3D stereoscopic images, and the error rate of the final images was analyzed with SSIM (Structural SIMilarity Index).

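A rough illustration of the general idea only (flow magnitude as a depth cue, SSIM as the quality measure), not the authors' level-simplified pipeline; the file names, the Farneback parameters, and the median filter used here for noise suppression are all assumptions.

```python
# Pseudo depth map from dense optical-flow magnitude, scored with SSIM.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

prev = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Fewer pyramid levels / iterations keeps the flow computation cheap.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 2, 15, 2, 5, 1.2, 0)
mag = cv2.magnitude(flow[..., 0], flow[..., 1])

# Larger apparent motion is treated as closer to the camera.
depth = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
depth = cv2.medianBlur(depth, 5)          # simple noise suppression

# SSIM shown here between the two input frames; in the paper it scores the
# final converted image against a reference.
score = ssim(prev, curr)
print("depth range:", depth.min(), depth.max(), "SSIM:", score)
```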

Optical Flow Estimation Using the Hierarchical Hopfield Neural Networks (계층적 Hopfield 신경 회로망을 이용한 Optical Flow 추정)

  • 김문갑;진성일
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.3 / pp.48-56 / 1995
  • This paper presents a method for efficient optical flow estimation for dynamic scene analysis using hierarchical Hopfield neural networks. Given two consecutive images, Zhou and Chellappa suggested a Hopfield neural network for computing the optical flow. The major problem with this algorithm is that Zhou and Chellappa's network contains a self-feedback term, which forces one to check the energy change at every iteration and to accept only the updates for which a lower energy level is guaranteed. This is not only undesirable but also inefficient when implementing the Hopfield network. Another problem is that this model does not allow accurate computation of optical flow when the disparities of the moving objects are large. This paper resolves these problems by modifying the structure of the network so that it satisfies the convergence condition of the Hopfield model and by suggesting a hierarchical algorithm, which enables the computation of optical flow even in the presence of large disparities.

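The convergence condition at issue can be illustrated with a toy network: when the weights are symmetric and the diagonal (self-feedback) terms are zero, each asynchronous update of a binary Hopfield network cannot increase the energy E = -(1/2)·vᵀWv - bᵀv, so no per-iteration energy check is needed. The sketch below is a generic illustration of that property, not the optical-flow formulation of the paper; the weights and biases are random placeholders.

```python
# Asynchronous binary Hopfield updates with no self-feedback (zero diagonal).
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # remove self-feedback terms
b = rng.standard_normal(n)
v = rng.choice([-1.0, 1.0], size=n)

def energy(v):
    return -0.5 * v @ W @ v - b @ v

prev_e = energy(v)
for sweep in range(20):
    for i in rng.permutation(n):                      # asynchronous updates
        v[i] = 1.0 if W[i] @ v + b[i] >= 0 else -1.0
    e = energy(v)
    assert e <= prev_e + 1e-9    # zero diagonal guarantees no energy increase
    prev_e = e
print("final energy:", energy(v))
```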

Automatic Jitter Evaluation Method from Video using Optical Flow (Optical Flow를 사용한 동영상의 흔들림 자동 평가 방법)

  • Baek, Sang Hyune;Hwang, WonJun
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1236-1247 / 2017
  • In this paper, we propose a method for evaluating uncomfortable shaking in video. When you shoot video with a handheld device such as a smartphone, most of the footage contains unwanted shake. Most of this shake is caused by hand tremor during shooting, and many methods for correcting it automatically have been proposed. Comparing these shake-correction methods requires evaluating their correction performance, but since there is no standardized evaluation method, each correction method comes with its own, which makes objective comparison difficult. In this paper, we propose a method for objectively evaluating video shake: the video is analyzed automatically to determine how much shake it contains and how strongly the shake is concentrated at particular times. To measure the shaking index, we propose a jitter model and apply an optical-flow-based algorithm to real video to measure the shaking automatically. Finally, we analyze how the shaking indices change after applying three different image stabilization methods to nine sample videos.
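As a rough sketch of how shake can be quantified with optical flow (not the jitter model proposed in the paper), the code below estimates a per-frame global translation with sparse Lucas-Kanade tracking and treats the high-frequency residual of the accumulated trajectory as a shake index. The video path, the median-based global motion, and the smoothing window are all assumptions.

```python
# Per-frame global motion via sparse LK flow; shake = high-frequency residual.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")                  # placeholder input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
dx_list, dy_list = [], []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        prev_gray = gray
        continue
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        prev_gray = gray
        continue
    motion = (nxt[good] - pts[good]).reshape(-1, 2)
    dx_list.append(np.median(motion[:, 0]))          # robust global translation
    dy_list.append(np.median(motion[:, 1]))
    prev_gray = gray
cap.release()

# Smooth part of the trajectory ~ intended pan; residual ~ shake.
traj = np.cumsum(np.column_stack([dx_list, dy_list]), axis=0)
kernel = np.ones(15) / 15.0
smooth = np.column_stack([np.convolve(traj[:, 0], kernel, mode="same"),
                          np.convolve(traj[:, 1], kernel, mode="same")])
jitter_index = float(np.mean(np.linalg.norm(traj - smooth, axis=1)))
print("jitter index:", jitter_index)
```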

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical flow-based feature extraction and analysis method for analyzing emotional features from facial images. Considering that facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models based on combinations of landmarks and extract the LK optical flow vectors at each landmark based on the centre pixels of the motion vector window. The facial emotion features are modelled by the combination of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique, such as a Bayesian classifier. We also extract the optimal emotional features, those with high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis, in order to improve the operational efficiency and accuracy of the emotional feature extraction process.
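A minimal sketch of the LK step only, assuming landmark coordinates are already available from an ASM fit (the ASM itself is not shown): track the landmarks between two face images with pyramidal Lucas-Kanade optical flow and collect the per-landmark motion vectors as a feature. The image paths, the hard-coded landmark positions, and the window parameters are placeholders.

```python
# LK optical flow evaluated at facial landmark points.
import cv2
import numpy as np

face_prev = cv2.imread("face_neutral.png", cv2.IMREAD_GRAYSCALE)    # placeholders
face_curr = cv2.imread("face_expression.png", cv2.IMREAD_GRAYSCALE)

# (x, y) landmark coordinates, e.g. produced by an ASM fit; dummy values here.
landmarks = np.array([[120, 150], [180, 150], [150, 200], [130, 250], [170, 250]],
                     dtype=np.float32).reshape(-1, 1, 2)

tracked, status, _ = cv2.calcOpticalFlowPyrLK(
    face_prev, face_curr, landmarks, None, winSize=(21, 21), maxLevel=3)

flow_vectors = (tracked - landmarks).reshape(-1, 2)
feature = flow_vectors[status.ravel() == 1].flatten()   # per-landmark (dx, dy)
print("emotion feature vector:", feature)
```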

Micro-Expression Recognition Base on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.1981-1995 / 2021
  • When a person tries to conceal emotions, the real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to design a feature extraction method that copes with the small changes and short duration of micro-expressions. Most methods rely on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we extract the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are fed into an improved MobileNetV2 model, where an SVM is applied to classify expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method can significantly improve micro-expression recognition performance.
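The shape of such a pipeline can be sketched as follows: optical flow between the onset and apex frames, a convolutional backbone as feature extractor, and an SVM on top. The sketch uses a stock torchvision MobileNetV2 rather than the improved model from the paper, and the flow-to-image packing, file names, and the tiny sample list are placeholder assumptions.

```python
# Onset/apex optical flow -> MobileNetV2 features -> SVM classifier (sketch).
import cv2
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

def flow_image(onset_path, apex_path):
    a = cv2.imread(onset_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(apex_path, cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # pack magnitude and the two flow components into a 3-channel "image"
    img = np.dstack([mag, flow[..., 0], flow[..., 1]]).astype(np.float32)
    return cv2.resize(img, (224, 224))

backbone = models.mobilenet_v2(weights=None).features.eval()

def features(img):
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        f = backbone(x)                               # (1, 1280, 7, 7)
    return f.mean(dim=(2, 3)).squeeze(0).numpy()      # global average pooling

# samples: list of (onset_path, apex_path, label) -- placeholder data.
samples = [("s1_onset.png", "s1_apex.png", 0), ("s2_onset.png", "s2_apex.png", 1)]
X = np.stack([features(flow_image(o, a)) for o, a, _ in samples])
y = [lbl for _, _, lbl in samples]
clf = SVC(kernel="linear").fit(X, y)
```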

Mobile Robot Localization Using Optical Flow Sensors

  • Lee, Soo-Yong;Song, Jae-Bok
    • International Journal of Control, Automation, and Systems / v.2 no.4 / pp.485-493 / 2004
  • Open-loop position estimation methods are commonly used in mobile robot applications. Their strength lies in the speed and simplicity with which an estimated position is determined. However, these methods can lead to inaccurate or unreliable estimates. Two position estimation methods are developed in this paper, one using a single optical flow sensor and a second using two optical sensors. The first method can accurately estimate position under ideal conditions and also when wheel slip perpendicular to the axis of the wheel occurs. The second method can accurately estimate position even when wheel slip parallel to the axis of the wheel occurs. Location of the sensors is investigated in order to minimize errors caused by inaccurate sensor readings. Finally, a method is implemented and tested using a potential field based navigation scheme. Estimates of position were found to be as accurate as dead-reckoning in ideal conditions and much more accurate in cases where wheel slip occurs.
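A minimal sketch of the general dead-reckoning idea, not the estimators developed in the paper: two optical flow sensors mounted at known body-frame offsets each report a small displacement, the planar motion increment (dx, dy, dtheta) is recovered by least squares, and the increments are integrated into a pose. The mounting offsets and the synthetic readings are hypothetical values.

```python
# Fuse two optical-flow-sensor displacement readings into a planar pose.
import numpy as np

# Body-frame mounting positions of the two sensors (metres); hypothetical.
r1 = np.array([0.10, 0.05])
r2 = np.array([0.10, -0.05])

def step_from_sensors(d1, d2):
    """Small-motion model: each sensor sees d_i ~= t + dtheta * perp(r_i)."""
    def rows(r):
        return np.array([[1.0, 0.0, -r[1]],
                         [0.0, 1.0,  r[0]]])
    A = np.vstack([rows(r1), rows(r2)])
    b = np.hstack([d1, d2])
    (dx, dy, dth), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy, dth

# Integrate body-frame increments into a world-frame pose (fake readings).
x = y = th = 0.0
readings = [(np.array([0.010, 0.0]), np.array([0.010, 0.0]))] * 50
for d1, d2 in readings:
    dx, dy, dth = step_from_sensors(d1, d2)
    c, s = np.cos(th), np.sin(th)
    x += c * dx - s * dy
    y += s * dx + c * dy
    th += dth
print(f"pose estimate: x={x:.3f} m, y={y:.3f} m, theta={th:.3f} rad")
```

In this least-squares form, two sensors make both the translation and the rotation increment observable without using wheel encoders, which is what allows the sketch to tolerate wheel slip that pure odometry cannot.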

Motion Estimation with Optical Flow-based Adaptive Search Region

  • Kim, Kyoung-Kyoo;Ban, Seong-Won;Cheong, Won Sik;Lee, Kuhn-Il
    • Proceedings of the IEEK Conference / 2000.07b / pp.843-846 / 2000
  • An optical flow-based motion estimation algorithm is proposed for video coding. The algorithm uses block-matching motion estimation with an adaptive search region. The search region is computed from motion fields that are estimated based on the optical flow. The algorithm is based on the fact that true block-motion vectors have similar characteristics to optical flow vectors. Thereafter, the search region is computed using these optical flow vectors that include spatial relationships. In conventional block matching, the search region is fixed. In contrast, in the new method, the appropriate size and location of the search region are both decided by the proposed algorithm. The results obtained using test images show that the proposed algorithm can produce a significant improvement compared with previous block-matching algorithms.

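To illustrate the underlying idea (not the authors' exact algorithm): a block-matching sketch in which each block's search window is centred on that block's mean optical-flow vector, so a small exhaustive search radius suffices. The block size, search radius, and Farneback parameters are assumptions.

```python
# Block matching with an optical-flow-predicted search window centre.
import cv2
import numpy as np

B, R = 16, 4   # block size and (reduced) search radius around the prediction

def adaptive_block_matching(prev, curr):
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev.shape
    vectors = np.zeros((h // B, w // B, 2), dtype=np.int32)
    for by in range(h // B):
        for bx in range(w // B):
            y0, x0 = by * B, bx * B
            block = prev[y0:y0 + B, x0:x0 + B].astype(np.int32)
            # predicted displacement for this block from the flow field
            pdx, pdy = np.round(flow[y0:y0 + B, x0:x0 + B].mean(axis=(0, 1))).astype(int)
            best, best_v = None, (0, 0)
            for dy in range(pdy - R, pdy + R + 1):
                for dx in range(pdx - R, pdx + R + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if yy < 0 or xx < 0 or yy + B > h or xx + B > w:
                        continue
                    cand = curr[yy:yy + B, xx:xx + B].astype(np.int32)
                    sad = np.abs(block - cand).sum()    # sum of absolute differences
                    if best is None or sad < best:
                        best, best_v = sad, (dx, dy)
            vectors[by, bx] = best_v
    return vectors

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
mv = adaptive_block_matching(prev, curr)
```

Centring the window on the flow prediction lets the exhaustive search radius R stay small while still following larger motions.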

Passenger Monitoring Method using Optical Flow and Difference Image (차영상과 Optical Flow를 이용한 지하철 승객 감시 방법)

  • Lee, Woo-Seok;Kim, Hyoung-Hoon;Cho, Yong-Gee
    • Proceedings of the KSR Conference / 2011.10a / pp.1966-1972 / 2011
  • Optical flow estimation based on multi-constraint approaches is frequently used for the recognition of moving objects. This paper proposes a method to monitor passenger boarding using image processing when a train is operated under Automatic Train Operation (ATO). Passenger movement can be detected by comparing two images: a base image and an image captured at that moment by CCTV. Optical flow helps to find the passenger movement when the two images are compared. Passenger movement is an important piece of information for the ATO system, because the system needs it to decide the door status.

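A simple sketch of the general scheme (difference against a base image, then optical flow to confirm motion), not the paper's system; the image paths and every threshold below are placeholder assumptions.

```python
# Difference image against a base image, then optical flow to confirm motion.
import cv2
import numpy as np

reference = cv2.imread("platform_empty.png", cv2.IMREAD_GRAYSCALE)  # placeholders
current = cv2.imread("platform_now.png", cv2.IMREAD_GRAYSCALE)

# 1) Difference image: where does the scene differ from the base image?
diff = cv2.absdiff(reference, current)
_, changed = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
changed_ratio = changed.mean() / 255.0

moving = False
if changed_ratio > 0.01:                     # something differs: check motion
    flow = cv2.calcOpticalFlowFarneback(reference, current, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])
    moving = mag[changed > 0].mean() > 1.0   # mean flow over changed regions
print("passenger movement detected:", moving)
```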

Human Detection in Images Using Optical Flow and Learning (광 흐름과 학습에 의한 영상 내 사람의 검지)

  • Do, Yongtae
    • Journal of Sensor Science and Technology / v.29 no.3 / pp.194-200 / 2020
  • Human detection is an important aspect in many video-based sensing and monitoring systems. Studies have been actively conducted for the automatic detection of humans in camera images, and various methods have been proposed. However, there are still problems in terms of performance and computational cost. In this paper, we describe a method for efficient human detection in the field of view of a camera, which may be static or moving, through multiple processing steps. A detection line is designated at the position where a human appears first in a sensing area, and only the one-dimensional gray pixel values of the line are monitored. If any noticeable change occurs in the detection line, corner detection and optical flow computation are performed in the vicinity of the detection line to confirm the change. When significant changes are observed in the corner numbers and optical flow vectors, the final determination of human presence in the monitoring area is performed using the Histograms of Oriented Gradients method and a Support Vector Machine. The proposed method requires processing only specific small areas of two consecutive gray images. Furthermore, this method enables operation not only in a static condition with a fixed camera, but also in a dynamic condition such as an operation using a camera attached to a moving vehicle.
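The staged scheme can be sketched roughly as below (this is not the author's implementation; the detection-line position, band size, and thresholds are placeholders): a cheap check on the 1-D detection line, then corner detection plus LK optical flow in a band around it, and finally OpenCV's stock HOG descriptor with its default people detector for confirmation.

```python
# Staged human detection: 1-D line check -> corners + LK flow -> HOG + SVM.
import cv2
import numpy as np

LINE_Y = 200                  # row of the detection line (placeholder)
BAND = 40                     # vertical band around the line to analyse

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def human_near_line(prev_gray, gray):
    # Stage 1: cheap check on the 1-D detection line only.
    if np.abs(gray[LINE_Y].astype(int) - prev_gray[LINE_Y].astype(int)).mean() < 5:
        return False
    # Stage 2: corners + LK optical flow in a small band around the line.
    y0, y1 = max(0, LINE_Y - BAND), LINE_Y + BAND
    band_prev, band_curr = prev_gray[y0:y1], gray[y0:y1]
    corners = cv2.goodFeaturesToTrack(band_prev, 100, 0.01, 5)
    if corners is None:
        return False
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(band_prev, band_curr, corners, None)
    motion = np.linalg.norm((nxt - corners).reshape(-1, 2), axis=1)[status.ravel() == 1]
    if motion.size == 0 or motion.mean() < 1.0:
        return False
    # Stage 3: final confirmation with HOG features + linear SVM on the frame.
    rects, _ = hog.detectMultiScale(gray, winStride=(8, 8))
    return len(rects) > 0

prev = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)
print("human detected:", human_near_line(prev, curr))
```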