• Title/Summary/Keyword: Optical Flow Sensor


Human Detection in Images Using Optical Flow and Learning (광 흐름과 학습에 의한 영상 내 사람의 검지)

  • Do, Yongtae
    • Journal of Sensor Science and Technology / v.29 no.3 / pp.194-200 / 2020
  • Human detection is an important aspect in many video-based sensing and monitoring systems. Studies have been actively conducted for the automatic detection of humans in camera images, and various methods have been proposed. However, there are still problems in terms of performance and computational cost. In this paper, we describe a method for efficient human detection in the field of view of a camera, which may be static or moving, through multiple processing steps. A detection line is designated at the position where a human appears first in a sensing area, and only the one-dimensional gray pixel values of the line are monitored. If any noticeable change occurs in the detection line, corner detection and optical flow computation are performed in the vicinity of the detection line to confirm the change. When significant changes are observed in the corner numbers and optical flow vectors, the final determination of human presence in the monitoring area is performed using the Histograms of Oriented Gradients method and a Support Vector Machine. The proposed method requires processing only specific small areas of two consecutive gray images. Furthermore, this method enables operation not only in a static condition with a fixed camera, but also in a dynamic condition such as an operation using a camera attached to a moving vehicle.
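
The staged pipeline above begins by watching only a one-dimensional detection line for gray-level changes before running the heavier corner/optical-flow and HOG+SVM stages. A minimal sketch of that first stage is shown below; the threshold value and the synthetic pixel profiles are illustrative assumptions, not values from the paper.

```python
def line_changed(prev_line, curr_line, threshold=10.0):
    """Return True if the mean absolute gray-level change along the
    1-D detection line exceeds the threshold, signalling that the
    later verification stages should run."""
    if len(prev_line) != len(curr_line):
        raise ValueError("detection lines must have equal length")
    mad = sum(abs(a - b) for a, b in zip(prev_line, curr_line)) / len(prev_line)
    return mad > threshold

# Example: a person entering darkens part of the detection line.
background = [200] * 64
with_person = [200] * 40 + [60] * 24
```

Only when `line_changed` fires would the method inspect the neighborhood of the line with corner detection and optical flow, keeping the per-frame cost of the idle state to a single row of pixels.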

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but for use in real life. In this work, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for human-robot interaction (HRI). To carry out the assigned tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, both fed by a wireless vision camera, are implemented and fused with the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
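
The entry's obstacle recognition stage builds on template matching. As a point of reference, here is a sketch of plain template matching by normalized cross-correlation on a 1-D gray profile; the paper's "enhanced" variant and the hierarchical SVM stage are not reproduced, and the signals are made up for illustration.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches;
    1.0 means a perfect (affine) intensity match."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def best_match(signal, template):
    """Slide the template over a 1-D gray profile and return the
    offset with the highest correlation score."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template) for i in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

In a full 2-D implementation the same score is computed over image windows; normalization makes the match robust to uniform lighting changes, which is one reason NCC is the usual baseline that "enhanced" schemes improve on.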


Applicability of Optical Flow Information for UAV Navigation under GNSS-denied Environment (위성항법 불용 환경에서의 무인비행체 항법을 위한 광류 정보 활용)

  • Kim, Dongmin;Kim, Taegyun;Jeaong, Hoijo;Suk, Jinyoung;Kim, Seungkeun;Kim, Younsil;Han, Sanghyuck
    • Journal of Advanced Navigation Technology / v.24 no.1 / pp.16-27 / 2020
  • This paper investigates the applicability of optical flow information for unmanned aerial vehicle (UAV) navigation in environments where the global navigation satellite system (GNSS) is unavailable. Since optical flow is one of the key measurements for estimating horizontal velocity and position, its accuracy must be guaranteed. A navigation algorithm that can estimate and cancel the biases the optical flow information may contain is therefore proposed to improve estimation performance. To apply and verify the proposed algorithm, an integrated simulation environment is built by designing a guidance, navigation, and control (GNC) system. Numerical simulations in this environment are used to analyze the navigation performance.
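
The core idea, estimating and cancelling a bias in the optical-flow velocity, can be illustrated with a deliberately simplified toy: while the UAV hovers (true horizontal velocity near zero), any steady flow-derived velocity reading is attributable to bias, so it can be averaged and subtracted in flight. The paper estimates the bias online inside its navigation filter; the class below is only a sketch of the principle, and all numbers are assumed.

```python
class FlowBiasCanceller:
    """Estimate a constant optical-flow velocity bias from hover
    samples, then subtract it from in-flight readings."""

    def __init__(self):
        self._sum = 0.0
        self._n = 0

    def calibrate(self, flow_velocity_at_hover):
        """Accumulate flow readings taken while the vehicle is stationary."""
        self._sum += flow_velocity_at_hover
        self._n += 1

    @property
    def bias(self):
        return self._sum / self._n if self._n else 0.0

    def corrected(self, flow_velocity):
        """Bias-cancelled velocity estimate."""
        return flow_velocity - self.bias
```

A real GNSS-denied navigator would instead augment the filter state with the bias so it stays observable during maneuvers, as the paper does.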

Fanless Thermal Design for the Information Storage System Using CAE Technique (CAE 기법을 이용한 정보저장시스템의 Fanless 열설계)

  • Ryu Ho Chul;Dan Byung Ju;Choi In Ho;Kim Jin Yong
    • Proceedings of the Korean Society of Information Storage Systems Conference / 2005.10a / pp.246-247 / 2005
  • This study proposes a fanless thermal design, developed using CAE techniques, for an information storage system with a serious thermal problem. First, the main heat flow was controlled through a CAE-based fanless heat-sink design so as not to affect the sensitive optical pick-up sensor. Then, parametric studies of the vents yielded a thermal solution for the heat that concentrates at the top of the case in the absence of a fan. The CAE results were verified experimentally. Owing to the newly designed thermal path, the thermal specification of the optical pick-up sensor was satisfied and a fanless thermal design for the information storage system was achieved.


Development of Respiration Sensors Using Plastic Optical Fiber for Respiratory Monitoring Inside MRI System

  • Yoo, Wook-Jae;Jang, Kyoung-Won;Seo, Jeong-Ki;Heo, Ji-Yeon;Moon, Jin-Soo;Park, Jang-Yeon;Lee, Bong-Soo
    • Journal of the Optical Society of Korea / v.14 no.3 / pp.235-239 / 2010
  • In this study, we have fabricated two types of non-invasive fiber-optic respiration sensors that can measure respiratory signals during magnetic resonance (MR) image acquisition. One is a nasal-cavity-attached sensor that measures the temperature variation of the air flow using a thermochromic pigment. The other is an abdomen-attached sensor that measures the change in abdominal circumference using a sensing part composed of polymethyl-methacrylate (PMMA) tubes, a mirror, and a spring. Light modulated by the respiratory movements of the patient in the MR room is guided via optical fibers to detectors in the MRI control room, and the respiratory signals of the fiber-optic sensors are compared with those of the BIOPAC® system. We have verified that respiratory signals can be obtained without deteriorating the MR image. It is anticipated that the proposed fiber-optic respiration sensors would be highly suitable for respiratory monitoring during surgical procedures performed inside an MRI system.

Highly Sensitive Biological Analysis Using Optical Microfluidic Sensor

  • Lee, Sang-Yeop;Chen, Ling-Xin;Choo, Jae-Bum;Lee, Eun-Kyu;Lee, Sang-Hoon
    • Journal of the Optical Society of Korea / v.10 no.3 / pp.130-142 / 2006
  • Lab-on-a-chip technology is attracting great interest because the miniaturization of reaction systems offers practical advantages over classical bench-top chemical systems. Rapid mixing of the fluids flowing through a microchannel is very important for various applications of microfluidic systems. In addition, highly sensitive on-chip detection techniques are essential for the in situ monitoring of chemical reactions because the detection volume in a channel is extremely small. Recently, a confocal surface-enhanced Raman spectroscopy (SERS) technique for highly sensitive biological analysis in a microfluidic sensor has been developed in our research group. Highly precise quantitative measurements can be obtained if continuous flow and homogeneous mixing between the analytes and silver nanocolloids are maintained. We have also reported a new analytical method for DNA hybridization in a PDMS microfluidic sensor using fluorescence resonance energy transfer (FRET). This method overcomes many of the drawbacks of microarray chips, such as long hybridization times and inconvenient immobilization procedures. In this paper, our recent applications of confocal Raman/fluorescence microscopy to highly sensitive lab-on-a-chip detection are reviewed.

Measurement of the Void Fraction in Slug Flow using an Optical Method (광학적 방법을 이용한 슬러그 유동의 기공률 측정)

  • Kim, Dong-Seon
    • Journal of Institute of Convergence Technology / v.1 no.2 / pp.25-28 / 2011
  • The void fraction has been measured for gas-liquid cocurrent slug flow in an 8 mm vertical acrylic tube using an optical method. Bubble speed, length, and period could be measured with two sets of laser-infrared sensor modules mounted 25 mm apart alongside the tube, designed to detect variations in light intensity with a time delay as two parallel laser beams were refracted successively by a passing bubble. The results were in good agreement with previous studies in the literature, suggesting that the method used in this study was sound and accurate.
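
The two-sensor timing idea above reduces to finding the delay between the upstream and downstream intensity traces: the lag that maximizes their cross-correlation, divided into the 25 mm sensor spacing, gives the bubble speed. The sketch below assumes a sample rate and synthetic pulse shapes for illustration.

```python
def delay_samples(sig_a, sig_b, max_lag):
    """Lag (in samples) at which sig_b best matches a shifted sig_a,
    found by maximizing the raw cross-correlation."""
    def corr(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a) - lag))
    return max(range(max_lag + 1), key=corr)

def bubble_speed(sig_a, sig_b, spacing_m=0.025, sample_rate_hz=1000.0,
                 max_lag=100):
    """Bubble speed in m/s from the transit delay between two sensors
    mounted spacing_m apart along the tube."""
    lag = delay_samples(sig_a, sig_b, max_lag)
    return spacing_m * sample_rate_hz / lag
```

With the delay known, bubble length follows from the pulse duration at one sensor times the speed, which is how a single pair of beams yields speed, length, and period together.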


Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, using Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera viewing a reflective mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-mounted fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot position and angle by the ego-motion method, using the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the position and angle it estimates with measurements from a Global Vision Localization System.
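
The third step relies on a consensus over flow-vector directions. A minimal RANSAC-style sketch of that sub-step is shown below: each vector's angle is tried as a candidate, and the candidate supported by the most vectors wins, rejecting outliers such as flow from independently moving objects. The inlier tolerance is an assumed value; the paper's full vanishing-point estimation is not reproduced.

```python
import math

def dominant_flow_angle(vectors, tol_rad=0.1):
    """vectors: list of (dx, dy) optical-flow displacements.
    Returns the candidate angle with the largest inlier support,
    where a vector is an inlier if its angle lies within tol_rad
    of the candidate (wrapped to [-pi, pi])."""
    angles = [math.atan2(dy, dx) for dx, dy in vectors]

    def support(cand):
        return sum(
            1 for a in angles
            if abs(math.atan2(math.sin(a - cand), math.cos(a - cand))) < tol_rad
        )

    return max(angles, key=support)
```

In the full algorithm the inlier set is then used to locate the vanishing point, from which translation direction and heading change are recovered.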

A Study on Vehicle Ego-motion Estimation by Optimizing a Vehicle Platform (차량 플랫폼에 최적화한 자차량 에고 모션 추정에 관한 연구)

  • Song, Moon-Hyung;Shin, Dong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.21 no.9 / pp.818-826 / 2015
  • This paper presents a novel methodology for estimating vehicle ego-motion, i.e., tri-axis linear and angular velocities, using a stereo vision sensor and a 2G1Y sensor (longitudinal acceleration, lateral acceleration, and yaw rate). The estimated ego-motion information can be used to predict the future ego-path and to improve the accuracy of the 3D coordinates of obstacles by compensating for disturbances from vehicle movement, for example in a collision avoidance system. To incorporate vehicle dynamic characteristics into the ego-motion estimation, the state evolution model of the Kalman filter has been augmented with lateral vehicle dynamics; vanishing-point estimation is also taken into account because the optical flow radiates from a vanishing point, which may shift with vehicle pitch motion. Experimental results based on real-world data have shown the effectiveness of the proposed methodology in terms of accuracy.
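
The predict/correct loop underlying the filter described above can be sketched with a single scalar state. The paper augments the state with lateral vehicle dynamics and the vanishing point; here one velocity state keeps the structure visible, and all noise values are illustrative assumptions.

```python
class ScalarKalman:
    """Minimal scalar Kalman filter: predict a velocity from an
    inertial acceleration, then correct it with a vision measurement."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, accel, dt):
        """State evolution: v <- v + a*dt, inflating covariance by q."""
        self.x += accel * dt
        self.p += self.q

    def update(self, z):
        """Correct with a measured velocity z (e.g., from stereo flow)."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Augmenting the state vector (as the paper does) changes the scalars to matrices but leaves this two-step structure untouched.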

Crowd Activity Recognition using Optical Flow Orientation Distribution

  • Kim, Jinpyung;Jang, Gyujin;Kim, Gyujin;Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.2948-2963 / 2015
  • In the field of computer vision, visual surveillance systems have recently become an important research topic. Growth in this area is driven both by the increasing availability of inexpensive computing devices and image sensors and by the general inefficiency of manual surveillance and monitoring. In particular, the ultimate goal of many visual surveillance systems is automatic activity recognition for events at a given site. A higher-level understanding of these activities requires certain lower-level computer vision tasks to be performed. In this paper, we therefore propose an intelligent activity recognition model that combines a structure learning method and a classification method. The structure learning method is a K2 learning algorithm that generates Bayesian networks of causal relationships between sensors for a given activity. The statistical characteristics of the sensor values and the topological characteristics of the generated graphs are learned for each activity, and a neural network is then designed to classify the current activity according to the features extracted from the collected multi-sensor values. Finally, the proposed method is implemented and tested using the PETS2013 benchmark data.
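
The title's feature, an optical-flow orientation distribution, is simply a histogram of flow-vector directions. The sketch below bins angles into an 8-bin normalized histogram; the bin count and the synthetic vectors are assumptions for illustration, and the paper's Bayesian-network and neural-network stages are not reproduced.

```python
import math

def orientation_histogram(vectors, bins=8):
    """Normalized histogram of optical-flow directions over [0, 2*pi).
    vectors: list of (dx, dy) flow displacements."""
    hist = [0] * bins
    for dx, dy in vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

A coherent crowd flow concentrates mass in one or two bins, while panic or dispersal spreads it nearly uniformly, which is what makes the distribution a useful activity feature.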