• Title/Summary/Keyword: Camera Movement

6DOF Simulation of a Floating Structure Model Using a Single Video

  • Trieu, Hang Thi;Han, Dongyeob
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.6 / pp.563-570 / 2014
  • This paper aims to simulate the dynamic behavior of a floating structure model using image processing and close-range photogrammetry instead of contact sensors. The movement of the structure is described by the exterior orientation estimated from a single camera via space resection; the inverse resection yields the six orientation parameters of the floating structure with respect to the camera coordinate system. The single-camera solution is of practical interest because of cost constraints, unfavorable observation conditions, and the synchronization demands of multi-camera setups. The paper discusses the determination of the camera exterior orientation by least-squares adjustment, initialized with values from the DLT (Direct Linear Transformation) and refined by photogrammetric resection. The proposed method is applied to monitor the motions of a floating model. The 6DOF (Six Degrees of Freedom) results from the inverse resection show that supplying appropriate initial values from the DLT to the least-squares adjustment is effective in obtaining precise exterior orientation parameters. The proposed method can therefore be considered an efficient solution for simulating the movements of a floating structure.
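The two-stage estimation described above (a closed-form solution supplying initial values to an iterative least-squares resection) can be illustrated with a short sketch. The snippet below is not the authors' implementation: OpenCV's EPnP solver stands in for the DLT initialization, and the control points, image measurements, and intrinsics are assumed values.

```python
# Two-stage exterior orientation: closed-form initial pose, then iterative
# least-squares refinement (the role the DLT + adjustment play in the paper).
import numpy as np
import cv2

# 3D control points on the floating model (object frame, metres) -- assumed values
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.4, 0.0, 0.0],
                       [0.4, 0.3, 0.0],
                       [0.0, 0.3, 0.0],
                       [0.2, 0.15, 0.1]], dtype=np.float64)
# Their measured image coordinates in one video frame (pixels) -- assumed values
image_pts = np.array([[612.0, 388.0], [845.0, 392.0], [851.0, 217.0],
                      [618.0, 210.0], [733.0, 301.0]], dtype=np.float64)
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 360.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix (assumed)
dist = np.zeros(5)                        # assume negligible lens distortion

# Stage 1: closed-form initial pose (plays the role of the DLT initial values)
ok, rvec0, tvec0 = cv2.solvePnP(object_pts, image_pts, K, dist,
                                flags=cv2.SOLVEPNP_EPNP)

# Stage 2: iterative least-squares refinement starting from that initial value
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              rvec0, tvec0, useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# 6DOF of the model: three rotation angles plus three translations
R, _ = cv2.Rodrigues(rvec)
roll, pitch, yaw = cv2.RQDecomp3x3(R)[0]  # Euler angles in degrees
print("rotation (deg):", roll, pitch, yaw, "translation (m):", tvec.ravel())
```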

Moving Object Detection with Rotating Camera Based on Edge Segment Matching (이동카메라 환경에서의 에지 세그먼트 정합을 통한 이동물체 검출)

  • Lee, June-Hyung;Chae, Ok-Sam
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.1-12 / 2008
  • This paper presents an automatic moving-object detection method for a rotating camera, which covers a larger area with a single camera. The proposed method is based on edge segment matching, which is robust in dynamic environments with illumination change and background movement. The algorithm comprises an edge-segment-based background panorama generation method that minimizes the distortion caused by image stitching, a background registration method using the Generalized Hough Transform that reliably registers the current image to the panorama despite stitching distortions, and a moving-edge-segment extraction method that overcomes viewpoint differences and distortion. Experimental results show that the proposed method correctly detects moving objects under illumination change and camera vibration.
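A minimal sketch of the registration-then-subtraction idea: the current frame of the rotating camera is registered to a background panorama, and edge pixels with no background counterpart are kept as moving edges. ORB feature matching with a homography is used here as a generic stand-in for the paper's Generalized Hough Transform registration, and the image file names are hypothetical.

```python
# Register the current frame to a background panorama, then keep frame edges
# that have no nearby background edge as moving-edge candidates.
import cv2
import numpy as np

panorama = cv2.imread("background_panorama.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# Register the frame to the panorama (feature matching + homography)
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame, None)
kp2, des2 = orb.detectAndCompute(panorama, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
warped = cv2.warpPerspective(frame, H, (panorama.shape[1], panorama.shape[0]))

# Edge maps of the registered frame and the background panorama
edges_frame = cv2.Canny(warped, 50, 150)
edges_bg = cv2.Canny(panorama, 50, 150)

# Moving-edge candidates: frame edges with no background edge nearby
bg_dilated = cv2.dilate(edges_bg, np.ones((5, 5), np.uint8))
moving_edges = cv2.bitwise_and(edges_frame, cv2.bitwise_not(bg_dilated))
cv2.imwrite("moving_edges.png", moving_edges)
```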

Real-Time PTZ Camera with Detection and Classification Functionalities (검출과 분류기능이 탑재된 실시간 지능형 PTZ카메라)

  • Park, Jong-Hwa;Ahn, Tae-Ki;Jeon, Ji-Hye;Jo, Byung-Mok;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2C / pp.78-85 / 2011
  • In this paper we propose an intelligent PTZ camera system that detects, classifies, and tracks moving objects. When a moving object is detected, features are extracted for classification and real-time tracking follows. A GMM is used for detection, followed by shadow removal, and Legendre moments are used for classification. Without auto-focusing, the PTZ camera movement is controlled using the image center point and the object's direction, distance, and velocity. The real-time system is implemented on a TI DM6446 DaVinci processor. Experiments show high classification and tracking performance for vehicles at both normal and high speeds.
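The detection front-end described in the abstract (GMM background subtraction with shadow removal, followed by a pan/tilt command derived from the offset between the object and the image center) can be sketched as follows. This is a generic illustration, not the DM6446 implementation; the video source and the proportional gains are assumptions.

```python
# GMM background subtraction with shadow removal, plus a simple pan/tilt
# error signal from the offset between the object centroid and image centre.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")           # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)
    fg[fg == 127] = 0                           # MOG2 marks shadows as 127: drop them
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)  # track the largest moving blob
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2, y + h / 2
        # Pan/tilt error: pixel offset of the object centre from the image centre
        err_x = cx - frame.shape[1] / 2
        err_y = cy - frame.shape[0] / 2
        pan_cmd, tilt_cmd = 0.05 * err_x, 0.05 * err_y   # proportional gains (assumed)
        print(f"pan {pan_cmd:+.1f}, tilt {tilt_cmd:+.1f}")
```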

A Virtual Environment for Optimal use of Video Analytic of IP Cameras and Feasibility Study (IP 카메라의 VIDEO ANALYTIC 최적 활용을 위한 가상환경 구축 및 유용성 분석 연구)

  • Ryu, Hong-Nam;Kim, Jong-Hun;Yoo, Gyeong-Mo;Hong, Ju-Yeong;Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.29 no.11 / pp.96-101 / 2015
  • In recent years, research on the optimal placement of CCTV (Closed-Circuit Television) cameras through architectural modeling has been conducted. However, the application of the VA (Video Analytics) functions of IP (Internet Protocol) cameras to analyze surveillance coverage based on actual human movement has not been studied. This paper compares two approaches, one using data captured from real-world cameras and one using data acquired from a virtual environment. For the real cameras, we develop a GUI (Graphical User Interface) that stores hourly and daily log files generated by the VA functions, which can also be used commercially for product placement inside a shop. The virtual environment is constructed to emulate the real world, including the building structure and the camera with its specifications, and suitable camera placement is determined by recognizing obstacles and counting the number of people within the camera's field of view. This research aims to overcome the time and economic constraints of installing surveillance cameras in real environments and to study the feasibility of the virtual environment.

Verification of Camera-Image-Based Target-Tracking Algorithm for Mobile Surveillance Robot Using Virtual Simulation (가상 시뮬레이션을 이용한 기동형 경계 로봇의 영상 기반 목표추적 알고리즘 검증)

  • Lee, Dong-Youm;Seo, Bong-Cheol;Kim, Sung-Soo;Park, Sung-Ho
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.11 / pp.1463-1471 / 2012
  • In this study, a 3-axis camera system design is proposed for application to an existing 2-axis surveillance robot, together with a camera-image-based target-tracking algorithm for the robot. In the algorithm, the heading direction vector of the camera system is obtained from the position error between the center of the viewfinder and the center of the object in the camera image. Using this heading direction vector, the desired pan and tilt angles for target tracking and the desired roll angle for stabilizing the camera image are obtained through inverse kinematics. The algorithm was validated with a virtual simulation model based on MATLAB and ADAMS by checking the robot's movement in response to the target motion and the virtual image error of the viewfinder.
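A minimal sketch of the tracking law: the pixel error between the viewfinder center and the object center defines a heading direction vector through a pinhole model, from which desired pan and tilt angles follow. The focal length, image size, and object position below are assumed, and the roll command for image stabilization is omitted.

```python
# Desired pan/tilt angles from the pixel error between the viewfinder centre
# and the object centre, via a heading direction vector in the camera frame.
import numpy as np

def desired_pan_tilt(obj_px, image_size, focal_px):
    """Pan/tilt (radians) that would centre the object in the viewfinder."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = obj_px[0] - cx, obj_px[1] - cy        # pixel error from image centre
    heading = np.array([ex, ey, focal_px])         # heading direction vector
    heading /= np.linalg.norm(heading)
    pan = np.arctan2(heading[0], heading[2])       # rotation about the vertical axis
    tilt = np.arctan2(-heading[1], np.hypot(heading[0], heading[2]))  # image y points down
    return pan, tilt

pan, tilt = desired_pan_tilt(obj_px=(820, 260), image_size=(1280, 720), focal_px=1000.0)
print("pan (deg):", np.degrees(pan), "tilt (deg):", np.degrees(tilt))
```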

MTF measuring method of TDI camera electronics

  • Kim, Young-Sun;Kong, Jong-Pil;Heo, Haeng-Pal;Park, Jong-Euk;Yong, Sang-Soon;Choi, Hae-Jin
    • Proceedings of the KSRS Conference / 2007.10a / pp.540-543 / 2007
  • The modulation transfer function (MTF) of a camera system is a measure of how faithfully the system reproduces the original scene. An electro-optical camera system consists of optics, an array of pixels, and the electronics of the image signal chain, and the system MTF is the cascade of each element's MTF in the frequency domain. Consequently, the electronics MTF, including the detector MTF, can easily be recovered from the measured system MTF when well-characterized test optics are used. A Time-Delay and Integration (TDI) detector increases the signal by taking multiple exposures of the same object and adding them, and various methods can be considered for measuring the MTF of a TDI camera system. This paper presents practical MTF measurement methods for the detector and electronics of a TDI camera. Several methods are described according to the scan direction and the TDI stages, such as single-line mode and multiple-line mode. The measurement is performed under static or dynamic conditions to obtain the point spread function (PSF) or the line spread function (LSF). In particular, a dynamic test bench is used to simulate the on-track velocity and synchronize it with the TDI readout frequency for dynamic movement.
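The measurement chain relies on two standard relations: the MTF is the modulus of the Fourier transform of the measured LSF, and the system MTF is the product of the optics, detector, and electronics MTFs, so the camera contribution can be divided out when the test optics are well characterized. The sketch below illustrates this with a synthetic Gaussian LSF and a placeholder optics MTF; none of the numbers come from the paper.

```python
# MTF from a measured line spread function (LSF), plus the cascade property
# used to separate the camera (detector + electronics) MTF from the optics MTF.
import numpy as np

pixel_pitch = 7.0e-6                       # detector pixel pitch in metres (assumed)
x = (np.arange(-32, 32) + 0.5) * pixel_pitch
lsf = np.exp(-0.5 * (x / (1.2 * pixel_pitch)) ** 2)   # synthetic Gaussian LSF
lsf /= lsf.sum()                           # normalise so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))             # system MTF = |FFT of LSF|
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch)      # spatial frequency (cycles/m)

nyquist = 1.0 / (2.0 * pixel_pitch)
print("MTF at Nyquist:", np.interp(nyquist, freqs, mtf))

# Cascade property: MTF_system = MTF_optics * MTF_detector * MTF_electronics,
# so with well-characterised test optics the camera part is MTF_system / MTF_optics.
mtf_optics = np.sinc(freqs / (2.0 * nyquist))          # placeholder optics MTF
mtf_camera = np.divide(mtf, mtf_optics, out=np.zeros_like(mtf), where=mtf_optics > 1e-6)
```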

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.621-625 / 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using the camera in mobile devices such as smartphones and PDAs. When the camera moves with the user's hand gesture, global optical flow is generated, so robust hand movement estimation is possible by taking the dominant optical flow from a histogram analysis of motion direction. A continuous hand gesture is segmented into unit gestures by motion-state estimation using motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification; the SVM achieves an 82% recognition rate for 14 hand gestures.
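The motion-estimation step (dominant optical flow found from a histogram of flow directions) can be sketched as below. Dense Farnebäck flow is used here as a convenient stand-in for whatever flow method the paper uses, and the frame file names and histogram resolution are assumptions.

```python
# Dominant camera/hand motion from a direction histogram of dense optical flow.
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Keep only pixels that actually moved, then histogram their directions
moving = mag > 1.0
hist, edges = np.histogram(ang[moving], bins=36, range=(0, 2 * np.pi),
                           weights=mag[moving])
dominant_dir = (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1]) / 2.0
dominant_speed = mag[moving].mean() if moving.any() else 0.0
print(f"dominant direction {np.degrees(dominant_dir):.1f} deg, "
      f"speed {dominant_speed:.2f} px/frame")
```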

The Motion Analysis of the limited Wrist Joint During Dart-Throwing Motion by Using Infrared Camera (적외선카메라를 이용한 다트던지기 운동에서의 제한된 손목관절 움직임 분석)

  • Park, Chan-Soo;Park, Jong-Il;Kim, Kwang Gi;Jang, Ik-Gyu;Kim, Tae-Yun;Lee, Sang lim;Baek, Goo Hyun
    • Journal of Biomedical Engineering Research / v.34 no.2 / pp.55-62 / 2013
  • Wrist joints consist of irregularly shaped carpal bones and other complicated structures, so evaluating the motion of a wrist joint is a challenging task. In this study, we used an infrared camera to perform a kinematic analysis of the dart-throwing motion. With thirty-six healthy participants, we measured the difference between the movement of a normal wrist and a constrained wrist (a wrist wearing a wrist glove) during the dart-throwing motion. The ulnar flexion - radial extension motion was measured using attached passive markers and analyzed with Polygon software and SPSS. The pitch and yaw motions with a glove were larger than those without a glove by 20 and 15 degrees, respectively, whereas the roll motion without a glove was larger than that with a glove by 7 degrees. A Wilcoxon signed-rank test (p<0.05) confirmed that there are significant differences between the motions with and without a glove. The magnitude of the pitch and yaw motion toward radial extension during the dart-throwing motion was found to be smaller with a constrained wrist joint than with a normal wrist joint, whereas a normal wrist joint showed a larger movement in the roll direction.
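The group comparison mentioned above is a paired Wilcoxon signed-rank test; a minimal example with SciPy is shown below. The angle values are made up purely for illustration and are not the study's data.

```python
# Paired Wilcoxon signed-rank test between gloved and ungloved conditions.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-subject range-of-motion values (degrees) for one axis
with_glove = np.array([42.0, 38.5, 45.1, 40.2, 37.8, 44.0, 41.3, 39.6])
without_glove = np.array([35.2, 33.8, 40.5, 36.1, 34.0, 38.2, 36.8, 35.5])

stat, p = wilcoxon(with_glove, without_glove)
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}, significant at 0.05: {p < 0.05}")
```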

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un;Seo, Yung-Ho;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.24-34 / 2007
  • With the growth of IT and changes in everyday life, application systems that make information easy to access have been developed. In this paper, we propose an application system that registers a 3D footwear model using a monocular camera. Human motion analysis has generally focused on body movement, whereas this system investigates a new method based on foot movement. The paper presents the system pipeline and experimental results. To project the 3D shoe model data onto the 2D foot plane, the system is built from a foot-tracking step, a projection expression, and a pose-estimation step, divided into 2D image analysis and 3D pose estimation. For foot tracking, we propose a method that finds fixed points from the characteristics of the foot, and we propose a geometric expression that relates 2D and 3D coordinates so that a monocular camera can be used without camera calibration. We implement the application system, measure the distance error, and confirm that the registration performs well.

Effect of drone's moving image on audience's flow, arousal of interest, emotional state (드론의 무빙 영상이 수용자의 몰입도, 흥미유발, 감정상태에 미치는 영향)

  • Park, Dug-Chun
    • Journal of Digital Convergence / v.16 no.4 / pp.313-319 / 2018
  • This experimental study explores the effect of moving drone footage on a media audience's flow, arousal of interest, and emotional state. Most previous researchers of media image effects argued that camera movement should be avoided so that the audience feels that the movement of figures belongs to the story itself, and that camera movement can also disturb natural viewing. To examine the effect of moving drone footage on the audience's flow, arousal of interest, and emotional state, two groups of subjects comprising 56 university students were shown two different video clips, one filmed with a moving drone and the other with a hovering drone. After the experiment, questions designed to measure the audience's flow, arousal of interest, and emotional state were asked and analyzed. The study found that subjects who watched the moving drone footage reported more interest and a more positive emotional state than those who watched the hovering drone footage; however, no meaningful effect of the moving drone footage on the audience's flow was found.