• Title/Summary/Keyword: Frame camera


A Study on the Estimation of Camera Calibration Parameters using the Corresponding Points Method (점 대응 기법을 이용한 카메라의 교정 파라미터 추정에 관한 연구)

  • Choi, Seong-Gu; Go, Hyun-Min; Rho, Do-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.4 / pp.161-167 / 2001
  • Camera calibration is a very important problem in 3D measurement with a vision system. This paper proposes a simple method for camera calibration based on the principle of vanishing points and the concept of corresponding points extracted from pairs of parallel lines. Conventional methods require four reference points in one frame, whereas the proposed method needs only two reference points to estimate the vanishing points, from which the camera parameters of focal length, camera attitude, and position are calculated. Our experiments show the validity and usability of the method: the absolute error of the estimated attitude and position is on the order of $10^{-2}$.
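To make the vanishing-point idea above concrete, here is a minimal sketch, assuming pixel coordinates for two imaged parallel-line pairs of orthogonal scene directions and a principal point at the image center (all numeric values are hypothetical, not the paper's data): each vanishing point is the intersection of one line pair, and the focal length follows from $f^2 = -(v_1 - c)\cdot(v_2 - c)$.

```python
# Sketch only, not the authors' implementation.
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two imaged parallel lines, in pixel coordinates."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """f^2 = -(v1 - c) . (v2 - c) for vanishing points of orthogonal directions."""
    d = np.dot(v1 - principal_point, v2 - principal_point)
    return np.sqrt(-d)  # valid only when the dot product is negative

# Hypothetical pixel measurements of two pairs of parallel scene lines.
v_x = vanishing_point(line_through((100, 420), (600, 380)),
                      line_through((120, 300), (620, 275)))
v_y = vanishing_point(line_through((200, 100), (230, 470)),
                      line_through((480, 90), (460, 465)))
f = focal_from_orthogonal_vps(v_x, v_y, principal_point=np.array([320.0, 240.0]))
print("estimated focal length (px):", f)
```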


Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Displays have become large and varied in form, so previous gaze-tracking methods do not apply directly; mounting the gaze-tracking camera above the display removes the constraints imposed by the size or height of the display. However, this setup cannot use the infrared corneal-reflection information exploited by previous methods. This paper proposes a pupil detection method that is robust to occlusion of the eye, based on the inner eye corner and the pupil center, and a simple method for calculating the gaze position from face pose information. In the proposed method, the camera switches between wide-angle and narrow-angle modes according to the person's position: when a face is detected in the wide-angle field of view (FOV), the camera switches to narrow-angle mode after computing the face position. The frame captured in narrow-angle mode contains the gaze-direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze-direction calculation step. The face pose is estimated by mapping the detected facial feature points to a 3D model. To calculate the gaze direction, an ellipse is first fitted to the split iris edge information of the pupil; when the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face pose information. Experiments show that the proposed gaze-tracking algorithm removes the constraints on display form and effectively calculates the gaze direction of a person at a long distance with a single camera, verified at several distances.
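As a rough illustration of the pupil step described above (not the authors' implementation), the sketch below fits an ellipse to visible pupil edge points with OpenCV and falls back to a crude darkest-region estimate when too few edge points are visible, standing in for the paper's deformable-template case; the gaze point is then obtained from the corner-to-pupil vector through an assumed pre-calibrated affine mapping. Thresholds and the mapping are illustrative.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, min_edge_points=40):
    """eye_gray: grayscale crop around the eye (uint8)."""
    edges = cv2.Canny(eye_gray, 40, 80)
    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    if len(pts) >= min_edge_points:
        (cx, cy), _axes, _angle = cv2.fitEllipse(pts)   # ellipse fit on edge points
        return np.array([cx, cy])
    # Occluded case: darkest-region centroid as a crude stand-in for the
    # paper's deformable-template estimate.
    blur = cv2.GaussianBlur(eye_gray, (9, 9), 0)
    _min_val, _max_val, min_loc, _max_loc = cv2.minMaxLoc(blur)
    return np.array(min_loc, dtype=np.float32)

def gaze_on_display(pupil, inner_corner, A, b):
    """Map the corner-to-pupil vector to display coordinates (A, b from calibration)."""
    return A @ (pupil - inner_corner) + b
```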

A Study on Implementation of Motion Graphics Virtual Camera with AR Core

  • Jung, Jin-Bum; Lee, Jae-Soo; Lee, Seung-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.85-90 / 2022
  • In this study, to reduce the time and cost of the traditional motion graphics production method while reproducing virtual camera movement identical to that of the real camera, a method for creating a motion graphics virtual camera from the real-time tracking data of an AR Core-based mobile device is proposed. The proposed method simplifies the tracking pass normally performed on the video file stored after shooting: tracking proceeds on the AR Core-based mobile device simultaneously with shooting, so whether tracking succeeded can be determined at the shooting stage. In the experiment there was no difference in the resulting motion graphics image compared with the conventional method, but the conventional tracking pass consumed 6 minutes and 10 seconds for a 300-frame clip, whereas the proposed method can omit this step entirely and is therefore far more time-efficient. At a time of growing interest and research in image production using virtual and augmented reality, this study can be applied to virtual camera creation and match moving.
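A minimal sketch of the kind of data handling such a pipeline implies, under the assumption that the device logs a position and quaternion per frame (the JSON field names are invented for illustration, not the paper's format): each pose is converted into a 4x4 camera matrix that a motion-graphics tool could use as a virtual-camera keyframe.

```python
import json
import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Rotation matrix from a unit quaternion (x, y, z, w)."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def poses_to_keyframes(path):
    """Read [{'t': ..., 'p': [x, y, z], 'q': [x, y, z, w]}, ...] (assumed format)
    and build per-frame 4x4 camera transforms."""
    frames = json.load(open(path))
    keyframes = []
    for f in frames:
        M = np.eye(4)
        M[:3, :3] = quat_to_rot(*f["q"])
        M[:3, 3] = f["p"]
        keyframes.append((f["t"], M))
    return keyframes
```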

DEVELOPMENT OF A HIGH SPEED CCD CAMERA SYSTEM FOR THE OBSERVATION OF SOLAR Hα FLARES

  • Verma, V. K.; Uddin, Wahab; Gaur, V. P.
    • Journal of The Korean Astronomical Society / v.29 no.spc1 / pp.391-392 / 1996
  • We have developed and tested a CCD camera (100 $\times$ 100 pixels) system for observing Hα images of solar flares with a time resolution of ~25 msec. At a readout rate of 2 Mpixels/sec, the full 512 $\times$ 512 pixel image of the CCD camera can be recorded at more than 5 frames/sec, while a 100 $\times$ 100 pixel sub-area image can be obtained at 40 frames/sec. The 100 $\times$ 100 pixel image of the CCD camera corresponds to 130 $\times$ 130 arc-$sec^2$ on the solar disk.
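A quick check of the quoted readout figures, assuming the frame rate is bounded by the 2 Mpixel/sec pixel rate (per-frame overhead is presumably why the achieved 40 frames/sec for the 100 $\times$ 100 window sits below the raw pixel-rate limit):

```python
readout_rate = 2e6                # pixels per second
full_frame   = 512 * 512          # pixels in the full CCD image
sub_frame    = 100 * 100          # pixels in the sub-area window

print(readout_rate / full_frame)  # ~7.6 frames/s, consistent with "more than 5 frames/sec"
print(readout_rate / sub_frame)   # 200 frames/s pixel-rate upper bound for the 100x100 window
```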


MIMO Architecture for Optical Camera Communications

  • Le, Nam-Tuan; Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.1 / pp.8-13 / 2017
  • Compared with communication systems based on RF technology, Optical Camera Communication (OCC) is limited in data rate by the low frame rate of the camera. This limitation can be addressed with multiple-input multiple-output (MIMO) technology, which is the ultimate goal of much OCC research. A MIMO topology can be implemented easily without changing the architecture of the image sensor. Image sensors fall into two architectures: rolling shutter and global shutter. Because the two operate differently, their performance also differs. In this paper we analyze and evaluate the performance of a MIMO architecture for OCC.
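A minimal sketch of the spatial-MIMO idea, under the simplifying assumption that each LED's region of interest in the frame is already known: with a global shutter each LED contributes one on/off symbol per frame, while a rolling shutter exposes rows at different times, so a single LED's region can be split into horizontal bands carrying several symbols per frame. The ROIs and threshold below are illustrative, not from the paper.

```python
import numpy as np

def decode_global_shutter(frame, rois, threshold=128):
    """frame: 2-D grayscale array; rois: list of (y0, y1, x0, x1), one per LED."""
    return [int(frame[y0:y1, x0:x1].mean() > threshold) for (y0, y1, x0, x1) in rois]

def decode_rolling_shutter(frame, roi, band_height, threshold=128):
    """Split one LED's ROI into horizontal bands (exposed at different times)."""
    y0, y1, x0, x1 = roi
    strip = frame[y0:y1, x0:x1]
    bands = [strip[i:i + band_height].mean() for i in range(0, strip.shape[0], band_height)]
    return [int(b > threshold) for b in bands]
```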

A Study on the Implementation of Web-Camera System and the Measurement of Traffic (웹 카메라 시스템의 구현과 트래픽 측정에 관한 연구)

  • 안영민; 진현준; 박노경
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.187-189 / 2001
  • In this study, a web camera system is implemented and simulated on two different architectures. In the first architecture, the web server and camera server run on the same system, which transfers motion pictures compressed as JPEG files to users on the WWW (World Wide Web). In the second architecture, the web server and camera server run on separate systems, and the motion picture is transferred from the camera server to the web server and finally to the users. To compare the performance of the two architectures, data traffic is measured and simulated in bytes per second and frames per second.
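For illustration, a hedged sketch of the traffic measurement described above; `next_jpeg_frame` is a placeholder for whichever interface actually delivers the JPEG-compressed frames in either architecture.

```python
import time

def measure_traffic(next_jpeg_frame, seconds=10):
    """Return (bytes_per_second, frames_per_second) averaged over the interval."""
    total_bytes, total_frames = 0, 0
    start = time.time()
    while time.time() - start < seconds:
        frame = next_jpeg_frame()        # bytes of one JPEG-compressed frame
        total_bytes += len(frame)
        total_frames += 1
    elapsed = time.time() - start
    return total_bytes / elapsed, total_frames / elapsed
```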

Camera Calibration using the TSK fuzzy system (TSK 퍼지 시스템을 이용한 카메라 켈리브레이션)

  • Lee, Hee-Sung; Hong, Sung-Jun; Oh, Kyung-Sae; Kim, Eun-Tai
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.56-58 / 2006
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a very popular fuzzy system that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using a TSK fuzzy model. The proposed method divides the world into several regions according to the camera view and uses the clustered 3D geometric knowledge. A TSK fuzzy system is employed to estimate the camera parameters by combining the partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration.
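As a sketch of the model class rather than the paper's trained system, the following evaluates a generic first-order TSK fuzzy model with Gaussian memberships (an assumption; the paper does not state its membership functions here): each rule's firing strength weights a local linear consequent, and the output is their normalized weighted average.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, coeffs, biases):
    """
    x:       input vector, shape (d,)
    centers: rule centers, shape (r, d)
    sigmas:  Gaussian membership widths, shape (r, d)
    coeffs:  consequent linear weights, shape (r, d)
    biases:  consequent offsets, shape (r,)
    """
    # Rule firing strengths: product of Gaussian memberships over input dimensions.
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2).prod(axis=1)
    # Local linear consequents y_i = a_i . x + b_i.
    y = coeffs @ x + biases
    # Normalized weighted average of the rule outputs.
    return (w * y).sum() / w.sum()
```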


Efficient Tracking of a Moving Object Using Representative Blocks Algorithm

  • Choi, Sung-Yug; Hur, Hwa-Ra; Lee, Jang-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.678-681 / 2004
  • In this paper, efficient tracking of a moving object using optimal representative blocks is implemented on a mobile robot with a pan-tilt camera. The key idea comes from the fact that, as the image of the moving object shrinks in the image frame with increasing distance between the mobile robot's camera and the object, the tracking performance can be improved by changing the size of the representative blocks according to the object's image size. Motion estimation using Edge Detection (ED) and the Block-Matching Algorithm (BMA) is often used for moving-object tracking with vision sensors, but these schemes often fail to keep up with real-time vision data because of their heavy computational load. In this paper, an optimal representative block that greatly reduces the amount of data to be processed is defined, and its size is adjusted according to the size of the object in the image frame to improve tracking performance. The proposed algorithm is verified experimentally using a two-degree-of-freedom active camera mounted on a mobile robot.
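A rough sketch of the underlying mechanics, with arbitrary parameter choices that are not the paper's: the representative-block size shrinks with the object's apparent size, and the block is then located in the next frame by a minimum-SAD block-matching search.

```python
import numpy as np

def representative_block_size(object_height_px, scale=0.25, minimum=8):
    """Shrink the block as the tracked object shrinks in the image (scale is arbitrary)."""
    return max(minimum, int(object_height_px * scale))

def match_block(prev, curr, top_left, block, search=16):
    """Find the displacement of a block from `prev` within `curr` by minimum SAD."""
    y, x = top_left
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            sad = np.abs(curr[yy:yy + block, xx:xx + block].astype(np.int32) - ref).sum()
            if best is None or sad < best:
                best, best_dxy = sad, (dy, dx)
    return best_dxy
```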


Flicker-Free Spatial-PSK Modulation for Vehicular Image-Sensor Systems Based on Neural Networks (신경망 기반 차량 이미지센서 시스템을 위한 플리커 프리 공간-PSK 변조 기법)

  • Nguyen, Trang; Hong, Chang Hyun; Islam, Amirul; Le, Nam Tuan; Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.8 / pp.843-850 / 2016
  • This paper introduces a novel modulation scheme for vehicular communication that takes advantage of the LED lights already available on a car. The proposed 2-Phase Shift Keying (2-PSK) is a spatial modulation approach in which a pair of LED light sources on a car (either the rear or the front LEDs) is used as the transmitter. A typical camera (i.e., one with a low frame rate of no more than 30 fps), whether global shutter or rolling shutter, can be used as the receiver. The modulation scheme is part of our Image Sensor Communication proposal recently submitted to IEEE 802.15.7r1 (TG7r1). A neural network approach is also applied to improve the performance of LED detection and decoding in noisy conditions. Finally, analysis and experimental results are presented to indicate the performance of the system.
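A toy decoder consistent with the spatial 2-PSK description, assuming the two LED regions of interest have already been detected (the paper uses a neural network for that step, which is omitted here): the symbol in each frame is read from the relative brightness of the LED pair.

```python
import numpy as np

def decode_spatial_2psk(frames, roi_left, roi_right):
    """frames: iterable of 2-D grayscale arrays; each ROI is (y0, y1, x0, x1)."""
    bits = []
    for frame in frames:
        y0, y1, x0, x1 = roi_left
        left = frame[y0:y1, x0:x1].mean()
        y0, y1, x0, x1 = roi_right
        right = frame[y0:y1, x0:x1].mean()
        bits.append(0 if left > right else 1)   # symbol = which LED of the pair is brighter
    return bits
```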

Development of 3-D Volume PIV (3차원 Volume PIV의 개발)

  • Choi, Jang-Woon; Nam, Koo-Man; Lee, Young-Ho; Kim, Mi-Young
    • Transactions of the Korean Society of Mechanical Engineers B / v.27 no.6 / pp.726-735 / 2003
  • A process of 3-D particle image velocimetry, called here '3-D volume PIV', was developed for the full-field measurement of complex 3-D flows. The present method includes the coordinate transformation from image to camera coordinates, camera calibration with a calibrator based on the collinearity equation, stereo matching of particles by approximation of the epipolar lines, accurate calculation of 3-D particle positions, identification of velocity vectors by a 3-D cross-correlation equation, removal of erroneous vectors by a statistical method followed by a continuity-equation criterion, and finally 3-D animation as post-processing. In principle, since only two frame images are necessary for a single instantaneous analysis of the 3-D flow field, more valid vectors are obtainable than with previous multi-frame vector algorithms. An experimental system was also used to apply the proposed method: three analog CCD cameras and halogen lamp illumination were adopted to capture the wake flow behind a bluff obstacle. Among about 200 effective particles in two consecutive frames, 170 vectors were obtained on average in the present study.
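As a sketch of the 3-D cross-correlation step only (a generic FFT-based correlation, not the paper's full processing chain), the displacement between two reconstructed interrogation volumes can be estimated from the peak of their circular cross-correlation:

```python
import numpy as np

def displacement_3d(vol_a, vol_b):
    """Estimate the displacement of vol_b relative to vol_a (equally sized 3-D arrays)
    from the peak of their FFT-based circular cross-correlation."""
    a = vol_a - vol_a.mean()
    b = vol_b - vol_b.mean()
    corr = np.fft.ifftn(np.conj(np.fft.fftn(a)) * np.fft.fftn(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above half the volume size to negative displacements.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```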