• Title/Summary/Keyword: Camera View


Verification of Camera-Image-Based Target-Tracking Algorithm for Mobile Surveillance Robot Using Virtual Simulation (가상 시뮬레이션을 이용한 기동형 경계 로봇의 영상 기반 목표추적 알고리즘 검증)

  • Lee, Dong-Youm;Seo, Bong-Cheol;Kim, Sung-Soo;Park, Sung-Ho
    • Transactions of the Korean Society of Mechanical Engineers A, v.36 no.11, pp.1463-1471, 2012
  • In this study, a 3-axis camera system design is proposed for application to an existing 2-axis surveillance robot, together with a camera-image-based target-tracking algorithm for the robot. In the algorithm, the heading direction vector of the camera system is obtained from the position error between the center of the view finder and the center of the object in the camera image. Using this heading direction vector, the desired pan and tilt angles for target tracking and the desired roll angle for stabilizing the camera image are obtained through inverse kinematics. The algorithm has been validated with a virtual simulation model based on MATLAB and ADAMS by checking that the robot follows the target motion and by monitoring the virtual image error of the view finder.
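
As a minimal illustration of the image-error-to-angle step described in this abstract, the following Python sketch forms a camera-frame heading direction vector from the pixel error between the view-finder center and the target center and converts it to pan/tilt angles. The pinhole model, axis conventions, function names, and numbers are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pan_tilt_from_image_error(target_px, center_px, focal_length_px):
    """Heading direction vector from the pixel error between the view-finder
    center and the target center, converted to pan/tilt angles (degrees).
    Pinhole model and axis conventions are illustrative assumptions."""
    ex = target_px[0] - center_px[0]          # + to the right in the image
    ey = target_px[1] - center_px[1]          # + downward in the image
    # Heading vector in the camera frame (x right, y down, z forward)
    heading = np.array([ex, ey, focal_length_px], dtype=float)
    heading /= np.linalg.norm(heading)
    pan = np.arctan2(heading[0], heading[2])                       # about the vertical axis
    tilt = -np.arctan2(heading[1], np.hypot(heading[0], heading[2]))  # about the lateral axis
    return np.degrees(pan), np.degrees(tilt)

# Example: target detected 40 px right of and 25 px above the image center
print(pan_tilt_from_image_error((360, 215), (320, 240), focal_length_px=800.0))
```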

An Analysis of Radiative Observation Environment for Korea Meteorological Administration (KMA) Solar Radiation Stations based on 3-Dimensional Camera and Digital Elevation Model (DEM) (3차원 카메라와 수치표고모델 자료에 따른 기상청 일사관측소의 복사관측환경 분석)

  • Jee, Joon-Bum;Zo, Il-Sung;Lee, Kyu-Tae;Jo, Ji-Young
    • Atmosphere, v.29 no.5, pp.537-550, 2019
  • To analyze the observation environment of solar radiation stations operated by the Korea Meteorological Administration (KMA), we analyzed the skyline, Sky View Factor (SVF), and solar radiation affected by the surrounding topography and artificial structures using a Digital Elevation Model (DEM), a 3D camera, and a solar radiation model. Shielding of solar energy within 25 km of each station was analyzed using 10 m resolution DEM data, and the skyline elevation and SVF of the surrounding environment were analyzed from images captured by the 3D camera. The solar radiation model was used to assess the contribution of the environment to solar radiation. Because the skyline elevation retrieved from the DEM differs from the actual environment, it was compared with the results obtained from the 3D camera. The skyline and SVF calculations showed that some stations are shielded by the surrounding environment at sunrise and sunset. For monthly accumulated solar radiation, the topographic effect derived from the 3D camera is more than 20 times larger than that derived from the DEM throughout the year. Because solar radiation is relatively low in winter, the shielding effect is largest in that season. For the annual accumulated global solar radiation, the difference calculated using the 3D camera was 176.70 MJ on average (about 7 days of solar radiation, assuming a daily accumulation of 26 MJ) and 439.90 MJ at maximum (about 17.5 days).
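
As a small aside on the SVF quantity used in this abstract, the sketch below applies one common isotropic-sky approximation that computes SVF from horizon (skyline) elevation angles sampled at equal azimuth steps. The formula choice and the example ridge are illustrative assumptions, not the paper's own procedure.

```python
import numpy as np

def sky_view_factor(horizon_elev_deg):
    """Isotropic-sky approximation of the Sky View Factor from horizon
    elevation angles sampled at equal azimuth steps.  SVF = 1 means a fully
    open sky; obstructions reduce it toward 0."""
    theta = np.radians(np.asarray(horizon_elev_deg, dtype=float))
    # Each azimuth sector contributes cos^2(horizon elevation) of visible sky
    return float(np.mean(np.cos(theta) ** 2))

# Example: a station with a 20-degree ridge to the east, otherwise open sky
elev = np.zeros(360)
elev[45:135] = 20.0
print(round(sky_view_factor(elev), 3))   # slightly below 1.0
```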

A Vision-based Position Estimation Method Using a Horizon (지평선을 이용한 영상기반 위치 추정 방법 및 위치 추정 오차)

  • Shin, Jong-Jin;Nam, Hwa-Jin;Kim, Byung-Ju
    • Journal of the Korea Institute of Military Science and Technology, v.15 no.2, pp.169-176, 2012
  • GPS (Global Positioning System) is widely used for the position estimation of an aerial vehicle. However, GPS may not be available due to hostile jamming or strategic reasons. A vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without any man-made landmarks, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the narrow field of view of the camera, two images with different camera views are used to estimate a position. The algorithm was tested using real infrared images acquired on the ground, and the experimental results show that the method can be used for estimating the position of an aerial vehicle.
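
To illustrate the horizon-matching idea in this abstract, the sketch below scores candidate positions by comparing an image-derived horizon elevation profile with DEM-predicted profiles and keeps the best match. The profile representation (elevation angle per azimuth bin) and the squared-error metric are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def best_candidate_position(observed_horizon, candidate_horizons):
    """Pick the candidate position whose DEM-predicted horizon profile best
    matches the horizon extracted from the infrared image.  Profiles are
    arrays of elevation angles over azimuth; the metric is an assumption."""
    best_pos, best_err = None, np.inf
    for pos, predicted in candidate_horizons.items():
        err = float(np.sum((np.asarray(observed_horizon) -
                            np.asarray(predicted)) ** 2))
        if err < best_err:
            best_pos, best_err = pos, err
    return best_pos, best_err

# Example with two hypothetical candidate positions on a coarse grid
observed = [2.0, 3.5, 5.0, 4.0]
candidates = {(127.1, 37.5): [2.1, 3.4, 5.2, 3.9], (127.2, 37.5): [0.5, 1.0, 1.5, 1.0]}
print(best_candidate_position(observed, candidates))
```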

Development of Real-time Flatness Measurement System of COF Film using Pneumatic Pressure (공압을 이용한 COF 필름의 실시간 위치 평탄도 측정 시스템 개발)

  • Kim, Yong-Kwan;Kim, JaeHyun;Lee, InHwan
    • Journal of the Korean Society of Manufacturing Process Engineers, v.20 no.2, pp.101-106, 2021
  • In this paper, an inspection system is developed in which pneumatic instruments stretch the film with compressed air, so that the curl problem can be overcome. When the pneumatic system is applied, a line scan camera should be used instead of an area camera because the COF surface forms an arc under the air pressure. The distance between the COF and the inspection camera must be kept constant to obtain a clear image, so the position of the COF has to be monitored in real time. Operating software was also developed that switches the pneumatic system on and off, determines the COF position using camera vision, displays the contour of the COF side view, and reports self-diagnosis results. The developed system was examined using an actual roll of COF, which confirms that it can be an effective device for inspecting COF rolls in process.
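
As a toy illustration of the real-time position monitoring mentioned in this abstract, the sketch below takes a side-view height profile of the arc-shaped COF surface, finds the apex, and reports whether the stand-off distance has drifted enough that the operating software should adjust the air pressure. The profile format, target distance, and tolerance are hypothetical values, not from the paper.

```python
def film_apex_offset(profile_mm, target_mm, tolerance_mm=0.5):
    """From a side-view height profile of the arc-shaped COF surface (a list
    of heights in mm), find the apex and its deviation from the target
    stand-off distance.  Values and threshold are illustrative assumptions."""
    apex = max(profile_mm)                  # highest point of the arc
    offset = apex - target_mm
    needs_adjustment = abs(offset) > tolerance_mm
    return apex, offset, needs_adjustment

# Example: profile rising to an apex of 12.3 mm against a 12.0 mm target
print(film_apex_offset([10.0, 11.2, 12.3, 11.1, 9.9], target_mm=12.0))
```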

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.4, pp.10-16, 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with both a large field of view (FOV) and a high resolution. In this paper, an imaging system is newly proposed to achieve high image quality in terms of precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For the object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Second, to find a mathematical mapping between the two images acquired from the different camera views, the matching matrix from multi-view camera geometry is calculated based on their image homography. Through this homography, the two images are finally registered to secure a large inspection FOV. Because an inspection system using multiple images from multiple cameras needs a very fast processing unit for real-time image matching, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a registered image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the results show the effectiveness of the proposed system and method.
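
For readers unfamiliar with homography-based registration, the sketch below shows the generic OpenCV workflow: match features between two views, estimate the homography with RANSAC, and warp one view into the other's frame. The file names and the ORB feature matching are illustrative assumptions; the paper derives the homography from Zhang-calibrated cameras and accelerates the matching with CUDA, neither of which is reproduced here.

```python
import cv2
import numpy as np

# Placeholder file names; the two views come from the beam-splitter cameras
img_left = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Match features between the two views (ORB + brute-force Hamming matching)
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img_left, None)
kp2, des2 = orb.detectAndCompute(img_right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Estimate the homography that maps the right view onto the left view
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right view into the left view's frame to form the registered image
registered = cv2.warpPerspective(img_right, H, img_left.shape[::-1])
```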

Real-time Tracking and Identification for Multi-Camera Surveillance System

  • Hong, Yo-Hoon;Song, Seung June;Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication, v.10 no.1, pp.16-22, 2018
  • This paper presents a solution for a personal profiling system based on user-oriented tracking. We introduce a new way to identify and track humans by using two types of cameras: a dome camera and a face camera. The dome camera has a wide view angle, so it is suitable for tracking human movement over a large area. However, it is difficult to identify a person using only the dome camera, because it sees the target only from above. Thus, the face camera is employed to obtain facial information for identifying a person. In addition, we propose a new mechanism to locate a person at a target location by using a grid-cell system. The result is a system capable of maintaining human identity and tracking human activity (movement) effectively.
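
To make the grid-cell localization idea concrete, the sketch below maps a tracked person's position in the dome-camera frame to a (row, col) cell. The grid dimensions and frame size are hypothetical; the paper does not specify its cell layout at this level of detail.

```python
def grid_cell(x, y, frame_width, frame_height, rows, cols):
    """Map a tracked image position from the dome camera to a (row, col)
    grid cell.  Grid and frame dimensions are illustrative assumptions."""
    col = min(int(x / frame_width * cols), cols - 1)
    row = min(int(y / frame_height * rows), rows - 1)
    return row, col

# Example: a 1920x1080 dome-camera frame divided into a 6x8 grid
print(grid_cell(1500, 300, 1920, 1080, rows=6, cols=8))   # -> (1, 6)
```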

Stabilization of Target Tracking with 3-axis Motion Compensation for Camera System on Flying Vehicle

  • Sun, Yanjie;Jeon, Dongwoon;Kim, Doo-Hyun
    • IEMEK Journal of Embedded Systems and Applications, v.9 no.1, pp.43-52, 2014
  • This paper presents a tracking system that uses images captured from a camera on a moving platform. A camera on an unmanned flying vehicle generally moves and shakes due to external factors such as wind and the ego-motion of the vehicle itself. This makes it difficult to track a target properly, and sometimes the target cannot be kept in the camera's view. To deal with this problem, we propose a new system for stable tracking of a target under such conditions. The tracking system combines target tracking with 3-axis camera motion compensation. We also simulate the motion of flying vehicles for efficient and safe testing. Our experimental results show that robustness and stability are improved with 3-axis motion compensation.
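
The sketch below shows one simple way to think about 3-axis motion compensation: the tracking-derived pan/tilt command is combined with the measured vehicle attitude so the camera line of sight stays on the target while the platform shakes. This angle-subtraction scheme is an illustrative assumption; the paper's actual compensator is not described at this level of detail in the abstract.

```python
def compensated_gimbal_command(track_pan, track_tilt,
                               vehicle_roll, vehicle_pitch, vehicle_yaw):
    """Combine tracking angles with vehicle attitude (all in degrees) so the
    camera stays on target despite platform motion.  Illustrative only."""
    pan_cmd = track_pan - vehicle_yaw        # cancel heading (yaw) motion
    tilt_cmd = track_tilt - vehicle_pitch    # cancel pitch motion
    roll_cmd = -vehicle_roll                 # keep the image horizon level
    return pan_cmd, tilt_cmd, roll_cmd

# Example: tracker wants pan 10, tilt -5 while the vehicle rolls and pitches
print(compensated_gimbal_command(10.0, -5.0, 3.0, 2.0, 1.5))
```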

Correction of Perspective Distortion Image Using Depth Information (깊이 정보를 이용한 원근 왜곡 영상의 보정)

  • Kwon, Soon-Kak;Lee, Dong-Seok
    • Journal of Korea Multimedia Society, v.18 no.2, pp.106-112, 2015
  • In this paper, we propose a method for correcting perspective distortion in a captured image. An image taken by a camera suffers perspective distortion that depends on the direction of the camera when objects are projected onto the image. The proposed method obtains the normal vector of the plane from depth information captured by a depth camera and calculates the camera direction from this normal vector. The method then corrects the perspective distortion to a front view by applying a rotation transformation to the image according to the camera direction. Compared with conventional methods such as correction of perspective distortion based on color information, the proposed method achieves a higher processing speed.
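
As a minimal sketch of the two geometric steps in this abstract, the code below fits a plane normal to depth-derived 3D points by least squares and then builds the rotation that aligns that normal with the camera's optical axis (i.e., the fronto-parallel view). Function names and conventions are assumptions; the paper's image warping and speed optimizations are not reproduced.

```python
import numpy as np

def plane_normal_from_depth(points_xyz):
    """Least-squares plane normal from an Nx3 array of 3D points
    back-projected from a depth image."""
    centered = points_xyz - points_xyz.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def rotation_to_front_view(normal, camera_axis=(0.0, 0.0, 1.0)):
    """Rotation aligning the plane normal with the optical axis (Rodrigues
    formula); the degenerate antiparallel case is not handled here."""
    a = np.asarray(normal, float)
    b = np.asarray(camera_axis, float)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    s = np.linalg.norm(v)
    if s < 1e-9:
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
```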

Assembling three one-camera images for three-camera intersection classification

  • Marcella Astrid;Seung-Ik Lee
    • ETRI Journal, v.45 no.5, pp.862-873, 2023
  • Determining whether an autonomous self-driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three-camera model, which would enable us to more easily compile a variety of training data to endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) of combining the information from three cameras. Extensive pedestrian-view intersection classification experiments show that our feature fusion model provides an area under the curve and F1-score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three- and one-camera models.
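
To show what "feature fusion" of three camera views can look like in practice, the sketch below encodes the front, left, and right views with a shared encoder and concatenates the per-view features before a binary intersection classifier. The layer sizes and architecture are illustrative assumptions, not the model reported in the paper.

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Sketch of three-camera feature fusion: encode each view separately,
    concatenate the features, then classify intersection / not-intersection.
    Layer sizes are illustrative assumptions."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared per-view encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.classifier = nn.Linear(3 * feat_dim, 2)

    def forward(self, front, left, right):
        feats = [self.encoder(v) for v in (front, left, right)]
        return self.classifier(torch.cat(feats, dim=1))

# Example forward pass with dummy 128x128 views from the three cameras
model = FeatureFusionClassifier()
views = [torch.randn(4, 3, 128, 128) for _ in range(3)]
print(model(*views).shape)   # -> torch.Size([4, 2])
```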

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea, v.16 no.3, pp.1-10, 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system in which a camera mounted on the subject's head captures the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points were captured by an image-processing board and used to calculate the 3-dimensional position and orientation of the camera. A camera calibration method that provides accurate extrinsic camera parameters is proposed; it has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera position is about 0.53 cm, the angular errors of the camera orientation are less than $0.55^{\circ}$, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.
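
As a minimal sketch of the third calibration step described above, the code below recovers the camera orientation and 3D position from a 3x4 DLT (projection) matrix once the intrinsic matrix K is known, using P = K [R | t] and camera center C = -Rᵀt. Sign and orthogonality cleanup are omitted, and the function name is an assumption, not the paper's implementation.

```python
import numpy as np

def extrinsics_from_dlt(P, K):
    """Recover orientation R and camera position C from a 3x4 DLT/projection
    matrix P and calibrated intrinsics K, using P = K [R | t].
    A minimal sketch; sign and orthogonality refinement are omitted."""
    Rt = np.linalg.inv(K) @ P          # [R | t] up to an overall scale
    scale = np.linalg.norm(Rt[:, 0])   # columns of R should have unit norm
    Rt /= scale
    R, t = Rt[:, :3], Rt[:, 3]
    C = -R.T @ t                       # camera position in world coordinates
    return R, C
```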
