• Title/Summary/Keyword: Virtual camera


The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for a Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.2 / pp.164-171 / 2010
  • We have studied port automation, which is demanded by the steep increase in the cost and complexity of freight handling. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera inherently suffers optical distortion and is sensitive to external light, weather, and shadow, but it is inexpensive and flexible for building a port-automation system, so we applied a CCD camera to the AGV for detecting and tracking the lane. To obtain a stable and accurate error signal, this paper proposes a new concept and algorithm in which the tracking error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented purely in software and are easy to apply to various autonomous systems. Because the computational load is light, the AGV can exploit the full performance of the CCD camera while leaving the CPU free for other tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and accurate lane tracking.
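The abstract does not spell out how the software sensors produce the lane error, but the VPSA idea can be sketched roughly as follows; the sensor positions, window width, and activation-weighted error formula here are assumptions for illustration only:

```python
import numpy as np

def vpsa_error(binary_row, centers, width=5):
    """Read a virtual photo-sensor array: each 'sensor' integrates the
    lane pixels in a small window of one image row, and the lateral
    error is the activation-weighted offset from the row center.
    (Hypothetical sketch; the paper's exact VPSA layout is not given.)"""
    activations = np.array([binary_row[c - width:c + width + 1].sum()
                            for c in centers], dtype=float)
    if activations.sum() == 0:
        return 0.0                      # no lane seen: zero error
    mid = (len(binary_row) - 1) / 2.0
    offsets = np.asarray(centers, dtype=float) - mid
    return float((activations * offsets).sum() / activations.sum())

# Synthetic row: lane marking (1s) left of the image center.
row = np.zeros(101, dtype=int)
row[20:30] = 1                          # lane pixels around column 25
sensors = [10, 25, 50, 75, 90]          # virtual sensor centers
err = vpsa_error(row, sensors)          # negative: lane is to the left
```

Because each "sensor" is just a window sum over one image row, the per-frame cost is a handful of additions, which matches the paper's claim of a light computational load.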

Useful Image Back-projection Properties in Cameras under Planar and Vertical Motion (평면 및 수직 운동하는 카메라에서 유용한 영상 역투영 속성들)

  • Kim, Minhwan;Byun, Sungmin
    • Journal of Korea Multimedia Society / v.25 no.7 / pp.912-921 / 2022
  • Autonomous vehicles equipped with cameras, such as robots, forklifts, or cars, are frequently found at industrial sites and in daily life. These cameras undergo planar motion because the vehicles usually move on a plane; the cameras on forklifts sometimes also move vertically. Cameras under planar and vertical motion provide useful properties for the horizontal and vertical lines that appear easily and frequently in everyday scenes. In this paper, several useful back-projection properties are suggested, which can be applied to images of horizontal or vertical lines captured by such a camera. The line images are back-projected onto a virtual plane that is parallel to the plane of motion and keeps the same orientation in the camera coordinate system regardless of camera motion. The back-projected lines on the virtual plane provide useful information about the corresponding world lines, such as line direction, the angle between two horizontal lines, the length ratio of two horizontal lines, and vertical line direction. Through experiments with simple planar polygons, we found that the back-projection properties were useful for correctly estimating the direction and angle of horizontal and vertical lines.
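The core operation, back-projecting an image point onto a virtual plane parallel to the motion plane, might look like the sketch below; the intrinsics, axis conventions (x right, y down, z forward), and plane height are assumed values, not from the paper:

```python
import numpy as np

def back_project_to_plane(pixel, K, plane_y=1.0):
    """Back-project an image point onto a virtual plane y = plane_y that is
    parallel to the camera's planar-motion plane (camera coordinates:
    x right, y down, z forward). Sketch under assumed conventions."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_y / ray[1]                # scale so the ray meets y = plane_y
    return ray * t

K = np.array([[500.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Two pixels on the image of one horizontal world line.
p1 = back_project_to_plane((320.0, 300.0), K)
p2 = back_project_to_plane((420.0, 300.0), K)
direction = p2 - p1                     # in-plane direction of the line
```

Because the virtual plane keeps a fixed orientation in the camera frame, the back-projected direction of a horizontal world line is recovered up to scale, which is the kind of property the paper exploits.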

A Camera Tracking System for Post Production of TV Contents (방송 콘텐츠의 후반 제작을 위한 카메라 추적 시스템)

  • Oh, Ju-Hyun;Nam, Seung-Jin;Jeon, Seong-Gyu;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.14 no.6 / pp.692-702 / 2009
  • Real-time virtual studios that once ran only on expensive workstations are now available on personal computers thanks to recent advances in graphics hardware. Nevertheless, in film and TV drama production, graphics are rendered off-line in the post-production stage, because real-time hardware still limits graphics quality. Software-based camera tracking methods that rely only on the source video require substantial computation and often give unstable results. To overcome this restriction, we propose a system named POVIS (post virtual imaging system), which stores camera motion data from sensors at shooting time, as conventional virtual studios do, and uses the data in the post-production stage. For seamless registration of graphics onto the camera video, precise zoom-lens calibration must precede post-production; a practical method using only two planar patterns is used in this work. We also present a method based on the Kalman filter to reduce the camera sensor error caused by mechanical mismatch. POVIS successfully tracked the camera in a documentary production and saved much processing time, whereas conventional methods failed for lack of features to track.
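The paper's Kalman-filter step for smoothing noisy sensor readings can be illustrated with a minimal scalar filter; the random-walk model and the noise values q and r below are illustrative assumptions, not the paper's tuning:

```python
def kalman_smooth(measurements, q=1e-4, r=0.04):
    """Scalar Kalman filter for a slowly varying camera-sensor reading
    (e.g., a pan angle). q: process noise, r: measurement noise.
    Values are illustrative, not from the paper."""
    x, p = measurements[0], 1.0         # initial state and covariance
    out = [x]
    for z in measurements[1:]:
        p += q                          # predict (random-walk model)
        k = p / (p + r)                 # Kalman gain
        x += k * (z - x)                # update toward measurement z
        p *= (1.0 - k)
        out.append(x)
    return out

# Jittery sensor readings around a true value of 1.0.
noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
smooth = kalman_smooth(noisy)
```

The gain k shrinks as the covariance p settles, so later estimates lean more on the filter's own state than on each jittery measurement, which is exactly the behavior wanted for mechanical sensor mismatch.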

Performance Simulation of Various Feature-Initialization Algorithms for Forward-Viewing Mono-Camera-Based SLAM (전방 모노카메라 기반 SLAM 을 위한 다양한 특징점 초기화 알고리즘의 성능 시뮬레이션)

  • Lee, Hun;Kim, Chul Hong;Lee, Tae-Jae;Cho, Dong-Il Dan
    • Journal of Institute of Control, Robotics and Systems / v.22 no.10 / pp.833-838 / 2016
  • This paper presents a performance evaluation of various feature-initialization algorithms for forward-viewing mono-camera-based simultaneous localization and mapping (SLAM), specifically in indoor environments. For mono-camera-based SLAM, the position of a feature point cannot be known from a single view; it must therefore be estimated by a feature-initialization method from measurements at multiple viewpoints. The accuracy of the feature-initialization method directly affects the accuracy of the SLAM system. In this study, four feature-initialization algorithms are evaluated in simulation: linear triangulation, depth-parameterized linear triangulation, weighted nearest-point triangulation, and particle-filter-based depth estimation. In the simulation, virtual feature positions are estimated while a virtual robot carrying a virtual forward-viewing mono-camera moves forward. The results show that the linear triangulation method provides the best results in terms of feature-position estimation accuracy and computational speed.
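The winning method, linear triangulation, is the standard two-view DLT construction; a minimal sketch under an assumed forward-moving camera (the paper's exact simulation setup is not given in the abstract):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Two-view linear (DLT) triangulation: stack the cross-product
    constraints x x (P X) = 0 from both views and solve by SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                 # dehomogenize

# Forward-moving camera: identity pose, then 0.5 m along +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [-0.5]])])
X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_linear(P1, P2, x1, x2)
```

With noiseless projections the SVD recovers the point exactly; in the forward-motion case studied by the paper, points near the optical axis have a small effective baseline, which is why initialization accuracy is worth comparing at all.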

Full-field Distortion Measurement of Virtual-reality Devices Using Camera Calibration and Probe Rotation (카메라 교정 및 측정부 회전을 이용한 가상현실 기기의 전역 왜곡 측정법)

  • Yang, Dong-Geun;Kang, Pilseong;Ghim, Young-Sik
    • Korean Journal of Optics and Photonics / v.30 no.6 / pp.237-242 / 2019
  • A compact virtual-reality (VR) device with a wide field of view gives users a more realistic experience and a comfortable fit, but VR lens distortion is inevitable, and the amount of distortion must be measured before it can be corrected. In this paper, we propose two full-field distortion-measurement methods that account for the characteristics of VR devices. The first measures distortion from multiple images using camera calibration, a well-known technique for correcting camera-lens distortion. The second measures lens distortion at multiple measurement points by rotating a camera. Our proposed methods are verified by measuring the lens distortion of Google Cardboard, as a representative commercial VR device, and comparing our measurement results with a simulation using the nominal values.
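Calibration-based distortion measurement typically reports a radial model; a sketch using the common Brown polynomial model (the coefficients below are made up for illustration, not Cardboard's values):

```python
def radial_distort(x, y, k1, k2=0.0):
    """Brown radial-distortion model used in camera calibration:
    normalized point (x, y) is scaled by (1 + k1 r^2 + k2 r^4)."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def distortion_percent(x, y, k1, k2=0.0):
    """Full-field distortion figure: radial displacement of the distorted
    point relative to its ideal radius, in percent."""
    xd, yd = radial_distort(x, y, k1, k2)
    r = (x * x + y * y) ** 0.5
    rd = (xd * xd + yd * yd) ** 0.5
    return 100.0 * (rd - r) / r

# Pincushion-style distortion (k1 > 0) grows toward the field edge.
center = distortion_percent(0.1, 0.0, k1=0.2)   # near the optical axis
edge = distortion_percent(0.5, 0.5, k1=0.2)     # toward the field corner
```

Evaluating the percentage over a grid of field points is what makes the measurement "full-field" rather than a single on-axis number.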

A Real-time Plane Estimation in Virtual Reality Using a RGB-D Camera in Indoors (RGB-D 카메라를 이용한 실시간 가상 현실 평면 추정)

  • Yi, Chuho;Cho, Jungwon
    • Journal of Digital Convergence / v.14 no.11 / pp.319-324 / 2016
  • For robot and Augmented Reality applications that use a camera, plane estimation is a very important technology. An RGB-D camera can obtain three-dimensional measurements even on a flat surface that has no texture information; however, processing the point-cloud data of the image requires an enormous amount of computation. Furthermore, the number of currently observed planes is not known in advance, and estimating each three-dimensional plane requires additional operations. In this paper, we propose a real-time method that automatically determines the number of planes and estimates each three-dimensional plane from the continuous data of an RGB-D camera. In experiments, the proposed method was approximately 22 times faster than processing the entire data.
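The basic building block, fitting one plane to RGB-D points, is a small least-squares problem; this sketch fits a plane by SVD to a synthetic cloud (the paper's incremental, multi-plane method is more involved and not reproduced here):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: the normal is the
    right singular vector of the centered points with the smallest
    singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d                    # plane: normal . p + d = 0

# Noiseless samples of the plane z = 1 (normal along +/- z).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
pts = np.column_stack([xy, np.ones(50)])
n, d = fit_plane(pts)
```

Running such a fit on every frame's full cloud is what becomes expensive; the paper's speedup comes from reusing estimates across the camera's continuous data instead of refitting from scratch.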

Depth-based Mesh Modeling for Virtual Environment Generation (가상 환경 생성을 위한 깊이 기반 메쉬 모델링)

  • 이원우;우운택
    • Proceedings of the IEEK Conference / 2003.11b / pp.111-114 / 2003
  • In this paper, we propose a depth-based mesh modeling method for generating virtual environments. The proposed algorithm constructs a mesh model from the unorganized point cloud obtained from a multi-view camera. We separate the points belonging to objects from the background and then apply triangulation to each object and to the background. Since the objects and the background are modeled independently, an effective virtual environment can be constructed. The proposed modeling method is applicable to entertainment, such as movies and video games, and to effective virtual-environment generation.
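Depth-based triangulation of a single segment (object or background) can be sketched by connecting each 2x2 pixel neighborhood of the depth map with two triangles; the intrinsics here are assumed values, and the paper's object/background separation step is omitted:

```python
import numpy as np

def depth_to_mesh(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Turn a depth map into a triangle mesh: un-project each pixel with
    pinhole intrinsics, then connect every 2x2 neighborhood with two
    triangles. Intrinsics are assumed illustrative values."""
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    verts = np.stack([(us - cx) * depth / fx,
                      (vs - cy) * depth / fy,
                      depth], axis=-1).reshape(-1, 3)
    tris = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, tris

depth = np.full((3, 4), 2.0)            # flat 3x4 depth patch, 2 m away
verts, tris = depth_to_mesh(depth)
```

Triangulating objects and background separately, as the paper does, avoids stretching triangles across the depth discontinuity at object silhouettes.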


Coordinate Measuring Technique based on Optical Triangulation using the Two Images (두장의 사진을 이용한 광삼각법 삼차원측정)

  • 양주웅;이호재
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.76-80 / 2000
  • This paper describes a coordinate-measuring technique based on optical triangulation using two images. To overcome the drawback of a structured-light system, which measures coordinates point by point, the light source is replaced by a second CCD camera whose pixels are treated as virtual light sources. The overall geometry of the two camera images is modeled, and from this geometry a formula for calculating the 3D coordinates of a specified point is derived. In short, the ray from a virtual light source is reflected at the measuring point and forms the corresponding image point in the other image. Simulation results verify the validity of the formula. This method makes it possible to acquire multiple points from a single pair of photographs.
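Intersecting the ray from a "virtual light source" pixel with the ray to the corresponding image point is a two-ray triangulation; a common sketch is the midpoint of the closest points between the two rays (the paper's derived formula may differ in detail):

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Intersect two (possibly skew) rays c + t*d by finding the pair of
    closest points on them and returning the midpoint."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    e = d1 @ d2                         # cosine between ray directions
    t1 = (b @ d1 - (b @ d2) * e) / (1.0 - e * e)
    t2 = ((b @ d1) * e - b @ d2) / (1.0 - e * e)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Rays from two camera centers toward the same world point.
target = np.array([0.5, 0.2, 3.0])
c1 = np.zeros(3)
c2 = np.array([1.0, 0.0, 0.0])
P = midpoint_triangulate(c1, target - c1, c2, target - c2)
```

Because every pixel pair yields its own ray intersection, one pair of photographs measures many surface points at once, which is the advantage over point-by-point structured light.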


A Gaze Tracking based on the Head Pose in Computer Monitor (얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적)

  • 오승환;이희영
    • Proceedings of the IEEK Conference / 2002.06c / pp.227-230 / 2002
  • In this paper we concentrate on the overall gaze direction based on head pose, for human-computer interaction. To determine a user's gaze direction in an image, facial features must be extracted accurately. We binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric structure of the face. To determine the head orientation, we create an imaginary plane, which we call the virtual facial plane, on the lines connecting the features of the real face and the pinhole of the camera. The pose of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found from the surface normal vector of the virtual facial plane. This study, which uses a popular PC camera, will contribute to the practical use of gaze-tracking technology.
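The final step, taking the gaze as the normal of the virtual facial plane, reduces to a cross product once the three feature positions are known; the coordinates below are made-up frontal-face values under an assumed x-right, y-down, z-forward camera frame:

```python
import numpy as np

def face_normal(left_eye, right_eye, mouth):
    """Gaze direction as the unit surface normal of the virtual facial
    plane spanned by the two eyes and the mouth, oriented so a frontal
    face points back toward the camera."""
    n = np.cross(mouth - left_eye, right_eye - left_eye)
    return n / np.linalg.norm(n)

# Frontal face in camera coordinates (x right, y down, z forward):
# the normal should point back toward the camera, i.e. along -z.
le = np.array([-0.03, 0.00, 0.5])
re = np.array([0.03, 0.00, 0.5])
mo = np.array([0.00, 0.07, 0.5])
gaze = face_normal(le, re, mo)
```

Tilting or turning the head moves the three features out of a fronto-parallel plane, and the same cross product then yields the rotated gaze direction.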


Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for streaming 360-degree panoramic video from multiple virtual cameras in real time. The proposed technique consists of generating 360-degree panoramic video data through ORB feature-point detection, texture transformation, panoramic video compression, and RTSP-based video streaming. In particular, the panorama generation and texture transformation are accelerated with CUDA for computationally heavy steps such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frame rate (fps) of the transmitted 360-degree panoramic video and verified that the technique sustains at least 30 fps at 4K output resolution, indicating that it can both generate and transmit 360-degree panoramic video data in real time.
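A 360-degree panorama is conventionally stored as an equirectangular image; mapping a virtual camera's viewing direction to panorama pixel coordinates can be sketched as below (the canvas size and axis conventions are assumptions, and the paper's CUDA pipeline is not reproduced):

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction to equirectangular panorama pixel
    coordinates: longitude -> column, latitude -> row (y points down,
    z is straight ahead; assumed conventions)."""
    lon = math.atan2(x, z)              # -pi..pi, 0 = straight ahead
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (lat / math.pi + 0.5) * height
    return u, v

# Looking straight ahead lands at the center of a 4K-wide canvas.
u, v = dir_to_equirect(0.0, 0.0, 1.0, 3840, 1920)
```

Stitching amounts to evaluating this mapping (plus blending) for every output pixel, which is why the paper pushes it onto the GPU to hold 30 fps at 4K.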