• Title/Summary/Keyword: multi-cameras

Search Results: 257

Calibrating Stereoscopic 3D Position Measurement Systems Using Artificial Neural Nets (3차원 위치측정을 위한 스테레오 카메라 시스템의 인공 신경망을 이용한 보정)

  • Do, Yong-Tae;Lee, Dae-Sik;Yoo, Seog-Hwan
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.6
    • /
    • pp.418-425
    • /
    • 1998
  • Stereo cameras are the most widely used sensing systems for automated machines, including robots, to interact with their three-dimensional (3D) working environments. The position of a target point in 3D world coordinates can be measured with stereo cameras, and camera calibration is an important preliminary step for the task. Existing camera calibration techniques fall into two large categories: linear and nonlinear. Linear techniques are simple but somewhat inaccurate, while nonlinear ones require a modeling process to compensate for lens distortion and a rather complicated procedure to solve the nonlinear equations. In this paper, a method employing a neural network is described that tackles the problems that arise when existing techniques are applied, and the results are reported. In particular, it is shown experimentally that by using the function approximation capability of a multi-layer neural network, trained with the back-propagation (BP) algorithm to learn the error pattern of a linear technique, the measurement accuracy can be increased simply and efficiently.
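
The core idea, correcting a simple linear calibration with a network that has learned its error pattern, can be sketched as below. This is only an illustrative reading of the abstract, not the paper's implementation: the toy projection matrices, synthetic calibration points, and the scikit-learn MLPRegressor (a back-propagation-trained multi-layer network) are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def linear_triangulate(uv_l, uv_r, P_l, P_r):
    """DLT triangulation of one point from a stereo pair and two 3x4 projection matrices."""
    A = np.vstack([uv_l[0] * P_l[2] - P_l[0],
                   uv_l[1] * P_l[2] - P_l[1],
                   uv_r[0] * P_r[2] - P_r[0],
                   uv_r[1] * P_r[2] - P_r[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic stand-in for a real calibration data set: known 3D target points and
# slightly perturbed image observations in the left and right cameras.
rng = np.random.default_rng(0)
X_true = rng.uniform(-1.0, 1.0, size=(200, 3)) + [0.0, 0.0, 5.0]
P_l = np.hstack([np.eye(3), [[0.1], [0.0], [0.0]]])
P_r = np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])
uv_l = np.array([project(P_l, X) for X in X_true]) + rng.normal(0, 1e-3, (200, 2))
uv_r = np.array([project(P_r, X) for X in X_true]) + rng.normal(0, 1e-3, (200, 2))

# Step 1: linear (DLT) estimate for every calibration point.
X_lin = np.array([linear_triangulate(l, r, P_l, P_r) for l, r in zip(uv_l, uv_r)])

# Step 2: a back-propagation-trained MLP learns the residual error pattern of the linear method.
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000).fit(X_lin, X_true - X_lin)

# Corrected measurement = linear estimate + learned correction.
X_corrected = X_lin + net.predict(X_lin)
```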


Multiple Camera-Based Correspondence of Ground Foot for Human Motion Tracking (사람의 움직임 추적을 위한 다중 카메라 기반의 지면 위 발의 대응)

  • Seo, Dong-Wook;Chae, Hyun-Uk;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.8
    • /
    • pp.848-855
    • /
    • 2008
  • In this paper, we describe correspondence among multiple images taken by multiple cameras. Correspondence among multiple views is an interesting problem that often appears in applications such as visual surveillance and gesture recognition. We use the principal axis and the ground-plane homography to estimate the position of a person's feet. The principal axis is derived from the silhouette region of the person, obtained by subtracting predetermined multiple background models from the current image containing the moving person. The ground-plane homography is calculated from landmarks on the ground plane in 3D space, so it expresses the relation between common points in different views. A person's foot occupies exactly the same position in 3D space in every view; in this paper we represent it by an intersection, which occurs where the principal axis in one image crosses the ground-plane axis transformed from another image. The position of this intersection, however, differs depending on the camera view. We therefore construct the correspondence between the intersection in the current image and the intersection transformed from the other image by the homography. Corresponding points must lie within a short distance of each other when measured on the top-view plane, and we track a person using these corresponding points on the ground plane. Experimental results show that the proposed algorithm detects people for correspondence-based tracking with an accuracy of almost 90%.
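
A rough sketch of the correspondence step as described in the abstract, not the authors' code: the landmark coordinates, principal-axis endpoints, and OpenCV-based homography estimation below are illustrative assumptions.

```python
import numpy as np
import cv2

# Ground-plane homography from >= 4 corresponding floor landmarks in views A and B.
pts_B = np.float32([[100, 400], [500, 410], [480, 200], [120, 190]])
pts_A = np.float32([[ 80, 420], [520, 430], [470, 230], [110, 220]])
H_BA, _ = cv2.findHomography(pts_B, pts_A)

def transfer(points_xy, H):
    """Map image points from view B's ground plane into view A with the homography."""
    p = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(p, H).reshape(-1, 2)

def line_intersection(p1, p2, q1, q2):
    """Intersection of line p1-p2 with line q1-q2, computed in homogeneous coordinates."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*q1, 1.0], [*q2, 1.0])
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Assumed principal-axis endpoints (head, foot) of the same person in each view.
axis_A = [(300, 100), (310, 380)]
axis_B = [(260, 120), (250, 400)]

# Foot estimate in view A: A's principal axis crossed with B's axis transferred by H_BA.
foot_A = line_intersection(*axis_A, *transfer(axis_B, H_BA))

# The correspondence is accepted when the foot estimates obtained from both directions
# lie within a short distance of each other on the common (top-view) ground plane.
```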

Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1477-1486
    • /
    • 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth image synthesis. We acquire a reliable depth image with a TOF depth camera and extract the parameters of the reference-view cameras. Once the position of the virtual view-point camera is defined, the optimal reference-view camera is selected by considering its position and its distance from the virtual view-point camera. Taking a reference-view camera located on the opposite side of the primary reference view as a sub reference view, we generate the virtual view-point depth image and compensate its occlusion boundaries using the depth image of the sub reference view. Remaining hole boundaries are then filled with the minimum values of their neighborhoods, yielding the final virtual view-point depth image, from which the CGH is generated. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
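
The hole-compensation step ("remaining hole boundaries are filled with the minimum values of their neighborhoods") can be illustrated with a small sketch; the iterative window-based filling below is an assumption about the details, not the paper's exact procedure.

```python
import numpy as np

def fill_holes_with_neighbourhood_min(depth, hole_value=0.0, ksize=3, max_iter=50):
    """Iteratively fill hole pixels with the minimum valid depth in a ksize x ksize window."""
    d = depth.astype(np.float32).copy()
    r = ksize // 2
    for _ in range(max_iter):
        holes = np.argwhere(d == hole_value)
        if holes.size == 0:
            break
        filled_any = False
        for y, x in holes:
            win = d[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            valid = win[win != hole_value]
            if valid.size:                 # take the minimum neighbouring depth value
                d[y, x] = valid.min()
                filled_any = True
        if not filled_any:                 # isolated region with no valid neighbours
            break
    return d

# Toy warped depth map; 0 marks pixels left empty after warping to the virtual view-point.
warped = np.array([[5., 5., 5., 6.],
                   [5., 0., 0., 6.],
                   [5., 0., 7., 6.],
                   [5., 5., 6., 6.]])
print(fill_holes_with_neighbourhood_min(warped))
```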

Optimal Camera Placement Learning of Multiple Cameras for 3D Environment Reconstruction (3차원 환경 복원을 위한 다수 카메라 최적 배치 학습 기법)

  • Kim, Ju-hwan;Jo, Dongsik
    • Smart Media Journal
    • /
    • v.11 no.9
    • /
    • pp.75-80
    • /
    • 2022
  • Recently, research and development on immersive virtual reality (VR) technology that provides a realistic experience has been widely conducted. To give VR participants a realistic experience, virtual environments should be built as highly realistic environments using 3D reconstruction. In this paper, to acquire 3D information of a real space with multiple cameras during the reconstruction process, we propose a novel method of optimal camera placement that minimizes the distortion of the 3D information. With our approach, real 3D information can be obtained with minimal error during environment reconstruction, making it possible to provide a more immersive experience in the created virtual environment.
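
The abstract does not detail the learning method itself, so the sketch below only illustrates the underlying placement problem: random candidate placements of four cameras are scored by a crude two-camera-coverage proxy for reconstruction quality, and the best one is kept. The room size, field of view, camera count, and scoring rule are all assumptions, not the paper's approach.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.uniform(-1.0, 1.0, size=(200, 2))   # sample points of the space to reconstruct (assumed 2D top view)

def visible(cam_xy, cam_heading, pts, fov=np.radians(60), max_range=3.0):
    """Crude visibility test: points inside the camera's field of view and range."""
    v = pts - cam_xy
    dist = np.linalg.norm(v, axis=1)
    ang = np.arctan2(v[:, 1], v[:, 0]) - cam_heading
    ang = np.abs((ang + np.pi) % (2 * np.pi) - np.pi)   # wrap angle difference to [-pi, pi]
    return (dist < max_range) & (ang < fov / 2)

def score(cams, pts):
    """Fraction of points seen by at least two cameras (a proxy for low reconstruction error)."""
    seen = sum(visible(xy, h, pts) for xy, h in cams)
    return np.mean(seen >= 2)

# Random search over placements of four cameras; the paper presumably replaces this
# brute-force loop with a learned placement strategy.
best, best_score = None, -1.0
for _ in range(1000):
    cams = [(rng.uniform(-2.0, 2.0, size=2), rng.uniform(0.0, 2 * np.pi)) for _ in range(4)]
    s = score(cams, targets)
    if s > best_score:
        best, best_score = cams, s
```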

Development of Vision System Model for Manipulator's Assembly Task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • 장완식
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.6 no.2
    • /
    • pp.10-18
    • /
    • 1997
  • This paper presents the development of real-time estimation and control details for a computer vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of target points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes the known kinematics of the 4-axis Scorbot manipulator to accommodate unknown relative camera position and orientation, among other factors. This model uses six uncertainty-of-view parameters estimated by an iteration method. The method is tested experimentally in two ways: first, the validity of the estimation model is tested with a self-built test model; second, the practicality of the presented control method is verified by performing a 4-axis manipulator assembly task. The results show that the control scheme is precise and robust. This feature can open the door to a range of multi-axis robot applications such as deburring and welding.
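
The "six uncertainty-of-view parameters estimated by an iteration method" suggests an iterative least-squares fit; the sketch below shows a generic Gauss-Newton estimation of a simplified six-parameter view model. The model, synthetic point data, and parameter values are assumptions, not the paper's formulation.

```python
import numpy as np

def project(params, pts3d):
    """Simplified six-parameter view model (scale, three rotations, image offsets) used
    here as a stand-in for the paper's camera/manipulator model."""
    s, rx, ry, rz, tx, ty = params
    cx, cy, cz = np.cos([rx, ry, rz]); sx, sy, sz = np.sin([rx, ry, rz])
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    P = pts3d @ R.T
    return s * P[:, :2] + [tx, ty]          # orthographic projection of the rotated points

def estimate_view_params(pts3d, uv_obs, p0, iters=20, eps=1e-6):
    """Gauss-Newton iteration: the six view parameters that best explain the observed cues."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        r = (project(p, pts3d) - uv_obs).ravel()
        J = np.empty((r.size, p.size))
        for j in range(p.size):             # numerical Jacobian, one parameter at a time
            d = np.zeros_like(p); d[j] = eps
            J[:, j] = ((project(p + d, pts3d) - uv_obs).ravel() - r) / eps
        p -= np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# Assumed data: known 3D cue points on the manipulator and their observed image positions.
rng = np.random.default_rng(1)
pts3d = rng.uniform(-0.5, 0.5, size=(12, 3))
true_p = [900.0, 0.1, -0.2, 0.3, 320.0, 240.0]
uv_obs = project(true_p, pts3d) + rng.normal(0, 0.5, (12, 2))
p_hat = estimate_view_params(pts3d, uv_obs, p0=[850.0, 0.05, -0.15, 0.25, 310.0, 230.0])
```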


A Study on the Design of a Tubular Type Linear Actuator for Ultra-Small Camera Module (초소형 카메라 모듈을 위한 Tubular형 선형 액츄에이터의 설계에 관한 연구)

  • Jung, In-Soung;Hur, Jin;Sung, Ha-Gyeong
    • Proceedings of the KIEE Conference
    • /
    • 2007.04c
    • /
    • pp.147-149
    • /
    • 2007
  • This paper presents the design of an ultra-small actuator for the auto-focus or optical-zoom function of mobile phone cameras. The actuator under consideration is of tubular type, with a large hole in its center to accommodate the optical lenses. The stator consists of 3-phase windings, and the mover is designed as an Nd injection-molded permanent magnet (PM) magnetized with a multi-pole pattern. From a numerical analysis that takes the magnetization pattern of the PM into account, the thrust of the designed actuator is about 40 mN with an outer diameter of 7 mm.


A study on the rigid body placement task of robot system based on the computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method uses a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed from a model that generalizes the known kinematics of a 4-axis SCARA robot to accommodate unknown relative camera position and orientation, among other factors. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of multi-axis robot applications such as assembly and welding.
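
The step in which "the joint angles of the robot are estimated by an iteration method" can be pictured as a Newton-style iteration in image space. The two-link arm, view mapping, and numeric values below are assumptions made only to keep the sketch runnable, not the paper's kinematic model.

```python
import numpy as np

def predict_uv(theta, view_params):
    """Image-plane position of the arm tip from joint angles: an assumed 2-link planar
    arm followed by a simple estimated view mapping (scale + image offset)."""
    l1, l2 = 0.3, 0.25                            # assumed link lengths [m]
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    s, tx, ty = view_params
    return np.array([s * x + tx, s * y + ty])

def solve_joint_angles(uv_target, theta0, view_params, iters=30, eps=1e-6):
    """Newton-style iteration that drives the predicted image position to the target."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        r = predict_uv(theta, view_params) - uv_target
        J = np.empty((2, theta.size))
        for j in range(theta.size):               # numerical Jacobian in image space
            d = np.zeros_like(theta); d[j] = eps
            J[:, j] = (predict_uv(theta + d, view_params) - predict_uv(theta, view_params)) / eps
        theta -= np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# Example: joint angles that place the tip at pixel (420, 300) under the assumed view mapping.
print(solve_joint_angles(np.array([420.0, 300.0]), theta0=[0.3, 0.5], view_params=(800.0, 100.0, 80.0)))
```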


Mobile Robot Localization using Ubiquitous Vision System (시각기반 센서 네트워크를 이용한 이동로봇의 위치 추정)

  • Dao, Nguyen Xuan;Kim, Chi-Ho;You, Bum-Jae
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2780-2782
    • /
    • 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages; for example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as a vision agent (VA). Each VA performs target segmentation based on color and motion information as well as visual tracking of multiple objects. Our modified identified contract net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol improves the scalability and modularity of the system because it is independent of the number of VAs and requires no calibration. Furthermore, the handover between VAs using the ICN is seamless. Experimental results show the robustness of the solution over a widespread area, and its performance in indoor environments shows the feasibility of the proposed solution in real time.
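
The contract-net style handover can be pictured as an announce-bid-award exchange between vision agents. The sketch below is a heavily simplified assumption about that exchange (agent structure, bid contents, and visibility scores are invented for illustration) and is not the authors' ICN protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Bid:
    agent_id: str
    visibility: float                    # e.g. tracker confidence from colour/motion segmentation

@dataclass
class VisionAgent:
    agent_id: str
    visibility: dict                     # robot_id -> how well this VA currently sees it (stand-in)
    tracking: set = field(default_factory=set)

    def bid(self, robot_id):
        return Bid(self.agent_id, self.visibility.get(robot_id, 0.0))

def announce_handover(robot_id, announcer, agents):
    """Contract-net style announcement: collect bids from the other VAs and award the task."""
    bids = [a.bid(robot_id) for a in agents if a is not announcer]
    winner = max(bids, key=lambda b: b.visibility, default=None)
    if winner and winner.visibility > 0.0:
        announcer.tracking.discard(robot_id)
        next(a for a in agents if a.agent_id == winner.agent_id).tracking.add(robot_id)
    return winner

# Toy handover: VA1 is losing sight of the robot, VA2 currently sees it best and takes over.
va1 = VisionAgent("VA1", {"robot": 0.1}, {"robot"})
va2 = VisionAgent("VA2", {"robot": 0.8})
va3 = VisionAgent("VA3", {"robot": 0.3})
print(announce_handover("robot", va1, [va1, va2, va3]).agent_id)   # -> "VA2"
```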


Multi-Camera Vision System for Tele-Robotics

  • Park, Changhwn;Kohtaro Ohba;Park, Kyihwan;Sayaka Odano;Hisayaki Sasaki;Nakyoung Chong;Tetsuo Kotoku;Kazuo Tanie
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.25.6-25
    • /
    • 2001
  • A new monitoring system is proposed to give direct visual information of the remote site when working with a tele-operation system. To imitate the behavior of a human inspecting an object, multiple cameras with different viewpoints are attached around the robot hand and are switched on and off according to the operator's motion, such as joystick manipulation or head movement. The performance of the system is evaluated by comparison experiments among a single-camera (SC) vision system, a head-mounted display (HMD) system, and the proposed multiple-camera (MC) vision system, with a task given to several examinees. The reality, depth perception, and controllability are evaluated for the examinees ...
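
The switching behavior (cameras selected according to the operator's commanded motion) might look like the following sketch; the camera layout, joystick mapping, and dead-zone value are assumptions made for illustration, not the paper's implementation.

```python
import math

# Assumed layout: camera index -> viewing direction around the robot hand, in degrees.
CAMERA_HEADINGS = {0: 0.0, 1: 90.0, 2: 180.0, 3: 270.0}

def select_camera(joystick_x, joystick_y, current_cam, deadzone=0.2):
    """Pick the camera whose heading is closest to the commanded motion direction."""
    if math.hypot(joystick_x, joystick_y) < deadzone:
        return current_cam                       # no clear command: keep the current view
    heading = math.degrees(math.atan2(joystick_y, joystick_x)) % 360.0
    return min(CAMERA_HEADINGS,
               key=lambda cam: min(abs(CAMERA_HEADINGS[cam] - heading),
                                   360.0 - abs(CAMERA_HEADINGS[cam] - heading)))

# Example: pushing the joystick mostly upward switches to the 90-degree camera.
print(select_camera(0.1, 0.9, current_cam=0))    # -> 1
```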


Real-time Tracking and Identification for Multi-Camera Surveillance System

  • Hong, Yo-Hoon;Song, Seung June;Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.16-22
    • /
    • 2018
  • This paper presents a solution for a personal profiling system based on user-oriented tracking. We introduce a new way to identify and track humans using two types of cameras: a dome camera and a face camera. The dome camera has a wide viewing angle, which makes it suitable for tracking human movement over a large area; however, it is difficult to identify a person with the dome camera alone because it sees the target only from above. A face camera is therefore employed to obtain facial information for identification. In addition, we propose a new mechanism for locating a human at a target location using a grid-cell system. The result is a system capable of maintaining human identity and tracking human activity (movement) effectively.
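
The grid-cell mechanism for reporting a person's location can be sketched as a simple mapping from ground-plane coordinates to cell indices; the cell size, grid dimensions, and origin below are assumed values rather than the paper's configuration.

```python
def to_grid_cell(x_m, y_m, origin=(0.0, 0.0), cell_size_m=0.5, cols=20, rows=20):
    """Map a ground-plane position in metres to a (col, row) grid cell, clamped to the grid."""
    col = int((x_m - origin[0]) / cell_size_m)
    row = int((y_m - origin[1]) / cell_size_m)
    return min(max(col, 0), cols - 1), min(max(row, 0), rows - 1)

# Example: a person tracked by the dome camera at (3.2 m, 7.9 m) falls in cell (6, 15).
print(to_grid_cell(3.2, 7.9))
```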