• Title/Summary/Keyword: multi-view camera


Performance Improvement of Pedestrian Detection using a GM-PHD Filter (GM-PHD 필터를 이용한 보행자 탐지 성능 향상 방법)

  • Lee, Yeon-Jun;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.12
    • /
    • pp.150-157
    • /
    • 2015
  • Pedestrian detection has been widely researched as one of the key technologies for autonomous vehicles and accident prevention. Detection methods fall into two categories, camera-based and LIDAR-based. LIDAR-based methods offer a wide field of view and insensitivity to illumination changes, which camera-based methods lack. However, 3D LIDAR suffers from several problems, such as insufficient resolution for detecting distant pedestrians and a reduced detection rate in complex scenes due to segmentation errors and occlusion. In this paper, two methods using the GM-PHD filter are proposed to improve the performance of pedestrian detection algorithms based on 3D LIDAR. The first improves detection performance and object resolution by automatically accumulating points from previous frames onto current objects. The second further enhances the detection results by applying a GM-PHD filter modified to handle poorly classified multiple targets. A quantitative evaluation on road-environment data acquired during autonomous driving shows that the proposed methods substantially increase the performance of existing pedestrian detection algorithms.
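
As a rough illustration of the filtering machinery named in the abstract above, the following is a minimal GM-PHD predict/update cycle for point targets under a constant-velocity model. All parameter values, the birth model, and the clutter density are placeholder assumptions; the paper's LIDAR-specific modifications and component pruning/merging are not shown.

```python
# Minimal GM-PHD predict/update sketch (constant-velocity point targets).
# Illustrative only: parameters and the birth model are placeholders, and
# pruning/merging of mixture components is omitted.
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])  # state: [x, y, vx, vy]
Q = 0.05 * np.eye(4)                                  # process noise
H = np.hstack([np.eye(2), np.zeros((2, 2))])          # positions are observed
R = 0.1 * np.eye(2)                                   # measurement noise
p_survive, p_detect, clutter = 0.99, 0.9, 1e-3

def gaussian_likelihood(z, mean, cov):
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(cov))

def phd_predict(components, births):
    """components: list of (weight, mean, covariance). Survived targets plus birth terms."""
    predicted = [(p_survive * w, F @ m, F @ P @ F.T + Q) for w, m, P in components]
    return predicted + list(births)

def phd_update(components, measurements):
    updated = [((1.0 - p_detect) * w, m, P) for w, m, P in components]   # missed detections
    for z in measurements:
        terms = []
        for w, m, P in components:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            q = gaussian_likelihood(z, H @ m, S)
            terms.append((p_detect * w * q, m + K @ (z - H @ m), (np.eye(4) - K @ H) @ P))
        norm = clutter + sum(t[0] for t in terms)
        updated += [(w / norm, m, P) for w, m, P in terms]
    return updated

# One cycle: a single prior target near the origin, one birth term, two returns.
prior = [(1.0, np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4))]
births = [(0.1, np.array([5.0, 5.0, 0.0, 0.0]), 4.0 * np.eye(4))]
Z = [np.array([0.12, 0.05]), np.array([5.2, 4.9])]
posterior = phd_update(phd_predict(prior, births), Z)
print("expected target count:", sum(w for w, _, _ in posterior))
```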

3D-Based Monitoring System and Cloud Computing for Panoramic Video Service (3차원 기반의 모니터링 시스템과 클라우드 컴퓨팅을 이용한 파노라믹 비디오 서비스)

  • Cho, Yongwoo;Seok, Joo Myoung;Suh, Doug Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39B no.9
    • /
    • pp.590-597
    • /
    • 2014
  • This paper proposes a multi-camera system that relies on 3D views for panoramic video, and a method for distributing the panoramic video generation algorithm using cloud computing. The proposed monitoring system inspects the projected 3D model view, instead of individual 2D views, to detect image distortions. This minimizes compensation errors caused by parallax and thereby improves the quality of the resulting panoramic video. The panoramic video generation algorithm can be divided into a registration part and a compositing part, and we propose a method for off-loading these parts to cloud computing for the panoramic video service.
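
The abstract above splits panorama generation into a registration part and a compositing part. The sketch below illustrates that split with a generic two-image OpenCV pipeline; it is not the paper's algorithm, the cloud off-loading itself (transport, scheduling) is omitted, and the image file names in the usage comment are placeholders.

```python
# Sketch of splitting panorama generation into a registration stage (heavy,
# a candidate for off-loading) and a compositing stage (lighter).
import cv2
import numpy as np

def register(img_left, img_right):
    """Registration: estimate a homography mapping img_right into img_left's frame."""
    g1 = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def composite(img_left, img_right, H):
    """Compositing: warp the right image onto a canvas and paste the left image over it."""
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[0:h, 0:w] = img_left
    return canvas

# left = cv2.imread("cam0.png"); right = cv2.imread("cam1.png")     # placeholder file names
# pano = composite(left, right, register(left, right))
```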

Development of Vision System Model for Manipulator's Assemble task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • 장완식
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.6 no.2
    • /
    • pp.10-18
    • /
    • 1997
  • This paper presents the development of real-time estimation and control details for a computer-vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of the points of interest in each of the two-dimensional image planes of the monitoring cameras. The estimation model generalizes the known kinematics of a 4-axis Scorbot manipulator to accommodate unknown relative camera position and orientation, and uses six uncertainty-of-view parameters estimated by an iterative method. The method is tested experimentally in two ways: first, the validity of the estimation model is tested on a self-built test model; second, the practicality of the presented control method is verified on a 4-axis manipulator assembly task. The results show that the control scheme is precise and robust, which opens the door to a range of multi-axis robot applications such as deburring and welding.

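The entry above estimates six uncertainty-of-view parameters by iteration. A generic way to do this kind of iterative estimation is nonlinear least squares over a pinhole projection model, sketched below with an assumed focal length and Euler-angle parameterization; it is not the paper's sequential estimation scheme.

```python
# Minimal sketch of estimating six unknown camera-view parameters (three
# rotations, three translations) by iterative nonlinear least squares from
# known 3D points and their 2D image observations.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

FOCAL = 800.0   # assumed pixel focal length

def project(params, points_3d):
    """Pinhole projection under view parameters params = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    cam = points_3d @ R.T + params[3:]
    return FOCAL * cam[:, :2] / cam[:, 2:3]

def residuals(params, points_3d, observed_2d):
    return (project(params, points_3d) - observed_2d).ravel()

# Synthetic test: attempt to recover the view parameters from noiseless projections.
true = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 2.0])
pts = np.random.default_rng(0).uniform(-0.5, 0.5, (20, 3))
obs = project(true, pts)
fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.5]), args=(pts, obs))
print("estimated view parameters:", np.round(fit.x, 3))
```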

Multiple Object Tracking with Color-Based Particle Filter for Intelligent Space (공간지능화를 위한 색상기반 파티클 필터를 이용한 다중물체추적)

  • Jin, Tae-Seok;Hashimoto, Hideki
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.1
    • /
    • pp.21-28
    • /
    • 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate, it is very important that the system knows the locations of objects in the environment in order to offer useful services. To this end, we present a method for representing and tracking people, and for human following, by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article presents the integration of color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track the moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. The method is also applied to the intelligent environment, and its performance is verified by experiments.

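The abstract above integrates color distributions into particle filtering. The sketch below shows the core idea with a hue-saturation histogram likelihood and the Bhattacharyya distance; the multi-camera fusion and top-view hypothesis generation described in the paper are not reproduced, and the patch size, noise levels, and likelihood sharpness are assumptions.

```python
# Sketch of a color-based particle filter: each particle hypothesizes an object
# position and is weighted by how well the color histogram of the patch around
# it matches a reference histogram of the tracked object.
import cv2
import numpy as np

N_PARTICLES, PATCH = 200, 24
rng = np.random.default_rng(1)

def hs_histogram(img_hsv, cx, cy):
    """Normalized hue-saturation histogram of a PATCH x PATCH window."""
    x0, y0 = int(cx) - PATCH // 2, int(cy) - PATCH // 2
    roi = img_hsv[y0:y0 + PATCH, x0:x0 + PATCH]
    hist = cv2.calcHist([roi], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()

def step(particles, weights, frame_bgr, ref_hist):
    """One predict / update / resample cycle."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = frame_bgr.shape[:2]
    particles = np.clip(particles + rng.normal(0, 5, particles.shape),   # random-walk motion
                        PATCH, [w - PATCH, h - PATCH])
    for i, (cx, cy) in enumerate(particles):
        d = cv2.compareHist(ref_hist, hs_histogram(hsv, cx, cy),
                            cv2.HISTCMP_BHATTACHARYYA)                   # 0 = identical
        weights[i] = np.exp(-20.0 * d * d)
    weights /= weights.sum()
    estimate = weights @ particles                                       # weighted mean position
    idx = rng.choice(N_PARTICLES, N_PARTICLES, p=weights)                # resample
    return particles[idx], np.full(N_PARTICLES, 1.0 / N_PARTICLES), estimate

# Initialization (initial_xy and first_frame are placeholders):
# particles = np.tile(initial_xy, (N_PARTICLES, 1)) + rng.normal(0, 10, (N_PARTICLES, 2))
# weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
# ref_hist = hs_histogram(cv2.cvtColor(first_frame, cv2.COLOR_BGR2HSV), *initial_xy)
```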

City-Scale Modeling for Street Navigation

  • Huang, Fay;Klette, Reinhard
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.4
    • /
    • pp.411-419
    • /
    • 2012
  • This paper proposes a semi-automatic image-based approach for 3-dimensional (3D) modeling of buildings along streets. Image-based urban 3D modeling techniques are typically based on the use of aerial and ground-level images. The aerial image of the relevant area is extracted from publicly available Google Maps data by stitching together different patches of the map. Panoramic images are common for ground-level recording because they have advantages for 3D modeling. A panoramic video recorder is used in the proposed approach for recording sequences of ground-level spherical panoramic images. The proposed approach has two advantages. First, the detected camera trajectories are more accurate and stable (compared to methods using multi-view planar images only) due to the use of spherical panoramic images. Second, we extract the texture of a building facade from a single panoramic image, so there is no need to deal with the color blending problems that typically occur when using overlapping textures.
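
The abstract above extracts a facade texture from a single spherical panoramic image. One generic way to do this is to render a rectilinear view from an equirectangular panorama in the facade's direction, as sketched below; the viewing direction, field of view, and file name are placeholders and this is not the paper's pipeline.

```python
# Sketch of rendering a planar (facade-like) view from an equirectangular
# panorama: each output pixel is mapped to a ray, then to longitude/latitude,
# then to a panorama pixel, and sampled with cv2.remap.
import numpy as np
import cv2

def rectilinear_from_equirect(pano, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    ph, pw = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                       np.arange(out_h) - out_h / 2.0)
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)            # camera-frame rays
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T
    lon = np.arctan2(rays[..., 0], rays[..., 2])                    # [-pi, pi]
    lat = np.arcsin(rays[..., 1] / np.linalg.norm(rays, axis=-1))   # [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1.0) * 0.5 * pw).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * ph).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

# facade = rectilinear_from_equirect(cv2.imread("pano.jpg"), yaw_deg=30, pitch_deg=0,
#                                    fov_deg=90, out_w=800, out_h=600)   # placeholder file
```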

MULTI-VIEW STEREO CAMERA CALIBRATION USING LASER TARGETS FOR MEASUREMENT OF LONG OBJECTS

  • Yoshimi, Takashi;Yoshimura, Takaharu;Takase, Ryuichi;Kawai, Yoshihiro;Tomita, Fumiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.566-571
    • /
    • 2009
  • A calibration method for multiple sets of stereo vision cameras is proposed. To measure the three-dimensional shape of a very long object, the object must be measured from different viewpoints and the resulting data registered. In this study, two laser beams generate two strings of calibration targets, which form straight lines in the world coordinate system. An evaluation function is defined that sums the squared distances between each transformed target and the line fitted to its laser beam, together with the distances between points appearing in the data sets of two adjacent viewpoints. The calculation process for the approximation method based on data linearity is presented. Experimental results show the effectiveness of the method.

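The abstract above defines an evaluation function combining point-to-line distances (targets should lie on the laser-beam lines) with distances between points shared by adjacent viewpoints. The sketch below is one generic formulation of such a cost over a rigid transform parameterized by Euler angles and a translation; it is not the paper's approximation method based on data linearity.

```python
# Sketch of an evaluation function for registering viewpoint 2 into viewpoint 1:
# merged targets of each laser beam should lie on a straight 3D line, and the
# overlapping points of the two viewpoints should coincide after the transform.
import numpy as np
from scipy.spatial.transform import Rotation

def fit_line(points):
    """Least-squares 3D line through points: centroid plus principal direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def point_to_line_sq(points, line_point, line_dir):
    """Squared distances from points to a line given by a point and a unit direction."""
    diff = points - line_point
    return np.sum((diff - np.outer(diff @ line_dir, line_dir)) ** 2, axis=1)

def evaluation(params, beams_view1, beams_view2, overlap_view1, overlap_view2):
    """Cost of the transform (params = Euler angles + translation) mapping
    viewpoint-2 data into viewpoint 1's frame. beams_view* are lists of
    Nx3 target arrays, one array per laser beam."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    t = params[3:]
    cost = 0.0
    for b1, b2 in zip(beams_view1, beams_view2):
        merged = np.vstack([b1, b2 @ R.T + t])        # one beam's targets, both viewpoints
        p0, d = fit_line(merged)
        cost += point_to_line_sq(merged, p0, d).sum()
    cost += np.sum((overlap_view2 @ R.T + t - overlap_view1) ** 2)
    return cost

# Such a cost could be minimized with a generic optimizer, e.g.:
# from scipy.optimize import minimize
# result = minimize(evaluation, x0=np.zeros(6),
#                   args=(beams_view1, beams_view2, overlap_view1, overlap_view2))
```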

Human and Robot Tracking Using Histogram of Oriented Gradient Feature

  • Lee, Jeong-eom;Yi, Chong-ho;Kim, Dong-won
    • Journal of Platform Technology
    • /
    • v.6 no.4
    • /
    • pp.18-25
    • /
    • 2018
  • This paper describes a real-time human and robot tracking method in an Intelligent Space with multi-camera networks. The proposed method detects candidate humans and robots using the histogram of oriented gradients (HOG) feature in an image. To classify humans and robots among the candidates in real time, a cascaded structure is used to build a strong classifier from weaker ones: a linear support vector machine (SVM) followed by a radial-basis-function (RBF) SVM. Using multiple-view geometry, the method estimates the 3D positions of humans and robots from their 2D coordinates in the image coordinate system and tracks their positions with a stochastic approach. To test the performance of the method, humans and robots were asked to move along given rectangular and circular paths. Experimental results show that the proposed method reduces the localization error and is suitable for practical human-centered services in the Intelligent Space.
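
The abstract above classifies HOG candidates with a cascade of a linear SVM and an RBF SVM. A minimal sketch of that cascade using scikit-image HOG features and scikit-learn SVMs follows; the window size, margin threshold, and training-data handling are assumptions, not the paper's configuration.

```python
# Sketch of the cascade idea: a cheap linear SVM on HOG features rejects most
# candidate windows, and only the survivors go to a more expensive RBF SVM.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC, SVC

def hog_feature(window_gray):
    """HOG descriptor of a grayscale candidate window (assumed 64x128)."""
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

class CascadeClassifier:
    """features: ndarray (n_samples, n_features) of HOG descriptors; labels: ndarray of class ids."""
    def __init__(self):
        self.stage1 = LinearSVC()                         # fast rejector
        self.stage2 = SVC(kernel="rbf", gamma="scale")    # accurate verifier

    def fit(self, features, labels):
        self.stage1.fit(features, labels)
        keep = self.stage1.decision_function(features) > -0.5   # margin threshold (assumed)
        self.stage2.fit(features[keep], labels[keep])
        return self

    def predict(self, features):
        out = np.zeros(len(features), dtype=int)
        survivors = self.stage1.decision_function(features) > -0.5
        if survivors.any():
            out[survivors] = self.stage2.predict(features[survivors])
        return out
```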

Analysis of Affine Motion Compensation for Light Field Image Compression (라이트필드 영상 압축을 위한 Affine 움직임 보상 분석)

  • Huu, Thuc Nguyen;Duong, Vinh Van;Xu, Motong;Jeon, Byeungwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.216-217
    • /
    • 2019
  • A Light Field (LF) image can be understood as a set of images captured simultaneously by a multi-view camera array. The changes among views can be modeled by a general motion model such as the affine motion model. In this paper, we study the impact of the affine coding tool of Versatile Video Coding (VVC) on LF image compression. Our experimental results show only a small contribution from the affine coding tool to overall LF image compression, roughly 0.2%-0.4%.

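The abstract above studies VVC's affine coding tool on light-field views. The sketch below shows a six-parameter affine motion model of the kind VVC's affine mode uses, in which three control-point motion vectors define a per-pixel motion field for predicting a block from a neighboring view; block coordinates and motion vectors are illustrative, and this is not the VVC reference implementation.

```python
# Sketch of six-parameter affine motion compensation: control-point motion
# vectors at the block's top-left, top-right and bottom-left corners define a
# per-pixel motion field used to predict the block from a reference image.
import numpy as np
import cv2

def affine_prediction(reference, x0, y0, w, h, mv_tl, mv_tr, mv_bl):
    """Predict block (x0, y0, w, h) from `reference` using corner motion vectors."""
    mv_tl, mv_tr, mv_bl = map(np.asarray, (mv_tl, mv_tr, mv_bl))
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Affine interpolation of the corner motion vectors over the block.
    mv_x = mv_tl[0] + (mv_tr[0] - mv_tl[0]) * xs / w + (mv_bl[0] - mv_tl[0]) * ys / h
    mv_y = mv_tl[1] + (mv_tr[1] - mv_tl[1]) * xs / w + (mv_bl[1] - mv_tl[1]) * ys / h
    map_x = (x0 + xs + mv_x).astype(np.float32)
    map_y = (y0 + ys + mv_y).astype(np.float32)
    return cv2.remap(reference, map_x, map_y, cv2.INTER_LINEAR)

# prediction = affine_prediction(neighbor_view, 64, 64, 32, 32,
#                                mv_tl=(1.5, 0.0), mv_tr=(2.0, 0.2), mv_bl=(1.5, 0.5))
```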

High-precision Skeleton Extraction Method using Multi-view Camera System (다시점 카메라 시스템을 이용한 고정밀 스켈레톤 추출 기법)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.297-299
    • /
    • 2020
  • In this paper, we acquire a photorealistic 3D model through a multi-view camera system and present a method for extracting a high-precision skeleton from that model without additional devices such as motion sensors. Projection images of the 3D model generated by the multi-view camera system are rendered from the front, back, left, and right using projection matrices, and 2D skeletons are extracted with deep learning. The 3D coordinates of the 2D skeletons are then computed by inverting the projection matrices, and a high-precision skeleton is obtained through additional post-processing.

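The abstract above recovers 3D joint coordinates from 2D skeletons by inverting the projection used to render each view. A standard way to perform that back-projection from several calibrated views is linear triangulation (DLT), sketched below with synthetic projection matrices; the deep-learning 2D detector and the post-processing step are not shown.

```python
# Sketch of the back-projection step: given a joint's 2D coordinates in several
# calibrated views and each view's 3x4 projection matrix, recover its 3D
# position by linear triangulation (DLT).
import numpy as np

def triangulate_joint(projections, points_2d):
    """projections: list of 3x4 projection matrices; points_2d: matching (u, v) per view."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Synthetic check with two views of a point at (0.2, 1.1, 3.0).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P_front = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_side = K @ np.hstack([np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]]), [[3], [0], [3]]])
X_true = np.array([0.2, 1.1, 3.0, 1.0])
uv = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P_front, P_side)]
print(triangulate_joint([P_front, P_side], uv))    # approximately [0.2, 1.1, 3.0]
```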

Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.27 no.2_2
    • /
    • pp.319-324
    • /
    • 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfaces is presented, which can enable human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. We present a method for representing, tracking, and following objects (humans, robots, chairs) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track the moving objects (humans, robots, chairs) by generating hypotheses not in the image plane but on the top-view reconstruction of the scene.
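
Both ISpace/AI-Space entries above place tracking hypotheses on a top-view reconstruction rather than in the image plane. A minimal way to obtain such a top view from one camera is a ground-plane homography that maps a detection's foot point into a shared floor coordinate system, sketched below with placeholder calibration correspondences; the multi-camera fusion and the color-pattern recognition are not reproduced.

```python
# Sketch of mapping image-plane detections to a shared top-view (floor) frame
# via a ground-plane homography. Calibration correspondences are placeholders.
import numpy as np
import cv2

# Four image points of known floor locations and their top-view coordinates (metres).
image_pts = np.float32([[102, 458], [538, 452], [610, 280], [60, 292]])
floor_pts = np.float32([[0, 0], [4, 0], [4, 6], [0, 6]])
H_ground, _ = cv2.findHomography(image_pts, floor_pts)

def to_top_view(foot_point_xy):
    """Map one (x, y) image point, e.g. the bottom of a bounding box, to the floor plane."""
    p = cv2.perspectiveTransform(np.float32([[foot_point_xy]]), H_ground)
    return p[0, 0]

print(to_top_view((320, 400)))   # position on the floor, in the top-view frame
```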