• Title/Abstract/Keyword: single-view camera

Search results: 98 items

Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup; Min, Seung-Ki; Roh, Kyung-Shik; Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / Vol. 31, No. 4 / pp. 517-524 / 2007
  • This paper presents a global localization method using the circular correlation of an omni-directional image. Localization of a mobile robot, especially indoors, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited by computational complexity and a narrow view angle. To compensate for these shortcomings, we use a single omni-directional camera that captures instantaneous $360^{\circ}$ panoramic images around the robot. Nodes near the robot are identified from the correlation coefficients of the CHL (Circular Horizontal Line) between each landmark image and the currently captured image. After finding the possible nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated with the Fast Fourier Transform. Experiments in a real home environment demonstrate the feasibility of the method.
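For illustration, the FFT-based circular correlation the abstract describes can be sketched as follows (a minimal sketch; the CHL profiles are assumed to be 1-D intensity arrays of equal length sampled along the circular horizontal line, and all names are illustrative):

```python
import numpy as np

def circular_correlation(cur, ref):
    """Normalized circular cross-correlation of two CHL intensity profiles,
    computed in O(N log N) via the FFT. Entry k is the correlation
    coefficient when `ref` is rotated by k samples to match `cur`."""
    a = np.asarray(cur, dtype=float) - np.mean(cur)
    b = np.asarray(ref, dtype=float) - np.mean(ref)
    # Correlation theorem: corr[k] = sum_n a[n] * b[(n - k) mod N]
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return corr / (np.linalg.norm(a) * np.linalg.norm(b))

# The stored node whose CHL gives the highest peak is taken as the nearest node.
ref = np.random.rand(360)                       # landmark CHL, one sample per degree
cur = np.roll(ref, 42) + 0.01 * np.random.randn(360)   # current view, rotated + noise
scores = circular_correlation(cur, ref)
print(np.argmax(scores), scores.max())          # -> 42, peak close to 1.0
```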

Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He; Weng, FuZhou; Hu, Bo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 37, No. 2 / pp. 45-53 / 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to SLAM that uses a depth camera as the visual sensor. To address the high cost of laser sensors and the scale ambiguity of traditional monocular and binocular cameras in map construction, we study a method for building a three-dimensional map of an indoor environment from depth data using an RGB-D SLAM scheme. The method uses a mobile robot equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then builds indoor three-dimensional point cloud maps in real time through key steps such as positioning-point generation, loop-closure detection, and map construction. Field experiments show that the average error of the point cloud map created by the algorithm is 0.0045 m, confirming that the method can stably and accurately build real-time three-dimensional maps of unknown indoor environments from depth data.
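As one step of such a pipeline (a minimal sketch, not the authors' implementation; the intrinsics shown are typical Kinect-v1-like values, not calibrated ones), each depth frame can be back-projected into a camera-frame point cloud before pose estimation and loop closure fuse the frames into a map:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3-D point cloud using
    pinhole intrinsics (fx, fy, cx, cy). Returns an (N, 3) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]           # drop pixels with no depth reading

# Illustrative 640x480 depth frame with Kinect-v1-like intrinsics (assumed values):
cloud = depth_to_pointcloud(np.random.rand(480, 640), 525.0, 525.0, 319.5, 239.5)
print(cloud.shape)                      # -> (307200, 3)
```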

Performance evaluation of the 76 cm telescope at Kyung Hee Astronomical Observatory (KHAO)

  • Ji, Tae-Geun; Han, Jimin; Ahn, Hojae; Lee, Sumin; Kim, Dohoon; Kim, Kyung Tae; Im, Myungshin; Pak, Soojong
    • The Bulletin of The Korean Astronomical Society / Vol. 46, No. 1 / pp. 49.3-49.3 / 2021
  • The 76 cm telescope at Kyung Hee Astronomical Observatory is participating in the small-telescope network of the SomangNet project, which started in 2020. Since the telescope's installation in 1992, the system configuration has changed several times. The optical system is a Ritchey-Chrétien design with a 76 cm aperture and a focal ratio of f/7. The mount is a single-fork equatorial type, and its control system is operated through TheSkyX software. We use a science camera with a 4k × 4k CCD and standard Johnson-Cousins UBVRI filters, covering a field of view of 23.7 × 23.7 arcmin. We are also developing the Kyung Hee Automatic Observing Software for the 76 cm telescope (KAOS76) for efficient operations. In this work, we present the standard-star calibration results, the current status of the system, and the expected science capabilities.
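As a consistency check on the quoted optics (a sketch; the 9 µm pixel size is inferred, not stated in the abstract), the field of view follows from the focal length and plate scale:

```latex
% Focal length and plate scale of the 76 cm f/7 system
f = N \cdot D = 7 \times 760\,\mathrm{mm} = 5320\,\mathrm{mm}, \qquad
s = \frac{206265''}{f} \approx 38.8''/\mathrm{mm}
% Assuming 9 um pixels on the 4k x 4k CCD (inferred, not stated):
\theta_{\mathrm{pix}} = s \times 0.009\,\mathrm{mm} \approx 0.35''/\mathrm{pix}, \qquad
\mathrm{FOV} \approx 4096 \times 0.35'' \approx 23.8'
```

which is consistent with the quoted 23.7 × 23.7 arcmin field.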


A Rule-Based Vehicle Tracking with Multiple Video Sequences (복수개의 동영상 시퀜스를 이용한 차량추적)

  • Park, Eun-Jong; So, Hyung-Junn; Jeong, Sung-Hwan; Lee, Joon-Whoan
    • The Journal of The Korea Institute of Intelligent Transport Systems / Vol. 6, No. 3 / pp. 45-56 / 2007
  • Automatic tracking of vehicles is important for accurately estimating traffic information, including vehicle speeds, in video-based traffic measurement systems. Because of the limited field of view, the range of visual tracking with a single camera is restricted. To enlarge the tracking range and improve the chance of monitoring vehicle behavior, tracking across consecutive multiple video sequences is necessary. This paper proposes a carefully designed rule-based vehicle tracking scheme and applies it to tracking across two well-synchronized video sequences. In the scheme, almost all cases that can appear in video-based vehicle tracking are considered when forming the rules, and the rule-based scheme is augmented with a Kalman filter. The tracking results can be used to collect data such as the temporal variation of vehicle speed and the behavior of individual vehicles in the enlarged tracking region.
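The abstract does not detail the filter, so the following is only a generic sketch of the constant-velocity Kalman filter commonly used to smooth per-frame vehicle positions (state layout, noise levels, and time step are illustrative):

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 1-D constant-velocity Kalman filter for smoothing a vehicle's
    along-road position across frames. State vector: [position, velocity]."""

    def __init__(self, dt, process_var=1.0, meas_var=4.0):
        self.x = np.zeros(2)                          # state estimate
        self.P = np.eye(2) * 100.0                    # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
        self.H = np.array([[1.0, 0.0]])               # we only measure position
        self.Q = np.eye(2) * process_var              # process noise
        self.R = np.array([[meas_var]])               # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x                                 # [position, velocity]

kf = ConstantVelocityKalman(dt=1 / 30)                # e.g., 30 fps video
track = [kf.step(z)[0] for z in np.linspace(0.0, 10.0, 60)]   # toy position ramp
```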


Light Field Angular Super-Resolution Algorithm Using Dilated Convolutional Neural Network with Residual Network (잔차 신경망과 팽창 합성곱 신경망을 이용한 라이트 필드 각 초해상도 기법)

  • Kim, Dong-Myung; Suh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 24, No. 12 / pp. 1604-1611 / 2020
  • Light field images captured by a microlens-array-based camera are limited in practical use by their low spatial and angular resolution. High spatial resolution can be recovered readily with single-image super-resolution techniques, which have been studied extensively in recent years. Angular super-resolution, however, must exploit the disparity information inherent among views, and the distortions introduced in this process make it difficult to obtain high-quality, high-angular-resolution images. In this paper, we propose a light field angular super-resolution network that extracts an initial feature map with a dilated convolutional neural network, to capture the view-difference information among images effectively, and generates the target view with a residual network. The proposed network showed superior performance in PSNR and subjective image quality compared with existing angular super-resolution networks.
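The exact architecture is not given in the abstract; a minimal sketch of the two ingredients it names, a dilated convolution (to enlarge the receptive field over inter-view disparity) wrapped in a residual connection, might look like this in PyTorch (channel counts and dilation are illustrative, not the authors' settings):

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Dilated-convolution residual block: dilation enlarges the receptive
    field without extra parameters; the skip connection eases training."""

    def __init__(self, channels=64, dilation=2):
        super().__init__()
        pad = dilation                       # keeps spatial size for 3x3 kernels
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)              # residual connection

# Stack such blocks after an initial feature-extraction convolution.
feat = torch.randn(1, 64, 64, 64)
print(DilatedResidualBlock()(feat).shape)    # torch.Size([1, 64, 64, 64])
```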

Multi-screen Content Creation using Rig and Monitoring System (다면 콘텐츠 현장 촬영 시스템)

  • Lee, Sangwoo; Kim, Younghui; Cha, Seunghoon; Kwon, Jaehwan; Koh, Haejeong; Park, Kisu; Song, Isaac; Yoon, Hyungjin; Jang, Kyungyoon
    • Journal of the Korea Computer Graphics Society / Vol. 23, No. 5 / pp. 9-17 / 2017
  • Filming with multiple cameras is required to produce multi-screen content, which can fill the viewer's field of view (FOV) entirely to provide an increased sense of immersion. In such a filming scenario, it is very important to monitor how the images captured by the multiple cameras compose a single piece of content and how that content will be displayed in an actual theatre. Most recent studies on creating special-format content have focused on specific targets such as stereoscopic and panoramic images; there has been no research on content creation optimized for the three-screen theatres that are spreading recently. In this paper, we propose a content production system comprising a rig that can control three cameras and monitoring software specialized for multi-screen content. The proposed rig can precisely control the angles between the cameras and capture a wide angle of view with the three cameras, and it works with the monitoring software via remote communication. The monitoring software automatically aligns the content in real time, and the alignment is updated according to the angle of the camera rig. Further, production efficiency is greatly improved by making the alignment information available for post-production.
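The paper aligns views using the rig's known inter-camera angles; purely for illustration, an image-only variant of such alignment between two adjacent cameras can be sketched with OpenCV (function name and parameters are ours, not the paper's):

```python
import cv2
import numpy as np

def align_adjacent_views(img_side, img_center):
    """Warp one side camera's image into the center camera's frame via a
    feature-based homography (image-only stand-in for rig-angle alignment)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_side, None)
    k2, d2 = orb.detectAndCompute(img_center, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to outliers
    h, w = img_center.shape[:2]
    return cv2.warpPerspective(img_side, H, (w, h))
```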

A Study on the Internet Broadcasting Image Processing based on Offloading Technique on the Mobile Environments (모바일 환경에서 오프로딩 기술 기반 인터넷 방송 영상 처리에 관한 연구)

  • Kang, Hong-gue
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 18, No. 6 / pp. 63-68 / 2018
  • Offloading is a technique in which part of an application's workload is sent from the local device to a remote server for processing and the results are returned, overcoming limits on local computing resources and speed. Recently it has been applied to mobile games, multimedia data, 360-degree video processing, and image processing for Internet broadcasting, to speed up processing and reduce battery consumption on mobile devices. This paper implements a viewer that lets users convert 360-degree images into various planar layouts and view the content in a wireless Internet environment, and presents experimental results. Through the interface, a 360-degree spherical image is converted into a planar image in Double Panorama, Quad, Single Rectangle, or 360 Overview + 3 Rectangle layouts, depending on the acquisition position of the 360-degree camera. In the experiments, more than 100 spherical images were successfully converted into planar images through this interface.
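Each of those flat layouts can be produced by re-projecting the spherical image through a virtual rectilinear camera. A minimal nearest-neighbour sketch (ours, not the paper's viewer; names and parameters are illustrative):

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Sample a rectilinear (flat) view from an equirectangular 360° image.
    fov/yaw/pitch select the virtual camera; nearest-neighbour sampling
    keeps the sketch short."""
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)     # focal length in pixels
    x = np.arange(out_w) - out_w / 2
    y = np.arange(out_h) - out_h / 2
    xv, yv = np.meshgrid(x, y)
    # Ray directions in camera coordinates, then rotate by pitch and yaw.
    dirs = np.stack([xv, yv, np.full_like(xv, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    cy_, sy_ = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy_, 0, sy_], [0, 1, 0], [-sy_, 0, cy_]])
    dirs = dirs @ (Ry @ Rx).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # longitude, -pi..pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))         # latitude, -pi/2..pi/2
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equi[v, u]

# e.g., a "Single Rectangle"-style view straight ahead:
# view = equirect_to_perspective(equi_img, 90, yaw_deg=0, pitch_deg=0, out_w=960, out_h=540)
```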

User Experience Evaluation of Augmented Reality based Guidance Systems for Solving Rubik's Cube using HMD (HMD를 이용한 증강현실 큐브 맞추기 안내 시스템의 사용자 경험 평가)

  • Park, Jaebum; Park, Changhoon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / Vol. 7, No. 7 / pp. 935-944 / 2017
  • As augmented reality technology has developed, various augmented reality contents have appeared in everyday life, and with the improved performance of mobile devices, augmented reality can now be used without special equipment. As a result, training systems, guidance systems, and museum art-guide systems based on augmented reality are emerging, and interest in augmented reality is increasing. However, existing guidance systems that use a single mobile device are limited in terms of user experience (UX): the device camera restricts the field of view, the user's hands are not free, and user input is difficult. In this paper, we compare augmented reality based guidance systems for solving Rubik's Cube using a tablet and an HMD, in order to relax these constraints of a single mobile device, and we identify elements that positively improve the user experience. We then evaluate, through a comparative user test and a questionnaire, whether the user experience actually improves.

3D Visualization and Work Status Analysis of Construction Site Objects

  • Junghoon Kim; Insoo Jeong; Seungmo Lim; Jeongbin Hwang; Seokho Chi
    • International conference on construction engineering and project management / The 10th International Conference on Construction Engineering and Project Management / pp. 447-454 / 2024
  • Construction site monitoring is pivotal for overseeing project progress, ensuring that projects are completed as planned, within budget, and in compliance with applicable laws and safety standards, and it also seeks to improve operational efficiency for better project execution. To this end, many researchers have used computer vision technologies for automatic site monitoring and for analyzing the operational status of equipment. However, most existing studies estimate real-world 3D information (e.g., object tracking, work status analysis) from 2D pixel information alone, which is a substantial challenge in the dynamic environments of construction sites: analytical rules and thresholds must be manually recalibrated for each camera's placement and field of view. To address these challenges, this study introduces a method for 3D visualization and status analysis of construction site objects using 3D reconstruction technology. The method analyzes equipment's operational status by acquiring 3D spatial information of equipment from single-camera images, using the Sam-Track model for object segmentation and the One-2-3-45 model for 3D reconstruction. The framework consists of three main processes: (i) single image-based 3D reconstruction, (ii) 3D visualization, and (iii) work status analysis. Experiments on a construction site video demonstrated the method's feasibility and satisfactory performance, with high status-analysis accuracy for excavators (93.33%) and dump trucks (98.33%). This research provides a more consistent way to analyze working status, suitable for practical field applications, and offers new directions for vision-based 3D information analysis. Future studies will apply the method to longer videos and more diverse construction sites and compare its performance with existing 2D pixel-based methods.
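The abstract does not spell out the status rules; one toy way the recovered 3D information could drive such a classification is a threshold rule on per-frame centroid motion (thresholds and labels are illustrative, not the paper's calibrated values):

```python
import numpy as np

def classify_work_status(centroids, dt=1.0, move_thresh=0.5, idle_thresh=0.05):
    """Toy status rule over per-frame 3-D centroids (meters) of a machine:
    'moving' if the centroid translates fast, 'working' if it shows small
    in-place motion (e.g., an excavator swinging), 'idle' otherwise."""
    centroids = np.asarray(centroids, dtype=float)
    speeds = np.linalg.norm(np.diff(centroids, axis=0), axis=1) / dt   # m/s
    status = []
    for s in speeds:
        if s > move_thresh:
            status.append("moving")
        elif s > idle_thresh:
            status.append("working")
        else:
            status.append("idle")
    return status

# Example: a dump truck that drives for 20 m, then pauses.
path = np.concatenate([np.linspace(0, 20, 21), np.full(10, 20.0)])
centroids = np.stack([path, np.zeros_like(path), np.zeros_like(path)], axis=1)
print(classify_work_status(centroids)[:3], classify_work_status(centroids)[-3:])
```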

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang-Ryoung (박강령)
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 41, No. 2 / pp. 79-88 / 2004
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen at which a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers, and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position due to eye movement. In experiments, we obtain the combined facial and eye gaze position on the monitor, with an RMS error of about 4.8 cm between the computed and actual positions.
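The facial-gaze geometry the abstract describes, intersecting the facial-plane normal with the monitor plane, can be sketched as follows (a toy version; the coordinate frames, feature choices, and ray origin are our assumptions):

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, monitor_z=0.0):
    """Given three reconstructed 3-D facial feature points (e.g., both eye
    corners and the nose tip, expressed in the monitor coordinate frame),
    intersect the facial-plane normal cast from their centroid with the
    monitor plane z = monitor_z. Returns the 3-D gaze point."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)          # facial-plane normal
    n /= np.linalg.norm(n)
    origin = (p1 + p2 + p3) / 3.0           # cast the gaze ray from the centroid
    t = (monitor_z - origin[2]) / n[2]      # ray-plane intersection parameter
    return origin + t * n

# Example with arbitrary feature points (monitor plane at z = 0):
print(gaze_point_on_monitor([0.0, 0.0, 0.5], [0.06, 0.0, 0.52], [0.03, -0.05, 0.48]))
```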