• Title/Abstract/Keyword: Monocular

236 search results

단안 카메라를 이용한 LKAS 시험평가 방법에 관한 연구 (A Study on the Test Evaluation Method of LKAS Using a Monocular Camera)

  • 배건환;이선봉
    • 자동차안전학회지 / Vol. 12, No. 3 / pp.34-42 / 2020
  • ADAS (Advanced Driver Assistance Systems) use sensors such as cameras, radar, lidar, and GPS (Global Positioning System). Among these sensors, the camera has many advantages: it is inexpensive, easy to use, and can identify objects. In this paper, therefore, a theoretical formula is proposed to obtain the distance from the vehicle's front wheel to the lane using a monocular camera, and its validity is verified through actual vehicle tests. The vehicle tests in scenario 4 showed a maximum error of 0.21 m; lane detection is difficult on curved roads, and the error is judged to arise from the large yaw rates that occur there. The maximum error occurred on the curved road, but the error decreased after lane return. The proposed theoretical formula therefore makes it possible to assess the safety of the LKA system.
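The abstract does not reproduce the paper's actual formula; a minimal flat-road pinhole-camera sketch of the underlying geometry (all parameter names are illustrative assumptions, not the paper's notation) might look like:

```python
import math

def ground_distance(v_px, f_px, cy_px, cam_height_m, pitch_rad=0.0):
    """Longitudinal distance to a ground point imaged at pixel row v_px,
    assuming a flat road and a pinhole camera at height cam_height_m."""
    # Angle below the optical axis of the ray through row v_px.
    ray = math.atan2(v_px - cy_px, f_px) + pitch_rad
    if ray <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height_m / math.tan(ray)

def lateral_lane_distance(u_px, v_px, f_px, cx_px, cy_px, cam_height_m):
    """Lateral offset of a lane point from the camera axis; subtracting a
    camera-to-front-wheel offset would then give the wheel-to-lane
    distance an LKAS evaluation needs."""
    d = ground_distance(v_px, f_px, cy_px, cam_height_m)
    return d * (u_px - cx_px) / f_px
```

On sharp curves the flat-road and small-yaw-rate assumptions behind this kind of projection break down, which is consistent with the larger error the paper reports there.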

다초점 3차원 영상 표시 장치 (Multi-focus 3D Display)

  • 김성규;김동욱;권용무;손정영
    • 한국광학회:학술대회논문집 / 한국광학회 2008년도 하계학술발표회 논문집 / pp.119-120 / 2008
  • An HMD-type multi-focus 3D display system is developed, and its satisfaction of eye accommodation is tested. Four LEDs (Light Emitting Diodes) and a DMD are used to generate four parallax images for a single eye, and the system contains no mechanical parts. Multi-focus means the ability to provide monocular depth cues at various depth levels. By achieving the multi-focus function, we developed a 3D display system for one eye that can satisfy accommodation to displayed virtual objects within a defined depth. Focus adjustment was possible at five depth steps in sequence within a 2 m depth for a single eye. In addition, the change in blurring depending on the focused depth was tested with photos and video captured by a camera and with several subjects. The HMD-type multi-focus 3D display can be applied to monocular 3D displays and monocular AR 3D displays.


Deep Learning Based Monocular Depth Estimation: Survey

  • Lee, Chungkeun;Shim, Dongseok;Kim, H. Jin
    • Journal of Positioning, Navigation, and Timing / Vol. 10, No. 4 / pp.297-305 / 2021
  • Monocular depth estimation helps a robot understand its surrounding environment in 3D. Deep-learning-based monocular depth estimation in particular has been widely researched, because it may overcome the scale ambiguity problem, a main issue in classical methods. Learning-based methods can be divided into three categories: supervised, unsupervised, and semi-supervised learning. Supervised learning trains the network from dense ground-truth depth, unsupervised learning trains it from image sequences, and semi-supervised learning trains it from stereo images and sparse ground-truth depth. We describe the basics of each approach and then explain recent research efforts to enhance depth estimation performance.
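A common supervised training objective in this literature is the scale-invariant log-depth loss of Eigen et al. (2014); a small sketch (not tied to any specific model in the survey) illustrates why a global scale error is not penalized:

```python
import numpy as np

def scale_invariant_loss(pred, gt, lam=0.5):
    """Scale-invariant log-depth loss: per-pixel squared log error minus
    a fraction `lam` of the squared mean log error. With lam=1, a global
    scale factor on the prediction incurs no penalty at all."""
    d = np.log(pred) - np.log(gt)          # per-pixel log-depth error
    return (d ** 2).mean() - lam * (d.mean() ** 2)
```

With `lam=1.0`, predicting `2 * gt` instead of `gt` yields (numerically) zero loss, which is exactly the scale ambiguity the survey discusses.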

특징점 기반 단안 영상 SLAM의 최적화 기법 및 필터링 기법 성능 분석 (Performance Analysis of Optimization Method and Filtering Method for Feature-based Monocular Visual SLAM)

  • 전진석;김효중;심덕선
    • 전기학회논문지 / Vol. 68, No. 1 / pp.182-188 / 2019
  • Autonomous mobile robots need SLAM (simultaneous localization and mapping) to estimate their location and simultaneously build a map of the surroundings. Visual SLAM requires an algorithm that detects and extracts feature points from camera images and computes the camera pose and the 3D positions of the features. In this paper, we propose the MPROSAC algorithm, which combines MSAC and PROSAC, and compare the performance of an optimization method and a filtering method for feature-based monocular visual SLAM. Sparse Bundle Adjustment (SBA) is used as the optimization method and the extended Kalman filter as the filtering method.

A Survey for 3D Object Detection Algorithms from Images

  • Lee, Han-Lim;Kim, Ye-ji;Kim, Byung-Gyu
    • Journal of Multimedia Information System / Vol. 9, No. 3 / pp.183-190 / 2022
  • Image-based 3D object detection is an important and difficult problem in autonomous driving and robotics; it aims to find and represent the location, dimensions, and orientation of objects of interest. It generates three-dimensional (3D) bounding boxes from 2D camera images alone, so no device that provides accurate depth information, such as LiDAR or radar, is needed. Image-based methods can be divided into three main categories: monocular, stereo, and multi-view 3D object detection. In this paper, we investigate recent state-of-the-art models in these three categories. For multi-view 3D object detection, which appeared together with the release of the new benchmark datasets NuScenes and Waymo, we discuss the differences from existing monocular and stereo methods. We also analyze their performance and discuss their advantages and disadvantages. Finally, we conclude with the remaining challenges and future directions in this field.

이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합 (Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot)

  • 김민영;안상태;조형석
    • 제어로봇시스템학회논문지 / Vol. 16, No. 4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using sensor fusion in structured environments. Combining sensors with different characteristics and limited sensing capabilities is advantageous because they complement and cooperate with each other to yield better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired by the two sensors is fused with a Bayesian technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization with monocular vision, the robot uses image features consisting of vertical edge lines from the camera images as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes, extracted from range data at a constant height above the floor, as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
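The core of reliability-weighted Bayesian fusion can be sketched for the simplest case, two independent Gaussian estimates of the same quantity (a simplification of the paper's reliability functions, which are defined experimentally per sensor):

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the same
    quantity (e.g., a pose coordinate from vision and from structured
    light). Each sensor is weighted by its reliability (inverse variance);
    the fused variance is always smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var
```

A sensor whose reliability degrades (variance grows) in some environment then automatically contributes less to the fused estimate, which is the behavior the paper exploits across varying conditions.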

단일 카메라를 이용한 동역학 기반의 보행 동작 추적 (Tracking a Walking Motion Based on Dynamics Using a Monocular Camera)

  • 유태근;최재림;김덕원
    • 전자공학회논문지SC / Vol. 49, No. 1 / pp.20-28 / 2012
  • Gait analysis is the observation of walking, the extraction of objective information, and the evaluation of walking function. Gait measurement systems in recent use consist of multiple cameras and ground reaction force plates, which are expensive and require a large installation space. To address this problem, this study proposes a markerless method for measuring 3D human walking motion from images obtained with a single camera. A particle filter tracks human motion without training data or prior information about gait. Dynamics of the human body and the ground are used to generate physically plausible body motions. The mean error over all joints computed from walking images was $12.4^{\circ}$ for the proposed method, smaller than the $34.6^{\circ}$ error of the conventional particle filter. These results suggest that a single camera can measure gait quantitatively and could potentially replace existing complex equipment.
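The particle filter at the heart of the method follows the standard predict-weight-resample cycle; a minimal 1D sketch (the paper applies the same idea to full-body joint angles with a dynamics-based motion model, which is not reproduced here) might look like:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std, meas_std, rng):
    """One predict-weight-resample cycle of a bootstrap particle filter
    for a 1D state with a Gaussian measurement model."""
    # Predict: diffuse particles with the (here trivial) motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Weight: likelihood of the measurement under each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles proportional to their weights.
    return rng.choices(particles, weights=weights, k=len(particles))
```

Replacing the Gaussian diffusion with a physics-based dynamics model, as the paper does, constrains the predicted poses to physically plausible motions and is what reduces the joint-angle error versus the plain particle filter.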

GPS와 단안카메라, HD Map을 이용한 도심 도로상에서의 위치측정 및 맵핑 정확도 향상 방안 (Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map)

  • 김영훈;김재명;김기창;최윤수
    • 대한원격탐사학회지 / Vol. 37, No. 5_1 / pp.1095-1109 / 2021
  • Accurate self-localization and mapping of the surroundings are essential for safe autonomous driving. By combining many sensors, such as an expensive high-precision GPS (Global Positioning System), an IMU (Inertial Measurement Unit), LiDAR (Light Detection And Ranging), RADAR (Radio Detection And Ranging), and wheel odometry, and processing the sensor data on a workstation-class PC, cm-level localization and mapping are possible. However, the excessive cost of integrating such data and the resulting lack of economic feasibility make expensive sensor combinations an obstacle to the popularization of autonomous driving. In this paper, we extend conventional monocular visual SLAM by fusing an RTK-enabled GPS, securing both accuracy and economic feasibility. In addition, we correct errors using an HD Map and port the system to an embedded PC, achieving localization with an RMSE of 33.7 cm and mapping of the surroundings on urban roads. The proposed method is expected to enable the development of safe, low-cost autonomous driving systems and the generation of accurate HD maps.
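The reported accuracy figure is a trajectory RMSE; for concreteness, a generic sketch of how such a 2D position RMSE is computed against ground truth (the paper's exact evaluation protocol is not given in the abstract):

```python
import math

def localization_rmse(estimates, ground_truth):
    """Root-mean-square 2D position error between an estimated trajectory
    and a ground-truth trajectory of matched (x, y) points, the kind of
    metric behind a result such as 33.7 cm RMSE on urban roads."""
    assert len(estimates) == len(ground_truth)
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimates, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))
```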

시각정보가 Y-Balance Test에 미치는 영향 (Effects of visual information on Y-Balance Test)

  • 우병훈
    • 한국응용과학기술학회지 / Vol. 40, No. 5 / pp.977-987 / 2023
  • The purpose of this study was to investigate the effect of visual information (binocular versus monocular vision) on dynamic balance during the Y-Balance Test (YBT), using absolute reach distance, composite score, and center-of-pressure (COP) variables. Eighteen adults in their 20s and 30s (age: 23.17±1.72 yr, height: 172.46±9.84 cm, weight: 73.39±11.44 kg, leg length: 88.89±5.69 cm) participated. Absolute reach distance, composite score, and COP variables were measured for both feet during the YBT under binocular and monocular vision. In the posterolateral and posteromedial directions and in the composite score, monocular occlusion (left or right eye occluded) produced higher absolute reach distances and composite scores than binocular vision. For the COP results, no differences appeared during anterior and posteromedial reaches, but during the posterolateral reach the anteroposterior COP velocity of the left foot was slower under monocular occlusion than under binocular vision, as was the overall COP velocity of the left foot.
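The YBT composite score used in such studies is conventionally the sum of the three reach distances normalized to three times leg length, expressed as a percentage; a generic sketch (assuming this standard normalization, which the abstract does not spell out):

```python
def ybt_composite_score(anterior_cm, posteromedial_cm,
                        posterolateral_cm, leg_length_cm):
    """Y-Balance Test composite score: sum of the three reach distances
    divided by three times leg length, as a percentage."""
    total_reach = anterior_cm + posteromedial_cm + posterolateral_cm
    return total_reach / (3.0 * leg_length_cm) * 100.0
```

Normalizing by leg length is what allows comparison across participants of different stature, such as the 18 adults in this study.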

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 1 / pp.106-123 / 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for the multi-scenario application of digital humans. Compared with earlier mixed-reality live broadcasts, we further upgraded our virtual production and digital-human-driving technology, adopted industry-leading real-time virtual production and monocular-camera driving technology, and launched a virtual cartoon character talk show, "Beast Town," that blends the real and the virtual, further enhancing program immersion and the audio-visual experience and expanding the boundaries of virtual production. In the talk show, motion-capture shooting is used for final picture synthesis. The virtual scene must present dynamic effects while driving the digital human and following the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time driving of digital humans, and rendering of the synthesized picture. We focus on issues such as virtual-real data docking and monocular-camera motion-capture effects, and we combine outward camera tracking, multi-scene picture perspective, and multi-machine rendering to solve picture-linkage and rendering-quality problems in a deeply immersive space environment, presenting users with visual effects that link digital humans and live guests.