• Title/Summary/Keyword: 단안 (monocular)


Improved depth map generation method using Vanishing Point area (소실점 영역을 이용한 개선된 Depth-map 생성 기법)

  • Ban, Kyeong-Jin;Kim, Jong-Chan;Kim, Kyoung-Ok;Kim, Eung-Kon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.357-359 / 2010
  • In monocular images, the vanishing point is a very strong depth cue for scenes with regular structure, such as outdoor scenes containing roads and buildings or indoor scenes such as rooms and hallways. A depth map derived from the vanishing point is used to recover the three-dimensional structure of the scene from the two-dimensional image. However, when a vanishing point is present, the relative depth needs to be expressed differently depending on where the vanishing point lies in the image. In this paper, we present an improved depth-map generation method based on the vanishing point of the image. The proposed method defines the region containing the vanishing point and the surrounding regions, and assigns depth differently along the direction of each region.

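The core idea, relative depth assigned by distance from the vanishing point, can be illustrated with a short sketch. The following Python snippet is a minimal illustration under assumed inputs (image size and a known vanishing point location), not the authors' implementation, and it uses a single radial falloff rather than the region- and direction-dependent assignment the paper proposes.

```python
import numpy as np

def vanishing_point_depth_map(height, width, vp):
    """Toy relative depth map: pixels near the vanishing point are treated as farthest.

    vp is an (x, y) pixel coordinate assumed to be known, e.g. from line-intersection voting.
    Returns values in [0, 1], where 1 lies at the vanishing point and 0 at the farthest image corner.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - vp[0]) ** 2 + (ys - vp[1]) ** 2)
    corners = np.array([[0, 0], [width - 1, 0], [0, height - 1], [width - 1, height - 1]])
    max_dist = np.max(np.linalg.norm(corners - np.array(vp), axis=1))
    return 1.0 - dist / max_dist  # near the vanishing point -> large relative depth

if __name__ == "__main__":
    depth = vanishing_point_depth_map(240, 320, vp=(160, 100))
    print(depth.shape, depth.min(), depth.max())
```

The paper's contribution would replace this single radial gradient with different depth gradients for the regions defined around the vanishing point.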

The Change of Accommodative Functions by Difference Density and Color (착색렌즈의 농도와 색상에 따른 조절기능 변화)

  • Jang, Jung Un
    • The Korean Journal of Vision Science / v.20 no.4 / pp.453-459 / 2018
  • Purpose: This study investigated the change in accommodative functions with the density and color of colored lenses. Methods: The participants were 31 university students with normal NPC and no dyschromatopsia, phoria, or eye disease, and with no history of eye surgery. Their accommodative functions were measured with gray, blue, and brown lenses at 50% and 80% density and with non-colored lenses. The accommodative function tests included amplitude of accommodation, accommodative facility, relative accommodation, and accommodative lag. Results: The amplitude of accommodation and the accommodative lag increased when wearing the colored lenses. Negative relative accommodation increased more with the colored lenses than with the achromatic lenses. Positive relative accommodation increased when wearing the blue lens at 50% density. Accommodative facility also increased when wearing the colored lenses but decreased as the color density increased. Conclusion: Since accommodative function changes with the density of colored lenses, the working distance and the wearing environment should be considered when selecting the density and color of colored lenses.

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to objects from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added to the existing GNSS/on-board sensor-based positioning algorithm.
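The stereo distance estimation that such a vision aid relies on reduces to the standard triangulation relation Z = f·B/d (focal length times baseline over disparity). The snippet below is a generic sketch of that relation only, not the authors' algorithm; the focal length, baseline, and disparity values are made-up examples.

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance (in meters) to a point seen in rectified stereo images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.5 m baseline, 14 px disparity -> 25 m to the object.
print(stereo_distance(700.0, 0.5, 14.0))
```

In the paper, the vision-derived information supplements the GNSS/on-board vehicle sensor navigation solution; the sketch above covers only the distance estimation part.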

Deep Learning Based On-Device Augmented Reality System using Multiple Images (다중영상을 이용한 딥러닝 기반 온디바이스 증강현실 시스템)

  • Jeong, Taehyeon;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.3 / pp.341-350 / 2022
  • In this paper, we propose a deep learning based on-device augmented reality (AR) system in which multiple input images are used to implement correct occlusion in a real environment. The proposed system consists of three technical steps: camera pose estimation, depth estimation, and object augmentation. Each step employs mobile frameworks to optimize processing in the on-device environment. First, in the camera pose estimation stage, the massive computation involved in feature extraction is parallelized using OpenCL, a GPU parallelization framework. Next, in depth estimation, monocular and multiple image-based depth inference is accelerated using the mobile deep learning framework TensorFlow Lite. Finally, object augmentation and occlusion handling are performed with the OpenGL ES mobile graphics framework. The proposed augmented reality system is implemented as an application in the Android environment. We evaluate the performance of the proposed system in terms of augmentation accuracy and processing time in both mobile and PC environments.
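Of the three steps, occlusion handling is the easiest to illustrate: it amounts to a per-pixel depth test between the estimated scene depth and the depth of the rendered virtual object. The NumPy sketch below shows only that test with hypothetical arrays; in the actual system the comparison would be done on-device in OpenGL ES using the depth map inferred by the TensorFlow Lite model.

```python
import numpy as np

def composite_with_occlusion(background, virtual_rgb, virtual_depth, scene_depth):
    """Overlay the virtual object only where it is closer than the real scene.

    background:    (H, W, 3) camera frame
    virtual_rgb:   (H, W, 3) rendered object color
    virtual_depth: (H, W) rendered object depth, np.inf where no object is drawn
    scene_depth:   (H, W) scene depth estimated from the input images
    """
    visible = virtual_depth < scene_depth       # virtual surface in front of the real one
    out = background.copy()
    out[visible] = virtual_rgb[visible]
    return out

if __name__ == "__main__":
    h, w = 4, 4
    bg = np.zeros((h, w, 3), dtype=np.uint8)
    obj = np.full((h, w, 3), 255, dtype=np.uint8)
    obj_depth = np.full((h, w), 2.0)
    obj_depth[0, 0] = np.inf                    # no virtual object at this pixel
    scene = np.full((h, w), 3.0)
    scene[1, 1] = 1.0                           # a real surface closer than the object
    print(composite_with_occlusion(bg, obj, obj_depth, scene)[:, :, 0])
```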

Non-Homogeneous Haze Synthesis for Hazy Image Depth Estimation Using Deep Learning (불균일 안개 영상 합성을 이용한 딥러닝 기반 안개 영상 깊이 추정)

  • Choi, Yeongcheol;Paik, Jeehyun;Ju, Gwangjin;Lee, Donggun;Hwang, Gyeongha;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.45-54 / 2022
  • Image depth estimation is a technology that forms the basis of various kinds of image analysis. As analysis methods using deep learning models emerge, studies applying deep learning to image depth estimation are being actively conducted. Currently, most deep learning-based depth estimation models are trained with clean, ideal images. However, due to the lack of data for adverse conditions such as haze or fog, depth estimation may not work well in such environments. It is hard to secure a sufficient number of images of these environments, and obtaining non-homogeneous haze data in particular is very difficult. To solve this problem, we propose a method of synthesizing non-homogeneous haze images and a training method for a monocular depth estimation deep learning model that uses the synthesized data. Considering that haze mainly occurs outdoors, the datasets are constructed mainly from outdoor images. Experimental results show that the model trained with the proposed method estimates depth well on both synthesized and real haze data.
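Haze synthesis of this kind is usually built on the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β(x)·d(x)); letting the scattering coefficient β vary across the image produces non-homogeneous haze. The sketch below applies that model to an RGB image and its depth map; the blockwise random field used for β is a simple stand-in and not necessarily the paper's synthesis procedure.

```python
import numpy as np

def synthesize_nonhomogeneous_haze(image, depth, airlight=0.9, beta_min=0.4, beta_max=1.6, seed=0):
    """Add non-homogeneous haze with I = J*t + A*(1-t), t = exp(-beta(x)*d(x)).

    image: (H, W, 3) float array in [0, 1]
    depth: (H, W) float array (scene depth, e.g. in meters)
    """
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    # Spatially varying scattering coefficient: coarse random grid upsampled blockwise.
    coarse = rng.uniform(beta_min, beta_max, size=(h // 8 + 1, w // 8 + 1))
    beta = np.kron(coarse, np.ones((8, 8)))[:h, :w]
    t = np.exp(-beta * depth)[..., None]        # (H, W, 1) transmission map
    return image * t + airlight * (1.0 - t)

if __name__ == "__main__":
    img = np.random.default_rng(1).random((64, 64, 3))
    depth = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))   # depth increasing left to right
    hazy = synthesize_nonhomogeneous_haze(img, depth)
    print(hazy.shape, float(hazy.min()), float(hazy.max()))
```

Depth estimation networks can then be trained on pairs of hazy images and depth maps produced this way, which is the role the synthesized data plays in the paper.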

Take-off and landing assistance system for efficient operation of compact drone CCTV in remote locations (원격지의 초소형 드론 CCTV의 효율적인 운영을 위한 이착륙 보조 시스템)

  • Byoung-Kug Kim
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.287-292 / 2023
  • In the case of fixed CCTV, shadow areas occur even if the visible range is maximized by using the pan-tilt and zoom functions. The typical solution is to use a plurality of fixed CCTVs, but this requires a large amount of additional equipment (e.g., wires, facilities, monitors, etc.) proportional to the number of CCTVs. Another solution is to use camera-equipped drones. However, a drone's operation time is very short. To extend coverage, multiple drones can be operated and flown one at a time; drones that need to recharge their batteries then return to a ready state at the drone port for the next operation. In this paper, we propose a system for precise positioning and stable landing on the drone port using a small drone equipped with a fixed forward-facing monocular camera. Finally, we implement and operate the proposed system and verify its feasibility.
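The abstract does not spell out how the forward-facing monocular camera is used, but one common realization is to detect the drone port (or a marker on it) in the image and convert its pixel offset from the image center into correction commands. The snippet below is a purely hypothetical sketch of that conversion step; the field-of-view values, gain, and pixel coordinates are illustrative assumptions, not parameters from the paper.

```python
def landing_correction(target_px, image_size, hfov_deg=62.2, vfov_deg=48.8, gain=0.5):
    """Convert the detected landing-target pixel position into proportional
    yaw/pitch correction commands (degrees).

    target_px:  (u, v) pixel of the detected drone-port target
    image_size: (width, height) of the camera frame
    """
    u, v = target_px
    w, h = image_size
    dx = (u - w / 2.0) / w            # horizontal offset, normalized to [-0.5, 0.5]
    dy = (v - h / 2.0) / h            # vertical offset, normalized to [-0.5, 0.5]
    yaw_err_deg = dx * hfov_deg       # positive -> port is to the right of center
    pitch_err_deg = dy * vfov_deg     # positive -> port is below center
    return gain * yaw_err_deg, gain * pitch_err_deg

# Example: target detected slightly right of and below the image center.
print(landing_correction((700, 300), (1280, 480)))
```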

A Case of Monocular Partial Oculomotor Nerve Palsy in a Patient with Midbrain Hemorrhage (중뇌 출혈 환자에서 나타난 단안의 부분 동안신경마비 여환 치험 1례)

  • Lee, Hyun-Joong;Lee, Bo-Yun;Lee, Young-eun;Yang, Seung-Bo;Cho, Seung-Yeon;Park, Jung-Mi;Ko, Chang-Nam;Park, Seong-Uk
    • The Journal of the Society of Stroke on Korean Medicine / v.16 no.1 / pp.103-109 / 2015
  • This report describes a case of monocular partial oculomotor nerve palsy in a patient with midbrain hemorrhage. The patient developed diplopia while driving. Brain MRI demonstrated a hemorrhage in the right midbrain and left corona radiata, and microbleeds in both cerebral and cerebellar hemispheres, the basal ganglia, the midbrain, and the pons. We used Korean medicine treatment modalities including acupuncture, electroacupuncture, pharmacoacupuncture, and herbal medicines. As a result, the limitation of upward gaze recovered to about 90% of the normal range.


3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.256-257 / 2022
  • In this paper, we introduce a method of extracting three-dimensional feature points through the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed based on feature points: feature points and descriptors are extracted and matched, the disparity is calculated from the matched feature points, and a depth value is generated. The 3D feature points are updated according to the camera movement. Finally, the feature points are reset when a scene change is detected. Through this process, an average of 73.5% of additional storage space can be secured in the keypoint database. By applying the proposed algorithm to the depth ground truth values and RGB images of the TUM Dataset, it was confirmed that there was an average distance difference of 26.88 mm compared with the 3D feature point results.

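The depth recovery step in this pipeline, computing disparity from matched feature points and back-projecting them with the estimated baseline, can be sketched as below. This is a simplified illustration that assumes rectified views, already matched keypoints, and known camera intrinsics; feature extraction, descriptor matching, and the scene change detection described in the paper are omitted.

```python
import numpy as np

def triangulate_matches(pts_left, pts_right, focal_px, cx, cy, baseline_m):
    """Back-project matched keypoints into 3D using their disparity.

    pts_left, pts_right: (N, 2) pixel coordinates of the same features in two
    rectified views separated by a horizontal baseline (from the device motion).
    Returns an (N, 3) array of (X, Y, Z) points in the left camera frame.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = np.clip(pts_left[:, 0] - pts_right[:, 0], 1e-6, None)  # horizontal shift in pixels
    z = focal_px * baseline_m / disparity                              # depth from Z = f * B / d
    x = (pts_left[:, 0] - cx) * z / focal_px
    y = (pts_left[:, 1] - cy) * z / focal_px
    return np.stack([x, y, z], axis=1)

if __name__ == "__main__":
    left = [[400.0, 250.0], [620.0, 300.0]]
    right = [[380.0, 250.0], [610.0, 300.0]]
    print(triangulate_matches(left, right, focal_px=500.0, cx=320.0, cy=240.0, baseline_m=0.1))
```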

The Obstacle Size Prediction Method Based on YOLO and IR Sensor for Avoiding Obstacle Collision of Small UAVs (소형 UAV의 장애물 충돌 회피를 위한 YOLO 및 IR 센서 기반 장애물 크기 예측 방법)

  • Uicheon Lee;Jongwon Lee;Euijin Choi;Seonah Lee
    • Journal of Aerospace System Engineering / v.17 no.6 / pp.16-26 / 2023
  • With the growing demand for unmanned aerial vehicles (UAVs), various collision avoidance methods have been proposed, mainly using LiDAR and stereo cameras. However, it is difficult to apply these sensors to small UAVs because of their weight and the lack of space. Recently proposed methods use a combination of object recognition models and distance sensors, but they lack information on obstacle size. This makes distance determination and obstacle coordinate estimation complicated in early-stage collision avoidance. We propose a method for estimating obstacle size using a monocular camera with YOLO and an infrared sensor. Our experimental results confirmed an accuracy of 86.39% within a distance of 40 cm. In addition, the proposed method was applied to a small UAV to confirm whether obstacle collisions could be avoided.
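The size estimation itself follows from the pinhole camera model once YOLO supplies a bounding box and the IR sensor supplies the distance: physical size is approximately pixel size times distance divided by focal length. The snippet below is a minimal sketch of that relation with made-up numbers; the paper's calibration and sensor fusion details are not reproduced.

```python
def obstacle_size_from_bbox(bbox_wh_px, distance_m, focal_px):
    """Estimate the physical width and height (m) of an obstacle from its detected
    bounding box and the measured distance, via size = pixel_size * distance / focal_length."""
    w_px, h_px = bbox_wh_px
    return w_px * distance_m / focal_px, h_px * distance_m / focal_px

# Example: a 120 x 200 px box detected at 0.4 m with a 600 px focal length
# gives an estimated obstacle size of about 0.08 m x 0.13 m.
print(obstacle_size_from_bbox((120, 200), 0.4, 600.0))
```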

Infrastructure 2D Camera-based Real-time Vehicle-centered Estimation Method for Cooperative Driving Support (협력주행 지원을 위한 2D 인프라 카메라 기반의 실시간 차량 중심 추정 방법)

  • Ik-hyeon Jo;Goo-man Park
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.1 / pp.123-133 / 2024
  • Existing autonomous driving technology has been developed based on sensors attached to the vehicle to detect the environment and formulate driving plans. However, it has limitations such as performance degradation in specific situations like adverse weather, backlighting, and obstruction-induced occlusion. To address these issues, cooperative autonomous driving technology, which extends the perception range of autonomous vehicles through the support of road infrastructure, has attracted attention. Nevertheless, the real-time analysis of the 3D centroids of objects, as required by international standards, is challenging with single-lens cameras. This paper proposes an approach to detect objects and estimate the centroid of vehicles in real time using the fixed field of view of road infrastructure cameras and pre-measured geometric information. The proposed method was confirmed, against GPS positioning equipment, to effectively estimate the center point of objects, and it is expected to contribute to the proliferation and adoption of cooperative autonomous driving infrastructure technology applicable to both vehicles and road infrastructure.
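One typical way to realize this, consistent with the pre-measured geometric information mentioned in the abstract, is to map image points onto the road plane with a calibrated homography and take the projected footprint of the detected vehicle as its center. The sketch below shows only the homography projection step; the 3x3 matrix and the pixel coordinates are placeholders, not values from the paper.

```python
import numpy as np

def pixel_to_ground(H, pixel_uv):
    """Project an image pixel onto road-plane coordinates (meters) using a
    pre-measured image-to-ground homography H (3x3)."""
    u, v = pixel_uv
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                     # perspective divide

if __name__ == "__main__":
    # Hypothetical calibration for a fixed infrastructure camera.
    H = np.array([[0.05, 0.0, -10.0],
                  [0.0, 0.08, -20.0],
                  [0.0, 0.0005, 1.0]])
    # Bottom-center pixel of a detected vehicle bounding box.
    print(pixel_to_ground(H, (640.0, 420.0)))
```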