• Title/Abstract/Keyword: Monocular Vision


Targetless displacement measurement of RSW based on monocular vision and feature matching

  • Yong-Soo Ha;Minh-Vuong Pham;Jeongki Lee;Dae-Ho Yun;Yun-Tae Kim
    • Smart Structures and Systems, Vol. 32, No. 4, pp. 207-218, 2023
  • Real-time monitoring of the behavior of reinforced soil retaining walls (RSW) is required for safety checks. In this study, a targetless displacement measurement technology (TDMT), consisting of an image registration module and a displacement calculation module, was proposed to monitor the behavior of RSW, in which facing displacement and settlement typically occur. Laboratory and field experiments were conducted to compare the measuring performance of natural targets (NT) with that of artificial targets (AT). Feature count- and location-based performance metrics and displacement calculation performance were analyzed to determine their correlations. The results of the laboratory and field experiments showed that the feature location-based performance metric was more relevant to displacement calculation performance than the feature count-based metric. The mean relative errors of the TDMT were less than 1.69% and 5.50% for the laboratory and field experiments, respectively. The proposed TDMT can accurately monitor the behavior of RSW for real-time safety checks.
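
The displacement-calculation step of such a feature-matching pipeline can be sketched as follows (an illustrative sketch on synthetic data, not the authors' TDMT; the robust median step and the point sets are our assumptions):

```python
import numpy as np

def estimate_displacement(pts_ref, pts_cur):
    """Estimate the in-plane displacement (dx, dy) of a wall region from
    matched feature locations in a reference and a current image.
    Taking the median over per-feature shifts suppresses bad matches."""
    shifts = pts_cur - pts_ref          # (N, 2) per-feature pixel shifts
    return np.median(shifts, axis=0)    # robust displacement estimate

# Synthetic check: 50 features moved by (3.0, -1.5) px, plus two outliers.
rng = np.random.default_rng(0)
pts_ref = rng.uniform(0, 640, size=(50, 2))
pts_cur = pts_ref + np.array([3.0, -1.5])
pts_cur[:2] += 80.0                     # simulate two mismatched features
dx, dy = estimate_displacement(pts_ref, pts_cur)
```

With a calibrated pixel-to-millimetre scale, the recovered (dx, dy) would map to physical facing displacement and settlement.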

Identification of structural systems and excitations using vision-based displacement measurements and substructure approach

  • Lei, Ying;Qi, Chengkai
    • Smart Structures and Systems, Vol. 30, No. 3, pp. 273-286, 2022
  • In recent years, vision-based monitoring has received great attention. However, structural identification using vision-based displacement measurements is far less established. In particular, simultaneous identification of structural systems and unknown excitations from vision-based displacement measurements remains challenging, since the unknown excitations do not appear directly in the observation equations. Moreover, measurement accuracy deteriorates over a wider field of view in vision-based monitoring, so only a portion of the structure is measured, rather than the whole structure, when using monocular vision. In this paper, the identification of structural systems and excitations using vision-based displacement measurements is investigated. It is based on a substructure identification approach to address the limited field of view of vision-based monitoring. For the identification of a target substructure, the substructure interaction forces are treated as unknown inputs. A smoothing extended Kalman filter with unknown inputs without direct feedthrough is proposed for the simultaneous identification of the substructure and the unknown inputs from vision-based displacement measurements. The smoothing makes the identification robust to measurement noise. The proposed algorithm is first validated by the identification of a three-span continuous beam bridge under an impact load, and then by the more difficult identification of a frame and unknown wind excitation. Both examples validate the good performance of the proposed method.
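
The core idea of estimating an unknown input alongside the state can be illustrated with a minimal linear Kalman filter that augments the state with the input (a hedged toy sketch, not the authors' smoothing EKF; the random-walk force model, dimensions, and noise levels are all assumptions):

```python
import numpy as np

# State is [position, velocity, force]; the unknown force is modeled as
# a random walk so the filter can adapt it from displacement data alone.
dt, m = 0.01, 1.0
A = np.array([[1.0, dt,  0.0],
              [0.0, 1.0, dt / m],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])        # only displacement is measured
Q = np.diag([1e-8, 1e-8, 1e-4])        # process noise; force entry lets it drift
R = np.array([[1e-4]])                 # displacement measurement noise

true_force = 2.0
x_true = np.zeros(2)                   # true [position, velocity]
x, P = np.zeros(3), np.eye(3)
rng = np.random.default_rng(1)
for _ in range(2000):
    # simulate the truth: constant unknown force acting on a unit mass
    x_true = np.array([x_true[0] + dt * x_true[1],
                       x_true[1] + dt * true_force / m])
    z = x_true[0] + rng.normal(0.0, 1e-2)
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # update with the displacement measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(3) - K @ H) @ P
```

After the run, `x[2]` converges toward the unknown force, showing how an input that never appears in the observation equation can still be recovered through the state dynamics.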

Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision

  • 서승범;강연식;노치원;강성철
    • Journal of Institute of Control, Robotics and Systems, Vol. 15, No. 5, pp. 480-487, 2009
  • Lane detection with vision is a difficult problem because of varying road conditions, such as shadowed road surfaces, changing lighting conditions, and signs painted on the road. In this paper, we propose a robust lane detection algorithm that overcomes the shadowed-road problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method to identify the locations where the lane is discontinued. The results of experiments, conducted in a region where GPS measurements are unreliable, show good performance in detecting and following the lane under complex conditions with shadows, water marks, and so on.
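
One way such a statistical method can cope with shadows is to threshold each image row relative to its own brightness statistics, so that a shadow darkening a whole region shifts the threshold with it (an assumed illustration, not the paper's algorithm):

```python
import numpy as np

def lane_mask(gray, k=2.0):
    """Mark pixels much brighter than their row's road surface.
    Per-row mean and std adapt the threshold to shadowed regions."""
    mu = gray.mean(axis=1, keepdims=True)      # per-row road brightness
    sigma = gray.std(axis=1, keepdims=True)
    return gray > mu + k * sigma               # True where a lane mark is

# Synthetic road: asphalt at 50, a shadowed band at 20, lane stripe at col 40.
road = np.full((4, 80), 50.0)
road[2:] = 20.0                                # lower half in shadow
road[:, 40] += 100.0                           # bright lane stripe
mask = lane_mask(road)
```

The stripe is detected in both the sunlit and the shadowed rows, whereas a single global threshold tuned for the sunlit half would miss the shadowed part of the lane.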

Novel Backprojection Method for Monocular Head Pose Estimation

  • Ju, Kun;Shin, Bok-Suk;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 13, No. 1, pp. 50-58, 2013
  • Estimating a driver's head pose is an important task in driver-assistance systems because it can provide information about where a driver is looking, thereby giving useful cues about the status of the driver (i.e., paying proper attention, fatigued, etc.). This study proposes a system for estimating the head pose using monocular images, which includes a novel use of backprojection. The system can use a single image to estimate a driver's head pose at a particular time stamp, or an image sequence to support the analysis of a driver's status. Using our proposed system, we compared two previous pose estimation approaches. We introduced an approach for providing ground-truth reference data using a mannequin model. Our experimental results demonstrate that the proposed system provides relatively accurate estimations of the yaw, tilt, and roll angle. The results also show that one of the pose estimation approaches (perspective-n-point, PnP) provided a consistently better estimate compared to the other (pose from orthography and scaling with iterations, POSIT) using our proposed system.
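
The reprojection-error principle underlying PnP-style pose estimation can be shown with a toy yaw-only example: project a rigid 3D point set under a known yaw, then recover that yaw by searching for the angle that minimizes reprojection error (assumed model points, focal length, and grid search; not the paper's PnP or POSIT implementations):

```python
import numpy as np

def project(points, yaw, f=500.0, z0=10.0):
    """Pinhole projection of 3D points rotated by `yaw` about the y-axis
    and placed z0 units in front of the camera."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    cam = points @ R.T + np.array([0.0, 0.0, z0])
    return f * cam[:, :2] / cam[:, 2:3]

# Asymmetric rigid "head" model (assumed coordinates for illustration).
model = np.array([[0, 0, 1], [1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0.5]], dtype=float)
obs = project(model, yaw=0.30)                 # "observed" image points

grid = np.linspace(-1.0, 1.0, 2001)            # 1 mrad grid over candidate yaws
errs = [np.sum((project(model, y) - obs) ** 2) for y in grid]
yaw_hat = grid[int(np.argmin(errs))]
```

Real PnP solvers minimize this same reprojection error over all six pose parameters at once rather than by grid search.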

Comparison of Accommodative Response among Emmetropes, Spectacle and Contact Lens Wearers

  • 이규병;박지현;김효진
    • Journal of Korean Ophthalmic Optics Society, Vol. 17, No. 4, pp. 403-410, 2012
  • Purpose: To compare the accommodative response among emmetropes, spectacle wearers, and contact lens wearers, and to examine the correlation between refractive error and accommodative lag within each group. Methods: 72 subjects (144 eyes) with no ocular disease and fully corrected refractive error were classified into emmetropes, spectacle wearers, and contact lens wearers based on refractive error measured with a closed-field autorefractor. Dominant eye, uncorrected visual acuity, and corrected visual acuity were then measured, and accommodative lag was calculated from the refractive error measured with an open-field autorefractor while fixating at distance/near (5 m/33 cm) monocularly and binocularly. Results: Accommodative lag showed no statistically significant difference between the right and left eyes or between the dominant and non-dominant eyes under either monocular or binocular viewing, and was larger under monocular viewing than under binocular viewing. Between sexes, accommodative lag under monocular viewing was larger in males, but there was no significant difference under binocular viewing. Myopes showed larger accommodative lag than emmetropes under both monocular and binocular viewing, but there was no significant difference between spectacle and contact lens wearers under either condition. Across all subjects, larger refractive error was associated with larger accommodative lag under both viewing conditions; the correlation was significant for emmetropes and contact lens wearers under both conditions, but not significant for spectacle wearers under either condition. Conclusions: Accommodative lag differed significantly between emmetropes and myopes, whereas among myopes there was no significant difference between spectacle and contact lens wearers. The refractive error of emmetropes and contact lens wearers was correlated with the accommodative response.

Single Image Depth Estimation With Integration of Parametric Learning and Non-Parametric Sampling

  • Jung, Hyungjoo;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society, Vol. 19, No. 9, pp. 1659-1668, 2016
  • Understanding the 3D structure of scenes is of great interest in various vision-related tasks. In this paper, we present a unified approach for estimating depth from a single monocular image. The key idea of our approach is to take advantage of both parametric learning and non-parametric sampling. Using a parametric convolutional network, our approach learns the relations among various monocular cues to make a coarse global prediction. We also leverage a local prediction, estimated in a non-parametric framework, to refine the global prediction. The integration of the local and global predictions is accomplished by concatenating the feature maps of the global prediction with those of the local ones. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively.

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation

  • 최혁두
    • The Journal of Korea Robotics Society, Vol. 14, No. 2, pp. 114-121, 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply deep learning to VO and MDE. Just a couple of years ago, the two were studied independently in a supervised way, but now they are coupled and trained together in an unsupervised way. However, before designing models and losses, researchers have to customize datasets for training and testing; after training, the model has to be compared with existing models, which is also a huge burden. The benchmark provides an input dataset, ready to use for VO and MDE research in 'tfrecords' format, and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performances presented in the corresponding papers, and we found that the evaluation results are inferior to the reported performances.

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems, Vol. 18, No. 6, pp. 794-802, 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. For generating depth maps with better details, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale and feature fusion modules. The attention module improves features based on coordinate attention to enhance the predicted effect, whereas the multi-scale module integrates useful low- and high-level contextual features with higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features to generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which contribute to preserving rich details. We conducted the experiments on public RGBD datasets, and the evaluation results show that the proposed scheme can considerably enhance the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
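
The reported numbers follow the standard depth-evaluation metrics; a minimal sketch of how the log10 error and the δ < 1.25³ threshold accuracy are computed (synthetic values for illustration):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Mean absolute log10 error, and the fraction of pixels whose
    prediction/ground-truth ratio falls within the 1.25**3 threshold."""
    log10_err = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(pred / gt, gt / pred)   # symmetric depth ratio
    delta3 = np.mean(ratio < 1.25 ** 3)        # fraction of "good" pixels
    return log10_err, delta3

gt = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.1, 2.0, 4.4, 20.0])         # last pixel badly wrong
log10_err, delta3 = depth_metrics(pred, gt)
```

Lower log10 and higher δ are better; δ < 1.25³ is the loosest of the three customary thresholds (δ < 1.25, 1.25², 1.25³), so values near 0.992 mean almost all pixels are within a factor of about 1.95 of the true depth.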

Radar and Vision Sensor Fusion for Primary Vehicle Detection

  • 양승한;송봉섭;엄재용
    • Journal of Institute of Control, Robotics and Systems, Vol. 16, No. 7, pp. 639-645, 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or travels alongside other preceding vehicles in the adjacent lane with similar velocity and range. To mitigate this performance degradation of the radar, vehicle detection information from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data collected on a highway.
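
A much-simplified sketch of the association step in such a fusion scheme (assumed gating distances and coordinates, and simple nearest-neighbor gating in place of the paper's probabilistic association filters):

```python
import numpy as np

def primary_vehicle(radar, vision, gate=2.0, lane_half_width=1.8):
    """Pick the closest in-lane radar track that is confirmed by a
    nearby vision detection; return its index, or None."""
    primary, best_range = None, np.inf
    for i, (rx, ry) in enumerate(radar):           # (longitudinal, lateral) [m]
        d = np.hypot(vision[:, 0] - rx, vision[:, 1] - ry)
        if d.min() > gate:                         # no vision confirmation
            continue
        if abs(ry) > lane_half_width:              # not in the ego lane
            continue
        if rx < best_range:                        # keep the nearest one
            primary, best_range = i, rx
    return primary

radar = np.array([[30.0, 0.2], [25.0, 3.5], [55.0, -0.1]])   # 3 radar tracks
vision = np.array([[29.5, 0.1], [24.8, 3.4], [54.6, 0.0]])   # vision detections
idx = primary_vehicle(radar, vision)
```

Here the nearer track at 25 m is rejected for being in the adjacent lane, so the 30 m in-lane, vision-confirmed track is selected as the primary vehicle.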

Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery, Vol. 3, No. 2, pp. 39-45, 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by a computer. Since the image a user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonize well with the real-view image. In this paper, we review several computer vision and graphics methods that give users a realistic augmented reality experience. To generate a visually harmonized synthetic image consisting of a real and a virtual image, the computer must know the 3D geometry and environmental information such as lighting or material surface reflectivity, and many computer vision methods aim to estimate these. We introduce approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect that this paper gives readers an intuition for the computer vision methods that provide a realistic augmented reality experience.