• Title/Summary/Keyword: Automatic Reference-Point Recognition (기준점 자동인식)

Search Results: 16

Concept Design of Angular Deviation and Development of Measurement System for Transparency in Aircraft (항공기 투명체의 편각개념 설계 및 측정 시스템 개발)

  • Moon, Tae-Sang;Woo, Seong-Jo;Kwon, Seong-Il;Ryu, Kwang-Yeol
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.11 / pp.1123-1129 / 2010
  • Angular Deviation (AD) on the transparency applied to the TA-50 aircraft degrades the armament system's accuracy because it creates a difference between the actual and theoretical targets. To increase accuracy, the TA-50 therefore measures AD on the transparency and provides the measured values to the integrated mission display computer as AD coefficients. The AD is thereby corrected so that pilots see the actual target accurately on their head-up display. To put this mechanism into practice, we developed, for the first time, a device and system that automatically measures AD. We also present the basic concepts, including the AD derivation formula, and the operating procedures. Testing the accuracy and precision of the system to verify its reliability gave satisfactory results: the accuracy was within the criterion of 1%, and the precision satisfied all criteria. The system developed through this research is qualified as military standard equipment for canopy transparencies.
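
The practical effect of AD on aiming can be illustrated with a small-angle calculation (a minimal sketch for intuition only, not the paper's derivation; the function name and units are illustrative):

```python
import math

def aiming_error(ad_mrad: float, target_range_m: float) -> float:
    """Lateral displacement (m) of the apparent target at a given range,
    caused by an angular deviation of ad_mrad milliradians through the
    canopy. Small-angle: error ~ range * tan(AD)."""
    return target_range_m * math.tan(ad_mrad / 1000.0)

# A 1 mrad deviation displaces the apparent target by about 1 m per km.
print(round(aiming_error(1.0, 1000.0), 3))
```

This is why even a small uncorrected AD matters at combat ranges: the error grows linearly with distance to the target.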

A Study on Factors Influencing Drone Mission Flight for Photogrammetry (Photogrammetry를 위한 드론 임무비행 영향인자 고찰)

  • Park, DongSoon;Kim, Taemin;Soh, Inho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / Fall / pp.9-12 / 2021
  • Drone photogrammetry is a technology of high practical value: the 3D digital spatial-information models it produces can be used for non-visual safety inspection and diagnosis of facilities, and they provide the most basic and essential numerical data for building digital twins. In this study, we examined the various mission-flight factors that influence the quality of drone photogrammetry. Using the K-water Research Institute leak-detection training site as the target, we studied the effects of flight altitude, flight speed, overlap, and camera pitch angle during drone photography. We found that the factors influencing flight time rank, in order of importance, as flight altitude, overlap, and flight speed. The factor with the greatest influence on post-processing results was overlap: mission flights with 60% overlap showed considerable geometry distortion in the 3D model. Flight altitude is important because it is directly linked to the GSD (Ground Sampling Distance); the lower the altitude, the higher the quality of the modeling. Automatic pattern recognition of ground control points via AprilTags proved useful, saving time in post-processing. Flight speed made little difference to the quality of the results, apart from slight differences at the edges of vertical structures. Differences in orthoimage quality by gimbal pitch angle were also small, but it is desirable to apply different shooting angles to vertical versus planar structures. We expect these results to contribute to optimal digital reality modeling through data collection in more diverse environments.

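The link between flight altitude and GSD noted in the abstract follows the standard photogrammetric relation; a minimal sketch (the sensor parameters below are illustrative, roughly those of a common survey-drone camera, not values from the paper):

```python
def gsd_cm(sensor_width_mm: float, image_width_px: int,
           focal_mm: float, altitude_m: float) -> float:
    """Ground sampling distance (cm/pixel) for a nadir shot:
    GSD = (sensor width * altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

# e.g. a 13.2 mm-wide sensor, 5472 px across, 8.8 mm lens, 60 m altitude
print(round(gsd_cm(13.2, 5472, 8.8, 60.0), 2))
```

Halving the altitude halves the GSD, which is why lower flights yield higher-quality models at the cost of longer flight time and more images.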

Research to improve the performance of self localization of mobile robot utilizing video information of CCTV (CCTV 영상 정보를 활용한 이동 로봇의 자기 위치 추정 성능 향상을 위한 연구)

  • Park, Jong-Ho;Jeon, Young-Pil;Ryu, Ji-Hyoung;Yu, Dong-Hyun;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.12 / pp.6420-6426 / 2013
  • Automatic monitoring systems are increasingly used in indoor commercial areas, and self-localization is essential for the mobile robots that operate in such environments. Existing localization and object-recognition methods commonly rely on the robot's own sensors, but estimating a robot's position indoors using onboard sensors alone is difficult. This paper therefore proposes an enhanced and effective self-localization method for a mobile robot that uses markers together with the CCTV cameras already installed in the building. After recognizing the mobile robot and the square marker in the input image and confirming the marker's vertices, the feature points of the marker are found and marker recognition is performed. The robot's position is first estimated from its relationship to the image marker, and a coordinate transformation is then applied, converting the estimate to an absolute coordinate value based on CCTV information about robots and obstacles. The results enable convenient self-position estimation of the robot indoors, and the proposed method was verified experimentally on an actual robot system.
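
The coordinate transformation from CCTV image to floor coordinates can be sketched with a planar homography (a hypothetical illustration, not the paper's implementation; the matrix `H` below is a made-up example that would in practice be calibrated from at least four known floor points):

```python
import numpy as np

# Example homography mapping CCTV pixels to floor coordinates (meters).
# In a real system H is calibrated once per fixed camera.
H = np.array([[0.01, 0.0,  -3.2],
              [0.0,  0.012, -2.4],
              [0.0,  0.0,    1.0]])

def pixel_to_floor(u: float, v: float) -> tuple:
    """Project an image point (u, v) onto floor coordinates (x, y)
    via the homography, normalizing the homogeneous result."""
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

print(pixel_to_floor(320.0, 200.0))
```

Because the CCTV camera is fixed, a single calibrated homography converts every detected marker position to the same absolute floor frame, which is what allows the robot's estimate to be expressed in absolute coordinates.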

Individual Ortho-rectification of Coast Guard Aerial Images for Oil Spill Monitoring (유출유 모니터링을 위한 해경 항공 영상의 개별정사보정)

  • Oh, Youngon;Bui, An Ngoc;Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1479-1488 / 2022
  • Oil-spill accidents occur intermittently at sea due to ship collisions and sinkings. To prepare prompt countermeasures when such an accident occurs, the current status of the spilled oil must be identified accurately. To this end, the Coast Guard patrols the target area with fixed-wing aircraft or helicopters and checks it with the naked eye or on video, but determining the area contaminated by the spilled oil and its exact location on a map has been difficult. Accordingly, this study develops a technique for direct ortho-rectification that automatically georeferences aerial images collected by the Coast Guard, without individual ground control points, to identify the status of spilled oil. First, the meta-information required for georeferencing is extracted by optical character recognition (OCR) from the visualized sensor-information overlay on the video. The exterior orientation parameters of each image are determined from the extracted information, and the images are then individually orthorectified using those parameters. The accuracy of the individual orthoimages generated this way was evaluated at roughly tens of meters, up to 100 m. Considering the inherent errors of the position and attitude sensors, the inaccuracies in interior orientation parameters such as camera focal length, and the absence of ground control points, this level of accuracy is reasonably acceptable and judged appropriate for identifying areas of the sea contaminated by spilled oil. In the future, if real-time transmission of images captured in flight becomes possible, individual orthoimages could be generated in real time with the proposed technique and used to quickly assess spilled-oil contamination and establish countermeasures.
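
The core of direct georeferencing without ground control points is intersecting each image ray with the sea surface using only the sensed camera position and attitude. A minimal sketch (not the paper's code; the coordinates, focal length, and identity attitude below are illustrative):

```python
import numpy as np

def ray_to_sea(camera_xyz: np.ndarray, R: np.ndarray,
               x_img: float, y_img: float, focal: float) -> tuple:
    """Ground (x, y) where the ray through image point (x_img, y_img)
    meets the sea surface (the z = 0 plane). R rotates camera-frame
    directions into the world frame; the camera looks down (-z)."""
    ray = R @ np.array([x_img, y_img, -focal])  # viewing direction, world frame
    t = -camera_xyz[2] / ray[2]                 # scale factor to reach z = 0
    ground = camera_xyz + t * ray
    return ground[0], ground[1]

# Level flight at 500 m with identity attitude: the principal point
# projects straight down to the camera's (x, y).
cam = np.array([1000.0, 2000.0, 500.0])
print(ray_to_sea(cam, np.eye(3), 0.0, 0.0, 0.1))
```

Errors in the sensed position and attitude propagate directly into the intersection point, which is consistent with the tens-of-meters accuracy the study reports when no ground control points are used.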

Comparison of Deep Learning Based Pose Detection Models to Detect Fall of Workers in Underground Utility Tunnels (딥러닝 자세 추정 모델을 이용한 지하공동구 다중 작업자 낙상 검출 모델 비교)

  • Jeongsoo Kim
    • Journal of the Society of Disaster Information / v.20 no.2 / pp.302-314 / 2024
  • Purpose: This study proposes a fall-detection model based on a top-down deep-learning pose-estimation model to automatically determine falls of multiple workers in an underground utility tunnel, and evaluates the proposed model's performance. Method: A model is presented that combines fall-discrimination rules with the results inferred by YOLOv8-pose, one of the top-down pose-estimation models, and its metrics are evaluated on images of two or fewer workers standing or fallen in the tunnel. The same process is also conducted for a bottom-up pose-estimation model (OpenPose). Because fall inference in both models depends on worker detection by YOLOv8-pose and OpenPose, metrics were investigated not only for falls but also for person detection. Result: For worker detection, the YOLOv8-pose and OpenPose models achieved F1-scores of 0.88 and 0.71, respectively; for fall detection, however, the scores dropped to 0.71 and 0.23. The poor results of the OpenPose-based model were due to partially detected worker bodies and to workers that were detected but not separated from one another correctly. Conclusion: With respect to joint recognition and separating workers from one another, a top-down pose-estimation model is the more effective way to detect worker falls in an underground utility tunnel.
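
A fall-discrimination rule of the kind that can sit on top of pose-estimation output might look as follows (a hypothetical sketch; the paper's actual rules and threshold are not reproduced here):

```python
import math

def is_fallen(shoulders, hips, threshold_deg: float = 45.0) -> bool:
    """Flag a fall when the torso axis, from mid-shoulder to mid-hip,
    is closer to horizontal than to vertical. Keypoints are ((x, y),
    (x, y)) pairs in image coordinates, with y growing downward."""
    msx = (shoulders[0][0] + shoulders[1][0]) / 2
    msy = (shoulders[0][1] + shoulders[1][1]) / 2
    mhx = (hips[0][0] + hips[1][0]) / 2
    mhy = (hips[0][1] + hips[1][1]) / 2
    angle = math.degrees(math.atan2(abs(mhy - msy), abs(mhx - msx)))
    return angle < threshold_deg  # torso nearer horizontal => fall

# Upright worker: shoulders directly above hips -> not fallen.
print(is_fallen(((100, 50), (140, 50)), ((105, 150), (135, 150))))
# Lying worker: torso roughly horizontal -> fallen.
print(is_fallen(((100, 200), (100, 230)), ((220, 205), (220, 235))))
```

A rule like this only works when the keypoints themselves are reliable, which is why the study's fall metrics degrade when the underlying detector finds workers only partially or merges them.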

Liver Splitting Using 2 Points for Liver Graft Volumetry (간 이식편의 체적 예측을 위한 2점 이용 간 분리)

  • Seo, Jeong-Joo;Park, Jong-Won
    • The KIPS Transactions: Part B / v.19B no.2 / pp.123-126 / 2012
  • This paper proposes a method to separate the liver into left and right lobes for simple and exact volumetry of the liver graft on abdominal MDCT (Multi-Detector Computed Tomography) images before living-donor liver transplantation. Using this algorithm, a medical team can evaluate the liver graft accurately with minimal interaction between the team and the system, helping ensure the donor's and recipient's safety. On the segmented liver image, two points (PMHV, a point in the Middle Hepatic Vein, and PPV, a point at the beginning of the right branch of the Portal Vein) are selected to separate the liver into left and right lobes. The middle hepatic vein is automatically segmented from PMHV, and the cutting line is decided on the basis of the segmented vein. The liver is separated by connecting the cutting line and PPV, and the volume and ratio of the liver graft are estimated. To demonstrate accuracy, the volume estimated using the two points was compared with the manual volume processed and estimated by a diagnostic radiologist and with the weight measured during surgery. The mean ± standard deviation of the difference between the actual weight and the estimated volume was 162.38 cm³ ± 124.39 for manual segmentation and 107.69 cm³ ± 97.24 for the two-point method. The correlation coefficient between the actual weight and the manually estimated volume is 0.79, while that between the actual weight and the two-point estimate is 0.87. After selection of the two points, the time taken to separate the liver into left and right lobes and compute their volumes was measured to confirm that the algorithm can be used in real time during surgery: the mean ± standard deviation of the processing time was 57.28 s ± 32.81 per data set (149.17 slices ± 55.92).
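
Once the liver is split into lobes, each lobe's volume follows directly from its voxel count and the CT voxel spacing. A minimal sketch (not the paper's implementation; the voxel counts and spacing below are illustrative):

```python
def lobe_volume_cm3(voxel_count: int,
                    spacing_mm: tuple = (0.7, 0.7, 5.0)) -> float:
    """Volume in cm^3 from a lobe's voxel count and the scan's
    (x, y, z) voxel spacing in millimeters."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxel_count * voxel_mm3 / 1000.0  # mm^3 -> cm^3

left, right = 180_000, 420_000  # illustrative voxel counts per lobe
total = lobe_volume_cm3(left) + lobe_volume_cm3(right)
ratio = lobe_volume_cm3(right) / total  # graft-to-whole-liver ratio
print(round(total, 1), round(ratio, 3))
```

The graft-to-whole-liver ratio computed this way is the quantity surgeons compare against safety thresholds when planning a living-donor transplant.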