• Title/Summary/Keyword: Vision Model (비전모델)

548 search results (processing time: 0.038 seconds)

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.4
    • /
    • pp.267-277
    • /
    • 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible with each other because of differences in camera lens distortion modeling methods and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer vision-based software, and plotting is then performed with photogrammetry-based software for mapping. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formulas, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients; the calculated root mean square error of the y-parallax was within 0.3 pixels.
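
A minimal sketch of the radial part of the Brown lens distortion model, which underlies the models in both fields; the coefficients and the fixed-point undistortion loop below are illustrative assumptions, not the paper's exact conversion formulas.

```python
def distort(x, y, k1, k2):
    """Apply radial lens distortion (Brown model) to normalized image coordinates."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iters=10):
    """Remove radial distortion by fixed-point iteration on the distorted coords."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round-trip check with illustrative coefficients
xd, yd = distort(0.3, -0.2, k1=-0.1, k2=0.05)
xu, yu = undistort(xd, yd, k1=-0.1, k2=0.05)
```

A round trip through `distort` and `undistort` should recover the original coordinates to well below a pixel, mirroring the sub-pixel residuals reported in the abstract.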

Development of two wheel vehicle using vision and inertial sensor (비전과 관성센서를 이용한 2 바퀴 이동장치 개발)

  • Kwon, Hye-Geun;Park, Sang-Kyeong;Suh, Young-Soo
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.1967-1968
    • /
    • 2006
  • In this paper, we develop a two-wheeled vehicle using vision and an inertial sensor, and propose a dynamic model for it. Because the body of the vehicle is directly connected to the wheel axle, conventional sensors that require a physical linkage cannot measure the tilt of the pendulum. An inertial sensor was therefore used to measure the tilt of the body. For more stable driving, vision was used to measure the inclination of the floor.
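
A common way to estimate body tilt from an inertial sensor is a complementary filter blending the gyroscope and accelerometer readings; the sketch below, with its blending constant and sample values, is an illustrative assumption rather than the paper's model.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend integrated gyro rate (trusted short-term) with the
    accelerometer-derived tilt angle (trusted long-term)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# The estimate drifts toward the accelerometer tilt when the gyro is quiet
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```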


A Method for Optimized Supervised Learning in Recyclable-PET Sorting based on Vision AI (비전 인공지능 기반의 Recyclable-PET 선별에서 최적의 감독학습 기법)

  • Kim, Ji Young;Ji, Min-Gu;Jung, Joong-Eun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.640-642
    • /
    • 2021
  • In a vision-based recyclable-PET sorting process, not only the ability to distinguish PET from other objects but also the ability to detect foreign substances, labels, and caps inside PET containers, as well as their color, has a significant effect on the quality of the recycled material. In this study, we propose a vision AI-based automatic sorting system for recyclable PET, along with a data labeling technique that optimizes the effectiveness of supervised learning when building the AI model. Through experiments reproducing the mixing of recyclable PET and containers with foreign-substance parts on a conveyor-belt sorting line, we examine how to build an AI model that maximizes both the volume and the purity of the recycled material.

Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor (k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술)

  • Bang, Hyuntae;Yu, Byeongjun;Jeon, Haemin
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.35 no.5
    • /
    • pp.259-266
    • /
    • 2022
  • In this study, we developed a deep learning and vision sensor-based assembly performance evaluation method for prefabricated steel structures. The assembly parts were segmented using a modified version of the receptive field block convolution module, inspired by the eccentric function of the human visual system. The quality of the assembly was evaluated by detecting the bolt holes in the segmented assembly part and calculating the bolt hole positions. To validate the performance of the evaluation, models of standard and defective assembly parts were produced using a 3D printer. The assembly part segmentation network was trained on 3D model images captured with a vision sensor. The bolt hole positions in the segmented assembly image were calculated using image processing techniques, and the assembly performance evaluation using the k-nearest neighbor algorithm was verified. The experimental results show that the assembly parts were segmented with high precision, and that the assembly performance, evaluated from the positions of the bolt holes in the detected assembly part, was classified with an error of less than 5%.
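
The classification step can be illustrated with a minimal k-nearest neighbor majority vote over bolt-hole position deviations; the feature values, labels, and choice of k below are illustrative assumptions, not the paper's data.

```python
import math

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest neighbors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Illustrative features: deviations (mm) of two bolt-hole centers from design positions
train = [
    ((0.1, 0.2), "normal"), ((0.0, 0.1), "normal"), ((0.2, 0.0), "normal"),
    ((2.1, 1.8), "defect"), ((1.9, 2.2), "defect"), ((2.3, 2.0), "defect"),
]
label = knn_classify(train, (0.15, 0.1))
```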

Target Tracking Control of a Quadrotor UAV using Vision Sensor (비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어)

  • Yoo, Min-Goo;Hong, Sung-Kyung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.40 no.2
    • /
    • pp.118-128
    • /
    • 2012
  • The goal of this paper is to design a target tracking controller for a quadrotor micro UAV using a vision sensor. First, a mathematical model of the quadrotor was estimated through the Prediction Error Method (PEM) using experimental input/output flight data, and the estimated model was validated by comparison with new experimental flight data. Next, the target tracking controller was designed using the LQR (Linear Quadratic Regulator) method based on the estimated model. The relative distance between an object and the quadrotor was obtained by a vision sensor, and the altitude was obtained by an ultrasonic sensor. Finally, the performance of the designed target tracking controller was evaluated through flight tests.
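
For a flavor of the LQR design step, here is a scalar discrete-time sketch that iterates the Riccati equation to a steady-state feedback gain; the dynamics coefficients and cost weights are illustrative assumptions, not the identified quadrotor model.

```python
def lqr_gain_scalar(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati equation to convergence and
    return the LQR state-feedback gain K, so that u = -K * x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Illustrative relative-distance dynamics: x[k+1] = x[k] + 0.1 * u[k]
K = lqr_gain_scalar(a=1.0, b=0.1, q=1.0, r=0.1)

# Closed-loop simulation: the tracking error x is driven toward zero
x = 5.0
for _ in range(100):
    x = 1.0 * x + 0.1 * (-K * x)
```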

Designing an Intelligent Data Coding Curriculum for Non-Software Majors: Centered on the EZMKER Kit as an Educational Resource (SW 비전공자 대상으로 지능형 데이터 코딩 교육과정 설계 : EZMKER kit교구 중심으로)

  • Seoung-Young Jang
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.5
    • /
    • pp.901-910
    • /
    • 2023
  • In universities, programming-language-based thinking and software education for non-majors are being implemented to cultivate creative, convergent talent capable of leading the digital convergence era of the Fourth Industrial Revolution. However, learners face difficulties in acquiring unfamiliar syntax and programming languages. The purpose of this study is to propose a software education model that alleviates the challenges non-major students face during learning. By introducing algorithm and diagram techniques based on programming-language thinking, and by using the EZMKER kit as an instructional tool, this study aims to overcome the lack of familiarity with programming languages and syntax. A structured software education model was accordingly designed and implemented as a top-down system learning model.

Deep Learning Model Selection Platform for Object Detection (사물인식을 위한 딥러닝 모델 선정 플랫폼)

  • Lee, Hansol;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal
    • /
    • v.8 no.2
    • /
    • pp.66-73
    • /
    • 2019
  • Recently, object recognition technology using computer vision has attracted attention as a replacement for sensor-based object recognition, which is often difficult to commercialize because it requires expensive sensors. Object recognition using computer vision can instead rely on inexpensive cameras. Moreover, real-time recognition has become viable thanks to the growth of CNNs, which are actively being introduced into other fields such as IoT and autonomous vehicles. However, because applying an object recognition model demands expert knowledge of deep learning to select and train the model, it is challenging for non-experts. Therefore, in this paper, we analyze the structure of deep-learning-based object recognition models and propose a platform that can automatically select a deep-learning object recognition model according to a user's desired conditions. Through experiments on different models, we also present why a statistics-based selection of object recognition models is needed.
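
The condition-based selection such a platform performs can be sketched as a filter-then-rank pass over a model catalogue; the model names and accuracy/speed numbers below are illustrative assumptions, not the platform's actual statistics.

```python
# Illustrative catalogue of detection models (names and numbers are assumed)
models = [
    {"name": "model_a", "mAP": 0.33, "fps": 45},
    {"name": "model_b", "mAP": 0.41, "fps": 20},
    {"name": "model_c", "mAP": 0.28, "fps": 90},
]

def select_model(models, min_fps):
    """Pick the most accurate model that still meets the user's speed requirement."""
    feasible = [m for m in models if m["fps"] >= min_fps]
    return max(feasible, key=lambda m: m["mAP"]) if feasible else None

choice = select_model(models, min_fps=30)
```

Requiring at least 30 fps rules out the most accurate model, so the best feasible one is returned instead.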

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departure and the misrecognition of traffic lights in a vision sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data were collected using a model car equipped with a vision sensor. From the collected data, two datasets were composed: the original data and the data with out-layers removed. With camera image data as input and speed and steering data as output, the end-to-end model was trained, and the reliability of the trained model was verified. The learned end-to-end model was then applied to the model car to predict steering angles from image data. The driving results of the model car show that the model trained on data with out-layers removed improves on the original model.
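
The abstract does not detail its out-layer removal criterion; a common stand-in is a z-score filter over the collected steering values, sketched here with illustrative data.

```python
import statistics

def remove_outliers(samples, z_max=2.0):
    """Drop samples lying more than z_max standard deviations from the mean."""
    mean = statistics.mean(samples)
    std = statistics.stdev(samples)
    return [s for s in samples if abs(s - mean) <= z_max * std]

# Illustrative steering values; 3.5 stands in for a spurious reading
steering = [0.1, 0.0, -0.1, 0.05, -0.05, 3.5]
cleaned = remove_outliers(steering)
```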

A study on basic software education applying a step-by-step blinded programming practice (단계적 블라인드 프로그래밍 실습과정을 적용한 소프트웨어 기초교육에 관한 연구)

  • Jung, Hye-Wuk
    • Journal of Digital Convergence
    • /
    • v.17 no.3
    • /
    • pp.25-33
    • /
    • 2019
  • Recently, universities have been strengthening basic software education to prepare students for the era of the Fourth Industrial Revolution. Non-major students need a variety of teaching methods because they have little knowledge of programming or a weak connection to their major courses. Therefore, in this paper, a learning model applying step-by-step blind programming practice based on the Demonstration-Modeling-Making model was designed and applied to an actual lecture. An analysis of the learners' problem-solving ability confirmed that the proportion of problems learners solved on their own increased as the weeks of the course progressed. In a follow-up study, it will be necessary to analyze learners' results from various aspects and to study effective teaching methods according to the difficulty of the learning content.

Deep Clustering Based on Vision Transformer(ViT) for Images (이미지에 대한 비전 트랜스포머(ViT) 기반 딥 클러스터링)

  • Hyesoo Shin;Sara Yu;Ki Yong Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.363-365
    • /
    • 2023
  • In this paper, we propose a Vision Transformer (ViT)-based deep clustering technique to overcome a limitation of ViT, which emerged as research applied the attention mechanism to image processing. ViT converts the patches of an input image into vectors and learns from them using only Transformers; since it does not use a convolutional neural network (CNN), it places no restriction on the input image size and shows high performance. However, it is difficult to train on small datasets. The proposed deep clustering technique first passes input images through an embedding model to extract embedding vectors and performs clustering, then updates the embedding vectors to reflect the clustering results, thereby improving the clustering, and repeats this process. Experiments confirmed that this improves the general pattern-recognition ability of the ViT model and yields more accurate clustering results.
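
The embed-cluster-update loop can be illustrated with plain k-means as the clustering step; the 2-D "embeddings" below stand in for ViT embedding vectors and are illustrative assumptions, not the paper's setup.

```python
import math

def assign(vectors, centers):
    """Assign each embedding vector to its nearest cluster center."""
    return [min(range(len(centers)), key=lambda c: math.dist(v, centers[c]))
            for v in vectors]

def update(vectors, labels, k, old_centers):
    """Recompute each center as the mean of its assigned vectors."""
    centers = []
    for c in range(k):
        members = [v for v, l in zip(vectors, labels) if l == c]
        if members:
            centers.append(tuple(sum(col) / len(members) for col in zip(*members)))
        else:
            centers.append(old_centers[c])
    return centers

# Illustrative 2-D "embeddings" forming two well-separated groups
vectors = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 5.0), (4.9, 5.2)]
centers = [vectors[0], vectors[3]]  # simple initialization
for _ in range(10):
    labels = assign(vectors, centers)
    centers = update(vectors, labels, 2, centers)
```

In the paper's scheme, the resulting cluster assignments would then feed back into the embedding model before the next round; here only the clustering step is shown.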