• Title/Summary/Keyword: 비전 센서 (vision sensor)


Improvement of Plane Tracking Accuracy in AR Game Using Magnetic Field Sensor (자기장 센서를 사용한 AR 게임에서의 평면 추적 정확도 개선)

  • Lee, Won-Jun;Park, Jong-Seung
    • Journal of Korea Game Society, v.19 no.5, pp.91-102, 2019
  • In this paper, we propose an improved plane tracking method for smartphone AR games that uses the magnetic field sensor. The previous ARCore-based method is a visual-inertial odometry (VIO) approach that combines SLAM with the smartphone's IMU. The inherent drawbacks of the accelerometer and gyroscope in the IMU cause errors in plane tracking. We propose an improved plane tracking method that adds the magnetic field sensor to the existing IMU sensors. Experimental results show that our method reduces the error of the smartphone pose estimation.
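
The drift correction the abstract describes can be pictured with a complementary filter that blends the gyroscope's integrated yaw with an absolute magnetometer heading. A minimal sketch, assuming a simple 1-D complementary filter (the function name, interface, and gain are illustrative, not from the paper):

```python
import math

def fuse_yaw(gyro_yaw_rate, mag_x, mag_y, prev_yaw, dt, alpha=0.98):
    """Correct gyroscope yaw drift with a magnetometer heading.

    gyro_yaw_rate: angular rate around the vertical axis [rad/s]
    mag_x, mag_y : horizontal magnetic field components
    alpha        : complementary-filter gain (illustrative value)
    """
    # Integrate the gyroscope (accurate short-term, drifts long-term).
    gyro_yaw = prev_yaw + gyro_yaw_rate * dt
    # Magnetometer heading (noisy short-term, but drift-free long-term).
    mag_yaw = math.atan2(mag_y, mag_x)
    # Blend the two estimates (angle wrap-around handling omitted for brevity).
    return alpha * gyro_yaw + (1.0 - alpha) * mag_yaw
```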

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology, v.28 no.3, pp.164-170, 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a filter robust for inner edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner-edge-detection filter to the region of interest. The proposed algorithm was compared with an existing algorithm in terms of the lateral-offset-based warning alarm occurrence time, and an average difference of approximately 15 ms was observed. Tests were also conducted to verify that a warning alarm is generated when the driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate average lane detection rates of approximately 97% in the daytime and 96% at night. Furthermore, the embedded system processes approximately 12 frames per second.
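
The inner-edge filtering and lateral-offset computation can be pictured roughly as follows; a sketch assuming a Sobel-based edge filter applied to a grayscale region of interest (the paper's actual robust filter and offset formula are not specified in the abstract):

```python
import cv2
import numpy as np

def lateral_offset(frame_gray, roi_top, roi_bottom):
    """Detect inner lane edges in an ROI and estimate the lateral
    offset of the vehicle from the lane center (in pixels)."""
    roi = frame_gray[roi_top:roi_bottom, :]
    # The horizontal gradient highlights near-vertical lane edges.
    grad = cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3)
    col_energy = np.abs(grad).sum(axis=0)        # edge energy per column
    mid = col_energy.size // 2
    left = int(np.argmax(col_energy[:mid]))       # strongest edge on the left
    right = mid + int(np.argmax(col_energy[mid:]))  # strongest edge on the right
    lane_center = (left + right) / 2.0
    return lane_center - mid  # positive: vehicle is left of the lane center
```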

Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation (도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반)

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology, v.28 no.2, pp.94-100, 2019
  • Drivable area detection is a major task in advanced driver assistance systems. Several studies have proposed vision-sensor-based approaches for drivable area detection. However, conventional drivable area detection methods that use vision sensors are not suitable for environments in which the road elevation changes. In addition, if the boundary between the road and vegetation is not clear, vegetation may be misjudged as drivable area. Therefore, this study proposes an accurate method of detecting drivable areas in environments with changing road elevation and existing vegetation. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. When the proposed vegetation area removal method is also applied, the average accuracy and recall improve further, by 6.43%p and 9.68%p, respectively.
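
The reported gains are in percentage points (%p) of accuracy and recall. For reference, a minimal sketch of how these two metrics are computed on binary drivable-area masks (the exact KITTI benchmark evaluation protocol may differ):

```python
import numpy as np

def accuracy_recall(pred, gt):
    """Pixel-wise accuracy and recall for binary drivable-area masks.

    pred, gt: 2D arrays where True/1 marks drivable pixels.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    accuracy = (pred == gt).mean()                 # correctly labeled pixels
    recall = (pred & gt).sum() / max(gt.sum(), 1)  # drivable pixels recovered
    return accuracy, recall
```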

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.1099-1110, 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that perturb the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only on the targeted model. An attack on the sensor fusion stage, by contrast, can cascade errors into the vision tasks performed after fusion, a risk that needs to be considered. In addition, an attack on the LiDAR point cloud, which is difficult to inspect visually, is hard to recognize as an attack. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR point cloud. Attack performance experiments across scaling sizes and algorithms show that the attack induces fusion errors in more than 77% of cases on average.
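
A loose illustration of the core idea, assuming the attack can be reduced to scaling the coordinates of the input point cloud (the paper's actual image-scaling-based perturbation of LCCNet's LiDAR input is more involved; the interface and factor below are assumptions):

```python
import numpy as np

def scaling_attack(points, scale=1.05):
    """Perturb a LiDAR point cloud by scaling its coordinates.

    points: (N, 3) array of x, y, z coordinates
    scale : scaling factor (illustrative value); even a small scale
            shifts the projected points and can corrupt the extrinsics
            predicted by a camera-LiDAR calibration network.
    """
    return points * scale
```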

Development of a Vision Based Machine Tool Presetter (영상 기반 머신툴 프리세터 개발)

  • Jung, Ha-Hyoung;Kim, Tae-Tean;Park, Jin-Ha;Lyou, Joon
    • Journal of Korea Society of Industrial Information Systems, v.19 no.3, pp.49-56, 2014
  • Generally, a tool presetter is utilized to align a machine tool and measure some of its specific dimensions. Presetters are classified into two types (contact and contactless) according to the measurement method, and the optical-sensor-based contactless scheme has the advantages of measurement flexibility and convenience. This paper describes the design and realization of an industrial tool presetter using machine vision and a linear scale. Before measurement, the target tool is attached to the mechanical mount and aligned with the optical apparatus. After capturing tool images, the suggested image processing algorithm calculates the tool's dimensions accurately, combining them with the traversing distance from the linear scale. Experimental results confirm that the present tool presetter system achieves a precision within ±20 μm.
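
One way to picture the contactless measurement step: threshold the tool's backlit silhouette and convert its pixel width to millimeters. A minimal sketch, assuming a grayscale image and a pre-calibrated pixel scale (names and interface are illustrative, not the paper's algorithm):

```python
import cv2
import numpy as np

def tool_diameter_mm(tool_image, mm_per_pixel):
    """Measure a tool's diameter from its backlit silhouette.

    mm_per_pixel: scale factor from a prior calibration (illustrative;
                  the paper combines image measurements with the
                  traversing distance from the linear scale).
    """
    # Otsu thresholding separates the dark tool from the bright backlight.
    _, mask = cv2.threshold(tool_image, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    widths = (mask > 0).sum(axis=1)      # silhouette width on each row
    return widths.max() * mm_per_pixel   # widest cross-section, in mm
```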

Real-Time Mapping of Mobile Robot on Stereo Vision (스테레오 비전 기반 이동 로봇의 실시간 지도 작성 기법)

  • Han, Cheol-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.1, pp.60-65, 2010
  • This paper describes 2D mapping, feature detection, and matching results for reconstructing the surrounding environment with a stereo camera mounted on a mobile robot. For fast real-time operation, image features are extracted using edge detection, and stereo matching is performed with the Sum of Absolute Differences (SAD) and the correlation coefficient. The location of the mobile robot is estimated by a Kalman filter using ZigBee beacons and the encoders mounted on the robot. In addition, a gyroscope is fused in for heading measurement, which makes it possible to generate the map while the mobile robot is moving. The resulting Simultaneous Localization and Mapping (SLAM) technology can serve as a basis for intelligent robots to be applied efficiently in human life.
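
The SAD stereo matching the abstract mentions is a classic block-matching scheme: for each reference block in the left image, search along the epipolar line in the right image for the block with the smallest sum of absolute differences. A minimal per-pixel sketch on rectified images (block size and disparity range are illustrative):

```python
import numpy as np

def sad_disparity(left, right, x, y, block=5, max_disp=32):
    """Disparity of pixel (x, y) by SAD block matching on rectified
    grayscale images (assumes x, y are at least block//2 from the border)."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best, best_d = None, 0
    for d in range(min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        sad = np.abs(ref - cand).sum()   # Sum of Absolute Differences
        if best is None or sad < best:
            best, best_d = sad, d
    return best_d
```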

Performance Comparison of Depth Map Based Landing Methods for a Quadrotor in Unknown Environment (미지 환경에서의 깊이지도를 이용한 쿼드로터 착륙방식 성능 비교)

  • Choi, Jong-Hyuck;Park, Jongho;Lim, Jaesung
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.50 no.9, pp.639-646, 2022
  • Landing site searching algorithms are developed for a quadrotor using a depth map in an unknown environment. The guidance and control system of the unmanned aerial vehicle (UAV) consists of a trajectory planner, a position controller, and an attitude controller. The landing site is selected based on the information of the depth map, which is acquired by a stereo vision sensor attached to a downward-pointing gimbal system. Flatness information is obtained from the maximum depth difference within a predefined depth map region, and the distance from the UAV is also considered. This study proposes three landing methods and compares their performance using various indices such as UAV travel distance, map accuracy, and obstacle response time.
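
The flatness criterion, the maximum depth difference within a predefined region, can be sketched directly; a minimal version that scans non-overlapping windows of a depth map (window size and stride are illustrative, and the distance-from-UAV term is omitted):

```python
import numpy as np

def flattest_region(depth_map, win=32):
    """Find the window with the smallest maximum depth difference.

    Returns the (row, col) of the flattest window's top-left corner
    and its flatness value (smaller means flatter).
    """
    best, best_rc = None, (0, 0)
    rows, cols = depth_map.shape
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            patch = depth_map[r:r + win, c:c + win]
            flatness = patch.max() - patch.min()  # max depth difference
            if best is None or flatness < best:
                best, best_rc = flatness, (r, c)
    return best_rc, best
```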

A Real-time Augmented Video System using Chroma-Pattern Tracking (색상패턴 추적을 이용한 실시간 증강영상 시스템)

  • 박성춘;남승진;오주현;박창섭
    • Journal of Broadcast Engineering, v.7 no.1, pp.2-9, 2002
  • Recently, VR (Virtual Reality) applications such as virtual studios and virtual characters have been widely used in TV programs, and AR (Augmented Reality) applications are also attracting increasing interest. This paper introduces a virtual screen system, which is a new AR application for broadcasting. The virtual screen system is a real-time video augmentation system that tracks a chroma-patterned moving panel. We have recently developed such a virtual screen system, 'K-vision'. Our system enables the user to hold and move a simple panel on which live video, pictures, or 3D graphics images can appear. All the images seen on the panel change in the correct perspective, according to the movements of the camera and of the user holding the panel, in real time. For tracking the panel, we use computer vision techniques such as blob analysis and feature tracking. K-vision works well with any type of camera, requires no special add-ons, needs no sensor attachments to the panel, and requires no calibration procedures. We are using K-vision in TV programs such as election coverage, documentaries, and entertainment.
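
Once the panel's corners are recovered by blob analysis, rendering content in correct perspective amounts to a homography warp. A minimal OpenCV sketch, assuming the corner tracking is already done (the function name and corner convention are illustrative, not K-vision's implementation):

```python
import cv2
import numpy as np

def augment_panel(frame, overlay, panel_corners):
    """Warp an overlay image onto a tracked panel in correct perspective.

    panel_corners: 4x2 array of the panel's corners in the frame,
                   as recovered by chroma-key blob analysis
                   (order assumed: TL, TR, BR, BL).
    """
    h, w = overlay.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(panel_corners))
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(overlay, H, size)
    # Warp a mask too, so only panel pixels are replaced.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    frame[mask > 0] = warped[mask > 0]
    return frame
```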

An Accurate Extrinsic Calibration of Laser Range Finder and Vision Camera Using 3D Edges of Multiple Planes (다중 평면의 3차원 모서리를 이용한 레이저 거리센서 및 카메라의 정밀 보정)

  • Choi, Sung-In;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering, v.4 no.4, pp.177-186, 2015
  • For data fusion of a laser range finder (LRF) and a vision camera, accurate calibration of the external parameters that describe the relative pose between the two sensors is necessary. This paper proposes a new calibration method that acquires more accurate external parameters between an LRF and a vision camera than other existing methods. The main idea of the proposed method is that any corner point of a known 3D structure acquired by the LRF should project onto a straight line in the camera image. To satisfy this constraint, we propose a 3D geometric model and a numerical solution that minimizes the energy function of the model. In addition, we describe the implementation steps of the LRF data and camera image acquisition that are necessary for accurate calibration. The experimental results show that the proposed method performs better in terms of accuracy than other conventional methods.
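
The constraint that projected LRF corner points must lie on a straight image line leads naturally to a point-to-line energy. A minimal sketch of such an energy term, assuming a pinhole projection and a normalized line equation (the paper's full 3D geometric model and solver are not reproduced here):

```python
import numpy as np

def edge_energy(points_lrf, R, t, K, line):
    """Sum of squared distances between projected LRF corner points
    and an image line -- the kind of constraint the calibration minimizes.

    points_lrf: (N, 3) corner points measured by the LRF
    R, t      : extrinsic rotation (3x3) and translation (3,)
    K         : camera intrinsic matrix (3x3)
    line      : image line (a, b, c) with a*u + b*v + c = 0, a^2 + b^2 = 1
    """
    cam = points_lrf @ R.T + t         # LRF frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]      # normalize to pixel coordinates
    a, b, c = line
    d = uv[:, 0] * a + uv[:, 1] * b + c  # signed point-to-line distance
    return float((d ** 2).sum())
```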

Smart Ship Container With M2M Technology (M2M 기술을 이용한 스마트 선박 컨테이너)

  • Sharma, Ronesh;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.3, pp.278-287, 2013
  • Modern information technologies continue to provide industries with new and improved methods. With the rapid development of Machine-to-Machine (M2M) communication, a smart container supply chain management system is formed based on high-performance sensors, computer vision, Global Positioning System (GPS) satellites, and Global System for Mobile (GSM) communication. Existing supply chain management is limited in its ability to track containers in real time. This paper focuses on the study and implementation of real-time container chain management, with the development of a container identification system and an automatic alert system for interrupts as well as for normal periodic alerts. The concept and methods of smart container modeling are introduced together with the structure, prior to the implementation of the smart container tracking alert system. First, the paper introduces the container code identification and recognition algorithm, implemented in Visual Studio 2010 with OpenCV (computer vision library) and Tesseract (OCR engine) for real-time operation. Second, it discusses the current automatic alert systems provided for real-time container tracking and their limitations. Finally, the paper summarizes the challenges and the possibilities for future work on real-time container tracking solutions using ubiquitous mobile and satellite networks together with high-performance sensors and computer vision. All of these components combine to provide excellent supply chain management with outstanding operation and security.
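
The container code recognition step combines OpenCV preprocessing with the Tesseract OCR engine. A minimal Python sketch of that idea (the paper's implementation is in Visual Studio 2010; the preprocessing and Tesseract options below are illustrative assumptions):

```python
import cv2
import pytesseract

def read_container_code(image_path):
    """Recognize an ISO 6346-style container code from an image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize to separate the painted code from the container surface.
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Treat the crop as one text line and restrict Tesseract to the
    # capital letters and digits used in container codes.
    config = ("--psm 7 -c tessedit_char_whitelist="
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
    return pytesseract.image_to_string(binary, config=config).strip()
```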