• Title/Abstract/Keyword: depth information sensors

Search results: 95 (processing time: 0.024 sec)

Correlation Analysis on the Water Depth and Peak Data Value of Hyperspectral Imagery

  • 강준구;이창훈;여홍구;김종태
    • Ecology and Resilient Infrastructure
    • /
    • Vol. 6, No. 3
    • /
    • pp.171-177
    • /
    • 2019
  • Hyperspectral imagery allows finer analysis than conventional multispectral imagery and is useful for examining surface properties that are difficult to detect. The purpose of this study was to obtain river environment information from field depth measurements and drone-based imagery. A drone-mounted hyperspectral sensor acquired image values at 100 points along one cross-section, and their correlation with the actual water depths obtained by ADCP was analyzed. The ADCP measurements showed the depth increasing toward the center of the channel, with a mean depth of 0.81 m. In the hyperspectral analysis, the highest peak-intensity value was 645 and the lowest was 278, and the correlation analysis between the measured depths and the imagery showed that water depth increases as the peak-intensity value decreases.
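The relationship this abstract describes (peak intensity falling as depth grows) can be checked with a plain Pearson correlation. The numbers below are illustrative stand-ins, not the study's data; only the reported extremes (645 and 278) anchor the intensity range:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical cross-section samples: peak intensity vs ADCP depth (m).
peak_intensity = [645, 590, 520, 460, 400, 350, 310, 278]
water_depth_m  = [0.35, 0.48, 0.60, 0.72, 0.85, 0.95, 1.05, 1.12]

r = pearson(peak_intensity, water_depth_m)   # strongly negative
```

A strongly negative `r` reproduces the paper's qualitative finding: depth increases as the peak-intensity value decreases.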

AR Anchor System Using Mobile Based 3D GNN Detection

  • Jeong, Chi-Seo;Kim, Jun-Sik;Kim, Dong-Kyun;Kwon, Soon-Chul;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 13, No. 1
    • /
    • pp.54-60
    • /
    • 2021
  • AR (Augmented Reality) is a technology that overlays virtual content on the real world, providing additional information about objects in real time through 3D content. In the past, a high-performance device was required to experience AR, but improvements in mobile performance and the addition of sensors such as ToF (Time-of-Flight) have made AR easier to implement. The importance of mobile augmented reality is also growing with the commercialization of high-speed wireless Internet such as 5G. This paper therefore proposes a system that provides AR services via a GNN (Graph Neural Network) using the cameras and sensors on mobile devices. The ToF sensor of the mobile device is used to capture depth maps, and a 3D point cloud is created from RGB images to distinguish the specific colors of objects. The point clouds created from the RGB images and depth map are downsampled for smooth communication between the mobile device and the server. The point clouds sent to the server are used for 3D object detection: the detection process determines the class of each object and uses one point of its 3D bounding box as an anchor point. AR content is then provided through an app and the web using the class and anchor of the detected object.
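The downsampling step the abstract mentions is commonly done with a voxel grid: keep one representative point per occupied cell so the mobile device sends far fewer points to the server. This is a generic sketch of that idea, not the paper's implementation, and the voxel sizes are arbitrary:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point per occupied voxel.
    points: (N, 3) array; voxel: cell edge length (units assumed metres)."""
    keys = np.floor(points / voxel).astype(np.int64)       # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)    # first point per voxel
    return points[np.sort(idx)]

rng = np.random.default_rng(0)
cloud = rng.random((10000, 3))        # stand-in for an RGB-D point cloud
small = voxel_downsample(cloud, 0.1)  # 0.1-edge grid over the unit cube
```

With a 0.1 voxel over the unit cube there are at most 1000 occupied cells, so at most 1000 points survive, a >90% reduction here.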

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. For autonomous vehicles in particular, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the Red-Green-Blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification from the integrated vision and LiDAR data, and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach.
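The pipeline of upsampling sparse LiDAR returns to pixel-level depth and stacking them with RGB can be sketched as below. The nearest-valid fill is a simple stand-in for whatever upsampling scheme the paper actually uses, and all sizes and values are toy data:

```python
import numpy as np

H, W = 8, 8
rgb = np.zeros((H, W, 3), dtype=np.float32)          # dummy camera image
sparse = np.full((H, W), np.nan, dtype=np.float32)   # projected LiDAR hits
sparse[::4, ::4] = 5.0                               # sparse range returns (m)

# Nearest-valid fill: assign each pixel the depth of the closest LiDAR hit.
rows, cols = np.nonzero(~np.isnan(sparse))
yy, xx = np.mgrid[0:H, 0:W]
nearest = np.argmin((yy[..., None] - rows) ** 2 +
                    (xx[..., None] - cols) ** 2, axis=-1)
dense = sparse[rows[nearest], cols[nearest]]         # pixel-level depth map

# Concatenate depth with RGB: the (H, W, 4) tensor a CNN would consume.
rgbd = np.concatenate([rgb, dense[..., None]], axis=-1)
```

The key point is that after upsampling, depth is just a fourth channel alongside R, G, and B.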

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View

  • 최재훈;이덕우
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • Vol. 21, No. 6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper presents a method for registering the images acquired by a LIDAR sensor and a conventional camera (RGB sensor), and for generating a depth map corresponding to the color image acquired by the camera. The study uses a Slamtec RPLIDAR A3 and an ordinary digital camera; the two kinds of sensors acquire and provide information of different characteristics and forms. The LIDAR sensor provides the distances from the LIDAR to objects and surrounding obstacles, while the digital camera provides the Red, Green, and Blue values of a two-dimensional image. Registering the information from these two different sensor types can improve performance in object detection and tracking, and is expected to be widely applicable in areas that require visual information processing, such as autonomous vehicles and robots. Registering the information provided by the two sensors requires processing the data acquired by each sensor into a form suitable for registration. This paper presents a preprocessing method that produces the registered result of the information acquired by the two sensors, together with experimental results.
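A minimal version of this kind of registration preprocessing, assuming the RPLIDAR scan arrives as (angle, range) pairs and a forward-facing pinhole camera with made-up intrinsics (`fx`, `cx`) co-located with the LiDAR, could look like the sketch below. It illustrates the general polar-to-image projection, not the paper's actual method:

```python
import math

def project_scan(scan, fx=600.0, cx=320.0, width=640):
    """Project 2-D LiDAR returns (angle_deg, range_m) onto the columns of a
    forward-facing pinhole camera image. fx/cx are assumed intrinsics; the
    LiDAR and camera origins are assumed coincident for simplicity."""
    cols = {}
    for angle_deg, r in scan:
        a = math.radians(angle_deg)
        x, z = r * math.sin(a), r * math.cos(a)      # right, forward (m)
        if z > 0:                                    # keep points in front only
            u = round(fx * x / z + cx)               # pinhole projection
            if 0 <= u < width:
                cols[u] = min(r, cols.get(u, float("inf")))  # keep nearest
    return cols  # image column -> range: a one-row depth map

# 0 deg is straight ahead; the 180 deg return lies behind the camera.
depth_cols = project_scan([(0.0, 2.0), (10.0, 2.5), (180.0, 1.0)])
```

Each surviving scan point lands in an image column, giving the sparse depth values that a full depth map would then be interpolated from.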

A Study on the 3-D Localization of an AUV Based on a Mother Ship

  • 임종환;강철웅;김성근
    • Journal of Ocean Engineering and Technology
    • /
    • Vol. 19, No. 2
    • /
    • pp.74-81
    • /
    • 2005
  • A 3-D localization method for an autonomous underwater vehicle (AUV) has been developed that overcomes the limitations of conventional localization schemes, such as LBL or SBL, which reduce the flexibility and availability of the AUV. The system is composed of a mother ship (a small unmanned marine prober) on the surface of the water and an unmanned underwater vehicle in the water. The mother ship is equipped with a digital compass and a GPS for position information, and an extended Kalman filter is used for position estimation. For the localization of the AUV, only non-inertial sensors are used: a digital compass, a pressure sensor, a clinometer, and ultrasonic sensors. From the orientation and velocity information, the a priori position of the AUV is estimated by dead reckoning. Based on the extended Kalman filter algorithm, the a posteriori position of the AUV is then updated using the distance between the AUV and the mother ship on the surface of the water, together with the depth information from the pressure sensor.
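The dead-reckoning step for the a priori position can be sketched as follows. The units, the heading convention (measured from the x-axis), and taking depth directly from the pressure sensor are illustrative assumptions; the abstract does not give the paper's exact conventions:

```python
import math

def dead_reckon(pos, heading_deg, speed, depth, dt):
    """A priori AUV position update: planar dead reckoning from compass
    heading (degrees from the x-axis) and speed (m/s) over dt seconds,
    with depth (m) taken directly from the pressure sensor."""
    x, y, _ = pos
    h = math.radians(heading_deg)
    return (x + speed * math.cos(h) * dt,
            y + speed * math.sin(h) * dt,
            depth)

p = (0.0, 0.0, 0.0)
p = dead_reckon(p, 90.0, 1.5, 4.0, 2.0)  # 90 deg heading, 1.5 m/s for 2 s
```

An EKF would then correct this prediction using the measured AUV-to-mother-ship distance.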

Path Estimation Method in Shadow Area Using Underwater Positioning System and SVR

  • 박영식;송준우;이동혁;이장명
    • The Journal of Korea Robotics Society
    • /
    • Vol. 12, No. 2
    • /
    • pp.173-183
    • /
    • 2017
  • This paper proposes an integrated positioning system to localize a moving object in the shadow area that exists in a water tank. A new water tank for underwater robots was constructed to evaluate the navigation performance of underwater vehicles, and several sensors were integrated into it to provide position information for the vehicles. However, there are areas where localization becomes very poor, since only a limited set of sensors, such as sonar and depth sensors, is effective in the underwater environment, and the sonar data contain many disturbances. To reduce these disturbances, an extended Kalman filter was adopted in this research. To localize the underwater vehicles under these hostile conditions, SVR (Support Vector Regression) was systematically applied to estimate the position stochastically. To demonstrate the performance of the proposed algorithm (an extended Kalman filter combined with SVR analysis), a new UI (User Interface) was also developed.
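The disturbance-reduction step can be illustrated with a scalar Kalman filter on noisy sonar ranges. The noise parameters `q` and `r` and the data are illustrative, and the paper's full method additionally applies SVR, which is omitted here:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter smoothing noisy sonar range readings.
    q: process-noise variance, r: measurement-noise variance (illustrative)."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                    # predict: covariance grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward measurement z
        p *= (1 - k)              # covariance shrinks after update
        out.append(x)
    return out

noisy = [5.0, 5.4, 4.7, 5.2, 9.0, 5.1, 4.9]   # 9.0 is a sonar glitch
smooth = kalman_1d(noisy)
```

Because the gain falls as the filter converges, the 9.0 outlier is damped rather than followed, which is the effect the abstract relies on before the SVR stage.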

Semantic Depth Data Transmission Reduction Techniques using Frame-to-Frame Masking Method for Light-weighted LiDAR Signal Processing Platform

  • 정태원;박대진
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 25, No. 12
    • /
    • pp.1859-1867
    • /
    • 2021
  • Multiple LiDAR sensors are being mounted on autonomous vehicles, and as their number grows, a system to preprocess their output is required. When the sensor data passes through such a preprocessing system and is delivered to or processed by the main processor, the sheer volume of data places a heavy load both on the transmission network and on the main processor. To minimize this load, we transmit only meaningful data, selected by comparing the LiDAR data between frames. When the data of up to four LiDAR sensors was processed in a static experimental environment with no moving objects and in a dynamic environment with a person moving within the sensors' field of view, the transmitted data was reduced from 232,104 bytes to 26,110 bytes (about 89.5%) in the static environment, and to 29,179 bytes (about 88.1%) in the dynamic environment.
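The frame-to-frame masking idea reduces to transmitting only the points whose measured range changed since the previous frame. A toy sketch of that comparison (the threshold and data are illustrative, not the paper's):

```python
def mask_frame(prev, curr, threshold=0.05):
    """Return (index, range) pairs for LiDAR points whose range changed by
    more than `threshold` (m) since the previous frame; unchanged points
    are masked out and never transmitted."""
    return [(i, r) for i, (p, r) in enumerate(zip(prev, curr))
            if abs(r - p) > threshold]

prev_frame = [2.00, 3.10, 4.50, 1.20]    # ranges (m) in frame t-1
curr_frame = [2.01, 3.10, 3.80, 1.21]    # ranges (m) in frame t
delta = mask_frame(prev_frame, curr_frame)   # only index 2 moved
```

In a static scene almost every point is masked, which matches the ~89% reduction the paper reports; a moving object unmasks only the points it touches.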

CALOS: Camera And Laser for Odometry Sensing

  • 복윤수;황영배;권인소
    • The Journal of Korea Robotics Society
    • /
    • Vol. 1, No. 2
    • /
    • pp.180-187
    • /
    • 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. The 2D laser sensor provides accurate depth information on a plane, not the whole 3D structure; in contrast, the CCD cameras provide the projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points among the scan data and their corresponding image points, and is then refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as for 3D reconstruction.
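A closed-form motion estimate from point correspondences, of the kind that is then refined by non-linear optimization, can be sketched with the standard Kabsch/SVD solution for a rigid transform. This is the generic algorithm, not necessarily the paper's exact formulation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) with dst ~= R @ src_i + t, from
    3-D point correspondences, via the Kabsch/SVD closed form."""
    cs, cd = src.mean(0), dst.mean(0)            # centroids
    H = (src - cs).T @ (dst - cd)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic check: rotate three points 30 deg about z and translate them.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)                 # recovers R_true and t
```

Three correspondences are the minimum that determines the motion, which is why the scheme described above estimates from three scan points and then refines.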


Real-time Monitoring of Grab Dredging Operation Using ECDIS

  • 정기원;이대재;정봉규;이유원
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • Vol. 43, No. 2
    • /
    • pp.140-148
    • /
    • 2007
  • This paper describes the real-time monitoring of dredging information for a grab bucket dredger equipped with winch control sensors and a differential global positioning system (DGPS), using an electronic chart display and information system (ECDIS). The experiment was carried out at Gwangyang Hang and Oho-ri, Gangwon-do, on board the M/V Kunwoong G-16. The ECDIS system continuously monitors the dredger's position and heading and the shooting point of the grab bucket in real time through three DGPS units attached to the top bridge of the dredger and the crane frame. Dredging depth was measured by two up/down counters fitted to the crane winch of the dredger. The depth and area of dredging at each shooting point of the grab bucket are displayed in a color band. The efficiency of the operation can be ensured by adjusting the tidal data in real time and displaying the dredging depth on the ECDIS monitor. Reliability in verifying the dredging operation, as well as in supervising the dredging process, was greatly enhanced by providing a three-dimensional map with the variation of dredging depth in real time. These results will contribute to establishing a system that can monitor and record whole dredging operations in real time and verify the dredging results quantitatively.

Imaging Technique Based on Continuous Terahertz Waves for Nondestructive Inspection

  • 오경환;김학성
    • Journal of Sensor Science and Technology
    • /
    • Vol. 27, No. 5
    • /
    • pp.328-334
    • /
    • 2018
  • This paper reviews improved continuous-wave (CW) terahertz (THz) imaging systems developed for nondestructive inspection, such as CW-THz quasi-time-domain spectroscopy (QTDS) and interferometry. First, a comparison between CW and pulsed THz imaging systems is presented. The CW-THz imaging system is simple, fast, compact, and relatively low-cost; however, it provides only intensity data, without depth or frequency- or time-domain information. The pulsed THz imaging system yields a broader range of information, but it is expensive because of the femtosecond laser it requires. Recently, many studies have been conducted to overcome the drawbacks of CW-THz imaging systems, including work on the QTDS system, in which an optical delay line is added to the optical arm leading to the detector. Another system studied is a CW-THz interferometric imaging system, which combines the CW-THz imaging system with a far-infrared interferometer. Unlike the basic CW-THz system, both of these systems can obtain depth information, and reportedly they can be applied successfully to fields where pulsed THz is used. Lastly, the applicability of these systems to nondestructive inspection was confirmed.