• Title/Summary/Keyword: 무인센서카메라 (unmanned sensor camera)

Search Results: 84

Development of a Close-range Real-time Aerial Monitoring System based on a Low Altitude Unmanned Air Vehicle (저고도 무인 항공기 기반의 근접 실시간 공중 모니터링 시스템 구축)

  • Choi, Kyoung-Ah; Lee, Ji-Hun; Lee, Im-Pyeong
    • Spatial Information Research / v.19 no.4 / pp.21-31 / 2011
  • As large-scale natural and man-made disasters become more frequent, the demand for rapid responses to such emergencies keeps growing. Effective management of these situations requires rapidly acquiring spatial information about each individual site. We are therefore developing a close-range real-time aerial monitoring system based on a low-altitude unmanned helicopter. The system acquires airborne sensory data in real time and rapidly generates geospatial information from it. It consists of two main parts: an aerial part and a ground part. The aerial part includes an aerial platform equipped with multiple sensors (cameras, a laser scanner, a GPS receiver, and an IMU) and sensor-supporting modules. The ground part includes a ground vehicle, a receiving system that receives the sensory data in real time, and a processing system that rapidly generates the geospatial information. Development and testing of the individual modules and subsystems are almost complete, and their integration is now in progress. In this paper, we introduce the system, explain intermediate results, and discuss the expected outcome.
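
A purely illustrative sketch of the ground part's receive-then-process flow described in this abstract: airborne sensory packets arrive over a network link and are handed to a processing step that turns them into geospatial products. The UDP transport, port, packet format, and processing stub are assumptions, not the authors' design.

```python
# Minimal sketch, assuming sensory data arrives as UDP datagrams on the ground side.
import socket

def run_ground_receiver(process_packet, host: str = "0.0.0.0", port: int = 5005):
    """Receive airborne sensory packets and pass each one to process_packet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            data, _addr = sock.recvfrom(65507)   # one sensory packet per datagram
            process_packet(data)                 # e.g. georeference imagery, update map
```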

Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric; Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has become a very useful tool in computer vision. In this paper, exploiting the Kinect depth sensor and its high frame rate, we developed a distance measurement system using a Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive their surrounding environment, as humans do, in order to detect objects in their path. The Kinect depth sensor is used to detect objects within its field of view and to measure the distance from those objects to the vision sensor. Each detected object is checked to determine whether it is a real object or pixel noise, which reduces processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques together with the OpenCV library for image processing, we can identify objects within the Kinect camera's field of view and measure their distance to the sensor. Tests show promising results, suggesting that the system could also be used on autonomous vehicles equipped with the Kinect camera as a low-cost range sensor, for further processing depending on the application once they come within a certain distance of detected objects.
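
A minimal sketch (not the authors' code) of the idea this abstract describes: threshold a depth map, discard tiny connected components as pixel noise, and report the distance to each remaining object. The 2 m range cutoff and minimum blob area are illustrative assumptions.

```python
import cv2
import numpy as np

def measure_object_distances(depth_mm: np.ndarray,
                             max_range_mm: int = 2000,
                             min_area_px: int = 500):
    """Return (label_mask, distances_mm) for objects closer than max_range_mm."""
    # Keep only valid pixels inside the range of interest.
    near = ((depth_mm > 0) & (depth_mm < max_range_mm)).astype(np.uint8)

    # Connected components; small blobs are treated as sensor noise and dropped.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(near, connectivity=8)

    distances = {}
    for label in range(1, n_labels):                         # label 0 is background
        if stats[label, cv2.CC_STAT_AREA] < min_area_px:
            continue
        object_depths = depth_mm[labels == label]
        distances[label] = float(np.median(object_depths))   # robust distance estimate
    return labels, distances
```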


Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering / v.15 no.1 / pp.102-110 / 2021
  • Recently, multi-copters equipped with various collision avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize three-dimensional position, multiple cameras with real-time SLAM are used to calculate relative positions to obstacles, and three-dimensional depth sensors with a small processor and camera are also used. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for developing collision avoidance software. The multi-copter was equipped with LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision avoidance technology, the system design and manufacturing process, and the flight test results.
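
A hedged sketch of the decision step only, combining the detector output and the depth sensor mentioned above: given bounding boxes and an aligned depth image, flag any detection closer than a safety distance. The 1.5 m threshold and the input formats are assumptions, not values from the paper.

```python
import numpy as np

def needs_avoidance(detections, depth_m: np.ndarray, safety_dist_m: float = 1.5) -> bool:
    """detections: iterable of (x1, y1, x2, y2) pixel boxes aligned with depth_m."""
    for x1, y1, x2, y2 in detections:
        patch = depth_m[y1:y2, x1:x2]
        valid = patch[patch > 0]                 # ignore missing depth readings
        if valid.size and np.median(valid) < safety_dist_m:
            return True                          # obstacle inside the safety bubble
    return False
```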

Analysis of the Naemorhedus caudatus Population in Odaesan National Park - The Goral Individually Identification and Statistical Analysis Using the Sensor Camera - (오대산국립공원 산양(Naemorhedus caudatus) 개체 수 분석 - 무인센서카메라 분석을 이용한 개체 구분 및 통계 분석 -)

  • Kim, Gyu-cheol; Lee, Yong-hak; Lee, Dong-un; Son, Jang-ick; Kang, Jae-gu; Cho, Chea-un
    • Korean Journal of Environment and Ecology / v.34 no.1 / pp.1-8 / 2020
  • This study conducted a full survey of the goral population using unmanned sensor cameras to identify the exact habitat of the gorals inhabiting Odaesan National Park and to support restoration and habitat-management-focused conservation projects following the population's growth. We surveyed Odaesan National Park for a year in 2018 and, based on the survey results, first selected 18 grids (2 km × 2 km). We then divided each grid into four smaller grids (1 km × 1 km) and installed a total of 62 sensor cameras in 38 of these small grids. The survey yielded a total of 5,096 photographs of wild animals, 2,268 of which were of gorals, and analysis using a goral classification table (horn shape (Ⓐ), ring pattern (Ⓑ), ring formation ratio (Ⓒ), and facial color (Ⓓ)) identified a total of 95 individuals: 35 males (36.8%), 46 females (48.4%), and 14 of unknown sex (14.7%), giving a male-to-female ratio of about 4:6 when individuals of unknown sex are excluded. Horn shape (Ⓐ) and facial color (Ⓓ) were the important factors for distinguishing males from females and for identifying individuals. A correlation analysis of the 81 individuals of known sex showed a significant difference (r=-0.635, p<0.01). Since the goral population in Odaesan National Park has reached a minimum viable population, the management policy for the park needs to shift its focus from restoration to conservation.
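
A quick check of the counts and percentages reported in the abstract above (the counts are taken from the abstract; the rounding convention is an assumption).

```python
males, females, unknown = 35, 46, 14
total = males + females + unknown                      # 95 identified gorals
for name, n in [("male", males), ("female", females), ("unknown", unknown)]:
    print(f"{name}: {n} ({100 * n / total:.1f}%)")     # 36.8%, 48.4%, 14.7%
known = males + females
print(f"male:female ≈ {10 * males / known:.0f}:{10 * females / known:.0f}")  # ≈ 4:6
```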

Evaluation of Measurement Accuracy for Unmanned Aerial Vehicle-based Land Surface Temperature Depending on Climate and Crop Conditions (기상 조건과 작물 생육상태에 따른 무인기 기반 지표면온도의 관측 정확도 평가)

  • Ryu, Jae-Hyun
    • Korean Journal of Remote Sensing / v.37 no.2 / pp.211-220 / 2021
  • Land surface temperature (LST) is a useful parameter for diagnosing crop growth and development and for detecting crop stress. Unmanned aerial vehicle (UAV)-based LST (LSTUAV) can be estimated at a regional spatial scale thanks to the miniaturization of thermal infrared cameras and the development of UAVs. Given that meteorological variables, the type of instrument, and surface conditions can all affect LSTUAV, its accuracy needs to be evaluated. The purpose of this study is to evaluate the accuracy of LSTUAV against LST measured on the ground (LSTGround) under various meteorological conditions and growth phases of a garlic crop. Relative humidity (RH), absolute humidity (AH), gusts, and a vegetation index were considered. The root mean square error (RMSE), after minimizing the bias between LSTUAV and LSTGround, was 2.565℃ when RH was above 60%, higher than the 1.82℃ obtained when RH was below 60%. LSTUAV measurements should therefore be conducted when RH is below 60%. The error depending on gusts and surface conditions was not statistically significant (p-value < 0.05). LSTUAV showed reliable accuracy under wind speed conditions that allowed flight and reflected the crop condition. These results help in understanding the accuracy of LSTUAV and in utilizing it in agricultural applications.
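
A minimal sketch of the error metric named in this abstract, assuming "RMSE after minimizing the bias" means the RMSE of the mean-bias-corrected differences between UAV and ground LST; the paper's exact procedure may differ.

```python
import numpy as np

def bias_corrected_rmse(lst_uav: np.ndarray, lst_ground: np.ndarray) -> float:
    """RMSE between UAV and ground LST after removing the mean offset."""
    diff = lst_uav - lst_ground
    bias = diff.mean()                        # systematic offset between the two measurements
    return float(np.sqrt(np.mean((diff - bias) ** 2)))
```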

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang; Sung-Jun Park; Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology for generating the 3D maps needed for autonomous driving of vehicles, robots, and drones. Existing sensor-based methods have high accuracy but are expensive and have low resolution, while camera-based methods are more affordable and offer higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and its weight size is reduced to lower the computational cost. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using an occupancy grid with an octree structure. The proposed network is evaluated using synthesized images and depth sequences from the Mid-Air dataset and achieves a 7.69% reduction in error compared to prior studies.
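
A hedged sketch of one intermediate step described above: back-projecting an estimated depth map into a point cloud with a pinhole camera model. The intrinsics (fx, fy, cx, cy) are assumed inputs; the paper's network and the octree occupancy mapping are not reproduced here.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """depth: (H, W) metric depth map -> (N, 3) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels without a depth estimate
```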

PID Controled UAV Monitoring System for Fire-Event Detection (PID 제어 UAV를 이용한 발화 감지 시스템의 구현)

  • Choi, Jeong-Wook; Kim, Bo-Seong; Yu, Je-Min; Choi, Ji-Hoon; Lee, Seung-Dae
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.1-8 / 2020
  • If a dangerous situation arises in a place out of human reach, UAVs can be used to determine its size and location and so reduce further damage. With this in mind, this paper tunes the proportional, integral, and derivative (PID) values of roll, pitch, and yaw with Betaflight so that the UAV stays level and hovers smoothly, minimizing error for safe hovering. A camera connected to a Raspberry Pi running OpenCV converts each frame to HSV (hue, saturation, brightness) and filters it to black and white except for red, the color closest to the fire we want to detect, so that the UAV can analyze images in the air in real time. Finally, it was confirmed that hovering was possible at heights of 0.5 to 5 m and that red color recognition was possible at distances of 5 cm and 5 m.
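
A minimal sketch (assumptions, not the authors' code) of the colour-filtering step described above: convert a BGR frame to HSV and keep only red pixels. In OpenCV, red wraps around the hue axis, so two ranges are combined; the exact thresholds and pixel-count trigger are illustrative.

```python
import cv2
import numpy as np

def red_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Binary mask that is white where the frame is red, black elsewhere."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))      # hue near 0
    upper = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))   # hue near 180
    return cv2.bitwise_or(lower, upper)

def fire_detected(frame_bgr: np.ndarray, min_red_pixels: int = 500) -> bool:
    return int(cv2.countNonZero(red_mask(frame_bgr))) >= min_red_pixels
```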

A Comparison Between the Tape Switch Sensor and the Video Images Frame Analysis Method on the Speed Measurement of Vehicle (차량 속도 측정의 실무적용을 위한 테이프스위치 센서 방식과 영상 프레임 분석방법의 비교연구)

  • Kim Man-Bae; Hyun Cheol-Seung; Yoo Sung-Jun; Hong You-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.9 s.351 / pp.120-127 / 2006
  • In Korea, the vehicle enforcement system (VES) detects speeding vehicles using two inductive loop detectors, and its speed reliability is evaluated by analyzing image frames captured from a video camera. This method is approved for evaluating the VES under the Korea Laboratory Accreditation Scheme (KOLAS), but the image frame analysis requires considerable time and expense. Because the number of VES installations is increasing rapidly, a new evaluation method is needed. In this paper, the tape switch sensor is introduced as a substitute for the existing method, and its on-site application is discussed. In on-site tests we compared vehicle speed measurements from the tape switch sensor with those from video image frame analysis. As a result, we found the tape switch sensor to be a feasible on-site system for measuring the speed of overspeeding vehicles.
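
A minimal sketch of the two speed estimates being compared above: elapsed time between two tape switch strips a known distance apart, versus counting video frames over the same distance. The 10 m spacing and 30 fps are illustrative assumptions, not values from the paper.

```python
def speed_from_tape_switch(distance_m: float, t_first_s: float, t_second_s: float) -> float:
    """Speed in km/h from trigger timestamps of two strips distance_m apart."""
    return distance_m / (t_second_s - t_first_s) * 3.6

def speed_from_video(distance_m: float, n_frames: int, fps: float = 30.0) -> float:
    """Speed in km/h from the number of frames the vehicle takes to cover distance_m."""
    return distance_m / (n_frames / fps) * 3.6

# Example: 10 m covered in 0.30 s, or in 9 frames at 30 fps -> 120 km/h in both cases.
print(speed_from_tape_switch(10.0, 0.00, 0.30), speed_from_video(10.0, 9))
```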

Development of Surveillance Gateway system for Event Sensitivity Type using Sensor fusion-based M2M Technology (센서 융합기술을 활용한 M2M 기반의 이벤트 감응형 감시 게이트웨이 기반 시스템 개발)

  • Kim, Ju-Su; Park, Joon-Hoon; Lee, Chol-U; Oh, Ryum-Duck
    • Proceedings of the Korean Society of Computer Information Conference / 2014.07a / pp.107-108 / 2014
  • Current intelligent safety and maintenance methods are still insufficient for responding to environmental changes such as natural disasters and demands for improved facility serviceability, and the related technical capabilities are lacking. Most facility inspection and management in Korea is done manually. Such passive management cannot respond in real time to changes in facility condition and therefore sometimes leads to accidents. Manual facility management, in which people inspect everything individually, cannot completely solve these problems; it also requires a large maintenance workforce, yet budget constraints leave management insufficient. In this paper, we design a low-power, low-cost, high-efficiency, high-performance unmanned facility management system, differentiated from existing remote video surveillance systems by its simple configuration, using a 4G wireless-network-based visual-information M2M gateway that fuses a video camera with detection sensors.
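
A purely illustrative sketch (assumptions, not the paper's design) of the event-sensitive idea described above: the camera stays idle to save power and is only read and transmitted upstream when a detection sensor fires.

```python
import time

def run_event_gateway(sensor_triggered, capture_frame, transmit, poll_interval_s: float = 0.5):
    """Poll a detection sensor; on an event, capture one camera frame and send it upstream."""
    while True:
        if sensor_triggered():                 # e.g. PIR or vibration sensor event
            frame = capture_frame()            # wake the camera only when needed
            transmit(frame)                    # push over the 4G link to the server
        time.sleep(poll_interval_s)            # low duty cycle keeps power consumption down
```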


The Obstacle Size Prediction Method Based on YOLO and IR Sensor for Avoiding Obstacle Collision of Small UAVs (소형 UAV의 장애물 충돌 회피를 위한 YOLO 및 IR 센서 기반 장애물 크기 예측 방법)

  • Uicheon Lee; Jongwon Lee; Euijin Choi; Seonah Lee
    • Journal of Aerospace System Engineering / v.17 no.6 / pp.16-26 / 2023
  • With the growing demand for unmanned aerial vehicles (UAVs), various collision avoidance methods have been proposed, mainly using LiDAR and stereo cameras. However, these sensors are difficult to apply to small UAVs because of their weight or the lack of space. Recently proposed methods combine object recognition models with distance sensors, but they lack information about obstacle size, which complicates distance determination and obstacle coordinate estimation in early-stage collision avoidance. We propose a method for estimating obstacle size using a monocular camera with YOLO and an infrared sensor. Our experimental results confirmed an accuracy of 86.39% within a distance of 40 cm. In addition, the proposed method was applied to a small UAV to confirm whether obstacle collisions could be avoided.
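
A hedged sketch of the size-prediction idea: under a pinhole camera model, an object's physical width scales with its bounding-box width in pixels times the IR-measured distance, divided by the focal length in pixels. The paper's exact formulation is not given here; the focal length and example values are assumptions.

```python
def estimate_obstacle_width_m(bbox_width_px: float,
                              distance_m: float,
                              focal_length_px: float) -> float:
    """Approximate physical width (m) of an obstacle seen by a monocular camera."""
    return bbox_width_px * distance_m / focal_length_px

# Example: a 200 px wide YOLO box at 0.40 m with a 600 px focal length -> ~0.13 m wide.
print(estimate_obstacle_width_m(200, 0.40, 600))
```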