• Title/Abstract/Keyword: Cloud Detection

Search results: 380 items (processing time: 0.022 sec)

서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획 (Object Pose Estimation and Motion Planning for Service Automation System)

  • 권영우;이동영;강호선;최지욱;이인호
    • 로봇학회논문지
    • /
    • v.19 no.2
    • /
    • pp.176-187
    • /
    • 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; grasping a variety of objects requires a gripper with a high degree of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The objects are grasped at these points by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network)-based algorithm together with point cloud data to estimate each object's 6D pose. Using the recognized 6D pose information, we generate grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and the success or failure of barcode recognition was recorded to demonstrate the effectiveness of the proposed system.
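The re-grasping step above relies on mapping a grasp point defined in the object's local frame into the robot frame using the estimated 6D pose (rotation plus translation). A minimal sketch of that rigid transform, with invented values and not the authors' implementation:

```python
# Hypothetical sketch: apply an estimated 6D pose (rotation matrix R,
# translation t) to a grasp point given in the object frame.
# p_robot = R @ p_obj + t

def transform_grasp_point(R, t, p_obj):
    """Rigid transform with a 3x3 rotation and a 3-vector translation."""
    return [
        sum(R[i][j] * p_obj[j] for j in range(3)) + t[i]
        for i in range(3)
    ]

# Example pose: a 90-degree rotation about the z-axis, then a translation.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [0.5, 0.0, 0.2]
p = transform_grasp_point(R, t, [1.0, 0.0, 0.0])  # -> [0.5, 1.0, 0.2]
```

In a full pipeline the same transform would also orient the gripper approach axis, e.g. to present the barcode face toward the scanner.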

이중결정트리를 이용한 CCTV영상에서의 도로 날씨정보검출알고리즘 개발 (Development of the Road Weather Detection Algorithm on CCTV Video Images using Double Decision Trees)

  • 박병율;남궁성;임종태
    • 정보처리학회논문지B
    • /
    • v.14B no.6
    • /
    • pp.445-452
    • /
    • 2007
  • In this paper, we propose a road weather detection algorithm that extracts weather information from the video of CCTV cameras installed on roads. To obtain weather information from road CCTV imagery, the mean RGB values of a clear-day image are computed and used as a reference to distinguish cloudy, rainy, snowy, and foggy scenes. Compared with existing techniques that rely on a weather database, which incurs high time and space costs, the proposed algorithm requires far less time and space, so the system can be deployed in the field as soon as it is built. In addition, the algorithm re-verifies the detected weather information using temperature, humidity, and date information, enabling more accurate weather detection.
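The clear-day RGB reference idea can be sketched as a first-stage decision rule. The thresholds, class names, and rule structure below are invented for illustration; the paper's calibrated double decision tree would be richer:

```python
# Illustrative sketch: classify a frame's weather by comparing its mean
# RGB brightness against a clear-day reference frame. Thresholds are
# invented, not the paper's calibrated values.

def mean_rgb(pixels):
    """Average each channel over a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify_weather(frame_rgb, clear_rgb, snow_thresh=40, dark_thresh=-30):
    """First-stage tree node: brightness shift vs. the clear-day reference."""
    diff = sum(frame_rgb) / 3 - sum(clear_rgb) / 3
    if diff > snow_thresh:      # much brighter than the clear reference
        return "snow"
    if diff < dark_thresh:      # much darker: overcast or rain
        return "cloudy_or_rain"
    return "clear"

clear_ref = (120.0, 125.0, 130.0)
label = classify_weather((180.0, 185.0, 190.0), clear_ref)  # snow-like frame
```

A second-stage tree, as the abstract describes, would then re-verify the label against temperature, humidity, and date information.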

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae;Jung, Bo Hee;Kong, Hyun-Bae;Ok, Chang-Min;Lee, Seung-Tae
    • Current Optics and Photonics
    • /
    • v.3 no.1
    • /
    • pp.8-15
    • /
    • 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of detection ranges of the LADAR sensor, and the result was applied to design a common optical system using LADAR sensors and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at the spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at the spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society, road safety, and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in autonomous vehicles, efficiently fusing data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is combined with red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data. The method is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
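The projection step that turns a sparse point cloud into a pixel-level depth channel can be sketched with a pinhole camera model. This is a minimal illustration under assumed intrinsics, not the paper's pipeline; the resulting depth image would then be upsampled and stacked with the RGB channels:

```python
# Sketch: project LiDAR (x, y, z) points into a dense depth image using
# pinhole intrinsics (fx, fy, cx, cy). Pixels with no return stay 0.0
# and would be filled by the upsampling step described in the abstract.

def point_cloud_to_depth(points, fx, fy, cx, cy, width, height):
    """Keep the nearest depth when several points land on the same pixel."""
    depth = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera plane
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        if 0 <= u < width and 0 <= v < height:
            if depth[v][u] == 0.0 or z < depth[v][u]:
                depth[v][u] = z
    return depth

# Tiny example: the on-axis point lands at the principal point (cx, cy).
depth = point_cloud_to_depth([(0.0, 0.0, 5.0), (0.1, 0.0, 2.0)],
                             fx=100, fy=100, cx=2, cy=2, width=10, height=5)
```

Stacking `depth` as a fourth channel alongside R, G, and B gives the RGB-D tensor that the deep CNN consumes.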

사전 정보가 없는 배송지에서 장애물 탐지 및 배송 드론의 안전 착륙 지점 선정 기법 (Obstacle Detection and Safe Landing Site Selection for Delivery Drones at Delivery Destinations without Prior Information)

  • 서민철;한상익
    • 자동차안전학회지
    • /
    • v.16 no.2
    • /
    • pp.20-26
    • /
    • 2024
  • Delivery using drones has been attracting attention because it can dramatically reduce the time from order to completed delivery compared to the current delivery system, and pilot projects have been conducted for safe drone delivery. However, the current drone delivery system limits the operational efficiency offered by fully autonomous delivery drones, in that drones mainly deliver goods to pre-set landing sites or delivery bases and the final delivery is still made by humans. In this paper, to overcome these limitations, we propose an obstacle detection and landing-site selection algorithm based on a vision sensor that enables safe drone landing at the delivery location of the product orderer, and we experimentally demonstrate the possibility of station-to-door delivery. The proposed algorithm builds a 3D point cloud map based on simultaneous localization and mapping (SLAM) technology and presents a grid segmentation technique, allowing drones to reliably find a landing site even in places without prior information. We aim to verify the performance of the proposed algorithm through streaming data received from the drone.
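The grid-segmentation idea can be sketched by bucketing map points into ground-plane cells and rating each cell by the spread of its heights: a well-populated cell with a small height spread is a flat landing candidate. Cell size, the minimum point count, and the scoring rule below are illustrative assumptions, not the paper's parameters:

```python
# Sketch: pick the flattest grid cell of a 3D point map as a landing
# candidate. A cell is scored by its height spread (max z - min z);
# sparse cells are skipped as untrustworthy.

def pick_landing_cell(points, cell=1.0, min_points=3):
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append(z)
    best_key, best_spread = None, float("inf")
    for key, zs in cells.items():
        if len(zs) < min_points:
            continue  # too few returns to judge flatness
        spread = max(zs) - min(zs)
        if spread < best_spread:
            best_key, best_spread = key, spread
    return best_key, best_spread

pts = [(0.1, 0.1, 0.00), (0.5, 0.5, 0.02), (0.9, 0.2, 0.01),  # flat cell (0, 0)
       (2.1, 0.1, 0.00), (2.5, 0.5, 0.80), (2.9, 0.2, 0.30)]  # obstacle in (2, 0)
best = pick_landing_cell(pts)
```

A real system would add obstacle clearance margins and re-check the chosen cell as the drone descends.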

천리안 위성 자료를 이용한 대류권계면 접힘 난류 탐지 가능성 연구 (Feasibility Study for Detecting the Tropopause Folding Turbulence Using COMS Geostationary Satellite)

  • 김미정;김재환
    • 대기
    • /
    • v.27 no.2
    • /
    • pp.119-131
    • /
    • 2017
  • We present and discuss the Tropopause Folding Turbulence Detection (TFTD) algorithm for the Korean Communication, Ocean and Meteorological Satellite (COMS), which was originally developed for the Tropopause Folding Turbulence Product (TFTP) of the Geostationary Operational Environmental Satellite (GOES)-R. The TFTD algorithm assumes that tropopause folding is linked to Clear Air Turbulence (CAT), so tropopause folding areas are detected from rapid spatial gradients of the upper-tropospheric specific humidity. The Layer Averaged Specific Humidity (LASH) is used to represent the upper-tropospheric specific humidity, calculated using the COMS 6.7 μm water vapor channel and ERA-Interim reanalysis temperatures at 300, 400, and 500 hPa. Comparing LASH with the numerical-model specific humidity shows a strong negative correlation of 80% or more. A single threshold, determined from sensitivity analysis, is applied for cloud-clearing to overcome the strong gradient of LASH at the edges of clouds. Tropopause break lines are detected at locations of strong LASH gradient using Canny edge detection, an image-processing technique. The tropopause folding area is defined by expanding the break lines by 2 degrees in the positive gradient direction. Validation of COMS TFTD is performed against Pilot Reports (PIREPs), with Convectively Induced Turbulence (CIT) filtered out, from December 2013 to November 2014 over South Korea. The score test shows a PODy (Probability of Detection 'Yes') of 0.49 and a PODn (Probability of Detection 'No') of 0.64. The low POD results from the various kinds of CAT reported in PIREPs and from the high sensitivity of the edge detection algorithm.
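The PODy/PODn scores quoted above come from a 2×2 contingency table of detected vs. observed turbulence. A minimal sketch of the scoring (the counts below are invented for illustration, chosen only to reproduce the quoted ratios):

```python
# Sketch of the verification scores: PODy and PODn from a 2x2
# contingency table (hits, misses, false alarms, correct negatives).

def pod_yes(hits, misses):
    """PODy: fraction of observed 'yes' events that were detected."""
    return hits / (hits + misses)

def pod_no(correct_negatives, false_alarms):
    """PODn: fraction of observed 'no' events correctly rejected."""
    return correct_negatives / (correct_negatives + false_alarms)

# Invented counts that happen to match the quoted scores:
pody = pod_yes(49, 51)   # 0.49
podn = pod_no(64, 36)    # 0.64
```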

클러스터링 알고리즘에서 저비용 3D LiDAR 기반 객체 감지를 위한 향상된 파라미터 추론 (Improved Parameter Inference for Low-Cost 3D LiDAR-Based Object Detection on Clustering Algorithms)

  • 김다현;안준호
    • 인터넷정보학회논문지
    • /
    • v.23 no.6
    • /
    • pp.71-78
    • /
    • 2022
  • This paper proposes an algorithm for 3D object detection by processing point cloud data from a 3D LiDAR. Unlike 2D LiDAR data, 3D LiDAR data is vast and difficult to process in three dimensions. This paper introduces various studies based on 3D LiDAR and describes the processing of 3D LiDAR data. In this study, we propose a method of processing 3D LiDAR data using clustering techniques for object detection, and we design an algorithm that fuses the LiDAR with a camera for clear and accurate 3D object detection. We also study models for clustering 3D LiDAR data and the hyperparameter values for each model. When clustering 3D LiDAR data, the DBSCAN algorithm showed the most accurate results, and the hyperparameter values of DBSCAN were compared and analyzed. This study will be helpful for future object detection research using 3D LiDAR.
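DBSCAN's two hyperparameters, the neighborhood radius (eps) and the minimum neighbor count (min_pts), are exactly what the abstract says the study tunes. A compact pure-Python sketch of the algorithm on 2D points (illustrative parameters; a real pipeline would use an indexed implementation such as scikit-learn's on the 3D cloud):

```python
# Minimal DBSCAN sketch: label each point with a cluster id, or -1 for
# noise. Brute-force neighbor search for clarity, not speed.

def dbscan(points, eps, min_pts):
    def neighbors(i):
        px = points[i]
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(px, q)) <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # noise (may later become a border point)
            continue
        labels[i] = cid             # i is a core point: start a cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            more = neighbors(j)
            if len(more) >= min_pts:
                queue.extend(more)  # j is also core: keep expanding
        cid += 1
    return labels

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # dense cluster A
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),   # dense cluster B
       (9.0, 0.0)]                            # isolated noise point
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Sweeping eps and min_pts over a labeled scene, as the paper does, shows how sensitive the cluster count is to these two values.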

해양환경에서 선박 추적을 위한 라이다를 이용한 궤적 초기화 및 표적 추적 필터 (Track Initiation and Target Tracking Filter Using LiDAR for Ship Tracking in Marine Environment)

  • 황태현;한정욱;손남선;김선영
    • 제어로봇시스템학회논문지
    • /
    • v.22 no.2
    • /
    • pp.133-138
    • /
    • 2016
  • This paper describes track initiation and a target-tracking filter for ship tracking in a marine environment using Light Detection And Ranging (LiDAR). LiDAR, with its three-dimensional scanning capability, is more useful than RADAR for target tracking at short to medium range. LiDAR has rotating multi-beams that return point clouds reflected from targets. By preprocessing and clustering the point cloud, the center point of each cluster can be obtained. Target tracking is carried out using the center points of targets. A track is initiated by examining the normalized distance between center points and connecting the points. The regular track obtained from track initiation is then maintained by the target-tracking filter commonly used in radar target tracking. The target-tracking filter is constructed to track a maneuvering target in a cluttered environment. The target-tracking algorithm, including track initiation, is experimentally evaluated in a sea-trial test with several boats.
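The two steps above, reducing a cluster to its center point and connecting centers across scans when they fall inside a plausibility gate, can be sketched as follows. The gate here is a simple maximum-speed check with invented parameters, not the paper's normalized-distance statistic:

```python
# Sketch: cluster centroid + a simple gating rule for track initiation.
# A detection pair is connected only if the implied speed is plausible.

def cluster_center(cluster):
    """Centroid of a list of (x, y) returns from one target."""
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(2))

def initiate_track(center_prev, center_now, max_speed, dt):
    """Connect two detections if the implied speed is within max_speed."""
    dist = sum((a - b) ** 2 for a, b in zip(center_prev, center_now)) ** 0.5
    return dist <= max_speed * dt

c0 = cluster_center([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)])
c1 = (2.0, 1.0)
ok = initiate_track(c0, c1, max_speed=10.0, dt=1.0)  # within the gate
```

Once a track is confirmed this way, the tracking filter takes over and maintains it through maneuvers and clutter.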

자동차 부품 형상 결함 탐지를 위한 측정 방법 개발 (Development of An Inspection Method for Defect Detection on the Surface of Automotive Parts)

  • 박홍석;우펜드라 마니 툴라다르;신승철
    • 한국생산제조학회지
    • /
    • v.22 no.3
    • /
    • pp.452-458
    • /
    • 2013
  • Over the past several years, many studies have been carried out in the field of 3D data inspection systems, and several attempts have been made to improve the quality of manufactured parts. The introduction of laser sensors for inspection has made it possible to acquire data at remarkably high speed. In this paper, a robust inspection technique for detecting defects in 3D pressed parts using laser-scanned data is proposed. Point cloud data are segmented for the extraction of features, and these segmented features are used for shape matching during the localization process. An iterative closest point (ICP) algorithm is used for localization between the scanned model and the CAD model. To achieve a higher accuracy rate, the ICP algorithm is modified and then used for matching, and a k-d tree algorithm is used to speed up the matching process. Finally, the deviation of the scanned points from the CAD model is computed.
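The final deviation step can be sketched as a nearest-neighbor residual: after localization, each scanned point is matched to its closest CAD-model point, and points whose residual exceeds a tolerance are flagged as defects. This is an illustrative sketch with a brute-force search (a k-d tree, as the paper uses, would replace it for large scans), not the authors' code:

```python
# Sketch: flag scanned points that deviate from an idealized CAD model
# by more than a tolerance. Brute-force nearest-neighbor for clarity.

def nearest_distance(p, model_points):
    """Distance from p to its closest point on the (sampled) CAD model."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for q in model_points)

def defect_points(scan, model, tolerance):
    """Return scanned points whose deviation exceeds the tolerance."""
    return [p for p in scan if nearest_distance(p, model) > tolerance]

model = [(float(i), 0.0) for i in range(5)]       # idealized flat CAD profile
scan = [(0.0, 0.01), (1.0, 0.0), (2.0, 0.6)]      # last point is a dent
defects = defect_points(scan, model, tolerance=0.1)
```

The same residuals, aggregated over the whole surface, give the deviation map used to judge the pressed part.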

Performance Test of Asynchronous Process of OGC WPS 2.0: A Case Study for Geo-based Image Processing

  • Yoon, Gooseon;Lee, Kiwon
    • 대한원격탐사학회지
    • /
    • v.33 no.4
    • /
    • pp.391-400
    • /
    • 2017
  • Geo-based application services linked with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) protocol have been regarded as an important standardized framework for digital earth building in web environments. The WPS protocol provides interface standards for analysis functionality within geo-spatial processing in web-based service systems. Despite its significance, there have been few performance tests of WPS applications. The main motivation of this study is to perform a comparative performance test on WPS standards. A test system, composed of WPS servers, a WPS framework, a data management module, a geo-based data processing module, and a client-side system, was implemented with a fully open-source stack. In this system, two geo-based image processing functions, cloud detection and gradient magnitude computation, were applied. The performance test across the server environments of non-WPS, synchronous WPS 1.0, and asynchronous WPS 2.0 was carried out using 100 and 400 threads, corresponding to client users of a web-based application service. As a result, at 100 threads the three environments performed within an adjacent range in the average response time to complete the processing of each thread. At 400 threads, the WPS 2.0 case showed distinctly higher performance in response time than at the smaller thread count. This suggests that asynchronous WPS 2.0 helps avoid performance problems such as time delays or thread accumulation under load.
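The thread-based load-test pattern described above can be sketched as N worker threads that each time one request, with the mean duration reported afterward. The request below is a stand-in callable rather than a real WPS Execute call, and the thread counts are illustrative:

```python
# Sketch of the load-test harness: n_threads concurrent workers each
# time one "request"; the harness returns the average response time.
# request_fn is a stand-in for a real WPS Execute HTTP call.

import threading
import time

def run_load_test(n_threads, request_fn):
    durations = [0.0] * n_threads

    def worker(i):
        start = time.perf_counter()
        request_fn()
        durations[i] = time.perf_counter() - start

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(durations) / n_threads

# Simulate a slow synchronous endpoint with a short sleep per request.
avg = run_load_test(8, lambda: time.sleep(0.01))
```

Against an asynchronous WPS 2.0 endpoint, the worker would instead poll the status document until the job completes, which is where the 400-thread behavior diverges from the synchronous case.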