• Title/Summary/Keyword: Ground detection

Search Result 822

Study of Joint Histogram Based Statistical Features for Early Detection of Lung Disease (폐질환 조기 검출을 위한 결합 히스토그램 기반의 통계적 특징 인자에 대한 연구)

  • Won, Chul-ho
    • Journal of rehabilitation welfare engineering & assistive technology / v.10 no.4 / pp.259-265 / 2016
  • In this paper, a new method was proposed to classify lung tissues such as Broncho vascular, Emphysema, Ground Glass Reticular, Ground Glass, Honeycomb, and Normal for early lung disease detection. 459 statistical features were extracted from a joint histogram matrix based on multi-resolution analysis, volumetric LBP, and CT intensity, and dominant features were then selected using AdaBoost learning. The accuracy of the proposed features and of 3D AMFM was 90.1% and 85.3%, respectively. The proposed joint histogram based features show better classification results than 3D AMFM in terms of accuracy, sensitivity, and specificity.
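
The abstract names three ingredients: a joint histogram over texture and intensity, and AdaBoost for picking dominant features. The sketch below illustrates that general idea with a 2-D histogram of quantized CT intensity versus uniform LBP codes and scikit-learn's AdaBoostClassifier for feature ranking; it uses synthetic patches and is a minimal illustration, not the paper's 459-feature, multi-resolution, volumetric-LBP pipeline.

```python
# Minimal sketch (not the paper's exact pipeline): joint histogram of quantized
# CT intensity vs. LBP code, flattened into a feature vector, with AdaBoost-based
# ranking to pick dominant features.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def joint_histogram_features(ct_slice, n_intensity_bins=8, lbp_points=8, lbp_radius=1):
    """Flattened joint histogram of (quantized intensity, LBP code)."""
    lbp = local_binary_pattern(ct_slice, lbp_points, lbp_radius, method="uniform")
    n_lbp_bins = lbp_points + 2                       # number of uniform LBP codes
    intensity = np.digitize(ct_slice, np.linspace(ct_slice.min(), ct_slice.max(), n_intensity_bins))
    hist2d, _, _ = np.histogram2d(intensity.ravel(), lbp.ravel(),
                                  bins=[n_intensity_bins, n_lbp_bins])
    hist2d /= hist2d.sum()                            # normalize to a joint probability
    return hist2d.ravel()

# Hypothetical data: one feature vector per ROI patch, six tissue class labels.
rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 32, 32))
y = rng.integers(0, 6, size=100)
X = np.array([joint_histogram_features(p) for p in patches])

# AdaBoost on decision stumps; feature_importances_ ranks the dominant features.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
dominant = np.argsort(clf.feature_importances_)[::-1][:10]
print("Top feature indices:", dominant)
```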

Performance Analysis of Deep Learning-Based Detection/Classification for SAR Ground Targets with the Synthetic Dataset (합성 데이터를 이용한 SAR 지상표적의 딥러닝 탐지/분류 성능분석)

  • Ji-Hoon Park
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.147-155 / 2024
  • Based on recently developed deep learning technology, many studies have been conducted on deep learning networks that simultaneously detect and classify targets of interest in synthetic aperture radar (SAR) images. Although numerous results have been reported, mainly on open SAR ship datasets, little work has been carried out on deep learning networks aimed at detecting and classifying SAR ground targets and trained with synthetic datasets generated from electromagnetic scattering simulations. In this respect, this paper presents a deep learning network trained with a synthetic dataset and applies it to detecting and classifying real SAR ground targets. Using the experimental results, the paper also analyzes the network performance according to the composition ratio between the real measured data and the synthetic data involved in network training. Finally, the summary and limitations are discussed to inform future research directions.
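
Since the paper's central experiment varies the composition ratio between real measured and synthetic training data, a small sketch of how such a mixed training split might be assembled is given below. The file names, totals, and swept ratios are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: build a training split with a controlled ratio of real
# measured SAR chips to simulation-generated chips, so detector/classifier
# performance can be compared across composition ratios.
import random

def compose_training_set(real_samples, synthetic_samples, real_fraction, total_size, seed=0):
    """Return a mixed training list with `real_fraction` of samples drawn from real data."""
    rng = random.Random(seed)
    n_real = int(round(real_fraction * total_size))
    n_synth = total_size - n_real
    mixed = rng.sample(real_samples, n_real) + rng.sample(synthetic_samples, n_synth)
    rng.shuffle(mixed)
    return mixed

# Hypothetical file lists; in practice these would be labeled SAR image chips.
real = [f"real_{i:04d}.png" for i in range(500)]
synthetic = [f"synth_{i:04d}.png" for i in range(5000)]

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):        # sweep the composition ratio
    train = compose_training_set(real, synthetic, frac, total_size=400)
    print(frac, len([s for s in train if s.startswith("real")]))
```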

Deep-learning-based GPR Data Interpretation Technique for Detecting Cavities in Urban Roads (도심지 도로 지하공동 탐지를 위한 딥러닝 기반 GPR 자료 해석 기법)

  • Byunghoon, Choi; Sukjoon, Pyun; Woochang, Choi; Churl-hyun, Jo; Jinsung, Yoon
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.189-200 / 2022
  • Ground subsidence on urban roads is a social issue that can lead to human and property damage. Therefore, it is crucial to detect underground cavities in advance and repair them. Underground cavity detection is mainly performed using ground penetrating radar (GPR) surveys. This process is time-consuming, as a massive amount of GPR data needs to be interpreted, and the results vary depending on the skills and subjectivity of experts. To address these problems, researchers have studied automation and quantification techniques for GPR data interpretation, and recent studies have focused on deep learning-based interpretation techniques. In this study, we describe a hyperbolic event detection process based on deep learning for GPR data interpretation. To demonstrate this process, we implemented a series of algorithms introduced in previous research step by step. First, a deep learning-based YOLOv3 object detection model was applied to automatically detect hyperbolic signals. Subsequently, only hyperbolic signals were extracted using the column-connection clustering (C3) algorithm. Finally, the horizontal locations of the underground cavities were determined using regression analysis. The hyperbolic event detection using the YOLOv3 object detection technique achieved 84% precision and 92% recall based on AP50. The predicted horizontal locations of the four underground cavities were approximately 0.12 ~ 0.36 m away from their actual locations. Thus, we confirmed that the existing deep learning-based interpretation technique is reliable for detecting the hyperbolic patterns that indicate underground cavities.
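
As a rough illustration of the final regression step, the sketch below fits the standard GPR diffraction hyperbola t(x) = (2/v) * sqrt(d^2 + (x - x0)^2) to picks assumed to come from the earlier detection and clustering stages, and reads off x0 as the horizontal cavity location. The picks are synthetic, and the exact regression model used in the paper may differ.

```python
# Hedged sketch of the regression step: fit a diffraction hyperbola to (x, t) picks
# extracted from a detected hyperbolic event and report its apex position x0.
# The detector (e.g. a YOLO-style model) and the column-connection clustering are
# assumed to have already produced the picks; the values below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, x0, d, v):
    """Two-way travel time (ns) over a point target at (x0, depth d), velocity v (m/ns)."""
    return (2.0 / v) * np.sqrt(d**2 + (x - x0)**2)

# Synthetic picks for a cavity at x0 = 5.0 m, depth 1.2 m, v = 0.1 m/ns, plus noise.
x = np.linspace(3.0, 7.0, 41)
t = hyperbola(x, 5.0, 1.2, 0.1) + np.random.default_rng(1).normal(0, 0.3, x.size)

(p_x0, p_d, p_v), _ = curve_fit(hyperbola, x, t, p0=[x.mean(), 1.0, 0.1])
print(f"estimated horizontal location: {p_x0:.2f} m, depth: {p_d:.2f} m")
```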

A Study on the Detecting Method of Intercept Violation Vehicles Using an Image Detection Techniques (영상검지기법을 활용한 끼어들기 위반차량 검지 방법에 관한 연구)

  • Kim, Wan-Ki; Ryu, Boo-Hyung
    • Journal of the Korean Society of Safety / v.23 no.6 / pp.164-170 / 2008
  • This research verified a method for detecting cut-in (intercept) violation vehicles and evaluated its performance after system installation, using an image detector as a ground-mounted detection device. An image recognition algorithm traces the moving trajectory of violating vehicles; when the trajectory enters the designated site, geometric image calibration and a DC-notch filter are applied, and a signal triggers the license plate recognition system. Bright Evidence Detection and Dark Evidence Detection are then applied in combination, and backward tracking is used to detect the cut-in maneuver. A field evaluation of the developed system showed a recognition rate higher than the minimum standard of 80%, and its applicability to the site was assessed to be high.
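
The abstract is light on algorithmic detail, so the following is only a loose sketch of the trajectory-based idea: flag a cut-in when a tracked vehicle's road-plane trajectory crosses a lane boundary inside a restricted zone. The lane boundary, zone limits, and coordinates are hypothetical and not taken from the paper.

```python
# Hedged sketch, not the paper's algorithm: flag a cut-in violation when a tracked
# vehicle trajectory (one road-plane centroid per frame) crosses from the through
# lane into the restricted merge zone.
import numpy as np

LANE_BOUNDARY_X = 3.5          # metres from the curb, hypothetical calibrated value
ZONE_Y = (20.0, 60.0)          # longitudinal extent of the no-cut-in zone (assumed)

def is_cut_in(trajectory):
    """trajectory: sequence of (x, y) road-plane positions, one per frame."""
    traj = np.asarray(trajectory)
    in_zone = (traj[:, 1] >= ZONE_Y[0]) & (traj[:, 1] <= ZONE_Y[1])
    crossed = (traj[:-1, 0] > LANE_BOUNDARY_X) & (traj[1:, 0] <= LANE_BOUNDARY_X)
    return bool(np.any(crossed & in_zone[1:]))

track = [(4.2, 18.0), (4.0, 25.0), (3.6, 32.0), (3.2, 40.0), (3.0, 48.0)]
print("violation:", is_cut_in(track))
```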

DESIGN OF AN UNMANNED GROUND VEHICLE, TAILGATOR THEORY AND PRACTICE

  • KIM S. G.; GALLUZZO T.; MACARTHUR D.; SOLANKI S.; ZAWODNY E.; KENT D.; KIM J. H.; CRANE C. D.
    • International Journal of Automotive Technology / v.7 no.1 / pp.83-90 / 2006
  • The purpose of this paper is to describe the design and implementation of an unmanned ground vehicle, called the TailGator, at CIMAR (Center for Intelligent Machines and Robotics) of the University of Florida. The TailGator is a gas-powered, four-wheeled vehicle that was designed for the AUVSI Intelligent Ground Vehicle Competition and has been tested in the contest for two years. The vehicle control model and the design of the sensory systems are described. The competition comprises two events, the Autonomous Challenge and the Navigation Challenge. For the Autonomous Challenge, line following, obstacle avoidance, and detection are required: line following is accomplished with a camera system, while obstacle avoidance and detection are accomplished with a laser scanner. For the Navigation Challenge, waypoint following and obstacle detection are required; the waypoint navigation is implemented with a global positioning system. The TailGator has provided an educational test bed not only for the contest requirements but also for other studies in developing artificial intelligence algorithms such as adaptive control, creative control, automatic calibration, and internet-based control. The significance of this effort is in helping engineering and technology students understand the transition from theory to practice.
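
For the Navigation Challenge portion, waypoint following from GPS fixes reduces to steering toward the bearing of the next waypoint; a minimal sketch of that computation is below. The coordinates and the proportional gain are hypothetical, and a real vehicle would blend this with the laser-based obstacle avoidance described above.

```python
# Minimal sketch of the waypoint-following idea: steer toward the bearing from the
# current GPS fix to the next waypoint. Coordinates and gain are hypothetical.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def steering_command(current_heading_deg, target_bearing_deg, gain=0.02):
    """Proportional steering toward the waypoint bearing (error wrapped to [-180, 180))."""
    error = (target_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    return gain * error

target = bearing_deg(29.6436, -82.3549, 29.6440, -82.3545)   # hypothetical waypoints
print(round(target, 1), round(steering_command(90.0, target), 3))
```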

High resolution ground penetrating image radar using an ultra wideband (UWB) impulse waveform (초광대역 임펄스를 이용한 고해상도 지반탐사 이미지 레이더)

  • Park Young-Jin; Kim Kwan-Ho; Lee Won-Tae
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.11 / pp.101-106 / 2005
  • A ground penetrating radar (GPR) imaging system using an ultra-wideband (UWB) impulse waveform is developed for the non-destructive detection of metallic pipelines buried under the ground. The dielectric constant of the test field is measured, and a GPR system is then designed for better detection up to 1 meter deep. The upper and lower frequencies are chosen by considering the total path loss, the volume of the complete system, and the resolution. First, a UWB impulse with a rise time of less than 1 ns is chosen for the required frequency bandwidth, and a compact planar UWB dipole antenna suitable for the bandwidth of that impulse is then designed. A digital storage oscilloscope is used to receive the reflected signals. For measurement, a monostatic technique and a migration technique are used. For visualizing underground targets, the simple image processing techniques of A-scan removal and B-scan average removal are applied. The prototype of the system is tested on a test field in wet clay soil, and it is shown that the developed system has a good ability to detect underground metal objects, even small targets of several centimeters.
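
The two image-processing steps named in the abstract, A-scan removal and B-scan average removal, amount to simple background subtraction on the radargram. A minimal numpy sketch is given below, assuming the B-scan is stored with time samples along rows and traces along columns; the array sizes and reference choice are assumptions.

```python
# Minimal sketch of the two clutter-removal steps mentioned in the abstract:
# "A-scan removal" subtracts a reference trace recorded over target-free ground,
# and "B-scan average removal" subtracts the mean trace to suppress horizontal clutter.
import numpy as np

def remove_reference_ascan(bscan, reference_ascan):
    """Subtract a target-free reference A-scan from every trace."""
    return bscan - reference_ascan[:, None]

def remove_bscan_average(bscan):
    """Subtract the mean trace (background) from every trace."""
    return bscan - bscan.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
bscan = rng.normal(size=(512, 200))           # hypothetical 512 samples x 200 traces
reference = bscan[:, :10].mean(axis=1)        # e.g. average of the first few traces
clean = remove_bscan_average(remove_reference_ascan(bscan, reference))
print(clean.shape)
```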

Sensor Fusion Docking System of Drone and Ground Vehicles Using Image Object Detection (영상 객체 검출을 이용한 드론과 지상로봇의 센서 융합 도킹 시스템)

  • Beck, Jong-Hwan; Park, Hee-Su; Oh, Se-Ryeong; Shin, Ji-Hun; Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.217-222 / 2017
  • Recent studies on robots for working in dangerous places have focused on large unmanned ground vehicles or four-legged robots, which have the advantage of long working time, but these are difficult to apply in practical hazardous fields that require a real-time system with high mobility and the capability of delicate work. This research presents a collaborative docking system of a drone and a ground vehicle that combines an image processing algorithm and laser sensors for effective detection of docking markers, and is finally capable of moving long distances and performing very delicate work. We propose a sensor-fusion docking system of a drone and ground vehicles, along with two template matching methods appropriate for this application. The system showed a 95% docking success rate in 50 docking attempts.
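
One of the building blocks named here is template matching for the docking marker; a minimal OpenCV sketch using normalized cross-correlation is shown below. The frame and template are synthetic arrays and the confidence threshold is an assumed value, so this does not reproduce the paper's two specific matching methods.

```python
# Hedged sketch of the template-matching idea: locate a docking marker in a camera
# frame with normalized cross-correlation (cv2.matchTemplate).
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(480, 640), dtype=np.uint8)   # synthetic grayscale frame
template = frame[200:240, 300:340].copy()                        # pretend this is the marker

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:                                                # assumed confidence threshold
    x, y = max_loc
    h, w = template.shape
    print(f"marker centre at ({x + w // 2}, {y + h // 2}), score {max_val:.2f}")
```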

Analysis of Traversable Candidate Region for Unmanned Ground Vehicle Using 3D LIDAR Reflectivity (3D LIDAR 반사율을 이용한 무인지상차량의 주행가능 후보 영역 분석)

  • Kim, Jun; Ahn, Seongyong; Min, Jihong; Bae, Keunsung
    • Transactions of the Korean Society of Mechanical Engineers A / v.41 no.11 / pp.1047-1053 / 2017
  • The range data acquired by 2D/3D LIDAR, a core sensor for the autonomous navigation of an unmanned ground vehicle, is effectively used for ground modeling and obstacle detection. Within the ambiguous boundaries of a road environment, however, LIDAR range data alone does not provide enough information to analyze the traversable region. This paper presents a new method that uses the characteristics of LIDAR reflectivity to better detect a traversable region. After calibrating the reflectivity of each channel, we detected a candidate traversable area in the front zone of the vehicle using a learning process on the LIDAR reflectivity. We validated the proposed candidate traversable region detection method by performing experiments in the real operating environment of the unmanned ground vehicle.
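
A rough sketch of the reflectivity-based idea is given below, under assumptions of our own: simple per-channel standardization stands in for the paper's calibration, and a fixed "road-like" reflectivity band stands in for its learned model. Calibrate reflectivity channel by channel, then keep front-zone points whose calibrated value falls in the candidate range.

```python
# Hedged sketch, not the paper's method: per-channel reflectivity calibration
# followed by a simple candidate-traversable-region mask over the front zone.
import numpy as np

def calibrate_reflectivity(reflectivity, channel_ids, n_channels=32):
    """Scale each laser channel to zero mean / unit variance so channels are comparable."""
    out = np.zeros_like(reflectivity, dtype=float)
    for ch in range(n_channels):
        mask = channel_ids == ch
        if mask.any():
            vals = reflectivity[mask]
            out[mask] = (vals - vals.mean()) / (vals.std() + 1e-6)
    return out

rng = np.random.default_rng(0)
points = rng.uniform(-20, 20, size=(10000, 3))            # x, y, z in the vehicle frame
refl = rng.uniform(0, 255, size=10000)
chans = rng.integers(0, 32, size=10000)

calib = calibrate_reflectivity(refl, chans)
front = points[:, 0] > 0                                   # front zone of the vehicle
road_like = (calib > -1.0) & (calib < 1.0)                 # assumed "learned" range
candidate = front & road_like
print("candidate traversable points:", int(candidate.sum()))
```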

Detection of planetary signals in extremely weak central perturbation microlensing events via next-generation ground-based surveys

  • Chung, Sun-Ju; Lee, Chung-Uk
    • The Bulletin of The Korean Astronomical Society / v.38 no.2 / pp.72.1-72.1 / 2013
  • Even though current microlensing follow-up observations focus on high-magnification events because of their high planet detection efficiency, it is very difficult to achieve a confident detection of planets in high-magnification events with extremely weak central perturbations (i.e., a fractional deviation $\delta \leq 0.02$). For a confident detection of planets in such extremely weak central perturbation events, both high-cadence monitoring and high photometric accuracy are needed. A next-generation ground-based observation project, KMTNet (Korea Microlensing Telescope Network), satisfies both conditions. Here we investigate how well planets in high-magnification events with extremely weak central perturbations can be detected by KMTNet. First, we determine the probability of occurrence of events with $\delta \leq 0.02$. From this, we find that for planets of $\leq 100\,M_E$ with projected planet-star separations $0.2\,\mathrm{AU} \leq d \leq 20\,\mathrm{AU}$, events with $\delta \leq 0.02$ occur with a frequency of more than 70%. Second, we estimate the efficiency of detecting planetary signals in the events with $\delta \leq 0.02$ via KMTNet. We find that for main-sequence and subgiant source stars, planets of $\geq 1\,M_E$ can be detected with more than 50% probability in a certain separation range where the efficiency is $\geq 10\%$, and this range changes with the planet mass.
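
The "extremely weak central perturbation" criterion is defined through the fractional deviation of the observed light curve from the single-lens curve. The small sketch below computes $\delta = (A_{\rm obs} - A_{\rm single})/A_{\rm single}$ against the point-source point-lens (Paczyński) magnification and checks the $\delta \leq 0.02$ threshold; the light curve here is a toy example, not a KMTNet simulation.

```python
# Toy sketch of the detection criterion: fractional deviation of an observed light
# curve from the single-lens (Paczynski) curve, with the "extremely weak" regime
# defined by max |delta| <= 0.02.
import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(u) with u = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-5.0, 5.0, 501)                       # days around the peak
A_single = paczynski_magnification(t, t0=0.0, tE=20.0, u0=0.005)
A_obs = A_single * (1 + 0.01 * np.exp(-((t - 0.3) / 0.2) ** 2))   # toy planetary bump

delta = (A_obs - A_single) / A_single
weak = np.abs(delta).max() <= 0.02
print("max |delta| =", round(float(np.abs(delta).max()), 4),
      "-> extremely weak perturbation" if weak else "")
```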


Surf points based Moving Target Detection and Long-term Tracking in Aerial Videos

  • Zhu, Juan-juan; Sun, Wei; Guo, Bao-long; Li, Cheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5624-5638 / 2016
  • A novel method based on Surf points is proposed to detect and lock-track single ground target in aerial videos. Videos captured by moving cameras contain complex motions, which bring difficulty in moving object detection. Our approach contains three parts: moving target template detection, search area estimation and target tracking. Global motion estimation and compensation are first made by grids-sampling Surf points selecting and matching. And then, the single ground target is detected by joint spatial-temporal information processing. The temporal process is made by calculating difference between compensated reference and current image and the spatial process is implementing morphological operations and adaptive binarization. The second part improves KALMAN filter with surf points scale information to predict target position and search area adaptively. Lastly, the local Surf points of target template are matched in this search region to realize target tracking. The long-term tracking is updated following target scaling, occlusion and large deformation. Experimental results show that the algorithm can correctly detect small moving target in dynamic scenes with complex motions. It is robust to vehicle dithering and target scale changing, rotation, especially partial occlusion or temporal complete occlusion. Comparing with traditional algorithms, our method enables real time operation, processing $520{\times}390$ frames at around 15fps.