• Title/Summary/Keyword: object detection and classification

Active Vibration Measuring Sensor for Nondestructive Test of Electric Power Transmission Line Insulators (송전선로 애자의 비파괴 검사를 위한 능동형 진동 측정센서)

  • Lee, Jae-Kyung;Park, Joon-Young;Cho, Byung-Hak
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.57 no.4
    • /
    • pp.424-430
    • /
    • 2008
  • A new active vibration measurement system for electric power transmission lines, for use in nondestructive testing, is presented. With a permanent magnet and a pair of coils, the system exerts an impact force on a test object and in turn picks up the vibration of the object. The natural frequency and the amplitude obtained from the system are used as a basis for detecting defects in the object. The system is controlled by an electronic device designed to facilitate a fully automated testing process with the consistent repeatability and reliability essential to nondestructive testing. The system is expected to be applicable to a wide range of defect-detection tasks, including the classification of mechanical parts in production and inspection processes.
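The screening principle described above, exciting the object, measuring its vibration, and flagging a defect when the natural frequency drifts from a known-good reference, can be sketched in plain Python. The sampling rate, reference frequency, and tolerance below are illustrative assumptions, not values from the paper:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest bin of a naive O(n^2) DFT."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n

def is_defective(samples, sample_rate, reference_hz, tolerance_hz=5.0):
    """Flag a test object whose measured natural frequency drifts too far."""
    return abs(dominant_frequency(samples, sample_rate) - reference_hz) > tolerance_hz

# Synthetic 120 Hz vibration sampled at 1 kHz for 250 samples
signal = [math.sin(2 * math.pi * 120 * t / 1000) for t in range(250)]
print(dominant_frequency(signal, 1000))                # 120.0
print(is_defective(signal, 1000, reference_hz=150.0))  # True
```

A real implementation would use an FFT and compare both frequency and amplitude, as the abstract indicates; the naive DFT here only keeps the sketch dependency-free.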

Resource-Efficient Object Detector for Low-Power Devices (저전력 장치를 위한 자원 효율적 객체 검출기)

  • Akshay Kumar Sharma;Kyung Ki Kim
    • Transactions on Semiconductor Engineering
    • /
    • v.2 no.1
    • /
    • pp.17-20
    • /
    • 2024
  • This paper presents a novel lightweight object detection model tailored for low-power edge devices, addressing the limitations of traditional resource-intensive computer vision models. Our proposed detector, inspired by the Single Shot Detector (SSD), employs a compact yet robust network design. Crucially, it integrates an 'enhancer block' that significantly boosts its efficiency in detecting smaller objects. The model comprises two primary components: the Light_Block for efficient feature extraction using depthwise and pointwise convolution layers, and the Enhancer_Block for enhanced detection of tiny objects. Trained from scratch on the Udacity Annotated Dataset with image dimensions of 300x480, our model eschews the need for pre-trained classification weights. At only 5.5 MB with approximately 0.43M parameters, our detector achieved a mean average precision (mAP) of 27.7% and ran at 140 FPS, outperforming conventional models in both precision and efficiency. This research underscores the potential of lightweight designs in advancing object detection for edge devices without compromising accuracy.
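The parameter savings behind the Light_Block's depthwise/pointwise (depthwise-separable) design can be checked with simple arithmetic. The 128-to-256-channel 3x3 layer below is an illustrative shape, not one taken from the paper:

```python
def standard_conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus pointwise 1 x 1."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(128, 256, 3)   # 294912
sep = separable_conv_params(128, 256, 3)  # 33920
print(std, sep, round(std / sep, 1))      # roughly 8.7x fewer weights
```

Savings of this order across the backbone are what make a 0.43M-parameter, 5.5 MB detector plausible.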

Object Segmentation for Detection of Moths in the Pheromone Trap Images (페로몬 트랩 영상에서 해충 검출을 위한 객체 분할)

  • Kim, Tae-Woo;Cho, Tae-Kyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.157-163
    • /
    • 2017
  • The object segmentation approach has the merit of reducing the processing cost of detecting moths of interest, because it applies a moth detection algorithm to objects that have first been segmented individually in the moth image. In this paper, an object segmentation method for moth detection in pheromone trap images is proposed. Our method consists of preprocessing, thresholding, morphological filtering, and object labeling. Thresholding is a critical step that strongly influences segmentation performance; the proposed method can set thresholds precisely by reflecting the local properties of the moth images. We performed thresholding using global and local versions of Otsu's method and applied the proposed method to images of Carposina sasakii acquired from a pheromone trap placed in an orchard. It was demonstrated that the proposed method could account for the lighting and background properties of the moth images. We also performed object segmentation and moth classification for the Carposina sasakii images, with the latter using an SVM classifier trained and applied in separate steps. In the experiments, the proposed method detected Carposina sasakii in 10 moth images with an average detection rate of 95%. The proposed technique is therefore an effective means of monitoring Carposina sasakii in an orchard.
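A minimal pure-Python version of the global Otsu threshold at the heart of the thresholding step (the two-level test image is a toy example; the paper's local variant applies the same computation per image tile):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg, mean_fg = sum_bg / w_bg, (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def local_otsu(tiles):
    """Local variant: one threshold per tile, reflecting local lighting."""
    return [otsu_threshold(tile) for tile in tiles]

# Bimodal toy image: background at 10, moths at 200
print(otsu_threshold([10] * 50 + [200] * 50))  # 10 (pixels above this are foreground)
```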

Detection of Dangerous Situations using Deep Learning Model with Relational Inference

  • Jang, Sein;Battulga, Lkhagvadorj;Nasridinov, Aziz
    • Journal of Multimedia Information System
    • /
    • v.7 no.3
    • /
    • pp.205-214
    • /
    • 2020
  • Crime has become one of the major problems of modern society. Even though visual surveillance through closed-circuit television (CCTV) is extensively used to fight crime, the number of crimes has not decreased, because the workforce available for 24-hour surveillance is insufficient. In addition, CCTV surveillance by humans is not efficient for detecting dangerous situations owing to accuracy issues. In this paper, we propose the autonomous detection of dangerous situations in CCTV scenes using a deep learning model with relational inference. The main feature of the proposed method is that it can simultaneously perform object detection and relational inference to determine the danger of the situations captured by CCTV. This enables us to classify dangerous situations efficiently by inferring the relationships between detected objects (i.e., distance and position). Experimental results demonstrate that the proposed method outperforms existing methods in terms of image classification accuracy and false alarm rate, even when object detection accuracy is low.
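The core relational idea, classifying a scene as dangerous from the distance and position of detected objects, can be sketched as follows. The labels, risky pairs, and distance threshold are illustrative assumptions, not the paper's learned relational model:

```python
import math

def pairwise_relations(detections):
    """Yield (label_a, label_b, centre distance) for every detected pair."""
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            (la, xa, ya), (lb, xb, yb) = detections[i], detections[j]
            yield la, lb, math.hypot(xa - xb, ya - yb)

def is_dangerous(detections, risky_pairs, max_dist):
    """Flag a scene when a risky pair of objects appears close together."""
    return any({a, b} in risky_pairs and d < max_dist
               for a, b, d in pairwise_relations(detections))

scene = [("person", 120, 80), ("knife", 130, 90), ("person", 400, 300)]
risky = [{"person", "knife"}]
print(is_dangerous(scene, risky, max_dist=50))  # True
```

The paper learns these relations end-to-end with a relational-inference network; the hand-written rule above only illustrates the pairwise structure being inferred.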

3D Object Detection with Low-Density 4D Imaging Radar PCD Data Clustering and Voxel Feature Extraction for Each Cluster (4D 이미징 레이더의 저밀도 PCD 데이터 군집화와 각 군집에 복셀 특징 추출 기법을 적용한 3D 객체 인식 기법)

  • Cha-Young, Oh;Soon-Jae, Gwon;Hyun-Jung, Jung;Gu-Min, Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.6
    • /
    • pp.471-476
    • /
    • 2022
  • In this paper, we propose an object detection method using a 4D imaging radar, a sensor developed to address the weaknesses of cameras and LiDAR in bad weather. When data are measured and collected with a 4D imaging radar, the density of the point cloud data is low compared to LiDAR data. Exploiting the wide inter-object distances that result from this low density, we propose a technique that clusters the objects, extracts object features through voxels within each cluster, and then performs object detection using the extracted features.
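The two stages described, Euclidean clustering of the sparse points and then voxel features per cluster, can be sketched in pure Python (the radius and voxel size are illustrative assumptions):

```python
import math
from collections import defaultdict, deque

def euclidean_clusters(points, radius):
    """Group 3D points whose chain-wise distance stays within `radius` (BFS)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append([points[i] for i in cluster])
    return clusters

def voxel_features(cluster, voxel_size):
    """Per-cluster voxel feature: point count in each occupied voxel cell."""
    counts = defaultdict(int)
    for x, y, z in cluster:
        cell = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        counts[cell] += 1
    return dict(counts)
```

With low-density radar returns, the wide gaps between objects make even this simple radius-based clustering separate them cleanly; voxelizing within each cluster then yields a fixed-structure feature for the detector.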

Selective labeling using image super resolution for improving the efficiency of object detection in low-resolution oriental paintings

  • Moon, Hyeyoung;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.21-32
    • /
    • 2022
  • Image labeling must precede object detection, and this task is a significant burden in building a deep learning model. Tens of thousands of images must be used to train a deep learning model, and human labelers face many limitations in labeling these images manually. To overcome these difficulties, this study proposes a method to perform object detection without significant performance degradation while labeling only some of the images rather than all of them. Specifically, low-resolution oriental painting images are converted into high-quality images using a super-resolution algorithm, and the effect of the SSIM and PSNR values derived in this process on the mAP of object detection is analyzed. We expect the results of this study to contribute significantly to building deep learning models for tasks such as image classification, object detection, and image segmentation that require efficient image labeling.
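Of the two quality metrics analyzed, PSNR is simple enough to sketch directly (images are flattened 8-bit pixel lists here; SSIM needs windowed mean/variance statistics and is omitted):

```python
import math

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return math.inf  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(psnr([100, 100], [101, 99]))  # about 48.13 dB for an off-by-one image
```

Higher PSNR after super-resolution indicates a cleaner reconstruction; the study's question is how far such metrics predict the resulting detection mAP.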

Moving Object Detection in Pan-Tilt Camera using Image Alignment (영상 정렬 알고리듬을 이용한 팬틸트 카메라에서 움직이는 물체 탐지 기법)

  • Baek, Young-Min;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.260-261
    • /
    • 2008
  • Moving object detection is the earliest stage of most surveillance systems, and its output is fed to subsequent intelligent algorithms such as object tracking and object classification. Generating a moving-object region map as precisely as possible, without distorting object contours, is therefore the most important element of object detection. When the camera is fixed, a probabilistic background model of the incoming video can be built; but in an environment where the image coordinates change, as with a pan-tilt camera, the background model also keeps changing, so a conventional background model cannot be used as-is. In this paper, for moving object detection with a dynamic camera such as a pan-tilt camera, we determine the camera motion from local features to estimate the transformation matrix between consecutive frames, and propose a moving object detection method based on probabilistic background modeling. Experiments on self-recorded moving-camera videos verified that the algorithm operates very robustly even against dynamic backgrounds.
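The compensation step described, warping the background model with the inter-frame transformation before background subtraction, can be sketched for a plain 3x3 homography with nearest-neighbour resampling (the matrices and image sizes are illustrative assumptions):

```python
def warp_point(H, x, y):
    """Apply a 3x3 homography (row-major nested list) to pixel (x, y)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

def warp_background_mean(mean_img, H_inv, width, height):
    """Resample a per-pixel background mean into the new camera pose.

    Pixels that enter the view for the first time have no model yet (None)
    and must be re-initialized from the current frame.
    """
    warped = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            sx, sy = warp_point(H_inv, x, y)
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < width and 0 <= sy < height:
                warped[y][x] = mean_img[sy][sx]
    return warped
```

In the paper the transformation is estimated from matched local features; here it is taken as given so the warping of the probabilistic model is the only step shown.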

Detection of Trees with Pine Wilt Disease Using Object-based Classification Method

  • Park, Jeongmook;Sim, Woodam;Lee, Jungsoo
    • Journal of Forest and Environmental Science
    • /
    • v.32 no.4
    • /
    • pp.384-391
    • /
    • 2016
  • In this study, regions infected by pine wilt disease were extracted using an object-based classification method (OB-infected regions), and the characteristics of their spatial distribution were analyzed. Scale 24, Shape 0.1, Color 0.9, Compactness 0.5, and Smoothness 0.5 were selected as the optimal object-based weights for classifying OB-infected regions. The overall classification accuracy was high at 99%, and the Kappa coefficient was also high at 0.97. The OB-infected area was approximately 90 ha, 16% of the total area. OB-infected regions in age classes V and VI accounted for 97% of the total, and those in the middle and large DBH classes accounted for 99%. In terms of topographic characteristics, approximately 86% of the damage occurred below an altitude of 200 m and 91% on slopes of less than 10 degrees; damage was concentrated on low hills and gently undulating slopes. In addition, most OB-infected regions lay within 300 m of a road or residential area. Overall, anthropogenic effects appear to be stronger than natural effects in the spread of pine wilt disease.

Data Augmentation Method of Small Dataset for Object Detection and Classification (영상 내 물체 검출 및 분류를 위한 소규모 데이터 확장 기법)

  • Kim, Jin Yong;Kim, Eun Kyeong;Kim, Sungshin
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.2
    • /
    • pp.184-189
    • /
    • 2020
  • This paper is a study on data augmentation for small datasets using deep learning. When training a deep learning model to recognize and classify non-mainstream objects, there is a limit to how much training data can be obtained. This paper therefore proposes a data augmentation method using perspective transforms and image synthesis. In addition, the object region must be saved for every training image so that the object region can be detected; we thus devised a way to augment the data and save the object regions at the same time. To verify the performance of data augmented with the proposed method, an experiment compared its classification accuracy against data augmented with the traditional method, using transfer learning for model training. The model trained with the proposed method showed higher accuracy than the model trained with the traditional method.
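Keeping the saved object region consistent with a perspective-transformed image reduces to mapping the box corners through the same 3x3 matrix and re-fitting an axis-aligned box, sketched below (the pure-scaling matrix is an illustrative assumption):

```python
def perspective_transform(M, x, y):
    """Map point (x, y) through a 3x3 perspective matrix M (row-major)."""
    w = M[2][0] * x + M[2][1] * y + M[2][2]
    return ((M[0][0] * x + M[0][1] * y + M[0][2]) / w,
            (M[1][0] * x + M[1][1] * y + M[1][2]) / w)

def transform_bbox(M, bbox):
    """Warp a bbox's four corners and return the new axis-aligned bbox,
    keeping the saved object region in sync with the augmented image."""
    x1, y1, x2, y2 = bbox
    corners = [perspective_transform(M, x, y)
               for x, y in ((x1, y1), (x2, y1), (x2, y2), (x1, y2))]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return min(xs), min(ys), max(xs), max(ys)

M = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]  # uniform 2x scaling
print(transform_bbox(M, (10, 10, 20, 20)))  # (20.0, 20.0, 40.0, 40.0)
```

Transforming annotations alongside the pixels is what lets the paper augment data and save the object regions in one pass.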

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision-LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification id. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
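The final fusion step, assigning each LiDAR track the class of the vision track with the closest bearing, can be sketched as a greedy nearest-angle match. The track formats and angle gate are illustrative assumptions, not the paper's exact association logic:

```python
import math

def bearing(x, y):
    """Bearing (rad) of a track position in the subject-vehicle frame."""
    return math.atan2(y, x)

def match_tracks(lidar_tracks, vision_tracks, max_angle=0.05):
    """Greedy high-level fusion: give each LiDAR track the class id of the
    vision track whose bearing is closest, within max_angle radians."""
    matches = {}
    for lid, (lx, ly) in lidar_tracks.items():
        best, best_err = None, max_angle
        for vid, (vx, vy, cls) in vision_tracks.items():
            err = abs(bearing(lx, ly) - bearing(vx, vy))
            if err < best_err:
                best, best_err = (vid, cls), err
        if best:
            matches[lid] = best[1]
    return matches

lidar = {"L1": (10.0, 0.1)}
vision = {"V1": (9.0, 0.0, "pedestrian"), "V2": (5.0, 5.0, "bicycle")}
print(match_tracks(lidar, vision))  # {'L1': 'pedestrian'}
```

Matching on bearing alone is what makes the high-level fusion tolerant of the range error left after projecting vision boxes through the homography.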