• Title/Summary/Keyword: Image Dataset (이미지 데이터 셋)


Application of Deep Learning-based Object Detection and Distance Estimation Algorithms for Driving to Urban Area (도심로 주행을 위한 딥러닝 기반 객체 검출 및 거리 추정 알고리즘 적용)

  • Seo, Juyeong;Park, Manbok
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.3 / pp.83-95 / 2022
  • This paper proposes a system that performs object detection and distance estimation for application to autonomous vehicles. Object detection is performed by a network based on the widely used deep learning model YOLOv4, with the split grid adjusted to the input image aspect ratio, and the network is trained on a custom dataset. The distance to each detected object is estimated from its bounding box using a homography. In the experiments, the proposed method improved overall detection performance while maintaining near real-time processing speed. Compared to the original YOLOv4, the total mAP of the proposed method increased by 4.03%, and recognition accuracy improved for objects that frequently appear in urban driving, such as pedestrians, vehicles, construction sites, and PE drums. The processing speed is approximately 55 FPS. The average distance estimation error was 5.25 m along the X coordinate and 0.97 m along the Y coordinate.
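As an illustration of the bounding-box-plus-homography distance idea mentioned in this abstract, the sketch below projects the bottom-center of a detected box onto the road plane. The homography matrix H and the box coordinates are placeholders, not the calibration or detections from the paper.

```python
import numpy as np

# Hypothetical homography from image plane to road plane (meters);
# in practice H comes from camera calibration, not these numbers.
H = np.array([[0.02, 0.0,   -12.8],
              [0.0,  0.05,  -9.6],
              [0.0,  0.002,  1.0]])

def ground_distance(bbox, H):
    """Map the bottom-center of a detector bbox (x1, y1, x2, y2) to
    road-plane coordinates (X, Y) via the homography H."""
    x1, y1, x2, y2 = bbox
    px, py = (x1 + x2) / 2.0, y2           # bottom-center pixel
    X, Y, w = H @ np.array([px, py, 1.0])  # homogeneous transform
    return X / w, Y / w                    # meters on the ground plane

# Example: a box returned by a YOLO-style detector (pixel coordinates)
print(ground_distance((600, 300, 680, 420), H))
```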

Deep Learning-based system for plant disease detection and classification (딥러닝 기반 작물 질병 탐지 및 분류 시스템)

  • YuJin Ko;HyunJun Lee;HeeJa Jeong;Li Yu;NamHo Kim
    • Smart Media Journal / v.12 no.7 / pp.9-17 / 2023
  • Plant diseases and pests affect the growth of various plants, so identifying pests at an early stage is very important. Although many machine learning (ML) models have already been used for the inspection and classification of plant pests, deep learning (DL), a subset of machine learning, has driven much of the recent progress in this field. In this study, disease and pest inspection was performed for abnormal crops and maturity classification for normal crops using a YOLOX detector and a MobileNet classifier. Through this method, various plant pest features can be extracted effectively. For the experiments, image datasets of various resolutions containing strawberries, peppers, and tomatoes were prepared and used for plant pest classification. According to the experimental results, the average test accuracy was 84% and the maturity classification accuracy was 83.91% on images with complex backgrounds. The model was able to effectively detect 6 diseases of 3 plants and classify the maturity of each plant under natural conditions.
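A rough sketch of the two-stage pipeline described above (a detector proposes crop regions, a classifier labels each crop). The MobileNet variant, the class count (6 diseases plus healthy), and the preprocessing are assumptions, and the detector is left as a generic list of boxes rather than the authors' YOLOX model.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative stand-ins; the paper's YOLOX detector and fine-tuned
# MobileNet weights are not publicly specified here.
classifier = models.mobilenet_v3_small(weights=None, num_classes=7)  # 6 diseases + healthy (assumed)
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def classify_detections(image: Image.Image, boxes):
    """Crop each detected plant region and classify it with the MobileNet head."""
    classifier.eval()
    labels = []
    with torch.no_grad():
        for (x1, y1, x2, y2) in boxes:
            crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0)
            labels.append(int(classifier(crop).argmax(dim=1)))
    return labels
```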

Effective Classification Method of Hierarchical CNN for Multi-Class Outlier Detection (다중 클래스 이상치 탐지를 위한 계층 CNN의 효과적인 클래스 분할 방법)

  • Kim, Jee-Hyun;Lee, Seyoung;Kim, Yerim;Ahn, Seo-Yeong;Park, Saerom
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.81-84 / 2022
  • Outlier detection in the manufacturing industry is an important factor for ensuring product quality and reducing operating costs, and it has recently been automated using deep learning. CNNs are among the deep learning techniques used for outlier detection, and many prior studies have shown that a hierarchical arrangement of CNNs can achieve better performance than a single CNN model. Accordingly, we used the MVTec-AD dataset to explore whether a hierarchical CNN is effective for the multi-class outlier detection problem. In the experiments, the accuracy of the single CNN was 0.7715 and that of the hierarchical CNN was 0.7838, confirming that the hierarchical CNN approach can improve performance on multi-class outlier detection. However, a hierarchical CNN has the drawback that the number of models and parameters and the resource usage increase exponentially compared to a single CNN. To retain the advantages of the hierarchical CNN while saving resources, we constructed hierarchical CNNs using new classes produced by K-means, GMM, and hierarchical clustering algorithms, obtaining accuracies of 0.7930, 0.7891, and 0.7936, respectively. This confirms that, when objects are grouped appropriately with a clustering algorithm, performance comparable to or better than building a separate per-object state-judgment model can be achieved while reducing resource usage.

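A minimal sketch of the clustering step described in this abstract: per-class feature representations are grouped into super-classes with K-means so that one sub-model can be trained per cluster. The feature vectors and category names below are placeholders, not features extracted from MVTec-AD.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder: one mean feature vector per object category
# (in the paper these would come from a CNN feature extractor on MVTec-AD).
rng = np.random.default_rng(0)
class_names = ["bottle", "cable", "capsule", "carpet", "grid", "hazelnut"]
class_means = rng.normal(size=(len(class_names), 128))

# Form super-classes; each cluster would get its own downstream anomaly CNN.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(class_means)
for name, cluster_id in zip(class_names, kmeans.labels_):
    print(f"{name} -> super-class {cluster_id}")
```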

Resource-Efficient Object Detector for Low-Power Devices (저전력 장치를 위한 자원 효율적 객체 검출기)

  • Akshay Kumar Sharma;Kyung Ki Kim
    • Transactions on Semiconductor Engineering / v.2 no.1 / pp.17-20 / 2024
  • This paper presents a novel lightweight object detection model tailored for low-powered edge devices, addressing the limitations of traditional resource-intensive computer vision models. Our proposed detector, inspired by the Single Shot Detector (SSD), employs a compact yet robust network design. Crucially, it integrates an 'enhancer block' that significantly boosts its efficiency in detecting smaller objects. The model comprises two primary components: the Light_Block for efficient feature extraction using Depth-wise and Pointwise Convolution layers, and the Enhancer_Block for enhanced detection of tiny objects. Trained from scratch on the Udacity Annotated Dataset with image dimensions of 300x480, our model eschews the need for pre-trained classification weights. Weighing only 5.5MB with approximately 0.43M parameters, our detector achieved a mean average precision (mAP) of 27.7% and processed at 140 FPS, outperforming conventional models in both precision and efficiency. This research underscores the potential of lightweight designs in advancing object detection for edge devices without compromising accuracy.
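The depthwise-plus-pointwise convolution structure of the Light_Block can be sketched as follows; the kernel size, batch normalization, activation, and channel widths are assumptions rather than the authors' exact block.

```python
import torch
import torch.nn as nn

class LightBlock(nn.Module):
    """Depthwise-separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 (pointwise) conv that mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# 300x480 input as in the abstract; channel widths here are illustrative.
x = torch.randn(1, 3, 300, 480)
print(LightBlock(3, 32, stride=2)(x).shape)  # torch.Size([1, 32, 150, 240])
```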

Detecting Adversarial Example Using Ensemble Method on Deep Neural Network (딥뉴럴네트워크에서의 적대적 샘플에 관한 앙상블 방어 연구)

  • Kwon, Hyun;Yoon, Joonhyeok;Kim, Junseob;Park, Sangjun;Kim, Yongchul
    • Convergence Security Journal / v.21 no.2 / pp.57-66 / 2021
  • Deep neural networks (DNNs) provide excellent performance for image, speech, and pattern recognition. However, DNNs sometimes misclassify certain adversarial examples. An adversarial example is a sample created by adding optimized noise to the original data; it causes the DNN to misclassify even though nothing looks wrong to the human eye. Therefore, studies on defenses against adversarial example attacks are required. In this paper, we experimentally analyzed the detection success rate for adversarial examples while adjusting various parameters. The performance of the ensemble defense method was analyzed against the fast gradient sign method, the DeepFool method, and the Carlini & Wagner method, which are adversarial example attack methods. We used MNIST as the experimental data and TensorFlow as the machine learning library. The performance analysis covered the three adversarial attack methods, the detection threshold, the number of models, and random noise. As a result, with 7 models and a threshold of 1, the detection rate for adversarial examples was 98.3% while 99.2% accuracy on the original samples was maintained.
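One illustrative reading of the ensemble detection rule with a threshold and a number of models, not necessarily the authors' exact procedure: run the input through N models and flag it as adversarial when more than `threshold` models disagree with the majority prediction.

```python
import numpy as np

def detect_adversarial(models, x, threshold=1):
    """Flag x as adversarial if more than `threshold` models disagree
    with the ensemble's majority prediction. Each model maps x -> class id."""
    preds = np.array([m(x) for m in models])
    majority = int(np.bincount(preds).argmax())
    disagreements = int((preds != majority).sum())
    return disagreements > threshold, majority

# Toy stand-ins for 7 trained MNIST classifiers (threshold=1 as in the abstract).
models = [lambda x, c=c: c for c in [3, 3, 3, 3, 3, 3, 8]]
print(detect_adversarial(models, x=None, threshold=1))  # (False, 3)
```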

A Study on Field Compost Detection by Using Unmanned AerialVehicle Image and Semantic Segmentation Technique based Deep Learning (무인항공기 영상과 딥러닝 기반의 의미론적 분할 기법을 활용한 야적퇴비 탐지 연구)

  • Kim, Na-Kyeong;Park, Mi-So;Jeong, Min-Ji;Hwang, Do-Hyun;Yoon, Hong-Joo
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.367-378 / 2021
  • Field compost is a representative non-point pollution source from livestock farming. If field compost flows into the water system due to rainfall, nutrients such as phosphorus and nitrogen contained in it can adversely affect river water quality. In this paper, we propose a method for detecting field compost using unmanned aerial vehicle images and deep learning-based semantic segmentation. Based on 39 orthoimages acquired in the study area, about 30,000 samples were obtained through data augmentation. The accuracy was then evaluated by applying a semantic segmentation algorithm developed based on U-Net together with an OpenCV filtering technique. In the accuracy evaluation, the pixel accuracy was 99.97%, the precision 83.80%, the recall 60.95%, and the F1-score 70.57%. The low recall compared to precision is due to the underestimation of compost pixels when only a small proportion of compost pixels appears at the edges of the image. Accuracy could be further improved by combining additional datasets with bands other than RGB.
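The reported pixel accuracy, precision, recall, and F1-score are standard binary segmentation metrics and can be computed from a predicted mask and a ground-truth mask as in the sketch below (1 = compost pixel assumed).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel accuracy, precision, recall and F1 for binary masks (1 = compost)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    accuracy = (pred == truth).mean()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy 4x4 masks just to exercise the function.
pred = np.array([[0, 0, 1, 1]] * 4)
truth = np.array([[0, 1, 1, 1]] * 4)
print(segmentation_metrics(pred, truth))
```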

DNN Model for Calculation of UV Index at The Location of User Using Solar Object Information and Sunlight Characteristics (태양객체 정보 및 태양광 특성을 이용하여 사용자 위치의 자외선 지수를 산출하는 DNN 모델)

  • Ga, Deog-hyun;Oh, Seung-Taek;Lim, Jae-Hyun
    • Journal of Internet Computing and Services / v.23 no.2 / pp.29-35 / 2022
  • UV rays have beneficial or harmful effects on the human body depending on the degree of exposure, so accurate UV information is required for proper individual exposure. In Korea, UV information is provided by the Korea Meteorological Administration as one component of daily weather information; however, it is a regional ultraviolet index (UVI) and does not give an accurate UVI at the user's exact location. Operating a measuring instrument yields an accurate UVI but is costly and inconvenient. Studies that estimate the UVI from environmental factors such as solar radiation and cloud amount have been introduced, but they also cannot provide a service to individual users. Therefore, this paper proposes a deep learning model that calculates the UVI from solar object information and sunlight characteristics to provide an accurate UVI at an individual's location. After selecting factors considered highly correlated with the UVI, such as the location, size, and illuminance of the sun, obtained through the analysis of sky images and solar characteristics data, a dataset for the DNN model was constructed. A DNN model that calculates the UVI was then realized by feeding in the solar object information and sunlight characteristics extracted through Mask R-CNN. Considering the domestic UVI recommendation standards, the model calculated the UVI within an MAE of 0.26 compared to the reference equipment in performance evaluations for days with UVI above and below 8.
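A minimal sketch of the kind of regression model described above: a small fully connected network mapping solar-object features to a UV index and trained with an MAE (L1) objective. The feature list, layer sizes, and learning rate are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Assumed input features: [sun_x, sun_y, sun_area, illuminance] per sample.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),               # predicted UV index
)
loss_fn = nn.L1Loss()               # MAE, matching the reported metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, uvi_target):
    """One gradient step on a batch of (features, measured UVI) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), uvi_target)
    loss.backward()
    optimizer.step()
    return float(loss)

# Dummy batch just to show the call signature.
print(train_step(torch.rand(8, 4), torch.rand(8, 1) * 11))
```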

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services / v.21 no.4 / pp.9-16 / 2020
  • In image processing and computer vision, the problem of translating one image into another or generating a new image has steadily drawn attention as hardware advances. However, computer-generated images still often look unnatural to the human eye. With the recent surge of deep learning research, image generation and enhancement using it are also being actively studied, and among these approaches the Generative Adversarial Network (GAN) performs well in image generation. Various GAN models have been presented since GAN was first proposed, allowing more natural images to be generated than in earlier image-generation research. Among them, pix2pix is a conditional GAN model and a general-purpose network that shows good performance on various datasets. pix2pix is based on U-Net, but many U-Net-based networks show better performance. Therefore, in this study, images are generated by applying various networks to the U-Net of pix2pix, and the results are compared and evaluated. The images generated through each network confirm that pix2pix models with Attention, R2, and Attention-R2 networks perform better than the original pix2pix model using U-Net; examining the limitations of the strongest network is suggested as future work.
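One common way to add attention to a U-Net skip connection, of the kind compared in this study, is an additive attention gate that re-weights encoder features using the decoder signal; the sketch below follows that widely used formulation and may differ in detail from the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for a U-Net skip connection:
    encoder features x are re-weighted using the decoder gating signal g."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, 1)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # attention-weighted skip features

x = torch.randn(1, 64, 32, 32)   # encoder feature map
g = torch.randn(1, 128, 32, 32)  # decoder gating signal (assumed already upsampled)
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```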

A Study on Attention Mechanism in DeepLabv3+ for Deep Learning-based Semantic Segmentation (딥러닝 기반의 Semantic Segmentation을 위한 DeepLabv3+에서 강조 기법에 관한 연구)

  • Shin, SeokYong;Lee, SangHun;Han, HyunHo
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.55-61 / 2021
  • In this paper, we propose a DeepLabv3+-based encoder-decoder model that utilizes an attention mechanism for precise semantic segmentation. DeepLabv3+ is a deep learning-based semantic segmentation method mainly used in applications such as autonomous vehicles and infrared image analysis. In the conventional DeepLabv3+, the encoder's intermediate feature maps are barely used in the decoder, which causes loss during the restoration process and reduces segmentation accuracy. Therefore, the proposed method first minimizes the restoration loss by additionally using one intermediate feature map, and fuses the feature maps hierarchically, starting from the smallest, to utilize them effectively. Finally, an attention mechanism is applied to the decoder to maximize its ability to fuse the intermediate feature maps. We evaluated the proposed method on the Cityscapes dataset, which is commonly used for street scene segmentation research. The experimental results show that the proposed method improves segmentation results compared to the conventional DeepLabv3+ and can be used in applications that require high accuracy.
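A possible concrete form of the decoder-side fusion-plus-attention idea: upsample the decoder features, concatenate an encoder intermediate feature map, and re-weight channels with squeeze-and-excitation style attention. This is an assumed sketch, not the paper's exact module; channel sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseWithChannelAttention(nn.Module):
    """Upsample decoder features, concatenate an encoder intermediate feature
    map, then re-weight channels with squeeze-and-excitation style attention."""
    def __init__(self, dec_ch, enc_ch, reduction=8):
        super().__init__()
        ch = dec_ch + enc_ch
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, dec, enc):
        dec = F.interpolate(dec, size=enc.shape[-2:], mode="bilinear", align_corners=False)
        fused = torch.cat([dec, enc], dim=1)
        w = self.fc(fused.mean(dim=(2, 3)))   # squeeze: global average pool
        return fused * w[:, :, None, None]    # excite: per-channel weights

dec = torch.randn(1, 256, 16, 16)   # decoder / ASPP output (illustrative sizes)
enc = torch.randn(1, 48, 64, 64)    # encoder intermediate feature map
print(FuseWithChannelAttention(256, 48)(dec, enc).shape)  # torch.Size([1, 304, 64, 64])
```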

A Study on Improving Facial Recognition Performance to Introduce a New Dog Registration Method (새로운 반려견 등록방식 도입을 위한 안면 인식 성능 개선 연구)

  • Lee, Dongsu;Park, Gooman
    • Journal of Broadcast Engineering / v.27 no.5 / pp.794-807 / 2022
  • Although registration of dogs is mandatory under the revised Animal Protection Act, the registration rate is low due to the inconvenience of the current registration methods. In this paper, a performance improvement study was conducted on dog face recognition technology, which is being reviewed as a new registration method. Using deep learning, an embedding vector for a dog's face was created and a method for identifying each individual dog was tested. We built a dog image dataset for training and experimented with InceptionNet and ResNet-50 as backbone networks. The models were trained with the triplet loss method, and the experiments were divided into face verification and face recognition. The ResNet-50-based model achieved the best face verification performance of 93.46%, and in the face recognition test the highest performance was 91.44% at rank-5. The experimental methods and results presented in this paper can be used in various applications, such as checking whether a dog is registered and identifying individual dogs at dog-access facilities.
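A minimal sketch of triplet-loss embedding training with a ResNet-50 backbone, as described above; the embedding size, margin, normalization, and dummy batch are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 backbone producing a 128-D face embedding (sizes are assumptions).
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 128)

triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def embed(x):
    # L2-normalize so distances are comparable across identities.
    return nn.functional.normalize(backbone(x), dim=1)

def train_step(anchor, positive, negative):
    """One step on a triplet: same dog (anchor/positive) vs. a different dog (negative)."""
    optimizer.zero_grad()
    loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
    loss.backward()
    optimizer.step()
    return float(loss)

# Dummy image batches just to show the expected tensor shapes.
imgs = [torch.rand(2, 3, 224, 224) for _ in range(3)]
print(train_step(*imgs))
```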