• Title/Summary/Keyword: ResNeXt

Exotic Weed Image Recognition System Based on ResNeXt Model (ResNeXt 모델 기반의 외래잡초 영상 판별 시스템)

  • Kim, Min-Soo;Lee, Gi Yong;Kim, Hyoung-Gook
    • Journal of Korea Multimedia Society / v.24 no.6 / pp.745-752 / 2021
  • In this paper, we propose a system that recognizes weed images using a classifier based on the ResNeXt model. On the server of the proposed system, the ResNeXt model extracts fine-grained features from the weed images sent by the user and classifies each image as the most similar of 21 weed species. The classification result is then delivered to the client and displayed on the smartphone screen through the application. The experimental results show that the proposed ResNeXt-based weed recognition system outperforms existing methods and can be effectively applied in real-world agricultural fields.
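
The entry above describes a server-side ResNeXt classifier over 21 weed species. A minimal sketch of such a classifier, assuming torchvision's resnext50_32x4d as the backbone and the torchvision >= 0.13 weights API (the paper does not name the exact variant or framework):

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 21  # 21 weed species, per the abstract above

    # Backbone and pretrained weights are assumptions, not the authors' setup.
    model = models.resnext50_32x4d(weights=models.ResNeXt50_32X4D_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace ImageNet head

    # Server-side inference on one preprocessed image (1 x 3 x 224 x 224):
    model.eval()
    with torch.no_grad():
        x = torch.randn(1, 3, 224, 224)         # stand-in for a user-sent image
        probs = torch.softmax(model(x), dim=1)  # probabilities over 21 species
        predicted_species = probs.argmax(dim=1).item()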

Oriented object detection in satellite images using convolutional neural network based on ResNeXt

  • Asep Haryono;Grafika Jati;Wisnu Jatmiko
    • ETRI Journal / v.46 no.2 / pp.307-322 / 2024
  • Most object detection methods use a horizontal bounding box, which causes problems between adjacent objects with arbitrary orientations and results in misaligned detections. Hence, the horizontal anchor should be replaced by a rotated anchor to determine oriented bounding boxes. A two-stage process of delineating a horizontal bounding box and then converting it into an oriented bounding box is inefficient. To improve detection, a box-boundary-aware vector can be estimated by a convolutional neural network. Specifically, we propose a ResNeXt101 encoder to overcome the weaknesses of the conventional ResNet, which becomes less effective as network depth and complexity increase. Owing to its cardinality, realized as a homogeneous multi-branch design with few hyperparameters, ResNeXt captures better information than ResNet. Experimental results demonstrate that our proposal detects oriented objects more accurately and faster than the baseline, achieving a mean average precision of 89.41% and an inference rate of 23.67 fps.
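
The cardinality idea this entry leans on, aggregated parallel transforms realized as a grouped convolution, can be sketched as a single ResNeXt bottleneck block; the class name and channel widths below are illustrative, not the authors' code:

    import torch
    import torch.nn as nn

    class ResNeXtBottleneck(nn.Module):
        def __init__(self, channels, cardinality=32, bottleneck_width=4):
            super().__init__()
            inner = cardinality * bottleneck_width  # e.g. 32 groups x 4 = 128
            self.block = nn.Sequential(
                nn.Conv2d(channels, inner, 1, bias=False),
                nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
                # groups=cardinality splits the 3x3 conv into parallel branches
                nn.Conv2d(inner, inner, 3, padding=1, groups=cardinality, bias=False),
                nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
                nn.Conv2d(inner, channels, 1, bias=False),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(x + self.block(x))  # residual connection

    y = ResNeXtBottleneck(256)(torch.randn(1, 256, 56, 56))  # -> [1, 256, 56, 56]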

Application of Deep Learning-Based Nuclear Medicine Lung Study Classification Model (딥러닝 기반의 핵의학 폐검사 분류 모델 적용)

  • Jeong, Eui-Hwan;Oh, Joo-Young;Lee, Ju-Young;Park, Hoon-Hee
    • Journal of radiological science and technology / v.45 no.1 / pp.41-47 / 2022
  • The purpose of this study is to apply a deep learning model that can distinguish lung perfusion from lung ventilation images in nuclear medicine and to evaluate its image classification ability. Image data pre-processing was performed in the following order: image matrix size adjustment, min-max normalization, image center position adjustment, train/validation/test data set splitting, and data augmentation. The convolutional neural network (CNN) architectures VGG-16, ResNet-18, Inception-ResNet-v2, and SE-ResNeXt-101 were used. For evaluation, classification performance indices, class activation maps (CAM), and statistical image evaluation methods were applied. On the classification performance indices, SE-ResNeXt-101 and Inception-ResNet-v2 tied for the highest performance. The CAM results showed that the cardiac and right lung regions were highly activated in lung perfusion, while the upper lung and neck regions were highly activated in lung ventilation. Statistical image evaluation showed a meaningful difference between SE-ResNeXt-101 and Inception-ResNet-v2. The study confirms the applicability of CNN models for lung scintigraphy classification. These results are expected to serve as basic data for research on new artificial intelligence models and to support stable image management in clinical practice.
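
A hedged sketch of two steps in the pre-processing order listed above, min-max normalization and the train/validation/test split; the split ratios and array sizes are assumptions, as the abstract does not state them:

    import numpy as np

    def min_max_normalize(img: np.ndarray) -> np.ndarray:
        """Scale pixel intensities to [0, 1]; guard against constant images."""
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

    # Hypothetical 70/15/15 train/validation/test split over study indices:
    rng = np.random.default_rng(0)
    n_studies = 1000  # placeholder count
    idx = rng.permutation(n_studies)
    train, val, test = np.split(idx, [int(0.7 * n_studies), int(0.85 * n_studies)])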

Assessing Stream Vegetation Dynamics and Revetment Impact Using Time-Series RGB UAV Images and ResNeXt101 CNNs

  • Seung-Hwan Go;Kyeong-Soo Jeong;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.9-18 / 2024
  • Small streams, despite their rich ecosystems, face challenges in vegetation assessment due to the limitations of traditional, time-consuming methods. This study presents a groundbreaking approach, combining unmanned aerial vehicles (UAVs), convolutional neural networks (CNNs), and the visible-band difference vegetation index (VDVI), to revolutionize both assessment and management of stream vegetation. Focusing on Idong Stream in South Korea (2.7 km long, 2.34 km² basin area) with eight different revetment methods, we leveraged high-resolution RGB images captured by UAVs on five dates (July to December). These images trained a ResNeXt101 CNN model, achieving an impressive 89% accuracy in classifying cover types (soil, water, and vegetation). This enabled detailed spatial and temporal analysis of vegetation distribution. Further, VDVI calculations on the classified vegetation areas allowed assessment of vegetation vitality. Our key findings showcase the power of this approach: (a) The CNN model generated highly accurate cover maps, facilitating precise monitoring of vegetation changes over time and space. (b) August displayed the highest average VDVI (0.24), indicating peak vegetation growth crucial for stabilizing streambanks and resisting flow. (c) Different revetment methods impacted vegetation vitality. Fieldstone sections exhibited initial high vitality followed by decline due to leaf browning. Block-type sections and the control group showed a gradual decline after peak growth. Interestingly, the "H environment block" exhibited minimal change, suggesting potential benefits for specific ecological functions. (d) Despite initial differences, all sections converged in vegetation distribution trends after 15 years due to the influence of surrounding vegetation. This study demonstrates the immense potential of UAV-based remote sensing and CNNs for revolutionizing small-stream vegetation assessment and management. By providing high-resolution, temporally detailed data, this approach offers distinct advantages over traditional methods, ultimately benefiting both the environment and surrounding communities through informed decision-making for improved stream health and ecological conservation.
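
The VDVI used here for vitality assessment has a standard visible-band definition, VDVI = (2G - R - B) / (2G + R + B). The sketch below computes it per pixel and averages it over a vegetation mask; the mask and threshold are illustrative stand-ins for the CNN's vegetation class, not the authors' code:

    import numpy as np

    def vdvi(rgb: np.ndarray) -> np.ndarray:
        """Per-pixel VDVI = (2G - R - B) / (2G + R + B) from an H x W x 3 array."""
        r, g, b = (rgb[..., i].astype(float) for i in range(3))
        denom = 2 * g + r + b
        out = np.zeros_like(denom)
        np.divide(2 * g - r - b, denom, out=out, where=denom > 0)
        return out

    # Mean vitality over pixels classified as vegetation (illustrative mask):
    img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
    index = vdvi(img)
    veg_mask = index > 0.05
    mean_vitality = index[veg_mask].mean()  # cf. the reported August mean of 0.24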

Crack Detection Technology Based on Ortho-image Using Convolutional Neural Network (합성곱 신경망을 이용한 정사사진 기반 균열 탐지 기법)

  • Jang, Arum;Jeong, Sanggi;Park, Jinhan;Kang, Chang-hoon;Ju, Young K.
    • Journal of Korean Association for Spatial Structures / v.22 no.2 / pp.19-27 / 2022
  • Visual inspection methods have limitations, such as reflecting the subjective opinions of workers. Moreover, additional equipment is required when inspecting high-rise buildings because the inspection height is limited. Various methods have been studied to detect concrete cracks and overcome the disadvantages of existing visual inspection. In this study, a crack detection technology was proposed whose objectivity and accuracy are secured through AI. Specifically, we propose an efficient method that automatically detects concrete cracks by applying a convolutional neural network (CNN) to an orthomosaic image modeled from UAV imagery. Concrete cracks were predicted by three different CNN models: AlexNet, ResNet50, and ResNeXt. The models were evaluated by accuracy, recall, and F1 score. The ResNeXt model showed the highest performance among the three. This study also confirmed the reliability of the designed model by applying it to an experiment.
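
The three models in this entry are compared by accuracy, recall, and F1 score. A minimal sketch of how such a comparison is computed with scikit-learn, on synthetic crack/no-crack labels (the data is a placeholder, not the paper's):

    from sklearn.metrics import accuracy_score, recall_score, f1_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = crack, 0 = no crack (illustrative)
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # e.g. one model's test predictions

    print("accuracy:", accuracy_score(y_true, y_pred))
    print("recall:  ", recall_score(y_true, y_pred))
    print("F1 score:", f1_score(y_true, y_pred))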

Deep Learning Models for Autonomous Crack Detection System (자동화 균열 탐지 시스템을 위한 딥러닝 모델에 관한 연구)

  • Ji, HongGeun;Kim, Jina;Hwang, Syjung;Kim, Dogun;Park, Eunil;Kim, Young Seok;Ryu, Seung Ki
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.161-168 / 2021
  • Cracks affect the robustness of infrastructure such as buildings, bridges, pavements, and pipelines. This paper presents an automated crack detection system that detects cracks in diverse surfaces. We first constructed a combined crack dataset consisting of multiple crack datasets from diverse domains presented in prior studies. Then, state-of-the-art deep learning models for computer vision tasks, including VGG, ResNet, WideResNet, ResNeXt, DenseNet, and EfficientNet, were used to validate crack detection performance. We divided the combined dataset into a train set (80%) and a test set (20%) to evaluate the employed models. DenseNet121 showed the highest accuracy at 96.20% with a relatively low number of parameters compared to the other models. Based on these validation procedures, we shed light on a cost-effective automated crack detection system that can be applied to different surfaces and structures with low computing resources.
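
A sketch of preparing several of the torchvision backbones named in this entry for the binary crack/no-crack task; the two-class heads and the 80/20 split follow the abstract, while the weight settings and helper logic are assumptions:

    import torch.nn as nn
    from torchvision import models
    from sklearn.model_selection import train_test_split

    backbones = {
        "ResNeXt50":    models.resnext50_32x4d(weights=None),
        "DenseNet121":  models.densenet121(weights=None),
        "WideResNet50": models.wide_resnet50_2(weights=None),
    }

    # ResNet-family models expose a .fc head; DenseNet exposes .classifier.
    for name, net in backbones.items():
        if hasattr(net, "fc") and isinstance(net.fc, nn.Linear):
            net.fc = nn.Linear(net.fc.in_features, 2)  # crack / no crack
        else:
            net.classifier = nn.Linear(net.classifier.in_features, 2)

    # 80% train / 20% test split over sample indices, as described above:
    train_idx, test_idx = train_test_split(list(range(10_000)), test_size=0.2,
                                           random_state=0)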

Modified Pyramid Scene Parsing Network with Deep Learning based Multi Scale Attention (딥러닝 기반의 Multi Scale Attention을 적용한 개선된 Pyramid Scene Parsing Network)

  • Kim, Jun-Hyeok;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.45-51 / 2021
  • With the development of deep learning, semantic segmentation methods are being studied in various fields. Segmentation accuracy drops in fields that require high precision, such as medical image analysis. In this paper, we improved PSPNet, a deep learning based segmentation method, to minimize the loss of features during semantic segmentation. Conventional deep learning based segmentation methods suffer from reduced resolution and loss of object features during feature extraction and compression. Due to these losses, edge and internal information of the object is lost, lowering accuracy at the time of object segmentation. To solve these problems, we improved the PSPNet semantic segmentation model by adding the proposed multi-scale attention to prevent feature loss. A feature refinement process was performed by applying the attention method to the conventional PPM module. By suppressing unnecessary feature information, edge and texture information was improved. The proposed method was trained on the Cityscapes dataset, with the segmentation index MIoU used for quantitative evaluation. The experiments show that segmentation accuracy improved by about 1.5% compared to the conventional PSPNet.
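
A hedged sketch of attaching an attention step to one branch of a PSPNet-style pyramid pooling module. The paper's exact multi-scale attention is not reproduced here; this SE-style channel gate only illustrates the general idea of suppressing uninformative pooled features before upsampling:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentivePoolBranch(nn.Module):
        def __init__(self, channels, pool_size):
            super().__init__()
            self.pool_size = pool_size
            self.proj = nn.Conv2d(channels, channels // 4, 1)
            self.gate = nn.Sequential(          # SE-style channel attention
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels // 4, channels // 4, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            h, w = x.shape[-2:]
            y = self.proj(F.adaptive_avg_pool2d(x, self.pool_size))
            y = y * self.gate(y)                # suppress unneeded channels
            return F.interpolate(y, size=(h, w), mode="bilinear",
                                 align_corners=False)

    out = AttentivePoolBranch(512, 2)(torch.randn(1, 512, 64, 64))  # [1,128,64,64]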