• Title/Summary/Keyword: ImageNet

Search Results: 778

Evaluation of Deep Learning Model for Scoliosis Pre-Screening Using Preprocessed Chest X-ray Images

  • Min Gu Jang;Jin Woong Yi;Hyun Ju Lee;Ki Sik Tae
    • Journal of Biomedical Engineering Research / v.44 no.4 / pp.293-301 / 2023
  • Scoliosis is a three-dimensional deformity of the spine, induced by physical or disease-related causes, in which the spine is rotated abnormally. Early detection has a significant influence on the possibility of nonsurgical treatment. The aim of this study was to train a deep learning model with preprocessed images and to evaluate the results with and without data augmentation, so that scoliosis can be diagnosed from a chest X-ray image alone. Preprocessed images, in which only the spine, rib contours, and some hard tissues were retained from the original chest image, were used for training along with the original images, and three CNN (Convolutional Neural Network) models (VGG16, ResNet152, and EfficientNet) were selected for training. Training with the preprocessed images yielded superior accuracy to training with the original images. When scoliosis images were added through data augmentation, the accuracy improved further, ultimately reaching a classification accuracy of 93.56% on the test data with the ResNet152 model. With supplementation through future research, the method proposed herein is expected to enable early diagnosis of scoliosis as well as cost reduction by reducing the burden of additional radiographic imaging for disease detection.
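
A minimal sketch of the kind of training setup this abstract describes: an ImageNet-pretrained ResNet152 fine-tuned as a two-class (normal vs. scoliosis) chest X-ray classifier with simple augmentation. The augmentation choices, image size, and hyperparameters are illustrative assumptions; the paper's spine/rib-contour preprocessing is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical augmentation pipeline; the paper's exact preprocessing
# (retaining only spine and rib contours) is not reproduced here.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained ResNet152 with a new 2-class head (normal vs. scoliosis).
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```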

Fishing Boat Rolling Movement of Time Series Prediction based on Deep Network Model (심층 네트워크 모델에 기반한 어선 횡동요 시계열 예측)

  • Donggyun Kim;Nam-Kyun Im
    • Journal of Navigation and Port Research / v.47 no.6 / pp.376-385 / 2023
  • Fishing boat capsizing accidents account for more than half of all capsize accidents. These can occur for a variety of reasons, including inexperienced operation, bad weather, and poor maintenance. Due to the size and influence of the industry, technological complexity, and regional diversity, fishing ships are relatively under-researched compared to commercial ships. This study aimed to predict the rolling motion time series of fishing boats using an image-based deep learning model. Image-based deep learning can achieve high performance by learning various patterns in a time series. Three image-based deep learning models were used for this purpose: Xception, ResNet50, and CRNN. Xception and ResNet50 are composed of 177 and 184 layers, respectively, while CRNN is composed of 22 relatively thin layers. The experimental results showed that the Xception model recorded the lowest symmetric mean absolute percentage error (sMAPE) of 0.04291 and root mean squared error (RMSE) of 0.0198. ResNet50 and CRNN recorded RMSEs of 0.0217 and 0.022, respectively. This confirms that the models with relatively deeper layers had higher accuracy.
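
For reference, the two error metrics reported above (sMAPE and RMSE) can be computed as in the sketch below; this uses one common sMAPE definition and dummy roll-angle values, not the authors' evaluation code.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def smape(y_true, y_pred, eps=1e-8):
    """Symmetric mean absolute percentage error (fractional-scale variant)."""
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0 + eps
    return np.mean(np.abs(y_true - y_pred) / denom)

# Dummy roll-angle series, illustrative values only.
y_true = np.array([1.2, -0.8, 0.5, 2.1])
y_pred = np.array([1.0, -0.7, 0.6, 1.9])
print(rmse(y_true, y_pred), smape(y_true, y_pred))
```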

A Study on Residual U-Net for Semantic Segmentation based on Deep Learning (딥러닝 기반의 Semantic Segmentation을 위한 Residual U-Net에 관한 연구)

  • Shin, Seokyong;Lee, SangHun;Han, HyunHo
    • Journal of Digital Convergence / v.19 no.6 / pp.251-258 / 2021
  • In this paper, we proposed an encoder-decoder model that utilizes residual learning to improve the accuracy of the U-Net-based semantic segmentation method. U-Net is a deep learning-based semantic segmentation method and is mainly used in applications such as autonomous vehicles and medical image analysis. The conventional U-Net suffers from feature loss during the compression process because of the shallow structure of its encoder. This loss of features leads to a lack of the context information necessary for classifying objects and thus reduces segmentation accuracy. To improve this, the proposed method efficiently extracted context information through an encoder that uses residual learning, which is effective in preventing the feature loss and gradient vanishing problems of the conventional U-Net. Furthermore, we reduced the number of down-sampling operations in the encoder to reduce the loss of spatial information contained in the feature maps. The proposed method showed a segmentation result improved by about 12% over the conventional U-Net in the Cityscapes dataset experiment.
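
A minimal PyTorch sketch of the residual convolution block that an encoder of this kind could use; the layer sizes and the 1×1 shortcut projection are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity (or 1x1-projected) shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Project the shortcut when the channel counts differ.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))
```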

Deep learning-based clothing attribute classification using fashion image data (패션 이미지 데이터를 활용한 딥러닝 기반의 의류속성 분류)

  • Hye Seon Jeong;So Young Lee;Choong Kwon Lee
    • Smart Media Journal / v.13 no.4 / pp.57-64 / 2024
  • Attributes such as material, color, and fit in fashion images are important factors when consumers purchase clothing. However, the process of classifying clothing attributes requires a large amount of manpower and is inconsistent because it relies on the subjective judgment of human operators. To alleviate this problem, research is needed that uses artificial intelligence to classify clothing attributes in fashion images. Previous studies have mainly focused on classifying clothing attributes for either tops or bottoms, so they cannot identify the attributes of both tops and bottoms simultaneously in full-body fashion images. In this study, we propose a deep learning model that distinguishes between tops and bottoms in fashion images and classifies the category of each item and the attributes of the clothing material. The deep learning models ResNet and EfficientNet were used, and the training dataset consisted of 1,002,718 fashion images and 125 labels covering clothing categories and material properties. Based on the weighted F1-score, ResNet scored 0.800 and EfficientNet 0.781, with ResNet showing the better performance.
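
The weighted F1-score used for the comparison above can be computed as in the following sketch; the label values are hypothetical and this is not the authors' evaluation pipeline.

```python
from sklearn.metrics import f1_score

# Hypothetical true and predicted label indices for a multi-class
# attribute classifier; the weighted average accounts for class imbalance
# by weighting each class's F1 by its support.
y_true = [0, 2, 1, 2, 0, 1]
y_pred = [0, 2, 1, 1, 0, 1]
print(f1_score(y_true, y_pred, average="weighted"))
```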

Automatic Extraction of Liver Region from Medical Images by Using an MFUnet

  • Vi, Vo Thi Tuong;Oh, A-Ran;Lee, Guee-Sang;Yang, Hyung-Jeong;Kim, Soo-Hyung
    • Smart Media Journal / v.9 no.3 / pp.59-70 / 2020
  • This paper presents a fully automatic tool to recognize the liver region in CT images based on a deep learning model, namely the Multiple Filter U-net (MFUnet). The advantages of both U-net and Multiple Filters were utilized to construct an autoencoder model, called MFUnet, for segmenting the liver region from computed tomography. The MFUnet architecture includes the autoencoding model, which is used for regenerating the liver region; the backbone model for extracting features, which is trained on ImageNet; and the predicting model used for liver segmentation. The LiTS and CHAOS datasets were used for the evaluation of our research. The results show that integrating Multiple Filters into U-net improves the performance of liver segmentation, and this opens up many research directions in the medical image processing field.
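
A rough sketch of the general idea of pairing an ImageNet-pretrained backbone with a decoder for liver segmentation; this is not the MFUnet itself, and the backbone choice, decoder layers, and input size are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative only: an ImageNet-pretrained ResNet-18 used as a
# feature-extracting backbone, followed by a tiny upsampling decoder that
# predicts a single-channel (liver / background) mask.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc

decoder = nn.Sequential(
    nn.Conv2d(512, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 1, 1),  # mask logits
)

x = torch.randn(1, 3, 256, 256)     # dummy CT slice replicated to 3 channels
mask_logits = decoder(encoder(x))   # -> (1, 1, 256, 256)
print(mask_logits.shape)
```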

Image Segmentation Using SqueezeNet based on CUDA C (CUDA C기반 SqueezeNet을 이용한 영상 분할)

  • Jeon, Sae-Yun;Wang, Jin-Yeong;Lee, Sang-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.631-633 / 2018
  • Recently, techniques that use deep learning have shown good performance in the image processing field, and interest in and research on them have been increasing. In this study, SqueezeNet, a recent deep learning network that achieves AlexNet-level performance with a small number of parameters, was used as the feature extraction stage of image segmentation, and the code was written in CUDA C, achieving good performance in terms of computation speed while maintaining accuracy.
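
A small sketch, in Python rather than the CUDA C used in the paper, of using SqueezeNet's convolutional stage as a feature extractor for a downstream segmentation head; the input size is an assumption.

```python
import torch
from torchvision import models

# SqueezeNet 1.1 reaches AlexNet-level ImageNet accuracy with far fewer
# parameters; here only its convolutional part is used to extract features.
squeezenet = models.squeezenet1_1(
    weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
features = squeezenet.features  # convolutional feature-extraction stage

x = torch.randn(1, 3, 224, 224)   # dummy input image
feat = features(x)                # -> (1, 512, 13, 13) feature maps
print(feat.shape)
```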

An Improved PeleeNet Algorithm with Feature Pyramid Networks for Image Detection

  • Yangfan, Bai;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.398-400 / 2019
  • Faced with the increasing demand for image recognition on mobile devices, the question of how to run convolutional neural network (CNN) models on devices with limited computing power and storage encourages the study of efficient model design. In recent years, many effective architectures have been proposed, such as MobileNetV1, MobileNetV2, and PeleeNet. However, in the process of feature selection, all of these models neglect some of the information in shallow features, which weakens the capture of shallow feature locations and semantics. In this study, we propose an effective framework based on Feature Pyramid Networks that improves recognition accuracy by exploiting both deep and shallow features while preserving the recognition speed of the PeleeNet structure. Compared with PeleeNet, recognition accuracy on the CIFAR-10 dataset increased by 4.0%.
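
A minimal sketch of the core Feature Pyramid Network operation referred to above: a deep, coarse feature map is upsampled top-down and added to a laterally projected shallow map. The channel counts and spatial sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNMerge(nn.Module):
    """Merge one deep (coarse) and one shallow (fine) feature map, FPN-style."""
    def __init__(self, deep_ch, shallow_ch, out_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(shallow_ch, out_ch, 1)  # 1x1 lateral projection
        self.reduce = nn.Conv2d(deep_ch, out_ch, 1)
        self.smooth = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, deep, shallow):
        # Upsample the deep map to the shallow map's resolution, then add.
        top_down = F.interpolate(self.reduce(deep),
                                 size=shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(shallow) + top_down)

# Dummy feature maps: a shallow 56x56 map and a deep 28x28 map.
merge = FPNMerge(deep_ch=512, shallow_ch=256)
p = merge(torch.randn(1, 512, 28, 28), torch.randn(1, 256, 56, 56))
print(p.shape)  # (1, 256, 56, 56)
```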

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning (전이학습에 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교)

  • Park, Sung-Wook;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1387-1395 / 2018
  • The Convolutional Neural Network (CNN), a core algorithm of deep learning, shows better performance than other machine learning algorithms. However, if there is not sufficient data, a CNN cannot achieve satisfactory performance even if the classifier is excellent. In this situation, the use of transfer learning has been proven to have a great effect. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, and DenseNet-121) and compare and analyze how the classification performance of the CNN changes according to the method. As a result of statistical significance tests using various evaluation indicators, ResNet-50, Inception-V3, and DenseNet-121 differed by 1.18 times, 1.09 times, and 1.17 times, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method for transfer learning in image classification problems.
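
The two transfer-learning settings compared here, freezing and retraining, can be sketched as follows with ResNet-50; the number of classes and other details are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_resnet50(num_classes, freeze_backbone):
    """Freezing: only the new classifier head is trained.
    Retraining: all layers are updated (full fine-tuning)."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    # The replacement head is created after freezing, so it stays trainable.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

frozen = build_resnet50(num_classes=10, freeze_backbone=True)      # "freezing"
retrained = build_resnet50(num_classes=10, freeze_backbone=False)  # "retraining"
```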

The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo;Palvanov, Akmaljon;Lee, Chang-Ho;Jeong, Nanoom;Cho, Young-Im;Lee, Hae-Jeung
    • Nutrition Research and Practice / v.13 no.6 / pp.521-528 / 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use in mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for use in training a complex recognition model for Korean food. Augmentation techniques were performed in order to increase the dataset size. The dataset for training contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the complex recognition model and compared the results with those of other networks for large-scale image recognition: AlexNet, GoogLeNet, the Very Deep Convolutional Neural Network (VGG), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, had higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: The results showed that K-foodNet achieved better performance in detecting and recognizing Korean food compared to other state-of-the-art models.
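
A sketch of the preprocessing described above: down-sampling every image to 150 × 150 and splitting the dataset 3:1 into training and test sets. The directory name is a hypothetical placeholder.

```python
import torch
from torchvision import datasets, transforms

# Down-sample every image to a fixed 150 x 150 resolution.
tf = transforms.Compose([transforms.Resize((150, 150)), transforms.ToTensor()])

# Hypothetical directory with one sub-folder per Korean food category.
dataset = datasets.ImageFolder("korean_food_images/", transform=tf)

# Random 3:1 split into training and test subsets.
n_train = int(len(dataset) * 0.75)
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train])
```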

Wood Species Classification Utilizing Ensembles of Convolutional Neural Networks Established by Near-Infrared Spectra and Images Acquired from Korean Softwood Lumber

  • Yang, Sang-Yun;Lee, Hyung Gu;Park, Yonggun;Chung, Hyunwoo;Kim, Hyunbin;Park, Se-Yeong;Choi, In-Gyu;Kwon, Ohkyung;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / v.47 no.4 / pp.385-392 / 2019
  • In our previous study, we investigated the use of ensemble models based on LeNet and MiniVGGNet to classify images of the transverse and longitudinal surfaces of five Korean softwoods (cedar, cypress, Korean pine, Korean red pine, and larch). It achieved an average F1 score of more than 98%; however, the classification performance for longitudinal surface images was still lower than that for transverse surface images. In this study, ensemble methods combining two different convolutional neural network models (LeNet3 for smartphone camera images and NIRNet for NIR spectra) were applied to lumber species classification. Experimentally, the best classification performance was obtained with the averaging ensemble of LeNet3 and NIRNet. The average F1 scores of the individual LeNet3 model and the individual NIRNet model were 91.98% and 85.94%, respectively. With the averaging ensemble of LeNet3 and NIRNet, the average F1 score increased to 95.31%.
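
The averaging ensemble described above amounts to averaging the two models' class-probability outputs before taking the argmax; a minimal sketch with both models treated as black-box logits, and the batch and class sizes as illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def average_ensemble(logits_image_model, logits_nir_model):
    """Average the softmax probabilities of two models, then pick the argmax class."""
    probs = (F.softmax(logits_image_model, dim=1) +
             F.softmax(logits_nir_model, dim=1)) / 2.0
    return probs.argmax(dim=1)

# Dummy logits for a batch of 4 samples over 5 wood species.
pred = average_ensemble(torch.randn(4, 5), torch.randn(4, 5))
print(pred)
```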