• Title/Summary/Keyword: ImageNet

A three-stage deep-learning-based method for crack detection of high-resolution steel box girder image

  • Meng, Shiqiao; Gao, Zhiyuan; Zhou, Ying; He, Bin; Kong, Qingzhao
    • Smart Structures and Systems / v.29 no.1 / pp.29-39 / 2022
  • Crack detection plays an important role in the maintenance and protection of the steel box girders of bridges. However, since cracks occupy only an extremely small region of the high-resolution images captured in actual conditions, existing methods cannot handle this kind of image effectively. To solve this problem, this paper proposes a novel three-stage method based on deep learning and morphological operations. The training and test sets are composed of 360 images (4928 × 3264 pixels) of steel box girders. The first stage converts the high-resolution images into sub-images using a patch-based method and locates crack regions with a CBAM ResNet-50 model; its recall reaches 0.95 on the test set. The second stage uses an Attention U-Net model to extract the accurate geometric edges of cracks from the first-stage results; the IoU of this segmentation model reaches 0.48. The third stage removes wrongly predicted isolated points from the results through a dilation operation and an outlier-elimination algorithm, raising the test-set IoU to 0.70. Ablation experiments were conducted to optimize the parameters and further improve accuracy. The results show that: (1) the best patch size for sub-images is 1024 × 1024; (2) CBAM ResNet-50 and Attention U-Net achieve the best results in the first and second stages, respectively; (3) pre-training the models of the first two stages improves the IoU by 2.9%. Overall, the method is of great significance for crack detection.
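
To illustrate the patch-based first stage, here is a minimal sketch (not the authors' code) of tiling a 4928 × 3264 image into the 1024 × 1024 sub-images the ablation study favors; the function name and zero-padding strategy are assumptions.

```python
import numpy as np

def split_into_patches(image: np.ndarray, patch: int = 1024):
    """Tile an (H, W, C) image into patch x patch sub-images, zero-padding
    the borders so every tile has the same shape."""
    h, w = image.shape[:2]
    ph = int(np.ceil(h / patch)) * patch
    pw = int(np.ceil(w / patch)) * patch
    padded = np.zeros((ph, pw) + image.shape[2:], dtype=image.dtype)
    padded[:h, :w] = image
    tiles, coords = [], []
    for y in range(0, ph, patch):
        for x in range(0, pw, patch):
            tiles.append(padded[y:y + patch, x:x + patch])
            coords.append((y, x))  # kept so stage-2 masks can be stitched back
    return tiles, coords

# A 4928 x 3264 image yields 5 x 4 = 20 tiles of 1024 x 1024.
```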

Multistage Transfer Learning for Breast Cancer Early Diagnosis via Ultrasound (유방암 조기 진단을 위한 초음파 영상의 다단계 전이 학습)

  • Ayana, Gelan; Park, Jinhyung; Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.134-136 / 2021
  • Research on the early diagnosis of breast cancer using artificial intelligence algorithms has been actively conducted in recent years. Although various algorithms that classify breast cancer based on the few publicly available ultrasound breast cancer images have been published, these methods show limitations such as processing speed and accuracy unsuited to the user's purpose. To solve this problem, we propose a multi-stage transfer learning scheme in which a ResNet model trained on ImageNet is first transferred to microscopic cancer cell line images and then transferred again to classify ultrasound breast cancer images as benign or malignant. The experimental images consisted of 250 breast cancer ultrasound images, including benign and malignant cases, and 27,200 cancer cell line images. The proposed multi-stage transfer learning algorithm achieved more than 96% accuracy in classifying the ultrasound images, and is expected to offer higher utility and accuracy through the addition of more cancer cell lines and real-time image processing in the future.
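
The two hand-offs in this pipeline can be sketched with torchvision. This is a hedged illustration, not the authors' implementation: the ResNet-50 variant and the number of cell-line classes (4 here) are placeholders the abstract does not specify.

```python
from typing import Optional

import torch.nn as nn
from torchvision import models

def make_resnet(num_classes: int, source: Optional[nn.Module] = None) -> nn.Module:
    """ResNet-50 whose backbone is copied from `source` (if given) and whose
    final layer is replaced for the new classification task."""
    model = models.resnet50(
        weights=None if source else models.ResNet50_Weights.IMAGENET1K_V2
    )
    if source is not None:
        backbone = {k: v for k, v in source.state_dict().items()
                    if not k.startswith("fc.")}  # drop the old classifier head
        model.load_state_dict(backbone, strict=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Stage 1: ImageNet weights -> microscopic cancer cell line images.
stage1 = make_resnet(num_classes=4)       # class count is a placeholder
# ... train stage1 on the 27,200 cell line images ...

# Stage 2: stage-1 backbone -> benign vs. malignant ultrasound images.
stage2 = make_resnet(num_classes=2, source=stage1)
# ... fine-tune stage2 on the 250 ultrasound images ...
```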

Estimation of Heading Date of Paddy Rice from Slanted View Images Using Deep Learning Classification Model

  • Hyeokjin Bak; Hoyoung Ban; Seongryul Chang; Dongwon Gwon; Jae-Kyeong Baek; Jeong-Il Cho; Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.80-80 / 2022
  • Estimating the heading date of paddy rice is laborious and time-consuming, so automatic estimation is highly desirable. In this experiment, deep learning classification models were used to classify rice into two categories (vegetative and reproductive stage) based on panicle initiation in the paddy field. The dataset comprises 444 slanted-view images belonging to the two categories, expanded to 1,497 images via the IMGAUG data augmentation technique. We adopted two transfer learning strategies: first, transferring model weights pre-trained on ImageNet to six classification networks (VGGNet, ResNet, DenseNet, InceptionV3, Xception, and MobileNet); second, fine-tuning some layers of each network on our dataset. After training the CNN models, we used several evaluation metrics common for classification tasks, including accuracy, precision, recall, and F1-score, and Grad-CAM was used to generate visual explanations for each image patch. Experimental results showed that InceptionV3 was the best-performing model in terms of accuracy, average recall, precision, and F1-score; the fine-tuned InceptionV3 achieved an overall classification accuracy of 0.95 with a high F1-score of 0.95. Our CNN model also captured the change in rice heading date under different transplanting dates. This study demonstrates that an image-based deep learning model can reliably serve as an automatic monitoring system to detect the heading date of rice crops from CCTV cameras.
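
A hedged Keras sketch of the second strategy (ImageNet weights plus partial fine-tuning) applied to the best model, InceptionV3; the number of unfrozen layers, input size, and hyperparameters are assumptions, not values from the paper.

```python
import tensorflow as tf

# Start from ImageNet weights, then unfreeze only the top layers for fine-tuning.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
base.trainable = True
for layer in base.layers[:-30]:   # freeze all but the last ~30 layers (assumed)
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),  # vegetative vs. reproductive
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # 1,497 augmented images
```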

Speckle Noise Reduction and Image Quality Improvement in U-net-based Phase Holograms in BL-ASM (BL-ASM에서 U-net 기반 위상 홀로그램의 스펙클 노이즈 감소와 이미지 품질 향상)

  • Oh-Seung Nam; Ki-Chul Kwon; Jong-Rae Jeong; Kwon-Yeon Lee; Nam Kim
    • Korean Journal of Optics and Photonics / v.34 no.5 / pp.192-201 / 2023
  • The band-limited angular spectrum method (BL-ASM) suffers from aliasing errors caused by spatial-frequency control problems. In this paper, a sampling-interval adjustment technique for phase holograms and a technique for reducing speckle noise and improving image quality using a deep-learning-based U-net model are proposed. The proposed technique first calculates the sampling factor and controls the spatial frequency by adjusting the sampling interval, so that aliasing errors are removed over a wide propagation range and speckle noise is reduced. The quality of the reconstructed image is then improved by training the deep learning model on the phase holograms. In software simulations on various sample images, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) improved by 5% and 0.14% on average compared with the conventional BL-ASM.
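
The reported PSNR/SSIM gains can be measured with evaluation code along these lines, a minimal sketch using scikit-image that assumes grayscale reconstructions normalized to [0, 1]; it illustrates only the metrics, not the BL-ASM pipeline itself.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_quality(reference: np.ndarray, reconstructed: np.ndarray):
    """PSNR and SSIM of a numerically reconstructed image against its target."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim
```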

Application of CCTV Image and Semantic Segmentation Model for Water Level Estimation of Irrigation Channel (관개용수로 CCTV 이미지를 이용한 CNN 딥러닝 이미지 모델 적용)

  • Kim, Kwi-Hoon; Kim, Ma-Ga; Yoon, Pu-Reun; Bang, Je-Hong; Myoung, Woo-Ho; Choi, Jin-Yong; Choi, Gyu-Hoon
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.3 / pp.63-73 / 2022
  • A more accurate understanding of the irrigation water supply is necessary for efficient agricultural water management. Although water levels in irrigation canals are measured with ultrasonic water level gauges, errors occur due to malfunctions or the surrounding environment. This study applies CNN (Convolutional Neural Network) deep-learning-based image classification and segmentation models to CCTV (Closed-Circuit Television) images of an irrigation canal. The CCTV images were acquired from the irrigation canal of an agricultural reservoir in Cheorwon-gun, Gangwon-do. We used the ResNet-50 model for image classification and the U-Net model for image segmentation. Using the Natural Breaks algorithm, the water level data were divided into 2, 4, and 8 groups for the classification models, which showed accuracies of 1.000, 0.987, and 0.634, respectively. The segmentation model showed a Dice score of 0.998, and the predicted water levels showed an R2 of 0.97 and an MAE (Mean Absolute Error) of 0.02 m. The classification models can be applied to an automatic gate controller operating on four water-level divisions, and the segmentation results can serve as an alternative to ultrasonic water gauges. We expect these results to provide a more scientific and efficient approach to agricultural water management.
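
A minimal sketch of the Dice score used to evaluate the water-surface segmentation, assuming binary masks in which 1 marks water pixels; the function name and smoothing term are illustrative choices, not the paper's code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```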

Comparative Experiment of Cloud Classification and Detection of Aerial Image by Deep Learning (딥러닝에 의한 항공사진 구름 분류 및 탐지 비교 실험)

  • Song, Junyoung; Won, Taeyeon; Jo, Su Min; Eo, Yang Dam; Park, So young; Shin, Sang ho; Park, Jin Sue; Kim, Changjae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.409-418 / 2021
  • As the volume of aerial photography work increases, the need to automate quality inspection is emerging. In this study, experiments were performed to classify and detect clouds in aerial photos using deep learning techniques, with satellite images additionally included in the training data. GoogLeNet, VGG16, Faster R-CNN, and YOLOv3 were applied and their results compared. In addition, considering the practical difficulty of securing aerial images that contain clouds, we analyzed whether additional training on satellite images affects classification and detection accuracy compared with a training dataset containing only aerial images. As a result, GoogLeNet and YOLOv3 showed relatively superior accuracy in cloud classification and cloud detection, respectively, with GoogLeNet reaching a producer's accuracy of 83.8% for the cloud class and YOLOv3 reaching 84.0%. Adding satellite images to the training data also proved to be a workable alternative when aerial image data are scarce.
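
Producer's accuracy is the per-class recall computed against the reference data. A minimal sketch with a hypothetical two-class confusion matrix (the counts are invented for illustration and only happen to echo the reported 84%):

```python
import numpy as np

def producers_accuracy(cm: np.ndarray) -> np.ndarray:
    """Producer's accuracy per class: diagonal / column sums, for a confusion
    matrix whose rows are classification results and whose columns are
    reference labels (i.e., recall with respect to the reference data)."""
    return np.diag(cm) / cm.sum(axis=0)

# Hypothetical cloud / clear matrix; counts invented for illustration.
cm = np.array([[84, 10],
               [16, 90]])
print(producers_accuracy(cm))  # cloud PA = 84 / (84 + 16) = 0.84
```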

Textile material classification in clothing images using deep learning (딥러닝을 이용한 의류 이미지의 텍스타일 소재 분류)

  • So Young Lee; Hye Seon Jeong; Yoon Sung Choi; Choong Kwon Lee
    • Smart Media Journal / v.12 no.7 / pp.43-51 / 2023
  • As online transactions increase, clothing images have a great influence on consumer purchasing decisions. The importance of image information about clothing materials has been emphasized, and it is important for the fashion industry to analyze clothing images and identify the materials used. Textile materials are difficult to identify with the naked eye, and much time and cost are consumed in sorting them. This study classifies textile materials from clothing images using deep learning algorithms; classifying materials can help reduce clothing production costs, increase manufacturing efficiency, and support services that recommend products of specific materials to consumers. We used the machine-vision deep learning algorithms ResNet and Vision Transformer to classify clothing images. A total of 760,949 images were collected and preprocessed to detect abnormal images; finally, 167,299 clothing images with 19 textile labels and 20 fabric labels were used. We compared the performance of the two algorithms with the top-k accuracy score metric, and the Vision Transformer outperformed ResNet.
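
The top-k accuracy score used for the comparison can be computed directly from model logits; a minimal PyTorch sketch (the function name and default k are assumptions):

```python
import torch

def top_k_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int = 3) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = logits.topk(k, dim=1).indices             # (N, k) predicted class ids
    hits = (topk == labels.unsqueeze(1)).any(dim=1)  # true label anywhere in top-k?
    return hits.float().mean().item()

# e.g. logits of shape (N, 19) for the 19 textile labels, integer labels of shape (N,)
```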

Applicability of Image Classification Using Deep Learning in Small Area : Case of Agricultural Lands Using UAV Image (딥러닝을 이용한 소규모 지역의 영상분류 적용성 분석 : UAV 영상을 이용한 농경지를 대상으로)

  • Choi, Seok-Keun; Lee, Soung-Ki; Kang, Yeon-Bin; Seong, Seon-Kyeong; Choi, Do-Yeon; Kim, Gwang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.1 / pp.23-33 / 2020
  • Recently, high-resolution images can be easily acquired using UAVs (Unmanned Aerial Vehicles), making it possible to observe small areas and produce spatial information at low cost. In particular, research on generating cover maps of crop production areas is being actively conducted for monitoring the agricultural environment. Comparisons of classification performance among RF (Random Forest), SVM (Support Vector Machine), and CNN (Convolutional Neural Network) approaches have shown that deep-learning classification methods hold many advantages for image classification. In particular, land-cover classification using satellite images benefits in accuracy and classification time from established satellite image datasets and pre-trained parameters. However, UAV images differ from satellite images in characteristics such as spatial resolution, which makes those resources difficult to apply directly. To solve this problem, we studied the application of deep learning algorithms to agricultural lands in Korea, where UAV datasets and small-scale composite cover exist. We applied state-of-the-art semantic segmentation algorithms, DeepLab V3+, FC-DenseNet (Fully Convolutional DenseNets), and FRRN-B (Full-Resolution Residual Networks), to the UAV dataset. As a result, DeepLab V3+ and FC-DenseNet achieved an overall accuracy of 97% and a Kappa coefficient of 0.92, higher than conventional classification, demonstrating the applicability of cover classification to UAV images of small areas.
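
The overall accuracy and Kappa coefficient reported here both derive from a confusion matrix; a minimal sketch of Cohen's kappa:

```python
import numpy as np

def kappa_coefficient(cm: np.ndarray) -> float:
    """Cohen's kappa (p_o - p_e) / (1 - p_e) from a confusion matrix."""
    n = cm.sum()
    p_o = np.trace(cm) / n                                 # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)
```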

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.825-834 / 2023
  • Demand from people who purchase fashion products through Internet shopping is gradually increasing, and attempts are being made to provide user-friendly 3D content and web 3D software in place of the pictures and videos currently provided. Behind this trend, which has become the most important issue in the fashion web-shopping industry, are growing complaints that the product received differs from the image shown at the time of purchase. Various image processing technologies have been introduced to solve this problem, but 2D images have inherent quality limits. In this study, we propose an automatic conversion technology that turns 2D images into 3D, grafts them onto web 3D technology so that customers can examine products from various viewpoints, and reduces the cost and computation time required for conversion. We developed a system that photographs a mannequin on a rotating turntable using only eight cameras. To extract only the clothing from the images taken by this system, markers are removed using U-net, and an algorithm is proposed that isolates the clothing area by identifying the color feature information of the background and mannequin areas. With this algorithm, extracting the clothing area takes 2.25 seconds per image, or a total of 144 seconds (2 minutes and 24 seconds) for the 64 images of one piece of clothing. The proposed method extracts 3D objects with very good performance.
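
The color-feature step could look like the following hedged OpenCV sketch, which keeps pixels whose color is far from reference background and mannequin colors; the function name, reference colors, tolerance, and morphological cleanup are assumptions, not the paper's algorithm.

```python
import cv2
import numpy as np

def clothing_mask(image_bgr: np.ndarray, bg_color, mannequin_color, tol: float = 30.0):
    """Keep pixels whose color differs from both the background and mannequin
    reference colors (a stand-in for the paper's color-feature step)."""
    img = image_bgr.astype(np.int32)
    d_bg = np.linalg.norm(img - np.array(bg_color), axis=2)       # distance to background
    d_mq = np.linalg.norm(img - np.array(mannequin_color), axis=2)  # distance to mannequin
    mask = ((d_bg > tol) & (d_mq > tol)).astype(np.uint8) * 255
    # remove small speckles, similar in spirit to the paper's post-processing
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```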

Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning (딥러닝 기반 거리 영상의 Semantic Segmentation을 위한 Atrous Residual U-Net)

  • Shin, SeokYong; Lee, SangHun; Han, HyunHo
    • Journal of Convergence for Information Technology / v.11 no.10 / pp.45-52 / 2021
  • In this paper, we propose an Atrous Residual U-Net (AR-UNet) to improve the segmentation accuracy of U-Net-based semantic segmentation. U-Net is mainly used in fields such as medical image analysis, autonomous vehicles, and remote sensing. The conventional U-Net extracts too few features because of the small number of convolution layers in its encoder; extracted features are essential for classifying object categories, and when they are insufficient, segmentation accuracy drops. To address this, the AR-UNet adds residual learning and ASPP (Atrous Spatial Pyramid Pooling) to the encoder. Residual learning improves feature-extraction ability and helps prevent the feature loss and vanishing-gradient problems caused by stacked convolutions, while ASPP enables additional feature extraction without reducing the resolution of the feature map. Experiments on the Cityscapes dataset verified the effectiveness of the AR-UNet, showing improved segmentation results compared with the conventional U-Net. AR-UNet can thus contribute to many applications where accuracy is important.
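
A minimal PyTorch sketch of an ASPP block of the kind added to the AR-UNet encoder; the dilation rates and channel widths are assumptions, and the paper's exact placement inside the encoder may differ.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions that
    enlarge the receptive field without reducing feature-map resolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding == dilation keeps the spatial size for 3x3 kernels
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(len(rates) * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# ASPP(256, 64)(torch.randn(1, 256, 32, 32)).shape -> torch.Size([1, 64, 32, 32])
```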