• Title/Summary/Keyword: Image Augmentation


Automatic Detection System of Underground Pipe Using 3D GPR Exploration Data and Deep Convolutional Neural Networks

  • Son, Jeong-Woo; Moon, Gwi-Seong; Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.27-37 / 2021
  • In this paper, we propose an automatic detection system that locates underground pipes to assist experts. The actual location of an underground pipe often does not match the blueprint because of factors such as ground changes over time and construction discrepancies, so accidents occur during excavation or simply through aging. Underground utilities are located through GPR exploration to prevent these accidents, but experts are in short supply because GPR data is enormous and takes a long time to analyze. To analyze 3D GPR data automatically, we use 3D image segmentation, a deep learning technique, and propose a suitable data generation algorithm. We also propose a data augmentation technique and a pre-processing module adapted to GPR data. The experimental results, in which our system achieved an F1 score of 40.4%, show the feasibility of pipe analysis using image segmentation.
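
The abstract mentions a data augmentation technique adapted to 3D GPR volumes but gives no details. The sketch below is a minimal, hypothetical illustration of simple volumetric augmentation (axis flips and 90° rotations in the horizontal plane) of the kind commonly applied to 3D segmentation inputs; array names and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def augment_gpr_volume(volume, label, rng):
    """Apply random flips/rotations to a 3D GPR volume and its
    segmentation label. Assumes arrays shaped (depth, height, width)."""
    # Random flip along each horizontal axis
    for axis in (1, 2):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
            label = np.flip(label, axis=axis)
    # Random 90-degree rotation in the (height, width) plane
    k = int(rng.integers(0, 4))
    volume = np.rot90(volume, k=k, axes=(1, 2))
    label = np.rot90(label, k=k, axes=(1, 2))
    return volume.copy(), label.copy()

# Usage on a synthetic 64x64x64 volume
rng = np.random.default_rng(0)
vol = rng.normal(size=(64, 64, 64)).astype(np.float32)
lab = (vol > 1.5).astype(np.uint8)
vol_aug, lab_aug = augment_gpr_volume(vol, lab, rng)
```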

Estimation of Heading Date of Paddy Rice from Slanted View Images Using Deep Learning Classification Model

  • Hyeokjin Bak; Hoyoung Ban; Seongryul Chang; Dongwon Gwon; Jae-Kyeong Baek; Jeong-Il Cho; Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.80-80 / 2022
  • Estimation of heading date of paddy rice is laborious and time consuming. Therefore, automatic estimation of heading date of paddy rice is highly essential. In this experiment, deep learning classification models were used to classify two difference categories of rice (vegetative and reproductive stage) based on the panicle initiation of paddy field. Specifically, the dataset includes 444 slanted view images belonging to two categories and was then expanded to include 1,497 images via IMGAUG data augmentation technique. We adopt two transfer learning strategies: (First, used transferring model weights already trained on ImageNet to six classification network models: VGGNet, ResNet, DenseNet, InceptionV3, Xception and MobileNet, Second, fine-tuned some layers of the network according to our dataset). After training the CNN model, we used several evaluation metrics commonly used for classification tasks, including Accuracy, Precision, Recall, and F1-score. In addition, GradCAM was used to generate visual explanations for each image patch. Experimental results showed that the InceptionV3 is the best performing model in terms of the accuracy, average recall, precision, and F1-score. The fine-tuned InceptionV3 model achieved an overall classification accuracy of 0.95 with a high F1-score of 0.95. Our CNN model also represented the change of rice heading date under different date of transplanting. This study demonstrated that image based deep learning model can reliably be used as an automatic monitoring system to detect the heading date of rice crops using CCTV camera.
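
A minimal sketch of the two techniques named in the abstract: IMGAUG-style augmentation and two-stage transfer learning with InceptionV3 (one of the six networks mentioned). Augmenter choices, image size, layer counts, and optimizer settings are assumptions, not the authors' configuration.

```python
import imgaug.augmenters as iaa
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# IMGAUG pipeline (augmenters assumed for illustration)
aug = iaa.Sequential([
    iaa.Fliplr(0.5),                 # horizontal flip
    iaa.Affine(rotate=(-15, 15)),    # small rotations
    iaa.Multiply((0.8, 1.2)),        # brightness jitter
])
# images_aug = aug(images=images)    # images: list of HxWx3 uint8 arrays

NUM_CLASSES = 2  # vegetative vs. reproductive stage

# Strategy 1: reuse ImageNet weights, train only a new classifier head
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Strategy 2: fine-tune the last layers of the backbone at a lower rate
base.trainable = True
for layer in base.layers[:-30]:      # freeze all but the last 30 layers (assumed)
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```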


Image-Based Skin Cancer Classification System Using Attention Layer (Attention layer를 활용한 이미지 기반 피부암 분류 시스템)

  • GyuWon Lee; SungHee Woo
    • Journal of Practical Engineering Education / v.16 no.1_spc / pp.59-64 / 2024
  • As the population ages, the incidence of cancer is increasing. Skin cancer appears externally, but people often fail to notice it or simply overlook it; if the early detection window is missed, the survival rate for late-stage cancer is only 7.5-11%. Diagnosing serious skin cancer, however, requires considerable time and money, such as detailed examinations and cell tests, rather than simple visual diagnosis. To overcome these challenges, we propose an attention-based CNN skin cancer classification system. If skin cancer can be detected early it can be treated quickly, and the proposed system can greatly assist the work of a specialist. To mitigate the image data imbalance across skin cancer types, the classification model applies an oversampling technique to the unevenly distributed data and adds an attention layer to a pre-trained model; this model is then compared with the model without the attention layer. We also plan to address the data imbalance problem further by strengthening data augmentation techniques for specific classes.
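
The abstract does not describe how the oversampling is implemented; the following is only an assumed, minimal sketch of random oversampling by duplicating samples of under-represented classes until every class matches the largest one, which is the conventional way to rebalance an image dataset before training.

```python
import numpy as np

def random_oversample(images, labels, seed=0):
    """Duplicate samples of under-represented classes until every class
    has as many samples as the largest class (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    images, labels = np.asarray(images), np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    balanced = []
    for cls, count in zip(classes, counts):
        cls_idx = np.flatnonzero(labels == cls)
        extra = rng.choice(cls_idx, size=target - count, replace=True)
        balanced.append(np.concatenate([cls_idx, extra]))
    order = rng.permutation(np.concatenate(balanced))
    return images[order], labels[order]

# Toy usage: 100 images of class 0 vs. 10 images of class 1
X = np.zeros((110, 224, 224, 3), dtype=np.uint8)
y = np.array([0] * 100 + [1] * 10)
X_bal, y_bal = random_oversample(X, y)
print(np.unique(y_bal, return_counts=True))   # both classes now have 100 samples
```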

Strawberry Pests and Diseases Detection Technique Optimized for Symptoms Using Deep Learning Algorithm (딥러닝을 이용한 병징에 최적화된 딸기 병충해 검출 기법)

  • Choi, Young-Woo; Kim, Na-eun; Paudel, Bhola; Kim, Hyeon-tae
    • Journal of Bio-Environment Control / v.31 no.3 / pp.255-260 / 2022
  • This study aimed to develop a service model that uses a deep learning algorithm to detect diseases and pests in strawberries from image data. In addition, detection performance was further improved by proposing a segmented image dataset specialized for disease and pest symptoms. A CNN-based YOLO deep learning model was selected to overcome the slow training and inference speed of existing R-CNN-based models. A general image dataset and the proposed segmented image dataset were prepared to train the pest and disease detection model. When the model was trained with the general training dataset, the pest detection rate was 81.35% and the detection reliability was 73.35%; when trained with the segmented image dataset, the detection rate increased to 91.93% and the detection reliability to 83.41%. This study concludes that the performance of the deep learning model can be improved by using a segmented image dataset instead of a general image dataset.
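
The exact procedure for building the symptom-focused ("segmented") dataset is not described in the abstract; the sketch below is a purely hypothetical illustration of cropping annotated symptom regions out of full plant images with Pillow. File names and the bounding-box format are placeholders.

```python
from PIL import Image

def crop_symptom_regions(image_path, boxes, out_prefix):
    """Crop annotated symptom bounding boxes (x1, y1, x2, y2 in pixels)
    from a full strawberry image to build a symptom-focused dataset."""
    img = Image.open(image_path)
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        img.crop((x1, y1, x2, y2)).save(f"{out_prefix}_{i}.jpg")

# Hypothetical usage with two annotated symptom regions
# crop_symptom_regions("strawberry_001.jpg",
#                      [(120, 80, 260, 210), (300, 150, 420, 280)],
#                      "symptom_001")
```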

Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning (딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구)

  • Cho, Woo-Yun; Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.2 / pp.96-109 / 2024
  • The purpose of this study is to propose a method for evaluating the similarity of show gardens using deep learning models, specifically VGG-16 and ResNet50. A model for judging the similarity of show gardens, referred to as DRG (Deep Recognition of similarity in show Garden design), was developed on top of these networks. An algorithm using global average pooling (GAP) features and the Pearson correlation coefficient was employed to construct the model, and similarity accuracy was analyzed by comparing the similar images retrieved at the 1st (Top1), 3rd (Top3), and 5th (Top5) ranks with the original images. The image data used for the DRG model consisted of 278 works from the Festival International des Jardins de Chaumont-sur-Loire, 27 works from the Seoul International Garden Show, and 17 works from the Korea Garden Show. Image analysis was conducted with the DRG model for both the same group and different groups, resulting in guidelines for assessing show garden similarity. First, for overall image similarity analysis, applying data augmentation techniques to the ResNet50-based model was best suited. Second, for image analysis focusing on internal structure and outer form, it was effective to apply a fixed-size filter (16 cm × 16 cm) to generate images emphasizing form and then compare similarity with the VGG-16 model. An image size of 448 × 448 pixels and the original image in full color were suggested as the optimal settings. Based on these findings, a quantitative method for assessing show gardens is proposed, which is expected to contribute to the continuous development of garden culture through interdisciplinary research.
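
The DRG model is described as comparing GAP feature vectors with the Pearson correlation coefficient; below is one plausible sketch of that pipeline using a pretrained ResNet50 in Keras. The preprocessing, pooling choice, image size, and ranking step are assumptions based on the abstract, not the authors' code.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image

# ResNet50 backbone with a global average pooling (GAP) output
model = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def gap_features(path, size=(448, 448)):
    """Load an image and return its GAP feature vector."""
    img = image.load_img(path, target_size=size)
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x, verbose=0)[0]

def pearson_similarity(f1, f2):
    """Pearson correlation coefficient between two feature vectors."""
    return np.corrcoef(f1, f2)[0, 1]

# Usage (hypothetical file names): rank candidate gardens by similarity
# query = gap_features("garden_query.jpg")
# scores = {p: pearson_similarity(query, gap_features(p))
#           for p in ["garden_001.jpg", "garden_002.jpg"]}
# top5 = sorted(scores, key=scores.get, reverse=True)[:5]
```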

Development and application of stent-based image guided navigation system for oral and maxillofacial surgery (구강외과 수술용 스텐트 기반 영상유도 수술 시스템의 개발)

  • Lee, Woo-Jin; Kim, Dae-Seung; Yi, Won-Jin; Lee, Sam-Sun; Choi, Soon-Chul; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Myung-Jin; Lee, Jee-Ho
    • Imaging Science in Dentistry / v.39 no.3 / pp.149-156 / 2009
  • Purpose : The purpose of this study was to develop a stent-based image guided surgery system and to apply it to oral and maxillofacial surgeries at anatomically complex sites. Materials and Methods : We devised a patient-specific stent for patient-to-image registration and navigation. The three-dimensional positions of the reference probe and the tool probe were tracked by an optical camera system, and the position of the handpiece drill tip relative to the reference probe was monitored continuously on a PC monitor. Using 8 landmarks for measuring accuracy, the spatial discrepancy between the CT image coordinates and the physical coordinates was calculated and tested for normality. Results : The accuracy over the 8 anatomical landmarks showed an overall mean of 0.56 ± 0.16 mm. The developed system was applied to a vertical alveolar bone augmentation in the right mandibular posterior area and to an impacted third molar case with possible inferior alveolar nerve injury. The system provided continuous monitoring of invisible anatomical structures during the operation and 3D information about the operation sites, and the clinical applications showed sufficient accuracy and availability at anatomically complex sites. Conclusion : The developed system showed sufficient accuracy and availability in oral and maxillofacial surgeries for anatomically complex sites.


Development of a method for urban flooding detection using unstructured data and deep learning (비정형 데이터와 딥러닝을 활용한 내수침수 탐지기술 개발)

  • Lee, Haneul; Kim, Hung Soo; Kim, Soojun; Kim, Donghyun; Kim, Jongsung
    • Journal of Korea Water Resources Association / v.54 no.12 / pp.1233-1242 / 2021
  • In this study, a model was developed to determine whether flooding has occurred using image data, a type of unstructured data. CNN-based VGG16 and VGG19 were used to develop the flood classification model. To build the model, images of flooded and non-flooded scenes were collected by web crawling. Since data collected this way contains noise, images irrelevant to this study were first deleted, and then the images were resized to 224×224 for model input. In addition, image augmentation was performed by changing the angle of the images to increase their diversity. Finally, training was performed using 2,500 flooded and 2,500 non-flooded images. Model evaluation showed an average classification performance of 97%. In the future, if the model developed in this study is deployed in a CCTV control center system, it is expected that the response to flood damage can be made quickly.
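
The abstract states that crawled images were resized to 224×224 and augmented by changing the image angle; the sketch below illustrates that preprocessing with Pillow. The specific rotation angles and file handling are assumptions.

```python
from PIL import Image

def preprocess_and_augment(path, angles=(0, 90, 180, 270)):
    """Resize a crawled image to 224x224 and create rotated copies."""
    img = Image.open(path).convert("RGB").resize((224, 224))
    return [img.rotate(angle) for angle in angles]

# Hypothetical usage
# variants = preprocess_and_augment("flooded_street_001.jpg")
# for i, v in enumerate(variants):
#     v.save(f"flooded_street_001_rot{i}.jpg")
```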

Enhancing CT Image Quality Using Conditional Generative Adversarial Networks for Applying Post-mortem Computed Tomography in Forensic Pathology: A Phantom Study (사후전산화단층촬영의 법의병리학 분야 활용을 위한 조건부 적대적 생성 신경망을 이용한 CT 영상의 해상도 개선: 팬텀 연구)

  • Yebin Yoon; Jinhaeng Heo; Yeji Kim; Hyejin Jo; Yongsu Yoon
    • Journal of radiological science and technology / v.46 no.4 / pp.315-323 / 2023
  • Post-mortem computed tomography (PMCT) is commonly employed in forensic pathology. PMCT is mainly performed as a whole-body scan with a wide field of view (FOV), which leads to a decrease in spatial resolution because of the increased pixel size. This study evaluates the potential of a super-resolution model based on conditional generative adversarial networks (CGAN) to enhance CT image quality. 1,761 low-resolution images were obtained from a whole-body scan of a head phantom with a wide FOV, and 341 high-resolution images were obtained using an FOV appropriate for the head phantom. From these, 150 paired images were divided into a training set (96 pairs) and a validation set (54 pairs). Data augmentation with rotations and flips was performed to improve the effectiveness of training. To evaluate the performance of the proposed model, we used the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Deep Image Structure and Texture Similarity (DISTS). These values were obtained for the entire image and for the medial orbital wall, the zygomatic arch, and the temporal bone, where fractures often occur during head trauma. Compared with the low-resolution images, the proposed method improved PSNR by 13.14%, SSIM by 13.10%, and DISTS by 45.45%. The image quality of the three regions where fractures commonly occur during head trauma also improved compared with the low-resolution images.
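
PSNR and SSIM for a super-resolved slice against its high-resolution reference can be computed with scikit-image, as sketched below; DISTS requires a separate learned-metric implementation and is omitted. The data range and the synthetic arrays standing in for CT slices are assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(generated, reference):
    """Compare a super-resolved CT slice against its high-resolution
    reference. Both are 2-D arrays sharing the same value range."""
    drange = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, generated, data_range=drange)
    ssim = structural_similarity(reference, generated, data_range=drange)
    return psnr, ssim

# Usage with synthetic data standing in for CT slices
rng = np.random.default_rng(0)
hr = rng.random((512, 512))
sr = hr + 0.01 * rng.standard_normal((512, 512))
print(evaluate_pair(sr, hr))
```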

Assessment of Visual Landscape Image Analysis Method Using CNN Deep Learning - Focused on Healing Place - (CNN 딥러닝을 활용한 경관 이미지 분석 방법 평가 - 힐링장소를 대상으로 -)

  • Sung, Jung-Han; Lee, Kyung-Jin
    • Journal of the Korean Institute of Landscape Architecture / v.51 no.3 / pp.166-178 / 2023
  • This study introduces and assesses CNN deep learning methods for analyzing visual landscape images on social media that embed user perceptions and experiences, focusing on healing places. Seven adjectives related to healing were selected through text mining and a review of previous studies. Fifty evaluators were then recruited to build the deep learning image dataset; each was asked to collect the three images most suitable for 'healing', 'healing landscape', and 'healing place' from portal sites. The collected images were refined and a data augmentation process was applied to build a CNN model. After that, 15,097 images of 'healing' and 'healing landscape' from portal sites were collected and classified to analyze the visual landscape of healing places. Excluding the 'other' and 'indoor' categories, 'quiet' was the most frequent category with 2,093 images (22%), followed by 'open', 'joyful', 'comfortable', 'clean', 'natural', and 'beautiful'. The research shows that CNN deep learning can derive results from visual landscape image analysis, and suggests that it can supplement existing visual landscape analysis methods and enable more in-depth and diverse analyses in the future by establishing a landscape image learning dataset.
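
The study classifies roughly 15,000 collected images into adjective categories and reports per-category frequencies; the sketch below shows a generic, assumed way to run a trained Keras classifier over a folder and tally the predictions. The model file name, class list, and input size are placeholders, not the authors' settings.

```python
from collections import Counter
from pathlib import Path

import numpy as np
import tensorflow as tf

CLASS_NAMES = ["quiet", "open", "joyful", "comfortable",
               "clean", "natural", "beautiful", "other", "indoor"]

# Model trained on the augmented landscape dataset (placeholder path)
model = tf.keras.models.load_model("healing_landscape_cnn.h5")

def classify_folder(folder, size=(224, 224)):
    """Classify every .jpg in a folder and count predictions per class."""
    counts = Counter()
    for path in Path(folder).glob("*.jpg"):
        img = tf.keras.preprocessing.image.load_img(path, target_size=size)
        x = np.expand_dims(
            tf.keras.preprocessing.image.img_to_array(img) / 255.0, axis=0)
        counts[CLASS_NAMES[int(np.argmax(model.predict(x, verbose=0)))]] += 1
    return counts

# counts = classify_folder("collected_healing_images")
# print(counts.most_common())
```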

Granular Bidirectional and Multidirectional Associative Memories: Towards a Collaborative Buildup of Granular Mappings

  • Pedrycz, Witold
    • Journal of Information Processing Systems / v.13 no.3 / pp.435-447 / 2017
  • Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mapping so that the recall processes (one-directional and bidirectional ones) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas yielding efficient applications for image recall and enhancements and fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights where the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of mappings, we have developed an augmentation of the existing fuzzy clustering (fuzzy c-means, FCM) in the form of a so-called collaborative fuzzy clustering. Here, an interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalized the mapping into its granular version in which numeric prototypes that are formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at the optimization of the characteristics of recalled results (information granules) that are quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
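
The collaborative clustering described here augments standard fuzzy c-means (FCM). As background only, a compact numpy implementation of the standard FCM membership and prototype updates is sketched below; the collaboration and granularity-allocation mechanisms of the paper are not reproduced, and the toy data is assumed.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: X is (n_samples, n_features); returns
    prototypes V (c, n_features) and membership matrix U (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]       # prototype update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)      # membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return V, U

# Usage on toy 2-D data with three clusters
X = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [3, 3], [0, 3])])
V, U = fcm(X, c=3)
```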