• Title/Summary/Keyword: Image Learning

Image-based rainfall prediction from a novel deep learning method

  • Byun, Jongyun;Kim, Jinwon;Jun, Changhyun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.183-183 / 2021
  • Deep learning methods and their applications have become an essential part of prediction and modeling in water-related research areas, including hydrological processes and climate change. The application of deep learning broadens the data sources available in hydrology, making it useful for analyzing precipitation, runoff, groundwater level, evapotranspiration, and so on. However, microclimate analysis and prediction with deep learning methods are still limited by the scarcity of gauge-based data and the shortcomings of existing technologies. In this study, a real-time rainfall prediction model was developed from a sky-image data set using convolutional neural networks (CNNs). The daily image data were collected at Chung-Ang University and Korea University. To achieve high accuracy, the proposed model incorporates data classification, image processing, and ratio adjustment of no-rain data. Rainfall predictions were compared with minutely rainfall data from rain gauge stations close to the image sensors. The results indicate that the proposed model could interpolate the current rainfall observation system and has large potential to fill observation gaps. Information from small-scale areas advances accurate weather forecasting and hydrological modeling at the micro scale. (A brief code sketch follows below.)
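
The following is a minimal sketch of the kind of CNN regressor described above, written in PyTorch; the architecture, input size, and output unit are illustrative assumptions, not the authors' actual network.

```python
# A minimal sketch (not the authors' network): a small CNN that maps a
# sky image to a single rainfall estimate.
import torch
import torch.nn as nn

class SkyRainCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted rainfall (e.g., mm/min, assumed)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

model = SkyRainCNN()
dummy_batch = torch.randn(4, 3, 128, 128)   # 4 sky images, 128x128 RGB (placeholder)
print(model(dummy_batch).shape)              # torch.Size([4, 1])
```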

Active Learning on Sparse Graph for Image Annotation

  • Li, Minxian;Tang, Jinhui;Zhao, Chunxia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.10 / pp.2650-2662 / 2012
  • Due to the semantic gap, the performance of automatic image annotation is still far from satisfactory. Active learning approaches offer a possible solution by selecting the most effective samples for users to label for training. A key research question in active learning is how to select these samples. In this paper, we propose a novel active learning approach based on a sparse graph. Compared with existing active learning approaches, the proposed method selects samples based on two criteria: uncertainty and representativeness. Representativeness indicates how much a sample's label contributes when propagated to the other samples, a criterion that existing approaches do not take into consideration. Extensive experiments show that bringing the representativeness criterion into the sample selection process can significantly improve active learning effectiveness. (A brief code sketch follows below.)
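
Below is a minimal NumPy sketch of combining the two selection criteria; the entropy-based uncertainty, the degree-based representativeness proxy, and the weighting parameter `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of uncertainty + representativeness sample selection
# (hypothetical scoring, not the paper's exact method).
import numpy as np

def entropy(probs):
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_samples(probs, affinity, k=5, alpha=0.5):
    """probs: (n, c) class probabilities; affinity: (n, n) sparse graph weights."""
    uncertainty = entropy(probs)
    representativeness = affinity.sum(axis=1)  # proxy for label-propagation reach
    # Normalize both terms to [0, 1] before combining.
    u = uncertainty / (uncertainty.max() + 1e-12)
    r = representativeness / (representativeness.max() + 1e-12)
    score = alpha * u + (1 - alpha) * r
    return np.argsort(-score)[:k]  # indices to send to the annotator

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=100)                     # fake predictions
affinity = rng.random((100, 100)) * (rng.random((100, 100)) > 0.9)  # sparse graph
print(select_samples(probs, affinity))
```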

Deep Learning in Genomic and Medical Image Data Analysis: Challenges and Approaches

  • Yu, Ning;Yu, Zeng;Gu, Feng;Li, Tianrui;Tian, Xinmin;Pan, Yi
    • Journal of Information Processing Systems / v.13 no.2 / pp.204-214 / 2017
  • Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning and architecture design. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected to training learning machines. We also explore the trends of deep learning technology along several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.

Intra-class Local Descriptor-based Prototypical Network for Few-Shot Learning

  • Huang, Xi-Lang;Choi, Seon Han
    • Journal of Korea Multimedia Society / v.25 no.1 / pp.52-60 / 2022
  • Few-shot learning is a sub-area of machine learning that aims to classify target images when only a few labeled samples are available for training. As a representative few-shot learning method, the Prototypical Network has received much attention due to its simplicity and promising results. However, the Prototypical Network uses the sample mean of samples from the same class as that class's prototype, which easily leads to learning uncharacteristic features in low-data scenarios. In this study, we propose using local descriptors (i.e., patches along the channel dimension within feature maps) from the same class to obtain more representative prototypes for the Prototypical Network, so that significant intra-class feature information is maintained and classification performance on few-shot learning tasks improves. Experimental results on benchmark datasets including mini-ImageNet, CUB-200-2011, and tiered-ImageNet show that the proposed method learns more discriminative intra-class features through the local descriptors and obtains more generic prototype representations under the few-shot setting. (A brief code sketch follows below.)
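
A minimal PyTorch sketch of building a class prototype from local descriptors, i.e., the C-dimensional vectors at each spatial position of the support feature maps, rather than one global mean per image; the query pooling and nearest-prototype rule are simplifying assumptions, not the paper's exact procedure.

```python
# A minimal sketch (simplified, not the paper's exact method): prototypes from
# local descriptors instead of per-image sample means.
import torch

def prototype_from_local_descriptors(support_feats):
    """support_feats: (n_support, C, H, W) feature maps of one class."""
    n, c, h, w = support_feats.shape
    descriptors = support_feats.permute(0, 2, 3, 1).reshape(-1, c)  # (n*H*W, C)
    return descriptors.mean(dim=0)                                  # (C,)

def classify(query_feat, prototypes):
    """query_feat: (C, H, W); prototypes: (n_classes, C). Nearest prototype wins."""
    q = query_feat.mean(dim=(1, 2))               # pool the query to (C,)
    dists = torch.cdist(q[None, :], prototypes)   # (1, n_classes)
    return dists.argmin(dim=1).item()

support = torch.randn(5, 64, 10, 10)   # 5-shot support set of one class (placeholder)
proto = prototype_from_local_descriptors(support)
print(proto.shape)                      # torch.Size([64])
```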

Image generation and classification using GAN-based Semi Supervised Learning (GAN기반의 Semi Supervised Learning을 활용한 이미지 생성 및 분류)

  • Doyoon Jung;Gwangmi Choi;NamHo Kim
    • Smart Media Journal / v.13 no.3 / pp.27-35 / 2024
  • This study deals with a method that combines image generation using GAN (Generative Adversarial Network)-based semi-supervised learning with image classification using ResNet50. Through this, a new approach is proposed to obtain more accurate and diverse results by integrating image generation and classification. The generator and discriminator are trained to distinguish generated images from real images, and image classification is performed using ResNet50. The experimental results confirm that the quality of the generated images changes with the number of epochs, and through this we aim to improve the accuracy of industrial accident prediction. We also present an efficient method to improve the quality of image generation and increase the accuracy of image classification through the combination of GAN and ResNet50. (A brief code sketch follows below.)
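
A minimal PyTorch sketch of the wiring described above: a small GAN generator produces images and a ResNet50 classifier is applied to them; the generator architecture, image size, and two-class head are assumptions, not the paper's training scheme.

```python
# A minimal sketch (hypothetical wiring, not the paper's setup): GAN generator
# output fed to a ResNet50 classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),   # 16x16 RGB output
        )

    def forward(self, z):
        return self.net(z)

gen = Generator()
clf = resnet50(weights=None, num_classes=2)        # e.g., accident vs. normal (assumed)

z = torch.randn(4, 100, 1, 1)
fake = gen(z)                                      # (4, 3, 16, 16)
fake = nn.functional.interpolate(fake, size=224)   # resize for ResNet50 input
logits = clf(fake)
print(logits.shape)                                # torch.Size([4, 2])
```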

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology / v.36 no.1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation that depends on the wavelength of light and reflection by very small floating particles cause low contrast, blurred detail, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN depended on the underwater environment. Image Fusion showed good performance in color correction and sharpness enhancement. These methods are expected to improve the visibility of underwater situations and thus support the monitoring of underwater work and the autonomous operation of unmanned vehicles. (A brief code sketch follows below.)
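
A minimal sketch of the quantitative comparison step; the quality score below is a crude stand-in combining colorfulness, contrast, and saturation, not the actual UIQM/UCIQE definitions, and the images are random placeholders.

```python
# A minimal sketch of averaging a quality metric per enhancement method.
# The metric is a crude stand-in, NOT the real UIQM or UCIQE.
import numpy as np

def crude_underwater_quality(img):
    """img: float RGB array in [0, 1] with shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    chroma = np.sqrt((r - g) ** 2 + (0.5 * (r + g) - b) ** 2)    # colorfulness proxy
    saturation = img.max(axis=-1) - img.min(axis=-1)
    contrast = np.percentile(luminance, 99) - np.percentile(luminance, 1)
    return chroma.std() + contrast + saturation.mean()

def compare_methods(enhanced_sets):
    """enhanced_sets: dict mapping method name -> list of enhanced images."""
    return {name: float(np.mean([crude_underwater_quality(im) for im in imgs]))
            for name, imgs in enhanced_sets.items()}

rng = np.random.default_rng(0)
fake_images = {m: [rng.random((64, 64, 3)) for _ in range(3)]
               for m in ["CycleGAN", "UGAN", "FUnIE-GAN", "Image Fusion"]}
print(compare_methods(fake_images))
```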

Evaluation of Adult Lung CT Image for Ultra-Low-Dose CT Using Deep Learning Based Reconstruction

  • JO, Jun-Ho;MIN, Hyo-June;JEON, Kwang-Ho;KIM, Yu-Jin;LEE, Sang-Hyeok;KIM, Mi-Sung;JEON, Pil-Hyun;KIM, Daehong;BAEK, Cheol-Ha;LEE, Hakjae
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.1-5 / 2021
  • Although CT has the advantage of describing the three-dimensional anatomical structure of the human body, it has the disadvantage of exposing the patient to high doses. Recently, deep learning-based image reconstruction methods have been used to reduce patient dose. The purpose of this study is to analyze the dose reduction and image quality improvement of deep learning-based reconstruction (DLR) for adult chest CT examinations. An adult lung phantom was used for image acquisition and analysis. The lung phantom was scanned in ultra-low-dose (ULD), low-dose (LD), and standard-dose (SD) modes, and images were reconstructed using FBP (filtered back projection), IR (iterative reconstruction), and DLR (deep learning reconstruction) algorithms. Image quality variations across imaging doses were evaluated using noise and SNR. In ULD mode, the noise of the DLR image was reduced by 62.42% compared with the FBP image, and in SD mode, the SNR of the DLR image was increased by 159.60% compared with that of the FBP image. Based on this study, it is anticipated that DLR will not only substantially reduce chest CT dose but also drastically improve image quality. (A brief code sketch follows below.)
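
A minimal NumPy sketch of the image-quality measures mentioned above, with noise taken as the standard deviation in a uniform region of interest (ROI) and SNR as the ROI mean divided by that standard deviation; the ROI coordinates and the simulated FBP/DLR images are placeholders.

```python
# A minimal sketch of ROI-based noise and SNR measurement and percent change.
import numpy as np

def roi_noise_and_snr(image, y0, y1, x0, x1):
    roi = image[y0:y1, x0:x1].astype(np.float64)
    noise = roi.std()
    snr = roi.mean() / noise if noise > 0 else float("inf")
    return noise, snr

def percent_change(new, ref):
    return 100.0 * (new - ref) / ref

rng = np.random.default_rng(0)
fbp_image = 100 + 20 * rng.standard_normal((512, 512))   # noisier reconstruction (simulated)
dlr_image = 100 + 7 * rng.standard_normal((512, 512))    # smoother reconstruction (simulated)

fbp_noise, fbp_snr = roi_noise_and_snr(fbp_image, 200, 300, 200, 300)
dlr_noise, dlr_snr = roi_noise_and_snr(dlr_image, 200, 300, 200, 300)
print(f"noise change: {percent_change(dlr_noise, fbp_noise):.1f}%")
print(f"SNR change:   {percent_change(dlr_snr, fbp_snr):.1f}%")
```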

Thermal Image Processing and Synthesis Technique Using Faster-RCNN (Faster-RCNN을 이용한 열화상 이미지 처리 및 합성 기법)

  • Shin, Ki-Chul;Lee, Jun-Su;Kim, Ju-Sik;Kim, Ju-Hyung;Kwon, Jang-woo
    • Journal of Convergence for Information Technology / v.11 no.12 / pp.30-38 / 2021
  • In this paper, we propose a method for extracting thermal data from thermal images and improving the detection of heating equipment using those data. The main goal is to read the thermal image file byte by byte to extract the thermal data and the real image, and to apply the composite image obtained by synthesizing the image and data to a deep learning model to improve the detection accuracy of heating facilities. Data from KHNP were used for evaluation, and Faster-RCNN was used as the learning model to compare and evaluate deep learning detection performance for each data group. The proposed method improved average precision by 0.17 on average compared with the existing method. In this way, this study attempted to combine thermal image data based on national data with deep learning detection to improve effective data utilization. (A brief code sketch follows below.)
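
A minimal sketch of the synthesis step in PyTorch/torchvision; the alpha-blending of a normalized thermal map with the visible image and the two-class Faster-RCNN head are illustrative assumptions, not the paper's byte-level extraction or its trained model.

```python
# A minimal sketch (hypothetical blending, untrained detector): overlay a
# normalized thermal map onto the visible image and run Faster-RCNN on it.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def composite(visible, thermal, alpha=0.5):
    """visible: (3, H, W) in [0, 1]; thermal: (H, W) raw temperature values."""
    t = (thermal - thermal.min()) / (thermal.max() - thermal.min() + 1e-8)
    heat = torch.stack([t, torch.zeros_like(t), 1.0 - t])   # warm-to-cool color map
    return (1 - alpha) * visible + alpha * heat

# No pretrained weights, to keep the example self-contained.
detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
detector.eval()

visible = torch.rand(3, 480, 640)
thermal = 20 + 60 * torch.rand(480, 640)        # e.g., degrees Celsius (placeholder)
with torch.no_grad():
    detections = detector([composite(visible, thermal)])
print(detections[0].keys())                      # boxes, labels, scores
```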

A Review on Deep Learning-based Image Outpainting (딥러닝 기반 이미지 아웃페인팅 기술의 현황 및 최신 동향)

  • Kim, Kyunghun;Kong, Kyeongbo;Kang, Suk-ju
    • Journal of Broadcast Engineering / v.26 no.1 / pp.61-69 / 2021
  • Image outpainting is an interesting problem in that it continuously fills in the area outside a given image by considering the context of the image. There are two main challenges. The first is to maintain spatial consistency between the content of the generated area and the original input. The second is to generate a high-quality large image from a small amount of adjacent information. Existing image outpainting methods suffer from problems such as inconsistent, blurry, and repetitive pixels. However, thanks to recent developments in deep learning, algorithms that show high performance compared with traditional techniques have been introduced, and deep learning-based image outpainting has been actively researched with various proposed networks. In this paper, we introduce the latest technology and trends in the field of outpainting. This study compares recent techniques by analyzing representative networks among deep learning-based outpainting algorithms and presents experimental results on various datasets with several comparison methods. (A brief code sketch follows below.)
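
A minimal PyTorch sketch of the outpainting problem setup common to the surveyed methods: the input is placed at the center of a larger canvas, a mask marks the unknown border, and a generator fills it; the placeholder generator and the margin are assumptions, not any specific network from the survey.

```python
# A minimal sketch of the outpainting setup (generic, not a specific method).
import torch
import torch.nn as nn

def make_outpainting_input(image, margin=32):
    """image: (3, H, W) in [0, 1]. Returns padded canvas and binary mask."""
    _, h, w = image.shape
    canvas = torch.zeros(3, h + 2 * margin, w + 2 * margin)
    mask = torch.ones(1, h + 2 * margin, w + 2 * margin)   # 1 = to be generated
    canvas[:, margin:margin + h, margin:margin + w] = image
    mask[:, margin:margin + h, margin:margin + w] = 0.0
    return canvas, mask

# Placeholder generator: a few convolutions over image + mask channels.
generator = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

image = torch.rand(3, 128, 128)
canvas, mask = make_outpainting_input(image)
outpainted = generator(torch.cat([canvas, mask], dim=0).unsqueeze(0))
print(outpainted.shape)   # torch.Size([1, 3, 192, 192])
```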

An Effective Framework for Content-Based Image Retrieval with Multi-Instance Learning Techniques

  • Peng, Yu;Wei, Kun-Juan;Zhang, Da-Li
    • Journal of Ubiquitous Convergence Technology / v.1 no.1 / pp.18-22 / 2007
  • Multi-Instance Learning (MIL) deals well with the inherent ambiguity of images in multimedia retrieval. In this paper, an effective framework for Content-Based Image Retrieval (CBIR) with MIL techniques is proposed. The mechanism is based on image segmentation with an improved Mean Shift algorithm, and the segmentation results are processed with mathematical morphology, where the goal is to detect the semantic concepts contained in the query. Every detected sub-image is represented as a vector of multiple features, which is regarded as an instance, and each image becomes a bag comprising a flexible number of instances. We apply several MIL algorithms in this framework to perform retrieval. Extensive experimental results illustrate excellent performance in comparison with existing CBIR-with-MIL methods. (A brief code sketch follows below.)
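
A minimal NumPy sketch of bag construction and a max-rule relevance score; the grid-based "segmentation", the mean-color instance features, and the query concept vector are crude stand-ins for the Mean Shift segmentation and dedicated MIL algorithms used in the paper.

```python
# A minimal sketch: regions become instances, images become bags, and a bag is
# ranked by its best-matching instance (illustrative only).
import numpy as np

def region_features(image, grid=4):
    """Crude stand-in for segmentation: mean color of grid cells as instances."""
    h, w, _ = image.shape
    instances = []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            instances.append(cell.mean(axis=(0, 1)))
    return np.stack(instances)               # (grid*grid, 3) instance vectors

def bag_score(bag, concept):
    """Max-instance similarity: one matching region is enough for relevance."""
    sims = bag @ concept / (np.linalg.norm(bag, axis=1) * np.linalg.norm(concept) + 1e-12)
    return sims.max()

rng = np.random.default_rng(0)
database = [rng.random((64, 64, 3)) for _ in range(5)]   # placeholder image database
concept = np.array([0.9, 0.2, 0.1])                      # hypothetical query concept
ranking = sorted(range(len(database)),
                 key=lambda k: -bag_score(region_features(database[k]), concept))
print(ranking)
```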
