• Title/Summary/Keyword: generative learning


Developing a Machine Learning-Based Defect Data Management System for Multi-Family Housing Units (기계학습 알고리즘 기반 하자 정보 관리 시스템 개발 - 공동주택 전용부분을 중심으로 -)

  • Park, Da-seul;Cha, Hee-sung
    • Korean Journal of Construction Engineering and Management
    • /
    • v.24 no.5
    • /
    • pp.35-43
    • /
    • 2023
  • Along with the increase in multi-unit housing defect disputes, the importance of defect management has also increased. However, previous studies have mostly focused on the 'common areas' of multi-unit housing, and there is little research on systems for the 'management office', one of the parties responsible for defect management. This has left management offices with weak defect management capability and deteriorating management quality. Therefore, this paper proposes a machine learning-based defect data management system for management offices. The goal is to reduce the burden of management by using Optical Character Recognition (OCR) and Natural Language Processing (NLP) modules. The system converts handwritten defect information into digital text via OCR. Using a language model, the defect information is then regenerated in a form specified by the user. Finally, the generated text is stored in a database, and statistical analysis is performed. Through this pipeline, the management office is expected to improve its defect management capability and support its decision-making.
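The pipeline described in this abstract (OCR text, language-model restructuring, database storage, statistics) can be sketched in miniature. Below is a minimal sketch of the parsing-and-statistics stage, assuming the OCR and language-model steps have already produced key-value text; the field names and line format are illustrative assumptions, not the paper's actual schema.

```python
import re
import sqlite3

def parse_defect_text(text):
    """Parse OCR'd defect text into a structured record (illustrative fields)."""
    fields = {}
    for key in ("unit", "location", "defect", "date"):
        m = re.search(rf"{key}:\s*(.+)", text, re.IGNORECASE)
        fields[key] = m.group(1).strip() if m else None
    return fields

def store_and_count(records):
    """Store parsed records in a database and return defect counts by location."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE defects (unit TEXT, location TEXT, defect TEXT, date TEXT)")
    con.executemany(
        "INSERT INTO defects VALUES (:unit, :location, :defect, :date)", records
    )
    rows = con.execute(
        "SELECT location, COUNT(*) FROM defects GROUP BY location"
    ).fetchall()
    con.close()
    return dict(rows)

sample = "unit: 101-1203\nlocation: bathroom\ndefect: tile crack\ndate: 2023-05-02"
record = parse_defect_text(sample)
counts = store_and_count([record])
```

The grouped counts are what a management office would feed into the statistical-analysis step.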

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.41.2-41.2
    • /
    • 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: a single input SDO/AIA 19.3 nm image for Model I, a single input 30.4 nm image for Model II, and double input (19.3 and 30.4 nm) images for Model III. We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining months for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image on metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). CC and RMSE between real and AI-generated images for Model III with 4 by 4 binning are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
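The CC and RMSE metrics used above to compare real and AI-generated images can be computed directly. A minimal sketch over flattened pixel lists, with illustrative values rather than the paper's solar data:

```python
import math

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root mean squared error between two equal-length pixel lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

real = [10.0, 20.0, 30.0, 40.0]   # illustrative pixel values
fake = [12.0, 18.0, 33.0, 39.0]
cc = correlation(real, fake)
err = rmse(real, fake)
```

In the paper these are evaluated on 4 by 4 binned image pairs; here the lists stand in for flattened images.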


Deep survey using deep learning: generative adversarial network

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.78.1-78.1
    • /
    • 2019
  • A huge number of faint objects have not been observed due to the lack of large and deep surveys. In this study, we demonstrate that a deep learning approach can produce a better-quality deep image from a single-pass image, so that it could be an alternative to the conventional image-stacking technique or to expensive large and deep surveys. Using data from Sloan Digital Sky Survey (SDSS) Stripe 82, which provides repeatedly scanned imaging data, a training data set was constructed: g-, r-, and i-band images of single-pass data as input and the r-band co-added image as target. Out of 151 SDSS fields that have been repeatedly scanned 34 times, 120 fields were used for training and 31 fields for validation. Each frame selected for training is 1k by 1k pixels. To avoid possible problems caused by the small number of training sets, frames were randomly selected within each field at every training iteration. Every 5000 iterations, performance was evaluated with RMSE, the peak signal-to-noise ratio (given on a logarithmic scale), the structural similarity index (SSIM), and the difference in SSIM. We continued training until the GAN model with the best performance was found, and applied the best GAN model to NGC0941, located in SDSS Stripe 82. By comparing the radial surface brightness and photometry errors of the images, we found that this technique could generate a deep image, with statistics close to those of the stacked image, from a single-pass image.
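The stacking baseline the GAN is trained to approximate can be illustrated with synthetic noise: co-adding N frames reduces random noise by roughly √N, which is the statistic the network tries to reach from a single frame. A toy sketch with an idealized flat sky and Gaussian noise (all values are illustrative assumptions, not SDSS data):

```python
import math
import random

random.seed(42)

TRUE_SKY = [5.0] * 1000          # idealized noise-free pixel values
NOISE_SIGMA = 1.0
N_FRAMES = 34                    # the abstract's 34 repeated scans

def observe():
    """One single-pass frame: truth plus Gaussian read/photon noise."""
    return [v + random.gauss(0.0, NOISE_SIGMA) for v in TRUE_SKY]

def rmse(img):
    """RMSE of a frame against the noise-free sky."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(img, TRUE_SKY)) / len(img))

frames = [observe() for _ in range(N_FRAMES)]
stacked = [sum(col) / N_FRAMES for col in zip(*frames)]  # co-added image

single_err = rmse(frames[0])
stacked_err = rmse(stacked)
# Co-adding 34 frames cuts noise by roughly sqrt(34) ~ 5.8x; the GAN aims
# to approach the stacked image's statistics from one single-pass frame.
```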


Analysis of methods for the model extraction without training data (학습 데이터가 없는 모델 탈취 방법에 대한 분석)

  • Hyun Kwon;Yonggi Kim;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.57-64
    • /
    • 2023
  • In this study, we analyzed how to steal a target model without training data. Input data are generated using a generative model, and a similar model is created by defining a loss function so that the predicted values of the target model and the similar model are close to each other. The similar model is trained by gradient descent, using the logit value of each class that the target model produces for the generated input data, so that it becomes similar to the target model. The TensorFlow machine learning library was used as the experimental environment, and CIFAR10 and SVHN were used as datasets. A similar model was created using a ResNet model as the target model. Experiments showed that the model stealing method generated a similar model with an accuracy of 86.18% for CIFAR10 and 96.02% for SVHN, producing predicted values similar to those of the target model. In addition, considerations on the model stealing method, its military use, and its limitations were analyzed.
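The logit-matching extraction loop described above can be sketched with a toy linear "target" whose logits are the only thing observed. The actual experiments used TensorFlow, a ResNet target, and CIFAR10/SVHN; everything below (dimensions, weights, learning rate, the random-input "generator") is a simplified assumption.

```python
import random

random.seed(0)

N_FEATURES, N_CLASSES = 3, 2
TARGET_W = [[1.0, -1.0, 0.5], [-0.5, 1.0, -1.0]]   # fixed "black-box" target

def logits(weights, x):
    """Linear logits for a single input vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def gen_input():
    """Stand-in for the generative model that fabricates query inputs."""
    return [random.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]

# Train a similar model so its logits match the target's on generated queries.
student_w = [[0.0] * N_FEATURES for _ in range(N_CLASSES)]
lr = 0.1
for _ in range(2000):
    x = gen_input()
    t = logits(TARGET_W, x)           # only the target's logits are observed
    s = logits(student_w, x)
    for k in range(N_CLASSES):        # gradient step on the squared logit error
        err = s[k] - t[k]
        for j in range(N_FEATURES):
            student_w[k][j] -= lr * err * x[j]

# Agreement of predicted classes on fresh generated queries
tests = [gen_input() for _ in range(200)]
agree = sum(
    max(range(N_CLASSES), key=lambda k: logits(student_w, x)[k])
    == max(range(N_CLASSES), key=lambda k: logits(TARGET_W, x)[k])
    for x in tests
) / len(tests)
```

For this linear toy the student recovers the target weights almost exactly; with deep targets, the paper's point is that the same logit-matching objective still yields high prediction agreement.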

Three Teaching-Learning Plans for Integrated Science Teaching of 'Energy' Applying Knowledge-, Social Problem-, and Individual Interest-Centered Approaches (지식내용, 사회문제, 개인흥미 중심의 통합과학교육 접근법을 적용한 '에너지' 주제의 교수.학습 방안 개발(II))

  • Lee, Mi-Hye;Son, Yeon-A;Young, Donald B.;Choi, Don-Hyung
    • Journal of The Korean Association For Science Education
    • /
    • v.21 no.2
    • /
    • pp.357-384
    • /
    • 2001
  • In this paper, we describe practical teaching-learning plans based on three different theoretical approaches to Integrated Science Education (ISE): knowledge-centered ISE, social problem-centered ISE, and individual interest-centered ISE. We believe this paper will help science teachers understand integrated science education and apply our integrated science teaching materials directly in their classroom instruction. To this end, we developed integrated science teaching-learning plans on the topic of energy, which has a strongly integrated character among integrated science subject contents. These modules were based on the teaching strategies for 'Energy' following the integrated directions organized in the previous paper (Three Strategies for Integrated Science Teaching of "Energy" Applying Knowledge, Social Problem, and Individual Interest Centered Approaches), and we applied instruction models fitting the features of each integrated direction to those teaching strategies. The three integrated science teaching-learning plans are described concretely as follows. 1. For the knowledge-centered integration, we selected the topic 'Journey of Energy' and tried to integrate the knowledge of physics, chemistry, biology, and earth science, applying the instruction model of 'Free Discovery Learning', which emphasizes concepts and inquiry. 2. For the social problem-centered integration, we selected the topic 'Future of Energy' to resolve science-related social problems, and applied the instruction model of 'Project Learning', which emphasizes the learner's cognitive process. 3. For the individual interest-centered integration, we selected the topic 'Transformation of Energy' for the integration of science and individual interest, and applied the instruction model of 'Project Learning' centered on the learner's interests and concerns.
Based upon these directions, we developed the integrated science teaching-learning plans in the following steps. First, we organized 'Integrated Teaching-Learning Contents' according to the topics. Second, based upon this organization, we designed 'Instructional Procedures' to integrate within the topics. Third, in accordance with these 'Instructional Procedures', we created an 'Instructional Coaching Plan' that can be applied in real classrooms. These plans can serve as models for the further development of integrated science instruction for teacher preparation, textbook development, and classroom learning.


A COVID-19 Chest X-ray Reading Technique based on Deep Learning (딥 러닝 기반 코로나19 흉부 X선 판독 기법)

  • Ann, Kyung-Hee;Ohm, Seong-Yong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.789-795
    • /
    • 2020
  • Many deaths have been reported due to the worldwide COVID-19 pandemic. In order to prevent the further spread of COVID-19, it is necessary to quickly and accurately read images of suspected patients and take appropriate measures. To this end, this paper introduces a deep learning-based COVID-19 chest X-ray reading technique that can assist image reading by indicating to medical staff whether a patient is infected. First, a sufficient dataset must be secured to train the reading model, but the currently available COVID-19 open datasets do not have enough image data to ensure learning accuracy. Therefore, we solved the image-count imbalance problem, which degrades AI learning performance, by using a Stacked Generative Adversarial Network (StackGAN++). Next, a DenseNet-based classification model was trained on the augmented dataset to develop the reading model. This model performs binary classification of normal chest X-rays and COVID-19 chest X-rays, and its performance was evaluated using part of the real image data as test data. Finally, the reliability of the model was supported by presenting the basis for judging the presence or absence of disease in the input image using Grad-CAM, one of the techniques of explainable artificial intelligence (XAI).
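The class-imbalance fix described above, using a generator to augment the scarce COVID-19 class until the counts match, can be sketched schematically. The `synthesize` function is a hypothetical stand-in for the StackGAN++ generator, and the data and labels are illustrative assumptions.

```python
import random

random.seed(1)

def synthesize(template):
    """Hypothetical stand-in for a StackGAN++ generator: perturb a real sample."""
    return [v + random.gauss(0.0, 0.05) for v in template]

def balance(dataset):
    """Oversample minority classes with synthetic images until counts match."""
    by_label = {}
    for img, label in dataset:
        by_label.setdefault(label, []).append(img)
    target = max(len(v) for v in by_label.values())
    balanced = list(dataset)
    for label, imgs in by_label.items():
        while len(imgs) < target:
            fake = synthesize(random.choice(imgs))
            imgs.append(fake)
            balanced.append((fake, label))
    return balanced

# 90 normal vs 10 COVID samples: the imbalance the paper addresses
data = [([0.1, 0.2], "normal")] * 90 + [([0.8, 0.7], "covid")] * 10
balanced = balance(data)
counts = {}
for _, label in balanced:
    counts[label] = counts.get(label, 0) + 1
```

After balancing, the DenseNet classifier is trained on the augmented set rather than the skewed original.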

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • In image processing and computer vision, the problem of reconstructing one image into another or generating a new image has steadily drawn attention as hardware advances. However, computer-generated images often still look unnatural to the human eye. With the recent active research in deep learning, image generation and enhancement are also being actively studied, and among these approaches the Generative Adversarial Network (GAN) performs well at image generation. Various GAN models have been presented since the original GAN was proposed, allowing for the generation of more natural images than earlier results. Among them, pix2pix is a conditional GAN model, a general-purpose network that shows good performance on various datasets. pix2pix is based on U-Net, but many U-Net-based networks show better performance. Therefore, in this study, images are generated by applying various networks to the U-Net of pix2pix, and the results are compared and evaluated. The images generated through each network confirm that the pix2pix models with Attention, R2, and Attention-R2 networks show better performance than the existing pix2pix model using U-Net; we also examine the limitations of the strongest network and suggest it as a subject of future study.
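The attention mechanism added to pix2pix's U-Net can be illustrated with the additive attention gate commonly placed on skip connections (as in Attention U-Net): the decoder's gating signal decides how much of each encoder feature passes through. An elementwise toy sketch; the weights are illustrative scalars, not the paper's trained parameters.

```python
import math

def relu(v):
    return max(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def attention_gate(skip, gating, w_x=0.8, w_g=0.6, w_psi=1.0, bias=-0.2):
    """Scale each skip-connection feature by a learned [0, 1] attention weight."""
    out = []
    for x, g in zip(skip, gating):
        a = sigmoid(w_psi * relu(w_x * x + w_g * g) + bias)  # attention coefficient
        out.append(x * a)
    return out

skip_features = [0.5, 1.2, -0.3, 2.0]      # features from the encoder
gating_signal = [0.9, 0.1, 0.7, -0.5]      # coarser features from the decoder
gated = attention_gate(skip_features, gating_signal)
```

Since the coefficient lies in (0, 1), the gate can only attenuate skip features, suppressing irrelevant regions before concatenation in the decoder.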

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society
    • /
    • v.63 no.3
    • /
    • pp.386-396
    • /
    • 2020
  • Objective : To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods : GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining adversarial loss and voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results : The mean overall similarity of the synthetic MRT2 images, as evaluated by the radiologists, was 80.2%. In the blind classification of the real MRT2 images, the failure rate ranged from 0% to 40%. The MAE value of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion : This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were relatively well implemented. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesis of MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a large dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
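The MAE and PSNR metrics used in the quantitative analysis above can be computed as follows; a minimal sketch over flattened 8-bit pixel lists with illustrative values, not the study's images.

```python
import math

def mae(real, synth):
    """Mean absolute error in pixel units."""
    return sum(abs(a - b) for a, b in zip(real, synth)) / len(real)

def psnr(real, synth, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = sum((a - b) ** 2 for a, b in zip(real, synth)) / len(real)
    if mse == 0:
        return float("inf")
    return 20 * math.log10(peak) - 10 * math.log10(mse)

real_img = [100.0, 150.0, 200.0, 50.0]
synth_img = [101.0, 149.0, 201.0, 49.0]   # every pixel off by exactly 1
```

With every pixel off by 1, MAE is 1 pixel and PSNR is 20·log10(255) ≈ 48.13 dB; the study's synthetic slices reached 61.96-68.16 dB, i.e., smaller voxel-wise error.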

A Research on the Possibility of Restoring Cultural Assets by Artificial Intelligence through the Application of Artificial Neural Networks to Roof Tiles (Wadang)

  • Kim, JunO;Lee, Byong-Kwon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.19-26
    • /
    • 2021
  • Cultural assets excavated in historical areas have their own characteristics based on the background of their times, and their patterns and characteristics change little by little with history and with the flow of the region in which they spread. Cultural properties excavated in some areas represent the culture of the time; some maintain their intact appearance, but most are damaged, lost, or divided into parts, and many experts are mobilized to research their composition and repair the damaged parts. The purpose of this research is to learn the patterns and characteristics of the past through artificial neural networks for such restoration research, and to restore the lost parts of excavated cultural assets based on a Generative Adversarial Network (GAN) [1]. In this research, the damaged or lost parts are restored from the remaining parts of the excavated cultural asset, using a GAN trained on 2D images of complete cultural assets. The research focuses on how well not only the damaged parts are recovered but also the colors and materials are reproduced. Finally, by applying the trained neural network to an actually damaged cultural asset, we confirmed the extent of the recovered area and the limitations of the approach.
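Restoration quality over only the lost region can be measured with a masked reconstruction error, a common ingredient of GAN-based inpainting losses. A minimal sketch, assuming flattened pixel lists and a binary mask marking the damaged region (all values illustrative):

```python
def masked_l1(original, generated, lost_mask):
    """Mean absolute error computed only over the lost/damaged region (mask == 1)."""
    total, count = 0.0, 0
    for o, g, m in zip(original, generated, lost_mask):
        if m:
            total += abs(o - g)
            count += 1
    return total / count

# A complete reference pattern, a restoration, and the region that was missing
reference = [0.2, 0.4, 0.6, 0.8, 1.0]
restored  = [0.2, 0.4, 0.5, 0.7, 1.0]
lost_mask = [0,   0,   1,   1,   0]   # only pixels 2-3 were damaged

loss = masked_l1(reference, restored, lost_mask)
```

Restricting the loss to the masked region keeps the intact parts of the tile fixed while the generator is judged only on what it fills in.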

The Effect of Training Patch Size and ConvNeXt application on the Accuracy of CycleGAN-based Satellite Image Simulation (학습패치 크기와 ConvNeXt 적용이 CycleGAN 기반 위성영상 모의 정확도에 미치는 영향)

  • Won, Taeyeon;Jo, Su Min;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.3
    • /
    • pp.177-185
    • /
    • 2022
  • A deep learning method was proposed for restoring occluded areas in high-resolution optical satellite images by referring to images taken with the same type of sensor. To achieve natural continuity between the simulated occluded region and the surrounding image, while maintaining the pixel distribution of the original image as much as possible in the patch-segmented image, a CycleGAN (Cycle Generative Adversarial Network) with ConvNeXt blocks applied was used to analyze three experimental regions. In addition, we compared experimental results for a training patch size of 512×512 pixels and a doubled size of 1024×1024 pixels. Experiments on three regions with different characteristics showed that the ConvNeXt CycleGAN methodology yielded an improved R² value compared to the image produced by the existing CycleGAN and to the histogram-matched image. In the experiment on training patch size, an R² value of about 0.98 was obtained for 1024×1024 pixel patches. Furthermore, comparing the pixel distribution for each image band, the simulation result trained with the larger patch size showed a histogram distribution more similar to that of the original image. Therefore, by using ConvNeXt CycleGAN, which improves on both the image produced by the existing CycleGAN method and the histogram-matched image, it is possible to derive simulation results similar to the original image and perform a successful simulation.
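The histogram-matching baseline that the ConvNeXt CycleGAN is compared against can be sketched with rank-based matching: each simulated pixel receives the reference value of the same rank, so the output takes on the reference band's pixel distribution exactly. A minimal sketch over flattened band values (illustrative numbers, not the paper's imagery):

```python
def histogram_match(source, reference):
    """Rank-based histogram matching: give source pixels the reference distribution."""
    order = sorted(range(len(source)), key=lambda i: source[i])  # source ranks
    ref_sorted = sorted(reference)
    matched = [0.0] * len(source)
    for rank, idx in enumerate(order):
        matched[idx] = ref_sorted[rank]   # same rank, reference's value
    return matched

occluded_band = [5.0, 1.0, 3.0, 2.0, 4.0]     # simulated band values
original_band = [50.0, 10.0, 30.0, 20.0, 40.0]
matched = histogram_match(occluded_band, original_band)
```

This preserves the per-band distribution but, unlike the CycleGAN approach, cannot restore spatial structure in the occluded region, which is why the learned method attains a higher R².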