• Title/Summary/Keyword: adversarial training

A Comparison of Deep Reinforcement Learning and Deep Learning for Complex Image Analysis

  • Khajuria, Rishi;Quyoom, Abdul;Sarwar, Abid
    • Journal of Multimedia Information System / v.7 no.1 / pp.1-10 / 2020
  • Image analysis is an important and predominant task for classifying the different parts of an image. The analysis of complex images such as histopathological images is a crucial factor in oncology because of its ability to help pathologists interpret images, and various feature extraction techniques have therefore evolved over time for such analysis. Although deep reinforcement learning is a new and emerging technique, little effort has been made so far to compare deep learning and deep reinforcement learning for image analysis. The paper highlights how the two techniques differ in feature extraction from complex images and discusses their potential pros and cons. The use of Convolutional Neural Networks (CNNs) in image segmentation, tumour detection and diagnosis, and feature extraction is important, but several challenges must be overcome before deep learning can be applied to digital pathology: the limited availability of training examples in medical image datasets, feature extraction from the whole area of the image, ground-truth localized annotations, adversarial effects of input representations, and the extremely large size of digital pathology slides (in gigabytes). Formulating Histopathological Image Analysis (HIA) as a Multiple Instance Learning (MIL) problem is a remarkable step, in which a histopathological image is divided into high-resolution patches, predictions are made for each patch, and these are then combined into an overall slide prediction; however, this approach suffers from a loss of contextual and spatial information. In such cases, deep reinforcement learning techniques can be used to learn features from limited data without losing contextual and spatial information.
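
The MIL formulation described in this abstract (patch-level predictions combined into a slide-level prediction) can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical outline: the tiny patch classifier, patch size, and pooling choices are placeholders for illustration, not the models compared in the paper.

```python
# A minimal sketch of the MIL idea: tile a slide into high-resolution patches,
# score each patch with a patch-level classifier, and pool the patch scores
# into a slide-level prediction. The tiny classifier below is a placeholder.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Placeholder patch-level scorer (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, 1)

    def forward(self, patches):                    # patches: (N, 3, H, W)
        h = self.features(patches).flatten(1)      # (N, 16)
        return torch.sigmoid(self.head(h))         # per-patch probability

def slide_prediction(classifier, slide, patch_size=256, pooling="max"):
    """Tile a slide tensor (3, H, W) into patches and pool the patch scores."""
    c = slide.shape[0]
    patches = (slide
               .unfold(1, patch_size, patch_size)  # tile rows
               .unfold(2, patch_size, patch_size)  # tile columns
               .permute(1, 2, 0, 3, 4)
               .reshape(-1, c, patch_size, patch_size))
    scores = classifier(patches).squeeze(1)        # (num_patches,)
    # Max pooling encodes the classic MIL assumption that a slide is positive
    # if any patch is positive; mean pooling is a common alternative.
    return scores.max() if pooling == "max" else scores.mean()

if __name__ == "__main__":
    clf = PatchClassifier()
    fake_slide = torch.rand(3, 1024, 1024)         # stand-in for a slide region
    print(float(slide_prediction(clf, fake_slide)))
```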

Challenges to Prevent in Practice for Effective Cost and Time Control of Construction Projects

  • Olawale, Yakubu A.
    • Journal of Construction Engineering and Project Management / v.10 no.1 / pp.16-32 / 2020
  • Cost and time control of projects is important in preventing project failure. However, achieving effective cost and time control in practice is often challenging. The challenges of project cost and time control in practice are investigated by carrying out a questionnaire survey of the top 150 construction contractors in the UK, followed by in-depth semi-structured interviews of practitioners from 15 construction companies in the country. Quantitative analysis reveals that design change is the most important factor inhibiting the ability of UK contractors to effectively control both the cost and time of construction projects. Four of the top five factors inhibiting effective cost control are also the top factors inhibiting effective time control, albeit in a different order. These top factors (design changes, inaccurate evaluation of project time/duration, risk and uncertainty, and non-performance of subcontractors and nominated suppliers) were also found to be endogenous to the project. Additionally, qualitative analysis of the interviews reveals 16 key challenges to prevent for effective project cost and time control in practice. These are classified into four categories based on where they stem from: from the organisation (1. lack of integration of cost and time during project control, 2. lack of management buy-in, 3. complicated project control systems and processes, 4. lack of a project control training regime); from the construction management/project management approach (5. lapses in integration of interfaces, 6. project control not being implemented from the early stages of a project, 7. inefficient utilisation and control of labour, 8. limited time devoted to planning how a project will be controlled at the outset); from the client (9. excessive authorisation gates, 10. use of adversarial and non-collaborative forms of contracts, 11. communication problems within the client set-up, 12. obstructive client representatives); and from the project team (13. lack of detailed/complete design, 14. lack of trust among the project partners, 15. limited time devoted to project control on site, 16. non-factual reporting). The study posits that knowledge of these project control inhibiting factors and challenges is the first step towards ensuring they are avoided, enabling the implementation of a more effective project cost and time control process in practice.

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, which averages or sums the pixel values of multiple exposures of a specific area, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and, above all, it takes a long time. If a single-shot image can be processed to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with earlier algorithm-based programming; one of them is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a kind of deep learning, the conditional generative adversarial network (cGAN). r-band camcol2 south data from SDSS Stripe 82 were used. From all fields, images stacked from only 22 individual exposures were used, paired with the single-pass images that were included in each stack. All fields were cut into $128{\times}128$ pixel patches, giving a total of 17930 images; 14234 pairs were used for training the cGAN and 3696 pairs for verifying the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared to $1.24{\times}10^{-3}$ for the original input data. We also applied the model to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other de-noising methods. In addition, with photometry, the number count of stacked-cGAN matched sources is larger than that of single-pass-stacked matched sources, especially for fainter objects, and magnitude completeness also improves for fainter objects. With this work, it becomes possible to reliably observe objects one magnitude fainter.
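
The evaluation setup in this abstract, single-pass inputs paired with stacked targets and compared by pixel RMS error, can be sketched as follows. This is an illustrative NumPy outline with a placeholder denoiser standing in for the trained cGAN generator; the array names and shapes are assumptions, not the authors' code.

```python
# A minimal sketch: given paired single-shot and stacked 128x128 cutouts,
# compare the pixel-wise RMS error of a denoiser's output against the RMS
# error of the raw single-shot input. `denoise` stands in for the generator.
import numpy as np

def rms_error(pred, target):
    """Pixel-wise root-mean-square error between two image arrays."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def evaluate_pairs(single_shots, stacked, denoise):
    """single_shots, stacked: arrays of shape (N, 128, 128) with matched pairs."""
    baseline = rms_error(single_shots, stacked)            # input vs. target
    generated = np.stack([denoise(img) for img in single_shots])
    improved = rms_error(generated, stacked)               # output vs. target
    return baseline, improved

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stacked = rng.normal(0.5, 0.1, size=(8, 128, 128))     # stand-in "truth"
    single = stacked + rng.normal(0.0, 0.05, size=stacked.shape)  # noisier input
    identity = lambda img: img                              # placeholder denoiser
    print(evaluate_pairs(single, stacked, identity))
```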

Analysis of Deep Learning Model Vulnerability According to Input Mutation (입력 변이에 따른 딥러닝 모델 취약점 연구 및 검증)

  • Kim, Jaeuk;Park, Leo Hyun;Kwon, Taekyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.1 / pp.51-59 / 2021
  • A deep learning model can produce false prediction results for inputs that deviate from the training data through mutation, which can lead to fatal accidents in areas such as autonomous driving and security. To ensure the reliability of a model, its ability to cope with exceptional situations should be verified through various mutations. However, previous studies covered a limited scope of models and applied several mutation types without separating them. Based on CIFAR-10, a dataset widely used for deep learning verification, this study carries out reliability verification for a total of six models, including various commercialized models and their additional versions. To this end, six types of input mutation that may occur in real life are applied to the dataset individually, each with various parameters, and the accuracy of the models is compared for each mutation to rigorously identify the vulnerabilities of the models associated with a particular mutation type.
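
A mutation-robustness check of the kind this abstract describes can be sketched as follows. The two mutations shown (Gaussian noise and a brightness shift) are stand-ins for the paper's six mutation types, and `model` and `loader` are assumed to be an already trained classifier and a CIFAR-10 style DataLoader; this is a hypothetical outline, not the study's verification framework.

```python
# Apply a simple input mutation at several strengths and compare clean vs.
# mutated accuracy of a trained classifier.
import torch

def add_gaussian_noise(x, std):
    return (x + std * torch.randn_like(x)).clamp(0.0, 1.0)

def shift_brightness(x, delta):
    return (x + delta).clamp(0.0, 1.0)

@torch.no_grad()
def accuracy(model, loader, mutate=None):
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        if mutate is not None:
            images = mutate(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

def robustness_report(model, loader):
    report = {"clean": accuracy(model, loader)}
    for std in (0.02, 0.05, 0.1):                      # mutation parameters
        report[f"noise_std={std}"] = accuracy(
            model, loader, lambda x, s=std: add_gaussian_noise(x, s))
    for delta in (0.1, 0.2):
        report[f"brightness+={delta}"] = accuracy(
            model, loader, lambda x, d=delta: shift_brightness(x, d))
    return report
```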

Structural health monitoring response reconstruction based on UAGAN under structural condition variations with few-shot learning

  • Jun, Li;Zhengyan, He;Gao, Fan
    • Smart Structures and Systems / v.30 no.6 / pp.687-701 / 2022
  • Inevitable response loss under complex operational conditions significantly affects the integrity and quality of measured data, rendering structural health monitoring (SHM) ineffective. To remedy the impact of data loss, a common approach is to transfer the recorded response of an available measurement point to the location where the data loss occurred by establishing a response mapping from measured data. However, current research has not yet addressed subsequent changes in structural condition or learning the response mapping from a small sample. This paper therefore proposes a novel data-driven structural response reconstruction method based on a carefully designed generative adversarial network (UAGAN). Advanced deep learning techniques, including U-shaped dense blocks, self-attention, and a customized loss function, are embedded in UAGAN to improve the extraction of universal and representative features and the establishment of a generalized response mapping. In numerical validation, UAGAN efficiently and accurately captures the distinguishing features of the structural response from only 40 training samples of the intact structure. Moreover, the established response mapping is universal: it effectively reconstructs responses of the structure subjected to up to 10% random stiffness reduction or structural damage. In the experimental validation, UAGAN is trained with ambient response and applied to reconstruct the response measured under an earthquake. The reconstruction losses of the response in the time and frequency domains reached 16% and 17%, respectively, which is better than previous research and demonstrates the leading performance of the carefully designed network. In addition, the modal parameters identified from the reconstructed responses and the corresponding true responses are highly consistent, indicating that the proposed UAGAN has great potential for application in practical civil engineering.
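
The time- and frequency-domain reconstruction losses quoted in this abstract can be illustrated with a short sketch. The relative-error definition and the FFT amplitude comparison below are assumptions for illustration and may differ from the exact metric used in the paper.

```python
# Relative errors between a reconstructed response signal and the true measured
# signal, computed on the raw time series and on its FFT amplitude spectrum.
import numpy as np

def relative_error(pred, truth):
    """||pred - truth|| / ||truth|| expressed as a percentage."""
    return 100.0 * np.linalg.norm(pred - truth) / np.linalg.norm(truth)

def reconstruction_losses(reconstructed, measured):
    time_loss = relative_error(reconstructed, measured)
    freq_loss = relative_error(np.abs(np.fft.rfft(reconstructed)),
                               np.abs(np.fft.rfft(measured)))
    return time_loss, freq_loss

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 2000)
    measured = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4.0 * t)
    reconstructed = measured + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(reconstruction_losses(reconstructed, measured))
```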

Challenges of diet planning for children using artificial intelligence

  • Changhun, Lee;Soohyeok, Kim;Jayun, Kim;Chiehyeon, Lim;Minyoung, Jung
    • Nutrition Research and Practice / v.16 no.6 / pp.801-812 / 2022
  • BACKGROUND/OBJECTIVES: Diet planning in childcare centers is difficult because of the required knowledge of nutrition and development as well as the high design complexity associated with large numbers of food items. Artificial intelligence (AI) is expected to provide diet-planning solutions via automatic and effective application of professional knowledge, addressing the complexity of optimal diet design. This study presents the results of the evaluation of the utility of AI-generated diets for children and provides related implications. MATERIALS/METHODS: We developed 2 AI solutions for children aged 3-5 yrs using a generative adversarial network (GAN) model and a reinforcement learning (RL) framework. After training these solutions to produce daily diet plans, experts evaluated the human- and AI-generated diets in 2 steps. RESULTS: In the evaluation of adequacy of nutrition, where experts were provided only with nutrient information and no food names, the proportion of strong positive responses to RL-generated diets was higher than that of the human- and GAN-generated diets (P < 0.001). In contrast, in terms of diet composition, the experts' responses to human-designed diets were more positive when experts were provided with food name information (i.e., composition information). CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the development and evaluation of AI to support dietary planning for children. This study demonstrates the possibility of developing AI-assisted diet planning methods for children and highlights the importance of composition compliance in diet planning. Further integrative cooperation in the fields of nutrition, engineering, and medicine is needed to improve the suitability of our proposed AI solutions and benefit children's well-being by providing high-quality diet planning in terms of both compositional and nutritional criteria.

Sparse Class Processing Strategy in Image-based Livestock Defect Detection (이미지 기반 축산물 불량 탐지에서의 희소 클래스 처리 전략)

  • Lee, Bumho;Cho, Yesung;Yi, Mun Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1720-1728 / 2022
  • The Industry 4.0 era has opened with the development of artificial intelligence technology, and the realization of smart farms incorporating ICT is receiving great attention in the livestock industry. Among these, quality management of livestock products and livestock operations incorporating computer vision-based artificial intelligence represents a key technology. However, the insufficient amount of livestock image data for model training and the severely imbalanced label ratio for recognizing specific defective states are major obstacles to related research and technology development. To overcome these problems, this study proposes combining oversampling and adversarial case generation techniques as a way to effectively utilize scarce labels for successful defect detection. In addition, experiments comparing the performance and time cost of the applicable techniques were conducted. Through these experiments, we confirm the validity of the proposed methods and draw utilization strategies from the results.
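
The combination proposed in this abstract, oversampling the sparse class plus adversarial case generation, might look roughly like the following sketch, which uses a basic FGSM perturbation as the adversarial-case generator. The model, epsilon, and oversampling factor are placeholders, not the paper's settings.

```python
# Random oversampling of the sparse (defect) class plus FGSM-style adversarial
# variants of those samples, to enlarge the minority class before training.
import torch
import torch.nn.functional as F

def oversample_minority(images, labels, minority_class, factor=4):
    """Repeat minority-class samples `factor` times (random oversampling)."""
    idx = (labels == minority_class).nonzero(as_tuple=True)[0]
    rep = idx.repeat(factor)
    return torch.cat([images, images[rep]]), torch.cat([labels, labels[rep]])

def fgsm_examples(model, images, labels, epsilon=0.01):
    """Generate adversarial variants of the given samples with one FGSM step."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def augment_sparse_class(model, images, labels, minority_class):
    images, labels = oversample_minority(images, labels, minority_class)
    idx = (labels == minority_class).nonzero(as_tuple=True)[0]
    adv = fgsm_examples(model, images[idx], labels[idx])
    return torch.cat([images, adv]), torch.cat([labels, labels[idx]])
```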

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.41.2-41.2 / 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: a single-input SDO/AIA 19.3 nm image for Model I, a single-input 30.4 nm image for Model II, and double-input (19.3 and 30.4 nm) images for Model III. We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining data for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE); for Model III with 4 by 4 binning, the CC and RMSE between real and AI-generated images are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at a higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
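
The two metrics quoted in this abstract, CC and RMSE after 4 by 4 binning, can be computed as in the short sketch below. The binning-by-averaging choice and array names are assumptions for illustration, not the authors' evaluation code.

```python
# Pearson correlation coefficient (CC) and RMSE between a generated image and
# the target image, after 4x4 binning of both.
import numpy as np

def bin_image(img, factor=4):
    """Average-pool an image whose sides are divisible by `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cc_and_rmse(generated, target, factor=4):
    g, t = bin_image(generated, factor), bin_image(target, factor)
    cc = np.corrcoef(g.ravel(), t.ravel())[0, 1]
    rmse = np.sqrt(np.mean((g - t) ** 2))
    return cc, rmse

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = rng.normal(size=(256, 256))
    generated = target + 0.5 * rng.normal(size=target.shape)
    print(cc_and_rmse(generated, target))
```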

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.363-373 / 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; furthermore, automating this complicated process is a challenging task. In this paper, we propose a new concept for true orthoimage generation using deep learning (DL), which is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges whether images are fake or real until the results are satisfactory; this mutually adversarial mechanism improves the quality of the results. Experiments were performed using the GAN-based Pix2Pix model with IR (infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to that of the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement would be necessary.
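
The mutually adversarial mechanism described in this abstract, in a Pix2Pix-style image-to-image setting, can be sketched as a single training step. The tiny stand-in networks, optimizers, input/output channel counts, and L1 weight below are illustrative assumptions, not the configuration used in the paper.

```python
# One adversarial update for a Pix2Pix-style image-to-image GAN: the generator
# maps an input image (e.g. a LiDAR intensity raster) to a target-like image,
# and the discriminator judges (input, image) pairs as real or fake.
import torch
import torch.nn as nn

generator = nn.Sequential(              # stand-in for a U-Net generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(          # stand-in for a PatchGAN discriminator
    nn.Conv2d(1 + 3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(intensity, target, l1_weight=100.0):
    """One adversarial update on a batch of (intensity, orthoimage) pairs."""
    fake = generator(intensity)

    # Discriminator: real pairs -> 1, fake pairs -> 0.
    d_real = discriminator(torch.cat([intensity, target], dim=1))
    d_fake = discriminator(torch.cat([intensity, fake.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the discriminator and stay close to the target (L1 term).
    d_fake = discriminator(torch.cat([intensity, fake], dim=1))
    g_loss = (bce(d_fake, torch.ones_like(d_fake))
              + l1_weight * (fake - target).abs().mean())
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return float(d_loss), float(g_loss)
```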

Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.63-70 / 2021
  • Generating super-resolution meteorological data using a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, Lambert conformal conic projection and objective analysis were applied based on observation data and ERA5 reanalysis fields from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge showed RMSE improvements of up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual, domain-specific data generation procedure described above. Experiments were conducted to generate high-resolution data at 1 km resolution from global model data at 10 km resolution. Finally, the results generated with SRGAN have a higher resolution than the global model input data and show an analysis pattern similar to the manually generated high-resolution analysis data, though with smoother boundaries.
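
The SRGAN-style setup described in this abstract, a generator that upsamples a coarse meteorological field combined with a content-plus-adversarial loss, might be sketched as follows. The small networks, the factor-of-10 upsampling, and the loss weights are illustrative assumptions, not the study's architecture.

```python
# A simplified super-resolution generator (bicubic upsampling plus a small CNN
# refinement) and an SRGAN-style generator loss combining a pixel-wise content
# term with an adversarial term from a discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRGenerator(nn.Module):
    """Upsample a coarse field by `scale` and refine it with a small CNN."""
    def __init__(self, channels=1, scale=10):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, coarse):                       # coarse: (B, C, H, W)
        up = F.interpolate(coarse, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.refine(up)                  # residual refinement

def generator_loss(discriminator, fake, target, adv_weight=1e-3):
    """SRGAN-style objective: content (MSE) term plus adversarial term."""
    content = F.mse_loss(fake, target)
    logits = discriminator(fake)
    adversarial = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    return content + adv_weight * adversarial

if __name__ == "__main__":
    g = SRGenerator()
    coarse_field = torch.rand(2, 1, 16, 16)          # stand-in 10 km grid
    print(g(coarse_field).shape)                     # -> torch.Size([2, 1, 160, 160])
```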