• Title/Abstract/Keyword: adversarial training

Search results: 101 items

A Comparison of Deep Reinforcement Learning and Deep learning for Complex Image Analysis

  • Khajuria, Rishi;Quyoom, Abdul;Sarwar, Abid
    • Journal of Multimedia Information System / Vol. 7 No. 1 / pp.1-10 / 2020
  • Image analysis is an important and predominant task for classifying the different parts of an image. The analysis of complex images such as histopathological images is a crucial factor in oncology because it helps pathologists interpret images, and various feature extraction techniques have therefore evolved over time for such analysis. Although deep reinforcement learning is a new and emerging technique, little effort has been made to compare deep learning and deep reinforcement learning for image analysis. The paper highlights how the two techniques differ in feature extraction from complex images and discusses their potential pros and cons. The use of Convolutional Neural Networks (CNNs) in image segmentation, tumour detection and diagnosis, and feature extraction is important, but several challenges must be overcome before deep learning can be applied to digital pathology: the availability of sufficient training examples in medical image datasets, feature extraction from the whole area of the image, ground-truth localized annotations, adversarial effects of input representations, and the extremely large size of digital pathology slides (gigabytes). Formulating Histopathological Image Analysis (HIA) as a Multiple Instance Learning (MIL) problem is a remarkable step, in which a histopathological image is divided into high-resolution patches, predictions are made for each patch, and the patch predictions are combined into an overall slide prediction; however, it suffers from loss of contextual and spatial information. In such cases, deep reinforcement learning techniques can be used to learn features from limited data without losing contextual and spatial information.
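
As a concrete illustration of the patch-based MIL formulation described above, the following is a minimal sketch of combining per-patch predictions into a slide-level prediction; the classifier interface, array shapes, and pooling choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mil_slide_prediction(patches, patch_model, aggregation="mean"):
    """Combine per-patch predictions into a whole-slide prediction.

    `patches`: array of shape (num_patches, H, W, C) cut from one slide.
    `patch_model`: any classifier exposing predict_proba (assumed interface).
    """
    patches = np.asarray(patches)
    # Per-patch probability of the positive (e.g. tumour) class.
    patch_scores = patch_model.predict_proba(
        patches.reshape(len(patches), -1))[:, 1]

    if aggregation == "max":
        # Classic MIL assumption: the slide is positive if any patch is.
        return float(patch_scores.max())
    # Mean pooling: average the evidence over all patches.
    return float(patch_scores.mean())
```

Max pooling encodes the classic MIL assumption that a slide is positive if any of its patches is; aggregating patches independently like this is also where the loss of contextual and spatial information mentioned above comes from.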

Challenges to Prevent in Practice for Effective Cost and Time Control of Construction Projects

  • Olawale, Yakubu A.
    • Journal of Construction Engineering and Project Management / Vol. 10 No. 1 / pp.16-32 / 2020
  • Cost and time control of projects is important in preventing project failure. However, achieving effective cost and time control in practice is often challenging. The challenges of project cost and time control in practice are investigated by carrying out a questionnaire survey of the top 150 construction contractors in the UK, followed by in-depth semi-structured interviews of practitioners from 15 construction companies in the country. Quantitative analysis reveals that design change is the most important factor inhibiting the ability of UK contractors to effectively control both the cost and time of construction projects. Four of the top five factors inhibiting effective cost control are also the top factors inhibiting effective time control, albeit in a different order. These top factors (design changes, inaccurate evaluation of project time/duration, risk and uncertainty, and non-performance of subcontractors and nominated suppliers) were also found to be endogenous to the project. Additionally, qualitative analysis of the interviews reveals 16 key challenges to prevent for effective project cost and time control in practice. These are classified into four categories based on where they stem from: from the organisation (1. lack of integration of cost and time during project control, 2. lack of management buy-in, 3. complicated project control systems and processes, 4. lack of a project control training regime); from the construction management/project management approach (5. lapses in integration of interfaces, 6. project control not being implemented from the early stages of a project, 7. inefficient utilisation and control of labour, 8. limited time devoted to planning how a project will be controlled at the outset); from the client (9. excessive authorisation gates, 10. use of adversarial and non-collaborative forms of contracts, 11. communication problems within the client set-up, 12. obstructive client representatives); and from the project team (13. lack of detailed/complete design, 14. lack of trust among the project partners, 15. limited time devoted to project control on site, 16. non-factual reporting). The study posits that knowledge of these project-control-inhibiting factors and challenges is the first step in ensuring they are avoided and enables the implementation of a more effective project cost and time control process in practice.

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / Vol. 44 No. 1 / pp.54.2-54.2 / 2019
  • Removing noise, which occurs inevitably when taking image data, has been a major concern. Image stacking, averaging or summing the pixel values of multiple exposures of a specific area, is regarded as essentially the only way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all it takes a long time. If a single-shot image can be handled well enough to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with former algorithm-based programming; one of them is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a kind of deep learning, the conditional generative adversarial network (cGAN). r-band camcol2 south data from SDSS Stripe 82 were used. From all fields, image data stacked from only 22 individual exposures and, paired with each stacked image, the single-pass data included in that stack were used. All fields were cut into 128 × 128 pixel patches, giving a total of 17,930 images; 14,234 pairs were used to train the cGAN and 3,696 pairs to verify the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was 7.67 × 10⁻⁴, compared with 1.24 × 10⁻³ for the original input data. We also applied the model to a few test galaxy images, and the generated images were qualitatively similar to stacked images compared with other de-noising methods. In addition, with photometry, the number count of stacked-cGAN matched sources is larger than that of single pass-stacked ones, especially for fainter objects, and magnitude completeness improved for fainter objects. With this work, it becomes possible to reliably observe objects one magnitude fainter.
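
To make the baseline and metric in this abstract concrete, here is a minimal sketch of naive image stacking and the pixel-wise RMS error used to compare generated and target images; the array shapes and random data are purely illustrative and not from the SDSS Stripe 82 pipeline.

```python
import numpy as np

def stack_images(exposures):
    """Naive image stacking: average multiple exposures of the same field
    to raise the signal-to-noise ratio (the baseline the cGAN tries to match).
    `exposures` has shape (n_images, H, W)."""
    return np.mean(exposures, axis=0)

def rms_error(predicted, target):
    """Pixel-wise RMS error, the metric quoted in the abstract."""
    return float(np.sqrt(np.mean((predicted - target) ** 2)))

# Illustrative usage with random data standing in for 128 x 128 patches.
rng = np.random.default_rng(0)
single_shot = rng.normal(size=(128, 128))
stacked = stack_images(rng.normal(size=(22, 128, 128)))
print(rms_error(single_shot, stacked))
```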


입력 변이에 따른 딥러닝 모델 취약점 연구 및 검증 (Analysis of Deep Learning Model Vulnerability According to Input Mutation)

  • 김재욱;박래현;권태경
    • Journal of the Korea Institute of Information Security and Cryptology / Vol. 31 No. 1 / pp.51-59 / 2021
  • Deep learning models can produce incorrect predictions for inputs that deviate from the training data through mutation, which can lead to fatal accidents in areas such as autonomous driving and security. To guarantee the reliability of a deep learning model, its ability to handle exceptional situations must be verified with a variety of mutations. However, existing studies have targeted only a limited set of models and have applied several types of input mutation without distinguishing between them. In this study, we verify the reliability of a total of six models, including various commercialized models and their additional versions, based on the CIFAR-10 dataset, which is widely used for deep learning validation. To this end, six types of input mutation algorithms that can occur in real life are individually applied to the dataset with various parameters, and the accuracy of each model under each mutation is compared in order to identify the vulnerabilities of the models associated with specific mutation types.
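
A minimal sketch of the kind of mutation-based reliability check described above: mutated copies of a test set are produced and per-mutation accuracy is compared against clean accuracy. The two mutation operators, their parameters, and the `model.predict` interface are stand-ins; the paper's six mutation algorithms are not specified here.

```python
import numpy as np

# Illustrative mutation operators; the paper's algorithms and parameter
# ranges are not reproduced here, so these are stand-ins.
def add_gaussian_noise(images, sigma=0.05):
    return np.clip(images + np.random.normal(0, sigma, images.shape), 0.0, 1.0)

def change_brightness(images, delta=0.2):
    return np.clip(images + delta, 0.0, 1.0)

MUTATIONS = {"gaussian_noise": add_gaussian_noise, "brightness": change_brightness}

def accuracy_under_mutation(model, images, labels):
    """Compare clean accuracy with accuracy on each mutated copy of the
    test set; `model.predict` returning class indices is assumed."""
    results = {"clean": float(np.mean(model.predict(images) == labels))}
    for name, mutate in MUTATIONS.items():
        results[name] = float(np.mean(model.predict(mutate(images)) == labels))
    return results
```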

Structural health monitoring response reconstruction based on UAGAN under structural condition variations with few-shot learning

  • Jun, Li;Zhengyan, He;Gao, Fan
    • Smart Structures and Systems / Vol. 30 No. 6 / pp.687-701 / 2022
  • Inevitable response loss under complex operational conditions significantly affects the integrity and quality of measured data, rendering structural health monitoring (SHM) ineffective. To remedy the impact of data loss, a common approach is to transfer the recorded response of an available measurement point to the location where the data loss occurred by establishing a response mapping from the measured data. However, current research has not yet addressed subsequent structural condition changes or learning the response mapping from a small sample. Therefore, this paper proposes a novel data-driven structural response reconstruction method based on a carefully designed generative adversarial network (UAGAN). Advanced deep learning techniques, including U-shaped dense blocks, self-attention, and a customized loss function, are specialized and embedded in UAGAN to improve the extraction of universal and representative features and the establishment of a generalized response mapping. In numerical validation, UAGAN efficiently and accurately captures the distinguishing features of the structural response from only 40 training samples of the intact structure. Moreover, the established response mapping is universal: it effectively reconstructs responses of the structure subjected to up to 10% random stiffness reduction or structural damage. In the experimental validation, UAGAN is trained with ambient responses and applied to reconstruct responses measured under earthquake excitation. The reconstruction losses of the response in the time and frequency domains were 16% and 17%, respectively, which is better than previous research and demonstrates the leading performance of the carefully designed network. In addition, the modal parameters identified from the reconstructed responses and the corresponding true responses are highly consistent, indicating that the proposed UAGAN has great potential for application in practical civil engineering.
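
The time- and frequency-domain reconstruction losses quoted above can be illustrated with a simple relative-error formulation; this is an assumed definition for illustration only, since the paper's exact loss and metrics are not reproduced here.

```python
import numpy as np

def reconstruction_errors(reconstructed, true):
    """Relative reconstruction errors in the time and frequency domains,
    in the spirit of the figures quoted in the abstract (assumed formulation).
    Both arrays hold one response time series each."""
    time_err = np.linalg.norm(reconstructed - true) / np.linalg.norm(true)

    # Compare magnitude spectra for the frequency-domain error.
    rec_spec = np.abs(np.fft.rfft(reconstructed))
    true_spec = np.abs(np.fft.rfft(true))
    freq_err = np.linalg.norm(rec_spec - true_spec) / np.linalg.norm(true_spec)

    return float(time_err), float(freq_err)
```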

Challenges of diet planning for children using artificial intelligence

  • Changhun, Lee;Soohyeok, Kim;Jayun, Kim;Chiehyeon, Lim;Minyoung, Jung
    • Nutrition Research and Practice / Vol. 16 No. 6 / pp.801-812 / 2022
  • BACKGROUND/OBJECTIVES: Diet planning in childcare centers is difficult because of the required knowledge of nutrition and development as well as the high design complexity associated with large numbers of food items. Artificial intelligence (AI) is expected to provide diet-planning solutions via automatic and effective application of professional knowledge, addressing the complexity of optimal diet design. This study evaluates the utility of AI-generated diets for children and provides related implications. MATERIALS/METHODS: We developed 2 AI solutions for children aged 3-5 yrs using a generative adversarial network (GAN) model and a reinforcement learning (RL) framework. After training these solutions to produce daily diet plans, experts evaluated the human- and AI-generated diets in 2 steps. RESULTS: In the evaluation of nutritional adequacy, where experts were provided only with nutrient information and no food names, the proportion of strong positive responses to RL-generated diets was higher than that for the human- and GAN-generated diets (P < 0.001). In contrast, in terms of diet composition, the experts' responses to human-designed diets were more positive when they were provided with food name information (i.e., composition information). CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the development and evaluation of AI to support dietary planning for children. It demonstrates the possibility of developing AI-assisted diet-planning methods for children and highlights the importance of composition compliance in diet planning. Further integrative cooperation in the fields of nutrition, engineering, and medicine is needed to improve the suitability of our proposed AI solutions and benefit children's well-being by providing diet plans of high quality in terms of both compositional and nutritional criteria.

이미지 기반 축산물 불량 탐지에서의 희소 클래스 처리 전략 (Sparse Class Processing Strategy in Image-based Livestock Defect Detection)

  • 이범호;조예성;이문용
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 26 No. 11 / pp.1720-1728 / 2022
  • Advances in artificial intelligence have opened the era of Industry 4.0, and in the livestock industry the implementation of smart farms incorporating ICT technology is receiving great attention. Among these efforts, quality control of livestock products and processed livestock goods using computer-vision-based artificial intelligence is a core technology of smart livestock farming. However, the shortage of livestock image data for training artificial intelligence models and the data imbalance across certain classes are major obstacles to related research and technology development. To address these problems, this study proposes the use of oversampling and adversarial example generation techniques. The proposed methods are grounded in the perspective of successful defect detection and are necessary for making effective use of scarce data labels. Finally, experiments confirm the validity of the proposed methods and examine strategies for their application.
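
A minimal sketch of the two strategies mentioned above, random oversampling of the sparse class and adversarial example generation. FGSM is used here only as one common adversarial-example generator, and the gradient callback is an assumed interface; the paper does not specify its exact technique.

```python
import numpy as np

def oversample_minority(images, labels, minority_class):
    """Random oversampling: duplicate minority-class samples until the class
    counts are balanced (a simple stand-in for the paper's strategy).
    `labels` are non-negative integer class indices."""
    minority_idx = np.where(labels == minority_class)[0]
    majority_count = np.max(np.bincount(labels))
    extra = np.random.choice(minority_idx,
                             max(0, majority_count - len(minority_idx)))
    return (np.concatenate([images, images[extra]]),
            np.concatenate([labels, labels[extra]]))

def fgsm_examples(images, grad_fn, epsilon=0.01):
    """Adversarial examples via the fast gradient sign method; `grad_fn`
    must return the loss gradient w.r.t. the input (assumed interface)."""
    return np.clip(images + epsilon * np.sign(grad_fn(images)), 0.0, 1.0)
```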

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society / Vol. 46 No. 1 / pp.41.2-41.2 / 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: a single input SDO/AIA 19.3 nm image for Model I, a single input 30.4 nm image for Model II, and double input (19.3 and 30.4 nm) images for Model III. We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining data for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). For Model III with 4 × 4 binning, the CC and RMSE between real and AI-generated images are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at a higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
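
A minimal sketch of the evaluation described above: 4 × 4 block binning followed by the correlation coefficient and RMSE between an AI-generated image and the real target. The function names and the exact binning convention are assumptions for illustration.

```python
import numpy as np

def bin_image(image, factor=4):
    """Block-average an image by `factor` in each direction (4 x 4 binning
    as quoted for Model III); dimensions must be divisible by `factor`."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cc_and_rmse(generated, target, factor=4):
    """Pixel-to-pixel correlation coefficient and RMSE between the binned
    AI-generated image and the binned real target image."""
    g = bin_image(generated, factor).ravel()
    t = bin_image(target, factor).ravel()
    cc = float(np.corrcoef(g, t)[0, 1])
    rmse = float(np.sqrt(np.mean((g - t) ** 2)))
    return cc, rmse
```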


딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성 (True Orthoimage Generation from LiDAR Intensity Using Deep Learning)

  • 신영하;형성웅;이동천
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 38 No. 4 / pp.363-373 / 2020
  • Many studies have been conducted on orthoimage generation. Conventional methods require the exterior orientation parameters of aerial images and precise 3D object modeling data in order to detect and restore occluded areas when producing orthoimages, and automating this series of complex processes is difficult. This paper departs from the conventional approach and proposes a new method for generating true orthoimages using deep learning (DL). Deep learning is being adopted ever more rapidly in many fields, and the generative adversarial network (GAN) has recently attracted much attention in image processing and computer vision. In a GAN, the generator network is trained to produce results similar to real images, and the discriminator network iterates until the generator's output is judged to be a real image. In this paper, two methods are proposed for generating true orthoimages by training the GAN-based Pix2Pix model on LiDAR intensity data and infrared orthoimages from the dataset built by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) and provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). In the first method, LiDAR intensity images are used as input and high-resolution orthoimages as target images for training. In the second method, the input is likewise the LiDAR intensity image, but the target is a low-resolution image produced by assigning color to the LiDAR point cloud, and training is performed recursively to progressively improve image quality. When the orthoimages generated by the two methods were compared quantitatively using the Fréchet Inception Distance (FID), there was no large difference, but better results were obtained as the quality of the input and target images became more similar and as the number of training epochs was increased. This paper is an early-stage experimental study to confirm the feasibility of generating true orthoimages with deep learning, and it identifies issues to be supplemented and improved in future work.
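
Since the two training methods are compared with the Fréchet Inception Distance, a standard FID computation over pre-extracted Inception feature vectors is sketched below; the feature extraction step and the array shapes are assumptions and not part of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(features_real, features_generated):
    """FID between two sets of Inception feature vectors (shape: n x d).
    Extracting the features with an Inception network is assumed to have
    been done elsewhere; only the distance itself is computed here."""
    mu_r, mu_g = features_real.mean(axis=0), features_generated.mean(axis=0)
    cov_r = np.cov(features_real, rowvar=False)
    cov_g = np.cov(features_generated, rowvar=False)

    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real

    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2.0 * covmean))
```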

기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합 (Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data)

  • 하지훈;박건우;임효혁;조동희;김용혁
    • Journal of the Korea Convergence Society / Vol. 12 No. 10 / pp.63-70 / 2021
  • Super-resolving meteorological data with a high-resolution deep neural network can enable more precise research and provide services useful in daily life. This paper is the first to propose an improved technique for producing the training data used to train such high-resolution deep neural networks. To generate high-resolution meteorological data with meteorological expertise, the Lambert conformal conic projection and objective analysis were applied based on observations from specialized institutions and ERA5 reanalysis fields. As a result, the temperature and humidity analysis fields based on meteorological expertise improved the RMSE by up to 42% and 46%, respectively, compared with the existing background fields. Next, to automate this manual, expertise-driven data generation procedure, SRGAN, one of the artificial intelligence techniques, was used, and an experiment was conducted to generate high-resolution data at 1 km resolution from global model data at 10 km resolution. Finally, the results generated with SRGAN had higher resolution than the global model input data and showed analysis patterns similar to the manually produced high-resolution analysis fields while exhibiting smoother boundaries.
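
The 42%/46% RMSE improvements quoted above can be illustrated with a simple calculation of the relative RMSE reduction of the analysis field over the background field; the evaluation setup (gridded fields verified against observations) is an assumption for illustration only.

```python
import numpy as np

def rmse(prediction, truth):
    """Root mean squared error between a gridded field and reference values."""
    return float(np.sqrt(np.mean((prediction - truth) ** 2)))

def rmse_improvement(background, analysis, observations):
    """Percentage improvement of the expertise-based analysis field over the
    background field, in the style of the 42% / 46% figures quoted above."""
    rmse_bg = rmse(background, observations)
    rmse_an = rmse(analysis, observations)
    return 100.0 * (rmse_bg - rmse_an) / rmse_bg
```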