• Title/Summary/Keyword: Pix2pix

59 search results

Calculating coniferous tree coverage using unmanned aerial vehicle photogrammetry

  • Ivosevic, Bojana;Han, Yong-Gu;Kwon, Ohseok
    • Journal of Ecology and Environment
    • /
    • v.41 no.3
    • /
    • pp.85-92
    • /
    • 2017
  • Unmanned aerial vehicles (UAVs) are a new and constantly developing tool in forest inventory and vegetation-monitoring studies. Because they can cover large areas, their use has saved researchers and conservationists time and money when surveying vegetation for data analysis. Post-processing imaging software has further improved the effectiveness of UAVs by providing 3D models for accurate visualization of the data. We focus on determining coniferous tree coverage to show the current advantages and disadvantages of the orthorectified 2D and 3D models obtained from the photogrammetry software Pix4Dmapper Pro-Non-Commercial. We also examine the methodology used for mapping the study site and investigate the spread of coniferous trees. The collected images were transformed into binary black-and-white pixel images, and the coverage area of coniferous trees in the study site was calculated using MATLAB. We conclude that the 3D model was effective for perceiving the tree composition of the designated site, while the orthorectified 2D map is appropriate for clearly differentiating coniferous from deciduous trees. The paper also discusses how UAVs could be improved for future use.
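
The pixel-counting step described in this abstract can be sketched as follows. This is a minimal Python illustration rather than the paper's MATLAB code; the mask values and ground sample distance are hypothetical:

```python
import numpy as np

def coverage_fraction(binary_mask: np.ndarray) -> float:
    """Fraction of pixels classified as coniferous canopy (value 1)
    in a binary (0/1) classification mask."""
    return float(binary_mask.sum()) / binary_mask.size

def coverage_area_m2(binary_mask: np.ndarray, gsd_m: float) -> float:
    """Convert the canopy pixel count to ground area using the
    orthomosaic's ground sample distance (metres per pixel edge)."""
    pixel_area_m2 = gsd_m ** 2
    return float(binary_mask.sum()) * pixel_area_m2

# Toy 4x4 mask where 6 of 16 pixels are canopy.
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0]])
print(coverage_fraction(mask))       # 0.375
print(coverage_area_m2(mask, 0.05))  # 6 pixels * 0.0025 m^2 = 0.015 m^2
```

In practice the mask would come from thresholding or classifying the orthomosaic, and the ground sample distance from the photogrammetry report.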

Toward accurate synchronic magnetic field maps using solar frontside and AI-generated farside data

  • Jeong, Hyun-Jin;Moon, Yong-Jae;Park, Eunsu
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.41.3-42
    • /
    • 2021
  • Conventional global magnetic field maps, such as daily updated synoptic maps, have been constructed by merging a series of observations taken from the Earth's viewing direction over a 27-day solar rotation period to represent the full surface of the Sun. Such maps have limited ability to capture real-time farside magnetic fields, especially rapid changes caused by flux emergence or disappearance. Here, we construct accurate synchronic magnetic field maps using frontside and AI-generated farside data. To generate the farside data, we train and evaluate our deep learning model with frontside SDO observations. We use an improved version of Pix2PixHD with a new objective function and a new configuration of the model input data. We compute correlation coefficients between real and AI-generated magnetograms for the test data sets and demonstrate that our model generates magnetic field distributions better than before. We compare the AI-generated farside data with those predicted by the magnetic flux transport model. Finally, we assimilate our AI-generated farside magnetograms into the flux transport model and present several successive global magnetic field maps produced by our new methodology.


Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol

  • Chena, Lee;Eun-Gyu, Ha;Yoon Joo, Choi;Kug Jin, Jeon;Sang-Sun, Han
    • Imaging Science in Dentistry
    • /
    • v.52 no.4
    • /
    • pp.393-398
    • /
    • 2022
  • Purpose: This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods: MRI scans for the TMJ acquired from January to November 2019 were reviewed, and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's κ coefficient. Results: The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ=0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion: The application of pT2 images to a TMJ MRI protocol was useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
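
The PSNR and Cohen's κ metrics used in this evaluation can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the toy inputs are hypothetical:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a true image and a
    synthesized one (higher is better)."""
    mse = np.mean((reference.astype(float) - generated.astype(float)) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

def cohens_kappa(labels_a, labels_b) -> float:
    """Chance-corrected agreement between two sets of categorical
    diagnoses (e.g. disc position from true vs. predicted T2 images)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    categories = np.union1d(a, b)
    observed = np.mean(a == b)                                   # raw agreement
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed - expected) / (1.0 - expected)

# Perfect agreement yields kappa = 1; agreement at chance level yields 0.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```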

Comparison of Virtual 3D Tree Modelling Using Photogrammetry Software and Laser Scanning Technology (레이저스캐닝과 포토그래메트리 소프트웨어 기술을 이용한 조경 수목 3D모델링 재현 특성 비교)

  • Park, Jae-Min
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.304-310
    • /
    • 2020
  • 3D modelling technology has advanced not only for maps, heritage sites, and constructions but also for tree modelling. Using laser scanning (Faro S350) and photogrammetry software (Pix4D) for 3D modelling, this study compared a real coniferous tree with the results of both technologies in terms of shape, texture, and dimensions. Both technologies showed high reproducibility. The scanning technique gave very good results in reproducing bark and leaves. Comparing the detailed dimensions, the error between the actual tree and the scanned model was 1.7~2.2%, with the scanning result larger than the actual tree; the error for photogrammetry was only 0.2~0.5%, also larger than the actual tree. On the other hand, modelling of dark areas was not fully processed. This study is meaningful as basic research that can support a tree database for BIM in landscape architecture, landscape design and analysis with AR technology, and the documentation of historical trees and heritage.

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.41.2-41.2
    • /
    • 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: Model I with a single SDO/AIA 19.3 nm input image, Model II with a single 30.4 nm input image, and Model III with double inputs (19.3 and 30.4 nm). We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining months for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). For Model III with 4×4 binning, the CC and RMSE between real and AI-generated images are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at a higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
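
The evaluation step (binning, correlation coefficient, RMSE) can be sketched as follows. This is a minimal NumPy illustration that assumes simple average-pooling for the 4×4 binning; the authors' exact pipeline is not given in the abstract:

```python
import numpy as np

def bin_image(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average-pool an image by `factor` along both axes (e.g. 4x4 binning),
    trimming any remainder rows/columns first."""
    h, w = img.shape
    trimmed = img[:h - h % factor, :w - w % factor]
    return trimmed.reshape(trimmed.shape[0] // factor, factor,
                           trimmed.shape[1] // factor, factor).mean(axis=(1, 3))

def cc_and_rmse(real: np.ndarray, generated: np.ndarray):
    """Pearson correlation coefficient and root mean squared error
    between a real image and an AI-generated one."""
    cc = np.corrcoef(real.ravel(), generated.ravel())[0, 1]
    rmse = np.sqrt(np.mean((real - generated) ** 2))
    return cc, rmse
```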


Effects of Boshimgeonbi-tang on Gene Expression in Hypothalamus of Immobilization-stressed Mouse (보심건비탕(補心健脾湯) 투여가 Stress 유발 Mouse의 Hypothalamus 유전자 발현에 미치는 영향)

  • Lee Seoung-Hee;Chang Gyu-Tae;Kim Jang-Hyun
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.19 no.6
    • /
    • pp.1585-1593
    • /
    • 2005
  • The genetic effects of restraint stress on the HPA axis and the therapeutic effect of Boshimgeonbi-tang on that stress were studied with cDNA microarray analyses and RT-PCR on the hypothalamus, using immobilization-stressed mice as an animal model. Male CD-1 mice were restrained in a tightly fitted and ventilated vinyl holder for 2 hrs once a day, and this challenge was repeated for seven consecutive days. Body weight changes showed that Boshimgeonbi-tang promoted recovery from the weight loss caused by immobilization stress. Seven days later, total RNA was extracted from the organs of the mice, labeled with CyDye™ fluorescence dyes, and hybridized to a cDNA microarray chip. Scanning and analysis of the array slides were carried out using a GenePix 4000 series scanner and the GenePix Pro™ analysis program, respectively. The expression profiles of 109 of the 6000 genes on the chip were significantly modulated in the hypothalamus by the immobilization stress. Energy metabolism-, lipid metabolism-, apoptosis-, stress protein-, transcription factor-, and signal transduction-related genes were transcriptionally activated, whereas DNA repair-, protein biosynthesis-, and structural integrity-related genes were down-regulated in the hypothalamus. Of these, 58 genes were up-regulated by mRNA expression folds of 1.5 to 7.9, and 51 genes were down-regulated by 1.5- to 5.5-fold. Eleven of these genes were selected to confirm the expression profiles by RT-PCR. The mRNA expression levels of Tnfrsf1a (apoptosis), Calm2 (cell cycle), Bag3 (apoptosis), Ogg1 (DNA repair), Aatk (apoptosis), Dffa (apoptosis), and Fkbp5 (protein folding) were restored to normal by treatment with Boshimgeonbi-tang.
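
The fold-change screening described above (genes counted as regulated at or beyond 1.5-fold) can be sketched as follows. The gene identifiers and fold values in the example are hypothetical:

```python
import numpy as np

def split_regulated(gene_ids, fold_changes, threshold: float = 1.5):
    """Partition genes into up- and down-regulated sets by expression
    fold change: fold > 1 means higher expression than control, so
    up-regulation is fold >= threshold and down-regulation is
    fold <= 1/threshold."""
    fc = np.asarray(fold_changes, dtype=float)
    up = [g for g, f in zip(gene_ids, fc) if f >= threshold]
    down = [g for g, f in zip(gene_ids, fc) if f <= 1.0 / threshold]
    return up, down

# 'geneA' is 2.0-fold up, 'geneC' is 0.5-fold (i.e. 2-fold down).
up, down = split_regulated(['geneA', 'geneB', 'geneC', 'geneD'],
                           [2.0, 1.2, 0.5, 1.0])
print(up, down)  # ['geneA'] ['geneC']
```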

A Study on Green Algae Monitoring in Watershed Using Fixed Wing UAV (고정익 무인비행기를 이용한 수계 내 녹조 모니터링 연구)

  • Park, Jung-Il;Choi, Seung-Young;Park, Min-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.27 no.2
    • /
    • pp.164-169
    • /
    • 2017
  • The primary purpose of this study is to establish an NDVI analysis methodology for a green algae monitoring system. A fixed-wing UAV integrated with a multi-spectral sensor was used to capture images along the watershed of the Gumgang River. The study area was near the Baekje water reservoir, and the images were captured in July 2016. Pix4D Mapper Pro was used to process the captured images. By comparing actual chlorophyll measurement values with the NDVI output image, an empirical formula was derived and geo-locational conversion was carried out. As a result, a chlorophyll image set calibrated to the actual measurement values could be extracted. Gathering chlorophyll information using a UAV is very beneficial for the efficient management of green algae and for its monitoring and prevention in terms of disaster management.
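
The NDVI computation underlying this workflow can be sketched as follows. Only the standard NDVI formula is shown; the paper's empirical chlorophyll formula is not given in the abstract:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance bands: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense vegetation (or algae-rich water);
    values near 0 or below indicate bare soil or open water."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids division by zero

# Hypothetical reflectance values for a single pixel.
print(ndvi(np.array([0.6]), np.array([0.2])))  # ~0.5
```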

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; furthermore, it is a challenging task to automate this complicated process. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges whether images are real or fake, until the results are satisfactory. This mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity data from LiDAR provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages; (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, if the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study of the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
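
The FID measure mentioned here reduces, once image features have been extracted (conventionally with an Inception-v3 network) and fitted with Gaussians, to a closed-form Fréchet distance. A minimal NumPy sketch of that distance, assuming the feature means and covariances have already been computed:

```python
import numpy as np

def _sqrtm_psd(m: np.ndarray) -> np.ndarray:
    """Matrix square root of a symmetric positive-semidefinite matrix
    via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def frechet_distance(mu1, cov1, mu2, cov2) -> float:
    """Fréchet distance between two Gaussians (the quantity behind FID):
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1^{1/2} C2 C1^{1/2})^{1/2})."""
    a = _sqrtm_psd(cov1)
    covmean = _sqrtm_psd(a @ cov2 @ a)  # symmetric form of sqrtm(C1 C2)
    diff = np.asarray(mu1) - np.asarray(mu2)
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Identical distributions give 0; unit-variance Gaussians shifted by 1 give 1.
mu, cov = np.zeros(2), np.eye(2)
print(frechet_distance(mu, cov, np.array([1.0, 0.0]), cov))  # ~1.0
```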

Simulation and Colorization between Gray-scale Images and Satellite SAR Images Using GAN (GAN을 이용한 흑백영상과 위성 SAR 영상간의 모의 및 컬러화)

  • Jo, Su Min;Heo, Jun Hyuk;Eo, Yang Dam
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.125-132
    • /
    • 2024
  • Optical satellite images are used for national security and information gathering, and their utilization is increasing. However, weather conditions and time constraints often yield low-quality images that do not meet user requirements. In this paper, a deep learning-based image conversion and colorization model referring to high-resolution SAR images was created to simulate the cloud-occluded areas of optical satellite images. The model was tested according to the type of algorithm applied and the input data, and the simulated images were compared and analyzed. In particular, the amount of pixel-value information in the input black-and-white image and the SAR image was constructed to be similar, to overcome the problem caused by the relative lack of color information. As a result of the experiments, the histogram distribution of the simulated image learned from the gray-scale image and the high-resolution SAR image was relatively similar to that of the original image. For quantitative analysis, the RMSE was about 6.9827 and the PSNR about 31.3960.
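
The histogram comparison mentioned above can be sketched as a histogram-intersection score, a common similarity measure for grey-level distributions; the paper's exact comparison method is not given in the abstract:

```python
import numpy as np

def histogram_intersection(img_a: np.ndarray, img_b: np.ndarray,
                           bins: int = 256) -> float:
    """Overlap of two images' normalized grey-level histograms.
    Returns 1.0 for identical distributions and 0.0 for disjoint ones."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return float(np.minimum(h_a, h_b).sum())

# An image compared with itself has full histogram overlap.
img = np.zeros((4, 4))
print(histogram_intersection(img, img))  # 1.0
```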

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.456-477
    • /
    • 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet. Adversarial samples have therefore become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate the performance of existing detectors in the face of deceptive samples, and help to discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to confuse detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into gray-scale images and introduces the pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the gray-scale map, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% on Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
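
The escape rate used as the evaluation index can be sketched as follows, under the assumption that it is the fraction of adversarial malware samples a detector fails to flag as malicious; the detector and samples below are hypothetical:

```python
import numpy as np

def escape_rate(detector, adversarial_samples, malicious_label: int = 1) -> float:
    """Fraction of adversarial malware samples that the detector fails
    to classify as malicious (higher means a more evasive attack)."""
    predictions = np.asarray([detector(s) for s in adversarial_samples])
    return float(np.mean(predictions != malicious_label))

# Toy detector: flags a sample as malicious (1) if its score exceeds 0.5.
toy_detector = lambda score: 1 if score > 0.5 else 0
print(escape_rate(toy_detector, [0.9, 0.1, 0.2, 0.6]))  # 0.5
```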