• Title/Summary/Keyword: Validation technique


New Methods for Correcting the Atmospheric Effects in Landsat Imagery over Turbid (Case-2) Waters

  • Ahn Yu-Hwan;Shanmugam P.
    • Korean Journal of Remote Sensing / v.20 no.5 / pp.289-305 / 2004
  • Atmospheric correction of Landsat Visible and Near Infrared (VIS/NIR) imagery over aquatic environments is more demanding than over land because the signal from the water column is small yet carries immense information about biogeochemical variables in the ocean. This paper introduces two methods, a modified dark-pixel subtraction technique (path-extraction) and our spectral shape matching method (SSMM), for correcting the atmospheric effects in Landsat VIS/NIR imagery in relation to the retrieval of meaningful ocean color information, especially from Case-2 waters (Morel and Prieur, 1977) around the Korean peninsula. The results of these methods are compared with classical atmospheric correction approaches based on the 6S radiative transfer model and the standard SeaWiFS atmospheric algorithm. The atmospheric correction scheme using the 6S radiative transfer code assumes a standard atmosphere with constant aerosol loading and a uniform, Lambertian surface, while path-extraction assumes that the total radiance (L_TOA) of a black-ocean pixel (as referred to by Antoine and Morel, 1999) in a given image is the path signal, which remains constant over at least the sub-scene of the Landsat VIS/NIR imagery. The assumption of SSMM is nearly similar, but it extracts the path signal from L_TOA by matching up in-situ water-leaving radiance data for typical clear and turbid waters and extrapolates it as the spatially homogeneous contribution of the scattered signal produced by the complex interaction of light with atmospheric aerosols and Rayleigh scatterers and by the direct reflection of light on the sea surface. The overall shape and magnitude of the radiance or reflectance spectra of Landsat VIS/NIR imagery atmospherically corrected by SSMM show good agreement with the in-situ spectra collected for clear and turbid waters, whereas path-extraction, although it often reproduces the in-situ spectra over turbid waters, yields significant errors for clear waters due to the invalid assumption of zero water-leaving radiance for the black-ocean pixels. Because of the standard atmosphere with constant aerosols and the models adopted in the 6S radiative transfer code, a large error is possible between the retrieved and in-situ spectra. The efficiency of spectral shape matching has also been explored using SeaWiFS imagery for turbid waters and compared with that of the standard SeaWiFS atmospheric correction algorithm, which fails in highly turbid waters because it assumes that water-leaving radiance in the two NIR bands is negligible in order to enable retrieval of aerosol reflectance in the correction of ocean color imagery. Validation suggests that accurate retrieval of water-leaving radiance is not feasible under the invalid assumption of the classical algorithms, but is feasible with SSMM.
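
The path-extraction idea described above can be illustrated with a short sketch (not the authors' implementation): a "black ocean" path signal is estimated per band from the darkest pixels and subtracted as a spatially constant term. The array layout, the percentile used to define the dark pixels, and the clipping step are assumptions made for the example.

```python
import numpy as np

def path_extraction_correction(toa_radiance, dark_mask):
    """Minimal sketch of a dark-pixel (path-extraction) style correction.

    toa_radiance : array of shape (bands, rows, cols), top-of-atmosphere radiance
    dark_mask    : boolean array (rows, cols) marking "black ocean" pixels,
                   i.e. pixels assumed to have negligible water-leaving radiance
    Returns a water-leaving radiance estimate per band, assuming the path
    signal is spatially constant over the (sub)scene.
    """
    # Path signal per band: the radiance observed over the darkest pixels is
    # attributed entirely to atmospheric scattering and surface reflection.
    path_signal = np.array([np.percentile(band[dark_mask], 1) for band in toa_radiance])

    # Subtract the constant path signal from every pixel, band by band.
    water_leaving = toa_radiance - path_signal[:, None, None]
    return np.clip(water_leaving, 0.0, None)  # negative radiances are not physical
```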

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting (설명 가능한 정기예금 가입 여부 예측을 위한 앙상블 학습 기반 분류 모델들의 비교 분석)

  • Shin, Zian;Moon, Jihoon;Rho, Seungmin
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.97-117 / 2021
  • Predicting term deposit subscriptions is one of the representative financial marketing tasks in banks, and banks can build a prediction model using various kinds of customer information. Many studies based on machine learning techniques have been conducted to improve the classification accuracy for term deposit subscriptions. However, even if these models achieve satisfactory performance, they are difficult to use in industry when their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. We first construct several classification models using decision tree-based ensemble learning methods, which perform well on tabular data, such as random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide a rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, experiments were conducted with the bank marketing dataset provided by Kaggle; we applied SHAP to the GBM and LightGBM models, respectively, according to different dataset configurations and then performed analysis and visualization for explainable term deposit subscription forecasting.
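
As a rough illustration of the workflow described above (not the paper's code), the sketch below combines a LightGBM classifier, 10-fold cross-validation, and SHAP's TreeExplainer on a generic tabular dataset; the file name, column names, and hyperparameters are placeholders rather than the Kaggle bank-marketing schema used in the study.

```python
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score
import shap

# Placeholder path and column names; the real bank-marketing schema may differ.
df = pd.read_csv("bank_marketing.csv")
X = pd.get_dummies(df.drop(columns=["subscribed"]))  # one-hot encode categoricals
y = df["subscribed"].map({"no": 0, "yes": 1})

model = LGBMClassifier(n_estimators=300, learning_rate=0.05)

# 10-fold cross-validation, as in the paper's evaluation protocol.
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on the full data and explain predictions with SHAP (TreeExplainer).
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global feature-importance view
```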

Estimation of Benthic Microalgae Chlorophyll-a Concentration in Mudflat Surfaces of Geunso Bay Using Ground-based Hyperspectral Data (지상 초분광자료를 이용한 근소만 갯벌표층에서 저서성 미세조류의 엽록소-a 공간분포 추정)

  • Koh, Sooyoon;Noh, Jaehoon;Baek, Seungil;Lee, Howon;Won, Jongseok;Kim, Wonkook
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1111-1124 / 2021
  • Mudflats are crucial for understanding the ecological structure and biological function of coastal ecosystems because of their high primary production by microalgae. There have been many studies on measuring the primary productivity of tidal flats to estimate organic carbon abundance, but the use of optical remote sensing techniques, particularly hyperspectral sensing, for this purpose is relatively recent. This study investigates hyperspectral sensing of the chlorophyll concentration on a tidal flat surface, which is a key variable in deriving primary productivity. The study site is a mudflat in Geunso Bay, South Korea, and field campaigns were conducted at ebb tide in April and June 2021. Hyperspectral reflectance of the mudflat surfaces was measured with two types of hyperspectral sensors, TriOS RAMSES (directional sensor) and Specim-IQ (camera sensor), and the Normalized Difference Vegetation Index (NDVI) and Continuum Removal Depth (CRD) were used to estimate Chl-a from the optical measurements. Validation against independent field measurements of Chl-a showed that both CRD and NDVI can retrieve surface Chl-a with R2 around 0.7 for the Chl-a range of 0-150 mg/m2 tested in this study.
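
A minimal sketch of how an NDVI-type index could be computed from hyperspectral reflectance and mapped to Chl-a through an empirical fit (not the paper's method or coefficients); the band centers and regression coefficients below are illustrative assumptions.

```python
import numpy as np

def band_value(reflectance, wavelengths, target_nm):
    """Reflectance at the wavelength closest to target_nm.
    Assumes the last axis of `reflectance` is the spectral (wavelength) axis."""
    return reflectance[..., np.argmin(np.abs(wavelengths - target_nm))]

def ndvi(reflectance, wavelengths, red_nm=670.0, nir_nm=800.0):
    """Normalized Difference Vegetation Index from hyperspectral reflectance.
    Band centers are illustrative choices, not the paper's exact settings."""
    red = band_value(reflectance, wavelengths, red_nm)
    nir = band_value(reflectance, wavelengths, nir_nm)
    return (nir - red) / (nir + red)

def chla_from_ndvi(ndvi_value, slope=250.0, intercept=5.0):
    """Empirical Chl-a estimate (mg/m^2) from NDVI; the coefficients are
    placeholders standing in for a fit against field-measured Chl-a."""
    return slope * ndvi_value + intercept
```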

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1149-1161 / 2021
  • With the growing use of computer vision for satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we conducted U-Net cloud detection modeling using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) cloud dataset with image data augmentation and carried out 10-fold cross-validation for an objective assessment of the model. In a blind test on 1,800 datasets of 512 by 512 pixels, the model showed relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for the CAS500 (Compact Advanced Satellite 500) in South Korea.
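
The reported scores can be computed from predicted and reference cloud masks with a short metric routine such as the sketch below (not the study's code); it assumes binary masks of identical shape.

```python
import numpy as np

def segmentation_metrics(pred_mask, true_mask):
    """Accuracy, precision, recall, F1 and IoU for binary cloud masks
    (arrays of 0/1 with identical shape)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    tn = np.logical_and(~pred, ~true).sum()

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "iou": iou}
```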

Receptor Binding Affinities of Synthetic Cannabinoids Determined by Non-Isotopic Receptor Binding Assay

  • Cha, Hye Jin;Song, Yun Jeong;Lee, Da Eun;Kim, Young-Hoon;Shin, Jisoon;Jang, Choon-Gon;Suh, Soo Kyung;Kim, Sung Jin;Yun, Jaesuk
    • Toxicological Research / v.35 no.1 / pp.37-44 / 2019
  • A major predictor of the efficacy of natural or synthetic cannabinoids is their binding affinity to the cannabinoid type 1 receptor (CB1) in the central nervous system, as the main psychological effects of cannabinoids are achieved via binding to this receptor. Conventionally, receptor binding assays have been performed using isotopes, which are inconvenient owing to the effects of radioactivity. In the present study, the binding affinities of five cannabinoids for purified CB1 were measured using a surface plasmon resonance (SPR) technique as a putative non-isotopic receptor binding assay, and the results were compared with those of a radio-isotope-labeled receptor binding assay. The representative natural cannabinoid Δ9-tetrahydrocannabinol and four synthetic cannabinoids, JWH-015, JWH-210, RCS-4, and JWH-250, were assessed using both the SPR biosensor assay and the conventional isotopic receptor binding assay. The binding affinities of the test substances to CB1 determined by the non-isotopic method were (from highest to lowest) 9.52 × 10^-3 M (JWH-210), 6.54 × 10^-12 M (JWH-250), 1.56 × 10^-11 M (Δ9-tetrahydrocannabinol), 2.75 × 10^-11 M (RCS-4), and 6.80 × 10^-11 M (JWH-015). The same order of affinities was observed using the conventional isotopic receptor binding assay. In conclusion, our results support the use of kinetic analysis via SPR in place of the isotopic receptor binding assay. To replace the receptor binding affinity assay with SPR techniques in routine assays, further studies on method validation will be needed.
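
In SPR kinetic analysis, the equilibrium dissociation constant follows from the fitted association and dissociation rate constants (K_D = k_off / k_on); the toy sketch below only illustrates that calculation with placeholder rate constants, not the study's measured values.

```python
# K_D = k_off / k_on: the equilibrium dissociation constant is the ratio of the
# dissociation and association rate constants fitted from the SPR sensorgram.
# Smaller K_D means tighter binding. Values below are placeholders.
ligands = {
    "ligand_A": {"k_on": 1.0e6, "k_off": 1.0e-4},   # units: 1/(M*s), 1/s
    "ligand_B": {"k_on": 5.0e5, "k_off": 5.0e-3},
}

for name, k in ligands.items():
    kd = k["k_off"] / k["k_on"]  # equilibrium dissociation constant in M
    print(f"{name}: K_D = {kd:.2e} M")
```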

Three Dimensional Printing Technique and Its Application to Bone Tumor Surgery (3차원 프린팅 기술과 이를 활용한 골종양 수술)

  • Kang, Hyun Guy;Park, Jong Woong;Park, Dae Woo
    • Journal of the Korean Orthopaedic Association / v.53 no.6 / pp.466-477 / 2018
  • Orthopaedics is an area where three-dimensional (3D) printing technology is most likely to be utilized because it treats a range of diseases throughout the whole body. For arthritis, spinal diseases, trauma, deformities, and tumors, 3D printing can be used in the form of anatomical models, surgical guides, metal implants, bio-ceramic body reconstruction, and orthoses. In orthopaedic oncology in particular, patients present a wide variety of tumor locations, but the limited options for limb salvage surgery have resulted in many complications. Currently, personalized 3D-printed implants can be fabricated easily in a short time, and it is anticipated that bone tumors at various surgical sites will be reconstructed properly. Advancing 3D printing technology in the healthcare field requires close cooperation with many professionals in the design, printing, and validation processes. The government, which has determined that leading the use of 3D printing in the medical field can promote the development of 3D printing-related industries in other fields, is also actively supportive, with an emphasis on promotion rather than regulation. In this review, we share our experience of using 3D printing technology for bone tumor surgery, in the expectation that orthopaedic surgeons will lead 3D printing in the medical field.

Analysis and Verification of Ancient DNA (고대 DNA의 분석과 검증)

  • Jee, Sang-hyun;Seo, Min-seok
    • Korean Journal of Heritage: History & Science / v.40 / pp.387-411 / 2007
  • The analysis of ancient DNA (aDNA) has attracted increasing anthropological, archaeological, biological, and public interest. Although this approach is complicated by natural damage to, and exogenous contamination of, aDNA, archaeologists and biologists have used it to address issues such as human evolutionary history, migration and social organization, funeral customs and disease, and even the evolutionary phylogeny of extinct animals. Polymerase chain reaction (PCR) is a powerful technique that can analyze DNA sequences from a small extract of an ancient specimen. However, deamination and fragmentation are common forms of molecular damage in aDNA and cause enzymatic inhibition during PCR amplification. Moreover, deamination of a cytosine residue yields a uracil residue in the ancient template, which results in the misincorporation of an adenine residue during PCR. This promotes consistent substitutions (cytosine → thymine, guanine → adenine) relative to the original nucleotide sequences. Contamination with exogenous DNA is another major problem in aDNA analysis and can lead to erroneous conclusions. This report shows that DNA modification and contamination are therefore the main issues in validating the results of aDNA analysis. We also introduce several criteria suggested by many researchers in this field to authenticate the reliability of aDNA analyses.
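
The deamination signature mentioned above (an excess of C→T and G→A substitutions relative to a reference) can be illustrated with a small sketch on toy sequences; it is not part of the original report.

```python
def deamination_signature(reference, ancient):
    """Count C->T and G->A mismatches between two aligned sequences of equal
    length; an excess of these substitutions is the typical deamination
    signature of ancient DNA."""
    counts = {"C>T": 0, "G>A": 0, "other": 0}
    for ref_base, anc_base in zip(reference.upper(), ancient.upper()):
        if ref_base == anc_base:
            continue
        if ref_base == "C" and anc_base == "T":
            counts["C>T"] += 1
        elif ref_base == "G" and anc_base == "A":
            counts["G>A"] += 1
        else:
            counts["other"] += 1
    return counts

# Toy example (not real data): two C>T changes and one G>A change.
print(deamination_signature("ACGTCCGGA", "ATGTTCGAA"))
```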

A TBM data-based ground prediction using deep neural network (심층 신경망을 이용한 TBM 데이터 기반의 굴착 지반 예측 연구)

  • Kim, Tae-Hwan;Kwak, No-Sang;Kim, Taek Kon;Jung, Sabum;Ko, Tae Young
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.1 / pp.13-24 / 2021
  • Tunnel boring machines (TBMs) are widely used for tunnel excavation in hard rock and soft ground. In TBM-based tunneling, one of the main challenges is to drive the machine optimally according to varying geological conditions, which could yield significant cost savings by reducing the total operation time. Generally, drilling investigations are conducted to survey the geology before TBM tunneling. However, it is difficult to provide operators with precise ground information over the whole tunnel path because such investigations acquire only sparse and irregular samples around the path. To overcome this issue, in this study we propose a geological type classification system using TBM operating data recorded at a 5 s sampling rate. We first categorized the various geological conditions (here limited to granite) into three geological types (rock, soil, and mixed). We then applied preprocessing methods including outlier rejection, normalization, and input feature extraction. We adopted a deep neural network (DNN) with six hidden layers to classify the geological types based on the TBM operating data and evaluated the classification system using 10-fold cross-validation. The average classification accuracy was 75.4% (on a total of 388,639 data samples). Although the accuracy still needs to be improved, our experimental results show that a geological classification technique based on TBM operating data could be utilized in real environments to complement sparse ground information.
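
A minimal sketch of the kind of pipeline described above, assuming scikit-learn, six hidden layers of illustrative widths, synthetic stand-in features, and 10-fold cross-validation; it is not the authors' architecture or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: TBM operating features (e.g. thrust, torque, advance rate) per 5 s record;
# y: geological type labels {0: rock, 1: soil, 2: mixed}. Synthetic stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 3, size=1000)

# A DNN with six hidden layers; the layer widths are illustrative assumptions.
model = make_pipeline(
    StandardScaler(),  # normalization step mentioned in the abstract
    MLPClassifier(hidden_layer_sizes=(64, 64, 32, 32, 16, 16),
                  activation="relu", max_iter=500, random_state=0),
)

# 10-fold cross-validation, matching the paper's evaluation protocol.
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```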

Numerical comparative investigation on blade tip vortex cavitation and cavitation noise of underwater propeller with compressible and incompressible flow solvers (압축성과 비압축성 유동해석에 따른 수중 추진기 날개 끝 와류공동과 공동소음에 대한 수치비교 연구)

  • Ha, Junbeom;Ku, Garam;Cho, Junghoon;Cheong, Cheolung;Seol, Hanshin
    • The Journal of the Acoustical Society of Korea / v.40 no.4 / pp.261-269 / 2021
  • Without any validation of the incompressible assumption, most previous studies on cavitation flow and its noise have utilized numerical methods based on the incompressible Reynolds-Averaged Navier-Stokes (RANS) equations because of their computational efficiency. In this study, to investigate the effects of flow compressibility on Tip Vortex Cavitation (TVC) flow and noise, both incompressible and compressible simulations are performed to simulate the TVC flow, and the Ffowcs Williams-Hawkings (FW-H) integral equation is utilized to predict the TVC noise. The DARPA Suboff submarine body with an underwater propeller with a skew angle of 17 degrees is targeted to account for the effects of upstream disturbance. The computational domain is set to match the test section of the large cavitation tunnel at the Korea Research Institute of Ships and Ocean Engineering so that the predictions can be compared with the measurements. To predict the TVC accurately, the Delayed Detached Eddy Simulation (DDES) technique is used in combination with adaptive grid techniques. The acoustic spectrum obtained using the compressible flow solver shows closer agreement with the measured one.

Digital Twin-Based Communication Optimization Method for Mission Validation of Swarm Robot (군집 로봇의 임무 검증 지원을 위한 디지털 트윈 기반 통신 최적화 기법)

  • Gwanhyeok, Kim;Hanjin, Kim;Junhyung, Kwon;Beomsu, Ha;Seok Haeng, Huh;Jee Hoon, Koo;Ho Jung, Sohn;Won-Tae, Kim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.1 / pp.9-16 / 2023
  • Robots are expected to expand their scope of application to the military field and take on important missions such as surveillance and enemy detection in future warfare. Owing to the advantage of having multiple robots, swarm robots can perform tasks that would be difficult or time-consuming for a single robot more efficiently. Swarm robots require mutual recognition and collaboration, so they send and receive vast amounts of data, making it increasingly difficult to verify their software (SW). Hardware-in-the-loop simulation (HILS), used to increase the reliability of mission verification, enables SW verification of complex swarm robots, but the amount of verification data exchanged between the HILS device and the simulator increases exponentially with the number of systems to be verified, so communication overload may occur. In this paper, we propose a digital twin-based communication optimization technique to solve the communication overload problem that occurs in the mission verification of swarm robots. Under the proposed Digital Twin based Multi HILS Framework, the Network DT can efficiently allocate network resources to each robot according to the mission scenario through the Network Controller algorithm and can satisfy all the sensor data generation rates required by the individual robots participating in the swarm. In addition, experiments on packet loss showed that the packet loss rate was reduced from 15.7% to 0.2%.
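
The resource allocation idea can be illustrated with a hypothetical sketch that divides a fixed link capacity among robots in proportion to their required sensor data rates; this is only an illustration of the allocation concept, not the paper's Network Controller algorithm.

```python
def allocate_bandwidth(sensor_rates_kbps, link_capacity_kbps):
    """Hypothetical proportional allocation of a shared link among swarm robots.

    sensor_rates_kbps : dict of robot id -> required sensor data rate (kbps)
    link_capacity_kbps: total capacity available for verification traffic
    Returns a dict of robot id -> allocated rate. If the capacity covers all
    demands, every robot gets its full rate; otherwise rates are scaled down
    proportionally (a real scheduler might instead prioritize or queue traffic).
    """
    total_demand = sum(sensor_rates_kbps.values())
    if total_demand <= link_capacity_kbps:
        return dict(sensor_rates_kbps)
    scale = link_capacity_kbps / total_demand
    return {robot: rate * scale for robot, rate in sensor_rates_kbps.items()}

# Example: three robots competing for a 10 Mbps verification link.
print(allocate_bandwidth({"r1": 4000, "r2": 5000, "r3": 6000}, 10000))
```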