• Title/Summary/Keyword: normalization method


Application of Deep Learning-Based Nuclear Medicine Lung Study Classification Model (딥러닝 기반의 핵의학 폐검사 분류 모델 적용)

  • Jeong, Eui-Hwan;Oh, Joo-Young;Lee, Ju-Young;Park, Hoon-Hee
    • Journal of radiological science and technology / v.45 no.1 / pp.41-47 / 2022
  • The purpose of this study is to apply a deep learning model that can distinguish lung perfusion and lung ventilation images in nuclear medicine, and to evaluate its image classification ability. Image data pre-processing was performed in the following order: image matrix size adjustment, min-max normalization, image center position adjustment, train/validation/test data set splitting, and data augmentation. The convolutional neural network (CNN) structures VGG-16, ResNet-18, Inception-ResNet-v2, and SE-ResNeXt-101 were used. For classification model evaluation, classification performance indices, class activation maps (CAM), and a statistical image evaluation method were applied. On the classification performance indices, SE-ResNeXt-101 and Inception-ResNet-v2 showed the highest and identical performance. In the CAM results, the cardiac and right lung regions were highly activated for lung perfusion, and the upper lung and neck regions were highly activated for lung ventilation. The statistical image evaluation showed a meaningful difference between SE-ResNeXt-101 and Inception-ResNet-v2. The study confirms the applicability of CNN models to lung scintigraphy classification. These results are expected to serve as basic data for research on new artificial intelligence models and to support stable image management in clinical practice.
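
The abstract describes matrix-size adjustment and min-max normalization as pre-processing steps but gives no implementation details. A minimal sketch of those two steps, assuming NumPy arrays and using scikit-image for resizing (the library choice and the 224-pixel matrix size are assumptions, not from the paper):

```python
import numpy as np
from skimage.transform import resize  # one possible resizing library

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Adjust the image matrix size and apply min-max normalization to [0, 1]."""
    image = resize(image, (size, size), preserve_range=True)  # matrix size adjustment
    lo, hi = image.min(), image.max()
    # Min-max normalization: (x - min) / (max - min), guarding against a flat image.
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)
```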

Identification of Endogenous Genes for Normalizing Titer Variation of Citrus Tristeza Virus in Aphids at Different Post-acquisition Feeding Times

  • Wang, Hongsu;Chen, Qi;Liu, Luqin;Zhou, Yan;Wang, Huanhuan;Li, Zhongan;Liu, Jinxiang
    • The Plant Pathology Journal / v.38 no.4 / pp.287-295 / 2022
  • Citrus tristeza virus (CTV) is efficiently transmitted in a semi-persistent manner by the brown citrus aphid (Toxoptera citricida (Kirkaldy)). Currently, the most sensitive method for detecting plant viruses in insect vectors is reverse-transcription quantitative polymerase chain reaction (RT-qPCR). In this study, the elongation factor-1 alpha (EF-1α) gene and the acidic p0 ribosomal protein (RPAP0) gene were confirmed to be suitable reference genes for RT-qPCR normalization in viruliferous T. citricida aphids using the geNorm, NormFinder, and BestKeeper tools. The relative CTV titer in aphids (T. citricida) at different post-acquisition feeding times on healthy plants was then quantified by RT-qPCR using EF-1α and RPAP0 as reference genes. The relative CTV titer retained in the aphids gradually decreased with increasing feeding time. During the first 0.5 h of feeding on healthy plants, the CTV titer remaining in the aphids showed a rapid loss of about 80% for the highly transmissible isolate CT11A and of 40% for the poorly transmissible isolate CTLJ. The relative CTV titer in aphids at post-acquisition times longer than 12 h was significantly lower than at the other feeding times for CT11A, and a similar trend was found for CTLJ. To our knowledge, this is the first report of the relative variation in CTV titer remaining in T. citricida at different post-acquisition feeding times on healthy plants.
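
The abstract reports RT-qPCR normalization against EF-1α and RPAP0 but does not state the quantification formula. A small sketch of reference-gene normalization via the common 2^-ΔΔCt approach, averaging the Ct of the two reference genes; the formula choice and the Ct values in the usage example are assumptions, not the authors' data:

```python
import numpy as np

def relative_titer(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """2^-ddCt relative quantification: the target Ct is normalized against the
    mean Ct of the reference genes (e.g., EF-1a and RPAP0), then referenced to a
    calibrator sample (e.g., the 0 h post-acquisition time point)."""
    d_ct = ct_target - np.mean(ct_refs)              # dCt of the sample
    d_ct_cal = ct_target_cal - np.mean(ct_refs_cal)  # dCt of the calibrator
    return 2.0 ** -(d_ct - d_ct_cal)

# Hypothetical Ct values only: CTV in aphids after 12 h feeding vs. the 0 h calibrator.
print(relative_titer(ct_target=24.1, ct_refs=[19.8, 20.2],
                     ct_target_cal=21.5, ct_refs_cal=[19.9, 20.1]))
```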

Black Ice Formation Prediction Model Based on Public Data in Land, Infrastructure and Transport Domain (국토 교통 공공데이터 기반 블랙아이스 발생 구간 예측 모델)

  • Na, Jeong Ho;Yoon, Sung-Ho;Oh, Hyo-Jung
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.257-262 / 2021
  • Accidents caused by black ice occur frequently every winter, and the fatality rate is very high compared to other traffic accidents. Therefore, a systematic method is needed to predict black ice formation before accidents occur. In this paper, we propose a black ice prediction model based on heterogeneous, multi-type data. To this end, 12,574,630 records of 46 types of land, infrastructure, and transport public data and meteorological public data were collected. A data cleansing process, including missing value detection and normalization, then yielded approximately 600,000 refined data sets. We analyzed the correlation of the 42 collected factors and selected only the 21 factors that have a valid effect on black ice prediction. The resulting prediction model will ultimately be used to derive a route-specific black ice risk index, and this work serves as a preliminary study for black ice warning alert services.
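
A hedged sketch of the cleansing and factor-selection steps described above (missing-value removal, min-max normalization, and keeping only factors that correlate with the black-ice label); the column names, the assumption of numeric factors, and the correlation threshold are illustrative, not taken from the paper:

```python
import pandas as pd

def cleanse_and_select(df: pd.DataFrame, label: str = "black_ice", thr: float = 0.1):
    """Drop records with missing values, min-max normalize the numeric factors,
    and keep only factors whose absolute correlation with the label exceeds thr."""
    df = df.dropna()                                             # missing-value handling
    feats = df.drop(columns=[label])
    feats = (feats - feats.min()) / (feats.max() - feats.min())  # normalization
    corr = feats.corrwith(df[label]).abs()                       # correlation with label
    selected = corr[corr >= thr].index.tolist()                  # factors with valid effect
    return pd.concat([feats[selected], df[label]], axis=1), selected
```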

A clinical study on a patient with hypothyroidism (생간건비탕가미방(生肝健脾湯加味方)을 이용한 갑상선기능저하증 치험1예(例))

  • Koo, Jin Suk;Kim, Bong Hyun;Seo, Bu Il
    • The Korea Journal of Herbology / v.29 no.5 / pp.17-21 / 2014
  • Objectives : Hypothyroidism is a common endocrine disorder in which the thyroid gland does not produce enough thyroid hormone. It can cause a number of symptoms, such as tiredness, poor tolerance of cold, and weight gain. The purpose of this study was to report the clinical effects of herbal medicine on hypothyroidism. Methods : We employed oriental medical treatments: herbal medication (Saenggangeonbi-tang gamibang), acupuncture, and moxibustion. At the same time, the patient started to exercise. We treated the patient with these therapies two or three times a week. She took the herbal medicine three times a day after meals. While taking the medicine, the patient was asked to avoid fatty food, flour-based food, and alcohol. The symptoms and the normalization of thyroxine and TSH levels are the key points in evaluating the patient's condition, so the patient's body weight was measured and a blood test taken once every two months, and the results were compared with the previous ones. Results : After 6 months of treatment with acupuncture, moxibustion, and herbal medicine, the levels of TSH, fT4, T4, and T3 became normal. Body weight decreased by about 18 pounds. In addition, the symptoms of tiredness and edema were much improved. Conclusion : Herbal medicine (Saenggangeonbi-tang gamibang) was effective in the treatment of hypothyroidism and helped to normalize the levels of TSH, fT4, T4, and T3.

SAVITZKY-GOLAY DERIVATIVES : A SYSTEMATIC APPROACH TO REMOVING VARIABILITY BEFORE APPLYING CHEMOMETRICS

  • Hopkins, David W.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1041-1041 / 2001
  • Removal of variability in spectral data before the application of chemometric modeling will generally result in simpler (and presumably more robust) models. Particularly for sparsely sampled data, such as that typically encountered with diode array instruments, the use of Savitzky-Golay (S-G) derivatives offers an effective method to remove the effects of shifting baselines and sloping or curving apparent baselines often observed with scattering samples. The application of these convolution functions is equivalent to fitting a selected polynomial to a number of points in the spectrum, usually 5 to 25 points. The value of the polynomial evaluated at its mid-point, or its derivative, is taken as the (smoothed) spectrum or its derivative at the mid-point of the wavelength window. The process is continued for successive windows along the spectrum. The original paper, published in 1964 [1], presented these convolution functions as integers to be used as multipliers for the spectral values at equal intervals in the window, with a normalization integer to divide the sum of the products, to determine the result for each point. Steinier et al. [2] published corrections to errors in the original presentation [1], and a vector formulation for obtaining the coefficients. The selection of the degree of the polynomial and the number of points in the window determines whether closely situated bands and shoulders are resolved in the derivatives. Furthermore, the actual noise reduction in the derivatives may be estimated from the square root of the sum of the coefficients, divided by the NORM value. A simple technique to evaluate the actual convolution factors employed in the calculation by the software will be presented. It has been found that some software packages do not properly account for the sampling interval of the spectral data (Equation VII in [1]). While this is not a problem in the construction and implementation of chemometric models, it may be noticed when comparing models at differing spectral resolutions. Also, the effects of choosing various polynomials and numbers of points in the window on the parameters of PLS models will be presented.
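
As one concrete illustration of the S-G convolution described above, the sketch below applies a first-derivative filter with SciPy; the `delta` argument is the sampling interval whose handling (Equation VII in [1]) the abstract notes some packages get wrong. The synthetic spectrum and window settings are arbitrary choices:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "spectrum": two overlapping bands on a sloping baseline, plus noise.
x = np.linspace(1100, 2500, 700)                       # wavelength grid (nm)
y = (np.exp(-((x - 1700) / 40) ** 2)
     + 0.6 * np.exp(-((x - 1780) / 30) ** 2)
     + 0.0005 * x + 0.01 * np.random.randn(x.size))

# First derivative from a 2nd-degree polynomial fitted over a 15-point window.
# delta is the sampling interval, so the derivative is per nm rather than per point.
dy = savgol_filter(y, window_length=15, polyorder=2, deriv=1, delta=x[1] - x[0])
```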


Deep Prediction of Stock Prices with K-Means Clustered Data Augmentation (K-평균 군집화 데이터 증강을 통한 주가 심층 예측)

  • Kyounghoon Han;Huigyu Yang;Hyunseung Choo
    • Journal of Internet Computing and Services / v.24 no.2 / pp.67-74 / 2023
  • Stock price prediction research in the financial sector aims to ensure trading stability and achieve profit realization. Conventional statistical prediction techniques are not reliable enough for actual trading decisions because their prediction accuracy is low compared to randomly predicted results. Artificial intelligence models improve accuracy by learning data characteristics and fluctuation patterns. However, predicting stock prices from long-term time series data remains a challenging problem. This paper proposes a stable and reliable stock price prediction method that combines K-means clustering-based data augmentation and normalization techniques with LSTM models specialized in time series learning. This yields more accurate and reliable prediction results, supports the pursuit of higher profits, and contributes to market stability.
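
The abstract outlines K-means clustering-based augmentation with normalization ahead of LSTM training but does not specify the scheme. A rough sketch under the assumption that fixed-length price windows are min-max scaled per window and that extra samples are synthesized from pairs within each K-means cluster (one possible scheme, not necessarily the authors'):

```python
import numpy as np
from sklearn.cluster import KMeans

def windows(prices: np.ndarray, length: int = 30) -> np.ndarray:
    """Slice a price series into overlapping windows, min-max scaled per window."""
    w = np.stack([prices[i:i + length] for i in range(len(prices) - length + 1)])
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    return (w - lo) / np.where(hi > lo, hi - lo, 1.0)

def augment_by_cluster(w: np.ndarray, k: int = 8, per_cluster: int = 50, seed: int = 0):
    """Group normalized windows with K-means and synthesize extra samples by
    averaging random pairs drawn from the same cluster (one possible scheme)."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(w)
    extra = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        pairs = rng.choice(idx, size=(per_cluster, 2))
        extra.append(w[pairs].mean(axis=1))  # midpoint of two cluster members
    return np.concatenate([w] + extra)
```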

A Study on Speechreading about the Korean 8 Vowels (한국어 8모음 자동 독화에 관한 연구)

  • Lee, Kyong-Ho;Yang, Ryong;Kim, Sun-Ok
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.173-182 / 2009
  • In this paper, we study the extraction of parameters and the implementation of a speechreading system to recognize the eight Korean vowels. Facial features are detected by amplifying and reducing image values and comparing the values represented in various color spaces. The eye positions, the nose position, the inner boundary of the lips, the outer boundary of the upper lip, and the outline of the teeth are located as features, and from these the inner-lip area, the height and width of the inner lips, the ratio of the tooth outline length to the inner-mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. 2,400 data samples were gathered and analyzed. Based on this analysis, a neural network was constructed and recognition experiments were performed. In the experiments, 5 normal persons were sampled. The observational error between samples was corrected using a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.
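
The abstract states that observational error between the five speakers was corrected using a normalization method, without detail. One plausible reading is a per-speaker z-score normalization of the extracted lip and tooth parameters, sketched below as an assumption rather than the authors' exact procedure:

```python
import numpy as np

def normalize_per_speaker(features: np.ndarray, speaker_ids: np.ndarray) -> np.ndarray:
    """Z-score each parameter within each speaker so that between-speaker
    differences (lip size, camera distance) do not dominate the vowel cues."""
    out = np.empty_like(features, dtype=float)
    for sid in np.unique(speaker_ids):
        mask = speaker_ids == sid
        mu = features[mask].mean(axis=0)
        sd = features[mask].std(axis=0)
        out[mask] = (features[mask] - mu) / np.where(sd > 0, sd, 1.0)
    return out
```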

Application of EDA Techniques for Estimating Rainfall Quantiles (확률강우량 산정을 위한 EDA 기법의 적용)

  • Park, Hyunkeun;Oh, Sejeong;Yoo, Chulsang
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.4B / pp.319-328 / 2009
  • This study quantified the data by applying EDA techniques that consider the data structure, and the results were then used for frequency analysis. While traditional methods based on the method of moments provide statistics that are very sensitive to extreme values, EDA techniques have the advantage of providing very stable statistics with small variation. To apply EDA techniques to frequency analysis, a normalization transform and its inverse transform are needed to conserve the skewness of the raw data. That is, it is necessary to transform the raw data so that they follow the normal distribution, to estimate the statistics by applying the EDA techniques, and then to inverse-transform the statistics of the transformed data. The statistics thus determined are then used for frequency analysis with a given probability density function. This study analyzed the annual maximum one-hour rainfall data at the Seoul and Pohang stations. As a result, more stable rainfall quantiles, which were also less sensitive to extreme values, could be estimated by applying the EDA techniques. This methodology may be used effectively for the frequency analysis of rainfall at stations with especially high annual variations of rainfall due to climate change, etc.
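
The abstract describes a normalization transform, EDA (resistant) statistics, and an inverse transform, but names neither the transform nor the estimators. The sketch below uses a Box-Cox transform with median- and IQR-based estimates as one plausible concrete choice; the paper may use different transforms and resistant estimators:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def eda_statistics(annual_maxima: np.ndarray):
    """Box-Cox transform the (positive) annual maxima toward normality, estimate
    resistant statistics on the transformed scale, and map the location back."""
    z, lam = stats.boxcox(annual_maxima)   # normalization transform
    med = np.median(z)                     # resistant location estimate
    sigma = stats.iqr(z) / 1.349           # IQR-based scale (IQR = 1.349 sigma for a normal)
    return med, sigma, lam, inv_boxcox(med, lam)  # location back on the original scale
```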

Comparison between Old and New Versions of Electron Monte Carlo (eMC) Dose Calculation

  • Seongmoon Jung;Jaeman Son;Hyeongmin Jin;Seonghee Kang;Jong Min Park;Jung-in Kim;Chang Heon Choi
    • Progress in Medical Physics / v.34 no.2 / pp.15-22 / 2023
  • This study compared the dose calculated using the electron Monte Carlo (eMC) dose calculation algorithm in an old version (eMC V13.7) of the Varian Eclipse treatment-planning system (TPS) and in its newer version (eMC V16.1). The eMC V16.1 was configured using the same beam data as the eMC V13.7. Beam data measured using the VitalBeam linear accelerator were implemented. A box-shaped water phantom (30×30×30 cm3) was generated in the TPS. The TPS with eMC V13.7 and eMC V16.1 then calculated the dose delivered to the water phantom by electron beams of various energies with a field size of 10×10 cm2. The calculations were repeated while changing the dose-smoothing levels and the normalization method. Subsequently, the percentage depth dose and lateral profiles of the dose distributions obtained with eMC V13.7 and eMC V16.1 were analyzed. In addition, the dose-volume histogram (DVH) differences between the two versions were compared for a heterogeneous phantom with bone and lung inserts. The doses calculated using eMC V16.1 were similar to those calculated using eMC V13.7 for the homogeneous phantoms. However, a DVH difference was observed in the heterogeneous phantom, particularly in the bone material. The dose distribution calculated using eMC V16.1 was comparable to that of eMC V13.7 for homogeneous phantoms, while the version change resulted in a different DVH for the heterogeneous phantom. However, further investigations to assess the DVH differences in patients and experimental validation of eMC V16.1, particularly for heterogeneous geometry, are required.
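
For reference, a minimal sketch of the kind of comparison summarized above: normalizing two central-axis dose curves to their maxima (percentage depth dose) and reporting the largest point-wise difference between the two eMC versions. This is purely illustrative and not the authors' analysis code:

```python
import numpy as np

def pdd(dose: np.ndarray) -> np.ndarray:
    """Percentage depth dose: normalize a central-axis dose curve to its maximum."""
    return 100.0 * dose / dose.max()

def max_pdd_difference(dose_v137: np.ndarray, dose_v161: np.ndarray) -> float:
    """Largest absolute PDD difference between the two versions, assuming both
    curves are sampled at the same depths."""
    return float(np.max(np.abs(pdd(dose_v137) - pdd(dose_v161))))
```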

EDNN based prediction of strength and durability properties of HPC using fibres & copper slag

  • Gupta, Mohit;Raj, Ritu;Sahu, Anil Kumar
    • Advances in concrete construction / v.14 no.3 / pp.185-194 / 2022
  • The construction field has been encouraged to use industrial waste and secondary materials for producing cement and concrete, since this decreases the consumption of natural resources. At the same time, ensuring quality requires analyzing the strength and durability properties of such cement and concrete. Existing research has focused on predicting the strength and other properties of High-Performance Concrete (HPC) with optimization and machine learning algorithms, but these methods suffer from error and accuracy issues. Therefore, this work uses an Enhanced Deep Neural Network (EDNN) to predict the strength and durability of HPC. First, the data is gathered. Then, the data is pre-processed by eliminating missing data and applying normalization. Next, features are extracted from the pre-processed data and fed into the EDNN algorithm, which predicts the strength and durability properties of the given mix-design inputs. The weight values of the EDNN are initialized using the Switched Multi-Objective Jellyfish Optimization (SMOJO) algorithm, and the Gaussian radial function is used as the activation function. In the experimental analysis, the performance of the proposed EDNN is examined against existing algorithms. Based on the RMSE, MAE, MAPE, and R2 metrics, the proposed EDNN is compared with the existing DNN, CNN, ANN, and SVM methods and performs better on these metrics. Moreover, the effectiveness of the proposed EDNN is examined in terms of accuracy, precision, recall, and F-measure. The fitness of the proposed SMOJO algorithm is also compared with the existing JO, GWO, PSO, and GA algorithms, and SMOJO achieves a higher fitness value.
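
A hedged sketch of the pre-processing described for the EDNN pipeline (eliminating records with missing data, then normalizing the mix-design features); the pandas representation and the split into feature and target columns are assumptions made for illustration:

```python
import pandas as pd

def preprocess_hpc(df: pd.DataFrame, target_cols: list[str]):
    """Eliminate rows with missing data and min-max normalize the mix-design
    features before they go to the strength/durability prediction model."""
    df = df.dropna()                              # missing-data elimination
    X = df.drop(columns=target_cols)
    X = (X - X.min()) / (X.max() - X.min())       # normalization to [0, 1]
    y = df[target_cols]
    return X, y
```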