• Title/Summary/Keyword: 정밀도향상 (precision improvement)


The Role of Inhaled Corticosteroid in the Management of Chronic Cough (만성 기침에서 스테로이드 흡입제의 역할)

  • Lee, Kyung-Hun;Jang, Seung Hun;Lee, Jung-Hwa;Eom, Kwang-Seok;Bahn, Joon-Woo;Kim, Dong-Gyu;Shin, Tae Rim;Park, Sang Myon;Lee, Myung-Gu;Kim, Chul-Hong;Hyun, In-Gyu;Jung, Ki-Suck
    • Tuberculosis and Respiratory Diseases
    • /
    • v.60 no.2
    • /
    • pp.221-227
    • /
    • 2006
  • Background : Cough may be a consequence of bronchial hyperresponsiveness or inflammation. Empirical treatment is important in this context because it is difficult to verify the obvious cause of cough with laboratory tests. Corticosteroid has a nonspecific anti-inflammatory effect and can be used for cough management; however, its response rate has not yet been fully elucidated. This study investigated the short-term effects of inhaled corticosteroid on chronic cough. Methods : Patients with chronic cough, a normal chest radiograph and a normal pulmonary function test were enrolled. Cases with a respiratory infection within the prior 8 weeks, a history of bronchial asthma, objective wheezing on examination, subjective symptoms of gastroesophageal reflux, or use of an ACE inhibitor were excluded. On the first visit, a methacholine bronchial provocation test, a spontaneous sputum eosinophil count (performed twice) and a paranasal sinus radiograph were checked, and the patients were treated with a budesonide turbuhaler at $800{\mu}g/day$ for ten days. The primary outcome measure was a decrease in the cough score after treatment. Results : Sixty-nine chronic coughers were finally analyzed. The final diagnoses by the routine tests were as follows: bronchial asthma 13.0%, eosinophilic bronchitis 18.8%, paranasal sinusitis 23.2% and non-diagnostic cases 53.6%. The following responses to the inhaled corticosteroid were observed: definite responders, 76.8%; possible responders, 2.9%; and non-responders, 20.3%. The response rate was not affected by the final diagnosis, even in the non-diagnostic cases. There were minimal adverse drug-related effects during the empirical treatment. Conclusion : Routine objective tests such as methacholine provocation, sputum eosinophil count and simple radiographs are not sufficient for diagnosing chronic cough; therefore, empirical treatment is important. Short-term inhaled corticosteroid treatment is effective and can guide a further treatment plan for chronic cough.

The study of nondestructive method for measuring the acidity of the recent record paper in Hanji by using FT-NIR spectroscopy and Integrating sphere (푸리에 변환 근적외선 분광분석기(FT-NIR)와 적분구를 이용한 근대 한지 기록물의 산성도 비파괴 평가방법에 대한 연구)

  • Shin, Yong-Min;Park, Soung-Be;Kim, Chan-Bong;Lee, Seong-Uk;Cho, Won-Bo;Kim, Hyo-Jin
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2011.10a
    • /
    • pp.255-269
    • /
    • 2011
  • The purpose of this study was to develop a nondestructive analysis method that can rapidly assess the condition of records written on Hanji. Because conventional analysis methods require destroying part of the original paper record, a nondestructive tool was developed to overcome this limitation. FT-NIR (Fourier transform near-infrared) spectroscopy was used to analyze Hanji on which records had been written and preserved; an FT-NIR spectrometer was chosen because it offers better precision and accuracy than a dispersive NIR spectrometer. Spectra were measured over the range of 12,500 to 4,000 $cm^{-1}$, and an integrating sphere in diffuse reflectance mode was used for the Hanji measurements. Moisture and acidity (pH), chemical factors used to evaluate the quality of Hanji, were studied for their correlation with the NIR spectra. The NIR spectra were then pretreated to obtain the optimum correlation: MSC and the Savitzky-Golay first derivative were used as pretreatment methods, and the correlation coefficients were calculated by partial least squares regression (PLSR). Without pretreatment, the correlation coefficient for acidity was 0.92 and the SEP was 0.24. With pretreatment, the correlation coefficient for acidity ($R^2=0.98$) and the SEP (0.19) both improved, showing that the pretreated data were superior. The first-derivative spectra showed the best linearity ($R^2=0.99$) with an SEP of 0.45, better than the spectra without pretreatment. These results show that the condition of Hanji records can be evaluated rapidly and nondestructively with an integrating sphere and an NIR spectrometer.
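
As a rough illustration of the workflow described in the abstract above (MSC and Savitzky-Golay first-derivative pretreatment followed by PLSR, evaluated with R² and SEP), the following sketch uses SciPy and scikit-learn; the arrays, window length and number of components are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the pretreatment + PLSR workflow described above.
# `spectra` (n_samples x n_wavenumbers) and `ph` (n_samples,) are hypothetical data;
# the library calls are standard scipy/scikit-learn, but all parameters are illustrative.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # fit each spectrum to the reference
        corrected[i] = (s - intercept) / slope
    return corrected

# Illustrative stand-ins for NIR absorbance spectra and measured acidity (pH).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 500))
ph = rng.uniform(4.0, 7.5, size=60)

# MSC followed by a Savitzky-Golay first derivative along the wavenumber axis.
X = savgol_filter(msc(spectra), window_length=11, polyorder=2, deriv=1, axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, ph, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=8).fit(X_train, y_train)
pred = pls.predict(X_test).ravel()
print("R2 :", r2_score(y_test, pred))
print("SEP:", np.sqrt(mean_squared_error(y_test, pred)))  # standard error of prediction (approx.)
```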


Initial results from spatially averaged coherency, frequency-wavenumber, and horizontal to vertical spectrum ratio microtremor survey methods for site hazard study at Launceston, Tasmania (Tasmania 의 Launceston 시의 위험 지역 분석을 위한 공간적 평균 일관성, 주파수-파수, 수평과 수직 스펙트럼의 비율을 이용한 상신 진동 탐사법의 일차적 결과)

  • Claprood, Maxime;Asten, Michael W.
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.132-142
    • /
    • 2009
  • The Tamar rift valley runs through the City of Launceston, Tasmania. Damage has occurred to city buildings due to earthquake activity in Bass Strait. The presence of the ancient Tamar valley, in-filled with soft sediments whose thickness varies rapidly from 0 to 250 m over a few hundred metres, is thought to induce a 2D resonance pattern, amplifying the surface motions over the valley and in Launceston. Spatially averaged coherency (SPAC), frequency-wavenumber (FK) and horizontal to vertical spectrum ratio (HVSR) microtremor survey methods are combined to identify and characterise site effects over the Tamar valley. Passive seismic array measurements acquired at seven selected sites were analysed with SPAC to estimate shear wave velocity (slowness) depth profiles. SPAC was then combined with HVSR to improve the resolution of these profiles in the sediments to an approximate depth of 125 m. Results show that sediment thicknesses vary significantly throughout Launceston. The top layer is composed of as much as 20 m of very soft Quaternary alluvial sediments with velocities from 50 m/s to 125 m/s. Shear-wave velocities in the deeper Tertiary sediment fill of the Tamar valley, with thicknesses from 0 to 250 m, vary from 400 m/s to 750 m/s. Results obtained using SPAC at two selected sites (GUN and KPK) agree well with dispersion curves interpreted with FK analysis. FK interpretation is, however, limited to a narrower range of frequencies than SPAC and seems to overestimate the shear wave velocity at lower frequencies. Observed HVSR are also compared with the results obtained by SPAC, assuming a layered earth model, and provide additional constraints on the shear wave slowness profiles at these sites. The combined SPAC and HVSR analysis confirms the hypothesis of a layered geology at the GUN site and indicates the presence of a 2D resonance pattern across the Tamar valley at the KPK site.
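
As an illustration of the HVSR calculation named in the abstract above (the spectral ratio of the combined horizontal to vertical ambient-noise components), here is a minimal NumPy sketch; the windowing choices and synthetic data are illustrative assumptions, not the authors' processing.

```python
# Minimal sketch of a horizontal-to-vertical spectral ratio (HVSR) estimate.
# `north`, `east`, `vertical` are hypothetical equal-length microtremor records
# sampled at `fs` Hz; window length and combination rule are illustrative only.
import numpy as np

def hvsr(north, east, vertical, fs, win_sec=60.0):
    nwin = int(win_sec * fs)
    n_windows = len(vertical) // nwin
    taper = np.hanning(nwin)
    ratios = []
    for k in range(n_windows):
        sl = slice(k * nwin, (k + 1) * nwin)
        # amplitude spectra of each component for this window
        spec = {c: np.abs(np.fft.rfft(x[sl] * taper))
                for c, x in (("N", north), ("E", east), ("Z", vertical))}
        horiz = np.sqrt(spec["N"] ** 2 + spec["E"] ** 2) / np.sqrt(2)  # combined horizontal
        ratios.append(horiz / np.maximum(spec["Z"], 1e-12))
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    return freqs, np.mean(ratios, axis=0)   # average ratio over windows

# Example with synthetic noise: the HVSR peak frequency hints at sediment resonance.
fs = 100.0
rng = np.random.default_rng(1)
n, e, z = (rng.normal(size=int(fs * 600)) for _ in range(3))
freqs, ratio = hvsr(n, e, z, fs)
print("peak frequency ~", freqs[np.argmax(ratio[1:]) + 1], "Hz")
```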

Live Load Distribution in Prestressed Concrete I-Girder Bridges (I형 프리스트레스트 콘크리트 거더교의 활하중 분배)

  • Lee, Hwan-Woo;Kim, Kwang-Yang
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.4
    • /
    • pp.325-334
    • /
    • 2008
  • The standard prestressed concrete I-girder bridge (PSC I-girder bridge) is one of the most prevalent types for small and medium bridges in Korea. When determining the member forces in a section to assess girder safety in this type of bridge, the general practice is to use simplified practical equations or the live load distribution factors proposed in design standards, rather than precise analysis such as the finite element method. Meanwhile, the live load distribution factors currently used in Korean design practice simply reflect overseas research results or design standards without alteration. Therefore, it is necessary to develop a live load distribution factor equation suited to the design conditions of Korea, considering the standardized sections of standard PSC I-girder bridges and the design strength of concrete. In this study, to develop such an equation, parametric and sensitivity analyses were carried out on parameters such as bridge width, span length, girder spacing and traffic lane width. As a result, the major variables determining the size of the distribution factors were girder spacing, overhang length and span length for exterior girders; girder spacing, overhang length, span length and bridge width for interior girders adjacent to the exterior girders; and girder spacing, bridge width and span length for interior girders. An equation for the live load distribution factors was then developed through multiple linear regression analysis on the results of the parametric analysis. Practicing engineers who design a bridge with the distribution factor equation developed here can more easily determine design member forces with an appropriate level of safety. Moreover, in preliminary design, the model is expected to save much of the time spent on repeated design iterations to improve the structural efficiency of PSC I-girder bridges.
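
The abstract above derives its distribution-factor equation by multiple linear regression on parametric-analysis results. The sketch below shows what such a fit looks like in NumPy; the predictor names follow the abstract (girder spacing, overhang length, span length, bridge width), but the data and fitted coefficients are purely illustrative.

```python
# Minimal sketch of fitting a live-load distribution factor equation by multiple
# linear regression, as described above. The predictors mirror the abstract, but the
# "analysis results" and resulting coefficients here are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 200
S = rng.uniform(1.8, 3.0, n)    # girder spacing [m]
O = rng.uniform(0.5, 1.5, n)    # overhang length [m]
L = rng.uniform(20.0, 35.0, n)  # span length [m]
W = rng.uniform(8.0, 16.0, n)   # bridge width [m]
dist_factor = 0.1 + 0.25 * S + 0.08 * O - 0.004 * L + rng.normal(0, 0.01, n)  # fake FE results

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), S, O, L, W])
coef, *_ = np.linalg.lstsq(X, dist_factor, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((dist_factor - pred) ** 2) / np.sum((dist_factor - dist_factor.mean()) ** 2)
print("coefficients:", np.round(coef, 4), " R2 =", round(r2, 3))
```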

Estimation of the Three-dimensional Vegetation Landscape of the Donhwamun Gate Area in Changdeokgung Palace through the Rubber Sheeting Transformation of Donggwoldo (<동궐도(東闕圖)>의 러버쉬팅변환을 통한 창덕궁 돈화문 지역의 입체적 식생 경관 추정)

  • Lee, Jae-Yong
    • Korean Journal of Heritage: History & Science
    • /
    • v.51 no.2
    • /
    • pp.138-153
    • /
    • 2018
  • The purpose of this study was to analyze Donggwoldo(東闕圖), a painting made in the late Joseon Dynasty, in order to specify the vegetation landscape of the Donhwamun Gate area in Changdeokgung Palace. The study results can be summarized as follows. First, "Jieziyuan Huazhuan(芥子園畵傳)", the introductory book on tree depiction transmitted from China in the 17th century, allowed classification criteria for the trees described in the painting to be established and their species to be identified. As a result of the classification, there were 10 species and 50 trees in the Donhwamun Gate area of Donggwoldo. Second, it was possible to measure the real size of the trees described in the painting through the elevation drawing scale of Donggwoldo. The height of the trees ranged from a minimum of 4.37 m to a maximum of 22.37 m. According to the measurement results, compared to the old trees currently living in Changdeokgung Palace, the trees described in the painting were found to be drawn at close to actual size without exaggeration. Thus, the measured tree heights turned out to be appropriate as baseline data for reproducing the vegetation landscape. Third, through the rubber sheeting transformation of Donggwoldo, it was possible to create a planting ground plan of Donggwoldo on the current digital topographic map. In particular, as the transformed area of Donggwoldo was subdivided and control points were added, the precision of the transformation improved. The planting ground plan of Donggwoldo made it possible to grasp the changed positions of the plantings as well as the change in planting density. Lastly, it was possible to produce a three-dimensional vegetation landscape model by combining the tree shape information with the planting ground plan of Donggwoldo. Based on the three-dimensional model, it was easy to examine the three-dimensional visual characteristics of the vegetation via the view axes, skyline, and openness to and screening from adjacent areas at eye level. This study is differentiated from others in that it verified the realism of Donggwoldo and suggested the possibility of ascertaining the original form of the vegetation landscape described in the painting.
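
Rubber sheeting registers the painting to the modern digital topographic map through control points. A minimal sketch of a single affine control-point fit, a simplification of true piecewise rubber sheeting, is shown below; all coordinates are hypothetical.

```python
# Minimal sketch of a control-point transformation of the kind used in rubber sheeting:
# a single affine fit, which is a simplification of piecewise rubber sheeting.
# All control-point coordinates below are hypothetical.
import numpy as np

# (x, y) positions of control points in the painting's pixel space...
src = np.array([[120.0, 340.0], [980.0, 300.0], [510.0, 1210.0], [1450.0, 1180.0]])
# ...and their matching coordinates on the modern digital topographic map (m).
dst = np.array([[197050.2, 551830.7], [197210.5, 551838.1],
                [197062.3, 551662.4], [197240.9, 551668.0]])

# Solve for the 6 affine parameters with least squares: [x', y'] = [x, y, 1] @ params.
A_design = np.column_stack([src, np.ones(len(src))])
params_x, *_ = np.linalg.lstsq(A_design, dst[:, 0], rcond=None)
params_y, *_ = np.linalg.lstsq(A_design, dst[:, 1], rcond=None)

def to_map(points):
    """Map painting coordinates to topographic-map coordinates."""
    p = np.column_stack([points, np.ones(len(points))])
    return np.column_stack([p @ params_x, p @ params_y])

# Transform a hypothetical tree position drawn in the painting.
print(to_map(np.array([[640.0, 720.0]])))
```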

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • Anomaly detection was once dominated by methods that determine whether an observation is abnormal based on statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data have become more complex in the era of big data, it has become difficult to analyze and predict data generated throughout industry accurately in the conventional way. Supervised learning algorithms based on SVM and decision trees were therefore adopted. However, supervised models can predict test data accurately only when the class distribution is balanced, whereas most data generated in industry are class-imbalanced, so their predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies in user action sequences using generative adversarial networks. AnoGAN, introduced by Thomas et al. (2017), is a model that performs anomaly detection on medical images and is composed of convolutional neural networks. In contrast, anomaly detection on sequence data with generative adversarial networks has been studied far less than on image data. Li et al. (2018) proposed an LSTM-based model, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in the anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator uses two stacked LSTM layers with 32 and 64 hidden units, and the discriminator uses an LSTM layer with 64 hidden units. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data, but in this paper, as mentioned earlier, anomaly scores are derived with the feature matching technique. In addition, the latent variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in terms of precision in all experiments and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder, because it learns the data distribution from real categorical sequence data and is not swayed by a single normal example, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%.
Experiments were also conducted to show how much performance changes with different latent variable optimization structures; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
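
A rough sketch of the architecture and scoring idea described above, an LSTM generator and discriminator with an AnoGAN-style anomaly score that mixes reconstruction error with a feature-matching term, is given below in PyTorch. The layer sizes follow the abstract, but the data, omitted training loop, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the LSTM-GAN anomaly scoring idea described above: a generator with
# stacked 32- and 64-unit LSTM layers, a discriminator with a 64-unit LSTM, and an
# AnoGAN-style score combining reconstruction error with feature matching on the
# discriminator's intermediate representation. All sizes and data are illustrative.
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, Z_DIM = 20, 8, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm1 = nn.LSTM(Z_DIM, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, N_FEATURES)
    def forward(self, z):                 # z: (batch, seq_len, Z_DIM)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.out(h)                # generated sequence (batch, seq_len, N_FEATURES)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, 64, batch_first=True)
        self.out = nn.Linear(64, 1)
    def features(self, x):                # intermediate features used for feature matching
        h, _ = self.lstm(x)
        return h[:, -1, :]                # last-step hidden state
    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def anomaly_score(x, G, D, steps=100, lam=0.1):
    """AnoGAN-style score: optimize z so G(z) matches x, then mix residual and
    feature-matching losses. Larger scores indicate more anomalous sequences."""
    z = torch.randn(x.size(0), SEQ_LEN, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        recon = torch.abs(x - G(z)).mean()
        feat = torch.abs(D.features(x) - D.features(G(z))).mean()
        loss = (1 - lam) * recon + lam * feat
        loss.backward()
        opt.step()
    return loss.item()

G, D = Generator(), Discriminator()       # untrained here; the GAN training loop is omitted
x_test = torch.randn(1, SEQ_LEN, N_FEATURES)
print("anomaly score:", anomaly_score(x_test, G, D))
```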

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because each instance carries multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, prediction becomes harder and performance improvement becomes difficult. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels, and therefore cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when a high-dimensional label space with a myriad of classes is compressed into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. The skip connection was devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract, and evaluated multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
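
As a minimal sketch of the idea described above, an autoencoder whose encoder and decoder each contain a skip connection to compress a multi-hot label vector into a low-dimensional latent label space, the following PyTorch code may help; dimensions, data, and training details are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a label-embedding autoencoder with skip connections in both the
# encoder and the decoder, compressing a high-dimensional multi-label vector into a
# low-dimensional latent label space. Dimensions and data are illustrative only.
import torch
import torch.nn as nn

LABEL_DIM, HIDDEN, LATENT = 1000, 256, 32

class SkipBlock(nn.Module):
    """Linear layer whose input is added back to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
    def forward(self, x):
        return x + self.fc(x)

class LabelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(LABEL_DIM, HIDDEN), nn.ReLU(),
            SkipBlock(HIDDEN),                      # skip connection in the encoder
            nn.Linear(HIDDEN, LATENT))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, HIDDEN), nn.ReLU(),
            SkipBlock(HIDDEN),                      # skip connection in the decoder
            nn.Linear(HIDDEN, LABEL_DIM))
    def forward(self, y):
        z = self.encoder(y)                          # low-dimensional latent label
        return self.decoder(z), z

# Train to reconstruct multi-hot keyword labels (binary cross-entropy on logits).
model = LabelAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
y = (torch.rand(64, LABEL_DIM) < 0.01).float()       # fake multi-hot label batch
for _ in range(5):
    opt.zero_grad()
    recon, _ = model(y)
    loss = loss_fn(recon, y)
    loss.backward()
    opt.step()
print("reconstruction loss:", loss.item())
```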

Verification of the upper limit of results through dilution tests for RIA test (RIA 검사별 희석실험을 통한 결과의 상한치 검증)

  • LEE, Geun Ui;CHOI, Jin Ju;LEE, Young Ji;YOO, Seon Hee;LEE, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.1
    • /
    • pp.42-46
    • /
    • 2022
  • Purpose Until now, there have not been many samples requiring dilution, and it has been difficult for examiners to set an appropriate dilution factor for each RIA test item and report the results. Accordingly, it was judged necessary to set the maximum dilution factor for each test and to verify the upper limit of the clinical reportable range. Therefore, in this study, the maximum dilution factor for each RIA test was set and the upper limit of the clinical reportable range was verified accordingly. Materials and Methods Among all RIA tests conducted at Asan Medical Center, the study covered the 30 test types for which dilution tests are also performed. Data from March to July 2021 were collected and analyzed. The study was conducted on samples subjected to serial dilutions such as ×2 and ×4, or ×10, ×10², ×10³, ×10⁴ and ×10⁵. Results Among the 30 test types, 18 had five or more N values within the recovery tolerance range of 80~120%. As a result of the verification of maximum dilution factors, the tests set to ×10⁴ were α-fetoprotein and thyroglobulin; the tests set to ×10³ were CA-125, CEA and β-hCG; the tests set to ×10² were free PSA, PSA, CA15-3, SCC, ferritin, PTH, cortisol and calcitonin; and the tests set to ×10 were β2-microglobulin, C-peptide and testosterone. Conclusion Since the results of dilution experiments can be reported quickly and accurately once the clinical reportable range has been verified, this work is expected to contribute to improving the quality of nuclear medicine blood tests.
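
The 80~120% tolerance above is a recovery check: the diluted result multiplied by its dilution factor is compared against the expected (undiluted) value. A short illustrative computation, with made-up numbers, is shown below.

```python
# Minimal sketch of the dilution recovery check implied by the 80-120% tolerance above.
# A diluted measurement times its dilution factor should recover the expected value;
# the sample values here are illustrative, not the paper's data.
def recovery_percent(diluted_result, dilution_factor, expected):
    """Recovery (%) = back-calculated concentration / expected concentration x 100."""
    return diluted_result * dilution_factor / expected * 100.0

def within_tolerance(recovery, low=80.0, high=120.0):
    return low <= recovery <= high

# Example: a hypothetical sample diluted x100 reads 9.5 against an expected ~1000 units.
rec = recovery_percent(diluted_result=9.5, dilution_factor=100, expected=1000.0)
print(f"recovery = {rec:.1f}%  acceptable = {within_tolerance(rec)}")
```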

Effect of Difference in Irrigation Amount on Growth and Yield of Tomato Plant in Long-term Cultivation of Hydroponics (장기 수경재배에서 급액량의 차이가 토마토 생육과 수량 특성에 미치는 영향)

  • Choi, Gyeong Lee;Lim, Mi Young;Kim, So Hui;Rho, Mi Young
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.444-451
    • /
    • 2022
  • Recently, long-term cultivation is becoming more common with the increase in tomato hydroponics. In hydroponics, it is very important to supply an appropriate amount of nutrient solution, considering the nutrient and moisture requirements of the crop, in terms of productivity, resource use, and environmental conservation. Since seasonal environmental changes are severe during long-term cultivation, it is critical to manage irrigation with these changes in mind. Therefore, this study was carried out to investigate the effect of irrigation volume on growth and yield in long-term tomato cultivation using coir substrate. The irrigation volume was adjusted at four levels (high, medium-high, medium-low and low) by varying the irrigation frequency. Irrigation scheduling (frequency) was controlled based on solar radiation measured by a sensor installed outside the greenhouse: irrigation was performed whenever the accumulated solar radiation energy reached a set value, and this set value was changed with the growing season. The results revealed that higher irrigation volume caused a higher drainage rate, which prevented the EC of the drainage from rising excessively. As the cultivation period elapsed, the EC of the drainage increased, and the lower the irrigation volume supplied, the greater the increase in drainage EC. Plant length was shorter in the low irrigation treatment compared to the other treatments, but irrigation volume did not affect the number of nodes and fruit clusters. Fruit set was not significantly affected by irrigation volume in general, but high irrigation volume significantly decreased fruit set and yield of the 12th-15th clusters, which developed during the low temperature period. Blossom-end rot occurred early and with a high incidence rate in the low irrigation treatment. The heaviest fruits were obtained from the high irrigation treatment, while the medium-high treatment had the highest total yield. The experiment confirmed the effect of irrigation amount on the stability of nutrients and moisture in the root zone and on yield, as well as the importance of proper irrigation control when cultivating tomatoes hydroponically on coir substrate. Therefore, continued research on this topic is needed, since a precise irrigation control algorithm based on root zone information, applied to an integrated environmental control system, is expected to contribute to improving crop productivity as well as to the development of hydroponics control techniques.
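
The radiation-sum irrigation control described above can be sketched as a simple accumulator that fires an irrigation event whenever the running total of measured radiation reaches the set value. The function and numbers below are an illustrative sketch, not the authors' controller.

```python
# Minimal sketch of radiation-sum irrigation control as described above: accumulate
# measured solar radiation and trigger one irrigation event each time the running sum
# reaches a set value. The set value and sensor readings are illustrative only.
def irrigation_events(radiation_samples_j_cm2, set_value_j_cm2):
    """Return the sample indices at which irrigation is triggered."""
    events, accumulated = [], 0.0
    for i, r in enumerate(radiation_samples_j_cm2):
        accumulated += r
        if accumulated >= set_value_j_cm2:
            events.append(i)       # trigger irrigation
            accumulated = 0.0      # reset the radiation sum after each event
    return events

# Example: per-minute radiation readings; the set value would be raised or lowered
# with the growing season, as in the experiment above.
readings = [1.2, 0.8, 1.5, 2.0, 1.9, 0.7, 1.1, 2.3, 1.8, 0.9] * 30
print("irrigation triggered at samples:", irrigation_events(readings, set_value_j_cm2=100.0))
```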

A Study on the Establishment of Acceptable Range for Internal Quality Control of Radioimmunoassay (핵의학 검체검사 내부정도관리 허용범위 설정에 관한 고찰)

  • Young Ji, LEE;So Young, LEE;Sun Ho, LEE
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.43-47
    • /
    • 2022
  • Purpose Radioimmunoassay laboratories implement quality control by systematizing internal quality control for quality assurance of test results. This study aims to contribute to the quality assurance of radioimmunoassay results and to systematic quality control by measuring the average CV of internal and external quality control across multiple institutions, for reference when each laboratory sets its own acceptable range. Materials and Methods We measured the average CV of internal quality control and the rate of CVs exceeding 10.0% for a total of 42 items from October 2020 to December 2021. According to the CV results, items were classified and compared as the upper group (5.0% or less), the middle group (5.0~10.0%) and the lower group (10.0% or more). The rate of CVs of 10.0% or more was compared by classifying the items tested by five or more institutions into tumor markers, thyroid hormones and other hormones. The average CV of external quality control was calculated from the overall mean and standard deviation of the results for 28 items from the first quarter to the fourth quarter of 2021. In addition, the average CV of inter-institutional proficiency testing was calculated from the overall mean and standard deviation of the results for 13 items in the first and second halves of 2021. The average CVs of internal and external quality control were compared item by item to identify the items that are well controlled and the items that require attention. Results Measuring the average precision of internal quality control for 42 items at six institutions, the upper group (5.0% or less) included ferritin, HGH, SHBG and 25-OH-VitD, while the lower group (10.0% or more) included cortisol, ATA, AMA, renin and estradiol. Comparing the rates of CVs above 10.0% for tumor markers, CA-125 (6.7%) and CA-19-9 (9.8%) performed well, while SCC-Ag (24.3%) and CA-15-3 (26.7%) were among the items requiring attention. For thyroid hormone tests, free T4 (2.1%) and T3 (9.3%) showed excellent performance, whereas AMA (39.6%) and ATA (51.6%) required attention. For other hormones, IGF-1 (8.8%), FSH (9.1%) and prolactin (9.2%) showed excellent performance, whereas estradiol (37.3%), testosterone (37.7%) and cortisol (44.4%) required attention. Measuring the average CV of all institutions participating in external quality control for 28 items, HGH and SCC-Ag were in the top group (10.0% or less), whereas ATA, estradiol, TSI and thyroglobulin were in the bottom group (30.0% or more). Conclusion Evaluating 42 items at six institutions, the average CV was 3.7~12.2%, a 3.3-fold difference between the upper and lower groups. Cortisol, ATA, AMA, renin and estradiol tests with high CVs will require continuous improvement activities to improve precision. In addition, we measured and compared the overall average CVs of internal quality control, external quality control and inter-institutional proficiency testing at the six institutions for 41 items excluding HBs-Ab. ATA, AMA, renin and estradiol fell into the same low-performing subgroup in all comparisons, so these items require attention to control and a higher acceptable range should be considered for them.
Because reagents and instruments differ between laboratories, and results also vary with the tester's proficiency and the quality control materials, it is recommended that each laboratory set and manage the acceptable range for its internal quality control CV with these factors in mind. The accuracy and reliability of radioimmunoassay results can be improved if systematic quality control is implemented based on the established acceptable range.
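
The CV bookkeeping described above (CV = standard deviation / mean × 100, grouped at 5.0% and 10.0% and flagged when exceeding 10.0%) can be illustrated with a short script; the data are made up.

```python
# Minimal sketch of the coefficient-of-variation (CV) bookkeeping described above:
# CV (%) = standard deviation / mean x 100, classified into the abstract's three
# precision groups, plus the rate of runs whose CV is 10.0% or more. Data are illustrative.
import statistics

def cv_percent(values):
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def precision_group(cv):
    if cv <= 5.0:
        return "upper (<=5.0%)"
    if cv < 10.0:
        return "middle (5.0-10.0%)"
    return "lower (>=10.0%)"

# Hypothetical repeated QC measurements of one control material for one item.
monthly_runs = [
    [10.1, 10.4, 9.8, 10.0, 10.3],
    [9.5, 11.2, 10.9, 8.7, 10.5],
    [10.0, 10.1, 9.9, 10.2, 10.0],
]
cvs = [cv_percent(run) for run in monthly_runs]
bounce_rate = sum(cv >= 10.0 for cv in cvs) / len(cvs) * 100.0
for cv in cvs:
    print(f"CV = {cv:4.1f}%  ->  {precision_group(cv)}")
print(f"rate of CVs >= 10.0%: {bounce_rate:.1f}%")
```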