• Title/Summary/Keyword: Performance Analysis (성능 분석)


Absorption of Carbon Dioxide into Aqueous Potassium Salt of Serine (Serine 칼륨염 수용액의 이산화탄소 흡수특성)

  • Song, Ho-Jun;Lee, Seung-Moon;Lee, Joon-Ho;Park, Jin-Won;Jang, Kyung-Ryong;Shim, Jae-Goo;Kim, Jun-Han
    • Journal of Korean Society of Environmental Engineers / v.31 no.7 / pp.505-514 / 2009
  • An aqueous potassium salt of serine was proposed as an alternative $CO_2$ absorbent to monoethanolamine (MEA), and its $CO_2$ absorption characteristics were studied. The experiments were conducted using screening test equipment with an NDIR-type gas analyzer and a vapor-liquid equilibrium apparatus. The $CO_2$ absorption/desorption rate and the net amount of $CO_2$ absorbed in the cyclic process were the criteria used to assess the $CO_2$ absorption characteristics in this study. The effective $CO_2$ loadings of the potassium salt of serine and MEA were 0.425 and 0.230, respectively, and the cyclic capacities were 0.354 and 0.298. The absorption rate of potassium serinate decreased sharply once the $CO_2$ loading reached 0.1 and was thereafter maintained at approximately half that of MEA. To enhance the absorption rate of the aqueous potassium salt of serine, small quantities of rate promoters, namely piperazine and tetraethylenepentamine, were blended in, increasing the rich $CO_2$ loading by 13.7% and 18.7%, respectively. The rich $CO_2$ loading of the potassium salt of serine was 29.2% and 35.0% higher than those of the aqueous sodium and lithium salts of serine, respectively. The absorption rates of the potassium salts of valine and isoleucine, which have molecular structures similar to serine, were lower than that of serine because of their bulky side groups. Precipitation phenomena during $CO_2$ absorption were discussed with reference to the literature.
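The loading and cyclic-capacity quantities used in this abstract can be illustrated with a short calculation. The sketch below is illustrative only: the definitions (loading as mol $CO_2$ per mol absorbent, cyclic capacity as the rich minus lean loading difference) are standard conventions assumed here, and the numbers are hypothetical rather than the paper's measured values.

```python
# Minimal sketch (assumptions, not from the paper): CO2 loading expressed as
# mol CO2 absorbed per mol of absorbent, and the cyclic (working) capacity of
# an absorption/desorption cycle as the rich-lean loading difference.

M_CO2 = 44.01  # g/mol, molar mass of CO2

def co2_loading(mass_co2_g: float, mol_absorbent: float) -> float:
    """Loading alpha = mol CO2 / mol absorbent."""
    return (mass_co2_g / M_CO2) / mol_absorbent

def cyclic_capacity(rich_loading: float, lean_loading: float) -> float:
    """Net CO2 carried per absorption/desorption cycle."""
    return rich_loading - lean_loading

# Hypothetical numbers for illustration only.
rich = co2_loading(mass_co2_g=9.0, mol_absorbent=0.5)   # ~0.409
lean = co2_loading(mass_co2_g=2.2, mol_absorbent=0.5)   # ~0.100
print(round(cyclic_capacity(rich, lean), 3))            # ~0.309
```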

Analyze for the Quality Control of General X-ray Systems in Capital region (수도권지역 일반촬영 장비의 정도관리 분석)

  • Kang, Byung-Sam;Lee, Kang-Min;Shim, Woo-Yong;Park, Soon-Chul;Choi, Hak-Dong;Cho, Yong-Kwon
    • Journal of radiological science and technology / v.35 no.2 / pp.93-102 / 2012
  • With the rapidly increasing interest in the quality control of general X-ray systems, this study proposes a direction for quality control by comparing and inspecting the actual state of quality control in clinics, educational institutions, and hospitals. The subjects of the investigation were diagnostic radiation devices in clinics, educational institutions, and hospitals in the capital region. The tests covered kVp accuracy, mR/mAs output, reproducibility of exposure dose, half value layer, agreement between the light field and the beam alignment, and reproducibility of exposure time. The mean percentage difference, the coefficient of variation (CV), and the attenuation curve obtained from these tests were then computed, and the values were evaluated against the Diagnostic Radiation Equipment Safety Administration regulations. In the clinics and educational institutions, 22 general X-ray devices were examined: 18.2% failed the kVp test, 13.6% the exposure dose reproducibility test, 9.1% the mR/mAs output test, and 13.6% the half value layer (HVL) test. In the hospitals, 28 devices were examined: 7.1% failed the exposure dose reproducibility test, 7.1% the light field/beam alignment test, and 7.1% the exposure time reproducibility test. According to the investigation, quality control in the hospitals was in better condition than in the clinics and educational institutions, and the quality control of general X-ray devices in the clinics was unsatisfactory compared with the hospitals. A greater awareness of the importance of quality control is therefore needed.
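As a hedged sketch of one of the reproducibility checks mentioned in this abstract, the snippet below computes a coefficient of variation from repeated dose readings. The readings and the 0.05 pass threshold are assumptions for illustration; the actual limit should be taken from the applicable safety regulation.

```python
# Hedged sketch: coefficient of variation (CV) for an exposure-reproducibility
# test. The 0.05 threshold below is an assumed illustrative limit.
import statistics

def coefficient_of_variation(readings: list[float]) -> float:
    """CV = sample standard deviation / mean of repeated dose readings."""
    return statistics.stdev(readings) / statistics.mean(readings)

readings_mR = [12.1, 12.3, 11.9, 12.2, 12.0]  # hypothetical repeated exposures
cv = coefficient_of_variation(readings_mR)
print(f"CV = {cv:.3f} -> {'pass' if cv <= 0.05 else 'fail'}")
```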

Development of Carbonization Technology and Application of Unutilized Wood Wastes(II) - Carbonization and it's properties of wood-based materials - (미이용 목질폐잔재의 탄화 이용개발(II) - 수종의 목질재료 탄화와 탄화물의 특성 -)

  • Kong, Seog-Woo;Kim, Byung-Ro
    • Journal of the Korean Wood Science and Technology / v.28 no.2 / pp.57-65 / 2000
  • The objective of this research was to obtain fundamental data on carbonized wood wastes for use as a soil conditioner, deodorizer, water absorbent, carrier for microbial activity, and purifying agent for river water quality. The carbonization technique and the properties of carbonized wood wastes (wood-based materials) were analyzed. Proximate analysis showed that the wood-based materials contained 0.37~2.27% ash, 70~74% volatile matter, and 17~20% fixed carbon. As the carbonization temperature increased, the charcoal yield decreased; however, longer carbonization times made no difference in yield. The specific gravity after carbonization decreased by about 30~40% compared with green wood. The charcoal had 1.08~4.18% ash, 5.88~13.79% volatile matter, and 80.15~90.94% fixed carbon. The pH of charcoals made from plywood and particleboard (pH 9 at $400^{\circ}C$, pH 10 at $600^{\circ}C$ and $800^{\circ}C$) was higher than that of fiberboard charcoal. The water-retention capacity was not affected by carbonization temperature or time; within 24 h it was about 2~2.5 times the sample weight, and the equilibrium moisture content (EMC) reached 2~10% after 24 h. The EMC of charcoal from thinned trees was 9.40~11.82% ($20^{\circ}C$, RH 90%), 6.87~7.61% ($20^{\circ}C$, RH 65%), and 1.69~2.81% ($20^{\circ}C$, RH 25%). The EMC of charcoal from the wood-based materials at $20^{\circ}C$ and relative humidity (RH) 90% was similar to that of charcoal from the thinned trees (9~11%); however, at $20^{\circ}C$ and RH 25% and 65%, the EMC of charcoal from the wood-based materials was higher (by 2~3%) than that of charcoal from the thinned trees. Every charcoal from the wood-based materials fulfilled the criteria in JWWA K 113-1947.
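For readers unfamiliar with the quantities above, here is a minimal sketch, under assumed standard definitions (not the authors' procedure), of charcoal yield on an oven-dry basis and of fixed carbon computed by difference in a proximate analysis; all numbers are hypothetical.

```python
# Illustrative sketch (assumed textbook definitions, not the study's method):
# charcoal yield on an oven-dry basis and fixed carbon by difference.

def charcoal_yield(dry_char_g: float, dry_feed_g: float) -> float:
    """Charcoal yield (%) = oven-dry char mass / oven-dry feedstock mass * 100."""
    return 100.0 * dry_char_g / dry_feed_g

def fixed_carbon(ash_pct: float, volatile_pct: float) -> float:
    """Fixed carbon (%) by difference, as in standard proximate analysis."""
    return 100.0 - ash_pct - volatile_pct

print(charcoal_yield(dry_char_g=28.5, dry_feed_g=100.0))   # hypothetical run -> 28.5%
print(fixed_carbon(ash_pct=2.3, volatile_pct=9.5))          # -> 88.2%
```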


Debris flow characteristics and sabo dam function in urban steep slopes (도심지 급경사지에서 토석류 범람 특성 및 사방댐 기능)

  • Kim, Yeonjoong;Kim, Taewoo;Kim, Dongkyum;Yoon, Jongsung
    • Journal of Korea Water Resources Association / v.53 no.8 / pp.627-636 / 2020
  • Debris flow disasters primarily occur in mountainous terrain far from cities; as such, they have been underestimated as causing relatively little damage compared with other natural disasters. However, with urbanization, many residential areas and major facilities have been built in mountainous regions, and the frequency of debris flow disasters is steadily increasing as rainfall intensifies with environmental and climate change. Thus, the risk of debris flow is on the rise. Nevertheless, only a few studies have explored the flooding characteristics of, and reduction measures for, debris flow in areas designated as steep slopes. In this regard, it is necessary to conduct research on securing independent disaster prevention technology suitable for the environment and topographical characteristics of South Korea, and to update and improve disaster prevention information. Accordingly, this study aimed to calculate the amount of debris flow according to the disaster prevention performance targets for regions designated as steep slopes in South Korea, and to develop an independent model that not only evaluates the impact of debris flow but also identifies debris barriers that are optimal for mitigating damage. To validate the reliability of the two-dimensional debris flow model developed for the evaluation of debris barriers, the model's performance was compared with that of a hydraulic model. Furthermore, a 2-D debris flow model was constructed in consideration of the regional characteristics around the steep slopes to analyze the flow characteristics of the debris that directly reaches the damaged area. The flow characteristics of the debris delivered downstream were further analyzed according to the specifications (height) and installation locations of the debris barriers employed to reduce the damage. The results showed that the reliability of the developed model is satisfactory; further, the study confirmed significant performance degradation of debris barriers in areas where the barriers were installed at a slope of 20° or more, which is the slope range at which debris flows occur.

A Study on the Cause Analysis and Countermeasures of the Traditional Market for Fires in the TRIZ Method (TRIZ 기법에 의한 재래시장 화재의 원인분석과 대책에 관한 연구)

  • Seo, Yong-Goo;Min, Se-Hong
    • Fire Science and Engineering / v.31 no.4 / pp.95-102 / 2017
  • Fires in traditional markets have occurred frequently in recent years, and most of them have spread into large fires, causing very serious damage. The standing of traditional markets, which handle distribution for ordinary people, has shrunk greatly under the aggressive marketing of large domestic companies and large foreign distribution companies since the full opening of the domestic distribution market. Most traditional markets have histories and traditions ranging from decades to centuries and have grown steadily with the joys and sorrows of ordinary people and the development of the local economy. The likelihood of a fire growing into a large fire is high because virtually all goods on sale are flammable, facilities have deteriorated, equipment has been modified arbitrarily, and merchandise is crowded together. Furthermore, most stores are small, so the passages are narrow and pedestrian movement is restricted. Accordingly, traditional markets are vulnerable to fire because of these initially unplanned structural problems, and large-scale fire damage results. This study systematically classifies and analyzes fire risk factors by applying the TRIZ tool, in order to extract the fundamental problems underlying traditional market fires and to respond to them actively. On this basis, specific measures were prepared to prevent fires, to keep a fire from expanding into a large fire, and to minimize fire damage. Based on the derived fire-spread risk factors of traditional markets, the study presents passive measures such as improving fire resistance and creating fire safety islands, as well as active and institutional measures such as mandatory automatic fire notification facilities, application of extra-high-pressure pump systems, and separated use of electrical lines.

Estimation of TROPOMI-derived Ground-level SO2 Concentrations Using Machine Learning Over East Asia (기계학습을 활용한 동아시아 지역의 TROPOMI 기반 SO2 지상농도 추정)

  • Choi, Hyunyoung;Kang, Yoojin;Im, Jungho
    • Korean Journal of Remote Sensing / v.37 no.2 / pp.275-290 / 2021
  • Sulfur dioxide (SO2) in the atmosphere is mainly generated from anthropogenic emission sources. It forms ultra-fine particulate matter through chemical reactions and has harmful effects on both the environment and human health. In particular, ground-level SO2 concentrations are closely related to human activities. Satellite observations such as TROPOMI (TROPOspheric Monitoring Instrument)-derived column density data can provide spatially continuous monitoring of ground-level SO2 concentrations. This study proposes a 2-step residual corrected model to estimate ground-level SO2 concentrations through the synergistic use of satellite data and numerical model output. Random forest machine learning was adopted in the 2-step residual corrected model. The proposed model was evaluated through three cross-validations (random, spatial, and temporal). The model produced slopes of 1.14-1.25, R values of 0.55-0.65, and relative root-mean-square errors (rRMSE) of 58-63%; compared with the model without residual correction, the slopes improved by 10% and R and rRMSE by 3%. By country, model performance was slightly reduced in Japan, where the sample size was small and the concentration level relatively low, often resulting in overestimation. The spatial and temporal distributions of SO2 produced by the model agreed with those of the in-situ measurements, especially over the Yangtze River Delta in China and the Seoul Metropolitan Area in South Korea, which are highly dependent on the characteristics of anthropogenic emission sources. The model proposed in this study can be used for long-term monitoring of ground-level SO2 concentrations in both the spatial and temporal domains.
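The two-step residual correction described in this abstract can be sketched as follows. This is a hedged illustration under assumptions: the study may use out-of-fold residuals and different predictors and hyperparameters, and the feature matrix and target here are synthetic placeholders rather than the TROPOMI and numerical-model inputs.

```python
# Hedged sketch of a two-step residual-corrected regression: a first random
# forest predicts ground-level SO2, a second forest models its residuals, and
# the final estimate is the sum of the two predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 6))                     # placeholder predictors
y = X[:, 0] * 3 + rng.normal(0, 0.3, 500)    # synthetic ground-level SO2 target

step1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
residuals = y - step1.predict(X)

step2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, residuals)

def predict_so2(X_new):
    """Final estimate = base prediction + predicted residual correction."""
    return step1.predict(X_new) + step2.predict(X_new)

print(predict_so2(X[:5]))
```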

A Study on Rapid Color Difference Discrimination for Fabrics using Digital Imaging Device (디지털 화상 장치를 이용한 섬유제품류 간이 색차판별에 관한 연구)

  • Park, Jae Woo;Byun, Kisik;Cho, Sung-Yong;Kim, Byung-Soon;Oh, Jun-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.8 / pp.29-37 / 2019
  • Textile quality management covers both the physical properties of fabrics and subjective judgments of color and fit. Color is the most representative quality factor that consumers can use to evaluate quality without any instruments. For this reason, quantification using color discrimination devices has been used for statistical quality management in the textile industry. However, small and medium-sized domestic textile manufacturers rely only on visual inspection for color discrimination, so color judgments vary with individual inspectors' tendencies and work procedures. In this research, we evaluate the possibility of rapid color difference discrimination using a digital imaging device, a common office-automation instrument, with the aim of developing a quality management method friendly to the textile industry. The results show that an imaging-based color discrimination method is highly correlated with conventional color discrimination instruments ($R^2=0.969$) and is also applicable to field discrimination within the manufacturing process or across lots. Moreover, it is possible to identify quality management factors by analyzing the color components ${\Delta}L$, ${\Delta}a$, and ${\Delta}b$. With further elaboration and optimization, we expect this rapid discrimination method to serve as a substitute for conventional color discrimination instruments.
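The color components mentioned above correspond to differences in CIELAB coordinates. As a minimal sketch (an assumption about the workflow, not the authors' exact procedure), the snippet below computes ${\Delta}L$, ${\Delta}a$, ${\Delta}b$ and the CIE76 color difference ${\Delta}E$ from two hypothetical L*a*b* readings; converting a calibrated camera image to L*a*b* is a separate step not shown here.

```python
# Minimal sketch: CIE76 color difference between a reference fabric and a
# production sample, given their (L*, a*, b*) values.
import math

def delta_e_cie76(lab_ref, lab_sample):
    """Return (dL, da, db, dE) between two (L*, a*, b*) triples."""
    dL = lab_sample[0] - lab_ref[0]
    da = lab_sample[1] - lab_ref[1]
    db = lab_sample[2] - lab_ref[2]
    return dL, da, db, math.sqrt(dL**2 + da**2 + db**2)

reference = (52.3, 14.8, -30.1)   # hypothetical dyed-fabric standard
sample = (51.1, 15.6, -28.7)      # hypothetical production lot
dL, da, db, dE = delta_e_cie76(reference, sample)
print(f"dL={dL:.2f}, da={da:.2f}, db={db:.2f}, dE={dE:.2f}")
```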

A Study on the Fabrication and Comparison of the Phantom for CT Dose Measurements Using 3D Printer (3D프린터를 이용한 CT 선량측정 팬텀 제작 및 비교에 관한 연구)

  • Yoon, Myeong-Seong;Kang, Seong-Hyeon;Hong, Soon-Min;Lee, Youngjin;Han, Dong-Koon
    • Journal of the Korean Society of Radiology / v.12 no.6 / pp.737-743 / 2018
  • The patient exposure dose test, one of the quality control items for computed tomography, must be measured and recorded every year under the rules on the installation and operation of special medical equipment in Article 38 of the Medical Law. The CT-Dose phantom used for dosimetry measures doses accurately but has the disadvantage of a high price. In this research, therefore, a replica of the existing CT-Dose phantom was manufactured with a 3D printer and compared with the existing phantom to examine its usefulness. To reproduce the conventional CT-Dose phantom, an FFF-type 3D printer with PLA filament was used; to calculate the CTDIw value, ion chambers were inserted into the central and peripheral parts, and measurements were made ten times each. The CT-Dose phantom measured $30.44{\pm}0.31mGy$ at the periphery and $29.55{\pm}0.34mGy$ at the center, with a CTDIw of $30.14{\pm}0.30mGy$; the phantom fabricated with the 3D printer measured $30.59{\pm}0.18mGy$ at the periphery and $29.01{\pm}0.04mGy$ at the center, with a CTDIw of $30.06{\pm}0.13mGy$. Analysis using the Mann-Whitney U-test in SPSS showed a statistically significant difference in the central-part results, but no statistically significant differences in the peripheral-part or CTDIw results. In conclusion, the phantom made with the 3D printer showed dose measurement performance comparable to the existing CT-Dose phantom, confirming the feasibility of low-cost phantom production using a 3D printer.
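The CTDIw values above follow the standard weighting of center and peripheral chamber readings, which the reported means reproduce. A short check:

```python
# Weighted CT dose index: CTDIw = (1/3)*center + (2/3)*periphery.
# The readings below are the mean values reported in the abstract.

def ctdi_w(center_mGy: float, periphery_mGy: float) -> float:
    """CTDIw = 1/3 * CTDI100(center) + 2/3 * CTDI100(periphery)."""
    return center_mGy / 3 + 2 * periphery_mGy / 3

print(round(ctdi_w(29.55, 30.44), 2))  # commercial CT-Dose phantom -> 30.14 mGy
print(round(ctdi_w(29.01, 30.59), 2))  # 3D-printed phantom -> 30.06 mGy
```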

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors also began in Korea. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as likely to reoffend is lower than the cost of misclassifying a person who will reoffend as unlikely to reoffend, because the former adds only monitoring costs while the latter incurs additional social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model considering asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). In the next step, the classification threshold was optimized to minimize the total misclassification cost, defined as the weighted average of the false negative error (FNE) and the false positive error (FPE). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
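The cost-sensitive thresholding step described in this abstract can be sketched as follows. This is a hedged illustration: the dataset is synthetic, and the cost weights C_FN and C_FP are assumed values, not those used in the study.

```python
# Hedged sketch: train an XGBoost classifier, then choose the probability
# cut-off that minimizes a weighted sum of false negatives and false positives.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

C_FN, C_FP = 5.0, 1.0   # assumed: missing a recidivist costs more than over-flagging

def total_cost(threshold: float) -> float:
    """Weighted misclassification cost at a given probability threshold."""
    pred = (proba >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    return C_FN * fn + C_FP * fp

best = min(np.linspace(0.05, 0.95, 91), key=total_cost)
print(f"cost-minimizing threshold = {best:.2f}")
```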

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because instances carry multiple labels. In addition, as the number of labels and classes increases, the number of labels to be predicted grows, so prediction becomes more difficult and performance improvement becomes harder. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the gradient loss problem that occurs during backpropagation. To solve this problem, skip connections were devised: by adding a layer's input to its output, gradient loss during backpropagation is prevented and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, and multi-label classification was evaluated by restoring the predicted keyword vector to the original label space. As a result, multi-label classification based on the proposed methodology showed far superior performance in terms of accuracy, precision, recall, and F1 score compared with traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
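A minimal PyTorch sketch of the idea described above is given below, under assumptions about the architecture (layer sizes, a single additive skip connection from an encoder layer to the matching decoder layer); it is not the authors' exact network.

```python
# Hedged sketch: an autoencoder compresses a high-dimensional multi-label
# vector into a latent label space, with a skip connection adding an encoder
# activation to the matching decoder layer.
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    def __init__(self, n_labels: int, hidden: int = 256, latent: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(n_labels, hidden), nn.ReLU())
        self.enc2 = nn.Linear(hidden, latent)
        self.dec1 = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU())
        self.dec2 = nn.Linear(hidden, n_labels)

    def forward(self, y):
        h = self.enc1(y)                 # encoder hidden activation
        z = self.enc2(h)                 # low-dimensional latent label vector
        d = self.dec1(z) + h             # skip connection: reuse encoder features
        return torch.sigmoid(self.dec2(d)), z

model = SkipLabelAutoencoder(n_labels=1000)
y = torch.randint(0, 2, (8, 1000)).float()       # hypothetical multi-label batch
y_hat, z = model(y)
loss = nn.functional.binary_cross_entropy(y_hat, y)
loss.backward()
print(y_hat.shape, z.shape)
```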