• Title/Summary/Keyword: Hazard Function (위험함수)


Corrosion Rate of Structural Pipes for Greenhouse (온실 구조용 파이프의 부식속도 검토)

  • Yun, Sung-Wook;Choi, Man Kwon;Lee, Si Young;Moon, Sung Dong;Yoon, Yong Cheol
    • Journal of Bio-Environment Control / v.24 no.4 / pp.333-340 / 2015
  • Because soils in reclaimed lands near coastal areas have much higher salinity and moisture content than soils in inland areas, the parts of greenhouses embedded in such soils are exposed to highly corrosive environments. Owing to the accelerated corrosion of the galvanized steel pipes used for the substructure and structure of greenhouses in saline environments, repair and reinforcement technologies and efficient maintenance and management of the construction materials in such facilities are required. In this study, we measured the corrosion rates of parts used for greenhouse construction that are exposed to saline environments, to obtain a basic database for establishing maintenance and reinforcement standards for greenhouse construction in reclaimed lands with high-salinity soils. All the test pipes were exposed to soil and water environments with 0, 0.1, 0.3, and 0.5% salinity during an observation period of 480 days. At the end of the observation period, salinity-dependent differences in corrosion between black-surface corrosion and relatively regular corrosion were clearly visible in a visual assessment. For rice-paddy soils, the corrosion rate increased with salinity (0.008, 0.027, 0.036, and $0.043mm{\cdot}yr^{-1}$ at 0, 0.1, 0.3, and 0.5% salinity, respectively). The corresponding rates for agricultural-field soils were 0.0002, 0.039, 0.040, and $0.039mm{\cdot}yr^{-1}$. The higher corrosion rate in rice-paddy soil was associated with its relatively high proportion of fine particles, reflecting the general tendency of soils with evenly distributed fine particles. Hence, thorough measures should be taken to counteract pipe corrosion, given that, besides high salinity, the soils in reclaimed lands are expected to have a higher proportion of fine particles than those in inland rice paddies and agricultural fields.
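As a sanity check, rates like those above can be reproduced by annualizing a measured thickness loss over the 480-day exposure; the helper below is an illustrative sketch, not part of the study:

```python
# Illustrative helper (not from the study): annualize a thickness loss
# measured over an exposure period in days into a corrosion rate in mm/yr.
def corrosion_rate_mm_per_yr(thickness_loss_mm, exposure_days):
    """Convert a total thickness loss into an annual corrosion rate."""
    return thickness_loss_mm / (exposure_days / 365.0)

# A loss of about 0.057 mm over 480 days annualizes to ~0.043 mm/yr,
# the rate reported for rice-paddy soil at 0.5% salinity.
rate = corrosion_rate_mm_per_yr(0.057, 480)
```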

Quality Characteristics of Rough Rice during Low Temperature Drying (저온건조 중 벼의 품질 특성)

  • Kim, Hoon;Han, Jae-Woong
    • Food Science and Preservation / v.16 no.5 / pp.650-655 / 2009
  • This study measured the quality characteristics of rough rice during low temperature drying, using an experimental dryer and heat pump with a capacity of 150 kg at four temperature levels: 20, 30, 40, and $50^{\circ}C$. The quality and proper drying temperature of rough rice were investigated by measuring variations in moisture content, crack rates, germination rates, and cooked-rice quality. Temperatures over $40^{\circ}C$ are considered the high-temperature range, and those below $40^{\circ}C$ the low-temperature range. The drying rates were 0.3, 0.6, 0.9, and 1.3%/hr, and the crack ratios were 0, 1.6, 6.8, and 24.2% at drying temperatures of 20, 30, 40, and $50^{\circ}C$, respectively, showing that the higher the drying temperature, the higher the drying rate and crack rate. Therefore, 20 and $30^{\circ}C$ were found to be appropriate drying temperatures for avoiding crack formation, while $50^{\circ}C$ was inappropriate. At $40^{\circ}C$, the operation method needs to be modified to limit cracking, for example by increasing the tempering time. As the drying temperature increased, the germination rate decreased: germination rates at 20 and $30^{\circ}C$ were suitable for using the rough rice as seed, and those at 40 and $50^{\circ}C$ were over 80%, the minimum allowable percentage. In the sensory evaluation of cooked rice, the quality of appearance, taste, and texture varied with drying temperature; rice dried at 20 and $30^{\circ}C$ was better than rice dried at higher temperatures. Consequently, considering drying temperature and rate, the best conditions for drying rough rice were below $30^{\circ}C$ and below 0.6%/hr.
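The drying rate in %/hr is simply the moisture-content drop divided by drying time; the sketch below is illustrative, with hypothetical moisture readings chosen to match the reported 0.6 %/hr rate at 30 degrees C:

```python
# Illustrative helper (not from the study): average drying rate as the
# moisture-content drop in percentage points divided by drying time.
def drying_rate_pct_per_hr(mc_start_pct, mc_end_pct, hours):
    return (mc_start_pct - mc_end_pct) / hours

# Hypothetical readings: 24% down to 15% moisture over 15 hours gives
# 0.6 %/hr, the rate the study reports at 30 degrees C.
rate = drying_rate_pct_per_hr(24.0, 15.0, 15.0)
```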

A study on the optimization of tunnel support patterns using ANN and SVR algorithms (ANN 및 SVR 알고리즘을 활용한 최적 터널지보패턴 선정에 관한 연구)

  • Lee, Je-Kyum;Kim, YangKyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.6 / pp.617-628 / 2022
  • A ground support pattern should be designed by properly combining various support materials in accordance with the rock mass grade when constructing a tunnel, and a technical decision must be made in this process by professionals with extensive construction experience. However, designing supports at the early stages of tunnel design, such as the feasibility study or basic design, can be very challenging due to the short timeline, insufficient budget, and lack of field data. Meanwhile, with the rapid increase in tunnel construction in South Korea, the design of the support pattern can be performed more quickly and reliably by utilizing machine learning techniques and the accumulated design data. Therefore, in this study, the design data and ground exploration data of 48 road tunnels in South Korea were inspected, and data on 19 items were collected to automatically determine the rock mass class and the support pattern: eight input items (rock type, resistivity, depth, tunnel length, safety index by tunnel length, safety index by rock index, tunnel type, tunnel area) and 11 output items (rock mass grade, two items for shotcrete, three items for rock bolts, three items for steel supports, two items for concrete lining). Three machine learning models (S1, A1, A2) were developed using two machine learning algorithms (SVR, ANN) and the organized data. As a result, the A2 model, which applied different loss functions according to the output data format, showed the best performance. This study confirms the potential of support pattern design using machine learning, and it is expected that the design model can be improved by continuously using it in actual design, compensating for its shortcomings, and improving its usability.

Development of disaster severity classification model using machine learning technique (머신러닝 기법을 이용한 재해강도 분류모형 개발)

  • Lee, Seungmin;Baek, Seonuk;Lee, Junhak;Kim, Kyungtak;Kim, Soojun;Kim, Hung Soo
    • Journal of Korea Water Resources Association / v.56 no.4 / pp.261-272 / 2023
  • In recent years, natural disasters such as heavy rainfall and typhoons have occurred more frequently, and their severity has increased due to climate change. To reduce damage, the Korea Meteorological Administration (KMA) currently issues watches and warnings using the same criteria for all regions in Korea, based on the maximum cumulative rainfall over 3-hour and 12-hour durations. However, the KMA's criteria do not consider the regional characteristics of the damage caused by heavy rainfall and typhoon events. It is therefore necessary to develop new criteria that consider regional damage characteristics and cumulative rainfall over various durations, with four stages: blue, yellow, orange, and red. A classification model for the four-stage disaster severity, called DSCM (Disaster Severity Classification Model), was developed using four machine learning models (Decision Tree, Support Vector Machine, Random Forest, and XGBoost). This study applied DSCM to local governments in Seoul, Incheon, and Gyeonggi Province. To develop DSCM, we used rainfall, cumulative rainfall, maximum rainfall over 3-hour and 12-hour durations, and antecedent rainfall as independent variables, and a 4-class damage scale for heavy rain and typhoon damage in each local government as the dependent variable. As a result, the Decision Tree model had the highest accuracy, with an F1-Score of 0.56. We believe the developed DSCM can help identify disaster risk at each stage and contribute to reducing damage through efficient disaster management by local governments.
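The four-stage classification task can be sketched with a decision tree scored by a macro F1 metric; the features and labels below are synthetic stand-ins, not the study's rainfall and damage records:

```python
# Synthetic stand-in for the 4-stage severity task: five rainfall-like
# features and a 4-class label (blue/yellow/orange/red). Purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.random((400, 5))   # e.g. cumulative, 3-h max, 12-h max, antecedent rainfall, ...
# Severity driven (noisily) by the first feature, binned into 4 classes.
y = np.digitize(X[:, 0] + 0.05 * rng.standard_normal(400), [0.25, 0.5, 0.75])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
macro_f1 = f1_score(y, clf.predict(X), average="macro")
```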

Optimal Monetary Policy System for Both Macroeconomics and Financial Stability (거시경제와 금융안정을 종합 고려한 최적 통화정책체계 연구)

  • Joonyoung Hur;Hyoung Seok Oh
    • KDI Journal of Economic Policy / v.46 no.1 / pp.91-129 / 2024
  • The Bank of Korea, through a legal amendment in 2011 following the financial crisis, was entrusted with the additional mandate of financial stability beyond its existing mandate of price stability. Since then, concerns have been raised that the prolonged increase in household debt relative to income could constrain consumption and growth and raise the possibility of a crisis in the event of negative economic shocks. The current accumulation of financial imbalances suggests a critical period in which the government and the central bank must be more vigilant, so as not to impede the stable functioning of the financial and economic systems. This study examines the applicability of the Integrated Inflation Targeting (IIT) framework proposed by the Bank for International Settlements (BIS) for macro-financial stability in promoting long-term economic stability. Using VAR models, the study reveals a clear increase in risk appetite following the interest rate cuts after the financial crisis, leading to a rise in household debt. In addition, an analysis of the central bank's conduct of monetary policy from 2000 to 2021 using DSGE models indicates that the Bank of Korea has operated a form of IIT, considering both inflation and growth in its policy decisions, with some responsiveness to the increase in household debt; however, the high estimated interest rate smoothing coefficient suggests a cautious approach to interest rate adjustments. Furthermore, estimating the optimal interest rate rule that minimizes the central bank's loss function shows that a policy considering inflation and growth while remaining mindful of household credit conditions is superior: actively adjusting the benchmark interest rate in response to changing economic conditions, and paying attention to household credit when household debt is increasing rapidly relative to income, emerges as the desirable policy approach.
Based on these findings, we conclude that the integrated inflation targeting framework proposed by the BIS could be considered as an alternative policy framework that supports stable growth of the economy over the medium to long term.
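The estimated policy behavior described above (interest rate smoothing plus responses to inflation, growth, and household credit) can be written generically as an augmented Taylor rule with smoothing; this is an illustrative form, not the paper's exact specification:

$$ i_t = \rho\, i_{t-1} + (1-\rho)\left( \bar{i} + \phi_\pi \pi_t + \phi_y y_t + \phi_b \Delta b_t \right) $$

where $i_t$ is the policy rate, $\rho$ the smoothing coefficient (estimated to be high in the paper), $\pi_t$ the inflation gap, $y_t$ the output gap, and $\Delta b_t$ household credit growth.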

Geological Engineering Considerations on the Preservation of the Muryong Royal Tomb (무령왕릉보존에 있어서의 지질공학적 고찰)

  • 서만철;최석원;구민호
    • Proceedings of the KSEEG Conference / 2001.05b / pp.42-63 / 2001
  • A detailed survey of the Songsanri tomb site, including the Muryong royal tomb, was carried out from May 1, 1996 to April 30, 1997. A quantitative analysis was attempted to identify changes in the tombs since the excavation. The main subjects of the survey were to find the causes of infiltration of rainwater and groundwater into the tombs and the tomb site, to monitor the movement and safety of the tomb structures, to find a removal method for the algae inside the tombs, and to design an air-control system to solve the high humidity and dew inside the tombs. For these purposes, detailed surveys inside and outside the tombs using an electronic distance meter and a small airplane, monitoring of temperature and humidity, geophysical exploration including electrical resistivity, geomagnetic, gravity, and georadar methods, drilling, measurement of the physical and chemical properties of drill cores, and measurement of groundwater permeability were conducted. We found that the center of the subsurface tomb and the center of the soil mound on the ground differ by 4.5 m and 5 m for the 5th and 7th tombs, respectively; this has caused unequal stress on the tomb structures. In the 7th tomb (the Muryong royal tomb), 435 of 6,025 bricks were broken in 1972, but 1,072 bricks were broken in 1996; the breakage thus increased by about 250% in just 24 years, and by about 290% in the 6th tomb. The situation in 1996 is the result of just 24 years, whereas the situation in 1972 was the result of about 1,450 years. The status of brick breakage indicates that a severe problem is under way. The eastern wall of the Muryong royal tomb is moving toward the inside of the tomb at a rate of 2.95 mm/yr in the rainy season and 1.52 mm/yr in the dry season. The frontal wall shows the biggest movement in the 7th tomb, at a rate of 2.05 mm/yr toward the passageway.
The 6th tomb shows the biggest movement among the three tombs, at rates of 7.44 mm/yr and 3.61 mm/yr toward the east, consistent with its high brick-breakage rate. Georadar sections of the shallow soil layer reveal several faults in the topsoil layer of the 5th and 7th tombs. Rainwater flowed through these faults into the tombs and the nearby ground, and the high water content of the nearby ground resulted in low resistivity and high humidity inside the tombs. The high humidity inside the tombs, together with high temperature and a moderate light source, created good conditions for algae growth; the 6th tomb is in the most severe condition in this respect, and the 7th tomb is second. Artificial changes to the tomb environment since the excavation, infiltration of rainwater and groundwater into the tomb site, and a bad drainage system have put the tomb structures in a dangerous state. The main cause of many of the problems, including brick breakage, movement of the tomb walls, and algae growth, is the infiltration of rainwater and groundwater into the tomb site. Therefore, protecting the tomb site from high water content should be carried out first. The waterproofing method includes a cover system over the tomb site using geotextile, a clay layer, and a geomembrane, and a deep trench, 2 m below the base of the 5th tomb, at the north of the tomb site. Decreasing and balancing the soil weight above the tombs are also needed for the safety of the tomb structures. For the algae inside the tombs, we recommend spraying K101, which was developed in this study, on the wall surfaces and then exposing them to ultraviolet light sources for 24 hours. The air-handling system should be changed to a constant-temperature-and-humidity system for the 6th and 7th tombs; it seems much better to place the system in the frontal room and to circulate cold air inside the tombs to solve the dew problem. The preservation methods mentioned above are suggested so as to cause the least change to the tomb site and to solve the most fundamental problems.
Repairs should be planned in order, and special care is needed for the safety of the tombs during the repair work. Finally, a monitoring system measuring the tilting of the tomb walls, water content, groundwater level, temperature, and humidity is required to monitor and evaluate the repair work.


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values from a specific position and recognizes features; however, since business data fields are usually independent, the distance between fields does not matter. In this experiment, we therefore set the CNN filter size to the number of fields, so the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout.
Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well in a binary classification setting to which they have rarely been applied, in addition to the fields where their effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
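The dropout setting used in the experiment (dropping each hidden neuron with probability 0.5) can be sketched in a few lines of numpy; this inverted-dropout form is a common generic implementation, not necessarily the authors' code:

```python
# Inverted dropout with p = 0.5, the per-hidden-layer setting used in the
# experiment. Surviving units are rescaled so the expected activation is
# unchanged between training and inference.
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Zero each unit with probability p; scale survivors by 1/(1-p)."""
    if not training:
        return activations          # at inference, dropout is a no-op
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones((4, 8))                 # a toy hidden-layer activation
out = dropout(h, p=0.5)             # surviving units are scaled to 2.0
```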

The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok;Yoo, Hee-Jae;Ryu, Jae-Kwang;Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.67-72 / 2010
  • Purpose: At Asan Medical Center, we perform myocardial perfusion SPECT to evaluate the cardiac event risk of patients before non-cardiac surgery. For patients with cancer, we first check for tumor metastasis using a whole-body bone scan and a whole-body PET scan and then perform myocardial perfusion SPECT, to avoid unnecessary examinations. For short-stay inpatients, we perform $^{201}Tl$ myocardial perfusion SPECT a minimum of 16 hours after the whole-body bone scan in order to shorten the hospitalization period, but the effect of the crosstalk contamination caused by administering two different isotopes has not been properly evaluated. In this study, we therefore evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT. Patients were classified according to whether a whole-body bone scan had been performed the previous day. Image data were obtained using a dual energy window during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ to $^{99m}Tc$ counts was analyzed for each patient group. For the phantom experiment, we used an anthropomorphic torso phantom, administering $^{201}Tl$ 14.8 MBq (0.4 mCi) to the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) to the extracardiac region. Images were acquired by $^{201}Tl$ myocardial perfusion SPECT without gating, and spatial resolution was analyzed using Xeleris ver 2.0551.
Results: For patients who had a whole-body bone scan the previous day, the ratio of counts in the $^{201}Tl$ window to counts in the $^{99m}Tc$ window decreased exponentially with the elapsed time after bone tracer injection, from 1:0.411 at 12 hours to 1:0.114 at 24 hours with the Ventri camera (GE Healthcare, Wisconsin, USA), and from 1:0.79 to 1:0.249 with the Infinia camera (GE Healthcare, Wisconsin, USA) (Ventri p=0.001, Infinia p=0.001). For patients without a whole-body bone scan, the ratio averaged 1:$0.067{\pm}0.6$ with Ventri and 1:$0.063{\pm}0.7$ with Infinia. In the phantom experiment, spatial resolution measurements showed no significant change in FWHM with the addition of $^{99m}Tc$ or with elapsed time (p=0.134). Conclusion: Through the experiments using the anthropomorphic torso phantom and patient data, we found that $^{201}Tl$ myocardial perfusion SPECT images acquired 16 hours or more after bone tracer injection show no notable influence of $^{99m}Tc$ on spatial resolution. However, this investigation addressed image quality only, so further investigation of patient radiation dose and of examination accuracy and precision is needed. An exact guideline on the examination interval should be based on a rigorous, standardized validation of the effect of crosstalk contamination when different isotopes are used.
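A back-of-the-envelope check (not from the paper) of why a 16-hour interval helps: $^{99m}Tc$ decays with a half-life of about 6.0 hours, so after 16 hours only about 16% of the injected activity remains to spill into the $^{201}Tl$ window:

```python
# Illustrative decay calculation; the ~6.0-hour half-life of 99mTc is a
# physical constant, but the paper itself does not present this computation.
def remaining_fraction(hours, half_life_h=6.0):
    """Fraction of initial 99mTc activity remaining after `hours`."""
    return 0.5 ** (hours / half_life_h)

frac_16h = remaining_fraction(16)   # ~0.16 of the activity remains
```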
