• Title/Summary/Keyword: system engineering


Analysis of media trends related to spent nuclear fuel treatment technology using text mining techniques (텍스트마이닝 기법을 활용한 사용후핵연료 건식처리기술 관련 언론 동향 분석)

  • Jeong, Ji-Song;Kim, Ho-Dong
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.33-54 / 2021
  • With the Fourth Industrial Revolution and the arrival of the New Normal era brought on by COVID-19, non-contact technologies such as artificial intelligence and big data research have grown in importance. Convergence research is being conducted in earnest to keep up with these trends, but few studies in the nuclear field have applied artificial intelligence and big data techniques such as natural language processing and text mining. This study was conducted to confirm the applicability of data science analysis techniques to nuclear research. Furthermore, identifying trends in the public perception of spent nuclear fuel is critical, because it makes it possible to set directions for nuclear industry policy and to respond in advance to changes in industrial policy. For these reasons, this study conducted a media trend analysis of pyroprocessing, a spent nuclear fuel dry treatment technology. We objectively analyzed changes in media perception of spent nuclear fuel dry treatment technology by applying text mining techniques. Text data were collected with Python code from Naver web news articles containing the keywords "Pyroprocessing" and "Sodium Cooled Reactor" to identify changes in perception over time. The analysis period was set from 2007, when the first article was published, to 2020, and the text data were analyzed in detail and in multiple layers through methods such as word cloud generation based on frequency analysis, TF-IDF, and degree centrality calculation. Keyword frequency analysis showed that media perception of spent nuclear fuel dry treatment technology changed in the mid-2010s, influenced by the 2016 Gyeongju earthquake and the new government's energy transition policy implemented in 2017. Trend analysis was therefore conducted for the corresponding periods, and word frequencies, TF-IDF, degree centrality values, and semantic network graphs were derived. The results show that before the mid-2010s, media perception of spent nuclear fuel dry treatment technology was diplomatic and positive. Over time, however, the frequency of keywords such as "safety", "reexamination", "disposal", and "disassembly" increased, indicating that the sustainability of spent nuclear fuel dry treatment technology was being seriously questioned. It was also confirmed that social awareness changed as the technology, once recognized as a political and diplomatic asset, became ambiguous due to changes in domestic policy. This means that domestic policy changes, such as nuclear power policy, have a greater impact on media perception than issues of spent nuclear fuel processing technology itself, presumably because nuclear policy is a more widely discussed and publicly accessible topic than spent nuclear fuel. Therefore, improving social awareness of spent nuclear fuel processing technology will require providing sufficient information about it, and linking it to nuclear policy issues would also be worthwhile. In addition, the study highlights the importance of social science research on nuclear power: applying the social sciences broadly to nuclear engineering, and taking national policy changes into account, can help keep the nuclear industry sustainable.
However, this study has the limitation that big data analysis methods were applied only to a narrow research area, namely "Pyroprocessing," a spent nuclear fuel dry treatment technology. Furthermore, no clear basis was established for the cause of the change in social perception, and only news articles were analyzed to gauge it. If future work also considers reader comments and extends the media trend analysis to nuclear power as a whole, more reliable results are expected that can be used efficiently in nuclear policy research. The academic significance of this study is that it confirmed the applicability of data science analysis techniques in the field of nuclear research. Furthermore, as current government energy policies such as nuclear power plant reduction prompt a re-evaluation of spent fuel treatment technology research, analysis of the key keywords in the field can help orient future research. It is important to consider outside views, not just the safety and engineering integrity of nuclear power, and to reconsider whether it is appropriate to discuss nuclear engineering technology only internally. In addition, if multidisciplinary research on nuclear power is carried out, reasonable alternatives can be prepared to sustain the nuclear industry.
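As an illustration of the analysis pipeline described above (frequency analysis, TF-IDF, and degree centrality over a term co-occurrence network), here is a minimal sketch in Python using scikit-learn and NetworkX. The sample documents are hypothetical stand-ins, not the Naver news corpus, and this is not the study's actual code.

```python
# Minimal illustrative sketch: TF-IDF and degree centrality over a few
# hypothetical news snippets (not the study's data or code).
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import networkx as nx

docs = [  # hypothetical stand-ins for collected news articles
    "pyroprocessing safety reexamination disposal",
    "sodium cooled reactor pyroprocessing research",
    "energy policy safety disposal earthquake",
]

# Term-level TF-IDF scores summed over the corpus
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
for term, score in zip(tfidf.get_feature_names_out(), X.sum(axis=0).A1):
    print(f"{term:15s} tf-idf sum = {score:.3f}")

# Co-occurrence network: connect terms appearing in the same document,
# then compute degree centrality for each term.
cv = CountVectorizer(binary=True)
B = cv.fit_transform(docs)
cooc = (B.T @ B).toarray()          # term-by-term co-occurrence counts
terms = cv.get_feature_names_out()
G = nx.Graph()
G.add_nodes_from(terms)
for i in range(len(terms)):
    for j in range(i + 1, len(terms)):
        if cooc[i, j] > 0:
            G.add_edge(terms[i], terms[j], weight=int(cooc[i, j]))
print(nx.degree_centrality(G))
```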

Analysis and Implication on the International Regulations related to Unmanned Aircraft -with emphasis on ICAO, U.S.A., Germany, Australia- (세계 무인항공기 운용 관련 규제 분석과 시사점 - ICAO, 미국, 독일, 호주를 중심으로 -)

  • Kim, Dong-Uk;Kim, Ji-Hoon;Kim, Sung-Mi;Kwon, Ky-Beom
    • The Korean Journal of Air & Space Law and Policy / v.32 no.1 / pp.225-285 / 2017
  • With regard to regulations on the RPA (Remotely Piloted Aircraft), which some countries call the UA (Unmanned Aircraft), ICAO stipulates detailed rules in the 'RPAS Manual (2015)' based on the 1944 'Chicago Convention' and enacts provisions governing UAS or RPAS. Other countries stipulate them in instruments such as the Federal Aviation Regulations (14 CFR) and Public Law 112-95 in the United States; the Air Transport Act, Air Transport Order, and Air Transport Authorization Order (through the revision in the "Regulations on Operating Rules for Unmanned Aerial Systems"), based on EASA Regulation (EC) No. 216/2008, for unmanned aircraft under 150 kg in Germany; and the Civil Aviation Act (CAA 1998) and Civil Aviation Safety Regulations Part 101 (CASR Part 101) in Australia. These laws commonly exclude model aircraft flown for leisure and require pilots on the ground, not on board the aircraft, who are capable of controlling the RPA. They also require that all management necessary to operate RPAs and their pilots safely and efficiently be carried out within the structure of the unmanned aircraft system and within the scope of the regulations. Each country defines a basic RPA class of less than 25 kg, and Australia and Germany further subdivide RPAs at lower weights. ICAO stipulates all general aviation operations, including commercial operations, in accordance with Annex 6 of the Chicago Convention, and this also applies to RPA operations; however, passenger transportation using RPAs is excluded. If the operational scope of an RPA includes the airspace of another country, special permission from that country is required 7 days before the flight date, with a detailed flight plan submitted. In accordance with Federal Aviation Regulation Part 107 in the United States, a small non-leisure RPA may be operated within the line of sight of a responsible pilot or observer during the day, at speeds up to 161 km/h (87 knots) and heights up to 122 m (400 ft) above the surface or water. The RPA must yield the flight path to other aircraft, and carrying dangerous materials or operating more than one RPA at the same time is prohibited. In Germany, the regulations on UAS other than those for leisure and sports impose a duty to avoid airborne collisions and contain other provisions related to ground safety and individual privacy. Although commercial UAS of 5 kg or less can be operated freely without approval under the relaxed regulatory requirements, all UAS, regardless of weight, must be operated below an altitude of 100 meters with continuous monitoring and pilot control. Australia was the first country to regulate unmanned aircraft, in 2001, and its regulations have influenced the unmanned aircraft rules of ICAO, the FAA, and EASA. In order to improve the utility of unmanned aircraft considered to be low risk, the regulatory conditions were relaxed through the 2016 revision by adding the concept of the "Excluded RPA", which can be operated without special permission even for commercial purposes. Furthermore, discussions on a new standards manual are being conducted to give the current regulations further flexibility.


Estimation of Internal Motion for Quantitative Improvement of Lung Tumor in Small Animal (소동물 폐종양의 정량적 개선을 위한 내부 움직임 평가)

  • Yu, Jung-Woo;Woo, Sang-Keun;Lee, Yong-Jin;Kim, Kyeong-Min;Kim, Jin-Su;Lee, Kyo-Chul;Park, Sang-Jun;Yu, Ran-Ji;Kang, Joo-Hyun;Ji, Young-Hoon;Chung, Yong-Hyun;Kim, Byung-Il;Lim, Sang-Moo
    • Progress in Medical Physics / v.22 no.3 / pp.140-147 / 2011
  • The purpose of this study was to estimate internal motion using a molecular sieve target for quantitative improvement of lung tumor imaging, and to localize lung tumors in small-animal PET images using the evaluated data. Internal motion in the small-animal lung region was demonstrated with a molecular sieve containing a radioactive substance. The molecular sieve used as the internal lung motion target contained approximately 37 kBq of Cu-64. The small-animal PET images were obtained on a Siemens Inveon scanner using an external trigger system (BioVet). SD rat PET images were acquired for 20 min starting 60 min after injection of 37 MBq/0.2 mL of FDG via the tail vein. Each line of response in the list-mode data was sorted into gated sinogram frames (2~16 bins) using the trigger signal obtained from the BioVet. The sinogram data were reconstructed using 2D OSEM with 4 iterations. PET images were evaluated in terms of counts, SNR, and FWHM from an ROI drawn in the target region for quantitative tumor analysis. The size of the molecular sieve motion target was $1.59{\times}2.50mm$. The vertical and horizontal FWHM of the reference motion target were 2.91 mm and 1.43 mm, respectively. The vertical FWHM for static, 4-bin, and 8-bin imaging was 3.90 mm, 3.74 mm, and 3.16 mm, respectively, and the horizontal FWHM was 2.21 mm, 2.06 mm, and 1.60 mm, respectively. The counts for static, 4-bin, 8-bin, 12-bin, and 16-bin imaging were 4.10, 4.83, 5.59, 5.38, and 5.31, respectively, and the SNR was 4.18, 4.05, 4.22, 3.89, and 3.58, respectively. The FWHM improved as the number of gates increased. The counts and SNR did not improve proportionally with the number of gates but showed their highest values at specific bin numbers. We determined the optimal gate number that minimizes SNR loss while gaining improved counts when imaging lung tumors in small animals. The internal motion estimation provides localized tumor images and will be a useful method for organ motion prediction modeling without an external motion monitoring system.
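The gating benefit above is quantified with FWHM and SNR measured from an ROI. A minimal sketch of how such figures of merit might be computed from a 1D intensity profile and ROI/background counts is shown below; the profile values, pixel size, counts, and the particular SNR definition are hypothetical assumptions, and this is not the authors' analysis code.

```python
# Illustrative sketch: FWHM of a 1D profile (linear interpolation at half
# maximum) and a simple SNR estimate from ROI/background counts.
import numpy as np

def fwhm(profile, pixel_size_mm):
    """Full width at half maximum of a 1D profile, in mm."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the half-maximum crossings on each side of the peak
    x_left = left - (p[left] - half) / (p[left] - p[left - 1]) if left > 0 else float(left)
    x_right = right + (p[right] - half) / (p[right] - p[right + 1]) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * pixel_size_mm

def snr(roi_counts, background_counts):
    """Mean ROI signal divided by the background standard deviation."""
    return np.mean(roi_counts) / np.std(background_counts)

# Hypothetical gated profile across a motion target (0.4 mm pixels)
profile = [1, 2, 5, 12, 20, 24, 21, 13, 6, 2, 1]
print(f"FWHM ~ {fwhm(profile, pixel_size_mm=0.4):.2f} mm")
print(f"SNR  ~ {snr([5.6, 5.4, 5.8], [1.1, 1.4, 0.9, 1.3]):.2f}")
```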

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts that confront decision making in keyword bidding: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, from the sponsor's perspective, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio, validated through empirical tests so that it can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates, along with practical management problems. Sponsors make keyword bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model is designed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of this new SSA model and its applicability to constructing an optimal portfolio in keyword bidding. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are performed empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show significant improvements, confirming that the suggested SSA model is valid for constructing the keyword portfolio with the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit at present.
To solve this problem, a Markov Chain analysis is carried out and the concepts of the Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: Brand keywords are usually dominant in almost every aspect, including CTR, CVR, and expected profit. It is now found that Generic keywords are the CTK and have spillover potential that can increase consumers' awareness and lead them to Brand keywords; this is why Generic keywords should be a focus of keyword bidding. The contributions of this paper are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to keyword characteristics, to propose statistical modeling and management based on Rank in constructing the keyword portfolio, to perform empirical tests and propose new strategic guidelines focusing on the CTK, and to propose a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected profit models.
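To make the role of Rank as the decision variable concrete, the following is a minimal, hypothetical sketch of portfolio construction under this framing: one rank is chosen per keyword to maximize expected clicks subject to a spend cap. The keywords, CTR/CPC figures, impression counts, and budget are invented for illustration and are not the paper's data or model.

```python
# Hypothetical sketch: choose one rank per keyword to maximize expected
# clicks under a total CPC spend cap (brute-force enumeration).
from itertools import product

# keyword -> {rank: (estimated CTR, estimated CPC in KRW)}
keywords = {
    "brand_kw":    {1: (0.120, 900), 2: (0.080, 600), 3: (0.050, 400)},
    "generic_kw":  {1: (0.040, 700), 2: (0.030, 450), 3: (0.020, 300)},
    "longtail_kw": {1: (0.015, 200), 2: (0.010, 150), 3: (0.007, 100)},
}
impressions = {"brand_kw": 10_000, "generic_kw": 30_000, "longtail_kw": 50_000}
budget = 2_000_000  # daily spend cap (KRW), hypothetical

best = None
names = list(keywords)
for choice in product(*(keywords[k] for k in names)):   # one rank per keyword
    clicks = spend = 0.0
    for k, rank in zip(names, choice):
        ctr, cpc = keywords[k][rank]
        kw_clicks = impressions[k] * ctr
        clicks += kw_clicks
        spend += kw_clicks * cpc
    if spend <= budget and (best is None or clicks > best[0]):
        best = (clicks, dict(zip(names, choice)), spend)

print("expected clicks:", round(best[0]), "| ranks:", best[1], "| spend:", round(best[2]))
```

In practice the per-rank CTR and CPC values would come from the estimation models selected through the GOF test, and the objective could be switched to expected profit (the CVR model) within the same structure.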

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few cases of business models realized on the basis of big data analyses. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare performance among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, many researchers have tried to find better models to help predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The development of prediction models for financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is still widely used in both research and practice. That model uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of the previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy of 71.1%, and the Logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guarantee company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: higher accuracy for both the 0~10% and 90~100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across the predicted probability intervals, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
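The interval-wise comparison described above can be prototyped as in the hedged sketch below; the synthetic data, the two models shown, and the decile edges are illustrative assumptions rather than KSURE's data or the paper's configuration (XGBoost, LightGBM, and a DNN could be added in the same loop).

```python
# Illustrative sketch (synthetic data): compare classifiers and report
# accuracy within each 10%-wide interval of predicted default probability.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]          # predicted default probability
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: overall accuracy = {(pred == y_te).mean():.3f}")
    bins = np.digitize(proba, np.arange(0.1, 1.0, 0.1))  # bins 0..9 -> deciles
    for b in range(10):
        mask = bins == b
        if mask.any():
            acc = (pred[mask] == y_te[mask]).mean()
            print(f"  {b*10:3d}~{(b+1)*10}%: n={mask.sum():4d}, accuracy={acc:.3f}")
```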

Hydrochemistry and Noble Gas Origin of Various Hot Spring Waters from the Eastern area in South Korea (동해안지역 온천유형별 수리화학적 특성 및 영족기체 기원)

  • Jeong, Chan-Ho;Nagao, Keisuke;Kim, Kyu-Han;Choi, Hun-Kong;Sumino, Hirochika;Park, Ji-Sun;Park, Chung-Hwa;Lee, Jong-Ig;Hur, Soon-Do
    • Journal of Soil and Groundwater Environment / v.13 no.1 / pp.1-12 / 2008
  • The purpose of this study is to characterize the hydrogeochemistry of hot spring waters and to interpret the source of noble gases and the geochemical environment of the hot springs distributed along the eastern area of the Korean peninsula. For this purpose, we carried out chemical, stable isotope, and noble gas isotope analyses of eleven hot spring water samples and fourteen hot spring gas samples collected from six hot spring sites. The hot spring waters, except the Osaek hot spring water, show a pH range of 7.0 to 9.1, whereas the Osaek $CO_2$-rich hot spring water is weakly acidic at pH 5.7. The temperature of the hot spring waters in the study area ranges from $25.7^{\circ}C$ to $68.3^{\circ}C$. Electrical conductivity varies widely from 202 to $7,130{\mu}S/cm$. The high electrical conductivity (average $3,890{\mu}S/cm$) caused by high Na and Cl contents of the Haeundae and Dongrae hot spring waters indicates that these waters were mixed with seawater in the subsurface thermal system. In terms of dissolved components, the hot springs can be grouped into three types: (1) an alkaline Na-$HCO_3$ type with sulfur gas (the Osaek, Baekam, Dukgu, and Chuksan hot springs), (2) a saline Na-Cl type (the Haeundae and Dongrae hot springs), and (3) a weakly acidic $CO_2$-rich Na-$HCO_3$ type (the Osaek hot spring). Tritium ratios of the Haeundae and Dongrae hot springs indicate different residence times in their aquifers, with older water of $0.0{\sim}0.3$ TU and younger water of $5.9{\sim}8.8$ TU. The ${\delta}^{18}O$ and ${\delta}D$ values of the hot spring waters indicate that they originate from meteoric water and also reflect a latitude effect according to their locations. The $^3He/^4He$ ratios of the hot spring waters, except the Osaek $CO_2$-rich hot spring water, range from $0.1{\times}10^{-6}$ to $1.1{\times}10^{-6}$ and plot above the mixing line between air and crustal components. This means that the He gas in the hot spring waters originated mainly from atmospheric and crustal sources, and partly from mantle sources. The Osaek $CO_2$-rich hot spring water shows a $^3He/^4He$ ratio of $3.3{\times}10^{-6}$, about 2.4 times the atmospheric ratio, which clearly indicates a helium contribution from the deep mantle. The $^{40}Ar/^{36}Ar$ ratios of the hot spring waters are in the range of an atmospheric source.
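As a quick check of the mantle signature quoted above, the measured $^3He/^4He$ ratio can be normalized by the atmospheric ratio; assuming the standard literature value $R_a \approx 1.39 \times 10^{-6}$ (not stated in the abstract):

```latex
\frac{R_{\mathrm{Osaek}}}{R_a} = \frac{3.3\times10^{-6}}{1.39\times10^{-6}} \approx 2.4
```

which matches the factor of about 2.4 reported for the Osaek $CO_2$-rich spring.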

Evaluation of Cryptosporidium Disinfection by Ozone and Ultraviolet Irradiation Using Viability and Infectivity Assays (크립토스포리디움의 활성/감염성 판별법을 이용한 오존 및 자외선 소독능 평가)

  • Park Sang-Jung;Cho Min;Yoon Je-Yong;Jun Yong-Sung;Rim Yeon-Taek;Jin Ing-Nyol;Chung Hyen-Mi
    • Journal of Life Science / v.16 no.3 s.76 / pp.534-539 / 2006
  • In the ozone disinfection unit process of a piston-type batch reactor with continuous ozone analysis by a flow injection analysis (FIA) system, the CT values for 1 log inactivation of Cryptosporidium parvum, determined by the DAPI/PI and excystation viability assays, were $1.8{\sim}2.2\;mg/L{\cdot}min$ at $25^{\circ}C$ and $9.1mg/L{\cdot}min$ at $5^{\circ}C$. At the lower temperature, the ozone requirement rises $4{\sim}5$ times in order to achieve the same level of disinfection as at room temperature. In a 40 L scale pilot plant with continuous flow and a constant 5-minute retention time, disinfection effects were evaluated using the excystation, DAPI/PI, and cell infectivity methods simultaneously. About 0.2 log inactivation of Cryptosporidium by the DAPI/PI and excystation assays, and 1.2 log inactivation by the cell infectivity assay, were estimated at a CT value of about $8mg/L{\cdot}min$. The difference between the DAPI/PI and excystation assays was not significant for evaluating the CT values of Cryptosporidium with ozone in either the piston reactor or the pilot reactor experiment. However, in the pilot study there was a significant difference between the viability assays, which are based on intact cell wall structure and function, and the infectivity assay, which is based on the development of oocysts into sporozoites and merozoites. The developmental stages should be more sensitive to ozone oxidation than the cell wall intactness of the oocysts. The difference in CT values estimated by the viability assays between the two studies may come partly from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the differences in reactor scale (50 mL vs 40 L) and type (batch vs continuous). In the UV irradiation process, the UV dose (It value) adequate to disinfect 1 and 2 log of Cryptosporidium was 25 $mWs/cm^2$ and 50 $mWs/cm^2$, respectively, at $25^{\circ}C$ by DAPI/PI. At $5^{\circ}C$, 40 $mWs/cm^2$ was required for 1 log disinfection and 80 $mWs/cm^2$ for 2 log disinfection. The roughly 60% increase in the required dose to compensate for the $20^{\circ}C$ decrease in temperature was thought to be due to the low-voltage, low-output lamp emitting weaker UV at lower temperatures.
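For reference, the CT value used throughout is the product of the average residual disinfectant concentration and the contact time. As an illustrative back-calculation (not stated in the abstract), the pilot plant's CT of about $8mg/L{\cdot}min$ over its 5-minute retention time would correspond to an average ozone residual of roughly 1.6 mg/L:

```latex
CT = \bar{C}\,t \;\Rightarrow\; \bar{C} \approx \frac{8\ \mathrm{mg\,L^{-1}\,min}}{5\ \mathrm{min}} = 1.6\ \mathrm{mg/L}
```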

IR Study on the Adsorption of Carbon Monoxide on Silica Supported Ruthenium-Nickel Alloy (실리카 지지 루테늄-니켈 합금에 있어서 일산화탄소의 흡착에 관한 IR 연구)

  • Park, Sang-Youn;Yoon, Dong-Wook
    • Applied Chemistry for Engineering / v.17 no.4 / pp.349-356 / 2006
  • We investigated the adsorption and desorption properties of CO on silica-supported Ru/Ni alloys at various Ru/Ni mole content ratios and CO partial pressures using a Fourier transform infrared spectrometer (FT-IR). For the Ru-$SiO_{2}$ sample, four bands were observed at $2080.0cm^{-1}$, $2021.0{\sim}2030.7cm^{-1}$, $1778.9{\sim}1799.3cm^{-1}$, and $1623.8cm^{-1}$ on adsorption, and three bands were observed at $2138.7cm^{-1}$, $2069.3cm^{-1}$, and $1988.3{\sim}2030.7cm^{-1}$ on vacuum desorption. For the Ni-$SiO_{2}$ sample, four bands were observed at $2057.7cm^{-1}$, $2019.1{\sim}2040.3cm^{-1}$, $1862.9{\sim}1868.7cm^{-1}$, and $1625.7cm^{-1}$ on adsorption, and two bands were observed at $2009.5{\sim}2040.3cm^{-1}$ and $1828.4{\sim}1868.7cm^{-1}$ on vacuum desorption. These absorption bands correspond approximately with those of previous reports. For the Ru/Ni (9/1, 8/2, 7/3, 6/4, 5/5 mole content ratio)-$SiO_{2}$ samples, three bands were observed at $2001.8{\sim}2057.7cm^{-1}$, $1812.8{\sim}1926.5cm^{-1}$, and $1623.8{\sim}1625.7cm^{-1}$ on adsorption, and three bands were observed at $2140.6cm^{-1}$, $2073.1cm^{-1}$, and $1969.0{\sim}2057.7cm^{-1}$ on vacuum desorption. The spectral pattern observed for the Ru/Ni-$SiO_{2}$ sample with a 9/1 Ru/Ni mole content ratio on CO adsorption and vacuum desorption is almost identical to that of the Ru-$SiO_{2}$ sample, whereas the patterns observed for Ru/Ni-$SiO_{2}$ samples at ratios of 8/2 and below are almost identical to that of the Ni-$SiO_{2}$ sample. Considering these observations, it may be suggested that the surfaces of the alloy clusters on the Ru/Ni-$SiO_{2}$ samples contain more Ni than the bulk mole content ratio of the sample. For the Ru/Ni-$SiO_{2}$ samples, the absorption band shifts may be ascribed to variations in surface concentration, strain due to the difference in atomic size, variations in bonding energy and electronic density, and changes in surface geometry with surface concentration. Studies of CO adsorption on Ru/Ni alloy cluster surfaces by LEED and Auger spectroscopy, of the interaction between the Ru/Ni alloy clusters and $SiO_{2}$, and MO calculations for the system would be needed to look into these phenomena.

Pilot-scale Applications of a Well-type Reactive Barrier using Autotrophic Sulfur-oxidizers for Nitrate Removal (독립영양 황탈질 미생물을 이용한 관정형 반응벽체의 현장적용성 연구)

  • Lee, Byung-Sun;Um, Jae-Yeon;Lee, Kyu-Yeon;Moon, Hee-Sun;Kim, Yang-Bin;Woo, Nam-C.;Lee, Jong-Min;Nam, Kyoung-Phile
    • Journal of Soil and Groundwater Environment / v.14 no.3 / pp.40-46 / 2009
  • The applicability of a well-type autotrophic sulfur-oxidizing reactive barrier (L $\times$ W $\times$ D = 3 m $\times$ 4 m $\times$ 2 m) as a long-term treatment option for nitrate removal from groundwater was evaluated. Pilot-scale (L $\times$ W $\times$ D = 8 m $\times$ 4 m $\times$ 2 m) flow-tank experiments were conducted to examine the remedial efficacy of the well-type reactive barrier. A total of 80 kg of sulfur granules as the electron donor and Thiobacillus denitrificans as the active bacterial species were prepared. Thiobacillus denitrificans successfully colonized the surface of the sulfur granules, and the microflora transformed nitrate with removal efficiencies of ~12% (0.07 mM) after 11 days, ~24% (1.3 mM) after 18 days, ~45% (2.4 mM) after 32 days, and ~52% (2.8 mM) after 60 days. Sulfur granules with attached Thiobacillus denitrificans were used to construct the well-type reactive barrier, comprising three discrete barriers installed at 1-m intervals downstream. Average initial nitrate concentrations were 181 mg/L for the first 28 days and 281 mg/L for the following 14 days. For the 181 mg/L (2.9 mM) plume, nitrate concentrations decreased by ~2% (0.06 mM), ~9% (0.27 mM), and ~15% (0.44 mM) after the $1^{st}$, $2^{nd}$, and $3^{rd}$ barriers, respectively. For the 281 mg/L (4.5 mM) plume, nitrate concentrations decreased by ~1% (0.02 mM), ~6% (0.27 mM), and ~8% (0.37 mM) after the $1^{st}$, $2^{nd}$, and $3^{rd}$ barriers, respectively. The nitrate plume was passed through the flow tank for 49 days by supplying $1.24\;m^3/d$ of nitrate solution. During nitrate treatment, flow velocity (0.44 m/d), pH (6.7 to 8.3), and DO (0.9~2.8 mg/L) showed little variation. The incomplete destruction of the nitrate plume was attributed to insufficient retention time, negligible transverse dispersion, and inhibition of denitrification enzyme activity by the relatively high DO concentrations. For field applications, increased retention time, modified well placement, and the intrinsic DO concentration should be considered.
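As a quick consistency check of the per-barrier figures above (an illustrative calculation, not from the paper), the cumulative removal for the 181 mg/L (2.9 mM) plume after the third barrier is

```latex
\frac{0.44\ \mathrm{mM}}{2.9\ \mathrm{mM}} \approx 0.15 \;\Rightarrow\; \sim 15\%
```

consistent with the reported ~15% decrease.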

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios, they are also more difficult to forecast. The construction industry operates with high leverage and high debt-to-equity ratios, with project cash flows concentrated in the second half of projects, and it is strongly influenced by the economic cycle, so downturns tend to rapidly increase the bankruptcy rate of construction companies. High leverage, coupled with increased bankruptcy rates, can place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with such a distinctive capital structure, criteria used to judge the financial risk of companies in general can be difficult to apply to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing them by company size. We classified construction companies into three groups (large, medium, and small) based on capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
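A minimal sketch of the kind of experiment described (grouping firms by capital and evaluating an AdaBoost classifier per group) is given below; the synthetic features, the labels, and the size thresholds applied to a fake capital column are illustrative assumptions, not the paper's data or configuration.

```python
# Illustrative sketch (synthetic data): train and evaluate AdaBoost
# separately for company-size groups defined by capital.
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "capital_bn_won": rng.lognormal(mean=2.5, sigma=1.2, size=n),  # fake capital (billion KRW)
    "debt_ratio": rng.normal(200, 80, size=n),
    "current_ratio": rng.normal(110, 40, size=n),
    "operating_margin": rng.normal(3, 5, size=n),
})
# fake bankruptcy label loosely driven by the financial ratios
risk = 0.01 * df["debt_ratio"] - 0.02 * df["current_ratio"] - 0.2 * df["operating_margin"]
df["bankrupt"] = (rng.random(n) < 1 / (1 + np.exp(-(risk - 0.5)))).astype(int)

# size groups by capital; the 50-billion-won cut mirrors the "large" group above
bins = [0, 10, 50, np.inf]
df["size_group"] = pd.cut(df["capital_bn_won"], bins=bins, labels=["small", "medium", "large"])

features = ["debt_ratio", "current_ratio", "operating_margin"]
for group, sub in df.groupby("size_group", observed=True):
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, sub[features], sub["bankrupt"], cv=5)
    print(f"{group:6s}: n={len(sub):4d}, mean CV accuracy = {scores.mean():.3f}")
```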