Title/Summary/Keyword: Engineering information

Impact of Sluice Gates at Stream Mouth on Fish Community (하구의 배수갑문 설치 유무가 어류군집에 미치는 영향)

  • Kim, Jun-Wan; Kim, Kyu-Jin; Choi, Beom-Myeong; Yoon, Ju-Duk; Park, Bae-Kyung; Kim, Jong-Hak; Jang, Min-Ho
    • Korean Journal of Ecology and Environment / v.55 no.1 / pp.49-59 / 2022
  • A total of 325 estuaries in Korea were surveyed from 2016 to 2018 to analyze the effect of sluice-gate presence on the estuarine environment and fish community. Fish communities in closed and open estuaries generally differed: the relative abundance (RA) of primary freshwater species was high in closed estuaries, while that of migratory species was high in open estuaries. Classifying species by habitat characteristics showed similar tendencies in the estuaries of the South Sea and the West Sea: the relative abundance of primary freshwater species was highest in the closed estuaries there, whereas estuarine and migratory species were abundant in both closed and open estuaries of the East Sea. Primary freshwater species, which have little salt tolerance, were more abundant in closed estuaries, where salinity is reduced by the blocking of seawater. However, the abundance of primary freshwater species in open estuaries of the East Sea was higher than in closed ones, which likely reflects characteristics of the East Sea (tide, sand bars, etc.). The Korea Estuary Fish Assessment Index (KEFAI) was higher in open than in closed estuaries in all sea areas (t-test, P<0.001); among closed estuaries, KEFAI was highest in the South Sea, and among open estuaries, in the East Sea. Fish communities of closed and open estuaries differed significantly in each sea area (PERMANOVA, East, Pseudo-F=3.0198, P=0.002; South, Pseudo-F=22.00, P=0.001; West, Pseudo-F=14.067, P=0.001), and assemblage similarity analysis showed high dissimilarity between closed and open estuary communities in the East, South, and West Seas (SIMPER, group dissimilarity of 85.85%, 88.36%, and 88.05%, respectively). This study provides information on the characteristics and distribution of fish communities according to estuary type. The results can serve as a reference for establishing appropriate management plans by sea area and estuary type in future estuary management and restoration.
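
The headline KEFAI comparison above is a two-sample test between open and closed estuaries. Below is a minimal sketch of such a comparison using SciPy's Welch t-test; the score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: comparing KEFAI scores of open vs. closed estuaries with a
# two-sample (Welch) t-test, mirroring the reported "T-test, P<0.001".
# The values below are hypothetical placeholders, not the survey data.
import numpy as np
from scipy import stats

kefai_open = np.array([62.5, 71.0, 58.3, 66.7, 54.2])    # hypothetical scores
kefai_closed = np.array([41.7, 37.5, 45.8, 33.3, 50.0])  # hypothetical scores

t_stat, p_value = stats.ttest_ind(kefai_open, kefai_closed, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```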

One-Dimensional Consolidation Simulation of Kaolinite using Geotechnical Online Testing Method (온라인 실험을 이용한 카올리나이트 점토의 일차원 압밀 시뮬레이션)

  • Kwon, Youngcheul
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4C / pp.247-254 / 2006
  • The online testing method is a numerical-experiment method that uses experimental information directly in a numerical analysis. Its advantage is that the analysis can be conducted without an idealized mechanical model, because mechanical properties are updated in real time from an element test. The method has mainly been used in geotechnical earthquake engineering, where the major target material is sand. A version applicable to consolidation problems has recently been developed, and laboratory and field verifications have been attempted. Related research has mainly updated the average response for the numerical analysis from an element test positioned at the center of the consolidation layer, but it has been pointed out that the accuracy of this approach degrades as the consolidation layer becomes thicker. To clarify the effectiveness and the applicable scope of the online testing method for consolidation problems, results must be reviewed under experimental conditions that completely exclude this factor. This research examined how well the online consolidation test reproduces the consolidation settlement and the dissipation of excess pore water pressure of a clay specimen, by comparing an online consolidation test and a separated-type consolidation test carried out under the same conditions. The online consolidation test reproduced the change of clay compressibility with effective stress without major contradiction. The dissipation of excess pore water pressure, however, was slightly faster in the online test. In conclusion, the experimental procedure should be improved so that hydraulic conductivity can also be updated in real time, allowing more precise prediction of excess pore water pressure dissipation. Further research is also needed on the consolidation settlement occurring after excess pore water pressure has fully dissipated.
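
For orientation, the process both tests aim to reproduce is governed by Terzaghi's one-dimensional consolidation equation, $\partial u/\partial t = c_v\,\partial^2 u/\partial z^2$. The sketch below solves it with an explicit finite-difference scheme; it represents only the conventional numerical side, not the real-time element-test coupling of the online method, and all parameter values are illustrative.

```python
# Minimal sketch: explicit finite-difference solution of Terzaghi's 1-D
# consolidation equation du/dt = cv * d2u/dz2 (illustrative parameters only).
import numpy as np

cv = 1e-7               # coefficient of consolidation [m^2/s], illustrative
H = 1.0                 # layer thickness, drained at top, impervious base [m]
nz, nt = 21, 500
dz = H / (nz - 1)
dt = 0.4 * dz**2 / cv   # satisfies the explicit stability limit (r = 0.4 < 0.5)

u = np.full(nz, 100.0)  # initial excess pore water pressure [kPa]
u[0] = 0.0              # drained top boundary
r = cv * dt / dz**2
for _ in range(nt):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_new[-1] = u[-1] + 2 * r * (u[-2] - u[-1])  # zero-flux (impervious) base
    u_new[0] = 0.0
    u = u_new
print(f"excess pore pressure remaining at base: {u[-1]:.1f} kPa")
```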

Development of Genetic Selection Marker via Examination of Genome in Bacillus velezensis K10 (Bacillus velezensis K10 유전체 분석을 통한 균주 선발 마커 개발)

  • Sam Woong Kim; Young Jin Kim; Tae Wook Lee; Won-Jae Chi; Woo Young Bang; Tae Wan Kim; Kyu Ho Bang; Sang Wan Gal
    • Journal of Life Science / v.33 no.11 / pp.897-904 / 2023
  • This study was conducted to develop genetic selection markers based on unique gene characteristics identified in the genomic information of Bacillus velezensis K10. The B. velezensis K10 genome comprises 4,159,835 bp encoding 5,136 open reading frames (ORFs). Overall, B. velezensis K10 showed much more gene migration attributable to external factors than the reference strain B. velezensis JS25R. To discover genetic selection markers, ORFs prone to mutation, such as recombinase, integrase, transposase, and phage-related genes, were surveyed across the genome. Nine candidates with high potential as genetic selection markers were isolated. Although some regions of various origins showed specificity in homology comparisons, the selected markers were all located in phage-related regions, which showed relatively low homology. PCR analysis of B. licheniformis K12, B. velezensis K10, B. subtilis, and B. cereus was performed to establish the candidates as interspecies selection markers. B. velezensis K10-specific PCR products formed with a total of six primer sets (BV3 and BV5 to BV9). At the subspecies level, K10-specific PCR products formed with four primer sets (BV3, 5, 8, and 9). Among them, since BV5 and BV8 gave highly specific results, we suggest that BV5 and BV8 can be used as B. velezensis K10 selection markers at both the species and subspecies levels.
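
As a toy illustration of the strain-specific PCR screen described above, the sketch below checks whether a primer pair would yield a "product" from candidate genome strings. The sequences and the primer pair are hypothetical placeholders, not the actual BV3/BV5-BV9 primers or the K10 genome.

```python
# Minimal in-silico PCR specificity check (hypothetical sequences/primers).
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplifies(genome, fwd, rev):
    """True if the forward primer and the reverse-complement of the reverse
    primer occur in order on the same strand (a crude PCR-product proxy)."""
    i = genome.find(fwd)
    return i != -1 and genome.find(revcomp(rev), i + len(fwd)) != -1

genomes = {  # hypothetical genome fragments
    "B_velezensis_K10": "ATGGCGTACGTTAGCCGATCCGGATATCCGCTTAAGGCATGC",
    "B_subtilis":       "ATGGCGTTTTGTAGCAAATCCGGAATTCCGCTTAAGGAATGC",
}
fwd, rev = "ATGGCGTACG", "GCATGCCTTA"  # hypothetical strain-specific pair
for strain, g in genomes.items():
    print(strain, "->", "product" if amplifies(g, fwd, rev) else "no product")
```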

Building Change Detection Methodology in Urban Area from Single Satellite Image (단일위성영상 기반 도심지 건물변화탐지 방안)

  • Seunghee Kim; Taejung Kim
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1097-1109 / 2023
  • Urban areas undergo frequent small-scale changes to individual buildings, so an existing urban building database requires periodic updating to remain usable. However, collecting data on building changes over a wide urban area is difficult. In this study, we examine the possibility of detecting building changes and updating a building database using satellite images, which can capture a wide urban region in a single image. For this purpose, building areas are first extracted from a satellite image by projecting the 3D coordinates of building corners, available in the building database, onto the image. Building areas are then divided into roof and facade areas. By comparing the textures of the projected roof areas, building changes such as height changes or building removal can be detected. New height values are estimated by adjusting building heights until the projected roofs align with the actual roofs observed in the image. If a roof is projected where no building is observed in the image, the building has been demolished. Buildings in the image onto which no roof or facade areas are projected are identified as new buildings. Based on these results, the building database is updated in three categories: height update, building deletion, and new building creation. This method was tested with a KOMPSAT-3A image over Incheon Metropolitan City and the publicly available Incheon building database; building change detection and database update were carried out. The updated building corners were then projected onto another KOMPSAT-3 image, and it was confirmed that the building areas projected with the updated building information agreed very well with the actual buildings in the image. This study confirms the possibility of semi-automatic building change detection and building database update from a single satellite image. Follow-up research is needed on techniques to further automate the computation of the proposed method.
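
The first step described above is projecting the database's 3-D building corners into the image. A minimal sketch follows, using a simple pinhole projection matrix as a stand-in for the RPC sensor model normally used with KOMPSAT imagery; all coordinates and camera values are illustrative.

```python
# Minimal sketch: project 3-D roof corners to pixels and vary building height,
# as in the height-adjustment step above (pinhole stand-in for an RPC model).
import numpy as np

P = np.array([[1000.0,    0.0, 512.0, 0.0],   # illustrative 3x4 camera matrix
              [   0.0, 1000.0, 512.0, 0.0],
              [   0.0,    0.0,   1.0, 0.0]])

def project(points_xyz):
    """Project Nx3 world points to Nx2 pixel coordinates."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = (P @ homo.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Roof corners of one database building; the height is adjusted until the
# projected roof aligns with the roof observed in the image.
base = np.array([[10.0, 20.0, 0.0], [14.0, 20.0, 0.0],
                 [14.0, 24.0, 0.0], [10.0, 24.0, 0.0]])
for h in (15.0, 30.0):
    roof = base + np.array([0.0, 0.0, h])
    print(f"height {h:4.1f} m ->", np.round(project(roof), 1))
```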

Review of the Korean Indigenous Species Investigation Project (2006-2020) by the National Institute of Biological Resources under the Ministry of Environment, Republic of Korea (한반도 자생생물 조사·발굴 연구사업 고찰(2006~2020))

  • Bae, Yeon Jae; Cho, Kijong; Min, Gi-Sik; Kim, Byung-Jik; Hyun, Jin-Oh; Lee, Jin Hwan; Lee, Hyang Burm; Yoon, Jung-Hoon; Hwang, Jeong Mi; Yum, Jin Hwa
    • Korean Journal of Environmental Biology / v.39 no.1 / pp.119-135 / 2021
  • Korea has stepped up efforts to investigate and catalog its flora and fauna to conserve the biodiversity of the Korean Peninsula and secure biological resources since the ratification of the Convention on Biological Diversity (CBD) in 1992 and the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits (ABS) in 2010. Thus, after its establishment in 2007, the National Institute of Biological Resources (NIBR) of the Ministry of Environment of Korea initiated the Korean Indigenous Species Investigation Project to investigate indigenous species on the Korean Peninsula. For 15 years from its beginning in 2006, the project was carried out in five phases: Phase 1 (2006-2008), Phase 2 (2009-2011), Phase 3 (2012-2014), Phase 4 (2015-2017), and Phase 5 (2018-2020). Before the project, in 2006, the number of indigenous species surveyed was 29,916. The cumulative figure at the end of each phase was 33,253 species for Phase 1 (2008), 38,011 for Phase 2 (2011), 42,756 for Phase 3 (2014), 49,027 for Phase 4 (2017), and 54,428 for Phase 5 (2020). The number of indigenous species surveyed thus grew rapidly, an approximately 1.8-fold increase over the course of the project, with an annual average of 2,320 newly recorded species. Among the recorded species, a total of 5,242 new species were reported in scientific publications, a great scientific achievement. The species newly recorded on the Korean Peninsula during the project, identified using recent taxonomic classifications, were as follows: 4,440 insect species (including 988 new species), 4,333 invertebrate species other than insects (including 1,492 new species), 98 vertebrate (fish) species (including nine new species), 309 plant species (including 176 vascular plant species and 133 bryophyte species, with 39 new species), 1,916 algae species (including 178 new species), 1,716 fungi and lichen species (including 309 new species), and 4,812 prokaryotic species (including 2,226 new species). The number of biological specimens collected in each phase was 247,226 for Phase 1 (2008), 207,827 for Phase 2 (2011), 287,133 for Phase 3 (2014), 244,920 for Phase 4 (2017), and 144,333 for Phase 5 (2020), for a total of 1,131,439 specimens with an annual average of 75,429. More specifically, 281,054 insect specimens, 194,667 invertebrate specimens (other than insects), 40,100 fish specimens, 378,251 plant specimens, 140,490 algae specimens, 61,695 fungi specimens, and 35,182 prokaryotic specimens were collected. The cumulative number of researchers involved in the project, nearly all professional taxonomists and graduate students majoring in taxonomy across the country, was around 5,000, with an annual average of 395. The numbers of researchers/assistant researchers (mainly graduate students) were 597/268 in Phase 1, 522/191 in Phase 2, 939/292 in Phase 3, 575/852 in Phase 4, and 601/1,097 in Phase 5. During the project period, 3,488 papers were published in major scientific journals: 2,320 in domestic journals and 1,168 in Science Citation Index (SCI) journals.
During the project period, a total of 83.3 billion won (annual average of 5.5 billion won), approximately US $75 million (annual average of US $5 million), was invested in investigating indigenous species and collecting specimens. This project was a large-scale research effort led by the Korean government. It is considered a successful example of Korea's compressed development, as it attracted almost all of the taxonomists in Korea and made remarkable achievements with a massive budget in a short time. The results of the project led to the National List of Species of Korea, in which all species are organized by taxonomic classification and which is available to experts, students, and the general public (https://species.nibr.go.kr/index.do). The information, including descriptions, DNA sequences, habitats, distributions, ecological aspects, images, and multimedia, has been digitized, contributing to scientific advancement in research fields such as phylogenetics and evolution. The species information also serves as a basis for projects on species distribution and biological monitoring, such as climate-sensitive biological indicator species, and helps bio-industries search for useful biological resources. Perhaps the most meaningful achievement of the project is its support for nurturing young taxonomists, such as graduate students. The project has continued for the past 15 years and is still ongoing. Efforts to address remaining issues, including species misidentification and invalid synonyms, are still needed to enhance taxonomic research, and research must continue to investigate the roughly 50,000 remaining species out of the estimated 100,000 indigenous species on the Korean Peninsula.

A Study on the Dimensions, Surface Area and Volume of Grains (곡립(穀粒)의 치수, 표면적(表面積) 및 체적(體積)에 관(關)한 연구(硏究))

  • Park, Jong Min; Kim, Man Soo
    • Korean Journal of Agricultural Science / v.16 no.1 / pp.84-101 / 1989
  • An accurate measurement of the size, surface area, and volume of agricultural products is essential in many engineering operations, such as handling and sorting, and in heat transfer studies of heating and cooling processes. Little information is available on these properties because of the grains' irregular shapes, and very little has been published for rough rice, soybean, barley, and wheat. The physical dimensions of grain, such as length, width, thickness, surface area, and volume, vary with variety, environmental conditions, temperature, and moisture content. In particular, recent research has emphasized the variation of these properties with important factors such as moisture content. The objectives of this study were to determine the length, width, thickness, surface area, and volume of rough rice, soybean, barley, and wheat as functions of moisture content, to investigate the effect of moisture content on these properties, and to develop exponential equations predicting the surface area and volume of the grains from their physical dimensions. The rough rice varieties used were Akibare, Milyang 15, Seomjin, Samkang, Chilseong, and Yongmun; the soybean samples were Jangyeobkong and Hwangkeumkong; the barley samples were Olbori and Salbori; and the wheat samples were Eunpa and Guru. The physical properties of each variety were determined at four moisture content levels, with ten or fifteen replications at each level. The results are summarized as follows. 1. Comparing the surface area and volume of the 0.0375 m diameter sphere measured in this study with values calculated by formula, the percent errors reached their smallest values, 0.65% and 0.77% respectively, at a rotational interval of 15 degrees. 2. Statistical tests (t-tests) of the physical properties between the types of rough rice, and between the varieties of soybean and wheat, indicated significant differences at the 5% level. 3. The physical dimensions varied linearly with moisture content; the length-to-thickness (L/T) and width-to-thickness (W/T) ratios decreased with increasing moisture content in rough rice but increased in soybean, while no uniform tendency was found in barley and wheat. In all sample grains except Olbori, sphericity decreased with increasing moisture content. 4. Over the experimental moisture levels, the surface area and volume were in the ranges of about $45{\sim}51\times10^{-6}\;m^2$ and $25{\sim}30\times10^{-9}\;m^3$ for Japonica-type rough rice, about $42{\sim}47\times10^{-6}\;m^2$ and $21{\sim}26\times10^{-9}\;m^3$ for Indica$\times$Japonica-type rough rice, about $188{\sim}200\times10^{-6}\;m^2$ and $277{\sim}300\times10^{-9}\;m^3$ for Jangyeobkong, about $180{\sim}201\times10^{-6}\;m^2$ and $190{\sim}253\times10^{-9}\;m^3$ for Hwangkeumkong, about $60{\sim}69\times10^{-6}\;m^2$ and $36{\sim}45\times10^{-9}\;m^3$ for covered barley, about $47{\sim}60\times10^{-6}\;m^2$ and $22{\sim}28\times10^{-9}\;m^3$ for naked barley, about $51{\sim}20\times10^{-6}\;m^2$ and $23{\sim}31\times10^{-9}\;m^3$ for Eunpa wheat, and about $57{\sim}69\times10^{-6}\;m^2$ and $27{\sim}34\times10^{-9}\;m^3$ for Guru wheat, respectively. 5. The rate of increase of surface area and volume with moisture content was higher in soybean than in the other grains, and in rough rice it was slightly higher for the Japonica type than for the Indica$\times$Japonica type. 6. Regression equations for the physical dimensions, surface area, and volume were developed as functions of moisture content; exponential equations for surface area and volume were developed as functions of the physical dimensions; and regression equations for surface area as a function of volume were developed for all grain samples.
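
Item 6 above fits "exponential" (power-law) equations for surface area and volume from the physical dimensions. A minimal sketch of such a fit, together with sphericity computed as $(LWT)^{1/3}/L$, is shown below on hypothetical dimension data.

```python
# Minimal sketch: sphericity and a power-law fit V = a*(LWT)^b by log-log
# least squares. The dimension and volume data are hypothetical placeholders.
import numpy as np

L = np.array([7.1, 7.3, 7.0, 7.4])      # length [mm]
W = np.array([3.2, 3.3, 3.1, 3.4])      # width [mm]
T = np.array([2.2, 2.3, 2.1, 2.3])      # thickness [mm]
V = np.array([26.0, 28.5, 24.0, 29.5])  # measured volume [mm^3]

sphericity = (L * W * T) ** (1.0 / 3.0) / L
b, log_a = np.polyfit(np.log(L * W * T), np.log(V), 1)
print("sphericity:", np.round(sphericity, 3))
print(f"fitted model: V = {np.exp(log_a):.3f} * (LWT)^{b:.3f}")
```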


Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin; Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities as RDF triples; an RDF triple is composed of a subject, predicate, and object and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with this opportunity for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is seriously difficult given the enormous scale of LOD. Newly added entities cannot be reflected in search results until identity links pointing to them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those LODs. During a search, it then becomes possible to access newly added entities and reflect them in search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs; for the link policy specification, we suggest a set of vocabularies conforming to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pairs in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds to in-depth searching of the next LODs at the next depths, supplementing the identity links derived from link policies with explicit link triples. Following the identity links, CAIDS's in-depth searching progresses: the content of an entity obtained from the depth_0 LOD expands with the contents of entities in other LODs discovered to be identical to it. Expanding the content of a depth_0 LOD entity without the user being aware of those other LODs realizes knowledge expansion, the goal of the LOD cloud; the more identity links in the cloud, the wider the content expansion. We thus suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8~0.9. For each depth, the expansion ratio is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
With similarity degrees under 0.8, expansion becomes excessive and content becomes distorted; a similarity degree of 0.8~0.9 also yields an appropriate number of RDF triples in the search results. The experiments also evaluated the confidence degree of content expanded through in-depth searching. The confidence degree of content is directly coupled with an entity's identity ratio, the degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing identity links in advance, an LOD's confidence is evaluated from the number of identity links incoming to its entities. The evaluation of the identity ratio also considers identity agreement, which occurs when multiple identity links head to a common entity. With identity agreement, the experimental results show that the identity ratio decreases as depth deepens but rebounds as the depth deepens further; for each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The proposed link-policy-based in-depth searching method is expected to contribute abundant identity links to the LOD cloud.
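
The identity-ratio rule above (source LOD confidence times the source entity's identity ratio) means confidence decays multiplicatively along a chain of identity links, before any identity-agreement rebound. A minimal sketch with hypothetical confidence values:

```python
# Minimal sketch: identity ratio propagated along one chain of identity links,
# identity_ratio(next) = confidence(source LOD) * identity_ratio(source).
# LOD confidences are hypothetical; identity agreement is not modeled here.
def propagate(depth0_ratio, lod_confidences):
    ratios = [depth0_ratio]
    for conf in lod_confidences:
        ratios.append(ratios[-1] * conf)
    return ratios

# chain: depth_0 LOD -> DBpedia-FR -> DBpedia-IT -> DBpedia-ES
for depth, r in enumerate(propagate(1.0, [0.9, 0.85, 0.8])):
    print(f"depth {depth}: identity ratio = {r:.3f}")
```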

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset-allocation portfolio for investors using financial-engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical grounding, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which in turn produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree of each asset class. The SVM outputs expected stock price movements, categorized into three phases (down, neutral, and up), and their probabilities, which are used as inputs to the intelligent views model: the expected stock returns form the P matrix and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period is 2008 to 2015 and the testing period is 2016 to 2018.
Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value; its maximum drawdown was -20.8%, also the lowest value; and its Sharpe ratio, which measures return relative to risk, was the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset-allocation algorithms in real trading.
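
The core computation above is the Black-Litterman blend of implied equilibrium returns with a views matrix. Below is a minimal numpy sketch of that blend; the single view stands in for the paper's SVM-generated P and Q matrices, and every number (covariance, weights, delta, tau) is illustrative.

```python
# Minimal Black-Litterman sketch: blend implied equilibrium returns (pi) with
# one view (P, q). The view is a placeholder for the SVM views model; all
# numerical values are illustrative, not the paper's KOSPI 200 data.
import numpy as np

Sigma = np.array([[0.040, 0.012, 0.010],   # asset covariance (illustrative)
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.025]])
w_mkt = np.array([0.5, 0.3, 0.2])          # market-cap weights
delta, tau = 2.5, 0.05                     # risk aversion, scaling
pi = delta * Sigma @ w_mkt                 # implied equilibrium returns

P = np.array([[1.0, -1.0, 0.0]])           # view: asset 1 outperforms asset 2
q = np.array([0.02])                       # ... by 2% (e.g., an "up" signal)
Omega = P @ (tau * Sigma) @ P.T            # 1x1 view-uncertainty matrix

A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ q)
w_bl = np.linalg.solve(delta * Sigma, mu_bl)  # unconstrained MV weights
print("posterior returns:", np.round(mu_bl, 4))
print("portfolio weights:", np.round(w_bl, 3))
```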

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung; Joo, Jihwan; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing a great opportunity for disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, few business models based on big-data analysis have yet been realized there. In this situation, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among the predictive models: Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural networks for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on each model's classification accuracy over the entire sample. Most of the predictive models in this paper classify with about 70% accuracy on the entire sample: LightGBM shows the highest accuracy at 71.1% and the Logit model the lowest at 69%. However, we find that these results are open to multiple interpretations. In the business context, more emphasis must be put on minimizing type 2 errors, which cause more harmful operating losses for the guarantee company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
Examining the classification accuracy for each interval, the Logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy for both the 0~10% and 90~100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across predicted default probabilities, both LightGBM and XGBoost place relatively large numbers of samples in the 0~10% and 90~100% intervals. Although Random Forest has an advantage in classification accuracy for its small number of extreme-interval cases, LightGBM or XGBoost could be more desirable models because they classify large numbers of cases into the two extreme intervals, even allowing for their relatively lower classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, with logistic regression performing worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models could be constructed that combine multiple classification models and conduct majority voting to maximize overall performance.
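
The interval-wise evaluation above is easy to reproduce: split the predicted default probabilities into ten equal bins and compute accuracy within each. A minimal sketch on synthetic data (random placeholders, not KSURE data):

```python
# Minimal sketch: classification accuracy per decile of predicted default
# probability, as in the evaluation above. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
p_hat = rng.uniform(0, 1, 2000)                    # predicted P(default)
y = (rng.uniform(0, 1, 2000) < p_hat).astype(int)  # synthetic outcomes
y_pred = (p_hat >= 0.5).astype(int)

bins = np.minimum((p_hat * 10).astype(int), 9)     # 0-10%, ..., 90-100%
for b in range(10):
    mask = bins == b
    acc = (y_pred[mask] == y[mask]).mean()
    print(f"{b*10:3d}-{(b+1)*10:3d}%: n={mask.sum():4d}, accuracy={acc:.3f}")
```

As in the abstract, accuracy on such data is high in the extreme bins and drops toward 50% near the decision boundary.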

The Limnological Survey of Major Lakes in Korea (4): Lake Juam (국내 주요 호수의 육수학적 조사(4) : 주암호)

  • Kim, Bom-Chul; Heo, Woo-Myung; Lim, Byung-Jin; Hwang, Gil-Son; Choi, Kwang-Soon; Choi, Jong-Soo; Park, Ju-Hyun
    • Korean Journal of Ecology and Environment / v.34 no.1 s.93 / pp.30-44 / 2001
  • In this study, the limnological characteristics of Lake Juam were surveyed from June 1993 to May 1994 to provide important information regarding water resources. Secchi disc transparency, epilimnetic chlorophyll a (Chl-a), total nitrogen (TN), total phosphorus (TP) concentrations, and primary productivity were in the ranges of $2.0{\sim}4.5\;m$, $0.9{\sim}13.6\;mg\,Chl/m^3$, $0.78{\sim}2.32\;mgN/l$, $11{\sim}56\;mgP/m^3$, and $270{\sim}2,160\;mgC\,m^{-2}\,day^{-1}$, respectively. On the basis of TP, Chl-a, and Secchi disc depth, Lake Juam can be classified as mesotrophic. Phosphorus inputs from non-point sources are concentrated in heavy rain episodes during the monsoon season; as a result, phosphorus concentrations are higher in summer than in winter. TP loading from the watershed was estimated to be $0.9\;gP\,m^{-2}yr^{-1}$, which corresponds to the boundary of the critical loading ($1.0\;gP\,m^{-2}yr^{-1}$) for eutrophication. The results of the algal assay indicate that both phosphorus and nitrogen act as limiting nutrients for algal growth. The seasonal succession of the phytoplankton community in Lake Juam was similar to that observed in other temperate lakes: diatoms (Asterionella formosa and Aulacoseira granulata var. angustissima) were dominant in spring and winter, and cyanobacteria were dominant in the warm season. The organic carbon, nitrogen, and phosphorus contents of the lake sediment were $9.5{\sim}14.0\;mgC/g$, $1.01{\sim}1.82\;mgN/g$, and $0.51{\sim}0.65\;mgP/g$, respectively. The allochthonous organic carbon loading from the watershed and the autochthonous organic carbon loading by phytoplankton primary production were determined to be 1,122 tC/yr and 6,718 tC/yr, respectively. To prevent eutrophication of Lake Juam, nutrient management in the watershed should focus on reducing fertilizer application, properly treating manure, and conserving topsoil, as well as on point sources.
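
One common way to turn the reported Secchi depth, Chl-a, and TP values into a trophic classification is Carlson's Trophic State Index; the abstract does not state the exact criteria the authors used, so the sketch below is only indicative, using midpoints of the reported ranges.

```python
# Minimal sketch: Carlson's TSI from Secchi depth, chlorophyll-a, and TP.
# Index formulas are Carlson (1977); inputs are midpoints of the reported
# ranges, and this is not necessarily the classification the authors applied.
import math

def tsi_secchi(sd_m):     return 60.0 - 14.41 * math.log(sd_m)
def tsi_chla(chl_mg_m3):  return 9.81 * math.log(chl_mg_m3) + 30.6
def tsi_tp(tp_mg_m3):     return 14.42 * math.log(tp_mg_m3) + 4.15

for name, tsi in [("Secchi 3.25 m",    tsi_secchi(3.25)),
                  ("Chl-a 7.3 mg/m^3", tsi_chla(7.3)),
                  ("TP 33 mg/m^3",     tsi_tp(33.0))]:
    print(f"{name}: TSI = {tsi:.1f}")  # ~40-50 is commonly read as mesotrophic
```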
