• Title/Summary/Keyword: Data Sets

Search Results: 3,761

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun; Lee, Dongkyu; Shin, Minsoo
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.139-153 / 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Bankruptcies have increased through successive economic crises, and bankruptcy prediction models have become correspondingly more important; corporate bankruptcy has therefore long been a major topic of research in business management, and much related work in industry is also in progress. Previous studies attempted to improve prediction accuracy and resolve the overfitting problem with statistical methodologies such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM). More recently, researchers have applied machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), as well as fuzzy theory and genetic algorithms. Through this shift many bankruptcy models have been developed and performance has improved. In general, however, a company's financial and accounting information changes over time, and so does the market situation, so it is difficult to predict bankruptcy from information at a single point in time. Although this failure to account for the time effect is a known weakness of traditional research, dynamic models have not been studied much. Ignoring the time effect biases the results, so a static model may be unsuitable for predicting bankruptcy, and a dynamic model may improve the prediction. In this paper, we propose the Recurrent Neural Network (RNN), a deep learning methodology that learns time-series data and is known to perform well on it. To estimate the bankruptcy prediction model and compare forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. We then selected variables from previous papers: the first set consists of the Z-score variables, which are traditional in bankruptcy prediction, and the second is a dynamic variable set. For the first variable set we selected 240 normal companies and 226 bankrupt companies; for the second, 229 normal companies and 226 bankrupt companies. We built a model that reflects dynamic changes in time-series financial data, and by comparing it with existing bankruptcy prediction models we found that it can help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected MDA, GLM (logistic regression), SVM, and ANN models as benchmarks. The experiment showed that the RNN outperformed the comparative models: its accuracy was high for both variable sets, its Area Under the Curve (AUC) value was also high, and the hit-ratio table shows that the RNN identified failing companies as bankrupt at a higher rate than the other models. A limitation of this paper is that an overfitting problem occurs during RNN training; we expect it can be mitigated by selecting more training data and appropriate variables. We expect this research to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
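To make the modeling setup concrete, here is a minimal sketch of the kind of classifier described above: an RNN trained on multi-year sequences of accounting ratios to output a bankruptcy probability. This is not the authors' code; the data, layer sizes, and hyperparameters are invented placeholders.

```python
# Minimal sketch: RNN over firm-year sequences of accounting ratios.
import numpy as np
import tensorflow as tf

n_firms, n_years, n_features = 466, 5, 10   # illustrative sizes only
X = np.random.rand(n_firms, n_years, n_features).astype("float32")
y = np.random.randint(0, 2, size=n_firms)   # 1 = bankrupt, 0 = normal (synthetic)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_features)),
    tf.keras.layers.SimpleRNN(32),          # plain RNN cell, as in the paper's title
    tf.keras.layers.Dropout(0.3),           # one common way to curb the overfitting the authors note
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
```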

A Study on the Geophysical Characteristics and Geological Structure of the Northeastern Part of the Ulleung Basin in the East Sea (동해 울릉분지 북동부지역의 지구물리학적 특성 및 지구조 연구)

  • Kim, Chang-Hwan; Park, Chan-Hong
    • Economic and Environmental Geology / v.43 no.6 / pp.625-636 / 2010
  • The geophysical characteristics and geological structure of the northeastern Ulleung Basin were investigated through interpretation of gravity, magnetic, bathymetry, and seismic data. A relative correction was applied to reduce errors between sets of gravity and magnetic data obtained at different times and with different equipment. The northeastern margin of the Ulleung Basin is characterized by complicated morphology consisting of volcanic islands (Ulleungdo and Dokdo), the Dokdo seamounts, and a deep pathway (the Korea Gap) with a maximum depth of 2,500 m. Free-air anomalies generally reflect the topographic effect, with high anomalies over the volcanic islands and the Dokdo seamounts. Apart from local anomalous zones over volcanic edifices, the gradual increase of the Bouguer anomalies from the Oki Bank toward the Ulleung Basin and the Korea Gap is related to a shallower mantle and denser crust in the central part of the Ulleung Basin. Complicated magnetic anomalies in the study area occur over the volcanic islands and seamounts. Power spectrum analysis of the Bouguer anomalies indicates that the depth to the averaged Moho discontinuity is 16.1 km. Inversion of the Bouguer anomaly shows that the Moho depth under the Korea Gap is about 16~17 km and that the Moho becomes gradually deeper toward the Oki Bank and the northwestern part of Ulleung Island. The inversion result suggests that the crust of the Ulleung Basin is thicker than normal oceanic crust. The result of 2D gravity modeling agrees well with the results of the power spectrum analysis and the inversion of the Bouguer anomaly. Apart from the volcanic edifices, the main pattern of the magnetization distribution shows a NE-SW lineation. The inversion results, the 2D gravity modeling, and the magnetization distribution support the possible NE-SW spreading of the Ulleung Basin proposed in other papers.
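As background on the spectral method mentioned above, the sketch below shows the usual recipe for estimating an average interface depth from a gravity grid: compute the radially averaged power spectrum and fit the slope of its long-wavelength segment. The grid, spacing, and wavenumber band here are synthetic placeholders, not the survey data, and the actual study's processing surely differs in detail.

```python
# Schematic spectral depth estimation: for a deep density interface,
# ln P(k) ~ const - 2*h*k (k in rad/km), so depth h = -slope / 2.
import numpy as np

def radial_power_spectrum(grid, dx):
    """Radially averaged power spectrum; dx is grid spacing in km."""
    ny, nx = grid.shape
    F = np.fft.fftshift(np.fft.fft2(grid - grid.mean()))
    power = np.abs(F) ** 2
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny, d=dx)),
                         np.fft.fftshift(np.fft.fftfreq(nx, d=dx)),
                         indexing="ij")
    k = 2 * np.pi * np.hypot(kx, ky)                 # angular wavenumber, rad/km
    bins = np.linspace(k.min(), k.max(), 40)
    idx = np.digitize(k.ravel(), bins)
    k_mean = np.array([k.ravel()[idx == i].mean() for i in range(1, len(bins))])
    p_mean = np.array([power.ravel()[idx == i].mean() for i in range(1, len(bins))])
    return k_mean, p_mean

grid = np.random.rand(128, 128)                      # stand-in for a Bouguer anomaly grid
k, p = radial_power_spectrum(grid, dx=1.0)
low = slice(1, 10)                                   # long-wavelength segment (skips DC bin)
slope = np.polyfit(k[low], np.log(p[low]), 1)[0]
print("estimated interface depth:", -slope / 2, "km")
```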

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • In line with the rapidly increasing demand for text data analysis, research and investment in text mining are being actively pursued not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on applications of the second step. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly to a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a given dimension while maintaining their algebraic properties. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document from all the words the document contains, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document into a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows: all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference.
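A condensed sketch of the five-step pipeline described above, using off-the-shelf tools (gensim's Word2Vec and scikit-learn's KMeans) as stand-ins; the toy corpus, keywords, and cluster count are invented, and the paper's exact components may differ.

```python
# Steps: (1) parsing, (2) word embedding, (3) keyword-vector extraction,
# (4) keyword clustering, (5) multiple-vector generation.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

docs = [["deep", "learning", "text", "mining", "wetland", "ecology"]]  # step 1: tokenized body
keywords = ["deep", "learning", "wetland", "ecology"]                  # predefined keywords

w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, seed=1)    # step 2
kw_vecs = np.array([w2v.wv[k] for k in keywords])                      # step 3

n_subjects = 2                                                         # step 4
labels = KMeans(n_clusters=n_subjects, n_init=10, random_state=1).fit_predict(kw_vecs)

# Step 5: one document vector per identified subject (mean of its keyword vectors).
doc_vectors = [kw_vecs[labels == c].mean(axis=0) for c in range(n_subjects)]
print(len(doc_vectors), "vectors for one complex document")
```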

Preparation of Pure CO2 Standard Gas from Calcium Carbonate for Stable Isotope Analysis (탄산칼슘을 이용한 이산화탄소 안정동위원소 표준시료 제작에 대한 연구)

  • Park, Mi-Kyung; Park, Sunyoung; Kang, Dong-Jin; Li, Shanlan; Kim, Jae-Yeon; Jo, Chun Ok; Kim, Jooil; Kim, Kyung-Ryul
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.18 no.1 / pp.40-46 / 2013
  • The isotope ratios $^{13}C/^{12}C$ and $^{18}O/^{16}O$ of a sample in a mass spectrometer are measured relative to those of a pure $CO_2$ reference gas (i.e., a laboratory working standard). Thus, calibration of a laboratory working standard gas to the international isotope scales (Pee Dee Belemnite (PDB) for $\delta^{13}C$ and Vienna Standard Mean Ocean Water (V-SMOW) for $\delta^{18}O$) is essential for comparisons with data sets obtained by other groups on other mass spectrometers. However, well-calibrated standard gases are often difficult to obtain because of their production time and high price. An additional difficulty is that fractionation can occur inside the gas cylinder, most likely due to pressure drop during long-term use. Therefore, studies on laboratory production of pure $CO_2$ isotope standard gas from stable solid calcium carbonate standard materials have been performed. In this study, we propose a method to extract pure $CO_2$ gas from a solid calcium carbonate material without isotope fractionation. The method is similar to that suggested by Coplen et al. (1983) but is better optimized for producing a large amount of pure $CO_2$ gas from calcium carbonate. The $CaCO_3$ releases $CO_2$ in reaction with 100% phosphoric acid at $25^{\circ}C$ in a custom-designed, evacuated reaction vessel. Here we introduce the optimal procedure, reaction conditions, and sample/reactant sizes for the calcium carbonate-phosphoric acid reaction, and we also provide the details of extracting, purifying, and collecting the $CO_2$ gas from the reaction vessel. The measurements of $\delta^{18}O$ and $\delta^{13}C$ of $CO_2$ were performed at Seoul National University using a stable isotope ratio mass spectrometer (VG Isotech, SIRA Series II) operated in dual-inlet mode. The overall analytical precisions for $\delta^{18}O$ and $\delta^{13}C$ were evaluated from the standard deviations of multiple measurements on 15 separate samples of purified $CO_2$, taken from 100-mg aliquots of a solid calcium carbonate (Solenhofen-origin $CaCO_3$) over an 8-day experimental period. The multiple measurements yielded $1\sigma$ precisions of $\pm0.01$‰ for $\delta^{13}C$ and $\pm0.05$‰ for $\delta^{18}O$, comparable to the internal instrumental precisions of the SIRA. We therefore conclude that the proposed method can serve as a way to produce an accurate secondary and/or laboratory $CO_2$ standard gas. We hope this study helps resolve the difficulties in placing a laboratory working standard onto the international isotope scales and in making accurate comparisons with data sets from other groups.
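For reference, the δ values quoted above follow the standard per-mil convention (a general convention in isotope geochemistry, not something specific to this paper); for carbon, with PDB as the reference standard:

```latex
% Standard delta notation: the per-mil deviation of the sample isotope ratio
% from the reference standard (PDB for carbon; the oxygen case uses V-SMOW
% analogously). Compiles with plain LaTeX.
\[
  \delta^{13}\mathrm{C} =
  \left(
    \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
         {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{PDB}}}
    - 1
  \right) \times 1000
\]
```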

Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.227-252 / 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages, since the lack of face-to-face contact makes it difficult for senders to communicate their emotions to recipients. Emoticons have long been used as symbols that indicate the mood of the speaker. At present, however, emoticon use is evolving into a means of conveying the psychological state of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies such as KakaoTalk, Line, and Apple have entered the emoticon business, and that sales of related content are expected to grow steadily, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application commanding more than 90% of the domestic market share in South Korea, merely groups emoticons into broad categories such as popularity and most recent, so consumers face the inconvenience of constantly scrolling to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers perceive and feel when they see an emoticon. Such quantification enables us to analyze the characteristics and emotions of consumers who used similar emoticons, which in turn facilitates emoticon recommendations. One way to quantify emoticons is metadata-ization, a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we considered sub-detail expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in South Korea, as identified by previous emotion studies focusing on emoticon characteristics. We therefore collected the sub-detail expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes: we identified 14 emoticon-technical features and selected 36 emotional adjectives. The 36 emotional adjectives formed contrasting pairs, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked each to evaluate five different emoticon sets. After data acquisition, we conducted a factor analysis of the emoticon-emotional indexes and extracted four factors, which we named "Comic", "Softness", "Modernity", and "Transparency". We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes do not directly affect consumer attitudes but have a mediating effect on them through the emoticon-emotional factors. The analysis revealed the mechanism consumers use to evaluate emoticons: the emoticon-technical indexes affect the emoticon-emotional factors, and the emoticon-emotional factors affect consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, with a recommendation method that calculates the Euclidean distance over each factor's emotion score. To check the accuracy of the recommendation system, we compared the emotional patterns of selected emoticons with those of the recommended emoticons, and the patterns corresponded in principle. We verified the system by testing prediction accuracy; the predictions were 81.02% accurate in the first trial, 76.64% in the second, and 81.63% in the third. This study developed a methodology that can be used in various fields both academically and practically. We expect the novel emoticon recommendation system to increase emoticon sales for companies in this domain and to make consumer experiences more convenient. In addition, this study is an important first step toward an intelligent emoticon recommendation system: the emotional factors proposed here could be collected into an emotional library serving as an emotion index when new emoticons are released, and by combining the accumulated library with sales information and consumer data, companies could develop hybrid recommendation systems that bolster consumer convenience and serve as strategic intellectual assets.
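The recommendation step lends itself to a compact illustration: score each candidate emoticon set by Euclidean distance to a target profile in the four-factor space. The factor scores below are invented; only the distance-based ranking reflects the described method.

```python
# Rank emoticon sets by Euclidean distance over the four emotional factors.
import numpy as np

factor_names = ["Comic", "Softness", "Modernity", "Transparency"]
catalog = {
    "set_A": np.array([0.8, 0.2, 0.5, 0.1]),
    "set_B": np.array([0.1, 0.9, 0.4, 0.7]),
    "set_C": np.array([0.7, 0.3, 0.6, 0.2]),
}
target = np.array([0.75, 0.25, 0.55, 0.15])   # consumer's desired emotional profile

ranked = sorted(catalog, key=lambda name: np.linalg.norm(catalog[name] - target))
print("recommended:", ranked[0])               # nearest emoticon set in factor space
```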

A Study on the Market Structure Analysis for Durable Goods Using Consideration Set:An Exploratory Approach for Automotive Market (고려상표군을 이용한 내구재 시장구조 분석에 관한 연구: 자동차 시장에 대한 탐색적 분석방법)

  • Lee, Seokoo
    • Asia Marketing Journal / v.14 no.2 / pp.157-176 / 2012
  • Brand-switching data, frequently used in market structure analysis, are adequate for analyzing non-durable goods because they capture competition between specific pairs of brands. However, brand-switching data often cannot be used to analyze durable goods such as automobiles, because the key assumption that consumer preferences toward brand attributes do not change over time can be violated. A new type of data that can precisely capture competition among durable goods is therefore needed. Another problem with brand-switching data collected from actual purchase behavior is that they cannot explain why consumers consider different sets of brands. Considering these problems, the main purpose of this study is to analyze market structure for durable goods using consideration sets. The author uses an exploratory approach and latent class clustering to identify market structure based on the heterogeneous consideration sets of consumers, and then analyzes the relationship between several factors and consideration set formation. Several benefits and two demographic variables, sex and income, were selected as factors based on consumer behavior theory. The author analyzed the USA automotive market for the top 11 brands; 2,500 respondents were randomly selected from the total sample and used for the analysis. Six models of market structure were established and tested, where model 1 represents a non-structured market and model 6 a market composed of six sub-markets. The approach is exploratory because no hypothetical market structure is defined in advance. The results showed that model 1 fits the data insufficiently, implying that the USA automotive market is structured. Model 3, with three sub-markets, was significant and was identified as the optimal market structure; the three sub-markets were named USA brands, Asian brands, and European brands, implying that a country-of-origin effect may exist in the USA automotive market. A comparison between the modal classification from the derived market structure and the probabilistic classification from the research model was conducted to test how correctly model 3 classifies respondents; the model classified 97% of respondents exactly. The results of this study differ from those of previous research, which used a confirmatory approach with car type and price as structuring criteria and found a car type-price structure to be optimal for the USA automotive market. This research instead used an exploratory approach without hypothetical market structures. It is not yet settled which approach is superior: in a confirmatory approach, hypothetical market structures must be established exhaustively, because the optimal structure is selected among them, while the exploratory approach has the potential problem that the validity of the derived optimal structure is somewhat difficult to verify. There is also a difference in market boundaries: previous research analyzed seven car brands, whereas this research analyzed eleven. Both samples appear to represent the entire car market, since the cumulative market shares of the analyzed brands exceed 50%, but the boundary difference might have affected the differing results. Although the two studies reached different results, it is clear that the country-of-origin effect among brands should be considered an important criterion in analyzing the structure of the USA automotive market. This research tried to explain the heterogeneity of consideration sets among consumers using benefits and two demographic factors, sex and income. Benefits act as key variables in the consumer decision process and also serve as important criteria in market segmentation. Three benefit factors (trust/safety, image/fun-to-drive, and economy) were identified among nine benefit-related measures. The relationship between market structures and the independent variables (the three benefit factors and the two demographic factors) was then analyzed using multinomial regression. The results showed that all independent variables help explain why different market structures exist in the USA automotive market; for example, a male consumer who perceives all benefits as important and has a lower income tends to consider domestic brands more than European brands. The results also showed that benefits, sex, and income affect consideration set formation. Although it is generally assumed that higher-income consumers are more likely to purchase high-priced cars, it is notable that American consumers perceived the benefits of domestic brands positively regardless of income, and male consumers in particular showed higher loyalty to domestic brands. The managerial implications of this research are as follows. First, although the implication may be confined to the USA automotive market, the effect of sex on automobile buying behavior should be analyzed: the automotive market has traditionally been conceived as oriented toward male consumers, but the proportion of female consumers has grown over the years, and it is a natural outcome that Volvo and Hyundai Motors recently developed new cars targeted at the women's market. Secondly, the model used in this research can be applied more easily than those of previous studies. Exploratory approaches have many advantages but can be difficult to apply in practice, because they tend to involve complicated models and to require various types of data; the data needed for the model in this research, however, are only a few items, such as purchased brands, consideration sets, some benefits, and some demographic factors, and are easy to collect from consumers.
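For readers unfamiliar with latent class clustering on consideration sets, the sketch below fits a simple Bernoulli mixture by EM to binary brand-consideration indicators. It is a generic stand-in, not the author's estimation code; the sample, brand count, and class count are illustrative.

```python
# EM for a Bernoulli mixture over binary consideration sets (1 = brand considered).
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 11)).astype(float)   # 500 respondents x 11 brands (synthetic)
K = 3                                                  # candidate number of sub-markets

pi = np.full(K, 1 / K)                                 # class shares
theta = rng.uniform(0.25, 0.75, size=(K, X.shape[1]))  # P(consider brand | class)
for _ in range(100):                                   # EM iterations
    # E-step: posterior class membership per respondent
    logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class shares and consideration probabilities
    pi = resp.mean(axis=0)
    theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

print("class shares:", np.round(pi, 2))                # e.g. USA / Asian / European sub-markets
```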


Influence of Grid Cell Size and Flow Routing Algorithm on Soil-Landform Modeling (수치고도모델의 격자크기와 유수흐름 알고리듬의 선택이 토양경관 모델링에 미치는 영향)

  • Park, S.J.; Ruecker, G.R.; Agyare, W.A.; Akramhanov, A.; Kim, D.; Vlek, P.L.G.
    • Journal of the Korean Geographical Society / v.44 no.2 / pp.122-145 / 2009
  • Terrain parameters calculated from digital elevation models (DEMs) have become increasingly important in spatially distributed models of earth surface processes. This paper investigated how the ability of upslope area to predict the spatial distribution of soil properties varies with the spatial resolution of the DEM and the choice of flow routing algorithm. Four soil attributes from eight soil-terrain data sets collected in different environments were used. Five different methods of calculating upslope area were first compared for their dependency on DEM grid size. Multiple flow algorithms produced the highest correlation coefficients for most soil attributes and the lowest variation across DEM resolutions and soil attributes. The high correlation coefficients remained largely unchanged at resolutions from 15 m to 50 m. Considering the loss of topographic detail with increasing grid size, we suggest that a grid size of 15-30 m may be most suitable for soil-landscape analysis in our study areas.
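As a minimal illustration of the "upslope area" terrain parameter compared in this study, the sketch below computes single-direction (D8) flow accumulation on a tiny synthetic DEM. D8 is shown only because it is the simplest routing rule to write down; the paper finds that multiple-flow algorithms perform best.

```python
# Toy D8 flow accumulation: each cell routes its accumulated area to the
# steepest downslope neighbor; upslope area = accumulated cell count.
import numpy as np

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])                       # synthetic elevations

ny, nx = dem.shape
acc = np.ones_like(dem)                              # each cell contributes itself
order = np.dstack(np.unravel_index(np.argsort(-dem, axis=None), dem.shape))[0]
for i, j in order:                                   # process cells from high to low
    best, target = 0.0, None
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                if drop > best:                      # steepest descent neighbor
                    best, target = drop, (ni, nj)
    if target:                                       # route all accumulated area downslope
        acc[target] += acc[i, j]

print(acc)                                           # upslope cell counts per cell
```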

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee; Jang, Myung-Gil; Seo, Young-Hoon
    • Journal of KIISE: Software and Applications / v.37 no.6 / pp.455-463 / 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries. As sentence boundary candidates, the proposed method therefore considers every ending eomi (sentence-final ending) as well as every punctuation mark. We optimized engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of news articles and blog documents, and Set2, consisting of web community documents; we use the F-measure to compare results. Detecting only periods as sentence boundaries, our basis engine scored 96.5% on Set1 and 56.7% on Set2. We then improved the basis engine by adapting the features and the boundary search algorithm. For the final evaluation, we compared the adaptation engine with the basis engine on Set2; the adaptation engine improved on the basis engine by 39.6%, demonstrating the effectiveness of the proposed method for sentence boundary detection.
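A skeletal rendering of the candidate-plus-classifier idea described above: treat punctuation marks and sentence-final endings as boundary candidates, featurize their local character context, and classify. The candidate regex, features, and toy training data are assumptions for illustration, not the authors' engine.

```python
# Candidate-based sentence boundary detection sketch.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

CANDIDATE = re.compile(r"[.!?]|다|요")   # crude candidate detector (assumption)

def context_windows(text, width=3):
    """Character window around each boundary candidate."""
    return [(m.start(), text[max(0, m.start() - width): m.start() + width + 1])
            for m in CANDIDATE.finditer(text)]

# toy labeled data: (context window, is_real_boundary)
train = [("갔다 그", 1), ("았다. ", 1), ("다음에", 0), ("3.5배 ", 0)]
vec = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vec.fit_transform([w for w, _ in train])
clf = LogisticRegression().fit(X, [y for _, y in train])

text = "어제 집에 갔다 오늘은 3.5배 더 바빴다."
for pos, window in context_windows(text):
    print(pos, repr(window), clf.predict(vec.transform([window]))[0])
```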

Urine Cotinine for Assessing Tobacco Smoke Exposure in Korean: Analysis of the Korea National Health and Nutrition Examination Survey (KNHANES)

  • Jung, Sungmo; Lee, In Seon; Kim, Sae Byol; Moon, Chan Soo; Jung, Ji Ye; Kang, Young Ae; Park, Moo Suk; Kim, Young Sam; Kim, Se Kyu; Chang, Joon; Kim, Eun Young
    • Tuberculosis and Respiratory Diseases / v.73 no.4 / pp.210-218 / 2012
  • Background: The level of urine cotinine is an indicator of tobacco smoke exposure. The purpose of this study is to investigate urine cotinine as a means of assessing the smoking status of Korean smokers and of non-smokers exposed to tobacco smoke. Methods: The subjects were identified from the 2007-2009 and 2010 data sets of the Korea National Health and Nutrition Examination Survey (KNHANES) and classified as non-smokers, current smokers, and ex-smokers. Non-smokers were further divided into three subgroups according to the duration of smoke exposure. Each group was stratified by gender prior to analysis. Results: The median urine cotinine value in male current smokers was 1,221.93 ng/mL, the highest among all groups. The difference in urine cotinine levels between the male and female groups was statistically significant (p<0.01). Among females, the passive smoke exposure groups showed higher urine cotinine levels than the non-exposure groups (p=0.01). The cutoff point for discriminating current smokers from non-smokers was 95.6 ng/mL in males and 96.8 ng/mL in females; the sensitivity and specificity were 95.2% and 97.1%, respectively, in males, and 96.1% and 96.5% in females. However, urine cotinine level was not useful for distinguishing passive smoke exposure groups from non-exposure groups. Conclusion: Urine cotinine concentration is a useful biomarker for discriminating non-smokers from current smokers, but careful interpretation is necessary when assessing passive smoke exposure.
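A cutoff such as the 95.6/96.8 ng/mL values reported here is typically derived from an ROC curve; the sketch below shows one common recipe (the Youden index) on simulated cotinine values. The data are invented, not KNHANES measurements, and the paper's exact cutoff procedure may differ.

```python
# ROC-based cutoff selection via the Youden index (sensitivity + specificity - 1).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
nonsmokers = rng.lognormal(mean=2.0, sigma=1.0, size=500)     # low cotinine (simulated)
smokers = rng.lognormal(mean=7.0, sigma=0.6, size=300)        # high cotinine (simulated)
y = np.r_[np.zeros(500), np.ones(300)]
cotinine = np.r_[nonsmokers, smokers]

fpr, tpr, thresholds = roc_curve(y, cotinine)
best = np.argmax(tpr - fpr)                                   # Youden index maximizer
print(f"cutoff {thresholds[best]:.1f} ng/mL, "
      f"sensitivity {tpr[best]:.3f}, specificity {1 - fpr[best]:.3f}")
```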

The Optimal Environmental Ranges for Wetland Plants : I. Zizania latifolia and Typha angustifolia (습지식물의 적정 서식 환경 : I. 줄과 애기부들)

  • Kwon, Gi Jin; Lee, Bo Ah; Byun, Chae Ho; Nam, Jong Min; Kim, Jae Geun
    • Journal of the Korean Society of Environmental Restoration Technology / v.9 no.1 / pp.72-88 / 2006
  • The optimal environmental ranges of the establishment phase for the distribution of Zizania latifolia and Typha angustifolia were determined to develop basic data and planting-substrate criteria for the restoration, conservation, and management of wetlands. The study was carried out in June 2005 at 17 wetlands in the Gyeonggi-do and Gyeongsangnam-do regions, where inland wetlands are concentrated. A total of 127 quadrats were set in growing areas of Zizania latifolia and Typha angustifolia. Water variables ($NO_3-N$, K, Ca, Mg, and Na) and soil variables (soil texture, loss on ignition (LOI), soil pH, and soil conductivity) were analyzed. The optimal ranges for the distribution of Zizania latifolia were: water depth -5~39 cm, $NO_3-N$ <0.01~0.19 ppm, K 0.1~5.9 ppm, Ca 0.5~44.9 ppm, Mg 1.2~11.9 ppm, Na 3.4~29.9 ppm, and water conductivity 48~450 $\mu S$/cm in the water, with LOI 1.7~11.9% and soil conductivity 25.5~149.9 $\mu S$/cm in the soil. The optimal ranges for Typha angustifolia were: water depth -20~24 cm, $NO_3-N$ <0.01~0.19 ppm, K 0.2~2.9 ppm, Ca 0.6~19.9 ppm, Mg 0.2~5.9 ppm, Na 3.5~19.9 ppm, and water conductivity 96~450 $\mu S$/cm, with LOI 2.4~15.9% and soil conductivity 17.6~149.9 $\mu S$/cm. The optimal soil textures were loam, silt loam, and sandy loam for both species. A lower water depth (-20~40 cm) is appropriate for increasing biodiversity in communities dominated by either species, while a water depth of 40~100 cm is better for water purification. Both species appear frequently in soils with high silt content.
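As a small illustration of how an "optimal range" can be derived from quadrat measurements, the sketch below takes the range to be the 5th-95th percentile interval of a variable over occupied quadrats; this definition is an assumption for illustration, and the values are invented, not the study's field data.

```python
# Percentile-based "optimal range" from occupied-quadrat measurements.
import numpy as np

# water depth (cm) measured in quadrats where Zizania latifolia occurred (invented)
water_depth = np.array([-5, 0, 3, 8, 12, 15, 20, 22, 27, 31, 35, 39])
lo, hi = np.percentile(water_depth, [5, 95])
print(f"optimal water depth range: {lo:.0f} to {hi:.0f} cm")
```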