• Title/Summary/Keyword: annual

Search Results: 11,071 (processing time: 0.041 seconds)

Scanning Electron Microscopic Studies on the Features of Compression Wood, Opposite Wood, and Side Wood in Branch of Pitch Pine (Pinus rigida Miller) (리기다소나무 (Pinus rigida Miller) 지재(枝材)의 압축이상재(壓縮異常材), 대응재(對應材) 및 측면재(側面材) 특성(特性)에 관한 주사전자현미경적(走査電子顯微鏡的)인 연구(硏究))

  • Eom, Young-Geun;Lee, Phil-Woo
    • Journal of the Korean Wood Science and Technology
    • /
    • v.13 no.1
    • /
    • pp.3-18
    • /
    • 1985
  • In Korea, a study on the anatomical features of pitch pine (Pinus rigida Miller) branch wood using photomicroscopy was reported by Lee in 1972. As a continuation of that work, compression wood, opposite wood, and side wood were selected from branches of Pinus rigida Miller grown in Korea, and their structures on cross and radial surfaces were compared under the scanning electron microscope. The results are summarized as follows: 1. The tracheid transition from earlywood to latewood is very gradual, and the tracheids are nearly regular in both arrangement and size in compression wood, whereas this transition is abrupt in opposite wood and side wood, whose tracheids are less regular than those of compression wood. The annual ring width of opposite wood is narrower than that of compression wood or side wood, and the rays revealed on the cross surface of side wood are more distinct than those of compression wood and opposite wood. 2. The tracheids of compression wood tend to be rounded, especially in earlywood, while those of opposite wood and side wood tend to be somewhat angular. Intercellular spaces, helical cavities, and spiral checks are present in both the earlywood and latewood of compression wood but absent from opposite wood and side wood, regardless of earlywood or latewood. 3. In compression wood the wall thickness of latewood tracheids is similar to that of earlywood tracheids, whereas in opposite wood and side wood the latewood tracheid walls are far thicker than the earlywood ones; unlike opposite wood and side wood tracheids, compression wood tracheids lack the S3 layer of the secondary wall. 4. The tracheids in compression wood are often distorted at their tips, unlike those in opposite wood and side wood, and the bordered pits of compression wood tracheids are located at the bottoms of helical grooves, unlike those of opposite wood and side wood tracheids. 5. The bordered pits in the radial walls of opposite wood and side wood tracheids are oval, but those of compression wood tracheids show a somewhat modified oval shape. 6. In the earlywood of side wood, the small apertures of cross-field pits are roundish-triangular to rectangular, and the large ones are fenestriform through the coalescence of two small ones. In the earlywood of opposite wood, however, the small apertures are upright ovals and the large ones procumbent ovals, while in compression wood the cross-field pit apertures are shaped like tilted biconvex lenses in earlywood and like slits in latewood because of the border on the tracheid side.

  • PDF

Effects of Applying Livestock Manure on Productivity and Organic Stock Carrying Capacity of Summer Forage Crops (가축분뇨시용이 하계사료작물의 생산성 및 유기가축 사육능력에 미치는 영향)

  • Jo, Ik-Hwan;HwangBo, Soon;Lee, Ju-Sam
    • Korean Journal of Organic Agriculture
    • /
    • v.16 no.4
    • /
    • pp.421-434
    • /
    • 2008
  • This study was carried out to identify appropriate forage crops, proper application levels of livestock manure, and the organic-livestock carrying capacity per unit area, as influenced by livestock manure application levels compared with chemical fertilizer applied to corn and sorghum × sorghum hybrid, in order to produce organic forages by utilizing livestock manure. For both corn and the sorghum × sorghum hybrid, the no-fertilizer plots had significantly (p<0.05) lower annual dry matter (DM), crude protein (CP), and total digestible nutrient (TDN) yields than the other plots, whereas the N+P+K plots gave the highest yields, followed by the 150% and 100% cattle manure plots. DM, CP, and TDN yields of the cattle manure plots were significantly (p<0.05) higher than those of the no-fertilizer and P+K plots. Among the cattle manure treatments, the yields of the cattle slurry plots tended to be slightly higher than those of the composted cattle manure plots. Assuming that the corn and sorghum × sorghum hybrid produced in this trial were fed at the 70% level to 450 kg Hanwoo heifers with an average daily gain of 400 g, the livestock carrying capacity (head/year/ha) for corn was highest in the N+P+K plots (mean 6.7 head), followed by the 150% cattle slurry plots (mean 5.6 head), 150% composted cattle manure plots (mean 4.8 head), 100% cattle slurry plots (mean 4.4 head), 100% composted cattle manure plots (mean 4.3 head), P+K plots (mean 4.1 head), and no-fertilizer plots (mean 3.1 head). For the sorghum × sorghum hybrid, the N+P+K plots (mean 5.7 head) ranked highest, followed by the 100~150% cattle slurry plots (mean 4.8~5.2 head), 150% composted cattle manure plots (mean 4.7 head), 100% composted cattle manure plots (mean 4.3 head), P+K plots (mean 3.8 head), and no-fertilizer plots (mean 3.4 head). The results indicate that replacing chemical fertilizer with livestock manure on soils cultivated for forage crops can enhance not only DM and TDN yields but also the organic-stock carrying capacity. In conclusion, organic forage production by recycling livestock manure may contribute to reduced environmental pollution and to the production of environmentally friendly agricultural products through resource recycling.
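The carrying-capacity figures above follow a simple calculation: annual TDN yield, discounted to the 70% feeding level, divided by one animal's yearly TDN requirement. A minimal sketch in Python, where the per-head requirement constant is an assumption for illustration (the paper derives it from the 450 kg Hanwoo heifer with 400 g average daily gain):

```python
# Sketch of the carrying-capacity calculation described above.
# TDN_REQ_PER_HEAD is an assumed figure, NOT taken from the paper.

TDN_REQ_PER_HEAD = 1.6  # t TDN per head per year (hypothetical)

def carrying_capacity(tdn_yield_t_ha, feeding_level=0.70):
    """Head/year/ha supportable by the forage TDN yield at the given feeding level."""
    return tdn_yield_t_ha * feeding_level / TDN_REQ_PER_HEAD

# e.g. a plot yielding 8 t TDN/ha would support about 3.5 head/year/ha
print(round(carrying_capacity(8.0), 2))
```

With the actual per-head requirement and the measured TDN yields for each treatment, the same formula yields the head counts reported above.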

  • PDF

The Study of Water Environment Variations in Lake Hwajinpo (화진포호의 수환경변화에 관한 연구)

  • Heo, Woo-Myung;Choi, Sang-Gyu;Kwak, Sung-Jin;Bhattrai, Bal Dev;Lee, Eun-Joo
    • Korean Journal of Ecology and Environment
    • /
    • v.44 no.1
    • /
    • pp.9-21
    • /
    • 2011
  • This study was conducted to examine changes in the water environment of Lake Hwajinpo from 2000 to 2008 using physico-chemical parameters including salinity, dissolved oxygen, total phosphorus, and total nitrogen; zooplankton and phytoplankton were studied in 2007 and 2008. In the water quality data for Lake Hwajinpo from 2000 to 2008, water temperature, salinity, transparency, chemical oxygen demand (COD), and dissolved oxygen ranged over 2.8~29.4°C, 0.23~33.2‰, 0.2~1.8 m, 0.2~20.2 mg/L, and 0.1~17.4 mg/L, with averages of 18.0°C, 15.7‰, 0.7 m, 5.7 mg/L, and 8.0 mg/L, respectively. Total phosphorus (TP) and total nitrogen (TN) ranged over 0.024~0.869 mg/L (average 0.091) and 0.240~5.310 mg/L (average 1.235); the average TN/TP ratio was 16.4. Annual variations in COD, TP, TN, and chlorophyll-a (Chl-a) were compared. COD fell from 4.83 mg/L in 2000 to 1.80 mg/L in 2008, a reduction of 0.34 mg/L per year. TP declined gradually from 0.07 mg/L in 2000 to 0.05 mg/L in 2008. TN fell by 0.09 mg/L per year, from 1.54 mg/L in 2000 to 0.77 mg/L in 2008. Chl-a was 46.30 μg/L in 2000 and 5.78 μg/L in 2008, a yearly reduction of 4.50 μg/L. The trophic state index (TSI) of the southern and northern parts of Lake Hwajinpo fell from 67 and 63 in 2000 to 63 and 59 in 2008, respectively. In 2007 and 2008, 67 phytoplankton species in 47 families were recorded in the northern and southern parts of the lake. The dominant species in the southern part in 2007 were Asterococcus superbus in May, Lyngbya sp. in September, and Trachelomonas spp. in November; in 2008, Anabaena spiroides was abundant in August, varying with time. Twenty-five zooplankton species in 25 families were recorded in Lake Hwajinpo. In the southern part, copepod larvae were dominant in May and August 2007 and in May and November 2008, Protozoa spp. in September 2007, and Brachionus plicatilis and Brachionus urceolaris in August 2008. In the northern part, Asplanchna sp. was dominant in August and November 2007, and copepod larvae the rest of the time. Overall, the water quality of Lake Hwajinpo has been changing slowly over the long term, with nutrient concentrations (TP, TN, etc.) in particular decreasing.
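The yearly reduction rates quoted above can be reproduced from the endpoint values; they are consistent with averaging the 2000-to-2008 difference over the nine survey years, which is the assumption behind this sketch:

```python
# Reproduce the reported annual reduction rates from the endpoint
# values in the abstract. Assumption: the difference between the 2000
# and 2008 surveys is averaged over the nine survey years (2000-2008).

def annual_reduction(v_2000, v_2008, n_years=9):
    """Average yearly decrease between the two endpoint surveys."""
    return (v_2000 - v_2008) / n_years

cod = annual_reduction(4.83, 1.80)    # mg/L per year -> ~0.34
tn = annual_reduction(1.54, 0.77)     # mg/L per year -> ~0.09
chl = annual_reduction(46.30, 5.78)   # ug/L per year -> ~4.50

print(round(cod, 2), round(tn, 2), round(chl, 2))
```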

A Study on the Utilization of a Two-Furrow Combine (2조형(條型) Combine의 이용(利用)에 관(關)한 연구(硏究))

  • Lee, Sang Woo;Kim, Soung Rai
    • Korean Journal of Agricultural Science
    • /
    • v.3 no.1
    • /
    • pp.95-104
    • /
    • 1976
  • This study tested the harvesting of two rice varieties, Milyang #15 and Tong-il, with an imported Japanese two-furrow combine in order to determine its operational accuracy, its adaptability, and the feasibility of supplying the machine to rural areas in Korea. The results are summarized as follows: 1. The harvesting test of Milyang #15 was carried out five times from the optimum harvesting date, and operation was good regardless of maturity; the field grain loss ratio and the rate of unthreshed paddy were both about 1 percent. 2. The field grain loss of Tong-il increased from 5.13% to 10.34% with maturity, as shown in Fig. 1, indicating that the combine mechanism needs mechanical improvement for harvesting the Tong-il variety. 3. The rate of unthreshed paddy for the short-stemmed Tong-il variety averaged 1.6 percent: the sample combine was developed on the basis of the long-stemmed varieties grown in Japan, so some ears could not reach the teeth of the threshing drum owing to the uneven stem length of Tong-il rice. 4. The cracking rates of brown rice, which depend mostly on the revolution speed of the threshing drum (240-350 rpm), were below 1 percent for both Tong-il and Milyang #15, with no significant difference between the two varieties. 5. Since the ears of the Tong-il variety are covered by its leaves, a great deal of trash was produced, especially when threshed as raw material, and the cleaning and trash-discharge mechanisms were frequently clogged; both mechanisms need improvement. 6. The sample combine, with a track pressure of 0.19 kg/cm², could travel on soft ground sinking as much as 25 cm, as shown in Fig. 3. Considering reaping-height adjustment, however, about 5 cm of sinking is the practical limit for driving the combine on ground of irregular sinking without readjusting the reaping height. 7. The harvesting expense per ha of the sample combine, with an annual coverage of 4.7 ha under the conditions of 40 workable days per year, 60% of days suitable for harvesting, 56% field efficiency, a working speed of 0.273 m/s, and 8 workable hours per day, compares reasonably with conventional harvesting expenses, so spreading this combine to rural areas in Korea is justified provided the mechanical improvements needed to harvest Tong-il rice are made. 8. To harvest Tong-il rice, the two-furrow combine needs several mechanical improvements: the divider should be adjustable so as not to touch the ears of paddy, the clearance between the feeding chain and the threshing drum should be reduced, the trash-treatment apparatus must be improved, the fore-and-aft adjustment interval should be enlarged, and the track width must be widened for travel on soft ground.

  • PDF

Changes in blood pressure and determinants of blood pressure level and change in Korean adolescents (성장기 청소년의 혈압변화와 결정요인)

  • Suh, Il;Nam, Chung-Mo;Jee, Sun-Ha;Kim, Suk-Il;Kim, Young-Ok;Kim, Sung-Soon;Shim, Won-Heum;Kim, Chun-Bae;Lee, Kang-Hee;Ha, Jong-Won;Kang, Hyung-Gon;Oh, Kyung-Won
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.2 s.57
    • /
    • pp.308-326
    • /
    • 1997
  • Many studies have led to the notion that essential hypertension in adults is the result of a process that starts early in life; investigation of blood pressure (BP) in children and adolescents can therefore contribute to knowledge of the etiology of the condition. A unique longitudinal study of BP in Korea, known as the Kangwha Children's Blood Pressure (KCBP) Study, was initiated in 1986 to investigate changes in BP in children. This study, part of the KCBP Study, aims to show changes in BP and to determine the factors affecting BP level and change in Korean adolescents between the ages of 12 and 16 years. A total of 710 students (335 males, 375 females) who were in the first grade of junior high school (12 years old) in Kangwha County, Korea, in 1992 were followed with annual measurements of BP and related factors (anthropometric, serologic, and dietary) up to 1996; 562 students (242 males, 320 females) completed all five annual examinations. The main results are as follows: 1. For males, mean systolic and diastolic BP at ages 12 and 16 were 108.7 and 118.1 mmHg (systolic) and 69.5 and 73.4 mmHg (diastolic), respectively; BP was highest at age 15. For females, mean systolic and diastolic BP at ages 12 and 16 were 114.4 and 113.5 mmHg (systolic) and 75.2 and 72.1 mmHg (diastolic), respectively; BP peaked at ages 13-14. 2. Anthropometric variables (height, weight, body mass index, etc.) increased steadily over the study period in males, but the rate of increase slowed in females after age 15. In males, serum total cholesterol decreased and triglyceride increased with age, but neither showed a significant trend in females. Total fat intake was higher at age 16 than at age 14; the carbohydrate:protein:fat composition of total energy intake was 66.2:12.0:19.4 at age 14 and 64.1:12.1:21.8 at age 16. 3. Most anthropometric measures, especially height, body mass index (BMI), and triceps skinfold thickness, correlated significantly with BP level in both sexes. After adjustment for BMI, serum total cholesterol showed a significant negative correlation with systolic BP at age 12 in males, but at age 14 the direction of the correlation changed to positive; in females, serum total cholesterol was negatively correlated with diastolic BP at ages 15 and 16. Triglyceride and creatinine correlated positively with systolic and diastolic BP in males but showed no correlation in females. There were no consistent findings between nutrient intake and BP level, although protein intake correlated positively with diastolic BP in males. 4. BP change was positively associated with changes in BMI and serum total cholesterol in both sexes. Change in creatinine was associated with BP change positively in males and negatively in females. Males with high sodium intake showed higher systolic and diastolic BP, and females with high total fat intake maintained lower BP. The major determinant of BP change in both sexes was BMI.

  • PDF

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-per-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to, and combined with, various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand its research trends more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through this analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes in cloud computing research topics using a proposed cloud computing "research trend map," which positions research topics in a two-dimensional space: the frequency of a keyword (X-axis) and the rate of increase in its degree centrality (Y-axis). Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Based on this method, cloud computing research trend maps make it possible to grasp the main research trends easily and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the analysis of keywords, cloud computing and grid computing showed high centrality in 2009, while keywords for main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011; in 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The results indicate that distributed systems and grid computing received a great deal of attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology and linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization from the promising area to the growth area in the trend maps. This study also reveals that current cloud computing research has rapidly shifted from a focus on technical issues to a focus on application issues such as SLAs (Service Level Agreements).
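The four-quadrant rule behind the research trend map can be sketched as a simple classifier over the two dimensions. The thresholds and keyword coordinates below are illustrative assumptions, not values from the study:

```python
# Four-quadrant classification for the research trend map described
# above: X = keyword frequency, Y = rate of increase in degree
# centrality. Cutoffs are hypothetical, for illustration only.

def classify(freq, centrality_growth, freq_cut=50, growth_cut=0.0):
    """Map a keyword's (frequency, centrality growth) to a trend-map area."""
    if freq >= freq_cut and centrality_growth >= growth_cut:
        return "growth"
    if freq >= freq_cut:
        return "maturation"
    if centrality_growth >= growth_cut:
        return "promising"
    return "decline"

# Hypothetical keyword positions, loosely mirroring the findings:
print(classify(30, 0.4))    # low frequency, rising centrality
print(classify(80, 0.5))    # high frequency, rising centrality
print(classify(20, -0.3))   # low frequency, falling centrality
```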

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, frequent disclosures of private information have raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need for information privacy policies and technologies for collecting and storing data has grown, as has information privacy research in fields such as medicine, computer science, business, and statistics. Various information security incidents have made finding experts in the information security field an important issue, and objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper presents a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. After dropping outliers and irrelevant papers, 784 papers remained for testing the suggested hypotheses. The co-authorship network data for co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypotheses, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that as the degree of cooperation increased, so did publishing performance. Closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did publishing performance. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper offers an understanding of social network measures and their potential for finding experts in the information privacy field. Social concern about securing the objectivity of experts has increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research. The small sample size makes it difficult to generalize findings on differences in information diffusion according to media and proximity; further studies could consider a larger sample and greater media diversity, and explore differences in information diffusion by media type and information proximity in more detail. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable; however, in network analysis research, network indices can only be computed after the network relationship is created. An annual analysis could help mitigate this limitation.
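The two supported measures, degree centrality (H1) and closeness centrality (H2), can be illustrated on a toy co-authorship graph. The five authors and their links below are hypothetical, not from the NDSL data:

```python
# Degree and closeness centrality on a made-up co-authorship graph.
# An edge means two authors have co-authored at least one paper.

from collections import deque

graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def degree_centrality(g, v):
    """Share of the other n-1 authors that v has co-authored with."""
    return len(g[v]) / (len(g) - 1)

def closeness_centrality(g, v):
    """(n-1) / sum of shortest-path distances from v, via BFS."""
    dist, frontier = {v: 0}, deque([v])
    while frontier:
        u = frontier.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                frontier.append(w)
    return (len(g) - 1) / sum(d for u, d in dist.items() if u != v)

print(degree_centrality(graph, "A"))   # "A" is the best-connected author
```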

Study of East Asia Climate Change for the Last Glacial Maximum Using Numerical Model (수치모델을 이용한 Last Glacial Maximum의 동아시아 기후변화 연구)

  • Kim, Seong-Joong;Park, Yoo-Min;Lee, Bang-Yong;Choi, Tae-Jin;Yoon, Young-Jun;Suk, Bong-Chool
    • The Korean Journal of Quaternary Research
    • /
    • v.20 no.1 s.26
    • /
    • pp.51-66
    • /
    • 2006
  • The climate of the last glacial maximum (LGM) in northeast Asia is simulated with the NCAR CCM3 atmospheric general circulation model at a spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The modern climate is simulated with prescribed sea surface temperatures and sea ice provided by NCAR and with contemporary atmospheric CO2, topography, and orbital parameters, while the LGM simulation is forced with the reconstructed CLIMAP sea surface temperatures, sea ice distribution, ice sheet topography, reduced CO2, and orbital parameters. Under LGM conditions, winter surface temperature is markedly reduced, by more than 18°C, in the Korean west sea and the continental margin of the Korean east sea, where the ocean was exposed as land in the LGM, whereas in these areas summer surface temperature is up to 2°C warmer than present; this is due to the difference in heat capacity between ocean and land. Overall, in the LGM the surface is cooled by 4~6°C over the northeast Asian land area and by 7.1°C over the entire domain. An analysis of surface heat fluxes shows that the surface cooling is due to the increase in outgoing longwave radiation associated with the reduced CO2 concentration. The reduction in surface temperature leads to a weakening of the hydrological cycle. In winter, precipitation decreases most in the southeastern part of Asia, by about 1~4 mm/day, while in summer a larger reduction is found over China. Overall, annual-mean precipitation decreases by about 50% in the LGM. In northeast Asia, evaporation is also reduced overall in the LGM, but the reduction in precipitation is larger, leading to a drier climate. The drier LGM climate simulated in this study is consistent with proxy evidence compiled in other areas. Overall, the high-resolution model captures the regional climate features reasonably well.

  • PDF

Short-Term Effect of Mineral Nitrogen Application on Reed Canarygrass (Phalaris arundinacea L.) in Uncultivated Rice Paddy (유휴 논토양에서 Reed Canarygrass (Phalaris arundinacea L.) 에 대한 무기태 질소의 단기 시용 효과)

  • 이주삼;조익환;안종호
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.18 no.2
    • /
    • pp.95-106
    • /
    • 1998
  • A study was made to estimate the economic level (Necon.) of mineral nitrogen and the cutting frequency for dry matter production of reed canarygrass (Phalaris arundinacea L.) in uncultivated rice paddy over the harvest years 1993~1995. Annual mineral nitrogen was applied at 0, 90, 180, 270, and 360 kg ha⁻¹ under 3 cuttings; 0, 120, 240, 360, and 480 kg ha⁻¹ under 4 cuttings; and 0, 150, 300, 450, and 600 kg ha⁻¹ under 5 cuttings. The results were summarized as follows: 1. The dry matter yields of all cutting frequencies in 1993 were significantly higher than in the other hay years; mean dry matter yields were 14.40, 13.88, and 15.98 t ha⁻¹ under 3, 4, and 5 cuttings, respectively. 2. Significantly higher dry matter yields of 15.37 and 15.80 t ha⁻¹ were obtained at the level of 120 kg ha⁻¹ cut⁻¹ under 3 and 4 cuttings, and of 14.02~14.08 t ha⁻¹ at levels of 90~120 kg ha⁻¹ under 5 cuttings. 3. The higher efficiencies of dry matter production in response to mineral nitrogen were 29.7 kg at the level of 90 kg ha⁻¹ yr⁻¹ under 3 cuttings, 19.6 kg at 240 kg ha⁻¹ yr⁻¹ under 4 cuttings, and 20.1 kg at 150 kg ha⁻¹ yr⁻¹ under 5 cuttings. 4. Significantly higher dry matter yields appeared at the 2nd cut under 3 cuttings (5.02 t ha⁻¹), at the 2nd and 3rd cuts under 4 cuttings (3.94~4.37 t ha⁻¹), and at the 2nd and 3rd cuts under 5 cuttings (3.58~3.81 t ha⁻¹). 5. The highest relative dry matter yields were 40.4% for the 2nd cut under 3 cuttings, 34.9% for the 3rd cut under 4 cuttings, and 31.5% for the 2nd cut under 5 cuttings. 6. The estimated marginal dry matter yields (Ymar.) were 13.8~14.7 t ha⁻¹ at economic N levels of 228.5~291.9 kg ha⁻¹ yr⁻¹ under 3 cuttings, 13.8~14.2 t ha⁻¹ at 293.5~335.7 kg ha⁻¹ yr⁻¹ under 4 cuttings, and 12.2~12.8 t ha⁻¹ at 237.5~302.5 kg ha⁻¹ yr⁻¹ under 5 cuttings. 7. Maximum dry matter yields (Ymax.) were 17.0 t at a limiting N level (Nmax.) of 558.9 kg ha⁻¹ yr⁻¹ under 3 cuttings, 16.1 t at 531.4 kg ha⁻¹ yr⁻¹ under 4 cuttings, and 13.9 t at 546.3 kg ha⁻¹ yr⁻¹ under 5 cuttings. 8. The economic N levels for individual cuts ranged over 42.6~123.8 kg ha⁻¹ under 3 cuttings, 27.3~144.1 kg ha⁻¹ under 4 cuttings, and 9.3~159.4 kg ha⁻¹ under 5 cuttings. 9. The proper cutting frequency for dry matter production of reed canarygrass over the 1993~1995 harvest years was 3 cuttings, due mainly to the higher efficiency of N for dry matter production.
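The Ymax./Nmax. and economic-level figures above are consistent with fitting a quadratic yield response Y(N) = a + bN - cN² and solving dY/dN = 0 for the limiting N level, or dY/dN = r for the economic level, where r is the N-to-DM price ratio. A sketch with assumed coefficients, not the paper's fitted ones:

```python
# Quadratic N-response sketch: Y(N) = a + b*N - c*N**2.
# Coefficients a, b, c and the price ratio are illustrative
# assumptions; the paper estimates them from the trial data.

a, b, c = 10.0, 0.025, 2.24e-5   # t/ha, t per kg N, t per (kg N)^2

def dm_yield(n):
    """Dry matter yield (t/ha) at annual N level n (kg/ha/yr)."""
    return a + b * n - c * n ** 2

n_max = b / (2 * c)              # dY/dN = 0: limiting N level (Nmax.)

def economic_n(price_ratio):
    """N level where marginal yield equals the N/DM price ratio (Necon.)."""
    return (b - price_ratio) / (2 * c)

print(round(n_max, 1), round(economic_n(0.01), 1))
```

With fitted coefficients per cutting frequency, the same two formulas give the Nmax. and Necon. values reported above.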

  • PDF

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, in which total collapse came in a single moment. The key variables in corporate defaults vary over time: Deakin's (1972) study, building on the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice's (2001) study likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias with a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses ten years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a time series deep learning model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the learning data and excellent predictive power. Each bankruptcy prediction model is then rebuilt by re-integrating the training and validation data (2000~2008) and applying the optimal parameters from the validation step. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the problems of nonlinear variables, multicollinearity, and lack of data: the logit model handles nonlinearity, the Lasso regression model addresses multicollinearity, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and delivers better predictive power. Through the Fourth Industrial Revolution, governments in Korea and overseas are working to integrate such systems into everyday life, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
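The year-based three-way split described above (train on 2000~2006, tune on the crisis years 2007~2008, test on 2009) can be sketched as a small helper. The record layout is an assumption for illustration; the study applies this split to annual corporate financial statements:

```python
# Year-based train/validation/test split as described above.
# Records are assumed to be (year, features, label) tuples.

def split_by_year(records):
    """Partition records into pre-crisis training (2000-2006),
    crisis-period validation (2007-2008), and test (2009) sets."""
    train, valid, test = [], [], []
    for rec in records:
        year = rec[0]
        if 2000 <= year <= 2006:
            train.append(rec)
        elif year in (2007, 2008):
            valid.append(rec)
        elif year == 2009:
            test.append(rec)
    return train, valid, test

# One dummy record per year, 2000-2009 -> 7/2/1 split
sample = [(y, None, 0) for y in range(2000, 2010)]
tr, va, te = split_by_year(sample)
print(len(tr), len(va), len(te))   # 7 2 1
```

After validation-based tuning, the study merges `tr` and `va` (2000~2008) to retrain before the final evaluation on `te`.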