• Title/Summary/Keyword: Field effect


The Characteristics of Rural Population, Korea, 1960~1995: Population Composition and Internal Migration (농촌인구의 특성과 그 변화, 1960~1995: 인구구성 및 인구이동)

  • 김태헌
    • Korea journal of population studies
    • /
    • v.19 no.2
    • /
    • pp.77-105
    • /
    • 1996
  • The rural problems we face start from an extremely small population and a population structure skewed by age and sex. We therefore analyzed the change in the rural population, and analyzed the recent return migration to rural areas by comparing recent in-migrants with out-migrants. By analyzing rural village survey data showing the current characteristics of the rural population, we identified the effects of in-migrants on rural areas and predicted the future of rural villages by their characteristics. The change in rural population composition by age was very clear. As out-migration to the cities continued, the share of young children aged 0~4 years fell while the share of the aged grew. The proportion of the population aged 0~4 years was 45.1% of the total population in 1970, dropped to 20.4% in 1995, and is predicted to fall below 20% from now on. In the same period (1970~1995), the population aged 65 years and over rose from 4.2% to 11.9%. In 1960, before industrialization, the proportion of the population aged 0~4 years in rural areas was higher than in cities. As the rural young population continuously moved to cities, it became lower than in urban areas from 1975, and the gap grew until 1990. But the proportion of the rural population aged 0~4 years reached 6.2% in 1995 and the gap narrowed. We can attribute this to a change in the characteristics of in-migrants and out-migrants in rural areas. Considering the age composition of the population moving from urban to rural areas in the late 1980s, 51.8% of all migrants were concentrated in the 20~34 age group, and their educational level was higher than that of out-migrants to urban areas. This fact foreshadows changes in the rural population, and the results will appear as changes in rural society.
However, after comparing the population structure of a pure rural village in Boeun-gun with a suburban village in Paju-gun, which had been an agriculture-centered village but changed rapidly in recent years, we conclude that the recent change in rural population structure, in which in-migrants to rural areas are becoming younger, is a phenomenon of suburban rural areas only, not of rural areas in general. From the population structure observed in the field survey of these villages, we can see that in pure rural villages without any influence from cities the residents are highly aged, while industrialization and urbanization are progressing in suburban villages. Therefore, the recent partial change in rural population structure and in the characteristics of in-migrants to rural areas is affecting, and being affected by, the population change of areas like suburban rural villages. Although there are return migrants who move to rural areas to take up agriculture, their number is too small to appear as a statistical effect.


Physiological studies on the sudden wilting of JAPONICA/INDICA crossed rice varieties in Korea -I. The effects of plant nutritional status on the occurrence of sudden wilting (일(日). 인원연교잡(印遠緣交雜) 수도품종(水稻品種)의 급성위조증상(急性萎凋症狀) 발생(發生)에 관(關)한 영양생리학적(營養生理學的) 연구(硏究) -I. 수도(水稻)의 영양상태(營養狀態)가 급성위조증상(急性萎凋症狀) 발생(發生)에 미치는 영향(影響))

  • Kim, Yoo-Seob
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.21 no.3
    • /
    • pp.316-338
    • /
    • 1988
  • To identify the physiological phenomena behind the sudden wilting of japonica/indica crossed varieties, a pot experiment was carried out under heavy N application with various levels of potassium in Japan. The results obtained are as follows. 1. Sudden wilting occurred in both varieties used, Yushin and Milyang 23, with the former showing a higher degree than the latter. 2. Sudden wilting occurred in two types, one at the early ripening stage and the other at the late ripening stage. The former type was found in fields with low potassium supply, and the latter seemed to be related to varietal wilting tolerance. 3. From the investigation of the effective tillering rate and the change in dry weight of each organ at the heading stage, it was inferred that growth status from the young panicle formation stage to the heading stage was related to sudden wilting tolerance. 4. Manganese content at the heading stage, the Fe/Mn and Fe·Fe/Mn ratios in the stem at the late ripening stage, and the $K_2O$/N ratio of the stem at the harvesting stage were recognized as the specific factors connected with sudden wilting. Mn content in sudden-wilting rice plants was already increased remarkably at the heading stage. In relation to root age and the absorption characteristics of Mn, senility of the root before the heading stage was inferred as the cause of the increase in Fe/Mn or Fe·Fe/Mn. 5. The $K_2O$/N ratio of the culm at the harvesting stage was lower in the upper nodes than in the lower nodes in relation to sudden wilting, in good accordance with the fact that the symptoms of sudden wilting proceeded from the upper leaves to the lower leaves. This phenomenon differs from the usual one, in which the effect of potassium deficiency is more remarkable in the lower nodes than in the upper nodes. 6. All varieties under potassium deficiency had a high nitrogen content in the leaves at the heading stage and a low $K_2O$/N ratio in each organ; especially, the $K_2O$/N ratio was much lower in the sheath and culm than in the leaves.


Application of LCA on Lettuce Cropping System by Bottom-up Methodology in Protected Cultivation (시설상추 농가를 대상으로 하는 bottom-up 방식 LCA 방법론의 농업적 적용)

  • Ryu, Jong-Hee;Kim, Kye-Hoon;Kim, Gun-Yeob;So, Kyu-Ho;Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.44 no.6
    • /
    • pp.1195-1206
    • /
    • 2011
  • This study was conducted to apply LCA (Life Cycle Assessment) methodology to lettuce (Lactuca sativa L.) production systems in Namyangju as a case study. Five lettuce-growing farms with three different farming systems (two farms with an organic farming system, one farm with a system without agricultural chemicals, and two farms with a conventional farming system) were selected in Namyangju city, Gyeonggi province, Korea. The input data for LCA were collected by interviewing the farmers. The system boundary was set at one cropping season without heating and cooling systems to reduce uncertainties in data collection and calculation. Sensitivity analysis was carried out to find the effect of the type and amount of fertilizer and of energy use on GHG (greenhouse gas) emission. The GTG (gate-to-gate) inventory revealed that the quantities of fertilizer and energy input were the largest in producing 1 kg of lettuce, while the amount of pesticide input was the smallest. The amount of electricity input was the largest on all farms except farm 1, which purchased seedlings from outside. The direct field emissions of $CO_2$, $CH_4$ and $N_2O$ were 6.79E-03 (farm 1), 8.10E-03 (farm 2), 1.82E-02 (farm 3), 7.51E-02 (farm 4) and 1.61E-02 (farm 5) kg $kg^{-1}$ lettuce, respectively. According to the LCI analysis focused on GHG, $CO_2$ emission was 2.92E-01 (farm 1), 3.76E-01 (farm 2), 4.11E-01 (farm 3), 9.40E-01 (farm 4) and 5.37E-01 (farm 5) kg $CO_2$ $kg^{-1}$ lettuce, respectively. Carbon dioxide contributed the most to GHG emissions; it was mainly emitted in the process of energy production, which accounted for 67~91% of the $CO_2$ emission from every production process on the 5 farms.
Because of the higher proportion of $CO_2$ emission from the production of compound fertilizer in the conventional crop system, the conventional crop system had a lower proportion of $CO_2$ emission from energy production than the organic crop system did. With increasing inorganic fertilizer input, the lettuce cultivation process covered a higher proportion of $N_2O$ emission: farms 1 and 2 covered 87% of total $N_2O$ emission, and farm 3 covered 64%. The carbon footprints were 3.40E-01 (farm 1), 4.31E-01 (farm 2), 5.32E-01 (farm 3), 1.08E+00 (farm 4) and 6.14E-01 (farm 5) kg $CO_2$-eq. $kg^{-1}$ lettuce, respectively. The sensitivity analysis revealed that soybean meal was the most sensitive among the 4 types of fertilizer, while compound fertilizer was the least sensitive among all fertilizer inputs. Electricity showed the largest sensitivity for $CO_2$ emission, whereas the variation in $N_2O$ was almost zero.
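The carbon footprints above aggregate the three gases into kg $CO_2$-eq per kg of lettuce. A minimal sketch of that aggregation, not the paper's code: the 100-year global warming potentials used here (CO2=1, CH4=25, N2O=298, the commonly cited IPCC AR4 factors) and the example emission values are my assumptions.

```python
# Commonly used IPCC AR4 100-year global warming potentials (an assumption;
# the paper does not state which GWP set it used).
GWP_100YR = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def carbon_footprint(emissions_kg_per_kg):
    """Sum GHG emissions (kg of each gas per kg of product) weighted by GWP,
    giving kg CO2-eq per kg of product."""
    return sum(GWP_100YR[gas] * amount for gas, amount in emissions_kg_per_kg.items())

# Illustrative (made-up) per-kg-lettuce emissions for one hypothetical farm:
footprint = carbon_footprint({"CO2": 2.92e-01, "CH4": 1.0e-04, "N2O": 1.0e-04})
```

Because the GWP of $N_2O$ is so large, even small field emissions of it can move the footprint noticeably, which is why the sensitivity of $N_2O$ is reported separately.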

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents becomes more important as content generation continues. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of searching for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, which account for about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports of the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, the same number of score functions as stocks are trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power, and whether the score functions are well constructed, by calculating the hit ratio over all reports of the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits and complements remain; representatively, the especially poor performance for some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
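The predict-then-evaluate step described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the per-stock score functions here are random linear weight vectors rather than trained neural tensor network scorers, and all names and sizes are my own.

```python
import random

random.seed(0)
N_ENTITIES, N_STOCKS = 100, 3  # 100 one-hot entities per stock in the paper

# Stand-in for the trained per-stock score functions: one weight vector per stock.
weights = [[random.gauss(0, 1) for _ in range(N_ENTITIES)] for _ in range(N_STOCKS)]

def predict_stock(entity_onehot):
    """Score the entity with every stock's function; the highest score wins."""
    scores = [sum(w * x for w, x in zip(ws, entity_onehot)) for ws in weights]
    return scores.index(max(scores))

def hit_ratio(entities, labels):
    """Share of test entities assigned to their true stock."""
    hits = sum(predict_stock(e) == y for e, y in zip(entities, labels))
    return hits / len(labels)
```

With trained score functions in place of the random weights, `hit_ratio` over the testing-set reports corresponds to the 69.3% figure reported above.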

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond their stakeholders, including managers, employees, creditors, and investors of the bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where everything collapses in a single moment. The key variables involved in corporate defaults vary over time: Deakin's (1972) study, re-examining the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice's (2001) study likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. However, past studies use static models, and most do not consider changes that occur over the course of time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
To construct a bankruptcy model that is consistent through time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the learning data and excellent prediction power. After that, each bankruptcy prediction model is restructured by integrating the learning data and validation data (2000~2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data come with the limitations of nonlinear variables, multi-collinearity among variables, and lack of data.
While the logit model handles nonlinearity, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, deep learning is much faster than regression analysis at corporate default prediction modeling, and it also has greater predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday lives of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
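The 7/2/1-year data partition described above can be sketched as follows. This is a minimal illustration of the paper's split, with a hypothetical `(year, features, label)` record layout of my own choosing:

```python
def split_by_year(records):
    """Split annual corporate records into the paper's three periods:
    2000-2006 training (7 yrs), 2007-2008 crisis validation (2 yrs),
    2009 final test (1 yr). Each record is a (year, features, label) tuple."""
    train = [r for r in records if 2000 <= r[0] <= 2006]
    valid = [r for r in records if 2007 <= r[0] <= 2008]
    test  = [r for r in records if r[0] == 2009]
    return train, valid, test
```

Splitting strictly by year, rather than randomly, is what lets the validation period contain the financial crisis and the test period lie entirely after it.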

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses such as corporate intrinsic value analysis and technical auxiliary indices. However, pattern analysis techniques are difficult and less computerized than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has increased in performance terms, long-term forecasting power remains limited, so these methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology did not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. Such studies find a meaningful pattern, locate a point that matches it, and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find patterns with stock price prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple, because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can easily be implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Performance is measured assuming that both the buy and the sell were actually executed, which is closer to a real situation. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results; we interpret this as trading after confirming the completion of a pattern being more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the optimization section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock carries a risk of over-optimization.
Therefore, we set the number of constituent stocks to 20 to gain the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This shows that some price volatility is needed for patterns to take shape, but that the highest volatility is not the best.
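The first turning-point method above, the minimum-change-rate zig-zag, can be sketched as follows. This is my assumption of a common zig-zag formulation, not the authors' code: reversals smaller than `min_change` are ignored, and only the confirmed extremes become turning points.

```python
def zigzag(prices, min_change=0.05):
    """Return indices of turning points, ignoring reversals smaller than min_change."""
    pivots = []        # confirmed turning-point indices
    last = 0           # index of the current provisional extreme
    direction = 0      # +1 rising leg, -1 falling leg, 0 undecided
    for i in range(1, len(prices)):
        change = (prices[i] - prices[last]) / prices[last]
        if direction == 0:
            if abs(change) >= min_change:
                pivots.append(last)
                direction = 1 if change > 0 else -1
                last = i
        elif direction > 0:
            if prices[i] > prices[last]:
                last = i                    # new high extends the rising leg
            elif change <= -min_change:
                pivots.append(last)         # peak confirmed by a big enough drop
                direction, last = -1, i
        else:
            if prices[i] < prices[last]:
                last = i                    # new low extends the falling leg
            elif change >= min_change:
                pivots.append(last)         # valley confirmed by a big enough rise
                direction, last = 1, i
    pivots.append(last)                     # close the final, unconfirmed leg
    return pivots
```

Five consecutive turning points from such a sequence form one candidate M or W pattern; the high-low line and swing wave variants differ only in how the extremes are confirmed.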

School Experiences and the Next Gate Path : An analysis of Univ. Student activity log (대학생의 학창경험이 사회 진출에 미치는 영향: 대학생활 활동 로그분석을 중심으로)

  • YI, EUNJU;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.149-171
    • /
    • 2020
  • The university period is when students make decisions about their actual careers. As our society develops rapidly, jobs become diversified, subdivided, and specialized, and students' job preparation periods grow longer and longer. This study analyzed the log data of college students to see how the various activities they experience inside and outside of school influence employment. For this experiment, students' various activities were systematically classified, recorded as activity data, and divided into six core competencies (job reinforcement, leadership & teamwork, globalization, organizational commitment, job exploration, and autonomous implementation). The effect of the six competency levels on employment status (employed group vs. unemployed group) was analyzed. The analysis confirmed that the difference in level between the employed and unemployed groups was significant for all six competencies, so we can infer that activities at school matter for employment. Next, to analyze the impact of the six competencies on the qualitative performance of employment, we conducted an ANOVA after dividing each competency level into two groups (low and high) and creating six groups by the range of first annual salary. Students with high levels of the globalization, job exploration, and autonomous implementation competencies were found to belong to higher annual salary groups. The theoretical contributions of this study are as follows. First, it connects the competencies that can be extracted from school experience with competencies in the human resource management field, and adds the job exploration and autonomous implementation competencies that university students need for a successful career and life.
Second, we conducted this analysis with competency data measured from actual activities, together with result data collected from interviews and research. Third, we analyzed not only quantitative performance (employment rate) but also qualitative performance (annual salary level). The practical uses of this study are as follows. First, it can guide the establishment of career development plans for college students: students should prepare for jobs that express their strengths, based on an analysis of the world of work, rather than engaging in a strategy-less, unbalanced competition to accumulate excessive credentials. Second, those in charge of designing experiences for college students, at organizations such as schools, businesses, local governments, and governments, can refer to the six competencies suggested here to design useful experiences that motivate more participation; one event may then bring mutual benefits to both the event designers and the students. Third, in the era of digital transformation, government policy makers who envision the balanced development of the country can make policies that channel the curiosity and energy of college students toward that balanced development. A great deal of manpower is required to start up novel platform services that have not existed before, or to digitize existing analog products, services, and corporate culture. The activities of today's digital-generation college students are not only catalysts for all industries but also beneficial and necessary for the students' own successful career development.
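The employed-vs-unemployed comparison described above can be illustrated with a simple two-sample statistic. This sketch uses Welch's t statistic as an illustrative stand-in for the study's significance tests (the study's own procedure and data are not reproduced here; the scores below are made up):

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples: a large |t| suggests
    the group means differ by more than sampling noise would explain."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2          # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Illustrative (made-up) scores for one competency in the two groups:
employed   = [4.1, 3.8, 4.5, 4.0, 4.2]
unemployed = [3.2, 3.5, 3.0, 3.4, 3.1]
t = welch_t(employed, unemployed)
```

Repeating such a comparison per competency, or an ANOVA across the six salary groups, mirrors the structure of the analysis the abstract reports.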

Effects of Planting and Harvest Times on the Forage Yield and Quality of Spring and Summer Oats in Mountainous Areas of Southern Korea (남부산간지에서 봄과 여름 조사료 귀리의 파종과 수확 시기에 따른 조사료 품질과 생산성 변화)

  • Shin, Seonghyu;Lee, Hyunjung;Ku, Jahwan;Park, Myungryeong;Rha, Kyungyoon;Kim, Byeongju
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.66 no.2
    • /
    • pp.155-170
    • /
    • 2021
  • Oats (Avena sativa L.) are a good forage crop for regions with short growing periods and/or cool weather, such as the mountainous areas of southern Korea. In this study, using the Korean elite summer oat varieties 'High speed' and 'Dark horse', we aimed to determine the optimal times to plant and harvest forage oats seeded in spring and summer in a mountainous area. Seeds were planted three times from late February and from early August at 9- or 10-day intervals, respectively, and plants were harvested three times from late May to October at 10-day intervals. The experiment was carried out in an upland field (Jangsu-gun, Jeonbuk) in 2015 and 2016. We investigated the changes in forage yield (FY) and quality [crude protein (CP) and total digestible nutrient (TDN) contents] by time of planting and harvest. Neither the forage quality nor the yield of either spring or summer oats was significantly influenced by the time of planting. The CP of spring oats harvested three times at 10-day intervals from late May was 12.0%, 8.2%, and 6.5%, indicating a reduction as harvest was delayed. In summer oats, CP ranged from 8.4% to 8.7% and, unlike in spring oats, was not significantly influenced by the time of harvest. For both forage types, harvest time had no significant effect on TDN. The FY of spring oats harvested in late May and in early and mid-June was 10.2, 18.7, and 19.5 ton ha-1, respectively; yields on the latter two dates were significantly higher, by 83% and 91%, than in late May. Similarly, the FY of summer oats harvested in late October and in early and mid-November was 7.1, 12.5, and 12.1 ton ha-1, respectively; yields on the latter two dates were significantly higher, by 75% and 71%, than in late October.
Taking into consideration forage yield and quality (not less than 8% CP), it would be profitable to plant spring oats in the mountainous areas of southern Korea until March 15 and harvest around June 10, whereas summer oats could beneficially be planted until August 25 and harvested from early November.

Mineral Nutrition of the Field-Grown Rice Plant -[I] Recovery of Fertilizer Nitrogen, Phosphorus and Potassium in Relation to Nutrient Uptake, Grain and Dry Matter Yield- (포장재배(圃場栽培) 수도(水稻)의 무기영양(無機營養) -[I] 삼요소이용률(三要素利用率)과 양분흡수량(養分吸收量), 수량(收量) 및 건물생산량(乾物生産量)과(乾物生産量)의 관계(關係)-)

  • Park, Hoon
    • Applied Biological Chemistry
    • /
    • v.16 no.2
    • /
    • pp.99-111
    • /
    • 1973
  • Percentage recovery or fertilizer nitrogen, phosphorus and potassium by rice plant(Oriza sativa L.) were investigated at 8, 10, 12, 14 kg/10a of N, 6 kg of $P_2O_5$ and 8 kg of $K_2O$ application level in 1967 (51 places) and 1968 (32 places). Two types of nutrient contribution for the yield, that is, P type in which phosphorus firstly increases silicate uptake and secondly silicate increases nitrogen uptake, and K type in which potassium firstly increases P uptake and secondly P increases nitrogen uptake were postulated according to the following results from the correlation analyses (linear) between percentage recovery of fertilizer nutrient and grain or dry matter yields and nutrient uptake. 1. Percentage frequency of minus or zero recovery occurrence was 4% in nitrogen, 48% in phosphorus and 38% in potassium. The frequency distribution of percentage recovery appeared as a normal distribution curve with maximum at 30 to 40 recovery class in nitrogen, but appeared as a show distribution with maximum at below zero class in phosphorus and potassium. 2. Percentage recovery (including only above zero) was 33 in N (above 10kg/10a), 27 in P, 40 in K in 1967 and 40 in N, 20 in P, 46 in Kin 1968. Mean percentage recovery of two years including zero for zero or below zero was 33 in N, 13 in P and 27 in K. 3. Standard deviation of percentage recovery was greater than percentage recovery in P and K and annual variation of CV (coefficient of variation) was greatest in P. 4. The frequency of significant correlation between percentage recovery and grain or dry matter yield was highest in N and lowest in P. Percentage recovery of nitrogen at 10 kg level has significant correlation only with percentage recovery of P in 1967 and only with that of potassium in 1968. 5. 
The correlation between percentage recovery and dry matter yield across all treatments was significant only for P in 1967 and only for K in 1968. Negative correlation coefficients between percentage recovery and grain or dry matter yield in the no-fertilizer or minus-nutrient plots appeared only for K in 1967 and only for P in 1968, indicating that phosphorus fertilizer played a distinctly positive role in 1967 but a somewhat negative one in 1968, while potassium fertilizer worked positively in 1968 but somewhat negatively in 1967. 6. The correlations between percentage recovery and grain yield showed the same tendency as those with dry matter yield but with lower coefficients; the role of the nutrients was therefore expressed more precisely through dry matter yield. 7. Percentage recovery of N very frequently correlated significantly with nitrogen uptake in the nitrogen-applied plots, correlated significantly and negatively with nitrogen uptake in the minus-nitrogen plots, and less frequently correlated significantly with P, K and Si uptake in the nitrogen-applied plots. 8. Percentage recovery of P correlated significantly with Si uptake in all treatments and with N uptake in all treatments except the minus-phosphorus plot in 1967, indicating that phosphorus application first increases Si uptake and silicate in turn increases nitrogen uptake. Percentage recovery of P also frequently correlated significantly with P or K uptake in the nitrogen-applied plots. 9. Percentage recovery of K correlated significantly with P uptake in all treatments and with N uptake in all treatments except the minus-phosphorus plot; it correlated significantly and negatively with K uptake in the minus-K plot and with Si uptake in the no-fertilizer plot and the highest-N plot in 1968, and had negative correlation coefficients with P uptake in the no-fertilizer and minus-nutrient plots in 1967. Percentage recovery of K had higher correlation coefficients with dry matter or grain yield than with K uptake.
The above facts suggest that K application first increases P uptake and phosphorus in turn increases nitrogen uptake for dry matter yield. 10. Percentage recovery of N had a significantly higher correlation coefficient with grain or dry matter yield in the minus-K plot than in the minus-phosphorus plot, and higher still in the complete-fertilizer plot than in the minus-K plot. A similar tendency was observed between N uptake and percentage recovery of N among these treatments. Percentage recovery of K had negative correlation coefficients with grain or dry matter yield in the no-fertilizer and minus-nutrient plots. These facts indicate that phosphorus increases nitrogen uptake and that, when phosphorus or nitrogen is insufficient, potassium competitively inhibits nitrogen uptake. 11. Percentage recovery of N, P and K correlated significantly and negatively with the relative dry matter yield of the minus-phosphorus plot (yield of the minus plot x 100 / yield of the complete plot) in 1967, and with the relative grain yield of the minus-K plot in 1968. These results suggest that phosphorus affects tillering and the vegetative phase more, while potassium affects grain formation and the reproductive phase more, and they clearly show the annual difference in P and K fertilizer effects according to the weather. 12. The correlations between percentage recovery of fertilizer and the relative yield of the minus-nutrient plot, or of the no-fertilizer plot relative to the minus-nutrient plot, indicated that nitrogen is the most effective factor for production even in the minus-P or minus-K plots. 13. From the above facts it can be concluded that about 40 to 50 percent of paddy fields do not require P or K fertilizer, and that even where fertilizer is needed the application amount should differ greatly according to the field and the year's weather, especially for phosphorus.
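The percentage recovery figures above are apparent recoveries: the extra nutrient uptake attributable to fertilization, expressed as a share of the amount applied. A minimal sketch of that calculation, using hypothetical uptake values in kg/10a (the specific numbers below are illustrative, not taken from the paper's data):

```python
def apparent_recovery(uptake_fertilized, uptake_unfertilized, applied):
    """Apparent fertilizer recovery (%): extra nutrient uptake in the
    fertilized plot relative to the minus-nutrient plot, divided by the
    amount of fertilizer applied. A zero or negative value means the
    fertilized plot took up no more nutrient than the unfertilized one."""
    return (uptake_fertilized - uptake_unfertilized) / applied * 100.0

# Hypothetical example: N uptake of 9.3 kg/10a in a plot given 10 kg/10a
# of N, versus 6.0 kg/10a in the minus-N plot, gives 33% recovery --
# the same order of magnitude as the N recoveries reported above.
recovery_n = apparent_recovery(9.3, 6.0, 10.0)
print(round(recovery_n))  # 33
```

Negative recoveries, as the abstract reports for nearly half of the P and K cases, arise when the minus-nutrient plot takes up as much of the element as the fertilized plot, i.e. the soil already supplied enough.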


Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavioral theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model) and TPB (Theory of Planned Behavior). Previous studies presented varying adoption factors depending on the type of payment service, nation, culture and so on, and even the adoption factors of an identical payment service were reported differently by different researchers. The payment service industry has relatively strong path dependency on existing payment methods, so research results on the same payment service differ with the payment culture of each nation. This paper aims to identify, and then demonstrate, a success factor for the adoption of a novel payment service that holds regardless of a nation's culture and payment characteristics. In previous research, the common adoption factors of payment services are convenience, ease of use, security, speed and the like. But real cases show that these factors are not always critical to successful market penetration. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated the early market and grew. In contrast, the Google Wallet service failed to be adopted by users despite an NFC-based payment method providing convenience, security and ease of use. As these cases show, an aspect remains unexplained. The present research therefore emerged from the question: "What is the more essential and fundamental factor that should take precedence over factors such as convenience, security and ease of use for successful market penetration?" With these cases in mind, this paper analyzes four cases against the following hypothesis and demonstrates it.
"To successfully penetrate a market and sustainably grow, new payment service should find non-customer of the existing payment service and provide noble payment method so that they can use payment method". We give plausible explanations for the hypothesis using multiple case studies. Diners club, Danal, PayPal, Square were selected as a typical and successful cases in each category of payment service. The discussion on cases is primarily non-customer analysis that noble payment service targets on to find the most crucial factor in the early market, we does not attempt to consider factors for business growth. We clarified three-tier non-customer of the payment method that new payment service targets on and elaborated how new payment service satisfy them. In case of credit card, this payment service target first tier of non-customer who can't pay for because they don't have any cash temporarily but they have regular income. So credit card provides an opportunity which they can do economic activities by delaying the date of payment. In a result of wireless phone payment's case study, this service targets on second of non-customer who can't use online payment because they concern about security or have to take a complex process and learn how to use online payment method. Therefore, wireless phone payment provides very convenient payment method. Especially, it made group of young pay for a little money without a credit card. Case study result of PayPal, online payment service, shows that it targets on second tier of non-customer who reject to use online payment service because of concern about sensitive information leaks such as passwords and credit card details. Accordingly, PayPal service allows users to pay online without a provision of sensitive information. Final Square case result, Mobile POS -based payment service, also shows that it targets on second tier of non-customer who can't individually transact offline because of cash's shortness. 
Hence, Square provides a dongle that functions as a POS terminal when plugged into the earphone jack. As a result, the four services turned non-customers into customers, penetrated the early market and extended their market share. Consequently, all cases supported the hypothesis, which is highly probable according to the 'analytic generalization' that case study methodology suggests. For judging the quality of the research design we use the following criteria. Construct validity, internal validity, external validity and reliability are common to all social science methods and have been summarized in numerous textbooks (Yin, 2014). In case study methodology, they have also served as a framework for assessing a large group of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity means identifying correct operational measures for the concepts being studied; to satisfy it, we use multiple sources of evidence such as academic journals, magazines and articles. Internal validity means seeking to establish a causal relationship, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships; to satisfy it, we perform explanation building through the analysis of the four cases. External validity means defining the domain to which a study's findings can be generalized; to satisfy it, replication logic across multiple case studies is used. Reliability means demonstrating that the operations of a study, such as the data collection procedures, can be repeated with the same results; to satisfy it, we use a case study protocol. In Korea, competition among stakeholders in the mobile payment industry is intensifying. Not only the three main telecom companies but also smartphone makers and service providers such as KakaoTalk have announced that they will enter the mobile payment industry, and the industry is becoming highly competitive.
But the industry still lacks momentum, notwithstanding optimistic predictions of very fast growth. Mobile payment services are categorized by underlying technology: IC mobile cards, and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), biometric recognition and so on. In particular, mobile payment services are discontinuous innovations: users must change their behavior and new infrastructure must be installed. These requirements force users to learn how to use the service and impose infrastructure installation costs on shopkeepers. Additionally, the payment industry has strong path dependency. In spite of these obstacles, mobile payment services, which as discontinuous innovations should provide dramatically improved value as products and services, keep focusing on convenience, security and the like. We suggest the following for a mobile payment service to succeed. First, non-customers of the existing payment services need to be identified. Second, their needs should be understood. Then the novel payment service should provide a payment method to those non-customers who cannot pay by the existing methods. In conclusion, mobile payment services can create a new market and will extend the payment market.