
Hydrochemistry, Isotopic Characteristics, and Formation Model of Geothermal Waters in Dongrae, Busan, South Korea (부산 동래 온천수의 수리화학 및 동위원소 특성, 생성모델 연구)

  • Yujin Lee;Chanho Jeong;Yongcheon Lee
    • The Journal of Engineering Geology
    • /
    • v.34 no.2
    • /
    • pp.229-248
    • /
    • 2024
  • This study investigated the hydrogeochemical and isotopic characteristics of geothermal waters, groundwaters, and surface waters in Dongrae-gu, Busan, South Korea, in order to determine the origins of the salinity components in the geothermal waters, their formation mechanisms, and their heat sources. The geothermal waters are Na-Cl-type, distinct from the surrounding groundwaters (Na-HCO3 and Ca-HCO3(SO4, Cl) types) and surface waters (Ca-HCO3(SO4, Cl)-type). This indicates that the geothermal waters formed at greater depth than the groundwaters. The δ18O and δD values of the geothermal waters are relatively depleted compared with the groundwaters, due to altitude effects and the deep circulation of the geothermal waters. The helium and neon isotope ratios (3He/4He and 4He/20Ne) of the geothermal waters plot on a single mixing line between mantle (3He = 3.76~4.01%) and crustal (4He = 95.99~96.24%) components, indirectly suggesting that the heat source is the decay of radioactive elements in rocks. The geothermal reservoir temperatures calculated using the silica-enthalpy and Giggenbach models are 82~130℃, and the depth of the geothermal reservoir is estimated to be 1.7~2.9 km below the surface. The correlation between Cl/Na and Cl/HCO3 for the Dongrae geothermal waters requires an input of saline water, which is interpreted as the dissolution of residual paleo-seawater.
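The reservoir depth estimate follows from the reservoir temperature and a conductive geothermal gradient. A minimal sketch (the ~15 °C surface temperature and ~40 °C/km gradient are illustrative assumptions, not values stated in the abstract) reproduces the reported 1.7~2.9 km range from the 82~130 °C reservoir temperatures:

```python
def reservoir_depth_km(reservoir_temp_c, surface_temp_c=15.0, gradient_c_per_km=40.0):
    """Depth at which rock reaches the reservoir temperature under a
    uniform conductive gradient: depth = (T_reservoir - T_surface) / gradient."""
    return (reservoir_temp_c - surface_temp_c) / gradient_c_per_km

# The abstract's 82~130 degC range maps to roughly 1.7~2.9 km
# under the assumed gradient.
low = reservoir_depth_km(82)     # ~1.7 km
high = reservoir_depth_km(130)   # ~2.9 km
```

In practice the gradient itself is inferred from regional heat-flow data, so these two numbers should be read as order-of-magnitude checks, not as the paper's exact procedure.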

State of Health and State of Charge Estimation of Li-ion Battery for Construction Equipment based on Dual Extended Kalman Filter (이중확장칼만필터(DEKF)를 기반한 건설장비용 리튬이온전지의 State of Charge(SOC) 및 State of Health(SOH) 추정)

  • Hong-Ryun Jung;Jun Ho Kim;Seung Woo Kim;Jong Hoon Kim;Eun Jin Kang;Jeong Woo Yun
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.31 no.1
    • /
    • pp.16-22
    • /
    • 2024
  • Along with the high interest in electric vehicles and renewable energy, there is a growing demand to apply lithium-ion batteries in the construction equipment industry. The battery capacity of heavy construction equipment, which performs various demanding tasks at construction sites, degrades rapidly, so it is essential to accurately predict battery states such as the SOC (State of Charge) and SOH (State of Health). In this paper, the errors between actual electrochemical measurement data and estimated data were compared using the Dual Extended Kalman Filter (DEKF) algorithm, which can estimate SOC and SOH simultaneously. The state of charge was analyzed by measuring OCV at 5% SOC intervals under 0.2 C-rate conditions after the battery cell was fully charged, and the degradation state of the battery was predicted after 50 aging cycles under various C-rate conditions (0.2, 0.3, 0.5, 1.0, and 1.5 C). The SOC and SOH estimation errors using DEKF tended to increase with C-rate. The SOC estimation error was below 6% at 0.2, 0.5, and 1 C-rate. The SOH estimation showed good performance, with maximum errors of 1.0% and 1.3% at 0.2 and 0.3 C-rate, respectively, and the error increased from 1.5% to 2% as the C-rate rose from 0.5 to 1.5 C. Overall, all SOH estimation results using DEKF stayed within about 2%.
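A dual EKF runs two coupled filters: one tracks the fast state (SOC), the other a slowly varying parameter (capacity, a SOH proxy) whose estimate feeds back into the state model. The scalar sketch below uses an invented linear OCV curve and illustrative noise settings, not the paper's cell model:

```python
def ocv(soc):
    # Assumed linear open-circuit-voltage curve, 3.0 V at SOC 0 to 4.2 V at SOC 1.
    return 3.0 + 1.2 * soc

def dual_ekf(currents, voltages, dt=1.0, cap0=2.0):
    """Scalar dual EKF: SOC via coulomb counting + OCV correction,
    capacity (Ah) as a random-walk parameter corrected by the same innovation."""
    soc, p_soc = 1.0, 1e-2          # SOC estimate and its variance
    cap, p_cap = cap0, 1e-2         # capacity estimate and its variance
    q_soc, q_cap, r = 1e-7, 1e-6, 1e-3   # illustrative noise covariances
    for i, v in zip(currents, voltages):
        # Time update: coulomb counting uses the current capacity estimate.
        soc_pred = soc - i * dt / (3600.0 * cap)
        p_soc += q_soc
        p_cap += q_cap
        # SOC measurement update (H = dOCV/dSOC = 1.2 for the linear curve).
        h = 1.2
        k_soc = p_soc * h / (h * h * p_soc + r)
        innov = v - ocv(soc_pred)
        soc = soc_pred + k_soc * innov
        p_soc *= (1 - k_soc * h)
        # Capacity update: sensitivity of the predicted voltage to capacity,
        # d(ocv(soc_pred))/d(cap) = h * i*dt / (3600 * cap^2).
        h_cap = h * i * dt / (3600.0 * cap * cap)
        k_cap = p_cap * h_cap / (h_cap * h_cap * p_cap + r)
        cap += k_cap * innov
        p_cap *= (1 - k_cap * h_cap)
    return soc, cap

# Synthetic 1-hour, 0.5 A discharge of a true 1.8 Ah cell starting at SOC 1.0;
# the filter is initialized with a wrong capacity of 2.0 Ah.
true_cap, dt, i_load = 1.8, 1.0, 0.5
true_soc, volts = 1.0, []
for _ in range(3600):
    true_soc -= i_load * dt / (3600.0 * true_cap)
    volts.append(ocv(true_soc))
soc_est, cap_est = dual_ekf([i_load] * 3600, volts, dt, cap0=2.0)
```

With noiseless synthetic data the SOC track stays close to the true value, while the capacity estimate drifts only slowly toward the truth; in the paper's setting, long aging tests give the capacity filter enough innovation to converge.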

Numerical Study on Thermochemical Conversion of Non-Condensable Pyrolysis Gas of PP and PE Using 0D Reaction Model (0D 반응 모델을 활용한 PP와 PE의 비응축성 열분해 기체의 열화학적 전환에 대한 수치해석 연구)

  • Eunji Lee;Won Yang;Uendo Lee;Youngjae Lee
    • Clean Technology
    • /
    • v.30 no.1
    • /
    • pp.37-46
    • /
    • 2024
  • Environmental problems caused by plastic waste have grown continuously worldwide, and plastic waste has increased even faster since COVID-19. In particular, PP and PE account for more than half of all plastic production, and the amount of waste from these two materials is at a serious level. As a result, researchers are searching for alternative recycling methods, and plastic pyrolysis is one such alternative. In this paper, a numerical study was conducted on the pyrolysis behavior of non-condensable gas to predict the chemical reaction behavior of the pyrolysis gas. Based on the gas products reported in the preceding literature, the behavior of the non-condensable gas was analyzed as a function of temperature and residence time. The numerical analysis showed that as temperature and residence time increased, the production of H2 and heavy hydrocarbons increased through conversion of the non-condensable gas, while at the same time the CH4 and C6H6 species decreased by participating in the reactions. In addition, analysis of the production rates showed that the decomposition of C2H4 was the dominant reaction for H2 generation, and that more H2 was produced from PE, which has a higher C2H4 content. As future work, experiments are needed to confirm how the conversion rates of H2 and carbon in plastics can be increased under the various operating conditions derived from this study's numerical analysis.
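A 0D (perfectly stirred, spatially uniform) reaction model reduces to integrating rate equations over the residence time. The toy sketch below lumps the chemistry into a single first-order Arrhenius decomposition; the rate constants are illustrative placeholders, not the mechanism used in the study, but it reproduces the reported trend that conversion rises with both temperature and residence time:

```python
import math

R = 8.314               # gas constant, J/(mol K)
A, Ea = 1.0e8, 2.0e5    # assumed pre-exponential (1/s) and activation energy (J/mol)

def conversion(temp_k, residence_s, steps=1000):
    """Fraction of the lumped reactant (e.g. C2H4) converted after
    `residence_s` seconds at temperature `temp_k`, by explicit Euler."""
    k = A * math.exp(-Ea / (R * temp_k))   # Arrhenius rate constant
    x, dt = 1.0, residence_s / steps       # x = remaining reactant fraction
    for _ in range(steps):
        x -= k * x * dt                    # dx/dt = -k x
        x = max(x, 0.0)
    return 1.0 - x
```

A real 0D study would integrate a full kinetic mechanism (many species, reversible reactions) with a stiff ODE solver, but the temperature/residence-time dependence enters in exactly this way.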

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results remain meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for its SVR-based counterpart; MLE-based asymmetric E-GARCH shows -72% versus +245.6%; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is also unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can provide better information for stock market investors.
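Whatever estimates the parameters (MLE or SVR), the GARCH(1,1) model itself is a one-line variance recursion, and MLE maximizes the Gaussian log-likelihood evaluated on those variances. A self-contained sketch (the parameter values and return series are illustrative, not KOSPI 200 estimates):

```python
import math

def garch_variances(returns, omega, alpha, beta):
    """GARCH(1,1): sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    var = omega / (1.0 - alpha - beta)
    out = [var]
    for r in returns[:-1]:
        var = omega + alpha * r * r + beta * var
        out.append(var)
    return out

def gaussian_loglik(returns, variances):
    """The objective MLE maximizes over (omega, alpha, beta)."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + r * r / v)
               for r, v in zip(returns, variances))

rets = [0.01, -0.02, 0.015, -0.005, 0.03]
vars_ = garch_variances(rets, omega=1e-6, alpha=0.05, beta=0.90)
ll = gaussian_loglik(rets, vars_)
```

The SVR variant in the paper replaces the likelihood maximization with a regression fit of the same recursion; the recursion and forecasting step are unchanged.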

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.53-77
    • /
    • 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already become a mega trend in Korea, with a market size estimated at approximately 15 trillion won (KRW) for 2015. In the Korean market, social commerce and the open market are the key components, and social commerce overwhelms the open market in terms of the number of users. From the industry's point of view, quick market entry and content curation are considered the major success factors behind the rapid growth of social commerce. However, empirical academic research analyzing the success of social commerce is still insufficient. Going forward, social commerce and the open market are expected to compete intensively in Korean mobile commerce, so it is important to conduct an empirical analysis of the differences in user experience between them. This paper is an exploratory study that comparatively analyzes the user experience of social commerce and the open market based on mobile users' reviews. First, this study collected approximately 10,000 user reviews of social commerce and open market apps listed on Google Play. The reviews were classified into topics such as perceived usefulness and perceived ease of use through LDA topic modeling. Then, sentiment analysis and co-occurrence analysis were conducted on the topics of perceived usefulness and perceived ease of use. The results demonstrate that social commerce users have a more positive experience than open market users in terms of service usefulness and convenience in the mobile commerce market.
Social commerce has provided positive user experiences in service areas like 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences like 'login error,' 'view details,' and 'stoppage.' This result shows that social commerce performs well in user service experience, thanks to aggressive marketing campaigns and investments in logistics infrastructure. The open market, by contrast, still has mobile optimization problems, since it has not resolved user complaints and inconveniences stemming from technical problems. This study presents an exploratory research method for analyzing user experience through an empirical approach to user reviews. In contrast to previous studies, which conducted surveys, this study used empirical analysis of user reviews to reflect users' vivid, actual experiences. Specifically, by combining an LDA topic model with TAM, the methodology analyzes user reviews effectively by dividing them into service areas and technical areas from a new perspective. The methodology not only proves the differences in user experience between social commerce and the open market, but also provides a deep understanding of user experience in Korean mobile commerce. In addition, the results have important implications for social commerce and the open market by proving that user insights can be utilized in establishing competitive, groundbreaking strategies in the market. The limitations and directions for follow-up studies are as follows. Follow-up studies will require a more elaborate text-analysis technique.
This study could not fully clean the user reviews, which inherently contain typos and mistakes. Nevertheless, it has proven that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to further expand comparative research on services using user reviews. Even at this moment, users around the world are posting reviews of their service experiences with mobile game, commerce, and messenger applications.
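The pipeline's sentiment and co-occurrence stages both reduce to counting over tokenized reviews. A minimal sketch of those two counting stages (the tiny lexicon and sample reviews are invented for illustration; the study itself applied LDA topic modeling and sentiment analysis to ~10,000 Google Play reviews):

```python
from collections import Counter
from itertools import combinations

# Hypothetical sentiment lexicon keyed to the service/technical terms
# the abstract mentions ('discount' vs. 'login error', etc.).
POSITIVE = {"fast", "easy", "discount", "good"}
NEGATIVE = {"error", "slow", "crash", "login"}

def sentiment(tokens):
    """Positive minus negative lexicon hits in one tokenized review."""
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def cooccurrence(reviews):
    """Count unordered word pairs appearing together in the same review."""
    pairs = Counter()
    for tokens in reviews:
        pairs.update(combinations(sorted(set(tokens)), 2))
    return pairs

reviews = [["delivery", "fast", "discount"],
           ["login", "error", "slow"],
           ["coupon", "discount", "good"]]
scores = [sentiment(r) for r in reviews]   # per-review sentiment scores
pairs = cooccurrence(reviews)              # e.g. how often 'delivery' co-occurs with 'fast'
```

Aggregating such scores per topic (after LDA assigns reviews to topics) yields the service-area versus technical-area comparison the paper reports.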

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available in the future, ranking search results effectively and efficiently will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and their effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links similar to the Web graph, so link-structure-based ranking methods seem highly applicable to ranking Semantic Web resources.
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, by contrast, encompasses various kinds of classes and properties, so ranking methods used in the WWW must be modified to reflect this complexity. Previous research has addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in considerable detail, for their algorithm to work properly.
Second, the Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems observed in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its significance relative to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and further sheds light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which previous methods did not employ even when they bear on resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of its ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, overlooked in previous research, which enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
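The link-analysis baseline the paper builds on, PageRank, is a short fixed-point iteration. A minimal sketch on a toy directed graph (the graph and damping value are illustrative):

```python
def pagerank(links, damping=0.85, iters=50):
    """links: {node: [nodes it points to]}. Returns a score per node.
    Each iteration redistributes rank along out-links, damped by a
    uniform teleportation term."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if not outs:                 # dangling node: spread rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in outs:
                    new[v] += damping * rank[u] / len(outs)
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = pagerank(graph)
# "C", referred to by three pages, ends up with the highest score.
```

The class-oriented approach in the paper keeps this iterative flavor but makes the edge weights depend on per-class, user-assigned property weights rather than treating all links uniformly.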

Metabolic risk and nutritional state according to breakfast energy level of Korean adults: Using the 2007~2009 Korea National Health and Nutrition Examination Survey (한국 성인의 아침식사 에너지 수준에 따른 대사적 위험과 영양상태: 2007~2009년 국민건강영양조사 자료 이용)

  • Jang, So-Hyoun;Suh, Yoon Suk;Chung, Young-Jin
    • Journal of Nutrition and Health
    • /
    • v.48 no.1
    • /
    • pp.46-57
    • /
    • 2015
  • Purpose: The aim of this study was to determine an appropriate breakfast energy level with less risk of chronic disease for Korean adults. Methods: Using data from the 2007~2009 Korea National Health and Nutrition Examination Survey, 7,769 subjects were analyzed out of a total of 12,238 adults aged 19~64, excluding those undergoing treatment for cancer or metabolic disorders. According to the percentage of breakfast energy intake relative to their estimated energy requirement (EER), the subjects were divided into four groups: < 10% (very low, VL), 10~20% (low, L), 20~30% (moderate, M), and ≥ 30% (sufficient, S). All data were analyzed for metabolic risk and nutritional state after applying sampling weights and adjusting for sex, age, residential area, income, education, employment status, and energy intake, using a general linear model or logistic regression. Results: Group S comprised 16.9% of subjects, group M 39.2%, group L 37.6%, and group VL 6.3%. The VL group included more male subjects, younger adults (19 to 40 years), urban residents, higher income, higher education, and fewer subjects eating breakfast together with family members. Among the four groups, the VL group showed the highest waist circumference, while the S group showed the lowest waist circumference, body mass index, and serum total cholesterol. The VL and L groups, with lower breakfast energy intake, showed a high percentage of energy from protein and fat and a low percentage from carbohydrate. As the breakfast energy level increased, the intake of energy and of most nutrients and food groups increased, and the percentage of subjects consuming nutrients below the EAR decreased. The VL group showed relatively higher intake of snacks, sugar, meat and eggs, oil, and seasonings, and the lowest intake of vegetables. The risk of obesity by waist circumference was highest in the VL group, at 1.90 times that of the S group, and the same trend was seen for obesity by BMI.
The risk of dyslipidemia by serum total cholesterol was 1.84 times higher in the VL group than in the S group, and the risk of diabetes by fasting blood sugar (Glu-FBS) was 1.57 times higher. Conclusion: The results indicate that a higher breakfast energy level is related to lower metabolic risk and a more desirable nutritional state in Korean adults. Therefore, a breakfast energy intake of at least 30% of one's EER is highly recommended for Korean adults.
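The grouping rule is a simple percentage-of-EER classification. A sketch using the cut-points stated in the abstract (the sample intake values and the 2,400 kcal EER are invented for illustration):

```python
def breakfast_group(breakfast_kcal, eer_kcal):
    """Classify a subject by breakfast energy as a % of EER,
    using the study's four cut-points."""
    pct = 100.0 * breakfast_kcal / eer_kcal
    if pct < 10:
        return "VL"        # very low
    if pct < 20:
        return "L"         # low
    if pct < 30:
        return "M"         # moderate
    return "S"             # sufficient (>= 30% of EER)

groups = [breakfast_group(kcal, 2400) for kcal in (150, 350, 600, 800)]
# -> ["VL", "L", "M", "S"]
```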

The Effect of Retailer-Self Image Congruence on Retailer Equity and Repatronage Intention (자아이미지 일치성이 소매점자산과 고객의 재이용의도에 미치는 영향)

  • Han, Sang-Lin;Hong, Sung-Tai;Lee, Seong-Ho
    • Journal of Distribution Research
    • /
    • v.17 no.2
    • /
    • pp.29-62
    • /
    • 2012
  • As the distribution environment changes rapidly and competition in distribution channels intensifies, the importance of retailer image and retailer equity is increasing as a distinct competitive advantage. Consumers are not purely functionally oriented; their behavior is significantly affected by symbols, such as retailer image, that identify the retailer in the marketplace. That is, consumers do not choose products or retailers only for their material utility but consume the symbolic meaning of those products or retailers as expressed in their self-images. The concept of self-image congruence has been utilized by marketers and researchers to better understand how consumers identify themselves with the brands they buy and the retailers they patronize. Although self-image congruity theory has been tested across many product categories, it has not been tested extensively in retailing. Therefore, this study investigates the impact of congruence between retailer image and consumer self-image on retailer equity: retailer awareness, retailer association, perceived retailer quality, and retailer loyalty. The purpose of this study is to find out whether retailer-self image congruence can be a new antecedent of retailer equity. In addition, this study examines how the four retailer equity constructs affect customers' repatronage intention. Data were gathered by survey and analyzed by structural equation modeling, with a sample size of 254. The reliability of all seven dimensions was estimated with Cronbach's alpha, composite reliability values, and average variance extracted values. We verified the convergent and discriminant validity of the measurement model through exploratory and confirmatory factor analysis.
For each pair of constructs, the square root of the average variance extracted exceeded their correlation, supporting the discriminant validity of the constructs. Hypotheses were tested using AMOS 18.0. As expected, the image congruence hypotheses were supported: the greater the congruence between retailer image and self-image, the more favorable were consumers' retailer evaluations. Both types of retailer-self image congruence (actual and ideal self-image congruence) affected customer-based retailer equity. This means that retailer-self image congruence is an important cue for customers in evaluating retailer equity; in other words, consumers are more likely to prefer products and retail stores whose images are similar to their own self-images. Notably, the effect of ideal self-image congruence on retailer equity was consistently larger than that of actual self-image congruence, meaning that consumers prefer or search for stores with images compatible with their perception of the ideal self. In addition, this study revealed that customers' evaluations of retailer equity affected repatronage intention: all four dimensions (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) had a positive effect on repatronage intention. That is, management and investment to improve the image congruence between retailer and consumer self-image lead to positive customer evaluations of retailer equity, and this positive customer-based retailer equity can enhance repatronage intention. In conclusion, retailer image management is an important part of successful retailer performance management, and retailer-self image congruence is an important antecedent of retailer equity. It is therefore important to develop and maintain a retailer image similar to consumers' self-images.
Given the pressure to provide increased image congruence, it is not surprising that retailers have made significant investments in enhancing the fit between retailer image and consumer self-image. Enhancing such self-image congruence may allow marketers to target customers who are influenced by image appeals in advertising.
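The reliability statistic the study reports, Cronbach's alpha, compares the sum of the item variances with the variance of the total score. A self-contained sketch (the 5-point-scale responses below are invented for illustration):

```python
def cronbach_alpha(items):
    """items: list of item score lists, one inner list per item
    (same respondents in the same order). alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = len(items)
    n = len(items[0])

    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three hypothetical items measuring one construct, five respondents.
items = [[4, 5, 3, 4, 2],
         [5, 5, 3, 4, 1],
         [4, 4, 2, 5, 2]]
alpha = cronbach_alpha(items)
```

Values above roughly 0.7 are conventionally taken as acceptable reliability, which is the threshold a construct must pass before being used in the structural model.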


Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.;Hwang, H.S.;Kang, B.H.
    • Korean Journal of Weed Science
    • /
    • v.13 no.4
    • /
    • pp.210-233
    • /
    • 1993
  • The herbicide has become indispensable as much as nitrogen fertilizer in Korean agriculture from 1970 onwards. It is estimated that in 1991 more than 40 herbicides were registered for rice crop and treated to an area 1.41 times the rice acreage ; more than 30 herbicides were registered for field crops and treated to 89% of the crop area ; the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years herbicides have benefited the Korean farmers substantially in labor, cost and time of farming. Any herbicide which causes crop injury in ordinary uses is not allowed to register in most country. Herbicides, however, can cause crop injury more or less when they are misused, abused or used under adverse environments. The herbicide use more than 100% of crop acreage means an increased probability of which herbicides are used wrong or under adverse situation. This is true as evidenced by that about 25% of farmers have experienced the herbicide caused crop injury more than once during last 10 years on authors' nationwide surveys in 1992 and 1993 ; one-half of the injury incidences were with crop yield loss greater than 10%. Crop injury caused by herbicide had not occurred to a serious extent in the 1960s when the herbicides fewer than 5 were used by farmers to the field less than 12% of total acreage. Farmers ascribed about 53% of the herbicidal injury incidences at their fields to their misuses such as overdose, careless or improper application, off-time application or wrong choice of the herbicide, etc. While 47% of the incidences were mainly due to adverse natural conditions. Such misuses can be reduced to a minimum through enhanced education/extension services for right uses and, although undesirable, increased farmers' experiences of phytotoxicity. The most difficult primary problem arises from lack of countermeasures for farmers to cope with various adverse environmental conditions. 
At present almost all the herbicides have"Do not use!" instructions on label to avoid crop injury under adverse environments. These "Do not use!" situations Include sandy, highly percolating, or infertile soils, cool water gushing paddy, poorly draining paddy, terraced paddy, too wet or dry soils, days of abnormally cool or high air temperature, etc. Meanwhile, the cultivated lands are under poor conditions : the average organic matter content ranges 2.5 to 2.8% in paddy soil and 2.0 to 2.6% in upland soil ; the canon exchange capacity ranges 8 to 12 m.e. ; approximately 43% of paddy and 56% of upland are of sandy to sandy gravel soil ; only 42% of paddy and 16% of upland fields are on flat land. The present situation would mean that about 40 to 50% of soil applied herbicides are used on the field where the label instructs "Do not use!". Yet no positive effort has been made for 25 years long by government or companies to develop countermeasures. It is a really sophisticated social problem. In the 1960s and 1970s a subside program to incoporate hillside red clayish soil into sandy paddy as well as campaign for increased application of compost to the field had been operating. Yet majority of the sandy soils remains sandy and the program and campaign had been stopped. With regard to this sandy soil problem the authors have developed a method of "split application of a herbicide onto sandy soil field". A model case study has been carried out with success and is introduced with key procedure in this paper. Climate is variable in its nature. Among the climatic components sudden fall or rise in temperature is hardly avoidable for a crop plant. Our spring air temperature fluctuates so much ; for example, the daily mean air temperature of Inchon city varied from 6.31 to $16.81^{\circ}C$ on April 20, early seeding time of crops, within${\times}$2Sd range of 30 year records. 
Seeding early in the season means an increased liability to phytotoxicity, and this is more evident in direct water-seeding of rice. About 20% of farmers depend on cold pumped groundwater for rice irrigation. If the well is deeper than 70 m, the fresh water may be as cold as about 10°C; it should be warmed to about 20°C before irrigation, but this practice is not well followed by farmers. In addition to the aforementioned adverse conditions, many other aspects need improvement. Among them, the worst for liquid-spray herbicides is the almost total lack of proper knowledge of nozzle types, and of concern for even spraying, among administrative and rural extension officers, companies, and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available on the market; most people regard all pesticide sprayers as the same and care more about the speed and ease of spraying than about correct spraying. Many points must be improved to minimize herbicidal phytotoxicity in Korea, and many ways exist to achieve this goal. First of all, it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be expanded to standard, double, and triple doses, to exploit the dose-response slope when deciding on approval and on label recommendations of different doses for different situations; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for improving soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, develop databases on the detailed use conditions of individual consumers, and serve consumers with direct counsel based on those databases.
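The 2 SD spread of the 30-year temperature record mentioned above is simply the sample mean plus or minus twice the sample standard deviation. A minimal sketch in Python, using entirely hypothetical temperature data (the actual Inchon records are not reproduced here):

```python
import statistics

def two_sd_range(temps):
    """Return the mean ± 2 SD interval for a series of daily mean temperatures (°C)."""
    mean = statistics.mean(temps)
    sd = statistics.stdev(temps)  # sample standard deviation
    return mean - 2 * sd, mean + 2 * sd

# Hypothetical 30-year record of April 20 daily mean temperatures (°C):
record = [9.8, 11.2, 12.5, 13.1, 10.4, 14.0, 8.9, 12.2, 11.7, 13.5,
          10.9, 12.8, 9.5, 13.9, 11.1, 12.0, 10.2, 14.3, 11.6, 12.7,
          9.9, 13.2, 10.7, 12.4, 11.3, 13.7, 10.1, 12.9, 11.8, 12.1]
low, high = two_sd_range(record)
```

A day whose mean temperature falls outside [low, high] would be an unusually cool or warm seeding day by this criterion.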


Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.23 no.4
    • /
    • pp.153-178
    • /
    • 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational use via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL), a drag-coefficient-based logarithmic law (DCLL), and a roughness-height-based logarithmic law (RHLL)) and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψᵥ), to assess the potential of each method for accurately synthesizing reference-level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and LMS results showed relatively small bias (−0.001 m s⁻¹) and root mean square deviation (RMSD, 0.122 m s⁻¹) values. We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas with Advanced SCATterometer (ASCAT) data. These comparisons revealed that the 'LMS without ψᵥ' produced the best results, with a bias of only 0.191 m s⁻¹ and an RMSD of 1.111 m s⁻¹. As well as comparing these four approaches, we also explored potential refinements that could be applied within or through each approach. 
Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations by comparing results generated with and without adjusting sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (<0.0001 m s⁻¹) and RMSD (<0.012 m s⁻¹) values compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z₀) from the RHLL and LMS calculations in order to explore the best parameterization of this factor, leading to our recommendation of a new z₀ parameterization derived from observed wind speed data. Lastly, we suggest the need to include a suitable, experimentally derived surface drag coefficient and z₀ formulas within conventional wind profile formulas for strong wind (≥33 m s⁻¹) conditions, since without this inclusion the wind adjustment approaches used in this study are optimal only for wind speeds ≤25 m s⁻¹.
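As a rough illustration of the height-adjustment step described above, the sketch below implements the neutral power law (PL) and the roughness-height logarithmic law (RHLL) in Python, together with the bias/RMSD comparison metrics. The exponent α = 0.11 and z₀ = 1.52 × 10⁻⁴ m are generic open-ocean values assumed here for illustration, not the parameterization fitted or recommended by the paper:

```python
import math

def power_law(u_z, z, z_ref=10.0, alpha=0.11):
    """Neutral power-law (PL) adjustment of wind speed u_z (m/s) at height z (m)
    to reference height z_ref. alpha = 0.11 is a commonly assumed open-ocean
    exponent, not the paper's value."""
    return u_z * (z_ref / z) ** alpha

def roughness_log_law(u_z, z, z_ref=10.0, z0=1.52e-4):
    """Roughness-height logarithmic law (RHLL) under neutral stability.
    z0 (surface roughness height, m) is an illustrative open-ocean value."""
    return u_z * math.log(z_ref / z0) / math.log(z / z0)

def bias_and_rmsd(estimates, references):
    """Bias and root mean square deviation between paired wind speed series."""
    diffs = [e - r for e, r in zip(estimates, references)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmsd

# Example: a 12 m/s reading from the IORS anemometer at 42.3 m AMSL.
u42 = 12.0
u10_pl = power_law(u42, 42.3)
u10_rhll = roughness_log_law(u42, 42.3)
```

Both formulas reduce the 42.3 m reading toward a lower 10 m equivalent; the LMS approach evaluated in the paper additionally applies a stability correction ψᵥ, which is omitted in this neutral-profile sketch.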