
A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. That market, however, was short-lived, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPL has become a major investment target in recent years, as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on the market is still scarce because the history of capital market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required because of declining profitability and price volatility driven by fluctuations in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield is achieved, using NPL market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 assets in total. As independent variables, only those related to the dependent variable were selected from among 11 variables describing the characteristics of the real estate. Variable selection was performed using one-to-one t-tests, stepwise logistic regression, and decision tree analysis, which yielded seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This choice was made because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the effectiveness of the model. Moreover, for a special purpose company the main concern is whether or not to purchase the property, so knowing whether a certain level of return will be achieved is enough to make a decision. To verify that 12%, the standard rate of return used in the industry, is a meaningful reference value, we constructed and compared predictive models with dependent variables calculated at different threshold values. As a result, the predictive model built with the dependent variable based on the 12% standard rate of return showed the best average hit ratio, 64.60%. To propose an optimal prediction model based on the selected dependent variable and the seven independent variables, we built and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. To do this, 10 sets of training and testing data were extracted using 10-fold cross-validation. After building the models on these data, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively. It was confirmed that the model using the artificial neural network performed best. This study shows that it is effective to utilize the seven independent variables and an artificial neural network prediction model in the NPL market. The proposed model predicts in advance whether a 12% return on new assets will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
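
The evaluation protocol described above lends itself to a compact illustration. The sketch below is a hypothetical reconstruction, not the authors' implementation: it uses scikit-learn's MLPClassifier as a stand-in for the artificial neural network, synthetic placeholder data in place of the 2,291 NPL records, and reports the hit ratio (accuracy) averaged over 10-fold cross-validation as in the study.

```python
# Hypothetical sketch of the evaluation protocol: a binary classifier
# (MLPClassifier standing in for the artificial neural network) scored by
# hit ratio over 10-fold cross-validation. Data are random placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder for the 2,291 assets x 7 selected features (purchase year, SPC,
# municipality, appraisal value, purchase cost, OPB, HP).
X = rng.normal(size=(2291, 7))
y = rng.integers(0, 2, size=2291)   # 1 = 12% benchmark return reached

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
hit_ratios = []
for train_idx, test_idx in cv.split(X, y):
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))
    model.fit(X[train_idx], y[train_idx])
    hit_ratios.append(model.score(X[test_idx], y[test_idx]))   # accuracy = hit ratio

print(f"mean hit ratio over 10 folds: {np.mean(hit_ratios):.2%}")
```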

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts that confront decision making in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, from the sponsor's perspective and through empirical tests, a statistical optimization model for constructing a portfolio of Sponsored Search Advertising (SSA) that can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. This classical model, however, faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make keyword bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model framework is designed on the basic assumption that Rank is the decision variable. Rank is proposed in many papers as the best decision variable for predicting CTR. Further, most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are performed empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show significant improvements, confirming that the suggested SSA model is valid for constructing the keyword portfolio with the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit at present. To solve this problem, a Markov Chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows. Brand keywords usually dominate in almost every aspect, such as CTR, CVR, and expected profit. It is now found that Generic keywords are the CTK and have spillover potential, which might increase consumers' awareness and lead them to Brand keywords; this is why Generic keywords should be the focus of keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, to perform empirical tests and propose new strategic guidelines to focus on the CTK, and to propose a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
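
The idea of treating Rank as the decision variable can be illustrated with a toy portfolio search. The sketch below is not the paper's statistical optimization model; it simply enumerates one Rank choice per keyword, with made-up CTR/CPC/impression estimates, and keeps the combination that maximizes expected clicks within a budget.

```python
# Toy Rank-as-decision-variable portfolio: for each keyword choose one Rank so
# that total expected clicks (CTR x impressions) is maximized while total
# expected cost (CPC x clicks) stays within a budget. Numbers are illustrative.
from itertools import product

# keyword -> {rank: (ctr, cpc, impressions)}  -- hypothetical estimates
keywords = {
    "brand":   {1: (0.080, 900, 10000), 2: (0.055, 700, 10000), 3: (0.035, 500, 10000)},
    "generic": {1: (0.030, 600, 40000), 2: (0.022, 450, 40000), 3: (0.015, 300, 40000)},
}
budget = 1_500_000  # currency units

best = None
for choice in product(*[[(kw, r) for r in ranks] for kw, ranks in keywords.items()]):
    clicks = sum(keywords[kw][r][0] * keywords[kw][r][2] for kw, r in choice)
    cost = sum(keywords[kw][r][1] * keywords[kw][r][0] * keywords[kw][r][2] for kw, r in choice)
    if cost <= budget and (best is None or clicks > best[1]):
        best = (choice, clicks, cost)

print("chosen ranks:", dict(best[0]),
      "expected clicks:", round(best[1]), "cost:", round(best[2]))
```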

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming more important as content generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract high-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, a practical and simple automatic knowledge extraction method that can be applied in practice is presented. Second, the possibility of performance evaluation is presented through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, accounting for about 55% of the total, are designated as the training set, and the remaining 45% of reports are designated as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool, KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks is trained. Thus, when a new entity from the testing set appears, we calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or their combinations, that are necessary to search related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; in particular, the especially poor performance of the model on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
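
For readers unfamiliar with the scoring step, the sketch below shows a generic Neural Tensor Network score function in NumPy, assuming the standard bilinear form of Socher et al.; the dimensions, weights, and one-hot vectors are placeholders, and the paper's per-stock training procedure is not reproduced.

```python
# Minimal NumPy sketch of an NTN score function:
#   s(e1, e2) = u^T tanh( e1^T W[1:k] e2 + V [e1; e2] + b )
# The paper trains one such score per stock over one-hot entity vectors;
# all weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(42)
d, k = 100, 4                               # entity vector size (top-100 one-hot), tensor slices

W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor
V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear layer
b = np.zeros(k)
u = rng.normal(scale=0.1, size=k)           # output weights

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Score how strongly entity e1 relates to (stock) entity e2."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

entity = np.eye(d)[7]   # one-hot vector of a new entity
stock = np.eye(d)[0]    # one-hot vector representing the stock item
print(ntn_score(entity, stock))
```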

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.171-196 / 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. In order to meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web becomes more and more important, both academics and businesses are studying consumer emotions and satisfaction on the web. However, some consumer characteristics are not well considered. Demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements. Because the human brain is influenced by all five senses, both design factors and emotional responses are important in the web environment. Therefore, in this study, we examine the relationship between consumer emotion and satisfaction on the one hand and web style and design on the other. Previous studies have considered the effects of web layout, structure, and color on emotions. In contrast to earlier studies, we excluded these web components and analyzed the relationship between consumer satisfaction and the emotional indexes of web style only. To perform this analysis, we collected consumer surveys presenting 40 web-style themes to 204 consumers, each of whom evaluated four themes. The emotional adjectives evaluated by consumers were composed of 18 contrast pairs, and the upper emotional indexes were extracted through factor analysis. The emotional indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' Hypotheses were established based on the assumption that the emotional indexes have different effects on consumer satisfaction. After the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; for hypothesis 4, the effect on consumer satisfaction was negative rather than positive. This means that emotional indexes such as 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction. In other words, consumers prefer styles that feel soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction, meaning that consumers prefer styles that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across various domains. It is important to consider organizational behavior and decision making according to regulatory focus tendency, and it affects not only political, cultural, and ethical judgments and behavior but also broader psychological processes. Regulatory focus also differs in emotional response: promotion focus responds more strongly to positive emotions, whereas prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion. This emotional response depends on whether the consumer expects benefit or harm. Therefore, it is necessary to confirm how consumers' emotional responses to web style differ according to regulatory focus, which is one of the consumers' characteristics and viewpoints. The MMR analysis showed that hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although hypothesis 5.4 was supported in the direction opposite to the one predicted. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using the results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory focus groups, promotion, grey, and prevention, and suggest a web-style recommendation method for each group. If this study is developed further, we expect that the existing regulatory focus theory can be extended not only to the motivational part but also to emotional and behavioral responses according to regulatory focus tendency. Moreover, we believe that it is possible to recommend the web styles that consumers most prefer according to their regulatory focus and emotional desire.
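
A moderated multiple regression (MMR) of the kind referenced in hypotheses 5.3 and 5.4 can be sketched as follows. The data are simulated placeholders rather than the study's survey responses; the interaction term between an emotional index and a hypothetical promotion/prevention indicator carries the moderation effect.

```python
# Hedged MMR sketch: does regulatory focus (promotion vs. prevention) moderate
# the effect of an emotional index (e.g., 'softness') on web-style satisfaction?
# Simulated placeholder data, not the study's survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 204 * 4                                        # 204 consumers x 4 themes each
df = pd.DataFrame({
    "softness": rng.normal(size=n),
    "promotion": rng.integers(0, 2, size=n),       # 1 = promotion focus, 0 = prevention
})
df["satisfaction"] = (0.4 * df["softness"]
                      + 0.3 * df["softness"] * df["promotion"]
                      + rng.normal(scale=0.5, size=n))

# The softness:promotion interaction term carries the moderation effect.
model = smf.ols("satisfaction ~ softness * promotion", data=df).fit()
print(model.summary().tables[1])
```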

Risk Analysis of Arsenic in Rice Using HPLC-ICP-MS (HPLC-ICP-MS를 이용한 쌀의 비소 위해도 평가)

  • An, Jae-Min;Park, Dae-Han;Hwang, Hyang-Ran;Chang, Soon-Young;Kwon, Mi-Jung;Kim, In-Sook;Kim, Ik-Ro;Lee, Hye-Min;Lim, Hyun-Ji;Park, Jae-Ok;Lee, Gwang-Hee
    • Korean Journal of Environmental Agriculture / v.37 no.4 / pp.291-301 / 2018
  • BACKGROUND: Rice is one of the main sources of inorganic arsenic among the crops consumed in the world population's diet. Arsenic is classified by the IARC as a Group 1 carcinogen for humans. This study was carried out to assess the dietary exposure risk to the Korean population from inorganic arsenic in husked and polished rice. METHODS AND RESULTS: Total arsenic was determined using a microwave device and ICP-MS. Inorganic arsenic was determined by ICP-MS coupled with an HPLC system. The HPLC-ICP-MS analysis was optimized, with a limit of detection, limit of quantitation, and recovery of 0.73-1.24 μg/kg, 2.41-4.09 μg/kg, and 96.5-98.9%, respectively. The daily exposures to inorganic arsenic (normalized to body weight) were 4.97×10⁻³ (≥20 years old) to 1.36×10⁻² (≤2 years old) μg/kg b.w./day (0.23-0.63% of the PTWI) from husked rice, and 1.39×10⁻¹ (≥20 years old) to 3.21×10⁻¹ (≤2 years old) μg/kg b.w./day (6.47-15.00% of the PTWI) from polished rice. CONCLUSION: The levels of overall exposure to total and inorganic arsenic from husked and polished rice were far lower than the levels recommended by the Joint FAO/WHO Expert Committee on Food Additives (JECFA), indicating little possibility of risk.
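
The reported %PTWI figures can be checked with a short back-of-the-envelope calculation, assuming a provisional tolerable weekly intake of 15 μg/kg b.w./week for inorganic arsenic, which is the value the reported percentages imply. The exposure values below are taken from the abstract.

```python
# Back-of-the-envelope check of the %PTWI figures, assuming a PTWI of
# 15 ug/kg b.w./week for inorganic arsenic (an assumption inferred from the
# reported percentages). Exposure values are taken from the abstract.
PTWI_WEEKLY = 15.0            # ug/kg b.w./week (assumed reference value)
PTWI_DAILY = PTWI_WEEKLY / 7  # ~2.14 ug/kg b.w./day

exposures = {
    "polished rice, >=20 y": 1.39e-1,  # ug/kg b.w./day
    "polished rice, <=2 y":  3.21e-1,
    "husked rice,  >=20 y":  4.97e-3,
    "husked rice,  <=2 y":   1.36e-2,
}
for group, e in exposures.items():
    print(f"{group}: {e / PTWI_DAILY:.2%} of PTWI")
# polished rice >=20 y -> ~6.5 %, <=2 y -> ~15.0 %, matching the reported range.
```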

Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.23 no.4 / pp.153-178 / 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational uses via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag coefficient based logarithmic law (DCLL); and a roughness height based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψ_ν), to assess the potential use of each method for accurately synthesizing reference level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and the LMS results showed relatively small bias values (-0.001 m s⁻¹) and Root Mean Square Deviations (RMSD, 0.122 m s⁻¹). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas under examination with Advanced SCATterometer (ASCAT) data. Comparisons revealed that the 'LMS without ψ_ν' produced the best results, with only 0.191 m s⁻¹ of bias and 1.111 m s⁻¹ of RMSD. As well as comparing these four different approaches, we also explored potential refinements that could be applied within or through each approach. Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations, through comparison of results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (<0.0001 m s⁻¹) and RMSD (<0.012 m s⁻¹) values when compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z_0) based on RHLL and LMS calculations in order to explore the best parameterization of this factor, with results leading to our recommendation of a new z_0 parameterization derived from observed wind speed data. Lastly, we suggest the necessity of including a suitable, experimentally derived, surface drag coefficient and z_0 formulas within conventional wind profile formulas for situations characterized by strong wind (≥33 m s⁻¹) conditions, since without this inclusion the wind adjustment approaches used in this study are only optimal for wind speeds ≤25 m s⁻¹.
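
Two of the neutral-profile conversions compared above, the power law (PL) and the roughness-height logarithmic law (RHLL), reduce to one-line formulas. The sketch below applies them to the 42.3 m IORS observation height; the exponent and roughness height are typical open-sea placeholder values, not the parameterization recommended in the paper.

```python
# Hedged sketch of two neutral wind profile conversions from 42.3 m to the
# 10 m reference height. alpha and z0 are illustrative open-sea values.
import math

Z_OBS = 42.3   # sensor height above mean sea level (m)
Z_REF = 10.0   # reference height (m)

def power_law(u_obs: float, alpha: float = 0.11) -> float:
    """PL: U(z_ref) = U(z_obs) * (z_ref / z_obs) ** alpha."""
    return u_obs * (Z_REF / Z_OBS) ** alpha

def roughness_log_law(u_obs: float, z0: float = 2e-4) -> float:
    """RHLL (neutral log profile): U(z) ~ ln(z / z0), so scale by ln(10/z0)/ln(42.3/z0)."""
    return u_obs * math.log(Z_REF / z0) / math.log(Z_OBS / z0)

u = 15.0  # measured wind speed at 42.3 m (m/s)
print(f"PL:   {power_law(u):.2f} m/s")
print(f"RHLL: {roughness_log_law(u):.2f} m/s")
```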

Effects of an Aspirated Radiation Shield on Temperature Measurement in a Greenhouse (강제 흡출식 복사선 차폐장치가 온실의 기온측정에 미치는 영향)

  • Jeong, Young Kyun;Lee, Jong Goo;Yun, Sung Wook;Kim, Hyeon Tae;Ahn, Enu Ki;Seo, Jae Seok;Yoon, Yong Cheol
    • Journal of Bio-Environment Control / v.28 no.1 / pp.78-85 / 2019
  • This study was designed to examine the performance of an aspirated radiation shield (ARS), which was made in the investigators' lab and is characterized by relatively easy fabrication and low cost, based on survey data and reports of errors in its measurements of temperature and relative humidity. The findings are summarized as follows. The ARS and the Jinju weather station recorded maximum, average, and minimum temperature ranges of 2.0~34.1°C, -6.1~22.2°C, and -14.0~15.1°C versus 0.4~31.5°C, -5.8~22.0°C, and -14.1~16.3°C, respectively. There were no large differences in temperature measurements between the two sites, except that the lowest and highest points of maximum temperature were higher on the campus by 1.6°C and 2.6°C, respectively. The measurements of the ARS were tested against those of a standard thermometer; the temperature measured by the ARS ranged from 2.0°C lower to 1.8°C higher than that measured by the standard thermometer. The analysis of its correlation with the standard thermometer gave a coefficient of determination of 0.99. Temperatures were compared between operation with and without fans, and the maximum, average, and minimum temperatures were higher overall without fans by 0.5~7.6°C, 0.3~4.6°C, and 0.5~3.9°C, respectively. Daily average relative humidity measurements were compared between the ARS and the Jinju weather station, and the ARS measurements were slightly higher than those of the Jinju weather station. The measurements on June 27, July 26, July 29, and August 20 were higher by 5.7%, 5.2%, 9.1%, and 5.8%, respectively, but the differences in monthly averages between the two sites were trivial, at 2.0~3.0%. The difference in relative humidity between the ARS and Assmann's psychrometer was in the range of -3.98~+7.78% overall. The study also analyzed correlations in relative humidity between the measurements of the Jinju weather station and those of Assmann's psychrometer and found high correlations, with coefficients of determination of 0.94 and 0.97, respectively.
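
The comparison metric used throughout the abstract, the coefficient of determination, can be computed directly from paired readings. The sketch below uses made-up example temperatures, not the study's measurements.

```python
# Coefficient of determination (R^2) between shield and reference readings,
# computed from the squared Pearson correlation of paired measurements.
# The readings below are made-up examples.
import numpy as np

ars = np.array([2.1, 5.4, 9.8, 14.2, 18.9, 23.5, 27.7, 31.0])        # ARS (deg C)
standard = np.array([2.0, 5.6, 10.0, 14.0, 19.2, 23.3, 27.9, 31.5])  # reference (deg C)

r = np.corrcoef(ars, standard)[0, 1]
print(f"coefficient of determination R^2 = {r**2:.2f}")
```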

Improvement of Certification Criteria based on Analysis of On-site Investigation of Good Agricultural Practices(GAP) for Ginseng (인삼 GAP 인증기준의 현장실천평가결과 분석에 따른 인증기준 개선방안)

  • Yoon, Deok-Hoon;Nam, Ki-Woong;Oh, Soh-Young;Kim, Ga-Bin
    • Journal of Food Hygiene and Safety / v.34 no.1 / pp.40-51 / 2019
  • Ginseng has a unique production system that is different from those used for other crops. It is subject to the Ginseng Industry Act, requires a long cultivation period of 4-6 years, involves complicated cultivation characteristics whereby ginseng is not produced in a single location, and many ginseng farmers engage in mixed farming. Therefore, to bring ginseng production in line with GAP standards, it is necessary to better understand the on-site practices of ginseng farmers with respect to the established control points and to provide a proper action plan for improving efficiency. Among the ginseng farmers in Korea who applied for GAP certification, 77.6% obtained it, which is lower than the 94.1% of farmers who obtained certification for other products. 13.7% of the applicants were judged unsuitable during document review because of the use of unregistered pesticides and heavy metals in the soil. Another 8.7% of applicants failed to obtain certification due to inadequate on-site management results. These rates are considerably higher than the 5.3% failure rate in document inspection and the 0.6% failure rate in on-site inspection for other products, which suggests that it is relatively more difficult to obtain GAP certification for ginseng farming than for other crops. Ginseng farmers scored an average of 2.65 points on the 10 essential control points among the 72 total control points, slightly lower than the 2.81 points for other crops. In particular, ginseng farmers scored an average of 1.96 points on compliance with the standards for safe use of pesticides, much lower than the average of 2.95 points for other crops; it is therefore necessary to train ginseng farmers to comply with safe pesticide use. On the other essential control points, ginseng farmers were rated at an average of 2.33 points, lower than the 2.58 points for other crops. Several other areas of compliance in which ginseng farmers also rated low in comparison with other crops were found, including record keeping for more than one year, records of pesticide use, pesticide storage, post-harvest storage management, hand washing before and after work, hygiene related to work clothing, worker safety and hygiene training, and a written hazard management plan. Also, among the 72 total control points, there are 12 control points (10 required, 2 recommended) that do not apply to ginseng. Therefore, it is considered inappropriate to evaluate the ginseng production process effectively on the basis of the existing certification standards. In conclusion, differentiated certification standards are needed to expand GAP certification among ginseng farmers, and it is also necessary to develop programs that can be implemented in a more systematic and field-oriented manner to provide farmers with proper GAP management education.

The Ruling System of Silla to Gangneung Area Judged from Archaeological Resources in 5th to 6th Century (고고자료로 본 5~6세기 신라의 강릉지역 지배방식)

  • Shim, Hyun Yong
    • Korean Journal of Heritage: History & Science / v.42 no.3 / pp.4-24 / 2009
  • This paper examines archaeological resources that show how Silla entered the Gangneung area, the coastal region along the East Sea that has been excavated most actively. Silla expanded its territories while organizing its system as an ancient state and acquired several independent townships in various regions, stretching its forces to the East Sea area faster than any other ancient state of the time. In particular, many early relics and heritage sites of Silla have been found in Gangneung, the center of the East Sea area. Many archaeological resources attest to these circumstances and provide brief texts that are valuable for interpreting historical facts. In this respect, it was possible to examine these resources to answer the question of why early relics and heritage sites of Silla are found in the Gangneung area. Based on my research on Silla's advance into the Gangneung area, I obtained the following results. How did Silla rule this area after conquering Yeguk in the Gangneung area? After the conquest, Silla attempted indirect rule at first and later adopted a direct ruling system. I divide the indirect ruling period into two phases, introduction and settlement. In detail, Silla's earthenware and stone chamber tombs first appeared in Hasi-dong in the fourth quarter of the 4th century, and the tombs spread to Chodang-dong in the second quarter of the 5th century. A belt with dragon-pattern openwork, which seems to date from the second quarter of the 5th century, was found, telling us that the Gangneung region began receiving rewards from Silla during this time. Thus, the period from the fourth quarter of the 4th century to the second quarter of the 5th century is designated as the first phase (introduction) of indirect rule in terms of archaeological findings. This is when Silla first advanced into the Gangneung area and tolerated independent administration by the conquered. In the third and fourth quarters of the 5th century, old mound tombs appeared and burials of relics that symbolized power emerged. In the third quarter of the 5th century, stone chamber tombs were prevalent, but wooden chamber tombs, stone-mounded wooden chamber tombs, and lateral-entrance stone chamber tombs began to emerge. Also, tombs that had been clustered in Hasi-dong and Chodang-dong began to scatter to nearby Byeongsan-dong, Yeongjin-ri, and Bangnae-ri. Steel pots were the symbol of power that emerged at this time. In the fourth quarter of the 5th century, stone chamber tombs still dominated, but wooden chamber tombs, stone-mounded wooden chamber tombs, and lateral-entrance stone chamber tombs became more popular. More crowns, crown ornaments, big daggers, and belts were bestowed by Silla, mostly in Chodang-dong and Byeongsan-dong. The period from the third to the fourth quarter of the 5th century is designated as the second phase (settlement) of indirect rule in terms of archaeological findings. At this time, Silla bestowed items of power on the ruling class of the Gangneung area and gave equal power to the rulers of Chodang-dong and Byeongsan-dong to keep them restrained by each other. However, Silla converted the system to direct rule once it recognized the Gangneung area as the base for its expedition of conquest to the north. In the first quarter of the 6th century, old mound tombs disappeared and small and medium-sized mounds appeared in the western inlands and the northern areas. In this period, the tunnel-entrance stone chamber tombs were large enough for people to enter through their doors. A cluster of several tunnel-entrance stone chamber tombs formed in Yeongjin-ri and Bangnae-ri at this time, probably under the influence of Silla's direct rule. In the first quarter of the 6th century, Silla dispatched officers from the central government to complete the local administration system and replaced the ruling class of Chodang-dong and Byeongsan-dong with that of Silla-friendly Yeongjin-ri and Bangnae-ri to reorganize the local administration system and gain full control of the Gangneung area.

Effectiveness Assessment on Jaw-Tracking in Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy for Esophageal Cancer (식도암 세기조절방사선치료와 용적세기조절회전치료에 대한 Jaw-Tracking의 유용성 평가)

  • Oh, Hyeon Taek;Yoo, Soon Mi;Jeon, Soo Dong;Kim, Min Su;Song, Heung Kwon;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy / v.31 no.1 / pp.33-41 / 2019
  • Purpose: To evaluate the effectiveness of the jaw-tracking (JT) technique in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) for radiation therapy of esophageal cancer by analyzing the volume dose of surrounding normal organs, including the low-dose volume regions. Materials and Methods: A total of 27 patients who received radiation therapy for esophageal cancer with a VitalBeam™ (Varian Medical System, U.S.A) in our hospital were selected. Using the Eclipse system (Ver. 13.6, Varian, U.S.A), radiation treatment plans were set up with the jaw-tracking technique (JT) and the non-jaw-tracking technique (NJT) for patients with a T-shaped planning target volume (PTV) including the supraclavicular lymph nodes (SCL). PTVs were classified according to whether the celiac area was included, to identify its influence on the radiation field. To compare the treatment plans, the organs at risk (OAR) were defined as both lungs, the heart, and the spinal cord, and the plans were evaluated with the conformity index (CI) and homogeneity index (HI). Portal dosimetry was performed to verify clinical applicability using an electronic portal imaging device (EPID), and gamma analysis was performed with low-dose thresholds of 0%, 5%, and 10% of the radiation field as a parameter. Results: All treatment plans achieved gamma pass rates of 95% under the 3 mm/3% criteria. For a threshold of 10%, both JT and NJT passed with rates of more than 95%, and both gamma passing rates decreased by more than 1% in IMRT as the low-dose threshold decreased to 5% and 0%. For JT in IMRT on PTVs without the celiac area, V5 and V10 of both lungs decreased by 8.5% and 5.3% on average, respectively, and by up to 14.7%. The Dmean decreased by 72.3±51 cGy, and the dose reduction was greater for PTVs including the celiac area. The Dmean of the heart decreased by 68.9±38.5 cGy and that of the spinal cord by 39.7±30 cGy. For JT in VMAT, V5 of the lungs decreased by 2.5% on average, with small decreases in the heart and spinal cord as well; the dose reduction from JT increased when the PTV included the celiac area. Conclusion: In radiation treatment planning for esophageal cancer, IMRT showed a significant decrease in V5 and V10 of both lungs when applying JT, and the dose reduction was greater when the irradiated area in the low-dose field was larger. Therefore, IMRT benefits more from JT than VMAT for radiation therapy of esophageal cancer and can protect normal organs from MLC leakage and transmitted doses in the low-dose field.
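
The dose-volume metrics compared above (V5, V10, Dmean) are simple summaries of a voxel dose array. The sketch below computes them from random placeholder data, not a clinical dose distribution, to make the definitions concrete.

```python
# V5 / V10 (percentage of an organ's volume receiving at least 5 Gy / 10 Gy)
# and the mean dose Dmean, computed from a per-voxel dose array.
# The array below is random placeholder data, not a clinical distribution.
import numpy as np

rng = np.random.default_rng(3)
lung_dose = rng.gamma(shape=2.0, scale=4.0, size=50_000)   # per-voxel dose in Gy

def v_x(dose: np.ndarray, threshold_gy: float) -> float:
    """Fraction of voxels receiving at least `threshold_gy`, in percent."""
    return 100.0 * np.mean(dose >= threshold_gy)

print(f"V5    = {v_x(lung_dose, 5):.1f} %")
print(f"V10   = {v_x(lung_dose, 10):.1f} %")
print(f"Dmean = {lung_dose.mean() * 100:.0f} cGy")   # Gy -> cGy
```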