• Title/Summary/Keyword: Produce


Physio-Ecological Studies on Stevia(Stevia rebaudiana Bertoni) (스테비아(Stevia rebaudiana Bertoni)에 관한 생리 생태적 연구)

  • Kwang-He Kang;Eun-Woong Lee
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.26 no.1
    • /
    • pp.69-89
    • /
    • 1981
  • Stevia (Stevia rebaudiana Bertoni) is a perennial herb widely distributed in the mountainous areas of Paraguay. It belongs to the family Compositae and contains 6 to 12 percent stevioside in the leaves. Stevioside is a glucoside with a sweetening character similar to sugar, and its degree of sweetness is approximately 300 times that of sugar. Since Korea does not produce any sugar crops, and synthetic sweeteners are potentially hazardous to health, it is rather urgent to develop an economical new sweetener. Consequently, the current experiments were conducted to establish cultural practices for stevia, a new sweetening herb introduced into Korea in 1973, and the results are summarized as follows: 1. Days from transplanting of cuttings to flower bud formation of 6 stevia lines were similar among daylengths of 8, 10 and 12 hours, but much greater at daylengths of 14 or 24 hours, and varietal differences were noticeable. All lines were photosensitive, but line 77013 was the most sensitive, while 77067 and Suweon 2 were less sensitive to daylength. 2. The critical daylength of all lines appeared to be approximately 12 hours. Growth of plants was severely retarded at daylengths of less than 12 hours. 3. Cuttings responded to short daylength before rooting. The number of days from transplanting to flower bud formation of 40-day-old cuttings in the nursery bed was 20 days, and it was delayed as the nursery duration became shorter. 4. The number of days from emergence to flower bud formation was shortest when short-day treatment began 20 days after emergence. It became longer as the initiation of short-day treatment was earlier or later than 20 days. 5. Plant height, number of branches, and top dry weight of stevia were reduced as the cutting date was delayed from March 20 to May 20. The highest yield of dry leaf was obtained at a nursery duration of 40-50 days for the March 20 cutting, 30-40 days for the April 20 cutting, and 30 days for the May 20 cutting. 6. An asymptotic relationship was observed between plant population and leaf dry weight. Yield of dry leaf increased rapidly as plant population increased from 5,000 to 10,000 plants/10a, increased at a reduced rate from 10,000 to 20,000 plants/10a, and levelled off at plant populations higher than 20,000 plants/10a. 7. Stevia was adaptable in Suweon, Chengju, Mokpo and Jeju, and drought was one of the main factors reducing the yield of dry leaf. Yield of dry leaf was reduced significantly (approximately 30%) at June 20 transplanting compared to optimum transplanting. 8. Yield of dry leaf was higher in a vinyl house than in the unprotected control at long daylength or natural daylength, except for the short-day treatment at March 20. The higher temperature in a vinyl house did not have beneficial effects at April 20 transplanting. 9. The highest content of stevioside was noted in the upper leaves of the plant, while the lowest was measured at the plant parts 20 cm above ground. Leaf dry weight and stevioside yield were contributed mainly by the plant parts 60 to 120 cm above ground, but the varietal differences were also significant. 10. Delaying harvest until the time of flower bud formation increased leaf dry weight remarkably. However, there were insignificant changes in yield when harvests were made at any time after flower bud formation. The content of stevioside was highest at the time of flower bud formation, and harvests earlier or later than this time were lower in content. The optimum harvesting time determined by leaf dry weight and stevioside content was the period from flower bud formation to right before flowering, which would be from September 10 to September 15 in the Suweon area. 11. Stevioside and rebaudioside contents in the leaves of stevia varieties ranged from 5.4% to 14.3% and 1.5% to 8.3%, respectively. However, no definite relationship between stevioside and rebaudioside was observed in these particular experiments.


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression analysis method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification problems as much as SVM does for binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used for efficient multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, the difficulty in multi-class prediction problems lies in the data imbalance problem that can occur when the number of instances in one class greatly outnumbers the number of instances in the other classes. Such data sets often cause a default classifier to be built due to a skewed boundary, and thus reduce the classification accuracy of the classifier. SVM ensemble learning is one of the machine learning methods that can cope with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on the misclassified observations through iterations. The observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine the feasibility of MGM-Boost. 10-fold cross-validation is performed three times with different random seeds in order to ensure that the comparison among the three different classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets. That is, cross-validated folds have been tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
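
A minimal sketch of the evaluation idea behind MGM-Boost rather than the paper's exact algorithm: a standard multiclass AdaBoost is trained on an imbalanced synthetic dataset and scored with both the usual arithmetic accuracy and a geometric-mean accuracy computed from per-class recalls, the class-balanced measure that MGM-Boost folds into the boosting step. The dataset, class weights, and hyperparameters are illustrative assumptions.

```python
# Sketch: compare arithmetic accuracy with geometric-mean (per-class) accuracy
# for a multiclass AdaBoost on an imbalanced synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# imbalanced 5-class problem standing in for the bond-rating data (assumption)
X, y = make_classification(n_samples=1500, n_classes=5, n_informative=8,
                           weights=[0.45, 0.25, 0.15, 0.10, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

arithmetic_acc = accuracy_score(y_te, pred)                 # overall hit rate
per_class_recall = recall_score(y_te, pred, average=None)   # recall per rating class
geometric_acc = float(np.prod(per_class_recall) ** (1 / len(per_class_recall)))

print(f"arithmetic accuracy:     {arithmetic_acc:.4f}")
print(f"geometric-mean accuracy: {geometric_acc:.4f}")
```

On an imbalanced set the geometric-mean accuracy drops sharply whenever any single class is poorly predicted, which is why a boosting scheme that optimizes it tends to reinforce minority-class observations.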

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows, consisting of 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. Through this, the model can provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, for which it is difficult to determine a proper default risk with traditional credit rating models. Although there have recently been active studies on predicting corporate default risks using machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts by sub-model to be used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the Stacking Ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the forecasts of the Stacking Ensemble model and those of each individual model, pairs between the Stacking Ensemble model and each individual model were constructed. Because the results of the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank sum test was used to check whether the two model forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models to calculate the final default probability. Also, the stacking ensemble techniques proposed in this study can help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
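
A minimal sketch of the stacking setup described above, under stated assumptions: a synthetic regression target stands in for the Merton-based default risk, generic features replace the financial statement columns, and scikit-learn random forest and MLP estimators stand in for the paper's sub-models; cv=7 mirrors the seven-way split used to generate out-of-fold sub-model forecasts for the meta-learner.

```python
# Sketch: stacking ensemble whose meta-learner combines out-of-fold forecasts
# from sub-models trained on a 7-way split of the training data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# synthetic continuous default-risk target (assumption)
X, y = make_regression(n_samples=2000, n_features=40, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)),
    ],
    final_estimator=Ridge(alpha=1.0),
    cv=7,  # 7-piece split used for the out-of-fold sub-model forecasts
)
stack.fit(X_tr, y_tr)
print("stacked model R^2 on held-out data:", round(stack.score(X_te, y_te), 4))
```

The same structure accepts additional sub-models, so a traditional credit rating model could in principle be plugged in as one more estimator alongside the machine learning models.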

Study on the Content of ${NO_3}^-$ in Green Vegetable Juice by Different Sorts, Harvesting Time, Mixing Rate of Vegetable, Storage Condition and Manufacturers (채소 모재료의 종류, 수확시기별, 부위별 혼합비율, 저장조건 및 생산회사에 따른 녹즙의 ${NO_3}^-$ 함량차이)

  • Sohn, Sang-Mok;Yoon, Ji-Young
    • Korean Journal of Organic Agriculture
    • /
    • v.7 no.1
    • /
    • pp.91-103
    • /
    • 1998
  • Although the consumption of green vegetable juice by Koreans has increased rapidly, the ${NO_3}^-$ intake through green vegetable juice has been ignored in the calculation of daily ${NO_3}^-$ intake. It is necessary to collect basic data on the ${NO_3}^-$ content of green vegetable juice by vegetable sort, harvesting time, mixing rate of plant parts, storage condition and manufacturer for a future calculation of the daily ${NO_3}^-$ intake of Koreans. The following are the research results from monitoring and laboratory experiments on ${NO_3}^-$ and vitamin C in green vegetable juice. The ${NO_3}^-$ contents of angelica plant (tomorrow's leaf) and kale were higher in spring than in summer and autumn. The highest ${NO_3}^-$ contents in tomorrow's leaf and kale were 4.85 and 2.94 times higher than the lowest values. The average ${NO_3}^-$ contents in the midribs of tomorrow's leaf and kale were 7.5 and 2.1 times higher than those in the leaf blades. This indicates that green vegetable juice made from the leaf blades of tomorrow's leaf and kale may be better than juice made from the midribs in terms of ${NO_3}^-$ content. For carrot, kale and cucumber juice, the contents of ${NO_3}^-$ and vitamin C decreased more rapidly with time after juice making than with storage temperature. A positive correlation was found between the contents of ${NO_3}^-$ and vitamin C in carrot, kale and cucumber juice regardless of whether storage was at room temperature or chilled. The contents of ${NO_3}^-$ and vitamin C in the green vegetable juice from company P were the highest among the manufacturers. The lower contents of ${NO_3}^-$ and vitamin C in the green vegetable juices from companies TW and GB compared to company P are due to dilution with water during production. The ${NO_3}^-$ content of green vegetable juices available in the market was 143 ppm in carrot juice, 506 ppm in tomorrow's leaf juice, 669 ppm in wild water celery juice and 985 ppm in kale juice, whereas the vitamin C content was 43 ppm in carrot juice, 289 ppm in wild water celery juice, 353 ppm in kale juice and 768 ppm in tomorrow's leaf juice. It was calculated that people take in 253 mg of ${NO_3}^-$ from tomorrow's leaf juice, 335 mg from wild water celery juice and 483 mg from kale juice if they drink 500 ml of green vegetable juice per day, suggesting an excess of 1.16, 1.53 and 2.21 times, respectively, from green vegetable juice consumption alone.
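
A small sketch of the intake arithmetic implied above, assuming 1 ppm in juice corresponds to roughly 1 mg of NO3- per litre; the figures approximately reproduce the reported 253 mg and 335 mg values for a 500 ml daily serving.

```python
# Sketch: daily NO3- intake from 500 ml of juice, treating ppm as mg per litre.
NO3_PPM = {"carrot": 143, "tomorrow's leaf": 506, "wild water celery": 669, "kale": 985}
DAILY_JUICE_L = 0.5  # 500 ml per day, as in the abstract

for juice, ppm in NO3_PPM.items():
    intake_mg = ppm * DAILY_JUICE_L
    print(f"{juice}: ~{intake_mg:.0f} mg NO3- per day")
```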


Studies on the Growth of Korean Lawn Grass (Zoysia japonica Steud.) in Response to Nitrogen Application, Clipping Treatment and Plant Density (질소시용, 예초 및 재식밀도가 한국잔디(Zoysia Japonica Steud)의 생육에 미치는 영향)

  • Sim, Jae-Seong
    • The Journal of Natural Sciences
    • /
    • v.1
    • /
    • pp.61-113
    • /
    • 1987
  • The increasing emphasis placed on the production of fine turf for lawns, golf courses, parks, and other recreational sites has led to many unsolved problems as to how such turf can best be established and maintained. For this purpose, a series of experiments was conducted under pot and field conditions. The results obtained were as follows. EXPERIMENT I. The effect of nitrogen fertilizer and clipping interval on Zoysia japonica. 1. Increasing the rate of nitrogen and frequent clipping increased the tiller number of Zoysia japonica, and the maximum number of tillers was obtained from 700 kg N application and frequent clipping (10-day interval) in October. Treatment of 350 kg N with a 10-day clipping interval increased tillers much more than 700 kg N with 20- and 30-day clipping intervals. 2. The average number of green leaves during the growth period was maximized by applying 700 kg N and clipping at a 10-day interval. 3. Increasing tiller numbers significantly decreased top DM weight per tiller when plants were clipped at intervals of 10 and 20 days, irrespective of nitrogen applied, and, with nil N, at the interval of 30 days. With 700 kg N, however, top DM weight per tiller increased as the number of tillers increased consistently. 4. The highest top DM weight was achieved from late August to early September by applying 350 and 700 kg N. 5. During the growth period, differences in unders (stolon + root) DM weight due to nitrogen application were found between nil N and the two applied nitrogen levels, whereas, at the same level of applied nitrogen, the increase in stolon DM weight was enhanced by lengthening the clipping interval to 30 days. 6. Nitrogen efficiency for green leaves, stolon nodes and root DM weight at high nitrogen was achieved as the clipping interval was shortened. 7. Increasing the fertilizer nitrogen rate increased the N content in the leaves and stems of Zoysia japonica. On the other hand, the N content in root and stolon was little affected by fertilizer nitrogen, resulting in the lowest content among plant fractions. The largest N content was recorded in leaves. Lengthening the clipping interval from 10 or 20 to 30 days tended to decrease the N content in the leaves and stems, whereas this trend did not appear in stolon and root. 8. Positive correlations between N and K contents in tops and stolons were established, and thus K content increased as N content in tops and stolons increased. Meanwhile, P content was not affected by N and clipping treatments. 9. Total soluble carbohydrate content in Zoysia japonica was largest in stolons and stems, and was reduced by increasing the fertilizer nitrogen rate. The reduction in total soluble carbohydrate due to increased nitrogen rate was more severe in the stolons and stems than in the leaves. 10. Increasing the rate of nitrogen applied increased the number of small and large vascular bundles in the leaf blade, but shortened the distance between the large vascular bundles. Shortening the clipping interval resulted in an increase in the number of large vascular bundles but a decrease in the distance between large vascular bundles. EXPERIMENT II. Growth response of Zoysia japonica to different plant densities. 1. Tiller numbers per unit area increased as plant density increased. Differences between densities higher than 120D were not significant. 2. Tiller numbers per clone attained by 110 days after transplanting were 126 at 40D, 77 at 80D, 67 at 120D, 54 at 160D, and 41 at 200D. A decreasing trend of tiller numbers per clone with increasing density was noticeable from 100 days after transplanting onwards. 3. During the growth period, the greatest number of green leaves per unit area was attained at 90 days after transplanting at 160D and 200D, and at 100 days after transplanting at 40D, 80D and 120D. Thus the period to reach the maximum green leaf number at high plant density was likely to be earlier than at low plant density. 4. Stolon growth up to 80 days after transplanting was relatively slow, but from 80 days onwards growth quickened, ranging from 1.9 m/clone at 40D to 0.6 m/clone at 200D by 200 days after transplanting, followed by stolon node production. 5. Plant density did not affect stolon weight/clone and root weight/clone until 80 days after transplanting. 6. DM weight of root was heavier in the early period of growth than that of stolon, but this trend was reversed in the late period of growth: DM weight of stolon was much higher than that of root. EXPERIMENT III. Vegetative growth of Zoysia japonica and Zoysia matrella as affected by nitrogen and clipping height. 1. When no nitrogen was applied to Zoysia japonica, leaf blades which appeared during the August-early September period remained green for about 10 weeks, and even leaves that emerged in late September lived for 42 days. However, leaf longevity did not exceed 8 weeks when nitrogen was applied. In contrast, the leaf longevity of Zoysia matrella which emerged during the mid August-early September period was 11 weeks and, under applied nitrogen, 9 weeks, indicating that the life-span of an individual leaf of Zoysia matrella may be longer than that of Zoysia japonica. Clipping height had no effect on leaf longevity in either grass. 2. During the July-August period, tiller number, green leaf number and DM weight of Zoysia japonica were increased significantly by fertilizer nitrogen, but not by the two levels of clipping height. This trend was reversed after late September; no effect of nitrogen appeared. Instead, lax clipping increased tiller number, green leaf number and DM weight. Green leaves stimulated by lax clipping resulted in the occurrence of more dead leaves in late October. 3. Among the stolons outgrown until early September, the primary stolon was not influenced by nitrogen and clipping treatments, producing only 2-3 stolons. However, 1st branch stolons as affected by nitrogen increased significantly, so most of the stolons which occurred consisted of 1st branch stolons. 4. Until early September, stolon length obtained at the nil nitrogen level was chiefly attributable to the primary stolons. With nitrogen applied, the primary stolons of Zoysia japonica were longer than 1st branch stolons under severe clipping and, in turn, shorter than 1st branch stolons under lax clipping. In Zoysia matrella, 1st branch stolons were much longer than the primary stolon when turf was clipped severely, but under lax clipping there was little difference in length between primary and 1st branch stolons. 5. Stolon nodes of both Zoysia japonica and Z. matrella were positively influenced by nitrogen, but no particular increase from the clipping height treatment was marked in Zoysia matrella. Although the stolon of Zoysia japonica grew until late October, the growth stimulated by nitrogen was not so remarkable as to exceed that with nil N.


Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from the existing production environment. This environment has the two-sided feature of producing data while using it, and the data so produced in turn creates further value. Due to the massive scale of data, future information systems need to process more data in terms of quantity than existing information systems, and, in terms of quality, they require not merely a large amount of data but the ability to process it accurately. In a small-scale information system, a person can understand the system accurately and obtain the necessary information, but in complex and varied systems that are difficult to understand accurately, it becomes increasingly difficult to acquire the desired information. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. For example, as in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems contain increasingly large amounts of data, efforts are needed to make the systems easier to use through better data utilization. An ontology-based system has a large semantic data network through connections with other systems, has a wide range of databases that can be utilized, and has the advantage of searching more precisely and quickly through relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. In order to judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen the management and control of the logistics situation by commanders and practitioners, providing real-time information on maintenance and distribution, because the complicated logistics information system with its large amount of data had become difficult to use. Although it takes pre-specified information from the existing logistics system and displays it as web pages, it is difficult to check anything beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is composed of fixed categories without a search function. Therefore, like the existing system, it has the disadvantage that it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. In constructing the logistics situation management system through the ontology, useful functions such as performance-based logistics support contract management and a component dictionary were additionally identified and included in the ontology. To confirm whether the constructed ontology can be used for decision support, meaningful analysis functions such as calculation of aircraft utilization rates and inquiries about performance-based logistics contracts were implemented. In particular, in contrast to the ontology databases built in previous ontology studies, this study constructed time-series data whose values change over time, such as the state of each aircraft by date, in the ontology, and confirmed that, through the constructed ontology, the utilization rate can be calculated based on various criteria rather than a single computable rate. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indices used in performance-based logistics contracts are easy to calculate through reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, it is possible to calculate the failure rate or reliability of each component, including from the MTBF data of the selected fault-tolerant items based on actual part consumption records, and the reliability of the mission and of the system is calculated. To confirm the usability of the constructed ontology-based logistics situation management system, it was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, and was found to be more useful and convenient than the existing system.
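
A minimal rdflib sketch, not the paper's actual defense ontology or data, showing how date-stamped aircraft status observations can be stored as triples and queried to derive a utilization rate, together with an exponential reliability estimate from an assumed MTBF value; every name, namespace, and figure here is hypothetical.

```python
# Sketch: time-series status facts as ontology triples, a SPARQL aggregate for
# utilization rate, and a reliability estimate from an assumed MTBF.
import math
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/defense-ontology#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# date-stamped status observations for one aircraft (hypothetical data)
for tail, date, state in [("A001", "2019-03-01", "available"),
                          ("A001", "2019-03-02", "maintenance"),
                          ("A001", "2019-03-03", "available")]:
    obs = EX[f"obs-{tail}-{date}"]
    g.add((obs, RDF.type, EX.StatusObservation))
    g.add((obs, EX.aircraft, EX[tail]))
    g.add((obs, EX.onDate, Literal(date, datatype=XSD.date)))
    g.add((obs, EX.status, Literal(state)))

# utilization rate = available days / observed days for aircraft A001
q = """
PREFIX ex: <http://example.org/defense-ontology#>
SELECT ?status (COUNT(?obs) AS ?days) WHERE {
    ?obs a ex:StatusObservation ;
         ex:aircraft ex:A001 ;
         ex:status ?status .
} GROUP BY ?status
"""
counts = {str(row.status): int(row.days) for row in g.query(q)}
print("utilization rate:", counts.get("available", 0) / sum(counts.values()))

# exponential reliability model (assumption): R(t) = exp(-t / MTBF)
mtbf_hours = 1200.0  # hypothetical MTBF for one component
print("30-day reliability:", round(math.exp(-(30 * 24) / mtbf_hours), 3))
```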

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from among the overflowing contents is becoming more important as content continues to be generated. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating the information request as a simple string. Also, large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. In particular, finance is one of the fields expected to benefit most from text data analysis because it constantly generates new information, and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continuously emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes more difficult to produce labeled text data manually as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, problem definition for automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. So, in order to overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities by using a neural tensor network and to evaluate their performance. Different from other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, the possibility of performance evaluation is presented through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items based on frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% of reports are designated as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities based on appearance frequency are selected and vectorized using one-hot encoding. After that, using a neural tensor network, the same number of score functions as stocks are trained. Thus, if a new entity from the testing set appears, its score can be calculated by putting it into every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio for all reports of the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than the average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities or their combinations that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, there also exist some limits and things to complement. Most notably, the phenomenon that the model performance is especially bad for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of semantically matching new text information with the related stocks.
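
A minimal NumPy sketch of the standard neural tensor network scoring function that the approach above trains once per stock; the dimensions, random weights, and one-hot inputs are illustrative assumptions, not the study's trained parameters.

```python
# Sketch: NTN score  u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b)  for one stock.
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # entity vector size (top-100 one-hot) and number of tensor slices

W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear term over concatenated inputs
b = np.zeros(k)
u = rng.normal(scale=0.1, size=k)

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Relatedness score of an input pair under this stock's score function."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

entity = np.eye(d)[7]  # one-hot vector of the 8th most frequent entity (illustrative)
other = np.eye(d)[0]   # second input, e.g. a stock-side representation (illustrative)
print("score:", round(ntn_score(entity, other), 4))
```

In the study a new entity is scored by every stock's trained function and assigned to the highest-scoring stock; here only a single, randomly initialised function is shown to make the score computation concrete.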

Jangdo(Small Ornamental Knives) manufacturing process and restoration research using Odong Inlay application (오동상감(烏銅象嵌)기법을 활용한 장도(粧刀)의 제작기술 및 복원연구)

  • Yun, Yong Hyun;Cho, Nam Chul;Jeong, Yeong Sang;Jang, Chu Nam
    • Korean Journal of Heritage: History & Science
    • /
    • v.49 no.2
    • /
    • pp.172-189
    • /
    • 2016
  • In this research, a literature review of the Odong material, mixture ratio, casting method and casting facilities was conducted on contemporary documents such as Cheongong Geamul, and a long sword was produced using the Odong inlay technique. The sword reproduction steps were as follows: Odong alloying, silver solder alloying, Odong plate and silver plate production, hilt and sheath production, production of the metal frame and decorative elements such as the Dugup (metal frame), Odong inlay assembly and final assembly. For the Odong alloy, the mixture ratio of true Odong, which has a copper-to-gold ratio of 20:1, was used. This is the traditional ratio for a high-quality product according to a $17^{th}$ century metallurgy instruction manual. The silver solder alloy was produced with a silver to brass (Cu 7 : Zn 3) ratio of 5:1 for inlay purposes and 5:2 for simple welding purposes. The true Odong alloy laminated with a silver plate was used to produce the hilt and sheath. The alloy went through annealing and forging steps to form a 0.6 mm thick plate, and its backing layer, a silver plate, had a matching thickness. After the two plates were adhered, the laminated plate went through annealing, forging, engraving, silver inlaying, shaping, silver welding, finishing and polishing steps. During the Odong colouring process, the red surface turns black by induced corrosion, and different hues can be achieved depending on its quality. To accomplish the silver-inlay Odong technique, Hanji saturated with thirty-day-old urine is wrapped around the hilt and sheath material, which is then left at warm room temperature for two to three hours. The Odong's surface turns black while the silver inlay remains unchanged. Various scientific analyses were conducted to study the composition of the recreated Odong panel, the silver solder, the silver plate and the colouring agent on the Odong's surface. The recreated Odong averaged Cu 95.57 wt% / Au 4.16 wt% and Cu 98.04 wt% / Au 1.95 wt%, whereas the documented ratio in the old record is Cu 95 wt% and Au 5 wt%. The recreated Odong was prone to surface breakage during the manufacturing process, unlike material made with the composition ratio written in the old record. On the silver plate of the silver and Odong laminate, 100 wt% Ag was detected, and between the two layers Cu, Ag and Au were detected. This proves that the adhesion between the two layers was successfully achieved. The silver solder had a varied Ag composition depending on the location, which shows uneven composition of the silver welding. A large quantity of S, which was not initially present, was detected on the surface of the black Odong. This indicates that the presence of S has an influence on the Odong colour. Additional studies on the chromaticity, additional chemical compounds and restoration are needed for a further understanding of the origin of the Odong colour. The results of the Odong alloy testing and recreation, the production of the Odong silver-inlay long sword, and the scientific analysis of the Odong black colouring agent will form an important foundation of knowledge for the conservation of Odong artifacts.
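
A one-line check of the alloy arithmetic mentioned above: a copper-to-gold mixing ratio of 20:1 corresponds to roughly Cu 95 wt% / Au 5 wt%, the proportion given in the old record (assuming the ratio is by weight).

```python
# Sketch: convert the 20:1 Cu:Au mixing ratio to weight percent.
cu_parts, au_parts = 20, 1
total = cu_parts + au_parts
print(f"Cu {100 * cu_parts / total:.1f} wt%, Au {100 * au_parts / total:.1f} wt%")
# -> Cu 95.2 wt%, Au 4.8 wt%, i.e. about the Cu 95 wt% / Au 5 wt% of the record
```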

A Study on the Management of Records for the Accountability of a University Student Body's Autonomous Activities - Focused on Myongji University's Student Body - (대학 총학생회 자치활동의 설명책임성을 위한 기록관리 방안 연구 - 명지대학교 총학생회를 중심으로 -)

  • Lee, Yu Bin;Lee, Seung Hwi
    • The Korean Journal of Archival Studies
    • /
    • no.29
    • /
    • pp.175-223
    • /
    • 2011
  • A university is an organization with a public character and has accountability to the community for its operating processes. Students account for a majority of the members of a university. In universities, numerous records are produced every year, and university students are major producers of these records. However, the roles and functions of university students, who produce an enormous amount of records as main agents of universities, and the records they produce have not yet received focused attention. In reality, from the archival point of view, the importance of records whose main producers are university students has been relatively underestimated. Against this background, this study attempted an archival approach to records produced by university students as main agents. There are various types of records that university students produce, such as records produced in the process of research and teaching as well as records produced in the course of various autonomous activities like clubs and student associations. This study focused especially on the autonomous activities of university students and placed emphasis on measures for securing accountability for those activities. To secure accountability for activities, records management must be the foundation. Therefore, as a way to ensure the accountability of university students' autonomous activities, we tried to present the systematization of records management and measures for utilizing records. For this, a student body, a university student autonomous organization, was analyzed, and the student body of the Myongji University Humanities Campus was selected as the specific target. First, to identify the records management status, activities, organization and functions of the student body, we conducted an interview with its president. Through this, we analyzed the activities of the university student body and examined the resulting need for accountability. Also, we derived the types and characteristics of records to be produced at each stage by analyzing the organization and functions of the student body of Myongji University. After deriving the types of produced records according to the need for accountability and the organization, functions and activities of the student body, we analyzed the current records management status of the student body. First, to identify the general process of the student body's activities, we analyzed the activity process of the student body of Myongji University by stage. Then we analyzed the records management methods of the student body and the responsible parties, and conducted an analysis of actual conditions. Through this analysis, we presented measures to ensure the accountability of a university student body in three categories: systematization of the records management process, establishment of records management infrastructure, and accountability guarantee measures. This study discussed accountability to society by analyzing the activities and functions of a student body, an autonomous organization of university students. As a measure to secure the accountability of a student body, we proposed a model for establishing a records management environment. However, since a student body is an organization operated on a one-year basis, there is a limit in that a records management environment is hard to establish. This study pointed out this limit and sought to provide clues for more active research in the field of student records management in the future through the presentation of a student body records management model. It is also expected that the analysis results derived from this research will have significance in terms of organizing and preserving school history.

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, it is necessary to deal with the synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features. However, the dictionary-based approach has the limitation that it cannot handle unregistered color-related terms in user queries. In order to overcome this limitation of conventional methods, this research proposes a model which extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed which includes color names and the R, G, B values of each color from the Korean color standard digital palette program and the Wikipedia color list for basic color searches. The dictionary was made more robust by adding 138 color names transliterated from English into Korean loanwords, with their corresponding RGB values. The final color dictionary therefore includes a total of 671 color names and corresponding RGB values. The method proposed in this research starts with the specific color a user searched for. Then, the presence of the searched color in the built-in color dictionary is checked. If the color exists in the dictionary, the RGB values of the color in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled and average RGB values are extracted from a certain middle area of each image. To extract the RGB values from images, a variety of different methods were attempted, since there are limits to simply taking the average of the RGB values of the center area of images. As a result, clustering the RGB values in a certain area of each image and taking the average value of the cluster with the highest density as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed color dictionary are compared. Then a color list is created with colors within a range of ${\pm}50$ for each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the colors with the highest similarity, up to five, become the final outcome. In order to evaluate the usefulness of the proposed method, we performed an experiment. In the experiment, 300 color names and their corresponding RGB values were obtained through questionnaires. They were used to compare the RGB values obtained from four different methods, including the proposed method. The average CIE-Lab Euclidean distance using our method was about 13.85, a relatively low distance compared to 30.88 for the case using the synonym dictionary only and 30.38 for the case using the dictionary with the Korean synonym website WordNet. The case which did not use the clustering step of the proposed method showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names. This method overcomes the limitation of the dictionary-based approach, the conventional synonym processing method. This research can contribute to improving the intelligence of e-commerce search systems, especially for the color search feature.
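
A minimal sketch of the matching pipeline described above, with a toy color dictionary and pre-sampled pixel values standing in for the crawled Google image results (the crawling step is omitted); the eps/min_samples settings and the synthetic pixels are illustrative assumptions, while the DBSCAN clustering, the ±50 band, and the Euclidean-distance ranking follow the description.

```python
# Sketch: dictionary lookup first; otherwise cluster sampled pixels with DBSCAN,
# take the densest cluster's mean RGB, then rank dictionary colors within +/-50.
import numpy as np
from sklearn.cluster import DBSCAN

COLOR_DICT = {  # tiny stand-in for the 671-entry dictionary
    "red": (255, 0, 0), "crimson": (220, 20, 60), "salmon": (250, 128, 114),
    "navy": (0, 0, 128), "sky blue": (135, 206, 235), "forest green": (34, 139, 34),
}

def reference_rgb(query, sampled_pixels=None):
    """Reference RGB from the dictionary, or from the densest pixel cluster."""
    if query in COLOR_DICT:
        return np.array(COLOR_DICT[query], dtype=float)
    labels = DBSCAN(eps=20, min_samples=5).fit_predict(sampled_pixels)
    densest = max(set(labels) - {-1}, key=lambda l: (labels == l).sum())
    return sampled_pixels[labels == densest].mean(axis=0)

def similar_colors(query, sampled_pixels=None, band=50, top_n=5):
    ref = reference_rgb(query, sampled_pixels)
    candidates = {name: np.array(rgb, dtype=float) for name, rgb in COLOR_DICT.items()
                  if np.all(np.abs(np.array(rgb, dtype=float) - ref) <= band)}
    return sorted(candidates, key=lambda n: np.linalg.norm(candidates[n] - ref))[:top_n]

# unregistered color name: pixels sampled from image centers (synthetic stand-in)
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal((230, 40, 70), 8, (60, 3)),   # dominant reddish cluster
                    rng.normal((90, 90, 90), 30, (20, 3))])  # background noise
print(similar_colors("cherry red", sampled_pixels=pixels))   # e.g. ['crimson']
```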