• Title/Summary/Keyword: 시스템 개선 (System Improvement)


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and it leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only representative samples near the class boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary-class problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they may deteriorate classification performance. Third, a major difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations. Observations that were incorrectly predicted by previous classifiers are chosen more often than examples that were correctly predicted. Thus, boosting attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process takes into account the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Three rounds of 10-fold cross validation with different random seeds were performed to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross validation, the entire data set was first partitioned into ten equal-sized sets, and each set was in turn used as the test set while the classifier trained on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
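The following is a minimal, hedged sketch of the kind of boosting loop the abstract describes: a SAMME-style AdaBoost round with SVM base learners, with a comment marking where MGM-Boost (per the abstract) would substitute a geometric mean-based error/accuracy for the plain weighted error. The function name, hyperparameters, and the reweighting details are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch: multi-class AdaBoost (SAMME) with SVM base learners.
import numpy as np
from sklearn.svm import SVC

def boost_svm(X, y, n_rounds=10):
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)                       # observation weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=w * n)
        miss = clf.predict(X) != y
        err = np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10)  # weighted error
        # MGM-Boost (per the abstract) would base this error/accuracy on the
        # geometric mean of per-class accuracies rather than this overall error.
        alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME classifier weight
        w *= np.exp(alpha * miss)                          # up-weight misclassified cases
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```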

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the 'SQF (Sectoral Qualifications Framework)' classification proposed in the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on the job offer information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, major job sites were selected to obtain information on the job classification systems used in the SW market. We then identified ways to collect job information from each site and collected the data through open APIs. Focusing on the relationships in the data, only job postings listed on multiple job sites at the same time were retained, and all other postings were deleted. Next, the job classification systems of the job sites were mapped using the association rules derived from the association analysis. After completing the mapping between these market classifications, we discussed the result with experts, further mapped the SQF, and finally proposed a new job classification system. As a result, more than 30,000 job listings were collected in XML format through the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on these 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first- through fourth-level classes. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consisted of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation-related jobs, consisted of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, was composed of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike other existing systems. WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and subdivides them into keywords; saramin likewise divides jobs into two levels with keyword-level subdivision. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the proposed system, some jobs stop at the second classification level, while others are subdivided down to the fourth level. This reflects the idea that not all jobs can be broken down to the same depth. We also combined the rules derived from the collected market data and association analysis with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping between occupations based on data, through association analysis, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to be transferable to other industries on the basis of its success in the SW industry.
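A small, hedged sketch of the association-rule step described above: each transaction is one job posting that appears on several sites, itemized as "SITE:category" labels, and Apriori rules across sites suggest which categories to map to one standard class. The mlxtend library, the example labels, and the thresholds are illustrative choices, not the authors' exact pipeline.

```python
# Sketch: cross-site category mapping via Apriori association rules.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each row = one posting listed on multiple sites (hypothetical labels).
postings = [
    ["WORKNET:web_programming", "JOBKOREA:web_developer", "saramin:frontend"],
    ["WORKNET:db_admin",        "JOBKOREA:dba",           "saramin:dba"],
    ["WORKNET:web_programming", "JOBKOREA:web_developer", "saramin:web_publisher"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(postings).transform(postings), columns=te.columns_)
frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)

# Rules such as {WORKNET:web_programming} -> {JOBKOREA:web_developer} indicate
# categories from different sites that can be mapped to one standard class.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```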

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing content is becoming more important as content generation keeps growing. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied in practice. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool, KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or field-specific word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations and points to complement remain; most notably, the especially poor performance for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with the related stocks.
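Below is a hedged sketch of a neural tensor network-style score function and the argmax prediction step the abstract describes. It uses the standard NTN form (bilinear tensor term plus linear term, as in Socher et al.); since the abstract trains one score function per stock, "stock_vec" here stands for that stock's learned representation. Shapes, names, and the parameterization are illustrative assumptions, not the paper's exact design.

```python
# Sketch: NTN-style scoring and per-stock argmax prediction.
import numpy as np

def ntn_score(entity_vec, stock_vec, W, V, b, u):
    """entity_vec: (d,) one-hot/embedded entity; stock_vec: (d,) stock representation.
    W: (k, d, d) tensor slices, V: (k, 2d), b: (k,), u: (k,)."""
    bilinear = np.array([entity_vec @ W[i] @ stock_vec for i in range(W.shape[0])])
    linear = V @ np.concatenate([entity_vec, stock_vec]) + b
    return float(u @ np.tanh(bilinear + linear))

def predict_stock(entity_vec, per_stock_params):
    # Evaluate every stock's score function on the new entity, take the argmax,
    # as described in the abstract.
    scores = {stock: ntn_score(entity_vec, *params)
              for stock, params in per_stock_params.items()}
    return max(scores, key=scores.get)
```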

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies release their own AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, they strengthen their relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, studies that help industry develop or use deep learning open source software are lacking. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, together with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars. To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework within the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Feasibility of Mixed-Energy Partial Arc VMAT Plan with Avoidance Sector for Prostate Cancer (전립선암 방사선치료 시 회피 영역을 적용한 혼합 에너지 VMAT 치료 계획의 평가)

  • Hwang, Se Ha;NA, Kyoung Su;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.17-29
    • /
    • 2020
  • Purpose: The purpose of this work was to investigate the dosimetric impact of a mixed-energy partial arc technique on prostate cancer VMAT. Materials and Methods: This study involved prostate-only patients planned with 70 Gy in 30 fractions to the planning target volume (PTV). The femoral heads, bladder and rectum were considered as organs at risk (OARs). For this study, mixed-energy partial arcs (MEPA) were generated with gantry angles set to 180°~230° and 310°~50° for the 6MV arc and 130°~50° and 310°~230° for the 15MV arc. For each arc, an avoidance sector was set: gantry angles 230°~310° and 50°~130° for the first arc, and 50°~310° for the second arc. The two plans were then summed, and the dosimetric parameters of each structure were analyzed: maximum dose, mean dose, D2%, homogeneity index (HI) and conformity index (CI) for the PTV, and maximum dose, mean dose, V70Gy, V50Gy, V30Gy and V20Gy for the OARs, together with monitor units (MU), in comparison with the 6MV 1 ARC and 6MV, 10MV, 15MV 2 ARC plans. Results: In MEPA, the maximum dose, mean dose and D2% were lower than in the 6MV 1 ARC plan (p<0.0005). However, the average difference in maximum dose was 0.24%, 0.39% and 0.60% (p<0.450, 0.321, 0.139) higher than the 6MV, 10MV and 15MV 2 ARC plans, respectively, and D2% was 0.42%, 0.49% and 0.59% (p<0.073, 0.087, 0.033) higher than the compared plans. The average difference in mean dose was 0.09% lower than the 10MV 2 ARC plan, but 0.27% and 0.12% (p<0.184, 0.521) higher than the 6MV 2 ARC and 15MV 2 ARC plans, respectively. HI was 0.064±0.006, the lowest value (p<0.005, 0.357, 0.273, 0.801) among all plans. For CI, there were no significant differences: 1.12±0.038 in MEPA, and 1.12±0.036, 1.11±0.024, 1.11±0.030 and 1.12±0.027 in the 6MV 1 ARC and 6MV, 10MV, 15MV 2 ARC plans, respectively. MEPA produced significantly lower rectum doses; in particular, V70Gy, V50Gy, V30Gy and V20Gy were 3.40, 16.79, 37.86 and 48.09, lower than in the other plans. For the bladder, V30Gy and V20Gy were lower than in the other plans. However, the mean doses to both femoral heads were 9.69±2.93 and 9.88±2.5, which were 2.8 Gy~3.28 Gy higher than in the other plans. The mean MU of MEPA was 19.53% lower than the 6MV 1 ARC plan and 5.7% lower than the 10MV 2 ARC plan. Conclusion: This study of prostate radiotherapy demonstrated that MEPA VMAT has the potential to minimize doses to OARs and improve homogeneity in the PTV at the expense of a moderate increase in the maximum and mean dose to the femoral heads.
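For reference, the PTV homogeneity and conformity metrics reported above are commonly defined as follows (ICRU 83 / RTOG-style definitions; the study may use a slight variant):

```latex
% D_{x\%}: dose received by x% of the PTV; V_RI: volume enclosed by the
% reference (prescription) isodose; TV: target volume.
\mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}},
\qquad
\mathrm{CI} = \frac{V_{\mathrm{RI}}}{\mathrm{TV}}
```

A lower HI indicates a more homogeneous PTV dose, and a CI closer to 1 indicates tighter conformity of the prescription isodose to the target.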

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and Light GBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec improves on the weak representation of structural equivalence in a network, a known limitation of existing link prediction methods based on the number of common neighbors. The method is therefore known to perform well on both community detection and structural equivalence. The vectors obtained by embedding the network in this way are generated from fixed-length random walks starting at arbitrarily designated nodes, so the resulting node sequences can easily be used as input to models for downstream tasks such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applies it to international trade information for the Korean food and beverage industry. Through this, we intend to contribute to extensive margin diversification for Korea within the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This was superior to the logistic regression-based binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the Light GBM-based optimal prediction model derived in this study outperformed the link prediction model of previous studies, which was set as the benchmarking model: the previous model recorded a recall of only 0.75, while the proposed model achieved a recall of 0.79. The difference in prediction performance between the benchmarking model and this study's model is due to the model training strategy. In this study, trades were grouped by trade value scale, and prediction models were trained differently for these groups. The specific strategies were (1) randomly masking trades and training a model on all trades without any condition on trade value, (2) randomly masking part of the trades with above-average trade value and training the model, and (3) randomly masking some of the trades in the top 25% of trade value and training the model. The experiment confirmed that the model trained by randomly masking some of the trades with above-average trade value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study suggests the practical utility of link prediction using Node2vec and Light GBM. In addition, useful implications could be derived for weight update strategies that enable better link prediction while training the model. On the other hand, this study also has policy utility because it applies graph embedding-based link prediction to trade transactions, which has rarely been done in previous research. The results of this study support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe they are sufficiently useful as a tool for policy decision-making.
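A hedged sketch of the pipeline described above: embed the trade graph with Node2vec, turn node pairs into edge features, and train a LightGBM binary classifier on observed versus masked links. The libraries (node2vec, lightgbm, networkx), the input file name, and all hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: Node2vec embedding + LightGBM link prediction on a trade network.
from itertools import islice
import networkx as nx
import numpy as np
from node2vec import Node2Vec
from lightgbm import LGBMClassifier

G = nx.read_edgelist("trade_edges.txt")  # hypothetical exporter-importer edge list

n2v = Node2Vec(G, dimensions=64, walk_length=30, num_walks=100, p=1.0, q=1.0)
model = n2v.fit(window=5, min_count=1)   # returns a gensim Word2Vec model

def edge_feature(u, v):
    # Hadamard product of node embeddings is a common edge representation.
    return model.wv[u] * model.wv[v]

pos_pairs = list(G.edges())[:1000]                     # observed links
neg_pairs = list(islice(nx.non_edges(G), 1000))        # masked / absent links
X = np.array([edge_feature(u, v) for u, v in pos_pairs + neg_pairs])
y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X, y)
# Unlinked country pairs with high predicted probability are the
# "undeveloped export candidate" suggestions.
```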

Analysis of Surveys to Determine the Real Prices of Ingredients used in School Foodservice (학교급식 식재료별 시장가격 조사 실태 분석)

  • Lee, Seo-Hyun;Lee, Min A;Ryoo, Jae-Yoon;Kim, Sanghyo;Kim, Soo-Youn;Lee, Hojin
    • Korean Journal of Community Nutrition
    • /
    • v.26 no.3
    • /
    • pp.188-199
    • /
    • 2021
  • Objectives: The purpose of this study was to identify the ingredients whose real prices are usually surveyed and to present the demand of nutrition teachers and dietitians for such surveys of ingredients used by school foodservices. Methods: A survey was conducted online from December 2019 to January 2020. The questionnaire was distributed to 1,158 nutrition teachers and dietitians from elementary, middle, and high schools nationwide, and 439 responses (37.9% return rate) were collected and used for data analysis. Results: The ingredients whose real prices were investigated directly by schools were industrial products in 228 schools (51.8%), fruits in 169 schools (38.4%), and specialty crops in 166 schools (37.7%). Moreover, nutrition teachers and dietitians in elementary, middle, and high schools searched for the real prices of ingredients in different ways. In elementary schools, there was high demand for price information about grains, vegetables or root and tuber crops, special crops, fruits, eggs, fish, and organic and locally grown ingredients from the School Foodservice Support Centers, while real price information about meats, industrial products, and pickled processed products was sought from external specialized institutions. In addition, nutrition teachers and dietitians in middle and high schools wanted to obtain prices of all ingredients from the Offices of Education or the District Offices of Education. Conclusions: Schools want to use the time and money spent on surveying the real prices of ingredients efficiently, through reputable organizations or by working together with other nutrition teachers and dietitians. The results of this study will be useful in understanding the current status of surveys carried out to determine real price information for ingredients used by school foodservices.

A Study on the Establishment of Acceptable Range for Internal Quality Control of Radioimmunoassay (핵의학 검체검사 내부정도관리 허용범위 설정에 관한 고찰)

  • Young Ji, LEE;So Young, LEE;Sun Ho, LEE
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.43-47
    • /
    • 2022
  • Purpose: Radioimmunoassay laboratories implement quality control by systematizing the internal quality control system for quality assurance of test results. This study aims to contribute to the quality assurance of radioimmunoassay results and to support systematic quality control by measuring the average CV of internal and external quality control across multiple institutions, for reference when a laboratory sets its own acceptable range. Materials and Methods: We measured the average CV of internal quality control and the rate of CVs exceeding 10.0% (bounce rate) for a total of 42 items from October 2020 to December 2021. According to the CV results, we classified and compared an upper group (5.0% or less), a middle group (5.0~10.0%), and a lower group (10.0% or more). The bounce rate above 10.0% was compared by classifying the items tested by five or more institutions into tumor markers, thyroid hormones, and other hormones. The average CV was also measured from the overall average and standard deviation of the external quality control results for 28 items from the first to the fourth quarter of 2021, and from the overall average and standard deviation of the inter-institution proficiency results for 13 items in the first and second halves of 2021. The average CVs of internal and external quality control were compared item by item, so that items with good quality control performance and items requiring attention could be identified. Results: Measuring the average precision of internal quality control for 42 items across six institutions, the top group (5.0% or less) included ferritin, HGH, SHBG, and 25-OH-VitD, while the bottom group (10.0% or more) included cortisol, ATA, AMA, renin, and estradiol. Comparing the rate of CV bounces above 10.0% for tumor markers, CA-125 (6.7%) and CA-19-9 (9.8%) performed well, while SCC-Ag (24.3%) and CA-15-3 (26.7%) were among the items requiring attention. For thyroid hormone tests, free T4 (2.1%) and T3 (9.3%) showed excellent performance, while AMA (39.6%) and ATA (51.6%) required attention. For other hormones, IGF-1 (8.8%), FSH (9.1%), and prolactin (9.2%) showed excellent performance, whereas estradiol (37.3%), testosterone (37.7%), and cortisol (44.4%) required attention. Measuring the average CV of all institutions participating in external quality control for 28 items, HGH and SCC-Ag were included in the top group (≤10.0%), whereas ATA, estradiol, TSI, and thyroglobulin were in the bottom group (≥30.0%). Conclusion: As a result of evaluating 42 items across six institutions, the average CV was 3.7~12.2%, a 3.3-fold difference between the upper and lower groups. Cortisol, ATA, AMA, renin, and estradiol tests with high CVs will require continuous improvement activities to improve precision. In addition, we measured and compared the overall average CVs of internal quality control, external quality control, and inter-institution proficiency across the six institutions for 41 items excluding HBs-Ab. As a result, ATA, AMA, renin, and estradiol belong to the same low-performing subgroup, so they require attention to control, and setting a higher acceptable range for them should be considered. It is recommended that each laboratory set and manage its own acceptable range for the internal quality control CV in consideration of its own circumstances, since reagents and instruments differ and results vary with the tester's proficiency and the quality control materials. The accuracy and reliability of radioimmunoassay results can be improved if systematic quality control is implemented based on the acceptable range set in this way.
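A minimal sketch of the two statistics used above: the coefficient of variation (CV, %) of repeated QC measurements, and the "bounce rate", i.e. the share of QC runs whose CV is 10.0% or more. The data layout and example numbers are illustrative assumptions.

```python
# Sketch: CV (%) and bounce rate (%) for internal quality control results.
import numpy as np

def cv_percent(values):
    """CV = sample standard deviation / mean * 100."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

def bounce_rate(cvs, threshold=10.0):
    """Share of QC runs whose CV meets or exceeds the threshold, as a percentage."""
    cvs = np.asarray(cvs, dtype=float)
    return (cvs >= threshold).mean() * 100.0

# e.g. one month of repeated QC results -> CV; monthly CVs over the study period -> bounce rate
print(cv_percent([12.1, 13.0, 11.7, 12.5]))
print(bounce_rate([4.2, 11.5, 9.8, 13.1, 6.0]))
```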

The Innovation Ecosystem and Implications of the Netherlands. (네덜란드의 혁신클러스터정책과 시사점)

  • Kim, Young-woo
    • Journal of Venture Innovation
    • /
    • v.5 no.1
    • /
    • pp.107-127
    • /
    • 2022
  • Global challenges such as the corona pandemic, climate change and the tech war ensure that the question of who develops and controls the technologies of the future is prominently on the agenda. The development of, and applications in, agrifood, biotech, high-tech, medtech, quantum, AI and photonics are the basis of the future earning capacity of the Netherlands and contribute to solving societal challenges, close to home and worldwide. For the Netherlands and Europe to obtain a strategic position in the knowledge and innovation chain, and with it to secure their autonomy in relation to China and the United States, clear choices are needed. Brainport Eindhoven: Building on Philips' knowledge base, an innovative ecosystem has been created in which more than 7,000 companies in High-tech Systems & Materials (HTSM) collaborate on new technologies, future earning potential and international value chains. Nearly 20,000 private R&D employees work on five regional high-end campuses and for companies such as ASML, NXP, DAF, Prodrive Technologies, Lightyear and many others. Brainport Eindhoven has an internationally leading position in the fields of systems engineering, semicon, micro- and nanoelectronics, AI, integrated photonics and additive manufacturing. What is developed in Brainport leads to growth of the manufacturing industry far beyond the region, thanks to chain cooperation between large companies and SMEs. South Holland: The South Holland ecosystem includes companies such as KPN, Shell, DSM and Janssen Pharmaceutical, large and innovative SMEs, and leading educational and knowledge institutions that together invest more than €3.3 billion in R&D. Its core is formed by the top campuses of Leiden and Delft, accounting for more than 40,000 innovative jobs, the port-industrial complex (logistics & energy), the maritime and aerospace manufacturing cluster, and the horticultural cluster in the Westland. South Holland focuses thematically on key technologies such as biotech, quantum technology and AI. Twente: The green, technological top region of Twente has a long tradition of collaboration in triple helix partnerships. Technological innovations from Twente offer worldwide solutions for major social issues. Work is in progress on key technologies such as AI, photonics, robotics and nanotechnology. New technology is applied in sectors such as medtech, the manufacturing industry, agriculture and circular value chains, such as textiles and construction. Start-ups and SMEs are of great importance to Twente for the jobs of tomorrow; they connect technology from Twente with knowledge regions and OEMs, at home and abroad. Wageningen in FoodValley: Wageningen Campus is a global agri-food magnet for startups and corporates, supported by the national accelerator StartLife and the student incubator StartHub. FoodvalleyNL also connects the versatile ecosystem regionally, nationally and internationally through an ambitious 2030 programme, including the WEF European food innovation hub. The campus offers guests and its 3,000 private R&D staff an engaging programme of science, innovation and social dialogue around the challenges in agro production, food processing, biobased/circular approaches, climate and biodiversity. The Netherlands succeeded in industrializing as a logistics country, but it is now striving for sustainable growth by creating an innovative ecosystem through a regional industry-academic research model. In particular, the Brainport cluster, centered on the high-tech industry, pursues regional innovation and is opening a new horizon for existing industry-academic models. Brainport is a state-of-the-art forward base that leads the innovation ecosystem of Dutch manufacturing. The history of ports in the Netherlands shows a transformation from a logistics-oriented port, symbolized by Rotterdam, into a "port of digital knowledge" centered on Brainport. On this basis, it can be seen that the industry-academic cluster model, which links the central government's vision of creating an innovative ecosystem with the specialized industry of each region, serves as the biggest stepping stone. The Netherlands' innovation policy is expected to become even more faithful to its role as Europe's "digital gateway" through regional development centered on the innovation cluster ecosystem and investment in job creation and new industries.

Changes in Agricultural Extension Services in Korea (한국농촌지도사업(韓國農村指導事業)의 변동(變動))

  • Fujita, Yasuki;Lee, Yong-Hwan;Kim, Sung-Soo
    • Journal of Agricultural Extension & Community Development
    • /
    • v.7 no.1
    • /
    • pp.155-166
    • /
    • 2000
  • When the researcher visited Korea in the fall of 1994, he was struck by the high-rise apartment buildings around the capital region, including Seoul and Suwon, resulting from the rising demand for housing caused by urban migration that followed secondary and tertiary industrial development. Six years later, in March 2000, the researcher witnessed more apartment buildings and vinyl greenhouse complexes, evidence of continued economic progress in Korea. Korea had to receive rescue financing from the International Monetary Fund (IMF) because of the financial crisis in 1997. However, signs of recovery were seen within a year, and the growth rate of gross domestic product (GDP) in 1999 reached as high as 10.7 percent. During this period, the Korean government worked on restructuring the banking, enterprise, labor, and public sectors. The major directions of the government were localization, reducing administrative manpower, limiting agricultural budgets, privatizing public enterprises, integrating agricultural organizations, and easing various regulations. Thus, the power of the central government shifted to local governments, resulting in increased power for city mayors and county chiefs. Agricultural extension services were one of the targets of government restructuring and were transferred from the central government to local governments. At the same time, the number of extension offices was reduced by 64 percent, extension personnel were reduced by 24 percent, and extension budgets were cut. During the restructuring, the basic direction of extension services was set by the central Rural Development Administration; personnel management, technology development and support were transferred to the provincial Rural Development Administrations; and operational responsibilities were transferred to city/county governments. Local agricultural extension offices were renamed Agricultural Technology Extension Centers, established under the jurisdiction of the city mayor or county chief. A technology development function was added, while the number of educators for agriculture and rural life was reduced. From observations of rural areas and agricultural extension services at various levels, the functional responsibilities of extension were not well recognized across the central, provincial, and local levels. Central agricultural extension services should be more concerned with effective rural development by monitoring provincial- and local-level extension activities more thoroughly. At the county level, it may be desirable to add a research function to reflect local agricultural technology needs. Sometimes, adding administrative tasks for extension educators may be helpful for farmers; however, tasks such as inspection and investigation should be avoided, since they may hinder the effectiveness of extension educational activities. The major contents of the agricultural extension service in Korea appeared to focus on saving agricultural materials, developing new agricultural technology, enhancing agricultural exports, increasing production, and establishing market-oriented farming. However, these kinds of efforts may lead to non-sustainable agriculture, and it would be better to put more emphasis on sustainable agriculture in the future. Agricultural extension methods in Korea may be better classified into two approaches or functions: a consultation function for advanced farmers and a technology transfer or educational function for small farmers. Advanced farmers were more interested in technology and management information, while small farmers were more concerned with information on farm management directions and the timely diffusion of agricultural technology information. The agricultural extension service should put more emphasis on small farmer groups and the active participation of farmers in these groups. Providing information and moderate advice in selecting alternatives should be the major consultation activities for advanced farmers, while problem-solving processes may be the major educational function for small farmers. Systems such as the internet and e-mail should be utilized for information exchange. These activities may not be easy for the decreased number of extension educators with their increased administrative tasks; a one-to-one approach may be difficult to practice, but group guidance may improve the situation to a certain degree.
