• Title/Summary/Keyword: Accuracy Measure


A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology, v.31 no.1, pp.17-23, 2008
  • Purpose: Because bone density results vary with the testing environment, the equipment, and the precision and accuracy of the tester, quality control must be managed systematically. Equipment failures caused by aging machines and by overload from a growing number of patients occurred frequently, and the resulting replacement of equipment and purchase of additional bone density units raised a compatibility problem for patient follow-up. This study examines whether the clinical changes in a patient's bone density are reflected accurately and precisely when replacement and additional units are used interchangeably with the existing equipment. Materials and methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 patients per unit, in 120 women (average age 48.78, 20-60 years old) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the data from daily morning quality control with the ASP phantom (Group 3). Tester precision and cross-calibration data were also obtained by scanning the same patient once on each of the two units alternately in 120 women (average age 48.78, 20-60 years old) (Group 4). Results: The equipment was stable according to the daily QC data (0.996 g/cm², %CV = 0.08). The mean ± SD and %CV values for ALP in Group 1 were P1: 1.064 ± 0.002 g/cm², %CV = 0.190; P2: 1.061 ± 0.003 g/cm², %CV = 0.192.
In Group 2 they were P1: 1.187 ± 0.002 g/cm², %CV = 0.164; P2: 1.198 ± 0.002 g/cm², %CV = 0.163. The mean error ± 2SD and %CV in Group 3 were P1 (spine: 0.001 ± 0.03 g/cm², %CV = 0.94; femur: 0.001 ± 0.019 g/cm², %CV = 0.96) and P2 (spine: 0.002 ± 0.018 g/cm², %CV = 0.55; femur: 0.001 ± 0.013 g/cm², %CV = 0.48). In Group 4 the mean error ± 2SD, %CV, and r values were spine: 0.006 ± 0.024 g/cm², %CV = 0.86, r = 0.995; femur: 0 ± 0.014 g/cm², %CV = 0.54, r = 0.998. Conclusion: Both the LUNAR ASP %CV and the HOLOGIC spine phantom fall within the normal error range of ±2% defined by the ISCD. The BMD measurements kept relatively constant values, showing excellent repeatability. A phantom is homogeneous, however, and cannot reflect clinical factors such as variations in a patient's body weight or body fat, so quality control using a phantom is useful mainly for detecting mis-calibration of the equipment. Comparing Group 3 with Group 4, the values measured twice on one unit and those cross-measured on the two units all fell within 2SD on the Bland-Altman plot, and r values of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two interchangeable units did not affect patient follow-up. Regular testing of the equipment and of the testers' capability, followed by appropriate calibration, is required to produce reliable BMD values.
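The precision figures in this abstract are coefficients of variation (%CV = SD / mean × 100) over repeated scans. A minimal sketch of that calculation, with illustrative BMD values that are not from the paper:

```python
import statistics

def percent_cv(scans):
    """Coefficient of variation (%) of repeated BMD scans (g/cm^2)."""
    mean = statistics.mean(scans)
    sd = statistics.stdev(scans)  # sample standard deviation
    return 100 * sd / mean

# Five repeated phantom scans (illustrative values only)
bmd = [1.062, 1.065, 1.064, 1.066, 1.063]
print(round(percent_cv(bmd), 3))
```

A %CV near 0.1-0.2, as reported for the phantom groups above, indicates very tight repeatability relative to the ±2% ISCD error range.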


Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.97-117, 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumer opinions differ from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, they still tend to produce criteria irrelevant to the product, because the extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor classifier. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, the words per topic are obtained with LDA, and only the nouns among the topic words are kept as candidate criteria words. The words are then tagged according to whether they are plausible criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word against the tagged examples.
After the preparation phase, the criteria extraction phase is conducted for low-level categories. This phase starts by crawling the reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. The review data came from the 'Electronics/Digital' section, one of the high-level categories in 11st. Three other models were used for comparison: the actual criteria of 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation asked each model to predict the evaluation criteria of 10 low-level categories. The criteria words extracted from each model were combined into a single word set used in survey questionnaires, in which respondents chose every item they considered an appropriate criterion for each category. Each model scored a point whenever a chosen word had been extracted by that model. The suggested model had higher scores than the other models in 8 out of 10 low-level categories. Paired t-tests on the model scores confirmed that the suggested model performed better in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy.
This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by a k-nearest neighbor approach. The method overcomes the limits of previous dictionary-based models and frequency-based refinement models, and can contribute to better review analysis for deriving business insights in the e-commerce market.

Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요빙계의 소유역에 대한 순간단위권 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers, v.19 no.1, pp.4296-4311, 1977
  • This study was conducted to derive instantaneous unit hydrographs that yield accurate and reliable unitgraphs for the estimation and control of floods, the development of agricultural water resources, and the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan, and Inchon River systems, which may be considered the main river systems in Korea. The areas of the small watersheds range from 85 to 470 km². An accurate instantaneous unit hydrograph is derived under the condition of a short-duration heavy rain of uniform rainfall intensity, using basic and reliable data consisting of rainfall records, pluviographs, and river stage records for the river systems mentioned above. The relations between the measured unitgraph and watershed characteristics such as watershed area A, river length L, and centroid distance of the watershed area Lca were investigated. In particular, this study emphasized the derivation and application of the instantaneous unit hydrograph (IUH) by applying Nash's conceptual model and by using an electronic computer. The IUH from Nash's conceptual model and the IUH from flood routing, both applicable to ungaged small watersheds, were derived and compared with the observed unitgraph; the IUH for each small watershed can be solved using an electronic computer. The results of these studies are summarized as follows. 1. A uniform rainfall intensity distribution appears in the analysis of the temporal rainfall pattern of the selected heavy rainfall events. 2. The mean value of the recession constant, K1, is 0.931 over all watersheds observed. 3. The time to peak discharge, Tp, occurs at 0.02 Tb, the base length of the hydrograph, a lower value than in larger watersheds. 4.
The peak discharge Qp, in relation to the watershed area A and effective rainfall R, is found to be Qp = (0.895 / A^0.145) · A·R, with a highly significant correlation coefficient of 0.927 between peak discharge Qp and effective rainfall R. A design chart for the peak discharge (refer to Fig. 15) with watershed area and effective rainfall was established by the author. 5. The mean slopes of the main streams are within the range of 1.46 to 13.6 meters per kilometer, indicating steeper slopes in the small watersheds than in larger watersheds. The lengths of the main streams are within the range of 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that the time of flood concentration was shorter in the small watersheds than in larger watersheds. 6. The length of the main stream L, in relation to the watershed area A, is found to be L = 2.044 A^0.48, with a highly significant correlation coefficient of 0.968. 7. The watershed lag Lg, in hours, in relation to the watershed area A and the main stream length L, was derived as Lg = 3.228 A^0.904 L^-1.293 with high significance. On the other hand, the watershed lag could also be expressed as Lg = 0.247 (L·Lca/√S)^0.604 in terms of L·Lca, the product of the main stream length and the centroid distance of the basin, which together with the slope S can be regarded as a measure of the shape and size of the watershed apart from the area A. The latter showed a lower correlation than the former in the significance test. Therefore, it can be concluded that the watershed lag Lg is more closely related to watershed characteristics such as watershed area and main stream length in the small watersheds.
The empirical formula for the peak discharge per unit area, qp (m³/sec/km²), was derived as qp = 10^(-0.389 - 0.0424·Lg) with high significance, r = 0.91. This indicates that the peak discharge per unit area of the unitgraph is inversely proportional to the watershed lag time. 8. The base length of the unitgraph, Tb, in relation to the watershed lag Lg, was expressed as Tb = 1.14 + 0.564 (Lg/24), defined with high significance. 9. For the derivation of the IUH by the linear conceptual model, the storage constant K was adopted as K = 0.1197 (L/√S), with a highly significant correlation coefficient of 0.90, in terms of the main stream length L and the slope S. The gamma function argument N, derived from watershed characteristics such as watershed area A, river length L, centroid distance Lca, and slope S, was found to be N = 49.2 A^1.481 L^-2.202 Lca^-1.297 S^-0.112 with high significance, the F value being 4.83 in the analysis of variance. 10. According to the linear conceptual model, the formulas for the time distribution, peak discharge, and time to peak discharge of the instantaneous unit hydrograph, when the unit effective rainfall of the unitgraph is 10 mm and the watershed area is in km², are as follows: time distribution of the IUH, u(0, t) = (2.78A / (K·Γ(N))) · e^(-t/K) · (t/K)^(N-1) (m³/sec); peak discharge of the IUH, u(0, t)max = (2.78A / (K·Γ(N))) · e^(-(N-1)) · (N-1)^(N-1) (m³/sec); time to peak discharge of the IUH, tp = (N-1)·K (hrs). 11.
Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the gamma function argument N is connected with the recession constant K1, the peak discharge Qp, and the time to peak discharge tp as K'/tp = 1/(N-1) - ln(t/tp) / ln(Q/Qp), where K' = 1/ln K1. 12. Linking the empirical formulas for the storage constant K and the gamma function argument N, the unit hydrograph for ungaged small watersheds can be established from the following formulas for the time distribution and peak discharge of the IUH: time distribution of the IUH, u(0, t) = 23.2 A L^-1 S^(1/2) F(N, K, t) (m³/sec), where F(N, K, t) = e^(-t/K) (t/K)^(N-1) / Γ(N); peak discharge of the IUH, u(0, t)max = 23.2 A L^-1 S^(1/2) F(N) (m³/sec), where F(N) = e^(-(N-1)) (N-1)^(N-1) / Γ(N). 13. The base length of the time-area diagram for the IUH was given by C = 0.778 (L·Lca/√S)^0.423, with a correlation coefficient of 0.85, indicating its relation to the main stream length L, the centroid distance Lca, and the slope S. 14. The relative errors in the peak discharge of the IUH from the linear conceptual model and the IUH from routing were 2.5 and 16.9 percent, respectively, against the peak of the observed unitgraph. It is therefore confirmed that the IUH from the linear conceptual model approaches the observed unitgraph more closely than that from flood routing in the small watersheds.
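The Nash-model IUH of item 10, u(0, t) = (2.78A / (K·Γ(N))) · e^(-t/K) · (t/K)^(N-1), with peak at tp = (N-1)·K, is straightforward to evaluate numerically. A small sketch with illustrative parameter values (A, K, N below are not taken from the paper):

```python
from math import exp, gamma

def nash_iuh(A, K, N, t):
    """Nash-model IUH ordinate u(0, t) in m^3/sec for watershed area A (km^2),
    storage constant K (hrs), gamma function argument N, at time t (hrs),
    for a unit effective rainfall of 10 mm as in the study."""
    return 2.78 * A / (K * gamma(N)) * exp(-t / K) * (t / K) ** (N - 1)

A, K, N = 200.0, 5.0, 3.0   # illustrative values, not from the paper
tp = (N - 1) * K            # time to peak discharge: (N-1)K = 10 hrs here

# The closed-form peak of item 10 agrees with the ordinate at t = tp
peak = 2.78 * A / (K * gamma(N)) * exp(-(N - 1)) * (N - 1) ** (N - 1)
assert abs(nash_iuh(A, K, N, tp) - peak) < 1e-9
print(round(peak, 2))
```

Setting the derivative of u(0, t) to zero reproduces tp = (N-1)·K, which is why the two expressions agree exactly.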


Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.137-148, 2014
  • The recommender system has become one of the most important technologies in e-commerce. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful approach. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users, so for new users who have no such information, CF cannot produce recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computing recommendations extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate when there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to the two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality falls below a pre-set threshold; the threshold is determined by simulation so that the accuracy of CF on the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used instead. The F measures of the two datasets, weighted by their numbers of nodes, are summed into the final performance metric. To test the performance improvement from this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team: 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using the 'best-N-neighbors' and cosine similarity methods. The empirical results show that the F measure improved by about 11% on average with the proposed algorithm. Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that the performance of CF can be improved without any additional information when SNA techniques are used as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, it provides guidelines for improving the performance of CF recommender systems with a simple modification.
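Steps 1 and 2 of the algorithm — projecting the user-item data onto a user-user network and splitting off low-degree users — can be sketched as follows. The ratings data, the linking rule (any shared item), and the threshold value here are all illustrative, not from the paper, where the threshold is tuned by simulation.

```python
from collections import defaultdict
from itertools import combinations

# Toy user -> rated-items data (illustrative, not the MovieLens dataset)
ratings = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"b", "c", "d"},
    "u4": {"x", "y"},  # unique taste: shares no items with the others
}

# Step 1: two-mode (user-item) -> one-mode (user-user) network:
# link two users if they rated at least one common item
degree = defaultdict(int)
for u, v in combinations(ratings, 2):
    if ratings[u] & ratings[v]:
        degree[u] += 1
        degree[v] += 1

# Step 2: users below the degree-centrality threshold are the gray sheep
THRESHOLD = 1  # in the paper this value is determined by simulation
gray_sheep = sorted(u for u in ratings if degree[u] < THRESHOLD)
others = sorted(u for u in ratings if degree[u] >= THRESHOLD)
print(gray_sheep, others)  # gray sheep get 'popular item' recommendations
```

The "others" set then goes to ordinary CF (Step 3), while the isolated users are served by the popular-item fallback (Step 4).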

  • The Benefit of KT-2000 Knee Ligament Arthrometer in Diagnosis of Anterior Cruciate Ligament Injury (슬관절 전방 십자 인대 파열의 진단에 있어서 KT-2000 기기의 유용성)

    • Park, Jai-Hyung;Kim, Hyoung-Soo;Jung, Kwang-Gyu;Yoo, Jeong-Hyun
      • Journal of the Korean Arthroscopy Society, v.8 no.2, pp.82-88, 2004
    • Purpose: In this study, we intended to ascertain the benefit of the KT-2000 knee ligament arthrometer (KT-2000) in the diagnosis of ACL (anterior cruciate ligament) injury by comparing the anterior displacement of normal knees with that of ACL-deficient knees. Materials and Methods: Two examiners measured the anterior displacement of the knee joints of 30 healthy individuals using the KT-2000, at 30° flexion, under the settings of full muscle relaxation, muscle contraction, 25° internal rotation, and 25° external rotation, and the results were analyzed according to these variables. The preoperative anterior displacement of the ACL-injured knee was also measured in 30 patients who later underwent arthroscopic ACL reconstruction. Results: The results of examiner 1 were 6.5 ± 1.5 mm, 2.5 ± 0.9 mm, 4.8 ± 1.2 mm, and 6.4 ± 1.3 mm in the right knee and 5.6 ± 1.3 mm, 2.1 ± 0.8 mm, 4.5 ± 1.2 mm, and 5.2 ± 1.3 mm in the left knee, in the order of full muscle relaxation, contraction, 25° internal rotation, and 25° external rotation. The results of examiner 2 were 6.9 ± 1.2 mm, 2.9 ± 1.1 mm, 5.6 ± 1.6 mm, and 6.9 ± 1.5 mm in the right knee and 5.5 ± 1.7 mm, 1.9 ± 0.9 mm, 5.1 ± 1.9 mm, and 5.7 ± 1.6 mm in the left knee. The side-to-side difference for examiner 1 in the muscle relaxation setting was 0.9 ± 1.0 mm. The anterior displacement of the ACL-injured knees averaged 11 ± 2.93 mm, differing by an average of 6.5 ± 2.31 mm from that of the normal knees. In the comparison between the right and left knees of healthy individuals, both examiners' results showed a statistical difference in the full muscle relaxation setting, but the side-to-side difference was below 2 mm in 25 cases (83%) and 21 cases (70%), respectively, and above 3 mm in just 1 case.
In the comparison between the normal and ACL-injured knees, the side-to-side difference in the muscle relaxation setting showed a statistical difference (p<0.05). Conclusion: The KT-2000 result is affected by the relaxation of the muscles around the knee, the flexion angle of the knee joint, the rotation of the tibia, the strength of the displacing force, the timing of the test, and physical factors such as height and weight. However, the accuracy of diagnosing ACL injury with the KT-2000 will increase if the examiner is skillful and the tests are made in the exact position of the knee joint.


    Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

    • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
      • Journal of Intelligence and Information Systems, v.25 no.1, pp.63-83, 2019
    • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses such as corporate intrinsic value analysis or technical auxiliary index analysis. However, pattern analysis is difficult and less computerized than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past techniques could not recognize, but this can be fragile in practice, because whether the discovered patterns are suitable for trading is a separate question. Those studies find a meaningful pattern, locate a point that matches it, and measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Where existing research tries to find a pattern with stock price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there were no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement assumes that both the buy and the sell were actually executed, making it close to a real situation. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is unfinished. Genetic algorithms (GA) were the most suitable solution, since the number of cases in this simulation was far too large to search for high-success-rate patterns exhaustively. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but that the highest volatility is not necessarily the best.
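The swing wave method described above — a bar is a peak (valley) when it is higher (lower) than the n bars on each side — can be sketched directly; the five turning points it yields are what define an M or W pattern. The price series below is invented for illustration:

```python
def swing_points(prices, n=2):
    """Swing wave method: index i is a peak if prices[i] exceeds the n prices
    on each side, and a valley if it is below the n prices on each side."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i] + prices[i + 1:i + n + 1]
        if prices[i] > max(window):
            peaks.append(i)
        elif prices[i] < min(window):
            valleys.append(i)
    return peaks, valleys

prices = [10, 12, 15, 13, 11, 9, 12, 14, 13, 12]
print(swing_points(prices, n=2))  # peak/valley indices of the toy series
```

Because a turning point is only confirmed once n later bars exist, this method naturally trades after a pattern completes, which matches the interpretation of its superior test results.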

    A Study on the Characteristics of Enterprise R&D Capabilities Using Data Mining (데이터마이닝을 활용한 기업 R&D역량 특성에 관한 탐색 연구)

    • Kim, Sang-Gook;Lim, Jung-Sun;Park, Wan
      • Journal of Intelligence and Information Systems, v.27 no.1, pp.1-21, 2021
    • As the global business environment changes, uncertainty in technology development and market needs increases, and competition among companies intensifies, interest in and demand for the R&D activities of individual companies are growing. To cope with these environmental changes, R&D companies are strengthening R&D investment as a means of enhancing the qualitative competitiveness of R&D while also paying more attention to facility investment. As a result, facility and R&D investments inevitably become a burden that R&D companies bear against future uncertainty, and a management strategy of increasing R&D investment as a means of enhancing R&D capability is highly uncertain in terms of corporate performance. In this study, the structural factors that influence the R&D capabilities of companies are explored using data mining techniques from the viewpoints of technology management capability, R&D capability, and corporate classification attributes, and the characteristics these individual factors present at different levels of R&D capability are analyzed. The study presents cluster analyses and experimental results based on evidence data for all domestic R&D companies and is expected to provide important implications for corporate management strategies aimed at enhancing the R&D capabilities of individual companies. For the three viewpoints, 7, 2, and 4 detailed evaluation indexes, respectively, were composed to quantitatively measure the individual levels in each area. For technology management capability and R&D capability, the sub-item evaluation indexes currently used by domestic technology evaluation agencies were referenced, and the final detailed evaluation indexes were newly constructed in consideration of whether the data could be obtained quantitatively. For the corporate classification attributes, the most basic corporate classification profile information was considered.
In particular, to grasp the homogeneity of the R&D competency levels, a comprehensive score for each company was computed from the detailed evaluation indicators of technology management capability and R&D capability, the competency level was classified into five grades, and the grades were compared with the cluster analysis results. To interpret the comparison between the clusters and the competency grades, clusters with high and low R&D competency levels were identified, and the characteristics of each cluster were then analyzed through the detailed evaluation indicators. In this way, two clusters with high R&D competency and one with a low level were identified, while the remaining two clusters showed broadly similar distributions. Accordingly, this study analyzed the individual characteristics, by detailed evaluation index, of the two high-competency clusters and the one low-competency cluster. The results imply that a faster replacement cycle of professional managers who can respond effectively to changes in technology and market demand is more likely to contribute to enhancing R&D capability. For a privately held company, it is necessary to increase the intensity of R&D input by strengthening R&D personnel's sense of belonging through conversion to a corporation, and to clarify responsibility and authority through team-level organization. Since technology commercialization achievements and technology certifications occurred both in clusters that contributed to capability improvement and in clusters that did not, it was confirmed that there is a limit to treating them as important factors for enhancing R&D capability from a management perspective.
Lastly, the experience of utility model filing was identified as a factor that has an important influence on R&D capability, and it was confirmed the need to provide motivation to encourage utility model filings in order to enhance R&D capability. As such, the results of this study are expected to provide important implications for corporate management strategies to enhance individual companies' R&D capabilities.
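The grading-and-comparison step described above can be sketched as follows. This is a minimal illustration, not the paper's method: the company data, equal indicator weighting, and grade cutoffs are all assumptions made for the example.

```python
# Hypothetical sketch: score each company on its detailed evaluation
# indicators, assign one of five competency grades, and cross-tabulate
# the grades against cluster labels to spot high/low-competency clusters.

def composite_score(indicators):
    # equal weighting is an assumption; the paper does not publish weights
    return sum(indicators) / len(indicators)

def grade(score, cutoffs=(0.8, 0.6, 0.4, 0.2)):
    # five grades A..E by fixed cutoffs (illustrative values only)
    for g, c in zip("ABCD", cutoffs):
        if score >= c:
            return g
    return "E"

# toy data: company -> (normalized indicator values, cluster label)
companies = {
    "firm1": ([0.90, 0.80, 0.85], 0),
    "firm2": ([0.30, 0.20, 0.25], 1),
    "firm3": ([0.70, 0.75, 0.80], 0),
}

crosstab = {}
for name, (indicators, cluster) in companies.items():
    crosstab.setdefault(cluster, []).append(grade(composite_score(indicators)))

# clusters whose grade lists are dominated by A/B are "high competency"
print(crosstab)
```

Comparing the distribution of grades inside each cluster against the five-grade classification is what lets the paper call a cluster "high" or "low" in R&D competency.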

    Evaluation of usefulness of the Gated Cone-beam CT in Respiratory Gated SBRT (호흡동조 정위체부방사선치료에서 Gated Cone-beam CT의 유용성 평가)

    • Hong sung yun;Lee chung hwan;Park je wan;Song heung kwon;Yoon in ha
      • The Journal of Korean Society for Radiation Therapy
      • /
      • v.34
      • /
      • pp.61-72
      • /
      • 2022
    • Purpose: Conventional CBCT (cone-beam computed tomography) introduces errors in the target volume due to organ movement in regions affected by respiration. The purpose of this paper is to evaluate the accuracy and the time required when using the Gated CBCT function, which reduces such errors when performing RGRT (respiratory gated radiation therapy), and to examine the appropriateness of the gating phase. Materials and methods: To evaluate the usefulness of Gated CBCT, the QUASAR™ respiratory motion phantom was used on the TrueBeam STx™. Using lead marker inserts, Gated CBCT was scanned five times for each of the 20-80%, 30-70%, and 40-60% phases to measure the blurred length of the lead marker, and the distance the lead marker moves from the top phase to the end of the phase was measured five times. Using cedar solid tumor inserts, 4DCT was scanned for the full phase and the 20-80%, 30-70%, and 40-60% phases; the target volume was contoured and its length measured five times in the axial (S-I) direction. Result: In Gated CBCT scanned with the lead marker inserts, the average axial moving distance of the lead marker was 4.46 cm in the full phase, 3.11 cm in the 20-80% phase, 1.94 cm in the 30-70% phase, and 0.90 cm in the 40-60% phase. In fluoroscopy, the average axial moving distance of the lead marker was 4.38 cm, and the average distance from the top phase to the beam-off phase was 3.342 cm in the 20-80% phase, 3.342 cm in the 30-70% phase, and 0.84 cm in the 40-60% phase. Comparing the results, the difference was 0.08 cm in the full phase, 0.23 cm in the 20-80% phase, 0.10 cm in the 30-70% phase, and 0.07 cm in the 40-60% phase. The axial lengths of the ITV (Internal Target Volume) and PTV (Planning Target Volume) contoured on the 4DCT taken with the cedar solid tumor inserts were 6.40 cm and 7.40 cm in the full phase, 4.96 cm and 5.96 cm in the 20-80% phase, 4.42 cm and 5.42 cm in the 30-70% phase, and 2.95 cm and 3.95 cm in the 40-60% phase. 
In Gated CBCT, the average axial lengths were 6.35 cm in the full phase, 5.25 cm in the 20-80% phase, 4.04 cm in the 30-70% phase, and 3.08 cm in the 40-60% phase. Comparing the results confirmed that the error was within ±8.5% of the ITV. Conclusion: Conventional CBCT suffered from errors caused by organ movement in regions affected by respiration, but this study obtained images similar to the target volume of the set phase using Gated CBCT and verified its usefulness. However, as the gating phase narrows, the scan time increases. Therefore, considering the scan time and the error at each phase setting, applying a wide phase of 30-70% or more is recommended for patients undergoing respiratory gated stereotactic body radiation therapy.
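The phase-by-phase comparison above reduces to a simple percent-difference calculation. The sketch below reproduces it from the reported axial lengths; taking the 4DCT ITV length as the denominator is an assumption about how the error was defined in the abstract.

```python
# Percent difference of the Gated CBCT axial target length against the
# 4DCT ITV length, per gating phase window (values from the abstract).

itv_len  = {"full": 6.40, "20-80%": 4.96, "30-70%": 4.42, "40-60%": 2.95}
cbct_len = {"full": 6.35, "20-80%": 5.25, "30-70%": 4.04, "40-60%": 3.08}

errors = {p: (cbct_len[p] - itv_len[p]) / itv_len[p] * 100 for p in itv_len}
for phase, e in errors.items():
    print(f"{phase:>7}: {e:+.2f}%")
```

Under this definition the full-phase difference is under 1%, while the narrower windows diverge more, which is consistent with the abstract's ±8.5% bound.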

    Evaluation of the Usefulness of MapPHAN for the Verification of Volumetric Modulated Arc Therapy Planning (용적세기조절회전치료 치료계획 확인에 사용되는 MapPHAN의 유용성 평가)

    • Woo, Heon;Park, Jang Pil;Min, Jae Soon;Lee, Jae Hee;Yoo, Suk Hyun
      • The Journal of Korean Society for Radiation Therapy
      • /
      • v.25 no.2
      • /
      • pp.115-121
      • /
      • 2013
    • Purpose: With the introduction of the latest linear accelerator and new measurement equipment at our institution, this study analyzes the process of verifying their usefulness and the problems arising while preparing to apply them clinically, so as to help institutions introducing this equipment in the future. Materials and Methods: All measurements used the TrueBeam STx (Varian, USA), and dose distributions were calculated for each energy and irradiation condition with a computerized treatment planning system (Eclipse ver. 10.0.39, Varian, USA). The performance of MapCHECK 2 and the causes of its measurement errors were analyzed against measurement. To verify the performance of MapCHECK 2, each energy (6X, 6X-FFF, 10X, 10X-FFF, 15X) was measured at a field size of $10{\times}10$ cm in the gantry $0^{\circ}$ and $180^{\circ}$ directions. To confirm the effect of the IGRT couch CT values on the measurements, CT numbers of -800 (carbon) and -950 (couch interior air), and of -100 and -950, were assigned; 6X-FFF and 15X were measured at field size $10{\times}10$ in the gantry $180^{\circ}$, $135^{\circ}$, and $275^{\circ}$ directions, and the HU values allocated to MapPHAN in the treatment planning computer were compared. To examine measurement errors caused by the sharp edges of MapPHAN, the gantry-direction dependence of MapPHAN was measured in three ways: with the detector set up vertically at gantry $90^{\circ}$ and $270^{\circ}$ for 6X-FFF and 15X; with the detector set up horizontally at field size $10{\times}10$ in the gantry $90^{\circ}$, $45^{\circ}$, $315^{\circ}$, and $270^{\circ}$ directions for 6X-FFF and 15X; and third, an open arc without intensity modulation was investigated. 
Results: The basic performance of MapCHECK 2, the attenuation by the couch, the measured HU values assigned to MapPHAN, and the calculation accuracy for the angled edges of MapPHAN were all confirmed to lie within valid ranges that do not affect measurement error. Of the three gantry-direction dependence tests, the first, with the detector vertical, gave differences at gantry $270^{\circ}$ (relative $0^{\circ}$) and $90^{\circ}$ (relative $180^{\circ}$) of -1.51% and 0.83% for 6X-FFF and -0.63% and -0.22% for 15X, showing no dependence in the AP/PA direction. With the detector set horizontally, gantry $90^{\circ}$ and $270^{\circ}$ gave differences of 4.37% and 2.84% for 6X-FFF and -9.63% and -13.32% for 15X. Lateral-direction measurements with MapPHAN were therefore not within the valid range, as confirmed by gamma pass-rate values exceeding the 3% criterion. For the open arc at 6X-FFF and 15X with a $10{\times}10$ cm field and $360^{\circ}$ rotation, the dose distribution showed a pass rate of nearly 90%. Conclusion: Based on the above results, the gantry-direction dependence of MapPHAN makes it suitable for gamma-value measurement of relative dose distributions of lateral beams, but it cannot be considered accurate for absolute dose measurement. To verify treatment plans more accurately and reduce tolerances for techniques such as VMAT with lateral rotational beams, measuring accurate absolute isodose using MapCHECK 2 in combination with the IMF (Isocentric Mounting Fixture) can minimize the impact of this angular dependence.
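The gamma pass rates quoted above come from the standard gamma-index comparison of measured and planned dose. The sketch below is a generic 1-D global gamma implementation, not the algorithm of the Sun Nuclear software used in the paper; the dose values, 3% dose-difference and 3 mm distance-to-agreement criteria are illustrative.

```python
import math

def gamma_1d(ref, meas, positions, dd=0.03, dta=3.0):
    # 1-D global gamma: for each reference point, minimise over measured
    # points the combined dose-difference / distance-to-agreement metric.
    # dd is the dose criterion (fraction of global max), dta is in mm.
    dmax = max(ref)
    gammas = []
    for x_r, d_r in zip(positions, ref):
        g = min(
            math.sqrt(((d_m - d_r) / (dd * dmax)) ** 2
                      + ((x_m - x_r) / dta) ** 2)
            for x_m, d_m in zip(positions, meas)
        )
        gammas.append(g)
    return gammas

# toy profiles on a 1 mm grid (positions in mm, dose in arbitrary units)
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
ref  = [1.00, 2.00, 3.00, 2.00, 1.00]
meas = [1.00, 2.05, 2.95, 2.00, 1.00]

gammas = gamma_1d(ref, meas, positions)
pass_rate = 100 * sum(g <= 1.0 for g in gammas) / len(gammas)
print(f"gamma pass rate: {pass_rate:.1f}%")
```

A point passes when its gamma value is at most 1; the pass rate is the fraction of passing points, which is the quantity the abstract reports as "nearly 90%" for the open arc.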


    Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

    • Kim, Sun Woong;Choi, Heung Sik
      • Journal of Intelligence and Information Systems
      • /
      • v.23 no.2
      • /
      • pp.107-122
      • /
      • 2017
    • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows notably lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility values cannot themselves be traded; our simulation results are nevertheless meaningful, since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now use. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The trading profitable percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, and those of SVR-based GARCH IVTS models from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined in search of better performance. We do not consider costs incurred in trading, including brokerage commissions and slippage. 
The IVTS trading performance is also unrealistic in that historical volatility values are used as the traded object. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
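The mechanics described above can be sketched in two pieces: the standard GARCH(1,1) conditional-variance recursion, $\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2$, and the IVTS entry rule. The parameter values and return series below are illustrative, not the paper's estimates, and the filtered variances stand in for the forecasts.

```python
# Sketch: GARCH(1,1) variance recursion plus the IVTS entry rule from the
# abstract. omega/alpha/beta and the returns are made-up example values.

def garch11_variance(returns, omega=0.00001, alpha=0.1, beta=0.85):
    # sigma^2_t = omega + alpha * r_{t-1}^2 + beta * sigma^2_{t-1},
    # initialised at the unconditional variance omega / (1 - alpha - beta)
    var = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        var.append(omega + alpha * r * r + beta * var[-1])
    return var

def ivts_signal(vol_path, prev_position=0):
    # IVTS rule: buy volatility if it is forecast to rise, sell if it is
    # forecast to fall, otherwise hold the existing position
    today, tomorrow = vol_path[-2], vol_path[-1]
    if tomorrow > today:
        return 1       # long volatility
    if tomorrow < today:
        return -1      # short volatility
    return prev_position

returns = [0.01, -0.02, 0.015, -0.005, 0.03]
var_path = garch11_variance(returns)
vol_path = [v ** 0.5 for v in var_path]   # conditional volatility
print(ivts_signal(vol_path))
```

In the paper, the SVR variant replaces the MLE step that fits $\omega$, $\alpha$, and $\beta$; the trading rule itself is unchanged between the two estimation methods.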


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.