• Title/Summary/Keyword: Data accuracy

Search Result 11,657

Evaluation of the Positional Uncertainty of a Liver Tumor using 4-Dimensional Computed Tomography and Gated Orthogonal Kilovolt Setup Images (사차원전산화단층촬영과 호흡연동 직각 Kilovolt 준비 영상을 이용한 간 종양의 움직임 분석)

  • Ju, Sang-Gyu; Hong, Chae-Seon; Park, Hee-Chul; Ahn, Jong-Ho; Shin, Eun-Hyuk; Shin, Jung-Suk; Kim, Jin-Sung; Han, Young-Yih; Lim, Do-Hoon; Choi, Doo-Ho
    • Radiation Oncology Journal / v.28 no.3 / pp.155-165 / 2010
  • Purpose: In order to evaluate the positional uncertainty of internal organs during radiation therapy for liver cancer, we measured inter- and intra-fractional variation of the tumor position and tidal amplitude using 4-dimensional computed tomography (4DCT) images and gated orthogonal kilovolt (kV) setup images taken at every treatment session with the on-board imaging (OBI) and real-time position management (RPM) systems. Materials and Methods: Twenty consecutive patients who underwent 3-dimensional (3D) conformal radiation therapy for liver cancer participated in this study. All patients received a 4DCT simulation with an RT16 scanner and an RPM system. Lipiodol retained near the target volume after transarterial chemoembolization, or the diaphragm, was chosen as a surrogate for evaluating the positional difference of internal organs. Two reference orthogonal (anterior and lateral) digitally reconstructed radiograph (DRR) images were generated from the CT image sets at the 0% and 50% respiratory phases. The maximum tidal amplitude of the surrogate was measured from the 3D conformal treatment plan. After setting the patient up with laser markings on the skin, orthogonal gated setup images at the 50% respiratory phase were acquired at each treatment session with OBI and registered to the reference DRR images by matching each beam center. Online inter-fractional variation was determined with the surrogate. After adjusting the patient setup error, orthogonal setup images at the 0% and 50% respiratory phases were obtained and the tidal amplitude of the surrogate was measured. The measured tidal amplitude was compared with the data from 4DCT. For the evaluation of intra-fractional variation, an orthogonal gated setup image at the 50% respiratory phase was acquired immediately after treatment and compared with the same image taken just before treatment. In addition, a statistical analysis was performed for the quantitative evaluation. Results: Medians of inter-fractional variation for the twenty patients were 0.00 cm (range, -0.50 to 0.90 cm), 0.00 cm (range, -2.40 to 1.60 cm), and 0.00 cm (range, -1.10 to 0.50 cm) in the X (transaxial), Y (superior-inferior), and Z (anterior-posterior) directions, respectively. Significant inter-fractional variations over 0.5 cm were observed in four patients. In addition, the median tidal amplitude differences between 4DCT and the gated orthogonal setup images were -0.05 cm (range, -0.83 to 0.60 cm), -0.15 cm (range, -2.58 to 1.18 cm), and -0.02 cm (range, -1.37 to 0.59 cm) in the X, Y, and Z directions, respectively. Large differences of over 1 cm were detected in 3 patients in the Y direction, while differences of more than 0.5 cm but less than 1 cm were observed in 5 patients in the Y and Z directions. Median intra-fractional variation was 0.00 cm (range, -0.30 to 0.40 cm), -0.03 cm (range, -1.14 to 0.50 cm), and 0.05 cm (range, -0.30 to 0.50 cm) in the X, Y, and Z directions, respectively. Significant intra-fractional variation of over 1 cm was observed in 2 patients in the Y direction. Conclusion: Gated setup images provided clear image quality for the detection of organ motion without motion artifacts. Significant intra- and inter-fractional variation and tidal amplitude differences between 4DCT and gated setup images were detected in some patients during the radiation treatment period and should therefore be considered when setting the target margin. Monitoring of positional uncertainty and an adaptive feedback system can enhance treatment accuracy.
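The inter- and intra-fractional variations above are reported as medians, ranges, and counts of shifts exceeding clinically relevant thresholds. The short sketch below is a hypothetical illustration (not the authors' analysis code) of how such summary statistics for a single axis could be computed from per-fraction surrogate shifts; the example values are made up.

```python
import numpy as np

def summarize_shifts(shifts_cm, threshold_cm=0.5):
    """Median, range, and count of |shift| > threshold for one axis (cm)."""
    shifts = np.asarray(shifts_cm, dtype=float)
    return {
        "median_cm": float(np.median(shifts)),
        "range_cm": (float(shifts.min()), float(shifts.max())),
        "n_over_threshold": int(np.count_nonzero(np.abs(shifts) > threshold_cm)),
    }

# Made-up superior-inferior (Y) shifts for one patient's treatment course
print(summarize_shifts([0.0, -0.3, 0.7, -1.2, 0.1, 0.4]))
```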

Computed Tomography-guided Localization with a Hook-wire Followed by Video-assisted Thoracic Surgery for Small Intrapulmonary and Ground Glass Opacity Lesions (폐실질 내에 위치한 소결질 및 간유리 병변에서 흉부컴퓨터단층촬영 유도하에 Hook Wire를 이용한 위치 선정 후 시행한 흉강경 폐절제술의 유용성)

  • Kang, Pil-Je; Kim, Yong-Hee; Park, Seung-Il; Kim, Dong-Kwan; Song, Jae-Woo; Do, Kyoung-Hyun
    • Journal of Chest Surgery / v.42 no.5 / pp.624-629 / 2009
  • Background: Making the histologic diagnosis of small pulmonary nodules and ground glass opacity (GGO) lesions is difficult. CT-guided percutaneous needle biopsies often fail to provide enough specimen for diagnosis, and video-assisted thoracoscopic surgery (VATS) can be inefficient for non-palpable lesions. Preoperative localization of small intrapulmonary lesions provides a more obvious target and facilitates intraoperative resection. We evaluated the efficacy of CT-guided localization with a hook wire, followed by VATS, for the histologic diagnosis of small intrapulmonary nodules and GGO lesions. Material and Method: Eighteen patients (13 males) were included in this study from August 2005 to March 2008. Eighteen intrapulmonary lesions underwent preoperative localization with a CT-guided hook-wire system prior to VATS resection of intrapulmonary and GGO lesions. Clinical data such as the accuracy of localization, the conversion-to-thoracotomy rate, the operation time, postoperative complications, and the histology of the pulmonary lesions were collected retrospectively. Result: Eighteen VATS resections were performed in 18 patients. Preoperative CT-guided localization with a hook-wire was successful in all patients. Dislodgement of a hook wire was observed in one case. There was no conversion to thoracotomy. The median diameter of the lesions was 8 mm (range: 3~15 mm). The median depth of the lesions from the pleural surface was 5.5 mm (range: 1~30 mm). The median interval between preoperative CT-guided localization with a hook-wire and VATS was 34.5 min (range: 10~226 min). The median operative time was 43.5 min (range: 26~83 min). In two patients, a clinically insignificant pneumothorax developed after CT-guided localization with a hook-wire; there were no other complications. Histological examination confirmed 8 primary lung cancers, 3 metastases, 3 cases of inflammation, 2 intrapulmonary lymph nodes, and 2 other benign lesions. Conclusion: CT-guided localization with a hook-wire followed by VATS for small intrapulmonary nodules and GGO lesions provided a low conversion-to-thoracotomy rate, a short operation time, and few localization-related or postoperative complications. This procedure was efficient for confirming intrapulmonary nodules and GGO lesions.

Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요빙계의 소유역에 대한 순간단위권 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers / v.19 no.1 / pp.4296-4311 / 1977
  • This study was conducted to derive an Instantaneous Unit Hydrograph (IUH) that yields an accurate and reliable unitgraph for the estimation and control of floods, for the development of agricultural water resources, and for the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan, and Inchon River systems, which may be regarded as the main river systems in Korea. The areas of the small watersheds are within the range of 85 to 470 km². The aim was to derive an accurate IUH under conditions of short-duration heavy rain with uniform rainfall intensity, using basic and reliable data consisting of rainfall records, pluviographs, and river-stage records for the river systems mentioned above. The relations between the measurable unitgraph and watershed characteristics such as watershed area, A, river length, L, and centroid distance of the watershed area, Lca, were investigated. In particular, this study emphasized the derivation and application of the IUH by applying Nash's conceptual model and by using an electronic computer. The IUH from Nash's conceptual model and the IUH from flood routing, both of which can be applied to ungaged small watersheds, were derived and compared with the observed unitgraph; the IUH for each small watershed can be solved with an electronic computer. The results of these studies are summarized as follows:
    1. A distribution of uniform rainfall intensity appears in the analysis of the temporal rainfall pattern of the selected heavy rainfall events.
    2. The mean value of the recession constant, $K_1$, is 0.931 over all watersheds observed.
    3. The time to peak discharge, $T_p$, occurs at the position of 0.02 $T_b$, the base length of the hydrograph, which is lower than in larger watersheds.
    4. The peak discharge, $Q_p$, in relation to the watershed area, A, and the effective rainfall, R, is found to be $Q_p = \frac{0.895}{A^{0.145}}AR$, with a highly significant correlation coefficient, 0.927, between peak discharge and effective rainfall. A design chart for the peak discharge as a function of watershed area and effective rainfall (refer to Fig. 15) was established by the author.
    5. The mean slopes of the main streams are within the range of 1.46 to 13.6 meters per kilometer, indicating steeper slopes in the small watersheds than in larger watersheds. The lengths of the main streams are within the range of 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that the time of flood concentration was more rapid in the small watersheds than in the larger ones.
    6. The length of the main stream, L, in relation to the watershed area, A, is found to be $L = 2.044A^{0.48}$, with a highly significant correlation coefficient, 0.968.
    7. The watershed lag, $L_g$, in hours, in relation to the watershed area, A, and the length of the main stream, L, was derived as $L_g = 3.228A^{0.904}L^{-1.293}$ with high significance. It was also found that the watershed lag could be expressed as $L_g = 0.247\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.604}$ in terms of $LL_{ca}$, the product of the main-stream length and the centroid distance of the basin, which, together with the slope, can serve as a measure of the shape and size of the watershed apart from the area, A. The latter, however, showed a lower correlation than the former in the significance test. Therefore, it can be concluded that in the small watersheds the watershed lag is more closely related to watershed characteristics such as watershed area and main-stream length. An empirical formula for the peak discharge per unit area, $q_p$ (m³/sec/km²), was derived as $q_p = 10^{-0.389-0.0424L_g}$ with high significance, r = 0.91, indicating that the peak discharge per unit area of the unitgraph is inversely related to the watershed lag time.
    8. The base length of the unitgraph, $T_b$, in connection with the watershed lag, $L_g$, was expressed as $T_b = 1.14 + 0.564\left(\frac{L_g}{24}\right)$, defined with high significance.
    9. For the derivation of the IUH by the linear conceptual model, the storage constant, K, in terms of the main-stream length, L, and the slope, S, was adopted as $K = 0.1197\left(\frac{L}{\sqrt{S}}\right)$ with a highly significant correlation coefficient, 0.90. The gamma function argument, N, derived from watershed characteristics such as watershed area, A, river length, L, centroid distance of the basin, Lca, and slope, S, was found to be $N = 49.2A^{1.481}L^{-2.202}L_{ca}^{-1.297}S^{-0.112}$ with high significance, having an F value of 4.83 in the analysis of variance.
    10. According to the linear conceptual model, the formulas for the time distribution, peak discharge, and time to peak discharge of the IUH, when the unit effective rainfall of the unitgraph and the watershed area are expressed in 10 mm and km² respectively, are as follows (evaluated numerically in the sketch after this abstract):
       Time distribution of the IUH: $u(0,t) = \frac{2.78A}{K\Gamma(N)}e^{-t/K}\left(\frac{t}{K}\right)^{N-1}$ (m³/sec)
       Peak discharge of the IUH: $u(0,t)_{max} = \frac{2.78A}{K\Gamma(N)}e^{-(N-1)}(N-1)^{N-1}$ (m³/sec)
       Time to peak discharge of the IUH: $t_p = (N-1)K$ (hrs)
    11. Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the gamma function argument, N, is related to the recession constant, $K_1$, the peak discharge, $Q_p$, and the time to peak discharge, $t_p$, by $\frac{K'}{t_p} = \frac{1}{N-1} - \frac{\ln(t/t_p)}{\ln(Q/Q_p)}$, where $K' = \frac{1}{\ln K_1}$.
    12. Linking the empirical formulas for the storage constant, K, and the gamma function argument, N, the unit hydrograph for ungaged small watersheds can be derived from the following formulas for the time distribution and peak discharge of the IUH:
       Time distribution of the IUH: $u(0,t) = 23.2AL^{-1}S^{1/2}F(N,K,t)$ (m³/sec), where $F(N,K,t) = \frac{e^{-t/K}(t/K)^{N-1}}{\Gamma(N)}$
       Peak discharge of the IUH: $u(0,t)_{max} = 23.2AL^{-1}S^{1/2}F(N)$ (m³/sec), where $F(N) = \frac{e^{-(N-1)}(N-1)^{N-1}}{\Gamma(N)}$
    13. The base length of the time-area diagram for the IUH was given by $C = 0.778\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.423}$ with a correlation coefficient of 0.85, indicating its relation to the main-stream length, L, the centroid distance of the basin, Lca, and the slope, S.
    14. The relative errors in the peak discharge of the IUH from the linear conceptual model and the IUH from routing were 2.5 and 16.9 percent, respectively, relative to the peak of the observed unitgraph. It was therefore confirmed that, in the small watersheds, the IUH obtained with the linear conceptual model approximates the observed unitgraph more closely than that obtained by flood routing.
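Item 10 above gives the closed-form Nash-model IUH used in the study. The sketch below simply evaluates those formulas numerically; the parameter values are illustrative assumptions, not data from the paper, with A in km² and K in hours, and with the 10 mm unit effective rainfall taken (as I read the abstract) to be folded into the 2.78 conversion constant.

```python
import math

def nash_iuh(t_hr, area_km2, K_hr, N):
    """IUH ordinate u(0, t) in m^3/sec for the Nash conceptual model (item 10)."""
    if t_hr <= 0.0:
        return 0.0
    return (2.78 * area_km2 / (K_hr * math.gamma(N))) \
        * math.exp(-t_hr / K_hr) * (t_hr / K_hr) ** (N - 1)

def nash_iuh_peak(area_km2, K_hr, N):
    """Peak ordinate u(0, t)_max, reached at t_p = (N - 1) * K."""
    return (2.78 * area_km2 / (K_hr * math.gamma(N))) \
        * math.exp(-(N - 1)) * (N - 1) ** (N - 1)

# Illustrative parameters only (not observed values from the study)
A, K, N = 200.0, 3.0, 2.5
t_p = (N - 1) * K
print(f"t_p = {t_p:.1f} hr, u(0, t_p) = {nash_iuh(t_p, A, K, N):.1f} m^3/s, "
      f"peak formula = {nash_iuh_peak(A, K, N):.1f} m^3/s")
```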


Inhomogeneity correction in on-line dosimetry using transmission dose (투과선량을 이용한 온라인 선량측정에서 불균질조직에 대한 선량 보정)

  • Wu, Hong-Gyun; Huh, Soon-Nyung; Lee, Hyoung-Koo; Ha, Sung-Whan
    • Journal of Radiation Protection and Research / v.23 no.3 / pp.139-147 / 1998
  • Purpose: Tissue inhomogeneity such as lung affects the tumor dose as well as the transmission dose in the new concept of on-line dosimetry, which estimates the tumor dose from the transmission dose using a new algorithm. This study was carried out to confirm the accuracy of the tissue-density correction in tumor dose estimation utilizing transmission dose. Methods: A cork phantom (CP, density 0.202 g/cm³), with a density similar to lung parenchyma, and a polystyrene phantom (PP, density 1.040 g/cm³), with a density similar to soft tissue, were used. Dose measurement was carried out under conditions simulating the human chest. To simulate AP-PA irradiation, 3 cm thick PPs were placed above and below the CP, which had thicknesses of 5, 10, and 20 cm. To simulate lateral irradiation, 6 cm of PP was placed between two 10 cm thick CPs, and an additional 3 cm of PP was placed on both lateral sides. 4, 6, and 10 MV x-rays were used. The field size ranged from 3×3 cm to 20×20 cm, and the phantom-chamber distance (PCD) was 10 to 50 cm. The above results were compared with another set of data for the equivalent thickness of PP corrected by density. Results: When the transmission dose of PP was compared with that of the density-corrected equivalent thickness of CP, the average error was 0.18 (±0.27)% for 4 MV, 0.10 (±0.43)% for 6 MV, and 0.33 (±0.30)% for 10 MV with the 5 cm thick CP. When the CP was 10 cm thick, the errors were 0.23 (±0.73)%, 0.05 (±0.57)%, and 0.04 (±0.40)%, while for 20 cm, the errors were 0.55 (±0.36)%, 0.34 (±0.27)%, and 0.34 (±0.18)% for the corresponding energies. With the lateral irradiation model, the differences were 1.15 (±1.86)%, 0.90 (±1.43)%, and 0.86 (±1.01)% for the corresponding energies. Relatively large differences were found when the PCD was 10 cm; omitting the 10 cm PCD, the differences were reduced to 0.47 (±1.17)%, 0.42 (±0.96)%, and 0.55 (±0.77)% for the corresponding energies. Conclusion: When tissue inhomogeneity such as lung lies in the path of the x-ray beam, the tumor dose can be calculated from the transmission dose after a correction utilizing tissue density.
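The density correction compared here amounts to replacing a cork slab by the thickness of polystyrene that has the same areal density. The helper below is a hypothetical illustration of that conversion; the function name and the example values are mine, not the paper's.

```python
def equivalent_thickness(thickness_cm, density_g_cm3, reference_density_g_cm3):
    """Thickness of the reference material having the same areal density (g/cm^2)."""
    return thickness_cm * density_g_cm3 / reference_density_g_cm3

# 10 cm of cork (0.202 g/cm^3) expressed as polystyrene (1.040 g/cm^3)
print(f"{equivalent_thickness(10.0, 0.202, 1.040):.2f} cm of polystyrene")  # about 1.94 cm
```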


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok; Kim, Sunwoong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in the chart rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this approach can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate points that match the pattern and measure performance n days later, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Existing research tries to find patterns with price-prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave pattern published by Merrill (1980) is simple because it can be distinguished by five turning points. Although some patterns were reported to have price predictability, no performance reports from the actual market were available. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only one pattern with a high success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because both the buy and the sell are assumed to be executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, the high price that meets the n-day high price line is taken as the peak price, and the low price that meets the n-day low price line is taken as the valley price. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley (a minimal sketch of this rule appears after this abstract). The swing wave method was superior to the other methods in the tests, which we interpret as showing that trading after confirming the completion of a pattern is more effective than trading while the pattern is still incomplete. Because the number of cases in this simulation was too large to search for high-success patterns exhaustively, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which evaluates the test section and the application section separately, so that the system could respond appropriately to market changes. In this study, we optimize at the portfolio level because optimizing the variables for each individual stock risks over-optimization. We therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but higher volatility is not necessarily better.
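As a concrete reading of the swing-wave rule referenced above, the sketch below marks a bar as a peak when its price is higher than the n bars on each side, and as a valley when it is lower than the n bars on each side. It is a minimal illustration using closing prices and made-up data, not the authors' implementation (which works on high and low prices).

```python
def swing_turning_points(prices, n=3):
    """Return (index, price, kind) turning points under the swing-wave rule."""
    points = []
    for i in range(n, len(prices) - n):
        left, right = prices[i - n:i], prices[i + 1:i + n + 1]
        if prices[i] > max(left) and prices[i] > max(right):
            points.append((i, prices[i], "peak"))
        elif prices[i] < min(left) and prices[i] < min(right):
            points.append((i, prices[i], "valley"))
    return points

closes = [10, 11, 13, 12, 11, 10, 9, 10, 12, 14, 13, 12]
print(swing_turning_points(closes, n=2))
# [(2, 13, 'peak'), (6, 9, 'valley'), (9, 14, 'peak')]
```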

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position (a minimal sketch of this rule follows the abstract). IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility itself cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can use. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined for better performance. We do not consider trading costs such as brokerage commissions and slippage. The IVTS trading performance is not fully realistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
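The IVTS entry rule quoted in the abstract reduces to a simple directional signal on the forecast. The sketch below is a minimal, assumed formalization (the function name and inputs are mine, not the authors'); it compares tomorrow's forecast volatility with today's and otherwise holds the current position.

```python
def ivts_signal(vol_today, vol_forecast_tomorrow, current_position=0):
    """+1 = long volatility, -1 = short volatility, per the stated entry rule."""
    if vol_forecast_tomorrow > vol_today:
        return +1            # volatility forecast to rise -> buy volatility today
    if vol_forecast_tomorrow < vol_today:
        return -1            # volatility forecast to fall -> sell volatility today
    return current_position  # no change in forecast direction -> hold position

print(ivts_signal(0.18, 0.21))        # +1 (buy volatility)
print(ivts_signal(0.18, 0.15, +1))    # -1 (sell volatility)
```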

A Study on the Effect of Booth Recommendation System on Exhibition Visitors Unplanned Visit Behavior (전시장 참관객의 계획되지 않은 방문행동에 있어서 부스추천시스템의 영향에 대한 연구)

  • Chung, Nam-Ho; Kim, Jae-Kyung
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.175-191 / 2011
  • With the MICE (Meeting, Incentive travel, Convention, Exhibition) industry coming into the spotlight, there has been growing interest in the domestic exhibition industry. Accordingly, various studies are being conducted in Korea to enhance exhibition performance, as in the United States or Europe. Some studies focus particularly on analyzing visitors' visiting patterns using intelligent information technology, taking into account how the exhibition environment and techniques affect the viewing experience, in order to understand visitors, draw correlations between exhibiting businesses, and improve exhibition performance. However, previous studies of booth recommendation systems discussed only recommendation accuracy from the system's perspective rather than the changes in visitors' behavior or perception produced by the recommendations. A booth recommendation system enables visitors to visit unplanned exhibition booths by recommending suitable booths based on information about their visits. Some visitors may be satisfied with their unplanned visits, while others may find the recommendation process cumbersome or obstructive to free observation. In the latter case, the exhibition is likely to produce worse results than when visitors are allowed to observe the exhibition freely. Thus, in order to apply a booth recommendation system to exhibition halls, the factors affecting the performance of the system should be examined broadly, and the effects of the system on visitors' unplanned visiting behavior should be studied carefully. This study therefore aims to determine the factors that affect the performance of a booth recommendation system by reviewing theories and literature, and to examine the effects of visitors' perceived performance of the system on their satisfaction with unplanned behavior and their intention to reuse the system. Toward this end, the unplanned behavior theory was adopted as the theoretical framework. Unplanned behavior can be defined as "behavior that is done by consumers without any prearranged plan." Consumers' unplanned behavior has been studied in various fields. The field of marketing, in particular, has focused on unplanned purchasing, which is often confused with impulsive purchasing. Nevertheless, the two are different: impulsive purchasing involves strong, continuous urges to purchase, whereas unplanned purchasing involves purchasing decisions made inside a store rather than before entering it. In other words, all impulsive purchases are unplanned, but not all unplanned purchases are impulsive. Then why do consumers engage in unplanned behavior? Scholars have offered many suggestions, but there is a consensus that it is because consumers have enough flexibility to change their plans along the way instead of developing them thoroughly in advance. In other words, if unplanned behavior costs much, it is difficult for consumers to change their prearranged plans. In the case of the exhibition hall examined in this study, visitors learn the hall's program and plan which booths to visit in advance, because it is practically impossible for visitors to visit all of the various booths an exhibition operates in their limited time. Therefore, if the booth recommendation system proposed in this study recommends booths that visitors may like, they can change their plans and visit the recommended booths. Such visiting behavior can be regarded as similar to consumers' visits to a store or tourists' unplanned behavior at a tourist spot, and can be understood in the same context as the recent increase in tourism consumers' unplanned behavior influenced by information devices. Thus, the following research model was established. The model uses visitors' perceived performance of the booth recommendation system as the parameter, and the factors affecting that performance include trust in the system, exhibition visitors' knowledge levels, expected personalization of the system, and the system's threat to freedom. In addition, the causal relations between perceived performance of the system and satisfaction with unplanned behavior and intention to reuse the system were examined. Trust in the booth recommendation system was modeled as a second-order factor composed of competence, benevolence, and integrity, while the other constructs were first-order factors. To verify this model, a booth recommendation system was developed and tested at the 2011 DMC Culture Open, and data from 101 visitors were empirically analyzed. The results are as follows. First, visitors' trust was the most important factor in the booth recommendation system; visitors who used the system perceived its performance as a success based on their trust. Second, visitors' knowledge levels also had significant effects on the performance of the system, indicating that the performance of a recommendation system requires advance understanding: visitors with a higher understanding of the exhibition hall better appreciated the usefulness of the booth recommendation system. Third, expected personalization did not have significant effects, a result that differs from previous studies, presumably because the booth recommendation system used in this study did not provide sufficiently personalized services. Fourth, the recommendation information provided by the system was not considered to threaten or restrict one's freedom, which means it was perceived as useful. Lastly, high performance of the booth recommendation system led to high visitor satisfaction with unplanned behavior and high intention to reuse the system. In summary, to analyze the effects of a booth recommendation system on visitors' unplanned booth visits, empirical data were examined based on the unplanned behavior theory, and useful suggestions were made for the establishment and design of future booth recommendation systems. Future work should involve more elaborate survey questions and subjects.