• Title/Summary/Keyword: Position Prediction


The Foreign Asset Leverage Effect of Oil & Gas Companies after the Financial Crisis (금융위기 이후 정유산업의 외화자산 레버리지효과 분석)

  • Dong-Gyun Kim
    • Korea Trade Review
    • /
    • v.46 no.2
    • /
    • pp.19-38
    • /
    • 2021
  • This study analyzes the leverage effect of foreign assets on Korean oil & gas companies' foreign-currency profits, with the aim of determining the appropriate volume of foreign assets for reducing exchange risk. For a long time, large Korean companies, including oil companies, have over-held foreign-currency liabilities. As a result, most large companies have been burdened with hedging exchange risk, and these excess holdings have deteriorated total profit and reduced the efficiency of foreign-currency asset management. The paper presents a three-stage analysis of diversified exchange-risk factors, estimated from changes in foreign-transaction accounts, including annual trends in foreign assets and industry specifics, and supplements the limitations of the estimation method with a practical hedging case study. The analysis distinguishes four periods according to their period-specific characteristics. The FER value of the oil firms ranged from -0.3 to +2.3 over the entire period. The FER values are volatile and irregular, so they do not constitute an industry-standard comparative index. Korean oil firms exceed their credit limits without accurate prediction and finance funds at high interest rates from foreign-owned banks on the basis of a biased relationship. Since the IMF crisis, the liabilities of global firms have decreased. Above all, oil firms need to finance only the minimum limit, avoiding opportunity losses, based on demand forecasts, and to prepare for market uncertainty. To reduce the exchange risk arising from over-the-limit positions, the factors that affect corporate exchange risk must be considered across the entire business process, including the contract phase.
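The abstract does not spell out how the FER value is computed; in the foreign-exchange exposure literature, a coefficient in this range (-0.3 to +2.3) is typically the slope from regressing firm returns on exchange-rate changes. A minimal sketch of that standard exposure regression on invented data (every name and number below is an assumption for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical exposure regression: firm return on FX change and market return.
rng = np.random.default_rng(0)
fx_change = rng.normal(0.0, 0.01, 250)     # invented daily KRW/USD changes
market_ret = rng.normal(0.0, 0.012, 250)   # invented market index returns
true_beta = 1.4                            # assumed FX exposure for the demo
firm_ret = (0.0002 + true_beta * fx_change + 0.8 * market_ret
            + rng.normal(0.0, 0.005, 250))

# OLS: firm_ret = a + b_fx * fx_change + b_mkt * market_ret + e
X = np.column_stack([np.ones_like(fx_change), fx_change, market_ret])
coef, *_ = np.linalg.lstsq(X, firm_ret, rcond=None)
a, b_fx, b_mkt = coef
print(f"estimated FX exposure coefficient: {b_fx:.2f}")
```

A firm-by-firm series of such coefficients, estimated per period, would produce exactly the kind of volatile, period-dependent values the study reports.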

Computer Vision-Based Measurement Method for Wire Harness Defect Classification

  • Yun Jung Hong;Geon Lee;Jiyoung Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.77-84
    • /
    • 2024
  • In this paper, we propose a method for accurately and rapidly detecting defects in wire harnesses by using computer vision to calculate six crucial measurements: the length of the crimped terminals, the dimensions (width) of the terminal ends, and the widths of the crimped sections (wire and core portions). We employ Harris corner detection to locate object positions in two types of data. We then generate reference points for extracting the measurements by exploiting features specific to each measurement area and the contrast in shading between background and object, thereby accounting for the slope of each sample. Finally, we introduce a method that uses the Euclidean distance and correction coefficients to predict the measurement values regardless of changes in the wire's position. We achieve high accuracy for each of the six measurements (99.1%, 98.7%, 92.6%, 92.5%, 99.9%, and 99.7%), for an outstanding overall average accuracy of 97%. This inspection method not only addresses the limitations of conventional visual inspection but also yields excellent results from a small amount of data. Moreover, because it relies solely on image processing, it is expected to be more cost-effective and to require less data than deep learning methods.
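The measurement step described above (reference points, Euclidean distance, correction coefficient) can be sketched as follows; the reference points, pixel scale, and correction value are hypothetical placeholders, since the abstract gives no concrete coefficients:

```python
import math

def euclidean(p, q):
    """Pixel-space Euclidean distance between two reference points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def measure_mm(p, q, mm_per_px, correction=1.0):
    """Convert a pixel distance between reference points to millimetres.

    `mm_per_px` and `correction` are assumed calibration values: the paper
    derives reference points from Harris corners and applies a correction
    coefficient so measurements stay valid as the wire shifts position,
    but the abstract does not give the actual coefficients.
    """
    return euclidean(p, q) * mm_per_px * correction

# Example: crimped-terminal length between two detected corner points.
length = measure_mm((120.0, 85.0), (164.0, 85.0), mm_per_px=0.05)
print(round(length, 2))  # 44 px * 0.05 mm/px = 2.2 mm
```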

Respiratory signal analysis of liver cancer patients with respiratory-gated radiation therapy (간암 호흡동조 방사선치료 환자의 호흡신호분석)

  • Kang, dong im;Jung, sang hoon;Kim, chul jong;Park, hee chul;Choi, byung ki
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.27 no.1
    • /
    • pp.23-30
    • /
    • 2015
  • Purpose: To evaluate the accuracy of respiratory-gated radiation therapy for liver cancer by analyzing the respiratory signals recorded by an external-marker motion measuring device (RPM; Real-time Position Management, Varian Medical Systems, USA) together with the actual beam-on times and respiratory phases. Materials and Methods: Respiratory motion recorded by the RPM was analyzed for 16 liver cancer patients who underwent respiratory-gated radiotherapy (duty cycle 20%, gating window at the 40%-60% phase) on a Novalis Tx (Varian Medical Systems, USA) from May to September 2014. The recorded external-marker motion was reconstructed into respiratory phases by phase analysis, and the beam-on time and duty cycle were recalculated from the reconstructed phases to evaluate the prediction accuracy of the gated treatment; the correlation between prediction accuracy, duty cycle, and the reproducibility of the respiratory motion was then analyzed. Results: For the 16 patients, the mean difference between the respiratory period at treatment planning and during actual treatment was -0.03 seconds (range -0.50 to 0.09 seconds), with no statistically significant difference (p = 0.472). The mean respiratory period during treatment was 4.02 s (±0.71 s), and the standard deviation of each patient's respiratory period averaged 7.43% of its mean (range 2.57 to 19.20%). The actual duty cycle averaged 16.05% (range 13.78 to 17.41%), and phase analysis showed that on average 56.05% (range 39.23 to 75.10%) of beam-on occurred within the planned respiratory phase (40% to 60%). The correlations of the standard deviation of the respiratory period with the duty cycle and with the ratio of beam-on within the planned phase were -0.156 (p = 0.282) and -0.385 (p = 0.070), respectively. Conclusion: By phase analysis of the external-marker motion recorded during actual treatment, the reproducibility of the respiratory motion, the actual duty cycle, and adherence to the planned gating window were confirmed. Minimizing treatment-planning error by using 4DCT and strengthening respiratory training and respiratory-signal monitoring are judged necessary for effective treatment.
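The gating quantities analyzed in this study reduce to simple ratios; a sketch with invented phase samples, using the study's planned gating window (the 40-60% phase) as the default:

```python
def duty_cycle(beam_on_s, total_s):
    """Duty cycle: fraction of total treatment time the beam is on."""
    return beam_on_s / total_s

def fraction_in_window(beam_on_phases, lo=40.0, hi=60.0):
    """Share (%) of beam-on samples whose respiratory phase falls inside
    the planned gating window (40-60% phase, as in the study)."""
    inside = sum(1 for ph in beam_on_phases if lo <= ph <= hi)
    return 100.0 * inside / len(beam_on_phases)

# Hypothetical beam-on phase samples (% of respiratory cycle) for one field:
phases = [38, 42, 45, 50, 55, 58, 61, 47, 52, 66]
print(round(fraction_in_window(phases), 1))  # 7 of 10 samples inside -> 70.0
print(round(100 * duty_cycle(beam_on_s=48.0, total_s=300.0), 1))  # 16.0
```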


VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is designed for the classification and regression analysis our study requires. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given data from any two groups, the SVM model judges to which group a new observation belongs based on the hyperplane obtained from the training set; thus, the more meaningful data available, the better the learning. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and much research has successfully forecast stock prices with machine learning algorithms. Recently, financial companies have begun to provide Robo-Advisor services (a compound of "robot" and "advisor") that perform various financial tasks through advanced algorithms operating on rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying the forecast to real option trading to improve trading performance. The VKOSPI is a measure of the future volatility of the KOSPI 200 index derived from KOSPI 200 index option prices; it is analogous to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the VKOSPI in real time. The VKOSPI behaves like ordinary volatility and affects option prices: the VKOSPI and option prices move in the same direction regardless of option type (call or put, at various strike prices), because when volatility increases, both call and put premiums increase as the probability of exercise rises. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises for a given rise in volatility. Accurate forecasting of VKOSPI movements is therefore one of the important factors for generating profit in option trading. In this study, we verified with real option data that accurate VKOSPI forecasts can produce large profits in actual option trading. To the best of our knowledge, no previous study has predicted the direction of the VKOSPI with machine learning and applied the prediction to actual option trading. We predicted daily VKOSPI changes with the SVM model and then took an intraday short option strangle position, which profits as option prices fall, only when the VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading on the SVM's predictions is applicable to real option trading.
The results showed that the prediction accuracy for the VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half the benchmark's (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark's.
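The trading rule described above (enter an intraday strangle only on a predicted VKOSPI decline) can be sketched as follows; all premium values are invented, and the short-strangle P&L is simplified to the change in total premium between entry and exit:

```python
def strangle_pnl(call_open, put_open, call_close, put_close):
    """P&L (in premium points) of a short strangle: sell an OTM call and
    an OTM put at the open, buy them back at the close. Falling volatility
    shrinks both premiums, so the seller profits."""
    received = call_open + put_open    # premium received at entry
    paid = call_close + put_close      # premium paid to close
    return received - paid

def trade_if_predicted_down(predicted_direction, *premiums):
    """Enter the short strangle only when the model predicts a VKOSPI
    decline; otherwise stay out (P&L = 0), mirroring the study's rule."""
    if predicted_direction != "down":
        return 0.0
    return strangle_pnl(*premiums)

# Hypothetical premiums: volatility fell intraday, both legs cheapened.
print(round(trade_if_predicted_down("down", 1.20, 1.05, 0.95, 0.85), 2))  # 0.45
print(trade_if_predicted_down("up", 1.20, 1.05, 0.95, 0.85))              # 0.0
```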

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Go-playing artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike in chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn wide interest. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. By contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques, studied so far mainly for high-dimensional data recognition, can also be used for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate the applicability of deep learning algorithms to binary classification, we compared models built with the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was restricted in the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, because it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but in business data the proximity of fields does not matter because the fields are usually independent; we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions from the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN performed well not only in the fields where its effectiveness is proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems, because its training time is too long relative to its performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
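The F1 score used above as the evaluation metric has a simple closed form; a minimal sketch on invented labels:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class.

    Preferred over accuracy when the class of interest is rare, as with
    the bank telemarketing response data in the study."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions on an imbalanced label set:
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]
print(round(f1_score(y_true, y_pred), 3))  # precision 2/3, recall 2/3 -> 0.667
```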

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratios. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, a recognized limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can thus provide stable default risk assessment to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology, with strict calculation standards, is required, given that a company's default risk information is used very widely in the market and sensitivity to differences in default risk is high.
The credit rating methods stipulated by the Financial Services Commission in the Financial Investment Business Regulations call for evaluation methods, including verification of their adequacy, that consider past statistical data, experience with credit ratings, and future changes in market conditions. This study reduced individual model bias by using stacking ensemble techniques that combine various machine learning models. This captures the complex nonlinear relationships between default risk and corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and each individual model's, pairs of forecasts were constructed. Because the Shapiro-Wilk test showed that no pair followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, since traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
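The Merton-model default risk used above as the learning target has a standard closed form: the firm defaults if its asset value falls below the face value of its debt at the horizon. A sketch with hypothetical firm values (in practice, as in the study, asset value and asset volatility are backed out from market capitalization and equity volatility):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_prob(V, F, sigma_v, r, T):
    """Merton-model default probability over horizon T.

    d2 = (ln(V/F) + (r - 0.5 * sigma_v**2) * T) / (sigma_v * sqrt(T))
    PD = N(-d2)

    V: asset value, F: face value of debt, sigma_v: asset volatility,
    r: risk-free rate. All inputs below are invented for illustration.
    """
    d2 = (math.log(V / F) + (r - 0.5 * sigma_v ** 2) * T) / (sigma_v * math.sqrt(T))
    return norm_cdf(-d2)

# Hypothetical firm: assets 150, debt 100, asset vol 25%, r = 2%, 1 year.
pd = merton_default_prob(V=150.0, F=100.0, sigma_v=0.25, r=0.02, T=1.0)
print(f"1-year default probability: {pd:.4f}")
```

A continuous probability like this, rather than a rare binary default flag, is what lets the study sidestep the class-imbalance problem.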

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that enhances the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded in our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the expectations of dealers and option traders about stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded in the world; its volume of more than 10 million contracts a day is the highest of all stock index option markets. Analyzing the VKOSPI is therefore important for understanding the volatility inherent in option prices and can offer trading ideas to futures and option dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other volatility measures, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by maximum likelihood. The asymmetric GARCH models are the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle. The symmetric GARCH model is the basic GARCH(1, 1).
Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed from the forecasted change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; if it is expected to fall, a short straddle or strangle position is taken. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model best fits the statistical metric Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting tomorrow's VKOSPI change direction. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models yield trading profits in the volatility trading system, and the exponential GARCH model shows the best in-sample performance, an annual profit of 197.56%. The GARCH models also yield trading profits out of sample, except for the exponential GARCH model; the power ARCH model shows the largest out-of-sample annual trading profit, 38%.
The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the proposed volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
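The recursive GARCH specifications above rest on a one-step variance recursion; a sketch with illustrative parameters (not estimates from the paper; the asymmetric variants such as GJR-GARCH add a further term for negative-return shocks):

```python
def garch11_forecast(omega, alpha, beta, last_ret, last_var):
    """One-step-ahead conditional variance from a GARCH(1,1) model:
        sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t
    (GJR-GARCH would add gamma * r_t**2 * I(r_t < 0) to capture the
    asymmetry the study documents.) Parameters here are illustrative."""
    return omega + alpha * last_ret ** 2 + beta * last_var

# Illustrative daily parameters and state:
omega, alpha, beta = 1e-6, 0.08, 0.90
sigma2 = garch11_forecast(omega, alpha, beta, last_ret=-0.02, last_var=2.5e-4)
vol_annual = (sigma2 * 252) ** 0.5   # annualize the variance forecast
print(round(100 * vol_annual, 2))    # forecast in percent, VKOSPI-style units
```

The forecasted rise or fall of this number relative to today's level is what the trading rule above maps to long or short straddle/strangle positions.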

On the Determination of Slope Stability to Landslide by Quantification(II) (수량화(數量化)(II)에 의한 산사태사면(山沙汰斜面)의 위험도(危險度) 판별(判別))

  • Kang, Wee Pyeong;Murai, Hiroshi;Omura, Hiroshi;Ma, Ho Seop
    • Journal of Korean Society of Forest Science
    • /
    • v.75 no.1
    • /
    • pp.32-37
    • /
    • 1986
  • To obtain fundamental information for judging the potential for rapid shallow landslides on a given slope, a multivariate analysis by quantification method (II) was carried out on factors surveyed in the Jinhae region of Korea, where many landslides were caused by heavy rainfall of 465 mm per day and 52 mm per hour in August 1979. A net of 2 × 2 cm unit meshes was overlaid on a contour map at 1:5000 scale. 74 landslide meshes and 119 non-landslide meshes were sampled, and their vegetative cover and geomorphological conditions were surveyed and divided into 6 items and 27 categories. As a result, the main factors leading to landslides were, in order, vegetation, slope type, slope position, slope, aspect, and number of streams. In particular, 10-year-old coniferous forest, concave slopes, and the foot of the mountain were the main factors making slopes unstable; conversely, 20- to 30-year-old coniferous forest, deciduous forest, convex slopes, and summits contributed to stability against landslides. The boundary value between the landslide and non-landslide groups was -0.123, and the prediction rate was 72%; the two groups were well discriminated.
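Classification under quantification method (II) reduces to summing the score assigned to each observed category and comparing the total against the discriminant boundary (-0.123 in the study). The category scores below are invented for illustration; the abstract reports only the signs of the effects, not the fitted scores:

```python
# Hypothetical category scores (the paper's fitted values are not in the
# abstract); signs follow the reported findings: young conifer stands,
# concave slopes, and slope feet destabilize, the others stabilize.
CATEGORY_SCORES = {
    ("vegetation", "conifer_10yr"): -0.45,
    ("vegetation", "deciduous"): 0.30,
    ("slope_type", "concave"): -0.25,
    ("slope_type", "convex"): 0.20,
    ("position", "foot"): -0.15,
    ("position", "summit"): 0.25,
}
BOUNDARY = -0.123  # reported boundary between the two groups

def classify_mesh(attrs):
    """Sum the category scores of a mesh and compare to the boundary."""
    score = sum(CATEGORY_SCORES[a] for a in attrs)
    return "landslide-prone" if score < BOUNDARY else "stable"

risky = [("vegetation", "conifer_10yr"), ("slope_type", "concave"), ("position", "foot")]
safe = [("vegetation", "deciduous"), ("slope_type", "convex"), ("position", "summit")]
print(classify_mesh(risky))  # landslide-prone (score -0.85)
print(classify_mesh(safe))   # stable (score 0.75)
```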


Evaluation of the Wet Bulb Globe Temperature (WBGT) Index for Digital Fashion Application in Outdoor Environments

  • Kwon, JuYoun;Parsons, Ken
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.23-36
    • /
    • 2017
  • Objective: This paper presents a study evaluating the WBGT index for assessing the effects of a wide range of outdoor weather conditions on human responses. Background: The Wet Bulb Globe Temperature (WBGT) index was first developed for the assessment of hot outdoor conditions. It is a recognized index used worldwide, and it may be useful over a range of outdoor conditions, not just hot climates. Method: Four group experiments, in which people performed a light stepping activity, were conducted to determine human responses to outdoor conditions in the U.K., in September 2007 (autumn), December 2007 (winter), March 2008 (spring), and June 2008 (summer). Environmental measurements included WBGT, air temperature, radiant temperature (including solar load), humidity, and wind speed, all measured at 1.2 m above the ground, as well as weather data from a standard weather station 3 m to 4 m above the ground. Participants' physiological and subjective responses were measured. Over the four seasons as a whole, WBGT provided a strong prediction of physiological as well as subjective responses where aural temperature, heart rate, and sweat production were measured. Results: WBGT is appropriate for predicting thermal strain on a large group of ordinary people in moderate conditions, and consideration should be given to including the WBGT index in warning systems for a wide range of weather conditions. However, the WBGT overestimated the physiological responses of subjects. In addition, tenfold Borg's RPE differed significantly from measured heart rate in all conditions except autumn (p<0.05). Physiological and subjective responses over 60 minutes consistently showed similar relationships with WBGT_head and WBGT_abdomen. Conclusion: Either WBGT_head or WBGT_abdomen could be measured if measurement is possible at only one height.
The relationship between the WBGT values and weather station data was also investigated; there was a significant relationship between WBGT values at the position of a person and weather station data. For UK daytime weather conditions ranging from an average air temperature of 6°C to 21°C with mean radiant temperatures of up to 57°C, the WBGT index could be used as a simple thermal index to indicate the effects of weather on people. Application: The results of the WBGT evaluation might help develop smart clothing for workers in industrial sites and improve work environments with regard to workers' wellness.
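The WBGT index evaluated above is a fixed weighting of three temperature measurements (the outdoor form with solar load, per ISO 7243); a sketch with a hypothetical reading:

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """Outdoor WBGT (ISO 7243): 0.7*Tnwb + 0.2*Tg + 0.1*Ta, in deg C.

    Tnwb: natural wet-bulb temperature (captures humidity and wind),
    Tg: globe temperature (captures radiant/solar load),
    Ta: air (dry-bulb) temperature."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

# Hypothetical UK summer reading in the study's range (sunny, moderate air):
print(round(wbgt_outdoor(t_nwb=16.0, t_globe=35.0, t_air=21.0), 1))  # 20.3
```

The weighting shows why the index tracks solar load and humidity more strongly than air temperature alone, which is what makes it usable across the study's wide outdoor range.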

A CLINICAL STUDY OF THE NASAL MORPHOLOGIC CHANGES FOLLOWING LEFORT I OSTEOTOMY (상악골 수평골절단술 후 비외형 변화에 관한 임상적 연구)

  • Bae, Jun-Soo;You, Jun-Young;Lyoo, Jong-Ho;Kim, Yong-Kwan
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.25 no.4
    • /
    • pp.324-329
    • /
    • 1999
  • Facial esthetics are much affected by nasal changes, owing especially to the nose's central position in the facial outline, so the functional and esthetic aspects of the nose should be appropriately evaluated in relation to facial appearance. A maxillary surgical movement is generally known to induce changes in nasolabial morphology secondary to the skeletal repositioning and the accompanying muscular retraction. These changes can be desirable or undesirable for an individual, depending on the direction and amount of maxillary repositioning. We investigated the surgical changes of the bony maxilla and their effects on nasal morphology through analysis of lateral cephalograms in Le Fort I osteotomy. Subjects were 10 patients (2 male, 8 female; mean age 22.3 years), and cephalograms were obtained 2 weeks before surgery (T1) and 6 months after surgery (T2). The surgical maxillary movement was identified through the horizontal and vertical repositioning of point A. Soft-tissue analysis of the nasal profile employed two angles, nasal tip projection (NTP) and columellar angle (CA); alar base width (ABW) was measured directly on the patients with a slide gauge. The results were as follows. 1. Combined anterior and superior movement of the maxilla above 2 mm rotated the nasal tip up more than 1 mm, whereas anterior or superior movement above 2 mm alone made the amount and direction of NTP change difficult to predict; in particular, the correlation between horizontal maxillary movement and upward NTP rotation was significant at P<0.01. 2. Combined anterior and superior movement of the maxilla was accompanied by a greater CA increase than either movement alone; the correlation between horizontal maxillary movement and CA change was significant at P<0.05. 3. Anterior and/or superior movement of the maxilla was accompanied by unpredictable ABW widening. 4. The amounts of change in NTP, CA, and ABW were not directly proportional to the amount of anterior and/or superior maxillary movement. 5. Nasal morphologic changes following Le Fort I osteotomy are affected not merely by bony repositioning but by multiple other factors.
