• Title/Summary/Keyword: null hypothesis

Search Results: 197

A Multi-level Representation of the Korean Narrative Text Processing and Construction-Integration Theory: Morpho- syntactic and Discourse-Pragmatic Effects of Verb Modality on Topic Continuity (한국어 서사 텍스트 처리의 다중 표상과 구성 통합 이론: 주제어 연속성에 대한 양태 어미의 형태 통사적, 담화 화용적 기능)

  • Cho Sook-Whan;Kim Say-Young
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.2
    • /
    • pp.103-118
    • /
    • 2006
  • The main purpose of this paper is to investigate the effects of discourse topic and morpho-syntactic verbal information on the resolution of null pronouns in Korean narrative text within the framework of construction-integration theory (Kintsch, 1988; Singer & Kintsch, 2001; Graesser, Gernsbacher, & Goldman, 2003). For this purpose, two conditions were designed: an explicit condition with both a consistently maintained discourse topic and person-specific verb modals on one hand, and a neutral condition with no discourse topic or morpho-syntactic information provided, on the other. We measured the reading times for the target sentence containing a null pronoun, the question response times for finding an antecedent, and the accuracy rates for finding an antecedent. During the experiments each passage was presented one sentence at a time on a computer-controlled display; each new sentence appeared on the screen when the participant pressed a button on the computer keyboard. The main findings indicate that pronoun interpretation is facilitated by macro-structure (topicality) in conjunction with micro-structure (morpho-syntax). It is speculated that global processing alone may not be able to determine which potential antecedent is to be focused on unless aided by lexical information. It is argued that the results largely support the resonance-based model, but not the minimalist hypothesis.
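The self-paced reading procedure described above can be sketched in a few lines; the display and keypress handling (`wait_for_press`) are hypothetical stand-ins for the authors' computer-controlled setup, not their actual software.

```python
import time

def self_paced_reading(sentences, wait_for_press):
    """Record per-sentence reading times: each sentence stays on screen
    until the participant presses a key, which reveals the next one."""
    reading_times = []
    for sentence in sentences:
        start = time.perf_counter()
        wait_for_press(sentence)  # display the sentence, block until keypress
        reading_times.append(time.perf_counter() - start)
    return reading_times

# Toy run with a stand-in for the keypress handler:
times = self_paced_reading(["S1.", "S2.", "S3."], lambda s: None)
```

Question response times and accuracy for the antecedent question would be logged the same way after the final sentence of each passage.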


An Analysis on Export Behavior to China of Container Port (국내 컨테이너항만의 대중국 수출행태 분석)

  • Son, Yong-Jung
    • Journal of Korea Port Economic Association
    • /
    • v.25 no.2
    • /
    • pp.115-128
    • /
    • 2009
  • This study aims to identify the influence of the exchange rate and the national economy on exports through container ports (Busan Port, Incheon Port, Gwangyang Port, and Pyeongtaek Port) from January 2001 to October 2007. A unit root test on the level variables failed to reject the null hypothesis of a unit root at the 1% level. However, a unit root test on the first-order differenced variables rejected that null hypothesis at the 1% level. The cointegration test found the model to be stable. A variance decomposition of the prediction error of exports at the various container ports yielded 89% for Busan Port, 83% for Incheon Port, 86% for Gwangyang Port, and 84% for Pyeongtaek Port, indicating that these variables account for a substantial share of exports at the container ports. For Busan Port, the exchange rate showed a negative (-) effect at Step 2 and shifted sharply to a positive (+) effect at Step 3. The national economy showed a sharp change from Step 2 to Step 7, after which a positive effect was maintained. Incheon Port, Gwangyang Port, and Pyeongtaek Port showed trends similar to Busan Port. From Step 7, they appear to have shifted to more stable trends.
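The differencing step that removes the unit root can be illustrated with a minimal sketch (the numbers are toy values, not the study's export series):

```python
def first_difference(series):
    """First-order differences: the transformation applied to the level
    variables before the second unit root test."""
    return [b - a for a, b in zip(series, series[1:])]

# Hypothetical monthly export levels behaving like a random walk with drift:
levels = [100, 103, 105, 110, 112, 118]
print(first_difference(levels))  # [3, 2, 5, 2, 6]
```

If the level series has a unit root, the differenced series is typically stationary, which is what the rejected null hypothesis on the differenced variables indicates.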


Impact of piezocision on orthodontic tooth movement

  • Papadopoulos, Nikolaos;Beindorff, Nicola;Hoffmann, Stefan;Jost-Brinkmann, Paul-Georg;Prager, Thomas Michael
    • The Korean Journal of Orthodontics
    • /
    • v.51 no.6
    • /
    • pp.366-374
    • /
    • 2021
  • Objective: This study investigated the impact of a single piezocision in the maxillary alveolar process on the speed of tooth movement. The null hypothesis was that the speed of tooth movement will be equal with and without piezocision. Methods: All maxillary molars on one side were moved against the combined incisors in 10 ten-week-old male Wistar rats. Under general anesthesia, a force of 25 cN was applied on either side using a Sentalloy closed coil spring. After placing the orthodontic appliance, vertical corticision was performed using a piezotome under local anesthesia, 2 mm mesial from the mesial root of the first molar on a randomly selected side; the other side served as the control. At the beginning of the treatment, and 2 and 4 weeks later, skull micro-computed tomography was performed. After image reconstruction, the distance between the mesial root of the first molar and the incisive canal, and the length of the mesial root of the first maxillary molar were measured. Moreover, the root resorption score was determined as described by Lu et al. Results: Significantly higher speed of tooth movement was observed on the corticision side; thus, the null hypothesis was rejected. The loss of root length and root resorption score were significantly more pronounced after piezocision than before. A strong correlation was observed between the speed of tooth movement and root resorption on the surgical side, but the control side only showed a weak correlation. Conclusions: Piezocision accelerates orthodontic tooth movement and causes increased root resorption.

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to some other predefined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions on those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can be registered as well: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, although it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. The number of bits necessary for the given specifications is therefore 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 (5 + 3) = 24, so the memory dimension is 128*24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have had to hold the membership value of every fuzzy set: the fuzzy-set word dimension would be 8*5 bits, and the memory dimension 128*40 bits. Consistent with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and that there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
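The word-length arithmetic above can be reproduced in a short sketch, using the parameter values from the abstract's own example:

```python
def word_length(nfm, dm_m, dm_fm):
    """Length = nfm * (dm(m) + dm(fm)): bits per memory row when only the
    nfm non-null membership values, each with its function index, are stored."""
    return nfm * (dm_m + dm_fm)

U = 128        # elements in the universe of discourse
NFM = 3        # max non-null membership values per element
DM_M = 5       # bits per membership value (32 truth levels)
DM_FM = 3      # bits per membership-function index (8 functions)

compact = U * word_length(NFM, DM_M, DM_FM)  # 128 * 24 bits
vectorial = U * 8 * DM_M                     # 128 * 40 bits: all 8 values per row
print(compact, vectorial)  # 3072 5120
```

The compact scheme stores 3072 bits against 5120 for full vectorial memorization, the saving the abstract describes.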


Equity in the Delivery of Health care in the Republic of Korea (의료이용의 형평성에 관한 실증적 연구 -공.교 의료보험 피부양자를 대상으로-)

  • 명지영;문옥륜
    • Health Policy and Management
    • /
    • v.5 no.2
    • /
    • pp.155-172
    • /
    • 1995
  • This study is an empirical analysis of equity in the delivery of health care under the Korean Medical Insurance Corporation System. Its purposes are to find out the effects of income on health care utilization and to measure income-related inequity in the distribution of health care. The study was carried out based on the fact that the health insurance program has been organized to achieve the equity objective, "equal treatment for equal needs". Data on 41,828 insured persons who had been diagnosed in the 1993 Health Screening Test, together with utilization records from 1 January 1993 through 31 December 1993, were derived from the Benefit Management File. Inequity was measured by means of i) the share approach, ii) the standardized concentration curve approach, iii) an inequity index, and iv) a test for inequity. The major findings were as follows: 1. The expenditure shares of the top two quintile groups exceeded their morbidity shares, whereas the opposite was true of the bottom three quintile groups, which yielded a positive HI$_{LG}$ inequity index, suggesting the presence of some inequity favoring the rich. 2. Compared with other residential areas, the rural area showed the highest positive HI$_{LG}$ irrespective of the need indicator applied. 3. Standardized expenditure concentration indices adjusted for age, gender, and need structure were also found to be positive, and therefore still indicated inequity favoring the rich after standardization. 4. The log-likelihood ratio (LR) test for the statistical significance of income-related inequity in medical care utilization was carried out using the logistic regression model. The resulting LR test statistic was 176, which exceeds the 0.5 percent critical value of the chi-square distribution with 28 degrees of freedom (50.993). The null hypothesis of no income-related inequity in medical care utilization was therefore rejected at the 99.5 percent confidence level.
5. A regression-based F-test was carried out to analyze the income-related inequity of medical expenditure with age, gender, and morbidity indicators as explanatory variables. The hypothesis of the absence of income-related inequity was rejected for all need indicators at the 95% confidence level.
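A minimal sketch of an expenditure concentration index of the kind used here (trapezoid rule over quintile shares; the numbers are illustrative, not the study's data):

```python
def concentration_index(pop_shares, exp_shares):
    """Twice the area between the 45-degree line and the concentration
    curve; positive values mean expenditure is concentrated on the rich.
    Groups must be ordered from poorest to richest."""
    cum_pop, cum_exp = [0.0], [0.0]
    for p, e in zip(pop_shares, exp_shares):
        cum_pop.append(cum_pop[-1] + p)
        cum_exp.append(cum_exp[-1] + e)
    # Area under the concentration curve by the trapezoid rule:
    area = sum((cum_pop[i + 1] - cum_pop[i]) * (cum_exp[i + 1] + cum_exp[i]) / 2
               for i in range(len(pop_shares)))
    return 1.0 - 2.0 * area

# Five population quintiles; richer quintiles hold larger expenditure shares:
ci = concentration_index([0.2] * 5, [0.15, 0.17, 0.18, 0.24, 0.26])
print(ci)  # positive, i.e. pro-rich inequity
```

With equal expenditure shares across quintiles the index is zero; the positive sign mirrors the positive HI$_{LG}$ values reported above.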


A Study on Delay of VR Game Operation for Experienced Game Users (숙련된 게임유저에게 발생되는 VR 게임 조작 지연에 관한 연구)

  • Jung, Won-Joe;Lee, Chang-Jo
    • Journal of Korea Game Society
    • /
    • v.18 no.1
    • /
    • pp.19-26
    • /
    • 2018
  • This study examined the operation delay that experienced (hardcore) game users encounter during VR game play. Following an HCI-based research approach, we created 2D, 3D, and VR prototype games to test a hypothesis about the user manipulation cycle. On this basis, 121 users were tested with the 2D, 3D, and VR format user interfaces. The average user manipulation periods extracted from the experiment were compared using independent-samples t-tests. The test results verified a significant difference in average operation time between the 2D and VR formats. Between the 3D and VR formats, however, no significant difference in average operation time was found, so the null hypothesis was retained.
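The independent-samples comparison can be sketched with Welch's t statistic (toy manipulation periods, not the study's measurements):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (no equal-variance assumption)."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Hypothetical manipulation periods (seconds) under two interface formats:
t_2d_vs_vr = welch_t([1.10, 1.20, 1.15, 1.25],   # 2D format
                     [1.40, 1.55, 1.45, 1.60])   # VR format
print(t_2d_vs_vr)  # negative: 2D operations are faster in this toy data
```

In practice one would obtain the p-value from the t distribution, e.g. via `scipy.stats.ttest_ind(a, b, equal_var=False)`.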

An Empirical Study on the Comparison of LSTM and ARIMA Forecasts using Stock Closing Prices

  • Gui Yeol Ryu
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.18-30
    • /
    • 2023
  • We empirically compared the forecast accuracies of the LSTM and ARIMA models, with the ARIMA model selected by the auto.arima function. Each model was fitted on 100 days of data, and the forecasts were compared over the following 50 days. We collected the stock closing prices of the top 4 companies by market capitalization in Korea: "Samsung Electronics", "LG Energy", "SK Hynix", and "Samsung Bio". The collection period is from June 17, 2022, to January 20, 2023. The paired t-test was used to compare the accuracy of the forecasts, because the two methods were applied under the same conditions. The null hypothesis that the accuracy of the two methods was the same was rejected for each of the four stock closing prices at the 5% significance level. Graphs and boxplots confirmed the results of the hypothesis tests. The accuracies of ARIMA are higher than those of LSTM in all four cases. For the closing stock price of Samsung Electronics, the mean difference in error between ARIMA and LSTM is -370.11, which is 0.618% of the average closing stock price. For LG Energy, the mean difference is -4143.298, which is 0.809% of the average closing stock price. For SK Hynix, the mean difference is -830.7269, which is 1.00% of the average closing stock price. For Samsung Bio, the mean difference is -4143.298, which is 0.809% of the average closing stock price. The auto.arima function was used to find the ARIMA model, but other methods are worth considering in future studies, and more effort is needed to find parameters that provide an optimal model in LSTM.
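The paired t statistic used for such a comparison can be sketched as follows (toy error series on the same days, not the paper's data):

```python
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic: mean of the pairwise differences
    divided by its standard error."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / len(d) ** 0.5)

# Hypothetical absolute forecast errors of two models on the same 4 days:
t = paired_t([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 2.0, 5.0])
print(t)  # 1.0
```

Pairing by day is what makes the test appropriate here: both models forecast the same series under the same conditions, so only the per-day error differences matter.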

Foreign Equity Ownership and Investors' Heterogeneous Beliefs (외국인지분율과 투자자들 간의 상이한 믿음)

  • Byun, Sun-Young;Jung, Hyun-Uk
    • Korea Trade Review
    • /
    • v.42 no.2
    • /
    • pp.227-249
    • /
    • 2017
  • This study investigates whether foreign equity ownership is associated with trading volume. Based on prior studies of foreign equity ownership, we establish the null hypothesis that foreign equity ownership is not related to trading volume. We measured trading volume as a proxy for investors' heterogeneous beliefs. To examine the hypothesis, we collected sample firms listed on the Korean Stock Exchange from 2001 to 2011 inclusive. Controlling for variables related to trading volume as reported in previous studies, the regression coefficient for foreign equity ownership showed a statistically significant negative sign. These results indicate that foreign equity ownership is negatively associated with investors' heterogeneous beliefs. This study contributes to the extant literature on foreign equity ownership by providing evidence that foreign equity ownership affects investors' trading decisions. The results may also help policy makers in policy development.
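A negative association of the kind reported can be illustrated with a minimal bivariate OLS slope (toy numbers; the study's actual regression includes control variables):

```python
from statistics import mean

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# Hypothetical foreign ownership (%) vs. trading volume (scaled):
slope = ols_slope([5, 10, 20, 40, 60], [9.0, 8.5, 7.0, 5.0, 3.5])
print(slope)  # negative coefficient, as in the reported result
```

A significantly negative coefficient rejects the null hypothesis of no relation between foreign ownership and trading volume.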


Statistical Inference in Non-Identifiable and Singular Statistical Models

  • Amari, Shun-ichi;Tomoko Ozeki
    • Journal of the Korean Statistical Society
    • /
    • v.30 no.2
    • /
    • pp.179-192
    • /
    • 2001
  • When a statistical model has a hierarchical structure, such as multilayer perceptrons in neural networks or Gaussian mixture density representations, the model includes distributions with unidentifiable parameters when the structure becomes redundant. Since the exact structure is unknown, we need to carry out statistical estimation or learning of the parameters in such a model. From the geometrical point of view, distributions specified by unidentifiable parameters become singular points in the parameter space. The problem has been noted in many statistical models, and the strange behaviors of likelihood ratio statistics when the null hypothesis is at a singular point have been analyzed. The present paper studies the asymptotic behaviors of the maximum likelihood estimator and the Bayesian predictive estimator by using a simple cone model, and shows that they are completely different from those in regular statistical models, where the Cramer-Rao paradigm holds. At singularities the Fisher information metric degenerates, implying that the Cramer-Rao paradigm no longer holds and that classical model selection theory such as AIC and MDL cannot be applied. This paper is a first step toward establishing a new theory for analyzing the accuracy of estimation or learning around singularities.
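A standard textbook illustration of such a singularity (the Gaussian-mixture case mentioned above, not the paper's cone model): consider the two-parameter family

```latex
p(x;\,t,\mu) \;=\; (1-t)\,\varphi(x) \;+\; t\,\varphi(x-\mu),
\qquad
\varphi(x)=\tfrac{1}{\sqrt{2\pi}}\,e^{-x^{2}/2}.
```

Every point of the set $\{t=0\}\cup\{\mu=0\}$ yields the same distribution $\varphi$, so the parameters are unidentifiable there and the Fisher information matrix degenerates at that point. This is precisely the situation in which the usual $\chi^2$ asymptotics of the likelihood ratio statistic, and criteria built on them such as AIC and MDL, break down.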


Two-sample chi-square test for randomly censored data (임의로 관측중단된 두 표본 자료에 대한 카이제곱 검정방법)

  • 김주한;김정란
    • The Korean Journal of Applied Statistics
    • /
    • v.8 no.2
    • /
    • pp.109-119
    • /
    • 1995
  • A two-sample chi-square test is introduced for testing the equality of the distributions of two populations when observations are subject to random censorship. The statistic is appropriate in testing problems where a two-sided alternative is of interest. Under the null hypothesis, the asymptotic distribution of the statistic is a chi-square distribution. We obtain two types of chi-square statistics: one as a nonnegative definite quadratic form in the differences of observed cell probabilities based on the product-limit estimators, and the other as a summation form. Data pertaining to a cancer chemotherapy experiment are examined with these statistics.
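The summation form of a two-sample chi-square statistic can be sketched for complete (uncensored) data; the paper's contribution, estimating the cell probabilities by product-limit estimators under random censoring, is the part this toy version omits:

```python
def two_sample_chi2(counts_a, counts_b):
    """Pearson-style two-sample chi-square over k cells: the sum of
    (observed - expected)^2 / expected in both samples, with expected
    counts taken from the pooled cell proportions."""
    n_a, n_b = sum(counts_a), sum(counts_b)
    stat = 0.0
    for o_a, o_b in zip(counts_a, counts_b):
        pooled = (o_a + o_b) / (n_a + n_b)
        e_a, e_b = n_a * pooled, n_b * pooled
        stat += (o_a - e_a) ** 2 / e_a + (o_b - e_b) ** 2 / e_b
    return stat

# Identical cell distributions give a statistic of (essentially) zero:
print(two_sample_chi2([10, 20, 30], [10, 20, 30]))
```

Under the null hypothesis of equal distributions the statistic is asymptotically chi-square with k - 1 degrees of freedom, matching the two-sided alternative described above.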
