• Title/Summary/Keyword: statistical verification


On-line Signature Verification using Segment Matching and LDA Method (구간분할 매칭방법과 선형판별분석기법을 융합한 온라인 서명 검증)

  • Lee, Dae-Jong;Go, Hyoun-Joo;Chun, Myung-Geun
    • Journal of KIISE: Software and Applications / v.34 no.12 / pp.1065-1074 / 2007
  • Among the various methods for comparing reference signatures with an input signature, segment-to-segment matching has advantages over global and point-to-point methods. However, its recognition rate degrades as the partitioning points vary. To resolve this drawback, this paper proposes a signature verification method that combines linear discriminant analysis with segment-to-segment matching. For the final decision step, a statistics-based Bayesian classifier is adopted to effectively combine the two individual systems. In various experiments, the proposed method shows better performance than segment-to-segment matching alone.
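As a rough illustration of the fusion idea in the abstract above, the sketch below combines two matcher scores with a naive-Bayes rule under Gaussian class-conditional models; the score distributions, the 0.5 prior, and the toy training data are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: naive-Bayes fusion of two verification scores
# (e.g., a segment-matching score and an LDA projection score).
import numpy as np
from scipy.stats import norm

def fit_class_model(genuine_scores, forgery_scores):
    """Fit per-class Gaussians for one matcher's scores."""
    return {
        "genuine": norm(np.mean(genuine_scores), np.std(genuine_scores)),
        "forgery": norm(np.mean(forgery_scores), np.std(forgery_scores)),
    }

def bayes_fuse(score_a, score_b, model_a, model_b, prior_genuine=0.5):
    """Posterior P(genuine | both scores) under conditional independence."""
    like_g = model_a["genuine"].pdf(score_a) * model_b["genuine"].pdf(score_b)
    like_f = model_a["forgery"].pdf(score_a) * model_b["forgery"].pdf(score_b)
    evidence = prior_genuine * like_g + (1 - prior_genuine) * like_f
    return prior_genuine * like_g / evidence

# toy usage with invented training scores
rng = np.random.default_rng(0)
model_seg = fit_class_model(rng.normal(0.8, 0.1, 50), rng.normal(0.4, 0.1, 50))
model_lda = fit_class_model(rng.normal(1.5, 0.3, 50), rng.normal(0.2, 0.3, 50))
print(bayes_fuse(0.75, 1.2, model_seg, model_lda))  # accept if above a chosen threshold
```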

A Study on the Influence of Service Quality in Commercial Bank of China on Customer Satisfaction and Intent of Use: Focused on the Mediated Effect of Bank Image (중국 상업은행의 서비스품질이 고객만족도와 이용의도에 미치는 영향에 관한 연구: 은행 이미지의 매개효과를 중심으로)

  • Liu, Zi-Yang;Liang, Yaqing
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.401-402 / 2019
  • The purpose of this study is to identify the specific service-quality factors that most strongly shape how users of commercial banks in China perceive the banks' services, and to establish empirically how these quality factors affect the banks' image. It also verifies the impact of a positive bank image on user satisfaction and on the intention to use the bank's services. For the empirical verification, a questionnaire survey was conducted among customers who had used the services of each of four commercial banks in China. The collected data were analyzed in SPSS using statistical techniques such as Cronbach's α, exploratory factor analysis, reliability analysis, correlation analysis, regression, and difference verification. The results are summarized as follows. First, the service quality of commercial banks has a partial positive effect on the bank's image. Second, the image of a commercial bank has a positive effect on customer satisfaction. Third, the image of a commercial bank has a positive effect on the intention to use. Fourth, the image of commercial banks partially mediates the relationship between service quality and customer satisfaction. Fifth, the image of a commercial bank partially mediates the relationship between service quality and intention to use.
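Cronbach's α is one of the reliability statistics listed in the abstract above; the paper computed it in SPSS, but a minimal NumPy version (with invented Likert-scale responses as a stand-in for the survey data) looks roughly like this:

```python
# Minimal sketch of Cronbach's alpha for a respondents x items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# toy data: 5 respondents answering 4 service-quality items (invented values)
scores = np.array([[5, 4, 5, 4],
                   [3, 3, 4, 3],
                   [4, 4, 4, 5],
                   [2, 2, 3, 2],
                   [5, 5, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```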

An Analysis of Phonetic Parameters for Individual Speakers (개별화자 음성의 특징 파라미터 분석)

  • Ko, Do-Heung
    • Speech Sciences / v.7 no.2 / pp.177-189 / 2000
  • This paper investigates how individual speakers' speech can be distinguished using acoustic parameters such as amplitude, pitch, and formant frequencies. Word samples from fifteen male speakers in their twenties, drawn from three different regions, were recorded in two modes (casual and clear speech) in quiet settings and analyzed with a Praat macro script. To determine each speaker's acoustic values, the total duration of the voiced segments was measured at five different time points. Results showed a high correlation between the F1 and F2 formant frequencies across speakers, although there was little correlation between amplitude and pitch. Statistical grouping showed that individual speakers' voices did not cluster by regional dialect for either casual or clear speech. In addition, the difference between maximum and minimum amplitude was about 10 dB, a perceptually audible difference. These acoustic data can provide meaningful guidelines for implementing speaker identification and speaker verification algorithms.
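The F1-F2 relationship reported above can be quantified with, for example, a Pearson correlation coefficient; the sketch below uses invented formant values purely as placeholders for the study's measurements.

```python
# Minimal sketch: correlation between F1 and F2 measurements across speakers.
import numpy as np
from scipy.stats import pearsonr

f1 = np.array([310, 420, 380, 500, 460, 350])      # Hz, one value per speaker (invented)
f2 = np.array([2100, 1800, 1950, 1500, 1650, 2000])  # Hz (invented)

r, p_value = pearsonr(f1, f2)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```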

Voice Similarities between Sisters

  • Ko, Do-Heung
    • Speech Sciences / v.8 no.3 / pp.43-50 / 2001
  • This paper deals with voice similarities between sisters, who are assumed to share physiological characteristics inherited from the same biological mother. Nine pairs of sisters believed to have similar voices participated in this experiment. The speech samples from one pair of sisters were excluded from the analysis because their perceptual score was relatively low. The words were measured both in isolation and in context, and the subjects were asked to read the text five times with an interval of about three seconds between readings. Recordings were made at natural speed in a quiet room. The data were analyzed for pitch and formant frequencies using CSL (Computerized Speech Lab) and PCQuirer. It was found that data for vowels in initial position were much more similar and homogeneous than those for vowels in other positions. The acoustic data showed that voice similarities are strikingly high in both pitch and formant frequencies. The statistical data obtained from this experiment can be used as a guideline for modelling speaker identification and speaker verification.

Statistical division of compressive strength results on the aspect of concrete family concept

  • Jasiczak, Jozef;Kanoniczak, Marcin;Smaga, Lukasz
    • Computers and Concrete / v.14 no.2 / pp.145-161 / 2014
  • The article presents a statistical method for grouping compressive strength results of concrete in continuous production. It describes how a series of compressive strength results can be divided into batches with statistically stable strength parameters over specific time intervals, based on the standardized concept of the "concrete family". Example calculations are presented for two series of concrete strength results, from which subsets with decreased strength parameters were separated. When assessing the quality of concrete elements and concrete road surfaces, the principal issue is control of the compressive strength parameters of the concrete, and large quantities of continuously manufactured concrete mix should be subject to continuous control. The standardized approach to assessing concrete strength proves insufficient because it does not allow subsets of decreased strength results to be detected, which in turn makes it impossible to adjust the concrete manufacturing process or to identify the particular product, or area on site, with decreased concrete strength. In this article, two independent methods of grouping concrete test results with statistically stable strength parameters are proposed, based on the verification of statistical hypotheses with two statistical tests: Student's t-test and the Mann-Whitney U test.
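The decision of whether two groups of strength results can be pooled into one concrete family can be framed, as the abstract suggests, as a two-sample hypothesis test; a minimal sketch using the two tests named above (with invented strength values and an assumed 5% significance level) might look like this:

```python
# Minimal sketch: should two consecutive batches of compressive strength
# results be treated as one statistically homogeneous "concrete family"?
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

batch_a = np.array([41.2, 39.8, 42.5, 40.1, 38.9, 41.7])  # MPa (invented)
batch_b = np.array([36.4, 35.9, 37.2, 34.8, 36.0, 35.5])  # MPa (invented)

t_stat, p_t = ttest_ind(batch_a, batch_b, equal_var=False)
u_stat, p_u = mannwhitneyu(batch_a, batch_b, alternative="two-sided")

alpha = 0.05
same_family = (p_t > alpha) and (p_u > alpha)
print(f"t-test p={p_t:.4f}, Mann-Whitney p={p_u:.4f}, pool batches: {same_family}")
```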

The Statistical Model for Predicting Flood Frequency (홍수 빈도 예측을 위한 통계학적 모형)

  • 노재식;이길춘
    • Water for future / v.25 no.2 / pp.89-97 / 1992
  • This study verifies the applicability of statistical models for predicting flood frequency at stage gaging stations in the Han River basin, selected by considering whether the flow is in its natural condition. The verification showed that these statistical flood frequency models are reasonable to apply in practice. The models were also compared in terms of sampling variance to assess the statistical efficiency of the estimate of the T-year flood Q(T) obtained from two different flood frequency models. For return periods greater than about T = 10 years, the annual exceedance series estimate of Q(T) had a smaller sampling variance than the annual maximum series estimate. Over the range of return periods considered, the partial duration series estimate of Q(T) had a smaller sampling variance than the annual maximum series estimate only when the POT model contained at least 2N items (N: record length), in which case it estimated Q(T) more efficiently than the ANNMAX model.
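As a rough illustration of the annual-maximum side of the comparison above, the sketch below fits a Gumbel distribution to an invented annual maximum series and reads off Q(T) for a few return periods; the distribution choice and discharge values are assumptions for illustration, not the paper's data or method.

```python
# Minimal sketch: T-year flood Q(T) from an annual maximum series via a Gumbel fit.
import numpy as np
from scipy.stats import gumbel_r

annual_max = np.array([820, 1040, 760, 1310, 990, 1180, 870, 1450,
                       930, 1100, 1260, 840, 1020, 1390, 960])  # m^3/s (invented)

loc, scale = gumbel_r.fit(annual_max)
for T in (10, 50, 100):
    q_t = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)  # quantile with exceedance prob 1/T
    print(f"Q({T}) = {q_t:.0f} m^3/s")
```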

A statistical reference-free damage identification for real-time monitoring of truss bridges using wavelet-based log likelihood ratios

  • Lee, Soon Gie;Yun, Gun Jin
    • Smart Structures and Systems / v.12 no.2 / pp.181-207 / 2013
  • In this paper, a statistical reference-free real-time damage detection methodology is proposed for detecting joint and member damage in truss bridge structures. For the statistical damage sensitive index (DSI), wavelet packet decomposition (WPD) in conjunction with the log likelihood ratio is suggested. A sensitivity test was conducted to select the wavelet packet most sensitive to the damage level, and the choice of decomposition level is also described. Using log likelihood ratios instead of likelihood ratios offers advantages for real-time health monitoring systems. A laboratory truss bridge structure instrumented with accelerometers and a shaker was used for experimental verification of the proposed methodology. The statistical reference-free real-time damage detection algorithm was successfully implemented and verified by detecting three damage types frequently observed in truss bridge structures (loss of bolts, loosening of bolts at multiple locations, and sectional loss of members) without reference signals from the pristine structure. The DSI based on WPD and the log likelihood ratio showed consistent and reliable results under different damage scenarios.
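The two ingredients named above, wavelet packet decomposition and a log likelihood ratio, can be sketched as follows; this is only an illustration of the building blocks (with an assumed wavelet, decomposition level, packet node, and Gaussian coefficient model), not the paper's reference-free algorithm.

```python
# Minimal sketch: WPD of an acceleration record (PyWavelets) plus a Gaussian
# log-likelihood ratio between the packet coefficients of two signal segments.
import numpy as np
import pywt
from scipy.stats import norm

def packet_coeffs(signal, wavelet="db4", level=3, node="aad"):
    """Coefficients of one wavelet packet node (assumed choices, not the paper's)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.asarray(wp[node].data)

def log_likelihood_ratio(coeffs, model_a, model_b):
    """Sum of log p_a(x) - log p_b(x) over packet coefficients."""
    return np.sum(model_a.logpdf(coeffs) - model_b.logpdf(coeffs))

rng = np.random.default_rng(1)
segment_1 = rng.normal(0, 1.0, 1024)   # stand-ins for measured accelerations
segment_2 = rng.normal(0, 1.4, 1024)

c1, c2 = packet_coeffs(segment_1), packet_coeffs(segment_2)
model_1 = norm(np.mean(c1), np.std(c1))
model_2 = norm(np.mean(c2), np.std(c2))
# Near 0 if the two segments are statistically alike; clearly negative here,
# since segment_2's coefficients fit their own model far better than model_1.
print(log_likelihood_ratio(c2, model_1, model_2))
```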

Analyzing seventh graders' statistical thinking through statistical processes by phases and instructional settings (통계적 과정의 학습에서 나타난 중학교 1학년 학생들의 단계별·수업 형태별 통계적 사고 분석)

  • Kim, Ga Young;Kim, Rae Young
    • The Mathematical Education / v.58 no.3 / pp.459-481 / 2019
  • This study investigates students' statistical thinking through statistical processes in two instructional settings: teacher-centered instruction and student-centered learning. We first developed instructional materials that allowed students to experience all the processes of statistics, including data collection, data analysis, data representation, and interpretation of results. Using these materials in four classes, we collected and analyzed discourse and artifacts from 57 seventh graders in the two instructional settings, using an analytic framework generated from a literature review. The results showed that students had difficulty particularly with data collection and graph representation. In addition, even though describing data has been heavily emphasized in statistics education as part of data analysis, students surprisingly struggled to understand the relationship between data and its representations. There were also relationships between students' statistical thinking and the instructional settings. Although both groups of students had difficulty with data collection and graph representation, there were significant differences in performance between the groups: students in the student-centered learning class outperformed their counterparts in making decisions involving verification and justification, whereas students in the teacher-centered lecture class did better on problems requiring accuracy. The results provide meaningful implications for developing curricula and instructional methods for statistics education.

Factors Affecting Intention to Introduce Smart Factory in SMEs - Including Government Assistance Expectancy and Task Technology Fit - (중소기업의 스마트팩토리 도입의도에 영향을 미치는 요인에 관한 연구 - 정부지원기대와 과업기술적합도를 포함하여)

  • Kim, Joung-rae
    • Journal of Venture Innovation / v.3 no.2 / pp.41-76 / 2020
  • This study empirically analyzed the factors affecting acceptance of smart factory technology, a core field of the fourth industrial revolution, by small and medium-sized enterprises (SMEs). Given the lack of research on technology acceptance in the smart factory field, the study has both academic and practical significance. The research was based on the Unified Theory of Acceptance and Use of Technology (UTAUT), whose explanatory power has been demonstrated in studies of information technology acceptance. In addition to the four independent variables of the UTAUT (Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions), Government Assistance Expectancy, expected to be important given the characteristics of smart factories, was added as an independent variable. To capture the technical factors of smart factory technology acceptance, Task Technology Fit (TTF) was also added and its effect on Behavioral Intention was analyzed empirically. Trust was added as a mediating variable, since the degree of trust in a new technology is expected to have a very important effect on its acceptance. Finally, Innovation Resistance was added as a moderating variable, based on previous studies showing that innovation driven by new information technology can provoke user resistance. For the empirical analysis, an online questionnaire with random sampling was administered to employees of domestic SMEs, and 309 valid responses were used. Amos 23.0 and Process macro 3.4 were used for the statistical analysis. The validity of the research model and measurement variables was established through confirmatory factor analysis, and causality, mediation, and moderation were verified through appropriate statistical procedures. Performance Expectancy, Social Influence, Government Assistance Expectancy, and Task Technology Fit had positive (+) effects on the intention to accept smart factory technology, in the order Government Assistance Expectancy (β=.487) > Task Technology Fit (β=.218) > Performance Expectancy (β=.205) > Social Influence (β=.204). Both Task Characteristics and Technology Characteristics had positive (+) effects on Task Technology Fit, with Task Characteristics (β=.559) having a larger effect than Technology Characteristics (β=.328). In the mediation test, no statistically significant mediating role of Trust was identified between any of the six independent variables and the intention to introduce a smart factory. The moderation test showed that Innovation Resistance plays a positive (+) moderating role between Government Assistance Expectancy and technology acceptance intention: the greater the Innovation Resistance, the greater the influence of Government Assistance Expectancy on the intention to adopt a smart factory. Based on these results, academic and practical implications are presented.
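The moderating-effect test described above is, in its simplest regression form, a test of an interaction term. The study itself used Amos 23.0 and the Process macro 3.4, so the statsmodels sketch below, with simulated data and my own variable names, is only a schematic of the idea, not the paper's analysis.

```python
# Minimal sketch: a moderation test as an interaction term in an OLS regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 309
gov_expectancy = rng.normal(3.5, 0.8, n)      # Government Assistance Expectancy (simulated)
innov_resistance = rng.normal(3.0, 0.7, n)    # Innovation Resistance, the moderator (simulated)
intention = (0.5 * gov_expectancy
             + 0.2 * gov_expectancy * innov_resistance
             + rng.normal(0, 0.5, n))          # Behavioral Intention (simulated)

X = np.column_stack([gov_expectancy,
                     innov_resistance,
                     gov_expectancy * innov_resistance])  # interaction term
X = sm.add_constant(X)
model = sm.OLS(intention, X).fit()
# A significant interaction coefficient indicates a moderating effect.
print(f"interaction beta = {model.params[3]:.3f}, p = {model.pvalues[3]:.4f}")
```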

An Object-Based Verification Method for Microscale Weather Analysis Module: Application to a Wind Speed Forecasting Model for the Korean Peninsula (미기상해석모듈 출력물의 정확성에 대한 객체기반 검증법: 한반도 풍속예측모형의 정확성 검증에의 응용)

  • Kim, Hea-Jung;Kwak, Hwa-Ryun;Kim, Sang-il;Choi, Young-Jean
    • The Korean Journal of Applied Statistics / v.28 no.6 / pp.1275-1288 / 2015
  • A microscale weather analysis module (resolution of about 1 km or less) is a microscale numerical weather prediction model designed for operational forecasting and atmospheric research needs such as radiant energy, thermal energy, and humidity. The accuracy of the module is directly related to the usefulness and quality of real-time microscale weather information services in the metropolitan area. This paper suggests an object-based verification method for spatio-temporal evaluation of the module's accuracy. The method is a graphical procedure comprising three steps: constructing a lattice field of evaluation statistics, merging and identifying objects, and evaluating the accuracy of the module. We develop lattice fields from various spatio-temporal evaluation statistics, together with an efficient object-identification algorithm that applies convolution, masking, and merging operations to the lattice fields. A real-data application demonstrates the utility of the verification method.
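The convolution, masking, and merging steps mentioned above can be sketched with standard image-processing operations; the smoothing kernel size, the threshold, and the synthetic error field below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: identify "objects" in a lattice field of evaluation statistics
# by smoothing (convolution), thresholding (masking), and labeling (merging).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
error_field = rng.normal(0.0, 1.0, size=(60, 60))   # stand-in lattice of wind-speed errors
error_field[20:30, 15:25] += 3.0                     # an injected region of large forecast error

smoothed = ndimage.uniform_filter(error_field, size=5)   # convolution with a flat kernel
mask = smoothed > 1.0                                    # masking step
labels, n_objects = ndimage.label(mask)                  # merge contiguous cells into objects

sizes = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))
print(f"identified {n_objects} object(s), sizes (grid cells): {sizes}")
```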