• Title/Summary/Keyword: Statistics Matching


Performance Evaluation of VSDA Blind Equalization Algorithm for 16-QAM Signal (16-QAM 신호에 대한 VSDA 블라인드 등화 알고리즘의 성능 평가)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.85-91
    • /
    • 2014
  • This paper concerns the VSDA (Variable step-size Square contour Decision-directed Algorithm), an adaptive blind equalization algorithm used to minimize the intersymbol interference caused by the distortion that occurs in a time-dispersive channel when a 16-QAM signal is transmitted. The conventional SCA compensates for the amplitude and phase of a received signal corrupted by intersymbol interference using a constellation-dependent constant derived from the second-order statistics of the transmitted signal. The VSDA improves equalization performance by adding a distance-adjusted approach for constellation matching and a decision-directed cost function. We compare the performance of the VSDA and SCA algorithms by computer simulation, using the equalizer output signal constellation, residual ISI, maximum distortion, and MSE as performance indices. The simulation results show that the VSDA converges faster than the SCA but gives nearly the same equalization performance on the other indices. (A hedged code sketch of this style of update follows this entry.)
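The abstract does not give the exact VSDA cost function or step-size rule, so the following is only a minimal sketch of a square-contour-style blind update combined with a decision-directed term and a variable step size, assuming a baseband 16-QAM model; the contour radius `R`, the step-size bounds, and the slicer are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative square-contour + decision-directed blind update for 16-QAM.
# Every constant below is a placeholder choice, not taken from the paper.

QAM16 = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])

def slicer(y):
    """Decision device: return the nearest 16-QAM symbol."""
    return QAM16[np.argmin(np.abs(QAM16 - y))]

def blind_equalize(x, n_taps=11, mu_min=1e-5, mu_max=1e-3, R=3.0):
    """Square-contour-style blind equalizer with a decision-directed term
    and a variable step size; x holds received baseband samples (one per symbol)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    out = []
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]              # tap-input vector, newest first
        y = np.dot(w, u)                       # equalizer output
        # Square-contour-style error: push |Re(y)| and |Im(y)| toward the contour R.
        e_sc = (np.abs(y.real) - R) * np.sign(y.real) \
             + 1j * (np.abs(y.imag) - R) * np.sign(y.imag)
        e_dd = y - slicer(y)                   # decision-directed error
        # Variable step size: shrink the step as decisions become reliable.
        mu = np.clip(mu_max * np.abs(e_dd), mu_min, mu_max)
        w -= mu * (e_sc + e_dd) * np.conj(u)   # stochastic-gradient update
        out.append(y)
    return np.array(out)
```

The idea of the sketch is that the square-contour term drives the output toward the constellation boundary blindly, while the decision-directed term and the shrinking step size take over once the constellation has opened up.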

Performance of VSCA Adaptive Equalization Algorithm for 16-QAM Signal (16-QAM 신호에 대한 VSCA 적응 등화 알고리즘의 성능)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.67-73
    • /
    • 2013
  • This paper concerns the performance of the VSCA adaptive equalization algorithm, which is used to minimize the intersymbol interference caused by the distortion that occurs in a time-dispersive channel when a 16-QAM signal is transmitted. The conventional SCA compensates for the amplitude and phase of a received signal corrupted by intersymbol interference using a constellation-dependent constant derived from the second-order statistics of the transmitted signal. The VSCA improves equalization performance by adding a distance-adjusted approach for constellation matching. We compare the performance of the VSCA and SCA algorithms by computer simulation, using the equalizer output signal constellation, residual ISI, maximum distortion, and MSE for the performance comparison. The simulation confirmed that the VSCA algorithm outperforms the SCA on every performance index. (A sketch of how these performance indices are commonly computed follows this entry.)
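Both equalization abstracts use residual ISI, maximum distortion, and MSE as performance indices. The sketch below shows how these are commonly computed from the combined channel-equalizer impulse response; the exact definitions used in the papers may differ slightly.

```python
import numpy as np

def residual_isi(channel, equalizer):
    """Residual ISI (dB) of the combined channel-equalizer impulse response."""
    s = np.convolve(channel, equalizer)          # combined impulse response
    peak = np.max(np.abs(s) ** 2)
    return 10 * np.log10((np.sum(np.abs(s) ** 2) - peak) / peak)

def maximum_distortion(channel, equalizer):
    """Peak distortion: sum of off-peak taps relative to the main tap."""
    s = np.abs(np.convolve(channel, equalizer))
    k = np.argmax(s)
    return (np.sum(s) - s[k]) / s[k]

def mse(outputs, decisions):
    """Mean squared error between equalizer outputs and decided symbols."""
    return np.mean(np.abs(np.asarray(outputs) - np.asarray(decisions)) ** 2)
```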

Attribute-based Approach for Multiple Continuous Queries over Data Streams (데이터 스트림 상에서 다중 연속 질의 처리를 위한 속성기반 접근 기법)

  • Lee, Hyun-Ho;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.14D no.5
    • /
    • pp.459-470
    • /
    • 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. Query processing over such a stream must also be continuous and rapid, which imposes strict time and space constraints. In most DSMSs (Data Stream Management Systems), the selection predicates of continuous queries are grouped or indexed to meet these constraints. This paper proposes a new scheme called an ASC (Attribute Selection Construct) that collectively evaluates the selection predicates of multiple continuous queries that refer to the same attribute. An ASC holds useful information, such as attribute usage status, partially pre-calculated matching results, and selectivity statistics for its selection predicates. The order in which the ASCs corresponding to the attributes of a base data stream are processed can significantly influence the overall performance of multiple-query evaluation, so a method for establishing an efficient evaluation order of multiple ASCs is also proposed. Finally, the performance of the proposed method is analyzed through a series of experiments to identify its characteristics. (A hypothetical sketch of the ASC idea follows this entry.)
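The internal layout of an ASC is not spelled out in the abstract, so the sketch below is a hypothetical reconstruction of the core idea: group all range predicates on one attribute, evaluate them together per tuple, keep per-attribute selectivity statistics, and process the most selective attribute first. The class and function names are mine, not the paper's.

```python
class ASC:
    """Hypothetical Attribute Selection Construct: the range predicates
    registered on one attribute plus simple selectivity statistics."""
    def __init__(self, attribute):
        self.attribute = attribute
        self.predicates = []          # list of (query_id, low, high)
        self.evaluated = 0
        self.passed = 0

    def add(self, query_id, low, high):
        self.predicates.append((query_id, low, high))

    def selectivity(self):
        return self.passed / self.evaluated if self.evaluated else 1.0

    def evaluate(self, value):
        """Return the query ids whose predicate on this attribute matches `value`."""
        self.evaluated += 1
        matched = {qid for qid, lo, hi in self.predicates if lo <= value <= hi}
        if matched:
            self.passed += 1
        return matched

def evaluate_tuple(tup, ascs, all_queries):
    """Evaluate a stream tuple against all ASCs, most selective attribute first.
    Simplification: assumes every query has a predicate on every attribute."""
    candidates = set(all_queries)
    for asc in sorted(ascs.values(), key=lambda a: a.selectivity()):
        candidates &= asc.evaluate(tup[asc.attribute])
        if not candidates:            # early exit once no query can match
            break
    return candidates

# Usage with two hypothetical queries over attributes "price" and "qty"
ascs = {"price": ASC("price"), "qty": ASC("qty")}
ascs["price"].add("Q1", 10, 20); ascs["qty"].add("Q1", 0, 5)
ascs["price"].add("Q2", 15, 30); ascs["qty"].add("Q2", 3, 9)
print(evaluate_tuple({"price": 18, "qty": 4}, ascs, {"Q1", "Q2"}))  # {'Q1', 'Q2'}
```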

Effect of tranexamic acid on blood loss reduction in patients undergoing orthognathic surgery under hypotensive anesthesia: a single-center, retrospective, observational study

  • Keisuke Harada;Noritaka Imamachi;Yuhei Matsuda;Masato Hirabayashi;Yoji Saito;Takahiro Kanno
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.50 no.2
    • /
    • pp.86-93
    • /
    • 2024
  • Objectives: Orthognathic surgery is performed via an intraoral approach using established and safe techniques; however, excessive blood loss has been reported in rare cases. In response, efforts have been made to identify methods of reducing blood loss. Among such methods, the administration of tranexamic acid has been reported to reduce intraoperative blood loss. However, few studies to date have reported the effect of tranexamic acid in orthognathic surgery under hypotensive anesthesia. The present study aimed to investigate the effect of tranexamic acid administration on intraoperative blood loss in patients undergoing bimaxillary (maxillary and mandibular) orthognathic surgery under hypotensive anesthesia. Patients and Methods: A total of 156 patients (mean age, 27.0±10.8 years) who underwent bimaxillary orthognathic surgery under hypotensive anesthesia performed by the same surgeon between June 2013 and February 2022 were included in this study. The following data were collected from the medical records of each patient: background factors (age, sex, and body mass index), use of tranexamic acid, surgical procedures, previous medical history, duration of surgery, American Society of Anesthesiologists physical status before surgery, intraoperative blood loss as the primary outcome, in-out balance, and blood test results. Descriptive statistics were calculated, and a t-test and the chi-squared test were used for between-group comparisons. Group comparisons were performed after 1:1 propensity score matching to adjust for confounding factors. Statistical significance was set at P<0.05. Results: Comparison between the groups based on the use of tranexamic acid revealed a significant difference in operation time. Propensity score matching analysis revealed that intraoperative blood loss was significantly lower in the tranexamic acid group. Conclusion: The administration of tranexamic acid was effective in reducing intraoperative blood loss in patients undergoing bimaxillary orthognathic surgery under hypotensive anesthesia. (A generic sketch of 1:1 propensity score matching follows this entry.)
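The statistical workflow described (1:1 propensity score matching followed by a between-group comparison) can be sketched generically. The code below is an illustrative scikit-learn/SciPy version with made-up variable names, not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_and_compare(X, treated, outcome):
    """1:1 greedy nearest-neighbor propensity score matching without replacement,
    followed by a t-test on the outcome between the matched groups.

    X       : (n, p) array of confounders (e.g. age, sex, BMI) -- hypothetical
    treated : (n,) boolean array, True for the tranexamic acid group
    outcome : (n,) array, e.g. intraoperative blood loss
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    c_idx = np.where(~treated)[0]

    # For each treated patient, walk through controls ordered by propensity
    # score distance and take the first one not already used.
    nn = NearestNeighbors(n_neighbors=len(c_idx)).fit(ps[c_idx].reshape(-1, 1))
    used, pairs = set(), []
    for i in t_idx:
        _, order = nn.kneighbors(np.array([[ps[i]]]))
        for j in order[0]:
            if j not in used:
                used.add(j)
                pairs.append((i, c_idx[j]))
                break

    treated_out = outcome[[i for i, _ in pairs]]
    control_out = outcome[[j for _, j in pairs]]
    # The paper reports a t-test; an unpaired test is used here for simplicity.
    return stats.ttest_ind(treated_out, control_out)
```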

Efficient Construction of Generalized Suffix Arrays by Merging Suffix Arrays (써픽스 배열 합병을 이용한 일반화된 써픽스 배열의 효율적인 구축 알고리즘)

  • Jeon, Jeong-Eun;Park, Heejin;Kim, Dong-Kyue
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.6
    • /
    • pp.268-278
    • /
    • 2005
  • We consider constructing the generalized suffix array of strings A and B when the suffix arrays of A and B are given, i.e., merging the two suffix arrays of A and B. Efficient algorithms exist for merging certain special suffix arrays, such as the odd array and the even array, but for the general case in which A and B are arbitrary strings, no efficient merging algorithm had been developed. Thus, one had to construct the generalized suffix array of A and B by building the suffix array of A#B$ from scratch, even when the suffix arrays of A and B were already available. In this paper, we present efficient merging algorithms for the suffix arrays of two arbitrary strings A and B drawn from constant and integer alphabets. Experimental results show that merging the two suffix arrays of A and B is about 5 times faster than constructing the suffix array of A#B$ from scratch for constant alphabets. Our algorithms search for all suffixes of string B in the suffix array of A; to do this, we use suffix links in suffix arrays, and we developed efficient algorithms for computing the suffix links. Efficient computation of suffix links is another contribution of this paper, because it can be used to solve other problems in bioinformatics that require searching for all suffixes of one string in the suffix array of another, such as computing matching statistics and finding longest common substrings. The experimental results show that our method for computing suffix links is about 3-4 times faster than the previously fastest methods. (A sketch of the basic suffix-array search primitive follows this entry.)
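The merging algorithm relies on locating suffixes of B in the suffix array of A; the suffix-link machinery is beyond an abstract, but the basic primitive, finding all suffixes that start with a given pattern by binary search over a suffix array, can be sketched as follows. This is illustrative code, not the paper's algorithm, and the construction routine is a deliberately naive one.

```python
def build_suffix_array(s):
    """Naive O(n^2 log n) suffix array construction, adequate for illustration."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def sa_range(text, sa, pattern):
    """Return (lo, hi) so that sa[lo:hi] lists exactly the suffixes of `text`
    that start with `pattern`, using two binary searches."""
    def lower(p):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(p)] < p:
                lo = mid + 1
            else:
                hi = mid
        return lo
    # All suffixes >= pattern, and all suffixes >= pattern followed by the
    # largest possible character, bracket the pattern-prefixed block.
    return lower(pattern), lower(pattern + chr(0x10FFFF))

text = "mississippi"
sa = build_suffix_array(text)
print(sa_range(text, sa, "ssi"))   # range in sa of suffixes starting with "ssi"
```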

Identification and classification of fresh lubricants and used engine oils by GC/MS and bayesian model (GC/MS 분석과 베이지안 분류 모형을 이용한 새 윤활유와 사용 엔진 오일의 동일성 추적과 분류)

  • Kim, Nam Yee;Nam, Geum Mun;Kim, Yuna;Lee, Dong-Kye;Park, Seh Youn;Lee, Kyoungjae;Lee, Jaeyong
    • Analytical Science and Technology
    • /
    • v.27 no.1
    • /
    • pp.41-59
    • /
    • 2014
  • The aim of this work was the identification and classification of fresh lubricants and used engine oils from vehicles for application in the forensic science field. Eighty kinds of fresh lubricants were purchased, and 86 kinds of used engine oils were sampled from 24 kinds of diesel and gasoline vehicles with different driving conditions. The lubricant and used engine oil samples were analyzed by GC/MS, and a Bayesian model was developed for classification and identification. Both wavelet fitting and principal component analysis (PCA) were applied for data dimension reduction. In classifying the fresh lubricants, the matching rates of the Bayesian model with wavelet fitting and with PCA were 97.5% and 96.7%, respectively; the Bayesian model with wavelet fitting classified lubricants better than the PCA-based model, so it was selected for lubricant classification. In a second experiment, used engine oils were collected from vehicles at several mileages up to 5,000 km after an engine oil change, giving 86 used engine oil samples. In vehicle classification (24 classes in total), the matching rate of the Bayesian model with wavelet fitting was 86.4%. In classifying the vehicle's fuel type (gasoline or diesel, 2 classes), the matching rate was 99.6%, and in classifying the used engine oil brands (6 classes), the matching rate was 97.3%. (A generic sketch of dimension reduction followed by Bayesian classification appears below.)
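The abstract describes dimension reduction (wavelet fitting or PCA) followed by a Bayesian classification model but does not specify the model itself. The sketch below therefore uses PCA plus a Gaussian naive Bayes classifier from scikit-learn as a generic stand-in, with hypothetical array names for the GC/MS data.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def classify_oils(X, y, n_components=10):
    """PCA for dimension reduction, then a Gaussian naive Bayes classifier,
    evaluated by cross-validated matching rate (accuracy).

    X : (n_samples, n_features) matrix of GC/MS chromatogram features (hypothetical)
    y : brand / vehicle / fuel-type label for each sample
    """
    model = make_pipeline(PCA(n_components=n_components), GaussianNB())
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    model.fit(X, y)
    return model, scores.mean()
```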

Bootstrap estimation of the standard error of treatment effect with double propensity score adjustment (이중 성향점수 보정 방법을 이용한 처리효과 추정치의 표준오차 추정: 붓스트랩의 적용)

  • Lim, So Jung;Jung, Inkyung
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.3
    • /
    • pp.453-462
    • /
    • 2017
  • Double propensity score adjustment is an analytic solution for addressing bias due to incomplete matching. However, it is difficult to estimate the standard error of the estimated treatment effect when double propensity score adjustment is used. In this study, we propose two bootstrap methods for estimating the standard error. The first is a simple bootstrap method that draws bootstrap samples from the propensity-score-matched sample and estimates the standard error from those bootstrapped samples. The second is a complex bootstrap method that first draws bootstrap samples from the original sample and then applies propensity score matching to each bootstrapped sample. We examined the performance of the two methods using simulations under various scenarios. The standard error estimates from the complex bootstrap were closer to the empirical standard error than those from the simple bootstrap, which tended to underestimate. In addition, the coverage rates of a 95% confidence interval using the complex bootstrap were closer to the nominal rate of 0.95. We applied the two methods to a real-data example and likewise found that the standard error estimate from the simple bootstrap was smaller than that from the complex bootstrap. (A sketch of the complex bootstrap procedure follows this entry.)
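The "complex" bootstrap is described precisely enough to outline: resample from the original sample, redo the propensity score matching and effect estimation within each resample, and take the standard deviation across replicates. In the hedged sketch below, `match_and_estimate` is an assumed callable standing in for the matching and double-adjustment steps, which are not reproduced here.

```python
import numpy as np

def complex_bootstrap_se(data, match_and_estimate, n_boot=1000, seed=0):
    """Standard error of a treatment-effect estimate via the 'complex' bootstrap.

    data               : original sample, indexable by an integer array
                         (e.g. a NumPy structured array)
    match_and_estimate : callable(sample) -> treatment-effect estimate; assumed
                         to re-fit the propensity model and re-match internally
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        estimates.append(match_and_estimate(data[idx]))
    return np.std(estimates, ddof=1)              # bootstrap standard error
```

The simple bootstrap differs only in that `data` would be the already-matched sample and the matching step inside `match_and_estimate` would not be repeated.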

Computation of geographic variables for air pollution prediction models in South Korea

  • Eum, Youngseob;Song, Insang;Kim, Hwan-Cheol;Leem, Jong-Han;Kim, Sun-Young
    • Environmental Analysis Health and Toxicology
    • /
    • v.30
    • /
    • pp.10.1-10.14
    • /
    • 2015
  • Recent cohort studies have relied on exposure prediction models to estimate individual-level air pollution concentrations, because individual air pollution measurements are not available at cohort locations. For such prediction models, geographic variables related to pollution sources are important inputs. We demonstrate the process of computing geographic variables, mostly recorded in 2010, at regulatory air pollution monitoring sites in South Korea. On the basis of previous studies, we finalized a list of 313 geographic variables related to air pollution sources in eight categories: traffic, demographic characteristics, land use, transportation facilities, physical geography, emissions, vegetation, and altitude. We then obtained data from sources such as the Statistics Geographic Information Service and the Korean Transport Database. After integrating all available data into a single database by matching coordinate systems and converting non-spatial data to spatial data, we computed the geographic variables at 294 regulatory monitoring sites in South Korea. The data integration and variable computation were performed using ArcGIS version 10.2 (ESRI Inc., Redlands, CA, USA). For traffic, we computed the distances to the nearest roads and the sums of road lengths within circular buffers of different sizes. In addition, we calculated the numbers of residents, households, housing buildings, companies, and employees within the buffers, and the percentages of areas of different land-use types relative to the total buffer areas. For transportation facilities and physical geography, we computed the distances to the closest public transportation depots and boundary lines. The vegetation index and altitude were estimated at each location using satellite data. Summary statistics of the geographic variables across monitoring sites in Seoul showed different patterns between urban background and urban roadside sites. This study provides practical knowledge on the computation of geographic variables in South Korea, which will improve air pollution prediction models and contribute to subsequent health analyses. (A small sketch of the buffer-based computation follows this entry.)
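The study itself used ArcGIS 10.2; as a hedged, tool-agnostic illustration of the buffer-based variables described above (distance to the nearest road and total road length within a circular buffer), the sketch below uses shapely geometries with made-up example coordinates in a metric projection.

```python
from shapely.geometry import Point, LineString

def road_variables(site, roads, buffer_radius_m):
    """Distance to the nearest road and the sum of road lengths inside a
    circular buffer around a monitoring site.

    site  : shapely Point in a projected (metric) coordinate system
    roads : list of shapely LineStrings in the same coordinate system
    """
    nearest = min(site.distance(r) for r in roads)
    buf = site.buffer(buffer_radius_m)
    length_in_buffer = sum(r.intersection(buf).length for r in roads)
    return nearest, length_in_buffer

# Hypothetical example: one site, two road segments, a 300 m buffer
site = Point(100.0, 200.0)
roads = [LineString([(0, 0), (500, 0)]), LineString([(100, 150), (100, 900)])]
print(road_variables(site, roads, buffer_radius_m=300.0))
```

Counts of residents, households, or companies within a buffer follow the same pattern, with point-in-polygon tests replacing the length computation.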

Evaluation of shape similarity for 3D models (3차원 모델을 위한 형상 유사성 평가)

  • Kim, Jeong-Sik;Choi, Soo-Mi
    • The KIPS Transactions:PartA
    • /
    • v.10A no.4
    • /
    • pp.357-368
    • /
    • 2003
  • Evaluation of shape similarity for 3D models is essential in many areas, including medicine, mechanical engineering, and molecular biology. Moreover, as 3D models have become common on the Web, much research has been done on the classification and retrieval of 3D models. In this paper, we describe methods for 3D shape representation and the major concepts of similarity evaluation, and we analyze the key features of recent work on shape comparison after classifying it into four categories: multi-resolution, topology, 2D image, and statistics-based methods. In addition, we evaluate the performance of the reviewed methods against selected criteria such as uniqueness, robustness, invariance, multi-resolution, efficiency, and comparison scope. Multi-resolution methods reduce comparison time at the cost of increased preprocessing time. Methods using geometric and topological information can compare a wider variety of models and are robust to partial shape comparison. 2D image-based methods incur overheads in time and space complexity. Statistics-based methods allow shape comparison without pose normalization and are robust to affine transformations and noise. (A sketch of one statistics-based descriptor follows this entry.)
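As one concrete instance of the statistics-based category, a D2-style shape distribution compares histograms of distances between random pairs of points on a model; because it is built from pairwise distances, it needs no pose normalization. The minimal sketch below samples mesh vertices rather than surface points, which is a simplification for brevity.

```python
import numpy as np

def d2_descriptor(vertices, n_pairs=10000, n_bins=64, seed=0):
    """Histogram of Euclidean distances between random pairs of points,
    normalized by the maximum sampled distance for scale invariance.

    vertices : (n, 3) array of model vertex coordinates
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(vertices), size=n_pairs)
    j = rng.integers(0, len(vertices), size=n_pairs)
    d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=n_bins, range=(0.0, 1.0), density=True)
    return hist

def shape_distance(desc_a, desc_b):
    """L1 distance between two descriptors: smaller means more similar."""
    return np.abs(desc_a - desc_b).sum()
```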

A Design and Implementation of Music & Image Retrieval Recommendation System based on Emotion (감성기반 음악.이미지 검색 추천 시스템 설계 및 구현)

  • Kim, Tae-Yeun;Song, Byoung-Ho;Bae, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.1
    • /
    • pp.73-79
    • /
    • 2010
  • Emotional intelligence computing processes human emotion by learning from and adapting to it, enabling more efficient interaction between humans and computers. Music and images, perceived through hearing and sight, make their impression in a short time yet linger for a long while, which makes understanding and translating human emotion valuable, for example in marketing. In this paper, we design a retrieval and recommendation system that matches music and images to a user's emotion keyword (irritability, gloom, calmness, joy). The proposed system is defined over these four emotional states and uses an emotion ontology together with the music and image collections to retrieve normalized music and images. Image features are sampled and similarity is measured to obtain the desired results, and image emotion recognition information is classified by mapping it onto a common space through paired correspondence analysis and factor analysis. In experiments, the proposed system showed an 82.4% matching rate across the four emotional states. (A generic retrieval sketch follows this entry.)
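The pipeline described (filtering items by one of four emotion keywords, then ranking by feature similarity) can be illustrated generically. The code below is a hypothetical stand-in with made-up item names and feature vectors, not the authors' system or ontology.

```python
import numpy as np

# Hypothetical catalogue: each item carries an emotion label and a feature vector.
ITEMS = [
    {"name": "song_a.mp3",  "emotion": "calmness", "features": np.array([0.1, 0.8, 0.3])},
    {"name": "image_b.jpg", "emotion": "calmness", "features": np.array([0.2, 0.7, 0.4])},
    {"name": "song_c.mp3",  "emotion": "joy",      "features": np.array([0.9, 0.1, 0.5])},
]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(emotion_keyword, query_features, top_k=5):
    """Filter items by the requested emotion keyword, then rank the survivors
    by cosine similarity between their features and the query features."""
    candidates = [it for it in ITEMS if it["emotion"] == emotion_keyword]
    candidates.sort(key=lambda it: cosine(it["features"], query_features), reverse=True)
    return [it["name"] for it in candidates[:top_k]]

print(recommend("calmness", np.array([0.15, 0.75, 0.35])))
```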