• Title/Summary/Keyword: statistical representation

168 search results

Predicting Unknown Composition of a Mixture Using Independent Component Analysis

  • Lee, Hye-Seon;Park, Hae-Sang;Jun, Chi-Hyuck
    • Korean Data and Information Science Society: Conference Proceedings
    • /
    • 2005.04a
    • /
    • pp.127-134
    • /
    • 2005
  • A suitable representation that achieves conceptual simplicity of the data is essential in statistics and signal processing for subsequent analyses such as prediction, pattern recognition, and spatial analysis. Independent component analysis (ICA) is a statistical method for transforming observed high-dimensional multivariate data into statistically independent components. ICA has been applied increasingly in a wide range of spectral applications, since it can extract unknown components of a mixture from spectra. We focus on applying ICA to separate independent sources and to predict each composition using the extracted components. The theory of ICA is introduced, and an application to metal surface spectra is described, where a subsequent analysis using the non-negative least squares method is performed to predict the composition ratio of each sample. Furthermore, simulation experiments are performed to demonstrate the performance of the proposed approach.

  • PDF
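The composition-prediction step described above, fitting non-negative weights of known component spectra to an observed mixture, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two Gaussian "component spectra" are invented stand-ins for ICA-extracted components, and the NNLS solver is a simple projected-gradient routine.

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Non-negative least squares by projected gradient:
    minimize ||A x - b||^2 subject to x >= 0."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
    return x

# Invented component "spectra" standing in for ICA-extracted components
wl = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.exp(-(wl - 0.3) ** 2 / 0.01),
                     np.exp(-(wl - 0.7) ** 2 / 0.02)])

observed = A @ np.array([0.6, 0.4])       # mixture with known composition
est = nnls_pg(A, observed)
composition = est / est.sum()             # normalize to a composition ratio
print(np.round(composition, 3))           # [0.6 0.4]
```

Normalizing the fitted weights turns the NNLS solution into the composition ratio the abstract refers to.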

3-D Representation of Cavity Region from Ultrasonic Image Acquired in the Time Domain (시간 영역에서 획득된 초음파 영상의 심내강 영역에 대한 3차원 표현)

  • Won, C.H.;Chae, S.P.;Koo, S.M.;Kim, M.N.;Cho, J.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.05
    • /
    • pp.119-122
    • /
    • 1997
  • In this paper, we represent the variation of the heart cavity area in the spatial domain by 3-D rendering. We arrange the 2-D sequence of ultrasonic images acquired in the time domain as volumetric data and extract the heart cavity region from the 3-D data. For the segmentation of the 3-D volume data, we extract the cavity region by expanding it over voxels that share the same statistical properties. Using shading based on the light direction and object normal vectors, we visualize the volume data on the image plane.

  • PDF
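The segmentation step, expanding the cavity region over voxels with the same statistical property, is essentially 3-D region growing. A minimal sketch, assuming a homogeneity criterion based on the running region mean (the abstract does not specify the exact criterion):

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, tol=30.0):
    """Grow a region from `seed`, adding 6-connected voxels whose
    intensity stays within `tol` of the running region mean."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    total, count = float(vol[seed]), 1
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not grown[n]:
                if abs(vol[n] - total / count) <= tol:   # homogeneity test
                    grown[n] = True
                    total += float(vol[n])
                    count += 1
                    queue.append(n)
    return grown

# Toy volume: a dark 4x4x4 "cavity" (20) inside bright tissue (200)
vol = np.full((8, 8, 8), 200.0)
vol[2:6, 2:6, 2:6] = 20.0
mask = region_grow_3d(vol, seed=(3, 3, 3))
print(mask.sum())  # 64 voxels: the cavity only
```

The bright tissue voxels differ from the region mean by far more than the tolerance, so growth stops exactly at the cavity boundary.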

An Application of MDS(Multidimensional Scaling) Methods to the Study of Furniture Usage and Behavior in the Living Room (MDS 분석방법을 이용한 거실의 가구사용행태연구)

  • Cho, Sung-Heui
    • Journal of the Korean housing association
    • /
    • v.1 no.2
    • /
    • pp.1-11
    • /
    • 1990
  • A study of domestic furniture arrangements may reveal the living style relevant to the room as conceived and coded by occupants, and the effects of the physical environment on the structure of behavior settings. The purpose of this study was to investigate, by analyzing furniture usage and behavior as non-reactive, activity-oriented behavioral measures, the occupants' domestic habits as a living style using MDS. MDS (multidimensional scaling) is a statistical technique for creating a spatial representation of data. It is a particularly appropriate technique for analyzing qualitative data such as furniture usage and behavior because it takes into account all of the relationships between items. For the MDS analysis, furniture usage and behavior were examined by housing type based on 114 households in Seoul. The spatial configuration produced by MDS has three dimensions: recognition of room function, pattern of room organization, and understanding of room meaning. The effect of housing type on the dimensions is identical, but the configuration of furniture items differs.

  • PDF
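MDS as described, creating a spatial representation of data from the relationships between items, can be illustrated with classical (Torgerson) MDS, one standard variant. The four "items" below are hypothetical points, not the study's furniture data:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed items in k dimensions from a
    symmetric distance matrix D via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Four hypothetical items at known 2-D positions; MDS recovers the
# configuration up to rotation/reflection from distances alone.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, k=2)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D_hat))  # True: pairwise distances preserved
```

Because the embedding is recovered only up to rotation and reflection, what MDS guarantees is the distance structure, which is what the study interprets dimensionally.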

A New Adaptive Image Separation Scheme using ICA and Innovation Process with EM

  • Kim, Sung-Soo;Ryu, Jeong-Woong;Oh, Bum-Jin
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.96.2-96
    • /
    • 2002
  • In this paper, a new method for mixed image separation is presented using independent component analysis, the innovation process, and expectation-maximization. In general, independent component analysis (ICA) is a widely used statistical signal processing scheme that represents the information from observations as a set of random variables in the form of linear combinations of statistically independent component variables. In various useful applications, ICA provides a more meaningful representation of the data than principal component analysis through a transformation of the data to be quasi-orthogonal to each other, which can be utilized in linear p...

  • PDF
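The core ICA step, unmixing linear combinations into statistically independent components, can be sketched with a minimal symmetric FastICA on two synthetic sources standing in for flattened images. This illustrates plain ICA only; the paper's innovation-process and EM extensions are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def fastica(X, iters=200):
    """Minimal symmetric FastICA (tanh nonlinearity) on whitened data X
    of shape (n_components, n_samples); returns the unmixing matrix W."""
    n = X.shape[0]
    W = rng.standard_normal((n, n))
    for _ in range(iters):
        G = np.tanh(W @ X)
        W_new = (G @ X.T) / X.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        w, V = np.linalg.eigh(W_new @ W_new.T)
        W = V @ np.diag(1 / np.sqrt(w)) @ V.T @ W_new
    return W

# Two non-Gaussian sources (stand-ins for flattened images), linearly mixed
t = np.linspace(0, 8 * np.pi, 5000)
S = np.vstack([np.sign(np.sin(t)), rng.uniform(-1, 1, t.size)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix
X = A @ S

# Whiten the mixtures, then unmix
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(1 / np.sqrt(d)) @ E.T @ X
S_hat = fastica(Xw) @ Xw

# Each recovered component should correlate strongly with one source
C = np.abs(np.corrcoef(np.vstack([S_hat, S]))[:2, 2:])
print(C.max(axis=1))
```

ICA recovers sources only up to permutation, sign, and scale, hence the correlation check rather than a direct comparison.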

English Syntactic Disambiguation Using Parser's Ambiguity Type Information

  • Lee, Jae-Won;Kim, Sung-Dong;Chae, Jin-Seok;Lee, Jong-Woo;Kim, Do-Hyung
    • ETRI Journal
    • /
    • v.25 no.4
    • /
    • pp.219-230
    • /
    • 2003
  • This paper describes a rule-based approach for syntactic disambiguation used by the English sentence parser in E-TRAN 2001, an English-Korean machine translation system. We propose Parser's Ambiguity Type Information (PATI) to automatically identify the types of ambiguities observed in competing candidate trees produced by the parser and synthesize the types into a formal representation. PATI provides an efficient way of encoding knowledge into grammar rules and calculating rule preference scores from a relatively small training corpus. In the overall scoring scheme for sorting the candidate trees, the rule preference scores are combined with other preference functions that are based on statistical information. We compare the enhanced grammar with the initial one in terms of the amount of ambiguity. The experimental results show that the rule preference scores could significantly increase the accuracy of ambiguity resolution.

  • PDF
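The overall scoring scheme, combining rule preference scores with other statistical preference functions to sort candidate trees, might look like the following sketch. All feature names and weights here are hypothetical, not E-TRAN 2001's:

```python
# Hypothetical candidate-tree scoring: a weighted sum of the rule
# preference score and other statistical preference functions.
def combined_score(tree, weights=(0.5, 0.3, 0.2)):
    features = (tree["rule_pref"], tree["lexical_pref"], tree["structural_pref"])
    return sum(w * f for w, f in zip(weights, features))

candidates = [
    {"id": "t1", "rule_pref": 0.9, "lexical_pref": 0.4, "structural_pref": 0.7},
    {"id": "t2", "rule_pref": 0.6, "lexical_pref": 0.8, "structural_pref": 0.5},
]
best = max(candidates, key=combined_score)
print(best["id"])  # t1
```

Sorting by such a combined score is what lets a strong rule preference outweigh weaker statistical evidence, as the abstract's results suggest.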

GIS based Non-Point Source Pollution Assessment

  • Sadeghi-Niaraki, Abolghasem;Kim, Kye-Hyun;Lee, Chol-Young
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.437-440
    • /
    • 2008
  • In recent years, pollution load calculation has become a research topic that has resulted in the development of numerous GIS modeling methods. Existing methods for nonpoint source (NPS) pollution cannot identify and calculate the amount of pollution precisely. This research shows that associating typical pollutant concentrations with land uses in a watershed can provide a reasonably accurate characterization of nonpoint source pollution in the watershed using Expected Mean Concentrations (EMCs). The GIS-based pollution assessment method is performed for three pollutant constituents: BOD, TN, and TP. First, the runoff grid is estimated from the precipitation grid and runoff coefficients. Then, the NPS pollution loads are calculated by a grid-based method. Finally, the outputs are evaluated by statistical techniques. The results illustrate the merits of the approach. This model verified that a GIS-based method of estimating spatially distributed NPS pollution loads can lead to a more accurate representation of the real world.

  • PDF
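The grid computation described, runoff from precipitation and runoff coefficients, then EMC-based loads, can be sketched with NumPy. All grid values, land-use coefficients, and EMCs below are invented toy numbers, not the paper's data:

```python
import numpy as np

# Toy 3x3 watershed: precipitation (mm) and land-use runoff coefficients
precip = np.full((3, 3), 100.0)
runoff_coef = np.array([[0.9, 0.9, 0.3],      # 0.9 ~ urban, 0.3 ~ forest
                        [0.9, 0.3, 0.3],
                        [0.3, 0.3, 0.3]])
emc_bod = np.where(runoff_coef > 0.5, 12.0, 2.0)   # EMC for BOD, mg/L

cell_area = 100.0                                   # m^2 per grid cell
runoff = precip / 1000.0 * runoff_coef * cell_area  # runoff volume, m^3
load_bod = emc_bod * runoff / 1000.0                # mg/L * m^3 = g; /1000 -> kg
print(round(load_bod.sum(), 3))  # 0.36 kg of BOD over the watershed
```

Summing the per-cell loads over the grid gives the watershed total; TN and TP would use the same runoff grid with their own EMC values.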

Characterization of Predicted Residual Sum of Squares for Detecting Joint Influence in Regression (회귀(回歸)에서 결합영향력(結合影響力)를 위(爲)한 예측잔차(豫測殘差)제곱합(合)의 특성(特性)에 대(對)한 연구(硏究))

  • Oh, Kwang-Sik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.3 no.1
    • /
    • pp.1-16
    • /
    • 1992
  • In regression diagnostics, a number of joint influence measures based on various statistical tools have been discussed. We consider an alternative representation in terms of the predicted residual and the g-leverage determined by the remaining points. By this approach, we choose the predicted residual sum of squares for the key points as a joint influence measure and propose a new expression for it so that we can extend the single-case form to the multiple-case one. Furthermore, we suggest a search method for joint influence after investigating some properties of the new expression.

  • PDF
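The predicted residual sum of squares (PRESS) that the measure builds on can be computed without refitting, via the identity that the leave-one-out prediction error equals e_i / (1 - h_ii), where h_ii is the diagonal of the hat matrix. A sketch on simulated data, verifying the shortcut against explicit case deletion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + 0.5 * rng.standard_normal(n)

# PRESS via the hat-matrix shortcut: predicted residual_i = e_i / (1 - h_ii)
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - H @ y
press = np.sum((e / (1.0 - np.diag(H))) ** 2)

# Brute-force check: actually delete each case and refit
press_loo = sum(
    (y[i] - X[i] @ np.linalg.lstsq(np.delete(X, i, 0),
                                   np.delete(y, i), rcond=None)[0]) ** 2
    for i in range(n)
)
print(np.isclose(press, press_loo))  # True: the shortcut matches case deletion
```

The single-case identity is what the paper generalizes to joint (multiple-case) deletion.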

Selection of a Probability Distribution for Modeling Labor Productivity during Overtime

  • Woo, Sung-Kwon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.6 no.1 s.23
    • /
    • pp.49-57
    • /
    • 2005
  • Construction labor productivity, which is the greatest source of variation in overall construction productivity, is a critical factor in determining project performance in terms of time and cost, especially during scheduled overtime when extra time and cost are invested. The objective of this research is to select an appropriate type of probability distribution function to represent the variability of daily labor productivity during overtime. Based on the results of a statistical analysis of labor performance during different weekly work hours, the lognormal distribution is selected in order to take advantage of the ease of generating correlated random numbers. The selected lognormal distribution can be used for the development of a simulation model in construction scheduling, cost analysis, and other application areas where representation of the correlations between variables is essential.
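The "ease of generating correlated random numbers" mentioned above comes from the fact that correlated lognormals are just exponentiated correlated normals. A sketch with invented parameters (log-scale standard deviation 0.2, correlation 0.7):

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-scale parameters (invented): sd 0.2 per variable, correlation 0.7
sd, rho = 0.2, 0.7
cov = np.array([[sd**2, rho * sd**2],
                [rho * sd**2, sd**2]])
L = np.linalg.cholesky(cov)

# Correlated lognormals = exp of correlated normals (Cholesky coloring)
z = rng.standard_normal((2, 100_000))
productivity = np.exp(L @ z)   # two correlated daily productivity factors

sample_corr = np.corrcoef(np.log(productivity))[0, 1]
print(round(sample_corr, 2))  # 0.7
```

The correlation is specified on the log scale; simulation inputs for scheduling or cost models would be drawn exactly this way, one column per simulated day.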

Different Approaches towards Fuzzy Database Systems: A Survey

  • Rundensteiner, Elke A.;Hawkes, Lois Wright
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.3 no.1
    • /
    • pp.65-75
    • /
    • 1993
  • Fuzzy data is a phenomenon often occurring in real life. There is the inherent vagueness of classification terms referring to a continuous scale, the uncertainty of linguistic terms such as "I almost agree" or the vagueness of terms and concepts due to the statistical variability in communication [20] and many more. Previously, such fuzzy data was approximated by non-fuzzy (crisp) data, which obviously did not lead to a correct and precise representation of the real world. Fuzzy set theory has been developed to represent and manipulate fuzzy data [18]. Explicitly managing the degree of fuzziness in databases allows the system to distinguish between what is known, what is not known and what is partially known. Systems in the literature whose specific objective is to handle imprecision in databases present various approaches. This paper is concerned with the different ways uncertainty and imprecision are handled in database design. It outlines the major areas of fuzzification in (relational) database systems.

  • PDF
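One common way to manage degrees of fuzziness in a relation, as surveyed here, is to attach a membership degree to each tuple. A toy sketch with an invented piecewise-linear membership function for the vague term "tall" and an alpha-cut selection:

```python
# Hypothetical fuzzy relation: each tuple carries a membership degree in
# [0, 1] expressing how strongly it satisfies a vague predicate.
people = [("Ana", 1.78), ("Ben", 1.65), ("Cho", 1.90)]

def tall(height_m):
    """Piecewise-linear membership function for the vague term 'tall'."""
    if height_m <= 1.60:
        return 0.0
    if height_m >= 1.90:
        return 1.0
    return (height_m - 1.60) / 0.30

fuzzy_relation = [(name, h, tall(h)) for name, h in people]

# Fuzzy selection with an alpha-cut: keep tuples with membership >= 0.5
result = [name for name, _, mu in fuzzy_relation if mu >= 0.5]
print(result)  # ['Ana', 'Cho']
```

Storing the degree rather than a crisp yes/no is what lets such a system distinguish what is known, partially known, and unknown.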

Differential Power Analysis on Countermeasures Using Binary Signed Digit Representations

  • Kim, Tae-Hyun;Han, Dong-Guk;Okeya, Katsuyuki;Lim, Jong-In
    • ETRI Journal
    • /
    • v.29 no.5
    • /
    • pp.619-632
    • /
    • 2007
  • Side channel attacks are a very serious menace to embedded devices with cryptographic applications. To counteract such attacks many randomization techniques have been proposed. One efficient technique in elliptic curve cryptosystems randomizes addition chains with binary signed digit (BSD) representations of the secret key. However, when such countermeasures have been used alone, most of them have been broken by various simple power analysis attacks. In this paper, we consider combinations which can enhance the security of countermeasures using BSD representations by adding additional countermeasures. First, we propose several ways the improved countermeasures based on BSD representations can be attacked. In an actual statistical power analysis attack, the number of samples plays an important role. Therefore, we estimate the number of samples needed in the proposed attack.

  • PDF
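The binary signed digit (BSD) representations referred to here use digits in {-1, 0, 1}; the countermeasures in the paper randomize over the many BSD representations of a key, but the canonical instance, the non-adjacent form (NAF), shows the digit set concretely. A minimal sketch:

```python
def naf(k):
    """Non-adjacent form: the canonical binary signed digit (BSD)
    representation, digits in {-1, 0, 1}, no two adjacent non-zeros."""
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = 2 - (k % 4)    # 1 if k % 4 == 1, else -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits              # least significant digit first

d = naf(7)
print(d)  # [-1, 0, 0, 1], i.e. 7 = 8 - 1
```

A randomized countermeasure would instead pick one BSD representation of the secret scalar at random per execution, so that the addition chain, and hence the power trace, varies from run to run.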