• Title/Summary/Keyword: s theorem.

Search Result 1,346, Processing Time 0.029 seconds

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min; Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.5 s.305 / pp.19-26 / 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring the relative blur between images using wavelet analysis is described. Most previous methods use inverse filtering to determine the measure of defocus. These methods suffer from fundamental problems such as inaccuracies in finding the frequency-domain representation, windowing effects, and border effects. Besides these deficiencies, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot lead to accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that is capable of performing both local analysis and windowing with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized image ratio of wavelet power, by Parseval's theorem, is closely related to the blur parameter and depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
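
A minimal sketch of the core idea, assuming pywt for the wavelet transform (function and window names are mine, not the authors'): by Parseval's theorem the energy of the detail coefficients tracks local signal power, so a per-window energy ratio between two differently focused images of the same scene acts as a relative blur measure.

```python
# Sketch (not the authors' code): local wavelet-energy ratio as a blur measure.
import numpy as np
import pywt

def wavelet_energy(patch, wavelet="db2", level=2):
    """Sum of squared detail coefficients -- a Parseval-style power measure."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    return sum(float(np.sum(c ** 2)) for band in coeffs[1:] for c in band)

def blur_ratio(img_near, img_far, win=16):
    """Per-window normalized energy ratio; more defocus -> lower values."""
    h, w = img_near.shape
    ratio = np.ones((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            sl = np.s_[i * win:(i + 1) * win, j * win:(j + 1) * win]
            e1 = wavelet_energy(img_near[sl])
            e2 = wavelet_energy(img_far[sl])
            if e1 + e2 > 0:
                ratio[i, j] = e1 / (e1 + e2)  # in [0, 1]; maps to blur, then depth
    return ratio
```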

A Potential-Based Panel Method for the Analysis of a 2-Dimensional Partially Cavitating Hydrofoil (양력판 이론에 의한 2차원 수중익의 부분 캐비티 문제 해석)

  • Lee, Chang-Sup
    • Bulletin of the Society of Naval Architects of Korea / v.26 no.4 / pp.27-34 / 1989
  • A potential-based panel method is formulated for the analysis of a partially cavitating 2-dimensional hydrofoil. The method employs dipoles and sources distributed on the foil surface to represent the lifting and cavity problems, respectively. The kinematic boundary condition on the wetted portion of the foil surface is satisfied by requiring that the total potential vanish in the inner flow region of the foil. The dynamic boundary condition on the cavity surface is satisfied by requiring that the potential vary linearly, i.e., that the velocity be constant. Green's theorem then yields a potential-based boundary value problem rather than the usual velocity-based formulation. With the singularities distributed on the exact hydrofoil surface, the pressure distributions are predicted with better accuracy than zero-thickness hydrofoil theory, especially near the leading edge. The theory then predicts the cavity shape and cavitation number for an assumed cavity length. To improve accuracy, the sources and dipoles on the cavity surface are moved to the newly computed cavity surface, where the boundary conditions are satisfied again. Five iterations were found necessary to obtain converged values, while only two are sufficient for engineering purposes. (The underlying integral identity is sketched after this entry.)

  • PDF
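
For orientation, potential-based (Morino-type) panel formulations like the one described build on Green's identity: the boundary representation of the exterior perturbation potential evaluates to zero at any field point $p$ inside the foil, giving an integral equation for the surface dipole ($\phi$) and source ($\partial\phi/\partial n$) strengths. The notation below is mine, not the paper's.

```latex
0 \;=\; \oint_{S}\left[\phi(q)\,\frac{\partial G(p,q)}{\partial n_q}
      \;-\;\frac{\partial \phi(q)}{\partial n_q}\,G(p,q)\right]\mathrm{d}s(q),
\qquad G(p,q) \;=\; \frac{1}{2\pi}\,\ln r(p,q)
```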

Identifying Copy Number Variants under Selection in Geographically Structured Populations Based on F-statistics

  • Song, Hae-Hiang; Hu, Hae-Jin; Seok, In-Hae; Chung, Yeun-Jun
    • Genomics & Informatics / v.10 no.2 / pp.81-87 / 2012
  • Large-scale copy number variants (CNVs) in humans provide raw material for delineating population differences, as natural selection may have affected at least some of the CNVs discovered thus far. Although the examination of relatively large numbers of specific ethnic groups has recently begun with regard to inter-ethnic differences in CNVs, identifying and understanding particular instances of natural selection have not been performed. The traditional $F_{ST}$ measure, obtained from differences in allele frequencies between populations, has been used to identify CNV loci subject to geographically varying selection. Here, we review advances in, and the application of, multinomial-Dirichlet likelihood methods of inference for identifying genome regions that have been subject to natural selection with $F_{ST}$ estimates. The content presented is not new; however, this review clarifies how the methods can be applied to CNV data, which remain largely unexplored. A hierarchical Bayesian method, implemented via Markov chain Monte Carlo, estimates locus-specific $F_{ST}$ and can identify outlying CNV loci with large $F_{ST}$ values. By applying this Bayesian method to publicly available CNV data, we identified CNV loci that show signals of natural selection, which may elucidate the genetic basis of human disease and diversity.
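
As background for the outlier idea, a minimal locus-wise Wright's $F_{ST}$ sketch (the basic heterozygosity-based estimator, not the review's multinomial-Dirichlet or hierarchical Bayesian machinery; all names are mine):

```python
# Sketch: locus-wise F_ST = (H_T - H_S) / H_T from population frequencies.
import numpy as np

def fst_per_locus(freqs):
    """freqs: (n_pops, n_loci) allele (or CNV-state) frequencies."""
    p_bar = freqs.mean(axis=0)                     # pooled frequency per locus
    h_t = 2 * p_bar * (1 - p_bar)                  # total expected heterozygosity
    h_s = (2 * freqs * (1 - freqs)).mean(axis=0)   # mean within-pop heterozygosity
    with np.errstate(invalid="ignore", divide="ignore"):
        fst = (h_t - h_s) / h_t
    return np.nan_to_num(fst)                      # monomorphic loci -> 0

# Loci in the upper tail of the F_ST distribution are candidates for
# geographically varying selection.
freqs = np.array([[0.10, 0.50, 0.85],
                  [0.12, 0.55, 0.20]])
outliers = np.where(fst_per_locus(freqs) > 0.2)[0]   # flags locus 2 here
```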

A Korean Homonym Disambiguation System Based on a Statistical Model Using Weights

  • Kim, Jun-Su; Lee, Wang-Woo; Kim, Chang-Hwan; Ock, Cheol-young
    • Proceedings of the Korean Society for Language and Information Conference / 2002.02a / pp.166-176 / 2002
  • A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with it. This paper uses semantic information (co-occurrence data) obtained from the sense definitions in the part-of-speech (POS) tagged UMRD-S. We analyzed the results of an experiment on a homonym disambiguation system based on a statistical model to which Bayes' theorem is applied, and suggest a model that incorporates a sense-rate weight and a weight for the distance to adjacent words to improve accuracy. Applying the homonym disambiguation system using semantic information to the homonyms appearing in dictionary definition sentences showed an average accuracy of 98.32% for the 200 most frequent homonyms. We selected 49 of those 200 homonyms (31 substantives and 18 predicates) and performed an experiment on 50,703 sentences containing one of the 49 homonyms, extracted from the Sejong Project tagged corpus (a corpus of morphologically analyzed words) of 3.5 million words. Assigning the sense-rate weight (prior probability) and the distance weight over the 5 words before and after the homonym to be disambiguated improved accuracy over existing statistical models by 2.93%. (A small scoring sketch follows this entry.)

  • PDF
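
A minimal scoring sketch under my own naming (not the paper's system): Bayes-style log-probability scoring that combines a sense-rate prior weight with distance-weighted co-occurrence evidence from the five words before and after the homonym.

```python
# Sketch: weighted naive-Bayes sense scoring for a homonym.
import math

def disambiguate(senses, context, prior, cooc, alpha=1.0, beta=1.0):
    """
    senses:  list of sense ids for the homonym
    context: list of (word, distance) pairs, distance in 1..5
    prior:   prior[sense] = relative sense frequency (the 'sense rate')
    cooc:    cooc[sense][word] = co-occurrence probability estimate
    alpha, beta: weights on the prior and the distance-decayed evidence
    """
    best, best_score = None, -math.inf
    for s in senses:
        score = alpha * math.log(prior.get(s, 1e-9))
        for word, dist in context:
            p = cooc.get(s, {}).get(word, 1e-9)     # smoothed co-occurrence
            score += (beta / dist) * math.log(p)    # nearer words weigh more
        if score > best_score:
            best, best_score = s, score
    return best
```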

The division algorithm for the finite decimals (유한소수에서의 나눗셈 알고리즘(Division algorithm))

  • Kim, Chang-Su; Jun, Young-Bae; Roh, Eun-Hwan
    • The Mathematical Education / v.50 no.3 / pp.309-327 / 2011
  • In this paper, we extend the division algorithm for the integers to the finite decimals. Although the remainder for finite decimals can be defined in various ways, it can be regarded as 'the remained amount' resulting from the division, and as 'the remainder' proper only when 'the remained amount' is determined uniquely by certain conditions. From this definition of 'the remainder' for finite decimals, it can be inferred that both 'the division by equal part' and 'the division into equal parts' are appropriate for division of finite decimals where 'the remainder' is concerned. The finite decimal, based on a unit of measure, makes it possible to think of 'the remainder' in both settings: 1) in 'the division by equal part' when the quotient is a discrete amount, and 2) in 'the division into equal parts' when the quotient is either a discrete or a continuous amount. In this division context, the remainder for finite decimals must carry the meanings of validity and completeness as well. The division algorithm theorem for finite decimals can be established based on the unit of measure of 'the remainder' together with those of the divisor and the dividend. This paper investigates the meaning of the division algorithm for finite decimals and concludes that the theory makes it easy to find the remainder in unusual as well as usual units of measure.
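
A small numeric sketch of the idea, using Python's decimal module (the scaling step mirrors the paper's reliance on a unit of measure): reducing both finite decimals to integers in a common unit makes the integer division algorithm apply, so the remainder is determined uniquely and comes back in the original unit.

```python
# Sketch: division algorithm for finite decimals via a common unit of measure.
from decimal import Decimal

def decimal_divmod(a: Decimal, b: Decimal):
    """Return (q, r) with a = q*b + r, q a nonnegative integer, 0 <= r < b."""
    assert a >= 0 and b > 0
    unit = Decimal(10) ** min(a.as_tuple().exponent, b.as_tuple().exponent)
    ai, bi = int(a / unit), int(b / unit)    # both exact integers in that unit
    q, r = divmod(ai, bi)                    # the integer division algorithm
    return Decimal(q), Decimal(r) * unit     # remainder back in original units

q, r = decimal_divmod(Decimal("3.45"), Decimal("1.2"))
# q == 2 and r == Decimal('1.05'):  3.45 = 2 * 1.2 + 1.05, with 0 <= 1.05 < 1.2
```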

Development of near field Acoustic Target Strength equations for polygonal plates and applications to underwater vehicles (근접장에서 다각 평판에 대한 표적강도 이론식 개발 및 수중함의 근거리 표적강도 해석)

  • Cho, Byung-Gu; Hong, Suk-Yoon; Kwon, Hyun-Wung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2007.05a / pp.1062-1073 / 2007
  • Acoustic target strength (TS) is a major parameter of the active sonar equation, indicating the ratio of the intensity radiated from the source to the intensity re-radiated by a target. In developing a TS equation, it is assumed that the radiated pressure is known and the re-radiated intensity is unknown. This research provides a TS equation for polygonal plates that is applicable to near-field acoustics. The Helmholtz-Kirchhoff formula is used as the primary equation for solving the re-radiated pressure field; it contains a surface (double) integral representation. The double integral can be reduced to a closed form involving only a line (single) integral over the boundary of the surface by applying Stokes' theorem; using such line-integral representations reduces the cost of numerical calculation. The Kirchhoff approximation is used to obtain the surface values such as pressure and particle velocity. Finally, a generalized definition of the sonar cross section (SCS) applicable to the near field is suggested. The TS equation for polygonal plates in the near field is developed from these three elements: the reduction to a line-integral representation, the Kirchhoff approximation, and the generalized definition of the SCS. The resulting equation is applicable to the near field, with no approximations other than the Kirchhoff approximation, and examinations with various types of models show that it performs reliably. To analyze a model of general shape, a submarine-type model was selected and successfully analyzed. (The primary integral is sketched after this entry.)

  • PDF
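
For reference, the Helmholtz-Kirchhoff formula named as the primary equation has the standard free-space form below (notation mine); the paper's contribution is reducing this surface integral to a line integral around the plate boundary via Stokes' theorem.

```latex
p(P) \;=\; \frac{1}{4\pi}\oint_{S}\left[\,p(Q)\,\frac{\partial}{\partial n_Q}\!\left(\frac{e^{ikr}}{r}\right)
      \;-\;\frac{e^{ikr}}{r}\,\frac{\partial p(Q)}{\partial n_Q}\right]\mathrm{d}S(Q),
\qquad r = \lvert P - Q \rvert
```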

Game Algorithm for Power Control in Cognitive Radio Networks (전파 인지 네트워크에서 전력 제어를 위한 게임 알고리즘)

  • Rho, Chang-Bae; Halder, N.; Song, Ju-Bin
    • Journal of Advanced Navigation Technology / v.13 no.2 / pp.201-207 / 2009
  • Recently, effective spectrum-resource technologies for cognitive radio networks have been studied using a game-theoretic approach. Radio resource management requires an effective scheme because the performance of a radio communication system depends heavily on it. In this paper, we suggest a game-theoretic algorithm for the adaptive power control required in distributed cognitive radio networks, where secondary users must adapt their transmit power without central coordination. Many existing results suggest the feasibility of game-theoretic approaches to communication-resource sharing; here we propose a practical game algorithm, based on a Nash equilibrium theorem, that drives all secondary users to the Nash equilibrium. In particular, a game model was analyzed for adaptive power control in a cognitive radio network employing DSSS (direct-sequence spread spectrum). For K=63 and N=12 in the DSSS network, the suggested algorithm converged in fewer than 200 iterations. (A best-response power-control sketch follows this entry.)

  • PDF
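
A minimal best-response sketch under assumptions of mine (SIR-target utilities and a standard interference-function iteration; the paper's exact game may differ): each secondary user scales its power toward a target SIR, and the iteration's fixed point is the Nash equilibrium. With the abstract's DSSS figures (processing gain K=63, N=12 users) such an iteration would be expected to settle well within 200 iterations when the targets are feasible.

```python
# Sketch: distributed SIR-balancing power control (best-response iteration).
import numpy as np

def power_control(G, gamma, K=63, noise=1e-13, max_iters=200, tol=1e-9):
    """G[i][j]: gain from user j's Tx to user i's Rx; gamma: target SIRs."""
    n = len(gamma)
    p = np.full(n, 1e-6)                              # initial powers [W]
    for it in range(max_iters):
        interf = G @ p - np.diag(G) * p + noise       # interference per user
        sir = K * np.diag(G) * p / interf             # despread SIR, gain K
        p_new = np.clip(p * gamma / sir, 0.0, 1.0)    # best response, capped
        if np.max(np.abs(p_new - p)) < tol:           # fixed point reached
            return p_new, it + 1
        p = p_new
    return p, max_iters
```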

Vanishing Points Detection in Indoor Scene Using Line Segment Classification (선분분류를 이용한 실내영상의 소실점 추출)

  • Ma, Chaoqing; Gwun, Oubong
    • The Journal of the Korea Contents Association / v.13 no.8 / pp.1-10 / 2013
  • This paper proposes a method to detect the vanishing points of an indoor scene using line segment classification. A two-stage process detects the vanishing points efficiently. In the first stage, the method examines whether the image composition is a one-point or a two-point perspective projection; if it is a two-point perspective projection, a horizontal line through the detected vanishing point is found for line segment classification. In the second stage, the method detects the two vanishing points precisely using line segment classification. The method is evaluated on synthetic images and an image DB. For synthetic images with added noise, the vanishing-point detection error stays under 16 pixels until the noise reaches 60% of the image. On A. Quattoni and A. Torralba's image DB, the vanishing-point detection rate is over 87%.
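
A minimal sketch of the second-stage geometry under my own formulation (not the paper's code): once line segments are classified into a pencil sharing a direction, that pencil's vanishing point is the least-squares common intersection of the segment lines in homogeneous coordinates.

```python
# Sketch: vanishing point of one classified pencil of line segments.
import numpy as np

def segment_to_line(p1, p2):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(segments):
    """segments: list of (p1, p2) endpoint pairs from one class."""
    L = np.array([segment_to_line(p1, p2) for p1, p2 in segments])
    _, _, vt = np.linalg.svd(L)        # v minimizing ||L v|| over unit vectors
    v = vt[-1]
    if abs(v[2]) < 1e-12:              # ideal point: pencil is parallel in image
        return None
    return v[:2] / v[2]                # finite vanishing point (x, y)
```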

Statistical analysis for HTS coil considering inhomogeneous Ic distribution of HTS tape

  • Jin, Hongwoo; Lee, Jiho; Lee, Woo Seung; Ko, Tae Kuk
    • Progress in Superconductivity and Cryogenics / v.17 no.2 / pp.41-44 / 2015
  • The critical current of a high-temperature superconducting (HTS) coil is influenced by its own self magnetic field. The direction and density distribution of the magnetic field around the coil are fixed once the shape of the coil is decided. If the entire HTS tape had a homogeneous $I_c$ distribution, a quench would initiate at a fixed location on the coil; the actual HTS tape, however, has an inhomogeneous $I_c$ distribution along its length. If the $I_c$ distribution of the HTS tape is known, we can predict the spot within the HTS coil with the highest probability of initiating a quench. In this paper, the $I_c$ distribution within the HTS coil under the self-field effect is simulated in MATLAB. In the simulation, the $I_c$ distribution along the HTS tape is assumed to follow a Gaussian distribution by the central limit theorem. The HTS coil model is divided into several segments, and the critical current of each segment is calculated based on the generalized Kim model. A single-pancake model is simulated, and the self-field of the HTS coil is calculated by the Biot-Savart law. As a result, the quench-initiating spot in an actual HTS coil can be predicted statistically, and this statistical analysis can help detect or protect against quenches in HTS coils.
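
A minimal sketch of the statistical procedure as I read it (parameter values and names are mine, with a simple Kim-type derating $I_c(B)=I_{c0}/(1+B/B_0)$ standing in for the generalized Kim model): sample a Gaussian critical current per tape segment, derate it by the local self-field, and tally which segment is weakest across trials.

```python
# Sketch: Monte-Carlo prediction of the likely quench-initiation segment.
import numpy as np

def quench_spot_histogram(B_seg, ic_mean=100.0, ic_std=3.0, B0=0.1, trials=10000):
    """B_seg: per-segment self-field magnitude [T] (e.g., from Biot-Savart)."""
    rng = np.random.default_rng(0)
    B_seg = np.asarray(B_seg, dtype=float)
    counts = np.zeros(B_seg.size, dtype=int)
    for _ in range(trials):
        ic0 = rng.normal(ic_mean, ic_std, size=B_seg.size)  # inhomogeneous tape Ic [A]
        ic = ic0 / (1.0 + B_seg / B0)                       # Kim-type field derating
        counts[np.argmin(ic)] += 1                          # weakest segment quenches first
    return counts / trials                                  # probability per segment
```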

A Three Dimensional Study on the Probability of Slope Failure (사면(斜面)의 삼차원(三次元) 파괴확률(破壞確率)에 관한 연구(硏究))

  • Kim, Young Su; Lim, Byuong Zo; Paik, Young Shik
    • KSCE Journal of Civil and Environmental Engineering Research / v.3 no.3 / pp.95-106 / 1983
  • The probability of failure, rather than the conventional factor of safety, is used to analyze the reliability of three-dimensional slope failure. The strength parameters are assumed to be normal and beta variates, interval-estimated under a specified confidence level with maximum likelihood estimation. The pseudo-normal and beta random variables are generated using the uniform probability transformation method, per the central limit theorem and the rejection method. By means of a Monte Carlo simulation, the probability of failure is defined as $P_f = M/N$, where $N$ is the total number of trials and $M$ is the total number of failures. Some of the conclusions derived from the case study include: 1. If the strength parameters are assumed to be normal variates, the relationship between the safety factor and the probability of failure is fairly consistent, regardless of the procedure of analysis and the dimensions of the assumed rupture surfaces. 2. If the strength parameters are beta variates, however, hardly any general relationship between $F_s$ and $P_f$ is found. (A minimal Monte-Carlo sketch follows this entry.)

  • PDF
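
A minimal Monte-Carlo sketch of the $P_f = M/N$ estimate (an infinite-slope safety factor stands in for the paper's three-dimensional rupture surfaces, and all parameter values are mine):

```python
# Sketch: P_f = M/N with normally distributed strength parameters.
import numpy as np

def failure_probability(n_trials=100_000, seed=1):
    rng = np.random.default_rng(seed)
    c = rng.normal(15.0, 3.0, n_trials)                # cohesion [kPa]
    phi = np.radians(rng.normal(25.0, 2.5, n_trials))  # friction angle
    gamma, H, beta = 18.0, 10.0, np.radians(30.0)      # unit wt [kN/m^3], depth [m], slope
    fs = (c + gamma * H * np.cos(beta) ** 2 * np.tan(phi)) / (
        gamma * H * np.sin(beta) * np.cos(beta)
    )
    return np.mean(fs < 1.0)                           # M failures out of N trials
```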