• Title/Summary/Keyword: Level Set


FLUID SIMULATION METHODS FOR COMPUTER GRAPHICS SPECIAL EFFECTS (컴퓨터 그래픽스 특수효과를 위한 유체시뮬레이션 기법들)

  • Jung, Moon-Ryul
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference (한국전산유체공학회 학술대회논문집)
    • /
    • 2009.11a
    • /
    • pp.1-1
    • /
    • 2009
  • In this presentation, I talk about various fluid simulation methods that have been developed for computer graphics special effects since 1996. They are all based on CFD but sacrifice physical reality for visual plausibility and computation time. But as computer speeds increase rapidly and the capability of the GPU (graphics processing unit) improves, methods aiming at more physical realism have been tried. In this talk, I will focus on four aspects of fluid simulation methods for computer graphics: (1) particle level-set methods, (2) particle-based simulation, (3) methods for exact satisfaction of the incompressibility constraint, and (4) GPU-based simulation. (1) Particle level-set methods evolve the surface of the fluid by means of the zero level set and a band of massless marker particles on both sides of it. The evolution of the zero level set captures the surface in an approximate manner, the evolution of the marker particles captures the fine details of the surface, and the zero level set is corrected from the particle positions in each step of the evolution. (2) Recently the particle-based Lagrangian approach to fluid simulation has gained popularity, because it automatically respects mass conservation and the difficulty of tracking the surface geometry has been somewhat addressed. (3) Until recently, fluid simulation algorithms were dominated by approximate fractional step methods. They split the Navier-Stokes equation in two, so that the first step solves the equation without considering the incompressibility constraint and the second finds the pressure that satisfies the constraint. In this approach the first step inevitably introduces error, producing numerical diffusion in the solution. But recently exact fractional step methods without this error have been developed by fluid mechanics scholars, and another method was introduced (by computer graphics scholars) that satisfies the incompressibility constraint by formulating the fluid in terms of the vorticity field rather than the velocity field. (4) Finally, I want to mention GPU implementations of fluid simulation, which take advantage of the fact that the discrete fluid equations can be solved in parallel.
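The core of a level-set method, which the abstract summarizes, is advecting a signed-distance function and reading the interface off its zero crossing. The following minimal 1D sketch (my own illustration, not code from the talk; names and parameters are assumptions) shows the idea without the 3D machinery or marker-particle correction:

```python
# Minimal 1D level-set advection sketch. Real particle level-set methods
# work in 3D and add massless marker particles to recover sub-grid detail;
# here we only advect the signed-distance function and track its zero.

def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a signed-distance function phi
    by a constant velocity u (u > 0 assumed for this upwind stencil)."""
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, n):
            # upwind difference: information travels from the left for u > 0
            new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])
        phi = new
    return phi

def zero_crossing(phi, dx):
    """Locate the interface (zero level set) by linear interpolation."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1]:
            return dx * (i + phi[i] / (phi[i] - phi[i + 1]))
    return None

dx, dt, u = 0.01, 0.005, 1.0           # CFL number u*dt/dx = 0.5
xs = [i * dx for i in range(200)]
phi0 = [x - 0.5 for x in xs]           # interface initially at x = 0.5
phi = advect_level_set(phi0, u, dx, dt, steps=40)
print(zero_crossing(phi, dx))          # interface moved to about x = 0.7
```

Because the initial profile is linear, the upwind scheme transports the zero crossing without distortion here; on curved profiles the same scheme introduces the numerical diffusion that marker particles are meant to counteract.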


A Study of Variations in Cost-of-Living Index (도시가계 생계비 산정기준의 다양화를 위한 연구)

    • Journal of Families and Better Life
    • /
    • v.15 no.4
    • /
    • pp.137-148
    • /
    • 1997
  • The purpose of this study is to set various cost-of-living standards utilizing published national data. Annual data for 1995 from the Family Income and Expenditure Survey were used to set the standards of living. Four indices reflecting the health-and-decency level, the normal level, the minimum health-and-decency level, and the pauper level were suggested, and the cost of living at each level was estimated. Results showed that the cost-of-living figures estimated in this study were not very different from those of former studies, but the names of the standards of living need to be changed.


Gene Set and Pathway Analysis of Microarray Data (마이크로어레이 데이터의 유전자 집합 및 대사 경로 분석)

  • Kim Seon-Young
    • KOGO NEWS
    • /
    • v.6 no.1
    • /
    • pp.29-33
    • /
    • 2006
  • Gene set analysis is a new concept and method for analyzing and interpreting microarray gene expression data: it tries to extract biological meaning from the data at the gene-set level rather than at the individual-gene level. Compared with methods that select a few tens or hundreds of genes before gene ontology and pathway analysis, gene set analysis identifies important gene ontology terms and pathways more consistently and performs well even on data sets with minimal or moderate gene expression changes. Moreover, gene set analysis is useful for comparing multiple gene expression data sets dealing with similar biological questions. This review briefly summarizes the rationale behind gene set analysis and introduces several algorithms and tools now available for it.
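The simplest form of the analysis the review describes is an over-representation test: given how many selected genes fall in a gene set, the hypergeometric tail gives an enrichment p-value. A minimal sketch (the counts below are invented for illustration; rank-based tools such as GSEA use different statistics):

```python
# Over-representation test for a gene set, via the hypergeometric tail.
# N genes measured in total, K of them in the pathway, n selected as
# differentially expressed, k of the selected genes in the pathway.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) where X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

p = hypergeom_pvalue(N=20000, K=100, n=500, k=10)
print(p)  # small p-value -> pathway over-represented among selected genes
```

With these numbers the expected overlap is only 500 * 100 / 20000 = 2.5 genes, so observing 10 yields a small p-value; in practice one such test is run per gene set, followed by multiple-testing correction.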


A Study of Correlation Between Phonological Awareness and Word Identification Ability of Hearing Impaired Children (청각장애 아동의 음운인식 능력과 단어확인 능력의 상관연구)

  • Kim, Yu-Kyung;Kim, Mun-Jung;Ahn, Jong-Bok;Seok, Dong-Il
    • Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.155-167
    • /
    • 2006
  • Hearing-impaired children possess poor underlying perceptual knowledge of the sound system and show delayed development of the segmental organization of that system. The purpose of this study was to investigate the relationship between phonological awareness ability and word identification ability in hearing-impaired children. Fourteen children with moderately severe hearing loss participated in this study, and all tasks were individually administered. The phonological awareness tests consisted of syllable blending, syllable segmentation, syllable deletion, body-coda discrimination, phoneme blending, phoneme segmentation, and phoneme deletion. The closed-set monosyllabic words (12 items) and lists 1 and 2 of the open-set monosyllabic words in EARS-K were used for word identification. The results were as follows. First, among the phonological awareness tasks, closed-set word identification showed a high positive correlation with body-coda discrimination, phoneme blending, and phoneme deletion, while open-set word identification showed a high positive correlation with phoneme blending, phoneme deletion, and phoneme segmentation. Second, by level of phonological awareness, closed-set word identification showed a high positive correlation with the levels of body-coda awareness and phoneme awareness, while open-set word identification showed a high positive correlation only with the level of phoneme awareness.
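The correlations reported in studies like this one are Pearson coefficients. A minimal implementation, with hypothetical scores standing in for the study's task data:

```python
# Pearson correlation coefficient from scratch (illustrative data only;
# the variable names mirror the tasks above but the numbers are made up).
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

phoneme_deletion = [3, 5, 6, 8, 9]          # hypothetical task scores
word_identification = [40, 55, 58, 70, 78]  # hypothetical word-ID scores
print(round(pearson_r(phoneme_deletion, word_identification), 3))
```

A value near +1, as here, is what the abstract means by a "high positive correlation"; with n = 14 subjects, significance would additionally be tested against the t distribution.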


A new method to calculate a standard set of finite cloud dose correction factors for the level 3 probabilistic safety assessment of nuclear power plants

  • Gee Man Lee;Woo Sik Jung
    • Nuclear Engineering and Technology
    • /
    • v.56 no.4
    • /
    • pp.1225-1233
    • /
    • 2024
  • Level 3 probabilistic safety assessment (PSA) is performed to calculate the radionuclide concentrations and exposure doses resulting from nuclear power plant accidents. To calculate the external exposure dose from the released radioactive materials, the radionuclide concentrations are multiplied by two factors, a dose coefficient and a finite cloud dose correction factor (FCDCF), and the resulting values are summed. This means that a standard set of FCDCFs is required for external exposure dose calculations. To calculate a standard set of FCDCFs, the effective distance from the release point to the receptor along the wind direction must be predetermined. The TID-24190 document published in 1968 provides equations to calculate FCDCFs and a resultant standard set of FCDCFs, but it gives no explanation of the effective distance required to calculate that standard set. In 2021, Sandia National Laboratories (SNL) proposed a method to predetermine finite effective distances depending on the atmospheric stability classes A to F, which results in six standard sets of FCDCFs. Meanwhile, independently of SNL, the authors of this paper found that an infinite effective distance assumption is a very reasonable approach for calculating one standard set of FCDCFs, and they implemented it in the multi-unit radiological consequence calculator (MURCC) code, a post-processor of the level 3 PSA codes. This paper calculates and compares short- and long-range FCDCFs obtained with the TID-24190, SNL, and MURCC methods, and explains the strength of the MURCC method over the SNL method: although six standard sets of FCDCFs are required by the SNL method, one standard set of FCDCFs is sufficient with the MURCC method. The use of the MURCC method and its resultant FCDCFs for level 3 PSA is therefore strongly recommended.
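The dose calculation the abstract describes is a per-nuclide product summed over the release. A sketch of that sum (all numbers below are invented placeholders, not values from TID-24190 or the MURCC code):

```python
# External cloud-shine dose as described above:
#   dose = sum over nuclides i of  c_i * DC_i * FCDCF_i
def external_cloud_dose(concentrations, dose_coefficients, fcdcfs):
    """Sum c_i * DC_i * FCDCF_i over radionuclides i."""
    return sum(c * dc * f for c, dc, f in
               zip(concentrations, dose_coefficients, fcdcfs))

# hypothetical three-nuclide release
c  = [1.2e3, 4.0e2, 8.5e1]        # time-integrated concentration (Bq*s/m^3)
dc = [2.0e-14, 5.5e-14, 1.1e-13]  # cloud-shine dose coefficients (Sv*m^3/(Bq*s))
f  = [0.6, 0.7, 0.8]              # finite cloud dose correction factors (<= 1)
print(external_cloud_dose(c, dc, f))
```

The FCDCF values (all at most 1) correct the semi-infinite-cloud dose coefficients for the finite plume; the point of contention between the SNL and MURCC methods is only how many standard sets of these factors must be tabulated.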

Implementation of Hardware Circuits for Fuzzy Controller Using $\alpha$-Cut Decomposition of Fuzzy Set

  • Lee, Yo-Seob;Hong, Soon-Ill
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.28 no.2
    • /
    • pp.200-209
    • /
    • 2004
  • Fuzzy control based on $\alpha$-level fuzzy set decomposition is known to produce a quick response and a short fuzzy-inference computation time. This paper derives a computational algorithm for min-max fuzzy inference and center-of-gravity defuzzification based on $\alpha$-level fuzzy set decomposition, which makes the fuzzy controller easy to realize in hardware from the calculation formula. In addition, this study proposes a circuit that generates actual PWM signals covering the whole chain from fuzzy inference to defuzzification. The fuzzy controller was implemented as a mixed analog-digital logic circuit using the computational algorithm for min-max fuzzy inference and center-of-gravity defuzzification. This study confirmed that the fuzzy controller worked satisfactorily when applied to the position control of a DC servo system.
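The paper's contribution is a hardware realization via $\alpha$-level decomposition; the underlying computation, min-max inference followed by center-of-gravity defuzzification, can be sketched in software with made-up membership functions and a two-rule toy rule base (everything below is my illustration, not the paper's controller):

```python
# Min-max fuzzy inference with center-of-gravity defuzzification,
# discretized over the output universe. Membership functions and rules
# are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(error):
    # two toy rules: IF error is Negative THEN output Small,
    #                IF error is Positive THEN output Large
    w_neg = tri(error, -1.0, -0.5, 0.0)   # rule firing strengths (min over
    w_pos = tri(error,  0.0,  0.5, 1.0)   # a single antecedent is itself)
    xs = [i / 100.0 for i in range(101)]  # output universe [0, 1]
    mus = [max(min(w_neg, tri(x, 0.0, 0.25, 0.5)),   # max-aggregation of
               min(w_pos, tri(x, 0.5, 0.75, 1.0)))   # min-clipped consequents
           for x in xs]
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den if den else 0.5      # center of gravity

print(round(infer(0.5), 2))   # strongly positive error -> output near 0.75
```

The $\alpha$-cut decomposition replaces this dense discretization with a small number of $\alpha$ levels, which is what makes the centroid computable by simple analog/digital circuitry.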

A Study on the Second-Order Water Level Variation (2차근사 수위변화에 관한 연구)

  • 김창제;이경연
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.2 no.1
    • /
    • pp.83-87
    • /
    • 1996
  • This study investigates second-order water level variation theoretically and experimentally. A simple method is presented for obtaining the second-order water surface elevation and the mean water level, applicable to both progressive and diffracted waves; both set-down and set-up of the mean water level occur, and the method is shown to be in good agreement with the experimental results.
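For reference, the set-down mentioned in the abstract is the classical second-order (radiation-stress) result, quoted here from standard wave theory rather than from this paper: for an unbroken progressive wave of amplitude $a$, wavenumber $k$, and water depth $h$, the mean water level is depressed by

```latex
\bar{\eta} = -\frac{a^{2}k}{2\sinh(2kh)}
```

the negative sign being the set-down; set-up of the mean water level appears where the radiation stress decreases, as in the surf zone.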


Investigating Functional Level in Patients with Stroke using ICF Concept (ICF core-set를 이용한 뇌졸중 환자의 기능수행 분석)

  • Song, Jumin;Lee, Haejung
    • The Journal of Korean Physical Therapy
    • /
    • v.26 no.5
    • /
    • pp.351-357
    • /
    • 2014
  • Purpose: The purpose of this study was to investigate the level of functioning in patients with stroke using the Modified Barthel Index (MBI), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the ICF core-set for stroke. Methods: Sixty-four patients with stroke were recruited from nine medical institutes. The ICF core-set for stroke, WHODAS 2.0, and MBI were used to collect the subjects' functional levels. The ICF core-set was employed as a standard frame to observe multiple dimensions of functioning, namely physiological bodily functions, activity and participation (AP) in daily life, and current environmental factors (EF). WHODAS 2.0 and MBI were also used to obtain specific functioning levels. The linkage of each item of WHODAS 2.0 and MBI to the ICF core-set for stroke was examined, and Pearson correlation coefficients were used to analyze their relationships. Results: The functioning level of participants was moderate according to MBI and WHODAS 2.0 (73.48±22.27 and 35.55±12.53, respectively). Strong relationships were observed between the ICF core-set and WHODAS 2.0 and between the ICF core-set and MBI. Each item of the disability scales could be linked to the ICF in the AP domain; however, a lack of correlation between MBI and the ICF in the EF domain was found, owing to the absence of related factors. Conclusion: MBI was found to link mainly to the ICF in the AP domain and to have limited linkage to EF. It is therefore suggested that the ICF conceptual frame be used as a multi-dimensional approach to patients with stroke.