• Title/Summary/Keyword: a priori

764 search results

Fat Quantification in the Vertebral Body: Comparison of Modified Dixon Technique with Single-Voxel Magnetic Resonance Spectroscopy

  • Sang Hyup Lee;Hye Jin Yoo;Seung-Man Yu;Sung Hwan Hong;Ja-Young Choi;Hee Dong Chae
    • Korean Journal of Radiology
    • /
    • v.20 no.1
    • /
    • pp.126-133
    • /
    • 2019
  • Objective: To compare the lumbar vertebral bone marrow fat-signal fractions obtained from a six-echo modified Dixon sequence (6-echo m-Dixon) with those from single-voxel magnetic resonance spectroscopy (MRS) in patients with low back pain. Materials and Methods: Vertebral bone marrow fat-signal fractions were quantified by 6-echo m-Dixon (repetition time [TR] = 7.2 ms, echo time [TE] = 1.21 ms, echo spacing = 1.1 ms, total imaging time = 50 seconds) and single-voxel MRS measurements in 25 targets (23 normal bone marrows, two focal lesions) from 24 patients. The point-resolved spectroscopy sequence was used for localized single-voxel MRS (TR = 3000 ms, TE = 35 ms, total scan time = 1 minute 42 seconds). A 2 × 2 × 1.5 cm³ voxel was placed within the normal L2 or L3 vertebral body, or within other lesions including a compression fracture or metastasis. The bone marrow fat spectrum was characterized on the basis of the magnitude of measurable fat peaks and a priori knowledge of the chemical structure of triglycerides. The imaging-based fat-signal fraction results were then compared to the MRS-based results. Results: There was a strong correlation between m-Dixon and MRS-based fat-signal fractions (slope = 0.86, R² = 0.88, p < 0.001). In Bland-Altman analysis, 92.0% (23/25) of the data points were within the limits of agreement. Bland-Altman plots revealed a slight but systematic error in the m-Dixon-based fat-signal fraction: a prevailing overestimation of small fat-signal fractions (< 20%) and underestimation of high fat-signal fractions (> 20%). Conclusion: Given its excellent agreement with single-voxel MRS, 6-echo m-Dixon can be used for the visual and quantitative evaluation of vertebral bone marrow fat in daily practice.
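The agreement analysis described above can be sketched in a few lines; the paired fat-signal fractions below are hypothetical illustrations, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower LoA, upper LoA) for paired measurements.

    The limits of agreement are bias +/- 1.96 * SD of the pairwise
    differences, as used to compare m-Dixon and MRS fat fractions.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired fat-signal fractions (%), not the study's data.
dixon = [12.0, 18.5, 25.1, 40.2, 55.3]
mrs = [10.5, 17.9, 26.0, 42.1, 57.0]
bias, lo, hi = bland_altman(dixon, mrs)
```

A point falls "within the limits of agreement" when its difference lies between `lo` and `hi`.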

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.6 no.2
    • /
    • pp.45-50
    • /
    • 2006
  • In order to generate a photo-realistic synthesized image, the light environment must be reconstructed by 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (both global and local) in a real image, which are used to illuminate synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and the light sources in the scene are identified automatically from the correspondences between images, without a priori camera calibration. The light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, makes photo-realistic image synthesis possible. Animators and lighting experts in the film and animation industry are expected to benefit from it.


Identification of Substructure Model using Measured Response Data (계측 거동 데이터를 이용한 부분구조 모델의 식별)

  • Oh, Seong-Ho;Lee, Sang-Min;Shin, Soobong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.8 no.2
    • /
    • pp.137-145
    • /
    • 2004
  • The paper provides a methodology for identifying a substructure model when the sectional and material properties of the structure are not known a priori. In defining a substructure model, the structural responses must be consistent with the actual behavior of that part of the structure. The substructure model is identified by estimating the boundary spring constants and the stiffness properties of the substructure. Static and modal system identification methods have been applied using responses measured at a limited number of locations within the substructure. Simulation studies for static and dynamic responses have been carried out, and the results and associated problems are discussed. The procedure has also been applied to an actual multi-span plate-girder Gerber-type bridge, with dynamic responses obtained from a moving-truck test and construction blasting vibrations.
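Estimating a boundary spring constant from measured static responses reduces, in the simplest case, to a least-squares fit of F ≈ k·x. A minimal sketch with hypothetical load-displacement pairs (the paper estimates several such constants together with substructure stiffness):

```python
def estimate_spring_constant(forces, displacements):
    """Least-squares estimate of a boundary spring constant k from
    static load-displacement pairs, minimizing sum((F - k*x)^2)."""
    num = sum(f * x for f, x in zip(forces, displacements))
    den = sum(x * x for x in displacements)
    return num / den

# Hypothetical measured responses at one boundary location.
F = [10.0, 20.0, 30.0]      # applied loads (kN)
x = [0.051, 0.099, 0.152]   # measured displacements (mm)
k_hat = estimate_spring_constant(F, x)  # roughly 200 kN/mm
```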

Implementation of an Agent-centric Planning of Complex Events as Objects of Pedagogical Experiences in Virtual World

  • Park, Jong Hee
    • International Journal of Contents
    • /
    • v.12 no.1
    • /
    • pp.25-43
    • /
    • 2016
  • An agent-centric event planning method is proposed for providing pedagogical experiences in an immersive environment. Two-level planning is required, at a macro (i.e., inter-event) level and an intra-event level, to provide realistic experiences with the objective of learning declarative knowledge. Inter-event (horizontal) planning is based on search, while intra-event (vertical) planning is based on hierarchical decomposition. The horizontal search is dictated by several realistic types of association between events besides conventional causality. The resulting schematic plan is further augmented by conditions associated with the agents cast into the roles of the events identified in the plan. Rather than following a main story plot, all the events potentially relevant to accomplishing an initial goal are derived in the final result of our planning. These derived events may progress concurrently or digress toward a new main goal replacing the current goal or event, and the plan may be merged or fragmented according to the respective lead agents' intentions and other conditions. The macro-level coherence across interconnected events is established via their common background world, which exists a priori. As the pivotal source of event concurrency and intricacy, agents are modeled to be not only autonomous but also independent, i.e., entities with their own beliefs and goals (and subsequent plans) in their respective parts of the world. Additional problems our method addresses for augmenting pedagogical experiences include the casting of agents into roles based on their availability, the subcontracting of subsidiary events, and the failure of a multi-agent event entailing fragmentation of a plan. The described planning method was demonstrated through its implementation.

Simulation of Blasting Demolition Using Three-Dimensional Bonded Particle Model (삼차원 입자결합모델을 이용한 구조물 해체발파 모사 연구)

  • Shin Byung-Hun;Jeon Seok-Won
    • Explosives and Blasting
    • /
    • v.23 no.1
    • /
    • pp.65-77
    • /
    • 2005
  • Reflecting the increasing number of old high-rise apartment structures in urban areas, the demand for blasting demolition is expected to increase in the near future. It is of great importance to make up for insufficient empirical knowledge of blasting demolition through a priori methods such as computer simulation. Computer simulation of blasting demolition involves a complicated process. In past domestic research, a two-dimensional bonded particle model was used to examine the overall demolition behavior of a five-story simple structure; the two-dimensional simulation did not properly reproduce the collapsing behavior of the structure, mainly due to the reduced degrees of freedom. In this study, a three-dimensional simulation was attempted. It consumed a great amount of calculation time, which limited the extent of the study. A few parameters, such as delay times, the amount of charge in each hole, and ball properties, were modified in order to check their effect on the collapsing behavior. Differences were observed as expected, but the collapsing behavior did not exactly coincide with the test blasting of a scaled model.

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.40-48
    • /
    • 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input for the neural networks, and the output of each network is modeled as a posterior probability. Texture classification at each scale is performed using the posterior probabilities from the MLP networks and maximum a posteriori (MAP) classification. Then, to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and a priori knowledge of contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed segmentation method shows better performance than texture segmentation using the HMT (hidden Markov tree) model and HMTseg.
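The fusion step—per-site posteriors combined with an MRF smoothness prior—can be sketched as follows. For brevity this uses a deterministic ICM-style update in place of the paper's Gibbs sampler, with a Potts-style prior that rewards agreement with 4-neighbours; the grid and posteriors are illustrative:

```python
def map_relabel(labels, log_post, beta=0.75, sweeps=5):
    """ICM-style stand-in for the Gibbs-sampler MAP step: each site takes
    the class maximizing its log posterior plus beta times the number of
    agreeing 4-neighbours (a Potts-style smoothness prior)."""
    h, w = len(labels), len(labels[0])
    classes = range(len(log_post[0][0]))
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                def score(c):
                    agree = sum(
                        labels[i + di][j + dj] == c
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w)
                    return log_post[i][j][c] + beta * agree
                labels[i][j] = max(classes, key=score)
    return labels

# Tied per-site posteriors, so the smoothness prior removes the noisy site.
post = [[[0.0, 0.0] for _ in range(3)] for _ in range(3)]
noisy = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
clean = map_relabel(noisy, post)
```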

Bayesian Nonstationary Probability Rainfall Estimation using the Grid Method (Grid Method 기법을 이용한 베이지안 비정상성 확률강수량 산정)

  • Kwak, Dohyun;Kim, Gwangseob
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.1
    • /
    • pp.37-44
    • /
    • 2015
  • A Bayesian nonstationary probability rainfall estimation model using the Grid method is developed. A hierarchical Bayesian framework consists of prior and hyper-prior distributions for the parameters of the Gumbel distribution, which was selected for the rainfall extreme data. In this study, the Grid method is adopted instead of the Metropolis-Hastings algorithm for random number generation, since it has the advantage of providing a thorough sampling of the parameter space. This method suits situations where the best-fit parameter values are not easily inferred a priori and where there is a high probability of false minima. The developed model was applied to estimate the target-year probability rainfall using hourly rainfall data from the Seoul station from 1973 to 2012. Results demonstrated that the target-year estimate under the nonstationary assumption is about 5-8% larger than the estimate under the stationary assumption.
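The Grid method amounts to evaluating the posterior exhaustively over a parameter grid rather than sampling it. A minimal sketch for the Gumbel case, assuming a flat prior and hypothetical annual-maximum data:

```python
import math

def gumbel_loglik(data, mu, beta):
    """Log-likelihood of extreme-value data under Gumbel(mu, beta)."""
    z = [(x - mu) / beta for x in data]
    return sum(-math.log(beta) - zi - math.exp(-zi) for zi in z)

def grid_mode(data, mus, betas):
    """Grid-method sketch: evaluate a flat-prior posterior over an
    exhaustive (mu, beta) grid and return the modal pair, sidestepping
    the false-minima risk that samplers can run into."""
    return max(((m, b) for m in mus for b in betas),
               key=lambda p: gumbel_loglik(data, p[0], p[1]))

# Hypothetical annual-maximum hourly rainfall depths (mm).
data = [52.0, 61.0, 58.0, 70.0, 66.0, 80.0, 74.0, 63.0]
mus = [50.0 + 0.5 * i for i in range(41)]    # 50 .. 70
betas = [4.0 + 0.5 * i for i in range(21)]   # 4 .. 14
mu_hat, beta_hat = grid_mode(data, mus, betas)
```

With a non-flat (hierarchical) prior, one would add its log density to the score; the exhaustive scan itself is unchanged.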

Searching for an Optimal Level of Cash Holdings for Korean Chaebols (국내 재벌 계열사들의 최적 현금유동성 수준에 대한 실증적 분석)

  • Kim, Hanjoon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.10
    • /
    • pp.7118-7125
    • /
    • 2015
  • This study examined a pressing issue in contemporary finance: approaching an optimal level of cash holdings for firms belonging to the chaebols in the Korean domestic capital markets. The subject has so far drawn little attention, even though active debates continue among the interested parties at the macro and micro levels. Two primary hypotheses were postulated and empirically tested. For the first hypothesis, on the existence of an optimal cash reserve for the sample firms, two estimation techniques were performed: a quadratic regression equation, and a relationship between a firm's value and the residuals derived from a static panel data model. As a primary financial implication that may benefit practitioners and academics in finance, the optimal level of cash holdings can be estimated by controlling for the a priori significant components for the sample firms toward maximizing firm value.
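Under the quadratic-regression approach, the value-maximizing cash level is simply the vertex of the fitted parabola. A minimal sketch with illustrative coefficients (not the study's estimates):

```python
def optimal_cash_level(a, b):
    """Vertex of a fitted quadratic V = a*c^2 + b*c + const: the
    value-maximizing cash ratio is c* = -b / (2a), which is an interior
    maximum only when the fit is concave (a < 0)."""
    if a >= 0:
        raise ValueError("no interior maximum: quadratic is not concave")
    return -b / (2 * a)

# Illustrative coefficients, not the study's estimates.
c_star = optimal_cash_level(a=-0.8, b=0.24)  # -> 0.15, i.e. 15% cash ratio
```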

The Measurements of Data Accuracy and Error Detection in DEM using GRASS and Arc/Info (GRASS와 Arc/Info를 이용한 DEM 데이터의 정확도와 에러 측정)

  • Cho, Sung-Min
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.1 no.1
    • /
    • pp.3-7
    • /
    • 1998
  • The issue of data accuracy brings a different perspective to GIS modeling and calls into question the usefulness of data models such as the DEM. Accuracy can be determined by randomly checking positional and attribute accuracy within a GIS data layer. With the increasing availability of DEMs and of software capable of processing them, it is worthwhile to call attention to data accuracy and error analysis, as GIS applications depend on a priori established spatial data. The purpose of this paper was to investigate methods for data accuracy measurement and error detection with two types of DEMs released by the U.S. Geological Survey: 1:24,000 and 1:250,000. Another emphasis was given to developing a methodology for processing DEMs to create Arc/Info and GRASS layers. Data accuracy analysis was applied to a 250 sq. km area, and an error was detected at the 1:24,000 scale. There were two possible reasons for this error: gross errors and blunders.
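Randomly checking attribute accuracy in a DEM typically means comparing sampled cell elevations against independently surveyed check points and summarizing the error, e.g. as an RMSE. A minimal sketch with hypothetical values:

```python
import math

def dem_rmse(dem_values, reference_values):
    """Root-mean-square error between DEM elevations and independently
    surveyed check points, a standard accuracy measure for randomly
    sampled cells."""
    n = len(dem_values)
    return math.sqrt(
        sum((d - r) ** 2 for d, r in zip(dem_values, reference_values)) / n)

# Hypothetical elevations (m) at four randomly chosen check points.
dem = [102.1, 98.7, 110.4, 95.2]
ref = [101.5, 99.0, 111.2, 95.0]
rmse = dem_rmse(dem, ref)
```

Check points whose individual error greatly exceeds the overall RMSE are candidates for the gross errors and blunders mentioned above.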

Investigation of Indicator Kriging for Evaluating Proper Rock Mass Classification based on Electrical Resistivity and RMR Correlation Analysis (RMR과 전기비저항의 상관성 해석에 기초하여 지시크리깅을 적용한 최적 암반 분류 기법 고찰)

  • Lee, Kyung-Ju;Ha, Hee-Sang;Ko, Kwang-Buem;Kim, Ji-Soo
    • Tunnel and Underground Space
    • /
    • v.19 no.5
    • /
    • pp.407-420
    • /
    • 2009
  • In this study, a geostatistical technique using indicator kriging was performed to evaluate an optimal rock mass classification by integrating various types of information, such as borehole and geophysical data. To obtain an optimal kriging result, a suitable technique must be devised to integrate the hard (borehole) and soft (geophysical) data effectively. In addition, the model parameters of the variogram must be determined as an a priori procedure. An iterative non-linear inversion method was implemented to determine the model parameters of the theoretical variogram. To verify the algorithm, the behaviour of the object function and the precision of convergence were investigated, revealing that the gradient with respect to the range is extremely small. The algorithm was then applied to field data from a mountainous area planned for a large-scale tunneling construction. As soft data, resistivity information from an AMT survey was incorporated with RMR information from the borehole data, a type of hard data. Finally, RMR profiles were constructed and interpreted at the tunnel elevation and at the upper 1D level.
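Fitting variogram model parameters means minimizing an object function (here, the sum of squared residuals against the experimental variogram). A minimal sketch using a spherical model and a grid search over the range parameter, standing in for the iterative non-linear inversion; the experimental values are synthetic:

```python
def spherical(h, nugget, sill, rng):
    """Spherical variogram model gamma(h)."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def fit_range(lags, gammas, nugget, sill, candidates):
    """Pick the range minimizing the object function: the sum of squared
    residuals between the model and the experimental variogram."""
    def sse(rng):
        return sum((spherical(h, nugget, sill, rng) - g) ** 2
                   for h, g in zip(lags, gammas))
    return min(candidates, key=sse)

# Synthetic experimental variogram generated with a true range of 300 m.
lags = [50.0, 100.0, 150.0, 200.0, 250.0, 300.0, 350.0]
gammas = [spherical(h, 0.0, 1.0, 300.0) for h in lags]
best = fit_range(lags, gammas, 0.0, 1.0, [100.0, 200.0, 300.0, 400.0])
```

The near-flat gradient with respect to the range that the abstract reports shows up here as very similar SSE values for neighbouring candidate ranges.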