• Title/Summary/Keyword: TrueFidelity

Search results: 15

Usefulness of Deep Learning Image Reconstruction in Pediatric Chest CT (소아 흉부 CT 검사 시 딥러닝 영상 재구성의 유용성)

  • Do-Hun Kim; Hyo-Yeong Lee
    • Journal of the Korean Society of Radiology / v.17 no.3 / pp.297-303 / 2023
  • Pediatric computed tomography (CT) examinations can often result in exam failures or frequent retests owing to the difficulty of obtaining cooperation from young patients. Deep Learning Image Reconstruction (DLIR) methods offer the potential to obtain diagnostically valuable images while reducing the retest rate in CT examinations of pediatric patients, who have high radiation sensitivity. In this study, we investigated the possibility of applying DLIR to reduce artifacts caused by respiration or motion and to obtain clinically useful images in pediatric chest CT examinations. A retrospective analysis was conducted on chest CT examination data of 43 children under the age of 7 from P Hospital in Gyeongsangnam-do. Images reconstructed using Filtered Back Projection (FBP), Adaptive Statistical Iterative Reconstruction (ASIR-50), and the deep learning algorithm TrueFidelity-Middle (TF-M) were compared. Regions of interest (ROIs) were drawn on the right ascending aorta (AA) and back muscle (BM) in contrast-enhanced chest images, and noise (standard deviation, SD) was measured in Hounsfield units (HU) in each image. Statistical analysis was performed using SPSS (ver. 22.0), analyzing the mean values of the three measurements with one-way analysis of variance (ANOVA). The SD values for the AA were FBP=25.65±3.75, ASIR-50=19.08±3.93, and TF-M=17.05±4.45 (F=66.72, p=0.00), while the SD values for the BM were FBP=26.64±3.81, ASIR-50=19.19±3.37, and TF-M=19.87±4.25 (F=49.54, p=0.00). Post-hoc tests revealed significant differences among the three groups. DLIR using TF-M demonstrated significantly lower noise values than the conventional reconstruction methods. Therefore, the application of the deep learning algorithm TrueFidelity-Middle (TF-M) is expected to be clinically valuable in pediatric chest CT examinations by reducing the degradation of image quality caused by respiration or motion.
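As a rough illustration of the statistical comparison described in this abstract, the sketch below shows how per-patient noise SD values from three reconstruction groups could be compared with one-way ANOVA in Python. It is not the study's analysis (the authors used SPSS), and the arrays are placeholders generated from the reported summary statistics, not the actual measurements.

```python
# Minimal sketch: one-way ANOVA across three reconstruction groups.
# The arrays are illustrative placeholders drawn from the reported summary
# statistics, NOT the study's patient-level measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sd_fbp    = rng.normal(25.65, 3.75, 43)   # noise SD (HU) per patient, FBP
sd_asir50 = rng.normal(19.08, 3.93, 43)   # ASIR-50
sd_tfm    = rng.normal(17.05, 4.45, 43)   # TrueFidelity-Middle (TF-M)

f_stat, p_value = stats.f_oneway(sd_fbp, sd_asir50, sd_tfm)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# One possible post-hoc check: pairwise t-tests with Bonferroni correction.
pairs = [("FBP vs ASIR-50", sd_fbp, sd_asir50),
         ("FBP vs TF-M", sd_fbp, sd_tfm),
         ("ASIR-50 vs TF-M", sd_asir50, sd_tfm)]
for name, a, b in pairs:
    _, p = stats.ttest_ind(a, b)
    print(f"{name}: adjusted p = {min(p * len(pairs), 1.0):.3g}")
```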

A Study on the Usefulness of Deep Learning Image Reconstruction with Radiation Dose Variation in MDCT (MDCT에서 선량 변화에 따른 딥러닝 재구성 기법의 유용성 연구)

  • Ga-Hyun Kim; Ji-Soo Kim; Chan-Deul Kim; Joon-Pyo Lee; Joo-Wan Hong; Dong-Kyoon Han
    • Journal of the Korean Society of Radiology / v.17 no.1 / pp.37-46 / 2023
  • This study aims to evaluate the usefulness of Deep Learning Image Reconstruction (TrueFidelity, TF) by comparing its image quality with that of conventional Filtered Back Projection (FBP) and Adaptive Statistical Iterative Reconstruction-Veo (ASIR-V). Noise, CNR, and SSIM were measured on images acquired at a fixed dose of 17.29 mGy and at reduced doses of 10.37, 12.10, 13.83, and 15.56 mGy, each reconstructed with FBP, ASIR-V 50%, and TF-H. At the fixed dose of 17.29 mGy, TF-H showed image quality superior to FBP and ASIR-V. When the dose was varied, noise, CNR, and SSIM differed significantly between TF-H at 10.37 mGy and FBP (p<0.05), whereas there was no significant difference between TF-H at 10.37 mGy and ASIR-V 50% (p>0.05). TF-H provided a dose-reduction effect of about 30%, since ASIR-V at the highest reduced dose of 15.56 mGy showed the same image quality as TF-H at the lowest dose of 10.37 mGy. Thus, the deep learning reconstruction technique (TF) allowed dose reduction relative to the iterative reconstruction technique (ASIR-V) and Filtered Back Projection (FBP), and is therefore expected to reduce patient exposure dose.
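To make the three image-quality metrics named above concrete, here is a minimal Python sketch of how noise, CNR, and SSIM are typically computed from ROI statistics and an image pair. The arrays, ROI positions, and values are synthetic placeholders, not the study's phantom data.

```python
# Minimal sketch of noise, CNR, and SSIM on placeholder arrays
# (synthetic data, not the study's phantom images).
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
reference = rng.normal(100.0, 2.0, (256, 256))          # e.g., 17.29 mGy reference image (HU)
reference[100:140, 100:140] += 50.0                     # hypothetical high-contrast object
test = reference + rng.normal(0.0, 5.0, (256, 256))     # e.g., reduced-dose reconstruction

signal_roi     = test[100:140, 100:140]                 # hypothetical object ROI
background_roi = test[10:50, 10:50]                     # hypothetical background ROI

noise = background_roi.std()                                   # noise = SD of background (HU)
cnr = abs(signal_roi.mean() - background_roi.mean()) / noise   # contrast-to-noise ratio
similarity = ssim(reference, test, data_range=float(test.max() - test.min()))

print(f"noise = {noise:.2f} HU, CNR = {cnr:.2f}, SSIM = {similarity:.3f}")
```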

Homo replicus: imitation, mirror neurons, and memes (호모 리플리쿠스(Homo replicus): 모방, 거울뉴런, 그리고 밈)

  • Jang, Dayk
    • Korean Journal of Cognitive Science / v.23 no.4 / pp.517-551 / 2012
  • Humans are animals that imitate. True imitation can be defined as learning to do an act from seeing it done by others. We have built culture by imitating others' skills and knowledge with high fidelity. In this regard, it is important to ask how the faculty of imitation evolved and how imitation behaviors develop ontogenetically. It is also interesting to ask whether nonhuman animals can truly imitate and how imitation learning differs between human and nonhuman animals. In this paper, I first review empirical data from imitation studies with human and nonhuman animals. Comparing different species, I highlight their different levels of copying fidelity and explain why these differences arise. I then review recent studies on the neurobiological mechanisms underlying imitation. The initial neurobiological studies of imitation in humans suggested a core imitation circuitry composed of the mirror neuron system [inferior parietal lobule (IPL) and inferior frontal gyrus (IFG)] and the posterior part of the superior temporal sulcus (pSTS). More recent studies on the neurobiology of imitation, however, have gone beyond these core mechanisms. Finally, I explore the implications of the psychology and biology of imitation for cultural evolution, arguing for a memetic approach to cultural evolution along the lines of a recent study that measures memes via the mirror neuron system.


Block-based Motion Vector Smoothing for Nonrigid Moving Objects (비정형성 등속운동 객체의 움직임 추정을 위한 블록기반 움직임 평활화)

  • Sohn, Young-Wook; Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.47-53 / 2007
  • True motion estimation is necessary for deinterlacing, frame-rate conversion, and film judder compensation. Several block-based approaches find true motion vectors by tracing minimum sum-of-absolute-difference (SAD) values while considering spatial and temporal consistency. However, these algorithms cannot find robust motion vectors when the texture of objects changes. To find robust motion vectors in such regions, a recursive vector selection scheme and an adaptive weighting parameter are proposed. Motion vectors from previous frames are recursively averaged and used in regions where motion estimation fails. The weighting parameter controls fidelity between the input vectors, which come from a conventional estimator, and the recursively averaged ones. If the input vectors are not reliable, the mean vectors of the previous frame are used to preserve temporal consistency. Experimental results show more robust motion vectors than those of conventional methods for objects with time-varying texture.
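The abstract's core idea, blending a conventional estimator's vectors with recursively averaged previous-frame vectors under a weighting parameter, can be sketched as follows. The reliability measure and recursion factor here are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of weighted blending between input motion vectors (from a
# conventional block-matching estimator) and recursively averaged previous-frame
# vectors. The reliability measure and the 0.5 recursion factor are assumptions
# for demonstration, not the paper's adaptive weighting parameter.
import numpy as np

def smooth_motion_field(input_vectors, prev_avg_vectors, reliability):
    """input_vectors, prev_avg_vectors: (H, W, 2) block motion fields.
    reliability: (H, W) values in [0, 1], e.g., derived from normalized SAD."""
    w = reliability[..., None]                         # high weight -> trust the current estimate
    blended = w * input_vectors + (1.0 - w) * prev_avg_vectors
    new_avg = 0.5 * prev_avg_vectors + 0.5 * blended   # recursive temporal average
    return blended, new_avg

# Toy usage on a 4x4 grid of blocks.
rng = np.random.default_rng(2)
current = rng.normal(0.0, 1.0, (4, 4, 2))   # noisy estimator output
history = np.zeros((4, 4, 2))               # averaged vectors from previous frames
rel = rng.uniform(0.0, 1.0, (4, 4))         # per-block reliability (assumed)
smoothed, history = smooth_motion_field(current, history, rel)
print(smoothed.shape, history.shape)
```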

A meta-analysis of adolescent psychosocial smoking prevention programs in the United States: Identifying factors associated with program effectiveness (사회 심리 이론에 근거한 학교 흡연 예방 프로그램의 메타분석: 미국 사례와 Explanatory Variables)

  • Hwang, Myung-Hee-Song
    • Korean Journal of Health Education and Promotion / v.24 no.5 / pp.1-21 / 2007
  • Adolescent psychosocial smoking prevention programs have been successful, but the magnitude of their effects has been limited. The present study is a secondary analysis following a previous study that estimated mean effect sizes for smoking knowledge, attitudes, skills, and behaviors across treatment variables. Rather than estimating overall program effects, as other meta-analyses have done, this study was conducted to identify explanatory variables that are likely to increase program effects. A decrease in adolescent smoking behaviors is associated with the following factors: a. Younger students (5th-7th grade) rather than older students (8th-12th grade). b. Research methodology using a true experimental design, a quasi-experimental design with equivalence between groups, random assignment, an attrition rate of 10% or less, a no-treatment control group, high implementation fidelity, and/or acceptable instrument reliability. c. Programs using trained peer leaders, targeting cigarette smoking only, implementing 10 or more treatment sessions, and/or providing booster sessions.

Adaptation of Deep Learning Image Reconstruction for Pediatric Head CT: A Focus on the Image Quality (소아용 두부 컴퓨터단층촬영에서 딥러닝 영상 재구성 적용: 영상 품질에 대한 고찰)

  • Nim Lee; Hyun-Hae Cho; So Mi Lee; Sun Kyoung You
    • Journal of the Korean Society of Radiology / v.84 no.1 / pp.240-252 / 2023
  • Purpose To assess the effect of deep learning image reconstruction (DLIR) on head CT in pediatric patients. Materials and Methods We collected 126 pediatric head CT images, which were reconstructed using filtered back projection, iterative reconstruction with adaptive statistical iterative reconstruction (ASiR)-V, and all three levels of DLIR (TrueFidelity; GE Healthcare). Each image set group was divided into four subgroups according to the patients' ages. Clinical and dose-related data were reviewed. Quantitative parameters, including the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), and qualitative parameters, including noise, gray matter-white matter (GM-WM) differentiation, sharpness, artifact, acceptability, and unfamiliar texture change, were evaluated and compared. Results The SNR and CNR in each age group increased with the strength level of DLIR. High-level DLIR showed significantly improved SNR and CNR (p < 0.05). Sequential reduction of noise, improvement of GM-WM differentiation, and improvement of sharpness were noted with increasing DLIR strength level, and the values for high-level DLIR were similar to those for ASiR-V. Artifact and acceptability did not differ significantly among the DLIR levels. Conclusion Applying high-level DLIR to pediatric head CT can significantly reduce image noise, although modification is needed in the handling of artifacts.

Development of a Physics-Based Design Framework for Aircraft Design using Parametric Modeling

  • Hong, Danbi; Park, Kook Jin; Kim, Seung Jo
    • International Journal of Aeronautical and Space Sciences / v.16 no.3 / pp.370-379 / 2015
  • Handling constantly evolving aircraft configurations can be inefficient and frustrating for design engineers, especially in the early design phase, when many design parameters change throughout trade-off studies. In this paper, a physics-based design framework using parametric modeling is introduced; designated DIAMOND/AIRCRAFT, it was developed for structural design of transport aircraft in the conceptual and preliminary design phases. DIAMOND/AIRCRAFT relieves the burden of labor-intensive and time-consuming configuration changes with parametric modeling techniques that manipulate the ever-changing geometric parameters of the external layout of design alternatives. Furthermore, the framework can generate an FE model in an automated fashion from the internal structural layout, which is essentially a set of design parameters describing the structural members in terms of physical properties such as location, spacing, and quantity. The framework performs structural sizing using this FE model, covering both primary and secondary structural levels. This physics-based approach improves the accuracy of weight estimation significantly compared with empirical methods. In this study, combining a physics-based model with parametric modeling techniques delivers a high-fidelity design framework, markedly expediting the otherwise slow and tedious design process of the early design phase.
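As a toy illustration of describing an internal structural layout purely by design parameters (the basis of the automated FE model generation mentioned above), consider the following sketch. The class, field names, and values are invented for illustration and are not part of DIAMOND/AIRCRAFT.

```python
# Hypothetical illustration of a parametric internal-structural-layout description:
# member locations are generated from spacing/quantity parameters rather than being
# modeled by hand. All names and values are invented; this is not DIAMOND/AIRCRAFT code.
from dataclasses import dataclass

@dataclass
class WingBoxSegment:
    front_spar_frac: float   # chordwise position of the front spar (fraction of chord)
    rear_spar_frac: float    # chordwise position of the rear spar
    rib_spacing: float       # rib pitch along the span [m]
    span: float              # segment span [m]

    def rib_stations(self):
        """Spanwise rib locations derived from the spacing parameter."""
        n_ribs = int(self.span // self.rib_spacing) + 1
        return [round(i * self.rib_spacing, 3) for i in range(n_ribs)]

segment = WingBoxSegment(front_spar_frac=0.15, rear_spar_frac=0.60,
                         rib_spacing=0.65, span=14.0)
print(len(segment.rib_stations()), "ribs at stations", segment.rib_stations()[:4], "...")
```

Changing a single parameter (for example, rib_spacing) regenerates every downstream member location, which is what lets a parametric framework absorb frequent configuration changes.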

Investigation on the performance of the six DOF C.G.S., Algeria, shaking table

  • Aknouche, Hassan; Bechtoula, Hakim; Airouche, Abdelhalim; Benouar, Djillali
    • Earthquakes and Structures / v.6 no.5 / pp.539-560 / 2014
  • Shaking tables are devices for testing structures or structural component models with a wide range of synthetic ground motions or real recorded earthquakes. They are essential tools in earthquake engineering research since they simulate the effects of the true inertial forces on the test specimens. The destructive earthquakes that occurred in the northern part of Algeria during 1954-2003 prompted an initiative by the Algerian authorities to construct a shaking simulator at the National Earthquake Engineering Research Center, CGS. Acceleration tracking performance, and specifically the inability of the earthquake simulator to accurately replicate the input signal, can be considered the main challenge during a shaking table test. The objective of this study is to validate the uni-axial sinusoidal performance curves and to assess the accuracy and fidelity of signal reproduction using the advanced adaptive control techniques incorporated into the MTS digital controller and software of the CGS shaking table. A set of shaking table tests using harmonic and earthquake acceleration records as reference/commanded signals was performed for four test configurations: bare table, a 60 t rigid mass, and two 20 t elastic specimens with natural frequencies of 5 Hz and 10 Hz.

Counterfactual image generation by disentangling data attributes with deep generative models

  • Jieon Lim; Weonyoung Joo
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.589-603 / 2023
  • Deep generative models aim to infer the underlying true data distribution, which has led to great success in generating fake-but-realistic data. From this perspective, data attributes can be a crucial factor in the data generation process, since non-existent counterfactual samples can be generated by altering certain factors. For example, we can generate new portrait images by flipping the gender attribute or altering the hair color attribute. This paper proposes the counterfactual disentangled variational autoencoder generative adversarial network (CDVAE-GAN), specialized for attribute-level counterfactual data generation. The proposed CDVAE-GAN consists of a variational autoencoder and a generative adversarial network. Specifically, we adopt a Gaussian variational autoencoder to extract low-dimensional disentangled data features and auxiliary Bernoulli latent variables to model the data attributes separately, and we utilize a generative adversarial network to generate data with high fidelity. By combining the benefits of the variational autoencoder, the additional Bernoulli latent variables, and the generative adversarial network, the proposed CDVAE-GAN can control the data attributes and thereby produce counterfactual data. Our experimental results on the CelebA dataset qualitatively show that the samples generated by CDVAE-GAN are realistic, and the quantitative results show that the proposed model can produce data with altered attributes that deceive other machine learning classifiers.
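The following heavily simplified PyTorch-style sketch shows only the attribute-flipping idea (encode to a continuous latent plus a binary attribute code, flip the code, decode). The layer sizes and names are assumptions, and the GAN discriminator and training objectives described in the paper are omitted entirely; this is not the authors' CDVAE-GAN implementation.

```python
# Simplified sketch of attribute-level counterfactual generation with a VAE-style
# encoder/decoder and a binary attribute code. Sizes and names are illustrative;
# the GAN discriminator and all training losses are omitted.
import torch
import torch.nn as nn

class TinyCounterfactualVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, a_dim=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.attr_logit = nn.Linear(128, a_dim)     # Bernoulli attribute head
        self.dec = nn.Sequential(nn.Linear(z_dim + a_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def forward(self, x, flip_attr=False):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        a = (torch.sigmoid(self.attr_logit(h)) > 0.5).float()     # inferred attribute bit
        if flip_attr:
            a = 1.0 - a                                            # counterfactual: flip it
        return self.dec(torch.cat([z, a], dim=-1))

model = TinyCounterfactualVAE()
x = torch.rand(4, 784)                 # a batch of flattened placeholder images
x_cf = model(x, flip_attr=True)        # counterfactual reconstructions
print(x_cf.shape)                      # torch.Size([4, 784])
```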

Investigation of thermal hydraulic behavior of the High Temperature Test Facility's lower plenum via large eddy simulation

  • Hyeongi Moon; Sujong Yoon; Mauricio Tano-Retamale; Aaron Epiney; Minseop Song; Jae-Ho Jeong
    • Nuclear Engineering and Technology / v.55 no.10 / pp.3874-3897 / 2023
  • A high-fidelity computational fluid dynamics (CFD) analysis was performed using the Large Eddy Simulation (LES) model for the lower plenum of the High Temperature Test Facility (HTTF), a ¼-scale test facility of the modular high temperature gas-cooled reactor (MHTGR) managed by Oregon State University. In most next-generation nuclear reactors, thermal stress due to thermal striping is one of the risks that must be carefully considered. This is also true for HTGRs, especially since the exhaust helium gas temperature is high. In order to evaluate these risks and the associated performance, organizations in the United States led by the OECD NEA are conducting a thermal hydraulic code benchmark for HTGRs, and the test facility used for this benchmark is the HTTF. The HTTF can perform experiments in both normal and accident conditions and provide high-quality experimental data. However, it is difficult to provide sufficient data for benchmarking through experiments alone, and CFD results based on Reynolds-averaged Navier-Stokes models are of uncertain reliability for analyzing thermal hydraulic behavior without verification. To address this problem, a high-fidelity 3-D CFD analysis was performed using the LES model for the HTTF. It was also verified, via a unit cell test that provides experimental information, that the LES model can properly simulate the jet mixing phenomenon. In LES, the lower the dependency on the sub-grid scale model, the closer the analysis is to the actual flow; for the unit cell test CFD analysis and the HTTF CFD analysis, the volume-averaged sub-grid scale model dependency was calculated to be 13.0% and 9.16%, respectively. Quantitative data on the fluid inside the HTTF lower plenum are provided in this paper. Qualitatively, the temperature was highest at the center of the lower plenum, while the temperature fluctuation was highest near the edge of the lower plenum wall. The power spectral density (PSD) of temperature was analyzed via fast Fourier transform (FFT) for specific points at the center and side of the lower plenum. The FFT results did not reveal frequency-dominant temperature fluctuations in the central region, and the temperature PSD at the top was confirmed to increase from the center toward the wake. Vortices were visualized using the well-known scalar Q-criterion; the closer to the outlet duct, the greater the influence of the mainstream, so the inflow jet vortices were dissipated and mixed at the top of the lower plenum. Additionally, FFT analysis was performed on the support structure near the corner of the lower plenum, where temperature fluctuations are large, and it confirmed that the temperature fluctuation of the flow did not have a significant effect near the corner wall. The vortices generated from the lower plenum to the outlet duct are also identified in this paper. The quantitative and qualitative results presented here are intended to serve as reference data for the benchmark.
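As a generic illustration of the FFT-based temperature PSD analysis mentioned in this abstract (not the authors' post-processing), a point temperature signal from a monitoring probe could be analyzed as follows; the signal, sampling rate, and frequencies are synthetic placeholders.

```python
# Generic sketch of power spectral density analysis for a point temperature signal,
# as one might post-process an LES monitoring probe. All values are synthetic
# placeholders, not HTTF data.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                    # sampling frequency [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)             # 10 s of probe history
rng = np.random.default_rng(3)
temperature = (600.0
               + 2.0 * np.sin(2 * np.pi * 15.0 * t)   # a hypothetical 15 Hz fluctuation
               + rng.normal(0.0, 0.5, t.size))        # broadband noise

# Welch's method averages FFTs of overlapping segments for a smoother PSD estimate.
freqs, psd = welch(temperature - temperature.mean(), fs=fs, nperseg=2048)
print(f"dominant temperature-fluctuation frequency ~ {freqs[np.argmax(psd)]:.1f} Hz")
```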