• Title/Summary/Keyword: U-Net Model


Application case for phase III of UAM-LWR benchmark: Uncertainty propagation of thermal-hydraulic macroscopic parameters

  • Mesado, C.;Miro, R.;Verdu, G.
    • Nuclear Engineering and Technology
    • /
    • v.52 no.8
    • /
    • pp.1626-1637
    • /
    • 2020
  • This work covers an important point of the benchmark released by the expert group on Uncertainty Analysis in Modeling of Light Water Reactors. This ambitious benchmark aims to determine the uncertainty in light water reactor systems and processes at all stages of calculation, with emphasis on multi-physics (coupled) and multi-scale simulations. The Gesellschaft für Anlagen- und Reaktorsicherheit methodology is used to propagate the thermal-hydraulic uncertainty of macroscopic parameters through the TRACE5.0p3/PARCSv3.0 coupled code. The main innovative points achieved in this work are: i) a new thermal-hydraulic model is developed with a highly accurate 3D core discretization, and an iterative process is presented to adjust the 3D bypass flow; ii) a control rod insertion event, whose data are obtained from a real PWR test, is used as a transient simulation; iii) two approaches are used for the propagation process: maximum response, where the uncertainty and sensitivity analysis is performed on the maximum absolute response, and index dependent, where the uncertainty and sensitivity analysis is performed at each time step; and iv) the RESTING MATLAB code is developed to automate the model generation process and then propagate the thermal-hydraulic uncertainty. The input uncertainty information is found in related literature or, if not found, defined based on expert judgment. This paper first presents the Gesellschaft für Anlagen- und Reaktorsicherheit methodology to propagate the uncertainty in thermal-hydraulic macroscopic parameters and then shows the results when the methodology is applied to a PWR reactor.
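
The GRS methodology referenced above is, at its core, random sampling of the uncertain inputs through the code plus Wilks' tolerance limits to decide how many runs are needed. Below is a minimal Python sketch of that idea; the response function is a toy stand-in for a TRACE5.0p3/PARCSv3.0 run and the input distributions are invented for illustration.

```python
# Minimal sketch of GRS-style uncertainty propagation. Assumptions: the
# toy response function stands in for a TRACE5.0p3/PARCSv3.0 run, and the
# input distributions are invented for illustration.
import numpy as np

def wilks_n(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum bounds the gamma-quantile
    with confidence beta (one-sided, first order): 1 - gamma**n >= beta."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

rng = np.random.default_rng(0)
n = wilks_n()  # 59 runs for a 95%/95% one-sided tolerance limit

# Uncertain thermal-hydraulic inputs as relative multipliers (illustrative).
fuel_conductivity = rng.normal(1.0, 0.05, n)
bypass_flow = rng.uniform(0.9, 1.1, n)

def coupled_code(k, w):
    """Placeholder for one coupled-code run returning a scalar response,
    e.g. peak fuel temperature during the control rod insertion (toy)."""
    return 1200.0 * (2.0 - k) * (1.05 - 0.05 * w)

response = np.array([coupled_code(k, w)
                     for k, w in zip(fuel_conductivity, bypass_flow)])
# "Maximum response" approach: bound the maximum absolute response.
print(f"n = {n}, 95/95 upper tolerance limit ~ {response.max():.1f}")
```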

Isotopic Fissile Assay of Spent Fuel in a Lead Slowing-Down Spectrometer System

  • Lee, Yongdeok;Jeon, Juyoung;Park, Changje
    • Nuclear Engineering and Technology
    • /
    • v.49 no.3
    • /
    • pp.549-555
    • /
    • 2017
  • A lead slowing-down spectrometer (LSDS) system is under development to analyze the isotopic fissile content of spent fuel and recycled material. The source neutron mechanism for efficient and effective generation was also determined. The source neutrons interact with a lead medium to produce a continuous neutron energy spectrum, which induces the dominant fission in each fissile isotope below the unresolved resonance region. From the relationship between the induced fissile fission and the detection of fast fission neutrons, a mathematical assay model for an isotopic fissile material was set up. The assay model can be expanded to all fissile materials. A correction factor for self-shielding was defined in the fuel assay area. The corrected fission signature shows well-defined fission properties that scale with the fissile content. The assay procedure was also established. The assay energy range is very important for capturing the prominent fission structure of each fissile material. Fission detection was examined while varying the Pu239 weight percent (wt%), with the U235 and Pu241 contents fixed at 1 wt%. The assay result was obtained with 2~3% uncertainty for Pu239, depending on the amount of Pu239 in the fuel. The results show that the LSDS is a very powerful technique for assaying the isotopic fissile content in spent fuel and recycled materials for the reuse of fissile materials. Additionally, an LSDS is applicable to the optimum design of spent fuel storage facilities and their management. The isotopic fissile content assay will increase the transparency and credibility of spent fuel storage.
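
The mathematical assay model described above treats the detected fast-fission signature as a linear combination of per-isotope fission responses, so the fissile contents can be unfolded from the measured counts. A minimal sketch of that unfolding follows; the response matrix values are hypothetical, standing in for measured or Monte Carlo-computed LSDS responses, and the self-shielding correction is omitted.

```python
# Minimal sketch of the linear assay model behind an LSDS. Assumptions: the
# response matrix values are hypothetical (a real analysis would use measured
# or Monte Carlo responses), and the self-shielding correction is omitted.
import numpy as np

# Rows: slowing-down energy bins; columns: U235, Pu239, Pu241 fission
# response (counts per wt%), hypothetical values.
R = np.array([
    [1.0, 2.5, 1.8],
    [1.2, 0.9, 2.1],
    [0.8, 1.6, 1.0],
    [0.9, 1.1, 0.7],
    [0.7, 0.8, 0.6],
])
true_wt = np.array([1.0, 2.0, 1.0])  # wt%: U235 and Pu241 fixed, Pu239 varied
counts = R @ true_wt
counts += np.random.default_rng(1).normal(0.0, 0.02, counts.size)  # noise

# Unfold the fissile contents from the measured signature by least squares.
est, *_ = np.linalg.lstsq(R, counts, rcond=None)
for name, w in zip(["U235", "Pu239", "Pu241"], est):
    print(f"{name}: {w:.3f} wt%")
```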

Evaluating Usefulness of Deep Learning Based Left Ventricle Segmentation in Cardiac Gated Blood Pool Scan (게이트심장혈액풀검사에서 딥러닝 기반 좌심실 영역 분할방법의 유용성 평가)

  • Oh, Joo-Young;Jeong, Eui-Hwan;Lee, Joo-Young;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.45 no.2
    • /
    • pp.151-158
    • /
    • 2022
  • The cardiac gated blood pool (GBP) scan, a nuclear medicine imaging study, calculates the left ventricular ejection fraction (EF) by segmenting the left ventricle from the heart. However, accurately segmenting the substructures of the heart requires specialized knowledge of cardiac anatomy, and the left ventricular EF may be calculated differently depending on the expert performing the processing. In this study, a DeepLabV3 architecture with a ResNet-50 backbone was trained on 93 GBP training images. The trained model was then applied to a separate test set of 23 GBP images to evaluate the reproducibility of the region of interest (ROI) and the left ventricular EF. Pixel accuracy, Dice coefficient, and IoU for the region of interest were 99.32±0.20, 94.65±1.45, and 89.89±2.62(%) at the diastolic phase, and 99.26±0.34, 90.16±4.19, and 82.33±6.69(%) at the systolic phase, respectively. The left ventricular EF averaged 60.37±7.32% for the human-set ROIs and 58.68±7.22% for the ROIs set by the deep learning segmentation model (p<0.05). The automated segmentation method using deep learning presented in this study predicts ROIs and left ventricular EFs similar to the human-set averages when given an arbitrary GBP image as input. If automatic segmentation methods are developed and applied to functional nuclear medicine cardiac examinations that require ROI definition, they are expected to contribute greatly to improving the efficiency and accuracy of processing and analysis by nuclear medicine specialists.
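
For reference, the quantities evaluated above reduce to simple array operations: Dice and IoU compare the predicted and human-set ROIs, and the EF comes from background-corrected counts in the end-diastolic (ED) and end-systolic (ES) frames via the standard count-based GBP formula EF = (ED - ES) / ED × 100. The sketch below illustrates both with toy data standing in for the DeepLabV3/ResNet-50 outputs.

```python
# Minimal sketch of the evaluation above: Dice/IoU between two left-ventricle
# ROIs and a count-based EF with background correction. The toy disks stand
# in for DeepLabV3/ResNet-50 masks and the GBP frames.
import numpy as np

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def ejection_fraction(ed_frame, es_frame, ed_roi, es_roi, bkg_per_pixel):
    """Background-corrected ROI counts: EF = (ED - ES) / ED * 100."""
    ed = ed_frame[ed_roi].sum() - bkg_per_pixel * ed_roi.sum()
    es = es_frame[es_roi].sum() - bkg_per_pixel * es_roi.sum()
    return (ed - es) / ed * 100.0

# Toy 64x64 frames with a bright "ventricle" disk that shrinks at systole.
yy, xx = np.mgrid[:64, :64]
ed_roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 18 ** 2
es_roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 14 ** 2
ed_frame = np.where(ed_roi, 100.0, 20.0)
es_frame = np.where(es_roi, 90.0, 20.0)

print(f"Dice = {dice(ed_roi, es_roi):.3f}, IoU = {iou(ed_roi, es_roi):.3f}")
print(f"EF ~ {ejection_fraction(ed_frame, es_frame, ed_roi, es_roi, 20.0):.1f}%")
```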

Measurements of the Hepatectomy Rate and Regeneration Rate Using Deep Learning in CT Scan of Living Donors (딥러닝을 이용한 CT 영상에서 생체 공여자의 간 절제율 및 재생률 측정)

  • Mun, Sae Byeol;Kim, Young Jae;Lee, Won-Suk;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.6
    • /
    • pp.434-440
    • /
    • 2022
  • Liver transplantation is a critical treatment for patients with end-stage liver disease. The number of living donor liver transplantations is increasing due to the imbalance between the demand for and supply of brain-dead organ donations. As a result, the importance of accurately evaluating donor suitability is also increasing rapidly. Accurate measurement of the donor's liver volume is the most important step, as it is essential for the recipient's postoperative progress and the donor's safety. Therefore, we propose liver segmentation with a two-dimensional U-Net in abdominal CT images from pre-operation, POD (postoperative day) 7, and POD 63. In addition, we introduce an algorithm that measures the volume of the segmented liver and computes the hepatectomy rate and the regeneration rate across pre-operation, POD 7, and POD 63. The learning model performs best on the pre-operation images: the datasets from pre-operation, POD 7, and POD 63 have DSCs of 94.55 ± 9.24%, 88.40 ± 18.01%, and 90.64 ± 14.35%, respectively. The mean liver volumes measured by the trained model are 1423.44 ± 270.17 ml pre-operation, 842.99 ± 190.95 ml at POD 7, and 1048.32 ± 201.02 ml at POD 63. The donors' hepatectomy rate averages 39.68 ± 13.06%, and the regeneration rate at POD 63 averages 14.78 ± 14.07%.
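
The volumetry step above is straightforward once the U-Net mask is available: volume is the voxel count times the voxel volume, and the rates are ratios of the three volumes. The sketch below uses rate definitions that are assumptions on our part, chosen because they approximately reproduce the averages reported in the abstract.

```python
# Minimal sketch of the volumetry step. Assumptions: the hepatectomy and
# regeneration rate definitions below are our own reading, chosen because
# they roughly reproduce the averages reported in the abstract.
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """mask: boolean (z, y, x) array from the U-Net; spacing_mm: voxel
    spacing (dz, dy, dx) in mm. Volume = voxel count x voxel volume."""
    voxel_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> ml
    return mask.sum() * voxel_ml

mask = np.zeros((10, 100, 100), dtype=bool)
mask[:, 20:80, 20:80] = True  # toy "liver" region
print(f"toy volume: {liver_volume_ml(mask, (5.0, 1.0, 1.0)):.0f} ml")

# Mean volumes from the abstract (ml).
v_pre, v_pod7, v_pod63 = 1423.44, 842.99, 1048.32
hepatectomy_rate = (v_pre - v_pod7) / v_pre * 100.0     # resected fraction
regeneration_rate = (v_pod63 - v_pod7) / v_pre * 100.0  # regrown fraction
print(f"hepatectomy ~ {hepatectomy_rate:.1f}%, "
      f"regeneration ~ {regeneration_rate:.1f}%")
```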

CO2 Exchange in Kwangneung Broadleaf Deciduous Forest in a Hilly Terrain in the Summer of 2002 (2002년 여름철 경사진 광릉 낙엽 활엽수림에서의 이산화탄소 교환)

  • Choi, Tae-jin;Kim, Joon;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.5 no.2
    • /
    • pp.70-80
    • /
    • 2003
  • We report the first direct measurement of CO2 flux over the Kwangneung broadleaf deciduous forest, one of the tower flux sites in the KoFlux network. An eddy covariance system was installed on a 30 m tower along with other meteorological instruments from June to August 2002. Although the study site was non-ideal (with valley-like terrain), turbulence characteristics for a limited range of wind directions (i.e., 90±45°) were not significantly different from those obtained over simple, homogeneous terrain with an ideal fetch. Despite a very low rate of data retrieval, preliminary results from our analysis are encouraging and worthy of further investigation. Ignoring the role of advection terms, the averaged net ecosystem exchange (NEE) of CO2 ranged from -1.2 to 0.7 mg m⁻² s⁻¹ from June to August 2002. The effect of weak turbulence on nocturnal NEE was examined in terms of friction velocity (u*) along with an estimation of the storage term. The effect of low u* on nocturnal NEE was obvious, with a threshold value of about 0.2 m s⁻¹. The contribution of the storage term to nocturnal NEE was insignificant, suggesting that the CO2 stored within the forest canopy at night was probably removed by drainage flow along the hilly terrain. This could also be an artifact of the uncertainty in calculating the storage term from a single-level concentration. Hyperbolic light response curves explained >80% of the variation in the observed NEE, indicating that CO2 exchange at the site was notably light-dependent. Such a relationship can be used effectively to fill missing gaps in NEE data through the season. Finally, a simple scaling analysis based on a linear flow model suggested that advection might play a significant role in evaluating NEE at this site.
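
The hyperbolic light response fit mentioned above is the standard rectangular-hyperbola model NEE(PPFD) = -(a·PPFD·Amax)/(a·PPFD + Amax) + Reco. A minimal sketch of fitting it for gap-filling follows, with synthetic data standing in for the u*-filtered Kwangneung record.

```python
# Minimal sketch of gap-filling NEE with a rectangular-hyperbolic light
# response curve. Synthetic data stand in for the u*-filtered Kwangneung
# eddy-covariance record; parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def light_response(ppfd, a, amax, reco):
    """NEE (mg CO2 m-2 s-1): uptake is negative, respiration positive."""
    return -(a * ppfd * amax) / (a * ppfd + amax) + reco

rng = np.random.default_rng(2)
ppfd = rng.uniform(0.0, 2000.0, 200)  # umol m-2 s-1
nee_obs = light_response(ppfd, 0.002, 1.1, 0.25) + rng.normal(0.0, 0.05, 200)

popt, _ = curve_fit(light_response, ppfd, nee_obs, p0=(0.001, 1.0, 0.2))
print("a=%.4f, Amax=%.2f, Reco=%.2f" % tuple(popt))
# Missing half-hours can then be filled with light_response(ppfd_gap, *popt).
```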

CFD ANALYSIS OF TURBULENT JET BEHAVIOR INDUCED BY A STEAM JET DISCHARGED THROUGH A VERTICAL UPWARD SINGLE HOLE IN A SUBCOOLED WATER POOL

  • Kang, Hyung-Seok;Song, Chul-Hwa
    • Nuclear Engineering and Technology
    • /
    • v.42 no.4
    • /
    • pp.382-393
    • /
    • 2010
  • Thermal mixing by steam jets in a pool is dominantly influenced by the turbulent water jet generated by the condensing steam jets, and proper prediction of this turbulent jet behavior is critical for pool mixing analysis. A turbulent jet flow induced by a steam jet discharged through a vertical upward single hole into a subcooled water pool was subjected to computational fluid dynamics (CFD) analysis. Based on small-scale test data obtained under a horizontal steam discharging condition, this analysis was performed to validate a CFD method previously developed for condensing jet-induced pool mixing phenomena. In the previous validation work, the CFD results and the test data were compared over a limited range of radial and axial directions in terms of the turbulent jet velocity and temperature profiles. Furthermore, the turbulent jet induced by the steam jet through a horizontal single hole in a subcooled water pool failed to show an exactly axisymmetric flow pattern with regard to overall pool mixing, whereas the CFD analysis used an axisymmetric grid model. Therefore, another new small-scale test was conducted under a vertical upward steam discharging condition. The purpose of this test was to generate velocity and temperature profiles of the turbulent jet by expanding the measurement range from the jet center out to the location where the velocity falls to about 5% of the centerline velocity U_m, and from 10 cm to 30 cm from the exit of the discharge nozzle. The results of the new CFD analysis show that the recommended CFD model, with a high turbulent intensity of 40% for the turbulent jet and a fine mesh grid model, can accurately predict the test results within an error of about 10%. In this work, the turbulent jet model, which is used to simply predict the temperature and velocity profiles along the axial and radial directions by means of empirical correlations and Tollmien's theory, was improved on the basis of the new test data. The results validate the CFD analysis model. Furthermore, the turbulent jet model developed in this study can be used to analyze pool thermal mixing when an ellipsoidal steam jet is discharged at a high steam mass flux into a subcooled water pool.
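
As context for the turbulent jet model mentioned above, here is a minimal sketch of the classical self-similar round-jet relations (centerline velocity decaying as 1/x, Gaussian radial profile, linear spreading). The constants are textbook values, not the paper's improved correlations fitted to the new test data.

```python
# Minimal sketch of classical self-similar round-jet relations, the kind of
# engineering model the paper improves with its new test data. Constants
# (k, spread) are textbook values, not the paper's fitted correlations.
import numpy as np

def jet_velocity(x, r, u0, d0, k=6.0, spread=0.11):
    """Axial velocity at axial distance x and radius r for a round jet with
    exit velocity u0 and nozzle diameter d0."""
    um = u0 * np.minimum(1.0, k * d0 / x)            # centerline decay ~ 1/x
    b = spread * x                                   # half-width grows linearly
    return um * np.exp(-np.log(2.0) * (r / b) ** 2)  # Gaussian radial profile

x = np.linspace(0.10, 0.30, 5)  # 10 cm to 30 cm from the nozzle, as above
print("centerline U_m:", jet_velocity(x, 0.0, u0=10.0, d0=0.005))
print("at r = 2 cm:   ", jet_velocity(x, 0.02, u0=10.0, d0=0.005))
```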

A Study of Unemployment Duration: A Survival Analysis Using Log Normal Model (실업급여 수급권자의 실업기간과 재취업에 관한 실증연구: 모수적 생존모델(Log-Normal Model)을 이용한 분석)

  • Kang, Chul-Hee;Kim, Kyo-Seong;Kim, Jin-Wook
    • Korean Journal of Social Welfare
    • /
    • v.37
    • /
    • pp.1-31
    • /
    • 1999
  • In Korea, little is known about unemployment duration and the exit rate from unemployment. This paper empirically examines the duration of unemployment using data for the years 1996 and 1997 on unemployed individuals who were eligible for unemployment insurance benefits in Korea. A parametric survival model (log-normal model) is adopted to identify factors predicting transitions to reemployment. Factors that affect unemployment duration are sex, age, employment duration (years), prior salary, region, prior employment industry, cause of unemployment, officially determined unemployment benefit duration, degree of benefit exhaustion, and amount of benefits for early reemployment. Education, however, is not statistically significant. As for the degree of benefit exhaustion, the exit rate from unemployment decreases as benefit exhaustion is approached. As for the amount of benefits for early reemployment, the exit rate from unemployment increases as the amount of benefits increases. Hazards for reemployment gradually increase until 80 days after unemployment and gradually decrease thereafter. Thus, we find that the distribution of hazards for reemployment has a log-normal shape between an inverted U and an inverted L. This paper offers a unique analysis of unemployment duration and the exit rate from unemployment in the Korean Unemployment Insurance system, which has functioned as the most valuable social safety-net mechanism in the recent national economic crisis. Indeed, this paper provides basic knowledge about the realities faced by unemployed individuals in the Unemployment Insurance system and identifies research areas that require further study.
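
The "inverted U" hazard shape reported above is exactly what a log-normal duration model produces: the reemployment hazard rises to a peak and then declines. A minimal sketch follows; the distribution parameters are illustrative, chosen so the hazard peaks near day 80 as reported.

```python
# Minimal sketch of the log-normal hazard shape: it rises to a peak and then
# declines, matching the reemployment pattern above. Parameters are
# illustrative, chosen so the hazard peaks near day 80 as reported.
import numpy as np
from scipy.stats import lognorm

t = np.linspace(1.0, 365.0, 365)          # days unemployed
dist = lognorm(s=1.0, scale=130.0)        # sigma = 1, median ~130 days (toy)
hazard = dist.pdf(t) / dist.sf(t)         # h(t) = f(t) / S(t)
print(f"reemployment hazard peaks at ~day {t[np.argmax(hazard)]:.0f}")
```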


Optimal Strategy of Hybrid Marketing Channel in Electronic Commerce (전자상거래하에서의 하이브리드 마케팅 채널의 믹스 전략에 관한 연구)

  • Chun, Se-Hak;Kim, Jae-Cheol
    • Asia pacific journal of information systems
    • /
    • v.17 no.2
    • /
    • pp.83-95
    • /
    • 2007
  • We are motivated by the question of how offline and online firms compete. The Internet led many conventional offline firms to build dynamic online businesses as additional sales channels, using advantages such as brand equity, an existing customer base with comprehensive purchasing data, integrated marketing, economies of scale, and long experience with the logistics of order fulfillment and customer service. Even though hybrid selling through both offline and online channels seems to have advantages over a pure online retailer, not all conventional offline firms choose to create an online business. Many conventional offline firms have launched online businesses since the beginning of the Internet era; however, merely being online is not likely to guarantee success. According to Bizate.com's report, whether the hybrid channel strategy is successful is still under investigation. For example, consider the classic case of Barnes and Noble versus Amazon.com: Barnes and Noble was already the largest chain of bookstores in the U.S. when Amazon.com was established in 1995, and BarnesandNoble.com followed suit in 1997. After suffering losses in its initial years, Amazon finally turned profitable in 2003. In 2004, Amazon's net income was $588 million on revenues of $6.92 billion, while Barnes and Noble earned $143 million on revenues of $4.87 billion, which included BarnesandNoble.com's loss of $21 million on revenues of $420 million. While these examples serve to motivate our thinking, they do not explain when offline firms should venture online, nor do they provide an analytical framework that can be generalized to other competitive online-offline situations. We attempt to do this in this paper and analyze a hybrid channel model in which a conventional offline firm competes against online firms using its own direct online channel. We are particularly interested in the optimal channel strategy when a conventional offline firm sells its products through its own direct online channel to compete with other rival online firms. We consider two situations, in which the firm's direct online channel and other online firms are either symmetric or asymmetric in the brand effect. The analysis presents several findings. In the symmetric model, where a hybrid firm's online channel is not differentiated from a pure online firm, (i) a conventional offline firm will not launch its online business. In the asymmetric model, where a hybrid firm's online channel is differentiated from a pure online firm, (ii) a conventional offline firm can launch its online business if its brand effect is greater than a certain threshold; (iii) there is a positive relationship between the brand effect and online customer costs, showing that a conventional offline firm needs a stronger brand effect to launch an online business as online customer costs decrease; and (iv) there is a negative relationship between the brand effect and the number of customers with access to the Internet, showing that a conventional offline firm tends to launch its online business as the number of customers with access to the Internet increases.

Personalized Speech Classification Scheme for the Smart Speaker Accessibility Improvement of the Speech-Impaired people (언어장애인의 스마트스피커 접근성 향상을 위한 개인화된 음성 분류 기법)

  • SeungKwon Lee;U-Jin Choe;Gwangil Jeon
    • Smart Media Journal
    • /
    • v.11 no.11
    • /
    • pp.17-24
    • /
    • 2022
  • With the spread of smart speakers based on voice recognition and deep learning technology, not only non-disabled people but also blind or physically disabled people can easily control home appliances such as lights and TVs by voice through linked home network services, which has greatly improved quality of life. However, speech-impaired people cannot use these useful smart speaker services because articulation or speech disorders make their pronunciation inaccurate. In this paper, we propose a personalized voice classification technique that enables speech-impaired users to use some of the functions provided by a smart speaker. The goal is to increase the recognition rate and accuracy for sentences spoken by speech-impaired people, even with a small amount of data and a short learning time, so that the services provided by the smart speaker can actually be used. We apply data augmentation and the one-cycle learning rate optimization technique while fine-tuning a ResNet18 model. In an experiment in which each of 30 smart speaker commands was recorded 10 times and the model was trained within 3 minutes, the speech classification recognition rate was about 95.2%.
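
A minimal sketch of the training recipe described above (fine-tuning ResNet18 with a one-cycle learning-rate schedule) follows. The spectrogram tensors and the data handling are placeholders; the paper's exact preprocessing and augmentation are not reproduced here.

```python
# Minimal sketch of the training recipe: fine-tune ResNet18 with a one-cycle
# learning-rate schedule. The random tensors are placeholders for augmented
# spectrograms of the user's 30 recorded commands.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

NUM_COMMANDS = 30
model = resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_COMMANDS)  # new classifier head

# Placeholder batch: (N, 3, 224, 224) spectrogram images, one label per clip.
x = torch.randn(64, 3, 224, 224)
y = torch.randint(0, NUM_COMMANDS, (64,))
loader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

epochs = 5
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=1e-3, epochs=epochs, steps_per_epoch=len(loader))
loss_fn = nn.CrossEntropyLoss()

model.train()
for _ in range(epochs):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        sched.step()  # one-cycle: LR warms up, then anneals, every step
```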

Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction

  • Jung Hee Hong;Eun-Ah Park;Whal Lee;Chulkyun Ahn;Jong-Hyo Kim
    • Korean Journal of Radiology
    • /
    • v.21 no.10
    • /
    • pp.1165-1177
    • /
    • 2020
  • Objective: To assess the feasibility of applying a deep learning-based denoising technique to coronary CT angiography (CCTA) along with iterative reconstruction for additional noise reduction. Materials and Methods: We retrospectively enrolled 82 consecutive patients (male:female = 60:22; mean age, 67.0 ± 10.8 years) who had undergone both CCTA and invasive coronary artery angiography from March 2017 to June 2018. All included patients underwent CCTA with iterative reconstruction (ADMIRE level 3, Siemens Healthineers). We developed a deep learning-based denoising technique (ClariCT.AI, ClariPI), based on a modified U-Net-type convolutional neural network model designed to predict the low-dose noise component in the original images. Denoised images were obtained by subtracting the predicted noise from the originals. Image noise, CT attenuation, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated objectively. The edge rise distance (ERD) was measured as an indicator of image sharpness. Two blinded readers subjectively graded image quality on a 5-point scale. Diagnostic performance of CCTA was evaluated based on the presence or absence of significant stenosis (≥ 50% lumen reduction). Results: Objective image quality (original vs. denoised: image noise, 67.22 ± 25.74 vs. 52.64 ± 27.40; SNR [left main], 21.91 ± 6.38 vs. 30.35 ± 10.46; CNR [left main], 23.24 ± 6.52 vs. 31.93 ± 10.72; all p < 0.001) and subjective image quality (2.45 ± 0.62 vs. 3.65 ± 0.60, p < 0.001) improved significantly in the denoised images. The average ERDs of the denoised images were significantly smaller than those of the originals (0.98 ± 0.08 vs. 0.09 ± 0.08, p < 0.001). With regard to diagnostic accuracy, no significant differences were observed among paired comparisons. Conclusion: Applying the deep learning technique along with iterative reconstruction can enhance noise reduction performance, with significant improvement in the objective and subjective image quality of CCTA images.
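
The denoising scheme described above is residual: the network predicts the noise component and the denoised image is the original minus that prediction. A minimal sketch follows, with a placeholder standing in for the proprietary ClariCT.AI model and toy HU values for the ROI-based SNR/CNR metrics.

```python
# Minimal sketch of residual denoising and ROI-based image-quality metrics.
# The lambda below is a stand-in for the proprietary ClariCT.AI noise
# predictor; HU values and noise levels are toy numbers.
import numpy as np

def denoise(ct_image, noise_predictor):
    """Denoised image = original minus the predicted noise component."""
    return ct_image - noise_predictor(ct_image)

def snr(roi_hu, noise_sd):
    return roi_hu.mean() / noise_sd

def cnr(roi_hu, background_hu, noise_sd):
    return (roi_hu.mean() - background_hu.mean()) / noise_sd

rng = np.random.default_rng(3)
clean = np.full((64, 64), 400.0)                    # contrast-filled lumen, HU
noisy = clean + rng.normal(0.0, 60.0, clean.shape)  # low-dose-like noise
# Stand-in predictor: pretend the model recovers half the noise component.
denoised = denoise(noisy, lambda img: 0.5 * (img - clean))
print(f"image noise SD: {noisy.std():.1f} -> {denoised.std():.1f} HU")
```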