• Title/Summary/Keyword: Optimization calculation


Code development on steady-state thermal-hydraulic for small modular natural circulation lead-based fast reactor

  • Zhao, Pengcheng;Liu, Zijing;Yu, Tao;Xie, Jinsen;Chen, Zhenping;Shen, Chong
    • Nuclear Engineering and Technology
    • /
    • v.52 no.12
    • /
    • pp.2789-2802
    • /
    • 2020
  • Small Modular Reactors (SMRs) are attracting wide attention due to their outstanding performance, and extensive studies have been carried out on lead-based fast reactors (LFRs) cooled with lead or lead-bismuth eutectic (LBE); the small modular natural circulation LFR is one of the promising candidates for SMR and LFR development. One of the challenges in designing a small modular natural circulation LFR is to master the natural circulation thermal-hydraulic performance of the reactor primary circuit, since the natural circulation characteristics are a coupled thermal-hydraulic problem involving the core thermal power, the primary loop layout, the operating state of the secondary cooling system, etc. Accurate prediction of the thermal-hydraulic features of natural circulation LFRs is therefore required for evaluating reactor operating conditions and optimizing the thermal-hydraulic design. In this study, a thermal-hydraulic analysis code is developed for small modular natural circulation LFRs, based on several mathematical models formulated specifically for natural circulation. URANUS, a small modular natural circulation LBE-cooled fast reactor developed in Korea, is chosen to assess the code's capability. Comparisons against MARS calculation results demonstrate the accuracy of the code, with the key thermal-hydraulic parameters agreeing fairly well with the MARS ones. As a typical application, steady-state analyses were conducted to assess the thermal-hydraulic behavior under nominal conditions, and several parameters affecting natural circulation were evaluated. In addition, two characteristic parameters for analyzing the natural circulation capacity of natural circulation LFRs were established. The analyses show that the core thermal power, the thermal center difference, and the flow resistance are the main factors affecting natural circulation: increasing the core thermal power, increasing the thermal center difference, and decreasing the flow resistance can significantly increase the reactor mass flow rate. The characteristic parameters can be used to quickly evaluate the natural circulation capacity of a natural circulation LFR under normal operating conditions.
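
The mass-flow dependence described in this abstract can be illustrated with a toy steady-state balance in which the buoyancy head produced by the thermal center difference equals the loop pressure losses. This is a minimal sketch under textbook assumptions, not the code developed in the paper; the LBE properties and loop parameters below are rough illustrative values.

```python
# Minimal sketch of a steady-state natural circulation balance for an LBE loop:
# buoyancy head rho*g*beta*dT*dz must equal the total friction/form loss, which
# couples mass flow rate, core power and thermal center difference.
# All property values and loop parameters are illustrative, not URANUS data.
from scipy.optimize import brentq

RHO, BETA, CP = 10500.0, 1.2e-4, 145.0   # rough LBE density [kg/m^3], expansion [1/K], cp [J/kg/K]
G = 9.81

def mass_flow(power_w, dz_thermal_m, k_loss, flow_area_m2):
    """Solve rho*g*beta*dT(m)*dz = K*m^2/(2*rho*A^2) with dT = Q/(m*cp)."""
    def residual(m):
        dt = power_w / (m * CP)                        # core temperature rise [K]
        buoyancy = RHO * G * BETA * dt * dz_thermal_m  # driving head [Pa]
        friction = k_loss * m**2 / (2.0 * RHO * flow_area_m2**2)
        return buoyancy - friction
    return brentq(residual, 1e-3, 1e6)

# Higher power or a larger thermal center difference raises the natural circulation flow.
print(mass_flow(power_w=40e6, dz_thermal_m=2.0, k_loss=15.0, flow_area_m2=1.0))
```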

A Study on the Optimization of Offsite Consequence Analysis by Plume Segmentation and Multi-Threading (플룸분할 및 멀티스레딩을 통한 소외사고영향 분석시간 최적화 연구)

  • Seunghwan, Kim;Sung-yeop, Kim
    • Journal of the Korean Society of Safety
    • /
    • v.37 no.6
    • /
    • pp.166-173
    • /
    • 2022
  • A variety of input parameters are taken into consideration while performing a Level 3 PSA. Some parameters related to plume segments, spatial grids, and particle size distribution have flexible input formats. Fine modeling performed by splitting a number of segments or grids may enhance the accuracy of analysis but is time-consuming. Analysis speed is highly important because a considerably large number of calculations is required to handle Level 2 PSA scenarios for a single-unit or multi-unit Level 3 PSA. This study developed a sensitivity analysis supporting interface called MACCSsense to compare the results of the trials of plume segmentation with the results of the base case to determine its impact (in terms of time and accuracy) and to support the development of a modeling approach, which saves calculation time and improves accuracy. MACCSense is an automation tool that uses a large amount of plume segmentation analysis results obtained from MUST Converter and Mr. Manager developed by KAERI to generate a sensitivity report that includes impact (time and accuracy) by comparing them with the base-case result. In this study, various plume segmentation approaches were investigated, and both the accuracy and speed of offsite consequence analysis were evaluated using MACCS as a consequence analysis tool. A simultaneous evaluation revealed that execution time can be reduced using multi-threading. In addition, this study can serve as a framework for the development of a modeling strategy for plume segmentation in order to perform accurate and fast offsite consequence analyses.
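
Since the speed gain discussed above comes from running many segmentation trials in parallel, the sketch below shows one common way to farm independent consequence-code runs out to worker threads. It only illustrates the multi-threading idea; `run_case` is a hypothetical placeholder, not part of MACCSsense, MUST Converter, or Mr. Manager.

```python
# Illustrative sketch: dispatching many offsite-consequence runs (e.g. one per
# plume-segmentation trial) across worker threads to cut wall-clock time.
import time
from concurrent.futures import ThreadPoolExecutor

def run_case(input_deck: str) -> str:
    # Placeholder: in practice this would launch the external consequence code
    # (e.g. MACCS) on the prepared input deck and wait for it to finish.
    time.sleep(0.1)
    return f"{input_deck}: done"

decks = [f"trial_{n:03d}.inp" for n in range(8)]   # hypothetical segmentation trials

# Threads are sufficient here because each run would block on an external process.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(run_case, decks):
        print(result)
```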

Development of Optimal Stage Calculation Program for the Design of Waste Etchant Recovering Process (폐식각액 재생공정 설계를 위한 최적단수계산 프로그램 개발)

  • So, Won-Shoup;Park, Jin-Soo;Jung, Jae-Hak;Sur, Gil-Soo
    • Clean Technology
    • /
    • v.15 no.3
    • /
    • pp.165-171
    • /
    • 2009
  • In this study, we determined the relation between the $FeCl_3$ recovering concentration and the number of stages of the extraction process for an invar (Fe+Ni) etching process. To reach the desired $FeCl_3$ recovering concentration economically, we developed a simulation program for designing the optimal $FeCl_3$ extraction process; the key parameter for this simulation was obtained through pilot-scale experiments. Process simulation with the developed program can reduce the emission of waste etching solution as well as the treatment costs. In addition, the program can calculate the number of stages of the etchant recovering system and the process time needed to reach the desired $FeCl_3$ concentration. The program was used to compute the optimal capacity of the etchant recovering system and was applied to optimizing the number of stages of an etchant recovering system in a real IT-industry plant.
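
As a rough illustration of a stage-count calculation of this kind, the sketch below steps through ideal stages until a target concentration is reached. The per-stage recovery factor is a hypothetical stand-in for the key parameter the authors fitted from pilot-scale experiments; this is not their program.

```python
# Minimal sketch: count how many ideal extraction stages are needed to raise the
# FeCl3 concentration from a starting value to a target, assuming a fixed
# fractional gain per stage (a hypothetical per-stage model).
def stages_needed(c_start: float, c_target: float, per_stage_gain: float) -> int:
    """Return the number of stages for the concentration to reach c_target."""
    stages, c = 0, c_start
    while c < c_target:
        c *= (1.0 + per_stage_gain)   # hypothetical per-stage enrichment
        stages += 1
        if stages > 1000:
            raise RuntimeError("target concentration not reachable with this gain")
    return stages

print(stages_needed(c_start=30.0, c_target=42.0, per_stage_gain=0.05))  # prints 7
```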

Analytical study on cable shape and its lateral and vertical sags for earth-anchored suspension bridges with spatial cables

  • Gen-min Tian;Wen-ming Zhang;Jia-qi Chang;Zhao Liu
    • Structural Engineering and Mechanics
    • /
    • v.87 no.3
    • /
    • pp.255-272
    • /
    • 2023
  • Compared with conventional vertical cable systems, spatial cable systems can provide more transverse and torsional stiffness without sacrificing vertical bearing capacity, which is quite attractive for the development of long-span earth-anchored suspension bridges. Their higher economy highlights the importance of refined form-finding analysis, while the internal connection between the lateral and vertical sags has not yet been specified. Given this, an analytic form-finding algorithm for earth-anchored suspension bridges with spatial cables is proposed in this paper. Through the geometric compatibility condition and the mechanical equilibrium condition, the expressions for a cable segment, the recurrence relationship between catenary parameters, and the control equations of the spatial cable are established. Additionally, the nonlinear generalized reduced gradient method is introduced for fast, high-precision numerical analysis. Furthermore, the analytic expression for the lateral and vertical sags is deduced and discussed, which is very significant for the space design above the bridge deck and for optimizing the sag-to-span ratio in the preliminary design stage. Finally, the proposed method is verified with the aid of two examples: an operational self-anchored suspension bridge (with spatial cables and a 260 m main span) and an earth-anchored suspension bridge under design (with spatial cables and a 500 m main span). The necessity of an iterative calculation of hanger tensions for earth-anchored suspension bridges is confirmed. It is further concluded that each main cable and its connected hangers lie in very close inclined planes.
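
The kind of segment relation assembled in such a form-finding analysis can be illustrated with the plain catenary equations below, which recover the horizontal cable force for a target mid-span sag by a one-dimensional root find. This is a textbook sketch only, not the authors' spatial-cable formulation, and the loads, spans, and bracketing values are illustrative.

```python
# Textbook catenary relations for a segment with level ends: with span l, weight
# per unit length q and horizontal force H, the mid-span sag is
# f = (H/q) * (cosh(q*l/(2*H)) - 1). The root-find inverts this for a target sag.
import math
from scipy.optimize import brentq

def sag(H: float, q: float, l: float) -> float:
    a = H / q                                     # catenary parameter [m]
    return a * (math.cosh(l / (2.0 * a)) - 1.0)

def horizontal_force_for_sag(f_target: float, q: float, l: float) -> float:
    # Bracket chosen so cosh() does not overflow for these illustrative numbers.
    return brentq(lambda H: sag(H, q, l) - f_target, 1e5, 1e9)

q, l, f = 30.0e3, 500.0, 50.0                     # N/m, m, m (illustrative values)
H = horizontal_force_for_sag(f, q, l)
print(H, sag(H, q, l))                            # recovered force and check of the sag
```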

Clinical Review of the Current Status and Utility of Targeted Alpha Therapy (표적 알파 치료의 현황 및 유용성에 대한 임상적 고찰)

  • Sang-Gyu Choi
    • Journal of radiological science and technology
    • /
    • v.46 no.5
    • /
    • pp.379-394
    • /
    • 2023
  • Targeted Alpha Therapy (TAT) is a cancer treatment approach that spares normal tissue while selectively killing tumor cells by exploiting the high cytotoxicity and short range of alpha particles. It is a highly specific and effective treatment strategy whose potential has been demonstrated in many clinical and experimental studies. The method delivers alpha particles precisely by targeting specific molecules present in cancer tissue, effectively destroying cancer cells and suppressing tumors. One of the main advantages of TAT lies in the physical properties of alpha particles: they carry very high energy over a short effective range and interact with target molecules in cancer tissue, with lethal effects on cancer cells through DNA damage and cell death. TAT has shown positive results in preclinical and clinical studies for various types of cancer, especially those resistant or unresponsive to existing treatments, but several challenges and limitations must be overcome for successful clinical translation and application. These include the supply and production of suitable alpha-emitting radioisotopes, optimization of targeting vectors and delivery formulations, understanding and regulation of radiological effects, and accurate dose calculation and toxicity assessment. Future research should focus on developing new or improved isotopes, targeting vectors, delivery formulations, radiobiological models, combination strategies, and imaging techniques for TAT. With the potential to offer a new option for overcoming cancer, TAT could improve the quality of life and survival of cancer patients; to this end, prospective studies on more carcinomas and more diverse patient groups are needed.

The Optimization of Scan Timing for Contrast-Enhanced Magnetic Resonance Angiography

  • Jongmin J. Lee;Phillip J. Tirman;Yongmin Chang;Hun-Kyu Ryeom;Sang-Kwon Lee;Yong-Sun Kim;Duk-Sik Kang
    • Korean Journal of Radiology
    • /
    • v.1 no.3
    • /
    • pp.142-151
    • /
    • 2000
  • Objective: To determine the optimal scan timing for contrast-enhanced magnetic resonance angiography and to evaluate a new timing method based on the arteriovenous circulation time. Materials and Methods: Eighty-nine contrast-enhanced magnetic resonance angiographic examinations were performed, mainly in the extremities. A 1.5T scanner with a 3-D turbo-FLASH sequence was used, and during each study two consecutive arterial phases and one venous phase were acquired. Scan delay time was calculated from the time-intensity curve by the traditional (n = 48) and/or the new (n = 41) method. The latter was based on the arteriovenous circulation time rather than the peak arterial enhancement time used in the traditional method. The numbers of first-phase images showing a properly enhanced arterial phase were compared between the two methods. Results: Mean scan delay time was 5.4 sec longer with the new method than with the traditional one. Properly enhanced first-phase images were found in 65% of cases (31/48) using the traditional timing method and in 95% (39/41) using the new method. When cases with a mismatch between the target vessel and the time-intensity curve acquisition site were excluded, erroneous acquisition occurred in seven cases with the traditional method but in none with the new method. Conclusion: Calculating the scan delay time on the basis of the arteriovenous circulation time provides better timing for arterial phase acquisition than the traditional method.
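
The abstract does not give the paper's exact delay formula, so the sketch below only extracts the two timing landmarks it contrasts from a synthetic test-bolus time-intensity curve: the arterial peak time used by the traditional method and the arteriovenous interval the new method is based on. The curves and numbers are invented for illustration.

```python
# Hedged sketch: pull timing landmarks from a synthetic time-intensity curve.
import numpy as np

t = np.arange(0, 60, 1.0)                          # s, sampling of the test bolus
arterial = np.exp(-0.5 * ((t - 18) / 4.0) ** 2)    # synthetic arterial enhancement
venous = np.exp(-0.5 * ((t - 30) / 6.0) ** 2)      # synthetic venous enhancement

t_art_peak = t[np.argmax(arterial)]                # landmark of the traditional method
t_ven_peak = t[np.argmax(venous)]
av_circulation_time = t_ven_peak - t_art_peak      # landmark used by the new method

print(t_art_peak, av_circulation_time)             # prints 18.0 12.0
```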


Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.311-326
    • /
    • 2024
  • The rapid development of neural network technology is pushing big-data-driven neural network models to handle the texture effects of complex objects. Because of the limitations of such models in complex scenes, custom template matching needs to be established and applied across many fields of computer vision. Such models do not depend strongly on high-quality, small labeled sample databases, yet deep-feature-based machine learning systems still perform relatively poorly at texture-effect inference. A neural-network-based style transfer algorithm collects and preserves pattern data and extracts and modernizes pattern features; through this algorithm model, the texture and color of patterns can be rendered and displayed digitally more easily. In this paper, based on texture-effect reasoning with custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and the user-defined template is calculated from a user-defined template of multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the external area of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm can accurately capture the salient target, suppress noise more effectively, and improve the visualization results. The proposed deep convolutional neural network optimization algorithm shows good speed, data accuracy, and robustness; it can adapt to a wider range of task scenes, display the redundant vision-related information of image conversion, and further improve the computational efficiency and accuracy of convolutional networks, which is of high research significance for the study of image information conversion.
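
A very loose illustration of the template-matching step described above is given below: the scene and the user-defined template are mapped to feature vectors and compared by cosine similarity. The feature extractor here is a placeholder (a normalized flattened image) standing in for convolutional features; none of this is the authors' model.

```python
# Hypothetical sketch of feature-based template matching via cosine similarity.
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    # Placeholder for a CNN feature extractor: a flattened, L2-normalized image
    # stands in for the learned multi-dimensional feature label.
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def template_similarity(scene: np.ndarray, template: np.ndarray) -> float:
    # Cosine similarity between the two feature vectors.
    return float(np.dot(extract_features(scene), extract_features(template)))

rng = np.random.default_rng(0)
scene, template = rng.random((32, 32)), rng.random((32, 32))   # synthetic inputs
print(template_similarity(scene, template))
```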

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • In this paper, a method to directly calculate the major components of skin color, such as melanin and hemoglobin, from the RGB signal of a camera is proposed. The main components of skin color are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths of the measured light. Quantities calculated this way include the melanin index and the erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct way to compute such components from a general digital camera is hard to find, and a method has been proposed that indirectly calculates the concentrations of melanin and hemoglobin using independent component analysis. That method takes a region of an RGB image as input, extracts characteristic vectors of melanin and hemoglobin, and calculates their concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult, because a group of pixels in a certain area is used as input, and that, since the feature vectors are obtained by an optimization method, the results tend to differ each time it is executed. Its final output is an image representing the melanin and hemoglobin components, obtained by converting back to the RGB coordinate system rather than using the feature vectors themselves. To overcome these disadvantages, the proposed method calculates the melanin and hemoglobin component values in a feature space rather than in the RGB coordinate system using the feature vectors, estimates the spectral reflectance corresponding to the skin color from a general digital camera, and from this spectral reflectance calculates the detailed components constituting skin pigments, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid. The proposed method does not require special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, direct per-pixel calculation is possible, and the same results are obtained in repeated executions. The standard deviation of the melanin and hemoglobin densities obtained with the proposed method was 15% of that of the conventional method, i.e., about six times more stable.
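
The conventional ICA baseline that the paper improves on can be sketched as below: skin pixels are log-transformed and two independent components are extracted, which prior work interprets as melanin- and hemoglobin-related densities up to sign and scale. The input patch is synthetic, and this is not the proposed spectral-reflectance method.

```python
# Sketch of the conventional ICA baseline (not the paper's proposed method).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
rgb = rng.uniform(0.2, 0.9, size=(64, 64, 3))         # placeholder skin patch in [0, 1]

optical_density = -np.log(rgb.reshape(-1, 3) + 1e-6)  # Beer-Lambert style log domain
ica = FastICA(n_components=2, random_state=0)         # fixing the seed hides the
components = ica.fit_transform(optical_density)       # run-to-run variation noted above

melanin_map = components[:, 0].reshape(64, 64)        # interpretation up to sign/scale
hemoglobin_map = components[:, 1].reshape(64, 64)
print(melanin_map.shape, hemoglobin_map.shape)
```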

Optimal Sensor Placement of Boundaries and Robustness Analysis for Chemical Release Detection and Response of Near Plant (주변 사업장의 화학물질 확산 감지와 대응을 위한 경계면의 센서배치 최적화 및 강건성 분석)

  • Cho, Jaehoon;Kim, Hyunseung;Kim, Tae-Ok;Shin, Dongil
    • Journal of the Korean Institute of Gas
    • /
    • v.20 no.5
    • /
    • pp.104-111
    • /
    • 2016
  • Recently, the quantities of chemicals handled by the chemical industry have been increasing. At the same time, release accidents are increasing due to aging equipment, mechanical failure, human error, etc., and industrial complexes concentrate many plants in a specific area. For these reasons, a chemical release accident can lead to a high probability of a large-scale disaster. For release detection and response, the optimal sensor placement on the boundaries needs to be analyzed, calculated from release scenarios selected according to the release conditions and weather conditions of a chemical process. This paper investigates chlorine release accident scenarios using COMSOL. Based on these accident scenarios, a numerical calculation is performed to determine the optimized sensor placement, weighted by detection probability, detection time, and concentration. In addition, the validity of the sensor placement is strengthened through a robustness analysis against unforeseen accident scenarios, which verifies that the approach can be applied effectively to any process. The results of this study can also help place mobile sensors and track a gas release based on concentration data.
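
The weighted placement idea can be sketched as follows: each candidate boundary location receives a score combining detection probability, detection time, and detected concentration across scenarios, and the best-scoring locations are selected. The scenario data, weights, and selection rule below are illustrative assumptions, not the authors' CFD-based formulation.

```python
# Illustrative weighted scoring of candidate sensor locations over release scenarios.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_scenarios = 20, 50
detect_prob = rng.uniform(0.0, 1.0, (n_candidates, n_scenarios))
detect_time = rng.uniform(10.0, 600.0, (n_candidates, n_scenarios))   # s
concentration = rng.uniform(0.0, 5.0, (n_candidates, n_scenarios))    # ppm

w_prob, w_time, w_conc = 0.5, 0.3, 0.2                 # hypothetical weights
score = (w_prob * detect_prob.mean(axis=1)
         + w_time * (1.0 - detect_time.mean(axis=1) / 600.0)   # earlier detection is better
         + w_conc * concentration.mean(axis=1) / 5.0)

n_sensors = 3
chosen = np.argsort(score)[::-1][:n_sensors]           # keep the top-scoring boundary points
print(chosen)
```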

A Hardwired Location-Aware Engine based on Weighted Maximum Likelihood Estimation for IoT Network (IoT Network에서 위치 인식을 위한 가중치 방식의 최대우도방법을 이용한 하드웨어 위치인식엔진 개발 연구)

  • Kim, Dong-Sun;Park, Hyun-moon;Hwang, Tae-ho;Won, Tae-ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.11
    • /
    • pp.32-40
    • /
    • 2016
  • IEEE 802.15.4 is one of the protocols for radio communication in a personal area network. Because IoT communication demands low cost and low power, it requires the highest level of optimization in the implementation. Recently, studies on location-awareness algorithms based on the IEEE 802.15.4 standard have been carried out. Location estimation basically gives equal consideration to reference node information and blind node information. However, the error is not accounted for in this algorithm, even though the estimated coordinates of the blind node include an error. In this paper, we enhance the conventional maximum likelihood estimation with weighting coefficients and implement a hardwired location-awareness engine for small code size and low power consumption. In field tests using test-beds, the suggested hardware-based location-awareness method improves accuracy by 10 percent and reduces both calculation and memory accesses by 30 percent, which lowers the system's power consumption.
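
As a minimal illustration of weighting reference-node information by its reliability, the sketch below estimates a blind node's position from noisy anchor ranges with a weighted, linearized least-squares step. The weights and the linearization are illustrative; this is not the paper's hardwired maximum likelihood engine.

```python
# Weighted linearized position estimation for a blind node from anchor ranges.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])
d = np.linalg.norm(anchors - true_pos, axis=1) + np.array([0.1, -0.2, 0.05, 0.3])  # noisy ranges
w = 1.0 / np.array([0.1, 0.2, 0.05, 0.3]) ** 2      # hypothetical per-anchor confidence

# Linearize by subtracting the first anchor's range equation:
# 2*(x_i - x_0)*x + 2*(y_i - y_0)*y = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (d[0]**2 - d[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))

sw = np.sqrt(w[1:])                                  # apply weights to the linearized rows
est, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
print(est)                                           # close to (3, 6)
```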