• Title/Summary/Keyword: Iteration number


Thermal-Hydraulic Analysis and Parametric Study on the Spent Fuel Pool Storage (기사용 핵연료 저장조에 대한 열수력 해석 및 관련 인자의 영향 평가)

  • Lee, Kye-Bock;Nam, Ki-Il;Park, Jong-Ryul;Lee, Sang-Keun
    • Nuclear Engineering and Technology
    • /
    • v.26 no.1
    • /
    • pp.19-31
    • /
    • 1994
  • The objective of this study is to conduct a thermal-hydraulic analysis of the spent fuel pool and to evaluate the effects of the relevant parameters on that analysis. The selected parameters are the Reynolds number and the gap flow through the water gap between the fuel cell and the fuel bundle. A simplified flow network for a path of fuel cells is used to analyze the natural-circulation phenomenon. In the flow network analysis, the pressure drop for each assembly from the entrance of the fuel rack to the exit of the fuel assembly is balanced by the driving head due to the density difference between the pool fluid and the average fluid in each spent fuel assembly. The governing equations are developed from this relation. However, since the parameters (flow rate, pressure loss coefficient, decay heat, density) are coupled with one another, an iterative method is used to obtain the solution. For the analysis of the YGN 3&4 spent fuel rack, 12 channels are considered, and inputs such as the decay heat and pressure loss coefficient are determined conservatively. The results show the thermal-hydraulic characteristics (void fraction, density, boiling height) of the YGN 3&4 spent fuel rack. A small amount of boiling occurs in the cells, and the fuel cladding temperature remains below 343.3$^{\circ}C$. The parametric evaluation indicates that the flow resistances due to geometric effects are very sensitive to the Reynolds number in the transition region, and that the gap flow is negligible because the flow resistance in the gap flow path is larger than that in the fuel bundle. (An illustrative code sketch of the iterative flow balance follows this entry.)

  • PDF
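
The natural-circulation balance described above lends itself to a fixed-point iteration: guess a channel flow, compute the heated-channel density and the resulting buoyancy head, and update the flow until the friction pressure drop matches the head. The following Python sketch illustrates that loop; all correlations, loss coefficients, and numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: fixed-point iteration for natural-circulation flow in one
# rack channel, balancing the friction pressure drop against the buoyancy
# head. All correlations and constants are illustrative assumptions.

RHO_POOL = 958.0   # kg/m^3, pool water density (assumed)
CP = 4216.0        # J/(kg K), water heat capacity (assumed)
G = 9.81           # m/s^2

def channel_density(m_dot, q_decay):
    """Assumed channel-average density: lower flow -> hotter -> lighter."""
    d_temp = q_decay / (m_dot * CP)              # heat balance: dT = q / (m cp)
    return RHO_POOL / (1.0 + 7.5e-4 * d_temp)    # crude expansion model

def solve_channel(q_decay, k_loss=20.0, area=0.01, height=4.0,
                  m0=0.5, tol=1e-8, max_iter=200):
    """Iterate until dp_friction = K m^2 / (2 rho A^2) equals the buoyancy
    head (rho_pool - rho_channel) * g * H."""
    m = m0
    for _ in range(max_iter):
        rho_ch = channel_density(m, q_decay)
        head = (RHO_POOL - rho_ch) * G * height
        # invert the friction law for the next flow estimate
        m_new = (2.0 * RHO_POOL * area**2 * head / k_loss) ** 0.5
        if abs(m_new - m) < tol:
            return m_new
        m = 0.5 * (m + m_new)      # under-relaxation for stability
    return m

print(solve_channel(q_decay=5.0e4))   # flow (kg/s) for an assumed 50 kW assembly
```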

A Study for the Development of a Ratio Scale Measuring Pain Using Korean Pain Terms (통증어휘를 이용한 통증비율척도의 개발연구)

  • 이은옥;윤순녕;송미순
    • Journal of Korean Academy of Nursing
    • /
    • v.14 no.2
    • /
    • pp.93-111
    • /
    • 1984
  • The main purpose of this study is to develop a ratio scale for measuring the level of pain using Korean pain terms. The specific purposes are to identify the degree of pain conveyed by each pain term in each subclass, to classify each subclass in terms of the dimensions of pain, and to analyze the factors into which the Korean pain ratio scale clusters. One hundred and fifty-eight pain terms, originally identified as representative terms and their synonyms, were used for data collection. Fifty-eight nursing professors and sixty-one medical doctors who had contact with patients in pain were asked to rate the weight of each pain term on a visual analogue scale. Subclasses in which the ranks of the pain terms were the same as the findings of two previous studies were 1) thermal pain, 2) cavity pressure, 3) single stimulating pain, 4) radiation pain, and 5) chemical pain. Subclasses in which the ranks of the pain terms were confused were 1) incisive pressure and 2) cold pain. Subclasses in which one new pain term was added were 1) inflammatory-repeated pain, 2) punctuate pressure, 3) constrictive pressure, 4) fatigue-related pressure, and 5) suffering-related pain. Subclasses in which two new pain terms were added were 1) traction pressure, 2) peripheral nerve pain, 3) dull pain, 4) pulsation-related pain, 5) digestion-related pain, 6) tract pain, and 7) punishment-related pain. The subclass to which three new pain terms were added was fear-related pain. The rating scores of five words in four subclasses differed significantly between the normal group and the extreme group of subjects in terms of subjective rating; only one of these words was among those newly added to the scale. The rating scores of 12 words in nine subclasses differed significantly between the doctor group and the nursing professor group, and only three of these 12 words were newly added to the scale. For these 12 words, the mean scores of the nursing professors were always 7 to 16 points higher than those of the medical doctors. In the analysis of the subjects' judgements of the dimensions of the pain terms, the subclasses of dull pain, cavity pressure, tract pain, and cold pain were suggested for inclusion in the miscellaneous dimension. Factor analysis of the ratings given to 96 pain words, using principal components analysis without iteration and with varimax rotation limiting the number of factors to four, extracted factors of severe pain (factor I), mild-moderate pain (factor II), causative pain (factor III), and temperature-related pain (factor IV) with factor loadings above 0.388. When the pain words were rearranged on the basis of factor loadings above 0.368, the number of factors decreased to only the first two. The maximum score of a pain word in factor II was 46.17, and the minimum score in factor I was 45.36. Further studies are needed to establish the validity, reliability, sensitivity, and practicability of this ratio scale with patients having various sources of pain. (An illustrative code sketch of the factor extraction follows this entry.)

  • PDF
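
The factor extraction described above (principal components without iteration, varimax rotation, a four-factor limit) can be sketched as follows. The rating matrix here is random stand-in data; only the 96-word width, the four-factor limit, and the 0.388 loading cutoff are taken from the abstract.

```python
# Minimal sketch: principal-components extraction followed by varimax
# rotation. Stand-in data; not the study's rating matrix.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a (p x k) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d_old * (1.0 + tol):   # rotation criterion stopped improving
            break
        d_old = d_new
    return loadings @ R

rng = np.random.default_rng(0)
ratings = rng.normal(size=(119, 96))       # 119 raters x 96 pain words (stand-in)
X = ratings - ratings.mean(axis=0)
# principal components via SVD; keep four factors as in the abstract
_, s, vt = np.linalg.svd(X, full_matrices=False)
loadings = (vt[:4].T * s[:4]) / np.sqrt(len(X) - 1)
rotated = varimax(loadings)
print((np.abs(rotated) > 0.388).sum(axis=0))   # words loading on each factor
```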

Performance Analysis of a Statistical Packet Voice/Data Multiplexer (통계적 패킷 음성 / 데이터 다중화기의 성능 해석)

  • 신병철;은종관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.11 no.3
    • /
    • pp.179-196
    • /
    • 1986
  • In this paper, the performance of a statistical packet voice/data multiplexer is studied. We assume that the multiplexer uses two separate finite queues for voice and data traffic and that voice traffic has priority over data. For the performance analysis, the output link of the multiplexer is divided into a sequence of time slots. The voice signal is modeled as an (M+1)-state Markov process, M being the packet generation period in slots; the data traffic is modeled as a simple Poisson process. In this discrete-time analysis, the queueing behavior of the voice traffic is little affected by the data traffic, since voice has priority over data. We therefore first analyze the queueing behavior of the voice traffic and then, using the result, study the queueing behavior of the data traffic. For the packet voice multiplexer, the input state and the voice buffer occupancy are jointly formulated as a two-dimensional Markov chain. For the integrated voice/data multiplexer, we use a three-dimensional Markov chain that represents the input voice state and the buffer occupancies of voice and data. With these models, numerical performance results have been obtained by the Gauss-Seidel iteration method, and the analytical results have been verified by computer simulation. (An illustrative code sketch of the Gauss-Seidel solution follows this entry.) The results show that there exist tradeoffs among the number of voice users, the output link capacity, the voice queue size, and the overflow probability for the voice traffic, and likewise among the traffic load, the data queue size, and the overflow probability for the data traffic. There is also a tradeoff between the performance of the voice and data traffic for a given input traffic and link capacity. In addition, the average queueing delay of the data traffic is found to be longer than the maximum buffer size when the gain of time assignment speech interpolation (TASI) is more than two and the number of voice users is small.

  • PDF
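
The Gauss-Seidel iteration named in the abstract solves the stationary equations pi = pi P of a discrete-time Markov chain by in-place sweeps over the states. The sketch below applies it to a toy 3-state chain rather than the paper's multi-dimensional voice/data chain.

```python
# Minimal sketch: Gauss-Seidel iteration for the stationary distribution of
# a discrete-time Markov chain. The 3-state matrix is a toy stand-in.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])   # row-stochastic toy transition matrix

def gauss_seidel_stationary(P, tol=1e-12, max_iter=10_000):
    """Solve pi = pi P by sweeping states in place (Gauss-Seidel),
    renormalizing so the probabilities sum to one."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        old = pi.copy()
        for j in range(n):
            # pi_j (1 - P_jj) = sum_{i != j} pi_i P_ij, using already-updated
            # components of pi within the same sweep
            pi[j] = sum(pi[i] * P[i, j] for i in range(n) if i != j) / (1.0 - P[j, j])
        pi /= pi.sum()
        if np.max(np.abs(pi - old)) < tol:
            break
    return pi

print(gauss_seidel_stationary(P))
```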

New Frequency-domain GSC using the Modified-CFAR Algorithm (변형된 CFAR 알고리즘을 이용한 새로운 주파수영역 GSC)

  • Cho, Myeong-Je;Moon, Sung-Hoon;Han, Dong-Seog;Jung, Jin-Won;Kim, Soo-Joong
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.2
    • /
    • pp.96-107
    • /
    • 1999
  • Generalized sidelobe cancellers (GSC's) are used to suppress interference in array radar. Frequency-domain GSC's converge faster than time-domain GSC's because they remove the correlation between interferences using a frequency-domain least mean square (LMS) algorithm. However, this advantage has not been fully exploited, because the weights of all frequency bins have always been updated, even for interference-free bins. In this paper, we propose a new frequency-domain GSC based on a constant false-alarm rate (CFAR) detector, which adaptively determines, from the power of each frequency bin, which bins' weights are updated. The canceller updates the weights of only those bins whose power is high because of the interference signal. Computer simulation shows that the new GSC reduces the number of iterations for convergence by more than 100 compared with conventional GSC's, and the signal-to-noise ratio (SNR) improvement is more than 5 dB. Moreover, the number of weights renewed during adaptation is much smaller than in the conventional canceller. (An illustrative code sketch of the gated per-bin update follows this entry.)

  • PDF
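
A minimal sketch of the gated per-bin update: a frequency-domain NLMS weight is adapted only in bins whose power stands out from the rest, in the spirit of the CFAR gating described above. The signals, the threshold factor, and the step size are illustrative assumptions, not the paper's detector or array model.

```python
# Minimal sketch: frequency-domain NLMS that adapts only power-flagged bins.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # FFT size / number of frequency bins
w = np.zeros(N, dtype=complex)           # per-bin adaptive weights
mu = 0.5                                 # NLMS step size (assumed)
t = np.arange(N)

for snapshot in range(200):
    noise = rng.normal(size=N) + 1j * rng.normal(size=N)
    jam = 20.0 * np.exp(2j * np.pi * 10 * t / N)   # narrowband interferer -> bin 10
    x = noise + jam                      # blocking-matrix (auxiliary) output
    d = 0.1 * jam + rng.normal(size=N) + 1j * rng.normal(size=N)  # main beam with leakage
    X, D = np.fft.fft(x), np.fft.fft(d)
    power = np.abs(X) ** 2
    # CFAR-like gate: adapt a bin only when its power stands out from the
    # average of the other bins by an assumed factor.
    noise_est = (power.sum() - power) / (N - 1)
    active = power > 4.0 * noise_est
    E = D - w * X                        # per-bin error
    w[active] += mu * np.conj(X[active]) * E[active] / power[active]   # NLMS update

print("adapted bins:", np.nonzero(active)[0])   # expect only the interferer bin
```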

Evaluation of SharpIR Reconstruction Method in PET/CT (PET/CT 검사에서 SharpIR 재구성 방법의 평가)

  • Kim, Jung-Yul;Kang, Chun-Koo;Park, Hoon-Hee;Lim, Han-Sang;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.12-16
    • /
    • 2012
  • Purpose: In PET image reconstruction, iterative methods such as OSEM (Ordered Subsets Expectation Maximization) have now generally replaced traditional analytic methods such as filtered back-projection, with improvements in the components of the system model: the geometry, fully 3D scatter, and low-noise randoms estimates. The SharpIR algorithm aims to improve PET image contrast-to-noise by incorporating information about the PET detector response into the 3D iterative reconstruction. The aim of this study is to evaluate the SharpIR reconstruction method in PET/CT. Materials and Methods: To measure the detector response for spatial resolution, a capillary tube was filled with FDG and scanned at varying distances from the iso-center (5, 10, 15, 20 cm). To measure image quality for contrast recovery, the NEMA IEC body phantom (Data Spectrum Corporation, Hillsborough, NC) was used, with sphere diameters of 10, 13, 17, and 22 mm simulating hot lesions and 28 and 37 mm simulating cold lesions. A solution of 5.4 kBq/mL of $^{18}F$-FDG in water was used as the radioactive background, giving a lesion-to-background ratio of 4.0. Images were reconstructed with VUE Point HD and with VUE Point HD using the SharpIR reconstruction algorithm. For the clinical evaluation, a whole-body FDG scan was acquired, and to demonstrate contrast recovery, ROIs were drawn on a metabolic hot spot and on a uniform region of the liver. Images were reconstructed while varying the number of iterations (1~10). Results: With VUE Point HD, the full width at half maximum (FWHM) increased with axial distance from the iso-center, whereas VUE Point HD with SharpIR showed a nearly constant FWHM over increasing distances. VUE Point HD with SharpIR showed better contrast recovery than VUE Point HD in both the phantom and clinical studies. Conclusion: By incorporating more information about the detector system response, the SharpIR algorithm improves the accuracy of the underlying model used in VUE Point HD. The SharpIR algorithm improves spatial resolution for a line source in air and improves contrast recovery at equivalent noise levels in phantom and clinical studies; it can therefore be usefully applied in clinical practice, although a longitudinal study is warranted. (An illustrative code sketch of the iterative update follows this entry.)

  • PDF
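
For context, the sketch below shows the OSEM family of updates that VUE Point HD builds on; SharpIR adds a detector-response model inside the same iterative loop, which is not modeled here. The system matrix is a random toy, not a PET geometry.

```python
# Minimal sketch: ordered-subsets EM (OSEM), where each sub-iteration uses
# only one subset of the lines of response (LORs). Toy data throughout.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_lor = 64, 256
A = rng.random((n_lor, n_pix))               # toy system matrix (LORs x pixels)
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true).astype(float)    # noisy measured sinogram

def osem(y, A, n_subsets=4, n_iter=10):
    """OSEM: multiplicative EM update restricted to one LOR subset at a time."""
    n_lor, n_pix = A.shape
    x = np.ones(n_pix)
    subsets = np.array_split(np.arange(n_lor), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            proj = A[s] @ x                        # forward-project estimate
            ratio = y[s] / np.maximum(proj, 1e-12) # measured / estimated
            x *= (A[s].T @ ratio) / np.maximum(A[s].sum(axis=0), 1e-12)
    return x

x_hat = osem(y, A)
print(float(np.corrcoef(x_hat, x_true)[0, 1]))   # rough agreement check
```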

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time delays of the projection, of computing the errors between measured and estimated data, and of the backprojection within an iteration were measured; the total time included the latency of data transfers between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. This improvement of about 135 times was caused by the growing delay of the CPU-based computation after a certain number of iterations; the GPU-based computation, on the other hand, showed very little variation in time delay per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries. (An illustrative code sketch of the ML-EM update follows this entry.)
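
The plain ML-EM update that the paper parallelizes is shown below in NumPy on a toy system matrix. Each step (forward projection, measured/estimated ratio, backprojection, multiplicative update) is a matrix or element-wise kernel, which is what makes the algorithm map naturally onto CUDA threads.

```python
# Minimal sketch: the ML-EM iteration on a toy system matrix. On a GPU,
# each of the commented steps would be a parallel kernel; here they are
# NumPy calls. Data and sizes are stand-ins.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((256, 64))                        # toy system matrix (LORs x pixels)
y = rng.poisson(A @ rng.random(64)).astype(float)   # toy measured data

x = np.ones(A.shape[1])                          # uniform initial image
sens = A.sum(axis=0)                             # sensitivity image (backproject 1s)
for it in range(32):                             # 32 iterations, as in the abstract
    proj = A @ x                                 # projection kernel
    ratio = y / np.maximum(proj, 1e-12)          # measured/estimated error ratio
    x *= (A.T @ ratio) / np.maximum(sens, 1e-12) # backprojection + update kernel
# x now holds the reconstructed image estimate
```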

Study on CGM-LMS Hybrid Based Adaptive Beam Forming Algorithm for CDMA Uplink Channel (CDMA 상향채널용 CGM-LMS 접목 적응빔형성 알고리듬에 관한 연구)

  • Hong, Young-Jin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.895-904
    • /
    • 2007
  • This paper proposes a robust sub-optimal smart antenna for a Code Division Multiple Access (CDMA) basestation. It exploits properties of both the Least Mean Square (LMS) algorithm and the Conjugate Gradient Method (CGM) in its beamforming process. The weight update takes place at the symbol level, after the PN correlators of the receiver module, under the assumption that the post-correlation desired signal power is far larger than the power of each interfering signal. The proposed algorithm is simple, with a computational load as low as five times the number of antenna elements (O(5N)) per snapshot. The output Signal to Interference plus Noise Ratio (SINR) of the proposed smart antenna system once the weight vector reaches the steady state has been examined. Computer simulations show that the proposed beamforming algorithm improves the SINR significantly compared with the single-antenna case. The convergence of the weight vector has also been investigated, showing that the proposed hybrid algorithm performs better than either CGM or LMS alone during the initial stage of the weight update iteration. The Bit Error Rate (BER) characteristics of the proposed array are also shown as the processor input Signal to Noise Ratio (SNR) varies. (An illustrative code sketch of a CGM-LMS hybrid follows this entry.)
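
One way to sketch a CGM-LMS hybrid of the kind described is to take a few conjugate-gradient steps on the normal equations R w = p for fast initial convergence, then continue with cheap per-symbol LMS tracking. The array model, the number of CG steps, and the step size below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: CGM steps toward the MMSE weight, then LMS tracking.
import numpy as np

rng = np.random.default_rng(4)
N = 8                                        # antenna elements
steer = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(20)))

n_snap = 500
s = (rng.integers(0, 2, n_snap) * 2 - 1).astype(complex)   # desired symbols
noise = 0.3 * (rng.normal(size=(n_snap, N)) + 1j * rng.normal(size=(n_snap, N)))
X = np.outer(s, steer) + noise               # received snapshots (rows)

R = (X.T @ X.conj()) / n_snap                # sample covariance  E[x x^H]
p = (X.T @ s.conj()) / n_snap                # cross-correlation  E[x s*]

# --- CGM phase: a few conjugate-gradient steps on R w = p ---
w = np.zeros(N, dtype=complex)
r = p - R @ w
d = r.copy()
for _ in range(3):                           # assumed number of CG steps
    alpha = (r.conj() @ r) / (d.conj() @ (R @ d))
    w += alpha * d
    r_new = r - alpha * (R @ d)
    beta = (r_new.conj() @ r_new) / (r.conj() @ r)
    d = r_new + beta * d
    r = r_new

# --- LMS phase: symbol-by-symbol tracking from the CGM starting point ---
mu = 0.01
for x_k, s_k in zip(X, s):
    e = s_k - w.conj() @ x_k                 # a-priori error, output y = w^H x
    w += mu * x_k * np.conj(e)               # LMS update

print(abs(w.conj() @ steer))                 # response toward the desired user
```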

An Improved Fast Fractal Image Decoding by recomposition of the Decoding Order (복원순서 재구성에 의한 개선된 고속 프랙탈 영상복원)

  • Jeong, Tae-Il;Moon, Kwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.5
    • /
    • pp.84-93
    • /
    • 2000
  • Conventional fractal decoding applies the IFS (iterated function system) to every range region. However, some of the range regions can be decoded without iteration, and there are data-dependence regions. To decode an $R{\times}R$ range block, a $2R{\times}2R$ domain block is needed. This decoding can be analyzed with a dependence graph: each vertex represents a range block and is classified as either a range vertex or a domain vertex, and an edge indicates that a vertex is referenced by other vertices. The in-degree and out-degree of a vertex are defined as the numbers of entering and exiting edges, respectively. The proposed method analyzes the fractal code with this dependence graph and recomposes the decoding order using the out-degree information: if the out-degree of a vertex is zero, the vertex can serve as a settled input to vertices with data dependence. The proposed method can thus extend the data-dependence regions by recomposing the decoding order. As a result, the iterated regions are minimized without loss of image quality or PSNR (peak signal-to-noise ratio), so decoding is faster because the computational complexity of the IFS in fractal image decoding is reduced. (An illustrative code sketch of the reordering follows this entry.)

  • PDF
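
The reordering can be sketched as a Kahn-style sweep over the dependence graph: blocks whose out-degree is zero are decoded directly, and settling them may free further blocks; anything left (e.g., self-referencing blocks) still needs IFS iteration. The tiny graph below is a made-up example, not fractal code from the paper.

```python
# Minimal sketch: recompose the decoding order from out-degree information.
from collections import defaultdict, deque

# edges: range block -> blocks whose decoded pixels it needs (its domain)
deps = {
    "R0": [],            # references nothing: decodable immediately
    "R1": ["R0"],
    "R2": ["R0", "R1"],
    "R3": ["R3"],        # self-referencing: still needs IFS iteration
}

def decode_order(deps):
    """Kahn-style sweep: emit blocks whose remaining out-degree is zero."""
    out_deg = {v: len(set(d) - {v}) for v, d in deps.items()}
    users = defaultdict(list)                    # reverse edges
    for v, ds in deps.items():
        for d in set(ds) - {v}:
            users[d].append(v)
    ready = deque(v for v in deps if out_deg[v] == 0 and v not in deps[v])
    direct, settled = [], set()
    while ready:
        v = ready.popleft()
        direct.append(v)
        settled.add(v)
        for u in users[v]:                       # settling v may free its users
            out_deg[u] -= 1
            if out_deg[u] == 0 and u not in deps[u]:
                ready.append(u)
    iterative = [v for v in deps if v not in settled]
    return direct, iterative

print(decode_order(deps))   # (['R0', 'R1', 'R2'], ['R3'])
```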

The Flood Water Stage Prediction based on Neural Networks Method in Stream Gauge Station (하천수위표지점에서 신경망기법을 이용한 홍수위의 예측)

  • Kim, Seong-Won;Salas, Jose-D.
    • Journal of Korea Water Resources Association
    • /
    • v.33 no.2
    • /
    • pp.247-262
    • /
    • 2000
  • In this paper, the WSANN (Water Stage Analysis with Neural Network) model is presented to predict the flood water stage at Jindong, the major stream gauging station in the Nakdong river basin. The WSANN model uses an improved backpropagation training algorithm complemented by the momentum method, improved initial conditions, and an adaptive learning rate; the data used for this study were classified into training and testing data sets. An empirical equation relating the number of hidden-layer nodes to the threshold iteration number was derived to determine the optimal number of hidden-layer nodes. The WSANN model was calibrated with the four training data sets, and as a result the WSANN22 and WSANN32 models were selected as the optimal models for verification. Model verification was carried out to evaluate model fitness with the two untrained testing data sets, and statistical analysis showed that the flood water stages were predicted reasonably well. Further research is needed on constructing real-time warnings of impending floods and on controlling flood water stages with neural network methods in river basins. (An illustrative code sketch of the training step follows this entry.)

  • PDF
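
A one-hidden-layer backpropagation loop with a momentum term, the training refinement the abstract names, is sketched below. The network size, data, and hyperparameters are stand-ins, not the WSANN settings, and the adaptive learning rate is omitted for brevity.

```python
# Minimal sketch: backpropagation with momentum on a toy regression task.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((100, 3))                        # e.g., upstream stages (toy data)
y = (X @ np.array([0.5, 0.3, 0.2]))[:, None]    # toy target water stage

n_hidden = 4
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)   # momentum buffers
lr, momentum = 0.1, 0.9

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = h @ W2 + b2
    err = out - y
    # backward pass (mean-squared-error gradients)
    gW2 = h.T @ err / len(X)
    gh = (err @ W2.T) * h * (1 - h)
    gW1 = X.T @ gh / len(X)
    # momentum updates: blend the previous step into the new one
    vW2 = momentum * vW2 - lr * gW2
    vW1 = momentum * vW1 - lr * gW1
    W2 += vW2; W1 += vW1
    b2 -= lr * err.mean(axis=0); b1 -= lr * gh.mean(axis=0)

print(float(np.mean((out - y) ** 2)))           # training MSE after 2000 epochs
```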

Adaptive Hard Decision Aided Fast Decoding Method using Parity Request Estimation in Distributed Video Coding (패리티 요구량 예측을 이용한 적응적 경판정 출력 기반 고속 분산 비디오 복호화 기술)

  • Shim, Hiuk-Jae;Oh, Ryang-Geun;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.635-646
    • /
    • 2011
  • In distributed video coding, a low-complexity encoder can be realized by shifting complex processes from the encoder to the decoder. However, not only the motion estimation/compensation processes but also the complex LDPC decoding process are then imposed on the Wyner-Ziv decoder, so decoder-side complexity has become an important issue. LDPC decoding consists of numerous decoding iterations, so its complexity grows with the number of iterations; it accounts for more than 60% of the whole Wyner-Ziv decoding complexity and is therefore a main target for complexity reduction. Previously, the HDA (Hard Decision Aided) method was introduced for fast LDPC decoding. For the currently received parity bits, the HDA method certainly reduces decoding complexity; however, LDPC decoding is still performed even when the amount of received parity is insufficient for successful decoding. Complexity can therefore be reduced further by skipping the decoding attempt when the parity is insufficient. In this paper, a parity request estimation method using bit-plane-wise correlation and temporal correlation is proposed. Joint use of the HDA method and the proposed method achieves about a 72% complexity reduction in the LDPC decoding process, while rate-distortion performance degrades by only 0.0275 dB in BDPSNR. (An illustrative code sketch of the parity gate follows this entry.)
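
The gating idea can be sketched with a Slepian-Wolf style estimate: a bit plane that disagrees with its side information with crossover probability p needs roughly n·H(p) parity bits, so a decoding attempt can be skipped when fewer have arrived. The entropy bound and safety margin below are illustrative assumptions, not the paper's bit-plane/temporal correlation estimator.

```python
# Minimal sketch: skip the iterative LDPC decode when the received parity
# falls short of an entropy-based estimate of what the bit plane needs.
import math

def required_parity(n_bits, crossover_p, margin=1.1):
    """Estimate: about n * H(p) parity bits are needed to correct a bit
    plane whose bits differ from the side information with probability p;
    `margin` absorbs the gap to the theoretical bound (assumed value)."""
    if crossover_p in (0.0, 1.0):
        return 0
    h = -crossover_p * math.log2(crossover_p) \
        - (1 - crossover_p) * math.log2(1 - crossover_p)
    return math.ceil(margin * n_bits * h)

def should_attempt_decode(received_parity, n_bits, crossover_p):
    """Gate the expensive iterative LDPC decode on the parity estimate."""
    return received_parity >= required_parity(n_bits, crossover_p)

# e.g., a 1584-bit plane whose side information mismatches 5% of the time
print(required_parity(1584, 0.05))            # estimated parity demand
print(should_attempt_decode(300, 1584, 0.05)) # too few bits: skip the decode
```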