• Title/Summary/Keyword: Iterative technique

Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.344-356
    • /
    • 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical applications. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in the imaging of liver hemangioma. Materials and Methods: Sixty-four projection data sets were acquired using a dual-head gamma camera for 28 lesions in 24 patients with cavernous hemangioma of the liver, and these raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging, which in our investigation was 4 iterations with 16 subsets. All images were then processed by both FBP and OSEM, and three experts reviewed them blindly. Results: According to the blind review of 28 lesions, OSEM images showed image quality equal or superior to that of FBP in nearly all cases. Although there was no significant difference in the detection of large lesions (>3 cm), five lesions 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict four small lesions (<1.5 cm). Conclusion: OSEM provided better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or long reconstruction time; OSEM therefore appears to be a good method for RBC blood pool SPECT in the diagnosis of liver hemangioma.
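
The OSEM update this abstract compares against FBP is a standard, well-documented iteration. Below is a minimal NumPy sketch of it, cycling the multiplicative update over projection subsets with the paper's best-performing setting (4 iterations, 16 subsets). The system matrix and sinogram are random stand-ins, not clinical data.

```python
# Minimal OSEM sketch (not the authors' clinical code): the classic
# update x <- x * A_s^T(y_s / (A_s x)) / (A_s^T 1), cycled over subsets.
import numpy as np

def osem(y, A, n_iter=4, n_subsets=16):
    """y: measured projections, A: system matrix (projections x voxels)."""
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)                          # uniform initial estimate
    subsets = np.array_split(np.arange(n_proj), n_subsets)
    for _ in range(n_iter):                     # paper's best setting: 4 iterations
        for s in subsets:                       # ... with 16 subsets
            As, ys = A[s], y[s]
            fwd = As @ x                        # forward projection of estimate
            ratio = np.where(fwd > 0, ys / fwd, 0.0)
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

# Toy usage with a random system matrix (64 views x 64 bins, 32x32 image):
rng = np.random.default_rng(0)
A = rng.random((64 * 64, 32 * 32))
truth = rng.random(32 * 32)
recon = osem(A @ truth, A)
```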

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop; the IHCM is a local search technique that precisely explores the region to which the GA search has converged (see the sketch after this abstract). The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the competing approach, the GA approach of Yun (2013) is used; it lacks any local search technique such as the IHCM used in the proposed HGA approach. CPU time, optimal solution, and optimal setting are used as measures of performance. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two RLNCC types are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are a total of 10,000 generations, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches.
With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two RLNCC types, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in terms of CPU time. Finally, the proposed HGA approach is shown to be more efficient than the conventional GA approach on both RLNCC types, since the former has both a GA search process and a local search process as an additional search scheme, while the latter has a GA search process alone. In future work, larger RLNCCs will be tested to assess the robustness of our approach.
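
A hedged sketch of the hybrid loop described above: a plain GA whose best individual is refined each generation by an iterative hill-climbing step with a search range of 2.0, as in the paper's setting. The cost function, bounds, and real-valued genes below are illustrative stand-ins (the paper encodes open/close decisions as a 0/1 bit string and minimizes an MIP cost); only the GA-plus-local-search structure mirrors the text.

```python
# Toy hybrid GA: elitist selection, two-point crossover, random mutation,
# plus an IHCM-style local refinement of the best individual each generation.
import random

def hill_climb(x, cost, step=2.0, tries=20):
    """Local refinement: accept random perturbations that lower the cost."""
    best = x[:]
    for _ in range(tries):
        cand = [v + random.uniform(-step, step) for v in best]
        if cost(cand) < cost(best):
            best = cand
    return best

def hybrid_ga(cost, dim, pop=20, gens=100, pc=0.5, pm=0.1):
    popn = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 2]                     # elitist selection
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            if random.random() < pc:                 # two-point crossover
                i, j = sorted(random.sample(range(dim), 2))
                a = a[:i] + b[i:j] + a[j:]
            if random.random() < pm:                 # random mutation
                k = random.randrange(dim)
                a = a[:k] + [random.uniform(-10, 10)] + a[k + 1:]
            children.append(a)
        popn = elite + children
        popn[0] = hill_climb(popn[0], cost)          # IHCM-style local search
    return min(popn, key=cost)

print(hybrid_ga(lambda v: sum(x * x for x in v), dim=5))
```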

Standard Penetration Test Performance in Sandy Deposits (모래지반에서 표준관입시험에 따른 관입거동)

  • Dung, N.T.;Chung, Sung-Gyo
    • Journal of the Korean Geotechnical Society
    • /
    • v.29 no.10
    • /
    • pp.39-48
    • /
    • 2013
  • This paper presents an equation to depict the penetration behavior during the standard penetration test (SPT) in sandy deposits. An energy balance approach is considered, and the driving mechanism of the SPT sampler is conceptually modeled as that of a miniature open-ended steel pipe pile driven into sand. The equation involves three sets of input parameters, including the hyperbolic parameters m and $\lambda$, which are difficult to determine. An iterative technique is thus applied to determine the optimized values of m and $\lambda$ using three measured values from routine SPT data. It is verified from a well-documented record that the simulated penetration curves are in good agreement with the measured ones. At a given depth, an increase in m results in a decrease in $\lambda$ and an increase in the curvature of the penetration curve as well as in the simulated N-value. Generally, the predicted penetration curve becomes nearly straight beyond the seating drive zone, which is more pronounced as soil density increases. Thus, the simulation method can be applied to extrapolate prematurely terminated test data, i.e., to determine the N-value equivalent to 30 cm of penetration. A simple linear equation is considered for obtaining similar results.
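
The abstract does not reproduce the energy-balance equation itself, so the following is only a generic sketch of the fitting pattern it describes: iteratively optimizing the two hyperbolic parameters m and $\lambda$ so that a placeholder hyperbolic penetration model matches three measured values. The model form and the numbers are assumptions for illustration, not the paper's equation.

```python
# Generic two-parameter iterative fit to three measured SPT points.
# The hyperbolic form depth = n / (m + lam * n) is a placeholder model.
import numpy as np
from scipy.optimize import least_squares

def fit_hyperbolic(blows, depths):
    """blows, depths: three measured (blow count, penetration) pairs."""
    def residual(p):
        m, lam = p
        return depths - blows / (m + lam * blows)   # placeholder model
    sol = least_squares(residual, x0=[1.0, 0.1])    # iterative optimization
    return sol.x                                     # optimized (m, lam)

blows = np.array([10.0, 20.0, 30.0])
depths = np.array([15.0, 24.0, 29.0])                # cm, illustrative only
m, lam = fit_hyperbolic(blows, depths)
print(f"m={m:.3f}, lambda={lam:.3f}")
```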

Image Restoration and Segmentation for PAN-sharpened High Multispectral Imagery (PAN-SHARPENED 고해상도 다중 분광 자료의 영상 복원과 분할)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_1
    • /
    • pp.1003-1017
    • /
    • 2017
  • Multispectral image data of high spatial resolution is required to obtain correct information on the ground surface. Multispectral data, however, have lower resolution than panchromatic data. The PAN-sharpening fusion technique produces multispectral data at the higher resolution of the panchromatic image. Recently, the object-based approach has been applied to high-resolution data more often than the conventional pixel-based one. Object-based image analysis requires image segmentation, which produces objects as groups of pixels. Image segmentation can be achieved effectively by a process that merges neighboring regions step by step in a regional adjacency graph (RAG). In satellite remote sensing, the operational environment of the sensor causes image degradation during acquisition. This degradation increases the variation of pixel values within homogeneous areas and deteriorates the accuracy of image segmentation. An iterative approach that reduces the difference between neighboring pixel values in the same area is employed to alleviate this variation. The size of the segmented regions is associated with the quality of the segmentation and is decided by a stopping rule in the merging process. In this study, the image restoration and segmentation were quantitatively evaluated using simulation data and were also applied to three PAN-sharpened multispectral images of high resolution: Dubaisat-2 data of 1 m panchromatic resolution from LA, USA, and KOMPSAT-3 data of 0.7 m panchromatic resolution from Daejeon and Chungcheongnam-do on the Korean peninsula. The experimental results imply that the proposed method can improve analytical accuracy in the application of high-resolution PAN-sharpened multispectral imagery in remote sensing.
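
A hedged sketch of the step-by-step pairwise merging the abstract describes: maintain a region adjacency graph and repeatedly fuse the most similar neighboring regions until a stopping rule fires. The dissimilarity measure (mean difference) and the threshold are placeholders; a real implementation would start from an over-segmentation of the image and re-score stale edges after each merge.

```python
# RAG-based region merging with a simple dissimilarity stopping rule.
import heapq

def merge_rag(means, sizes, edges, stop_dissim=5.0):
    """means/sizes: per-region mean value and pixel count; edges: region pairs."""
    parent = list(range(len(means)))
    def find(i):                              # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    heap = [(abs(means[a] - means[b]), a, b) for a, b in edges]
    heapq.heapify(heap)
    while heap:
        d, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb or d > stop_dissim:       # stopping rule on dissimilarity
            continue
        total = sizes[ra] + sizes[rb]         # merge rb into ra,
        means[ra] = (means[ra] * sizes[ra] + means[rb] * sizes[rb]) / total
        sizes[ra] = total                     # ... updating the weighted mean
        parent[rb] = ra
    return [find(i) for i in range(len(means))]

# Four toy regions in a line; only the similar neighbors merge.
print(merge_rag([10.0, 11.0, 30.0, 31.0], [5, 5, 5, 5],
                [(0, 1), (1, 2), (2, 3)]))    # -> [0, 0, 2, 2]
```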

A Metrics-Based Approach to the Reorganization of Class Hierarchy Structures (클래스계층구조의 품질평가척도를 기반으로 하는 재구성기법)

  • Hwang, Sun-Hyung;Yang, Hea-Sool;Hwang, Young-Sub
    • The KIPS Transactions:PartD
    • /
    • v.10D no.5
    • /
    • pp.859-872
    • /
    • 2003
  • Class hierarchies often constitute the backbone of object-oriented software. Their quality is therefore quite crucial. Building class hierarchies of good quality is a very important and common task in object-oriented software development, but such hierarchies are not so easy to build. Moreover, a class hierarchy under construction is frequently restructured and refined until it becomes suitable for the requirements in an iterative and incremental development lifecycle. Therefore, there has been renewed interest in methodologies and tools that assist object-oriented developers in this task. In this paper, we define a set of quantitative metrics which provide a way of capturing a rough estimate of the complexity of a class hierarchy structure. In addition, we suggest a set of algorithms that transform an original class hierarchy into a reorganized one based on the proposed metrics. Furthermore, we prove that each algorithm is "object-preserving"; that is, the set of objects is never changed before and after applying the algorithm to a class hierarchy. The technique presented in this paper can be used as a guideline for the construction, restructuring, and refinement of class hierarchies, and the proposed metrics-based algorithms can serve developers as a useful instrument for object-oriented software development.
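
The abstract does not enumerate the proposed metrics, so as a stand-in the sketch below computes two classic class-hierarchy measures, depth of inheritance tree (DIT) and number of children (NOC), over a simple parent map. The paper's own complexity metrics would plug into the same kind of traversal.

```python
# Stand-in hierarchy metrics (DIT, NOC) over a {class: parent} map.
def hierarchy_metrics(parents):
    """parents: {class_name: parent_name, or None for roots}."""
    def depth(c):                       # DIT: hops up to the root
        d = 0
        while parents[c] is not None:
            c = parents[c]
            d += 1
        return d
    children = {c: 0 for c in parents}  # NOC: direct subclasses
    for c, p in parents.items():
        if p is not None:
            children[p] += 1
    return {c: {"DIT": depth(c), "NOC": children[c]} for c in parents}

# Toy hierarchy: Shape <- Polygon <- Triangle, Shape <- Circle
print(hierarchy_metrics({"Shape": None, "Polygon": "Shape",
                         "Triangle": "Polygon", "Circle": "Shape"}))
```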

Turbid water atmospheric correction for GOCI: Modification of MUMM algorithm (GOCI영상의 탁한 해역 대기보정: MUMM 알고리즘 개선)

  • Lee, Boram;Ahn, Jae Hyun;Park, Young-Je;Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.2
    • /
    • pp.173-182
    • /
    • 2013
  • The early Sea-viewing Wide Field-of-view Sensor (SeaWiFS) atmospheric correction algorithm, which is the basis of the atmospheric correction algorithm for the Geostationary Ocean Color Imager (GOCI), assumes that water-leaving radiances are negligible at near-infrared (NIR) wavelengths. For this reason, all of the satellite-measured radiance at the NIR wavelengths is attributed to aerosols. However, that assumption causes underestimation of the water-leaving radiances when applied to turbid Case-2 waters. To overcome this problem, the Management Unit of the North Sea Mathematical Models (MUMM) atmospheric correction algorithm was developed for turbid waters. The MUMM algorithm introduces a new parameter $\alpha$, representing the ratio of the water-leaving reflectances at the two NIR wavelengths. $\alpha$ is calculated by a statistical method and is assumed to be constant throughout the study area. Using this algorithm, we can obtain comparatively accurate water-leaving radiances in moderately turbid waters where the NIR water-leaving reflectance is less than approximately 0.01. However, the algorithm still underestimates the water-leaving radiances in extremely turbid water, since $\alpha$ changes with the concentration of suspended particles. In this study, we modified the MUMM algorithm to calculate an appropriate value of $\alpha$ using an iterative technique. As a result, the accuracy of the water-leaving reflectance was significantly improved: the root mean square error (RMSE) of the modified MUMM algorithm was 0.002, versus 0.0048 for the original MUMM algorithm.
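
A hedged sketch of the two-band retrieval with an iterated $\alpha$. The closed-form split of the Rayleigh-corrected reflectance into aerosol and water parts follows the standard MUMM similarity assumptions; the $\alpha$ update, via the placeholder nir_relation(), stands in for whatever empirical relation the modified algorithm uses (the abstract does not spell it out), and the numeric constants are illustrative only.

```python
# Two-band MUMM-style retrieval with an iteratively refreshed alpha.
def nir_relation(rho_w_865):
    # Placeholder: assumed empirical link between the two NIR water signals.
    return 1.72 * rho_w_865 - 8.0 * rho_w_865 ** 2

def modified_mumm(rho_rc_765, rho_rc_865, eps=1.1, alpha=1.72, n_iter=10):
    for _ in range(n_iter):
        # Solve rho_rc = rho_a + rho_w with rho_a765 = eps * rho_a865 and
        # rho_w765 = alpha * rho_w865 (two equations, two unknowns).
        rho_w_865 = (rho_rc_765 - eps * rho_rc_865) / (alpha - eps)
        rho_w_865 = max(rho_w_865, 1e-6)              # keep physical
        alpha = nir_relation(rho_w_865) / rho_w_865   # refresh the ratio
    return alpha, rho_w_865

print(modified_mumm(0.018, 0.012))   # illustrative reflectances
```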

FEM-based Seismic Reliability Analysis of Real Structural Systems (실제 구조계의 유한요소법에 기초한 지진 신뢰성해석)

  • Huh Jung-Won;Haldar Achintya
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.19 no.2 s.72
    • /
    • pp.171-185
    • /
    • 2006
  • A sophisticated reliability analysis method is proposed to evaluate the reliability of real, nonlinear, complicated dynamic structural systems excited by short-duration dynamic loadings such as earthquake motions, by intelligently integrating the response surface method, the finite element method, the first-order reliability method, and an iterative linear interpolation scheme. The method explicitly considers all major sources of nonlinearity and uncertainty in the load- and resistance-related random variables. The unique feature of the technique is that the seismic loading is applied in the time domain, providing an alternative to the classical random vibration approach. The four-parameter Richard model is used to represent the flexibility of connections of real steel frames, and uncertainties in the Richard parameters are also incorporated in the algorithm. The laterally flexible steel frame is then reinforced with reinforced concrete shear walls, and the stiffness degradation of the shear walls after cracking is considered. The applicability of the method to estimating the reliability of real structures is demonstrated by considering three examples: a laterally flexible steel frame with fully restrained connections, the same steel frame with partially restrained connections of different rigidities, and a steel frame reinforced with concrete shear walls.
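
Of the ingredients listed above, the first-order reliability method (FORM) piece is standard enough to sketch on its own: the classic Hasofer-Lind/Rackwitz-Fiessler iteration for the reliability index beta of a limit-state function g(u) in standard normal space. The quadratic g used here is an arbitrary stand-in for the paper's response-surface fit, not the authors' model.

```python
# HL-RF iteration for the FORM reliability index, with numerical gradients.
import numpy as np

def form_beta(g, u0, n_iter=20, h=1e-6):
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(len(u))])    # central differences
        # HL-RF update: project onto the linearized limit-state surface.
        u = (grad @ u - g(u)) * grad / (grad @ grad)
    return np.linalg.norm(u)                          # reliability index beta

# Stand-in limit state g(u) = 3 - u1 - 0.5*u2^2 (failure when g < 0);
# the iteration converges to the design point (1, 2), beta = sqrt(5).
beta = form_beta(lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2, u0=[0.1, 0.1])
print(f"beta ~ {beta:.3f}")
```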

Implementation of Stopping Criterion Algorithm using Variance Values of LLR in Turbo Code (터보부호에서 LLR 분산값을 이용한 반복중단 알고리즘 구현)

  • Jeong Dae-Ho;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.9 s.351
    • /
    • pp.149-157
    • /
    • 2006
  • Turbo codes, a kind of error correction coding technique, have been used in digital mobile communication systems. As the number of iterations increases, a turbo code can achieve remarkable BER performance over an AWGN channel environment. However, in many channel environments, further iterations yield very little improvement while requiring delay and computation in proportion to the number of iterations. To solve this problem, it is necessary to devise an efficient criterion to stop the iteration process and prevent unnecessary delay and computation. This paper proposes an efficient and simple criterion for stopping the iteration process in turbo decoding. By using the variance values of the LLR in the turbo decoder, the proposed algorithm can largely reduce the average number of iterations without BER performance degradation in all SNR regions. Simulation results show that the average number of iterations in the upper SNR region is reduced by about 34.66~41.33% compared to the method using variance values of extrinsic information, and in the lower SNR region by about 13.93~14.45% compared to the CE algorithm and about 13.23~14.26% compared to the SDR algorithm.
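
An illustrative sketch of the stopping rule only: run decoding iterations and quit early once the sample variance of the a-posteriori LLRs crosses a threshold, large LLR spread indicating confident bit decisions. The decode_half_iteration() below is a placeholder that merely grows LLR magnitudes, not a real SISO decoder, and the threshold is arbitrary.

```python
# Variance-of-LLR early stopping around a placeholder turbo decoding step.
import numpy as np

def decode_half_iteration(llr, rng):
    # Placeholder: LLR magnitudes grow as iterations refine the decisions.
    return llr * 1.5 + rng.normal(0, 0.1, llr.shape)

def turbo_decode(channel_llr, max_iters=8, var_threshold=40.0, seed=1):
    rng = np.random.default_rng(seed)
    llr = channel_llr.copy()
    for it in range(1, max_iters + 1):
        llr = decode_half_iteration(llr, rng)
        if np.var(llr) > var_threshold:      # variance-based early stop
            break
    return (llr > 0).astype(int), it

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)
noisy_llr = (2 * bits - 1) * 2.0 + rng.normal(0, 1, 64)
decoded, used = turbo_decode(noisy_llr)
print(f"stopped after {used} iterations")
```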

A New SPW Scheme for PAPR Reduction in OFDM Systems by Using Genetic Algorithm (유전자 알고리즘을 적용한 SPW에 의한 새로운 OFDM 시스템 PAPR 감소 기법)

  • Kim Sung-Soo;Kim Myoung-Je;Kee Jong-Hae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.16 no.11 s.102
    • /
    • pp.1131-1137
    • /
    • 2005
  • An orthogonal frequency division multiplexing (OFDM) system suffers from a high peak-to-average power ratio (PAPR) due to the superposition of many sub-carriers. In order to improve the PAPR performance, we propose in this paper a new genetic sub-block phase weighting (GA-SPW) scheme based on the SPW technique. Selected mapping (SLM), partial transmit sequence (PTS), and the previously proposed SPW all become more effective as the number of sub-blocks and phase elements increases. However, all of them are limited in the number of sub-blocks, since the number of search repetitions grows exponentially with it. Therefore, in this research, GA-SPW is proposed to reduce the amount of computation by using a genetic algorithm (GA). In the proposed method, the number of calculations involved in the iterative phase search depends on the population size and the number of generations, not on the number of sub-blocks and phase elements. The superiority of the proposed method is demonstrated by experimental results and analysis.
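
A hedged sketch of the GA phase search described above: chromosomes are per-sub-block phase choices from {+1, -1, +j, -j}, and the fitness is the PAPR of the combined time-domain signal. The sub-block count, carrier count, and GA settings are illustrative, and the selection/mutation operators are deliberately simplified relative to the paper's.

```python
# GA search over sub-block phase weights to reduce OFDM PAPR.
import numpy as np

PHASES = np.array([1, -1, 1j, -1j])

def papr(x):
    p = np.abs(x) ** 2
    return np.max(p) / np.mean(p)

def ga_spw(subblocks, pop=20, gens=50, seed=0):
    """subblocks: (V, N) array of per-sub-block time-domain signals."""
    rng = np.random.default_rng(seed)
    V = subblocks.shape[0]
    popn = rng.integers(0, 4, (pop, V))            # phase index per sub-block
    fitness = lambda c: papr(PHASES[c] @ subblocks)
    for _ in range(gens):
        popn = popn[np.argsort([fitness(c) for c in popn])]
        children = popn[: pop // 2].copy()         # keep the better half
        k = rng.integers(0, V, len(children))      # single-gene mutation
        children[np.arange(len(children)), k] = rng.integers(0, 4, len(children))
        popn = np.vstack([popn[: pop // 2], children])
    return popn[0]

# Partition a 64-carrier symbol into 4 contiguous sub-blocks (illustrative).
rng = np.random.default_rng(1)
N, V = 64, 4
Xfull = rng.choice([1, -1], N)
X = np.zeros((V, N), dtype=complex)
for v in range(V):
    X[v, v * (N // V):(v + 1) * (N // V)] = Xfull[v * (N // V):(v + 1) * (N // V)]
sub = np.fft.ifft(X, axis=1)                       # per-sub-block time signals
best = ga_spw(sub)
print(f"PAPR before: {10 * np.log10(papr(sub.sum(0))):.2f} dB, "
      f"after: {10 * np.log10(papr(PHASES[best] @ sub)):.2f} dB")
```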

Implementation of High-Throughput SHA-1 Hash Algorithm using Multiple Unfolding Technique (다중 언폴딩 기법을 이용한 SHA-1 해쉬 알고리즘 고속 구현)

  • Lee, Eun-Hee;Lee, Je-Hoon;Jang, Young-Jo;Cho, Kyoung-Rok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.4
    • /
    • pp.41-49
    • /
    • 2010
  • This paper proposes a new high-speed SHA-1 architecture using multiple unfolding and pre-computation techniques. We unfold the iterative hash operations into two consecutive hash stages and reschedule the computation timing, so that part of the critical path is computed in the previous hash round and the rest in the present round. These techniques reduce the critical path from three additions to two, yielding a maximum clock frequency of 118 MHz and a throughput of 5.9 Gbps. The proposed architecture shows 26% higher throughput with a 32% smaller hardware size compared to its counterparts. This paper also introduces an analytical system-level model of a multiple-SHA-1 architecture that maps large input data onto SHA-1 blocks in parallel. The model gives the number of SHA-1 blocks required for processing large multimedia data, which helps in deciding the hardware configuration. The high-speed SHA-1 is useful for generating condensed message digests and may strengthen the security of mobile communication and Internet services.
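
A software model of the unfolding idea, not the pipelined RTL: the 80 SHA-1 rounds are processed two per loop iteration, mirroring how an unfolded datapath computes two hash stages per clock. The round logic is the standard FIPS 180-1 compression function, checked here against the well-known digest of "abc".

```python
# SHA-1 compression with an unfolding factor of 2 (two rounds per loop pass).
import struct

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_compress(h, block):
    w = list(struct.unpack(">16I", block))
    for t in range(16, 80):                    # message schedule expansion
        w.append(rotl(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1))
    a, b, c, d, e = h
    def fk(t, b, c, d):                        # round function and constant
        if t < 20: return (b & c) | (~b & d), 0x5A827999
        if t < 40: return b ^ c ^ d, 0x6ED9EBA1
        if t < 60: return (b & c) | (b & d) | (c & d), 0x8F1BBCDC
        return b ^ c ^ d, 0xCA62C1D6
    for t in range(0, 80, 2):                  # unfolding factor 2
        f, k = fk(t, b, c, d)                  # round t
        tmp = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF
        a, b, c, d, e = tmp, a, rotl(b, 30), c, d
        f, k = fk(t + 1, b, c, d)              # round t+1, same "clock"
        tmp = (rotl(a, 5) + f + e + k + w[t + 1]) & 0xFFFFFFFF
        a, b, c, d, e = tmp, a, rotl(b, 30), c, d
    return [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]

# One-block message "abc" with standard SHA-1 padding:
msg = b"abc" + b"\x80" + b"\x00" * 52 + struct.pack(">Q", 24)
H0 = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
print("".join(f"{v:08x}" for v in sha1_compress(H0, msg)))
# expected: a9993e364706816aba3e25717850c26c9cd0d89d
```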