• Title/Summary/Keyword: iterative process


Application of Ultrasound Tomography for Non-Destructive Testing of Concrete Structure (초음파 tomography를 응용한 콘크리트 구조물의 비파괴 시험에 관한 연구)

  • Kim, Young-Ki;Yoon, Young-Deuk;Yoon, Chong-Yul;Kim, Jung-Soo;Kim, Woon-Kyung;Song, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.1
    • /
    • pp.27-36
    • /
    • 2000
  • As a potential approach for non-destructive testing of concrete structures, we evaluate the time-of-flight (TOF) ultrasound tomography technique. In conventional X-ray tomography, the reconstructed image corresponds to the internal attenuation coefficient, whereas in TOF ultrasound tomography the reconstructed image is proportional to the refractive index of the medium. Because refractive effects are minimal for X-rays, conventional reconstruction techniques can be applied directly in X-ray tomography. Ultrasound, however, travels along curved paths due to spatial variations in the refractive index of the medium, so the paths must be known to reconstruct the image correctly. An algorithm for determining the ultrasound path is developed from a geometrical-optics point of view, and because the paths are curved, the image reconstruction requires an algebraic approach, namely ART or SIRT. Here, the difference between the computed and the measured TOF data serves as the basis for the iteration process. First, an initial image is reconstructed assuming straight paths; the paths are then updated based on the most recently reconstructed image, and this alternation of reconstruction and path determination repeats until convergence. The proposed algorithm is evaluated by computer simulations and is also applied to a real concrete structure.

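The algebraic reconstruction step described above can be sketched with a minimal Kaczmarz-style ART solver. This is an illustrative stand-in, not the paper's implementation: the path-length matrix `A` is assumed to come from the (straight or curved) ray tracing step, and the relaxation factor is a generic choice.

```python
import numpy as np

def art_reconstruct(A, t, n_iter=50, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz sweeps) for TOF data.

    A: (n_rays, n_pixels) path-length matrix from ray tracing
       (straight rays on the first pass, curved rays afterwards).
    t: (n_rays,) measured time-of-flight values.
    Returns a slowness image x with A @ x approximately equal to t.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                # Project the current estimate onto the i-th ray equation,
                # using the computed-vs-measured TOF residual as the driver.
                x += relax * (t[i] - ai @ x) / denom * ai
    return x
```

In the full scheme of the abstract, this solve would alternate with a geometrical-optics path update until the reconstruction converges.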

Implementation of Stopping Criterion Algorithm using Variance Values of LLR in Turbo Code (터보부호에서 LLR 분산값을 이용한 반복중단 알고리즘 구현)

  • Jeong Dae-Ho;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.9 s.351
    • /
    • pp.149-157
    • /
    • 2006
  • Turbo codes, a class of error correction codes, have been used in digital mobile communication systems. As the number of iterations increases, turbo decoding can achieve remarkable BER performance over the AWGN channel. However, beyond a certain point, further iterations yield very little improvement while the delay and computation grow in proportion to the number of iterations. To solve this problem, an efficient criterion is needed to stop the iteration process and prevent unnecessary delay and computation. This paper proposes an efficient and simple criterion for stopping the iteration process in turbo decoding. By using the variance of the LLR values in the turbo decoder, the proposed algorithm can greatly reduce the average number of iterations without BER performance degradation in all SNR regions. Simulation results show that the average number of iterations in the upper SNR region is reduced by about $34.66%{\sim}41.33%$ compared to the method using the variance of the extrinsic information, and in the lower SNR region by about $13.93%{\sim}14.45%$ compared to the CE algorithm and about $13.23%{\sim}14.26%$ compared to the SDR algorithm.
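The stopping rule above can be sketched in a few lines. This is a hedged illustration only: the threshold value and the `llr_iterates` interface are assumptions for demonstration, not the paper's tuned parameters, and a real decoder would produce each LLR vector from a constituent MAP decoding pass.

```python
import numpy as np

def should_stop(llr, threshold=16.0):
    """Stop once the variance of the a-posteriori LLRs exceeds a
    threshold, indicating the decoder has become confident.
    The threshold here is illustrative, not the paper's value."""
    return np.var(llr) > threshold

def decode_with_stop(llr_iterates, max_iter=8):
    """Sketch of the decoding loop: llr_iterates stands in for the LLR
    vector produced after each turbo iteration."""
    for it, llr in enumerate(llr_iterates[:max_iter], start=1):
        if should_stop(llr):
            return it  # stop early, saving the remaining iterations
    return max_iter
```

The design point is that a single variance computation per iteration is far cheaper than the iterations it avoids.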

A Survey of Elementary school teachers' perceptions of mathematics instruction (수학수업에 대한 초등교사의 인식 조사)

  • Kwon, Sungyong
    • Education of Primary School Mathematics
    • /
    • v.20 no.4
    • /
    • pp.253-266
    • /
    • 2017
  • The purpose of this study was to investigate elementary school teachers' perceptions of mathematics instruction. Seven test items were developed to gather data on teachers' perceptions, and 73 teachers taking mathematics lesson analysis lectures were surveyed. Since the data obtained were all qualitative, they were analyzed through coding, with similar responses grouped into the same category. Several findings emerged. First, when teachers thought about 'mathematics', the first words that came to mind were 'calculation', 'difficult', and 'logic'. Teachers need positive views of mathematics and mathematics learning, and this should be stressed in teacher education and retraining. Second, the most common reason given for mathematics being an important subject was 'because it is related to real life', followed by 'because it develops logical thinking ability' and 'because it develops mathematical thinking ability'. These ideas relate to the mind-cultivating and practical values of mathematics; for students to appreciate the various values of mathematics, teachers must understand them first. Third, the reasons given for why elementary school students dislike and struggle with mathematics included 'because teachers demand thinking', 'because they repeat simple calculations', 'children hate complicated things', 'it is bothersome', 'because mathematics itself is difficult', 'the level of the curriculum and textbooks is high', and 'the amount of time and activity is too much'. These problems may be alleviated by implementation of the revised 2015 national curriculum, which emphasizes core competence and process-based evaluation including mathematical processes.
Fourth, the most common reasons cited for the failure of elementary school mathematics instruction were 'because the process was difficult' and 'because of results-based evaluation'. 'Results-oriented evaluation', 'repetitive calculation', 'rote education', 'failure to consider level differences', and 'lack of concept- and principle-centered education' were also mentioned as failure factors. Most of these factors can be addressed by improving and changing teachers' teaching practice. Fifth, responses about what desirable mathematics instruction looks like included 'classes related to real life', 'easy and fun mathematics lessons', and 'classes emphasizing understanding of principles'. It is therefore necessary to treat these topics in depth in training courses aimed at improving teaching practice, and to support not only one-off training but also the continuous professional development of teachers.

Space-Time Concatenated Convolutional and Differential Codes with Interference Suppression for DS-CDMA Systems (간섭 억제된 DS-CDMA 시스템에서의 시공간 직렬 연쇄 컨볼루션 차등 부호 기법)

  • Yang, Ha-Yeong;Sin, Min-Ho;Song, Hong-Yeop;Hong, Dae-Sik;Gang, Chang-Eon
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.39 no.1
    • /
    • pp.1-10
    • /
    • 2002
  • A space-time concatenated convolutional and differential coding scheme is employed in a multiuser direct-sequence code-division multiple-access (DS-CDMA) system. The system uses single-user detectors (SUDs) to suppress multiple-access interference (MAI) without requiring other users' spreading codes, timing, or phase information. The space-time differential code, treated as a convolutional code of rate 1 and memory 1, does not sacrifice coding efficiency and has the smallest number of states; in addition, it brings a diversity gain through space-time processing with a simple decoding process. The iterative process exchanges information between the differential decoder and the convolutional decoder. Numerical results show that this space-time concatenated coding scheme provides better performance and more flexibility than conventional convolutional codes in DS-CDMA systems, even at comparable complexity. Further study shows that the performance of this coding scheme applied to DS-CDMA systems with SUDs improves as the processing gain or the number of taps of the interference suppression filter increases, and degrades with higher near-far interfering power or additional near-far interfering users.
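The "rate 1, memory 1" view of differential coding can be made concrete with a minimal binary sketch (the paper's code operates on space-time symbols; binary XOR is used here purely as the simplest instance of the same recursive structure):

```python
def diff_encode(bits):
    """Differential encoding as a rate-1, memory-1 convolutional code:
    each output is the XOR of the input with the previous output."""
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

def diff_decode(coded):
    """Inverse operation: XOR of each coded symbol with its predecessor."""
    out, prev = [], 0
    for c in coded:
        out.append(c ^ prev)
        prev = c
    return out
```

Because the code has memory 1, its trellis has the least possible number of states, which is what keeps the iterative exchange with the outer convolutional decoder cheap.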

Gaussian Noise Reduction Method using Adaptive Total Variation : Application to Cone-Beam Computed Tomography Dental Image (적응형 총변이 기법을 이용한 가우시안 잡음 제거 방법: CBCT 치과 영상에 적용)

  • Kim, Joong-Hyuk;Kim, Jung-Chae;Kim, Kee-Deog;Yoo, Sun-K.
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.29-38
    • /
    • 2012
  • Noise generated in the process of acquiring medical images obstructs image interpretation and diagnosis. To restore the true image from a noise-corrupted one, the total variation optimization algorithm was proposed by Rudin, Osher, and Fatemi (the ROF model). This method removes noise by balancing a regularity term against a fidelity term. However, blurring of boundary regions during the iterative computation cannot be avoided. In this paper, we propose an adaptive total variation method that maps the control parameter through a proposed transfer function to minimize boundary error. The transfer function is determined by the noise variance and the local properties of the image. The proposed method was applied to 464 dental images. To evaluate its performance, PSNR, an indicator of the ratio of signal power to noise power, was used. The experimental results show that the proposed method outperforms the other methods.
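A minimal gradient-descent sketch of ROF-style TV denoising with a spatially varying fidelity weight is shown below. Note the hedges: `lam_map` is supplied directly here, whereas the paper derives it from a transfer function of noise variance and local image properties, and the step size and iteration count are generic choices.

```python
import numpy as np

def adaptive_tv_denoise(img, lam_map, n_iter=100, dt=0.1, eps=1e-6):
    """Total-variation denoising by explicit gradient descent on the
    ROF energy, with a per-pixel fidelity weight lam_map standing in
    for the paper's adaptive control parameter."""
    u = img.copy()
    for _ in range(n_iter):
        # Forward differences of the current estimate.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # Divergence of the normalized gradient (curvature term).
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Smooth, while lam_map pulls u back toward the observed image;
        # larger lam_map near edges would reduce boundary blurring.
        u += dt * (div - lam_map * (u - img))
    return u
```

The adaptivity is the whole point: a constant weight reproduces plain ROF, while raising the weight near boundaries trades smoothing for edge fidelity.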

Implementation of Stopping Criterion Algorithm using Sign Change Ratio for Extrinsic Information Values in Turbo Code (터보부호에서 외부정보에 대한 부호변화율을 이용한 반복중단 알고리즘 구현)

  • Jeong Dae-Ho;Shim Byong-Sup;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.7 s.349
    • /
    • pp.143-149
    • /
    • 2006
  • Turbo codes, a class of error correction codes, have been used in digital mobile communication systems. As the number of iterations increases, turbo decoding can achieve remarkable BER performance over the AWGN channel. However, beyond a certain point, further iterations yield very little improvement while the delay and computation grow in proportion to the number of iterations. To solve this problem, an efficient criterion is needed to stop the iteration process and prevent unnecessary delay and computation. This paper proposes an efficient and simple criterion for stopping the iteration process in turbo decoding. By using the sign change ratio of the extrinsic information values in the turbo decoder, the proposed algorithm can greatly reduce the average number of iterations without BER performance degradation. Simulation results show that the average number of iterations is reduced by about $12.48%{\sim}22.22%$ compared to the CE algorithm and about $20.43%{\sim}54.02%$ compared to the SDR algorithm.
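The sign-change-ratio statistic itself is simple to state in code. This sketch assumes access to the extrinsic information vectors of two consecutive iterations; the stopping threshold is illustrative, not the paper's tuned value.

```python
import numpy as np

def sign_change_ratio(ext_prev, ext_curr):
    """Fraction of extrinsic-information values whose sign flipped
    between two consecutive turbo iterations."""
    return float(np.mean(np.sign(ext_prev) != np.sign(ext_curr)))

def scr_stop(ext_prev, ext_curr, threshold=0.005):
    """Hypothetical stopping rule: halt when almost no signs change,
    i.e. the hard decisions have effectively stabilized."""
    return sign_change_ratio(ext_prev, ext_curr) <= threshold
```

Intuitively, once the signs of the extrinsic values stop flipping, the hard decisions are frozen and further iterations cannot change the decoded word.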

An Algorithm for Optimized Accuracy Calculation of Hull Block Assembly (선박 블록 조립 후 최적 정도 계산을 위한 알고리즘 연구)

  • Noh, Jac-Kyou
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.19 no.5
    • /
    • pp.552-560
    • /
    • 2013
  • In this paper, an optimization algorithm for block assembly accuracy assessment is proposed, taking into account the block assembly process and accuracy control procedures currently used at shipbuilding sites. The objective function of the proposed algorithm is the root-mean-square error of the distances between the design and measured data of the control points, computed with respect to a specific reference point among the control points. The control points are divided into two groups: points on the control line and the remaining points. The grouped data serve as criteria for determining the combination of the six degrees of freedom in the registration process when constituting constraints and evaluating the objective function. The optimization algorithm combines a sampling method with a point-to-point modified ICP algorithm that includes an allowable-error check, ensuring that the error between design and measured points stays within the allowable tolerance. Applying the proposed algorithm to the design and measured data of two blocks, verified and validated by an expert at the shipbuilding site, shows that choosing all control points as targets for the accuracy calculation gives better results than choosing only the points on the control line, and that the best optimized result is obtained when a fixed point on the control line is used as the reference point of the registration.
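The inner computation of an ICP-style registration, alignment with known correspondences plus an RMSE objective, can be sketched as follows. This is a generic Kabsch/SVD alignment, not the paper's modified ICP with its sampling and allowable-error checks; function names and the 3-D point layout are assumptions.

```python
import numpy as np

def rigid_fit(design, measured):
    """Find the rotation R and translation t that best map the measured
    points onto the design points (correspondences assumed known), via
    the SVD-based Kabsch solution -- one inner step of an ICP loop."""
    cd, cm = design.mean(axis=0), measured.mean(axis=0)
    H = (measured - cm).T @ (design - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cm
    return R, t

def rmse(design, measured, R, t):
    """Root-mean-square distance between design points and the
    transformed measured points -- the accuracy objective."""
    aligned = measured @ R.T + t
    return np.sqrt(np.mean(np.sum((design - aligned) ** 2, axis=1)))
```

In the full algorithm of the abstract, this 6-degree-of-freedom solve would be constrained by the control-line grouping and repeated until the allowable-error check passes.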

A Study on Efficient Deconstruction of Supporters with Response Ratio (응답비를 고려한 효율적인 버팀보 해체방안에 관한연구)

  • Choi, Jung-Youl;Park, Sang-Wook;Chung, Jee-Seung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.469-475
    • /
    • 2022
  • As recent structures are built as large-scale, deep underground excavations in close proximity to existing buildings, the installation of retaining walls and supporters (struts) has become complicated, and the number of supporters needed to avoid interference with the structural slab has increased. This construction process increases the number of construction joints in a structure, causing leakage and more wall cracks; it also reduces the durability and workability of the structure and lengthens the construction period. This study planned to dismantle two struts simultaneously as a way to reduce construction joints, and corrected the earth pressure by treating the ratio of the measured data to the reaction force value from the initial earth pressure as the response ratio. After recalculating the corrected earth pressure through an iterative trial method, it was verified by numerical analysis that simultaneous dismantling of the two struts was possible. Numerical analysis applying the final corrected earth pressure showed that the measured value reached up to 197% of the design reaction force; this was attributed to the effect of grouting on the ground and some underestimation of the ground characteristics during design. Based on the corrected earth pressure calculated in consideration of the response ratio, it was shown analytically that the strut dismantling process can be improved. In addition, reducing construction joints should lessen leakage-induced cracking, improve workability, and shorten the overall construction period. However, to apply the proposed method, careful review is necessary, since ground conditions, temporary facilities, and reinforcement methods differ from site to site.
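The iterative trial correction can be caricatured as a fixed-point loop: scale the earth pressure by the response ratio, update the predicted reaction, and repeat until the scaling stabilizes. This is a loud simplification: the real workflow re-runs a structural model each trial, whereas here a proportional stand-in plays that role, and all names and numbers are illustrative.

```python
def correct_earth_pressure(design_pressure, measured_reaction,
                           design_reaction, n_iter=20, tol=1e-6):
    """Illustrative iterative trial correction of earth pressure using
    the response ratio (measured reaction / predicted reaction)."""
    p = design_pressure
    for _ in range(n_iter):
        ratio = measured_reaction / design_reaction
        p_new = p * ratio
        # Stand-in for re-running the structural model: with a purely
        # proportional model, the predicted reaction scales with the
        # applied pressure.
        design_reaction = design_reaction * (p_new / p)
        if abs(p_new - p) < tol * abs(p):
            return p_new
        p = p_new
    return p
```

With the proportional stand-in the loop converges in two trials; a real soil-structure model would need the full iteration because the reaction responds nonlinearly to the pressure update.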

Implicit Numerical Integration of Two-surface Plasticity Model for Coarse-grained Soils (Implicit 수치적분 방법을 이용한 조립토에 관한 구성방정식의 수행)

  • Choi, Chang-Ho
    • Journal of the Korean Geotechnical Society
    • /
    • v.22 no.9
    • /
    • pp.45-59
    • /
    • 2006
  • The successful performance of any numerical geotechnical simulation depends on the accuracy and efficiency of the numerical implementation of the constitutive model used to simulate the stress-strain (constitutive) response of the soil. The cornerstone of such implementations is the numerical integration of the incremental form of the soil-plasticity constitutive equations over a discrete sequence of time steps. In this paper, a well-known two-surface soil plasticity model is implemented using a generalized implicit return-mapping algorithm for arbitrary convex yield surfaces, referred to as the Closest-Point-Projection Method (CPPM). The two-surface model describes the nonlinear behavior of coarse-grained materials by incorporating a bounding-surface concept together with isotropic and kinematic hardening, as well as a fabric formulation to account for the effect of fabric formation on the unloading response. In the course of investigating the performance of the CPPM, it is shown that the algorithm is an accurate, robust, and efficient integration technique useful in finite element contexts. It is also shown that the algorithm produces a consistent tangent operator $\frac{d\sigma}{d\varepsilon}$ during the iterative process, with a quadratic convergence rate of the global iteration process.
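The structure of an implicit (backward-Euler) return mapping and its consistent tangent can be seen in the simplest possible setting. The sketch below is deliberately not the paper's two-surface model: it is 1-D plasticity with linear isotropic hardening, where the local Newton solve for the plastic multiplier closes in one step; the material constants are assumed values.

```python
def return_map_1d(eps_trial, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """Closest-point-projection (backward-Euler) return mapping for 1-D
    rate-independent plasticity with linear isotropic hardening.
    Returns updated stress, plastic strain, hardening variable, and the
    consistent (algorithmic) tangent d(sigma)/d(eps)."""
    sig_trial = E * (eps_trial - eps_p)          # elastic predictor
    f_trial = abs(sig_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:                           # elastic step
        return sig_trial, eps_p, alpha, E
    # Plastic corrector: project the trial stress back onto the
    # (hardened) yield surface.  For linear hardening the plastic
    # multiplier has a closed form.
    dgamma = f_trial / (E + H)
    sign = 1.0 if sig_trial >= 0.0 else -1.0
    sigma = sig_trial - E * dgamma * sign
    eps_p += dgamma * sign
    alpha += dgamma
    tangent = E * H / (E + H)                    # consistent tangent
    return sigma, eps_p, alpha, tangent
```

Returning the consistent tangent alongside the stress is what preserves the quadratic convergence of the global Newton iteration that the abstract highlights.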

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). The HGA uses a genetic algorithm (GA) as its main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to create the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid aspect, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets; of these, exactly one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the reverse logistics network. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers with opening costs of 10.5, 12.1, and 8.9, respectively, and only collection center 1 is opened, then the fixed cost is 10.5.
The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. The competing approach is the GA approach of Yun (2013), which lacks any local search technique such as the IHCM used in the HGA. As performance measures, CPU time, optimal solution, and optimal setting are used. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the HGA and GA approaches. The MIP models for the two RLNCC types are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: 10,000 generations, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches. Through performance comparisons, network representations by opening/closing decisions, and convergence processes on the two RLNCC types, the experimental results show that the HGA finds significantly better optimal solutions than the GA, though the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on both RLNCC types, since the former combines a GA search with an additional local search process, while the latter relies on the GA search alone.
In future work, much larger RLNCC instances will be tested to assess the robustness of the approach.
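The hybrid structure of the abstract, a bit-string GA whose elite individual is refined by hill climbing each generation, can be sketched compactly. This is a generic toy, not the paper's implementation: it omits the MIP-specific decoding of bit strings into opening/closing decisions, and the parameter values and single-bit-flip neighborhood are assumptions.

```python
import random

def hybrid_ga(fitness, n_bits, pop_size=20, generations=30,
              cx_rate=0.5, mut_rate=0.1, seed=0):
    """Bit-string GA (selection, two-point crossover, random mutation)
    hybridized with a hill-climbing local search on the elite
    individual -- a simplified analogue of GA + IHCM."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def hill_climb(ind):
        # Greedy pass over single-bit flips, keeping each improvement.
        best, best_f = ind[:], fitness(ind)
        for i in range(n_bits):
            cand = best[:]
            cand[i] ^= 1
            if (f := fitness(cand)) > best_f:
                best, best_f = cand, f
        return best

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop[0] = hill_climb(pop[0])        # local search on the elite
        nxt = pop[:2]                      # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)
            c = p1[:]
            if rng.random() < cx_rate:     # two-point crossover
                a, b = sorted(rng.sample(range(n_bits), 2))
                c[a:b] = p2[a:b]
            for i in range(n_bits):        # random mutation
                if rng.random() < mut_rate:
                    c[i] ^= 1
            nxt.append(c)
        pop = nxt
    return max(pop, key=fitness)
```

The division of labor mirrors the abstract's conclusion: the GA explores globally, while the hill climber polishes the region the GA has converged to, which is why the hybrid beats the plain GA on solution quality at a modest CPU-time cost.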