• Title/Summary/Keyword: 반복연산 (iterative computation)

Search Result 503, Processing Time 0.027 seconds

The Properties of a Nonlinear Direct Spectrum Method for Estimating the Seismic Performance (내진성능평가를 위한 비선형 직접스펙트럼법의 특성)

  • 강병두;김재웅
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.6 no.4
    • /
    • pp.65-73
    • /
    • 2002
  • It has been recognized that damage control must become a more explicit design consideration. In the effort to develop performance-based design methods, it is clear that evaluation of the nonlinear response is required. The methods available to the design engineer today are nonlinear time-history analysis, monotonic static nonlinear analysis, and equivalent static analysis with simulated nonlinear influences. Some building codes propose the capacity spectrum method, based on nonlinear static (pushover) analysis, to determine the earthquake-induced demand from the structure's pushover curve. These procedures are conceptually simple but iterative, time-consuming, and prone to some error. This paper presents a nonlinear direct spectrum method (NDSM) that evaluates the seismic performance of structures without iterative computation, given the structure's initial elastic period and the yield strength from pushover analysis, especially for MDF (multi-degree-of-freedom) systems. The purpose of this paper is to investigate the accuracy and reliability of this method across various earthquakes and unloading-stiffness degradation parameters. The conclusions of this study are as follows: 1) NDSM is considered practical because the peak deformations of nonlinear MDF systems computed by NDSM are nearly equal to the results of nonlinear time-history analysis (NTHA) for various ground motions. 2) When the results of NDSM are compared with those of NTHA, the mean error is smallest for a post-yielding stiffness factor of 0.1, static force by MAD (modal adaptive distribution), and an unloading-stiffness degradation factor of 0.2~0.3.
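As a point of reference, the NTHA baseline that NDSM is compared against can be sketched for the simplest case: a linear-elastic SDOF oscillator integrated with the average-acceleration Newmark scheme. All values are illustrative, not from the paper; the paper's analyses add bilinear hysteresis, MDF systems, and recorded ground motions.

```python
import numpy as np

def newmark_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of a linear-elastic SDOF
    oscillator m*u'' + c*u' + k*u = -m*ag(t); returns displacement history."""
    ag = np.asarray(ag, dtype=float)
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * ag                                   # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_hat = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    A = m / (beta * dt) + gamma * c / beta
    B = m / (2 * beta) + dt * (gamma / (2 * beta) - 1.0) * c
    for i in range(n - 1):
        dp = p[i + 1] - p[i] + A * v[i] + B * a[i]
        du = dp / k_hat
        dv = (gamma / (beta * dt)) * du - (gamma / beta) * v[i] \
             + dt * (1.0 - gamma / (2.0 * beta)) * a[i]
        da = du / (beta * dt * dt) - v[i] / (beta * dt) - a[i] / (2.0 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u

# Illustrative check: a step in ground acceleration drives an undamped
# oscillator (T = 1 s here) to twice its static displacement, 2*(m/k).
u = newmark_sdof(m=1.0, c=0.0, k=4 * np.pi**2, ag=-np.ones(1001), dt=0.001)
```

The average-acceleration variant (beta = 1/4, gamma = 1/2) is unconditionally stable, which is why it is a common NTHA workhorse.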

Improvement of LMS Algorithm Convergence Speed with Updating Adaptive Weight in Data-Recycling Scheme (데이터-재순환 구조에서 적응 가중치 갱신을 통한 LMS 알고리즘 수렴 속도 개선)

  • Kim, Gwang-Jun;Jang, Hyok;Suk, Kyung-Hyu;Na, Sang-Dong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.9 no.4
    • /
    • pp.11-22
    • /
    • 1999
  • Least-mean-square (LMS) adaptive filters have proven extremely useful in a number of signal-processing tasks. However, the LMS adaptive filter suffers from a slow rate of convergence for a given steady-state mean-square error compared with the recursive least-squares adaptive filter. In this paper, an efficient signal-interference control technique is introduced to improve the convergence speed of the LMS algorithm: the tap-weight vectors are updated by reusing data that would otherwise be discarded by the adaptive transversal filter, held in data-recycling buffers. Computer simulation shows that, in the experimentally computed learning curve, the proposed algorithm converges faster and reaches a lower MSE than the conventional LMS as the step-size parameter $\mu$ increases. We also find that the convergence speed of the proposed algorithm increases by a factor of (B+1), proportional to the number B of recycled-data buffers, without added computational complexity. An adaptive transversal filter with the proposed data-recycling buffer algorithm can efficiently reject channel ISI and increase convergence speed while avoiding any burden of computational complexity in practice, as confirmed in experiments under the same conditions as the LMS algorithm.
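A minimal sketch of LMS system identification, with a `recycle` knob that loosely imitates reusing buffered samples; this illustrates the general idea only, not the paper's exact data-recycling scheme, and the channel taps are hypothetical.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu, recycle=0):
    """Plain LMS system identification. `recycle` re-applies each update on
    the same sample pair a few extra times -- a loose sketch of the
    data-recycling idea, not the paper's exact buffer scheme."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]     # tap-delay-line vector
        for _ in range(recycle + 1):
            e = d[i] - w @ u                  # a-priori error
            w += mu * e * u                   # stochastic-gradient update
    return w

# Hypothetical 3-tap channel to identify (not from the paper).
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]               # noiseless desired signal
w = lms_identify(x, d, n_taps=3, mu=0.01)    # w converges toward h
```

With noiseless data the weight vector converges to the channel taps; raising `recycle` applies more updates per sample, which is the intuition behind the (B+1)-fold speed-up claim.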

High Performance Hardware Implementation of the 128-bit SEED Cryptography Algorithm (128비트 SEED 암호 알고리즘의 고속처리를 위한 하드웨어 구현)

  • 전신우;정용진
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.11 no.1
    • /
    • pp.13-23
    • /
    • 2001
  • This paper presents a hardware implementation of SEED, the Korean standard 128-bit block cipher. First, from the standpoint of hardware implementation, we compared and analyzed SEED against the AES finalist algorithms MARS, RC6, RIJNDAEL, SERPENT, and TWOFISH, all secret-key block ciphers. SEED encrypts faster than MARS, RC6, and TWOFISH, but about five times slower than RIJNDAEL, the fastest. We propose a SEED hardware architecture that improves encryption speed. Because SEED repeatedly executes the same operation 16 times, we divided one round into three parts (a J1 function block, a J2 function block, and a J3 function block including the key-mixing block) and pipelined them to make encryption faster. The G-function is implemented simply by XORing four extended 4-byte SS-boxes. We verified the design on an ALTERA FPGA using Verilog HDL. When synthesized with a 0.5 um Samsung standard-cell library, ECB encryption and ECB, CBC, and CFB decryption, which can be pipelined, take 50 clock cycles to encrypt 384 bits of plaintext, giving 745.6 Mbps at an assumed 97.1 MHz clock frequency. CBC, OFB, and CFB encryption and OFB decryption, which cannot be pipelined, achieve 258.9 Mbps under the same conditions.
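The quoted throughput figures can be sanity-checked with simple arithmetic; note that the 48-cycle count for the non-pipelined modes is our inference (16 rounds of 3 pipeline stages), not a figure stated in the abstract.

```python
# Back-of-the-envelope check of the quoted throughput numbers.
clk_mhz = 97.1                          # assumed clock frequency (MHz)

pipelined_mbps = 384 / 50 * clk_mhz     # 384 bits every 50 cycles (pipelined)
serial_mbps = 128 / 48 * clk_mhz        # one 128-bit block per ~48 cycles (inferred)

print(round(pipelined_mbps, 1), round(serial_mbps, 1))  # 745.7 258.9
```

The pipelined figure lands within ~0.1 Mbps of the reported 745.6 Mbps, and the serial figure matches 258.9 Mbps, consistent with a 16-round, 3-stage structure.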

How to Generate Lightweight S-Boxes by Using AND Gate Accumulation (AND 연산자 축적을 통한 경량 S-boxes 생성방법)

  • Jeon, Yongjin;Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.3
    • /
    • pp.465-475
    • /
    • 2022
  • Owing to the impact of COVID-19, people are paying more attention to convenience and health, and the use of IoT devices that support them is increasing. To embed lightweight security in IoT devices that must handle sensitive information with limited resources, the development of lightweight S-boxes is essential. Until 2021, it was common to develop lightweight 4-bit S-boxes heuristically, and to build larger lightweight S-boxes from extended structures or by repeating the same operation. In January 2022, however, a paper was published proposing a heuristic algorithm that finds 8-bit S-boxes which, although non-bijective, have better differential uniformity and linearity than S-boxes generated with the MISTY extended structure [1]. The heuristic algorithm proposed in that paper generates an S-box by adding AND operations one at a time. Whenever an AND operation is added, candidate S-boxes whose computed differential uniformity does not reach the desired criterion are pruned in advance. In this paper, we improve the performance of this heuristic algorithm: by increasing the amount of pruning using not only differential uniformity but also another differential property, and by adding a linearity computation to the pruning step, the algorithm can satisfy not only differential security criteria but linear ones as well.
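The two pruning criteria named above, differential uniformity and linearity, can be computed by brute force for small S-boxes. A minimal sketch, using the well-known PRESENT 4-bit S-box purely as a test vector (it is not from the paper):

```python
def differential_uniformity(sbox, n):
    """Largest DDT entry over nonzero input differences (lower is better)."""
    size = 1 << n
    worst = 0
    for a in range(1, size):
        ddt_row = [0] * size
        for x in range(size):
            ddt_row[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(ddt_row))
    return worst

def linearity(sbox, n):
    """Max |Walsh coefficient| over nonzero output masks (lower is better)."""
    size = 1 << n
    parity = lambda v: bin(v).count("1") & 1
    worst = 0
    for b in range(1, size):
        for a in range(size):
            w = sum(1 - 2 * (parity(a & x) ^ parity(b & sbox[x]))
                    for x in range(size))
            worst = max(worst, abs(w))
    return worst

# PRESENT's 4-bit S-box, an optimal S-box used here only as a known test vector.
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
```

For PRESENT these return 4 and 8 respectively, the optimal values for a bijective 4-bit S-box; a pruning heuristic like the one described would discard candidates whose partial DDT already exceeds the target.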

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.29-41
    • /
    • 2022
  • Recently, with the development of deep-learning technology, a variety of huge models with excellent performance have been devised by pre-training on massive amounts of text data. However, for such a model to be applied in real-world services, inference must be fast and the amount of computation low, so model-compression technology is attracting attention. Knowledge distillation, a representative model-compression technique, is drawing interest because it transfers knowledge already learned by a teacher model to a relatively small student model and can be used in a variety of ways. However, knowledge distillation has a limitation: because the teacher model learns only the knowledge needed to solve the given problem and distills it to the student from the same point of view, problems with low similarity to the training data are difficult to solve. We therefore propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept rather than the knowledge required for the task the student model must solve, and then distills this knowledge to the student. In addition, through classification experiments on about 18,000 documents, we confirmed that heterogeneous knowledge distillation outperformed traditional knowledge distillation in both learning efficiency and accuracy.
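The traditional distillation baseline that the proposed method is compared against can be sketched as the standard Hinton-style soft-target loss; this is the generic formulation, not the paper's heterogeneous variant, and the temperature value is illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Hinton-style soft-target loss: T^2 * KL(teacher || student)."""
    p = softmax(teacher_logits, T)     # teacher's softened distribution
    q = softmax(student_logits, T)     # student's softened distribution
    return T * T * float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student reproduces the teacher's logits and grows as their softened distributions diverge; in practice it is mixed with an ordinary cross-entropy term on the hard labels.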

Comparison of Clinical Tissue Dose Distributions with Those Derived from the Energy Spectrum of a 15 MV X-ray Linear Accelerator Using Lead-Filter Transmission Dose (연(鉛)필터의 투과선량을 이용한 15 MV X선의 에너지스펙트럼 결정과 조직선량 비교)

  • Choi, Tae-Jin;Kim, Jin-Hee;Kim, Ok-Bae
    • Progress in Medical Physics
    • /
    • v.19 no.1
    • /
    • pp.80-88
    • /
    • 2008
  • Recent radiotherapy treatment-planning systems (RTPS) generally compute tissue dose with a kernel-beam convolution method. To obtain depth and profile doses for a given photon beam, the energy spectrum was reconstructed from the attenuation of dose transmitted through a filter, using iterative numerical analysis. The experiments were performed with 15 MV X-rays (Oncor, Siemens) and an ionization chamber (0.125 cc, PTW) to measure filter-transmitted dose. The energy spectrum of the 15 MV X-rays was determined from the dose attenuated by lead filters from 0.51 cm to 8.04 cm thick, with an energy interval of 0.25 MeV. In the results, the peak flux appeared at 3.75 MeV, and the mean energy of the 15 MV X-rays was 4.639 MeV in these experiments. The transmitted dose through the lead filters agreed within 0.6% on average, with a maximum discrepancy of 2.5% for the 5 cm lead filter. Since tissue dose depends strongly on energy, lateral doses were derived from the lateral spread of energy fluence through the flattening-filter shape at tangents of 0.075 and 0.125, which gave 4.211 MeV and 3.906 MeV, respectively. In these experiments, the analyzed energy spectrum was applied to obtain the percent depth dose in the RTPS (XiO, version 4.3.1, CMS). The generated percent depth doses for fields from $6{\times}6cm^2$ to $30{\times}30cm^2$ were very close to the experimental measurements, within 1% discrepancy on average. The computed dose profiles were within 1% of measurement for a $10{\times}10cm^2$ field, while large field sizes were within 2% uncertainty. The resulting algorithm produced an X-ray spectrum that matched both quality and quantity with small discrepancy in these experiments.
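The iterative unfolding idea can be sketched with a toy transmission model; every number below (thickness grid, attenuation coefficients, spectral weights) is made up for illustration, and the update is a generic multiplicative nonnegative least-squares iteration, not necessarily the paper's exact numerical analysis.

```python
import numpy as np

# Toy transmission model: columns of A are exp(-mu * t) curves for a few
# energy bins; mu values are illustrative, not physical lead data.
thicknesses = np.linspace(0.51, 8.04, 12)   # cm of lead
mu = np.array([0.8, 0.5, 0.3])              # made-up attenuation coefficients
A = np.exp(-np.outer(thicknesses, mu))

true_w = np.array([0.2, 0.5, 0.3])          # hypothetical spectral weights
meas = A @ true_w                           # "measured" transmitted dose

# Multiplicative (Lee-Seung-style) nonnegative iterative unfolding:
# w stays positive, and the residual ||A w - meas|| decreases monotonically.
w = np.full(3, 1.0 / 3.0)                   # flat initial spectrum guess
for _ in range(20000):
    w *= (A.T @ meas) / (A.T @ (A @ w))
```

With noiseless, consistent data the iteration drives the fitted transmission toward the measurements; real unfolding from noisy dose readings typically needs regularization as well.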


Development of Evaluation Method of Regional Contractility of Left Ventricle Using Gated Myocardial SPECT and Assessment of Reproducibility (게이트 심근 SPECT를 이용한 좌심실의 국소탄성률 평가방법 개발 및 재현성 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Kang, Won-Jun;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.6
    • /
    • pp.355-363
    • /
    • 2003
  • Purpose: Regional contractility can be calculated from the regional volume change of the left ventricle measured on gated myocardial SPECT images and the central-artery pressure curve obtained from radial-artery pressure data. In this study, a program to obtain regional contractility was developed, and the reproducibility of the regional contractility measurement was assessed. Materials and Methods: Seven patients (male:female = 5:2, $58{\pm}11.9$ years) with coronary artery disease underwent gated Tc-99m MIBI myocardial SPECT twice, without delay between the two scans. Regional volume change of the left ventricle was estimated using CSA (Cardiac SPECT Analyzer) software developed in this study. Regional contractility was iteratively estimated from the time-elastance curve obtained using the time-pressure curve and the regional time-volume curve. Reproducibility of the regional contractility measurement was assessed by comparing the contractility values measured twice from the same SPECT data, and by comparing those measured from the pair of SPECT scans obtained from the same patient. Results: Measured regional contractility was $3.36{\pm}3.38{mm}Hg/mL$ with the 15-segment model, $3.16{\pm}2.25{mm}Hg/mL$ with the 7-segment model, and $3.11{\pm}2.57{mm}Hg/mL$ with the 5-segment model. The harmonic mean of the regional contractility values was almost identical to the global contractility. The correlation coefficient of regional contractility values measured twice from the same data was greater than 0.97 for all models, and two standard deviations of the contractility difference on the Bland-Altman plot were 1.5%, 1.0%, and 0.9% for the 15-, 7-, and 5-segment models, respectively. The correlation coefficient of regional contractility values measured from the pair of SPECT scans of the same patient was greater than 0.95 for all models, and two standard deviations on the Bland-Altman plot were 2.2%, 1.0%, and 1.2%.
Conclusion: Regional contractility of the left ventricle measured with the software developed in this study was reproducible. Regional contractility of the left ventricle may become a useful new index of myocardial function after analysis of clinical data.

Evaluation of Radioactivity Concentration According to Radioactivity Uptake on Image Acquisition of PET/CT 2D and 3D (PET/CT 2D와 3D 영상 획득에서 방사능 집적에 따른 방사능 농도의 평가)

  • Park, Sun-Myung;Hong, Gun-Chul;Lee, Hyuk;Kim, Ki;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.111-114
    • /
    • 2010
  • Purpose: There has been recent interest in radioactivity uptake and the image acquisition of radioactivity concentration. The degree of uptake is strongly affected by many factors, including the injected $^{18}F$-FDG volume, tumor size, and blood glucose level. We therefore investigated how radioactivity uptake in a target influences 2D and 3D image analysis, and evaluated the radioactivity concentrations that mediate this effect. This study shows the relationship between radioactivity uptake and radioactivity concentration in 2D and 3D image acquisition. Materials and Methods: We acquired 2D and 3D images with a 1994 NEMA PET phantom on a GE Discovery STe 16 PET/CT (GE, U.S.A.), setting the radioactivity-concentration ratios of background to hot sphere at 1:2, 1:4, 1:8, 1:10, 1:20, and 1:30, respectively, with CT attenuation correction and an acquisition time of 10 minutes. For reconstruction, we applied the iterative method with 2 iterations and 20 subsets in both 2D and 3D. For image analysis, we placed identical ROIs at the center of the hot sphere and in the background, measured the radioactivity counts of each, and compared them. Results: With the ROIs in place, the measured ratios of hot-sphere to background radioactivity concentration were 1:1.93, 1:3.86, 1:7.79, 1:8.04, 1:18.72, and 1:26.90 in 2D, and 1:1.95, 1:3.71, 1:7.10, 1:7.49, 1:15.10, and 1:23.24 in 3D. The percentage differences were 3.50%, 3.47%, 8.12%, 8.02%, 10.58%, and 11.06% in 2D (minimum 3.47%, maximum 11.06%), and 3.66%, 4.80%, 8.38%, 23.92%, 23.86%, and 22.69% in 3D. Conclusion: The difference in accumulated concentration increases significantly as the radioactivity concentration rises, and 2D images are less affected by this change than 3D images.
Therefore, when a patient is examined in a follow-up scan with a different acquisition mode, the scan should be conducted with these effects on quantitative analysis in mind, and the differences should be taken into account at reading.


The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.1
    • /
    • pp.17-24
    • /
    • 2011
  • Purpose: In the nuclear medicine field today, fast iterative reconstruction algorithms such as OSEM are widely used as an alternative to filtered back-projection, thanks to the rapid development and adoption of digital computers. However, there is no established relationship among the reconstruction parameters, and the optimal settings have not been clearly determined. In this study, we analyzed how image quality changes with the number of iterations and the number of subsets in a 3D OSEM reconstruction with 3D beam modeling, using a Jaszczak phantom experiment and brain SPECT patient data. Materials and Methods: Patient data from five patients who underwent brain SPECT between August and September 2010 in the nuclear medicine department of ASAN Medical Center were studied and analyzed. Phantom images were acquired from a Jaszczak phantom uniformly filled with water mixed with 99mTc (500 MBq) on a Siemens Symbia T2 dual-head gamma camera. When reconstructing each image, for both patient and phantom data, we varied the number of iterations over 1, 4, 8, 12, 24, and 30 and the number of subsets over 2, 4, 8, 16, and 32. For each reconstructed image, the coefficient of variation (to estimate image noise), image contrast, and FWHM were computed and compared. Results: In both patient and phantom data, image contrast and spatial resolution tended to increase linearly with the number of iterations and subsets, but the coefficient of variation did not improve as the two parameters increased. In the comparison by scan time, image contrast and FWHM improved linearly with the iteration and subset counts for 10-, 20-, and 30-second-per-projection images, but the coefficient of variation again showed no improvement.
Conclusion: This experiment confirmed that in 3D OSEM reconstruction with 3D beam modeling, image contrast improves linearly with the number of iterations and subsets, as in the existing 1D and 2D OSEM reconstruction methods. However, this is a simple phantom experiment combined with results from a limited number of patients, and various other variables may exist. Generalizing from these results alone would therefore be excessive, and 3D OSEM reconstruction should be evaluated further in future experiments.
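The iteration/subset interplay examined above can be illustrated with a toy OSEM on a tiny linear system; the system matrix below is random and hypothetical, standing in for a real SPECT projector.

```python
import numpy as np

def osem(A, y, n_iter, n_subsets):
    """Toy OSEM for y ≈ A·x with x ≥ 0: the rows (projections) are split
    into ordered subsets, and each subset drives one multiplicative
    MLEM-style sub-update, so one iteration does n_subsets updates."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            x *= (As.T @ (y[rows] / (As @ x))) / As.sum(axis=0)
    return x

# Hypothetical 16-projection, 4-voxel system with noiseless data.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (16, 4))          # positive system matrix
x_true = np.array([1.0, 2.0, 0.5, 1.5])
y = A @ x_true                               # "measured" projections
x_hat = osem(A, y, n_iter=200, n_subsets=4)
```

With consistent data the fitted projections converge to the measurements; with noisy data, pushing iterations and subsets higher sharpens contrast but amplifies noise, which is the trade-off the study measures via the coefficient of variation.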


Implementation of Markerless Augmented Reality with Deformable Object Simulation (변형물체 시뮬레이션을 활용한 비 마커기반 증강현실 시스템 구현)

  • Sung, Nak-Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.17 no.4
    • /
    • pp.35-42
    • /
    • 2016
  • Recently, much research has focused on markerless augmented reality systems that use the user's face, feet, or hands, to alleviate the many disadvantages of marker-based augmented reality systems. In addition, most existing augmented reality systems have used rigid objects, since they only need to insert virtual objects and support basic interaction with them. In this paper, unlike display-bound, marker-based augmented reality restricted to rigid objects, we design and implement a markerless augmented reality system using deformable objects, applicable to various fields with interactive scenarios. Deformable objects are generally implemented with mass-spring models or finite element models: a mass-spring model can provide real-time simulation, while a finite element model achieves physically and mathematically more accurate results. The proposed markerless augmented reality system uses a mass-spring model with a tetrahedral structure to provide real-time simulation. To produce plausible interaction with the deformable objects, the proposed method detects and tracks the user's hand with the Kinect SDK and calculates the external force applied to the object from the change in hand position. Based on this force, 4th-order Runge-Kutta integration computes the next position of the deformable object. In addition, to prevent hand movement from generating excessive external force and to preserve the natural behavior of the deformable object, we set a threshold value and apply it when the hand movement exceeds it. Each experimental test was repeated five times, and we analyzed the results in terms of the computational cost of the simulation.
We believe the proposed markerless augmented reality system with deformable objects can overcome the weakness of traditional marker-based, rigid-object augmented reality systems, which are unsuitable for various other fields, including healthcare and education.
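The integration step described above can be sketched for a single 1-D mass-spring node: clamp the external hand force by a threshold, then advance the state with classic 4th-order Runge-Kutta. All constants here are hypothetical, and a real tetrahedral mesh would sum forces from many springs per node.

```python
import numpy as np

F_MAX = 5.0                   # hypothetical force threshold (the abstract's clamp)

def make_node(k=10.0, m=1.0, c=0.0, f_ext=0.0):
    """1-D mass-spring node: returns state' = f(state) for state = [x, v]."""
    f_ext = float(np.clip(f_ext, -F_MAX, F_MAX))   # clamp excessive hand force
    def deriv(state):
        x, v = state
        return np.array([v, (-k * x - c * v + f_ext) / m])
    return deriv

def rk4_step(f, state, dt):
    """Classic 4th-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check: undamped free vibration x(t) = cos(sqrt(k/m)*t) returns
# to x = 1, v = 0 after one full period.
f = make_node()
T = 2 * np.pi / np.sqrt(10.0)
state = np.array([1.0, 0.0])
dt = T / 2000
for _ in range(2000):
    state = rk4_step(f, state, dt)
```

RK4's small per-step error is what lets a mass-spring mesh stay stable at interactive rates; the force clamp keeps a fast hand motion from injecting enough energy to blow the simulation up.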