• Title/Summary/Keyword: 알고리즘연구 (algorithm research)


Design of a Bit-Serial Divider in $GF(2^m)$ for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 $GF(2^m)$상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.12C
    • /
    • pp.1288-1298
    • /
    • 2002
  • To implement an elliptic curve cryptosystem over $GF(2^m)$ at high speed, a fast divider is required. Although a bit-parallel architecture is well suited to high-speed division, an elliptic curve cryptosystem requires a large m (at least 163) for sufficient security, and since the bit-parallel architecture has an area complexity of $O(m^2)$, it is not suited to this application. In this paper, we propose a new serial-in serial-out systolic array for computing division in $GF(2^m)$ using the standard basis representation. Based on a modified version of the binary extended greatest common divisor (GCD) algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has $O(m)$ time complexity and $O(m)$ area complexity. If input data arrive continuously, it produces division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay compared to previously proposed systolic dividers with the same I/O format, so it is well suited as the division circuit of an elliptic curve cryptosystem. Furthermore, since the architecture does not restrict the choice of irreducible polynomial and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m. (A software model of the underlying GCD-based division is sketched below.)
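The divider itself is a systolic hardware design, but the binary extended GCD division it builds on can be modeled in a few lines of software. Below is a minimal Python sketch, not the paper's circuit: polynomials over GF(2) are packed into integers (bit i is the coefficient of $t^i$), and the function name is ours.

```python
def gf2m_div(x, y, f):
    """x / y in GF(2^m) = GF(2)[t]/(f(t)), standard basis, via the
    binary extended GCD algorithm.  Polynomials are ints with bit i
    holding the coefficient of t^i; f is the irreducible polynomial.
    Loop invariant: g1*y = u*x and g2*y = v*x (mod f)."""
    assert y != 0
    u, v = y, f
    g1, g2 = x, 0
    while u != 1 and v != 1:
        while u & 1 == 0:                      # t divides u
            u >>= 1
            g1 = g1 >> 1 if g1 & 1 == 0 else (g1 ^ f) >> 1
        while v & 1 == 0:                      # t divides v
            v >>= 1
            g2 = g2 >> 1 if g2 & 1 == 0 else (g2 ^ f) >> 1
        if u.bit_length() > v.bit_length():    # deg(u) > deg(v)
            u ^= v; g1 ^= g2
        else:
            v ^= u; g2 ^= g1
    return g1 if u == 1 else g2

# GF(2^3), f = t^3 + t + 1:  t / (t + 1) = t^2 + t + 1
assert gf2m_div(0b010, 0b011, 0b1011) == 0b111
```

The hardware design maps iterations of this loop onto systolic cells, which is where the $O(m)$ time/area figures and the one-result-per-m-cycles throughput come from.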

Development of JPEG2000 Viewer for Mobile Image System (이동형 의료영상 장치를 위한 JPEG2000 영상 뷰어 개발)

  • 김새롬;정해조;강원석;이재훈;이상호;신성범;유선국;김희중
    • Progress in Medical Physics
    • /
    • v.14 no.2
    • /
    • pp.124-130
    • /
    • 2003
  • Currently, as a consequence of PACS (Picture Archiving and Communication System) implementation, many hospitals are replacing conventional film-based interpretation of diagnostic medical images with digital-format interpretation that can also be saved and retrieved. However, a major limitation of PACS is its lack of mobility. The purpose of this study was to determine the optimal communication packet size for a wireless network, taking into account the errors that occur in wireless communication, after encoding medical images with the JPEG2000 compression method, and to embody an auto-error-correction technique that prevents the packet loss occurring during wireless transmission. A PC-class server was installed with capabilities to load and collect data, save images, and connect with other networks. Image data were compressed using the JPEG2000 algorithm, which supports a high compression ratio, and were transmitted in block units coded by JPEG2000 so that a lost packet does not corrupt the whole stream. When JPEG2000 image data were decoded on a PDA (Personal Digital Assistant), decoding was instantaneous for a 256×256-pixel MR (Magnetic Resonance) head image, while it took approximately 5 seconds for an 800×790-pixel CR (Computed Radiography) chest image. In transmission over a CDMA 1X module (Code-Division Multiple Access), 256 bytes/sec was a stable transmission rate, while packets were lost intermittently at 1 Kbyte/sec; over wireless LAN, packets were not lost even above 1 Kbyte/sec. Current PACS is not compatible with wireless networks because it lacks an interface between the wired and wireless sides. Thus, a mobile JPEG2000 image-viewing system was developed to complement mobility, the limitation of PACS noted above, and the weak connections of the wireless network were compensated for by retransmitting image data within a time limit (a sketch of this retransmission idea follows below). The results of this study are expected to serve as an interface between current wired-network PACS and mobile devices.
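The retransmit-until-acknowledged idea can be illustrated with a small simulation. This is a hedged sketch, not the authors' implementation: the simulated lossy channel, loss rate, and block size below are invented for the example.

```python
import random

def transmit_blocks(packets, loss_rate=0.2, max_tries=10):
    """Stop-and-wait delivery over a simulated lossy channel: each
    block is resent until acknowledged, so a single lost packet does
    not corrupt the whole JPEG2000 code stream."""
    delivered = []
    for seq, pkt in enumerate(packets):
        for _ in range(max_tries):
            if random.random() >= loss_rate:   # packet (and ACK) survived
                delivered.append(pkt)
                break
        else:
            raise ConnectionError(f"block {seq} undeliverable")
    return delivered

# usage: split a compressed stream into 256-byte blocks, the rate the
# paper found stable over CDMA 1X
stream = bytes(range(256)) * 8
blocks = [stream[i:i + 256] for i in range(0, len(stream), 256)]
assert b"".join(transmit_blocks(blocks)) == stream
```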


Evaluating applicability of metal artifact reduction algorithm for head & neck radiation treatment planning CT (Metal artifact reduction algorithm의 두경부 CT에 대한 적용 가능성 평가)

  • Son, Sang Jun;Park, Jang Pil;Kim, Min Jeong;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.107-114
    • /
    • 2014
  • Purpose: To evaluate the applicability of O-MAR (Metal Artifact Reduction for Orthopedic Implants, ver. 3.6.0, Philips, Netherlands) to head & neck radiation treatment planning CT affected by metal artifacts from dental implants. Materials and Methods: All CT images in this study were scanned on a Brilliance Big Bore CT (Philips, Netherlands) at 120 kVp with 2 mm slices, and metal artifacts were reduced with O-MAR. The original and O-MAR-reconstructed CT images were compared on the RTPS (Eclipse ver. 10.0.42, Varian, USA). To test the basic performance of O-MAR, one phantom was built to create metal artifacts from dental implants and other phantoms were used for artifact-free images. To measure the HU difference between images with and without artifacts, homogeneous and inhomogeneous phantoms with Cerrobend rods were used, and the HU in matching ROIs was compared between each pair of images. In addition, for one patient case, the original CT, the O-MAR-applied CT, and a density-corrected CT were evaluated for dose distribution with SNC Patient (Sun Nuclear Co., USA). Results: For the head & neck phantom, the dose distributions of the original and O-MAR-applied CT images agreed with a 99.8% gamma passing rate (criteria 2 mm/2%), and the patient case showed 98.5% among the original, O-MAR, and density-corrected CTs. The difference in total dose distribution was less than 2% in both the phantom and patient studies. Although the dose deviations are small, it remains a concern that they are concentrated locally. The quality of all O-MAR-applied images was improved. Unexpectedly, an increase in maximum HU was found in the air cavities of O-MAR images compared with the original images, and incorrect corrections appeared as well. Conclusion: In a case assumed to strain O-MAR, with metal close to the skin and to low-density regions, image distortion and artifact correction appeared simultaneously, and air-cavity regions wrongly corrected to tissue HU were found in the O-MAR CT. Consequently, the O-MAR algorithm does not appear to distinguish perfectly between air cavities and photon-starvation artifacts. Nevertheless, the differences in HU and dose distribution are not large enough to rule out clinical use, and there are clear clinical advantages: improved quality of CT images and DRRs, more precise contouring of OARs and tumors, and corrected artifact regions. Therefore, the original and O-MAR CTs should be used together in the clinic for more accurate treatment planning.
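The 2 mm/2% figure above is a gamma passing rate. For reference only, here is a minimal 1-D global gamma analysis in NumPy; it omits the interpolation and multi-dimensional search that a clinical tool such as SNC Patient performs, and all names are illustrative.

```python
import numpy as np

def gamma_pass_rate(ref, evl, coords, dd=0.02, dta=2.0, cutoff=0.10):
    """1-D global gamma: dd is the dose criterion as a fraction of the
    reference maximum, dta the distance-to-agreement in mm.  Returns
    the fraction of evaluated points with gamma <= 1."""
    ref, evl = np.asarray(ref, float), np.asarray(evl, float)
    coords = np.asarray(coords, float)
    norm = ref.max()
    gammas = []
    for i in np.nonzero(ref >= cutoff * norm)[0]:   # skip very low doses
        dose_term = (evl - ref[i]) / (dd * norm)
        dist_term = (coords - coords[i]) / dta
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    return float(np.mean(np.array(gammas) <= 1.0))
```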

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • This study proposes a real-time lighting algorithm for scenes in which each of more than 100,000 moving particles is itself a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first stores light color and the second stores light direction information. Each frame proceeds in two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering: each particle position is converted to sampling coordinates of the 3D texture, and at those coordinates the first texture accumulates the color sum of the particle lights affecting the voxel while the second accumulates the sum of direction vectors from the voxel to the particle lights (a CPU sketch of this pass follows below). The second step runs in the normal rendering pipeline: from the world position of the polygon being rendered, the exact sampling coordinates of the 3D texture updated in the first step are computed. Since the sampling coordinates correspond 1:1 to both the 3D texture and the game world, the pixel's world coordinates are used directly as sampling coordinates, and lighting is computed from the sampled color and light direction vector. The 3D texture maps 1:1 onto the game world with a minimum unit of 1 m, so at scales below 1 m staircase artifacts appear due to the resolution limit; interpolation and supersampling during texture sampling mitigate them. Frame-time measurements showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms (forward) versus 104 ms (deferred) with 1,024,766 particle lights.
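A CPU analogue of the first pass may make the data flow clearer. Assumptions are flagged: the paper runs this in a GPU compute shader, and scattering only into the voxel containing each particle (rather than its whole range of influence) is a simplification for illustration.

```python
import numpy as np

def build_light_volumes(pos, col, world_min, voxel_size, dims):
    """Pass-1 sketch: scatter every particle light into two 3D
    textures - one accumulating light color per voxel, one
    accumulating voxel-to-light direction vectors.  World space maps
    1:1 onto texture space at 1 m per voxel, as in the paper."""
    pos = np.asarray(pos, float); col = np.asarray(col, float)
    world_min = np.asarray(world_min, float)
    color_vol = np.zeros((*dims, 3), np.float32)
    dir_vol = np.zeros((*dims, 3), np.float32)
    idx = np.floor((pos - world_min) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(dims) - 1)
    centers = world_min + (idx + 0.5) * voxel_size
    for i, c, ctr, p in zip(idx, col, centers, pos):
        color_vol[tuple(i)] += c            # additive color sum
        d = p - ctr
        n = np.linalg.norm(d)
        if n > 0.0:
            dir_vol[tuple(i)] += d / n      # direction sum
    return color_vol, dir_vol
```

Pass 2 then samples both volumes at each pixel's world position (hardware trilinear filtering standing in for the interpolation step) and shades with the sampled color and direction.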

High-Speed Implementation and Efficient Memory Usage of Min-Entropy Estimation Algorithms in NIST SP 800-90B (NIST SP 800-90B의 최소 엔트로피 추정 알고리즘에 대한 고속 구현 및 효율적인 메모리 사용 기법)

  • Kim, Wontae;Yeom, Yongjin;Kang, Ju-Sung
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.1
    • /
    • pp.25-39
    • /
    • 2018
  • NIST (National Institute of Standards and Technology) has recently published the second draft of SP 800-90B, the document for evaluating the security of entropy sources, a key element of a cryptographic random number generator (RNG), together with a tool implemented in Python. In SP 800-90B, the security evaluation of an entropy source is a process of estimating its min-entropy with several estimators, divided into an IID track and a non-IID track. In the IID track, the entropy source is estimated only with the MCV estimator; in the non-IID track, it is estimated with 10 estimators including MCV. The running time of NIST's tool on the non-IID track is approximately 20 minutes, and its memory usage exceeds 5.5 GB. For evaluation agencies that must repeatedly evaluate many samples, and for developers or researchers who experiment in varied environments, estimating entropy with the tool can be inconvenient and, depending on the environment, impossible. In this paper, we propose high-speed implementations and an efficient memory usage technique for the min-entropy estimation algorithms of SP 800-90B. Our major contributions are three methods that improve speed and reduce memory: exploiting the advantages of C++ to speed up the MultiMCW estimator, rebuilding the data storage structure to cut memory and improve the speed of MultiMMC, and rebuilding the data structure to speed up LZ78Y. With our methods applied, the tool is 14 times faster and uses 13 times less memory than NIST's tool.
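For orientation, the simplest estimator named above, the Most Common Value (MCV) estimator of SP 800-90B Section 6.3.1, fits in a few lines. The formula is the specification's; the function name and the demo at the end are ours.

```python
import math
import random
from collections import Counter

def mcv_min_entropy(samples):
    """SP 800-90B Most Common Value estimator: bound the probability
    of the most frequent symbol from above with a 99% confidence
    interval, then take -log2 of that bound as the min-entropy."""
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1.0 - p_hat) / (n - 1)))
    return -math.log2(p_u)

# a uniform byte source should estimate close to (but below) 8 bits
print(mcv_min_entropy([random.randrange(256) for _ in range(100000)]))
```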

A Hardware Implementation of the Underlying Field Arithmetic Processor based on Optimized Unit Operation Components for Elliptic Curve Cryptosystems (타원곡선을 암호시스템에 사용되는 최적단위 연산항을 기반으로 한 기저체 연산기의 하드웨어 구현)

  • Jo, Seong-Je;Kwon, Yong-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.1
    • /
    • pp.88-95
    • /
    • 2002
  • In recent years, the security of hardware and software systems has become one of the most essential factors of a safe network community. Since elliptic curve cryptosystems, proposed independently by N. Koblitz and V. Miller in 1985, require fewer bits for the same security than existing cryptosystems such as RSA, they give a net reduction in cost, size, and time. In this thesis, we propose an efficient hardware architecture for an underlying field arithmetic processor for elliptic curve cryptosystems, together with a practical method for implementing the architecture, especially the multiplicative inverse operator over $GF(2^m)$, on an FPGA and furthermore in VLSI, based on optimized unit operation components. We optimize the arithmetic processor for speed while keeping a reasonable gate count, and the proposed architecture can be applied to any finite field $GF(2^m)$. According to the simulation results, although the number of gates increases by a factor of 8.8, the multiplication and inversion speeds improve 150-fold and 480-fold respectively, compared with the design presented by Sarwono Sutikno et al. [7]. The designed underlying arithmetic processor can also be applied to other crypto-processors and to various finite field applications. (A software model of the unit operations is sketched below.)
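The unit operations such a processor is built from, $GF(2^m)$ multiplication and multiplicative inversion, can be modeled in software. A minimal sketch under a bit-packed convention (bit i is the coefficient of $t^i$); the Fermat-based inverter shown is one standard construction, not necessarily the paper's optimized design.

```python
def gf2m_mul(a, b, f, m):
    """Shift-and-add (MSB-first) multiplication in GF(2^m) with
    interleaved reduction by the irreducible polynomial f.
    Polynomials are ints: bit i is the coefficient of t^i."""
    r = 0
    for i in reversed(range(m)):
        r <<= 1
        if (r >> m) & 1:        # degree reached m: reduce by f
            r ^= f
        if (b >> i) & 1:
            r ^= a
    return r

def gf2m_inv(y, f, m):
    """Multiplicative inverse via Fermat's little theorem,
    y^(2^m - 2), computed by square-and-multiply with the
    multiplier above."""
    r, e = 1, (1 << m) - 2
    while e:
        if e & 1:
            r = gf2m_mul(r, y, f, m)
        y = gf2m_mul(y, y, f, m)
        e >>= 1
    return r

# GF(2^3), f = t^3 + t + 1: the inverse of t is t^2 + 1
assert gf2m_inv(0b010, 0b1011, 3) == 0b101
```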

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal
    • /
    • v.16 no.1
    • /
    • pp.71-79
    • /
    • 1998
  • Purpose: The objective of this study is to introduce our installation of Plunc, a non-commercial 3D planning system, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millennium VGA) running Linux. To examine the accuracy of the dose distributions calculated by Plunc, we entered beam data for the 6 MV photon beam of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratio, scatter-maximum ratio, attenuation coefficients, and wedge filter shapes. We then compared dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beams; and dose distributions under an air gap) with measured values. Results: Plunc operated in nearly real time, spending only about 10 seconds on a full-volume dose distribution and dose-volume histogram (DVH) on the PC described above. Compared with measurements at 90 cm SSD and 10 cm isocenter depth, the PDD curves calculated by Plunc stayed within 1% except in the buildup region. Dose profiles with and without wedge filters were accurate within 2% except in the low-dose region outside the field, where Plunc showed a 5% dose reduction. For the oblique incident beam, agreement was good except in the low-dose region below 30% of the isocenter dose. For the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing Plunc photon dose calculations with measurements, we confirmed that Plunc achieves acceptable accuracy of about 2-5% in typical treatment situations, comparable to commercial planning systems using correction-based algorithms. Plunc does not currently provide electron beam planning; however, electron dose calculation modules or more accurate photon dose calculations could be implemented in it. Plunc is thus useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.
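The PDD comparison above boils down to a small computation: normalize both curves to the maximum dose and report the worst deviation outside the buildup region. A hedged NumPy sketch (the function name and buildup cutoff are illustrative):

```python
import numpy as np

def max_pdd_deviation(depth_cm, measured, calculated, buildup_cm=1.5):
    """Maximum |calculated - measured| percent-depth-dose deviation,
    as a percentage of the maximum dose, excluding the buildup region."""
    m = np.asarray(measured, float)
    c = np.asarray(calculated, float)
    diff_pct = 100.0 * (c - m) / m.max()
    keep = np.asarray(depth_cm, float) >= buildup_cm
    return float(np.abs(diff_pct[keep]).max())
```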


Resolution Evaluation of a Pinhole Collimator according to the Aperture Diameter using Micro Deluxe Phantom (Micro Deluxe Phantom을 통한 핀홀 콜리메이터 초점의 직경별 분해능 평가)

  • An, Byung Ho;Yeon, Joon Ho;Kim, Soo Young;Choi, Sung Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.1
    • /
    • pp.3-11
    • /
    • 2015
  • Purpose: It is hard to obtain high-quality images of the knee and TM (temporomandibular) joints because of the large amount of soft tissue around them, and most conventional high-resolution scintigraphy uses a 4 mm aperture pinhole collimator. The aims of this study are to compare high-resolution pinhole imaging on a conventional system using a Micro Deluxe phantom, to evaluate the performance of each aperture diameter, and to assess the usefulness of 24-hour delayed bone scintigraphy. Materials and Methods: Pinhole collimators with 6 mm and 8 mm aperture diameters were mounted on a Siemens E.CAM system. The Micro Deluxe phantom was used for the performance comparison, with projection data acquired in 9-degree increments at 30 seconds per view. Transverse images were reconstructed using a dedicated OSEM algorithm with recovery of detector blurring. A $^{99m}$Tc-HDP source was used for the 24-hour delayed bone scintigraphy. Results: Knee joint images acquired with a 24-hour delay were better than those acquired with a 3-hour delay: with the 6 mm and 8 mm pinhole collimators, FWHM improved by 28%, SNR and uniformity improved by 35%, and contrast improved by 7%. In the 24-hour delayed TM joint images, by contrast, FWHM decreased by 60%, SNR decreased by 20%, uniformity decreased by 25%, and contrast decreased significantly. Conclusion: Pinhole collimators with 6 mm and 8 mm apertures can offer superior performance for 24-hour delayed bone scintigraphy, and the 24-hour delayed image provides additional benefits for pinhole scintigraphy of the knee joint. We therefore expect it to be useful for precise diagnosis of the knee joint and applicable to imaging of other joints.
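The FWHM values above are measured from profiles through the reconstructed phantom images. For reference, a minimal FWHM measurement on a 1-D profile (half-maximum crossings found by linear interpolation; assumes the profile drops below half maximum on both sides of the peak):

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a peaked 1-D profile, with the
    half-maximum crossings located by linear interpolation."""
    y = np.asarray(profile, float)
    x = np.asarray(x, float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i, j = above[0], above[-1]      # first/last samples above half max
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

# a Gaussian with sigma = 4 mm has FWHM = 2*sqrt(2*ln 2)*sigma ~ 9.42 mm
x = np.linspace(-30, 30, 601)
print(fwhm(x, np.exp(-x**2 / (2 * 4.0**2))))   # ~ 9.42
```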


Introduction on the Products and the Quality Management Plans for GOCI-II (천리안 해양위성 2호 산출물 및 품질관리 계획)

  • Lee, Sun-Ju;Lee, Kyeong-Sang;Han, Tae Hyun;Moon, Jeong-Eon;Bae, Sujung;Choi, Jong-kuk
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1245-1257
    • /
    • 2021
  • GOCI-II, succeeding the mission of GOCI, was launched in February 2020 and has been in regular operation since October 2020. The Korea Institute of Ocean Science and Technology (KIOST) processes and produces in real time the Level-1B output and 26 Level-2 outputs, which are then distributed by the Korea Hydrographic and Oceanographic Agency (KHOA). We introduce the current status of regular GOCI-II operation and planned improvements. Basic GOCI-II products, including chlorophyll-a, total suspended materials, and colored dissolved organic matter concentration, are derived with the OC4 and YOC algorithms, which are described in detail. For the full disk (FD), the imaging schedule was established during the in-orbit test considering solar zenith angle and sun glint, and was later improved by also considering satellite zenith angle; the number of slots satisfying the 'Best Ocean' condition increased significantly, from 15 to 78. GOCI-II calibration requirements were drawn up based on those of the European Space Agency (ESA), and candidate fixed sites for calibrating the local observation area were selected. Quality management of the FD relies on research ships and overseas bases of KIOST, but an international calibration/validation network still needs to be established. These results are expected to improve users' understanding of output processing and to help establish detailed plans for future quality management tasks.
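The OC4 algorithm named above is a maximum-band-ratio polynomial. To show its shape only, here is the generic form with the SeaWiFS OC4v4 coefficients; GOCI-II's operational bands and coefficients differ, so treat this strictly as an illustration.

```python
import numpy as np

def oc4_chl(rrs443, rrs490, rrs510, rrs555):
    """OC4 maximum-band-ratio chlorophyll-a (mg/m^3): a 4th-order
    polynomial in log10 of the largest blue/green Rrs ratio.
    Coefficients are SeaWiFS OC4v4, for illustration only."""
    a0, a1, a2, a3, a4 = 0.366, -3.067, 1.930, 0.649, -1.532
    r = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    return 10.0 ** (a0 + a1 * r + a2 * r**2 + a3 * r**3 + a4 * r**4)
```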

A Comparative Errors Assessment Between Surface Albedo Products of COMS/MI and GK-2A/AMI (천리안위성 1·2A호 지표면 알베도 상호 오차 분석 및 비교검증)

  • Woo, Jongho;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Byeon, Yugyeong;Jeon, Uujin;Sohn, Eunha;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1767-1772
    • /
    • 2021
  • Global satellite observations of surface albedo over long periods are actively used to monitor changes in the global climate and environment, and their utilization and importance are great. Through the generational shift from the geostationary satellite COMS (Communication, Ocean and Meteorological Satellite)/MI (Meteorological Imager) to GK-2A (GEO-KOMPSAT-2A)/AMI (Advanced Meteorological Imager), surface albedo outputs can be secured continuously. However, the surface albedo outputs of COMS/MI and GK-2A/AMI differ because of differences in their retrieval algorithms. Therefore, to extend the surface albedo record across COMS/MI and GK-2A/AMI and secure continuity for climate change monitoring, an analysis of the two satellites' outputs and errors must come first. In this study, error characteristics were analyzed by comparing the COMS/MI and GK-2A/AMI surface albedo data for their overlapping period against the ground observation data of AERONET (Aerosol Robotic Network) and the satellite data of GLASS (Global Land Surface Satellite). Against AERONET, the RMSE of COMS/MI was 0.043, higher than the 0.015 of GK-2A/AMI; against GLASS, the RMSE of COMS/MI was 0.029, slightly lower than the 0.038 of GK-2A/AMI. Understanding these error characteristics when using the COMS/MI and GK-2A/AMI surface albedo data will allow them to be used actively for long-term climate change monitoring.
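The RMSE values above follow the usual matched-pixel definition; for completeness, a small sketch (the NaN masking is an assumption about the preprocessing):

```python
import numpy as np

def albedo_rmse(product, reference):
    """RMSE of a satellite surface albedo product against a reference
    (AERONET-derived or GLASS albedo), over valid matched pixels."""
    p = np.asarray(product, float)
    r = np.asarray(reference, float)
    ok = ~(np.isnan(p) | np.isnan(r))
    return float(np.sqrt(np.mean((p[ok] - r[ok]) ** 2)))
```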