• Title/Summary/Keyword: Signature Generation

Isotopic Fissile Assay of Spent Fuel in a Lead Slowing-Down Spectrometer System

  • Lee, Yongdeok;Jeon, Juyoung;Park, Changje
    • Nuclear Engineering and Technology, v.49 no.3, pp.549-555, 2017
  • A lead slowing-down spectrometer (LSDS) system is under development to analyze the isotopic fissile content of spent fuel and recycled material. The source-neutron mechanism for efficient and effective neutron generation was also determined. The source neutrons interact with a lead medium to produce a continuous neutron-energy spectrum, which induces the dominant fission in each fissile isotope below the unresolved resonance region. From the relationship between the induced fissile fission and the detection of fast fission neutrons, a mathematical assay model for isotopic fissile material was set up. The assay model can be extended to all fissile materials. A correction factor for self-shielding in the fuel assay area was defined; the corrected fission signature shows well-defined fission properties that scale with the fissile content. The assay procedure was also established. The choice of assay energy range is very important for exploiting the prominent fission structure of each fissile isotope. Fission detection was evaluated while varying the Pu239 weight percent (wt%), with the U235 and Pu241 contents fixed at 1 wt%. The assay result showed 2~3% uncertainty for Pu239, depending on the amount of Pu239 in the fuel. The results show that the LSDS is a very powerful technique for assaying the isotopic fissile content of spent fuel and recycled materials for the reuse of fissile material. Additionally, an LSDS is applicable to the optimal design of spent fuel storage facilities and their management. An isotopic fissile content assay will increase the transparency and credibility of spent fuel storage.
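The assay model described above relates detected fission-neutron counts to isotopic contents. A minimal sketch of that idea, assuming a linear response model with purely illustrative numbers (the response matrix, energy groups, and masses below are mine, not the paper's):

```python
# Hedged sketch: the abstract's mathematical assay model links detected
# fission-neutron counts in chosen slowing-down-energy groups linearly to
# the isotopic fissile contents. All numbers here are illustrative only.

def solve_assay(R, counts):
    """Solve R @ m = counts for fissile masses m (Gaussian elimination)."""
    n = len(R)
    A = [row[:] + [c] for row, c in zip(R, counts)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    m = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        m[i] = (A[i][n] - sum(A[i][c] * m[c] for c in range(i + 1, n))) / A[i][i]
    return m

# Hypothetical response matrix: counts per gram of (U-235, Pu-239, Pu-241)
# in three assay-energy groups where each isotope's fission dominates.
R = [[5.0, 1.0, 0.8],
     [1.2, 6.0, 1.5],
     [0.9, 1.4, 4.0]]
true_masses = [1.0, 2.5, 1.0]  # grams, hypothetical
counts = [sum(r * m for r, m in zip(row, true_masses)) for row in R]
print(solve_assay(R, counts))  # recovers the masses (up to rounding)
```

In practice the response matrix would come from the self-shielding-corrected fission signatures the paper describes; the linear-solve step is the same.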

A Design and Analysis of Micro-payment System for Internet Commerce (인터넷 상거래를 위한 소액대금결제 시스템의 설계 및 성능평가)

  • Sung, Won;Kim, Eui-Jung;Park, Jong-Won
    • Journal of the Korea Computer Industry Society, v.4 no.4, pp.533-546, 2003
  • Low-priced information goods traded over the Internet are impossible to handle with existing payment systems, because the management cost exceeds the profit from trading the goods. Hence, micropayment schemes such as "Millicent", "PayWord", "MicroMint", and "iKP" have recently been studied. Although these methods achieve a low-cost mechanism with adequate security, they suffer from the use of unnecessary accounts and from the aggregation of payment bills. The PayHash system developed in this study simplifies the mechanism by using a one-way hash function for the generation, payment, and verification of bills. The system removes the creation and use of unnecessary accounts by giving each customer a single account, solves the payment-aggregation problem by using the last payment hash value and its index, and improves performance by drastically reducing the use of digital signatures. As a result, the PayHash system enables the participants of Internet commerce to trade even the lowest-priced goods through efficient operation.
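The hash-chain mechanism behind PayWord-style schemes, which the abstract's "last payment hash value and its index" idea builds on, can be sketched as follows (a minimal illustration; the function names and the exact PayHash protocol details are my assumptions, not the paper's):

```python
import hashlib

# Hedged sketch of a PayWord-style hash chain: the customer commits to the
# chain root once (that root would be digitally signed); each payment then
# reveals a deeper preimage, and the vendor verifies with hashes alone.

def make_chain(seed: bytes, n: int):
    """Build the chain so that chain[0] is the root and chain[i] pays i coins."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] = H^n(seed) is the committed root
    return chain

def verify(root: bytes, token: bytes, index: int) -> bool:
    """Hash the token back `index` times; it must land exactly on the root."""
    h = token
    for _ in range(index):
        h = hashlib.sha256(h).digest()
    return h == root

chain = make_chain(b"customer-secret", 100)
# Paying 3 coins in aggregate: reveal only chain[3] and its index.
print(verify(chain[0], chain[3], 3))   # True
```

This shows why only the last hash value and its index need to be kept: they summarize all payments so far, and the expensive digital signature is needed only once, on the root.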

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering, v.12 no.8, pp.355-364, 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods. Therefore, research on anomaly detection using data-driven artificial intelligence is increasing. Supervised anomaly detection is difficult to use in real environments because it requires sufficient labeled data for training, so unsupervised methods that learn from normal data and detect anomalies from patterns in the data itself have been actively studied. This study therefore extracts latent vectors that preserve useful sequence information from sequence log data and develops an anomaly detection model using those latent vectors. Word2Vec was used to create a dense vector representation reflecting the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the densely encoded sequence data. Three autoencoder models were built: a GRU (Gated Recurrent Unit)-based denoising autoencoder suited to sequence data, a one-dimensional convolutional autoencoder to mitigate the limited short-term memory a GRU can have, and an autoencoder combining GRU and one-dimensional convolution. The experiments used time-series-based NGIDS (Next Generation IDS Dataset) data. The combined GRU/one-dimensional-convolution autoencoder was more efficient than the GRU-only or convolution-only models in training time for extracting useful latent patterns, and showed stable anomaly detection performance with smaller fluctuations.
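The detection step common to all three autoencoders, flagging a sequence whose reconstruction error exceeds a threshold learned from normal data, can be sketched independently of the network architecture (a minimal illustration with made-up error values; the percentile-based threshold is my assumption, as the abstract does not state how the threshold is set):

```python
# Hedged sketch of unsupervised anomaly detection by reconstruction error:
# an autoencoder trained only on normal sequences reconstructs normal data
# well, so errors above a threshold fit on normal data indicate anomalies.

def percentile(values, q):
    """Simple percentile by rank (sufficient for this illustration)."""
    s = sorted(values)
    idx = min(int(q / 100 * len(s)), len(s) - 1)
    return s[idx]

def detect(errors, threshold):
    """Flag every sequence whose reconstruction error exceeds the threshold."""
    return [e > threshold for e in errors]

# Made-up reconstruction errors from held-out *normal* data:
normal_errors = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10, 0.12, 0.08, 0.11, 0.10]
threshold = percentile(normal_errors, 95)   # e.g. the 95th percentile

# Made-up test errors; the large values stand in for attack sequences:
test_errors = [0.11, 0.95, 0.10, 0.80]
print(detect(test_errors, threshold))       # [False, True, False, True]
```

In the paper's setting, the errors would be the autoencoder's reconstruction losses over Word2Vec-embedded sequences; the thresholding logic is unchanged.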

Current Status of Cattle Genome Sequencing and Analysis using Next Generation Sequencing (차세대유전체해독 기법을 이용한 소 유전체 해독 연구현황)

  • Choi, Jung-Woo;Chai, Han-Ha;Yu, Dayeong;Lee, Kyung-Tai;Cho, Yong-Min;Lim, Dajeong
    • Journal of Life Science, v.25 no.3, pp.349-356, 2015
  • Thanks to recent advances in next-generation sequencing (NGS) technology, diverse livestock species have been dissected at the genome-wide sequence level. For cattle, four Korean indigenous breeds are currently registered with the Domestic Animal Diversity Information System of the Food and Agriculture Organization of the United Nations: Hanwoo, Chikso, Heugu, and Jeju Heugu. These native genetic resources were recently whole-genome resequenced using various NGS technologies, providing enormous single-nucleotide-polymorphism information across the genomes. The NGS applications further provided biological insights, such as the finding that Korean native cattle are genetically distant from some cattle breeds of European origin. In addition, NGS was successfully applied to detect structural variations, particularly copy number variations, which were previously difficult to identify at the genome-wide level with reasonable accuracy. Despite these successes, the recent studies share an inherent limitation: only a representative individual of each breed was sequenced. To elucidate the biological implications of the sequenced data, confirmatory studies should follow by sequencing or validating a population of each breed. Because NGS prices have consistently dropped, various population-genomic theories can now be applied to sequencing data obtained from populations of each breed of interest. Few such population studies are yet available for the Korean native cattle breeds, but this situation will soon improve with the recent initiative for NGS sequencing of diverse native livestock resources, including the Korean native cattle breeds.
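One elementary way "genetically distant" is quantified from genome-wide SNP data is a genotype-based distance over shared variant sites. A toy illustration (made-up genotype vectors, not real Hanwoo or European data, and not the authors' pipeline):

```python
# Hedged sketch: genotypes are coded per SNP site as 0/1/2 copies of the
# alternate allele; a simple allele-sharing distance between two samples is
# the mean absolute genotype difference, scaled to [0, 1].

def snp_distance(g1, g2):
    """Mean absolute genotype difference over shared sites, in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(g1, g2)) / (2 * len(g1))

hanwoo_like   = [0, 1, 2, 0, 1, 0, 2, 1]   # hypothetical genotype vectors
european_like = [2, 1, 0, 2, 1, 2, 0, 1]
print(snp_distance(hanwoo_like, european_like))  # 0.625 -> distant
print(snp_distance(hanwoo_like, hanwoo_like))    # 0.0   -> identical
```

Real analyses use millions of sites and model-based statistics (e.g. allele-frequency-based measures across populations), but the genotype-matrix input is the same shape.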

Discussion of Preliminary Design Review for MIRIS, the Main Payload of STSAT-3

  • Han, Won-Yong;Jin, Ho;Park, Jang-Hyun;Nam, Uk-Won;Yuk, In-Soo;Lee, Sung-Ho;Park, Young-Sik;Park, Sung-Jun;Lee, Dae-Hee;Ree, Chang-H.;Jeong, Woong-Seob;Moon, Bong-Kon;Cha, Sang-Mok;Cho, Seoung-Hyun;Rhee, Seung-Woo;Park, Jong-Oh;Lee, Seung-Heon;Lee, Hyung-Mok;Matsumoto, Toshio
    • Bulletin of the Korean Space Science Society, 2008.10a, pp.27.1-27.1, 2008
  • KASI (Korea Astronomy and Space Science Institute) is developing a compact wide-field survey space telescope system, MIRIS (the Multi-purpose IR Imaging System), to be launched in 2010 as the main payload of the Korea Science and Technology Satellite 3. Through the recent System Design Review (SDR) and Preliminary Design Review (PDR), most of the system design concept was reviewed and confirmed. The near-IR imaging system adopted short F/2 optics for wide-field, low-resolution observation in the 0.9~2.0 μm wavelength band, minimizing the effect of the attitude control system. The mechanical system is composed of a cover, baffle, optics, and a detector system using a 256×256 Teledyne PICNIC FPA, providing a 3.67×3.67 degree field of view with a pixel scale of 51.6 arcsec. We designed a support system that minimizes heat transfer using Multi-Layer Insulation. The electronics of the MIRIS system comprise 7 boards, including DSP, control, and SCIF boards. Particular attention is being paid to developing a mission operation scenario for space observation that minimizes IR background radiation from the Earth and Sun. The scientific purpose of MIRIS is to survey the Galactic plane in the Paα emission line (1.88 μm) and to detect the cosmic infrared background (CIB) radiation. The CIB is suspected to originate from the first generation of stars in the Universe, and we will test this hypothesis by comparing the fluctuations in the I (0.9~1.2 μm) and H (1.2~2.0 μm) bands to search for the redshifted Lyman-cutoff signature.
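The quoted optics numbers are mutually consistent, which a two-line check confirms: a 3.67-degree field sampled by a 256-pixel axis of the PICNIC array gives the stated pixel scale.

```python
# Consistency check of the MIRIS numbers from the abstract:
# field of view 3.67 deg across a 256 x 256 detector -> arcsec per pixel.
fov_deg, npix = 3.67, 256
pixel_scale_arcsec = fov_deg * 3600 / npix
print(round(pixel_scale_arcsec, 1))   # 51.6
```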

Automatic Generation of Snort Content Rule for Network Traffic Analysis (네트워크 트래픽 분석을 위한 Snort Content 규칙 자동 생성)

  • Shim, Kyu-Seok;Yoon, Sung-Ho;Lee, Su-Kang;Kim, Sung-Min;Jung, Woo-Suk;Kim, Myung-Sup
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.4, pp.666-677, 2015
  • The importance of application-traffic analysis for efficient network management has been emphasized continuously. Snort is a popular traffic-analysis system that detects traffic matching pre-defined signatures and performs various actions based on its rules. However, it is very difficult to obtain highly accurate signatures for various analysis purposes, because searching the entire traffic data manually or semi-automatically is tedious and time-consuming. In this paper, we propose a novel method to generate signatures fully automatically, in the form of Snort rules, from raw packet data captured from a network link or end host. We use a sequence-pattern algorithm to generate common substrings that satisfy a minimum support over the traffic flow data, and we extract the location and header information of each signature, which are the components of a Snort content rule. When we applied the proposed method to several application traffic datasets, the generated rules detected more than 97% of the traffic data.
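The core idea, finding a payload substring common to enough flows and emitting it as a Snort content option, can be sketched with a brute-force stand-in for the paper's sequence-pattern algorithm (the rule fields, thresholds, and sample payloads below are illustrative assumptions, not the paper's):

```python
# Hedged sketch: find the longest substring present in at least min_support
# of the flows' payloads, then wrap it in a Snort content rule. The paper
# uses a sequence-pattern algorithm; this brute-force search stands in.

def common_signature(payloads, min_support=0.9, min_len=4):
    """Longest substring of the shortest payload meeting the support bound."""
    base = min(payloads, key=len)
    best = ""
    for i in range(len(base)):
        for j in range(i + min_len, len(base) + 1):
            cand = base[i:j]
            support = sum(cand in p for p in payloads) / len(payloads)
            if support >= min_support and len(cand) > len(best):
                best = cand
    return best

def to_snort_rule(sig, sid=1000001):
    """Illustrative rule skeleton; header fields would come from the flows."""
    return (f'alert tcp any any -> any any '
            f'(msg:"auto-generated"; content:"{sig}"; sid:{sid};)')

payloads = ["GET /app/v1 HTTP/1.1", "GET /app/v2 HTTP/1.1", "GET /app/v1 HTTP/1.0"]
sig = common_signature(payloads)
print(to_snort_rule(sig))
```

The paper's method additionally extracts the signature's offset within the payload and the flow-header information; those would populate the rule's header and position options.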

Proposal and Analysis of Primality and Safe Primality test using Sieve of Euler (오일러체를 적용한 소수와 안전소수의 생성법 제안과 분석)

  • Jo, Hosung;Lee, Jiho;Park, Heejin
    • Journal of IKEEE, v.23 no.2, pp.438-447, 2019
  • As the IoT-based hyper-connected society grows, public-key cryptosystems such as RSA are frequently used for encryption, authentication, and digital signatures. Public-key cryptosystems use very large (safe) primes to ensure security against malicious attacks. Even though device performance has greatly improved, generating a large (safe) prime remains time-consuming or memory-intensive. In this paper, we propose ET-MR and ET-MR-MR, which use the sieve of Euler to run faster while using less memory. We present a running-time prediction model based on probabilistic analysis and compare the time and memory of our methods with conventional ones. Experimental results show that the difference between the expected and measured running times is less than 4%. In addition, the fastest running time of ET-MR is 36% faster than that of TD-MR and 8.5% faster than that of DT-MR, and the fastest running time of ET-MR-MR is 65.3% faster than that of TD-MR-MR and similar to that of DT-MR-MR. When k=12,381, ET-MR uses 2.7 times more memory than DT-MR but 98.5% less than TD-MR; when k=65,536, ET-MR-MR uses 98.48% less memory than TD-MR-MR and 92.8% less than DT-MR-MR.
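The sieve of Euler (linear sieve) that the proposed methods build on differs from the sieve of Eratosthenes in that every composite is struck exactly once, by its smallest prime factor, giving O(n) work. A minimal sketch:

```python
# Hedged sketch of the sieve of Euler (linear sieve): each composite i*p is
# marked once, where p runs over primes up to the smallest prime factor of
# i, so p is always the smallest prime factor of i*p.

def euler_sieve(n):
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:   # p divides i: stop, larger p would double-mark
                break
    return primes

print(euler_sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The paper's ET-MR variants presumably combine such sieving (to discard candidates with small factors cheaply) with Miller-Rabin testing of the survivors; for safe primes, both p and 2p + 1 must pass. This sketch shows only the sieve itself.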

CHEMICAL PROPERTIES OF CORES IN DIFFERENT ENVIRONMENTS; THE ORION A, B AND λ ORIONIS CLOUDS

  • Yi, Hee-Weon;Lee, Jeong-Eun;Liu, Tie;Kim, Kee-Tae
    • The Bulletin of The Korean Astronomical Society, v.44 no.1, pp.42.1-42.1, 2019
  • We observed 80 dense cores (N(H₂) > 10²² cm⁻²) in the Orion molecular cloud complex, which contains the Orion A (39 cores), Orion B (26 cores), and λ Orionis (15 cores) clouds. We investigate the behavior of different molecular tracers and look for chemical variations among cores in the three clouds in order to systematically investigate the effects of stellar feedback. The most commonly detected molecular lines (with detection rates higher than 50%) are N₂H⁺, HCO⁺, H¹³CO⁺, C₂H, HCN, and H₂CO. The detection rates of the dense-gas tracers N₂H⁺, HCO⁺, H¹³CO⁺, and C₂H are lowest in the λ Orionis cloud. We find differences in the D/H ratio of H₂CO and in the N₂H⁺/HCO⁺ abundance ratio among the three clouds. Eight starless cores in the Orion A and B clouds exhibit high deuterium fractionation, larger than 0.10, while no cores in the λ Orionis cloud reach such a high ratio. These chemical properties support the picture that cores in the λ Orionis cloud are affected by photodissociation and external heating from the nearby H II region. An unexpected trend was found in the [N₂H⁺]/[HCO⁺] ratio, with a higher median value in the λ Orionis cloud than in the Orion A/B clouds; typically, this ratio is lower at higher temperatures and lower column densities. This could be explained by a longer prestellar-stage timescale in the λ Orionis cloud, resulting in more abundant nitrogen-bearing molecules. In addition to these chemical differences, a kinematical difference was found among the three clouds: the blue excess, an infall signature found in optically thick line profiles, is 0 in the λ Orionis cloud but 0.11 and 0.16 in the Orion A and B clouds, respectively. This result could be further evidence of the negative feedback of current active star formation on the next generation of star formation.
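The blue-excess statistic quoted for the three clouds is commonly defined in the infall-survey literature as the excess fraction of cores with blue-shifted optically thick line profiles. A one-line illustration (the core counts below are made up; the abstract reports only the final statistic):

```python
# Hedged note: the "blue excess" is commonly defined as
#   E = (N_blue - N_red) / N_total,
# where N_blue / N_red count cores whose optically thick lines are blue- /
# red-shifted relative to an optically thin tracer (blue skew = infall).

def blue_excess(n_blue, n_red, n_total):
    return (n_blue - n_red) / n_total

# Illustrative counts only, chosen to reproduce the Orion B value of 0.16:
print(round(blue_excess(7, 3, 25), 2))   # 0.16
```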

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park;TaeHoon Jeon;JungHo Song;MinSu Ju;JinYoung Chung;KiNam Kwon;WooChul Choi;JaeHak Cheong
    • Journal of Radiation Industry, v.17 no.4, pp.345-357, 2023
  • Nuclear forensics is understood internationally as a mandatory component of nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when a nuclear material was purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated with evaluation methods that properly represent accuracy and reliability. The IAEA, the US, and the EU have researched radiochronometry and its measurement uncertainty. However, uncertainty calculation based directly on the Bateman equation is limited by underestimation of the decay constants and by the impossibility of estimating ages beyond one generation of the decay chain, which highlights the need for computational simulation such as the Monte Carlo method. In this study, we analyzed mathematical models and the LHS (Latin Hypercube Sampling) method to develop an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation and to enhance its reliability. The LHS method, which obtains effective statistical results from a small number of samples, was analyzed and applied to Monte Carlo uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty model based on the mathematical models showed characteristics governed by the relationship between sensitivity coefficients and radioactive equilibrium. Computational random sampling showed characteristics dependent on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from various international organizations, the mathematical models, and the Monte Carlo method; the developed algorithm performed calculations at a level of accuracy equivalent to that of overseas institutions and mathematical-model-based methods. To enhance usability, future research and comparative validation should incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for identifying signature nuclides and aiding the research, development, comparison, and validation of related technologies.
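The combination of a Bateman-equation model age with LHS-based Monte Carlo uncertainty propagation can be sketched for the simplest case, a parent-daughter pair that starts chemically pure (the decay constants, uncertainty level, and function names below are toy assumptions, not the paper's nuclides or MATLAB implementation):

```python
import random
from math import exp, log
from statistics import NormalDist, mean, pstdev

# Hedged sketch: for a chemically purified parent (daughter-free at t=0),
# the Bateman solution gives the daughter/parent ratio
#   R(t) = (lp / (ld - lp)) * (1 - exp(-(ld - lp) * t)),
# which inverts to a "model age". LHS draws one stratified sample per
# equal-probability interval of the measured ratio's distribution, and the
# spread of the resulting ages is the propagated uncertainty.

def model_age(R, lp, ld):
    """Invert R(t) for t (lp, ld = parent, daughter decay constants)."""
    return -log(1 - R * (ld - lp) / lp) / (ld - lp)

def lhs_normal(mu, sd, n, rng):
    """Latin hypercube sample of a normal: one draw per stratum, shuffled."""
    nd = NormalDist(mu, sd)
    u = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(u)
    return [nd.inv_cdf(p) for p in u]

rng = random.Random(42)
lp, ld = 1e-4, 5e-3      # toy decay constants in 1/year (not real nuclides)
true_t = 50.0            # years since purification, assumed
R_true = lp / (ld - lp) * (1 - exp(-(ld - lp) * true_t))

# Propagate a hypothetical 2% relative measurement uncertainty on R:
ages = [model_age(R, lp, ld)
        for R in lhs_normal(R_true, 0.02 * R_true, 1000, rng)]
print(round(mean(ages), 1), round(pstdev(ages), 2))  # age near 50 yr, with spread
```

The stratification is what lets LHS reach stable statistics with fewer samples than plain random sampling, which is the property the study exploits; real chronometers involve longer decay chains, where the closed-form inversion no longer applies and the simulation approach becomes essential.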