• Title/Summary/Keyword: Code Clustering


When do cosmic peaks, filaments, or walls merge? A theory of critical events in a multiscale landscape

  • C Cadiou; C Pichon; S Codis; M Musso; D Pogosyan; Y Dubois; J-F Cardoso; S Prunet
    • Monthly Notices of the Royal Astronomical Society / v.496 no.4 / pp.4787-4821 / 2020
  • The merging rate of cosmic structures is computed, relying on the ansatz that they can be predicted in the initial linear density field from the coalescence of critical points with increasing smoothing scale, used here as a proxy for cosmic time. Beyond the mergers of peaks with saddle points (a proxy for halo mergers), we consider the coalescence and nucleation of all sets of critical points, including wall-saddle to filament-saddle and wall-saddle to minima (a proxy for filament and void mergers, respectively), as they impact the geometry of galactic infall, and in particular filament disconnection. Analytical predictions of the one-point statistics are validated against multiscale measurements in 2D and 3D realizations of Gaussian random fields (the corresponding code being available upon request) and compared qualitatively to cosmological N-body simulations at early times (z ≥ 10) and large scales (≥ 5 h⁻¹ Mpc). The rate of filament coalescence is compared to the merger rate of haloes and the two-point clustering of these events is computed, along with their cross-correlations with critical points. These correlations are qualitatively consistent with the preservation of the connectivity of dark matter haloes, and the impact of the large-scale structures on assembly bias. The destruction rate of haloes and voids as a function of mass and redshift is quantified down to z = 0 for a Lambda cold dark matter cosmology. The one-point statistics in higher dimensions are also presented, together with consistency relations between critical point and critical event counts.
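
    The abstract states that the authors' multiscale measurement code is available upon request; the sketch below is not that code but a minimal numpy/scipy stand-in for the setup it describes: smooth a 2D Gaussian random field at increasing scales R (the paper's proxy for cosmic time) and count the extrema that survive.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

    # Generate a 2D Gaussian random field on a periodic grid.
    rng = np.random.default_rng(42)
    field = rng.standard_normal((512, 512))

    def count_extrema(f):
        """Count local maxima and minima over the 8-pixel neighbourhood."""
        n_max = int((f == maximum_filter(f, size=3, mode="wrap")).sum())
        n_min = int((f == minimum_filter(f, size=3, mode="wrap")).sum())
        return n_max, n_min

    # Increase the smoothing scale and watch critical points disappear
    # as they coalesce pairwise with saddle points.
    for R in (2, 4, 8, 16, 32):
        smoothed = gaussian_filter(field, sigma=R, mode="wrap")
        n_max, n_min = count_extrema(smoothed)
        print(f"R = {R:2d} px: {n_max:4d} maxima, {n_min:4d} minima")
    ```

    Each net drop in the extremum count between successive scales corresponds to critical events of the kind the paper counts (a peak or minimum annihilating with a saddle point); the paper's analytical predictions concern the rates and clustering of exactly such events.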

A Study on Automatic Classification Technique of Malware Packing Type (악성코드 패킹유형 자동분류 기술 연구)

  • Kim, Su-jeong; Ha, Ji-hee; Lee, Tae-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.5 / pp.1119-1127 / 2018
  • Most cyber attacks are carried out through malicious code, and the damage they cause is expanding from cyberspace into IoT and CPS environments, posing a serious threat to real life rather than remaining a purely online problem. Accordingly, a variety of malicious code analysis techniques have appeared. Dynamic analysis has been widely used because it readily reveals the resulting malicious behavior, but it struggles with the growing number of anti-VM malware samples that detect virtual machine environments and refuse to run. Static analysis, on the other hand, is hindered by the variety of packing techniques. In this paper, we propose a technique that classifies malware packing types through a single model, regardless of whether the packer is known or unknown. To this end, we designed supervised and unsupervised learning models over features that can be extracted from the PE structure, and verified the results on 98,000 samples. We expect that accurate analysis will become possible through analysis technology customized for each resulting class.
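
    As a rough illustration of the pipeline this abstract outlines (not the paper's actual feature set or models), the sketch below pulls a few packing-sensitive features from the PE structure with pefile, then runs a supervised classifier for known packers and a clustering step for unknown ones. The file lists and labels (known_paths, y_known, unknown_paths) are hypothetical placeholders.

    ```python
    import numpy as np
    import pefile                                  # pip install pefile
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def pe_features(path):
        """A few packing-sensitive features from the PE structure (illustrative)."""
        pe = pefile.PE(path, fast_load=True)
        entropies = [s.get_entropy() for s in pe.sections]
        # Packers often leave sections whose virtual size far exceeds raw size.
        ratios = [s.Misc_VirtualSize / max(s.SizeOfRawData, 1) for s in pe.sections]
        return [
            len(pe.sections),                      # packers add or rename sections
            max(entropies, default=0.0),           # packed payloads are high-entropy
            float(np.mean(entropies)) if entropies else 0.0,
            max(ratios, default=0.0),
        ]

    # Hypothetical sample lists; replace with real file paths and packer labels.
    known_paths, y_known = ["samples/upx_01.exe"], ["UPX"]
    unknown_paths = ["samples/unknown_01.exe"]

    # Supervised branch: classify samples packed with known packers.
    X_known = np.array([pe_features(p) for p in known_paths])
    clf = RandomForestClassifier(n_estimators=100).fit(X_known, y_known)

    # Unsupervised branch: cluster the remaining samples so each unknown
    # packing type forms a class that can get its own customized analysis.
    X_unknown = np.array([pe_features(p) for p in unknown_paths])
    clusters = KMeans(n_clusters=8, n_init=10).fit_predict(X_unknown)
    ```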

Benchmark Results of a Monte Carlo Treatment Planning System (몬테카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics / v.13 no.3 / pp.149-155 / 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing make the clinical use of Monte Carlo based dose calculation possible. To compare the speed and accuracy of dose calculations between different codes, a set of benchmark tests was proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising a cluster of 28 Intel Pentium CPUs of various models was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using these benchmark tests. The benchmark procedure comprises three parts: a) speed of photon beam dose calculation inside a given phantom of 30.5 cm × 39.5 cm × 30 cm depth, filled with 5 mm³ voxels, to within 2% statistical uncertainty; b) speed of electron beam dose calculation inside the same phantom as that of the photon beams; c) accuracy of photon and electron beam calculation inside a heterogeneous slab phantom, compared against reference EGS4/PRESTA calculations. In the speed benchmark, it took 5.5 minutes to achieve less than 2% statistical uncertainty for 18 MV photon beams. Although the net calculation for electron beams was an order of magnitude faster than for photon beams, the overall calculation time was similar to the photon case because of the overhead needed to maintain parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system shows clinically meaningful results. Although more efficient codes such as MCDOSE and VMC++ have since been developed, BEAMnrc, based on the EGSnrc code system, may be used for routine clinical Monte Carlo treatment planning in conjunction with the clustering technique.
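
    To make the "2% statistical uncertainty" criterion concrete, here is a toy batch-style Monte Carlo sketch (not EGSnrc/BEAMnrc, and with entirely made-up physics): each batch plays the role of one CPU in the cluster, and the per-voxel relative uncertainty is the standard error over batches, which shrinks as 1/√N with the total number of histories.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_batches, histories_per_batch, n_voxels = 28, 20_000, 60   # 28 ~ one per CPU

    def run_batch(rng, n_hist, n_voxels):
        """Deposit each history's energy at an exponentially sampled depth."""
        dose = np.zeros(n_voxels)
        depth = rng.exponential(scale=15.0, size=n_hist)        # interaction depth
        idx = np.clip(depth.astype(int), 0, n_voxels - 1)
        np.add.at(dose, idx, rng.exponential(scale=1.0, size=n_hist))
        return dose

    batches = np.array([run_batch(rng, histories_per_batch, n_voxels)
                        for _ in range(n_batches)])
    mean_dose = batches.mean(axis=0)
    # Standard error of the mean over batches, relative to the local dose:
    rel_unc = batches.std(axis=0, ddof=1) / np.sqrt(n_batches) / mean_dose
    print(f"max relative uncertainty: {rel_unc.max():.1%}")     # shrinks ~ 1/sqrt(N)
    ```

    Doubling the histories per batch cuts the uncertainty by roughly 1/√2, which is why the 2% target fixes the 5.5-minute runtime quoted above; the batch bookkeeping is also where the parallel overhead mentioned for electron beams enters, since every CPU's partial sums must be gathered and combined.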
