• Title/Summary/Keyword: Scoring algorithm

Application of peak based-Bayesian statistical method for isotope identification and categorization of depleted, natural and low enriched uranium measured by LaBr3:Ce scintillation detector

  • Haluk Yucel;Selin Saatci Tuzuner;Charles Massey
    • Nuclear Engineering and Technology
    • /
    • v.55 no.10
    • /
    • pp.3913-3923
    • /
    • 2023
  • Today, medium-energy-resolution detectors are commonly used in radioisotope identification devices (RIDs) for categorizing nuclear and radioactive materials. However, there is still a need to develop or enhance automated identifiers for useful RID algorithms. A key parameter in deciding whether a material is SNM or NORM is the energy resolution of the detector. Although masking, shielding, gain shift/stabilization, and other on-site factors are also important for successful operations, the suitability of the RID algorithm is a critical point for enhancing identification reliability when extracting features from the spectral analysis. In this study, a RID algorithm based on a Bayesian statistical method was modified for medium-energy-resolution detectors and applied to uranium gamma-ray spectra taken with a LaBr3:Ce detector. The present Bayesian RID algorithm covers the energy range up to 2000 keV and uses the peak centroids and peak areas from the measured gamma-ray spectra. The extracted features feed peak-based Bayesian classifiers that estimate a posterior probability for each isotope in the ANSI library. The program operations were tested on a MATLAB platform. The present peak-based Bayesian RID algorithm was validated using single isotopes (241Am, 57Co, 137Cs, 54Mn, 60Co), and then applied to five standard nuclear materials (0.32-4.51 at.% 235U), as well as natural U and Th ores. The identification performance of the RID algorithm was quantified in terms of the F-score for each isotope. The posterior probability was calculated to be 54.5-74.4% for 238U and 4.7-10.5% for 235U in EC-NRM171 uranium materials. For the more complex gamma-ray spectra from CRMs, the total scoring (ST) method was preferred for evaluating identification performance. It was shown that the present peak-based Bayesian RID algorithm can be applied to identify 235U and 238U isotopes in LEU or natural U-Th samples if a medium-energy-resolution detector is used in the measurements.
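
As a rough illustration of the peak-based Bayesian classification described above, the sketch below computes a posterior probability for each library isotope from measured peak centroids. The mini-library, the Gaussian peak-matching likelihood, and the resolution parameter are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Hypothetical mini-library of gamma lines (keV) per isotope; a real RID
# would use the full ANSI library with branching ratios and peak areas.
LIBRARY = {
    "Cs-137": [661.7],
    "Co-60":  [1173.2, 1332.5],
    "U-235":  [143.8, 185.7, 205.3],
    "U-238":  [766.4, 1001.0],   # via Pa-234m daughter lines
}

def posterior(peaks_kev, sigma_kev=5.0, prior=None):
    """Peak-based Bayesian scoring: P(isotope | peaks) ~ prior * likelihood,
    where each measured peak is matched to the nearest library line under a
    Gaussian energy-response model (sigma_kev reflects detector resolution)."""
    prior = prior or {iso: 1.0 / len(LIBRARY) for iso in LIBRARY}
    scores = {}
    for iso, lines in LIBRARY.items():
        like = 1.0
        for peak in peaks_kev:
            d = min(abs(peak - line) for line in lines)  # nearest line
            like *= math.exp(-0.5 * (d / sigma_kev) ** 2) + 1e-12
        scores[iso] = prior[iso] * like
    z = sum(scores.values())  # normalize to a posterior distribution
    return {iso: s / z for iso, s in scores.items()}

print(posterior([185.9, 1001.5]))  # peaks consistent with uranium isotopes
```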

Korean Sentence Boundary Detection Using Memory-based Machine Learning (메모리 기반의 기계 학습을 이용한 한국어 문장 경계 인식)

  • Han Kun-Heui;Lim Heui-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.4 no.4
    • /
    • pp.133-139
    • /
    • 2004
  • This paper proposes a Korean sentence boundary detection system that employs the k-nearest neighbor algorithm. We propose three scoring functions to classify sentence boundaries and perform a comparative analysis. The system uses domain-independent linguistic features in order to be general and robust. The proposed system was trained and evaluated on two corpora, the ETRI corpus and the KAIST corpus. Experimental results show that the proposed system achieves about 98.82% precision and 99.09% recall even though it was trained on a relatively small corpus.
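
A minimal sketch of the memory-based (k-nearest-neighbor) idea: candidate boundaries are represented by simple language-independent features, and the majority vote over the k closest stored examples acts as one possible scoring function. The features and distance here are illustrative assumptions, not the paper's exact feature set.

```python
from collections import Counter

def features(text, i):
    """Simple, domain-independent features around a candidate boundary
    at index i (a '.', '?' or '!'); these are illustrative stand-ins."""
    prev_ch = text[i - 1] if i > 0 else " "
    nxt = text[i + 1: i + 3].lstrip()
    return (
        prev_ch.isdigit(),                    # e.g. "3.14" -> not a boundary
        prev_ch.isupper(),                    # abbreviation like "U.S."
        nxt[:1].isupper() if nxt else False,  # next token capitalized
        text[i + 1: i + 2] == " ",            # whitespace follows
    )

def knn_is_boundary(train, x, k=3):
    """Memory-based classification: keep all training instances and score a
    new candidate by majority vote of its k nearest neighbors under Hamming
    distance on the boolean feature vector."""
    dist = lambda a, b: sum(f1 != f2 for f1, f2 in zip(a, b))
    nearest = sorted(train, key=lambda ex: dist(ex[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Tiny hand-labeled memory: (features, is_sentence_boundary)
train = [
    (features("He left. She stayed.", 7), True),
    (features("Pi is 3.14 exactly.", 7), False),
    (features("Dr. Kim arrived.", 2), False),
    (features("It works! Try it.", 8), True),
]
s = "It ended. Then rain."
print(knn_is_boundary(train, features(s, 8)))  # True expected
```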

Appearance-Order-Based Schema Matching

  • Ding, Guohui;Cao, Keyan;Wang, Guoren;Han, Dong
    • Journal of Computing Science and Engineering
    • /
    • v.8 no.2
    • /
    • pp.94-106
    • /
    • 2014
  • Schema matching is widely used in many applications, such as data integration, ontology merging, data warehousing, and dataspaces. In this paper, we propose a novel matching technique based on the order in which attributes appear in the schema structure of query results. The appearance order reflects how important an attribute is to the user examining the query results. The core idea of our approach is to collect statistics about the appearance order of attributes from query logs and use them to find correspondences between attributes in the schemas to be matched. As a first step, we employ a matrix to structure the statistics on the appearance order of attributes. Then, two scoring functions are considered to measure the similarity of the collected statistics. Finally, a traditional algorithm is employed to find the mapping with the highest score. Furthermore, our approach can be seen as a complementary member of the family of existing matchers, and can be combined with them to obtain more accurate results. We validate our approach with an experimental study, the results of which demonstrate that it is effective and performs well.
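
The following sketch illustrates the pipeline the abstract describes, under assumed details: each attribute's appearance-order statistics form a vector of position counts from query logs, cosine similarity serves as one plausible scoring function, and the Hungarian method (via SciPy) stands in for the "traditional algorithm" that finds the highest-scoring mapping.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assumed log statistics: entry [j] of each vector counts how often the
# attribute appeared at position j in query results (values made up).
stats_s = {"name": [90, 8, 2], "price": [5, 80, 15], "qty": [5, 12, 83]}
stats_t = {"title": [88, 10, 2], "cost": [7, 78, 15], "count": [5, 12, 83]}

def cosine(u, v):
    """One candidate scoring function: cosine similarity of the
    appearance-order count vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

src, tgt = list(stats_s), list(stats_t)
sim = np.array([[cosine(stats_s[a], stats_t[b]) for b in tgt] for a in src])

# The Hungarian algorithm maximizes total similarity (by minimizing the
# negated matrix), yielding the one-to-one mapping with the highest score.
rows, cols = linear_sum_assignment(-sim)
for i, j in zip(rows, cols):
    print(f"{src[i]} <-> {tgt[j]}  (score {sim[i, j]:.3f})")
```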

A Study on the Design and Implementation of System for Predicting Attack Target Based on Attack Graph (공격 그래프 기반의 공격 대상 예측 시스템 설계 및 구현에 대한 연구)

  • Kauh, Janghyuk;Lee, Dongho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.16 no.1
    • /
    • pp.79-92
    • /
    • 2020
  • As the number of systems grows and networks become larger, automated attack-prediction systems are urgently needed to respond to cyber attacks. In this study, we developed four types of information-gathering sensors for collecting asset and vulnerability information, along with technology to automatically generate attack graphs and predict attack targets. To improve performance, attack graph generation is divided into a reachability calculation process and a vulnerability assignment process, and the graph is kept up to date by restarting calculations whenever asset or vulnerability information changes. To improve the accuracy of attack target prediction, both the degree of asset risk and the degree of asset reference are reflected: asset risk draws on the CVSS (Common Vulnerability Scoring System), and asset reference on Google's PageRank algorithm. The results of attack target prediction are displayed on a web screen and in the CyCOP (Cyber Common Operation Picture) to help both analysts and decision makers.
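
A hedged sketch of how the two signals might be combined: a plain power-iteration PageRank over the asset reference graph, blended with a normalized CVSS base score. The graph, scores, and weighting below are illustrative assumptions; the abstract does not specify the paper's actual weighting scheme.

```python
# Hypothetical asset reference graph: edges point from an asset to the
# assets it references (e.g., web server -> DB server).
refs = {"web": ["db", "auth"], "auth": ["db"], "db": [], "pc": ["web"]}
cvss = {"web": 7.5, "auth": 5.3, "db": 9.8, "pc": 4.3}  # made-up base scores

def pagerank(graph, d=0.85, iters=50):
    """Plain power-iteration PageRank; dangling nodes spread rank uniformly."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            out = graph[v] or nodes          # dangling node -> all nodes
            share = d * rank[v] / len(out)
            for w in out:
                new[w] += share
        rank = new
    return rank

pr = pagerank(refs)
pr_max = max(pr.values())

# Assumed blend: target score = w1 * normalized CVSS + w2 * normalized PageRank.
w1, w2 = 0.6, 0.4
score = {v: w1 * cvss[v] / 10.0 + w2 * pr[v] / pr_max for v in refs}
for v, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {s:.3f}")
```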

Tool-trajectory Error at the Singular Area of Five-axis Machining - Part I: Trajectory Error Modeling - (5축 가공의 특이영역에서 공구궤적 오차 - Part I: 궤적오차 모델링 -)

  • So, Bum-Sik;Jung, Yoong-Ho;Yun, Jae-Deuk
    • Korean Journal of Computational Design and Engineering
    • /
    • v.14 no.1
    • /
    • pp.18-24
    • /
    • 2009
  • This paper proposes an analytical method for evaluating the maximum error by modeling the exact tool path when the tool traverses a singular region in five-axis machining. It is known that NC data from the inverse kinematics transformation of five-axis machining can generate singular positions where incoherent movements of the rotary axes appear. These lead to unexpected errors and abrupt operations, resulting in scoring on the machined surface. To resolve this problem, previous methods have calculated several tool positions during a singular operation, using inverse kinematics equations to predict the tool trajectory and approximate the maximum error. This type of numerical approach to configuring the tool trajectory requires much computation time to obtain a sufficient number of tool positions in a region. We have derived an analytical equation for the tool trajectory in a singular area by modeling the tool operation as a linear and a nonlinear part; this is a general form of the tool trajectory in the singular area and is suitable for all types of five-axis machine tools. In addition, we have evaluated the maximum tool-path error exactly, using our analytical model. Our algorithm can be used to modify NC data, making the operation smoother and bringing any errors to within tolerance.
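
To make the singularity issue concrete, the sketch below numerically traces the tool-tip path of a generic two-rotary (A/C) configuration when the controller interpolates all axes linearly through a near-singular block; this reproduces the kind of trajectory sampling the earlier numerical methods used, not the paper's analytical model. The kinematic convention, tool length, and angles are illustrative assumptions.

```python
import math

L = 100.0  # assumed tool length offset (mm) from pivot to tool tip

def tool_axis(a_deg, c_deg):
    """Tool-axis unit vector for a generic A/C rotary configuration
    (one common convention; actual kinematics vary by machine type)."""
    a, c = math.radians(a_deg), math.radians(c_deg)
    return (math.sin(a) * math.sin(c), math.sin(a) * math.cos(c), math.cos(a))

def lerp(p, q, t):
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

def dist_point_segment(x, a, b):
    """Distance from point x to segment ab (the ideal straight tip path)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ax = [xi - ai for ai, xi in zip(a, x)]
    t = sum(u * v for u, v in zip(ab, ax)) / sum(u * u for u in ab)
    t = max(0.0, min(1.0, t))
    c = [ai + t * u for ai, u in zip(a, ab)]
    return math.dist(x, c)

# Near-singular block: tool almost vertical (A small) while C must sweep
# nearly 180 deg; the controller interpolates every axis linearly.
a0, c0, a1, c1 = 1.0, 0.0, 1.0, 179.0
p0, p1 = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)   # pivot point moves linearly

def tip(t):
    u = tool_axis(a0 + t * (a1 - a0), c0 + t * (c1 - c0))
    p = lerp(p0, p1, t)
    return tuple(pi + L * ui for pi, ui in zip(p, u))

chord = (tip(0.0), tip(1.0))
max_err = max(dist_point_segment(tip(i / 200), *chord) for i in range(201))
print(f"max tool-tip deviation from the linear path: {max_err:.3f} mm")
```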

Development of an X-window Program, XFAP, for Assembling Contigs from DNA Fragment Data (DNA 염기 서열로부터 contig 구성을 위한 프로그램 XFAP의 개발)

  • Lee, Byung-Uk;Park, Kie-Jung;Kim, Seung-Moak
    • Korean Journal of Microbiology
    • /
    • v.34 no.1_2
    • /
    • pp.58-63
    • /
    • 1998
  • The fragment assembly problem is to reconstruct DNA sequence contigs from a collection of fragment sequences. We have developed an efficient X-window program, XFAP, for assembling DNA fragments. In XFAP, a dimer frequency comparison method is used to quickly eliminate pairs of fragments that cannot overlap. This method exploits the difference in dimer frequencies within the minimum acceptable overlap length of each fragment pair. The Hirschberg algorithm is applied to compute the maximal-scoring overlapping alignment in linear space. The performance of XFAP was tested on a set of DNA fragment sequences extracted from long GenBank sequences by a fragmentation program, and showed a great improvement in execution time, especially as the number of fragments increases.
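
A sketch of the dimer-frequency pre-filter idea, under assumed details: count the 16 dinucleotide frequencies in the end regions of each fragment pair, and skip the expensive alignment whenever the profiles differ too much. The threshold rule is an illustrative stand-in for the paper's exact criterion.

```python
from collections import Counter
from itertools import product

def dimer_counts(seq):
    """Counts of the 16 possible dinucleotides in seq."""
    c = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    return {"".join(d): c["".join(d)] for d in product("ACGT", repeat=2)}

def may_overlap(frag_a, frag_b, min_overlap=20, max_diff=8):
    """Cheap filter: if frag_a's suffix and frag_b's prefix of the minimum
    acceptable overlap length have very different dimer profiles, the pair
    cannot overlap well and the costly alignment can be skipped."""
    ca = dimer_counts(frag_a[-min_overlap:])
    cb = dimer_counts(frag_b[:min_overlap])
    l1 = sum(abs(ca[d] - cb[d]) for d in ca)   # L1 distance of profiles
    return l1 <= max_diff                      # assumed threshold rule

a = "TTGACGGATCCGATTACAGGATCCAGTCAA"
b = "GATTACAGGATCCAGTCAATTTGGCCAATT"  # prefix matches a's suffix
c = "CCCCCCCCCCCCCCCCCCCCCCCCCCCCCC"
print(may_overlap(a, b))  # True: worth running the Hirschberg alignment
print(may_overlap(a, c))  # False: eliminated without any alignment
```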

Design of Query Processing System to Retrieve Information from Social Network using NLP

  • Virmani, Charu;Juneja, Dimple;Pillai, Anuradha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1168-1188
    • /
    • 2018
  • Social network aggregators are used to maintain and manage multiple accounts over multiple online social networks. Displaying the activity feed of each social network on a common dashboard has long been the status quo of social aggregators; however, retrieving the desired data from the various social networks remains a major concern. A user inputs a query desiring a specific outcome from the social networks. Since the intention of the query is known only to the user, the output may not meet the user's expectations unless the system considers 'user-centric' factors. Moreover, the quality of the solution depends on these user-centric factors, the user's inclination, and the nature of the network. Thus, there is a need for a system that understands the user's intent and serves structured objects. Choosing the best execution plan and optimal ranking functions is also a high-priority concern. The current work is motivated by these requirements and proposes the design of a query processing system that extracts the user's intent and retrieves information from various social networks. Machine learning techniques such as Latent Dirichlet Allocation (LDA) and a ranking algorithm are incorporated to improve the query results and fetch the information using data mining techniques. The proposed framework contributes a user-centric, natural-language query retrieval model, and it is worth mentioning that the framework is efficient when compared on temporal metrics. The proposed Query Processing System to Retrieve Information from Social Network (QPSSN) increases the discoverability of the user and helps businesses collaboratively execute promotions and discover new networks and people. It is an innovative approach to investigating new aspects of social networks, and the proposed model achieves significant improvements in both precision and recall.
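
As a rough sketch of how LDA plus a ranking step could order social posts against a query, the snippet below uses scikit-learn's LatentDirichletAllocation and ranks by similarity of topic distributions. The toy corpus, topic count, and cosine ranking are illustrative assumptions, not QPSSN's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in for aggregated posts from multiple social networks.
posts = [
    "new phone camera review battery life",
    "election debate policy vote results",
    "match goal score league final win",
    "phone battery charger deal discount",
]
query = ["best phone battery deal"]

# Fit LDA topics on the post corpus, then project both the posts and the
# query into topic space.
vec = CountVectorizer()
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics_posts = lda.fit_transform(X)
topics_query = lda.transform(vec.transform(query))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Rank posts by topic-space similarity to the query's inferred intent.
scores = [cosine(topics_query[0], tp) for tp in topics_posts]
for i in np.argsort(scores)[::-1]:
    print(f"{scores[i]:.3f}  {posts[i]}")
```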

Accurate Measurement of Agatston Score Using kVp-Independent Reconstruction Algorithm for Ultra-High-Pitch Sn150 kVp CT

  • Xi Hu;Xinwei Tao;Yueqiao Zhang;Zhongfeng Niu;Yong Zhang;Thomas Allmendinger;Yu Kuang;Bin Chen
    • Korean Journal of Radiology
    • /
    • v.22 no.11
    • /
    • pp.1777-1785
    • /
    • 2021
  • Objective: To investigate the accuracy of the Agatston score obtained with the ultra-high-pitch (UHP) acquisition mode using tin-filter spectral shaping (Sn150 kVp) and a kVp-independent reconstruction algorithm to reduce the radiation dose. Materials and Methods: This prospective study included 114 patients (mean ± standard deviation, 60.3 ± 9.8 years; 74 male) who underwent a standard 120 kVp scan and an additional UHP Sn150 kVp scan for coronary artery calcification scoring (CACS). These two datasets were reconstructed using a standard reconstruction algorithm (120 kVp + Qr36d, protocol A; Sn150 kVp + Qr36d, protocol B). In addition, the Sn150 kVp dataset was reconstructed using a kVp-independent reconstruction algorithm (Sn150 kVp + Sa36d, protocol C). The Agatston scores for protocols A and B, as well as protocols A and C, were compared. Agreement between the scores was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. The radiation doses for the 120 kVp and UHP Sn150 kVp acquisition modes were also compared. Results: No significant difference was observed between the Agatston scores for protocols A (median, 63.05; interquartile range [IQR], 0-232.28) and C (median, 60.25; IQR, 0-195.20) (p = 0.060). The mean difference in the Agatston score between protocols A and C was relatively small (-7.82), with limits of agreement from -65.20 to 49.56 (ICC = 0.997). The Agatston score for protocol B (median, 34.85; IQR, 0-120.73) was significantly underestimated compared with that for protocol A (p < 0.001). The UHP Sn150 kVp mode reduced the effective radiation dose by approximately 30% compared with the standard 120 kVp mode (0.58 vs. 0.82 mSv, p < 0.001). Conclusion: The Agatston scores for CACS with the UHP Sn150 kVp mode with a kVp-independent reconstruction algorithm and with the standard 120 kVp mode demonstrated excellent agreement, with a small mean difference and narrow agreement limits. The UHP Sn150 kVp mode allowed a significant reduction in the radiation dose.
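
For readers unfamiliar with the metric, this sketch computes a per-slice Agatston contribution in the standard way: connected lesions of at least 1 mm² with attenuation ≥ 130 HU are weighted 1-4 by their peak HU and multiplied by their area. The array and pixel size are made up; clinical software also applies slice-thickness conventions not shown here.

```python
import numpy as np
from scipy import ndimage

def agatston_slice(hu, pixel_area_mm2):
    """Agatston contribution of one axial slice.

    Standard rule: threshold at 130 HU, keep lesions >= 1 mm^2, weight each
    lesion's area by its peak attenuation:
    130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4.
    """
    mask = hu >= 130
    labels, n = ndimage.label(mask)        # connected calcified lesions
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        area = region.sum() * pixel_area_mm2
        if area < 1.0:                     # ignore sub-millimeter specks
            continue
        peak = hu[region].max()
        weight = min(4, int(peak // 100))  # 130->1, 250->2, 399->3, 400+->4
        score += area * weight
    return score

# Tiny synthetic slice: one 6-pixel lesion peaking at 320 HU.
hu = np.zeros((8, 8))
hu[2:4, 2:5] = [[180, 320, 150], [140, 210, 135]]
print(agatston_slice(hu, pixel_area_mm2=0.5))  # 6 px * 0.5 mm^2 * weight 3
```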

Implementation of Parallel Local Alignment Method for DNA Sequence using Apache Spark (Apache Spark을 이용한 병렬 DNA 시퀀스 지역 정렬 기법 구현)

  • Kim, Bosung;Kim, Jinsu;Choi, Dojin;Kim, Sangsoo;Song, Seokil
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.10
    • /
    • pp.608-616
    • /
    • 2016
  • The Smith-Waterman (SW) algorithm is a local alignment algorithm and one of the important operations in DNA sequence analysis. The SW algorithm finds the optimal local alignment with respect to the scoring system being used, but it demands a long execution time. To address this, several methods that perform SW in a distributed and parallel manner have been proposed. ADAM, a distributed and parallel processing framework for DNA sequences, provides a parallel SW implementation. However, ADAM's parallel SW does not account for the fact that SW is a dynamic programming method, which limits its performance. In this paper, we propose a method to enhance the parallel SW of ADAM. The proposed parallel SW (PSW) is performed in two phases. In the first phase, the PSW splits a DNA sequence into a number of partitions and assigns them to multiple nodes; the original Smith-Waterman algorithm is then performed in parallel at each node. In the second phase, the PSW estimates the portions of the sequence that should be recalculated, and the recalculation is performed on those portions in parallel at each node. In our experiments, we compare the proposed PSW to the parallel SW of ADAM to show the superiority of the PSW.
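
For reference, a compact single-node Smith-Waterman implementation is sketched below with an assumed simple scoring system (match +2, mismatch -1, linear gap -1); the PSW partitions this same dynamic program across Spark nodes and then repairs the partition boundaries in its second phase.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Classic local-alignment DP: H[i][j] is the best score of any local
    alignment ending at a[i-1], b[j-1]; negatives are clamped to zero so
    alignments can restart anywhere."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos

score, pos = smith_waterman("ACACACTA", "AGCACACA")
print(score, pos)  # optimal local alignment score and its end cell
```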

An Efficient Top-k Query Processing Algorithm over Encrypted Outsourced-Data in the Cloud (아웃소싱 암호화 데이터에 대한 효율적인 Top-k 질의 처리 알고리즘)

  • Kim, Jong Wook;Suh, Young-Kyoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.12
    • /
    • pp.543-548
    • /
    • 2015
  • Top-k query processing has recently become extremely important along with the explosion of data produced by a variety of applications. Top-k queries return the best k results ordered by a user-provided monotone scoring function. As cloud computing services have become more popular than ever, much attention has been paid to cloud-based data outsourcing, in which clients' data are stored and managed by the cloud. Cloud-based data outsourcing, though, exposes a critical security concern: sensitive data may be misused by unauthorized users. Hence it is essential to encrypt sensitive data before outsourcing them to the cloud. However, little attention has been paid to efficient top-k processing on encrypted cloud data. In this paper we propose a novel top-k processing algorithm that can efficiently process a large amount of encrypted data in the cloud. The main idea of the algorithm is to prune unpromising intermediate results at an early phase, without decrypting the encrypted data, by leveraging an order-preserving encryption technique. Experimental results show that the proposed top-k processing algorithm reduces the overhead of client systems by 10x to 10,000x.
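
A toy sketch of the pruning idea: because an order-preserving encryption (OPE) scheme keeps ciphertext order equal to plaintext order, the server can maintain a top-k heap and discard intermediate results early, entirely on ciphertexts. The strictly increasing toy "encryption" below is only a stand-in for a real OPE scheme and offers no security.

```python
import heapq

SECRET_A, SECRET_B = 7, 1234  # toy key material (illustrative only)

def ope_encrypt(score):
    """Toy order-preserving 'encryption': any strictly increasing function
    preserves comparisons, which is all the server-side pruning needs.
    A real deployment would use an actual OPE scheme; this is NOT secure."""
    return SECRET_A * score + SECRET_B

def server_top_k(encrypted_items, k):
    """Runs in the cloud on ciphertexts only: keep a min-heap of the k
    largest encrypted scores and prune anything below the current k-th
    threshold without ever decrypting."""
    heap = []  # (enc_score, item_id)
    pruned = 0
    for item_id, enc_score in encrypted_items:
        if len(heap) < k:
            heapq.heappush(heap, (enc_score, item_id))
        elif enc_score > heap[0][0]:
            heapq.heapreplace(heap, (enc_score, item_id))
        else:
            pruned += 1  # early pruning: below the encrypted threshold
    return sorted(heap, reverse=True), pruned

# Client encrypts scores before outsourcing; the server ranks blindly.
data = [(f"doc{i}", ope_encrypt(s)) for i, s in enumerate([3, 9, 1, 7, 5, 8, 2])]
top, pruned = server_top_k(data, k=3)
print([item for _, item in top], f"pruned={pruned}")
```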