• Title/Summary/Keyword: Maximum likelihood detection


Closed-Form Expression of Approximate ML DOA Estimates in Bistatic MIMO Radar System (바이스태틱 MIMO 레이다 시스템에 적용되는 ML 도래각 추정 알고리즘의 근사 추정치에 대한 Closed-Form 표현)

  • Paik, Ji Woong;Kim, Jong-Mann;Lee, Joon-Ho
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.28 no.11
    • /
    • pp.886-893
    • /
    • 2017
  • Recently, bistatic and multistatic radars have been widely employed for the detection of low-RCS targets. In this paper, we derive the received signal model of the bistatic MIMO radar system and analyze the performance of applying the bistatic signal to the ML direction-of-arrival (DOA) estimation algorithm. In the case of the ML algorithm, as the number of targets increases, the dimension of the azimuth search for DOA estimation also increases, which means that the ML algorithm for multiple targets is computationally very intensive. To address this problem, a closed-form expression of the estimation error is presented for performance analysis of the algorithm.
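
The ML DOA grid search described above can be sketched for the simplest possible case. This is an illustrative toy only (single target, noise-free uniform linear array, all parameter values invented), not the paper's bistatic MIMO model:

```python
import cmath, math

# Toy single-target ML DOA estimate for an M-element uniform linear array:
# the ML criterion reduces to maximizing the matched-filter output power
# |a(theta)^H x|^2 over a grid of candidate angles.
M = 8                       # number of array elements (assumed)
d = 0.5                     # element spacing in wavelengths (assumed)
true_doa = 20.0             # true arrival angle in degrees (assumed)

def steering(theta_deg):
    """Steering vector a(theta) of the ULA."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(2j * math.pi * d * m * s) for m in range(M)]

x = steering(true_doa)      # one noise-free snapshot from the true direction

def ml_cost(theta_deg):
    """Matched-filter power; the ML estimate maximizes this."""
    a = steering(theta_deg)
    return abs(sum(ai.conjugate() * xi for ai, xi in zip(a, x))) ** 2 / M

# One-dimensional azimuth grid search, 0.1-degree steps.
grid = [t / 10.0 for t in range(-900, 901)]
est = max(grid, key=ml_cost)   # -> 20.0 here
```

With one target the search is one-dimensional; each additional target adds a search dimension, which is the computational blow-up the paper's closed-form approximation is meant to avoid.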

Estimation of Lead Exposure Intensity by Industry Using Nationwide Exposure Databases in Korea

  • Koh, Dong-Hee;Park, Ju-Hyun;Lee, Sang-Gil;Kim, Hwan-Cheol;Jung, Hyejung;Kim, Inah;Choi, Sangjun;Park, Donguk
    • Safety and Health at Work
    • /
    • v.12 no.4
    • /
    • pp.439-444
    • /
    • 2021
  • Background: In a previous study, we estimated the exposure prevalence and the number of workers exposed to carcinogens by industry in Korea. The present study aimed to identify optimal intensity indicators of airborne lead exposure by comparing them to blood lead measurements, for the future development of a carcinogen exposure intensity database. Methods: Airborne lead measurements and blood lead levels were collected from nationwide occupational exposure databases compiled between 2015 and 2016. Summary statistics, including the arithmetic mean (AM), geometric mean (GM), and 95th percentile level (X95), were calculated by industry for both airborne lead and blood lead measurements. Since many measurements were below the limits of detection (LODs), both simple replacement with half of the LOD and maximum likelihood estimation (MLE) were used for statistical analysis. To identify the optimal indicator of airborne lead exposure, blood lead levels were used as reference data in subsequent rank correlation analyses. Results: A total of 19,637 airborne lead measurements and 32,848 blood lead measurements were used. In general, simple replacement showed higher correlations than MLE. The results showed that AM and X95 using simple replacement could serve as exposure intensity indicators, while X95 showed better correlations than AM in industries with 20 or more measurements. Conclusion: Our results showed that AM or X95 could be potential exposure intensity indicators in the Korean carcinogen exposure database. In particular, X95 is the optimal indicator when there are enough measurements to compute X95 values.
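
The censored-data MLE mentioned in the Methods can be illustrated with a toy left-censored lognormal fit. This is synthetic data with a crude grid optimizer standing in for a proper numerical maximizer, not the study's code; detects contribute the density and non-detects contribute the CDF at the LOD:

```python
import math, random

# Invented lognormal exposure data with a limit of detection (LOD).
random.seed(1)
mu_true, sigma_true, lod = 1.0, 0.8, 1.5
data = [random.lognormvariate(mu_true, sigma_true) for _ in range(500)]
obs = [(x, False) if x >= lod else (lod, True) for x in data]  # (value, censored?)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nll(mu, sigma):
    """Negative log-likelihood of the left-censored lognormal sample."""
    total = 0.0
    for x, cens in obs:
        z = (math.log(x) - mu) / sigma
        if cens:
            total -= math.log(norm_cdf(z))          # P(X < LOD) for non-detects
        else:
            total += math.log(x * sigma * math.sqrt(2 * math.pi)) + 0.5 * z * z
    return total

# Coarse two-parameter grid search stands in for a real optimizer.
best = min(((m / 100, s / 100) for m in range(50, 151, 2)
            for s in range(40, 121, 2)),
           key=lambda p: nll(*p))
```

The recovered (mu, sigma) lands close to the generating values, unlike the bias that simple LOD/2 substitution can introduce when the censored fraction is large.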

Radiation measurement and imaging using 3D position sensitive pixelated CZT detector

  • Kim, Younghak;Lee, Taewoong;Lee, Wonho
    • Nuclear Engineering and Technology
    • /
    • v.51 no.5
    • /
    • pp.1417-1427
    • /
    • 2019
  • In this study, we evaluated the spectroscopic performance of a commercial pixelated cadmium zinc telluride (CZT) detector and assessed its feasibility as a Compton camera for radiation monitoring in a nuclear power plant. The detection system consisted of a 20 mm × 20 mm × 5 mm CZT crystal with 8 × 8 pixelated anodes and a common cathode, in addition to an application-specific integrated circuit. Performance was evaluated with the radioisotopes 57Co, 133Ba, 22Na, and 137Cs. In general, the amplitude of the induced signal in a CZT crystal depends on the interaction position and material non-uniformity. To minimize this dependency, a drift time correction was applied: the depth of each interaction was calculated from the drift time, and the positional dependency of the signal amplitude was corrected based on the depth information. After the correction, the Compton regions of each spectrum were reduced, and the energy resolutions of the 122 keV, 356 keV, 511 keV, and 662 keV peaks were improved from 13.59%, 9.56%, 6.08%, and 5.00% to 4.61%, 2.94%, 2.08%, and 2.20%, respectively. For Compton imaging, simulations and experiments were performed using one 137Cs source at various angular positions and two 137Cs sources. Individual and multiple sources of 133Ba, 22Na, and 137Cs were also measured. The images were successfully reconstructed by the weighted list-mode maximum likelihood expectation maximization method. The angular resolution and intrinsic efficiency in the 137Cs experiments were approximately 7°-9° and 5×10^-4 to 7×10^-4, respectively. The distortion of the source distribution was proportional to the offset angle.

Damage Proxy Map over Collapsed Structure in Ansan Using COSMO-SkyMed Data

  • Nur, Arip Syaripudin;Fadhillah, Muhammad Fulki;Jung, Young-Hoon;Nam, Boo Hyun;Kim, Yong Je;Park, Yu-Chul;Lee, Chang-Wook
    • The Journal of Engineering Geology
    • /
    • v.32 no.3
    • /
    • pp.363-376
    • /
    • 2022
  • An area under construction for a living facility collapsed at around 12:48 KST on 13 January 2021 in Sa-dong, Ansan-si, Gyeonggi-do. There were no casualties thanks to rapid evacuation measures, but part of the temporary retaining facility collapsed, and several cracks occurred in the adjacent road on the south side. This study used synthetic aperture radar (SAR) satellite data, whose backscattering characteristics capture surface property changes, to map the collapsed structure. The interferometric SAR technique can directly measure the decorrelation among different acquisition dates by integrating both amplitude and phase information. The damage proxy map (DPM) technique was employed using four high-resolution Constellation of Small Satellites for Mediterranean basin Observation (COSMO-SkyMed) images spanning 2020 to 2021, acquired in ascending observation, to analyze the collapse of the construction. DPM relies on the difference between pre- and co-event interferometric coherences to depict anomalous changes that indicate collapsed structures in the study area. The DPMs were displayed on a color scale in which higher values indicate increasingly significant ground surface change in the area covered by the pixels, depicting the collapsed structure. Therefore, the DPM technique with SAR data can be used for accurate and comprehensive damage assessment after an event. In addition, we classified the amplitude information using the support vector machine (SVM) and maximum likelihood classification algorithms. An investigation committee was formed to determine the cause of the collapse of the retaining wall and to suggest technical and institutional measures to prevent similar incidents from recurring. The committee's report revealed that the incident was caused by a combination of procedures that were not carried out properly.
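
The coherence-difference idea behind the DPM can be sketched on synthetic data. The window size and the uniform phase-noise model below are invented for illustration; real DPM processing works on coregistered SLC images:

```python
import cmath, math, random

# Synthetic sketch of the DPM's core quantity: interferometric coherence over
# a pixel window, and the pre-event minus co-event coherence drop.
random.seed(0)

def coherence(s1, s2):
    """|sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2) over a window."""
    num = abs(sum(a * b.conjugate() for a, b in zip(s1, s2)))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2))
    return num / den

def pair(change):
    """Two acquisitions of one window; `change` scales random phase noise."""
    base = [cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(64)]
    second = [p * cmath.exp(1j * change * random.uniform(-math.pi, math.pi))
              for p in base]
    return base, second

pre_a, pre_b = pair(0.05)   # pre-event pair: surface stable, high coherence
co_a, co_b = pair(0.90)     # co-event pair: collapse scrambles the phase
dpm = coherence(pre_a, pre_b) - coherence(co_a, co_b)  # large where damaged
```

Pixels whose coherence drops sharply between the pre-event and co-event pairs are exactly the ones highlighted on the DPM color scale.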

A Study on the Reliability of S/W during the Developing Stage (소프트웨어 개발단계의 신뢰도에 관한 연구)

  • Yang, Gye-Tak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.14 no.5
    • /
    • pp.61-73
    • /
    • 2009
  • Many software reliability growth models (SRGMs) have been proposed since the software reliability issue was raised in 1972. Using them, techniques were developed to estimate the reliability of software under development and to grow it to a target value during the testing phase. Most of these propositions assumed the S/W debugging and testing effort to be constant, or did not consider it at all. Later, a few papers showed that considering the testing effort is important in software reliability evaluation. The testing effort forms presented in these papers were exponential, Rayleigh, Weibull, or logistic functions, and one of these four types was used as the testing effort function depending on the S/W development circumstances. I propose a methodology to evaluate the SRGM using the least squares estimator and the maximum likelihood estimator for these four functions, and then examine the parameters by applying actual data obtained from field tests of software under development.
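
The least-squares side of such a methodology can be sketched for the Rayleigh-type cumulative testing-effort curve W(t) = alpha * (1 - exp(-beta * t^2)). The data are synthetic and noise-free, and a grid search stands in for a proper Newton-type optimizer; the parameter values are invented:

```python
import math

# Noise-free synthetic effort data from a Rayleigh-type cumulative curve.
alpha_true, beta_true = 100.0, 0.05
ts = list(range(1, 21))
effort = [alpha_true * (1 - math.exp(-beta_true * t * t)) for t in ts]

def sse(alpha, beta):
    """Sum of squared errors between the model curve and the observations."""
    return sum((alpha * (1 - math.exp(-beta * t * t)) - w) ** 2
               for t, w in zip(ts, effort))

# Least-squares estimate via grid search over (alpha, beta).
best = min(((a, b / 1000.0) for a in range(80, 121) for b in range(30, 71)),
           key=lambda p: sse(*p))   # -> (100, 0.05)
```

The MLE variant replaces the squared-error objective with the log-likelihood of the observed failure counts under the chosen effort function.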

A Comparative Study of Image Classification Method to Detect Water Body Based on UAS (UAS 기반의 수체탐지를 위한 영상분류기법 비교연구)

  • LEE, Geun-Sang;KIM, Seok-Gu;CHOI, Yun-Woong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.3
    • /
    • pp.113-127
    • /
    • 2015
  • Recently, there has been growing interest in UASs (Unmanned Aerial Systems), and techniques to effectively detect water bodies from recorded images are required in order to implement flood monitoring using a UAS. This study used a UAS with RGB and NIR+RG bands to acquire images and applied supervised classification methods to evaluate the accuracy of water body detection. First, in the classification of water bodies from RGB images, the artificial neural network and minimum distance methods showed high Kappa coefficients of 0.791 and 0.783, respectively, while the maximum likelihood method showed the lowest, 0.561. Moreover, in the classification of water bodies from NIR+RG images, the Mahalanobis and minimum distance methods showed high values of 0.869 and 0.830, respectively, while the artificial neural network method showed a lower value of 0.779. In particular, the RGB band produced errors that classified trees and grasslands of Songsan amusement park as water body, whereas NIR+RG showed noticeable improvement in this respect. Therefore, it was concluded that images with the NIR+RG band are more effective than those with the RGB band for detecting water bodies when the Mahalanobis and minimum distance methods are applied.
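
The difference between two of the compared classifiers can be sketched in a single band. The reflectance values are made up, and real maximum likelihood classification uses full per-class covariance matrices over all bands rather than one variance:

```python
import math

# Made-up single-band reflectance samples for two training classes.
water = [0.10, 0.12, 0.11, 0.13, 0.09]
grass = [0.40, 0.55, 0.30, 0.65, 0.35]

def stats(xs):
    """Class mean and (population) variance."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

classes = {"water": stats(water), "grass": stats(grass)}

def min_distance(x):
    """Assign the pixel to the class with the nearest mean."""
    return min(classes, key=lambda c: abs(x - classes[c][0]))

def max_likelihood(x):
    """Assign the pixel to the class with the highest Gaussian log-likelihood."""
    def log_lik(c):
        m, v = classes[c]
        return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
    return max(classes, key=log_lik)
```

For a pixel at 0.25, minimum distance picks the nearer mean (water), while maximum likelihood picks grass because the grass class has a much larger variance; this sensitivity to class spread is one reason the methods rank differently above.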

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a Geforce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection in the ML-EM algorithm were parallelized. The time spent per iteration on computing the projection, the errors between measured and estimated data, and the backprojection was measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration due to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm can easily be modified for other imaging geometries.
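
The ML-EM update that gets parallelized, x <- x * A^T(y / Ax) / A^T(1), can be sketched on a toy system. The 3-detector, 2-pixel geometry and the counts below are invented for illustration:

```python
# Toy ML-EM on a tiny emission tomography system.
A = [[1.0, 0.0],            # system matrix: detection probability of each
     [0.0, 1.0],            # pixel at each detector
     [0.5, 0.5]]
x_true = [4.0, 2.0]
y = [sum(a * t for a, t in zip(row, x_true)) for row in A]  # noise-free counts

x = [1.0, 1.0]                                       # uniform initial image
sens = [sum(row[j] for row in A) for j in range(2)]  # sensitivity A^T(1)
for _ in range(100):
    proj = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # forward proj.
    ratio = [yi / pi for yi, pi in zip(y, proj)]                # measured/est.
    back = [sum(A[i][j] * ratio[i] for i in range(3)) for j in range(2)]
    x = [xi * bi / si for xi, bi, si in zip(x, back, sens)]     # EM update
# x converges to x_true = [4.0, 2.0]
```

The forward projection, ratio, and backprojection loops are exactly the per-detector and per-pixel computations that map onto parallel GPU threads.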

Evaluation of Diagnostic Performance of a Polymerase Chain Reaction for Detection of Canine Dirofilaria immitis (개 심장사상충을 진단하기 위한 중합연쇄반응검사 (PCR)의 진단적 특성 평가)

  • Pak, Son-Il;Kim, Doo
    • Journal of Veterinary Clinics
    • /
    • v.24 no.2
    • /
    • pp.77-81
    • /
    • 2007
  • The diagnostic performance of a polymerase chain reaction (PCR) assay for detecting Dirofilaria immitis in dogs was evaluated in the absence of a gold standard test. An enzyme-linked immunosorbent assay test kit (SnapTM, IDEXX, USA) with unknown parameters was also employed. The sensitivity and specificity of the PCR in a two-population model were estimated using both maximum likelihood via the expectation-maximization (EM) algorithm and a Bayesian method, assuming conditional independence between the two tests. A total of 266 samples, 133 in each trial, were randomly retrieved from heartworm database records for the years 2002-2004 at a university animal hospital. These data originated from test results of military dogs brought in for routine medical check-ups or heartworm testing. When the two trials were combined, the sensitivity and specificity of the PCR were 96.4-96.7% and 97.6-98.8% with EM, and 94.4-94.8% and 97.1-98.0% with the Bayesian method. There were no statistical differences between the estimates. This finding indicates that the PCR assay could be a useful screening tool for detecting heartworm antigen in dogs. The study also provided further evidence that the Bayesian approach is an alternative for drawing better inference about the performance of a new diagnostic test when no gold standard is available.
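
A much-reduced sketch of the EM estimation in this kind of Hui-Walter-style setup (two conditionally independent tests, two populations, latent infection status) follows. All parameter values are invented, and expected cell counts replace the study's real data so the example is deterministic:

```python
# Two conditionally independent tests in two populations, estimated by EM.
se = [0.95, 0.90]           # true sensitivities (test 1 = PCR, test 2 = kit)
sp = [0.98, 0.95]           # true specificities
prev = [0.70, 0.20]         # true prevalence in each population
N = 133.0                   # animals per population, as in the study design

def cell_prob(k, i, j, se, sp, prev):
    """P(test1 = i, test2 = j) in population k under conditional independence."""
    pos = prev[k] * (se[0] if i else 1 - se[0]) * (se[1] if j else 1 - se[1])
    neg = ((1 - prev[k]) * ((1 - sp[0]) if i else sp[0])
           * ((1 - sp[1]) if j else sp[1]))
    return pos + neg

counts = {(k, i, j): N * cell_prob(k, i, j, se, sp, prev)
          for k in range(2) for i in range(2) for j in range(2)}

# EM from a deliberately rough starting point.
e_se, e_sp, e_prev = [0.8, 0.8], [0.8, 0.8], [0.5, 0.5]
for _ in range(1000):
    # E-step: posterior probability that each cell's animals are infected.
    w = {}
    for (k, i, j), n in counts.items():
        pos = (e_prev[k] * (e_se[0] if i else 1 - e_se[0])
               * (e_se[1] if j else 1 - e_se[1]))
        w[(k, i, j)] = pos / cell_prob(k, i, j, e_se, e_sp, e_prev)
    # M-step: closed-form updates from the expected complete data.
    tot_pos = sum(counts[c] * w[c] for c in counts)
    tot_neg = sum(counts[c] * (1 - w[c]) for c in counts)
    e_se = [sum(counts[c] * w[c] for c in counts if c[t + 1] == 1) / tot_pos
            for t in range(2)]
    e_sp = [sum(counts[c] * (1 - w[c]) for c in counts if c[t + 1] == 0) / tot_neg
            for t in range(2)]
    e_prev = [sum(counts[(k, i, j)] * w[(k, i, j)]
                  for i in range(2) for j in range(2)) / N for k in range(2)]
```

With two populations of different prevalence, the six parameters are identifiable even though no animal's true status is observed, which is what makes the no-gold-standard evaluation possible.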

Fire Severity Mapping Using a Single Post-Fire Landsat 7 ETM+ Imagery (단일 시기의 Landsat 7 ETM+ 영상을 이용한 산불피해지도 작성)

  • 원강영;임정호
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.1
    • /
    • pp.85-97
    • /
    • 2001
  • The KT (Kauth-Thomas) and IHS (Intensity-Hue-Saturation) transformation techniques were introduced and compared for investigating fire-scarred areas with a single post-fire Landsat 7 ETM+ image. This study consists of two parts. First, using only geometrically corrected imagery, it was examined whether different levels of fire damage could be detected by a simple slicing method within the image enhanced by the IHS transform. Since the spectral distributions of the classes overlapped on each IHS component, the simple slicing method did not seem appropriate for delineating areas with different levels of fire severity. Second, the image, rectified both radiometrically and topographically, was enhanced by the KT transformation and the IHS transformation, respectively, and the enhanced images were classified by the maximum likelihood method. Cross-validation was performed to compensate for the relatively small set of ground truth data. The results showed that the KT transformation produced better accuracy than the IHS transformation. In addition, the KT feature spaces and the spectral distributions of the IHS components were analyzed graphically. This study has shown that, for detecting different levels of fire severity, the KT transformation reflects the physical ground conditions better than the IHS transformation.

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted on SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; among these, we focus on risk identification in this paper. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they are related to cybercriminality, particularly financial fraud. We then selected keywords from the vocabulary related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: since nouns refer to things, they express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item must be labeled 'legal' or 'illegal'. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (maximum likelihood estimation), term frequency, and collective intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, which is clearly superior to that of the term frequency, MLE, and other methods. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
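
The discrimination step can be sketched with a linear SVM trained by subgradient descent on toy bag-of-words data. The study used an RBF-kernel SVM with gamma = 0.5 and cost = 10; the linear Pegasos-style trainer and all example texts below are substitutions invented for this sketch:

```python
# Toy bag-of-words data: invented short ad snippets, label +1 = illegal.
docs = [("fast loan no credit check call now", 1),
        ("private loan instant cash wire today", 1),
        ("loan cash now no questions", 1),
        ("city library opening hours next week", -1),
        ("community center yoga class schedule", -1),
        ("museum exhibit opens to public", -1)]

vocab = sorted({w for text, _ in docs for w in text.split()})

def vec(text):
    """Term-frequency vector over the training vocabulary."""
    words = text.split()
    return [float(words.count(w)) for w in vocab]

X = [vec(t) for t, _ in docs]
y = [label for _, label in docs]

# Pegasos-style training: minimize (lam/2)*||w||^2 + mean hinge loss,
# cycling deterministically through the training documents.
w = [0.0] * len(vocab)
lam = 0.01
for t in range(1, 2001):
    i = t % len(X)
    eta = 1.0 / (lam * t)
    margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
    w = [(1 - eta * lam) * wj + (eta * y[i] * xj if margin < 1 else 0.0)
         for wj, xj in zip(w, X[i])]

def predict(text):
    """Sign of the linear score; +1 flags a likely illegal ad."""
    return 1 if sum(wj * xj for wj, xj in zip(w, vec(text))) > 0 else -1
```

Words that occur only in illegal ads accumulate positive weights and words from ordinary notices accumulate negative weights, which is the intuition behind using nouns and symbols as features above.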