• Title/Summary/Keyword: error probability

AWGN Removal using Laplace Distribution and Weighted Mask (라플라스 분포와 가중치 마스크를 이용한 AWGN 제거)

  • Park, Hwa-Jung;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.12
    • /
    • pp.1846-1852
    • /
    • 2021
  • In modern society, a wide range of digital devices are being deployed across many fields owing to the Fourth Industrial Revolution and the development of IoT technology. However, noise introduced while an image is acquired or transmitted not only corrupts its information but also propagates into downstream systems, causing errors and faulty operation. AWGN is the most representative image noise. Prior research on noise removal includes the representative AF, A-TMF, and MF methods. These existing filters have the drawback that smoothing occurs in regions with strong high-frequency components, since they take little account of local image characteristics. The proposed algorithm therefore computes the local standard-deviation distribution so that noise is removed effectively even in high-frequency regions, and then obtains the final output by applying weights drawn from the probability density function of the Laplace distribution, fitted by curve fitting.
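
The core idea of weighting a filter window by a Laplace probability density can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the scale parameter `b` and the use of the window median as the location are assumptions for the example.

```python
import numpy as np

def laplace_weighted_mean(window, b=1.0):
    """Weight each pixel in a window by the Laplace PDF of its deviation
    from the window median, then return the weighted mean.  Pixels far
    from the local center (likely noise) receive exponentially small
    weight, so edges and detail are smoothed less than with a plain mean."""
    window = np.asarray(window, dtype=float)
    center = np.median(window)
    dev = np.abs(window - center)
    # Laplace density: f(x) = 1/(2b) * exp(-|x - mu| / b)
    w = np.exp(-dev / b) / (2.0 * b)
    return float(np.sum(w * window) / np.sum(w))
```

For a window containing one strong outlier, the Laplace weighting pulls the output toward the local median rather than the arithmetic mean, which is the behavior that preserves high-frequency regions.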

Effective Speaker Recognition Technology Using Noise (잡음을 활용한 효과적인 화자 인식 기술)

  • Ko, Suwan;Kang, Minji;Bang, Sehee;Jung, Wontae;Lee, Kyungroul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.259-262
    • /
    • 2022
  • In the information age, as smartphones have become ubiquitous and real-time Internet access commonplace, user authentication for identifying individuals has become essential. The most common technique is password authentication with an ID and password, but credentials entered via a keyboard are inconvenient for the visually impaired, people with limited hand mobility, and the elderly, who must memorize and type the IDs and passwords demanded by many services, and such credentials are also exposed to attacks such as keyloggers. To address these problems, biometric authentication based on physical characteristics has drawn attention; among these, authenticating users by voice can effectively overcome the limitations of password authentication. Speaker recognition is already used in voice-recognition services such as KT's GiGA Genie, but because a voice is comparatively easy to forge or alter, it suffers from lower accuracy than fingerprint- or iris-based authentication and from high recognition error rates. To study this voice-based authentication technology, we trained on user voices and measured accuracy against test voices using the MFCC algorithm, which extracts the frequency characteristics of a voice. We verified that when a malicious attacker imitates the user's voice, or obtains it by recording it with a microphone, authentication can be bypassed with high probability. Accordingly, to improve the accuracy of speaker recognition more effectively, this paper proposes recognizing the speaker after mixing noise into the voice. Because the added noise is reflected very sensitively in the accuracy, the proposed approach is expected to neutralize the existing bypass methods and provide more effective voice-based speaker recognition.
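
Mixing noise into a voice signal at a controlled level is a standard preprocessing step; a minimal sketch (my own illustration, not the authors' implementation) of adding white Gaussian noise at a target SNR:

```python
import numpy as np

def mix_noise_at_snr(signal, snr_db, rng=None):
    """Add white Gaussian noise to a signal at a target SNR (in dB).
    The noise power is derived from the measured signal power so that
    10*log10(P_signal / P_noise) equals snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise
```

The enrollment and test utterances would both be mixed with the same noise profile before MFCC extraction, so an attacker's replayed or imitated voice, lacking the expected noise signature, scores poorly.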

A Study on Efficient Design of Surveillance RADAR Interface Control Unit in Naval Combat System

  • Dong-Kwan Kim;Dong-Han Jung;Won-Seok Jang;Young-San Kim;Hyo-Jo Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.11
    • /
    • pp.125-134
    • /
    • 2023
  • In this paper, we propose an efficient surveillance RADAR (RAdio Detection And Ranging) interface control unit (ICU) design for the naval combat system. The proposed design applies a standardized architecture for modules that can be shared across ship combat-system software. An error-detection function for each link was implemented to speed up recognition of a disconnection. Messages that used to be sent periodically for human-computer interaction (HCI) are now transmitted only when the datagram changes, which reduces the processing load on the console. The proposed design supplements the radar with a waterfall scope and time-limited splash recognition for shot hit-checking and zeroing, for cases where radar processing capacity is limited by the adoption of a low-cost commercial radar. As a result, the operator can easily determine whether a shot hit, the probability of misrecognition is reduced, and the radar's resources are used more effectively.
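
The change-only transmission scheme described above can be sketched generically. This is an illustrative pattern, not the paper's code; the class and method names are hypothetical.

```python
import hashlib

class ChangeOnlySender:
    """Transmit a datagram only when its content differs from the last
    datagram actually sent, suppressing redundant periodic HCI messages
    and thereby reducing console processing load."""

    def __init__(self, send):
        self._send = send          # underlying transmit function
        self._last_digest = None   # digest of the last datagram sent

    def push(self, datagram: bytes) -> bool:
        """Return True if the datagram was transmitted, False if suppressed."""
        digest = hashlib.sha256(datagram).digest()
        if digest == self._last_digest:
            return False           # unchanged: suppress transmission
        self._last_digest = digest
        self._send(datagram)
        return True
```

Comparing a fixed-size digest instead of the full payload keeps the per-message overhead constant regardless of datagram size.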

Voice Activity Detection Based on SVM Classifier Using Likelihood Ratio Feature Vector (우도비 특징 벡터를 이용한 SVM 기반의 음성 검출기)

  • Jo, Q-Haing;Kang, Sang-Ki;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.8
    • /
    • pp.397-402
    • /
    • 2007
  • In this paper, we apply a support vector machine (SVM) that incorporates an optimized nonlinear decision rule over different sets of feature vectors to improve the performance of statistical model-based voice activity detection (VAD). The conventional method performs VAD by setting up statistical models for the speech-absence and speech-presence hypotheses and comparing the geometric mean of the likelihood ratios (LRs) of the individual frequency bands extracted from the input signal with a given threshold. We propose a novel SVM-based VAD technique that treats the LRs computed in each frequency bin as elements of a feature vector, minimizing the classification error probability instead of relying on the conventional geometric-mean decision rule. Experimental results show that SVM-based VAD using the proposed feature outperforms previously reported VADs in various noise environments.
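
The conventional decision rule the paper replaces can be written down directly: declare speech when the geometric mean of the per-band likelihood ratios exceeds a threshold. A minimal sketch (the threshold value is an assumption):

```python
import numpy as np

def vad_geometric_mean(likelihood_ratios, threshold=1.0):
    """Classical statistical-model VAD decision: declare speech present
    when the geometric mean of the per-band likelihood ratios exceeds
    the threshold.  (In the paper, the same per-bin LRs instead become
    the elements of an SVM feature vector.)"""
    lr = np.asarray(likelihood_ratios, dtype=float)
    # geometric mean computed in the log domain for numerical stability
    log_gm = np.mean(np.log(lr))
    return bool(log_gm > np.log(threshold))
```

The SVM variant would feed the vector `np.log(lr)` to a trained classifier rather than averaging it, letting the decision boundary weight individual bands nonlinearly.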

Frequency Domain Double-Talk Detector Based on Gaussian Mixture Model (주파수 영역에서의 Gaussian Mixture Model 기반의 동시통화 검출 연구)

  • Lee, Kyu-Ho;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.401-407
    • /
    • 2009
  • In this paper, we propose a novel method for cross-correlation-based double-talk detection (DTD) that employs a Gaussian Mixture Model (GMM) in the frequency domain. The proposed algorithm transforms the cross-correlation coefficients used in the time domain into 16 frequency-domain channels using the discrete Fourier transform (DFT). Seven of these channels are then selected as the GMM feature vector, and the far-end, double-talk, and near-end speech regions are identified by likelihood comparison over those feature vectors. The presented DTD algorithm detects double-talk regions efficiently without the voice activity detector required by conventional cross-correlation-based DTD. The performance of the proposed algorithm is evaluated under various conditions and yields better results than conventional schemes; in particular, it is robust against detection errors caused by background noise or echo-path changes, one of the key issues in practical DTD.
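
The first step, mapping a time-domain cross-correlation sequence into band-averaged frequency channels, can be sketched as below. The equal-width band layout is an assumption for illustration; the paper's exact channel definition may differ.

```python
import numpy as np

def frequency_channels(cross_corr, n_channels=16):
    """Transform a time-domain cross-correlation sequence into
    n_channels band-averaged magnitude channels via the DFT."""
    spectrum = np.abs(np.fft.rfft(np.asarray(cross_corr, dtype=float)))
    # split the one-sided spectrum into contiguous bands and average each
    bands = np.array_split(spectrum, n_channels)
    return np.array([band.mean() for band in bands])
```

A subset of these channels (seven in the paper) would then form the feature vector scored against the far-end, double-talk, and near-end GMMs.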

Entity Embeddings for Enhancing Feasible and Diverse Population Synthesis in a Deep Generative Models (심층 생성모델 기반 합성인구 생성 성능 향상을 위한 개체 임베딩 분석연구)

  • Donghyun Kwon;Taeho Oh;Seungmo Yoo;Heechan Kang
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.6
    • /
    • pp.17-31
    • /
    • 2023
  • An activity-based model requires detailed population information to model individual travel behavior in a disaggregated manner. A recent innovative approach developed deep generative models with novel regularization terms that improve fidelity and diversity in population synthesis. Since the method relies on measuring the distance between the distribution boundaries of the sample data and the generated samples, obtaining a well-defined continuous representation of the discretized dataset is crucial. We therefore propose an improved entity-embedding model to enhance the performance of the regularization terms, which indirectly supports the synthesis of feasible and diverse populations. Our results show a 28.87% improvement in F1 score over the baseline method.
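
The role of an entity embedding, replacing a discrete category id with a learned continuous vector, can be shown with a minimal lookup-table sketch. This is purely illustrative (random initialization, hypothetical names); in practice the table would be trained jointly with the generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_embedding(n_categories, dim):
    """Lookup table mapping each discrete category (e.g. household type)
    to a continuous vector; initialised randomly here, trained in practice."""
    return rng.normal(0.0, 0.1, size=(n_categories, dim))

def embed(table, category_ids):
    """Replace integer category ids with their continuous embedding rows,
    giving the generative model a well-defined continuous representation."""
    return table[np.asarray(category_ids)]
```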

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public financial-policy institution in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of a credit-guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement the limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) applied artificial neural networks to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, the results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause the more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. The logit model achieves the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy for both the 0-10% and 90-100% intervals, with lower accuracy around the 50% interval. As for the distribution of samples across intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy over a small number of cases, LightGBM or XGBoost may be more desirable because they classify a large number of cases into the two extreme intervals, even allowing for their relatively lower classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, one can construct a more comprehensive ensemble that combines multiple classification models and conducts majority voting to maximize overall performance.
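
The per-interval evaluation described above, splitting predicted default probabilities into ten equal intervals and reporting accuracy and sample count per interval, can be sketched as follows (a minimal illustration assuming a 0.5 classification cut-off, which the paper does not state explicitly):

```python
import numpy as np

def accuracy_by_decile(y_true, p_pred):
    """Split predicted default probabilities into ten equal intervals
    [0, 0.1), ..., [0.9, 1.0] and report (interval, sample count,
    classification accuracy at a 0.5 cut-off) for each interval."""
    y_true = np.asarray(y_true)
    p_pred = np.asarray(p_pred, dtype=float)
    y_hat = (p_pred >= 0.5).astype(int)
    bins = np.clip((p_pred * 10).astype(int), 0, 9)  # 1.0 maps to bin 9
    out = []
    for d in range(10):
        mask = bins == d
        n = int(mask.sum())
        acc = float((y_hat[mask] == y_true[mask]).mean()) if n else float("nan")
        out.append((d, n, acc))
    return out
```

Inspecting both the count and the accuracy per interval is what distinguishes models that push many cases confidently into the extreme intervals (LightGBM, XGBoost) from those that are merely accurate on a few cases there (Random Forest).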

Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2575-2583
    • /
    • 1998
  • The production of highly reliable software systems and their performance evaluation have become important interests in the software industry. Software evaluation has mainly been carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs during a fixed time interval of the software testing phase. Theoretical software reliability models are sometimes unsuitable for a practical testing phase in which a software error at a certain testing stage arises from imperfect debugging, abnormal software correction, and so on. Such a software testing stage needs to be treated as an outlying stage, and we can assume that software reliability does not improve during this outlying stage owing to nuisance factors. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, using a modification of the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under assumed prior information and the squared-error loss function. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model, over various values of the nuisance parameter, using accuracy, bias, trend, and noise metrics as quantitative evaluation criteria through computer simulation.
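
The underlying Jelinski-Moranda model is easy to state: with `n_total` initial faults and per-fault hazard rate `phi`, the hazard after the (i-1)-th fix is proportional to the faults remaining. A minimal sketch of the resulting reliability function (the Bayesian outlier-stage modification in the paper is not reproduced here):

```python
import math

def jm_reliability(t, n_total, i, phi):
    """Jelinski-Moranda model: after i-1 faults have been fixed, the
    hazard rate is phi * (n_total - i + 1), so the reliability over a
    further interval of length t is R(t) = exp(-phi*(n_total - i + 1)*t)."""
    hazard = phi * (n_total - i + 1)
    return math.exp(-hazard * t)
```

In an outlying testing stage, the model above would overstate reliability growth, which motivates the paper's treatment of that stage through a nuisance parameter.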

Study of the Impact of Light Through the Vitamin $B_{12}$/Folate Inspection (Vitamin $B_{12}$/Folate 검사 시 빛의 영향에 대한 고찰)

  • Cho, Eun Bit;Pack, Song Ran;Kim, Whe Jung;Kim, Seong Ho;Yoo, Seon Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.2
    • /
    • pp.162-166
    • /
    • 2012
  • Purpose: Vitamin $B_{12}$ and Folate assays are used in the anemia work-up and are well known for their light sensitivity; the screening manual also specifies that light conditions must be controlled. Accordingly, our laboratory minimizes light exposure when assaying Vitamin $B_{12}$ and Folate, but exposure cannot be blocked entirely because of other factors, such as specimen separation. This study therefore examines to what extent light influences the Vitamin $B_{12}$/Folate test and whether light exclusion is mandatory. Materials and Methods: We conducted two experiments to identify the influence of light, during the Vitamin $B_{12}$/Folate assay itself and during specimen preservation, using patient specimens of various concentrations submitted to our hospital in March 2012. The first experiment compared Vitamin $B_{12}$/Folate results with light exposed versus excluded during the incubation that follows reagent division. The second experiment examined the impact of light exposure during preservation: for 1, 2, and 7 days, specimens were preserved frozen at $-15^{\circ}C$ either fully shielded from light or exposed, and the results were compared. Results: In the first experiment there were no noticeable changes in the standard or specimen cpm, but for Vitamin $B_{12}$ the average result of the light-exposed specimens was 7.8% higher than that of the shielded ones; at the 0.05 significance level the p-value was 0.251, indicating no impact.
For Folate, the light-exposed result decreased by 5.4%, with a p-value of 0.033 indicating a small impact. During preservation, the results depended on light exposure. For Vitamin $B_{12}$, the light-exposed specimens increased by 11.6% on day 1, 10.8% on day 2, and 3.8% on day 7, with p-values of 0.372, 0.033, and 0.144 respectively, indicating no impact on days 1 and 7. For Folate, the light-exposed specimens increased by 1.4% on day 1 with hardly any impact, changed by 6.1% on day 2, and decreased by 5.2% on day 7; the p-values on days 1, 2, and 7 were 0.378, 0.037, and 0.217 respectively, again indicating no impact on days 1 and 7. Conclusion: After scrutinizing the effect of light exposure versus exclusion, Vitamin $B_{12}$ showed no impact, while Folate showed no noticeable influence either, though light exclusion during the assay is recommended given its p-value of 0.033. During preservation, only the day-2 results appeared to depend on light exclusion; considering the complexity of the experimental process, technical error is a plausible explanation, so light likely has no impact. Nevertheless, excluding light during long preservation is advisable, since the day-1 and day-7 p-values also diminished.

Large-scale Virtual Power Plant Management Method Considering Variable and Sensitive Loads (가변 및 민감성 부하를 고려한 대단위 가상 발전소 운영 방법)

  • Park, Yong Kuk;Lee, Min Goo;Jung, Kyung Kwon;Lee, Yong-Gu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.225-234
    • /
    • 2015
  • Nowadays a Virtual Power Plant (VPP) aggregates distributed energy resources such as Distributed Generation (DG), Combined Heat and Power generation (CHP), Energy Storage Systems (ESS), and loads so that they operate as a single power plant by means of Information and Communication Technologies (ICT). VPPs have been developed and verified on a single virtual plant platform connected to a number of distributed energy resources. As a VPP's distributed energy resources increase, so does the volume of data they produce; moreover, it is clearly inefficient, both technically and in cost, for a single centralized virtual plant platform to operate over a widespread region. This paper proposes the concept of a large-scale VPP that can reduce the error probability of the system's load handling and increase the robustness of data exchange among distributed energy resources. In addition, it can directly control and supervise energy resources through small virtual platforms that perform optimal resource scheduling considering the variable and sensitive loads within the large-scale VPP. The result is verified by simulation.