• Title/Summary/Keyword: Measure Detection


Improvement of Domain-specific Keyword Spotting Performance Using Hybrid Confidence Measure (하이브리드 신뢰도를 이용한 제한 영역 핵심어 검출 성능향상)

  • 이경록;서현철;최승호;김진영
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.7
    • /
    • pp.632-640
    • /
    • 2002
  • In this paper, we propose an anti-filler confidence measure (ACM) to compensate for the shortcomings of the conventional RLJ confidence measure (RLJ-CM) and the normalized confidence measure (NCM), and integrate the proposed ACM with the conventional NCM into a hybrid confidence measure (HCM). Our analysis shows that false acceptances (FA) arise from the way the anti-phone model is constructed; to compensate, the proposed ACM estimates the actual phoneme sequence with a phoneme recognizer, defines it as the anti-phone model, and uses it in the confidence calculation. Analyzing the characteristics of the two measures, the conventional NCM performs well against false rejections (FR) while the proposed ACM performs well against FA, showing that the two are complementary. Exploiting this, we integrate the two confidence measures with a weighting factor α and define the result as the HCM. At a missed detection rate (MDR) of around 10%, the HCM achieves 0.219 false alarms per keyword per hour (FA/KW/HR), a 22% improvement over using the conventional NCM alone.
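The abstract does not spell out the exact combination rule; assuming the HCM is a convex combination of the two measures under the weight α (a hypothetical reading of "weighting factor"), the keyword accept/reject step can be sketched as:

```python
def hybrid_cm(ncm: float, acm: float, alpha: float) -> float:
    """Hypothetical convex combination of the two confidence measures.

    The paper only states that a weighting factor alpha integrates
    NCM and ACM into the HCM; the exact rule is an assumption here.
    """
    return alpha * ncm + (1.0 - alpha) * acm

def accept_keyword(ncm: float, acm: float, alpha: float, threshold: float) -> bool:
    """Accept a spotted keyword only if the hybrid confidence clears the threshold."""
    return hybrid_cm(ncm, acm, alpha) >= threshold

# NCM is strong against false rejections, ACM against false acceptances;
# alpha trades the two error types off against each other.
print(accept_keyword(0.8, 0.4, 0.5, 0.55))  # -> True
```

Sweeping α against a development set would then pick the operating point that minimizes the combined FA/FR cost.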

A Method for Quantitative Performance Evaluation of Edge Detection Algorithms Depending on Chosen Parameters that Influence the Performance of Edge Detection (경계선 검출 성능에 영향을 주는 변수 변화에 따른 경계선 검출 알고리듬 성능의 정량적인 평가 방법)

  • 양희성;김유호;한정현;이은석;이준호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.6B
    • /
    • pp.993-1001
    • /
    • 2000
  • This research features a method that quantitatively evaluates the performance of edge detection algorithms. Contrary to conventional methods, which evaluate edge-detection performance as a function of the amount of noise added to the input image, the proposed method can assess the performance of edge detection algorithms based on chosen parameters that influence that performance. We propose a quantitative measure, called the average performance index, that compares the average performance of different edge detection algorithms. We have applied the method to the commonly used Sobel, LoG (Laplacian of Gaussian), and Canny edge detectors on noisy images containing straight-line and curved-line edges. Two kinds of noise, Gaussian and impulse, are used. Experimental results show that our method of quantitatively evaluating the performance of edge detection algorithms can facilitate the selection of the optimal edge detection algorithm for a given task.
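The paper's own index definition is not given in the abstract; the core idea of averaging a per-setting quality score over a sweep of an influential parameter can be sketched as follows, with the score function left as a placeholder:

```python
from statistics import mean

def average_performance_index(score_fn, parameter_values):
    """Average a per-setting performance score over a sweep of a parameter
    that influences edge-detection quality (e.g. a threshold or filter scale).

    Hypothetical sketch: any score function mapping a parameter value to a
    quality score in [0, 1] can stand in for the paper's own index.
    """
    return mean(score_fn(p) for p in parameter_values)

# Toy comparison of two "detectors" whose quality varies with a threshold t:
detector_a = lambda t: max(0.0, 1.0 - abs(t - 0.5))  # peaks at t = 0.5
detector_b = lambda t: max(0.0, 0.8 - abs(t - 0.5))  # uniformly worse
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
print(average_performance_index(detector_a, grid) >
      average_performance_index(detector_b, grid))  # -> True
```

Averaging over the parameter grid, rather than reporting a single best-case setting, is what lets the index rank detectors fairly when each has a different sweet spot.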


Selection of Detection Measures for Malicious Codes using Naive Estimator (단순 추정량을 이용한 악성코드의 탐지척도 선정)

  • Mun, Gil-Jong;Kim, Yong-Min
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.2
    • /
    • pp.97-105
    • /
    • 2008
  • Various mutations of malicious codes are generated rapidly on the network; their behaviors grow more intelligent and the damage they cause becomes larger step by step. In this paper, we suggest a method for selecting measures that are useful for detecting such codes. The method shortens detection time by using header data without payloads; it works on connection data composed of TCP/IP packets, and the rich information in each connection supplies the candidate measures. A naive estimator is applied to the probability distributions calculated by a histogram estimator in order to select specific measures, out of 80 candidates, that are useful for detection. The useful measures are then selected using relative entropy. This approach solves the problem of misclassifying measure values. We demonstrate the usefulness of the proposed method through detection experiments using detection patterns based on the selected measures.
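The ranking step rests on two standard pieces: a histogram density estimate per traffic class and the relative entropy (KL divergence) between the normal and attack distributions of each candidate measure. A minimal sketch of that pipeline (the paper's naive estimator smooths the histogram; a plain histogram keeps the sketch short):

```python
from math import log

def histogram(values, bins, lo, hi):
    """Histogram density estimate over [lo, hi) with equal-width bins."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def relative_entropy(p, q, eps=1e-10):
    """KL divergence D(p || q); a larger value means the measure
    separates normal and attack traffic better."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Score one candidate measure from its per-connection values in each class:
normal = [0.1, 0.2, 0.15, 0.1, 0.2]
attack = [0.8, 0.9, 0.85, 0.9, 0.8]
score = relative_entropy(histogram(normal, 5, 0.0, 1.0),
                         histogram(attack, 5, 0.0, 1.0))
print(score > 0)  # -> True: well-separated classes give a high score
```

Repeating this for all 80 measures and keeping the highest-entropy ones is the selection the abstract describes.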

Efficient Method of Detecting Blurry Images

  • Tsomko, Elena;Kim, Hyoung-Joong;Paik, Joon-Ki;Yeo, In-Kwon
    • Journal of Ubiquitous Convergence Technology
    • /
    • v.2 no.1
    • /
    • pp.27-39
    • /
    • 2008
  • In this paper we present a simple, efficient method for detecting blurry photographs. Many recent digital cameras are equipped with auto-focusing functions to help users take well-focused pictures as easily as possible, and motion-compensation devices can offset the motion that causes blurriness. Nevertheless, digital pictures can still be degraded by limited contrast, inappropriate exposure, imperfect auto-focusing or motion-compensation devices, unskilled photographers, and so on. To decide whether to process images, or whether to delete them, a reliable measure of image degradation that separates blurry images from sharp ones is needed. This paper presents such a blurriness/sharpness measure and demonstrates its feasibility through extensive experiments. The method is fast, easy to implement, and accurate, and despite its detection accuracy it is not demanding in computation time. This measure can also be used in various imaging applications, including auto-focusing and astigmatism correction.
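The abstract does not define the authors' measure; a widely used stand-in for this kind of blur detection is the variance of the image Laplacian, sketched here on a plain 2-D list of gray levels (an illustrative assumption, not the paper's method):

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian response over interior pixels;
    low values indicate a blurry image, high values a sharp one."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    m = sum(responses) / len(responses)
    return sum((r - m) ** 2 for r in responses) / len(responses)

sharp = [[255 if (x + y) % 2 else 0 for x in range(6)] for y in range(6)]  # checkerboard
blurry = [[128] * 6 for _ in range(6)]                                     # flat gray
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # -> True
```

Thresholding such a score gives exactly the keep/process/delete decision the abstract motivates.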


An Edge Detection for Face Feature Extraction using λ-Fuzzy Measure (λ-퍼지척도를 이용한 얼굴특징의 윤곽선 검출)

  • Park, In-Kue;Ahn, Bo-Hyeok;Choi, Gyoo-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.4
    • /
    • pp.75-79
    • /
    • 2009
  • In this paper a method is proposed that uses the λ-fuzzy measure to detect the edges of facial features in the face region. In conventional methods, features were found using valleys, brightness, and edges; these methods have the drawback of being sensitive to external noise and environments. This paper proposes the λ-fuzzy measure to cope with this drawback. By weighting each pixel, an integral evaluation is performed using the center-of-area method. Thus the continuity of the edge is preserved by way of neighborhood information, and a reduction in time complexity is achieved.
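Under the standard Sugeno definition, a λ-fuzzy measure is fixed by per-element densities g_i: λ is the unique root of 1 + λ = Π(1 + λ·g_i), and the measure of a set follows from g(A ∪ {i}) = g(A) + g_i + λ·g(A)·g_i. The paper's pixel weighting is not detailed in the abstract, but the measure itself can be sketched as:

```python
def solve_lambda(densities, iters=200):
    """Find lambda > -1 satisfying 1 + L = prod(1 + L * g_i) by bisection.
    If the densities sum to 1 the measure is additive and lambda = 0."""
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0
    def f(L):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + L * g
        return prod - (1.0 + L)
    # sum < 1 -> lambda > 0; sum > 1 -> -1 < lambda < 0
    lo, hi = (1e-9, 1e6) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def lambda_measure(subset_densities, lam):
    """Measure of a set built up one element at a time:
    g(A u {i}) = g(A) + g_i + lam * g(A) * g_i."""
    g = 0.0
    for gi in subset_densities:
        g = g + gi + lam * g * gi
    return g

lam = solve_lambda([0.2, 0.3, 0.4])  # densities sum to 0.9 < 1, so lam > 0
print(round(lambda_measure([0.2, 0.3, 0.4], lam), 6))  # -> 1.0 (whole set)
```

The non-additive λ term is what lets neighborhood pixels reinforce each other in the subsequent fuzzy-integral evaluation.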


Black Consumer Detection in E-Commerce Using Filter Method and Classification Algorithms (Filter Method와 Classification 알고리즘을 이용한 전자상거래 블랙컨슈머 탐지에 대한 연구)

  • Lee, Taekyu;Lee, Kyung Ho
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.6
    • /
    • pp.1499-1508
    • /
    • 2018
  • Although fast-growing e-commerce markets have given many companies opportunities to expand their customer bases, there is also a growing number of cases in which so-called 'black consumers' cause serious damage to companies. In this study, we implement and optimize a machine learning model that detects black consumers using customer data from an e-commerce store. Using the filter method for feature selection and four different classification algorithms, we obtained a best-performing model that detects black consumers with an F-measure of 0.667, yielding improvements of 11.44% in F-measure, 10.51% in AURC, and 22.87% in TPR.
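The abstract names the filter method and the F-measure but not the exact scoring criterion or classifiers; a minimal filter-method sketch, with a simple class-mean-separation score standing in for the paper's criterion:

```python
def feature_scores(X, y):
    """Filter method: score each feature by the absolute difference of its
    class-conditional means (a placeholder univariate relevance score)."""
    scores = []
    for j in range(len(X[0])):
        pos = [row[j] for row, label in zip(X, y) if label == 1]
        neg = [row[j] for row, label in zip(X, y) if label == 0]
        scores.append(abs(sum(pos) / len(pos) - sum(neg) / len(neg)))
    return scores

def select_top_k(X, y, k):
    """Keep only the k highest-scoring feature columns."""
    scores = feature_scores(X, y)
    keep = sorted(sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k])
    return [[row[j] for j in keep] for row in X], keep

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall, the metric the study reports."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy data: feature 0 separates black consumers (label 1), feature 1 is noise.
X = [[0.9, 0.5], [0.8, 0.1], [0.1, 0.6], [0.2, 0.2]]
y = [1, 1, 0, 0]
print(select_top_k(X, y, 1)[1])  # -> [0]
```

Because filter scores are computed independently of any classifier, the selected columns can then be fed to each of the candidate classification algorithms for comparison.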

A Multiple Features Video Copy Detection Algorithm Based on a SURF Descriptor

  • Hou, Yanyan;Wang, Xiuzhen;Liu, Sanrong
    • Journal of Information Processing Systems
    • /
    • v.12 no.3
    • /
    • pp.502-510
    • /
    • 2016
  • Considering the diversity of video copy transforms, a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor is proposed in this paper. Coarse copy detection is done by an ordinal measure (OM) algorithm after the video is preprocessed. If the matching result is greater than a specified threshold, fine copy detection is done based on the SURF descriptor, with a box filter used to extract the integral video. To improve detection speed, the trace of the SURF descriptor's Hessian matrix is used for pre-matching, and the traditional SURF feature vector is reduced in dimension for video matching. Our experimental results indicate that video copy detection precision and recall are greatly improved compared with traditional algorithms, that the proposed multi-feature algorithm has good robustness and discrimination accuracy, and that detection speed is also improved.
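The ordinal measure used for coarse matching ranks block-average intensities instead of comparing them directly, which makes it robust to global brightness changes. A sketch, assuming a fixed n×n block grid over each frame (the paper's grid size and distance are not specified):

```python
def block_means(frame, n):
    """Average intensity of each cell in an n x n grid over a 2-D frame."""
    h, w = len(frame), len(frame[0])
    means = []
    for by in range(n):
        for bx in range(n):
            ys = range(by * h // n, (by + 1) * h // n)
            xs = range(bx * w // n, (bx + 1) * w // n)
            vals = [frame[y][x] for y in ys for x in xs]
            means.append(sum(vals) / len(vals))
    return means

def ordinal_signature(frame, n=2):
    """Rank of each block's mean; unchanged under monotonic brightness shifts."""
    means = block_means(frame, n)
    order = sorted(range(len(means)), key=means.__getitem__)
    ranks = [0] * len(means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def ordinal_distance(sig_a, sig_b):
    """L1 distance between rank signatures; 0 suggests a likely copy."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

frame = [[10, 20], [30, 40]]
brighter = [[v + 60 for v in row] for row in frame]  # brightness-shifted copy
print(ordinal_distance(ordinal_signature(frame), ordinal_signature(brighter)))  # -> 0
```

Only the candidates whose ordinal distance falls below the threshold proceed to the expensive SURF-based fine matching.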

Salient Object Detection Based on Regional Contrast and Relative Spatial Compactness

  • Xu, Dan;Tang, Zhenmin;Xu, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.11
    • /
    • pp.2737-2753
    • /
    • 2013
  • In this study, we propose a novel salient object detection strategy based on regional contrast and relative spatial compactness. Our algorithm consists of four basic steps. First, we learn color names offline using the probabilistic latent semantic analysis (PLSA) model to find the mapping between basic color names and pixel values. The color names can be used for image segmentation and region description. Second, image pixels are assigned to specific color names according to their values, forming different color clusters. The saliency measure for every cluster is evaluated by its spatial compactness relative to other clusters rather than by the intra-cluster variance alone. Third, every cluster is divided into local regions that are described with color name descriptors. The regional contrast is evaluated by computing the color distance between different regions in the entire image. Finally, the final saliency map is constructed by incorporating the color clusters' spatial compactness measure and the corresponding regional contrast. Experiments show that our algorithm outperforms several existing salient object detection methods with higher precision and better recall rates when evaluated using public datasets.
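The relative-compactness idea from step two can be sketched with a simple stand-in score: the spatial variance of a cluster's pixel coordinates, normalized against the other clusters (the paper's exact formulation is not given in the abstract):

```python
def spatial_variance(points):
    """Mean squared distance of a cluster's pixel coordinates from their centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n

def relative_compactness(clusters):
    """Saliency-style score per cluster: compact clusters score near 1,
    spread-out (background-like) clusters near 0, relative to the others."""
    variances = [spatial_variance(c) for c in clusters]
    hi = max(variances) or 1.0
    return [1.0 - v / hi for v in variances]

object_px = [(10, 10), (11, 10), (10, 11), (11, 11)]  # tight foreground cluster
background_px = [(0, 0), (50, 0), (0, 50), (50, 50)]  # spread across the image
print(relative_compactness([object_px, background_px])[0] > 0.9)  # -> True
```

Scoring each cluster against the others, rather than in isolation, is what lets a small but tight object cluster stand out from a diffuse background of similar size.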

3D Building Reconstruction Using Building Model and Segment Measure Function (건물모델 및 선소측정함수를 이용한 건물의 3차원 복원)

  • Ye, Chul-Soo;Lee, Kwae-Hi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.4
    • /
    • pp.46-55
    • /
    • 2000
  • This paper presents an algorithm for 3D building reconstruction from a pair of stereo aerial images using a 3D building model and the linear segments of the building. Linear segments are extracted directly from the original building images using a parametric building model, instead of employing the conventional procedure of edge detection, linear approximation, and line linking. A segment measure function is simultaneously applied to each extracted line segment to improve the accuracy of building detection compared with detecting individual linear segments. The algorithm has been applied to pairs of stereo aerial images, and the results showed accurate detection and reconstruction of buildings.


Measure and Analysis of Open-Close Frequency of Mouth and Eyes for Sleepiness Decision (졸음 판단을 위한 눈과 입의 개폐 빈도수 측정 및 분석)

  • Sung, Jae-Kyung;Choi, In-Ho;Park, Sang-Min;Kim, Yong-Guk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.3
    • /
    • pp.89-97
    • /
    • 2014
  • In this paper, we propose a real-time program that measures the open-close frequency of the mouth and eyes to detect driver drowsiness. The program detects a face in the CCD camera image using the OpenCV library, then extracts each region from the detected face, using CDF for eye detection and Active Contour for mouth detection. The system measures each open-close frequency using the extracted eye and mouth region data. We also propose a foundational technique for deciding user sleepiness based on the measurement data.
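The abstract leaves the decision rule open; downstream of the detectors, the frequency-counting step reduces to counting open→closed transitions in the per-frame states. A hypothetical sketch (the blink and closed-ratio thresholds are illustrative, not the paper's values):

```python
def count_closures(states):
    """Count open->closed transitions in a sequence of per-frame states
    (True = open, False = closed)."""
    return sum(1 for prev, cur in zip(states, states[1:]) if prev and not cur)

def closed_ratio(states):
    """Fraction of frames with eyes closed (a PERCLOS-style statistic)."""
    return sum(1 for s in states if not s) / len(states)

def looks_drowsy(eye_states, blink_limit=4, closed_limit=0.3):
    """Hypothetical rule: flag drowsiness when blinks are frequent or the
    eyes stay closed for a large fraction of the window."""
    return (count_closures(eye_states) >= blink_limit
            or closed_ratio(eye_states) >= closed_limit)

alert = [True] * 8 + [False] + [True] * 8        # one quick blink
drowsy = [True, False, False, False, True,
          False, False, True, False, False]      # long, repeated closures
print(looks_drowsy(alert), looks_drowsy(drowsy))  # -> False True
```

The same counting applies to the mouth states, where frequent long openings would indicate yawning.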