• Title/Summary/Keyword: algorithm classification scheme


Edge Detection By Fusion Using Local Information of Edges

  • Vlachos, Ioannis K.;Sergiadis, George D.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.403-406
    • /
    • 2003
  • This paper presents a robust algorithm for edge detection based on fuzzy fusion, using a novel local edge information measure based on Rényi's α-order entropy. The calculation of the proposed measure is carried out using a parametric classification scheme based on local statistics. By suitably tuning its parameters, the local edge information measure is capable of extracting different types of edges, while exhibiting high immunity to noise. The notions of fuzzy measures and the Choquet fuzzy integral are applied to combine the different sources of information obtained using the local edge information measure with different sets of parameters. The effectiveness and the robustness of the new method are demonstrated by applying our algorithm to various synthetic computer-generated and real-world images.
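
As a rough illustration of the kind of pipeline this abstract describes, the sketch below computes a local edge-information measure from Rényi's α-order entropy over sliding windows and fuses two parameterizations; the window size, the α values, and the plain weighted mean standing in for the paper's Choquet fuzzy integral are all assumptions, not the authors' implementation.

```python
import numpy as np

def local_renyi_entropy(image, alpha=2.0, win=5, bins=16):
    """Local edge-information measure: Renyi alpha-entropy of the
    grey-level histogram in a sliding window (assumed formulation)."""
    h, w = image.shape
    pad = win // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            # Renyi entropy of order alpha (alpha != 1)
            out[i, j] = np.log(np.sum(p ** alpha)) / (1.0 - alpha)
    return out

def fuse_edge_maps(image, alphas=(0.5, 2.0), weights=(0.5, 0.5)):
    """Combine edge maps obtained with different parameter sets.
    A weighted mean stands in for the Choquet fuzzy integral here."""
    maps = [local_renyi_entropy(image, a) for a in alphas]
    maps = [(m - m.min()) / (np.ptp(m) + 1e-12) for m in maps]
    return sum(wt * m for wt, m in zip(weights, maps))

if __name__ == "__main__":
    img = np.zeros((32, 32))
    img[:, 16:] = 200          # a synthetic vertical step edge
    edges = fuse_edge_maps(img)
    print(edges.shape, edges.max())
```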


A Novel Human Detection Scheme using a Human Characteristics Function in a Low Resolution 2D LIDAR (저해상도 2D 라이다의 사람 특성 함수를 이용한 새로운 사람 감지 기법)

  • Kwon, Seong Kyung;Hyun, Eugin;Lee, Jin-Hee;Lee, Jonghun;Son, Sang Hyuk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.267-276
    • /
    • 2016
  • Human detection technologies are widely used in smart homes and autonomous vehicles. However, to detect humans, autonomous-vehicle researchers have relied on high-resolution LIDAR, while smart-home researchers have applied cameras with a narrow detection range. In this paper, we propose a novel method using a low-cost, low-resolution LIDAR that can detect humans quickly and precisely without a complex learning algorithm or additional devices. In other words, humans can be distinguished from other objects by using a new human characteristics function that is empirically extracted from the characteristics of a human. In addition, we verified the effectiveness of the proposed algorithm through a number of experiments.
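
The abstract does not spell out the characteristics function, so the following is only a hedged sketch of the overall flow: segment a 2D scan into clusters at range discontinuities, then score each cluster with a placeholder width-based function. The gap threshold, the 0.2-0.8 m width window, and the scoring rule are hypothetical stand-ins, not the function proposed in the paper.

```python
import math

def cluster_scan(ranges, angles, gap=0.3):
    """Split a 2D LIDAR scan into clusters at range discontinuities."""
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > gap:
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)
    return clusters

def cluster_width(cluster, ranges, angles):
    """Chord length between the first and last point of a cluster."""
    i, j = cluster[0], cluster[-1]
    x1, y1 = ranges[i] * math.cos(angles[i]), ranges[i] * math.sin(angles[i])
    x2, y2 = ranges[j] * math.cos(angles[j]), ranges[j] * math.sin(angles[j])
    return math.hypot(x2 - x1, y2 - y1)

def looks_human(cluster, ranges, angles):
    """Placeholder 'human characteristics function': a human-sized
    cluster is roughly 0.2-0.8 m wide (assumed thresholds)."""
    return 0.2 <= cluster_width(cluster, ranges, angles) <= 0.8

if __name__ == "__main__":
    # Toy scan: a flat wall at 4 m with a nearer, narrower object at 2 m.
    angles = [math.radians(a) for a in range(-45, 46)]
    ranges = [4.0] * len(angles)
    for k in range(42, 49):
        ranges[k] = 2.0
    hits = [c for c in cluster_scan(ranges, angles)
            if looks_human(c, ranges, angles)]
    print("human-like clusters:", len(hits))
```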

Wafer Dicing State Monitoring by Signal Processing (신호처리를 이용한 웨이퍼 다이싱 상태 모니터링)

  • 고경용;차영엽;최범식
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.5
    • /
    • pp.70-75
    • /
    • 2000
  • After the patterning and probe processes of a wafer have been completed, the dicing process is necessary to separate chips from the wafer. The dicing process cuts a wafer in the lengthwise and crosswise directions to produce many chips, using a narrow circular rotating diamond blade. However, inferior goods are produced under the influence of a complex dicing environment involving the blade, wafer, cutting water, and cutting conditions. This paper describes a monitoring algorithm using feature extraction in order to find the instant at which the vibration signal changes when bad dicing appears. The algorithm is composed of two steps: feature extraction and decision. In the feature extraction step, two features computed from the vibration signal, which is acquired by an accelerometer attached to the blade head, are proposed. In the decision step, a threshold method is adopted to classify the dicing process into normal and abnormal dicing. Experiments have been performed on GaAs semiconductor wafers. Based upon observation of the experimental results, the proposed scheme showed good classification accuracy, by which the rate of inferior goods decreased from 35.2% to 12.8%.
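
The abstract names the two-step structure (feature extraction, then a threshold decision) but not the two features themselves, so the sketch below uses windowed RMS and variance as stand-in features and a 3-sigma threshold learned from a known-normal segment; both choices are assumptions for illustration only.

```python
import numpy as np

def window_features(signal, win=256):
    """Split the vibration signal into windows and compute two
    stand-in features per window: RMS and variance."""
    n = len(signal) // win
    frames = signal[:n * win].reshape(n, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    var = np.var(frames, axis=1)
    return rms, var

def threshold_decision(rms, var, rms_thr, var_thr):
    """Flag a window as abnormal dicing when either feature
    exceeds its threshold."""
    return (rms > rms_thr) | (var > var_thr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = 0.1 * rng.standard_normal(256 * 50)        # normal cutting
    abnormal = 0.4 * rng.standard_normal(256 * 10)      # simulated bad dicing
    signal = np.concatenate([normal, abnormal])

    # Learn 3-sigma thresholds from the known-normal leading segment.
    r0, v0 = window_features(normal)
    rms_thr = r0.mean() + 3 * r0.std()
    var_thr = v0.mean() + 3 * v0.std()

    rms, var = window_features(signal)
    flags = threshold_decision(rms, var, rms_thr, var_thr)
    print("abnormal windows:", int(flags.sum()), "of", len(flags))
```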


A Face-Detection Postprocessing Scheme Using a Geometric Analysis for Multimedia Applications

  • Jang, Kyounghoon;Cho, Hosang;Kim, Chang-Wan;Kang, Bongsoon
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.13 no.1
    • /
    • pp.34-42
    • /
    • 2013
  • Human faces have been broadly studied in the digital image and video processing fields. An appearance-based method, the adaptive boosting learning algorithm using integral image representations, has been successfully employed for face detection, taking advantage of the low computational complexity of its feature extraction. In this paper, we propose a face-detection postprocessing method that equalizes instantaneous facial regions in an efficient hardware architecture for use in real-time multimedia applications. The proposed system requires few hardware resources and exhibits robust performance with respect to the movement, zooming, and classification of faces. A series of experimental results obtained using video sequences collected under dynamic conditions is discussed.
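
As a hedged, software-level sketch of what stabilizing instantaneous facial regions across frames can look like (the paper targets a hardware architecture, and its geometric analysis is not detailed in the abstract), the code below smooths successive bounding boxes with an exponential moving average and rejects boxes whose size jumps implausibly; the smoothing factor and the size-jump limit are assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

class FaceBoxStabilizer:
    """Smooth per-frame face detections so the reported region does not
    jitter; an assumed postprocessing step, not the paper's design."""

    def __init__(self, alpha=0.3, max_scale_jump=1.5):
        self.alpha = alpha                  # EMA weight for the new box
        self.max_scale_jump = max_scale_jump
        self.state = None

    def update(self, det: Box) -> Box:
        if self.state is None:
            self.state = det
            return det
        # Reject detections whose area changes implausibly fast.
        ratio = (det.w * det.h) / (self.state.w * self.state.h)
        if ratio > self.max_scale_jump or ratio < 1 / self.max_scale_jump:
            return self.state
        a = self.alpha
        self.state = Box(
            x=a * det.x + (1 - a) * self.state.x,
            y=a * det.y + (1 - a) * self.state.y,
            w=a * det.w + (1 - a) * self.state.w,
            h=a * det.h + (1 - a) * self.state.h,
        )
        return self.state

if __name__ == "__main__":
    stab = FaceBoxStabilizer()
    detections = [Box(100, 80, 60, 60), Box(104, 82, 62, 61),
                  Box(300, 90, 200, 200),   # outlier detection
                  Box(101, 79, 59, 60)]
    for d in detections:
        b = stab.update(d)
        print(round(b.x, 1), round(b.y, 1), round(b.w, 1), round(b.h, 1))
```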

MCE Training Algorithm for a Speech Recognizer Detecting Mispronunciation of a Foreign Language (외국어 발음오류 검출 음성인식기를 위한 MCE 학습 알고리즘)

  • Bae, Min-Young;Chung, Yong-Joo;Kwon, Chul-Hong
    • Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.43-52
    • /
    • 2004
  • Model parameters in HMM-based speech recognition systems are normally estimated using Maximum Likelihood Estimation (MLE). The MLE method is based mainly on the principle of statistical data fitting in terms of increasing the HMM likelihood. The optimality of this training criterion is conditioned on the availability of an infinite amount of training data and the correct choice of model. However, in practice, neither of these conditions is satisfied. In this paper, we propose a training algorithm, MCE (Minimum Classification Error), to improve the performance of a speech recognizer that detects mispronunciation of a foreign language. During conventional MLE training, the model parameters are adjusted to increase the likelihood of the word strings corresponding to the training utterances without taking into account the probability of other possible word strings. In contrast to MLE, the MCE training scheme takes account of possible competing word hypotheses and tries to reduce the probability of incorrect hypotheses. The discriminative training method using MCE shows better recognition results than the MLE method.
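
A compact way to see the MCE idea described above is the standard misclassification measure d(x) = -g_correct(x) + (1/η) log[(1/K) Σ exp(η g_k(x))] over the competing hypotheses, passed through a sigmoid to give a smooth 0/1-like loss. The sketch below evaluates that loss for one utterance; the scores would come from HMM log-likelihoods in practice, and the η and γ values are illustrative assumptions.

```python
import numpy as np

def mce_loss(scores, correct, eta=1.0, gamma=1.0):
    """Minimum Classification Error loss for one utterance.

    scores  : discriminant values g_k(x) for all hypotheses
              (e.g. HMM log-likelihoods of competing word strings)
    correct : index of the correct hypothesis
    """
    g_c = scores[correct]
    competitors = np.delete(scores, correct)
    # Soft maximum over competing hypotheses.
    d = -g_c + np.log(np.mean(np.exp(eta * competitors))) / eta
    # Smooth 0-1 loss: near 1 when misclassified, near 0 when correct.
    loss = 1.0 / (1.0 + np.exp(-gamma * d))
    return loss, d

if __name__ == "__main__":
    scores = np.array([-120.0, -118.5, -130.0])   # toy log-likelihoods
    loss, d = mce_loss(scores, correct=0)
    print(f"d = {d:.3f}, loss = {loss:.3f}")       # d > 0: a competitor wins
```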


Domain Specific Annotation of Digital Documents through Keyphrase Extraction (고정키어구 추출을 통한 디지털 문서의 도메인 특정 주석)

  • Fatima, Iram;Lee, Young-Koo;Lee, Sung-Young
    • Annual Conference of KIPS
    • /
    • 2011.04a
    • /
    • pp.1389-1391
    • /
    • 2011
  • In this paper, we propose a methodology to annotate digital documents through keyphrase extraction using a domain-specific taxonomy. A limitation of existing keyphrase extraction algorithms is that the output keyphrases may contain irrelevant information along with relevant ones, so the quality of the generated keyphrases does not meet the required level of accuracy. Our proposed approach exploits the semantic relationships and hierarchical structure of the classification scheme to filter out irrelevant keyphrases suggested by the Keyphrase Extraction Algorithm (KEA++). Our experimental results demonstrate the accuracy of the proposed algorithm through high precision and low recall.
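
The filtering step can be pictured as checking each candidate keyphrase against a domain taxonomy: keep a phrase only if it matches a term in the subtree of the document's domain node. The sketch below is an assumed approximation of that idea; KEA++ itself, the taxonomy content, and the substring-matching rule are placeholders, not the paper's method.

```python
# Placeholder taxonomy: term -> child terms (assumed content).
TAXONOMY = {
    "machine learning": ["classification", "clustering", "feature selection"],
    "classification": ["decision tree", "naive bayes", "support vector machine"],
    "clustering": ["k-means", "hierarchical clustering"],
}

def descendants(node, taxonomy):
    """All terms below a node in the taxonomy hierarchy."""
    seen, stack = set(), [node]
    while stack:
        term = stack.pop()
        for child in taxonomy.get(term, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def filter_keyphrases(candidates, domain, taxonomy):
    """Keep only candidate keyphrases (e.g. from KEA++) that match a
    taxonomy term in the requested domain's subtree."""
    allowed = {domain} | descendants(domain, taxonomy)
    return [c for c in candidates
            if any(term in c.lower() for term in allowed)]

if __name__ == "__main__":
    kea_output = ["support vector machine kernel", "conference proceedings",
                  "k-means initialization", "page layout"]
    print(filter_keyphrases(kea_output, "machine learning", TAXONOMY))
```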

Query Optimization Scheme using Query Classification in Hybrid Spatial DBMS (하이브리드 공간 DBMS에서 질의 분류를 이용한 최적화 기법)

  • Chung, Weon-Il;Jang, Seok-Kyu
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.1
    • /
    • pp.290-299
    • /
    • 2008
  • We propose a query optimization technique using query classification in a hybrid spatial DBMS. In our approach, user queries are classified into three types: memory queries, disk queries, and hybrid queries. Specifically, in hybrid query processing, the query predicate is divided by comparing the materialized-view creation conditions with the user query conditions. The cost formulas derived for the classified queries are then used for query optimization, which is mainly done by an algorithm that selects the data access path with the smallest cost. Our approach improves the performance of the hybrid spatial DBMS over a traditional disk-based DBMS by 20% to 50%.
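
A minimal sketch of the classification-then-costing flow, under assumed cost constants: a query whose predicate is fully covered by the memory-resident materialized-view condition is a memory query, one with no overlap is a disk query, and anything in between is split into a memory part and a disk part before the cheaper access path is costed per part. The one-dimensional range predicates and the cost numbers are illustrative, not the paper's formulas.

```python
from dataclasses import dataclass

@dataclass
class Range:
    lo: float
    hi: float

    def overlaps(self, other):
        return self.lo < other.hi and other.lo < self.hi

    def contains(self, other):
        return self.lo <= other.lo and other.hi <= self.hi

# Assumed per-unit access costs (memory is much cheaper than disk).
MEMORY_COST, DISK_COST = 1.0, 20.0

def classify(query: Range, mview: Range) -> str:
    """Classify a query against the materialized-view creation condition."""
    if mview.contains(query):
        return "memory"
    if not mview.overlaps(query):
        return "disk"
    return "hybrid"

def plan(query: Range, mview: Range):
    """Return (query class, estimated cost) using the cheaper path per part."""
    kind = classify(query, mview)
    size = query.hi - query.lo
    if kind == "memory":
        return kind, size * MEMORY_COST
    if kind == "disk":
        return kind, size * DISK_COST
    # Hybrid: the in-view fraction goes to memory, the rest to disk.
    mem_part = min(query.hi, mview.hi) - max(query.lo, mview.lo)
    return kind, mem_part * MEMORY_COST + (size - mem_part) * DISK_COST

if __name__ == "__main__":
    mview = Range(0, 100)                  # condition of the in-memory view
    for q in (Range(10, 40), Range(200, 260), Range(80, 140)):
        print(q, *plan(q, mview))
```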

Statistical Approach to Noisy Band Removal for Enhancement of HIRIS Image Classification

  • Huan, Nguyen Van;Kim, Hak-Il
    • Proceedings of the KSRS Conference
    • /
    • 2008.03a
    • /
    • pp.195-200
    • /
    • 2008
  • The accuracy of classifying pixels in HIRIS images is usually degraded by noisy bands, since noisy bands may deform the typical shape of the spectral reflectance. Proposed in this paper is a statistical method for noisy band removal that mainly makes use of the correlation coefficients between bands. Considering each band as a random variable, the correlation coefficient measures the strength and direction of a linear relationship between two random variables. While the correlation between two signal bands is high, the presence of a noisy band produces a low correlation, because noise is poorly correlated and undirected. The correlation coefficient is applied as a measure for detecting noisy bands within a two-pass screening scheme. This method is independent of prior knowledge of the sensor or of the cause of the noise. The classification in this experiment uses the unsupervised k-nearest neighbor algorithm together with the well-accepted Euclidean distance measure and the spectral angle mapper measure. This paper also proposes a hierarchical combination of these measures for spectral matching. Finally, a separability assessment based on the between-class and within-class scatter matrices is performed to evaluate the classification performance.
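
A minimal sketch of the band-correlation screening idea, under assumptions: compute the correlation coefficient of each band with its spectral neighbours across all pixels and flag bands whose mean neighbour correlation falls below a threshold. The neighbourhood size and threshold are placeholders, and the sketch shows only one screening pass rather than the paper's two-pass scheme.

```python
import numpy as np

def noisy_bands(cube, neighbours=2, thr=0.5):
    """cube: hyperspectral image of shape (rows, cols, bands).
    Flag bands that are poorly correlated with their spectral neighbours."""
    r, c, b = cube.shape
    flat = cube.reshape(r * c, b)           # pixels x bands
    corr = np.corrcoef(flat, rowvar=False)  # band-to-band correlation matrix
    flags = []
    for k in range(b):
        lo, hi = max(0, k - neighbours), min(b, k + neighbours + 1)
        neigh = [corr[k, j] for j in range(lo, hi) if j != k]
        if np.mean(np.abs(neigh)) < thr:
            flags.append(k)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.random((20, 20, 1))
    # Ten smoothly related signal bands plus one pure-noise band (index 5).
    cube = np.concatenate([base * (1.0 + 0.05 * k) for k in range(10)], axis=2)
    cube[:, :, 5] = rng.standard_normal((20, 20))
    print("flagged noisy bands:", noisy_bands(cube))
```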


Load Balancing in Cloud Computing Using Meta-Heuristic Algorithm

  • Fahim, Youssef;Rahhali, Hamza;Hanine, Mohamed;Benlahmar, El-Habib;Labriji, El-Houssine;Hanoune, Mostafa;Eddaoui, Ahmed
    • Journal of Information Processing Systems
    • /
    • v.14 no.3
    • /
    • pp.569-589
    • /
    • 2018
  • Cloud computing, also known as "pay as you go", is used to turn any computer into a dematerialized architecture in which users can access different services. In addition to the daily growth in the number of stakeholders and beneficiaries, the load imbalance between the virtual machines of data centers in a cloud environment impacts performance, as it decreases the hardware resources and the software's profitability. Our line of research is load balancing between a data center's virtual machines, which reduces the degree of load imbalance between those machines in order to solve the problems caused by this technological evolution and ensure a greater quality of service. Our article focuses on two main phases: the pre-classification of tasks according to the requested resources, and the classification of tasks into levels ('odd levels' or 'even levels') in ascending order based on the meta-heuristic bat algorithm. The task allocation is based on the levels provided by the bat algorithm and on our mathematical functions, and we divide our system into a number of virtual machines with nearly equal performance. Otherwise, we suggest different classes of virtual machines, with the condition that each class contains machines with similar characteristics, in contrast to the existing binary search scheme.
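
The level-based allocation described above can be sketched very roughly as: sort tasks by requested resources, label alternate positions in the ranking as odd and even levels, and assign tasks level by level to virtual machines. The sketch below keeps only that skeleton; the actual bat-algorithm search, the paper's mathematical functions, and its VM classes are not reproduced, and the greedy least-loaded assignment is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    load: float = 0.0
    tasks: list = field(default_factory=list)

def classify_levels(task_sizes):
    """Sort tasks by requested resources (ascending) and split the
    ranking into 'odd' and 'even' levels."""
    ranked = sorted(enumerate(task_sizes), key=lambda t: t[1])
    odd = [t for i, t in enumerate(ranked) if i % 2 == 0]
    even = [t for i, t in enumerate(ranked) if i % 2 == 1]
    return odd, even

def allocate(task_sizes, vms):
    """Greedy stand-in for the bat-algorithm search: place each task,
    level by level, on the currently least-loaded VM."""
    odd, even = classify_levels(task_sizes)
    for task_id, size in odd + even:
        target = min(vms, key=lambda v: v.load)
        target.load += size
        target.tasks.append(task_id)
    return vms

if __name__ == "__main__":
    sizes = [8, 3, 12, 5, 9, 2, 7, 11]          # requested resources per task
    vms = allocate(sizes, [VM("vm1"), VM("vm2"), VM("vm3")])
    for vm in vms:
        print(vm.name, vm.load, vm.tasks)
```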

Texture Classification Algorithm for Patch-based Image Processing (패치 기반 영상처리를 위한 텍스쳐 분류 알고리즘)

  • Yu, Seung Wan;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.146-154
    • /
    • 2014
  • The local binary pattern (LBP) scheme, one of the texture classification methods, normally uses the distribution of flat, edge, and corner patterns. However, it cannot examine the edge direction or the pixel difference, because it is a binary pattern produced by thresholding. Furthermore, since it cannot consider the pixel distribution, its performance degrades as the image size becomes larger. In order to solve this problem, we propose a sub-classification method using the edge direction distribution and an eigen-matrix. The proposed sub-classification is applied to the particular texture patches that cannot be classified by LBP. First, we quantize the edge direction and compute its distribution. Second, we calculate the distribution of the largest value among the eigenvalues derived from the structure matrix. Simulation results show that the proposed method provides a classification performance about 8% higher than the existing method.
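
A rough sketch of the two sub-classification features named in the abstract, under assumed details: quantize gradient directions into a small number of bins and histogram them, and compute the largest eigenvalue of the 2x2 structure matrix (sum of outer products of the patch gradients). The bin count, the gradient operator, and the final decision rule are placeholders, not the authors' settings.

```python
import numpy as np

def gradients(patch):
    """Central-difference gradients of a grey-level patch."""
    gy, gx = np.gradient(patch.astype(float))
    return gx, gy

def edge_direction_histogram(patch, bins=8):
    """Distribution of quantized edge directions, weighted by magnitude."""
    gx, gy = gradients(patch)
    angle = np.arctan2(gy, gx) % np.pi          # direction modulo 180 degrees
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(angle, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def largest_structure_eigenvalue(patch):
    """Largest eigenvalue of the 2x2 structure matrix of the patch."""
    gx, gy = gradients(patch)
    s = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return float(np.linalg.eigvalsh(s)[-1])

if __name__ == "__main__":
    patch = np.zeros((16, 16))
    patch[:, 8:] = 255                           # vertical step edge
    print("direction histogram:", np.round(edge_direction_histogram(patch), 2))
    print("largest eigenvalue :", round(largest_structure_eigenvalue(patch), 1))
```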