• Title/Summary/Keyword: ICA (Independent Component Analysis)

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun; Lee, Sung-Uk; Lee, Jong-Ha; Kee, Seok-Cheol; Kim, Sang-Ryong
    • Proceedings of the IEEK Conference / 2003.07e / pp.1960-1963 / 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. Various visual cues are combined in the algorithm: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine) based pattern detection is performed on the candidate regions extracted from motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show that our detection rate is 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a 1 GHz Pentium IV.
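
A minimal sketch of an ICA-SVM verification stage of this kind, assuming scikit-learn's FastICA and SVC; the patch size, component count, and training data are placeholders, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

# Hypothetical training data: flattened 20x20 face / non-face patches.
X_train = np.random.rand(200, 400)          # placeholder patches
y_train = np.random.randint(0, 2, 200)      # 1 = face, 0 = non-face

# Learn an ICA basis from the training patches.
ica = FastICA(n_components=50, random_state=0)
S_train = ica.fit_transform(X_train)        # independent-component features

# Train an SVM on the ICA features.
svm = SVC(kernel="rbf").fit(S_train, y_train)

def classify_candidate(window):
    """Score one candidate region (flattened patch) proposed by
    motion / skin-color / global-appearance cues."""
    s = ica.transform(window.reshape(1, -1))
    return svm.predict(s)[0]                 # 1 if the window looks like a face
```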

Active Noise Cancellation using a Teacher Forced BSS Learning Algorithm

  • Sohn, Jun-Il; Lee, Min-Ho; Lee, Wang-Ha
    • Journal of Sensor Science and Technology / v.13 no.3 / pp.224-229 / 2004
  • In this paper, we propose a new Active Noise Control (ANC) system using a teacher-forced Blind Source Separation (BSS) algorithm. The BSS, based on Independent Component Analysis (ICA), separates the desired sound signal from the unwanted noise signal. In the proposed system, the BSS algorithm is used as a preprocessor of the ANC system. We also develop a teacher-forced BSS learning algorithm to enhance the performance of the BSS, where the teacher signal is obtained from the output signal of the ANC system. Computer experiments show that the proposed ANC system, in conjunction with the BSS algorithm, effectively cancels only the ship-engine noise signal from linear and convolved mixtures with a human voice.
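
One way to read this pipeline: ICA-based BSS first separates the mixture, and the separated noise estimate then drives an adaptive (LMS-type) noise canceller whose output can serve as the teacher signal. A rough sketch under those assumptions; filter length, step size, and signal shapes are illustrative, not the paper's implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_sources(mixtures):
    """BSS preprocessing: mixtures is (n_samples, n_sensors)."""
    ica = FastICA(n_components=mixtures.shape[1], random_state=0)
    return ica.fit_transform(mixtures)        # estimated sources (e.g. voice, noise)

def lms_anc(primary, reference, taps=32, mu=0.01):
    """Simple LMS noise canceller: subtract the filtered reference
    (separated noise) from the primary signal."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]
        y = w @ x                             # noise estimate
        e = primary[n] - y                    # ANC output (also usable as the teacher signal)
        w += mu * e * x
        out[n] = e
    return out
```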

Invariant Iris Code extraction for generating cryptographic key based on Fuzzy Vault (퍼지볼트 기반의 암호 키 생성을 위한 불변 홍채코드 추출)

  • Lee, Youn-Joo; Park, Kang-Ryoung; Kim, Jai-Hie
    • Proceedings of the IEEK Conference / 2006.06a / pp.321-322 / 2006
  • In this paper, we propose a method that extracts invariant iris codes from a user's iris pattern in order to apply these codes to a new cryptographic construct called the fuzzy vault. The fuzzy vault, proposed by Juels and Sudan, has been used to manage cryptographic keys safely by merging them with biometrics. In general, iris data show intra-class variation of the iris pattern according to changes in the sensing environment, whereas cryptography requires exactness. Therefore, to combine iris data with the fuzzy vault, we have to extract an invariant iris feature from the iris pattern. In this paper, we obtain invariant iris codes by clustering iris features extracted by an independent component analysis (ICA) transform. The experimental results show that the iris codes extracted by our method are invariant to changes in the sensing environment and can be used in the fuzzy vault.
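
The idea of quantizing ICA features into invariant codes can be illustrated with k-means clustering: repeated acquisitions of the same iris should map to the same cluster index, which can then serve as a stable input to the fuzzy vault. A sketch for illustration only; the component count, cluster count, and data are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Hypothetical iris feature vectors from several enrollment images.
iris_features = np.random.rand(30, 256)        # placeholder data

ica = FastICA(n_components=16, random_state=0)
components = ica.fit_transform(iris_features)  # ICA-transformed features

# Cluster the ICA features; the cluster index becomes the invariant code.
quantizer = KMeans(n_clusters=4, n_init=10, random_state=0).fit(components)

def iris_code(feature_vector):
    """Map a new acquisition to an invariant code (a cluster label)."""
    c = ica.transform(feature_vector.reshape(1, -1))
    return quantizer.predict(c)[0]             # stable index fed to the fuzzy vault
```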

An Improvement of Recognition Performance Based on Nonlinear Equalization and Statistical Correlation (비선형 평활화와 통계적 상관성에 기반을 둔 인식성능 개선)

  • Shin, Hyun-Soo; Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.555-562 / 2012
  • This paper presents a hybrid method for improving recognition performance, which is based on nonlinear histogram equalization, feature extraction, and the statistical correlation of images. The nonlinear histogram equalization, based on a logistic function, adaptively improves image quality by adjusting the brightness of the image according to its intensity-level frequency. The statistical correlation, measured by the normalized cross-correlation (NCC) coefficient, is applied to express the similarity between images rapidly and accurately. Local features based on independent component analysis (ICA), which are used to calculate the NCC, are also applied to statistically measure the correct similarity in each image. The proposed method has been applied to the problem of recognizing 30 face images of 40×50 pixels. The experimental results show that the proposed method has superior recognition performance compared with the method without preprocessing and with the conventional and adaptively modified histogram equalization methods, respectively.
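
The two core operations, logistic-function-based nonlinear equalization and the NCC coefficient, can be written compactly as below. The steepness parameter and the use of the image mean as the logistic midpoint are assumptions for illustration, not the paper's exact mapping:

```python
import numpy as np

def logistic_equalize(img, k=0.02):
    """Nonlinear intensity mapping with a logistic (sigmoid) function;
    brightness is adjusted adaptively around the image mean."""
    img = img.astype(float)
    mid = img.mean()                      # midpoint of the logistic curve (assumed)
    out = 1.0 / (1.0 + np.exp(-k * (img - mid)))
    return (255 * (out - out.min()) / (out.max() - out.min())).astype(np.uint8)

def ncc(a, b):
    """Normalized cross-correlation coefficient between two images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-12))
```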

Separations and Feature Extractions for Image Signals Using Independent Component Analysis Based on Neural Networks of Efficient Learning Rule (효율적인 학습규칙의 신경망 기반 독립성분분석을 이용한 영상신호의 분리 및 특징추출)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.2 / pp.200-208 / 2003
  • This paper proposes a separation and feature-extraction method for image signals using independent component analysis (ICA) based on neural networks with an efficient learning rule. The proposed learning rule is a hybrid fixed-point (FP) algorithm based on the secant method and momentum. The secant method is applied to improve performance by simplifying the first-order derivative computation when optimizing the objective function, which minimizes the mutual information of the independent components. Momentum is applied for high-speed convergence by restraining oscillation in the process of converging to the optimal solution. The proposed algorithm has been applied to composite images generated by a random mixing matrix from 10 images of 512×512 pixels. The simulation results show that the proposed algorithm achieves better separation speed and separation rate than the FP algorithms based on the Newton and secant methods. The proposed algorithm has also been applied to extract features using three sets of 10,000 image patches taken from 10 fingerprint images of 256×256 pixels and from the front and rear sides of paper currency of 480×225 pixels, respectively. The simulation results show that the proposed algorithm also has better extraction speed than the other methods. In particular, the 160 basis vectors (features) of 16×16 pixels show local features that have the characteristics of spatial frequency and oriented edges in the images.
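
A single-unit reading of such a hybrid fixed-point rule: the derivative of the nonlinearity in a FastICA-style update is replaced by a secant (finite-difference) approximation, and a momentum term damps oscillation. This is a simplified sketch, not the paper's exact rule; the whitening assumption, nonlinearity, and constants are illustrative:

```python
import numpy as np

def secant_momentum_ica(X, n_iter=100, delta=1e-4, beta=0.5):
    """Estimate one independent component from whitened data X of shape (dim, samples)."""
    g = np.tanh                               # contrast nonlinearity (assumed)
    dim = X.shape[0]
    w = np.random.rand(dim)
    w /= np.linalg.norm(w)
    dw_old = np.zeros(dim)
    for _ in range(n_iter):
        u = w @ X
        # Secant (finite-difference) approximation of g'(u) instead of the analytic derivative.
        g_prime = (g(u + delta) - g(u)) / delta
        w_new = (X * g(u)).mean(axis=1) - g_prime.mean() * w   # fixed-point step
        dw = (w_new - w) + beta * dw_old      # momentum to restrain oscillation
        dw_old = dw
        w = w + dw
        w /= np.linalg.norm(w)                # keep the weight vector on the unit sphere
    return w
```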

Skin Pigment Recognition using Projective Hemoglobin- Melanin Coordinate Measurements

  • Yang, Liu; Lee, Suk-Hwan; Kwon, Seong-Geun; Song, Ha-Joo; Kwon, Ki-Ryong
    • Journal of Electrical Engineering and Technology / v.11 no.6 / pp.1825-1838 / 2016
  • The detection of skin pigment is crucial in the diagnosis of skin diseases and in the evaluation of medical cosmetics and hairdressing. Accuracy in the detection is a basis for the prompt treatment of skin diseases. This study presents a method to recognize and measure human skin pigment using a Hemoglobin-Melanin (HM) coordinate. The proposed method extracts the skin area through a Gaussian skin-color model estimated from statistical analysis and decomposes the skin area into the two pigments of hemoglobin and melanin using an Independent Component Analysis (ICA) algorithm. Then, we divide the two-dimensional (2D) HM coordinate into rectangular bins and compute the location histograms of hemoglobin and melanin for all the bins. We label each bin as hemoglobin, melanin, or normal skin according to a Bayesian classifier. These bin-based HM projective histograms quantify the skin pigments and allow the standard deviation of the total pigment quantity around normal skin to be computed. We tested our scheme using images taken under different illumination conditions, and several cosmetic coverings were used to test the performance of the proposed method. The experimental results show that the proposed method can detect skin pigments more accurately and evaluate cosmetic covering effects more effectively than conventional methods.
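
The pigment-decomposition step can be illustrated with a two-component ICA applied to the optical-density (negative log RGB) values of skin pixels, which yields the hemoglobin and melanin axes of the 2D HM coordinate. A rough sketch under those assumptions; the Bayesian labeling of bins is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA

def hm_decompose(skin_pixels_rgb):
    """skin_pixels_rgb: (n_pixels, 3) RGB values of the detected skin region.
    Returns (n_pixels, 2) hemoglobin / melanin coordinates (illustrative)."""
    density = -np.log(skin_pixels_rgb.astype(float) / 255.0 + 1e-6)  # optical density
    ica = FastICA(n_components=2, random_state=0)
    return ica.fit_transform(density)          # 2D hemoglobin-melanin coordinate

def hm_histogram(hm, bins=32):
    """Divide the 2D HM coordinate into rectangular bins (location histogram)."""
    hist, _, _ = np.histogram2d(hm[:, 0], hm[:, 1], bins=bins)
    return hist
```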

Mobile ECG Measurement System Design with Fetal ECG Extraction Capability (태아 ECG 추출 기능을 가지는 모바일 심전도 측정 시스템 설계)

  • Choi, Chul-Hyung; Kim, Young-Pil; Kim, Si-Kyung; You, Jeong-Bong; Seo, Bong-Gyun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.2 / pp.431-438 / 2017
  • In this paper, the abdominal ECG (AECG) is employed to measure the mother's ECG instead of the conventional thoracic ECG measurement. The fetal ECG signal can be extracted from the AECG using an algorithm running on a mobile fetal ECG measurement platform based on BLE (Bluetooth Low Energy). The algorithm is implemented so that it can be processed directly on the BLE platform, replacing the large-scale statistical processing required by ICA (Independent Component Analysis). The proposed algorithm can thus be implemented on a mobile BLE wireless ECG hardware platform to process the maternal ECG. The wireless technology realizes a compact, low-power radio system for short-distance communication, and the IoT (Internet of Things) enables the transmission of real-time ECG data. The system was also implemented as a compact module so that mothers can collect and store the ECG data without having to interrupt or move the logger, and later connect the module to a computer for downloading and analyzing the data. A mobile ECG measurement prototype was manufactured and tested to measure the FECG of pregnant women, and the experimental results verify the real-time FECG extraction capability of the proposed system. The proposed ECG measurement system shows approximately 91.65% similarity to the MIT database and the conventional algorithm, and about 10% better SNR performance.
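
The abstract does not spell out the lightweight replacement for ICA, but a common low-complexity way to extract a fetal ECG from a single abdominal channel is maternal-QRS template subtraction. A minimal sketch of that general idea, not necessarily the paper's algorithm; the thresholds and window lengths are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def fetal_ecg_by_template_subtraction(aecg, fs=500):
    """Subtract an averaged maternal QRS template from the abdominal ECG;
    the residual emphasizes the smaller, faster fetal ECG."""
    # Maternal R-peaks: the largest peaks, at least ~0.4 s apart (assumed thresholds).
    r_peaks, _ = find_peaks(aecg, height=np.percentile(aecg, 99), distance=int(0.4 * fs))
    half = int(0.06 * fs)                        # window around each R-peak
    beats = [aecg[p - half:p + half] for p in r_peaks if half <= p < len(aecg) - half]
    template = np.mean(beats, axis=0)            # averaged maternal QRS template
    residual = aecg.astype(float)
    for p in r_peaks:
        if half <= p < len(aecg) - half:
            residual[p - half:p + half] -= template   # remove the maternal contribution
    return residual                                   # contains the fetal ECG component
```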

Estimation and Elimination of ECG Artifacts from Single Channel Scalp EEG (단일 채널 두피 뇌전도에서의 심전도 잡음 추정 및 제거)

  • Cho, Sung-Pil; Song, Mi-Hye; Park, Ho-Dong; Lee, Kyoung-Joung; Park, Young-Cheol
    • Proceedings of the KIEE Conference / 2007.07a / pp.1910-1911 / 2007
  • A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single-channel scalp electroencephalogram (EEG) is proposed. The proposed method consists of emphasizing the QRS complex in the EEG using a least-squares acceleration (LSA) filter, generating a pulse synchronized with the R-peak, and estimating and eliminating the ECG artifacts using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings; the ECG artifacts were successfully estimated and eliminated in comparison with conventional multi-channel techniques, namely independent component analysis (ICA) and the ensemble average (EA) method. In conclusion, the proposed method is useful for detecting and eliminating ECG artifacts from single-channel EEG and is simple to use in ambulatory/portable EEG monitoring systems.
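
The R-peak-synchronized adaptive cancellation can be sketched as an LMS filter driven by an impulse train placed at the detected R-peaks; a simple amplitude-based peak detector stands in here for the LSA QRS-emphasis filter. Parameters are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def remove_ecg_artifact(eeg, fs=250, taps=64, mu=0.05):
    """Estimate and subtract the ECG artifact from single-channel EEG."""
    # 1) Detect QRS-like peaks in the contaminated EEG (stand-in for the LSA filter).
    peaks, _ = find_peaks(np.abs(eeg), height=3 * np.std(eeg), distance=int(0.4 * fs))
    # 2) Build a reference pulse train synchronized with the R-peaks.
    ref = np.zeros(len(eeg))
    ref[peaks] = 1.0
    # 3) LMS adaptive filter: shape the pulse train into the artifact waveform and subtract it.
    w = np.zeros(taps)
    clean = eeg.astype(float)
    for n in range(taps, len(eeg)):
        x = ref[n - taps:n][::-1]
        artifact = w @ x
        e = eeg[n] - artifact                 # cleaned EEG sample
        w += mu * e * x
        clean[n] = e
    return clean
```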

Robust Blind Source Separation to Noisy Environment For Speech Recognition in Car (차량용 음성인식을 위한 주변잡음에 강건한 브라인드 음원분리)

  • Kim, Hyun-Tae; Park, Jang-Sik
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.89-95 / 2006
  • The performance of blind source separation (BSS) using independent component analysis (ICA) declines significantly in reverberant environments. The post-processing method proposed in this paper is designed to remove the residual cross-talk components precisely. The proposed method uses a modified NLMS (normalized least mean square) filter in the frequency domain to estimate the cross-talk path that causes the residual cross-talk components. Residual cross-talk components in one channel correspond to the direct components in the other channel; therefore, the cross-talk path can be estimated with an adaptive filter driven by the other channel's input signal. In the conventional NLMS filter the step size is normalized by the input signal power, whereas in the modified NLMS filter it is normalized by the sum of the input signal power and the error signal power, which prevents misadjustment of the filter weights. The estimated residual cross-talk components are then removed by non-stationary spectral subtraction. Computer simulations using speech signals show that the proposed method improves the noise reduction ratio (NRR) by approximately 3 dB over conventional FDICA.
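
The key modification, normalizing the step size by the sum of input power and error power, can be shown per frequency bin in a few lines. A sketch under assumed STFT framing (the framing itself and the spectral-subtraction stage are omitted); the step size and single-tap path model are assumptions:

```python
import numpy as np

def modified_nlms_bin(X_other, D, mu=0.5, eps=1e-8):
    """One frequency bin: X_other is the other channel's (direct-path) spectrum over
    frames, D is this channel's BSS output still containing residual cross-talk."""
    w = 0.0 + 0.0j                               # single-tap cross-talk path estimate
    E = np.zeros(len(D), dtype=complex)
    for k in range(len(D)):
        y = w * X_other[k]                       # estimated residual cross-talk
        e = D[k] - y                             # output with residual removed
        # Modified normalization: input power plus error power, not input power alone.
        norm = np.abs(X_other[k])**2 + np.abs(e)**2 + eps
        w += (mu / norm) * np.conj(X_other[k]) * e
        E[k] = e
    return E
```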

Development and Validation of a Machine Learning-based Differential Diagnosis Model for Patients with Mild Cognitive Impairment using Resting-State Quantitative EEG (안정 상태에서의 정량 뇌파를 이용한 기계학습 기반의 경도인지장애 환자의 감별 진단 모델 개발 및 검증)

  • Moon, Kiwook; Lim, Seungeui; Kim, Jinuk; Ha, Sang-Won; Lee, Kiwon
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.185-192 / 2022
  • Early detection of mild cognitive impairment can help prevent the progression of dementia. The purpose of this study was to design and validate a machine learning model that automatically performs differential diagnosis of patients with mild cognitive impairment and identifies cognitive-decline characteristics relative to a control group with normal cognition, using resting-state, eyes-closed quantitative electroencephalography (qEEG). In the first step, a rectified signal was obtained through a preprocessing stage that takes the quantitative EEG signal as input and removes noise through filtering and independent component analysis (ICA). Frequency-domain and non-linear features were extracted from the rectified signal, and the 3,067 extracted features were used as the input to a linear support vector machine (SVM), a representative machine learning algorithm, to classify subjects into mild cognitive impairment patients and cognitively normal adults. In the classification analysis of 58 cognitively normal subjects and 80 patients with mild cognitive impairment, the SVM achieved an accuracy of 86.2%. Compared with the normal cognitive group, patients with mild cognitive impairment showed decreased alpha-band power and increased high-beta-band power in the frontal lobe, and decreased gamma-band power in the occipito-parietal lobe. These results indicate that quantitative EEG can be used as a meaningful biomarker to discriminate cognitive decline.
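
The processing chain described here (band-pass filtering plus ICA for denoising, band-power feature extraction, and a linear SVM) can be sketched as below; channel count, band definitions, and hyperparameters are assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.decomposition import FastICA
from sklearn.svm import LinearSVC

BANDS = {"alpha": (8, 13), "high_beta": (20, 30), "gamma": (30, 45)}  # assumed bands

def preprocess(eeg, fs=250):
    """Band-pass filter and ICA-based denoising of multichannel EEG (channels x samples)."""
    b, a = butter(4, [1, 45], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(filtered.T)       # artifact components could be zeroed here
    return ica.inverse_transform(sources).T

def band_power_features(eeg, fs=250):
    """Relative band power per channel as a simple qEEG feature vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=1)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, idx].sum(axis=1) / psd.sum(axis=1))
    return np.concatenate(feats)

# X: subjects x features, y: 1 = MCI, 0 = normal cognition (placeholder labels)
# clf = LinearSVC(C=1.0).fit(X, y)
```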