• Title/Summary/Keyword: Segmentation and feature extraction

AdaBoost-based Gesture Recognition Using Time Interval Window Applied Global and Local Feature Vectors with Mono Camera (모노 카메라 영상기반 시간 간격 윈도우를 이용한 광역 및 지역 특징 벡터 적용 AdaBoost기반 제스처 인식)

  • Hwang, Seung-Jun;Ko, Ha-Yoon;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.3
    • /
    • pp.471-479
    • /
    • 2018
  • Recently, smart TVs and Android/iOS-based set-top boxes have become widespread. This paper proposes a new approach to controlling the TV with gestures, moving away from the era of controlling the TV with a remote control. In this paper, the AdaBoost algorithm is applied to gesture recognition using a mono camera. First, Camshift-based body tracking and an estimation algorithm based on Gaussian background removal are used for body coordinate extraction. Using global and local feature vectors, we recognize gestures with speed changes. By tracking the time-interval trajectories of the hand and wrist, an AdaBoost classifier with CART base learners is trained to classify gestures. The principal component feature vectors with high classification success rates are searched using the CART algorithm. As a result, 24 optimal feature vectors were found, which showed a lower error rate (3.73%) and a higher accuracy rate (95.17%) than the existing algorithm.
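
The classification stage described above pairs AdaBoost with CART base learners over 24 selected features. A minimal sketch of that combination, assuming scikit-learn and random placeholder data in place of the paper's trajectory features:

```python
# Hypothetical sketch: AdaBoost over CART (decision-tree) base learners, standing
# in for the gesture classifier described above. The random feature matrix is a
# placeholder for real hand/wrist trajectory features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))        # 24 features per time-interval window (placeholder)
y = rng.integers(0, 4, size=500)      # 4 gesture classes (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),  # CART-style base learner
    n_estimators=100,                               # (older scikit-learn uses base_estimator=)
    learning_rate=0.5,
    random_state=0,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```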

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.317-329
    • /
    • 2014
  • In vision measurement systems based on structured light, the key to detection precision is determining accurately the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, preprocessing of the image adopting a difference-image method is conducted to realize image segmentation of the laser line. Second, feature points at the integer-pixel level are selected as the initial laser line centers by the eigenvalues of the Hessian matrix. Third, according to the light intensity distribution of a laser line, which obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian matrix eigenvalues for the candidate centers of the laser line is presented to balance reasonably the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero, and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. In the proposed model, the decision function assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, this method provides a decision value from 0 to 1 for arbitrary candidate centers, which yields a normalized measure for different laser lines in different images. Pixels whose normalized result is close to 1 are determined to be the real centers by progressive scanning of the image columns. Finally, the zero point of a second-order Taylor expansion in the eigenvector's direction is employed to further refine the extracted central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and obtains a higher recognition rate in two groups of experiments.
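
As an illustrative sketch (not the authors' code) of the Hessian step above, the second-order derivatives can be obtained with Gaussian derivative filters and the two per-pixel eigenvalues computed in closed form; the paper's exact Gaussian and sinusoidal recognition functions are replaced here by a generic ridge-like score:

```python
# Illustrative sketch: per-pixel Hessian eigenvalues of an image, the quantities
# the normalized decision function above is built from. Their combination into
# the paper's recognition functions is simplified to a generic ridge score.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma=2.0):
    """Return the two eigenvalue maps of the per-pixel 2x2 Hessian."""
    Ixx = gaussian_filter(image, sigma, order=(0, 2))   # d2/dx2 (columns)
    Iyy = gaussian_filter(image, sigma, order=(2, 0))   # d2/dy2 (rows)
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    trace = Ixx + Iyy
    root = np.sqrt((Ixx - Iyy) ** 2 + 4.0 * Ixy ** 2)
    return 0.5 * (trace + root), 0.5 * (trace - root)

# Synthetic vertical laser line: Gaussian profile across columns, constant along rows.
cols = np.arange(200)
image = np.tile(np.exp(-0.5 * ((cols - 100) / 3.0) ** 2), (100, 1))

lam1, lam2 = hessian_eigenvalues(image)
# For a bright line, one eigenvalue is strongly negative (transverse direction)
# and the other is near zero (longitudinal direction).
ridge_score = np.clip(-np.minimum(lam1, lam2), 0, None) * np.exp(-np.abs(np.maximum(lam1, lam2)))
centers = ridge_score.argmax(axis=1)   # per-row candidate center columns
print(centers[:5])
```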

Environmental IoT-Enabled Multimodal Mashup Service for Smart Forest Fires Monitoring

  • Elmisery, Ahmed M.;Sertovic, Mirela
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.163-170
    • /
    • 2017
  • The Internet of Things (IoT) is a new paradigm for collecting, processing, and analyzing various contents in order to detect anomalies and to monitor particular patterns in a specific environment. The collected data can be used to discover new patterns and to offer new insights. IoT-enabled data mashup is a new technology to combine various types of information from multiple sources into a single web service. Mashup services create a new horizon for different applications. Environmental monitoring is an essential tool for state and private organizations located in regions with environmental hazards that seek to gain insights to detect hazards and locate them clearly. These organizations may utilize an IoT-enabled data mashup service to merge different types of datasets from different IoT sensor networks in order to improve their data analytics performance and the accuracy of their predictions. This paper presents an IoT-enabled data mashup service in which multimedia data is collected from various IoT platforms and then fed into an environmental cognition service that executes different image processing techniques, such as noise removal, segmentation, and feature extraction, in order to detect interesting patterns in hazardous areas. The noise present in the captured images is eliminated with the help of noise removal and background subtraction processes. A Markov-based approach is utilized to segment the possible regions of interest. The viable features within each region are extracted using a multiresolution wavelet transform and then fed into a discriminative classifier to extract various patterns. Experimental results have shown accurate detection performance and adequate processing time for the proposed approach. We also provide a data mashup scenario for an IoT-enabled environmental hazard detection service and experimentation results.
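
The feature extraction stage above uses a multiresolution wavelet transform over segmented regions. A hedged sketch of such a step with PyWavelets, where the wavelet family (db2), decomposition level, and sub-band statistics are assumptions rather than the paper's configuration:

```python
# Hedged sketch: multiresolution wavelet features for a candidate image region.
# Wavelet family, decomposition level, and statistics are assumptions.
import numpy as np
import pywt

def wavelet_features(region, wavelet="db2", level=2):
    """Energy and spread of each wavelet sub-band of a 2-D region."""
    coeffs = pywt.wavedec2(region, wavelet=wavelet, level=level)
    features = []
    # coeffs[0] is the approximation; the rest are (cH, cV, cD) detail tuples.
    for band in [coeffs[0]] + [c for triple in coeffs[1:] for c in triple]:
        band = np.asarray(band)
        features.append(np.mean(band ** 2))   # sub-band energy
        features.append(np.std(band))         # sub-band spread
    return np.array(features)

region = np.random.default_rng(1).random((64, 64))  # placeholder segmented region
print(wavelet_features(region).shape)
```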

Multimodality Image Registration and Fusion using Feature Extraction (특징 추출을 이용한 다중 영상 정합 및 융합 연구)

  • Woo, Sang-Keun;Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.123-130
    • /
    • 2007
  • The aim of this study was to propose a fusion and registration method for a heterogeneous small-animal acquisition system in small-animal in-vivo studies. After an intravenous injection of $^{18}F$-FDG through the tail vein and a 60-min delay for uptake, the mouse was placed on an acrylic plate with fiducial markers made for fusion between small animal PET (microPET R4, Concorde Microsystems, Knoxville TN) and Discovery LS CT images. The acquired emission list-mode data were sorted into temporally framed sinograms and reconstructed using FORE rebinning and 2D-OSEM algorithms without correction for attenuation and scatter. After PET imaging, CT images were acquired by means of a clinical PET/CT in high-resolution mode. The microPET and CT images were fused and co-registered using the fiducial markers and the segmented lung regions in both data sets to perform a point-based rigid co-registration. This method improves the quantitative accuracy and interpretation of the tracer.
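
The point-based rigid co-registration described above can be sketched with the closed-form SVD (Kabsch) solution for the rotation and translation aligning two corresponding marker sets; the marker coordinates below are placeholders, not the study's data:

```python
# Illustrative sketch: point-based rigid registration of corresponding fiducial
# markers via the SVD (Kabsch) solution. The marker coordinates are placeholders.
import numpy as np

def rigid_register(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Placeholder fiducial marker coordinates (e.g., microPET space vs. CT space).
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]])
true_R = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ true_R.T + np.array([2.0, -3.0, 1.5])

R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))            # True: markers aligned
```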

A semi-automated method for integrating textural and material data into as-built BIM using TIS

  • Zabin, Asem;Khalil, Baha;Ali, Tarig;Abdalla, Jamal A.;Elaksher, Ahmed
    • Advances in Computational Design
    • /
    • v.5 no.2
    • /
    • pp.127-146
    • /
    • 2020
  • Building Information Modeling (BIM) is increasingly used throughout a facility's life cycle for various applications, such as design, construction, facility management, and maintenance. For existing buildings, the geometry of as-built BIM is often constructed using dense, three-dimensional (3D) point cloud data obtained with laser scanners. Traditionally, as-built BIM systems do not contain the material and textural information of the buildings' elements. This paper presents a semi-automatic method for the generation of material- and texture-rich as-built BIM. The method captures and integrates material and textural information of building elements into as-built BIM using thermal infrared sensing (TIS). The proposed method uses TIS to capture thermal images of the interior walls of an existing building. These images are then processed to extract the interior walls using a segmentation algorithm. The digital numbers in the resulting images are then transformed into radiance values that represent the emitted thermal infrared radiation. Machine learning techniques are then applied to build a correlation between the radiance values and the material type in each image. The radiance values are also used to extract textural information from the images. The extracted textural and material information are then robustly integrated into the as-built BIM, providing the data needed for the assessment of building conditions in general, including energy efficiency, among others.
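
As a hedged sketch of the radiometric and classification steps above, a simple linear gain/offset DN-to-radiance conversion and a random-forest material classifier are assumed below; neither the calibration coefficients nor the classifier choice comes from the paper:

```python
# Hedged sketch: linear DN-to-radiance conversion and a material classifier.
# The gain/offset values, feature choice, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GAIN, OFFSET = 0.04, 1.2          # assumed sensor calibration coefficients

def dn_to_radiance(dn):
    """Assumed linear radiometric calibration: radiance = gain * DN + offset."""
    return GAIN * np.asarray(dn, dtype=float) + OFFSET

def patch_features(radiance_patch):
    """Simple per-patch statistics used as inputs to the material classifier."""
    p = radiance_patch.ravel()
    return [p.mean(), p.std(), p.min(), p.max()]

rng = np.random.default_rng(42)
# Placeholder data: 200 wall patches of 16x16 DNs, each labelled with a material id.
patches = rng.integers(0, 2 ** 14, size=(200, 16, 16))
labels = rng.integers(0, 3, size=200)                 # e.g. concrete / brick / gypsum

X = np.array([patch_features(dn_to_radiance(p)) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy (placeholder data):", clf.score(X, labels))
```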

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence
    • /
    • v.14 no.8
    • /
    • pp.233-243
    • /
    • 2016
  • Due to speech recognition problems in noisy environments, Audio Visual Speech Recognition (AVSR) systems, which combine speech information and visual information, have been proposed since the mid-1990s, and lip reading has played a significant role in AVSR systems. This study aims to enhance the recognition rate of uttered words using only lip shape detection for an efficient AVSR system. After preprocessing for lip region detection, Convolutional Neural Network (CNN) techniques are applied for utterance period detection and lip shape feature vector extraction, and Hidden Markov Models (HMMs) are then used for the recognition. As a result, utterance period detection achieves a 91% success rate, a higher performance than general threshold methods. In lip reading recognition, the user-dependent experiment records an 88.5% recognition rate and the user-independent experiment 80.2%, improved results compared to previous studies.
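
A minimal sketch of a CNN of the kind described above, assuming PyTorch, 32x32 grayscale mouth crops, and 10 lip-shape classes (the paper's actual architecture, input size, and class count are not specified here); the logits or penultimate activations could then feed the HMM stage:

```python
# Hedged sketch: a small CNN for lip-shape classification, assuming 32x32
# grayscale mouth-region crops and 10 classes (both are assumptions).
import torch
import torch.nn as nn

class LipCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))  # class logits; these (or the
                                              # penultimate activations) could
                                              # feed an HMM as described above

model = LipCNN()
dummy = torch.randn(4, 1, 32, 32)             # batch of 4 placeholder mouth crops
print(model(dummy).shape)                     # torch.Size([4, 10])
```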

Fire Detection Approach using Robust Moving-Region Detection and Effective Texture Features of Fire (강인한 움직임 영역 검출과 화재의 효과적인 텍스처 특징을 이용한 화재 감지 방법)

  • Nguyen, Truc Kim Thi;Kang, Myeongsu;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.6
    • /
    • pp.21-28
    • /
    • 2013
  • This paper proposes an effective fire detection approach that includes the following multiple heterogeneous algorithms: moving region detection using grey level histograms, color segmentation using fuzzy c-means clustering (FCM), feature extraction using a grey level co-occurrence matrix (GLCM), and fire classification using a support vector machine (SVM). The proposed approach determines the optimal threshold values based on grey level histograms in order to detect moving regions, and then performs color segmentation in the CIE LAB color space by applying FCM. These steps help to specify candidate regions of fire. We then extract features of fire using the GLCM, and these features are used as inputs of the SVM to classify fire or non-fire. We evaluate the proposed approach by comparing it with two state-of-the-art fire detection algorithms in terms of the fire detection rate (the percentage of true positives, PTP) and the false fire detection rate (the percentage of true negatives, PTN). Experimental results indicate that the proposed approach outperforms conventional fire detection algorithms, yielding 97.94% for PTP and 4.63% for PTN.
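
The GLCM-plus-SVM stage above can be sketched with scikit-image and scikit-learn; the moving-region detection and FCM color segmentation are omitted, and the GLCM distances, angles, and properties below are assumptions:

```python
# Illustrative sketch: GLCM texture features for candidate regions, fed to an SVM.
# Distances, angles, and the chosen GLCM properties are assumptions.
# (Older scikit-image releases spell these greycomatrix/greycoprops.)
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_region):
    """Contrast/homogeneity/energy/correlation from a grey level co-occurrence matrix."""
    glcm = graycomatrix(gray_region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
regions = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # placeholder crops
labels = rng.integers(0, 2, size=60)                               # fire / non-fire

X = np.array([glcm_features(r) for r in regions])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy (placeholder data):", clf.score(X, labels))
```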

A Study on Machine Learning-Based Real-Time Gesture Classification Using EMG Data (EMG 데이터를 이용한 머신러닝 기반 실시간 제스처 분류 연구)

  • Ha-Je Park;Hee-Young Yang;So-Jin Choi;Dae-Yeon Kim;Choon-Sung Nam
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.57-67
    • /
    • 2024
  • This paper explores the potential of electromyography (EMG) as a means of gesture recognition for user input in gesture-based interaction. EMG utilizes small electrodes within muscles to detect and interpret user movements, presenting a viable input method. To classify user gestures based on EMG data, machine learning techniques are employed, necessitating the preprocessing of raw EMG data to extract relevant features. EMG characteristics can be expressed through formulas such as Integrated EMG (IEMG), Mean Absolute Value (MAV), Simple Square Integral (SSI), Variance (VAR), and Root Mean Square (RMS). Additionally, determining the suitable time for gesture classification is crucial, considering the perceptual, cognitive, and response times required for user input. To address this, segment sizes ranging from a minimum of 100ms to a maximum of 1,000ms are varied, and feature extraction is performed to identify the optimal segment size for gesture classification. Notably, data learning employs overlapped segmentation to reduce the interval between data points, thereby increasing the quantity of training data. Using this approach, the paper employs four machine learning models (KNN, SVC, RF, XGBoost) to train and evaluate the system, achieving accuracy rates exceeding 96% for all models in real-time gesture input scenarios with a maximum segment size of 200ms.
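
The time-domain features named above (IEMG, MAV, SSI, VAR, RMS) have standard definitions; a minimal sketch of computing them over overlapped segments follows, where the sampling rate, 200 ms window, and 50 ms hop are illustrative assumptions:

```python
# Minimal sketch: IEMG, MAV, SSI, VAR, and RMS over overlapped EMG segments.
# Sampling rate, window length, and overlap are assumptions for illustration.
import numpy as np

FS = 1000                      # assumed sampling rate [Hz]
WIN = int(0.200 * FS)          # 200 ms segment, the best size reported above
STEP = int(0.050 * FS)         # 50 ms hop -> overlapped segmentation

def segment_features(x):
    """Return one feature row [IEMG, MAV, SSI, VAR, RMS] per overlapped segment."""
    rows = []
    for start in range(0, len(x) - WIN + 1, STEP):
        w = x[start:start + WIN]
        rows.append([
            np.sum(np.abs(w)),           # IEMG
            np.mean(np.abs(w)),          # MAV
            np.sum(w ** 2),              # SSI
            np.var(w),                   # VAR
            np.sqrt(np.mean(w ** 2)),    # RMS
        ])
    return np.array(rows)

emg = np.random.default_rng(0).normal(size=5 * FS)   # 5 s of placeholder EMG
print(segment_features(emg).shape)                   # (segments, 5 features)
```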

Fault Classification Model Based on Time Domain Feature Extraction of Vibration Data (진동 데이터의 시간영역 특징 추출에 기반한 고장 분류 모델)

  • Kim, Seung-il;Noh, Yoojeong;Kang, Young-jin;Park, Sunhwa;Ahn, Byungha
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.34 no.1
    • /
    • pp.25-33
    • /
    • 2021
  • With the development of machine learning techniques, various types of data, such as vibration, temperature, and flow rate, can be used to detect and diagnose abnormalities in machine conditions. In particular, in the field of condition monitoring of rotating machines, fault diagnosis using vibration data has long been carried out, and the methods are very diverse. In this study, an experiment was conducted to collect vibration data from normal and abnormal compressors by installing accelerometers directly on rotary compressors used in household air conditioners. Data segmentation was performed to solve the data shortage problem, and the main features for the fault classification model were selected through the chi-square test after statistical and physical features were extracted from the vibration data in the time domain. A support vector machine (SVM) model was developed to classify the normal and abnormal conditions of the compressors, and the classification accuracy was improved through hyperparameter optimization of the SVM.
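
A hedged sketch of the chi-square feature selection and SVM hyperparameter optimization described above, using scikit-learn with placeholder data (chi-square in scikit-learn requires non-negative inputs, hence the min-max scaling):

```python
# Hedged sketch: chi-square feature selection followed by an SVM whose
# hyperparameters are tuned by grid search. Data and grids are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # 20 time-domain vibration features (placeholder)
y = rng.integers(0, 2, size=300)      # normal / abnormal compressor (placeholder)

pipe = Pipeline([
    ("scale", MinMaxScaler()),        # chi2 needs non-negative features
    ("select", SelectKBest(chi2, k=10)),
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]}, cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_)
```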

Corpus-based Korean Text-to-speech Conversion System (콜퍼스에 기반한 한국어 문장/음성변환 시스템)

  • Kim, Sang-hun;Park, Jun;Lee, Young-jik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.24-33
    • /
    • 2001
  • This paper describes a baseline implementation of a corpus-based Korean TTS system. Conventional TTS systems using small-sized speech corpora still generate machine-like synthetic speech. To overcome this problem, we introduce a corpus-based TTS system that can generate natural synthetic speech without prosodic modifications. The corpus should be composed of source speech with natural prosody and multiple instances of the synthesis units. To make phone-level synthesis units, we train a speech recognizer with the target speech and then perform automatic phoneme segmentation. We also detect the fine pitch period using laryngograph signals, which is used for prosodic feature extraction. For break strength allocation, four levels of break indices are defined according to pause length and attached to phones to reflect prosodic variations at phrase boundaries. To predict break strength from text, we utilize the statistical information of POS (part-of-speech) sequences. The best triphone sequences are selected by Viterbi search, minimizing the accumulated Euclidean distance of the concatenation distortion. To obtain high-quality synthetic speech applicable for commercial purposes, we introduce a domain-specific database. By adding the domain-specific database to the general-domain database, we can greatly improve the quality of synthetic speech in that specific domain. In a subjective evaluation, the new Korean corpus-based TTS system shows better naturalness than the conventional demisyllable-based one.
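
The unit-selection step above is a Viterbi search minimizing accumulated concatenation distortion. A simplified, illustrative dynamic-programming sketch follows; the Euclidean join cost is the only element taken from the description, while target costs, real spectral features, and the triphone inventory are omitted:

```python
# Illustrative sketch: Viterbi-style unit selection that minimizes the cumulative
# Euclidean concatenation distortion between consecutive candidate units.
# Candidate features are random placeholders for real spectral join features.
import numpy as np

def select_units(candidates):
    """candidates[t] is an (n_t, d) array of join features for position t."""
    cost = np.zeros(len(candidates[0]))
    back = []
    for prev, cur in zip(candidates[:-1], candidates[1:]):
        # join[i, j] = Euclidean distance between unit i at t and unit j at t+1
        join = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=2)
        total = cost[:, None] + join
        back.append(total.argmin(axis=0))
        cost = total.min(axis=0)
    # Backtrack the cheapest path of unit indices.
    path = [int(cost.argmin())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return list(reversed(path)), float(cost.min())

rng = np.random.default_rng(0)
candidates = [rng.normal(size=(rng.integers(2, 6), 8)) for _ in range(10)]
path, total_cost = select_units(candidates)
print(path, round(total_cost, 2))
```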
