• Title/Summary/Keyword: Feature Normalization


Gaze Detection Based on Facial Features and Linear Interpolation on Mobile Devices (모바일 기기에서의 얼굴 특징점 및 선형 보간법 기반 시선 추적)

  • Ko, You-Jin;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1089-1098 / 2009
  • Recently, much research on building more comfortable input devices based on gaze detection technology has been performed in human-computer interfaces. Previous research was carried out in computer environments with large monitors. With the recent increase in mobile device use, the need for gaze-based interfaces in mobile environments has also grown. In this paper, we study a gaze detection method using a UMPC (Ultra-Mobile PC) and its embedded camera, based on face and facial feature detection by an AAM (Active Appearance Model). This paper has three original contributions. First, unlike previous research, we propose a method for tracking the user's gaze position on a mobile device with a small screen. Second, we use the AAM to detect facial feature points. Third, gaze detection accuracy is not degraded by Z distance, thanks to normalization of the input features using the features obtained in an initial user calibration stage. Experimental results showed a gaze detection error of 1.77 degrees, which was further reduced by mouse dragging based on additional facial movement.
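The Z-distance invariance described above comes from rescaling features by a reference measured at calibration. A minimal Python sketch of that idea — the function name and the choice of mean centroid distance as the scale are our assumptions, not the paper's exact procedure:

```python
import math

def normalize_features(points, calib_scale):
    """Rescale facial feature coordinates by a reference scale measured at
    calibration, so the features become invariant to the user's Z distance
    from the camera (closer face -> larger pixel distances, and vice versa)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # current scale: mean distance of the features from their centroid
    scale = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    k = calib_scale / scale
    return [((x - cx) * k, (y - cy) * k) for x, y in points]
```

With this, the same face observed at twice the distance (all pixel coordinates halved) normalizes to the same feature values.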


Performance Evaluation of a Machine Learning Model Based on Data Feature Using Network Data Normalization Technique (네트워크 데이터 정형화 기법을 통한 데이터 특성 기반 기계학습 모델 성능평가)

  • Lee, Wooho;Noh, BongNam;Jeong, Kimoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.4 / pp.785-794 / 2019
  • Recently, deep learning, one of the fourth-industrial-revolution technologies, has been used to identify hidden meaning in network data that is difficult to detect in the security arena and to predict attacks. Analysis of the properties and quality of the data sources is required before selecting the deep learning algorithm to be used for intrusion detection, because contamination of the training data affects the detection method. Therefore, the characteristics of the data should be identified and the relevant features selected. In this paper, the characteristics of malware were analyzed using a network data set, and the effect of each feature on performance was analyzed when a deep learning model was applied. A traffic classification experiment compared features according to network characteristics, and 96.52% classification accuracy was achieved with the selected features.
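As a sketch of the kind of normalization applied to network features before training, here is a column-wise min-max rescaling — a common choice for heterogeneous traffic features, though the paper's exact technique may differ:

```python
def min_max_normalize(rows):
    """Column-wise min-max normalization of a feature table to [0, 1].
    Constant columns map to 0.0 so they carry no weight in training."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]
```

This prevents large-magnitude features (e.g. byte counts) from dominating small ones (e.g. flag counts) in the learned model.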

Hardware Implementation of Fog Feature Based on Coefficient of Variation Using Normalization (정규화를 이용한 변동계수 기반 안개 특징의 하드웨어 구현)

  • Kang, Ui-Jin;Kang, Bong-Soon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.6 / pp.819-824 / 2021
  • As image-processing technologies such as autonomous driving and CCTV develop, fog removal algorithms using a single image are being studied to mitigate image distortion. One way to predict fog density is to estimate the depth of an image by generating a depth map, and various fog features may be used as training data for the depth map. In addition, hardware capable of processing high-definition images in real time is essential for applying fog removal algorithms in practice. In this paper, we implement NLCV (Normalized Local Coefficient of Variation), a fog feature based on the coefficient of variation, in hardware. The proposed hardware is an FPGA implementation targeting Xilinx's xczu7ev-2ffvc1156 device. Synthesis with the Vivado program shows a maximum operating frequency of 479.616 MHz, demonstrating that real-time processing is possible in a 4K UHD environment.
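The feature is built on the local coefficient of variation (standard deviation divided by mean over a window). A minimal software sketch, assuming a square window; the paper's hardware pipeline and exact normalization will differ:

```python
import statistics

def local_cv(img, r=1):
    """Per-pixel coefficient of variation (sigma/mu) over a (2r+1)x(2r+1)
    window, clamped at image borders. Hazy regions tend to have low
    variation relative to their brightness, so low values suggest fog."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            mu = sum(win) / len(win)
            sd = statistics.pstdev(win)
            out[y][x] = sd / mu if mu else 0.0
    return out
```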

Image Retrieval Using Multiresolution Color and Texture Features in Wavelet Transform Domain (웨이브릿 변환 영역의 칼라 및 질감 특징을 이용한 영상검색)

  • Chun Young-Deok;Sung Joong-Ki;Kim Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.1 s.307 / pp.55-66 / 2006
  • We propose a progressive image retrieval method based on an efficient combination of multiresolution color and texture features in the wavelet transform domain. As a color feature, the color autocorrelogram of the hue and saturation components is chosen. As texture features, BDIP and BVLC moments of the value component are chosen. For the selected features, we obtain multiresolution feature vectors extracted from all decomposition levels in the wavelet domain. The multiresolution feature vectors of the color and texture features are efficiently combined by normalization depending on their dimensions and standard deviation vectors, respectively; the vector components of the features are efficiently quantized in consideration of their storage space; and computational complexity in similarity computation is reduced by using a progressive retrieval strategy. Experimental results show that the proposed method yields on average 15% better performance in precision vs. recall and 0.2 better ANMRR than methods using the color histogram, color autocorrelogram, SCD, CSD, wavelet moments, EHD, BDIP and BVLC moments, and the combination of color histogram and wavelet moments, respectively. In particular, the proposed method shows excellent performance over the other methods on image DBs containing images of various resolutions.
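The combination step above, normalizing each feature set by its dimension and standard deviation so that neither color nor texture dominates the distance, can be sketched as follows; the z-score with a 1/√dimension weight is our reading, not the paper's exact formula:

```python
import statistics

def combine_features(color_vecs, texture_vecs):
    """Combine two feature sets (one vector per database image) by
    z-scoring each dimension across the database and weighting by
    1/sqrt(dimension), then concatenating color and texture parts."""
    def zscore(vecs):
        dims = list(zip(*vecs))
        mu = [statistics.mean(d) for d in dims]
        sd = [statistics.pstdev(d) or 1.0 for d in dims]  # guard constant dims
        n = len(dims)
        return [[(v - m) / (s * n ** 0.5) for v, m, s in zip(vec, mu, sd)]
                for vec in vecs]
    return [c + t for c, t in zip(zscore(color_vecs), zscore(texture_vecs))]
```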

Character Recognition System using Fast Preprocessing Method (전처리의 고속화에 기반한 문자 인식 시스템)

  • 공용해
    • Journal of Korea Multimedia Society / v.2 no.3 / pp.297-307 / 1999
  • A character recognition system in which a large number of character images arrive continuously in real time must preprocess character images very quickly. Moreover, information loss due to image transformations such as geometric normalization and thinning needs to be minimized, especially when character images are small and noisy. Therefore, we suggest a fast and effective feature extraction method that does not transform the original images. For this, boundary pixels are defined in terms of their degree of contribution to classification, and those boundary pixels are considered selectively in extracting features. The proposed method is tested on handwritten character recognition and car plate number recognition. The experiments show that the proposed method is more effective in recognition than conventional methods, and an overall reduction of execution time is achieved by completing all the required processing in a single image scan.


An eigenspace projection clustering method for structural damage detection

  • Zhu, Jun-Hua;Yu, Ling;Yu, Li-Li
    • Structural Engineering and Mechanics / v.44 no.2 / pp.179-196 / 2012
  • An eigenspace projection clustering method is proposed for structural damage detection by combining a projection algorithm with a fuzzy clustering technique. The integrated procedure includes data selection, data normalization, projection, damage feature extraction, and clustering for structural damage assessment. The frequency response functions (FRFs) of the healthy and the damaged structure are used as initial data, median values of the projections are taken as damage features, and the fuzzy c-means (FCM) algorithm is used to categorize these features. The performance of the proposed method has been validated using a three-story frame structure built and tested by Los Alamos National Laboratory, USA. Two projection algorithms, namely principal component analysis (PCA) and kernel principal component analysis (KPCA), are compared for better extraction of damage features, and six distance measures adopted in the FCM process are studied and discussed. The results reveal that the choice of distance depends on the distribution of the features. For the optimal choice of projection, the Cosine distance is recommended for PCA, while the Seuclidean and Cityblock distances suit KPCA. The PCA method is recommended when a large amount of data must be processed, due to its higher rate of correct decisions and lower computational cost.
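The PCA branch of the pipeline — center the data, project it onto the leading eigenvector of the covariance matrix, and take the median projection as a damage feature — can be sketched with power iteration. This is a simplification of the paper's procedure, not its implementation:

```python
def pca_first_component(data, iters=200):
    """Power iteration on the sample covariance matrix to find the leading
    principal direction, then return that direction together with the
    median of the data's projections onto it (the damage feature)."""
    n, d = len(data), len(data[0])
    mu = [sum(col) / n for col in zip(*data)]
    x = [[v - m for v, m in zip(row, mu)] for row in data]  # centered data
    cov = [[sum(x[k][i] * x[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        v = [c / norm for c in w]
    proj = sorted(sum(vi * xi for vi, xi in zip(v, row)) for row in x)
    median = proj[n // 2] if n % 2 else (proj[n // 2 - 1] + proj[n // 2]) / 2
    return v, median
```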

Feature Extraction for Recognition Rate Improvement of Handwritten Numerals (필기체 숫자 인식률 향상을 위한 특징추출)

  • Koh, Chan;Lee, Chang-In
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.10 / pp.2102-2111 / 1997
  • A handwritten numeral is projected onto 3D space after input pre-processing, and an index is built by tracing the numeral. The distances between all extracted features are computed. A statistical histogram of the normalized data serves as the input to the recognition process, providing adaptation to variation. One hundred numeral patterns were used to build a standard feature map, and 100 patterns were used for the recognition experiment. As a result, the recognition rate is 93.5% at a threshold of 0.20 and 97.5% at a threshold of 0.25.


Generic Training Set based Multimanifold Discriminant Learning for Single Sample Face Recognition

  • Dong, Xiwei;Wu, Fei;Jing, Xiao-Yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.368-391 / 2018
  • Face recognition (FR) with a single sample per person (SSPP) is common in real-world face recognition applications. In this scenario, it is hard to predict the intra-class variations of query samples from gallery samples due to the lack of sufficient training samples. Inspired by the fact that similar faces have similar intra-class variations, we propose a virtual sample generating algorithm called k nearest neighbors based virtual sample generating (kNNVSG) to enrich intra-class variation information for training samples. Furthermore, in order to use the intra-class variation information of the virtual samples generated by the kNNVSG algorithm, we propose an image set based multimanifold discriminant learning (ISMMDL) algorithm. ISMMDL learns a projection matrix for each manifold modeled by the local patches of the images of each class, aiming to simultaneously minimize intra-manifold margins and maximize inter-manifold margins in a low-dimensional feature space. Finally, by combining the kNNVSG and ISMMDL algorithms, we propose the k nearest neighbor virtual image set based multimanifold discriminant learning (kNNMMDL) approach for single sample face recognition (SSFR) tasks. Experimental results on the AR, Multi-PIE and LFW face datasets demonstrate that our approach has promising abilities for SSFR with expression, illumination and disguise variations.
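The kNNVSG idea, borrowing intra-class variations from the k most similar subjects in a generic training set, might be sketched as follows; all names and the mean-based variation model here are our assumptions, not the paper's exact algorithm:

```python
def knn_virtual_samples(gallery, generic_sets, k=2):
    """For one gallery face vector, find the k nearest subjects in a
    generic set (by distance to each subject's mean face), then add each
    neighbour's intra-class variation (sample minus its class mean) to
    the gallery vector, yielding virtual training samples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scored = []
    for subj in generic_sets:  # each subject: a list of face vectors
        mean = [sum(c) / len(subj) for c in zip(*subj)]
        scored.append((dist(gallery, mean), subj, mean))
    scored.sort(key=lambda t: t[0])
    virtual = []
    for _, subj, mean in scored[:k]:
        for s in subj:
            variation = [si - mi for si, mi in zip(s, mean)]
            virtual.append([g + v for g, v in zip(gallery, variation)])
    return virtual
```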

A Novel Automatic Algorithm for Selecting a Target Brain using a Simple Structure Analysis in Talairach Coordinate System

  • Koo B.B.;Lee Jong-Min;Kim June Sic;Kim In Young;Kim Sun I.
    • Journal of Biomedical Engineering Research / v.26 no.3 / pp.129-132 / 2005
  • Determining a target brain image that provides a common coordinate system is one of the most important issues in constructing a population-based brain atlas. The purpose of this study is to provide a simple and reliable procedure that determines the target brain image in a group based on the inherent structural information of three-dimensional magnetic resonance (MR) images. It uses only 11 automatically defined lines as a feature vector representing structural variation in the Talairach coordinate system. The average feature vector of the group and each subject's difference vector from the average were obtained, and the individual dataset with the minimum difference vector was selected as the target. We determined the target brain image both with our algorithm and by conventional visual inspection for 20 healthy young volunteers. Eighteen fiducial points were marked independently for each dataset to evaluate similarity. The target brain image obtained by our algorithm showed the best result, while visual inspection yielded the second best. We conclude that our method can be used to determine an appropriate target brain image when constructing brain atlases such as disease-specific ones.
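The selection rule, computing the group-average feature vector and picking the subject closest to it, is straightforward to sketch (the paper's feature vectors are 11 Talairach-based line measurements per brain; here any numeric vectors work):

```python
def select_target(feature_vectors):
    """Return the index of the subject whose feature vector has the
    smallest Euclidean distance to the group-average vector."""
    n = len(feature_vectors)
    avg = [sum(c) / n for c in zip(*feature_vectors)]
    def dist_to_avg(v):
        return sum((a - b) ** 2 for a, b in zip(v, avg)) ** 0.5
    return min(range(n), key=lambda i: dist_to_avg(feature_vectors[i]))
```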

Analysis and Measurement of the Spectrum of Whole Blood (전혈의 SPECTRUM 측정과 분석)

  • Kim, Y.J.;Kim, H.S.;Kim, J.W.;Yoon, K.W.;Kim, W.K.
    • Proceedings of the KOSOMBE Conference / v.1996 no.05 / pp.52-55 / 1996
  • The spectra of whole blood EDTA samples from two people were generated using a CARY 5E (UV-VIS-NIR) spectrophotometer from 400 to 1000 nm, covering the visible and NIR regions. Only the data between 400 and 800 nm were used to analyze the components of blood. Using the same spectrophotometer, the spectra of water, normal saline, and plasma were generated. These spectra were subtracted from each blood sample, and the first derivative of each of the subtracted data was then taken to minimize baseline variations and to indicate wavelength shifts of peaks and valleys. Normalization and division between the two blood samples were used to correlate the quantity ratios of specific components with features of the spectra. Samples were controlled at 30°C, 37°C, and ambient temperature.
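The baseline-suppression step, taking the first derivative of a subtracted spectrum, can be sketched with central differences (a standard approach; the paper does not specify its exact derivative filter):

```python
def first_derivative(spectrum, step=1.0):
    """Central-difference first derivative of a uniformly sampled spectrum,
    falling back to one-sided differences at the endpoints. A constant
    baseline offset differentiates to zero, which suppresses it."""
    n = len(spectrum)
    return [
        (spectrum[min(i + 1, n - 1)] - spectrum[max(i - 1, 0)])
        / ((min(i + 1, n - 1) - max(i - 1, 0)) * step)
        for i in range(n)
    ]
```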
