• Title/Summary/Keyword: six feature


Design Space Exploration of Embedded Many-Core Processors for Real-Time Fire Feature Extraction (실시간 화재 특징 추출을 위한 임베디드 매니코어 프로세서의 디자인 공간 탐색)

  • Suh, Jun-Sang;Kang, Myeongsu;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.1-12 / 2013
  • This paper explores the design space of many-core processors for a fire feature extraction algorithm. It evaluates the impact of varying the number of cores and memory sizes of the many-core processor and identifies an optimal configuration in terms of performance, energy efficiency, and area efficiency. In this study, we utilized 90 samples with dimensions of 256×256 (60 samples containing fire and 30 samples containing non-fire) for the experiments. Experimental results using six different many-core architectures (PEs=16, 64, 256, 1,024, 4,096, and 16,384) and the fire feature extraction algorithm indicate that the highest area efficiency and energy efficiency are achieved at PEs=1,024 and PEs=4,096, respectively, for all fire- and non-fire-containing movies. In addition, all six many-core processors satisfy the real-time requirement of 30 frames per second (30 fps) for the algorithm.
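The comparison above reduces to a few derived metrics. As a purely illustrative sketch (not from the paper; the clock frequency and the per-configuration cycle, power, and area numbers below are invented placeholders), one could tabulate frames per second, energy efficiency, and area efficiency per PE count like this:

```python
# Illustrative design-space comparison; all numbers are placeholders, not the paper's data.
REAL_TIME_FPS = 30  # real-time requirement cited in the abstract

# Hypothetical per-configuration measurements: (cycles per frame, watts, mm^2)
configs = {
    16:    (2.0e7, 0.8,  10.0),
    64:    (6.0e6, 1.5,  18.0),
    256:   (2.0e6, 3.0,  35.0),
    1024:  (8.0e5, 6.0,  70.0),
    4096:  (4.0e5, 12.0, 140.0),
    16384: (3.0e5, 25.0, 280.0),
}
CLOCK_HZ = 400e6  # assumed clock frequency

for pes, (cycles, watts, area_mm2) in configs.items():
    fps = CLOCK_HZ / cycles          # frames per second (performance)
    energy_eff = fps / watts         # frames per joule (energy efficiency)
    area_eff = fps / area_mm2        # fps per mm^2 (area efficiency)
    meets_rt = fps >= REAL_TIME_FPS
    print(f"PEs={pes:6d}  fps={fps:8.1f}  frames/J={energy_eff:7.2f}  "
          f"fps/mm^2={area_eff:7.2f}  real-time={'yes' if meets_rt else 'no'}")
```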

Feature Extraction Algorithm for Underwater Transient Signal Using Cepstral Coefficients Based on Wavelet Packet (웨이브렛 패킷 기반 캡스트럼 계수를 이용한 수중 천이신호 특징 추출 알고리즘)

  • Kim, Juho;Paeng, Dong-Guk;Lee, Chong Hyun;Lee, Seung Woo
    • Journal of Ocean Engineering and Technology / v.28 no.6 / pp.552-559 / 2014
  • In general, the number of underwater transient signals available for research on automatic recognition is very limited. Data-dependent feature extraction is one of the most effective methods in this case. Therefore, we suggest the WPCC (wavelet packet cepstral coefficient) as a feature extraction method. A wavelet packet best tree for each data set is formed using an entropy-based cost function, and every terminal node of the best trees is counted to build a common wavelet best tree. This tree corresponds to a flexible and non-uniform filter bank reflecting the characteristics of the data set. A GMM (Gaussian mixture model) is used to classify five classes of underwater transient data sets. The error rate of the WPCC is compared with that of MFCC (Mel-frequency cepstral coefficients). The error rates of WPCC-db20, WPCC-db40, and MFCC are 0.4%, 0%, and 0.4%, respectively, when the training data consist of six out of the nine pieces of data in each class. However, WPCC-db20 and WPCC-db40 show error rates of 2.98% and 1.20%, respectively, while MFCC shows a rate of 7.14% when the training data consist of only three pieces. This shows that WPCC is less sensitive than MFCC to the number of training data pieces and thus could be a more appropriate method for underwater transient recognition. These results may be helpful in developing an automatic recognition system for underwater transient signals.
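For illustration only (not the authors' code), a WPCC-style feature can be sketched in Python with PyWavelets; this simplified version uses a full fixed-level wavelet packet decomposition instead of the entropy-based common best tree described above, and the wavelet, decomposition level, and number of retained coefficients are assumptions:

```python
# Hedged sketch of wavelet-packet cepstral-style coefficients (WPCC-like features).
import numpy as np
import pywt
from scipy.fft import dct

def wpcc(signal, wavelet="db20", level=4, n_coeffs=13):
    """Log subband energies of a wavelet packet decomposition, decorrelated by a DCT."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    # Terminal nodes at the chosen level, ordered by frequency
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    log_energies = np.log(energies + 1e-12)      # avoid log(0)
    return dct(log_energies, type=2, norm="ortho")[:n_coeffs]

# Example with a synthetic transient-like signal
rng = np.random.default_rng(0)
x = rng.standard_normal(4096) * np.exp(-np.linspace(0, 6, 4096))
print(wpcc(x).shape)   # (13,)
```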

A Behavior-based Authentication Using the Measuring Cosine Similarity (코사인 유사도 측정을 통한 행위 기반 인증)

  • Gil, Seon-Woong;Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.4 / pp.17-22 / 2020
  • Behavior-based authentication technology, which is currently being researched actively, requires the extraction of large amounts of data over a long period to raise the recognition rate compared with other authentication technologies. In this paper, using the touch sensor and the gyroscope embedded in a smartphone in the Android environment, the user is asked for only five touch measurements so that only the minimum essential subset of the behavior feature data used in behavior-based authentication studies is collected. A total of six behavior feature values are gathered from the five touch-screen inputs, their mean values are computed from the changes in the data across the touch measurements, and the cosine similarity between these mean values and the measured values is calculated to generate an allowable cosine-similarity range. We then propose a user behavior-based authentication method that compares the cosine similarity of authentication-attempt data against this range. Even with a small number of feature data and experiment participants, adjusting the threshold applied to the cosine-similarity acceptance range improved the EER from an initial 37.6% to a final 1.9%, demonstrating high performance.
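A minimal sketch of the cosine-similarity step described above follows; the six feature values, the margin, and the acceptance rule are placeholders for illustration and are not the authors' data pipeline:

```python
# Illustrative cosine-similarity authentication check with made-up behavior features.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: five touch measurements, six behavior features each (hypothetical values)
enrollment = np.array([
    [0.42, 1.10, 0.33, 9.81, 0.05, 0.71],
    [0.40, 1.15, 0.31, 9.79, 0.06, 0.69],
    [0.44, 1.08, 0.35, 9.83, 0.05, 0.73],
    [0.41, 1.12, 0.32, 9.80, 0.04, 0.70],
    [0.43, 1.11, 0.34, 9.82, 0.05, 0.72],
])
template = enrollment.mean(axis=0)

# Allowable range: the smallest similarity among enrollment samples,
# relaxed by a tunable margin (the threshold the abstract's EER tuning adjusts).
margin = 0.0005
threshold = min(cosine_similarity(s, template) for s in enrollment) - margin

def authenticate(attempt):
    return cosine_similarity(np.asarray(attempt), template) >= threshold

print(authenticate([0.42, 1.11, 0.33, 9.81, 0.05, 0.71]))   # expected: True
print(authenticate([2.0, 0.1, 1.5, 3.0, 0.9, 0.0]))         # expected: False
```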

ACCURACY ASSESSMENT BY REFINING THE RATIONAL POLYNOMIALS COEFFICIENTS(RPCs) OF IKONOS IMAGERY

  • LEE SEUNG-CHAN;JUNG HYUNG-SUP;WON JOONG-SUN
    • Proceedings of the KSRS Conference / 2004.10a / pp.344-346 / 2004
  • IKONOS 1 m satellite imagery is particularly well suited for 3-D feature extraction and 1:5,000-scale topographic mapping. Because the image line and sample calculated by the given RPCs have errors of more than 11 m, the rational polynomial coefficients (RPCs) camera model, which is derived from the very complex IKONOS sensor model to describe the object-image geometry, must be refined with several ground control points (GCPs) before feature extraction and topographic mapping can be performed. This paper presents a quantitative evaluation of the geometric accuracy that can be achieved with IKONOS imagery by refining the offset and scale factors of the RPCs using several GCPs. If only two GCPs are available, the offsets and scale factors of image line and sample are updated. If more than three GCPs are available, the four parameters of the offsets and scale factors of image line and sample are refined first, and then the six parameters of the offsets and scale factors of latitude, longitude, and height are updated. Stereo images acquired by the IKONOS satellite are tested using six ground points. First, the RPC model was refined using 2 GCPs and 4 check points acquired by GPS; the resulting RMSEs of the check points in the left and right images are 1.021 m and 1.447 m, respectively. The RPC model was then updated using 4 GCPs and 2 check points, giving a geometric-accuracy RMSE of 0.621 m in the left image and 0.816 m in the right image.
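As an illustrative sketch only (synthetic coordinates, not the paper's data or implementation), the two-GCP case, refining the image-space offset and scale terms, can be written as a small least-squares fit:

```python
# Illustrative refinement of image-space offset and scale terms from GCPs.
import numpy as np

def refine_axis(predicted, measured):
    """Fit measured ~ scale * predicted + offset by least squares."""
    A = np.column_stack([predicted, np.ones_like(predicted)])
    (scale, offset), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return scale, offset

# Synthetic example: RPC-predicted vs. surveyed image coordinates of GCPs
pred_line   = np.array([1020.4, 2540.9, 3890.2, 512.7])
meas_line   = pred_line * 1.00002 + 11.3        # simulate a bias of the ~11 m class
pred_sample = np.array([850.1, 1730.6, 2910.8, 3300.5])
meas_sample = pred_sample * 0.99998 - 9.8

for name, (p, m) in {"line": (pred_line, meas_line),
                     "sample": (pred_sample, meas_sample)}.items():
    scale, offset = refine_axis(p, m)
    corrected = scale * p + offset
    rmse = np.sqrt(np.mean((corrected - m) ** 2))
    print(f"{name}: scale={scale:.6f} offset={offset:.2f} RMSE={rmse:.3f} px")
```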


Fault Diagnosis of Bearing Based on Convolutional Neural Network Using Multi-Domain Features

  • Shao, Xiaorui;Wang, Lijiang;Kim, Chang Soo;Ra, Ilkyeun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1610-1629 / 2021
  • Failures frequently occur in manufacturing machines due to complex and changeable manufacturing environments, increasing downtime and maintenance costs. This manuscript develops a novel deep learning-based method named the Multi-Domain Convolutional Neural Network (MDCNN) to address this challenging task with vibration signals. The proposed MDCNN consists of time-domain, frequency-domain, and statistical-domain feature channels. The time-domain channel models the hidden patterns of the signal in the time domain. The frequency-domain channel uses the discrete wavelet transformation (DWT) to obtain rich feature representations of the signal in the frequency domain. The statistical-domain channel contains six statistical variables that reflect the signal's macro statistical-domain features. First, the time-domain and frequency-domain channels are processed individually by CNNs with various filters. Second, the CNN-extracted features from the time and frequency domains are merged into time-frequency features. Lastly, the time-frequency features are fused with the six statistical variables to form the comprehensive features for identifying the fault. Thereby, the proposed method makes full use of the three domains' features for fault diagnosis while keeping high distinguishability owing to the CNN. The authors designed extensive experiments with 10-fold cross-validation to validate the effectiveness of the proposed method on the CWRU bearing data set. The experimental results, reported as ten-run averaged accuracy, confirm that the proposed MDCNN can intelligently, accurately, and promptly detect faults under complex manufacturing environments, with an accuracy of nearly 100%.
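A rough PyTorch sketch of a multi-domain fusion network in the spirit of the MDCNN is shown below; the layer sizes, filter counts, input lengths, and the handling of the DWT coefficients are assumptions rather than the authors' architecture:

```python
# Hedged sketch: two CNN branches (time, frequency) fused with six statistical variables.
import torch
import torch.nn as nn

class MultiDomainCNN(nn.Module):
    def __init__(self, n_classes=10, n_stats=6):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            )
        self.time_branch = branch()        # raw vibration segment
        self.freq_branch = branch()        # DWT coefficients (e.g. from pywt.wavedec)
        self.classifier = nn.Sequential(
            nn.Linear(32 * 8 * 2 + n_stats, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x_time, x_freq, x_stats):
        t = self.time_branch(x_time)                   # (B, 256)
        f = self.freq_branch(x_freq)                   # (B, 256)
        fused = torch.cat([t, f, x_stats], dim=1)      # time-frequency + statistics
        return self.classifier(fused)

model = MultiDomainCNN()
x_time = torch.randn(4, 1, 2048)    # batch of raw segments
x_freq = torch.randn(4, 1, 1024)    # batch of concatenated DWT coefficients
x_stats = torch.randn(4, 6)         # six statistical variables per segment
print(model(x_time, x_freq, x_stats).shape)   # torch.Size([4, 10])
```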

SOH Estimation and Feature Extraction using Principal Component Analysis based on Health Indicator for High Energy Battery Pack (건전성 지표 기반 주성분분석(PCA)을 적용한 고용량 배터리 팩의 열화 인자 추출 방법 및 SOH 진단 기법 연구)

  • Lee, Pyeong-Yeon;Kwon, Sanguk;Kang, Deokhun;Han, Seungyun;Kim, Jonghoon
    • The Transactions of the Korean Institute of Power Electronics / v.25 no.5 / pp.376-384 / 2020
  • In modern applications, an energy storage system is composed of lithium-ion batteries, which are regarded as storage devices for renewable and residual energy. Battery failures can cause performance reduction and explosion of battery systems, and dealing with battery safety problems entails high maintenance costs. Therefore, accurate health diagnosis is required to ensure the high reliability of battery systems. A battery pack is a combination of single cells in series and parallel connections, and various factors must be considered to assess its health: conventional factors as well as additional factors such as cell-to-cell imbalance. For large applications, state-of-health (SOH) estimation can be inaccurate because of the lack of factors that indicate the state of the battery pack. In this study, six characterization factors are proposed for improving the SOH estimation of battery packs. The six proposed factors can be regarded as health indicators (HIs) and are applied to a principal component analysis (PCA) algorithm. To reflect information regarding capacity, voltage, and temperature, the PCA algorithm extracts new degradation factors from the six HIs. The new degradation factors are then applied to a multiple regression model. Results show the improvement of SOH estimation.
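An illustrative sketch (not the paper's code) of the PCA-plus-regression pipeline follows; the six health indicators and the SOH values are synthetic stand-ins for real pack measurements:

```python
# Project six health indicators with PCA, then regress SOH on the components.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n_cycles = 200
his = rng.normal(size=(n_cycles, 6))               # six health indicators per cycle
soh = 1.0 - 0.001 * np.arange(n_cycles) + his @ rng.normal(scale=0.01, size=6)

model = make_pipeline(
    StandardScaler(),            # put the six HIs on a common scale
    PCA(n_components=3),         # extract new degradation factors
    LinearRegression(),          # multiple regression on the principal components
)
model.fit(his, soh)
print("R^2 on the synthetic data:", round(model.score(his, soh), 3))
```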

Genetic characterization and population structure of six brown layer pure lines using microsatellite markers

  • Karsli, Taki;Balcioglu, Murat Soner
    • Asian-Australasian Journal of Animal Sciences / v.32 no.1 / pp.49-57 / 2019
  • Objective: The first stage in both breeding programs and programs for the conservation of genetic resources is the identification of genetic diversity in the relevant population. The aim of the present study is to identify the genetic diversity of six brown layer pure chicken lines (Rhode Island Red [RIRI, RIRII], Barred Rock [BARI, BARII], Columbian Rock [COL], and line 54 [L-54]) with microsatellite markers. Furthermore, the study aims to use its findings to discuss the possibilities for the conservation and sustainable use of these lines, which have been bred as closed populations for a long time. Methods: A total of 180 samples belonging to the RIRI (n = 30), RIRII (n = 30), BARI (n = 30), BARII (n = 30), L-54 (n = 30), and COL (n = 30) lines were genotyped using 22 microsatellite loci. Microsatellite markers are extremely useful tools for identifying genetic diversity since they are distributed in large numbers throughout the eukaryotic genome, demonstrate co-dominant inheritance, and feature a high rate of polymorphism and repeatability. Results: All loci were found to be polymorphic. The average number of alleles per locus ranged between 4.41 (BARI) and 5.45 (RIRI), the observed heterozygosity between 0.31 (RIRII) and 0.50 (BARII), and the F_IS (inbreeding coefficient) values between 0.16 (L-54) and 0.46 (RIRII). The F_IS values obtained point to a deviation from Hardy-Weinberg equilibrium due to heterozygote deficiency in the six populations. The Neighbour-Joining tree, factorial correspondence analysis, and STRUCTURE clustering analyses showed that the six brown layer lines were separated according to their genetic origins. Conclusion: The results indicate a medium level of genetic diversity, a high level of inbreeding within the chicken lines, and a high level of genetic differentiation between them.
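As a small illustration of the diversity statistics reported above, the sketch below computes observed and expected heterozygosity and a simple F_IS estimate for one hypothetical microsatellite locus; the genotypes are invented and no sample-size correction is applied:

```python
# Observed/expected heterozygosity and F_IS for one made-up microsatellite locus.
from collections import Counter

# Genotypes at one locus: (allele1, allele2) per individual
genotypes = [(152, 156), (152, 152), (156, 160), (152, 156),
             (160, 160), (152, 160), (156, 156), (152, 156)]

n = len(genotypes)
het_obs = sum(a != b for a, b in genotypes) / n          # observed heterozygosity Ho

allele_counts = Counter(a for g in genotypes for a in g)
total = 2 * n
het_exp = 1.0 - sum((c / total) ** 2 for c in allele_counts.values())  # expected He

f_is = (het_exp - het_obs) / het_exp                     # inbreeding coefficient
print(f"Ho={het_obs:.3f}  He={het_exp:.3f}  F_IS={f_is:.3f}")
```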

Face Recognition Using Local Statistics of Gradients and Correlations (그래디언트와 상관관계의 국부통계를 이용한 얼굴 인식)

  • Ju, Yingai;So, Hyun-Joo;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.19-29 / 2011
  • Until now, many face recognition methods have been proposed; most of them use a 1-dimensional feature vector obtained by vectorizing the input image without a feature extraction process, or use the input image itself as a feature matrix. It is known that face recognition methods using raw images yield deteriorated performance on databases with severe illumination changes. In this paper, we propose a face recognition method using local statistics of gradients and correlations, which are robust to illumination changes. BDIP (block difference of inverse probabilities) is chosen as a local statistic of gradients, and two types of BVLC (block variation of local correlation coefficients) are chosen as local statistics of correlations. When an input image enters the system, the method extracts the BDIP, BVLC1, and BVLC2 feature images, fuses them, obtains a feature matrix by the (2D)² PCA transformation, and classifies it against the training feature matrices with a nearest-neighbor classifier. Experimental results on four face databases, FERET, Weizmann, Yale B, and Yale, show that the proposed method is more reliable than six other methods under lighting and facial-expression variations.
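For illustration, a generic (2D)² PCA feature-matrix extraction with nearest-neighbor matching can be sketched as follows; the random arrays stand in for fused BDIP/BVLC feature images, and the projection dimensions are assumptions rather than the authors' settings:

```python
# Generic two-directional 2DPCA ((2D)^2 PCA) sketch with Frobenius-distance matching.
import numpy as np

def two_directional_2dpca(images, row_dims=8, col_dims=8):
    """Return mean and projection matrices (V, W) so that V.T @ X @ W is the feature matrix."""
    X = np.asarray(images, dtype=float)          # shape (N, m, n)
    mean = X.mean(axis=0)
    centered = X - mean
    # Row-direction scatter (n x n) and column-direction scatter (m x m)
    G_row = np.mean([c.T @ c for c in centered], axis=0)
    G_col = np.mean([c @ c.T for c in centered], axis=0)
    # Leading eigenvectors of each scatter matrix (eigh returns ascending order)
    W = np.linalg.eigh(G_row)[1][:, ::-1][:, :row_dims]
    V = np.linalg.eigh(G_col)[1][:, ::-1][:, :col_dims]
    return mean, V, W

def feature_matrix(image, mean, V, W):
    return V.T @ (image - mean) @ W

rng = np.random.default_rng(1)
train = rng.random((50, 64, 64))                 # stand-ins for fused feature images
mean, V, W = two_directional_2dpca(train)
gallery = [feature_matrix(img, mean, V, W) for img in train]
probe = feature_matrix(train[0], mean, V, W)     # 8 x 8 feature matrix
nearest = int(np.argmin([np.linalg.norm(probe - g) for g in gallery]))
print(probe.shape, "nearest index:", nearest)
```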

Forensic Decision of Median Filtering by Pixel Value's Gradients of Digital Image (디지털 영상의 픽셀값 경사도에 의한 미디언 필터링 포렌식 판정)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.79-84 / 2015
  • In the distribution of digital images, the circulation of images altered by forgers is a serious problem. To address it, this paper proposes a median filtering (MF) image forensic decision algorithm using a feature vector based on pixel-value gradients. In the proposed algorithm, AR (autoregressive) coefficients are computed from the pixel-value gradients of the original image, and the 1st- to 6th-order coefficients form a six-dimensional feature vector. A reconstructed image is then produced by solving Poisson's equation with the gradients, and from the difference image between the original and its reconstruction a four-dimensional feature vector (average value, maximum value, and the coordinates i, j of the maximum) is extracted. The two feature vectors are combined into a 10-dimensional feature vector, which is used to train an SVM (support vector machine) classifier as an MF detector for altered images. Compared with the MFR (median filter residual) scheme, which uses the same 10-dimensional feature vector, the proposed median filtering detector performs better on unaltered, average-filtered (3×3), and JPEG (QF=90) images and slightly worse on Gaussian-filtered (3×3) images. In all measured items, however, the AUC (area under curve) of sensitivity versus 1-specificity approaches 1, confirming that the grade evaluation of the proposed algorithm is 'Excellent (A)'.
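The feature-building idea can be sketched as follows (illustrative only; the Poisson-reconstruction part of the 10-dimensional feature vector is omitted, and the images and labels are synthetic):

```python
# Fit a 6th-order AR model to pixel-value gradients and feed the coefficients to an SVM.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import SVC

def ar_coefficients(series, order=6):
    """Least-squares AR(order) fit: x[t] ~ sum_k a_k * x[t-k]."""
    x = np.asarray(series, dtype=float)
    A = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return coeffs

def gradient_ar_features(image, order=6):
    gy, gx = np.gradient(image.astype(float))
    return ar_coefficients(np.concatenate([gx.ravel(), gy.ravel()]), order)

rng = np.random.default_rng(0)
originals = [rng.random((64, 64)) for _ in range(20)]
altered = [median_filter(img, size=3) for img in originals]   # median-filtered copies

X = np.array([gradient_ar_features(img) for img in originals + altered])
y = np.array([0] * len(originals) + [1] * len(altered))
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```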

SVM based Clustering Technique for Processing High Dimensional Data (고차원 데이터 처리를 위한 SVM기반의 클러스터링 기법)

  • Kim, Man-Sun;Lee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.7 / pp.816-820 / 2004
  • Clustering is the process of dividing similar data objects in a data set into clusters and acquiring meaningful information from the data. The main issues related to clustering are the effective clustering of high-dimensional data and optimization. This study proposes a method of measuring similarity based on SVM and a new, efficient method of calculating the number of clusters. The high-dimensional data are mapped to a feature space using kernel functions, and the similarity between neighboring clusters is then measured. For the created clusters, the desired number of clusters can be obtained using the measured similarity value and the value of Δd. To verify the proposed methods, the authors used six data sets from the UCI Machine Learning Repository and obtained the intended number of clusters as well as improved cohesiveness compared with the results of previous research.
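A sketch of measuring similarity in a kernel-induced feature space, the core idea above, is given below; the RBF kernel, the mean-pairwise-distance definition of cluster similarity, and the synthetic clusters are illustrative assumptions (the Δd-based rule for the number of clusters is not reproduced):

```python
# Feature-space distances via the kernel trick, without mapping points explicitly.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def feature_space_distance(x, y, kernel=rbf_kernel):
    """||phi(x) - phi(y)|| computed from kernel evaluations only."""
    return np.sqrt(kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y))

def cluster_similarity(cluster_a, cluster_b, kernel=rbf_kernel):
    """Mean pairwise feature-space distance between two clusters (smaller = more similar)."""
    return float(np.mean([feature_space_distance(a, b, kernel)
                          for a in cluster_a for b in cluster_b]))

rng = np.random.default_rng(3)
c1 = rng.normal(0.0, 0.3, size=(20, 10))   # two nearby high-dimensional clusters
c2 = rng.normal(0.5, 0.3, size=(20, 10))
c3 = rng.normal(5.0, 0.3, size=(20, 10))   # a distant cluster
print(cluster_similarity(c1, c2), cluster_similarity(c1, c3))
```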