• Title/Summary/Keyword: stable feature

Search Results: 269

Vehicle Face Re-identification Based on Nonnegative Matrix Factorization with Time Difference Constraint

  • Ma, Na;Wen, Tingxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2098-2114 / 2021
  • Light intensity variation is one of the key factors affecting the accuracy of vehicle face re-identification. To improve the robustness of vehicle face features to light intensity variation, a Nonnegative Matrix Factorization model constrained by the image acquisition time difference is proposed. First, the original feature vectors of all pairs of positive training samples are placed in two original feature matrices, where the same column of each matrix represents the same vehicle. Then, the new features obtained after decomposition are divided proportionally into stable and variable features: constraints of intra-class similarity and inter-class difference are imposed on the stable features, and the constraint of image acquisition time difference is imposed on the variable features. Finally, vehicle face matching is performed by calculating the cosine distance between stable features. Experimental results show that the average False Reject Rate and average False Accept Rate of the proposed algorithm can be reduced to 0.14 and 0.11, respectively, on five different datasets, and even under large differences in light intensity the vehicle face image can still be recognized accurately, which verifies that the extracted features are robust to light variation.
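
As a rough illustration of the final matching step, the sketch below scores gallery images by the cosine similarity of the leading (treated here as "stable") NMF components; plain scikit-learn NMF stands in for the paper's time-difference-constrained decomposition, and the split ratio and function names are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def stable_cosine_match(gallery, probe, n_components=64, stable_ratio=0.6):
    """gallery: (n_images, n_dims) nonnegative features; probe: (n_dims,) nonnegative."""
    X = np.vstack([gallery, probe[None, :]])
    # Plain NMF here; the paper's model adds time-difference and class constraints.
    W = NMF(n_components=n_components, init="nndsvda", max_iter=500).fit_transform(X)
    n_stable = int(n_components * stable_ratio)           # keep only the "stable" part
    S, s_probe = W[:-1, :n_stable], W[-1, :n_stable]
    sims = S @ s_probe / (np.linalg.norm(S, axis=1) * np.linalg.norm(s_probe) + 1e-12)
    return int(np.argmax(sims)), sims                     # best-matching gallery index, all scores
```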

Robust 2D Feature Tracking in Long Video Sequences (긴 비디오 프레임들에서의 강건한 2차원 특징점 추적)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • The KIPS Transactions:PartB / v.14B no.7 / pp.473-480 / 2007
  • Feature tracking in video frame sequences suffers from instability and frequent failures of feature matching between successive frames. In this paper, we propose a robust 2D feature tracking method that remains stable over long video sequences. To improve tracking stability, we predict the spatial movement in the current image frame using the state variables, and the predicted movement initializes the search window. By computing feature similarities within the search window, we refine the current feature positions and then update the feature states. This tracking process is repeated for each input frame. To reduce false matches, an outlier rejection stage is also introduced. Experimental results on real video sequences show that the proposed method performs stable feature tracking over long frame sequences.
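
A minimal sketch of the predict-then-refine loop described above, using OpenCV template matching inside the predicted search window; the window sizes, state representation, and names are illustrative assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

def track_feature(prev_frame, cur_frame, prev_pos, velocity, patch=15, search=31):
    """prev_pos, velocity: float (x, y) arrays; frames: grayscale uint8 images.
    Boundary checks are omitted for brevity."""
    pred = prev_pos + velocity                            # predict current position from the state
    x, y = pred.astype(int)
    h, p = search // 2, patch // 2
    window = cur_frame[y - h:y + h + 1, x - h:x + h + 1]  # search window around the prediction
    px, py = prev_pos.astype(int)
    template = prev_frame[py - p:py + p + 1, px - p:px + p + 1]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)                 # best similarity inside the window
    refined = np.array([x - h + loc[0] + p, y - h + loc[1] + p], dtype=float)
    new_velocity = refined - prev_pos                     # update the state for the next frame
    return refined, new_velocity, score                   # low score can trigger outlier rejection
```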

Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4835-4848 / 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains; however, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in face transitions. This is because the face is a highly sensitive feature, and the quality of the resulting image is significantly degraded if the transitions of the eyes, nose, and mouth are not performed effectively. We propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique shows that stable conversion to other domains is possible while maintaining the image characteristics of the eyes, nose, and mouth.
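
The abstract does not spell out the two face-specific loss functions, so the following PyTorch fragment is only a generic illustration of adding region-focused terms (here, L1 losses on eye and mouth crops of the source image) to a standard adversarial generator loss; all names, crops, and weights are assumptions.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake, source, eye_box, mouth_box, w_region=10.0):
    """fake/source: (N, C, H, W) images; *_box: (y0, y1, x0, x1) crop coordinates."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))    # fool the discriminator
    def crop(img, b):
        return img[:, :, b[0]:b[1], b[2]:b[3]]
    # Region terms encourage the output to keep the source's eye/mouth characteristics.
    region = (F.l1_loss(crop(fake, eye_box), crop(source, eye_box)) +
              F.l1_loss(crop(fake, mouth_box), crop(source, mouth_box)))
    return adv + w_region * region
```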

The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function (시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성)

  • Seok, Jin-Uk;Jo, Seong-Won
    • Journal of Institute of Control, Robotics and Systems / v.2 no.2 / pp.108-114 / 1996
  • We present proofs of the stability and convergence of a self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. One of the major problems in SOFM neural networks concerns the learning rate, which plays the role of the Kalman filter gain in stochastic control: a monotone decreasing function that converges to 0 in order to satisfy the minimum-variance property. In this paper, we show the stability and convergence of the SOFM neural network with a time-invariant learning rate. The analysis of the proposed algorithm shows that stability and convergence are guaranteed, with exponential stability and weak convergence properties as well.
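
A compact sketch of the update rule whose stability is analyzed above: a self-organizing feature map trained with a constant (time-invariant) learning rate and a binary reinforcement function, i.e., only the winning unit is updated. Map size and learning rate are illustrative.

```python
import numpy as np

def sofm_train(data, n_units=16, eta=0.1, epochs=50, seed=0):
    """data: (n_samples, n_dims) array; returns the trained weight vectors."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, data.shape[1]))         # weight vectors of the map units
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(W - x, axis=1)) # best-matching unit
            # Binary reinforcement: h = 1 for the winner, 0 for all other units.
            W[winner] += eta * (x - W[winner])                # constant learning rate eta
    return W
```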


A Method to Detect Object of Interest from Satellite Imagery based on MSER(Maximally Stable Extremal Regions) (MSER(Maximally Stable Extremal Regions)기반 위성영상에서의 관심객체 검출기법)

  • Baek, Inhye
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.5 / pp.510-516 / 2015
  • This paper describes an approach to detecting objects of interest in satellite images, focusing on objects that share common special patterns but do not have identical shapes and sizes. Previous technologies are still insufficient for finding such objects automatically through pattern analysis. To overcome this limitation, this paper proposes a methodology for obtaining the special patterns of the objects of interest by considering their common features and related characteristics. MSER (Maximally Stable Extremal Regions) is applied for region detection, and a corner detector is used to extract the features of the objects of interest. A case study is conducted, and its experimental results show that the approach reduces processing time and effort compared with previous manual searching.
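
An OpenCV sketch of the region/corner pipeline named above: MSER proposes stable candidate regions and a corner detector supplies point features; parameter values are illustrative assumptions.

```python
import cv2

def detect_candidates(gray):
    """gray: single-channel uint8 satellite image patch."""
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)                # maximally stable extremal regions
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
    return bboxes, corners                                    # candidate boxes + corner features
```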

Face recognition using PCA and face direction information (PCA와 얼굴방향 정보를 이용한 얼굴인식)

  • Kim, Seung-Jae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.6 / pp.609-616 / 2017
  • In this paper, we propose an algorithm that uses the left and right rotation information of the input image to obtain a more stable and higher recognition rate in face recognition. The proposed algorithm takes a facial image captured by a web camera as input, reduces the image size, and normalizes brightness and color information to improve the recognition rate. Principal Component Analysis (PCA) is applied to the detected candidate regions to obtain feature vectors and classify faces. In addition, to narrow the error range of the recognition rate, a dataset with left and right $45^{\circ}$ rotation information is constructed, considering the direction of the input face image, and each feature vector is obtained with PCA. To obtain a stable recognition rate, the obtained feature vectors are projected into the eigenspace, and the final face is recognized by comparing the Euclidean distance to each feature. The PCA-based feature vector is low-dimensional but still represents the face adequately, and the recognition speed can be fast because of the small amount of computation. The proposed method can improve the safety and accuracy of recognition, achieve a higher recognition rate faster than other algorithms, and can be used in real-time recognition systems.
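
A compact sketch of the PCA (eigenface) matching step described above: faces are projected into the eigenspace and the gallery identity with the smallest Euclidean distance is returned; the dataset layout and number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(gallery_images, n_components=50):
    """gallery_images: (n_samples, h*w) flattened, brightness-normalized face images."""
    pca = PCA(n_components=n_components)
    gallery_features = pca.fit_transform(gallery_images)     # project gallery into the eigenspace
    return pca, gallery_features

def recognize(pca, gallery_features, gallery_labels, probe_image):
    probe = pca.transform(probe_image.reshape(1, -1))
    dists = np.linalg.norm(gallery_features - probe, axis=1) # Euclidean distance to each gallery face
    return gallery_labels[int(np.argmin(dists))]
```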

A Feature Selection-based Ensemble Method for Arrhythmia Classification

  • Namsrai, Erdenetuya;Munkhdalai, Tsendsuren;Li, Meijing;Shin, Jung-Hoon;Namsrai, Oyun-Erdene;Ryu, Keun Ho
    • Journal of Information Processing Systems / v.9 no.1 / pp.31-40 / 2013
  • In this paper, a novel method is proposed to build an ensemble of classifiers by using a feature selection schema. The feature selection schema identifies the best feature sets that affect arrhythmia classification. First, a number of feature subsets are extracted by applying the feature selection schema to the original dataset. Then, classification models are built using each feature subset. Finally, the classification models are combined by a voting approach to form a classification ensemble. The voting approach involves both the classification error rate and the feature selection rate to calculate the score of each classifier in the ensemble; the feature selection rate depends on the order in which the feature subsets are extracted. In the experiment, we applied the method to an arrhythmia dataset and generated the three top disjoint feature subsets. We then built three classifiers based on these subsets and formed the classifier ensemble using the voting approach. The method improves classification accuracy on high-dimensional datasets: the performance of each classifier and of their ensemble was higher than that of a classifier trained on the whole feature space, and a more stable classification model could be constructed with the proposed approach.
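
A sketch of the weighted-voting idea described above, where each classifier's vote is scaled by a score combining its error rate and a feature selection rate derived from the extraction order of its feature subset; the exact weighting formula below is an assumption, not the paper's.

```python
import numpy as np

def ensemble_predict(members, error_rates, extraction_ranks, X):
    """members: list of (fitted_clf, feature_indices); extraction_ranks: 1 = first-extracted subset."""
    selection_rates = 1.0 / np.asarray(extraction_ranks, dtype=float)  # earlier subsets weigh more
    scores = (1.0 - np.asarray(error_rates)) * selection_rates         # combine accuracy and selection rate
    all_preds = [clf.predict(X[:, idx]) for clf, idx in members]       # each model sees its own subset
    results = []
    for i in range(X.shape[0]):
        votes = {}
        for preds, w in zip(all_preds, scores):
            votes[preds[i]] = votes.get(preds[i], 0.0) + w             # weighted vote per sample
        results.append(max(votes, key=votes.get))
    return results
```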

Robust Visual Odometry System for Illumination Variations Using Adaptive Thresholding (적응적 이진화를 이용하여 빛의 변화에 강인한 영상거리계를 통한 위치 추정)

  • Hwang, Yo-Seop;Yu, Ho-Yun;Lee, Jangmyung
    • Journal of Institute of Control, Robotics and Systems / v.22 no.9 / pp.738-744 / 2016
  • In this paper, a robust visual odometry system is proposed and implemented for environments with dynamic illumination. Visual odometry uses stereo images to estimate the distance to an object, but achieving highly accurate and stable estimation is difficult because image quality depends strongly on illumination, which is a major disadvantage of visual odometry. To address the poor feature detection caused by illumination variations, an optimal threshold value is determined for image binarization and an adaptive threshold is used for feature detection. The direction of each feature point and the non-uniform magnitude of the motion vector are utilized as features, and feature detection is further improved by the RANSAC algorithm. The position of a mobile robot is then estimated from the feature points. The experimental results demonstrate that the proposed approach has superior performance under illumination variations.
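
An OpenCV sketch of the illumination-robust matching idea above: adaptive (per-neighborhood) binarization before feature detection, followed by RANSAC-based outlier rejection of the matches; the detector choice (ORB rather than the authors') and all parameters are assumptions.

```python
import cv2
import numpy as np

def robust_matches(img_left, img_right):
    """img_*: grayscale uint8 stereo frames; returns RANSAC-filtered matches."""
    def prep(img):
        # Adaptive threshold: each pixel is binarized against its local neighborhood,
        # which keeps structure visible under uneven or changing illumination.
        return cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, blockSize=31, C=5)
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prep(img_left), None)
    k2, d2 = orb.detectAndCompute(prep(img_right), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)     # reject outlier matches
    return [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]
```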

Cross-architecture Binary Function Similarity Detection based on Composite Feature Model

  • Xiaonan Li;Guimin Zhang;Qingbao Li;Ping Zhang;Zhifeng Chen;Jinjin Liu;Shudan Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2101-2123 / 2023
  • Recent studies have shown that neural network-based binary code similarity detection performs well in vulnerability mining, plagiarism detection, and malicious code analysis. However, existing cross-architecture methods still suffer from insufficient feature characterization and low discrimination accuracy. To address these issues, this paper proposes a cross-architecture binary function similarity detection method based on a composite feature model (SDCFM). First, the binary function is converted into a vector representation according to the proposed composite feature model, which is composed of instruction statistical features, control flow graph structural features, and application program interface calling behavioral features. Then, the composite features are embedded by the proposed hierarchical embedding network based on a graph neural network, in which block-level features and function-level features are processed separately and finally fused into the embedding. In addition, to make the trained model more accurate and stable, the method uses the embeddings of predecessor nodes to modify the node embedding during the iterative update of the graph neural network. To assess the effectiveness of the composite feature model, SDCFM is compared with state-of-the-art methods on benchmark datasets. The experimental results show that SDCFM performs well both on the area under the curve in the binary function similarity detection task and on vulnerable candidate function ranking in the vulnerability search task.
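
A framework-free sketch of the flavor of the iterative update described above: block embeddings are refined using their control-flow predecessors and pooled into a function-level embedding compared by cosine similarity; the dimensions and aggregation form are assumptions, not SDCFM's exact design.

```python
import numpy as np

def embed_function(block_feats, preds, W_self, W_pred, iters=3):
    """block_feats: (n_blocks, d); preds[i]: indices of block i's CFG predecessors;
    W_self, W_pred: (d, d) learned weight matrices (assumed given)."""
    H = block_feats.copy()
    for _ in range(iters):
        H_new = H @ W_self
        for i, ps in enumerate(preds):
            if ps:
                H_new[i] += H[ps].mean(axis=0) @ W_pred    # predecessor embeddings modify the node
        H = np.tanh(H_new)
    return H.mean(axis=0)                                   # pooled function-level embedding

def similarity(e1, e2):
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12))
```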

Fast Image Stitching for Video Stabilization Using SIFT Feature Points

  • Hossain, Mostafiz Mehebuba;Lee, Hyuk-Jae;Lee, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.10 / pp.957-966 / 2014
  • Video stabilization for vehicular applications is an important method of removing unwanted shaky motion from unstable videos. In this paper, an improved video stabilization method using image stitching is proposed. Scale Invariant Feature Transform (SIFT) matching is used to calculate the new positions of feature points in the next frame. Image stitching is performed on every frame to obtain stabilized frames, providing a stable video as well as a better understanding of the previous frame's position and showing the surrounding objects together. The computational complexity of SIFT is reduced by shrinking the SIFT descriptor size and restricting the number of keypoints to be extracted. In addition, a modified matching procedure is proposed to improve the accuracy of the stabilization.
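
A sketch of the per-frame alignment step in this style of stabilization: SIFT with a capped keypoint count, ratio-test matching, and a RANSAC homography warp of the next frame onto the current one; the parameters and the ratio test are illustrative assumptions.

```python
import cv2
import numpy as np

def stitch_next_frame(cur, nxt, max_keypoints=500):
    """cur, nxt: grayscale uint8 frames; returns nxt warped into cur's coordinates."""
    sift = cv2.SIFT_create(nfeatures=max_keypoints)          # restrict the number of keypoints
    k1, d1 = sift.detectAndCompute(cur, None)
    k2, d2 = sift.detectAndCompute(nxt, None)
    knn = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]   # Lowe ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(nxt, H, (cur.shape[1], cur.shape[0]))  # aligned (stabilized) frame
```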