• Title/Summary/Keyword: discriminative learning

Online Multi-Object Tracking by Learning Discriminative Appearance with Fourier Transform and Partial Least Square Analysis

  • Lee, Seong-Ho; Bae, Seung-Hwan
    • Journal of the Korea Society of Computer and Information, v.25 no.2, pp.49-58, 2020
  • In this study, we solve the online multi-object tracking problem, which requires finding object states (i.e., locations and sizes) while preserving their identities across online-provided images and detections. We handle this problem with a tracking-by-detection approach that links (or associates) detections between frames. For more accurate online association, we propose a novel online appearance learning scheme based on the discrete Fourier transform and partial least squares (PLS) analysis. We first transform each object image into a Fourier image in order to extract meaningful features in the frequency domain. We then learn PLS subspaces that can discriminate the frequency features of different objects. In addition, we incorporate the proposed appearance learning into a recent confidence-based association method and extensively compare our method with state-of-the-art methods on the MOT benchmark challenge datasets.
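
As a rough illustration of this pipeline (not the authors' code), the sketch below computes log-magnitude FFT features with NumPy and learns a discriminative PLS subspace with scikit-learn's PLSRegression; the patch size, number of components, and one-hot identity targets are assumptions.

```python
# Sketch: frequency-domain appearance features + a discriminative PLS subspace.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def frequency_features(patch):
    """patch: (H, W) grayscale crop of a detected object."""
    spectrum = np.fft.fft2(patch)
    magnitude = np.abs(np.fft.fftshift(spectrum))  # shift DC to the center
    return np.log1p(magnitude).ravel()             # compress dynamic range

# Toy stand-ins for detections and their identity labels.
patches = np.random.rand(20, 64, 64)
labels = np.random.randint(0, 3, size=20)
num_ids = 3

X = np.stack([frequency_features(p) for p in patches])
Y = np.eye(num_ids)[labels]                        # one-hot targets for PLS

pls = PLSRegression(n_components=8)                # subspace size is a guess
pls.fit(X, Y)
Z = pls.transform(X)  # low-dimensional, identity-discriminative features
```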

Cell Images Classification using Deep Convolutional Autoencoder of Unsupervised Learning (비지도학습의 딥 컨벌루셔널 자동 인코더를 이용한 셀 이미지 분류)

  • Vununu, Caleb; Park, Jin-Hyeok; Kwon, Oh-Jun; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Proceedings of the Korea Information Processing Society Conference, 2021.11a, pp.942-943, 2021
  • The present work proposes a classification system for HEp-2 cell images using an unsupervised deep feature learning method. Unlike most state-of-the-art methods in the literature, which utilize deep learning in a strictly supervised way, we propose the use of a deep convolutional autoencoder (DCAE) as the principal feature extractor for classifying the different types of HEp-2 cell images. The network takes the original cell images as inputs and learns to reconstruct them in order to capture the features related to the global shape of the cells. A final feature vector is constructed from the latent representations extracted by the DCAE, giving a highly discriminative feature representation. The created features are fed to a nonlinear classifier whose output represents the final type of the cell image. We have tested the discriminability of the proposed features on one of the most popular HEp-2 cell classification datasets, the SNPHEp-2 dataset, and the results show that the proposed features capture the distinctive characteristics of the different cell types while performing at least as well as the deep learning based state-of-the-art methods.
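
A minimal sketch of this kind of DCAE, assuming 64x64 grayscale cell images and a 128-dimensional latent code (both are guesses, not the paper's settings): the autoencoder is trained on reconstruction alone, and its latent code serves as the feature vector for a separate classifier.

```python
# Sketch: convolutional autoencoder trained unsupervised; latent code = features.
import torch
import torch.nn as nn

class DCAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)        # stand-in for HEp-2 cell images
recon, latent = model(images)
loss = nn.functional.mse_loss(recon, images)  # reconstruction-only objective
loss.backward()
opt.step()
# `latent` (8 x 128) would then be fed to a nonlinear classifier, e.g. an MLP.
```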

Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub; Muhammad Wasif Nisar; Ehsan Ullah Munir; Muhammad Ramzan
    • International Journal of Computer Science & Network Security, v.24 no.2, pp.79-88, 2024
  • The use of social media has become part of our daily life. Social web channels provide a content generation facility to their users, who can share their views, opinions, and experiences on certain topics, and researchers are using this social media content across various research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting the reviews, opinions, and sentiments of people; it is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether the content writer is in favor of, against, or neutral towards the target topic or issue. Stance classification is significant because it has many research applications, such as rumor stance classification, stance classification in public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic properties stance classification. This study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from the standard datasets in this area. Supervised learning approaches are applied, including a generative algorithm (Naïve Bayes) and discriminative machine learning algorithms (Support Vector Machine, Decision Tree, and k-Nearest Neighbor), followed by ensemble-based algorithms (Random Forest and AdaBoost). The empirical results are evaluated using the standard performance measures of accuracy, precision, recall, and F-measure.
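
The described setup maps naturally onto scikit-learn; the hedged sketch below uses only lexical TF-IDF features and toy data as stand-ins for the paper's richer feature sets and standard datasets.

```python
# Sketch: several stance classifiers over TF-IDF features, with standard metrics.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import classification_report

texts = ["I fully support this policy", "This claim is nonsense", "No opinion here"]
stances = ["favor", "against", "neutral"]           # toy stand-in data

models = {
    "naive_bayes": MultinomialNB(),                 # generative baseline
    "svm": LinearSVC(),                             # discriminative
    "random_forest": RandomForestClassifier(),      # ensemble
    "adaboost": AdaBoostClassifier(),               # ensemble
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(texts, stances)
    # Accuracy, precision, recall, and F-measure per stance class.
    print(name, classification_report(stances, pipe.predict(texts), zero_division=0))
```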

A Study on Person Re-Identification System using Enhanced RNN (확장된 RNN을 활용한 사람재인식 시스템에 관한 연구)

  • Choi, Seok-Gyu; Xu, Wenjie
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.17 no.2, pp.15-23, 2017
  • Person re-identification is among the most challenging problems in computer vision due to significant changes in human pose and background clutter with occlusions. Pictures from non-overlapping cameras make it even harder to distinguish one person from another. To obtain better matching performance, most methods use feature selection and distance metrics separately to obtain discriminative representations and a proper distance describing the similarity between people, thereby ignoring some significant features. This situation has encouraged us to consider a novel method to deal with this problem. In this paper, we propose an enhanced recurrent neural network with a three-tier hierarchical network for person re-identification. Specifically, the proposed recurrent neural network (RNN) model contains an iterative expectation-maximization (EM) algorithm and a three-tier hierarchical network to jointly learn both the discriminative features and the distance metric. The iterative EM algorithm can make full use of the feature extraction ability of the convolutional neural network (CNN) that is placed in series before the RNN. Through unsupervised learning, the EM framework can relabel the patches and train on larger datasets. Through the three-tier hierarchical network, the convolutional neural network, recurrent network, and pooling layer jointly act as a feature extractor to better train the network. The experimental results show that, compared with other approaches in this field, this method achieves competitive accuracy. The influence of the different components of this method will be analyzed and evaluated in future research.
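
A rough PyTorch sketch of the three-tier idea (a CNN over horizontal image stripes, an RNN linking the stripes, then pooling into an embedding); the EM relabeling step is omitted and all layer sizes are illustrative, not the authors' configuration.

```python
# Sketch: CNN -> RNN -> pooling as a joint feature extractor for re-ID.
import torch
import torch.nn as nn

class ReIDNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Tier 1: CNN applied to each horizontal stripe of the person image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (N*stripes, 32)
        )
        # Tier 2: RNN links the stripes top-to-bottom.
        self.rnn = nn.GRU(32, embed_dim, batch_first=True)

    def forward(self, x, stripes=6):
        n, c, h, w = x.shape
        parts = x.view(n, c, stripes, h // stripes, w)          # split height
        parts = parts.permute(0, 2, 1, 3, 4).reshape(n * stripes, c, h // stripes, w)
        feats = self.cnn(parts).view(n, stripes, -1)
        out, _ = self.rnn(feats)
        return out.mean(dim=1)      # Tier 3: pooling -> person embedding

net = ReIDNet()
a, b = torch.rand(2, 3, 96, 32), torch.rand(2, 3, 96, 32)
dist = torch.cdist(net(a), net(b))  # smaller distance = more likely same person
```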

Age Estimation via Selecting Discriminated Features and Preserving Geometry

  • Tian, Qing; Sun, Heyang; Ma, Chuang; Cao, Meng; Chu, Yi
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.4, pp.1721-1737, 2020
  • Human apparent age estimation has become a popular research topic and attracted great attention in recent years due to its wide applications, such as personal security and law enforcement. To achieve the goal of age estimation, a large number of methods have been proposed, among which the models derived through cumulative attribute coding achieve promising performance by preserving the neighbor-similarity of ages. However, the aforementioned methods ignore the geometric structure of the extracted facial features, and the geometric structure of data greatly affects the accuracy of prediction. To this end, we propose an age estimation algorithm that joins the feature selection and manifold learning paradigms, called Feature-selected and Geometry-preserved Least Square Regression (FGLSR). Compared with the others, our proposed method not only preserves the geometric structure within facial representations but also selects the discriminative features. Moreover, a deep learning extension of FGLSR is proposed, namely the Feature-selected and Geometry-preserved Neural Network (FGNN). Finally, experiments are conducted on the Morph2 and FG-Net datasets for FGLSR and on the Morph2 dataset for FGNN. The experimental results show that our method achieves the best performance.
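
One plausible formulation of such an objective combines a least-squares term, an L2,1 feature-selection penalty, and a graph-Laplacian geometry term; the NumPy sketch below solves it by iterative reweighting under those assumptions (the paper's exact objective and solver may differ).

```python
# Sketch: min ||XW - Y||^2 + lam*||W||_{2,1} + gamma*tr(W^T X^T L X W)
import numpy as np

def fglsr(X, Y, lam=0.1, gamma=0.1, n_neighbors=5, iters=50):
    n, d = X.shape
    # kNN graph Laplacian L = D - S over samples (geometry preservation).
    D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    S = np.zeros((n, n))
    for i in range(n):
        S[i, np.argsort(D2[i])[1:n_neighbors + 1]] = 1.0
    S = np.maximum(S, S.T)
    L = np.diag(S.sum(1)) - S

    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(iters):  # iteratively reweighted L2,1 minimization
        G = np.diag(1.0 / (2 * np.linalg.norm(W, axis=1) + 1e-8))
        W = np.linalg.solve(X.T @ X + lam * G + gamma * X.T @ L @ X, X.T @ Y)
    return W  # rows with near-zero norm mark unselected features

X = np.random.rand(100, 30)                 # toy facial features
Y = X[:, :3].sum(1, keepdims=True) * 20     # toy ages
W = fglsr(X, Y)
```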

A New CSR-DCF Tracking Algorithm based on Faster RCNN Detection Model and CSRT Tracker for Drone Data

  • Farhodov, Xurshid; Kwon, Oh-Heum; Moon, Kwang-Seok; Kwon, Oh-Jun; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society, v.22 no.12, pp.1415-1429, 2019
  • Object tracking has become one of the most challenging tasks in the field of computer vision. The CSR-DCF (channel and spatial reliability - discriminative correlation filter) tracking algorithm, proposed on a recent tracking benchmark, achieves state-of-the-art performance by adding channel and spatial reliability concepts to DCF tracking and providing a novel learning algorithm for their efficient and seamless integration into the filter update and tracking process, using only two simple standard features, HOG and Color Names. However, there are cases where this method cannot track properly, such as overlapping targets, occlusions, motion blur, appearance changes, and environmental variations. To overcome these complications, a new modified version of the CSR-DCF algorithm is proposed that integrates deep learning based object detection with the CSRT tracker implemented in the OpenCV library. As the object detection model, Faster RCNN (Region-based Convolutional Neural Network) was chosen based on a comparison of object detection methods and by reason of its high efficiency and speed, and it was combined with the CSRT tracker, demonstrating outstanding real-time detection and tracking performance. The results indicate that integrating the trained object detection model with the tracking algorithm gives better outcomes than using the tracking algorithm or the filter alone.
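
A minimal OpenCV sketch of the detect-then-track loop described above; `detect_with_faster_rcnn` is a placeholder for any Faster RCNN detector (e.g., torchvision's), the video path is hypothetical, and depending on the OpenCV build the tracker factory may be `cv2.TrackerCSRT_create()` or `cv2.legacy.TrackerCSRT_create()`.

```python
# Sketch: (re-)initialize a CSRT tracker from detections when tracking fails.
import cv2

def detect_with_faster_rcnn(frame):
    """Placeholder for a Faster RCNN detector: return one (x, y, w, h) or None."""
    return (100, 80, 50, 120)

cap = cv2.VideoCapture("drone.mp4")   # hypothetical input video
tracker, ok = None, False
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if tracker is not None:
        ok, box = tracker.update(frame)
    if not ok:  # tracker lost or not started: fall back to detection
        box = detect_with_faster_rcnn(frame)
        if box is not None:
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)
            ok = True
    # `box` now holds the current target location for this frame.
cap.release()
```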

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems, v.16 no.1, pp.6-29, 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers as it produces more robust and trustworthy results than single modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality specific detectors to automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality specific sparse representation classifiers for unimodal recognition, followed by score level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.
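
The score-level fusion step with missing modalities can be sketched in a few lines; the modality names, weights, and scores below are illustrative, not from the paper.

```python
# Sketch: fuse per-modality identification scores, skipping missing modalities.
import numpy as np

def fuse_scores(modality_scores, weights=None):
    """modality_scores: dict mapping modality name -> per-subject score vector,
    or None when that modality was not detected in the video clip."""
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if weights is None:
        weights = {m: 1.0 for m in available}
    total = sum(weights[m] for m in available)     # renormalize over present ones
    fused = sum(weights[m] / total * np.asarray(s) for m, s in available.items())
    return int(np.argmax(fused))                   # identity with highest score

scores = {
    "frontal_face": [0.1, 0.7, 0.2],
    "left_ear": [0.2, 0.5, 0.3],
    "right_ear": None,   # missing modality in this clip
}
print(fuse_scores(scores))  # -> 1
```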

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam; Ravichandran, Suban
    • International Journal of Computer Science & Network Security, v.22 no.6, pp.230-240, 2022
  • Sharing videos online via the internet is an emerging and important activity in different types of applications, such as surveillance and mobile video search in web applications. There is therefore a need for a personalized web video retrieval system that explores relevant videos and helps people search efficiently for videos related to specific big data content. To this end, attributes/features are computed from videos with dimensionality reduction in order to explore the discriminative aspects of the scene in a video based on shape, histogram, texture, object annotation, coordination, color, and contour data. Dimensionality reduction mainly depends on feature extraction and feature selection in multi-labeled retrieval from multimedia data. Many researchers have implemented different techniques/approaches to reduce dimensionality based on the visual features of video data, but each of these techniques has advantages and disadvantages for dimensionality reduction with advanced features in video retrieval. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction together with exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the projected-space features. The proposed approach also addresses the aforementioned issue (i.e., segmentation of video with frame selection using low-level and high-level features) with efficient object annotation for video representation. Experiments performed on a synthetic dataset demonstrate the efficiency of the proposed approach against traditional state-of-the-art video retrieval methodologies.
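
The abstract states the projection objective loosely; one common way to "increase dependence" between data and a kernel built from side information is HSIC-style maximization, sketched below under that assumption (this is not the paper's code, and the label-based kernel is an illustrative choice).

```python
# Sketch: learn a projection maximizing HSIC between features and a kernel.
import numpy as np

def hsic_projection(X, K, dim=10):
    """X: (n, d) visual features; K: (n, n) kernel over the same videos
    (e.g. built from labels or intent signals); returns a (d, dim) projection."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    M = X.T @ H @ K @ H @ X                  # dependence between X and K
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    return vecs[:, -dim:]                    # top eigenvectors maximize HSIC

X = np.random.rand(50, 64)                   # toy low-level video features
labels = np.random.randint(0, 5, 50)
K = (labels[:, None] == labels[None]).astype(float)  # label-based kernel
P = hsic_projection(X, K, dim=10)
Z = X @ P                                    # reduced features for retrieval
```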

Detection of Frame Deletion Using Coding Pattern Analysis (부호화 패턴 분석을 이용한 동영상 삭제 검출 기법)

  • Hong, Jin Hyung; Yang, Yoonmo; Oh, Byung Tae
    • Journal of Broadcast Engineering, v.22 no.6, pp.734-743, 2017
  • In this paper, we introduce a technique to detect video forgery using coding pattern analysis. The proposed method uses the recently developed HEVC standard codec, which is expected to be widely used in the future. First, the HEVC coding patterns of forged and original videos are analyzed to select discriminative features, and the selected feature vectors are learned through a machine learning technique to model the classification criteria between the two groups. Experimental results show that the proposed method is more effective at detecting frame deletions in HEVC-coded videos than existing works.
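
A hedged sketch of the classification stage: the coding-pattern features here (intra-block ratio, skip ratio, average bits, average QP) are hypothetical placeholders, the data is synthetic, and the SVM stands in for whatever classifier the paper trains.

```python
# Sketch: classify original vs. frame-deleted videos from coding statistics.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row: [intra_block_ratio, skip_ratio, avg_bits, avg_qp] for one video.
original = rng.normal([0.20, 0.5, 1.0, 32], 0.05, size=(100, 4))
forged = rng.normal([0.35, 0.4, 1.3, 32], 0.05, size=(100, 4))  # toy data
X = np.vstack([original, forged])
y = np.array([0] * 100 + [1] * 100)   # 0 = original, 1 = frames deleted

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC().fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```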

Improving Discriminative Feature Learning for Face Recognition utilizing a Center Expansion Algorithm (중심확장 알고리즘이 보강된 식별적 특징학습을 통한 얼굴인식 향상기법)

  • Kang, Myeong-Kyun; Lee, Sang C.; Lee, In-Ho
    • Proceedings of the Korea Information Processing Society Conference, 2017.04a, pp.881-884, 2017
  • A neural network that can derive good features is a network that understands its target well. However, to classify highly similar images such as faces, the network must derive more distinguishable features. In this paper, we add a Center Expansion term to the loss function in order to classify highly similar images such as faces. Center Expansion is proposed to address the problem that, when derived features are densely clustered, it becomes difficult to find a manifold that separates the classes and classification performance drops; it forces features not to be placed in regions where they are likely to concentrate. The training loss combines the softmax cross-entropy loss generally used for classification problems, a loss that reduces the variance of each class, and the proposed Center Expansion loss. We examine how models trained with and without the proposed Center Expansion loss differ in feature derivation and classification. To assess the effect of the Center Expansion loss, we conduct classification experiments on the Labeled Faces in the Wild dataset. The experiments on Labeled Faces in the Wild confirm a performance difference between the models trained with and without Center Expansion.
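
A hedged PyTorch sketch of the combined loss: softmax cross-entropy, a center loss that shrinks each class's variance, and a "center expansion" term that pushes class centers away from their crowded global mean. This is one plausible reading of the abstract, not the authors' exact formulation, and the weights `lam` and `mu` are illustrative.

```python
# Sketch: cross-entropy + intra-class variance loss + center expansion penalty.
import torch
import torch.nn.functional as F

def combined_loss(logits, feats, labels, centers, lam=0.5, mu=0.1):
    ce = F.cross_entropy(logits, labels)
    # Center loss: pull each feature toward its class center (variance shrink).
    center = ((feats - centers[labels]) ** 2).sum(1).mean()
    # Center expansion: push class centers apart from the dense global mean.
    global_mean = centers.mean(0, keepdim=True)
    expansion = -((centers - global_mean) ** 2).sum(1).mean()
    return ce + lam * center + mu * expansion

num_classes, feat_dim = 10, 64
centers = torch.randn(num_classes, feat_dim, requires_grad=True)
feats = torch.randn(32, feat_dim, requires_grad=True)   # network embeddings
logits = torch.randn(32, num_classes, requires_grad=True)
labels = torch.randint(0, num_classes, (32,))
loss = combined_loss(logits, feats, labels, centers)
loss.backward()  # gradients flow to features, logits, and the centers
```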