• Title/Summary/Keyword: Network Features


Human Activity Recognition Based on 3D Residual Dense Network

  • Park, Jin-Ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.12
    • /
    • pp.1540-1551
    • /
    • 2020
  • Existing human activity recognition algorithms fail to fully exploit the multi-level spatio-temporal information available in a network, so this paper proposes a recognition algorithm built on a dense three-dimensional residual network. First, the algorithm uses a three-dimensional residual dense block as the basic network module; the block extracts hierarchical features of human behavior through densely connected convolutional layers. Second, an adaptive local feature aggregation method learns the dense local features of human behavior. Third, residual connections promote the flow of feature information and reduce training difficulty. Finally, multi-level local feature extraction is achieved by cascading multiple three-dimensional residual dense blocks, and an adaptive global feature aggregation method learns the features of all network layers to perform recognition. Extensive experiments on the benchmark KTH dataset show that the proposed algorithm reaches a recognition rate (top-1 accuracy) of 93.52%, an improvement of 3.93 percentage points over the three-dimensional convolutional neural network (C3D) baseline. The framework is robust, transfers well, and can effectively handle a variety of video-based behavior recognition tasks.
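The abstract describes the architecture only at a high level; a minimal PyTorch-style sketch of one three-dimensional residual dense block with local feature aggregation, where the class name, channel count, growth rate, and layer count are all illustrative assumptions rather than the paper's settings, could look like this:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock3D(nn.Module):
    """Densely connected 3D convolutions, a 1x1x1 local-feature-aggregation
    layer, and a residual connection back to the block input."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
        # local feature aggregation: fuse all densely connected outputs
        self.local_fusion = nn.Conv3d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # residual connection eases optimization of the deep 3D network
        return x + self.local_fusion(torch.cat(features, dim=1))

# Cascading several such blocks and aggregating their outputs globally would
# correspond to the multi-level feature learning described in the abstract.
```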

Human Gait Recognition Based on Spatio-Temporal Deep Convolutional Neural Network for Identification

  • Zhang, Ning;Park, Jin-ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.927-939
    • /
    • 2020
  • Gait recognition can identify people from a long distance, which makes it valuable for increasing the intelligence of monitoring systems. Among the many human biometric features, gait has the advantages of being acquirable at a distance, robust, and secure. Traditional gait feature extraction, shaped by the broader development of behavior recognition, relied on hand-crafted features and cannot meet the needs of fine-grained gait recognition. Deep convolutional neural networks have freed researchers from complex feature-engineering and can automatically learn useful features from data, so they have been widely adopted. In this paper, we perform feature metric learning in three-dimensional space by combining three-dimensional convolutional features of the gait sequence with a Siamese structure. The method captures both spatial and temporal information from continuous, periodic gait sequences, further improving the accuracy and practicality of gait recognition.
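As a rough illustration of the Siamese arrangement described above, the sketch below shares one 3D-convolutional encoder between two gait clips and trains it with a contrastive distance loss; the layer sizes, embedding dimension, and loss form are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEncoder3D(nn.Module):
    """Shared 3D-convolutional branch that embeds a gait clip of shape (N, 1, T, H, W)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, clip):
        return self.fc(self.features(clip).flatten(1))

def contrastive_loss(e1, e2, same_identity, margin=1.0):
    # Pull embeddings of the same person together, push different people apart.
    d = F.pairwise_distance(e1, e2)
    return torch.mean(same_identity * d.pow(2) +
                      (1 - same_identity) * F.relu(margin - d).pow(2))
```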

Model-based 3-D object recognition using Hopfield neural network (Hopfield 신경회로망을 이용한 모델 기반형 3차원 물체 인식)

  • 정우상;송호근;김태은;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.5
    • /
    • pp.60-72
    • /
    • 1996
  • In this paper, a new model-based three-dimensional (3-D) object recognition method using a Hopfield network is proposed. To minimize the deformation of feature values under 3-D rotation, we select 3-D shape features and 3-D relational features that are rotation invariant; these feature values are then normalized so that they are also scale invariant. The input features are matched against model features through the optimization process of a Hopfield network whose neurons are arranged in a two-dimensional array. Experimental results on object classification and object matching with 3-D rotated, scale-changed, and partially occluded objects show the good performance of the proposed method.
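The matching step can be pictured with a small sketch of a two-dimensional array of match neurons relaxed toward a low-energy assignment; the compatibility tensor, update rule, and penalty terms below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def hopfield_match(compatibility, iterations=200, seed=0):
    """Relax a 2-D array of neurons v[i, j] ("input feature i matches model
    feature j") toward a low-energy, roughly one-to-one assignment.
    compatibility[i, j, k, l] scores assigning i->j together with k->l."""
    rng = np.random.default_rng(seed)
    n_in, n_model = compatibility.shape[0], compatibility.shape[1]
    v = rng.random((n_in, n_model))            # soft match strengths in (0, 1)
    for _ in range(iterations):
        i, j = rng.integers(n_in), rng.integers(n_model)
        # support from all other tentative assignments,
        # minus penalties for double assignments in the same row or column
        support = np.tensordot(compatibility[i, j], v)
        penalty = v[i].sum() + v[:, j].sum() - 2.0 * v[i, j]
        v[i, j] = 1.0 / (1.0 + np.exp(-(support - penalty)))
    return v   # high v[i, j] ~ accepted correspondence
```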


A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.2
    • /
    • pp.1118-1133
    • /
    • 2017
  • Computer vision-based human activity recognition (HAR) has attracted wide attention due to its applications in fields such as smart-home healthcare for elderly people. A video-based activity recognition system aims, among other goals, to react to people's behavior so that it can proactively assist them with their tasks. This work proposes a novel approach to depth-video-based human activity recognition that uses joint-based motion features of depth body shapes together with a Deep Belief Network (DBN). From the depth video, the body parts involved in each activity are first segmented by a trained random forest. Motion features representing the magnitude and direction of each joint's displacement in the next frame are then extracted. Finally, the features are used to train a DBN that performs the recognition. The proposed HAR approach outperformed conventional approaches on both private and public datasets, indicating its suitability for practical applications in smartly controlled environments.
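A minimal sketch of the joint motion features described above (magnitude and direction of each joint's displacement between consecutive frames) might look like the following; the array shapes and the way the features are flattened are assumptions.

```python
import numpy as np

def joint_motion_features(joints_t, joints_t1):
    """Per-joint motion between consecutive depth frames.
    joints_t, joints_t1: (num_joints, 3) arrays of 3-D joint positions.
    Returns the magnitude and unit direction of each joint's displacement,
    which would then be gathered over a window and fed to a DBN."""
    disp = joints_t1 - joints_t                               # displacement vectors
    magnitude = np.linalg.norm(disp, axis=1)                  # speed of each joint
    direction = disp / np.maximum(magnitude[:, None], 1e-8)   # unit direction
    return np.concatenate([magnitude, direction.ravel()])
```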

Deep Learning Method for Identification and Selection of Relevant Features

  • Vejendla Lakshman
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.212-216
    • /
    • 2024
  • Feature selection has become a central topic of investigation, particularly in bioinformatics, where it has numerous applications. Deep learning is a powerful tool for selecting features, but not all algorithms are equally suited to identifying the relevant ones; indeed, many techniques have been proposed for selecting features with deep learning. Thanks to deep learning, neural networks have enjoyed an enormous revival in recent years, yet they remain black-box models, and few efforts have been made to examine their underlying selection process. This work introduces new algorithms for performing feature selection with deep learning systems. To evaluate the results, regression and classification problems are constructed that allow each algorithm to be compared on several fronts: performance, computation time, and limitations. The results obtained are encouraging, as the stated goal is achieved by outperforming random forests in each case, and they show that the proposed method performs better than traditional methods.
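The abstract does not specify the selection criterion; one simple and common proxy, shown below purely as an illustrative sketch (not necessarily the paper's method), is to rank input features by the magnitude of the first-layer weights of a trained network.

```python
import torch
import torch.nn as nn

def rank_features_by_input_weights(first_layer: nn.Linear, top_k: int = 10):
    """Rank input features by the aggregate magnitude of the weights that
    connect them to the first hidden layer (a simple relevance proxy)."""
    relevance = first_layer.weight.abs().sum(dim=0)   # one score per input feature
    return torch.topk(relevance, top_k).indices

# Example usage with a toy network whose first layer takes 100 input features:
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
selected = rank_features_by_input_weights(model[0], top_k=5)
```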

An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum;Jeong, Hye Won;Choi, Chang Kyun;Ali, Muhammad Salman;Bae, Sung-Ho;Kim, Hui Yong
    • Journal of Broadcast Engineering
    • /
    • v.26 no.7
    • /
    • pp.868-876
    • /
    • 2021
  • Deep neural network models achieve remarkable performance in object detection and instance segmentation. To train these models, features are first extracted from the input image by a backbone network, and the extracted features can be reused by various downstream tasks. Research on serving multiple tasks from these learned features is active, and standardization discussions on how to encode, decode, and transmit them are under way. In this setting, it is necessary to analyze how features respond to the various distortions that can arise during data transmission or compression. In this paper, experiments inject various distortions into the features of an object recognition task and analyze the mAP (mean Average Precision) between the network's predictions and the targets as the distortion intensity increases. The experiments show that features are more robust to distortion than images, indicating that transmitting features rather than images can limit the information lost to distortion during data transmission and compression.
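As a sketch of the experimental idea, the snippet below injects Gaussian noise of increasing intensity into an intermediate feature tensor; the actual distortion types and the detection model used for the mAP comparison are specific to the paper and are not reproduced here.

```python
import torch

def distort_features(features: torch.Tensor, sigma: float) -> torch.Tensor:
    """Add zero-mean Gaussian noise of the given intensity to a feature tensor,
    mimicking degradation introduced by compression or transmission."""
    return features + sigma * torch.randn_like(features)

# Sweeping sigma, re-running the detection head on the distorted features, and
# comparing mAP against the clean baseline follows the experimental setup
# described above (the detection model itself is task-specific).
```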

Masked Face Recognition via a Combined SIFT and DLBP Features Trained in CNN Model

  • Aljarallah, Nahla Fahad;Uliyan, Diaa Mohammed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.319-331
    • /
    • 2022
  • The global COVID-19 pandemic has made facial masks an important part of daily life: people are advised to cover their faces in public spaces to discourage the spread of illness. Wearing these masks raises significant concerns about the accuracy of the face identification methods used to unlock phones and to authenticate people at schools and offices. Many organizations have already built the requisite in-house data to deploy such schemes, using face recognition for authentication. Unfortunately, veiled faces hinder the detection and recognition steps of these facial identity schemes and undermine the internal data collections, so biometric systems that authenticate by face run into detection and recognition problems for both faces and persons. In this research, a novel model is developed to detect and recognize masked faces for authentication, combining Scale-Invariant Feature Transform (SIFT) features of the whole segmented face with efficient local binary texture features (DLBP) of the eye region. Fuzzy C-means clustering is used to segment the image, and the combined features are trained in a convolutional neural network (CNN) model. The main advantage of this model is that it detects and recognizes faces by assigning weights to the selected features, granting or revoking permissions with high accuracy.
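The following sketch computes a basic 8-neighbour local binary pattern for an eye-region patch as a simplified stand-in for the DLBP descriptor; the exact DLBP variant, the SIFT extraction, and the fusion into the CNN are not specified here and remain assumptions.

```python
import numpy as np

def local_binary_pattern(gray, radius=1):
    """Basic 8-neighbour LBP code image for a grayscale eye-region patch."""
    h, w = gray.shape
    codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.uint8)
    center = gray[radius:h - radius, radius:w - radius]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)   # set one bit per neighbour
    return codes

# A histogram of these codes over the eye region, concatenated with SIFT
# descriptors of the unmasked face area, would form the kind of combined
# feature vector on which the CNN is trained.
```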

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.2
    • /
    • pp.115-121
    • /
    • 2022
  • In this paper, we propose a system that classifies music according to the user's emotions using electroencephalogram (EEG) features that appear while listening to music. In the proposed system, a deep regression neural network learns the relationship between the emotional EEG features extracted from EEG signals and the auditory features extracted from music signals. Based on this regression model, the system automatically generates the EEG features mapped to the auditory characteristics of the input music and classifies the music by feeding these features to an attention-based deep neural network. Experimental results demonstrate the classification accuracy of the proposed automatic music classification framework.
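A minimal sketch of the regression stage, assuming illustrative feature dimensions (the actual auditory and EEG feature sets are defined in the paper), could be a small fully connected network that maps audio features to EEG features:

```python
import torch
import torch.nn as nn

class AudioToEEGRegressor(nn.Module):
    """Deep regression network mapping auditory features of a music clip to the
    emotional EEG features a listener would produce (dimensions are illustrative)."""
    def __init__(self, audio_dim=128, eeg_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 128), nn.ReLU(inplace=True),
            nn.Linear(128, eeg_dim))

    def forward(self, audio_features):
        return self.net(audio_features)

# The predicted EEG features would then be passed to an attention-based
# classifier that assigns the music clip to an emotion category.
```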

Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

  • Zhou, Yanyan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.411-425
    • /
    • 2021
  • In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. First, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system built on a multi-scale pyramid convolutional neural network is constructed; an adaptive fusion method weights each single network's contribution to the output of the whole multi-scale network according to its individual recognition accuracy. Next, compressed dictionary learning and data dimensionality reduction are carried out with an effective block-structure method combined with a very sparse random projection matrix, which addresses the computational complexity caused by high-dimensional features and shortens dictionary learning time. Finally, a sparse-representation classifier performs the vehicle type recognition. Experimental results show that the proposed algorithm detects vehicles stably in sunny, cloudy, and rainy weather and adapts well to typical scenarios such as occlusion and blurring, with an average recognition rate above 95%.
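The very sparse random projection used for dimensionality reduction can be sketched as follows; the sparsity parameter, scaling, and target dimension are generic choices in the style of Li et al.'s very sparse projections, not values taken from the paper.

```python
import numpy as np

def very_sparse_random_projection(features, out_dim, s=None, seed=0):
    """Project (n, d) features to out_dim with a very sparse random matrix whose
    entries are in {+1, 0, -1}, cutting the cost of subsequent dictionary
    learning. s controls sparsity (s = sqrt(d) is a common choice)."""
    n, d = features.shape
    s = s or int(np.sqrt(d))
    rng = np.random.default_rng(seed)
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    R = rng.choice([1.0, 0.0, -1.0], size=(d, out_dim), p=probs)
    return features @ (np.sqrt(s / out_dim) * R)   # scaled low-dimensional features
```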

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3729-3749
    • /
    • 2021
  • At present, deep convolution network-based salient object detection (SOD) has achieved impressive performance. However, it is still a challenging problem to make full use of the multi-scale information of the extracted features and which appropriate feature fusion method is adopted to process feature mapping. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. Firstly, we design the parallel connection feature enhancement module (PFEM) for each layer of feature extraction, which improves the feature density by connecting different dilated convolution branches in parallel, and add channel attention flow to fully extract the context information of features. Then the adjacent layer features with close degree of abstraction but different characteristic properties are fused through the adjacent auxiliary module (AAM) to eliminate the ambiguity and noise of the features. Besides, in order to refine the features effectively to get more accurate object boundaries, we design adjacency decoder (AAM_D) based on adjacency auxiliary module (AAM), which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of AAM. The outputs of AAM_D features with semantic information and spatial detail obtained from each feature are used as salient prediction maps for multi-level feature joint supervising. Experiment results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.