• Title/Abstract/Keywords: Deep features

Search results: 1,071

Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems / Vol. 16, No. 6 / pp. 1250-1260 / 2020
  • This study proposes a novel deep neural network model that detects objects and their relationships in an image and represents them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to detect objects and relationships accurately. In addition, the context features are embedded using graph neural networks so that the dependency between two related objects is reflected in the context feature vector. The effectiveness of the proposed model is demonstrated through comparative experiments on the Visual Genome benchmark dataset.
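As an illustration of the context-embedding idea only (not the authors' code), the sketch below refreshes each detected object's multimodal feature vector with messages from the objects it is related to via a single graph-neural-network step; the feature dimension, adjacency encoding, and single message-passing round are assumptions.

```python
import tensorflow as tf

class RelationContextLayer(tf.keras.layers.Layer):
    """One message-passing step: each object's feature is updated with
    information aggregated from the objects it is related to."""
    def __init__(self, dim):
        super().__init__()
        self.msg = tf.keras.layers.Dense(dim, activation="relu")
        self.upd = tf.keras.layers.Dense(dim, activation="relu")

    def call(self, node_feats, adj):
        # node_feats: (n_objects, dim); adj: (n_objects, n_objects) relation mask
        messages = tf.matmul(adj, self.msg(node_feats))  # sum over related objects
        return self.upd(tf.concat([node_feats, messages], axis=-1))

# Toy usage: five detected objects with 256-d multimodal features (assumed size).
feats = tf.random.normal([5, 256])
adj = tf.constant([[0, 1, 0, 0, 0],
                   [1, 0, 1, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 0, 0, 0, 1],
                   [0, 0, 0, 1, 0]], dtype=tf.float32)
context_feats = RelationContextLayer(256)(feats, adj)
```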

A new framework for Person Re-identification: Integrated level feature pattern (ILEP)

  • Manimaran, V.;Srinivasagan, K.G.;Gokul, S.;Jacob, I.Jeena;Baburenagarajan, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 12 / pp. 4456-4475 / 2021
  • Person re-identification systems are used to find and verify people as they pass different locations covered by multiple cameras. Much research has been done on re-identifying a person using deep-learned or hand-crafted features. Deep learning techniques analyse features across their layers in various forms and output complex feature vectors. This paper proposes a distinctive framework, the Integrated Level Feature Pattern (ILFP), which integrates local and global features. A new deep learning architecture named modified XceptionNet (m-XceptionNet) is also proposed, which extracts global features effectively with lower complexity. The proposed framework outperforms existing works on the Rank-1 metric for the Market1501 (96.15%), CUHK03 (82.29%) and newly created NEC01 (96.66%) datasets, and its mean Average Precision (mAP) reaches 92%, 85% and 98%, respectively, on the same datasets.
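A rough sketch of the local/global integration described above, under stated assumptions: a stock Xception backbone stands in for the paper's m-XceptionNet, the global branch average-pools the whole feature map, and the local branch pools horizontal stripes before concatenation.

```python
import tensorflow as tf

# Xception backbone as a stand-in for m-XceptionNet (untrained weights here).
backbone = tf.keras.applications.Xception(
    weights=None, include_top=False, input_shape=(256, 128, 3))
fmap = backbone.output  # (batch, 8, 4, 2048) feature map at this input size

# Global branch: one vector summarizing the whole person image.
global_feat = tf.keras.layers.GlobalAveragePooling2D()(fmap)

# Local branch: average-pool horizontal stripes of the feature map.
stripes = tf.keras.layers.AveragePooling2D(
    pool_size=(2, fmap.shape[2]), strides=(2, fmap.shape[2]))(fmap)
local_feat = tf.keras.layers.Flatten()(stripes)

# Integrated descriptor used for matching identities across cameras.
descriptor = tf.keras.layers.Concatenate()([global_feat, local_feat])
model = tf.keras.Model(backbone.input, descriptor)
```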

Single Document Extractive Summarization Based on Deep Neural Networks Using Linguistic Analysis Features (언어 분석 자질을 활용한 인공신경망 기반의 단일 문서 추출 요약)

  • 이경호;이공주
    • KIPS Transactions on Software and Data Engineering / Vol. 8, No. 8 / pp. 343-348 / 2019
  • Recent document summarization systems are dominated by end-to-end approaches based on artificial neural networks. Such systems require no human feature engineering and take a data-driven approach. However, previous related studies have shown that linguistic analysis features such as part-of-speech information, named-entity information, and word frequency are useful for selecting important sentences when composing a summary. This study proposes a neural-network-based extractive summarization system for single documents that utilizes these linguistic analysis features. To show the usefulness of the features, a model that uses them is compared with a model that does not. Experimental results show that the model using the features improves the Rouge-2 F1 score by about 0.5 points over the model without them.
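A minimal sketch of a feature-augmented sentence scorer of this kind; the sizes and the particular linguistic feature vector are assumptions, not the paper's configuration.

```python
import tensorflow as tf

VOCAB, EMB, LING_DIM = 20000, 128, 8  # illustrative sizes

tokens = tf.keras.Input(shape=(None,), dtype="int32")  # word ids of one sentence
ling = tf.keras.Input(shape=(LING_DIM,))                # POS/NER/frequency features

x = tf.keras.layers.Embedding(VOCAB, EMB, mask_zero=True)(tokens)
sent = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64))(x)

# Linguistic analysis features are concatenated with the learned encoding.
merged = tf.keras.layers.Concatenate()([sent, ling])
score = tf.keras.layers.Dense(1, activation="sigmoid")(merged)  # in-summary prob.

model = tf.keras.Model([tokens, ling], score)
model.compile(optimizer="adam", loss="binary_crossentropy")
```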

Bark Identification Using a Deep Learning Model (심층 학습 모델을 이용한 수피 인식)

  • 김민기
    • Journal of Korea Multimedia Society / Vol. 22, No. 10 / pp. 1133-1141 / 2019
  • Most previous studies on bark recognition have focused on extracting LBP-like statistical features. Deep learning approaches have not been well studied because of the difficulty of acquiring a large volume of bark images. To overcome this dataset problem, this study utilizes a MobileNet pretrained on the ImageNet dataset and proposes two approaches. One extracts features by pixel-wise convolution and classifies them with an SVM; the other tunes the weights of the MobileNet by flexibly freezing layers. Experimental results on two public bark datasets, BarkTex and Trunk12, show that the proposed methods are effective for bark recognition; in particular, the flexible tuning method outperforms state-of-the-art methods. In addition, the approach can be applied to mobile devices because MobileNet is compact compared with other deep learning models.
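A hedged sketch of the first approach (an ImageNet-pretrained MobileNet used as a fixed feature extractor with an SVM on top); the data below are placeholders standing in for BarkTex/Trunk12 crops.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Pretrained MobileNet without its classification head; global average
# pooling turns each 224x224 image into a 1024-d feature vector.
backbone = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.mobilenet.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Placeholder bark images and labels (two classes for illustration).
train_images = np.random.rand(8, 224, 224, 3) * 255
train_labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

clf = SVC(kernel="linear")
clf.fit(extract_features(train_images), train_labels)
```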

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 2 / pp. 1118-1133 / 2017
  • Computer vision-based human activity recognition (HAR) has attracted great attention in recent years owing to its applications in fields such as smart home healthcare for elderly people. A video-based activity recognition system aims, among other goals, to react to people's behavior so that it can proactively assist them with their tasks. This work proposes a novel approach to depth-video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts involved in an activity are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint's movement toward the next frame are then extracted. Finally, the features are used to train a DBN that is later applied for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.
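As a rough illustration only, the classification stage might look like the following, with stacked RBMs approximating DBN pre-training and a logistic classifier on top; the joint count, feature layout, and layer widths are assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

N_JOINTS = 20
FEAT_DIM = N_JOINTS * 2  # magnitude + direction per joint (assumption)

# Placeholder motion-feature windows and activity labels.
X = np.abs(np.random.randn(200, FEAT_DIM))
y = np.random.randint(0, 5, size=200)

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),                       # RBMs expect [0, 1] inputs
    ("rbm1", BernoulliRBM(n_components=128, n_iter=10)),
    ("rbm2", BernoulliRBM(n_components=64, n_iter=10)),
    ("clf", LogisticRegression(max_iter=1000)),      # recognition head
])
dbn_like.fit(X, y)
```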

Video Captioning with Visual and Semantic Features

  • Lee, Sujin;Kim, Incheol
    • Journal of Information Processing Systems / Vol. 14, No. 6 / pp. 1318-1330 / 2018
  • Video captioning refers to the process of extracting features from a video and generating captions from the extracted features. This paper introduces a deep neural network model and its learning method for effective video captioning. In this study, semantic features that effectively express the video are used in addition to visual features. The visual features are extracted using convolutional neural networks such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction network proposed in this paper. Further, an attention-based caption generation network is proposed for effective generation of video captions from the extracted features. The performance and effectiveness of the proposed model are verified through various experiments on two large-scale video benchmarks, the Microsoft Video Description (MSVD) and the Microsoft Research Video-To-Text (MSR-VTT) datasets.
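The attention over per-frame visual features during caption generation can be sketched as below; this is an additive-attention layer written for illustration, and the shapes and layer sizes are assumptions rather than the paper's settings.

```python
import tensorflow as tf

class FrameAttention(tf.keras.layers.Layer):
    """Additive attention: scores each frame feature against the decoder state."""
    def __init__(self, units):
        super().__init__()
        self.w_feat = tf.keras.layers.Dense(units)
        self.w_state = tf.keras.layers.Dense(units)
        self.v = tf.keras.layers.Dense(1)

    def call(self, frame_feats, dec_state):
        # frame_feats: (batch, n_frames, feat_dim); dec_state: (batch, state_dim)
        scores = self.v(tf.nn.tanh(
            self.w_feat(frame_feats) + self.w_state(dec_state)[:, None, :]))
        weights = tf.nn.softmax(scores, axis=1)            # (batch, n_frames, 1)
        context = tf.reduce_sum(weights * frame_feats, axis=1)
        return context, weights

# Toy usage: 16 frames of 2048-d ResNet-style features, 512-d decoder state.
feats = tf.random.normal([2, 16, 2048])
state = tf.random.normal([2, 512])
context, attn = FrameAttention(256)(feats, state)
print(context.shape)  # (2, 2048)
```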

Unsupervised Learning-Based Pipe Leak Detection using Deep Auto-Encoder

  • Yeo, Doyeob;Bae, Ji-Hoon;Lee, Jae-Cheol
    • Journal of the Korea Society of Computer and Information / Vol. 24, No. 9 / pp. 21-27 / 2019
  • In this paper, we propose a deep auto-encoder-based pipe leak detection (PLD) technique for time-series acoustic data collected by microphone sensor nodes. The key idea of the proposed technique is to learn representative features of the leak-free state using leak-free time-series acoustic data and a deep auto-encoder. The proposed technique can then be used to create a PLD model that detects leaks in the pipeline in an unsupervised manner; only unlabeled leak-free data are used while training the deep auto-encoder. In addition, compared with a previous supervised PLD method that uses image features, this technique does not require complex preprocessing of the time-series acoustic data owing to its unsupervised feature extraction scheme. The experimental results show that the proposed PLD method using the deep auto-encoder provides reliable PLD accuracy even though its feature extraction is unsupervised.
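A minimal sketch of the unsupervised idea, assuming fixed-length acoustic windows and a simple dense auto-encoder; the threshold rule is an illustrative choice, not the paper's.

```python
import numpy as np
import tensorflow as tf

WINDOW = 256  # samples per acoustic window (assumption)

def build_autoencoder():
    inp = tf.keras.Input(shape=(WINDOW,))
    h = tf.keras.layers.Dense(64, activation="relu")(inp)
    z = tf.keras.layers.Dense(16, activation="relu")(h)   # compressed leak-free code
    h = tf.keras.layers.Dense(64, activation="relu")(z)
    out = tf.keras.layers.Dense(WINDOW)(h)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Leak-free training windows (placeholder data, no labels needed).
leak_free = np.random.randn(512, WINDOW).astype("float32")
ae = build_autoencoder()
ae.fit(leak_free, leak_free, epochs=5, batch_size=32, verbose=0)

# Threshold from the training-set reconstruction error distribution.
recon = ae.predict(leak_free, verbose=0)
errors = np.mean((recon - leak_free) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()

def is_leak(window):
    """Flag a window whose reconstruction error exceeds the leak-free range."""
    err = np.mean((ae.predict(window[None], verbose=0)[0] - window) ** 2)
    return err > threshold
```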

DroidVecDeep: Android Malware Detection Based on Word2Vec and Deep Belief Network

  • Chen, Tieming;Mao, Qingyu;Lv, Mingqi;Cheng, Hongbing;Li, Yinglong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 4 / pp. 2180-2197 / 2019
  • With the proliferation of malicious Android applications, malware has become more capable of hiding or disguising its malicious intent through code obfuscation, which has significantly weakened the effectiveness of conventional defense mechanisms. Therefore, to effectively detect unknown malicious applications on the Android platform, we propose DroidVecDeep, an Android malware detection method using deep learning techniques. First, we extract various features and rank them using Mean Decrease Impurity. Second, we transform the features into compact vectors based on word2vec. Finally, we train a classifier based on a deep learning model. A comprehensive experimental study on a real sample collection was performed to compare various malware detection approaches. Experimental results demonstrate that the proposed method outperforms other Android malware detection techniques.
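The three-step pipeline can be sketched as follows; the feature tokens, sizes, and final classifier are illustrative (the paper trains a deep model, whereas only the vectorization stages are shown here).

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

# Each app is a list of extracted feature tokens (API calls, permissions, ...).
apps = [
    ["SEND_SMS", "getDeviceId", "READ_CONTACTS"],
    ["INTERNET", "getSystemService", "ACCESS_FINE_LOCATION"],
]
labels = [1, 0]  # 1 = malware, 0 = benign (toy labels)

# Step 1: Mean Decrease Impurity ranking on a bag-of-tokens encoding.
vocab = sorted({t for app in apps for t in app})
X = np.array([[int(t in app) for t in vocab] for app in apps])
rf = RandomForestClassifier(n_estimators=50).fit(X, labels)
ranked = [t for _, t in sorted(zip(rf.feature_importances_, vocab), reverse=True)]
kept = set(ranked[: len(ranked) // 2 + 1])  # keep the top-ranked tokens

# Step 2: word2vec embedding of the kept feature tokens.
w2v = Word2Vec([[t for t in app if t in kept] for app in apps],
               vector_size=32, window=5, min_count=1)

# Step 3: each app becomes the mean of its token vectors, ready to feed a
# deep classifier (a DBN in the paper; any dense network would slot in).
def app_vector(app):
    vecs = [w2v.wv[t] for t in app if t in kept and t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(32)
```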

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • 이주환;김진영;정동기;김형국
    • The Journal of the Acoustical Society of Korea / Vol. 41, No. 2 / pp. 115-121 / 2022
  • This paper proposes a music classification system based on user emotion that uses the electroencephalogram (EEG) features observed while listening to music. The proposed system learns the relationship between emotion-specific EEG features extracted from EEG signals and auditory features extracted from music signals through a deep regression neural network. In actual use, based on this regression model, the proposed system automatically generates EEG features mapped to the auditory characteristics of the input music and classifies the music automatically by applying these features to an attention-based deep neural network. Experimental results present the music classification accuracy of the proposed automatic music classification framework.
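A small sketch of the regression stage described above, assuming illustrative audio- and EEG-feature dimensionalities; the paired placeholder data stand in for features recorded while listening to the same excerpts.

```python
import numpy as np
import tensorflow as tf

AUDIO_DIM, EEG_DIM = 40, 16  # e.g. MFCC-like vs. band-power-like features (assumed)

# Regression network mapping auditory features to emotion-related EEG features.
reg = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(AUDIO_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(EEG_DIM),          # predicted EEG feature vector
])
reg.compile(optimizer="adam", loss="mse")

# Placeholder paired training data (audio features, EEG features).
audio = np.random.randn(256, AUDIO_DIM).astype("float32")
eeg = np.random.randn(256, EEG_DIM).astype("float32")
reg.fit(audio, eeg, epochs=3, batch_size=32, verbose=0)

# At classification time the predicted EEG-like features would be fed to an
# attention-based classifier, as the abstract describes.
pred_eeg = reg.predict(audio[:4], verbose=0)
```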

Android malicious code Classification using Deep Belief Network

  • Shiqi, Luo;Shengwei, Tian;Long, Yu;Jiong, Yu;Hua, Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 1 / pp. 454-475 / 2018
  • This paper presents a novel Android malware classification model designed to classify and categorize Android malicious code from the Drebin dataset. The number of malicious mobile applications targeting Android-based smartphones has increased rapidly. In this paper, a Restricted Boltzmann Machine and a Deep Belief Network are used to classify malware into families of Android applications. A texture-fingerprint-based approach is proposed to extract features from malware content: each sample has a unique "image texture" in its feature spatial relations, obtained by mapping malicious or benign code to an uncompressed gray-scale image. In addition, by studying and extracting the implicit features of API calls from a large number of training samples, the original dynamic activity feature sets are obtained. To improve the accuracy of feature selection and classification, the implicit features of the texture image and the API calls in the malicious code are combined to train the Restricted Boltzmann Machine with back propagation. In an evaluation with different malware and benign samples, the experimental results suggest that this method, which uses a Deep Belief Network to classify Android malware by texture images and API calls, detects more than 94% of the malware with few false alarms, clearly outperforming shallow machine learning algorithms.
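The gray-scale "image texture" representation can be illustrated as follows; the fixed image width and the use of raw code bytes are assumptions made for the sketch, not the paper's exact mapping.

```python
import numpy as np

def bytes_to_texture(raw: bytes, width: int = 256) -> np.ndarray:
    """Map raw code bytes to a 2-D gray-scale image (pixel values 0-255)."""
    data = np.frombuffer(raw, dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(data)] = data          # zero-pad the last row
    return padded.reshape(height, width)

# Toy usage with random bytes standing in for a classes.dex payload.
texture = bytes_to_texture(np.random.bytes(4096))
print(texture.shape)  # (16, 256)
```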