• Title/Abstract/Keyword: Bag-of-Feature

Search results: 58 items (processing time: 0.022 sec)

업사이클 브랜드 패션가방제품의 표현 특성 연구 (Study on Expression Characteristic of Fashion Bag Products of Up-cycle Brand)

  • 박해인;곽태기
    • 한국의상디자인학회지 / Vol.19 No.2 / pp.1-14 / 2017
  • Fashion consumption in modern industrial society changes rapidly, and the lifespan of fashion products keeps shortening, generating a variety of industrial waste. With the shift in attitudes driven by ethical consumption consciousness and environmental awareness, up-cycle fashion products have attracted attention and are in the limelight as a new way to realize sustainable fashion in domestic and foreign markets. The purpose of this study is to derive the expression characteristics of up-cycle brands' fashion bag products by investigating and analyzing cases of each type, and thereby to contribute to diversifying product lines suited to those characteristics, to systematic strategies for up-cycle fashion products, and to the activation of the up-cycle fashion market. As for research methods, theoretical research centered on relevant domestic literature and preceding papers was conducted in parallel with an analysis of actual cases. The expression characteristics of up-cycle fashion bag products were derived from preceding research, the websites of the selected products and up-cycle companies, relevant books, and related articles. The results of this study are as follows. First, owing to the characteristic of historicity, designs can be created around the designer's story, the story of the materials, and the consumer's story. Second, owing to the characteristic of sustainability, the manufacturing process and materials, the extension of product life, and the conversion of the original material's function can all be applied sustainably. Third, owing to the trait of scarcity, all products may be produced by hand and can carry the distinctiveness of the original materials. Fourth, owing to the eco-friendly trait, aesthetic needs can be met in line with continuous consumer demand even while conserving the original materials.


A Salient Based Bag of Visual Word Model (SBBoVW): Improvements toward Difficult Object Recognition and Object Location in Image Retrieval

  • Mansourian, Leila;Abdullah, Muhamad Taufik;Abdullah, Lilli Nurliyana;Azman, Azreen;Mustaffa, Mas Rina
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.10 No.2 / pp.769-786 / 2016
  • Object recognition and object location have always drawn much interest, and various computational models have recently been designed. One of the big issues in this domain is the lack of an appropriate model for extracting the important part of the picture and estimating the object's place, which has caused low accuracy. To solve this problem, a new Salient Based Bag of Visual Word (SBBoVW) model for object recognition and object location estimation is presented. The contributions of the present study are two-fold. The first is a new approach, the Salient Based Bag of Visual Word (SBBoVW) model, for recognizing difficult objects on which previous methods achieved low accuracy. This method extracts SIFT features from both the original and the salient parts of pictures and fuses them to generate better codebooks using the bag-of-visual-words method. The second contribution is a new algorithm for automatically finding the object's place based on the saliency map. Performance evaluation on several data sets proves that the new approach outperforms other state-of-the-art methods.
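
The bag-of-visual-words step that SBBoVW builds on can be illustrated with a minimal sketch: local descriptors (toy 2-D stand-ins for the paper's SIFT vectors) are quantized against a codebook and counted into a normalized histogram. The descriptors and three-word codebook below are assumptions for illustration only, not the authors' implementation.

```python
# Minimal bag-of-visual-words sketch: quantize local descriptors
# against a codebook and build an L1-normalized histogram.

def nearest_codeword(desc, codebook):
    # Index of the codebook entry closest to the descriptor (squared L2).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(desc, codebook[i]))

def bovw_histogram(descriptors, codebook):
    # Count codeword assignments, then normalize into a histogram.
    counts = [0] * len(codebook)
    for d in descriptors:
        counts[nearest_codeword(d, codebook)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # three "visual words"
descriptors = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.0, 0.2)]
print(bovw_histogram(descriptors, codebook))       # [0.5, 0.5, 0.0]
```

In real systems the codebook itself is learned, typically by k-means clustering over descriptors from a training set.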

Multiscale Spatial Position Coding under Locality Constraint for Action Recognition

  • Yang, Jiang-feng;Ma, Zheng;Xie, Mei
    • Journal of Electrical Engineering and Technology / Vol.10 No.4 / pp.1851-1863 / 2015
  • In this paper, to handle the problem that the traditional bag-of-features model ignores the spatial relationships of local features in human action recognition, we propose a Multiscale Spatial Position Coding under Locality Constraint method. Specifically, to describe these spatial relationships, we propose a mixed feature combining a motion feature with a multi-spatial-scale configuration. To exploit temporal information between features, sub spatial-temporal volumes (sub-STVs) are built, and the pooled features of the sub-STVs are obtained via max-pooling. In the classification stage, Locality-Constrained Group Sparse Representation is adopted to exploit the intrinsic group information of the sub-STV features. Experimental results on the KTH, Weizmann, and UCF Sports datasets show that our action recognition system outperforms recently published recognition systems based on classical local spatio-temporal features.
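
The max-pooling step over sub-STV features can be sketched as an element-wise maximum across the coded feature vectors belonging to one sub-volume; the toy 3-D codes below are illustrative, not the paper's actual descriptors.

```python
# Element-wise max-pooling over the coded feature vectors of one
# sub spatial-temporal volume (toy 3-D codes for illustration).

def max_pool(feature_vectors):
    # Keep, per dimension, the strongest response among all vectors.
    return [max(col) for col in zip(*feature_vectors)]

codes = [[0.1, 0.7, 0.0],
         [0.4, 0.2, 0.9],
         [0.3, 0.5, 0.1]]
print(max_pool(codes))  # [0.4, 0.7, 0.9]
```

Max-pooling keeps only the strongest response per dimension, which makes the pooled vector invariant to how many local features fall inside the volume.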

Chaotic Features for Dynamic Textures Recognition with Group Sparsity Representation

  • Luo, Xinbin;Fu, Shan;Wang, Yong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.9 No.11 / pp.4556-4572 / 2015
  • Dynamic texture (DT) recognition is a challenging problem in numerous applications. In this study, we propose a new algorithm for DT recognition based on a group sparsity structure in conjunction with a chaotic feature vector, which is proposed to capture the self-similarity of the pixel-intensity series. A bag-of-words model is used to represent each video as a histogram of the chaotic feature vectors. The recognition problem is then cast as a group sparsity model, which can be efficiently optimized through the alternating direction method of multipliers (ADMM). Experimental results show that the proposed method exhibits the best performance among several well-known DT modeling techniques.
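
The abstract does not spell out the chaotic feature vector, but one standard self-similarity statistic for an intensity series is the correlation sum over delay vectors (in the Grassberger-Procaccia style). The sketch below is offered only as an illustration of that general idea, not as the paper's feature.

```python
# Correlation sum of a 1-D intensity series: the fraction of pairs of
# m-dimensional delay vectors that lie closer than r (Chebyshev distance).

def correlation_sum(series, r, m=2):
    vecs = [series[i:i + m] for i in range(len(series) - m + 1)]
    n = len(vecs)
    close = sum(
        1
        for i in range(n) for j in range(i + 1, n)
        if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < r
    )
    return 2 * close / (n * (n - 1))

# A strictly alternating series repeats itself, so some pairs coincide.
print(correlation_sum([0, 1, 0, 1, 0], r=0.5))  # 1/3 of all pairs match
```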

Recognizing Actions from Different Views by Topic Transfer

  • Liu, Jia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.11 No.4 / pp.2093-2108 / 2017
  • In this paper, we describe a novel method for recognizing human actions from different views via view knowledge transfer. Our approach is characterized by two aspects: 1) We propose an unsupervised topic transfer model (TTM) to model two view-dependent vocabularies, whereby the original bag of visual words (BoVW) representation can be transferred into a bag of topics (BoT) representation. The higher-level BoT features, which can be shared across views, connect action models for different views. 2) Our features make it possible to train a discriminative model of action under one view and categorize actions in another view. We tested our approach on the IXMAS data set, and the results are promising given such a simple approach. In addition, we also demonstrate a supervised topic transfer model (STTM), which combines transfer feature learning and discriminative classifier learning in one framework.
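
The core move from a bag of visual words to a bag of topics can be sketched as a probabilistic projection: given a p(topic|word) table (here a hypothetical hand-set matrix, not one learned by the TTM), a word histogram maps to a topic histogram.

```python
# Project a bag-of-words histogram into topic space:
# p(topic) = sum over words of p(topic | word) * p(word).

def bow_to_bot(word_hist, topic_given_word):
    n_topics = len(topic_given_word[0])
    bot = [0.0] * n_topics
    for w, pw in enumerate(word_hist):
        for t in range(n_topics):
            bot[t] += topic_given_word[w][t] * pw
    return bot

# Hypothetical p(topic|word): word 0 belongs to topic 0, word 1 to topic 1.
topic_given_word = [[1.0, 0.0],
                    [0.0, 1.0]]
print(bow_to_bot([0.25, 0.75], topic_given_word))  # [0.25, 0.75]
```

Because the topic space is shared across views, two view-dependent vocabularies can be projected into the same BoT representation and compared there.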

VILODE : 키 프레임 영상과 시각 단어들을 이용한 실시간 시각 루프 결합 탐지기 (VILODE : A Real-Time Visual Loop Closure Detector Using Key Frames and Bag of Words)

  • 김혜숙;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol.4 No.5 / pp.225-230 / 2015
  • In this paper, we propose VILODE, an effective real-time visual loop closure detector based on key frame images and SURF-feature visual words. To decide whether the robot has revisited one of the places it passed before, a visual loop closure detector must compare each new input image against all past images collected at previously visited places. Since the number of images to compare keeps growing as new places are visited, loop closure detectors generally have difficulty satisfying the real-time constraint and high detection accuracy at the same time. To overcome this problem, our system adopts an effective key-frame selection method that picks out only meaningful input images and compares only those, greatly reducing the number of image comparisons required for loop detection. To further improve the accuracy and efficiency of loop closure detection, the system represents key frame images as visual words and indexes the key frames with the DBoW database system. Experiments with benchmark data from TUM university confirmed the high performance of the proposed visual loop closure detector.
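
The key-frame comparison at the heart of such a detector can be sketched as scoring a query image's BoW vector against only the stored key frames and reporting a loop when the best cosine similarity clears a threshold. The vectors and threshold below are illustrative toy values, not DBoW's actual indexing or scoring.

```python
# Loop-closure check: score a query BoW vector against stored key-frame
# vectors only, and accept the best match above a similarity threshold.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def detect_loop(query, keyframes, threshold=0.8):
    # Returns the index of the best-matching key frame, or None.
    best_i, best_s = None, threshold
    for i, kf in enumerate(keyframes):
        s = cosine(query, kf)
        if s > best_s:
            best_i, best_s = i, s
    return best_i

keyframes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(detect_loop([0.9, 0.1, 0.0], keyframes))  # 0  (revisit of key frame 0)
```

Comparing against key frames only, instead of every past image, is what keeps the number of comparisons bounded as the trajectory grows.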

특징점과 특징선을 활용한 단안 카메라 SLAM에서의 지도 병합 방법 (Map Alignment Method in Monocular SLAM based on Point-Line Feature)

  • 백무현;이진규;문지원;황성수
    • 한국멀티미디어학회논문지 / Vol.23 No.2 / pp.127-134 / 2020
  • In this paper, we propose a map alignment method for maps generated by point-line monocular SLAM. In the proposed method, the information of the feature lines as well as the feature points extracted from multiple maps is fused into a single map. To this end, the proposed method first searches for similar areas between maps via Bag-of-Words-based image matching. It then calculates the similarity transformation between the maps in the corresponding areas to align them. Finally, the overlapping information of the multiple maps is merged into a single map by removing duplicate information from the similar areas. Experimental results show that maps created by different users are combined into a single map, and that the accuracy of the fused map is similar to that of a map generated by a single user. We expect the proposed method to be useful for fast imagery map generation.
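
The similarity-transformation step can be illustrated in a reduced form: with rotation omitted for brevity, the least-squares scale and translation aligning matched map points has a simple closed form. This is a simplification of the full similarity estimate (scale, rotation, translation) such a method would actually use.

```python
# Least-squares scale + translation aligning matched 2-D map points
# (rotation omitted for brevity; a full similarity estimate adds it).

def align_scale_translation(src, dst):
    n = len(src)
    pm = [sum(p[k] for p in src) / n for k in range(2)]   # src centroid
    qm = [sum(q[k] for q in dst) / n for k in range(2)]   # dst centroid
    num = sum((p[k] - pm[k]) * (q[k] - qm[k])
              for p, q in zip(src, dst) for k in range(2))
    den = sum((p[k] - pm[k]) ** 2 for p in src for k in range(2))
    s = num / den                                         # optimal scale
    t = [qm[k] - s * pm[k] for k in range(2)]             # optimal shift
    return s, t

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, 1.0), (3.0, 1.0), (1.0, 3.0)]   # src scaled by 2, shifted by (1, 1)
s, t = align_scale_translation(src, dst)
print(round(s, 6), [round(x, 6) for x in t])  # 2.0 [1.0, 1.0]
```

Scale recovery matters specifically for monocular SLAM, where each map is only defined up to an unknown scale.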

열 영상에서의 걸음걸이와 얼굴 특징을 이용한 개인 인식 (Person Recognition Using Gait and Face Features on Thermal Images)

  • 김사문;이대종;이호현;전명근
    • 전기학회논문지P / Vol.65 No.2 / pp.130-135 / 2016
  • Gait recognition has the advantage of being a non-contact recognition method, but it suffers from a low recognition rate when the pedestrian silhouette is altered by a bag or coat. In this paper, we propose a new method combining gait energy image features with thermal face image features. First, we extract a face image with an optimal focusing value using the human body ratio and the Tenengrad algorithm. Second, we extract features from the gait energy image and the thermal face image using linear discriminant analysis. Third, we calculate the Euclidean distance between training and test data and optimize the feature weights using a genetic algorithm. Finally, classification is performed with the nearest-neighbor algorithm. The proposed method shows better results than the conventional method.
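
The final matching stage can be sketched as nearest-neighbor classification under a feature-weighted Euclidean distance, where the weights stand in for those the paper optimizes with a genetic algorithm; the data, labels, and weights below are toy values.

```python
# Nearest-neighbor classification under a feature-weighted Euclidean
# distance; the weights stand in for the genetic-algorithm output.

def weighted_nn(query, train, labels, weights):
    def wdist(a, b):
        return sum(w * (x - y) ** 2
                   for w, x, y in zip(weights, a, b)) ** 0.5
    best = min(range(len(train)), key=lambda i: wdist(query, train[i]))
    return labels[best]

train = [[0.0, 0.0], [1.0, 1.0]]
labels = ["person_A", "person_B"]
print(weighted_nn([0.2, 0.1], train, labels, weights=[1.0, 1.0]))  # person_A
```

Raising a feature's weight makes disagreement in that feature costlier, which is how the optimized weights let the more reliable modality (gait or face) dominate the decision.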

Word2vec과 앙상블 분류기를 사용한 효율적 한국어 감성 분류 방안 (Effective Korean sentiment classification method using word2vec and ensemble classifier)

  • 박성수;이건창
    • 디지털콘텐츠학회 논문지 / Vol.19 No.1 / pp.133-140 / 2018
  • Accurate sentiment classification is an important research topic in sentiment analysis. This study presents an effective method for the sentiment classification of Korean reviews using word2vec and ensemble methods, both of which have been studied extensively in recent years. For 200,000 Korean movie-review texts, we generated part-of-speech-based BOW features and word2vec features, as well as an integrated feature set combining the two representations. For sentiment classification we used the single classifiers Logistic Regression, Decision Tree, Naive Bayes, and Support Vector Machine, and the ensemble classifiers AdaBoost, Bagging, Gradient Boosting, and Random Forest. The integrated representation consisting of BOW features including adjectives and adverbs together with word2vec features showed the highest sentiment classification accuracy. Empirically, the single classifier SVM achieved the best performance, while the ensemble classifiers performed similarly to or slightly below the single classifiers.
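
The integrated feature idea can be sketched as concatenating BOW counts over a fixed vocabulary with the mean of the tokens' word vectors. The two-word vocabulary and hand-set vectors below are toy stand-ins for the paper's part-of-speech-filtered BOW features and trained word2vec model.

```python
# Concatenate BOW counts over a fixed vocabulary with the mean of the
# tokens' word vectors (toy vocabulary and hand-set vectors).

def combined_features(tokens, vocab, word_vecs, dim):
    bow = [tokens.count(w) for w in vocab]          # BOW part
    known = [word_vecs[t] for t in tokens if t in word_vecs]
    if known:                                       # mean word-vector part
        avg = [sum(v[k] for v in known) / len(known) for k in range(dim)]
    else:
        avg = [0.0] * dim
    return bow + avg

word_vecs = {"good": [1.0, 0.0], "bad": [-1.0, 0.0]}
print(combined_features(["good", "good", "movie"],
                        vocab=["good", "bad"], word_vecs=word_vecs, dim=2))
# [2, 0, 1.0, 0.0]
```

The concatenated vector can then be fed to any of the single or ensemble classifiers listed in the abstract.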

Image-based Soft Drink Type Classification and Dietary Assessment System Using Deep Convolutional Neural Network with Transfer Learning

  • Rubaiya Hafiz;Mohammad Reduanul Haque;Aniruddha Rakshit;Amina khatun;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / Vol.24 No.2 / pp.158-168 / 2024
  • There is hardly any person in modern times who has not taken soft drinks instead of drinking water. With the rate of soft drink consumption surprisingly high, researchers around the world have repeatedly cautioned that these drinks lead to weight gain, raise the risk of non-communicable diseases, and so on. Therefore, in this work an image-based tool is developed to monitor the nutritional information of soft drinks using a deep convolutional neural network with transfer learning. First, visual saliency, mean-shift segmentation, thresholding, and noise reduction, collectively known as 'pre-processing', are adopted to locate the drink region. After removing the background and segmenting out only the desired area of the image, a Discrete Wavelet Transform (DWT)-based resolution enhancement technique is applied to improve image quality. A transfer learning model is then employed to classify the drinks. Finally, the nutrition value of each drink is estimated using Bag-of-Feature (BoF)-based classification and a Euclidean distance-based ratio calculation. To this end, a dataset was built of the ten most consumed soft drinks in Bangladesh, with images collected from the ImageNet dataset as well as the internet. The proposed method detects and recognizes the different types of drinks with an accuracy of 98.51%.