• Title/Summary/Keyword: Feature detection

Effects of Preprocessing and Feature Extraction on CNN-based Fire Detection Performance (전처리와 특징 추출이 CNN기반 화재 탐지 성능에 미치는 효과)

  • Lee, JeongHwan;Kim, Byeong Man;Shin, Yoon Sik
    • Journal of Korea Society of Industrial Information Systems / v.23 no.4 / pp.41-53 / 2018
  • Recently, advances in machine learning have led to the application of deep learning to existing image-based application systems, and several studies have applied CNNs (Convolutional Neural Networks) to fire detection. To verify how existing preprocessing and feature extraction methods affect fire detection when combined with a CNN, this paper evaluates recognition performance and training time while varying the VGG19 structure by gradually increasing the number of convolution layers. In general, accuracy is better when the image is not preprocessed, but the preprocessing and feature extraction methods offer clear benefits in terms of training speed.
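
To make the comparison concrete, here is a minimal PyTorch sketch, not the authors' code, that trains torchvision's VGG19 once on raw images and once on preprocessed images and reports the per-epoch training time; the dataset path `data/fire`, the grayscale preprocessing step, and all hyperparameters are illustrative assumptions.

```python
# Sketch: compare raw vs. preprocessed input to a VGG19 fire classifier.
# Assumes an ImageFolder-style dataset at data/fire with two classes
# (fire / non-fire); path, preprocessing, and hyperparameters are illustrative only.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def make_loader(preprocess: bool) -> DataLoader:
    steps = [transforms.Resize((224, 224))]
    if preprocess:
        # Example preprocessing step; the paper's exact methods may differ.
        steps.append(transforms.Grayscale(num_output_channels=3))
    steps.append(transforms.ToTensor())
    dataset = datasets.ImageFolder("data/fire", transform=transforms.Compose(steps))
    return DataLoader(dataset, batch_size=32, shuffle=True)

def train_and_time(preprocess: bool) -> None:
    model = models.vgg19(weights=None, num_classes=2)   # 2 classes: fire / non-fire
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    loader = make_loader(preprocess)
    start = time.time()
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"preprocess={preprocess}: one epoch took {time.time() - start:.1f}s")

if __name__ == "__main__":
    train_and_time(preprocess=False)
    train_and_time(preprocess=True)
```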

Automatic Speechreading Feature Detection Using Color Information (색상 정보를 이용한 자동 독화 특징 추출)

  • Lee, Kyong-Ho;Yang, Ryong;Rhee, Sang-Burm
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.107-115 / 2008
  • Facial feature detection plays an important role in applications such as automatic speechreading, human-computer interfaces, face recognition, and face image database management. We propose an automatic speechreading feature detection algorithm for color images that uses color information. Facial feature pixels take on different values depending on the luminance and chrominance components in various color spaces. Facial features are detected by amplifying or reducing these values and comparing the resulting images. The eye and nose positions, the inner boundary of the lips, and the outer line of the teeth are detected, with very encouraging results.
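
As a rough illustration of chrominance-based feature detection of this kind, the sketch below uses OpenCV to amplify the red-chrominance channel and threshold it into a lip-candidate mask; the YCrCb color space, the threshold value, and the file name are assumptions rather than the paper's actual settings.

```python
# Sketch: emphasize chrominance to locate lip-like regions in a color face image.
# The color space (YCrCb), scaling, and threshold are illustrative choices,
# not the paper's exact values; "face.jpg" is a placeholder file name.
import cv2

image = cv2.imread("face.jpg")                     # BGR face image
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Amplify the red-chrominance response, where lips stand out against skin.
lip_map = cv2.subtract(cr, cb)                     # suppress skin, keep reddish pixels
lip_map = cv2.normalize(lip_map, None, 0, 255, cv2.NORM_MINMAX)

# Compare against a threshold to get a binary lip-candidate mask.
_, lip_mask = cv2.threshold(lip_map, 170, 255, cv2.THRESH_BINARY)
cv2.imwrite("lip_candidates.png", lip_mask)
```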

Improvement of Active Shape Model for Detecting Face Features in iOS Platform (iOS 플랫폼에서 Active Shape Model 개선을 통한 얼굴 특징 검출)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.15 no.2 / pp.61-65 / 2016
  • Facial feature detection is a fundamental function in computer vision fields such as security, biometrics, 3D modeling, and face recognition. Many algorithms exist for this task; the active shape model (ASM) is one of the most popular local texture models. This paper addresses issues related to face detection and implements an efficient algorithm for extracting facial feature points on the iOS platform. We extend the original ASM algorithm with four modifications to improve its performance. First, to detect a face and initialize the shape model, we apply the face detection API provided by the iOS CoreImage framework. Second, we construct a weighted local structure model for landmarks to utilize the edge points of the face contour. Third, we build a modified model definition and fit more landmarks than the classical ASM. Last, we extend the profile model to two dimensions for detecting faces within input images. The proposed algorithm is evaluated on an experimental test set containing over 500 face images and is found to successfully extract facial feature points, clearly outperforming the original ASM.
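
The first modification, initializing the shape model from a platform face detector, can be sketched outside iOS as well. The snippet below substitutes OpenCV's Haar cascade for the CoreImage face detection API and uses a made-up mean shape, only to show how a detected face box can scale and place the initial landmarks.

```python
# Sketch: initialize an ASM mean shape from a face detector's bounding box.
# OpenCV's Haar cascade stands in for the iOS CoreImage detector used in the paper;
# mean_shape here is a made-up set of normalized landmark coordinates.
import cv2
import numpy as np

# Hypothetical mean shape: landmarks in a unit square (x, y in [0, 1]).
mean_shape = np.array([[0.3, 0.4], [0.7, 0.4], [0.5, 0.6], [0.5, 0.8]])

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Scale and translate the normalized mean shape into the detected face box.
    init_landmarks = mean_shape * np.array([w, h]) + np.array([x, y])
    print("initial landmarks:\n", init_landmarks)
```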

Efficient Tire Wear and Defect Detection Algorithm Based on Deep Learning (심층학습 기법을 활용한 효과적인 타이어 마모도 분류 및 손상 부위 검출 알고리즘)

  • Park, Hye-Jin;Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1026-1034 / 2021
  • Tire wear and defects are important factors for safe driving. They are generally inspected by specialized experts or with very expensive equipment such as stereo depth cameras and depth gauges. In this paper, we propose a tire safety vision inspector based on a deep neural network (DNN). The status of tire wear is categorized into three classes, 'safety', 'warning', and 'danger', based on the depth of the tire tread. We propose an attention mechanism for emphasizing the features of the tread area; the attention-based features are concatenated with the output feature maps of the last convolution layer of ResNet-101 to extract more robust features. In experiments, the proposed tire wear classification model improves accuracy by 1.8% over the existing ResNet-101 model. For tire defect detection, the developed model achieves up to 91% accuracy using Mask R-CNN. These results show that the suggested models are useful for checking the safety condition of working tires in real environments.
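
A rough PyTorch sketch of the described idea, attention-weighted features concatenated with the last convolutional feature maps of ResNet-101 before a three-class head, is given below; the attention design, pooling, and classifier are my assumptions, not the authors' exact architecture.

```python
# Sketch: attention-weighted features concatenated with ResNet-101's last conv
# feature maps, feeding a 3-class head (safety / warning / danger).
# The attention module and head are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn
from torchvision import models

class TireWearClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        backbone = models.resnet101(weights=None)
        # Keep everything up to (and including) the last convolutional stage.
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 2048, H, W)
        self.attention = nn.Sequential(
            nn.Conv2d(2048, 1, kernel_size=1),
            nn.Sigmoid(),                      # spatial attention map in [0, 1]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048 * 2, num_classes)  # concatenated features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.features(x)                # (B, 2048, H, W)
        attn = self.attention(feat)            # (B, 1, H, W)
        attended = feat * attn                 # emphasize tread-like regions
        fused = torch.cat([feat, attended], dim=1)   # (B, 4096, H, W)
        pooled = self.pool(fused).flatten(1)   # (B, 4096)
        return self.classifier(pooled)

model = TireWearClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 3])
```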

Smoke Detection Method Using Local Binary Pattern Variance in RGB Contrast Image (RGB Contrast 영상에서의 Local Binary Pattern Variance를 이용한 연기검출 방법)

  • Kim, Jung Han;Bae, Sung-Ho
    • Journal of Korea Multimedia Society / v.18 no.10 / pp.1197-1204 / 2015
  • Smoke detection plays an important role in the early detection of fire. In this paper, we propose a method that generates LBPV (Local Binary Pattern Variance) feature vectors from RGB contrast images and applies them to smoke detection with an SVM (Support Vector Machine). The proposed method rearranges the block mean values of each R, G, B channel and their intensities, and generates an RGB contrast image that indicates the contrast of each channel, exploiting the achromatic color of smoke. Uniform LBPV, rotation-invariant LBPV, and rotation-invariant uniform LBPV are then computed from the RGB contrast image to generate feature vectors, which the SVM uses to distinguish smoke from non-smoke areas. Experimental results show that the true positive detection rate is similar and the false positive detection rate is improved, even though the proposed method halves the number of feature vectors compared with the existing LBP and LBPV method.
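
The sketch below shows one way an LBPV feature vector (an LBP histogram weighted by local variance) can be built and fed to an SVM, using scikit-image and scikit-learn; the RGB-contrast preprocessing and block handling from the paper are not reproduced, and the training data is assumed to exist elsewhere.

```python
# Sketch: LBPV (variance-weighted LBP histogram) features for an SVM smoke classifier.
# Uses scikit-image and scikit-learn; the paper's RGB-contrast preprocessing is not
# reproduced, and the grayscale patches / labels are assumed to be provided.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1   # LBP neighbors and radius

def lbpv_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram of uniform LBP codes, each pixel weighted by its local variance."""
    codes = local_binary_pattern(gray, P, R, method="uniform")   # codes in [0, P+1]
    var = np.nan_to_num(local_binary_pattern(gray, P, R, method="var"))
    n_bins = P + 2
    hist = np.zeros(n_bins)
    for code in range(n_bins):
        hist[code] = var[codes == code].sum()                    # variance-weighted count
    total = hist.sum()
    return hist / total if total > 0 else hist

def train_smoke_svm(patches, labels) -> SVC:
    """patches: list of grayscale arrays; labels: 1 = smoke, 0 = non-smoke."""
    features = np.array([lbpv_histogram(p) for p in patches])
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf
```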

A Study for Image Segmentation Using Java (Java를 이용한 영상분할에 관한 연구)

  • 신민화;최길환;배상현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.11a / pp.804-807 / 2002
  • The edges of an image carry much information about the input image. Edge detection has many applications and is used for various special effects. Edge detection is a field of image analysis, and pixel-based image segmentation is one way of deciding on the structure of an image. In this paper, image segmentation is performed through several edge detection methods. First, the features of each image are analyzed and extracted so that an appropriate edge detection method can be selected for it. Edge detection is then realized efficiently by implementing the image segmentation in Java, taking the characteristics of the language into account.
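
As a loose illustration (in Python rather than the Java used by the paper), the sketch below selects between two edge detectors based on a simple contrast statistic before using the edge map for segmentation; the selection rule, thresholds, and file names are invented for illustration.

```python
# Sketch: pick an edge detector per image based on a simple contrast statistic,
# then save the edge map for later segmentation. The selection rule is illustrative
# only; the paper implements its pipeline in Java rather than Python.
import cv2
import numpy as np

def detect_edges(gray: np.ndarray) -> np.ndarray:
    # Low-contrast images: Canny with lower thresholds; otherwise Sobel magnitude.
    if gray.std() < 40:
        return cv2.Canny(gray, 30, 90)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return (magnitude > magnitude.mean() * 2).astype(np.uint8) * 255

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
edges = detect_edges(gray)
cv2.imwrite("edges.png", edges)
```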

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3729-3749 / 2021
  • At present, deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, making full use of the multi-scale information in the extracted features, and choosing an appropriate feature fusion method to process the feature maps, remain challenging problems. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each layer of feature extraction, which improves feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then the adjacent-layer features, which have a similar degree of abstraction but different characteristic properties, are fused through the adjacency auxiliary module (AAM) to eliminate ambiguity and noise in the features. Besides, in order to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the adjacency auxiliary module, which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as saliency prediction maps for joint multi-level supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
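
A rough PyTorch sketch of the parallel dilated-convolution idea with channel attention, in the spirit of the PFEM described above, follows; the channel count, dilation rates, and squeeze-and-excitation-style attention are assumptions rather than the paper's exact design.

```python
# Sketch: a parallel-dilated-convolution block with channel attention, in the spirit
# of the PFEM described above. Channel counts, dilation rates, and the attention
# module are my assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        # Channel attention: global pooling followed by a small gating MLP.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        return fused * self.attention(fused)   # reweight channels by attention

out = ParallelDilatedBlock()(torch.randn(1, 64, 32, 32))
print(out.shape)   # torch.Size([1, 64, 32, 32])
```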

Integrated Object Representations in Visual Working Memory Examined by Change Detection and Recall Task Performance (변화탐지와 회상 과제에 기초한 시각작업기억의 통합적 객체 표상 검증)

  • Inae Lee;Joo-Seok Hyun
    • Korean Journal of Cognitive Science / v.35 no.1 / pp.1-21 / 2024
  • This study investigates the characteristics of visual working memory (VWM) representations by examining two theoretical models: the integrated-object model and the parallel-independent feature storage model. Experiment I involved a change detection task in which participants memorized arrays of orientation bars, colored squares, or both. In the one-feature condition, the memory array consisted of one feature (either orientations or colors), whereas the two-feature condition included both. We found no difference in change detection performance between the conditions, favoring the integrated-object model over the parallel-independent feature storage model. Experiment II employed a recall task with memory arrays of isosceles triangles' orientations, colored squares, or both, and the one-feature and two-feature conditions were compared for their recall performance. We again found no clear difference in recall accuracy between the conditions, but analyses of memory precision and guessing responses supported the weak object model over the strong object model. For the ongoing debate surrounding VWM's representational characteristics, these findings highlight the dominance of the integrated-object model over the parallel-independent feature storage model.

A flexible Feature Matching for Automatic Face and Facial Feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.4 / pp.705-711 / 2003
  • An automatic face and facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to establish feature correspondence between the models and the input image. This matching model works like a random diffusion process in the image space, employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortion from facial accessories. We demonstrate the benefits of our approach by applying it to a face identification system.
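
A minimal sketch of labeling graph nodes with Gabor responses and comparing them by a normalized similarity is shown below; the OpenCV filter-bank parameters, landmark coordinates, and similarity measure are illustrative choices, not the paper's configuration.

```python
# Sketch: label graph nodes with Gabor responses ("jets") and compare a model node
# to candidate image positions by normalized similarity. Filter-bank parameters,
# coordinates, and the similarity measure are illustrative only.
import cv2
import numpy as np

def gabor_jet(gray: np.ndarray, point, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Vector of Gabor filter responses at one pixel, over several orientations."""
    x, y = point
    responses = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        filtered = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        responses.append(filtered[y, x])
    return np.array(responses)

def jet_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder file name
model_jet = gabor_jet(gray, (120, 150))                 # jet at a model landmark
candidate_jet = gabor_jet(gray, (122, 151))             # jet at a nearby candidate
print("similarity:", jet_similarity(model_jet, candidate_jet))
```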

A New Confidence Measure for Eye Detection Using Pixel Selection (눈 검출에서의 픽셀 선택을 이용한 신뢰 척도)

  • Lee, Yonggeol;Choi, Sang-Il
    • KIPS Transactions on Software and Data Engineering / v.4 no.7 / pp.291-296 / 2015
  • In this paper, we propose a new confidence measure using pixel selection for eye detection and design a hybrid eye detector. For this, we produce sub-images by applying a pixel selection method to the eye patches and construct a BDA (Biased Discriminant Analysis) feature space for measuring the confidence of the eye detection results. For the hybrid eye detector, we select the HFED (Haar-like Feature based Eye Detector) and the MFED (MCT Feature based Eye Detector), which are complementary to each other, as basic detectors. For a given image, each basic detector performs eye detection, and the confidence of each result is estimated in the BDA feature space by calculating the distance between the produced eye patch and the mean of the positive samples in the training set. The result with the higher confidence is adopted as the final eye detection result and is used in the face alignment process for face recognition. Experimental results on various face databases show that the proposed method performs more accurate eye detection and consequently results in better face recognition performance than other methods.
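
The confidence measure can be sketched as a distance to the positive-class mean in a discriminant feature space; in the snippet below scikit-learn's LDA stands in for the paper's BDA, and the training data and detector outputs are assumed to be given.

```python
# Sketch: score two detectors' eye-patch outputs by distance to the positive-class
# mean in a discriminant feature space, and keep the higher-confidence result.
# scikit-learn's LDA stands in for the paper's BDA; the data is assumed provided.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_confidence_space(train_patches: np.ndarray, train_labels: np.ndarray):
    """train_patches: (N, D) flattened eye patches; train_labels: 1 = true eye, 0 = not."""
    lda = LinearDiscriminantAnalysis(n_components=1).fit(train_patches, train_labels)
    positive_mean = lda.transform(train_patches[train_labels == 1]).mean(axis=0)
    return lda, positive_mean

def confidence(lda, positive_mean, patch: np.ndarray) -> float:
    projected = lda.transform(patch.reshape(1, -1))
    return -float(np.linalg.norm(projected - positive_mean))   # closer = more confident

# Hypothetical use: choose between two detectors' candidate patches.
# lda, mu = fit_confidence_space(X_train, y_train)
# best = max([patch_from_hfed, patch_from_mfed], key=lambda p: confidence(lda, mu, p))
```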