• Title/Summary/Keyword: Salient Segment Detection


Salient Object Detection via Adaptive Region Merging

  • Zhou, Jingbo; Zhai, Jiyou; Ren, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.9, pp.4386-4404, 2016
  • Most existing salient object detection algorithms employ segmentation techniques to eliminate background noise and reduce computation, treating each segment as a processing unit. However, individual small segments provide little information about global content, so such schemes have limited capability for modeling global perceptual phenomena. In this paper, a novel salient object detection algorithm based on region merging is proposed. An adaptive merging scheme is developed to reassemble regions according to their color dissimilarities: a region R is merged with its adjacent region Q if R has the lowest dissimilarity with Q among all of Q's adjacent regions. To guide the merging process, superpixels located at the boundary of the image are treated as seeds. However, part of the image boundary may be occupied by the foreground object; to avoid this case, we optimize the boundary influences by locating and eliminating erroneous boundaries before region merging. We show that encouraging performance can be obtained even though only three simple region saliency measurements are adopted for each region. Experiments on four benchmark datasets (MSRA-B, SOD, SED and iCoSeg) show that the proposed method yields uniform object enhancement and achieves state-of-the-art performance in comparison with nine existing methods.
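The merging rule is compact enough to sketch in code. Below is a minimal, hypothetical illustration of one seed-driven merging pass (not the authors' implementation), assuming superpixels are already computed, that dissimilarity is the Euclidean distance between mean Lab colors, and that `mean_color` and `adjacency` are precomputed lookup tables.

```python
import numpy as np

def color_dissimilarity(mean_a, mean_b):
    """Euclidean distance between two mean region colors (e.g., in Lab space)."""
    return float(np.linalg.norm(np.asarray(mean_a) - np.asarray(mean_b)))

def merge_pass(mean_color, adjacency, seeds):
    """One pass of seed-driven merging: each seed region Q absorbs the
    adjacent region R that has the lowest dissimilarity with Q among
    all of Q's adjacent regions."""
    merged_into = {}
    for q in seeds:
        neighbors = adjacency.get(q, [])
        if not neighbors:
            continue
        r = min(neighbors,
                key=lambda n: color_dissimilarity(mean_color[q], mean_color[n]))
        merged_into[r] = q  # region r is absorbed into seed q
    return merged_into
```

In the full method, such passes would repeat, with adjacency and mean colors updated after each merge, until the boundary-seeded background stops growing; the remaining regions are then scored for saliency.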

An Artificial Visual Attention Model based on Opponent Process Theory for Salient Region Segmentation (돌출영역 분할을 위한 대립과정이론 기반의 인공시각집중모델)

  • Jeong, Kiseon; Hong, Changpyo; Park, Dong Sun
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.7, pp.157-168, 2014
  • In this paper, we propose a novel artificial visual attention model capable of automatic detection and segmentation of salient regions in natural images. The proposed model is based on human visual perception in biological vision and makes three main contributions. First, we propose a novel framework for an artificial visual attention model based on the opponent process theory, using intensity and color features. Second, an entropy filter is designed to perceive salient regions by considering the amount of information in the intensity and color feature channels; the entropy filter is able to detect and segment salient regions with high accuracy and precision. Lastly, we propose an adaptive combination method to generate the final saliency map: it estimates a score for the intensity and color conspicuity maps produced by each perception model and combines the maps with weights derived from those scores. In an ROC evaluation of the saliency map, the AUC of the proposed model was 0.9256, an improvement of approximately 15% over the 0.7824 AUC of previous state-of-the-art models. In the evaluation of salient region segmentation, the F-beta of the proposed model was 0.7325, an improvement of approximately 22% over previous state-of-the-art models.
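As a rough illustration of the two ideas named in the abstract, the sketch below applies a local-entropy filter to a feature map and combines two conspicuity maps with entropy-derived weights. The window size, bin count, and the use of summed local entropy as each map's score are assumptions made here for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(window, bins=16):
    """Shannon entropy of the value histogram inside one sliding window."""
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_filter(feature_map, size=9):
    """Slide a size x size window over a map in [0, 1]; high responses
    mark information-rich (potentially salient) regions."""
    return generic_filter(feature_map, local_entropy, size=size)

def combine_maps(intensity_map, color_map):
    """Weight each conspicuity map by its total local entropy (its score)."""
    s_i = entropy_filter(intensity_map).sum()
    s_c = entropy_filter(color_map).sum()
    w_i, w_c = s_i / (s_i + s_c), s_c / (s_i + s_c)
    return w_i * intensity_map + w_c * color_map
```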

Automatic pronunciation assessment of English produced by Korean learners using articulatory features (조음자질을 이용한 한국인 학습자의 영어 발화 자동 발음 평가)

  • Ryu, Hyuksu; Chung, Minhwa
    • Phonetics and Speech Sciences, v.8 no.4, pp.103-113, 2016
  • This paper proposes articulatory features as novel predictors for automatic pronunciation assessment of English produced by Korean learners. Based on distinctive feature theory, in which phonemes are represented as a set of articulatory/phonetic properties, we propose articulatory Goodness-Of-Pronunciation (aGOP) features in terms of the corresponding articulatory attributes, such as nasal, sonorant, and anterior. An English speech corpus spoken by Korean learners is used in the assessment modeling. In our system, learners' speech is force-aligned and recognized using acoustic and pronunciation models derived from the WSJ corpus (native North American speech) and the CMU pronouncing dictionary, respectively. To compute aGOP features, articulatory models are trained for the corresponding articulatory attributes. In addition to the proposed features, various features divided into four categories (RATE, SEGMENT, SILENCE, and GOP) are applied as a baseline. To enhance the assessment modeling performance and investigate the weights of the salient features, relevant features are extracted using Best Subset Selection (BSS). The results show that the proposed model using aGOP features outperforms the baseline. In addition, analysis of the relevant features extracted by BSS reveals that the selected aGOP features represent the salient variations of Korean learners of English. The results are expected to be effective for automatic pronunciation error detection as well.
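For readers unfamiliar with GOP scoring, the sketch below shows the usual duration-normalized log-likelihood-ratio form of GOP and an attribute-level variant in the spirit of aGOP. The posterior matrix, the column-index convention, the smoothing constant, and the function names are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gop(log_likelihood_target, log_likelihood_best, num_frames):
    """Classic GOP: duration-normalized log-likelihood ratio between the
    canonical (forced-aligned) model and the best competing model."""
    return (log_likelihood_target - log_likelihood_best) / max(num_frames, 1)

def agop(attribute_posteriors, canonical_column):
    """Attribute-level GOP over one aligned segment.

    attribute_posteriors: array of shape (frames, attributes) with
    per-frame posteriors from articulatory attribute models.
    canonical_column: index of the attribute expected for the canonical
    phoneme (e.g., the 'nasal' column for /m/ or /n/)."""
    frames = attribute_posteriors.shape[0]
    target = np.log(attribute_posteriors[:, canonical_column] + 1e-10).sum()
    best = np.log(attribute_posteriors.max(axis=1) + 1e-10).sum()
    return gop(target, best, frames)
```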

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In; Yeo, Dong-Gyu; Kim, Byeong-Man
    • The KIPS Transactions: Part B, v.17B no.4, pp.263-274, 2010
  • To provide a context-aware music recommendation service, we first need to capture the music mood that a user prefers in a given situation or context. Among the various characteristics of music, mood has a close relation with people's emotion. Based on this relationship, some researchers have studied music mood detection by manually selecting a representative segment of each piece and classifying its mood. Although such approaches show good performance on mood classification, they are difficult to apply to new music because of the manual intervention. Detection is further complicated because the mood usually varies over time. To cope with these problems, this paper presents an automatic method to classify music mood. First, a whole piece of music is segmented into several groups with similar characteristics using structural information. Then the mood of each segment is detected, and each individual's preference for mood is modeled by regression based on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
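The regression step onto Thayer's two-dimensional (arousal-valence) mood model can be sketched as below. The toy features, the training data, and the quadrant labels are illustrative assumptions, not the paper's trained model or feature set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data: rows are segment-level audio features
# (e.g., tempo, energy, spectral centroid); targets are (arousal, valence).
X_train = np.array([[120, 0.8, 0.6], [60, 0.2, 0.3],
                    [140, 0.9, 0.2], [70, 0.3, 0.7]])
y_train = np.array([[0.7, 0.6], [-0.5, 0.2], [0.8, -0.4], [-0.3, 0.5]])

model = LinearRegression().fit(X_train, y_train)

def mood_of_segment(features):
    """Regress a segment's features to (arousal, valence), then name the
    Thayer quadrant it falls into."""
    arousal, valence = model.predict([features])[0]
    if arousal >= 0:
        return "exuberance" if valence >= 0 else "anxiety"
    return "contentment" if valence >= 0 else "depression"

print(mood_of_segment([130, 0.85, 0.4]))  # e.g., "exuberance"
```

A per-segment prediction like this also yields a mood trajectory over the whole piece rather than a single label, which is what motivates the structural segmentation in the first step.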

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat; Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2005.06a, pp.1202-1205, 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method does not require a constrained pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to verify that it is a human and to identify it among the persons registered in the database. To segment the moving target from the background scene, we employ a background subtraction technique and some spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify whether it is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as head, torso, and legs. The whole-body silhouette is also analyzed to obtain shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trousers color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned before; when a person wears new clothes, the system fails to identify them, which shows that height alone is not enough to classify persons. We plan to extend the work by adding more cues such as skin color and face recognition, utilizing the zoom capability of the camera to obtain a high-resolution view of the face, and to evaluate the system with more subjects.
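The cue-matching step can be sketched roughly as follows. The body-part split here is a crude stand-in for the paper's cardboard model, the nearest-neighbor match stands in for its supervised self-organization process, and the function names and fixed split ratios are assumptions for illustration.

```python
import numpy as np

def extract_cues(frame, mask):
    """frame: HxWx3 RGB image; mask: HxW boolean foreground silhouette."""
    rows = np.where(mask.any(axis=1))[0]
    top, bottom = rows.min(), rows.max()
    height_px = bottom - top + 1
    # Crude split: skip the head (~top 20%), torso to mid-body, legs below.
    torso = mask.copy()
    torso[: top + height_px // 5] = False
    torso[top + height_px // 2 :] = False
    legs = mask.copy()
    legs[: top + height_px // 2] = False
    shirt = frame[torso].mean(axis=0)     # mean RGB of the torso pixels
    trousers = frame[legs].mean(axis=0)   # mean RGB of the leg pixels
    return np.concatenate(([height_px], shirt, trousers))

def identify(cues, registry):
    """Nearest registered person in the 7-D cue space
    (height + shirt RGB + trousers RGB)."""
    names = list(registry)
    dists = [np.linalg.norm(cues - registry[name]) for name in names]
    return names[int(np.argmin(dists))]
```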
