• Title/Abstract/Keyword: Context Recognition

Search results: 526 (processing time: 0.02 s)

지능형 이동 로봇에서 강인 물체 인식을 위한 영상 문맥 정보 활용 기법 (Utilization of Visual Context for Robust Object Recognition in Intelligent Mobile Robots)

  • 김성호;김준식;권인소
    • 로봇학회논문지 / Vol. 1, No. 1 / pp.36-45 / 2006
  • In this paper, we introduce visual contexts, in terms of their types and utilization methods, for robust object recognition with intelligent mobile robots. One of the core technologies for intelligent robots is visual object recognition. Robust techniques are strongly required, since there are many sources of visual variation, such as geometric and photometric changes and noise. For such requirements, we define spatial context, hierarchical context, and temporal context. Depending on the object recognition domain, the appropriate visual contexts can be selected. We also propose a unified framework that can utilize all of these contexts, and we validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies for intelligent robots.
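
The abstract above describes combining spatial, hierarchical, and temporal contexts in a unified framework but does not give a fusion rule. As a purely illustrative sketch, the snippet below shows one hypothetical way an appearance score could be fused with such context scores; the weights and function name are invented for illustration.

```python
import math

# Hypothetical log-linear fusion of an appearance score with spatial,
# hierarchical, and temporal context scores (all values in [0, 1]).
def fuse_context_scores(appearance, spatial, hierarchical, temporal,
                        weights=(0.55, 0.15, 0.15, 0.15)):
    scores = (appearance, spatial, hierarchical, temporal)
    # Combine in log space so weak evidence cannot dominate by chance.
    return sum(w * math.log(max(s, 1e-9)) for w, s in zip(weights, scores))

# Example: the detector alone is 60% confident, but the object sits in a
# plausible location (spatial), matches the scene category (hierarchical),
# and was tracked in the previous frame (temporal).
print(fuse_context_scores(0.60, 0.85, 0.80, 0.90))
```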


비디오에서 양방향 문맥 정보를 이용한 상호 협력적인 위치 및 물체 인식 (Collaborative Place and Object Recognition in Video using Bidirectional Context Information)

  • 김성호;권인소
    • 로봇학회논문지 / Vol. 1, No. 2 / pp.172-179 / 2006
  • In this paper, we present a practical place and object recognition method for guiding visitors in building environments. Recognizing places or objects in the real world can be difficult due to motion blur and camera noise. In this work, we present a modeling method based on the bidirectional interaction between places and objects for simultaneous reinforcement and robust recognition. The unification of visual contexts, including scene context, object context, and temporal context, is also presented. The proposed system has been tested to guide visitors in a large-scale building environment (10 topological places, 80 3D objects).
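
The bidirectional place-object interaction described above can be pictured as alternating belief updates. The sketch below is a rough analogue only, not the authors' model; the co-occurrence table and likelihoods are synthetic.

```python
import numpy as np

# Hypothetical place-object co-occurrence table (rows: places, cols: objects).
P_obj_given_place = np.array([[0.7, 0.2, 0.1],   # e.g. lobby
                              [0.1, 0.6, 0.3]])  # e.g. office

def bidirectional_update(place_like, object_like, n_iters=5):
    """Let object evidence vote for places and the place belief re-weight objects."""
    place = place_like / place_like.sum()
    obj = object_like / object_like.sum()
    for _ in range(n_iters):
        place = place_like * (P_obj_given_place @ obj)
        place /= place.sum()
        obj = object_like * (place @ P_obj_given_place)
        obj /= obj.sum()
    return place, obj

place, obj = bidirectional_update(np.array([0.5, 0.5]), np.array([0.4, 0.3, 0.3]))
print(place, obj)
```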


Abnormal Behavior Recognition Based on Spatio-temporal Context

  • Yang, Yuanfeng;Li, Lin;Liu, Zhaobin;Liu, Gang
    • Journal of Information Processing Systems / Vol. 16, No. 3 / pp.612-628 / 2020
  • This paper presents a new approach for detecting abnormal behaviors in complex surveillance scenes where anomalies are subtle and difficult to distinguish due to the intricate correlations among multiple objects' behaviors. Specifically, a cascaded probabilistic topic model was put forward for learning the spatial context of local behavior and the temporal context of global behavior in two different stages. In the first stage of topic modeling, unlike existing approaches that use either optical flows or complete trajectories, spatio-temporal correlations between the trajectory fragments in video clips were modeled by a latent Dirichlet allocation (LDA) topic model based on Markov random fields, to obtain the spatial context of local behavior in each video clip. The local behavior topic categories were then obtained by applying a spectral clustering algorithm. Based on a dictionary constructed through this local behavior topic clustering, the second stage of LDA topic modeling learns the correlations of global behaviors and the temporal context. An abnormal behavior recognition method was then developed based on the learned spatio-temporal context of behaviors. The recognition method adopts a top-down strategy and consists of two stages: anomaly recognition of a video clip and recognition of the anomalous behaviors within that clip. Evaluation examined both the validity of the spatio-temporal context learning for local behavior topics and the abnormal behavior recognition performance. The results show that the proposed approach improves abnormal behavior recognition effectively and significantly in complex surveillance scenes.
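
As a rough, non-authoritative sketch of the first stage described above (standard LDA and spectral clustering stand in for the paper's MRF-based cascaded model), the snippet below learns local behavior topics from per-clip counts of quantized trajectory fragments and then groups the topics; the counts are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
clip_word_counts = rng.poisson(1.0, size=(40, 200))  # 40 clips x 200 trajectory-fragment codewords

# Stage 1: local behavior topics per video clip.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
clip_topics = lda.fit_transform(clip_word_counts)    # per-clip topic mixtures

# Group the local-behavior topics into broader behavior categories; in the
# paper these categories form the dictionary for the second-stage topic model.
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
labels = SpectralClustering(n_clusters=4, random_state=0).fit_predict(topic_word)
print(labels)  # topic -> behavior-category assignment
```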

음성을 이용한 화자 및 문장독립 감정인식 (Speaker and Context Independent Emotion Recognition using Speech Signal)

  • 강면구;김원구
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(4) / pp.377-380 / 2002
  • In this paper, speaker- and context-independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion using subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy, and to evaluate the performance of conventional pattern matching algorithms. A vector quantization based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using the pitch and energy parameters.
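
A minimal sketch of the pipeline described above, assuming pitch and energy contours come from a separate front end: per-utterance statistical features (mean, standard deviation, maximum) are classified by picking the emotion whose vector-quantization codebook gives the lowest distortion. k-means stands in for codebook training here; this is not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def utterance_features(pitch, energy):
    """Mean, standard deviation, and maximum of the pitch and energy contours."""
    return np.array([pitch.mean(), pitch.std(), pitch.max(),
                     energy.mean(), energy.std(), energy.max()])

def train_codebooks(features_by_emotion, codebook_size=4):
    """One k-means codebook per emotion, trained on that emotion's feature vectors."""
    return {emo: KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(feats)
            for emo, feats in features_by_emotion.items()}

def classify(feature, codebooks):
    """Choose the emotion whose codebook quantizes the feature with least distortion."""
    def distortion(km):
        return np.min(np.linalg.norm(km.cluster_centers_ - feature, axis=1))
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))
```

The same codebook scheme applies with a different feature extractor; in the paper, MFCC-based features gave better accuracy than the pitch and energy statistics.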


Scale Invariant Auto-context for Object Segmentation and Labeling

  • Ji, Hongwei;He, Jiangping;Yang, Xin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 8 / pp.2881-2894 / 2014
  • In complicated environment, context information plays an important role in image segmentation/labeling. The recently proposed auto-context algorithm is one of the effective context-based methods. However, the standard auto-context approach samples the context locations utilizing a fixed radius sequence, which is sensitive to large scale-change of objects. In this paper, we present a scale invariant auto-context (SIAC) algorithm which is an improved version of the auto-context algorithm. In order to achieve scale-invariance, we try to approximate the optimal scale for the image in an iterative way and adopt the corresponding optimal radius sequence for context location sampling, both in training and testing. In each iteration of the proposed SIAC algorithm, we use the current classification map to estimate the image scale, and the corresponding radius sequence is then used for choosing context locations. The algorithm iteratively updates the classification maps, as well as the image scales, until convergence. We demonstrate the SIAC algorithm on several image segmentation/labeling tasks. The results demonstrate improvement over the standard auto-context algorithm when large scale-change of objects exists.
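
A minimal, runnable sketch of the SIAC loop described above; the scale estimator, context sampler, and "classifier" are toy stand-ins, not the authors' implementation.

```python
import numpy as np

def estimate_scale(prob_map, reference_area=100.0):
    """Toy scale estimate: sqrt of foreground area relative to a reference area."""
    area = float((prob_map > 0.5).sum()) + 1e-6
    return np.sqrt(area / reference_area)

def sample_context(prob_map, radii):
    """Sample the classification map at offsets +/-r along each axis for every pixel."""
    feats = []
    for r in radii:
        r = max(1, int(round(r)))
        for axis, shift in [(0, r), (0, -r), (1, r), (1, -r)]:
            feats.append(np.roll(prob_map, shift, axis=axis))
    return np.stack(feats, axis=-1)

def siac(prob_map, base_radii=(2, 4, 8), n_iters=3):
    for _ in range(n_iters):
        scale = estimate_scale(prob_map)                       # scale from current map
        context = sample_context(prob_map, [r * scale for r in base_radii])
        # Stand-in "classifier": blend the current belief with the mean context vote.
        prob_map = 0.5 * prob_map + 0.5 * context.mean(axis=-1)
    return prob_map

print(siac(np.random.rand(32, 32)).shape)
```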

MANET에서 상황인식 기반의 UoC Architecture 구현 (Implementation of a Context-awareness based UoC Architecture for MANET)

  • 두경민;이강환
    • 한국정보통신학회논문지 / Vol. 12, No. 6 / pp.1128-1133 / 2008
  • Context awareness has attracted much attention as a way to overcome the limitations of human-computer interaction. In this paper, we propose a context-aware system architecture that can be implemented as a UoC (Ubiquitous system on Chip). To realize a ubiquitous computing system, the proposed UoC architecture consists of a Pre-processor incorporating the concepts of a CRS (Context Recognition Switch) and a DOS (Dynamic and Optimal Standard), an HPSP (High Performance Signal Processor), and a Network Topology Processor. We also present a UoC implemented according to the IEEE 802.15.4 WPAN (Wireless Personal Area Network) standard. The proposed context-awareness based UoC architecture can be applied to intelligent mobile robots that recognize context in residential environments and assist users.
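
Purely as a software analogue of the routing role the CRS plays in the proposed architecture (the paper describes a hardware UoC; the field names and thresholds below are hypothetical), a context switch could be sketched as follows.

```python
# Route an incoming sensor frame to one of the processing blocks named in the
# abstract, depending on the recognized context. Illustrative only.
def context_recognition_switch(frame):
    if frame.get("rssi_change", 0.0) > 0.3:
        return "network_topology_processor"      # network topology likely changed
    if frame.get("motion_energy", 0.0) > 0.5:
        return "high_performance_signal_processor"
    return "pre_processor"                        # idle / low-activity context

print(context_recognition_switch({"motion_energy": 0.8}))
```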

반음절 문맥종속 모델을 이용한 한국어 4 연숫자음 인식에 관한 연구 (A Study on Korean 4-connected Digit Recognition Using Demi-syllable Context-dependent Models)

  • 이기영;최성호;이호영;배명진
    • 한국음향학회지 / Vol. 22, No. 3 / pp.175-181 / 2003
  • Korean digit sounds are monosyllabic, and because of liaison effects between connected digits, demi-syllable based models have been proposed for recognizing Korean connected digits. However, methods using the previously proposed demi-syllable or demi-syllable+demi-syllable recognition models have not yet shown good recognition performance. In this paper, we propose a Korean 4-connected digit recognition method using extended context-dependent demi-syllable models. In the experiments, the SiTEC 4-connected digit database was used for the connected digit data, and the C-HMM of HTK 3.0 was used for training and recognition. Compared with existing methods, the proposed method showed a relatively good recognition accuracy of 92%.
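
A simplified illustration of the demi-syllable idea, not the paper's model set: each digit is split into a left and a right demi-syllable, and the right demi-syllable is made context-dependent on the next digit so that liaison at the digit boundary can be modeled. The romanized unit labels are invented for illustration.

```python
# digit -> (left demi-syllable, right demi-syllable); labels are illustrative only.
DEMI = {
    "0": ("go", "ong"), "1": ("i", "il"), "2": ("i", "i"),
    "3": ("sa", "am"), "4": ("sa", "a"), "5": ("o", "o"),
    "6": ("yu", "uk"), "7": ("chi", "il"), "8": ("pa", "al"), "9": ("gu", "u"),
}

def demi_syllable_units(digit_string):
    units = []
    for i, d in enumerate(digit_string):
        left, right = DEMI[d]
        units.append(left)
        # Make the right demi-syllable context-dependent on the next digit's
        # left demi-syllable so liaison across the digit boundary is captured.
        if i + 1 < len(digit_string):
            units.append(f"{right}+{DEMI[digit_string[i + 1]][0]}")
        else:
            units.append(right)
    return units

print(demi_syllable_units("1234"))
```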

보강문맥자유문법을 이용한 필기체한글 온라인 인식 (On-Line Recognition of Handwritten Hangeul by Augmented Context Free Grammar)

  • 이희동;김태균
    • 대한전자공학회논문지 / Vol. 24, No. 5 / pp.769-776 / 1987
  • A method for on-line recognition of Korean characters (Hangeul) by an augmented context-free grammar is described in this paper. Syntactic analysis with a context-free grammar often has ambiguity; insufficient description of the relations among Hangeul sub-patterns causes this ambiguity. These relations can be determined through repetition of experiments. Flexible syntactic analysis is executed by attaching the resulting conditions to the advice part of the augmented context-free grammar. The correct recognition rate of this method is more than 99%.
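
A toy illustration of an augmented context-free rule, not the paper's grammar: each production carries an advice predicate over the geometry of the matched sub-patterns, and a rule applies only when its predicate holds, which prunes ambiguous parses. All rule names and predicates are hypothetical.

```python
def left_of(a, b):   # advice: consonant strokes lie to the left of the vowel
    return a["x_max"] <= b["x_min"]

def above(a, b):     # advice: consonant strokes lie above the vowel
    return a["y_max"] <= b["y_min"]

AUGMENTED_RULES = [
    # (lhs, rhs, advice predicate over the matched sub-patterns)
    ("syllable", ("consonant", "vertical_vowel"), left_of),
    ("syllable", ("consonant", "horizontal_vowel"), above),
]

def parse_pair(consonant, vowel, vowel_type):
    for lhs, (c_sym, v_sym), advice in AUGMENTED_RULES:
        if v_sym == vowel_type and advice(consonant, vowel):
            return lhs
    return None

print(parse_pair({"x_max": 10, "y_max": 30},
                 {"x_min": 12, "y_min": 0}, "vertical_vowel"))
```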


음성인식에서 문맥의존 음향모델의 성능향상을 위한 유사음소단위에 관한 연구 (A Study on Phoneme Likely Units to Improve the Performance of Context-dependent Acoustic Models in Speech Recognition)

  • 임영춘;오세진;김광동;노덕규;송민규;정현열
    • 한국음향학회지 / Vol. 22, No. 5 / pp.388-402 / 2003
  • In this paper, we carried out word, 4-connected digit, continuous speech, and task-independent word recognition experiments to verify the effectiveness of the re-defined phoneme likely units (PLUs) for phonetic decision tree based HM-Net (Hidden Markov Network) context-dependent (CD) acoustic modeling in Korean. In the 48-PLU set, the phonemes /ㅂ/, /ㄷ/, /ㄱ/ are separated into initial, medial, and final variants, and the consonants /ㄹ/, /ㅈ/, /ㅎ/ are separated into initial and final variants, according to their position in the syllable, word, and sentence. In this paper, therefore, we re-define 39 PLUs by unifying the separated position-dependent variants of each phoneme in the 48-PLU set into one unit, in order to construct the CD acoustic models effectively. Experimental results with the re-defined 39 PLUs show that, in word recognition with context-independent (CI) acoustic models, the 48 PLUs give on average 7.06% higher recognition accuracy than the 39 PLUs. However, in speaker-independent word recognition with CD acoustic models, the 39 PLUs give on average 0.61% better recognition accuracy than the 48 PLUs. In the 4-connected digit recognition experiments involving liaison phenomena, the 39 PLUs also give on average 6.55% higher recognition accuracy, and in continuous speech recognition they give on average 15.08% better recognition accuracy than the 48 PLUs. Finally, in the task-independent word recognition experiments, where unknown contextual factors lower the accuracy of both sets, the 39 PLUs still show on average 1.17% better recognition accuracy than the 48 PLUs. Through these experiments, we verified the effectiveness of the re-defined 39 PLUs compared with the 48 PLUs for constructing the CD acoustic models.
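
A schematic sketch of the unit merge described above: position-separated variants of a phoneme in a 48-unit style inventory are mapped onto a single unit to obtain a reduced set. The ASCII symbols below are hypothetical placeholders, not the actual 48/39 PLU inventories.

```python
# Hypothetical mapping from position-dependent phoneme variants to merged units.
MERGE_MAP = {
    "b_init": "b", "b_fin": "b",
    "d_init": "d", "d_fin": "d",
    "g_init": "g", "g_fin": "g",
    "l_init": "l", "l_fin": "l",
    "j_init": "j", "j_fin": "j",
    "h_init": "h", "h_fin": "h",
}

def remap_plus(label_sequence, merge_map=MERGE_MAP):
    """Map a 48-PLU style transcription onto the reduced unit set."""
    return [merge_map.get(unit, unit) for unit in label_sequence]

print(remap_plus(["g_init", "a", "b_fin"]))  # -> ['g', 'a', 'b']
```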

Vowel Context Effect on the Perception of Stop Consonants in Malayalam and Its Role in Determining Syllable Frequency

  • Mohan, Dhanya;Maruthy, Sandeep
    • Journal of Audiology & Otology / Vol. 25, No. 3 / pp.124-130 / 2021
  • Background and Objectives: The study investigated vowel context effects on the perception of stop consonants in Malayalam. It also probed the role of vowel context effects in determining the frequency of occurrence of various consonant-vowel (CV) syllables in Malayalam. Subjects and Methods: The study used a cross-sectional, pre-experimental, post-test-only research design with 30 individuals with normal hearing who were native speakers of Malayalam. The stimuli included three stop consonants, each spoken in three different vowel contexts. The resulting nine syllables were presented in original form and in five gating conditions. The participants' consonant recognition in the different vowel contexts was assessed. The frequency of occurrence of the nine target syllables in the spoken corpus of Malayalam was also systematically derived. Results: The consonant recognition score was better in the /u/ vowel context than in the /i/ and /a/ contexts. The frequency of occurrence of the target syllables derived from the spoken corpus of Malayalam showed that the three stop consonants occurred more frequently with the vowel /a/ than with /u/ and /i/. Conclusions: The findings show a definite vowel context effect on the perception of the Malayalam stop consonants. The context effect observed differs from that in other languages: stop consonants are perceived better in the context of /u/ than in the /a/ and /i/ contexts. Furthermore, the vowel context effects do not appear to determine the frequency of occurrence of different CV syllables in Malayalam.