• Title/Summary/Keyword: Picture Recognition

Implementation of Real-time Virtual Touch Recognition System in Embedded System (임베디드 환경에서 실시간 가상 터치 인식 시스템의 구현)

  • Kwon, Soon-Kak; Lee, Dong-Seok
    • Journal of Korea Multimedia Society, v.19 no.10, pp.1759-1766, 2016
  • We can implement a virtual touch recognition system by mounting the virtual touch algorithm on an embedded device connected to a depth camera. Since the computing performance of an embedded system is limited, real-time recognition of the virtual touch is difficult when the resolution of the depth image is high. To resolve this problem, this paper improves the binarization and labeling algorithms that occupy most of the processing time in virtual touch recognition. Binarization and labeling are performed only in the necessary regions rather than over the entire picture. By applying the proposed algorithm, the system can recognize the virtual touch in real time, at about 31 ms per frame, on a depth image with 640×480 resolution.
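
The abstract's core idea is to restrict binarization and connected-component labeling to the region where a touch can occur instead of processing the whole depth frame. Below is a minimal Python/OpenCV sketch of that idea; the function name, ROI layout, and the 15 mm touch threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import cv2

def detect_touch_in_roi(depth, surface_depth, roi, touch_threshold_mm=15):
    """Binarize and label only inside the region of interest (ROI).

    depth:         16-bit depth image in millimetres, e.g. 640x480 from a depth camera.
    surface_depth: reference depth image of the empty touch surface.
    roi:           (x, y, w, h) region near the touch surface; values are assumptions.
    """
    x, y, w, h = roi
    patch = depth[y:y + h, x:x + w].astype(np.int32)          # work on the ROI only
    surface = surface_depth[y:y + h, x:x + w].astype(np.int32)

    # Binarization restricted to the ROI: a pixel is a touch candidate when it
    # lies within touch_threshold_mm in front of the reference surface.
    mask = (patch > 0) & (surface - patch > 0) & (surface - patch < touch_threshold_mm)
    binary = mask.astype(np.uint8)

    # Connected-component labeling, again only on the ROI-sized image.
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    touches = []
    for i in range(1, num_labels):                  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= 20:        # reject tiny noise blobs
            cx, cy = centroids[i]
            touches.append((cx + x, cy + y))        # map back to full-image coordinates
    return touches
```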

Moving Target Tracking and Recognition for Location Based Surveillance Service (위치기반 감시 서비스를 위한 이동 객체 추적 및 인식)

  • Kim, Hyun; Park, Chan-Ho; Woo, Jong-Woo; Doo, Seok-Bae
    • Proceedings of the IEEK Conference, 2008.06a, pp.1211-1212, 2008
  • In this paper, we propose an image-processing model as part of a location-based surveillance system for recognizing and tracking unauthorized targets in harbors, airports, and military zones. For this, we compress and store the background image at a lower resolution, and perform object extraction and motion tracking using Sobel edge detection and a difference-picture method between real images and the background image. In addition, we use an Independent Component Analysis neural network for moving-target recognition. Experiments on extracting and tracking moving targets on a road, using a static camera mounted on a 20 m high building, show robust results.
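
A minimal sketch of the difference-picture and Sobel edge steps described above, using OpenCV; the ICA-based recognition stage is omitted, and the threshold and minimum-area values are assumptions.

```python
import cv2

def extract_moving_objects(frame_gray, background_gray, diff_threshold=30):
    """Difference-picture step: foreground = |current frame - stored background|."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, foreground = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)

    # Sobel edge detection on the current frame, combined with the foreground
    # mask so that only edges belonging to moving objects remain.
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    object_edges = cv2.bitwise_and(edges, edges, mask=foreground)

    # Bounding boxes of the extracted objects for the tracking stage.
    contours, _ = cv2.findContours(foreground, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    return object_edges, boxes
```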

Development of a visual-data processing system for a polyhedral object recognition by the projection of laser ring beam (다면체 물체 인식을 위한 환상레이져 빔 투사형 시각 정보 처리 시스템 개발)

  • 김종형; 조용철; 조형석
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 1988.10a, pp.428-432, 1988
  • This study discusses some issues in 3-dimensional object recognition and pose determination. The method employs a laser projector that projects a cylindrical light beam onto the object plane, where it produces a bright ring pattern. The picture is then taken by a TV camera. The ring pattern is mathematically an ellipse whose geometrical parameters carry the 3-dimensional features of the object plane. This paper gives the mathematical aspects of the 3-dimensional recognition method and shows experimentally how the ellipse parameters vary with the spatial orientation of the planar object.
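
The geometry described above can be sketched as an ellipse fit on the bright ring. This is not the paper's 1988 formulation: the tilt relation below assumes an approximately orthographic view of a circular ring (minor/major axis ratio ≈ cos of the tilt angle), and the intensity threshold is an assumption.

```python
import cv2
import numpy as np

def ring_pose_from_image(image_gray, intensity_threshold=200):
    """Fit an ellipse to the bright laser ring and estimate the plane tilt."""
    _, bright = cv2.threshold(image_gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    ring = max(contours, key=cv2.contourArea)        # the ring is the largest bright blob

    (cx, cy), (d1, d2), angle_deg = cv2.fitEllipse(ring)
    # Orthographic approximation: the projected circle's axis ratio is cos(tilt).
    tilt_deg = np.degrees(np.arccos(min(d1, d2) / max(d1, d2)))
    return (cx, cy), tilt_deg, angle_deg             # centre, plane tilt, in-plane orientation
```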

Analysis of Electroencephalogram Electrode Position and Spectral Feature for Emotion Recognition (정서 인지를 위한 뇌파 전극 위치 및 주파수 특징 분석)

  • Chung, Seong-Youb; Yoon, Hyun-Joong
    • Journal of Korean Society of Industrial and Systems Engineering, v.35 no.2, pp.64-70, 2012
  • This paper presents a statistical analysis method for selecting electroencephalogram (EEG) electrode positions and spectral features to recognize emotion, where emotional valence and arousal are classified into three and two levels, respectively. Ten experiments per subject were performed using three categories of IAPS (International Affective Picture System) pictures, i.e., high valence and high arousal, medium valence and low arousal, and low valence and high arousal. The EEG was recorded from 12 sites according to the international 10-20 system, referenced to Cz. A statistical analysis approach using ANOVA with Tukey's HSD is employed to identify statistically significant EEG electrode positions and spectral features for emotion recognition.
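
A minimal sketch of the screening procedure named in the abstract (one-way ANOVA followed by Tukey's HSD) for a single electrode/band feature; the DataFrame layout and column names are assumptions.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def screen_feature(df, feature_col, condition_col="emotion", alpha=0.05):
    """One-way ANOVA across emotion conditions, then Tukey's HSD post-hoc test.

    df is assumed to hold one row per trial with columns such as
    'F3_alpha_power' (a spectral feature) and 'emotion' (the condition label).
    """
    groups = [g[feature_col].values for _, g in df.groupby(condition_col)]
    f_stat, p_value = stats.f_oneway(*groups)

    tukey = None
    if p_value < alpha:  # run the post-hoc comparison only when the ANOVA is significant
        tukey = pairwise_tukeyhsd(df[feature_col], df[condition_col], alpha=alpha)
    return f_stat, p_value, tukey
```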

Registration Error Compensation for Face Recognition Using Eigenface (Eigenface를 이용한 얼굴인식에서의 영상등록 오차 보정)

  • Moon Ji-Hye; Lee Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences, v.30 no.5C, pp.364-370, 2005
  • The first step of face recognition is to align an input face picture with the database images. We propose a new algorithm for removing registration error in eigenspace. Our algorithm can correct for translation, rotation, and scale changes. Linear matrix modeling of the registration error enables us to compensate for subpixel errors in eigenspace. After calculating the derivative of the weighting vector in eigenspace, we can obtain the amount of translation or rotation without a time-consuming search. We verify that the correction enhances the recognition rate dramatically.
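
The abstract does not give the exact formulation, but the general idea of linearizing the eigenspace weight vector with respect to a small shift can be sketched as follows. The numerical Jacobian and least-squares solve are illustrative assumptions, not the authors' method.

```python
import numpy as np

def estimate_shift_in_eigenspace(img, eigenfaces, mean_face, ref_weights):
    """Estimate a subpixel (dx, dy) shift by linearizing the eigenspace weights.

    eigenfaces: (K, H, W) basis; mean_face: (H, W); ref_weights: weights of the
    registered reference image.  Only the linearization idea is shown here.
    """
    def weights(image):
        return eigenfaces.reshape(len(eigenfaces), -1) @ (image - mean_face).ravel()

    w0 = weights(img)
    # Finite-difference derivative of the weight vector w.r.t. 1-pixel x and y shifts.
    dw_dx = weights(np.roll(img, 1, axis=1)) - w0
    dw_dy = weights(np.roll(img, 1, axis=0)) - w0
    J = np.stack([dw_dx, dw_dy], axis=1)             # (K, 2) Jacobian

    # Solve w0 + J @ [dx, dy] ~= ref_weights in the least-squares sense.
    shift, *_ = np.linalg.lstsq(J, ref_weights - w0, rcond=None)
    return shift                                      # (dx, dy) in pixels
```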

5-Year-Old Children's Script Knowledge According to Task Situation and Socioeconomic Status (과제 상황 및 계층에 따른 만 5세 유아의 스크립트 지식)

  • 성미영; 이순형
    • Journal of the Korean Home Economics Association, v.40 no.11, pp.119-130, 2002
  • This study investigated preschool children's script knowledge according to task situation and socioeconomic status. Subjects were seventy-eight 5-year-old children (38 low- and 40 middle-income children; 36 boys and 42 girls) recruited from three day-care centers in Seoul. Each child participated in a script knowledge assessment session consisting of picture-recognition and picture-sequencing tasks. Statistical methods used for data analysis were means, standard deviations, and repeated measures ANOVA. Results showed that children's script knowledge scores were higher in the familiar task situation than in the unfamiliar task situation. Furthermore, middle-income children had higher script knowledge scores than low-income children. The findings indicate a difference in script knowledge between low- and middle-income preschoolers.

Distortion in Visual Memory for Wide-angle Image (광각 이미지에 대한 시각적 기억의 왜곡)

  • Jang, Phil-Sik
    • Journal of the Ergonomics Society of Korea, v.26 no.3, pp.11-16, 2007
  • Viewers remember seeing more of a scene than was present in the physical input, an illusion known as boundary extension. This study examined aspects of this distortion by presenting 69 subjects with wide-angle views of four scenes. Results of recognition and reproduction tests showed that boundary extension is not a unidirectional phenomenon. On the contrary, boundary restriction and foreground extension were observed with extreme wide-angle views of scenes. The results support the hypothesis that boundary restriction and foreground extension are mediated by the activation of a memory schema during picture perception.

Raining Image Enhancement and Its Processing Acceleration for Better Human Detection (사람 인식을 위한 비 이미지 개선 및 고속화)

  • Park, Min-Woong; Jeong, Geun-Yong; Cho, Joong-Hwee
    • IEMEK Journal of Embedded Systems and Applications, v.9 no.6, pp.345-351, 2014
  • This paper presents pedestrian recognition methods to improve the performance of vehicle safety and surveillance systems. Pedestrian detection using HOG (Histograms of Oriented Gradients) has shown a 90% recognition rate. However, if a picture is taken in the rain, the image may be distorted by rain streaks and the recognition rate drops to 62%. To solve this problem, we applied an image decomposition method using MCA (Morphological Component Analysis); this rain removal method improves the recognition rate from 62% to 70%. However, the conventional MCA-based decomposition is too slow for a real-time system, so it is difficult to apply to vehicle safety or surveillance systems. To alleviate this issue, we propose a rain removal method using a low-pass filter and the DCT (Discrete Cosine Transform). The DCT helps separate the rain components of the image, which are then removed by Butterworth filtering. Experimental results show that our method achieves a 90% recognition rate. In addition, the proposed method accelerates the processing time to 17.8 ms, which is acceptable for a real-time system.
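
A minimal sketch of the DCT-domain Butterworth low-pass idea stated in the abstract; the cutoff fraction and filter order are assumptions, since the abstract does not give the parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def remove_rain_dct(image_gray, cutoff=0.15, order=2):
    """Suppress rain streaks by Butterworth low-pass filtering in the DCT domain."""
    h, w = image_gray.shape
    coeffs = dctn(image_gray.astype(np.float64), norm="ortho")

    # Normalized frequency radius for every DCT coefficient.
    fy = np.arange(h)[:, None] / h
    fx = np.arange(w)[None, :] / w
    radius = np.sqrt(fx**2 + fy**2)

    # Butterworth low-pass response: 1 / (1 + (r / cutoff)^(2 * order)).
    lowpass = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))

    filtered = idctn(coeffs * lowpass, norm="ortho")
    return np.clip(filtered, 0, 255).astype(np.uint8)
```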

Physiological Responses-Based Emotion Recognition Using Multi-Class SVM with RBF Kernel (RBF 커널과 다중 클래스 SVM을 이용한 생리적 반응 기반 감정 인식 기술)

  • Vanny, Makara; Ko, Kwang-Eun; Park, Seung-Min; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.19 no.4, pp.364-371, 2013
  • Emotion recognition is an important part of human-human and human-computer interaction. In this paper, we focus on the performance of a multi-class SVM (Support Vector Machine) with a Gaussian RBF (Radial Basis Function) kernel, used to recognize emotion from physiological signals and to improve recognition accuracy. In the experimental paradigm for data acquisition, visual stimuli from the IAPS (International Affective Picture System) are used to induce emotional states such as fear, disgust, joy, and neutral for each subject. The raw signals of the acquired data are split into trials from each session for pre-processing. The mean value and standard deviation are extracted as features for the subsequent classification step. The experimental results show that the proposed multi-class SVM with a Gaussian RBF kernel and the OVO (One-Versus-One) method provides successful classification accuracy over these four emotions.
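
A minimal sketch of the pipeline described in the abstract (per-trial mean/standard-deviation features fed to a multi-class RBF SVM with one-versus-one decisions) using scikit-learn; the data layout and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def trial_features(trials):
    """trials: (n_trials, n_channels, n_samples) physiological signals.
    Features per trial are the per-channel mean and standard deviation, as in the abstract."""
    means = trials.mean(axis=2)
    stds = trials.std(axis=2)
    return np.hstack([means, stds])

def evaluate_emotion_classifier(trials, labels):
    """Multi-class SVM with a Gaussian RBF kernel; scikit-learn's SVC applies the
    one-versus-one (OVO) scheme internally for multi-class problems."""
    X = trial_features(trials)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=1.0, gamma="scale",
                            decision_function_shape="ovo"))
    return cross_val_score(clf, X, labels, cv=5)     # per-fold accuracies
```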

Human hand gesture identification framework using SIFT and knowledge-level technique

  • Muhammad Haroon; Saud Altaf; Zia-ur-Rehman; Muhammad Waseem Soomro; Sofia Iqbal
    • ETRI Journal, v.45 no.6, pp.1022-1034, 2023
  • In this study, the impact of varying lighting conditions on recognition and decision-making was considered. A luminosity approach was presented to increase gesture recognition performance under varied lighting. An efficient framework was proposed for sensor-based sign language gesture identification, including picture acquisition, data preparation, feature extraction, and recognition. Depth images were collected using multiple Microsoft Kinect devices, and data were acquired at varying resolutions to demonstrate the idea. A case study was designed to attain acceptable accuracy in gesture recognition under variant lighting. Using American Sign Language (ASL), the dataset was created and analyzed under various lighting conditions. In the ASL images, significant feature points were selected using the scale-invariant feature transform (SIFT). Finally, an artificial neural network (ANN) classified hand gestures using the selected features for validation. The suggested method was successful across a variety of illumination conditions and image sizes. The overall effectiveness of the ANN architecture was shown by a 97.6% recognition accuracy on the 26-letter alphabet dataset, with just a 2.4% error rate.
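
A minimal sketch of the SIFT-plus-ANN stage described above, using OpenCV's SIFT and a scikit-learn multi-layer perceptron as the ANN; the fixed keypoint count, padding scheme, and network size are assumptions.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def sift_feature_vector(image_gray, n_keypoints=32):
    """Describe a hand-gesture image by its strongest SIFT descriptors.

    Keeping a fixed number of keypoints and zero-padding is an assumption made
    so that every image yields an equal-length feature vector.
    """
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    _, descriptors = sift.detectAndCompute(image_gray, None)
    if descriptors is None:
        descriptors = np.zeros((0, 128), dtype=np.float32)
    padded = np.zeros((n_keypoints, 128), dtype=np.float32)
    padded[:min(n_keypoints, len(descriptors))] = descriptors[:n_keypoints]
    return padded.ravel()

def train_gesture_ann(train_images, train_labels):
    """ANN classifier (multi-layer perceptron) over the SIFT feature vectors."""
    X = np.array([sift_feature_vector(img) for img in train_images])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
    clf.fit(X, train_labels)
    return clf
```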