• Title/Summary/Keyword: Eye Image


A Receiver for Dual-Channel CIS Interfaces (이중 채널 CIS 인터페이스를 위한 수신기 설계)

  • Shin, Hoon;Kim, Sang-Hoon;Kwon, Kee-Won;Chun, Jung-Hoon
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.10, pp.87-95, 2014
  • This paper describes a dual-channel receiver design for CIS interfaces. Each channel includes a CTLE (continuous-time linear equalizer), a sampler, a deserializer, and a clocking circuit. The clocking circuit is composed of a PLL, a phase interpolator (PI), and a CDR. Fast lock acquisition, short latency, and improved jitter tolerance are achieved by adding an OSPD (oversampling phase detector) and an FSM (finite state machine) to the PI-based CDR. The CTLE removes the ISI caused by a channel with -6 dB attenuation, and the lock acquisition time of the CDR is below 1 baud period for frequency offsets under 8000 ppm. In the eye diagram of the receiver, implemented in a 65 nm CMOS technology, the voltage margin is 368 mV and the timing margin is 0.93 UI.
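As a rough illustration of the eye-diagram figures quoted in this abstract (not the measurement flow used in the paper), the following NumPy sketch estimates a voltage margin and a timing margin from a received waveform folded onto one unit interval; the function name, decision threshold, and sampling assumptions are all hypothetical.

```python
import numpy as np

def eye_margins(samples, samples_per_ui, threshold=0.0):
    """Estimate eye-opening margins from a waveform folded onto one UI.
    Illustrative only; assumes a 1-D array of voltage samples taken at
    samples_per_ui points per unit interval."""
    n = (len(samples) // samples_per_ui) * samples_per_ui
    folded = samples[:n].reshape(-1, samples_per_ui)      # one row per UI

    # Vertical eye opening at each phase: gap between the lowest sample
    # above the decision threshold and the highest sample below it.
    highs = np.where(folded > threshold, folded, np.inf).min(axis=0)
    lows = np.where(folded <= threshold, folded, -np.inf).max(axis=0)
    vertical_open = highs - lows

    voltage_margin = vertical_open.max()                  # best sampling phase
    open_fraction = np.count_nonzero(vertical_open > 0) / samples_per_ui
    return voltage_margin, open_fraction                  # (volts, fraction of UI)
```

A real receiver characterization would sweep the sampler threshold and phase on silicon; this sketch only shows how the two margin numbers relate to the folded eye.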

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility, v.10 no.1, pp.113-125, 2007
  • The aim of the experimental studies described in this paper is to investigate the effects of the face, eye, and mouth areas of dynamic and static facial expressions on emotional recognition. Using seven-second displays, Experiment 1 examined basic emotions and Experiment 2 examined complex emotions. The results of the two experiments support that dynamic facial expressions yield higher emotional recognition than static ones, and indicate a stronger recognition effect for the eye area than for the mouth area in dynamic images. These results suggest that dynamic properties should be considered in emotional studies using facial expressions, for complex emotions as well as basic ones. However, the properties of each emotion must also be considered, because not every emotion showed the dynamic-image effect equally. Furthermore, this study shows that the facial area which conveys an emotional state most accurately depends on the particular emotion.


The Method for Measuring the Initial Stage of Emotion in Use Context (제품 사용 환경의 사용자 초기 감성 측정 방법에 관한 연구)

  • Lee, Jae-Hwa;Lee, Kun-Pyo
    • Science of Emotion and Sensibility, v.13 no.1, pp.111-120, 2010
  • The initial stage of emotion has a great influence on building up product image and impression. Because of this influence, measuring the initial stage of emotion can be a key factor for designers and marketers in establishing a distinct product concept. While many researchers have studied emotion measurement methods for the product-use stage, very few are specialized for the initial stage of emotion. Even though existing emotion measurement methods have difficulty deriving a user's initial emotion accurately, most initial-emotion studies still apply these deficient methods. The purpose of this study is to develop a measurement method for the initial stage of emotion and to apply it in a real product context. In designing the method, noticeable characteristics of the initial stage of emotion were explored and an initial-emotion measurement framework was presented. Based on this framework, the Initial Emotion Measurement System (IEMS) was proposed. This method collects the user's eye movement, behavior, and verbal data accurately and objectively.


Robust Pupil Detection using Rank Order Filter and Pixel Difference (Rank Order Filter와 화소값 차이를 이용한 강인한 눈동자 검출)

  • Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering, v.16 no.7, pp.1383-1390, 2012
  • In this paper, we propose a robust pupil detection method for facial images using a rank order filter and pixel-value differences. Potential pupil candidates are detected with the rank order filter. Many false candidates found on the eyebrows are removed by exploiting the fact that the pixel-value difference is large at the boundary between the pupil and the sclera. The remaining pupil candidates are grouped into pairs, and each pair is verified against geometric constraints such as the angle and the distance between the two candidates. A fitness function is computed for each pair from the pixel values of the two pupil regions, and the pair with the smallest fitness value is selected as the final pupils. Experiments were performed on 400 images of the BioID face database. The results show an accuracy of more than 90%, and the proposed method especially improves the detection rate for faces with spectacles.
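The abstract outlines the candidate-generation and verification steps but gives no code; the sketch below, assuming a grayscale uint8 face image and approximating the rank order filter with SciPy's percentile filter, shows how dark pupil candidates might be found and then pruned with the pupil/sclera pixel-difference check. The thresholds and window size are hypothetical, not values from the paper.

```python
import cv2
import numpy as np
from scipy.ndimage import percentile_filter

def pupil_candidates(gray, k=7, min_boundary_diff=40):
    """Rough pupil-candidate detector: a rank-order (percentile) filter to
    emphasise small dark blobs, then a pixel-difference check that rejects
    eyebrow responses. Illustrative sketch only."""
    # A low-percentile rank filter reinforces compact dark regions such as pupils.
    ranked = percentile_filter(gray, percentile=10, size=k)

    # Threshold the darkest responses to obtain candidate blobs.
    thresh = int(np.percentile(ranked, 5))
    _, mask = cv2.threshold(ranked, thresh, 255, cv2.THRESH_BINARY_INV)
    num, _, _, centroids = cv2.connectedComponentsWithStats(mask)

    candidates = []
    for i in range(1, num):                                # label 0 = background
        x, y = (int(round(c)) for c in centroids[i])
        if x - k < 0 or x + k >= gray.shape[1]:
            continue
        # Pupil centers are much darker than the sclera on both sides;
        # eyebrow pixels tend to fail this left/right brightness check.
        left, right, center = int(gray[y, x - k]), int(gray[y, x + k]), int(gray[y, x])
        if min(left, right) - center >= min_boundary_diff:
            candidates.append((x, y))
    return candidates
```

Pairing the surviving candidates and scoring each pair with a fitness function, as described above, would follow as a separate step.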

A Study on the Expression Transformation of Visual Information in 3D Architectural Models (3차원 건축모델정보의 표현변용방식에 관한 연구)

  • Park, Young-Ho
    • Korean Institute of Interior Design Journal, v.22 no.1, pp.105-114, 2013
  • This study investigated the application and transformation of visual information in various architectural models by analyzing the expression viewpoints and media applied to digitized 3D contemporary architectural models. The purpose was to specify how contemporary architects transform 3D architectural models into conceptual, logical, and formational visual information during the design process. An analytical framework was derived by theoretically examining the relationship between expression media and expression transformation in the process of visualizing architectural models. Using this framework, the study analyzed how the expression viewpoints of architectural model information have been transformed and applied. The transformation media of the visual information of digitized 3D architectural models can be classified into conceptual, analytical, and formational information: 1) Contemporary architects used author-centered subjective viewpoints to express the architectural concepts generated during their design process. They selected perspective and bird's-eye views to present their architectural concepts, depicting them with a single architectural model by expanding the visual scope of the conceptual information. 2) Contemporary architects adopted observer-centered, objective bird's-eye-view expression media to present architectural information effectively to building owners and viewers. They used transformational media, which integrate architectural information into 3D and change it to different scales, to express their architecture logically. 3) Contemporary architects delivered model information about the generation and transformation of forms by expressing the image of a project from an author-centered viewpoint, instead of defining formational information objectively. They explained the generation principle of architectural forms via transformational media which develop and rotate an architectural model.

Development of Hand-held OCT probe for Ophthalmic Imaging (안구 영상을 위한 OCT용 손잡이 형 프로브의 개발)

  • Cho, Nam-Hyun;Jung, Woong-Gyu;Jung, Un-Sang;Boppart, Stephen A.;Shim, Jae-Hoon;Kim, Jee-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SC, v.48 no.1, pp.24-30, 2011
  • We have developed a hand-held probe for an ophthalmic OCT system. The hand-held imaging probe was designed to be compact and portable. Cornea and retinal images were acquired by exchanging the objective lens at the front of the probe. To verify the performance of the hand-held OCT probe, we acquired two-dimensional OCT images of a rat eye in vivo and reconstructed three-dimensional renderings of the rat eye. The in vivo 3D OCT images showed distinct structural information in the anterior and posterior chambers with minimal motion artifacts, indicating that the OCT imaging speed is suitable for dynamic in vivo experiments.

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung (박강령)
    • The Journal of Korean Institute of Communications and Information Sciences, v.29 no.4C, pp.494-504, 2004
  • Research on gaze detection has developed considerably, with many applications. Most previous work relies only on image-processing algorithms, so it requires much processing time and has many constraints. In this work, we implement gaze detection as a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, the 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by the facial movement is computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an RMS error of about 4.2 cm between the computed positions and the real ones.
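The plane-normal idea described above can be sketched in a few lines; the following NumPy code (not the paper's implementation, and ignoring the SVM feature detection and neural-network eye-gaze stage) fits a plane to 3D facial feature points and intersects its normal with a monitor plane at an assumed distance.

```python
import numpy as np

def facial_gaze_point(features_3d, monitor_z=500.0):
    """Illustrative plane-normal gaze estimate.

    features_3d : (N, 3) facial feature positions in camera coordinates (mm)
    monitor_z   : assumed distance of the monitor plane along the z axis (mm)
    Returns the (x, y) intersection of the facial normal with the monitor plane.
    """
    centroid = features_3d.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centred feature points.
    _, _, vt = np.linalg.svd(features_3d - centroid)
    normal = vt[-1]
    if normal[2] < 0:            # orient the normal toward the monitor
        normal = -normal

    # Cast a ray from the feature centroid along the normal and intersect
    # it with the plane z = monitor_z.
    t = (monitor_z - centroid[2]) / normal[2]
    return centroid[:2] + t * normal[:2]
```

In the paper the 3D feature positions themselves come from rotation/translation estimation and an affine transform; here they are simply assumed as input.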

An Implementation and Performance Analysis of Emotion Messenger Based on Dynamic Gesture Recognitions using WebCAM (웹캠을 이용한 동적 제스쳐 인식 기반의 감성 메신저 구현 및 성능 분석)

  • Lee, Won-Joo
    • Journal of the Korea Society of Computer and Information, v.15 no.7, pp.75-81, 2010
  • In this paper, we propose an emotion messenger that recognizes a user's face or hand gestures with a WebCAM, converts the recognized emotions (joy, anger, grief, happiness) to flashcons (Flash emoticons), and transmits them to the counterpart. The messenger consists of a face recognition module, a hand-gesture recognition module, and a messenger module. The face recognition module converts the eye and mouth regions to binary images and recognizes winks, kisses, and yawns from shape changes of the eye and mouth. The hand-gesture recognition module recognizes gawi-bawi-bo (rock-paper-scissors) gestures from the number of detected fingers. The messenger module converts the expressions recognized by the face module and the gestures recognized by the hand module to flashcons and transmits them to the counterpart. Through simulation, we confirmed that the CPU usage of the emotion messenger is kept to a minimum. Moreover, in terms of recognition rate, the hand-gesture recognition module performs better than the face recognition module.
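As an illustration of the finger-count step only (the paper does not publish its code, so this is a generic OpenCV convexity-defect approach rather than the messenger's actual module), the sketch below counts extended fingers in a binary hand mask and maps the count to gawi-bawi-bo.

```python
import cv2
import numpy as np

def count_fingers(hand_mask):
    """Count extended fingers in a binary hand mask via convexity defects."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    valleys = 0
    for start_idx, end_idx, far_idx, depth in defects[:, 0]:
        start, end, far = hand[start_idx][0], hand[end_idx][0], hand[far_idx][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        # A deep, narrow defect is the valley between two raised fingers.
        angle = np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-6), -1, 1))
        if angle < np.pi / 2 and depth > 10000:      # depth is in 1/256-pixel units
            valleys += 1
    return valleys + 1 if valleys else 0

def classify_gawi_bawi_bo(n_fingers):
    """Hypothetical mapping from finger count to rock-paper-scissors."""
    if n_fingers >= 4:
        return "bo"        # paper
    if n_fingers == 2:
        return "gawi"      # scissors
    return "bawi"          # rock
```

The thresholds (valley angle, defect depth) are illustrative and would need tuning against real webcam frames.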

3D First Person Shooting Game by Using Eye Gaze Tracking (눈동자 시선 추적에 의한 3차원 1인칭 슈팅 게임)

  • Lee, Eui-Chul;Park, Kang-Ryoung
    • The KIPS Transactions: Part B, v.12B no.4 s.100, pp.465-472, 2005
  • In this paper, we propose a method for steering the gaze direction of a 3D FPS game character using eye-gaze detection from successive images captured by a USB camera attached beneath an HMD. The proposed method consists of three parts. First, the user's pupil center is detected from the successive input images with a real-time image-processing algorithm. Second, during calibration, the geometric relationship between the gaze position on the monitor and the detected pupil center is determined while the user gazes at the monitor plane. Third, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by that gaze position using the calibration information. Experimental results show that the method can be used by handicapped players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the player with the view direction of the game character.
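The calibration step described above amounts to learning a map from pupil-center coordinates to on-screen coordinates. The sketch below uses a simple least-squares affine model, which is one common choice; the paper does not necessarily use this exact model, and `detect_pupil_center` and `game.set_view_direction` are hypothetical placeholders for the first and last stages.

```python
import numpy as np

def fit_gaze_calibration(pupil_points, screen_points):
    """Fit an affine map from pupil-center coordinates to screen coordinates.

    pupil_points  : (N, 2) pupil centers recorded while gazing at known targets
    screen_points : (N, 2) corresponding target positions on the HMD screen
    Returns a 3x2 matrix M such that [px, py, 1] @ M approximates [sx, sy].
    """
    A = np.hstack([pupil_points, np.ones((len(pupil_points), 1))])
    M, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return M

def pupil_to_screen(pupil_xy, M):
    """Map a newly detected pupil center to an estimated gaze position."""
    return np.hstack([pupil_xy, 1.0]) @ M

# Usage sketch: calibrate on a few known targets, then drive the game view.
# pupil = detect_pupil_center(frame)            # hypothetical pupil detector
# gaze_x, gaze_y = pupil_to_screen(pupil, M)
# game.set_view_direction(gaze_x, gaze_y)       # hypothetical game API
```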

The Effect of Teacher Participation-Oriented Education Program Centered on Multi-Faceted Analysis of Elementary Science Classes on the Class Expertise of Novice Teacher (초등 과학수업의 다면적 분석을 중심으로 한 교사 참여형 교육프로그램이 초보교사의 수업전문성에 미치는 효과)

  • Shin, Won-Sub;Shin, Dong-Hoon
    • Journal of Korean Elementary Science Education, v.38 no.3, pp.406-425, 2019
  • The purpose of this study is to analyze the effect of a Teacher Participation-Oriented Education Program (TPEP) centered on multi-faceted analysis of elementary science classes on the class expertise of novice teachers. To develop the TPEP, lecture-style and exploratory science classes were first analyzed using imaging and eye-tracking techniques. The TPEP was developed in five stages: image analysis, eye-movement analysis, teaching-language analysis, gesture analysis, and class development. Participants directly analyzed the classes of experienced and novice teachers at each stage. The TPEP developed in this study differs from existing teacher education programs in that it reflects human performance technology. The participants analyzed actual elementary science classes in a multi-faceted way and developed better classes based on that analysis. The results of this study are as follows. First, teacher training institutions and schools should provide pre-service and novice teachers with varied experience in class analysis and in multi-faceted analysis of their own classes. Second, this study identified the limitations of existing class observation and video analysis. Third, TPEPs should be developed to improve novice teachers' class expertise. Finally, we hope that the results of this study serve as basic data for developing programs to improve teachers' class expertise in teacher training institutions and education policy institutions.