• Title/Abstract/Keyword: Eye point

Search results: 340 (processing time: 0.027 seconds)

이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출 (Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot)

  • 권오상
    • 센서학회지
    • /
    • Vol.14 No.4
    • /
    • pp.278-285
    • /
    • 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of travel for movement or surveillance purposes. For identification against the surrounding environment, three landmarks fitted with infrared LEDs were installed at the charging station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks with its vision camera. To eliminate the effects of outside light interference during this process, a difference image was generated by comparing two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens introduced significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In experiments, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
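
The LED on/off difference-imaging step described in this abstract can be illustrated with a short sketch. This is a minimal example assuming grayscale frames captured with the landmark LEDs on and off; the brightness threshold and minimum blob size are hypothetical values, not the authors' implementation.

```python
import cv2

def detect_led_landmarks(frame_on, frame_off, thresh=60):
    """Locate infrared-LED landmarks by differencing frames taken with the
    LEDs on and off, which suppresses ambient illumination."""
    # Absolute difference removes the static background lighting.
    diff = cv2.absdiff(frame_on, frame_off)
    # Keep only pixels that brightened when the LEDs switched on.
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Connected components give one centroid per candidate landmark.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background); discard blobs smaller than a few pixels.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 5]
```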

얼굴 영상에서 유전자 알고리즘 기반 형판정합을 이용한 눈동자 검출 (Detection of Pupil using Template Matching Based on Genetic Algorithm in Facial Images)

  • 이찬희;장경식
    • 한국정보통신학회논문지
    • /
    • Vol.13 No.7
    • /
    • pp.1429-1436
    • /
    • 2009
  • This paper proposes a method for quickly detecting pupils in a single facial image under various lighting conditions using a genetic algorithm and template matching. Existing pupil detection methods based on genetic algorithms are sensitive to the position of the initial population, showing a low eye detection rate and producing inconsistent results. To resolve these problems, local minima are extracted from the face image and the initial population is generated from the individuals with the highest fitness against the template. Each individual encodes the geometric transformation parameters of the template, and the pupils are detected by template matching. Experiments confirmed that the proposed eye candidate detection yields accurate eye detection and a high detection rate even on single images.
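
The genetic-algorithm template matching summarized above can be sketched roughly as follows. The chromosome layout (x, y, scale), the normalized cross-correlation fitness, and all population parameters are illustrative assumptions rather than the paper's settings; `seeds` stands for the dark local minima used to build the initial population.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def fitness(gray, tmpl, x, y, s):
    """Normalized cross-correlation between the scaled template and the
    image patch at (x, y); higher is a better match."""
    h, w = int(tmpl.shape[0] * s), int(tmpl.shape[1] * s)
    if x < 0 or y < 0 or y + h > gray.shape[0] or x + w > gray.shape[1] or h < 4 or w < 4:
        return -1.0
    patch = gray[y:y + h, x:x + w]
    scaled = cv2.resize(tmpl, (w, h))
    return float(cv2.matchTemplate(patch, scaled, cv2.TM_CCOEFF_NORMED)[0, 0])

def ga_match(gray, tmpl, seeds, generations=30, pop_size=40):
    """Evolve (x, y, scale) individuals seeded at dark local minima
    (candidate pupil centres) toward the best template fit."""
    pop = [(x, y, 1.0) for x, y in seeds[:pop_size]]
    while len(pop) < pop_size:  # pad with random individuals if needed
        pop.append((rng.integers(gray.shape[1]), rng.integers(gray.shape[0]), 1.0))
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: fitness(gray, tmpl, *map(int, p[:2]), p[2]),
                        reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):  # crossover plus Gaussian mutation
            a, b = elite[rng.integers(len(elite))], elite[rng.integers(len(elite))]
            children.append(((a[0] + b[0]) / 2 + rng.normal(0, 3),
                             (a[1] + b[1]) / 2 + rng.normal(0, 3),
                             max(0.5, (a[2] + b[2]) / 2 + rng.normal(0, 0.05))))
        pop = elite + children
    best = max(pop, key=lambda p: fitness(gray, tmpl, *map(int, p[:2]), p[2]))
    return int(best[0]), int(best[1]), best[2]
```

Seeding the population at local intensity minima, rather than at random positions, is what the abstract credits with making the detection results consistent across runs.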

라이팅 시뮬레이션을 위한 분광특성기반의 랜더링 기법 (Spectral-based rendering technique for lighting simulation)

  • 이명영;조양호;이철희;하영호
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2005년도 추계종합학술대회
    • /
    • pp.379-382
    • /
    • 2005
  • This study proposes an effective algorithm that can render a realistic image of a lighting environment, especially an automotive rear lamp, using the backward ray tracing method. To produce a realistic image similar to that perceived by the human eye, the incident light energy at the eye point estimated by the ray-tracing algorithm is represented by XYZ tristimulus values, which are then converted into RGB values that account for the particular display device.
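
The XYZ-to-RGB conversion mentioned at the end of the abstract depends on the target display; the sketch below assumes an sRGB display and uses the standard IEC 61966-2-1 matrix and transfer function, which may differ from the device model used in the paper.

```python
import numpy as np

# Linear XYZ -> sRGB matrix (IEC 61966-2-1, D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Convert XYZ tristimulus values (scaled so Y=1 is white) to 8-bit sRGB."""
    rgb_linear = XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    rgb_linear = np.clip(rgb_linear, 0.0, 1.0)  # clip out-of-gamut values
    # sRGB transfer function (gamma encoding).
    rgb = np.where(rgb_linear <= 0.0031308,
                   12.92 * rgb_linear,
                   1.055 * rgb_linear ** (1 / 2.4) - 0.055)
    return np.round(rgb * 255).astype(np.uint8)

# Example: the D65 white point (X, Y, Z) ≈ (0.9505, 1.0, 1.089) maps to near (255, 255, 255).
print(xyz_to_srgb([0.9505, 1.0, 1.089]))
```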

찰가자미(Microstomus achne) 초기생활기의 상대 성장 (Relative Growth of Microstomus achne (Pleuronectidae, PISCES) during Early Life Stage)

  • 변순규;강충배;한경호;김진구
    • 한국수산과학회지
    • /
    • Vol.46 No.6
    • /
    • pp.970-972
    • /
    • 2013
  • We examined the relative growth of Microstomus achne during early life stages of laboratory-reared larvae and juveniles. Turning points in the relative growth of preanal length and upper jaw length against total length occurred during the settlement period (11.12-19.91 mm in total length). However, turning points in the relative growth of head length and eye diameter, as compared to total length, occurred during metamorphosis (17.57-22.47 mm in total length). Our results suggest that Microstomus achne concentrates its energy on the feeding apparatus (jaw) and digestive organs (intestine) rather than sensory or neural organs (eye, head) during early larval stage growth.

Single-Chip Eye Ball Sensor using Smart CIS Pixels

  • Kim, Dongsoo;Seunghyun Lim;Gunhee Han
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 II
    • /
    • pp.847-850
    • /
    • 2003
  • An Eye Ball Sensor (EBS) is a system that locates the point at which the user is gazing. A conventional EBS based on a CCD camera requires many peripherals and software computation, causing high cost and power consumption. This paper proposes a compact EBS using smart CMOS Image Sensor (CIS) pixels. The proposed single-chip EBS does not need any peripherals and operates at higher speed and lower cost than the conventional EBS. A test chip with a 32×32 smart CIS pixel array was designed and fabricated in a 0.35 μm CMOS process, occupying 5.3 mm² of silicon area.

한방안이비인후피부과 외래환자의 통계적 관찰 (The statistical analysis of ophthalmology, otolaryngology, dermatology new outpatients)

  • 차재훈;김윤범;남혜정
    • 한방안이비인후피부과학회지
    • /
    • Vol.20 No.3
    • /
    • pp.169-180
    • /
    • 2007
  • Objective : This study explored changes in the new outpatients of ophthalmology, otolaryngology, and dermatology. Methods : We performed a statistical analysis of 4638 new outpatients who visited the department of ophthalmology, otolaryngology, and dermatology at Kyunghee Oriental Medicine Center from January 1, 2004 to December 31, 2006 with ophthalmologic, otologic, rhinologic, laryngologic, and dermatologic diseases. Results : The results were as follows. 1. By classification, dermatology accounted for the largest share of new outpatients at 44.74%, followed by rhinology at 26.50%, otology at 14.45%, ophthalmology at 8.78%, and laryngology at 5.54%. Outpatients increased in all classifications except ophthalmology, with the largest increase in dermatology. 2. Among new ophthalmology outpatients, the 51-60 age group was the largest at 20.15%. Dry eye was the most common complaint at 36.61%, followed by visual disorder at 27.03% and strabismus at 10.07%. The proportion of strabismus decreased by 43.93 percentage points, while dry eye increased by 32.17 percentage points. 3. Among new otology outpatients, the 61+ age group was the largest at 24.94%. Tinnitus and hearing loss were the most common at 64.03%, followed by vertigo and dizziness at 17.46%. Tinnitus and hearing loss decreased by 17.36 percentage points, while vertigo and dizziness increased by 14.91 percentage points. 4. Among new rhinology outpatients, the 0-10 age group was the largest at 40.93%, an increase of 20.08 percentage points. Rhinitis was the most common at 69.30% but decreased by 17.70 percentage points, while sinusitis accounted for 32.59%, an increase of 14.41 percentage points. 5. Among new laryngology outpatients, 68.09% were female and the 51-60 age group was the largest at 23.35%; laryngopharyngitis accounted for 29.96%, stomatitis for 19.07%, and diseases of the tongue for 14.40%. 6. Among new dermatology outpatients, the 21-30 age group was the largest at 37.21%. Atopic dermatitis was the most common at 22.93%, followed by urticaria at 14.77%, an increase of 8.19 percentage points, while the proportions of acne and pruritus decreased. Conclusions : There have been many changes in the new outpatients of ophthalmology, otolaryngology, and dermatology.

여성의류 매장 공간의 구도에 나타난 공간구성의 주의집중 특성 - 백화점 매장의 순회동선을 대상으로 - (Features of Attention to Space Structure of Spatial Composition in Women's Shop - Targeting the Circulation Line of Department Store -)

  • 최계영;손광호
    • 한국실내디자인학회논문집
    • /
    • Vol.26 No.2
    • /
    • pp.3-12
    • /
    • 2017
  • This study analyzed the features of attention to spatial composition seen in the "Seeing ↔ Seen" correlation of continuous movement through a space. Eye tracking was employed to collect data on attention to the space, so that the correlation between visual perception and space could be estimated through attention to differences in spatial composition and display. First, it was confirmed that the attention features varied with the structure of the shops and the degree of exposure of the selling space: the vanishing-point structure characteristically drew the customers' eyes to the central part while reducing attention to both sides of the shops. Second, initial observation activity was found to be concentrated at eye height. Third, 10 images were selected for the continuous experiment. Although it was expected that the central part of each image would receive intense attention during initial observation, only two of the images showed this. Fourth, a previous eye-tracking study had reported that attention concentrates on the central part of the first image seen; this study, however, revealed that the phenomenon is limited to the first image. Accordingly, to ensure the reliability of data acquired from eye-tracking experiments, methods such as excluding the initial attention time on the first image, or excluding the initial image experiment from the analysis, should be devised.

어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM (Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image)

  • 최윤원;최정원;대염염;이석규
    • 제어로봇시스템학회논문지
    • /
    • Vol.20 No.8
    • /
    • pp.868-874
    • /
    • 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors have distortion problems because they use a fish-eye lens or mirror, but real-time image processing for mobile robots is possible because all the information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points in corrected fish-eye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted in the bottom direction. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated using an extended Kalman filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the mapping algorithm using motion estimation based on fish-eye images is confirmed by comparing maps obtained with the proposed algorithm against real maps.
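
The Lucas-Kanade motion-vector step (LKOF) can be sketched as follows. The corner detector, window size, and pyramid depth are illustrative assumptions, and `obstacle_mask` stands in for the labelled obstacle candidates produced by the histogram-filter step; this is not the authors' parameterization.

```python
import cv2
import numpy as np

def obstacle_motion_vectors(prev_gray, curr_gray, obstacle_mask=None):
    """Track candidate obstacle points between two fish-eye frames with
    pyramidal Lucas-Kanade optical flow and return their motion vectors."""
    # Pick trackable corners, optionally restricted to labelled obstacle regions.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=obstacle_mask)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Pyramidal LK tracks each point into the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    p0, p1 = pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
    return p0, p1 - p0  # point positions and their motion vectors
```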

다양한 색공간 정보를 이용한 눈 영역의 특징벡터 생성 기법 (A Technique of Feature Vector Generation for Eye Region Using Embedded Information of Various Color Spaces)

  • 박정환;신판섭;김국보;정종진
    • 전기학회논문지
    • /
    • Vol.64 No.1
    • /
    • pp.82-89
    • /
    • 2015
  • Image recognition has been studied for a long time. In particular, face recognition technology has attracted attention as it has advanced and been applied to various areas, with camera sensors embedded in many devices such as smartphones. In this study, we design and develop a technique for generating facial feature vectors for making animated caricatures, using face detection methods that form the stage preceding face recognition. First, we detect both the face region and the detailed eye region, a component element, as regions of interest (ROI) using Viola and Jones's real-time detection method. We then generate feature vectors for the eye region by utilizing factors contrasted with the periphery and by using the appearance information of the eye. Here we focus on the information embedded in many color spaces in order to overcome the problems that can arise from using a single color space, and propose a feature vector generation method that uses information from multiple color spaces. Finally, we test feature vector generation with the proposed method on a sufficient quantity of sample picture data and evaluate it on performance measures such as error rate, accuracy, and generation time.
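
A rough sketch of the detection-plus-multi-color-space idea is shown below. The Haar cascade files are the stock ones shipped with OpenCV, and the histogram-based vector layout is an assumption; the paper's actual features contrast the eye with its periphery, which is not reproduced here.

```python
import cv2
import numpy as np

# Haar cascades shipped with OpenCV (stock file names, used here as an assumption).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_feature_vector(bgr_image, bins=16):
    """Detect the face and eye ROIs with Viola-Jones cascades, then build a
    feature vector from channel histograms in several color spaces."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    roi = bgr_image[y + ey:y + ey + eh, x + ex:x + ex + ew]

    # Concatenate normalized histograms from BGR, HSV and YCrCb channels so the
    # vector carries information embedded in several color spaces.
    feats = []
    for space in (roi,
                  cv2.cvtColor(roi, cv2.COLOR_BGR2HSV),
                  cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)):
        for ch in cv2.split(space):
            hist = cv2.calcHist([ch], [0], None, [bins], [0, 256]).ravel()
            feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)
```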

색상 움직임을 이용한 얼굴 특징점 자동 추출 (Automatic Extraction of the Facial Feature Points Using Moving Color)

  • 김남호;김형곤;고성제
    • 전자공학회논문지S
    • /
    • Vol.35S No.8
    • /
    • pp.55-67
    • /
    • 1998
  • This paper proposes a method for rapidly extracting the facial feature points corresponding to the eyes and mouth from a color video sequence. To stably extract a freely moving face region, we use the concept of moving color, in which a motion detection technique is applied to a color-transformed image based on the facial color distribution so that only moving skin-color regions are detected efficiently. The motion information is weighted according to the skin-color likelihood, and the threshold that decides whether each pixel is moving is also determined adaptively according to the skin-color likelihood. Eye candidate regions are extracted from the moving skin-color region using the color distribution of the eyes and morphological operators, and the eye regions are finally determined using the positional relationship between the eyes and eyebrows. For the mouth, a candidate region is set relative to the eye positions and the mouth region is detected using a color histogram. PCA (Principal Component Analysis) is used to obtain the exact locations of the feature points within the detected eye and mouth regions. Experimental results show that accurate facial feature points could be extracted at high speed regardless of complex backgrounds, individual variation, and face orientation and size.
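
The moving-color idea can be sketched as follows. The skin-color model (a Gaussian around a nominal CrCb center), the likelihood weighting, and the adaptive threshold rule are illustrative assumptions rather than the authors' trained skin-color distribution.

```python
import cv2
import numpy as np

def moving_skin_mask(prev_bgr, curr_bgr, base_thresh=30.0):
    """Combine frame differencing with a skin-color likelihood so that only
    moving skin-colored pixels survive ('moving color')."""
    # Crude skin likelihood from distance to a nominal skin center in CrCb space.
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    dist2 = (cr - 150.0) ** 2 + (cb - 110.0) ** 2   # nominal skin center (assumed)
    skin_prob = np.exp(-dist2 / (2 * 20.0 ** 2))    # in [0, 1]

    # Frame difference weighted by the skin likelihood.
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)).astype(np.float32)
    weighted = diff * skin_prob

    # Adaptive per-pixel threshold: likely skin pixels need less motion to count.
    thresh = base_thresh * (1.5 - skin_prob)
    mask = (weighted > thresh).astype(np.uint8) * 255

    # Morphological opening removes isolated noise before eye-candidate extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```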
