• Title/Summary/Keyword: Image-Based Lighting


Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식;박동희;남기환;한준희;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.227-232 / 2003
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable, realistic lighting conditions. By combining the bright-pupil effect produced by IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated into both eye detection and tracking via a support vector machine and mean-shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.

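The bright/dark-pupil differencing that this abstract builds on can be sketched as follows (synthetic frames and an arbitrary threshold; the paper's SVM verification and mean-shift tracking stages are omitted):

```python
import numpy as np

def detect_pupils(bright_frame, dark_frame, thresh=50):
    """Locate a pupil candidate via the bright-pupil effect.

    The IR illuminator alternates between on-axis (bright pupil) and
    off-axis (dark pupil) frames; subtracting the two leaves the pupil
    as the dominant bright blob."""
    diff = bright_frame.astype(np.int16) - dark_frame.astype(np.int16)
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None
    # Centroid of the thresholded difference region as a pupil estimate.
    return float(xs.mean()), float(ys.mean())

# Synthetic example: a bright pupil centered at (12, 20) on a flat scene.
dark = np.full((32, 32), 80, dtype=np.uint8)
bright = dark.copy()
bright[18:23, 10:15] = 200
print(detect_pupils(bright, dark))  # → (12.0, 20.0)
```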

Rendering Method of Light Environment Based on Modeling of Physical Characteristic (물리적 특성 모델링에 기반한 라이팅 환경의 랜더링 기법)

  • Lee, Myong-Young;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.46-56 / 2006
  • In this paper, we propose an improved algorithm for reproducing a realistic image of a real scene based on the optical characteristics of the light sources and materials in the lighting environment. This paper continues our previous study by improving the modeling of the light sources and materials and applying it to a real automobile rear lamp. Backward ray tracing is first used to trace light rays from a light source; the method also considers the physical characteristics of object surfaces and the geometric properties of light radiation to accurately estimate the light energy arriving at the human eye. For experiments and verification of the proposed method, the simulation results are compared with measured light stimuli. The results show that the proposed algorithm estimates light energy well and reproduces an image visually similar to the scene seen by a viewer.
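The basic per-ray quantity such a backward ray tracer accumulates, irradiance with inverse-square falloff and Lambert's cosine term, can be sketched as below (illustrative values only; the paper's measured source and surface models are not reproduced):

```python
import numpy as np

def irradiance(point, normal, light_pos, light_intensity):
    """Irradiance at a surface point from an isotropic point light:
    inverse-square distance falloff times Lambert's cosine law."""
    to_light = np.asarray(light_pos, float) - np.asarray(point, float)
    dist2 = float(to_light @ to_light)
    cos_theta = max(0.0, float(to_light @ normal) / np.sqrt(dist2))
    return light_intensity * cos_theta / dist2

# Light 2 units directly above a horizontal surface patch.
E = irradiance([0, 0, 0], np.array([0.0, 0.0, 1.0]), [0, 0, 2], 100.0)
print(E)  # → 25.0
```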

A Color-Based Medicine Bottle Classification Method Robust to Illumination Variations (조명 변화에 강인한 컬러정보 기반의 약병 분류 기법)

  • Kim, Tae-Hun;Kim, Gi-Seung;Song, Young-Chul;Ryu, Gang-Soo;Choi, Byung-Jae;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.57-64 / 2013
  • In this paper, we propose a method for classifying medicine bottle images using color and size features. Classification by size alone is difficult because many bottles have similar sizes, so we add a color-based classification that is robust to illumination variations. First, we extract the MBR (Minimum Bounding Rectangle) of the medicine bottle region by binary thresholding the red, green, and blue channels of the image, and classify images by size. Then hue information and the RGB color average ratio, features that are robust to lighting variations, are used to classify the images. Finally, using the SURF (Speeded-Up Robust Features) algorithm, the corresponding image is found among the candidates selected by the previously extracted features. The proposed method reduces execution time and minimizes the error rate, and experiments confirm that it is reliable and efficient.
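The MBR extraction and the illumination-robust color ratio described above can be sketched roughly as follows (hypothetical threshold and test image; the SURF matching stage is omitted):

```python
import numpy as np

def minimum_bounding_rectangle(img, thresh=100):
    """MBR (x0, y0, x1, y1) of pixels whose maximum R, G, B value
    exceeds `thresh`, a crude bottle-versus-dark-background split."""
    mask = img.max(axis=2) > thresh
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def rgb_average_ratio(region):
    """Per-channel mean divided by the overall mean.  Scaling the whole
    image by a constant (a uniform illumination change) leaves this
    feature unchanged, which is why ratios rather than raw averages
    suit lighting-robust matching."""
    means = region.reshape(-1, 3).mean(axis=0)
    return means / means.mean()

# A red "bottle" on a black background.
img = np.zeros((20, 30, 3), dtype=np.uint8)
img[5:15, 8:22] = (200, 40, 40)
print(minimum_bounding_rectangle(img))  # → (8, 5, 21, 14)
# Halving the brightness leaves the ratio feature unchanged.
print(np.allclose(rgb_average_ratio(img[5:15, 8:22]),
                  rgb_average_ratio(img[5:15, 8:22] // 2)))  # → True
```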

Multi-platform Visualization System for Earth Environment Data (지구환경 데이터를 위한 멀티플랫폼 가시화 시스템)

  • Jeong, Seokcheol;Jung, Seowon;Kim, Jongyong;Park, Sanghun
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.36-45 / 2015
  • Creating continuous high-definition images from very large volume data is an important research subject in engineering and the natural sciences. Visualization techniques that effectively present high-resolution data as visual images have increased the need for software that helps analyze the useful information in such data. In this paper, we design a client-server, multi-platform visualization system to effectively analyze and present earth environment data constructed from observation and prediction. The visualization server, composed of a cluster, transfers data to clients through parallel/distributed computing, and the client is developed to run on various platforms and visualize the data. In addition, we aim for a user-friendly program through multi-touch and sensor input, and produce realistic simulation images with an image-based lighting technique.
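The image-based lighting technique mentioned at the end can be sketched in its simplest diffuse form, assuming a hypothetical latitude-longitude environment map (the paper's actual renderer is not detailed in the abstract):

```python
import numpy as np

def diffuse_ibl(env, normal):
    """Diffuse shading from a latitude-longitude environment map:
    integrate radiance * cos(theta) over the hemisphere around the
    normal, weighting each texel by its solid angle."""
    h, w = env.shape
    theta = (np.arange(h) + 0.5) / h * np.pi        # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi      # azimuth per column
    st, ct = np.sin(theta), np.cos(theta)
    dirs = np.stack([np.outer(st, np.cos(phi)),     # unit direction of
                     np.outer(st, np.sin(phi)),     # every texel
                     np.outer(ct, np.ones(w))], axis=-1)
    cos_term = np.clip(dirs @ normal, 0.0, None)    # hemisphere clamp
    solid_angle = (np.pi / h) * (2 * np.pi / w) * st[:, None]
    return float((env * cos_term * solid_angle).sum())

env = np.ones((64, 128))            # constant white environment
n = np.array([0.0, 0.0, 1.0])
print(diffuse_ibl(env, n))          # close to pi for a uniform sky
```

For a uniform unit-radiance environment the result approaches π, the analytic cosine-weighted hemisphere integral, which is a quick sanity check on the solid-angle weighting.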

An Illumination and Background-Robust Hand Image Segmentation Method Based on the Dynamic Threshold Values (조명과 배경에 강인한 동적 임계값 기반 손 영상 분할 기법)

  • Na, Min-Young;Kim, Hyun-Jung;Kim, Tae-Young
    • Journal of Korea Multimedia Society / v.14 no.5 / pp.607-613 / 2011
  • In this paper, we propose a hand image segmentation method using dynamic threshold values on input images with various lighting and background conditions. First, a moving hand silhouette is extracted using camera input difference images. Next, based on an R, G, B histogram analysis of the extracted hand silhouette area, a threshold interval for each of R, G, and B is calculated at run time. Finally, the hand area is segmented by thresholding, and then a morphology operation, connected-component analysis, and flood-fill operation are performed for noise removal. Experimental results on various input images show that our hand segmentation method provides a high level of accuracy and relatively fast, stable results without the need for fixed threshold values. The proposed method can be used in the user interface of mixed reality applications.
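The run-time per-channel threshold intervals can be sketched as quantile ranges over the sampled silhouette pixels (a simplification; the paper derives the intervals from a full histogram analysis, and the morphology/flood-fill cleanup is omitted):

```python
import numpy as np

def dynamic_thresholds(silhouette_pixels, coverage=0.9):
    """Per-channel [low, high] intervals covering `coverage` of the
    value distribution of the sampled silhouette pixels."""
    lo_q, hi_q = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    return [(np.quantile(silhouette_pixels[:, c], lo_q),
             np.quantile(silhouette_pixels[:, c], hi_q))
            for c in range(3)]

def segment(img, intervals):
    """Keep only pixels falling inside every per-channel interval."""
    mask = np.ones(img.shape[:2], dtype=bool)
    for c, (lo, hi) in enumerate(intervals):
        mask &= (img[..., c] >= lo) & (img[..., c] <= hi)
    return mask

# Three skin-like samples from a (hypothetical) silhouette region.
skin = np.array([[200, 150, 120], [210, 160, 130], [190, 140, 110]], float)
iv = dynamic_thresholds(skin, coverage=1.0)
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [200, 150, 120]     # a 2x2 "hand" patch
print(segment(img, iv).sum())       # → 4
```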

Enhancement of Visibility Using App Image Categorization in Mobile Device (앱 영상 분류를 이용한 모바일 디바이스의 시인성 향상)

  • Kim, Dae-Chul;Kang, Dong-Wook;Kim, Kyung-Mo;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.77-86 / 2014
  • Mobile devices generally use app images that are artificially designed. Accordingly, this paper presents a method for adjusting device brightness based on app image categorization to enhance visibility under various lighting conditions. First, two prior subjective tests were performed under various lighting conditions to select app image features related to visibility and a satisfactory range of device brightness for each app image. The relationship between the selected features and the satisfactory brightness range was then analyzed. Next, app images are categorized using two features, the average brightness of the app image and the distribution ratio of advancing colors, both of which are related to the satisfactory brightness range. The optimal device brightness for each category is then selected as the brightness with the maximum frequency of satisfaction. Experimental results show that categorized app images displayed at the optimal device brightness achieve a high satisfaction ratio under various lighting conditions.
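A minimal sketch of the categorize-then-set-brightness idea, with entirely hypothetical category boundaries and brightness values standing in for those the paper derives from its subjective tests:

```python
import numpy as np

# Hypothetical per-category optimal screen brightness levels (0-255).
OPTIMAL_BRIGHTNESS = {"dark": 180, "mid": 140, "bright": 110}

def categorize(app_image):
    """Bucket an app image by its average brightness; the paper combines
    this with the distribution ratio of advancing colors."""
    avg = float(app_image.mean())
    if avg < 85:
        return "dark"
    if avg < 170:
        return "mid"
    return "bright"

screen = np.full((10, 10), 40, dtype=np.uint8)   # a mostly dark app screen
print(OPTIMAL_BRIGHTNESS[categorize(screen)])    # → 180
```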

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence / v.17 no.4 / pp.255-260 / 2019
  • This study concerns the efficient compositing of 360° video and 3D graphics. First, video footage filmed with a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then moved into a 3D program to create 3D objects, and methods for natural compositing were investigated. As a result, rendering factors and a rendering method for naturally compositing 360° video and 3D graphics were derived. First, the rendering factors were the 3D objects' location and material quality, lighting, and shadow. Second, the necessity of a rendering method based on the actual video was identified. The compositing method presented through this study is expected to aid the research and production of 360° video and VR video content.

Design of Optimized pRBFNNs-based Night Vision Face Recognition System Using PCA Algorithm (PCA알고리즘을 이용한 최적 pRBFNNs 기반 나이트비전 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Jang, Byoung-Hee
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.1 / pp.225-231 / 2013
  • In this study, we propose the design of an optimized pRBFNNs-based night vision face recognition system using the PCA algorithm. It is difficult to obtain images with a CCD camera due to low brightness in unlit surroundings. The quality of images degraded by low illuminance is improved using a night vision camera and histogram equalization. The AdaBoost algorithm is used to detect the face region by discriminating face from non-face image areas. The dimensionality of the obtained image data is then reduced using PCA. We also introduce pRBFNNs as the recognition module. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned using fuzzy C-means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as three kinds of polynomials: linear, quadratic, and modified quadratic. The essential design parameters of the networks are optimized by means of differential evolution.
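The PCA dimensionality-reduction step can be sketched with a standard SVD formulation (toy 4-D samples stand in for flattened face images; the pRBFNN classifier itself is not reproduced):

```python
import numpy as np

def pca_project(X, k):
    """Project row-vector samples onto the top-k principal components.
    Returns (projected data, components, mean)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]
    return Xc @ W.T, W, mu

# Face vectors would be flattened images; here, 4-D stand-ins whose
# variance is concentrated in the first coordinate.
X = np.array([[2.0, 0.0, 1.0, 1.0],
              [4.0, 0.1, 1.0, 1.0],
              [6.0, -0.1, 1.0, 1.0],
              [8.0, 0.0, 1.0, 1.0]])
Z, W, mu = pca_project(X, 1)
print(Z.shape)  # → (4, 1)
```

Keeping a single component suffices here because almost all the variance lies along the first axis; the reduced vectors Z are what would feed the recognition module.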

Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change (비선형 피부색 변화 모델을 이용한 실감적인 표정 합성)

  • Lee Jeong-Ho;Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.67-75 / 2006
  • Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since it is difficult to generate realistic facial expressions using only geometric deformation, detailed features such as textures should also be deformed to achieve more realistic expressions. Existing methods such as the expression ratio image have the drawback that detailed changes of complexion caused by lighting cannot be generated properly. In this paper, we propose a nonlinear model of skin color change and a model-based facial expression synthesis method that can apply realistic expression details under different lighting conditions. The proposed method consists of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear skin color change model; and synthesis of the original face with the generated expression using a blending ratio computed by the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
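The distance-transform-based blending of the third step can be sketched as follows (a brute-force transform for illustration only; the paper does not specify its implementation, and real code would use an optimized transform):

```python
import numpy as np

def euclidean_dt(mask):
    """Brute-force Euclidean distance transform: distance from each
    foreground pixel to the nearest background (False) pixel.
    Quadratic cost, fine for a tiny illustration."""
    h, w = mask.shape
    bg = np.argwhere(~mask)
    dist = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                dist[y, x] = np.hypot(bg[:, 0] - y, bg[:, 1] - x).min()
    return dist

def blend(original, expression, mask, falloff=2.0):
    """Blend the synthesized expression into the original face with a
    ratio that grows with distance into the expression region, so the
    seam fades smoothly instead of showing a hard edge."""
    alpha = np.clip(euclidean_dt(mask) / falloff, 0.0, 1.0)
    return alpha * expression + (1.0 - alpha) * original

orig = np.zeros((5, 5))               # original face (toy gray values)
expr = np.full((5, 5), 100.0)         # synthesized expression region
mask = np.zeros((5, 5), bool)
mask[1:4, 1:4] = True
out = blend(orig, expr, mask)
print(out[2, 2], out[1, 1])           # → 100.0 50.0
```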

Development of On-line Quality Sorting System for Dried Oak Mushroom - 3rd Prototype-

  • 김철수;김기동;조기현;이정택;김진현
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.8-15 / 2003
  • In Korea, quality evaluation of dried oak mushrooms is done by first classifying them into more than 10 categories based on the state of cap opening, surface pattern, and color. Mushrooms in each category are further classified into 3 or 4 groups based on shape and size, resulting in a total of 30 to 40 grades. Quality evaluation and sorting based on external visual features are usually done manually. Since the visual features affecting quality grade are distributed over the entire surface of the mushroom, both the front (cap) and back (stem and gill) surfaces should be inspected thoroughly. In fact, it is almost impossible for a human to inspect every mushroom, especially when mushrooms are fed continuously on a conveyor. In this paper, considering real-time on-line implementation, image processing algorithms based on an artificial neural network were developed for quality grading of mushrooms. The neural-network-based image processing used the raw gray-value image of fed mushrooms captured by the camera, without any complex processing such as feature enhancement or extraction, to identify the feeding state and grade the quality of each mushroom. The developed algorithms were implemented in a prototype on-line grading and sorting system, designed to simplify the system requirements and overall mechanism. The system was composed of automatic devices for mushroom feeding and handling, a computer vision system with a lighting chamber, a one-chip microprocessor-based controller, and pneumatic actuators. The proposed grading scheme was tested using the prototype. Network training for feeding-state recognition and grading was done using static images: 200 samples (20 grade levels, 10 per grade) were used for training, and 300 samples (20 grade levels, 15 per grade) were used to validate the trained network. By changing the orientation of each sample, 600 data sets were made for the test, and the trained network showed around 91% grading accuracy. Although image processing itself required less than about 0.3 seconds per mushroom, because of the actuating device and control response an average of 0.6 to 0.7 seconds was required to grade and sort each mushroom, giving a processing capability of 5,000 to 6,000 mushrooms per hour.

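The idea of feeding raw gray values straight into a network can be sketched with a tiny untrained feed-forward model (all shapes and weights are hypothetical; the prototype's trained network is not available):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_grade(gray_image, W1, b1, W2, b2):
    """Forward pass of a small feed-forward network mapping a raw
    gray-value image directly to grade scores, mirroring the paper's
    use of unprocessed pixels as network input."""
    x = gray_image.astype(float).ravel() / 255.0   # flatten, normalize
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    scores = h @ W2 + b2                           # one score per grade
    return int(np.argmax(scores))

# Hypothetical sizes: 16x16 input, 32 hidden units, 20 grade levels.
W1 = rng.normal(0, 0.1, (256, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 20));  b2 = np.zeros(20)
grade = mlp_grade(rng.integers(0, 256, (16, 16)), W1, b1, W2, b2)
print(0 <= grade < 20)  # → True
```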