• Title/Summary/Keyword: Image-Based Lighting

Search Results: 240

Indoor Position Estimation Using Stereo Image Sensor and LEDs (스테레오 이미지 센서와 LED 조명을 이용한 실내 측위)

  • Moon, Myoung-Geun;Choi, Su-Il
    • The Journal of Korean Institute of Communications and Information Sciences, v.39B no.11, pp.755-762, 2014
  • In recent years, along with the rapid development of LED technology, many applications combining LEDs with Visible Light Communication (VLC) have been researched. Because it is easy to provide a line-of-sight (LOS) communication environment at a low deployment cost, indoor positioning systems based on VLC have been actively studied. In this paper, we propose an accurate indoor positioning algorithm that uses a stereo image sensor and white-light LEDs with visible light communication. The white-light LEDs are located on the ceiling of a room and broadcast their position information using VLC. A mobile receiver with a stereo image sensor receives the LED position information by VLC and estimates its own position and angle. Simulation results are given to show the efficiency of the proposed indoor positioning algorithm.
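The core geometric step of such a system, recovering an LED's 3D position from a calibrated stereo pair, can be sketched as a depth-from-disparity computation. This is an illustration with assumed intrinsics (focal length `f` in pixels, principal point `(cx, cy)`, baseline in metres), not the paper's full algorithm, which also decodes each LED's broadcast identity and estimates the receiver's angle:

```python
import numpy as np

def triangulate_led(u_left, u_right, v, f, baseline, cx, cy):
    """Triangulate an LED's 3D position from a rectified stereo pair.

    u_left, u_right: horizontal pixel coordinates of the LED in each view;
    v: vertical pixel coordinate; f: focal length in pixels;
    baseline: camera separation in metres; (cx, cy): principal point.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("LED must be in front of the cameras")
    z = f * baseline / disparity   # depth from disparity
    x = (u_left - cx) * z / f      # lateral offset
    y = (v - cy) * z / f           # vertical offset
    return np.array([x, y, z])

# hypothetical calibration: f = 800 px, baseline = 0.1 m, 640x480 sensor
pos = triangulate_led(420, 400, 300, f=800, baseline=0.1, cx=320, cy=240)
```

With these toy values the 20-pixel disparity places the LED 4 m from the cameras.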

Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu;Park, Chang-Woo;Lee, Minsik;Choi, Sang-Il;Ji, Sang-Hoon;Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.5, pp.1690-1710, 2014
  • In this paper, we propose a rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach that is robust to lighting conditions, with a fast method based on a Canonical Correlation Analysis (CCA) algorithm to estimate depth. The reconstruction system is built by first creating a 3D facial mapping from the personal identity vector of a face image. This mapping is then applied to real-world images captured with the built-in camera of a mobile device to form the corresponding 3D depth information. Finally, the facial texture from the face image is extracted and added to the reconstruction results. Experiments with an Android phone show that the implementation of this system as an Android application performs well. The advantage of the proposed method is easy 3D reconstruction of almost all facial images captured in the real world with fast computation. This has been clearly demonstrated in the Android application, which requires only a short time to reconstruct the 3D depth map.
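The depth-estimation idea, learning a mapping from a face's identity vector to its depth vector, can be illustrated with a plain least-squares linear map. This is a simplified stand-in (the paper learns the mapping with CCA), and the toy feature/depth data below are hypothetical:

```python
import numpy as np

def fit_depth_map(features, depths):
    """Learn W minimising ||features @ W - depths||^2 (features: n x d,
    depths: n x m). A linear stand-in for the paper's CCA-based mapping."""
    W, *_ = np.linalg.lstsq(features, depths, rcond=None)
    return W

def predict_depth(W, feature):
    """Map one identity vector to its (flattened) depth vector."""
    return feature @ W

# hypothetical training data: 20 identity vectors and matching depth vectors
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 4))
W_true = rng.normal(size=(4, 3))
depths = features @ W_true

W = fit_depth_map(features, depths)
```

Because the toy data are exactly linear, the recovered map reproduces the training depths; real identity vectors would only be approximated.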

Vision Based Position Detection System of Used Oil Filter using Line Laser (라인형 레이저를 이용한 비전기반 차량용 폐오일필터 검출 시스템)

  • Xing, Xiong;Song, Un-Ji;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.3, pp.332-336, 2010
  • Image processing systems have been applied successfully in many industries. In this study we propose a position detection system for used oil filters that employs a line laser as the sensing device. A camera captures images of the surface of a used oil filter, and the laser beam's location is extracted from the captured image; this location is then used as a cursor position. We also discuss an algorithm that distinguishes the front part from the rear part. In particular, we present a robust and efficient line detection algorithm that allows the system to operate under a variety of lighting conditions and reduces the amount of image parsing required to find the laser position by an order of magnitude.
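The laser-extraction step can be sketched as a per-column peak search, one common way to localize a line laser in an image. This is a minimal illustration; the threshold value is an assumption, and the paper's algorithm additionally prunes the search to gain its order-of-magnitude speedup:

```python
import numpy as np

def laser_line_positions(image, threshold=200):
    """For each column of a 2-D grayscale image, return the row of the
    brightest pixel if it exceeds `threshold`, else -1 (no laser hit)."""
    rows = np.argmax(image, axis=0)                 # brightest row per column
    peaks = image[rows, np.arange(image.shape[1])]  # its intensity
    return np.where(peaks >= threshold, rows, -1)

# synthetic 5x4 frame with a bright "laser" in columns 1 and 3
frame = np.zeros((5, 4), dtype=np.uint8)
frame[2, 1] = 255
frame[2, 3] = 230
positions = laser_line_positions(frame)
```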

Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot (주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘)

  • Kwon, Gi-Il
    • The Journal of Korea Robotics Society, v.10 no.4, pp.223-229, 2015
  • This paper proposes a novel method for detecting hand raising poses in images acquired from a single camera attached to a mobile robot navigating unknown dynamic environments. Due to unconstrained illumination, a high level of variance in human appearance, and unpredictable backgrounds, detecting hand raising gestures in such images is very challenging. The proposed method first detects faces to determine the region of interest (ROI), and in this ROI we detect hands using a HOG-based hand detector. Using the color distribution of the face region, we evaluate each candidate in the detected hand region. To deal with cases where face detection fails, we also use a HOG-based hand raising pose detector. Unlike other hand raising pose detection systems, we evaluate our algorithm with images acquired from the camera as well as images obtained from the Internet that contain unknown backgrounds and unconstrained illumination. The level of variance in hand raising poses in these images is very high. Our experimental results show that the proposed method robustly detects hand raising poses in complex backgrounds and under unknown lighting conditions.
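The candidate-evaluation step, scoring a hand region by how well its color distribution matches the detected face's, can be sketched with histogram intersection over hue values. This is an illustrative reading of the abstract; the hue values and bin count below are assumptions:

```python
import numpy as np

def hue_hist(hues, bins=16):
    """Normalised histogram of a 1-D array of hue values in [0, 180)."""
    h, _ = np.histogram(hues, bins=bins, range=(0, 180))
    return h / max(h.sum(), 1)

def candidate_score(face_hues, hand_hues, bins=16):
    """Histogram intersection between the face's skin-colour distribution
    and a hand candidate's: higher means more skin-like."""
    return np.minimum(hue_hist(face_hues, bins),
                      hue_hist(hand_hues, bins)).sum()

face = np.full(100, 12.0)   # skin hues clustering near red-orange
hand = np.full(50, 14.0)    # candidate with a similar hue
wall = np.full(50, 100.0)   # greenish background patch
```

Under this toy data, the skin-like candidate scores a full intersection of 1.0 while the background patch scores 0.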

Skin Segmentation Using YUV and RGB Color Spaces

  • Al-Tairi, Zaher Hamid;Rahmat, Rahmita Wirza;Saripan, M. Iqbal;Sulaiman, Puteri Suhaiza
    • Journal of Information Processing Systems, v.10 no.2, pp.283-299, 2014
  • Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin color regions using the thresholding technique, since it is simple and computationally fast. The efficiency of each color space depends on its robustness to changes in lighting and its ability to distinguish skin color pixels in images with complex backgrounds. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. After that, the threshold values are selected by testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure accuracy, and the results of our threshold were compared to those of other thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust than others in dealing with complex backgrounds and varying light conditions.
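The convert-then-threshold pipeline the abstract describes can be sketched as below. The RGB-to-YUV coefficients are the standard BT.601 ones; the threshold ranges are illustrative placeholders, not the tuned values selected in the paper:

```python
import numpy as np

def skin_mask(rgb):
    """Binary skin mask for an HxWx3 RGB image: convert to YUV, drop the
    luminance channel Y, and threshold the chrominance channels U and V."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luminance
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    # illustrative skin cluster: positive V, mildly negative U, warm RGB order
    return (v > 10) & (v < 120) & (u > -60) & (u < 10) & (r > g) & (g > b)

pixels = np.array([[[200, 140, 110],    # skin-like tone
                    [0, 0, 255]]],      # pure blue background
                  dtype=np.uint8)
mask = skin_mask(pixels)
```

Discarding Y before thresholding is what gives the method its tolerance to lighting changes: brightness shifts move Y but leave U and V largely intact.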

JND based Illumination and Color Restoration Using Edge-preserving Filter (JND와 경계 보호 평탄화 필터를 이용한 휘도 및 색상 복원)

  • Han, Hee-Chul;Sohn, Kwan-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP, v.46 no.6, pp.132-145, 2009
  • We present a framework for JND-based illumination and color restoration using an edge-preserving filter to restore distorted images taken under arbitrary lighting conditions. The proposed method is effective for appropriate illumination compensation, vivid color restoration, artifact suppression, and automatic parameter estimation, with a computation cost low enough for hardware implementation. We show the efficiency of the mean shift filter and the sigma filter for illumination compensation with a small spread parameter, considering processing time while removing artifacts such as halos and noise amplification. The suggested color restoration filter (CRF) can restore natural color and correct color distortion artifacts more perceptually than current solutions. For automatic processing, image statistics analysis finds suitable parameters using JND, and all constants are pre-defined. We also introduce ROI-based parameter estimation to handle a small shadow area against a spacious well-exposed background in an image, targeting touch-screen cameras. Objective evaluation is performed with CMC, CIEDE2000, PSNR, SSIM, and the 3D CIELAB gamut against state-of-the-art research and existing commercial solutions.
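The sigma filter mentioned for illumination compensation averages only those neighbours whose intensity lies within a spread parameter of the centre pixel, which is what makes it edge-preserving. A minimal sketch follows; the window radius and spread value are arbitrary choices, not the paper's JND-derived parameters:

```python
import numpy as np

def sigma_filter(img, radius=1, sigma=20):
    """Edge-preserving smoothing: each pixel becomes the mean of the
    neighbours in a (2*radius+1)^2 window whose intensity lies within
    +-sigma of its own value; neighbours across an edge are excluded."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = img[i0:i1, j0:j1]
            similar = np.abs(win - img[i, j]) <= sigma
            out[i, j] = win[similar].mean()
    return out

# a hard step edge survives untouched, since the two sides never mix
step = np.array([[0.0, 0.0, 100.0, 100.0]] * 3)
```

A small sigma smooths gentle gradients and noise while leaving strong edges exactly in place, which is why no halo forms around them.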

Deep Learning-based Rice Seed Segmentation for Phenotyping (표현체 연구를 위한 심화학습 기반 벼 종자 분할)

  • Jeong, Yu Seok;Lee, Hong Ro;Baek, Jeong Ho;Kim, Kyung Hwan;Chung, Young Suk;Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems, v.25 no.5, pp.23-29, 2020
  • The National Institute of Agricultural Sciences of the Rural Development Administration (NAS, RDA) is conducting various studies on crops, such as monitoring the cultivation environment and analyzing harvested seeds, for high-throughput phenotyping. In this paper, we propose a deep learning-based rice seed segmentation method to analyze the seeds of various crops owned by the NAS. Using the Mask-RCNN deep learning model, we segment rice seeds in manually taken images acquired under a controlled environment (constant lighting, white background) to analyze the seed characteristics. For this purpose, we perform parameter tuning of the Mask-RCNN model. With the proposed method, the seed object detection tests showed an accuracy of 82% for rice stem images and 97% for rice grain images, respectively. As future work, we plan to research more reliable seed extraction from cluttered seed images using a deep learning-based approach, and selection of high-throughput phenotypes through precise analysis of traits such as length, width, and thickness of the detected seed objects.
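For contrast with the Mask-RCNN instance segmentation the paper uses, the easy case (constant lighting, white background, non-touching grains) can already be separated by classical connected-component labelling. The sketch below is that classical baseline, not the paper's model:

```python
import numpy as np

def count_seeds(mask):
    """Count seed instances in a binary mask via 4-connected component
    labelling (a classical stand-in; the paper uses Mask-RCNN)."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                count += 1
                stack = [(si, sj)]          # flood-fill one component
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < h and 0 <= j < w and mask[i, j] and not seen[i, j]:
                        seen[i, j] = True
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return count

grains = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 1, 0, 1]])
```

On the toy mask the three disconnected blobs are counted as three seeds; the baseline breaks down exactly in the cluttered, touching-seed case the paper's future work targets.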

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation, v.17 no.2, pp.1-10, 2021
  • The ability to recognize human emotions by computer vision is a very important task with many potential applications. Therefore, the demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions, but they require a more sophisticated recognition method because of their low-resolution sources. In this paper, we propose a Divide and Conquer-based CNN training strategy to improve the performance of facial thermal image-based emotion recognition. The proposed method first trains a model to classify similar, difficult-to-distinguish emotion classes into the same class group, identified by confusion matrix analysis, and then divides the problem so that the emotions within each class group are recognized again as the actual emotions. In experiments, the proposed method achieved higher accuracy in all tests than recognizing all of the presented emotions with a single CNN model.
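The two-stage inference implied by the divide-and-conquer strategy can be sketched as a cascade: a coarse classifier assigns an input to an emotion group, and a per-group classifier resolves the actual emotion. The group definitions and stand-in callables below are hypothetical (in the paper the groups come from confusion matrix analysis and the classifiers are CNNs):

```python
# emotion groups as stage 1 would produce them (hypothetical grouping)
GROUPS = {"negative": ["anger", "sadness", "fear"],
          "positive": ["happiness", "surprise"]}

def classify(x, coarse_model, fine_models):
    """Stage 1 picks a group; stage 2 resolves the emotion within it."""
    group = coarse_model(x)
    if len(GROUPS[group]) == 1:
        return GROUPS[group][0]          # nothing left to disambiguate
    return fine_models[group](x)

# toy stand-ins for the CNNs
coarse = lambda x: "negative" if x < 0 else "positive"
fine = {"negative": lambda x: "fear",
        "positive": lambda x: "happiness"}
```

The design point is that each stage faces an easier decision than one flat classifier over all emotions: stage 1 only separates groups that the confusion matrix showed are distinguishable, and stage 2 only disambiguates within a group.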

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI, v.45 no.2, pp.1-9, 2008
  • As the security requirements of mobile phones have been increasing, there has been extensive research on using a single biometric feature (e.g., an iris, a fingerprint, or a face image) for authentication. Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images in order to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, in order to capture both face and iris images quickly and simultaneously, we use the built-in conventional mega-pixel camera of a mobile phone, revised to capture NIR (Near-InfraRed) face and iris images. Second, in order to increase the authentication accuracy of face and iris, we propose a score level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR passing filter on the camera are used to reduce the illumination variance caused by environmental visible lighting, and the consequent saturated region in the face caused by the NIR illuminator is normalized by a low-complexity logarithmic algorithm suited to mobile phones. For the iris, image transformation into polar coordinates and iris code shifting are used to obtain robust identification accuracy irrespective of image capturing conditions. Fourth, to increase the processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were conducted with face and iris images captured by the mega-pixel camera of a mobile phone. They showed that the authentication accuracy using the SVM was better than that of the uni-modal approaches (face or iris) and the SUM, MAX, MIN, and weighted SUM rules.
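Score-level fusion means each matcher produces a score and a trained classifier decides genuine vs. impostor from the score pair. The sketch below uses a simple perceptron as a dependency-free stand-in for the SVM the paper trains; the score data are toy values:

```python
import numpy as np

def train_fusion(scores, labels, epochs=100, lr=0.1):
    """Perceptron over (face_score, iris_score) pairs; labels are +1/-1.
    A stand-in for the paper's SVM: both learn a decision boundary in
    score space instead of fixing SUM/MAX-style combination rules."""
    w = np.zeros(3)                               # bias + two score weights
    X = np.hstack([np.ones((len(scores), 1)), scores])
    for _ in range(epochs):
        for x, y in zip(X, labels):
            if y * (w @ x) <= 0:                  # misclassified: nudge w
                w += lr * y * x
    return w

def is_genuine(w, face_score, iris_score):
    return w @ np.array([1.0, face_score, iris_score]) > 0

# toy data: genuine attempts score high on both modalities
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1, 1, -1, -1])
w = train_fusion(X, y)
```

Learning the boundary lets the fused decision weight the more reliable modality, which is why such fusion can beat fixed SUM, MAX, or MIN rules.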

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society, v.22 no.3, pp.89-99, 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications require processing dynamically varying light sources in real time and reflecting them properly in the rendering results. Previous related works are often not appropriate for this because they are usually designed to synthesize photorealistic images, generating too many (often exponentially increasing) light sources or having too heavy a computational complexity. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources approximately to the size that a user specifies. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
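The user-adjustable number of light sources can be sketched as a greedy pick-brightest-then-suppress loop over an environment image. This is an illustrative reduction; the paper works on live fisheye video and its method is more involved than this single-frame sketch:

```python
import numpy as np

def estimate_lights(env, k=2, suppress=1):
    """Return at most k dominant light positions in a grayscale environment
    image: repeatedly take the brightest pixel, then zero out its
    neighbourhood so nearby pixels of the same source are not re-picked."""
    img = env.astype(float).copy()
    lights = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(img), img.shape)
        if img[i, j] <= 0:
            break                      # no sources left above black
        lights.append((int(i), int(j)))
        img[max(0, i - suppress):i + suppress + 1,
            max(0, j - suppress):j + suppress + 1] = 0
    return lights

env = np.zeros((6, 6))
env[1, 1] = 250   # e.g. a ceiling lamp
env[4, 4] = 180   # e.g. a window
```

Capping the loop at `k` iterations is what bounds the light count to roughly the user-specified size, instead of letting it grow with scene complexity.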