• Title/Summary/Keyword: Image-Based Lighting

Automated measurement of tool wear using an image processing system

  • Sawai, Nobushige;Song, Joonyeob;Park, Hwayoung
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1995.10a
    • /
    • pp.311-314
    • /
    • 1995
  • This paper presents a method for measuring tool wear parameters based on two dimensional image information. The tool wear images were obtained from an ITV camera with magnifying and lighting devices, and were analyzed using image processing techniques such as thresholding, noise filtering and boundary tracing. Thresholding was used to transform the captured gray scale image into a binary image for rapid sequential image processing. The threshold level was determined using a novel technique in which the brightness histograms of two concentric windows containing the tool wear image were compared. The use of noise filtering and boundary tracing to reduce the measuring errors was explored. Performance tests of the measurement precision and processing speed revealed that the direct method was highly effective in intermittent tool wear monitoring.
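
The threshold-selection idea described in this abstract, comparing the brightness histograms of two concentric windows around the wear region, can be sketched roughly as follows. The comparison rule used here (taking the gray level where the two normalized histograms cross) and the window sizes are assumptions for illustration; the paper does not spell out its exact criterion.

```python
# Sketch of threshold selection by comparing the brightness histograms of two
# concentric windows. The crossing-point rule below is an assumption, not the
# paper's stated criterion.
import numpy as np

def concentric_window_threshold(gray, center, inner_half, outer_half):
    """gray: 2-D uint8 image; center: (row, col); *_half: half window sizes."""
    r, c = center
    inner = gray[r - inner_half:r + inner_half, c - inner_half:c + inner_half]
    outer = gray[r - outer_half:r + outer_half, c - outer_half:c + outer_half]

    h_in, _ = np.histogram(inner, bins=256, range=(0, 256), density=True)
    h_out, _ = np.histogram(outer, bins=256, range=(0, 256), density=True)

    # The inner window is dominated by the bright wear land, the outer window
    # by darker background; pick the gray level where the histograms cross.
    diff = h_in - h_out
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    return int(crossings[0]) if crossings.size else 128

# binary = (gray >= concentric_window_threshold(gray, (240, 320), 40, 80)).astype(np.uint8)
```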

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.24 no.2
    • /
    • pp.159-166
    • /
    • 1999
  • A system has been developed that continuously extracts real 3-D geometric feature information from 2-D images of objects fed randomly on a conveyor. Two sets of structured laser lighting were utilized, and the laser structured-light projection image was acquired by the camera, triggered by the signal of a photo-sensor mounted on the conveyor. A camera calibration matrix was obtained, which relates 2-D image coordinates to 3-D world coordinates using six known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift of the laser line and the object height was generated; height estimated from this correlation showed a maximum error of 0.4 mm within the same 103 mm height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D feature information was reconstructed into a 3-D surface using MATLAB.
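
As a rough illustration of the calibration step mentioned above (a matrix relating image coordinates and world coordinates obtained from six known points), the sketch below solves a 3x4 projection matrix with the textbook direct linear transform (DLT). The paper's exact formulation is not given, so this is only an assumed reconstruction.

```python
# DLT sketch: estimate a 3x4 camera projection matrix from >= 6 known
# world/image point pairs; the laser-plane intersection step is noted below.
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """world_pts: (N, 3) XYZ coordinates; image_pts: (N, 2) pixel coords; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)   # right singular vector with smallest singular value

# With the laser sheet's plane equation known, a pixel on the laser stripe can
# then be intersected with that plane to recover its 3-D world coordinate.
```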

A Defect Inspection Algorithm Using Multi-Resolution Analysis based on Wavelet Transform (웨이블릿 다해상도 분석에 의한 디지털 이미지 결점 검출 알고리즘)

  • Kim, Kyung-Joon;Lee, Chang-Hwan;Kim, Joo-Yong
    • Textile Coloration and Finishing
    • /
    • v.21 no.1
    • /
    • pp.53-58
    • /
    • 2009
  • A real-time inspection system has been developed by combining a CCD-based image processing algorithm with standard lighting equipment. The system was tested on defective fabrics showing nozzle-contact scratch marks, one of the most frequently occurring defects. Multi-resolution analysis (MRA) algorithms based on the wavelet transform were used and evaluated in terms of both processing time and detection rate. The reference value for defect inspection was the mean feature value of non-defective images, and similarity was determined by comparing this reference value with the feature value of each sample image. Overall, a defect inspection accuracy above 95% was achieved.
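
The MRA idea in this abstract can be sketched as below: decompose each image with a 2-D wavelet transform, summarize the detail sub-bands as a feature vector, and flag a sample whose distance from the mean non-defect feature exceeds a threshold. The wavelet, decomposition level, feature summary, and threshold are assumptions, not values from the paper.

```python
# Illustrative wavelet multi-resolution defect check using PyWavelets.
import numpy as np
import pywt

def mra_features(gray, wavelet="db2", level=3):
    """Mean absolute energy of each detail sub-band of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    # coeffs[0] is the approximation; each later entry is a (cH, cV, cD) tuple.
    return np.array([np.abs(band).mean()
                     for detail in coeffs[1:] for band in detail])

def is_defective(sample_gray, reference_mean, threshold=3.0):
    """Flag the sample if its feature vector is far from the non-defect mean."""
    return np.linalg.norm(mra_features(sample_gray) - reference_mean) > threshold

# reference_mean = np.mean([mra_features(img) for img in non_defect_images], axis=0)
```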

A Robust Face Detection Method Based on Skin Color and Edges

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.141-156
    • /
    • 2013
  • In this paper we propose a method to detect human faces in color images. Many existing systems use a window-based classifier that scans the entire image for the presence of a human face, and such systems suffer from scale variation, pose variation, illumination changes, etc. Here, we propose a lighting-insensitive face detection method based on the edge and skin-tone information of the input color image. First, image enhancement is performed, especially if the image was acquired under unconstrained illumination conditions. Next, skin segmentation in YCbCr and RGB space is conducted. The result of skin segmentation is refined using the skin-tone percentage index method. The edges of the input image are combined with the skin-tone image to separate all non-face regions from candidate faces. Candidate verification using primitive shape features of the face is applied to decide which of the candidate regions correspond to a face. The advantage of the proposed method is that it can detect faces of different sizes, in different poses, and making different expressions under unconstrained illumination conditions.
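
A minimal sketch of the skin-plus-edge idea follows: segment skin-tone pixels in YCrCb space, then use edges to split touching regions before candidate verification. The Cb/Cr bounds and the minimum area are common textbook values assumed for illustration, not the thresholds from the paper.

```python
# Skin segmentation in YCrCb combined with Canny edges to form face candidates.
import cv2

def face_candidates(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # Y, Cr, Cb bounds
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    # Remove skin pixels lying on strong edges so connected regions split at
    # face/background boundaries before shape-based verification.
    candidates = cv2.bitwise_and(skin, cv2.bitwise_not(edges))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500]
```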

Development of Automated Rapid Influenza Diagnostic Test Method Based on Image Recognition (영상 인식 기반 신속 인플루엔자 자동 판독 기법 개발)

  • Lee, Ji Eun;Joo, Yoon Ha;Lee, Jung Chan
    • Journal of Biomedical Engineering Research
    • /
    • v.40 no.3
    • /
    • pp.97-104
    • /
    • 2019
  • To examine different types of influenza diagnostic test kits automatically, an automated rapid influenza diagnostic test method based on image recognition is proposed in this paper. First, the proposed method classifies the various rapid influenza diagnostic test kits using a support vector machine that analyzes each kit's feature points. Then, to improve test accuracy, the method matches the histogram of the input kit image against that of the target kit image, minimizing the effect of environmental factors such as lighting and exposure variations. Finally, to minimize the effect of framing variations from hand-held devices, the method extracts feature points and matches them point-by-point between the target and input kit images. Experimental results on a group of 124 samples show that the proposed method is effective for the preliminary examination of influenza, achieving 90% accuracy at moderate antigen levels, and provides an opportunity to take early action against influenza.
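
Two of the steps described above can be sketched as follows: histogram matching of the input kit image against a reference ("target") kit image to suppress lighting and exposure differences, and feature-point matching to align the two images. ORB is an assumed stand-in for the unspecified feature detector, and the SVM classification stage is omitted.

```python
# Histogram matching plus ORB-based alignment between a target and an input kit image.
import cv2
import numpy as np

def match_histogram(source, reference):
    """Map grayscale source levels so its CDF follows the reference image's CDF."""
    s_hist, _ = np.histogram(source, 256, (0, 256))
    r_hist, _ = np.histogram(reference, 256, (0, 256))
    s_cdf = np.cumsum(s_hist) / source.size
    r_cdf = np.cumsum(r_hist) / reference.size
    lut = np.clip(np.searchsorted(r_cdf, s_cdf), 0, 255).astype(np.uint8)
    return lut[source]

def align_to_target(target_gray, input_gray):
    """Warp the input kit image onto the target kit image via matched keypoints."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(target_gray, None)
    k2, d2 = orb.detectAndCompute(input_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(input_gray, H, target_gray.shape[::-1])
```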

Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System (적응형 헤드 램프 컨트롤을 위한 야간 차량 인식)

  • Kim, Hyun-Koo;Jung, Ho-Youl;Park, Ju H.
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.1
    • /
    • pp.8-15
    • /
    • 2011
  • This paper presents an effective method for detecting vehicles ahead of a camera-assisted car during nighttime driving. The proposed method detects vehicles by detecting vehicle headlights and taillights using image segmentation and clustering techniques. First, to effectively extract the spotlights of interest, a pre-processing step based on a camera lens filter and a labeling method is applied to road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses a light tracking method and locates vehicle lighting patterns. The method was implemented on a DaVinci 7437 DSP board with a visible-light mono camera and tested on urban and rural roads. In these tests, classification performance exceeded an 89% precision rate and a 94% recall rate in a real-time environment.
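
A rough sketch of the spotlight extraction and grouping idea: threshold bright blobs in the night image, label them, and pair blobs of similar size lying on roughly the same image row into a vehicle candidate. The brightness threshold, area limits, and pairing rule are assumptions for illustration, not the paper's values.

```python
# Bright-blob labeling and simple headlamp/taillight pairing.
import cv2

def detect_lamp_pairs(gray, bright_thresh=220, min_area=20, max_dy=10):
    _, bw = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(bw)
    blobs = [(centroids[i], stats[i, cv2.CC_STAT_AREA])
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (xi, yi), ai = blobs[i]
            (xj, yj), aj = blobs[j]
            # Lamp pairs sit at nearly the same height and have comparable areas.
            if abs(yi - yj) < max_dy and 0.5 < ai / aj < 2.0:
                pairs.append((blobs[i][0], blobs[j][0]))
    return pairs
```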

Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.11
    • /
    • pp.853-858
    • /
    • 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphic objects using a graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow because it receives little assistance from graphics hardware and that surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface rendering in response to varying illumination is performed inside the vertex shader, while adaptive point splatting is performed inside the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
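
The core DIBR step that the paper moves onto the GPU can be sketched on the CPU as below: back-project each reference pixel with its depth, re-project it into the target view, and splat its color. The camera intrinsics/extrinsics are placeholders, the splat is a naive nearest-pixel write without z-buffering or relighting, and the paper performs the equivalent work per-vertex and per-fragment in shaders.

```python
# Naive numpy sketch of depth image-based warping (the essence of DIBR).
import numpy as np

def dibr_warp(color, depth, K, K_inv, R, t, out_shape):
    """color: (H,W,3); depth: (H,W) metric; R,t: reference->target pose; out_shape: (H,W)."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])        # 3 x N homogeneous pixels
    pts = K_inv @ pix * depth.ravel()                             # back-project to 3-D
    cam2 = R @ pts + t[:, None]                                   # move into target camera
    proj = K @ cam2
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros(out_shape + (3,), dtype=color.dtype)
    ok = (proj[2] > 0) & (u2 >= 0) & (u2 < out_shape[1]) & (v2 >= 0) & (v2 < out_shape[0])
    out[v2[ok], u2[ok]] = color.reshape(-1, 3)[ok]                # naive point splat
    return out
```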

Sensibility Evaluation of Color Temperature and Rendering Index to the LED-Based White Illumination (LED 기반 백색 조명의 색온도 및 연색지수에 따른 감성 평가)

  • Jee, Soon-Duk;Choi, Kyoung-Jae;Kim, Ho-Kun;Lee, Sang-Hyuk
    • Science of Emotion and Sensibility
    • /
    • v.9 no.4
    • /
    • pp.353-366
    • /
    • 2006
  • The aim of this study is to characterize the optical properties of white light-emitting diode (LED) lighting modules and to evaluate how students and teachers respond to those properties. Five lighting modules were installed in five test cabinets, and the students' and teachers' sensibility responses to the cabinets were evaluated and analyzed. Questions on the sensibility of the lighting were selected and evaluated with the semantic differential method; to verify their reliability and objectivity, a feasibility survey was carried out as a preliminary test. As a result, the responses to the test cabinets were classified into four factors: activity, stability, potency, and sensitive image. In the evaluation of correlated color temperature, the subjects preferred the cabinet with the higher color temperature in terms of activity and potency, and the cabinet with the lower color temperature in terms of stability. For the sensitive-image factor, they preferred the 5800 K bluish-white lighting. In the evaluation of the color rendering index, they preferred the cabinet with the higher color rendering index in terms of activity, stability, and sensitive image, while for the potency factor they preferred white lighting with a middle color rendering index.
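
The analysis pattern described above (semantic-differential ratings reduced to a small number of factors) can be sketched roughly as below. The data layout and the use of scikit-learn's FactorAnalysis are assumptions; the abstract does not state which factor-extraction method was used.

```python
# Reduce a subjects x items rating matrix to a few factors (assumed analysis sketch).
from sklearn.decomposition import FactorAnalysis

def extract_factors(ratings, n_factors=4):
    """ratings: (n_subjects, n_items) semantic-differential scores for one cabinet."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    scores = fa.fit_transform(ratings)      # per-subject factor scores
    loadings = fa.components_               # item loadings per factor
    return scores, loadings
```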

Multi-camera based Images through Feature Points Algorithm for HDR Panorama

  • Yeong, Jung-Ho
    • International journal of advanced smart convergence
    • /
    • v.4 no.2
    • /
    • pp.6-13
    • /
    • 2015
  • With the spread of various kinds of cameras, such as digital cameras and DSLRs, and a growing interest in high-definition, high-resolution images, methods that synthesize multiple images are being actively studied. High dynamic range (HDR) images store exposure over a much wider numeric range than normal digital images, so they can represent quite accurately the intensity of light produced by real-world light sources in a scene. This study proposes a feature-point synthesis algorithm that improves the performance of HDR panorama generation at the recognition and alignment stages by classifying the feature points used for image recognition across multiple frames.
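
A sketch of the kind of pipeline this abstract outlines: detect and match feature points across frames captured at different exposures, align them to a base frame, and fuse the aligned stack. ORB matching and Mertens exposure fusion are assumed stand-ins for the paper's unspecified components.

```python
# Feature-based alignment of multi-exposure frames followed by exposure fusion.
import cv2
import numpy as np

def align_and_fuse(frames):
    """frames: list of BGR images of the same scene at different exposures."""
    base = frames[0]
    orb = cv2.ORB_create(2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kb, db = orb.detectAndCompute(cv2.cvtColor(base, cv2.COLOR_BGR2GRAY), None)
    aligned = [base]
    for frame in frames[1:]:
        kf, df = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        matches = bf.match(db, df)
        dst = np.float32([kb[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        src = np.float32([kf[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        aligned.append(cv2.warpPerspective(frame, H, base.shape[1::-1]))
    return cv2.createMergeMertens().process(aligned)   # float32 fused result
```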

A Comparative Study of Local Features in Face-based Video Retrieval

  • Zhou, Juan;Huang, Lan
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.1
    • /
    • pp.24-31
    • /
    • 2017
  • Face-based video retrieval has become an active and important branch of intelligent video analysis. Face profiling and matching is a fundamental step and is crucial to the effectiveness of video retrieval. Although many algorithms have been developed for processing static face images, their effectiveness in face-based video retrieval is still unknown, simply because videos have different resolutions, faces vary in scale, and different lighting conditions and angles are used. In this paper, we combined content-based and semantic-based image analysis techniques, and systematically evaluated four mainstream local features to represent face images in the video retrieval task: Harris operators, SIFT and SURF descriptors, and eigenfaces. Results of ten independent runs of 10-fold cross-validation on datasets consisting of TED (Technology Entertainment Design) talk videos showed the effectiveness of our approach, where the SIFT descriptors achieved an average F-score of 0.725 in video retrieval and thus were the most effective, while the SURF descriptors were computed in 0.3 seconds per image on average and were the most efficient in most cases.
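
A small sketch of the kind of comparison reported above: extract SIFT and SURF descriptors from a face crop and time each detector, mirroring the accuracy/runtime trade-off the study measures. SURF is only available in opencv-contrib builds, so its use here is guarded; the retrieval and cross-validation machinery is omitted.

```python
# Time SIFT (and, if available, SURF) descriptor extraction on a face crop.
import time
import cv2

def describe_face(face_gray):
    results = {}
    sift = cv2.SIFT_create()
    t0 = time.perf_counter()
    kp, _ = sift.detectAndCompute(face_gray, None)
    results["SIFT"] = (len(kp), time.perf_counter() - t0)
    if hasattr(cv2, "xfeatures2d"):                 # SURF needs the contrib build
        surf = cv2.xfeatures2d.SURF_create(400)
        t0 = time.perf_counter()
        kp, _ = surf.detectAndCompute(face_gray, None)
        results["SURF"] = (len(kp), time.perf_counter() - t0)
    return results
```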