• Title/Summary/Keyword: Image-Based Lighting

237 search results

Autonomous pothole detection using deep region-based convolutional neural network with cloud computing

  • Luo, Longxi; Feng, Maria Q.; Wu, Jianping; Leung, Ryan Y.
    • Smart Structures and Systems / v.24 no.6 / pp.745-757 / 2019
  • Road surface deteriorations such as potholes cause motorists heavy monetary damage every year, yet effective road condition monitoring remains a continuing challenge for road owners. Depth cameras have a small field of view and are easily affected by vehicle bouncing, while traditional image processing methods based on algorithms such as segmentation cannot adapt to varying environmental and camera scenarios. In recent years, object detection methods based on deep learning have produced good results in detecting typical objects such as faces, vehicles, and structures, even under changing object distances, camera angles, and lighting conditions. In this study, a Deep Learning Pothole Detector (DLPD) based on a deep region-based convolutional neural network is therefore proposed for autonomous detection of potholes from images. About 900 images of potholes and road surfaces are collected and divided into training and testing data. Parameters of the network in the DLPD are calibrated through sensitivity tests. The calibrated DLPD is then trained on the training data and applied to the 215 testing images to evaluate its performance; potholes are detected automatically with an average precision above 93%. Potholes are differentiated from manholes by training and applying a manhole-pothole classifier constructed from the convolutional neural network layers of the DLPD. Repeated detection of the same pothole is prevented by matching the features of a newly detected pothole against previously detected potholes within a small region.
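The de-duplication idea at the end of the abstract can be sketched as a simple spatial check: a new detection is suppressed if a previously recorded pothole lies nearby. The function and threshold below are hypothetical illustrations, not the paper's actual feature-matching procedure.

```python
import math

def is_new_pothole(candidate, known, radius=2.0):
    """Treat a detection as new only if no previously recorded
    pothole lies within `radius` metres of it (a stand-in for the
    paper's feature-matching step)."""
    cx, cy = candidate
    for (px, py) in known:
        if math.hypot(cx - px, cy - py) <= radius:
            return False  # likely the same pothole, seen again
    return True

known = [(10.0, 5.0), (42.5, 7.1)]
print(is_new_pothole((10.8, 5.3), known))  # near a known pothole: False
print(is_new_pothole((80.0, 3.0), known))  # far from all known ones: True
```

The paper matches image features rather than raw positions, which is more robust when localization drifts between passes.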

Inspection for Inner Wall Surface of Communication Conduits by Laser Projection Image Analysis (레이저 투영 영상 분석에 의한 통신 관로 내벽 검사 기법)

  • Lee, Dae-Ho
    • Journal of Korea Multimedia Society / v.9 no.9 / pp.1131-1138 / 2006
  • This paper proposes a novel method for grading underground communication conduits by laser projection image analysis. The equipment thrust into the conduit consists of a laser diode, a light-emitting diode, and a camera: the laser diode generates a projection ring on the pipe wall, the light-emitting diode provides illumination, and the camera acquires the image of the conduit. To segment the profile region, a novel color-difference model and a multiple-threshold method are used. The shape of the profile ring is represented by its minimum diameter and Fourier descriptor, and the pipe status is then graded by a rule-based method. Because both a local and a global feature of the segmented ring shape, the minimum diameter and the Fourier descriptor, are used, damaged and distorted pipes can be graded correctly. Experimental results show accurate classification, with false alarms below 2% under various conditions.
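The global shape feature mentioned above, the Fourier descriptor, can be computed as the DFT of the contour treated as complex numbers; normalizing the magnitudes makes the descriptor scale- and rotation-invariant. This is a generic sketch of the standard technique, not the paper's exact formulation.

```python
import cmath, math

def fourier_descriptor(points, n_coeffs=4):
    """Naive DFT of a closed contour given as (x, y) samples.
    The normalized magnitudes |F_k| / |F_1| are scale- and
    rotation-invariant shape features."""
    z = [complex(x, y) for x, y in points]
    N = len(z)
    F = []
    for k in range(n_coeffs + 1):
        F.append(sum(z[n] * cmath.exp(-2j * math.pi * k * n / N)
                     for n in range(N)) / N)
    ref = abs(F[1]) or 1.0
    return [abs(c) / ref for c in F[2:]]  # skip DC term and F_1

# A perfect circle has (near-)zero higher-order descriptors,
# so an undistorted pipe profile would score low here.
circle = [(math.cos(2 * math.pi * i / 64), math.sin(2 * math.pi * i / 64))
          for i in range(64)]
print(fourier_descriptor(circle))
```

A crushed or dented ring would produce large higher-order magnitudes, which is the kind of evidence a rule-based grader can threshold on.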


Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong; Kwon, So-Young; Kim, Young-Hyung; Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence / v.10 no.1 / pp.71-83 / 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is important, the hand-shape detection system is designed in Verilog HDL, a hardware description language that processes in parallel, rather than in sequentially executed C++. Among the several approaches to hand gesture recognition, the image processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected from the various color representations to obtain results less affected by lighting. Using constraint conditions on the Cb and Cr components, only the pixels corresponding to skin color are extracted from the input image. To increase the speed of object recognition, a median filter removes noise from the input image; the filter is designed to compare values and extract the median simultaneously, reducing the amount of computation. For parallel processing, the design locates the centerline of the hand while scanning and sorting the stored data: the line with the highest count is selected as the centerline, the size of the hand is determined from the count, and the hand and arm regions are separated. The designed hardware circuit satisfied the target operating frequency and gate count.
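The two pixel-level operations described above, CbCr skin filtering and 3x3 median filtering, can be sketched in software. The conversion uses the standard JPEG/BT.601 RGB-to-YCbCr equations, and the skin range shown (Cb 77-127, Cr 133-173) is a commonly cited one; the paper's actual thresholds are not given in the abstract.

```python
def skin_mask(pixel):
    """Return True if an (R, G, B) pixel falls in a commonly used
    CbCr skin range. Standard JPEG/BT.601 conversion equations."""
    r, g, b = pixel
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

def median3x3(window):
    """Median of a 3x3 neighbourhood: sort nine values, take the
    middle one -- the comparison network the hardware parallelises."""
    return sorted(window)[4]

print(skin_mask((200, 140, 120)))  # a typical skin tone: True
print(skin_mask((30, 90, 200)))    # blue, not skin: False
print(median3x3([5, 250, 7, 6, 9, 4, 8, 3, 2]))  # 6 (the 250 outlier is rejected)
```

In the FPGA the nine comparisons of the median are wired as a sorting network so the value emerges in a fixed number of clock cycles, rather than via a software sort.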

Non-contact mobile inspection system for tunnels: a review (터널의 비접촉 이동식 상태점검 장비: 리뷰)

  • Lee, Chulhee; Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association / v.25 no.3 / pp.245-259 / 2023
  • The purpose of this paper is to examine the most recent tunnel scanning systems to obtain insights for the development of a non-contact mobile inspection system. Tunnel scanning systems are mostly developed around two main technologies: laser scanning and image scanning. Laser scanning accurately recreates the geometric characteristics of tunnel linings from point clouds, whereas image scanning employs computer vision to readily identify damage such as fine cracks and leaks on the tunnel lining surface. The analysis suggests that image scanning is more suitable for detecting damage on tunnel linings. A camera-based tunnel scanning system under development should include components such as lighting, data storage, a power supply, and an image-capturing controller synchronized with vehicle speed.
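The abstract names the need for an image-capturing controller synchronized with vehicle speed; one way such synchronization is commonly arranged is to trigger the camera so that consecutive frames overlap by a fixed fraction. The function and parameters below are an illustrative assumption, not taken from the paper.

```python
def trigger_interval_s(speed_kmh, frame_length_m, overlap=0.3):
    """Time between camera triggers so that consecutive frames,
    each covering `frame_length_m` along the driving direction,
    overlap by the given fraction (hypothetical sketch)."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    advance_m = frame_length_m * (1.0 - overlap)
    return advance_m / speed_ms

# At 36 km/h (10 m/s) with 2 m frames and 30 % overlap:
print(round(trigger_interval_s(36, 2.0), 3))  # 0.14 s between shots
```

Deriving the interval from measured speed rather than a fixed timer keeps coverage constant when the survey vehicle slows down or speeds up.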

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo; Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data; CNN-based algorithms in particular can now even adjust the source data themselves. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the environment-specific information but also the shared information is represented by different features, which can degrade the performance of an image analysis model. In this paper, we therefore propose a method to improve the performance of an image color constancy model using adversarial learning over image data generated in heterogeneous environments. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
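The angular error used above as the evaluation metric is the standard one for color constancy: the angle between the estimated illuminant vector and the ground-truth illuminant in RGB space. A minimal implementation:

```python
import math

def angular_error_deg(est, gt):
    """Angle in degrees between an estimated illuminant (R, G, B)
    vector and the ground-truth illuminant -- the standard error
    metric for colour-constancy models."""
    dot = sum(a * b for a, b in zip(est, gt))
    ne = math.sqrt(sum(a * a for a in est))
    ng = math.sqrt(sum(b * b for b in gt))
    cos = max(-1.0, min(1.0, dot / (ne * ng)))  # clamp fp noise
    return math.degrees(math.acos(cos))

print(angular_error_deg((1.0, 1.0, 1.0), (1.0, 1.0, 1.0)))        # 0.0
print(round(angular_error_deg((1.0, 0.0, 0.0), (1.0, 1.0, 0.0)), 1))  # 45.0
```

Because only the direction of the illuminant matters for white balancing, the metric is deliberately insensitive to overall brightness.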

Formative Expressions by Artificial Light applied to Office Building Lobbies (현대 오피스 로비공간에서 빛의 조형적 표현 특성에 관한 연구)

  • Jeong, Soo-Ryun
    • Korean Institute of Interior Design Journal / v.18 no.2 / pp.41-49 / 2009
  • The contemporary design environment is shaped by an image-centered trend grounded in pluralism. From this point of view, corporate building lobbies are public places of dual character that actively utilize light as a design element to express corporate identity. Light is an immaterial entity with unlimited possibilities and potential in space; it also acts as a medium that activates spaces and creates new images in connection with the formative elements of space. This study examines how lighting is expressed in office lobby spaces, how it affects their formative characteristics, and how it activates their specific character. The conclusions are as follows. First, as state-of-the-art technology and media are introduced, light is expressed in space as floating, direction, rhythm, silhouette, metaphor and allusion, and a sense of depth and volume. Second, the expressive aspects of light in lobby space are the embodiment of light, the substantiation of immateriality, and the foregrounding of materiality from the perspective of spatial aesthetics, together with the distortion and transformation of shape and the pluralism of space from the perspective of spatial structure. In this way, light in building lobbies, where a design differentiation strategy is strongly required, specializes the space and integrates the overall design, acting not only as a functional element but also as a mental, psychological, and formative one. Consequently, light in lobby spaces induces communication between spaces and users, carries formative value in itself, and presents differentiated corporate identities.

Detection of eye using optimal edge technique and intensity information (눈 영역에 적합한 에지 추출과 밝기값 정보를 이용한 눈 검출)

  • Mun, Won-Ho; Choi, Yeon-Seok; Kim, Cheol-Ki; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.196-199 / 2010
  • The human eyes are important facial landmarks for image normalization due to their relatively constant interocular distance. This paper introduces a novel approach to eye detection using an optimal segmentation method for eye representation. The method consists of three steps: (1) an edge-extraction step that accurately extracts the eye region from the gray-scale face image, (2) extraction of the eye region using a labeling method, and (3) eye localization based on intensity information. Experimental results show a correct eye detection rate of 98.9% on 2408 FERET images with variations in lighting conditions and facial expressions.
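The final intensity-based localization step can be sketched very simply: within a candidate eye region, the pupil is usually the darkest area, so its position can serve as the eye center. This is a rough stand-in for the paper's actual step, with a hypothetical helper name.

```python
def darkest_point(region):
    """Locate the eye centre as the darkest pixel in a candidate
    region, given as a 2-D list of grey levels (0-255). A crude
    proxy for intensity-based eye localisation."""
    best, best_rc = 256, None
    for r, row in enumerate(region):
        for c, v in enumerate(row):
            if v < best:
                best, best_rc = v, (r, c)
    return best_rc

patch = [[200, 180, 190],
         [170,  20, 160],   # dark pupil at row 1, col 1
         [210, 150, 205]]
print(darkest_point(patch))  # (1, 1)
```

A production system would average over a dark blob rather than trust a single minimum pixel, which is sensitive to noise.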


Wafer Position Recognition System of Cleaning Equipment (웨이퍼 클리닝 장비의 웨이퍼 장착 위치 인식 시스템)

  • Lee, Jung-Woo; Lee, Byung-Gook; Lee, Joon-Jae
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.400-409 / 2010
  • This paper presents a system that recognizes wafer mounting position errors in the cleaning equipment used in wafer manufacturing. The proposed system improves cost and reliability by alerting the cleaning equipment when the wafer is not seated in the correct position, preventing damage. The key algorithms are a calibration method relating the camera image to the physical wafer, infrared lighting and the corresponding filter design, and wafer boundary extraction with position-error recognition based on least-squares circle fitting. The system is intended for in-line installation, providing highly reliable and accurate position recognition. Experimental results show good performance in detecting errors within tolerance.
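The least-squares circle generation mentioned above can be sketched with the classic algebraic (Kasa) fit: treat the circle as x^2 + y^2 + Dx + Ey + F = 0, which is linear in D, E, F. This is a generic sketch of that technique; the equipment's real pipeline also segments the wafer boundary from the infrared image first.

```python
def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve the normal equations
    for D, E, F in x^2 + y^2 + D*x + E*y + F = 0, then recover the
    centre (-D/2, -E/2) and radius."""
    M = [[0.0] * 3 for _ in range(3)]   # normal-equation matrix
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * rhs
    # Gaussian elimination with partial pivoting on the 3x3 system
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            v[r] -= f * v[i]
    sol = [0.0] * 3
    for i in range(2, -1, -1):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    D, E, F = sol
    cx, cy = -D / 2, -E / 2
    radius = (cx * cx + cy * cy - F) ** 0.5
    return (cx, cy), radius

# Four points on a circle centred at (3, -2) with radius 5:
pts = [(8, -2), (3, 3), (-2, -2), (3, -7)]
(cx, cy), r = fit_circle(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))  # 3.0 -2.0 5.0
```

Comparing the fitted wafer centre against the chuck's nominal centre then gives the mounting position error directly.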

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris; Vosniakos, George C.
    • Advances in Robotics Research / v.1 no.1 / pp.81-99 / 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles using low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features and joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a circular target placed on the body of the robot, as registered by the camera during rotation of the arm. To distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering the camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is considered extensible to multiple-joint motion of a known kinematic chain.
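The geometric core of the area-based measurement can be sketched as follows: a flat circular target tilted by an angle theta about an axis parallel to the image plane projects with area proportional to cos(theta), so the angle can be recovered from the area ratio. This is a simplified model of the relation; the paper's system adds four targets and two cameras to resolve direction ambiguity and camera placement errors.

```python
import math

def joint_angle_deg(observed_area, face_on_area):
    """Recover a rotation angle from the projected area of a
    circular target: A(theta) = A0 * cos(theta) for rotation about
    an axis parallel to the image plane (simplified model)."""
    ratio = max(-1.0, min(1.0, observed_area / face_on_area))
    return math.degrees(math.acos(ratio))

A0 = 1000.0  # target area (pixels) when facing the camera
print(joint_angle_deg(A0, A0))            # 0 degrees, facing the camera
print(round(joint_angle_deg(500.0, A0)))  # 60 degrees
```

Since cos(theta) = cos(-theta), a single target cannot tell the two rotation directions apart, which is exactly why the paper distributes four targets every 90°.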

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Yu, Min-Su; Jung, Tae-Won; Kim, GyoungHyun; Kwon, Soonchul; Jung, Kye-Dong
    • International Journal of Advanced Smart Convergence / v.12 no.4 / pp.142-146 / 2023
  • We present a method for generating 3D structures and rendering objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). This approach focuses on generating and rendering 3D models of improved quality by using residual learning in the encoder. We stack the encoder layers deeply to accurately reflect the features of the image and apply residual blocks to overcome the problems of deep layers, improving encoder performance. This resolves the gradient vanishing and exploding problems that arise when constructing a deep neural network, and the deep encoder with residual connections lets the model learn more detailed information. The generated model has more detailed voxels for a more accurate representation, is rendered with added materials and lighting, and is finally converted into a mesh model. The resulting 3D models have excellent visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse.
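The residual blocks credited above with curing vanishing and exploding gradients all share one structural idea: the block's output is the input plus a learned transformation, y = x + F(x), so gradients always have an identity path through the network. A toy sketch of that idea (the real encoder uses convolutional sub-networks, not the stand-in function here):

```python
def residual_block(x, transform):
    """Core idea of a residual block: output = input + F(input),
    so the identity path keeps gradients flowing through deep
    stacks of such blocks."""
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# Toy "layer": a fixed elementwise transformation standing in for
# the convolutional sub-network F.
halve = lambda v: [0.5 * xi for xi in v]
print(residual_block([2.0, -4.0, 6.0], halve))  # [3.0, -6.0, 9.0]
```

Because the block only needs to learn the residual F(x) = y - x rather than the full mapping, very deep encoder stacks remain trainable.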