• Title/Summary/Keyword: Intelligent Lighting

Image Contrast Enhancement by Illumination Change Detection (조명 변화 감지에 의한 영상 콘트라스트 개선)

  • Odgerel, Bayanmunkh;Lee, Chang Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.2 / pp.155-160 / 2014
  • Many image-processing-based algorithms and applications fail when an illumination change occurs. The illumination change therefore has to be detected, and the affected images enhanced, so that subsequent processing can continue to work in practice. In this paper, a new method for efficiently detecting illumination changes in real time using local region information and fuzzy logic is introduced. To detect illumination changes effectively, the lit area and the edge of that area are analyzed: the mean and variance of each area's histogram are computed, and their trends relative to the previous frame's mean and variance are tracked. These measures are used as inputs. The changes in mean and variance form distinct patterns when an illumination change occurs, and fuzzy rules were defined on these input patterns to detect the change. The proposed method was tested on different datasets with standard evaluation metrics; in particular, specificity, recall, and precision were high. An automatic parameter selection method was also proposed for contrast-limited adaptive histogram equalization (CLAHE), using image entropy through an adaptive neuro-fuzzy inference system (ANFIS). The results showed that image contrast could be enhanced. The proposed algorithm is robust in detecting global illumination changes and is computationally efficient in real applications.
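As a rough illustration of the enhancement stage described in this abstract, the sketch below applies CLAHE and keeps the clip limit that maximizes the entropy of the equalized image. It is a simplified stand-in, assuming OpenCV and NumPy; the paper's ANFIS-based parameter selection and the fuzzy illumination-change detector are not reproduced.

```python
import cv2
import numpy as np

def image_entropy(gray):
    """Shannon entropy of an 8-bit grayscale image."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def enhance_with_clahe(gray, clip_limits=(1.0, 2.0, 3.0, 4.0), tile=(8, 8)):
    """Try several CLAHE clip limits and keep the most informative result.

    A crude substitute for the entropy/ANFIS-driven selection in the paper.
    """
    best, best_h = gray, image_entropy(gray)
    for c in clip_limits:
        clahe = cv2.createCLAHE(clipLimit=c, tileGridSize=tile)
        out = clahe.apply(gray)
        h = image_entropy(out)
        if h > best_h:
            best, best_h = out, h
    return best

# Usage (hypothetical file name):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_with_clahe(gray)
```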

Vehicle Headlight and Taillight Recognition in Nighttime using Low-Exposure Camera and Wavelet-based Random Forest (저노출 카메라와 웨이블릿 기반 랜덤 포레스트를 이용한 야간 자동차 전조등 및 후미등 인식)

  • Heo, Duyoung;Kim, Sang Jun;Kwak, Choong Sub;Nam, Jae-Yeal;Ko, Byoung Chul
    • Journal of Broadcast Engineering / v.22 no.3 / pp.282-294 / 2017
  • In this paper, we propose a novel intelligent headlight control (IHC) system that is robust to various road lights and to the camera movement caused by vehicle driving. To detect candidate light blobs, the region of interest (ROI) is split into a front ROI (FROI) and a back ROI (BROI) by considering the camera geometry with a perspective range estimation model. Light blobs such as vehicle headlights and taillights, reflected light, and the surrounding road lighting are then segmented using two different adaptive thresholding schemes. Among the segmented blobs, taillights are first detected using a redness check and a random forest classifier based on Haar-like features. For headlight and taillight classification, we use a random forest instead of the popular support vector machine or convolutional neural networks in order to support fast training and testing in real-life applications. Pairing is performed using predefined geometric rules, such as vertical-coordinate similarity and association checks between blobs. The proposed algorithm was successfully applied to various night-time driving sequences, and the results show that its performance is better than that of recent related works.
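For orientation only, the sketch below shows one possible way to segment bright blobs with adaptive thresholding and keep reddish taillight candidates, loosely following the step named above. The thresholds and minimum area are assumptions, and the Haar-like/random-forest classification and pairing stages are omitted.

```python
import cv2
import numpy as np

def taillight_candidates(bgr, redness_thresh=1.3, min_area=20):
    """Segment bright blobs and keep those whose red channel dominates."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive threshold picks up light blobs despite uneven road lighting.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, -10)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        blob = bgr[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)
        b, g, r = blob.mean(axis=0)
        if r > redness_thresh * max(g, b, 1.0):  # crude redness check
            candidates.append((x, y, w, h))
    return candidates
```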

A Research on Autonomous Mobile LiDAR Performance between Lab and Field Environment (자율주행차량 모바일 LiDAR의 실내외 성능 비교 연구)

  • Ji yoon Kim;Bum jin Park;Jisoo Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.4 / pp.194-210 / 2023
  • LiDAR plays a key role in autonomous vehicles, where it is used to perceive the environment in place of the driver's eyes, and its role is expanding. In recent years, there has been a growing need to test the performance of LiDARs installed in autonomous vehicles. Many LiDAR performance tests have been conducted in simulated and indoor (lab) environments, but tests in outdoor (field) and real-world road environments have been scarce. In this study, we compared LiDAR performance under the same conditions in the lab and in the field to determine the relationship between the two test settings and to establish the characteristics and role of each. The experimental results showed that LiDAR detection performance varies depending on the lighting environment (direct sunlight, LED) and the detected object. In particular, the decrease in intensity with increasing distance and rainfall is greater outdoors, suggesting that both lab and field experiments are necessary when testing LiDAR detection performance on objects. The results of this study are expected to be useful for organizations researching the use of LiDAR sensors and related test facilities.

A Study on Factors Influencing the Severity of Autonomous Vehicle Accidents: Combining Accident Data and Transportation Infrastructure Information (자율주행차 사고심각도의 영향요인 분석에 관한 연구: 사고데이터와 교통인프라 정보를 결합하여)

  • Changhun Kim;Junghwa Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.5 / pp.200-215 / 2023
  • With the rapid advance of autonomous driving technology, the related vehicle market is experiencing explosive growth, and it is anticipated that the era of fully autonomous vehicles will arrive in the near future. However, along with the development of autonomous driving technology, questions regarding its safety and reliability continue to be raised. Concerns among technology adopters are increasing due to media reports of accidents involving autonomous vehicles. To improve the safety of autonomous vehicles, it is essential to analyze previous accident cases and identify their causes. Therefore, in this study, we aimed to analyze the factors influencing the severity of autonomous vehicle accidents using previous accident cases and related data. The data used for this research primarily comprised autonomous vehicle accident reports collected and distributed by the California Department of Motor Vehicles (CA DMV). Spatial information on accident locations and additional traffic data were also collected and utilized. Given that the primary data were accident reports, a Poisson regression analysis was conducted to model the expected number of accidents. The results indicated that the severity of autonomous vehicle accidents increases in areas with low lighting, the presence of bicycle or bus-exclusive lanes, and a history of pedestrian and bicycle accidents. These findings are expected to serve as foundational data for the development of algorithms to enhance the safety of autonomous vehicles and to promote the installation of related transportation infrastructure.
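As a sketch of the modelling step named above, the code below fits a Poisson regression with statsmodels on a toy data frame. The column names (low-lighting flag, bike/bus-lane flag, pedestrian/bicycle accident history) are hypothetical placeholders and not the CA DMV variables used in the paper.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical accident records; values and columns are illustrative only.
df = pd.DataFrame({
    "severity_count": [0, 1, 2, 1, 3, 0, 2, 1],
    "low_lighting":   [0, 1, 1, 0, 1, 0, 1, 0],
    "bike_bus_lane":  [0, 0, 1, 1, 1, 0, 1, 0],
    "ped_bike_hist":  [1, 0, 3, 2, 4, 0, 2, 1],
})

# Poisson GLM: expected count as a log-linear function of the covariates.
model = smf.glm("severity_count ~ low_lighting + bike_bus_lane + ped_bike_hist",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
```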

A Lane Detection and Departure Warning System Robust to Illumination Change and Road Surface Symbols (도로조명변화 및 노면표시에 강인한 차선 검출 및 이탈 경고 시스템)

  • Kim, Kwang Soo;Choi, Seung Wan;Kwak, Soo Yeong
    • Journal of Korea Society of Industrial Information Systems / v.22 no.6 / pp.9-16 / 2017
  • An algorithm for lane detection and lane departure warning for a vehicle driving on roads is proposed in this paper. Using images obtained from on-board cameras for lane detection involves several difficulties, e.g., an increased false detection ratio due to symbols painted on the road, missed yellow lanes in tunnels due to similarly colored lighting, and missed lanes on rainy days due to low illumination. The proposed algorithm was developed with a focus on solving these problems. It also determines how far the vehicle is drifting toward either side of the lane and, if necessary, warns the driver. Experiments using an image database built from vehicle on-board black-box recordings in six different situations were conducted to validate the proposed algorithm. The experimental results show high performance, with an overall detection success ratio of 97%.
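The departure-warning step can be illustrated by comparing the image centre against the centre of the detected lane. The sketch below assumes the left/right lane x-positions at the bottom of the frame are already available from the detection stage, and the warning threshold is an arbitrary assumption rather than the paper's rule.

```python
def lane_departure_ratio(left_x: float, right_x: float, image_width: int) -> float:
    """Signed offset of the image centre from the lane centre.

    Returns roughly -1.0 (hugging the left lane) to +1.0 (hugging the right lane).
    """
    lane_center = (left_x + right_x) / 2.0
    half_width = max((right_x - left_x) / 2.0, 1.0)
    return (image_width / 2.0 - lane_center) / half_width

def should_warn(left_x: float, right_x: float, image_width: int,
                threshold: float = 0.5) -> bool:
    """Warn when the vehicle has drifted past `threshold` of the half lane width."""
    return abs(lane_departure_ratio(left_x, right_x, image_width)) > threshold

# Example: lanes detected at x = 200 and x = 1000 in a 1280-pixel-wide frame.
# print(should_warn(200, 1000, 1280))  # offset 40 px of 400 -> ratio 0.1, no warning
```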

Definition and Analysis of Shadow Features for Shadow Detection in Single Natural Image (단일 자연 영상에서 그림자 검출을 위한 그림자 특징 요소들의 정의와 분석)

  • Park, Ki Hong;Lee, Yang Sun
    • Journal of Digital Contents Society / v.19 no.1 / pp.165-171 / 2018
  • Shadow is a physical phenomenon observed in natural scenes and has a negative effect on various image processing systems such as intelligent video surveillance, traffic surveillance, and aerial imagery analysis. Therefore, shadow detection should be considered as a preprocessing step in all areas of computer vision. In this paper, we define and analyze various feature elements for shadow detection in a single natural image that does not require a reference image. The shadow features comprise intensity, chromaticity, illuminant-invariant, and color-invariance images, as well as an entropy image, which indicates the uncertainty of the information. The results show that the chromaticity and illuminant-invariant images are effective for shadow detection. In future work, we will define a fusion map of the various shadow feature elements, continue to study shadow detection that can adapt to various lighting levels, and investigate shadow removal using the chromaticity and illuminant-invariant images.
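As a small illustration of one of the features above, the snippet below computes a chromaticity image (each channel divided by the channel sum), which the abstract reports as effective for shadow detection. The simple "dark but chromaticity-stable" mask at the end is an assumption for demonstration, not the paper's detector.

```python
import numpy as np

def chromaticity(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel chromaticity: each channel divided by the channel sum."""
    rgb = rgb.astype(np.float32)
    s = rgb.sum(axis=2, keepdims=True) + 1e-6
    return rgb / s

def naive_shadow_mask(rgb: np.ndarray, dark_quantile: float = 0.25) -> np.ndarray:
    """Very rough mask: pixels that are dark but keep 'normal' chromaticity."""
    intensity = rgb.astype(np.float32).mean(axis=2)
    chroma = chromaticity(rgb)
    dark = intensity < np.quantile(intensity, dark_quantile)
    # Shadowed surfaces tend to stay close to the image's mean chromaticity.
    dist = np.linalg.norm(chroma - chroma.mean(axis=(0, 1)), axis=2)
    return dark & (dist < dist.mean())
```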

Design of Vehicle Safety Protocol on Visible Light Communication using LED (LED 가시광 통신을 이용한 자동차 안전 프로토콜 설계)

  • Kim, Ho-Jin;Kong, In-Yeup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.563-565 / 2010
  • The LED is a low-power, environmentally friendly semiconductor device. It can be used not only for lighting but also for Visible Light Communication (VLC). VLC is a communication technology that sends data by blinking a fluorescent lamp or an LED in the visible spectrum, at a rate that normally cannot be perceived by the human eye. LED-based VLC can be used in many fields. In the field of ITS (Intelligent Transportation Systems), for example on roads under construction, light-emitting traffic signs can be used to transfer information to vehicles. In this paper, light-emitting traffic signs using VLC provide information about road conditions, safety distance, and lane changes. We design a communication protocol to provide this safety service and verify the protocol by experiment.
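The abstract does not give the frame layout, so the sketch below is a purely hypothetical example of how a safety message (road condition, safety distance, lane-change advice) might be packed for on-off-keyed LED transmission; the field names, sizes, preamble, and checksum are assumptions, not the protocol defined in the paper.

```python
import struct
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    """Hypothetical VLC safety frame; illustrative only."""
    road_condition: int     # e.g. 0 = clear, 1 = construction, 2 = icy
    safety_distance_m: int  # recommended following distance in metres (0-255)
    lane_change: int        # 0 = keep lane, 1 = merge left, 2 = merge right

    def to_bytes(self) -> bytes:
        # 1-byte preamble, three 1-byte fields, 1-byte checksum.
        payload = struct.pack("BBBB", 0xA5, self.road_condition,
                              self.safety_distance_m, self.lane_change)
        checksum = sum(payload) & 0xFF
        return payload + bytes([checksum])

    @staticmethod
    def from_bytes(frame: bytes) -> "SafetyMessage":
        preamble, rc, dist, lane, checksum = struct.unpack("BBBBB", frame)
        assert preamble == 0xA5 and checksum == sum(frame[:4]) & 0xFF
        return SafetyMessage(rc, dist, lane)

# msg = SafetyMessage(road_condition=1, safety_distance_m=50, lane_change=2)
# assert SafetyMessage.from_bytes(msg.to_bytes()) == msg
```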

Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.29-35 / 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important step. In this paper, we propose a method to effectively detect license plate number areas by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and character areas directly from the license plate without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and cover various real situations, taking into account both weather and lighting changes. The input images were normalized to reduce color variation, and the deep neural networks used in the experiment were VGG16, VGG19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images: 300 were used for training and 200 for testing. In the simulations, ResNet50 performed best, achieving 95.77% accuracy.
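As a rough sketch of the kind of segmentation network the experiment compares (the exact architecture and training details are not given in the abstract), the code below builds a small fully convolutional head on a torchvision ResNet50 backbone for a three-class map (background / digit region / character region). The class count and layer choices are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PlateSegmenter(nn.Module):
    """ResNet50 encoder + 1x1 conv head, upsampled back to the input size."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the last residual stage (output stride 32).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = self.encoder(x)    # (N, 2048, h/32, w/32)
        logits = self.head(feats)  # (N, num_classes, h/32, w/32)
        return nn.functional.interpolate(logits, size=(h, w),
                                         mode="bilinear", align_corners=False)

# x = torch.randn(1, 3, 224, 448)   # a normalized plate image
# print(PlateSegmenter()(x).shape)  # torch.Size([1, 3, 224, 448])
```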

Implementation of a walking-aid light with machine vision-based pedestrian signal detection (머신비전 기반 보행신호등 검출 기능을 갖는 보행등 구현)

  • Jihun Koo;Juseong Lee;Hongrae Cho;Ho-Myoung An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.1 / pp.31-37 / 2024
  • In this study, we propose a machine vision-based pedestrian signal detection algorithm that operates efficiently even in computing resource-constrained environments. The algorithm is designed to minimize the impact of ambient lighting, such as glare, by sequentially applying HSV color-space image processing, binarization, morphological operations, and labeling. In particular, the algorithm is kept relatively simple so that it runs reliably on embedded systems with limited computing resources. Moreover, the proposed pedestrian signal system not only detects pedestrian signals but also incorporates IoT functionality, allowing wireless integration with a web server so that users can conveniently monitor and control the status of the signal system. In addition, control of a 50 W LED pedestrian light was successfully implemented. The proposed system aims to provide rapid and efficient pedestrian signal detection and control in resource-constrained environments, with potential applicability to real-world road scenarios, and is expected to contribute to safer and more intelligent traffic systems.
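A minimal sketch of the pipeline named above (HSV conversion, binarization, morphology, labeling), assuming OpenCV; the HSV ranges and size filter are placeholder values, not the ones tuned in the paper.

```python
import cv2
import numpy as np

# Placeholder HSV range for the green 'walk' light; real values need tuning.
GREEN_LO = np.array([40, 80, 80], dtype=np.uint8)
GREEN_HI = np.array([90, 255, 255], dtype=np.uint8)

def detect_walk_signal(bgr, min_area=50):
    """Return bounding boxes of candidate green pedestrian-signal blobs."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)              # binarization
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle/glare
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)  # labeling
    boxes = [tuple(stats[i][:4]) for i in range(1, n)
             if stats[i][cv2.CC_STAT_AREA] >= min_area]
    return boxes
```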

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to obtain reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or the camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8-10 simple and easily distinguishable gestures that are useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex English letter patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%-5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of the trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30-50. Each letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates; major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior given the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone. The participating children showed improved concentration and reacted actively to the service with our gesture interface. To demonstrate the effectiveness of the gesture interface, a test was taken by the children after experiencing an English teaching service; those who played with the gesture-interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
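To make the pruning idea above concrete, here is a toy instance-based (nearest-neighbour) classifier that counts each reference pattern's positive and negative contributions and periodically drops patterns with a poor balance. The distance function, feature representation, and pruning thresholds are simplifications for illustration, not the authors' implementation.

```python
from typing import Optional
import numpy as np

class PrunedIBL:
    """1-NN classifier over trajectory features with contribution-based pruning."""

    def __init__(self):
        self.refs = []  # list of [feature_vector, label, pos_count, neg_count]

    def add(self, feature: np.ndarray, label: str) -> None:
        self.refs.append([np.asarray(feature, dtype=float), label, 0, 0])

    def classify(self, feature: np.ndarray, true_label: Optional[str] = None) -> str:
        if not self.refs:
            raise ValueError("no reference patterns")
        feature = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(feature - r[0]) for r in self.refs]
        best = int(np.argmin(dists))
        predicted = self.refs[best][1]
        if true_label is not None:           # online feedback updates contributions
            if predicted == true_label:
                self.refs[best][2] += 1       # positive contribution
            else:
                self.refs[best][3] += 1       # negative contribution (false positive)
        return predicted

    def prune(self, min_pos: int = 1, max_neg: int = 3) -> None:
        """Drop references that rarely help or frequently mislead."""
        self.refs = [r for r in self.refs
                     if r[2] >= min_pos and r[3] <= max_neg]
```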