• Title/Summary/Keyword: Image-Based Lighting

Traffic Light Recognition Based on the Glow Effect at Night Image (야간 영상에서의 빛 번짐 현상을 이용한 교통신호등 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.12
    • /
    • pp.1901-1912
    • /
    • 2017
  • Traffic lights at night usually appear in the image as bright regions larger than their real size due to the glow effect. Moreover, the colors of the lighting region saturate to white, which makes it difficult to distinguish between different traffic lights at night. Many related studies have tried to reduce the glow effect while capturing images; some drastically decreased the camera shutter time to suppress the glow, but this makes the video too dark. This study proposes a new idea that utilizes the glow effect instead: it examines the outer radial region of the traffic light and presents an algorithm that discriminates the color of the traffic light by analyzing that region. The advantage of the proposed method is that it can recognize traffic lights in images captured by an ordinary black-box camera. Experimental results on seven short videos show a traffic light recognition precision of 96.4% and a recall of 98.2%, indicating that the proposed method is valid and effective.
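
As a rough illustration of the outer-radial-region idea described in the abstract above, the sketch below finds a saturated bright blob, samples the glow ring around it, and classifies the mean hue of that ring. The brightness threshold, ring radii, and hue ranges are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the outer-radial-region idea; thresholds are assumptions.
import cv2
import numpy as np

def classify_glow_color(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # The lamp core saturates to white; locate the bright blob first.
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(blob)

    # Sample the outer radial (glow) ring, where the color survives saturation.
    ring = np.zeros_like(gray)
    cv2.circle(ring, (int(cx), int(cy)), int(r * 2.0), 255, -1)
    cv2.circle(ring, (int(cx), int(cy)), int(r * 1.2), 0, -1)

    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mean_hue = cv2.mean(hsv[:, :, 0], mask=ring)[0]   # OpenCV hue range: 0..179

    if mean_hue < 15 or mean_hue > 165:
        return "red"
    if mean_hue < 45:
        return "yellow"
    if mean_hue < 95:
        return "green"
    return "unknown"
```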

RECOGNITION ALGORITHM OF DRIED OAK MUSHROOM GRADINGS USING GRAY LEVEL IMAGES

  • Lee, C.H.;Hwang, H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.773-779
    • /
    • 1996
  • Dried oak mushrooms have complex and varied visual features, and grading and sorting of dried oak mushrooms has traditionally been done by human experts. Though the actions involved in human grading look simple, the decision making underneath them results from complex neural processing of the visual image. Although the processing details involved in human visual recognition have not been fully investigated, it may be said that humans recognize objects in one of three ways: by extracting specific features, from the image itself without extracting those features, or in a combined manner. In most cases, extracting special quantitative features from a camera image requires complex algorithms, and processing the gray-level image imposes a heavy computing load. This is especially problematic when dealing with nonuniform, irregular, and fuzzy-shaped agricultural products, resulting in poor performance because of the sensitivity to the crisp criteria or specific rules set up by the algorithms. In addition, real-time processing constraints often force the use of binary segmentation, in which case some important information about the object can be lost. In this paper, a neural-network-based real-time recognition algorithm is proposed that extracts no visual features and uses only the directly captured raw gray images. A specially formatted, adaptable-size grid was proposed for the network input, and illumination compensation was performed to accommodate the variable lighting environment. The proposed grading scheme showed very successful results.
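
The sketch below illustrates the grid-input idea described above: the raw gray image is average-pooled onto a coarse grid, illumination-compensated, and fed directly to a small neural network. The grid size, the compensation step, and the scikit-learn MLPClassifier are assumptions chosen for the illustration, not the paper's design.

```python
# Sketch of a grid-based raw-gray-image input to a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

GRID = (16, 16)  # assumed grid resolution for the network input

def grid_features(gray):
    """Average-pool a gray image (2-D float array) onto a fixed grid."""
    h, w = gray.shape
    gh, gw = GRID
    pooled = gray[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    pooled -= pooled.mean()            # crude illumination compensation
    return pooled.ravel()

# Usage sketch: train_images is a list of gray arrays, train_grades their labels.
# X = np.stack([grid_features(img.astype(float)) for img in train_images])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, train_grades)
# grade = clf.predict([grid_features(test_image.astype(float))])
```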

Blind Quality Metric via Measurement of Contrast, Texture, and Colour in Night-Time Scenario

  • Xiao, Shuyan;Tao, Weige;Wang, Yu;Jiang, Ye;Qian, Minqian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4043-4064
    • /
    • 2021
  • Night-time image quality evaluation is an urgent requirement in visual inspection. The lighting environment at night results in low brightness, low contrast, loss of detailed information, and colour dissonance, which makes delicately evaluating image quality at night a daunting task. A new blind quality assessment metric for realistic night-time scenarios is presented in this article through a comprehensive consideration of contrast, texture, and colour. Specifically, the colour-gray-difference (CGD) histogram of image blocks, which represents contrast features, is computed first. Next, texture features measured by the mean subtracted contrast normalized (MSCN)-weighted local binary pattern (LBP) histogram are calculated. Then statistical features in the Lαβ colour space are extracted. Finally, the quality prediction model is built with support vector regression (SVR) on the extracted contrast, texture, and colour features. Experiments conducted on the NNID, CCRIQ, LIVE-CH, and CID2013 databases indicate that the proposed metric is superior to the compared BIQA metrics.
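
The MSCN (mean subtracted contrast normalized) coefficients mentioned above can be computed as below; the Gaussian window size and the stabilizing constant are conventional choices, not necessarily the paper's exact ones.

```python
# Sketch of MSCN coefficient computation used to weight the LBP histogram.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7 / 6, C=1.0):
    """Return MSCN coefficients of a gray image given as a float 2-D array."""
    mu = gaussian_filter(gray, sigma)                    # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu  # local variance
    std = np.sqrt(np.maximum(var, 0.0))                  # local contrast
    return (gray - mu) / (std + C)

# The resulting coefficients would then weight a local-binary-pattern histogram,
# and the pooled features feed a support vector regressor (e.g. sklearn.svm.SVR).
```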

Analysis of Users' Emotions on Lighting Effect of Artificial Intelligence Devices (인공지능 디바이스의 조명효과에 대한 사용자의 감정 평가 분석)

  • Hyeon, Yuna;Pan, Young-hwan;Yoo, Hoon-Sik
    • Science of Emotion and Sensibility
    • /
    • v.22 no.3
    • /
    • pp.35-46
    • /
    • 2019
  • Artificial intelligence (AI) technology has been evolving to recognize and learn the languages, voice tones, and facial expressions of users so that systems can respond to users' emotions in various contexts. Many AI-based services of particular importance in communication with users provide emotional interaction. However, research on nonverbal interaction as a means of expressing emotion in AI systems is still insufficient. We studied the effect of lighting on users' emotional interaction with an AI device, focusing on color and flickering motion. The AI device used in this study expresses emotions with six colors of light (red, yellow, green, blue, purple, and white) and with a three-level flickering effect (high, middle, and low velocity). We studied the responses of 50 men and women in their 20s and 30s to the emotions expressed by the light colors and flickering effects of the AI device. We found that each light color represented an emotion largely similar to the emotional image reported in a previous color-sensibility study. The rate of flickering of the lights produced changes in emotional arousal and balance. The changes in arousal patterns were of similar intensity across all colors, whereas the changes in balance patterns were somewhat related to the emotional image in the previous color-sensibility study, though for different colors. As AI systems and devices become more diverse, our findings are expected to contribute to designing users' emotional interaction with AI devices through lighting.

Design of a Pattern Array Method for Multi-Data Augmentation of Power Equipment Using a Single Image Pattern (단일 이미지 패턴을 이용한 다수의 전력설비 데이터를 증강하기 위한 패턴 배열화 기법 설계)

  • Kim, Seoksoo
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.11
    • /
    • pp.1-8
    • /
    • 2020
  • As power consumption grows and individual power brokerages and power production facilities increase, research on augmented-reality-based monitoring systems that help on-site facility managers maintain and repair power facilities is being actively conducted. However, existing augmented-reality-based monitoring systems have difficulty detecting patterns accurately because of the external environment, facility complexity, and interference from the lighting environment, and they cannot match the various sensing and service information of a power facility to a single pattern. Because sensor information is matched using a separate image pattern for each sensor of a power facility, a plurality of image patterns is required to augment and provide all of the information. In this paper, we propose a single-image pattern arrangement method that matches and provides multiple pieces of information through array combinations of feature patterns within a single image composed of a plurality of feature patterns.
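
A highly simplified sketch of the arrangement idea described above follows: the ordering of several feature patterns within one marker image, rather than the patterns themselves, selects which facility data set to augment. The pattern identifiers and the lookup scheme are hypothetical illustrations, not the paper's design.

```python
# Hypothetical mapping from a pattern arrangement to a data-set index.
from itertools import permutations

PATTERNS = ("A", "B", "C", "D")  # hypothetical feature-pattern identifiers

# Enumerate every ordering of the patterns once; each ordering indexes one data set.
ARRANGEMENT_TO_DATA = {order: idx for idx, order in enumerate(permutations(PATTERNS))}

def data_index(detected_order):
    """Map the left-to-right order of detected patterns to a data-set index."""
    return ARRANGEMENT_TO_DATA.get(tuple(detected_order))

# Example: four patterns give 4! = 24 distinct arrangements from a single image,
# so one marker image could be matched to 24 different sensing/service data sets.
print(data_index(["B", "A", "D", "C"]))
```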

Performance Enhancement of Shadow Removal Algorithms Using Color Information of Objects (물체의 컬러 정보를 이용한 그림자 제거 기법의 성능 향상)

  • Kim, Hee-Sang;Kim, Ji-Hong;Choi, Doo-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.7
    • /
    • pp.941-946
    • /
    • 2009
  • As automatic surveillance and patrol systems based on image processing become widespread, the need for technology to extract objects from images increases. Extraction is more difficult when the lighting conditions change from time to time. There are many approaches to extracting objects from images while excluding shadows, but they share a common problem: part of the object region is lost along with the removed shadow. In this paper, a restoration method that uses the color information of objects to mitigate this problem is presented. The usefulness of the method is verified using images taken under different lighting conditions and selected from well-known databases.
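
A minimal sketch of the restoration idea described above: pixels discarded by the shadow-removal step are reclaimed when their color is close to a simple color model of the object. The mean-color model, color distance, and threshold are illustrative assumptions.

```python
# Sketch of reclaiming object pixels lost to shadow removal via color similarity.
import cv2
import numpy as np

def restore_object(frame_bgr, object_mask, removed_mask, max_dist=40.0):
    """Return object_mask with removed pixels added back if color-consistent."""
    # Mean object color as a very simple color model (masks are uint8 0/255 maps).
    mean_color = cv2.mean(frame_bgr, mask=object_mask)[:3]
    dist = np.linalg.norm(
        frame_bgr.astype(np.float32) - np.array(mean_color, np.float32), axis=2)
    reclaimed = (removed_mask > 0) & (dist < max_dist)
    restored = object_mask.copy()
    restored[reclaimed] = 255
    return restored
```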

A Study of Normal Map Extraction and Lighting Technology for Real-time Image Based Lighting (실시간 영상기반 라이팅을 위한 고속 노말맵 추출방법 및 라이팅 기술 연구)

  • Yu, Se-Un;Bang, Chan-Yeong;Lee, Sang-Hwa;Lee, Sang-Yeop;An, Sang-Cheol;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1031-1036
    • /
    • 2007
  • A major recent research trend in virtual reality is tangible-space technology, which increases the sense of immersion. Tangible-space technology creates the effect that users who are physically apart feel as if they were in the same space. This paper focuses on matching the users' surrounding environments and introduces two key techniques for matching the lighting information of the two spaces in real time. The first is a fast method for extracting normal vectors from visual hull data; the second is a lighting method that reflects the illumination environment around the user. The first method computes the normal map only in regions of the visual hull data where depth exists, and it reduces unnecessary computation by increasing or decreasing the number of neighboring vectors used in the normal-map calculation according to how strongly the geometry of the surrounding polygons varies. The second method reduces unnecessary computation by selectively using light sources for lighting, taking into account the light intensity in the surrounding illumination information and the reflectance characteristics of the object to be lit. Whereas previous image-based lighting techniques used pre-captured images or were applied to still images, this paper presents fast lighting computation techniques as an attempt to implement lighting in real time. The results of this study are expected to enable practical and broad application of image-based lighting research and to contribute to the production of high-quality content.
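
The selective light-source idea described above can be sketched as follows: only environment-map directions whose contribution (light intensity scaled by the object's reflectance) is large enough are kept for real-time lighting. The keep ratio and the diffuse-only shading model are illustrative assumptions, not the paper's exact criterion.

```python
# Sketch of selecting a subset of environment lights and shading with them.
import numpy as np

def select_lights(env_intensity, env_dirs, albedo, keep_ratio=0.05):
    """Keep the brightest fraction of environment samples, weighted by albedo."""
    contribution = env_intensity * albedo                # per-sample contribution
    k = max(1, int(keep_ratio * len(contribution)))
    idx = np.argsort(contribution)[-k:]                  # strongest k light samples
    return env_dirs[idx], env_intensity[idx]

def shade_diffuse(normals, light_dirs, light_intensity, albedo):
    """Lambertian shading with the selected lights (normals: N x 3, dirs: K x 3)."""
    cosines = np.clip(normals @ light_dirs.T, 0.0, None)  # N x K
    return albedo * cosines @ light_intensity             # N radiance values
```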

A Study on the Visual and Auditory Inducement of VR Image Contents and the Inducement Components for Immersion Improvement (몰입감 향상을 위한 VR 영상 콘텐츠의 시청각 유도와 구성요소에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.14 no.11
    • /
    • pp.495-500
    • /
    • 2016
  • Since 2016, the VR market has been growing rapidly. The most critical and pressing issue in the VR market is VR content, because making techniques and varied VR content are needed to satisfy users' immersion and interaction as much as possible. Therefore, this study focused on VR image content, examined domestic and foreign cases of the visual and auditory inducement components that maintain and improve immersion, and thereby tried to find the right direction for visual and auditory inducement. As a result, the visual and auditory components of such inducement were found to be photographing, editing, lighting, stitching, graphics, effects, voice actors' narration, dubbing, character voices, background sound, and sound effects; its technical and content components were found to be photographing technique, editing technique, lighting, stitching, graphics and effects, sound and sound effects, theatrical direction based on mise-en-scene, the lines and narration of characters, and the movements of characters and objects. For VR image content, not only visual and auditory components but also technical and content components are necessary to improve immersion, and continued research on them will be necessary.

A 2-D Barcode Detection Algorithm based on Local Binary Patterns (지역적 이진패턴을 이용한 2차원 바코드 검출 알고리즘)

  • Choi, Young-Kyu
    • Journal of the Semiconductor & Display Technology
    • /
    • v.8 no.2
    • /
    • pp.23-29
    • /
    • 2009
  • Two-dimensional barcodes were proposed a decade ago to increase the data capacity of one-dimensional symbologies. In this paper, a new 2D barcode detection algorithm based on local binary patterns (LBP) is presented. To locate 2D barcode symbols, a texture analysis scheme based on the LBP is adopted, and a gray-scale projection with sub-pixel operations is used to separate the symbol precisely from the input image. Finally, the segmented symbol is normalized using an inverse perspective transformation for the decoding process. The proposed method maintains high performance under various lighting and printing conditions and under strong perspective deformations. Experiments show that the method is very robust and efficient in detecting the symbol area for various types of 2D barcodes.
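
The basic 8-neighbour local binary pattern used in the texture-analysis step described above can be computed as below; the barcode-region decision hinted at in the closing comment is an illustrative assumption, not the paper's exact rule.

```python
# Sketch of the basic 8-neighbour LBP code computation.
import numpy as np

def lbp8(gray):
    """Compute the 8-neighbour LBP code for each interior pixel of a gray image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.int32) << bit)
    return codes

# A block whose LBP histogram is dominated by high-contrast codes is a candidate
# barcode region; gray-scale projection would then refine its boundaries.
```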

Deep Learning based Object Detector for Vehicle Recognition on Images Acquired with Fisheye Lens Cameras (어안렌즈 카메라로 획득한 영상에서 차량 인식을 위한 딥러닝 기반 객체 검출기)

  • Hieu, Tang Quang;Yeon, Sungho;Kim, Jaemin
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.128-135
    • /
    • 2019
  • This paper presents a deep-learning-based object detection method for recognizing vehicles in images acquired by cameras installed on the ceiling of an underground parking lot. First, we present an image enhancement method that improves vehicle detection performance in dark lighting environments. Second, we present a new CNN-based multiscale classifier for detecting vehicles in images acquired by cameras with fisheye lenses. Experiments show that the presented vehicle detector performs better than conventional ones.
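
The abstract does not specify the enhancement method, so the sketch below uses CLAHE on the luminance channel as an illustrative stand-in for a low-light enhancement step of the kind described above.

```python
# Illustrative low-light enhancement before running the vehicle detector.
import cv2

def enhance_dark_frame(bgr):
    """Brighten a dark parking-lot frame via contrast-limited histogram equalization."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                       # equalize local contrast on luminance
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```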