• Title/Summary/Keyword: RGB 색상 (RGB color)

Assessment of Fire-Damaged Mortar using Color image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min;Lee, Byung-Do;Yoo, Sung-Hun;Ham, Nam-Hyuk;Roh, Young-Sook
    • Journal of the Korea institute for structural maintenance and inspection / v.23 no.3 / pp.83-91 / 2019
  • The purpose of this study is to assess fire-damaged concrete structures using a digital camera and image-processing software. To simulate fire damage, mortar and paste samples with W/C = 0.5 (normal strength) and 0.3 (high strength) were heated in an electric furnace from $100^{\circ}C$ to $1000^{\circ}C$. The paste was ground into a powder to measure CIELAB chromaticity, and the samples were photographed with a digital camera; RGB chromaticity was then measured with color-intensity analysis software. At a heating temperature of $400^{\circ}C$, the residual compressive strength of the W/C = 0.5 and 0.3 samples was 87.2 % and 86.7 %, respectively. Above $500^{\circ}C$, however, the strength dropped sharply, to 55.2 % and 51.9 % of the original strength. At $700^{\circ}C$ or higher, the W/C = 0.5 and 0.3 samples retained only 26.3 % and 27.8 % of their strength, so the durability of the structure could no longer be secured. The $L^*a^*b^*$ color analysis shows that $b^*$ increases rapidly after $700^{\circ}C$, indicating that the yellow component becomes strong beyond that temperature. The RGB analysis likewise found that the histogram kurtosis and frequency of the Red and Green channels increase after $700^{\circ}C$, i.e., the number of Red and Green pixels grows. Therefore, it appears possible to estimate the degree of damage by checking the change in yellow ($b^*$ or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
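
As a rough illustration of the chromaticity workflow described above (photograph, then $L^*a^*b^*$ statistics and R/G histograms), the following Python/OpenCV sketch computes the mean $b^*$ value and the Red/Green histogram kurtosis of a surface photo. The file name and the exact statistics are placeholders, not the authors' software.

```python
import cv2
import numpy as np

# Hypothetical input photo of the heated mortar surface
img = cv2.imread("mortar_sample.jpg")

# OpenCV stores 8-bit L*a*b* with a*/b* offset by +128
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
b_star = lab[:, :, 2].astype(np.float32) - 128.0
print("mean b* (blue-yellow axis):", b_star.mean())

# Red/Green histograms; the paper reports their kurtosis rising after 700 deg C
bins = np.arange(256)
for name, ch in (("R", img[:, :, 2]), ("G", img[:, :, 1])):
    hist = cv2.calcHist([ch], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    mean = (bins * p).sum()
    var = ((bins - mean) ** 2 * p).sum()
    kurt = ((bins - mean) ** 4 * p).sum() / var ** 2 - 3.0  # excess kurtosis
    print(f"{name}: peak bin = {hist.argmax()}, excess kurtosis = {kurt:.2f}")
```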

Combining of GIS and the Food Chain Assessment Result around Yeonggwang Nuclear Power Plant (영광 원전 주변 육상생태계 평가 결과와 GIS의 연계)

  • Kang, H.S.;Jun, I.;Keum, D.K.;Choi, Y.H.;Lee, H.S.;Lee, C.W.
    • Journal of Radiation Protection and Research / v.30 no.4 / pp.237-245 / 2005
  • The distribution of radionuclides in soil and plants was calculated, assuming an accidental release of radionuclides from the Yeonggwang Nuclear Power Plant. The results, which show how the concentration changes with time and region, were displayed with GIS. The GIS comprised the commercial program ArcView (ESRI) and a base digital map at 1:5000 scale covering a 30 km by 30 km area around the plant. The target material was $^{137}Cs$ in the soil of the Yeonggwang area. Given the deposited $^{137}Cs$ concentrations, the ECOREA-II code computed the $^{137}Cs$ concentration in the soil and plants for an area divided into 16 azimuthal sectors, 480 unit cells in total, with concentrations that also varied with time. The results were loaded into the attribute data of previously designed polygon cells in ArcView. To display the change in concentration over time with a monotone color scale, the RGB values of the ArcView color ramp were controlled. This display helps the public clearly understand how the radionuclide concentration changes around the Yeonggwang area.
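
The display technique amounts to mapping each cell's concentration onto a monotone RGB ramp. A minimal sketch of such a mapping is shown below; the concentration range and the white-to-red ramp are assumptions for illustration, not the values used with ArcView.

```python
# Illustrative only: map a 137Cs concentration onto a monotone white-to-red RGB
# ramp, similar in spirit to controlling the GIS color ramp; the concentration
# range and color choice are assumptions.
import numpy as np

def concentration_to_rgb(c_bq_per_kg, c_min=1.0, c_max=1.0e4):
    """Log-scale the concentration and return an (R, G, B) tuple."""
    t = np.clip((np.log10(c_bq_per_kg) - np.log10(c_min)) /
                (np.log10(c_max) - np.log10(c_min)), 0.0, 1.0)
    level = int(round(255 * (1.0 - t)))
    return (255, level, level)          # white (low) -> pure red (high)

for c in (1, 100, 10_000):
    print(c, "Bq/kg ->", concentration_to_rgb(c))
```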

Development of RGBW Dimming Control Sensitivity Lighting System based on the Intelligence Algorithm (지능형 알고리즘 기반 RGBW Dimming control LED 감성조명 시스템 개발)

  • Oh, Sung-Kwun;Lim, Sung-Joon;Ma, Chang-Min;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.359-364 / 2011
  • This study uses sensitivity engineering and fuzzy reasoning, one of the artificial intelligence techniques, to develop an LED lighting system that systematically controls the LED color temperature. In the area of sensitivity engineering, the corresponding sensitivity word can be determined by considering the relation between color and the emotion expressed as an adjective, and the preferred lesson subject can be determined by considering the relation between brain waves measured from the human brain and color temperature. From the chosen sensitivity word and lesson subject, we adjust the color temperature of the RGB (Red, Green, Blue) LEDs. In addition, using the latitude and longitude obtained from GPS (Global Positioning System), we compute the current solar altitude on-line, and from the temperature and humidity sensor readings we compute the discomfort index. Considering the solar altitude as well as the discomfort index, the illumination of the W (white) LED and the color temperature of the RGB LEDs are determined. The LED sensitivity lighting control system is built by considering the sensitivity word, the lesson subject, the solar altitude, and the discomfort index. The developed system produces a more suitable atmosphere and improves the effectiveness of lessons as well as business tasks.
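
The discomfort index mentioned above is commonly computed from temperature and relative humidity; the sketch below uses Thom's formulation and a hypothetical mapping to the white-LED level, since the abstract does not give the authors' exact formula or thresholds.

```python
# A common form of the temperature-humidity discomfort index (Thom's DI) and a
# hypothetical mapping to the white-LED dimming level; the authors' exact
# formula and thresholds are not given in the abstract.
def discomfort_index(temp_c: float, rel_humidity_pct: float) -> float:
    return 0.81 * temp_c + 0.01 * rel_humidity_pct * (0.99 * temp_c - 14.3) + 46.3

def white_led_level(di: float) -> float:
    """Assumed rule: dim the W channel linearly from DI=68 (full) to DI=80 (off)."""
    return max(0.0, min(1.0, (80.0 - di) / (80.0 - 68.0)))

di = discomfort_index(28.0, 70.0)   # about 78.4
print(di, white_led_level(di))      # W channel dimmed to about 0.14
```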

Real Time Traffic Signal Recognition Using HSI and YCbCr Color Models and Adaboost Algorithm (HSI/YCbCr 색상모델과 에이다부스트 알고리즘을 이용한 실시간 교통신호 인식)

  • Park, Sanghoon;Lee, Joonwoong
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.2 / pp.214-224 / 2016
  • This paper proposes an algorithm that effectively detects traffic lights and recognizes traffic signals in the daytime using a monocular camera mounted behind the front windshield of a vehicle. The algorithm consists of three main parts. The first part generates traffic-light candidates: after converting the RGB color model into the HSI and YCbCr color spaces, regions likely to be a traffic light are detected, and edge processing is applied to these regions to extract the borders of the traffic light. The second part separates the candidates into traffic lights and non-traffic lights using Haar-like features and the Adaboost algorithm. The third part recognizes the signals of the traffic light using template matching. Experimental results show that the proposed algorithm successfully detects traffic lights and recognizes traffic signals in real time in a variety of environments.
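
The candidate-generation stage can be illustrated with a short OpenCV sketch: convert the frame to HSV (used here as a stand-in for HSI) and YCrCb, keep strongly red pixels, and extract candidate regions for the later Haar-like/Adaboost verification. All ranges are illustrative guesses, not the paper's values.

```python
# Candidate generation sketch: HSV (stand-in for HSI) plus the Cr channel of
# YCrCb to keep strongly red pixels; connected regions become candidates for
# the Haar-like / Adaboost verification stage. Ranges are illustrative.
import cv2

frame = cv2.imread("road_scene.jpg")                  # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

red_hsv = (cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) |
           cv2.inRange(hsv, (170, 120, 80), (180, 255, 255)))
red_cr = (ycrcb[:, :, 1] > 160).astype("uint8") * 255   # Cr channel gate
candidates = cv2.bitwise_and(red_hsv, red_cr)

contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
print(len(boxes), "red-light candidate regions")
```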

Presentation Method Using Depth Information (깊이 정보를 이용한 프레젠테이션 방법)

  • Kim, Ho-Seung;Kwon, Soon-Kak
    • Journal of Broadcast Engineering / v.18 no.3 / pp.409-415 / 2013
  • Recently, various devices have been developed for the convenience of presentations; equipment that adds keyboard and mouse functions to a laser pointer has become the mainstream. However, such devices allow only limited actions and a small number of events. In this paper, we propose a method that increases the degrees of freedom of a presentation by controlling it with the hand, using a depth camera. The proposed method recognizes the horizontal and vertical position of the hand pointer and the distance between the hand and the camera from the depth and RGB cameras, and then triggers a presentation event according to the location and pattern with which the hand touches the screen. In the experiments, with the camera fixed at the left side of the screen, nine presentation events were performed correctly.
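
The core idea, declaring a "touch" when the hand's depth falls within a small band of the calibrated screen depth, can be sketched as follows; the depth values, band width, and function names are hypothetical, and capture, calibration, and the actual slide events are omitted.

```python
# Hedged sketch: the hand 'touches' the screen when its depth lies within a
# small band of the calibrated screen depth. Values and names are assumptions.
import numpy as np

SCREEN_DEPTH_MM = 1500     # assumed camera-to-screen distance
TOUCH_BAND_MM = 40         # hand counts as touching inside this band

def hand_pointer(depth_mm: np.ndarray):
    """Return (row, col, touching) for the nearest valid point in front of the screen."""
    mask = (depth_mm > 0) & (depth_mm < SCREEN_DEPTH_MM - 5)
    if not mask.any():
        return None
    masked = np.where(mask, depth_mm, np.iinfo(depth_mm.dtype).max)
    r, c = np.unravel_index(np.argmin(masked), depth_mm.shape)
    touching = depth_mm[r, c] >= SCREEN_DEPTH_MM - TOUCH_BAND_MM
    return int(r), int(c), bool(touching)

demo = np.full((480, 640), 1500, dtype=np.uint16)
demo[200:220, 300:320] = 1480            # hand-sized blob just in front of the screen
print(hand_pointer(demo))                # -> (200, 300, True)
```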

Development of Statistical Analyzing Tool and System of Automatic Magnetizer (착자 자동화 시스템 및 통계분석 툴 개발)

  • Lee, Cheon-Hui;Ha, Gi-Jong
    • The Transactions of the Korea Information Processing Society / v.3 no.4 / pp.1014-1025 / 1996
  • The magnetizer, which magnetizes the magnets used for RGB (Red, Green, Blue) control in a CRT (Cathode-Ray Tube), and the inspection unit used to test the state of magnetization have until now been imported by magnet manufacturers. Because they are operated manually, they require a great deal of time and increase the probability of operator error. In this study, we developed a unified system that automates the entire process from magnet production to inspection. By checking the status of every operation and precisely analyzing the quality distribution with this system, we found that the reliability of both magnetization and the magnetization tests increased, and defective products were almost never produced.

A Study on development for image detection tool using two layer voting method (2단계 분류기법을 이용한 영상분류기 개발)

  • 김명관
    • Journal of the Korea Computer Industry Society / v.3 no.5 / pp.605-610 / 2002
  • In this paper, we propose an Internet filtering tool that allows parents to manage their children's Internet access and block access to Internet sites they deem inappropriate. Other filtering tools, such as Cyber Patrol, NCA Patrol, Argus, and Netfilter, rely only on URL filtering or keyword-detection methods, which are applicable in limited cases. Our approach instead focuses on the image color space. First, we convert the RGB color space to HLS (Hue, Luminance, Saturation). Next, the HLS histograms are learned by several classification methods, including a cohesion factor, naive Bayes, and k-nearest neighbor. Finally, we combine the results of the individual classifiers by voting. Using 2,000 pictures, we show that the two-layer voting result is more accurate than the individual methods.
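
The two-layer scheme, per-classifier decisions on HLS histogram features followed by a vote, can be sketched with stand-in classifiers (naive Bayes and k-nearest neighbor from scikit-learn); the authors' cohesion-factor classifier is not reproduced, and the training data here are random placeholders.

```python
# Sketch of the two stages with stand-in classifiers; not the authors' tool.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def hls_histogram(bgr_image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Stage 1 feature: a normalized 3-D histogram in HLS space."""
    hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
    hist = cv2.calcHist([hls], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256]).ravel()
    return hist / hist.sum()

def majority_vote(classifiers, features) -> np.ndarray:
    """Stage 2: combine per-classifier predictions (1 = block, 0 = allow)."""
    votes = np.array([clf.predict(features) for clf in classifiers])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Placeholder training data; in practice each row would be hls_histogram(image)
rng = np.random.default_rng(0)
X, y = rng.random((200, 16 ** 3)), rng.integers(0, 2, 200)
classifiers = [GaussianNB().fit(X, y), KNeighborsClassifier(5).fit(X, y)]
print(majority_vote(classifiers, X[:5]))
```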

Development of the Hand Recognition System for the Mouse Control (마우스 제어를 위한 손 인식 시스템 개발)

  • Jeong, Jong-Myeon;Jang, Jung-Ryun;Kim, Yu-Il;Park, Ji-Won;Lee, Won-Joo
    • Proceedings of the Korean Society of Computer Information Conference / 2011.01a / pp.173-174 / 2011
  • In this paper, we propose a hand recognition system for mouse control. To this end, the motion region is obtained from the difference image between the background image and the input image, and regions similar to skin color are obtained by converting the RGB color model to the HSV color model. The hand candidate region is extracted from the intersection of the two, noise is removed with morphological operations, and the hand image is extracted. The extracted hand image is then separated into a palm region and finger regions using morphological operations; the position of the palm region is used as the mouse coordinates and the number of fingers as the mouse event, thereby controlling the mouse. Experimental results show that the proposed system can be used effectively for mouse control.
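
A minimal OpenCV sketch of the described pipeline, a background difference intersected with an HSV skin mask, then morphological cleanup and a centroid as the mouse coordinate, is given below; the thresholds and file names are assumptions, not the authors' values.

```python
# Illustrative sketch: motion mask from a background difference, an HSV skin
# mask, their intersection, morphological cleanup, and a hand centroid used as
# the mouse position. Thresholds and file names are assumed.
import cv2

background = cv2.imread("background.jpg")     # hypothetical reference frame
frame = cv2.imread("frame.jpg")               # hypothetical current frame

diff = cv2.cvtColor(cv2.absdiff(frame, background), cv2.COLOR_BGR2GRAY)
_, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)     # moving pixels

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))            # rough skin-tone band

hand = cv2.bitwise_and(motion, skin)          # intersection of the two masks
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
hand = cv2.morphologyEx(hand, cv2.MORPH_OPEN, kernel)   # remove small noise
hand = cv2.morphologyEx(hand, cv2.MORPH_CLOSE, kernel)  # fill small holes

M = cv2.moments(hand)                         # hand centroid -> mouse position
if M["m00"] > 0:
    print("mouse position:", int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
```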

Position Tracking of Underwater Robot for Nuclear Reactor Inspection using Color Information (색상정보를 이용한 원자로 육안검사용 수중로봇의 위치 추적)

  • 조재완;김창회;서용칠;최영수;김승호
    • Proceedings of the IEEK Conference / 2003.07e / pp.2259-2262 / 2003
  • This paper describes a visual tracking procedure for an underwater mobile robot used for nuclear reactor vessel inspection, which is required to find foreign objects such as loose parts. The yellowish body of the underwater robot presents a strong contrast to the borated cold water of the reactor vessel, which is tinged with indigo by the Cerenkov effect. In this paper, we locate and track the position of the underwater mobile robot using these two colors, yellow and indigo. The center-coordinate extraction procedure is as follows. The first step is to segment the robot body from the cold water with its indigo background. From the RGB color components of the monitoring image taken with the color CCD camera, we select the red component. In the selected red image, we extract the position of the underwater mobile robot using binarization, labelling, and centroid extraction. In an experiment carried out at the Youngkwang Unit 5 reactor vessel, we tracked the center position of the underwater robot submerged near the cold leg and the hot leg, at a depth of about 10 m.
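
The stated procedure (red channel, binarization, labelling, centroid extraction) maps directly onto a few OpenCV calls; the threshold below is a guess, not the value used in the experiment.

```python
# Minimal sketch of the stated procedure: take the red channel, binarize,
# label connected components, and report the centroid of the largest blob.
import cv2
import numpy as np

frame = cv2.imread("reactor_view.jpg")             # hypothetical CCD frame
red = frame[:, :, 2]                               # OpenCV stores BGR
_, binary = cv2.threshold(red, 170, 255, cv2.THRESH_BINARY)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# skip label 0 (background); keep the largest remaining blob as the robot body
if n > 1:
    body = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    cx, cy = centroids[body]
    print(f"robot centre at ({cx:.1f}, {cy:.1f}) px")
```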

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.1 / pp.16-25 / 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automated precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image, after which a thresholding technique such as the Otsu method is applied to separate the vegetation from the background. An obvious demerit of threshold-based segmentation is that each pixel is classified as vegetation or background solely from its own color feature, without taking the color features of its neighboring pixels into account. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input data of the neural network for each pixel are the two-dimensional gray-level values surrounding that pixel. To generate the gray-scale image from a raw RGB color image, the well-known color index Excess Green minus Excess Red was used. Experimental results on 80 field images of 4 vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean.
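
The Excess Green minus Excess Red index used to build the gray-scale input can be computed as below; the file names are placeholders and the MLP classifier itself is not shown.

```python
# Small sketch of the Excess Green minus Excess Red (ExG - ExR) index used to
# build the gray-scale image; the neighborhood-based MLP classifier is omitted.
import cv2
import numpy as np

bgr = cv2.imread("field_image.jpg").astype(np.float32)   # hypothetical file
b, g, r = cv2.split(bgr / 255.0)
s = b + g + r + 1e-6                     # avoid division by zero
bn, gn, rn = b / s, g / s, r / s         # chromatic (normalized) coordinates

exg = 2.0 * gn - rn - bn                 # Excess Green
exr = 1.4 * rn - gn                      # Excess Red
exgr = exg - exr                         # ExG - ExR index

gray = cv2.normalize(exgr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("exgr_gray.png", gray)       # gray-scale image fed to the classifier
```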