• Title/Summary/Keyword: RGB Color

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the most recent five years, more accidents occur at night than during the day. Traffic accidents have various causes, and one of the major causes is inappropriate or missing street lighting, which confuses drivers' vision. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street-light facilities and areas without street lights. The application is implemented in native C/C++ using the Android NDK, which gives faster operation than code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lanes and stores the computed lane luminance in the database server. It receives road video from the smartphone camera and reduces computational cost by restricting processing to a region of interest (ROI) of the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. A Hough line transform is then applied to obtain a group of candidate lanes, and the two lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lanes. When both lanes are detected, we define a triangular area whose height extends 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area. The Y value is computed from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is scaled to a range from 0 to 100 to express the road luminance, and individual pixel values are visualized with colors between black and green. The car's location, obtained from the smartphone's GPS sensor, is stored in the database server by wireless communication every 10 minutes, together with the luminance of the road about 60 meters ahead obtained from the analyzed lane video. We expect that the collected road luminance information can warn drivers for safe driving and help improve renovation plans for road lighting management.
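
  A rough sketch of this pipeline in Python with OpenCV follows; the paper's implementation runs as native C/C++ through the Android NDK, and the thresholds and the sampling region used here are illustrative assumptions rather than the paper's values.

    import cv2
    import numpy as np

    def estimate_lane_luminance(frame_bgr):
        h, w = frame_bgr.shape[:2]
        roi = frame_bgr[h // 2:, :]                       # lower half as the ROI

        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                  # lane outlines
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=20)
        if lines is None:
            return None

        # Split candidate lines into left/right lanes by the sign of their gradient.
        left, right = [], []
        for x1, y1, x2, y2 in lines[:, 0]:
            if x2 == x1:
                continue
            slope = (y2 - y1) / (x2 - x1)
            if slope < -0.3:
                left.append((x1, y1, x2, y2))
            elif slope > 0.3:
                right.append((x1, y1, x2, y2))
        if not left or not right:
            return None

        # Luminance comes from the Y channel of the YCbCr image; a small patch
        # near the bottom centre stands in for the paper's 20-pixel triangle
        # below the lane intersection.
        y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
        patch = y_channel[-20:, w // 3: 2 * w // 3]
        return float(patch.mean()) * 100.0 / 255.0        # scaled to 0-100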

Image Contrast and Sunlight Readability Enhancement for Small-sized Mobile Display (소형 모바일 디스플레이의 영상 컨트라스트 및 야외시인성 개선 기법)

  • Chung, Jin-Young;Hossen, Monir;Choi, Woo-Young;Kim, Ki-Doo
    • Journal of IKEEE / v.13 no.4 / pp.116-124 / 2009
  • Recently, the CPU performance of mobile-phone modem chipsets and multimedia processors has become as high as that of notebook PCs, which is why the mobile phone has emerged as a leading icon of consumer-electronics convergence. Various mobile-phone applications such as DMB, digital cameras, video telephony, and full internet browsing are now offered to consumers, and to meet these demands image quality has become increasingly important. A mobile phone is a portable device used both indoors and outdoors, so the deterioration of image quality under different ambient light sources must be overcome. Furthermore, touch windows are now common on mobile display panels, and the low transmittance of the ITO film causes contrast loss. This paper presents an image enhancement algorithm to be embedded in an image enhancement SoC. For contrast enhancement, we propose a clipped histogram stretching method that adapts to the input images, together with an S-shaped curve and a gain/offset method for the static case, while the CIELCh color space is used for sunlight readability enhancement by controlling the lightness and chroma components according to the reading of a light sensor. Finally, the performance of the proposed algorithm is evaluated using the histogram, RGB pixel distribution, entropy, and dynamic range of the resulting images. We expect the proposed algorithm to be suitable for image enhancement in embedded SoC systems for small-sized mobile displays.
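
  As an illustration of the contrast step, below is a minimal Python sketch of clipped histogram stretching on a grayscale image; the clip fraction is an assumed parameter, and the paper's SoC additionally combines an S-shaped curve, gain/offset control, and CIELCh-based processing not shown here.

    import numpy as np

    def clipped_histogram_stretch(gray, clip=0.01):
        """Stretch intensities after clipping the darkest/brightest `clip`
        fraction of pixels, so outliers do not dominate the mapping."""
        lo = np.quantile(gray, clip)
        hi = np.quantile(gray, 1.0 - clip)
        stretched = (gray.astype(np.float32) - lo) / max(hi - lo, 1e-6)
        return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)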

A New Software for Quantitative Measurement of Strabismus based on Digital Image (디지털 영상 기반 정량적인 사시각 측정을 위한 새로운 소프트웨어)

  • Kim, Tae-Yun;Seo, Sang-Sin;Kim, Young-Jae;Yang, Hee-Kyung;Hwang, Jeong-Min;Kim, Kwang-Gi
    • Journal of Korea Multimedia Society / v.15 no.5 / pp.595-605 / 2012
  • Various methods for measuring strabismus have been developed and used in clinical diagnosis, but most of them are based on visual inspection by clinicians. For this reason, clinical decisions are prone to subjective evaluation, and the methods are only useful for cooperative patients. Therefore, a more objective and reproducible method for measuring strabismus is needed. In this paper, we introduce new software that complements the limitations of previous diagnostic methods. First, we obtained facial images of patients and performed several preprocessing steps based on a spherical RGB color model. The strabismus angle was then measured automatically using our 3D eye model and a mathematical algorithm. To evaluate the validity of the software, we performed a statistical correlation analysis between the results of the proposed method and the Krimsky test performed by two clinicians on ten patients. The correlation coefficients for the two clinicians were very high, 0.955 and 0.969, respectively, and the correlation coefficient between the two clinicians was 0.968. We found a statistically significant correlation between the two methods. The newly developed software shows potential as an alternative to, or an effective assistant tool for, previous diagnostic methods for strabismus.
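
  For the validation step, the correlation analysis amounts to computing Pearson correlation coefficients between paired angle measurements; the sketch below uses hypothetical placeholder values, not the paper's patient data.

    import numpy as np

    # Hypothetical strabismus angles (prism diopters) for ten patients:
    # one series from the software, one from a clinician's Krimsky test.
    software = np.array([12.0, 18.5, 25.0, 8.0, 30.0, 14.5, 22.0, 10.0, 27.5, 16.0])
    krimsky = np.array([11.0, 20.0, 24.0, 9.0, 31.0, 15.0, 21.0, 10.5, 26.0, 17.0])

    r = np.corrcoef(software, krimsky)[0, 1]   # Pearson correlation coefficient
    print(f"correlation: {r:.3f}")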

A Study on Image Analysis of Graphene Oxide Using Optical Microscopy (광학 현미경을 이용한 산화 그래핀 이미지 분석 조건에 관한 연구)

  • Lee, Yu-Jin;Kim, Na-Ri;Yoon, Sang-Su;Oh, Youngsuk;Lee, Jea Uk;Lee, Wonoh
    • Composites Research / v.27 no.5 / pp.183-189 / 2014
  • Experimental conditions were investigated for obtaining clear optical microscope images of graphene oxide, which are useful for probing its quality and morphological information such as shape, size, and thickness. In this study, we examined the contrast enhancement of optical images of graphene oxide after hydrazine vapor reduction on a Si substrate coated with a 300 nm-thick $SiO_2$ dielectric layer. A green-filtered light source also gave higher-contrast images compared with optical images under standard white light. Furthermore, it was found that an image channel separation technique, in which the red, green, and blue color values are separated at each pixel of the optical image, can be a simple alternative for identifying the morphological information of graphene oxide. The approaches presented in this study can help establish a simple and easy protocol for the morphological identification of graphene oxide using a conventional optical microscope instead of scanning electron microscopy or atomic force microscopy.
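
  A minimal sketch of the channel-separation idea, assuming a Pillow/NumPy environment; the filename is a placeholder.

    import numpy as np
    from PIL import Image

    def separate_channels(path):
        """Split an optical-microscope image into its R, G, B channel images,
        as in the channel-separation approach described above."""
        rgb = np.asarray(Image.open(path).convert("RGB"))
        r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
        return r, g, b

    # The green channel is the one expected to carry the highest contrast,
    # consistent with the green-filtered illumination result.
    r, g, b = separate_channels("graphene_oxide.png")   # placeholder filename
    Image.fromarray(g).save("graphene_oxide_green.png")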

Design and Implementation of Sensibilities Lighting LED Controller using Modbus for a Ship (Modbus를 이용한 선박용 감성조명 LED 제어기의 설계 및 구현)

  • Jeong, Jeong-Soo;Lee, Sang-Bae
    • Journal of Navigation and Port Research / v.39 no.4 / pp.299-305 / 2015
  • Modbus is a serial communications protocol that has become a de facto standard and is now a commonly available means of connecting industrial electronic devices; any device that supports Modbus can therefore be used for measurement and remote control on ships and in buildings, trains, airplanes, and so on. In this paper, we add the Modbus communication protocol to an existing sensibility (mood) lighting controller to enable monitoring and remote control based on external environmental factors, and we introduce a fuzzy inference system that controls the LED lighting from those factors. The external environmental factors of temperature, humidity, and illuminance are read by the controller through sensors, and their values are mapped to the LED output through a fuzzy control algorithm. Using Modbus over RS485 serial communication, other connected devices can check the temperature, humidity, illuminance, and LED output status. In addition, a remote user can change the RGB values to obtain a desired color. We confirmed that the manufactured LED controller produces its output according to the measured temperature, humidity, and illuminance.
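
  As background for the communication layer, the sketch below builds a standard Modbus RTU "Read Holding Registers" request with its CRC-16; the slave id and the register map for temperature, humidity, and illuminance are hypothetical, not taken from the paper.

    import struct

    def crc16_modbus(frame: bytes) -> bytes:
        """Standard Modbus RTU CRC-16 (polynomial 0xA001), appended low byte first."""
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return struct.pack("<H", crc)

    def read_holding_registers(slave: int, address: int, count: int) -> bytes:
        """Build a Modbus RTU 'Read Holding Registers' (function 0x03) request."""
        body = struct.pack(">BBHH", slave, 0x03, address, count)
        return body + crc16_modbus(body)

    # Hypothetical register map: 0x0000 temperature, 0x0001 humidity,
    # 0x0002 illuminance, read from slave id 1 over the RS485 link.
    request = read_holding_registers(slave=1, address=0x0000, count=3)
    print(request.hex(" "))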

A Study on u-CCTV Fire Prevention System Development of System and Fire Judgement (u-CCTV 화재 감시 시스템 개발을 위한 시스템 및 화재 판별 기술 연구)

  • Kim, Young-Hyuk;Lim, Il-Kwon;Li, Qigui;Park, So-A;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.463-466 / 2010
  • In this paper, we aim to develop a CCTV-based fire surveillance system. We analyze the advantages and disadvantages of existing sensor-based and video-based fire surveillance systems, and propose a fire surveillance system model and fire judgement technology suited to the ubiquitous environments being spread through national programs such as U-City, U-Home, and U-Campus. For this study, images were captured with a Microsoft LifeCam VX-1000 and analyzed for an apple and a tomato, and H.264 was used for video encoding. The client was built on an ARM9 S3C2440 board running Linux, and its role is to pass the captured images to the server for processing. The client and server basically use 1:1 video communication, so multicast support will be specified to allow multiple receivers; the fire surveillance system is designed for multi-party video communication. The video data are converted from RGB to YUV format before transfer, and fire detection uses the Y value, which indicates movement. Fire is judged by detecting the red color of flames and calculating the Y value so that the continuing movement of the flame can be detected.
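
  A simplified OpenCV sketch of the fire-judgement idea (Y-channel frame differencing for movement combined with a red-dominance check) is given below; the thresholds are assumptions, not the paper's values.

    import cv2
    import numpy as np

    def fire_like_regions(prev_bgr, curr_bgr, motion_thresh=25, red_margin=40):
        """Combine Y-channel frame differencing (movement) with a simple
        red-dominance test, following the RGB->YUV idea described above."""
        prev_y = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
        curr_y = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
        moving = cv2.absdiff(curr_y, prev_y) > motion_thresh

        b, g, r = cv2.split(curr_bgr.astype(np.int16))
        reddish = (r - np.maximum(g, b)) > red_margin

        return np.logical_and(moving, reddish)        # candidate flame pixels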

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions: Part B / v.15B no.1 / pp.25-36 / 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model with respect to hue and hue gradient in the HSI color space and extracts the foreground hand region from an input image using background subtraction. Eighteen features are defined for a hand pose, and a multi-class SVM (Support Vector Machine) is applied to learn and classify hand poses based on these features. By incorporating the hue gradient into the background subtraction, the approach extracts the hand contour robustly under varying illumination. A hand pose is described by two eigenvalues normalized by the size of the OBB (object-oriented bounding box) and sixteen feature values that count the hand contour points falling in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the nine digits from one to nine. Our implementation achieved a recognition rate of 92.6% on 1,620 hand images taken under various lighting conditions using this training model.
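
  The sketch below illustrates hue-based background subtraction; OpenCV's HSV hue is used here as a stand-in for the paper's HSI hue and hue gradient, and the threshold is an assumption.

    import cv2
    import numpy as np

    def hue_foreground_mask(bg_bgr, frame_bgr, hue_thresh=10):
        """Extract a foreground mask by comparing the hue of the current frame
        against a hue background model (HSV hue approximating HSI hue)."""
        bg_h = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
        fr_h = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
        # OpenCV hue is circular in [0, 180); take the wrapped difference.
        diff = np.abs(fr_h - bg_h)
        diff = np.minimum(diff, 180 - diff)
        return (diff > hue_thresh).astype(np.uint8) * 255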

A Design and Implementation of Fitness Application Based on Kinect Sensor

  • Lee, Won Joo
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.43-50 / 2021
  • In this paper, we design and implement KITNESS, a Windows application that gives feedback on the accuracy of fitness motions based on the Kinect sensor. The application uses Kinect's camera and joint recognition sensor to guide the user into the correct fitness posture. The distance between the user and the Kinect is measured with Kinect's IR emitter and IR depth sensor, and the user's joint positions and the skeleton data of each joint are obtained. From these data, the distances between joint positions are calculated for each posture, and the accuracy of the posture is determined. Users can check their posture through Kinect's RGB camera: if the posture is correct, the skeleton information is displayed as a green line, and if not, the inaccurate part is displayed as a red line so the user is informed intuitively. Through this feedback on posture accuracy, users can exercise on their own in the correct position. The application divides the exercises into three areas, neck, waist, and leg, and increases Kinect's recognition rate by excluding postures that Kinect cannot recognize because of overlapping joints in each exercise area. At the end of a session, the last exercise is shown as an image for 5 seconds to give a sense of accomplishment and encourage continued exercise.
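
  A minimal sketch of the posture check, written without the Kinect SDK; the joint names, coordinates, and distance tolerance are placeholders, and the green/red labels mirror the feedback described above.

    import math

    def joint_distance(a, b):
        """Euclidean distance between two 3D joint positions (metres)."""
        return math.dist(a, b)

    def check_pose(joints, reference, tolerance=0.10):
        """Mark each tracked joint green if it lies within `tolerance` metres of
        the reference pose, red otherwise. `joints` and `reference` map joint
        names to (x, y, z) tuples; names and tolerance are illustrative."""
        colors = {}
        for name, pos in joints.items():
            ok = joint_distance(pos, reference[name]) <= tolerance
            colors[name] = "green" if ok else "red"
        return colors

    # Example with placeholder coordinates for two joints.
    user = {"neck": (0.02, 0.55, 2.01), "waist": (0.00, 0.10, 2.05)}
    ref = {"neck": (0.00, 0.55, 2.00), "waist": (0.00, 0.12, 2.00)}
    print(check_pose(user, ref))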

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies of human falls have analyzed fall motions using recurrent neural networks (RNNs) and deep learning approaches that detect 2D human poses from a single color image with good results. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted with PoseNet from images obtained by a low-cost 2D RGB camera, to increase the accuracy of fall judgements. In particular, we propose a fall detection method based on the characteristics of the post-fall posture within the fall motion-analysis framework. A public data set was used to extract human skeletal features, and in experiments to find a feature extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than a conventional method that uses the raw skeletal data directly.
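
  The sketch below shows how per-frame speed and acceleration of a keypoint track (e.g., head or shoulder positions from PoseNet) can be derived by finite differences; the GRU classifier itself is not shown, and the frame rate is an assumed parameter.

    import numpy as np

    def keypoint_acceleration(track, fps=30.0):
        """Given a (T, 2) array of a keypoint's (x, y) image positions over T
        frames, return per-frame speed and acceleration magnitudes via finite
        differences; these are the kind of positional-change features that a
        GRU-based fall classifier could consume."""
        track = np.asarray(track, dtype=float)
        velocity = np.diff(track, axis=0) * fps           # pixels per second
        speed = np.linalg.norm(velocity, axis=1)
        accel = np.abs(np.diff(speed)) * fps              # pixels per second^2
        return speed, accel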

Urban Object Classification Using Object Subclass Classification Fusion and Normalized Difference Vegetation Index (객체 서브 클래스 분류 융합과 정규식생지수를 이용한 도심지역 객체 분류)

  • Chul-Soo Ye
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.223-232 / 2023
  • A widely used method for monitoring land cover with high-resolution satellite images is to classify the images based on the colors of the objects of interest. In urban areas, not only major objects such as buildings and roads but also vegetation such as trees frequently appear in high-resolution satellite images. However, the colors of vegetation objects often resemble those of other objects such as buildings, roads, and shadows, making it difficult to classify objects accurately based solely on color information. In this study, we propose a method that can accurately classify not only objects with various colors, such as buildings, but also vegetation objects. The proposed method uses the normalized difference vegetation index (NDVI) image, which is useful for detecting vegetation, along with the RGB image, and classifies objects into subclasses. The subclass classification results are fused, and the final classification result is generated by combining them with the image segmentation results. In experiments using Compact Advanced Satellite 500-1 imagery, the proposed method, which applies the NDVI and subclass classification together, showed an overall accuracy of 87.42%, while the overall accuracies of the subchannel classification technique without the NDVI and of the subclass classification technique alone were 73.18% and 81.79%, respectively.
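
  For reference, NDVI is computed as (NIR - Red) / (NIR + Red); the sketch below shows this together with a simple vegetation threshold, where the threshold value is illustrative and the paper's fusion with subclass classification is not reproduced.

    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(np.float32)
        red = red.astype(np.float32)
        return (nir - red) / (nir + red + eps)

    def vegetation_mask(nir, red, thresh=0.3):
        """Simple vegetation/non-vegetation split from NDVI; the threshold is
        illustrative, and the paper instead fuses NDVI-based results with the
        RGB subclass classification."""
        return ndvi(nir, red) > thresh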