• Title/Summary/Keyword: sRGB


A Study on the Dynamic Expression of Fabrics based on RGB-D Sensor and 3D Virtual Clothing CAD System (RGB-D 센서 및 3D Virtual Clothing CAD활용에 의한 패션소재의 동적표현 시스템에 대한 연구)

  • Lee, Jieun; Kim, Soulkey; Kim, Jongjun
    • Journal of Fashion Business / v.17 no.1 / pp.30-41 / 2013
  • Augmented reality techniques have been increasingly employed in the textile and fashion industry as well as in computer graphics. Three-dimensional virtual clothing CAD systems are also widely used in the textile industry and academic institutes. Motion tracking is combined with 3D and augmented reality techniques to develop virtual three-dimensional clothing and fitting systems for the fashion and textile sectors. In this study, a three-dimensional virtual clothing sample was prepared using a 3D virtual clothing CAD system together with a 3D scanning and reconstruction system. The motion of the user is captured through an RGB-D sensor, and the virtual clothing fitted on the user's body moves along with the captured motion. Actual fabric specimens were selected for material characterization. This study is a first step toward a comprehensive system that lets the user interactively experience virtual clothing in a real environment.

Skin Segmentation Using YUV and RGB Color Spaces

  • Al-Tairi, Zaher Hamid; Rahmat, Rahmita Wirza; Saripan, M. Iqbal; Sulaiman, Puteri Suhaiza
    • Journal of Information Processing Systems / v.10 no.2 / pp.283-299 / 2014
  • Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin regions with a thresholding technique, since it is simple and computationally fast. The efficiency of each color space depends on its robustness to changes in lighting and its ability to distinguish skin color pixels in images with complex backgrounds. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. The threshold values are then selected by testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure accuracy, and the results of our threshold were compared to those of other thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust than the others in dealing with complex backgrounds and lighting conditions.
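The RGB-to-YUV separation described above can be sketched in a few lines; the U/V threshold ranges below are illustrative assumptions for demonstration, not the paper's actual values:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0-255) to analog YUV using BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # U is a scaled B-Y difference
    v = 0.877 * (r - y)   # V is a scaled R-Y difference
    return y, u, v

def is_skin(r, g, b, u_range=(-30, 5), v_range=(10, 60)):
    """Classify a pixel as skin using U/V only, discarding Y to
    eliminate the luminance effect. Ranges are illustrative."""
    _, u, v = rgb_to_yuv(r, g, b)
    return u_range[0] <= u <= u_range[1] and v_range[0] <= v <= v_range[1]
```

Applying this per pixel yields the binary skin mask; in practice the ranges would be tuned from a color histogram of labeled skin samples, as the paper describes.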

Color Image Segmentation Using Adaptive Quantization and Sequential Region-Merging Method (적응적 양자화와 순차적 병합 기법을 사용한 컬러 영상 분할)

  • Kwak, Nae-Joung; Kim, Young-Gil; Kwon, Dong-Jin; Ahn, Jae-Hyeong
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.473-481 / 2005
  • In this paper, we propose an image segmentation method that preserves object boundaries by adapting the number of quantized colors and merging regions with adaptive threshold values. First, the proposed method quantizes the original image by vector quantization, with the number of quantized colors determined per image using PSNR. We obtain initial regions from the quantized image, merge them step by step in the CIE Lab and RGB color spaces, and segment the image into semantic regions. In each merging step, the color distance between adjacent regions is used as the similarity measure. The thresholds for region merging are determined adaptively from the global mean of the color difference between the original image and its split regions and from the mean of its variations. If the image segmented in RGB color space does not split into semantic objects, it is merged again in the CIE Lab color space as post-processing; whether this post-processing is applied is determined by the color distance between the image's initial regions and its RGB-space segmentation. Experimental results show that the proposed method segments an image into its main objects while preserving their boundaries, and that it provides better results on objective measures than the conventional method.
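Sequential merging by color distance can be sketched as follows; the greedy one-pass clustering and the fixed threshold are simplifying assumptions (the paper derives its thresholds adaptively and merges adjacent regions only):

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two mean colors (any 3-channel space)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def merge_regions(regions, threshold):
    """Greedy sequential merge: fold each region into the first existing
    cluster whose mean color is within `threshold`, else start a new
    cluster. Each region is (mean_color, pixel_count); merged means are
    pixel-count weighted so the cluster mean stays exact."""
    clusters = []
    for color, count in regions:
        for i, (c_color, c_count) in enumerate(clusters):
            if color_distance(color, c_color) <= threshold:
                total = c_count + count
                merged = tuple((cc * c_count + rc * count) / total
                               for cc, rc in zip(c_color, color))
                clusters[i] = (merged, total)
                break
        else:
            clusters.append((color, count))
    return clusters
```

With a small threshold the two near-identical dark regions below collapse into one cluster while the bright region stays separate; a very large threshold merges everything.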


A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee; Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.6 / pp.85-92 / 2017
  • Recently, in the field of video surveillance, Deep Learning based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one field of this intelligent image analysis technology, detects an object and then detects its body keypoints in order to recognize its behavior. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. An input image, generated by rescaling the detected object region using the RGB-D information, is fed to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM is used to generate belief maps for 14 body parts per person and to detect body keypoints from these belief maps. The method provides an accurate object region for keypoint detection and can be extended from single-person to multi-person body keypoint localization by integrating the individual localizations. In the future, the detected keypoints can be used to build a model for human pose estimation and contribute to the field of human activity recognition.
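Reading keypoints out of CPM-style belief maps is typically done by taking the per-part argmax; a minimal sketch, with plain nested lists standing in for real heatmaps:

```python
def keypoints_from_belief_maps(belief_maps):
    """Return the (row, col) of the maximum response in each body part's
    belief map — the standard readout for CPM-style heatmaps. Input is a
    list of 2-D maps, one per body part (14 per person in the paper)."""
    keypoints = []
    for bmap in belief_maps:
        best, best_rc = float("-inf"), (0, 0)
        for r, row in enumerate(bmap):
            for c, val in enumerate(row):
                if val > best:
                    best, best_rc = val, (r, c)
        keypoints.append(best_rc)
    return keypoints
```

Coordinates found in the rescaled crop would then be mapped back to the full frame by undoing the crop offset and scale.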

A Study on the Lighting Control System using Fuzzy Control System and RGB Modules in the Ship's Indoor (퍼지 제어 시스템과 RGB LED 모듈을 이용한 선박 실내용 조명 제어 시스템에 관한 연구)

  • Nam, Young-Cheol; Lee, Sang-Bae
    • Journal of Navigation and Port Research / v.42 no.6 / pp.421-426 / 2018
  • Commercially available LED lighting devices are currently sold with fixed operating sequences. In this state, external environmental factors are not taken into consideration, since only the lighting application itself is considered, and it is difficult to create an optimal lighting environment that adapts to changes in external environmental factors on a ship. We therefore concluded that external environment values need to be fed in so that the optimal illumination value can be reflected in real time, allowing the lighting to adapt more organically and actively to changing conditions. In this paper, we used a microprocessor as an integrated management system for environmental data that changes in real time, and combined it with a fuzzy inference system to build a controller capable of controlling RGB LED modules. For this purpose, a fuzzy control algorithm was designed and a fuzzy control system was constructed. The distance and illuminance values from the external environment are read from sensors, converted to the optimal illumination value through the fuzzy control algorithm, and expressed through dimming control of the RGB LED module, confirming the practical effectiveness of the fuzzy control system.
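A minimal sketch of the fuzzy dimming idea with two inputs (distance, illuminance) and one output; the membership shapes, rule set, and input ranges below are illustrative assumptions, not the paper's actual design:

```python
def clamp01(x):
    """Clip a value into [0, 1]."""
    return max(0.0, min(1.0, x))

def fuzzy_dimming(distance_m, ambient_lux, max_dist=5.0, max_lux=500.0):
    """Two-rule Sugeno-style sketch:
      R1: IF user is NEAR and room is DARK   THEN dimming is HIGH (1.0)
      R2: IF user is FAR  and room is BRIGHT THEN dimming is LOW  (0.0)
    NEAR/DARK memberships are complementary linear ramps, so the two
    rules jointly cover the whole input space."""
    near = clamp01(1.0 - distance_m / max_dist)
    dark = clamp01(1.0 - ambient_lux / max_lux)
    w_high = min(near, dark)               # firing strength of R1
    w_low = min(1.0 - near, 1.0 - dark)    # firing strength of R2
    if w_high + w_low == 0.0:
        return 0.5                         # neither rule fires decisively
    return (w_high * 1.0 + w_low * 0.0) / (w_high + w_low)
```

A close user in a dark room drives the output toward full brightness; a distant user in a bright room drives it toward zero, with smooth interpolation in between.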

Assessment of Lodged Damage Rate of Soybean Using Support Vector Classifier Model Combined with Drone Based RGB Vegetation Indices (드론 영상 기반 RGB 식생지수 조합 Support Vector Classifier 모델 활용 콩 도복피해율 산정)

  • Lee, Hyun-jung; Go, Seung-hwan; Park, Jong-hwa
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1489-1503 / 2022
  • Drone and sensor technologies are enabling the digitalization of crop growth information and accelerating the development of precision agriculture. These technologies can assess crop damage when a natural disaster occurs and contribute to putting the crop insurance assessment method, currently conducted through field surveys, on a scientific footing. This study aimed to calculate the lodged damage rate of soybean from vegetation indices extracted from drone-based RGB images. Support Vector Classifier (SVC) models were built by adding vegetation indices to the Crop Surface Model (CSM) based lodged damage rate. Classification based on the Visible Atmospherically Resistant Index (VARI) and the Green Red Vegetation Index (GRVI) showed the highest accuracy scores, 0.709 and 0.705 respectively. The results confirm that drone-based RGB images are a useful tool for estimating the lodged damage rate, and the approach can be applied to satellite imagery such as Sentinel-2 and RapidEye when damage from natural disasters occurs.
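Both indices are simple band ratios computable from RGB alone; a sketch using their standard definitions, with reflectance values as inputs:

```python
def vari(r, g, b):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B).
    Uses the blue band to partially cancel atmospheric effects."""
    denom = g + r - b
    return (g - r) / denom if denom != 0 else 0.0

def grvi(r, g):
    """Green Red Vegetation Index: (G - R) / (G + R).
    Positive for green vegetation, negative for bare soil."""
    denom = g + r
    return (g - r) / denom if denom != 0 else 0.0
```

Computed per pixel over the drone orthomosaic, these index maps become the features fed to the SVC alongside the CSM-based damage rate.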

Design and Implementation of Wireless Lighting LED Controller using Modbus TCP for a Ship (Modbus TCP를 이용한 선박용 무선 LED 제어기의 설계 및 구현)

  • Jeong, Jeong-Soo; Lee, Sang-Bae
    • Journal of Navigation and Port Research / v.41 no.6 / pp.395-400 / 2017
  • As a serial communications protocol, Modbus has become a de facto standard and is now a commonly available means of connecting industrial electronic devices. All devices involved in measurement and remote control on ships, buildings, trains, airplanes, and more can therefore be connected using the Modbus protocol. The existing Modbus is based on serial communication; Modbus TCP instead uses Ethernet communication based on TCP/IP, the most widely used Internet protocol today, so it is faster than serial communication and can be connected to the Internet of Things. In this paper, we designed an algorithm to control LED lighting in a wireless Wi-Fi environment using the Modbus TCP communication protocol, and designed and implemented an LED controller circuit that can check external environmental factors and be controlled remotely through a ship's integrated management system. Temperature, humidity, current, and illuminance values, which are external environmental factors, are received by the controller through sensors and communicated to the ship's integrated management system via the Modbus protocol. Master devices connected over TCP can monitor the temperature, humidity, current, illuminance, and LED output values, and users can remotely change the RGB value to obtain the desired color. To verify the controller, we also developed a simulated ship management system that monitors the temperature, humidity, current, and illuminance conditions and changes the controller's LED color by remotely changing the RGB value.
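A Modbus TCP request carrying three RGB register values can be assembled directly from the protocol's framing rules; the register layout (R, G, B at consecutive holding-register addresses) is an assumption for illustration:

```python
import struct

def modbus_write_registers(transaction_id, unit_id, start_addr, values):
    """Build a Modbus TCP 'Write Multiple Registers' (function 0x10)
    request. `values` is a list of 16-bit register values, e.g. [R, G, B].
    Layout: MBAP header (transaction id, protocol id 0, length, unit id)
    followed by the PDU (function, start address, quantity, byte count,
    register data), all big-endian per the Modbus spec."""
    pdu = struct.pack(">BHHB", 0x10, start_addr, len(values), 2 * len(values))
    pdu += b"".join(struct.pack(">H", v) for v in values)
    # MBAP length field counts the unit id plus the PDU bytes
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Sending this frame over a plain TCP socket to port 502 of the controller (or any Modbus TCP slave) would set the three registers in one round trip.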

Vegetation Monitoring using Unmanned Aerial System based Visible, Near Infrared and Thermal Images (UAS 기반, 가시, 근적외 및 열적외 영상을 활용한 식생조사)

  • Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.71-91 / 2018
  • In recent years, UAVs (Unmanned Aerial Vehicles) have been actively applied to seed sowing and pest control in agriculture. In this study, a UAS (Unmanned Aerial System) was constructed by combining image sensors of various wavelength bands with SfM (Structure from Motion) based image analysis on a UAV. The use of UAS-based vegetation surveys was investigated and their applicability to precision farming was examined. For this purpose, a UAS was built on a low-cost UAV by combining a VIS_RGB (Visible Red, Green, and Blue) image sensor, a modified BG_NIR (Blue Green_Near Infrared Red) image sensor, and a TIR (Thermal Infrared Red) sensor with a wide bandwidth of 7.5 µm to 13.5 µm. In addition, a total of ten vegetation indices were selected to investigate the chlorophyll, nitrogen, and water contents of plants using the visible, near-infrared, and thermal-infrared image sensors. The images of each wavelength band over the test area were analyzed, and the vegetation index distributions were compared with the previously surveyed vegetation and ground cover status, demonstrating vegetation state detection using multiple image sensors mounted on a low-cost UAV. Since a UAS equipped with VIS_RGB, BG_NIR, and TIR image sensors on a low-cost UAV has proven more economical and efficient than previous vegetation survey methods that depend on satellite and aerial images, it is expected to be used in areas such as precision agriculture and water and forest research.
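Indices built from the near-infrared band follow the same ratio pattern as the visible-band indices; a sketch of the standard NDVI definition (a common choice, though not necessarily one of the paper's ten indices):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Chlorophyll absorbs red light while leaf structure reflects NIR, so
    healthy vegetation typically scores well above zero."""
    denom = nir + red
    return (nir - red) / denom if denom != 0 else 0.0
```

Applied per pixel to co-registered NIR and red bands from the modified BG_NIR sensor, this yields a vegetation-vigor map comparable to satellite-derived products.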

Online Monitoring System based notifications on Mobile devices with Kinect V2 (키넥트와 모바일 장치 알림 기반 온라인 모니터링 시스템)

  • Niyonsaba, Eric; Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1183-1188 / 2016
  • The Kinect sensor version 2 is a camera released by Microsoft as a computer vision and natural user interface device for game consoles such as the Xbox One. It acquires color images, depth images, audio input, and skeletal data at a high frame rate. In this paper, using the depth image, we present a surveillance system for a certain area within the Kinect's field of view. Using a computer vision library (Emgu CV), if an object is detected in the target area it is tracked, and the Kinect camera captures an RGB image and sends it to a database server. A mobile application on the Android platform was developed to notify the user that the Kinect has sensed unusual motion in the target region and to display the RGB image of the scene. The user receives the notification in real time and can react appropriately when valuables are in the monitored area or the zone is otherwise restricted.
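Depth-based detection in a target region can be sketched as a frame difference against an empty-scene reference; the 5% pixel-change trigger, the depth delta, and the millimetre units are illustrative assumptions:

```python
def detect_intrusion(reference, current, region, delta=50):
    """Flag motion in a target region by comparing a live depth frame
    against a reference (empty-scene) frame. Frames are 2-D lists of
    depth values in millimetres; `region` is (r0, r1, c0, c1) bounds.
    Returns True if more than 5% of the region's pixels changed depth
    by more than `delta` mm."""
    r0, r1, c0, c1 = region
    changed = 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            if abs(current[r][c] - reference[r][c]) > delta:
                changed += 1
    area = (r1 - r0) * (c1 - c0)
    return changed / area > 0.05
```

A positive result would trigger the RGB capture, database upload, and mobile push notification described above.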

An Improved Fractal Color Image Decoding Based on Data Dependence and Vector Distortion Measure (데이터 의존성과 벡터왜곡척도를 이용한 개선된 프랙탈 칼라영상 복호화)

  • 서호찬; 정태일; 류권열; 권기룡; 문광석
    • Journal of Korea Multimedia Society / v.2 no.3 / pp.289-296 / 1999
  • In this paper, an improved fractal color image decoding method using data-dependence parts and a vector distortion measure is proposed. The vector distortion measure exploits the correlation between different color components: a pixel in RGB color space is treated as a 3-dimensional vector whose elements are the RGB components, and the root mean square error (rms) in RGB color is used as the similarity measure between two blocks R and R'. We assume that the various parameters necessary for image decoding are stored in a transform table. If a parameter is referenced during decoding, decoding is performed by the recursive decoding method; if it is not referenced, it is recognized as a data-dependence part and stored in memory. Non-referenced parts can be decoded in a single pass, because their domain information already exists in the parts decoded by the recursive decoding method; these non-referenced parts are defined as the data-dependence parts. The image decoding method using data dependence classifies referenced and non-referenced parts using the information in the transform table. The proposed method decodes faster than Zhang & Po's method, since it reduces the number of computations by executing the iterated contractive transformations only for the referenced ranges.
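The rms similarity measure over RGB vectors can be sketched directly; blocks are represented here as flat lists of (R, G, B) pixel tuples, an assumed layout for illustration:

```python
import math

def rms_rgb(block_a, block_b):
    """Root-mean-square error between two RGB blocks, treating each
    pixel as a 3-vector so that all color channels contribute to a
    single distortion measure (rather than matching channels separately)."""
    n = 0
    total = 0.0
    for pa, pb in zip(block_a, block_b):
        for ca, cb in zip(pa, pb):
            total += (ca - cb) ** 2
            n += 1
    return math.sqrt(total / n)
```

During encoding, the domain block minimizing this measure against each range block R would be selected; the single vector measure exploits inter-channel correlation, as the abstract notes.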
