• Title/Summary/Keyword: RGB sensor

The diagnosis of Plasma Through RGB Data Using Rough Set Theory

  • Lim, Woo-Yup;Park, Soo-Kyong;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2010.02a / pp.413-413 / 2010
  • In the semiconductor manufacturing field, every piece of equipment carries various sensors to diagnose process conditions. To increase diagnostic accuracy, hundreds of sensors are employed, and because they produce millions of data points, diagnosing the process directly from the raw data is unrealistic. Moreover, in some cases, data collected under the same conditions lead to different results. We want to extract information, such as patterns and knowledge, from these data. Fault detection and classification (FDC) has recently drawn attention as a way to increase yield. Clear faults and non-faults can be classified by various FDC tools, but the uncertainty in semiconductor manufacturing, non-faulty results labeled as faulty and faulty results labeled as non-faulty, degrades productivity. For such uncertainty, rough set theory is a viable approach to extracting meaningful knowledge and making predictions: its ability to reduce data sets, find hidden data patterns, and generate decision rules contrasts with other approaches such as regression analysis and neural networks. In this research, an RGB sensor was used for plasma diagnosis instead of optical emission spectroscopy (OES). RGB data have only three variables (red, green, and blue), whereas OES data have thousands of variables. RGB data, however, are difficult to analyze by eye; the same value of one variable can correspond to different outcomes. In other words, RGB data include uncertainty. In this research, decision rules were generated by rough set theory, and from those rules we could find the hidden data patterns within the uncertainty. With rough set theory, the RGB sensor can diagnose changes in plasma condition with over 90% accuracy. Although we present only a preliminary result in this paper, we will continue to develop an uncertainty-resolving data mining algorithm for semiconductor process diagnosis.
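
A minimal sketch of the rough-set idea described above, assuming discretized RGB readings and illustrative plasma-state labels (none of the bin edges, values, or labels come from the paper): indiscernible condition classes whose decisions agree become certain rules, while conflicting classes form the boundary (uncertain) region.

```python
# Hypothetical example of certain-rule extraction from discretized RGB readings.
from collections import defaultdict

def discretize(value, edges=(85, 170)):
    """Map a 0-255 channel reading to a coarse level: 'low', 'mid', or 'high'."""
    return "low" if value < edges[0] else "mid" if value < edges[1] else "high"

def rough_set_rules(samples):
    """samples: list of ((r, g, b), decision). Returns certain rules and boundary cases."""
    groups = defaultdict(set)
    for rgb, decision in samples:
        condition = tuple(discretize(c) for c in rgb)
        groups[condition].add(decision)

    certain, boundary = {}, {}
    for condition, decisions in groups.items():
        if len(decisions) == 1:          # lower approximation: consistent group -> certain rule
            certain[condition] = decisions.pop()
        else:                            # boundary region: same condition, different outcomes
            boundary[condition] = decisions
    return certain, boundary

# Illustrative readings: (R, G, B) intensity vs. plasma-state label.
data = [((200, 40, 30), "normal"), ((205, 42, 33), "normal"),
        ((90, 160, 210), "fault"),  ((92, 158, 215), "fault"),
        ((130, 130, 130), "normal"), ((128, 131, 129), "fault")]
rules, uncertain = rough_set_rules(data)
print(rules)      # e.g. {('high', 'low', 'low'): 'normal', ...}
print(uncertain)  # conditions whose outcomes conflict (the uncertainty the paper targets)
```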

Smoke Detection Based on RGB-Depth Camera in Interior (RGB-Depth 카메라 기반의 실내 연기검출)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.2 / pp.155-160 / 2014
  • In this paper, an algorithm using an RGB-depth camera is proposed to detect smoke indoors. The RGB-depth camera, the Kinect, provides an RGB color image and depth information. The Kinect sensor consists of an infrared laser emitter, an infrared camera, and an RGB camera. A specific speckle pattern radiated from the laser source is projected onto the scene; this pattern is captured by the infrared camera and analyzed to obtain depth information. The displacement of each speckle in the pattern is measured and the depth of the object is estimated. When the depth of an object changes sharply, the Kinect cannot determine the depth of the object plane. The depth of smoke cannot be determined either, because the density of smoke changes continuously and the intensity of the infrared image varies from pixel to pixel. In this paper, a smoke detection algorithm using these characteristics of the Kinect is proposed. A region whose depth information cannot be determined is set as a candidate smoke region, and if the intensity of the candidate region in the color image is larger than a threshold, the region is confirmed as a smoke region. Simulation results show that the proposed method is effective for detecting smoke indoors.
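
The two-stage rule in the abstract can be sketched roughly as follows, assuming the Kinect reports undetermined depth as 0 and that frames arrive as NumPy arrays; the intensity threshold and minimum blob size are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def detect_smoke(color_bgr, depth_mm, intensity_thresh=180, min_area=500):
    """Return a binary mask of likely smoke regions."""
    # Stage 1: candidate region = pixels whose depth the sensor could not determine.
    candidate = (depth_mm == 0).astype(np.uint8) * 255

    # Stage 2: keep candidates that are bright enough in the color image.
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    smoke = cv2.bitwise_and(candidate, (gray > intensity_thresh).astype(np.uint8) * 255)

    # Drop small blobs so isolated invalid-depth pixels are ignored.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(smoke)
    mask = np.zeros_like(smoke)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask
```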

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each newly received input. In most cases, however, a mobile robot navigation system has successive sensor measurements; to improve localization performance, the output of the CNN is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
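
A minimal PyTorch sketch of the input construction and pose regression described above; the layer sizes and the quaternion output parameterization are illustrative assumptions, not the network from the paper.

```python
import torch
import torch.nn as nn

class PoseRegressionNet(nn.Module):
    """Regress a 6-DOF pose from an RGB image stacked with a range channel."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.translation = nn.Linear(128, 3)   # x, y, z
        self.orientation = nn.Linear(128, 4)   # unit quaternion

    def forward(self, rgb, rng):
        x = torch.cat([rgb, rng], dim=1)       # (B, 4, H, W) combined RGB + range input
        f = self.features(x).flatten(1)
        q = self.orientation(f)
        return self.translation(f), q / q.norm(dim=1, keepdim=True)

# Example: one 4-channel input of size 256x256.
net = PoseRegressionNet()
t, q = net(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
```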

Optical Design of a Modified Catadioptric Omnidirectional Optical System for a Capsule Endoscope to Image Simultaneously Front and Side Views on a RGB/NIR CMOS Sensor (RGB/NIR CMOS 센서에서 정면 영상과 측면 영상을 동시에 결상하는 캡슐 내시경용 개선된 반사굴절식 전방위 광학계의 광학 설계)

  • Hong, Young-Gee;Jo, Jae Heung
    • Korean Journal of Optics and Photonics / v.32 no.6 / pp.286-295 / 2021
  • A modified catadioptric omnidirectional optical system (MCOOS) using an RGB/NIR CMOS sensor is designed for a capsule endoscope, with the front field of view (FOV) imaged in visible (RGB) light and the side FOV in visible and near-infrared (NIR) light. The front image is captured by the front imaging lens system of the MCOOS, which consists of three additional lenses arranged behind the secondary mirror of the catadioptric omnidirectional optical system (COOS) together with the imaging lens system of the COOS; the side image is formed by the COOS itself. The Nyquist frequencies of the sensor in the RGB and NIR bands are 90 lp/mm and 180 lp/mm, respectively. The design specifications fix the overall length at 12 mm, the F-number at 3.5, and the half-angles of the front and side FOVs at 70° and 50°-120°. As a result, a spatial frequency of 154 lp/mm at a modulation transfer function (MTF) of 0.3, a depth of focus (DOF) of -0.051 to +0.052 mm, and a cumulative probability of tolerance (CPT) of 99% are obtained for the COOS, while a spatial frequency of 170 lp/mm at an MTF of 0.3, a DOF of -0.035 to +0.051 mm, and a CPT of 99.9% are attained for the front imaging lens system of the optimized MCOOS.
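
For reference, the quoted Nyquist frequencies follow the standard sampling relation below; the pixel pitch is our inference from the stated 180 lp/mm, not a value given in the abstract.

```latex
f_{N} = \frac{1}{2p}
\quad\Rightarrow\quad
p \approx \frac{1}{2 \times 180~\mathrm{lp/mm}} \approx 2.8~\mu\mathrm{m}
```

A channel sampled at twice this pitch (for example, a color-mosaic channel) has half the Nyquist frequency, which would be consistent with the 90 lp/mm quoted for the RGB channels.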

Robust Vehicle Occupant Detection based on RGB-Depth-Thermal Camera (다양한 환경에서 강건한 RGB-Depth-Thermal 카메라 기반의 차량 탑승자 점유 검출)

  • Song, Changho;Kim, Seung-Hun
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.31-37 / 2018
  • Recently, in-vehicle safety has become a hot topic as self-driving cars are developed. Passive safety systems such as airbags and seat belts are being complemented by active systems that monitor the status and behavior of the passengers, including the driver, to mitigate risk. Furthermore, occupant information is expected to enable customized services, such as seat deformation, air-conditioning operation, and D.W.D (Distraction While Driving) warnings, suited to each passenger. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera for obtaining passenger information. The RGB-Depth-Thermal camera sensor system was configured to be robust against various environments. OpenPose, a deep learning algorithm, was used for occupant detection; it is advantageous because an existing model trained on RGB images also works on thermal images. The algorithm will be extended to acquire higher-level information, such as the passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.
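
A hypothetical sketch of the preprocessing step implied above: the single-channel thermal frame is normalized and replicated to three channels so that a pose network trained on RGB images (such as OpenPose) can consume it unchanged. `pose_model`, the function names, and the decision logic are placeholders, not the paper's implementation.

```python
import numpy as np
import cv2

def thermal_to_pseudo_rgb(thermal_raw):
    """Normalize a raw thermal frame to 8-bit and stack it into 3 channels."""
    t = thermal_raw.astype(np.float32)
    t = (t - t.min()) / max(float(t.max() - t.min()), 1e-6)   # scale to [0, 1]
    t8 = (t * 255).astype(np.uint8)
    return cv2.merge([t8, t8, t8])                            # H x W x 3 pseudo-RGB

def detect_occupants(pose_model, rgb_frame, thermal_frame):
    """Run the same RGB-trained pose model on both modalities (illustrative)."""
    keypoints_rgb = pose_model(rgb_frame)
    keypoints_thermal = pose_model(thermal_to_pseudo_rgb(thermal_frame))
    # An occupant could be counted if either modality yields a confident skeleton.
    return keypoints_rgb, keypoints_thermal
```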

Fine Flow Controlling Device for Medicine Injection (의료 약물주입용 미세 유량 제어 장치)

  • Cho, Su-Chan;Shin, Bo-Sung
    • Journal of the Microelectronics and Packaging Society / v.28 no.4 / pp.51-55 / 2021
  • Nurses currently carry out intravenous therapy for patients manually. Using an Arduino, a fine flow controlling device was developed to provide continuous patient care. The medication is injected through a peristaltic pump, and the amount of solution is controlled with an RGB color sensor. The device is powered by batteries. The amount injected is measured with an LIG strain sensor fabricated with a 355 nm UV pulsed laser. This system is expected to provide better medical service.
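
An illustrative sketch of the feedback idea described above, written in Python for readability (the actual device uses an Arduino): an RGB color sensor watches the drip chamber, drops are counted from color changes, and the peristaltic pump speed is adjusted toward a target flow rate. All names and thresholds are hypothetical.

```python
import time

TARGET_DROPS_PER_MIN = 30
COLOR_CHANGE_THRESH = 40          # change in summed RGB that counts as a falling drop

def control_loop(read_rgb, set_pump_speed, duration_s=60):
    """read_rgb() -> (r, g, b); set_pump_speed(fraction) drives the pump."""
    baseline = sum(read_rgb())
    drops, speed = 0, 0.5          # pump speed as a fraction of maximum
    start = time.time()
    while time.time() - start < duration_s:
        if abs(sum(read_rgb()) - baseline) > COLOR_CHANGE_THRESH:
            drops += 1             # a drop briefly changes the sensed color
        elapsed_min = (time.time() - start) / 60
        rate = drops / max(elapsed_min, 1e-3)
        # Simple proportional correction toward the target drip rate.
        speed = min(1.0, max(0.0, speed + 0.01 * (TARGET_DROPS_PER_MIN - rate)))
        set_pump_speed(speed)
        time.sleep(0.05)
```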

Performance Analysis of Object Detection Neural Network According to Compression Ratio of RGB and IR Images (RGB와 IR 영상의 압축률에 따른 객체 탐지 신경망 성능 분석)

  • Lee, Yegi;Kim, Shin;Lim, Hanshin;Lee, Hee Kyung;Choo, Hyon-Gon;Seo, Jeongil;Yoon, Kyoungro
    • Journal of Broadcast Engineering / v.26 no.2 / pp.155-166 / 2021
  • Most object detection algorithms are studied on RGB images. Because RGB cameras capture images based on visible light, however, detection performance degrades when the lighting is poor, e.g., at night or on foggy days. On the other hand, high-quality infrared (IR) images can be acquired regardless of weather and lighting, because IR images are captured by an IR sensor that forms images from heat information. In this paper, we run an object detection algorithm on RGB and IR images at different compression ratios to compare their detection capability. We selected RGB and IR images taken at night from the free FLIR Thermal dataset released for ADAS (Advanced Driver Assistance Systems) research. We used a pre-trained object detection network for RGB images and a network fine-tuned on night-time RGB and IR images. Experimental results show that, in both networks, higher object detection performance can be obtained with IR images than with RGB images.
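
A minimal sketch of the kind of evaluation loop implied above: each frame is re-encoded at several JPEG quality factors and fed to a pre-trained detector. The detector, file name, and quality factors are illustrative; the paper's networks and compression settings may differ.

```python
import cv2
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_at_quality(image_bgr, jpeg_quality):
    """JPEG-compress the image at the given quality, decode it, and run detection."""
    ok, buf = cv2.imencode(".jpg", image_bgr, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    rgb = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        return model([tensor])[0]            # dict with boxes, labels, scores

image = cv2.imread("flir_night_frame.jpg")   # placeholder file name
for q in (90, 70, 50, 30, 10):               # lower quality = higher compression ratio
    result = detect_at_quality(image, q)
    kept = (result["scores"] > 0.5).sum().item()
    print(f"JPEG quality {q}: {kept} detections above 0.5 confidence")
```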

Transparent Manipulators Accomplished with RGB-D Sensor, AR Marker, and Color Correction Algorithm (RGB-D 센서, AR 마커, 색수정 알고리즘을 활용한 매니퓰레이터 투명화)

  • Kim, Dong Yeop;Kim, Young Jee;Son, Hyunsik;Hwang, Jung-Hoon
    • The Journal of Korea Robotics Society / v.15 no.3 / pp.293-300 / 2020
  • The purpose of our sensor system is to transparentize the large hydraulic manipulators of a six-ton dual-arm excavator in the operator's camera view. Almost 40% of the camera view is blocked by the manipulators; in other words, the operator loses 40% of the visual information, which might be useful in many manipulator control scenarios such as clearing debris at a disaster site. The proposed method is based on 3D reconstruction technology. By overlaying the camera image from the front top of the cabin with the point cloud data from RGB-D (red, green, blue, and depth) cameras placed on the outer side of each manipulator, a manipulator-free camera image can be obtained. Two additional algorithms are proposed to further enhance the productivity of dual-arm excavators. First, a color correction algorithm is proposed to cope with the different color distributions of the RGB and RGB-D sensors used in the system. Second, an edge overlay algorithm is proposed: although the manipulators often limit the operator's view, visual feedback on the manipulators' configuration or state may still be useful, so their edges are drawn back onto the camera image. The experimental results show that the proposed transparentization algorithm helps the operator obtain information about the environment and the objects around the excavator.
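
As a hedged illustration of the color correction step, one common approach is channel-wise mean/standard-deviation transfer, matching the RGB-D pixels to the main camera's color distribution; the paper's actual correction algorithm is not specified in the abstract and may differ.

```python
import numpy as np

def match_color_distribution(source_rgb, reference_rgb):
    """Shift/scale each channel of `source_rgb` to the mean/std of `reference_rgb`."""
    src = source_rgb.astype(np.float32)
    ref = reference_rgb.astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```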

Vegetation Monitoring using Unmanned Aerial System based Visible, Near Infrared and Thermal Images (UAS 기반, 가시, 근적외 및 열적외 영상을 활용한 식생조사)

  • Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.71-91 / 2018
  • In recent years, UAVs (Unmanned Aerial Vehicles) have been actively applied to seed sowing and pest control in agriculture. In this study, a UAS (Unmanned Aerial System) was constructed by combining image sensors of various wavelength bands with SfM (Structure from Motion) based image analysis on a UAV. The use of the UAS for vegetation surveys was investigated and its applicability to precision farming was examined. For this purpose, a UAS was built on a low-cost UAV by combining a VIS_RGB (visible red, green, and blue) image sensor, a modified BG_NIR (blue-green and near-infrared) image sensor, and a TIR (thermal infrared) sensor covering the wide band from 7.5 μm to 13.5 μm. In addition, a total of ten vegetation indices were selected to investigate the chlorophyll, nitrogen, and water content of plants from the visible, near-infrared, and thermal-infrared imagery. The images of each wavelength band over the test area were analyzed, and the resulting vegetation index distributions were compared with the previously surveyed vegetation and ground-cover status. The ability to detect vegetation state using images obtained from multiple image sensors mounted on a low-cost UAV was thereby examined. Since the UAS equipped with VIS_RGB, BG_NIR, and TIR image sensors on a low-cost UAV has proven more economical and efficient than previous vegetation survey methods that depend on satellite and aerial imagery, it is expected to be used in areas such as precision agriculture and water and forest research.
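
A minimal sketch of the band arithmetic behind indices like those used in the study; NDVI is shown as the representative example, with synthetic reflectance values standing in for real imagery.

```python
import numpy as np

def ndvi(nir_band, red_band):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir_band.astype(np.float32)
    red = red_band.astype(np.float32)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Illustrative use with synthetic reflectance data in [0, 1].
nir = np.random.rand(100, 100)
red = np.random.rand(100, 100)
index_map = ndvi(nir, red)
vegetated_fraction = float((index_map > 0.3).mean())  # 0.3 is a common rule-of-thumb cutoff
```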

RGB-LED-based Optical Camera Communication using Multilevel Variable Pulse Position Modulation for Healthcare Applications

  • Rachim, Vega Pradana;An, Jinyoung;Pham, Quan Ngoc;Chung, Wan-Young
    • Journal of Sensor Science and Technology / v.27 no.1 / pp.6-12 / 2018
  • In this paper, a 32-variable pulse position modulation (32-VPPM) scheme is proposed to support a red-green-blue light-emitting-diode (RGB-LED) based optical camera communication (OCC) system. The proposed modulation scheme is designed to enhance the OCC data transmission rate and targets wearable biomedical data monitoring. OCC technology has been used as an alternative to radio-frequency (RF) wireless systems for long-term self-healthcare monitoring. Different biomedical signals, such as electrocardiograms, photoplethysmograms, and respiration signals, are monitored and transmitted wirelessly from the wearable biomedical device to the smartphone receiver. A common 30 frames-per-second (fps) smartphone camera with a CMOS image sensor is used to record the transmitted optical signal. The overall proposed system architecture, modulation scheme, and data demodulation are also discussed. Experimental results show that the proposed system achieves more than 9 kbps using only a common smartphone camera as the receiver.
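
A hedged sketch of the pulse-position principle behind the scheme described above: each 5-bit symbol (32 values) selects the slot in which the LED pulse is placed within a fixed frame. VPPM schemes typically also vary the pulse width for dimming, and the paper's exact 32-VPPM design is not detailed in the abstract, so only the position-encoding part is illustrated here; slot counts and the example bit string are arbitrary.

```python
SLOTS_PER_SYMBOL = 32

def bits_to_symbols(bits):
    """Group a bit string into 5-bit symbols (values 0..31), zero-padding the last one."""
    return [int(bits[i:i + 5].ljust(5, "0"), 2) for i in range(0, len(bits), 5)]

def ppm_encode(bits):
    """Return the on/off slot sequence driving the LED for the whole message."""
    waveform = []
    for symbol in bits_to_symbols(bits):
        frame = [0] * SLOTS_PER_SYMBOL
        frame[symbol] = 1          # pulse position carries the symbol value
        waveform.extend(frame)
    return waveform

signal = ppm_encode("1011001110")  # two symbols: 22 and 14
```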