• Title/Summary/Keyword: Color line Sensor


On-line Real Time Soil Sensor

  • Shibusawa, S.
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.28-33 / 2003
  • Achievements in the real-time soil spectrophotometer include an improved soil penetrator that ensures a uniform soil surface under high-speed conditions, real-time collection of underground soil reflectance, acquisition of underground soil color images, use of an RTK-GPS, and a compact arrangement of all units. Field experiments with the soil spectrophotometer were conducted in a 0.5 ha paddy field. From the original reflectance, averaging and multiple scatter correction were applied, the Kubelka-Munk (KM) transformation was taken as soil absorption, and its 1st and 2nd derivatives were calculated. Where the spectra were highly correlated with the soil parameters, stepwise regression analysis was conducted. Results include the best prediction models for moisture, soil organic matter (SOM), nitrate nitrogen (NO$_3$-N), pH and electric conductivity (EC), as well as soil maps obtained by block kriging analysis.
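
The preprocessing chain named in the abstract (averaging, scatter correction, the Kubelka-Munk transform and its derivatives) can be illustrated with a minimal sketch; the spectrum below is placeholder data and the 5 nm sampling grid is an assumption, not taken from the paper.

```python
# Minimal sketch (not the authors' code): Kubelka-Munk (KM) transformation of a
# reflectance spectrum and its 1st/2nd derivatives, as named in the abstract.
import numpy as np

def kubelka_munk(reflectance):
    """KM absorption function F(R) = (1 - R)^2 / (2 R) for reflectance in (0, 1]."""
    r = np.clip(reflectance, 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical spectrum sampled every 5 nm (placeholder values).
wavelengths = np.arange(400.0, 1700.0, 5.0)
reflectance = 0.3 + 0.1 * np.sin(wavelengths / 200.0)

km = kubelka_munk(reflectance)
d1 = np.gradient(km, wavelengths)   # 1st derivative with respect to wavelength
d2 = np.gradient(d1, wavelengths)   # 2nd derivative
```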


A Study on the VLSI Design of Efficient Color Interpolation Technique Using Spatial Correlation for CCD/CMOS Image Sensor (화소 간 상관관계를 이용한 CCD/CMOS 이미지 센서용 색 보간 기법 및 VLSI 설계에 관한 연구)

  • Lee, Won-Jae; Lee, Seong-Joo; Kim, Jae-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD / v.43 no.11 s.353 / pp.26-36 / 2006
  • In this paper, we propose a cost-effective color filter array (CFA) demosaicing method for digital still cameras in which a single CCD or CMOS image sensor is used. Since a CFA is adopted, the missing color values in the red, green and blue channels must be interpolated at each pixel location. While most state-of-the-art algorithms invest a great deal of computational effort in enhancing the reconstructed image to overcome color artifacts, we focus on eliminating the color artifacts with low computational complexity. Using the spatial correlation of adjacent pixels, the edge-directional information of the neighboring pixels is used to determine the edge direction of the current pixel. We apply our method to state-of-the-art algorithms that use edge-directed interpolation of the missing color channels. The experimental results show that the proposed method improves the demosaiced image quality by $0.09{\sim}0.47$ dB in PSNR, depending on the base algorithm, by removing most of the color artifacts. The proposed method was implemented and verified successfully using Verilog HDL and an FPGA, and was synthesized to gate-level circuits using a 0.25 um CMOS standard cell library. The total logic gate count is 12K, and five line memories are used.
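
As a rough illustration of the edge-directed idea described above (a sketch under assumptions, not the paper's VLSI design), the missing green value at a red or blue Bayer site can be interpolated along the direction with the smaller gradient:

```python
# Sketch: edge-directed green interpolation at a red/blue site of a Bayer CFA.
# The mosaic is assumed to be a 2-D float array; border handling is omitted.
import numpy as np

def interpolate_green(cfa, y, x):
    """Estimate the missing green value at (y, x), where the CFA holds R or B."""
    dh = abs(cfa[y, x - 1] - cfa[y, x + 1])   # horizontal gradient (green neighbors)
    dv = abs(cfa[y - 1, x] - cfa[y + 1, x])   # vertical gradient (green neighbors)
    if dh < dv:                               # edge runs horizontally: average along it
        return (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0
    if dv < dh:                               # edge runs vertically
        return (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0
    return (cfa[y, x - 1] + cfa[y, x + 1] + cfa[y - 1, x] + cfa[y + 1, x]) / 4.0

# Tiny usage on a synthetic 4x4 mosaic.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(interpolate_green(mosaic, 1, 1))
```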

A Study for AGV Steering Control using Evolution Strategy (진화전략 알고리즘을 이용한 AGV 조향제어에 관한 연구)

  • 이진우; 손주한; 최성욱; 이영진; 이권순
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.149-149 / 2000
  • We conducted AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image processing part, which measures the state of the guideline and the AGV; the other obtains the reference steering angle from the image processing results. First, the 2-dimensional image information derived from the vision sensor is interpreted as 3-dimensional information using the angle and position of the CCD camera. Through this process, the AGV knows its driving conditions. Using that information, the AGV then calculates a reference steering angle that changes with the speed of the AGV. At low speed, it focuses on the left/right error values of the guideline; as the speed of the AGV increases, it focuses on the slope of the guideline. Finally, we model the above behavior as a PID controller and regulate its coefficient values according to the speed of the AGV.
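
A minimal sketch of the speed-dependent PID idea described in the abstract (the gain values, the error blend, and the variable names are illustrative assumptions):

```python
# Sketch: PID steering control whose error blends the guide line's lateral
# offset (dominant at low speed) with its slope (dominant at high speed).
class SteeringPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, lateral_error, line_slope, speed, max_speed):
        w = min(max(speed / max_speed, 0.0), 1.0)       # weight grows with speed
        error = (1.0 - w) * lateral_error + w * line_slope
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```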


A Fusion Sensor System for Efficient Road Surface Monitoring on UGV (UGV에서 효율적인 노면 모니터링을 위한 퓨전 센서 시스템)

  • Seonghwan Ryu; Seoyeon Kim; Jiwoo Shin; Taesik Kim; Jinman Jung
    • Smart Media Journal / v.13 no.3 / pp.18-26 / 2024
  • Road surface monitoring is essential for maintaining road environment safety through managing risk factors such as rutting and crack detection. Using autonomous-driving-based UGVs with high-performance 2D laser sensors enables more precise measurements. However, the increased energy consumption of these sensors is difficult to sustain with the constrained battery capacity. In this paper, we propose a fusion sensor system for efficient road surface monitoring with UGVs. The proposed system combines color information from cameras and depth information from line laser sensors to accurately detect surface displacement. Furthermore, a dynamic sampling algorithm is applied to control the scanning frequency of the line laser sensors based on the detection status of monitoring targets from the camera sensors, reducing unnecessary energy consumption. A power consumption model of the fusion sensor system analyzes its energy efficiency considering various crack distributions and sensor characteristics in different mission environments. Performance analysis demonstrates that, when the power consumption of the line laser sensor in the active state is set to twice that of the saving state, power consumption efficiency increases by 13.3% compared to fixed sampling under the condition of λ=10, µ=10.
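
The dynamic sampling idea can be sketched as follows, with the active-state power fixed at twice the saving-state power as in the abstract's analysis; the scan rates, power values and function names are placeholders, not values from the paper.

```python
# Sketch: camera-driven dynamic sampling of the line laser sensor.
def plan_scans(detections, active_hz=20.0, saving_hz=5.0,
               saving_power_w=1.0, period_s=1.0):
    """detections: iterable of booleans, one per control period, True when the
    camera reports a candidate crack or rut in view."""
    active_power_w = 2.0 * saving_power_w       # active state draws 2x the saving state
    scans, energy_j = 0, 0.0
    for target_seen in detections:
        hz = active_hz if target_seen else saving_hz
        power = active_power_w if target_seen else saving_power_w
        scans += int(hz * period_s)             # laser profiles captured this period
        energy_j += power * period_s            # laser energy spent this period
    return scans, energy_j
```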

Fabrication and Performance Investigation of Surface Temperature Sensor Using Fluorescent Nanoporous Thin Film II (형광 나노 포러스 박막을 이용한 표면 온도 센서의 제작 및 성능 연구 II)

  • Kim, Hyun Jung; Yoo, Jaisuk; Park, Jinil
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.25 no.12 / pp.674-678 / 2013
  • We present a non-invasive technique to measure the temperature distribution in nano-sized porous thin films by means of the two-color laser-induced fluorescence (2-LIF) of rhodamine B. The fluorescence induced by the green line of a mercury lamp, with a set of optical filters, was measured on two separate color bands. The bands are selected for the strong difference in the temperature sensitivity of their fluorescence quantum yield. This technique allows absolute temperature measurements by determining the relative intensities on two suitable spectral bands of the same dye. To measure temperature fields, a silica (SiO2) nanoporous structure of 1-um thickness was constructed on a cover glass, and fluorescent dye was absorbed into the porous thin film. Calibration curves of fluorescence intensity versus temperature were measured over a temperature range of $10-60^{\circ}C$, and the temperature field was visualized and measured from the intensity distributions taken from the specimen.
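
A minimal sketch of the two-color ratio principle described above (the polynomial calibration form and the variable names are assumptions): the band-intensity ratio cancels dye concentration and illumination, and a calibration curve maps it to temperature.

```python
# Sketch: two-color LIF ratio thermometry with a polynomial calibration curve.
import numpy as np

def fit_calibration(ratios, temperatures_c, degree=2):
    """Fit T = f(I_band1 / I_band2) from calibration measurements (10-60 C here)."""
    return np.polyfit(ratios, temperatures_c, degree)

def temperature_field(i_band1, i_band2, coeffs):
    """Convert two band-intensity images into a temperature map."""
    ratio = i_band1 / np.maximum(i_band2, 1e-9)   # avoid division by zero
    return np.polyval(coeffs, ratio)
```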

Information Fusion of Photogrammetric Imagery and Lidar for Reliable Building Extraction (광학 영상과 Lidar의 정보 융합에 의한 신뢰성 있는 구조물 검출)

  • Lee, Dong-Hyuk; Lee, Kyoung-Mu; Lee, Sang-Uk
    • Journal of Broadcast Engineering / v.13 no.2 / pp.236-244 / 2008
  • We propose a new building detection and description algorithm for Lidar data and photogrammetric imagery using color segmentation, line segment matching and perceptual grouping. Our algorithm consists of two steps. In the first step, from the initial building regions extracted from the Lidar data and the color segmentation results from the photogrammetric imagery, we extract coarse building boundaries based on the Lidar results, using a split-and-merge technique on the aerial imagery. In the second step, we extract precise building boundaries from the coarse boundaries and the edges of the aerial imagery using line segment matching and perceptual grouping. The contribution of this algorithm is that color information in the photogrammetric imagery is used to complement collapsed building boundaries obtained from Lidar. Moreover, the linearity of the edges and the construction of closed roof forms are used to reflect the characteristics of man-made objects. Experimental results on multisensor data demonstrate that the proposed algorithm produces more accurate and reliable results than the Lidar sensor alone.
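
One way to picture how color segments can repair collapsed Lidar boundaries (a sketch under assumed data layouts, not the authors' split-and-merge implementation): a color segment is absorbed into the building region when most of its pixels already lie inside the coarse Lidar mask.

```python
# Sketch: complementing a coarse Lidar building mask with color segments.
import numpy as np

def refine_building_mask(lidar_mask, segment_labels, overlap_threshold=0.5):
    """lidar_mask: bool HxW; segment_labels: int HxW from color segmentation."""
    refined = np.zeros_like(lidar_mask, dtype=bool)
    for label in np.unique(segment_labels):
        segment = segment_labels == label
        overlap = np.logical_and(segment, lidar_mask).sum() / segment.sum()
        if overlap >= overlap_threshold:
            refined |= segment        # whole segment restores the collapsed boundary
    return refined
```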

DEVELOPMENT OF CHLOROPHYLL ALGORITHM FOR GEOSTATIONARY OCEAN COLOR IMAGER (GOCI)

  • Min, Jee-Eun; Moon, Jeong-Eon; Shanmugam, Palanisamy; Ryu, Joo-Hyung; Ahn, Yu-Hwan
    • Proceedings of the KSRS Conference / 2007.10a / pp.162-165 / 2007
  • Chlorophyll concentration is an important factor for physical oceanography as well as biological oceanography, and many oceanographic researchers have investigated it for a long time. However, investigation by vessel is very inefficient; ocean color remote sensing, on the other hand, is a powerful means of obtaining fine-scale (spatial and temporal) measurements of chlorophyll concentration. The Geostationary Ocean Color Imager (GOCI), an ocean color sensor carried on COMS (Communication, Ocean and Meteorological Satellite), will be launched in late 2008 in Korea. Given the need for a GOCI algorithm, we developed a chlorophyll algorithm for GOCI in this study. There are two types of chlorophyll algorithms: one is an empirical algorithm using band ratios, and the other is a fluorescence-based algorithm. To develop the GOCI chlorophyll algorithm empirically, we used bands centered at 412 nm, 443 nm and 555 nm for DOM absorption, the chlorophyll absorption maximum and the absorption of suspended solid material, respectively. For the fluorescence-based algorithm, we analyzed in-situ remote sensing reflectance $(R_{rs})$ data using the baseline method. The fluorescence line height $({\Delta}Flu)$, calculated from $R_{rs}$ at bands centered on 681 nm and 688 nm, and ${\Delta}Flu_{(area)}$ were used to develop the algorithm. As a result, the ${\Delta}Flu_{(area)}$ method yields the best fit in terms of the squared correlation coefficient $(R^2)$.
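
The baseline method mentioned above can be sketched generically: a straight baseline is drawn between two bands bracketing the fluorescence peak, and the height of $R_{rs}$ above that baseline at the peak band is taken as the fluorescence signal. The band arguments in the sketch are placeholders, not the paper's choices.

```python
# Sketch: fluorescence line height by the baseline method.
def fluorescence_line_height(rrs, wl_left, wl_peak, wl_right):
    """rrs: dict mapping wavelength (nm) -> remote sensing reflectance."""
    slope = (rrs[wl_right] - rrs[wl_left]) / (wl_right - wl_left)
    baseline_at_peak = rrs[wl_left] + slope * (wl_peak - wl_left)
    return rrs[wl_peak] - baseline_at_peak
```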


Design and Implementation of the Stop line and Crosswalk Recognition Algorithm for Autonomous UGV (자율 주행 UGV를 위한 정지선과 횡단보도 인식 알고리즘 설계 및 구현)

  • Lee, Jae Hwan; Yoon, Heebyung
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.271-278 / 2014
  • Although the stop line and crosswalk are among the most basic objects to be recognized in a transportation system, the features that can be extracted from them are very limited, and they are difficult to recognize with laser, RF and GPS/INS technologies as well as with image-based recognition. For this reason, only limited research has been done in this area. In this paper, an algorithm to recognize the stop line and crosswalk is designed and implemented using image-based recognition on images acquired through a vision sensor. The algorithm consists of three functions: 'Region of Interest', which pre-selects the area needed for feature extraction in order to speed up data processing; 'Color Pattern Inspection', which processes only images in which white pixels exceed a certain proportion, removing unnecessary operations; and 'Feature Extraction and Recognition', which extracts edge features and compares them with previously modeled ones to identify the stop line and crosswalk. In particular, a case-based feature comparison algorithm makes it possible to determine whether both the stop line and the crosswalk are present or only one of them. The proposed algorithm also extends existing research by comparing and analyzing the effect of the in-vehicle camera installation, changes in the recognition rate with estimated distance, and various constraints such as backlight and shadow.
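
As a small illustration of the 'Color Pattern Inspection' step (a sketch with assumed thresholds, not the authors' implementation), a frame is processed further only when near-white pixels exceed a set proportion of the region of interest:

```python
# Sketch: skip frames whose ROI does not contain enough white pixels.
import numpy as np

def has_enough_white(roi_bgr, white_level=200, min_ratio=0.05):
    """roi_bgr: HxWx3 uint8 region of interest cropped from the input frame."""
    white = np.all(roi_bgr >= white_level, axis=2)   # near-white pixels
    return white.mean() >= min_ratio
```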

Golf Swing Classification Using Fuzzy System (퍼지 시스템을 이용한 골프 스윙 분류)

  • Park, Junwook; Kwak, Sooyeong
    • Journal of Broadcast Engineering / v.18 no.3 / pp.380-392 / 2013
  • A method to classify a golf swing motion into 7 sections using a Kinect sensor and a fuzzy system is proposed. The inputs to the fuzzy logic are the positions of the golf club and its head, which are extracted from the golfer's joint positions and the color information obtained by a Kinect sensor. The proposed method consists of three modules: one extracts the joint information, another detects and tracks the golf club, and the third classifies the golf swing motions. The first module extracts the hand position from the joint information provided by the Kinect sensor. The second module detects the golf club and its head with the Hough line transform, based on the hand coordinate. Using fuzzy logic as the classification engine reduces recognition errors and consequently improves the robustness of the classification. In experiments on real-time video clips, the proposed method shows a classification reliability of 85.2%.
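
The club-detection step can be pictured with a short sketch (the parameter values are placeholders and the endpoint-to-hand heuristic is an assumption): line segments are found with a probabilistic Hough transform, and the segment closest to the hand joint is taken as the club.

```python
# Sketch: golf club detection with the Hough line transform near the hand joint.
import cv2
import numpy as np

def detect_club(gray_frame, hand_xy):
    edges = cv2.Canny(gray_frame, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    hand = np.asarray(hand_xy, dtype=float)

    def distance_to_hand(line):
        x1, y1, x2, y2 = line[0]
        return min(np.linalg.norm(hand - (x1, y1)), np.linalg.norm(hand - (x2, y2)))

    return min(lines, key=distance_to_hand)[0]   # (x1, y1, x2, y2) of the club line
```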

A Design and Implementation of Fitness Application Based on Kinect Sensor

  • Lee, Won Joo
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.43-50 / 2021
  • In this paper, we design and implement KITNESS, a Windows application that gives feedback on the accuracy of fitness motions based on a Kinect sensor. The application uses the Kinect camera and joint recognition sensor to give the user feedback so that they exercise in the correct fitness posture. The distance between the user and the Kinect is measured using the Kinect IR emitter and IR depth sensor, and the user's joint positions and the skeleton data of each joint are obtained. Using these data, a distance is calculated for each joint position and posture of the user, and the accuracy of the posture is determined. Users can then check their posture through the Kinect RGB camera: if the user's posture is correct, the skeleton information is displayed as a green line, and if it is not, the inaccurate part is displayed as a red line for intuitive feedback. Through this application, the user receives feedback on the accuracy of the exercise posture and can therefore exercise correctly on their own. The application classifies the exercises into three areas, neck, waist and leg, and increases the Kinect recognition rate by excluding positions that Kinect does not recognize due to overlapping joints in each exercise area. At the end of the application, the last exercise is shown as an image for 5 seconds to inspire a sense of accomplishment and to encourage continued exercise.
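
A minimal sketch of the per-joint accuracy check described above (the joint names, reference pose and 10 cm tolerance are assumptions, not values from the paper): each tracked joint is compared against a reference position and colored green when within tolerance, red otherwise.

```python
# Sketch: color-coded feedback from Kinect skeleton data against a reference pose.
import math

def joint_feedback(user_joints, reference_joints, tolerance_m=0.10):
    """Both arguments map joint name -> (x, y, z) in meters; returns name -> color."""
    feedback = {}
    for name, ref in reference_joints.items():
        error = math.dist(user_joints[name], ref)        # Euclidean distance
        feedback[name] = "green" if error <= tolerance_m else "red"
    return feedback
```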