• Title/Summary/Keyword: Remote sensing and sensors

m-Health System for Processing of Clinical Biosignals based Android Platform (안드로이드 플랫폼 기반의 임상 바이오신호 처리를 위한 모바일 헬스 시스템)

  • Seo, Jung-Hee;Park, Hung-Bog
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.7
    • /
    • pp.97-106
    • /
    • 2012
  • Managing biosignal data on mobile devices causes many problems in the real-time transmission of large volumes of multimedia data and in storage. This paper therefore proposes an m-Health system, a mobile system for processing clinical data, to provide rapid medical service. The system deploys a health system over an IP network, combines the outputs of biosensors at remote sites, and performs integrated electronic processing of the various biosensor data. The m-Health system measures and monitors various biosignals and sends them to the data servers of remote hospitals. It is an Android-based mobile application that patients, their families, and medical staff can use anywhere, anytime. Medical staff access patient data on the hospital data servers and provide feedback on diagnosis and prescription to the patients or users. The video stream for patient monitoring uses a scalable transcoding technique that decides the data size appropriate to the network traffic before sending the stream, remarkably reducing the load on mobile systems and networks.
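The scalable transcoding step described above can be sketched as a profile selection driven by measured network bandwidth; the profile table, thresholds, and function name below are hypothetical illustrations, not values or code from the paper.

```python
# Hypothetical transcoding profiles: (min_bandwidth_kbps, width, height, bitrate_kbps)
PROFILES = [
    (2000, 1280, 720, 1800),
    (800, 640, 480, 700),
    (300, 320, 240, 250),
    (0, 160, 120, 100),
]

def select_profile(bandwidth_kbps: float):
    """Return the largest (width, height, bitrate) the measured bandwidth can sustain."""
    for min_bw, w, h, bitrate in PROFILES:
        if bandwidth_kbps >= min_bw:
            return (w, h, bitrate)
    return PROFILES[-1][1:]
```

A real system would re-measure bandwidth periodically and re-select the profile as network traffic changes.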

An Approach for the Cross Modality Content-Based Image Retrieval between Different Image Modalities

  • Jeong, Inseong;Kim, Gihong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.6_2
    • /
    • pp.585-592
    • /
    • 2013
  • CBIR (Content-Based Image Retrieval) is an effective tool for searching and extracting image content from a large remote sensing image database in response to a query from an operator or end user. However, because imaging principles differ among sensors, visual representation varies with image modality. Given that images of various modalities are archived in such a database, modality differences must be tackled for a successful CBIR implementation, yet this topic has seldom been dealt with and still poses a practical challenge. This study suggests a cross-modality CBIR method (termed CM-CBIR) that transforms a given query feature vector through a supervised procedure in order to link modalities. The procedure leverages the analyst's skill in a training step, after which the transformed query vector is used to search target images of a different modality. Initial results show the potential of the proposed CM-CBIR method, which delivers the image content of interest from images of a different modality. Although its retrieval capability is outperformed by that of same-modality CBIR (abbreviated SM-CBIR), the performance gap can be compensated for by employing the user's relevance feedback, a conventional technique for retrieval enhancement.
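As one way to picture the query-vector transformation, here is a minimal sketch that learns a linear map between the two modalities' feature spaces from analyst-supplied training pairs; the abstract does not specify the form of the transform, so the linear map and least-squares fit are assumptions for illustration only.

```python
import numpy as np

def learn_transform(src_feats, dst_feats):
    """Least-squares solution of src_feats @ W ~= dst_feats (assumed linear link)."""
    W, *_ = np.linalg.lstsq(src_feats, dst_feats, rcond=None)
    return W

def retrieve(query_vec, W, database_feats, k=3):
    """Transform the query into the target modality's space, then rank by distance."""
    q = query_vec @ W
    d = np.linalg.norm(database_feats - q, axis=1)
    return np.argsort(d)[:k]  # indices of the k closest database images
```

Relevance feedback, mentioned at the end of the abstract, would then re-weight or re-fit this transform using the images the user marks as relevant.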

A Development of Urban Farm Management System based on USN (USN 기반의 도시 농업 관리 시스템 개발)

  • Ryu, Dae-Hyun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.12
    • /
    • pp.1917-1922
    • /
    • 2013
  • The objective of this study is to develop a USN-based urban farm management system for remote monitoring and control. The system makes urban farms easy to manage and stores the collected information in a database in order to establish the best environment for growing crops. To this end, we built a greenhouse and installed several types of sensors and a camera through which remote sensing information is collected, and we built a web page that gives users convenient, real-time access to the information and to the control functions. Through long-term field tests we experimentally confirmed the stability of all functions, such as the collection and transfer of information and environmental control in the greenhouse. The system will make it convenient for farmers to grow crops by removing time and space constraints and providing a great deal of flexibility, and it can be extended to other environments and facilities such as factories, offices, and homes.
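A rule-based control loop of the kind such a system might run could look like the following sketch; the setpoints, thresholds, and action names are invented for illustration, not taken from the paper.

```python
def control_actions(temp_c, humidity_pct, temp_range=(18.0, 28.0), hum_min=60.0):
    """Map current greenhouse sensor readings to actuator commands (hypothetical rules)."""
    actions = []
    lo, hi = temp_range
    if temp_c > hi:
        actions.append("open_vent")   # too hot: ventilate
    elif temp_c < lo:
        actions.append("heater_on")   # too cold: heat
    if humidity_pct < hum_min:
        actions.append("mist_on")     # too dry: mist
    return actions
```

In the deployed system these decisions could equally be issued manually through the web page, with the sensor readings logged to the database either way.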

CORRELATION ANALYSIS METHOD OF SENSOR DATA FOR PREDICTING THE FOREST FIRE

  • Shon Ho Sun;Chi Jeong Hee;Kim Eun Hee;Ryu Keun Ho;Jung Doo Yeong;Kim Kyung Ok
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.186-188
    • /
    • 2005
  • Because a forest fire changes direction according to environmental elements, its direction is difficult to predict. Although some researchers have studied predicting the occurrence and direction of forest fires using remote detection techniques, current approaches are neither sufficient nor efficient. Recently, with developments in sensor technology, many in-situ sensors have been developed. Such in-situ sensor data are used to collect environmental elements such as temperature, humidity, and wind velocity. Accordingly, we need a technique that analyzes these environmental elements and predicts the direction of a forest fire from in-situ sensor data. In this paper, as a technique for predicting the direction of a forest fire, we propose a correlation analysis technique for in-situ sensor data such as temperature, humidity, and wind velocity. The proposed technique is based on a clustering method: it clusters the in-situ sensor data and then analyzes the multivariate correlations among the clusters. This prediction information not only helps predict the direction of a forest fire but also supports finding a response once the environmental elements of the fire are predicted. Accordingly, the technique is expected to reduce the damage caused by the forest fires that occur so frequently these days.
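The cluster-then-correlate idea can be sketched with plain NumPy; the small k-means implementation and per-cluster correlation matrices below are an illustrative reading of the technique, not the authors' code, and the choice of Euclidean k-means is an assumption.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Basic k-means: cluster multivariate in-situ sensor readings into k groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_correlations(X, labels):
    """Correlation matrix of the sensor variables within each cluster."""
    return {j: np.corrcoef(X[labels == j], rowvar=False)
            for j in np.unique(labels)}
```

With columns such as temperature, humidity, and wind velocity, each cluster's matrix summarises how the environmental elements co-vary in that regime, which is the information fed into the direction prediction.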

A Similarity Weight-based Method to Detect Damage Induced by a Tsunami

  • Jeon, Hyeong-Joo;Kim, Yong-Hyun;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.4
    • /
    • pp.391-402
    • /
    • 2016
  • Among the various remote sensing sensors, and in comparison with electro-optical sensors, SAR (Synthetic Aperture Radar) is very suitable for assessing areas damaged by disaster events owing to its all-weather, day-and-night acquisition capability and its sensitivity to geometric variables. The conventional CD (Change Detection) method, which uses two-date data, is typically used for mapping damage over extensive areas in a short time, but because data from only two dates are used, the information available to conventional CD is limited. In this paper, we propose a novel CD method that is extended to three-date data consisting of two pre-disaster SAR images and one post-disaster SAR image. The proposed CD method detects changes using a similarity weight image derived from the neighborhood information of each pixel in the three-date data. We conducted an experiment using three single-polarization ALOS PALSAR (Advanced Land Observing Satellite Phased Array type L-band SAR) images collected over Miyagi, Japan, which was seriously damaged by the 2011 East Japan tsunami. The results demonstrate that the mapping accuracy for damaged areas can be improved by about 26%, with an increase in the g-mean, compared to the conventional CD method. These improved results prove the performance of our proposed CD method and show that it is more suitable than the conventional CD method for detecting areas damaged by disaster.
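The g-mean cited in the accuracy comparison is the geometric mean of the detection accuracies of the changed and unchanged classes; a minimal sketch from a binary confusion matrix follows (the variable names are mine, not the paper's).

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of sensitivity (changed class) and specificity (unchanged class)."""
    sensitivity = tp / (tp + fn)  # fraction of changed pixels correctly detected
    specificity = tn / (tn + fp)  # fraction of unchanged pixels correctly detected
    return math.sqrt(sensitivity * specificity)
```

Unlike overall accuracy, the g-mean stays low if either class is detected poorly, which matters for damage mapping where changed pixels are a small minority.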

Integration of top-down and bottom-up approaches for a complementary high spatial resolution satellite rainfall product in South Korea

  • Nguyen, Hoang Hai;Han, Byungjoo;Oh, Yeontaek;Jung, Woosung;Shin, Daeyun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.153-153
    • /
    • 2022
  • Large-scale, accurate observations at fine spatial resolution by means of remote sensing offer an effective tool for capturing rainfall variability beyond what traditional rain gauges and weather radars provide. Although satellite rainfall products (SRPs) derived using the two major estimation approaches have been evaluated worldwide, their practical applications suffer from limitations. In particular, the traditional top-down SRPs (e.g., IMERG), which are based on direct estimation of the rain rate from microwave satellite observations, are mainly restricted by their coarse spatial resolution, while applications of the bottom-up approach, which allows backward estimation of rainfall from soil moisture signals, to novel high-spatial-resolution soil moisture satellite sensors over South Korea have not been introduced. Thus, this study aims to evaluate the performance of a state-of-the-art bottom-up SRP (the self-calibrated SM2RAIN model) applied to the C-band SAR Sentinel-1, a statistically downscaled version of the conventional top-down IMERG SRP, and their integration, at a targeted high spatial resolution of 0.01° (~1 km) over central South Korea, where differences in climate zones (coastal vs. mainland) and vegetation cover (croplands vs. mixed forests) are highlighted. The results indicate that each single SRP has strengths under distinct climatic and vegetation conditions but also drawbacks. Superior performance was obtained by merging the individual SRPs, providing preliminary results for a complementary high-spatial-resolution SRP over central South Korea. These results shed light on the further development of the integration framework and of a complementary high-spatial-resolution rainfall product from multi-satellite sensors as well as multi-observing systems (integrated gauge-radar-satellite), extended to the whole of South Korea, to meet the demands of urban hydrology and microscale agriculture.
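One common way to merge two rainfall products, used here purely to illustrate the integration idea (the abstract does not state the study's actual merging scheme), is inverse-error-variance weighting, which lets whichever product is more reliable at a given pixel dominate the merged estimate.

```python
import numpy as np

def merge_srps(rain_a, rain_b, err_a, err_b):
    """Merge two co-located rainfall fields by inverse-error-variance weighting
    (hypothetical scheme; err_a/err_b are each product's error standard deviation)."""
    wa, wb = 1.0 / err_a ** 2, 1.0 / err_b ** 2
    return (wa * rain_a + wb * rain_b) / (wa + wb)
```

With equal errors this reduces to a simple average; as one product's error shrinks, the merged field converges to that product.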

VALIDATION OF SEA ICE MOTION DERIVED FROM AMSR-E AND SSM/I DATA USING MODIS DATA

  • Yaguchi, Ryota;Cho, Ko-Hei
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.301-304
    • /
    • 2008
  • Since longer-wavelength microwave radiation can penetrate clouds, satellite passive microwave sensors can observe the sea ice of the entire polar region on a daily basis. It is thus becoming popular to derive sea ice motion vectors from a pair of satellite passive microwave images observed at an interval of one or a few days. Usually, the accuracy of the derived vectors is validated by comparison with the position data of drifting buoys. However, the number of buoys available for validation is always quite limited compared to the large number of vectors derived from satellite images. In this study, sea ice motion vectors automatically derived from pairs of AMSR-E 89 GHz images (IFOV = 3.5 × 5.9 km) by image-to-image cross-correlation were validated by comparison with sea ice motion vectors manually derived from pairs of cloudless MODIS images (IFOV = 250 × 250 m). Since AMSR-E and MODIS are both on NASA's Aqua satellite, the observation times of the two sensors are the same. The relative errors of the AMSR-E vectors against the MODIS vectors were calculated, and the accuracy validation was conducted for five scenes. If we accept a relative error of less than 30% as a correct vector, 75% to 92% of the AMSR-E vectors derived from one scene were correct. On the other hand, the percentage of correct sea ice vectors derived from a pair of SSM/I 85 GHz images (IFOV = 15 × 13 km) observed nearly simultaneously with one of the AMSR-E images was 46%. The difference in accuracy between AMSR-E and SSM/I reflects the difference in IFOV. The accuracies of the H and V polarizations differed from scene to scene, which may reflect differences in the sea ice distribution and snow cover of each scene.
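Image-to-image cross-correlation for motion vectors can be sketched as follows: standardise a template around a grid point in the first image and find the offset in the second image that maximises the normalised correlation. The window sizes and brute-force search below are illustrative choices, not the authors' parameters.

```python
import numpy as np

def ice_motion(img1, img2, y, x, tpl=5, search=3):
    """(dy, dx) displacement of the template centred at (y, x) in img1 that
    maximises normalised cross-correlation within a search window of img2."""
    h = tpl // 2
    t = img1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)          # standardise the template
    best, best_dv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = img2[y + dy - h:y + dy + h + 1,
                     x + dx - h:x + dx + h + 1].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-9)  # standardise the candidate window
            score = (t * w).mean()                 # normalised correlation
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv
```

Dividing the pixel displacement by the time between the image pair and multiplying by the pixel size yields the ice drift velocity.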

Derivation of Green Coverage Ratio Based on Deep Learning Using MAV and UAV Aerial Images (유·무인 항공영상을 이용한 심층학습 기반 녹피율 산정)

  • Han, Seungyeon;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1757-1766
    • /
    • 2021
  • The green coverage ratio is the ratio of green coverage area to land area, and it is used as a practical urban greening index. The green coverage ratio is conventionally calculated from the land cover map, but the map's low spatial resolution and inconsistent production cycle make it difficult to calculate the correct green coverage area and to analyze green coverage precisely. This study therefore proposes a new method to calculate the green coverage area using aerial images and deep neural networks. Aerial images enable precise analysis thanks to their high resolution and relatively constant acquisition cycle, and deep learning can automatically detect the green coverage area in them. Local governments acquire manned aerial images for various purposes every year, and these can be used to calculate the green coverage ratio quickly; however, precise analysis is difficult because details of the imagery such as acquisition date, resolution, and sensors cannot be selected or modified. This limitation can be supplemented by unmanned aerial vehicles, which can mount various sensors and acquire high-resolution images through low-altitude flight. Accordingly, we proposed a method to calculate the green coverage ratio from manned or unmanned aerial images and verified it experimentally: the ratio could be calculated with high accuracy for all green types. However, the green coverage ratio calculated from the manned aerial images had limitations in complex environments. The unmanned aerial images used to compensate for this allowed the green coverage ratio to be calculated with high accuracy even in complex environments, and more precise green area detection was possible with additional band images. In the future, the green coverage ratio is expected to be calculated effectively by using newly acquired unmanned aerial imagery to supplement the existing manned aerial imagery.
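Once the network has produced a per-pixel green/non-green segmentation, the ratio itself is a pixel count; a minimal sketch, assuming a binary mask in which 1 marks pixels the network classified as green cover:

```python
import numpy as np

def green_coverage_ratio(mask):
    """Green coverage ratio (%) from a binary segmentation mask (1 = green cover)."""
    mask = np.asarray(mask)
    return 100.0 * mask.sum() / mask.size
```

Georeferenced masks would additionally be clipped to each administrative boundary before counting, so the ratio is reported per district rather than per image.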

Wireless Mobile Sensor Networks with Cognitive Radio Based FPGA for Disaster Management

  • Ananthachari, G.A. Preethi
    • Journal of Information Processing Systems
    • /
    • v.17 no.6
    • /
    • pp.1097-1114
    • /
    • 2021
  • The primary objective of this work was to find a solution for the survival of people in an emergency flood. Geographical information was obtained using remote sensing techniques. People in need request support through helpline numbers; however, it cannot be ensured that everyone will receive this facility, and a proper link is required to communicate with people at risk in the affected areas. Mobile sensor networks with field-programmable gate array (FPGA) self-configurable radios were therefore deployed in the damaged areas for communication. Ad-hoc networks do not have a centralized structure: the mobile nodes form a temporary structure and act as base stations, and they search the spectrum for channels to utilize for better communication. FPGA-based techniques ensure seamless communication for the survivors, and timely help increases the survival rate. The received signal strength is a vital factor for communication, and cognitive radio ensures effective channel utilization, which results in better received signal strength. Frequency band selection was carried out with the GRA-MADM method. In this study, the signal strength of different mobile sensor nodes was analyzed, and the FPGA-based implementation showed enhanced outcomes compared to software-based algorithms.
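GRA-MADM here presumably denotes grey relational analysis used for multi-attribute decision making. A generic sketch of GRA-based ranking of candidate frequency bands over benefit-type attributes follows; the attribute choice and the distinguishing coefficient ρ = 0.5 are conventional defaults assumed for illustration, not the paper's configuration.

```python
import numpy as np

def gra_rank(scores, rho=0.5):
    """Grey relational grade per candidate band.
    scores: rows = bands, columns = benefit attributes (e.g. signal strength, idle time)."""
    X = np.asarray(scores, float)
    norm = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # benefit-type normalisation
    delta = np.abs(1.0 - norm)                              # distance to the ideal band
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)                               # grade per band; pick argmax
```

The band with the highest grade is the one whose attribute profile is closest to the ideal, and would be selected for transmission.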

Analysis of Plant Height, Crop Cover, and Biomass of Forage Maize Grown on Reclaimed Land Using Unmanned Aerial Vehicle Technology

  • Lee, Dongho;Go, Seunghwan;Park, Jonghwa
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.47-63
    • /
    • 2023
  • Unmanned aerial vehicle (UAV) and sensor technologies are developing rapidly and are being put to practical use in spatial-information-based agricultural management and smart agriculture. Until now, it has been difficult to obtain production information in a timely manner for large-scale agriculture on reclaimed land. However, smart agriculture that utilizes sensors, information technology, and UAV technology, and that can efficiently manage a large amount of farmland with a small number of people, is expected to become common in the near future. In this study, we evaluated the productivity of forage maize grown on reclaimed land using UAV- and sensor-based technologies, comparing the plant height, vegetation cover ratio, fresh biomass, and dry biomass of maize grown on general farmland and on reclaimed land in South Korea. A biomass model was constructed based on plant height, cover ratio, and volume-based biomass using UAV-based images and Farm-Map, and related estimates were obtained. The fresh biomass was estimated with a very precise model (R² = 0.97, root mean square error [RMSE] = 3.18 t/ha, normalized RMSE [nRMSE] = 8.08%). The estimated dry biomass had a coefficient of determination of 0.86, an RMSE of 1.51 t/ha, and an nRMSE of 12.61%. The average plant height per field lot was about 0.91 m on reclaimed land and about 1.89 m on general farmland, i.e., the reclaimed-land value was about 48% of the general-farmland value. The average proportion of the maize fraction per field lot was approximately 65% on reclaimed land and 94% on general farmland, a difference of about 29 percentage points. The average fresh biomass per reclaimed-land field lot was 10 t/ha, approximately 36% of that of general farmland (28.1 t/ha). The average dry biomass per field lot was about 4.22 t/ha on reclaimed land and about 8 t/ha on general farmland, the reclaimed land yielding approximately 53% of the general farmland's dry biomass. 
Based on these results, UAV- and sensor-based images were confirmed to allow accurate analysis of agricultural information and crop growth conditions over a large area. The technology and methods used in this study are expected to be useful for implementing smart field agriculture in large reclaimed areas.
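The accuracy figures quoted above (R², RMSE, nRMSE) can be reproduced from observed and predicted biomass with a few lines; note that nRMSE is sometimes normalised by the observed mean rather than the observed range, and the range-normalised form below is an assumption about this paper's convention.

```python
import numpy as np

def regression_metrics(obs, pred):
    """R², RMSE, and range-normalised RMSE (%) of a biomass model's predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    ss_res = (resid ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt((resid ** 2).mean())
    nrmse = 100.0 * rmse / (obs.max() - obs.min())
    return r2, rmse, nrmse
```

Applied to per-lot observed versus model-estimated fresh biomass, these three numbers summarise how tightly the UAV-derived model tracks the field measurements.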