• Title/Summary/Keyword: Combined Multi-sensor

APPLICATION OF MERGED MICROWAVE GEOPHYSICAL OCEAN PRODUCTS TO CLIMATE RESEARCH AND NEAR-REAL-TIME ANALYSIS

  • Wentz, Frank J.;Kim, Seung-Bum;Smith, Deborah K.;Gentemann, Chelle
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.150-152
    • /
    • 2006
  • The DISCOVER Project (Distributed Information Services for Climate and Ocean products and Visualizations for Earth Research) is a NASA funded Earth Science REASoN project that strives to provide highly accurate, carefully calibrated, long-term climate data records and near-real-time ocean products suitable for the most demanding Earth research applications via easy-to-use display and data access tools. A key element of DISCOVER is the merging of data from the multiple sensors on multiple platforms into geophysical data sets consistent in both time and space. The project is a follow-on to the SSM/I Pathfinder and Passive Microwave ESIP projects which pioneered the simultaneous retrieval of sea surface temperature, surface wind speed, columnar water vapor, cloud liquid water content, and rain rate from SSM/I and TMI observations. The ocean products available through DISCOVER are derived from multi-sensor observations combined into daily products and a consistent multi-decadal climate time series. The DISCOVER team has a strong track record in identifying and removing unexpected sources of systematic error in radiometric measurements, including misspecification of SSM/I pointing geometry, the slightly emissive TMI antenna, and problems with the hot calibration source on AMSR-E. This in-depth experience with inter-calibration is absolutely essential for achieving our objective of merging multi-sensor observations into consistent data sets. Extreme care in satellite inter-calibration and commonality of geophysical algorithms is applied to all sensors. This presentation will introduce the DISCOVER products currently available from the web site, http://www.discover-earth.org and provide examples of the scientific application of both the diurnally corrected optimally interpolated global sea surface temperature product and the 4x-daily global microwave water vapor product.

MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang;Xue Sun;Yongpeng Yang;Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.7
    • /
    • pp.1706-1725
    • /
    • 2024
  • The unique U-shaped structure of the U-Net network enables it to achieve good performance in image segmentation. It is a lightweight network with a small number of parameters, suited to small image segmentation datasets. However, when the medical image to be segmented contains a lot of detailed information, the segmentation results cannot fully meet the actual requirements. In order to achieve higher accuracy in medical image segmentation, a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+) is proposed in this paper. We incorporate GoogLeNet into the encoder of the proposed MEDU-Net+ to capture more information, and present multi-scale feature extraction to fuse semantic information of different scales in the encoder and decoder. Meanwhile, we also introduce layer-by-layer skip connections to link the information of each layer, so that the last layer does not need to be encoded and its information returned. The proposed MEDU-Net+ divides the network of unknown depth into separate deconvolution stages, replacing the direct connection between the encoder and decoder in U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ and other classic medical image segmentation networks on three medical image datasets. The experimental results show that the proposed MEDU-Net+ achieves clearly superior performance compared with the other medical image segmentation networks.
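
Below is a rough numpy sketch of a combined segmentation loss of the kind described in the abstract above, mixing a generalized Dice term with a focal term. The weighting factor alpha, the focal exponent gamma, and the (batch, classes, H, W) tensor layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def generalized_dice_loss(pred, target, eps=1e-7):
    """Generalized Dice loss; pred/target are (batch, classes, H, W), target one-hot."""
    w = 1.0 / (np.sum(target, axis=(0, 2, 3)) ** 2 + eps)   # per-class weights
    intersection = np.sum(w * np.sum(pred * target, axis=(0, 2, 3)))
    denominator = np.sum(w * np.sum(pred + target, axis=(0, 2, 3)))
    return 1.0 - 2.0 * intersection / (denominator + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Multi-class focal loss; down-weights easy pixels to emphasize hard edges."""
    pred = np.clip(pred, eps, 1.0)
    p_t = np.sum(pred * target, axis=1)    # probability assigned to the true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

def combined_loss(pred, target, alpha=0.5, gamma=2.0):
    """Weighted sum of the two terms; alpha is an assumed hyperparameter."""
    return (alpha * generalized_dice_loss(pred, target)
            + (1.0 - alpha) * focal_loss(pred, target, gamma=gamma))
```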

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.365-371
    • /
    • 2009
  • An intelligent hexapod mobile robot equipped with various types of sensors and a wireless camera is introduced. We show that this robot can detect objects well by combining the results of active sensors and an image processing algorithm. First, to detect objects, active sensors such as infrared and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensor outputs. The difference between the measured and calculated values is less than 5%. This paper also suggests an effective visual detection system for moving objects with specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object exists. Weighting values are assigned to the results from the sensors and the camera, and the final results are combined into a single value that represents the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over the technique using an individual sensor.
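
The weighted fusion step described in the abstract above can be sketched roughly as follows; the per-sensor weights, the detection probabilities, and the 0.5 decision threshold are hypothetical values, not those used in the paper.

```python
# Each detector (infrared, ultrasonic, camera) reports a detection probability,
# and assumed weights express how much each source is trusted.

def fuse_detections(probs, weights):
    """Combine per-sensor detection probabilities into one fused probability."""
    if len(probs) != len(weights):
        raise ValueError("one weight per sensor is required")
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Example: infrared, ultrasonic, and camera-based detections (hypothetical values).
sensor_probs = [0.8, 0.6, 0.9]
sensor_weights = [0.3, 0.2, 0.5]

fused = fuse_detections(sensor_probs, sensor_weights)
print(f"fused detection probability: {fused:.2f}")
print("object detected" if fused > 0.5 else "no object")
```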

A Low-Cost Lidar Sensor based Glass Feature Extraction Method for an Accurate Map Representation using Statistical Moments (통계적 모멘트를 이용한 정확한 환경 지도 표현을 위한 저가 라이다 센서 기반 유리 특징점 추출 기법)

  • An, Ye Chan;Lee, Seung Hwan
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.103-111
    • /
    • 2021
  • This study addresses a low-cost lidar sensor-based glass feature extraction method for an accurate map representation using statistical moments, i.e., the mean and variance. Since the low-cost lidar sensor produces range-only data without intensity or multi-echo data, detecting glass-like objects is difficult. In this study, the principle that the incidence angle of a ray emitted from the lidar with respect to a glass surface must be close to zero degrees is exploited for glass detection. In addition, all sensor data are preprocessed and clustered, and each cluster is represented by its statistical moments as a glass feature candidate. Glass features are selected from the candidates according to several conditions based on this principle and on geometric relations in the global coordinate system. The accumulated glass features are classified according to distance and finally represented on the map. Several experiments were conducted in glass environments. The results showed that the proposed method accurately extracted and represented glass windows using proper parameters. The parameters were empirically designed and carefully analyzed. In future work, we will combine conventional SLAM algorithms with our glass feature extraction method and evaluate them in glass environments.
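
A minimal sketch of representing clustered range data by its statistical moments, in the spirit of the method above; the clustering gap, the variance bound for keeping a cluster as a glass-feature candidate, and the toy scan are assumed values, not the paper's parameters.

```python
import numpy as np

def cluster_scan(ranges, gap=0.2):
    """Split consecutive lidar ranges into clusters wherever the jump exceeds `gap` (m)."""
    clusters, current = [], [ranges[0]]
    for r in ranges[1:]:
        if abs(r - current[-1]) > gap:
            clusters.append(np.array(current))
            current = []
        current.append(r)
    clusters.append(np.array(current))
    return clusters

def moment_features(clusters, max_var=0.01):
    """Represent each cluster by (mean, variance, size); keep low-variance clusters as candidates."""
    candidates = []
    for c in clusters:
        mean, var = float(np.mean(c)), float(np.var(c))
        if var < max_var:          # flat, consistent returns -> possible glass surface
            candidates.append((mean, var, len(c)))
    return candidates

scan = np.array([2.01, 2.02, 2.02, 2.03, 3.50, 3.48, 3.52])  # toy scan (meters)
print(moment_features(cluster_scan(scan)))
```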

Visual and Quantitative Analysis of Different Tastes in liquids with Fuzzy C-means and Principal Component Analysis Using Electronic Tongue System

  • Kim, Joeng-Do;Kim, Dong-Jin;Byun, Hyung-Gi;Ham, Yu-Kyung;Jung, Woo-Suk;Choo, Dae-Won
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.133-137
    • /
    • 2005
  • In this paper, we investigate the visual and quantitative analysis of different tastes in liquids using a multi-array chemical sensor (MACS) based on ion-selective electrodes (ISEs), the so-called electronic tongue (E-Tongue) system. We apply the Fuzzy C-means (FCM) algorithm combined with Principal Component Analysis (PCA), which reduces multi-dimensional data to two or three dimensions, to visually classify the data patterns detected by the E-Tongue system. The proposed technique determines the cluster centers and the membership grades of patterns in an unsupervised way. The membership grade of an unknown pattern, one not shown previously, can be determined both visually and analytically. Throughout the experimental trials, the E-Tongue system combined with the proposed algorithms demonstrated robust performance in the visual and quantitative analysis of different tastes in liquids.
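
The PCA-plus-FCM pipeline described above can be sketched roughly as follows: project multi-electrode measurements onto two principal components, then cluster the scores with a basic Fuzzy C-means loop. The number of clusters, the fuzzifier m, and the synthetic data are assumptions for illustration.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project data onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Basic Fuzzy C-means: returns cluster centers and membership grades."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

# Synthetic "taste" measurements from a hypothetical 8-electrode array.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.1, size=(30, 8)) for loc in (0.0, 1.0, 2.0)])

Z = pca_project(X, n_components=2)      # 2-D scores for visual inspection
centers, U = fuzzy_c_means(Z, c=3)
print("cluster centers:\n", centers)
print("membership of first sample:", np.round(U[0], 3))
```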

A Fusion Algorithm considering Error Characteristics of the Multi-Sensor (다중센서 오차특성을 고려한 융합 알고리즘)

  • Hyun, Dae-Hwan;Yoon, Hee-Byung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.274-282
    • /
    • 2009
  • Various location tracking sensors, such as GPS, INS, radar, and optical equipment, are used for tracking moving targets. In order to track moving targets effectively, it is necessary to develop an effective fusion method for these heterogeneous devices. To improve tracking performance with heterogeneous multi-sensors, previous studies have treated the estimated values of each sensor as different models and fused them while considering the different error characteristics of the sensors. However, when the errors of one sensor increase sharply, the errors in the estimated values derived from the other sensors also increase, and the approach of changing the estimated sensor values according to the Sensor Probability could not be applied in real time. In this study, the Sensor Probability is obtained by comparing the RMSE (Root Mean Square Error) of the difference between the updated and measured values of each sensor's Kalman filter. The process of substituting new combined values into the Kalman filter inputs of each sensor is excluded. This yields improvements both in the real-time application of the estimated sensor values and in the tracking performance in regions where a sensor's performance has rapidly decreased. The proposed algorithm adds the error characteristic of each sensor as a conditional probability value and ensures greater accuracy by performing track fusion with the most reliable sensors. In the experiments, the trajectory of a UAV is generated and a performance analysis is conducted against other fusion algorithms.
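
A minimal sketch of the idea described above: derive a per-sensor reliability (the "Sensor Probability") from the RMSE between each Kalman filter's updated estimates and its measurements, then fuse the tracks weighted by that reliability. The inverse-RMSE weighting and the toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def sensor_probability(updated, measured):
    """Inverse-RMSE weights normalized to sum to 1 (higher RMSE -> lower weight)."""
    rmse = np.array([np.sqrt(np.mean((u - m) ** 2))
                     for u, m in zip(updated, measured)])
    inv = 1.0 / (rmse + 1e-9)
    return inv / inv.sum()

def fuse_tracks(estimates, weights):
    """Weighted combination of per-sensor state estimates (same shape each)."""
    return np.tensordot(weights, np.stack(estimates), axes=1)

# Toy example: GPS-like and radar-like position estimates of one target.
t = np.linspace(0, 10, 50)
truth = np.stack([t, 0.5 * t], axis=1)
gps_meas = truth + np.random.default_rng(0).normal(0, 0.5, truth.shape)
radar_meas = truth + np.random.default_rng(1).normal(0, 2.0, truth.shape)
# Crude stand-ins for each sensor's Kalman-updated estimates.
gps_upd = 0.9 * gps_meas + 0.1 * truth
radar_upd = 0.9 * radar_meas + 0.1 * truth

w = sensor_probability([gps_upd, radar_upd], [gps_meas, radar_meas])
fused = fuse_tracks([gps_upd, radar_upd], w)
print("sensor weights:", np.round(w, 3))
```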

Event-Based Middleware for Healthcare Applications

  • Kamal, Rossi;Tran, Nguyen H.;Hong, Choong-Seon
    • Journal of Communications and Networks
    • /
    • v.14 no.3
    • /
    • pp.296-309
    • /
    • 2012
  • In existing middleware for body sensor networks, energy limitations, hardware heterogeneity, increases in node temperature, and the absence of software reusability are major problems. In this paper, we propose an event-based grid middleware component that solves these problems using distributed resources in in vivo sensor nodes. For multi-hop communication, we use a lightweight rendezvous routing algorithm in a publish/subscribe system of event-based communication. To facilitate software reuse and application development, a modified Open Services Gateway initiative (OSGi) framework has been implemented in our middleware architecture. We evaluated our grid middleware in a cancer treatment scenario with combined hyperthermia, chemotherapy, and radiotherapy procedures, using in vivo sensors.
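
As a minimal illustration of the event-based publish/subscribe pattern the middleware above builds on, the sketch below implements a plain in-process broker; it is not the paper's rendezvous routing over a body sensor network, and the topic and event fields are hypothetical.

```python
from collections import defaultdict

class EventBroker:
    """Tiny in-process publish/subscribe broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callback for events published on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver `event` to every handler subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
broker.subscribe("temperature", lambda e: print("node temperature event:", e))
broker.publish("temperature", {"node": 3, "celsius": 39.2})
```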

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capabilities has advantages in terms of complementarity and cooperation for obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on the probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines from the input camera images, which are used as natural landmark points in the self-localization process. In the case of the laser structured light sensor, geometric features composed of corners and planes, extracted from range data at a constant height above the navigation floor, are used as natural landmark shapes. Although each feature group on its own is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
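
A minimal sketch of Bayesian fusion over a discrete set of pose hypotheses, in the spirit of the technique above: each sensor contributes a likelihood blended toward uniform according to an assumed reliability value, and the posterior is the normalized product with the prior. The reliability numbers and likelihoods are illustrative, not the paper's calibrated reliability functions.

```python
import numpy as np

def bayes_fuse(prior, likelihoods, reliabilities):
    """Fuse sensor likelihoods into a posterior over pose hypotheses."""
    posterior = prior.astype(float).copy()
    for lik, rel in zip(likelihoods, reliabilities):
        # blend each likelihood toward uniform according to sensor reliability
        blended = rel * lik + (1.0 - rel) * np.full_like(lik, 1.0 / len(lik))
        posterior *= blended
    return posterior / posterior.sum()

prior = np.full(5, 0.2)                                 # 5 candidate poses
vision_lik = np.array([0.10, 0.20, 0.45, 0.20, 0.05])   # vertical-edge matching
laser_lik = np.array([0.05, 0.15, 0.55, 0.20, 0.05])    # corner/plane matching
posterior = bayes_fuse(prior, [vision_lik, laser_lik], reliabilities=[0.7, 0.9])
print("posterior over poses:", np.round(posterior, 3))
```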

Simple Bluetooth Wireless Multi-gas Measurement System (간단한 블루투스 무선다중가스센서 계측시스템)

  • Kim, Chul min;Kim, Doyoon;Kim, Yeonsu;Kim, Gyu-tae
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.2
    • /
    • pp.51-54
    • /
    • 2020
  • To develop a gas-distinguishing sensor system, it is essential to integrate multiple sensors for the effective detection of a single target gas or a mixture of gases. In addition, it is important to collect reliable data from the individual sensors in one integrated measuring device, and data on toxic gases should be collected on the spot without inhalation. We suggest a simple wireless data collection system that guarantees both the reliability of the data sources and safety. Here, we built a multi-gas measuring instrument combined with a Bluetooth module, which provides a safe and precise big data accumulation system.
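
A minimal sketch of collecting readings over a Bluetooth serial link with pyserial, in the spirit of the system above; the port name, baud rate, and the comma-separated "sensor_id,value" line format are assumptions, not the paper's protocol.

```python
import serial  # pyserial

PORT = "/dev/rfcomm0"   # hypothetical Bluetooth serial port (e.g. COM5 on Windows)
BAUD = 9600

def collect(n_lines=100):
    """Read up to n_lines readings of the assumed "sensor_id,value" format."""
    readings = []
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        for _ in range(n_lines):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                      # timeout or empty line
            sensor_id, value = line.split(",")
            readings.append((sensor_id, float(value)))
    return readings

if __name__ == "__main__":
    for sensor_id, value in collect(10):
        print(sensor_id, value)
```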

A Research of LEACH Protocol improved Mobility and Connectivity on WSN using Feature of AOMDV and Vibration Sensor (AOMDV의 특성과 진동 센서를 적용한 이동성과 연결성이 개선된 WSN용 LEACH 프로토콜 연구)

  • Lee, Yang-Min;Won, Joon-We;Cha, Mi-Yang;Lee, Jae-Kee
    • The KIPS Transactions:PartC
    • /
    • v.18C no.3
    • /
    • pp.167-178
    • /
    • 2011
  • With the growth of ubiquitous services, various types of ad hoc networks have emerged. In particular, wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are the most widely known ad hoc networks, but there are also wireless ad hoc networks in which the characteristics of these two network types are mixed. This paper proposes a variant of the Low-Energy Adaptive Clustering Hierarchy (LEACH) routing protocol modified to suit such a combined network environment. That is, the proposed routing protocol provides node detection and route discovery/maintenance in a network with a large number of mobile sensor nodes, while preserving node mobility, network connectivity, and energy efficiency. The proposed routing protocol is implemented with a multi-hop multi-path algorithm, a topology reconfiguration technique using node movement estimation and vibration sensors, and an efficient path selection and data transmission technique for a large number of moving nodes. In the experiments, the performance of the proposed protocol is demonstrated by comparing it to the conventional LEACH protocol.
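
For reference, the cluster-head election of the conventional LEACH protocol mentioned above can be sketched as follows: in each round r, a node becomes a cluster head if a random draw falls below the threshold T(n) = p / (1 - p (r mod 1/p)) and it has not already served as a head in the current cycle. The node count and head fraction p are illustrative values, not parameters from the paper.

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head threshold for desired head fraction p in round r."""
    return p / (1.0 - p * (r % int(1.0 / p)))

def elect_heads(node_ids, eligible, p, r, rng=random):
    """Return the set of nodes elected as cluster heads this round."""
    heads = set()
    for node in node_ids:
        if node in eligible and rng.random() < leach_threshold(p, r):
            heads.add(node)
    return heads

nodes = list(range(20))          # 20 sensor nodes
eligible = set(nodes)            # nodes that have not been heads this cycle
p = 0.1                          # desired fraction of cluster heads
for r in range(3):
    heads = elect_heads(nodes, eligible, p, r)
    eligible -= heads            # a head cannot be re-elected until the cycle ends
    print(f"round {r}: cluster heads = {sorted(heads)}")
```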