• Title/Summary/Keyword: Sensor Data Process


Filling Analysis for Missing Turbidity Data in Han River Estuary (한강 하구부에서 결측된 탁도 자료의 보완)

  • Baek, Kyong-Oh; Cho, Hong-Yeon; Lee, Sam-Hee
    • Journal of Korea Water Resources Association / v.39 no.4 s.165 / pp.289-298 / 2006
  • Turbidity was measured for five months at three sites in the Han River estuary. During this period, missing data occurred due to the gauge limitation of the turbidity sensor. A method for filling the missing turbidity data was newly developed in this study. Under the assumption that the time series has a unique period but different amplitudes from cycle to cycle, the new method fills the missing data based on the area ratio of each cycle. The method was verified against a data set with no missing values: although peak values were underestimated, there was little difference between the gross area of the original data and that of the data revised by the new method. As a result, the missing turbidity data observed at the Han River estuary could be appropriately filled using the new method.
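The area-ratio idea can be sketched as follows: assuming the series is organized into cycles of a common period, a gap is filled from a gap-free reference cycle scaled so that the areas of the two cycles agree over the commonly observed samples. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def fill_missing_cycle(cycle_with_gaps, reference_cycle):
    """Fill gaps (NaN) in one cycle by scaling a gap-free reference cycle
    of the same period so that their areas match over the samples observed
    in both (a hypothetical sketch of the area-ratio idea)."""
    cycle = np.asarray(cycle_with_gaps, dtype=float)
    ref = np.asarray(reference_cycle, dtype=float)
    observed = ~np.isnan(cycle)
    # Area ratio approximated by a sum over the commonly observed samples
    ratio = cycle[observed].sum() / ref[observed].sum()
    filled = cycle.copy()
    filled[~observed] = ref[~observed] * ratio
    return filled
```

Consistent with the abstract, this scheme preserves the gross area of each cycle but cannot recover peaks sharper than those in the reference cycle.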

Spatial Data Model of Feature-based Digital Map using UFID (UFID를 이용한 객체기반 수치지도 공간 데이터 모델)

  • Kim, Hyeong-Soo; Kim, Sang-Yeob; Lee, Yang-Koo; Seo, Sung-Bo; Park, Ki-Surk; Ryu, Keun-Ho
    • Journal of Korea Spatial Information System Society / v.11 no.1 / pp.71-78 / 2009
  • Demand for spatial data management has rapidly increased with the introduction and diffusion of ITS, telematics, and wireless sensor networks, and many different users rely on digital maps that offer various thematic spatial data. Spatial data for a digital map can be managed on a tile basis or a feature basis. Existing tile-based digital map management systems suffer from problems in data construction, history management, and updating data for an individual spatial object. To solve these problems, we proposed a data model for a feature-based digital map management system supporting a feature-based seamless map, history management, and real-time update of spatial data, and analyzed the validity and utility of the proposed model.
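The contrast with tile-based management can be illustrated with a minimal sketch: keying storage by UFID lets a single spatial object be updated, with its history retained, without rewriting a whole tile. All names here are illustrative and not the paper's actual model.

```python
# Minimal illustration (not the paper's actual model): storing features
# keyed by UFID allows per-object update and history management.
features = {}

def upsert_feature(ufid, geometry):
    """Insert or update one feature; prior geometries go to its history."""
    prev = features.get(ufid)
    history = prev["history"] + [prev["geometry"]] if prev else []
    features[ufid] = {"geometry": geometry, "history": history}
```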


A Novel Way of Context-Oriented Data Stream Segmentation using Exon-Intron Theory (Exon-Intron이론을 활용한 상황중심 데이터 스트림 분할 방안)

  • Lee, Seung-Hun; Suh, Dong-Hyok
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.5 / pp.799-806 / 2021
  • In the IoT environment, event data from sensors is continuously reported over time. Event data obtained in this way accumulates indefinitely, so a method for efficient analysis and management of the data is required. In this study, a data stream segmentation method was proposed to support the effective selection and utilization of event data from sensors that is continuously reported and received. An identifier marking the point at which to start the analysis process was selected. By introducing these identifiers, it becomes clear what is being analyzed, and data throughput can be reduced. The identifier-based stream segmentation proposed in this study is a semantics-oriented segmentation method based on the event occurrences of each stream. Identifiers in stream processing are useful for providing efficiency and reducing cost in an environment with a large, continuous inflow of data.
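A minimal sketch of identifier-driven segmentation, assuming a predicate that recognizes identifier events (names and the predicate are illustrative, not from the paper):

```python
def segment_stream(events, is_identifier):
    """Split a continuous event stream into segments, starting a new
    segment whenever an identifier event is seen (hypothetical sketch)."""
    segment = []
    for event in events:
        if is_identifier(event) and segment:
            yield segment          # emit the completed segment
            segment = []
        segment.append(event)
    if segment:
        yield segment              # flush the trailing segment
```

Downstream analysis then operates on one segment at a time instead of the unbounded stream, which is the throughput-reduction point the abstract makes.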

A Study on the Optimization and Bridge Seismic Response Test of CAFB Using El-centro Seismic Waveforms (El-centro 지진파형을 이용한 CAFB의 최적화 및 교량 지진응답실험에 관한 연구)

  • Heo, Gwang Hee; Lee, Chin Ok; Seo, Sang Gu; Park, Jin Yong; Jeon, Joon Ryong
    • Journal of the Earthquake Engineering Society of Korea / v.24 no.2 / pp.67-76 / 2020
  • This study aims to optimize the cochlea-inspired artificial filter bank (CAFB) using El-Centro seismic waveforms and to test its performance through a shaking table test on a two-span bridge model. El-Centro seismic waveforms were used in optimizing the CAFB in order to evaluate how they affect the optimization process. Next, the optimized CAFB was embedded in the developed wireless-based intelligent data acquisition (IDAQ) system to enable response measurement in real time. To evaluate its performance in obtaining a seismic response in real time, a two-span bridge model was installed on a large shaking table and a seismic response experiment was carried out with El-Centro seismic waveforms. The optimized CAFB obtained the seismic response in real time by compressing it with the embedded wireless-based IDAQ system, and the compressed signals were compared with the original (uncompressed) signal. The results showed that the compressed signals were superior to the raw signal in response performance as well as in data compression effect, and proved that the CAFB can compress response signals effectively in real time even under seismic conditions. Therefore, this paper established that the CAFB embedded in the wireless-based IDAQ system is an economical and efficient data-compression sensing technology for measuring and monitoring the seismic response of structures in real time over wireless sensor networks (WSNs).
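The CAFB itself is a cochlea-inspired filter bank and is not specified in this abstract; as a loose, generic analogue of compressing a response signal by keeping only its dominant spectral content, one might sketch:

```python
import numpy as np

def compress_topk(signal, k=8):
    """Keep only the k largest-magnitude FFT coefficients and zero the
    rest. This is a generic spectral-compression toy, not the CAFB:
    it only illustrates the trade of fidelity for data volume."""
    spec = np.fft.rfft(signal)
    mask = np.zeros_like(spec)
    keep = np.argsort(np.abs(spec))[-k:]   # indices of dominant bins
    mask[keep] = spec[keep]
    return np.fft.irfft(mask, n=len(signal))
```

A narrowband response (such as a structure ringing near its natural frequencies) survives this kind of compression almost unchanged, which is why filter-bank compression suits seismic response monitoring.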

A study on the efficient early warning method using complex event processing (CEP) technique (복합 이벤트 처리기술을 적용한 효율적 재해경보 전파에 관한 연구)

  • Kim, Hyung-Woo; Kim, Goo-Soo; Chang, Sung-Bong
    • Conference Proceedings of the Korea Institute of Information and Communication Facilities Engineering / 2009.08a / pp.157-161 / 2009
  • In recent years, there has been remarkable progress in ICTs (Information and Communication Technologies), and many attempts are being made to apply ICTs to other industries. In the field of disaster management, ICTs such as RFID (Radio Frequency IDentification) and USN (Ubiquitous Sensor Network) are used to provide safe environments. Various types of early warning systems using USN are now widely used to monitor natural disasters such as floods, landslides, and earthquakes, and to detect human-caused disasters such as fires, explosions, and collapses. These early warning systems issue alarms rapidly when a disaster is detected or an event exceeds prescribed thresholds, and deliver alarm messages to disaster managers and citizens. In general, these systems consist of a number of sensors and measure real-time stream data, which requires an efficient and rapid data-processing technique. In this study, an event-driven architecture (EDA) is presented to collect events effectively and to provide alerts rapidly. A publish/subscribe event processing method for simple events is introduced. Additionally, a complex event processing (CEP) technique is introduced to process complex data from various sensors and to provide prompt and reasonable decision support when many disasters happen simultaneously. A basic concept of the CEP technique is presented, and its advantages in disaster management are discussed. Then, how the main processing methods of CEP, such as aggregation, correlation, and filtering, can be applied to disaster management is considered. Finally, an example of a flood forecasting and early alarm system incorporating CEP is presented. It is found that CEP based on the EDA will provide an efficient early warning method when a disaster happens.
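A threshold rule combined with windowed aggregation, the simplest CEP-style building block mentioned above, might look like the following toy. The window size and threshold are illustrative values, not the paper's.

```python
from collections import deque

def flood_alarm(readings, window=3, threshold=50.0):
    """Toy complex-event rule: report the time indices at which the
    moving average of the last `window` water-level readings exceeds
    `threshold` (aggregation + filtering in CEP terms)."""
    recent = deque(maxlen=window)  # sliding aggregation window
    alarms = []
    for t, level in enumerate(readings):
        recent.append(level)
        if len(recent) == window and sum(recent) / window > threshold:
            alarms.append(t)
    return alarms
```

Averaging before thresholding filters out single-sample spikes, which is one reason CEP engines aggregate events rather than alarm on each raw reading.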


A Design and Implementation Vessel USN Middleware of Server-Side Method based on Context Aware (Server-Side 방식의 상황 인식 기반 선박 USN 미들웨어 구현 및 설계)

  • Song, Byoung-Ho; Song, Iick-Ho; Kim, Jong-Hwa; Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.116-124 / 2011
  • In this paper, we implemented a vessel USN middleware using a server-side method that considers the characteristics of the ocean environment. We designed a multiple-query processing module to efficiently process multidimensional sensor stream data and proposed an optimized query plan using an Mjoin query and a hash table. We proposed a method for vessel context awareness and management that considers the characteristics of the ocean, determining risk contexts with an SVM algorithm in the context-awareness management module. As a result, with 5,000 input data sets we obtained about 87.5% average accuracy for fire cases and about 85.1% average accuracy for vessel risk cases, and implemented a vessel USN monitoring system.

Characterization of Acousto-ultrasonic Signals for Stamping Tool Wear (프레스 금형 마모에 대한 음-초음파 신호 특성 분석)

  • Kim, Yong-Yun
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.19 no.4 / pp.386-392 / 2009
  • This paper reports on research investigating acoustic signals acquired in a progressive process of compressing, hole blanking, shearing, and burr compacting. The work piece is the head pin of an electric connector, whose raw material is a preformed steel bar. An acoustic sensor was set on the bed of the hydraulic press. Because the acquired signals include the dynamic characteristics generated by all the processes, the signal characteristics corresponding to each unit process must be investigated. The dynamic characteristics of each process were first studied by analyzing signals acquired separately from the compressing, blanking, and compacting processes. The combined signals were then analyzed periodically, from one grinding to the next, in the sound-frequency domain and in the ultrasonic domain. A frequency of around 9 kHz in the sound-frequency domain correlated strongly with tool wear. The characteristic frequencies in the acoustic emission domain between 100 kHz and 500 kHz were clearly observed right after tool grinding, and their amplitudes were also related to wear. The frequency amplitudes at 160 kHz and 320 kHz were large enough to be distinguished from the noise. The noise amplitudes grew, and their energy became much larger, as the next regrinding approached. The signal analysis was based on the real-time data and its frequency spectrum obtained by Fourier transform. As a result, the acousto-ultrasonic signals were closely related to tool wear progression.

Optimization of coagulant dosing process in water purification system using neural network (신경회로망을 이용한 상수처리시스템의 응집제 주입공정 최적화)

  • Nam, Ui-Seok; Park, Jong-Jin; Jang, Seok-Ho; Cha, Sang-Yeop; U, Gwang-Bang; Lee, Bong-Guk; Han, Tae-Hwan; Go, Taek-Beom
    • Journal of Institute of Control, Robotics and Systems / v.3 no.6 / pp.644-651 / 1997
  • In a water purification plant, chemicals are injected for quick purification of raw water. The amount of chemicals intrinsically depends on water quality parameters such as turbidity, temperature, pH, and alkalinity. However, the process by which the chemicals improve water quality (e.g., turbidity) has not yet been fully clarified or quantified. The feedback signal in the coagulant dosing process, which would have to be measured through a plant sensor to compute the appropriate amount of chemicals, is also not available. Most traditional methods rely on field experts judging the condition of the purifying reaction and determining the amounts of chemicals manually using jar-test data. In this paper, a systematic control strategy is proposed to derive the optimum dosage of the coagulant PAC (Polymerized Aluminium Chloride) using jar-test results. A neural network model of the coagulant dosing and purifying process is developed with six input variables (turbidity, temperature, pH, and alkalinity of raw water; PAC feed rate; turbidity in flocculation) and one output variable, while considering the relationships in the coagulation and flocculation reactions. The model is used to derive the optimum coagulant dosage in the sense of minimizing the turbidity of water in the flocculator. The proposed control scheme was validated through a field test and proved to be of considerable practical value.
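The six-input, one-output structure described above can be sketched as a small feed-forward network. The hidden-layer size is an assumption, and the weights below are random placeholders rather than values trained on jar-test data.

```python
import numpy as np

# Hypothetical shapes: 6 inputs (turbidity, temperature, pH, alkalinity,
# PAC feed rate, flocculation turbidity) -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict_dosage(x):
    """One forward pass of a small MLP like the one described; the
    weights are untrained placeholders, so the output is not a real
    dosage recommendation."""
    h = np.tanh(np.asarray(x, dtype=float) @ W1 + b1)
    return (h @ W2 + b2).item()
```

In the paper's setting such a model, once trained, is inverted or searched over the PAC feed rate to find the dosage minimizing predicted flocculator turbidity.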


An Energy-Efficient Clustering Using Load-Balancing of Cluster Head in Wireless Sensor Network (센서 네트워크에서 클러스터 헤드의 load-balancing을 통한 에너지 효율적인 클러스터링)

  • Nam, Do-Hyun; Min, Hong-Ki
    • The KIPS Transactions: Part C / v.14C no.3 s.113 / pp.277-284 / 2007
  • The routing algorithms widely used in wireless sensor networks feature clustering to reduce the amount of data transmission from an energy-efficiency perspective. However, clustering causes high energy consumption at the cluster head node. Dynamic clustering addresses this problem by distributing energy consumption through re-selection of the cluster head node. Still, dynamic clustering modifies the cluster structure every time the head node is re-selected, which itself consumes energy. In other words, the dynamic clustering approaches examined in previous studies involve repeated cluster head selection, which consumes a great deal of energy during the set-up process of cluster generation. To resolve the energy consumption associated with this repetitive set-up, this paper proposes the Round-Robin Cluster Header (RRCH) method, which fixes the cluster and selects the head node in round-robin order. RRCH is an energy-efficient method that achieves consistent, balanced energy consumption at each node of a generated cluster and avoids the repetitious set-up processes of the LEACH method. The propriety of the proposed method is substantiated with a simulation experiment.
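The head-rotation schedule at the core of the RRCH idea is simple to sketch: the cluster membership stays fixed, and the head role cycles through the members. Function and node names are illustrative.

```python
def round_robin_heads(cluster_nodes, rounds):
    """Rotate the cluster-head role through a fixed cluster in
    round-robin order (sketch of the RRCH idea in the abstract)."""
    n = len(cluster_nodes)
    return [cluster_nodes[r % n] for r in range(rounds)]
```

Because the schedule is deterministic, every node serves as head equally often and no per-round set-up messages are needed, which is the energy saving over re-clustering schemes such as LEACH.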

A UHF-band Passive Temperature Sensor Tag Chip Fabricated in a 0.18-μm CMOS Process (0.18-μm CMOS 공정으로 제작된 UHF 대역 수동형 온도 센서 태그 칩)

  • Pham, Duy-Dong; Hwang, Sang-Kyun; Chung, Jin-Yong; Lee, Jong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.10 / pp.45-52 / 2008
  • We investigated the design of an RF-powered, wireless temperature sensor tag chip using 0.18-μm CMOS technology. The transponder generates its own power supply from a small incident RF signal using Schottky diodes in a voltage multiplier. Ambient temperature is measured using a new low-power temperature-to-voltage converter, and an 8-bit single-slope ADC converts the measured voltage to digital data. An ASK demodulator and digital control are combined to identify the unique transponder ID sent by the base station for multi-transponder applications. Measurements of the temperature sensor tag chip showed a resolution of 0.64 °C/LSB in the range from 20 °C to 100 °C, which is suitable for environmental temperature monitoring. The chip size is 1.1 × 0.34 mm², and it operates at a clock frequency of 100 kHz while consuming 64 μW. The temperature sensor required -11 dBm of RF input power, supported a conversion rate of 12.5 k-samples/s, and had a maximum error of 0.5 °C.
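The reported figures are self-consistent: at 0.64 °C/LSB, the 20-100 °C span occupies (100 − 20) / 0.64 = 125 codes, well within the 8-bit (256-code) ADC range. A back-of-the-envelope code-to-temperature mapping, with the offset handling assumed rather than taken from the paper, would be:

```python
def code_to_temperature(code, lsb=0.64, t_min=20.0):
    """Map an ADC output code to temperature using the reported
    0.64 deg-C/LSB resolution and 20 deg-C range start (a sketch;
    the chip's actual code-to-temperature mapping is an assumption)."""
    return t_min + code * lsb
```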