• Title/Summary/Keyword: Grid Detection


Mitigation of Impulse Noise Using Slew Rate Limiter in Oversampled Signal for Power Line Communication (전력선 통신에서 오버 샘플링과 Slew Rate 제한을 이용한 임펄스 잡음 제거 기법)

  • Oh, Woojin;Natarajan, Bala
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.4 / pp.431-437 / 2019
  • PLC (Power Line Communication) is widely used in smart grid systems because of its low cost and high data throughput. However, the power line channel suffers from impulse noise, and various studies have been conducted to address this problem. Recently, the ACDL (Adaptive Canonical Differential Limiter), which is based on adaptive clipping with an analog nonlinear filter, was proposed and performs better than earlier approaches. In this paper, we show through simplification and analysis that ACDL is similar to slew rate detection on an oversampled digital signal. Simulations under the PRIME standard show that the proposed method performs equal to or better than ACDL while significantly reducing implementation complexity: the BER performance is equal, but the complexity is reduced to less than 10%.
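
As a rough illustration of the slew-rate-limiting idea described above (not the paper's actual ACDL or PRIME implementation), a sample-domain limiter can be sketched as follows; the signal values and the `max_step` threshold are made-up:

```python
def slew_rate_limit(samples, max_step):
    """Clamp the sample-to-sample change of an oversampled signal.

    Impulse noise produces jumps far steeper than the slew rate of the
    oversampled communication signal, so clipping each step to max_step
    suppresses the impulse while leaving the slow-varying signal intact.
    """
    out = [samples[0]]
    for x in samples[1:]:
        step = x - out[-1]
        if step > max_step:
            step = max_step          # clip an upward impulse
        elif step < -max_step:
            step = -max_step         # clip a downward impulse
        out.append(out[-1] + step)
    return out


# An impulse of amplitude 10 is limited to a step of 2 per sample:
print(slew_rate_limit([0, 1, 2, 10, 3], 2))
```

The limiter needs no fault model of the noise; it only assumes the wanted signal changes slowly relative to the oversampling rate, which is the simplification the paper draws from ACDL.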

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving (도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘)

  • Noh, Hanseok;Lee, Hyunsung;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.39-44 / 2022
  • This paper presents a LiDAR static obstacle map based vehicle position correction algorithm for urban autonomous driving. Real Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but for urban automated vehicle systems RTK GPS has trouble in shaded areas. Therefore, this paper presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS based on an Extended Kalman Filter (EKF). A static obstacle map (STOM) is constructed only with static objects based on the Bayesian rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; STORM is constructed by accumulating and voxelizing the static obstacle map (STOM). The algorithm consists of three main processes. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data are used to define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distribution Transformation (NDT) method, and the position of the host vehicle is corrected based on the EKF. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point cloud matching algorithms in autonomous driving.
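
The EKF correction step at the heart of such a pipeline can be illustrated with a minimal one-dimensional Kalman update that fuses a predicted position with a map-matched measurement; this is a simplified sketch, not the paper's multi-sensor EKF, and all numbers are illustrative:

```python
def kalman_update(x_prior, p_prior, z, r):
    """One scalar Kalman/EKF update step.

    x_prior, p_prior: predicted position (e.g. from low-cost GPS) and
                      its variance.
    z, r:             map-matched position measurement (e.g. from NDT
                      matching against a static obstacle map) and its
                      variance.
    Returns the corrected position and its reduced variance.
    """
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior) # pull estimate toward z
    p_post = (1.0 - k) * p_prior         # uncertainty shrinks
    return x_post, p_post


# Prior at 10 m (variance 4) fused with a measurement of 12 m (variance 4):
print(kalman_update(10.0, 4.0, 12.0, 4.0))
```

With equal prior and measurement variances the gain is 0.5, so the corrected position lands midway between the GPS prediction and the map-matched fix; in the full algorithm this update runs per axis with linearized measurement models.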

How do diverse precipitation datasets perform in daily precipitation estimations over Africa?

  • Brian Odhiambo Ayugi;Eun-Sung Chung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.158-158 / 2023
  • Characterizing the performance of precipitation (hereafter PRE) products in estimating the uncertainties in daily PRE in the era of global warming is of great value to ecosystem sustainability and human survival. This study intercompares the performance of different PRE products (gauge-based, satellite, and reanalysis) sourced from the Frequent Rainfall Observations on GridS (FROGS) database over diverse climate zones in Africa and identifies regions where they depict minimal uncertainties, in order to build optimal maps as a guide for different climate users. This is achieved by utilizing various techniques, including the triple collocation (TC) approach, to assess the capabilities and limitations of different PRE products over nine climatic zones on the continent. For daily scale analysis, the uncertainties in light PRE (0.1-5 mm/day) are prevalent over most regions in Africa during the study duration (2001-2016). Estimating the occurrence of extreme PRE events based on the 90th percentile of daily PRE suggests that extreme PRE is mainly detected over the central Africa (CAF) region and some coastal regions of west Africa (WAF), where the majority of uncorrected satellite products show good agreement. The detection of PRE days and non-PRE days based on categorical statistics suggests that a perfect POD/FAR score is unattainable irrespective of the product type. Daily PRE uncertainties determined from quantitative metrics show consistent, satisfactory performance by the IMERG products (uncorrected), ARCv2, CHIRPSv2, 3B42v7.0 and PERSIANN_CDRv1r1 (corrected), and GPCC, CPC_v1.0, and REGEN_ALL (gauge) during the study period. The optimal maps, which classify the products by the regions where they perform reliably, can be recommended to different stakeholders for various uses.
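
The categorical POD/FAR statistics mentioned above are computed from a contingency table of detected versus observed precipitation days; a minimal sketch with hypothetical counts:

```python
def pod_far(hits, misses, false_alarms):
    """Standard categorical verification scores for rain-day detection.

    hits:         product and gauge both report a PRE day
    misses:       gauge reports a PRE day, product does not
    false_alarms: product reports a PRE day, gauge does not

    POD (probability of detection) = hits / (hits + misses), perfect = 1.
    FAR (false alarm ratio) = false_alarms / (hits + false_alarms),
    perfect = 0.
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far


# Hypothetical product: 80 hits, 20 misses, 20 false alarms.
print(pod_far(80, 20, 20))
```

Because raising detection sensitivity to push POD toward 1 also inflates false alarms, a perfect POD/FAR pair is effectively unattainable, which is the trade-off the study reports across all product types.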


Process Development for Optimizing Sensor Placement Using 3D Information by LiDAR (LiDAR자료의 3차원 정보를 이용한 최적 Sensor 위치 선정방법론 개발)

  • Yu, Han-Seo;Lee, Woo-Kyun;Choi, Sung-Ho;Kwak, Han-Bin;Kwak, Doo-Ahn
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.3-12 / 2010
  • In previous studies, digital measurement systems and analysis algorithms were developed using related techniques such as aerial photograph detection and high-resolution satellite image processing. However, these studies were limited to 2-dimensional geo-processing. Therefore, it is necessary to apply 3-dimensional spatial information and coordinate systems for higher accuracy in recognizing and locating geo-features. The objective of this study was to develop a stochastic algorithm for optimal sensor placement using a 3-dimensional spatial analysis method. The 3-dimensional information of LiDAR was applied in the sensor field algorithm based on 2- and/or 3-dimensional gridded points. This study was conducted with three case studies using the optimal sensor placement algorithms: the first case was based on 2-dimensional space without obstacles (2D, no obstacles), the second on 2-dimensional space with obstacles (2D, obstacles), and the third on 3-dimensional space with obstacles (3D, obstacles). Finally, this study suggests a methodology for optimal sensor placement - especially for ground-installed sensors - using LiDAR data, and it shows the feasibility of applying the algorithm to information collection using sensors.
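
A common baseline for this kind of placement problem over gridded points is a greedy coverage heuristic; the sketch below is an illustrative 2-D version (obstacles and the paper's stochastic 3-D algorithm are omitted, and all coordinates are made-up):

```python
def greedy_sensor_placement(points, candidates, radius, k):
    """Greedily choose k sensor positions from candidate locations,
    each time picking the candidate that covers the most grid points
    not yet covered (a point is covered if within `radius`)."""
    r2 = radius ** 2
    covered = set()
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for c in candidates:
            gain = sum(
                1 for i, p in enumerate(points)
                if i not in covered
                and (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= r2
            )
            if gain > best_gain:
                best, best_gain = c, gain
        chosen.append(best)
        covered |= {i for i, p in enumerate(points)
                    if (p[0] - best[0]) ** 2 + (p[1] - best[1]) ** 2 <= r2}
    return chosen


# Two clusters of grid points; each sensor covers one cluster.
grid = [(0, 0), (1, 0), (5, 0), (6, 0)]
print(greedy_sensor_placement(grid, [(0, 0), (5, 0)], 1.5, 2))
```

In the paper's 3-D cases the coverage test would additionally use LiDAR-derived heights and obstacle occlusion, but the greedy selection loop stays the same shape.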

The Fault Diagnosis Model of Ship Fuel System Equipment Reflecting Time Dependency in Conv1D Algorithm Based on the Convolution Network (합성곱 네트워크 기반의 Conv1D 알고리즘에서 시간 종속성을 반영한 선박 연료계통 장비의 고장 진단 모델)

  • Kim, Hyung-Jin;Kim, Kwang-Sik;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research / v.46 no.4 / pp.367-374 / 2022
  • The purpose of this study was to propose a deep learning algorithm for the fault diagnosis of fuel pumps and purifiers of autonomous ships. A deep learning algorithm reflecting the time dependence of the measured signal was configured, and the failure pattern was trained using vibration signals measured during the equipment's regular operation and failure states. Considering the sequential time dependence of the deterioration implied in the vibration signal, this study adopts Conv1D with sliding window computation for fault detection. The time dependence was also reflected by transforming the measured signal from two dimensions to three dimensions. Additionally, the optimal values of the hyper-parameters of the Conv1D model were determined using the grid search technique. Finally, the results show that the proposed data preprocessing method and the Conv1D model can reflect the sequential dependency between a fault and its effect on the measured signal, and appropriately perform both anomaly and failure detection for the equipment chosen for application.
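
The sliding-window preprocessing that turns a 1-D vibration record into fixed-length segments for a Conv1D model can be sketched as follows (window and stride values are illustrative, not the paper's hyper-parameters):

```python
def sliding_windows(signal, window, stride):
    """Slice a 1-D vibration record into overlapping fixed-length
    segments so that a Conv1D network sees the sequential time
    dependence within each segment. Stacking the segments (and adding
    a channel axis) yields the 3-D tensor a Conv1D layer expects:
    (num_windows, window, channels)."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, stride)]


# A 10-sample record, window 4, stride 3 -> 3 overlapping segments.
print(sliding_windows(list(range(10)), window=4, stride=3))
```

An overlapping stride (stride < window) is the usual choice here, since it preserves fault transients that would otherwise fall on a segment boundary.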

Water resources monitoring technique using multi-source satellite image data fusion (다종 위성영상 자료 융합 기반 수자원 모니터링 기술 개발)

  • Lee, Seulchan;Kim, Wanyub;Cho, Seongkeun;Jeon, Hyunho;Choi, Minhae
    • Journal of Korea Water Resources Association / v.56 no.8 / pp.497-508 / 2023
  • Agricultural reservoirs are crucial structures for water resources monitoring, especially in Korea where the resources are seasonally unevenly distributed. Optical and Synthetic Aperture Radar (SAR) satellites, utilized as tools for monitoring the reservoirs, have their own limitations: optical sensors are sensitive to weather conditions, and SAR sensors are sensitive to noise and multiple scattering over dense vegetation. In this study, we tried to improve water body detection accuracy through optical-SAR data fusion and to quantitatively analyze the complementary effects. We first detected water bodies at the Edong and Cheontae reservoirs by applying the K-means clustering technique to the Normalized Difference Water Index (NDWI) derived from the Compact Advanced Satellite 500 (CAS500), Kompsat-3/3A, and Sentinel-2, and to the SAR backscattering coefficient from Sentinel-1. After that, the improvements in accuracy were analyzed by applying K-means clustering to the 2-D grid space consisting of NDWI and SAR. Kompsat-3/3A was found to have the best accuracy (0.98 at both reservoirs), followed by Sentinel-2 (0.83 at Edong, 0.97 at Cheontae), Sentinel-1 (0.93 at both), and CAS500 (0.69, 0.78). By applying K-means clustering to the 2-D space at the Cheontae reservoir, the accuracy of CAS500 was improved by around 22% (resulting accuracy: 0.95), with an improvement in precision (85%) and a degradation in recall (14%). The precision of Kompsat-3A (Sentinel-2) was improved by 3% (5%), and recall was degraded by 4% (7%). More precise water resources monitoring is expected to become possible with the development of high-resolution SAR satellites, including CAS500-5, and of image fusion and water body detection techniques.
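
The K-means clustering step on the 2-D (NDWI, backscatter) feature space can be illustrated with a plain Lloyd's-algorithm sketch; the feature values and initial centers below are made-up and assume both features have been normalized to comparable scales:

```python
def kmeans_2d(points, init, iters=10):
    """Lloyd's algorithm on 2-D feature vectors, e.g. one normalized
    NDWI value and one normalized SAR backscatter value per pixel.
    Returns the final cluster centers and a label per point."""
    centers = [list(c) for c in init]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:  # assign each pixel to its nearest center
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            groups[d.index(min(d))].append(p)
        for j, g in enumerate(groups):
            if g:  # move each center to the mean of its cluster
                centers[j] = [sum(v) / len(g) for v in zip(*g)]
    labels = []
    for p in points:
        d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
        labels.append(d.index(min(d)))
    return centers, labels


# Two made-up "water" pixels and two "land" pixels in (NDWI, SAR) space:
pixels = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
print(kmeans_2d(pixels, init=[(1, 1), (0, 0)]))
```

Clustering jointly on both features is what lets one sensor compensate for the other: a pixel ambiguous in NDWI alone can still separate cleanly once backscatter is added as a second axis.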

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept, which will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm that can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic information and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in citation relationships among papers and co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas, and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes positions of research topics in two-dimensional space. Frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline.
An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase in degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the page-rank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing has gradually increased. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research.
The early stage of cloud computing research focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
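
The four-quadrant classification of the research trend map reduces to a simple rule on the two dimensions; the threshold values used in the example below are illustrative assumptions (the study splits the map on its own axis scales):

```python
def trend_quadrant(freq, centrality_growth, freq_cut, growth_cut):
    """Classify a keyword into the research trend map's four areas.

    X-axis: keyword frequency; Y-axis: rate of increase in the
    keyword's degree centrality. The cut values define the quadrant
    boundaries (illustrative here).
    """
    if freq >= freq_cut and centrality_growth >= growth_cut:
        return "growth"       # high frequency, rising centrality
    if freq >= freq_cut:
        return "maturation"   # high frequency, stagnant centrality
    if centrality_growth >= growth_cut:
        return "promising"    # low frequency, rising centrality
    return "decline"          # low frequency, stagnant centrality


# E.g. a frequent keyword whose centrality is still climbing:
print(trend_quadrant(120, 0.4, freq_cut=50, growth_cut=0.2))
```

Under this rule, the study's observations read naturally: "virtualization" moving from promising to growth means its frequency crossed the X-axis cut while its centrality kept rising, and "grid computing" ending in decline means both dimensions fell below their cuts.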

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.187-204 / 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search for reviews from a search engine such as Google or from social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make a trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely accepted for text mining of the Web. Sentiment analysis has been used to classify documents through machine learning techniques, such as decision trees, neural networks, and support vector machines (SVMs), and is used to determine the attitude, position, and sensibility of people who write articles about various topics that are published on the Web. Regardless of the polarity of customer reviews, emotional reviews are very helpful materials for analyzing the opinions of customers. Sentiment analysis helps with understanding what customers really want, instantly, through the help of automated text mining techniques. It applies text mining techniques to text on the Web to extract subjective information, and is utilized to determine the attitudes or positions of the person who wrote the article and presented an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and self-organizing maps (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as logit, decision trees, and SVM.
We employed sentiment analysis to develop our model for the selection and detection of hot topics from China's online stock forum. The sentiment analysis calculates a sentiment value for a document by comparing and classifying its contents against a polarity sentiment dictionary (positive or negative). The online stock forum was an attractive site because of its information about stock investment. Users post numerous texts about stock movement, analyzing the market according to government policy announcements, market reports, reports from research institutes on the economy, and even rumors. We divided the online forum's topics into 21 categories to utilize sentiment analysis, and 144 topics were selected among the 21 categories. The posts were crawled to build a positive and negative text database, and we ultimately obtained 21,141 posts on 88 topics by preprocessing the text from March 2013 to February 2015. An interest index was defined to select the hot topics, and the k-means algorithm and SOM presented equivalent results on this data. We developed a decision tree model to detect hot topics with three algorithms: CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed SVM to detect the hot topics from negative data. The SVM models were trained with the radial basis function (RBF) kernel, using a grid search, to detect the hot topics. The detection of hot topics using sentiment analysis provides investors with the latest trends and hot topics in the stock forum so that they no longer need to search through vast amounts of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
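
The dictionary-based sentiment scoring described above can be sketched as a simple polarity count; the lexicon entries here are hypothetical placeholders, not the study's Chinese-language dictionary:

```python
# Hypothetical polarity lexicon (the study used a Chinese stock-forum
# sentiment dictionary; these English tokens are placeholders).
POSITIVE = {"rise", "gain", "bull", "profit"}
NEGATIVE = {"fall", "loss", "bear", "crash"}


def sentiment_score(tokens):
    """Score a tokenized post in [-1, 1]: +1 all-positive hits,
    -1 all-negative hits, 0 when no lexicon word appears."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


# Two positive hits and one negative hit -> mildly positive post.
print(sentiment_score(["stocks", "rise", "despite", "loss", "gain"]))
```

Aggregating such per-post scores over a topic, together with post volume, is the kind of signal an interest index can be built from before the clustering and SVM detection steps.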

A comparative study of nondestructive geomagnetic survey with archeological survey for detection of buried cultural properties in Doojeong-dong site, Cheonan, Chungnam Province (매장문화재 확인을 위한 자력탐사 및 발굴 비교연구: 충남 천안시 두정동 발굴지역)

  • Suh, Man-Cheol;Lee, Nam-Seok
    • Journal of the Korean Geophysical Society / v.3 no.3 / pp.175-184 / 2000
  • A nondestructive experimental feasibility study was conducted using a magnetometer to find buried cultural objects such as pottery and steel items in a low-relief mountainous area of Doojeong-dong, Cheonan, Chungnam Province, from May 23 to July 18, 1998. The magnetic survey was carried out on a 20 cm × 20 cm grid over a 20 m × 40 m site before excavation, and the distribution of magnetic anomalies was compared with the results of the excavation. The magnetic sensor was located on the ground surface during the survey on the basis of an experimental result. Positive magnetic anomalies of maximum 130 nT were found over a pair of potteries. The magnetic anomaly map reveals several anomalous points in the 1st and 4th quadrants of the survey site, from which potteries and their fragments were confirmed. Six of the seven points correlated with magnetic anomalies were found to contain earthenware, whereas a magnetically uncorrelated location produced earthenware made of unbaked clay. Steel waste such as cans and wires hidden in soil and bushes also influenced the magnetic anomalies; therefore, it is better to remove such steel waste prior to a magnetic survey if possible. Some magnetically anomalous points produced no archaeological objects on excavation, which may be explained by the excavation level being shallower than the burial depth.


The Characteristics analysis of a Flux-lock Type Fault Current Limiter according to the Winding Directions for Power Grid (전력계통 적용을 위한 결선방향에 따른 자속구속형 한류기의 특성 분석)

  • Lee, Mi-Yong;Park, Jeong-Min
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.11 / pp.5879-5884 / 2013
  • With rapid industrialization and economic development, the electricity demands of industrial facilities and densely populated large cities continue to increase in Korea. The increase in power consumption requires the extension of power facilities, but it is difficult to secure space for equipment installation in the limited space of urban areas. In addition, the 154 kV and 345 kV transmission systems in Korea have short transmission distances and are connected to one another in network structures that ensure high reliability and stability of the power supply. This structure reduces the impedance during a fault in the power system and increases the magnitude of the short-circuit fault current. The superconducting fault current limiter (SFCL) was devised to effectively address these problems. The SFCL is a new-concept, eco-friendly protective device that ensures fast operation and recovery for the fault current and needs no additional fault detection devices; therefore, many studies are being conducted around the world. In this paper, the initial fault current characteristics and the current-limiting characteristics of the flux-lock type fault current limiter were analyzed from multiple perspectives according to the winding direction, the fault incident angle, and the change in inductance.