• Title/Summary/Keyword: Preprocessed data

Detection of High Impedance Fault Using Adaptive Neuro-Fuzzy Inference System (적응 뉴로 퍼지 추론 시스템을 이용한 고임피던스 고장검출)

  • 유창완
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.4
    • /
    • pp.426-435
    • /
    • 1999
  • A high impedance fault (HIF) is one of the serious problems facing the electric utility industry today. Because of the high impedance of a downed conductor under some conditions, these faults are not easily detected by overcurrent-based protection devices and can cause fires and personal hazards. In this paper, a new method for detecting HIFs using an adaptive neuro-fuzzy inference system (ANFIS) is proposed. Since the arcing fault current changes differently during the high- and low-voltage portions of the conductor voltage waveform, we first divide one cycle of the fault current into four equal-span data windows according to the magnitude of the conductor voltage. The fast Fourier transform (FFT) is applied to each data window, and the frequency spectra of the current waveform are chosen as inputs to the ANFIS after an input selection step. Using staged-fault and normal data, the ANFIS is trained to discriminate between normal and HIF conditions by a hybrid learning algorithm. This algorithm combines gradient descent with the least-squares method and shows rapid convergence and improved convergence error. The proposed method performs well when applied to staged-fault data and HIF-like loads (HIFLL) such as arc welders.
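
The abstract describes splitting one cycle of fault current into four equal-span data windows and feeding FFT magnitudes of each window to the ANFIS. As a rough illustration of that feature-extraction step only (the function name, window count, and number of retained bins are my assumptions, not the paper's):

```python
import numpy as np

def window_spectra(cycle_current, n_windows=4, n_bins=5):
    """Split one cycle of fault current into equal-span windows and
    return the FFT magnitudes of the lowest n_bins frequency bins of
    each window, mimicking a data-window + FFT feature extraction."""
    windows = np.array_split(np.asarray(cycle_current, dtype=float), n_windows)
    feats = []
    for w in windows:
        spec = np.abs(np.fft.rfft(w))   # magnitude spectrum of this window
        feats.extend(spec[:n_bins])     # keep low-frequency bins as features
    return np.array(feats)
```

In the paper the windows are aligned with the conductor-voltage magnitude rather than taken in sample order, and the selected spectra then pass through input selection before reaching the ANFIS.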

An Ensemble Model for Machine Failure Prediction (앙상블 모델 기반의 기계 고장 예측 방법)

  • Cheon, Kang Min;Yang, Jaekyung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.1
    • /
    • pp.123-131
    • /
    • 2020
  • There have been many past studies on predicting machine failure, and recently much research and many applications have emerged that diagnose the physical condition of a machine and its parts and estimate remaining life through various methods. Survival models are also used to predict plant failures based on past anomaly cycles. In particular, special machines that reflect the fluid flow and process characteristics of chemical plants are connected to hundreds or thousands of sensors, so there are many factors to consider, including process and material data as well as derived variables. In this paper, the data were preprocessed through unsupervised time-series anomaly detection to predict abnormalities of these special machines. Next, clustering results reflecting the data characteristics were used to produce additional variables, and a training data set was created based on the history of past facility abnormalities. Finally, a prediction methodology based on a supervised learning algorithm was applied, and updating the model was confirmed to improve the accuracy of facility-failure prediction. This is expected to improve the efficiency of facility operation by allowing maintenance timing and parts supply to be scheduled flexibly, through predicting machine abnormalities and extracting key factors.
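
The unsupervised time-series anomaly detection used for preprocessing is not specified in the abstract; one common minimal approach is a rolling z-score detector, sketched here under that assumption (the function name, window size, and threshold are illustrative):

```python
from statistics import fmean, pstdev

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    flags = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = fmean(ref), pstdev(ref)
        if sigma == 0.0:
            flagged = series[i] != mu       # flat window: any change is anomalous
        else:
            flagged = abs(series[i] - mu) / sigma > threshold
        if flagged:
            flags.append(i)
    return flags
```

In the paper, such flags feed into clustering-derived variables and a supervised failure-prediction model rather than being used directly.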

Time Series Classification of Cryptocurrency Price Trend Based on a Recurrent LSTM Neural Network

  • Kwon, Do-Hyung;Kim, Ju-Bong;Heo, Ju-Sung;Kim, Chan-Myung;Han, Youn-Hee
    • Journal of Information Processing Systems
    • /
    • v.15 no.3
    • /
    • pp.694-706
    • /
    • 2019
  • In this study, we applied the long short-term memory (LSTM) model to classify cryptocurrency price time series. We collected historical cryptocurrency price time series and preprocessed them for use as training and target data. After preprocessing, the price time series were systematically encoded into a three-dimensional price tensor representing the past price changes of the cryptocurrencies. We also present our LSTM model structure and how the price tensor is used as input to the model. In particular, a grid-search-based k-fold cross-validation technique was applied to find the most suitable LSTM model parameters. Lastly, through a comparison of F1-scores, our study shows that the LSTM model outperforms the gradient boosting (GB) model, a general machine learning model known for relatively good prediction performance, for time series classification of the cryptocurrency price trend. With the LSTM model, we obtained a performance improvement of about 7% over the GB model.
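
As a sketch of encoding price series into a three-dimensional tensor (samples × timesteps × assets) of the shape an LSTM layer expects, assuming simple returns and a sliding window (the window length and return definition are my assumptions, not the paper's):

```python
import numpy as np

def make_price_tensor(prices, window=10):
    """Encode a (T, n_assets) price matrix as a (samples, timesteps,
    assets) tensor of simple returns over a sliding window."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices, axis=0) / prices[:-1]  # simple returns
    n = returns.shape[0] - window + 1
    return np.stack([returns[i:i + window] for i in range(n)])
```

Each tensor slice would then be paired with an up/down trend label for the classification task described above.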

Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.5
    • /
    • pp.1671-1686
    • /
    • 2022
  • Face manipulation tools, represented by Deepfake, have threatened the security of people's biometric identity information. In particular, manipulation tools based on deep learning have brought great challenges to Deepfake detection. There are many solutions for Deepfake detection based on traditional machine learning and advanced deep learning, but most of these detectors perform poorly when evaluated on datasets of different quality. In this paper, to build high-quality Deepfake datasets, we provide a preprocessing method based on image pixel-matrix features to eliminate similar images, and use the residual channel attention network (RCAN) to resize images. We also describe a Deepfake detector named Cascaded-Hop, which is based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed the preprocessed datasets, Cascaded-Hop achieves good classification results across different manipulation types and multiple-quality datasets. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under curve) results of our proposed methods are comparable to state-of-the-art models.
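
The preprocessing step above eliminates similar images using pixel-matrix features; a minimal stand-in is to drop frames whose mean absolute pixel difference from the last kept frame is small (the threshold and function name are illustrative, not the paper's method):

```python
import numpy as np

def dedupe_frames(frames, threshold=0.02):
    """Keep a frame only if its mean absolute pixel difference from the
    last kept frame reaches the threshold (drops near-duplicates)."""
    kept = [frames[0]]
    for f in frames[1:]:
        if np.mean(np.abs(f - kept[-1])) >= threshold:
            kept.append(f)
    return kept
```

The surviving frames would then be resized (via RCAN in the paper) before entering the Cascaded-Hop detector.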

Automatic Detection of Sleep Stages based on Accelerometer Signals from a Wristband

  • Yeo, Minsoo;Koo, Yong Seo;Park, Cheolsoo
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.1
    • /
    • pp.21-26
    • /
    • 2017
  • In this paper, we suggest an automated sleep scoring method that applies machine learning algorithms to accelerometer data from a wristband device. In an experiment, 36 subjects slept for about eight hours while polysomnography (PSG) data and accelerometer data were recorded simultaneously. After the experiments, the recorded signals were preprocessed, and features significant for sleep staging were extracted. The extracted features were classified into sleep stages using five machine learning algorithms. To validate our approach, the results were compared with PSG scoring performed by sleep clinicians. Both accuracy and specificity exceeded 90 percent, and sensitivity was between 50 and 80 percent. To investigate the relationship between the features and the PSG scoring results, information gains were calculated; the features with the lowest and highest information gain were skewness and band energy, respectively. In conclusion, the sleep stages were classified using the top 10 features by information gain.
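
Information gain, used above to rank features against the PSG scoring, is the entropy reduction obtained by splitting the labels on a feature. A small sketch with discrete values (the feature and label names are invented for illustration):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction from splitting labels on a discrete feature."""
    n = len(labels)
    remainder = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder
```

Continuous features such as skewness or band energy would need discretization (or a threshold split) before this computation applies.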

Decision method for rule-based physical activity status using rough sets (러프집합을 이용한 규칙기반 신체활동상태 결정방법)

  • Lee, Young-Dong;Son, Chang-Sik;Chung, Wan-Young;Park, Hee-Joon;Kim, Yoon-Nyun
    • Journal of Sensor Science and Technology
    • /
    • v.18 no.6
    • /
    • pp.432-440
    • /
    • 2009
  • This paper presents an accelerometer-based system for physical activity decision that is capable of recognizing three types of physical activity, i.e., standing, walking, and running, using rough sets. To collect acceleration data, we developed a body sensor node consisting of two custom boards for physical activity monitoring applications: a wireless sensor node and an accelerometer sensor module. The physical activity decision is based on acceleration data collected from the body sensor node attached to the user's chest. We propose a method that classifies physical activities with rough sets, generating rules from the attributes of the preprocessed data and reducing the rules by constructing a new decision table. Our experimental results validate that the rule patterns obtained after removing redundant attribute values are simpler yet perform exactly the same as those obtained before reduction.
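
Rough-set attribute reduction removes condition attributes that are dispensable, i.e., whose removal never makes two rows of the decision table with identical remaining condition values disagree on the decision. A minimal check of that property (the table values are invented for illustration):

```python
def is_dispensable(table, decision, attr_idx):
    """Return True if dropping column attr_idx keeps the decision table
    consistent: no two rows with equal remaining condition values may
    map to different decisions."""
    seen = {}
    for row, d in zip(table, decision):
        key = tuple(v for i, v in enumerate(row) if i != attr_idx)
        if key in seen and seen[key] != d:
            return False                  # conflict: attribute is needed
        seen[key] = d
    return True
```

Repeating this test over all attributes yields a reduct, from which the shortened decision rules are generated.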

Using Drone and Laser Scanners for As-built Building Information Model Creation of a Cultural Heritage Building (드론 및 레이저스캐너를 활용한 근대 건축물 문화재 빌딩정보 모델 역설계 구축에 관한 연구)

  • Jung, Rae-Kyu;Koo, Bon-Sang;Yu, Young-Su
    • Journal of KIBIM
    • /
    • v.9 no.2
    • /
    • pp.11-20
    • /
    • 2019
  • The use of drones and laser scanners has the potential to drastically reduce the time and cost of the conventional techniques employed for field surveys of cultural heritage buildings. Moreover, point cloud data can be utilized to create an as-built Building Information Model (BIM), providing a repository for consistent operations information. However, BIM creation is not a requisite for heritage buildings, and its technological possibilities and barriers have not been documented. This research explored the processes required to convert a heritage university building into a BIM model using existing off-the-shelf software applications. Point cloud data were gathered from drones for the exterior, while a laser scanner was employed for the interior of the building. The point clouds were preprocessed and used as references for the geometry of the building elements, including walls, slabs, windows, doors, and staircases. The BIM model was subsequently created for the individual elements using existing and custom libraries, and was used to extract 2D CAD drawings that met the requirements of Korea's heritage preservation specifications. The experiment showed that technical improvements are needed to overcome occlusion, modeling errors arising from the modeler's subjective judgments, and the limitations of point cloud cleaning and filtering techniques.
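
The abstract mentions point cloud cleaning and filtering; a common minimal form is statistical outlier removal based on mean neighbor distance, sketched here (the parameters k and factor are illustrative, and a real pipeline would use a spatial index rather than this O(n²) loop):

```python
from math import dist
from statistics import fmean, pstdev

def filter_point_cloud(points, k=4, factor=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds mean + factor * std of that
    statistic over the whole cloud."""
    means = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        means.append(fmean(ds[:k]))
    mu, sigma = fmean(means), pstdev(means)
    return [p for p, m in zip(points, means) if m <= mu + factor * sigma]
```

The cleaned cloud would then serve as the geometric reference for modeling walls, slabs, and other elements in the BIM tool.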

An Analysis of Key Elements for FinTech Companies Based on Text Mining: From the User's Review (텍스트 마이닝 기반의 자산관리 핀테크 기업 핵심 요소 분석: 사용자 리뷰를 바탕으로)

  • Son, Aelin;Shin, Wangsoo;Lee, Zoonky
    • The Journal of Information Systems
    • /
    • v.29 no.4
    • /
    • pp.137-151
    • /
    • 2020
  • Purpose: Domestic asset management fintech companies are expected to grow by leaps and bounds with the implementation of the "Data bills." Contrary to the market fever, however, academic research is insufficient. We therefore analyze user reviews of asset management fintech companies that are expected to grow significantly, to derive the strengths and weaknesses of the services provided and to identify the key elements of such companies. Design/methodology/approach: To analyze the large amount of review text, this study applied text mining techniques. Bank Salad and Toss, domestic asset management application services, were selected for the study. App reviews were crawled from the online app store and preprocessed using natural language processing techniques. Topic modeling and aspect-based sentiment analysis were used as analysis methods. Findings: The analysis identified the elements that asset management fintech companies should have. Topic modeling derived seven topics each for Bank Salad and Toss, covering function and usage as well as stability and marketing. Sentiment analysis showed that users responded positively to function-related topics but negatively to usage-related and stability topics. Through this, we extracted the key elements needed by asset management fintech companies.
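
The aspect-sentiment step pairs service aspects with user sentiment; a toy lexicon-based version counts positive and negative words in sentences that mention each aspect keyword (the lexicons, keywords, and example review are invented for illustration, and the paper's actual method is more sophisticated):

```python
def aspect_sentiment(reviews, aspects, pos_words, neg_words):
    """Tally positive minus negative lexicon hits in sentences that
    mention each aspect keyword (a toy aspect-sentiment pass)."""
    scores = {a: 0 for a in aspects}
    for review in reviews:
        for sentence in review.lower().split('.'):
            tokens = sentence.split()
            for a in aspects:
                if a in tokens:
                    scores[a] += sum(t in pos_words for t in tokens)
                    scores[a] -= sum(t in neg_words for t in tokens)
    return scores
```

A positive score for "feature" and a negative one for "login" would mirror the finding that function topics drew praise while usage and stability topics drew complaints.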

A Low-Cost Lidar Sensor based Glass Feature Extraction Method for an Accurate Map Representation using Statistical Moments (통계적 모멘트를 이용한 정확한 환경 지도 표현을 위한 저가 라이다 센서 기반 유리 특징점 추출 기법)

  • An, Ye Chan;Lee, Seung Hwan
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.103-111
    • /
    • 2021
  • This study addresses a glass feature extraction method based on a low-cost lidar sensor for accurate map representation using statistical moments, i.e., the mean and variance. Since a low-cost lidar sensor produces range-only data without intensity or multi-echo data, glass-like objects are difficult to detect. In this study, the principle that the incidence angle of a ray emitted from the lidar with respect to a glass surface is close to zero degrees is exploited for glass detection. In addition, all sensor data are preprocessed and clustered, and each cluster is represented by its statistical moments as a glass feature candidate. Glass features are selected from the candidates according to several conditions based on this principle and on geometric relations in the global coordinate system. The accumulated glass features are classified according to distance and finally represented on the map. Several experiments were conducted in glass environments. The results showed that the proposed method accurately extracted and represented glass windows using proper parameters, which were empirically designed and carefully analyzed. In future work, we will implement and evaluate conventional SLAM algorithms combined with our glass feature extraction method in glass environments.
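
Representing each range cluster by its statistical moments and keeping low-variance clusters as flat-surface candidates can be sketched as follows (the variance threshold is an assumption; the paper additionally applies the incidence-angle principle and geometric conditions before accepting a candidate):

```python
from statistics import fmean, pvariance

def glass_candidates(clusters, var_max=1e-4):
    """Represent each range cluster by its (mean, variance) moments and
    keep low-variance clusters as flat-surface (glass) candidates."""
    cands = []
    for ranges in clusters:
        mu, var = fmean(ranges), pvariance(ranges)
        if var <= var_max:            # flat return profile -> candidate
            cands.append((mu, var))
    return cands
```

Accepted candidates are then accumulated over scans and classified by distance before being drawn on the map.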

A Study of Weighing System to Apply into Hydraulic Excavator with CNN (CNN기반 굴삭기용 부하 측정 시스템 구현을 위한 연구)

  • Hwang Hun Jeong;Young Il Shin;Jin Ho Lee;Ki Yong Cho
    • Journal of Drive and Control
    • /
    • v.20 no.4
    • /
    • pp.133-139
    • /
    • 2023
  • A weighing system calculates the bucket's excavation amount on an excavator. Usually, the excavation amount is computed from the excavator's equations of motion with sensor data, but these equations have computational errors induced by linearization assumptions and by identification of the equations' parameters. To reduce these errors, some commercial weighing systems incorporate a particular motion into the excavation process. This study introduces a linear regression model based on an artificial neural network that has smaller prediction errors and does not require a particular pose during excavation. Time-series data were gathered from a loading test of a 30-ton excavator. These data were then preprocessed and fit by MLP (multi-layer perceptron) or CNN (convolutional neural network) based linear regression models. Each model was trained while varying hyperparameters such as the number of layers and nodes, the drop-out rate, and the kernel size. Finally, the 1D-CNN-based linear regression model was selected.
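
The core operation of the selected 1D-CNN regression model is a one-dimensional convolution over the time-series sensor data; as implemented in CNN layers this is a cross-correlation without kernel flipping (a bare-bones sketch, not the paper's model):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation, the core operation a 1D-CNN
    layer applies to each channel of time-series sensor data."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel))
            for i in range(n)]
```

A trained 1D-CNN stacks many such learned kernels, followed by a linear head that regresses the excavation amount.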