• Title/Summary/Keyword: Recurrent neural networks

EPS Gesture Signal Recognition using Deep Learning Model (심층 학습 모델을 이용한 EPS 동작 신호의 인식)

  • Lee, Yu ra;Kim, Soo Hyung;Kim, Young Chul;Na, In Seop
    • Smart Media Journal / v.5 no.3 / pp.35-41 / 2016
  • In this paper, we propose hand-gesture signal recognition based on an electric potential sensor (EPS) using a deep learning model. Signals extracted from the electric-field-based EPS contain substantial noise, which must be removed in pre-processing. After the noise is removed with a filter that exploits frequency features, the signals are reconstructed through a dimensional transformation, overcoming the limitation of the original one-dimensional voltage representation so that convolution operations can be applied. The reconstructed signal data are then classified and recognized by a deep learning model with multiple learning layers. Because statistical models based on probability are sensitive to initial parameters, their results can change after training in the modeling phase; a deep learning model overcomes this problem through its multiple training layers. In experiments with four kinds of gestures, we used two different deep learning structures, a convolutional neural network and a recurrent neural network, and compared them with a statistical model algorithm. The method using the convolutional neural network outperformed the other algorithms in EPS gesture signal recognition.
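
The abstract describes a two-step pipeline: frequency-domain denoising followed by reshaping the one-dimensional voltage trace into a two-dimensional array for convolution. A minimal sketch of that idea follows; the sampling rate, cutoff frequency, and target 2-D shape are illustrative assumptions, as the paper does not specify them.

```python
# Minimal sketch of the pre-processing pipeline described above.
# The cutoff frequency, sampling rate, and target 2-D shape are
# illustrative assumptions; the paper does not specify them.
import numpy as np
from scipy.signal import butter, filtfilt

def denoise(signal: np.ndarray, fs: float = 1000.0, cutoff: float = 50.0) -> np.ndarray:
    """Low-pass filter the raw 1-D EPS voltage trace (assumed noise band)."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, signal)

def to_2d(signal: np.ndarray, height: int = 32, width: int = 32) -> np.ndarray:
    """Reshape the 1-D voltage sequence into a 2-D array so that
    2-D convolutions can be applied, as the abstract describes."""
    trimmed = signal[: height * width]
    return trimmed.reshape(height, width)

raw = np.random.randn(2048)          # stand-in for one EPS gesture recording
image = to_2d(denoise(raw))          # input to the CNN classifier
```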

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet. Therefore, adversarial samples have become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate the performance of existing detectors in the face of deceptive samples, and help discover vulnerabilities and improve detection methods for better performance. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve the above problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grey-scale images and introduces a pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale map, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% against the Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Convolutional Neural Network and Recurrent Neural Network (CNN_RNN), and Convolutional Neural Network and Long Short-Term Memory (CNN_LSTM) detectors, respectively.
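
PixGAN itself is not described at the code level in the abstract; the sketch below shows one common form of pixel attention, a learned per-pixel weight map applied multiplicatively to a feature map, assuming PyTorch. Layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a pixel-attention block of the kind the abstract
# describes: a learned per-pixel weight map applied to a grey-scale
# feature map inside a DCGAN generator/discriminator. The layer sizes
# are illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution producing one attention weight per pixel
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.sigmoid(self.score(x))   # (N, 1, H, W) in [0, 1]
        return x * weights                       # emphasize critical pixels

attn = PixelAttention(channels=64)
feature_map = torch.randn(8, 64, 32, 32)          # batch of DCGAN features
out = attn(feature_map)
```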

A Comparative Study of Speech Parameters for Speech Recognition Neural Network (음성 인식 신경망을 위한 음성 파라미터들의 성능 비교)

  • Kim, Ki-Seok;Im, Eun-Jin;Hwang, Hee-Yung
    • The Journal of the Acoustical Society of Korea / v.11 no.3 / pp.61-66 / 1992
  • There has been much research on neural network models for automatic speech recognition, but the main trend has been finding network models and learning rules appropriate to the task. However, the choice of input speech parameters for the neural network, as well as the network model itself, is a very important factor in improving the performance of a neural-network-based speech recognition system. In this paper, we select six speech parameters from a survey of speech recognition papers that use neural networks and analyze their performance on the same data with the same neural network model. We use 8 sets of 9 Korean plosives and 18 sets of 8 Korean vowels. Using a recurrent neural network with a fixed number of nodes, we compare the performance of the six speech parameters. The delta cepstrum of linear predictive coefficients showed the best results, with recognition rates of 95.1% for the vowels and 100.0% for the plosives.
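
The winning feature here, the delta cepstrum of linear predictive coefficients, can be sketched as follows: compute cepstral coefficients from each frame's LPC coefficients via the standard LPC-to-cepstrum recursion, then take frame-to-frame differences. The LPC order, framing, and use of librosa for LPC estimation are assumptions for illustration.

```python
# Minimal sketch of the best-performing feature in this study: the
# delta (frame-to-frame difference) of LPC-derived cepstral
# coefficients. Order and frame settings are illustrative assumptions.
import numpy as np
import librosa

def lpc_cepstrum(frame: np.ndarray, order: int = 12) -> np.ndarray:
    """Convert LPC coefficients of one frame to cepstral coefficients
    using the standard LPC-to-cepstrum recursion."""
    a = -librosa.lpc(frame, order=order)[1:]      # predictor coefficients
    c = np.zeros(order)
    for n in range(order):
        c[n] = a[n] + sum(((k + 1) / (n + 1)) * c[k] * a[n - 1 - k]
                          for k in range(n))
    return c

def delta_cepstrum(frames: np.ndarray) -> np.ndarray:
    """Delta cepstrum: difference of cepstra between adjacent frames."""
    ceps = np.array([lpc_cepstrum(f) for f in frames])
    return np.diff(ceps, axis=0)

signal = np.random.randn(4000)                # stand-in speech signal
frames = signal.reshape(20, 200)              # stand-in framing (no overlap)
features = delta_cepstrum(frames)             # (19, 12) delta-cepstral features
```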

A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra

  • Galib, S.M.;Bhowmik, P.K.;Avachat, A.V.;Lee, H.K.
    • Nuclear Engineering and Technology / v.53 no.12 / pp.4072-4079 / 2021
  • This article presents a study of state-of-the-art methods for automated radioactive material detection and identification using gamma-ray spectra and modern machine learning methods. The work was inspired by recent developments in deep learning algorithms, and the proposed method provides better performance than the current state-of-the-art models. Machine learning models such as fully connected networks, recurrent networks, convolutional networks, and gradient-boosted decision trees are applied under a wide variety of testing conditions, and their advantages and disadvantages are discussed. Furthermore, a hybrid model is developed by combining a fully connected and a convolutional neural network, which shows the best performance among the different machine learning models. These improvements are reflected in the model's test performance metric (F1 score) of 93.33%, an improvement of 2%-12% over the state-of-the-art model under various conditions. The experimental results show that a fusion of classical neural networks and modern deep learning architectures is a suitable choice for interpreting gamma spectra data where real-time and remote detection is necessary.
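
A fully connected branch and a convolutional branch over the same spectrum, concatenated before a classification head, is the simplest reading of the hybrid described above. The sketch below follows that reading; the number of spectrum bins, channel counts, and class count are illustrative assumptions.

```python
# Minimal sketch of a hybrid fully-connected + convolutional model for
# 1-D gamma-ray spectra, in the spirit of the abstract. Channel counts,
# kernel sizes, and the number of spectrum bins (1024) and isotope
# classes (8) are illustrative assumptions.
import torch
import torch.nn as nn

class HybridSpectrumNet(nn.Module):
    def __init__(self, n_bins: int = 1024, n_classes: int = 8):
        super().__init__()
        self.conv = nn.Sequential(                  # convolutional branch
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),  # -> 32 * 8 features
        )
        self.dense = nn.Sequential(                 # fully-connected branch
            nn.Linear(n_bins, 128), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 8 + 128, n_classes)

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        conv_feat = self.conv(spectrum.unsqueeze(1))   # (N, 1, bins)
        dense_feat = self.dense(spectrum)
        return self.head(torch.cat([conv_feat, dense_feat], dim=1))

logits = HybridSpectrumNet()(torch.randn(4, 1024))     # 4 example spectra
```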

RNN-based integrated system for real-time sensor fault detection and fault-informed accident diagnosis in nuclear power plant accidents

  • Jeonghun Choi;Seung Jun Lee
    • Nuclear Engineering and Technology / v.55 no.3 / pp.814-826 / 2023
  • Sensor faults in nuclear power plant instrumentation can propagate erroneous signals that lead plant operators to misdiagnose an accident. To detect sensor faults and make accurate accident diagnoses, prior studies have developed a supervised learning-based sensor fault detection model and an accident diagnosis model with faulty-sensor isolation. Even though the developed neural network models demonstrated satisfactory performance, their diagnosis performance should be re-evaluated under real-time operation. When operating in real time, the diagnosis model is expected to indiscriminately accept fault data before receiving the delayed fault information transferred from the preceding fault detection model. The uncertainty of neural networks can also have a significant impact, depending on the sensor fault features. In the present work, a pilot study was conducted to connect the two models and observe actual outcomes from a real-time application of the integrated system. While the initial results showed an overall successful diagnosis, some issues were observed. To recover from the diagnosis performance degradations, additional logic was applied to minimize the diagnosis failures that had not been observed in the previous validations of the separate models. The results of a case study were then analyzed in terms of the real-time diagnosis outputs that plant operators would actually face in an emergency.
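
The key real-time issue in the abstract is that the diagnosis model consumes sensor data before the detector's fault mask arrives. The toy loop below illustrates that timing structure only; both models are stand-ins, and the threshold, window length, and accident labels are invented for illustration.

```python
# Minimal sketch of the integrated real-time loop the abstract
# describes: a detection model flags faulty sensors, and the diagnosis
# model receives a (possibly delayed) fault mask before classifying
# the accident. Both models are stand-ins, not the paper's networks.
import numpy as np

def detect_faults(window: np.ndarray) -> np.ndarray:
    """Stand-in detector: flag sensors whose reading jumps implausibly."""
    jumps = np.abs(np.diff(window, axis=0)).max(axis=0)
    return jumps > 3.0                       # boolean fault mask per sensor

def diagnose(window: np.ndarray, fault_mask: np.ndarray) -> str:
    """Stand-in diagnosis: ignore isolated sensors, then classify."""
    healthy = window[:, ~fault_mask]
    return "LOCA" if healthy.mean() > 0.0 else "SGTR"   # placeholder labels

stream = np.random.randn(100, 12)            # 100 time steps, 12 sensors
delay, mask = 5, np.zeros(12, dtype=bool)    # fault info arrives late
for t in range(10, 100):
    window = stream[t - 10 : t]
    if t % delay == 0:                       # detector output is delayed
        mask = detect_faults(window)
    label = diagnose(window, mask)           # diagnosis uses the stale mask
```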

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a method for detecting accompanying status with a deep learning model that uses only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a redefined part of user interaction behavior: whether the user is accompanying an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed on each x, y, and z axis value of the sensor data, and sequence data were generated with the sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, so as to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We apply dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models, tailored to the training data, to transfer to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered in the model learning stage.
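
The abstract is unusually specific about its architecture and training settings, which makes a sketch straightforward. The following follows those stated settings (three convolutional layers with no pooling, a two-layer LSTM with 128 cells, dropout on the LSTM inputs, softmax with cross entropy, ADAM at 0.001 with 0.99 exponential decay, mini-batch 128); kernel sizes, channel counts, dropout rate, and the 9-channel input (3 sensors x 3 axes) are illustrative assumptions.

```python
# Minimal sketch of the CNN + LSTM classifier the abstract describes.
# Kernel sizes, channel counts, and the dropout rate are illustrative
# assumptions; the training settings follow the abstract.
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, n_channels: int = 9, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(          # 3 conv layers, no pooling,
            nn.Conv1d(n_channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
        )                                   # so temporal length is kept
        self.drop = nn.Dropout(0.5)         # dropout on LSTM inputs (rate assumed)
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x).transpose(1, 2)     # (N, T, 64) feature maps
        out, _ = self.lstm(self.drop(feat))
        return self.head(out[:, -1])            # logits; softmax is in the loss

model = AccompanyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # ADAM, lr = 0.001
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)
loss = nn.CrossEntropyLoss()(model(torch.randn(128, 9, 100)),
                             torch.randint(0, 2, (128,)))  # mini-batch of 128
```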

Design of a NeuroFuzzy Controller for the Integrated System of Voice and Data Over Wireless Medium Access Control Protocol (무선 매체 접근 제어 프로토콜 상에서의 음성/데이타 통합 시스템을 위한 뉴로 퍼지 제어기 설계)

  • Choi, Won-Seock;Kim, Eung-Ju;Kim, Beom-Soo;Lim, Myo-Taeg
    • Proceedings of the KIEE Conference / 2001.07d / pp.1990-1992 / 2001
  • In this paper, a neuro-fuzzy controller (NFC) with an enhanced packet reservation multiple access (PRMA) protocol for QoS-guaranteed multimedia communication systems is proposed. The enhanced PRMA protocol adopts a mini-slot technique to reduce contention cost, and these mini-slots are further partitioned into multiple MAC regions for access requests coming from users with their respective quality-of-service (QoS) requirements. The NFC is designed to properly determine the MAC regions and access probability, enhancing PRMA efficiency under QoS constraints. It mainly contains a voice traffic estimator, including a slot information estimator based on recurrent neural networks (RNNs) trained with real-time recurrent learning (RTRL), and a fuzzy logic controller with Mamdani- and Sugeno-type fuzzy rules. Simulation results show that the enhanced PRMA protocol with the NFC can guarantee QoS requirements for all traffic loads and further achieves higher system utilization and lower non-real-time packet delay compared to the previously studied PRMA, IPRMA, SIR, HAR, and F2RAC schemes.
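
The abstract's control stage maps an estimated voice traffic load to an access probability through fuzzy rules. The sketch below shows a zero-order Sugeno-style version of that mapping; the membership functions and rule outputs are invented for illustration and are not the controller in the paper.

```python
# Minimal sketch of a Sugeno-type fuzzy rule stage of the kind the
# abstract describes: estimated voice traffic load in, access
# probability out. Memberships and rule outputs are assumptions.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def access_probability(load: float) -> float:
    """Weighted average of rule outputs (zero-order Sugeno)."""
    rules = [
        (tri(load, -0.5, 0.0, 0.5), 0.9),   # load LOW  -> high access prob
        (tri(load,  0.0, 0.5, 1.0), 0.5),   # load MED  -> medium
        (tri(load,  0.5, 1.0, 1.5), 0.1),   # load HIGH -> low
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5

print(access_probability(0.3))   # e.g. a load estimated by the RNN stage
```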

Deep Learning Approaches to RUL Prediction of Lithium-ion Batteries (딥러닝을 이용한 리튬이온 배터리 잔여 유효수명 예측)

  • Jung, Sang-Jin;Hur, Jang-Wook
    • Journal of the Korean Society of Manufacturing Process Engineers / v.19 no.12 / pp.21-27 / 2020
  • Lithium-ion batteries are the heart of energy storage devices and electric vehicles. Owing to their superior qualities, such as high capacity and energy efficiency, they have become quite popular, resulting in increased demand for failure and damage prevention and for maximizing usable life. To prevent failures in lithium-ion batteries, improve their reliability, and ensure productivity, prognostic measures such as condition monitoring through sensors, condition assessment for failure detection, and remaining useful life prediction through data-driven prognostics and health management approaches have become important research topics. In this study, the remaining useful life of lithium-ion batteries was predicted using two efficient artificial recurrent neural networks: long short-term memory (LSTM) and the gated recurrent unit (GRU). The proposed approaches were compared for prognostic accuracy and cost efficiency. LSTM showed slightly higher accuracy, whereas the GRU has a computational advantage.
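
The comparison boils down to swapping the recurrent cell in an otherwise identical regression network, as the minimal sketch below illustrates; the hidden size, single-layer depth, and capacity-history input are illustrative assumptions. The GRU's cost advantage comes from having fewer gates, and hence fewer parameters, than the LSTM.

```python
# Minimal sketch contrasting the two recurrent architectures compared
# in the paper for RUL regression. Layer sizes and the capacity-history
# input are illustrative assumptions.
import torch
import torch.nn as nn

def rul_model(cell: str = "lstm", hidden: int = 64) -> nn.Module:
    """One recurrent layer followed by a scalar RUL regression head."""
    rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = rnn_cls(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)
        def forward(self, x):
            out, _ = self.rnn(x)          # x: (N, T, 1) capacity history
            return self.head(out[:, -1])  # predicted remaining useful life
    return Net()

history = torch.randn(16, 50, 1)          # 16 batteries, 50 cycles each
lstm_pred = rul_model("lstm")(history)    # slightly higher accuracy in the study
gru_pred = rul_model("gru")(history)      # cheaper to compute
```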

Linkage of Hydrological Model and Machine Learning for Real-time Prediction of River Flood (수문모형과 기계학습을 연계한 실시간 하천홍수 예측)

  • Lee, Jae Yeong;Kim, Hyun Il;Han, Kun Yeun
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.3 / pp.303-314 / 2020
  • The hydrological characteristics of watersheds and the hydraulic systems of urban and river floods are highly nonlinear and contain uncertain variables. Therefore, the time series of rainfall-runoff data predicted in flood analysis is not well handled by conventional neural networks. To overcome this prediction challenge, a NARX (nonlinear autoregressive exogenous) model, a kind of recurrent dynamic neural network that maximizes the learning ability of a neural network while also having the characteristics of a time-delay neural network, was applied to forecast floods in real time. In this study, a hydrological model was constructed for the Taehwa river basin, and the NARX time-delay parameter was adjusted from 10 to 120 minutes. As a result, we found that increasing the time-delay parameter enables more precise prediction: the NSE increased from 0.530 to 0.988 and the RMSE decreased from 379.9 ㎥/s to 16.1 ㎥/s. The machine learning technique with NARX will contribute to accurate prediction of flow rate under unexpected extreme flood conditions.
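
A NARX model predicts the next discharge value from lagged discharge and lagged exogenous rainfall. The sketch below builds such lagged training pairs and fits a small feed-forward regressor, then scores it with the NSE metric reported above; the lag counts, network size, and synthetic series are illustrative assumptions.

```python
# Minimal sketch of the NARX idea used here: predict discharge y(t)
# from lagged rainfall x and lagged discharge y. The lag counts and
# the regressor are illustrative assumptions (the study tuned the
# time delay between 10 and 120 minutes).
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_dataset(rain, flow, d_x=6, d_y=6):
    """Build [x(t-d_x..t-1), y(t-d_y..t-1)] -> y(t) training pairs."""
    X, y = [], []
    for t in range(max(d_x, d_y), len(flow)):
        X.append(np.r_[rain[t - d_x : t], flow[t - d_y : t]])
        y.append(flow[t])
    return np.array(X), np.array(y)

rain = np.random.rand(500)                     # stand-in rainfall series
flow = np.convolve(rain, np.ones(10), "same")  # stand-in discharge series
X, y = narx_dataset(rain, flow)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

pred = model.predict(X)
nse = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)  # Nash-Sutcliffe
```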

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society / v.19 no.6 / pp.1161-1167 / 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine learning-based translators. Existing translators perform well but have problems with number translation: they often mistranslate numbers when the input sentence includes a large number, and the output sentence structure can change completely even if only one number in the input sentence changes. In this paper, we first optimize a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and changes to the dictionary size. Then, we implement a number-processing algorithm specialized for number translation and apply it to the neural machine translation model to solve the problems above. The paper presents the data cleansing method, an optimal dictionary size, and the number-processing algorithm, as well as experimental results for translation performance based on the BLEU score.
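
Number symbolization of this kind typically replaces each number with a placeholder token before translation and restores it afterwards, so the model sees a small closed vocabulary of number symbols. A minimal sketch of that round trip follows; the <NUM_i> token format and the regex are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of the number-symbolization idea: swap numbers for
# indexed placeholder tokens before translation, restore them after.
# The <NUM_i> token format is an illustrative assumption.
import re

def symbolize(sentence: str):
    """Swap numbers for indexed placeholder tokens."""
    numbers = re.findall(r"\d+(?:[.,]\d+)*", sentence)
    for i, num in enumerate(numbers):
        sentence = sentence.replace(num, f"<NUM_{i}>", 1)
    return sentence, numbers

def restore(sentence: str, numbers):
    """Put the original numbers back into the translated output."""
    for i, num in enumerate(numbers):
        sentence = sentence.replace(f"<NUM_{i}>", num)
    return sentence

src, nums = symbolize("The company sold 1,250,000 units in 2017.")
# src -> "The company sold <NUM_0> units in <NUM_1>."
# after translation, restore(translated, nums) reinserts the numbers
```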