• Title/Summary/Keyword: Initialization vector

The Security DV-Hop Algorithm against Multiple-Wormhole-Node-Link in WSN

  • Li, Jianpo;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2223-2242, 2019
  • The Distance Vector-Hop (DV-Hop) algorithm is widely used for node localization, but it often suffers from wormhole attacks. Current research focuses on the Double-Wormhole-Node-Link (DWNL) case and pays limited attention to the Multi-Wormhole-Node-Link (MWNL) case. In this paper, we propose a secure DV-Hop algorithm (AMLDV-Hop) to resist MWNL. First, the algorithm establishes the Neighbor List (NL) in the initialization phase. It uses the NL to find suspect beacon nodes and then identifies the actually attacked beacon nodes by calculating the distances to other beacon nodes. The attacked beacon nodes generate and broadcast conflict sets to distinguish the different wormhole areas. The unknown nodes take the marked beacon nodes as references and mark themselves with different numbers in the first-round marking. Unknown nodes that fail to mark themselves take the marked unknown nodes as references in the second-round marking. Unknown nodes that still fail to be marked are semi-isolated. The results indicate that the localization error of the proposed AMLDV-Hop algorithm is reduced by 112.3%, 10.2%, 41.7%, and 6.9% compared to the attacked DV-Hop algorithm, the Label-based DV-Hop (LBDV-Hop), the Secure Neighbor Discovery Based DV-Hop (NDDV-Hop), and the Against Wormhole DV-Hop (AWDV-Hop) algorithms, respectively.
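
A minimal sketch of the neighbor-list idea behind this detection step, under the assumption that beacon nodes know their own coordinates and the radio range R: a wormhole link makes two physically distant beacons appear as neighbors, so a beacon pair in each other's NL whose coordinate distance exceeds R can be flagged as suspect. The data layout and names below are illustrative, not the paper's exact AMLDV-Hop procedure.

```python
# Illustrative sketch, not the paper's AMLDV-Hop: flag beacon pairs that
# appear as radio neighbors although their known coordinates are out of range.
import math

R = 30.0  # assumed radio range in meters

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_neighbor_lists(links):
    """links: iterable of (node_id, node_id) pairs observed in the init phase."""
    nl = {}
    for u, v in links:
        nl.setdefault(u, set()).add(v)
        nl.setdefault(v, set()).add(u)
    return nl

def suspect_beacons(nl, beacon_pos):
    """Flag beacons whose NL contains a beacon that is physically out of range."""
    suspects = set()
    for b, neighbors in nl.items():
        if b not in beacon_pos:
            continue  # unknown node: no position available
        for n in neighbors:
            if n in beacon_pos and distance(beacon_pos[b], beacon_pos[n]) > R:
                suspects.update((b, n))  # both ends of the implausible link
    return suspects
```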

Integrated Algorithm for Identification of Long Range Artillery Type and Impact Point Prediction With IMM Filter (IMM 필터를 이용한 장사정포의 탄종 분리 및 탄착점 예측 통합 알고리즘)

  • Jung, Cheol-Goo;Lee, Chang-Hun;Tahk, Min-Jea;Yoo, Dong-Gil;Sohn, Sung-Hwan
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.50 no.8, pp.531-540, 2022
  • In this paper, we present an algorithm that identifies the artillery type and rapidly predicts the impact point based on an IMM filter. The ballistic trajectory equation is used as the system model, and three models with different ballistic coefficient values are used. Acceleration is divided into the three components of gravity, air resistance, and lift, and lift acceleration is added as a new state variable. The kinematic condition that the velocity vector and lift acceleration are perpendicular is used as a pseudo-measurement. The impact point is predicted from the state variables estimated through the IMM filter and the ballistic coefficient of the model with the highest mode probability. Instead of the commonly used Runge-Kutta numerical integration for impact point prediction, a semi-analytic method is used to predict the impact point with a small amount of computation. Finally, a state variable initialization method using the least-squares method is proposed. An integrated algorithm including artillery type identification, impact point prediction, and initialization is presented, and the validity of the proposed method is verified through simulation.
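
As a rough illustration of how the IMM filter's mode probabilities arbitrate between the three ballistic-coefficient models, the sketch below shows only the mode-probability bookkeeping (mixing prediction and Bayes update with per-model measurement likelihoods); the underlying Kalman filters, the pseudo-measurement, and the semi-analytic impact prediction are omitted, and the transition matrix and likelihood values are illustrative placeholders.

```python
# Minimal IMM mode-probability update for three ballistic models.
import numpy as np

P = np.array([[0.90, 0.05, 0.05],   # assumed Markov transition probabilities
              [0.05, 0.90, 0.05],   # between the three ballistic-coefficient
              [0.05, 0.05, 0.90]])  # models

def imm_mode_update(mu, likelihoods):
    """mu: current mode probabilities; likelihoods: p(z | model j) per model."""
    c = P.T @ mu                 # predicted mode probabilities (mixing step)
    mu_new = likelihoods * c     # Bayes update with each model's likelihood
    return mu_new / mu_new.sum()

mu = np.array([1 / 3, 1 / 3, 1 / 3])                 # uniform initialization
mu = imm_mode_update(mu, np.array([0.2, 0.7, 0.1]))  # illustrative likelihoods
best_model = int(np.argmax(mu))  # model whose coefficient drives impact prediction
```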

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.59-83, 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracies using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 Naver Shopping cosmetics product reviews are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: only sentence splitting, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of the morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for a morpheme to be included seem to have no definite influence on the classification accuracy.
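
As a concrete illustration of the stated derivation setting (CBOW, context window 5, vector dimension 300, morphemes optionally carrying POS tags), here is a minimal sketch using gensim's Word2Vec (4.x API); the tiny tagged corpus is illustrative, and in practice a Korean morphological analyzer would produce the tokens.

```python
# Deriving 300-dimensional morpheme vectors with CBOW (window 5), as in the
# study; the morpheme-tokenized corpus and POS-tag format are illustrative.
from gensim.models import Word2Vec

# Each sentence is a list of morphemes, here tagged as "morpheme/POS".
corpus = [
    ["예쁘/VA", "고/EC", "배송/NNG", "빠르/VA"],
    ["향/NNG", "좋/VA", "지만/EC", "용량/NNG", "작/VA"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,  # vector dimension used in the study
    window=5,         # context window used in the study
    min_count=1,      # the study also varies this minimum-frequency cutoff
    sg=0,             # 0 = CBOW
)
vec = model.wv["예쁘/VA"]  # a 300-dimensional morpheme vector
```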

Optimization of Structure-Adaptive Self-Organizing Map Using Genetic Algorithm (유전자 알고리즘을 사용한 구조적응 자기구성 지도의 최적화)

  • Kim, Hyun-Don;Cho, Sung-Bae
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.3, pp.223-230, 2001
  • Since the self-organizing map (SOM) preserves the topology of ordering in input spaces and trains itself by an unsupervised algorithm, it is used in many areas. However, SOM has a shortcoming: its structure cannot be easily determined without much trial and error. The structure-adaptive self-organizing map (SASOM), which can adapt its structure as well as its weights, overcomes this shortcoming: SASOM makes use of its structure adaptation capability to place the nodes of prototype vectors into the pattern space accurately, so as to make the decision boundaries as close to the class boundaries as possible. In this scheme, the initialization of the weights of newly adapted nodes is important. This paper proposes a method which optimizes SASOM with a genetic algorithm (GA) to determine the weight vector of a newly split node. The learning algorithm is a hybrid of an unsupervised learning method and a supervised learning method using the LVQ algorithm. The proposed method not only shows higher performance than SASOM in terms of recognition rate and variation, but also preserves the topological order of input patterns well. Experiments with 2D pattern space data and a handwritten digit database show that the proposed method is promising.
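
The sketch below illustrates the general idea of delegating a new node's weight initialization to a GA: candidate weight vectors are evolved, and each is scored by nearest-prototype classification accuracy on labeled samples. The fitness function, operators, and rates are illustrative assumptions, not the paper's exact GA or its hybrid LVQ learning.

```python
# Illustrative GA for choosing the weight vector of a newly split node.
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X, y, prototypes, labels, new_label):
    """Nearest-prototype accuracy if the candidate weight w is added."""
    protos = np.vstack([prototypes, w])
    labs = np.append(labels, new_label)
    pred = labs[np.argmin(((X[:, None] - protos) ** 2).sum(-1), axis=1)]
    return (pred == y).mean()

def ga_init_weight(X, y, prototypes, labels, new_label, pop=20, gens=30):
    dim = X.shape[1]
    population = rng.uniform(X.min(0), X.max(0), size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y, prototypes, labels, new_label)
                           for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # selection
        children = (parents[rng.integers(len(parents), size=pop - len(parents))]
                    + rng.normal(0, 0.05, size=(pop - len(parents), dim)))  # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(w, X, y, prototypes, labels, new_label)
                       for w in population])
    return population[np.argmax(scores)]  # best candidate initializes the node
```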


The problem resolution algorithm in ESP protocol (ESP 프로토콜에서의 문제점 보완 알고리즘)

  • Lee, Yeong-Ji;Kim, Tae-Yun
    • The KIPS Transactions:PartC, v.9C no.2, pp.189-196, 2002
  • IPSec is a protocol which provides data encryption, message authentication, and data integrity for transmission over public, open networks. In IPSec, the ESP protocol is used when data encryption, authentication, and integrity must be provided for the actual transmitted packets. The ESP protocol uses the DES-CBC encryption mode: the sender encrypts packets and the receiver decrypts them through this mode, and an IV (initialization vector) is used at that time. This value is exposed to many attacks during transmission because it is transferred in the clear. If the IV value is modified, then decryption of the ESP data is impossible and higher-level information is changed. In this paper we propose a new algorithm that encrypts IV values using the DES-ECB mode to prevent IV attacks and checks the integrity of the whole ESP data using a message authentication function. Therefore, we can protect against attacks on the IV and the data, and guarantee safer transmission over the public network.

A Mechanism for the Secure IV Transmission in IPSec (IPSec에서 안전한 IV 전송을 위한 메커니즘)

  • Lee, Young-Ji;Park, Nam-Sup;Kim, Tai-Yun
    • Journal of KIISE:Information Networking, v.29 no.2, pp.156-164, 2002
  • IPSec is a protocol which provides data encryption, message authentication, and data integrity for transmission over public, open networks. In IPSec, the ESP protocol is used when data encryption, authentication, and integrity must be provided for the actual transmitted packets. The ESP protocol uses the DES-CBC encryption mode: the sender encrypts packets and the receiver decrypts them through this mode, and an IV is used at that time. This value is exposed to many attacks during transmission because it is transferred in the clear. If the IV value is modified, then decryption of the ESP data is impossible and higher-level information is changed. In this paper we propose a new algorithm that encrypts IV values using the DES-ECB mode to prevent IV attacks and checks the integrity of the whole ESP data using a message authentication function. Therefore, we can protect against attacks on the IV and the data, and guarantee more secure transmission over the public network.
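
A minimal sketch of the mechanism both of the abstracts above describe: the 8-byte DES-CBC IV is encrypted under DES-ECB rather than sent in the clear, and a message authentication code over the whole ESP data makes IV tampering detectable. It uses PyCryptodome for DES and an HMAC as the (unspecified) message authentication function; keys and IV are illustrative, and DES itself is long obsolete (the papers date from 2002).

```python
# Illustrative sketch: protect the DES-CBC IV with DES-ECB and authenticate.
import hmac, hashlib
from Crypto.Cipher import DES

des_key = b"8bytekey"          # shared DES key (illustrative)
mac_key = b"integrity-key"     # shared MAC key (illustrative)
iv = b"\x01\x02\x03\x04\x05\x06\x07\x08"  # per-packet DES-CBC IV

# Sender: hide the IV with DES-ECB instead of sending it in the clear.
protected_iv = DES.new(des_key, DES.MODE_ECB).encrypt(iv)

# Sender: encrypt the payload with DES-CBC under the original IV.
payload = b"ESP payload padded to 8-byte blocks!!!!!"  # 40 bytes, block-aligned
ciphertext = DES.new(des_key, DES.MODE_CBC, iv=iv).encrypt(payload)

# Sender: MAC over the whole ESP data so IV tampering is detectable.
tag = hmac.new(mac_key, protected_iv + ciphertext, hashlib.sha256).digest()

# Receiver: verify integrity first, then recover the IV and decrypt.
assert hmac.compare_digest(tag, hmac.new(mac_key, protected_iv + ciphertext,
                                         hashlib.sha256).digest())
recovered_iv = DES.new(des_key, DES.MODE_ECB).decrypt(protected_iv)
plaintext = DES.new(des_key, DES.MODE_CBC, iv=recovered_iv).decrypt(ciphertext)
```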

Assessment of Classification Accuracy of fNIRS-Based Brain-computer Interface Dataset Employing Elastic Net-Based Feature Selection (Elastic net 기반 특징 선택을 적용한 fNIRS 기반 뇌-컴퓨터 인터페이스 데이터셋 분류 정확도 평가)

  • Shin, Jaeyoung
    • Journal of Biomedical Engineering Research, v.42 no.6, pp.268-276, 2021
  • Functional near-infrared spectroscopy-based brain-computer interfaces (fNIRS-based BCI) have been receiving much attention. However, we are practically constrained from obtaining a lot of fNIRS data by the inherent hemodynamic delay. For this reason, when employing machine learning techniques, problems due to high-dimensional feature vectors may be encountered, such as deteriorated classification accuracy. In this study, we employ elastic net-based feature selection, one of the embedded methods, and demonstrate its utility by analyzing the results. Using the fNIRS dataset obtained from 18 participants for classifying brain activation induced by mental arithmetic versus the idle state, we calculated classification accuracies after performing feature selection while changing the parameter α (the weight of lasso vs. ridge regularization). Grand averages of classification accuracy are 80.0 ± 9.4%, 79.3 ± 9.6%, 79.0 ± 9.2%, 79.7 ± 10.1%, 77.6 ± 10.3%, 79.2 ± 8.9%, and 80.0 ± 7.8% for α = 0.001, 0.005, 0.01, 0.05, 0.1, 0.2, and 0.5, respectively, and are not statistically different from the grand average of classification accuracy estimated with all features (80.1 ± 9.5%). As a result, no difference in classification accuracy is revealed for any of the considered α values. In particular, for α = 0.5, we are able to achieve the statistically same level of classification accuracy with only 16.4% of the total features. Since elastic net-based feature selection can be easily applied to other cases without complicated initialization and parameter fine-tuning, we expect that it can be actively applied to fNIRS data.
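
A minimal sketch of the embedded elastic net-based selection described above, using scikit-learn; the paper's α (weight of lasso vs. ridge regularization) corresponds to scikit-learn's l1_ratio, while the synthetic data, regularization strength, and downstream classifier are illustrative.

```python
# Embedded elastic net feature selection for a binary classification task
# (mental arithmetic vs. idle); synthetic data stands in for fNIRS features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))      # high-dimensional feature vectors
y = rng.integers(0, 2, size=60)     # mental arithmetic (1) vs. idle (0)

for l1_ratio in [0.001, 0.01, 0.1, 0.5]:  # subset of the paper's alpha values
    selector = SelectFromModel(
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=l1_ratio, C=0.1, max_iter=5000),
        threshold=1e-5,             # keep features with nonzero weights
    )
    clf = make_pipeline(selector, LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"l1_ratio={l1_ratio}: accuracy={acc:.3f}")
```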

Investigation on the nonintrusive multi-fidelity reduced-order modeling for PWR rod bundles

  • Kang, Huilun;Tian, Zhaofei;Chen, Guangliang;Li, Lei;Chu, Tianhui
    • Nuclear Engineering and Technology, v.54 no.5, pp.1825-1834, 2022
  • Performing high-fidelity computational fluid dynamics (HF-CFD) to predict the flow and heat transfer state of the coolant in a reactor core is expensive, especially in scenarios that require extensive parameter searches, such as uncertainty analysis and design optimization. This work investigated the performance of a multi-fidelity reduced-order model (MF-ROM) for PWR rod bundle simulation. First, basis vectors and basis vector coefficients of the high-fidelity and low-fidelity CFD results are extracted separately by the proper orthogonal decomposition (POD) approach. Second, a surrogate model is trained to map the relationship between the coefficients extracted from the different fidelity results. In the prediction stage, the coefficients of the low-fidelity data under new operating conditions are extracted using the obtained POD basis vectors. Then, the trained surrogate model uses the low-fidelity coefficients to regress the high-fidelity coefficients. The predicted high-fidelity data is reconstructed from the product of the extracted basis vectors and the regressed coefficients. The effectiveness of the MF-ROM is evaluated on a flow and heat transfer problem in PWR fuel rod bundles. Two data-driven algorithms, Kriging and an artificial neural network (ANN), are trained as surrogate models for the MF-ROM to reconstruct the complex flow and heat transfer field downstream of the mixing vanes. The results show good agreement between the data reconstructed with the trained MF-ROM and the high-fidelity CFD simulation result, while the former requires only the computational burden of a low-fidelity simulation. The results also show that the performance of the ANN model is slightly better than that of the Kriging model when a high number of POD basis vectors is used for regression. Moreover, the results presented in this paper demonstrate the suitability of the proposed MF-ROM for high-fidelity fixed-value initialization to accelerate complex simulations.
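
A minimal sketch of the nonintrusive MF-ROM pipeline as described: POD bases from low- and high-fidelity snapshot matrices, a surrogate mapping low-fidelity (LF) coefficients to high-fidelity (HF) coefficients, and reconstruction for a new condition. Synthetic snapshots stand in for the CFD fields, and an MLP stands in for the paper's ANN surrogate.

```python
# Nonintrusive MF-ROM sketch: POD + coefficient-to-coefficient surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cases, r = 40, 8                       # training cases, POD rank
lf = rng.normal(size=(2000, n_cases))    # LF snapshots, one column per case
hf = rng.normal(size=(8000, n_cases))    # HF snapshots, same cases

# Step 1: POD bases via thin SVD, truncated to r modes.
U_lf = np.linalg.svd(lf, full_matrices=False)[0][:, :r]
U_hf = np.linalg.svd(hf, full_matrices=False)[0][:, :r]

# Step 2: surrogate mapping LF coefficients to HF coefficients.
a_lf = U_lf.T @ lf                       # (r, n_cases) LF coefficients
a_hf = U_hf.T @ hf                       # (r, n_cases) HF coefficients
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(a_lf.T, a_hf.T)

# Prediction: run only the cheap LF simulation for a new condition, project
# onto the LF basis, regress HF coefficients, and reconstruct the HF field.
lf_new = rng.normal(size=(2000, 1))
a_new = (U_lf.T @ lf_new).T                   # shape (1, r)
hf_pred = U_hf @ surrogate.predict(a_new).T   # reconstructed HF field
```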

Optimized Implementation of PIPO Lightweight Block Cipher on 32-bit RISC-V Processor (32-bit RISC-V상에서의 PIPO 경량 블록암호 최적화 구현)

  • Eum, Si Woo;Jang, Kyung Bae;Song, Gyeong Ju;Lee, Min Woo;Seo, Hwa Jeong
    • KIPS Transactions on Computer and Communication Systems, v.11 no.6, pp.167-174, 2022
  • The PIPO lightweight block cipher was announced at ICISC'20. In this paper, single-block and parallel optimized implementations of the PIPO lightweight block cipher in the ECB, CBC, and CTR operation modes are presented for a 32-bit RISC-V processor. The single-block implementation proposes an efficient implementation of the 8-bit-unit Rlayer function on 32-bit registers. In the parallel implementation, internal alignment of registers is performed, and a method by which four different blocks perform Rlayer function operations in one register is described. In addition, since the parallel implementation technique is difficult to apply to the encryption process of the CBC operation mode, it is proposed to apply it to the decryption process instead. In the parallel implementation of the CTR operation mode, an extended initialization vector is used to propose a technique that omits the internal register alignment. This paper shows that the parallel implementation technique is applicable to several block cipher operation modes. As a result, compared to the performance of the existing implementation that includes the key schedule process in the ECB operation mode, performance improvements of 1.7 times for the single-block implementation and 1.89 times for the parallel implementation are confirmed.
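
To make the CTR-mode setting concrete, the sketch below shows the standard construction of counter-mode input blocks for a 64-bit block cipher such as PIPO: consecutive blocks differ only in the counter portion of the initialization vector, which is the regularity an extended (precomputed) IV exploits to omit per-block register realignment. The cipher argument is a placeholder, not an actual PIPO implementation, and the byte layout is an assumption.

```python
# CTR-mode input blocks for a 64-bit block cipher (PIPO block size).
BLOCK = 8  # 64 bits

def ctr_blocks(iv: bytes, n: int):
    """Yield the n counter-mode input blocks derived from the IV."""
    assert len(iv) == BLOCK
    base = int.from_bytes(iv, "big")
    for ctr in range(n):
        # Consecutive blocks differ only in the low (counter) bytes.
        yield ((base + ctr) % (1 << 64)).to_bytes(BLOCK, "big")

def ctr_encrypt(cipher, iv: bytes, plaintext: bytes) -> bytes:
    """cipher: a placeholder block-encryption function of one 8-byte block."""
    n = (len(plaintext) + BLOCK - 1) // BLOCK
    keystream = b"".join(cipher(b) for b in ctr_blocks(iv, n))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))
```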

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society, v.26 no.3, pp.123-131, 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles exists as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture holds light color and the second holds light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxels, and the second 3D texture accumulates the sum of the direction vectors from the corresponding voxels to the particle lights. The second step operates in the general rendering pipeline. Based on the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then carried out based on the sampled color and the light direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, but in areas smaller than 1 m, problems such as staircase artifacts caused by resolution restrictions occur. Interpolation and supersampling are performed during texture sampling to mitigate these problems. Measurements of the time taken to render a frame showed that, when the number of particles was 262,144, 146 ms was spent in the forward lighting pipeline and 46 ms in the deferred lighting pipeline; when the number of particle lights was 1,024,766, 214 ms was spent in the forward lighting pipeline and 104 ms in the deferred lighting pipeline.
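
A minimal CPU-side sketch of the first stage described above, using NumPy in place of a compute shader: particle positions are quantized to 1 m voxels, the first 3D texture accumulates per-voxel color sums, and the second accumulates per-voxel sums of direction vectors toward the lights. For brevity each particle affects only its own voxel here, whereas the paper accumulates over each light's range of influence; sizes and data are illustrative.

```python
# CPU stand-in for the compute-shader update of the two 3D light textures.
import numpy as np

SIZE = 64                                   # 3D texture resolution (1 voxel = 1 m)
color_tex = np.zeros((SIZE, SIZE, SIZE, 3), dtype=np.float32)
dir_tex = np.zeros((SIZE, SIZE, SIZE, 3), dtype=np.float32)

rng = np.random.default_rng(0)
pos = rng.uniform(0, SIZE, size=(100_000, 3)).astype(np.float32)  # world = texture space
col = rng.uniform(0, 1, size=(100_000, 3)).astype(np.float32)

vox = pos.astype(np.int64)                  # world position -> voxel coordinates
idx = (vox[:, 0], vox[:, 1], vox[:, 2])

# First 3D texture: sum of particle-light colors per voxel.
np.add.at(color_tex, idx, col)

# Second 3D texture: sum of unit direction vectors from voxel centers toward
# the lights; at shading time a pixel samples both textures at its world position.
centers = vox + 0.5
d = pos - centers
d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-8  # guard against zero vectors
np.add.at(dir_tex, idx, d)
```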