• Title/Summary/Keyword: Network Technique


Improving Non-Profiled Side-Channel Analysis Using Auto-Encoder Based Noise Reduction Preprocessing (비프로파일링 기반 전력 분석의 성능 향상을 위한 오토인코더 기반 잡음 제거 기술)

  • Kwon, Donggeun;Jin, Sunghyun;Kim, HeeSeok;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.3
    • /
    • pp.491-501
    • /
    • 2019
  • In side-channel analysis, which exploits physical leakage from a cryptographic device, deep learning based attacks have attracted significant interest in recent years. However, most state-of-the-art methods focus on classifying side-channel information in a profiled scenario, where attackers can obtain labels for the training data. In this paper, we propose a new deep learning based method to improve non-profiled side-channel attacks such as Differential Power Analysis and Correlation Power Analysis. The proposed method is a signal preprocessing technique that reduces the noise in a trace by adapting the Auto-Encoder framework to the context of side-channel analysis. Previous work on Denoising Auto-Encoders trained on noise randomly added by the attacker; in this paper, the proposed model trains the Auto-Encoder on the noise present in real data by using noise-reduced labels. In addition, the proposed method makes it possible to perform a non-profiled attack by training only a single neural network. We validate the noise reduction performance of the proposed method on real traces collected from a ChipWhisperer board and demonstrate that it outperforms classic preprocessing methods such as Principal Component Analysis and Linear Discriminant Analysis.
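
A minimal sketch of the idea, assuming a Keras-style 1-D convolutional autoencoder; the trace length, layer sizes, and the use of averaged same-label traces as the noise-reduced target are illustrative assumptions, not the authors' actual architecture:

```python
# Hedged sketch (not the authors' code): a 1-D convolutional autoencoder that
# learns to map noisy power traces to averaged "noise-reduced" target traces.
from tensorflow.keras import layers, models

TRACE_LEN = 5000  # assumed number of samples per trace

def build_denoising_autoencoder():
    inp = layers.Input(shape=(TRACE_LEN, 1))
    x = layers.Conv1D(16, 11, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2, padding="same")(x)
    x = layers.Conv1D(32, 11, padding="same", activation="relu")(x)
    encoded = layers.MaxPooling1D(2, padding="same")(x)
    x = layers.Conv1D(32, 11, padding="same", activation="relu")(encoded)
    x = layers.UpSampling1D(2)(x)
    x = layers.Conv1D(16, 11, padding="same", activation="relu")(x)
    x = layers.UpSampling1D(2)(x)
    out = layers.Conv1D(1, 11, padding="same")(x)
    return models.Model(inp, out)

# noisy_traces: raw measurements; target_traces: averaged traces sharing the
# same label, used here as the noise-reduced training target (assumption).
model = build_denoising_autoencoder()
model.compile(optimizer="adam", loss="mse")
# model.fit(noisy_traces, target_traces, epochs=50, batch_size=64)
# denoised = model.predict(noisy_traces)  # then feed into DPA/CPA
```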

A proposal on a proactive crawling approach with analysis of state-of-the-art web crawling algorithms (최신 웹 크롤링 알고리즘 분석 및 선제적인 크롤링 기법 제안)

  • Na, Chul-Won;On, Byung-Won
    • Journal of Internet Computing and Services
    • /
    • v.20 no.3
    • /
    • pp.43-59
    • /
    • 2019
  • Today, with the spread of smartphones and the development of social networking services, structured and unstructured big data have accumulated exponentially. If we analyze these data well, we can obtain useful information for predicting the future. To analyze big data, large amounts of data must first be collected, and the web is the repository where most of these data are stored. However, because the volume is so large, many pages carry no useful information, so it is important to collect data efficiently by filtering out unnecessary pages and gathering only pages with useful information. Web crawlers cannot download all pages due to constraints such as network bandwidth, operational time, and data storage, which is why a crawler should avoid visiting pages that are not relevant to what we want and download only important pages as soon as possible. This paper seeks to help resolve these issues. First, we introduce basic web crawling algorithms; for each algorithm, the time complexity and the pros and cons are described, compared, and analyzed. Next, we introduce state-of-the-art web crawling algorithms that improve on the shortcomings of the basic algorithms. In addition, recent research trends show that web crawling algorithms with special purposes, such as collecting sentiment words, are actively being studied. As an example of such special-purpose crawling, we introduce a sentiment-aware web crawling technique, which is a proactive web crawling technique. The results showed that the larger the data, the higher the performance and the more storage space is saved.
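
The proactive, special-purpose crawling discussed above can be illustrated with a best-first (priority-queue) crawler; this is a generic sketch, not the paper's algorithm, and the keyword-based relevance score stands in for whatever scoring a sentiment-aware crawler actually uses:

```python
# Hedged sketch: a minimal focused crawler that scores links by topical
# relevance and visits the most promising pages first. Keywords and the
# scoring function are illustrative placeholders.
import heapq
import urllib.parse
from urllib.request import urlopen
from html.parser import HTMLParser

KEYWORDS = {"sentiment", "review", "opinion"}  # assumed topic of interest

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def score(page_text):
    """Higher keyword count = more relevant; negate because heapq is a min-heap."""
    return -sum(page_text.lower().count(k) for k in KEYWORDS)

def crawl(seed, max_pages=50):
    frontier = [(0, seed)]
    visited = set()
    while frontier and len(visited) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        # Links inherit the relevance score of the page they were found on.
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)
            if absolute not in visited:
                heapq.heappush(frontier, (score(html), absolute))
    return visited
```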

The Status of Managing Posttraumatic Stress in Life Managers for Elderly People Living Alone and Measures for its Improvement: Focusing on Employees in Seoul (독거노인생활관리사의 외상 후 스트레스 관리 실태와 개선 방안: 서울 지역 종사자를 중심으로)

  • Kim, Keun-Hong;Yang, Jae-seok;Lee, Gyeong-jin;Kim, Jeong-yeon
    • 한국노년학
    • /
    • v.37 no.2
    • /
    • pp.293-308
    • /
    • 2017
  • This study examines Life Managers for Elderly People Living Alone (LMEPLAs) in Seoul regarding their traumatic experiences, the status of their posttraumatic stress, and how they cope with it, in order to identify ways to improve the situation. As a study method, we surveyed LMEPLAs in Seoul through a self-administered questionnaire on whether they had had any traumatic experience, the types of traumatic experience, diagnosis of posttraumatic stress, and how they coped with the experience. According to the results, 186 respondents (37.57%) showed either partial or complete posttraumatic stress symptoms, but their coping status is very poor. The results of our discussion are as follows. First, it is necessary to identify life managers suffering from posttraumatic stress disorder and to build a system to manage them consistently. Second, education about traumatic experience and posttraumatic stress management should be strengthened. Third, a support system for life managers who have had a traumatic experience or been diagnosed with posttraumatic stress disorder is urgently needed. Fourth, experts with specialized knowledge and skills should be cultivated and deployed. Fifth, a network with medical institutes is needed so that they can receive prompt diagnosis and specialized treatment.

Development and Application of Two-Dimensional Numerical Tank using Desingularized Indirect Boundary Integral Equation Method (비특이화 간접경계적분방정식방법을 이용한 2차원 수치수조 개발 및 적용)

  • Oh, Seunghoon;Cho, Seok-kyu;Jung, Dongho;Sung, Hong Gun
    • Journal of Ocean Engineering and Technology
    • /
    • v.32 no.6
    • /
    • pp.447-457
    • /
    • 2018
  • In this study, a two-dimensional fully nonlinear transient wave numerical tank was developed using a desingularized indirect boundary integral equation method. The desingularized indirect boundary integral equation method is simpler and faster than the conventional boundary element method because no special treatment is required to compute the boundary integral. Numerical simulations were carried out in the time domain using the fourth-order Runge-Kutta method. A mixed Eulerian-Lagrangian approach was adopted to reconstruct the free surface at each time step. A numerical damping zone was used to minimize the reflected wave in the downstream region. An interpolation method based on a Gaussian radial basis function-type artificial neural network was used to calculate the gradient of the free surface elevation without element connectivity. The desingularized indirect boundary integral equation using isolated point sources and radial basis functions requires no information about element connectivity and is a meshless method that is numerically more flexible. To validate the accuracy of the numerical wave tank based on the desingularized indirect boundary integral equation method and the meshless technique, several numerical simulations were carried out. First, results for different types of desingularized sources were compared, confirming that continuous line sources can be replaced by simple isolated sources. In addition, a propagation simulation of a 2nd-order Stokes wave was carried out and compared with the analytical solution. Finally, simulations of waves propagating in shallow water and over a submerged bar were also carried out and compared with published data.
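
The Gaussian radial basis function interpolation used to obtain the free-surface gradient without element connectivity can be illustrated as follows; this is a minimal sketch with an assumed shape parameter and node layout, not the authors' implementation:

```python
# Hedged sketch: interpolate the free-surface elevation eta(x) with Gaussian
# RBFs so that its spatial gradient can be evaluated without any element
# connectivity. Node layout and shape parameter eps are illustrative.
import numpy as np

def rbf_fit(x_nodes, eta_nodes, eps=3.0):
    """Solve for weights w in eta(x) = sum_j w_j * exp(-(eps*(x - x_j))**2)."""
    r = x_nodes[:, None] - x_nodes[None, :]
    A = np.exp(-(eps * r) ** 2)
    return np.linalg.solve(A, eta_nodes)

def rbf_gradient(x_eval, x_nodes, w, eps=3.0):
    """Analytic derivative of the Gaussian RBF interpolant at x_eval."""
    r = x_eval[:, None] - x_nodes[None, :]
    dphi = -2.0 * eps**2 * r * np.exp(-(eps * r) ** 2)
    return dphi @ w

# Example: recover the slope of a sine-shaped free surface from scattered nodes.
x = np.linspace(0.0, 2 * np.pi, 40)
eta = 0.1 * np.sin(x)
w = rbf_fit(x, eta)
slope = rbf_gradient(x, x, w)              # approximates 0.1 * cos(x)
print(np.max(np.abs(slope - 0.1 * np.cos(x))))
```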

A System Recovery using Hyper-Ledger Fabric BlockChain (하이퍼레저 패브릭 블록체인을 활용한 시스템 복구 기법)

  • Bae, Su-Hwan;Cho, Sun-Ok;Shin, Yong-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.2
    • /
    • pp.155-161
    • /
    • 2019
  • Currently, numerous companies and institutes provide services over the Internet and establish and operate information systems to manage them efficiently and reliably. An information system carries the risk of losing the ability to provide normal services due to a disaster or failure, and organizations prepare for this by operating a disaster recovery system. However, existing disaster recovery systems cannot perform a normal recovery if the files needed for system recovery are corrupted. In this paper, we propose a system that verifies the integrity of system recovery files and proceeds with recovery by utilizing a Hyperledger Fabric blockchain. The PBFT consensus algorithm is used to generate the blocks and is performed by the leader node of the blockchain network. In the event of a failure, the integrity of the recovery file is verified by comparing its hash value with the hash value stored in the blockchain, and recovery then proceeds. To evaluate the proposed technique, a comparative analysis with existing system recovery techniques was conducted on data consistency, data retention capability, and recovery file integrity, and the amount of traffic generated when using the proposed technique was analyzed to determine whether it is actually applicable.
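
The hash comparison step can be illustrated with a short sketch; it is not the paper's Fabric chaincode, and the ledger lookup is assumed to have already returned the stored hash:

```python
# Hedged sketch: verify a recovery file's integrity by comparing its SHA-256
# digest with a hash assumed to have been recorded on the blockchain when the
# file was registered. File names are placeholders.
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_recovery_file(path, hash_on_ledger):
    """Return True only if the local recovery file matches the on-chain record."""
    return file_sha256(path) == hash_on_ledger

# Usage (values are placeholders): proceed with recovery only on a match.
# if verify_recovery_file("backup.img", ledger_hash):
#     start_recovery("backup.img")
```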

Clustering Performance Analysis of Autoencoder with Skip Connection (스킵연결이 적용된 오토인코더 모델의 클러스터링 성능 분석)

  • Jo, In-su;Kang, Yunhee;Choi, Dong-bin;Park, Young B.
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.403-410
    • /
    • 2020
  • In addition to research on noise removal and super-resolution using the data restoration (output) function of autoencoders, research on improving clustering performance using the dimension reduction function of autoencoders is actively being conducted. The clustering function and the data restoration function of an autoencoder have in common that both are improved through the same training. Based on this, this study conducted an experiment to see whether an autoencoder model designed for excellent data restoration performance is also superior in clustering performance. The skip connection technique was used to design an autoencoder with excellent data restoration performance. The restoration performance and clustering performance of the autoencoder model with skip connections and the model without them were presented as graphs and visualizations. The restoration performance increased, but the clustering performance decreased. This result indicates that, for neural network models such as autoencoders, a good reconstruction output does not guarantee that each layer has learned the characteristics of the data well. Finally, the degradation in clustering performance was compensated for by using both the latent code and the skip connection. This study is a preliminary study toward solving the Hanja Unicode problem by clustering.
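
A minimal sketch of an autoencoder with a skip connection from an encoder layer to the matching decoder layer, written in Keras with illustrative layer sizes (not the authors' model); the encoder output serves as the latent code on which clustering would be run:

```python
# Hedged sketch: a small autoencoder with an optional skip connection, so that
# reconstruction and clustering (on the latent code) can be compared.
from tensorflow.keras import layers, models

def build_autoencoder(input_dim=784, latent_dim=32, use_skip=True):
    inp = layers.Input(shape=(input_dim,))
    h1 = layers.Dense(256, activation="relu")(inp)
    latent = layers.Dense(latent_dim, activation="relu", name="latent")(h1)
    d1 = layers.Dense(256, activation="relu")(latent)
    if use_skip:
        # Skip connection: reuse the encoder feature map in the decoder.
        d1 = layers.Add()([d1, h1])
    out = layers.Dense(input_dim, activation="sigmoid")(d1)
    ae = models.Model(inp, out)
    encoder = models.Model(inp, latent)   # latent codes used for clustering
    ae.compile(optimizer="adam", loss="mse")
    return ae, encoder

# ae.fit(x, x, ...) trains reconstruction; clustering (e.g. k-means) is then
# run on encoder.predict(x) to compare the two variants.
```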

An exploratory study for the development of a education framework for supporting children's development in the convergence of "art activity" and "language activity": Focused on Text mining method ('미술'과 '언어' 활동 융합형의 아동 발달지원 교육 프레임워크 개발을 위한 탐색적 연구: 텍스트 마이닝을 중심으로)

  • Park, Yunmi;Kim, Sijeong
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.3
    • /
    • pp.297-304
    • /
    • 2021
  • This study aims not only to build on the visual thinking-oriented approach used in established art therapy and education but also to integrate language education and a therapeutic approach to support the development of school-age children. To this end, a text mining technique was applied to search for areas where the different domains of language and art can be integrated. The research followed the procedure of basic research, preliminary DB construction, text screening, DB pre-processing and confirmation, stop-word removal, text mining analysis, and derivation of the convergence areas. The results show that the study identifies convergence areas related to region, communication, and learning functions; problem solving and sensory organs; art and intelligence; information and communication; home and disability; topics, conceptualization, and peer relations; and integration, reorganization, and attitudes. In conclusion, this study is meaningful in that it established a framework for designing an activity-centered convergence program of art and language and attempted a holistic approach to support child development.
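
The stop-word removal and term-frequency steps named in the procedure can be illustrated with a minimal sketch; the documents, stop-word list, and tokenizer are placeholders, not the study's corpus or tools:

```python
# Hedged sketch: stop-word removal followed by term-frequency counting, the
# kind of preprocessing used before deriving candidate convergence areas.
from collections import Counter
import re

STOP_WORDS = {"the", "and", "of", "to", "in", "a", "is", "for"}  # illustrative

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def term_frequencies(documents):
    counts = Counter()
    for doc in documents:
        counts.update(t for t in tokenize(doc) if t not in STOP_WORDS)
    return counts

docs = [
    "Art activity supports communication and problem solving in children.",
    "Language activity supports learning, communication, and peer relations.",
]
print(term_frequencies(docs).most_common(5))
```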

Simplified Bridge Weigh-In-Motion Algorithm using Strain Response of Short Span RC T-beam Bridge with no Crossbeam installed (가로보가 없는 단지간 RC T빔교의 변형률 응답을 이용한 단순화된 BWIM (Bridge Weigh-In-Motion) 알고리즘)

  • Jeon, Jun-Chang;Hwang, Yoon Koog;Lee, Hee-Hyun
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.25 no.3
    • /
    • pp.57-67
    • /
    • 2021
  • Thorough administration of the arterial road network requires a continuous supply of updated and accurate information about the traffic travelling on the roads. One effective way to obtain the traffic volume and weight distribution of heavy vehicles is the Bridge Weigh-In-Motion (BWIM) technique, which is being actively studied. Unlike previous studies, this study develops a simplified BWIM algorithm that can easily estimate the axle spacing and weight of a travelling vehicle by utilizing the structural characteristics of the bridge. A short-span RC T-beam bridge with no crossbeam installed was selected for the study, and the strain response characteristics of the bridge deck and girders were examined through a preliminary field test. Based on the preliminary field test results, a simplified BWIM algorithm suitable for the bridge under study was derived. The validity and accuracy of the derived BWIM algorithm were verified through a field test. As a result of the verification test, the proposed BWIM algorithm can estimate the axle spacing and gross weight of travelling vehicles with an average percent error of less than 3%.
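
The core BWIM idea of recovering axle spacing from girder strain peaks can be illustrated with a generic sketch (not the paper's simplified algorithm); the signal, peak threshold, and vehicle speed are illustrative:

```python
# Hedged sketch: locate the strain peaks produced by each axle and multiply
# the peak time gaps by the vehicle speed to estimate axle spacing.
import numpy as np

def axle_spacings(strain, sample_rate_hz, speed_mps, threshold=0.5):
    """Return estimated axle spacings (m) from a strain time history."""
    strain = np.asarray(strain, dtype=float)
    norm = strain / np.max(np.abs(strain))
    # A sample is a peak if it exceeds the threshold and both neighbours.
    peaks = [i for i in range(1, len(norm) - 1)
             if norm[i] > threshold
             and norm[i] >= norm[i - 1] and norm[i] > norm[i + 1]]
    peak_times = np.array(peaks) / sample_rate_hz
    return np.diff(peak_times) * speed_mps

# Example: two synthetic axle peaks 0.2 s apart at 20 m/s -> spacing ~4 m.
t = np.arange(0.0, 1.0, 0.001)
signal = np.exp(-((t - 0.4) / 0.02) ** 2) + 0.8 * np.exp(-((t - 0.6) / 0.02) ** 2)
print(axle_spacings(signal, sample_rate_hz=1000, speed_mps=20.0))
```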

MLP-based 3D Geotechnical Layer Mapping Using Borehole Database in Seoul, South Korea (MLP 기반의 서울시 3차원 지반공간모델링 연구)

  • Ji, Yoonsoo;Kim, Han-Saem;Lee, Moon-Gyo;Cho, Hyung-Ik;Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society
    • /
    • v.37 no.5
    • /
    • pp.47-63
    • /
    • 2021
  • Recently, the demand for three-dimensional (3D) underground maps from the perspective of digital twins and for their linked utilization is increasing. However, the vastness of national geotechnical survey data and the uncertainty in applying geostatistical techniques pose challenges in modeling regional subsurface geotechnical characteristics. In this study, an optimal learning model based on a multi-layer perceptron (MLP) was constructed for 3D subsurface lithological and geotechnical classification in Seoul, South Korea. First, the geotechnical layers and 3D spatial coordinates of each borehole dataset in the Seoul area were organized into a geotechnical database according to a standardized format, and data pre-processing for machine learning, such as correction of missing values and normalization, was performed. An optimal fitting model was designed through hyperparameter optimization of the MLP model and model performance evaluation, such as precision and accuracy tests. Then, a 3D grid network locally assigning the geotechnical layer classification was constructed by applying the MLP-based best-fitting model to each unit lattice. The constructed 3D geotechnical layer map was evaluated by comparison with the results of a geostatistical interpolation technique and the topsoil properties in the geological map.
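
An MLP layer-classification step of this kind can be sketched with scikit-learn; the features, labels, and hyperparameters below are placeholders, not the study's optimized model:

```python
# Hedged sketch: classify the geotechnical layer at a 3D coordinate from
# normalized (x, y, depth) inputs with an MLP. Data are synthetic stand-ins
# for the borehole database described in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))            # stand-in for (x, y, depth)
y = (X[:, 2] > 0.5).astype(int)           # stand-in layer labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))

# Predicting over a regular 3D grid yields the voxel-wise layer map described
# in the abstract: each unit lattice cell receives the predicted layer class.
```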

3D Mesh Reconstruction Technique from Single Image using Deep Learning and Sphere Shape Transformation Method (딥러닝과 구체의 형태 변형 방법을 이용한 단일 이미지에서의 3D Mesh 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.2
    • /
    • pp.160-168
    • /
    • 2022
  • In this paper, we propose a 3D mesh reconstruction method from a single image using deep learning and a sphere shape transformation method. The proposed method differs from existing methods in the following ways. First, the positions of the sphere's vertices are modified through a deep learning network to closely match the 3D point cloud of the object, unlike existing methods that build edges or faces by connecting nearby points. Because a 3D point cloud is used, less memory is required, and faster operation is possible because only an addition is performed between the offset values and the vertices of the sphere. Second, the 3D mesh is reconstructed by applying the surface (face) information of the sphere to the modified vertices. Even when the distances between the points of the 3D point cloud created by correcting the vertex positions are not constant, the face information of the sphere, which indicates whether points are connected, is already available, preventing simplification or loss of expressiveness. To evaluate the objective reliability of the proposed method, experiments were conducted in the same way as in the comparison papers using the ShapeNet dataset, an open standard dataset. As a result, the IoU value of the proposed method was 0.581 and the chamfer distance was 0.212; a higher IoU value and a lower chamfer distance indicate better results. Therefore, the efficiency of the 3D mesh reconstruction was demonstrated compared with methods published in other papers.
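
The sphere-deformation idea, adding predicted per-vertex offsets while reusing the sphere's face list, can be illustrated as follows; the offset predictor here is a placeholder for the deep learning network:

```python
# Hedged sketch: deform a sphere mesh by adding per-vertex offsets while
# keeping the sphere's face (connectivity) list unchanged, so the result is
# immediately a valid mesh.
import numpy as np

def unit_sphere(n_lat=16, n_lon=32):
    """Generate vertices and triangular faces of a UV sphere."""
    verts = []
    for i in range(n_lat + 1):
        theta = np.pi * i / n_lat
        for j in range(n_lon):
            phi = 2 * np.pi * j / n_lon
            verts.append([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    faces = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * n_lon + j
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j
            d = (i + 1) * n_lon + (j + 1) % n_lon
            faces.append([a, b, c])
            faces.append([b, d, c])
    return np.array(verts), np.array(faces)

def reconstruct_mesh(vertices, faces, predict_offsets):
    """Add predicted offsets to sphere vertices; faces are reused as-is."""
    return vertices + predict_offsets(vertices), faces

# Placeholder offset predictor (stands in for the network): stretches the
# sphere into an ellipsoid-like shape.
verts, faces = unit_sphere()
new_verts, new_faces = reconstruct_mesh(verts, faces,
                                        lambda v: v * np.array([0.5, -0.5, 0.0]))
print(new_verts.shape, new_faces.shape)
```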