• Title/Summary/Keyword: Implementation technique

Power-based Side-Channel Analysis Against AES Implementations: Evaluation and Comparison

  • Benhadjyoussef, Noura;Karmani, Mouna;Machhout, Mohsen
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.4
    • /
    • pp.264-271
    • /
    • 2021
  • From an information security perspective, protecting sensitive data requires utilizing algorithms that resist theoretical attacks. However, treating an algorithm in a purely mathematical fashion, in other words abstracting away from its physical (hardware or software) implementation, opens the door to various real-world security threats. In the modern age of electronics, cryptanalysis attempts to reveal secret information based on the physical properties of the cryptosystem rather than exploiting theoretical weaknesses in the implemented cryptographic algorithm. The correlation power attack (CPA) is a side-channel analysis attack that reveals sensitive information from the power leakage of a device. In this paper, we present a power-hacking technique to demonstrate how power analysis can be exploited to reveal the secret information in an AES crypto-core. In the proposed case study, we explain the main techniques that can break the security of the considered crypto-core using a CPA attack. Using two cryptographic devices, an FPGA and an 8051 microcontroller, the experimental attack procedure shows that the AES hardware implementation has better resistance against power attacks than the software one. We also observe that the efficiency of the CPA attack depends statistically on the implementation and on the power model used for power prediction.
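
The core computation in a CPA attack of the kind evaluated above is a Pearson correlation between measured power traces and a hypothetical leakage model, typically the Hamming weight of the first-round S-box output. The following is a minimal sketch, not the paper's code, assuming NumPy arrays of traces and plaintext bytes and that the caller supplies the standard 256-entry AES S-box table:

```python
import numpy as np

# Hamming-weight lookup for all byte values: the usual CPA power model.
HW = np.array([bin(x).count("1") for x in range(256)], dtype=np.float64)

def cpa_key_byte(traces, plaintexts, sbox):
    """Recover one AES key byte by correlation power analysis.

    traces     : (n_traces, n_samples) array of measured power traces
    plaintexts : (n_traces,) array with the targeted plaintext byte
    sbox       : the standard 256-entry AES S-box table (supplied by caller)
    """
    traces = np.asarray(traces, dtype=np.float64)
    plaintexts = np.asarray(plaintexts, dtype=np.int64)
    sbox = np.asarray(sbox, dtype=np.int64)
    t = traces - traces.mean(axis=0)                 # centre every sample point
    best_guess, best_peak = 0, -1.0
    for guess in range(256):
        # Hypothetical leakage: HW of the first-round S-box output.
        h = HW[sbox[plaintexts ^ guess]]
        h = h - h.mean()
        # Pearson correlation between the model and each time sample.
        corr = (h @ t) / (np.linalg.norm(h) * np.linalg.norm(t, axis=0) + 1e-12)
        peak = np.max(np.abs(corr))
        if peak > best_peak:
            best_guess, best_peak = guess, peak
    return best_guess, best_peak
```

Repeating this for every key byte recovers the first-round key; how many traces are needed for the correct guess to stand out is exactly where the hardware and software targets differ.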

Optimal Implementation of Lightweight Block Cipher PIPO on CUDA GPGPU (CUDA GPGPU 상에서 경량 블록 암호 PIPO의 최적 구현)

  • Kim, Hyun-Jun;Eum, Si-Woo;Seo, Hwa-Jeong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1035-1043
    • /
    • 2022
  • With the spread of the Internet of Things (IoT), cloud computing, and big data, the need for high-speed encryption in applications is growing. GPU implementations can also be used to validate, within a reasonable time, cryptanalytic results obtained theoretically on reduced versions of a cipher. In this paper, the PIPO lightweight block cipher, which has already been implemented in various environments, is implemented on a GPU. The implementation is optimized with a brute-force attack on PIPO in mind; in particular, the bit-slicing technique is applied and the GPU's resources are exploited as fully as possible. As a result, the proposed implementation achieves a throughput of about 19.5 billion per second in the RTX 3060 environment, about 122 times higher than that of the previous study.
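
Bit-slicing, the main optimization mentioned in the abstract, packs the j-th bit of many independent blocks into one machine word so that a single bitwise operation advances all blocks at once; on a GPU each thread would keep such packed words in registers while brute-force search assigns different key candidates to different threads. A minimal CPU-side sketch of the idea for 64-bit blocks (PIPO's block size), with the cipher's actual S-layer and bit permutation omitted:

```python
# Bit-slicing sketch: add_round_key shows the style of a sliced operation.

def pack(blocks):
    """Transpose a list of 64-bit blocks into 64 bit-planes (one lane per block)."""
    planes = [0] * 64
    for lane, block in enumerate(blocks):
        for j in range(64):
            planes[j] |= ((block >> j) & 1) << lane
    return planes

def unpack(planes, n_lanes):
    blocks = [0] * n_lanes
    for j, plane in enumerate(planes):
        for lane in range(n_lanes):
            blocks[lane] |= ((plane >> lane) & 1) << j
    return blocks

def add_round_key(planes, round_key, n_lanes):
    """If key bit j is set, flip bit-plane j in every lane simultaneously."""
    ones = (1 << n_lanes) - 1
    return [p ^ ones if (round_key >> j) & 1 else p for j, p in enumerate(planes)]

blocks = [0x0123456789ABCDEF, 0xFEDCBA9876543210]
key = 0xFFFFFFFFFFFFFFFF
sliced = add_round_key(pack(blocks), key, len(blocks))
assert unpack(sliced, len(blocks)) == [b ^ key for b in blocks]
```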

Optimal Decomposition of Convex Structuring Elements on a Hexagonal Grid

  • Ohn, Syng-Yup
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.3E
    • /
    • pp.37-43
    • /
    • 1999
  • In this paper, we present a new technique for the optimal local decomposition of convex structuring elements on a hexagonal grid, which are used as templates for morphological image processing. Each basis structuring element in a local decomposition is a local convex structuring element that fits within a hexagonal window centered at the origin. In general, local decomposition of a structuring element yields great savings in the processing time of morphological operations. First, we define a convex structuring element on a hexagonal grid and formulate the necessary and sufficient conditions for decomposing a convex structuring element into a set of basis convex structuring elements. A cost function is then defined to represent the amount of computation or execution time required to perform dilations in different computing environments and with different implementation methods. The decomposition condition and the cost function are applied to find the optimal local decomposition of convex structuring elements, which guarantees the minimal amount of computation for a morphological operation. Simulation shows that optimal local decomposition greatly reduces the amount of computation for morphological operations. Our technique is general and flexible, since different cost functions can be used to achieve optimal local decompositions for different computing environments and implementation methods.
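
The saving from local decomposition comes from the chain rule of dilation: dilating by B = B1 ⊕ B2 ⊕ ... costs the sum of the basis sizes rather than the size of B itself. A minimal sketch on axial hexagonal coordinates with an illustrative 7-point local window (not the paper's cost function or decomposition search):

```python
# Dilation (Minkowski sum) on a hexagonal grid in axial (q, r) coordinates.

def dilate(A, B):
    """Dilation of point set A by structuring element B (both sets of (q, r))."""
    return {(aq + bq, ar + br) for (aq, ar) in A for (bq, br) in B}

# A local convex structuring element: the origin plus its six hexagonal neighbours.
HEX7 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)}

def dilate_decomposed(A, basis):
    """Apply a local decomposition as chained dilations by the basis elements."""
    for Bi in basis:
        A = dilate(A, Bi)
    return A

A = {(0, 0), (2, -1)}
# Chained local dilations equal one dilation by the composed (larger) element.
assert dilate_decomposed(A, [HEX7, HEX7]) == dilate(A, dilate(HEX7, HEX7))
```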

GPU-Accelerated Single Image Depth Estimation with Color-Filtered Aperture

  • Hsu, Yueh-Teng;Chen, Chun-Chieh;Tseng, Shu-Ming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.3
    • /
    • pp.1058-1070
    • /
    • 2014
  • There are two major approaches to depth estimation: multiple-image depth estimation and single-image depth estimation. The former has a high hardware cost because it uses multiple cameras, but its software algorithm is simple. Conversely, the latter has a low hardware cost, but its software algorithm is complex. One of the recent trends in this field is to make systems compact, or even portable, and to simplify the optical elements attached to a conventional camera. In this paper, we present an implementation of single-image depth estimation using a graphics processing unit (GPU) in a desktop PC and achieve real-time operation through our evolutionary algorithm and parallel processing technique, employing a compute shader. These methods greatly accelerate the compute-intensive depth estimation from a single view image, from 0.003 frames per second (fps) (implemented in MATLAB) to 53 fps, almost twice the real-time standard of 30 fps. To the best of our knowledge, no previous paper discusses the optimization of depth estimation using a single image, and the frame rate of our final result is better than that of previous studies using multiple images, whose frame rate is about 20 fps.
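
With a color-filtered aperture, scene depth shows up as a relative shift between color channels, so depth estimation reduces to a per-pixel search over candidate shifts. The paper's compute-shader implementation is not reproduced here; the following CPU sketch (using NumPy and SciPy for brevity, with an assumed horizontal R-B shift and illustrative parameters) only illustrates that search:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_from_cfa(img, max_shift=8, patch=7):
    """Estimate a per-pixel disparity between the R and B channels of a
    colour-filtered-aperture image; the disparity serves as a proxy for depth.
    CPU sketch only: a compute-shader version runs the same per-pixel search
    in parallel on the GPU."""
    r = img[..., 0].astype(np.float32)
    b = img[..., 2].astype(np.float32)
    best_cost = np.full(r.shape, np.inf, dtype=np.float32)
    depth = np.zeros(r.shape, dtype=np.int32)
    for d in range(-max_shift, max_shift + 1):
        # Hypothesise a horizontal channel shift and aggregate the cost per patch.
        cost = uniform_filter((r - np.roll(b, d, axis=1)) ** 2, size=patch)
        better = cost < best_cost
        best_cost[better] = cost[better]
        depth[better] = d            # keep the shift with the lowest patch cost
    return depth
```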

Human Motion Tracking With Wireless Wearable Sensor Network: Experience and Lessons

  • Chen, Jianxin;Zhou, Liang;Zhang, Yun;Ferreiro, David Fondo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.5
    • /
    • pp.998-1013
    • /
    • 2013
  • Wireless wearable sensor networks have emerged as a promising technique for human motion tracking due to their flexibility and scalability. In such a system, several wireless sensor nodes attached to human limbs form a wearable sensor network, where each sensor node, equipped with MEMS sensors (such as a 3-axis accelerometer, a 3-axis magnetometer, and a 3-axis gyroscope), monitors the limb orientation and transmits this information to the base station for reconstruction via a low-power wireless communication technique. The energy constraint, the high-fidelity requirement for real-time rendering of human motion, and the tiny operating system embedded in each sensor node add further challenges to the system implementation. In this paper, we discuss these challenges and our experiences in detail during the implementation of such a system with a wireless wearable sensor network built from COTS wireless sensor nodes (Imote 2) running TinyOS 1.x on each node. Since our system uses COTS sensor nodes and a popular tiny operating system, it may be helpful for further exploration in this field.
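
Each node must turn its 9-axis MEMS readings into a limb orientation before the motion can be reconstructed at the base station. The paper does not specify the exact fusion filter, so the sketch below shows one common choice, a simple complementary filter that blends integrated gyro rates with accelerometer/magnetometer angles (illustrative gain, crude tilt-uncompensated heading):

```python
import numpy as np

def complementary_filter(gyro, accel, mag, dt, alpha=0.98):
    """Fuse 3-axis gyro (rad/s), accelerometer (g) and magnetometer samples,
    each of shape (N, 3), into roll/pitch/yaw angles of shape (N, 3).
    Illustrative only: ignores angle wrap-around and tilt compensation."""
    angles = np.zeros((len(gyro), 3))
    for k in range(1, len(gyro)):
        pred = angles[k - 1] + gyro[k] * dt              # gyro integration
        ax, ay, az = accel[k]
        roll = np.arctan2(ay, az)                        # gravity direction
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        mx, my, _ = mag[k]
        yaw = np.arctan2(-my, mx)                        # crude magnetic heading
        angles[k] = alpha * pred + (1 - alpha) * np.array([roll, pitch, yaw])
    return angles
```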

Subband Affine Projection Adaptive Filter using Variable Step Size and Pipeline Transform (가변 적응상수와 파이프라인 변환을 이용한 부밴드 인접투사 적응필터)

  • Choi, Hun;Ha, Hong-Gon;Bae, Hyeon-Deok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.1
    • /
    • pp.104-110
    • /
    • 2009
  • In this paper, we propose a new technique that employs a pipelined architecture for the implementation of the subband affine projection (SAP) adaptive filter using a variable step size. When the SAP adaptive filter is sufficiently decomposed, a simplified SAP adaptive filter can be derived, and the weights of the adaptive sub-filters can be updated by a simple formula without a matrix inversion. The convergence speed and the steady-state error of the simplified SAP adaptive filter are improved by using a variable step size. For practical implementation, the simplified SAP adaptive sub-filters are transformed using the pipeline technique.
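
A sketch of what a single simplified sub-filter update can look like: a normalized, inversion-free weight update per subband whose step size is varied with an error-power measure. The variable-step-size rule below is illustrative only; the paper's exact formula and the pipeline transformation are not reproduced:

```python
import numpy as np

def subband_nlms_vss(x_sub, d_sub, taps=32, mu_min=0.05, mu_max=1.0, eps=1e-8):
    """One sub-filter of a decomposed (simplified) SAP structure for a single
    subband signal pair (x_sub, d_sub)."""
    x_sub = np.asarray(x_sub, dtype=float)
    d_sub = np.asarray(d_sub, dtype=float)
    w = np.zeros(taps)
    err = np.zeros(len(x_sub))
    err_pow = 0.0
    for n in range(taps, len(x_sub)):
        u = x_sub[n - taps:n][::-1]             # regression vector
        e = d_sub[n] - w @ u                    # a priori error
        err_pow = 0.9 * err_pow + 0.1 * e * e   # smoothed error power
        mu = mu_min + (mu_max - mu_min) * err_pow / (err_pow + 1.0)
        w += mu * e * u / (u @ u + eps)         # NLMS-style update, no inversion
        err[n] = e
    return w, err
```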

An XPath Accelerator on Relational Databases: An Implementation and Its Tuning (관계형 데이터베이스를 이용한 XPath Accelerator: 구현과 튜닝)

  • Shin Jin-Ho;Na Gap-Joo;Lee Sang-Won
    • The KIPS Transactions:PartD
    • /
    • v.12D no.2 s.98
    • /
    • pp.189-198
    • /
    • 2005
  • XML is rapidly becoming the standard for data representation and exchange, and XML documents are being adopted in various applications. Since the late 1990s, several native XML database management systems (DBMSs) have been developed. More recently, commercial relational DBMS vendors have been trying to incorporate full XML functionality into their products, such as Oracle, MS SQL Server, and IBM DB2. In this paper, we implement a well-known RDBMS-based XML storage and indexing technique called the XPath Accelerator and tune it on an industry-leading RDBMS. Our contributions are two-fold: 1) an in-depth implementation of the XPath Accelerator technique, and 2) its tuning to exploit the advanced query processing techniques of an RDBMS.
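
The XPath Accelerator encodes every node by its pre-order and post-order ranks so that XPath axes become range predicates an RDBMS can index; for example, the descendants of a context node v are exactly the nodes with pre > v.pre and post < v.post. A minimal sketch using SQLite and an illustrative table layout (the paper targets a commercial RDBMS and a richer node encoding):

```python
import sqlite3

def encode(root):
    """Assign pre/post ranks to an XML tree given as (tag, [children]) tuples."""
    rows, pre_ctr, post_ctr = [], [0], [0]
    def walk(node, parent_pre):
        tag, children = node
        pre = pre_ctr[0]; pre_ctr[0] += 1
        for child in children:
            walk(child, pre)
        post = post_ctr[0]; post_ctr[0] += 1
        rows.append((pre, post, parent_pre, tag))
    walk(root, None)
    return rows

# One relational row per node: (pre, post, parent, tag).
tree = ("a", [("b", [("c", [])]), ("d", [])])
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accel(pre INT, post INT, parent INT, tag TEXT)")
conn.executemany("INSERT INTO accel VALUES (?,?,?,?)", encode(tree))

# Descendant axis of the context node v: pre > v.pre AND post < v.post.
q = """SELECT c.tag FROM accel v JOIN accel c
       ON c.pre > v.pre AND c.post < v.post WHERE v.tag = 'b'"""
print(conn.execute(q).fetchall())   # -> [('c',)]
```

Much of the tuning discussed in such work then comes down to which indexes are built over the (pre, post) columns and how the optimizer handles the range join.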

Design and Implementation of SOA based S/W Services for Dynamic Behavior of Embedded System (임베디드 시스템의 유기적인 동작을 위한 SOA기반의 S/W서비스 설계와 구현)

  • Park, Won-Kyu;Park, Young-Bum
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.4
    • /
    • pp.29-34
    • /
    • 2010
  • By its nature, an embedded system operates according to user-specified requirements, so dynamic action (behavior) is needed when the user's requirements change or unexpected situations occur. In this paper, we propose the design and implementation of SOA (service-oriented architecture)-based S/W services for the dynamic behavior of an embedded system. With the proposed technique, the status of the embedded system can be checked through Web services, and in exceptional situations the required actions can be newly updated through Web services. Through this technique, the burden on users of handling exceptional situations can be reduced, and convenience of use can be increased as well.
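
As an illustration of the status-checking side of such a service, the sketch below exposes a device status record over HTTP/JSON; the endpoint name, payload, and transport are assumptions for illustration and do not reproduce the paper's actual SOA service interface:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical device status record kept up to date by the embedded application.
DEVICE_STATUS = {"temperature": 41.5, "mode": "normal", "last_error": None}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":                 # illustrative endpoint name
            body = json.dumps(DEVICE_STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```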

Development of Field Programmable Gate Array-based Reactor Trip Functions Using Systems Engineering Approach

  • Jung, Jaecheon;Ahmed, Ibrahim
    • Nuclear Engineering and Technology
    • /
    • v.48 no.4
    • /
    • pp.1047-1057
    • /
    • 2016
  • A design engineering process for field programmable gate array (FPGA)-based reactor trip functions is developed in this work. The process discussed here is based on the systems engineering approach. The overall design process is effectively implemented by combining the design and implementation processes, transforming the overall development process from the traditional V-model to a Y-model. This approach gives the benefit of concurrent engineering of design work and software implementation and, as a result, reduces development time and effort. The design engineering process consists of five activities, which are performed and discussed: needs/systems analysis; requirements analysis; functional analysis; design synthesis; and design verification and validation. These activities are used to develop FPGA-based reactor bistable trip functions that trigger a reactor trip when the process input value exceeds the setpoint. To implement design synthesis effectively, a model-based design technique is applied. The finite-state machine with data path (FSMD) structural modeling technique, together with the very high speed integrated circuit hardware description language (VHDL) and the Aldec Active-HDL tool, is used to design, model, and verify the reactor bistable trip functions for nuclear power plants.
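
The bistable trip function itself is a small finite-state machine with a data path: a comparator data path plus a two-state machine that latches a trip when the process input exceeds the setpoint. The paper models this in VHDL; the following is only a behavioral Python sketch with an assumed reset hysteresis:

```python
NORMAL, TRIPPED = "NORMAL", "TRIPPED"

def bistable_trip(samples, setpoint, hysteresis=0.0):
    """Yield (sample, state) pairs: trip when the input exceeds the setpoint,
    reset only after it falls back below setpoint - hysteresis."""
    state = NORMAL
    for x in samples:
        if state == NORMAL and x > setpoint:             # data path: comparator
            state = TRIPPED                              # FSM transition: latch trip
        elif state == TRIPPED and x < setpoint - hysteresis:
            state = NORMAL
        yield x, state

# Example: the trip latches at 105 and clears once the value drops below 98.
print(list(bistable_trip([90, 100, 105, 101, 97], setpoint=100, hysteresis=2)))
```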

Reduction in Sample Size for Efficient Monte Carlo Localization (효율적인 몬테카를로 위치추정을 위한 샘플 수의 감소)

  • Yang Ju-Ho;Song Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.450-456
    • /
    • 2006
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot. Although MCL can estimate the robot pose even from a completely unknown initial pose in a known environment, it takes considerable time to produce an initial pose estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, topological information generated through the thinning technique, which is commonly used in image processing, is employed. The global topological map is first created from the given grid map of the environment. The robot then scans the local environment using a laser rangefinder and generates a local topological map. The robot navigates only on this local topological edge, which is likely to be similar to the one obtained off-line from the given grid map. Since the robot traverses along the edge, random samples are drawn near the topological edge instead of being taken with a uniform distribution over the whole environment. Experimental results using the proposed method show that the number of samples can be reduced considerably and that the time required for robot pose estimation can also be substantially decreased without adverse effects on the performance of MCL.
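
The key change to standard MCL is where the initial samples come from: instead of a uniform draw over the whole map, particles are placed near the thinning-derived topological edge. A minimal sketch of that sampling step (the spread and the uniform heading are illustrative assumptions):

```python
import numpy as np

def sample_near_edges(edge_points, n_samples, pos_sigma=0.2, rng=None):
    """Draw MCL particles near topological edge points instead of uniformly
    over the whole map.  edge_points: (M, 2) array of (x, y) skeleton cells
    obtained by thinning the grid map; pos_sigma is an illustrative spread."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(edge_points), size=n_samples)
    xy = edge_points[idx] + rng.normal(scale=pos_sigma, size=(n_samples, 2))
    theta = rng.uniform(-np.pi, np.pi, size=n_samples)   # heading still unknown
    weights = np.full(n_samples, 1.0 / n_samples)
    return np.column_stack([xy, theta]), weights
```

Because the robot is known to travel along the edge, far fewer particles are wasted in free space that the robot cannot occupy, which is where the reported reduction in sample size comes from.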