• Title/Summary/Keyword: affine transformation


White-Box AES Implementation Revisited

  • Baek, Chung Hun;Cheon, Jung Hee;Hong, Hyunsook
    • Journal of Communications and Networks
    • /
    • v.18 no.3
    • /
    • pp.273-287
    • /
    • 2016
  • White-box cryptography, introduced by Chow et al., is an obfuscation technique for protecting secret keys in software implementations even when an adversary has full access to the implementation of the encryption algorithm and full control over its execution platform. Despite its practical importance, progress has not been substantial; in fact, each time a new white-box implementation is proposed, an attack of lower complexity is soon announced. This is mainly because most cryptanalytic methods target specific implementations, and there is no general attack tool for white-box cryptography. In this paper, we present an analytic toolbox for white-box implementations in the style of Chow et al. using lookup tables. According to our toolbox, for a substitution-linear transformation cipher on $n$ bits with S-boxes on $m$ bits, the complexity of recovering the key is $O\left((3n/\max(m_Q,m))2^{3\max(m_Q,m)}+2\min\{(n/m)L^{m+3}2^{2m},\;(n/m)L^{3}2^{3m}+n\log L\cdot2^{L/2}\}\right)$, where $m_Q$ is the input size of the nonlinear encodings, $m_A$ is the minimized block size of the linear encodings, and $L=\mathrm{lcm}(m_A,m_Q)$. As a result, a white-box implementation in the Chow et al. framework has complexity at most $O\left(\min\{(2^{2m}/m)n^{m+4},\;n\log n\cdot2^{n/2}\}\right)$, which is much less than $2^n$. To overcome this, we introduce an idea that obfuscates two advanced encryption standard (AES)-128 ciphers at once with input/output encodings on 256 bits. To reduce storage, we use a sparse unsplit input encoding. As a result, our white-box AES implementation has up to 110-bit security against our toolbox, close to that of the original cipher. More generally, one may consider a white-box implementation of $t$ parallel encryptions of AES to further increase security.
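The core trick of table-based white-box implementations in the Chow et al. style is to publish only the composition of a key-dependent S-box with secret input/output encodings, never the bare S-box. A minimal sketch (using a hypothetical toy S-box, not the real AES S-box, and random byte permutations as encodings):

```python
import random

def make_encoding(bits=8):
    """Random bijection on {0, ..., 2^bits - 1}, used as a nonlinear encoding."""
    table = list(range(2 ** bits))
    random.shuffle(table)
    inverse = [0] * len(table)
    for x, y in enumerate(table):
        inverse[y] = x
    return table, inverse

# Hypothetical 8-bit S-box stand-in (illustrative only).
sbox = [(x * 7 + 3) % 256 for x in range(256)]

f, f_inv = make_encoding()   # input encoding
g, g_inv = make_encoding()   # output encoding

# Published lookup table: T = g o S o f^{-1}.
# An evaluator holding only T never observes a bare S-box output.
T = [g[sbox[f_inv[x]]] for x in range(256)]

x = 42
assert g_inv[T[f[x]]] == sbox[x]   # decoding recovers the true S-box value
```

The analytic toolbox in the paper targets exactly these encodings: recovering $f$ and $g$ (up to equivalence) strips the obfuscation.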

Image warping using an adaptive partial matching method

  • Lim, Dong-Keun;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.12
    • /
    • pp.2783-2797
    • /
    • 1997
  • This paper proposes a new motion estimation algorithm that employs matching in a variable search area. Instead of using a fixed search range for coarse motion estimation, we examine a varying search range, which is determined adaptively by the peak signal-to-noise ratio (PSNR) of the frame difference. The hexagonal matching method is one of the refined methods in image warping. It produces improved image quality, but it requires a large amount of computation. The proposed adaptive partial matching method reduces computational complexity to below about 50% of that of the hexagonal matching method while maintaining comparable image quality. The performance of two motion compensation methods, which combine the affine or bilinear transformation with the proposed motion estimation algorithm, is evaluated based on the following criteria: computational complexity, number of coding bits, and reconstructed image quality. The quality of images reconstructed by the proposed method is substantially improved relative to the conventional BMA method and is comparable to the full hexagonal matching method; in addition, computational complexity and the number of coding bits are reduced significantly.
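The adaptive idea above — choosing the search range from the PSNR of the frame difference — can be sketched as follows. The PSNR thresholds and range values here are illustrative assumptions, not the paper's actual parameters:

```python
import math

def psnr(frame_a, frame_b, peak=255.0):
    """PSNR of the difference between two equal-sized grayscale frames
    (flat lists of pixel values)."""
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

def search_range(frame_a, frame_b, low=30.0, high=40.0):
    """Pick a coarse motion search range adaptively:
    low PSNR (large inter-frame change) -> wide search,
    high PSNR (small change) -> narrow search."""
    p = psnr(frame_a, frame_b)
    if p >= high:
        return 4
    if p >= low:
        return 8
    return 16

still = [100] * 64
moved = [100] * 64
moved[0] = 110   # tiny change -> high PSNR -> narrow search window
assert search_range(still, moved) <= search_range(still, [0] * 64)
```

A small change between frames yields a narrow window and hence fewer candidate matches, which is the source of the claimed computational savings.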


Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong;Schenk, Tony
    • Korean Journal of Remote Sensing
    • /
    • v.24 no.3
    • /
    • pp.223-233
    • /
    • 2008
  • In the mid-1990s, the U.S. government released images acquired by the first generation of photo reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of $70^{\circ}$ that produces film strips with a dimension of $55\;mm\;{\times}\;757\;mm$. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera. Detailed information about the camera is not available, however. For reconstructing points in object space from DISP imagery to an accuracy comparable to the high ground resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as the affine transformation and rational functions. The paper concludes with experimental results concerning the precision of reconstructed points in object space. The rigorous mathematical model for the KH-4A panoramic camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of the scan time. With the known satellite velocity, this translates into a shift along-track. Therefore, the exterior orientation contains seven parameters to be estimated. The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of $30^{\circ}$.
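Under the stated assumptions (smooth trajectory, constant attitude during one scan), the extended collinearity model can be written roughly as follows; the notation here is generic, not necessarily the paper's exact symbols:

$$\mathbf{X}_C(t) \;=\; \mathbf{X}_C(t_0) + \mathbf{v}\,(t - t_0), \qquad \begin{pmatrix} x \\ y \\ -f \end{pmatrix} \;=\; \lambda\,\mathbf{R}^{\top}\big(\mathbf{X} - \mathbf{X}_C(t)\big),$$

where $\mathbf{X}$ is an object point, $\mathbf{X}_C(t)$ the perspective center at scan time $t$, $\mathbf{v}$ the known satellite velocity, $\mathbf{R}$ the (constant) rotation matrix, and $\lambda$ a scale factor. The six standard exterior orientation parameters plus the along-track shift induced by the scan time give the seven parameters mentioned in the abstract.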

Deep-learning based SAR Ship Detection with Generative Data Augmentation

  • Kwon, Hyeongjun;Jeong, Somi;Kim, SungTai;Lee, Jaeseok;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.1
    • /
    • pp.1-9
    • /
    • 2022
  • Ship detection in synthetic aperture radar (SAR) images is an important application of marine monitoring in the military and civilian domains. Over the past decade, object detection has achieved significant progress with the development of convolutional neural networks (CNNs) and large labeled databases. However, due to the difficulty of collecting and labeling SAR images, it is still a challenging task to solve SAR ship detection with CNNs. To overcome this problem, some methods have employed conventional data augmentation techniques such as flipping, cropping, and affine transformation, but these are insufficient to achieve performance robust to a wide variety of ship types. In this paper, we present a novel and effective approach for deep SAR ship detection that exploits label-rich electro-optical (EO) images. The proposed method consists of two components: a data augmentation network and a ship detection network. First, we train the data augmentation network based on a conditional generative adversarial network (cGAN), which aims to generate additional SAR images from EO images. Since it is trained using unpaired EO and SAR images, we impose a cycle-consistency loss to preserve the structural information while translating the characteristics of the images. After training the data augmentation network, we use the augmented dataset, consisting of real and translated SAR images, to train the ship detection network. The experimental results include a qualitative evaluation of the translated SAR images and a comparison of the detection performance of networks trained with the non-augmented and augmented datasets, which demonstrates the effectiveness of the proposed framework.
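The cycle-consistency constraint used for unpaired EO-to-SAR translation can be sketched in a few lines. The generators below are toy stand-ins (real ones are CNNs); only the loss structure is the point:

```python
import numpy as np

def cycle_consistency_loss(eo, g_eo2sar, g_sar2eo):
    """L1 cycle loss for unpaired translation: an EO image mapped to the
    SAR domain and back should reproduce the original EO image."""
    reconstructed = g_sar2eo(g_eo2sar(eo))
    return np.abs(eo - reconstructed).mean()

# Toy stand-in generators; here one exactly inverts the other, so the
# cycle loss is zero. Trained cGAN generators only approximate this.
eo2sar = lambda img: 1.0 - img
sar2eo = lambda img: 1.0 - img

eo_image = np.random.rand(32, 32)
assert cycle_consistency_loss(eo_image, eo2sar, sar2eo) < 1e-9
```

In training, this term is added to the usual adversarial losses of the two generators, which is what preserves ship structure while the image statistics change to SAR-like ones.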

Analysis of Co-registration Performance According to Geometric Processing Level of KOMPSAT-3/3A Reference Image

  • Yun, Yerin;Kim, Taeheon;Oh, Jaehong;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.2
    • /
    • pp.221-232
    • /
    • 2021
  • This study analyzed co-registration results according to the geometric processing level of the reference image, using the Level 1R and Level 1G products of KOMPSAT-3 and KOMPSAT-3A images. We performed co-registration using each Level 1R and Level 1G image as a reference image and a Level 1R image as the sensed image. For the experimental dataset, seven Level 1R and 1G images of KOMPSAT-3 and KOMPSAT-3A acquired over Daejeon, South Korea, were used. To coarsely align the geometric positions of the two images, the SURF (Speeded-Up Robust Features) and PC (Phase Correlation) methods were combined and repeatedly applied to the overlapping region of the images. We then extracted tie-points using the SURF method from the coarsely aligned images and performed fine co-registration through an affine transformation and a piecewise linear transformation, respectively, constructed from the tie-points. As a result of the experiment, when a Level 1G image was used as the reference image, a relatively larger number of tie-points was extracted than with a Level 1R image. Also, when the reference image was a Level 1G image, the root mean square error of co-registration was on average 5 pixels lower than with a Level 1R image. The experimental results show that co-registration performance can be affected by the geometric processing level, which determines the initial geometric relationship between the two images. Moreover, we confirmed that better geometric quality of the reference image yields more stable co-registration performance.
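The fine co-registration step fits an affine transformation to the extracted tie-points, which reduces to a linear least-squares problem. A minimal sketch with synthetic tie-points (not the paper's data or exact pipeline):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src tie-points to dst.
    src, dst: (N, 2) arrays, N >= 3 non-collinear points."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])       # design matrix [x, y, 1]
    # Solve A @ M ~ dst for the 3x2 parameter matrix M (6 affine parameters).
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Synthetic check: recover a known 90-degree rotation plus shift.
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
true_M = np.array([[0.0, 1.0], [-1.0, 0.0], [5.0, 2.0]])
dst = apply_affine(true_M, src)

M = fit_affine(src, dst)
rmse = np.sqrt(((apply_affine(M, src) - dst) ** 2).mean())
assert rmse < 1e-8
```

The RMSE of the residuals at the tie-points is the same quantity the study reports when comparing Level 1R and Level 1G reference images.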

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.453-462
    • /
    • 2021
  • A large amount of license plate data is required for car number recognition. License plate data needs to be balanced, covering past license plates through the latest ones. However, it is difficult to obtain data ranging from actual past license plates to the latest ones. To solve this problem, license plate recognition studies through deep learning are being conducted using synthetic license plates. Since synthetic data differ from real data, various data augmentation techniques are used to bridge the gap. Existing data augmentation simply used methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we combine these data augmentation methods with a style transfer method that transforms synthetic data into the style of real-world data. In addition, real license plate data are noisy when captured from a distance or in a dark environment. If we simply recognize characters from such input data, the chance of misrecognition is high. To improve character recognition, we applied the DeblurGANv2 method as a quality improvement step, increasing the accuracy of license plate recognition. For license plate detection and license plate number recognition, we used YOLO-V5. To measure performance on synthetic license plate data, we constructed a test set by collecting our own license plates. License plate detection without style conversion recorded 0.614 mAP. After applying the style transformation, license plate detection performance improved to 0.679 mAP. In addition, the successful detection rate without image enhancement was 0.872, and 0.915 after image enhancement, confirming the performance improvement.
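The conventional augmentations listed above (brightness, noise, etc.) are simple per-pixel operations. A dependency-free sketch of two of them on a grayscale image stored as a list of rows; the parameter ranges are illustrative assumptions, and rotation/affine/blur are omitted to keep the example short:

```python
import random

def augment(plate, rng=random):
    """Illustrative augmentations applied to a synthetic plate image before
    style transfer: a global brightness shift plus per-pixel additive noise,
    clamped to the valid 8-bit range."""
    shift = rng.randint(-30, 30)                  # brightness offset
    out = []
    for row in plate:
        out.append([min(255, max(0, p + shift + rng.randint(-5, 5)))
                    for p in row])
    return out

plate = [[128] * 8 for _ in range(4)]
aug = augment(plate)
assert all(0 <= p <= 255 for row in aug for p in row)
```

The paper's point is that such pixel-level perturbations alone cannot close the synthetic-to-real gap, which is why the style transfer stage is added on top.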

An fMRI study on the cerebellar lateralization during visuospatial and verbal tasks

  • Chung, Soon-Cheol;Sohn, Jin-Hun;Choi, Mi-Hyun;Lee, Su-Jeong;Yang, Jae-Woong;Lee, Beob-Yi
    • Science of Emotion and Sensibility
    • /
    • v.12 no.4
    • /
    • pp.425-432
    • /
    • 2009
  • The purpose of this study was to examine the cerebellar areas and lateralization responsible for visuospatial and verbal tasks using functional magnetic resonance imaging (fMRI). Eight healthy male college students ($21.5\;{\pm}\;2.3$ years) and eight male college students ($23.3\;{\pm}\;0.5$ years) participated in this fMRI study of visuospatial and verbal tasks, respectively. Functional brain images were acquired on a 3T MRI scanner using the single-shot EPI method. All functional images were aligned with anatomical images using affine transformation routines built into SPM99. The experiment consisted of four blocks, each including a control task (1 minute) and a cognitive task (1 minute); a run was 8 minutes long. Using the subtraction procedure, activated areas in the cerebellum during the visuospatial and verbal tasks were color-coded by t-score. A cerebellar lateralization index was calculated for both cognitive tasks using the number of activated voxels. The activated cerebellar regions during both cognitive tasks agree with previous results. Since the number of activated voxels in the left and right cerebellar hemispheres was almost the same, there was no cerebellar lateralization for either cognitive task.
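A lateralization index from activated voxel counts is conventionally computed as $(L - R)/(L + R)$; the abstract does not give its exact definition, so this standard form is an assumption:

```python
def lateralization_index(left_voxels, right_voxels):
    """Conventional lateralization index from activated voxel counts:
    LI = (L - R) / (L + R).
    +1 means fully left-lateralized, -1 fully right, ~0 no lateralization.
    (Assumed standard formula; the paper's definition may differ.)"""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Nearly equal hemispheric counts, as reported in the study,
# give an index near zero, i.e. no cerebellar lateralization.
assert abs(lateralization_index(1000, 1005)) < 0.01
```

An index near zero for both tasks is exactly the study's reported finding of no cerebellar lateralization.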
