• Title/Summary/Keyword: Pixel-based

Search Result 675, Processing Time 0.025 seconds

Color Image Encryption using MLCA and Bit-oriented operation (MLCA와 비트 단위 연산을 이용한 컬러 영상의 암호화)

  • Yun, Jae-Sik;Nam, Tae-Hee;Cho, Sung-Jin;Kim, Seok-Tae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.141-143
    • /
    • 2010
  • This paper identifies a problem with existing encryption methods based on MLCA or complemented MLCA and proposes a method to resolve it. With the existing methods, the encryption result is affected by the original image because of the spatial redundancy of adjacent pixels. In the proposed method, we transform the spatial coordinates of all pixels into encrypted coordinates, and we encrypt the color values of the original image by XORing them with pseudo-random numbers. Randomly encrypting both the pixel coordinates and the pixel values of the original image resolves the problem of the existing methods and raises the level of encryption. The effectiveness of the proposed method is demonstrated through histogram and key-space analyses.

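The XOR stage of the scheme above is compact enough to sketch directly. In this minimal illustration, Python's standard PRNG stands in for the paper's MLCA-generated sequence, and `xor_encrypt` is a hypothetical name:

```python
import random

def xor_encrypt(pixels, key):
    """XOR every color value with a keyed pseudo-random byte stream.

    `pixels` is a flat list of 0-255 channel values; `key` seeds the
    stream (a stand-in for an MLCA-derived sequence).
    """
    rng = random.Random(key)
    return [p ^ rng.randrange(256) for p in pixels]

# XOR is an involution, so running the same keyed stream twice
# restores the original image.
image = [12, 200, 45, 99, 0, 255]
cipher = xor_encrypt(image, key=42)
plain = xor_encrypt(cipher, key=42)
```

Decryption is the same operation with the same key; the paper additionally scrambles pixel coordinates before this step, which the sketch omits.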

Illumination Robust Feature Descriptor Based on Exact Order (조명 변화에 강인한 엄격한 순차 기반의 특징점 기술자)

  • Kim, Bongjoe;Sohn, Kwanghoon
    • Journal of Broadcast Engineering
    • /
    • v.18 no.1
    • /
    • pp.77-87
    • /
    • 2013
  • In this paper, we present a novel local image descriptor, the exact order based descriptor (EOD), which is robust to illumination changes and Gaussian noise. The exact order of an image patch is induced by changing each discrete intensity value into a k-dimensional continuous vector, resolving the ordering ambiguity among pixels of the same intensity. EOD is generated from the overall distribution of exact orders in the patch. The proposed local descriptor is compared with several state-of-the-art descriptors over a number of images. Experimental results show that the proposed method outperforms many state-of-the-art descriptors in the presence of illumination changes, blur, and viewpoint change. The proposed method can also be used in many computer vision applications such as face recognition, texture recognition, and image analysis.
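A toy version of the order idea can make the illumination invariance concrete. This sketch uses a plain rank vector in place of the paper's k-dimensional embedding, with a tiny position-dependent offset as a crude tie-breaker; the function name is hypothetical:

```python
def exact_order_descriptor(patch, eps=1e-3):
    """Rank-based patch descriptor (toy sketch).

    Ties between equal intensities are broken with a tiny
    position-dependent offset -- a crude stand-in for the paper's
    k-dimensional continuous embedding -- so every pixel receives a
    unique rank.  Ranks are invariant under monotonic illumination
    changes, so the descriptor is too.
    """
    disambiguated = [v + i * eps for i, v in enumerate(patch)]
    order = sorted(range(len(patch)), key=lambda i: disambiguated[i])
    ranks = [0] * len(patch)
    for r, i in enumerate(order):
        ranks[i] = r
    # normalize ranks to [0, 1] to form the descriptor
    return [r / (len(patch) - 1) for r in ranks]

patch = [10, 10, 30, 200, 40, 40, 15, 90]
desc = exact_order_descriptor(patch)
brighter = [2 * v + 5 for v in patch]  # monotonic illumination change
```

Because a monotonic intensity change preserves the ordering of pixels, `brighter` yields exactly the same descriptor as `patch`.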

Acceleration of Feature-Based Image Morphing Using GPU (GPU를 이용한 특징 기반 영상모핑의 가속화)

  • Kim, Eun-Ji;Yoon, Seung-Hyun;Lee, Jieun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.13-24
    • /
    • 2014
  • In this study, a graphics-processing-unit (GPU)-based acceleration technique is proposed for feature-based image morphing. The technique uses the depth buffer of the graphics hardware to efficiently calculate the shortest distance between a pixel and the control lines. The pairs of control lines between the source image and the destination image are determined by the user's input, and the distance function of each control line is rendered using two rectangles and two cones. The distance between each pixel and its nearest control line is stored in the depth buffer through the graphics pipeline and is used to conduct the morphing operation efficiently. The per-pixel morphing operation is parallelized using the compute unified device architecture (CUDA) to reduce the morphing time. We demonstrate the efficiency of the proposed technique with several experimental results.
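The per-pixel quantity the depth buffer stores, the shortest distance from a pixel to a control line segment, can be written out as a CPU reference (this is not the paper's GPU rendering; the names are illustrative):

```python
def dist_to_segment(p, a, b):
    """Euclidean distance from pixel p to control line segment a-b.

    The perpendicular band corresponds to the rectangle the paper
    renders, and the regions beyond the endpoints to its two cones.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    vx, vy = bx - ax, by - ay
    t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))          # clamp to the segment
    cx, cy = ax + t * vx, ay + t * vy  # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def nearest_line(p, lines):
    """Index of the control line nearest to pixel p -- the value the
    hardware depth test effectively selects."""
    return min(range(len(lines)), key=lambda i: dist_to_segment(p, *lines[i]))
```

On the GPU, the depth test performs the `min` across control lines implicitly as each distance function is rasterized.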

Post-processing Algorithm Based on Edge Information to Improve the Accuracy of Semantic Image Segmentation (의미론적 영상 분할의 정확도 향상을 위한 에지 정보 기반 후처리 방법)

  • Kim, Jung-Hwan;Kim, Seon-Hyeok;Kim, Joo-heui;Choi, Hyung-Il
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.3
    • /
    • pp.23-32
    • /
    • 2021
  • Semantic image segmentation in the field of computer vision classifies an image by dividing it into pixel-level regions. Its performance is rapidly improving with machine learning methods, and the high potential of pixel-level information is drawing attention. However, from its early days until recently, the technology has suffered from a 'lack of detailed segmentation' problem. Since this problem is caused by enlarging the label map, we expected that the label map could be improved by using the edge map of the original image, which holds detailed edge information. Therefore, in this paper, we propose a post-processing algorithm that keeps learning-based semantic image segmentation but modifies the resulting label map based on the edge map of the original image. Applying the algorithm to an existing method improved pixel accuracy by approximately 1.74% and IoU (Intersection over Union) by approximately 1.35%, and analysis of the results shows that fine segmentation of the targeted details is improved.

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1723-1735
    • /
    • 2022
  • Landslides are one of the most prevalent natural disasters, threatening both people and property. Because landslides can cause damage at the national level, effective prediction and prevention are essential. Research into producing highly accurate landslide susceptibility maps is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The research area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and susceptibility maps were verified through average precision (AP) and root mean square error (RMSE), and the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies.
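The difference between the two model inputs can be made concrete: a pixel-based DNN sees one feature vector per location, while a patch-based CNN sees a neighborhood of every factor layer. A minimal sketch with plain nested lists; `extract_patch` and `pixel_vector` are illustrative names, not the study's code:

```python
def extract_patch(stack, row, col, size=5):
    """Gather a size x size neighborhood of every factor layer around
    (row, col) -- the patch-based CNN input.

    `stack` is a list of 2-D layers (slope, curvature, SPI, ...).
    """
    half = size // 2
    patch = []
    for layer in stack:
        rows = []
        for r in range(row - half, row + half + 1):
            rows.append([layer[r][c] for c in range(col - half, col + half + 1)])
        patch.append(rows)
    return patch

def pixel_vector(stack, row, col):
    """One value per factor layer -- the pixel-based DNN input."""
    return [layer[row][col] for layer in stack]
```

The patch carries the spatial context around each location, which is the property the study credits for the CNN's higher accuracy.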

3D object Modeling based on Superquadrics and Constructive Solid Geometry (Superquadric 과 CSG에 기반한 3차원 모델링)

  • 김대현;이선호;김태은;최종수
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2000.04a
    • /
    • pp.149-152
    • /
    • 2000
  • Three-dimensional object shape modeling plays an important role in recognition. Conventional pixel-based image representations cannot reflect an object's intrinsic organic structure, and edge-based representations can describe an object in detail but generate a large number of attributes for object recognition. Object recognition therefore requires a volume-primitive-based representation that can directly describe an object's shape features. This paper describes 3D object shape modeling using superquadrics, which can capture 3D information effectively with a few parameters, and a CSG (Constructive Solid Geometry) tree that uses superquadrics as its primitives.

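The superquadric primitive referred to above is compact enough to state directly: an inside-outside function with three size parameters and two shape exponents, which CSG then combines. A minimal sketch; the union rule shown is the usual min-of-fields convention, not necessarily the paper's exact formulation:

```python
def superquadric_f(x, y, z, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Superquadric inside-outside function: < 1 inside, 1 on the
    surface, > 1 outside.

    a = (a1, a2, a3) are the extents along each axis; e = (e1, e2)
    are the shape exponents.  e = (1, 1) gives an ellipsoid; small
    exponents approach a box, large ones a pinched star shape.
    """
    a1, a2, a3 = a
    e1, e2 = e
    xy = abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)
    return xy ** (e2 / e1) + abs(z / a3) ** (2 / e1)

def csg_union(f1, f2):
    # a point is inside the union if it is inside either primitive,
    # i.e. if the smaller of the two field values is below 1
    return lambda x, y, z: min(f1(x, y, z), f2(x, y, z))
```

Intersection and difference follow the same pattern with `max` (and a sign flip for the subtracted field), which is how a CSG tree evaluates composite shapes from a few parameters per node.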

Dense-Depth Map Estimation with LiDAR Depth Map and Optical Images based on Self-Organizing Map (라이다 깊이 맵과 이미지를 사용한 자기 조직화 지도 기반의 고밀도 깊이 맵 생성 방법)

  • Choi, Hansol;Lee, Jongseok;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.283-295
    • /
    • 2021
  • This paper proposes a self-organizing-map-based method for generating a dense depth map from a color image and a LiDAR-generated depth map. The proposed depth-map upsampling method consists of an initial depth prediction step, for areas not acquired by LiDAR, and a depth filtering step. In the initial depth prediction step, stereo matching is performed on two color images to predict initial depth values. In the depth-map filtering step, to reduce the error of a predicted initial depth value, a self-organizing-map update is applied to the predicted depth pixel using the measured depth pixels around it. During this process, a weight is determined by the distance between the predicted depth pixel and a measured depth pixel and by the difference between the color values corresponding to each pixel. For performance comparison, we compared the proposed method with the bilateral filter and the k-nearest-neighbor method, which are widely used for depth-map upsampling. Compared to the bilateral filter and the k-nearest-neighbor method, the proposed method reduced the error by about 6.4% and 8.6% in terms of MAE, and by about 10.8% and 14.3% in terms of RMSE.
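The weight rule described above, spatial distance combined with color similarity, can be written down directly. This is a hedged sketch with Gaussian falloffs; the sigma values and function names are assumptions, not taken from the paper:

```python
import math

def som_weight(p, q, color_p, color_q, sigma_d=3.0, sigma_c=10.0):
    """Neighborhood weight for updating predicted depth pixel p from
    measured depth pixel q: nearby pixels with similar color pull
    harder, in the spirit of a self-organizing-map update."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2   # spatial distance
    c2 = (color_p - color_q) ** 2                  # color difference
    return math.exp(-d2 / (2 * sigma_d ** 2)) * math.exp(-c2 / (2 * sigma_c ** 2))

def refine_depth(pred, measured, colors, p):
    """One update step: move the predicted depth at p toward the
    weighted average of the surrounding measured depths."""
    num = den = 0.0
    for q, depth_q in measured.items():
        w = som_weight(p, q, colors[p], colors[q])
        num += w * depth_q
        den += w
    return num / den if den > 0 else pred[p]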

Block Fragile Watermarking Based on LUT (LUT 기반의 블록 연성 워터마킹)

  • Joo Eun-Kyong;Kang Hyun-Ho;Park Ji-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.9
    • /
    • pp.1294-1303
    • /
    • 2004
  • This paper proposes a new block fragile watermarking scheme for image authentication and integrity that combines the existing pixel-based and block-based schemes. The proposed scheme proceeds as follows. First, we build an LUT (Look Up Table) from each pixel of the original image and the information of the corresponding block. Next, we embed the watermark by modifying the original image with values chosen by comparing the binarized original image with the watermark to be embedded. As a result, we overcome some weaknesses of the existing schemes: a binary logo watermark can be detected from the watermarked image, and altered locations can be detected at the level of a pixel or a block.

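The pixel-level part of such a scheme can be illustrated with a keyed binary LUT in the style of Yeung-Mintzer fragile watermarking. This is an assumption-laden sketch: the paper's LUT additionally folds in block information, which is omitted here, and all names are hypothetical:

```python
import random

def make_lut(key, levels=256):
    """Binary look-up table keyed by a secret: LUT[v] is the
    watermark bit that a pixel of value v asserts."""
    rng = random.Random(key)
    return [rng.randrange(2) for _ in range(levels)]

def embed(pixels, bits, lut):
    """Nudge each pixel up or down by the smallest amount that makes
    the LUT emit the desired watermark bit."""
    out = []
    for p, b in zip(pixels, bits):
        for delta in range(256):
            for v in (p - delta, p + delta):
                if 0 <= v < 256 and lut[v] == b:
                    break
            else:
                continue   # neither candidate matched; widen the search
            break
        out.append(v)
    return out

def extract(pixels, lut):
    """Read the asserted bit of every pixel; a tampered pixel is
    likely to flip its bit, exposing the altered location."""
    return [lut[p] for p in pixels]
```

Because each pixel is verified independently against the LUT, tampering is localized to the exact pixels changed, which is the fragile-watermark property the paper builds on.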

Light Contribution Based Importance Sampling for the Many-Light Problem (다광원 문제를 위한 광원 기여도 기반의 중요도 샘플링)

  • Kim, Hyo-Won;Ki, Hyun-Woo;Oh, Kyoung-Su
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06b
    • /
    • pp.240-245
    • /
    • 2008
  • To render a scene containing many light sources realistically in computer graphics, a large amount of lighting computation must be performed. The Monte Carlo method is widely used to compute lighting quickly from many light sources. Based on the Monte Carlo method, this paper proposes a new importance sampling technique that can sample many light sources effectively. The proposed technique rests on two key observations: first, even when many light sources exist in a scene, often only a few of them strongly affect a particular region; second, pixels with low spatial coherence or located near shadow boundaries are dominated by different light sources. Motivated by these observations, the proposed technique evaluates how much each light source contributes to a particular region and determines a probability density function (PDF) proportional to that contribution. To this end, pixels are clustered in image space and representative samples are selected based on the cluster structure. The contributions of the light sources are evaluated from the selected representative samples, per-cluster probability density functions are determined from them, and the final rendering is performed. With the proposed sampling technique, we obtained better image quality with less noise than traditional sampling at the same number of samples. The proposed technique can effectively handle scenes with many lights, diverse materials, and complex occlusion.

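The core of contribution-proportional sampling can be reduced to a few lines: build a PDF proportional to the light contributions, sample by inverse transform, and divide each sample by its probability so the estimate stays unbiased. In this minimal sketch the `contribution` callback stands in for the paper's cluster-level contribution estimate; when the PDF is exactly proportional to the true contribution, the estimator has zero variance:

```python
import random

def estimate(lights, contribution, n_samples, rng):
    """Monte Carlo estimate of total illumination at a shading point,
    sampling lights with probability proportional to their contribution."""
    weights = [contribution(l) for l in lights]
    total = sum(weights)
    pdf = [w / total for w in weights]
    cdf, acc = [], 0.0
    for p in pdf:                 # cumulative distribution for
        acc += p                  # inverse-transform sampling
        cdf.append(acc)
    est = 0.0
    for _ in range(n_samples):
        u = rng.random()
        i = next((k for k, c in enumerate(cdf) if u <= c), len(cdf) - 1)
        est += contribution(lights[i]) / pdf[i]   # importance-weighted sample
    return est / n_samples
```

In practice the contributions are only estimated from representative samples per cluster, so the PDF is approximate and some variance remains, but it is far lower than with uniform light selection.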

Implementation of a 'Rasterization based on Vector Algorithm' suited for a Multi-thread Shader architecture (Multi-Thread 쉐이더 구조에 적합한 Vector 기반의 Rasterization 알고리즘의 구현)

  • Lee, Ju-Suk;Kim, Woo-Young;Lee, Bo-Haeng;Lee, Kwang-Yeob
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.10
    • /
    • pp.46-52
    • /
    • 2009
  • A Multi-Core/Multi-Thread architecture is adopted for the Shader processor to enhance processing performance. The Shader processor is designed to utilize its processing core IP for multiple purposes, such as Vertex-Shading, Rasterization, and Pixel-Shading. In this paper, we propose a 'Rasterization based on Vector Algorithm' that makes parallel pixel processing possible with the Multi-Core, Multi-Thread architecture of the Shader core. The proposed algorithm requires only 2% of the operation count of the scan-line algorithm and processes pixels independently.
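The vector flavor of such an algorithm can be illustrated with the standard edge-function formulation: each pixel's coverage is three independent cross products, so pixels map one-to-one onto shader threads. This is a CPU sketch under that assumption; the paper's exact algorithm and fill rules may differ:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area (cross product) test: positive when point p lies
    to the left of the directed edge a -> b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Edge-function rasterization: every pixel's coverage test is an
    independent vector operation, so pixels can be shaded in parallel
    across shader threads (each (x, y) would map to one thread)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel center
            w0 = edge(ax, ay, bx, by, px, py)
            w1 = edge(bx, by, cx, cy, px, py)
            w2 = edge(cx, cy, ax, ay, px, py)
            # inside if all three edge functions agree in sign
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered
```

Unlike scan-line rasterization, no per-row state is carried between pixels, which is what makes the per-pixel work trivially parallel on a multi-thread shader.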