• Title/Summary/Keyword: Multi-Image

Digital Watermarking Technique of Compressed Multi-view Video with Layered Depth Image (계층적 깊이 영상으로 압축된 다시점 비디오에 대한 디지털 워터마크 기술)

  • Lim, Joong-Hee;Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.1 / pp.1-9 / 2009
  • In this paper, we propose a digital image watermarking technique based on the lifting wavelet transform. The technique can easily be extended to video content. We therefore apply it to the layered depth image structure, an efficient compression method for multi-view video with depth images. The application steps are simple because the watermark is inserted only into the reference image; the watermarks of the other view images are derived from the reference image. In this way, each view image of the multi-view video can be guaranteed authentication and copyright protection.
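
For context, a minimal sketch of the embed-only-in-the-reference-image idea is given below, using the standard 2-D DWT from PyWavelets as a stand-in for the paper's lifting wavelet transform; the function names, the additive embedding rule, and the parameters are illustrative assumptions, not the authors' implementation.

```python
# Watermark only the reference view; other views would inherit the mark through
# the layered-depth-image reconstruction described in the abstract.
import numpy as np
import pywt

def embed_watermark(reference, watermark_bits, alpha=4.0, wavelet="haar", level=2):
    """Additively embed +/-1 watermark bits into the coarsest horizontal detail band."""
    coeffs = pywt.wavedec2(reference.astype(np.float64), wavelet, level=level)
    cH, cV, cD = coeffs[1]                      # coarsest-level detail sub-bands
    flat = cH.ravel().copy()
    bits = np.where(np.asarray(watermark_bits[: flat.size]) > 0, 1.0, -1.0)
    flat[: bits.size] += alpha * bits           # spread-spectrum style addition
    coeffs[1] = (flat.reshape(cH.shape), cV, cD)
    return pywt.waverec2(coeffs, wavelet)

def extract_watermark(marked, original, n_bits, wavelet="haar", level=2):
    """Non-blind extraction by comparing marked and original coefficients."""
    cH_m = pywt.wavedec2(marked.astype(np.float64), wavelet, level=level)[1][0]
    cH_o = pywt.wavedec2(original.astype(np.float64), wavelet, level=level)[1][0]
    diff = (cH_m - cH_o).ravel()[:n_bits]
    return (diff > 0).astype(np.uint8)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256)).astype(np.float64)   # toy reference view
bits = rng.integers(0, 2, 64)
marked = embed_watermark(ref, bits)
print((extract_watermark(marked, ref, 64) == bits).mean())  # expected: 1.0
```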

Color Correction Using Chromaticity of Highlight Region in Multi-Scaled Retinex

  • Jang, In-Su;Park, Kee-Hyon;Ha, Yeong-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.59-62 / 2009
  • In general, since the dynamic range of a digital still camera is narrower than that of a real scene, it is hard to represent the shadow regions of the scene. The multi-scaled retinex algorithm is therefore used to improve detail and local contrast in the shadow regions of an image by dividing the image by its local average images obtained through Gaussian filtering. However, if the chromatic distribution of the original image is not uniform and is dominated by a certain chromaticity, the chromaticity of the local average images follows that dominant chromaticity, so the colors of the resulting image are shifted toward the complement of the dominant chromaticity. In this paper, a modified multi-scaled retinex method that reduces the influence of the dominant chromaticity is proposed. In the multi-scaled retinex process, the local average images obtained by Gaussian filtering are divided by the average chromaticity values of the original image in order to reduce the influence of the dominant chromaticity. Next, the chromaticity of the illuminant is estimated in the highlight region, and the local average images are corrected by the estimated illuminant chromaticity. Experimental results show that the proposed method improves local contrast and detail without color distortion.
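
The core mechanism described above (dividing the image by Gaussian-blurred local averages, then normalizing those averages by the image's mean chromaticity) can be sketched roughly as follows; the scales, the normalization step, and the omission of the highlight-based illuminant correction are simplifying assumptions for illustration only.

```python
# Classic multi-scaled retinex and a variant with dominant-chromaticity
# normalization, assuming RGB float images in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Average of log(I) - log(Gaussian-blurred I) over several scales."""
    img = img.astype(np.float64) + eps
    out = np.zeros_like(img)
    for sigma in sigmas:
        blur = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], -1)
        out += np.log(img) - np.log(blur + eps)
    return out / len(sigmas)

def msr_dominant_chroma_corrected(img, sigmas=(15, 80, 250), eps=1e-6):
    """Variant in the spirit of the abstract: divide the local averages by the
    image's mean chromaticity so a dominant color does not bias the result."""
    img = img.astype(np.float64) + eps
    mean_rgb = img.reshape(-1, 3).mean(axis=0)
    chroma = mean_rgb / mean_rgb.sum()              # average chromaticity (r, g, b)
    out = np.zeros_like(img)
    for sigma in sigmas:
        blur = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], -1)
        blur = blur / (3.0 * chroma)                # suppress dominant chromaticity
        out += np.log(img) - np.log(blur + eps)
    return out / len(sigmas)

img = np.random.default_rng(0).random((128, 128, 3))   # toy RGB image in [0, 1]
enhanced = msr_dominant_chroma_corrected(img)
```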

A New Connected Coherence Tree Algorithm For Image Segmentation

  • Zhou, Jingbo;Gao, Shangbing;Jin, Zhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1188-1202 / 2012
  • In this paper, we propose a new multi-scale connected coherence tree algorithm (MCCTA) that improves the connected coherence tree algorithm (CCTA). In contrast to many multi-scale image processing algorithms, MCCTA works on multiple scale spaces of an image and can adaptively change its parameters to capture both coarse- and fine-level details. Furthermore, we design a multi-scale connected coherence tree algorithm plus spectral graph partitioning (MCCTSGP) by combining MCCTA and spectral graph partitioning into a new framework. Specifically, the graph nodes are the regions produced by CCTA together with the image pixels, and the edge weights are the affinities between nodes. We then run a spectral graph partitioning algorithm on this graph, which considers information from both pixels and regions to improve the quality of the resulting segments. Experimental results on the Berkeley image database demonstrate the accuracy of our algorithm compared to existing popular methods.
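
The spectral-partitioning step mentioned above can be illustrated with a small normalized-cut-style sketch; the CCTA region extraction itself is not reproduced, and the node features, affinity kernel, and two-way split below are simplifying assumptions.

```python
# Two-way spectral partition of an affinity graph built from node features.
import numpy as np
from scipy.linalg import eigh

def spectral_bipartition(features, sigma=1.0):
    """Normalized-cut style 2-way split of nodes described by feature vectors."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))             # pairwise affinities
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # unnormalized graph Laplacian
    # Generalized eigenproblem L v = lambda D v; the second-smallest eigenvector
    # (Fiedler vector) gives the partition.
    vals, vecs = eigh(L, D)
    fiedler = vecs[:, 1]
    return fiedler > np.median(fiedler)

# Toy usage: 6 "regions" described by (mean intensity, x, y) features.
feats = np.array([[0.1, 0, 0], [0.12, 1, 0], [0.11, 0, 1],
                  [0.9, 5, 5], [0.88, 6, 5], [0.92, 5, 6]])
print(spectral_bipartition(feats))   # separates the two intensity/position clusters
```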

Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.22 no.9 / pp.1000-1010 / 2019
  • Image semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial improvements in performance with the development of convolutional neural networks (CNNs). However, most previous work on semantic segmentation assumes that images are captured in clear weather and shows degraded performance on hazy images with low contrast and faded color. Meanwhile, dehazing aims to recover a clear image from an observed hazy image, an ill-posed problem that can be alleviated with additional information about the image. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts a dense semantic segmentation map and a clear image. The visual information refined during the dehazing process can help the recognition task of semantic segmentation; conversely, the semantic features obtained during segmentation can provide color priors for objects, which help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, showing improved performance compared to separate networks.
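
A highly simplified shared-encoder, two-decoder sketch of the multi-task idea is shown below, assuming PyTorch; the layer sizes, heads, and loss combination are placeholders rather than the paper's architecture.

```python
# Shared encoder with one segmentation head and one dehazing head.
import torch
import torch.nn as nn

class MultiTaskDehazeSeg(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)    # dense segmentation logits
        self.dehaze_head = nn.Conv2d(64, 3, 1)           # clear-image prediction

    def forward(self, hazy):
        feats = self.encoder(hazy)
        return self.seg_head(feats), self.dehaze_head(feats)

model = MultiTaskDehazeSeg()
hazy = torch.randn(1, 3, 64, 64)
seg_logits, dehazed = model(hazy)
# Joint training would combine, e.g., cross-entropy on seg_logits and an L1 loss
# between dehazed and the ground-truth clear image.
print(seg_logits.shape, dehazed.shape)
```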

Comparative Analysis of Cartesian Trajectory and MultiVane Trajectory Using ACR Phantom in MRI : Using Image Intensity Uniformity Test and Low-contrast Object Detectability Test (ACR 팬텀을 이용한 Cartesian Trajectory와 MultiVane Trajectory의 비교분석 : 영상강도 균질성과 저대조도 검체 검출률 test를 사용하여)

  • Nam, Soon-Kwon;Choi, Joon-Ho
    • Journal of radiological science and technology / v.42 no.1 / pp.39-46 / 2019
  • This study conducted a comparative analysis of the differences between the Cartesian trajectory in a linear rectangular coordinate system and the MultiVane trajectory in a nonlinear rectangular coordinate system, using axial T1 and axial T2 images of an American College of Radiology (ACR) phantom. The phantom was placed at the center of the head coil, and the top-to-bottom and left-to-right levels were adjusted using a level. The experiment was performed according to the Phantom Test Guidance provided by the ACR, and sagittal localizer images were obtained. Slices #1 and #11 were scanned after being placed at the center of a 45° wedge shape, and a total of 11 slices were obtained. According to the evaluation results, the image intensity uniformity (IIU) in the axial T1 image was 93.34% for the Cartesian trajectory and 93.19% for the MultiVane trajectory, both within the normal range; the IIU for the Cartesian trajectory was 0.15% higher. In axial T2, the IIU was 96.44% for the Cartesian trajectory and 95.97% for the MultiVane trajectory, again both within the normal range; the IIU for the Cartesian trajectory was 0.47% higher. As a result, the Cartesian technique was superior to the MultiVane technique in terms of high-contrast spatial resolution, image intensity uniformity, and low-contrast object detectability.
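
The IIU measure referred to above can be sketched as follows, assuming the usual ACR-style definition IIU = 100 × (1 − (high − low)/(high + low)), where high and low are the mean signals of the brightest and darkest small ROIs inside a large central ROI; the ROI placement and sizes here are simplified assumptions.

```python
# Percent image-intensity uniformity on one axial slice.
import numpy as np
from scipy.ndimage import uniform_filter

def image_intensity_uniformity(slice_img, roi_mask, small_roi_size=5):
    """Return IIU (%) given a boolean mask of the large central ROI."""
    local_mean = uniform_filter(slice_img.astype(np.float64), small_roi_size)
    vals = local_mean[roi_mask]
    high, low = vals.max(), vals.min()
    return 100.0 * (1.0 - (high - low) / (high + low))

# Toy usage on a synthetic slice with mild left-to-right shading.
yy, xx = np.mgrid[0:128, 0:128]
img = 1000 + 10 * (xx / 127.0)
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2     # large circular ROI
print(round(image_intensity_uniformity(img, mask), 2))  # close to 100 => uniform
```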

Image alignment method based on CUDA SURF for multi-spectral machine vision application (다중 스펙트럼 머신비전 응용을 위한 CUDA SURF 기반의 영상 정렬 기법)

  • Maeng, Hyung-Yul;Kim, Jin-Hyung;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.17 no.9 / pp.1041-1051 / 2014
  • In this paper, we propose a new image alignment technique based on CUDA SURF to solve the initial image alignment problem that frequently occurs in machine vision applications. Machine vision systems using multi-spectral images have recently become more common for solving decision problems that cannot be handled by the human vision system. These systems mostly use markers for initial image alignment. However, in some applications markers cannot be used, and the alignment procedure must be changed whenever the markers change. To solve these problems, we propose an image alignment method for multi-spectral machine vision applications based on SURF, which extracts image features without depending on markers. The proposed method obtains a sufficient number of feature points from multi-spectral images using SURF and removes outliers iteratively based on a least-squares method. We further propose an effective preliminary scheme for removing mismatched feature-point pairs that may affect the overall alignment performance. In addition, we reduce the execution time by implementing the proposed method with CUDA on a GPGPU to guarantee real-time operation. Simulation results show that the proposed method aligns images effectively in applications where markers cannot be used.
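
A minimal CPU-only sketch of SURF-based alignment between two spectral bands is shown below, assuming OpenCV with the contrib (xfeatures2d) module; RANSAC-based homography fitting stands in for the paper's iterative least-squares outlier removal, and the CUDA implementation is not reproduced.

```python
# Feature-based alignment of one spectral band to a reference band.
import cv2
import numpy as np

def align(src_band, ref_band, hessian_threshold=400):
    """Warp src_band onto ref_band using SURF matches (8-bit grayscale inputs)."""
    surf = cv2.xfeatures2d.SURF_create(hessian_threshold)   # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(src_band, None)
    kp2, des2 = surf.detectAndCompute(ref_band, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robust fit discards mismatched pairs (RANSAC as a stand-in for the paper's
    # iterative least-squares outlier rejection).
    H, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
    h, w = ref_band.shape[:2]
    return cv2.warpPerspective(src_band, H, (w, h)), inliers

# Usage: aligned, inlier_mask = align(band_a, band_b)
```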

GAN-based Image-to-image Translation using Multi-scale Images (다중 스케일 영상을 이용한 GAN 기반 영상 간 변환 기법)

  • Chung, Soyoung;Chung, Min Gyo
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.767-776 / 2020
  • GcGAN is a deep learning model that translates styles between images under a geometric consistency constraint. However, GcGAN has the disadvantage that it does not properly preserve the detailed content of an image, since it enforces content preservation only through limited geometric transformations such as rotation or flipping. In this study, we therefore propose a new image-to-image translation method, MSGcGAN (Multi-Scale GcGAN), which addresses this disadvantage. MSGcGAN, an extended model of GcGAN, performs style translation between images in a way that reduces semantic distortion and maintains detailed content, by learning multi-scale images simultaneously and extracting scale-invariant features. Experimental results show that MSGcGAN outperforms GcGAN in both quantitative and qualitative terms, translating style more naturally while maintaining the overall content of the image.
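
The multi-scale learning idea can be illustrated with a short PyTorch snippet that prepares several resized copies of the same image as additional inputs; this is only a sketch of the general notion, not the MSGcGAN model itself.

```python
# Build a multi-scale pyramid of the same batch for multi-scale training.
import torch
import torch.nn.functional as F

def multi_scale_inputs(img, scales=(1.0, 0.5, 0.25)):
    """Return the image resized to each scale (bilinear)."""
    return [img if s == 1.0 else
            F.interpolate(img, scale_factor=s, mode="bilinear", align_corners=False)
            for s in scales]

batch = torch.randn(2, 3, 128, 128)
for x in multi_scale_inputs(batch):
    print(tuple(x.shape))   # (2,3,128,128), (2,3,64,64), (2,3,32,32)
```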

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian;Leihong Zhang;Dawei Zhang;Kaimin Wang
    • Current Optics and Photonics / v.8 no.3 / pp.215-224 / 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence causes blurring and distortion of images, resulting in loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input multi-output (MIMO) dense U-shaped network (Unet) is proposed to restore a single image degraded by atmospheric turbulence. The network is based on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules. These modules help extract shallow image features and facilitate the processing of the multi-input dense encoding modules that follow, improving the model's ability to analyze and extract features effectively. An asymmetric feature-fusion module combines encoded features at varying scales, supporting the feature reconstruction performed by the subsequent multi-output decoding modules that restore the turbulence-degraded image. Experimental results show that the adaptive feature-fusion MIMO dense U-shaped network outperforms traditional restoration methods, the CMFNet model, and the standard MIMO-Unet model in terms of restored image quality, effectively minimizing geometric deformation and blurring.
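
The asymmetric feature-fusion idea (combining encoder features from several scales before decoding) can be sketched as below in PyTorch; the channel counts, resizing strategy, and 1×1 mixing convolution are illustrative assumptions, not the paper's exact module.

```python
# Fuse encoder features from multiple scales at one decoder resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, channels=(32, 64, 128), out_channels=64):
        super().__init__()
        self.mix = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, feats, target_hw):
        # Resize every scale to the target resolution, concatenate, and mix.
        resized = [F.interpolate(f, size=target_hw, mode="bilinear",
                                 align_corners=False) for f in feats]
        return self.mix(torch.cat(resized, dim=1))

fuse = FeatureFusion()
feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32),
         torch.randn(1, 128, 16, 16)]
print(fuse(feats, (32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```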

Efficient Text Localization using MLP-based Texture Classification (신경망 기반의 텍스춰 분석을 이용한 효율적인 문자 추출)

  • Jung, Kee-Chul;Kim, Kwang-In;Han, Jung-Hyun
    • Journal of KIISE: Software and Applications / v.29 no.3 / pp.180-191 / 2002
  • We present a new text localization method for images using a multi-layer perceptron (MLP) and a multiple continuously adaptive mean shift (MultiCAMShift) algorithm. An automatically constructed MLP-based texture classifier generates a text probability image for various types of images without explicit feature extraction. The MultiCAMShift algorithm, which operates on the text probability image produced by the MLP, can place bounding boxes efficiently without analyzing the texture properties of the entire image.
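
A rough sketch of producing a text-probability image with an MLP texture classifier is shown below, assuming scikit-learn; the window size, toy training data, and sliding-window step are assumptions, and the MultiCAMShift box-placement stage is omitted.

```python
# Slide a small window over the image and record P(text) from an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

WIN = 8  # window size (assumption)

def probability_map(gray, clf, step=4):
    """Return an image holding P(text) at each evaluated window center."""
    h, w = gray.shape
    prob = np.zeros((h, w))
    for y in range(0, h - WIN, step):
        for x in range(0, w - WIN, step):
            patch = gray[y:y + WIN, x:x + WIN].ravel()[None, :] / 255.0
            prob[y + WIN // 2, x + WIN // 2] = clf.predict_proba(patch)[0, 1]
    return prob

# Toy training: "text-like" high-variance patches vs. smooth background patches.
rng = np.random.default_rng(0)
text_patches = rng.integers(0, 256, (200, WIN * WIN)) / 255.0
bg_patches = np.full((200, WIN * WIN), 0.5) + rng.normal(0, 0.01, (200, WIN * WIN))
X = np.vstack([text_patches, bg_patches])
y = np.array([1] * 200 + [0] * 200)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, y)

gray = rng.integers(0, 256, (64, 64)).astype(float)
pmap = probability_map(gray, clf)       # text-probability image for MultiCAMShift
```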

Content-Based Image Retrieval Using Multi-Resolution Multi-Direction Filtering-Based CLBP Texture Features and Color Autocorrelogram Features

  • Bu, Hee-Hyung;Kim, Nam-Chul;Yun, Byoung-Ju;Kim, Sung-Ho
    • Journal of Information Processing Systems / v.16 no.4 / pp.991-1000 / 2020
  • We propose a content-based image retrieval system that uses a combination of the completed local binary pattern (CLBP) and the color autocorrelogram. CLBP features are extracted from a multi-resolution, multi-direction filtered domain of the value component, while color autocorrelogram features are extracted in the two dimensions of the hue and saturation components. Experimental results show that the proposed method yields a considerable improvement over methods that use only subsets of its features. It is also superior to the conventional CLBP, the color autocorrelogram using R, G, and B components, and the multichannel decoded local binary pattern, one of the latest methods.
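
A simplified sketch of the two feature types is given below, assuming scikit-image; plain uniform LBP stands in for CLBP on the filtered value component, and the autocorrelogram uses only horizontal and vertical neighbor pairs at each distance for brevity.

```python
# Texture histogram from LBP codes plus a simplified color autocorrelogram.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(value_channel, P=8, R=1):
    """Normalized histogram of uniform LBP codes on the value component."""
    codes = local_binary_pattern(value_channel, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def color_autocorrelogram(quantized, levels, distances=(1, 3, 5)):
    """P(neighbor at distance d has the same quantized color), per color and d."""
    feats = []
    for d in distances:
        same, total = np.zeros(levels), np.zeros(levels)
        for dy, dx in ((0, d), (d, 0)):
            a = quantized[:quantized.shape[0] - dy, :quantized.shape[1] - dx]
            b = quantized[dy:, dx:]
            for c in range(levels):
                mask = a == c
                total[c] += mask.sum()
                same[c] += (b[mask] == c).sum()
        feats.append(same / np.maximum(total, 1))
    return np.concatenate(feats)

# Toy usage: value channel for texture, 8-level quantized hue for color.
rng = np.random.default_rng(0)
v = rng.integers(0, 256, (64, 64))
q = (rng.random((64, 64)) * 8).astype(int)
feature = np.concatenate([lbp_histogram(v), color_autocorrelogram(q, 8)])
```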