• Title/Summary/Keyword: Texture Enhancement


PDE-based Image Interpolators

  • Cha, Young-Joon;Kim, Seong-Jai
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.1010-1019 / 2010
  • This article presents a PDE-based interpolation algorithm that effectively reproduces high-resolution imagery. Conventional PDE-based interpolation methods can produce sharp edges without checkerboard effects; however, they are approximators rather than true interpolators and tend to weaken fine structures. To overcome this drawback, a texture enhancement method is suggested as a post-process of PDE-based interpolation. The new method rectifies the image by simply incorporating the bilinear interpolation of the weakened texture components, which makes the resulting algorithm an interpolator. It has been numerically verified that the new algorithm, called the PDE-based image interpolator (PII), restores sharp edges and enhances texture components satisfactorily. PII outperforms the PDE-based skeleton-texture decomposition (STD) approach. Various numerical examples are shown to verify these claims.
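The rectification step the abstract describes — adding back a bilinearly interpolated texture residual so that the smoothed result again passes through the input samples — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' PII implementation: `pde_like_interpolate` is a hypothetical stand-in (bilinear start plus a few heat-diffusion steps) for an actual PDE-based upscaler, and all function names are assumptions.

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Bilinearly interpolate a 2-D array by an integer factor.

    Aligned so that output[i*factor, j*factor] == img[i, j] exactly.
    """
    h, w = img.shape
    ys = np.clip(np.arange(h * factor) / factor, 0, h - 1)
    xs = np.clip(np.arange(w * factor) / factor, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a, b = img[np.ix_(y0, x0)], img[np.ix_(y0, x1)]
    c, d = img[np.ix_(y1, x0)], img[np.ix_(y1, x1)]
    return ((1 - wy) * (1 - wx) * a + (1 - wy) * wx * b
            + wy * (1 - wx) * c + wy * wx * d)

def pde_like_interpolate(img, factor, steps=10, dt=0.2):
    """Stand-in PDE upscaler: bilinear start plus heat diffusion.

    The diffusion smooths (and so weakens) fine texture, mimicking the
    approximator behaviour the abstract describes.
    """
    u = bilinear_upsample(img, factor)
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * lap
    return u

def pii(low, factor):
    """Rectify the smoothed upscale with the bilinearly interpolated
    texture residual, so the result interpolates the input samples."""
    smooth = pde_like_interpolate(low, factor)
    texture = low - smooth[::factor, ::factor]  # weakened texture at coarse grid
    return smooth + bilinear_upsample(texture, factor)
```

With this alignment the output passes exactly through the original samples (`pii(low, f)[::f, ::f] == low`), which is precisely the "interpolator, not approximator" property the abstract emphasizes.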

Discriminatory Projection of Camouflaged Texture Through Line Masks

  • Bhajantri, Nagappa;Pradeep, Kumar R.;Nagabhushan, P.
    • Journal of Information Processing Systems / v.9 no.4 / pp.660-677 / 2013
  • The blending of a defective texture with the ambient texture results in camouflage. The gray-value or color distribution pattern of a camouflaged image fails to reflect considerable deviations between the camouflaged object and the surrounding background, which demands improved strategies for texture analysis. In this research, we propose an initial enhancement of the image that employs line masks, which results in better discrimination of the camouflaged portion. Finally, the gray-value distribution patterns of the enhanced image are analyzed to locate the camouflaged portions.
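A line-mask enhancement of the kind described can be sketched with the four classical 3x3 line-detection masks (horizontal, vertical, and the two diagonals), taking the maximum absolute response as the enhanced image. The specific masks and combination rule in the paper may differ; this is only an illustration of the general technique.

```python
import numpy as np

# The four classical 3x3 line-detection masks; the paper's masks may differ.
LINE_MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),  # horizontal
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),  # vertical
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),  # +45 degrees
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),  # -45 degrees
]

def filter2d(img, kernel):
    """Cross-correlate img with a small kernel (edge-replicated borders)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def line_mask_enhance(img):
    """Enhanced image: maximum absolute response over the line masks."""
    responses = [np.abs(filter2d(img, m)) for m in LINE_MASKS]
    return np.max(np.stack(responses), axis=0)
```

Because each mask sums to zero, flat regions respond weakly while one-pixel-wide line structures produce strong responses, making a camouflaged boundary easier to separate in the subsequent gray-value analysis.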

Hepatocellular Carcinoma: Texture Analysis of Preoperative Computed Tomography Images Can Provide Markers of Tumor Grade and Disease-Free Survival

  • Jiseon Oh;Jeong Min Lee;Junghoan Park;Ijin Joo;Jeong Hee Yoon;Dong Ho Lee;Balaji Ganeshan;Joon Koo Han
    • Korean Journal of Radiology / v.20 no.4 / pp.569-579 / 2019
  • Objective: To investigate the usefulness of computed tomography (CT) texture analysis (CTTA) in estimating histologic tumor grade and in predicting disease-free survival (DFS) after surgical resection in patients with hepatocellular carcinoma (HCC). Materials and Methods: Eighty-one patients with a single HCC who had undergone quadriphasic liver CT followed by surgical resection were enrolled. Texture analysis of tumors on preoperative CT images was performed using commercially available software. The mean, mean of positive pixels (MPP), entropy, kurtosis, skewness, and standard deviation (SD) of the pixel distribution histogram were derived with and without filtration. The texture features were then compared between groups classified according to histologic grade. Kaplan-Meier and Cox proportional hazards analyses were performed to determine the relationship between texture features and DFS. Results: SD and MPP quantified from fine to coarse textures on arterial-phase CT images showed significant positive associations with the histologic grade of HCC (p < 0.05). Kaplan-Meier analysis identified most CT texture features across the different filters from fine to coarse texture scales as significant univariate markers of DFS. Cox proportional hazards analysis identified skewness on arterial-phase images (fine texture scale, spatial scaling factor [SSF] 2.0, p < 0.001; medium texture scale, SSF 3.0, p < 0.001), tumor size (p = 0.001), microscopic vascular invasion (p = 0.034), rim arterial enhancement (p = 0.024), and peritumoral parenchymal enhancement (p = 0.010) as independent predictors of DFS. Conclusion: CTTA was demonstrated to provide texture features significantly correlated with higher tumor grade as well as predictive markers of DFS after surgical resection of HCCs in addition to other valuable imaging and clinico-pathologic parameters.
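The six first-order statistics named in the abstract (mean, MPP, entropy, kurtosis, skewness, SD of the pixel histogram) can be computed directly from a pixel region, as sketched below. Note that the commercial CTTA software referenced in the study also applies band-pass filtration at each spatial scaling factor (SSF) before computing these statistics; that filtration step is omitted here, and the function name is an assumption.

```python
import numpy as np

def histogram_texture_features(pixels, bins=64):
    """First-order histogram statistics of a pixel region:
    mean, SD, skewness, (excess) kurtosis, entropy, and MPP."""
    x = np.asarray(pixels, dtype=float).ravel()
    mean, sd = x.mean(), x.std()
    z = (x - mean) / sd                   # standardized pixel values
    positive = x[x > 0]
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins before taking logs
    return {
        "mean": mean,
        "sd": sd,
        "skewness": float(np.mean(z ** 3)),
        "kurtosis": float(np.mean(z ** 4)) - 3.0,  # excess kurtosis
        "entropy": float(-(p * np.log2(p)).sum()),
        "mpp": positive.mean() if positive.size else 0.0,  # mean of positive pixels
    }
```

In a study like this one, these per-lesion feature dictionaries would then feed the group comparisons and the Kaplan-Meier/Cox survival models.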

GAN-Based Local Lightness-Aware Enhancement Network for Underexposed Images

  • Chen, Yong;Huang, Meiyong;Liu, Huanlin;Zhang, Jinliang;Shao, Kaixin
    • Journal of Information Processing Systems / v.18 no.4 / pp.575-586 / 2022
  • Uneven lighting in real-world scenes causes visual degradation in underexposed regions, and insufficient consideration of these regions during enhancement results in over-/under-exposure, loss of detail, and color distortion. Confronting these challenges, an unsupervised low-light image enhancement network is proposed in this paper under the guidance of unpaired low-/normal-light images. The key components of the network are a super-resolution module (SRM), a GAN-based low-light image enhancement network (LLIEN), and a denoising-scaling module (DSM). The SRM improves the resolution of the low-light input images before illumination enhancement; operating in high-resolution space improves the preservation of texture details. Subsequently, a local lightness attention module in LLIEN effectively distinguishes unevenly illuminated areas and puts emphasis on low-light areas, ensuring the spatial consistency of illumination for locally underexposed images. Multiple discriminators, i.e., a global discriminator, a local region discriminator, and a color discriminator, then perform assessments from different perspectives to avoid over-/under-exposure and color distortion, guiding the network to generate images in line with human aesthetic perception. Finally, the DSM removes noise and produces the high-quality enhanced images. Both qualitative and quantitative experiments demonstrate that the approach achieves favorable results, indicating its superior capacity for restoring illumination and texture details.

CT Image Analysis of Hepatic Lesions Using CAD ; Fractal Texture Analysis

  • Hwang, Kyung-Hoon;Cheong, Ji-Wook;Lee, Jung-Chul;Lee, Hyung-Ji;Choi, Duck-Joo;Choe, Won-Sick
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.326-327 / 2007
  • We investigated whether CT images of hepatic lesions could be analyzed with a computer-aided diagnosis (CAD) tool. We retrospectively reanalyzed 14 liver CT images (10 hepatocellular cancers and 4 benign liver lesions; patients who presented with hepatic masses). The hepatic lesions on CT were segmented with a rectangular ROI technique, and their morphologic features were extracted and quantitated using fractal texture analysis. The contrast enhancement of the lesions was also quantified and added to the differential diagnosis. The best discriminating function combining the textural features and the contrast-enhancement values of the lesions was created using linear discriminant analysis. Textural feature analysis alone showed moderate accuracy in the differential diagnosis of hepatic lesions, but the result was not statistically significant. Combining textural analysis with the contrast-enhancement value improved the diagnostic accuracy, but further studies are needed.
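Fractal texture analysis of the kind mentioned here is commonly based on a box-counting estimate of fractal dimension; a minimal sketch follows. The paper does not specify its exact fractal features, so this binarize-and-box-count formulation (which assumes a square, power-of-two-sized ROI) is only an illustrative stand-in.

```python
import numpy as np

def box_counting_dimension(img, threshold):
    """Estimate the fractal (box-counting) dimension of the binarized
    image as the slope of log(box count) versus log(box size).

    Assumes a square image whose side length is a power of two.
    """
    Z = img > threshold
    n = Z.shape[0]
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)  # e.g. 32, 16, ..., 2
    counts = []
    for k in sizes:
        # Sum each k-by-k box, then count the boxes covering the object.
        S = np.add.reduceat(
            np.add.reduceat(Z, np.arange(0, n, k), axis=0),
            np.arange(0, n, k), axis=1)
        counts.append(np.count_nonzero(S))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a filled square yields a dimension near 2 and a one-pixel line near 1; textured lesion boundaries fall in between, which is what makes the dimension usable as a discriminating feature.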


Inter-layer Texture and Syntax Prediction for Scalable Video Coding

  • Lim, Woong;Choi, Hyomin;Nam, Junghak;Sim, Donggyu
    • IEIE Transactions on Smart Processing and Computing / v.4 no.6 / pp.422-433 / 2015
  • In this paper, we demonstrate inter-layer prediction tools for scalable video coders. The proposed scalable coder is designed to support not only spatial, quality, and temporal scalability but also view scalability. In addition, we propose quad-tree inter-layer prediction tools to improve coding efficiency at the enhancement layers. The proposed inter-layer prediction tools generate a texture prediction signal by exploiting texture, syntax, and residual information from a reference layer. Furthermore, the tools can be used with inter- and intra-prediction blocks within a large coding unit. The proposed framework preserves the rate-distortion performance of the base layer because it imposes no constraints such as constrained intra prediction. According to experiments, the framework supports spatial scalability with about 18.6%, 18.5%, and 25.2% bit overhead relative to single-layer coding. The proposed inter-layer prediction tool in the multi-loop decoding design achieves coding gains of 14.0%, 5.1%, and 12.1% in BD-rate at the enhancement layer, compared to single-layer HEVC, for the all-intra, low-delay, and random-access cases, respectively. For the single-loop decoding design, the proposed quad-tree inter-layer prediction achieves bit savings of 14.0%, 3.7%, and 9.8%.

Disparity Refinement near the Object Boundaries for Virtual-View Quality Enhancement

  • Lee, Gyu-cheol;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.10 no.5 / pp.2189-2196 / 2015
  • A stereo matching algorithm is usually used to obtain a disparity map from a pair of images. However, the disparity map obtained by stereo matching contains considerable noise and many erroneous regions. In this paper, we propose a virtual-view synthesis algorithm that refines the disparity map in order to improve the quality of the synthesized image. First, the erroneous regions are detected by examining the consistency of the disparity maps. Then, motion information is acquired by applying optical flow to the texture component of the image, and the occlusion regions are found from the flow; operating on the texture component improves the performance of the optical flow. The refined disparity map is finally used to synthesize the virtual-view image. The experimental results show that the proposed algorithm improves the quality of the generated virtual view.
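Checking the consistency of a left/right disparity pair is typically done with a left-right consistency check: each left pixel is projected into the right view, and a large disagreement between the two disparity values (or an off-image target) flags the pixel as erroneous or occluded. A minimal sketch of this standard check (the paper's exact criterion may differ, and the function name is an assumption):

```python
import numpy as np

def lr_consistency_errors(disp_left, disp_right, thresh=1.0):
    """Flag pixels whose left disparity disagrees with the right map.

    A pixel (y, x) in the left view should land at (y, x - d) in the
    right view with (nearly) the same disparity; a large disagreement
    or an out-of-image target marks the pixel as erroneous/occluded.
    """
    h, w = disp_left.shape
    errors = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        d = np.rint(disp_left[y]).astype(int)
        xr = xs - d                      # matching column in the right view
        valid = (xr >= 0) & (xr < w)
        row = ~valid                     # off-image targets are errors
        row[valid] = np.abs(disp_left[y, valid]
                            - disp_right[y, xr[valid]]) > thresh
        errors[y] = row
    return errors
```

The flagged mask marks the regions a refinement stage (such as the optical-flow-guided one described above) would then repair before view synthesis.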

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal / v.36 no.2 / pp.242-252 / 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated from the coded texture video and the coded depth video. Such synthesized views can be distorted by quantization noise and by inaccurate 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge noise suppression filtering process that preserves the edges of the depth video, and a method, based on a total variation approach to maximum a posteriori probability estimation, for reducing the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without post-processing.

A Comparative Analysis Between <Leonardo.Ai> and <Meshy> as AI Texture Generation Tools

  • Pingjian Jie;Xinyi Shan;Jeanhun Chung
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.333-339 / 2023
  • In three-dimensional (3D) modeling, texturing plays a crucial role as a visual element, imparting detail and realism to models. In contrast to traditional texturing methods, the current trend involves utilizing AI tools such as Leonardo.Ai and Meshy to create textures for 3D models in a more efficient and precise manner. This paper focuses on 3D texturing, conducting a comprehensive comparative study of the AI tools Leonardo.Ai and Meshy. By delving into the performance, functional differences, and respective application scopes of these two tools for generating 3D textures, we highlight potential applications and development trends within the realm of 3D texturing. The efficient use of AI tools in texture creation also has the potential to drive innovation and enhancement in the field of 3D modeling. In conclusion, this research aims to provide a comprehensive perspective for researchers, practitioners, and enthusiasts in related fields, fostering further innovation and development in this domain.

Texture-Spatial Separation based Feature Distillation Network for Single Image Super Resolution

  • Hyun Ho Han
    • Journal of Digital Policy / v.2 no.3 / pp.1-7 / 2023
  • In this paper, I propose a method for performing single-image super resolution by separating the texture and spatial domains and then classifying features based on detailed information. In CNN (convolutional neural network) based super resolution, complex procedures and the generation of redundant feature information during the feature estimation used to enhance details can degrade the quality of the result. The proposed method reduces procedural complexity and minimizes the generation of redundant feature information by splitting the input image into two channels: texture and spatial. In the texture channel, a feature refinement process with step-wise skip connections is applied for detail restoration, while in the spatial channel, a method is introduced to preserve the structural features of the image. Experimental results demonstrate that the proposed method improves performance in terms of PSNR and SSIM compared to existing super resolution methods, confirming the enhancement in quality.
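The texture/spatial two-channel split can be illustrated with a simple low-/high-frequency decomposition: a blur gives the spatial (structure) channel and the residual gives the texture channel, and the two sum back to the input. The paper's network learns its separation, so this fixed box-blur split is only a hypothetical stand-in.

```python
import numpy as np

def box_blur(img, k):
    """k-by-k box blur with edge-replicated borders."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(k):
        for j in range(k):
            out += padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (k * k)

def split_texture_spatial(img, k=5):
    """Split an image into a texture (high-frequency detail) channel and
    a spatial/structure (low-frequency) channel; they sum back to img."""
    structure = box_blur(img, k)
    texture = img - structure
    return texture, structure
```

Because the split is exactly invertible (`texture + structure == img`), the two channels can be processed independently, as the abstract describes, and recombined without losing information.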