• Title/Summary/Keywords: Texture Enhancement

78 search results

PDE-based Image Interpolators

  • Cha, Young-Joon; Kim, Seong-Jai
    • 한국통신학회논문지 / Vol. 35, No. 12C / pp. 1010-1019 / 2010
  • This article presents a PDE-based interpolation algorithm to effectively reproduce high resolution imagery. Conventional PDE-based interpolation methods can produce sharp edges without checkerboard effects; however, they are not interpolators but approximators and tend to weaken fine structures. In order to overcome the drawback, a texture enhancement method is suggested as a post-process of PDE-based interpolation methods. The new method rectifies the image by simply incorporating the bilinear interpolation of the weakened texture components and therefore makes the resulting algorithm an interpolator. It has been numerically verified that the new algorithm, called the PDE-based image interpolator (PII), restores sharp edges and enhances texture components satisfactorily. PII outperforms the PDE-based skeleton-texture decomposition (STD) approach. Various numerical examples are shown to verify the claim.
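
The texture add-back step this abstract describes (bilinearly interpolating the texture components the PDE stage weakened and adding them back, so the result interpolates the input) can be sketched as follows. This is a minimal NumPy illustration; `pii_postprocess`, `bilinear_upscale`, and the stand-in PDE output are assumptions for exposition, not the authors' code:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Bilinearly interpolate a 2D array to `factor` times its size."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

def pii_postprocess(low_res, pde_upscaled, factor):
    """Rectify a PDE-based upscaling by adding back the weakened texture.

    The residual at the coarse grid points is the texture the PDE
    approximator lost; interpolating it and adding it back makes the
    combined result agree with the low-resolution data.
    """
    residual = low_res - pde_upscaled[::factor, ::factor]
    return pde_upscaled + bilinear_upscale(residual, factor)
```

Any PDE-based upscaler can be plugged in for `pde_upscaled`; the post-process only needs its output and the original low-resolution image.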

Discriminatory Projection of Camouflaged Texture Through Line Masks

  • Bhajantri, Nagappa; Pradeep, Kumar R.; Nagabhushan, P.
    • Journal of Information Processing Systems / Vol. 9, No. 4 / pp. 660-677 / 2013
  • The blending of defective texture with the ambient texture results in camouflage. The gray-value or color distribution pattern of camouflaged images fails to reflect considerable deviations between the camouflaged object and the surrounding background, which demands improved strategies for texture analysis. In this research, we propose an initial enhancement of the image using line masks, which results in better discrimination of the camouflaged portion. Finally, the gray-value distribution patterns of the enhanced image are analyzed to locate the camouflaged portions.
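
The line-mask enhancement can be illustrated with the classic 3x3 line-detection masks (horizontal, vertical, and both diagonals), taking the maximum absolute response at each pixel. This is a generic sketch of the technique, not the exact masks used in the paper:

```python
import numpy as np

# Classic 3x3 line-detection masks: horizontal, vertical, 45 and 135 degrees.
LINE_MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),  # horizontal
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),  # vertical
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),  # 45 degrees
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),  # 135 degrees
]

def convolve2d(img, k):
    """Valid-mode 2D correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def line_mask_enhance(img):
    """Maximum absolute response over the four line masks."""
    return np.max([np.abs(convolve2d(img, m)) for m in LINE_MASKS], axis=0)
```

A one-pixel-wide horizontal line yields a strong response (6 with these weights) from the horizontal mask, while flat regions respond with 0, which is what makes thin structures stand out against the camouflaging background.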

Hepatocellular Carcinoma: Texture Analysis of Preoperative Computed Tomography Images Can Provide Markers of Tumor Grade and Disease-Free Survival

  • Jiseon Oh; Jeong Min Lee; Junghoan Park; Ijin Joo; Jeong Hee Yoon; Dong Ho Lee; Balaji Ganeshan; Joon Koo Han
    • Korean Journal of Radiology / Vol. 20, No. 4 / pp. 569-579 / 2019
  • Objective: To investigate the usefulness of computed tomography (CT) texture analysis (CTTA) in estimating histologic tumor grade and in predicting disease-free survival (DFS) after surgical resection in patients with hepatocellular carcinoma (HCC). Materials and Methods: Eighty-one patients with a single HCC who had undergone quadriphasic liver CT followed by surgical resection were enrolled. Texture analysis of tumors on preoperative CT images was performed using commercially available software. The mean, mean of positive pixels (MPP), entropy, kurtosis, skewness, and standard deviation (SD) of the pixel distribution histogram were derived with and without filtration. The texture features were then compared between groups classified according to histologic grade. Kaplan-Meier and Cox proportional hazards analyses were performed to determine the relationship between texture features and DFS. Results: SD and MPP quantified from fine to coarse textures on arterial-phase CT images showed significant positive associations with the histologic grade of HCC (p < 0.05). Kaplan-Meier analysis identified most CT texture features across the different filters from fine to coarse texture scales as significant univariate markers of DFS. Cox proportional hazards analysis identified skewness on arterial-phase images (fine texture scale, spatial scaling factor [SSF] 2.0, p < 0.001; medium texture scale, SSF 3.0, p < 0.001), tumor size (p = 0.001), microscopic vascular invasion (p = 0.034), rim arterial enhancement (p = 0.024), and peritumoral parenchymal enhancement (p = 0.010) as independent predictors of DFS. Conclusion: CTTA was demonstrated to provide texture features significantly correlated with higher tumor grade as well as predictive markers of DFS after surgical resection of HCCs in addition to other valuable imaging and clinico-pathologic parameters.
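
The first-order histogram features named in this abstract (mean, SD, MPP, entropy, kurtosis, skewness) are straightforward to compute from a region's pixel values. A minimal sketch, omitting the fine-to-coarse Laplacian-of-Gaussian filtration step (the SSF scales) that the commercial software applies first:

```python
import numpy as np

def histogram_texture_features(pixels, bins=64):
    """First-order texture features of a pixel distribution."""
    x = np.asarray(pixels, dtype=float).ravel()
    mean = x.mean()
    sd = x.std()
    z = (x - mean) / sd
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0          # excess kurtosis
    pos = x[x > 0]
    mpp = pos.mean() if pos.size else 0.0     # mean of positive pixels
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / x.size
    entropy = -np.sum(p * np.log2(p))          # Shannon entropy in bits
    return {"mean": mean, "sd": sd, "skewness": skewness,
            "kurtosis": kurtosis, "mpp": mpp, "entropy": entropy}
```

In the filtration-histogram approach, the same statistics are recomputed on band-pass-filtered copies of the ROI at each spatial scaling factor, which is how the "fine to coarse" feature sets arise.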

GAN-Based Local Lightness-Aware Enhancement Network for Underexposed Images

  • Chen, Yong; Huang, Meiyong; Liu, Huanlin; Zhang, Jinliang; Shao, Kaixin
    • Journal of Information Processing Systems / Vol. 18, No. 4 / pp. 575-586 / 2022
  • Uneven lighting in real-world scenes causes visual degradation in underexposed regions. If these regions receive insufficient consideration during enhancement, the result is over-/under-exposure, loss of detail, and color distortion. Confronting these challenges, this paper proposes an unsupervised low-light image enhancement network guided by unpaired low-/normal-light images. The key components of our network are a super-resolution module (SRM), a GAN-based low-light image enhancement network (LLIEN), and a denoising-scaling module (DSM). The SRM improves the resolution of the low-light input images before illumination enhancement; operating in high-resolution space makes texture details easier to preserve. Subsequently, the local lightness attention module in LLIEN distinguishes unevenly illuminated areas and emphasizes low-light areas, ensuring spatially consistent illumination for locally underexposed images. Multiple discriminators (global, local-region, and color) then assess the result from different perspectives to avoid over-/under-exposure and color distortion, guiding the network to generate images in line with human aesthetic perception. Finally, the DSM removes noise and produces high-quality enhanced images. Both qualitative and quantitative experiments demonstrate that our approach achieves favorable results, indicating its superior capacity for restoring illumination and texture details.
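
The idea behind lightness-aware attention (weight dark regions more heavily so the enhancer focuses on them) can be sketched with a simple inverse-luminance map. This is a common heuristic used for illustration only, not the paper's learned attention module:

```python
import numpy as np

def lightness_attention(img):
    """Toy lightness-attention map: darker regions receive larger weights.

    Uses normalized inverse luminance; real local lightness attention
    is learned inside the network rather than computed this way.
    """
    lum = img.mean(axis=-1) if img.ndim == 3 else img  # grayscale luminance
    lum = (lum - lum.min()) / (lum.max() - lum.min() + 1e-8)
    return 1.0 - lum
```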

CT Image Analysis of Hepatic Lesions Using CAD ; Fractal Texture Analysis

  • Hwang, Kyung-Hoon; Cheong, Ji-Wook; Lee, Jung-Chul; Lee, Hyung-Ji; Choi, Duck-Joo; Choe, Won-Sick
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2007년도 춘계학술발표대회 / pp. 326-327 / 2007
  • We investigated whether CT images of hepatic lesions could be analyzed with a computer-aided diagnosis (CAD) tool. We retrospectively reanalyzed 14 liver CT images (10 hepatocellular cancers and 4 benign liver lesions; patients who presented with hepatic masses). The hepatic lesions on CT were segmented with a rectangular ROI technique, and morphologic features were extracted and quantitated using fractal texture analysis. The contrast enhancement of the hepatic lesions was also quantified and added to the differential diagnosis. The best discriminating function combining the textural features and the contrast-enhancement values of the lesions was created using linear discriminant analysis. Textural feature analysis showed moderate accuracy in the differential diagnosis of hepatic lesions, but the result was not statistically significant. Combining textural analysis with the contrast-enhancement value improved diagnostic accuracy, but further studies are needed.
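
A standard way to obtain a fractal texture measure from a segmented lesion is box counting: cover a binary mask with boxes of decreasing size and fit the slope of log(count) against log(1/size). A minimal sketch (square masks, power-of-two box sizes assumed for simplicity):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    n = mask.shape[0]  # assumes a square mask
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    c += 1
        counts.append(c)
    # Slope of log(count) vs. log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```

A filled region gives a dimension near 2 and a thin curve near 1; irregular tumor boundaries fall in between, which is what makes the measure useful as a texture feature.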


Inter-layer Texture and Syntax Prediction for Scalable Video Coding

  • Lim, Woong; Choi, Hyomin; Nam, Junghak; Sim, Donggyu
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 6 / pp. 422-433 / 2015
  • In this paper, we demonstrate inter-layer prediction tools for scalable video coders. The proposed scalable coder is designed to support not only spatial, quality, and temporal scalability but also view scalability. In addition, we propose quad-tree inter-layer prediction tools to improve coding efficiency at the enhancement layers. The proposed tools generate the texture prediction signal by exploiting texture, syntax, and residual information from a reference layer, and they can be used with inter- and intra-prediction blocks within a large coding unit. The proposed framework preserves the rate-distortion performance of the base layer because it imposes no restrictions such as constrained intra prediction. According to the experiments, the framework supports spatial scalability with about 18.6%, 18.5%, and 25.2% overhead bits relative to single-layer coding. In the multi-loop decoding design, the proposed inter-layer prediction tool achieves coding gains of 14.0%, 5.1%, and 12.1% in BD-rate at the enhancement layer, compared to single-layer HEVC, for the all-intra, low-delay, and random-access cases, respectively. In the single-loop decoding design, the proposed quad-tree inter-layer prediction achieves bit savings of 14.0%, 3.7%, and 9.8%.
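
BD-rate figures like those quoted above come from Bjontegaard's method: fit each rate-distortion curve in log-rate/PSNR space and compare the average log-rate over the overlapping PSNR range. A compact sketch using the common cubic-polynomial variant (an assumption about the variant, not the paper's exact measurement tool):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta bitrate (%): average bitrate change at equal PSNR.

    Fits cubic polynomials of log(rate) as a function of PSNR and
    integrates their difference over the common PSNR interval.
    """
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0
```

A negative BD-rate means the test codec needs fewer bits for the same quality, so the 14.0%/5.1%/12.1% gains reported above correspond to BD-rates of roughly -14.0%, -5.1%, and -12.1%.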

Disparity Refinement near the Object Boundaries for Virtual-View Quality Enhancement

  • Lee, Gyu-cheol; Yoo, Jisang
    • Journal of Electrical Engineering and Technology / Vol. 10, No. 5 / pp. 2189-2196 / 2015
  • A stereo matching algorithm is usually used to obtain a disparity map from a pair of images. However, the disparity map obtained by stereo matching contains considerable noise and many error regions. In this paper, we propose a virtual-view synthesis algorithm that refines the disparity map in order to improve the quality of the synthesized image. First, error regions are detected by examining the consistency of the disparity maps. Then, optical flow is applied to the texture component of the image, which improves the performance of the flow, and the resulting motion information is used to find the occlusion regions. The refined disparity map is finally used to synthesize the virtual-view image. The experimental results show that the proposed algorithm improves the quality of the generated virtual view.
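
The consistency check in the first step is typically a left-right cross-check: a pixel's left disparity must agree with the right disparity at the pixel it maps to, or it is flagged as an error/occlusion candidate. A minimal sketch (the function name and threshold are illustrative assumptions):

```python
import numpy as np

def consistency_errors(disp_left, disp_right, thresh=1.0):
    """Flag pixels whose left/right disparities disagree (error regions)."""
    h, w = disp_left.shape
    errors = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(round(disp_left[y, x]))
            xr = x - d  # corresponding column in the right image
            out_of_range = xr < 0 or xr >= w
            if out_of_range or abs(disp_left[y, x] - disp_right[y, xr]) > thresh:
                errors[y, x] = True
    return errors
```

Flagged pixels are then refilled during refinement (here, guided by the optical-flow motion information) before the disparity map is used for view synthesis.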

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun; Hur, Namho; Lee, Seong-Whan
    • ETRI Journal / Vol. 36, No. 2 / pp. 242-252 / 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated using coded texture video and coded depth video. Such synthesized views can be distorted by quantization noise and inaccurate 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge noise suppression filter that preserves the edges of the depth video, and a method based on a total-variation approach to maximum a posteriori probability estimation that reduces the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without post-processing.
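
Total-variation approaches of this kind minimize a fidelity term plus a TV penalty, which removes quantization noise while keeping edges. A minimal gradient-descent sketch of the classic (smoothed) ROF objective, 0.5*||u - f||^2 + lam*TV(u); this illustrates the general technique, not the paper's specific MAP formulation:

```python
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on a smoothed total-variation denoising objective."""
    u = noisy.copy()
    for _ in range(iters):
        # Forward differences of the current estimate.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # eps smooths |grad u| at 0
        px, py = ux / mag, uy / mag
        # Divergence of the normalized gradient field (backward differences).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - noisy) - lam * div)
    return u
```

Larger `lam` smooths more aggressively; isolated noise spikes shrink while large flat regions and strong edges are largely preserved, which is why TV regularization suits blocky quantization noise.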

A Comparative Analysis Between <Leonardo.Ai> and <Meshy> as AI Texture Generation Tools

  • Pingjian Jie; Xinyi Shan; Jeanhun Chung
    • International Journal of Advanced Culture Technology / Vol. 11, No. 4 / pp. 333-339 / 2023
  • In three-dimensional (3D) modeling, texturing plays a crucial role as a visual element, imparting detail and realism to models. In contrast to traditional texturing methods, the current trend involves utilizing AI tools such as Leonardo.Ai and Meshy to create textures for 3D models in a more efficient and precise manner. This paper focuses on 3D texturing, conducting a comprehensive comparative study of the AI tools Leonardo.Ai and Meshy. By examining the performance, functional differences, and respective application scopes of these two tools in the generation of 3D textures, we highlight potential applications and development trends within the realm of 3D texturing. The efficient use of AI tools in texture creation also has the potential to drive innovation and enhancement in the field of 3D modeling. In conclusion, this research aims to provide a comprehensive perspective for researchers, practitioners, and enthusiasts in related fields, fostering further innovation and development in this domain.

Texture-Spatial Separation based Feature Distillation Network for Single Image Super Resolution

  • Han, Hyun-Ho
    • 디지털정책학회지 / Vol. 2, No. 3 / pp. 1-7 / 2023
  • This paper proposes a single-image super-resolution method that separates the texture and spatial domains and then classifies features centered on detail information. In CNN (convolutional neural network)-based super-resolution, the complex procedures and redundant feature information generated during feature estimation for detail improvement can degrade quality, the most important criterion in super-resolution. The proposed method separates the input image into two channels, texture and spatial, in order to reduce procedural complexity and minimize the generation of redundant feature information, thereby improving the quality of the super-resolution result. In the texture channel, feature extraction is improved by a feature refinement process that applies a residual block structure with stage-wise skip connections to the image transformed into multiple scales, restoring detail information; in the spatial channel, smoothed features are used to remove noise and preserve structural features. Experiments show that the proposed method improves PSNR and SSIM evaluation results compared to existing super-resolution methods, confirming the quality improvement.
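
The texture/spatial separation at the front of such a pipeline can be illustrated with a simple low-pass split: the smoothed image is the spatial (structural) component and the residual is the texture component. A generic sketch with a box blur standing in for the smoothing (the paper's network learns its own separation; these function names are illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter with edge padding; output has the same size as the input."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def texture_spatial_split(img, k=5):
    """Split an image into a smoothed spatial component and a texture residual."""
    spatial = box_blur(img, k)
    texture = img - spatial
    return texture, spatial
```

By construction the two components sum back to the input, so each channel of the network can specialize (detail restoration on `texture`, noise-robust structure on `spatial`) without losing information.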