• Title/Abstract/Keyword: Map texture

207 search results

입력 영상과의 상관관계를 이용한 변이 지도 영상의 개선 및 객체 분할 (Disparity map image Improvement and object segmentation using the Correlation of Original Image)

  • 신동진;최민수;한동일
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2006년도 하계종합학술대회 / pp.317-318 / 2006
  • A depth map obtained with a stereo camera contains many noise artifacts and errors. These errors are caused by mismatching of corresponding points, which occurs in texture-less regions of the stereo input images or at occlusions. In this paper, we use a method that removes the noise through post-processing and reduces the disparity errors caused by mismatching in texture-less regions by exploiting the correlation between the depth map and the input images. We then propose a novel method that segments objects using the improved disparity maps and projections.
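The abstract above describes replacing unreliable disparity values in texture-less regions by consulting the input image. A minimal NumPy/SciPy sketch of that general idea follows; the texture measure (local variance), window size, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: suppress disparity errors in texture-less regions of the
# left input image, in the spirit of the correlation-based refinement the
# abstract describes (not the authors' implementation).
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def refine_disparity(disparity, left_image, win=9, texture_thresh=25.0):
    """Replace disparity values in low-texture regions with a local median."""
    img = left_image.astype(np.float32)
    # Local variance as a simple texture measure.
    mean = uniform_filter(img, win)
    var = uniform_filter(img * img, win) - mean * mean
    textureless = var < texture_thresh

    # A median-filtered disparity is more reliable where matching is ambiguous.
    smoothed = median_filter(disparity, size=win)
    refined = np.where(textureless, smoothed, disparity)
    return refined, textureless
```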


비디오 기반의 질감 전이 기법 (Texture Transfer Based on Video)

  • 콩푸팔라;이호창;윤경현
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2012년도 한국컴퓨터종합학술대회논문집 Vol.39 No.1(C) / pp.406-407 / 2012
  • Texture transfer is an NPR (non-photorealistic rendering) technique for expressing various styles according to a source (reference) image. Since the late 2000s there has been much research on texture transfer, but video-based work remains limited, and existing methods do not use important features such as directional information, which is needed to express the detailed characteristics of the target. We therefore propose a new method that generates texture-transfer animation from video with directional effects, maintaining temporal coherence and controlling the direction of the texture. To maintain temporal coherence, we use optical flow and a confidence map to adapt to occlusion/disocclusion boundaries, and we control the texture direction to follow the structure of the input. To express different texture effects in different regions, we compute the gradient with a directional weight. With these techniques, our algorithm produces animations that maintain temporal coherence and express directional texture effects, reflecting the characteristics of the source and target images well, and our results express various texture directions automatically.
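A minimal sketch of the temporal-coherence step described in the abstract, assuming OpenCV's Farneback optical flow and a forward-backward consistency check as the confidence map; the thresholds and function names are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def warp_with_confidence(prev_gray, curr_gray, prev_stylized, err_thresh=1.0):
    # Dense flow in both directions (Farneback); the parameters are illustrative.
    fwd = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # For each pixel of the current frame, sample the previous stylized frame
    # at the position given by the backward (current -> previous) flow.
    map_x, map_y = xs + bwd[..., 0], ys + bwd[..., 1]
    warped = cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)

    # Forward-backward consistency as a confidence map: large error marks
    # occlusion/disocclusion boundaries where the warped frame is discarded.
    fb_err = np.linalg.norm(bwd + cv2.remap(fwd, map_x, map_y,
                                            cv2.INTER_LINEAR), axis=-1)
    confidence = (fb_err < err_thresh).astype(np.float32)
    return warped, confidence
```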

MPEG-4 MAC(Multiple Auxiliary Component) 기반 스테레오스코픽 비디오 부호화 (Stereoscopic Video Coding Using MPEG-4 Multiple Auxiliary Component)

  • 조숙희;윤국진;안충현
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 신호처리소사이어티 추계학술대회 논문집 / pp.167-170 / 2003
  • We propose a stereoscopic video coding method that uses the syntax of MAC (Multiple Auxiliary Component), which was added to MPEG-4 Visual Version 2 in order to describe the transparency of a video object. We also define novel MAC semantics in MPEG-4 to support the proposed coding method. The major difference between the existing coding method and the proposed one is the addition of residual texture coding. The proposed method assigns the disparity map and the residual texture to the three components of MAC: one component for the disparity map, and the remaining two components for the luminance and chrominance data of the residual texture, respectively. The performance of the proposed method is evaluated in terms of PSNR through computer simulations.
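As a rough illustration of the data layout the abstract describes (and not the MPEG-4 reference software), the sketch below forms a disparity-compensated residual of the right view and packs the disparity map plus the residual's luminance and chrominance into three auxiliary planes; the sign convention of the disparity shift is an assumption.

```python
import numpy as np

def build_mac_planes(left_y, left_c, right_y, right_c, disparity):
    """left_*/right_*: HxW luma/chroma planes; disparity: HxW horizontal shifts."""
    h, w = right_y.shape
    # Disparity-compensated sample positions in the left view (sign assumed).
    cols = np.clip(np.arange(w)[None, :] - disparity.astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]

    # Prediction of the right view from the left view, then the residual texture.
    pred_y = left_y[rows, cols]
    pred_c = left_c[rows, cols]
    residual_y = right_y.astype(np.int16) - pred_y.astype(np.int16)
    residual_c = right_c.astype(np.int16) - pred_c.astype(np.int16)

    # Three auxiliary components: disparity, residual luma, residual chroma.
    return disparity, residual_y, residual_c
```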


Tsunami-induced Change Detection Using SAR Intensity and Texture Information Based on the Generalized Gaussian Mixture Model

  • Jung, Min-young;Kim, Yong-il
    • 한국측량학회지 / Vol.34 No.2 / pp.195-206 / 2016
  • Remote sensing using SAR data has many advantages when applied to disaster sites because of its wide coverage and all-weather acquisition capability. Although a single-pol (polarimetric) SAR image cannot represent the land surface as well as a quad-pol SAR image, single-pol SAR data are still worth using for disaster-induced change detection. In this paper, an automatic change detection method based on a mixture of GGDs (generalized Gaussian distributions) is proposed, and the usefulness of textural features and intensity is evaluated with the proposed method. Three ALOS/PALSAR images were used in the experiments, and the study site was Norita City, which was affected by the 2011 Tohoku earthquake. The experimental results show that the proposed automatic change detection method is practical for disaster sites where large areas change. The intensity information is useful for detecting disaster-induced changes, with a g-mean of 68.3%, but the texture information is not. The autocorrelation and correlation features show the interesting implication that they tend not to extract agricultural areas in the change detection map. Therefore, the final tsunami-induced change map is produced by combining three maps: one derived from the intensity information and used as the initial map, and the others derived from the textural information and used as auxiliary data.
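A minimal sketch of automatic change detection with a two-component mixture of generalized Gaussian distributions, in the spirit of the abstract; it alternates hard assignment and per-class GGD fitting on a log-ratio image, which is only an approximation of the method detailed in the paper.

```python
import numpy as np
from scipy.stats import gennorm

def ggd_mixture_change_map(log_ratio, n_iter=10):
    """Label each pixel of a log-ratio difference image as changed/unchanged."""
    x = log_ratio.ravel()
    # Initialize the split at the median: lower half "unchanged", upper "changed".
    labels = x > np.median(x)
    for _ in range(n_iter):
        # Fit one generalized Gaussian (beta, loc, scale) per class.
        params = [gennorm.fit(x[labels == k]) for k in (0, 1)]
        priors = [np.mean(labels == k) for k in (0, 1)]
        # Reassign each pixel to the class with the larger posterior.
        log_post = np.stack([np.log(priors[k] + 1e-12) +
                             gennorm.logpdf(x, *params[k]) for k in (0, 1)])
        labels = np.argmax(log_post, axis=0)
    return labels.reshape(log_ratio.shape).astype(bool)
```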

Animal Fur Recognition Algorithm Based on Feature Fusion Network

  • Liu, Peng;Lei, Tao;Xiang, Qian;Wang, Zexuan;Wang, Jiwei
    • Journal of Multimedia Information System / Vol.9 No.1 / pp.1-10 / 2022
  • China is a major country in the animal fur industry, and the total production and consumption of fur are increasing year by year. However, fur recognition in the production process still relies mainly on visual identification by skilled workers, so the stability and consistency of products cannot be guaranteed. To address this problem, this paper proposes a feature-fusion-based animal fur recognition network built on a typical convolutional neural network structure, relying on rapidly developing deep learning techniques. The network superimposes the texture feature, the most prominent feature of fur images, onto the channel dimension of the input image. The output feature map of the first convolutional layer is inverted to obtain an inverted feature map, which is concatenated with the original output feature map and then activated with Leaky ReLU, making full use of both the texture information of the fur image and the inverted feature information. Experimental results show that the algorithm improves recognition accuracy by 9.08% on the Fur_Recognition dataset and 6.41% on the CIFAR-10 dataset. The algorithm can change the current situation in which fur recognition relies on manual visual classification, and lays a foundation for improving the efficiency of fur production.
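A minimal PyTorch sketch of the fusion step described above: the first convolution's output is inverted, concatenated with the original feature map along the channel dimension, and activated with Leaky ReLU. Treating inversion as negation is an assumption; the paper defines the exact operation and network structure.

```python
import torch
import torch.nn as nn

class InvertedFeatureFusion(nn.Module):
    def __init__(self, in_channels=3, out_channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        feat = self.conv1(x)                        # first-layer feature map
        inverted = -feat                            # inverted feature map (assumed form)
        fused = torch.cat([feat, inverted], dim=1)  # concat along the channel dimension
        return self.act(fused)                      # Leaky ReLU activation
```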

실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법 (A 2D / 3D Map Modeling of Indoor Environment)

  • 조상우;박진우;권용무;안상철
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2006년도 학술대회 1부 / pp.355-361 / 2006
  • In large-scale environments such as airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will report surveyed information about the environment and communicate with a human operator using such data, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can provide much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, gives the user the feeling of being at the robot's location so that the user can interact with the robot more naturally in a remote setting, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple and easy-to-use method for obtaining a 3D textured model. To express reality, the 3D model must be integrated with real scenes. Most other 3D modeling methods use two data acquisition devices: one for building the 3D model, typically a 2D laser range-finder, and another for obtaining realistic textures, typically a camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range-finder, texture acquisition and stitching, and texture-mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls; the geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with a wide field of view. Image stitching and image cutting are used to generate textured images corresponding to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space in a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system through the WWW.
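One step of the pipeline above, extruding wall segments extracted from the 2D metric map into vertical quads of the geometric 3D model, can be sketched as follows; segment extraction, texture stitching, and VRML export are separate steps, and the constant wall height is an assumption.

```python
import numpy as np

def extrude_walls(segments, wall_height=2.5):
    """segments: list of ((x1, y1), (x2, y2)) wall lines from the 2D metric map.
    Returns a list of 4-vertex (x, y, z) quads usable as wall polygons."""
    quads = []
    for (x1, y1), (x2, y2) in segments:
        # Each 2D segment becomes a vertical rectangle from the floor to wall_height.
        quads.append(np.array([(x1, y1, 0.0), (x2, y2, 0.0),
                               (x2, y2, wall_height), (x1, y1, wall_height)]))
    return quads
```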


다중 파라메터 MR 영상에서 텍스처 분석을 통한 자동 전립선암 검출 (Automated Prostate Cancer Detection on Multi-parametric MR imaging via Texture Analysis)

  • 김영지;정주립;홍헬렌;황성일
    • 한국멀티미디어학회논문지 / Vol.19 No.4 / pp.736-746 / 2016
  • In this paper, we propose an automatic prostate cancer detection method based on an SVM using position, signal intensity, and texture features in multi-parametric MR images. First, to align the prostate on DWI and the ADC map to T2wMR, the transformation parameters of DWI are estimated by normalized mutual information-based rigid registration. Then, to normalize the signal intensity range across patients, histogram stretching is performed. Second, to detect prostate cancer areas in T2wMR, SVM classification with position, signal intensity, and texture features is performed on T2wMR, DWI, and the ADC map. Our feature classification using multi-parametric MR imaging can improve the prostate cancer detection rate on T2wMR.
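A minimal scikit-learn sketch of the classification stage described above, assuming per-voxel feature vectors (position, T2wMR/DWI/ADC intensities, texture) have already been extracted after registration; the histogram-stretching percentiles and SVM parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def histogram_stretch(img, low_pct=1, high_pct=99):
    """Normalize the intensity range by percentile-based histogram stretching."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def train_voxel_classifier(X, y):
    """X: N x D matrix of [x, y, z, T2w, DWI, ADC, texture...] features per voxel.
    y: 1 for cancer, 0 for normal tissue (training labels from annotations)."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)
    return clf
```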

텍스쳐맵 경로 재설정 툴의 개발에 관한 연구 (A Study of Reassigning Texture Map File for Correcting Directory)

  • 송밝음
    • 한국멀티미디어학회논문지 / Vol.21 No.4 / pp.535-544 / 2018
  • Developing a new texture path tool is important in the 3D industry for automatically managing and setting up texture paths for character and environment models. In this study, I compare the problems, methods, and functions of the new tool (the SongRepath tool) with other commonly used tools such as File Path Editor, Genie, and He Texture Path. Next, I analyze the top-down approach used by these tools and design a new bottom-up algorithm for better usability and efficiency. Finally, the SongRepath tool is analyzed and compared with the other tools in terms of convenience of operation, the number of steps required to get a result, the reported counts of texture files, new paths, files not found, and paths not changed, errors or problems, and the ability to cancel while processing. The results show that the SongRepath tool produces fewer errors and saves more time in the production workflow.
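A generic sketch of the bottom-up idea described above (not the SongRepath implementation): rather than validating each stored path top-down, the texture files under the project root are indexed once and broken paths are resolved against that index; the file extensions and the reporting structure are assumptions.

```python
import os

def build_texture_index(project_root, exts=(".png", ".jpg", ".tif", ".exr")):
    """Index every texture file under the project root by file name."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(project_root):
        for name in filenames:
            if name.lower().endswith(exts):
                index.setdefault(name, os.path.join(dirpath, name))
    return index

def reassign_paths(stored_paths, index):
    """Return (new_paths, not_found) for a list of stored texture paths."""
    new_paths, not_found = {}, []
    for path in stored_paths:
        name = os.path.basename(path)
        if os.path.exists(path):
            new_paths[path] = path            # path already valid, left unchanged
        elif name in index:
            new_paths[path] = index[name]     # remapped to the indexed location
        else:
            not_found.append(path)            # reported instead of silently kept
    return new_paths, not_found
```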

지표면 별 영상잡음과 영상질감을 이용한 SAR 클러터 영상 생성 (SAR Clutter Image Generation Based on Measured Speckles and Textures)

  • 권순구;오이석
    • 대한원격탐사학회지 / Vol.25 No.4 / pp.375-381 / 2009
  • In this paper, various types of terrain surfaces are analyzed to study their scattering characteristics, SAR clutter images are generated, and the results are compared with real SAR clutter images. First, to analyze the characteristics of each surface, the input parameters are measured for each surface type. Using the measured data, the scattering coefficients at each incidence angle are obtained with the Oh model, the PO model, and a radiative transfer model (RTM). To generate the SAR image, a DEM (digital elevation map) and an LCM (land cover map) of the measurement area are first produced. The incidence angle of each pixel is computed from the height information of the DEM, and the scattering coefficient of the corresponding surface at that incidence angle is assigned. The LCM is built by surveying the area and recording paddies, fields, mountains, roads, man-made objects, and so on on a 1:5,000 map, and is then used for SAR image generation. Using the DEM and LCM data, the coefficients according to incidence angle and surface type are computed, a SAR clutter image is generated using speckle and texture, and the result is compared with the real image.
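A minimal sketch of the clutter-generation step described above: a scattering coefficient is looked up per pixel from the land-cover class and local incidence angle, then modulated with multiplicative speckle. The sigma-zero lookup tables would come from the Oh/PO/RTM models in the paper; the gamma speckle model and ENL value are assumptions.

```python
import numpy as np

def generate_clutter(incidence_deg, land_cover, sigma0_tables, enl=4, rng=None):
    """incidence_deg, land_cover: HxW arrays; sigma0_tables: dict mapping
    land-cover class id -> callable(theta_deg) returning sigma0 in linear units."""
    rng = rng or np.random.default_rng()
    sigma0 = np.zeros_like(incidence_deg, dtype=np.float64)
    for cls, table in sigma0_tables.items():
        mask = land_cover == cls
        sigma0[mask] = table(incidence_deg[mask])   # per-class, per-angle lookup

    # Multiplicative speckle for ENL looks: gamma-distributed with unit mean.
    speckle = rng.gamma(shape=enl, scale=1.0 / enl, size=sigma0.shape)
    return sigma0 * speckle
```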

Magnetic resonance imaging texture analysis for the evaluation of viable ovarian tissue in patients with ovarian endometriosis: a retrospective case-control study

  • Lee, Dayong;Lee, Hyun Jung
    • Journal of Yeungnam Medical Science / Vol.39 No.1 / pp.24-30 / 2022
  • Background: Texture analysis has been used as a method for quantifying image properties based on textural features. The aim of the present study was to evaluate the usefulness of magnetic resonance imaging (MRI) texture analysis for the evaluation of viable ovarian tissue on the perfusion map of ovarian endometriosis. Methods: To generate a normalized perfusion map, subtracted T1-weighted images (T1WI) were obtained from T1WI and contrast-enhanced T1WI sequences performed with the same parameters in 25 patients with surgically confirmed ovarian endometriosis. Integrated density is defined as the sum of the pixel values in the image or selection, which is equivalent to the product of the area and the mean gray value. We investigated the parameters for texture analysis in ovarian endometriosis, including angular second moment (ASM), contrast, correlation, inverse difference moment (IDM), and entropy. Results: The perfusion ratio and integrated density of the normal ovary were 0.52±0.05 and 238.72±136.21, respectively. Compared with the normal ovary, the affected ovary showed significant differences in total size (p<0.001), fractional area ratio (p<0.001), and perfusion ratio (p=0.010), but no significant differences in perfused tissue area (p=0.158) and integrated density (p=0.112). In the comparison of texture parameters between the ovary with endometriosis and the contralateral normal ovary, ASM (p=0.004), contrast (p=0.002), IDM (p<0.001), and entropy (p=0.028) showed significant differences. A linear regression analysis revealed that the fractional area had significant correlations with ASM (r2=0.211), IDM (r2=0.332), and entropy (r2=0.289). Conclusion: MRI texture analysis could be useful for the evaluation of viable ovarian tissue in patients with ovarian endometriosis.
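A minimal scikit-image sketch of the GLCM texture features named above (ASM, contrast, correlation, IDM, entropy); IDM corresponds to scikit-image's "homogeneity" property, and entropy is computed directly from the normalized co-occurrence matrix. Quantizing the perfusion-map ROI to an 8-bit image beforehand is assumed.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8, distances=(1,), angles=(0, np.pi / 2)):
    """Compute GLCM-based texture features from an 8-bit ROI image."""
    glcm = graycomatrix(roi_uint8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {
        "ASM": graycoprops(glcm, "ASM").mean(),
        "contrast": graycoprops(glcm, "contrast").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
        "IDM": graycoprops(glcm, "homogeneity").mean(),  # inverse difference moment
    }
    # Entropy averaged over the distance/angle co-occurrence matrices.
    p = glcm.astype(np.float64)
    feats["entropy"] = float(-np.sum(p * np.log2(p + 1e-12))
                             / (p.shape[2] * p.shape[3]))
    return feats
```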