• Title/Summary/Keyword: image of scientists

Visibility Method for Transparent Splats on Depth Image Based Rendering (깊이 기반 3차원 영상 렌더링에서 투명한 스플랫을 사용한 가시성 기법)

  • Suh, Mo-Young;Chung, Woo-Nam;Han, Tack-Don
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.706-708 / 2005
  • In this paper, we present a visibility method for depth image based 3D rendering that uses transparent splats. It improves performance by collapsing the conventional two-pass visibility method into a single pass using the z-buffer algorithm together with a modified McMillan's algorithm. In addition, the image quality problem caused by incorrect reference settings that depend on splat order is resolved by modifying McMillan's algorithm.

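As a rough illustration of the single-pass visibility idea in the entry above, the sketch below composites transparent splats against a z-buffer in one traversal, assuming the splats already arrive in an occlusion-compatible (McMillan-style) order; the splat format, blending rule, and opacity test are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_splats(splats, width, height, eps=1e-3):
    """Single-pass compositing of transparent splats with a z-buffer.

    `splats` is an iterable of (x, y, depth, r, g, b, alpha) tuples, assumed
    to be traversed in occlusion-compatible (back-to-front) order, so no
    second visibility pass is needed.
    """
    color = np.zeros((height, width, 3), dtype=np.float32)     # accumulated color
    zbuf = np.full((height, width), np.inf, dtype=np.float32)  # nearest opaque depth

    for x, y, z, r, g, b, a in splats:
        if not (0 <= x < width and 0 <= y < height):
            continue                     # splat falls outside the image
        if z > zbuf[y, x] + eps:
            continue                     # hidden behind an opaque splat
        src = np.array([r, g, b], dtype=np.float32)
        color[y, x] = a * src + (1.0 - a) * color[y, x]   # "over" compositing
        if a >= 1.0 - eps:
            zbuf[y, x] = min(zbuf[y, x], z)                # opaque splats occlude
    return color
```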

Creating images for navigation in the image-based virtual environment (영상기반 가상환경에서 네비게이션을 위한 영상 생성)

  • 신동준;한창호
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.532-534 / 2000
  • Image-based rendering has the advantage of generating real-time images at low cost, but source images alone are not sufficient to build a virtual environment. Camera information, depth information, and user input are used together with the source images, and various methods have been studied to generate a desired scene from a small number of source images plus this additional information. In this paper, the images needed for navigation in an image-based virtual environment are generated with an image referencing technique. In a virtual environment of shallow depth, movement can be expressed with a single image, but when the depth is large, additional images are required. A new image between two such images could also be generated by morphing, but this has the drawback of requiring considerable user input and time. The image referencing technique can quickly generate images for navigation in a virtual environment with little user input.

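The abstract does not spell out the image referencing technique itself, but a common way to synthesize a nearby view from one source image plus its depth map and camera information is forward warping, sketched below; the function name, the intrinsics matrix `K`, and the translation-only camera motion are illustrative assumptions, not the paper's method.

```python
import numpy as np

def warp_forward(src, depth, K, t):
    """Forward-warp a source image to a camera translated by `t`.

    src   : (H, W, 3) source image
    depth : (H, W) per-pixel depth in the source view
    K     : (3, 3) camera intrinsics
    t     : (3,) translation of the new view in the source camera frame
    Occlusions are not resolved here (last writer wins); a z-buffer could
    be added for that.
    """
    h, w = depth.shape
    out = np.zeros_like(src)
    Kinv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts = Kinv @ pix * depth.reshape(1, -1)        # back-project to 3D
    pts_new = pts - t.reshape(3, 1)                # move into the new camera frame
    proj = K @ pts_new                             # re-project into the new view
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[valid], u[valid]] = src.reshape(-1, 3)[valid]
    return out
```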

Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu;Kim, Min-Jeong;Hong, Gum-Won;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.335-345 / 2009
  • In this paper, we propose a new web image caption extraction method that considers the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing it. The positional relation represents where the caption is located in terms of distance and direction from the corresponding image. The lexical similarity indicates how likely it is that the main text generates the caption of the image. Compared with previous image caption extraction approaches, which utilize only the independent features of images and captions, the proposed approach improves caption extraction recall and precision, raising the F-measure by 28%, through the additional features of positional relation and lexical similarity.
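
A toy sketch of how the two cues in this entry could be combined to score candidate caption blocks; the weights, the "below the image" bonus, and Jaccard overlap as the lexical similarity are assumptions for illustration, not the scoring functions used in the paper.

```python
import math
import re

def position_score(img_box, cand_box):
    """Score a text block's placement relative to the image: closer is better,
    and blocks below the image get a small bonus (weights are illustrative)."""
    ix, iy, iw, ih = img_box           # x, y, width, height
    cx, cy, cw, ch = cand_box
    dx = (cx + cw / 2) - (ix + iw / 2)
    dy = (cy + ch / 2) - (iy + ih / 2)
    dist = math.hypot(dx, dy)
    below_bonus = 1.2 if dy > 0 else 1.0
    return below_bonus / (1.0 + dist / 100.0)

def lexical_similarity(candidate, main_text):
    """Jaccard word overlap between a candidate caption and the main text."""
    tok = lambda s: set(re.findall(r"\w+", s.lower()))
    a, b = tok(candidate), tok(main_text)
    return len(a & b) / len(a | b) if a | b else 0.0

def caption_score(img_box, cand_box, candidate, main_text, w_pos=0.6, w_lex=0.4):
    """Combine the positional and lexical cues into a single ranking score."""
    return (w_pos * position_score(img_box, cand_box)
            + w_lex * lexical_similarity(candidate, main_text))
```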

Multi GPU Based Image Registration for Cerebrovascular Extraction and Interactive Visualization (뇌혈관 추출과 대화형 가시화를 위한 다중 GPU기반 영상정합)

  • Park, Seong-Jin;Shin, Yeong-Gil
    • Journal of KIISE:Computing Practices and Letters / v.15 no.6 / pp.445-449 / 2009
  • In this paper, we propose a computationally efficient multi-GPU accelerated image registration technique to correct the motion difference between a pre-contrast CT image and a post-contrast CTA image. Our method consists of two steps: multi-GPU based image registration and cerebrovascular visualization. First, it computes a similarity measure for voxel-based registration, exploiting parallelism both across the GPUs and within each GPU. Then, it subtracts the CT image transformed by the optimal transformation matrix from the CTA image and visualizes the subtracted volume using a GPU-based volume rendering technique. We compare our method with existing methods on 5 pairs of pre-contrast brain CT and post-contrast brain CTA images to demonstrate its superiority in terms of visual quality and computational time. Experimental results show that our method visualizes the cerebral vessels clearly, which aids the diagnosis of vascular disease. Our multi-GPU approach is 11.6 times faster than a CPU-based approach and 1.4 times faster than a single-GPU approach in total processing time.
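
A CPU-only, translation-only sketch of the register-then-subtract pipeline described in this entry; the real method optimizes a full transformation matrix and evaluates the similarity measure in parallel across multiple GPUs, and the SSD similarity and brute-force search here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def register_and_subtract(ct, cta, search=range(-2, 3)):
    """Brute-force translation-only registration of a pre-contrast CT volume
    to a post-contrast CTA volume, followed by subtraction so that the
    contrast-filled vessels remain."""
    best, best_ssd = (0, 0, 0), np.inf
    for dz in search:
        for dy in search:
            for dx in search:
                moved = shift(ct, (dz, dy, dx), order=1, mode="nearest")
                ssd = float(np.mean((moved - cta) ** 2))   # similarity measure
                if ssd < best_ssd:
                    best, best_ssd = (dz, dy, dx), ssd
    aligned = shift(ct, best, order=1, mode="nearest")      # optimal transform
    vessels = cta - aligned                                  # subtracted volume
    return best, vessels
```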

Quantification of Melanin Density at Epidermal Basal Layer by Using Confocal Scanning Laser Microscope (CSLM) (Confocal Scanning Laser Microscope (CSLM)을 이용한 피부 기저층 멜라닌 밀도의 정량화)

  • Kim, Dong Hyun;Lee, Sung Ho;Oh, Myoung Jin;Choi, Go Woon;Yang, Woo Chul;Park, Chang Seo
    • Journal of the Society of Cosmetic Scientists of Korea / v.40 no.3 / pp.259-268 / 2014
  • Non-invasive technologies in skin research have made it possible to image living skin without a biopsy or histologic processing of tissue. A confocal scanning laser microscope (CSLM) operated at a near-infrared wavelength of 830 nm allows non-invasive visualization of the inner structure of the skin. In previous research using CSLM, the melanin cap and papillary ring were clearly observed in pigmented areas between the stratum basale and the papillary dermis. In this study, conversion of CSLM digital images into numerical estimates using scanning probe image processor (SPIP) software was attempted for the first time. It is concluded that quantification of CSLM images can pave the way to expanding the range of applications of CSLM.
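
The paper performs the quantification with SPIP software; as a generic stand-in, the sketch below estimates melanin density as the fraction of pixels brighter than a threshold in a basal-layer CSLM image, the threshold rule being an illustrative assumption.

```python
import numpy as np

def melanin_density(cslm_image, threshold=None):
    """Estimate melanin density as the fraction of bright pixels in a CSLM
    image of the basal layer (melanin back-scatters strongly at 830 nm)."""
    img = np.asarray(cslm_image, dtype=np.float32)
    if threshold is None:
        threshold = img.mean() + 2.0 * img.std()   # simple adaptive threshold
    melanin_mask = img >= threshold                 # bright, melanin-rich pixels
    return melanin_mask.mean()                      # fraction of melanin pixels
```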

Estimating Geometric Transformation of Planar Pattern in Spherical Panoramic Image (구면 파노라마 영상에서의 평면 패턴의 기하 변환 추정)

  • Kim, Bosung;Park, Jong-Seung
    • Journal of KIISE / v.42 no.10 / pp.1185-1194 / 2015
  • A spherical panoramic image does not conform to the pin-hole camera model, and hence previous techniques based on plane-to-plane transformation cannot be used. In this paper, we propose a new method to estimate the planar geometric transformation between a planar image and a spherical panoramic image. The proposed method estimates the transformation parameters for latitude, longitude, rotation and scaling factors when matching pairs between a spherical panoramic image and a planar image are given. A planar image is projected into a spherical panoramic image through two steps of nonlinear coordinate transformations, which makes it difficult to compute the geometric transformation. The advantage of our method is that it uncovers each of the implicit factors as well as the overall transformation. The experimental results show that the proposed method achieves estimation errors of around 1% and is not affected by deformation factors such as latitude and rotation.
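
An illustrative version of the two nonlinear steps mentioned in this entry, mapping a planar-image point into equirectangular panorama coordinates under latitude, longitude, rotation and scale parameters; the exact parameterization and formulas are assumptions, not the estimation method of the paper.

```python
import numpy as np

def plane_to_panorama(u, v, f, lat0, lon0, theta, s, pano_w, pano_h):
    """Map a point (u, v) on a planar image (focal length f) into
    equirectangular panorama pixel coordinates."""
    # step 0: in-plane rotation and scaling of the planar point
    ur = s * (u * np.cos(theta) - v * np.sin(theta))
    vr = s * (u * np.sin(theta) + v * np.cos(theta))
    # step 1 (nonlinear): planar point -> viewing ray -> spherical angles,
    # offset by the plane's latitude/longitude placement on the sphere
    lon = lon0 + np.arctan2(ur, f)
    lat = lat0 + np.arctan2(vr, np.hypot(ur, f))
    # step 2 (nonlinear): spherical angles -> equirectangular pixel coordinates
    px = (lon + np.pi) / (2.0 * np.pi) * pano_w
    py = (np.pi / 2.0 - lat) / np.pi * pano_h
    return px, py
```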

Efficient Image Retrieval using Minimal Spatial Relationships (최소 공간관계를 이용한 효율적인 이미지 검색)

  • Lee, Soo-Cheol;Hwang, Een-Jun;Byeon, Kwang-Jun
    • Journal of KIISE:Databases / v.32 no.4 / pp.383-393 / 2005
  • Retrieval of images from image databases by spatial relationship can be performed effectively through visual interface systems. In these systems, the representation of images with 2D strings, which are derived from symbolic projections, provides an efficient and natural way to construct an image index and is also an ideal representation for visual queries. With this approach, retrieval is reduced to matching two symbolic strings. However, with 2D-string representations, spatial relationships between the objects in an image might not be exactly specified, and ambiguities arise in the retrieval of images of 3D scenes. In order to remove ambiguous descriptions of object spatial relationships, in this paper images are described by spatial relationships using a spatial location algebra for 3D image scenes. We also remove redundant spatial relationships using several reduction rules. A reduction mechanism based on these rules can be used in query processing systems that retrieve images by content, giving better precision and flexibility in image retrieval.
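
A minimal sketch of the 2D-string representation from symbolic projections that this entry builds on: each object is projected onto the two axes and the orderings are encoded as strings, so retrieval reduces to string matching. Ties between coordinates, and the paper's 3D spatial location algebra and reduction rules, are not covered here.

```python
def two_d_string(objects):
    """Build the classic 2D-string representation from symbolic projections.

    `objects` maps a symbol name to its (x, y) reference point. Objects are
    ordered along each axis and '<' separates symbols with increasing
    coordinates (equal coordinates are not handled in this sketch).
    """
    u = " < ".join(name for name, _ in sorted(objects.items(), key=lambda kv: kv[1][0]))
    v = " < ".join(name for name, _ in sorted(objects.items(), key=lambda kv: kv[1][1]))
    return u, v

# example: a tree left of a house, with the sun above both
print(two_d_string({"tree": (1, 1), "house": (4, 0), "sun": (2, 5)}))
# -> ('tree < sun < house', 'house < tree < sun')
```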

Image Watermarking Robust to Geometrical Attacks based on Normalization using Invariant Centroid (불변의 무게중심을 이용한 영상 정규화에 기반한 기하학적 공격에 강인한 워터마킹)

  • 김범수;최재각
    • Journal of KIISE:Information Networking / v.31 no.3 / pp.243-251 / 2004
  • This paper proposes a digital image watermarking scheme that is robust to geometrical attacks. The method improves on the image normalization-based watermarking (INW) technique, which does not deal effectively with geometrical attacks that involve cropping. Image normalization is based on the moments of the image; however, geometrical attacks generally crop the image boundary, so the moments are no longer those of the original image. As a result, the normalized images before and after the attack do not have the same form, i.e., synchronization is lost. To solve the cropping problem of INW, the Invariant Centroid (IC) is proposed in this paper. The IC is the center of gravity of a central area of the gray-scale image, which remains invariant even when the image is geometrically attacked, and only this central area, which is less likely to be cropped by geometrical attacks, is used for normalization. Experimental results show that the IC-based method is especially robust to geometrical attacks with cropping.
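
A rough sketch of computing an intensity center of gravity over a central region of a gray-scale image, in the spirit of the Invariant Centroid described in this entry; the iterative re-centering, the circular window, and its radius are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def invariant_centroid(gray, radius_ratio=0.3, n_iter=10):
    """Estimate a centroid from a central circular region only, so that
    boundary cropping by geometrical attacks has little effect on it."""
    h, w = gray.shape
    cy, cx = h / 2.0, w / 2.0                      # start at the image center
    r = radius_ratio * min(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):                        # re-center until it settles
        mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2
        weights = gray * mask
        total = weights.sum()
        if total == 0:
            break
        cy = (weights * ys).sum() / total          # intensity center of gravity
        cx = (weights * xs).sum() / total
    return cy, cx
```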

SIFT based Image Similarity Search using an Edge Image Pyramid and an Interesting Region Detection (윤곽선 이미지 피라미드와 관심영역 검출을 이용한 SIFT 기반 이미지 유사성 검색)

  • Yu, Seung-Hoon;Kim, Deok-Hwan;Lee, Seok-Lyong;Chung, Chin-Wan;Kim, Sang-Hee
    • Journal of KIISE:Databases / v.35 no.4 / pp.345-355 / 2008
  • Among various shape descriptors, SIFT is popularly used in computer vision applications such as object recognition, motion tracking, and 3D reconstruction. However, it is not easy to apply SIFT to image similarity search as is, since it produces many high-dimensional keypoint vectors. In this paper, we present a SIFT-based image similarity search method using an edge image pyramid and interesting region detection. The proposed method extracts keypoints, which are invariant to the contrast, scale, and rotation of the image, by using the edge image pyramid, and removes many unnecessary keypoints by using the Hough transform. The proposed Hough transform can detect elliptical objects, so it can be used to find interesting regions. Experimental results demonstrate that the retrieval performance of the proposed method is about 20% better than that of traditional SIFT in average recall.
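
A sketch of the overall pipeline using OpenCV: SIFT keypoints are detected on an edge-image pyramid and restricted to an elliptical interesting region. Contour-plus-ellipse fitting is used here as a stand-in for the paper's Hough-based ellipse detection, and the Canny thresholds and pyramid depth are illustrative.

```python
import cv2
import numpy as np

def interesting_region_mask(edges):
    """Fit an ellipse to the largest edge contour and return it as a mask;
    if no usable contour exists, the whole image is kept."""
    mask = np.full(edges.shape, 255, dtype=np.uint8)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    usable = [c for c in contours if len(c) >= 5]          # fitEllipse needs >= 5 points
    if usable:
        ellipse = cv2.fitEllipse(max(usable, key=cv2.contourArea))
        mask = np.zeros(edges.shape, dtype=np.uint8)
        cv2.ellipse(mask, ellipse, 255, -1)                # filled elliptical region
    return mask

def edge_pyramid_sift(gray, levels=3):
    """Detect SIFT keypoints on an edge-image pyramid, keeping only keypoints
    inside the elliptical interesting region at each level."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = [], []
    img = gray
    for _ in range(levels):
        edges = cv2.Canny(img, 50, 150)                    # edge image at this level
        mask = interesting_region_mask(edges)
        kps, desc = sift.detectAndCompute(edges, mask)
        keypoints.extend(kps)
        if desc is not None:
            descriptors.append(desc)
        img = cv2.pyrDown(img)                             # next pyramid level
    return keypoints, (np.vstack(descriptors) if descriptors else None)
```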

Visual Programming Environment for Effective Teaching and Research in Image Processing (영상처리에서 효율적인 교육과 연구를 위한 비주얼 프로그래밍 환경 개발)

  • Lee Jeong Heon;Heo Hoon;Chae Oksam
    • Journal of KIISE:Software and Applications / v.32 no.1 / pp.50-61 / 2005
  • With the widespread use of multimedia devices, the demand for image processing engineers is increasing in various fields, yet there are few engineers who can develop practical applications in the image processing area. To teach practical image processing techniques, we need a visual programming environment that can efficiently present image processing theory and, at the same time, provide interactive experiments for the theory presented. In this paper, we propose such a visual programming environment as an integrated environment for image processing. It consists of theory presentation systems and experiment systems based on the visual programming environment. The theory presentation systems support multimedia data, web documents and PowerPoint files. The proposed system provides an integrated environment for application development as well as education; it accumulates teaching materials and exercise data and manages them, offering an ideal image processing education and research environment for students and instructors.