• Title/Summary/Keyword: grayscale representation layer (그레이스케일 표현 계층)

Search Results: 2

An Image-Based Annotation for DICOM Standard Image (DICOM 표준 영상을 위한 이미지 기반의 주석)

  • Jang Seok-Hwan;Kim Whoi-Yul
    • Journal of Korea Multimedia Society / v.7 no.9 / pp.1321-1328 / 2004
  • In this article, we present a new DICOM object that enables image-based annotations within a DICOM image. Because the proposed annotation stores its content as an image, various types of content, such as text, sketches, and scanned images, can be imported into an annotation easily. The proposed annotation is embedded directly into the DICOM image, but it does not affect the original image quality because it is carried in an independent data channel. The proposed annotation is expected to be very useful for small and medium-sized clinics that cannot afford a picture archiving and communication system (PACS) or an electronic medical record (EMR) system.


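The independent-data-channel idea above can be illustrated with a minimal NumPy sketch (this is an illustration of the principle, not the paper's actual DICOM encoding): the annotation lives in its own channel and is composited only at display time, so the original pixel data is never modified.

```python
import numpy as np

def annotate_view(original, annotation_mask, annotation_value=255):
    """Composite an annotation channel onto a copy of the image.

    The original pixel array is left untouched; the annotation is
    stored separately and merged only when rendering a view.
    """
    view = original.copy()
    view[annotation_mask] = annotation_value
    return view

# Hypothetical 8-bit grayscale frame and a rasterized annotation channel.
image = np.zeros((4, 4), dtype=np.uint8)
mask = np.zeros_like(image, dtype=bool)
mask[0, :] = True  # e.g., an arrow or text drawn into the channel

display = annotate_view(image, mask)
assert image.max() == 0  # original pixel data unchanged
```

Because the annotation channel is stored independently, deleting or editing an annotation never degrades the diagnostic image.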
Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin;Kim, Tae-Won;Kim, Yong-Goo;Choi, Yoo-Joo
    • Journal of Broadcast Engineering / v.25 no.4 / pp.545-559 / 2020
  • In this paper, we propose a content-creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users assign numbers to the gestures they use and to the media effects that respond to them, and link the two in a text-based configuration file. The interactive media content is also connected to a dynamic projection mapping module that tracks the user's location and projects the media effects onto the user. To reduce the processing time and memory burden of gesture recognition, the user's movement is represented as a grayscale motion history image. We designed a convolutional neural network (CNN) model for gesture recognition that takes motion history images as input. The number of network layers and the hyperparameters of the CNN model were determined through experiments on recognizing five gestures and then applied in the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% and a processing speed of 12.04 FPS. In an experiment linking three media effects, we confirmed that the intended media effect was displayed appropriately in real time according to the user's gesture.
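The grayscale motion history image (MHI) mentioned above can be sketched with a minimal NumPy update step (the decay constant and frame setup are assumptions for illustration, not the paper's exact preprocessing): pixels where motion was just detected are set to full intensity, and all other pixels fade, so newer motion appears brighter than older motion.

```python
import numpy as np

def update_mhi(mhi, motion_mask, decay=32):
    """One grayscale motion-history-image update step.

    Pixels flagged by `motion_mask` are set to full intensity (255);
    all other pixels fade by `decay`, so older motion appears darker.
    """
    faded = np.clip(mhi.astype(np.int16) - decay, 0, 255).astype(np.uint8)
    faded[motion_mask] = 255
    return faded

# Toy 3-frame sequence: motion sweeps left to right across a 1x3 strip.
mhi = np.zeros((1, 3), dtype=np.uint8)
for col in range(3):
    mask = np.zeros((1, 3), dtype=bool)
    mask[0, col] = True  # frame differencing flags this pixel as moving
    mhi = update_mhi(mhi, mask)

print(mhi.tolist())  # → [[191, 223, 255]]: oldest motion darkest
```

A single grayscale MHI frame like this compactly encodes a whole movement trajectory, which is why it is a lightweight input representation for a CNN gesture classifier.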