• Title/Summary/Keyword: Separating image

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung;Kang, Hyun;Jung, Kee-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.9
    • /
    • pp.1177-1186
    • /
    • 2005
  • In vision-based interfaces for video games, gestures serve as game commands in place of keyboard or mouse input. To give the user a more natural interface, such systems must tolerate unintentional movements and continuous gestures. For this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied the method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method achieves an average spotting rate of 93.36% on continuous gestures, confirming its potential for a gesture-based interface for computer games.

  • PDF
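
The spotting-plus-recognition idea can be illustrated with a minimal sliding-window sketch; the templates, the 1-D pose features, and the distance threshold below are all hypothetical, not the paper's actual recognizer:

```python
# Hypothetical gesture templates: each is a short sequence of 1-D pose features.
TEMPLATES = {
    "wave": [0.0, 1.0, 0.0, -1.0, 0.0],
    "push": [0.0, 0.5, 1.0, 1.5, 2.0],
}

def distance(window, template):
    """Mean squared distance between a feature window and a template."""
    return sum((w - t) ** 2 for w, t in zip(window, template)) / len(template)

def spot_gestures(stream, threshold=0.1):
    """Slide a window over the stream; report (start, label) whenever a
    template matches well enough. Frames matching no template are treated
    as unintentional movement and skipped."""
    hits = []
    n = len(next(iter(TEMPLATES.values())))
    for start in range(len(stream) - n + 1):
        window = stream[start:start + n]
        label, score = min(
            ((name, distance(window, t)) for name, t in TEMPLATES.items()),
            key=lambda p: p[1],
        )
        if score < threshold:
            hits.append((start, label))
    return hits

stream = [0.3, 0.2, 0.0, 1.0, 0.0, -1.0, 0.0, 0.4]
print(spot_gestures(stream))  # the "wave" template matches at index 2
```

In this toy form, spotting and recognition are one operation: a window is labeled meaningful only when some gesture model matches it, so segmentation falls out of recognition.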

Cloud Broadcasting Service Platform (클라우드 방송 서비스 플랫폼)

  • Kim, Hong-Ik;Lee, Dong-Ik;Lee, Jong-Han
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.3
    • /
    • pp.623-638
    • /
    • 2017
  • The application fields of cloud technologies have gradually expanded with technological development and the diversification of services. In digital broadcasting platforms, cloud technology is used for investment efficiency, operational efficiency, and competitive advantage in services. Recently, cloud broadcasting platforms for the UI (User Interface) and data broadcasting have been commercialized in Korea, and competition among broadcasting services has become fierce. Cloud broadcasting removes the service's dependency on the hardware resources and software architecture of the set-top box (STB), and allows the user interface and services to be operated in a unified way on a cloud server, without the legacy practice of managing each STB type separately. In this paper, we explain the effects of applying an image-based cloud broadcasting service platform.

Study of User Reuse Intention for Gamified Interactive Movies upon Flow Experience

  • Han, Zhe;Lee, Hyun-Seok
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.281-293
    • /
    • 2020
  • As Christine Daley suggested, the "interaction-image" is typical of the age of "Cinema 3.0", which integrates the interactivity of game art and blurs the boundary between producers and consumers. Users are allowed to take an active part in the scene as "players" and to manage the tempo of the story to some extent; this makes them willing to watch interactive movies repeatedly, trying diverse options to unlock more branch storylines. Accordingly, this paper aims to analyze the contributory factors and the effect mechanism behind users' reuse intention for gamified interactive movies, and to offer specific concepts for improving reuse intention from the perspectives of interactive film production and operation. Integrating flow theory and the Technology Acceptance Model (TAM), and separating the intrinsic and extrinsic motivations of the key factors based on the Stimulus-Organism-Response (S-O-R) framework, the study builds an empirical model of users' reuse intention covering cognition, design, attitude, and emotional experience, and conducts an empirical analysis of 425 valid samples using SPSS 22 and Amos 23. The results show that user satisfaction and flow experience strongly affect users' reuse intention, and that perceived usefulness, perceived ease of use, perceived enjoyment, remote perception, interactivity, and flow experience have a significant positive influence on user satisfaction.

A Study on the Development of E-book Contents for Fashion Online Entrepreneurship Education (패션온라인창업 교육을 위한 전자책 콘텐츠 개발에 대한 연구)

  • Hwa-Yeon Jeong;Eun-Hee Hong
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.26 no.1
    • /
    • pp.33-44
    • /
    • 2024
  • This study developed e-book content in order to use e-books as a tool for providing more efficient classes to learners who are familiar with smart devices and online spaces. The e-book contents were produced using Sigil-0.9.10. The development process is as follows. Before development, the manuscript files, the image files to be inserted, the fonts to be used, and the e-book cover must be prepared. After inserting the cover images, the table of contents is registered using the title tags, and the free fonts are registered. A style sheet is also created for the text and images used in the body and linked to the file containing the full text. Then the full text file is split into separate files, one per chapter, and each chapter is completed in turn. The e-book was produced with a focus on hyperlink functions, so that the educational content and various example images can be reached directly. Currently, there is a lack of research on e-books as textbooks in fashion design programs at universities. In the future, if e-book contents are developed according to the characteristics of each course and the level of the learners, they can serve as effective teaching tools.
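
The chapter-splitting step described above can be sketched with the standard library alone; the "Chapter N." heading pattern and the sample manuscript are assumptions for illustration, not the book's actual format:

```python
import re

def split_chapters(text):
    """Split a full manuscript into (title, body) pairs, assuming each
    chapter starts on a line like 'Chapter 1. ...'. The pairs can then be
    written out as one XHTML file per chapter for the e-book."""
    pattern = re.compile(r"^(Chapter \d+\..*)$", re.MULTILINE)
    parts = pattern.split(text)
    # parts = [preamble, title1, body1, title2, body2, ...]
    chapters = []
    for i in range(1, len(parts), 2):
        chapters.append((parts[i].strip(), parts[i + 1].strip()))
    return chapters

manuscript = """Preface text.
Chapter 1. Market research
Find your niche.
Chapter 2. Opening a store
Register the business.
"""
for title, body in split_chapters(manuscript):
    print(title, "->", body)
```

Because the heading pattern is a capturing group, `re.split` keeps the chapter titles in the result, which is what lets each file carry its own title tag.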

An Automatic Segmentation Method for Video Object Plane Generation (비디오 객체 생성을 위한 자동 영상 분할 방법)

  • 최재각;김문철;이명호;안치득;김성대
    • Journal of Broadcast Engineering
    • /
    • v.2 no.2
    • /
    • pp.146-155
    • /
    • 1997
  • The new video coding standard MPEG-4 enables content-based functionalities. It requires a prior decomposition of sequences into video object planes (VOPs) such that each VOP represents a moving object. This paper addresses an image segmentation method for separating moving objects from the still background (non-moving area) in video sequences using a statistical hypothesis test. In the proposed method, three consecutive image frames are exploited, and a hypothesis test is performed by comparing the two means of the two consecutive difference images, which results in a t-test. This hypothesis test yields a change detection mask that indicates moving areas (foreground) and non-moving areas (background). Moreover, an effective method for extracting

  • PDF
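
The three-frame idea can be sketched as follows; this is a simplified Welch t-test of each window against an assumed noise-only background window, not the paper's exact formulation or threshold:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def change_mask(prev, curr, next_, window=4, t_crit=2.0):
    """Three-frame change detection on flat grey-level lists: sum the two
    consecutive absolute difference images, then t-test each window
    against the first window, assumed to be static background."""
    d = [abs(c - p) + abs(n - c) for p, c, n in zip(prev, curr, next_)]
    noise = d[:window]  # assumption: the first window sees no motion
    return [welch_t(d[s:s + window], noise) > t_crit
            for s in range(0, len(d), window)]

prev  = [10, 11, 10, 12, 10, 11, 10, 11]
curr  = [10, 10, 11, 11, 60, 70, 65, 75]
next_ = [10, 10, 11, 12, 120, 130, 125, 135]
print(change_mask(prev, curr, next_))  # [False, True]: motion in 2nd window
```

Windows whose difference statistics are indistinguishable from camera noise become background in the mask; the rest become foreground, i.e. candidate VOP area.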

Automatic Liver Segmentation on Abdominal Contrast-enhanced CT Images for the Pre-surgery Planning of Living Donor Liver Transplantation

  • Jang, Yujin;Hong, Helen;Chung, Jin Wook
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.1
    • /
    • pp.37-40
    • /
    • 2014
  • Purpose: For living donor liver transplantation, liver segmentation is difficult due to the variability of the liver's shape across patients and the similar density of neighboring organs such as the heart, stomach, kidney, and spleen. In this paper, we propose automatic segmentation of the liver using multi-planar anatomy and a deformable surface model in portal-phase abdominal contrast-enhanced CT images. Method: Our method is composed of four main steps. First, the optimal liver volume is extracted using positional information from the pelvis and ribs and by separating the lungs and heart from the CT images. Second, anisotropic diffusion filtering and adaptive thresholding are used to segment the initial liver volume. Third, morphological opening and connected component labeling are applied on multiple planes to remove neighboring organs. Finally, a deformable surface model and a probability summation map are used to refine the posterior liver surface and recover the left lobe missed in the previous steps. Results: All experimental datasets were acquired from ten living donors using a SIEMENS CT system. Each image had a matrix size of 512×512 pixels with in-plane resolutions ranging from 0.54 to 0.70 mm. The slice spacing was 2.0 mm and the number of images per scan ranged from 136 to 229. For accuracy evaluation, the average symmetric surface distance (ASD) and the volume overlap error (VE) between the automatic segmentation and manual segmentations by two radiologists were calculated. The ASD was 0.26±0.12 mm for manual 1 versus automatic and 0.24±0.09 mm for manual 2 versus automatic, while that between the radiologists was 0.23±0.05 mm. The VE was 0.86±0.45% for manual 1 versus automatic and 0.73±0.33% for manual 2 versus automatic, while that between the radiologists was 0.76±0.21%. Conclusion: Our method can be used for liver volumetry in the pre-surgery planning of living donor liver transplantation.
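
The connected component labeling step in such a pipeline can be sketched in plain Python as a 4-connected flood fill on a binary mask (the real method operates on thresholded CT slices; the toy grid below is illustrative):

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling on a binary image (list of lists),
    as used to keep the liver candidate region and drop neighboring
    organs. Returns a label map and the number of components."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not labels[y][x]:
                n += 1                       # start a new component
                q = deque([(y, x)])
                labels[y][x] = n
                while q:                     # breadth-first flood fill
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, n = label_components(img)
print(n)  # 2 components
```

Keeping only the largest labeled component is the usual way to discard smaller disconnected regions belonging to other organs.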

Comparison between Colour Intensity of Tongue Body and That of Tongue Coat under the Ultraviolet Light in RGB system of Peeling Tongue Coat Image (RGB 컬러모델에서 자외선 조명하 박락태(剝落苔)의 설태와 설질 사이의 색 강도 차이에 관한 연구)

  • Nam, Dong-Hyun;Kim, Ji-Hye;Lee, Woo-Beom;Lee, Sang-Suk;Hong, You-Sik
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.15 no.2
    • /
    • pp.149-158
    • /
    • 2011
  • Objectives: The objective of this study is to compare the colour intensity of the tongue body with that of the tongue coat under visible light and ultraviolet light. Methods: We selected 7 subjects with a completely or partially peeled tongue coat from 94 recruited adults. We took a picture of each tongue under visible light and ultraviolet light (315-400 nm) and then extracted sample images from the tongue body and tongue coat regions. The mean, median and mode of colour intensity in the sample images were calculated in the 256-level RGB system. Results: Under visible light, the green and blue colour intensities of the tongue coats were significantly higher than those of the tongue bodies. Under ultraviolet light, the colour intensities of the tongue coats were significantly higher than those of the tongue bodies in all channels: red, green and blue. The colour differences between tongue coats and tongue bodies were significantly larger under ultraviolet light than under visible light, and were largest in the green channel. Conclusions: We suggest that the green channel of an RGB image taken under ultraviolet light could be used to separate the tongue coat region from the tongue body more easily.
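
Computing the per-channel mean, median, and mode in the 256-level RGB system can be sketched as follows; the sample pixel values are made up for illustration, not taken from the study:

```python
import statistics

def channel_stats(pixels):
    """Per-channel mean, median and mode for a list of (R, G, B) pixels
    sampled from a tongue-image region, in the 0-255 RGB system."""
    stats = {}
    for i, name in enumerate(("red", "green", "blue")):
        values = [p[i] for p in pixels]
        stats[name] = (statistics.mean(values),
                       statistics.median(values),
                       statistics.mode(values))
    return stats

# Hypothetical samples: tongue coat vs tongue body under UV light.
coat = [(200, 180, 150), (210, 180, 155), (205, 185, 150)]
body = [(190, 120, 110), (195, 125, 110), (200, 120, 115)]
gap = channel_stats(coat)["green"][0] - channel_stats(body)["green"][0]
print(gap)  # mean green-intensity gap between coat and body samples
```

A large per-channel gap like this is what makes simple thresholding in that channel sufficient to separate the two regions.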

Road Crack Detection based on Object Detection Algorithm using Unmanned Aerial Vehicle Image (드론영상을 이용한 물체탐지알고리즘 기반 도로균열탐지)

  • Kim, Jeong Min;Hyeon, Se Gwon;Chae, Jung Hwan;Do, Myung Sik
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.6
    • /
    • pp.155-163
    • /
    • 2019
  • This paper proposes a new methodology for recognizing cracks on asphalt road surfaces using image data obtained with drones. The target section was Yuseong-daero, the main highway of Daejeon. Two object detection algorithms, Tiny-YOLO-V2 and Faster-RCNN, were used to recognize cracks on road surfaces and classify the crack types, and their experimental results were compared. The mean average precision of Faster-RCNN and Tiny-YOLO-V2 was 71% and 33%, respectively: Faster-RCNN, a two-stage detector, showed better performance in identifying and separating road surface cracks than the one-stage YOLO detector. In the future, it will be possible to plan an infrastructure asset-management system built on drones and AI crack detection, establishing an efficient and economical decision-support system for road maintenance.
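
In such comparisons, a detection is matched to ground truth by Intersection-over-Union before precision is averaged into mAP; a minimal sketch with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2): intersection area divided by union area."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)   # hypothetical crack detection
truth = (20, 10, 60, 50)  # hypothetical ground-truth box
print(iou(pred, truth))   # 0.6
```

A detection typically counts as a true positive only when its IoU with a ground-truth box exceeds a fixed threshold such as 0.5.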

Stereo System for Tracking Moving Object using Log-Polar Transformation and ZDF (로그폴라 변환과 ZDF를 이용한 이동 물체 추적 스테레오 시스템)

  • Yoon, Jong-Kun;Park, Il-;Lee, Yong-Bum;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.1
    • /
    • pp.61-69
    • /
    • 2002
  • An active stereo vision system can localize a target object by passing only features with small disparities, without heavy computation for identifying the target. This simple method, however, is not applicable when a distracting background is present or when the target and other objects lie in the zero-disparity area simultaneously. To alleviate these problems, we combined filtering with foveation, which keeps high resolution at the center of the visual field and suppresses the usually less interesting periphery. We adopted an image pyramid or a log-polar transformation as the foveated image representation. We also extracted the stereo disparity of the target by using projection, to keep the disparity small during tracking. Our experiments show that the log-polar transformation is superior to both the image pyramid and the traditional method in separating a target from a distracting background, and enhances the tracking performance.
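
A nearest-neighbour log-polar resampling, the foveated representation the paper compares against an image pyramid, can be sketched as follows; the grid sizes and the toy image are illustrative assumptions:

```python
import math

def log_polar(image, cx, cy, n_rho=8, n_theta=8):
    """Nearest-neighbour log-polar resampling of a grey image (list of
    lists) about the fovea (cx, cy). Rows index log-spaced radii, columns
    index angles, so resolution falls off away from the centre."""
    h, w = len(image), len(image[0])
    r_max = math.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    out = []
    for i in range(n_rho):
        # log-spaced ring radii from near 0 up to r_max
        rho = math.exp(math.log(r_max + 1) * (i + 1) / n_rho) - 1
        row = []
        for j in range(n_theta):
            theta = 2 * math.pi * j / n_theta
            x = min(max(int(round(cx + rho * math.cos(theta))), 0), w - 1)
            y = min(max(int(round(cy + rho * math.sin(theta))), 0), h - 1)
            row.append(image[y][x])
        out.append(row)
    return out

img = [[x + y for x in range(9)] for y in range(9)]  # toy 9x9 ramp image
lp = log_polar(img, 4, 4)  # 8 rings x 8 angles, dense near the centre
```

Because the inner rings sample the image far more densely than the outer ones, a target kept near the fovea dominates the transformed representation while the peripheral background is heavily compressed.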

Automatic Extraction of Focused Video Object from Low Depth-of-Field Image Sequences (낮은 피사계 심도의 동영상에서 포커스 된 비디오 객체의 자동 검출)

  • Park, Jung-Woo;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.10
    • /
    • pp.851-861
    • /
    • 2006
  • This paper proposes a novel unsupervised video object segmentation algorithm for image sequences with low depth-of-field (DOF), a popular photographic technique that conveys the photographer's intention by keeping a clear focus only on an object-of-interest (OOI). The proposed algorithm consists largely of two modules. The first module automatically extracts OOIs from the first frame by separating sharply focused OOIs from out-of-focus foreground or background objects. The second module tracks the OOIs for the rest of the video sequence, with the aim of running the system in real time or, at least, semi-real time. The experimental results indicate that the proposed algorithm provides an effective tool that can be the basis of applications such as video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
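
One common way to separate sharply focused pixels from defocused ones, not necessarily the paper's exact measure, is a local-variance focus map; the toy image and threshold are illustrative:

```python
import statistics

def local_variance(image, y, x, r=1):
    """Grey-level variance in a (2r+1)x(2r+1) neighbourhood, clipped at
    the borders; sharp focus gives high variance, defocus blur gives low."""
    h, w = len(image), len(image[0])
    patch = [image[j][i]
             for j in range(max(0, y - r), min(h, y + r + 1))
             for i in range(max(0, x - r), min(w, x + r + 1))]
    return statistics.pvariance(patch)

def focus_mask(image, threshold):
    """Binary map of sharply focused pixels, a starting point for
    separating the OOI from the defocused background."""
    return [[local_variance(image, y, x) > threshold
             for x in range(len(image[0]))]
            for y in range(len(image))]

# Left half: high-contrast (in-focus) texture; right half: flat (blurred).
img = [[0, 100, 50, 50],
       [100, 0, 50, 50],
       [0, 100, 50, 50],
       [100, 0, 50, 50]]
print(focus_mask(img, 500)[0])  # [True, True, True, False]
```

Defocus blur suppresses local contrast, so thresholding such a map marks the in-focus OOI region that a segmentation stage can then refine and track.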