• Title/Summary/Keyword: video representation


Similarity Measurement Method of Trajectory using Indexing Information of Moving Object in Video (비디오 내 이동 객체의 색인 정보를 이용한 궤적 유사도 측정 기법)

  • Kim, Jeong In;Choi, Chang;Kim, Pan Koo
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.43-47
    • /
    • 2012
  • The recent proliferation of multimedia data necessitates effective and efficient retrieval methods. Such research focuses not only on text-matching retrieval but also on exploiting the features of the multimedia data itself. This paper therefore proposes a similarity measurement method for trajectories that uses the indexing information of moving objects in video. The method consists of two steps. First, video data captured by CCTV is indexed to extract the trajectories of moving objects. Second, the DTW (Dynamic Time Warping) and TSR (Tangent Space Representation) algorithms are compared for measuring trajectory similarity.
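As a rough illustration of how DTW compares two trajectories (the function name and toy trajectories below are our own, not from the paper), a minimal sketch:

```python
# Minimal DTW sketch for 2-D trajectories (illustrative only; not the
# authors' implementation). A trajectory is a list of (x, y) points.
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two point sequences."""
    n, m = len(traj_a), len(traj_b)
    # dp[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

if __name__ == "__main__":
    a = [(0, 0), (1, 1), (2, 2)]
    b = [(0, 0), (1, 1), (1, 1), (2, 2)]  # same path, different timing
    print(dtw_distance(a, b))  # 0.0: DTW absorbs the timing difference
```

Unlike a rigid point-by-point comparison, the warping path lets two objects that follow the same route at different speeds score as identical.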


User-created multi-view video generation with portable camera in mobile environment (모바일 환경의 이동형 카메라를 이용한 사용자 저작 다시점 동영상의 제안)

  • Sung, Bo Kyung;Park, Jun Hyoung;Yeo, Ji Hye;Ko, Il Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.157-170
    • /
    • 2012
  • Recently, production and consumption of user-created video have increased sharply. Among such videos, multi-view recordings of a single subject in a limited space are emerging, driven mainly by the popularization of portable cameras and the mobile web environment. Multi-view has traditionally been studied in visual representation techniques concerned with point of view; its definition has lately been expanded and applied to various kinds of content authoring. Turning user-created videos into multi-view content can be seen as a proposal for a new user experience of video consumption. In this paper, we show that user-created videos can be made into multi-view video content, despite differences in their attributes, by analyzing existing multi-view video content. To clarify the definition and attributes of multi-view, we classify and analyze existing multi-view content. To solve the time-axis alignment problem that arises in multi-view processing, we propose an audio matching method consisting of feature extraction and comparison: features are extracted with MFCC, the most widely used method, and clips are compared n-by-n. The resulting multi-view video content lets users consume the aligned user-created videos by selecting a viewpoint.
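As a much simplified stand-in for the paper's audio matching step (the paper compares MFCC features n-by-n; the sketch below aligns two raw toy signals by brute-force cross-correlation, and all names are our own):

```python
# Simplified time-axis alignment sketch (not the paper's MFCC pipeline):
# slide a short clip over a reference signal and pick the offset with the
# highest correlation score.

def best_offset(ref, clip):
    """Return the sample offset of `clip` inside `ref` that matches best."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(ref) - len(clip) + 1):
        # correlation score of the clip laid over the reference at this lag
        score = sum(r * c for r, c in zip(ref[lag:lag + len(clip)], clip))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

if __name__ == "__main__":
    reference = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]
    clip = [1, 2, 3, 2, 1]               # same event, recorded later
    print(best_offset(reference, clip))  # 3: clip starts 3 samples in
```

The same idea generalizes to feature sequences: replacing raw samples with per-frame MFCC vectors makes the alignment robust to recording conditions, which is why feature extraction precedes comparison.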

A Study on Gender Identity Expressed in Fashion in Music Video

  • Jeong, Ha-Na;Choy, Hyon-Sook
    • International Journal of Costume and Fashion
    • /
    • v.6 no.2
    • /
    • pp.28-42
    • /
    • 2006
  • In modern society, media contribute more to the construction of personal identities than any other medium. Music video, a postmodern branch among the variety of media, offers a complex experience of sound combined with visual images. In particular, fashion in music video helps convey context effectively and functions as a medium of immediate communication through visual effect. Considering the socio-cultural effects of music video, the gender identity represented by its fashion can be of great importance. Therefore, this study reconsiders the gender identity represented through costume in music video by analyzing its fashions. Gender identity in the socio-cultural category is classified as masculinity, femininity, and the third sex. By examining fashions based on this classification, this study will help to create new design concepts and to understand gender identity in fashion. The results of this study are as follows. First, masculinity in music video fashion was categorized into stereotyped masculinity, sexual masculinity, and metrosexual masculinity. Second, femininity in music video fashion was categorized into stereotyped femininity, sexual femininity, and contrasexual femininity. Third, the third sex in music video fashion was categorized into transvestism, masculinization of the female, and feminization of the male; this phenomenon is presented in music videos through females in male attire and males in female attire. Through this research, the gender identity represented in the fashion of music video was demonstrated, and the importance of the relationship between the representation of identity through fashion and the socio-cultural environment was reconfirmed.

Hydrodynamic scene separation from video imagery of ocean wave using autoencoder (오토인코더를 이용한 파랑 비디오 영상에서의 수리동역학적 장면 분리 연구)

  • Kim, Taekyung;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2019
  • In this paper, we propose a method for separating the hydrodynamic scene of wave propagation from video imagery using an autoencoder. In coastal areas, image analysis methods such as particle tracking and optical flow are usually applied to video imagery to measure ocean waves, owing to the difficulty of direct wave observation with sensors. However, external factors such as ambient light and weather conditions considerably hamper accurate wave analysis in coastal video imagery. The proposed method extracts hydrodynamic scenes by separating out only the wave motions, minimizing the effect of ambient light during wave propagation. We visually confirmed that hydrodynamic scenes are separated reasonably well from ambient light and backgrounds in two video datasets acquired from a real beach and from wave flume experiments. In addition, the latent representation of the original video imagery, obtained through representation learning with a variational autoencoder, was dominated by ambient light and backgrounds, while the hydrodynamic scenes of wave propagation were expressed well independently of these external factors.
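The paper learns this separation with a variational autoencoder; as a far simpler analogue of splitting a static ambient component from motion (entirely our own toy construction, not the authors' model), one can subtract the per-pixel temporal mean:

```python
# Toy analogue of scene separation (not the paper's VAE): split pixel time
# series into a static "ambient" component (the temporal mean) and a
# residual "motion" component.

def separate(frames):
    """frames: list of equal-length pixel rows; returns (ambient, motions)."""
    n = len(frames)
    ambient = [sum(col) / n for col in zip(*frames)]
    motions = [[p - a for p, a in zip(frame, ambient)] for frame in frames]
    return ambient, motions

if __name__ == "__main__":
    frames = [[10, 10, 12], [10, 14, 12], [10, 12, 12]]  # 3 frames, 3 pixels
    ambient, motions = separate(frames)
    print(ambient)     # [10.0, 12.0, 12.0]
    print(motions[1])  # [0.0, 2.0, 0.0]: only the moving pixel survives
```

A learned latent representation goes well beyond this: it can factor out ambient light that itself varies over time, which a fixed temporal mean cannot.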

Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle (무인 항공기 촬영 동영상을 위한 실시간 안정화 기법)

  • Cho, Hyun-Tae;Bae, Hyo-Chul;Kim, Min-Uk;Yoon, Kyoungro
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.27-33
    • /
    • 2014
  • Video from an unmanned aerial vehicle (UAV) is influenced by the natural environment, especially wind, because the UAV is light-weight; the UAV's shaking makes the video shake as well. The objective of this paper is to produce a stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera's motion by calculating the optical flow between two successive frames. The estimated camera movements contain intended movements as well as unintended shaking, and the unintended movements are eliminated by a smoothing process. Experimental results show that the proposed method performs almost as well as other off-line stabilizers. However, estimating the camera's movements, i.e., calculating the optical flow, is a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be used for video acquired by UAVs as well as for shaky video from non-professional users, and also in other fields that require object tracking or accurate image analysis and representation.
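The smoothing step described above can be sketched as follows (a minimal 1-D version with our own toy numbers, assuming a simple moving-average filter; the paper does not specify its exact filter):

```python
# Sketch of trajectory smoothing for stabilization: per-frame camera motion
# (e.g. estimated from optical flow) is accumulated into a trajectory and
# low-pass filtered; the difference between raw and smoothed trajectories
# gives the per-frame correction that removes unintended shake.

def moving_average(traj, radius=1):
    """Smooth a 1-D trajectory with a centered moving-average window."""
    out = []
    for i in range(len(traj)):
        window = traj[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

if __name__ == "__main__":
    # 1-D camera x-trajectory: a steady pan plus hand-shake jitter
    raw = [0, 2, 1, 3, 2, 4, 3, 5]
    smooth = moving_average(raw, radius=1)
    corrections = [s - r for r, s in zip(raw, smooth)]
    print(corrections)  # shift each frame by this amount to stabilize
```

The smoothed trajectory keeps the intended pan while the jitter is absorbed into the corrections applied to each frame.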

A Generation Method of Spatially Encoded Video Data for Geographic Information Systems

  • Joo, In-Hak;Hwang, Tae-Hyun;Choi, Kyoung-Ho;Jang, Byung-Tae
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.801-803
    • /
    • 2003
  • In this paper, we present a method for generating and providing spatially encoded video data that can be used effectively by GIS applications. We collect the video data with a mobile mapping system called 4S-Van, which is equipped with GPS, INS, a CCD camera, and a DVR system. Information about each spatial object appearing in the video, such as its occupied region in each frame, attribute values, and geo-coordinates, is generated and encoded. We suggest methods for generating such data for each frame in a semi-automatic manner. We adopt the standard MPEG-7 metadata format to represent the spatially encoded video data so that it can be used generally by GIS applications. The spatial and attribute information encoded in each video frame enables visual browsing between map and video. The generated video data can be provided to various GIS applications in which both location and visual data are important.


Improvement of Character-net via Detection of Conversation Participant (대화 참여자 결정을 통한 Character-net의 개선)

  • Kim, Won-Taek;Park, Seung-Bo;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.10
    • /
    • pp.241-249
    • /
    • 2009
  • Recently, a number of studies on video annotation and representation have been proposed to analyze video for search and abstraction. In this paper, we present a method that identifies the picture elements of conversational participants in video and uses those elements for an enhanced representation of the characters, collectively called Character-net. Because the previous Character-net decides the conversational participants only among the characters detected while a script is displayed, it suffers a serious limitation: some listeners cannot be detected as participants. Yet the participants who carry the story are a very important factor in understanding the context of a conversation. The picture elements for detecting conversational participants consist of six items: subtitle, scene, order of appearance, characters' eyes, patterns, and lip motion. We present how to use these elements to detect conversational participants and how to improve the representation of Character-net. The conversational participants can be detected accurately when the proposed elements are combined and satisfy particular conditions. The experimental evaluation shows that the proposed method brings significant advantages both in detecting conversational participants and in enhancing the representation of Character-net.
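As a rough sketch of how several picture cues might be combined into a participant decision (the threshold rule and all names below are hypothetical; the paper's actual conditions differ and are more specific):

```python
# Hypothetical cue-combination sketch (not the paper's actual rules):
# count a character as a conversation participant if enough of the six
# cues (subtitle, scene, appearance order, eyes, pattern, lip motion)
# support it, so silent listeners can still be detected.

CUES = ("subtitle", "scene", "order", "eyes", "pattern", "lip_motion")

def is_participant(cue_scores, threshold=3):
    """cue_scores: dict cue -> bool; True if at least `threshold` cues fire."""
    return sum(bool(cue_scores.get(c, False)) for c in CUES) >= threshold

if __name__ == "__main__":
    listener = {"scene": True, "eyes": True, "order": True}  # no subtitle
    print(is_participant(listener))  # True: detected without speaking a line
```

The point of combining visual cues is exactly this: a listener who never holds the subtitle can still satisfy the participant conditions through gaze, scene membership, and appearance order.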

Generation and Coding of Layered Depth Images for Multi-view Video Representation with Depth Information (깊이정보를 포함한 다시점 비디오로부터 계층적 깊이영상 생성 및 부호화 기법)

  • Yoon, Seung-Uk;Lee, Eun-Kyung;Kim, Sung-Yeol;Ho, Yo-Sung;Yun, Kug-Jin;Kim, Dae-Hee;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.375-378
    • /
    • 2005
  • A multi-view video is a collection of multiple videos capturing the same scene from different viewpoints. Multi-view video can be used in various applications, including free-viewpoint TV and three-dimensional TV. Since the data size of multi-view video increases linearly with the number of cameras, it is necessary to compress the data for efficient storage and transmission. Multi-view video can be coded using the concept of the layered depth image (LDI). In this paper, we describe a procedure for generating an LDI from natural multi-view video and present a method for encoding multi-view video using the LDI concept.
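A minimal sketch of the LDI data structure itself (illustrative only; the function name, toy samples, and representation are ours, not the paper's coding scheme): each reference-view pixel stores a list of depth layers gathered from the warped views, so occluded surfaces are kept rather than discarded.

```python
# Minimal layered depth image (LDI) sketch: each reference-view pixel maps
# to a list of (depth, color) layers collected from multiple views, sorted
# near-to-far so occluded content survives behind the front surface.

def build_ldi(width, height, samples):
    """samples: iterable of (x, y, depth, color) already warped into the
    reference view; returns dict (x, y) -> sorted [(depth, color), ...]."""
    ldi = {}
    for x, y, depth, color in samples:
        if 0 <= x < width and 0 <= y < height:
            ldi.setdefault((x, y), []).append((depth, color))
    for layers in ldi.values():
        layers.sort()  # nearest layer first
    return ldi

if __name__ == "__main__":
    # two views see the same reference pixel at different depths (occlusion)
    samples = [(1, 1, 9.0, "blue"), (1, 1, 5.0, "red"), (0, 0, 2.0, "green")]
    ldi = build_ldi(4, 4, samples)
    print(ldi[(1, 1)])  # [(5.0, 'red'), (9.0, 'blue')]
```

Storing layers per pixel rather than one full image per camera is what lets LDI-based coding grow with scene complexity instead of with the number of cameras.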


Semantic-Based Video Representation and Storing Techniques for Video Streaming Services (비디오스트리밍 서비스를 위한 의미기반 비디오 표현 및 저장 기법)

  • Lee, Seok-Ryong
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2004.05a
    • /
    • pp.505-509
    • /
    • 2004
  • In this paper, we present a technique for effectively representing and storing large-volume stream data to enable semantic-based retrieval on a video stream server. By mapping each frame of a video stream to a point in multidimensional space, the video stream is represented as a multidimensional sequence, which is in turn partitioned into video segments. From each segment, semantic information such as static features and a trend vector, which captures the motion of consecutive frames, is extracted and modeled, representing the stream data effectively. The proposed technique also provides a method for indexing and storing video segments for efficient retrieval, improving space utilization and enabling fast search.
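The segment summary described above can be sketched as follows (our reading of the abstract, simplified; the function name and the choice of centroid as the static feature are assumptions):

```python
# Sketch of segment representation with a trend vector: each frame is a
# point in feature space; a segment is summarized by its centroid (a static
# feature) and a trend vector, the average frame-to-frame displacement.

def summarize_segment(frames):
    """frames: list of d-dimensional points; returns (centroid, trend)."""
    n, d = len(frames), len(frames[0])
    centroid = [sum(f[k] for f in frames) / n for k in range(d)]
    # average consecutive displacement = (last - first) / (n - 1)
    trend = [(frames[-1][k] - frames[0][k]) / (n - 1) for k in range(d)]
    return centroid, trend

if __name__ == "__main__":
    segment = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # steady motion
    centroid, trend = summarize_segment(segment)
    print(centroid)  # [1.0, 2.0]
    print(trend)     # [1.0, 2.0]: per-frame drift of the segment
```

Summaries like this are what gets indexed: a query compares against a handful of segment descriptors instead of scanning every frame of the stream.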


Meta-representation of Video Game through the Cross-media Storytelling: Focusing on the Animated Motion Picture Game Over (크로스미디어 스토리텔링을 통한 비디오 게임의 메타적 재현 : 애니메이션 <게임오버>를 중심으로)

  • Cho, Eun-Ha
    • Journal of Korea Game Society
    • /
    • v.12 no.3
    • /
    • pp.25-36
    • /
    • 2012
  • Cross-media storytelling (CMS) is a new method of media representation: it picks the features and elements of one medium and uses them in another. 'Remediation' in the digital era reuses the content of old media in new forms based on new technology, whereas CMS represents the basic elements of a media experience in the unique style of each medium, shifting the focus from technology to experience. CMS is thus a new media strategy not grounded in new technology. Adam Pesapane's Game Over (2006) is an example of this strategy: it takes the game medium as its subject matter, but expresses a meta-representation of the game experience in stop-motion animation. In particular, it emphasizes the narrative chain between everyday phenomena and visual imagination, and shows the possibility of representing a new media experience in an old media genre. It thereby suggests the conditions of CMS.