• Title/Summary/Keyword: Subtitle Creator

Expansion of K-Content by Global Fandom : Focusing on 'Fansub' Community of Viki (글로벌 팬덤을 통한 한류 콘텐츠의 확대 : Viki의 '팬 자막' 커뮤니티를 중심으로)

  • Kim, Young-Hwan;Jung, Hoe-Kyung
    • Journal of Digital Convergence / v.17 no.11 / pp.523-530 / 2019
  • This study examines how global fandom for Korean dramas is formed and maintained through in-depth e-mail interviews with foreign subtitle producers (fansubbers) active on the video site Viki.com, probing their reasons and purposes for voluntarily creating Korean drama subtitles. The research focused on Viki's fan community, which has grown into the most influential Korean Wave platform. Collective intelligence expressed in the fan community produces results that rival professional work, and fans act as consumers, cultural producers, and secondary creators of K-content. To sustain the spread of K-content, more attention should be paid to long-term global fandom strategies that combine the fan-community activities of new media platforms with network effects.

Automatic Generation Subtitle Service with Kinetic Typography according to Music Sentimental Analysis (음악 감정 분석을 통한 키네틱 타이포그래피 자막 자동 생성 서비스)

  • Ji, Youngseo;Lee, Haram;Lim, SoonBum
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1184-1191 / 2021
  • In a pop song, the creator's intention is communicated to the listener through both music and lyrics. The lyrics matter as much as the music, yet in most cases they are delivered as static text without non-verbal cues, which is inefficient for conveying a song's emotions. Lyric videos with kinetic typography are increasingly common, but producing them requires expertise and considerable time. In this system, therefore, the emotions of the lyrics are identified by analyzing the lyric text, and a deep learning model trained on melodies converted to Mel-spectrogram form finds the emotions appropriate to the music. Using the detected emotions, the system sets properties such as motion, font, and color and automatically generates a kinetic typography video. Through this system, the study aims to strengthen how effectively a song's meaning is conveyed.
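The last step the abstract describes, choosing motion, font, and color from a detected emotion, can be sketched as a simple lookup. This is a minimal illustrative sketch, not the authors' implementation: the emotion label set, style values, and all names below are assumptions.

```python
# Hypothetical sketch: map a predicted emotion label to kinetic
# typography properties (motion, font, color), as the abstract describes.
# The label set and style values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TypographyStyle:
    motion: str  # animation applied to each lyric line
    font: str    # typeface family
    color: str   # hex RGB color

# Assumed emotion classes; the paper does not list its label set here.
EMOTION_STYLES = {
    "happy": TypographyStyle(motion="bounce", font="Rounded Sans", color="#FFC94A"),
    "sad":   TypographyStyle(motion="fade",   font="Thin Serif",   color="#4A6FA5"),
    "angry": TypographyStyle(motion="shake",  font="Heavy Sans",   color="#D7263D"),
    "calm":  TypographyStyle(motion="drift",  font="Light Sans",   color="#7FB9A5"),
}

def style_for_emotion(emotion: str) -> TypographyStyle:
    """Return typography properties for a predicted emotion,
    falling back to a neutral style for unknown labels."""
    return EMOTION_STYLES.get(
        emotion, TypographyStyle(motion="fade", font="Default Sans", color="#FFFFFF")
    )
```

In a full pipeline, `emotion` would come from the fused lyric-text and Mel-spectrogram classifiers, and the returned properties would parameterize the rendered lyric animation.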