• Title/Summary/Keyword: Lyrics

Search Results: 142

The Effect of BTS Preference on Fandom Star & Fan Community Identification and Purchase Intention - Focused on Korean and Southeast Asian Fans - (BTS의 선호요인이 팬덤 동일시욕구와 구매의도에 미치는 영향 - 한국 및 동남아 팬을 중심으로 -)

  • Kim, Yoon-Chul
    • Journal of Korea Entertainment Industry Association / v.14 no.2 / pp.1-14 / 2020
  • This study was initiated by an interest in identifying the characteristics of BTS preference in the expanded K-Pop market. A survey was conducted in Taiwan, Thailand, Vietnam, and Korea, where BTS is popular. The results show that respondents in Vietnam and Thailand have the most positive perceptions of most BTS preference factors, and the factor with the strongest influence was found to be BTS' sense of differentiation. BTS preference is treated as an independent variable consisting of five factors: singers and music, a discriminative sense, global communication, meditative lyrics, and Korean sentiment. It was shown to have a statistically significant influence on identification with both the fandom star and the fan community. In particular, for identification with the fandom star, the discriminative sense and meditative lyrics had strong positive effects, while for identification with the fan community, the attractiveness of the singers and music had the strongest effect. By extending the survey to a wide range of Southeast Asian and Korean participants, this study offers a new perspective on BTS preference. Nonetheless, the failure to account for other variables that may affect fandom effects and purchase intention, and the lack of a survey of fans in the U.S. and Europe, where BTS also has many fans, are limitations of the study.

A Search for the Origins of Traditional Arirang Songs in Seoul Area (서울지역의 전래 아리랑 노래의 시원(始原)에 대한 탐색)

  • Myung Ok Yu
    • Journal of Naturopathy / v.12 no.1 / pp.24-30 / 2023
  • Background: Arirang is a UNESCO Intangible Cultural Heritage and Korea's intangible cultural property No. 129. However, research on the origin of Arirang in the Seoul area remains limited, and an academic investigation is needed. Purpose: This study clarifies, on a theoretical basis, the origin of traditional Arirang in the Seoul area. Methods: Various documents were searched for the source of Arirang in Seoul. Results: The record of 'Arirang' was first confirmed as 'Arirang Taryeong (song)' in 'Hanyang-ga' of Maecheonyarok (Maecheon's history) by Hwang (1894). Subsequently, Hulbert (1896) published the first modern sheet music and lyrics of <A-ra-rung>. In addition, Lee Sang Jun (1914) edited <Old Korean Folk Songbook, First Volume>, which records the lyrics and score titled 'Arirang Taryeong' on page 25, along with the long Arirang Taryeong. Conclusions: In the literature, the origin of 'Arirang in Seoul' is the 'Arirang Taryeong' first recorded in 'Hanyang-ga' of Maecheonyarok. The Arirang song that originated in Hanyang can be called Seoul Arirang. Given its historical and cultural characteristics, Seoul Arirang is suggested to have very high value as a protected cultural heritage of Seoul.

Improved Lexicon-driven based Chord Symbol Recognition in Musical Images

  • Dinh, Cong Minh; Do, Luu Ngoc; Yang, Hyung-Jeong; Kim, Soo-Hyung; Lee, Guee-Sang
    • International Journal of Contents / v.12 no.4 / pp.53-61 / 2016
  • Although extensively developed, optical music recognition systems have mostly focused on musical symbols (notes, rests, etc.) while disregarding chord symbols. Chord symbols could be handled with optical character recognition systems, but the process becomes difficult when the images are distorted or blurred. Moreover, the appearance of outliers (lyrics, dynamics, etc.) increases the complexity of chord recognition. Therefore, we propose a new approach addressing these issues. After binarization, distortion correction, and stave and lyric removal of a musical image, a rule-based method is applied to detect potential regions of chord symbols. Next, a lexicon-driven approach is used to optimally and simultaneously separate and recognize characters. The score returned from the recognition process is used to detect outliers. The effectiveness of our system is demonstrated by the high accuracy of experimental results on two datasets with a variety of resolutions.
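To make the lexicon-driven idea concrete, here is a minimal sketch, not the authors' implementation: a candidate text region is matched against a small chord-symbol lexicon, and a low similarity score marks the region as an outlier (e.g., a lyric word or dynamics marking). The lexicon, threshold, and scoring function are illustrative assumptions.

```python
# Minimal sketch of lexicon-driven chord symbol matching with score-based outlier rejection.
from difflib import SequenceMatcher

# Hypothetical lexicon; a real system would enumerate roots, accidentals, and qualities.
CHORD_LEXICON = ["C", "Cm", "C7", "Cmaj7", "D", "Dm7", "E7", "F", "G", "G7", "Am", "Bdim"]

def recognize_chord(candidate: str, threshold: float = 0.75):
    """Return (best_chord, score), or (None, score) if the region looks like an outlier."""
    best, best_score = None, 0.0
    for chord in CHORD_LEXICON:
        score = SequenceMatcher(None, candidate.upper(), chord.upper()).ratio()
        if score > best_score:
            best, best_score = chord, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

if __name__ == "__main__":
    print(recognize_chord("Cmaj7"))   # confident chord match
    print(recognize_chord("hello"))   # rejected as an outlier (likely a lyric word)
```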

An Analysis of the Criteria and Development of Korean Traditional Children's Songs (한국 전래동요의 분석 준거 연구)

  • Kwon, Dae Won; Cho, Jin Hee
    • Korean Journal of Childcare and Education / v.10 no.1 / pp.95-109 / 2014
  • The purpose of this study is to provide a reference for analyzing Korean traditional children's songs. A survey of opinions on the analysis criteria was conducted twice with 30 experts in early childhood education and related fields. The SPSS 12.0 program was used to calculate means and standard deviations. As a result of the expert opinion survey, 3 main categories (lyrics, music, and acting), 11 subcategories, and 33 sub-subcategories were finalized as the analysis criteria.
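For illustration only, a minimal sketch of the descriptive statistics the study ran in SPSS 12.0; the criteria names and 5-point expert ratings below are hypothetical.

```python
# Compute the mean and standard deviation of expert ratings per candidate criterion.
from statistics import mean, stdev

ratings = {                               # hypothetical expert-panel responses
    "lyrics: repetition": [5, 4, 5, 4, 5, 3, 4, 5],
    "music: vocal range": [3, 4, 3, 2, 4, 3, 3, 2],
}

for criterion, scores in ratings.items():
    print(f"{criterion}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```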

Authoring Tool of Musical Slide Show MAF Contents

  • Sabirin Muhammad Syah Houari; Kim Mun-Churl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.289-295 / 2006
  • The Musical Slide Show MAF, which is currently being standardized by MPEG, conveys the concept of combining several established standard technologies in a single file format. It defines the format for packaging MP3 audio data, along with MPEG-7 Simple Metadata Profile and MPEG-21 Digital Item Declaration metadata, together with JPEG images and optional text, and synchronizes them to create a slideshow of JPEG images associated with the MP3 audio during playback. An application of the Musical Slide Show MAF can be a music karaoke file, in which users sing along while listening to the music, viewing the JPEG slideshow, and reading the lyrics, or a story-telling file, in which users listen to a narrated story while looking at the related illustration slideshow. In this paper we present a tool for producing Musical Slide Show MAF contents. Regardless of the user's knowledge of the MAF file format, the authoring tool simplifies the packaging of several multimedia contents into a single file.
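A conceptual sketch of the packaging idea follows. The real Musical Slide Show MAF is built on the ISO base media file format, not a ZIP archive, and this is not the authoring tool described above; the file names, timing values, and manifest schema are assumptions used only to illustrate bundling audio, slides, lyrics, and synchronization metadata in one file.

```python
# Toy container: MP3 audio, JPEG slides, lyrics text, and an XML manifest with start times.
import zipfile
import xml.etree.ElementTree as ET

def build_manifest(slides):
    root = ET.Element("slideshow", audio="song.mp3")
    for start_sec, image in slides:
        ET.SubElement(root, "slide", image=image, start=str(start_sec))
    return ET.tostring(root, encoding="unicode")

def package(output_path, audio_path, slides, lyrics_path=None):
    with zipfile.ZipFile(output_path, "w") as container:
        container.write(audio_path, "song.mp3")
        for _, image in slides:
            container.write(image, image)
        if lyrics_path:
            container.write(lyrics_path, "lyrics.txt")
        container.writestr("manifest.xml", build_manifest(slides))

# Example (hypothetical files):
# package("slideshow.maf", "song.mp3", [(0, "cover.jpg"), (30, "scene1.jpg")], "lyrics.txt")
```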


A Study on Humanity Convergence Map using space metaphor and POI (point of interest) of Big Data (빅데이터 중 POI와 공간 메타포를 활용한 인문 융합 지도 연구)

  • Lee, Won-Tae; Kang, Jang-Mook
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.3 / pp.43-50 / 2015
  • Google, Yahoo, Daum, and Naver provide POI (point of interest) services, and POI on the map is expanding into social commerce, SNS, social games, and social shopping. At the same time, the user's position on the map is a starting point for humanities stories: the current position may be the setting of folk tales, children's songs, fictional characters, film backgrounds, lyrics, or the birthplaces of notable people. This study points out that existing services are largely limited to cafes, restaurants, and hospitals, and proposes a Humanities Convergence Map service that combines such humanities content with POI information.
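As a minimal sketch of the data structure such a service implies, the example below attaches humanities content to POI records and performs a naive nearest-POI lookup; the place names, coordinates, and stories are hypothetical, and a real service would use geodesic distance and a spatial index.

```python
# A POI record extended with humanities content, plus a toy nearest-POI query.
from dataclasses import dataclass, field
from math import hypot

@dataclass
class HumanitiesPOI:
    name: str
    lat: float
    lon: float
    stories: list = field(default_factory=list)   # humanities items tied to this place

pois = [
    HumanitiesPOI("Gwanghwamun", 37.5759, 126.9769,
                  ["setting of a historical tale", "referenced in a song lyric"]),
    HumanitiesPOI("Bukchon", 37.5826, 126.9831,
                  ["birthplace of a notable writer"]),
]

def nearest_poi(lat, lon):
    # Planar distance is sufficient for this toy example.
    return min(pois, key=lambda p: hypot(p.lat - lat, p.lon - lon))

print(nearest_poi(37.5760, 126.9770).stories)
```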

Contents Analysis and Synthesis Scheme for Music Album Cover Art

  • Moon, Dae-Jin; Rho, Seung-Min; Hwang, Een-Jun
    • Journal of IKEEE / v.14 no.4 / pp.305-311 / 2010
  • Most recent web search engines perform effective keyword-based multimedia content retrieval by investigating keywords associated with multimedia content on the Web and comparing them with query keywords. On the other hand, most music and compilation albums provide professional artwork as cover art that is displayed when the music is played. If the cover art is not available, the music player just displays dummy or random images, which has been a source of dissatisfaction. In this paper, in order to automatically create cover art matched to the music content, we propose a music album cover art creation scheme based on music content analysis and result synthesis. We first (i) analyze music content and lyrics and extract representative keywords, (ii) expand the keywords using WordNet and generate various queries, (iii) retrieve related images from the Web using those queries, and finally (iv) synthesize them according to the user's preference for album cover art. To show the effectiveness of our scheme, we developed a prototype system and report some results.
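A minimal sketch of steps (i) and (ii) of this pipeline is shown below; it is not the authors' system. It extracts frequent words from lyrics with a crude length filter and expands them with WordNet synonyms to form image-search queries; it assumes the NLTK WordNet corpus is installed (nltk.download("wordnet")).

```python
# Keyword extraction from lyrics plus WordNet-based query expansion.
from collections import Counter
from nltk.corpus import wordnet as wn

def top_keywords(lyrics: str, k: int = 3):
    words = [w.strip(".,!?").lower() for w in lyrics.split()]
    words = [w for w in words if len(w) > 3]          # crude stop-word filter
    return [w for w, _ in Counter(words).most_common(k)]

def expand_with_wordnet(keyword: str, limit: int = 5):
    synonyms = {keyword}
    for synset in wn.synsets(keyword):
        synonyms.update(lemma.name().replace("_", " ") for lemma in synset.lemmas())
    return sorted(synonyms)[:limit]

lyrics = "Love me tender, love me sweet, never let me go"
for kw in top_keywords(lyrics):
    print(kw, "->", expand_with_wordnet(kw))          # candidate image-search queries
```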

A Music Recommendation Method Using Emotional States by Contextual Information

  • Kim, Dong-Joo; Lim, Kwon-Mook
    • Journal of the Korea Society of Computer and Information / v.20 no.10 / pp.69-76 / 2015
  • A user's selection of music is largely influenced by private taste as well as emotional state, and it can be seen as an unconscious projection of the user's emotions. In this paper, we try to infer users' emotional states from the music they select in a specific context and analyze the correlation between that context and the emotional state. To extract emotional states from music, the proposed method uses morphological analysis to extract emotional words from the lyrics of user-selected music as representatives of the music, and learns the weights of a linear classifier over the emotional features of the extracted words. The regularities learned by the classifier are used to calculate predictive weights for candidate (virtual) music, using the weights of music chosen by other users in contexts similar to the active user's context. Finally, we propose a method to recommend pieces of music matched to the user's context and emotional state. Experimental results show that the proposed method is more accurate than the traditional collaborative filtering method.
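To illustrate the core step of mapping lyric-derived emotion words to an emotional state via a linear classifier, here is a minimal sketch with a hypothetical emotion lexicon, toy lyrics, and toy labels; it is not the paper's morphological-analysis pipeline and swaps in scikit-learn for the learning step.

```python
# Represent each song by counts of emotion words in its lyrics, then fit a linear classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

EMOTION_LEXICON = ["happy", "joy", "dance", "tears", "lonely", "miss", "rage", "fight"]

lyrics = [
    "happy happy joy dance all night",
    "tears fall and I miss you lonely night",
    "rage inside ready to fight",
]
emotions = ["joyful", "sad", "angry"]           # toy label per song

vectorizer = CountVectorizer(vocabulary=EMOTION_LEXICON)
X = vectorizer.fit_transform(lyrics)
clf = LogisticRegression().fit(X, emotions)

new_song = ["lonely tears in the rain, I miss her"]
print(clf.predict(vectorizer.transform(new_song)))   # expected: ['sad']
```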

A study on the futuristic concept fashion style of K-pop music videos -Focusing on the 4th generation girl groups- (케이팝 뮤직비디오의 미래주의 컨셉 패션 스타일 연구 -4세대 걸그룹을 중심으로-)

  • Xie Xiaoying; Youngjae Lee
    • Journal of Fashion Business / v.28 no.3 / pp.104-121 / 2024
  • This study examined the integration of futurist fashion in 4th-generation K-pop girl groups, focusing on their world views, music videos, and fashion images. The key aim was to identify and analyze distinctive elements of futurist fashion within K-pop. K-pop's global popularity is driven by dynamic music, choreography, and avant-garde fashion. Futurism, an art movement emphasizing technology and innovation, continues to influence contemporary fashion trends in K-pop. This study seeks to provide insights into symbolic meanings and expressions of futurist fashion in 4th generation K-pop girl groups. Groups such as Gidle, Aespa, IVE, LE SSERAFIM, and New Jeans were analyzed. Data were collected from their music videos, lyrics, and costumes, focusing on silhouette, color, material, and pattern. This study highlights the significant role of futurist fashion in K-pop, showing how 4th-generation girl groups lead in integrating these elements. This research provides valuable insights for understanding and further exploring the evolution of K-pop fashion.

Application and Technology of Voice Synthesis Engine for Music Production (음악제작을 위한 음성합성엔진의 활용과 기술)

  • Park, Byung-Kyu
    • Journal of Digital Contents Society / v.11 no.2 / pp.235-242 / 2010
  • Unlike instruments that merely synthesized sounds and tones in the past, voice synthesis engines for music production have reached the level of creating music as if an actual singer were performing. They use samples of human voices, naturally connected at the phoneme level within the frequency range. The voice synthesis engine is not limited to music production; it is changing the cultural paradigm through secondary creations of new types of music, including character music concerts, media productions, albums, and mobile services. Current voice synthesis engine technology lets users input pitch, lyrics, and musical expression parameters through a score editor; the engine then mixes and connects voice samples drawn from its database so that the result sings. The new types of music derived from this development of computer music have had a large cultural impact. Accordingly, this paper examines specific case studies and the underlying synthesis technologies so that users can understand voice synthesis engines more easily, contributing to more varied music production.
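Below is a toy sketch of the score-driven workflow described above: each note event carries a lyric syllable, a pitch, and a duration, and a segment is rendered and concatenated per note. Real engines concatenate recorded phoneme samples from a voice database; here sine tones stand in for those samples, and the score values are hypothetical.

```python
# Score-driven rendering: one audio segment per (syllable, pitch, duration) note event.
import numpy as np

SAMPLE_RATE = 44100

def render_note(pitch_hz: float, duration_s: float) -> np.ndarray:
    """Stand-in for looking up and pitch-shifting a phoneme sample from a voice database."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * pitch_hz * t)

# Hypothetical score: (lyric syllable, pitch in Hz, duration in seconds)
score = [("la", 440.0, 0.4), ("la", 494.0, 0.4), ("laa", 523.3, 0.8)]

audio = np.concatenate([render_note(pitch, dur) for _, pitch, dur in score])
print(f"Rendered {len(score)} notes, {audio.size / SAMPLE_RATE:.2f} s of audio")
```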