• Title/Summary/Keyword: 음악적 표현 (musical expression)

The relationship between general creativity and musical creativity of children (유아의 일반적 창의성과 음악적 창의성과의 관계)

  • Kim, Youn-Ju
    • Korean Journal of Childcare and Education / v.1 no.1 / pp.21-35 / 2005
  • The goal of this study was to examine the relationship between general creativity and musical creativity in children, and to use the results as basic data for developing children's music education. The subjects were 50 children from a kindergarten located in the Jeonlabukdo area. The instruments employed were general creativity and musical creativity scales. The data were analyzed with the SPSS program using Pearson correlation. The results of this study are as follows: First, creativity was positively correlated with 'making of the melody'. Second, fluency was positively correlated with 'making of the melody', abstraction of title was positively correlated with 'making of the song words', and elaboration was positively correlated with 'making of the melody' and 'expression by rhythmic movement'.


A study on the Anger-Control Music Program for Decrease in Aggressiveness and Anger of Neglected Children (방임된 아동의 공격성과 분노 감소를 위한 분노조절 음악 프로그램 연구)

  • Lee, Ju Young
    • Journal of Music and Human Behavior / v.5 no.2 / pp.17-39 / 2008
  • This study aims to decrease the aggressiveness and anger of neglected children through an anger-control music program. In this study, 4 neglected children received 15 sessions of an anger-control music program, 30 minutes each, twice a week. Based on the anger-control program, song writing and music therapy activities such as playing musical instruments were applied to the children according to the objective of each stage of the program. The changes in aggressiveness and anger were measured, and the behaviors of the participants during the musical activities were analyzed. According to the analysis of the data, the anger-control music program proved effective in reducing the aggressiveness and anger of the neglected children. This result means that neglected children, who lack social skills and confidence, are able to express their emotions without difficulty or anxiety by playing musical instruments and singing songs.


Music Tempo Tracking and Motion Pattern Selection for Dancing Robots (댄싱 로봇의 구현을 위한 음악 템포 추출 및 모션 패턴 결정 방법)

  • Jun, Myoung-Jae;Ryu, Minsoo
    • Proceedings of the Korea Information Processing Society Conference / 2009.11a / pp.369-370 / 2009
  • For a robot to move in time with music, it first needs the perceptual ability to understand the acoustic signal, and it must then be able to express the recognized musical content as actions resembling dance motions. This paper introduces a dancing robot system that tracks the tempo of music using signal processing and machine learning and selects motion patterns with reference to the tracked tempo.
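
    The tempo-to-motion mapping this abstract describes can be sketched in a few lines. The onset times, BPM thresholds, and pattern names below are illustrative assumptions, not details from the paper, which uses signal processing and machine learning for the tracking itself:

    ```python
    # Hypothetical sketch: estimate tempo from detected onset times, then pick
    # a canned dance-motion pattern by tempo range. All names and thresholds
    # are invented for illustration.

    def estimate_bpm(onset_times):
        """Median inter-onset interval converted to beats per minute."""
        intervals = sorted(b - a for a, b in zip(onset_times, onset_times[1:]))
        median = intervals[len(intervals) // 2]
        return 60.0 / median

    def select_motion_pattern(bpm):
        """Map a tempo range to a motion pattern label."""
        if bpm < 90:
            return "sway"
        elif bpm < 130:
            return "step"
        return "bounce"

    onsets = [0.0, 0.5, 1.0, 1.5, 2.0]      # beats 0.5 s apart -> 120 BPM
    bpm = estimate_bpm(onsets)
    print(bpm, select_motion_pattern(bpm))  # 120.0 step
    ```

    In a real system the onset times would come from an audio onset detector rather than being given directly.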

A Study on Jamaican music Used in Moombaton (뭄바톤에서 사용된 자메이카 음악에 관한 연구)

  • Park, Beom-geun;Cho, Tae-seon
    • Journal of Digital Convergence / v.19 no.6 / pp.273-280 / 2021
  • The purpose of this study is to analyze music of the Moombaton genre and identify the characteristics of Jamaican music within it. I analyzed Lewis Poncy's and BTS's songs rhythmically, melodically, and instrumentally to find the characteristics of Jamaican music. As a result, the rhythm features the Dembow rhythm and reggae rhythms derived from traditional Jamaican music. As a melodic feature, toasting, a uniquely Jamaican expressive technique, was used. Instrumental features include Latin American percussion, South American melodic instruments, and the use of the TR-808. This study is meaningful in that it identifies elements of Jamaican music in EDM, a currently popular genre, and thereby informs the creation of further fusion genres.

A Tag-based Music Recommendation Using UniTag Ontology (UniTag 온톨로지를 이용한 태그 기반 음악 추천 기법)

  • Kim, Hyon Hee
    • Journal of the Korea Society of Computer and Information / v.17 no.11 / pp.133-140 / 2012
  • In this paper, we propose a music recommendation method that considers users' tags from collaborative tagging on a social music site. Since collaborative tagging allows users to add keywords of their own choosing to web resources, it captures users' preferences about those resources concretely. In particular, emotional tags, which represent human emotions, convey users' musical preferences more directly than factual tags, which represent facts such as musical genre and artist. Therefore, to classify tags into emotional and factual tags and to assign weights to the emotional tags, a tag ontology called UniTag was developed. After preprocessing, the weighted tags are used to create user profiles, and the recommendation algorithm is executed based on those profiles. To evaluate the proposed method, a conventional playcount-based recommendation, an unweighted tag-based recommendation, and a weighted tag-based recommendation were executed. Our experimental results show that the weighted tag-based recommendation outperforms the other two approaches in terms of precision.
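
    The weighted-tag profile idea can be sketched as follows. The tag sets, the emotional/factual split, and the weight value are assumptions for illustration; the paper derives these from the UniTag ontology, which is not reproduced here:

    ```python
    # Minimal sketch of weighted tag-based profiles and similarity ranking.
    # EMOTIONAL and EMO_WEIGHT are hypothetical stand-ins for the ontology's
    # tag classification and weighting.
    import math

    EMOTIONAL = {"calm", "sad", "happy", "energetic"}
    EMO_WEIGHT = 2.0  # assumed extra weight for emotional tags

    def profile(tags):
        """Build a tag -> weighted-count vector for a user or track."""
        vec = {}
        for t in tags:
            w = EMO_WEIGHT if t in EMOTIONAL else 1.0
            vec[t] = vec.get(t, 0.0) + w
        return vec

    def cosine(a, b):
        """Cosine similarity between two sparse tag vectors."""
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    user = profile(["calm", "jazz", "calm"])
    tracks = {"t1": profile(["calm", "jazz"]), "t2": profile(["rock", "energetic"])}
    best = max(tracks, key=lambda k: cosine(user, tracks[k]))
    print(best)  # t1
    ```

    Because "calm" is counted with the higher emotional weight, tracks sharing emotional tags with the user dominate the ranking, which is the intuition behind the paper's precision gain.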

Emotion Transition Model based Music Classification Scheme for Music Recommendation (음악 추천을 위한 감정 전이 모델 기반의 음악 분류 기법)

  • Han, Byeong-Jun;Hwang, Een-Jun
    • Journal of IKEEE / v.13 no.2 / pp.159-166 / 2009
  • Many studies have retrieved music information using static classification descriptors such as genre and mood. Since static classification descriptors are based on diverse content-based musical features, they are effective for retrieving music that is similar in terms of those features. However, the human emotion or mood transitions triggered by music enable more effective and sophisticated queries in music retrieval, and so far few works have evaluated this effect. With a formal representation of such mood transitions, we can provide personalized service more effectively in new applications such as music recommendation. In this paper, we first propose an Emotion State Transition Model (ESTM) for describing human mood transitions induced by music, and then describe a music classification and recommendation scheme based on the ESTM. In the experiment, diverse content-based features were extracted from music clips, dimensionally reduced by NMF (Non-negative Matrix Factorization), and classified by SVM (Support Vector Machine). In the performance analysis, we achieved an average accuracy of 67.54% and a maximum accuracy of 87.78%.
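
    The feature-reduction-and-classification pipeline named in the abstract (features → NMF → SVM) has this general shape in scikit-learn. The synthetic matrix and the three labels below are placeholders, not the paper's audio features or emotion-transition classes:

    ```python
    # Sketch of the pipeline shape only: non-negative content features are
    # reduced with NMF, then classified with an SVM. Data here is random.
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((60, 20))           # 60 clips x 20 non-negative features
    y = rng.integers(0, 3, size=60)    # 3 hypothetical transition classes

    X_red = NMF(n_components=5, init="random", random_state=0,
                max_iter=500).fit_transform(X)   # reduce to 5 components
    clf = SVC(kernel="rbf").fit(X_red, y)
    acc = clf.score(X_red, y)          # training accuracy on the toy data
    ```

    With real data the features would be audio descriptors and the accuracy would be measured on a held-out set, as in the paper's 67.54% average.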


Validation of RESPECT-Music With a Korean Sample (한국판 음악 기능 척도의 타당화와 정서적 적응과의 관계)

  • Lee, Jung Yun;Kim, Minhee
    • Journal of Music and Human Behavior / v.14 no.2 / pp.45-70 / 2017
  • The purpose of this study was to validate the Korean version of RESPECT-Music, which measures personal, social, and cultural functions of music, and to examine the correlation between the RESPECT data and data from other scales measuring emotion. A survey was conducted with two separate groups of undergraduate students. Exploratory factor analysis was conducted with sample A (N=212), and confirmatory factor analysis and correlation analyses with sample B (N=296). The exploratory factor analysis generated 10 factors influencing music use, similar to the original scale: background, values, focus, dancing, family bonding, cultural identity, political attitudes, venting, emotional expression, and social bonding. In the confirmatory factor analysis, this 35-item measure was found to have adequate internal consistency and reliability. In addition, correlations were found with other scales measuring emotional adjustment. Specifically, RESPECT showed a positive correlation with scales for positive affect, reappraisal, negative mood regulation, and repair. Among the music-function factors, dancing was highly correlated with emotional adjustment, while political attitudes was negatively correlated with it. The results indicate that everyday music use is intercorrelated with intrapersonal and interpersonal motives and emotional adjustment, while the function of music that influences cultural identity was not associated with the level of emotional adjustment. Implications for future studies are also suggested.

Miles Davis and Post-Modernism Through 'Bitches Brew' ('Bitches Brew'를 통하여 본 마일즈 데이비스와 포스트모더니즘)

  • Kim, Hyoeng-Chun
    • Proceedings of the KAIS Fall Conference / 2011.12a / pp.3-6 / 2011
  • 'Bitches Brew' by Miles Davis, one of the most central figures in jazz, is generally cited as the root of contemporary music since the 1970s. That is, it is commonly seen as the beginning of fusion music combining rock music and jazz; this study, however, views it as the product of adding the influence of free jazz to instrumentation and playing styles that had formed with the trends of the era. Among countless artistic expressions, the craving for novelty and the aversion to established form also appeared in literature as post-modernist works, whose thematic consciousness in its mode of expression closely resembles that of Miles Davis's 'Bitches Brew'. By comparing and analyzing John Barth's thematic consciousness and Miles Davis's way of viewing existing forms differently, this paper infers the post-modernism of jazz.


Classic Music Analysis use Schemata (Schemata를 이용한 클래식 음악 분석)

  • 송화섭;김규년;정의필
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.280-282 / 1999
  • Existing classical music embodies musical and psychological compositional conventions, called musical form (樂式, 音樂形式). That is, every piece is composed according to a certain form, so characteristic note relationships recur regularly throughout a piece. Depending on how the relationships between notes change, these characteristics are recognized as the beginnings and ends of segments within the piece. In this paper, to analyze pieces we define SFCM (Score Format for Computer Music) for representing actual scores in a computer data format, analyze the notes of a piece, and extract a set of representative notes for each measure. Using the representative-note set of each measure, a schematic is generated according to note changes. For schematic generation and analysis, we propose CNSDB (Changing-Note Schema DataBase), which defines the rules and forms of note-schemas. Using this database, characteristic rules can be found and applied to divide a piece into segments. In this paper, we analyzed pieces by applying rules that appear especially clearly in classical music of the 1700s.


Musical Synthesizer using fish movement (물고기에 의한 실시간 음악 생성기)

  • 장선연;이만재
    • Proceedings of the Korean Information Science Society Conference / 2003.10b / pp.622-624 / 2003
  • Music can be said to be a product of human feeling created through deliberate work; most music has been created by composers through objects or through some inspiration. Such music, shaped by the composer's intentions or arranged to sound pleasant, may bias listeners' expectations about musical expression. There have been many attempts to create music free of artifice, music of chance, or varied music through simple manipulations, but these still cannot exclude the artificiality of requiring human involvement. In this paper, we therefore use computer vision to generate music autonomously from the movements and states of fish, removing the artificiality of conventional music. We implemented a Musical Synthesizer that draws out musical harmony in real time without human work, giving the result value as more natural music.
