• Title/Summary/Keyword: vowels


The Relationship between Visual Stress and MBTI Personality Types (시각적 스트레스와 MBTI 성격유형과의 관계)

  • Kim, Sun-Uk;Han, Seung-Jo
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.9 / pp.4036-4044 / 2012
  • This study aims to investigate the association between web-based visual stress and MBTI personality types. The stressor inducing visual stress consists of 14 vowels from the Korean alphabet as the content and parallel stripes as the background on the screen, presented to each subject for 5 minutes. The dependent variable, indicating how much visual stress a subject experiences, is the reduction rate of the flicker fusion frequency, evaluated with a visual flicker fusion frequency tester. The independent variables are gender and the 8 MBTI personality preferences (E-I, S-N, T-F, and J-P), and the hypotheses are based on the human information processing model and previous studies. The results show that the reduction rate is not significantly affected by gender, S-N, or J-P, but that E-I and T-F have significant influences on it. The reduction rate for the I-type is almost twice that of the E-type, and the T-type shows a rate 2.2 times that of the F-type. This study can be applied to selecting suitable personnel for jobs that require low sensitivity to visual stressors in areas where human error may lead to critical damage to the overall system.

Space Structure Character of Hangeul Typography (한글 타이포그래피의 공간 구조적 특성)

  • Kim, Young-Kook;Park, Seong-Hyeon
    • The Journal of the Korea Contents Association / v.8 no.3 / pp.86-96 / 2008
  • The general developmental basis of a writing system is recognized through its formative value in terms of function and structure. The principle of clustered (syllable-block) writing is the most significant feature of Hangeul typography, given that it is grounded in both function and formativeness. Thus, space structure arises not only from changes in letterform but also from Hangeul's characteristic syllable combination: consonants and vowels are combined into a single syllable block, and these blocks develop into words, sentences, and paragraphs, creating second- and third-order space structures. This characteristic has a significant impact on readability, which is the core function of typography, and space structure is therefore regarded as a very important component of Hangeul typography. First, the space-structural character of Hangeul typography is reviewed in relation to visual perception in gestalt psychology, and square-framed and non-square-framed letterforms are compared. By applying square-framed and non-square-framed letterforms to the same sentence, legibility and readability were studied. The study finds that the space-structural character of Hangeul typography has a significant impact on its function and, with respect to future design, is critical not only for design itself but also for the communication environment, since the space-structural formativeness of Hangeul typography interacts with communication, its basic concept.

Comparative Analysis on Pronunciation Contents in Korean Integrated Textbooks (한국어 통합 교재에 나타난 발음 내용의 비교 분석)

  • Park, Eunha
    • The Journal of the Korea Contents Association / v.18 no.4 / pp.268-278 / 2018
  • The purpose of this study is to compare and analyze phonetic items such as the phonemic system, phonological rules, and pronunciation descriptions and notations incorporated in the textbooks. Based on the analysis results, we point out problems related to pronunciation education and suggest directions for improvement. First, the presentation order of consonants and vowels in the phonological-system sections differed across textbooks. We recommend that a standard for consonant and vowel presentation order be prepared, but that this standard take into consideration the specific purpose of the textbook, the learning strategies and goals, and the feasibility of teaching and learning. Second, as with the phonemic systems, the presentation order of phonological rules differed for each textbook. To create a standard order for phonological rules, we have to standardize the order of presentation and determine which rules should be presented. Furthermore, when describing phonological rules, the content should be stated in common, essential terms as far as possible, without the use of jargon. Third, regarding other matters of pronunciation, there were problems such as inadequate pronunciation examples and a lack of exercises. We therefore propose providing sentences or dialogues as pronunciation examples and linking them to various activities and other language functions for pronunciation practice.

Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis (HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.9 / pp.5763-5768 / 2014
  • When a single synthesizer expresses emotion using a normal voice together with various emotional voices, maintaining a consistent voice color is important. When a synthesizer is developed using recordings in which the emotions are expressed too strongly, the voice color cannot be maintained and each synthetic utterance can sound like a different speaker. In this paper, speech data were recorded and the change in voice color was analyzed in order to develop an emotional HMM-based speech synthesizer. To realize a speech synthesizer, a voice is recorded and a database is built; the recording process is particularly important when realizing an emotional speech synthesizer. Monitoring is needed because it is quite difficult to define an emotion and maintain it at a particular level. In the realized synthesizer, a normal voice and three emotional voices (happiness, sadness, anger) were used, and each emotional voice consists of two levels, high and low. To analyze the voice color of the normal and emotional voices, the average spectrum, obtained by accumulating the spectra of vowels, was used, and the F1 (first formant) calculated from the average spectrum was compared. The voice similarity of the low-level emotional data was higher than that of the high-level emotional data, and with the proposed method the recording can be monitored through changes in voice similarity.
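
A rough, self-contained sketch of the comparison described in the abstract above (not the authors' implementation): it accumulates FFT magnitude spectra over hand-marked vowel intervals and reads off an approximate F1 as the strongest spectral peak below 1 kHz. The file names, intervals, frame settings, and peak-picking heuristic are all assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def average_vowel_spectrum(wav_path, vowel_intervals, n_fft=1024):
    """Accumulate magnitude spectra over the given vowel intervals (in seconds)."""
    sr, x = wavfile.read(wav_path)
    x = x.astype(np.float64)
    if x.ndim > 1:                      # mix down to mono if needed
        x = x.mean(axis=1)
    acc = np.zeros(n_fft // 2 + 1)
    count = 0
    for start, end in vowel_intervals:
        seg = x[int(start * sr):int(end * sr)]
        for i in range(0, len(seg) - n_fft, n_fft // 2):   # 50%-overlap frames
            frame = seg[i:i + n_fft] * np.hanning(n_fft)
            acc += np.abs(np.fft.rfft(frame))
            count += 1
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    return freqs, acc / max(count, 1)

def rough_f1(freqs, avg_spec, lo=200.0, hi=1000.0):
    """Very rough F1 estimate: strongest peak of the average spectrum in [lo, hi] Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    peaks, _ = find_peaks(avg_spec[band])
    if len(peaks) == 0:
        return None
    return freqs[band][peaks[np.argmax(avg_spec[band][peaks])]]

# Hypothetical usage: compare F1 of a normal voice and a 'sad, low-level' voice.
# f, s_norm = average_vowel_spectrum("normal.wav", [(0.30, 0.45), (1.10, 1.25)])
# f, s_sad  = average_vowel_spectrum("sad_low.wav", [(0.28, 0.43), (1.05, 1.22)])
# print(rough_f1(f, s_norm), rough_f1(f, s_sad))
```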

A new feature specification for vowel height (모음 높이의 새로운 표기법에 대하여)

  • Park, Cheon-Bae
    • MALSORI / no.27_28 / pp.27-56 / 1994
  • Processes involving a change of vowel height are natural enough to be found in many languages, and a better feature specification for vowel height is essential to grasp these processes properly. Standard Phonology adopts the binary feature system, in which vowel height is represented by two features, [$\pm$high] and [$\pm$low]. This has its own merits, but it is defective because it is misleading when we count the number of features used in a rule to compare the naturalness of rules, and because it cannot represent more than three degrees of height. We will therefore discard the binary features for vowel height. We then consider adopting the multivalued feature [n high] for the property of height. However, this feature cannot avoid the arbitrariness resulting from the number values denoting vowel height: it is not easy to tell whether a given number is the largest, nor is it possible to decide whether a larger number denotes a higher or a lower vowel. Furthermore, this specification requires an ad hoc condition such as $n > 3$ or $n \geq 2$ whenever we want to refer to a natural class covering more than one degree of height. The alternative might be Particle Phonology or Dependency Phonology, which, as their supporters argue, might be apt for multi-height vowel systems. However, the feature specification of Particle Phonology is discarded because it does not strictly observe the assumption that the number of the particle a is decisive in representing height: one a in a representation can denote various degrees of height, such as [e], [I], and [a], which also means that we cannot represent natural classes in terms of the number of the particle a. Dependency Phonology likewise has problems in specifying a degree of vowel height through the dependency relations between elements: there is no unique element representing vowel height, since every property has to be defined in terms of the dependency relations between two or more elements, and as a result it is difficult to formulate a rule for vowel height change, especially when the phenomenon involves a chain of vowel shifts. Therefore, we suggest a new feature specification for vowel height (see Chapter 3). This specification resorts to a single feature H and a number of angled brackets ('>') that refer exclusively to the degree of tongue height when a vowel is pronounced. It can cope with more than three degrees of height because it is fundamentally a multivalued scalar feature, and it obviates the ad hoc condition for natural classes from which the [n high] type of multivalued feature suffers. This specification also conforms to our expectation that the notation should become simpler as the generality of the class increases, in that the fewer angled brackets are used, the more vowels are included. Incidentally, it should also be noted that, by adopting a single feature for vowel height, it is possible to formulate simpler versions of rules involving changes of vowel height, especially when they involve the vowel shifts found in many languages.

  • PDF
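
A toy sketch of the contrast the abstract above draws between binary and scalar height features (my illustration, not the author's H-and-angled-brackets notation): four height degrees are encoded as one integer-valued feature, so a multi-height natural class can be named in a single step, whereas the binary system collapses two of the mid heights. The vowel inventory and degree assignments are assumed.

```python
# Toy model contrasting the binary [+/-high]/[+/-low] system with a single
# scalar height feature. The four-height inventory is illustrative only.
SCALAR_HEIGHT = {"i": 3, "u": 3, "e": 2, "o": 2, "E": 1, "O": 1, "a": 0}

BINARY_HEIGHT = {
    "i": (True, False), "u": (True, False),    # [+high, -low]
    "e": (False, False), "o": (False, False),  # [-high, -low]
    "E": (False, False), "O": (False, False),  # indistinguishable from e/o
    "a": (False, True),                        # [-high, +low]
}

def natural_class_at_least(degree):
    """Pick out 'vowels of height >= degree' directly from the scalar feature."""
    return sorted(v for v, h in SCALAR_HEIGHT.items() if h >= degree)

# Binary features collapse the two mid heights (only three degrees possible) ...
print(BINARY_HEIGHT["e"] == BINARY_HEIGHT["E"])   # True
# ... while the scalar feature names a multi-height natural class in one step.
print(natural_class_at_least(2))                  # ['e', 'i', 'o', 'u']
```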

Hangul Porting and Display Performance Comparison of an Embedded System (임베디드 시스템을 위한 한글 포팅 및 출력 성능 비교)

  • Oh, Sam-Kweon;Park, Geun-Duk;Kim, Byoung-Kuk
    • Journal of Digital Contents Society / v.10 no.4 / pp.493-499 / 2009
  • Three methods frequently used for Hangul display in computer systems are Standard Johab Code, in which each Hangul consonant and vowel is given a 5-bit code and each syllable created by combining them forms a 2-byte code; Standard Wansung Code, in which each of the syllables generally used for Hangul forms a 2-byte code; and Unicode, in which each syllable in most of the world's writing systems is given a unique code so that computers can represent and manipulate them consistently in a unified manner. An embedded system in general has lower processing power and a limited amount of storage space compared with a personal computer (PC), although, depending on its usage, it may have processing power equal to that of a PC. Hence, when Hangul display needs to be adopted, an embedded system must choose a display method suitable for its own resource environment. This paper introduces a TFT LCD initialization method and pixel display functions for an LN2440SBC embedded board to which an LP35, a 3.5" TFT LCD kit, is attached. In addition, using these initialization and pixel display functions, we compare the three aforementioned Hangul display methods in terms of processing speed and required memory space. According to the experiments, Standard Johab Code requires less memory but more processing time than Standard Wansung Code, while Unicode requires the most memory but the least processing time.

  • PDF
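
The jamo-combination arithmetic underlying the syllable codes discussed above can be illustrated with the standard Unicode Hangul syllable formula (this is general Unicode behavior, not the paper's board-specific code); the byte counts at the end contrast a 2-byte-per-syllable encoding such as Wansung or Johab with UTF-8, where each precomposed syllable takes 3 bytes.

```python
# Unicode precomposed Hangul syllables start at U+AC00, arranged as
#   code = 0xAC00 + (initial * 21 + medial) * 28 + final
# with 19 initial consonants, 21 medial vowels, and 28 finals (including "none").
S_BASE, L_COUNT, V_COUNT, T_COUNT = 0xAC00, 19, 21, 28

def decompose(syllable):
    """Return (initial, medial, final) indices for one precomposed Hangul syllable."""
    idx = ord(syllable) - S_BASE
    if not 0 <= idx < L_COUNT * V_COUNT * T_COUNT:
        raise ValueError("not a precomposed Hangul syllable")
    return idx // (V_COUNT * T_COUNT), (idx // T_COUNT) % V_COUNT, idx % T_COUNT

def compose(l, v, t=0):
    """Inverse of decompose: combine jamo indices back into one syllable."""
    return chr(S_BASE + (l * V_COUNT + v) * T_COUNT + t)

word = "한글"
for s in word:
    print(s, decompose(s))        # 한 (18, 0, 4) / 글 (0, 18, 8)
print(compose(18, 0, 4))          # '한'

# Storage comparison: 2 bytes per syllable (Wansung/Johab style) vs UTF-8.
print(len(word) * 2, len(word.encode("utf-8")))   # 4 vs 6 bytes
```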

Place Assimilation in OT

  • Lee, Sechang
    • Proceedings of the KSPS conference / 1996.10a / pp.109-116 / 1996
  • In this paper, I would like to explore the possibility that the nature of place assimilation can be captured in terms of the OCP within Optimality Theory (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993). In derivational models, each assimilatory process would be expressed through a different autosegmental rule. What any such model misses, however, is the clear generalization that all of these processes have the effect of avoiding a configuration in which two consonantal place nodes are adjacent across a syllable boundary, as illustrated in (1): (equation omitted). In a derivational model, it is a coincidence that across languages there are changes that modify a structure of the form (1a) into another structure that does not have adjacent consonantal place nodes (1b). OT allows us to express this effect through a constraint, given in (2), that forbids adjacent place nodes: (2) OCP(PL): Adjacent place nodes are prohibited. At this point, a question arises as to how consonantal and vocalic place nodes are formally distinguished in the output for the purpose of applying the OCP(PL). Moreover, the OCP(PL) would equally affect complex onsets and codas, as well as coda-onset clusters, in languages such as English that have them. To remedy this problem, following McCarthy (1994), I assume that the canonical markedness constraint is a prohibition defined over no more than two segments, $\alpha$ and $\beta$: that is, $^{*}\{\alpha,\ \beta\}$ with appropriate conditions imposed on $\alpha$ and $\beta$. I propose the OCP(PL) again in the following format: (3) OCP(PL) (table omitted), where $\alpha$ and $\beta$ are the target and the trigger of place assimilation, respectively. The '*' is a reminder that, in this format, constraints specify negative targets or prohibited configurations; any structure matching the specifications is in violation of the constraint. Now, in correspondence terms, the meaning of the OCP(PL) is this: the constraint is violated if a consonantal place $\alpha$ is immediately followed by a consonantal place $\beta$ on the surface. One advantage of this format is that the OCP(PL) would also be invoked in dealing with place assimilation within a complex coda (e.g., sink [siŋk]): we can make the constraint scan only the consonantal clusters, excluding any intervening vowels. Finally, onset clusters typically do not undergo place assimilation. I propose that onsets be protected by a constraint which ensures that the coda, not the onset, loses its place feature.

  • PDF
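
A toy evaluator for the OCP(PL) in (2) above (my own sketch, not the paper's formalism): it counts configurations in which one consonantal place specification is immediately followed by another, whether across a syllable boundary or inside a complex coda, with vowels skipped automatically. The segment-to-place mapping and the examples are assumptions.

```python
# Toy evaluator for OCP(PL): penalize a consonantal place node immediately
# followed by another consonantal place node. The place mapping is illustrative.
PLACE = {"p": "lab", "b": "lab", "m": "lab",
         "t": "cor", "d": "cor", "n": "cor", "s": "cor",
         "k": "dor", "g": "dor", "N": "dor"}   # "N" stands in for the velar nasal
VOWELS = {"a", "e", "i", "o", "u"}

def ocp_pl_violations(segments):
    """Count adjacent pairs of consonantal place nodes in a surface string."""
    violations = 0
    for left, right in zip(segments, segments[1:]):
        if left in PLACE and right in PLACE:
            violations += 1
    return violations

# Coda-onset cluster /n.p/ incurs a violation; assimilation lets the two
# consonants share a single place node, which removes it (not modeled here).
print(ocp_pl_violations(["a", "n", "p", "a"]))   # 1
# Complex coda in 'sink' [siNk]: the N-k place pair is also scanned, since
# vowels are excluded and only the consonantal cluster is inspected.
print(ocp_pl_violations(["s", "i", "N", "k"]))   # 1
```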

The implementation of Korean adult's optimal formant setting by Praat scripting (성인 포먼트 측정에서의 최적 세팅 구현: Praat software와 관련하여)

  • Park, Jiyeon;Seong, Cheoljae
    • Phonetics and Speech Sciences / v.11 no.4 / pp.97-108 / 2019
  • An automated Praat script was implemented to measure optimal formant frequencies for adults. An optimal formant analysis is one in which the deviation of the formant frequencies produced by the variously combined values of the two setting parameters (maximum formant and number of formants) is minimal. To increase the reliability of formant analysis, the LPC order should be set differently depending on gender or vowel type. Praat recommends 5,000 Hz and 5,500 Hz as the maximum formant settings for males and females, respectively, with 5 as the number of formants for both; however, verification is needed to determine whether these recommended settings are valid for Korean vowels. Statistical analysis showed that the formant frequencies varied significantly across the adapted scripts, especially for the female data. Formant plots and statistical results showed that, among the four kinds of scripts, the linear_script and qtone_script are the more stable and reliable for formant measurement. While the linear_script increases the formant step linearly in a for-loop, the qtone_script arranges the formant step according to a quarter-tone scale (base frequency × common ratio $\sqrt[24]{2}$). Looking at the settings produced by these two algorithms, for the front vowels [i, e] the maximum formant was set higher, and the number of formants lower, than recommended by Praat; for the back vowels [o, u], on the contrary, the optimal setting had a lower maximum formant and a higher number of formants than the standard setting.
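
The grid search over the two setting parameters described above can be sketched as follows, using the parselmouth Python interface to Praat rather than a native Praat script. The candidate grid, the vowel midpoint, and the 'minimal deviation' criterion (distance from the median measurement across all candidate settings) are my assumptions, not the authors' exact algorithm.

```python
import numpy as np
import parselmouth  # Python interface to Praat

def formants_at(sound, time, max_formant, n_formants):
    """Measure F1/F2 at a given time with one (maximum formant, n formants) setting."""
    formant = sound.to_formant_burg(max_number_of_formants=n_formants,
                                    maximum_formant=max_formant)
    return (formant.get_value_at_time(1, time),
            formant.get_value_at_time(2, time))

def optimal_setting(wav_path, vowel_midpoint,
                    max_formants=range(4500, 6001, 250),   # assumed candidate grid
                    n_formants=(4, 4.5, 5, 5.5, 6)):
    """Grid-search the two setting parameters and pick the measurement that
    deviates least from the median across all candidate settings (one way to
    read the 'minimal deviation' criterion described in the abstract)."""
    sound = parselmouth.Sound(wav_path)
    grid = [(mf, nf, *formants_at(sound, vowel_midpoint, mf, nf))
            for mf in max_formants for nf in n_formants]
    grid = [g for g in grid if not any(np.isnan(g[2:]))]   # drop undefined measurements
    med_f1 = np.median([g[2] for g in grid])
    med_f2 = np.median([g[3] for g in grid])
    return min(grid, key=lambda g: abs(g[2] - med_f1) + abs(g[3] - med_f2))

# Hypothetical usage on a token of [i] centered at 0.42 s:
# mf, nf, f1, f2 = optimal_setting("speaker01_i.wav", 0.42)
# print(f"max formant {mf} Hz, {nf} formants -> F1 {f1:.0f} Hz, F2 {f2:.0f} Hz")
```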

The Design of Keyword Spotting System based on Auditory Phonetical Knowledge-Based Phonetic Value Classification (청음 음성학적 지식에 기반한 음가분류에 의한 핵심어 검출 시스템 구현)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB / v.10B no.2 / pp.169-178 / 2003
  • This study outlines two topics: the classification of phone-like units (PLUs), which is the foundation of Korean large-vocabulary speech recognition, and the effectiveness of Chiljongseong (the 7-final-consonant system) and Paljongseong (the 8-final-consonant system) in Korean. PLUs classify phonemes phonetically according to the place and manner of articulation, and about 50 PLUs are commonly utilized in Korean speech recognition. In this study, auditory phonetic knowledge was applied to the classification of PLUs, yielding a set of 45 PLUs. The vowels 'ㅔ, ㅐ' were classified as the PLU [ee]; 'ㅒ, ㅖ' as [ye]; and 'ㅚ, ㅙ, ㅞ' as [we]. Second, the Chiljongseong system of the draft Unified Spelling System currently in use and the Paljongseonggajokyong of the Hunminjeongeum Haerye were described. Whether the phonemes 'ㄷ' and 'ㅅ' have the same phonetic value in the final-consonant position of Korean has long been debated in the academic world. In this study, the historical transitions of the Korean final consonants were investigated, Chiljongseong and Paljongseonggajokyong were applied to speech recognition, and their effectiveness was verified. The experiments were divided into isolated-word recognition and continuous speech recognition; PBW452 was used for the isolated-word recognition test, conducted with about 50 men and women, divided into 5 groups, who each vocalized 50 words. For the continuous speech recognition experiment, intended for the implemented stock-exchange system, a sentence corpus of 71 stock-exchange sentences and a speech corpus of those sentences were collected; five men and women each vocalized each sentence twice. As a result, when Paljongseonggajokyong was used for the final consonants, recognition performance improved by an average of about 1.45%, and when the PLUs incorporating both Paljongseonggajokyong and auditory phonetics were applied, the recognition rate increased by an average of 1.5% to 2.02%. In the continuous speech recognition experiment, recognition performance improved by an average of about 1% to 2% over the existing 49 or 56 PLUs.
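
A minimal sketch of the vowel-to-PLU merging just described; the three merges come from the abstract, while the function name, data structure, and pass-through behavior for unlisted jamo are assumptions.

```python
# Merge acoustically close vowels into shared phone-like units (PLUs), as
# described above; unlisted jamo are passed through unchanged.
VOWEL_TO_PLU = {
    "ㅔ": "ee", "ㅐ": "ee",              # ㅔ/ㅐ share one PLU
    "ㅒ": "ye", "ㅖ": "ye",              # ㅒ/ㅖ share one PLU
    "ㅚ": "we", "ㅙ": "we", "ㅞ": "we",  # ㅚ/ㅙ/ㅞ share one PLU
}

def to_plu_sequence(jamo_sequence):
    """Map a sequence of vowel jamo to their PLU labels (identity for others)."""
    return [VOWEL_TO_PLU.get(j, j) for j in jamo_sequence]

print(to_plu_sequence(["ㄱ", "ㅐ", "ㅁ", "ㅖ"]))   # ['ㄱ', 'ee', 'ㅁ', 'ye']
```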

Learning-associated Reward and Penalty in Feedback Learning: an fMRI activation study (학습피드백으로서 보상과 처벌 관련 두뇌 활성화 연구)

  • Kim, Jinhee;Kan, Eunjoo
    • Korean Journal of Cognitive Science / v.28 no.1 / pp.65-90 / 2017
  • Rewards or penalties become informative only when contingent on an immediately preceding response. Our goal was to determine whether the brain responds differently to motivational events depending on whether they provide feedback with contingencies effective for learning. Event-related fMRI data were obtained from 22 volunteers performing a visuomotor categorical task. In learning-condition trials, participants learned by trial and error to make left or right responses to letter cues (16 consonants). Monetary rewards (+500) or penalties (-500) were given as feedback (learning feedback). In random-condition trials, cues (4 vowels) appeared right or left of the display center, and participants were instructed to respond with the appropriate hand; however, rewards or penalties (random feedback) were given randomly (50/50%) regardless of the correctness of the response. Feedback-associated BOLD responses were analyzed with an ANOVA [trial type (learning vs. random) x feedback type (reward vs. penalty)] using SPM8 (voxel-wise FWE p < .001). The right caudate nucleus and right cerebellum showed activation, whereas the left parahippocampus and other regions of the default mode network showed deactivation, both greater for learning trials than for random trials. Activations associated with reward feedback did not differ between the two trial types in any brain region. For penalty, both learning-penalty and random-penalty enhanced activity in the left insular cortex, but not the right. The left insula, however, as well as the left dorsolateral prefrontal cortex and the dorsomedial prefrontal cortex/dorsal anterior cingulate cortex, showed much greater responses for learning-penalty than for random-penalty. These findings suggest that learning-penalty, unlike reward or random-penalty, plays a critical role in learning, probably not only because it evokes aversive emotional responses but also because of error-detection processing, either of which might lead to changes in planning or strategy.