• Title/Summary/Keyword: action units


Presentation of budge sonance with small action on the body motion

  • Kim, Jeong-lae;Kim, Kyu-dong
    • International journal of advanced smart convergence / v.4 no.1 / pp.35-39 / 2015
  • This study presents small-action measurement using the budge sonance function. Estimates of the budge sonance function were acquired as displacements across all conditions under variations of small action. The budge sonance function is intended to express the flow rate of body motion, and it raises the issue of identifying action conditions from budge sonance. The proposed system combines body motion with small action: the acquired sonance signal renders the small action of body motion through the budge sonance function, and analysis of the budge function generally captures the variation in displacement during fast body motion. The budge sonance signal of action was acquired as a variation of $Vi-{\beta}_{AVG}$ with $(-4.954){\pm}(-5.42)$ units for the vision condition, of $Ve-{\beta}_{AVG}$ with $(-2.288){\pm}0.212$ units for the vestibular condition, of $So-{\beta}_{AVG}$ with $(-0.47){\pm}0.511$ units for the somatosensory condition, and of $C-{\beta}_{AVG}$ with $(-0.171){\pm}(-0.012)$ units for the CNS condition. The budge sonance function thus captures small action derived from axial action in body control. The body-motion response to axial action reflects not only variation in budge sonance but also fast body motion.

A Study on Expression Analysis of Animation Character Using Action Units(AU) (Action Units(AU)를 사용한 애니메이션 캐릭터 표정 분석)

  • Shin, Hyun-Min;Weon, Sun-Hee;Kim, Gye-Young
    • Proceedings of the Korean Society of Computer Information Conference / 2009.01a / pp.163-167 / 2009
  • In this paper, we extract the facial components of 2D animation characters with widely varying face shapes and analyze their expressions in two main stages. In the first stage, we simplify the active mesh models previously used in face recognition and expression recognition to build an optimal standard mesh model suited to character faces, and use this model to extract the position and shape information of the facial components. In the second stage, using the three facial components extracted in the first stage (eyebrows, eyes, mouth) and 12 of the 44 AUs (Action Units) defined in FACS (Facial Action Coding System), we analyze and define five basic facial expressions of the characters. We measured expression-analysis accuracy for the five basic expressions using the AUs defined in this paper, and demonstrate the validity of the proposed AU definitions by experimenting on different characters.
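
As a rough illustration of how a small AU subset can be mapped to basic expressions, a matching step might look like the sketch below. The AU-to-expression subsets are illustrative assumptions drawn from common FACS usage, not the paper's exact 12-AU definitions.

```python
# FACS Action Units for brows, eyes, and mouth (illustrative subsets)
EXPRESSION_AUS = {
    "happiness": {6, 12},        # cheek raiser, lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer, lid raiser, lid tightener, lip tightener
    "fear":      {1, 2, 4, 5, 20},
}

def classify_expression(detected_aus):
    """Return the expression whose AU set best overlaps the detected AUs (Jaccard)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(EXPRESSION_AUS, key=lambda e: jaccard(EXPRESSION_AUS[e], detected_aus))
```

For example, a character face in which AU6 and AU12 are active would score highest against the "happiness" subset.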


STRUCTURE OF UNIT-IFP RINGS

  • Lee, Yang
    • Journal of the Korean Mathematical Society / v.55 no.5 / pp.1257-1268 / 2018
  • In this article we first investigate the sort of unit-IFP rings by which Antoine provides very useful information to ring theory in relation to the structure of the coefficients of zero-dividing polynomials. Here we are concerned with the overall shape of the units and nilpotent elements in such rings. Next we study the properties of unit-IFP rings through group actions of units on nonzero nilpotent elements. We prove that if R is a unit-IFP ring such that there are a finite number of orbits under the left (resp., right) action of units on nonzero nilpotent elements, then R satisfies the descending chain condition for nil left (resp., right) ideals of R, and the upper nilradical of R is nilpotent.

3D Face Modeling based on FACS (Facial Action Coding System) (FACS 기반을 둔 3D 얼굴 모델링)

  • Oh, Du-Sik;Kim, Yu-Sung;Kim, Jae-Min;Cho, Seoung-Won;Chung, Sun-Tae
    • Proceedings of the IEEK Conference / 2008.06a / pp.1015-1016 / 2008
  • In this paper, we propose a method that detects facial features and transforms them using FACS (Facial Action Coding System) for face modeling. FACS decomposes facial expressions into AUs (Action Units), from which various expressions can be composed. The system finds the accurate Action Units of a sample face and uses the predefined AUs; it then computes the coefficients for transforming the face model by 2D AU matching.
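
The AU-coefficient transformation step can be sketched as a blendshape-style deformation, where each AU contributes a weighted displacement field to a neutral mesh. The 2D vertex representation and function name below are hypothetical simplifications, not the paper's 3D pipeline.

```python
def apply_aus(neutral, au_displacements, coeffs):
    """Deform a neutral face mesh by weighted per-AU displacement vectors.

    neutral: list of (x, y) vertices; au_displacements: {au_name: list of (dx, dy)};
    coeffs: {au_name: weight} from 2D AU matching."""
    deformed = list(neutral)
    for au, disp in au_displacements.items():
        c = coeffs.get(au, 0.0)  # AUs absent from the match contribute nothing
        deformed = [(x + c * dx, y + c * dy)
                    for (x, y), (dx, dy) in zip(deformed, disp)]
    return deformed
```

With a lip-corner-puller displacement field and a coefficient of 0.5, each affected vertex moves halfway along its AU displacement.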


Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper proposes a facial expression recognition system using face detection, face alignment, facial-unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, a face detector locates the face region. From this result, a face alignment algorithm extracts feature points. The facial units are a subset of the action units generated by combining the obtained feature points. The facial units are generally more effective on smaller databases, represent facial expressions more efficiently, and reduce computation time, and hence can be applied in real-time scenarios. Experimental results in real scenarios show that the proposed system achieves excellent performance, with recognition rates over 90%.
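
A minimal sketch of the AdaBoost stage, assuming the facial units have already been reduced to numeric feature vectors. This is a generic AdaBoost with single-feature threshold stumps, not the paper's implementation.

```python
import math

def train_adaboost(X, y, n_rounds=10):
    """Train AdaBoost with one-feature threshold stumps.
    X: list of feature vectors (e.g. facial-unit distances), y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []                          # (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for f in range(len(X[0])):         # exhaustive stump search
            for t in sorted({x[f] for x in X}):
                for pol in (+1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if (pol if xi[f] >= t else -pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = max(err, 1e-10)              # avoid log(…/0) on a perfect stump
        if err >= 0.5:
            break                          # weak learner no better than chance
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # re-weight: boost misclassified samples, then normalize
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] >= t else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (p if x[f] >= t else -p) for a, f, t, p in ensemble)
    return 1 if score >= 0 else -1
```

A per-expression one-vs-rest classifier of this form can then be evaluated in real time, since prediction is a handful of threshold comparisons.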

Study of quake wavelength of dynamic movement with posture

  • Kim, Jeong-lae;Hwang, Kyu-sung
    • International journal of advanced smart convergence / v.4 no.1 / pp.99-103 / 2015
  • The quake wavelength technique was designed to measure sway of the body. We present the concept of a dangle wavelength derived from the twisting condition of posture, and compare the twisting condition in terms of average and maximum variation of movement, using a combination system and a correlation system for posture. The correlation signal provides control data for dynamic movement, and the quake wavelength system captures activity aspects of posture; the correlation of the wavelength technique is applied to the small-action signal of posture variation. The quake wavelength during dynamic movement was determined as a variation of $Vi-{\alpha}_{AVG}$ with $(-1.27){\pm}(-0.34)$ units for the vision condition, of $Ve-{\alpha}_{AVG}$ with $(-0.49){\pm}(-0.4)$ units for the vestibular condition, of $So-{\alpha}_{AVG}$ with $0.037{\pm}0.269$ units for the somatosensory condition, and of $C-{\alpha}_{AVG}$ with $(-0.049){\pm}0.015$ units for the CNS condition. Since the quake wavelength technique depends on the action system of body movement, maximum and average values are used as movement combination data. The system requires an action signal in the form of the actual signal, based on small movement conditions of the body; the human action system is compared against the maximum and average of movement derived from the body. The system can therefore be used to evaluate posture conditions through body correlation.

RINGS WITH A FINITE NUMBER OF ORBITS UNDER THE REGULAR ACTION

  • Han, Juncheol;Park, Sangwon
    • Journal of the Korean Mathematical Society / v.51 no.4 / pp.655-663 / 2014
  • Let R be a ring with identity, X(R) the set of all nonzero non-units of R, and G(R) the group of all units of R. We show that for a matrix ring $M_n(D)$, $n{\geq}2$, if a, b are singular matrices of the same rank, then ${\mid}o_{\ell}(a){\mid}={\mid}o_{\ell}(b){\mid}$, where $o_{\ell}(a)$ and $o_{\ell}(b)$ are the orbits of a and b, respectively, under the left regular action. We also show that for a semisimple Artinian ring R such that $X(R){\neq}{\emptyset}$, $R{\cong}{\oplus}^m_{i=1}M_{n_i}(D_i)$, with $D_i$ infinite division rings of the same cardinalities, or R is isomorphic to the ring of $2{\times}2$ matrices over a finite field, if and only if ${\mid}o_{\ell}(x){\mid}={\mid}o_{\ell}(y){\mid}$ for all $x,y{\in}X(R)$.

Prompt Tuning for Facial Action Unit Detection in the Wild

  • Vu Ngoc Tu;Huynh Van Thong;Aera Kim;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.732-734 / 2023
  • The Facial Action Unit (FAU) detection problem focuses on identifying the various fine-grained units of expression on the human face, as defined by the Facial Action Coding System, which makes it a fine-grained classification problem and a challenging task in computer vision. In this study, we propose a Prompt Tuning approach to this problem, involving a 2-step training process. Our method demonstrates its effectiveness on the Affective in the Wild dataset, surpassing existing methods in both accuracy and efficiency.

ON THE MINKOWSKI UNITS OF 2-PERIODIC KNOTS

  • Lee, Sang-Youl
    • Bulletin of the Korean Mathematical Society / v.38 no.3 / pp.475-486 / 2001
  • In this paper we give a relationship among the Minkowski units, for each odd prime number including $\infty$, of a 2-periodic knot in $S^3$, its factor knot, and the 2-component link consisting of the factor knot and the set of fixed points of the periodic action.


An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions are diverse and vary between persons due to psychological factors, whereas facial actions are comparatively stable because of the fixedness of the anatomical structure. Therefore, improving action unit recognition will facilitate facial expression recognition and provide a sound basis for mental state analysis. However, it remains a challenging task, and recognition accuracy is limited, because the muscle movements around the face are tiny and the corresponding facial actions are not obvious. Taking into account that muscle movements influence each other when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt the 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth, which contribute particularly to emotional expression, with their co-occurrence relationships imposed as a constraint. Experiments were conducted on the public CASME dataset and its variant, the CASME2 dataset. The results show that our AU co-occurrence constraint 3DCNN-based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition.
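
One simple way to realize a co-occurrence constraint is as a post-processing step: nudge each AU's probability toward the evidence from AUs it tends to co-occur with. This is a sketch under assumed data structures, not the paper's in-network constraint.

```python
def apply_cooccurrence_constraint(probs, cooc, weight=0.3):
    """Refine per-AU probabilities with pairwise co-occurrence priors.

    probs: {au: p} from a base detector (e.g. a 3DCNN head).
    cooc: {(au_i, au_j): strength in [0, 1]}, treated as symmetric.
    weight: how strongly co-occurrence context overrides the raw score."""
    refined = {}
    for au, p in probs.items():
        # Evidence for this AU contributed by every other AU's probability,
        # scaled by how often the pair co-occurs.
        support = [cooc.get((au, other), cooc.get((other, au), 0.0)) * q
                   for other, q in probs.items() if other != au]
        ctx = sum(support) / len(support) if support else 0.0
        refined[au] = min(1.0, (1 - weight) * p + weight * ctx)
    return refined
```

For instance, a weakly detected AU4 (brow lowerer) is pulled upward when a strongly co-occurring AU7 (lid tightener) is confidently detected, while an AU with no co-occurring support is pulled down.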