• Title/Abstract/Keyword: action units

Search results: 153 items (processing time 0.024 seconds)

Presentation of budge sonance with small action on the body motion

  • Kim, Jeong-lae;Kim, Kyu-dong
    • International journal of advanced smart convergence
    • /
    • Vol. 4, No. 1
    • /
    • pp.35-39
    • /
    • 2015
  • This study presents small actions through the budge sonance function. An estimate of the budge sonance function was acquired from displacements across all conditions under varying small actions. The budge sonance function is intended to express the flow rate of body motion, and it raises the issue of the action condition captured by budge sonance. The proposed system combines body motion with small actions, and the acquired sonance signal renders the small action of body motion via the budge sonance function. Analysis of the budge function generally shows a variation in displacements during fast body motion. The budge sonance signal of action varied by condition: the vision condition yielded a $Vi-{\beta}_{AVG}$ of $(-4.954){\pm}(-5.42)$ units, the vestibular condition a $Ve-{\beta}_{AVG}$ of $(-2.288){\pm}0.212$ units, the somatosensory condition a $So-{\beta}_{AVG}$ of $(-0.47){\pm}0.511$ units, and the CNS condition a $C-{\beta}_{AVG}$ of $(-0.171){\pm}(-0.012)$ units. The budge sonance function thus derives the small action from axial action in body control. We find that the body-motion response to axial action reflects not only the variation of budge sonance but also the body motion of fast movement.

A Study on Expression Analysis of Animation Characters Using Action Units (AU)

  • 신현민;원선희;김계영
    • Korean Society of Computer and Information: Conference Proceedings
    • /
    • Proceedings of the 39th Winter Conference of the Korean Society of Computer and Information, 2008, Vol. 16, No. 2
    • /
    • pp.163-167
    • /
    • 2009
  • In this paper, the facial components of 2D animation characters with widely varying face shapes are extracted and their expressions analyzed in two main stages. In the first stage, the dynamic mesh model previously used in face recognition and expression recognition is simplified into an optimal standard mesh model suited to character faces, and this model is used to extract the position and shape information of the facial components. In the second stage, using the three facial components extracted in the previous stage (eyebrows, eyes, mouth) and 12 of the 44 AUs (Action Units) defined in FACS (Facial Action Coding System), the character's five basic facial expressions are analyzed and defined. Expression-analysis accuracy for the five basic expressions was measured with the AUs defined in this paper, and experiments on different characters demonstrate the validity of the proposed AU definitions.
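
As a rough illustration of the kind of AU-to-expression analysis described above, the sketch below scores a set of detected AUs against per-expression AU templates. The paper's actual 12-AU subset and mapping are not given in the abstract; the template table here is an assumption drawn from common FACS conventions.

```python
# Illustrative sketch: classifying five basic expressions from detected
# FACS Action Units. The mapping below is a commonly cited FACS-based
# convention, used here only as an assumed stand-in for the paper's.

# Expression -> set of Action Units commonly associated with it
EXPRESSION_AUS = {
    "happiness": {6, 12},        # cheek raiser, lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer, lid/lip tighteners
    "fear":      {1, 2, 4, 5, 20},
}

def classify_expression(detected_aus):
    """Pick the expression whose AU template best overlaps the detected AUs."""
    def score(item):
        name, aus = item
        return len(aus & detected_aus) / len(aus)  # fraction of template matched
    return max(EXPRESSION_AUS.items(), key=score)[0]

print(classify_expression({6, 12}))        # -> happiness
print(classify_expression({1, 2, 5, 26}))  # -> surprise
```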


STRUCTURE OF UNIT-IFP RINGS

  • Lee, Yang
    • Journal of the Korean Mathematical Society
    • /
    • Vol. 55, No. 5
    • /
    • pp.1257-1268
    • /
    • 2018
  • In this article we first investigate a class of unit-IFP rings through which Antoine provides very useful information to ring theory concerning the structure of the coefficients of zero-dividing polynomials. Here we are concerned with the overall shape of the units and nilpotent elements in such rings. Next we study the properties of unit-IFP rings through group actions of units on nonzero nilpotent elements. We prove that if R is a unit-IFP ring such that there are a finite number of orbits under the left (resp., right) action of units on nonzero nilpotent elements, then R satisfies the descending chain condition for nil left (resp., right) ideals of R and the upper nilradical of R is nilpotent.

3D Face Modeling Based on FACS (Facial Action Coding System)

  • 오두식;김유성;김재민;조성원;정선태
    • Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the IEEK 2008 Summer Conference
    • /
    • pp.1015-1016
    • /
    • 2008
  • In this paper, a method that finds facial features and transforms them using FACS (Facial Action Coding System) for face modeling is suggested. FACS serves to decompose a facial expression into AUs (Action Units) and to generate various facial expressions. The system finds the accurate Action Units of a sample face and uses the preset AUs. Consequently, it computes the coefficients for transforming the face model by 2D AU matching.
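
The abstract does not spell out how the AU coefficients deform the face model. A common formulation, shown here only as an assumed sketch, is a linear blendshape combination in which each AU contributes a weighted vertex-displacement field:

```python
# Sketch of AU-driven face-model transformation as a linear blendshape
# combination. The paper's actual model and coefficient computation are
# not specified in the abstract; the vertex data and AU displacement
# fields below are hypothetical placeholders.
import numpy as np

neutral = np.zeros((4, 3))  # hypothetical 4-vertex neutral face mesh

# Per-AU vertex displacement fields (one blendshape per Action Unit)
au_blendshapes = {
    "AU12": np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0], [0, 0, 0]], float),
    "AU26": np.array([[0, 0, 0], [0, -1, 0], [0, -1, 0], [0, -2, 0]], float),
}

def apply_aus(neutral, blendshapes, weights):
    """Deform the neutral mesh by a weighted sum of AU blendshapes."""
    deformed = neutral.copy()
    for au, w in weights.items():
        deformed += w * blendshapes[au]
    return deformed

mesh = apply_aus(neutral, au_blendshapes, {"AU12": 0.5, "AU26": 1.0})
print(mesh)
```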


Facial Expression Recognition Using Face Alignment and AdaBoost

  • 정경중;최재식;장길진
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • Vol. 51, No. 11
    • /
    • pp.193-201
    • /
    • 2014
  • In this paper, we propose a learning method using face detection, face alignment, facial-unit extraction, and AdaBoost, together with an effective recognition method, to recognize human facial expressions in face images. Face detection is performed to locate the face region in the input image; the detected face image is aligned against a trained face model (face alignment); and then the unit elements representing the facial expression (facial units) are extracted. The facial units proposed in this paper are a subset of the basic Action Units (AUs) used to represent expressions, divided into the eyebrow, eye, nose, and mouth regions, and AdaBoost training is performed on these units to recognize expressions. Facial units represent facial expressions more efficiently and reduce the running time of training and testing, making them suitable for real-time applications. Experimental results show that the proposed expression-recognition system achieves over 90% accuracy in a real-time environment.
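
The boosting step described above can be sketched as follows: a from-scratch AdaBoost with decision stumps over hypothetical facial-unit features. The paper's actual features, weak learners, and parameters are not specified in the abstract and are assumed here.

```python
# Minimal AdaBoost with decision stumps. Features stand in for
# hypothetical facial-unit measurements (eyebrow/eye/mouth); labels are
# binary expressions, e.g. +1 = "smile". Illustration only.
import numpy as np

def train_adaboost(X, y, rounds=10):
    """y in {-1,+1}; returns a list of (feature, threshold, polarity, alpha)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        w *= np.exp(-alpha * y * pred)          # re-weight samples
        w /= w.sum()
        stumps.append((j, thr, pol, alpha))
    return stumps

def predict(stumps, X):
    total = sum(a * p * np.where(X[:, j] >= t, 1, -1)
                for j, t, p, a in stumps)
    return np.sign(total)

# Toy data: two hypothetical facial-unit features
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])
y = np.array([1, 1, -1, -1])
model = train_adaboost(X, y, rounds=3)
print(predict(model, X))  # reproduces the training labels
```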

Study of quake wavelength of dynamic movement with posture

  • Kim, Jeong-lae;Hwang, Kyu-sung
    • International journal of advanced smart convergence
    • /
    • Vol. 4, No. 1
    • /
    • pp.99-103
    • /
    • 2015
  • The quake wavelength technique was designed to capture body sway. It presents the concept of a dangle wavelength derived from the twisting condition of posture. We compared the twisting condition for the average variation and maximum variation of the movement, using a combination system and a correlation system for posture. The correlation signal provides control data for the dynamic movement. The quake wavelength system forms the activity aspects of posture, and the correlation of the wavelength technique was applied to the small-action signal of posture variation. Quake wavelength under dynamic movement was determined as a variation for the vision condition of the $Vi-{\alpha}_{AVG}$ with $(-1.27){\pm}(-0.34)$ units, for the vestibular condition of the $Ve-{\alpha}_{AVG}$ with $(-0.49){\pm}(-0.4)$ units, for the somatosensory condition of the $So-{\alpha}_{AVG}$ with $0.037{\pm}0.269$ units, and for the CNS condition of the $C-{\alpha}_{AVG}$ with $(-0.049){\pm}0.015$ units. Since the quake wavelength technique depends on the action system of body movement, the maximum and average values were used for the movement of the combination data. The system requires an action signal in the form of the actual signal, based on a small movement condition in the body. The human action system was compared with the maximum and average of the movement derived from the body. Therefore, the system was controlled to evaluate the posture condition for the body correlation.

RINGS WITH A FINITE NUMBER OF ORBITS UNDER THE REGULAR ACTION

  • Han, Juncheol;Park, Sangwon
    • Journal of the Korean Mathematical Society
    • /
    • Vol. 51, No. 4
    • /
    • pp.655-663
    • /
    • 2014
  • Let R be a ring with identity, X(R) the set of all nonzero non-units of R, and G(R) the group of all units of R. We show that for a matrix ring $M_n(D)$, $n{\geq}2$, if a, b are singular matrices of the same rank, then $|o_{\ell}(a)|=|o_{\ell}(b)|$, where $o_{\ell}(a)$ and $o_{\ell}(b)$ are the orbits of a and b, respectively, under the left regular action. We also show that for a semisimple Artinian ring R such that $X(R){\neq}{\emptyset}$, $R{\cong}{\oplus}^m_{i=1}M_{n_i}(D_i)$ with $D_i$ infinite division rings of the same cardinality, or R is isomorphic to the ring of $2{\times}2$ matrices over a finite field, if and only if $|o_{\ell}(x)|=|o_{\ell}(y)|$ for all $x,y{\in}X(R)$.
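
The equal-orbit-size claim for singular matrices of the same rank can be checked empirically in the smallest matrix ring. The sketch below enumerates the units and nonzero singular matrices of the 2x2 matrices over the two-element field F_2 (where every nonzero singular matrix has rank 1) and verifies that all left-regular orbits have the same size. This is only a finite sanity check, not part of the paper's proof.

```python
# Empirical check in M_2(F_2): all nonzero singular matrices (rank 1)
# should lie in left-regular orbits of equal size under G(R) = GL_2(F_2).
from itertools import product

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def det(a):
    return (a[0][0] * a[1][1] - a[0][1] * a[1][0]) % 2

all_mats = [(m[:2], m[2:]) for m in product((0, 1), repeat=4)]
zero = ((0, 0), (0, 0))
units = [m for m in all_mats if det(m) == 1]                   # G(R)
singular = [m for m in all_mats if det(m) == 0 and m != zero]  # X(R), all rank 1

# o_l(a) = { u*a : u a unit }; collect the distinct orbit sizes
orbit_sizes = {len({matmul(u, a) for u in units}) for a in singular}
print(len(units), len(singular), orbit_sizes)  # 6 units, 9 singular, one common size
```

Here GL_2(F_2) has 6 elements and there are 9 nonzero singular matrices; the set of orbit sizes collapses to a single value, as the theorem predicts for same-rank singular matrices.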

Prompt Tuning for Facial Action Unit Detection in the Wild

  • ;;김애라;김수형
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Proceedings of the KIPS 2023 Spring Conference
    • /
    • pp.732-734
    • /
    • 2023
  • The Facial Action Unit detection (FAU) problem focuses on identifying the various fine-grained units of expression on the human face, as defined by the Facial Action Coding System; it constitutes a fine-grained classification problem and a challenging task in computer vision. In this study, we propose a Prompt Tuning approach to this problem, involving a 2-step training process. Our method demonstrates its effectiveness on the Affective in the Wild dataset, surpassing existing methods in terms of both accuracy and efficiency.

ON THE MINKOWSKI UNITS OF 2-PERIODIC KNOTS

  • Lee, Sang-Youl
    • Bulletin of the Korean Mathematical Society
    • /
    • Vol. 38, No. 3
    • /
    • pp.475-486
    • /
    • 2001
  • In this paper we give a relationship among the Minkowski units, for every odd prime number including $\infty$, of a 2-periodic knot in $S^3$, its factor knot, and the 2-component link consisting of the factor knot and the set of fixed points of the periodic action.


An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 3
    • /
    • pp.924-942
    • /
    • 2020
  • Facial expressions are diverse and vary across persons due to psychological factors, whereas facial actions are comparatively stable because of the fixedness of the anatomical structure. Therefore, improving the performance of action unit recognition will facilitate facial expression recognition and provide a sound basis for mental-state analysis, etc. However, it is still a challenging job and recognition accuracy is limited, because the muscle movements around the face are tiny and the facial actions are accordingly not obvious. Taking into account that the movements of the muscles affect each other when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs) in this paper. Considering the dynamic characteristics of AUs as well, we adopt the 3D Convolutional Neural Network (3DCNN) as the base framework and propose to recognize multiple action units around the brows, nose, and mouth that especially contribute to emotion expression, using their co-occurrence relationships as a constraint. Experiments have been conducted on the typical public dataset CASME and its variant, the CASME2 dataset. The results show that our proposed AU co-occurrence constraint 3DCNN based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition.
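
One way to read "co-occurrence relationships as a constraint" is as an extra loss term that pushes the predictions of frequently co-occurring AUs toward each other. The paper's actual loss and network are not specified in the abstract; the penalty form and co-occurrence matrix below are assumptions, shown in plain NumPy only to illustrate the idea.

```python
# Sketch of a multi-label AU loss with a co-occurrence constraint.
# C[i, j] in [0, 1] is an (assumed) co-occurrence strength of AUs i, j.
import numpy as np

def bce(p, y, eps=1e-7):
    """Mean binary cross-entropy over AU labels."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def cooccurrence_penalty(p, C):
    """Penalize disagreement between AUs that tend to co-occur."""
    diff = p[:, None] - p[None, :]
    return float(np.sum(C * diff ** 2) / 2)

def total_loss(p, y, C, lam=0.1):
    return bce(p, y) + lam * cooccurrence_penalty(p, C)

# Three hypothetical AUs; AU0 and AU1 strongly co-occur.
C = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
y = np.array([1.0, 1.0, 0.0])
consistent   = np.array([0.9, 0.9, 0.1])  # co-occurring AUs agree
inconsistent = np.array([0.9, 0.1, 0.1])  # they disagree

print(total_loss(consistent, y, C) < total_loss(inconsistent, y, C))  # True
```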