• Title/Summary/Keyword: motion feature vector


Real-time structural damage detection using wireless sensing and monitoring system

  • Lu, Kung-Chun;Loh, Chin-Hsiung;Yang, Yuan-Sen;Lynch, Jerome P.;Law, K.H.
    • Smart Structures and Systems, v.4 no.6, pp.759-777, 2008
  • A wireless sensing system is designed for structural monitoring and damage detection applications. Embedded in the wireless monitoring module is a two-tier prediction model, the auto-regressive (AR) model and the auto-regressive model with exogenous inputs (ARX), used to obtain damage-sensitive features of a structure. To validate the performance of the proposed wireless monitoring and damage detection system, two near-full-scale single-story RC frames, with and without a brick wall system, are instrumented with the wireless monitoring system for real-time damage detection during shaking table tests. White noise and seismic ground motion records are applied to the base of the structure using a shaking table. Pattern classification methods are then adopted to classify the structure as damaged or undamaged, using the time series coefficients as entries of a damage-sensitive feature vector. The damage detection methodology is demonstrated to be capable of identifying damage using a wireless structural monitoring system. The accuracy and sensitivity of the MEMS-based wireless sensors employed are also verified through comparison with data recorded using a traditional wired monitoring system.
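
The core of the two-tier model above is using fitted time-series coefficients as the damage-sensitive feature. A minimal sketch of the AR half of that idea (least-squares fit; the function name and simulation are illustrative, not the paper's implementation):

```python
import numpy as np

def ar_features(x, order):
    """Fit an AR(order) model to a response signal by least squares.

    The coefficient vector serves as a damage-sensitive feature:
    the coefficients drift when the structure's dynamics change.
    """
    # design matrix: column k holds x[t-1-k] for each target x[t]
    X = np.column_stack([x[order - 1 - k : len(x) - 1 - k] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# example: recover the coefficients of a simulated AR(2) process
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)

coef = ar_features(x, order=2)
```

Comparing such coefficient vectors between a baseline and a test recording is what lets a pattern classifier label the structure damaged or undamaged.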

A Study on Video Data Protection Method based on MPEG using Dynamic Shuffling (동적 셔플링을 이용한 MPEG기반의 동영상 암호화 방법에 관한 연구)

  • Lee, Ji-Bum;Lee, Kyoung-Hak;Ko, Hyung-Hwa
    • Journal of Korea Multimedia Society, v.10 no.1, pp.58-65, 2007
  • This dissertation proposes a digital video protection algorithm for moving images based on MPEG. Shuffling-based encryption algorithms using a fixed random shuffling table are quite simple and effective but vulnerable to the chosen-plaintext attack. To overcome this problem, it is necessary to change the key used to generate the shuffling table. However, this may pose a significant burden on the security key management system. A better approach is to generate the shuffling table based on the local features of an image. To withstand the chosen-plaintext attack, we first propose an interleaving algorithm that is adaptive to the local features of an image. Second, using a multiple shuffling method that combines interleaving with the existing random shuffling method, we encrypt the DPCM-processed 8×8 blocks. Experimental results showed that the proposed algorithm needs only 10% of the time of the SEED encryption algorithm, with no overhead bits. In video sequence encryption, multiple random shuffling algorithms are used to encrypt the DC and AC coefficients of intra frames, and motion vector encryption and macroblock shuffling are used to encrypt the intra-coded macroblocks in predicted frames.
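
The key idea is that the shuffling table depends on image content, so a chosen-plaintext attacker cannot reuse one recovered table. A minimal sketch under assumed details (the mean-based feature and 8×8 block shuffling are illustrative; the paper's adaptive interleaving is more elaborate):

```python
import numpy as np

def shuffling_table(frame, key):
    # Seed the table from the key *and* a coarse feature of the frame,
    # so the permutation changes with image content (this feature
    # choice is an assumption for illustration, not the paper's exact one).
    feature = int(frame.mean() * 1000) & 0xFFFF
    n_blocks = (frame.shape[0] // 8) * (frame.shape[1] // 8)
    return np.random.default_rng(key ^ feature).permutation(n_blocks)

def to_blocks(frame):
    # split the frame into a flat array of 8x8 blocks
    h, w = (frame.shape[0] // 8) * 8, (frame.shape[1] // 8) * 8
    return (frame[:h, :w].reshape(h // 8, 8, w // 8, 8)
            .swapaxes(1, 2).reshape(-1, 8, 8))

frame = np.arange(32 * 32, dtype=np.uint8).reshape(32, 32)
table = shuffling_table(frame, key=0xC0FFEE)
encrypted = to_blocks(frame)[table]          # shuffle the 8x8 blocks
decrypted = encrypted[np.argsort(table)]     # inverse permutation restores them
```

Decryption only needs the key plus the same feature value, which the decoder can carry as side information or derive from invariant parts of the stream.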


Rule-based and Probabilistic Event Recognition of Independent Objects for Interpretation of Emergency Scenarios (긴급 상황 시나리오 해석을 위한 독립 객체의 규칙 기반 및 확률적 이벤트 인식)

  • Lee, Jun-Cheol;Choi, Chang-Gyu
    • Journal of Korea Multimedia Society, v.11 no.3, pp.301-314, 2008
  • Existing event recognition rests on a limited systematic foundation, so interpretation of emergency scenarios requires long learning times due to the large scale of the probability data. In this paper, we propose a method for rule-based event recognition of an independent object (human) that extracts feature vectors from the object, analyzes the behavior pattern of each object, and interprets emergency scenarios using probabilities and the object's events. The event rules of an independent object comprise Primary events, Move events, Interaction events, and the 'FALL DOWN' event, and are defined through the feature vectors of the object and the segmented motion oriented vector (SMOV), to which a dynamic Bayesian network is applied. The emergency scenario is analyzed using the current state of an event and its posterior probability. We define more diversified events than pre-existing methods and thus make the system easy to expand by increasing the independence of each event. Accordingly, semantic information, which is impossible to be gained through an…
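
The rule-plus-probability combination above can be sketched as a toy rule set over per-frame features followed by a Bayesian posterior update over the event stream (all thresholds and probabilities here are assumptions for illustration, not the paper's learned values):

```python
def classify_event(height_ratio, speed):
    """Toy rules in the spirit of the Primary/Move/'FALL DOWN' events.

    height_ratio: bounding-box height relative to the person's standing
    height; speed: feature-point motion magnitude. Thresholds assumed.
    """
    if speed < 0.05:
        return 'PRIMARY'                      # near-stationary object
    if height_ratio < 0.5 and speed > 0.5:    # sudden drop of the body
        return 'FALL DOWN'
    return 'MOVE'

# assumed event likelihoods under emergency vs. normal scenarios
P_EMERGENCY = {'FALL DOWN': 0.8, 'MOVE': 0.15, 'PRIMARY': 0.05}
P_NORMAL = {'FALL DOWN': 0.05, 'MOVE': 0.45, 'PRIMARY': 0.5}

def update_emergency(prior, event):
    """Posterior P(emergency | event) by Bayes' rule, applied per event."""
    num = P_EMERGENCY[event] * prior
    den = num + P_NORMAL[event] * (1 - prior)
    return num / den

event = classify_event(height_ratio=0.3, speed=0.9)   # -> 'FALL DOWN'
posterior = update_emergency(0.1, event)              # belief jumps from 0.1
```

Chaining `update_emergency` over successive recognized events is the "current state of an event and its posterior probability" interpretation step.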


Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho;Kim, Yoon-Jae;Kim, Tae-Young
    • Journal of Korea Game Society, v.17 no.6, pp.113-120, 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have been increasingly studied in recent years for natural interaction with virtual 3D objects. Most earlier studies on hand-gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess various hand data and classify the data through a binary decision tree. The classified data are re-sampled and converted to a chain code, and then hand feature data are constructed from histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed about a 99.2% recognition rate for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
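
The chain-code-to-histogram step of the pipeline can be sketched as follows (a standard 8-direction Freeman chain code; the resampling and MCSVM stages are omitted):

```python
import numpy as np

def chain_code(points):
    """8-direction Freeman chain code of a (resampled) 2D point sequence."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)
        # quantize the step direction into one of 8 sectors (0 = east)
        codes.append(int(round(angle / (np.pi / 4))) % 8)
    return codes

def chain_histogram(codes):
    """Normalized direction histogram used as the gesture feature vector."""
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum()

# a straight rightward stroke: every step has direction code 0
stroke = [(i, 0) for i in range(10)]
feature = chain_histogram(chain_code(stroke))
```

This 8-bin histogram is translation- and scale-tolerant, which is why it works well as an input feature for a multi-class SVM.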

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society, v.7 no.10, pp.1478-1484, 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The space of expressions is created from about 2400 frames of facial expressions. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face. The set of distance matrices is used as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To help this process, we visualized the space of expressions in 2D by Principal Component Analysis (PCA) projection. To see how effective this system is, we had users control the facial expressions of the 3D avatar with it, and this paper evaluates the results.
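
The distance-matrix state plus PCA projection described above can be sketched directly (the landmark count and random frames below are placeholders for the ~2400 captured expression frames):

```python
import numpy as np

def expression_state(landmarks):
    """Distance-matrix state of one expression frame: pairwise distances
    between facial feature points, flattened to the upper triangle."""
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dist[iu]

def pca_project_2d(states):
    """Project expression states onto the first two principal components."""
    centered = states - states.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(1)
frames = rng.normal(size=(100, 15, 3))       # 100 frames, 15 feature points
states = np.array([expression_state(f) for f in frames])
coords = pca_project_2d(states)              # the navigable 2D expression space
```

Navigating `coords` and mapping each selected point back to its source frame is what drives the avatar's expression in real time.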


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and to control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation based on facial motion data. In addition, by distributing facial expressions into an intuitive space using the LLE algorithm, it is possible to create animations or control expressions in real time from the facial expression space through a user interface. In this paper, approximately 2400 facial expression frames are used to generate the facial expression space. By navigating the facial expression space projected onto a 2-dimensional plane, animations can be created, or the expressions of 3-dimensional avatars controlled in real time, by selecting a series of expressions from the space. To distribute the approximately 2400 facial expression data into an intuitive space, the state of each expression must be represented from the facial expression frames. For this, the distance matrix that represents the distances between pairs of feature points on the face is used. The LLE algorithm is then used to visualize these data in the 2-dimensional plane. Animators control facial expressions or create animations using the user interface of this system, and this paper evaluates the results of the experiment.
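
Where the previous paper used PCA, this one embeds the same distance-matrix states with Locally Linear Embedding. A minimal textbook LLE sketch (O(n²) memory, dense eigensolver; neighbor count and regularization are assumed defaults, not the paper's settings):

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding for illustration."""
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # local coordinates
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularized Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                   # reconstruction weights
    # embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the
    # constant eigenvector with eigenvalue ~0
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

rng = np.random.default_rng(2)
data = rng.normal(size=(60, 12))   # stand-in for the expression-state vectors
embedding = lle(data)              # 2D coordinates for the expression space
```

Unlike PCA, LLE preserves local neighborhood structure, so nearby expressions in the motion data stay nearby in the 2D plane the animator navigates.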

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.3, pp.23-35, 2002
  • Gaze detection is to locate the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system that sets a single camera above a monitor, while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils and lip corners) automatically in 2D camera images. From the movement of feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at one position on the monitor, the moved 3D positions of those features can be computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by those moved 3D feature positions. As experimental results, we obtain the gaze position on a 19-inch monitor with an accuracy of about 2.01 inches of RMS error between the computed positions and the real ones.
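
The final step, going from moved 3D feature positions to a screen coordinate, reduces to intersecting the face-plane normal with the monitor plane. A geometric sketch (coordinates in arbitrary units, with the monitor assumed at z = 0; the 3D estimation stages are not reproduced here):

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three tracked 3D feature points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def gaze_on_monitor(face_center, normal, monitor_z=0.0):
    """Intersect the face-normal ray with the monitor plane z = monitor_z."""
    t = (monitor_z - face_center[2]) / normal[2]
    return face_center[:2] + t * normal[:2]

# face 50 units in front of the screen, oriented straight ahead
center = np.array([10.0, 5.0, 50.0])
normal = plane_normal(np.array([9.0, 4.0, 50.0]),
                      np.array([11.0, 4.0, 50.0]),
                      np.array([10.0, 6.0, 50.0]))
gaze = gaze_on_monitor(center, normal)   # the point directly in front
```

As the user rotates the face, the re-estimated feature points tilt the plane, the normal tilts with it, and the intersection point sweeps across the screen.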

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence, v.14 no.8, pp.233-243, 2016
  • Due to speech recognition problems in noisy environments, the Audio-Visual Speech Recognition (AVSR) system, which combines speech information and visual information, has been studied since the mid-1990s, and lip reading has played a significant role in AVSR systems. This study aims to enhance the recognition rate of uttered words using only lip shape detection, for an efficient AVSR system. After preprocessing for lip region detection, Convolutional Neural Network (CNN) techniques are applied for utterance period detection and lip shape feature vector extraction, and Hidden Markov Models (HMMs) are then used for the recognition. As a result, utterance period detection shows a 91% success rate, a higher performance than general threshold methods. In lip reading recognition, the user-dependent experiment records 88.5%, while the user-independent experiment shows an 80.2% recognition rate, improved results compared to previous studies.
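
In a CNN-plus-HMM pipeline like this, the CNN scores each frame and the HMM smooths those scores into a state sequence. A minimal Viterbi decoder illustrating the utterance-period-detection half (the two-state layout and all probabilities below are toy assumptions, not the paper's trained models):

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely state path given per-frame log-likelihoods
    (e.g. CNN scores for silence/utterance) and HMM parameters."""
    T, S = obs_loglik.shape
    delta = log_init + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# states: 0 = silence, 1 = utterance; toy per-frame CNN-style scores
obs = np.log([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
trans = np.log([[0.9, 0.1], [0.1, 0.9]])     # sticky transitions
init = np.log([0.5, 0.5])
segmentation = viterbi(obs, trans, init)
```

The sticky transition matrix is what makes this outperform per-frame thresholding: isolated noisy frames cannot flip the state on their own.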

Application of CSP Filter to Differentiate EEG Output with Variation of Muscle Activity in the Left and Right Arms (좌우 양팔의 근육 활성도 변화에 따른 EEG 출력 구분을 위한 CSP 필터의 적용)

  • Kang, Byung-Jun;Jeon, Bu-Il;Cho, Hyun-Chan
    • Journal of IKEEE, v.24 no.2, pp.654-660, 2020
  • Using the EEG output produced during muscle operation, this paper examines whether it is possible to find characteristic brain-wave vectors capable of separating left and right movements, by extracting the EEG in the specific regions of muscle signal output that reflect the motion of the left and right muscles, or the will of the user, within EEG signals that contain considerable uncertainty. Typical surface EMG and noninvasive brain-wave extraction methods cannot distinguish whether a signal corresponds to a motion from the degree of ionization by internal neurotransmitters and the magnitude of electrical conductivity. In joint and motor control through conventional robot control systems or electrical signals, controllable signals can be identified through the transmission and feedback control of specific signals; for the human body, however, evidence of the exact protocols between the brain and the muscles is lacking. Therefore, this paper verifies efficiency by applying a CSP (Common Spatial Pattern) filter, confirming that left-hand and right-hand signals can be separated through brain-wave analysis while the subject performs the behavior. In addition, we propose ways to obtain data through an experimental design for verification, to verify the change in results with and without the filter, and to increase the classification accuracy.
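
CSP finds spatial filters whose projected variance is maximal for one class and minimal for the other, which is what makes left/right activity separable. A compact sketch of the standard whitening-based formulation (channel counts, trial counts, and the synthetic data are assumptions for illustration):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Pattern filters maximizing the variance ratio
    between two classes (e.g. left-arm vs right-arm activity)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # whitening transform of the composite covariance Ca + Cb
    vals, vecs = np.linalg.eigh(Ca + Cb)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T
    # eigenvectors of the whitened class-a covariance give the filters
    _, U = np.linalg.eigh(P @ Ca @ P)
    W = U.T @ P
    # keep the filters at both extremes of the eigenvalue spectrum
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

rng = np.random.default_rng(3)
# synthetic 8-channel trials: class A has extra power on channel 0,
# class B on channel 7 (a stand-in for real left/right recordings)
def make_trials(strong_ch):
    trials = rng.normal(size=(20, 8, 256))
    trials[:, strong_ch] *= 3.0
    return trials

W = csp_filters(make_trials(0), make_trials(7))
```

Log-variances of trials projected through `W` are the features usually handed to a classifier; comparing accuracy with and without this filtering is the verification the paper proposes.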

Development of Exercise Analysis System Using Bioelectric Abdominal Signal (복부생체전기신호를 이용한 운동 분석 시스템 개발)

  • Gang, Gyeong Woo;Min, Chul Hong;Kim, Tae Seon
    • Journal of the Institute of Electronics and Information Engineers, v.49 no.11, pp.183-190, 2012
  • Conventional physical activity monitoring systems, which use accelerometers, the global positioning system (GPS), heartbeats, or body temperature information, show limited performance due to their restrictions on the measurement environment and the measurable activity types. To overcome these limitations, we developed a portable exercise analysis system that can analyze aerobic exercises as well as isotonic exercises. For bioelectric signal acquisition during exercise, a waist belt with two body-contact electrodes was used. For exercise analysis, the measured signals were first divided into two signal groups with different frequency ranges, representing respiration-related and muscular-motion-related signals, respectively. Then, power values, differentials of power values, and median frequency values were selected as features. The selected features were used as inputs to a support vector machine (SVM) to classify the exercise types. For verification of statistical significance, ANOVA and multiple comparison tests were performed. The experimental results showed 100% accuracy for classification of aerobic exercise versus isotonic resistance exercise; classification among aerobic exercise, isotonic resistance exercise, and hybrid types of exercise achieved 92.7% accuracy.
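
Of the three feature types, median frequency is the least obvious; it is the frequency that splits the spectral power in half. A minimal sketch (FFT-based estimate on a synthetic tone; the sampling rate is an assumption, not the system's actual configuration):

```python
import numpy as np

def median_frequency(signal, fs):
    """Frequency below which half of the spectral power lies —
    one of the feature values fed to the SVM classifier."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(power)
    # first frequency bin where cumulative power reaches half the total
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

# a pure 10 Hz tone sampled at 1 kHz has its median frequency at 10 Hz
fs = 1000
t = np.arange(fs) / fs
mf = median_frequency(np.sin(2 * np.pi * 10 * t), fs)
```

Computed separately on the respiration band and the muscular-motion band, this value shifts with exercise intensity and type, which is what makes it discriminative alongside the power features.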