• Title/Summary/Keyword: motion map

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang;Hua, Guoguang;Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.2042-2059
    • /
    • 2019
  • Digital shadow puppetry has traditionally relied on expensive motion capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose data with a simple camera in real scenarios and uses it to drive a virtual Chinese shadow play in a 2.5D scene. We propose a special method, called 2.5D human pose estimation, for extracting human pose data to drive the virtual shadow play. First, we use a 3D human pose estimation method to obtain initial data. In the subsequent transformation, we treat the depth feature as an implicit feature and map body joints into a constrained range; we call the resulting pose data 2.5D pose data. However, the 2.5D pose data cannot directly control the shadow puppet well, due to differences in motion pattern and composition structure between real poses and shadow puppets. To this end, the 2.5D pose data are transformed in an implicit pose mapping space based on a self-network, and the final 2.5D pose expression data are produced for animating shadow puppets. Experimental results have demonstrated the effectiveness of the new method.
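As a rough illustration of the 2.5D mapping step this abstract describes (depth kept as an implicit feature, joints clamped to a constrained range), the sketch below uses invented function names, ranges, and a layer quantization that are not from the paper:

```python
# Hypothetical sketch: keep (x, y) of each 3D joint, clamp depth to an
# assumed constraint range, and collapse it into a discrete layer index
# that acts as the implicit depth feature of a 2.5D pose.

def to_2_5d(joints_3d, depth_min=-1.0, depth_max=1.0, n_layers=5):
    """Map 3D joints (x, y, z) to 2.5D joints (x, y, layer)."""
    joints_2_5d = []
    for x, y, z in joints_3d:
        # clamp depth to the assumed constraint range
        z = max(depth_min, min(depth_max, z))
        # normalize to [0, 1] and quantize into a few depth layers
        t = (z - depth_min) / (depth_max - depth_min)
        layer = min(n_layers - 1, int(t * n_layers))
        joints_2_5d.append((x, y, layer))
    return joints_2_5d
```

A shadow-puppet rig could then order its flat limb sprites by this layer index instead of using full 3D depth.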

Non-stationary vibration and super-harmonic resonances of nonlinear viscoelastic nano-resonators

  • Ajri, Masoud;Rastgoo, Abbas;Fakhrabadi, Mir Masoud Seyyed
    • Structural Engineering and Mechanics
    • /
    • v.70 no.5
    • /
    • pp.623-637
    • /
    • 2019
  • This paper analyzes non-stationary vibration and super-harmonic resonances in the nonlinear dynamic motion of viscoelastic nano-resonators. For this purpose, a new coupled size-dependent model is developed for a plate-shaped nano-resonator made of a nonlinear viscoelastic material, based on modified couple stress theory. The virtual work induced by viscous forces is obtained in the framework of the Leaderman integral for the size-independent and size-dependent stress tensors. Incorporating the size-dependent potential energy, kinetic energy, and the work of an external excitation force, the work-energy balance is established via Hamilton's principle. The resulting size-dependent viscoelastically coupled equations are solved using expansion theory, the Galerkin method, and the fourth-order Runge-Kutta technique. The Hilbert-Huang transform is performed to examine the effects of the viscoelastic parameter and initial excitation values on the free vibration of the nanosystem. Furthermore, the secondary resonance due to super-harmonic motions is examined in terms of frequency response, force response, Poincaré map, phase portrait, and fast Fourier transform. The results show that, unlike the elastic case, the vibration of the viscoelastic nanosystem is non-stationary at higher excitation values. In addition, ignoring small-size effects significantly shifts the secondary resonance.
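The solution pipeline above (Galerkin reduction followed by fourth-order Runge-Kutta integration) can be sketched with a Duffing-type single-mode oscillator standing in for the reduced viscoelastic equation; the coefficients and forcing below are illustrative, not the paper's:

```python
import math

# Classical RK4 applied to a reduced single-mode equation of the form
#   q'' + c q' + q + a q**3 = F cos(w t)
# (illustrative stand-in for the Galerkin-reduced viscoelastic model).

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(len(y))])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(len(y))])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(len(y))])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(y))]

def duffing(c=0.1, a=0.5, F=0.3, w=1.0):
    def f(t, y):
        q, dq = y  # state: displacement and velocity
        return [dq, F * math.cos(w * t) - c * dq - q - a * q ** 3]
    return f

def integrate(f, y0, t_end, h=0.01):
    t, y = 0.0, list(y0)
    while t < t_end:
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

Sweeping the excitation frequency `w` over sub-multiples of the natural frequency and recording steady-state amplitudes is how a super-harmonic frequency-response curve would be assembled from such an integrator.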

Automatic Recognition of Local Wrinkles in Textile Using Block Matching Algorithm (블록 정합을 이용한 국부적인 직물 구김 인식)

  • Lee, Hyeon-Jin;Kim, Eun-Jin;Lee, Il-Byeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.11
    • /
    • pp.3165-3177
    • /
    • 1999
  • With recent advances in computer software and hardware, a number of studies to enhance manufacturing speed and process accuracy have been undertaken in many fields of the textile industry. Frequently encountered problems in the automatic recognition of textile wrinkles in a grey-scale image are as follows. First, the changes in grey-level intensity caused by wrinkles are very subtle. Second, since both colors and patterns appear as grey-level intensity in a grey-scale image, it is difficult to isolate the wrinkle information. Third, it is also difficult to distinguish grey-level intensity changes caused by wrinkles from those caused by uneven illumination. This paper suggests a method for the automatic recognition of textile wrinkles, which can arise as defects in the manufacturing process, that solves the above problems. We first make the outline of the wrinkles distinct, apply the block matching algorithm used in motion estimation, and then estimate the block locations in the target image corresponding to blocks of the standard image, under the assumption that wrinkles are a kind of textile distortion caused by directional forces. We plot a "wrinkle map" that treats the distances between wrinkles as wrinkle depths. However, because mismatches can occur due to differences in illumination intensity and changes in the tension and direction of the applied force, undesirable patterns also appear in the map; post-processing is needed to filter them out and retain only the wrinkle information. We use the average grey-level intensity of the wrinkle map to recognize wrinkles. Previous research on wrinkles in grey-scale images has not been successful for textiles with colors and patterns, but our method handles such textiles by treating wrinkles as distortion.
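The block-matching step borrowed from motion estimation can be sketched as an exhaustive sum-of-absolute-differences (SAD) search; block size, search radius, and function names below are illustrative, not the paper's:

```python
# For a block of the standard (reference) image, search a small window in
# the target image for the displacement minimizing the SAD cost; the
# displacement magnitudes over all blocks would then populate a wrinkle map.

def sad(ref, tgt, rx, ry, tx, ty, bs):
    return sum(abs(ref[ry + j][rx + i] - tgt[ty + j][tx + i])
               for j in range(bs) for i in range(bs))

def match_block(ref, tgt, rx, ry, bs, search):
    """Return displacement (dx, dy) of the best-matching block in tgt."""
    h, w = len(tgt), len(tgt[0])
    best = (float("inf"), 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            tx, ty = rx + dx, ry + dy
            if 0 <= tx <= w - bs and 0 <= ty <= h - bs:
                cost = sad(ref, tgt, rx, ry, tx, ty, bs)
                if cost < best[0]:
                    best = (cost, dx, dy)
    return best[1], best[2]
```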

Online Human Tracking Based on Convolutional Neural Network and Self Organizing Map for Occupancy Sensors (점유 센서를 위한 합성곱 신경망과 자기 조직화 지도를 활용한 온라인 사람 추적)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.642-655
    • /
    • 2018
  • Occupancy sensors installed in buildings and households turn off the lights when the space is vacant. Currently, PIR (pyroelectric infra-red) motion sensors are widely utilized. Recently, research using camera sensors has been carried out to overcome the drawback of PIR sensors, which cannot detect stationary people. Detecting both moving and stationary people is the main functionality of occupancy sensors. In this paper, we propose an online human occupancy tracking method using a convolutional neural network (CNN) and a self-organizing map. It is well known that a large number of training samples are needed to train a model offline. To solve this problem, we start from an untrained model and update it by collecting training samples online, directly from the test sequences. In experiments on videos captured from an overhead camera, the proposed method effectively tracks humans.
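The online update idea can be illustrated with a minimal self-organizing map whose weights are adapted sample by sample at test time rather than trained offline; the map size, learning rate, and 1-D neighborhood below are illustrative assumptions, not the paper's configuration:

```python
import math
import random

# Minimal online SOM: each incoming feature vector updates the winning unit
# strongly and its grid neighbors more weakly, so the map adapts to the test
# sequence without any offline training phase.

class OnlineSOM:
    def __init__(self, n_units, dim, lr=0.5, sigma=1.0, seed=0):
        rng = random.Random(seed)
        self.units = [[rng.random() for _ in range(dim)]
                      for _ in range(n_units)]
        self.lr, self.sigma = lr, sigma

    def best_unit(self, x):
        def d2(w):
            return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
        return min(range(len(self.units)), key=lambda i: d2(self.units[i]))

    def update(self, x):
        b = self.best_unit(x)
        for i, w in enumerate(self.units):
            # 1-D grid neighborhood: units near the winner move more
            h = math.exp(-((i - b) ** 2) / (2 * self.sigma ** 2))
            for k in range(len(w)):
                w[k] += self.lr * h * (x[k] - w[k])
        return b
```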

An Atmospheric Numerical Simulation for Production of High Resolution Wind Map on Land and an Estimation of Strong Wind on the Ground (고해상도 육상바람지도 구축을 위한 기상장 수치모의 및 지상강풍 추정)

  • Jung, Woo-Sik;Lee, Hwa-Woon;Park, Jong-Kil;Kim, Hyun-Goo;Kim, Dong-Hyuk;Choi, Hyo-Jin;Kim, Min-Jeong
    • Korean Solar Energy Society: Conference Proceedings
    • /
    • 2009.04a
    • /
    • pp.145-149
    • /
    • 2009
  • A high-resolution atmospheric numerical system was set up to simulate the motion of the atmosphere and to produce a wind map on land. The results of several simulations were improved compared with the past system because fine geographical data, such as terrain height and land-use data, and meteorological data assimilation were used. To estimate the surface maximum wind speed when a typhoon is expected to strike the Korean Peninsula, wind information from the upper-level atmosphere was applied. Using 700 hPa data, the wind speed at a height of 300 m was estimated, and the surface wind speed was then estimated by considering the surface roughness length. This study used formulas from other countries to estimate the RMW (radius of maximum wind), but an RMW estimation formula suited to Korea should be developed in the future.
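The abstract's final step, estimating surface wind from a higher-level wind using roughness length, is commonly done with the neutral logarithmic wind profile; the paper's exact formula is not given, so the standard profile below is an illustration, not the authors' method:

```python
import math

# Neutral logarithmic wind profile:
#   u(z) = u_ref * ln(z / z0) / ln(z_ref / z0)
# where z0 is the surface roughness length. Given the 300 m wind estimated
# from 700 hPa data, this extrapolates a near-surface wind speed.

def wind_at_height(u_ref, z_ref, z, z0):
    """Wind speed at height z given a reference speed u_ref at z_ref."""
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)
```

Larger roughness lengths (forests, cities) reduce the extrapolated surface speed relative to smooth terrain, which is why fine land-use data matters for a wind map.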

Video Object Extraction Using Contour Information (윤곽선 정보를 이용한 동영상에서의 객체 추출)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.33-45
    • /
    • 2011
  • In this paper, we present a method for efficiently extracting video objects using a modified graph cut algorithm based on contour information. First, we extract objects in the first frame with an automatic object extraction algorithm or user interaction. To estimate the objects' contours in the current frame, the motion of the objects' contours in the previous frame is analyzed. Block-based histogram back-projection is conducted along the estimated contour points, and color models of the objects and background are generated from the back-projection images. The probabilities of links between neighboring pixels are decided by a logarithm-based distance transform map obtained from the estimated contour image. The energy of the graph is defined by the color models and the logarithmic distance transform map, and the object is finally extracted by minimizing this energy. Experimental results on various test images show that the algorithm works more accurately than other methods.
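The distance-transform-based link probabilities can be sketched as follows: pixels far from the estimated contour get strong neighbor links (they likely share a label), while pixels near it get weak links so the cut prefers to pass there. The BFS distance transform and the logarithmic weight form below are illustrative, not the paper's exact definitions:

```python
import math
from collections import deque

def distance_transform(contour_mask):
    """Manhattan distance from each pixel to the nearest contour pixel (BFS)."""
    h, w = len(contour_mask), len(contour_mask[0])
    dist = [[math.inf] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if contour_mask[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def link_weight(d1, d2):
    """Neighbor-link strength growing with log distance from the contour."""
    return math.log(1 + min(d1, d2))
```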

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential 2D/3D video conversion techniques to objects grouped by depth information. One problem in converting 2D images to 3D images by tracking the motion of pixels is that objects that do not move between adjacent frames yield no depth information. This can be solved by applying the relative height cue only to objects that have no motion information between frames, after splitting the background and objects and extracting depth information from the motion vectors between objects. With this technique, the entire background and all objects obtain their own depth information. The proposed method generates a depth map from which 3D images are rendered using DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also received depth information.
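The relative height cue for static objects can be sketched as a mapping from an object's vertical position in the frame to an 8-bit depth value; the linear form and the 0-255 convention below are illustrative assumptions, not the paper's calibration:

```python
# Relative height cue: objects whose bottom edge sits lower in the frame
# are assumed closer to the camera, so they receive a larger (nearer)
# 8-bit depth value for the DIBR depth map.

def depth_from_height(bottom_y, frame_height, near=255, far=0):
    """Map an object's bottom row (0 = top of frame) to an 8-bit depth:
    lower in the frame -> larger depth value (closer)."""
    t = bottom_y / (frame_height - 1)
    return round(far + t * (near - far))
```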

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.559-571
    • /
    • 2013
  • This paper presents a few problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the remaining video with holes, which are then processed separately. This paper focuses only on detecting and extracting the overlay text graphic region, the first step in the proposed scenario. To decide whether overlay text is present in a frame, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid method based on the color and motion information of the region. The experiments show detection and extraction results on video sequences from several genres.
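The corner density map test can be sketched as follows: given corner points (e.g. from a Harris detector, not reimplemented here), count corners per grid cell; overlay text produces dense corner clusters. The cell size and threshold are illustrative, not the paper's values:

```python
# Build a per-cell corner count from detected corner coordinates and flag
# a frame as containing overlay text if any cell is sufficiently dense.

def corner_density_map(corners, width, height, cell=8):
    cols = (width + cell - 1) // cell
    rows = (height + cell - 1) // cell
    density = [[0] * cols for _ in range(rows)]
    for x, y in corners:
        density[y // cell][x // cell] += 1
    return density

def has_overlay_text(density, threshold=4):
    return any(c >= threshold for row in density for c in row)
```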

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.2C
    • /
    • pp.16-24
    • /
    • 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometric transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue through different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, initially detected from the camera input using statistical characteristics of face colors and deformable templates, is tracked. As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
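The synthesis step can be illustrated with a translation-only warp of one image row: each pixel shifts horizontally in proportion to its 8-bit depth value and the tracked viewer offset. This is a simplified sketch (the paper also applies rotation), and the disparity scaling is an assumption:

```python
# Shift each pixel of a texture row by a disparity derived from its depth
# and the viewer's lateral offset; nearer pixels (larger depth value) move
# more, producing motion parallax. Unfilled positions become holes (None).

def warp_row(texture_row, depth_row, viewer_offset, max_disparity=8):
    w = len(texture_row)
    out = [None] * w
    for x in range(w):
        d = round(viewer_offset * max_disparity * depth_row[x] / 255)
        nx = x + d
        if 0 <= nx < w:
            out[nx] = texture_row[x]
    return out
```

The `None` holes left behind nearer pixels are exactly the disocclusions that DIBR-style systems must inpaint after warping.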

Dosimetric Analysis of Respiratory-Gated RapidArc with Varying Gating Window Times (호흡연동 래피드아크 치료 시 빔 조사 구간 설정에 따른 선량 변화 분석)

  • Yoon, Mee Sun;Kim, Yong-Hyeob;Jeong, Jae-Uk;Nam, Taek-Keun;Ahn, Sung-Ja;Chung, Woong-Ki;Song, Ju-Young
    • Progress in Medical Physics
    • /
    • v.26 no.2
    • /
    • pp.87-92
    • /
    • 2015
  • Gated RapidArc may produce dosimetric errors due to the stop-and-go motion of the heavy gantry, which can misalign the gantry restart position and reduce the accuracy of important RapidArc delivery factors such as MLC movement and gantry speed. In this study, the effect of stop-and-go motion in gated RapidArc was analyzed for varying gating window times, which determine the total number of stop-and-go motions. A total of 10 RapidArc plans for the treatment of liver cancer were prepared. The RPM gating system and a moving phantom were used to set up accurate gating window times. Two delivery quality assurance (DQA) plans were created for each RapidArc plan: a portal dosimetry plan and a MapCHECK2 plan. The respiratory cycle was set to 4 s, and the DQA plans were delivered under three gating conditions: no gating, a 1-s gating window, and a 2-s gating window. The error between the calculated and measured dose was evaluated by the pass rate of the gamma evaluation method with 3%/3 mm criteria. The average pass rates of the portal dosimetry plans were 98.72±0.82%, 94.91±1.64%, and 98.23±0.97% for no gating, 1-s gating, and 2-s gating, respectively. The average pass rates of the MapCHECK2 plans were 97.80±0.91%, 95.38±1.31%, and 97.50±0.96%, respectively. We verified that the dosimetric accuracy of gated RapidArc increases as the gating window time increases, and that efforts should be made to increase the gating window time during RapidArc treatment.
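The gamma evaluation behind these pass rates can be sketched in one dimension: for each measured point, search nearby calculated points for the minimum gamma under 3%/3 mm criteria, and a point passes if gamma ≤ 1. Clinical tools evaluate 2-D planes; the 1-D version below is a simplified illustration:

```python
import math

# 1-D gamma index with dose-difference and distance-to-agreement terms,
# both normalized by the 3% / 3 mm criteria; pass rate is the fraction of
# points with gamma <= 1.

def gamma_1d(measured, calculated, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Per-point gamma values for two equally spaced dose profiles."""
    ref_max = max(calculated)
    gammas = []
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dc in enumerate(calculated):
            dd = (dm - dc) / (dose_tol * ref_max)    # dose difference term
            dr = (i - j) * spacing_mm / dist_tol_mm  # distance term
            best = min(best, math.hypot(dd, dr))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```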