• Title/Summary/Keyword: Key-motion

Search Results: 569

Stereoscopic Video Conversion Based on Image Motion Classification and Key-Motion Detection from a Two-Dimensional Image Sequence (영상 운동 분류와 키 운동 검출에 기반한 2차원 동영상의 입체 변환)

  • Lee, Kwan-Wook;Kim, Je-Dong;Kim, Man-Bae
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1086-1092 / 2009
  • Stereoscopic conversion has been an important and challenging issue for many 3-D video applications. There are generally two stereoscopic conversion approaches: image motion-based conversion, which uses motion information, and object-based conversion, which partitions an image into moving or static foreground object(s) and background and then converts the foreground into a stereoscopic object. Moreover, when the input sequence is MPEG-1/2 compressed video, the motion data stored in the compressed bitstream are often unreliable, so image motion-based conversion may fail. To solve this problem, we present the utilization of a key-motion, for which the estimated or extracted motion information is more accurate. To deal with diverse motion types, a transform space produced from motion vectors and color differences is introduced. A key-motion is determined from the transform space, and its associated stereoscopic image is generated. Experimental results validate the effectiveness and robustness of the proposed method.
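
The abstract's key-motion idea (prefer frames whose motion data are reliable) can be sketched minimally. This is not the paper's transform space; it is a hedged stand-in that scores each frame by how well its motion magnitude agrees with the observed color change, with all names invented for illustration:

```python
import numpy as np

def detect_key_motion(motion_mags, color_diffs):
    """Pick a 'key-motion' frame: the one whose block motion best
    agrees with the observed color change, used here as a crude
    proxy for motion-vector reliability in compressed bitstreams."""
    motion_mags = np.asarray(motion_mags, dtype=float)
    color_diffs = np.asarray(color_diffs, dtype=float)
    # Normalize both cues to [0, 1] so they are comparable.
    m = motion_mags / (motion_mags.max() + 1e-9)
    c = color_diffs / (color_diffs.max() + 1e-9)
    # A reliable motion frame has large motion well supported
    # by color change: score the product of the two cues.
    scores = m * c
    return int(np.argmax(scores))
```

In the paper the decision is made in a joint transform space of motion vectors and color differences; the product score above is only the simplest combination of the same two cues.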

Joint Overlapped Block Motion Compensation Using Eight-Neighbor Block Motion Vectors for Frame Rate Up-Conversion

  • Li, Ran;Wu, Minghu;Gan, Zongliang;Cui, Ziguan;Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.10 / pp.2448-2463 / 2013
  • Traditional block-based motion compensation methods in frame rate up-conversion (FRUC) use only a single motion vector field. However, some mistakes always remain in the motion vector field, whether or not advanced motion estimation (ME) and motion vector analysis (MA) algorithms are applied. Once the motion vector field contains many mistakes, the quality of the interpolated frame is severely affected. To solve this problem, this paper proposes a novel joint overlapped block motion compensation method (8J-OBMC), which adopts the motion vectors of the interpolated block and its 8-neighbor blocks to jointly interpolate the target block. Since the smoothness of the motion field makes the motion vectors of the 8-neighbor blocks quite close to the true motion vector of the interpolated block, the proposed compensation algorithm has better fault tolerance than traditional ones. Besides, the annoying blocking artifacts can also be effectively suppressed by using overlapped blocks. Experimental results show that, compared with existing popular compensation methods, the proposed method is not only robust to wrongly estimated motion vectors but also reduces blocking artifacts.
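
The joint compensation described above can be sketched as a weighted average of nine motion-compensated predictions (the block's own vector plus its 8 neighbors' vectors). This is a simplified single-block illustration, not the authors' 8J-OBMC implementation; the weight choice and boundary clipping are assumptions:

```python
import numpy as np

def joint_obmc(prev, by, bx, bs, mvs, weights=None):
    """Interpolate one bs x bs block at (by, bx) as a weighted
    average of predictions taken with 9 candidate motion vectors
    (the block's own MV first, then its 8 neighbors' MVs)."""
    if weights is None:
        # Assumption: the block's own vector gets more weight.
        weights = np.array([4.0] + [1.0] * 8)
    weights = weights / weights.sum()
    h, w = prev.shape
    out = np.zeros((bs, bs))
    for wgt, (dy, dx) in zip(weights, mvs):
        # Clip so the compensated block stays inside the frame.
        y0 = int(np.clip(by + dy, 0, h - bs))
        x0 = int(np.clip(bx + dx, 0, w - bs))
        out += wgt * prev[y0:y0 + bs, x0:x0 + bs]
    return out
```

Because the weights sum to one, agreeing candidates reproduce the true block, while a single wrong vector is averaged down instead of corrupting the whole block.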

Prediction of dryout-type CHF for rod bundle in natural circulation loop under motion condition

  • Huang, Siyang;Tian, Wenxi;Wang, Xiaoyang;Chen, Ronghua;Yue, Nina;Xi, Mengmeng;Su, G.H.;Qiu, Suizheng
    • Nuclear Engineering and Technology / v.52 no.4 / pp.721-733 / 2020
  • In nuclear engineering, the occurrence of critical heat flux (CHF) is complicated for rod bundles, and it is much more difficult to predict the CHF in natural circulation under motion conditions. In this paper, the dryout-type CHF is investigated for a rod bundle in a natural circulation loop under rolling motion conditions, based on the coupled analysis of a subchannel method, a one-dimensional system analysis method, and a CHF mechanism model, namely the three-fluid model for annular flow. To consider the rolling effect of the natural circulation loop, the subchannel model is connected to the one-dimensional system code at the inlet and outlet of the rod bundle. The subchannel analysis provides the local thermal-hydraulic parameters as input for the CHF mechanism model to calculate the occurrence of CHF. The rolling motion is modeled by additional motion forces in the momentum equation. First, the calculation methods for natural circulation and CHF are validated against published natural circulation experimental data and a CHF empirical correlation, respectively. Then, the CHF of the rod bundle in a natural circulation loop under both stationary and rolling motion conditions is predicted and analyzed. According to the calculation results, the CHF under the stationary condition is smaller than that under the rolling motion condition. Besides, the CHF decreases with increasing rolling period and angular acceleration amplitude within the range of inlet subcooling and mass flux adopted in the current research. This paper provides useful information for the prediction of CHF in natural circulation under motion conditions, which is important for nuclear reactor design improvement and safety analysis.
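
The "additional motion forces" for harmonic rolling can be illustrated with standard rigid-body kinematics. This is a generic sketch, not the paper's force model: it assumes rolling of the form θ(t) = θₘ·sin(2πt/T) and returns the tangential (Euler) and centripetal acceleration components at radius r from the rolling axis:

```python
import numpy as np

def rolling_acceleration(t, theta_m, period, r):
    """Additional acceleration felt by coolant at radius r from the
    rolling axis, for harmonic rolling theta(t) = theta_m*sin(2*pi*t/T).
    Returns (tangential, centripetal) components in m/s^2."""
    w0 = 2.0 * np.pi / period
    omega = theta_m * w0 * np.cos(w0 * t)       # angular velocity
    alpha = -theta_m * w0 ** 2 * np.sin(w0 * t) # angular acceleration
    a_tan = alpha * r                           # tangential (Euler) term
    a_cen = omega ** 2 * r                      # centripetal term
    return a_tan, a_cen
```

In a system code these components would enter the one-dimensional momentum equation as body-force source terms projected onto the flow direction; the projection and Coriolis term are omitted here for brevity.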

A study on motion capture animation process : Focusing on short animation film 'Drip' (모션 캡처 애니메이션 프로세스 연구 : 단편 애니메이션 'Drip'을 중심으로)

  • Kim, Jisoo
    • Journal of Korea Game Society / v.16 no.4 / pp.97-104 / 2016
  • This study presents a technique for producing animation by blending key-frame animation and motion capture animation, demonstrated through the short animation 'Drip.' The approach reduced production time by enabling efficient process management through organic interconnection of the two techniques and by having each compensate for the weaknesses of the other. In doing so, it aims to support efficient animation production by overcoming the limitations of key-frame and motion capture animation and by understanding and applying a combined process.

Stereoscopic Conversion based on Key Frames (키 프레임 기반 스테레오스코픽 변환 방법)

  • Kim, Man-Bae;Park, Sang-Hoon
    • Journal of Broadcast Engineering / v.7 no.3 / pp.219-228 / 2002
  • In this paper, we propose a new method of converting 2D video into 3D stereoscopic video, called stereoscopic conversion. In general, stereoscopic images are produced using motion information. However, unreliable motion information, obtained especially from block-based motion estimation, causes the wrong generation of stereoscopic images. To solve this problem, we propose a stereoscopic conversion method based upon the utilization of key frames, for which the estimated motion information is more accurate. In addition, a generation scheme for stereoscopic images associated with the motion type of each key frame is proposed. For the performance evaluation of our proposed method, we apply it to five test sequences and measure the accuracy of the key frame-based stereoscopic conversion. Experimental results show that our proposed method achieves an accuracy of more than about 90 percent in terms of the key frame detection ratio.
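
The final step of such a conversion, synthesizing the second eye's view once a per-pixel disparity has been derived from motion, can be sketched with a simple forward warp. This is a generic illustration under the assumption of integer horizontal disparities, not the paper's generation scheme (which depends on the motion type of each key frame):

```python
import numpy as np

def synthesize_right_view(left, disparity):
    """Generate a right-eye view by shifting each pixel of the
    left image horizontally by its integer disparity (pixels with
    no source remain 0; hole filling is omitted for brevity)."""
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(disparity[y, x])
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

In motion-based conversion the disparity map would typically be proportional to the horizontal motion magnitude of each block; here it is simply taken as given.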

Motion planning of a steam generator mobile tube-inspection robot

  • Xu, Biying;Li, Ge;Zhang, Kuan;Cai, Hegao;Zhao, Jie;Fan, Jizhuang
    • Nuclear Engineering and Technology / v.54 no.4 / pp.1374-1381 / 2022
  • Under the influence of nuclear radiation, the reliability of steam generators (SGs) is an important factor in the efficiency and safety of nuclear power plant (NPP) reactors. Motion planning that remotely manipulates an SG mobile tube-inspection robot to inspect SG heat transfer tubes is the mainstream trend of NPP robot development. To achieve motion planning, conditional traversal is usually used for base position optimization, and the A* algorithm is then used for path planning. However, this approach requires considerable processing time, expands nodes in only a single way during path planning, and produces paths with many turns, which decreases the working speed of the robot. Therefore, to reduce the calculation time and improve the efficiency of motion planning, modifications such as the matrix method, an improved parent node, a turning cost, and improved node expansion are proposed in this study. We also present a comprehensive evaluation index to assess the performance of the improved algorithm. We validated the efficiency of the proposed method by planning on a tube sheet with square-type tube arrays and by experiments on a model SG.
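
The turning-cost idea can be sketched as grid A* whose search state includes the incoming direction, so each change of direction adds a penalty and straighter paths win. This is a minimal illustration of the general technique, not the authors' improved algorithm (their matrix method and node-expansion changes are not reproduced):

```python
import heapq
import itertools

def astar_turn_cost(grid, start, goal, turn_penalty=1.0):
    """Grid A* that adds a penalty each time the path changes
    direction, favoring straight runs (fewer robot turns).
    grid: 2D list, 0 = free, 1 = blocked; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                    # heap tiebreaker
    open_q = [(h(start), next(tie), 0.0, start, None)]
    best = {}                                  # (pos, incoming dir) -> g
    while open_q:
        _, _, g, pos, d = heapq.heappop(open_q)
        if pos == goal:
            return g
        if best.get((pos, d), float("inf")) <= g:
            continue
        best[(pos, d)] = g
        for nd in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx = pos[0] + nd[0], pos[1] + nd[1]
            if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == 0:
                ng = g + 1 + (turn_penalty if d is not None and nd != d else 0.0)
                heapq.heappush(open_q, (ng + h((ny, nx)), next(tie), ng, (ny, nx), nd))
    return None
```

Because turns only ever add cost, the Manhattan heuristic remains admissible, so the returned cost is optimal for the penalized objective.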

Unsupervised Motion Pattern Mining for Crowded Scenes Analysis

  • Wang, Chongjing;Zhao, Xu;Zou, Yi;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.12 / pp.3315-3337 / 2012
  • Crowded scene analysis is a challenging topic in the field of computer vision. How to detect diverse motion patterns in crowded scenarios from videos is the critical yet hard part of this problem. In this paper, we propose a novel approach to mining motion patterns by simultaneously utilizing motion information over both long-term periods and short intervals. To capture long-term motion effectively, we introduce the Motion History Image (MHI) representation to obtain a global perspective on the crowd motion. The combination of MHI and optical flow, which provides instantaneous motion information, gives rise to discriminative spatio-temporal motion features. Benefiting from the robustness and efficiency of this novel motion representation, the subsequent motion pattern mining is implemented in a completely unsupervised way. The motion vectors are clustered hierarchically through an automatic hierarchical clustering algorithm built on a graphical model. This method overcomes the instability of optical flow in dealing with temporal continuity in crowded scenes. The clustering results reveal the distribution of motion patterns in the crowded videos. To validate the performance of the proposed approach, we conduct experimental evaluations on challenging videos including vehicles and pedestrians. The reliable detection results demonstrate the effectiveness of our approach.
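
The MHI representation mentioned above follows a standard update rule (pixels moving now take the newest timestamp; stationary pixels decay), which can be sketched directly. The decay constants below are arbitrary illustration values, not taken from the paper:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    """One Motion History Image update step: pixels moving in the
    current frame are set to the maximum value tau; stationary
    pixels decay by delta, so older motion fades gradually."""
    out = np.where(motion_mask, tau, np.maximum(mhi.astype(int) - delta, 0))
    return out.astype(np.uint8)
```

Repeating this per frame leaves bright values where motion is recent and a fading trail along its history, which is what gives the long-term "global perspective" that instantaneous optical flow lacks.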

Digital Character Motion Using Motion Capturing System (광학식 모션 캡쳐(Optical Motion Capture)방식을 이용한 디지털 캐릭터 움직임)

  • Choi, Tae-Jun;Ryu, Seuc-Ho;Lee, Dong-Lyeor;Lee, Wan-Bok
    • The Journal of the Korea Contents Association / v.7 no.8 / pp.109-116 / 2007
  • Motion capture is utilized in various multimedia content fields such as games, movies, and TV, and is used in most game production. Compared with the previous key-framing approach, motion capture not only yields more realistic and dynamic on-screen motion but also offers advantages in time, cost, and overall quality. However, it is used mainly by a few specialized companies, published use cases are scarce, and adoption in academia is still lacking. In this paper, we extract motion data for various actions using optical motion capture equipment and investigate the problems that arise when the captured data are applied to characters whose proportions differ from those of a human.

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.938-958 / 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, as no annotation of the target object in the testing video is available. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which can take full advantage of motion cues and effectively integrate features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams enhance each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical flow images and produces a fine-grained and complete distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), which are designed to integrate features at different levels effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable with some state-of-the-art methods.
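
The per-stage fusion of the motion saliency map into the appearance decoder can be illustrated with a simple residual modulation. The exact fusion operator in DC-Net is not specified in the abstract, so this is a hedged stand-in with an invented function name, showing only the general pattern of saliency-guided feature weighting:

```python
import numpy as np

def fuse_stage(feat, saliency):
    """Fuse a motion saliency map (H, W), in [0, 1], into one
    decoder stage's appearance features (C, H, W) by residual
    modulation: salient regions are amplified, non-salient
    regions pass through unchanged."""
    assert feat.shape[1:] == saliency.shape
    return feat * (1.0 + saliency[None, :, :])
```

The residual form (1 + saliency) rather than plain multiplication keeps appearance information intact where the motion stream is uncertain, which matches the co-enhancement spirit of the two streams.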

An fMRI Study on the Differences in the Brain Regions Activated by an Identical Audio-Visual Clip Using Major and Minor Key Arrangements (동일한 영상자극을 이용한 장조음악과 단조음악에 의해 유발된 뇌 활성화의 차이 : fMRI 연구)

  • Lee, Chang-Kyu;Eum, Young-Ji;Kim, Yeon-Kyu;Watanuki, Shigeki;Sohn, Jin-Hun
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.109-112 / 2009
  • The purpose of this study was to examine the differences in brain activation evoked by music arranged in major and minor keys, presented with an identical motion film during fMRI testing. A part of the audio-visual combinations composed by Iwamiya and Sano was used as the study stimuli. This audio-visual clip was originally developed by combining a short motion segment from the animation "The Snowman" with music arranged in both major and minor keys from the original jazz piece "Avalon," rewritten in a classical style. Twenty-seven Japanese male graduate and undergraduate students participated in the study. Brain regions more activated by the major key than the minor key when presented with the identical motion film were the left cerebellum, the right fusiform gyrus, the right superior occipital region, the left superior orbito-frontal region, the right pallidum, the left precuneus, and the bilateral thalamus. On the other hand, brain regions more activated by the minor key than the major key were the right medial frontal region, the left inferior orbito-frontal region, the bilateral superior parietal region, the left postcentral gyrus, and the right precuneus. The study showed a difference in the brain regions activated by the two stimuli (i.e., major key and minor key) while controlling for the visual aspect of the experiment. These findings imply that the brain systematically processes music written in major and minor keys differently. (Supported by the User Science Institute of Kyushu University, Japan, and the Korea Science and Engineering Foundation.)
