• Title/Summary/Keyword: Frame camera


The Kinematical Analysis of Straddle Jump to Push up Motion on Sports Aerobics (스포츠 에어로빅스 Straddle Jump to Push up 동작의 운동학적 분석)

  • Kim, Cha-Nam
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.2
    • /
    • pp.77-90
    • /
    • 2002
  • This study aims to clarify correct jump and landing technique through a kinematic analysis of the straddle jump to push-up motion, performed by four elite sports aerobics athletes each with more than four years of experience, and thereby to provide coaches with basic data for the diagnosis and evaluation of the motion. The straddle jump to push-up motion was recorded with two video cameras operating at 60 frames/sec. The kinematic variables analyzed were the displacement of the COG, the left/right shoulder-joint angular displacements, the left/right knee-joint angular displacements, and the left/right toe-tip speeds. Each subject performed the motion three times, and the following conclusions were obtained. 1. The COG was low in phases 1, 4, and 5 (79.05±9.07, 46.41±3.65, 18.66±0.54 cm) and high in phases 2 and 3 (120.80±6.13, 148.12±9.19 cm). 2. The left and right shoulder joints were flexed in phase 1 (91.07±8.30, 90.77±5.72 deg/sec) and reached maximal extension angles in phase 2 (102.48±10.00, 102.39±10.51 deg/sec); in phase 3 the left shoulder-joint angle (94.43±4.12 deg/sec) was flexed as in phase 1 while the right (88.38±4.98 deg/sec) was slightly lower than in phase 1, and both were lowest in phase 4 (70.58±13.72, 54.24±11.58 deg/sec). 3. The left and right hip joints showed maximal extension in phases 2 and 3 (160.35±22.68, 1534.77±5.40 deg/sec; 150.04±12.79, 145.54±13.00 deg/sec), while the ankle joints showed minimal angles in phases 1 and 4 (93.59±18.92, 85.37±13.23 deg/sec; 66.60±15.77, 80.60±16.57 deg/sec). 4. The left and right hip joints showed maximal extension in phase 2 (157.15±9.13, 163.52±8.18 deg/sec), minimal angles in phase 3 (110.87±13.81, 77.53±8.95 deg/sec), and similarly low angles in phases 1 and 4 (91.04±2.31, 96.26±2.20 deg/sec). 5. The left and right knee joints showed maximal extension in phases 1, 3, and 4 (173.46±2.95, 171.51±5.44 deg/sec; 172.24±4.49, 171.26±0.65 deg/sec; 162.78±2.13, 164.10±5.97 deg/sec), with flexion of the left knee joint only in phase 2 (164.45±7.51, 159.38±3.48 deg/sec). 6. The left and right toe-tip speeds were fastest during the leg-lift of the jump in phases 1 and 2 (321.32±67.91, 316.90±41.97 cm/sec; 410.06±153.06, 399.77±189.34 cm/sec), slower in phase 3 (169.74±67.17, 150.00±63.80 cm/sec), and slowest in phase 4 (87.22±34.90, 85.72±52.23 cm/sec).
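Joint angles like those reported above are typically computed from digitized marker coordinates; a minimal sketch of the standard three-point angle calculation (the coordinates and marker roles are illustrative, not the study's data):

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by markers a-b-c,
    e.g. hip-knee-ankle digitized from a 60 frame/sec video."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# A fully extended knee (hip, knee, ankle collinear) gives ~180 deg.
print(joint_angle((0, 2), (0, 1), (0, 0)))  # 180.0
```

Tracking such angles across the motion's phases yields the flexion/extension profiles summarized in the conclusions.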

Effects of Use of the Iodine Contrast Medium on Gamma Camera Imaging (요오드 조영제 사용이 감마카메라 영상에 미치는 영향)

  • Pyo, Sung-Jae;Cho, Yun-Ho;Choi, Jae-Ho
    • Journal of radiological science and technology
    • /
    • v.39 no.4
    • /
    • pp.557-564
    • /
    • 2016
  • This study examines the effect of the iodine contrast media in primary clinical use, and of their density, on gamma camera counting rates, as well as the change in counting rates as a function of contrast-medium status when CT-based attenuation correction is applied in SPECT/CT. The experimental materials were 370 MBq of $^{99m}TcO_4$ and the iodine contrast media Pamiray 370 mg, Iomeron 350 mg, Visipaque 320 mg, and Bonorex 300 mg. For image acquisition, planar images were acquired consecutively at 1, 2, 3, 4, and 5 min, starting 30 min after administration of $^{99m}TcO_4$, while 60 views were acquired over 20 min, starting at 55 min, for SPECT/CT imaging. In planar imaging, the gamma ray counting rates as a function of acquisition time were significantly lower when $^{99m}TcO_4$ was mixed with contrast medium (varying with its type and density) than when it was mixed with saline solution alone. In tomography of $^{99m}TcO_4$ mixed with saline solution, the mean counting rate without CT attenuation correction was 182±26 counts, versus 531.3±34 counts with correction. In tomography of $^{99m}TcO_4$ and saline mixed with contrast medium, the mean values before attenuation correction were 166±29, 158.3±17, 154±36, and 150±33 counts depending on the density of the contrast medium, and 515±03, 503±10, 496±31, and 488.7±33 counts after correction; both cases differed significantly from the images without contrast medium. Iodine contrast medium therefore affects the gamma ray counting rate, and on the day of examination, gamma camera imaging should be performed before any test that uses iodine contrast medium.

A Study of Experimental Image Direction for Short Animation Movies -focusing on the short films <Tango> and <Fast Film> (단편애니메이션의 실험적 영상연출 연구 -<탱고>와 <페스트 필름>을 중심으로)

  • Choi, Don-Ill
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.375-391
    • /
    • 2014
  • An animation movie is a non-photorealistic animated art form built from a formative language: frames composed from a story, and the cuts that connect those frames. In expressing an image, therefore, artistic expression methods and devices for the formative space must be provided within each frame, while the cuts must carry the images between frames faithfully. A short animation movie is produced through various image experiments, expressing the subjective discourse of its maker through unique imagery rather than narration, so a distinctive image style and varied image direction are key factors. This study compares the experimental image direction of <Tango> and <Fast Film>, both of which use film manipulation as a production method. First, while <Tango> uses pixilation, producing its images from live-action footage through painting and repeated optical exposure on a cel mat, <Fast Film> was made with diverse collage techniques such as tearing, cutting, pasting, and folding hundreds of scenes from action movies. Second, <Tango> expresses the non-causal relationships of its characters through their repetitive behaviors and a circular image structure seen from a fixed camera angle, resisting typical scene transitions, whereas <Fast Film> has an advancing structure that develops an antagonistic relationship between characters through diverse camera angles and distinctive scene transitions. Third, in terms of editing, <Tango> uses a long-take, single-cut technique in which the whole film consists of one shot, though the appearance of many characters makes it seem like many scenes, while <Fast Film> maximizes visual fun and engagement by reconstructing its imagery from hundreds of short cuts. Both works thus share the character of experimental pieces that expand animated image expression through film manipulation, in contrast to general animation production. On top of that, <Tango> conveys the routine life of diverse human beings, without explicit narration, through images of conceptualized spaces, while <Fast Film> expresses it in a new image space through collage-based image reconstruction and speedy progression, set within a binary opposition structure.

A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong Dong-Gil;Kang Dong-Goo;Yang Yu Kyung;Ra Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.1-8
    • /
    • 2005
  • In this paper, we propose a two-stage head tracking algorithm suitable for a real-time active camera system with pan-tilt-zoom functions. In the color convergence stage, we first assume that the shape of a head is an ellipse and that its model color histogram has been acquired in advance. The mean-shift method is then applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect the temporal change of object color and enhance the reliability of mean-shift based tracking, the target histogram obtained in the previous frame is used to update the model histogram. In this updating process, to alleviate error accumulation due to outliers in the previous frame's target ellipse, the previous target histogram is computed within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to further enhance tracking reliability, we set the initial position closer to the true position by compensating the global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information, defining a robust shape-similarity function based on the gradient direction. Extensive experimental results show that the proposed algorithm tracks the head well even when the person moves fast, the head size changes drastically, or the background contains heavy clutter and distracting colors. The proposed algorithm runs at a processing speed of about 30 fps on a standard PC.
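The histogram-similarity and model-update steps described above can be sketched in a few lines; the Bhattacharyya coefficient is a common choice for comparing color histograms in mean-shift trackers, and the blend rate `alpha` here is an assumed value, not one from the paper:

```python
import math

def bhattacharyya(p, q):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def update_model(model, target, alpha=0.1):
    """Blend the previous frame's target histogram into the model
    so the tracker follows gradual changes of object color."""
    return [(1 - alpha) * m + alpha * t for m, t in zip(model, target)]

model  = [0.5, 0.3, 0.2]          # model color histogram (normalized)
target = [0.4, 0.4, 0.2]          # candidate/target histogram
print(round(bhattacharyya(model, target), 4))   # 0.9936
print(update_model(model, target))
```

A mean-shift iteration would move the candidate ellipse toward the position maximizing this similarity before the shape-based refinement stage.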

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary; flow is a critical factor in satisfaction, so it should be induced in the audience and measured. To evaluate the audience's interest in and emotional response to content, producers and investors need an index for measuring flow. However, it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average values of each person's reactions: the flow, or "good feeling," of each audience member was extracted from the face, especially changes of expression, and from body movement. But handling the large amount of real-time data from the individual sensor signals was difficult, and the experimental setup was economically and environmentally impractical, because every participant needed a personal sensor to record physical signals and a camera positioned in front of the face to capture expressions. A simpler system for analyzing group flow is therefore needed. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system: a differential image is obtained from the camera by subtracting the previous frame from the present frame, yielding the movement variation of the audience's reaction, and the GEA program implements the flow judgment model. After measuring the audience's reactions, synchronization is divided into Dynamic State Synchronization and Static State Synchronization. Dynamic State Synchronization accompanies active audience reactions, while Static State Synchronization corresponds to stillness of the audience. Dynamic State Synchronization can be caused by startled reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by touching or sad scenes. We therefore showed the audience several short movies containing such scenes, which made them sad, made them clap, gave them chills, and so on. To classify the movement of the audience, we defined two critical points, α and β: Dynamic State Synchronization is indicated when the movement value is over the critical point β, while Static State Synchronization is indicated below the critical point α. β was derived from the clapping movement of 10 teams rather than from the average amount of movement. After measuring the audience's reactive movement, the ratio (%) of "people reacting" to "total people" was calculated. A total of 37 teams took part in the experiments at the 2012 Seoul DMC Culture Open. First, the staff led the audience in clapping; second, a basic scene was shown to neutralize the audience's emotions; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. On the amusing scene the audience clapped and laughed; on the creepy scene they shook their heads or hid by closing their eyes; the sad or touching scene made them silent. If the ratio exceeded about 80%, the group was judged to be synchronized and in flow. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor could be obtained through synchronization in much bigger groups, which should be useful for planning content.
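The differential-image measurement and the two critical points α and β described above can be sketched as follows (the frames are tiny illustrative grayscale arrays, and the `alpha`/`beta` values are placeholders, not the thresholds used in the study):

```python
def movement(prev, curr, noise=5):
    """Mean absolute pixel difference between consecutive grayscale
    frames; differences below `noise` are treated as sensor noise."""
    diffs = (abs(c - p) for p, c in zip(prev, curr))
    return sum(d for d in diffs if d >= noise) / len(prev)

def classify(value, alpha=2.0, beta=20.0):
    """Dynamic sync above beta (active reaction, e.g. clapping),
    static sync below alpha (stillness, e.g. a touching scene)."""
    if value >= beta:
        return "dynamic"
    if value <= alpha:
        return "static"
    return "neutral"

prev = [10, 10, 10, 10]           # previous frame (flattened pixels)
curr = [10, 60, 10, 10]           # one pixel changed by 50
m = movement(prev, curr)          # 50 / 4 = 12.5
print(m, classify(m))             # 12.5 neutral
```

The group-level judgment then reduces to counting how many audience members fall into the same class at the same moment.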

A Stereo Video Avatar for Supporting Visual Communication in a $CAVE^{TM}$-like System ($CAVE^{TM}$-like 시스템에서 시각 커뮤니케이션 지원을 위한 스테레오 비디오 아바타)

  • Rhee Seon-Min;Park Ji-Young;Kim Myoung-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.6
    • /
    • pp.354-362
    • /
    • 2006
  • This paper proposes a method for generating a high-quality stereo video avatar to support visual communication in a CAVE$^{TM}$-like system. In such a system, the light projected onto the screens around the user changes frequently, so it is not easy to extract the user silhouette robustly, which is an essential step in generating a video avatar. In this study, we use an infrared reflective image acquired by a grayscale camera with a longpass filter, so that changes of visible light on the screens are blocked and a robust user silhouette can be extracted. In addition, using two color cameras separated by the binocular disparity of human eyes, we acquire two stereo images of the user for fast generation and stereoscopic display of a high-quality video avatar without 3D reconstruction. We also propose an algorithm for fitting the silhouette mask obtained from the infrared reflective image onto the acquired color images to remove the background. The generated stereo images of the video avatar are texture-mapped onto a plane in the virtual world and can be displayed stereoscopically using the frame-sequential stereo method. The proposed method generates a high-quality video avatar faster than 3D approaches and gives the user a stereoscopic impression that 2D-based approaches cannot provide.
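The silhouette-extraction step can be illustrated with simple thresholding of the infrared reflective image; the threshold value and the tiny frames are illustrative assumptions, not the paper's parameters:

```python
def silhouette_mask(ir_frame, threshold=128):
    """Binary user mask from an infrared reflective image; because the
    longpass filter blocks visible light, the mask is unaffected by
    the changing imagery projected on the surrounding screens."""
    return [[1 if px >= threshold else 0 for px in row] for row in ir_frame]

def remove_background(color_frame, mask, bg=0):
    """Apply the (already registered) silhouette mask to a color frame."""
    return [[px if m else bg for px, m in zip(row, mrow)]
            for row, mrow in zip(color_frame, mask)]

ir    = [[200, 30], [180, 10]]    # IR-bright pixels belong to the user
color = [[7, 8], [9, 4]]
mask = silhouette_mask(ir)
print(mask)                           # [[1, 0], [1, 0]]
print(remove_background(color, mask)) # [[7, 0], [9, 0]]
```

In the actual system the mask must additionally be fitted onto each color camera's viewpoint, which is the alignment step the paper's fitting algorithm addresses.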

Regional Projection Histogram Matching and Linear Regression based Video Stabilization for a Moving Vehicle (영역별 수직 투영 히스토그램 매칭 및 선형 회귀모델 기반의 차량 운행 영상의 안정화 기술 개발)

  • Heo, Yu-Jung;Choi, Min-Kook;Lee, Hyun-Gyu;Lee, Sang-Chul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.798-809
    • /
    • 2014
  • Video stabilization removes unexpected shaky and irregular motion from a video and is often used as preprocessing for robust feature tracking and matching. Typical video stabilization algorithms are designed to compensate motion in surveillance video or outdoor recordings captured by a hand-held camera. However, since vehicle video contains rapid changes of motion and local features, typical video stabilization algorithms cannot be applied to it as-is. In this paper, we propose a novel approach that compensates shaky and irregular motion in vehicle video using a linear regression model and vertical projection histogram matching. Toward this goal, we perform vertical projection histogram matching on each sub-region of an input frame and then build a linear regression model from the estimated regional vertical movement vectors to extract vertical translation and rotation parameters. Multiple binarization with sub-region analysis for building the linear regression model is effective in typical recording environments where rapid changes of motion and local features occur. We demonstrated the effectiveness of our approach on blackbox videos and showed that the linear regression model achieves robust estimation of the motion parameters and produces stabilized video in a fully automatic manner.
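The per-sub-region projection matching and the regression step can be sketched as follows; the frames and shift search range are illustrative, and the regression simply fits the regional shifts against sub-region position (slope standing in for rotation, intercept for translation), which is an assumed simplification of the paper's model:

```python
def row_projection(frame):
    """Row-wise intensity sums: a vertical shift of the frame shifts
    this 1-D histogram by the same number of rows."""
    return [sum(row) for row in frame]

def best_shift(ref, cur, max_shift=2):
    """Shift (in rows) minimizing the mean absolute difference between
    projection histograms; out-of-range bins count as zero."""
    best, best_err = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        err = sum(abs(ref[i] - (cur[i + s] if 0 <= i + s < n else 0))
                  for i in range(n)) / n
        if err < best_err:
            best, best_err = s, err
    return best

def fit_line(xs, ys):
    """Least-squares slope and intercept over regional shift estimates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

ref = [[0, 0], [9, 9], [0, 0], [0, 0]]   # bright band in row 1
cur = [[0, 0], [0, 0], [9, 9], [0, 0]]   # same band, one row lower
print(best_shift(row_projection(ref), row_projection(cur)))  # 1
print(fit_line([0, 1, 2], [1.0, 3.0, 5.0]))  # (2.0, 1.0)
```

Stabilization then applies the negated fitted translation/rotation to each frame.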

The Comparison of Motion Correction Methods in Myocardial Perfusion SPECT (심근관류 SPECT에서 움직임 보정 방법들의 비교)

  • Park, Jang-Won;Nam, Ki-Pyo;Lee, Hoon-Dong;Kim, Sung-Hwan
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.2
    • /
    • pp.28-32
    • /
    • 2014
  • Purpose: Patient motion during myocardial perfusion SPECT can produce images that show visual artifacts and perfusion defects, which remain a significant source of unsatisfactory myocardial perfusion SPECT. Motion correction methods have been developed to detect and correct patient motion in order to reduce these artifacts and defects, and each method uses a different algorithm. We corrected simulated motion patterns with several motion correction methods and compared the resulting images. Materials and Methods: A phantom study was performed. An anthropomorphic torso phantom was filled to give counts equal to a patient's body, and a simulated defect was added to the myocardium phantom to observe changes in the defect. Vertical motion was intentionally generated by moving the phantom downward in a returning pattern and in a non-returning pattern throughout the acquisition; lateral motion was likewise generated in a returning pattern and in a non-returning pattern. The simulated motion patterns were detected and corrected, and the images and QPS scores were compared against the no-motion images after the Motion Detection and Correction (MDC), Stasis, and Hopkins methods were applied. Results: In the phantom study, the simulated motions produced changes of the perfusion defect in the anterior wall, and an inferior-wall defect was found in some situations. The motion-induced changes were corrected by the motion correction methods, but the Hopkins and Stasis methods showed a visual artifact, although this artifact did not affect the perfusion score. Conclusion: The phantom tests confirmed that motion correction can reduce motion artifacts and artifactual perfusion defects. The Motion Detection and Correction (MDC) method performed better than the other methods in terms of both the polar map image and the perfusion score.


An analysis of Factorial structure of Kinematic variables in Bowling (볼링의 운동학적 분석과 주요인 구조분석)

  • Lee, Kyung-Il
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.2
    • /
    • pp.381-392
    • /
    • 2002
  • This study attempted to identify the factorial structure of kinematic variables in bowling. The subjects comprised three groups: higher bowlers, national-team bowlers with a 200 average, including one pro bowler; middle bowlers, three amateurs with a 170 average; and lower bowlers, three amateurs with a 150 average. Motion analysis of the throwing motion in the three groups was performed through three-dimensional cinematography using the DLT method, with two high-speed video cameras operating at 180 and 60 frames per second. T-tests and factorial structure analysis were used to determine the relations among variables. It was concluded that: 1. The differences in x1, x2, x4, x8, x9, x11, x12, and x13 were significant between the two groups. 2. The differences in the number of spins and in the back-hand angle were statistically significant between the two groups (p<.001, p<.05). 3. Correlations over r=.5 were found among the kinematic variables x1, x2, x3, x9, x10, and x11; in the rotated loading matrix, Factor 1 comprised x1, x2, x9, and x10, and Factor 2 related to x3 and x11. 4. The factor scores were obtained as follows: Factor 1 = (0.248)X1 + (0.265)X2 + (-0.074)X3 + (0.259)X9 + (0.259)X10 + (-0.025)X11; Factor 2 = (-0.016)X1 + (-0.055)X2 + (0.84)X3 + (-0.013)X9 + (-0.007)X10 + (0.553)X11.
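The factor-score equations in conclusion 4 can be evaluated directly; the weights below are the ones reported above, while the sample variable values are illustrative (the X's would be the study's standardized kinematic variables):

```python
# Factor-score weights reported in conclusion 4 of the abstract.
F1_W = {"X1": 0.248, "X2": 0.265, "X3": -0.074,
        "X9": 0.259, "X10": 0.259, "X11": -0.025}
F2_W = {"X1": -0.016, "X2": -0.055, "X3": 0.84,
        "X9": -0.013, "X10": -0.007, "X11": 0.553}

def factor_score(weights, x):
    """Weighted sum of the (standardized) kinematic variables."""
    return sum(w * x[k] for k, w in weights.items())

# With all variables at 1.0 the score is simply the sum of the weights.
x = {k: 1.0 for k in F1_W}
print(round(factor_score(F1_W, x), 3))  # 0.932
print(round(factor_score(F2_W, x), 3))  # 1.302
```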

A User Driven Adaptive Bandwidth Video Streaming System (사용자 기반 가변 대역폭 영상 스트리밍 시스템)

  • Chung, Yeongjee;Ozturk, Yusuf
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.4
    • /
    • pp.825-840
    • /
    • 2015
  • Adaptive bitrate (ABR) streaming has become an important and prevalent feature of many multimedia delivery systems, with content providers such as Netflix and Amazon using ABR streaming to increase bandwidth efficiency and provide the best possible user experience when channel conditions are not ideal. Where such systems could still improve is in the delivery of live video with closed-loop cognitive control of video encoding. In this paper, we present a streaming camera system that provides spatially and temporally adaptive video streams, learning the user's preferences in order to make intelligent scaling decisions. The system employs a hardware-based H.264/AVC encoder for video compression. The encoding parameters can be configured by the user, or by the cognitive system on the user's behalf when the bandwidth changes. A cognitive video client developed in this study learns the user's preferences (i.e., video size versus frame rate) over time and intelligently adapts the encoding parameters when channel conditions change. We demonstrate that the cognitive decision system can control video bandwidth by altering the spatial and temporal resolution, as well as make the corresponding scaling decisions.
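The core scaling decision the cognitive client makes, trading spatial against temporal resolution according to the learned preference, can be sketched as follows (function and parameter names are illustrative, not the system's actual API):

```python
def scale_decision(channel_kbps, encoding_kbps, prefers_size):
    """Pick which encoding dimension to reduce when the measured
    channel bandwidth drops below the current encoding bitrate.
    prefers_size=True means the user values picture size (spatial
    resolution) over frame rate, as learned over time."""
    if channel_kbps >= encoding_kbps:
        return "keep"                 # channel can carry the stream
    if prefers_size:
        return "reduce_frame_rate"    # preserve spatial resolution
    return "reduce_resolution"        # preserve temporal resolution

print(scale_decision(800, 500, True))    # keep
print(scale_decision(300, 500, True))    # reduce_frame_rate
print(scale_decision(300, 500, False))   # reduce_resolution
```

The closed loop comes from re-measuring bandwidth after each adjustment and updating the preference model from the user's manual overrides.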