• Title/Summary/Keyword: Motion Pictures

Search Results: 152

Substream-based out-of-sequence packet scheduling for streaming stored media (저장매체 스트리밍에서 substream에 기초한 비순차 패킷 스케줄링)

  • Choi Su Jeong;Ahn Hee June;Kang Sang Hyuk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.10C
    • /
    • pp.1469-1483
    • /
    • 2004
  • We propose a packet scheduling algorithm for streaming stored media. We assume that the receiver periodically reports the channel throughput back to the sender. From the original video data, the importance level of a video packet is determined by its relative position within its group of pictures, taking into account motion-texture discrimination and temporal scalability. Thus, we generate a number of nested substreams. Using feedback information from the receiver and statistical characteristics of the video, we model the streaming system as a queueing system, compute the run-time decoding failure probability of a frame in each substream based on an effective-bandwidth approach, and determine the optimum substream to send at that moment. Since the optimum substream is updated periodically, the resulting sending order differs from the original playback order. Experiments with real video data show that the proposed scheduling scheme outperforms the conventional sequential sending scheme.
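The abstract's core loop can be sketched as follows. This is a minimal illustration, not the paper's method: the exponential decay model in `decoding_failure_prob`, the rate values, and the threshold `p_max` are all invented placeholders standing in for the queueing/effective-bandwidth analysis.

```python
import math

def decoding_failure_prob(rate_kbps, throughput_kbps, buffer_ms):
    # Toy effective-bandwidth-style estimate: failure probability falls
    # exponentially with spare channel capacity and buffering headroom.
    spare = throughput_kbps - rate_kbps
    if spare <= 0:
        return 1.0
    return math.exp(-spare * buffer_ms / 1e4)

def pick_substream(substream_rates_kbps, throughput_kbps, buffer_ms, p_max=0.05):
    """Return the index of the richest nested substream whose estimated
    decoding failure probability stays below p_max."""
    best = 0  # substream 0 is the base layer and is always sent
    for i, rate in enumerate(substream_rates_kbps):
        if decoding_failure_prob(rate, throughput_kbps, buffer_ms) <= p_max:
            best = i
    return best

# Nested substreams: each adds enhancement packets on top of the previous one.
rates = [200, 400, 800, 1600]           # cumulative bitrate of each substream
print(pick_substream(rates, 900, 500))  # -> 2: richest substream the channel supports
```

Because the choice is re-evaluated on every receiver report, packets from a richer substream can be scheduled ahead of base-layer packets of later frames, which is what makes the sending order out-of-sequence.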

An Implementation of a Web Transaction Processing System (웹 트랜잭션 처리 시스템의 구현)

  • Lee, Gang-U;Kim, Hyeong-Ju
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.5
    • /
    • pp.533-542
    • /
    • 1999
  • Enabling users to access multimedia data such as images and motion pictures, in addition to traditional text-based data, the Web has become a popular platform for various applications. Recently, many researchers have devoted a great deal of work to integrating the Web and databases in order to support richer, higher-quality Web services, and have produced many noticeable results in this area. However, not enough research results have been presented on processing Web transactions, which are essential in integrating the Web and databases. In this paper, we have designed and implemented WebTP, a Web transaction processing system for Web gateways to databases. By introducing work and work-global variables, WebTP provides more robust transaction state management and supports application-level savepoints and partial rollbacks, improving the availability and stability of the system. It also enhances productivity by providing a modular way to develop Web applications.

A Comparison of PM10 Exposure Characteristics of Swine Farmers by Body Parts using Direct-reading Instrument (직독식 기기를 이용한 양돈작업자의 신체부위별 PM10 노출 특성 비교 연구)

  • Sin, Sojung;Kim, Hyocher;Kim, Kyung-ran;Seo, Mintae;Park, Sooin;Kim, Kyungmin;Kim, Kyungsu
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.29 no.2
    • /
    • pp.159-166
    • /
    • 2019
  • Objectives: The purpose of this study was to evaluate personal exposure to $PM_{10}$ by body part for the development of a dust-monitoring wearable device for swine farmers. Methods: Tasks were classified using motion pictures taken by action cameras attached to swine farmers. Concentrations of $PM_{10}$ were measured by attaching direct-reading instruments at the head, neck, and waist of each worker. Differences in $PM_{10}$ exposure between body parts were analyzed with linear regression. Results: We identified three tasks (vaccination, moving pigs, and manure treatment). The $PM_{10}$ concentration during vaccination was the highest among the tasks, and the body part showing the highest $PM_{10}$ concentration was the waist regardless of task. In all tasks, the closer the distance between the body parts, the higher the R-squared values (vaccination 0.4221, moving pigs 0.6990, and manure treatment 0.2164). Conclusions: We presume that $PM_{10}$ concentrations were affected by the body part at which they were measured. To develop a swine farmer's wearable device for monitoring airborne dust concentration, determining the position of the monitoring sensor is essential to ensure accurate measurement. Considering the results of this study, the wearable sensor should be positioned at the waist.
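The between-body-part comparison reported above is an ordinary linear regression with an R-squared statistic. A minimal sketch of that computation, using invented paired readings (the numbers below are not from the study):

```python
import numpy as np

# Hypothetical paired PM10 readings (ug/m3) from two sensor positions.
head  = np.array([120., 150., 180., 210., 260., 300.])
waist = np.array([200., 240., 310., 350., 430., 500.])

# Fit waist ~ a*head + b and report R^2, mirroring the paper's per-task
# linear-regression comparison between body parts.
a, b = np.polyfit(head, waist, 1)
pred = a * head + b
ss_res = np.sum((waist - pred) ** 2)
ss_tot = np.sum((waist - waist.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))  # close to 1 for strongly linearly related sensors
```

A high R-squared between two positions means one sensor's reading predicts the other's well, which is why the reported values fall off as the distance between body parts grows.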

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.54.2-54.2
    • /
    • 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, which averages or sums the pixel values of multiple pictures taken of a specific area, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all, it takes too long. If we can process a single-shot image well and achieve similar performance, we can overcome those weaknesses. Recent developments in deep learning have enabled things that were not possible with earlier algorithm-based programming; one of them is generating data with more information from data with less information. Accordingly, we reproduced stacked images from single-shot images using a kind of deep learning, the conditional generative adversarial network (cGAN). r-band camcol2 south data from SDSS Stripe 82 were used. From all fields, we used images stacked from only 22 individual exposures and, paired with each stacked image, the single-pass data included in that stack. All fields were cut into $128{\times}128$ pixel patches, giving 17,930 images in total; 14,234 pairs were used for training the cGAN and 3,696 pairs for verifying the result. As a result, the RMS error of pixel values between data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared to $1.24{\times}10^{-3}$ for the original input data. We also applied the network to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared to other denoising methods. In addition, in photometry, the number count of stacked-cGAN matched sources is larger than that of single pass-stacked matching, especially for fainter objects, and the magnitude completeness also improved for fainter objects. With this work, it is possible to reliably observe objects one magnitude fainter.
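The RMS-error figures quoted above compare pixel arrays directly. A small sketch of that metric on synthetic arrays (the images below are random placeholders, not SDSS data):

```python
import numpy as np

def rms_error(a, b):
    """Root-mean-square difference between two equally shaped image arrays."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy 128x128 'stacked' target and a slightly noisy 'generated' image,
# analogous to comparing cGAN output against the stacked ground truth.
rng = np.random.default_rng(0)
target = rng.random((128, 128))
noisy = target + rng.normal(0.0, 1e-3, size=(128, 128))
print(rms_error(noisy, target))  # on the order of 1e-3
```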


Virtual reference image-based video coding using FRUC algorithm (FRUC 알고리즘을 사용한 가상 참조 이미지 기반 부호화 기술 연구)

  • Yang, Fan;Han, Heeji;Choi, Haechul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.650-652
    • /
    • 2022
  • The frame rate up-conversion (FRUC) algorithm is an image interpolation technology that improves the frame rate of moving pictures. It solves problems such as screen shake or blurry motion caused by low-frame-rate video in high-definition digital video systems, and provides viewers with a smoother visual experience. In this paper, we propose a video compression technique using a deep-learning-based FRUC algorithm. The proposed method excludes some images from the original video before compression and transmission, and uses a deep-learning-based interpolation method in the decoding process to restore the excluded images, thereby achieving high compression efficiency. In the experiment, the compression performance was evaluated using the decoded images and the images restored by the FRUC algorithm after encoding the video with one or three frames skipped. When one and three frames were excluded, the average BD-rate decreased by 81.22% and 27.80%, respectively. Excluding three images yields lower coding efficiency than excluding one because the PSNR of the images reconstructed by the FRUC method is lower.
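The skip-then-restore pipeline can be illustrated with a naive stand-in for the interpolator. The paper uses a deep-learning FRUC network; the linear blend below is only a minimal placeholder showing where skipped frames are rebuilt at the decoder.

```python
import numpy as np

def interpolate_skipped(kept_frames, skip=1):
    """Rebuild a sequence in which `skip` frames between each pair of kept
    frames were dropped, by linear interpolation between the kept neighbours.
    (A naive stand-in for a learned FRUC interpolator.)"""
    out = [kept_frames[0]]
    for prev, nxt in zip(kept_frames[:-1], kept_frames[1:]):
        for k in range(1, skip + 1):
            t = k / (skip + 1)
            out.append((1 - t) * prev + t * nxt)  # blended in-between frame
        out.append(nxt)
    return out

# Two kept frames with one frame skipped between them:
a, b = np.zeros((2, 2)), np.full((2, 2), 2.0)
mid = interpolate_skipped([a, b], skip=1)[1]
print(mid[0, 0])  # 1.0, halfway between the neighbours
```

Skipping more frames widens the gap the interpolator must bridge, which matches the observation that the three-frame-skip setting reconstructs with lower PSNR.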


Kinematical Characteristics of the Translational and Pendular Movements of each Cervical Vertebra at the Flexion and Extension Motion (굴곡과 신전 수동운동 상태에서 개별경추의 진자운동 및 병진운동의 운동학적인 특징)

  • Park, Sung Hyuk;Choi, Han Sung;Hong, Hoon Pyo;Ko, Young Gwan
    • Journal of Trauma and Injury
    • /
    • v.19 no.2
    • /
    • pp.126-134
    • /
    • 2006
  • Purpose: The aim of this study was to determine the kinematical characteristics of the pendular and the translational movements of each cervical vertebra during flexion and extension in order to understand the mechanism of injury to the cervical spine. Methods: Twenty volunteers, young men (24~37 years) with clinically and radiographically normal cervical spines, were studied. We induced passive movements in two directions and then took X-ray pictures. The range of pendular movement was measured as the variation of the distance between the center points of two contiguous cervical vertebrae, and the range of translational movement was measured as the variation of the shortest distance between the center point of a vertebra and an imaginary line connecting the center points of the two lower contiguous cervical vertebrae. The measurements were done using a picture archiving and communication system (PACS). Results: The total length of all cervical vertebrae in the neutral position was, on average, 133.66 mm, but in both flexion and extension the lengths widened to 134.83 mm and 134.79 mm, respectively. The directions of both the pendular and the translational movements changed at the $2^{nd}$ cervical vertebra, and the ranges of both movements were significantly larger from the $5^{th}$ to the $7^{th}$ cervical vertebra for flexion and for combined flexion and extension motion (p<0.05). Conclusion: The kinematical characteristics of flexion and extension motions varied at each level of the cervical vertebrae. The $1^{st}$ and $2^{nd}$ cervical vertebrae and the $5^{th}$ to $7^{th}$ cervical vertebrae were the main areas of cervical spinal injury. This suggests, according to Hooke's law, that the tissues supporting these areas could be weak and sensitive to injury.
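The translational measurement described in the Methods is a point-to-line distance in the radiograph plane. A minimal sketch of that geometry (coordinates are illustrative, not measured values from the study):

```python
import math

def translational_offset(p, a, b):
    """Shortest distance from vertebra centre p to the line through the
    centres a and b of the two lower contiguous vertebrae. Points are
    (x, y) tuples in 2D lateral-radiograph coordinates."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # |cross product of AB and AP| / |AB| gives the perpendicular distance.
    return abs(dx * (py - ay) - dy * (px - ax)) / math.hypot(dx, dy)

# A vertebra centre 2 units off the line joining the two lower centres:
print(translational_offset((1.0, 2.0), (0.0, 0.0), (2.0, 0.0)))  # 2.0
```

The translational range is then the variation of this offset between the flexion and extension radiographs of the same vertebra.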

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method which automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed rather than a parametric one. Conventionally used parametric skin color models, which represent the facial distribution as Gaussian, lack robustness under varying lighting conditions and thus require additional work to extract the exact facial region from face images. To resolve this limitation of current skin color models, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
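The linear skin chrominance model can be caricatured as a band around a line in Hue-Tint space. The slope, intercept, and tolerance below are invented placeholders, not the values fitted in the paper; the sketch only shows the shape of such a classifier.

```python
def is_skin(hue, tint, slope=0.8, intercept=10.0, tol=15.0):
    """Classify a pixel as skin if its Tint value lies within tol of a linear
    function of Hue. slope/intercept/tol are illustrative placeholders."""
    return abs(tint - (slope * hue + intercept)) <= tol

# A pixel on the assumed skin line is accepted; one far off it is rejected.
print(is_skin(20, 26), is_skin(20, 60))
```

Compared with a Gaussian (parametric) model, thresholding against a fitted line needs no distribution assumption, which is the robustness argument made in the abstract.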


A Study on Expressing 3D Animation by Visual Direction : focused on 〈 How to train your dragon 〉 (시각적 연출에 의한 3D 입체 애니메이션 표현 연구: 〈드래곤 길들이기〉를 중심으로)

  • Kim, Jung-Hyun
    • Cartoon and Animation Studies
    • /
    • s.26
    • /
    • pp.1-30
    • /
    • 2012
  • The purpose of animation is to deliver interesting stories to an audience through motion. To achieve this purpose, over the past century since its inception, animation has adopted many kinds of technologies and thus developed diverse narrative methods and visual expression techniques. With the advancement of expression techniques, all the elements making up animation have gradually been systematized and, at the same time, have helped express worlds beyond reality. As a result, an audience can now watch everything an animation director imagines on the big screen. These days, more effort is being made so that the audience can feel much more than just enjoy pictures moving in a frame; in other words, the purpose of animation is shifting from passive viewing toward feeling and sensing through the animation. At the center of this change is 3D stereoscopic technology, which offers new interest to the audience. Some time ago, a 3D stereoscopic animation film was produced in Korea, but it did not achieve box-office success, for it failed to satisfy an audience that expected perfection and beauty able to rival those of international 3D animation films. The failure is attributable to the fact that the domestic 3D animation production industry is still in its early stage and lacks sufficient human resources, technology, and experience in producing 3D animation films. Moreover, most domestic studies on 3D focus on the technologies related to reproduction, while few studies have been conducted on the images that the audience directly faces. Under these circumstances, a study of the stereoscopic imagery of 〈How to train your dragon〉, a 3D stereoscopic animation film released in 2010 and regarded as the most successful 3D stereoscopic animation, is worthwhile. Thus this thesis conducted theoretical consideration and case analysis focusing on the visual direction that creates pictures delivering an abundant three-dimensional effect, so that it can serve as basic data for producing high-quality domestic 3D animation and training professional labor forces. As a result, it was found that 3D stereoscopic animation is not a new area, but one that has been expanded and changed by applying the characteristics of 3D imagery based on the principles of existing media aesthetics. This study may help establish the foundation of the theoretical studies necessary for producing 3D animation content that realizes a sense of reality.

Image Separation of Talker from a Background by Differential Image and Contours Information (차영상 및 윤곽선에 의한 배경에서 화자분리)

  • Park Jong-Il;Park Young-Bum;Yoo Hyun-Joong
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.671-678
    • /
    • 2005
  • In this paper, we suggest an algorithm that extracts the important object from motion pictures and then replaces the background with arbitrary images. The suggested technique can be used not only for protecting privacy and reducing the size of data to be transferred by removing the background of each frame, but also for replacing the background with a user-selected image in video communication systems, including mobile phones. Because of the relatively large size of image data, digital image processing usually consumes substantial resources such as memory and CPU. This can cause trouble, especially for mobile video phones, which typically have restricted resources. In our experiments, we reduced the time and memory requirements for processing the images by restricting the search area to the vicinity of the major object's contour found in the previous frame, based on the fact that the movement of the major object is generally neither wide nor rapid. Specifically, we detected edges and used the edge image of the initial frame to locate candidate object areas. Then, on the located areas, we computed the difference image between adjacent frames and used it to determine and trace the major object that might be moving. We then computed the contour of the major object and used it to separate it from the background. We could successfully separate the major object from the background and replace the background with arbitrary images.
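The difference-image step described above can be sketched in a few lines. This is only the first stage of the pipeline (the contour tracing and background replacement are omitted), and the threshold value is an arbitrary assumption:

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Binary mask of pixels that changed between adjacent grayscale frames,
    i.e. the difference image underlying the talker/background separation."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200                  # the 'talker' moved into this region
print(moving_mask(prev, curr).sum())  # 4 changed pixels
```

Restricting this computation to a band around the previous frame's contour, as the paper does, shrinks the per-frame work roughly in proportion to the band's area.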

Tracking Moving Object using Hierarchical Search Method (계층적 탐색기법을 이용한 이동물체 추적)

  • 방만식;김태식;김영일
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.3
    • /
    • pp.568-576
    • /
    • 2003
  • This paper proposes a moving-object tracking algorithm using a hierarchical search method in dynamic scenes. The proposed algorithm is based on two main steps: generation of an initial model from difference pictures, and tracking of the moving object under time-varying scenes. With this procedure, tracking remains stable even when the object moves far between frames, and reliable under shape variation caused by 3-dimensional (3D) motion and camera sway; consequently, by correcting the position of the moving object, tracking time is relatively reduced. The partial Hausdorff distance is utilized as an estimation function to determine the similarity between the model and the moving object. To test the performance of the proposed method, extraction and tracking performance were evaluated on several kinds of moving cars in dynamic scenes. Experimental results showed that the proposed algorithm provides high performance: the matching order is 28.21 on average, and the processing time is 53.21 ms per frame. The root-mean-square (rms) error between the tracked position and the actual position is 1.148. For vehicles differing in size, color, and shape, the tracking performance is 98.66%; in cases of background dependence due to similarity to the road, it is 95.33%; and the total average is 97%.
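The partial Hausdorff distance used as the similarity measure above replaces the maximum of the nearest-neighbour distances with a quantile, so a few outlier points do not dominate the match score. A minimal sketch on toy point sets (the fraction and points are illustrative, not from the paper):

```python
import math

def partial_hausdorff(A, B, frac=0.8):
    """Directed partial Hausdorff distance: the frac-quantile (rather than
    the maximum) of nearest-neighbour distances from points in A to points
    in B, which tolerates outliers when matching a model to an object."""
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    k = max(0, math.ceil(frac * len(dists)) - 1)
    return dists[k]

A = [(0, 0), (1, 0), (2, 0), (10, 0)]      # last point is an outlier
B = [(0, 0), (1, 0), (2, 0)]
print(partial_hausdorff(A, B, frac=0.75))  # 0.0 -- the outlier is ignored
print(partial_hausdorff(A, B, frac=1.0))   # 8.0 -- the full Hausdorff distance
```

With frac=1.0 this reduces to the ordinary directed Hausdorff distance, which the single outlier at (10, 0) inflates to 8.0.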