• Title/Summary/Keyword: video camera (비디오카메라)


A Study on the Adequate HD Camera Focal Length in the Broadcasting Studio using LED Video Wall (LED 비디오월을 사용하는 방송환경에서 HD 카메라의 적정 초점거리 연구)

  • Choi, Ki-chang;Kwon, Soon-chul;Lee, Seung-hyun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.713-721
    • /
    • 2022
  • In order to use an LED video wall in a broadcasting studio, there are a few things to be aware of. First, since the pixels are closely arranged, a moire phenomenon may occur due to the short arrangement period, and second, the distance between pixels (the pixel pitch) may be recorded on the image sensor of the broadcasting camera. When moire occurs or the pixel pitch is observed, viewers feel uncomfortable. The moire effect can be reduced by adjusting the shooting distance or angle of the camera, but to prevent the pixel pitch from being recorded on the image sensor, a sufficient distance between the LED video wall and the camera must be secured. Even when the distance is secured, the zoom lens used in the broadcasting studio must be operated at an appropriate magnification: if the focal length is changed to obtain a desired angle of view, the pixel pitch may be unintentionally recorded. In this study, we propose the range of zoom-lens magnification over which the pixel pitch is not observed when the distance from the video wall is sufficiently secured. The content was played back on the LED video wall and recorded to a server using an HD camera equipped with a B4-mount zoom lens.
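The distance/focal-length trade-off described in this abstract can be sketched with thin-lens geometry. This is not the paper's method or its numbers: the pitch, distance, sensor pixel size, and the two-pixel (Nyquist) visibility criterion below are all illustrative assumptions.

```python
# Hedged sketch (not from the paper): thin-lens geometry for estimating when
# an LED wall's pixel pitch becomes resolvable on the camera sensor.
# All numeric values are illustrative assumptions, not the paper's data.

def pitch_image_size_mm(pitch_mm, focal_mm, distance_mm):
    """Size of one LED pixel pitch projected onto the sensor (thin lens, d >> f)."""
    return pitch_mm * focal_mm / distance_mm

def pitch_visible(pitch_mm, focal_mm, distance_mm, sensor_pixel_mm):
    """Assume the pitch is recorded once its image spans >= 2 sensor pixels (Nyquist)."""
    return pitch_image_size_mm(pitch_mm, focal_mm, distance_mm) >= 2 * sensor_pixel_mm

if __name__ == "__main__":
    # Example: 2.5 mm pitch wall, 5 m away, ~0.005 mm sensor pixels.
    for f in (10, 40, 80, 160):  # zoom focal lengths in mm
        print(f, pitch_visible(2.5, f, 5000, 0.005))
```

Under these assumptions, zooming in (increasing the focal length) at a fixed distance eventually makes the pitch visible, which is why the paper bounds the usable magnification range.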

MPEG Video Segmentation using Two-stage Neural Networks and Hierarchical Frame Search (2단계 신경망과 계층적 프레임 탐색 방법을 이용한 MPEG 비디오 분할)

  • Kim, Joo-Min;Choi, Yeong-Woo;Chung, Ku-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.114-125
    • /
    • 2002
  • In this paper, we propose a hierarchical segmentation method that first segments the video data into units of shots by detecting cuts and dissolves, and then decides the type of camera operation or object movement in each shot. As in our previous work [1], each picture group is classified into one of three categories, Shot (scene change), Move (camera operation or object movement) and Static (almost no change between images), by analysing the DC (Direct Current) components of I (Intra) frames. For this step, we designed a two-stage hierarchical neural network whose inputs combine various multiple features. The system then detects the accurate shot position and the type of camera operation or object movement by searching the P (Predicted) and B (Bi-directional) frames of the current picture group selectively and hierarchically. The statistical distribution of macroblock types in P and B frames is used for accurate detection of cut positions, and another neural network with macroblock types and motion vectors as inputs is used to detect dissolves and the types of camera operations and object movements. The proposed method can reduce processing time by using only the DC coefficients of I frames without decoding and by searching P and B frames selectively and hierarchically. It classified picture groups with an accuracy of 93.9-100.0% and cuts with an accuracy of 96.1-100.0% on three different types of video data, and classified the types of camera or object movements with accuracies of 90.13% and 89.28% on two different types of video data.
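The coarse first stage of this hierarchy can be illustrated with a much simpler stand-in: comparing DC-coefficient images of consecutive I frames and labeling the picture group Shot / Move / Static. The paper uses a two-stage neural network over multiple combined features; the plain difference test and both thresholds below are invented for illustration only.

```python
# Hedged sketch: Shot/Move/Static labeling from I-frame DC change.
# The real system uses a two-stage hierarchical neural network with
# multiple combined features; thresholds here are illustrative.

def dc_difference(dc_a, dc_b):
    """Mean absolute difference of two DC-coefficient images (same size)."""
    n = len(dc_a) * len(dc_a[0])
    return sum(abs(a - b) for ra, rb in zip(dc_a, dc_b)
               for a, b in zip(ra, rb)) / n

def classify_gop(dc_prev, dc_cur, shot_th=40.0, move_th=5.0):
    """Label a picture group as Shot / Move / Static from I-frame DC change."""
    d = dc_difference(dc_prev, dc_cur)
    if d >= shot_th:
        return "Shot"
    if d >= move_th:
        return "Move"
    return "Static"
```

Only picture groups labeled Shot or Move would then need the finer, selective search of P and B frames, which is where the processing-time saving comes from.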

The Interesting Moving Objects Tracking Algorithm using Color Informations on Multi-Video Camera (다중 비디오카메라에서 색 정보를 이용한 특정 이동물체 추적 알고리듬)

  • Shin, Chang-Hoon;Lee, Joo-Shin
    • The KIPS Transactions:PartB
    • /
    • v.11B no.3
    • /
    • pp.267-274
    • /
    • 2004
  • In this paper, an algorithm for tracking interesting moving objects using color information on a multi-video-camera system is proposed. After the RGB color coordinates of the images input from the multiple video cameras are converted into HSI color coordinates, moving objects are detected by applying the difference-image method and the integral projection method to the background image and the object image using only the hue component. The hue values of the detected moving area are normalized into 24 steps from 0° to 360°. The three normalization levels with the highest distribution, and the distances among those three levels obtained from the hue distribution chart of the normalized moving object, are used as the feature parameters of the moving objects. The identity of a moving object across the four cameras is established from the distribution of the three normalization levels and the distances among them, and the moving objects are then tracked and surveilled. To examine the validity of the proposed method, four cameras were set up at different indoor places, with humans as the moving targets. In the surveillance results, the variation of the hue distribution chart of the detected interesting human at each camera was under 10%, and it was confirmed that the interesting human is tracked and surveilled automatically across the four cameras using the feature parameters.
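The hue-feature step described above can be sketched as a 24-bin quantization followed by picking the three most populated bins. The bin width and tie-breaking below are assumptions, and the RGB-to-HSI conversion and difference-image detection stages are omitted.

```python
# Hedged sketch of the hue-feature step only: quantize hue (0-360 degrees)
# into 24 bins and take the three most populated bins as feature parameters.
# Bin boundaries and tie-breaking are my assumptions, not the paper's.

def hue_histogram(hues, bins=24):
    """Count hue values (degrees) into equal-width bins."""
    hist = [0] * bins
    for h in hues:
        hist[int(h % 360 // (360 / bins))] += 1
    return hist

def top3_bins(hist):
    """Indices of the three most populated hue bins (the feature parameters)."""
    return sorted(range(len(hist)), key=lambda i: hist[i], reverse=True)[:3]
```

Matching these three bin indices (plus the distances among them, per the abstract) across cameras is what lets the same person be re-identified from view to view.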

Design of Camera Model for Implementation of Spherical PTAM (구면 PTAM의 구현을 위한 카메라 모델 설계)

  • Kim, Ki-Sik;Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.05a
    • /
    • pp.607-610
    • /
    • 2020
  • PTAM has been actively studied for visual environment recognition, and research has recently been extended to spherical video, which provides an omnidirectional field of view. Existing spherical SLAM methods use the Unified Sphere Model and are limited to the frontal field of view. In this paper, we present a camera model for implementing PTAM for spherical video. The proposed camera model uses dual image planes based on pinhole projection. The proposed method is not restricted to the frontal field of view and supports the full field of view. It also has the advantage that planar computation formulas can be applied directly when applying PTAM to spherical video.
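My reading of the dual image-plane idea (this is an interpretation of the abstract, not the authors' code): a front pinhole plane handles directions with positive z and a mirrored back plane handles negative z, so every viewing direction has a planar projection. The focal length of 1 is an illustrative choice.

```python
# Hedged sketch of a dual image-plane pinhole model for full-FOV projection.
# This is an illustration of the concept, not the paper's implementation.

def project_dual(x, y, z, f=1.0):
    """Project a 3-D viewing direction onto one of two pinhole image planes."""
    if z > 0:
        return ("front", f * x / z, f * y / z)   # ordinary pinhole projection
    if z < 0:
        return ("back", f * x / -z, f * y / -z)  # mirrored plane behind the center
    raise ValueError("direction lies exactly on the plane boundary")
```

Because each half-space maps to an ordinary planar image, standard planar PTAM math can run unchanged on whichever plane a feature falls on.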

A Study on 3D Graphics Registration of Image Sequences using Planar Surface (평면을 이용한 이미지 시퀀스에서의 3D 그래픽 정합에 대한 연구)

  • 김주완;장병태
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.190-192
    • /
    • 2003
  • This paper proposes a method that estimates the intrinsic and extrinsic camera parameters from the image information of a planar object in an image sequence obtained from an uncalibrated camera, and uses them to register virtual 3D graphics to the sequence. Compared with existing methods, the proposed method registers virtual 3D graphic objects to images easily, minimizes visible registration error, and integrates easily with 3D graphics tools such as DirectX. This work is expected to be useful for developing interactive video content that composites 3D graphics with video.
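The core of planar registration is the plane-induced homography between the reference plane and each frame. As a hedged sketch (the paper goes further and decomposes the result into intrinsics and extrinsics), the homography can be estimated from four point correspondences by a direct linear solution with h33 fixed to 1:

```python
# Hedged sketch: plane-induced homography from four correspondences (DLT with
# h33 = 1). The paper additionally recovers camera parameters from the plane;
# only the homography step is illustrated here.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography mapping each src (x, y) to dst (u, v); h33 = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Apply a homography to a point (projective division by w)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once per-frame homographies are known, a virtual object anchored to the plane can be re-projected consistently into every frame of the sequence.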


Ultra-precision Mirror Surface Cutting of Electroless Nickel Plating Film (무전해 니켈 도금막의 초정밀 경면 절삭)

  • Korea Optical Industry Association
    • The Optical Journal
    • /
    • s.96
    • /
    • pp.49-54
    • /
    • 2005
  • The market for electro-optics products such as projection TVs, video cameras, and CD players is expanding. To make the optical components of these products smaller, lighter, higher-performing, and cheaper, manufacturers are developing their own aspheric machining technologies, and lenses are shifting from conventional glass to lighter plastic. The required accuracy of these optical components, in both form accuracy and surface roughness, keeps rising, and ultra-precision aspheric mirror surface cutting technology is used to meet it. This article introduces ultra-precision mirror surface cutting of electroless nickel plating films, targeting molds for the aspheric plastic lenses of projection TVs and video cameras.


Moving Object Detection and Tracking in Multi-view Compressed Domain (비디오 압축 도메인에서 다시점 카메라 기반 이동체 검출 및 추적)

  • Lee, Bong-Ryul;Shin, Youn-Chul;Park, Joo-Heon;Lee, Myeong-Jin
    • Journal of Advanced Navigation Technology
    • /
    • v.17 no.1
    • /
    • pp.98-106
    • /
    • 2013
  • In this paper, we propose a moving object detection and tracking method for a multi-view camera environment. Based on the similarity and characteristics of the motion vectors and coding block modes extracted from the compressed bitstreams, validation of moving blocks, labeling of the validated blocks, and merging of neighboring blobs are performed. To continuously track objects through temporary stop, crossing, and overlapping events, a window-based object updating algorithm is proposed for single- and multi-view environments. Object detection and tracking could be performed at an acceptable level of performance, without decoding the video bitstreams, for the normal, temporary stop, crossing, and overlapping cases. The detection and tracking rates are over 89% and 84%, respectively, in the multi-view environment, improved by 6% and 7% over those of the single-view environment.
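The validate-then-label pipeline can be sketched with a motion-vector magnitude test followed by connected-component labeling of the marked macroblocks. The threshold, the omission of the coding-block-mode test, and the omission of blob merging and window-based tracking are all simplifications of my own, not the paper's settings.

```python
# Hedged sketch: compressed-domain moving-block validation and blob labeling.
# MV threshold is illustrative; the paper also uses coding block modes and a
# window-based tracking update that are omitted here.

def moving_blocks(mv_field, th=2.0):
    """Boolean grid of macroblocks whose motion-vector magnitude exceeds th."""
    return [[(dx * dx + dy * dy) ** 0.5 > th for dx, dy in row]
            for row in mv_field]

def label_blobs(mask):
    """4-neighbour connected-component labeling; returns the blob count."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                blobs += 1
                stack = [(i, j)]
                while stack:                      # depth-first flood fill
                    r, c = stack.pop()
                    if 0 <= r < h and 0 <= c < w and mask[r][c] and not seen[r][c]:
                        seen[r][c] = True
                        stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return blobs
```

Because motion vectors and block modes come straight from the bitstream, this whole stage runs without pixel decoding, which is the efficiency claim of the paper.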

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.8
    • /
    • pp.1-9
    • /
    • 2019
  • With the development of imaging information and communication technology in modern society, image acquisition and mass production technologies have advanced rapidly. However, crimes exploiting these technologies have increased, and forensic studies are conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has been limited to still images. In this paper, a camera model identification technique for video, not still images, is proposed. We analyzed video frames using a model trained on images. Through training and analysis that consider the frame characteristics of video, we show the superiority of the model using P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment with 5 video camera models, we obtained up to 96.18% accuracy for per-frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
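The decision stage can be sketched on its own: given per-frame labels from the trained classifier (assumed here as plain strings; the CNN itself is not reproduced), the camera model is chosen by majority vote over the frames, which is one plausible reading of the abstract's "majority-based decision algorithm".

```python
# Hedged sketch of the majority-based decision stage only. Per-frame CNN
# predictions are assumed as given labels; ties fall to the label seen first.

from collections import Counter

def identify_camera(frame_predictions):
    """Majority vote over per-frame camera-model labels."""
    return Counter(frame_predictions).most_common(1)[0][0]
```

This is also why per-frame accuracy of ~96% can still yield a 100% per-video identification rate: occasional misclassified frames are outvoted.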

Extracting Camera Motions by Analyzing Video Data (비디오 데이터 분석에 의한 카메라의 동작 추출)

  • Jang, Seok-Woo;Lee, Keun-Soo;Choi, Hyung-Il
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.8
    • /
    • pp.65-80
    • /
    • 1999
  • This paper presents an elegant affine-model based approach that can qualitatively estimate camera motion information. We define various types of camera motion by means of the parameters of an affine model. To obtain those parameters from images, we fit the affine model to the field of instantaneous velocities rather than to the raw images, correlating consecutive images to get the instantaneous velocities. Size filtering of the velocities is applied to remove noisy components, and a regression approach is employed for the fitting procedure. The fitted parameter values are examined to obtain the estimates of camera motion. The experimental results show that the suggested approach can successfully yield qualitative information about camera motion.
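The parameter-to-motion interpretation can be sketched as follows. With noise-free velocities, the six affine parameters follow exactly from samples at three non-collinear points; the paper instead fits them by regression over the whole (size-filtered) velocity field. The divergence/curl reading of zoom and rotation is standard flow analysis, but the threshold and the specific decision order are invented for illustration.

```python
# Hedged sketch: recover affine flow parameters from three exact samples and
# classify the camera motion qualitatively. The paper uses regression over
# the full velocity field; the epsilon threshold here is illustrative.

def affine_from_samples(v00, v10, v01):
    """(a1..a6) of v(x,y) = (a1+a2*x+a3*y, a4+a5*x+a6*y) from samples
    at (0,0), (1,0) and (0,1)."""
    (u0, w0), (u1, w1), (u2, w2) = v00, v10, v01
    return (u0, u1 - u0, u2 - u0, w0, w1 - w0, w2 - w0)

def camera_motion(params, eps=1e-6):
    """Qualitative motion label from the affine parameters."""
    a1, a2, a3, a4, a5, a6 = params
    if abs(a2 + a6) > eps:
        return "zoom"          # divergence of the flow field
    if abs(a5 - a3) > eps:
        return "rotation"      # curl of the flow field
    if abs(a1) > eps or abs(a4) > eps:
        return "pan/tilt"      # uniform translation
    return "static"
```

For example, the radial field v = (x, y) has positive divergence and classifies as zoom, while the uniform field v = (1, 0) classifies as pan/tilt.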


Virtual Multiview Image Composition based on RGB Texture and Depth Data (RGB 텍스쳐와 깊이 데이터를 이용한 가상 다시점 영상의 생성 및 그래픽스 합성)

  • Hwang, Won-Young;Kwon, Jun-Sup;Kim, Man-Bae;Choi, Chang-Yeol
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.11a
    • /
    • pp.93-96
    • /
    • 2006
  • As the supply of 2D and stereoscopic video content grows, interest in realistic content is increasing. Realistic content is produced by compositing multi-view video acquired from a multi-view camera, RGB images obtained from a depth camera, and synthetic data such as computer graphics. A multi-view camera makes it easy to acquire multi-view video but is expensive, while a depth camera makes the system relatively easy to build but has the drawback of providing only a single viewpoint image. In this paper, we propose a method that generates virtual multi-view images from the RGB texture data and depth data obtained from a depth camera, and composites computer graphics into the generated images. The proposed method easily produces virtual viewpoint images for each view and composites graphic objects without using a multi-view camera system. The composited result can be viewed realistically in 3D on a multi-view 3D monitor or a stereoscopic monitor.
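The virtual-view generation idea can be sketched as simple depth-image-based rendering on one scanline: each pixel is shifted horizontally by a disparity inversely proportional to its depth. The gain constant is an illustrative assumption, and the occlusion ordering and hole filling that a real system needs are deliberately omitted (the `None` entries below are exactly the holes such a system must fill).

```python
# Hedged sketch of 1-D depth-image-based rendering (DIBR) for a virtual view.
# Gain/baseline are illustrative; occlusion handling and hole filling omitted.

def render_virtual_row(colors, depths, baseline=1.0, gain=8.0):
    """Warp one scanline: nearer pixels (smaller depth) shift more."""
    out = [None] * len(colors)
    for x, (c, z) in enumerate(zip(colors, depths)):
        nx = x + round(baseline * gain / z)  # disparity ~ baseline / depth
        if 0 <= nx < len(out):
            out[nx] = c  # crude: later writes overwrite earlier ones
    return out
```

Repeating this per scanline with different baselines yields the set of virtual viewpoints onto which graphics objects are then composited.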
