• Title/Summary/Keyword: Camera View

Development of UV Corona Camera for the Detecting of Discharge on Power Facility using UV Transmittance Improvement Filter (UV 투과율 향상 필터 기술을 이용한 전력설비 방전 검출용 자외선 코로나 카메라 개발)

  • Kim, Young-Seok;Choi, Myeong-Il;Kim, Chong-Min;Bang, Sun-Bae;Shong, Kil-Mok
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.11 / pp.1656-1661 / 2012
  • UV inspection technology is used for the predictive maintenance of power facilities together with IR thermography and ultrasonic devices. In this paper, a UV corona camera is designed, fabricated, and given a simple performance test so that it can serve as diagnostic equipment. The developed UV corona camera has a field of view of $6.4^{\circ}{\times}4.8^{\circ}$, slightly enlarged compared with a conventional camera so that a wide range of power equipment can be diagnosed, and a distance meter is attached to measure the distance between the camera and the equipment. The transmittance between 250 and 280 nm was 11% ($12.5%{\times}88%{\times}98%$), obtained by combining the transmittances of the absorption film, the window, and the other filters (UG 5, nickel sulphate and so on). At a distance of 5 m, the UV corona camera can detect partial discharge with a PD level of 2.5 pC and an RIV level of $3.6dB{\mu}V$.
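
A quick arithmetic check of the 11% figure quoted above (a minimal Python sketch; the element names simply follow the abstract, and independence of the stacked elements is assumed):

```python
# Combined 250-280 nm transmittance of the stacked optical elements (values from the abstract).
# Assumes the elements act independently, so their transmittances multiply.
absorption_film = 0.125   # 12.5 %
window          = 0.88    # 88 %
other_filters   = 0.98    # 98 % (UG 5, nickel sulphate and so on)

total = absorption_film * window * other_filters
print(f"Combined transmittance: {total:.3f} ({total * 100:.1f} %)")   # ~0.108, i.e. about 11 %
```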

Pre-processing of Depth map for Multi-view Stereo Image Synthesis (다시점 영상 합성을 위한 깊이 정보의 전처리)

  • Seo Kwang-Wug;Han Chung-Shin;Yoo Ji-Sang
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.91-99 / 2006
  • Pre-processing is an image processing technique used to enhance image quality or to convert a given image into a form suited to a specific purpose. An 8-bit depth map obtained by a depth camera usually contains many noise components caused by the characteristics of the depth camera, and its edges are more distorted than those of the RGB texture image, depending on the quality of the source object and the illumination conditions. Noise-removing filters can reduce these noise components, but they cannot properly recover the distorted edges of the depth map. In this paper, we propose an algorithm that reduces noise components and also enhances the quality of depth-map edges by using the edges in the RGB texture. Consequently, errors in the multi-view stereo image synthesis process are reduced.
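
The general idea (noise filtering plus RGB-guided edge refinement) can be sketched as follows; this is a hedged illustration using a median filter and the joint bilateral filter from opencv-contrib-python, not the authors' exact algorithm:

```python
import cv2
import numpy as np

def preprocess_depth(depth_8bit: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Illustrative pre-processing: denoise, then let RGB edges guide the depth edges."""
    denoised = cv2.medianBlur(depth_8bit, 5)                      # suppress depth-camera speckle noise
    # Joint bilateral filtering uses the RGB image as the guide, so depth discontinuities
    # are pulled toward the (less distorted) texture edges.
    return cv2.ximgproc.jointBilateralFilter(rgb, denoised, 9, 25.0, 9.0)
```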

Server and Client Simulator for Web-based 3D Image Communication

  • Ko, Jung-Hwan;Lee, Sang-Tae;Kim, Eun-Soo
    • Journal of Information Display / v.5 no.4 / pp.38-44 / 2004
  • In this paper, a server and client simulator for a web-based multi-view 3D image communication system is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer, and Microsoft's DirectShow programming library. In the proposed system, a two-view image is first captured with the IEEE 1394 stereo camera; this data is then compressed by extracting its disparity information in the Intel Xeon server computer and transmitted to the client system, where multi-view images are generated through an intermediate-view reconstruction method and finally displayed on the 3D display monitor. Experiments show that the proposed system can display an 8-view image with a grey level of 8 bits at a frame rate of 15 fps.
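
The intermediate-view reconstruction step can be approximated by disparity-based forward warping; the sketch below is a simplified illustration (function names and the hole-filling strategy are assumptions), not the system's actual DirectShow implementation:

```python
import numpy as np

def intermediate_view(left: np.ndarray, disparity: np.ndarray, alpha: float) -> np.ndarray:
    """Warp the left view by a fraction alpha (0..1) of the disparity to mimic a virtual
    camera placed between the two captured views."""
    h, w = disparity.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xs = int(round(x - alpha * disparity[y, x]))   # shift along the baseline
            if 0 <= xs < w:
                out[y, xs] = left[y, x]
    return out   # disocclusion holes would still need filling, e.g. by inpainting

# e.g. eight views between the two cameras: [intermediate_view(left, disp, i / 7.0) for i in range(8)]
```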

View synthesis in uncalibrated images (임의 카메라 구조에서의 영상 합성)

  • Kang, Ji-Hyun;Kim, Dong-Hyun;Sohn, Kwang-Hoon
    • Proceedings of the IEEK Conference / 2006.06a / pp.437-438 / 2006
  • Virtual view synthesis, which exploits the motion parallax cue, is essential for 3DTV systems. In this paper, we propose a multi-step view synthesis algorithm that efficiently reconstructs an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image rectification procedure that guarantees the interpolation process produces valid views; this rectification method can handle all possible camera motions, and its idea is to use a polar parameterization of the image around the epipole. To generate intermediate views, we then use an efficient dense disparity estimation algorithm that considers the features of stereo image pairs; its main concept is region-dividing bidirectional pixel matching. The estimated disparities are used to synthesize intermediate views of the stereo images, and computer simulation results of the proposed algorithm are shown.
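
One ingredient of bidirectional pixel matching is a left-right consistency check, sketched below (a minimal illustration with an assumed tolerance, not the paper's full region-dividing algorithm):

```python
import numpy as np

def consistency_mask(disp_left: np.ndarray, disp_right: np.ndarray, tol: float = 1.0) -> np.ndarray:
    """Keep a left-view disparity only if matching back from the right view agrees with it."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    xr = np.clip((xs - disp_left).round().astype(int), 0, w - 1)   # landing column in the right view
    back = disp_right[np.arange(h)[:, None], xr]                   # disparity measured from the right view
    return np.abs(disp_left - back) <= tol                         # True where both directions agree
```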

View-Dependent Real-time Rain Streaks Rendering (카메라 의존적 파티클 시스템을 이용한 실시간 빗줄기 렌더링)

  • Im, Jingi;Sung, Mankyu
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.468-480 / 2021
  • Realistic real-time rendering of rain streaks has been treated as a very difficult problem because of the variety of the underlying natural phenomena, and creating and managing a large number of particles consumes a large amount of computing resources. In this paper, we therefore propose a more efficient real-time rain streak rendering algorithm that generates view-dependent rain particles and expresses a large amount of rain with only a small number of them. A 'rain space' tied to the field of view of the camera, which moves in real time, is created, and particles are rendered only in that space. Accordingly, even if only a small number of particles are rendered, the effect of rendering a very large number of particles is obtained because rendering is confined to this limited space. This enables very efficient real-time rendering of rain streaks.
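
The 'rain space' idea, i.e. spawning particles only in a volume attached to the camera's field of view, can be sketched roughly as follows (parameter names and box dimensions are illustrative assumptions, not the paper's values):

```python
import numpy as np

def spawn_rain_particles(cam_pos, cam_forward, n_particles=2000,
                         near=0.5, far=30.0, width=20.0, height=15.0):
    """Place rain particles inside a camera-attached box so a small budget fills the visible volume."""
    depth = np.random.uniform(near, far, n_particles)              # distance along the view direction
    side  = np.random.uniform(-width / 2, width / 2, n_particles)
    up    = np.random.uniform(0.0, height, n_particles)
    # Build a simple camera frame (assumes world up = +Y and a non-vertical view direction).
    fwd = np.asarray(cam_forward, dtype=float); fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, [0.0, 1.0, 0.0]);     right /= np.linalg.norm(right)
    upv = np.cross(right, fwd)
    return (np.asarray(cam_pos, dtype=float)
            + depth[:, None] * fwd + side[:, None] * right + up[:, None] * upv)
```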

Deep learning-based Multi-view Depth Estimation Methodology of Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.4-7 / 2022
  • Recently, multi-view depth estimation methods using deep learning networks for 3D scene reconstruction have gained a lot of attention. Multi-view video contents have various characteristics according to their camera composition, environment, and settings, and it is important to understand these characteristics and apply the proper depth estimation methods for high-quality 3D reconstruction. The camera setting represents the physical distance, called the baseline, between camera viewpoints. Our proposed method focuses on deciding the appropriate depth estimation methodology according to the characteristics of the multi-view video content. Empirical results revealed limitations when existing multi-view depth estimation methods were applied to a divergent or large-baseline dataset. We therefore verified the necessity of obtaining the proper number of source views and of applying a source-view selection algorithm suited to each dataset's capturing environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can be used as a guideline for finding adaptive depth estimation methods.
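
A baseline-driven source-view selection rule of the kind discussed above might look like the following sketch (the number of views and the nearest-baseline criterion are illustrative assumptions):

```python
import numpy as np

def select_source_views(ref_center: np.ndarray, all_centers: np.ndarray, n_views: int = 4):
    """Pick the n_views cameras with the smallest baseline to the reference camera."""
    baselines = np.linalg.norm(all_centers - ref_center, axis=1)   # physical camera-to-camera distances
    order = np.argsort(baselines)
    return [int(i) for i in order if baselines[i] > 0][:n_views]   # skip the reference camera itself
```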

Flexible GGOP prediction structure for multi-view video coding (다시점 동영상 부호화를 위한 가변형 다시점GOP 예측 구조)

  • Yoon, Jae-Won;Seo, Jung-Dong;Kim, Yong-Tae;Park, Chang-Seob;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.420-430 / 2006
  • In this paper, we propose a flexible GGOP prediction structure to improve coding efficiency for multi-view video coding (MVC). The reference software for MVC generally uses a fixed GGOP prediction structure, but MVC performance depends on the base view and on the number of B-pictures between the I-picture (or P-picture) and the next P-picture. To implement the flexible GGOP prediction structure, the location of the base view is decided according to the global disparities among adjacent sequences, and the number of B-pictures between the I-picture (or P-picture) and the P-picture is decided by the camera arrangement, such as the baseline distance between the cameras. The proposed method outperforms the MVC reference software, reducing the coded bits by a considerable 7.1%.
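
A heavily simplified sketch of the two decisions described above; the concrete rules and thresholds here are illustrative assumptions, not the paper's actual criteria:

```python
def choose_base_view(global_disparity):
    """Pick as base view the view with the smallest summed global disparity to the other views."""
    sums = [sum(row) for row in global_disparity]   # global_disparity[i][j]: disparity between views i and j
    return sums.index(min(sums))

def num_b_pictures(baseline_cm: float) -> int:
    """Use longer B-picture runs between anchor pictures when cameras are closely spaced."""
    if baseline_cm < 10.0:
        return 7
    if baseline_cm < 30.0:
        return 3
    return 1
```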

An effective indoor video surveillance system based on wide baseline cameras (Wide baseline 카메라 기반의 효과적인 실내공간 감시시스템)

  • Kim, Woong-Chang;Kim, Seung-Kyun;Choi, Kang-A;Jung, June-Young;Ko, Sung-Jea
    • Journal of IKEEE / v.14 no.4 / pp.317-323 / 2010
  • Video surveillance systems are adopted in many places because of their efficiency and constancy in monitoring a specific area over a long period of time. However, surveillance systems composed of a single static camera often produce unsatisfactory results because of their limited field of view. In this paper, we present a video surveillance system based on wide-baseline stereo cameras to overcome this limitation. We adopt the codebook algorithm and mathematical morphology to robustly model the foreground pixels of the moving object in the scene, and we calculate the trajectory of the moving object via 3D reconstruction. Experimental results show that the proposed system detects a moving object and successfully generates a top-view trajectory that tracks the location of the object in world coordinates.
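
The 3D reconstruction step can be illustrated with standard two-view triangulation; the sketch below assumes calibrated projection matrices and matched image positions of the object, and is not the system's full pipeline:

```python
import cv2
import numpy as np

def world_position(P1: np.ndarray, P2: np.ndarray, pt1, pt2) -> np.ndarray:
    """Triangulate the object's world position from its image positions in the two cameras."""
    x1 = np.asarray(pt1, dtype=np.float64).reshape(2, 1)
    x2 = np.asarray(pt2, dtype=np.float64).reshape(2, 1)
    Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
    return (Xh[:3] / Xh[3]).ravel()              # Euclidean coordinates; collect over time for the trajectory
```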

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.2C / pp.16-24 / 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometrical transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue through different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, initially detected from the camera using the statistical characteristics of face colors and deformable templates, is tracked. As a result, we can provide the motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
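
A rough sketch of colour-statistics tracking is given below, using OpenCV's CamShift as a stand-in for the paper's deformable-template tracker (function names and parameters are illustrative):

```python
import cv2

def make_face_tracker(first_frame, face_box):
    """Build a hue histogram of the detected face and return a per-frame tracking function."""
    x, y, w, h = face_box
    hsv = cv2.cvtColor(first_frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [32], [0, 180])   # hue statistics of the face
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    def track(frame, box):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)         # face-colour likelihood map
        _, new_box = cv2.CamShift(backproj, box, term)
        return new_box                                                        # updated face window (x, y, w, h)
    return track
```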

Design of an Optical System for a Space Target Detection Camera

  • Zhang, Liu;Zhang, Jiakun;Lei, Jingwen;Xu, Yutong;Lv, Xueying
    • Current Optics and Photonics / v.6 no.4 / pp.420-429 / 2022
  • In this paper, the details and design process of an optical system for space target detection cameras are introduced. The whole system is divided into three structures. The first structure is a short-focus visible light system for rough detection in a large field of view. The field of view is 2°, the effective focal length is 1,125 mm, and the F-number is 3.83. The second structure is a telephoto visible light system for precise detection in a small field of view. The field of view is 1°, the effective focal length is 2,300 mm, and the F-number is 7.67. The third structure is an infrared light detection system. The field of view is 2°, the effective focal length is 390 mm, and the F-number is 1.3. The visible long-focus narrow field of view and visible short-focus wide field of view are switched through a turning mirror. Design results show that the modulation transfer functions of the three structures of the system are close to the diffraction limit. It can further be seen that the short-focus wide-field-of-view distortion is controlled within 0.1%, the long-focus narrow-field-of-view distortion within 0.5%, and the infrared subsystem distortion within 0.2%. The imaging effect is good and the purpose of the design is achieved.
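
A quick consistency check of the quoted specifications, using the usual relation that the entrance-pupil diameter equals the focal length divided by the F-number (a minimal sketch over the abstract's numbers):

```python
# Entrance-pupil diameters implied by the abstract's focal lengths and F-numbers (D = f / N).
systems = {
    "visible short-focus": (1125.0, 3.83),
    "visible telephoto":   (2300.0, 7.67),
    "infrared":            (390.0,  1.3),
}
for name, (focal_mm, f_number) in systems.items():
    print(f"{name}: D = {focal_mm / f_number:.0f} mm")
# ~294 mm, ~300 mm and 300 mm: all three structures share a similar ~300 mm aperture.
```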