• Title/Summary/Keyword: Dynamic scene


Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo; Park, Seho; Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.8, pp.3011-3024, 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system suited to dynamically moving conditions of the sensor. The orientation estimated by an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers among the feature-point matches are eliminated from the image sequences; the IMU orientation helps reject erroneous matches at an early stage in dynamic scenes. After the outliers are removed, the remaining feature-point correspondences are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented, tested, and compared with existing methods, and the experimental results show its effectiveness.
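The core idea of the abstract, using the IMU orientation to predict the epipolar geometry and then discarding matches that violate it, can be sketched in a few lines. This is a minimal illustration, not code from the paper; the translation-direction guess, the threshold, and all function names are assumptions.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def essential_from_imu(R_imu, t_guess):
    """Essential matrix E = [t]_x R built from an IMU-predicted rotation
    and a translation-direction guess (known only up to scale)."""
    t = t_guess / np.linalg.norm(t_guess)
    return skew(t) @ R_imu

def epipolar_inliers(pts1, pts2, K, E, thresh=1e-3):
    """Keep matches whose normalized points satisfy x2^T E x1 ~= 0."""
    K_inv = np.linalg.inv(K)
    x1 = (K_inv @ np.c_[pts1, np.ones(len(pts1))].T).T
    x2 = (K_inv @ np.c_[pts2, np.ones(len(pts2))].T).T
    residual = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
    return residual < thresh
```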

A Facial Expression Recognition Method Using Two-Stream Convolutional Networks in Natural Scenes

  • Zhao, Lixin
    • Journal of Information Processing Systems, v.17 no.2, pp.399-410, 2021
  • Because complex external variables in natural scenes strongly affect facial expression recognition results, a recognition method based on a two-stream convolutional neural network is proposed. The model introduces exponentially enhanced shared input weights before each convolutional input level and applies soft attention modules to the combined spatio-temporal features of the static and dynamic streams. This lets the network autonomously locate the regions most relevant to the expression category and attend to them, suppressing information from irrelevant, interfering regions. To address the poor local robustness caused by lighting and expression changes, the images are also preprocessed with a lighting preprocessing chain algorithm that removes most lighting effects. Experimental results on the AFEW6.0 and Multi-PIE datasets show recognition rates of 95.05% and 61.40%, respectively, which are better than those of the compared methods.
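As a rough illustration of the soft attention step described above, the sketch below re-weights fused static/dynamic feature maps with a learned spatial weight map. The module name, layer sizes, and fusion by concatenation are assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class SoftSpatialAttention(nn.Module):
    """Soft spatial attention over fused static/dynamic feature maps:
    a 1x1 convolution scores each location, a softmax turns the scores
    into a spatial weight map, and the features are re-weighted by it."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, static_feat, dynamic_feat):
        # static_feat, dynamic_feat: (N, C, H, W) from the two streams
        fused = torch.cat([static_feat, dynamic_feat], dim=1)   # (N, 2C, H, W)
        scores = self.score(fused)                              # (N, 1, H, W)
        n, _, h, w = scores.shape
        attn = torch.softmax(scores.view(n, 1, h * w), dim=-1).view(n, 1, h, w)
        return fused * attn                                     # attended features
```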

A HDR Algorithm for Single Image Based on Exposure Fusion Using Variable Gamma Coefficient (가변적 감마 계수를 이용한 노출융합기반 단일영상 HDR기법)

  • Han, Kyu-Phil
    • Journal of Korea Multimedia Society, v.24 no.8, pp.1059-1067, 2021
  • This paper proposes an HDR algorithm for a single image based on exposure fusion that adaptively calculates gamma correction coefficients according to the image distribution. Typical HDR methods require at least three images of the same scene with different exposure values, so they cannot be applied to a single-shot image. Single-image HDR enhancements using tone mapping and histogram modification have recently been presented, but they create location-specific noise due to improper corrections. The proposed algorithm therefore calculates gamma coefficients appropriate to the distribution of the input image and generates differently exposed images by stretching the dark and bright regions. An HDR image is then reproduced by controlling the exposure-fusion weights between the gamma-corrected and original pixels. As a result, the proposed algorithm reduces noise in both flat and edge areas and achieves subjectively better image quality than conventional methods.
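A minimal sketch of the general idea, synthesizing brighter and darker gamma-corrected versions of a single image and blending them with exposure-fusion weights, is shown below. The fixed gamma values and the well-exposedness weight are placeholders; the paper derives its gamma coefficients adaptively from the image distribution.

```python
import numpy as np

def single_image_exposure_fusion(img, gamma_bright=0.5, gamma_dark=2.0):
    """Synthesize a brighter (gamma < 1) and a darker (gamma > 1) version
    of the input and blend them with simple well-exposedness weights.
    Gamma values are fixed here only for clarity of the sketch."""
    img = img.astype(np.float64) / 255.0
    exposures = [img, img ** gamma_bright, img ** gamma_dark]

    def well_exposedness(x):
        # favor pixels near mid-gray (0.5); sigma chosen for illustration
        return np.exp(-((x - 0.5) ** 2) / (2 * 0.2 ** 2))

    weights = [well_exposedness(e) for e in exposures]
    total = np.sum(weights, axis=0) + 1e-12
    fused = sum(w * e for w, e in zip(weights, exposures)) / total
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```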

An Approximation Technique for Real-time Rendering of Phong Reflection Model with Image-based Lighting (영상 기반 조명을 적용한 퐁 반사 모델의 실시간 렌더링을위한 근사 기법)

  • Jeong, Taehong; Shin, Hyun Joon
    • Journal of the Korea Computer Graphics Society, v.20 no.1, pp.13-19, 2014
  • In this paper, we introduce a real-time method to render a 3D scene using image-based lighting. Previous approaches to image-based lighting focused on diffuse reflection and mirror-like specular reflection. We provide a simple preprocessing approach to efficiently approximate the Phong reflection model, which has been used in computer graphics applications for several decades. For diffuse reflection, we generate a texture map by integrating the source image in a preprocessing step, similarly to previous approaches. We adopt the same idea to produce a set of specular reflection maps for various material shininess values. As a result, we can render a dynamic scene without high computational complexity or numerous texture-map accesses.
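The preprocessing idea, prefiltering the environment map once per shininess value so that run-time shading reduces to a texture lookup along the reflection vector, can be illustrated with a brute-force sketch. This is only a demonstration of the principle for tiny lat-long maps, not the paper's implementation.

```python
import numpy as np

def latlong_directions(height, width):
    """Unit directions and sin(theta) solid-angle weights for a lat-long map."""
    theta = (np.arange(height) + 0.5) / height * np.pi
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)
    return dirs, np.sin(t)

def prefilter_phong(env, shininess):
    """Convolve a small lat-long environment map with the Phong specular
    lobe max(0, R.L)^n; each output texel stores the lobe-weighted average
    radiance for that reflection direction. Brute force, demo sizes only."""
    h, w, _ = env.shape
    dirs, sin_t = latlong_directions(h, w)
    L = dirs.reshape(-1, 3)                          # all light directions
    radiance = env.reshape(-1, 3).astype(np.float64)
    weights = sin_t.reshape(-1)
    out = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            lobe = np.clip(L @ dirs[i, j], 0.0, None) ** shininess * weights
            out[i, j] = radiance.T @ lobe / (lobe.sum() + 1e-12)
    return out
```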

Image Fusion using RGB and Near Infrared Image (컬러 영상과 근적외선 영상을 이용한 영상 융합)

  • Kil, Taeho; Cho, Nam Ik
    • Journal of Broadcast Engineering, v.21 no.4, pp.515-524, 2016
  • Infrared (IR) wavelengths lie outside the visible range and are usually cut by hot-mirror filters in commercial cameras. However, information from the near-IR (NIR) range is known to improve the overall visibility of a scene in many cases; for example, when there is fog or haze, the NIR image is clearer than the visible image because of its stronger penetration. In this paper, we propose an algorithm that fuses RGB and NIR images to obtain enhanced images of outdoor scenes. We first construct a weight map by comparing the contrast of the RGB and NIR images and then fuse the two images based on that weight map. Experimental results show that the proposed method is effective in enhancing the visible image and removing haze.
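A simplified version of a contrast-driven weight map and fusion step might look like the following; the Laplacian contrast measure, box smoothing, and luminance-gain transfer are illustrative choices rather than the paper's exact formulation. The NIR input is assumed to be a single-channel image registered to the RGB image.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_rgb_nir(rgb, nir, window=8):
    """Build a weight map from smoothed local contrast (|Laplacian|) of the
    visible luminance vs. the NIR image, blend the luminance by that weight,
    and transfer the luminance gain back to the color channels."""
    rgb = rgb.astype(np.float64) / 255.0
    nir = nir.astype(np.float64) / 255.0
    luma = rgb.mean(axis=2)                        # crude luminance

    c_vis = uniform_filter(np.abs(laplace(luma)), size=window)
    c_nir = uniform_filter(np.abs(laplace(nir)), size=window)
    w = c_nir / (c_vis + c_nir + 1e-12)            # NIR weight in [0, 1]

    fused_luma = (1.0 - w) * luma + w * nir
    gain = fused_luma / (luma + 1e-12)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```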

Detection of Pavement Borderline in Natural Scene using Radial Region Split for Visually Impaired Person (방사형 영역 분할법에 의한 자연영상에서의 보도 경계선 검출)

  • Weon, Sun-Hee; Kim, Gye-Young; Na, Hyeon-Suk
    • Journal of the Korea Society of Computer and Information, v.17 no.7, pp.67-76, 2012
  • This paper proposes an efficient method that helps a visually impaired person detect a pavement borderline. The pedestrian carries a camera that captures the front view of a natural scene, and our approach analyzes the captured image to detect the pavement borderline robustly in two steps. In the first step, it detects a vanishing point and vanishing lines by applying an edge operator whose threshold is chosen adaptively so that it can handle a dynamic environment. In the second step, it determines the pavement borderlines from the detected vanishing lines: the lines are analyzed to form VRays that confine the pavement, and the VRays segment out the pavement region in a radial manner. We compared our approach against the Canny edge detector, and the experimental results show that it detects pavement borderlines very accurately in various situations.
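The first step, adaptive edge thresholding followed by vanishing-line and vanishing-point estimation, could be prototyped roughly as below, assuming OpenCV is available; the median-based thresholds and least-squares line intersection are stand-ins for the paper's adaptive edge operator and radial region split, not a reproduction of them.

```python
import cv2
import numpy as np

def vanishing_point_candidate(gray):
    """Adaptive Canny thresholds from the median intensity, Hough line
    detection, then the vanishing point as the least-squares intersection
    of the detected lines (illustrative scheme only)."""
    med = np.median(gray)
    edges = cv2.Canny(gray, int(0.66 * med), int(1.33 * med))
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)   # rho, theta pairs
    if lines is None:
        return None
    # each line satisfies rho = x*cos(theta) + y*sin(theta): solve A [x y]^T ~= b
    A, b = [], []
    for rho, theta in lines[:, 0]:
        A.append([np.cos(theta), np.sin(theta)])
        b.append(rho)
    vp, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return vp  # (x, y) in image coordinates
```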

Global Utopia and Local Anxiety on the Stage of the Korean Musical

  • Choi, Sung Hee
    • Cross-Cultural Studies, v.36, pp.123-147, 2014
  • The purpose of this essay is three-fold: to trace the genealogy of the Korean musical, which ever since its inception in the 1960s has sought to modernize Korean theater with Broadway as a constant role model; to investigate how the national and the global conflict and are conflated in the form of the Korean musical in the process of its (dis)identification with Broadway; and to examine how its intercultural translations reveal and reflect the dilemma and ambivalence posed by globalization in our era. Drawing on Richard Dyer's signature article "Entertainment and Utopia," I analyze how the Korean musical manifests and channels the competing utopian impulses of Korean/global audiences. I also attempt to problematize the formulaic notion of Broadway musicals as "the Superior Other," which implies a global hegemony that does not, in fact, exist, because the boundary between the global and the local, like the power dynamics of global culture, is not fixed but constantly moving and changing. Today's musical scene in Korea shows interesting reversals from the 1990s, when Korean producers were eager to debut on Broadway and impress American audiences. Korean producers no longer look up to Broadway as a final destination; instead they want to make Seoul a new Broadway. They import Broadway musicals and turn them into Korean shows. The glamour of Broadway is no longer the main attraction of musicals in Korea. What young audiences look for most is the glamour of K-pop idols and the utopian feelings of abundance, energy, intensity, transparency, and community, which they can experience live in the musical with their favorite stars right in front of their eyes. In conclusion, I delve into the complex dynamics of recent Korean musicals with Thomas Friedman's theory of Globalization 3.0 as a reference. The binary formula of Global/America versus Local/Korea cannot be applied to the dynamic and intercultural musical scene of today. Globalization is not a uniform phenomenon but rather a twofold (multifold) process of global domination and dissemination, in which the global and the local constantly conflict and are conflated. As this study tries to illuminate, the Korean musical has evolved within a huge net of interdependences between the global and the local, with a range of sources, powers, and influences.

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang; KiHong Kim; JuneSok Lee; JiChu Tang; GaoHe Zhang; ZhengRan Liu; QianRu Liu; ShiJie Sun; YuTing Wang; KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication, v.16 no.1, pp.106-123, 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier mixed-reality live broadcasts, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera driving to launch the virtual cartoon character talk show "Beast Town," which blends the real and the virtual, enhances program immersion and the audio-visual experience, and expands the boundaries of virtual production. In the talk show, motion-capture shooting is used for the final picture synthesis. The virtual scene must present dynamic effects while the digital human is driven and the whole picture moves with camera pushes, pulls, and pans, which places very high demands on multi-party data synchronization, real-time driving of the digital human, and rendering of the composited picture. We focus on issues such as linking virtual and real data and the quality of monocular-camera motion capture, and we combine outward camera tracking, multi-scene perspective, multi-machine rendering, and other solutions to address picture-linkage and rendering-quality problems in a deeply immersive space, presenting users with visual effects in which the digital human and live guests interact.

Grid Acceleration Structure for Efficiently Tracing the Secondary Rays in Dynamic Scenes on Mobile Platforms (모바일 환경에서의 동적 장면의 효율적인 이차 광선 추적을 위한 격자 가속 구조)

  • Seo, Woong; Choi, Byeongjun; Ihm, Insung
    • Journal of KIISE, v.44 no.6, pp.573-580, 2017
  • Despite the recent remarkable advances in the computing power of mobile devices, heat and battery constraints still limit their performance, particularly compared with PCs. When applying ray tracing for high-quality rendering on such devices, it is therefore worthwhile to trace only the secondary rays while the effects of the primary rays are generated through rasterization-based OpenGL ES rendering. Since most of the rendering time in such a method is spent processing secondary rays, this paper proposes a new volume-grid technique for dynamic scenes that improves the tracing performance of secondary rays with low coherence. The proposed method models all possible spatial secondary rays with a fixed number of sampling rays, alleviating the problem of visiting every cell along a ray in a uniform grid. A hybrid rendering pipeline that speeds up overall rendering by exploiting both the CPU and GPU of the mobile device is also presented.
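For context, the per-ray cost that the fixed set of sampling rays is meant to avoid is the cell-by-cell traversal of a uniform grid. The sketch below shows a standard 3D DDA traversal (Amanatides-Woo style) as that baseline; it is not the paper's sampling-ray construction.

```python
import numpy as np

def grid_traversal(origin, direction, cell_size, grid_dims, max_steps=256):
    """Standard 3D DDA traversal of a uniform grid, yielding the cells a
    ray visits in order. Every visited cell must be tested, which is what
    makes low-coherence secondary rays expensive."""
    direction = direction / np.linalg.norm(direction)
    cell = np.floor(origin / cell_size).astype(int)
    step = np.where(direction >= 0, 1, -1)
    next_boundary = (cell + (step > 0)) * cell_size

    safe_dir = np.where(direction != 0, direction, np.nan)
    t_max = np.where(direction != 0, (next_boundary - origin) / safe_dir, np.inf)
    t_delta = np.where(direction != 0, np.abs(cell_size / safe_dir), np.inf)

    for _ in range(max_steps):
        if np.any(cell < 0) or np.any(cell >= grid_dims):
            break
        yield tuple(cell)
        axis = int(np.argmin(t_max))      # cross the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# e.g. list(grid_traversal(np.array([0.5, 0.5, 0.5]),
#                          np.array([1.0, 0.3, 0.0]), 1.0, (8, 8, 8)))
```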

MTF measuring method of TDI camera electronics

  • Kim, Young-Sun; Kong, Jong-Pil; Heo, Haeng-Pal; Park, Jong-Euk; Yong, Sang-Soon; Choi, Hae-Jin
    • Proceedings of the KSRS Conference, 2007.10a, pp.540-543, 2007
  • The modulation transfer function (MTF) of a camera system measures how faithfully the system reproduces the original scene. An electro-optical camera system consists of optics, an array of pixels, and the electronics of the image signal chain, and the system MTF is the cascade (product) of each element's MTF in the frequency domain. Consequently, the electronics MTF, including the detector MTF, can easily be recovered from the measured system MTF when well-characterized test optics are used. A Time-Delay and Integration (TDI) detector increases the signal by taking multiple exposures of the same object and adding them, and various methods can be considered for measuring the MTF of a TDI camera system. This paper presents practical MTF measuring methods for the detector and electronics of a TDI camera. Several methods are described according to the scan direction and the TDI stages, such as the single-line and multiple-line modes. The measurement is performed in the static or dynamic condition to obtain the point spread function (PSF) or the line spread function (LSF); in particular, a dynamic test bench simulates the on-track velocity so that the motion is synchronized with the TDI read-out frequency.
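The cascade relation mentioned in the abstract is straightforward to apply numerically: the system MTF is the product of the component MTFs, so a known test-optics MTF can be divided out of the measured system MTF. A small sketch with illustrative function names follows.

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch):
    """MTF as the normalized magnitude of the Fourier transform of the
    measured line spread function; also returns the spatial frequencies."""
    lsf = lsf - lsf.min()
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                    # normalize DC to 1
    freqs = np.fft.rfftfreq(len(lsf), d=sample_pitch)
    return freqs, mtf

def electronics_mtf(system_mtf, optics_mtf, eps=1e-6):
    """Cascade property: MTF_system = MTF_optics * MTF_detector+electronics,
    so the detector-plus-electronics MTF follows by division when the test
    optics MTF is known. eps guards against division by near-zero values."""
    return system_mtf / np.maximum(optics_mtf, eps)
```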
