• Title/Summary/Keyword: Center scene

Study on Simulator Sickness Measure on Scene Movement Based Ship Handling Simulator Using SSQ and COP (시각적 동요 기반 선박운항 시뮬레이터에서 SSQ와 COP를 이용한 시뮬레이터 멀미 계측에 관한 연구)

  • Fang, Tae-Hyun;Jang, Jun-Hyuk;Oh, Seung-Bin;Kim, Hong-Tae
    • Journal of Navigation and Port Research / v.38 no.5 / pp.485-491 / 2014
  • In this paper, we propose that the effects of simulator sickness caused by scene movement in a ship handling simulator can be measured using the center of pressure (COP) and the simulator sickness questionnaire (SSQ). In the simulator sickness experiments, twelve participants were exposed to scene movement from the ship handling simulator at three levels of sea state. During the experiments, each subject's COP was measured with a force plate. After exposure to the scene movement, the subjects described their sickness symptoms by answering the SSQ. By analysing the results for scene movement, SSQ, and COP, the relation between simulator sickness and COP is investigated. Formulations for the SSQ score and COP as functions of sea state are obtained by curve fitting, and it is suggested that the longitudinal COP can be used to measure simulator sickness.
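
The curve-fitting step the abstract mentions can be sketched as follows; the sea-state levels and SSQ totals below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical illustration: fit the SSQ score as a polynomial function of
# sea state, as the paper does via curve fitting (all values are invented).
sea_state = np.array([1.0, 2.0, 3.0])      # three sea-state steps
ssq_score = np.array([12.0, 25.0, 52.0])   # hypothetical mean SSQ totals

# Quadratic fit: ssq ≈ c2*s^2 + c1*s + c0
coeffs = np.polyfit(sea_state, ssq_score, deg=2)
predict = np.poly1d(coeffs)

print(float(predict(2.0)))  # reproduces the fitted value at sea state 2
```

The same fit would be applied to the longitudinal COP measurements to obtain a COP-versus-sea-state formulation.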

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data are then regularized and provided as a depth image, which is used together with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is combined with the color images in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of sensor fusion and various techniques for high-quality depth generation using multiple cameras together with depth cameras.

Geometric and Semantic Improvement for Unbiased Scene Graph Generation

  • Ruhui Zhang;Pengcheng Xu;Kang Kang;You Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2643-2657 / 2023
  • Scene graphs are structured representations that clearly convey objects and the relationships between them, but they are often heavily biased because of the highly skewed, long-tailed relation labels in the dataset. Indeed, the visual world itself and its descriptions are biased. Unbiased Scene Graph Generation (USGG) therefore trains models to eliminate long-tail effects as much as possible rather than altering the dataset directly. To this end, we propose Geometric and Semantic Improvement (GSI) for USGG to mitigate this issue. First, to fully exploit the feature information in the images, geometric and semantic enhancement modules are designed. The geometric module is built on the observation that the relative positions of neighboring object pairs affect each other, which improves the recall of relationships across the dataset. The semantic module further processes the embedded word vectors to enhance the acquisition of semantic information. Then, to improve recall on the tail data, the Class Balanced Seesaw Loss (CBSLoss) is designed, which penalizes body or tail relations that are judged incorrectly in the dataset. Experiments demonstrate that GSI outperforms mainstream models on the mean Recall@K (mR@K) metric in three tasks, and that it addresses the long-tailed imbalance of the Visual Genome 150 (VG150) dataset better than most existing methods.
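
The seesaw-style re-weighting idea behind a class-balanced loss can be sketched as below. This is a deliberately simplified, hypothetical version of the general seesaw mechanism (down-weighting penalties that a frequent class imposes on a rarer class by their count ratio), not the paper's exact CBSLoss formulation.

```python
import numpy as np

# Hypothetical sketch: penalties that head class i imposes on a rarer
# class j are mitigated by the count ratio (N_j / N_i) ** p, so tail
# relations are not overwhelmed by head-class gradients.
def seesaw_weights(class_counts, p=0.8):
    counts = np.asarray(class_counts, dtype=float)
    ratio = counts[None, :] / counts[:, None]   # ratio[i, j] = N_j / N_i
    w = np.ones_like(ratio)
    mask = ratio < 1.0                          # class j rarer than class i
    w[mask] = ratio[mask] ** p                  # mitigate punishment of tail j
    return w

# Head, body, and tail relation counts (invented numbers)
w = seesaw_weights([1000, 100, 10])
print(w.shape)  # one mitigation factor per ordered class pair
```

In a full loss these factors would scale the negative-class terms of a softmax cross-entropy over relation labels.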

Incorporation of Scene Geometry in Least Squares Correlation Matching for DEM Generation from Linear Pushbroom Images

  • Kim, Tae-Jung;Yoon, Tae-Hun;Lee, Heung-Kyu
    • Proceedings of the KSRS Conference / 1999.11a / pp.182-187 / 1999
  • Stereo matching is one of the most crucial parts of DEM generation. Naive stereo matching algorithms often create many holes and blunders in a DEM, so a carefully designed strategy must be employed to guide them toward producing "good" 3D information. In this paper, we describe one such strategy based on scene geometry, in particular the epipolarity, for generating a DEM from linear pushbroom images. The epipolarity of perspective images is a well-known property: in a stereo pair, a point in the reference image maps to a line in the search image uniquely defined by the sensor models of the pair. This concept is usually exploited by applying epipolar resampling prior to matching. For linear pushbroom images, however, the epipolar geometry is more complicated: the epipolarity can only be described by a hyperbola-shaped curve, and epipolar resampling cannot be applied. Instead, we have developed an algorithm that incorporates this epipolarity directly in least squares correlation matching. Experiments showed that this approach improves the quality of a DEM.
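
The idea of restricting the correlation search to points sampled along the epipolar curve can be sketched as follows. The data and the straight "curve" here are synthetic stand-ins; in the paper the candidate points come from the pushbroom sensor model's hyperbola-shaped curve, and the matching is least squares correlation rather than the plain normalized cross-correlation used for brevity.

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation between two equally sized patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_curve(ref_patch, search_img, curve_pts, half=2):
    # curve_pts: (row, col) samples of the epipolar curve in the search image
    best, best_pt = -2.0, None
    for r, c in curve_pts:
        patch = search_img[r - half:r + half + 1, c - half:c + half + 1]
        if patch.shape != ref_patch.shape:
            continue
        score = ncc(ref_patch, patch)
        if score > best:
            best, best_pt = score, (r, c)
    return best_pt, best

rng = np.random.default_rng(0)
img = rng.random((40, 40))
ref = img[10:15, 20:25].copy()            # true match centered at (12, 22)
curve = [(12, c) for c in range(5, 35)]   # degenerate straight "curve" for the demo
pt, score = match_along_curve(ref, img, curve)
print(pt)
```

Constraining the search this way removes off-curve false matches, which is the source of the holes and blunders the naive strategy produces.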

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene's depth in real time, its output depth images suffer from noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
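
The core alignment step in this kind of color/depth fusion is back-projecting a depth-camera pixel to 3D and reprojecting it into a color camera. A minimal pinhole-model sketch, with hypothetical intrinsics and an identity extrinsic pose:

```python
import numpy as np

# Minimal sketch of depth-to-color reprojection (hypothetical calibration):
# back-project pixel (u, v) with measured depth from the depth camera,
# then transform into the color camera frame and project.
def reproject(u, v, depth, K_d, K_c, R, t):
    p = np.linalg.inv(K_d) @ np.array([u, v, 1.0]) * depth  # 3D point
    q = K_c @ (R @ p + t)                                   # project to color cam
    return q[0] / q[2], q[1] / q[2]

# Hypothetical shared intrinsics: focal length 500 px, center (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
u, v = reproject(320, 240, 2.0, K, K, np.eye(3), np.zeros(3))
print(u, v)  # with identity pose, the principal point maps to itself
```

In practice R and t come from stereo calibration between the depth and color cameras, and the reprojected depth seeds or constrains the subsequent stereo matching.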

Sidelobe Reduction Method for Improvement of Airborne SAR Image (항공 SAR 영상 화질 개선을 위한 사이드로브 감소 기법)

  • Shin, Hee-Sub;Ok, Jae-Woo;Woo, Jae-Choon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.26 no.11 / pp.1027-1030 / 2015
  • In airborne SAR, motion errors induced by atmospheric turbulence decrease the resolution and increase the sidelobes; if the sidelobes are not properly compensated, image quality is degraded. In this paper, we therefore introduce a sidelobe reduction method to improve image quality. After calculating the scene center from the squint angle estimated for each flight-path segment obtained by the subaperture technique, we perform motion compensation for the scene center. We then apply recursive sidelobe reduction to the region of interest in the reconstructed SAR image and extend it to the full image.

A Study on the Visualization of the Earthquake Information in AR Environments (AR 환경에서의 지진 정보 가시화 방안 연구)

  • Bae, Seonghun;Jung, Gichul;Kim, EunHee
    • Korean Journal of Computational Design and Engineering / v.20 no.1 / pp.55-64 / 2015
  • Earthquakes are natural disasters that cause loss of life and property damage, and they have been occurring more often in Korea recently. Moreover, given the growing number of massive buildings, it is necessary to predict and visualize the vibration information of a building. In this paper, we develop a prototype framework to visualize displacement information in AR environments. To avoid irregular halts of the scene and unnatural distortion of the objects, the framework synchronizes at the scene update time and interpolates the sensor data for the displacement of vertices. In addition, we study displacement estimation methods based on acceleration data in order to extend the framework to systems with accelerometers.
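
The interpolation step described above can be sketched as follows: displacement samples arrive at the sensor rate, and values are interpolated at each scene update so vertices move smoothly. Timestamps and displacement values here are hypothetical.

```python
import numpy as np

# Hypothetical sensor samples: timestamps (s) and measured displacement (mm)
sensor_t = np.array([0.0, 0.1, 0.2, 0.3])
sensor_dx = np.array([0.0, 2.0, -1.0, 0.5])

# Scene updates at roughly 60 Hz; interpolate displacement per frame so the
# rendered object moves without irregular halts between sensor samples.
frame_t = np.arange(0.0, 0.3, 0.016)
frame_dx = np.interp(frame_t, sensor_t, sensor_dx)

print(len(frame_dx))
```

Each interpolated value would then be applied as a vertex offset at that frame's scene update.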

Scene Change Detection Using Global Direction & Center of Edge (전역적 에지의 중점 및 방향성을 이용한 장면 전환 검출)

  • Lee, Jeong-Bong;Yoon, Pil-Young;Park, Jang-Chun
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.751-754 / 2002
  • We propose a scene change detection system based on the global shape flow of the whole image rather than on object recognition. The shape flow captures global features of the overall image shape and is extracted from the edges in the image, the center of the edges, their standard deviation, and the distribution of edge energy. In this paper, a median filter and a modified Laplacian filter are used for efficient edge detection, yielding better edge information than the commonly used Laplacian filter. To detect scene changes more accurately, the edge information is subdivided into horizontal $(0^{\circ})$, vertical $(90^{\circ})$, and diagonal $(45^{\circ},\;135^{\circ})$ directions, and the relative distribution of directional edge energy (Energy of edge) is compared between frames.
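
The directional edge-energy comparison can be sketched as below. Simple central-difference gradients stand in for the paper's median and modified Laplacian filters, and the threshold is a made-up value.

```python
import numpy as np

# Sketch: quantize edge gradients into four directions (0°, 45°, 90°, 135°)
# and compare the per-direction edge-energy distribution between frames.
def direction_energy(img):
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]       # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]       # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.round(ang / 45.0).astype(int) % 4  # 0:0°, 1:45°, 2:90°, 3:135°
    return np.array([mag[bins == k].sum() for k in range(4)])

def scene_change(frame_a, frame_b, threshold=0.5):
    ea, eb = direction_energy(frame_a), direction_energy(frame_b)
    # relative change of the directional edge-energy distribution
    diff = np.abs(ea - eb).sum() / max(ea.sum(), 1e-9)
    return bool(diff > threshold)

a = np.zeros((16, 16)); a[:, 8:] = 1.0           # frame with a vertical edge
b = np.zeros((16, 16)); b[8:, :] = 1.0           # frame with a horizontal edge
print(scene_change(a, b))  # True: edge energy moved between direction bins
```

A cut is declared when the directional distribution shifts sharply between consecutive frames, even if the total edge energy stays similar.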

Scene-based Nonuniformity Correction by Deep Neural Network with Image Roughness-like and Spatial Noise Cost Functions

  • Hong, Yong-hee;Song, Nam-Hun;Kim, Dae-Hyeon;Jun, Chan-Won;Jhee, Ho-Jin
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.11-19 / 2019
  • In this paper, a new Scene-based Nonuniformity Correction (SBNUC) method is proposed that applies image-roughness-like and spatial-noise cost functions to a deep neural network structure. Classic nonuniformity correction approaches generally require many sequential images to acquire accurate correction offset coefficients. The proposed method, however, can estimate the offset from only a couple of images, thanks to the characteristics of the deep neural network scheme. Real-world SWIR image sets are used to verify the performance of the proposed method, and the results show an image quality improvement of up to 70.3 dB PSNR, about 8.0 dB more than the improved IRLMS algorithm, which additionally requires precise image registration of consecutive frames.
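
The scene-based premise can be illustrated with a classic non-neural baseline: the fixed-pattern offset is the per-pixel component that stays constant across frames, so averaging each pixel's deviation from a local spatial mean estimates it. This is a hypothetical baseline for intuition, not the paper's deep-network method, and the image values are synthetic.

```python
import numpy as np

def local_mean(img, k=3):
    # k x k box filter with edge padding
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def estimate_offset(frames):
    # the high-frequency residual that persists across frames is
    # attributed to the fixed-pattern offset, not the moving scene
    return np.mean([f - local_mean(f) for f in frames], axis=0)

rng = np.random.default_rng(1)
true_offset = rng.normal(0.0, 5.0, (8, 8))       # synthetic fixed pattern
frames = [np.full((8, 8), 100.0 + 10.0 * t) + true_offset for t in range(20)]
est = estimate_offset(frames)
corrected = frames[0] - est                      # fixed pattern suppressed
```

The abstract's point is that the deep-network formulation reaches a comparable offset estimate from only a couple of frames, where baselines of this kind need long sequences.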

Extension of Range Migration Algorithm for Airborne SAR Data Processing

  • Shin, Hee-Sub;Song, Won-Gyu;Son, Jun-Won;Jung, Yong-Hwan;Lim, Jong-Tae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.857-860 / 2005
  • Several algorithms have been developed for processing spotlight synthetic aperture radar (SAR) data. In particular, the range migration algorithm (RMA) does not assume that the illuminating wavefronts are planar, and it can produce a high-resolution image. This paper introduces an extension of the original RMA that enables more efficient airborne SAR data processing by considering more general motion and scenes than the original RMA. The presented formulation is analyzed using the principle of stationary phase. Finally, the extended algorithm is tested with numerical simulations of pulsed spotlight SAR.
