• Title/Summary/Keyword: Parallax


Planning Large Program of Stellar Maser Study with KaVA

  • Cho, Se-Hyung;Imai, Hiroshi
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.2
    • /
    • pp.114-114
    • /
    • 2014
  • We present our activities related to planning possible forms of a large program to study circumstellar H2O and SiO maser sources with KaVA. A great advantage of KaVA for stellar maser observations is that it combines the unique multi-frequency phase-referencing capability of KVN and the dual-beam astrometry of VERA with KaVA's relatively dense antenna configuration. We have demonstrated this advantage through test observations conducted by the KaVA Evolved Stars Sub-working Group since March 2012. Snapshot KaVA imaging is confirmed to be possible with an integration time of 0.5 hour in the 22 GHz band and 1.0 hour in the 43 GHz band in typical cases. This implies that large snapshot imaging surveys of many H2O and SiO stellar masers are possible within a reasonable amount of observing (machine) time (e.g., scans on ~100 maser sources within 200 hours). This possibility enables us to select maser sources suitable for future long-term (10 years), intensive (biweekly to monthly) monitoring observations from 1000 potential target candidates chosen from dual-frequency (K/Q-band) KVN single-dish observations. The output of the survey programs will be used for statistical analysis of the structures of individual stellar maser clumps and of the spatio-kinematical structures of circumstellar envelopes with accelerating outflows. The combination of milliarcsecond (mas) level astrometry and the multi-frequency phase-referencing technique yields not only trigonometric parallax distances to the masers but also precise position references for the registration of different maser lines. (See the note on the parallax-distance relation after this entry.) The accuracy of the map registration affects the interpretation of the excitation mechanism of the SiO maser lines and of the origin of the variety of maser activity, which is expected to reflect periodic behavior of the circumstellar envelope driven by stellar pulsation. Currently we are checking the technical feasibility of KaVA operations for this combination. After this feasibility test, the long-term monitoring campaign will run as one of KaVA's legacy projects.

  • PDF
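  • For reference (this relation is not stated in the abstract, but it is the standard one behind the trigonometric parallax distances mentioned above): a source with annual parallax ${\pi}$ lies at a distance $d\,[\mathrm{pc}]=1/{\pi}\,[\mathrm{arcsec}]=1000/{\pi}\,[\mathrm{mas}]$, so a maser with a measured parallax of 1 mas is at roughly 1 kpc; this is why mas-level astrometric accuracy is what makes kiloparsec-scale maser distances measurable.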

Stereoscopic Camera with a CCD and Two Zoom Lenses (단일 CCD와 두개의 줌렌즈로 구성한 입체 카메라)

  • Lee, Sang-Eun;Jo, Jae-Heung;Jung, Eui-Min;Lee, Kag-Hyeon
    • Korean Journal of Optics and Photonics
    • /
    • v.17 no.1
    • /
    • pp.38-46
    • /
    • 2006
  • A stereoscopic camera based on the image-formation principle of the human eyes and brain is designed and fabricated using a single CCD and two zoom lenses. Since the two zoom lenses are separated by 65 mm, the typical human interocular distance, with a wide angle of view of $50^{\circ}$ and a convergence angle variable from $0^{\circ}$ to $16^{\circ}$, the camera can operate with binocular parallax similar to that of human eyes. In order to capture dynamic stereoscopic pictures, a shutter blade that selects the left and right images in turn, an X-cube image combiner for composing the two images passed through the blade, and a CCD running at 60 frames per second are used.
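  • As a small illustration of the convergence geometry described above, here is a hedged sketch of my own (it assumes the $16^{\circ}$ figure is the full angle between the two optical axes, which the abstract does not state explicitly) that computes where the two axes cross for the paper's 65 mm baseline:

```python
import math

# Hedged sketch (not from the paper): for a toed-in stereo rig with baseline b
# between the two lens axes and a symmetric full convergence angle theta
# between them, the axes cross at a distance d = (b / 2) / tan(theta / 2) in
# front of the rig.  With b = 65 mm and theta up to 16 degrees this gives the
# nearest distance at which the rig can converge.

def convergence_distance_mm(baseline_mm: float, full_angle_deg: float) -> float:
    """Distance at which the two optical axes intersect."""
    half_angle = math.radians(full_angle_deg) / 2.0
    return (baseline_mm / 2.0) / math.tan(half_angle)

if __name__ == "__main__":
    for theta in (2, 4, 8, 16):
        d = convergence_distance_mm(65.0, theta)
        print(f"convergence angle {theta:2d} deg -> axes cross at ~{d / 10:.0f} cm")
```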

Simulation of Distortion in Image Space due to Observer's Rotation Movement in Stereoscopic Display, and Its Correction (스테레오스코픽 디스플레이에서 관찰자의 회전이동에 따른 영상공간의 왜곡과 왜곡 보정에 대한 전산모사)

  • Kim, Dong-Wook;Lee, Kwang-Hoon;Kim, Sung-Kyu;Chang, Eun-Young
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.2
    • /
    • pp.87-93
    • /
    • 2009
  • Variation of the observer's viewing position is one of the major causes of image-space distortion in a stereoscopic display. In particular, a large image distortion arises from the observer's rotational movement about the center of the screen, because the observer's two eyes then lie at different positions along the depth direction; this distortion differs from that caused by purely horizontal or depth-directional movement of the observer. In this paper, we analyze the distortion of the image space due to the observer's rotational movement and show, through simulation for a stereoscopic display, the corrected result of the distortion. Finally, we show that the distortion produced by the observer's rotational movement has a different shape from that produced by horizontal and depth-directional movement.
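  • To make the geometric cause of the rotational distortion concrete, below is a minimal 2-D (top-down) sketch of my own, not the authors' simulation code: the left/right screen positions of one authored scene point are held fixed while the viewing rays from the rotated eye positions are intersected; the interocular distance, viewing distance, and target point are assumed placeholder values.

```python
import numpy as np

# Screen lies on the x-axis (y = 0); the content is authored for eyes at
# distance D with interocular distance B.  After the observer rotates about
# the screen centre, the two eyes sit at different depths, so the rays through
# the fixed left/right screen points meet at a distorted location.

B = 0.065   # interocular distance [m] (assumed)
D = 1.0     # nominal viewing distance [m] (assumed)

def screen_point(eye, p):
    """Project scene point p onto the screen plane y = 0 as seen from eye."""
    t = eye[1] / (eye[1] - p[1])
    return np.array([eye[0] + t * (p[0] - eye[0]), 0.0])

def intersect(p1, d1, p2, d2):
    """Intersection of rays p1 + t*d1 and p2 + s*d2 in 2-D."""
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

eyes0 = np.array([[-B / 2, D], [B / 2, D]])   # nominal (authoring) eye positions
target = np.array([0.1, -0.2])                # intended point, 20 cm behind screen
s_l, s_r = (screen_point(e, target) for e in eyes0)

for phi_deg in (0, 10, 20, 30):
    phi = np.radians(phi_deg)
    R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    e_l, e_r = (R @ e for e in eyes0)         # observer rotated about screen centre
    perceived = intersect(e_l, s_l - e_l, e_r, s_r - e_r)
    print(f"rotation {phi_deg:2d} deg: perceived point = {perceived.round(3)}")
```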

OGLE-2015-BLG-1482L: The first isolated Galactic bulge microlens

  • Chung, Sun-Ju;Zhu, Wei;Udalski, Andrzej;Lee, Chung-Uk;Ryu, Yoon-Hyun;Jung, Youn Kil;Shin, In-Gu;Yee, Jennifer C.;Hwang, Kyu-Ha;Gould, Andrew
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.42 no.1
    • /
    • pp.44.1-44.1
    • /
    • 2017
  • The single-lens event OGLE-2015-BLG-1482 was observed simultaneously from two ground-based surveys and from Spitzer. The Spitzer data exhibit finite-source effects due to the passage of the lens close to or directly over the surface of the source star as seen from Spitzer. Thanks to the measurements of the microlens parallax and the finite-source effect, we find that the lens of OGLE-2015-BLG-1482 is either a very low-mass star with mass $0.10{\pm}0.02M_{\odot}$ or a brown dwarf with mass $55{\pm}9M_{J}$, located at $D_{LS}=0.80{\pm}0.19$ kpc and $D_{LS}=0.54{\pm}0.08$ kpc, respectively, and it is thus the first isolated low-mass microlens located in the Galactic bulge. The degeneracy between the two solutions is severe. The fundamental reason for the degeneracy is that the finite-source effect is seen in only a single data point from Spitzer, and this single data point gives rise to two ${\rho}$ solutions. (See the note on the standard mass relations after this entry.)

  • PDF
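  • For reference (standard microlensing relations, not quoted from the abstract): the finite-source parameter ${\rho}$ gives the angular Einstein radius ${\theta}_{E}={\theta}_{*}/{\rho}$, where ${\theta}_{*}$ is the angular radius of the source, and together with the microlens parallax ${\pi}_{E}$ it yields the lens mass $M_{L}={\theta}_{E}/({\kappa}{\pi}_{E})$ and the relative parallax ${\pi}_{rel}={\theta}_{E}{\pi}_{E}$, with ${\kappa}=4G/(c^{2}\,\mathrm{AU}){\simeq}8.14\,\mathrm{mas}/M_{\odot}$. The two ${\rho}$ solutions mentioned above therefore map directly onto the two mass solutions (low-mass star vs. brown dwarf).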

Adversarial Framework for Joint Light Field Super-resolution and Deblurring (라이트필드 초해상도와 블러 제거의 동시 수행을 위한 적대적 신경망 모델)

  • Lumentut, Jonathan Samuel;Baek, Hyungsun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.672-684
    • /
    • 2020
  • Restoring low-resolution and motion-blurred light fields has become essential owing to the growing body of work on parallax-based image processing. These tasks are known as light-field enhancement. Unfortunately, only a few state-of-the-art methods have been introduced that solve the multiple problems jointly. In this work, we design a framework that jointly solves the light-field spatial super-resolution and motion-deblurring tasks. In particular, we build a straightforward neural network that is trained on a low-resolution, 6-degree-of-freedom (6-DOF) motion-blurred light-field dataset. Furthermore, we propose a local-region optimization strategy for the adversarial network to boost performance. We evaluate our method through both quantitative and qualitative measurements and show superior performance compared to state-of-the-art methods.
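  • Because the abstract describes the approach only at a high level, here is a deliberately small, hedged PyTorch sketch of what a joint super-resolution + deblurring objective with an adversarial term can look like; the toy layers, the idea of stacking light-field views along the channel axis, and all hyper-parameters are placeholders of mine and not the paper's model (in particular, its local-region optimization strategy is not reproduced).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch only: a toy generator/discriminator pair illustrating a joint
# "deblur + 2x super-resolve with an adversarial term" objective.  Everything
# here is a placeholder, not the paper's architecture or training scheme.

class Generator(nn.Module):
    """Maps a blurred, low-resolution view stack to a sharp, 2x upscaled one."""
    def __init__(self, views: int = 9):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(views, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, views * 4, 3, padding=1),
            nn.PixelShuffle(2),  # 2x spatial super-resolution
        )

    def forward(self, x):
        return self.body(x)

class Discriminator(nn.Module):
    """Patch-wise real/fake logits for a restored or ground-truth view stack."""
    def __init__(self, views: int = 9):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(views, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def training_step(gen, disc, opt_g, opt_d, blurred_lr, sharp_hr, adv_weight=1e-3):
    """One adversarial training step on a (blurred LR, sharp HR) batch."""
    # Discriminator: real sharp stacks vs. current generator outputs.
    with torch.no_grad():
        fake = gen(blurred_lr)
    real_logits, fake_logits = disc(sharp_hr), disc(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: pixel reconstruction loss plus a small adversarial term.
    restored = gen(blurred_lr)
    logits = disc(restored)
    g_loss = (F.l1_loss(restored, sharp_hr)
              + adv_weight * F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits)))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```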

Comparison of the Size of objects in the Virtual Reality Space and real space (가상현실 공간상에서 물체의 크기와 실제 크기간의 비교연구)

  • Kim, Yun-Jung
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.383-398
    • /
    • 2017
  • Virtual reality content is being used as a medium in various fields. For virtual reality content to feel realistic, the scale of objects in the virtual environment must match their actual size, and the user must perceive them at that size. However, even if a character in the virtual reality space is made the same size as its real counterpart, size distortion can occur when the user looks at the object through an HMD. In this paper, I examine the requirements related to size in virtual reality, investigate how these requirements play out differently in virtual reality, and study how the differences affect users. Experiments and surveys comparing the size of objects in virtual reality space with the size of objects in real space were conducted to investigate how scale distortion occurs at far and near distances. I hope that this paper will be useful research for virtual reality developers.

A Study on Generation of Free Stereo Mosaic Image Using Video Sequences (비디오 프레임 영상을 이용한 자유 입체 모자이크 영상 제작에 관한 연구)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, June-Ku
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.4
    • /
    • pp.453-460
    • /
    • 2009
  • To construct 3D information from aerial photographs or video sequences, left and right stereo images with different viewing angles must be available over the overlapping area. With video sequences, left and right stereo images can be generated by mosaicking left and right slice images extracted from consecutive frames. This paper therefore focuses on generating left and right stereo mosaic images from which 3D information can be constructed, so that video sequences can be put to their best use. In stereo mosaic generation, the motion parameters between video frames must first be determined. In this paper, to determine the motion parameters, a free-mosaic method is applied that uses the geometric relationship between consecutive frame images, such as relative orientation parameters, without GPS/INS data. After the motion parameters are determined, the mosaic image is generated in four steps: image registration, image slicing, determination of the stitching line, and 3D image mosaicking (a conceptual sketch of the slicing step is given below). As the experimental result, the generated stereo mosaic images are shown and their x- and y-parallax is analyzed.
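  • The following is a purely conceptual sketch of the slicing step (my own simplification, not the authors' method, which additionally applies the estimated motion parameters and a proper stitching-line search): front and rear strips cut from consecutive registered frames are concatenated into a left/right mosaic pair.

```python
import numpy as np

# Hedged conceptual sketch: a "front" strip and a "rear" strip are cut from
# every registered frame; stitching all front strips gives one mosaic and all
# rear strips the other.  Because the two strips see the scene from slightly
# different viewing angles along the flight direction, the pair of mosaics
# forms a stereo pair.  Strip positions and widths are arbitrary placeholders.

def build_stereo_mosaics(frames, strip_width=40, front_col=100, rear_col=300):
    """frames: list of registered images (H x W numpy arrays)."""
    front_strips = [f[:, front_col:front_col + strip_width] for f in frames]
    rear_strips = [f[:, rear_col:rear_col + strip_width] for f in frames]
    left_mosaic = np.concatenate(front_strips, axis=1)   # one viewing angle
    right_mosaic = np.concatenate(rear_strips, axis=1)   # the other viewing angle
    return left_mosaic, right_mosaic

if __name__ == "__main__":
    # Toy usage with random "registered" frames of size 480 x 640.
    frames = [np.random.rand(480, 640) for _ in range(25)]
    left, right = build_stereo_mosaics(frames)
    print(left.shape, right.shape)   # (480, 1000) each for these placeholder values
```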

A Method of Frame Synchronization for Stereoscopic 3D Video (스테레오스코픽 3D 동영상을 위한 동기화 방법)

  • Park, Youngsoo;Kim, Dohoon;Hur, Namho
    • Journal of Broadcast Engineering
    • /
    • v.18 no.6
    • /
    • pp.850-858
    • /
    • 2013
  • In this paper, we propose a frame synchronization method for stereoscopic 3D video that solves the viewing problem caused by synchronization errors between the left video and the right video, using temporal frame difference images that depend on the movement of objects. First, we compute the two temporal frame difference images from the left and right videos, whose vertical parallax has been corrected by rectification, and calculate the two horizontal projection profiles of these difference images. Then, we find a pair of synchronized frames of the two videos by measuring the mean of absolute differences (MAD) between the two horizontal projection profiles. Experimental results show that the proposed method can be used for stereoscopic 3D video and is robust against Gaussian noise and H.264/AVC video compression.
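  • A minimal sketch of the described pipeline follows (my own re-implementation of the idea for illustration; the array layout, the search range, and the sign convention of the offset are assumptions, and rectification is taken as already done):

```python
import numpy as np

# Temporal difference images are reduced to horizontal projection profiles,
# and the frame offset between the left and right streams is taken as the
# shift that minimises the mean of absolute differences (MAD) between the two
# profile sequences.  'left' and 'right' are assumed to be rectified grayscale
# frame stacks of shape (num_frames, height, width).

def projection_profiles(frames):
    """Horizontal projection profile of each temporal difference image."""
    diff = np.abs(np.diff(frames.astype(np.float32), axis=0))   # (N-1, H, W)
    return diff.sum(axis=2)                                     # (N-1, H): row sums

def estimate_offset(left, right, max_offset=10):
    """Frame offset (in frames) minimising the MAD between the two profiles."""
    prof_l, prof_r = projection_profiles(left), projection_profiles(right)
    best_offset, best_mad = 0, np.inf
    for k in range(-max_offset, max_offset + 1):
        if k >= 0:
            a, b = prof_l[k:], prof_r[:prof_r.shape[0] - k]
        else:
            a, b = prof_l[:prof_l.shape[0] + k], prof_r[-k:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        mad = np.mean(np.abs(a[:n] - b[:n]))
        if mad < best_mad:
            best_mad, best_offset = mad, k
    return best_offset
```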

Affine Model for Generating Stereo Mosaic Image from Video Frames (비디오 프레임 영상의 자유 입체 모자이크 영상 제작을 위한 부등각 모델 연구)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, Jun-Ku;Koh, Jin-Woo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.3
    • /
    • pp.49-56
    • /
    • 2009
  • Recently, the generation of high-quality mosaic images from video sequences has been attempted in a variety of investigations. Among these, this paper focuses on generating stereo mosaics from airborne video sequence images. The stereo mosaic is made by creating left and right mosaics, which are built from front and rear slices with different viewing angles taken from consecutive video frames. To make the stereo mosaic, motion parameters that define the geometric relationship between consecutive video frames must be determined. In this paper, an affine model that describes the relative motion parameters is applied to determine them. The mosaicking method that uses relative motion parameters is called free mosaicking. The free mosaicking proposed in this paper consists of four steps: image registration to the first frame using the affine model, front and rear slicing, stitching-line definition, and image mosaicking. (See the affine-registration sketch after this entry.) As experimental results, the left and right mosaic images and an anaglyph image made from the stereo mosaic are shown, and the y-parallax is analyzed to check accuracy.

  • PDF
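  • For illustration only (a minimal sketch under my own assumptions, not the paper's procedure): the affine motion model named above can be estimated from point correspondences between consecutive frames by linear least squares.

```python
import numpy as np

# Hedged sketch: a 2-D affine model
#   x' = a*x + b*y + c,   y' = d*x + e*y + f
# between consecutive frames is estimated by linear least squares from matched
# points (x, y) <-> (x', y').  How the correspondences are obtained (feature
# matching, tracking, ...) is outside this sketch.

def estimate_affine(src, dst):
    """src, dst: (N, 2) arrays of matched points; returns a 2x3 affine matrix."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2], A[0::2, 2] = src, 1.0          # rows for the x' equations
    A[1::2, 3:5], A[1::2, 5] = src, 1.0          # rows for the y' equations
    rhs = dst.reshape(-1)                        # interleaved x'1, y'1, x'2, y'2, ...
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params.reshape(2, 3)

def warp_points(affine, pts):
    """Apply a 2x3 affine transform to (N, 2) points."""
    return pts @ affine[:, :2].T + affine[:, 2]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 640, size=(50, 2))
    true = np.array([[1.01, 0.02, 3.0], [-0.02, 0.99, -1.5]])   # placeholder motion
    dst = warp_points(true, src) + rng.normal(0, 0.3, size=src.shape)
    print(np.round(estimate_affine(src, dst), 3))
```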

Biomimetic approach object detection sensors using multiple imaging (다중 영상을 이용한 생체모방형 물체 접근 감지 센서)

  • Choi, Myoung Hoon;Kim, Min;Jeong, Jae-Hoon;Park, Won-Hyeon;Lee, Dong Heon;Byun, Gi-Sik;Kim, Gwan-Hyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.91-93
    • /
    • 2016
  • Extracting three-dimensional information from 2-D images is generally done either with a binocular method using two cameras or with a monocular camera, and "stereo vision" is a very important step in this process. In today's CCTV and automatic object-tracking systems, using a stereo camera that mimics the human eyes allows site conditions and ongoing work to be understood more clearly, maximizing the efficiency of avoidance/control operations and of handling multiple tasks. Existing object-tracking systems based on 2-D images cannot recognize the distance to a target, but by using the parallax of a stereo image the distance can be presented to the observer and the object can be controlled more effectively. (See the depth-from-disparity sketch after this entry.)

  • PDF
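  • The "recognizing distance from parallax" idea above reduces, for a rectified stereo pair, to the standard depth-from-disparity relation; below is a small sketch of my own (the focal length and baseline are placeholder values, not taken from the paper):

```python
# Hedged sketch (illustration only, not from the paper): for a rectified stereo
# pair with focal length f (in pixels) and camera baseline B (in metres), the
# distance Z to a point with horizontal disparity d (in pixels) between the
# left and right images follows the standard relation Z = f * B / d.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Distance in metres to a point with the given pixel disparity."""
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    for d in (60, 30, 15, 5):
        print(f"disparity {d:3d} px -> distance ~{depth_from_disparity(d):.2f} m")
```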