• Title/Summary/Keyword: Motion Parallax

Astrometric Detectability of Parallax Effect in Gravitational Microlensing Events (중력렌즈 사건의 측성적 시차효과 검출에 대한 연구)

  • HAN CHEONGHO
    • Publications of The Korean Astronomical Society / v.15 no.1 / pp.15-19 / 2000
  • The lens mass determined from the photometrically obtained Einstein time scale suffers from large uncertainty due to the lens parameter degeneracy. The uncertainty can be substantially reduced if the mass is determined from the lens proper motion obtained from astrometric measurements of the source image centroid shifts, ${\delta}{\theta}_c$, by using high-precision interferometers on space-based platforms such as the Space Interferometry Mission (SIM) and ground-based interferometers soon available on several 8-10 m class telescopes. However, for the complete resolution of the lens parameter degeneracy it is also required to determine the lens parallax by measuring the parallax-induced deviations in the centroid-shift trajectory, ${\Delta}{\delta}{\theta}_c$, alone. In this paper, we investigate the detectabilities of ${\delta}{\theta}_c$ and ${\Delta}{\delta}{\theta}_c$ by determining the distributions of the maximum centroid shifts, $f({\delta}{\theta}_{c,max})$, and the average maximum deviations, $\langle{\Delta}{\delta}{\theta}_{c,max}\rangle$, for different types of Galactic microlensing events caused by various masses. From this investigation, we find that as long as source stars are bright enough for astrometric observations, ${\delta}{\theta}_c$ for most events caused by lenses with masses greater than $0.1\,M_\odot$, regardless of the event type, can be easily detected not only with the SIM (with a detection threshold of ${\delta}{\theta}_{th}\sim3\,{\mu}as$) but also with the ground-based interferometers (with ${\delta}{\theta}_{th}\sim3\,{\mu}as$). However, from ground-based observations it will be difficult to detect ${\Delta}{\delta}{\theta}_c$ for most Galactic bulge self-lensing events, and the detection will be restricted to the small fractions of disk-bulge and halo-LMC events for which the deviations are relatively large. From observations with the SIM, on the other hand, detecting ${\Delta}{\delta}{\theta}_c$ will be possible for the majority of disk and halo events and for a substantial fraction of bulge self-lensing events. For the complete resolution of the lens parameter degeneracy, therefore, SIM observations will be essential.

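The detectability estimates above rest on the standard point-lens astrometric relations: the centroid shift is ${\delta}{\theta}_c = {\theta}_E\,u/(u^2+2)$, which peaks at ${\theta}_E/(2\sqrt{2}) \approx 0.354\,{\theta}_E$ when $u=\sqrt{2}$. The Python sketch below applies these textbook formulas; the lens mass, distances, and impact parameter are illustrative values, not numbers taken from the paper.

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11              # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8                # speed of light [m s^-1]
M_sun = 1.989e30           # solar mass [kg]
kpc = 3.086e19             # kiloparsec [m]
mas = np.pi / 180 / 3600e3 # one milliarcsecond [rad]

def einstein_radius(M_lens, D_l, D_s):
    """Angular Einstein radius theta_E [rad] for a lens of mass M_lens [kg]
    at distance D_l with the source at D_s (both in metres)."""
    return np.sqrt(4 * G * M_lens / c**2 * (D_s - D_l) / (D_l * D_s))

def centroid_shift(u, theta_E):
    """Point-lens astrometric centroid shift: delta_theta_c = theta_E * u / (u^2 + 2)."""
    return theta_E * u / (u**2 + 2)

# Illustrative bulge self-lensing event (assumed values, not from the paper)
theta_E = einstein_radius(0.3 * M_sun, D_l=6 * kpc, D_s=8 * kpc)

# Source trajectory: impact parameter u0, time in units of the Einstein time scale
u0, t = 0.3, np.linspace(-2, 2, 401)
u = np.sqrt(u0**2 + t**2)
shift = centroid_shift(u, theta_E)

print(f"theta_E                     = {theta_E / mas * 1e3:.1f} micro-arcsec")
print(f"max shift along trajectory  = {shift.max() / mas * 1e3:.1f} micro-arcsec")
print(f"theoretical max (u=sqrt(2)) = {theta_E / (2 * np.sqrt(2)) / mas * 1e3:.1f} micro-arcsec")
```

For lenses above roughly $0.1\,M_\odot$ the resulting shifts are tens to hundreds of micro-arcseconds, which is why they clear a detection threshold of a few micro-arcseconds in the discussion above.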

3D stereoscopic representation of title in broadcasting, the distance standardize for the study of parallax (입체영상 방송텍스트에서 입체감을 위한 패럴렉스 데이터 표준화에 관한 연구)

  • Oh, Moon Seok;Lee, Yun Sang
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.4 / pp.111-118 / 2011
  • A notable recent change in media is the development of 3D stereoscopic imaging, which began in film and is now moving into broadcasting. 3D production still lacks standardization, and broadcast 3D subtitles and titles in particular have no standardized production workflow, so producing them demands considerable time and effort. This study proposes a standardized method for compositing text-based objects such as subtitles and titles into 3D footage using rig-based imaging so that the result is as stable as possible. Subtitles and titles must above all remain readable and comprehensible to the viewer's eyes, so excessive camera parallax (gap), which causes eye fatigue, must not be allowed to harm readability. An experiment was conducted with 100 adult men and women.
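
For context, on-screen parallax relates to perceived depth through simple viewing geometry: with interocular distance e, viewing distance D, and screen parallax P (positive behind the screen), similar triangles give a perceived distance Z = e·D/(e − P). The sketch below illustrates that relation with assumed viewing parameters; it is a geometric aside, not the standardization procedure proposed in the paper.

```python
def perceived_depth(parallax_mm, eye_separation_mm=65.0, viewing_distance_mm=3000.0):
    """Perceived distance from the viewer (mm) for a given on-screen parallax.

    Positive parallax -> the object appears behind the screen,
    negative parallax -> in front; parallax >= eye separation forces divergence.
    Derived from similar triangles: Z = e * D / (e - P).
    """
    e, D, P = eye_separation_mm, viewing_distance_mm, parallax_mm
    if P >= e:
        raise ValueError("parallax exceeds eye separation: eyes would diverge")
    return e * D / (e - P)

# Example: a title placed at +20 mm, 0 mm, and -20 mm parallax on a screen 3 m away
for P in (20.0, 0.0, -20.0):
    print(f"parallax {P:+5.1f} mm -> perceived distance {perceived_depth(P):7.1f} mm")
```

The divergence limit (P approaching e) is one reason excessive parallax on text objects quickly becomes uncomfortable to read.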

TRIGONOMETRIC DISTANCE AND PROPER MOTION OF IRAS 20056+3350: A MASSIVE STAR FORMING REGION ON THE SOLAR CIRCLE

  • BURNS, ROSS A.;NAGAYAMA, TAKUMI;HANDA, TOSHIHIRO;OMODAKA, TOSHIHIRO;NAKAGAWA, AKIHARU;NAKANISHI, HIROYUKI;HAYASHI, MASAHIKO;SHIZUGAMI, MAKOTO
    • Publications of The Korean Astronomical Society / v.30 no.2 / pp.121-123 / 2015
  • We report our measurements of the trigonometric distance and proper motion of IRAS 20056+3350, obtained from the annual parallax of $H_2O$ masers. Our distance of $D=4.69^{+0.65}_{-0.51}\,kpc$, which is 2.8 times larger than the near kinematic distance adopted in the literature, places IRAS 20056+3350 at the leading tip of the Local arm and proximal to the Solar circle. We estimated the proper motion of IRAS 20056+3350 to be $({\mu}_{\alpha}\cos{\delta},\;{\mu}_{\delta}) = (-2.62{\pm}0.33,\;-5.65{\pm}0.52)\;mas\;yr^{-1}$ from the group motion of the $H_2O$ masers, and used our results to estimate the angular velocity of Galactic rotation at the Galactocentric distance of the Sun, ${\Omega}_0=29.75{\pm}2.29\;km\;s^{-1}\,kpc^{-1}$, which is consistent with the values obtained for other tangent points and Solar circle objects.
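
As a rough consistency check of the quoted numbers, annual parallax converts directly to distance ($D\,[kpc]=1/\pi\,[mas]$), and for a source on the Solar circle flat rotation reduces to ${\Omega}_0 \approx 4.74\,|{\mu}_l|$ km s$^{-1}$ kpc$^{-1}$ with ${\mu}_l$ in mas yr$^{-1}$. The sketch below applies only this simplest form, approximating $|{\mu}_l|$ by the total proper motion and neglecting the solar peculiar motion and the equatorial-to-Galactic conversion, so it reproduces the order of magnitude rather than the paper's exact analysis.

```python
import math

# Values quoted in the abstract above
distance_kpc = 4.69                 # from the annual parallax (pi ~ 1/4.69 ~ 0.21 mas)
pm_ra_cosdec = -2.62                # mu_alpha * cos(delta) [mas/yr]
pm_dec = -5.65                      # mu_delta [mas/yr]

KMS_PER_MASYR_KPC = 4.74            # 1 mas/yr at 1 kpc corresponds to 4.74 km/s

pm_total = math.hypot(pm_ra_cosdec, pm_dec)                    # total proper motion [mas/yr]
v_transverse = KMS_PER_MASYR_KPC * pm_total * distance_kpc     # transverse velocity [km/s]

# For an object on the Solar circle, v_t(LSR) = -Omega_0 * d, so Omega_0 ~ 4.74 * |mu_l|;
# using the total proper motion and ignoring the solar motion makes this only approximate.
omega_0 = KMS_PER_MASYR_KPC * pm_total

print(f"implied parallax     ~ {1.0 / distance_kpc:.3f} mas")
print(f"transverse velocity  ~ {v_transverse:.1f} km/s")
print(f"Omega_0 (rough)      ~ {omega_0:.1f} km/s/kpc   (paper: 29.75 +/- 2.29)")
```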

From Exoscope into the Next Generation

  • Nishiyama, Kenichi
    • Journal of Korean Neurosurgical Society / v.60 no.3 / pp.289-293 / 2017
  • An exoscope, a high-definition video telescope and monitor system for performing microsurgery, has recently been proposed as an alternative to the operating microscope. It enables surgeons to complete the operation by visualizing magnified images on a display. The strong points of the exoscope are its wide field of view and deep focus, which minimize the need for repositioning and refocusing during the procedure. On the other hand, its limited magnification has been emphasized as a weak point. Procedures are performed under 2D motion images, with depth perceived through dynamic cues and the stereoscopic impression arising from motion parallax. Nevertheless, stereopsis is required to improve hand-eye coordination for high-precision work. Consequently, novel 3D high-definition operating scopes with various mechanical designs have been developed in line with recent innovations in digital surgical technology. They will set the stage for the next generation of digital-image-based neurosurgery.

Geocentric parallax measurements of Near-Earth Asteroid using Baselines with domestic small-size observatories (국내 소형천문대 기선을 이용한 근접 소행성 지심시차 측정)

  • Jeong, Eui Oan;Sohn, Jungjoo
    • Journal of the Korean Earth Science Society / v.37 no.7 / pp.398-407 / 2016
  • We cooperated with four domestic educational astronomical observatories to construct baselines and perform simultaneous observations to determine the geocentric parallax, distance, and motion of 1036 Ganymed, an Amor asteroid near the Earth. Observations were made on the days when simultaneous observations were possible between September and November 2011. The measured distances of 1036 Ganymed were 0.394 AU on Sept. 26, 0.365 AU on Oct. 11, and 0.340 AU on Oct. 25, respectively, which agreed within the error range with the distances provided by the US Jet Propulsion Laboratory. 1036 Ganymed showed a tilting motion during the observation period, and its tangential angular velocities were measured at $0.037{-}0.052^{\prime\prime}\;sec^{-1}$. This study showed that simultaneous observations among educational astronomical observatories can yield distance measurements with an error of about 5% for asteroids near 0.4 AU, and the approach is expected to be used as a research and education program emphasizing collaborative observation activities based on a network between observatories.
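
The underlying geometry is elementary parallax: two stations separated by a projected baseline B observe the asteroid simultaneously, and the angular offset θ between its apparent positions against the background stars gives d ≈ B/θ for small angles. The sketch below uses assumed numbers (a few-hundred-kilometre domestic baseline and an offset of order one arcsecond), not the actual observatories or measurements of the paper.

```python
import math

AU_KM = 1.495978707e8                 # astronomical unit [km]
ARCSEC_TO_RAD = math.pi / (180 * 3600)

def parallax_distance_au(baseline_km, offset_arcsec):
    """Small-angle geocentric-parallax distance: d ~ B / theta."""
    theta = offset_arcsec * ARCSEC_TO_RAD
    return baseline_km / theta / AU_KM

# Hypothetical example: two observatories ~300 km apart (projected baseline)
# measure apparent positions that differ by ~1.0 arcsec against the star field.
baseline_km = 300.0
offset_arcsec = 1.0
print(f"distance ~ {parallax_distance_au(baseline_km, offset_arcsec):.3f} AU")

# The relative error in the measured offset maps directly onto the distance error,
# e.g. a 0.1-arcsec uncertainty on a 1-arcsec offset gives roughly a 10% distance error,
# illustrating how astrometric precision sets the accuracy for objects near 0.4 AU.
```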

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem of inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. In the past, many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points. This approach derives from the correspondence of feature points detected in images and performs depth estimation using information on the motion of those feature points. Approaches using motion vectors suffer from the occlusion or missing-part problem, and image blur is ignored in the feature-point detection. This paper presents a novel approach to defocus-technique-based depth from lens translation using sequential SVD factorization. Solving such problems requires modeling the mutual relationship between the light and the optics up to the image plane. For this mutuality, we first discuss the optical properties of a camera system, because the image blur varies according to camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, to explain the light and optical properties, with a perspective-projection camera model, to explain the depth from lens translation. Then, depth from lens translation is proposed to use the feature points detected at the edges of the image blur. The feature points contain depth information derived from the width of the blur. The shape and motion can then be estimated from the motion of the feature points. This method uses sequential SVD factorization to compute the orthogonal matrices of the singular value decomposition. Experiments have been performed with sequences of real and synthetic images, comparing the presented method with depth from lens translation. Experimental results demonstrate the validity and applicability of the proposed method for depth estimation.

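The factorization step referred to above is commonly done in the Tomasi-Kanade style: feature tracks are stacked into a measurement matrix, centred, and decomposed by SVD, with the best rank-3 approximation split into motion and shape factors. The sketch below shows that generic batch factorization in NumPy under an affine-camera assumption; the paper's sequential variant and its defocus-based feature detection are not reproduced here.

```python
import numpy as np

def factorize_affine(W):
    """Tomasi-Kanade style factorization of a measurement matrix W (2F x P),
    where rows 2f and 2f+1 hold the x, y image coordinates of P feature points
    tracked over F frames. Returns (motion M: 2F x 3, shape S: 3 x P)."""
    # 1. Register: subtract the per-frame centroid of the tracked points.
    W_centered = W - W.mean(axis=1, keepdims=True)

    # 2. SVD and rank-3 truncation (affine cameras give a rank-3 measurement matrix).
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]

    # 3. Split the singular values between motion and shape (defined up to an affine
    #    ambiguity that a metric-upgrade step would normally resolve).
    M = U3 * np.sqrt(s3)             # camera motion rows
    S = np.sqrt(s3)[:, None] * Vt3   # 3D shape columns
    return M, S

# Synthetic check: random 3D points viewed by random affine cameras with translations.
rng = np.random.default_rng(0)
F, P = 8, 20
S_true = rng.normal(size=(3, P))                 # 3D points
M_true = rng.normal(size=(2 * F, 3))             # affine projection rows
t_true = rng.normal(size=(2 * F, 1))             # per-frame image translations
W = M_true @ S_true + t_true + rng.normal(scale=1e-3, size=(2 * F, P))

M_est, S_est = factorize_affine(W)
W_centered = W - W.mean(axis=1, keepdims=True)
print("max rank-3 reprojection error:", float(np.abs(M_est @ S_est - W_centered).max()))
```

A sequential variant updates the SVD as each new frame's rows arrive instead of refactorizing the full matrix, which is what makes the method usable on streaming video.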

Affine Model for Generating Stereo Mosaic Image from Video Frames (비디오 프레임 영상의 자유 입체 모자이크 영상 제작을 위한 부등각 모델 연구)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, Jun-Ku;Koh, Jin-Woo
    • Journal of Korean Society for Geospatial Information Science / v.17 no.3 / pp.49-56 / 2009
  • Recently, the generation of high-quality mosaic images from video sequences has been attempted in a variety of investigations. This paper focuses on generating stereo mosaics from airborne video sequence images. The stereo mosaic is made by creating left and right mosaics, which are fabricated from front and rear slices having different viewing angles in consecutive video frames. To build the stereo mosaic, motion parameters defining the geometric relationship between consecutive video frames are determined. For this, an affine model describing the relative motion parameters is applied in this paper. The mosaicing method using relative motion parameters is called a free mosaic. The free mosaic proposed in this paper consists of four steps: image registration with reference to the first frame using the affine model, front and rear slicing, stitching-line definition, and image mosaicing. As a result of the experiment, the left and right mosaic images and an anaglyphic stereo mosaic image are shown, and the y-parallax is analyzed to check accuracy.

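The affine model used for registration maps a point (x, y) in one frame to (x', y') = (a1·x + a2·y + a3, a4·x + a5·y + a6) in the next; with six parameters, three or more point correspondences give a linear least-squares solution. The sketch below shows that generic estimation step in NumPy with synthetic correspondences; the slicing, stitching-line, and mosaicing stages of the free-mosaic pipeline are not included.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 3).
    Returns a 2x3 matrix A such that dst ~ [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])      # (N, 3) homogeneous source points
    # Solve X @ A.T = dst for the 2x3 affine matrix A (row-wise least squares).
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

# Synthetic correspondences: scale, shear, and translate a point set, with noise.
rng = np.random.default_rng(1)
src = rng.uniform(0, 512, size=(50, 2))
A_true = np.array([[0.98, -0.05, 12.0],
                   [0.04,  1.02, -7.5]])
dst = np.hstack([src, np.ones((50, 1))]) @ A_true.T + rng.normal(scale=0.3, size=(50, 2))

A_est = estimate_affine(src, dst)
print("estimated affine parameters:\n", np.round(A_est, 3))
```

In a mosaicing pipeline this estimate would typically be wrapped in a robust scheme (e.g. RANSAC) so that mismatched features do not bias the registration.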

View synthesis with sparse light field for 6DoF immersive video

  • Kwak, Sangwoon;Yun, Joungil;Jeong, Jun-Young;Kim, Youngwook;Ihm, Insung;Cheong, Won-Sik;Seo, Jeongil
    • ETRI Journal / v.44 no.1 / pp.24-37 / 2022
  • Virtual view synthesis, which generates novel views similar in character to actually acquired images, is an essential technical component for delivering an immersive video with realistic binocular disparity and smooth motion parallax. This is typically achieved in sequence by warping the given images to the designated viewing position, blending the warped images, and filling the remaining holes. When considering 6DoF use cases with large motion, warping in patch units is preferable to conventional methods operating in pixel units. In that case, the quality of the synthesized image depends strongly on the blending method. Based on this aspect, we propose a novel blending architecture that exploits the similarity of the directions of rays and the distribution of depth values. Results showed that, by employing the proposed method, a more enhanced view was synthesized compared with the well-designed synthesizers used within the Moving Picture Experts Group immersive video activity (MPEG-I). Moreover, we describe a GPU-based implementation that synthesizes and renders views in real time, considering its applicability to immersive video services.
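
The blending idea can be illustrated with a simple per-pixel weighting that favours warped candidates whose source rays point in nearly the same direction as the target ray and whose depths agree with the local depth distribution. The sketch below is only a schematic of that kind of weighting, with made-up weight formulas and parameters; it is not the MPEG-I synthesizer or the exact scheme proposed in the paper.

```python
import numpy as np

def blend_warped_views(colors, ray_dirs, depths, target_dir, sigma_depth=0.1, angle_power=8.0):
    """Blend candidate pixels warped from several source views into one target pixel.

    colors:     (V, 3) candidate RGB values from V warped views
    ray_dirs:   (V, 3) unit direction of the source ray behind each candidate
    depths:     (V,)   candidate depth at the target pixel
    target_dir: (3,)   unit direction of the target ray
    The weights (cosine similarity raised to a power, a Gaussian penalty on the
    deviation from the median depth) are illustrative choices, not the paper's formulas.
    """
    # Ray-direction similarity: larger when the source ray is nearly parallel to the target ray.
    cos_sim = np.clip(ray_dirs @ target_dir, 0.0, 1.0)
    w_angle = cos_sim ** angle_power

    # Depth consistency: down-weight candidates far from the median candidate depth
    # (a crude stand-in for "the distribution of depth values").
    z_ref = np.median(depths)
    w_depth = np.exp(-0.5 * ((depths - z_ref) / (sigma_depth * z_ref)) ** 2)

    w = w_angle * w_depth
    w /= w.sum() + 1e-12
    return w @ colors

# Three candidates: two agree on depth and direction, one is an occluded outlier.
colors = np.array([[0.90, 0.10, 0.10], [0.85, 0.15, 0.10], [0.10, 0.10, 0.90]])
ray_dirs = np.array([[0.00, 0.0, 1.000], [0.05, 0.0, 0.999], [0.30, 0.0, 0.954]])
ray_dirs /= np.linalg.norm(ray_dirs, axis=1, keepdims=True)
depths = np.array([2.00, 2.05, 3.50])
print(blend_warped_views(colors, ray_dirs, depths, target_dir=np.array([0.0, 0.0, 1.0])))
```

The outlier candidate receives nearly zero weight, which is the effect a depth-aware blend is meant to have on occluded or misregistered patches.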

A Patch Packing Method Using Guardband for Efficient 3DoF+ Video Coding (3DoF+ 비디오의 효율적인 부호화를 위한 보호대역을 사용한 패치 패킹 기법)

  • Kim, Hyun-Ho;Kim, Yong-Ju;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.185-191 / 2020
  • MPEG-I is actively working on standardization of immersive video coding, which provides up to six degrees of freedom (6DoF) in terms of viewpoint. In a 3DoF+ virtual space, defined as an extension of 360-degree video with motion parallax, looking at the scene from another viewpoint (another position in space) requires rendering an additional view using the multiple videos included in the 3DoF+ video. In the MPEG-I Visual workgroup, efficient coding methods for 3DoF+ video are being studied, and the Test Model for Immersive Video (TMIV) was recently released. This paper presents a patch packing method that packs patches into atlases efficiently to improve the coding efficiency of 3DoF+ video in TMIV. The proposed method improves the reconstructed view quality with reduced coding artifacts by introducing guardbands between patches in the atlas.
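
A guardband simply reserves a few empty pixels around every patch when it is placed in the atlas, so that block-based coding and filtering of one patch do not bleed into its neighbours. The sketch below is a toy shelf-packing routine with a configurable guardband; it illustrates the idea only and is not the packing algorithm adopted in TMIV or in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Patch:
    id: int
    width: int
    height: int

def pack_with_guardband(patches: List[Patch], atlas_w: int, atlas_h: int,
                        guard: int = 4) -> Optional[List[Tuple[int, int, int]]]:
    """Toy shelf packer: place patches left-to-right in rows ("shelves"),
    keeping `guard` empty pixels around each patch. Returns (patch_id, x, y)
    placements, or None if the atlas is too small."""
    placements = []
    x = y = shelf_h = 0
    # Packing taller patches first tends to waste less space in shelf packing.
    for p in sorted(patches, key=lambda p: p.height, reverse=True):
        w, h = p.width + 2 * guard, p.height + 2 * guard
        if x + w > atlas_w:            # start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        if y + h > atlas_h or w > atlas_w:
            return None                # the atlas cannot hold this patch
        placements.append((p.id, x + guard, y + guard))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

patches = [Patch(0, 320, 180), Patch(1, 200, 200), Patch(2, 640, 120), Patch(3, 100, 90)]
print(pack_with_guardband(patches, atlas_w=1024, atlas_h=512, guard=8))
```

Widening the guardband trades atlas area (and thus bitrate) against fewer visible seams in the reconstructed views, which is exactly the trade-off the paper's method tunes.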

Optical System Design for a Head-up Display through Analysis of Distortion and Biocular Parallax (왜곡수차 및 양안시차 분석을 통한 헤드업 디스플레이용 광학계 설계)

  • Kim, Kum-Ho;Park, Sung-Chan
    • Korean Journal of Optics and Photonics / v.31 no.2 / pp.88-95 / 2020
  • In this study, we present methods to quantitatively analyze and correct the distortions and biocular parallaxes in a head-up display (HUD). To analyze asymmetrical distortions, five kinds of distortions are proposed and evaluated at five eye positions of an eyebox. The differences between distortions evaluated at the four corners of the eyebox and that at the center are defined as the relative distortions, which occur due to head motion of the driver. We also define the convergence and divergence parallaxes at six biocular positions in the eyebox to quantitatively analyze them. Using these analytical methods, we constrain the degree of biocular parallaxes and distortion changes with eye position to be small, so that an optical system nearly free from them can be obtained by optimization design for HUD optics.
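
The metrics described above are differences of quantities evaluated at several eye positions: relative distortion is the distortion at an eyebox corner minus that at the eyebox centre, and biocular parallax is the angular difference between the rays reaching the two eyes from the same virtual-image point. The sketch below computes those two differences for made-up sample values; the five distortion definitions and the optimization constraints of the paper are not reproduced.

```python
import numpy as np

def relative_distortion(distortion_at_corner_pct, distortion_at_center_pct):
    """Relative distortion (%) between a corner of the eyebox and its centre,
    i.e. the distortion change perceived when the driver's head moves."""
    return distortion_at_corner_pct - distortion_at_center_pct

def biocular_parallax_mrad(image_point, left_eye, right_eye):
    """Angular difference (mrad) between the directions from the two eyes
    to the same virtual-image point (magnitude only)."""
    d_left = image_point - left_eye
    d_right = image_point - right_eye
    cosang = np.dot(d_left, d_right) / (np.linalg.norm(d_left) * np.linalg.norm(d_right))
    return 1e3 * np.arccos(np.clip(cosang, -1.0, 1.0))

# Hypothetical sample values (not measurements from the paper):
print("relative distortion:", relative_distortion(2.3, 1.8), "%")

# Virtual image point 2.5 m ahead and slightly off-axis, eyes 65 mm apart (metres).
image_point = np.array([0.3, 0.0, 2.5])
left_eye = np.array([-0.0325, 0.0, 0.0])
right_eye = np.array([+0.0325, 0.0, 0.0])
print("biocular parallax:", round(biocular_parallax_mrad(image_point, left_eye, right_eye), 2), "mrad")
```

Constraining such differences to stay small across the eyebox is what the paper's optimization of the HUD optics aims to achieve.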