• Title/Summary/Keyword: parallax


OGLE-2017-BLG-1049: ANOTHER GIANT PLANET MICROLENSING EVENT

  • Kim, Yun Hak;Chung, Sun-Ju;Udalski, A.;Bond, Ian A.;Jung, Youn Kil;Gould, Andrew;Albrow, Michael D.;Han, Cheongho;Hwang, Kyu-Ha;Ryu, Yoon-Hyun;Shin, In-Gu;Shvartzvald, Yossi;Yee, Jennifer C.;Zang, Weicheng;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyoun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Dong-Joo
    • Journal of The Korean Astronomical Society / v.53 no.6 / pp.161-168 / 2020
  • We report the discovery of a giant exoplanet in the microlensing event OGLE-2017-BLG-1049, with a planet-host star mass ratio of q = (9.53 ± 0.39) × 10⁻³ and a caustic-crossing feature in Korea Microlensing Telescope Network (KMTNet) observations. The caustic-crossing feature yields an angular Einstein radius of θE = 0.52 ± 0.11 mas. However, the microlens parallax is not measured because the time scale of the event, tE ≃ 29 days, is too short. Thus, we perform a Bayesian analysis to estimate the physical quantities of the lens system. We find that the lens system consists of a star with mass Mh = 0.55 (+0.36, −0.29) M⊙ hosting a giant planet with Mp = 5.53 (+3.62, −2.87) MJup, at a distance of DL = 5.67 (+1.11, −1.52) kpc. The projected star-planet separation is a⊥ = 3.92 (+1.10, −1.32) au. This means that the planet is located beyond the snow line of the host. The relative lens-source proper motion is μrel ~ 7 mas yr⁻¹, so the lens and source will be separated from each other within 10 years. After this, it will be possible to measure the flux of the host star with 30-m class telescopes and to determine its mass.
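The quoted proper motion follows directly from the abstract's own numbers via μrel = θE/tE. A minimal sketch (not the authors' code) checking that arithmetic:

```python
# Check mu_rel = theta_E / t_E with the abstract's values for OGLE-2017-BLG-1049.
theta_E_mas = 0.52        # angular Einstein radius [mas]
t_E_days = 29.0           # event timescale [days]

mu_rel = theta_E_mas / (t_E_days / 365.25)   # [mas/yr]
print(f"mu_rel ~ {mu_rel:.1f} mas/yr")       # ~6.5 mas/yr, consistent with the quoted ~7

# At this rate the lens and source separate by roughly 65 mas in 10 years,
# which motivates the follow-up with 30-m class telescopes.
separation_10yr_mas = mu_rel * 10
print(f"separation after 10 yr ~ {separation_10yr_mas:.0f} mas")
```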

3D Stereoscopic Augmented Reality with a Monocular Camera (단안카메라 기반 삼차원 입체영상 증강현실)

  • Rho, Seungmin;Lee, Jinwoo;Hwang, Jae-In;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.22 no.3 / pp.11-20 / 2016
  • This paper introduces an effective method for generating 3D stereoscopic images that give immersive 3D experiences to viewers using mobile binocular HMDs. Most previous AR systems with monocular cameras share a common limitation: the same real-world image is presented to both of the viewer's eyes, without parallax. In this paper, based on the assumption that viewers focus on the marker in marker-based AR scenarios, we recover the binocular disparity of the camera image and a virtual object using the pose information of the marker. The basic idea is to generate binocular disparity for the real-world image and the virtual object, where the image is placed on a 2D plane in 3D defined by the pose information of the marker. For non-marker areas of the image, we apply blur effects to reduce visual discomfort by decreasing their sharpness. Our user studies show that, compared to previous binocular AR systems, the proposed method provides high depth feeling, a high sense of reality, and visual comfort.
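The core geometric step can be sketched with standard stereo geometry: for a plane at the marker's depth Z, the left/right images differ by a horizontal disparity d = f·b/Z. The function name and the numeric values below are illustrative assumptions, not taken from the paper:

```python
# Sketch of disparity synthesis for a plane at the marker's depth
# (generic stereo geometry; parameters below are assumed for illustration).

def disparity_px(focal_px, baseline_m, depth_m):
    """Horizontal pixel disparity for a plane at distance depth_m."""
    return focal_px * baseline_m / depth_m

# Assumed: 800 px focal length, 63 mm interpupillary baseline,
# marker plane 0.5 m from the camera.
d = disparity_px(800, 0.063, 0.5)
print(f"disparity = {d:.1f} px")  # each eye's view is shifted by ±d/2
```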

A SUPER-JUPITER MICROLENS PLANET CHARACTERIZED BY HIGH-CADENCE KMTNET MICROLENSING SURVEY OBSERVATIONS OF OGLE-2015-BLG-0954

  • SHIN, I.-G.;RYU, Y.-H.;UDALSKI, A.;ALBROW, M.;CHA, S.-M.;CHOI, J.-Y.;CHUNG, S.-J.;HAN, C.;HWANG, K.-H.;JUNG, Y.K.;KIM, D.-J.;KIM, S.-L.;LEE, C.-U.;LEE, Y.;PARK, B.-G.;PARK, H.;POGGE, R.W.;YEE, J.C.;PIETRUKOWICZ, P.;MROZ, P.;KOZLOWSKI, S.;POLESKI, R.;SKOWRON, J.;SOSZYNSKI, I.;SZYMANSKI, M.K.;ULACZYK, K.;WYRZYKOWSKI, L.;PAWLAK, M.;GOULD, A.
    • Journal of The Korean Astronomical Society / v.49 no.3 / pp.73-81 / 2016
  • We report the characterization of a massive (mp = 3.9 ± 1.4 MJup) microlensing planet (OGLE-2015-BLG-0954Lb) orbiting an M dwarf host (M = 0.33 ± 0.12 M⊙) at a distance toward the Galactic bulge of 0.6 (+0.4, −0.2) kpc, which is extremely nearby by microlensing standards. The planet-host projected separation is a⊥ ~ 1.2 AU. The characterization was made possible by the wide-field (4 deg²), high-cadence (Γ = 6 hr⁻¹) monitoring of the Korea Microlensing Telescope Network (KMTNet), which had two of its three telescopes in commissioning operations at the time of the planetary anomaly. The source crossing time, t* = 16 min, is among the shortest ever published. The high-cadence, wide-field observations that are the hallmark of KMTNet are the only way to routinely capture such short crossings. High-cadence resolution of short caustic crossings will preferentially lead to mass and distance measurements for the lens, because a short crossing time typically implies a nearby lens, which enables the measurement of additional effects (bright lens and/or microlens parallax). When combined with the measured crossing time, these effects can yield planet/host masses and distance.
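The abstract quotes the component masses but not the mass ratio; a quick sketch (ours, using the standard Jupiter/solar mass conversion) shows the super-Jupiter ratio they imply:

```python
# Planet/host mass ratio implied by the abstract's masses for OGLE-2015-BLG-0954Lb.
M_JUP_IN_MSUN = 9.543e-4   # Jupiter mass in solar masses (standard value)

m_planet = 3.9 * M_JUP_IN_MSUN   # [M_Sun]
m_host = 0.33                    # [M_Sun]

q = m_planet / m_host
print(f"q ~ {q:.4f}")  # ~0.011, i.e. about 1%: firmly in super-Jupiter territory
```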

Three-Dimensional Conversion of Two-Dimensional Movie Using Optical Flow and Normalized Cut (Optical Flow와 Normalized Cut을 이용한 2차원 동영상의 3차원 동영상 변환)

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Joo-Hwan;Kang, Jin-Mo;Lee, Byoung-Ho
    • Korean Journal of Optics and Photonics / v.20 no.1 / pp.16-22 / 2009
  • We propose a method to convert a two-dimensional movie into a three-dimensional movie using normalized cut and optical flow. In this paper, we first segment each image of the two-dimensional movie into objects, and then estimate the depth of each object. Normalized cut is an image segmentation algorithm; to improve its speed and accuracy, we use a watershed algorithm and a weight function based on optical flow. We then estimate the depth of the objects segmented by the improved normalized cut using optical flow. Ordinal depth is estimated from the change of segmented object labels in an occluded region, which is the difference of the absolute values of optical flow. To compensate the ordinal depth, we generate a relational depth, which is the absolute value of optical flow used as motion parallax. The final depth map is determined by multiplying the ordinal depth by the relational depth, then dividing by the average optical flow. The proposed two-dimensional/three-dimensional movie conversion method is applicable to all three-dimensional display devices and all two-dimensional movie formats. We present experimental results using sample two-dimensional movies.
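The depth-combination step above can be sketched with toy arrays. This is our reading of the abstract's formula (ordinal depth × relational depth ÷ mean flow), not the authors' implementation:

```python
# Sketch of the final depth-map combination described in the abstract,
# on a 2x2 toy example (values are made up for illustration).
import numpy as np

ordinal = np.array([[1.0, 1.0],
                    [2.0, 2.0]])        # per-segment ordinal depth labels
relational = np.array([[2.0, 2.0],
                       [4.0, 4.0]])     # |optical flow| per pixel (motion parallax)
mean_flow = relational.mean()           # average optical flow over the frame

depth = ordinal * relational / mean_flow
print(depth)  # the faster-moving segment ends up nearer in the depth map
```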

DETERMINATIONS OF ITS ABSOLUTE DIMENSIONS AND DISTANCE BY THE ANALYSES OF LIGHT AND RADIAL-VELOCITY CURVES OF THE CONTACT BINARY - II. CK Bootis (접촉쌍성의 광도와 시선속도곡선의 분석에 의한 절대 물리량과 거리의 결정-II. CK Bootis)

  • Lee, Jae-Woo;Lee, Chung-Uk;Kim, Chun-Hwey;Kang, Young-Beom;Koo, Jae-Rim
    • Journal of Astronomy and Space Sciences / v.21 no.4 / pp.275-282 / 2004
  • We obtained complete light curves of the contact binary CK Boo over 13 nights from June to July 2004, using the 1-m reflector and BVR filters at Mt. Lemmon Optical Astronomy Observatory, and determined four new times of minimum light (three timings for primary eclipse, one for secondary). With the contact mode of the 1998 version of the Wilson-Devinney binary model, we analyzed our BVR light curves together with the radial-velocity curves of Rucinski & Lu (1999). As a result, we found CK Boo to be an A-type over-contact binary (f = 84%) with a low mass ratio (q = 0.11) and orbital inclination i = 65°. The absolute dimensions of the system determined from our new solution are M₁ = 1.42 M⊙, M₂ = 0.15 M⊙, R₁ = 1.47 R⊙, and R₂ = 0.59 R⊙, and the distance to the system is derived as about 129 pc. This distance is consistent with that from the Hipparcos trigonometric parallax (157 ± 33 pc) within the limit of the latter's error.
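The Hipparcos distance quoted above comes from the standard trigonometric-parallax relation d[pc] = 1000/π[mas]. A minimal sketch (the parallax value below is assumed for illustration; the abstract quotes only the distance):

```python
# Trigonometric parallax to distance: d [pc] = 1000 / parallax [mas].

def distance_pc(parallax_mas):
    return 1000.0 / parallax_mas

# A parallax of ~6.4 mas reproduces the quoted ~157 pc (illustrative value).
d = distance_pc(6.37)
print(f"{d:.0f} pc")
```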

Development of Day and Night Scope with BS Prism (BS 프리즘을 이용한 주야 조준경 개발)

  • Lee, Dong-Hee
    • Journal of Korean Ophthalmic Optics Society / v.19 no.3 / pp.339-344 / 2014
  • Purpose: This study concerns the development of a day-and-night scope using the reflecting surface of a BS (beam splitting) prism. Methods: We placed an LCD panel and a dot reticle generator at the top and bottom of the reflecting surface of the BS prism, and placed a reflector of doublet design in front of the prism. In this way we developed a new type of day-and-night scope that images the virtual image of the dot reticle toward the observer, lets the observer survey the peripheral information of the outside target at 1x magnification, and lets the observer view the image of the LCD panel directly. Results: The new scope functions as a night scope with a thermal image display at night and as a day scope with a dot sight in daytime, by letting the reflective surface of the BS prism act both as the dot sight that reflects the dot reticle and as the reflective optical system through which the observer views the night thermal image displayed on the LCD panel. Conclusions: In this study, we developed a new type of day-and-night scope that can play the role of a day or night scope selectively, combining an existing dot sight and night scope by means of the BS prism. The resulting scope can further increase the rapidity of firing and is more convenient to mount on a firearm than a detachable combination of an existing dot sight and an existing night scope.

Distance and Speed Measurements of Moving Object Using Difference Image in Stereo Vision System (스테레오 비전 시스템에서 차 영상을 이용한 이동 물체의 거리와 속도측정)

  • 허상민;조미령;이상훈;강준길;전형준
    • Journal of the Korea Computer Industry Society / v.3 no.9 / pp.1145-1156 / 2002
  • A method to measure the speed and distance of a moving object is proposed using a stereo vision system. One of the most important factors for measuring the speed and distance of a moving object is the accuracy of object tracking. Accordingly, a background image algorithm is adopted to track the rapidly moving object, and a local opening operator algorithm is used to remove the shadow and noise of the object. The extraction efficiency of the moving object is improved by using an adaptive threshold algorithm that is independent of variations in brightness. Since the left and right central points are compensated, a more exact speed and distance of the object can be measured. Using the background image algorithm and the local opening operator algorithm, the computational load is reduced, making real-time processing of the speed and distance of the moving object possible. The simulation results show that the background image algorithm tracks the moving object more rapidly than the other algorithms, and that the adaptive threshold algorithm improves the extraction efficiency of the target by reducing the candidate areas. Since the central point of the target is compensated using the binocular parallax, the measurement error for the speed and distance of the moving object is reduced. The measurement error rates for the distance from the stereo camera to the moving object and for the speed of the moving object are 2.68% and 3.32%, respectively.
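The triangulation behind these measurements is standard stereo geometry: distance from binocular disparity, then speed from successive distances. A minimal sketch (the camera parameters and disparities below are assumed; the abstract does not give them):

```python
# Sketch of stereo triangulation for distance and speed
# (generic geometry; numeric values are illustrative only).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a point from its left/right pixel disparity: Z = f*B/d."""
    return focal_px * baseline_m / disparity_px

def radial_speed(z1_m, z2_m, dt_s):
    """Speed along the camera axis from two depth measurements dt_s apart."""
    return abs(z2_m - z1_m) / dt_s

z1 = depth_from_disparity(700, 0.12, 21.0)   # 4.0 m
z2 = depth_from_disparity(700, 0.12, 24.0)   # 3.5 m (object has moved closer)
v = radial_speed(z1, z2, 0.5)
print(f"speed ~ {v:.1f} m/s")
```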


KMT-2018-BLG-0029LB: A VERY LOW MASS-RATIO Spitzer MICROLENS PLANET

  • Gould, Andrew;Ryu, Yoon-Hyun;Novati, Sebastiano Calchi;Zang, Weicheng;Albrow, Michael D.;Chung, Sun-Ju;Han, Cheongho;Hwang, Kyu-Ha;Jung, Youn Kil;Shin, In-Gu;Shvartzvald, Yossi;Yee, Jennifer C.;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyoun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Dong-Joo;Lee, Yongseok;Park, Byeong-Gon;Pogge, Richard W.;Beichman, Charles;Bryden, Geoff;Carey, Sean;Gaudi, B. Scott;Henderson, Calen B.;Zhu, Wei;Fouque, Pascal;Penny, Matthew T.;Petric, Andreea;Burdullis, Todd;Mao, Shude
    • Journal of The Korean Astronomical Society / v.53 no.1 / pp.9-26 / 2020
  • At q = (1.81 ± 0.20) × 10⁻⁵, KMT-2018-BLG-0029Lb has the lowest planet-host mass ratio q of any microlensing planet to date, by more than a factor of two. Hence, it is the first planet that probes below the apparent "pile-up" at q = 5-10 × 10⁻⁵. The event was observed by Spitzer, yielding a microlens-parallax πE measurement. Combined with a measurement of the Einstein radius θE from finite-source effects during the caustic crossings, these measurements imply masses of the host Mhost = 1.14 (+0.10, −0.12) M⊙ and planet Mplanet = 7.59 (+0.75, −0.69) M⊕, system distance DL = 3.38 (+0.22, −0.26) kpc, and projected separation a⊥ = 4.27 (+0.21, −0.23) AU. The blended light, which is substantially brighter than the microlensed source, is plausibly due to the lens and could be observed at high resolution immediately.
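The mass determination described above rests on the standard relation M = θE/(κπE). A minimal sketch of that relation (the θE and πE values below are placeholders chosen to give a host-like mass; the abstract does not quote them):

```python
# Lens mass from Einstein radius theta_E and microlens parallax pi_E:
# M = theta_E / (kappa * pi_E), with kappa = 4G/(c^2 au) in mas per M_Sun.

KAPPA = 8.144  # mas / M_Sun

def lens_mass_msun(theta_E_mas, pi_E):
    return theta_E_mas / (KAPPA * pi_E)

# Placeholder values for illustration: theta_E = 0.93 mas, pi_E = 0.10
# happen to give ~1.14 M_Sun, a host-like mass.
M = lens_mass_msun(0.93, 0.10)
print(f"M ~ {M:.2f} M_Sun")
```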

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young;Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.1-11 / 1998
  • One of the desired features for realizing high-quality information and telecommunication services in the future is "the sensation of reality". This will be achieved only with visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that there is no developed transmission technology for the huge amount of data involved in 3-D images, and no established technologies for recording and displaying 3-D images in real time. The currently known stereoscopic imaging technologies can only present depth, not motion parallax, so they are not effective in creating the sensation of reality without requiring the viewer to wear eyeglasses. More effective 3-D imaging technologies for achieving the sensation of reality are those based on multiview 3-D images, which provide object images that change as the eyes move to different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras in a single case, an RGB (Red, Green, Blue) beam projector, and a holographic screen is introduced. In this system, the 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. This signal converter converts each camera signal into 3 color signals, i.e., RGB signals, combines each color signal from the 8 cameras into a serial signal train by multiplexing, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects images onto the holographic screen through an LCD shutter consisting of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with those of the camera image sampling and the beam projector image projection, the multiview 3-D moving images are viewed at the viewing zone.
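The timing budget of the multiplexing scheme above is easy to sketch: 8 time-multiplexed views at a 480 Hz projector frame rate leave each view a 60 Hz refresh (our inference from the abstract's figures, not a number the abstract states):

```python
# Per-view refresh rate implied by time-multiplexing 8 views at 480 Hz.
views = 8
projector_hz = 480

per_view_hz = projector_hz / views
print(f"{per_view_hz:.0f} Hz per view")  # each sub-viewing zone refreshes at 60 Hz
```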


Assessment of LODs and Positional Accuracy for 3D Model based on UAV Images (무인항공영상 기반 3D 모델의 세밀도와 위치정확도 평가)

  • Lee, Jae One;Kim, Doo Pyo;Sung, Sang Min
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.197-205 / 2020
  • Compared to aerial photogrammetry, UAV photogrammetry has advantages in acquiring and utilizing high-resolution images more quickly. The production of 3D models using UAV photogrammetry has become an important issue at a time when the applications of 3D spatial information are proliferating. Therefore, this study assessed the feasibility of utilizing 3D models produced by UAV photogrammetry through quantitative and qualitative analyses. The qualitative analysis was performed in accordance with the LODs (Levels of Detail) specified in the 3D Land Spatial Information Construction Regulation. The results showed that features on planes have a high LOD, while features with elevation differences have a low LOD due to occlusion areas and parallax. The quantitative analysis was performed using the 3D coordinates obtained from the CPs (checkpoints) and the edges of nearby structures. The mean errors of the residuals at the CPs were 0.042 m to 0.059 m in the horizontal and 0.050 m to 0.161 m in the vertical coordinates, while the mean errors at the structures' edges were 0.068 m and 0.071 m in the horizontal and vertical coordinates, respectively. Therefore, this study confirmed the potential of 3D models from UAV photogrammetry for digital twin and slope analysis as well as BIM (Building Information Modeling).
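The quantitative step above reduces to averaging the absolute residuals measured at the checkpoints. A minimal sketch (the residual values are made up for illustration; only the procedure follows the abstract):

```python
# Mean horizontal error from checkpoint residuals (illustrative values).
residuals_h = [0.031, 0.055, 0.048, 0.034]   # [m] horizontal residuals at CPs

mean_h = sum(abs(r) for r in residuals_h) / len(residuals_h)
print(f"mean horizontal error = {mean_h:.3f} m")
```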