• Title/Summary/Keyword: combined error motion

Search results: 46

Cosmology with peculiar velocity surveys

  • Qin, Fei
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.2
    • /
    • pp.43.5-44
    • /
    • 2021
  • In the local Universe, the gravitational effects of mass-density fluctuations perturb galaxies' redshifts on top of Hubble's law; these perturbations are called 'peculiar velocities'. Peculiar velocities provide an excellent way to test the cosmological model in the nearby Universe. In this talk, we present new cosmological constraints using peculiar velocities measured with the 2MASS Tully-Fisher survey (2MTF), the 6dFGS peculiar-velocity survey (6dFGSv), and the Cosmicflows-3 and Cosmicflows-4TF compilations. First, the dipole and quadrupole of the peculiar velocity field, commonly named the 'bulk flow' and 'shear' respectively, enable us to test whether our cosmological model accurately describes the motion of galaxies in the nearby Universe. We develop and use new estimators that accurately preserve the error distribution of the measurements to measure these moments. In all cases, our results are consistent with the predictions of the Λ cold dark matter (ΛCDM) model. Additionally, measurements of the growth rate of structure, fσ8, in the low-redshift Universe allow us to test different gravitational models. We developed a new estimator of the 'momentum' (density-weighted peculiar velocity) power spectrum and use joint measurements of the galaxy density and momentum power spectra to place new constraints on the growth rate of structure from the combined 2MTF and 6dFGSv data. We recover a constraint of fσ8 = 0.404 (+0.082, −0.081) at an effective redshift zeff = 0.03. This measurement is fully consistent with the expectations of General Relativity and the ΛCDM cosmological model.
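The 'bulk flow' measured here is, in essence, an error-weighted vector average of radial peculiar velocities. Below is a minimal sketch of the standard maximum-likelihood bulk-flow estimator (in the spirit of Kaiser 1988); the paper's own estimator is a refinement that better preserves the measurement error distribution, so this illustrates the idea rather than the authors' method. The value of `sigma_star` (an assumed nonlinear velocity dispersion) and all names are illustrative assumptions.

```python
import numpy as np

def bulk_flow_ml(positions, v_rad, sigma_v, sigma_star=300.0):
    """Maximum-likelihood bulk flow B from radial peculiar velocities.

    positions : (N, 3) galaxy positions (only the directions are used)
    v_rad     : (N,) measured radial peculiar velocities [km/s]
    sigma_v   : (N,) per-galaxy measurement errors [km/s]
    sigma_star: assumed 1-D nonlinear velocity dispersion [km/s]
    """
    rhat = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    w = 1.0 / (sigma_v**2 + sigma_star**2)        # inverse-variance weights
    A = np.einsum('i,ij,ik->jk', w, rhat, rhat)   # 3x3 normal matrix
    b = np.einsum('i,i,ij->j', w, v_rad, rhat)
    B = np.linalg.solve(A, b)                     # bulk-flow vector [km/s]
    return B, np.linalg.inv(A)                    # estimate and noise covariance
```

The magnitude |B| is then compared with the distribution ΛCDM predicts for a survey of the same geometry and depth.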


Comparative Analysis of DTM Generation Method for Stream Area Using UAV-Based LiDAR and SfM (여름철 UAV 기반 LiDAR, SfM을 이용한 하천 DTM 생성 기법 비교 분석)

  • Gou, Jaejun;Lee, Hyeokjin;Park, Jinseok;Jang, Seongju;Lee, Jonghyuk;Kim, Dongwoo;Song, Inhong
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.3
    • /
    • pp.1-14
    • /
    • 2024
  • Gaining accurate 3D stream geometry has become feasible with Unmanned Aerial Vehicles (UAVs), which is crucial for better understanding stream hydrodynamic processes. The objective of this study was to investigate a series of filters for removing stream vegetation and to propose the best method for generating Digital Terrain Models (DTMs) from UAV-based point clouds. A stream reach of approximately 500 m on the Bokha stream in Icheon city was selected as the study area. Point clouds were obtained on August 1st, 2023, using a Phantom 4 Multispectral and a Zenmuse L1 for Structure from Motion (SfM) and Light Detection And Ranging (LiDAR), respectively. Three vegetation filters, two morphological filters, and six composite filters combining vegetation and morphological filters were applied in this study. The Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were used to assess each filter against two cross-sections measured by leveling survey. The vegetation filters performed better on SfM data, especially in areas of short vegetation, while the morphological filters performed better on LiDAR data, particularly in areas of taller vegetation. Overall, the composite filters, which combine the advantages of the two filter types, outperformed any single filter. The best method for SfM was the combination of Progressive TIN (PTIN) and the Color Index of Vegetation Extraction (CIVE), showing the smallest MAE of 0.169 m. The method proposed in this study can be used to construct stream DTMs and thus contribute to improving the accuracy of stream hydrodynamic simulations.
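As a rough illustration of how a CIVE-type vegetation filter can be applied to a colored point cloud, and how the MAE/RMSE metrics are computed against survey elevations, here is a minimal sketch. The CIVE coefficients are the published ones (Kataoka et al., 2003); the threshold and the function names are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

# CIVE (Color Index of Vegetation Extraction), Kataoka et al. (2003):
#   CIVE = 0.441*R - 0.811*G + 0.385*B + 18.78745
# Greener (vegetation) points score lower, so points below a threshold
# are flagged as vegetation and removed before interpolating the DTM.

def cive_filter(points, rgb, threshold=18.78745):
    """points: (N, 3) xyz; rgb: (N, 3) colors in 0..255.
    The threshold is illustrative; in practice it is often set by
    Otsu's method or tuned manually."""
    r, g, b = (rgb[:, k].astype(float) for k in range(3))
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745
    ground_mask = cive >= threshold        # keep non-vegetation points
    return points[ground_mask]

def mae_rmse(dtm_z, survey_z):
    """Accuracy metrics against leveling-survey elevations at the same x, y."""
    err = dtm_z - survey_z
    return np.mean(np.abs(err)), np.sqrt(np.mean(err**2))
```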

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.3
    • /
    • pp.174-181
    • /
    • 2005
  • Purpose: The reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to respiration-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST, GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT corresponding to 10% phase intervals were acquired. From the positions, sizes, and uptake values of each subject in the resulting phase-specific PET and CT datasets, the correlations between motion artifacts in PET and CT images and the size of motion relative to the size of the subject were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, the volumes of subjects in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the subject; conversely, the corresponding maximal uptake value was reduced to about 50%. Conclusion: RGPET is demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible when combined with 4D-CT.
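For readers unfamiliar with respiratory gating, the sketch below shows the basic phase-binning step: each event time is assigned to one of ten 10%-phase bins between successive peaks of the respiratory trace. This is an illustrative assumption of how the binning could be done; the commercial RPM system performs the equivalent step internally.

```python
import numpy as np

def phase_bins(trace_t, trace_y, event_t, n_bins=10):
    """Assign event times to respiratory phase bins (0..n_bins-1).

    trace_t, trace_y : respiratory trace samples (e.g. from an RPM-style
                       marker tracker); peaks mark the start of each cycle.
    event_t          : times of PET events / CT reconstructions to bin.
    """
    # Crude peak detection: strict local maxima of the trace (illustrative).
    peaks = np.where((trace_y[1:-1] > trace_y[:-2]) &
                     (trace_y[1:-1] > trace_y[2:]))[0] + 1
    peak_t = trace_t[peaks]
    bins = np.empty(len(event_t), dtype=int)
    for k, t in enumerate(event_t):
        i = np.searchsorted(peak_t, t) - 1            # cycle containing t
        if i < 0 or i + 1 >= len(peak_t):
            bins[k] = -1                              # outside a full cycle
            continue
        phase = (t - peak_t[i]) / (peak_t[i + 1] - peak_t[i])  # 0..1
        bins[k] = min(int(phase * n_bins), n_bins - 1)
    return bins
```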

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1992.10b
    • /
    • pp.382-387
    • /
    • 1992
  • In order to obtain the desired arc welding performance, we previously developed an arc welding robot system that enables coordinated motion of dual robot arms. In this system, one robot arm holds the welding target as a positioning device while the other robot moves the welding torch. For such a dual-arm robot system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this problem, our robot system employed a teach-and-playback method, in which absolute errors are compensated by the operator's visual feedback. With this system, ideal arc welding that accounts for the posture of the welding target and the direction of gravity became possible. However, another problem remained even after we developed an original teaching method for dual-arm robots with coordinated motions: manual teaching tasks are still tedious, since they require fine movements and intense attention. We therefore developed a 3-dimensional vision-guided robot control method for our welding robot system with coordinated motions. In this paper we present the 3-dimensional vision sensor that guides our arc welding robot system. The sensing device is compactly designed and mounted on the tip of the arc welding robot. The sensor detects the 3-dimensional shape of the groove on the target workpiece to be welded, and the welding robot is controlled to trace the groove accurately. The principle of the 3-dimensional measurement is the slit-ray projection method: two laser slit-ray projectors and one CCD TV camera are compactly mounted, and careful image processing enables 3-dimensional data extraction without interference from ambient light. The 3-dimensional information about the target groove is combined with rough teaching data given by the operator in advance, so the teaching tasks are greatly simplified.
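The geometry behind the slit-ray projection method reduces to intersecting a back-projected camera ray with the calibrated laser plane. A minimal sketch, assuming a pinhole camera with intrinsic matrix K and a laser plane n·X = d already calibrated in the camera frame (both assumptions; the paper's calibration details are not given here):

```python
import numpy as np

def slit_ray_point(pixel, K, plane_n, plane_d):
    """Triangulate one 3-D point on the laser sheet (slit-ray projection).

    pixel            : (u, v) image coordinates of the detected laser stripe
    K                : 3x3 camera intrinsic matrix (calibrated in advance)
    plane_n, plane_d : laser plane n . X = d, expressed in the camera frame
    """
    ray = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))  # back-project
    t = plane_d / np.dot(plane_n, ray)     # ray-plane intersection parameter
    return t * ray                         # 3-D point in camera coordinates
```

Repeating this for every stripe pixel as the sensor sweeps along the workpiece yields the 3-D profile of the groove.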


Cloth Modeling using Implicit Constraint Enforcement (묵시적 제한방법을 이용한 옷 모델링 방법)

  • Hong, Min;Lee, Seung-Hyun;Park, Doo-Soon
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.4
    • /
    • pp.516-524
    • /
    • 2008
  • This paper presents a new modeling technique for simulating cloth-specific characteristics with a set of hard constraints, using an implicit constraint enforcement scheme. The conventional explicit Baumgarte constraint stabilization method has several defects: it requires users to pick problem-dependent coefficients to achieve fast convergence, and it has inherent stabilization limits. The proposed implicit constraint enforcement method is stable with large time steps, does not require problem-dependent feedback parameters, and guarantees natural, physics-based motion of an object. In addition, its computational complexity is the same as that of the explicit Baumgarte method. This paper describes the formulation of implicit constraint enforcement and provides a constraint error analysis. Modeling techniques for complex components of cloth, such as seams, buttons, sharp creases, and wrinkles, and for the prevention of excessive elongation, are explained. Combined with an adaptive constraint activation scheme, the results using the proposed method show a substantial enhancement of the realism of cloth simulations with a corresponding savings in computational cost.
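To make the contrast with Baumgarte stabilization concrete, here is a minimal sketch of implicit constraint enforcement for a single distance constraint between two particles. The Lagrange multiplier is chosen so that the linearized constraint holds at the end of the step, so no problem-dependent feedback coefficients appear. This is a toy illustration of the general idea, not the paper's full cloth formulation, which solves many coupled constraints simultaneously.

```python
import numpy as np

def implicit_constraint_step(q, v, m, f_ext, rest_len, h):
    """One step of implicit constraint enforcement for two particles joined
    by a distance constraint C(q) = |q1 - q2| - L = 0 (a hard edge).

    q, v  : (2, 3) positions and velocities;  m : (2,) masses
    Requiring C(q) + h * J * v_new = 0 at the end of the step yields a
    linear equation for the multiplier lam, stable even for large h.
    """
    d = q[0] - q[1]
    length = np.linalg.norm(d)
    C = length - rest_len                  # current constraint violation
    n = d / length
    J = np.hstack([n, -n])                 # constraint Jacobian (1 x 6)
    Minv = np.repeat(1.0 / m, 3)           # diagonal inverse mass (6,)
    vf = v.ravel() + h * Minv * f_ext.ravel()   # unconstrained velocity
    A = h * h * (J @ (Minv * J))           # scalar system h^2 J M^-1 J^T
    lam = (C + h * (J @ vf)) / A           # multiplier enforcing C = 0
    v_new = vf - h * Minv * J * lam        # apply the constraint impulse
    q_new = q + h * v_new.reshape(2, 3)
    return q_new, v_new.reshape(2, 3)
```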


FBX Format Animation Generation System Combined with Joint Estimation Network using RGB Images (RGB 이미지를 이용한 관절 추정 네트워크와 결합된 FBX 형식 애니메이션 생성 시스템)

  • Lee, Yujin;Kim, Sangjoon;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.5
    • /
    • pp.519-532
    • /
    • 2021
  • Recently, in various fields such as games, movies, and animation, content that uses motion capture to build body models and create characters for expression in 3D space has been increasing. Studies are underway to generate animations using RGB-D cameras in order to avoid the cost of marker-based motion capture, but problems of pose estimation accuracy and equipment cost still remain. In this paper, we therefore propose a system that feeds RGB images into a joint estimation network and converts the results into 3D data to create FBX-format animations, reducing the equipment cost of animation creation while increasing joint estimation accuracy. First, the two-dimensional joints are estimated from the RGB image, and the three-dimensional coordinates of the joints are estimated from these values. The result is converted to quaternion rotations, and an animation in FBX format is created. To measure the accuracy of the proposed method, the system's operation was verified by comparing the error between an animation generated from the 3D positions of markers attached to the body and the animation generated by the proposed system.
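The quaternion-conversion step the abstract mentions can be illustrated with a shortest-arc rotation that maps each bone's rest-pose direction onto the direction implied by the estimated 3-D joints. The joint names and rest directions below are hypothetical examples; the actual FBX export would be done with an FBX SDK and is not shown.

```python
import numpy as np

def bone_rotation(rest_dir, est_dir):
    """Shortest-arc quaternion (w, x, y, z) rotating a bone's rest-pose
    direction onto the direction estimated from 3-D joint positions."""
    a = rest_dir / np.linalg.norm(rest_dir)
    b = est_dir / np.linalg.norm(est_dir)
    c = np.dot(a, b)
    if c < -0.999999:                      # opposite vectors: pick any axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return np.array([0.0, *axis])      # 180-degree rotation
    q = np.array([1.0 + c, *np.cross(a, b)])
    return q / np.linalg.norm(q)

# Example with hypothetical joints: a thigh bone pointing down -Y in the
# rest pose, rotated onto the estimated hip -> knee direction.
hip = np.array([0.0, 1.0, 0.0])
knee = np.array([0.1, 0.5, 0.05])
q = bone_rotation(np.array([0.0, -1.0, 0.0]), knee - hip)
```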