• Title/Summary/Keyword: Pixel error


Development of a Camera Self-calibration Method for 10-parameter Mapping Function

  • Park, Sung-Min;Lee, Chang-je;Kong, Dae-Kyeong;Hwang, Kwang-il;Doh, Deog-Hee;Cho, Gyeong-Rae
    • Journal of Ocean Engineering and Technology
    • /
    • v.35 no.3
    • /
    • pp.183-190
    • /
    • 2021
  • Tomographic particle image velocimetry (PIV) is a widely used method that measures a three-dimensional (3D) flow field by reconstructing camera images into voxel images. In 3D measurements, the setting and calibration of the camera's mapping function significantly impact the obtained results. In this study, a camera self-calibration technique is applied to tomographic PIV to reduce the errors arising from such functions. The measured 3D particles are superimposed on the image to create a disparity map. Camera self-calibration is performed by reflecting the error of the disparity map in the center values of the particles. Synthetic vortex-ring images are generated and the developed algorithm is applied. The optimal result is obtained by applying self-calibration once when the center error is less than 1 pixel and by applying it 2-3 times when the error is more than 1 pixel; the maximum recovery ratio is 96%. Further self-calibration did not improve the results. The algorithm is also evaluated in an actual rotational flow experiment, where the optimal result is likewise obtained when self-calibration is applied once, consistent with the synthetic-image result. Therefore, the developed algorithm is expected to be useful for improving the performance of 3D flow measurements.
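The disparity-map feedback step described above can be sketched in miniature. The snippet below is a hypothetical Python illustration, not the paper's algorithm: the 10-parameter mapping function is replaced by a simple linear map `x_img = A @ X + b`, and a single global offset correction stands in for the per-sub-volume update.

```python
import numpy as np

# Stand-in for the paper's 10-parameter mapping function: a linear map
# from a 3D world point X to 2D image coordinates.
def project(A, b, X):
    return A @ X + b

def self_calibrate(A, b, particles_3d, detected_2d):
    """One self-calibration pass: shift the mapping's offset term by the
    mean disparity between projected and detected particle centers."""
    projected = np.array([project(A, b, X) for X in particles_3d])
    disparity = detected_2d - projected      # per-particle residual vectors
    return b + disparity.mean(axis=0)        # corrected offset term
```

When the mapping error is a pure offset, one pass recovers it exactly; larger errors are what motivate the paper's 2-3 repeated passes.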

Matching Performance Analysis of Upsampled Satellite Image and GCP Chip for Establishing Automatic Precision Sensor Orientation for High-Resolution Satellite Images

  • Hyeon-Gyeong Choi;Sung-Joo Yoon;Sunghyeon Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.103-114
    • /
    • 2024
  • The escalating demand for high-resolution satellite imagery necessitates the dissemination of geospatial data with superior accuracy. Achieving precise positioning is imperative for mitigating the geometric distortions inherent in high-resolution satellite imagery. However, maintaining sub-pixel accuracy poses significant challenges within the current technological landscape. This research introduces an approach in which upsampling is applied to both the satellite image and the ground control point (GCP) chips, facilitating the establishment of a precision sensor orientation for high-resolution satellite images, followed by a comprehensive comparison of matching performance. To evaluate the proposed methodology, the Compact Advanced Satellite 500-1 (CAS500-1), with a resolution of 0.5 m, serves as the high-resolution satellite image. Correspondingly, GCP chips with resolutions of 0.25 m and 0.5 m are used for the South Korean and North Korean regions, respectively. Results from the experiment reveal that concurrent upsampling of the satellite imagery and GCP chips enhances matching performance by up to 50% in comparison with the original resolution. Furthermore, the position error improved only with 2x upsampling; with 3x upsampling, the position error tended to increase. This study affirms that careful upsampling of high-resolution satellite imagery and GCP chips can yield sub-pixel positioning accuracy, thereby advancing the state of the art in the field.
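Why matching on upsampled data yields offsets at a fraction of an original pixel can be seen in one dimension. The sketch below (a generic illustration with hypothetical signals, not the paper's matcher) linearly upsamples both the "image" and the "chip" by a factor and correlates them, so the best index comes back in original-pixel units and can be fractional.

```python
import numpy as np

def upsample_linear(sig, factor):
    """Linearly interpolate a 1-D signal onto a grid `factor` times denser."""
    n = len(sig)
    fine = np.linspace(0.0, n - 1.0, (n - 1) * factor + 1)
    return np.interp(fine, np.arange(n), sig)

def match_offset(image, chip, factor):
    """Normalized correlation of an upsampled chip against an upsampled
    image; the argmax is converted back to (possibly fractional)
    original-pixel units."""
    img_u, chip_u = upsample_linear(image, factor), upsample_linear(chip, factor)
    m = len(chip_u)
    scores = []
    for i in range(len(img_u) - m + 1):
        w = img_u[i:i + m]
        denom = np.linalg.norm(w) * np.linalg.norm(chip_u)
        scores.append(np.dot(w, chip_u) / denom if denom > 0 else 0.0)
    return np.argmax(scores) / factor
```

With factor 2 the offset grid is half-pixel; the paper's finding that 3x hurts suggests interpolation artifacts eventually outweigh the finer grid.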

Design & Analysis of an Error-reduced Precision Optical Triangulation Probes (오차 최소화된 정밀 광삼각법 프로브의 해석 및 설계)

  • Kim, Kyung-Chan;Oh, Se-Baek;Kim, Jong-Ahn;Kim, Soo-Hyun;Kwak, Yoon-Keun
    • Proceedings of the KSME Conference
    • /
    • 2000.04a
    • /
    • pp.411-414
    • /
    • 2000
  • Optical triangulation probes (OTPs) are widely used for their simple structure, high resolution, and long operating range. However, errors originating from speckle, inclination of the object, source power fluctuation, ambient light, and detector noise limit their usability. In this paper, we propose new design criteria for an error-reduced OTP. The light source module of the system consists of an incoherent light source and a multimode optical fiber for eliminating speckle and shaping a Gaussian beam intensity profile. Diffuse-reflective white copy paper attached to the object forms the light intensity distribution on the charge-coupled device (CCD). Since the peak positions of the intensity distribution are not affected by the various error sources, a sub-pixel resolution signal processing algorithm that detects the peak position makes it possible to construct an error-reduced OTP system.
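A common way to locate a peak to sub-pixel resolution, as the abstract describes, is a three-point parabolic fit around the brightest pixel. This is a minimal generic sketch, not necessarily the authors' exact algorithm:

```python
import numpy as np

def subpixel_peak(profile):
    """Estimate the peak position of a 1-D intensity profile by fitting a
    parabola through the brightest sample and its two neighbors."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                      # peak at the border: no fit possible
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex offset in (-0.5, 0.5)
    return i + delta
```

For a smooth, well-sampled spot the estimate typically lands within a few hundredths of a pixel of the true center.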


Distance Estimation Method using Enhanced Adaptive Fuzzy Strong Tracking Kalman Filter Based on Stereo Vision (스테레오 비전에서 향상된 적응형 퍼지 칼만 필터를 이용한 거리 추정 기법)

  • Lim, Young-Chul;Lee, Chung-Hee;Kwon, Soon;Lee, Jong-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.6
    • /
    • pp.108-116
    • /
    • 2008
  • In this paper, we propose an algorithm that can estimate distance from disparity in a stereo vision system, even when the obstacle is located at long range as well as short range. We use sub-pixel interpolation to minimize the quantization errors that deteriorate distance accuracy when the distance is calculated from integer disparity, and we use an enhanced adaptive fuzzy strong tracking Kalman filter (EAFSTKF) to improve the distance accuracy and track the path optimally. The proposed method solves the divergence problem caused by nonlinear dynamics, such as various vehicle movements, in the conventional Kalman filter (CKF), and also enhances distance accuracy and reliability. Our simulation results show that the performance of our method improves by about 13.5% compared to other methods in terms of root mean square error rate (RMSER).
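The quantization error the authors minimize follows directly from the pinhole stereo relation Z = f·B/d: at long range the disparity d is only a few pixels, so rounding it to an integer moves the estimated distance substantially. A numeric sketch with hypothetical camera parameters:

```python
F_PX, BASELINE_M = 800.0, 0.3     # hypothetical focal length (px) and baseline (m)

def distance_m(disparity_px):
    # pinhole stereo geometry: Z = f * B / d
    return F_PX * BASELINE_M / disparity_px

# a target at 53 m has a true disparity of 240/53, about 4.53 px
true_d = F_PX * BASELINE_M / 53.0
z_integer  = distance_m(round(true_d))   # integer disparity rounds to 5 px -> 48.0 m
z_subpixel = distance_m(4.5)             # half-pixel interpolation -> ~53.3 m
```

Here sub-pixel interpolation cuts a roughly 5 m quantization error down to well under a meter, which is why it matters most at long range.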

Adaptive Interpolation for Intra Frames in H.264 Using Interference Function (H.264 인트라 프레임에서 방해함수를 이용한 적응적 보간)

  • Park Mi-Seon;Yoo Jae-Myeong;Toan Nguyen Dinh;Kim Ji-Soo;Son Hwa-Jeong;Lee Guee-Sang
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.10
    • /
    • pp.107-113
    • /
    • 2006
  • The error concealment method for intra frames in H.264 reconstructs a lost block by computing a weighted average of the boundary pixels of the neighboring blocks (top, bottom, left, and right). However, a simple average of neighboring pixel values leads to excessive blurring and severely degrades picture quality. To solve this problem, in this paper we estimate the dominant edge of the lost block from the pixel values of the neighboring blocks and reconstruct the pixel values by adaptively choosing between directional interpolation and weighted-average interpolation, according to the value of a statistics-based interference function. Finally, the directional interpolation method is improved by determining the dominant edge direction from the relation between the dominant edge and the edges of the neighboring blocks. Experiments show a picture-quality improvement of about $0.5{\sim}2.0dB$ compared with the method of H.264.
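The baseline weighted-average interpolation that the paper improves upon can be sketched as follows: each missing pixel becomes a distance-weighted average of the four boundary pixels in its row and column, with nearer boundaries weighted more heavily. This is a generic bilinear-style illustration, not the paper's exact weighting.

```python
import numpy as np

def conceal_block(top, bottom, left, right):
    """Fill a lost H x W block from its four boundary pixel rows/columns.
    top/bottom: length-W arrays above/below the block.
    left/right: length-H arrays beside the block."""
    H, W = len(left), len(top)
    out = np.zeros((H, W))
    for r in range(H):
        for c in range(W):
            # nearer boundaries get larger weights (bilinear-style):
            # a pixel close to the top is weighted toward the top row, etc.
            wt, wb = H - r, r + 1
            wl, wr = W - c, c + 1
            out[r, c] = (wt * top[c] + wb * bottom[c]
                         + wl * left[r] + wr * right[r]) / (wt + wb + wl + wr)
    return out
```

Averaging across all four directions like this is exactly what blurs a strong edge, which motivates the paper's switch to directional interpolation when a dominant edge is detected.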


Detection Performance Analysis of the Telescope considering Pointing Angle Command Error (지향각 명령 오차를 고려한 망원경 탐지 성능 분석)

  • Lee, Hojin;Lee, Sangwook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.237-243
    • /
    • 2017
  • In this paper, the detection performance of electro-optical telescopes, which observe and surveil space objects including artificial satellites, is analyzed. For the Modeling & Simulation (M&S)-based analysis, a satellite orbit model, a telescope model, and an atmospheric model are constructed, and a detection scenario observing the satellite is organized. Based on this scenario, pointing accuracy is analyzed according to the field of view (FOV), one of the key factors of the telescope, considering pointing angle command error. Building on that result, detection possibility is analyzed according to the pixel count of the detector and the FOV of the telescope, with detection decided by the signal-to-noise ratio (SNR). The results show that pointing accuracy increases with a larger FOV, whereas detection probability increases with a smaller FOV and a higher pixel count. Therefore, major specifications of the telescope, such as FOV and pixel count, should be determined considering the results of the M&S-based analysis performed in this paper and the operational circumstances.
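Discerning detection by SNR typically follows the standard CCD signal-to-noise equation, in which the source counts compete with sky background and read noise accumulated over the pixels the source covers. A minimal sketch with that textbook formula (the paper's actual radiometric model is more detailed):

```python
import numpy as np

def point_source_snr(signal_e, sky_e_per_px, read_noise_e, n_px):
    """Standard CCD equation for a point source:
    SNR = S / sqrt(S + n_px * (B + RN^2)), all quantities in electrons."""
    return signal_e / np.sqrt(signal_e + n_px * (sky_e_per_px + read_noise_e ** 2))

def detected(signal_e, sky_e_per_px, read_noise_e, n_px, threshold=5.0):
    """Declare a detection when the SNR clears a chosen threshold."""
    return point_source_snr(signal_e, sky_e_per_px, read_noise_e, n_px) >= threshold
```

A wider FOV spreads more sky background into each pixel and the source over more pixels, which is one way the FOV/pixel-count trade-off in the abstract enters the SNR.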

Efficient Image Warping Mechanism Using Template Matching and Partial Warping (템플릿 매칭과 부분 워핑을 이용한 효율적인 원근 영상 워핑 기법)

  • Jeong, Dae-Heon;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.339-342
    • /
    • 2017
  • Geometric transformation of an image is used for image correction. Many correction methods exist in computer vision, such as rigid-body and similarity transforms. Image warping is used to correct images with perspective distortion. For image warping, four feature points around the warping position must be extracted. However, it is difficult to extract the four points accurately, and warping with inaccurate points produces errors of 3-4 pixels at the warping position. Therefore, template matching is used to extract the four points correctly, and two of the four points are repeatedly reselected to verify the result: the positions of the two points are perturbed within a 3 by 3 pixel neighborhood, and the image is warped for each change. In this way, an optimal set of four points with an error of less than 1 pixel is selected, and the image is finally warped using the optimal points, which makes it possible to obtain the optimum result.
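Template matching of the kind used to locate the four feature points is usually zero-normalized cross-correlation (ZNCC). A brute-force sketch for illustration (in practice a library routine such as OpenCV's matchTemplate would be used):

```python
import numpy as np

def match_template(image, template):
    """Brute-force zero-normalized cross-correlation (ZNCC).
    Returns the (row, col) of the best-matching window."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Mean subtraction makes the score insensitive to brightness offsets, which is why ZNCC localizes feature points more reliably than raw correlation.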


Adaptive Hyperspectral Image Classification Method Based on Spectral Scale Optimization

  • Zhou, Bing;Bingxuan, Li;He, Xuan;Liu, Hexiong
    • Current Optics and Photonics
    • /
    • v.5 no.3
    • /
    • pp.270-277
    • /
    • 2021
  • The adaptive sparse representation (ASR) can effectively combine the structural information of a sample dictionary with the sparsity of coding coefficients. The algorithm considers the correlation between training samples and can switch between the sparse representation-based classifier (SRC) and collaborative representation classification (CRC) for different training samples. Unlike SRC and CRC, which use fixed norm constraints, ASR adaptively adjusts the constraints based on the correlation between training samples, seeking a balance between the l1 and l2 norms and greatly strengthening the robustness and adaptability of the classification algorithm. Correlation coefficients (CC), in turn, can identify pixels with strong correlation. This article therefore proposes a hyperspectral image classification method based on ASR and CC, called correlation coefficients and adaptive sparse representation (CCASR). The method has three steps. First, the pixel to be tested is determined and the CC value between it and the various training samples is calculated. Then the pixel is represented using ASR and the reconstruction error corresponding to each category is calculated. Finally, the target pixels are classified according to the reconstruction error and the CC value. The method is verified on two sets of experimental data. On the Indian Pines hyperspectral image, the overall accuracy of CCASR reaches 0.9596. On hyperspectral images taken by the HIS-300, the classification accuracy of the proposed method achieves 0.9354, which is better than other commonly used methods.
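The CC screening in the first step amounts to computing the Pearson correlation between the test pixel's spectrum and each training sample. A minimal sketch of CC-based class assignment (the ASR reconstruction stage, which the final decision also uses, is omitted here):

```python
import numpy as np

def classify_by_cc(pixel_spectrum, class_samples):
    """Assign a pixel to the class whose training spectra have the highest
    mean Pearson correlation with it.
    class_samples: dict mapping class name -> (n_samples, n_bands) array."""
    best_cls, best_cc = None, -2.0
    for cls, samples in class_samples.items():
        ccs = [np.corrcoef(pixel_spectrum, s)[0, 1] for s in samples]
        mean_cc = float(np.mean(ccs))
        if mean_cc > best_cc:
            best_cls, best_cc = cls, mean_cc
    return best_cls
```

Because the correlation coefficient is invariant to offset and scale, it keys on spectral shape rather than absolute brightness, which is what makes it a useful complement to the reconstruction error.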

Detecting Line Segment by Incremental Pixel Extension (점진적인 화소 확장에 의한 선분 추출)

  • Lee, Jae-Kwang;Park, Chang-Joon
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.3
    • /
    • pp.292-300
    • /
    • 2008
  • An algorithm for detecting line segments in an image using incremental pixel extension is presented. The approach differs from conventional algorithms such as the Hough transform and line segment grouping. Canny edges are calculated and an arbitrary point is selected among the edge elements. After the point is selected, a base line approximating the line segment is calculated and the edge pixels within an arbitrary radius are selected. A weight is assigned to each selected edge pixel based on the error in distance and direction between the pixel and the base line. A line segment is extracted by fitting a line with the weighted least squares method, after determining whether the selected pixels are linked or delinked by comparing sums of the weights. The proposed algorithm is compared with two other methods, and the results show that it is faster and can detect the real line segment.
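The weighted least-squares line fit at the core of the extraction step has a simple closed form; pixels whose distance and direction disagree with the base line get small weights and thus little influence. A generic sketch (the paper's specific weighting scheme is not reproduced):

```python
import numpy as np

def wls_line(x, y, w):
    """Closed-form weighted least squares fit of y = a*x + b,
    minimizing sum_i w_i * (y_i - a*x_i - b)**2."""
    W = np.sum(w)
    xm, ym = np.sum(w * x) / W, np.sum(w * y) / W      # weighted centroids
    a = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    return a, ym - a * xm
```

Down-weighting instead of hard-rejecting outlying edge pixels keeps the fit stable while still letting marginal pixels contribute a little.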


ESTIMATION OF ERRORS IN THE TRANSVERSE VELOCITY VECTORS DETERMINED FROM HINODE/SOT MAGNETOGRAMS USING THE NAVE TECHNIQUE

  • Chae, Jong-Chul;Moon, Yong-Jae
    • Journal of The Korean Astronomical Society
    • /
    • v.42 no.3
    • /
    • pp.61-69
    • /
    • 2009
  • Transverse velocity vectors can be determined from a pair of images taken successively at a time interval using an optical flow technique. We have tested the performance of the new technique called NAVE (non-linear affine velocity estimator), recently implemented by Chae & Sakurai, using real image data taken by the Narrowband Filter Imager (NFI) of the Solar Optical Telescope (SOT) aboard the Hinode satellite. We have developed two methods of estimating the errors in the determined velocity vectors: ${\sigma}_{\upsilon}$, resulting from the non-linear fitting, and ${\epsilon}_u$, resulting from the statistics of the determined velocity vectors. The real error is expected to lie somewhere between ${\sigma}_{\upsilon}$ and ${\epsilon}_u$. We have investigated the dependence of the determined velocity vectors and their errors on parameters such as the critical speed for the subsonic filtering, the width of the localizing window, the time interval between two successive images, and the signal-to-noise ratio of the feature. With the choice of $v_{crit}$ = 2 pixel/step for the subsonic filtering, a window FWHM of 16 pixels, and a time interval of one step (2 minutes), we find that the errors of velocity vectors determined using NAVE range from around 0.04 pixel/step for high signal-to-noise ratio features (S/N $\sim$ 10) to 0.1 pixel/step for low signal-to-noise ratio features (S/N $\sim$ 3), with a mean of about 0.06 pixel/step, where 1 pixel/step corresponds roughly to 1 km/s in our case.
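The pixel/step unit above is simply the displacement between two successive frames divided by the number of time steps between them. As a much simpler stand-in for NAVE's local affine fit, the sketch below recovers an integer-pixel displacement between two frames by FFT cross-correlation; it illustrates the quantity being estimated, not the NAVE algorithm itself.

```python
import numpy as np

def shift_between(frame0, frame1):
    """Integer-pixel displacement of frame1 relative to frame0 via circular
    FFT cross-correlation; the correlation peak sits at the shift."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(frame0)) * np.fft.fft2(frame1)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # map wrap-around indices to signed shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, cc.shape))

# velocity in pixel/step = shift / (number of steps between the two frames)
```

NAVE goes beyond this by fitting a locally affine velocity field inside a windowed region, which is what yields sub-pixel-level (0.04-0.1 pixel/step) errors rather than whole-pixel estimates.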