• Title/Abstract/Keywords: Image pixel

Search Results: 2,495

Analysis of Signal Integrity of High Speed Serial Interface for Ultra High Definition Video Pattern Control Signal Generator (UHD급 영상패턴 제어 신호발생기를 위한 고속 시리얼 인터페이스의 신호 무결성 분석)

  • Son, Hui-Bae;Kweon, Oh-Keun
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.726-735
    • /
    • 2014
  • As 4K UHD (Ultra High Definition) LCD televisions move to higher resolutions and larger data volumes, LCD TVs face problems such as a growing number of cables and noticeable skew among them. V-by-One HS is a new interface technology for the path between the image-processing IC and the timing control (TCON) board. Its variable speed, from 600 Mbps to 3.75 Gbps, effectively meets the requirements of various pixel rates. In this paper, we use the V-by-One HS interface to illustrate our proposed simulation method for frequency resonance modes and a PCB design approach that models signal integrity effects for high-speed video signals using IBIS models.
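
As a back-of-the-envelope companion to the data-rate range cited above, the sketch below estimates how many serial lanes a given video format would need; the 30-bit color depth and 8b/10b-style coding overhead are illustrative assumptions, not figures from the paper.

```python
def required_lanes(h, v, fps, bits_per_pixel, lane_gbps, coding_overhead=10 / 8):
    """Return payload rate, line rate, and the minimum lane count for a video format."""
    payload_gbps = h * v * fps * bits_per_pixel / 1e9   # raw pixel data rate
    line_rate_gbps = payload_gbps * coding_overhead     # after assumed line coding
    lanes = int(-(-line_rate_gbps // lane_gbps))        # ceiling division
    return payload_gbps, line_rate_gbps, lanes

# 4K UHD at 60 fps, 30-bit RGB, 3.75 Gbps per lane (the upper rate cited above)
payload, line_rate, lanes = required_lanes(3840, 2160, 60, 30, 3.75)
print(f"payload {payload:.2f} Gbps, line rate {line_rate:.2f} Gbps, lanes needed: {lanes}")
```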

Estimating Accuracy of 3-D Models of SPOT Imagery Based on Changes of Number of GCPs (SPOT영상을 사용한 3차원 모델링시 지상기준점수에 따른 정확도 평가)

  • 김감래;안병구;김명배
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.21 no.1
    • /
    • pp.61-69
    • /
    • 2003
  • Various factors influence the DEMs and orthoimages generated from stereo satellite images. In particular, the effect of the number of GCPs on the accuracy of modeling, DEMs, and orthoimages needs to be investigated. In this research, the number of GCPs was varied from 5 to 30, and models, DEMs, and orthoimages were created from SPOT panchromatic images with 10 m resolution using digital image processing methods. Accuracy was assessed on the orthoimages using 20 check points. As a result, with 10 to 30 GCPs the modeling RMSE appeared as low as about 1 pixel. The horizontal and vertical errors of the orthoimages tended to decrease as the number of GCPs increased, and 10 to 15 GCPs were confirmed to be the most economical. The correlation between the number of GCPs and orthoimage positional accuracy was also analyzed, and improvement plans and future research tasks are presented.
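
For reference, check-point accuracy of the kind reported above is usually summarized as an RMSE over the residuals; a minimal sketch (not the paper's code), assuming coordinates are supplied as NumPy arrays:

```python
import numpy as np

def rmse(measured, reference):
    """Root-mean-square error between measured and reference coordinates.

    measured, reference: arrays of shape (n_points, n_dims), e.g. (20, 2)
    for horizontal (X, Y) check points.
    """
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

# Example with hypothetical check-point residuals (units: pixels)
meas = np.array([[100.4, 200.7], [150.9, 180.2], [90.1, 120.6]])
ref = np.array([[100.0, 200.0], [151.0, 181.0], [90.0, 121.0]])
print(f"planimetric RMSE: {rmse(meas, ref):.2f} px")
```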

A 2-Dimensional Barcode Detection Algorithm based on Block Contrast and Projection (블록 명암대비와 프로젝션에 기반한 2차원 바코드 검출 알고리즘)

  • Choi, Young-Kyu
    • The KIPS Transactions:PartB
    • /
    • v.15B no.4
    • /
    • pp.259-268
    • /
    • 2008
  • In an effort to increase the data capacity of one-dimensional symbology, 2D barcodes were proposed a decade ago. In this paper, we present an effective algorithm for detecting 2D barcodes in gray-level images, designed especially for handheld 2D barcode recognition systems. To locate the symbol in the image, a criterion based on block contrast is adopted, and a gray-scale projection with sub-pixel operations is used to segment the symbol precisely from the region of interest (ROI). Finally, the segmented ROI is normalized using an inverse perspective transformation for the subsequent decoding steps. We also introduce post-processing steps for decoding QR codes. The proposed method maintains high performance under various lighting and printing conditions and under strong perspective deformation. Experiments show that our method detects the code area robustly and efficiently, in real time, for various types of 2D barcodes.
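
A block-contrast test of the kind described above can be illustrated roughly as follows; the window size and threshold are arbitrary assumptions, and this is not the authors' implementation.

```python
import numpy as np

def block_contrast_mask(gray, block=16, threshold=60):
    """Mark blocks whose (max - min) gray-level contrast exceeds a threshold.

    gray: 2-D uint8 array. Returns a boolean map of candidate symbol blocks,
    which would then be clustered/merged into a region of interest.
    """
    h, w = gray.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = gray[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            mask[by, bx] = (int(blk.max()) - int(blk.min())) > threshold
    return mask

# Usage sketch: mask = block_contrast_mask(gray_image); high-contrast blocks
# typically cluster over the barcode, while flat background blocks drop out.
```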

Performance Improvement of Fractal Dimension Estimator Based on a New Sampling Method (새로운 샘플링법에 기초한 프랙탈 차원 추정자의 정도 개선)

  • Jin, Gang-Gyoo;Choi, Dong-Sik
    • Journal of Navigation and Port Research
    • /
    • v.38 no.1
    • /
    • pp.45-52
    • /
    • 2014
  • Fractal theory has been widely used to quantify the complexity of remotely sensed digital elevation models and images. Despite successful applications of fractals in a variety of fields, including computer graphics, engineering, and the geosciences, the performance of fractal estimators depends strongly on how the data are sampled. In this paper, we propose an algorithm for computing the fractal dimension based on the triangular prism method and a new sampling method. The proposed sampling method combines two existing methods, the geometric step method and the divisor step method, to increase pixel utilization. In addition, while existing estimation methods are based on an N×N window, the proposed method is extended to an N×M window. The proposed method is applied to generated fractal DEMs, the Brodatz image database, and real images taken on campus to demonstrate its feasibility.
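
A minimal sketch of the classic triangular prism estimator that the paper builds on (square windows, uniform geometric steps, and D = 2 − slope as one common convention); it does not reproduce the proposed combined sampling or the N×M extension.

```python
import numpy as np

def tri_area(p, q, r):
    """Area of a 3-D triangle given its vertices."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def triangular_prism_dimension(dem, steps=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square DEM/image surface."""
    n = min(dem.shape)
    areas = []
    for s in steps:
        total = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                # four cell corners and the prism apex at the corner average
                a = np.array([i, j, dem[i, j]], float)
                b = np.array([i + s, j, dem[i + s, j]], float)
                c = np.array([i + s, j + s, dem[i + s, j + s]], float)
                d = np.array([i, j + s, dem[i, j + s]], float)
                e = np.array([i + s / 2, j + s / 2, (a[2] + b[2] + c[2] + d[2]) / 4.0])
                total += (tri_area(a, b, e) + tri_area(b, c, e)
                          + tri_area(c, d, e) + tri_area(d, a, e))
        areas.append(total)
    slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
    return 2.0 - slope  # one common convention for this estimator

# quick smoke test on a noisy synthetic surface
rng = np.random.default_rng(0)
print(triangular_prism_dimension(rng.random((65, 65)) * 10))
```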

Accurate and efficient GPU ray-casting algorithm for volume rendering of unstructured grid data

  • Gu, Gibeom;Kim, Duksu
    • ETRI Journal
    • /
    • v.42 no.4
    • /
    • pp.608-618
    • /
    • 2020
  • We present a novel GPU-based ray-casting algorithm for volume rendering of unstructured grid data. Our volume rendering system uses a ray-casting method that guarantees accurate rendering results. We also employ the per-pixel intersection list concept in the Bunyk algorithm to guarantee an accurate result for non-convex meshes. For efficient memory access for the lists on the GPU, we represent the intersection lists for all faces as an array with our novel construction algorithm. With the intersection lists, we perform ray-casting on a GPU, and a GPU thread handles each ray. To increase ray-coherency in a thread block and improve memory access efficiency, we extend a prior image-tile-based work distribution method to fit modern GPU architectures. We also show that a prior approach using a per-thread local buffer to reduce redundant computation is not appropriate for modern GPU architectures. Instead, we take an on-demand calculation strategy that achieves better performance even though it allows duplicate computations. We applied our method to three unstructured grid datasets with different characteristics. With a GPU, our method achieved up to 36.5 times higher performance for the ray-casting process and 19.7 times higher performance for the whole volume rendering process compared with the Bunyk algorithm using a CPU core. Also, our approach showed up to 8.2 times higher performance than a GPU-based cell projection method while generating more accurate rendering results. These results demonstrate the efficiency and accuracy of our method.
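
The per-pixel intersection-list idea boils down to sorting a ray's cell intersections by depth and compositing them front to back; a CPU-side sketch of that compositing step is shown below (illustrative only, not the paper's GPU implementation).

```python
def composite_ray(intersections, transfer_fn):
    """Front-to-back compositing of one ray's intersection list.

    intersections: list of (t, scalar) samples along the ray.
    transfer_fn(scalar) -> (r, g, b, alpha).
    """
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for _, scalar in sorted(intersections, key=lambda x: x[0]):  # sort by depth t
        r, g, b, a = transfer_fn(scalar)
        weight = (1.0 - alpha) * a           # remaining transparency
        color = [c + weight * s for c, s in zip(color, (r, g, b))]
        alpha += weight
        if alpha > 0.99:                     # early ray termination
            break
    return (*color, alpha)

# Hypothetical usage with a grayscale ramp transfer function
print(composite_ray([(0.2, 0.1), (0.7, 0.9), (0.4, 0.5)], lambda s: (s, s, s, s)))
```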

The 3D Depth Extraction Method by Edge Information Analysis in Extended Depth of Focus Algorithm (확장된 피사계 심도 알고리즘에서 엣지 정보 분석에 의한 3차원 깊이 정보 추출 방법)

  • Kang, Sunwoo;Kim, Joon Seek;Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.2
    • /
    • pp.139-146
    • /
    • 2016
  • Recently, the popularity of 3D technology has grown significantly, and it has many applications across various fields of industry. To overcome the limitations of 2D machine vision technologies based on 2D images, 3D measurement technologies are needed. There are many 3D measurement methods, such as scanning probe microscopy, phase-shifting interferometry, confocal scanning microscopy, and white-light scanning interferometry. In this paper, we use the extended depth of focus (EDF) algorithm among these 3D measurement methods. The EDF algorithm extracts 3D information from 2D images acquired by a short-range depth camera. We propose an EDF algorithm that uses the edge information of the images and the average values of all pixels along the z-axis to improve on the conventional method. To verify the performance of the proposed method, we use various synthetic images generated with a point spread function (PSF) algorithm. Because the depth information of these synthetic images is known, the proposed and conventional methods can be compared directly. Experimental results show that the PSNR of the proposed algorithm improved by about 1 to 30 dB over the conventional method.
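
As a rough illustration of extracting depth from a focus stack, the sketch below computes a simple gradient-energy focus measure per slice and picks the sharpest slice per pixel; the paper's edge-information analysis and z-axis averaging are not reproduced here.

```python
import numpy as np

def depth_from_focus(stack):
    """Estimate per-pixel depth from a focus stack.

    stack: array of shape (num_z, H, W) with grayscale images of the same
    scene taken at different focus depths.
    Returns (depth_index_map, all_in_focus_image).
    """
    focus = np.empty(stack.shape, dtype=float)
    for k, img in enumerate(stack):
        gy, gx = np.gradient(img.astype(float))
        focus[k] = gx**2 + gy**2            # simple gradient-energy focus measure
    depth = np.argmax(focus, axis=0)        # per-pixel index of the sharpest slice
    fused = np.take_along_axis(stack, depth[None, ...], axis=0)[0]
    return depth, fused

# Usage sketch: depth_map, sharp = depth_from_focus(np.stack(z_slices))
```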

Adaptive Denoising for Low Light Level Environment Using Frequency Domain Analysis (주파수 해석에 따른 저조도 환경의 적응적 잡음제거)

  • Yi, Jeong-Youn;Lee, Seong-Won
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.128-137
    • /
    • 2012
  • When a CCD camera acquires images in a low-light environment, not only the image signal but also the noise components are amplified by the AGC (auto gain control) circuit. Since the noise level of images acquired in the dark is very high, it is difficult to remove the noise with existing denoising algorithms that target images taken under normal lighting conditions. In this paper, we propose an adaptive denoising algorithm that can efficiently remove the significant noise caused by low light levels. First, the window containing the target pixel is transformed to the frequency domain. The algorithm then compares the characteristics of four equally divided frequency bands. Finally, the noise is removed adaptively according to the frequency characteristics. The proposed algorithm improves the quality of low-light images more effectively than existing algorithms.
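
The band-comparison step can be illustrated on a single window: transform it to the frequency domain and measure the energy in four radial bands. The sketch below is a generic illustration, not the proposed algorithm; the radial band split is an assumption.

```python
import numpy as np

def band_energies(window, n_bands=4):
    """Split the 2-D spectrum of a window into radial bands and return
    the mean energy of each band, ordered from low to high frequency."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(window)))**2
    h, w = window.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    r_max = radius.max()
    energies = []
    for b in range(n_bands):
        lo, hi = b * r_max / n_bands, (b + 1) * r_max / n_bands
        mask = (radius >= lo) & (radius < hi)
        energies.append(float(spec[mask].mean()))
    return energies

# If the high-frequency bands carry energy comparable to the mid bands, the
# window is likely noise-dominated and stronger smoothing can be applied to it.
```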

New Motion Vector Prediction for Efficient H.264/AVC Full Pixel Motion Estimation (H.264/AVC의 효율적인 전 영역 움직임 추정을 위한 새로운 움직임 벡터 예측 방법 제안)

  • Choi, Jin-Ha;Lee, Won-Jae;Kim, Jae-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.70-79
    • /
    • 2007
  • H.264/AVC motion estimation involves a large amount of repeated computation. As a result, encoding takes a long time, and it is very hard to implement a real-time encoder. Many fast algorithms have been proposed to reduce the computation time, but they cannot guarantee encoding quality. In this paper, we propose a new motion vector prediction method for efficient and fast full-search motion estimation in H.264/AVC, based on independent motion vector prediction and SAD sharing. Using our algorithm, motion estimation requires 80% less computation and causes less image distortion (a smaller PSNR drop) than the previous full-search scheme. In simulations of the proposed method, the maximum Y PSNR drop is about 0.04 dB and the average bit-rate increase is about 0.6%.
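
For context, the baseline that such methods accelerate is an exhaustive SAD-based full search over a block's search window; a minimal sketch follows (block size, search range, and frame layout are assumptions, and the paper's motion vector prediction and SAD sharing are not reproduced here).

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def full_search(cur, ref, bx, by, block=16, search=16):
    """Exhaustive motion search for the block at (bx, by) of the current frame."""
    best, best_cost = (0, 0), float("inf")
    cur_blk = cur[by:by + block, bx:bx + block]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cost = sad(cur_blk, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost  # motion vector and its SAD
```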

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.3
    • /
    • pp.135-152
    • /
    • 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions with aerial multi-sensor systems for quantitative land management, but most of the data are used only for the purposes of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, offer high classification accuracy, but it is difficult to classify the land cover state accurately because only the visible and near-infrared wavelengths are acquired and the spatial resolution is low. Therefore, research is needed on improving the accuracy of land cover classification by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and land cover classification accuracy was evaluated while applying incremental changes to the fusion variables. Optimal fusion variables for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
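
In its simplest form, accumulating bands amounts to stacking co-registered layers from each sensor into one feature cube, optionally scaled per source; the sketch below is a loose illustration under that assumption, not the proposed pixel ratio or spectral graph adjustment methods.

```python
import numpy as np

def accumulate_bands(sources, weights=None):
    """Stack co-registered sensor layers into a single feature cube.

    sources: list of arrays shaped (bands_i, H, W), already resampled to a
    common grid; weights: optional per-source scale factors (a stand-in for
    a pixel-ratio style adjustment).
    """
    weights = weights or [1.0] * len(sources)
    scaled = [w * s.astype(float) for w, s in zip(weights, sources)]
    return np.concatenate(scaled, axis=0)  # shape: (sum of bands, H, W)

# Hypothetical usage: hyperspectral cube + 4-band multispectral + 1-band nDSM
# fused = accumulate_bands([hyper, multi, ndsm[None]], weights=[1.0, 0.8, 1.2])
```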

A Ray-Tracing Algorithm Based On Processor Farm Model (프로세서 farm 모델을 이용한 광추적 알고리듬)

  • Lee, Hyo Jong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.2 no.1
    • /
    • pp.24-30
    • /
    • 1996
  • The ray-tracing method, one of many photorealistic rendering techniques, requires heavy computation to synthesize images. Parallel processing can be used to reduce the computation time. A parallel ray-tracing algorithm has been implemented and executed for various images on transputer systems. To obtain a scalable parallel algorithm, a processor farming technique is exploited. Since each image is divided and distributed among the farming processors, scalability of the parallel system and load balancing are achieved naturally in the proposed algorithm. The efficiency of the parallel algorithm reaches 95% for nine processors. However, the best size for a distributed task is much larger for simple images, because each pixel requires less computation. Efficiency degradation is observed for large-granularity tasks because of the load imbalance caused by large tasks. Overall, transputer systems behave as a scalable parallel processing system with a good cost-performance ratio.
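
Farm-style distribution of image regions to worker processes can be illustrated with Python multiprocessing; the sketch below is a stand-in for the transputer implementation, with the number of rows per task playing the role of the task granularity discussed above.

```python
from multiprocessing import Pool
import numpy as np

def render_tile(task):
    """Worker: render one strip of rows (a stand-in computation, not ray tracing)."""
    y0, y1, width = task
    # In a real ray tracer, each pixel here would spawn a primary ray.
    return y0, np.fromfunction(lambda y, x: (y + y0) * x % 256, (y1 - y0, width))

def farm_render(height=240, width=320, rows_per_task=16, workers=4):
    """Distribute row strips to a pool of workers and assemble the image."""
    tasks = [(y, min(y + rows_per_task, height), width)
             for y in range(0, height, rows_per_task)]
    image = np.empty((height, width))
    with Pool(workers) as pool:
        for y0, strip in pool.imap_unordered(render_tile, tasks):
            image[y0:y0 + strip.shape[0]] = strip
    return image

if __name__ == "__main__":
    print(farm_render().shape)
```

Smaller strips balance the load better across workers, while very large strips reproduce the granularity-related imbalance noted in the abstract.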
