Title/Summary/Keyword: Pixel-Based


SHADOW EXTRACTION FROM ASTER IMAGE USING MIXED PIXEL ANALYSIS

  • Kikuchi, Yuki;Takeshi, Miyata;Masataka, Takagi
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.727-731
    • /
    • 2003
  • ASTER imagery has several advantages for classification, such as 15 spectral bands and 15 m to 90 m spatial resolution. However, in classification using general remote sensing imagery, shadow areas are often misclassified as water; distinguishing shadow from water is very difficult because the reflectance characteristics of water are similar to those of shadow. Moreover, many land cover items are contained in a single pixel at 15 m spatial resolution. Nowadays, very high resolution satellite imagery (IKONOS, QuickBird) and Digital Surface Models (DSM) from airborne laser scanners are also available. In this study, mixed pixel analysis of an ASTER image was carried out using an IKONOS image and a DSM. Mixed pixel analysis requires highly accurate geometric correction, so an image matching method was applied to generate GCP datasets and the IKONOS image was rectified by an affine transform. After that, each pixel in the ASTER image was compared with the corresponding 15×15 pixels in the IKONOS image, and training datasets for mixed pixel analysis were generated by visual interpretation of the IKONOS image. Finally, classification was carried out based on a Linear Mixture Model, from which the shadow areas were extracted. The extracted shadow areas were validated against a shadow image generated from a 1 m to 2 m spatial resolution DSM. The results showed a 17.2% error in the mixed pixels, which may indicate a limitation of ASTER imagery for shadow extraction due to its 8-bit quantization.

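A Linear Mixture Model of the kind this abstract relies on treats each coarse pixel spectrum as a linear combination of endmember spectra with fractions that sum to one. The sketch below is a rough illustration only: the band count, endmember values, and soft sum-to-one trick are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative endmember spectra (bands x classes); the numbers are invented.
# Columns might stand for, e.g., water, shadow, and vegetation.
E = np.array([
    [0.05, 0.04, 0.30],
    [0.04, 0.03, 0.45],
    [0.03, 0.03, 0.25],
    [0.02, 0.02, 0.40],
])

x = np.array([0.12, 0.16, 0.10, 0.14])  # observed mixed-pixel reflectance

# Enforce the sum-to-one constraint softly by appending a heavily weighted
# row of ones to the least-squares system (a common unmixing trick).
delta = 100.0
E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
x_aug = np.append(x, delta)

fractions, *_ = np.linalg.lstsq(E_aug, x_aug, rcond=None)
fractions = np.clip(fractions, 0.0, None)   # crude non-negativity fix
fractions /= fractions.sum()                # renormalize

print("estimated class fractions:", fractions)
```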

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal
    • /
    • v.31 no.4
    • /
    • pp.429-437
    • /
    • 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The proposed technique is based on a deep framebuffer framework for fast relighting computation and adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep framebuffer pixel and then computes the illumination at each pixel from the cache values taken from the deep framebuffers. All the relighting computations except the deep framebuffer pre-computation are carried out at interactive rates on the GPU.
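
In a deep framebuffer pipeline of this kind, the expensive geometry pass runs once and relighting only re-evaluates a shading model per pixel from cached attributes. The following minimal sketch assumes a simple Lambertian-plus-Phong light model and invented buffer contents; it is not the paper's shader.

```python
import numpy as np

H, W = 4, 4  # tiny framebuffer for illustration

# Cached per-pixel shading parameters (the "deep framebuffer"); random here.
rng = np.random.default_rng(0)
position = rng.uniform(-1, 1, (H, W, 3))
normal = rng.normal(size=(H, W, 3))
normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
albedo = rng.uniform(0.2, 0.8, (H, W, 3))
specular = 0.3
roughness = 32.0  # Phong exponent

def relight(light_pos, light_color, eye_pos):
    """Re-evaluate illumination per pixel from the cached buffers only."""
    L = light_pos - position
    L /= np.linalg.norm(L, axis=-1, keepdims=True)
    V = eye_pos - position
    V /= np.linalg.norm(V, axis=-1, keepdims=True)
    ndotl = np.clip(np.sum(normal * L, axis=-1, keepdims=True), 0, None)
    R = 2 * ndotl * normal - L                       # reflection vector
    rdotv = np.clip(np.sum(R * V, axis=-1, keepdims=True), 0, None)
    diffuse = albedo * ndotl
    spec = specular * rdotv ** roughness
    return (diffuse + spec) * light_color

image = relight(np.array([2.0, 2.0, 2.0]), np.array([1.0, 0.9, 0.8]),
                np.array([0.0, 0.0, 3.0]))
print(image.shape)  # (4, 4, 3) relit pixels, no geometry pass needed
```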

Pixel Reconstruction of Edge Boundary Block using Multi-Buffer (다중버퍼를 이용한 경계영역 블록의 화소 재조합)

  • 한병준;손창훈;김응성;이근영
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1117-1120
    • /
    • 1999
  • The main purpose of padding methods is to extend the boundary segments of arbitrarily shaped objects to a regular grid so that common block-based coding techniques, such as the 8×8 DCT, can be applied. The conventional padding methods used in MPEG-4, LPE padding and zero padding, operate mainly on 8×8 blocks. In contrast, we propose a new padding method that operates pixel by pixel. The proposed method places pixels into a multi-buffer using the typical value of each boundary block and reconstructs new boundary blocks. Simulation results show that the proposed method improves the coding efficiency over the conventional padding methods.

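The padding problem addressed here is easy to picture on a single boundary block: pixels outside the object mask must be filled before a standard 8×8 DCT can be applied. The sketch below shows only a conventional mean-padding baseline as a reference point; it does not reproduce the paper's multi-buffer pixel reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)   # 8x8 texture block
mask = np.zeros((8, 8), dtype=bool)                   # object shape mask
mask[2:7, 1:6] = True                                 # arbitrary object region

# Mean padding: fill pixels outside the object with the mean of the object
# pixels, so the block can be coded with an ordinary 8x8 DCT.
padded = block.copy()
padded[~mask] = block[mask].mean()

print(padded.round(1))
```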

Triqubit-State Measurement-Based Image Edge Detection Algorithm

  • Wang, Zhonghua;Huang, Faliang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1331-1346
    • /
    • 2018
  • To address the problem that gradient-based edge detection operators are sensitive to noise and therefore produce pseudo edges, a triqubit-state measurement-based edge detection algorithm is presented in this paper. Combining local and global image structure information, triqubit superposition states are used to represent pixel features in order to locate image edges. The algorithm consists of three steps. First, an improved partial differential method is used to smooth the defect image. Second, the triqubit state is characterized by three elements, pixel saliency, edge statistical characteristics, and gray-scale contrast, to map the defect image from gray space to quantum space. Third, the edge image is output according to quantum measurement, local gradient maximization, and neighborhood chain-code searching. Simulation experiments indicate that, compared with other methods, our algorithm produces fewer pseudo edges and achieves higher edge detection accuracy.
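
The quantum-state mapping itself does not fit in a short snippet, but the conventional ingredients the abstract leans on, smoothing followed by local gradient maximization, can be sketched as a baseline. The code below is an assumed illustration only (Gaussian smoothing plus Sobel gradients with a simple local-maximum test), not the triqubit measurement of the paper.

```python
import numpy as np
from scipy import ndimage

def baseline_edges(image, sigma=1.0, thresh=0.2):
    """Smooth, take gradient magnitude, keep local maxima above a threshold."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    # crude local-maximum test in a 3x3 neighbourhood
    local_max = mag == ndimage.maximum_filter(mag, size=3)
    return (mag > thresh) & local_max

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                      # bright square on a dark background
edges = baseline_edges(img)
print(edges.sum(), "edge pixels found")
```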

Half-Pixel Correction for MPEG-2/H.264 Transcoding (DCT 기반 MPEG-2/H.264 변환을 위한 1/2 화소 보정)

  • Kwon Soon-young;Lee Joo-kyong;Chung Ki-dong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.10
    • /
    • pp.956-962
    • /
    • 2005
  • To improve video quality and coding efficiency, H.264/AVC adopts a different half-pixel interpolation method from the previous standards. Consequently, a transcoder requires additional work to convert video content pre-coded with the previous standards into H.264/AVC in the DCT domain. In this paper, we propose the first half-pixel correction method for MPEG-2 to H.264 transcoding in the DCT domain. In the proposed method, the MPEG-2 block is added to a correction block obtained by computing the difference between the half-pixel values of the two standards using a DCT-domain reference frame. Experimental results show that the proposed method achieves better quality than pixel-based cascaded transcoding.
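
A correction block is needed because the two standards interpolate half-pixel samples differently: MPEG-2 averages the two neighbouring integer pixels, while H.264/AVC uses the 6-tap filter (1, -5, 20, 20, -5, 1)/32. The sketch below shows that difference in the pixel domain for a horizontal half-pel position; the paper performs the correction in the DCT domain, which is not shown here.

```python
import numpy as np

row = np.array([60, 64, 70, 80, 90, 95, 97, 100], dtype=int)  # integer pixels

def half_pel_mpeg2(p, i):
    """MPEG-2 horizontal half-pel between p[i] and p[i+1]: rounded average."""
    return (p[i] + p[i + 1] + 1) // 2

def half_pel_h264(p, i):
    """H.264 horizontal half-pel between p[i] and p[i+1]: 6-tap filter."""
    taps = np.array([1, -5, 20, 20, -5, 1])
    window = p[i - 2:i + 4]
    value = (int(np.dot(taps, window)) + 16) >> 5   # /32 with rounding
    return int(np.clip(value, 0, 255))

i = 3
m2 = half_pel_mpeg2(row, i)
h264 = half_pel_h264(row, i)
print("MPEG-2:", m2, " H.264:", h264, " correction:", h264 - m2)
```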

Fingerprint Sensor Based on a Skin Resistivity with 256×256 Pixel Array (256×256 픽셀 어레이 저항형 지문센서)

  • Jung, Seung-Min
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.3
    • /
    • pp.531-536
    • /
    • 2009
  • In this paper, we propose a 256×256 pixel-array fingerprint sensor with an advanced detection circuit. The simple pixel-level detection circuit effectively converts a small, variable sensing current into a binary voltage output. The influence of electrostatic discharge (ESD) is minimized by applying an effective isolation structure around each unit pixel. The sensor circuit blocks were designed and simulated in a standard 0.35 μm CMOS process. A full-custom layout is used for the unit sensor pixel, and automatic placement and routing is used for the full chip.
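
Behaviourally, the per-pixel detection circuit thresholds skin resistivity into a binary value at each of the 256×256 sites. The toy model below illustrates only that logical output, not the circuit; the resistance range and threshold are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated skin resistance map: ridges in contact conduct better (lower
# resistance) than valleys. Units and values are illustrative only.
resistance = rng.uniform(1e5, 1e7, size=(256, 256))   # ohms

threshold = 1e6                                        # assumed trip point
fingerprint_bits = (resistance < threshold).astype(np.uint8)  # 1 = ridge

print(fingerprint_bits.shape, fingerprint_bits.mean())
```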

Semantic Image Segmentation Combining Image-level and Pixel-level Classification (영상수준과 픽셀수준 분류를 결합한 영상 의미분할)

  • Kim, Seon Kuk;Lee, Chil Woo
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.12
    • /
    • pp.1425-1430
    • /
    • 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. To improve the accuracy of semantic segmentation, we combine pixel-level object classification with image-level object classification. The image-level classification captures the overall characteristics of the image, while the pixel-level classification indicates which object region each pixel belongs to. The proposed network consists of three parts: a part that extracts image features, a part that outputs the final result at the resolution of the original image, and a part that performs the image-level object classification. Separate loss functions are used for the two tasks: KL-divergence for the image-level classification and cross-entropy for the pixel-level classification. In addition, the feature-extraction layers are combined with the layers of matching resolution in order to recover the positional information and object-boundary information lost through the pooling operations.
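
The two losses named above can be combined directly: KL-divergence between the predicted and target image-level class distributions, plus per-pixel cross-entropy for the segmentation map. The sketch below makes assumptions not stated in the paper (shapes, class count, how the image-level target is formed, and the 0.5 weighting factor).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

C, H, W = 3, 4, 4                      # classes and a tiny output resolution
rng = np.random.default_rng(3)

pixel_logits = rng.normal(size=(H, W, C))      # pixel-level branch output
image_logits = rng.normal(size=(C,))           # image-level branch output

pixel_labels = rng.integers(0, C, size=(H, W)) # ground-truth class per pixel
image_target = np.bincount(pixel_labels.ravel(), minlength=C) / (H * W)

# Pixel-level loss: mean cross-entropy over all pixels.
p = softmax(pixel_logits)
ce = -np.log(p[np.arange(H)[:, None], np.arange(W)[None, :], pixel_labels] + 1e-12)
pixel_loss = ce.mean()

# Image-level loss: KL(target distribution || predicted distribution).
q = softmax(image_logits)
kl = np.sum(image_target * np.log((image_target + 1e-12) / (q + 1e-12)))

total_loss = pixel_loss + 0.5 * kl     # 0.5 is an assumed weighting factor
print(pixel_loss, kl, total_loss)
```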

A Wide Dynamic Range CMOS Image Sensor Based on a Pseudo 3-Transistor Active Pixel Sensor Using Feedback Structure

  • Bae, Myunghan;Jo, Sung-Hyun;Lee, Minho;Kim, Ju-Yeong;Choi, Jinhyeon;Choi, Pyung;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.21 no.6
    • /
    • pp.413-419
    • /
    • 2012
  • A dynamic range extension technique is proposed based on a 3-transistor active pixel sensor (APS) with a gate/body-tied p-channel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector using a feedback structure. The new APS consists of a pseudo 3-transistor APS, an additional gate/body-tied PMOSFET-type photodetector, and an NMOSFET switch introduced to extend the dynamic range; the additional detector and the NMOSFET switch are integrated into the APS to provide negative feedback. The proposed APS and the pseudo 3-transistor APS were designed and fabricated in a 0.35 μm 2-poly 4-metal standard complementary metal oxide semiconductor (CMOS) process, and their optical responses were measured and characterized. Although the proposed pixel is larger than the pseudo 3-transistor APS, it achieves a significantly extended dynamic range of 98 dB, compared to 28 dB for the pseudo 3-transistor APS. The proposed pixel can be switched between two operating modes, normal mode and WDR mode, depending on the transfer gate voltage. We also present an imaging system using the proposed APS.
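
As a back-of-the-envelope check on the quoted figures (using the usual dynamic range definition with generic signal symbols, not the paper's notation), 98 dB corresponds to a maximum-to-minimum detectable signal ratio of roughly 8 × 10⁴, versus about 25 for the 28 dB pixel:

```latex
\mathrm{DR} = 20\log_{10}\!\frac{S_{\max}}{S_{\min}},\qquad
10^{98/20}\approx 7.9\times 10^{4},\qquad
10^{28/20}\approx 25
```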

Background Subtraction based on GMM for Night-time Video Surveillance (야간 영상 감시를 위한 GMM기반의 배경 차분)

  • Yeo, Jung Yeon;Lee, Guee Sang
    • Smart Media Journal
    • /
    • v.4 no.3
    • /
    • pp.50-55
    • /
    • 2015
  • In this paper, we present a background modeling method based on a Gaussian mixture model (GMM) for background subtraction in night-time video surveillance. In night-time video it is hard to distinguish objects from the background, because background pixels are similar to object pixels. To address this problem, in a preprocessing step we transform the pixels of the input frame with scaled histogram stretching into values better suited to building the Gaussian mixture model. Using the scaled pixel values of the input frame, we then apply a GMM to estimate the background pixel by pixel. When a pixel of the next frame does not match any Gaussian, the matching test in the conventional GMM method discards stored background information by eliminating the Gaussian distribution with the lowest weight. Instead of removing that Gaussian, we preserve the accumulated data by applying the difference between the old mean and the new pixel intensity to the new mean. Experiments demonstrate the effectiveness and superiority of the proposed background modeling method.
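
The per-pixel mixture update underlying this family of methods can be sketched for a single pixel as below. This is a simplified version of the standard Stauffer-Grimson update, shown only as context: the scaled histogram stretching and the mean-difference rule that replaces Gaussian removal are the paper's contributions and appear here only as comments.

```python
import numpy as np

K, alpha, match_sigma = 3, 0.02, 2.5   # mixture size, learning rate, match band

# Per-pixel mixture state (one pixel shown): weights, means, variances.
w = np.array([0.6, 0.3, 0.1])
mu = np.array([50.0, 120.0, 200.0])
var = np.array([100.0, 100.0, 100.0])

def update(x):
    """Simplified per-pixel GMM update for a new (already stretched) intensity x."""
    global w, mu, var
    d = np.abs(x - mu)
    matched = d < match_sigma * np.sqrt(var)
    if matched.any():
        k = np.argmax(matched)                 # first matching Gaussian
        mu[k] += alpha * (x - mu[k])
        var[k] += alpha * ((x - mu[k]) ** 2 - var[k])
        w = (1 - alpha) * w
        w[k] += alpha
    else:
        # Classical rule: replace the lowest-weight Gaussian with the new pixel.
        # (The paper instead folds the old mean into the new one to keep the
        # accumulated background information.)
        k = np.argmin(w)
        mu[k], var[k] = x, 400.0
    w /= w.sum()
    # Background test: does x match one of the dominant Gaussians?
    order = np.argsort(-w / np.sqrt(var))
    background = order[:2]                     # crude background-model selection
    return matched[background].any()

print(update(52.0))   # likely background
print(update(250.0))  # likely foreground on first appearance
```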

Binary Connected-component Labeling with Block-based Labels and a Pixel-based Scan Mask (블록기반 라벨과 화소기반 스캔마스크를 이용한 이진 연결요소 라벨링)

  • Kim, Kyoil
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.287-294
    • /
    • 2013
  • Binary connected-component labeling is widely used in image processing and computer vision. Many labeling techniques have been developed, and the two-scan approach is known to be the fastest among them. Traditionally, pixel-based scan masks have been used in the first stage of the two-scan approach. Recently, block-based labeling techniques were introduced by C. Grana et al. and L. He et al., and they are faster than pixel-based labeling methods. In this paper, we propose a new binary connected-component labeling technique that uses block-based labels with a pixel-based scan mask. Experimental results on various images show that the proposed method is faster than He et al.'s method, which is currently known to be the fastest; the performance gain ranges from 3.9% to 22.4% on average, depending on the type of image.
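
For context, the two-scan strategy assigns provisional labels with a scan mask on the first pass, records label equivalences, and resolves them on the second pass. The compact reference version below is pixel-based with a 4-connected mask and a union-find table; it does not reproduce the block-based label handling of the proposed method.

```python
import numpy as np

def two_scan_label(binary):
    """Two-scan 4-connected labeling with a union-find equivalence table."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                                  # parent[0] unused

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            neighbours = [l for l in (up, left) if l]
            if not neighbours:                    # new provisional label
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:                                 # take smallest root, merge
                m = min(find(l) for l in neighbours)
                labels[y, x] = m
                for l in neighbours:
                    parent[find(l)] = m
    # Second scan: replace provisional labels with their representatives.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(two_scan_label(img))
```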