• Title/Summary/Keyword: 에지 영상 (edge image)

Search Result 1,239, Processing Time 0.028 seconds

Automatic Extraction of the Land Readjustment Paddy for High-level Land Cover Classification (토지 피복 세분류를 위한 경지 정리 논 자동 추출)

  • Yeom, Jun Ho;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.5
    • /
    • pp.443-450
    • /
    • 2014
  • To meet the recent increase in public and private demand for various spatial data, central and local governments have started to produce such data. The low-level land cover map has been produced since 2000, whereas production of the high-level land cover map only began in 2010 and has so far been completed for only a few regions. Although many studies have been carried out to improve the quality of land cover maps, most of them have focused on low-level and mid-level classification, so research on high-level classification is still insufficient. Therefore, in this study, we propose the automatic extraction of readjusted paddy fields, starting from the paddy class updated in the mid-level land cover mapping. RapidEye satellite images, which are considered efficient for agricultural applications, were used, and high-pass filtering was applied to emphasize the outlines of the paddy fields. Binary images of the paddy outlines were then generated by Otsu thresholding. The boundary information of the paddy fields was extracted through image-to-map registration and masking with the paddy land cover class. Lastly, the broken edges were linked, and the linear features of the paddy outlines were extracted by regional Hough line extraction. Start and end points that were close to each other were connected to complete the paddy field outlines. The boundaries of readjusted paddy fields could be extracted efficiently, and we conclude that this study contributes to the automatic production of a high-level land cover map for paddy fields.

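The pipeline above (high-pass filtering, Otsu thresholding, regional Hough line extraction) can be illustrated with a short OpenCV sketch. This is not the paper's implementation: the Laplacian kernel, the file name "paddy.tif", and all Hough parameters are assumptions chosen only to show the idea.

```python
import cv2
import numpy as np

img = cv2.imread("paddy.tif", cv2.IMREAD_GRAYSCALE)

# High-pass filtering to emphasize the paddy outlines (Laplacian as a stand-in).
highpass = cv2.convertScaleAbs(cv2.Laplacian(img, cv2.CV_16S, ksize=3))

# Otsu thresholding turns the emphasized outlines into a binary edge image.
_, binary = cv2.threshold(highpass, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Probabilistic Hough transform extracts the linear segments of the outlines;
# nearby start/end points would then be linked to close the field boundaries.
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)
```
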
Content Analysis-based Adaptive Filtering in The Compressed Satellite Images (위성영상에서의 적응적 압축잡음 제거 알고리즘)

  • Choi, Tae-Hyeon;Ji, Jeong-Min;Park, Joon-Hoon;Choi, Myung-Jin;Lee, Sang-Keun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.84-95
    • /
    • 2011
  • In this paper, we present a deblocking algorithm that removes the grid and staircase noise, known as "blocking artifacts", that occurs in compressed satellite images. In the given satellite images, each row is compressed with an equal quantization coefficient chosen according to region complexity, and more complicated regions are compressed more strongly. This approach, however, has the problem that relatively less complicated regions lying in the same row as complicated regions show blocking artifacts. Removing these artifacts with a general deblocking algorithm also blurs complex regions, which is undesirable, and general filters do not preserve curved edges well. Therefore, the proposed algorithm is an adaptive filtering scheme that removes blocking artifacts while preserving image details, including curved edges, using the given quantization step size and content analysis. In particular, WLFPCA (weighted lowpass filter using principal component analysis) is employed to reduce the artifacts around edges. Experimental results show that the proposed method outperforms SA-DCT in terms of subjective image quality.

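As a rough illustration of content-adaptive deblocking, the sketch below smooths only low-activity regions and leaves high-activity (edge/texture) regions untouched. It does not reproduce the paper's WLFPCA filter or its use of the quantization step size; the variance threshold, filter sizes, and file name are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("compressed.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Local activity map: variance within a small neighborhood.
mean = cv2.blur(img, (5, 5))
var = cv2.blur(img * img, (5, 5)) - mean * mean

# Smooth only the "less complicated" (low-variance) regions, where the
# blocking artifacts are most visible, and keep detailed regions as they are.
smooth = cv2.GaussianBlur(img, (5, 5), 1.0)
flat = var < 50.0
out = np.where(flat, smooth, img).astype(np.uint8)
```
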
Video object segmentation using a novel object boundary linking (새로운 객체 외곽선 연결 방법을 사용한 비디오 객체 분할)

  • Lee Ho-Suk
    • The KIPS Transactions:PartB
    • /
    • v.13B no.3 s.106
    • /
    • pp.255-274
    • /
    • 2006
  • The moving object boundary is very important for accurate segmentation of a moving object. We extract the moving object boundary from the moving object edge, but the extracted boundary contains breaks, so we develop a novel boundary linking algorithm to connect them. The linking algorithm forms a quadrant around a terminating pixel of a broken boundary and searches clockwise, in concentric circles within a search radius in the forward direction, for another terminating pixel to link; this guarantees a shortest-distance link. We register the background from the image sequence using stationary background filtering. We construct two object masks, one from the boundary linking and the other from the initial moving object, and use these two complementary masks to segment the moving objects. The main contribution of this work is the novel object boundary linking algorithm for accurate segmentation. Using the boundary linking and the automatically registered background, we achieve accurate segmentation of a moving object, of multiple moving objects, of an object containing a hole, of thin objects, and of moving objects in a complex background. We evaluate the algorithms on standard MPEG-4 test video sequences and on real indoor and outdoor video sequences. The proposed algorithms are efficient and process, on average, 70.20 QCIF frames per second and 19.7 CIF frames per second on a Pentium-IV 3.4 GHz personal computer, enabling real-time object-based processing.

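A minimal sketch of the endpoint-linking idea follows: terminating pixels of a binary boundary map (pixels with exactly one 8-connected neighbor) are connected to the nearest other terminating pixel within a search radius. The paper's clockwise quadrant search in the forward direction is simplified here to a plain nearest-neighbor search, and the radius value is an assumption.

```python
import cv2
import numpy as np

def link_broken_boundary(boundary, radius=10):
    """Connect nearby terminating pixels of a binary boundary image."""
    b = (boundary > 0).astype(np.uint8)
    # Count the 8-connected neighbors of every boundary pixel.
    kernel = np.ones((3, 3), np.float32)
    kernel[1, 1] = 0.0
    neighbors = cv2.filter2D(b, -1, kernel)
    ys, xs = np.where((b == 1) & (neighbors == 1))   # terminating pixels
    ends = np.stack([xs, ys], axis=1)
    out = b * 255
    for i, p in enumerate(ends):
        d = np.linalg.norm(ends - p, axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))
        if d[j] <= radius:                           # shortest-distance link
            cv2.line(out, tuple(map(int, p)), tuple(map(int, ends[j])), 255, 1)
    return out
```
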
Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1121-1135
    • /
    • 2005
  • This paper proposes a moving object extraction method using a contrast map and salient points. To build the contrast map, we generate three feature maps, a luminance map, a color map, and a directional map, and extract salient points from the image. Using these features, we can easily determine the location of the Attention Window (AW). The purpose of the AW is to remove useless regions of the image, such as the background, and to reduce the amount of image processing. To obtain an accurate location and a flexible size for the AW, we use motion features instead of prior assumptions or heuristic parameters. After determining the AW, we compute the edge difference with respect to its inner area, from which we extract horizontal and vertical candidate regions. The intersection of the two candidate regions, obtained by a logical AND operation, is then refined by morphological operations. The proposed algorithm has been applied to many video sequences with a static background, such as surveillance video, and the moving objects were segmented well, with accurate boundaries.

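The sketch below illustrates only the luminance part of a contrast map and a fixed-size attention window placed at its maximum; the color and directional maps, salient points, motion-based window sizing, and candidate-region steps of the paper are omitted. The window size, blur size, and file name are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Center-surround luminance contrast: |pixel - local mean|.
contrast = np.abs(gray - cv2.blur(gray, (31, 31)))

# Place a fixed-size attention window around the contrast maximum.
y, x = np.unravel_index(np.argmax(contrast), contrast.shape)
half = 64
aw = frame[max(0, y - half):y + half, max(0, x - half):x + half]
```
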
Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.31-43
    • /
    • 2014
  • A depth map is an important component of stereoscopic image generation. Since the depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied over the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric, and the subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, relatively few works have analyzed the relation between depth map upsampling and stereoscopic images. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric the lowest. Furthermore, the linear regression yields the relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.

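The correlation and regression analysis can be sketched in a few lines of NumPy. The arrays below are invented placeholder scores, not the paper's measurements; they only show how the Pearson correlation and the relative regression weights would be computed.

```python
import numpy as np

# Hypothetical per-method scores (placeholders, not the paper's data).
edge_psnr   = np.array([28.1, 30.4, 31.2, 29.5, 32.0])   # objective metric
blur_metric = np.array([0.42, 0.35, 0.33, 0.40, 0.30])   # objective metric
fatigue     = np.array([3.8, 3.1, 2.9, 3.5, 2.6])        # subjective score

# Pearson correlation between each objective metric and visual fatigue.
r_psnr = np.corrcoef(edge_psnr, fatigue)[0, 1]
r_blur = np.corrcoef(blur_metric, fatigue)[0, 1]

# Linear regression of fatigue on the metrics gives their relative weights.
X = np.column_stack([edge_psnr, blur_metric, np.ones_like(edge_psnr)])
weights, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
print(r_psnr, r_blur, weights)
```
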
Gaussian Noise Reduction Algorithm using Self-similarity (자기 유사성을 이용한 가우시안 노이즈 제거 알고리즘)

  • Jeon, Yougn-Eun;Eom, Min-Young;Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.1-10
    • /
    • 2007
  • Most natural images have a special property, called self-similarity, which is the basis of fractal image coding. Even though an image is locally stationary in several homogeneous regions, it is in general a non-stationary signal, especially in edge regions, which is the main reason that linear techniques give poor results. To overcome this difficulty, we propose a non-linear technique that uses the self-similarity in the image. In our work, an image is classified into stationary and non-stationary regions according to the sample variance. In a stationary region, de-noising is performed by simply averaging the neighborhood. If the region is non-stationary, it is first stationarized by building a set of center pixels of similar blocks, found by similarity matching with respect to the bMSE (block mean square error); de-noising is then performed by Gaussian-weighted averaging of these center pixels, because the set of center pixels of similar blocks can be regarded as nearly stationary. The true image value is estimated as a weighted average of the elements of this set. The experimental results show that, as an estimator, our method has better performance and smaller variance than other methods.

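A condensed, unoptimized sketch of the scheme described above: low-variance (stationary) pixels are replaced by their neighborhood mean, while non-stationary pixels are estimated from a Gaussian-weighted average of the center pixels of similar blocks, with weights derived from the block MSE. Block size, search window, variance threshold, and the weighting constant are illustrative assumptions.

```python
import numpy as np

def denoise_self_similarity(img, block=7, search=10, var_thr=100.0, h=10.0):
    """Self-similarity denoising sketch (unoptimized, brute-force search)."""
    img = img.astype(np.float64)
    pad = block // 2
    padded = np.pad(img, pad + search, mode="reflect")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + pad + search, x + pad + search
            ref = padded[cy - pad:cy + pad + 1, cx - pad:cx + pad + 1]
            if ref.var() < var_thr:                  # stationary region
                out[y, x] = ref.mean()               # simple neighborhood mean
                continue
            # Non-stationary region: Gaussian-weighted average of the center
            # pixels of similar blocks inside the search window (bMSE weights).
            num = den = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    blk = padded[cy + dy - pad:cy + dy + pad + 1,
                                 cx + dx - pad:cx + dx + pad + 1]
                    bmse = np.mean((blk - ref) ** 2)
                    w = np.exp(-bmse / (h * h))
                    num += w * padded[cy + dy, cx + dx]
                    den += w
            out[y, x] = num / den
    return out
```
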
A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls 3D facial expressions in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the skin color distribution as Gaussian, lack robustness under varying lighting conditions and therefore require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

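The paper's Hue-Tint linear skin model is specific to its chrominance space, so the sketch below substitutes a plain rule-based hue/saturation threshold in HSV space merely to illustrate nonparametric skin masking and the subsequent use of edge information inside the detected region. All threshold values and the file name are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rule-based skin mask (illustrative thresholds, not the Hue-Tint model).
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 180, 255], dtype=np.uint8)
skin_mask = cv2.inRange(hsv, lower, upper)

# Edge information inside the detected region, used to refine feature points.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
face_edges = cv2.Canny(gray, 80, 160) & skin_mask
```
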

Development of Video Image-Guided Setup (VIGS) System for Tomotherapy: Preliminary Study (단층치료용 비디오 영상기반 셋업 장치의 개발: 예비연구)

  • Kim, Jin Sung;Ju, Sang Gyu;Hong, Chae Seon;Jeong, Jaewon;Son, Kihong;Shin, Jung Suk;Shin, Eunheak;Ahn, Sung Hwan;Han, Youngyih;Choi, Doo Ho
    • Progress in Medical Physics
    • /
    • v.24 no.2
    • /
    • pp.85-91
    • /
    • 2013
  • At present, megavoltage computed tomography (MVCT) is the only method used to correct the position of tomotherapy patients. MVCT produces extra radiation in addition to the radiation used for treatment, and repositioning also takes up much of the total treatment time. To address these issues, we suggest a video image-guided setup (VIGS) system for correcting the position of tomotherapy patients. We developed an in-house program that corrects the patient position using two orthogonal images obtained from two video cameras installed at 90° and fastened inside the tomotherapy gantry. The system performs automatic registration using edge detection within a user-defined region of interest (ROI). A head-and-neck patient was then simulated with a humanoid phantom. After acquiring the computed tomography (CT) image, tomotherapy planning was performed. To mimic a clinical treatment course, we used an immobilization device to position the phantom on the tomotherapy couch and, using MVCT, corrected its position to match the one captured when the treatment was planned. Video images of the corrected position were used as reference images for the VIGS system. The position was first corrected 10 times using MVCT and then, based on the saved reference video image, corrected 10 times using the VIGS method, and the results of the two correction methods were compared. The results demonstrate that patient positioning with the video-imaging method (41.7±11.2 seconds) takes significantly less time than the MVCT method (420±6 seconds) (p<0.05), while there was no meaningful difference in accuracy between the two methods (x=0.11 mm, y=0.27 mm, z=0.58 mm, p>0.05). Because VIGS delivers accurate positioning in less time than the MVCT method, it is expected to make the overall tomotherapy treatment process more efficient.

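One way to realize edge-based registration of a user-defined ROI, in the spirit of the setup described above, is to compare edge maps of the reference and current camera images by phase correlation. This is only a sketch under assumed file names, ROI coordinates, and Canny thresholds; the abstract does not describe the in-house program's actual registration method in this detail.

```python
import cv2
import numpy as np

ref = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)

y0, y1, x0, x1 = 100, 300, 150, 400            # user-defined ROI (illustrative)
ref_edges = cv2.Canny(ref[y0:y1, x0:x1], 50, 150).astype(np.float32)
cur_edges = cv2.Canny(cur[y0:y1, x0:x1], 50, 150).astype(np.float32)

# Translational setup error of the ROI estimated by phase correlation; the
# orthogonal camera view provides the remaining axis in the same way.
(shift_x, shift_y), _ = cv2.phaseCorrelate(ref_edges, cur_edges)
```
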
High Performance Object Recognition with Application of the Size and Rotational Invariant Feature of the Fourier Descriptor to the 3D Information of Edges (푸리에 표현자의 크기와 회전 불변 특징을 에지에 대한 3차원 정보에 응용한 고효율의 물체 인식)

  • Wang, Shi;Chen, Hongxin;I, Jun-Ho;Lin, Haiping;Kim, Hyong-Suk;Kim, Jong-Man
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.6
    • /
    • pp.170-178
    • /
    • 2008
  • A high-performance object recognition algorithm using a Fourier description of 3D object information is proposed. For most objects, the boundary contains sufficient information for recognition. However, boundaries have not been widely used as the key to object recognition, since obtaining accurate boundary information is not easy and object boundaries vary greatly with the size and orientation of the object. The proposed algorithm is based on 1) accurate object boundaries extracted from the 3D shape obtained with a laser scanning device, and 2) a reduction of the required database achieved through the size and rotation invariance of the Fourier descriptor. The Fourier information is compared with the database, and recognition is done by selecting the best matching object. Experiments have been performed on the rich MPEG-7 Part B database.

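The size and rotation invariance exploited above is the standard property of boundary Fourier descriptors, sketched below for a closed 2D boundary given as an N x 2 point array; the 3D laser-scan boundary extraction itself is not shown, and the number of coefficients is an assumption.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Scale- and rotation-invariant Fourier descriptor of a closed boundary."""
    # Represent the boundary (N x 2 points) as a complex sequence and take its DFT.
    z = contour[:, 0] + 1j * contour[:, 1]
    Z = np.fft.fft(z)
    # Dropping Z[0] removes translation; dividing by |Z[1]| removes scale;
    # keeping only magnitudes discards phase, which removes rotation and the
    # choice of starting point.
    return np.abs(Z[1:n_coeffs + 1]) / np.abs(Z[1])

# Recognition then reduces to finding the database entry whose descriptor is
# closest (e.g., in Euclidean distance) to that of the query boundary.
```
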
Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.101-108
    • /
    • 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermark for the original image with two scaling factors in order to overcome image degradation and blocking problems at block edges. The adaptive block watermarking algorithm uses two scaling factors: one is the ratio of the present block average to the next block average, and the other is the ratio of the total LL subband average to each block average. The adaptive block watermark signal is obtained from the original image itself, and the watermark strength is controlled automatically by the image characteristics. Instead of the conventional approach of embedding a watermark with identical intensity everywhere, the proposed method uses an adaptive watermark whose intensity is controlled block by block. As a result, the adaptive block watermark improves the visual quality of images by 4 to 14 dB and is robust against attacks such as filtering, JPEG2000 compression, resizing, and cropping. We also implemented the algorithm as an ASIC in Hynix 0.25 μm CMOS technology to integrate it into a JPEG2000 codec chip.
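
A rough sketch of the two per-block scaling factors in the LL subband of a one-level DWT is given below, using PyWavelets. The ratio definitions follow the abstract, but the base strength of 0.05, the multiplicative embedding rule, the 8x8 block size, and the file name are assumptions; the paper's JPEG2000 integration and ASIC details are not reproduced.

```python
import cv2
import numpy as np
import pywt

img = cv2.imread("host.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")       # one-level DWT

B = 8                                           # block size (illustrative)
H, W = LL.shape
nby, nbx = H // B, W // B
wm = np.random.randint(0, 2, (nby, nbx)) * 2 - 1    # +/-1 watermark bits
total_mean = LL.mean()                          # total LL subband average
marked = LL.copy()

for by in range(nby):
    for bx in range(nbx):
        blk = LL[by * B:(by + 1) * B, bx * B:(bx + 1) * B]
        nx = (bx + 1) % nbx                     # "next" block in the same row
        nxt = LL[by * B:(by + 1) * B, nx * B:(nx + 1) * B]
        s1 = blk.mean() / (nxt.mean() + 1e-9)   # present / next block average
        s2 = total_mean / (blk.mean() + 1e-9)   # LL average / block average
        alpha = 0.05 * s1 * s2                  # per-block watermark strength
        marked[by * B:(by + 1) * B, bx * B:(bx + 1) * B] = \
            blk * (1.0 + alpha * wm[by, bx])

watermarked = pywt.idwt2((marked, (LH, HL, HH)), "haar")
```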