• Title/Summary/Keyword: 적응적 가중치 (adaptive weight)


Spatial prioritization of climate change vulnerability using uncertainty analysis of multi-criteria decision making method (다기준 의사결정기법의 불확실성 분석기법을 이용한 기후변화 취약성에 대한 지역별 우선순위 결정)

  • Song, Jae Yeol;Chung, Eun-Sung
    • Journal of Korea Water Resources Association / v.50 no.2 / pp.121-128 / 2017
  • In this study, a robustness index and an uncertainty analysis were proposed to quantify the risk inherent in the process of climate change vulnerability assessment. The water supply vulnerabilities of six metropolitan cities (Busan, Daegu, Incheon, Gwangju, Daejeon, and Ulsan), excluding Seoul, were prioritized using TOPSIS, a multi-criteria decision making method. The robustness index was used to analyze the possibility of rank reversal, and the uncertainty analysis was introduced to derive the minimum change in criteria weights that causes a rank reversal between any pair of cities. As a result, Incheon and Daegu were found to be highly vulnerable, and Daegu and Busan were found to be highly sensitive. Although Daegu was relatively vulnerable compared with the other cities, its vulnerability can be largely improved by developing and implementing various climate change adaptation measures because it is more sensitive. This study can be used as a preliminary assessment for establishing and planning climate change adaptation measures.
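
As an illustration of the TOPSIS ranking step described above, the following minimal Python sketch ranks a few hypothetical alternatives; the criteria values, weights, and the small weight perturbation used to probe rank reversal are made-up examples, not the paper's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficient of each alternative (row) to the ideal solution."""
    # Vector-normalize each criterion column, then apply the criteria weights.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # Ideal / anti-ideal points per criterion (max for benefit-type, min for cost-type).
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical data: 3 alternatives x 3 vulnerability indicators (illustrative only).
scores = np.array([[0.4, 0.7, 0.2],
                   [0.6, 0.3, 0.5],
                   [0.5, 0.5, 0.9]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, True])   # higher indicator value = more vulnerable here

print(topsis(scores, weights, benefit))  # closeness coefficients used for ranking

# Rank-reversal probe: perturb the weights slightly and compare the resulting ranking.
perturbed = np.array([0.45, 0.35, 0.2])
print(topsis(scores, perturbed, benefit))
```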

Image Restoration for Edge Preserving in Mixed Noise Environment (복합잡음 환경에서 에지 보존을 위한 영상복원)

  • Long, Xu;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.727-734 / 2014
  • Digital processing technologies are being studied in various areas such as image compression, recognition, and restoration. However, image degradation still occurs due to noise introduced in the processes of image acquisition, storage, and transmission. Typical noise in images includes Gaussian noise and mixed noise, in which Gaussian and impulse noise are superimposed, and various methods have been studied to remove them. In order to preserve edges and effectively remove mixed noise, this study proposes an image restoration filter algorithm that, after a noise-judgment step, sets and applies adaptive weights to the local median and mean values. In addition, the proposed filter was compared with existing methods through simulation, using the peak signal-to-noise ratio (PSNR) as the evaluation criterion.
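
A minimal sketch of the kind of adaptive median/mean blending the abstract describes is shown below; the 3x3 window, the deviation-based noise judgment, and the threshold value are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def adaptive_mixed_noise_filter(img, impulse_thresh=40.0):
    """Blend the local median and local mean with an adaptive weight.

    Pixels that look impulse-corrupted lean on the median (edge-preserving,
    robust to outliers); the rest lean on the mean (effective for Gaussian
    noise).  Window size and threshold are illustrative, not the paper's.
    """
    img = img.astype(np.float64)
    med = median_filter(img, size=3)
    avg = uniform_filter(img, size=3)

    # Noise judgment: a large deviation from the local median suggests impulse noise.
    deviation = np.abs(img - med)
    w = np.clip(deviation / impulse_thresh, 0.0, 1.0)   # 0 = mean only, 1 = median only

    return w * med + (1.0 - w) * avg
```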

A Systolic Array Structured Decision Feedback Equalizer based on Extended QR-RLS Algorithm (확장 QR-RLS 알고리즘을 이용한 시스토릭 어레이 구조의 결정 궤환 등화기)

  • Lee Won Cheol
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1518-1526 / 2004
  • In this paper, an algorithm using the wavelet transform is proposed for detecting cuts, which are abrupt scene transitions, as well as fades and dissolves, which are gradual scene transitions. Conventional wavelet-based methods for this purpose use features in both the spatial and frequency domains. In the proposed algorithm, however, the color space of the input image is converted to YUV and the luminance component Y is transformed into the frequency domain using 2-level lifting. Then only the histogram of the low-frequency subband, which retains some spatial-domain features, is compared with that of the previous frame. Edges obtained from the other, higher-frequency bands are divided into global, semi-global, and local regions, and the histogram of each edge region is compared. The experimental results show a performance improvement of about 17% in recall and 18% in precision, and also show good performance in fade and dissolve detection.
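
A rough sketch of the low-frequency-subband histogram comparison described above is given below; it uses a single Haar-style averaging step in place of the 2-level lifting transform, and the bin count and distance measure are illustrative choices.

```python
import numpy as np

def haar_ll(y):
    """One averaging (Haar-like) decomposition level; return the low-frequency (LL) band."""
    y = y[: y.shape[0] // 2 * 2, : y.shape[1] // 2 * 2].astype(np.float64)
    rows = (y[0::2, :] + y[1::2, :]) / 2.0          # vertical average
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0    # horizontal average

def histogram_distance(prev_y, curr_y, bins=64):
    """Compare low-frequency-subband histograms of consecutive luminance frames.

    Applying haar_ll twice would mimic a 2-level decomposition; a large distance
    between consecutive frames suggests a cut candidate.
    """
    ll_prev, ll_curr = haar_ll(prev_y), haar_ll(curr_y)
    h_prev, _ = np.histogram(ll_prev, bins=bins, range=(0, 256), density=True)
    h_curr, _ = np.histogram(ll_curr, bins=bins, range=(0, 256), density=True)
    return np.abs(h_prev - h_curr).sum()
```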

Robust Stereo Matching under Radiometric Change based on Weighted Local Descriptor (광량 변화에 강건한 가중치 국부 기술자 기반의 스테레오 정합)

  • Koo, Jamin;Kim, Yong-Ho;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.164-174 / 2015
  • In real scenarios, radiometric changes frequently occur during stereo image acquisition, whether using multiple cameras with different geometric characteristics or a single moving camera, because of differing camera parameters and illumination conditions. Conventional stereo matching algorithms have difficulty finding correct corresponding points because they assume that corresponding pixels have similar color values. In this paper, we present a new method based on a local descriptor that reflects intensity, gradient, and texture information. Furthermore, an entropy-based adaptive weight for the local descriptor is applied to estimate correct corresponding points under radiometric variation. The proposed method is tested on Middlebury datasets with radiometric changes and compared with state-of-the-art algorithms. Experimental results show that the proposed scheme yields about 5% lower matching error on average than the compared algorithms.
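
One plausible reading of the entropy-based adaptive weighting is sketched below: per-channel matching costs (intensity, gradient, texture) are combined with weights derived from each channel's entropy. The channel names, the cost measure, and the mapping from entropy to weight are assumptions for illustration; the paper's exact weighting rule may differ.

```python
import numpy as np

def channel_entropy(channel, bins=32):
    """Shannon entropy of one descriptor channel over a local patch."""
    hist, _ = np.histogram(channel, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def entropy_weighted_cost(desc_left, desc_right):
    """Combine per-channel matching costs with entropy-derived adaptive weights.

    desc_left / desc_right: dict of channel name -> local patch, e.g.
    {'intensity': ..., 'gradient': ..., 'texture': ...}.  More informative
    (higher-entropy) channels receive larger weights in this sketch.
    """
    names = list(desc_left.keys())
    entropies = np.array([channel_entropy(desc_left[n]) for n in names])
    weights = entropies / (entropies.sum() + 1e-12)

    costs = np.array([np.abs(desc_left[n] - desc_right[n]).mean() for n in names])
    return float(np.dot(weights, costs))
```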

Adaptive Parallel Interference Canceller using Hyperbolic Tangent with Null Zone Detector (Hyperbolic Tangent 검파방식에서 Null zone을 이용한 적응 병렬 간섭제거기)

  • Lee, Sang-Hoon;Kim, Nam
    • Journal of the Institute of Electronics Engineers of Korea TC / v.38 no.3 / pp.1-8 / 2001
  • In DS/CDMA mobile communication systems, the parallel interference canceller is used to reduce multiple access interference and multipath fading. Accurate interference estimation is needed in multistage parallel cancellation. In this paper, an adaptive cancellation method and a new tentative decision device are proposed, and their performance is analyzed. The adaptive cancellation method uses the normalized least mean square (NLMS) algorithm to calculate the weights adaptively, and the new tentative decision device uses a hyperbolic tangent decision with a null zone. Computer simulation shows that the proposed scheme has improved performance and that the number of supportable users increases by 48% compared with the conventional receiver.
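
The sketch below illustrates the two ingredients named in the abstract, an NLMS weight update and a hyperbolic-tangent tentative decision with a null zone; the step size, null-zone width, and tanh scale are illustrative parameters, not the paper's values.

```python
import numpy as np

def nlms_update(w, x, d, mu=0.5, eps=1e-8):
    """One normalized-LMS step: adapt the cancellation weight w so that w*x
    tracks the desired signal d.  Returns the updated weight and the error."""
    e = d - w * x
    w_new = w + mu * e * x / (np.abs(x) ** 2 + eps)
    return w_new, e

def tentative_decision(z, null_zone=0.2, scale=2.0):
    """Soft tentative decision: hyperbolic tangent outside a null zone,
    zero (no decision) for small, unreliable decision statistics."""
    return np.where(np.abs(z) < null_zone, 0.0, np.tanh(scale * z))
```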


Efficient Preprocessing Method for Binary Centroid Tracker in Cluttered Image Sequences (복잡한 배경영상에서 효과적인 전처리 방법을 이용한 표적 중심 추적기)

  • Cho, Jae-Soo
    • Journal of Advanced Navigation Technology / v.10 no.1 / pp.48-56 / 2006
  • This paper proposes an efficient preprocessing technique for a binary centroid tracker in cluttered image sequences. It is known that the following factors determine the performance of a binary centroid target tracker: (1) an efficient real-time preprocessing technique, (2) exact target segmentation from cluttered background images, and (3) intelligent tracking-window sizing. The proposed centroid tracker consists of an adaptive segmentation method based on novel distance features and an efficient real-time preprocessing technique that enhances the distinction between the objects of interest and their local background. Various tracking experiments using synthetic images as well as real Forward-Looking InfraRed (FLIR) images are performed to show the usefulness of the proposed methods.
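
A minimal binary centroid step is sketched below; the border-statistics threshold stands in for the paper's distance-feature-based adaptive segmentation, and the window convention and threshold offset are assumptions.

```python
import numpy as np

def binary_centroid(frame, window, k=1.5):
    """Segment the target inside a tracking window and return its centroid.

    frame  : 2-D grayscale image
    window : (row0, row1, col0, col1) tracking-window bounds
    k      : threshold offset in local-background standard deviations
    """
    r0, r1, c0, c1 = window
    roi = frame[r0:r1, c0:c1].astype(np.float64)

    # Estimate the local background from the window border pixels.
    border = np.concatenate([roi[0, :], roi[-1, :], roi[:, 0], roi[:, -1]])
    thresh = border.mean() + k * border.std()

    mask = roi > thresh
    if not mask.any():
        return None                                   # no target found in this window
    rows, cols = np.nonzero(mask)
    return (r0 + rows.mean(), c0 + cols.mean())       # centroid in frame coordinates
```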


Image Adaptive Block DCT-Based Perceptual Digital Watermarking (영상 특성에 적응적인 블록 DCT 기반 지각적 디지털 워터마킹)

  • 최윤희;최태선
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.221-229 / 2004
  • We present a new digital watermarking scheme that embeds a watermark according to the characteristics of the image or video. The scheme is compatible with established image compression standards. We define a weighting function using a parent-child structure of the DCT coefficients in a block to embed the maximum watermark, and the spatio-frequency localization of the DCT coefficients can be achieved with this structure. In the detection stage, we present an optimum a posteriori threshold for a given false-detection probability, based on statistical analysis. Simulation results show that the proposed algorithm is efficient and robust against various signal processing techniques. In particular, it is robust against widely used coding standards such as JPEG and MPEG.
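
The sketch below embeds a few watermark bits into the DCT coefficients of one 8x8 block, scaled by a simple energy-based perceptual weight; that weight is a crude stand-in for the paper's parent-child-structure weighting function, and the coefficient positions and strength alpha are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block(block, watermark_bits, alpha=2.0):
    """Embed watermark bits into mid-frequency DCT coefficients of an 8x8 block."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')

    # Perceptual weight: textured blocks (more AC energy) can hide a stronger mark.
    ac_energy = np.sum(coeffs ** 2) - coeffs[0, 0] ** 2
    weight = alpha * np.sqrt(ac_energy + 1.0) / 64.0

    positions = [(2, 3), (3, 2), (3, 3), (4, 2)]        # fixed mid-frequency slots
    for (u, v), bit in zip(positions, watermark_bits):
        coeffs[u, v] += weight * (1.0 if bit else -1.0)

    return idctn(coeffs, norm='ortho')                  # watermarked block
```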

Adaptive MAP High-Resolution Image Reconstruction Algorithm Using Local Statistics (국부 통계 특성을 이용한 적응 MAP 방식의 고해상도 영상 복원 방식)

  • Kim, Kyung-Ho;Song, Won-Seon;Hong, Min-Cheol
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.12C / pp.1194-1200 / 2006
  • In this paper, we propose an adaptive MAP (maximum a posteriori) high-resolution image reconstruction algorithm using local statistics. In order to preserve the edge information of the original high-resolution image, a visibility function defined by the local statistics of the low-resolution image is incorporated into the MAP estimation process, so that the local smoothness is adaptively controlled. A weighted non-quadratic convex functional is defined to obtain the optimal solution that is as close as possible to the original high-resolution image. An iterative algorithm is used to obtain the solution, and the smoothing parameter is updated at each iteration from the partially reconstructed high-resolution image. Experimental results demonstrate the capability of the proposed algorithm.
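
A simplified iterative reconstruction in the spirit of the abstract is sketched below: the smoothness term is modulated by a visibility-like weight computed from local statistics of the low-resolution image. The specific visibility function, the interpolation used for up/downsampling, and the step sizes are assumptions for illustration, not the paper's definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace, zoom

def adaptive_map_sr(low_res, scale=2, iters=50, beta=0.2, lam=0.05):
    """Iterative MAP-style reconstruction with a locally adaptive smoothness weight."""
    low_res = low_res.astype(np.float64)

    # Local statistics of the low-resolution image -> visibility-like weight:
    # close to 1 in flat regions (strong smoothing), small near edges (weak smoothing).
    mean = uniform_filter(low_res, size=3)
    var = np.maximum(uniform_filter(low_res ** 2, size=3) - mean ** 2, 0.0)
    vis_hr = zoom(1.0 / (1.0 + var), scale, order=1)

    x = zoom(low_res, scale, order=1)                   # initial high-resolution estimate
    for _ in range(iters):
        # Data term: the downsampled estimate should match the observed low-resolution image.
        residual = zoom(x, 1.0 / scale, order=1) - low_res
        x -= beta * zoom(residual, scale, order=1)

        # Prior term: adaptively weighted smoothing (diffusion) step.
        x += beta * lam * vis_hr * laplace(x)
    return x
```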

Design of Adaptive Retrieval System using XMDR based knowledge Sharing (지식 공유 기반의 XMDR을 이용한 적응형 검색 시스템 설계)

  • Hwang Chi-Gon;Jung Kye-Dong;Choi Young-Keun
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.8B / pp.716-729 / 2006
  • The information systems in most enterprise environments are locally distributed and comprise various heterogeneous data sources, so it is difficult to obtain the integrated information needed to support user decisions. To solve this problem efficiently, a uniform interface must be provided to users, and the database systems constructed across heterogeneous systems must remain consistent while preserving their independence and offering transparency as if they were a single interface. This paper presents an XMDR that consists of a category, a standard ontology, a location ontology, and a knowledge base. The standard ontology resolves heterogeneity in the naming, attributes, and relations of data representations. The location ontology is a mediator that connects the legacy systems. The knowledge base defines the relations needed for sharing terminology. The proposed adaptive retrieval builds an integrated retrieval system by reflecting site weights through the location ontology and by sharing various forms of knowledge in the knowledge base, and a conceptual domain model is proposed for sharing unstructured knowledge.

Performance Analysis of Adaptive Corner Shrinking Algorithm for Decimating the Document Image (문서 영상 축소를 위한 적응형 코너 축소 알고리즘의 성능 분석)

  • Kwak No-Yoon
    • Journal of Digital Contents Society / v.4 no.2 / pp.211-221 / 2003
  • The objective of this paper is a performance analysis of a digital document image decimation algorithm that generates each decimated element as the average of a target pixel value and the value of a neighboring intelligible element, so as to adaptively combine the merits of the ZOD and FOD methods in the decimated image. First, a target pixel located at the center of a sliding window is selected, and the gradient amplitudes of its right neighbor and its lower neighbor are calculated using a first-order derivative operator. Second, each gradient amplitude is divided by the sum of the two gradient amplitudes to generate a local intelligibility weight. Next, the value of the neighboring intelligible element is obtained by adding the value of the right neighbor times its local intelligibility weight to the value of the lower neighbor times its weight. The decimated image is acquired by applying this process to every pixel of the input image, generating each decimated element as the average of the target pixel value and the value of the neighboring intelligible element. The proposed and conventional methods are compared in terms of subjective performance and hardware complexity, and on the basis of this analysis the preferable approach for developing a decimation algorithm for digital document images is reviewed.
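
The decimation steps described above translate fairly directly into code; the sketch below follows them for a grayscale image, with the even-grid sampling and the small epsilon guard added as illustrative choices rather than taken from the paper.

```python
import numpy as np

def adaptive_corner_shrink(img):
    """Decimate a grayscale document image by 2 using adaptive corner shrinking."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.empty(((h - 1) // 2, (w - 1) // 2))

    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            r, c = 2 * i, 2 * j
            target = img[r, c]
            right, lower = img[r, c + 1], img[r + 1, c]

            # First-order gradient magnitudes toward the right and lower neighbors.
            g_right = abs(right - target)
            g_lower = abs(lower - target)
            total = g_right + g_lower + 1e-12          # guard against flat regions

            # Local intelligibility weights (sum to 1).
            w_right, w_lower = g_right / total, g_lower / total

            # Neighboring intelligible element, then average with the target pixel.
            neighbor = w_right * right + w_lower * lower
            out[i, j] = 0.5 * (target + neighbor)
    return out
```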
