• Title/Summary/Keyword: 이진법 (binary numeral system)


Edge Enhanced Error Diffusion Halftoning Method Using Local Activity Measure (공간활성도를 이용한 에지 강조 오차확산법)

  • Kwak Nae-Joung;Ahn Jae-Hyeong
    • Journal of Korea Multimedia Society / v.8 no.3 / pp.313-321 / 2005
  • Digital halftoning is a process that produces a binary image so that the original image and its binary counterpart appear similar when observed from a distance. Among digital halftoning methods, error diffusion generates high-quality bilevel images from continuous-tone images but blurs edge information in the bilevel images. To solve this problem, we propose an improved error diffusion method that uses local spatial information of the original image. Based on the fact that human vision perceives not a single pixel but the local mean of the input image, we compute edge enhancement information (EEI) by applying the ratio of a pixel and its adjacent pixels to the local mean. The weights applied to the local means are computed using the ratio of the local activity measure (LAM) to the difference between the input pixels of a 3×3 block and their mean. LAM measures luminance changes in local regions and is obtained by summing the squared differences between the input pixels of a 3×3 block and their mean. We add this value to the input pixel of the quantizer to enhance edges. The performance of the proposed method is compared with conventional methods by measuring edge correlation. Halftone images produced by the proposed method show better quality due to the enhanced edges, and detailed edges are preserved. The proposed method also reduces patterns that are unpleasant to the human visual system, further improving halftone image quality.
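The error-diffusion step described above can be sketched as follows. This is a minimal Floyd-Steinberg halftoner with an optional `edge_gain` term that biases the quantizer input away from the 3×3 local mean; that term is a simplified stand-in for the paper's EEI, not its exact LAM-based weighting.

```python
import numpy as np

def halftone(img, edge_gain=0.0):
    """Floyd-Steinberg error diffusion with an optional edge-enhancement term.

    img: 2D array of grays in [0, 255]. The edge term is a simplified
    stand-in for the paper's EEI: it pushes the quantizer input away from
    the 3x3 local mean, sharpening edges in the binary output.
    """
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros((h, w), np.uint8)
    for y in range(h):
        for x in range(w):
            v = work[y, x]
            q_in = v
            if edge_gain:
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                q_in += edge_gain * (img[y, x] - img[y0:y1, x0:x1].mean())
            new = 255.0 if q_in >= 128 else 0.0
            out[y, x] = int(new)
            err = v - new  # diffuse the unenhanced error to preserve mean tone
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```

On a mid-gray patch the output is a binary pattern whose density approximates the input tone; with `edge_gain > 0`, pixels near strong intensity transitions are quantized more decisively toward black or white.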


A Method for Automatic Detection of Character Encoding of Multi Language Document File (다중 언어로 작성된 문서 파일에 적용된 문자 인코딩 자동 인식 기법)

  • Seo, Min Ji;Kim, Myung Ho
    • KIISE Transactions on Computing Practices / v.22 no.4 / pp.170-177 / 2016
  • Character encoding is a method for converting a document into a binary document file, using a code table, for storage in a computer. To read a binary document file, it must be decoded with the code table that was applied at the encoding stage in order to recover the original document. Identifying the code table used for encoding the file is thus an essential part of decoding. In this paper, we propose a method for automatically detecting the character code of a given binary document file. The method combines several techniques to increase the detection rate: character code range detection, escape character detection, character code characteristic detection, and commonly used word detection. The commonly used word detection technique uses multiple word databases, so the method achieves a much higher detection rate for multi-language files than other methods. When a language makes up less than 20% of a document, conventional methods recognize the encoding in only about 50% of cases, whereas the proposed method achieves up to 96% recognition regardless of the language proportion.
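The detection idea can be sketched in a few lines: try candidate code tables, discard those whose byte ranges fail to decode, and score the survivors by hits from a commonly used word list. The candidate and word lists below are illustrative placeholders, not the paper's databases.

```python
def detect_encoding(data: bytes) -> str:
    """Guess the character encoding of a document file.

    A minimal sketch of the paper's idea: candidates whose code-table
    ranges reject the bytes are eliminated, and the remaining decodings
    are scored by commonly used word hits across several languages.
    """
    candidates = ["utf-8", "euc-kr", "cp932", "latin-1"]
    common_words = ["the", "and", "는", "이", "을"]  # illustrative word list
    best_enc, best_score = "utf-8", -1
    for enc in candidates:
        try:
            text = data.decode(enc)
        except UnicodeDecodeError:
            continue  # byte sequence falls outside this code table
        score = sum(text.count(w) for w in common_words)
        if score > best_score:
            best_enc, best_score = enc, score
    return best_enc
```

For example, EUC-KR bytes are invalid UTF-8 but decode as mojibake under latin-1, so the word score is what distinguishes the correct table.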

Hierarchic Document Clustering in OPAC (OPAC에서 자동분류 열람을 위한 계층 클러스터링 연구)

  • 노정순
    • Journal of the Korean Society for Information Management / v.21 no.1 / pp.93-117 / 2004
  • This study develops a hierarchic clustering model for document classification and browsing in OPAC systems. Two automatic indexing techniques (with and without controlled terms), two term weighting methods (term frequency and binary weight), five similarity coefficients (Dice, Jaccard, Pearson, Cosine, and Squared Euclidean), and three hierarchic clustering algorithms (Between Average Linkage, Within Average Linkage, and Complete Linkage) were tested on a collection of 175 books and theses on library and information science. The best document clusters resulted from the Between Average Linkage or Complete Linkage method with the Jaccard or Dice coefficient on automatic indexing with controlled terms in binary vectors. The clusters from Between Average Linkage with the Jaccard coefficient most closely resembled a decimal classification structure.
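Two of the tested ingredients, the Jaccard coefficient on binary vectors and complete-linkage agglomeration, can be sketched in pure Python. The toy documents below are illustrative, not the study's 175-item collection.

```python
def jaccard(a, b):
    """Jaccard coefficient between two binary term vectors."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def complete_linkage(docs, k):
    """Agglomerative clustering: merge the closest pair until k clusters remain.

    Cluster distance is the maximum pairwise Jaccard distance (complete
    linkage). This O(n^3) loop is fine for toy data; OPAC-scale collections
    would use an optimized library implementation.
    """
    clusters = [[i] for i in range(len(docs))]

    def dist(c1, c2):
        return max(1 - jaccard(docs[i], docs[j]) for i in c1 for j in c2)

    while len(clusters) > k:
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda p: dist(clusters[p[0]], clusters[p[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters
```

Between Average Linkage would replace the `max` in `dist` with the mean over cross-cluster pairs.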

Smoke Detection using Region Growing Method (영역 확장법을 이용한 연기검출)

  • Kim, Dong-Keun
    • The KIPS Transactions:PartB / v.16B no.4 / pp.271-280 / 2009
  • In this paper, we propose a smoke detection method using a region growing technique in outdoor video sequences. The proposed method is composed of three steps: initial change-area detection, boundary finding and expanding, and smoke classification. In the first step, we use background subtraction to detect changed areas in the current input frame against the background image. From the difference images of the background subtraction, we compute a binary image using a threshold value and apply morphology operations to the binary image to remove noise. In the second step, we find the boundaries of the changed areas using a labeling algorithm and expand them to their neighbors using the region growing algorithm. In the final step, ellipses of the boundaries are estimated using moments, and we classify whether each region is smoke using temporal information.
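The first two steps can be sketched with NumPy alone: a thresholded background subtraction, then a 4-connected region grower. The morphology denoising and the moment-based ellipse/temporal classifier are omitted, and the threshold value is an assumption.

```python
import numpy as np

def changed_mask(frame, background, thresh=25):
    """Step 1: background subtraction -> binary change mask."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def region_grow(mask, seed):
    """Step 2: grow a 4-connected region from a seed pixel inside the mask."""
    h, w = mask.shape
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
            continue
        region.add((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region
```

Each grown region would then be summarized by its moments before the temporal smoke test.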

Design of Iterative Divider in GF(2^163) Based on Improved Binary Extended GCD Algorithm (개선된 이진 확장 GCD 알고리듬 기반 GF(2^163)상에서 Iterative 나눗셈기 설계)

  • Kang, Min-Sup;Jeon, Byong-Chan
    • The KIPS Transactions:PartC / v.17C no.2 / pp.145-152 / 2010
  • In this paper, we first propose a fast division algorithm in GF($2^{163}$) using standard basis representation, and then map it onto a divider for GF($2^{163}$) with an iterative hardware structure. The proposed algorithm is based on the binary Extended GCD algorithm, and the arithmetic operations for modular reduction are performed within only one "while" statement, unlike the conventional approach, which uses two "while" statements. We use the reduction polynomial $f(x)=x^{163}+x^7+x^6+x^3+1$ recommended in SEC2 (Standards for Efficient Cryptography) with standard basis representation, where the degree is m = 163. We implemented the proposed iterative architecture in FPGA using Verilog HDL; it operates at a clock frequency of 85 MHz on a Xilinx Virtex-II XC2V8000 FPGA device. The implementation results show that the computation speed of the proposed scheme is significantly improved over the two existing approaches.
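The algorithm family being optimized is the binary extended GCD division for GF($2^m$). Below is a textbook software sketch (field elements packed into Python ints, nested halving loops); the paper's contribution of folding the reduction into a single "while" statement in hardware is not reproduced here, and the tiny field GF($2^3$) is used only for illustration.

```python
def gf2m_div(a, b, f):
    """Compute a/b in GF(2^m) via the binary extended GCD algorithm.

    Field elements are polynomials over GF(2) packed into ints; f is the
    irreducible reduction polynomial (e.g. 0b1011 = x^3 + x + 1 for GF(2^3)).
    A textbook sketch of the algorithm the paper optimizes, not its design.
    """
    u, v = b, f
    g1, g2 = a, 0
    while u != 1 and v != 1:
        while u & 1 == 0:   # u divisible by x: halve u and g1 (mod f)
            u >>= 1
            if g1 & 1:
                g1 ^= f
            g1 >>= 1
        while v & 1 == 0:
            v >>= 1
            if g2 & 1:
                g2 ^= f
            g2 >>= 1
        if u.bit_length() >= v.bit_length():  # deg(u) >= deg(v)
            u ^= v
            g1 ^= g2
        else:
            v ^= u
            g2 ^= g1
    return g1 if u == 1 else g2
```

With $f(x)=x^3+x+1$, dividing $1$ by $x$ yields $x^2+1$, since $x\cdot(x^2+1)=x^3+x\equiv 1 \pmod{f}$.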

Mining Quantitative Association Rules using Commercial Data Mining Tools (상용 데이타 마이닝 도구를 사용한 정량적 연관규칙 마이닝)

  • Kang, Gong-Mi;Moon, Yang-Sae;Choi, Hun-Young;Kim, Jin-Ho
    • Journal of KIISE:Databases / v.35 no.2 / pp.97-111 / 2008
  • Commercial data mining tools basically support only binary attributes in mining association rules; that is, they can mine only binary association rules. In general, however, transaction databases contain not only binary attributes but also quantitative attributes. In this paper, we therefore propose a systematic approach to mining quantitative association rules (association rules that contain quantitative attributes) using commercial mining tools. To achieve this goal, we first propose an overall working framework that mines quantitative association rules based on commercial mining tools. The proposed framework consists of two steps: 1) a pre-processing step that converts quantitative attributes into binary attributes, and 2) a post-processing step that reconverts binary association rules into quantitative association rules. For the pre-processing step, we present the concept of domain partition and, based on it, formally redefine the previous bipartition and multi-partition techniques: mean-based and median-based techniques for bipartition, and equi-width and equi-depth techniques for multi-partition. These partition techniques, however, do not consider the distribution characteristics of attribute values. To solve this problem, we propose an intuitive partition technique named standard deviation minimization: adjacent attribute values are included in the same partition if the change in their standard deviations is small, but divided into different partitions if the change is large. We also propose the post-processing step that integrates binary association rules and reconverts them into the corresponding quantitative rules. Through extensive experiments, we show that our framework works correctly and that standard deviation minimization is superior to the other partition techniques. Based on these results, we believe the framework is practically applicable for naive users mining quantitative association rules with commercial data mining tools.
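The pre-processing step can be sketched with the classic equi-width multi-partition: each quantitative value becomes a one-hot tuple of binary attributes, one per interval. This shows the conversion idea only; the paper's standard deviation minimization would instead place boundaries where the within-partition deviation changes sharply.

```python
def equiwidth_binarize(values, n_bins):
    """Pre-processing sketch: map a quantitative attribute to binary ones.

    Each value becomes a one-hot tuple over n_bins equal-width intervals,
    so a commercial binary-rule miner can consume it. Equi-width partition
    only; the paper's standard deviation minimization is not reproduced.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against constant attributes
    cols = []
    for v in values:
        b = min(int((v - lo) / width), n_bins - 1)  # clamp the max value
        cols.append(tuple(1 if i == b else 0 for i in range(n_bins)))
    return cols
```

The post-processing step would invert this mapping, turning a rule over bin-attributes back into an interval condition on the original attribute.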

Application of the Velocity Index Method for Discharge Computation in Tidal River Basin (감조하천의 유량산정을 위한 유속지수법의 적용)

  • Song, Jae-Hyun;Lee, Suk-Ho;Kim, Chi-Young;Lee, Jin-Won;Jung, Sung-Won
    • Proceedings of the Korea Water Resources Association Conference / 2009.05a / pp.1342-1345 / 2009
  • In tidal rivers affected by tidal backwater, the periodic nature of tide-level variation makes it difficult to establish a conventional stage-discharge relationship, so continuous discharge measurement using automatic techniques such as the ADVM (Acoustic Doppler Velocity Meter) or UVM (Ultrasonic Velocity Meter) has recently come into use. The Hangang Bridge stage gauging station lies in a representative tidal reach; to address this problem, an ADVM-type automatic discharge measurement facility has been installed and is in operation there, and discharge is computed from the velocity measured by an H-ADCP sensor using Chiu's dimensionless cross-sectional velocity distribution method, which takes the maximum velocity as the index for discharge computation. In this study, discharges computed by the velocity index method and by the dimensionless velocity distribution method were compared using measurements from the Hangang Bridge automatic facility, and to verify these methods, the results were further compared with discharges measured in 2008 by the moving-boat method using an ADVM.
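The velocity index method itself reduces to a small computation: calibrate a linear rating from the sensor's index velocity to the section-mean velocity, then multiply by the stage-derived cross-section area. The coefficients and sample values below are synthetic, not Hangang Bridge data.

```python
import numpy as np

def fit_rating(v_index, v_mean):
    """Fit the index-velocity rating v_mean = a + b * v_index by least squares."""
    A = np.column_stack([np.ones_like(v_index), v_index])
    (a, b), *_ = np.linalg.lstsq(A, v_mean, rcond=None)
    return a, b

def discharge(area, v_index, a, b):
    """Discharge = stage-derived cross-section area x rated mean velocity."""
    return area * (a + b * v_index)
```

In practice the calibration pairs come from simultaneous index-velocity readings and reference measurements such as moving-boat discharge gaugings.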


Mass Estimation of a Permanent Magnet Linear Synchronous Motor by the Least-Squares Algorithm (선형 영구자석 동기전동기의 최소자승법을 적용한 질량 추정)

  • Lee, Jin-Woo
    • The Transactions of the Korean Institute of Power Electronics / v.11 no.2 / pp.159-163 / 2006
  • In order to tune the speed controller in linear servo applications, accurate information on the mover mass, including the load mass, is always required. This paper suggests a mass estimation method for a permanent magnet linear synchronous motor (PMLSM) using the Least-Squares parameter estimation algorithm. First, the deterministic autoregressive moving average (DARMA) model of the mechanical dynamic system is derived. Application of the Least-Squares algorithm then shows that the mass can be accurately estimated in both simulation and experimental results.
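The least-squares identification step can be sketched with a simplified mechanical model $F = M\,a + B\,v$ (mass plus viscous friction); this is a minimal stand-in for the paper's DARMA-model formulation, and the parameter values used below are synthetic.

```python
import numpy as np

def estimate_params(force, accel, vel):
    """Batch least-squares fit of F = M*a + B*v (mass M, viscous friction B).

    Stack the regressors [a, v] and solve for theta = [M, B] in the
    least-squares sense. A simplified stand-in for the paper's
    DARMA-model Least-Squares identification.
    """
    Phi = np.column_stack([accel, vel])
    theta, *_ = np.linalg.lstsq(Phi, force, rcond=None)
    return theta  # [M, B]
```

A recursive least-squares variant of the same fit would allow the mass estimate to track load changes online.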

On the Efficiency of Outlier Cleaners in Spatial Data Analysis (공간통계분석에서 이상점 수정방법의 효율성비교)

  • 이진희;신기일
    • The Korean Journal of Applied Statistics / v.17 no.2 / pp.327-336 / 2004
  • Many researchers have used the robust variogram to reduce the effect of outliers in spatial data analysis. Recently, it has been shown that estimating the variogram after replacing outliers is more efficient. In this paper, we suggest a new data cleaner for geostatistical data analysis and compare the efficiency of outlier cleaners.
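The replace-then-estimate idea can be illustrated with one simple cleaner: clip values beyond a robust fence (median plus or minus a multiple of the scaled MAD) before variogram estimation. This is a generic illustrative cleaner, not the specific cleaner proposed in the paper.

```python
import numpy as np

def mad_clean(z, k=3.0):
    """Replace outliers prior to variogram estimation (illustrative cleaner).

    Values beyond median +/- k * 1.4826 * MAD are clipped back to the
    fence; 1.4826 scales the MAD to a standard deviation under normality.
    """
    med = np.median(z)
    mad = np.median(np.abs(z - med))
    half = k * 1.4826 * mad
    return np.clip(z, med - half, med + half)
```

The classical variogram estimator would then be applied to the cleaned values instead of switching to a robust estimator.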

Adaptive Error Diffusion Using Fuzzy Relaxation Technique (퍼지 이완 방법을 이용한 적응적 오차 확산법)

  • 박양우;엄태억;장주석;하영호
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.5 / pp.42-47 / 1999
  • Among halftoning techniques that convert continuous-tone images into binary (halftone) images, the two representative methods are ordered dithering and error diffusion. Of these, error diffusion is well known as an excellent halftoning technique for obtaining sharp halftone images, but it produces various artifacts inherent to the algorithm, and much research has therefore been devoted to finding optimal filter coefficients to reduce them. In this paper, to diffuse the quantization error between the continuous-tone input image and the halftone image adaptively and optimally, we define the quantization error as a fuzzy subset of initial possibilities. Considering the error possibilities of the pixels neighboring the center pixel of this fuzzy subset, the error possibility of each pixel is adaptively updated according to the image using FAM (fuzzy associative memory) rules; the updated value is added to the original image and the quantization process is repeated. We propose an error diffusion method based on this fuzzy relaxation algorithm and compare the resulting images with those of existing methods for finding optimal filter coefficients.
