• Title/Summary/Keyword: Binary Systems

1,167 search results

Contactless Fingerprint Recognition Based on LDP (LDP 기반 비접촉식 지문 인식)

  • Kang, Byung-Jun;Park, Kang-Ryoung;Yoo, Jang-Hee;Moon, Ki-Young;Kim, Jeong-Nyeo;Shin, Jae-Ho
    • Journal of Korea Multimedia Society / v.13 no.9 / pp.1337-1347 / 2010
  • Fingerprint recognition is a biometric technology that identifies individuals using fingerprint features such as ridges and valleys. Most fingerprint systems perform recognition based on minutiae points after acquiring a fingerprint image from a contact-type sensor. Such sensors have the advantage of acquiring a clear image of uniform size when the finger touches the sensor. However, they suffer from degraded image quality for severely dry or wet fingers, from variations in touching pressure, and from latent fingerprints left on the sensor. To solve these problems, contactless fingerprint capturing devices were introduced in previous work. However, their accuracy in detecting minutiae points, and hence their recognition performance, is reduced by image-quality degradation under illumination variation. This paper therefore proposes a new LDP-based fingerprint recognition method, which can effectively extract the iterative ridge-and-valley patterns of a fingerprint. After producing histograms of the binary codes extracted by the LDP method, the chi square distance between the enrolled and input feature histograms is calculated and used as the fingerprint recognition score. In the experimental results, the EER of the proposed approach was 0.521% lower than that of the previous LBP-based fingerprint recognition approach.
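The matching step described in this abstract, histogramming binary pattern codes and scoring by chi square distance, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the toy code values and the 256-bin histogram size are assumptions:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    # chi-square distance between two histograms (smaller = more similar)
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# toy stand-ins for 8-bit pattern codes extracted from two fingerprint images
codes_enrolled = np.array([3, 3, 7, 7, 7, 1])
codes_input = np.array([3, 7, 7, 1, 1, 1])

# normalized histograms over the 256 possible code values
h_enrolled = np.bincount(codes_enrolled, minlength=256) / codes_enrolled.size
h_input = np.bincount(codes_input, minlength=256) / codes_input.size

score = chi_square_distance(h_enrolled, h_input)  # used as the match score
```

A lower score means a better match; verification then reduces to thresholding this distance.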

Analysis of Land Use Change Using RCP-Based Dyna-CLUE Model in the Hwangguji River Watershed (RCP 시나리오 기반 Dyna-CLUE 모형을 이용한 황구지천 유역의 토지이용변화 분석)

  • Kim, Jihye;Park, Jihoon;Song, Inhong;Song, Jung-Hun;Jun, Sang Min;Kang, Moon Seong
    • Journal of Korean Society of Rural Planning / v.21 no.2 / pp.33-49 / 2015
  • The objective of this study was to predict land use change under land use change scenarios for the Hwangguji river watershed, South Korea. The land use change scenarios were derived from the representative concentration pathways (RCP) 4.5 and 8.5 scenarios. The CLUE (conversion of land use and its effects) model was used to simulate land use change; CLUE is a modeling framework that simulates land use change by dynamically modeling empirically quantified relations between land use types and socioeconomic and biophysical driving factors. Future land use changes in 2040, 2070, and 2100 were analyzed relative to the baseline (2010) under the RCP4.5 and 8.5 scenarios. Binary logistic regressions were carried out to identify the relations between land uses and their driving factors. CN (curve number) and impervious area under the RCP4.5 and 8.5 scenarios were calculated and analyzed using the results of the future land use changes. The simulation under the RCP4.5 scenario forecast that urban area would increase by 12% and forest area would decrease by 16% between 2010 and 2100; under the RCP8.5 scenario, urban area was forecast to increase by 16% and forest area to decrease by 18% over the same period. The Kappa value and the multiple resolution procedure result were 0.61 and 74.03%, respectively. CN (III) and impervious area increased by 0-1 and 0-8% from 2010 to 2100, respectively. These findings may provide a useful tool for estimating future land use change, an important factor for future extreme floods.
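The binary logistic regressions mentioned in this abstract relate a binary land-use outcome to driving factors. A minimal sketch with invented drivers and data (the study's actual factors and coefficients are not given in the abstract):

```python
import numpy as np

def fit_logistic(X, y, lr=0.02, n_iter=20000):
    # plain gradient-descent fit of P(urban = 1 | x) = sigmoid(X @ w + b)
    X, y = np.asarray(X, float), np.asarray(y, float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)             # gradient w.r.t. intercept
    return w, b

# hypothetical drivers: [distance to road (km), slope (deg)];
# label 1 = cell converted to urban, 0 = not converted
X = [[0.2, 2], [0.5, 5], [3.0, 15], [4.0, 20], [0.3, 3], [3.5, 18]]
y = [1, 1, 0, 0, 1, 0]
w, b = fit_logistic(X, y)

# conversion probability for a cell near a road on flat terrain
p_near_flat = 1.0 / (1.0 + np.exp(-(np.dot([0.4, 4], w) + b)))
p_far_steep = 1.0 / (1.0 + np.exp(-(np.dot([3.8, 19], w) + b)))
```

In the Dyna-CLUE framework, probabilities of this kind feed the spatial allocation of demanded land-use areas.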

Magnetic Hardening of Rapidly Solidified $SmFe_{7+x}M_{x}(M=Mo,\;V,\;Ti)$ Compounds (급속냉각된 $SmFe_{7+x}M_{x}(M=Mo,\;V,\;Ti)$ 화합물에서 생성된 신 강자성상)

  • Choong-Jin Yang;E. B. Park;S. D. Choi
    • Journal of the Korean Magnetics Society / v.4 no.3 / pp.226-232 / 1994
  • Rapidly solidified $SmFe_{7+x}M_{x}(M=Mo,\;V,\;Ti)$ compounds were found to crystallize in the ${Sm(Fe,\;M)}_{7}$-based stable magnetic phase upon introducing a second transition element into the Sm-Fe binary system. The ${Sm(Fe,\;M)}_{7}$ phase exhibits the highest Curie temperature ($T_{c}=355^{\circ}C$) ever known in the Sm-Fe magnetic systems, together with a quite high intrinsic coercivity ($_{i}H_{c}=3~6\;kOe$). Once formed during rapid solidification, the ${Sm(Fe,\;M)}_{7}$ phase remains stable even after annealing. The primary reason for the high coercive force is the fine grain size ($2000~8000\;{\AA}$) of the magnetic ${Sm(Fe,\;M)}_{7}$ matrix phase, and the enhanced Curie temperature is attributed to the extended solid solubility of the additive transition elements in the Fe matrix, which leads to a volume expansion of the ${Sm(Fe,\;M)}_{7}$ cell and thus an enhanced coupling constant of the Fe atoms.


Phenol Concentration using Thermal Simulated Moving Bed Concentrator (TSMBC(Thermal Simulated Moving Bed Concentrator)를 이용한 페놀 농축)

  • Gil, Mun-Seok;Kim, Jin-Il;Lee, Ju Weon;Koo, Yoon-Mo
    • Korean Chemical Engineering Research / v.50 no.6 / pp.1027-1033 / 2012
  • A conventional SMB process is operated with four zones, each containing several chromatography columns. Unlike batch chromatography, an SMB process can separate binary mixtures continuously, and both high productivity and high purity are obtainable. In this study, a simulation of the Thermal Simulated Moving Bed Concentrator (TSMBC), an SMB process combined with thermal swing adsorption, was carried out. The advantage of TSMBC is that the adsorption isotherm can be easily controlled by a thermal wave or direct heating. Recovery of pure water and concentration of phenol were studied in simulation. To verify the environmentally friendly potential of TSMBC, DOWEX $1{\times}4$ was chosen as the adsorbent and phenol was selected as the target material. When three columns were used, the concentration of phenol was 2.29, 2.28, and 1.31 times higher than that of the injected sample. However, contamination of phenol in the solvent port was found, probably due to the restriction of the adsorption isotherm of phenol on DOWEX $1{\times}4$.

Design and Implementation of ASTERIX Parsing Module Based on Pattern Matching for Air Traffic Control Display System (항공관제용 현시시스템을 위한 패턴매칭 기반의 ASTERIX 파싱 모듈 설계 및 구현)

  • Kim, Kanghee;Kim, Hojoong;Yin, Run Dong;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.89-101 / 2014
  • Recently, as domestic air traffic has dramatically increased, the need for ATC (air traffic control) systems has grown for safe and efficient ATM (air traffic management). For smooth ATC, it is especially important to guarantee the performance of the display system, which must show the entire air traffic situation in the FIR (flight information region) without additional latency. In this paper, we design an ASTERIX (All purpose STructured Eurocontrol suRveillance Information eXchange) parsing module that promotes stable ATC by minimizing system load, that is, by reducing the overhead that arises when parsing ASTERIX messages. Our pattern-matching-based ASTERIX parsing module creates patterns by analyzing received ASTERIX data and handles subsequently received ASTERIX data using the pre-defined procedure associated with each pattern. Unlike existing parsing modules, which contain unnecessary parsing procedures, this module minimizes display errors by rapidly extracting only the information necessary for display, enabling controllers to perform stable ATC. Comparison with an existing general bit-level ASTERIX parsing module shows that the pattern-matching-based module has shorter processing delay, higher throughput, and lower CPU usage.

A Novel Compressed Sensing Technique for Traffic Matrix Estimation of Software Defined Cloud Networks

  • Qazi, Sameer;Atif, Syed Muhammad;Kadri, Muhammad Bilal
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.4678-4702 / 2018
  • Traffic matrix estimation has always caught the attention of researchers for better network management and future planning. With the advent of high traffic loads due to cloud computing platforms and software defined networking based tunable routing and traffic management algorithms on the Internet, it is more necessary than ever to be able to predict current and future traffic volumes on the network. For large networks, such an origin-destination traffic prediction problem takes the form of a large under-constrained and under-determined system of equations with a dynamic measurement matrix. Previous researchers relied on the assumption that the measurement (routing) matrix is stationary, due to which their schemes are not suitable for modern software defined networks. In this work, we present our Compressed Sensing with Dynamic Model Estimation (CS-DME) architecture suitable for modern software defined networks. Our main contributions are: (1) we formulate an approach in which the measurement matrix in the compressed sensing scheme can be accurately and dynamically estimated through a reformulation of the problem based on traffic demands. (2) By inspecting its eigenspectrum on two real-world datasets, we show that a problem formulation using a dynamic measurement matrix based on instantaneous traffic demands may be used instead of a stationary binary routing matrix, which is more suitable for modern software defined networks whose routing is constantly evolving. (3) We also show that dynamically linking this compressed measurement matrix with the measured parameters leads to acceptable estimation of origin-destination (OD) traffic flows, with only marginally poorer results than other state-of-the-art schemes relying on fixed measurement matrices. (4) Furthermore, using this compressed reformulated problem, a new strategy for selecting vantage points for the most efficient traffic matrix estimation is presented through a secondary compression technique based on a subset of link measurements. Experimental evaluation of the proposed technique using the real-world datasets Abilene and GEANT shows that it is practical for use in modern software defined networks, and its performance is compared with recent state-of-the-art techniques from the research literature.
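The under-determined link-load system y = Ax at the heart of such schemes can be sketched as follows. The toy routing matrix is invented, and a minimum-norm least-squares solve stands in for the paper's compressed-sensing machinery:

```python
import numpy as np

# toy network: 3 link-load measurements, 5 OD flows (under-determined: 3 < 5)
A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 1]], dtype=float)   # binary routing matrix
x_true = np.array([10.0, 5.0, 8.0, 2.0, 4.0])  # hypothetical OD traffic
y = A @ x_true                                 # observed link loads

# minimum-norm estimate consistent with the measurements;
# CS methods add sparsity or demand-based priors on top of this
x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because the system has more unknowns than equations, many flow vectors reproduce the same link loads; the estimate above matches the measurements but not necessarily the true flows, which is exactly why extra structure (such as the paper's dynamic, demand-based measurement matrix) is needed.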

Adsorption Characteristics of Acetone, Benzene, and Metylmercaptan in the Fixed Bed Reactor Packed with Activated Carbon Prepared from Waste Citrus Peel (폐감귤박으로 제조한 활성탄을 충전한 고정층 반응기에서 아세톤, 벤젠 및 메틸메르캅탄의 흡착특성)

  • Kam, Sang-Kyu;Kang, Kyung-Ho;Lee, Min-Gyu
    • Applied Chemistry for Engineering / v.29 no.1 / pp.28-36 / 2018
  • Adsorption experiments with three target gases, acetone, benzene, and methyl mercaptan (MM), were carried out in a continuous reactor using activated carbon (WCAC) prepared from waste citrus peel. In the single-gas system, the breakthrough time obtained from the breakthrough curve decreased with increasing inlet concentration and flow rate, but increased with the aspect ratio (L/D). The amounts of the target gases adsorbed by WCAC increased as a function of inlet concentration and aspect ratio; however, the effect of increasing flow rate on the adsorbed amount differed among the target gases. The breakthrough times and adsorbed amounts showed that the affinity for WCAC was highest for benzene, followed by acetone and then MM. In the binary and ternary systems, on the other hand, the breakthrough curves showed a roll-up phenomenon in which an adsorbate with a small affinity for WCAC was displaced by an adsorbate with a higher affinity. The adsorption of acetone on WCAC was more strongly affected by mixing with the nonpolar benzene than with the sulfur compound MM.

Disaggregate Demand Forecasting and Estimation of the Optimal Price for VTIS (부가교통정보시스템(VTIS) 이용수요예측 및 적정이용료 산정에 관한 연구)

  • 정헌영;진재업;손태민
    • Journal of Korean Society of Transportation / v.20 no.4 / pp.27-38 / 2002
  • VTIS (Value-added Traffic Information System), one of the sub-systems of ATIS, is an advanced traffic system that improves efficiency and safety, and it is important because it combines marketability with public value. The system offers specific traffic information according to the demands of individual users, and additional spread effects are expected because of the high participation rate of the private sector. However, VTIS service media are varied, and the optimal price and payment method differ for each medium, so these problems and the optimal criteria require study. Because existing studies were devoted to estimating optimal routes, no study has addressed the optimal price from the perspective of users and service demand. Accordingly, we conducted a survey under hypothetical alternative pricing scenarios and forecast the demand for VTIS using a binary logit model. For the respondents who answered that they would use the VTIS service, we classified usage behavior into four categories and estimated the usage ratio of each category using an ordered probit model. Finally, applying sensitivity analysis to the above results, we derived optimal prices of 2,800 won per month and 145 won per call, at which the VTIS service use rates are 65% and 75%, respectively.
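A binary logit model of the kind used here maps price to a use probability. A sketch with hypothetical coefficients, chosen only so that demand is around 65% at a 2,800-won monthly price; the study's estimated coefficients are not given in the abstract:

```python
import math

def p_use(price_won, beta0=2.3, beta1=-0.0006):
    # binary logit: P(use) = 1 / (1 + exp(-V)), with utility
    # V = beta0 + beta1 * monthly price in won (coefficients hypothetical)
    v = beta0 + beta1 * price_won
    return 1.0 / (1.0 + math.exp(-v))

# predicted use probability at a few candidate monthly prices
demand = {price: p_use(price) for price in (2000, 2800, 4000)}
```

Sweeping the price through such a curve and reading off where predicted demand (and hence revenue) behaves acceptably is the basic shape of the sensitivity analysis described above.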

Fast Generation of Elliptic Curve Base Points Using Efficient Exponentiation over $GF(p^m)$ (효율적인 $GF(p^m)$ 멱승 연산을 이용한 타원곡선 기저점의 고속 생성)

  • Lee, Mun-Kyu
    • Journal of KIISE:Computer Systems and Theory / v.34 no.3 / pp.93-100 / 2007
  • Since Koblitz and Miller suggested the use of elliptic curves in cryptography, there has been an extensive literature on elliptic curve cryptosystems (ECC). The use of ECC is based on the observation that the points on an elliptic curve form an additive group under the point addition operation. To realize secure cryptosystems using these groups, it is very important to find an elliptic curve whose group order is divisible by a large prime, and also to find a base point whose order equals this prime. While there have been many dramatic improvements in finding an elliptic curve and computing its group order efficiently, there are few results on finding an adequate base point for a given curve. In this paper, we propose an efficient method to find a random base point on an elliptic curve defined over $GF(p^m)$. We first show that the critical operation in finding a base point is exponentiation. We then present efficient algorithms to accelerate exponentiation in $GF(p^m)$. Finally, we implement our algorithms and give experimental results on various practical elliptic curves, which show that the new algorithms make the search for a base point 1.62-6.55 times faster than a search based on binary exponentiation.
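The baseline the paper compares against, binary (square-and-multiply) exponentiation, works as follows. This is a plain modular version over integers for illustration; the paper's algorithms operate on field elements of $GF(p^m)$:

```python
def binary_pow(a, e, p):
    # left-to-right square-and-multiply: one squaring per exponent bit,
    # plus one extra multiplication for each 1-bit of the exponent
    result = 1
    for bit in bin(e)[2:]:
        result = (result * result) % p
        if bit == "1":
            result = (result * a) % p
    return result

r = binary_pow(3, 20, 97)  # same value as pow(3, 20, 97)
```

The cost is about log2(e) squarings plus one multiplication per set bit, which is why reducing the cost of each field multiplication (or the number of them) directly speeds up the base-point search.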

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm that uses weak hypotheses whose outputs are 1 or -1. We then extend the algorithm to use weak hypotheses with real-valued outputs, which were proposed recently to improve error reduction rates and final filtering performance. Next, we attempt further improvement by first setting weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned; this mitigates the overfitting that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all participants in the TREC-8 filtering task are also provided.
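The boosting scheme described, combining word-presence weak hypotheses with weighted votes, can be sketched as follows. This is a toy binary-output AdaBoost on an invented five-document corpus; the real-valued-hypothesis and random-restart variants studied in the paper are extensions of this core loop:

```python
import math

# toy corpus: (set of words, label) with label +1 = relevant, -1 = irrelevant
docs = [({"stock", "market"}, 1), ({"stock", "price"}, 1),
        ({"football", "goal"}, -1), ({"market", "goal"}, 1),
        ({"football", "price"}, -1)]
vocab = sorted(set().union(*(d for d, _ in docs)))

def stump(word, doc_words):
    # weak hypothesis: +1 if the document contains the word, else -1
    return 1 if word in doc_words else -1

def adaboost(docs, rounds=5):
    n = len(docs)
    w = [1.0 / n] * n          # document weights, initially uniform
    ensemble = []
    for _ in range(rounds):
        # pick the word whose stump has the lowest weighted error
        best = min(vocab, key=lambda word: sum(
            wi for wi, (d, y) in zip(w, docs) if stump(word, d) != y))
        err = sum(wi for wi, (d, y) in zip(w, docs) if stump(best, d) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # vote weight
        ensemble.append((alpha, best))
        # re-weight: increase weight on misclassified documents
        w = [wi * math.exp(-alpha * y * stump(best, d))
             for wi, (d, y) in zip(w, docs)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(ensemble, doc_words):
    score = sum(a * stump(word, doc_words) for a, word in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(docs)
```

The real-valued variant replaces the ±1 stump output with a confidence-rated value, and the iterated version reruns this loop from randomized initial weights and pools all resulting hypotheses.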