• Title/Summary/Keyword: Binary Systems

Design of Fuzzy Prediction System based on Dual Tuning using Enhanced Genetic Algorithms (강화된 유전알고리즘을 이용한 이중 동조 기반 퍼지 예측시스템 설계 및 응용)

  • Bang, Young-Keun;Lee, Chul-Heui
    • The Transactions of The Korean Institute of Electrical Engineers, v.59 no.1, pp.184-191, 2010
  • Many researchers have applied genetic algorithms to system optimization problems. Real-coded genetic algorithms in particular are effective because their coding procedure is simpler than that of binary-coded genetic algorithms and they avoid the extra work of lengthening the chromosome to cover a wide search space. This paper presents a design technique that improves the performance of a fuzzy system. The proposed system consists of two procedures. The primary tuning procedure coarsely tunes the fuzzy sets of the system using the structurally simple k-means clustering algorithm, and the secondary tuning procedure then finely tunes the fuzzy sets using enhanced real-coded genetic algorithms built on the primary procedure. In addition, multiple fuzzy systems are constructed through a data preprocessing procedure devised to reflect the various characteristics of nonlinear data. Finally, the proposed fuzzy system is applied to time series prediction, and the effectiveness of the proposed techniques is verified by simulations on typical time series examples.
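
  A minimal runnable sketch of the dual-tuning idea in the abstract above, under assumed forms: Gaussian membership functions, a toy sine series, k-means for the coarse centers, and a plain real-coded GA (truncation selection, arithmetic crossover, Gaussian mutation) for the fine tuning. Function names, rule consequents, and parameter values are illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans_1d(x, k, iters=50):
        """Primary (coarse) tuning: plain k-means on the scalar inputs."""
        centers = np.sort(rng.choice(x, k, replace=False))
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            centers = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return np.sort(centers)

    def predict(centers, width, x, y_rule):
        """Membership-weighted average of rule outputs (zero-order TSK style)."""
        mu = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
        return (mu * y_rule).sum(axis=1) / (mu.sum(axis=1) + 1e-12)

    def fitness(centers, width, x, y, y_rule):
        return -np.mean((predict(centers, width, x, y_rule) - y) ** 2)

    # Toy one-step-ahead prediction data: predict y[t+1] from y[t]
    series = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
    x, y = series[:-1], series[1:]

    k, width = 5, 0.4
    coarse = kmeans_1d(x, k)                                  # primary tuning
    order = np.argsort(x)
    y_rule = np.interp(coarse, x[order], y[order])            # crude rule consequents

    # Secondary (fine) tuning: real-coded GA perturbing the coarse centers
    pop = coarse + 0.1 * rng.standard_normal((30, k))
    for gen in range(100):
        scores = np.array([fitness(ind, width, x, y, y_rule) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]               # truncation selection
        kids = []
        for _ in range(len(pop) - len(parents)):
            pa, pb = parents[rng.integers(10)], parents[rng.integers(10)]
            lam = rng.random()
            child = lam * pa + (1 - lam) * pb                 # arithmetic crossover
            kids.append(child + 0.02 * rng.standard_normal(k))  # Gaussian mutation
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(ind, width, x, y, y_rule) for ind in pop])]
    print("coarse centers    :", np.round(coarse, 3))
    print("fine-tuned centers:", np.round(np.sort(best), 3))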

Risk Relationship of Cataract and Epilation on Radiation Dose and Smoking Habit

  • Tomita, Makoto;Otake, Masanori;Moon, Sung-Ho
    • Journal of the Korean Data and Information Science Society, v.17 no.4, pp.1349-1364, 2006
  • An analytic approach that provides explicit risk estimates for cataract and epilation data is evaluated by the reasonableness of conceivable relative risk models based on simple, odds, logistic, or Gompertz regression, assuming a binomial distribution. In these analyses, we apply relative risk models with two thresholds for epilators and nonepilators, exploiting the highly characteristic finding that radiation cataract does not occur below about 2 gray for a single acute exposure. The risk models are fitted to the data assuming a constant relative biological effectiveness of 10 for neutrons. The likelihood of the entire data set under the fitted models is evaluated from the individual binary-response array. The threshold estimate, with or without severe epilation, and its 100$(1-\alpha)$% confidence limits are derived by the maximum likelihood approach. The relative risk model with two thresholds can be written as Background $\times$ RR, where RR includes the threshold terms with or without epilation. The radiosensitivity to ionizing radiation with respect to cataract is examined by comparing epilators and nonepilators.
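
  The following is an illustrative sketch, not the paper's model: a binary-response dose-threshold model P(case | d) = expit(b0 + b1 · max(d − t, 0)) fitted by maximum likelihood, with the threshold t profiled over a grid. The data are synthetic and the functional form is an assumption.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(1)
    dose = rng.uniform(0, 6, 400)                        # dose in gray
    true_t, b0, b1 = 2.0, -2.0, 0.9
    p = expit(b0 + b1 * np.clip(dose - true_t, 0, None))
    case = rng.binomial(1, p)                            # 1 = cataract observed

    def neg_loglik(beta, t):
        eta = beta[0] + beta[1] * np.clip(dose - t, 0, None)
        pr = expit(eta)
        return -np.sum(case * np.log(pr + 1e-12) + (1 - case) * np.log(1 - pr + 1e-12))

    best = None
    for t in np.linspace(0.0, 4.0, 41):                  # profile likelihood over the threshold
        res = minimize(neg_loglik, x0=[0.0, 0.1], args=(t,), method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, t, res.x)

    nll, t_hat, beta_hat = best
    print(f"estimated threshold ~ {t_hat:.2f} Gy, coefficients {beta_hat.round(2)}")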

A New Concept of Power Flow Analysis

  • Kim, Hyung-Chul;Samann, Nader;Shin, Dong-Geun;Ko, Byeong-Hun;Jang, Gil-Soo;Cha, Jun-Min
    • Journal of Electrical Engineering and Technology, v.2 no.3, pp.312-319, 2007
  • The solution of the power flow is one of the most important problems in electrical power systems. Traditional methods such as the Gauss-Seidel and Newton-Raphson (NR) methods have drawbacks such as sensitivity to initial values, abnormal operating solutions, and divergence under heavy loads. To overcome these problems, a power flow solution incorporating a genetic algorithm (GA) is introduced in this paper. The general GA operators, arithmetic crossover and non-uniform mutation, are applied to the power flow problem. While an abnormal solution cannot be obtained by the NR method, multiple power flow solutions can be obtained by the GA method. Under heavy load, both the normal and the abnormal solution can be obtained by the proposed method. A floating-point representation is used instead of a binary representation for accuracy. Simulation results are compared with those of the traditional methods.
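
  A minimal sketch of the two GA operators the abstract names, applied to a floating-point (real-coded) chromosome; the chromosome layout, bounds, and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def arithmetic_crossover(parent_a, parent_b):
        """Child is a random convex combination of the two parents."""
        lam = rng.random()
        return lam * parent_a + (1 - lam) * parent_b

    def nonuniform_mutation(x, lo, hi, gen, max_gen, b=5.0, p_mut=0.1):
        """Michalewicz-style non-uniform mutation: step sizes shrink as gen -> max_gen."""
        y = x.copy()
        shrink = (1 - gen / max_gen) ** b
        for i in range(len(y)):
            if rng.random() < p_mut:
                if rng.random() < 0.5:
                    y[i] += (hi[i] - y[i]) * (1 - rng.random() ** shrink)
                else:
                    y[i] -= (y[i] - lo[i]) * (1 - rng.random() ** shrink)
        return np.clip(y, lo, hi)

    # Example chromosome: two bus voltage magnitudes and two angles (illustrative only)
    lo = np.array([0.9, 0.9, -np.pi, -np.pi])
    hi = np.array([1.1, 1.1,  np.pi,  np.pi])
    pa, pb = rng.uniform(lo, hi), rng.uniform(lo, hi)
    child = nonuniform_mutation(arithmetic_crossover(pa, pb), lo, hi, gen=10, max_gen=100)
    print(child)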

Inventory Models for Fresh Agriculture Products with Time-Varying Deterioration Rate

  • Ning, Yufu;Rong, Lixia;Liu, Jianjun
    • Industrial Engineering and Management Systems, v.12 no.1, pp.23-29, 2013
  • This paper presents inventory models for fresh agricultural products with a time-varying deterioration rate. Because of the particular nature of fresh agricultural products, the demand rate is a function of the sale price and the freshness. The deterioration rate increases with time and is assumed to be a time-varying function. In the models, the inventory cycle may be constant or variable. The optimal solutions of the models are discussed for different freshness levels and deterioration rates. The experimental results show that the profit depends on the freshness and the deterioration rate of the products. As the inventory cycle increases, the sale price and profit first increase and then start to decrease. Furthermore, when the inventory cycle is variable, the total profit is a bivariate function of the sale price and the inventory cycle, and there exist a unique sale price and inventory cycle at which the profit is optimal. The results also show that the optimal sale price and inventory cycle depend on the freshness and the deterioration rate of the fresh agricultural products.
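
  An illustrative numeric sketch, not the paper's exact model: total profit treated as a function of both the sale price p and the inventory cycle T, with a price- and freshness-dependent demand rate and a deterioration rate that grows with time, maximized over a grid. All functional forms and constants are assumptions.

    import numpy as np

    purchase_cost, holding_cost = 2.0, 0.05

    def demand(p, t, T):
        freshness = max(1.0 - t / (2 * T), 0.0)        # freshness decays over the cycle
        return max(50.0 - 8.0 * p, 0.0) * freshness    # linear in price, scaled by freshness

    def deterioration(t):
        return 0.01 + 0.02 * t                         # deterioration rate grows with time

    def profit_rate(p, T, dt=0.02):
        stock = sum(demand(p, t, T) * dt for t in np.arange(0, T, dt))   # order-up-to level
        level, revenue, holding = stock, 0.0, 0.0
        for t in np.arange(0, T, dt):
            sold = min(demand(p, t, T) * dt, level)
            lost = deterioration(t) * level * dt       # time-varying deterioration loss
            level = max(level - sold - lost, 0.0)
            revenue += p * sold
            holding += holding_cost * level * dt
        return (revenue - purchase_cost * stock - holding) / T           # profit per unit time

    prices = np.linspace(2.5, 6.0, 15)
    cycles = np.linspace(0.5, 5.0, 19)
    best = max((profit_rate(p, T), p, T) for p in prices for T in cycles)
    print(f"profit rate ~ {best[0]:.2f} at price ~ {best[1]:.2f}, cycle ~ {best[2]:.2f}")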

A Study on the Optical Communication Channel using Forward Error Correcting Technique (순방향 에러 교정 기법을 이용한 광통신 채널에 관한 연구)

  • Kang, Young-Jin;Kim, Sun-Yeob
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.2, pp.835-839, 2013
  • In this paper, we study ways of increasing the capacity of optical communication systems that operate at a relatively low BER by using forward error control coding techniques. Coding gain is defined as the ratio of the error probability of the uncoded signal to that of the coded signal. Increasing the coding gain by lowering the code rate reduces the signal error probability only up to an optimum point, because the added redundancy lengthens the effective binary symbol duration until it can no longer be ignored. The transmission capacity was examined against the coding gain for various code rates, and it was confirmed that transmission capacities of up to 75 Gb/s can be obtained with these coding techniques.
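
  A small numeric illustration of the trade-off described above, using a Hamming(7,4) code over BPSK/AWGN as a stand-in (not the paper's code or figures): coding lowers the decoded error probability, but each information bit then occupies 1/R channel symbols.

    from math import comb, erfc, sqrt

    def qfunc(x):
        return 0.5 * erfc(x / sqrt(2))

    eb_no = 10 ** (10.0 / 10)                         # Eb/N0 of 10 dB

    # Uncoded BPSK bit error probability
    p_uncoded = qfunc(sqrt(2 * eb_no))

    # Hamming(7,4): n=7, k=4, corrects t=1 error; each channel bit carries R*Eb/N0
    n, k, t = 7, 4, 1
    rate = k / n
    p_ch = qfunc(sqrt(2 * rate * eb_no))              # raw channel bit error probability
    # standard approximation for the post-decoding bit error probability
    p_coded = sum((i + t) / n * comb(n, i) * p_ch**i * (1 - p_ch)**(n - i)
                  for i in range(t + 1, n + 1))

    print(f"uncoded bit error probability : {p_uncoded:.3e}")
    print(f"decoded bit error probability : {p_coded:.3e}")
    print(f"coding gain (error-prob ratio): {p_uncoded / p_coded:.2f}")
    print(f"symbol-duration penalty       : {1 / rate:.2f}x per information bit")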

Measurement and Prediction of the Flash Point for the Flammable Binary Mixtures using Tag Open-Cup Apparatus (Tag식 개방계 장치를 이용한 가연성 이성분계 혼합물의 인화점 측정 및 예측)

  • Ha, Dong-Myeong;Lee, Sungjin;Song, Young-Ho
    • Korean Chemical Engineering Research, v.43 no.1, pp.181-185, 2005
  • The flash point is one of the most important combustibility properties used to determine the fire and explosion hazard potential of industrial materials. Accurate knowledge of the flash point is important in developing appropriate preventive and control measures in industrial fire protection. The flash points of the n-butanol + n-propionic acid and n-propanol + n-propionic acid systems were measured using a Tag open-cup apparatus (ASTM D 1310-86). The experimental data were compared with values calculated from Raoult's law and from the van Laar equation. The values calculated with the van Laar equation agreed with the measurements better than those based on Raoult's law.
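
  A hedged sketch of one common way to realize the comparison the abstract describes: the mixture flash point is taken as the temperature where, by a Le Chatelier-type rule, the sum of x_i * gamma_i * Psat_i(T) / Psat_i(T_fp,i) reaches 1, with gamma_i = 1 under Raoult's law or given by the van Laar equation otherwise. The Antoine constants, van Laar parameters, and pure-component flash points below are placeholders, not the paper's measured values.

    import math

    def psat_mmHg(T_C, A, B, C):
        """Antoine equation: T in deg C, vapour pressure in mmHg."""
        return 10 ** (A - B / (T_C + C))

    def van_laar_gammas(x1, A12, A21):
        """Activity coefficients of a binary mixture from the van Laar equation."""
        x2 = 1.0 - x1
        ln_g1 = A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2
        ln_g2 = A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2
        return math.exp(ln_g1), math.exp(ln_g2)

    # Placeholder data per component: (Antoine A, B, C, pure flash point in deg C)
    comp1 = (7.4768, 1362.39, 178.77, 35.0)    # roughly n-butanol-like, illustrative only
    comp2 = (7.5480, 1617.06, 205.00, 52.0)    # roughly propionic-acid-like, illustrative only

    def flash_point(x1, use_van_laar=True, A12=0.5, A21=0.4):
        g1, g2 = van_laar_gammas(x1, A12, A21) if use_van_laar else (1.0, 1.0)
        lo, hi = -20.0, 150.0
        for _ in range(60):                    # bisection: the left-hand side rises with T
            T = 0.5 * (lo + hi)
            lhs = (x1 * g1 * psat_mmHg(T, *comp1[:3]) / psat_mmHg(comp1[3], *comp1[:3])
                   + (1 - x1) * g2 * psat_mmHg(T, *comp2[:3]) / psat_mmHg(comp2[3], *comp2[:3]))
            lo, hi = (lo, T) if lhs > 1.0 else (T, hi)
        return T

    for x1 in (0.2, 0.5, 0.8):
        print(f"x1={x1:.1f}  Raoult: {flash_point(x1, use_van_laar=False):6.1f} C"
              f"   van Laar: {flash_point(x1):6.1f} C")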

A Dynamic Storage Allocation Algorithm with Predictable Execution Time (예측 가능한 실행 시간을 가진 동적 메모리 할당 알고리즘)

  • Jeong, Seong-Mu;Yu, Hae-Yeong;Sim, Jae-Hong;Kim, Ha-Jin;Choe, Gyeong-Hui;Jeong, Gi-Hyeon
    • The Transactions of the Korea Information Processing Society, v.7 no.7, pp.2204-2218, 2000
  • This paper proposes a dynamic storage allocation algorithm, QHF (quick-half-fit), for real-time systems. The proposed algorithm manages a free block list per word size for small memory requests and a free block list per power-of-2 size for large memory requests. The algorithm uses the exact-fit policy for small requests and therefore provides high memory utilization. It also has time complexity O(1), which makes the worst-case execution time (WCET) easy to estimate. To confirm the efficiency of the proposed algorithm, its memory utilization is compared with that of the half-fit and binary buddy systems, which also have time complexity O(1). The simulation results show that the proposed algorithm guarantees a constant WCET regardless of the system memory size and provides lower fragmentation and allocation failure ratios than the other two algorithms.
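
  A simplified, illustrative model of the segregated-free-list idea the abstract describes, not the authors' QHF implementation: exact-size lists for small requests and power-of-two size classes for large ones, so both allocation paths avoid searching. Sizes are abstract "words" and no real memory is managed.

    from collections import deque

    SMALL_LIMIT = 32                      # requests up to this many words use exact-fit lists

    class ToyAllocator:
        def __init__(self):
            self.small = [deque() for _ in range(SMALL_LIMIT + 1)]   # index = exact size
            self.large = [deque() for _ in range(32)]                # index = ceil(log2(size))

        @staticmethod
        def _bucket(size):
            return max(size - 1, 1).bit_length()       # smallest k with 2**k >= size

        def free(self, block_id, size):
            if size <= SMALL_LIMIT:
                self.small[size].append(block_id)       # exact-fit list, O(1)
            else:
                self.large[self._bucket(size)].append(block_id)      # power-of-two list, O(1)

        def alloc(self, size):
            if size <= SMALL_LIMIT:
                if self.small[size]:
                    return self.small[size].popleft()   # exact fit, no internal fragmentation
            else:
                for k in range(self._bucket(size), len(self.large)):
                    if self.large[k]:
                        return self.large[k].popleft()  # first non-empty class (bounded scan)
            return None                                 # a real allocator would grow the heap

    alloc = ToyAllocator()
    alloc.free("A", 16); alloc.free("B", 100)
    print(alloc.alloc(16), alloc.alloc(80), alloc.alloc(8))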

A Polynomial-time Algorithm to Find Optimal Path Decompositions of Trees (트리의 최적 경로 분할을 위한 다항시간 알고리즘)

  • An, Hyung-Chan
    • Journal of KIISE: Computer Systems and Theory, v.34 no.5_6, pp.195-201, 2007
  • A minimum terminal path decomposition of a tree is defined as a partition of the tree into edge-disjoint terminal-to-terminal paths that minimizes the weight of the longest path. In this paper, we present an $O(|V|^2)$-time algorithm to find a minimum terminal path decomposition of trees. The algorithm reduces the given optimization problem to a binary search using the corresponding decision problem: deciding whether the cost of a minimum terminal path decomposition is at most $l$. This decision problem is solved by dynamic programming in a single traversal of the tree.
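
  A minimal sketch of the reduction pattern the abstract describes: binary-search the optimum over candidate bounds using a monotone decision oracle. The oracle below is a deliberately simple stand-in (splitting a weight sequence into at most k runs), not the paper's tree dynamic program; only the search pattern is the point.

    def minimize_with_decision_oracle(candidates, feasible):
        """Smallest candidate value for which feasible() is True (candidates sorted)."""
        lo, hi = 0, len(candidates) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if feasible(candidates[mid]):
                hi = mid          # feasible at mid, so the optimum is at mid or below
            else:
                lo = mid + 1
        return candidates[lo]

    # Toy monotone oracle: can `weights` be split into at most `k` contiguous runs,
    # each of total weight <= l?  (stands in for the tree decomposition check)
    def make_oracle(weights, k):
        def feasible(l):
            runs, current = 1, 0
            for w in weights:
                if w > l:
                    return False
                if current + w > l:
                    runs, current = runs + 1, w
                else:
                    current += w
            return runs <= k
        return feasible

    weights = [4, 7, 2, 9, 5, 1, 3]
    candidates = sorted({sum(weights[i:j + 1]) for i in range(len(weights))
                         for j in range(i, len(weights))})   # all achievable run weights
    print(minimize_with_decision_oracle(candidates, make_oracle(weights, 3)))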

Joint Relay Selection and Resource Allocation for Cooperative OFDMA Network

  • Lv, Linshu;Zhu, Qi
    • KSII Transactions on Internet and Information Systems (TIIS), v.6 no.11, pp.3008-3025, 2012
  • In this paper, downlink resource allocation in an OFDMA system with decode-and-forward (DF) relaying is investigated. A non-convex optimization problem that maximizes system throughput under user satisfaction constraints is formulated over joint relay selection, subcarrier assignment, and power allocation. We first transform it into a standard convex problem and then solve it by dual decomposition. In particular, an Optimal resource allocation scheme With Time-sharing (OWT) is proposed that combines relay selection, subcarrier allocation, and power control. Because OWT adapts poorly to fast-varying environments, an improved version with subcarrier Monopolization (OWM) is put forward, whose performance improves by about 20% over OWT in a fast-varying vehicular environment. In fact, OWM is the special case of OWT with a binary time-sharing factor, and OWT can be seen as a tight upper bound on OWM. To the best of our knowledge, such algorithms and their relationship have not been investigated in cooperative OFDMA networks in the literature. Simulation results show that both the system throughput and the user satisfaction of the proposed algorithms outperform those of the traditional ones.
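
  A heavily simplified sketch, not the paper's dual-decomposition OWT/OWM algorithms: per subcarrier, the DF relay with the strongest bottleneck hop is chosen and the subcarrier is given wholly to one user (a binary time-sharing factor, as in OWM). The channel statistics, rate formula, and greedy assignment are illustrative assumptions and ignore the satisfaction constraints.

    import numpy as np

    rng = np.random.default_rng(3)
    n_sub, n_relay, n_user, power = 16, 3, 4, 1.0

    g_sr = rng.exponential(1.0, (n_sub, n_relay))            # source -> relay gains
    g_rd = rng.exponential(1.0, (n_sub, n_relay, n_user))    # relay -> destination gains

    assignment = np.zeros((n_sub, n_user), dtype=int)        # binary time-sharing factors
    user_rate = np.zeros(n_user)

    for s in range(n_sub):
        # DF rate on each (relay, user) pair is limited by the weaker hop
        bottleneck = np.minimum(g_sr[s][:, None], g_rd[s])   # shape (n_relay, n_user)
        best_relay = bottleneck.argmax(axis=0)               # best relay per user
        rate = 0.5 * np.log2(1 + power * bottleneck[best_relay, np.arange(n_user)])
        u = rate.argmax()                                    # give the subcarrier to one user
        assignment[s, u] = 1
        user_rate[u] += rate[u]

    print("per-user rates:", user_rate.round(2))
    print("subcarriers per user:", assignment.sum(axis=0))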

Fast Face Gender Recognition by Using Local Ternary Pattern and Extreme Learning Machine

  • Yang, Jucheng;Jiao, Yanbin;Xiong, Naixue;Park, DongSun
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.7, pp.1705-1720, 2013
  • Human face gender recognition requires fast image processing with high accuracy. Existing face gender recognition methods based on traditional local features and machine learning suffer from low accuracy or slow speed. In this paper, a new framework for fast face gender recognition is proposed, based on the Local Ternary Pattern (LTP) and the Extreme Learning Machine (ELM). LTP is a generalization of the Local Binary Pattern (LBP) that remains robust in the presence of monotonic illumination variations on a face image and has high discriminative power for texture classification; it is also more discriminant and less sensitive to noise in uniform regions. ELM, on the other hand, is a learning algorithm for single-hidden-layer feedforward networks that requires no iterative parameter tuning. Its main advantages are less stringent optimization constraints, faster operation, easy implementation, and usually improved generalization performance. Experimental results on public databases show that, compared with existing algorithms, the proposed method achieves higher precision and better generalization at an extremely fast learning speed.
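
  A compact sketch of the two building blocks named above, in generic textbook form rather than the paper's exact pipeline: the LTP code thresholds each 3x3 neighbourhood into {-1, 0, +1} with tolerance t and splits the result into the usual upper/lower binary codes; the ELM code draws random hidden weights and solves the output weights by a pseudoinverse. The toy data at the end stand in for real LTP histogram features.

    import numpy as np

    rng = np.random.default_rng(0)

    def ltp_codes(img, t=5):
        """Upper/lower LTP codes for all interior pixels of a grayscale image."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        upper = np.zeros((h - 2, w - 2), dtype=int)
        lower = np.zeros((h - 2, w - 2), dtype=int)
        center = img[1:-1, 1:-1].astype(int)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
            upper |= (neigh >= center + t).astype(int) << bit   # ternary +1 half
            lower |= (neigh <= center - t).astype(int) << bit   # ternary -1 half
        return upper, lower

    def elm_train(X, y, hidden=200):
        """ELM: random hidden layer, output weights by least squares (pseudoinverse)."""
        W = rng.standard_normal((X.shape[1], hidden))
        b = rng.standard_normal(hidden)
        beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy usage: LTP codes of a random "image", then an ELM on random stand-in features
    img = (rng.random((8, 8)) * 255).astype(np.uint8)
    up, low = ltp_codes(img)
    X = rng.random((100, 64))                      # stands in for LTP histogram features
    y = np.sign(rng.standard_normal(100))          # +/-1 "gender" labels
    W, b, beta = elm_train(X, y)
    acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
    print(f"training accuracy on toy data: {acc:.2f}")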