• Title/Summary/Keyword: combinatorial number theory

ON THE MULTI-DIMENSIONAL PARTITIONS OF SMALL INTEGERS

  • Kim, Jun-Kyo
    • East Asian mathematical journal
    • /
    • v.28 no.1
    • /
    • pp.101-107
    • /
    • 2012
  • For each dimension exceeding 1, determining the number of multi-dimensional partitions of a positive integer is an open question in combinatorial number theory. For $n \leq 14$ and $d \geq 1$ we derive a formula for the function $\wp_d(n)$, where $\wp_d(n)$ denotes the number of partitions of $n$ arranged on a $d$-dimensional space. We also give another definition of the $d$-dimensional partitions using a union of finitely many divisor sets of integers. (A small computational sketch for the two-dimensional case follows below.)
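
The d = 2 case (plane partitions) is the smallest member of the family discussed above, and its counting function has MacMahon's classical generating function $\prod_{k\ge1}(1-x^k)^{-k}$. The sketch below is a minimal illustration under the assumption that the paper's two-dimensional count coincides with the usual plane-partition count; the function name `plane_partitions` is ours, not the paper's.

```python
# Counts plane partitions of n for n <= N via MacMahon's generating
# function prod_{k>=1} 1/(1 - x^k)^k, truncated at degree N.
def plane_partitions(N):
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for k in range(1, N + 1):
        # Multiply the truncated series by 1/(1 - x^k) exactly k times.
        for _ in range(k):
            for i in range(k, N + 1):
                coeffs[i] += coeffs[i - k]
    return coeffs

if __name__ == "__main__":
    print(plane_partitions(14))
    # [1, 1, 3, 6, 13, 24, 48, 86, 160, 282, 500, 859, 1479, 2485, 4167]
```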

Numerical analysis of quantization-based optimization

  • Jinwuk Seok;Chang Sik Cho
    • ETRI Journal
    • /
    • v.46 no.3
    • /
    • pp.367-378
    • /
    • 2024
  • We propose a number-theory-based quantized mathematical optimization scheme for various NP-hard and similar problems. Conventional global optimization schemes, such as simulated and quantum annealing, assume stochastic properties that require multiple attempts. Although our quantization-based optimization proposal also depends on stochastic features (i.e., the white-noise hypothesis), it provides more reliable optimization performance. Our numerical analysis equates quantization-based optimization to quantum annealing, and its quantization property effectively provides global optimization by decreasing the measure of the level sets associated with the objective function. Consequently, the proposed combinatorial optimization method allows the removal of the acceptance probability used in conventional heuristic algorithms, providing more effective optimization. Numerical experiments show that the proposed algorithm determines the global optimum in less operational time than conventional schemes. (A toy sketch of the general idea follows below.)
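
The scheme itself is specified only in the paper; as a loose, purely illustrative analogue (our construction, not the authors' algorithm, and ignoring their white-noise analysis), the toy sketch below performs greedy random search on a quantized objective with a shrinking quantization step, so that no acceptance probability is needed.

```python
import math, random

def rastrigin(x):
    # Standard multimodal test function; global minimum 0 at x = 0.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def quantized_search(f, dim=2, iters=20000, span=5.12, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-span, span) for _ in range(dim)]
    delta = 10.0                          # initial quantization step of the objective
    for t in range(iters):
        if t and t % 2000 == 0:
            delta = max(delta * 0.5, 1e-6)   # coarse-to-fine schedule
        cand = [xi + rng.gauss(0, 0.3) for xi in x]
        # Greedy acceptance on the *quantized* objective: no acceptance probability.
        if math.floor(f(cand) / delta) <= math.floor(f(x) / delta):
            x = cand
    return x, f(x)

x_best, f_best = quantized_search(rastrigin)
print(x_best, f_best)
```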

The development of critical node method based heuristic procedure for Solving fuzzy assembly-line balancing problem (퍼지 조립라인밸런싱 문제 해결을 위한 주노드법에 기초한 휴리스틱 절차 개발)

  • 이상완;박병주
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.22 no.51
    • /
    • pp.189-197
    • /
    • 1999
  • The assembly line balancing problem is known as one of the difficult combinatorial optimization problems. It has been attacked with linear programming and dynamic programming approaches, but unfortunately these approaches do not lead to efficient algorithms. Recently, genetic algorithms have been recognized as an efficient procedure for solving hard combinatorial optimization problems, but they have the defect of requiring long run times and high computational complexity to find a solution. For this reason, we adopt a new method called the Critical Node Method, which is intuitive, easy to understand, and simple to implement. Fuzzy set theory is frequently used to represent uncertainty of information. In this paper, to treat the data of real-world problems, we use fuzzy numbers to represent task durations, and a Critical Node Method based heuristic procedure is developed for solving the fuzzy assembly line balancing problem. (A minimal sketch of fuzzy task durations follows below.)
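
The Critical Node Method itself is specific to this paper, but representing uncertain task durations by fuzzy numbers is standard. The sketch below is a minimal illustration under the common assumption of triangular fuzzy numbers (the class name and the centroid defuzzification are ours, not taken from the paper): fuzzy durations are added along a workstation and compared against a cycle time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriFuzzy:
    """Triangular fuzzy number (a, b, c): support [a, c], peak at b."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is component-wise.
        return TriFuzzy(self.a + other.a, self.b + other.b, self.c + other.c)

    def defuzzify(self):
        # Centroid of a triangular membership function.
        return (self.a + self.b + self.c) / 3.0

# Hypothetical task durations (minutes) assigned to one workstation.
tasks = [TriFuzzy(2, 3, 4), TriFuzzy(1, 2, 2.5), TriFuzzy(3, 4, 6)]

station_time = sum(tasks[1:], tasks[0])
cycle_time = 10.0
print(station_time, station_time.defuzzify(),
      station_time.defuzzify() <= cycle_time)
```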

Optimal Design of Contour-Lined Plots for Land Consolidation Planning in Sloping Areas (경사지 경지정리지구의 등고선 구획 최적설계)

  • 강민구;박승우;강문성;김상민
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.45 no.6
    • /
    • pp.83-95
    • /
    • 2003
  • In this study, a new concept for paddy consolidation projects is introduced: curved parallel terracing with a contour-lined layout is adopted in sloping areas instead of conventional rectangular terracing. The contoured layout reduces earth moving considerably compared to rectangular methods in consolidation projects. The objective of the paper is to develop a combinatorial optimization model using network theory for the design of contour-lined plots that minimizes the volume of earth moving. The results showed that as the short side of a plot becomes longer or the land slope becomes steeper, the volume of earth moving for land leveling increases. The developed optimization model is applied to three consolidated districts, and the resulting optimal earth moving is compared with the volume of earth from the conventional method. The shorter the minimum length of the short side of a plot (which increases the number of plots), the smaller the volume of earth. With the minimum short-side length set to 20 m for efficient field work by farm machinery, the earth-moving volume of the optimal plots is 21.0∼27.1 % less than that of the conventionally consolidated plots. (A small earthwork-volume illustration follows below.)
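
The abstract's key quantity is the volume of earth that must be moved to level a plot. The sketch below is a minimal illustration, not the paper's network-theoretic model: it assumes a plot described by a regular grid of ground elevations and computes the cut-and-fill volume needed to level it to its mean elevation (the grid values, cell size, and function name are hypothetical).

```python
def leveling_volume(elevations, cell_area_m2):
    """Earth moving (m^3) required to level a plot to its mean elevation.

    elevations   : list of lists of ground heights (m) on a regular grid.
    cell_area_m2 : ground area represented by each grid cell (m^2).
    Cut volume equals fill volume when the target is the mean height,
    so the moved volume is half the total absolute deviation.
    """
    cells = [h for row in elevations for h in row]
    target = sum(cells) / len(cells)
    return 0.5 * cell_area_m2 * sum(abs(h - target) for h in cells)

# A small sloping plot: elevations in metres on a 3 x 4 grid, 25 m^2 cells.
plot = [
    [10.0, 10.2, 10.4, 10.6],
    [10.1, 10.3, 10.5, 10.7],
    [10.2, 10.4, 10.6, 10.8],
]
print(leveling_volume(plot, 25.0))
```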

DEGENERATE POLYEXPONENTIAL FUNCTIONS AND POLY-EULER POLYNOMIALS

  • Kurt, Burak
    • Communications of the Korean Mathematical Society
    • /
    • v.36 no.1
    • /
    • pp.19-26
    • /
    • 2021
  • Degenerate versions of special polynomials and numbers have been studied intensively, since they have many applications in analytic number theory, combinatorial analysis and p-adic analysis. In this paper, we define the degenerate poly-Euler numbers and polynomials arising from the modified polyexponential functions. We derive explicit relations for these numbers and polynomials. Also, we obtain some identities involving these polynomials and some other special numbers and polynomials. (The standard definitions underlying this construction are recalled below.)
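
For context, the following are the standard definitions used in the recent literature on degenerate polyexponential functions (e.g. the papers of D. S. Kim and T. Kim); we assume, but cannot confirm from the abstract alone, that the same conventions are used in this paper.

```latex
% Polyexponential function of index k (the case k = 1 reduces to e^x - 1):
\mathrm{Ei}_k(x) \;=\; \sum_{n=1}^{\infty} \frac{x^{n}}{(n-1)!\, n^{k}}

% Degenerate exponential function, the usual starting point for
% "degenerate" analogues of special polynomials and numbers:
e_{\lambda}^{x}(t) \;=\; (1+\lambda t)^{x/\lambda}, \qquad
e_{\lambda}(t) \;:=\; e_{\lambda}^{1}(t)
```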

Identification of Combinatorial Factors Affecting Fatal Accidents in Small Construction Sites: Association Rule Analysis (연관규칙 기반 소규모 건설현장 사망재해 다중요인 분석)

  • Lee, Gangho;Lee, Chansik;Koo, Choogwan;Kim, Tae Wan
    • Korean Journal of Construction Engineering and Management
    • /
    • v.21 no.4
    • /
    • pp.90-99
    • /
    • 2020
  • The construction industry suffers from a large number of fatal accidents. Because much field work is conducted under dangerous conditions, such as working at height and in adverse weather, construction workers are exposed to safety accidents of high frequency and severity compared to other industries. Such risk is even larger on small construction sites, but studies that focus on the combinatorial factors leading to fatal accidents on small construction sites are lacking. Thus, in order to reduce fatal accidents in the construction industry, this study analyzed 1,438 occupational death accident cases on small construction sites and then conducted association rule analysis to extract ten combinatorial factors that frequently led to fatal accidents on small construction sites. Based on the extracted association rules, this study also discusses possible countermeasures to reduce fatal accidents. The results were presented to experts, who agreed with the findings. This study contributes to construction safety management theory by providing a detailed analysis of fatal accidents on small construction sites that can be used for developing and deploying safety policies and education for small-construction-site workers. (A short association-rule-mining sketch follows below.)
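
Association rule analysis of accident records is a standard data-mining workflow. The sketch below is a minimal illustration, not the authors' pipeline: it assumes each accident case is a set of categorical factors, one-hot encodes them, and mines frequent itemsets and rules with the mlxtend library (the example factors and thresholds are hypothetical).

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each accident case is a list of categorical factors (hypothetical examples).
cases = [
    ["fall", "roof_work", "no_harness"],
    ["fall", "scaffold", "no_harness"],
    ["struck_by", "crane", "adverse_weather"],
    ["fall", "roof_work", "no_harness", "adverse_weather"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(cases).transform(cases), columns=te.columns_)

# Mine frequent factor combinations, then derive rules such as
# {roof_work, no_harness} -> {fall} with support/confidence/lift.
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```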

REGULAR MAPS-COMBINATORIAL OBJECTS RELATING DIFFERENT FIELDS OF MATHEMATICS

  • Nedela, Roman
    • Journal of the Korean Mathematical Society
    • /
    • v.38 no.5
    • /
    • pp.1069-1105
    • /
    • 2001
  • Regular maps and hypermaps are cellular decompositions of closed surfaces exhibiting the highest possible number of symmetries. The five Platonic solids present the most familiar examples of regular maps. The great dodecahedron, a 5-valent pentagonal regular map on the surface of genus 5 discovered by Kepler, is probably the first known non-spherical regular map. The modern history of regular maps goes back at least to Klein (1878), who described in [59] a regular map of type (3, 7) on the orientable surface of genus 3. In its early times, the study of regular maps was closely connected with group theory, as one can see in Burnside's famous monograph [19] and, more recently, in Coxeter's and Moser's book [25] (Chapter 8). The present-time interest in regular maps extends to their connection to Dyck's triangle groups, Riemann surfaces, algebraic curves, Galois groups and other areas. Many of these links are nicely surveyed in the recent papers of Jones [55] and Jones and Singerman [54]. The present survey paper is based on the talk given by the author at the conference "Mathematics in the New Millennium" held in Seoul, October 2000. The idea was, on the one hand, to show the relationship of (regular) maps and hypermaps to the above-mentioned fields of mathematics. On the other hand, we wanted to stress some ideas and results that are important for understanding the nature of these interesting mathematical objects. (The triangle-group correspondence mentioned above is recalled below.)
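
As background for the connection to triangle groups mentioned in the abstract, the following is the standard correspondence as usually stated in the regular-maps literature; this recalls the general fact and is not a claim about the paper's own notation.

```latex
% An orientably regular map of type {p, q} (q-valent vertices, p-gonal faces)
% has orientation-preserving automorphism group a finite quotient of the
% ordinary (2, q, p) triangle group
\Delta^{+}(2, q, p) \;=\; \langle\, x, y \mid x^{q} = y^{2} = (xy)^{p} = 1 \,\rangle,
% where x rotates the map about a vertex, y about an edge midpoint, and xy
% about a face centre. Klein's genus-3 map of type (3, 7) arises in this way
% from the quotient of the (2, 3, 7) triangle group onto PSL(2, 7), of order 168.
```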

A Lower Bound for Performance of Group Testing Problems (그룹검사 문제에 대한 성능 하한치)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.5
    • /
    • pp.572-578
    • /
    • 2018
  • This paper considers group testing as a combinatorial problem. Group testing originated during World War II as a way to screen soldiers for syphilis infection and has long had an established academic basis. Recently, there has been much interest in related areas because of the rediscovery of the value of group testing. Group testing amounts to finding a few defective samples out of a large number of samples, which is similar to the inverse problem of compressed sensing. In this paper, we introduce the definition of group testing, and specify the classes of group testing and the bounds on its performance. In addition, we show a lower bound on the number of tests required to find the defective samples, using the information-theoretic theorem that relates conditional entropy to the probability of error. We also discuss how our result differs from other related results. (The classical counting bound is recalled below for comparison.)
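
For orientation, the simplest lower bound of this kind is the classical counting bound; it is recalled below as general background and is not necessarily the sharper entropy-based bound derived in the paper.

```latex
% To identify K defective items among N with zero error, any group-testing
% strategy (even an adaptive one) with T binary test outcomes must be able to
% distinguish all \binom{N}{K} possible defective sets, hence
2^{T} \;\ge\; \binom{N}{K}
\quad\Longrightarrow\quad
T \;\ge\; \log_{2} \binom{N}{K} \;\approx\; K \log_{2}\!\frac{N}{K}
\quad (K \ll N).
```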

Four proofs of the Cayley formula (케일리 공식의 네 가지 증명)

  • Seo, Seung-Hyun;Kwon, Seok-Il;Hong, Jin-Kon
    • Journal for History of Mathematics
    • /
    • v.21 no.3
    • /
    • pp.127-142
    • /
    • 2008
  • In this paper, we introduce four different approaches to proving Cayley's formula, which counts the number of labeled trees (acyclic connected simple graphs). The first proof was done by Cayley using recursive formulas. On the other hand, the core idea of the other three proofs is the bijective method: finding a one-to-one correspondence between the set of trees and a suitable family of combinatorial objects. Each of the three bijections gives its own generalization of Cayley's formula. In particular, the last proof, done by Seo and Shin, has an application to theoretical computer science, which is a typical example of pure mathematics supplying powerful tools to other research fields. (The best-known such bijection, the Prüfer correspondence, is sketched below.)
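
The most widely known bijective proof (not necessarily one of the three surveyed in the paper) is the Prüfer correspondence between labeled trees on n vertices and sequences in $\{1, \dots, n\}^{n-2}$, which immediately gives Cayley's count $n^{n-2}$. The sketch below decodes a Prüfer sequence into the unique tree it represents; the function name is ours.

```python
import heapq

def prufer_to_tree(seq):
    """Decode a Prüfer sequence (values in 1..n) into the edge list of
    the unique labeled tree on n = len(seq) + 2 vertices it encodes."""
    n = len(seq) + 2
    degree = [1] * (n + 1)            # 1-indexed; every vertex starts as a leaf
    for v in seq:
        degree[v] += 1                # each appearance adds one to the degree

    leaves = [v for v in range(1, n + 1) if degree[v] == 1]
    heapq.heapify(leaves)

    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)  # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)

    # Exactly two leaves remain; they form the last edge.
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((u, w))
    return edges

# Example: the 4^(4-2) = 16 sequences of length 2 over {1, 2, 3, 4}
# decode to the 16 labeled trees on 4 vertices.
print(prufer_to_tree([4, 4]))   # the star centered at vertex 4
```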

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation.

    The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives the user the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization.

    In figure 1 a term set is represented whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values over the elements of the universe of discourse, dm(m) is the dimension (in bits) of the values of the membership function m, and dm(fm) is the dimension of the word that represents the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element; the fuzzy-set word dimension would then be 8 × 5 bits, and the dimension of the memory would have been 128 × 40 bits.

    Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights.

    The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10].

    Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited (such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights). The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10]. (A short memory-sizing calculation follows below.)
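
The abstract's sizing argument is easy to reproduce. The sketch below is a minimal check using only the numbers given in the abstract (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element) and compares the proposed sparse word length with full vectorial memorization; the function names are ours.

```python
from math import ceil, log2

def bits(levels):
    # Number of bits needed to index `levels` distinct values.
    return ceil(log2(levels))

def sparse_word_length(nfm, truth_levels, n_fuzzy_sets):
    # Length = nfm * (dm(m) + dm(fm)): per non-null entry we store the
    # membership value and the index of the fuzzy set it belongs to.
    return nfm * (bits(truth_levels) + bits(n_fuzzy_sets))

def vectorial_word_length(truth_levels, n_fuzzy_sets):
    # Full vectorial storage keeps one membership value per fuzzy set.
    return n_fuzzy_sets * bits(truth_levels)

U = 128              # elements in the universe of discourse
sparse = sparse_word_length(nfm=3, truth_levels=32, n_fuzzy_sets=8)
full = vectorial_word_length(truth_levels=32, n_fuzzy_sets=8)
print(sparse, U * sparse)   # 24 bits per row, 128 * 24 = 3072 bits
print(full, U * full)       # 40 bits per row, 128 * 40 = 5120 bits
```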
