• Title/Summary/Keyword: random sets

A hybrid algorithm for classifying rock joints based on improved artificial bee colony and fuzzy C-means clustering algorithm

  • Ji, Duofa;Lei, Weidong;Chen, Wenqin
    • Geomechanics and Engineering
    • /
    • v.31 no.4
    • /
    • pp.353-364
    • /
    • 2022
  • This study presents a hybrid algorithm for classifying rock joints, in which the improved artificial bee colony (IABC) and fuzzy C-means (FCM) clustering algorithms are combined so that the artificial bee colony (ABC) algorithm tunes the FCM clustering algorithm and yields a more reasonable and stable result. A coefficient is proposed to reduce the amount of blind random searching and speed up convergence, thereby optimizing and improving the ABC algorithm. The results from the IABC algorithm are used as initial parameters in FCM to avoid falling into a local optimum during the local search, thus producing stable classification results. Two validity indices are adopted to verify the rationality and practicability of the IABC-FCM algorithm in classifying rock joints, and the optimal number of joint sets is obtained based on these indices. Two illustrative examples, i.e., simulated rock joint data and field-survey rock joint data, are used to check the feasibility and practicability of the proposed algorithm in rock engineering. The results show that the IABC-FCM algorithm is applicable to classifying rock joint sets.
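The seeding idea described above, using centers found by a global optimizer to initialize FCM, can be sketched generically. The code below is a plain fuzzy C-means loop started from externally supplied centers; the random seed standing in for the IABC output, the Euclidean distance, and the toy orientation data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fcm(X, init_centers, m=2.0, max_iter=100, tol=1e-5):
    """Fuzzy C-means seeded with externally supplied centers
    (e.g. the output of a bee-colony style global search)."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(max_iter):
        # distances of every sample to every center (n x c)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # center update: weighted mean with weights u^m
        w = u ** m
        new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u

# toy joint-orientation data (dip direction, dip angle) and a crude "global" seed
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([120, 40], 5, (50, 2)), rng.normal([300, 70], 5, (50, 2))])
seed = X[rng.choice(len(X), 2, replace=False)]   # stand-in for the IABC output
centers, u = fcm(X, seed)
print(centers, u.argmax(axis=1)[:5])
```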

Utility Maximization, The Shapes of the Indifference Curve on the Characteristic Space and its Estimation: A Theoretical Approach (개인여객 효용의 극대화 및 운송특성공간상의 무차별곡선의 형태와 그 추정)

  • Kim, Jong-Seok
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.2
    • /
    • pp.157-168
    • /
    • 2009
  • The random utility theory and the multinomial logit model derived from it (including a more recent variant, the mixed multinomial logit) have constituted a backbone for theoretical and empirical analyses of various travel demand features, including mode choice. In empirical applications, however, it is customary to specify random utilities that are linear in modal attributes such as time and cost and in socio-economic variables. This linearity allows easy derivation of important quantities such as the value of travel time savings by calculating the marginal rate of substitution between time and cost. In this paper the author focuses on this linearity of the random utilities. Taking into account the fact that the mode chooser is also a labour supplier, commodity consumer, and leisure-seeker, the author sets up a maximization model of the traveller that encompasses the traveller's various economic activities. From this model the author derives the indifference curve defined on the space of modal attributes, time and cost, and investigates under what conditions the random utility of the traveller becomes linear. It turns out that there exist conditions under which the random utility is indeed linear in modal attributes, but the property does not hold when the traveller has a corner solution on the space of modal attributes, or when the primary utility function of the traveller is directly affected by the labour supplied and/or the travel time itself. As a corollary of the analysis, a random utility approximated up to the second order of the variables involved is suggested for empirical studies in the field.
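The standard calculation this abstract alludes to, recovering the value of travel time savings from a linear specification, can be written out explicitly; the symbols below follow the generic logit literature and are not the author's own notation.

```latex
% Illustrative linear-in-attributes systematic utility for mode j
% (generic logit specification, not the author's model):
V_j = \beta_t\, t_j + \beta_c\, c_j + \gamma^{\top} z_j
% Holding V_j constant (moving along an indifference curve), dV_j = 0, so
% the value of travel time savings is the marginal rate of substitution:
\mathrm{VTTS} \;=\; -\left.\frac{\mathrm{d}c_j}{\mathrm{d}t_j}\right|_{V_j}
\;=\; \frac{\partial V_j/\partial t_j}{\partial V_j/\partial c_j}
\;=\; \frac{\beta_t}{\beta_c}.
```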

Abstracted Partitioned-Layer Index: A Top-k Query Processing Method Reducing the Number of Random Accesses of the Partitioned-Layer Index (요약된 Partitioned-Layer Index: Partitioned-Layer Index의 임의 접근 횟수를 줄이는 Top-k 질의 처리 방법)

  • Heo, Jun-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.9
    • /
    • pp.1299-1313
    • /
    • 2010
  • Top-k queries return the k objects that users most want in the database. The Partitioned-Layer Index (simply, the PL-index) is a representative method for processing top-k queries efficiently. The PL-index partitions the database into a number of smaller databases and then, for each partitioned database, constructs a list of sublayers over it. Here, the $i^{th}$ sublayer in a partitioned database contains the objects that can be the top-i object in that partition. To retrieve the top k results, the PL-index merges the sublayer lists depending on the user's query. The PL-index has the advantage of reading a very small number of objects from the database when processing queries. However, since many random accesses occur in merging the sublayer lists, the query performance of the PL-index is not good in environments such as disk-based databases. In this paper, we propose the Abstracted Partitioned-Layer Index (simply, the APL-index), which significantly improves the query performance of the PL-index in disk-based environments by reducing the number of random accesses. First, by abstracting each sublayer of the PL-index into a virtual (point) object, we transform the lists of sublayers into lists of virtual objects (i.e., the APL-index). Then, we virtually process the given query using the APL-index and, accordingly, predict the sublayers that will need to be read when actually processing the query. Next, we read the predicted sublayers from each sublayer list at a time. Accordingly, we reduce the number of random accesses that occur in the PL-index. Experimental results using synthetic and real data sets show that the proposed APL-index can significantly reduce the number of random accesses occurring in the PL-index.
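The abstraction-then-prediction idea can be illustrated with a small in-memory sketch. Everything below, the per-dimension-maximum virtual point, the function names, and the toy data, is an illustrative assumption rather than the paper's actual index structure or I/O model.

```python
from typing import List, Sequence
import heapq

Obj = Sequence[float]

def score(obj: Obj, w: Sequence[float]) -> float:
    return sum(x * wi for x, wi in zip(obj, w))

def virtual_point(sublayer: List[Obj]) -> Obj:
    # upper-bounding point: coordinate-wise maximum of the sublayer
    return [max(o[d] for o in sublayer) for d in range(len(sublayer[0]))]

def predict_sublayers(partitions: List[List[List[Obj]]], w, k: int):
    """Phase 1: run the query on virtual points only and decide,
    per partition, how many sublayers are worth reading."""
    virtual = [(score(virtual_point(sl), w), p, i)
               for p, layers in enumerate(partitions)
               for i, sl in enumerate(layers)]
    chosen = heapq.nlargest(k, virtual)   # k best virtual objects
    depth = {}
    for _, p, i in chosen:
        depth[p] = max(depth.get(p, 0), i + 1)
    return depth

def topk(partitions, w, k: int):
    """Phase 2: sequentially read only the predicted sublayers."""
    depth = predict_sublayers(partitions, w, k)
    candidates = [o for p, d in depth.items()
                    for sl in partitions[p][:d] for o in sl]
    return heapq.nlargest(k, candidates, key=lambda o: score(o, w))

# toy example: 2 partitions, 2 sublayers each
parts = [[[(0.9, 0.1)], [(0.5, 0.4)]], [[(0.2, 0.8)], [(0.1, 0.3)]]]
print(topk(parts, w=(0.7, 0.3), k=2))
```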

Applicability of a Multiplicative Random Cascade Model for Disaggregation of Forecasted Rainfalls (예보강우 시간분해를 위한 Multiplicative Cascade 모형의 적용성 평가)

  • Kim, Daeha;Yoon, Sun-Kwon;Kang, Moon Seong;Lee, Kyung-do
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.58 no.5
    • /
    • pp.91-99
    • /
    • 2016
  • High-resolution rainfall data at a 1-hour or finer scale are essential for reliable flood analysis and forecasting; nevertheless, many observations, forecasts, and climate projections are still given at coarse temporal resolutions. This study aims to evaluate a chaotic method for disaggregating 6-hour rainfall data sets so that the operational 6-hour rainfall forecasts of the Korea Meteorological Administration can be applied to flood models. We computed the parameters of a state-of-the-art multiplicative random cascade model with two combinations of cascades, namely uniform splitting and diversion, using rainfall observations at the Seoul station, and compared their statistical performance. We additionally disaggregated 6-hour rainfall time series at 58 stations with the uniform splitting scheme and evaluated the temporal transferability of the parameters and changes in multifractal properties. Results showed that uniform splitting outperformed diversion in reproducing the observed statistics and hence is better suited for disaggregating 6-hour rainfall forecasts. We also found that the multifractal properties of the rainfall observations have adequate temporal consistency, with an indication of gradually increasing rainfall intensity across South Korea.
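The disaggregation principle behind such cascade models can be shown with a minimal micro-canonical sketch: each coarse value is split between its two halves by a random weight so that mass is conserved at every level. The beta-distributed weights, the dyadic branching, and the dry-step handling below are illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np

def cascade_disaggregate(series, levels, a=4.0, rng=None):
    """Split every value in `series` `levels` times (each split halves
    the time step), returning the finer-resolution series."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(series, dtype=float)
    for _ in range(levels):
        w = rng.beta(a, a, size=x.size)        # symmetric weights around 0.5
        w[x == 0.0] = 0.5                      # nothing to split in dry steps
        x = np.column_stack([x * w, x * (1 - w)]).ravel()
    return x

# example: two 6-hour totals (mm) split twice, giving 1.5-hour depths
rng = np.random.default_rng(42)
fine = cascade_disaggregate([12.0, 0.0], levels=2, rng=rng)
print(fine, fine.reshape(2, -1).sum(axis=1))   # sums reproduce the 6-h totals
```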

Fast Motion Estimation Using Adaptive Search Range for HEVC (적응적 탐색 영역을 이용한 HEVC 고속 움직임 탐색 방법)

  • Lee, Hoyoung;Shim, Huik Jae;Park, Younghyeon;Jeon, Byeungwoo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.4
    • /
    • pp.209-211
    • /
    • 2014
  • This paper proposes a fast motion estimation method that can reduce the computational complexity of the HEVC encoding process. While a previous method determines its search range based on the distance between the current and reference pictures to accelerate the time-consuming motion estimation, the proposed method adaptively sets the search range according to the motion vector difference between prediction units. Experimental results show that the proposed method achieves about a 10.7% reduction in motion estimation processing time under the random access configuration, whereas its coding efficiency loss is less than 0.1%.
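The general idea of tying the search range to the motion vector difference (MVD) of neighbouring prediction units can be sketched as a simple lookup; the thresholds and window sizes below are placeholders, not the values used in the paper or in the HEVC reference software.

```python
def adaptive_search_range(mvd_x: int, mvd_y: int,
                          base_range: int = 64, min_range: int = 8) -> int:
    """Return a square search-window half-size in integer pixels,
    scaled from the MVD magnitude of neighbouring prediction units."""
    mvd_mag = max(abs(mvd_x), abs(mvd_y))
    if mvd_mag <= 1:          # near-zero MVD: motion is well predicted
        return min_range
    if mvd_mag <= 8:          # moderate motion: shrink the window
        return base_range // 4
    if mvd_mag <= 32:
        return base_range // 2
    return base_range         # large MVD: fall back to the full range

if __name__ == "__main__":
    for mvd in [(0, 0), (3, 5), (20, 10), (100, 2)]:
        print(mvd, adaptive_search_range(*mvd))
```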

A Unified Measure of Association for Complex Data Obtained from Independence Tests (혼합자료에서 독립성 검정에 의한 연관성 측정)

  • 이승천;허문열
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.1
    • /
    • pp.151-167
    • /
    • 2003
  • Although there exist numerous measures of association, most of them lack generality in that they are not intended to measure the association between heterogeneous types of random variables. On the other hand, many statistical analyses dealing with complex data sets require a very sophisticated measure of association. In this note, the p-value of independence tests is utilized to obtain a measure of association. The proposed measure of association has some consistency in measuring association between various types of random variables.
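The recipe implied by the abstract, choose an independence test appropriate to the pair of variable types and report a function of its p-value, can be sketched as follows. The particular tests and the score 1 − p are illustrative assumptions; the paper defines its own unified transformation of the p-value.

```python
import numpy as np
from scipy import stats

def association(x, y, x_categorical: bool, y_categorical: bool) -> float:
    """Type-aware independence test, reported as an association score."""
    if x_categorical and y_categorical:
        table = np.array([[np.sum((x == a) & (y == b))
                           for b in np.unique(y)] for a in np.unique(x)])
        p = stats.chi2_contingency(table)[1]
    elif not x_categorical and not y_categorical:
        p = stats.pearsonr(x, y)[1]
    else:
        # mixed pair: group the continuous variable by the categorical one
        cat, num = (x, y) if x_categorical else (y, x)
        groups = [num[cat == level] for level in np.unique(cat)]
        p = stats.f_oneway(*groups)[1]
    return 1.0 - p    # larger means stronger evidence against independence

rng = np.random.default_rng(1)
g = rng.integers(0, 2, 200)
z = rng.normal(size=200) + g          # continuous variable shifted by group
print(round(association(g, z, True, False), 3))
```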

Does Correction Factor Vary with Solar Cycle?

  • Chang, Heon-Young;Oh, Sung-Jin
    • Journal of Astronomy and Space Sciences
    • /
    • v.29 no.2
    • /
    • pp.97-101
    • /
    • 2012
  • Monitoring sunspots consistently is the most basic step required to study various aspects of solar activity. To achieve this goal, observers must regularly calculate their own correction factor $k$ and keep it stable. Relatively recently, two observing teams in South Korea presented interesting papers claiming that revisions taking a yearly-basis $k$ into account lead to better agreement with the international relative sunspot number $R_i$, and that the yearly $k$ apparently varies with the solar cycle. In this paper, using artificial data sets, we have modeled the sunspot numbers as a superposition of random noise and a slowly varying background function and have attempted to investigate whether the variation in the correction factor is coupled with the solar cycle. Regardless of the statistical distribution of the random noise, we have found that the correction factor increases as sunspot numbers increase, as claimed in the reports mentioned above. The degree of dependence of the correction factor $k$ on the sunspot number depends on the signal-to-noise ratio. Therefore, we conclude that the apparent dependence of the correction factor $k$ on the phase of the solar cycle is not a physical property but a statistical property of the data.
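The kind of Monte-Carlo experiment the abstract describes can be sketched as follows; the sinusoidal background standing in for the solar cycle, the clipped Gaussian noise, and the ratio-of-means estimate of the yearly $k$ are all illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
days_per_year, years = 365, 11
t = np.arange(days_per_year * years)

# slowly varying background (one full "solar cycle") plus random noise
background = 75.0 * (1.0 - np.cos(2.0 * np.pi * t / (days_per_year * years)))
observed = np.clip(background + rng.normal(0.0, 25.0, t.size), 0.0, None)

for y in range(years):
    sl = slice(y * days_per_year, (y + 1) * days_per_year)
    k = background[sl].mean() / observed[sl].mean()   # yearly correction factor
    print(f"year {y + 1:2d}  mean R = {background[sl].mean():6.1f}  k = {k:4.2f}")
```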

Conditional Moment-based Classification of Patterns Using Spatial Information Based on Gibbs Random Fields (깁스확률장의 공간정보를 갖는 조건부 모멘트에 의한 패턴분류)

  • Kim, Ju-Sung;Yoon, Myoung-Young
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1636-1645
    • /
    • 1996
  • In this paper we propose a new scheme for conditional two-dimensional (2-D) moment-based classification of patterns on the basis of Gibbs random fields, which are well suited for representing spatial continuity, a characteristic of most images. The implementation contains two parts: feature extraction and pattern classification. First, we extract a feature vector that consists of conditional 2-D moments computed on the basis of the estimated Gibbs parameters. Note that the extracted feature vectors are invariant under translation, rotation, and scaling of a pattern with respect to the corresponding template pattern. In order to evaluate the performance of the proposed scheme, classification experiments with training sets of character documents were carried out on a 486 66 MHz PC. The experiments reveal that the proposed scheme achieves a high classification rate of over 94%.
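For context, the sketch below computes ordinary scale-normalized central 2-D moments of a pattern; the paper's conditional moments are derived from estimated Gibbs-random-field parameters, and that conditioning is not reproduced here.

```python
import numpy as np

def central_moments(img, max_order=3):
    """Return {(p, q): eta_pq} scale-normalized central moments of a 2-D pattern."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[(p, q)] = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    # scale normalization: eta_pq = mu_pq / mu_00^(1 + (p+q)/2)
    return {k: v / m00 ** (1 + (k[0] + k[1]) / 2.0) for k, v in mu.items()}

pattern = np.zeros((8, 8)); pattern[2:6, 3:5] = 1.0     # toy binary pattern
print({k: round(v, 3) for k, v in central_moments(pattern, 2).items()})
```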

Performance Analysis of Multilayer Neural Net Classifiers Using Simulated Pattern-Generating Processes (모의 패턴생성 프로세스를 이용한 다단신경망분류기의 성능분석)

  • Park, Dong-Seon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.2
    • /
    • pp.456-464
    • /
    • 1997
  • We describe a random process model that provides sets of patterns with precisely controlled within-class variability and between-class distinctions. We used these patterns in a simulation study with the back-propagation network to characterize its performance as we varied the process-controlling parameters, the statistical differences between the processes, and the random noise on the patterns. Our results indicated that the generalized statistical difference between the processes generating the patterns provided a good predictor of the difficulty of the classification problem. We also analyzed the performance of the Bayes classifier with the maximum-likelihood criterion and compared the performance of the neural network to that of the Bayes classifier. We found that the performance of the neural network was intermediate between that of the simulated and theoretical Bayes classifiers.
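A toy version of this kind of controlled comparison is sketched below; the Gaussian pattern generators and the use of a plug-in Bayes rule in place of the back-propagation network are simplifying assumptions, not the paper's actual processes.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, n, sep, sigma = 16, 2000, 1.0, 1.5          # dimension, samples, separation, noise
mu0, mu1 = np.zeros(d), np.full(d, sep / np.sqrt(d))

# two pattern classes with controlled mean separation and noise level
X = np.vstack([rng.normal(mu0, sigma, (n, d)), rng.normal(mu1, sigma, (n, d))])
y = np.repeat([0, 1], n)

# theoretical Bayes error for equal spherical Gaussians:
# P(err) = Phi(-||mu1 - mu0|| / (2 * sigma))
delta = np.linalg.norm(mu1 - mu0)
bayes_err = norm.cdf(-delta / (2.0 * sigma))

# "simulated" Bayes classifier: assign each pattern to the nearer true class mean
pred = (np.linalg.norm(X - mu1, axis=1) < np.linalg.norm(X - mu0, axis=1)).astype(int)
emp_err = np.mean(pred != y)

print(f"theoretical Bayes error {bayes_err:.3f}, simulated Bayes error {emp_err:.3f}")
```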

Hyper-Geometric Distribution Software Reliability Growth Model : Generalization, Estimation and Prediction (초기하분포 소프트웨어 신뢰성 성장 모델 : 일반화, 추정과 예측)

  • Park, Jung-Yang;Yu, Chang-Yeol;Park, Jae-Hong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.9
    • /
    • pp.2343-2349
    • /
    • 1999
  • The hyper-geometric distribution software reliability growth model (HGDM) was recently developed and successfully applied to real data sets. The HGDM treats the sensitivity factor as a parameter to be estimated. In order to reflect the random behavior of the test-and-debug process, this paper generalizes the HGDM by assuming that the sensitivity factor is a binomial random variable. Such a generalization enables us to easily understand the statistical characteristics of the HGDM. It is shown that the least squares method produces identical results for both the HGDM and the generalized HGDM. Methods for computing the maximum likelihood estimates and predicting future outcomes are also presented.
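A rough illustration of least-squares fitting of an HGDM-style growth curve is given below; the constant sensitivity factor, the resulting mean curve $C(i) = m\,(1-(1-w/m)^i)$, and the fault-count data are simplifying assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def hgdm_mean(i, m, r):
    # r = 1 - w/m: per-instance fraction of faults left undetected,
    # so the expected cumulative detected faults are m * (1 - r**i)
    return m * (1.0 - r ** i)

i = np.arange(1, 16)
cum_faults = np.array([5, 9, 13, 16, 19, 21, 23, 25, 26, 27, 28, 29, 29, 30, 30])

(m_hat, r_hat), _ = curve_fit(hgdm_mean, i, cum_faults, p0=[40.0, 0.85],
                              bounds=([1.0, 0.0], [1000.0, 1.0]))
w_hat = m_hat * (1.0 - r_hat)   # implied constant sensitivity factor
print(f"estimated initial faults m = {m_hat:.1f}, sensitivity w = {w_hat:.2f}")
print("predicted faults after 20 instances:", round(hgdm_mean(20, m_hat, r_hat), 1))
```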
