• Title/Summary/Keyword: generalized processing parameters

GLSP setup algorithm based on GMPLS (GMPLS 기반의 GLSP 경로 설정 알고리즘)

  • Kim, Kyoung-Mok; Oh, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.11 s.353 / pp.192-199 / 2006
  • Guaranteed bandwidth and effective traffic processing are required to handle the diverse traffic that accompanies the rapid growth of the Internet. A path setup algorithm was introduced to support backbone traffic processing, but its failure probability grows because nodes operate in a fixed manner; in other words, a fixed maximum permitted setup time can raise the failure probability. To solve this problem, this paper introduces a variable path-setup-time algorithm that supports channel service for the excess traffic generated on channels that would otherwise not be admitted. We propose a GLSP (Generalized Label Switched Path) setup algorithm that uses variable path setup time parameters; this algorithm can improve the path setup success probability of a backbone network that otherwise operates in a fixed manner.
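A rough sketch of the idea behind a variable rather than fixed maximum permitted setup time: the permitted time scales with observed load and retry count. The load model and scaling factors below are hypothetical illustrations, not the parameters from the paper.

```python
import random

def attempt_setup(load: float) -> float:
    """Simulate one GLSP setup attempt and return how long it took.
    Hypothetical model: setup time tends to grow with network load."""
    return random.expovariate(1.0 / (1.0 + 4.0 * load))

def setup_with_variable_timeout(load: float,
                                base_timeout: float = 2.0,
                                max_retries: int = 3) -> bool:
    """Retry path setup, scaling the permitted setup time with the
    observed load instead of using one fixed maximum permitted time."""
    for attempt in range(max_retries):
        timeout = base_timeout * (1.0 + load) * (attempt + 1)
        if attempt_setup(load) <= timeout:
            return True   # path established within the permitted time
    return False          # setup failed on every attempt
```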

A Fuzzy Morphological Neural Network: Principles and Implementation (퍼지 수리 형태학적 신경망 : 원리 및 구현)

  • Won, Yong-Gwan; Lee, Bae-Ho
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.449-459 / 1996
  • The main goal of this paper is to introduce a novel definition of fuzzy mathematical morphology and a neural network implementation of it. The generalized-mean operator plays the key role in the definition, which is well suited to neural network implementation. The first stage of the shared-weight neural network has an architecture adequate for performing morphological operations; the shared-weight network then performs classification based on the features extracted with the fuzzy morphological operation defined in this paper. The parameters of the fuzzy definition can therefore be optimized using the neural network learning paradigm. Learning rules for the structuring elements, degrees of membership, and weighting factors are described precisely. Applied to a handwritten digit recognition problem, the fuzzy morphological shared-weight neural network produced results comparable to the state of the art for this problem.
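The generalized (power) mean at the heart of the definition interpolates between min and max, which is what lets a single differentiable operator approximate both erosion and dilation. A minimal sketch, where the exponent, clipping, and weighting are illustrative assumptions rather than the paper's exact fuzzy definition:

```python
import numpy as np

def generalized_mean(x, p):
    """Power mean M_p(x) = (mean(x**p))**(1/p). As p -> +inf it approaches
    max(x) (dilation-like); as p -> -inf it approaches min(x) (erosion-like)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

def soft_erosion(window, structuring_elem, p=-10.0):
    """Hypothetical soft erosion of one image window: weight pixel values by
    the structuring element, then take a strongly negative-order power mean."""
    vals = np.clip(window * structuring_elem, 1e-6, None)  # keep values positive for fractional powers
    return generalized_mean(vals, p)

window = np.array([[0.9, 0.8], [0.7, 0.95]])
se = np.ones((2, 2))
print(soft_erosion(window, se))  # biased toward min(window) = 0.7
```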


Generalization and implementation of hardening soil constitutive model in ABAQUS code

  • Bo Song; Jun-Yan Liu; Yan Liu; Ping Hu
    • Geomechanics and Engineering / v.36 no.4 / pp.355-366 / 2024
  • The original elastoplastic Hardening Soil model is in fact formulated only partly under the hexagonal-pyramidal Mohr-Coulomb failure criterion and can be used only for specific stress paths; it must be fully generalized under the Mohr-Coulomb criterion before it can be used in engineering practice. A set of generalized constitutive equations under this criterion, including shear and volumetric yield surfaces and hardening laws, is proposed for the Hardening Soil model in principal stress space. A Mohr-Coulomb type yield surface in principal stress space, however, comprises six corners and an apex, which are singular points for the normal integration approach of the constitutive equations. Exploiting the isotropy of the material, a technique for handling these singularities by means of Koiter's rule is introduced, together with an approach based on the spectral decomposition method for transforming both the stress tensor and the consistent stiffness matrix between the two stress spaces, so that the generalized Hardening Soil model can be implemented in the finite element analysis code ABAQUS. The implemented model is verified against the original model's simulations of oedometer and triaxial tests, for volumetric and shear hardening respectively. The oedometer simulation shows a primary loading curve similar in shape to the original one, while the maximum vertical strain is slightly overestimated, by about 0.5%, probably due to the choice of relationships for the cap parameters. In the triaxial test simulation, the stress-strain and dilatancy curves both agree very well with the original curves as well as with the test data.
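The spectral-decomposition step mentioned above is concrete enough to sketch: a symmetric stress tensor is diagonalized, the constitutive correction is applied to the principal values, and the result is mapped back. The correction below is a placeholder cap, not the paper's Hardening Soil return mapping.

```python
import numpy as np

def to_principal(stress):
    """Spectral decomposition of a symmetric 3x3 stress tensor,
    sigma = sum_i s_i * n_i n_i^T: principal stresses and directions."""
    s, n = np.linalg.eigh(stress)      # eigenvalues ascending, eigenvectors in columns
    return s, n

def from_principal(s, n):
    """Reassemble the full stress tensor from principal values/directions."""
    return n @ np.diag(s) @ n.T

# Work in principal stress space, then map back to the general stress space.
sigma = np.array([[100.0, 20.0,  0.0],
                  [ 20.0, 80.0,  0.0],
                  [  0.0,  0.0, 60.0]])
s, n = to_principal(sigma)
sigma_new = from_principal(np.minimum(s, 90.0), n)  # placeholder yield correction
```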

The Design of a Fuzzy Adaptive Controller for the Process Control (공정제어를 위한 퍼지 적응제어기의 설계)

  • Lee, Bong Kuk
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.7 / pp.31-41 / 1993
  • In this paper, a fuzzy adaptive controller is proposed for processes with large delay times and unmodelled dynamics. The fuzzy adaptive controller consists of a self-tuning controller and a fuzzy tuning part. The self-tuning controller is designed with the continuous-time GMV (generalized minimum variance) approach, using an emulator and the weighted least-squares method, and is realized as a hybrid design. The controller gains robustness by adapting the inference rules over the design parameters, and the inference processing is tuned according to the operating point of the nonlinear process with practical application in mind. We examine the characteristics of the fuzzy adaptive controller through simulation and apply it to a practical electric furnace. As a result, the fuzzy adaptive controller shows better performance than both a simple numeric self-tuning controller and a PI controller.
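A self-tuning controller of this kind rests on online parameter estimation. The sketch below shows a generic recursive least-squares update with a forgetting factor, a standard building block for such designs; the GMV control law and emulator from the paper are not reproduced here.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam:
    theta -- current parameter estimate, P -- covariance matrix,
    phi -- regressor vector, y -- new measurement."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam      # discount old data via lam
    return theta, P

# Usage: estimate a two-parameter process model y = a*u1 + b*u2 online.
theta, P = np.zeros(2), np.eye(2) * 100.0
for u, y in [((1.0, 0.5), 2.0), ((0.3, 1.2), 1.5), ((0.8, 0.1), 1.7)]:
    theta, P = rls_update(theta, P, np.array(u), y)
```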


Noise Removal Using Complex Wavelet and Bernoulli-Gaussian Model (복소수 웨이블릿과 베르누이-가우스 모델을 이용한 잡음 제거)

  • Eom, Il-Kyu; Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.52-61 / 2006
  • The orthogonal wavelet transform generally used in image and signal processing applications has limited performance because it lacks shift invariance and offers low directional selectivity. The complex wavelet transform has been proposed to overcome these drawbacks. In this paper, we present an efficient image denoising method using the dual-tree complex wavelet transform and a Bernoulli-Gaussian prior model. We present two simple, non-iterative methods for estimating the hyper-parameters of the Bernoulli-Gaussian model: a hypothesis-testing technique estimates the mixing parameter (the Bernoulli random variable), and, based on the estimated mixing parameter, the variance of the clean signal is obtained with a maximum generalized marginal likelihood (MGML) estimator. We simulate our denoising method using the dual-tree complex wavelet transform and compare the algorithm to well-known denoising schemes. Experimental results show that the proposed method produces good denoising results for high-frequency images at low computational cost.
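To make the Bernoulli-Gaussian shrinkage concrete, the sketch below computes the posterior-mean estimate of a clean coefficient under that prior. The hyper-parameters sigma_n, sigma_s, and p are taken as given here rather than estimated by the paper's hypothesis test and MGML estimator.

```python
import numpy as np

def bg_shrink(w, sigma_n, sigma_s, p):
    """Posterior-mean shrinkage of noisy wavelet coefficients w under a
    Bernoulli-Gaussian prior: each clean coefficient is N(0, sigma_s^2)
    with probability p and exactly zero otherwise; noise is N(0, sigma_n^2)."""
    var = sigma_s ** 2 + sigma_n ** 2
    # Marginal likelihoods of w under 'signal present' and 'signal absent'
    l1 = p * np.exp(-w ** 2 / (2.0 * var)) / np.sqrt(var)
    l0 = (1.0 - p) * np.exp(-w ** 2 / (2.0 * sigma_n ** 2)) / sigma_n
    post = l1 / (l1 + l0)                  # P(signal present | w)
    wiener = sigma_s ** 2 / var            # linear MMSE gain when signal present
    return post * wiener * w

coeffs = np.array([0.05, -0.02, 1.4, -2.3, 0.01])
print(bg_shrink(coeffs, sigma_n=0.1, sigma_s=1.0, p=0.2))  # small coefficients shrink to ~0
```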

SOC Verification Based on WGL

  • Du, Zhen-Jun; Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires SoCs with significant data-path portions, yet the common verification models are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, based on the generalized-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that word-level WGL is the only model that linearly represents the common word-level functions, and that bit-level WGL is especially suitable for arithmetic-intensive circuits; the model is proved to be a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity below $O(n^{3.6})$ and space complexity below $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented to enable complex high-level verification; it combines order computation with coefficient solving and adopts an efficient backward approach. Its construction complexity is much lower than that of existing methods, e.g. the construction time for multipliers grows with a power of less than 1.6 in the input word size, without increasing the maximum space required. The WGL model and the verification methods based on it demonstrate both theoretical and practical significance for SoC design.
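The WGL weights and manipulation rules are not reproduced here, but the flavor of word-level verification can be sketched: a word-level function is kept as a polynomial coefficient list, and a bit-level implementation is checked against it. The shift-and-add multiplier below stands in for a gate-level circuit; this exhaustive check is a toy, not the paper's backward-construction approach.

```python
def poly_eval(coeffs, x):
    """Evaluate a word-level polynomial [c0, c1, ...] at integer word x,
    i.e. c0 + c1*x + c2*x**2 + ... (Horner's scheme)."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def shift_add_multiply(a, b, width):
    """A bit-level shift-and-add multiplier, standing in for a gate-level
    circuit whose behavior we want to verify."""
    acc = 0
    for i in range(width):
        if (b >> i) & 1:
            acc += a << i
    return acc

def check_multiplier(width=4):
    """Check the bit-level implementation against its word-level polynomial
    spec: for each fixed b, the spec in a is the polynomial 0 + b*a."""
    for b in range(2 ** width):
        for a in range(2 ** width):
            assert shift_add_multiply(a, b, width) == poly_eval([0, b], a)
    return True

print(check_multiplier())  # True
```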


Tracking and Interpretation of Moving Object in MPEG-2 Compressed Domain (MPEG-2 압축 영역에서 움직이는 객체의 추적 및 해석)

  • Mun, Su-Jeong; Ryu, Woon-Young; Kim, Joon-Cheol; Lee, Joon-Hoan
    • The KIPS Transactions: Part B / v.11B no.1 / pp.27-34 / 2004
  • This paper proposes a method to track and interpret a moving object based on information obtained directly from an MPEG-2 compressed video stream, without decoding. In the proposed method, a motion flow is constructed from the motion vectors included in the compressed video. The amounts of pan, tilt, and zoom associated with camera operations are calculated using a generalized Hough transform, and the local object motion is extracted from the motion flow after compensating for these global camera-motion parameters. Initially, the user designates the moving object to be tracked with a bounding box; automatic tracking is then performed based on the accumulated motion flows, weighted by area contributions. To reduce the cumulative tracking error, the object area is reshaped in the first I-frame of each GOP by matching DCT coefficients. The proposed method improves computation speed because the information is obtained directly from the MPEG-2 compressed video, but the object boundary is limited to macro-block rather than pixel resolution; the method is therefore suited to approximate rather than precise object tracking, given the limited information available in the compressed data.
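As a simpler stand-in for the generalized Hough voting used in the paper, the sketch below fits a four-parameter global motion model (zoom, rotation, pan) to block motion vectors by least squares and subtracts it to expose the local object motion:

```python
import numpy as np

def fit_global_motion(positions, vectors):
    """Least-squares fit of a 4-parameter global motion model to block
    motion vectors: vx = a*x - r*y + tx, vy = r*x + a*y + ty,
    where zoom s = 1 + a, r is a small rotation, (tx, ty) is the pan."""
    A, b = [], []
    for (x, y), (vx, vy) in zip(positions, vectors):
        A.append([x, -y, 1, 0]); b.append(vx)
        A.append([y,  x, 0, 1]); b.append(vy)
    (a, r, tx, ty), *_ = np.linalg.lstsq(np.asarray(A, float),
                                         np.asarray(b, float), rcond=None)
    return a, r, tx, ty

def local_motion(positions, vectors, params):
    """Subtract the fitted global (camera) motion from each block's motion
    vector, leaving the local object motion."""
    a, r, tx, ty = params
    return [(vx - (a * x - r * y + tx), vy - (r * x + a * y + ty))
            for (x, y), (vx, vy) in zip(positions, vectors)]
```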

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.133-148 / 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon is the significant increase in unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of other information are expressed most clearly in text data such as news, reports, papers, and articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics; for example, they not only use text documents as input data, but also rely on many natural language processing techniques such as filtering and parsing. Opinion mining is therefore usually recognized as a sub-concept of text mining, and in many cases the two terms are used interchangeably in the literature. Suppose the purpose of a certain classification analysis is to predict the positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case; if we instead observe that the target of the analysis is a positive or negative opinion, it can be regarded as a typical example of opinion mining. In other words, both methods are available for opinion classification, so a precise definition of each is needed to distinguish them. In this paper, we found it very difficult to distinguish the two methods clearly with respect to the purpose of analysis and the type of results; we conclude that the most definitive criterion for distinguishing text mining from opinion mining is whether an analysis utilizes a sentiment lexicon. We first established two prediction models, one based on opinion mining and the other on text mining, then compared the main processes used by the two models as well as their prediction accuracy on 2,000 movie reviews. The results revealed that the opinion mining model showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining model, the prediction accuracy for documents with strong certainty was higher than for documents with weak certainty. Above all, opinion mining has the meaningful advantage that it can reduce learning time dramatically, because a sentiment lexicon generated once can be reused in a similar application domain, and the classification results can be clearly explained in terms of the lexicon. This study has two limitations. First, the experimental results cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification; in future research, the two methods should be evaluated more precisely through intensive experiments.
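A toy illustration of the lexicon-based side of the comparison: token scores from a sentiment lexicon are summed and the sign decides the opinion. The lexicon below is hand-made for illustration, not the one constructed in the study.

```python
# Toy lexicon: illustrative only, not the sentiment lexicon built in the study.
SENTIMENT_LEXICON = {"good": 1, "great": 2, "enjoyable": 1,
                     "bad": -1, "boring": -2, "awful": -2}

def classify(review: str) -> str:
    """Sum lexicon scores over tokens; the sign decides the opinion."""
    score = sum(SENTIMENT_LEXICON.get(tok.strip(".,!?").lower(), 0)
                for tok in review.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("A great and enjoyable movie"))   # positive
print(classify("Boring plot and awful acting"))  # negative
```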