• Title/Summary/Keyword: discrete systems

Search Results: 1,856

Concept-based Detection of Functional Modules in Protein Interaction Networks (단백질 상호작용 네트워크에서의 개념 기반 기능 모듈 탐색 기법)

  • Park, Jong-Min;Choi, Jae-Hun;Park, Soo-Jun;Yang, Jae-Dong
    • Journal of KIISE: Computer Systems and Theory / v.34 no.10 / pp.474-492 / 2007
  • In a protein interaction network there are many meaningful functional modules, each involving several protein interactions that perform discrete functions; pathways and protein complexes are examples of such functional modules. In this paper, we propose a new concept-based method for detecting functional modules. A conceptual functional module, or concept module for short, is introduced to match functional modules by taking them as its instances. It is defined by a corresponding rule composed of triples and operators between the triples: the triples represent conceptual relations that reify the protein interactions of a module, and the operators specify the structure of the module over those relations. Furthermore, users can define a composite concept module with a counterpart rule that is, in turn, defined in terms of predefined rules. The concept module makes it possible to detect functional modules that are conceptually similar to, as well as structurally identical to, users' queries. The rules are managed in XML format so that they can easily be applied to networks of other species. In this paper, we also provide a visualized environment for intuitively describing rules with complex structure.
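
The rule-based matching idea above can be pictured with a small sketch. This is not the authors' implementation; the graph, concept annotations, relation names, and single-triple rule below are illustrative assumptions, and the composite operators and XML rule format are omitted.

```python
# Hypothetical sketch: matching a concept-module rule, expressed as triples
# (subject_concept, relation, object_concept), against a protein interaction
# network annotated with concepts. Names and rule format are illustrative.
import networkx as nx

# Toy protein interaction network; each protein carries a concept annotation.
G = nx.Graph()
G.add_node("P1", concept="kinase")
G.add_node("P2", concept="transcription_factor")
G.add_node("P3", concept="kinase")
G.add_edge("P1", "P2", relation="phosphorylates")
G.add_edge("P3", "P2", relation="binds")

# A concept-module rule as a list of triples; all triples must be instantiated.
rule = [("kinase", "phosphorylates", "transcription_factor")]

def match_rule(graph, triples):
    """Return interactions that instantiate each conceptual triple."""
    matches = []
    for s_concept, relation, o_concept in triples:
        for u, v, data in graph.edges(data=True):
            if data.get("relation") != relation:
                continue
            if graph.nodes[u]["concept"] == s_concept and graph.nodes[v]["concept"] == o_concept:
                matches.append((s_concept, relation, o_concept, (u, v)))
            elif graph.nodes[v]["concept"] == s_concept and graph.nodes[u]["concept"] == o_concept:
                matches.append((s_concept, relation, o_concept, (v, u)))
    return matches

print(match_rule(G, rule))  # [('kinase', 'phosphorylates', 'transcription_factor', ('P1', 'P2'))]
```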

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically yields better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied to many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of the filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a particular theoretical or statistical criterion, and new, versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially for complex financial markets. Second, by comparing the results of different filtering techniques, we present the filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal that is useful in forecasting models, especially well-known neural network models.
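
As a rough illustration of the multi-scale decomposition and thresholding discussed above, the sketch below denoises a synthetic series with PyWavelets; the series, wavelet, decomposition level, and threshold value are illustrative choices, not the paper's configuration.

```python
# A sketch of wavelet multi-scale decomposition and soft thresholding
# using PyWavelets on a synthetic noisy series.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
series = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)  # noisy "price" series

coeffs = pywt.wavedec(series, "db4", level=4)       # multi-scale decomposition

# Soft-threshold the detail coefficients; if the threshold is too small the
# estimator overfits the noise, if too large it underfits the signal.
threshold = 0.5                                      # arbitrary value for illustration
filtered = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]

denoised = pywt.waverec(filtered, "db4")             # reconstructed, filtered series
```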


Chatbot Design Method Using Hybrid Word Vector Expression Model Based on Real Telemarketing Data

  • Zhang, Jie;Zhang, Jianing;Ma, Shuhao;Yang, Jie;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1400-1418 / 2020
  • In the development of commercial promotion, the chatbot is known as one significant application of natural language processing (NLP). Conventional design methods use the bag-of-words (BOW) model alone, based on Google databases and other online corpora. For one thing, in the bag-of-words model the vectors are unrelated to one another: even though this method is friendly to discrete features, it does not help the machine understand continuous statements, because the connections between words are lost in the encoded word vector. For another, existing methods are tested on state-of-the-art online corpora but are hard to apply to real applications such as telemarketing data. In this paper, we propose an improved chatbot design method using a hybrid of the bag-of-words model and the skip-gram model, based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and data classification on the constructed corpus. Second, the word representation adopts a hybrid of the bag-of-words and skip-gram models. The skip-gram model maps synonyms to nearby points in the vector space, so the correlation between words is expressed and the amount of information contained in the word vector is increased, making up for the shortcomings of using the bag-of-words model alone. Third, we use the term frequency-inverse document frequency (TF-IDF) weighting method to increase the weight of key words, and then output the final word representation. Finally, the answer is produced using a hybrid of a retrieval model and a generative model. The retrieval model can accurately answer questions within the field; the generative model supplements it for open-domain questions, where the final reply is completed by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector expression model can improve the accuracy of responses and that the whole system can communicate with humans.
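
A compact sketch of the hybrid word-representation idea described above: skip-gram vectors capture word similarity, while TF-IDF-style weights emphasize keywords when averaging them into a sentence vector. The toy corpus, model parameters, and the exact weighting scheme are illustrative assumptions, not the paper's pipeline.

```python
# Hybrid representation sketch: TF-IDF keyword weights combined with dense
# skip-gram word vectors to form a weighted sentence vector.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "i would like to know the loan interest rate",
    "what is the repayment period of the loan",
    "please tell me the interest rate",
]
tokenized = [sentence.split() for sentence in corpus]

# Skip-gram model (sg=1) maps related words to nearby vectors.
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# TF-IDF gives each word a keyword weight within the corpus.
tfidf = TfidfVectorizer()
tfidf.fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def sentence_vector(tokens):
    """IDF-weighted average of skip-gram word vectors (illustrative weighting)."""
    weights = np.array([idf.get(tok, 1.0) for tok in tokens])
    vectors = np.array([w2v.wv[tok] for tok in tokens])
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

query = sentence_vector("what is the loan interest rate".split())
```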

Effects of variety, region and season on near infrared reflectance spectroscopic analysis of quality parameters in red wine grapes

  • Esler, Michael B.;Gishen, Mark;Francis, I.Leigh;Dambergs, Robert G.;Kambouris, Ambrosias;Cynkar, Wies U.;Boehm, David R.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1523-1523 / 2001
  • The wine industry requires practical methods for objectively measuring the composition of red wine grapes, both on the vine, to determine the optimal harvest time, and after harvest, for efficient allocation to winery process streams for particular red wine products and to determine payments to contract grape growers. To be practical for industry application, these methods must be rapid, inexpensive, and accurate. In most cases this restricts the available analyses to measurement of TSS (total soluble solids, predominantly sugars) by refractometry and of pH by electropotentiometry. These two parameters, however, do not provide a comprehensive compositional characterization for the purpose of winemaking. The concentration of anthocyanin pigment in red wine grapes is an accepted indicator of potential wine quality and price. However, routine analysis for total anthocyanins is not considered a practical option by the wider wine industry because of the high cost and slow turnaround time of this multi-step wet-chemical laboratory analysis. Recent work by this group [1,2] has established the capability of near infrared (NIR) spectroscopy to provide rapid, accurate, and simultaneous measurement of total anthocyanins, TSS, and pH in red wine grapes. The analyses may be carried out equally well using either research-grade scanning spectrometers or much simpler, reduced-spectral-range portable diode-array instrumentation. We have recently expanded on this work by collecting thousands of red wine grape samples in Australia. The sample set spans two vintages (1999 and 2000), five distinct geographical winegrowing regions, and the three main red wine grape varieties used in Australia (Cabernet Sauvignon, Shiraz and Merlot). Homogenized grape samples were scanned in diffuse reflectance mode on a FOSS NIRSystems 6500 spectrometer and subjected to laboratory analysis by the traditional methods for total anthocyanins, TSS, and pH. We report here an analysis of the correlations between the NIR spectra and the laboratory data using standard chemometric algorithms within The Unscrambler software package. In particular, various subsets of the total data set are considered in turn to elucidate the effects of vintage, geographical area, and grape variety on the measurement of grape composition by NIR spectroscopy. The relative ability of discrete calibrations to predict within and across these differences is considered. The results are then used to propose an optimal calibration strategy for red wine grape analysis.
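
The chemometric step described above (relating NIR spectra to laboratory values) is typically a latent-variable regression; the sketch below uses partial least squares from scikit-learn on synthetic placeholder data, standing in for the calibrations the authors built in The Unscrambler. The number of latent variables and the data are assumptions.

```python
# PLS calibration sketch: NIR spectra (X) regressed against a laboratory
# quality parameter (y), e.g. total anthocyanins. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 200, 300                 # e.g. reflectance at 300 wavelengths
spectra = rng.normal(size=(n_samples, n_wavelengths))
anthocyanin = 2.0 * spectra[:, 50] + spectra[:, 120] + rng.normal(scale=0.1, size=n_samples)

pls = PLSRegression(n_components=8)                 # number of latent variables is a guess
pls.fit(spectra, anthocyanin)
predicted = pls.predict(spectra).ravel()

# Root-mean-square error, a common figure of merit for NIR calibrations.
rmse = np.sqrt(np.mean((predicted - anthocyanin) ** 2))
```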


Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.03a / pp.175-186 / 1999
  • Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically yields better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. To date, studies related to wavelet analysis have increasingly been applied to many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition for financial time-series forecasting, and apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of the filter type and filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a particular theoretical or statistical criterion, and new, versatile techniques have recently been introduced for this problem. Our study first analyzes thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially for complex financial markets. Second, by comparing the results of different filtering techniques, we present the filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints concerning the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal that is useful in forecasting models, especially well-known neural network models.
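
One standard, principled rule for the threshold choice that this abstract highlights is the universal (VisuShrink) threshold, sigma * sqrt(2 ln n), with the noise level sigma estimated from the finest-scale detail coefficients. The helper below is a textbook illustration, not the criterion adopted in the paper.

```python
# Universal (VisuShrink) threshold estimate for wavelet shrinkage.
import numpy as np
import pywt

def universal_threshold(series, wavelet="db4", level=4):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    detail = coeffs[-1]                            # finest-scale detail coefficients
    sigma = np.median(np.abs(detail)) / 0.6745     # robust noise-level estimate (MAD)
    return sigma * np.sqrt(2.0 * np.log(len(series)))

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0.0, 6.28, 512)) + 0.3 * rng.standard_normal(512)
thr = universal_threshold(noisy)                   # pass thr to pywt.threshold(..., mode="soft")
```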


Optimal-synchronous Parallel Simulation for Large-scale Sensor Network (대규모 센서 네트워크를 위한 최적-동기식 병렬 시뮬레이션)

  • Kim, Bang-Hyun;Kim, Jong-Hyun
    • Journal of KIISE: Computer Systems and Theory / v.35 no.5 / pp.199-212 / 2008
  • Software simulation has been widely used for the design and application development of large-scale wireless sensor networks. The degree of detail of the simulation must be high to verify the behavior of the network and to estimate the execution time and power consumption of an application program as accurately as possible. But as the degree of detail becomes higher, the simulation time increases; moreover, as the number of sensor nodes increases, the time tends to become extremely long. We propose an optimal-synchronous parallel discrete-event simulation method to shorten the simulation time for a large-scale sensor network. In this method, sensor nodes are partitioned into subsets, and each PC, interconnected with the others through a network, is in charge of simulating one of the subsets. Results of experiments using the parallel simulator developed in this study show that, for a large number of sensor nodes, the speedup tends to approach the square of the number of PCs participating in the simulation. In such a case, the ratio of the overhead due to parallel simulation to the total simulation time is so small that it can be ignored. Therefore, as long as PCs are available, the number of sensor nodes that can be simulated is not limited. In addition, our parallel simulation environment can be constructed easily and at low cost, because PCs interconnected through a LAN are used without modification.
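
A toy sketch of the synchronous parallel discrete-event idea described above: sensor nodes are partitioned, each partition processes its local events up to a shared barrier time, and cross-partition messages are exchanged before the next window. It runs sequentially here; the node behavior, window length, and link delay are illustrative assumptions, not the paper's simulator.

```python
# Synchronous (barrier-based) parallel discrete-event simulation sketch.
import heapq

WINDOW = 1.0        # barrier interval; illustrative value
LINK_DELAY = 0.2    # message delay between sensor nodes; illustrative value

class Partition:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.events = []                             # min-heap of (time, node, payload)

    def post(self, time, node, payload):
        heapq.heappush(self.events, (time, node, payload))

    def run_until(self, barrier, partitions):
        """Process local events with time < barrier; forward sends to the owning partition."""
        while self.events and self.events[0][0] < barrier:
            time, node, payload = heapq.heappop(self.events)
            dest = node + 1                          # toy behavior: forward to the next node
            target = next((p for p in partitions if dest in p.nodes), None)
            if target is not None and payload > 0:
                target.post(time + LINK_DELAY, dest, payload - 1)

def simulate(partitions, end_time):
    barrier = WINDOW
    while barrier <= end_time:
        for p in partitions:                         # in the real system, one PC per partition
            p.run_until(barrier, partitions)
        barrier += WINDOW                            # global synchronization point

parts = [Partition(range(0, 50)), Partition(range(50, 100))]
parts[0].post(0.0, 0, payload=5)
simulate(parts, end_time=10.0)
```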

Feature Vector Extraction and Classification Performance Comparison According to Various Settings of Classifiers for Fault Detection and Classification of Induction Motor (유도 전동기의 고장 검출 및 분류를 위한 특징 벡터 추출과 분류기의 다양한 설정에 따른 분류 성능 비교)

  • Kang, Myeong-Su;Nguyen, Thu-Ngoc;Kim, Yong-Min;Kim, Cheol-Hong;Kim, Jong-Myon
    • The Journal of the Acoustical Society of Korea / v.30 no.8 / pp.446-460 / 2011
  • The use of induction motors has recently been increasing with automation in the aeronautical and automotive industries, where they play a significant role. This has motivated many researchers to study fault detection and classification systems for induction motors in order to minimize the economic damage caused by their faults. For this reason, this paper proposes feature vector extraction methods based on STE (short-time energy) + SVD (singular value decomposition) and DCT (discrete cosine transform) + SVD techniques to detect and diagnose induction motor faults early, and classifies the faults into different types by using the extracted features as inputs to a BPNN (back-propagation neural network) and a multi-layer SVM (support vector machine). When a BPNN and a multi-layer SVM are used as classifiers for fault classification, many settings affect classification performance: the number of input layers, the number of hidden layers, and the learning algorithm for the BPNN, and the standard deviation of the Gaussian radial basis function for the multi-layer SVM. Therefore, this paper performs quantitative simulations to find settings for these classifiers that yield higher classification performance than others.
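
One plausible reading of the STE+SVD and DCT+SVD feature extraction described above is sketched below: frame the signal, form an energy (or DCT-coefficient) matrix, and keep the leading singular values as a compact feature vector for a downstream classifier. Frame sizes, dimensions, and the exact arrangement are assumptions, not the paper's configuration.

```python
# Frame-based STE/DCT + SVD feature extraction sketch for a 1-D motor signal.
import numpy as np
from scipy.fft import dct

def frame_signal(x, frame_len=256, hop=128):
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

def ste_svd_features(x, n_features=8):
    frames = frame_signal(x)
    energy = frames ** 2                         # short-time energy of each frame's samples
    s = np.linalg.svd(energy, compute_uv=False)
    return s[:n_features]                        # leading singular values as features

def dct_svd_features(x, n_features=8):
    frames = frame_signal(x)
    coeffs = dct(frames, axis=1, norm="ortho")   # DCT of each frame
    s = np.linalg.svd(coeffs, compute_uv=False)
    return s[:n_features]

rng = np.random.default_rng(2)
signal = rng.standard_normal(8192)               # placeholder for a vibration/current record
features = np.concatenate([ste_svd_features(signal), dct_svd_features(signal)])
# `features` would then be fed to a BPNN or multi-layer SVM classifier.
```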

Principles and Current Trends of Neural Decoding (뉴럴 디코딩의 원리와 최신 연구 동향 소개)

  • Kim, Kwangsoo;Ahn, Jungryul;Cha, Seongkwang;Koo, Kyo-in;Goo, Yong Sook
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.342-351 / 2017
  • Neural decoding is a procedure that uses the spike trains fired by neurons to estimate features of the original stimulus. This is a fundamental step toward understanding how neurons talk to each other and, ultimately, how brains manage information. In this paper, the strategies of neural decoding are classified into three methodologies, each of which is explained: rate decoding, temporal decoding, and population decoding. Rate decoding is the earliest and simplest decoding method, in which the stimulus is reconstructed from the number of spikes in a given time window (i.e., the spike rate). Since the spike count is a discrete number, the spike rate itself is often quantized rather than continuous; therefore, if the stimulus is not static and simple, rate decoding may not provide a good estimate of the stimulus. Temporal decoding is the method in which the stimulus is reconstructed from the timing information of spike firing. It can be useful even for rapidly changing stimuli, and our sensory system is believed to use a temporal rather than a rate decoding strategy. Since the use of large numbers of neurons is one of the operating principles of most nervous systems, population decoding has advantages such as the reduction of uncertainty due to neuronal variability and the ability to represent multiple stimulus attributes simultaneously. In this paper, the three decoding methods are introduced, the use of information theory in neural decoding is also described, and finally machine-learning-based algorithms for neural decoding are presented.
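
Rate decoding as described above can be illustrated in a few lines: bin spikes into counts and fit a linear readout from the counts back to the stimulus. The simulated population and the least-squares decoder are illustrative assumptions, not the methods reviewed in the paper.

```python
# Rate-decoding sketch: spike counts per time bin -> linear stimulus estimate.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.05, 20.0                              # 50 ms bins, 20 s of data
t = np.arange(0.0, T, dt)
stimulus = np.sin(2 * np.pi * 0.2 * t)          # slowly varying stimulus

# Simulate a small population whose firing rate depends linearly on the stimulus.
n_neurons = 10
gains = rng.uniform(5.0, 15.0, n_neurons)
rates = np.clip(20.0 + np.outer(stimulus, gains), 0.0, None)   # spikes/s
counts = rng.poisson(rates * dt)                # spike counts per bin (discrete!)

# Rate decoding: least-squares linear readout from counts to stimulus.
X = np.column_stack([counts, np.ones(len(t))])
weights, *_ = np.linalg.lstsq(X, stimulus, rcond=None)
estimate = X @ weights                          # reconstructed stimulus
```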

Dynamic Instability and Multi-step Taylor Series Analysis for Space Truss System under Step Excitation (스텝 하중을 받는 공간 트러스 시스템의 멀티스텝 테일러 급수 해석과 동적 불안정)

  • Lee, Seung-Jae;Shon, Su-Deok
    • Journal of Korean Society of Steel Construction / v.24 no.3 / pp.289-299 / 2012
  • The goal of this paper is to apply the multi-step Taylor method to a space truss, a non-linear discrete dynamic system, and to analyze the non-linear dynamic response and unstable behavior of the structure. An accurate solution based on an analytical approach is needed to deal with the inverse problem, that is, the dynamic instability of a space truss, because the governing equation has geometric non-linearity. Therefore, the governing equations of motion of the space truss were formulated considering this non-linearity, and an accurate analytical solution could be obtained using the Taylor method. To verify the accuracy of the applied method, an SDOF model was adopted, and the Taylor-method analysis was compared with the result of the 4th-order Runge-Kutta method. Moreover, the dynamic instability and buckling characteristics of the adopted model under step excitation were investigated. The results of the two analysis methods matched well, and the investigation shows that the dynamic response and the attractors in phase space can also delineate dynamic snapping under step excitation, and that damping affects the displacement of the truss. The analysis shows that dynamic buckling occurs at approximately 77% and 83% of the static buckling load in the undamped and damped systems, respectively.
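
A schematic single-degree-of-freedom analogue of the analysis above: a nonlinear (Duffing-type) oscillator under a step load, integrated with the 4th-order Runge-Kutta scheme that the paper uses as its reference solution. The coefficients, damping, and load level are illustrative, not the paper's truss model; raising the step load toward the static limit is what triggers the dynamic snapping discussed in the abstract.

```python
# SDOF nonlinear oscillator under a step load, integrated with classical RK4.
import numpy as np

def rhs(t, y, zeta=0.02, klin=1.0, kcub=-0.3, p0=0.3):
    """y = [displacement, velocity]; step load p0 applied for all t >= 0."""
    u, v = y
    return np.array([v, p0 - 2.0 * zeta * v - klin * u - kcub * u ** 3])

def rk4(f, y0, t_end, h=1e-3):
    steps = int(t_end / h)
    t, y = 0.0, np.array(y0, dtype=float)
    history = np.empty((steps + 1, 2))
    history[0] = y
    for i in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        history[i + 1] = y
    return history

response = rk4(rhs, y0=[0.0, 0.0], t_end=50.0)   # displacement/velocity time history
```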

Quality of Working Life (직장생활에 대한 새로운 인식)

  • 김영환
    • Journal of Korean Society of Industrial and Systems Engineering / v.4 no.4 / pp.43-61 / 1981
  • Interest in the quality of working life is spreading rapidly, and the phrase has entered the popular vocabulary. That this should be so is probably due in large measure to changes in the values of society, nowadays accelerated as never before by the concerns and demands of younger people. But however topical the concept has become, there is very little agreement on its definition. Rather, the term appears to have become a kind of depository for a variety of sometimes contradictory meanings attributed to it by different groups. A list of all the elements it is held to cover would include availability and security of employment, adequate income, safe and pleasant physical working conditions, reasonable hours of work, equitable treatment and democracy in the workplace, the possibility of self-development, control over one's work, a sense of pride in craftsmanship or product, wider career choices, and flexibility in matters such as the time of starting work, the number of working days in the week, job sharing, and so on; altogether an array that encompasses a variety of traditional aspirations and many new ones reflecting the entry into the post-industrial era. The term "quality of working life" was introduced by Professor Louis E. Davis and his colleagues in the late 1960s to call attention to the prevailing and needlessly poor quality of life at the workplace. In their usage it referred to the quality of the relationship between the worker and his working environment as a whole, and was intended to emphasize the human dimension so often forgotten among the technical and economic factors in job design. Treating workers as if they were elements or cogs in the production process is not only an affront to the dignity of human life, but is also a serious underestimation of the human capabilities needed to operate more advanced technologies. When tasks demand high levels of vigilance, technical problem-solving skills, self-initiated behavior, and social and communication skills, it is imperative that our concepts of man be of requisite complexity. Our aim is not just to protect workers' life and health but to give them an informal interest in their job and the opportunity to express their views and exercise control over everything that affects their working life. Certainly, so far as his work is concerned, a man must feel better protected, but he must also have a greater feeling of freedom and responsibility. Something parallel but wholly different is happening in Europe: industrial democracy. What has happened in Europe has been discrete, fixed, finalized, and legalized. Developing countries driving toward industrialization, like the R.O.K., will have to bear in mind this human complexity in designing work and its environment. Increasing attention is needed to the contradiction between autocratic rule at the workplace and democratic rights in society.
