• Title/Summary/Keyword: window sequence

A Study on the Restoration Expert System at Substations (변전소의 사고복구 전문가 시스템의 연구)

  • Lee, Heung-Jae; Lim, Chan-Ho; Yang, Su-Hyun; Park, Young-Moon
    • Proceedings of the KIEE Conference / 1994.11a / pp.39-41 / 1994
  • In this paper, an expert system is proposed to deal with the restoration problem in an unmanned substation, improving the reliability of power supply and the efficiency of power system management. The proposed expert system searches for a switching sequence to restore the blacked-out region using heuristic rules and displays the operation sequence. The graphics of the expert system are implemented with the X Window System on UNIX. The proposed expert system shows promising results for future application.
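
The abstract states only that heuristic rules drive the search for a switching sequence. As a rough illustration of that kind of search, the Python sketch below re-energizes blacked-out buses by closing open switches outward from the energized ones; the toy topology, the switch representation, and the nearest-first heuristic are assumptions, not the paper's rule base.

```python
# Rough sketch of a heuristic switching-sequence search over a toy substation
# model (buses connected by open/closed switches); not the paper's rule base.
from collections import deque

def restoration_sequence(energized, blacked_out, switches):
    """switches: dict {(bus_a, bus_b): 'open' or 'closed'}.
    Returns a list of close operations that re-energize blacked-out buses."""
    sequence, live = [], set(energized)
    frontier = deque(energized)
    while frontier:
        bus = frontier.popleft()
        for (a, b), state in switches.items():
            if state == 'open' and bus in (a, b):
                other = b if bus == a else a
                if other in blacked_out and other not in live:
                    sequence.append(f"close switch {a}-{b}")   # nearest open switch first
                    live.add(other)
                    frontier.append(other)
    return sequence

print(restoration_sequence({'B1'}, {'B2', 'B3'},
                           {('B1', 'B2'): 'open', ('B2', 'B3'): 'open'}))
# ['close switch B1-B2', 'close switch B2-B3']
```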

Partial Go back N Scheme for Occupancy Control of Reordering Buffer in 3GPP ARQ (3GPP ARQ에서 재정렬 버퍼의 점유량 조절을 위한 부분 Go back N 방식)

  • Shin, Woo-Cheol; Park, Jin-Kyung; Ha, Jun; Choi, Cheon-Won
    • Proceedings of the IEEK Conference / 2003.11c / pp.302-305 / 2003
  • The 3GPP RLC protocol specification adopted an error control scheme based on selective repeat ARQ. In the 3GPP ARQ, distinct windows are provided at the transmitting and receiving stations so that those stations are prohibited from sending or receiving data PDUs outside the window. An increase in window size enhances delay performance. Such an increase, however, raises the occupancy of the re-ordering buffer, which results in a long re-ordering time. Aiming at suppressing the occupancy of the re-ordering buffer, we propose a partial go back N scheme in this paper. In the partial go back N scheme, the receiving station regards all data PDUs between the first (lowest sequence numbered) error-detected PDU and the last (highest sequence numbered) error-detected PDU as error-detected. By employing the partial go back N scheme, the occupancy of the re-ordering buffer is clearly reduced, while the delay and throughput performance may be degraded due to the remaining properties of go back N. We thus consider the peak occupancy of the re-ordering buffer, the mean sojourn time at the re-ordering buffer, the mean delay time, and the maximum throughput as measures to evaluate the proposed scheme, and investigate such performance using a simulation method. From numerical examples, we observe a trade-off among the performance measures and conclude that the partial go back N scheme is able to effectively reduce the occupancy of the re-ordering buffer.
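
To make the receiver-side behavior concrete, the hedged Python sketch below treats every PDU between the first and the last error-detected PDU as error-detected, so only PDUs outside that span remain for re-ordering; the sequence numbering and data structures are illustrative and do not follow the 3GPP RLC formats.

```python
# Hedged receiver-side sketch of the partial go back N idea described above:
# every PDU between the first and the last error-detected PDU is treated as
# error-detected, so only PDUs outside that span stay in the re-ordering
# buffer. Illustrative only; not the 3GPP RLC PDU formats.
def partial_go_back_n(received_ok, window):
    """received_ok: correctly received sequence numbers within `window`."""
    errors = sorted(set(window) - received_ok)
    if not errors:
        return set(window), set()                     # nothing to retransmit
    lo, hi = errors[0], errors[-1]
    retransmit = {sn for sn in window if lo <= sn <= hi}
    kept = set(window) - retransmit                   # deliverable or re-ordered PDUs
    return kept, retransmit

kept, retransmit = partial_go_back_n({10, 11, 13, 15, 18, 19}, range(10, 20))
print(sorted(kept))        # [10, 11, 18, 19]
print(sorted(retransmit))  # [12, 13, 14, 15, 16, 17]
```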

An Efficient Subsequence Matching Method Based on Index Interpolation (인덱스 보간법에 기반한 효율적인 서브시퀀스 매칭 기법)

  • Loh Woong-Kee; Kim Sang-Wook
    • The KIPS Transactions: Part D / v.12D no.3 s.99 / pp.345-354 / 2005
  • Subsequence matching is one of the most important operations in the field of data mining. The existing subsequence matching algorithms use only one index, and their performance degrades as the difference increases between the length of a query sequence and the size of the windows, which are subsequences of a fixed length extracted from data sequences to construct the index. In this paper, we propose a new subsequence matching method based on index interpolation to overcome this problem. An index interpolation method constructs two or more indexes and performs searching by selecting the most appropriate index among them according to the given query sequence length. We first examine the performance trend with respect to the difference between the query sequence length and the window size through preliminary experiments, and formulate a search cost model that reflects the distribution of query sequence lengths from the viewpoint of physical database design. Next, we propose a new subsequence matching method based on index interpolation to improve search performance. We also present an algorithm, based on the search cost formula mentioned above, for constructing optimal indexes that yield better search performance. Finally, we verify the superiority of the proposed method through a series of experiments using real and synthesized data sets.
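
The selection step of index interpolation can be pictured as below: among indexes built with different window sizes, pick the one whose window is closest to, but not longer than, the query sequence. The concrete window sizes and the selection constraint are assumptions drawn from common window-based subsequence matching, not details taken from the paper.

```python
# Illustrative sketch of the index-selection step of index interpolation:
# among several indexes built with different window sizes, pick the one whose
# window size is closest to (but not larger than) the query length. Window
# sizes and the selection rule here are assumptions, not the paper's.
def select_index(indexes, query_len):
    """indexes: dict {window_size: index_object}."""
    usable = [w for w in indexes if w <= query_len]
    if not usable:
        raise ValueError("query shorter than every indexed window")
    return indexes[max(usable)]   # smallest length difference -> least performance loss

indexes = {16: "idx_w16", 64: "idx_w64", 256: "idx_w256"}
print(select_index(indexes, 100))   # idx_w64
```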

A DNA Index Structure using Frequency and Position Information of Genetic Alphabet (염기문자의 빈도와 위치정보를 이용한 DNA 인덱스구조)

  • Kim Woo-Cheol; Park Sang-Hyun; Won Jung-Im; Kim Sang-Wook; Yoon Jee-Hee
    • Journal of KIISE: Databases / v.32 no.3 / pp.263-275 / 2005
  • In a large DNA database, indexing techniques are widely used for rapid approximate sequence searching. However, most indexing techniques require a space larger than the original databases and also suffer from difficulties in seamless integration with a DBMS. In this paper, we suggest a space-efficient, disk-based indexing and query processing algorithm for approximate DNA sequence searching, especially exact match queries, wildcard match queries, and k-mismatch queries. Our indexing method places a sliding window at every possible location of a DNA sequence and extracts its signature by considering the occurrence frequency of each nucleotide. It then stores the set of signatures using a multi-dimensional index such as the R*-tree. In particular, by assigning a weight to each position of a window, it prevents signatures from being concentrated around a few spots in the index space. Our query processing algorithm converts a query sequence into a multi-dimensional rectangle and searches the index for the signatures that overlap the rectangle. Experiments with real biological data sets revealed that the proposed method is at least three times, twice, and several orders of magnitude faster than the suffix-tree-based method in exact match, wildcard match, and k-mismatch queries, respectively.
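
A minimal sketch of the signature construction is given below, assuming a simple linear position weighting (the abstract states only that window positions are weighted); each window is mapped to a 4-dimensional point of weighted nucleotide counts that could then be inserted into a multi-dimensional index such as an R*-tree.

```python
# Minimal sketch of the signature step described above: slide a window over
# the sequence and map each window to a 4-dimensional vector of position-
# weighted nucleotide counts. The linear weight function is an assumption.
def window_signatures(seq, w):
    weights = [1 + i / (w - 1) for i in range(w)]            # assumed weighting
    sigs = []
    for start in range(len(seq) - w + 1):
        sig = {base: 0.0 for base in "ACGT"}
        for i, base in enumerate(seq[start:start + w]):
            sig[base] += weights[i]
        sigs.append((start, tuple(sig[b] for b in "ACGT")))  # point for an R*-tree
    return sigs

for start, sig in window_signatures("ACGTACGA", 4):
    print(start, sig)
```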

An Efficient Approach for Single-Pass Mining of Web Traversal Sequences (단일 스캔을 통한 웹 방문 패턴의 탐색 기법)

  • Kim, Nak-Min; Jeong, Byeong-Soo; Ahmed, Chowdhury Farhan
    • Journal of KIISE: Databases / v.37 no.5 / pp.221-227 / 2010
  • Web access sequence mining can discover the frequently accessed web pages pursued by users. Utility-based web access sequence mining handles non-binary occurrences of web pages and extracts more useful knowledge from web logs. However, the existing utility-based web access sequence mining approach considers web access sequences from the very beginning of the web logs and is therefore not suitable for mining data streams, where the volume of data is huge and unbounded. At the same time, it cannot adaptively find recent changes of knowledge in data streams. The existing approach has several other limitations, such as considering only forward references of web access sequences, relying on the level-wise candidate generation-and-test methodology, and requiring several database scans. In this paper, we propose a new approach for high-utility web access sequence mining over data streams with a sliding window method. Our approach can not only handle large-scale data but also efficiently discover recently generated information from data streams. Moreover, it overcomes the other limitations of the existing algorithm over data streams. Extensive performance analyses show that our approach is very efficient and outperforms the existing algorithm.
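
The sliding-window setting can be sketched as below: only the most recent sessions are retained, and sequences whose accumulated utility in the window reaches a threshold are reported. Whole session sequences stand in for the mined subsequences, and the utility values are invented for the example; the paper's actual mining and pruning steps are not reproduced.

```python
# Hedged sketch of the sliding-window stream setting described above: keep
# only the last `window_size` web sessions and report access sequences whose
# accumulated utility (e.g. dwell time or purchase value) meets a threshold.
from collections import deque, defaultdict

class SlidingWindowUtilityMiner:
    def __init__(self, window_size, min_utility):
        self.window = deque(maxlen=window_size)   # old sessions drop out automatically
        self.min_utility = min_utility

    def add_session(self, pages, utilities):
        self.window.append((tuple(pages), sum(utilities)))

    def high_utility_sequences(self):
        totals = defaultdict(float)
        for seq, util in self.window:
            totals[seq] += util
        return {seq: u for seq, u in totals.items() if u >= self.min_utility}

miner = SlidingWindowUtilityMiner(window_size=3, min_utility=10)
miner.add_session(["home", "search", "item"], [1, 2, 8])
miner.add_session(["home", "cart"], [1, 3])
miner.add_session(["home", "search", "item"], [1, 2, 9])
print(miner.high_utility_sequences())   # {('home', 'search', 'item'): 23.0}
```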

Extended Forecasts of a Stock Index using Learning Techniques : A Study of Predictive Granularity and Input Diversity

  • Kim, Steven H.; Lee, Dong-Yun
    • Asia Pacific Journal of Information Systems / v.7 no.1 / pp.67-83 / 1997
  • The utility of learning techniques in investment analysis has been demonstrated in many areas, ranging from forecasting individual stocks to entire market indexes. To date, however, the application of artificial intelligence to financial forecasting has focused largely on short predictive horizons. Usually the forecast window is a single period ahead: if the input data involve daily observations, the forecast is for one day ahead; if monthly observations, then a month ahead; and so on. Thus far little work has been conducted on the efficacy of long-term prediction involving multiperiod forecasting. This paper examines the impact of alternative procedures for extended prediction using knowledge discovery techniques. One dimension in the study involves temporal granularity: a single jump from the present period to the end of the forecast window versus a web of short-term forecasts involving a sequence of single-period predictions. Another parameter relates to the numerosity of input variables: a technical approach involving only lagged observations of the target variable versus a fundamental approach involving multiple variables. The dual possibilities along each of the granularity and numerosity dimensions entail a total of four models. These models are first evaluated using neural networks, then compared against a multi-input jump model using case-based reasoning. The computational models are examined in the context of forecasting the S&P 500 index.
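
The two granularity options can be contrasted with the hedged sketch below, in which a generic one-step predictor stands in for the neural networks and case-based reasoner used in the paper: a direct jump to the end of the forecast window versus an iterated chain of single-period forecasts fed back as inputs. The toy drift models are assumptions for illustration only.

```python
# Sketch of the two granularity options contrasted above; the toy drift
# models below are placeholders, not the paper's learning techniques.
def jump_forecast(model_h, history):
    """model_h is trained to predict h periods ahead in a single step."""
    return model_h(history)

def iterated_forecast(model_1, history, h):
    """model_1 predicts one period ahead; roll it forward h times."""
    window = list(history)
    for _ in range(h):
        window.append(model_1(window))
    return window[-1]

model_1 = lambda xs: xs[-1] + (xs[-1] - xs[-2])          # naive one-step drift
model_5 = lambda xs: xs[-1] + 5 * (xs[-1] - xs[-2])      # naive five-step drift
print(iterated_forecast(model_1, [100, 102, 104], h=5))  # 114
print(jump_forecast(model_5, [100, 102, 104]))           # 114
```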

Design of Daylighting Aperture Using Daylight Factor Method and its Evaluation by Distribution of Sky Component (Daylight Factor Method를 이용한 채광창의 설계와 주광율의 직접조도분에 의한 채광창의 평가)

  • Chee, Chol-Kon; Kwon, Young-Hye
    • Proceedings of the KIEE Conference / 1988.11a / pp.210-213 / 1988
  • A new and accurate expression for deriving window area is presented, together with a design sequence that applies the Daylight Factor Method not in its classical point-by-point form but in a lumen-method form, as in artificial lighting design, so that daylight can be considered in the early stage of building design. By accepting the CIE Overcast Sky, the worst state with the lowest sky luminance, as the design condition, the user of a room can count on more available daylight in the room. In the design process, uniformity is checked to ensure reasonably even daylighting by comparing the depth of the room with the computed limiting depth. After these steps, the shape and position of the window are altered, and the Sky Component of the Daylight Factor under an overcast sky (SCo) is computed by composite Simpson multiple integration so that a building designer or analyst can choose the shape and location that best satisfy his or her taste and the purpose of the room.
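
Only the numerical core mentioned in the abstract is sketched below: a two-dimensional composite Simpson rule over the window's angular extent. The integrand shown is a placeholder based on the CIE overcast-sky relative luminance; the full SCo integrand couples that luminance with the window and room geometry, which the abstract does not spell out.

```python
# Numerical core only: a 2-D composite Simpson rule, with a placeholder
# integrand (CIE overcast-sky relative luminance) standing in for the
# geometry-dependent SCo integrand that the abstract does not give.
import math

def simpson(f, a, b, n):                    # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def simpson_2d(f, ax, bx, ay, by, n):
    return simpson(lambda x: simpson(lambda y: f(x, y), ay, by, n), ax, bx, n)

# placeholder integrand: CIE overcast-sky relative luminance (1 + 2 sin(alt)) / 3
sky = lambda azimuth, altitude: (1 + 2 * math.sin(altitude)) / 3
print(simpson_2d(sky, 0, math.pi / 3, 0, math.pi / 4, n=8))
```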

TELE-OPERATIVE SYSTEM FOR BIOPRODUCTION - REMOTE LOCAL IMAGE PROCESSING FOR OBJECT IDENTIFICATION -

  • Kim, S. C.; Hwang, H.; Son, J. E.; Park, D. Y.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11b / pp.300-306 / 2000
  • This paper introduces a new concept of automation for bio-production with a tele-operative system. The proposed system shows a practical and feasible way to automate the volatile bio-production process. Based on this proposition, recognition of the job environment with object identification was performed using a computer vision system. A man-machine interactive hybrid decision-making scheme, which utilizes the concept of tele-operation, was proposed to overcome the limitations of the computer in image processing and feature extraction from complex environment images. Identifying watermelons in an outdoor scene of the cultivation field was selected to realize the proposed concept. Identifying a watermelon in a camera image of the outdoor cultivation field is very difficult because of the ambiguity among stems, leaves, shades, and especially fruits partly covered by leaves or stems. The analog signal of the outdoor image was captured and transmitted wirelessly to the host computer by an RF module. A localized window was formed from the outdoor image by pointing on the touch screen, and then a sequence of algorithms to identify the location and size of the watermelon was performed on the local window image. The effects of the light reflectance of fruits, stems, ground, and leaves were also investigated.
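
The localized-window step can be sketched as below: crop a fixed window around the operator's touch point and estimate the fruit's centroid and extent with a crude intensity threshold. The window size, threshold, and use of NumPy are assumptions; the paper's identification algorithms are not reproduced.

```python
# Hedged sketch of the "localized window" step: crop around the touch point
# and threshold inside the crop. Window size and threshold are assumptions.
import numpy as np

def local_window(image, x, y, half=40):
    h, w = image.shape[:2]
    return image[max(0, y - half):min(h, y + half), max(0, x - half):min(w, x + half)]

def estimate_fruit(win, threshold=90):
    ys, xs = np.nonzero(win < threshold)            # dark pixels assumed to be fruit
    if xs.size == 0:
        return None
    return (xs.mean(), ys.mean()), (int(xs.max() - xs.min() + 1),
                                    int(ys.max() - ys.min() + 1))

img = np.full((200, 200), 200, dtype=np.uint8)      # bright synthetic background
img[80:140, 90:160] = 60                            # dark blob standing in for a fruit
print(estimate_fruit(local_window(img, x=120, y=110)))  # centroid and (width, height)
```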

Dynamically Alternating Power Saving Scheme for IEEE 802.16e Mobile Broadband Wireless Access Systems

  • Chang, Jau-Yang; Lin, Yu-Chen
    • Journal of Communications and Networks / v.14 no.2 / pp.179-187 / 2012
  • Power saving is one of the most important features for extending the lifetime of portable devices in mobile wireless networks. The IEEE 802.16e mobile broadband wireless access system adopts a power saving mechanism with a binary truncated exponential algorithm for determining sleep intervals. When using this standard power saving scheme, there is often a delay before data packets are received at the mobile subscriber station (MSS). In order to extend the lifetime of an MSS, the battery energy must be used efficiently. This paper presents a dynamically alternating sleep interval scheduling algorithm as a solution to the power consumption problem. We take different traffic classes into account and schedule a proper sequence of power saving classes. The window size of the sleep interval is calculated dynamically according to the packet arrival rate, trading off power consumption against packet delay. The method efficiently reduces the listening window size, which leads to increased power saving. The performance of our proposed scheme is compared with that of the standard power saving scheme. Simulation results demonstrate the superior performance of our power saving scheme and its ability to strike an appropriate balance between power saving and packet delay for an MSS in an IEEE 802.16e mobile broadband wireless access system.
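
One way to picture sizing the sleep window from the packet arrival rate is sketched below; the scaling rule and bounds are assumptions for the example and are not the paper's scheduling algorithm or the IEEE 802.16e power-saving-class parameters.

```python
# Illustrative sketch of sizing the sleep window from the packet arrival rate;
# the scaling rule and bounds are assumptions, not the paper's algorithm.
def sleep_window(arrival_rate, t_min=2, t_max=512, target_packets_per_wakeup=4):
    """arrival_rate: expected packets per frame; returns sleep interval in frames."""
    if arrival_rate <= 0:
        return t_max
    frames = target_packets_per_wakeup / arrival_rate   # sleep until ~N packets queue up
    return max(t_min, min(t_max, int(frames)))

for rate in (2.0, 0.25, 0.01):
    print(rate, sleep_window(rate))   # 2.0 -> 2, 0.25 -> 16, 0.01 -> 400
```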

Novel Push-Front Fibonacci Windows Model for Finding Emerging Patterns with Better Completeness and Accuracy

  • Akhriza, Tubagus Mohammad; Ma, Yinghua; Li, Jianhua
    • ETRI Journal / v.40 no.1 / pp.111-121 / 2018
  • To find the emerging patterns (EPs) in streaming transaction data, the stream is first divided into time windows, each containing a number of transactions. Itemsets are generated from the transactions in each window, and the emergence of itemsets is then evaluated between two windows. In the tilted-time windows model (TTWM), it is assumed that people need support data with finer accuracy from the most recent windows, while accepting coarser accuracy from older windows. Therefore, a limited number of array elements is used to maintain all support data, condensing old windows by merging them inside one element. The capacity of the elements that accommodate the windows is modeled using a particular number sequence. However, as new data arrive in a stream, the current array updating mechanisms leave many null elements in the array and cause data incompleteness and inaccuracy problems. Two models derived from the TTWM, the logarithmic TTWM and the Fibonacci windows model, inherit the same problems. This article proposes a novel push-front Fibonacci windows model as a solution, and experiments are conducted to demonstrate its superiority in finding more EPs compared with the other models.
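
A small illustration of the capacity idea is given below: with Fibonacci capacities, a short array of elements can summarize a long run of windows while keeping the most recent windows at the finest granularity. Only the capacity layout is shown; the push-front update rule proposed in the paper is not reproduced.

```python
# Capacity layout only: Fibonacci capacities let a few array elements cover
# many windows, finest granularity at the recent end. The paper's push-front
# update mechanism is not reproduced here.
def fibonacci_capacities(n_elements):
    caps = [1, 1]
    while len(caps) < n_elements:
        caps.append(caps[-1] + caps[-2])
    return caps[:n_elements]

caps = fibonacci_capacities(10)
print(caps)                                                    # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(sum(caps), "windows summarized by", len(caps), "array elements")  # 143 windows by 10 elements
```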