• Title/Summary/Keyword: Time Constraint Applications

On-line Schedulability Check Algorithm for Imprecise Real-time Tasks (부정확한 실시간태스크들을 위한 온라인 스케쥴가능성 검사 알고리즘)

  • Gi-Hyeon Song
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.9
    • /
    • pp.1167-1176
    • /
    • 2002
  • In a (hard) real-time system, every time-critical task must meet its timing constraint, which is typically specified in terms of its deadline. Many computer systems, such as those for open system environments or multimedia services, need an efficient schedulability test for on-line real-time admission control of new jobs. Although various polynomial-time schedulability tests have been proposed, they often fail to decide the schedulability of the system precisely when the system is heavily loaded. Furthermore, most previous studies on on-line real-time schedulability tests concentrate on periodic task applications. Thus, this paper presents an efficient on-line schedulability check algorithm that can predict, before dispatching, the schedulability of an imprecise real-time task system consisting of aperiodic, preemptive tasks when the system is overloaded.
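
As a point of reference for the kind of admission-control test discussed above, the following is a minimal sketch of an exact EDF feasibility check (processor-demand test) for a finite set of aperiodic, preemptive jobs on one processor. It is a generic textbook test, not the paper's algorithm for imprecise tasks, and the Job fields and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Job:
    release: float    # time the aperiodic job arrives
    exec_time: float  # worst-case (or mandatory) execution time
    deadline: float   # absolute deadline

def edf_feasible(jobs):
    """Processor-demand test for preemptive aperiodic jobs on one CPU.

    For every interval [r, d] bounded by a release time and a deadline,
    the total execution demand of jobs fully contained in the interval
    must not exceed the interval length; this is equivalent to EDF
    feasibility for a finite job set.
    """
    releases = {j.release for j in jobs}
    deadlines = {j.deadline for j in jobs}
    for r, d in product(releases, deadlines):
        if r >= d:
            continue
        demand = sum(j.exec_time for j in jobs
                     if j.release >= r and j.deadline <= d)
        if demand > d - r:
            return False          # some job would miss its deadline
    return True

# Admission control: accept a new job only if the augmented set stays schedulable.
current = [Job(0, 2, 5), Job(1, 3, 10)]
new_job = Job(2, 4, 9)
print(edf_feasible(current + [new_job]))   # True for this example
```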

A New Pairwise Key Pre-Distribution Scheme for Wireless Sensor Networks (무선 센서 네트워크를 위한 새로운 키 사전 분배 구조)

  • Kim, Tae-Yeon
    • The KIPS Transactions:PartC
    • /
    • v.16C no.2
    • /
    • pp.183-188
    • /
    • 2009
  • Wireless sensor networks will be broadly deployed in the real world and widely utilized for various applications. A prerequisite for secure communication among the sensor nodes is that the nodes share a session key to bootstrap their trust relationship. The open problems are how to verify the identity of communicating nodes and how to minimize the key information disclosed to the other side during key agreement. However, none of the existing schemes solves both problems completely, owing to various drawbacks. Accordingly, we propose a new pre-distribution scheme with the following merits. First, it supports authentication services. Second, each node can find only the indices of key spaces that it shares with the other side, without revealing information about unshared keys. Lastly, it substantially improves the resilience of the network against node capture. Performance and security analyses show that our scheme is suitable for sensor networks in terms of both performance and security.
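
As background for the shared-index discovery step mentioned above, here is a minimal sketch of classical random key-index pre-distribution: each node is pre-loaded with a random ring of indices from a global pool, and two nodes derive a pairwise key from the indices they both hold. It is not the authors' scheme (which adds authentication and key-space mechanics); the pool size, ring size, and key-derivation choices are illustrative assumptions.

```python
import hashlib
import random

POOL_SIZE = 100   # size of the global key-index pool (illustrative)
RING_SIZE = 30    # number of indices pre-loaded into each node

def pool_key(index):
    # Stand-in for the index-th key of the pool; real keys are generated offline.
    return hashlib.sha256(f"pool-key-{index}".encode()).digest()

class Node:
    def __init__(self, node_id, rng):
        self.node_id = node_id
        # Before deployment, each node stores a random ring of key indices.
        self.ring = set(rng.sample(range(POOL_SIZE), RING_SIZE))

def pairwise_key(a, b):
    # Nodes exchange index lists (not key material) and intersect them.
    common = sorted(a.ring & b.ring)
    if not common:
        return None   # no shared index: fall back to path-key establishment
    material = b"".join(pool_key(i) for i in common)
    return hashlib.sha256(material).digest()

rng = random.Random(7)
n1, n2 = Node("n1", rng), Node("n2", rng)
key = pairwise_key(n1, n2)
print(key is not None and key == pairwise_key(n2, n1))  # both sides derive the same key
```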

A Methodology for Task placement and Scheduling Based on Virtual Machines

  • Chen, Xiaojun;Zhang, Jing;Li, Junhuai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.9
    • /
    • pp.1544-1572
    • /
    • 2011
  • Task placement and scheduling have traditionally been studied with respect to resource utilization, application throughput, application execution latency and starvation, and, more recently, application scalability and application performance. This paper studies a task-centered methodology for task placement and scheduling based on virtual machines, aiming to improve system performance and dynamic adaptability in the development and deployment of parallel computing applications. For parallel applications with no real-time constraints, we describe a feature model and give a formal description of four layers of task placement and scheduling. To place tasks onto different layers of virtual computing systems, we take the performance of the four layers as the goal function of the task placement and scheduling model. Furthermore, we take the designer's personal preference and the application's scalability during development and deployment as constraints of this model. The workflow of task placement and scheduling based on virtual machines is discussed. Then, an algorithm, TPVM, is designed to work out the optimal scheme of the model, and an algorithm, TEVM, completes the execution of tasks in the four layers. Experiments were performed to validate the effectiveness of the time estimation method and the feasibility and rationality of the algorithms; they show that our algorithms outperform four other algorithms. The results indicate that the presented methodology provides useful guidance for improving the efficiency of virtual computing systems.
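
As a rough illustration of placing tasks by minimizing a goal function under a designer-preference constraint (not the TPVM/TEVM algorithms themselves), the sketch below greedily assigns each task to the virtual machine with the lowest weighted cost of estimated execution time, current load, and a preference penalty; all fields and weights are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VM:
    name: str
    speed: float          # relative processing speed of this machine/layer
    load: float = 0.0     # work already placed, in speed-normalized time

@dataclass
class Task:
    name: str
    work: float                         # abstract amount of computation
    preferred_vm: Optional[str] = None  # optional designer preference

def place_tasks(tasks, vms, w_time=1.0, w_load=0.5, w_pref=2.0):
    """Greedily place each task on the VM minimizing a weighted goal function."""
    plan = {}
    for task in sorted(tasks, key=lambda t: -t.work):   # largest tasks first
        def cost(vm):
            exec_time = task.work / vm.speed
            pref_penalty = 0.0 if task.preferred_vm in (None, vm.name) else 1.0
            return w_time * exec_time + w_load * vm.load + w_pref * pref_penalty
        best = min(vms, key=cost)
        best.load += task.work / best.speed
        plan[task.name] = best.name
    return plan

vms = [VM("vm-a", speed=2.0), VM("vm-b", speed=1.0)]
tasks = [Task("t1", 4.0), Task("t2", 2.0, preferred_vm="vm-b"), Task("t3", 1.0)]
print(place_tasks(tasks, vms))   # {'t1': 'vm-a', 't2': 'vm-b', 't3': 'vm-a'}
```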

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.267-271
    • /
    • 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in data mining algorithms that operate on graphs has increased; in particular, mining frequent patterns from structured data such as graphs has drawn the attention of many research groups. A graph is a highly adaptable representation scheme used in many domains, including chemistry, bioinformatics, and physics. For example, the chemical structure of a given substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge corresponds to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge to a hypertext link between web sites. Notably, in bioinformatics, newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to find useful knowledge in these graph-structured data, and one of the most powerful analysis tools is frequent subgraph analysis: recurring patterns in graph data can provide incomparable insights into that data. However, finding recurring subgraphs is computationally very expensive. At the core of the problem lie two challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. The former includes the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are two graphs A and B the same?); even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known so far. The latter is also difficult: without any constraint, all 2^n subgraphs must be generated, where n is the number of vertices of the input graph. In order to find frequent subgraphs in a large graph database, it is therefore essential to impose appropriate constraints on the subgraphs to be found. Most current approaches focus on the frequency of a subgraph: the higher the frequency of a graph, the more attention it should receive. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some emerging applications suggest that other constraints, such as connectivity, can also be useful in mining subgraphs: more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to be mined to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to larger graphs with more than ten thousand vertices.
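
The connectivity constraint can be illustrated separately from the full mining algorithm: before counting a candidate subgraph's frequency, cheap structural checks (connectedness plus a minimum-degree threshold as a crude proxy for "more strongly connected") prune the candidate set. This is a simplified illustrative filter, not the paper's algorithm; the graph representation and threshold are assumptions.

```python
from collections import deque

def induced_subgraph(adj, nodes):
    """Adjacency dict of the subgraph induced by `nodes`."""
    nodes = set(nodes)
    return {u: {v for v in adj[u] if v in nodes} for u in nodes}

def is_connected(adj):
    """BFS connectivity check on an adjacency dict."""
    if not adj:
        return False
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

def passes_connectivity_constraint(adj, nodes, min_degree=2):
    """Keep a candidate only if it is connected and every vertex has degree
    >= min_degree inside the candidate (a cheap proxy for restricting the
    search to more strongly connected parts of the graph)."""
    sub = induced_subgraph(adj, nodes)
    return is_connected(sub) and all(len(nbrs) >= min_degree for nbrs in sub.values())

# Toy graph: a triangle {a, b, c} attached to a pendant vertex d.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(passes_connectivity_constraint(graph, {"a", "b", "c"}))   # True
print(passes_connectivity_constraint(graph, {"a", "c", "d"}))   # False: 'a' and 'd' have degree 1
```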

Mining Frequent Sequential Patterns over Sequence Data Streams with a Gap-Constraint (순차 데이터 스트림에서 발생 간격 제한 조건을 활용한 빈발 순차 패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.9
    • /
    • pp.35-46
    • /
    • 2010
  • Sequential pattern mining is one of the essential data mining tasks, and it is widely used to analyze data generated in various application fields such as web-based applications, E-commerce, bioinformatics, and USN environments. Recently, data generated in these application fields has been taking the form of continuous data streams rather than finite stored data sets. Considering this change in the form of data, much research has been performed to efficiently find sequential patterns over data streams. However, conventional research focuses on reducing the processing time and memory usage of mining sequential patterns over a target data stream, while mining more interesting and useful sequential patterns that reflect the characteristics of the data stream has attracted little attention. This paper proposes a method for mining sequential patterns over data streams with a gap constraint, which can help to find more interesting sequential patterns over the data streams. First, the meaning of the gap for a sequential pattern and the notion of gap-constrained sequential patterns are defined, and then a mining method for finding gap-constrained sequential patterns over a data stream is proposed.
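
The gap constraint itself can be sketched independently of the stream-mining method: the check below decides whether a pattern occurs in a sequence as a subsequence with at most max_gap items skipped between consecutive matched elements. In a mining loop, only occurrences passing this check would contribute to a pattern's support; the function name and the exact gap definition are assumptions.

```python
def contains_with_gap(sequence, pattern, max_gap):
    """True if `pattern` occurs in `sequence` as a subsequence in which at
    most `max_gap` items are skipped between consecutive matched items."""
    if not pattern:
        return True

    def extend(start, k):
        # Match pattern[k:] with pattern[k] at a position >= start; for k > 0,
        # `start` is the position right after the previously matched item.
        if k == len(pattern):
            return True
        end = len(sequence) if k == 0 else min(len(sequence), start + max_gap + 1)
        for pos in range(start, end):
            if sequence[pos] == pattern[k] and extend(pos + 1, k + 1):
                return True
        return False

    return extend(0, 0)

seq = ["a", "x", "x", "x", "a", "b"]
print(contains_with_gap(seq, ["a", "b"], max_gap=0))   # True: matches the second 'a'
print(contains_with_gap(seq, ["a", "a"], max_gap=2))   # False: the two 'a's are 3 items apart
```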

An Extended Negotiation Agent Using Multi-Issues under Time-Constraint Environment (시간제약 환경에서 다중 속성을 이용한 확장된 협상 에이전트)

  • 김현식;양성봉
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.12
    • /
    • pp.1208-1219
    • /
    • 2003
  • The Internet has become a part of our lives and has changed our living environment dramatically. However, Electronic Commerce (EC) currently remains at a level where a buyer either accepts a one-sided condition (price) proposed by a seller or compares the proposed conditions to find a better one. As agent technology progresses, EC requires negotiation that is not one-sided but instead maximizes the profits of both seller and buyer. Moreover, negotiation in EC should consider multiple issues rather than a single issue if it is to replace traditional commerce. In this paper we propose a negotiation model that guarantees balanced profits between two agents through multi-issue negotiation and reaches deals successfully in a time-constrained environment. The proposed model provides strategies (an alternative strategy and a simultaneous strategy) that change the values of the issues as each agent's negotiation time elapses, and we also suggest a strategy by which the agents propose offers to each other. We compare the proposed negotiation model with another negotiation model. The experimental results show that the dynamic-conceder and linear tactics yield balanced profits and a high percentage of successful deals between the two agents, and that the sum of the two agents' utilities is high when the alternative strategy is used.
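
The conceder and linear tactics referred to in the results are commonly modeled as time-dependent concession functions, in which an issue's offered value moves from the agent's best value toward its reservation value as the deadline approaches. A minimal single-issue sketch under those standard assumptions (not the paper's full multi-issue protocol or strategies) follows.

```python
def time_dependent_offer(t, deadline, best, reservation, beta):
    """Value offered for one issue at time t (0 <= t <= deadline).

    beta < 1  : boulware (concedes late)
    beta == 1 : linear concession
    beta > 1  : conceder (concedes early)
    """
    alpha = (min(t, deadline) / deadline) ** (1.0 / beta)
    return best + alpha * (reservation - best)

# A buyer conceding on price from 100 (best) toward 160 (reservation value).
for t in (0, 5, 10):
    linear = time_dependent_offer(t, 10, 100, 160, beta=1.0)
    conceder = time_dependent_offer(t, 10, 100, 160, beta=3.0)
    print(f"t={t:2d}  linear={linear:6.1f}  conceder={conceder:6.1f}")
```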

Performance Enhancement of an OFDMA/CDM-based Cellular System in a Multi-Cell Environment (다중셀 환경에서 OFDMA/CDM 기반 셀룰라 시스템의 성능 개선)

  • Kim, Duk-Kyung;Ryu, Je-Hun;Jeong, Bu-Seop
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.7A
    • /
    • pp.587-596
    • /
    • 2005
  • In this paper, we propose an OFDMA/CDM-based cellular system that accommodates multiple users in the frequency domain and multiplexes user data with frequency-domain spreading. The proposed system utilizes random codes to discriminate cells and adopts pre-equalization to enhance performance. For cellular applications, a number of pre-equalization techniques are compared and an efficient power allocation scheme is suggested under a transmit power constraint. In particular, the validity of the OFDMA/CDM-based cellular system is investigated by comparing the performance for varying numbers of multiplexed data symbols at different locations. Finally, pre/post-equalization is proposed to reduce the performance degradation caused by time delay.
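
The transmit power constraint on pre-equalization can be sketched as follows: given estimated per-subcarrier channel coefficients, the transmitter computes pre-equalization weights (plain channel inversion here, as one simple choice) and rescales them so the transmitted power matches that of the original symbols. This is a generic illustration, not the power allocation scheme proposed in the paper; all parameters are assumptions.

```python
import numpy as np

def pre_equalize(symbols, channel, total_power=None):
    """Channel-inversion pre-equalization under a transmit power constraint.

    symbols : frequency-domain data symbols (one per subcarrier)
    channel : estimated complex channel coefficients per subcarrier
    The weights invert the channel, then all subcarriers are scaled by a
    common factor so the transmit power equals that of the original symbols.
    """
    weights = 1.0 / channel                       # zero-forcing pre-equalizer
    tx = weights * symbols
    if total_power is None:
        total_power = np.sum(np.abs(symbols) ** 2)
    scale = np.sqrt(total_power / np.sum(np.abs(tx) ** 2))
    return scale * tx, scale

rng = np.random.default_rng(0)
n_sc = 8
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sc) / np.sqrt(2)
channel = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)

tx, scale = pre_equalize(symbols, channel)
received = channel * tx                           # flat per-subcarrier fading
print(np.allclose(received / scale, symbols))     # True: channel is pre-compensated
print(np.isclose(np.sum(np.abs(tx) ** 2), np.sum(np.abs(symbols) ** 2)))  # power preserved
```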

Relative Position Estimation using Kalman Filter Based on Inertial Sensor Signals Considering Soft Tissue Artifacts of Human Body Segments (신체 분절의 연조직 변형을 고려한 관성센서신호 기반의 상대위치 추정 칼만필터)

  • Lee, Chang June;Lee, Jung Keun
    • Journal of Sensor Science and Technology
    • /
    • v.29 no.4
    • /
    • pp.237-242
    • /
    • 2020
  • This paper deals with relative position estimation using a Kalman filter (KF) based on inertial sensors that have been widely used in various biomechanics-related outdoor applications. In previous studies, the relative position is determined using relative orientation and predetermined segment-to-joint (S2J) vectors, which are assumed to be constant. However, because body segments are influenced by soft tissue artifacts (STAs), including the deformation and sliding of the skin over the underlying bone structures, they are not constant, resulting in significant errors during relative position estimation. In this study, relative position estimation was performed using a KF, where the S2J vectors were adopted as time-varying states. The joint constraint and the variations of the S2J vectors were used to develop a measurement model of the proposed KF. Accordingly, the covariance matrix corresponding to the variations of the S2J vectors continuously changed within the ranges of the STA-causing flexion angles. The experimental results of the knee flexion tests showed that the proposed KF decreased the estimation errors in the longitudinal and lateral directions by 8.86 and 17.89 mm, respectively, compared with a conventional approach based on the application of constant S2J vectors.
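
For reference, a generic linear Kalman filter predict/update cycle is sketched below. The paper's filter additionally treats the segment-to-joint vectors as time-varying states and adapts the relevant covariance over the STA-prone flexion range; that idea is only hinted at here by a measurement covariance that grows with the flexion angle. The toy model and all matrices are illustrative assumptions.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Standard Kalman filter time update."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def sta_measurement_cov(base_R, flexion_deg, scale=0.05):
    """Illustrative: grow measurement uncertainty with the flexion angle,
    mimicking larger soft-tissue artifact at large knee flexion."""
    return base_R * (1.0 + scale * abs(flexion_deg))

# One cycle with a 2-state toy model (position, velocity); position is measured.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-4])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2) * 0.1

x, P = kf_predict(x, P, F, Q)
R = sta_measurement_cov(np.array([[1e-3]]), flexion_deg=60.0)
x, P = kf_update(x, P, z=np.array([0.02]), H=H, R=R)
print(x)
```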

MODIFIED DOUBLE SNAKE ALGORITHM FOR ROAD FEATURE UPDATING OF DIGITAL MAPS USING QUICKBIRD IMAGERY

  • Choi, Jae-Wan;Kim, Hye-Jin;Byun, Young-Gi;Han, You-Kyung;Kim, Yong-Il
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.234-237
    • /
    • 2007
  • Road networks are important geospatial databases for various GIS (Geographic Information System) applications. Digital road maps may contain geometric spatial errors due to human and scanning errors, but manually updating road information is time-consuming. In this paper, we developed a new road-feature updating methodology that uses a multispectral high-resolution satellite image and a pre-existing vector map. The approach is based on initial seed point generation using line segment matching and a modified double snake algorithm. First, we conducted line segment matching between the road vector data and the image edges obtained by the Canny operator. Then, the translated road data was used to initialize the seed points of the double snake model in order to refine the updating of road features. The double snake algorithm is composed of two open snake models that evolve jointly so as to remain parallel to each other. In the proposed algorithm, a new energy term was added which behaves as a constraint: it forces the snake nodes to stay within potential road pixels in the multispectral image. The experiment was carried out using a QuickBird pan-sharpened multispectral image and 1:5,000 digital road maps of Daejeon, and the results for this urban area show the feasibility of the approach.
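
The added constraint term can be illustrated with a toy energy for a double snake: an internal smoothness term for each contour, a term that keeps the two contours parallel, and a penalty for nodes leaving the set of potential road pixels. This is a simplified illustration of the energy formulation, not the authors' implementation; the weights and the road mask are assumptions.

```python
import numpy as np

def snake_energy(left, right, road_mask, alpha=1.0, beta=1.0, gamma=5.0):
    """Toy energy for a double snake (two open polylines of shape (N, 2)).

    internal   : penalizes stretching of each contour (first differences)
    parallel   : penalizes deviation of the inter-contour distance from its mean
    constraint : penalizes nodes that fall outside the potential-road mask
    """
    def internal(c):
        return np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1) ** 2)

    gap = np.linalg.norm(left - right, axis=1)
    parallel = np.sum((gap - gap.mean()) ** 2)

    def off_road(c):
        rows = np.clip(c[:, 0].astype(int), 0, road_mask.shape[0] - 1)
        cols = np.clip(c[:, 1].astype(int), 0, road_mask.shape[1] - 1)
        return np.sum(~road_mask[rows, cols])     # number of nodes off the road

    return (alpha * (internal(left) + internal(right))
            + beta * parallel
            + gamma * (off_road(left) + off_road(right)))

# Tiny example: a 10x10 mask with a vertical road band in columns 4-6.
mask = np.zeros((10, 10), dtype=bool)
mask[:, 4:7] = True
left = np.column_stack([np.arange(10), np.full(10, 4.0)])
right = np.column_stack([np.arange(10), np.full(10, 6.0)])
print(snake_energy(left, right, mask))   # low energy: both contours stay on the road
```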

A Study of Personal Characteristics Influencing Cloud Intention (클라우드 사용의도에 영향을 미치는 개인특성 연구)

  • Kim, Jin Bae;Cho, Myeonggil
    • Journal of Information Technology Applications and Management
    • /
    • v.26 no.3
    • /
    • pp.135-157
    • /
    • 2019
  • Information technology has economic, social, and cultural impacts and is closely linked to our lives. It is becoming a key driver of change in human civilization by connecting people and objects across the globe. In addition, future information technology is becoming more intelligent and personalized with the development of computing technology, and its rapid advance is spreading an environment free of time and space constraints. Since existing portable storage media take a physical form, their usage is limited by the risk of loss and by capacity constraints. Cloud services can overcome these limitations in storing, managing, and reusing information. Despite the large number of cloud service users, existing research has focused mainly on the concept of cloud services and the effect of their introduction on companies. This study examines the individual characteristics that affect the degree of cloud use; specifically, it investigates how IT knowledge, personal perception of security, convenience, innovativeness, economic trust, and platform dependency affect the intention to use the cloud. The results show that the variables affecting the use of cloud services vary across individuals, and this study can serve as basic data for individuals adopting cloud services.