• Title/Summary/Keyword: Path Maintenance

A Study on Backup Route Setup Scheme in Ad Hoc Networks (애드혹 네트워크에서의 보조 경로 설정 기법에 관한 연구)

  • Jung Se-Won;Lee Chae-Woo
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.8 s.350 / pp.47-58 / 2006
  • Due to the movement of nodes, ad hoc networks suffer from problems such as decreased data delivery ratio, increased end-to-end delay, and increased routing overhead. Backup routing schemes try to solve these problems by finding backup routes during the route discovery phase and using them when a route fails. Generally, backup routing schemes outperform single-path routing schemes in terms of data delivery ratio, end-to-end delay, and routing overhead when nodes move rapidly. But when nodes do not move rapidly, backup routing schemes generate more routing traffic than single-path routing schemes because they need to exchange packets to find the backup route. In addition, when the backup route fails earlier than the main route, it cannot be used, because in many backup routing algorithms the backup route is found only during the initial route discovery phase. RBR (Reactive Backup Routing), the algorithm proposed in this paper, provides more stable data delivery than previous backup routing schemes through selective maintenance of backup routes and backup route rediscovery. To do so, RBR prioritizes the backup routes and maintains and uses them selectively, which also decreases routing overhead. RBR can also increase the data delivery ratio and decrease delay because it re-establishes the backup route when the network topology changes. For the performance evaluation, the OPNET simulator is used to compare RBR with a single-path routing scheme and some well-known backup routing schemes.
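
The abstract does not give RBR's pseudocode; the following is a minimal sketch of the selective backup-route maintenance and rediscovery it describes. The names (BackupRoute, RoutingState, stability) and the keep-top-two policy are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class BackupRoute:
    hops: list          # node IDs from source to destination
    stability: float    # illustrative priority metric: higher = expected to live longer

@dataclass
class RoutingState:
    main_route: list
    backups: list = field(default_factory=list)   # kept sorted, best first

    def add_backup(self, route: BackupRoute, keep: int = 2):
        # Prioritize backup routes and keep only the best few (selective maintenance).
        self.backups.append(route)
        self.backups.sort(key=lambda r: r.stability, reverse=True)
        del self.backups[keep:]

    def on_main_route_failure(self, rediscover):
        # Switch to the highest-priority backup, or fall back to reactive rediscovery.
        if self.backups:
            self.main_route = self.backups.pop(0).hops
        else:
            self.main_route = rediscover().hops

    def on_topology_change(self, rediscover):
        # Unlike schemes that find backups only at the initial route discovery phase,
        # re-establish the backup route when the topology changes.
        self.backups.clear()
        self.add_backup(rediscover())

state = RoutingState(main_route=["S", "A", "D"])
state.add_backup(BackupRoute(hops=["S", "B", "D"], stability=0.7))
state.on_main_route_failure(rediscover=lambda: BackupRoute(["S", "C", "D"], 0.5))
print(state.main_route)   # ['S', 'B', 'D']
```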

A Research of LEACH Protocol improved Mobility and Connectivity on WSN using Feature of AOMDV and Vibration Sensor (AOMDV의 특성과 진동 센서를 적용한 이동성과 연결성이 개선된 WSN용 LEACH 프로토콜 연구)

  • Lee, Yang-Min;Won, Joon-We;Cha, Mi-Yang;Lee, Jae-Kee
    • The KIPS Transactions: Part C / v.18C no.3 / pp.167-178 / 2011
  • With the growth of ubiquitous services, various types of ad hoc networks have emerged. In particular, wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are widely known ad hoc networks, but there are also other kinds of wireless ad hoc networks in which the characteristics of these two network types are mixed together. This paper proposes a variant of the Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol modified to be suitable for such a combined network environment. That is, the proposed routing protocol provides node detection and route discovery/maintenance in a network with a large number of mobile sensor nodes, while preserving node mobility, network connectivity, and energy efficiency. The proposed routing protocol is implemented with a multi-hop multi-path algorithm, a topology reconfiguration technique using node movement estimation and vibration sensors, and an efficient path selection and data transmission technique for a large number of moving nodes. In the experiments, the performance of the proposed protocol is demonstrated by comparing it to the conventional LEACH protocol.
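
As a hedged illustration of the cluster-head election such a mobility-aware LEACH variant might use, the sketch below modifies the classic LEACH threshold with a penalty for nodes whose vibration sensor reports motion; the 0.5 weighting and the node fields are assumptions, not the protocol's actual rules.

```python
import random

P = 0.05  # desired fraction of cluster heads per round (standard LEACH parameter)

def leach_threshold(round_no: int) -> float:
    """Classic LEACH threshold T(n) for nodes that have not recently been heads."""
    return P / (1 - P * (round_no % int(1 / P)))

def elect_cluster_head(node: dict, round_no: int) -> bool:
    if node["was_head_recently"]:
        return False
    t = leach_threshold(round_no)
    # Assumed modification: a node whose vibration sensor detects motion is penalized,
    # since a mobile cluster head would break cluster connectivity sooner.
    if node["vibration_detected_motion"]:
        t *= 0.5
    return random.random() < t

node = {"was_head_recently": False, "vibration_detected_motion": True}
print(elect_cluster_head(node, round_no=3))
```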

A Code Clustering Technique for Unifying Method Full Path of Reusable Cloned Code Sets of a Product Family (제품군의 재사용 가능한 클론 코드의 메소드 경로 통일을 위한 코드 클러스터링 방법)

  • Kim, Taeyoung;Lee, Jihyun;Kim, Eunmi
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.1-18 / 2023
  • Similar software is often developed with the Clone-And-Own (CAO) approach, which copies and modifies existing artifacts. The CAO approach is considered a bad practice because it makes maintenance difficult as the number of cloned products increases. Software product line engineering is a methodology that can solve this issue of the CAO approach by developing a product family through systematic reuse. Migrating product families that have been developed with the CAO approach to product line engineering begins with finding, integrating, and building their artifacts as reusable assets. However, cloning occurs at various levels, from directories to code lines, and the cloned structures can change, which makes it difficult to build a product line code base simply by finding clones. Successful migration thus requires unifying the source code's file paths, class names, and method signatures. This paper proposes a clustering method that identifies sets of similar code scattered across product variants whose method full paths differ and therefore need path unification. To show the effectiveness of the proposed method, we conducted an experiment using the Apo Games product line, which has evolved with the CAO approach. The average precision of clustering performed without preprocessing was 0.91 and the number of identified common clusters was 0, whereas our method achieved 0.98 and 15, respectively.
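
A minimal sketch of the idea of grouping cloned methods whose full paths differ across variants is shown below; the token-overlap similarity measure, the 0.8 threshold, and the data layout are assumptions for illustration, not the paper's clustering method.

```python
def token_similarity(a: str, b: str) -> float:
    """Jaccard overlap of whitespace-separated tokens (illustrative similarity)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def clusters_needing_path_unification(methods, threshold=0.8):
    """methods: list of dicts with 'variant', 'full_path', and 'body' keys."""
    clusters = []
    for m in methods:
        for c in clusters:
            if token_similarity(m["body"], c[0]["body"]) >= threshold:
                c.append(m)
                break
        else:
            clusters.append([m])
    # Clusters whose members disagree on the method full path need path unification
    # before a common product-line code base can be built.
    return [c for c in clusters if len({x["full_path"] for x in c}) > 1]
```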

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.207-228 / 2007
  • For a long time, measurement of Information Technology (IT) organizations' activities was largely limited to financial indicators. However, as the functions of information systems have diversified, a number of studies have examined new measurement methodologies that combine financial measurement with new, non-financial measures. In particular, research on the IT Balanced Scorecard (BSC), a concept derived from the BSC for measuring IT activities, has been conducted in recent years. The BSC offers more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard, and it is also central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among various IT performance measurement indicators, has received relatively little prior study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest in soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained with one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task due to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance; performance measurement systems like the BSC instead model the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four perspectives of the BSC for IT organizations according to Van Grembergen's study [2000] as follows. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT. The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs consist of 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective; the Operational Excellence perspective has positive effects on the User Orientation perspective; and the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes, and it has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
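
The following is a rough sketch of the hypothesized chain of the four perspectives, using synthetic composite scores and single-predictor standardized regressions as a stand-in for the PLS path estimates reported in the study; the data and coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # synthetic sample size

# Synthetic composite scores for the four Van Grembergen perspectives.
future_orientation     = rng.normal(size=n)
operational_excellence = 0.6 * future_orientation     + rng.normal(scale=0.8, size=n)
user_orientation       = 0.5 * operational_excellence + rng.normal(scale=0.8, size=n)
business_contribution  = 0.4 * user_orientation       + rng.normal(scale=0.8, size=n)

def path_coefficient(x, y):
    """Standardized single-predictor regression slope of y on x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

print("Future Orientation -> Operational Excellence:",
      round(path_coefficient(future_orientation, operational_excellence), 2))
print("Operational Excellence -> User Orientation:  ",
      round(path_coefficient(operational_excellence, user_orientation), 2))
print("User Orientation -> Business Contribution:   ",
      round(path_coefficient(user_orientation, business_contribution), 2))
```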

User-Oriented Controller Design for Multi-Axis Manipulators (다관절 머니퓰레이터의 사용자 중심 제어기 설계)

  • Son, HeonSuk;Kang, DaeHoon;Lee, JangMyung
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.2 / pp.49-56 / 2008
  • This paper proposes a PC-based open architecture controller for a multi-axis robotic manipulator. The designed controller can be applied to various multi-axis robotic manipulators since the motion controller is implemented on a PC with its peripheral devices. The accuracy of the controller, which is based on the computed torque method, has been measured with the dynamic model of the manipulator. Since the controller is implemented in a PC-based architecture, it is independent of the user's circumstances and the operating environment. The dynamics of the manipulator are compensated by a feed-forward path in the inner loop, and the resulting linear outer loop is controlled by a PD algorithm. Using a specialized language, programming and driving the multi-axis robot become more efficient. Unlike a conventional controller that is used to control only a specific robot, this controller can easily be adapted to various types of robots. The proposed PC-based controller has a simpler architecture and simpler interface circuits than general commercial controllers, and its maintenance and performance can easily be improved for a specific robot. Using a Samsung multi-axis robot, the AT1, the performance and convenience of the PC-based controller have been verified by comparison with a commercial controller.
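
As a hedged illustration of the control structure described in the abstract (dynamics compensation in an inner feed-forward loop, PD control of the resulting linear outer loop), the sketch below shows the classic computed-torque law; the toy two-joint model and the gain values are assumptions, not the controller's actual parameters.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp, Kd):
    """q, qd: measured joint positions/velocities; *_des: desired trajectory."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e        # linear PD outer loop
    return M(q) @ v + C(q, qd) + G(q)     # inner-loop dynamics compensation

# Toy 2-joint example: constant inertia, no Coriolis or gravity terms.
M = lambda q: np.diag([1.0, 0.5])
C = lambda q, qd: np.zeros(2)
G = lambda q: np.zeros(2)
Kp, Kd = np.diag([25.0, 25.0]), np.diag([10.0, 10.0])

tau = computed_torque(np.zeros(2), np.zeros(2),
                      np.array([0.5, -0.2]), np.zeros(2), np.zeros(2),
                      M, C, G, Kp, Kd)
print(tau)
```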

Evolution and Maintenance of Proxy Networks for Location Transparent Mobile Agent and Formal Representation By Graph Transformation Rules

  • Kurihara, Masahito;Numazawa, Masanobu
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.151-155 / 2001
  • Mobile agent technology has been the subject of much attention in the last few years, mainly due to the proliferation of distributed software technologies combined with the distributed AI research field. In this paper, we present a design of communication networks of agents that cooperate with each other to forward messages to a specific mobile agent in order to make the overall system location transparent. In order to make the material accessible to general intelligent system researchers, we present the general ideas abstractly in terms of graph theory. In particular, a proxy network is defined as a directed acyclic graph satisfying some structural conditions. It turns out that the definition ensures a kind of reliability of the network, in the sense that as long as at most one proxy agent is abnormal, there exists a communication path from every proxy agent to the target agent without passing through the abnormal proxy. As the basis for the implementation of this scheme, an appropriate initial proxy network is specified and the dynamic nature of the network is represented by a set of graph transformation rules. It is shown that those rules are sound, in the sense that all graphs created from the initial proxy network by zero or more applications of the rules are guaranteed to be proxy networks. Finally, we discuss some implementation issues.
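
The reliability property stated in the abstract can be checked directly on a small example; the sketch below verifies that every proxy still reaches the target when any single other proxy is removed. The adjacency-dict representation and the example graph are illustrative, not the paper's formalism.

```python
def has_path(adj, src, dst, blocked):
    """Depth-first search for a directed path from src to dst avoiding one node."""
    stack, seen = [src], {src, blocked}
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def tolerates_single_failure(adj, target):
    """True if every proxy reaches the target even when any one other proxy fails."""
    proxies = set(adj) | {v for vs in adj.values() for v in vs}
    proxies.discard(target)
    return all(has_path(adj, p, target, blocked=bad)
               for bad in proxies for p in proxies if p != bad)

# Example proxy DAG: A forwards to B or C, both of which forward to the target T.
adj = {"A": ["B", "C"], "B": ["T"], "C": ["T"]}
print(tolerates_single_failure(adj, "T"))   # True
```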

The Effect of IT Human Capability and Absorptive Capacity on Knowledge Transfer

  • Park, Joo-Yeon
    • Journal of Information Technology Applications and Management / v.15 no.3 / pp.209-225 / 2008
  • The purpose of this study is to examine the relationship between IT human capability and knowledge transfer and the role of absorptive capacity between them. From tests of both the measurement and the structural model using Partial Least Squares (PLS), IT human capability is found to be significantly related to absorptive capacity and knowledge transfer. Absorptive capacity is also significantly related to knowledge transfer. An interesting result of this study is that the path from IT human capability to knowledge transfer through absorptive capacity is stronger than the direct relationship between IT human capability and knowledge transfer, indicating that absorptive capacity plays an important mediating role in knowledge transfer. This result indicates that IT personnel with stronger technical skills, interpersonal skills, and management capability are more likely to acquire and learn knowledge effectively from outside expertise. Moreover, this study shows that absorptive capacity, the individual's ability to utilize external knowledge, is derived from IT human capability and strongly affects the transfer of knowledge from outsourcing vendors. This study suggests to IT managers that the development of IT human capability and absorptive capacity should be recognized as essential for successful exploitation of outside knowledge within a firm; it is also a necessary condition for implementing and maintaining IT successfully, independently, and economically without relying on outside vendors.
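
The comparison described above (the mediated path through absorptive capacity versus the direct path) reduces to comparing a product of path coefficients with the direct coefficient; the numbers below are placeholders, not the study's estimates.

```python
a = 0.55         # IT human capability -> absorptive capacity (placeholder)
b = 0.60         # absorptive capacity -> knowledge transfer (placeholder)
c_direct = 0.20  # direct path: IT human capability -> knowledge transfer (placeholder)

indirect = a * b  # strength of the mediated path
print(f"indirect effect = {indirect:.2f}, direct effect = {c_direct:.2f}")
print("mediated path dominates" if indirect > c_direct else "direct path dominates")
```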

Implementation of Virtual Manufacturing Technology to Manual Spot Welding Process in Automotive Body Shop (자동차 차체공장의 매뉴얼 점용접 공정에 가상생산기술 적용)

  • Jung, Kwang-Jo;Lee, Kun-Sang;Park, Young-Jin
    • Proceedings of the KSME Conference / 2003.04a / pp.1166-1172 / 2003
  • The extremely strong competition among the world's automobile manufacturers has introduced the concept of PLM into total production activities, one of whose major components is VM (Virtual Manufacturing). If the production lines are equipped with robots, the application of OLP in the virtual space is fully mature. However, from the point of view of investment and maintenance, there are always some activities that cannot be automated, typically the manual welding for prefixing in the automobile body shop and material loading. Process planning for these activities is therefore decided mainly by experience, which causes repeated rework of the processes and inconvenience to the workers, and consequently reduces productivity and worker safety. In this paper, the optimal dimensions of the welding gun, its handle position, and the optimal working path are simulated and decided using DELMIA/IGRIP and DELMIA/Ergo, with the working area modeled in the virtual workcell of DELMIA.
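
The paper decides the working path inside DELMIA rather than in code; purely as a toy illustration of ordering manual spot-weld points into a short working path, a greedy nearest-neighbour sketch is given below. The coordinates and the greedy heuristic are assumptions, not the paper's method.

```python
import math

def order_weld_points(points, start=(0.0, 0.0, 0.0)):
    """Greedy nearest-neighbour ordering of 3-D spot-weld coordinates."""
    remaining, path, current = list(points), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

welds = [(1.2, 0.4, 0.9), (0.3, 0.8, 1.1), (1.0, 1.5, 0.7)]
print(order_weld_points(welds))
```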

Kinematic Design and Analysis of a Master Arm with a Distributed Controller Architecture (분산 제어기 구조를 갖는 마스터 암의 기구학 설계 및 해석)

  • Lee, Jangwook;Kim, Yoonsang;Lee, Sooyong;Kim, Munsang
    • Journal of Institute of Control, Robotics and Systems / v.7 no.6 / pp.532-539 / 2001
  • In robot teleoperation, much research has been carried out on controlling the slave robot from a remote site. One of the essential devices for robot teleoperation is the masterarm, a path-command-generating device worn on the human arm. In this paper, a new masterarm based on human kinematics is proposed. Its controller is based on a distributed architecture composed of two parts: a host controller and a set of satellite controllers. Each satellite controller measures the corresponding joint angle, while the host controller performs the forward and inverse kinematics calculations. This distributed controller architecture speeds up data updating, which allows real-time implementation. The host controller and the satellite controllers are networked via a three-wire daisy-chained SPI (Serial Peripheral Interface) protocol, so the architecture keeps the electrical wiring very simple and eases maintenance. An analytical method for finding the three additional unknown joint angles is derived using only three measured angles for each of the shoulder and wrist, which makes the hardware implementation very simple by minimizing the required number of satellite controllers. Finally, simulation and experimental results are given to demonstrate the usefulness and performance of the proposed masterarm.
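
A minimal sketch of the host/satellite split described above is shown below: satellite controllers only report their measured joint angles and the host performs the kinematics. The Satellite stand-in class and the planar three-joint chain are assumptions; the real device uses a three-wire daisy-chained SPI bus and a spatial arm.

```python
import math

class Satellite:
    """Stand-in for one daisy-chained joint controller; returns its joint angle [rad]."""
    def __init__(self, angle: float):
        self._angle = angle
    def read_angle(self) -> float:
        return self._angle

def forward_kinematics_planar(angles, link_lengths):
    """End-effector (x, y) of a planar serial chain, computed on the host."""
    x = y = theta = 0.0
    for a, l in zip(angles, link_lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
    return x, y

satellites = [Satellite(0.3), Satellite(-0.2), Satellite(0.5)]
angles = [s.read_angle() for s in satellites]        # host polls the chain
print(forward_kinematics_planar(angles, [0.30, 0.25, 0.10]))
```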

Cluster-based AODV for ZigBee Wireless Measurement and Alarm Systems (ZigBee 무선계측/경보 시스템을 위한 클러스터 기반의 AODV)

  • Park, Jae-Won;Kim, Hong-Rok;Lee, Yun-Jung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.920-926 / 2007
  • Establishing a fixed path for message delivery through a wireless network is impossible due to node mobility. Among the many routing protocols that have been proposed for wireless ad-hoc networks, the AODV (Ad-hoc On-demand Distance Vector) algorithm, along with ZigBee Routing (ZBR), is suitable for highly dynamic topology changes, with the exception of route maintenance. Accordingly, this paper introduces a routing scheme focusing on energy efficiency and route discovery time for wireless alarm systems using IEEE 802.15.4-based ZigBee. Essentially, the proposed routing algorithm utilizes a cluster structure and applies ZBR within a cluster and DSR (Dynamic Source Routing) between clusters. The proposed algorithm does not require a routing table at the cluster heads, as the inter-cluster routing is performed using DSR. The performance of the proposed algorithm is evaluated and compared with ZBR using the NS2 simulator. The results confirm that the proposed Cluster-based AODV (CAODV) algorithm is more efficient than ZBR in terms of route discovery time and energy consumption.
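
A minimal sketch of the hybrid routing decision described above follows: use a ZBR-style route when source and destination share a cluster, and a DSR-style source route over cluster heads otherwise. The cluster map and the routing stubs are assumptions for illustration.

```python
def route(src, dst, cluster_of, intra_cluster_route, inter_cluster_source_route):
    """Pick intra-cluster (ZBR-like) or inter-cluster (DSR-like) routing."""
    if cluster_of[src] == cluster_of[dst]:
        # ZBR-style on-demand route inside a single cluster.
        return intra_cluster_route(src, dst)
    # DSR-style source route carried in the packet header and hopping over cluster
    # heads, so the heads themselves need no routing table.
    return inter_cluster_source_route(cluster_of[src], cluster_of[dst])

# Toy usage with stub routing functions.
cluster_of = {"n1": "C1", "n2": "C1", "n3": "C2"}
intra = lambda s, d: [s, d]
inter = lambda cs, cd: [cs, cd]
print(route("n1", "n2", cluster_of, intra, inter))   # ['n1', 'n2']
print(route("n1", "n3", cluster_of, intra, inter))   # ['C1', 'C2']
```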