• Title/Summary/Keyword: distributed computing


An Enhanced University Registration Model Using Distributed Database Schema

  • Maabreh, Khaled Saleh
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.7, pp.3533-3549, 2019
  • Large databases built on modern network technology have become an emerging trend in computing, creating a need for optimal and effective data distribution approaches. This research presents a practical perspective on designing and implementing distributed database features. The proposed system establishes satisfactory, reliable, scalable, and standardized use of information. Furthermore, the proposed scheme reduces the vast, recurring effort of designing an individual system for each university, and it contributes to solving the course equivalence problem. The empirical findings in this study show the superiority of the distributed system over the centralized system in terms of average response time and average waiting time. The system throughput also surpasses that of the centralized system because of data distribution and replication. The analysis shows that the centralized system thrashes when the workload exceeds 60%, while the distributed system does not thrash until the workload exceeds 81%.
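
As a rough illustration of the data distribution idea behind such a schema (a minimal sketch, not the paper's implementation), the code below routes registration reads and writes under horizontal fragmentation, where each university site owns its own fragment and replicas absorb read traffic; all site and university names are hypothetical.

```python
# Minimal sketch, not the paper's implementation: routing registration
# queries under horizontal fragmentation. Site and university names are
# hypothetical.
SITES = {
    "univ_a": {"primary": "site1", "replicas": ["site2"]},
    "univ_b": {"primary": "site2", "replicas": ["site3"]},
}

def route_write(university: str) -> str:
    """Writes always go to the fragment's primary site."""
    return SITES[university]["primary"]

def route_read(university: str, prefer_replica: bool = True) -> str:
    """Reads may be served by a replica to spread the load."""
    entry = SITES[university]
    if prefer_replica and entry["replicas"]:
        return entry["replicas"][0]
    return entry["primary"]

if __name__ == "__main__":
    print(route_write("univ_a"))  # site1 (primary handles the write)
    print(route_read("univ_a"))   # site2 (replica serves the read)
```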

Performance Improvement of Data Replication in Cloud Computing (Cloud Computing에서의 데이터 복제 성능 개선)

  • Lee, Joon-Kyu; Lee, Bong-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2008.10a, pp.53-56, 2008
  • Recently, distributed systems have evolved into a new paradigm, named cloud computing, which provides users with efficient computing resources and services from data centers. Cloud computing reduces a potential danger of Grid computing, which relies on resource sharing, by constructing centralized data centers. In this paper, a new data replication scheme is proposed for the Hadoop distributed file system by changing 1:1 data transmission to 1:N. The proposed scheme considerably reduces the data transmission delay compared to the current mechanism.
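
HDFS normally writes a block along a pipeline of DataNodes (client to the first replica, which forwards to the next). The toy timing sketch below contrasts that 1:1 chain with the 1:N fan-out the paper proposes; send_block() is a stand-in for the real network transfer.

```python
# Toy timing sketch: 1:1 pipeline vs. the proposed 1:N fan-out.
# send_block() is a stand-in for the real HDFS block transfer.
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def send_block(block: bytes, replica: str) -> str:
    time.sleep(0.1)  # simulate one network transfer
    return f"{replica}: stored {len(block)} bytes"

def replicate_pipeline(replicas, block):
    # 1:1 chain: total delay grows linearly with the number of replicas
    return [send_block(block, r) for r in replicas]

def replicate_fanout(replicas, block):
    # 1:N fan-out: transfers overlap, so total delay stays near one hop
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        return list(pool.map(partial(send_block, block), replicas))

if __name__ == "__main__":
    replicas, block = ["dn1", "dn2", "dn3"], b"x" * 1024
    t0 = time.time(); replicate_pipeline(replicas, block)
    print(f"pipeline: {time.time() - t0:.2f} s")  # ~0.3 s
    t0 = time.time(); replicate_fanout(replicas, block)
    print(f"fan-out:  {time.time() - t0:.2f} s")  # ~0.1 s
```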


A Distributed Electrical Impedance Tomography Algorithm for Real-Time Image Reconstruction (실시간 영상 복원을 위한 분산 전기단층촬영 알고리즘)

  • Junghoon Lee; Gyunglin Park
    • Journal of KIISE: Computing Practices and Letters, v.10 no.1, pp.25-36, 2004
  • This paper proposes and measures the performance of a distributed EIT (Electrical Impedance Tomography) image reconstruction algorithm with a master-slave structure. Image reconstruction is a computation-intensive application whose execution time is proportional to the cube of the number of unknowns. After receiving a specific frame from the master, each computing node extracts the basic elements by executing the first iteration of the Kalman filter in parallel. The master then merges the basic element lists into one group and performs the sequential iterations with the reduced number of unknowns. Every computing node has MATLAB functions as well as an extended library implemented for the exchange of MATLAB data structures. The master implements additional libraries, such as threaded multiplication, partitioned inverse, and fast Jacobian, to improve the speed of the serial execution part. The parallel library reduces the image reconstruction time by about half, while the distributed grouping scheme further reduces it by about a factor of 12 for the given target object when there are 4 computing nodes.
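
The master-slave structure described above can be sketched as follows; the Kalman filter and EIT mathematics are replaced by placeholder functions, so this only illustrates the parallel first iteration on the slaves followed by the master's sequential iterations on the merged, reduced element set.

```python
# Structural sketch of the master-slave split (stand-ins, not the EIT
# math): slaves run one "first iteration" on their frame in parallel;
# the master merges the per-node element lists and finishes
# sequentially on the reduced set of unknowns.
from multiprocessing import Pool

def first_iteration(frame):
    """Slave step: return the 'basic elements' judged significant."""
    return [i for i, v in enumerate(frame) if v > 0.5]  # toy criterion

def sequential_iterations(elements):
    """Master step: iterate on the reduced unknowns (placeholder)."""
    return sorted(elements)

def reconstruct(frames):
    with Pool() as pool:
        element_lists = pool.map(first_iteration, frames)
    merged = set().union(*map(set, element_lists))  # master merges
    return sequential_iterations(merged)

if __name__ == "__main__":
    frames = [[0.1, 0.9, 0.4], [0.7, 0.2, 0.8]]
    print(reconstruct(frames))  # [0, 1, 2]
```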

An Efficient Solution Method to MDO Problems in Sequential and Parallel Computing Environments (순차 및 병렬처리 환경에서 효율적인 다분야통합최적설계 문제해결 방법)

  • Lee, Se-Jung
    • Korean Journal of Computational Design and Engineering, v.16 no.3, pp.236-245, 2011
  • Many researchers have recently studied multi-level formulation strategies to solve MDO problems; these basically distribute the coupling compatibilities across all disciplines, while single-level formulations concentrate all control at the system level. In addition, approximation techniques have become remedies for computationally expensive analyses and simulations. This paper compares MDO methods with respect to computing performance, considering both conventional sequential and modern distributed/parallel processing environments. The comparisons show that the Individual Disciplinary Feasible (IDF) formulation is the most efficient for sequential processing and that IDF with approximation (IDFa) is the most efficient for parallel processing. Results on popular design examples confirm this finding. The author suggests that design engineers should first choose the IDF formulation to solve MDO problems because of its simplicity of implementation and acceptable performance. The single drawback of IDF is that it requires more memory for local design variables and coupling variables. Adding inexpensive memory can save engineers valuable time and effort compared with complicated multi-level formulations, and it frees them from the no-solution headache of the Multi-Disciplinary Analysis (MDA) within the Multi-Disciplinary Feasible (MDF) formulation.
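
A minimal IDF sketch on a hypothetical two-discipline problem (not one of the paper's examples): the optimizer owns the design variable and the coupling variables, and equality constraints enforce compatibility, which is what lets the disciplines be evaluated independently and hence in parallel.

```python
# Toy IDF sketch on a hypothetical two-discipline problem. The optimizer
# owns z = (x, y1, y2); equality constraints force the coupling guesses
# y1, y2 to match the discipline outputs, so each discipline can be
# evaluated independently of the other.
from scipy.optimize import minimize

def discipline1(x, y2):  # produces coupling value y1
    return x**2 + y2

def discipline2(x, y1):  # produces coupling value y2
    return x + 0.5 * y1

def objective(z):
    x, y1, y2 = z
    return (x - 1.0)**2 + y1**2 + y2**2

constraints = [
    {"type": "eq", "fun": lambda z: z[1] - discipline1(z[0], z[2])},
    {"type": "eq", "fun": lambda z: z[2] - discipline2(z[0], z[1])},
]

result = minimize(objective, x0=[0.5, 0.0, 0.0], constraints=constraints)
print(result.x)  # optimal x with self-consistent coupling variables
```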

Debugging of Parallel Programs using Distributed Cooperating Components

  • Mrayyan, Reema Mohammad; Al Rababah, Ahmad AbdulQadir
    • International Journal of Computer Science & Network Security, v.21 no.12spc, pp.570-578, 2021
  • Recently, in engineering and scientific-technical computing, mathematical modeling, and real-time problems, there has been a tendency to reject sequential solutions for single-processor computers. Almost all modern application packages created in these areas are oriented toward a parallel or distributed computing environment. This is primarily due to the ever-increasing requirements for the reliability of the results obtained and the accuracy of calculations, and hence the multiply increasing volumes of processed data [2,17,41]. In addition, new methods and algorithms for solving problems are appearing whose implementation on single-processor systems would simply be impossible due to increased requirements for the performance of the computing system. The ubiquity of various types of parallel systems also plays a positive role in this process. Along with the growing demand for parallel programs and the proliferation of multiprocessor, multicore, and cluster technologies, the development of parallel programs is becoming more and more pressing, since users want to make the most of the capabilities of their modern computing equipment [14,39]. The high complexity of developing parallel programs, which often prevents efficient use of the capabilities of high-performance computers, is a generally accepted fact [23,31].

A Secure Model for Reading and Writing in Hadoop Distributed File System and its Evaluation (하둡 분산파일시스템에서 안전한 쓰기, 읽기 모델과 평가)

  • Pang, Sechung; Ra, Ilkyeun; Kim, Yangwoo
    • Journal of Internet Computing and Services, v.13 no.5, pp.55-64, 2012
  • Nowadays, as cloud computing becomes popular, the need for a DFS (distributed file system) has increased. However, in current cloud computing environments, there is no DFS framework sufficient to protect sensitive private information from attackers. Therefore, we designed and propose a secure scheme for distributed file systems. The scheme provides confidentiality and availability for a distributed file system using a secret sharing method. In this paper, we measured the speed of encryption and decryption for our proposed method and compared it with that of the SEED algorithm, which is the most popular algorithm in this field. This comparison shows the computational efficiency of our method. Moreover, the proposed secure read/write model is independent of the Hadoop DFS structure, so our modified algorithm can be easily adapted for use in HDFS. Finally, the proposed model is evaluated theoretically using a performance measurement method for distributed secret sharing models.
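
As one hedged illustration of secret sharing for confidentiality (not necessarily the paper's exact scheme), the sketch below implements (n,n) XOR sharing: a block is recoverable only when all n shares, stored on different nodes, are combined. Availability, which the paper also claims, would require a threshold variant such as Shamir's scheme.

```python
# Illustrative (n,n) XOR secret sharing; all n shares are needed to
# reconstruct, so no single storage node learns the plaintext block.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(block: bytes, n: int) -> list[bytes]:
    shares = [os.urandom(len(block)) for _ in range(n - 1)]
    last = block
    for share in shares:  # last share = block XOR all random shares
        last = xor_bytes(last, share)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    block = shares[0]
    for share in shares[1:]:
        block = xor_bytes(block, share)
    return block

if __name__ == "__main__":
    secret = b"sensitive HDFS block payload"
    shares = split(secret, 3)          # store each share on a different node
    assert combine(shares) == secret   # reading requires all shares
```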

InterCom : Design and Implementation of an Agent-based Internet Computing Environment (InterCom : 에이전트 기반 인터넷 컴퓨팅 환경 설계 및 구현)

  • Kim, Myung-Ho; Park, Kweon
    • The KIPS Transactions: Part A, v.8A no.3, pp.235-244, 2001
  • Developments in network and computer technology have led to many studies that use physically distributed computers as a single resource. Generally, these studies have focused on developing environments based on message passing. Such environments are mainly used to solve scientific computation problems in parallel by exploiting the internal parallelism of the given problems. They therefore generally provide high parallelism, but they are difficult to program and use, and they require user accounts on the distributed computers. If a given problem can be divided into completely independent subproblems, a more efficient environment can be provided. Such problems are found in bioinformatics, 3D animation, graphics, and so on, so the development of a new environment for these problems is very important. We therefore suggest a new environment called InterCom, based on proxy computing, which can solve these problems efficiently, and we explain its implementation. The environment consists of an agent, a server, and a client. Its merits are easy programming, no need for user accounts on the distributed computers, and ease of use through automatic compilation of distributed code.
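
A minimal sketch of the agent/server/client split for completely independent subproblems, with hypothetical stand-in names (not InterCom's actual API): the server farms subproblems out to agent workers and the client merely submits the job.

```python
# Hypothetical stand-ins for the agent/server/client roles, applied to
# completely independent subproblems.
from concurrent.futures import ProcessPoolExecutor

def agent_run(subproblem: int) -> int:
    """An agent executes one independent subproblem (placeholder work)."""
    return subproblem * subproblem

def server_dispatch(subproblems):
    """The server farms subproblems out to agents and gathers results."""
    with ProcessPoolExecutor() as agents:
        return list(agents.map(agent_run, subproblems))

if __name__ == "__main__":
    # Client side: submit a job made of independent pieces.
    print(server_dispatch(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```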


Distributed AI Learning-based Proof-of-Work Consensus Algorithm (분산 인공지능 학습 기반 작업증명 합의알고리즘)

  • Won-Boo Chae; Jong-Sou Park
    • The Journal of Bigdata, v.7 no.1, pp.1-14, 2022
  • The proof-of-work consensus algorithm used by most blockchains causes massive waste of computing resources in the form of mining. Useful proof-of-work consensus algorithms have been studied to reduce this waste, but resource waste and mining centralization problems remain when creating blocks. In this paper, the problem of resource waste in block generation is solved by replacing the relatively inefficient computation process for block generation with distributed artificial intelligence model training. In addition, by providing fair rewards to nodes participating in the training process, nodes with weak computing power are motivated to participate, while performance similar to the existing centralized AI training method is maintained. To show the validity of the proposed methodology, we implemented a blockchain network capable of distributed AI training, experimented with reward distribution through resource verification, and compared the results of the existing centralized training method and the blockchain-based distributed AI training method. Finally, we conclude by suggesting problems that may occur, and directions for development, when expanding the blockchain main network and the artificial intelligence model.
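
A hedged sketch of the fair-reward idea: a block reward is split among training nodes in proportion to their verified contribution. The proportional rule and the notion of "verified work units" are assumptions for illustration, not the paper's exact formula.

```python
# Illustrative reward distribution: split a block reward among training
# nodes in proportion to their verified contribution. The proportional
# weighting rule is an assumption, not the paper's exact formula.
def distribute_reward(block_reward: float,
                      verified_work: dict[str, int]) -> dict[str, float]:
    total = sum(verified_work.values())
    if total == 0:
        return {node: 0.0 for node in verified_work}
    return {node: block_reward * w / total
            for node, w in verified_work.items()}

if __name__ == "__main__":
    work = {"node_a": 120, "node_b": 60, "node_c": 20}  # verified units
    print(distribute_reward(10.0, work))
    # {'node_a': 6.0, 'node_b': 3.0, 'node_c': 1.0}
```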

Experimental verification of a distributed computing strategy for structural health monitoring

  • Gao, Y.; Spencer, B.F. Jr.
    • Smart Structures and Systems, v.3 no.4, pp.455-474, 2007
  • A flexibility-based distributed computing strategy (DCS) for structural health monitoring (SHM) has recently been proposed that is suitable for implementation on a network of densely distributed smart sensors. This approach uses a hierarchical strategy in which adjacent smart sensors are grouped together to form sensor communities. A flexibility-based damage detection method is employed to evaluate the condition of the local elements within the communities by utilizing only locally measured information. The damage detection results in these communities are then communicated to the surrounding communities and sent back to a central station. Structural health monitoring can thus be done without relying on central data acquisition and processing. The main purpose of this paper is to experimentally verify this flexibility-based DCS approach using wired sensors; such verification is essential prior to implementation on a smart sensor platform. The damage locating vector method that forms the foundation of the DCS approach is briefly reviewed, followed by an overview of the DCS approach. This flexibility-based approach is then experimentally verified using a 5.6 m long three-dimensional truss structure. To simulate damage in the structure, the original truss members are replaced by ones with a reduced cross section. Both single and multiple damage scenarios are studied. Experimental results show that the DCS approach can successfully detect damage at local elements using only locally measured information.
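
A structural sketch of the hierarchical DCS idea, with the damage locating vector mathematics omitted: each community flags damaged elements from local measurements only, and the central station merely aggregates the per-community reports. The threshold and damage indices here are illustrative stand-ins.

```python
# Hierarchical aggregation sketch (damage locating vector math omitted):
# communities evaluate only their local elements; the central station
# just merges the per-community damage reports.
def community_detect(local_indices: dict[str, float],
                     threshold: float = 0.8) -> list[str]:
    """Flag local elements whose damage index exceeds a threshold."""
    return [e for e, idx in local_indices.items() if idx > threshold]

def central_station(communities: list[dict[str, float]]) -> set[str]:
    damaged = set()
    for community in communities:
        damaged.update(community_detect(community))  # local info only
    return damaged

if __name__ == "__main__":
    c1 = {"elem1": 0.2, "elem2": 0.9}   # community 1's local indices
    c2 = {"elem7": 0.95, "elem8": 0.1}  # community 2's local indices
    print(central_station([c1, c2]))    # {'elem2', 'elem7'}
```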

A Study on Data Storage and Recovery in Hadoop Environment (하둡 환경에 적합한 데이터 저장 및 복원 기법에 관한 연구)

  • Kim, Su-Hyun; Lee, Im-Yeong
    • KIPS Transactions on Computer and Communication Systems, v.2 no.12, pp.569-576, 2013
  • Cloud computing has been receiving increasing attention recently. Despite this attention, security remains the main problem to be addressed in cloud computing. In general, a cloud computing environment protects data by using distributed servers for data storage. When the amount of data is very large, however, the pieces of a secret key (if one is used) may be divided among hundreds of distributed servers. Thus, managing the distributed servers may be very difficult simply in terms of the authentication, encryption, and decryption processes, which incur vast overheads. In this paper, we propose an efficient data storage and recovery scheme using XOR and RAID in a Hadoop environment.
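
A minimal XOR-parity sketch in the spirit of the proposed scheme (assuming RAID-4/5-style recovery, since the abstract gives no further details): one parity block equals the XOR of the data blocks, and any single lost block can be rebuilt from the survivors plus parity.

```python
# Minimal XOR-parity sketch (assumed RAID-4/5-style recovery): store a
# parity block equal to the XOR of the data blocks, then rebuild any
# single lost block from the surviving blocks plus parity.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks: list[bytes]) -> bytes:
    return reduce(xor, blocks)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one missing block from the survivors plus parity."""
    return reduce(xor, surviving, parity)

if __name__ == "__main__":
    blocks = [b"aaaa", b"bbbb", b"cccc"]
    parity = make_parity(blocks)
    lost = blocks.pop(1)                   # simulate losing one block
    assert recover(blocks, parity) == lost
```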