• Title/Summary/Keyword: client-server computing

Search Results: 275

Design and Implementation of Medical Information System using QR Code (QR 코드를 이용한 의료정보 시스템 설계 및 구현)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.16 no.2 / pp.109-115 / 2015
  • New medical device technologies for bio-signal and medical information have been developed in various forms, and information-gathering techniques and bio-signal devices are increasingly used as primary sources of medical services in everyday life. Utilization of various bio-signals is therefore growing, but security is often not taken into account. Furthermore, the medical image information and bio-signals of a patient in the medical field are generated by individual devices, so they cannot be managed and integrated together. To solve this problem, in this paper we integrate, by means of QR codes, the medical image information, including the doctor's findings, with the bio-signal information. The system implementation environment for medical imaging devices and bio-signal acquisition was configured with bio-signal measurement devices, smart devices, and PCs. For ROI extraction from bio-signals and reception of image information transferred from the medical equipment or bio-signal measurement devices, the .NET Framework was used to operate the QR server module on the Windows Server 2008 operating system. The main function of the QR server module is to parse the DICOM files generated by the medical imaging devices and to extract the identified ROI information, which is stored and managed in a database. Additionally, patient health information such as EMR and OCS records, together with the extracted ROI information needed as basic information in emergency situations, is managed via QR codes. QR codes, ROI data, and bio-signal information files are stored and managed, together with a PID (patient identification) used by the bio-signal device, according to the size of the received bio-signal information. If the received information exceeds the maximum size that can be converted into a QR code, the QR code instead carries URL information through which the bio-signal information can be accessed on the server. Likewise, the .NET Framework is installed to provide the information in the form of QR codes, so a client can look up the relevant information through a PC or an Android-based smart device. Finally, the existing medical imaging information, bio-signal information, and patient health information are integrated through the application service in order to provide a medical information service suitable for the medical field.
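
As a rough illustration of the size-threshold logic described in this abstract (encode the data directly if it fits in a QR code, otherwise encode a URL pointing back to the server), here is a minimal sketch assuming the ZXing library. The 2,953-byte limit is the binary capacity of a version-40 QR code; the server URL and payload format are hypothetical.

```java
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;
import java.nio.charset.StandardCharsets;

public class BioSignalQrEncoder {
    // Illustrative threshold: a version-40 QR code holds at most 2,953 bytes of binary data.
    private static final int MAX_QR_BYTES = 2953;

    /**
     * Encodes the payload directly if it fits in a QR code; otherwise encodes
     * a URL through which the server delivers the full bio-signal record.
     */
    public static BitMatrix encode(String patientId, String payload) throws WriterException {
        String contents = payload.getBytes(StandardCharsets.UTF_8).length <= MAX_QR_BYTES
                ? payload
                : "https://qr-server.example.org/biosignal?pid=" + patientId; // hypothetical URL
        return new QRCodeWriter().encode(contents, BarcodeFormat.QR_CODE, 300, 300);
    }
}
```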

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects such as holes and curvatures, as well as other potential causes of gas explosions. Two major data access patterns are apparent when an analyzer accesses the pipeline signal data. The first is the sequential pattern, where an analyzer reads the sensor data only once, in sequential fashion. The second is the repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating the pipeline sensor data as multiple time-series and efficiently caching those time-series in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of the signal cache line as the caching unit: a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
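
The key idea above, caching time-series signals in fixed-distance units ("signal cache lines"), could look roughly like the following sketch. The LRU eviction policy and the float[] sample representation are assumptions for illustration; the paper's own structures (e.g., smart cursors) are not reproduced here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** A minimal sketch of a client-side cache whose unit is a "signal cache line":
 *  the time-series samples of one sensor over a fixed-length distance interval. */
public class SignalCache {
    /** Cache key: (sensor id, index of the fixed-length distance interval). */
    record LineKey(int sensorId, long distanceSlot) {}

    private final int capacity;
    private final Map<LineKey, float[]> lines;

    public SignalCache(int capacity) {
        this.capacity = capacity;
        // An access-ordered LinkedHashMap gives simple LRU eviction.
        this.lines = new LinkedHashMap<LineKey, float[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<LineKey, float[]> eldest) {
                return size() > SignalCache.this.capacity;
            }
        };
    }

    public float[] get(int sensorId, long distanceSlot) {
        return lines.get(new LineKey(sensorId, distanceSlot)); // null on miss -> fetch from server
    }

    public void put(int sensorId, long distanceSlot, float[] samples) {
        lines.put(new LineKey(sensorId, distanceSlot), samples);
    }
}
```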

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society / v.7 no.11S / pp.3651-3667 / 2000
  • Recently, in internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming technology and distributed object technology. In particular, studies that try to integrate streaming services on top of distributed object technology have been progressing. These technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Besides, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules suited to specific-purpose application services. To solve these problems, in this paper we suggest a QoS integrated platform that can be extended and reused using distributed object technologies, and that guarantees the QoS of stream services. The suggested platform consists of three components: the User Control Module (UCM), the QoS Management Module (QoSM), and Stream Objects. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) maintains the QoS of the stream service between the UCMs on the client and server. As QoS control methodologies, procedures for resource monitoring, negotiation, and resource adaptation are executed via interactions among the components mentioned above. To construct this QoS integrated platform, we first implemented the modules mentioned above independently, and then used IDL to define the interfaces among them, so as to support platform independence, interoperability, and portability based on CORBA. The platform is built using OrbixWeb 3.1c following the CORBA specification on Solaris 2.5/2.7, the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module mentioned above, along with numerical data obtained from the QoS control procedures on the client's and server's GUIs while a stream service is executing on our platform.
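
A minimal Java rendering of the three-component decomposition might look as follows. The paper defines these interfaces in CORBA IDL, so the method signatures here are illustrative assumptions only, not the paper's actual interface definitions.

```java
/** Sketch of the UCM / QoSM / Stream Object decomposition described above.
 *  Interface names follow the abstract; all signatures are assumptions. */
public interface StreamObject {
    void send(byte[] rtpPacket);   // transmit an RTP packet over TCP/IP
    byte[] receive();              // receive the next RTP packet
}

interface QoSManagementModule {
    void monitorResources();                    // periodic resource monitoring
    boolean negotiate(int targetFrameRate);     // QoS negotiation with the peer QoSM
    void adaptResources(int degradedFrameRate); // resource adaptation on a QoS violation
}

interface UserControlModule {
    void startStream(StreamObject stream);      // control Stream Objects via service objects
    void stopStream(StreamObject stream);
}
```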


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
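
The routing step performed by the log collector module, real-time logs to MySQL and aggregated unstructured logs to MongoDB, might be sketched as follows using the MongoDB Java driver. The connection string, database and collection names, and the LogRecord shape are assumptions, not details from the paper.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

/** Sketch of the log collector's routing step described above. */
public class LogCollector {
    record LogRecord(String bank, String type, String rawJson, boolean realTime) {}

    private final MongoCollection<Document> mongoLogs;

    public LogCollector() {
        MongoClient client = MongoClients.create("mongodb://localhost:27017"); // assumed endpoint
        this.mongoLogs = client.getDatabase("banklogs").getCollection("unstructured");
    }

    public void route(LogRecord rec) {
        if (rec.realTime()) {
            insertIntoMySql(rec); // real-time analysis path (JDBC insert, omitted here)
        } else {
            // Free-schema document store: keep the raw log as-is plus routing metadata.
            mongoLogs.insertOne(Document.parse(rec.rawJson())
                    .append("bank", rec.bank())
                    .append("type", rec.type()));
        }
    }

    private void insertIntoMySql(LogRecord rec) { /* JDBC path omitted in this sketch */ }
}
```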

A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking / v.29 no.3 / pp.286-298 / 2002
  • The Remote Procedure Call (RPC) has traditionally been used for Inter-Process Communication (IPC) among processes in distributed computing environments. As distributed applications have become more and more complicated, the Mobile Agent paradigm for IPC has emerged. Because there are several paradigms for IPC, research efforts to evaluate and compare the performance of each paradigm have appeared recently. But the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required for providing security services. Since a real distributed environment is open, it is very vulnerable to a variety of attacks. In order to execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call with that of the Mobile Agent among IPC paradigms. We examine the security services needed to execute applications securely and propose new performance models that take those services into account. We design performance models, which describe an information retrieval system spanning N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of both. The comparison of the two performance models with security services for secure communication shows that the execution time of the Remote Procedure Call model increases sharply because of the many communications protected by heavyweight cryptographic mechanisms between hosts, whereas the execution time of the Mobile Agent model increases only gradually because the Mobile Agent paradigm reduces the number of communications between hosts.
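
The qualitative result, RPC paying two encrypted messages per service versus one agent migration per host, can be illustrated with a toy cost model. The constants below are arbitrary assumptions for the sake of the example, not the paper's Petri-net parameters.

```java
/** Toy back-of-envelope version of the RPC vs. Mobile Agent comparison above. */
public class IpcCostModel {
    static double rpcTime(int n, double roundTrip, double cryptoPerMsg) {
        // Two encrypted messages (request/reply) per database service, N services.
        return n * (roundTrip + 2 * cryptoPerMsg);
    }

    static double agentTime(int n, double migration, double cryptoPerHop, double localWork) {
        // One encrypted migration per host; queries then run locally on each host.
        return n * (migration + cryptoPerHop + localWork);
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 5, 10, 50}) {
            System.out.printf("N=%-3d RPC=%.1f Agent=%.1f%n",
                    n, rpcTime(n, 10, 20), agentTime(n, 12, 20, 1));
        }
    }
}
```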

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has been growing recently. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received a lot of attention because of their scale-out and low-cost properties. For data fault tolerance, most of these file systems used replication in the beginning. But as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has come to be considered a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the data consistency problem. We also compare the performance of two file systems, MAHA-FS and GlusterFS, which have different I/O processing architectures: the former is server-centric and the latter client-centric. We found that the erasure coding performance of MAHA-FS is better than that of GlusterFS.
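
The space-efficiency argument against replication is easy to make concrete: k+m erasure coding stores (k+m)/k times the raw data, versus 3x for triple replication. The (k, m) values below are illustrative; the abstract does not state MAHA-FS's actual coding parameters.

```java
/** Worked example of the space-efficiency comparison above. */
public class StorageOverhead {
    static double replicationOverhead(int copies) { return copies; }
    static double erasureOverhead(int k, int m) { return (k + m) / (double) k; }

    public static void main(String[] args) {
        System.out.printf("3-way replication: %.2fx raw capacity%n", replicationOverhead(3));
        System.out.printf("EC (k=8, m=2):     %.2fx raw capacity%n", erasureOverhead(8, 2)); // 1.25x
    }
}
```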

A Case Study on SK Telecom's Next Generation Marketing System Development (SK텔레콤의 차세대 마케팅 시스템 개발사례 연구)

  • Lee, Sang-Goo;Jang, Si-Young;Yang, Jung-Yeon
    • Journal of KIISE:Computing Practices and Letters / v.14 no.2 / pp.158-170 / 2008
  • In response to the changing demands of an ever more competitive market, SK Telecom has built a new marketing system that can support dynamic marketing campaigns and, at the same time, scale up to the large volumes of data and transactions of the next decade. The system, which employs a Unix-based client-server architecture (using Web browser interfaces), will replace the current mainframe-based COIS system. The project, named NGM (Next Generation Marketing), is unprecedentedly large in scale. However, both managerial and technical problems led the project into a crisis. The application framework, which depended on a software solution from a major global vendor, could not support the dynamic functionality required for the new system. In March 2005, SK Telecom declared the suspension of the NGM project. The second phase of the project started in May 2005 following a comprehensive replanning. It was decided that no single existing solution could cope with the complexity of the new system and hence the new system would be custom-built. As such, a number of technical challenges emerged. In this paper, we report on the three key dimensions of these technical challenges: middleware and application framework, database architecture and tuning, and system performance. The processes and approaches adopted in building the NGM system may be viewed as "best practices" in the telecom industry. The completed NGM system, now called the "U.key System," successfully came into operation on the ninth of October, 2006. This new infrastructure is expected to give birth to a series of innovative, fruitful, and customer-oriented applications in the near future.

A Data Allocation Method based on Broadcast Disks Using Indices over Multiple Broadcast Channels (다중방송 채널에서 인덱스를 이용한 브로드캐스트 디스크 기반의 데이타 할당 기법)

  • Lee, Won-Taek;Jung, Sung-Won
    • Journal of KIISE:Databases / v.35 no.3 / pp.272-285 / 2008
  • In this paper, we concentrate on data allocation methods for multiple broadcast channels. When the server broadcasts data, the important issue is to let mobile clients access the requested data rapidly. Previous works first sort data by their access probabilities and then allocate the sorted data by partitioning them across the multiple channels. However, they do not reflect the difference in access probabilities among the data allocated to the same channel. This paper proposes the ZGMD allocation method. ZGMD allocates data items to multiple channels so that the difference in access probability within the same channel is maximized; it allocates the sorted data to each channel and applies the Broadcast Disk scheme within each channel. ZGMD requires a proper indexing scheme for this performance improvement, because under ZGMD each channel is allocated both hot and cold data, so a sequential search heuristic would not allow a mobile client to access hot data items quickly. The proposed index scheme is based on dedicated index channels that are used to find the data channel on which the requested data resides. We show that our method achieves near-optimal performance in terms of average access time and significantly outperforms existing methods.
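
One simple way to give every channel both hot and cold items, the property the abstract attributes to ZGMD, is to stripe the probability-sorted data round-robin across the channels. The sketch below is that stand-in scheme, not the paper's actual ZGMD algorithm.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative striping of access-probability-sorted data across broadcast channels. */
public class ChannelAllocator {
    /** items must be sorted by descending access probability (hottest first). */
    static List<List<String>> stripe(List<String> items, int channels) {
        List<List<String>> alloc = new ArrayList<>();
        for (int c = 0; c < channels; c++) alloc.add(new ArrayList<>());
        for (int i = 0; i < items.size(); i++) {
            alloc.get(i % channels).add(items.get(i)); // rank i goes to channel i mod C
        }
        return alloc;
    }

    public static void main(String[] args) {
        // Each of the 2 channels receives a mix of hot and cold data:
        System.out.println(stripe(List.of("d1", "d2", "d3", "d4", "d5", "d6"), 2));
    }
}
```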

Mobile M/VC Application Framework Using Observer/Observable Design Pattern (관찰자/피관찰자 설계 패턴을 이용한 모바일 M/VC 응용 프레임워크)

  • Eum Doo-Hun
    • Journal of Internet Computing and Services / v.7 no.2 / pp.81-92 / 2006
  • Recently, the number of mobile phone and PDA users has rapidly increased, and monitoring and control applications such as geographical and traffic information systems are being used widely on wireless devices. In this paper, we introduce the mobile M/VC application framework, which supports the rapid construction of mobile monitoring and control (M/VC) applications. The framework uses the mobile Observer/Observable pattern, which extends Java's Observer/Observable for automatic interaction of server and client objects in wireless environments. It also provides the Multiplexer and Demultiplexer classes, which support the assembly of Observer and Observable objects. To construct an application using the framework, developers just need to create the necessary objects from the Observable and MobileObserver classes and interconnect them structurally (in a plug-and-play style) through Multiplexer and Demultiplexer objects. Then, state changes of Observable objects are notified to the connected Observer objects, and user input on Observer objects is propagated back to the Observable objects; this mechanism is the core process of monitoring and control applications. The mobile M/VC application framework can thus improve the productivity of mobile application development and enhance the reusability of components such as Observer and Observable objects in wireless environments.
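
The underlying interaction is the standard one from java.util's Observer/Observable, the classes the framework extends (deprecated in modern Java but still available). A minimal sketch, with hypothetical TrafficSensor/TrafficView names and the Multiplexer/Demultiplexer assembly layer omitted:

```java
import java.util.Observable;
import java.util.Observer;

/** Minimal Observer/Observable interaction of the kind the framework builds on. */
class TrafficSensor extends Observable {
    void report(int vehiclesPerMinute) {
        setChanged();                       // mark state as modified
        notifyObservers(vehiclesPerMinute); // push the change to all attached observers
    }
}

class TrafficView implements Observer {
    @Override
    public void update(Observable source, Object arg) {
        System.out.println("traffic update: " + arg + " vehicles/min"); // refresh the client UI
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        TrafficSensor sensor = new TrafficSensor();
        sensor.addObserver(new TrafficView()); // plug-and-play style connection
        sensor.report(42);
    }
}
```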


Message Interoperability in e-Logistics System (e-Logistics시스템의 메시지 상호운용성)

  • Seo Sungbo;Lee Young Joon;Hwang Jaegak;Ryu Keun Ho
    • Journal of KIISE:Computing Practices and Letters / v.11 no.5 / pp.436-450 / 2005
  • Existing B2B and B2C computer systems and applications that executed business transactions were based on client-server architectures consisting of heterogeneous hardware and software, including personal computers and mainframes. With the active boom of electronic business, the integration and compatibility of exchanged data, applications, and hardware have emerged as a hot issue. This paper designs and implements a message transport system and a document transformation system in order to solve the interoperability problem of an integrated logistics system for e-Business. The message transport system integrates ebMS 2.0, the standard business message exchange format of ebXML (the international standard electronic commerce framework), with the JMS of J2EE to enable reliable messaging. The document transformation system converts non-standard XML documents into standard XML documents and provides web services after integration with the message system. Using the suggested business scenario and various test data, our message-oriented system proved to be interoperable and stable. We participated in the ebXML messaging interoperability test organized by the ebXML Asia Committee ITG in order to evaluate and certify the suitability of the message system.
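
The document-transformation step, converting a non-standard XML document into a standard one, is commonly done with an XSLT stylesheet through the standard JAXP API, as in the sketch below. The file names and stylesheet are hypothetical; the paper's actual mapping rules are not shown.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

/** Sketch of an XSLT-based document transformation via JAXP. */
public class DocumentTransformer {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("nonstandard-to-ebxml.xsl")));
        // Apply the stylesheet: non-standard input -> standard ebXML business document.
        t.transform(new StreamSource(new File("order-legacy.xml")),
                    new StreamResult(new File("order-ebxml.xml")));
    }
}
```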