• Title/Summary/Keyword: Service-Oriented Computing


Design and Implementation of Data Binder for Dynamic Data Delivery in Healthcare Service (헬스케어 서비스에서 동적인 데이터 전달을 위한 데이터 결합기 설계 및 구현)

  • Kang, Kyu-Chang;Lee, Jeun-Woo;Choi, Hoon
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.891-898
    • /
    • 2009
  • This paper suggests a producer/consumer-based Data Binder that enables applications and biomedical devices developed by mutually different vendors to transfer data dynamically. Data Binder is implemented as a bundle on the OSGi platform, which provides a component-based programming model and a service-oriented operation architecture. Data Binder overcomes the limitation of the OSGi WireAdmin service, which supports only static data delivery between a producer and a consumer of data. Data Binder normalizes an application requirement as an application descriptor and a device capability as a device descriptor, so that it enables dynamic data delivery by forming data producer/consumer pairs at runtime. Therefore, Data Binder can be used for connection management of the data link between a data producer and a data consumer in sensor-based application development. The objective of this paper is to facilitate healthcare service development by separating a data producer such as a biomedical device from a data consumer such as a healthcare application.
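
  A minimal Java sketch of the descriptor-matching idea described above, offered only as a rough illustration; the class and field names here are assumptions, not the paper's actual Data Binder API:

    import java.util.List;
    import java.util.Optional;

    // Hypothetical descriptors: a device advertises what it produces, an application
    // declares what it consumes; the binder pairs them at runtime.
    record DeviceDescriptor(String deviceId, String producedDataType) {}
    record ApplicationDescriptor(String appId, String consumedDataType) {}

    final class DataBinderSketch {
        /** Pair an application (consumer) with the first device (producer) whose
         *  capability satisfies the application's data-type requirement. */
        static Optional<DeviceDescriptor> bind(ApplicationDescriptor app,
                                               List<DeviceDescriptor> devices) {
            return devices.stream()
                    .filter(d -> d.producedDataType().equals(app.consumedDataType()))
                    .findFirst();
        }

        public static void main(String[] args) {
            List<DeviceDescriptor> devices = List.of(
                    new DeviceDescriptor("pulse-oximeter-01", "SpO2"),
                    new DeviceDescriptor("bp-monitor-02", "BloodPressure"));
            ApplicationDescriptor app = new ApplicationDescriptor("heart-monitor-app", "SpO2");

            bind(app, devices).ifPresentOrElse(
                    d -> System.out.println("Bound " + app.appId() + " to " + d.deviceId()),
                    () -> System.out.println("No matching producer for " + app.appId()));
        }
    }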

Dynamic Discovery of Geographically Cohesive Services in Internet of Things Environments (사물인터넷 환경에서 지리적 응집도를 고려한 동적 서비스 검색방법)

  • Baek, KyeongDeok;Kim, MinHyeop;Ko, InYoung
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.893-901
    • /
    • 2016
  • In Internet of Things (IoT) environments, users are required to search for IoT devices necessary to access services for accomplishing their tasks. As IoT technologies advance, a user task will utilize various types of IoT-based services that are deployed in an IoT environment. Therefore, to accomplish a user task effectively, the services that utilize IoT devices need to be found in a certain geographical region. In addition, the service discovery needs to be accomplished in a stable manner while considering dynamically changing IoT environments. To deal with these issues, we propose two service discovery methods that consider geographic cohesiveness of services in IoT environments. We compare the effectiveness of the proposed methods against a traditional service discovery algorithm that does not consider geographic cohesiveness.
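
  To make the notion of geographic cohesiveness concrete, here is a small illustrative Java sketch (not the paper's algorithms): candidate service instances are ranked by their mean distance to the services already selected for a task, so a greedy choice keeps the selection geographically cohesive.

    import java.util.Comparator;
    import java.util.List;

    // Illustrative only: smaller cost means a geographically more cohesive selection.
    record GeoPoint(double lat, double lon) {}
    record ServiceInstance(String id, GeoPoint location) {}

    final class CohesiveDiscoverySketch {
        // Plain Euclidean distance on coordinates; a real system would use a
        // geodesic distance such as haversine.
        static double distance(GeoPoint a, GeoPoint b) {
            double dLat = a.lat() - b.lat();
            double dLon = a.lon() - b.lon();
            return Math.sqrt(dLat * dLat + dLon * dLon);
        }

        /** Mean distance from a candidate to the services chosen so far. */
        static double cohesionCost(ServiceInstance candidate, List<ServiceInstance> selected) {
            return selected.stream()
                    .mapToDouble(s -> distance(candidate.location(), s.location()))
                    .average()
                    .orElse(0.0);
        }

        /** Greedy step: pick the candidate that keeps the selection most cohesive. */
        static ServiceInstance pickMostCohesive(List<ServiceInstance> candidates,
                                                List<ServiceInstance> selected) {
            return candidates.stream()
                    .min(Comparator.comparingDouble(
                            (ServiceInstance c) -> cohesionCost(c, selected)))
                    .orElseThrow();
        }
    }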

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, it is difficult in existing computing environments to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and it can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data increases rapidly, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
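
  As a rough illustration of the schema-free storage idea, the following Java sketch inserts one unstructured log record with the MongoDB Java driver; the database name, collection name, and log fields are assumptions for illustration, not the paper's implementation:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import java.util.Date;

    public final class LogInsertSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> logs =
                        client.getDatabase("banklog").getCollection("unstructured_logs");

                // Documents in the same collection may carry different fields, which is
                // what makes a flexible schema suitable for unstructured logs.
                logs.insertOne(new Document("ts", new Date())
                        .append("source", "teller-app")
                        .append("raw", "09:12:03 client session opened"));
            }
        }
    }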

Development of Customized Trip Navigation System Using Open Government Data (공공데이터를 활용한 맞춤형 여행 네비게이션 시스템 구현)

  • Shim, Beomsoo;Lee, Hanjun;Yoo, Donghee
    • Journal of Internet Computing and Services
    • /
    • v.17 no.1
    • /
    • pp.15-21
    • /
    • 2016
  • Under the banner of the creative economy, the Korean government is now releasing public data in order to develop or provide a range of services. In this paper, we develop a customized trip navigation system that recommends a trip itinerary based on the integration of open government data and personal tourist data. The system uses case-based reasoning (CBR) to provide a personalized trip navigation service. The main difference between existing trip information systems and ours is that our system offers a user-oriented information service. In addition, our system supports turn-key style content provision to maximize convenience. Our system can be a good example of the way in which open government data can be used to design a new service.
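
  A minimal Java sketch of the CBR retrieval step, under assumed feature names and a simple similarity measure (neither is taken from the paper): the stored trip case most similar to the user's preference profile is retrieved as the basis for the recommended itinerary.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    // Hypothetical case representation: each past trip is described by a preference
    // feature vector; itinerary names and feature names are illustrative only.
    record TripCase(String itineraryId, Map<String, Double> features) {}

    final class CbrRetrievalSketch {
        /** Similarity = 1 / (1 + L1 distance) over the query's preference features. */
        static double similarity(Map<String, Double> query, TripCase past) {
            double dist = query.entrySet().stream()
                    .mapToDouble(e -> Math.abs(
                            e.getValue() - past.features().getOrDefault(e.getKey(), 0.0)))
                    .sum();
            return 1.0 / (1.0 + dist);
        }

        /** Retrieve the stored trip case most similar to the user's profile. */
        static TripCase retrieveBestCase(Map<String, Double> query, List<TripCase> caseBase) {
            return caseBase.stream()
                    .max(Comparator.comparingDouble((TripCase c) -> similarity(query, c)))
                    .orElseThrow();
        }

        public static void main(String[] args) {
            List<TripCase> caseBase = List.of(
                    new TripCase("jeju-food-tour", Map.of("food", 0.9, "nature", 0.4)),
                    new TripCase("seoraksan-hiking", Map.of("food", 0.2, "nature", 0.95)));
            Map<String, Double> userPreferences = Map.of("food", 0.8, "nature", 0.3);
            System.out.println(retrieveBestCase(userPreferences, caseBase).itineraryId());
        }
    }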

A Bandwidth Allocation Method by Internet Service Types - Focus on Elementary and Middle Schools (인터넷 서비스 유형별 대역폭 할당 방안 - 초.중등학교 중심으로)

  • Park, Hyeong-Yong;Hwang, Jun;Kim, Jae-Hyoun
    • Journal of Internet Computing and Services
    • /
    • v.11 no.1
    • /
    • pp.49-57
    • /
    • 2010
  • Today, internet traffic is rapidly increasing due to the appearance of a large number of websites offered through numerous internet service providers and the introduction of advanced teaching methods such as ICT (Information and Communication Technology)-based education. In these circumstances, schools are asking for more internet bandwidth; however, it is difficult to meet this request quickly because of government budget limits and technical problems. This paper therefore proposes a method to raise the efficiency of internet operations within the current bandwidth. To this end, the internet traffic currently being generated in schools is analyzed, and an internet bandwidth distribution model that differs from the existing standpoint is developed from the analysis results. The model can be used to predict the bandwidth that schools may require in the future, and it can support the office of education in allocating appropriate bandwidth to schools and reducing school telecommunication budgets. It can also be adopted as a future-oriented reference model for network policy.
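
  The following Java sketch illustrates one simple way such a distribution model could work, purely as an assumed example (the paper's actual model is not reproduced): the school's available bandwidth is split across internet service types in proportion to each type's measured traffic share.

    import java.util.LinkedHashMap;
    import java.util.Map;

    final class BandwidthAllocationSketch {
        /** Split totalMbps across service types in proportion to their traffic share. */
        static Map<String, Double> allocate(double totalMbps, Map<String, Double> trafficShare) {
            double sum = trafficShare.values().stream().mapToDouble(Double::doubleValue).sum();
            Map<String, Double> allocation = new LinkedHashMap<>();
            trafficShare.forEach((service, share) ->
                    allocation.put(service, totalMbps * share / sum));
            return allocation;
        }

        public static void main(String[] args) {
            // Hypothetical measured traffic mix for one school (percent of observed traffic).
            Map<String, Double> mix = Map.of("web", 45.0, "streaming", 35.0, "file-transfer", 20.0);
            allocate(100.0, mix).forEach((service, mbps) ->
                    System.out.printf("%s: %.1f Mbps%n", service, mbps));
        }
    }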

Fat Client-Based Abstraction Model of Unstructured Data for Context-Aware Service in Edge Computing Environment (에지 컴퓨팅 환경에서의 상황인지 서비스를 위한 팻 클라이언트 기반 비정형 데이터 추상화 방법)

  • Kim, Do Hyung;Mun, Jong Hyeok;Park, Yoo Sang;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.59-70
    • /
    • 2021
  • With the recent advancements in the Internet of Things, context-aware systems that provide customized services have become important. Existing context-aware systems analyze data generated around the user and abstract context information that expresses the state of the situation. However, such data are mostly unstructured and difficult to process with simple approaches, so providing context-aware services based on them should be managed in a simplified way. A representative example involving such unstructured data is a deep learning application. Processes in deep learning applications are strongly coupled in the way the dataset is abstracted from the acquisition phase to the analysis phase, so they are less flexible in terms of functional scalability when the target analysis model or application is modified. Therefore, we propose an abstraction model that separates these phases and processes the unstructured dataset for analysis. The proposed abstraction model uses a description language named Analysis Model Description Language (AMDL) to deploy the analysis phases onto fat clients, which are instances specifically designed for resource-oriented tasks in edge computing environments, and defines how different analysis applications and their factors are handled using AMDL and fat client profiles. The experiment shows functional scalability through examples of AMDL and fat client profiles targeting a vehicle image recognition model for a vehicle access control notification service, and conducts process-by-process monitoring of the collection, preprocessing, and analysis of unstructured data.
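
  The following Java sketch illustrates only the decoupling idea, with "AMDL" reduced to a plain list of named steps; the names and structure are assumptions and not the paper's actual language, profiles, or fat client design.

    import java.util.List;
    import java.util.function.Function;

    // A phase is described by a name plus the step it performs; the description can be
    // changed without touching the pipeline runner, which is the decoupling being shown.
    record PhaseDescription(String name, Function<Object, Object> step) {}

    final class FatClientPipelineSketch {
        /** Run the phases in the order they are described, passing data between them. */
        static Object run(List<PhaseDescription> description, Object rawInput) {
            Object data = rawInput;
            for (PhaseDescription phase : description) {
                System.out.println("Running phase: " + phase.name());
                data = phase.step().apply(data);
            }
            return data;
        }

        public static void main(String[] args) {
            List<PhaseDescription> vehicleRecognition = List.of(
                    new PhaseDescription("acquisition", frame -> frame),      // e.g. grab a camera frame
                    new PhaseDescription("preprocessing", frame -> frame),    // e.g. resize/normalize
                    new PhaseDescription("analysis", frame -> "plate:1234")); // e.g. run the model
            System.out.println(run(vehicleRecognition, new Object()));
        }
    }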

A Storage and Retrieval System for Structured SGML Documents using Grove (Grove를 이용한 구조적 SGML문서의 저장 및 검색)

  • Kim, Hak-Gyoon;Cho, Sung-Bae
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.8 no.5
    • /
    • pp.501-509
    • /
    • 2002
  • SGML (ISO 8879) has proliferated because it supports various document styles and allows documents to be transferred across different platforms. SGML documents contain logical structure information in addition to content. As SGML documents become widely used, there is an increasing need for a database storage and retrieval system that uses the logical structure of documents. However, traditional search engines based on document indexes cannot exploit this logical structure. In this paper, we have developed an SGML document storage system that is DTD-independent and stores the document type and the document instance separately by using Grove, the document model for DSSSL and HyTime. We have used ObjectStore, an object-oriented DBMS, to store the structure information without any loss. We have also provided an index structure for search efficiency, as in relational DBMSs, and constructed an effective user interface that combines content-based search with structure-based search.
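
  As a loose illustration of the grove idea of keeping structure alongside content, here is a hypothetical Java sketch (not the paper's ObjectStore schema): each node records its element type, its text content, and its children, so a content-based keyword search can be restricted to a given element type.

    import java.util.ArrayList;
    import java.util.List;

    final class GroveNode {
        final String elementType;                       // e.g. "chapter", "title", "para"
        final String content;                           // text content; empty for structural nodes
        final List<GroveNode> children = new ArrayList<>();

        GroveNode(String elementType, String content) {
            this.elementType = elementType;
            this.content = content;
        }

        /** Combined search: nodes of a given element type whose content contains a keyword. */
        static void search(GroveNode node, String type, String keyword, List<GroveNode> hits) {
            if (node.elementType.equals(type) && node.content.contains(keyword)) {
                hits.add(node);
            }
            for (GroveNode child : node.children) {
                search(child, type, keyword, hits);
            }
        }
    }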

A Monitoring System based on Layered Architecture (계층형 구조를 기반으로 한 모니터링 시스템)

  • Kwon, Sung-Ju;Choi, Jae-Young;Lee, Ji-Soo
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.33 no.7
    • /
    • pp.440-447
    • /
    • 2006
  • Grid computing involves complex deployments of various hardware and software components, and the Grid environment should provide a mechanism for real-time monitoring and notification. Implementing such a monitoring mechanism in the Grid environment is therefore very important. Most existing monitoring systems focus only on their own requirements. With the development of Grid computing technology, extensible monitoring systems have become more and more feasible and popular. In this paper, we describe our research and development work on M-Mon, a novel framework for a flexible and adaptive Grid monitoring system. The M-Mon system focuses on critical issues such as scalability, reusability, runtime extensibility, protocol transparency, and uniform data representation. To provide interoperability with other monitoring systems and to reuse legacy facilities with minimum effort, our monitoring system has been developed using a service-oriented architecture.
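
  A hypothetical Java sketch of the layered idea only (these interfaces are not M-Mon's API): sensors produce events in a uniform representation, and pluggable transports deliver them, which is what gives protocol transparency.

    import java.util.List;

    // Uniform data representation shared by every layer.
    record MonitoringEvent(String resource, String metric, double value, long timestamp) {}

    interface Sensor {                          // sensor layer: produces uniform events
        MonitoringEvent sample();
    }

    interface Transport {                       // transport layer: protocol-specific delivery
        void send(MonitoringEvent event);
    }

    final class MonitoringCoreSketch {          // core layer: wires sensors to transports
        static void collectOnce(List<Sensor> sensors, List<Transport> transports) {
            for (Sensor sensor : sensors) {
                MonitoringEvent event = sensor.sample();
                transports.forEach(t -> t.send(event));  // same event, any protocol
            }
        }
    }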

Design and Evaluation of a GQS-based Fog Pub/Sub System for Delay-Sensitive IoT Applications (지연 민감형 IoT 응용을 위한 GQS 기반 포그 Pub/Sub 시스템의 설계 및 평가)

  • Bae, Ihn-Han
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1369-1378
    • /
    • 2017
  • The Pub/Sub (Publish/Subscribe) paradigm is a simple and easy-to-use model for interconnecting applications in a distributed environment. In general, subscribers register their interest in a topic or a pattern of events and then asynchronously receive events matching that interest, regardless of the events' publisher. In order to build a low-latency, lightweight pub/sub system for Internet of Things (IoT) services, we propose a GQSFPS (Group Quorum System-based Fog Pub/Sub) system that is a core component of the event-driven, service-oriented architecture framework for IoT services. The GQSFPS organizes the pub/sub brokers installed in the fog servers into a group-quorum-based P2P (peer-to-peer) topology for efficient searching and low-latency access to events. Therefore, IoT events are cached on the basis of group quorums, and the delay-sensitive IoT applications of edge devices can effectively access the cached events from the group quorum fog servers with low latency. The performance of the proposed GQSFPS is evaluated through an analytical model and is compared to the GQPS (grid quorum-based pub/sub system).
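
  The quorum intuition can be sketched with the simpler grid quorum that the paper uses as its baseline (GQPS): brokers are arranged in a sqrt(N) x sqrt(N) grid, events are published to one row, and subscriptions query one column, so every subscribe quorum intersects every publish quorum in exactly one broker. The Java sketch below shows only this baseline construction; the paper's group quorum differs in detail and is not reproduced.

    import java.util.ArrayList;
    import java.util.List;

    final class GridQuorumSketch {
        final int side;                                   // grid is side x side brokers
        GridQuorumSketch(int side) { this.side = side; }

        int brokerId(int row, int col) { return row * side + col; }

        /** Brokers an event is cached on: the whole row of the publisher's home broker. */
        List<Integer> publishQuorum(int homeBroker) {
            int row = homeBroker / side;
            List<Integer> quorum = new ArrayList<>();
            for (int col = 0; col < side; col++) quorum.add(brokerId(row, col));
            return quorum;
        }

        /** Brokers a subscriber queries: the whole column of its home broker. */
        List<Integer> subscribeQuorum(int homeBroker) {
            int col = homeBroker % side;
            List<Integer> quorum = new ArrayList<>();
            for (int row = 0; row < side; row++) quorum.add(brokerId(row, col));
            return quorum;
        }
    }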

Design and Implementation of Cloud-based Sensor Data Management System (클라우드 기반 센서 데이터 관리 시스템 설계 및 구현)

  • Park, Kyoung-Wook;Kim, Kyong-Og;Ban, Kyeong-Jin;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.6
    • /
    • pp.672-677
    • /
    • 2010
  • Recently, an efficient management system for large-scale sensor data has been required due to the increasing deployment of large-scale sensor networks. In this paper, we propose a cloud-based sensor data management system with low cost, high scalability, and efficiency. Sensor data in sensor networks are transmitted to the cloud through a cloud gateway, at which point outlier detection and event processing are performed. The transmitted sensor data are stored in Hadoop HBase, a distributed column-oriented database, and processed in parallel by a query processing module designed on the MapReduce model. The proposed system can work with applications on a variety of platforms, because the processed results are provided through a REST-based web service.
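
  A minimal Java sketch of writing one sensor reading into HBase with the standard client API; the table name, column family, and row-key scheme are assumptions for illustration, not the paper's actual schema.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import java.io.IOException;

    public final class SensorWriteSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("sensor_readings"))) {

                // Row key combines sensor id and timestamp so readings cluster per sensor.
                String rowKey = "sensor-42#" + System.currentTimeMillis();
                Put put = new Put(Bytes.toBytes(rowKey));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("temperature"), Bytes.toBytes("23.7"));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("unit"), Bytes.toBytes("C"));
                table.put(put);
            }
        }
    }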