• Title/Summary/Keyword: Web Service System (웹서비스시스템)

Search Results: 2,563

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of volume (the amount of data), velocity (the speed of data input and output), and variety (the range of data types). Because SNS Big Data covers the whole of society, the trend of an issue discovered in it can serve as an important new source for creating value. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) convey the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of a keyword retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process unrefined forms of unstructured data. In addition, such analysis requires the latest big data technologies, such as the Hadoop distributed system or NoSQL databases (an alternative to relational databases), to rapidly process a large amount of real-time data. We built TITS on Hadoop to optimize the processing of big data because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, a NoSQL, open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its foremost goals are data accessibility and data processing performance. In the Age of Big Data, visualization is increasingly attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to arbitrary data; this makes interaction with the data easy and is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a framework of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries, and it is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS).
Based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system, and make the system available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
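
To make the topic-extraction step above concrete, the following is a minimal sketch of daily topic-keyword extraction with LDA. It assumes gensim and a naive whitespace tokenizer in place of the Korean noun extraction and the Hadoop/MongoDB pipeline described in the paper, so it illustrates the technique rather than reproducing TITS itself.

```python
# Minimal sketch of the daily topic-keyword extraction idea behind TITS.
# gensim and the naive tokenizer are assumptions; the paper's pipeline uses
# Korean noun extraction, Hadoop, and MongoDB, which are omitted here.
from gensim import corpora, models

def daily_topic_keywords(tweets, num_topics=10, top_n=5, stopwords=frozenset()):
    """Return the top keywords of each topic for one day's worth of tweets."""
    docs = [[w for w in t.lower().split() if w not in stopwords] for t in tweets]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics)
    return [[word for word, _ in lda.show_topic(t, topn=top_n)]
            for t in range(num_topics)]

# Hypothetical usage for one day's (English) tweets.
print(daily_topic_keywords(["heavy rain floods the downtown area",
                            "election debate scheduled for tonight"],
                           num_topics=2, top_n=3))
```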

Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook; Park, Sungbum
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.151-171 / 2014
  • Firms today seek management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce their capital costs. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the Internet. It embodies the ideas of on-demand, pay-per-use, and utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industries and e-commerce. In this study, therefore, we seek to quantify the value that SaaS contributes to business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing from prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we consider customer acquisition as the measure of business performance. In this sense, the specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, this study collects data on SaaS providers and their lines of applications registered in the CNK (Commerce net Korea) database, using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within a software provider. A total of 199 SBUs are used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has diversified its supply channels), and the number of specialized experts, and two items for the low-cost strategy, namely the subscription fee and the initial set-up fee. We employ hierarchical regression analysis for testing the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. Empirical results reveal that, first, when the differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower.
Second, the results indicate that pursuing a differentiation strategy or a low-cost strategy effectively helps SaaS providers obtain customers, which means that continuously differentiating their service from others or lowering their service fees (the subscription fee or initial set-up fee) is helpful for business success in terms of customer acquisition. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision on whether or not to subscribe.
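
As a concrete illustration of the analysis described above, the sketch below shows the moderation test (an interaction term added in a hierarchical regression) and Baron and Kenny's mediation steps using statsmodels. The column names and the synthetic data are hypothetical placeholders, not the paper's measurement items or sample.

```python
# Sketch of the moderation and mediation tests described above (statsmodels).
# Variable names and the synthetic data are hypothetical, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def moderation_test(df):
    """Hierarchical regression: does maturity moderate the strategy effect?"""
    base = smf.ols("acquisition ~ differentiation + maturity", data=df).fit()
    full = smf.ols("acquisition ~ differentiation * maturity", data=df).fit()
    return base.rsquared, full.rsquared  # an R-squared increase signals moderation

def baron_kenny(df):
    """Baron & Kenny steps: maturity mediating the low-cost strategy effect."""
    step1 = smf.ols("acquisition ~ low_cost", data=df).fit()             # X -> Y
    step2 = smf.ols("maturity ~ low_cost", data=df).fit()                # X -> M
    step3 = smf.ols("acquisition ~ low_cost + maturity", data=df).fit()  # X, M -> Y
    return step1.params, step2.params, step3.params

rng = np.random.default_rng(0)
n = 199  # same sample size as the paper's SBUs; the data themselves are synthetic
df = pd.DataFrame({"differentiation": rng.normal(size=n),
                   "low_cost": rng.normal(size=n),
                   "maturity": rng.normal(size=n)})
df["acquisition"] = (0.3 * df.differentiation * df.maturity
                     + 0.4 * df.low_cost + rng.normal(size=n))
print(moderation_test(df))
print(baron_kenny(df)[2])
```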

A Study on Design of Agent based Nursing Records System in Attending System (에이전트기반 개방병원 간호기록시스템 설계에 관한 연구)

  • Kim, Kyoung-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.73-94 / 2010
  • The attending system is a medical system that allows doctors in clinics to use spare hospital facilities (beds, laboratories, operating rooms, etc.) for their patients' care under a contract between the doctors and hospitals. Therefore, the system is very beneficial in terms of the efficient usage of medical resources. However, it is necessary to develop a strong support system to address its weaknesses and build on its merits. If doctors use hospital beds under the attending system, they are able to check a patient's condition often and provide nursing care services. However, the current attending system lacks delivery and assistance support. Thus, for the successful performance of the attending system, a networking system should be developed to facilitate communication between the doctors and nurses. In particular, the nursing records in the attending system could help doctors monitor the patient's condition and the provision of nursing care services. A nursing record is the formal documentation associated with nursing care. It is not merely a data repository that helps nurses track their activities; nursing records also represent a source of primary information that can be reused. In order to maximize their usefulness, nursing records have been introduced as part of computerized patient records. However, nursing records are internal data that are not disclosed by hospitals. Moreover, the lack of standardization of the record list makes it difficult to share nursing records. Under the attending system, nurses want to minimize the effort they have to put into maintaining additional records. Hence, they try to maintain the current level of nursing records in the form of record lists and record attributes, while doctors require more detailed and real-time information about their patients in order to monitor their condition. Therefore, this study developed a system for assisting in the maintenance and sharing of nursing records under the attending system. In contrast to previous research on the functionality of computer-based nursing records, we have emphasized the practical usefulness of nursing records from the viewpoint of the actual implementation of the attending system. We suggest that nurses could design a nursing record dictionary for their convenience, and that doctors and nurses could confirm the definitions that they look up in the dictionary through negotiations with intelligent agents. Such an agent-based system could facilitate networking among medical institutes. Multi-agent systems are a widely accepted paradigm for the distribution and sharing of computation workloads in the scientific community. Agent-based systems have been developed with differences in functional cooperation, coordination, and negotiation. To support such communication, a framework for a multi-agent-based system is proposed in this study. The agent-based approach is useful for developing a system that promotes trade-offs between transactions involving multiple attributes. A brief summary of our contributions follows. First, we propose an efficient and accurate utility representation and acquisition mechanism based on a preference scale while minimizing user interactions with the agent. Trade-offs between various transaction attributes can also be easily computed.
Second, by providing a multi-attribute negotiation framework based on the attribute utility evaluation mechanism, we allow both the doctors in charge and nurses to negotiate over various transaction attributes in the nursing record lists that are defined by the latter. Third, we have designed the architecture of the nursing record management server and a system of agents that provides support to the doctors and nurses with regard to the framework and mechanisms proposed above. A formal protocol has also been developed to create and control the communication required for negotiations. We verified the realization of the system by developing a web-based prototype. The system was implemented using ASP and IIS5.1.
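
To illustrate the attribute utility evaluation that the negotiation framework builds on, here is a small sketch of an additive multi-attribute utility function. The attributes, weights, and preference scales are invented for illustration and are not taken from the paper's record dictionary.

```python
# Sketch of an additive multi-attribute utility evaluation for agent
# negotiation. The attributes, weights, and scales below are hypothetical.
def utility(offer, weights, scales):
    """Weighted sum of per-attribute preference scores (0..1) for an offer."""
    return sum(weights[a] * scales[a][offer[a]] for a in weights)

doctor_weights = {"update_frequency": 0.6, "detail_level": 0.4}
doctor_scales = {
    "update_frequency": {"hourly": 1.0, "per_shift": 0.5, "daily": 0.2},
    "detail_level": {"full": 1.0, "summary": 0.4},
}

# A doctor's agent can rank the nurses' candidate offers and pick the best one,
# which is how trade-offs between transaction attributes can be computed.
offers = [{"update_frequency": "per_shift", "detail_level": "full"},
          {"update_frequency": "daily", "detail_level": "summary"}]
best = max(offers, key=lambda o: utility(o, doctor_weights, doctor_scales))
print(best, utility(best, doctor_weights, doctor_scales))
```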

A Dynamical Load Balancing Method for Data Streaming and User Request in WebRTC Environment (WebRTC 환경에 데이터 스트리밍 및 사용자 요청에 따른 동적로드 밸런싱 방법)

  • Ma, Linh Van; Park, Sanghyun; Jang, Jong-hyun; Park, Jaehyung; Kim, Jinsul
    • Journal of Digital Contents Society / v.17 no.6 / pp.581-592 / 2016
  • WebRTC has quickly grown into one of the world's most advanced real-time communication technologies on platforms such as the web and mobile. In spite of this advantage, the current WebRTC technology does not efficiently handle large data streams between peers or a large number of user requests on the signaling server. Therefore, in this paper, we address the problem by directing the flow of data with dynamic load-balancing algorithms. We analyze the requesting users and direct their streaming requests to a load-balancing component. More specifically, the component determines the amount of requested resources and the resources available on the response server, and then delivers streaming data to the requesting user in parallel or alternately. To show how the method works, we first demonstrate the load-balancing algorithm using the network simulation tool OPNET and then implement the method on an Ubuntu server. In addition, we compare the results of our work with the original implementation of WebRTC, which shows that the proposed method performs more efficiently and dynamically than the original.
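
The following sketch illustrates the resource-aware dispatch idea described above: compare the requested resource with the resources available on the response servers and split or redirect the stream accordingly. The server model and the numbers are illustrative assumptions, not the paper's OPNET simulation or Ubuntu implementation.

```python
# Sketch of resource-aware dispatch for streaming requests. The bandwidth
# figures and server fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity_mbps: float
    load_mbps: float

    @property
    def available(self) -> float:
        return self.capacity_mbps - self.load_mbps

def dispatch(request_mbps: float, servers: list[Server]) -> list[tuple[str, float]]:
    """Assign the requested bandwidth across servers, largest headroom first."""
    plan, remaining = [], request_mbps
    for s in sorted(servers, key=lambda s: s.available, reverse=True):
        if remaining <= 0:
            break
        share = min(remaining, s.available)
        if share > 0:
            plan.append((s.name, share))
            s.load_mbps += share
            remaining -= share
    return plan  # an empty or partial plan means the request must wait or degrade

servers = [Server("relay-1", 100, 80), Server("relay-2", 100, 30)]
print(dispatch(50, servers))  # e.g. [('relay-2', 50.0)]
```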

Development of Evaluation Framework and Professional Evaluation of Health Information Predictability (건강정보의 예보성 평가준거를 활용한 전문가 평가결과 분석연구)

  • Kang, Min-Sug; Lee, Moo-Sik; Hong, Jee-Young; Kim, Sang-Ha
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.10 / pp.2966-2973 / 2009
  • In this article, we propose effective strategies for improving the Predictive Health Care service. The results of the qualitative study on health information show the following order, from the highest score: whether health information is scientifically sound ($3.7\pm0.5$), whether people can easily understand health information ($3.6\pm0.5$), whether health information reflects the public's concerns ($3.5\pm0.5$), and whether health information includes enough information to satisfy the public ($2.9\pm0.6$). The most pressing reforms for effective Predictive Health Care are to provide enough health information and to collect information regularly, because the Predictive Health Care service has not provided enough information, authoritative information has rarely been offered, and methodological limitations on producing and applying predictive information have not been addressed. Although the Predictive Health Care service provides online services such as a web-based epidemic reporting system, it needs to extend its services from epidemic information to general health information, given the insufficient promotion of the service and the limited credibility of the information offered so far. Lastly, the Predictive Health Care service needs to strengthen efforts to collect information, form common ground between the information and the public's concerns, clarify its classification system of information, and offer an easy way for the public to use the information.

Development of Intelligent Learning Tool based on Human eyeball Movement Analysis for Improving Foreign Language Competence (외국어 능력 향상을 위한 사용자 안구운동 분석 기반의 지능형 학습도구 개발)

  • Shin, Jihye; Jang, Young-Min; Kim, Sangwook; Mallipeddi, Rammohan; Bae, Jungok; Choi, Sungmook; Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.153-161 / 2013
  • Recently, there has been a tremendous increase in the availability of educational materials for foreign language learning. As part of this trend, there has been an increase in the amount of electronically mediated materials available. However, conventional educational content developed using computer technology has typically provided one-way information, which is not the most helpful for users. Improving the user's convenience requires additional off-line analysis to diagnose an individual user's learning. To improve the user's comprehension of texts written in a foreign language, we propose an intelligent learning tool based on the analysis of the user's eyeball movements, which is able to diagnose and improve foreign language reading ability by providing necessary supplementary aid just when it is needed. To determine the user's learning state, we correlate their eye movements with findings from research in cognitive psychology and neurophysiology. Based on this, the learning tool can distinguish whether or not users know a word while they are reading foreign language sentences. If the learning tool judges a word to be unknown, it immediately provides the student with the meaning of the word by extracting it from an on-line dictionary. The proposed model provides a tool which empowers independent learning and makes access to the meanings of unknown words automatic. In this way, it can enhance a user's reading achievement as well as satisfaction with text comprehension in a foreign language.
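
The following sketch illustrates the word-level decision described above: treat unusually long or repeated fixations on a word as a sign that it is unknown, then fetch its meaning. The 600 ms threshold, the regression count, and the dictionary stub are made-up values for illustration; the paper grounds its criteria in cognitive psychology and neurophysiology findings rather than in fixed numbers.

```python
# Illustrative heuristic for flagging unknown words from eye-movement features.
# Thresholds and the dictionary are assumptions, not the paper's criteria.
def looks_unknown(fixations_ms: list[float], regressions: int) -> bool:
    """Long total fixation time or repeated re-reading suggests an unknown word."""
    return sum(fixations_ms) > 600 or regressions >= 2

def annotate(word, fixations_ms, regressions, dictionary):
    """Return the word's meaning only when the reader seems to need it."""
    if looks_unknown(fixations_ms, regressions):
        return dictionary.get(word)  # stand-in for an on-line dictionary query
    return None

print(annotate("ephemeral", [320, 410], 1,
               {"ephemeral": "lasting for a very short time"}))
```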

Design and Implementation of HPC Job Management Framework for Computational Scientific Simulation (계산과학 시뮬레이션을 위한 HPC 작업 관리 프레임워크의 설계 및 구현)

  • Yu, Jung-Lok; Kim, Han-Gi; Byun, Hee-Jung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.554-557 / 2016
  • Recently, supercomputers have been increasingly adopted as computing environments for scientific simulation as well as education, healthcare, and national defence. In particular, supercomputing systems with heterogeneous computing resources are gaining renewed interest as next-generation problem-solving environments, allowing theoretical and/or experimental research in various fields to be free of temporal and spatial limits. However, traditional supercomputing services have only been handled through a simple command-line console, which critically limits the accessibility and usability of heterogeneous computing resources. To address this problem, in this paper, we present the design and implementation of a web-based HPC (High Performance Computing) job management framework for computational scientific simulation. The proposed framework follows highly extensible design principles, providing abstraction interfaces for job schedulers (as well as bundled scheduler plug-ins for LoadLeveler, Sun Grid Engine, and OpenPBS) in order to easily incorporate a broad spectrum of heterogeneous computing resources such as clusters, computing clouds, and grids. We also present a detailed specification of HTTP-standard-based RESTful endpoints, which manage a simulation job's life-cycle (creation, submission, control, status monitoring, etc.), enabling various third-party applications to be newly created on top of the proposed framework.
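
As a concrete illustration of such RESTful job life-cycle endpoints, here is a minimal sketch. Flask, the URL paths, and the in-memory job store are assumptions for illustration; the paper's framework delegates these operations to scheduler plug-ins (LoadLeveler, Sun Grid Engine, OpenPBS) rather than to a dictionary.

```python
# Minimal sketch of RESTful job life-cycle endpoints (create, submit, status,
# cancel). Flask and the in-memory store are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job_id -> job record; a real backend would call a scheduler plug-in

@app.post("/jobs")
def create_job():
    job_id = str(len(jobs) + 1)
    jobs[job_id] = {"spec": request.get_json(), "status": "CREATED"}
    return jsonify(id=job_id), 201

@app.post("/jobs/<job_id>/submit")
def submit_job(job_id):
    jobs[job_id]["status"] = "QUEUED"  # a scheduler plug-in would enqueue here
    return jsonify(id=job_id, status="QUEUED")

@app.get("/jobs/<job_id>")
def job_status(job_id):
    return jsonify(id=job_id, **jobs[job_id])

@app.delete("/jobs/<job_id>")
def cancel_job(job_id):
    jobs[job_id]["status"] = "CANCELLED"
    return jsonify(id=job_id, status="CANCELLED")
```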

Dynamic Management of Equi-Join Results for Multi-Keyword Searches (다중 키워드 검색에 적합한 동등조인 연산 결과의 동적 관리 기법)

  • Lim, Sung-Chae
    • The KIPS Transactions: Part A / v.17A no.5 / pp.229-236 / 2010
  • With an increasing number of documents on the Internet and in enterprises, it becomes crucial to efficiently support users' queries on those documents. In that situation, the full-text search technique is generally accepted because it can answer uncontrolled ad-hoc queries by automatically indexing all the keywords found in the documents. The size of the index files built for full-text searches grows with the number of indexed documents, and thus the disk cost of processing multi-keyword queries against those enlarged index files may be too large. To solve this problem, we propose both an index file structure and a management scheme suitable for processing multi-keyword queries against a large volume of index files. For this, we adopt the structure of inverted files, which are widely used in multi-keyword searches, as the basic index structure and modify it into a hierarchical structure for the join and ranking operations performed during query processing. In order to save disk costs on top of that index structure, we dynamically store in main memory the results of join operations between two keywords if they are highly likely to appear together in users' queries. We also perform comparisons using a disk cost model to show the performance advantage of the proposed scheme.
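
The core idea, caching in main memory the equi-join (document-id intersection) of keyword pairs that are likely to co-occur in queries, can be sketched as follows. The in-memory inverted lists, the LRU eviction policy, and the sample data are illustrative assumptions; the paper selects which join results to keep using a disk cost model.

```python
# Sketch of caching equi-join results for frequent keyword pairs over an
# inverted index. The data, cache size, and LRU policy are illustrative.
from collections import OrderedDict

inverted = {                      # keyword -> sorted list of document ids
    "web": [1, 2, 5, 9],
    "service": [2, 3, 5, 8],
    "system": [1, 2, 8, 9],
}

class JoinCache:
    def __init__(self, max_pairs=2):
        self.cache = OrderedDict()  # (kw1, kw2) -> joined doc ids, LRU order
        self.max_pairs = max_pairs

    def join(self, kw1, kw2):
        key = tuple(sorted((kw1, kw2)))
        if key in self.cache:                    # hit: no disk access needed
            self.cache.move_to_end(key)
            return self.cache[key]
        result = sorted(set(inverted[key[0]]) & set(inverted[key[1]]))
        self.cache[key] = result
        if len(self.cache) > self.max_pairs:     # evict the least-recently used pair
            self.cache.popitem(last=False)
        return result

cache = JoinCache()
print(cache.join("web", "service"))  # [2, 5]
```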

A Study on the Improvement of Public Library Workflow Using Process Innovation (프로세스 이노베이션을 활용한 공공도서관 업무 프로세스 개선에 관한 연구)

  • Noh, Dong-Jo; Kim, Young-Mi; Oh, Dong-Geun
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.28 no.4 / pp.393-413 / 2017
  • This study explores the possibility of applying process innovation to public libraries as a means of business innovation. To do this, the study defines the concept of process innovation through a literature review. Based on surveys and focus group interviews with librarians working at public libraries, this study organizes library work into the following categories: acquisition, resource management, user management, user services, programs, volunteer management, marketing, the library computerization system, facility management, general affairs, and statistical management. In addition, through face-to-face surveys with librarians working at public libraries, it is confirmed that there are issues within acquisition and user management, to which process innovation is applied. In response to these issues, a new process has been developed. Using this new approach, book contract procedures and requests for book-related work could be improved and optimized. In the user management area, the study analyzes membership requirements and registration procedures through a website survey of 30 public libraries in the United States and then proposes an improved system through process analysis of the membership process of Korean public libraries. It is expected that the new system will contribute to improvements in user satisfaction as well as in library workflow.

Real-time and Parallel Semantic Translation Technique for Large-Scale Streaming Sensor Data in an IoT Environment (사물인터넷 환경에서 대용량 스트리밍 센서데이터의 실시간·병렬 시맨틱 변환 기법)

  • Kwon, SoonHyun; Park, Dongwan; Bang, Hyochan; Park, Youngtack
    • Journal of KIISE / v.42 no.1 / pp.54-67 / 2015
  • Nowadays, studies on the fusion of Semantic Web technologies are being carried out to promote the interoperability and value of sensor data in an IoT environment. To accomplish this, the semantic translation of sensor data is essential for convergence with service domain knowledge. Existing semantic translation techniques, however, translate static metadata into semantic data (RDF) and cannot properly handle the real-time and large-scale characteristics of an IoT environment. Therefore, in this paper, we propose a technique for translating large-scale streaming sensor data generated in an IoT environment into semantic data, using real-time and parallel processing. In this technique, we define rules for semantic translation and store them in a semantic repository. The sensor data are translated in real time with parallel processing, using these pre-defined rules and an ontology-based semantic model. To improve performance, we use Apache Storm, a real-time big data analysis framework, for parallel processing. The proposed technique was subjected to performance testing with the AWS observation data of the Meteorological Administration, which are large-scale streaming sensor data, for demonstration purposes.
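
The per-record translation step, mapping one streaming sensor reading to RDF triples according to a pre-defined rule, can be sketched as follows. rdflib and the SSN-style namespace terms are assumptions for illustration; the paper executes this step inside Apache Storm for real-time parallel processing, which is omitted here.

```python
# Sketch of a rule that maps one sensor reading to RDF triples. rdflib and the
# example namespace are assumptions; the Storm-based parallelism is omitted.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

SSN = Namespace("http://example.org/ssn#")

def translate(reading: dict) -> Graph:
    """Translate {'sensor', 'property', 'value', 'time'} into an RDF graph."""
    g = Graph()
    obs = URIRef(f"http://example.org/obs/{reading['sensor']}/{reading['time']}")
    g.add((obs, RDF.type, SSN.Observation))
    g.add((obs, SSN.observedBy, URIRef(f"http://example.org/sensor/{reading['sensor']}")))
    g.add((obs, SSN.observedProperty, Literal(reading["property"])))
    g.add((obs, SSN.hasValue, Literal(reading["value"], datatype=XSD.double)))
    g.add((obs, SSN.resultTime, Literal(reading["time"], datatype=XSD.dateTime)))
    return g

print(translate({"sensor": "aws-101", "property": "temperature",
                 "value": 12.3, "time": "2015-01-01T00:00:00"}).serialize(format="turtle"))
```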