• Title/Summary/Keyword: web-based platform

Search Results: 691

Implementation of a Remote Patient Monitoring System using Mobile Phones (모바일 폰을 이용한 원격 환자 관리 시스템의 구현)

  • Park, Hung-Bog;Seo, Jung-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.6 / pp.1167-1174 / 2009
  • In sickroom patient monitoring, not only the automatically measured physiological and environmental data of the patient but also the clinical data (clinical chart) drawn up by a doctor or nurse are recognized as important. However, in the current sickroom environment, clinical data is collected separately from the automatically measured data, so the two are used without effective integration. Integration is difficult because of their differing collection times, which makes reconstruction of the clinical record highly uncertain. To solve these problems, methods for synchronizing the continuous environmental data of a sickroom with the clinical data have emerged as an important measure. In addition, the growing use of small devices and the development of solutions based on wireless communications provide a communication platform for health care developers. This paper therefore implements a web-based remote patient care system that uses mobile phones. Clinical data entered by a nurse or doctor and the environmental data of the sickroom are collected by a collection module over a wireless sensor network, integrated, and stored in a database, and an observer can view both the clinical data and the sickroom environmental data on a mobile phone. A patient's family can view the clinical data recorded by the hospital and the patient's sickroom environment from their computers or mobile phones outside the hospital. Through this system, the hospital can provide better medical services to patients and their families.
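The integration step described above hinges on aligning continuously sampled sickroom measurements with clinical notes written at irregular times. Below is a minimal, hypothetical sketch of that kind of timestamp-based alignment using pandas on synthetic data; the column names, sampling rate, and 5-minute tolerance are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch: align irregular clinical notes with periodic sensor readings
# by timestamp. Column names and the 5-minute tolerance are illustrative assumptions.
import pandas as pd

# Periodic sickroom environment samples (e.g., from a wireless sensor network)
env = pd.DataFrame({
    "timestamp": pd.date_range("2009-06-01 08:00", periods=6, freq="10min"),
    "temperature_c": [24.1, 24.3, 24.8, 25.0, 24.9, 24.7],
    "humidity_pct": [45, 46, 47, 47, 48, 48],
})

# Clinical chart entries written by a doctor or nurse at irregular times
clinical = pd.DataFrame({
    "timestamp": pd.to_datetime(["2009-06-01 08:12", "2009-06-01 08:47"]),
    "note": ["IV fluids adjusted", "Patient reports dizziness"],
})

# Attach the nearest preceding environment sample to each clinical note,
# within a 5-minute window, so the two data sources share one record.
merged = pd.merge_asof(
    clinical.sort_values("timestamp"),
    env.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("5min"),
)
print(merged)
```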

Relationships between Job Stress and Burnout of Primary Health Care Practitioners during COVID-19: A Mixed Methods Study (코로나19 기간 동안 보건진료전담공무원의 직무스트레스와 소진의 관계: 혼합연구방법)

  • Ha, Yeongmi;Yim, Eun Shil;Kim, Youngnam;Choi, Hyunkyoung;Ko, Young-suk;Jung, Mira;Yi, Jee-Seon;Choi, Youngmi; Shin, Eun Ji;Kim, Younkyoung;Lee, Kowoon;Jung, Aeri;Jang, Ji hui;Kim, Da Eun;Kim, Kyeonghui;Shin, So Young;Yang, Seung-Kyoung;Park, Songran
    • Journal of Korean Academy of Rural Health Nursing / v.19 no.1 / pp.25-34 / 2024
  • Purpose: This study investigates the relationship between job stress and burnout among primary health care practitioners during the COVID-19 pandemic through a mixed methods study. Methods: Data were collected from October to November 2022 using Qualtrics, a web-based survey platform, and 1,082 primary health care practitioners participated in the survey. Quantitative data were analyzed with correlation analysis in IBM SPSS/WIN 27.0. Qualitative data from open-ended questions were analyzed using content analysis. Results: Job stress and burnout among primary health care practitioners during COVID-19 were positively correlated. Four categories and seven subcategories were identified. Conclusion: Based on these findings, it is necessary to develop a support system for primary health care practitioners according to the type of residential area and the number of people, in order to reduce job stress and burnout.
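As a rough illustration of the quantitative step (the reported correlation between job stress and burnout), the snippet below computes a Pearson correlation with SciPy on synthetic scores; the variables and data are assumptions for illustration, not the study's dataset.

```python
# Illustrative sketch only: Pearson correlation between synthetic job-stress and
# burnout scores, mirroring the kind of correlation analysis reported in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
job_stress = rng.normal(loc=3.0, scale=0.6, size=200)         # hypothetical 1-5 scale scores
burnout = 0.5 * job_stress + rng.normal(scale=0.5, size=200)  # positively related by construction

r, p_value = stats.pearsonr(job_stress, burnout)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```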

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the age of big data. SNS data qualifies as big data in that it satisfies the conditions of volume (the amount of data), velocity (data input and output speed), and variety (the diversity of data types). Issue trends discovered in SNS big data can be used as an important new source of value creation, because this information covers society as a whole. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need to analyze SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) it provides the topic keyword set corresponding to the daily ranking; (2) it visualizes the daily time series graph of a topic over the duration of a month; (3) it shows the importance of a topic through a treemap based on a scoring system and frequency; and (4) it visualizes the daily time series graph of keywords retrieved by keyword search. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires up-to-date big data technology, such as the Hadoop distributed system or NoSQL databases (an alternative to relational databases), to rapidly process large amounts of real-time data. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data, and the resulting interaction is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI) is designed using these libraries and detects issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
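The abstract does not spell out the topic model's implementation, but the general idea of extracting topic keyword sets from tweet text can be sketched with scikit-learn's LDA on a toy corpus, as below; the corpus, preprocessing, and parameters are illustrative assumptions rather than the actual TITS pipeline.

```python
# Illustrative LDA topic extraction over a toy tweet corpus, in the spirit of TITS.
# The corpus, stop-word handling, and parameters are assumptions, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "subway delays downtown again this morning",
    "big game tonight who is watching",
    "train service suspended commuters stranded downtown",
    "amazing goal in the game last night",
    "city announces new subway line construction",
    "fans celebrate the game winning goal",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top keywords per topic, i.e., a miniature "topic keyword set".
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```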

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screening process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screening process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable for building GSO in this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. It describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance), development (specification, conceptualization, formalization, implementation, and maintenance), and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform independence. Based on the researchers' experience of developing GSO, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; however, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a pre-development stage such as a feasibility study, which helps developers ensure that a planned ontology is necessary and sufficiently valuable to justify an ontology building project and determine whether the project is likely to succeed. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can thus be considered a heavyweight methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with its contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY based on empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. This study also offers insights for researchers who want to design a more advanced ontology development methodology.
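For readers unfamiliar with OWL-DL tooling, the sketch below shows, with owlready2, what a toy ontology of this kind might look like; the IRI, classes, property names, and individuals are invented for illustration and are not the actual Graduation Screen Ontology.

```python
# Hypothetical toy ontology sketch with owlready2; the IRI, classes, and properties
# are invented for illustration and are not the actual Graduation Screen Ontology.
from owlready2 import Thing, ObjectProperty, DataProperty, get_ontology

onto = get_ontology("http://example.org/toy_graduation.owl")

with onto:
    class Student(Thing):
        pass

    class Course(Thing):
        pass

    class has_completed(ObjectProperty):
        domain = [Student]
        range = [Course]

    class earned_credits(DataProperty):
        domain = [Student]
        range = [int]

# A toy individual: a student who has completed one course.
alice = onto.Student("alice")
algo = onto.Course("algorithms")
alice.has_completed = [algo]
alice.earned_credits = [120]

onto.save(file="toy_graduation.owl", format="rdfxml")
```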

Study on the Effect of Self-Disclosure Factor on Exposure Behavior of Social Network Service (자기노출 요인이 소셜 네트워크 서비스의 노출행동에 미치는 영향에 관한 연구)

  • Do Soon Kwon;Seong Jun Kim;Jung Eun Kim;Hye In Jeong;Ki Seok Lee
    • Information Systems Review / v.18 no.3 / pp.209-233 / 2016
  • The number of Internet companies that utilize social networks has increased, and the introduction of diverse social media services has facilitated innovative changes in e-business. A social network service (SNS), a domain of social media, is a web-based service designed to strengthen human relations on the Internet and build new social relations. The remarkable growth of social network services, together with the profit they generate and the attention they receive, has become a new growth engine of the digital age. Given this development, many global IT companies view SNS as the most powerful form of social media and invest effort in developing business models that use SNS. This study verifies the impact of privacy exposure in SNS resulting from privacy invasion. It examines the purpose of using SNS and users' awareness of the significance of personal information, which are key factors that affect self-disclosure of personal information. The study uses the theory of reasoned action (TRA) as a theoretical platform for describing individuals' specific behaviors and emotional responses, and presents a research model that adds negative attitude (negatude). In this model, self-disclosure in SNS is analyzed within the TRA framework, in which subjective norm and behavioral intention are the key variables leading to exposure behavior. A survey was conducted among college students at Y university in Seoul, all with experience in using SNS, to empirically verify the research model. A total of 198 samples were collected, and path analysis was applied to analyze the relations among factors. The results of the path analysis show a statistically insignificant impact of privacy invasion on negatude, subjective norm, behavioral intention, and exposure behavior; the impact of unrecognized privacy invasion was also insignificant. The impacts of intention to use SNS on negatude, subjective norm, behavioral intention, and exposure behavior were significant. The significance of personal information had a significant impact on subjective norm, behavioral intention, and exposure behavior, whereas its impact on negatude was insignificant. The impact of subjective norm on behavioral intention was significant. Lastly, the impact of behavioral intention on exposure behavior was insignificant. These findings are meaningful because the study examined the process of self-disclosure by integrating psychological and social factors based on theoretical discussion.
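Path analysis of the kind reported here can be approximated as a set of sequential regressions. The sketch below does so with statsmodels on synthetic data; the variable names echo the research model, but the data, effect sizes, and simplified structure are assumptions.

```python
# Simplified, illustrative path analysis via sequential OLS regressions on synthetic
# data; variable names echo the research model, but data and structure are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 198  # same sample size as the study; the data itself is synthetic

df = pd.DataFrame({
    "privacy_invasion": rng.normal(size=n),
    "sns_use_intent": rng.normal(size=n),
    "info_significance": rng.normal(size=n),
})
df["subjective_norm"] = 0.4 * df["sns_use_intent"] + rng.normal(scale=0.8, size=n)
df["behavioral_intention"] = 0.5 * df["subjective_norm"] + rng.normal(scale=0.8, size=n)
df["exposure_behavior"] = 0.3 * df["behavioral_intention"] + rng.normal(scale=0.9, size=n)

# One regression per path; here, the antecedents -> subjective norm path.
X = sm.add_constant(df[["privacy_invasion", "sns_use_intent", "info_significance"]])
print(sm.OLS(df["subjective_norm"], X).fit().summary())
```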

Design of Splunk Platform based Big Data Analysis System for Objectionable Information Detection (Splunk 플랫폼을 활용한 유해 정보 탐지를 위한 빅데이터 분석 시스템 설계)

  • Lee, Hyeop-Geon;Kim, Young-Woon;Kim, Ki-Young;Choi, Jong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.1 / pp.76-81 / 2018
  • The Internet of Things (IoT), which is emerging as a future economic growth engine, has been actively introduced in areas close to our daily lives. However, IoT security threats remain to be resolved. In particular, with the spread of smart homes and smart cities, an explosive number of closed-circuit televisions (CCTVs) has been installed. The Internet protocol (IP) addresses and even the port numbers assigned to CCTVs are open to the public via the search engines of web portals or on social media platforms such as Facebook and Twitter, and with simple tools these exposed devices can easily be hacked. For this reason, a big data analytics system is needed that supports quick responses to data potentially containing security risk factors, and to illegal websites that may cause social problems, by helping analyze data collected from search engines and social media platforms frequently used by Internet users as well as data on illegal websites.
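One small piece of such a pipeline, flagging posts that expose a device's IP address and port before the data is indexed for analysis, can be sketched with a plain regular expression, as below; the pattern and the sample posts are illustrative assumptions rather than the paper's Splunk configuration.

```python
# Illustrative pre-filter that flags text exposing an IP address and port number,
# the kind of risk factor the proposed Splunk-based system is meant to surface.
# The regex and sample posts are assumptions, not the paper's actual rules.
import re

IP_PORT = re.compile(
    r"\b(?:(?:25[0-5]|2[0-4]\d|1?\d?\d)\.){3}(?:25[0-5]|2[0-4]\d|1?\d?\d)"  # IPv4
    r"(?::\d{2,5})?\b"                                                      # optional port
)

posts = [
    "Check this stream at 203.0.113.45:8080, no password needed",
    "Great weather in Seoul today",
    "Camera admin page 198.51.100.7:554 still open",
]

for post in posts:
    hits = IP_PORT.findall(post)
    if hits:
        print(f"RISK  {hits} <- {post}")
```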

An Exploratory Study on the Big Data Convergence-based NCS Homepage : focusing on the Use of Splunk (빅데이터 융합 기반 NCS 홈페이지에 관한 탐색적 연구: 스플렁크 활용을 중심으로)

  • Park, Seong-Taek;Lee, Jae Deug;Kim, Tae Ung
    • Journal of Digital Convergence / v.16 no.7 / pp.107-116 / 2018
  • One of HRD Korea's key missions is to develop and promote the use of National Competency Standards (NCS), defined as the nation's systemization, for each industrial sector and level, of the competencies (knowledge, skills, and attitudes) required to perform duties in the workplace. The NCS provides the basis for the design of training and for detailed specifications for workplace assessment. To promote data-driven service improvement, the commercial product Splunk was introduced; it has grown into an extremely useful platform because it enables users to search, collect, and organize data in a far more comprehensive and far less labor-intensive way than traditional databases. Leveraging Splunk's built-in data visualization and analytical features, HRD Korea has built custom tools to gain new insight and operational intelligence that the organization did not have before. This paper analyzes the NCS homepage; concretely, it applies Splunk to create visualizations and dashboards and to perform various functional and statistical analyses without web development skills. Practical uses and implications are presented through case studies.

Development of Virtual Ambient Weather Measurement System for the Smart Greenhouse (스마트온실을 위한 가상 외부기상측정시스템 개발)

  • Han, Sae-Ron;Lee, Jae-Su;Hong, Young-Ki;Kim, Gook-Hwan;Kim, Sung-Ki;Kim, Sang-Cheol
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.5 no.5 / pp.471-479 / 2015
  • This study was conducted to make use of the Korea Meteorological Administration (KMA)'s Automatic Weather Station (AWS) data to operate smart greenhouses. A web-based KMA AWS data receiving system was developed using Java and APM_SETUP 8 on the Windows 7 platform. The system consists of a server and a client. The server program is a Java application that receives weather data from the KMA every 30 minutes and sends it to the smart greenhouse. The client program is a Java applet that receives the KMA AWS data from the server every 30 minutes by communicating with the server, so that the smart greenhouse can treat the KMA AWS data as its ambient weather information. The system was evaluated by comparison with local weather data measured by Ezfarm Inc. For ambient air temperature, there was some difference between the virtual data and the measured data, but the mean absolute deviation was small, at less than 2.24℃. Therefore, the virtual weather data of the developed system is considered usable as the ambient weather information for a smart greenhouse.
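The server's behavior, fetching the KMA AWS observation every 30 minutes and relaying it to the greenhouse, amounts to a simple polling loop. The sketch below illustrates it in Python with placeholder URLs and payloads, since the actual KMA interface and the paper's Java implementation details are not given here.

```python
# Illustrative polling/relay loop only; the URLs and payload format are placeholders,
# not the actual KMA AWS interface or the paper's Java implementation.
import time
import requests

KMA_AWS_URL = "https://example.org/kma/aws/latest"    # placeholder endpoint
GREENHOUSE_URL = "http://192.0.2.10/ambient-weather"  # placeholder controller address
POLL_INTERVAL_S = 30 * 60                             # every 30 minutes, as in the paper

def fetch_and_relay() -> None:
    """Fetch the latest AWS observation and forward it to the greenhouse controller."""
    obs = requests.get(KMA_AWS_URL, timeout=10).json()
    requests.post(GREENHOUSE_URL, json=obs, timeout=10)

if __name__ == "__main__":
    while True:
        try:
            fetch_and_relay()
        except requests.RequestException as exc:
            print(f"relay failed, will retry next cycle: {exc}")
        time.sleep(POLL_INTERVAL_S)
```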

Oil Spill Monitoring in Norilsk, Russia Using Google Earth Engine and Sentinel-2 Data (Google Earth Engine과 Sentinel-2 위성자료를 이용한 러시아 노릴스크 지역의 기름 유출 모니터링)

  • Minju Kim;Chang-Uk Hyun
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.311-323 / 2023
  • Oil spill accidents can cause various environmental problems, so it is important to quickly assess the extent of, and changes in, the area and location of the spilled oil. When detecting oil spills with satellite imagery, a wide range of oil spill areas can be detected by utilizing the information collected from the various sensors on board. Previous studies have analyzed the reflectance of oil at specific wavelengths and have developed oil spill indices using bands within those wavelength ranges. When analyzing multiple images before and after an oil spill for monitoring purposes, a significant amount of time and computing resources is consumed because of the large data volume. Google Earth Engine, which allows large volumes of satellite imagery to be analyzed through a web browser, makes it possible to detect oil spills efficiently. In this study, we evaluated the applicability of four oil spill indices in an area with various land cover types using Sentinel-2 MultiSpectral Instrument data and the cloud-based Google Earth Engine platform. We assessed the separability of oil spill areas by comparing the index values across different land covers. The results demonstrate the efficient use of Google Earth Engine in oil spill detection research and indicate that oil spill index B ((B3+B4)/B2) and oil spill index C (R: B3/B2, G: (B3+B4)/B2, B: (B6+B7)/B5) can contribute to effective oil spill monitoring in other regions with complex land cover.
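A minimal sketch of computing oil spill index B ((B3+B4)/B2) on a Sentinel-2 composite with the Earth Engine Python API might look like the following; the date range, bounding box, cloud filter, and collection choice are illustrative assumptions rather than the study's exact processing chain.

```python
# Sketch of computing oil spill index B = (B3 + B4) / B2 on a Sentinel-2 composite
# with the Earth Engine Python API; dates, region, and cloud filter are assumptions.
import ee

ee.Initialize()

# Rough bounding box near Norilsk, Russia (illustrative coordinates).
region = ee.Geometry.Rectangle([87.9, 69.2, 88.4, 69.5])

composite = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(region)
    .filterDate("2020-06-01", "2020-06-15")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 30))
    .median()
)

oil_index_b = composite.expression(
    "(B3 + B4) / B2",
    {
        "B2": composite.select("B2"),
        "B3": composite.select("B3"),
        "B4": composite.select("B4"),
    },
).rename("oil_spill_index_B")

# Mean index value over the region, as a quick sanity check.
stats = oil_index_b.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=20, maxPixels=1e9
)
print(stats.getInfo())
```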

A Study on Open Source Version and License Detection Tool (오픈소스 버전 및 라이선스 탐지 도구에 관한 연구)

  • Ki-Hwan Kim;Seong-Cheol Yoon;Su-Hyun Kim;Im-Yeong Lee
    • The Transactions of the Korea Information Processing Society / v.13 no.7 / pp.299-310 / 2024
  • Software is expensive, labor-intensive, and time-consuming to develop. To address this, many organizations turn to publicly available open source, but they often do so without knowing exactly what they are adopting. Older versions of open source contain various security vulnerabilities, and even when newer versions are released, many users continue to use the older ones, exposing themselves to security threats. In addition, compliance with licenses is essential when using open source, but many users overlook this, leading to copyright issues. Solving these problems requires a tool that analyzes open source versions, vulnerabilities, and license information. Traditional tools such as Black Duck provide a wealth of open source information when the source code is submitted, but building the environment is a heavy lift. Fossology extracts the licenses of open source but does not provide detailed information such as versions, because it lacks its own database. To solve these problems, this paper proposes a version and license detection tool that identifies the open source in a user's source code by measuring source code similarity and then detects its version and license. The proposed method improves similarity accuracy over existing source code similarity measurement programs such as MOSS, and provides users with information about licenses, versions, and vulnerabilities by analyzing each file of the corresponding open source in a lightweight web-based platform environment. This addresses the capacity issues of tools such as Black Duck and the lack of open source detail in tools such as Fossology.
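The similarity-measurement step can be illustrated with a simple token n-gram Jaccard comparison between two source files, sketched below; this generic technique is an assumption for illustration, not the paper's actual similarity algorithm.

```python
# Generic token 3-gram Jaccard similarity between two source files, as a stand-in
# illustration of similarity-based open source identification (not the paper's method).
import re


def token_ngrams(source: str, n: int = 3) -> set[tuple[str, ...]]:
    """Tokenize source text and return the set of token n-grams."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two files' token n-gram sets (0.0 to 1.0)."""
    ga, gb = token_ngrams(a), token_ngrams(b)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)


user_file = "int add(int a, int b) { return a + b; }"
oss_file = "int add(int x, int y) { return x + y; }"
print(f"similarity = {jaccard_similarity(user_file, oss_file):.2f}")
```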