• Title/Summary/Keyword: Web based Databases

Search Result 310, Processing Time 0.038 seconds

A Systematic Review and Meta-Analysis of the Effects of Simultaneous Dual-Task Training on Executive Function in Older Adults (동시적 이중과제 훈련이 노인의 실행기능에 미치는 효과: 체계적 고찰 및 메타분석)

  • Jeun, Yu-Jin;Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation
    • /
    • v.10 no.3
    • /
    • pp.23-41
    • /
    • 2021
  • Objective : The purpose of this study was to analyze the effects of simultaneous dual-task training on executive function in older adults. Methods : We searched the PubMed, EMBASE, Cochrane, Web of Science, and RISS databases for studies published in the past decade. Seven studies were selected based on the inclusion and exclusion criteria, and qualitative assessment and meta-analysis were performed on them. Results : All selected studies used a randomized controlled trial design and obtained PEDro scores above seven. The Trail Making Test (TMT) was used to evaluate the effects of dual-task training on executive function in four studies, the Color Trail Test (CTT) in two studies, and the Stroop Test in three studies. The effect size for total executive function was 0.38, which is small. The effect sizes for the TMT and CTT were 0.37, and that for the Stroop Test was 0.34, also small. Significant effects were found only for total executive function, the TMT, and the CTT (all p<0.05). Conclusion : This study confirmed that dual-task training is effective in improving executive function in older adults. To improve its effectiveness, the difficulty of the dual-task training should be considered. It is also necessary to implement assessments that evaluate performance under dual-task conditions, in addition to conventional executive function tests. In the future, dual-task training could be used as an appropriate intervention for executive function in older adults to delay the onset of dementia.
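The pooling step behind the effect sizes reported above (inverse-variance weighting of per-study standardized mean differences) can be sketched as follows; the study-level effects and variances below are hypothetical illustrations, not the seven studies' actual values:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooling of standardized mean differences."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))            # standard error of the pooled effect
    z = pooled / se
    # two-sided p-value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return pooled, se, p

# Hypothetical per-study effect sizes (e.g., Hedges' g) and their variances
effects = [0.45, 0.30, 0.40]
variances = [0.04, 0.05, 0.06]
g, se, p = pooled_effect(effects, variances)
```

A random-effects model (as often used when study populations differ) would additionally estimate between-study variance before weighting.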

Efficacy and Toxicity of Anti-VEGF Agents in Patients with Castration-Resistant Prostate Cancer: a Meta-analysis of Prospective Clinical Studies

  • Qi, Wei-Xiang;Fu, Shen;Zhang, Qing;Guo, Xiao-Mao
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.19
    • /
    • pp.8177-8182
    • /
    • 2014
  • Background: Blocking angiogenesis by targeting the vascular endothelial growth factor (VEGF) signaling pathway to inhibit tumor growth has proven successful in treating a variety of metastatic tumor types, including kidney, colon, ovarian, and lung cancers, but its role in castration-resistant prostate cancer (CRPC) is still unknown. Here we aimed to determine the efficacy and toxicity of anti-VEGF agents in patients with CRPC. Materials and Methods: The PubMed and Web of Science databases and abstracts presented at the American Society of Clinical Oncology up to March 31, 2014 were searched for relevant articles. Pooled estimates of the objective response rate (ORR) and the prostate-specific antigen (PSA) response rate (decline ≥50%) were calculated using Comprehensive Meta-Analysis (version 2.2.064) software. Median weighted progression-free survival (PFS) and overall survival (OS) times for anti-VEGF monotherapy and anti-VEGF-based doublets were compared by two-sided Student's t test. Results: A total of 3,841 patients from 19 prospective studies (4 randomized controlled trials and 15 prospective nonrandomized cohort studies) were included in the analysis. The pooled ORR was 12.4%, with a higher response rate of 26.4% (95%CI, 13.6-44.9%) for anti-VEGF-based combinations vs. 6.7% (95%CI, 3.5-12.7%) for anti-VEGF alone (p=0.004). Similarly, the pooled PSA response rate was 32.4%, with a higher rate of 52.8% (95%CI, 40.2-65.1%) for anti-VEGF-based combinations vs. 7.3% (95%CI, 3.6-14.2%) for anti-VEGF alone (p<0.001). Median PFS and OS were 6.9 and 22.1 months, with weighted median PFS of 5.6 vs. 6.9 months (p<0.001) and weighted median OS of 13.1 vs. 22.1 months (p<0.001) for anti-VEGF monotherapy vs. anti-VEGF-based doublets.
Conclusions: With the available evidence, this pooled analysis indicates that anti-VEGF monotherapy has a modest effect in patients with CRPC, and the clinical benefits of anti-VEGF-based doublets appear greater than those of monotherapy.
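Pooled response rates like the ORR and PSA figures above are commonly computed by inverse-variance weighting on the logit scale. A minimal fixed-effect sketch with hypothetical study-level counts (the paper itself used the Comprehensive Meta-Analysis software, whose exact model may differ):

```python
import math

def pooled_proportion(events, totals):
    """Pool per-study response rates on the logit scale with inverse-variance weights."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                  # continuity correction
        logits.append(math.log(p / (1 - p)))
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)
        weights.append(1.0 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))   # back-transform to a proportion

# Hypothetical (responders, enrolled) counts for three studies
rate = pooled_proportion([12, 30, 8], [40, 60, 50])
```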

Psychosocial Interventions for Children and Adolescents after a Disaster: A Systematic Literature Review (1991-2015) (재난 후 소아청소년의 정신사회적 개입: 체계적 문헌고찰(1991~2015))

  • Lee, Mi-Sun;Hwang, Jun-Won;Lee, Cheol-Soon;Kim, Ji-Youn;Lee, Ju-Hyun;Kim, Eunji;Chang, Hyoung Yoon;Bae, Seung-Min;Park, Jang-Ho;Bhang, Soo-Young
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.27 no.4
    • /
    • pp.278-305
    • /
    • 2016
  • Objective: The aim of this systematic literature review was to analyze psychosocial interventions for children and adolescents after disasters. Methods: We reviewed the research literature from 1991 to 2015 via a comprehensive search of the MEDLINE, EMBASE, Cochrane CENTRAL, PubMed, and PsycINFO databases. The keywords employed included 'child', 'adolescent', 'youth', 'disaster', 'posttraumatic', 'psychosocial', 'therapy', and 'intervention'. The researchers followed the PRISMA guidelines. A total of 850 articles were screened for eligibility, and fifty-nine were found to meet the study criteria. The final data analysis was performed by disaster type, study design, type of intervention, sample size, age, school grade, number of sessions, setting of intervention delivery, providers, approach, and parent involvement. Results: Countries worldwide have experienced various kinds of disasters, including earthquakes, hurricanes, vessel accidents, tornados, tsunamis, volcanic eruptions, war, fire, terrorism, and traffic accidents. The types of psychosocial intervention conducted after these disasters included psychological first aid, psychological debriefing, psychoeducation, trauma-focused cognitive behavioral therapy, eye movement desensitization and reprocessing, prolonged exposure therapy, group play and arts therapy, project interventions, school-based interventions, and web-based interventions. Conclusion: The findings of this systematic literature review suggest that appropriate psychosocial interventions can be utilized as evidence-based mental health treatment for children and adolescents after disasters.

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data emerging from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment both produces data and consumes it, and the data thus produced create further value. Because of this massive scale, future information systems must process more data than existing systems in quantitative terms; in qualitative terms, they must also extract the needed information accurately from that volume. In a small information system, a person can understand the system precisely and obtain the necessary information, but in large, complex systems that are difficult to understand fully, acquiring the desired information becomes increasingly hard. In other words, accurate processing of large amounts of data has become a basic requirement for future information systems. This efficiency problem can be addressed by building a Semantic Web, in which the collected data are expressed as an ontology understandable by computers as well as by people, enabling diverse forms of information processing. The military, like most other organizations, has introduced IT, and most work is now done through information systems. As existing systems come to contain ever larger amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic data network through connections with other systems, has a wide range of usable databases, and can search more precisely and quickly through relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in a real system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was originally built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, because the full logistics information system, with its large amount of data, had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages; however, little can be checked beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category with no search function. It therefore has the disadvantage of being easy to use only for those who already know the system well. The ontology-based logistics situation management system is designed to present the complex information of the existing logistics information system intuitively through the ontology. To construct it, useful functions such as performance-based logistics contract management and a component dictionary were additionally identified and included in the ontology. To confirm that the constructed ontology can support decision making, meaningful analysis functions such as calculating aircraft utilization rates and querying performance-based contracts were implemented.
In particular, in contrast to previous ontology studies, this study models time-series data whose values change over time, such as the daily status of each aircraft, in the ontology, and confirms that utilization rates can be calculated from it under various criteria. In addition, data related to performance-based logistics contracts, introduced as a new maintenance approach for aircraft and other munitions, can be queried in various ways, and the performance indices used in such contracts are easy to calculate through the ontology's reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the constructed ontology's usability. Finally, the failure rate and reliability of each component can be calculated, including from MTBF data of selected items based on actual part-consumption records, and mission and system reliability are derived from them. To confirm the usability of the constructed ontology-based logistics situation management system, it was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and found to be more useful and convenient than the existing system.
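The utilization-rate calculation over daily status facts described above can be illustrated with a toy triple store; the aircraft identifiers, predicates, and status values are hypothetical, and a real implementation would use an RDF store queried with SPARQL rather than plain tuples:

```python
# Toy triple store: daily aircraft status facts as (subject, predicate, object)
# triples. All names are illustrative, not the actual defense ontology's terms.
triples = {
    ("aircraft:F15-001", "status:2019-03-01", "Available"),
    ("aircraft:F15-001", "status:2019-03-02", "Available"),
    ("aircraft:F15-001", "status:2019-03-03", "Maintenance"),
    ("aircraft:F15-001", "status:2019-03-04", "Available"),
}

def utilization_rate(store, aircraft):
    """Share of recorded days on which the aircraft was available."""
    days = [(p, o) for s, p, o in store
            if s == aircraft and p.startswith("status:")]
    available = sum(1 for _, state in days if state == "Available")
    return available / len(days)

rate = utilization_rate(triples, "aircraft:F15-001")  # 3 of 4 recorded days
```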

Establishment and Application of GIS-Based DongNam Kwon Industry Information System (GIS기반 동남 광역권 산업체 정보시스템 구축 및 활용)

  • Nam, Kwang-Woo;Kwon, Il-Hwa;Park, Jun-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.1
    • /
    • pp.70-79
    • /
    • 2014
  • Following technology developments in wide-area traffic networks and communications, the importance of cooperative systems for vitalizing regional economies is increasing. In this study, the DongNam Kwon industry information system was therefore established for GIS-based sharing of industrial information across the DongNam Kwon regional economy. DongNam Kwon is an industrial agglomeration centered on manufacturing, so effective industrial clusters and cooperation systems operating across administrative boundaries are required. To build the database, the information system was established using the industrial databases already in place in Busan, Ulsan, and Gyeongnam. However, various issues were found, caused by discrepancies among the data of each local government and by insufficient GIS-based location information. According to the analysis, standardization covering collection, distribution, and utilization is urgently required to resolve these issues. This study establishes a two-way industrial information system on the web, using cadastral and digital maps, that enables information creation and phased access in both directions between administrators and users. The result provides a fundamental framework for cooperative responses and a cooperation system for DongNam Kwon's industrial promotion through shared industrial information.

Generation, Storing and Management System for Electronic Discharge Summaries Using HL7 Clinical Document Architecture (HL7 표준임상문서구조를 사용한 전자퇴원요약의 생성, 저장, 관리 시스템)

  • Kim, Hwa-Sun;Kim, Il-Kon;Cho, Hune
    • Journal of KIISE:Databases
    • /
    • v.33 no.2
    • /
    • pp.239-249
    • /
    • 2006
  • In general, interoperability has been de-emphasized in hospital information systems because each system operates independently of the others. This study proposes a future-oriented hospital information system through the design and implementation of the HL7 Clinical Document Architecture (CDA). After defining the item regulations and templates for the discharge summary and radiology interpretation forms, clinical documents are generated from the hospital information system by analyzing and designing the clinical document architecture. The schema is analyzed on the basis of the HL7 Reference Information Model, and HL7 interface engine ver. 2.4 is used as the transmission protocol. This study is significant in two respects. First, an expansion and redefinition process was conducted, founded on the HL7 CDA and Reference Information Model, to apply the international standards to the Korean context. Second, we propose a next-generation web-based hospital information system based on the clinical document architecture. In conclusion, further study of the CDA will encompass electronic health records (EHR) and clinical data repositories (CDR), and will make medical information sharing among various healthcare institutions possible.
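A discharge summary of the kind described can be sketched as a drastically reduced CDA-flavored XML skeleton; this is an illustration only, not the authors' actual templates, and a real CDA document needs the full header, templateIds, and coded vocabularies:

```python
import xml.etree.ElementTree as ET

# Minimal CDA-style discharge summary skeleton. 18842-5 is the LOINC code
# for "Discharge summary"; 2.16.840.1.113883.6.1 is the LOINC OID.
doc = ET.Element("ClinicalDocument", xmlns="urn:hl7-org:v3")
ET.SubElement(doc, "code", code="18842-5",
              codeSystem="2.16.840.1.113883.6.1",
              displayName="Discharge summary")
record = ET.SubElement(doc, "recordTarget")
patient = ET.SubElement(record, "patientRole")
ET.SubElement(patient, "id", extension="12345")      # hypothetical patient id
component = ET.SubElement(doc, "component")
section = ET.SubElement(component, "section")
ET.SubElement(section, "title").text = "Hospital Course"
ET.SubElement(section, "text").text = "Admitted for ...; discharged improved."

xml_bytes = ET.tostring(doc, encoding="utf-8")
root = ET.fromstring(xml_bytes)   # round-trip to verify well-formedness
```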

GWB: An integrated software system for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim In-Cheol;Jin Hoon
    • Journal of Internet Computing and Services
    • /
    • v.5 no.5
    • /
    • pp.1-15
    • /
    • 2004
  • In this paper, we describe the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Most existing software systems handling genomic sequences rarely provide both management and analysis facilities. The analysis programs also tend to be standalone units covering only one or a few of the required functions. Moreover, these programs are widely distributed over the Internet and require different execution environments, so using them together demands much manual and conversion work, causing great inconvenience to many life-science researchers. To overcome these problems and support genomic research more effectively, this paper integrates both management and analysis facilities into a single system called GWB. The most important design issues for GWB are how to integrate many different analysis programs into a single software system, and how to provide the data and databases of the different formats these programs require. To address these issues, GWB integrates the analysis programs through common input/output interfaces called wrappers, suggests a common format for genomic sequence data, organizes local databases consisting of a relational database and an indexed sequential file, and provides facilities for converting data among several well-known formats and exporting local databases into XML files.
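The wrapper mechanism described above (a common input/output interface hiding heterogeneous analysis programs) can be sketched as follows; the two "tools" here are trivial stand-ins, not GWB's actual analysis programs:

```python
# Each external tool is hidden behind a common run(sequence) -> dict
# interface, so the workbench can chain tools without format conversions.

class ToolWrapper:
    """Common input/output interface for heterogeneous analysis programs."""
    def run(self, sequence: str) -> dict:
        raise NotImplementedError

class GCContentWrapper(ToolWrapper):
    def run(self, sequence):
        gc = sum(1 for base in sequence.upper() if base in "GC")
        return {"tool": "gc-content", "value": gc / len(sequence)}

class LengthWrapper(ToolWrapper):
    def run(self, sequence):
        return {"tool": "length", "value": len(sequence)}

def analyze(sequence, wrappers):
    # One common format in, one common format out, regardless of the tool.
    return [w.run(sequence) for w in wrappers]

results = analyze("ATGCGC", [GCContentWrapper(), LengthWrapper()])
```

In the real system each wrapper would invoke an external program and translate its native input/output formats to and from the common one.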


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data produced during banking operations come from handling clients' business. Therefore, a separate system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amounts of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions that let it continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas make it hard to distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases do, but it can easily expand through node dispersion as data grow rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data are growing rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module produces the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system using only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance evaluations over various chunk sizes.
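The log collector's routing step described above can be sketched as follows; the record fields and store names are illustrative, not the paper's actual schema:

```python
# Records needing real-time queries go to the relational store (MySQL),
# bulk aggregated records to the document store (MongoDB).

def route_logs(records):
    stores = {"mysql": [], "mongodb": []}
    for rec in records:
        target = "mysql" if rec.get("realtime") else "mongodb"
        stores[target].append(rec)
    return stores

records = [
    {"type": "transaction", "realtime": True,  "msg": "transfer ok"},
    {"type": "batch",       "realtime": False, "msg": "nightly settlement"},
    {"type": "access",      "realtime": False, "msg": "login"},
]
stores = route_logs(records)   # 1 real-time record, 2 bulk records
```

A production collector would write to the actual stores (e.g., via a MongoDB driver) rather than in-memory lists; only the classification logic is shown here.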

Video Matching Algorithm of Content-Based Video Copy Detection for Copyright Protection (저작권보호를 위한 내용기반 비디오 복사검출의 비디오 정합 알고리즘)

  • Hyun, Ki-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.3
    • /
    • pp.315-322
    • /
    • 2008
  • To search for the location of a copied video in a video database, signatures should be robust to video re-editing, channel noise, and variations in frame rate. Several kinds of signatures have been proposed. Ordinal signatures, one of them, have difficulty describing the spatial characteristics of a frame because of the fixed N×N window over which the average gray value is computed. In this paper, I study a sequence-matching algorithm for video copy detection for copyright protection, employing the R-tree index method for retrieval and proposing robust ordinal signatures for the original video clips and the same signatures for the pirated video. The robust ordinal signature has a two-dimensional vector structure that is strong against noise and frame-rate variation, and it is expressed in MBR form in the R-tree search space. Moreover, I focus on building a video copy detection service into which content publishers register their valuable digital content; the detection algorithm compares web content to the registered content and notifies the content owners of illegal copies. Experimental results show that the proposed method improves the video matching rate and that its signature characteristics are suitable for large video databases.
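An ordinal signature of the kind discussed can be sketched as follows: the frame is partitioned into N×N blocks, each block's mean gray value is computed, and the means are replaced by their ranks, which makes the signature tolerant of global brightness changes. This is a generic textbook illustration, not the paper's exact construction:

```python
def ordinal_signature(frame, n):
    """Rank vector of the n x n block-mean gray values of a frame."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // n, w // n
    means = []
    for bi in range(n):
        for bj in range(n):
            block = [frame[i][j]
                     for i in range(bi * bh, (bi + 1) * bh)
                     for j in range(bj * bw, (bj + 1) * bw)]
            means.append(sum(block) / len(block))
    # Replace each block mean by its rank (0 = darkest block)
    order = sorted(range(len(means)), key=lambda k: means[k])
    ranks = [0] * len(means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

frame = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [50, 50, 120, 120],
         [50, 50, 120, 120]]
sig = ordinal_signature(frame, 2)   # ranks of the 2x2 block means
```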


An Efficient Sequence Matching Method for XML Query Processing (XML 질의 처리를 위한 효율적인 시퀀스 매칭 기법)

  • Seo, Dong-Min;Song, Seok-Il;Yoo, Jae-Soo
    • Journal of KIISE:Databases
    • /
    • v.35 no.4
    • /
    • pp.356-367
    • /
    • 2008
  • As XML is gaining unqualified success in being adopted as a universal data representation and exchange format, particularly in the World Wide Web, the problem of querying XML documents poses interesting challenges to database researcher. Several structural XML query processing methods, including XISS and XR-tree, for past years, have been proposed for fast query processing. However, structural XML query processing has the problem of requiring expensive Join cost for twig path query Recently, sequence matching based XML query processing methods, including ViST and PRIX, have been proposed to solve the problem of structural XML query processing methods. Through sequence matching based XML query processing methods match structured queries against structured data as a whole without breaking down the queries into sub queries of paths or nodes and relying on join operations to combine their results. However, determining the structural relationship of ViST is incorrect because its numbering scheme is not optimized. And PRIX requires many processing time for matching LPS and NPS about XML data trees and queries. Therefore, in this paper, we propose efficient sequence matching method u sing the bottom-up query processing for efficient XML query processing. Also, to verify the superiority of our index structure, we compare our sequence matching method with ViST and PRIX in terms of query processing with linear path or twig path including wild-card('*' and '//').