• Title/Summary/Keyword: Software assessment


Validity and Reliability of a Korean Version of the Flow State Scale for Occupational Task (한글판 작업과제 몰입상태척도(Korean version of Flow State Scale for Occupational Task: K-FSSOT)의 타당도 및 신뢰도연구)

  • Lee, Jeong-Hoon;Park, Ji-Hyuk
    • Therapeutic Science for Rehabilitation
    • /
    • v.10 no.4
    • /
    • pp.53-63
    • /
    • 2021
  • Objective : This study aimed to develop a Korean version of the Flow State Scale for Occupational Task (K-FSSOT) to measure the level of flow experience of clients during occupational therapy activities. Methods : To develop the K-FSSOT, validity and reliability were verified through a systematic development process. Validity was verified by calculating the content validity index (CVI) from a content validity review by 10 occupational therapists and a question-and-answer survey of 20 patients. Reliability was verified by investigating internal consistency and test-retest reliability in 33 patients. Results : The item-level CVI for each question in the content validity study ranged from .90 to 1.00, and the scale-level CVI, the average across all items, was .97, which was judged appropriate. The reliability analysis showed that the internal consistency of the whole scale was high at .855, and the test-retest reliability was .894 (p < .01), indicating a high correlation and very high reliability. Conclusion : The K-FSSOT can be used as a useful tool for occupational therapists concerned with clients' participation and flow experience to measure the level of flow experience during occupational therapy activities.
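The CVI figures reported above follow the usual item-level and scale-level definitions; a minimal Python sketch of that calculation (with hypothetical expert ratings, not the K-FSSOT data) could look like this:

```python
# Sketch of content validity index (CVI) calculation, assuming the standard
# definitions: I-CVI = proportion of experts rating an item 3 or 4 on a
# 4-point relevance scale; S-CVI/Ave = mean of the I-CVIs.
# The ratings below are hypothetical, not the K-FSSOT study data.
ratings = {                      # item -> ratings from 10 experts (1-4 scale)
    "item_1": [4, 4, 3, 4, 4, 3, 4, 4, 4, 4],
    "item_2": [3, 4, 4, 4, 2, 4, 3, 4, 4, 4],
}

def item_cvi(scores):
    """Proportion of experts who rated the item relevant (3 or 4)."""
    return sum(s >= 3 for s in scores) / len(scores)

i_cvi = {item: item_cvi(scores) for item, scores in ratings.items()}
s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)      # scale-level CVI (average)

print(i_cvi)        # e.g. {'item_1': 1.0, 'item_2': 0.9}
print(s_cvi_ave)    # e.g. 0.95
```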

Association Between Cognitive Impairment and Oral Health Related Quality of Life: Using Propensity Score Approaches (인지기능과 구강건강관련 삶의 질의 연관성에 대한 연구: 성향점수 분석과 회귀모델을 중심으로)

  • Cha, Suna;Bae, Suyeong;Nam, Sanghun;Hong, Ickpyo
    • Therapeutic Science for Rehabilitation
    • /
    • v.12 no.3
    • /
    • pp.61-77
    • /
    • 2023
  • Objective : This study analyzed the association between cognitive function and oral health-related quality of life (OHQoL). Methods : Demographic and clinical characteristics were extracted for subjects aged 45 years or older who participated in the 8th Korean Longitudinal Study on Aging in 2020. The dependent variable was the Geriatric Oral Health Assessment Index, and the independent variable was the level of cognitive function classified by Mini-Mental State Examination scores. Inverse probability of treatment weighting (IPTW) was applied, and the association between cognitive function and OHQoL was then analyzed by multiple regression. Results : Among the participants, 4,367 (71.40%) had normal cognition, 1,155 (18.89%) had moderate cognitive impairment, and 594 (9.71%) had severe cognitive impairment. After applying IPTW, there was a negative association between the cognitive impairment groups and OHQoL (normal vs. moderate: β = -2.534, p < .0001; normal vs. severe: β = -2.452, p < .0001). Conclusion : After propensity score adjustment, moderate cognitive impairment showed a slightly stronger negative association than severe cognitive impairment. Therefore, patients with cognitive impairment require oral health management education to improve OHQoL regardless of the level of cognitive impairment.
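A minimal sketch of the IPTW-plus-regression approach described above, using made-up variable names (age, sex, edu, gohai, cog_group) rather than the actual KLoSA fields, and synthetic data in place of the survey:

```python
# Sketch of inverse probability of treatment weighting (IPTW) with a
# three-level exposure (normal / moderate / severe cognitive impairment),
# followed by a weighted regression on an oral-health outcome.
# All data and variable names are illustrative, not the study's own.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age": np.random.randint(45, 90, 500),
    "sex": np.random.randint(0, 2, 500),
    "edu": np.random.randint(0, 4, 500),
    "cog_group": np.random.choice([0, 1, 2], 500),   # 0 normal, 1 moderate, 2 severe
    "gohai": np.random.normal(50, 8, 500),
})

# Generalized propensity scores: P(group | covariates) from a multinomial model
X = df[["age", "sex", "edu"]]
ps_model = LogisticRegression(max_iter=1000).fit(X, df["cog_group"])
ps = ps_model.predict_proba(X)
df["iptw"] = 1.0 / ps[np.arange(len(df)), df["cog_group"].to_numpy()]

# Weighted least squares: GOHAI ~ moderate + severe (normal as reference)
design = sm.add_constant(
    pd.get_dummies(df["cog_group"], prefix="cog", drop_first=True).astype(float))
fit = sm.WLS(df["gohai"], design, weights=df["iptw"]).fit()
print(fit.params)   # cog_1, cog_2 coefficients ~ adjusted group differences
```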

Development of Mental Health Self-Care App for University Student (대학생을 위한 정신건강 자가관리 어플리케이션 개발)

  • Kang, Gwang-Soon;Roh, Sun-Sik
    • Journal of Korea Entertainment Industry Association
    • /
    • v.13 no.1
    • /
    • pp.25-34
    • /
    • 2019
  • The purpose of this study was to develop a mobile app for the mental health self-care of university students. A user-centered design was used, in which needs assessment, analysis, design, development, evaluation, and modification are carried out to fit the subjects. To manage the mental health of university students, the app covers four main areas of mental health problems: drinking, sleep, depression, and stress. It was designed to provide self-test content for each area, analysis and notification of the test results, and a management plan for the current status of each area. Based on this design, an Android-based mental health self-care application was developed. Users can enter their mental health status data, see whether each result falls in the normal or risk range, and then select an appropriate intervention that they can carry out themselves. In addition, a mental health self-care calendar was developed that displays the status of each of the four areas on a day-by-day basis, with the current status presented in an integrated way through animations and status bars. The app is intended to be refined through continuous program improvement.
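As an illustration only (not the app's actual code), the per-domain self-test logic described above, which classifies a score as normal or at-risk and suggests an intervention, might be sketched as follows; the cutoff values and suggestion texts are entirely hypothetical:

```python
# Illustrative sketch of classifying a self-test score in each of the four
# domains into a normal / risk status and suggesting a self-care intervention.
# All cutoffs and intervention texts are hypothetical placeholders.
RISK_CUTOFFS = {          # domain -> score at or above which the user is "at risk"
    "drinking": 8,        # assumed AUDIT-style cutoff
    "sleep": 5,           # assumed PSQI-style cutoff
    "depression": 16,     # assumed CES-D-style cutoff
    "stress": 19,
}

INTERVENTIONS = {
    "drinking": "Drinking diary and moderation tips",
    "sleep": "Sleep hygiene checklist",
    "depression": "Mood diary and counseling center contact",
    "stress": "Breathing and relaxation training",
}

def classify(domain: str, score: int) -> dict:
    """Return the status and a suggested self-care intervention for one domain."""
    at_risk = score >= RISK_CUTOFFS[domain]
    return {
        "domain": domain,
        "status": "risk" if at_risk else "normal",
        "suggestion": INTERVENTIONS[domain] if at_risk else "Keep monitoring",
    }

print(classify("sleep", 7))   # {'domain': 'sleep', 'status': 'risk', ...}
```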

Review for Assessment Methodology of Disaster Prevention Performance using Scientometric Analysis (계량정보 분석을 활용한 방재성능평가 방법에 대한 고찰)

  • Dong Hyun Kim;Hyung Ju Yoo;Seung Oh Lee
    • Journal of Korean Society of Disaster and Security
    • /
    • v.15 no.4
    • /
    • pp.39-46
    • /
    • 2022
  • Rainfall characteristics such as heavy rains are changing from past patterns, and uncertainties are also greatly increasing due to climate change. In addition, urban development and population concentration are aggravating flood damage. Since the causes of urban inundation are generally complex, it is very important to establish an appropriate flood prevention plan. For this reason, the Korean government is establishing disaster prevention performance standards for each local government. Since the concept of the disaster prevention performance target was first presented in 2010, the setting standards have changed several times, but the overall technology, methodology, and procedures have been maintained. Therefore, in this study, studies and technologies related to urban disaster prevention performance were reviewed using scientometric analysis, a method of identifying trends in a field and deriving new knowledge and information from data such as papers and literature. Papers related to disaster prevention performance were collected from the Web of Science for the 30 years from 1990 to 2021. CiteSpace, a scientometric software package, was used to identify authors, research institutes, countries, and research trends, including citation analysis. As a result of the analysis, consideration factors such as the concept of asset evaluation were identified for decision-making related to urban disaster prevention performance. In the future, it is expected that disaster prevention performance standards and procedures can be upgraded if the keywords are specified further and each related technology is reviewed in detail.
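A rough sketch of the kind of keyword co-occurrence counting that scientometric tools such as CiteSpace perform on bibliographic records; the record layout below is a simplified illustration, not the Web of Science export format used in the study:

```python
# Count how often pairs of author keywords appear together across records,
# the basic building block of keyword co-occurrence maps in scientometrics.
from collections import Counter
from itertools import combinations

records = [
    {"keywords": ["urban flood", "disaster prevention", "climate change"]},
    {"keywords": ["disaster prevention", "asset evaluation", "urban flood"]},
    {"keywords": ["rainfall", "climate change", "urban flood"]},
]

cooccurrence = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec["keywords"])), 2):
        cooccurrence[(a, b)] += 1

for pair, count in cooccurrence.most_common(5):
    print(pair, count)   # e.g. ('climate change', 'urban flood') 2
```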

A Systematic Review of the Effects of Visual Perception Interventions for Children With Cerebral Palsy (뇌성마비 아동에게 시지각 중재가 미치는 효과에 대한 체계적 고찰)

  • Ha, Yae-Na;Chae, Song-Eun;Jeong, Mi-Yeon;Yoo, Eun-Young
    • Therapeutic Science for Rehabilitation
    • /
    • v.12 no.2
    • /
    • pp.55-68
    • /
    • 2023
  • Objective : This study aimed to analyze the effects of visual perception intervention by systematically reviewing studies that applied visual perception intervention to children with cerebral palsy. Methods : The databases used were PubMed, EMbase, Science Direct, ProQuest, the Koreanstudies Information Service System (KISS), the Research Information Sharing Service (RISS), and the National Assembly Library. The keywords used were cerebral palsy, CP, and visual perception. Following the PRISMA flowchart, 10 studies were selected from among studies published from January 1, 2012 to March 30, 2022. The quality level of the selected studies, the demographic characteristics of the study participants, the effectiveness of the interventions, the areas and strategies of intervention, the assessment tools used to measure intervention effectiveness, and the risk of bias were analyzed. Results : All selected studies confirmed that visual perception intervention was effective in improving visual perception function. In addition, positive results were shown in upper extremity function, activities of daily living, posture control, goal achievement, and psychosocial areas as well as visual perception function. The eye-hand coordination area was addressed in all studies. Conclusion : In visual perception intervention, it is necessary to evaluate visual perception function by area and to apply systematically graded, customized interventions for each individual.

A Study on Constructing a RMF Optimized for Korean National Defense for Weapon System Development (무기체계 개발을 위한 한국형 국방 RMF 구축 방안 연구)

  • Jung keun Ahn;Kwangsoo Cho;Han-jin Jeong;Ji-hun Jeong;Seung-joo Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.5
    • /
    • pp.827-846
    • /
    • 2023
  • Recently, various information technologies such as network communication and sensors have begun to be integrated into weapon systems that were previously operated in a stand-alone manner. This helps the operators of a weapon system make quick and accurate decisions, thereby allowing effective operation of the system. However, as the involvement of the cyber domain in weapon systems increases, the potential for damage from cyber attacks is also expected to increase. To develop a secure weapon system, it is necessary to implement built-in security, which means considering security from the requirements stage of the software development process. The U.S. Department of Defense is implementing the Risk Management Framework Assessment and Authorization (RMF A&A) process, together with the concept of cybersecurity, for the evaluation and acquisition of weapon systems. Similarly, South Korea is continuously making efforts to implement the Korea Risk Management Framework (K-RMF). So far, however, there are no cases where the K-RMF has been applied from the development stage, and most of the data and documents related to the U.S. RMF A&A are not disclosed for confidentiality reasons. In this study, we propose a method for inferring the composition of the K-RMF based on a systematic threat analysis method and the publicly released documents and data related to the RMF. Furthermore, we demonstrate the effectiveness of the inferring method by applying it to a naval battleship system.

Cardiac Phenotyping of SARS-CoV-2 in British Columbia: A Prospective Echo Study With Strain Imaging

  • Jeffrey Yim;Michael Y.C. Tsang;Anand Venkataraman;Shane Balthazaar;Ken Gin;John Jue;Parvathy Nair;Christina Luong;Darwin F. Yeung;Robb Moss;Sean A Virani;Jane McKay;Margot Williams;Eric C. Sayre;Purang Abolmaesumi;Teresa S.M. Tsang
    • Journal of Cardiovascular Imaging
    • /
    • v.31 no.3
    • /
    • pp.125-132
    • /
    • 2023
  • BACKGROUND: There are limited data on residual echocardiographic findings, including strain analysis, among post-coronavirus disease (COVID) patients. The aim of our study was to prospectively phenotype post-COVID patients. METHODS: All patients discharged following acute COVID infection were systematically followed in the post-COVID-19 Recovery Clinic at Vancouver General Hospital and St. Paul's Hospital. At 4-18 weeks post diagnosis, patients underwent comprehensive echocardiographic assessment. Left ventricular ejection fraction (LVEF) was assessed by 3D, 2D biplane Simpson's, or visual estimate. LV global longitudinal strain (GLS) was measured using a vendor-independent 2D speckle-tracking software (TomTec). RESULTS: A total of 127 patients (53% female, mean age 58 years) were included in the analyses. At baseline, cardiac conditions were present in 58% of the patients (15% coronary artery disease, 4% heart failure, 44% hypertension, 10% atrial fibrillation), while the remainder were free of cardiac conditions. Serious COVID-19 complications were present in 79% of the patients (76% pneumonia, 37% intensive care unit admission, 21% intubation, 1% myocarditis). Normal LVEF was seen in 96% of the cohort, and 97% had normal right ventricular systolic function. A high proportion (53%) had abnormal LV GLS, defined as < 18%. The average LV GLS of the septal and inferior segments was lower than that of the other segments. Among patients without pre-existing cardiac conditions, LVEF was abnormal in only 1.9%, but LV GLS was abnormal in 46% of the patients. CONCLUSIONS: Most post-COVID patients had normal LVEF at 4-18 weeks post diagnosis, but over half had abnormal LV GLS.

A School-tailored High School Integrated Science Q&A Chatbot with Sentence-BERT: Development and One-Year Usage Analysis (인공지능 문장 분류 모델 Sentence-BERT 기반 학교 맞춤형 고등학교 통합과학 질문-답변 챗봇 -개발 및 1년간 사용 분석-)

  • Gyeongmo Min;Junehee Yoo
    • Journal of The Korean Association For Science Education
    • /
    • v.44 no.3
    • /
    • pp.231-248
    • /
    • 2024
  • This study developed a chatbot for first-year high school students, employing open-source software and a Korean Sentence-BERT model for AI-powered document classification. The chatbot uses the Sentence-BERT model to find the six Q&A pairs most similar to a student's query and presents them in a carousel format. The initial dataset, built from online resources, was refined and expanded based on student feedback and usability over the operational period. By the end of the 2023 academic year, the chatbot's dataset had grown to a total of 30,819 entries, and 3,457 student interactions had been recorded. Analysis revealed students' inclination to use the chatbot when prompted by teachers during classes and primarily during self-study sessions after school, with an average of 2.1 to 2.2 inquiries per session, mostly via mobile phones. Text mining identified student input terms encompassing not only science-related queries but also aspects of school life such as assessment scope. Topic modeling using BERTopic, based on Sentence-BERT, categorized 88% of the student questions into 35 topics, shedding light on common student interests. A year-end survey confirmed the efficacy of the carousel format and the chatbot's role in addressing curiosities beyond the integrated science learning objectives. This study underscores the importance of developing chatbots tailored for student use in public education and highlights their educational potential through long-term usage analysis.
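A minimal sketch of the retrieval step described above, using the sentence-transformers library with an assumed Korean Sentence-BERT checkpoint and placeholder Q&A data (the deployed chatbot used its own dataset of roughly 30,000 pairs, in Korean):

```python
# Embed the stored questions with a Korean Sentence-BERT model, embed the
# student query, and return the six most similar Q&A pairs by cosine
# similarity. The model name and Q&A entries are placeholders, not the study's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sbert-nli")   # assumed Korean SBERT checkpoint

qa_pairs = [
    {"q": "What is a change of state of matter?",
     "a": "A transition between solid, liquid, and gas."},
    {"q": "What is the scope of the midterm exam?",
     "a": "Units 1 through 3."},
    # ... the deployed chatbot held ~30,000 such pairs (in Korean)
]

question_embeddings = model.encode([p["q"] for p in qa_pairs], convert_to_tensor=True)

def top_answers(student_query: str, k: int = 6):
    """Return the k stored Q&A pairs most similar to the student's query."""
    query_emb = model.encode(student_query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, question_embeddings, top_k=k)[0]
    return [qa_pairs[h["corpus_id"]] for h in hits]

# The chatbot then presents the returned pairs to the student in a carousel.
print(top_answers("When does matter melt?"))
```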

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. All the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus (a minimal illustration of this mechanism is sketched below). First, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. Low-level coding, however, is not always bad: it gives flexibility, and with low-level coding such as in Theano, any new deep learning model or search method can be implemented and tested. Regarding execution speed, there was no meaningful difference between the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes up to 50 times slower than with a GPU, but we concluded that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for a user learning deep learning models, the availability of sufficient examples and references matters as well.
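The automatic-differentiation mechanism shared by all three frameworks can be illustrated with a tiny, framework-agnostic Python sketch: every node of the computational graph records the local partial derivative on each edge to its parents, and the chain rule is applied backwards from the output. This is not code from the paper, only an illustration of the mechanism it describes:

```python
# Minimal reverse-mode automatic differentiation over a computational graph.
class Node:
    def __init__(self, value, parents=()):
        self.value = value          # forward value at this node
        self.parents = parents      # tuples of (parent_node, local_partial)
        self.grad = 0.0             # accumulated d(output)/d(this node)

def add(a, b):
    return Node(a.value + b.value, ((a, 1.0), (b, 1.0)))

def mul(a, b):
    return Node(a.value * b.value, ((a, b.value), (b, a.value)))

def backward(node, upstream=1.0):
    """Propagate d(output)/d(node) to the parents via the chain rule."""
    node.grad += upstream
    for parent, local_partial in node.parents:
        backward(parent, upstream * local_partial)

# y = x1 * x2 + x2, evaluated at x1 = 3, x2 = 4
x1, x2 = Node(3.0), Node(4.0)
y = add(mul(x1, x2), x2)
backward(y)
print(x1.grad, x2.grad)   # 4.0 4.0  (dy/dx1 = x2, dy/dx2 = x1 + 1)
```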

Development of Evaluation Method of Regional Contractility of Left Ventricle Using Gated Myocardial SPECT and Assessment of Reproducibility (게이트 심근 SPECT를 이용한 좌심실의 국소탄성률 평가방법 개발 및 재현성 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Kang, Won-Jun;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.6
    • /
    • pp.355-363
    • /
    • 2003
  • Purpose: Regional contractility can be calculated from the regional volume change of the left ventricle measured on gated myocardial SPECT images and the central arterial pressure curve obtained from radial artery pressure data. In this study, a program to obtain regional contractility was developed, and the reproducibility of the regional contractility measurement was assessed. Materials and Methods: Seven patients (male:female = 5:2, 58 ± 11.9 years) with coronary artery disease underwent gated Tc-99m MIBI myocardial SPECT twice, without delay between the two scans. The regional volume change of the left ventricle was estimated using the CSA (Cardiac SPECT Analyzer) software developed in this study. Regional contractility was iteratively estimated from the time-elastance curve obtained using the time-pressure curve and the regional time-volume curve. Reproducibility of the regional contractility measurement was assessed by comparing contractility values measured twice from the same SPECT data and by comparing values measured from the pair of SPECT scans obtained from the same patient. Results: Measured regional contractility was 3.36 ± 3.38 mmHg/mL using the 15-segment model, 3.16 ± 2.25 mmHg/mL using the 7-segment model, and 3.11 ± 2.57 mmHg/mL using the 5-segment model. The harmonic mean of the regional contractility values was almost identical to the global contractility. The correlation coefficient of regional contractility values measured twice from the same data was greater than 0.97 for all models, and two standard deviations of the contractility difference on the Bland-Altman plot were 1.5%, 1.0%, and 0.9% for the 15-, 7-, and 5-segment models, respectively. The correlation coefficient of regional contractility values measured from the pair of SPECT scans obtained from the same patient was greater than 0.95 for all models, and two standard deviations on the Bland-Altman plot were 2.2%, 1.0%, and 1.2%. Conclusion: The regional contractility of the left ventricle measured using the software developed in this study was reproducible. Regional contractility of the left ventricle may serve as a new, useful index of myocardial function after further analysis of clinical data.
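Under the standard time-varying elastance assumption E(t) = P(t) / (V(t) - V0), a regional contractility index can be read off as the peak elastance. The sketch below uses made-up pressure and regional volume samples and a fixed assumed V0, whereas the study estimated contractility iteratively from measured time-pressure and regional time-volume curves with its own CSA software:

```python
# Illustrative time-varying elastance calculation (not the CSA software).
# Pressure/volume samples and the unstressed volume V0 are made up.
import numpy as np

t = np.linspace(0.0, 0.8, 9)                       # one cardiac cycle, seconds
pressure = np.array([10, 40, 80, 110, 120, 115, 90, 40, 12], dtype=float)  # mmHg
regional_volume = np.array([9.0, 8.6, 8.0, 7.2, 6.5, 6.3, 6.8, 8.2, 9.0])  # mL
v0 = 2.0                                           # assumed unstressed volume, mL

elastance = pressure / (regional_volume - v0)      # E(t) in mmHg/mL
emax = elastance.max()                             # peak elastance ~ contractility index

print(f"Regional Emax = {emax:.2f} mmHg/mL at t = {t[elastance.argmax()]:.2f} s")
```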