• Title/Summary/Keyword: software tool

2,142 search results

A Convergence Study of Surface Electromyography in Swallowing Stages for Swallowing Function Evaluation in Older Adults: Systematic Review (노인의 삼킴 단계별 삼킴 기능 평가를 위한 표면 근전도 검사의 융합적 연구 : 체계적 문헌고찰)

  • Park, Sun-Ha;Bae, Suyeong;Kim, Jung-eun;Park, Hae-Yean
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2022
  • In this study, a systematic review was conducted to analyze how sEMG has been applied to evaluate the swallowing function of older adults at each stage of swallowing, and to support objective measurement of swallowing stages in older adults in clinical practice. Seven studies published between 2011 and 2021 that met the selection criteria were identified through PubMed, Scopus, and Web of Science (WoS). In the included studies, older adults and younger adults were divided into experimental and control groups, and the swallowing stages were analyzed with sEMG only for the older adults. sEMG was used to evaluate swallowing in the oral and pharyngeal stages, with electrodes attached over the swallowing muscles involved in each stage. The collected sEMG data were filtered with a band-pass filter and a notch filter, and were analyzed using RMS, amplitude, and maximum voluntary contraction. This study found that sEMG can serve as a tool to evaluate swallowing function objectively and quantitatively by stage. It is therefore expected to stimulate further research incorporating sEMG for stage-specific evaluation of swallowing function.
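The amplitude measures named in the abstract (RMS, normalization to maximum voluntary contraction) are standard sEMG processing steps; the sketch below is a generic Python simplification, with window length and signal values purely illustrative and not taken from the reviewed studies.

```python
import math

def window_rms(signal, win):
    """Root-mean-square amplitude of an sEMG signal over consecutive windows."""
    out = []
    for i in range(0, len(signal) - win + 1, win):
        seg = signal[i:i + win]
        out.append(math.sqrt(sum(x * x for x in seg) / win))
    return out

def percent_mvc(rms_values, mvc_rms):
    """Normalize RMS amplitude to a maximum voluntary contraction (MVC) reference."""
    return [100.0 * r / mvc_rms for r in rms_values]
```

In practice the signal would first pass through the band-pass and notch filters mentioned in the abstract; only the amplitude step is shown here.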

Applicability of QSAR Models for Acute Aquatic Toxicity under the Act on Registration, Evaluation, etc. of Chemicals in the Republic of Korea (화평법에 따른 급성 수생독성 예측을 위한 QSAR 모델의 활용 가능성 연구)

  • Kang, Dongjin;Jang, Seok-Won;Lee, Si-Won;Lee, Jae-Hyun;Lee, Sang Hee;Kim, Pilje;Chung, Hyen-Mi;Seong, Chang-Ho
    • Journal of Environmental Health Sciences
    • /
    • v.48 no.3
    • /
    • pp.159-166
    • /
    • 2022
  • Background: A quantitative structure-activity relationship (QSAR) model was adopted in the Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH, EU) regulations as well as the Act on Registration, Evaluation, etc. of Chemicals (AREC, Republic of Korea), and has previously been used in the registration of chemicals. Objectives: In this study, we investigated the correlation between the predicted data provided by three prediction programs using a QSAR model and actual experimental results (acute toxicity to fish and Daphnia magna). Through this approach, we aimed to assess the predictive performance and identify the most applicable programs for designating toxic substances under the AREC. Methods: Chemicals that had been registered and evaluated under the Toxic Chemicals Control Act (TCCA, Republic of Korea) were selected for this study. Two prediction programs developed and operated by the U.S. EPA - the Ecological Structure-Activity Relationship (ECOSAR) and Toxicity Estimation Software Tool (T.E.S.T.) models - were utilized along with the TOPKAT (Toxicity Prediction by Komputer Assisted Technology) commercial program. The applicability of the three programs was evaluated according to three parameters: accuracy, sensitivity, and specificity. Results: The prediction analysis on fish and Daphnia magna showed that the TOPKAT program had better sensitivity than the others. Conclusions: Although TOPKAT performed well as a single predictive program for toxic substance designation, relying on a single program involves many restrictions. It is necessary to validate the reliability of predictions by utilizing multiple methods when applying prediction programs to the regulation of chemicals.
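The three evaluation parameters named in the abstract are standard confusion-matrix statistics; a minimal Python sketch (not code from the paper) makes the definitions concrete.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.

    tp/fp/tn/fn: true/false positives and negatives, e.g. whether a
    program correctly predicted a chemical as toxic or non-toxic.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: toxic chemicals caught
    specificity = tn / (tn + fp)   # true negative rate: non-toxic chemicals cleared
    return accuracy, sensitivity, specificity
```

High sensitivity, as reported for TOPKAT, means few toxic chemicals are missed, which is the property that matters most for designation decisions.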

Modelling Gas Production Induced Seismicity Using 2D Hydro-Mechanical Coupled Particle Flow Code: Case Study of Seismicity in the Natural Gas Field in Groningen Netherlands (2차원 수리-역학적 연계 입자유동코드를 사용한 가스생산 유발지진 모델링: 네덜란드 그로닝엔 천연가스전에서의 지진 사례 연구)

  • Jeoung Seok Yoon;Anne Strader;Jian Zhou;Onno Dijkstra;Ramon Secanell;Ki-Bok Min
    • Tunnel and Underground Space
    • /
    • v.33 no.1
    • /
    • pp.57-69
    • /
    • 2023
  • In this study, we simulated induced seismicity in the Groningen natural gas reservoir using 2D hydro-mechanically coupled discrete element modelling (DEM). The code used is PFC2D (Particle Flow Code 2D), commercial software developed by Itasca, which we extended for this study with 1) initialization of an inhomogeneous reservoir pressure distribution, 2) a non-linear pressure-time history boundary condition, and 3) local stress field monitoring logic. We generated a 2D reservoir model 40 × 50 km in size with a complex fault system and simulated six decades of pressure depletion, from 1960 to 2020. We simulated fault system failure induced by pressure depletion, reproduced the spatiotemporal distribution of induced seismicity, and assessed its failure mechanism. We also estimated the ground subsidence distribution and confirmed its similarity to field measurements in the Groningen region. Through this study, we confirm the feasibility of the presented 2D hydro-mechanically coupled DEM for simulating the deformation of a complex fault system under coupled hydro-mechanical processes.

How to automatically extract 2D deliverables from BIM?

  • Kim, Yije;Chin, Sangyoon
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1253-1253
    • /
    • 2022
  • Although the construction industry is shifting from 2D-based to 3D BIM-based management processes, 2D drawings are still used as the standard for permits and construction. For this reason, 2D deliverables extracted from 3D BIM are among the essential outputs of BIM projects. However, due to technical and institutional problems in practice, extracting 2D deliverables from BIM requires additional work beyond generating the 3D BIM models. In addition, the low consistency between 3D BIM models and 2D deliverables is a major factor hindering productivity in practice. Solving this problem requires building BIM data that meets the information requirements (IRs) for extracting 2D deliverables, minimizing users' workload and maximizing the utilization of BIM data. Even so, the additional drawing-creation work in the BIM process remains a burden on BIM users. The purpose of this study is therefore to increase the productivity of the BIM process by automating the extraction of 2D deliverables from BIM and securing data consistency between the BIM model and the 2D deliverables. Expert interviews were conducted, and the requirements for automating the extraction of 2D deliverables from BIM were analyzed. Based on these requirements, the drawing types and drawing expression elements that require automated generation in the design development stage were derived. Finally, methods for developing automation technology targeting these elements were classified and analyzed, and a process for automatically extracting BIM-based 2D deliverables through templates and rule-based automation modules was derived. The automation module was developed as an add-on to Revit, a representative BIM authoring tool, with 120 rule-based automation rulesets; combinations of these rulesets were used to automatically generate 2D deliverables from BIM. About 80% of drawing expression elements could be created automatically, and the user's work process was simplified compared to existing practice. Through the automation process proposed in this study, the productivity of extracting 2D deliverables from BIM is expected to increase, thereby increasing the practical value of BIM utilization.
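The ruleset-driven generation described above can be sketched in pure Python. The categories, actions, and rule structure here are hypothetical stand-ins; the actual module runs as a Revit add-on and its 120 rulesets are not published in this abstract.

```python
# Hypothetical rulesets: each maps a BIM element category to a 2D drawing
# action. Combinations of matching rules produce the drawing expression
# elements for each BIM element.
RULESETS = [
    {"category": "Wall", "action": "draw_outline"},
    {"category": "Door", "action": "insert_symbol"},
    {"category": "Wall", "action": "tag_material"},
]

def apply_rulesets(elements):
    """Return the 2D drawing actions produced for a list of BIM elements."""
    actions = []
    for el in elements:
        for rule in RULESETS:
            if rule["category"] == el["category"]:
                actions.append((el["id"], rule["action"]))
    return actions
```

The point of the sketch is the design choice: encoding drawing conventions as data (rules) rather than code lets new drawing types be supported by adding rules, not by reprogramming the add-on.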


A Review of Clinical Studies for Chinese Medicine Treatment of Idiopathic Thrombocytopenic Purpura Using the CNKI Database (특발성 혈소판 감소성 자반증의 중의치료에 대한 임상연구 동향 - CNKI검색을 중심으로)

  • Ji-eun Bae;Jae-won Park;Jun-kyu Lim;Mi-so Park;Jeong-su Hong;Dong-jin Kim
    • The Journal of Internal Korean Medicine
    • /
    • v.43 no.6
    • /
    • pp.1045-1062
    • /
    • 2022
  • Objectives: The aim of this study was to analyze the latest clinical studies on Korean medicine treatment of idiopathic thrombocytopenic purpura (ITP) in the Chinese National Knowledge Infrastructure (CNKI) database. Methods: We searched the last 6 years of clinical studies in the CNKI database discussing traditional medicine-based treatments for ITP. A meta-analysis of 13 RCTs was performed by synthesizing the outcomes, including the measured platelet count and total effective rate. The quality of the studies was assessed using Cochrane's risk of bias (RoB) tool, and RevMan 5.4.1 software was used for data analysis. Results: Of the 15 selected studies, 1 was a non-randomized controlled trial (nRCT), 2 were case series, and 12 were randomized controlled trials (RCTs). Treatments in all studies included oral herbal medicine. The most frequently used herbal decoction was the Liangxue Jiedu prescription (凉血解毒方), and the most commonly used herbs were Agrimonia pilosa (仙鶴草), Astragali Radix (黃芪), Glycyrrhizae Radix et Rhizoma (甘草), and Rehmannia glutinosa Liboschitz ex Steudel (地黃). The meta-analysis showed significantly better improvement in platelet counts and total effective rate for ITP in the treatment group than in the control group. Conclusion: Treatment with herbal medicine was effective in treating ITP. However, the significance of this conclusion is somewhat limited by the low quality of the available studies. Multifaceted and scientifically designed clinical studies are required to develop treatments for ITP based on Korean medicine. The results of this study could be used as basic data for further ITP studies.
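The pooling step of such a meta-analysis (performed in RevMan in the study) can be illustrated with a generic fixed-effect, inverse-variance sketch in Python; the numbers in the test are illustrative, not the study's data.

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect (fixed-effect model).

    effects: per-study effect sizes (e.g. mean difference in platelet count)
    variances: per-study sampling variances of those effects
    Returns the pooled effect and its standard error.
    """
    weights = [1.0 / v for v in variances]       # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se
```

RevMan also offers random-effects pooling for heterogeneous studies; the fixed-effect form is shown only because it is the simplest to state.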

Electric Vehicle Wireless Charging Control Module EMI Radiated Noise Reduction Design Study (전기차 무선충전컨트롤 모듈 EMI 방사성 잡음 저감에 관한 설계 연구)

  • Seungmo Hong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.2
    • /
    • pp.104-108
    • /
    • 2023
  • With the recent expansion of the electric vehicle market, performance and safety issues increasingly need to be addressed. EMI problems caused by the interaction of electrical components, which can lead to safety problems such as vehicle fires, continue to emerge. In this study, a wireless charging control module, one of the important components of an electric vehicle, was designed and tested to achieve optimal charging efficiency by combining various technologies and to reduce radiated noise, one component of EMI noise. To analyze the EMI problems occurring in the wireless charging control module, the module was optimized by applying a design optimization technique that learns from accumulated test data for critical factors, using the Python-based scripting function of the Ansys simulation tool. The optimized module showed an EMI noise improvement of 25 dBμV/m compared to the original charging control module. These results contribute to a more stable and reliable wireless charging function in electric vehicles and increase their usability and efficiency, supporting electric vehicles as an environmentally friendly alternative.

Analysis and Orange Utilization of Training Data and Basic Artificial Neural Network Development Results of Non-majors (비전공자 학부생의 훈련데이터와 기초 인공신경망 개발 결과 분석 및 Orange 활용)

  • Kyeong Hur
    • Journal of Practical Engineering Education
    • /
    • v.15 no.2
    • /
    • pp.381-388
    • /
    • 2023
  • Through artificial neural network education using spreadsheets, non-major undergraduate students can understand the operating principles of artificial neural networks and develop their own artificial neural network software. Training on these principles starts with generating training data and assigning correct-answer labels. Students then learn how the output value is calculated from the firing and activation functions of the artificial neurons and the parameters of the input, hidden, and output layers. Finally, they learn how to calculate the error between the correct label of each training example and the output calculated by the network, and how to find the input-, hidden-, and output-layer parameters that minimize the total sum of squared errors. This spreadsheet-based training on the operating principles of artificial neural networks was conducted for non-major undergraduate students, and image training data and the resulting basic artificial neural networks were collected. In this paper, we analyze two types of collected training data, consisting of small 12-pixel images, and the corresponding artificial neural network software, and present methods and execution results for using the collected training data with the Orange machine learning and analysis tool.
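The error-minimization process the students learn can be sketched as a single artificial neuron trained by gradient descent on the sum of squared errors. This is a generic illustration in Python, not the course's spreadsheet; the data and learning rate are made up.

```python
import math

def sigmoid(z):
    """Activation function of the artificial neuron."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Neuron output from its firing (w*x + b) and activation."""
    return sigmoid(w * x + b)

def train_neuron(data, lr=0.5, epochs=1000):
    """Fit weight w and bias b by gradient descent on the squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            out = predict(w, b, x)
            # gradient of the squared error w.r.t. the pre-activation
            grad = (out - y) * out * (1.0 - out)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def sse(w, b, data):
    """Total sum of squared errors over the training data."""
    return sum((predict(w, b, x) - y) ** 2 for x, y in data)
```

Each spreadsheet column in the described course corresponds to one of these steps: firing, activation, error, and parameter update.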

Implementation of Git's Commit Message Classification Model Using GPT-Linked Source Change Data

  • Ji-Hoon Choi;Jae-Woong Kim;Seong-Hyun Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.123-132
    • /
    • 2023
  • Git's commit messages record the history of source changes during project development and operation. By utilizing this historical data, project risks and project status can be identified, reducing costs and improving time efficiency. Much related research is in progress; one of these research areas classifies commit messages by software maintenance type, and the maximum classification accuracy reported among published studies is 95%. In this paper, we began with the goal of building solutions around a commit classification model, and worked to remove the limitation that the most accurate model among existing studies can only be applied to programs written in Java. To this end, we designed and implemented an additional step that standardizes source change data into natural language using GPT. This paper explains the process of extracting commit messages and source change data from Git, standardizing the source change data with GPT, and training with the DistilBERT model. Verification measured an accuracy of 91%. The proposed model was implemented and verified to maintain accuracy while classifying commits without depending on a specific programming language. In the future, we plan to study a classification model using Bard and a project management tool built on the proposed classification model.
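As a toy illustration of classifying commit messages into maintenance types, here is a keyword-rule baseline in pure Python. The labels and keywords are hypothetical; the paper's actual pipeline uses GPT-standardized source changes and a fine-tuned DistilBERT model, not rules.

```python
# Hypothetical keyword rules standing in for a learned classifier.
# Maintenance-type labels follow a common corrective/adaptive/perfective split.
RULES = {
    "corrective": ("fix", "bug", "error", "fault"),
    "adaptive":   ("add", "support", "update", "upgrade"),
    "perfective": ("refactor", "clean", "simplify", "docs"),
}

def classify_commit(message):
    """Return the first maintenance type whose keywords match the message."""
    msg = message.lower()
    for label, keywords in RULES.items():
        if any(k in msg for k in keywords):
            return label
    return "other"
```

A rule baseline like this is language-agnostic by construction, which is exactly the property the paper pursues for its learned model via GPT standardization of diffs.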

A Study on the Domain Discrimination Model of CSV Format Public Open Data

  • Ha-Na Jeong;Jae-Woong Kim;Young-Suk Chung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.129-136
    • /
    • 2023
  • The government of the Republic of Korea manages the quality of public open data by evaluating public data quality management levels. Public open data is provided in various open formats such as XML, JSON, and CSV, with CSV accounting for the majority. When diagnosing the quality of public open data in CSV format, the quality diagnostician determines the domain of each field based on the field name and the data within the field of the public open data file. However, this takes a lot of time because quality diagnosis is performed on large numbers of open data files. In addition, for fields whose meaning is difficult to understand, the accuracy of the diagnosis depends on the diagnostician's ability to interpret the data. This paper proposes a domain discrimination model for public open data in CSV format that uses field names and data distribution statistics, so that quality diagnosis results are consistent and accurate regardless of the diagnostician's capabilities, and so that diagnosis time is shortened. Applying the proposed model yielded a correct answer rate of about 77%, 2.8 percentage points higher than the file-format open data diagnostic tool provided by the Ministry of Public Administration and Security. We expect the proposed model to improve accuracy when diagnosing and evaluating public data quality management levels.
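The idea of discriminating a field's domain from its name and its value distribution can be sketched as follows. The domain labels, name keywords, and the 90% numeric threshold are hypothetical simplifications, not the paper's actual model.

```python
import re

def discriminate_domain(field_name, values):
    """Guess a CSV field's domain from its name and value distribution.

    Simplified stand-in for a domain discrimination model: first check
    name keywords, then fall back to a statistic over the values.
    """
    name = field_name.lower()
    # Name-based signal: Korean public data field names often contain
    # domain hints such as 일자/날짜 ("date").
    if any(k in name for k in ("date", "일자", "날짜")):
        return "date"
    # Distribution-based signal: share of values that parse as numbers.
    numeric = sum(1 for v in values if re.fullmatch(r"-?\d+(\.\d+)?", v))
    if values and numeric / len(values) >= 0.9:
        return "number"
    return "text"
```

Combining both signals is the key point: the name disambiguates fields whose values look numeric (e.g. date codes), while the value statistics cover fields with uninformative names.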

An Exploratory Study on the Trustworthiness Analysis of Generative AI (생성형 AI의 신뢰도에 대한 탐색적 연구)

  • Soyon Kim;Ji Yeon Cho;Bong Gyou Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.79-90
    • /
    • 2024
  • This study focused on user trust in ChatGPT, a generative AI technology, and explored the factors that affect usage status and the intention to continue using it, as well as whether the influence of trust varies with the purpose of use. A survey was conducted targeting people in their 20s and 30s, who use ChatGPT the most. Statistical analysis was performed with IBM SPSS 27 and SmartPLS 4.0. A structural equation model was formulated on the foundation of Bhattacherjee's Expectation-Confirmation Model (ECM), employing path analysis and Multi-Group Analysis (MGA) for hypothesis validation. The main findings are as follows. First, ChatGPT is mainly used for specific needs or objectives rather than as a daily tool; the majority of users are aware of its hallucination effects, but this did not hinder its use. Second, hypothesis testing indicated that the independent variables of expectation-confirmation, perceived usefulness, and user satisfaction all exert a positive influence on the dependent variable, continuance intention. Third, the influence of trust varied with the user's purpose: trust was significant when ChatGPT was used for information retrieval but not for creative purposes. This study can be used to address trust problems in the process of introducing generative AI in society and companies, and to establish policies and derive improvement measures for its successful adoption.