• Title/Summary/Keyword: computer models

Search Results: 3,894

Testing the Equality of Two Linear Regression Models : Comparison between Chow Test and a Permutation Test

  • Um, Yonghwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.8
    • /
    • pp.157-164
    • /
    • 2021
  • Regression analysis is a well-known statistical technique for explaining the relationship between a response variable and predictor variables. In particular, researchers are often interested in comparing the regression coefficients (intercepts and slopes) of models fitted to two independent populations. The Chow test, proposed by Gregory Chow, is one of the most commonly used methods for comparing regression models and for testing for the presence of a structural break in linear models. In this study, we propose the use of a permutation method and compare it with the Chow test for testing the equality of two independent linear regression models. A simulation study is then conducted to examine the powers of the permutation test and the Chow test.
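The core idea of a permutation approach of this kind can be sketched as follows: pool the observations from the two samples, repeatedly reassign them at random to two groups of the original sizes, and compare the resulting slope differences with the observed one. This is a minimal illustration of the general technique, not the authors' implementation; the test statistic (absolute slope difference) and all names are assumptions.

```python
import random

def slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def permutation_test_slopes(x1, y1, x2, y2, n_perm=2000, seed=0):
    """Two-sided permutation p-value for H0: the two slopes are equal.

    Pools the (x, y) pairs, repeatedly reassigns them at random to two
    groups of the original sizes, and counts how often the permuted
    slope difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(slope(x1, y1) - slope(x2, y2))
    pooled = list(zip(x1, y1)) + list(zip(x2, y2))
    n1 = len(x1)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        g1, g2 = pooled[:n1], pooled[n1:]
        diff = abs(slope([p[0] for p in g1], [p[1] for p in g1])
                   - slope([p[0] for p in g2], [p[1] for p in g2]))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction

# Two samples with clearly different slopes (2 vs. 5):
x1 = list(range(10)); y1 = [2 * x + 1 for x in x1]
x2 = list(range(10)); y2 = [5 * x - 3 for x in x2]
p = permutation_test_slopes(x1, y1, x2, y2)
```

With slopes this different, the permutation p-value comes out very small, so equality of the two regressions is rejected.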

Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.8
    • /
    • pp.242-248
    • /
    • 2022
  • The article presents an analysis of modern methods for automatically comparing original and unoriginal text to detect plagiarism. The study covers two types of plagiarism: literal, in which plagiarists copy the text exactly without changing anything, and intelligent, which uses more sophisticated techniques that are harder to detect because of text manipulation such as word and sign replacement. Standard techniques for extrinsic detection are string-based, vector-space, and semantic-based. The first, most common, and most successful target models for detecting literal plagiarism, N-gram and Vector Space, are analyzed, and their advantages and disadvantages are evaluated. The most effective target models for detecting intelligent plagiarism, particularly for identifying paraphrases by measuring the semantic similarity of short text components, are investigated. Models using neural network architectures based on natural language sentence matching, such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM), and Bidirectional Encoder Representations from Transformers (BERT) and its family of models, are considered. Progress in improving plagiarism detection systems, techniques, and related models is summarized. Relevant and urgent problems that remain unresolved in detecting intelligent plagiarism, namely the effective recognition of unoriginal ideas and of qualitatively paraphrased text, are outlined.
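A string-based technique of the kind used for literal plagiarism detection can be illustrated with word n-gram overlap. The sketch below, with assumed function names, scores two texts by the Jaccard similarity of their word-trigram sets; it is a generic illustration, not a method taken from the article.

```python
def word_ngrams(text, n=3):
    # Set of lowercased word n-grams of the text.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_ngram_similarity(a, b, n=3):
    """Jaccard overlap of word n-gram sets; 1.0 means identical sets,
    0.0 means no shared n-grams at all."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

original = "the quick brown fox jumps over the lazy dog"
copied   = "the quick brown fox jumps over the lazy dog"
changed  = "a completely different sentence about something else entirely"

sim_copy = jaccard_ngram_similarity(original, copied)     # exact copy
sim_changed = jaccard_ngram_similarity(original, changed) # unrelated text
```

Exact copying scores 1.0 while unrelated text scores 0.0; the weakness the article points out is that paraphrasing drives this score down even when the idea is stolen, which is why semantic models such as BERT are needed for intelligent plagiarism.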

High-Capacity Robust Image Steganography via Adversarial Network

  • Chen, Beijing;Wang, Jiaxin;Chen, Yingyue;Jin, Zilong;Shim, Hiuk Jae;Shi, Yun-Qing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.1
    • /
    • pp.366-381
    • /
    • 2020
  • Steganography has been successfully employed in various applications, e.g., copyright control of materials, smart identity cards, and video error correction during transmission. Deep learning-based steganography models can hide information adaptively through network learning and have drawn much attention. However, the capacity, security, and robustness of existing deep learning-based steganography models are still not fully satisfactory. In this paper, three models are proposed for different cases: a basic model, a secure model, and a secure and robust model. In the basic model, high-capacity hiding and extraction of secret information are realized through an encoding network and a decoding network, respectively. High-capacity steganography is implemented by hiding a secret image in a carrier image of the same resolution with the help of concat operations, InceptionBlock, and convolutional layers. Moreover, the secret image is hidden in the B channel of the carrier image only, to resolve the problem of color distortion. In the secure model, a steganalysis network is added to the basic model to form an adversarial network and enhance security. In the secure and robust model, an attack network is inserted into the secure model to further improve its robustness. The experimental results demonstrate that the proposed secure model and secure and robust model perform better overall than some existing high-capacity deep learning-based steganography models. The secure model performs best in invisibility and security, and the secure and robust model is the most robust against certain attacks.
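The idea of confining the embedding to the B channel to limit visible color distortion can be illustrated with a classical least-significant-bit scheme. The paper itself uses learned encoder/decoder networks, so this is only an analogy of the channel-B idea, and all names are assumptions:

```python
def hide_bytes_in_blue(pixels, secret):
    """Hide the bytes of `secret` in the least-significant bit of each
    pixel's blue component. `pixels` is a list of (R, G, B) tuples.
    A classical LSB sketch, not the paper's learned encoder."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for secret")
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            b = (b & ~1) | bits[i]   # overwrite only the LSB of B
        out.append((r, g, b))       # R and G are left untouched
    return out

def extract_bytes_from_blue(pixels, n_bytes):
    # Read back the blue-channel LSBs and reassemble the bytes.
    bits = [b & 1 for (_, _, b) in pixels[:n_bytes * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return bytes(data)

carrier = [(10, 20, 30)] * 64           # 64 RGB pixels
stego = hide_bytes_in_blue(carrier, b"hi")
recovered = extract_bytes_from_blue(stego, 2)
```

Because only the blue component is modified, any distortion is confined to one channel, which is the same motivation the paper gives for hiding the secret image in channel B.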

Recent Automatic Post Editing Research (최신 기계번역 사후 교정 연구)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of Digital Convergence
    • /
    • v.19 no.7
    • /
    • pp.199-208
    • /
    • 2021
  • Automatic Post Editing (APE) is the study of automatically correcting errors in machine-translated sentences. The goal of the APE task is to build error-correcting models that improve translation quality regardless of the translation system. To train these models, the source sentence, the machine translation, and the post-edit, which is manually produced by a human translator, are used. In recent APE research in particular, multilingual pretrained language models have been adopted prior to training on APE data. This study surveys the multilingual pretrained language models adopted in the latest APE research and the specific way each study applies them. Furthermore, based on current research trends, we propose future research directions utilizing translation models or the mBART model.

Comparative Study of PSO-ANN in Estimating Traffic Accident Severity

  • Md. Ashikuzzaman;Wasim Akram;Md. Mydul Islam Anik;Taskeed Jabid;Mahamudul Hasan;Md. Sawkat Ali
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.95-100
    • /
    • 2023
  • Due to traffic accidents, people face health and economic losses around the world. As the population increases, the number of vehicles on the road increases, which leads to congestion in cities, and congestion in turn raises accident risk as transportation systems expand. Modern cities are adopting various technologies to minimize traffic accidents through mathematical prediction. Since traffic accidents cause economic losses and potential deaths, the smart city concept is a sensible response for ensuring people's safety. In a smart city, traffic accident factors such as road condition, light condition, and weather condition are important to consider when predicting traffic accident severity. Several machine learning models can be employed to determine and predict traffic accident severity. This paper illustrates the performance of a hybridized neural network and compares it with other machine learning models in order to measure the accuracy of predicting traffic accident severity. A dataset from the city of Leeds, UK, is used to train and test the models, and the results are compared with each other. Particle swarm optimization with an artificial neural network (PSO-ANN) gave promising results compared to other machine learning models such as Random Forest, Naïve Bayes, Nearest Centroid, and K-Nearest Neighbor classification. The PSO-ANN model can be adopted in transportation systems to counter traffic accident issues. The Nearest Centroid model gave the lowest accuracy score, whereas PSO-ANN gave the highest. All the test results and findings obtained in our study can provide valuable information on reducing traffic accidents.
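The PSO component of such a hybrid can be sketched in isolation. In PSO-ANN, the objective function would be the network's training error as a function of its weight vector; here a simple sphere function stands in for it, and all names and parameter values are assumptions:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization: each particle's velocity is
    pulled toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sanity check on the sphere function, whose minimum is 0 at the origin;
# in PSO-ANN this objective would be the ANN's training loss.
best, best_val = pso_minimize(lambda p: sum(x * x for x in p), dim=3)
```

Because PSO needs only function evaluations, it can tune ANN weights without gradients, which is the appeal of the hybrid in this setting.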

A SCORM-based e-Learning Process Control Model and Its Modeling System

  • Kim, Hyun-Ah;Lee, Eun-Jung;Chun, Jun-Chul;Kim, Kwang-Hoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2121-2142
    • /
    • 2011
  • In this paper, we propose an e-Learning process control model that graphically describes and automatically generates the manifest of sequencing prerequisites when packaging SCORM content aggregation models. In specifying e-Learning activity sequencing, SCORM provides the concept of sequencing prerequisites to be manifested on each e-Learning activity of the corresponding tree-structured content organization model. However, the course developer is required to fully understand SCORM's complicated sequencing prerequisites and other extensions, so an efficient way of packaging e-Learning content organization models is needed. The e-Learning process control model proposed in this paper is intended as a complete solution to this problem. Consequently, this paper realizes a new process-driven e-Learning content aggregation approach supporting the e-Learning process control model, and implements an e-Learning process modeling system that graphically describes and automatically generates SCORM's sequencing prerequisites. The proposed model thus becomes a theoretical basis for implementing a SCORM-based e-Learning process management system satisfying SCORM's sequencing prerequisite specifications. We believe that the e-Learning process control model and its modeling system enable convenient packaging of SCORM content organization models and support the implementation of an e-Learning management system as well.

Detection of Multiple Salient Objects by Categorizing Regional Features

  • Oh, Kang-Han;Kim, Soo-Hyung;Kim, Young-Chul;Lee, Yu-Ra
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.272-287
    • /
    • 2016
  • Recently, various effective contrast-based salient object detection models focusing on a single target have been proposed. However, there is a lack of research on the detection of multiple objects, which is a more challenging task than the single-target case. In the multiple-target problem, new difficulties arise from the distinct differences between the properties of the objects, and the dependence of existing models on the global maximum of the data distribution becomes a drawback. In this paper, after analyzing the limitations of existing methods, we devise a three-stage process to detect multiple salient objects. In the first stage, regional features are extracted from over-segmented regions. In the second stage, the regional features are grouped into homogeneous clusters using the mean-shift algorithm with kernel functions of various sizes. In the final stage, we compute saliency scores of the clustered regions using only spatial features, without contrast features, and all scores are integrated to produce the final salient regions. In experiments, the scheme achieved superior detection accuracy on the SED2 and MSRA-ASD benchmarks, with both higher precision and better recall than state-of-the-art approaches. In particular, given multiple objects with different properties, our model significantly outperforms all existing models.
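The second-stage clustering can be illustrated with a minimal flat-kernel mean shift in one dimension. The paper clusters multi-dimensional regional features with kernels of various sizes; this one-dimensional sketch with assumed names only shows the mechanism:

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Flat-kernel mean shift: each point repeatedly moves to the mean
    of its neighbours within `bandwidth` until it settles on a mode."""
    modes = []
    for p in points:
        x = p
        for _ in range(iters):
            neigh = [q for q in points if abs(q - x) <= bandwidth]
            new_x = sum(neigh) / len(neigh)
            if abs(new_x - x) < 1e-6:
                break
            x = new_x
        modes.append(x)
    # Merge modes that converged to (almost) the same location.
    clusters = []
    for m in sorted(modes):
        if not clusters or m - clusters[-1] > bandwidth / 2:
            clusters.append(m)
    return clusters

# Three groups of 1-D "regional features" around 1, 5, and 9:
features = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.1, 9.0]
centers = mean_shift_1d(features, bandwidth=1.0)
```

A key property exploited here is that mean shift discovers the number of clusters from the data via the bandwidth, rather than requiring it up front, which suits an unknown number of salient objects.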

Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method

  • Jeong, Yoo-Geum;Lee, Wan-Sun;Lee, Kyu-Bok
    • The Journal of Advanced Prosthodontics
    • /
    • v.10 no.3
    • /
    • pp.245-251
    • /
    • 2018
  • PURPOSE. To evaluate the accuracy of models made using the computer-aided design/computer-aided manufacture (CAD/CAM) milling method and the 3D printing method, and to confirm their applicability as work models for dental prosthesis production. MATERIALS AND METHODS. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM) to produce 10 models each with the milling method and the 3D printing method. The 20 models were then scanned using a desktop scanner to form the CAD test models (CTM). The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing the CRM and each CTM. RESULTS. The RMS value (152 ± 52 µm) of the models manufactured by the milling method was significantly higher than the RMS value (52 ± 9 µm) of the models produced by the 3D printing method. CONCLUSION. The accuracy of the 3D printing method is superior to that of the milling method, but at present, both methods are limited in their application as work models for prosthesis manufacture.
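The RMS metric used here is straightforward to compute once the two scans are superimposed and matched point-for-point. A minimal sketch, with assumed names and scalar deviations standing in for the 3D point distances the dedicated software would use:

```python
import math

def rms_deviation(reference, test):
    """Root mean square of point-wise deviations between two
    superimposed scans, given as matched lists of values."""
    if len(reference) != len(test):
        raise ValueError("scans must be matched point-for-point")
    sq = [(r - t) ** 2 for r, t in zip(reference, test)]
    return math.sqrt(sum(sq) / len(sq))

# Deviations of 3 and 4 units at two points, zero elsewhere:
ref  = [0.0, 0.0, 0.0, 0.0]
test = [3.0, 4.0, 0.0, 0.0]
rms = rms_deviation(ref, test)   # sqrt((9 + 16) / 4) = 2.5
```

A lower RMS means the test model deviates less from the reference, which is why the 3D-printed models' 52 µm indicates better accuracy than the milled models' 152 µm.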

Computer-aided approach of parameters influencing concrete service life and field validation

  • Papadakis, V.G.;Efstathiou, M.P.;Apostolopoulos, C.A.
    • Computers and Concrete
    • /
    • v.4 no.1
    • /
    • pp.1-18
    • /
    • 2007
  • Over the past decades, an enormous amount of effort has been expended on laboratory and field studies of concrete durability estimation. The results of this research are still either widely scattered in the journal literature or mentioned only briefly in the standard textbooks. Moreover, theoretical approaches to deterioration mechanisms with a predictive character are limited to complicated mathematical models that are not widespread in practice. A significant step forward would be the development of software for computer-based estimation of concrete service life, incorporating reliable mathematical models and adequate experimental data. In the present work, the basis for such a computer-based estimation of concrete service life is presented. After the definition of the concrete mix design and structure characteristics, as well as consideration of the environmental conditions in which the structure will be located, the concrete service life can be reliably predicted using fundamental mathematical models that simulate the deterioration mechanisms. The prediction focuses on the basic deterioration phenomena of reinforced concrete, such as carbonation and chloride penetration, which initiate corrosion of the reinforcing bars. Aspects of concrete strength and production cost are also considered. Field observations and data collected from existing structures are compared with service-life predictions from the model. A first attempt to develop a database of service lives for different types of reinforced concrete structures exposed to varying environments is also included.
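The corrosion-initiation part of such a prediction is often approximated with a square-root-of-time carbonation front. The sketch below uses this common simplified model, not the authors' full set of equations; the coefficient value and names are assumptions:

```python
import math

def carbonation_depth(k_mm_per_sqrt_year, t_years):
    """Simplified square-root-of-time carbonation model: x = K * sqrt(t).
    K lumps together concrete quality and exposure conditions."""
    return k_mm_per_sqrt_year * math.sqrt(t_years)

def time_to_reach_cover(k_mm_per_sqrt_year, cover_mm):
    """Years until the carbonation front reaches the reinforcement
    cover depth, i.e. the corrosion-initiation part of service life."""
    return (cover_mm / k_mm_per_sqrt_year) ** 2

# With an assumed K = 5 mm/sqrt(year) and 30 mm of cover,
# corrosion initiation takes (30 / 5)^2 = 36 years:
t = time_to_reach_cover(5.0, 30.0)
```

Service-life software of the kind described would replace the single lumped coefficient with models driven by mix design and environmental inputs, and handle chloride penetration analogously.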

Ensemble Knowledge Distillation for Classification of 14 Thorax Diseases using Chest X-ray Images (흉부 X-선 영상을 이용한 14 가지 흉부 질환 분류를 위한 Ensemble Knowledge Distillation)

  • Ho, Thi Kieu Khanh;Jeon, Younghoon;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.313-315
    • /
    • 2021
  • Timely and accurate diagnosis of lung diseases using chest X-ray images has gained much attention from the computer vision and medical imaging communities. Although previous studies have demonstrated the capability of deep convolutional neural networks by achieving competitive binary classification results, their models appeared unreliable for effectively distinguishing multiple disease groups across a large number of X-ray images. In this paper, we aim to build an approach called Ensemble Knowledge Distillation (EKD) that significantly boosts classification accuracy over traditional KD methods by distilling knowledge from a cumbersome teacher model into an ensemble of lightweight student models with parallel branches trained on ground-truth labels. Learning features in the different branches of the student models enables the network to learn diverse patterns and improve the quality of the final predictions through an ensemble learning solution. Our experiments on the well-established ChestX-ray14 dataset show that traditional KD improves classification over the baseline transfer learning approach, and EKD is expected to further enhance classification accuracy and model generalization, especially given the imbalanced dataset and the interdependency of the 14 weakly annotated thorax diseases.
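Traditional knowledge distillation, which EKD builds on, combines a hard-label cross-entropy term with a temperature-softened KL term in the style of Hinton et al. A minimal sketch with assumed names and hyperparameters, not the paper's exact loss:

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable softmax at a given temperature.
    z = [l / temperature for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx,
                      temperature=4.0, alpha=0.5):
    """KD loss: weighted sum of cross-entropy with the hard label and
    KL divergence from the teacher's temperature-softened distribution,
    scaled by T^2 to keep gradient magnitudes comparable."""
    ps = softmax(student_logits)
    hard = -math.log(ps[true_idx])                 # cross-entropy term
    pt_T = softmax(teacher_logits, temperature)    # softened teacher
    ps_T = softmax(student_logits, temperature)    # softened student
    soft = sum(pt * math.log(pt / qs) for pt, qs in zip(pt_T, ps_T))
    return alpha * hard + (1 - alpha) * (temperature ** 2) * soft

# A 3-class toy case: the student roughly agrees with the teacher.
loss = distillation_loss([2.0, 0.5, 0.1], [3.0, 0.2, 0.0], true_idx=0)
```

In the ensemble variant described here, each parallel student branch would receive such a loss, and their predictions would then be combined for the final output.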
