• Title/Summary/Keyword: Pre-trained Model

Building Sentence Meaning Identification Dataset Based on Social Problem-Solving R&D Reports (사회문제 해결 연구보고서 기반 문장 의미 식별 데이터셋 구축)

  • Hyeonho Shin;Seonki Jeong;Hong-Woo Chun;Lee-Nam Kwon;Jae-Min Lee;Kanghee Park;Sung-Pil Choi
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.159-172 / 2023
  • In general, social problem-solving research aims to create important social value by offering meaningful answers to various pending social issues using scientific technologies. However, although numerous and extensive research attempts have been made to alleviate social problems and issues nationwide, many important social challenges remain to be addressed. In order to facilitate the entire process of social problem-solving research and maximize its efficacy, it is vital to clearly identify and grasp the important and pressing problems to be focused upon. The problem-discovery step could be drastically improved if current social issues were automatically identified from existing R&D resources such as technical reports and articles. This paper introduces a comprehensive dataset that is essential for building a machine learning model to automatically detect social problems and solutions in various national research reports. Initially, we collected a total of 700 research reports regarding social problems and issues. Through an intensive annotation process, we built a total of 24,022 sentences, each of which possesses its own category or label closely related to social problem-solving, such as problems, purposes, solutions, and effects. Furthermore, we implemented four sentence classification models based on various neural language models and conducted a series of performance experiments using our dataset. As a result, the model fine-tuned on the KLUE-BERT pre-trained language model showed the best performance, with an accuracy of 75.853% and an F1 score of 63.503%.
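
A rough sketch of the fine-tuning step described above, using the public KLUE-BERT checkpoint from the HuggingFace hub; the label set, example sentence, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning KLUE-BERT for sentence classification.
# Label names, the example sentence, and the learning rate are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["problem", "purpose", "solution", "effect", "other"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=len(LABELS)
)

sentences = ["고령화로 인한 돌봄 공백이 심화되고 있다."]  # hypothetical report sentence
targets = torch.tensor([0])                                # index into LABELS ("problem")

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=targets).loss   # cross-entropy computed internally
loss.backward()
optimizer.step()
```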

Experimental calibration of forward and inverse neural networks for rotary type magnetorheological damper

  • Bhowmik, Subrata;Weber, Felix;Hogsberg, Jan
    • Structural Engineering and Mechanics / v.46 no.5 / pp.673-693 / 2013
  • This paper presents a systematic design and training procedure for the feed-forward back-propagation neural network (NN) modeling of both forward and inverse behavior of a rotary magnetorheological (MR) damper based on experimental data. For the forward damper model, with damper force as output, an optimization procedure demonstrates accurate training of the NN architecture with only current and velocity as input states. For the inverse damper model, with current as output, the absolute value of velocity and force are used as input states to avoid negative current spikes when tracking a desired damper force. The forward and inverse damper models are trained and validated experimentally, combining a limited number of harmonic displacement records with constant and half-sinusoidal current records. In general, the validation shows accurate results for both forward and inverse damper models, where the observed modeling errors for the inverse model can be related to knocking effects in the measured force due to bearing play between the hydraulic piston and the MR damper rod. Finally, the validated models are used to emulate pure viscous damping. Comparison of numerical and experimental results demonstrates good agreement in the post-yield region of the MR damper, while the main error of the inverse NN occurs in the pre-yield region, where the inverse NN overestimates the current needed to track the desired viscous force.
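
To make the input-output structure concrete, the following is a minimal PyTorch sketch of the two mappings described above: a forward net from (current, velocity) to damper force and an inverse net from (|velocity|, desired force) to command current; the hidden size, activation, and numeric sample are assumptions, not the paper's tuned architecture.

```python
# Sketch of the forward and inverse damper mappings as small feed-forward nets.
# Hidden size and activation are illustrative; the paper's tuned NN may differ.
import torch
import torch.nn as nn

def mlp(n_in, n_hidden, n_out):
    return nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(), nn.Linear(n_hidden, n_out))

forward_model = mlp(2, 16, 1)   # [current, velocity]          -> damper force
inverse_model = mlp(2, 16, 1)   # [|velocity|, desired force]  -> command current

current, velocity, desired_force = 1.2, 0.05, 850.0   # hypothetical sample (A, m/s, N)
predicted_force = forward_model(torch.tensor([[current, velocity]]))
command_current = inverse_model(torch.tensor([[abs(velocity), desired_force]]))
```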

Transfer Learning Based Real-Time Crack Detection Using Unmanned Aerial System

  • Yuvaraj, N.;Kim, Bubryur;Preethaa, K. R. Sri
    • International Journal of High-Rise Buildings / v.9 no.4 / pp.351-360 / 2020
  • Monitoring civil structures periodically is necessary for ensuring their fitness. Cracks on the inner and outer surfaces of a building play a vital role in indicating its health. Conventionally, human visual inspection was carried out only up to human-reachable altitudes, so monitoring of high-rise infrastructure cannot be done with this primitive method. There is also a need for more accurate prediction of cracks on building surfaces to ensure the health and safety of the building. The proposed research focused on developing an efficient crack classification model using a Transfer Learning enabled EfficientNet (TL-EN) architecture. Although many other pre-trained models are available for crack classification, they rely on a larger number of training parameters to achieve good accuracy. The TL-EN model attained an accuracy of 0.99 with fewer parameters on a large dataset. The benchmark METU dataset with 40,000 images was used to test and validate the proposed model. The surfaces of high-rise buildings were investigated using vision-enabled Unmanned Aerial Vehicles (UAVs). These UAVs carry the TL-EN model for capturing and analyzing real-time streaming video of building surfaces.
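
A minimal transfer-learning sketch in the spirit of TL-EN, using torchvision's ImageNet-pre-trained EfficientNet-B0 as a stand-in backbone; the specific EfficientNet variant and the frozen-backbone strategy are assumptions rather than the paper's exact setup.

```python
# Sketch: re-using a pre-trained EfficientNet for binary crack classification.
# The B0 variant and freezing strategy are assumptions, not the paper's exact setup.
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
for p in model.features.parameters():          # freeze the convolutional backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # crack / no crack
# Only the new classifier head is then trained on the crack image dataset.
```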

Digital Twin and Visual Object Tracking using Deep Reinforcement Learning (심층 강화학습을 이용한 디지털트윈 및 시각적 객체 추적)

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Object tracking in hardware applications has become an increasingly demanding task, as trackers must cope with unpredictable environments using multifunctional algorithms. In this paper, we build a virtual city environment using AirSim (Aerial Informatics and Robotics Simulation - AirSim, CityEnvironment) and apply a DQN (Deep Q-Network) deep reinforcement learning model in that virtual environment. The proposed object tracking DQN network observes the environment through continuous images captured by the virtual environment simulation system and uses them as input to control the operation of a virtual drone. The deep reinforcement learning model is pre-trained using various existing continuous image sets. Since these existing image sets consist of image data of real environments and objects, the system is implemented in 3D to track virtual environments and the moving objects within them.
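
As an illustration of the kind of network involved, below is a small convolutional DQN that maps stacked camera frames to Q-values over a discrete set of drone actions; the action set, frame size, and layer shapes are assumptions, not the authors' architecture.

```python
# Sketch of a convolutional DQN for image-based drone control.
# Action set, 84x84 frame size, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 5  # e.g., forward, left, right, up, hover (hypothetical)

class DQN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                                  nn.Linear(512, N_ACTIONS))

    def forward(self, x):                 # x: (batch, 4, 84, 84) stacked frames
        return self.head(self.conv(x))

q_values = DQN()(torch.zeros(1, 4, 84, 84))
action = int(q_values.argmax(dim=1))      # greedy action for this observation
```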

Feature Analysis for Detecting Mobile Application Review Generated by AI-Based Language Model

  • Lee, Seung-Cheol;Jang, Yonghun;Park, Chang-Hyeon;Seo, Yeong-Seok
    • Journal of Information Processing Systems / v.18 no.5 / pp.650-664 / 2022
  • Mobile applications can be easily downloaded and installed via markets. However, malware and malicious applications containing unwanted advertisements exist in these application markets. Therefore, smartphone users consult application reviews before installing in order to avoid such malicious applications. An application review typically comprises content for evaluation; however, a false review written with a specific purpose can also be included. Such false reviews are known as fake reviews, and they can be generated using artificial intelligence (AI)-based text-generating models. Recently, AI-based text-generating models have developed rapidly and produce high-quality text. Herein, we analyze the features of fake reviews generated by Generative Pre-Training-2 (GPT-2), an AI-based text-generating model, and create a model to detect those fake reviews. First, we collect real human-written application reviews from Kaggle. Subsequently, we identify features of the fake reviews using natural language processing and statistical analysis. Next, we build fake review detection models using five types of machine-learning models trained on the identified features. In terms of performance, the fake review detection models achieved average F1-scores of 0.738, 0.723, and 0.730 for the fake review, real review, and overall classifications, respectively.
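
A small end-to-end illustration of the pipeline described above: GPT-2 generates a candidate fake review, simple text statistics serve as features, and one of several possible scikit-learn classifiers is fit. The two-feature set and the tiny two-sample training data are purely illustrative, not the paper's feature analysis.

```python
# Sketch: generate a fake review with GPT-2 and fit a feature-based detector.
# The feature set and the two-sample training data are illustrative assumptions.
from transformers import pipeline
from sklearn.ensemble import RandomForestClassifier

generator = pipeline("text-generation", model="gpt2")
fake = generator("This app is", max_new_tokens=30)[0]["generated_text"]

def features(text):
    words = text.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return [len(words), avg_len]          # word count, average word length

X = [features(fake), features("Crashes constantly after the update, do not install.")]
y = [1, 0]                                # 1 = generated (fake), 0 = human-written
detector = RandomForestClassifier().fit(X, y)
```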

An Integrated Accurate-Secure Heart Disease Prediction (IAS) Model using Cryptographic and Machine Learning Methods

  • Syed Anwar Hussainy F;Senthil Kumar Thillaigovindan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.504-519 / 2023
  • Heart disease is becoming the leading cause of death worldwide. Diagnosing cardiac illness is a difficult endeavor that necessitates both expertise and extensive knowledge. Machine learning (ML) is becoming gradually more important in the medical field. Most existing works have concentrated on the prediction of cardiac disease; however, the precision of the results is low, and data integrity is uncertain. To address these difficulties, this research creates an Integrated Accurate-Secure Heart Disease Prediction (IAS) Model based on Deep Convolutional Neural Networks. First, heart-related medical data is collected and pre-processed. Secondly, features are extracted from both signals and acquired data and are then used to train the classifier. The Deep Convolutional Neural Network (DCNN) is used to categorize received sensor data as normal or abnormal. Furthermore, the results are safeguarded by an integrity validation mechanism based on a hash algorithm. The system's performance is evaluated by comparing the proposed model to existing models. The results show that the proposed cardiac disease diagnosis model surpasses previous techniques, attaining an accuracy of 98.5% for the maximum number of records, which is higher than the available classifiers.
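
A compact sketch of the two ideas combined in the IAS model: a convolutional classifier (here a 1D CNN) labels a sensor window as normal or abnormal, and a SHA-256 digest is attached to the stored result so later tampering can be detected. The signal length, network size, and record layout are assumptions, not the paper's implementation.

```python
# Sketch: CNN-based normal/abnormal classification plus hash-based integrity check.
# Signal length, network size, and the record layout are illustrative assumptions.
import hashlib, json
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 2),            # logits: [normal, abnormal]
)

signal = torch.randn(1, 1, 256)                # one pre-processed sensor window
label = int(cnn(signal).argmax(dim=1))

record = {"label": label, "signal": signal.flatten().tolist()}
digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
# Recomputing the digest over the stored record later reveals any modification.
```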

DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1778-1797 / 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
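
To illustrate the hybrid dilated convolution idea mentioned above, the sketch below stacks 3x3 convolutions with increasing dilation rates so the receptive field grows without losing resolution; the channel count, dilation rates, and residual connection are assumptions, not DP-LinkNet's exact module.

```python
# Sketch of a hybrid dilated convolution (HDC) style block between encoder and decoder.
# Channels, dilation rates (1, 2, 5), and the residual connection are assumptions.
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    def __init__(self, channels=256, dilations=(1, 2, 5)):
        super().__init__()
        self.layers = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                          nn.ReLU())
            for d in dilations
        ])

    def forward(self, x):
        return x + self.layers(x)      # spatial resolution is preserved throughout

out = HDCBlock()(torch.zeros(1, 256, 32, 32))   # -> (1, 256, 32, 32)
```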

No-Reference Image Quality Assessment based on Quality Awareness Feature and Multi-task Training

  • Lai, Lijing;Chu, Jun;Leng, Lu
    • Journal of Multimedia Information System / v.9 no.2 / pp.75-86 / 2022
  • Existing image quality assessment (IQA) datasets contain a small number of samples, and some methods based on transfer learning or data augmentation cannot make good use of image quality-related features. A No-Reference (NR)-IQA method based on multi-task training and quality awareness is therefore proposed. First, single or multiple distortion types and levels are imposed on the original image, and different strategies are used to augment the different distortion datasets. Following the idea of weak supervision, Full-Reference (FR)-IQA methods are used to obtain pseudo-score labels for the generated images. Then, the classification information of distortion type and level is combined with the image quality score. In the pre-training stage, a ResNet50 network is trained on the augmented dataset to obtain quality-aware pre-training weights. Finally, fine-tuning is performed on the target IQA dataset using the quality-aware weights to predict the final quality score. Experiments on synthetic and authentic distortion datasets (LIVE, CSIQ, TID2013, LIVEC, KonIQ-10K) show that the proposed method utilizes image quality-related features better than single-task training, and the extracted quality-aware features improve the accuracy of the model.
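
A minimal sketch of the multi-task pre-training setup described above: a ResNet50 backbone shared by three heads predicting distortion type, distortion level, and a pseudo quality score, trained with a summed loss. The numbers of types and levels and the dummy batch/targets are assumptions for illustration.

```python
# Sketch: shared ResNet50 backbone with distortion-type, level, and score heads.
# The 5 types, 5 levels, and the dummy batch/targets are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()                     # expose the 2048-d pooled features

type_head = nn.Linear(2048, 5)                  # distortion type (assumed 5 classes)
level_head = nn.Linear(2048, 5)                 # distortion level (assumed 5 classes)
score_head = nn.Linear(2048, 1)                 # pseudo-score from an FR-IQA method

feats = backbone(torch.zeros(1, 3, 224, 224))   # dummy augmented image
loss = (nn.functional.cross_entropy(type_head(feats), torch.tensor([0]))
        + nn.functional.cross_entropy(level_head(feats), torch.tensor([2]))
        + nn.functional.mse_loss(score_head(feats).squeeze(1), torch.tensor([0.7])))
loss.backward()     # multi-task gradients flow into the shared backbone
```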

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In the simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in the complex environment, accuracy reached 81%. Compared to supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
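
The sketch below illustrates the general flavor of such a pipeline: a pre-trained detector localizes people in a frame and a pre-trained VGG16 backbone embeds the frame for interaction classification. Note that torchvision's ResNet50-FPN Faster R-CNN is used here as a stand-in detector and the classifier head is untrained, so this is only a structural illustration, not the authors' EHL framework.

```python
# Sketch: pre-trained detector + pre-trained VGG16 features + interaction classifier.
# The stand-in detector, frame size, and untrained head are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

detector = models.detection.fasterrcnn_resnet50_fpn(
    weights=models.detection.FasterRCNN_ResNet50_FPN_Weights.COCO_V1
).eval()
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

frame = torch.rand(3, 480, 640)                       # one video frame (dummy)
boxes = detector([frame])[0]["boxes"]                 # person/object candidates

embedding = vgg(frame.unsqueeze(0)).flatten(1)        # VGG16 convolutional features
classifier = nn.Linear(embedding.shape[1], 5)         # hug, fight, link arms, talk, kidnap
interaction = classifier(embedding).argmax(dim=1)
```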

Proposal of autonomous take-off drone algorithm using deep learning (딥러닝을 이용한 자율 이륙 드론 알고리즘 제안)

  • Lee, Jong-Gu;Jang, Min-Seok;Lee, Yon-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.2 / pp.187-192 / 2021
  • This study proposes a system for take-off in a forest or similarly complex environment using an object detector. In the simulator, a Raspberry Pi is mounted on a quadcopter with a diagonal motor-to-motor length of 550 mm, and the experiment is conducted on the basis of edge computing. For the training images, about 150 images of 640×480 size were obtained at three locations inside Kunsan University, converted to black and white, and pre-processed by binarization with a threshold value of 127. The SSD_Inception model was then trained on these images. In the simulation, when the drone was flown using the trained model with verification images as input, it drew a trajectory similar to a take-off using the detected labels.
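
The grayscale-plus-threshold pre-processing described above can be reproduced with a couple of OpenCV calls; the file name below is a placeholder, not a path from the paper.

```python
# Sketch of the described pre-processing: grayscale conversion and binarization
# with a fixed threshold of 127. The input file name is a hypothetical placeholder.
import cv2

img = cv2.imread("takeoff_frame.jpg")                       # 640x480 capture (assumed)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite("takeoff_frame_bin.png", binary)
```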