• Title/Summary/Keyword: traditional metrics

A Detecting Technique for the Climatic Factors that Aided the Spread of COVID-19 using Deep and Machine Learning Algorithms

  • Al-Sharari, Waad;Mahmood, Mahmood A.;Abd El-Aziz, A.A.;Azim, Nesrine A.
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.131-138
    • /
    • 2022
  • Novel Coronavirus (COVID-19) is viewed as one of the main public health threats at the global level. Because of the sudden nature of the outbreak and the infectious power of the virus, it causes anxiety, depression, and other stress reactions in people. The prevention and control of novel coronavirus pneumonia have entered a critical stage. It is essential to predict and forecast disease outbreaks early during this difficult time in order to control morbidity and mortality. The entire world is investing enormous effort to fight the spread of this lethal virus. In this paper, we used machine learning and deep learning techniques to analyze the situation using data shared by countries, and to detect the climatic factors that affect the spread of COVID-19, such as humidity, sunny hours, temperature, and wind speed, in order to understand its seasonal behavior and to forecast the future reach of COVID-19 around the world. We used data collected and produced by Kaggle and the Johns Hopkins Center for Systems Science. The dataset has 25 attributes and 9566 objects. Our experiment consists of two phases. In phase one, we preprocessed the dataset for the DL model and reduced the features to four (humidity, sunny hours, temperature, and wind speed) using the Pearson correlation coefficient technique (correlation attribute feature selection). In phase two, we applied six well-known traditional machine learning techniques for numerical datasets, as well as a DenseNet deep learning model, to predict and detect the climatic factors that aid disease outbreaks. We validated the models using a confusion matrix (CM) and measured performance with four metrics: accuracy, F-measure, recall, and precision.
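
The feature-selection step described above maps onto a short, generic sketch. Here is a minimal Python example of Pearson-correlation feature selection; the file name, column names, and the 0.3 threshold are all illustrative assumptions, not values from the paper:

```python
# Minimal sketch of Pearson-correlation feature selection. The file name,
# column names, and threshold are hypothetical, not the paper's settings.
import pandas as pd

df = pd.read_csv("covid_climate.csv")  # hypothetical dataset

# Pearson correlation of every numeric attribute against the target column.
corr = df.corr(method="pearson", numeric_only=True)["confirmed_cases"]
corr = corr.drop("confirmed_cases")

# Keep attributes whose absolute correlation exceeds the chosen threshold.
selected = corr[corr.abs() > 0.3].index.tolist()
print(selected)  # e.g. ['humidity', 'sunny_hours', 'temperature', 'wind_speed']
```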

Super-Resolution Transmission Electron Microscope Image of Nanomaterials Using Deep Learning (딥러닝을 이용한 나노소재 투과전자 현미경의 초해상 이미지 획득)

  • Nam, Chunghee
    • Korean Journal of Materials Research
    • /
    • v.32 no.8
    • /
    • pp.345-353
    • /
    • 2022
  • In this study, super-resolution images of transmission electron microscope (TEM) images were generated for nanomaterial analysis using deep learning. 1169 paired images, 256 × 256 pixels (high resolution: HR) from TEM measurements and 32 × 32 pixels (low resolution: LR) produced using the Python module OpenCV, were used to train deep learning models. The TEM images were of DyVO4 nanomaterials synthesized by hydrothermal methods. Mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used as metrics to evaluate the performance of the models. First, a super-resolution (SR) image was obtained using the traditional interpolation method used in computer vision. In the SR image at low magnification, the shape of the nanomaterial improved; however, the SR images at medium and high magnification failed to show the characteristics of the lattice of the nanomaterials. Second, to obtain an SR image, the deep learning model included a residual network, which reduces the loss of spatial information in the convolutional process of obtaining a feature map. In optimizing the deep learning model, it was confirmed that the performance of the model improved as the amount of data increased. In addition, by optimizing the deep learning model with a loss function that includes MAE and SSIM at the same time, improved rendering of the nanomaterial lattice in SR images was achieved at medium and high magnifications. The final proposed deep learning model used four residual blocks to obtain the feature map of the low-resolution image, and the super-resolution image was completed by applying Upsampling2D and a residual block three times.
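
As a rough illustration of the final architecture (four residual blocks, then Upsampling2D plus a residual block applied three times to go from 32 × 32 to 256 × 256), here is a hedged Keras sketch; the filter counts, single-channel input, and equal loss weighting are assumptions, not the paper's exact settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # Two convolutions plus a skip connection to limit spatial-information loss.
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([x, y])

inputs = layers.Input(shape=(32, 32, 1))          # low-resolution TEM patch
x = layers.Conv2D(64, 3, padding="same")(inputs)  # initial feature map
for _ in range(4):                                # four residual blocks
    x = residual_block(x)
for _ in range(3):                                # 32 -> 64 -> 128 -> 256
    x = layers.UpSampling2D(size=2)(x)
    x = residual_block(x)
outputs = layers.Conv2D(1, 3, padding="same")(x)  # super-resolved image

def mae_ssim_loss(y_true, y_pred):
    # Joint loss combining MAE and SSIM; the equal weighting is an assumption.
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    return mae + (1.0 - ssim)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=mae_ssim_loss)
```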

Trend of Pharmacopuncture Treatment on Obesity: Recent 10 Years (비만 치료에 대한 약침연구의 국내외 동향 분석: 최근 10년을 중심으로)

  • Seong-heon, Jeong;Hyung-suk, Kim;Woo-chul, Shin;Jae-heung, Cho;Won-seok, Chung;Mi-yeon, Song
    • Journal of Korean Medicine for Obesity Research
    • /
    • v.22 no.2
    • /
    • pp.147-157
    • /
    • 2022
  • Objectives: The purpose of this study is to investigate domestic and international trends in pharmacopuncture treatment for obesity over the last 10 years. Methods: Five databases (Korean Studies Information Service System, Research Information Sharing Service, Oriental Medicine Advanced Searching Integrated System, Scopus, PubMed) were searched with the keywords 'pharmacopuncture', 'herbal acupuncture', 'aquapuncture', and 'obesity' for the period from 2012 to 2022. Results: 25 articles were selected and analyzed. 15 articles (60%) were animal experiments, 8 (32%) were case reports, 1 (4%) was a cell experiment, and 1 (4%) was a clinical trial. The 25 articles were analyzed by subject, acupoints, injections, metrics, and results. Pharmacopuncture treatment for obesity is being studied continuously, and an anti-inflammatory effect as well as an effect of reducing obesity factors has been demonstrated. Conclusions: This study suggests the efficacy and future development of pharmacopuncture for obesity. The studies of the past decade have concentrated on animal experiments, so many clinical trials and various studies on new complex pharmacopuncture for obesity are expected.

Twin models for high-resolution visual inspections

  • Seyedomid Sajedi;Kareem A. Eltouny;Xiao Liang
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.351-363
    • /
    • 2023
  • Visual structural inspections are an inseparable part of post-earthquake damage assessments. With unmanned aerial vehicles (UAVs) establishing a new frontier in visual inspections, there are major computational challenges in processing the massive amounts of collected high-resolution visual data. We propose twin deep learning models that can efficiently provide accurate high-resolution structural-component and damage segmentation masks. The traditional approaches to coping with high memory and computational demands are to either uniformly downsample the raw images, at the price of losing fine local details, or to crop smaller parts of the images, leading to a loss of global contextual information. Our twin models, comprising the Trainable Resizing for high-resolution Segmentation Network (TRS-Net) and DmgFormer, therefore approach the global and local semantics from different perspectives. TRS-Net is a compound high-resolution segmentation architecture equipped with learnable downsampler and upsampler modules to minimize information loss for optimal performance and efficiency. DmgFormer utilizes a transformer backbone and a convolutional decoder head with skip connections on a grid of crops, aiming for high-precision learning without downsizing. An augmented inference technique is used to boost performance further and reduce the possible loss of context due to grid cropping. Comprehensive experiments have been performed on the 3D physics-based graphics model (PBGM) synthetic environments in the QuakeCity dataset. The proposed framework is evaluated using several metrics on three segmentation tasks: component type, component damage state, and global damage (crack, rebar, spalling). The models were developed as part of the 2nd International Competition for Structural Health Monitoring.
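
The grid-of-crops idea can be illustrated independently of the authors' models. A minimal sketch (not the authors' code), assuming the image dimensions are multiples of the tile size and that `model` is any callable returning a same-sized mask per tile:

```python
# Illustrative grid-cropping inference: split a high-resolution image into
# tiles, segment each tile independently, and stitch the masks back together.
import numpy as np

def segment_by_grid(image: np.ndarray, model, tile: int = 512) -> np.ndarray:
    """Assumes image height/width are multiples of `tile` and that
    model(crop) returns an integer mask of the same size (both assumptions)."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.int64)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            crop = image[y:y + tile, x:x + tile]
            mask[y:y + tile, x:x + tile] = model(crop)
    return mask
```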

A Study on Categorizing Researcher Types Considering the Characteristics of Research Collaboration (공동연구 특성을 고려한 연구자 유형 구분에 대한 연구)

  • Jae Yun Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.2
    • /
    • pp.59-80
    • /
    • 2023
  • Traditional models for categorizing researcher types have mostly utilized research output metrics. This study proposes a new model that classifies researchers based on the characteristics of research collaboration. The model uses only research collaboration indicators and does not rely on citation data, given that citation impact is related to collaborative research. It categorizes researchers into four types based on their collaborative research pattern and scope: Sparse & Wide (SW), Dense & Wide (DW), Dense & Narrow (DN), and Sparse & Narrow (SN). When applied to the quantum metrology field, the proposed model was statistically verified to show differences in citation indicators and co-author network indicators according to the classified researcher types. Because the proposed researcher type classification model does not require citation information, it is expected to be widely used in research management policies and research support services.
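
The two-axis typology lends itself to a toy illustration. In the sketch below, the indicator definitions and cut-off values are hypothetical stand-ins; the paper's actual collaboration indicators are not reproduced here:

```python
def researcher_type(density: float, scope: float,
                    density_cut: float, scope_cut: float) -> str:
    """Map collaboration density and scope onto the four types
    (SW, DW, DN, SN); the thresholds are assumed, e.g. field medians."""
    d = "Dense" if density >= density_cut else "Sparse"
    s = "Wide" if scope >= scope_cut else "Narrow"
    return f"{d} & {s}"

print(researcher_type(0.8, 0.2, density_cut=0.5, scope_cut=0.5))  # Dense & Narrow
```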

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.4
    • /
    • pp.61-70
    • /
    • 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground-truth values. It has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction methods or extract specific features before clustering. With the advancement of deep learning models, however, research has emerged on deep clustering techniques that represent input data as latent vectors using models such as autoencoders and generative adversarial networks. In this study, we propose a deep-learning-based deep clustering technique. In this approach, we use an autoencoder to transform the input data into latent vectors, construct a vector space according to the cluster structure, and perform k-means clustering. We conducted experiments using the MNIST and Fashion-MNIST datasets, with the PyTorch machine learning library as the experimental environment. The model used is a convolutional-neural-network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
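
A condensed PyTorch sketch of the pipeline described above: a convolutional autoencoder compresses the images to latent vectors, and k-means with k = 10 clusters the latents. The layer sizes, latent dimension, and random stand-in data are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),        # 7 -> 14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),     # 14 -> 28
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# After training the autoencoder with a reconstruction loss (e.g. MSE),
# cluster the latent vectors with k-means (k = 10, as in the abstract).
model = ConvAutoencoder()
images = torch.rand(256, 1, 28, 28)  # stand-in for MNIST batches
with torch.no_grad():
    _, latents = model(images)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(latents.numpy())
```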

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future, based on his or her previous purchase behavior. It has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using an 'overall rating', a single criterion. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect customer feedback in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints. Moreover, multidimensional ratings are easy to handle and analyze because they are quantitative. But recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset concerning POI (point-of-interest) recommendation. Providing personalized POI recommendations is getting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected the overall ratings as well as the ratings for each criterion for 48 POIs located near K University in Seoul, South Korea. The criteria include 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 (80%) are used as the training dataset and the remaining 10 (20%) as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N. Precision-in-top-N represents the percentage of truly high overall ratings among the N items that the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF showed worse performance than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference scheme: holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. The paired-samples t-tests showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
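
The two evaluation metrics named above are simple to state in code. A minimal sketch follows; the cutoff for a 'truly high' rating (here >= 4 on a 5-point scale) is an assumption, since the abstract does not give it:

```python
from statistics import mean

def mae(predicted, actual):
    # Mean absolute error over paired predicted/actual ratings.
    return mean(abs(p - a) for p, a in zip(predicted, actual))

def precision_in_top_n(predicted, actual, n, high=4.0):
    """Share of the n items with the highest predicted ratings whose
    actual overall rating is 'truly high' (threshold is assumed)."""
    top = sorted(range(len(predicted)),
                 key=lambda i: predicted[i], reverse=True)[:n]
    return sum(actual[i] >= high for i in top) / n

# Toy usage with made-up ratings for one user:
pred = [4.2, 3.1, 4.8, 2.0, 3.9]
true = [4.0, 3.5, 5.0, 2.5, 3.0]
print(mae(pred, true), precision_in_top_n(pred, true, n=3))
```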

Incentive Design Considerations for Free-riding Prevention in Cooperative Distributed Systems (협조적 분산시스템 환경에서 무임승차 방지를 위한 인센티브 디자인 고려사항 도출에 관한 연구)

  • Shin, Kyu-Yong;Yoo, Jin-Cheol;Lee, Jong-Deog;Park, Byoung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.7
    • /
    • pp.137-148
    • /
    • 2011
  • Unlike the traditional client-server model, participants in a cooperative distributed system can get quality services regardless of the number of participants in the system, since they voluntarily pool or share their resources to achieve their common goal. However, some selfish participants try to avoid providing their resources while still enjoying the benefits offered by the system, which is termed free-riding. Free-riding in cooperative distributed systems leads to system collapse, because the system capacity (per participant) decreases as the number of free-riders increases, a phenomenon widely known as the tragedy of the commons. Consequently, designing an efficient incentive mechanism to prevent free-riding is mandatory for a successful cooperative distributed system. Because of the importance of incentive mechanisms in cooperative distributed systems, a myriad of incentive mechanisms have been proposed, without a standard for performance evaluation. Through an extensive survey of the literature, this paper derives general incentive design considerations that can be used as performance metrics, providing future researchers with guidelines for effective incentive design in cooperative distributed systems.

A study on the Effect of Big Data Quality on Corporate Management Performance (빅데이터 품질이 기업의 경영성과에 미치는 영향에 관한 연구)

  • Lee, Choong-Hyong;Kim, YoungJun
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.8
    • /
    • pp.245-256
    • /
    • 2021
  • The Fourth Industrial Revolution highlighted the quantitative value of data across industries and ushered in the era of 'Big Data'. This is due both to the rapid development of information & communication technology and to the diversity & complexity of customer purchasing tendencies. An enterprise's core competence in the Big Data era is to analyze and utilize data to make strategic decisions. However, most traditional studies on Big Data have focused on technical issues and future potential value, and lacked interest in managing the quality and utilization levels of the internal & external customer Big Data held by an entity. To overcome these shortcomings, this study attempted to derive influential factors by examining quality management information systems and quality management of internal & external Big Data. We surveyed 204 executives & employees to determine whether Big Data quality management, Big Data utilization, and level management have a significant impact on corporate work efficiency & corporate management performance. Hypotheses were established for this purpose, and their verification was carried out. As a result, we found that the factors that significantly affect corporate management performance are support from the management class, individual innovation, changes in the management environment, Big Data quality utilization metrics, and the Big Data governance system.

Parameter search methodology of support vector machines for improving performance (속도 향상을 위한 서포트 벡터 머신의 파라미터 탐색 방법론)

  • Lee, Sung-Bo;Kim, Jae-young;Kim, Cheol-Hong;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.3
    • /
    • pp.329-337
    • /
    • 2017
  • This paper proposes a search method that explores the parameter values C and σ of support vector machines (SVM) to improve speed while maintaining search accuracy. A traditional grid search requires tremendous computational time because it evaluates all available combinations of C and σ values to find the optimal combination that yields the best SVM performance. To address this issue, this paper proposes a deep search method that reduces computational time. In the first stage, it divides the C-σ accuracy map into four regions, evaluates the median point of each region, and selects the point with the highest accuracy as the starting point. In the second stage, the region around the selected starting point is re-divided into four regions, and the point with the highest accuracy is assigned as the new search point. In the third stage, the eight points neighboring the search point are explored; the point with the highest accuracy is assigned as the new search point, the surrounding region is again divided into four parts, and the accuracy values are calculated. In the last stage, this continues until the accuracy at the search point is the highest compared with its neighboring points; if not, the procedure is repeated from the second stage. Experimental results using normal and defective bearings show that the proposed deep search algorithm outperforms conventional algorithms in terms of performance and search time.
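
A coarse-to-fine search in the spirit of the method above can be sketched with scikit-learn. This is an illustrative simplification, not the authors' exact four-stage procedure; the window sizes, grid density, and cross-validated scorer are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate(C, sigma, X, y):
    # Cross-validated accuracy; the RBF gamma is expressed via sigma.
    gamma = 1.0 / (2.0 * sigma ** 2)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def deep_search(X, y, c_range=(0.1, 100.0), s_range=(0.01, 10.0), rounds=4):
    best = None
    for _ in range(rounds):
        # Evaluate a coarse log-spaced 3x3 grid over the current window,
        # instead of scanning the full C-sigma grid exhaustively.
        cs = np.logspace(np.log10(c_range[0]), np.log10(c_range[1]), 3)
        ss = np.logspace(np.log10(s_range[0]), np.log10(s_range[1]), 3)
        scored = [((c, s), evaluate(c, s, X, y)) for c in cs for s in ss]
        (c, s), score = max(scored, key=lambda t: t[1])
        best = (c, s, score)
        # Shrink the search window around the current best point.
        c_range, s_range = (c / 3, c * 3), (s / 3, s * 3)
    return best  # (C, sigma, cv_accuracy)
```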