• Title/Summary/Keyword: Software Performance Analysis


Learning Method for Regression Model by Analysis of Relationship Between Input and Output Data with Periodicity (주기성을 갖는 입출력 데이터의 연관성 분석을 통한 회귀 모델 학습 방법)

  • Kim, Hye-Jin;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.7
    • /
    • pp.299-306
    • /
    • 2022
  • Recently, sensors embedded in robots, equipment, and circuits have become common, and research on diagnosing device failures by learning from measured sensor data is being actively conducted. Failure diagnosis studies are divided into classification models, which predict failure situations or types, and regression models, which numerically predict failure conditions. A classification model simply checks the presence or absence of a failure or defect (class), whereas a regression model is harder to learn because it must predict one value among countless candidates. Regression modeling is more difficult in particular because there are many irregular situations in which similar inputs map to different outputs, making it hard to determine a single output when matching inputs to outputs. Therefore, in this paper, we focus on input and output data with periodicity, analyze the input/output relationship, and secure regularity between input and output data by performing sliding window-based patterning of the input data. To apply the proposed method, current and temperature data with periodicity were collected from an MMC (Modular Multilevel Converter) circuit system, and learning was carried out using an ANN. The experiments confirmed that when a window covering 2% or more of one cycle was applied, a goodness of fit of 97% or higher could be secured.
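
The sliding-window patterning described in this abstract can be illustrated with a short sketch. It is a minimal illustration under assumed names and synthetic signals (make_windowed_samples, the sine/cosine data, and the 2%-of-cycle window are all hypothetical), not the authors' code:

```python
import numpy as np

def make_windowed_samples(x, y, window_size):
    """Pair each target y[t] with the window of inputs ending at index t."""
    X, T = [], []
    for t in range(window_size - 1, len(x)):
        X.append(x[t - window_size + 1 : t + 1])  # input pattern (sliding window)
        T.append(y[t])                            # corresponding output value
    return np.asarray(X), np.asarray(T)

# Synthetic periodic signals standing in for MMC current (input) and temperature (output)
t = np.arange(0, 10, 0.01)                    # 10 cycles, 100 samples per cycle
current = np.sin(2 * np.pi * t)
temperature = 25 + 0.5 * np.cos(2 * np.pi * t)

period_samples = 100
window = max(2, int(0.02 * period_samples))   # ~2% of one cycle, as in the abstract
X, y = make_windowed_samples(current, temperature, window)
print(X.shape, y.shape)                       # windowed inputs ready for ANN regression training
```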

Big Data Management in Structured Storage Based on Fintech Models for IoMT using Machine Learning Techniques (기계학습법을 이용한 IoMT 핀테크 모델을 기반으로 한 구조화 스토리지에서의 빅데이터 관리 연구)

  • Kim, Kyung-Sil
    • Advanced Industrial SCIence
    • /
    • v.1 no.1
    • /
    • pp.7-15
    • /
    • 2022
  • With recent developments in the medical field, the IoT has advanced toward processing large volumes of medical data, a paradigm defined as the Internet of Medical Things (IoMT). The vast range of collected medical data is stored in the cloud in a structured manner so that the collected healthcare data can be processed. However, the huge volume of healthcare data is difficult to handle, so an appropriate scheme for structured healthcare data is necessary. In this paper, a machine learning model for processing the structured healthcare data collected from the IoMT is suggested. To process this vast range of healthcare data, an MTGPLSTM model is proposed. The proposed model integrates a linear regression model for processing healthcare information, and on top of it an outlier model is implemented based on the FinTech model for the evaluation and prediction of the COVID-19 healthcare dataset collected from the IoMT. The proposed MTGPLSTM model comprises a regression model to predict and evaluate a planning scheme for preventing the spread of infection. The model's performance is evaluated against different classifiers such as LR, SVR, RFR, and LSTM, considering data sizes of 1 GB, 2 GB, and 3 GB. The comparative analysis showed that the proposed MTGPLSTM model achieves roughly 4% lower MAPE and RMSE values on the worldwide data; for the China data, a minimal MAPE of 0.97 is achieved, roughly 6% lower than that of the existing classifiers.
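
The abstract compares regressors by MAPE and RMSE. The following is a hedged sketch of such a comparison using standard scikit-learn baselines (LR, SVR, RFR); the data, the train/test split, and the models are placeholders and do not include the proposed MTGPLSTM:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                               # placeholder features
y = 50 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)  # placeholder target

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

models = {
    "LR": LinearRegression(),
    "SVR": SVR(),
    "RFR": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    mape = mean_absolute_percentage_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: MAPE={mape:.3f}, RMSE={rmse:.3f}")
```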

Apartment Price Prediction Using Deep Learning and Machine Learning (딥러닝과 머신러닝을 이용한 아파트 실거래가 예측)

  • Hakhyun Kim;Hwankyu Yoo;Hayoung Oh
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.59-76
    • /
    • 2023
  • Since the start of the COVID-19 era, the rise in apartment prices has been unconventional, and in this uncertain real estate market, price prediction research is very important. In this paper, a model is created to predict the actual transaction prices of future apartments after building a vast data set of 870,000 records from 2015 to 2020 through data collection and crawling on various real estate sites, collecting as many variables as possible. This study first solved the multicollinearity problem by removing and combining variables. After that, a total of five variable selection algorithms were used to extract meaningful independent variables: forward selection, backward elimination, stepwise selection, L1 regularization, and principal component analysis (PCA). In addition, four machine learning and deep learning algorithms were used to learn the models after hyperparameter optimization and to compare predictive power: a deep neural network (DNN), XGBoost, CatBoost, and linear regression. In an additional experiment, the number of nodes and layers of the DNN was varied to find the most appropriate architecture. Finally, using the best-performing model, the actual transaction prices of apartments in 2021 were predicted and compared with the actual 2021 data. Through this, we are confident that machine learning and deep learning will help investors make the right decisions when purchasing homes in various economic situations.
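
The multicollinearity screening mentioned in this abstract is commonly done with variance inflation factors; the sketch below is one plausible way to do it, assuming hypothetical column names (area, floor, building_age, distance_to_subway) rather than the paper's actual variables:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(features: pd.DataFrame) -> pd.Series:
    """Variance inflation factor per column; values above ~10 flag collinearity."""
    X = add_constant(features)
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)

# Hypothetical usage with made-up apartment features:
# features = df[["area", "floor", "building_age", "distance_to_subway"]]
# print(vif_table(features))
# Columns with high VIF would be removed or combined before variable selection.
```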

Analyzing the impact on logistics outsourcing success for Ugandan food processing firms through third-party logistics service providers' capabilities (제3자 물류 서비스공급자의 역량을 통한 우간다 식품 가공업체의 물류 아웃소싱 성공에 대한 영향 분석)

  • Alioni, Christopher;Park, Byungin
    • Journal of Korea Port Economic Association
    • /
    • v.38 no.4
    • /
    • pp.45-64
    • /
    • 2022
  • Due to recent and rapid globalization, logistics outsourcing has expanded globally and is seen as a means of creating a robust logistics system. However, many businesses continue to have difficulties with their logistics outsourcing contracts, which compels them to reinstate the logistics function under internal management. This study investigates how the organizational capabilities of logistics service providers (LSPs), notably flexibility, integration, innovation, and technological capabilities, impact logistics outsourcing success in Ugandan food processing firms. Using a structured questionnaire survey, cross-sectional data collected from 211 food processing firms in Kampala, Uganda were analyzed by partial least squares structural equation modeling (PLS-SEM) using SmartPLS 3.3.7 software to examine the theorized relationships. The findings revealed that whereas technological and innovation capabilities positively and significantly influence logistics outsourcing success, the effects of flexibility and integration capabilities were insignificant. Additionally, the importance-performance map analysis (IPMA) reveals that technological capability is the top-priority capability, followed by innovation capability, if logistics outsourcing success is to be achieved; flexibility and integration capabilities are of low priority.

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT (PET-CT Normalization, Well Counter Correction에 따른 팬텀을 이용한 영상 평가)

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.27 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • Purpose: PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the performance evaluation methods NU 2-1994 and NU 2-2001, proposed by NEMA and the IEC, are available for PET-CT image evaluation. In this study, we compare phantom images acquired with the same protocol before and after PET-CT 3D normalization and well counter correction and evaluate the usefulness of these quality control procedures. Materials and methods: A Discovery 690 (General Electric Healthcare, USA) PET-CT system was used, and 3D normalization and well counter correction were performed as recommended by GE Healthcare. Following the recovery coefficients for the six spheres of the NEMA IEC body phantom recommended by EARL, 20 kBq/mL of 18F was injected into the spheres and 2 kBq/mL of 18F into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed with TOF+PSF, TOF, OSEM+PSF, and OSEM and Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, with a matrix size of 128×128, slice thickness of 3.75 mm, 2 iterations, and 16 subsets. The PET images were attenuation corrected using the CT images and analyzed using the AW 4.7 software (General Electric Healthcare, USA). ROIs were set to fit the 6 spheres in the CT image, and the recovery coefficient (RC) was measured after fusion of PET and CT. Statistical analysis was performed with the Wilcoxon signed-rank test using R. Results: Overall, after the quality control procedures were performed, the measured recovery coefficients of the phantom images increased. The recovery coefficient according to the reconstruction method increased in the order TOF+PSF, TOF, OSEM+PSF; before versus after quality control, RCmax increased by 0.13 for OSEM, 0.16 for OSEM+PSF, 0.16 for TOF, and 0.15 for TOF+PSF, and RCmean increased by 0.09 for OSEM, 0.09 for OSEM+PSF, 0.106 for TOF, and 0.10 for TOF+PSF. The two groups showed a statistically significant difference in the Wilcoxon signed-rank test (P < 0.001). Conclusion: PET-CT systems require quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope that this study will be a good opportunity to consider the importance of quality control in PET-CT.
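
The recovery coefficient and the Wilcoxon comparison described in this abstract can be sketched as follows; the measured sphere concentrations are placeholders rather than the study's data, and the study itself used R rather than Python:

```python
import numpy as np
from scipy.stats import wilcoxon

true_conc = 20.0  # kBq/mL injected into the spheres (10:1 sphere-to-background ratio)

# Hypothetical mean measured concentrations (kBq/mL) in the six sphere ROIs
measured_pre_qc  = np.array([9.5, 11.2, 13.0, 14.8, 16.1, 17.0])
measured_post_qc = np.array([10.9, 12.7, 14.6, 16.3, 17.8, 18.6])

rc_pre  = measured_pre_qc / true_conc   # RC before normalization / well counter correction
rc_post = measured_post_qc / true_conc  # RC after the quality control procedures

stat, p_value = wilcoxon(rc_pre, rc_post)   # paired, non-parametric comparison
print("RC change per sphere:", rc_post - rc_pre)
print(f"Wilcoxon signed-rank: statistic={stat:.3f}, p={p_value:.4f}")
```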


Busan Tourism Industry applying OECD Tourism Policy and ICT Convergence Platform (OECD 관광정책과 ICT 융합 플랫폼을 적용한 부산관광산업)

  • Lim, Yong-Suk;Jung, Ho-Jin;Lee, Jung-Won
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.12
    • /
    • pp.871-879
    • /
    • 2017
  • The purpose of this study is to propose a direction for the Busan tourism industry that applies the 2016 OECD tourism policy and an ICT convergence platform. The OECD proposed three policies to promote the tourism industry: first, to maintain the competitiveness of the tourism industry while improving its efficiency and sustainability; second, to establish a seamless traffic system; and third, to build a response to the sharing economy. Centering on these three OECD policies, we discuss the development possibilities of tourism in Busan and, at the same time, suggest the need to build an ICT convergence platform that will help foster the industry. In building this platform, we focus in particular on the need for (1) sharing and creating experience-based interactive content on the software side and (2) developing a high-quality user experience (UX) and providing data analysis-based customized services on the hardware side. In addition, we argue for the establishment of a Tourism Promotion Agency for the continuous performance and management of the Busan tourism industry. The study ultimately suggests that building an ICT convergence platform based on OECD tourism policy can deliver high impact at low cost for both consumers and suppliers in the tourism industry.

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.519-524
    • /
    • 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original image, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deephashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected layers, convolutional neural networks, and transformer modules. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function based on the activation values of a neuron and its surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model outperforms the comparison models and achieves performance equivalent to supervised learning-based deephashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright image determination.
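
The SimAM attention mentioned in this abstract follows a published, parameter-free energy formulation (Yang et al., ICML 2021); the sketch below reproduces that general formulation in PyTorch together with a simple sign-based hash binarization, and it may differ in detail from the authors' actual module:

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: weight each activation by an energy-based score."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)   # squared deviation per position
        v = d.sum(dim=[2, 3], keepdim=True) / n             # channel-wise variance estimate
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5         # inverse energy (higher = more salient)
        return x * torch.sigmoid(e_inv)                     # re-weight activations

def binarize(latent: torch.Tensor) -> torch.Tensor:
    """Turn real-valued latent vectors into +/-1 hash codes by sign thresholding."""
    return torch.sign(latent)

# Example: attention over a feature map, then a hash code from a latent vector
feat = torch.randn(8, 64, 16, 16)
attended = SimAM()(feat)
hash_code = binarize(torch.randn(8, 48))   # e.g., a 48-bit perceptual hash per image
print(attended.shape, hash_code.shape)
```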

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.22 no.11
    • /
    • pp.1764-1776
    • /
    • 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded ICCs of 0.99 for all vessels combined (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
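
A hedged sketch of the Agatston risk categorization and the linearly weighted kappa agreement reported in this abstract follows; the score values are placeholders, and cohen_kappa_score is used here as one convenient implementation of weighted kappa:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agatston_category(score: float) -> int:
    """Map an Agatston score to one of five risk categories: 0 | 1-10 | 11-100 | 101-400 | >400."""
    bins = [0, 10, 100, 400]  # category upper bounds
    return int(np.searchsorted(bins, score, side="left"))

# Hypothetical per-patient Agatston scores from the automatic and manual methods
cac_auto = np.array([0.0, 3.2, 45.0, 180.0, 750.0, 12.0])
cac_hand = np.array([0.0, 2.8, 52.0, 150.0, 820.0, 9.5])

auto_cat = [agatston_category(s) for s in cac_auto]
hand_cat = [agatston_category(s) for s in cac_hand]

kappa = cohen_kappa_score(hand_cat, auto_cat, weights="linear")
print(f"Linearly weighted kappa: {kappa:.3f}")
```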

Experimental and numerical study on the structural behavior of Multi-Cell Beams reinforced with metallic and non-metallic materials

  • Yousry B.I. Shaheen;Ghada M. Hekal;Ahmed K. Fadel;Ashraf M. Mahmoud
    • Structural Engineering and Mechanics
    • /
    • v.90 no.6
    • /
    • pp.611-633
    • /
    • 2024
  • This study investigates the response to flexural loads of multi-cell (MC) beams in which the primary reinforcement is composed of both metallic and non-metallic materials. "Multi-cell" describes beam sections with multiple longitudinal voids separated by thin webs. Seven reinforced concrete MC beams measuring 300×200×1800 mm were tested under flexural loading until failure. Two series of beams were formed depending on the type of main reinforcement used: a control RC beam with no openings plus six MC beams, with series one and two reinforced with metallic and non-metallic main reinforcement, respectively, in order to maintain a constant reinforcement ratio. The structural parameters documented for the beams under investigation were the first-crack load, ultimate load, deflection, ductility index, energy absorption, strain characteristics, crack pattern, and failure mode. The primary variables are the type of reinforcing material used and the type and number of mesh layers. This article presents the outcomes of the experimental and numerical study of the performance of these ferrocement reinforced concrete MC beams. Nonlinear finite element analysis (NLFEA) was performed with ANSYS 16.0 software to model the behavior of composite MC beams with openings, and a parametric study was carried out to investigate the factors, such as opening size, that most strongly affect the mechanical behavior of the proposed model. The experimental and numerical results demonstrate that the FE simulations estimated the experimental values to an acceptable degree. Notably, compared with the control beam, the MC beam reinforced with geogrid mesh (MCGB) shows a strength reduction of up to 73.33%, whereas the minimum strength reduction of 16.71% is observed in the MC beams reinforced with carbon reinforcing bars (MCCR). The experiments on MC beams with openings demonstrate that the presence of openings has a significant impact on beam behavior, with decreases in both the ultimate load and the maximum deflection.
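
Two of the structural parameters reported in this abstract, energy absorption and ductility index, can be computed from a load-deflection curve as sketched below; the curve values and the assumed yield point are hypothetical, and the exact ductility-index definition used by the authors may differ:

```python
import numpy as np

# Hypothetical load-deflection curve for one beam (not the paper's data)
deflection = np.array([0.0, 1.0, 2.5, 4.0, 6.0, 9.0, 12.0])   # mm
load = np.array([0.0, 15.0, 32.0, 45.0, 52.0, 55.0, 50.0])    # kN

# Energy absorption: area under the load-deflection curve (trapezoidal rule)
energy_absorption = np.sum(np.diff(deflection) * (load[:-1] + load[1:]) / 2.0)

# Ductility index: here taken as deflection at peak load over an assumed yield deflection
yield_deflection = 4.0                               # mm, assumed yield point
deflection_at_peak = deflection[np.argmax(load)]     # mm
ductility_index = deflection_at_peak / yield_deflection

print(f"Energy absorption: {energy_absorption:.1f} kN*mm")
print(f"Ductility index: {ductility_index:.2f}")
```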

Tea Leaf Disease Classification Using Artificial Intelligence (AI) Models (인공지능(AI) 모델을 사용한 차나무 잎의 병해 분류)

  • K.P.S. Kumaratenna;Young-Yeol Cho
    • Journal of Bio-Environment Control
    • /
    • v.33 no.1
    • /
    • pp.1-11
    • /
    • 2024
  • In this study, five artificial intelligence (AI) models, Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc, were used to classify tea leaf diseases. Eight image categories were used: healthy, algal leaf spot, anthracnose, bird's eye spot, brown blight, gray blight, red leaf spot, and white spot. The software used in this study was Orange 3, a visual-programming tool built on Python that operates through an interface in which workflows are generated to visually manipulate and analyze data. The precision of each AI model was recorded to select the best-suited model. All models were trained using the Adam solver, the rectified linear unit activation function, 100 neurons in the hidden layer, a maximum of 200 iterations in the neural network, and a regularization of 0.0001. To extend the functionality of Orange 3, add-ons can be installed; in this study, the Image Analytics add-on, which is required for image analysis, was newly installed. For the training model, the Import Images, Image Embedding, Neural Network, Test and Score, and Confusion Matrix widgets were used, whereas the Import Images, Image Embedding, Predictions, and Image Viewer widgets were used for prediction. The precisions of the neural networks of the five AI models (Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc) were 0.807, 0.901, 0.780, 0.800, and 0.771, respectively. Finally, the SqueezeNet (local) model was selected as the optimal AI model for detecting tea diseases from tea leaf images owing to its high precision and good performance throughout the confusion matrix.
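
Orange 3's Neural Network widget wraps a scikit-learn multilayer perceptron, so the settings listed in this abstract can be mirrored roughly as below; the embedding vectors and class labels are placeholders standing in for the leaf-image embeddings used in the study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2048))     # placeholder image-embedding vectors
y = rng.integers(0, 8, size=400)     # 8 classes: healthy + 7 disease categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Settings mirroring the abstract: Adam, ReLU, 100 hidden neurons, 200 iterations, alpha=0.0001
clf = MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                    solver="adam", alpha=1e-4, max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("Macro precision:", precision_score(y_te, pred, average="macro", zero_division=0))
print(confusion_matrix(y_te, pred))
```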