• Title/Summary/Keyword: software metrics

Comparison analysis of YOLOv10 and existing object detection model performance

  • Joon-Yong Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.8
    • /
    • pp.85-92
    • /
    • 2024
  • This paper presents a comparative analysis of the performance of the latest object detection model, YOLOv10, against its previous versions. YOLOv10 introduces NMS-free training, an enhanced model architecture, and an efficiency-centric design, resulting in outstanding performance. Experimental results on the COCO dataset demonstrate that YOLOv10-N maintains a high accuracy of 39.5% and a low latency of 1.84 ms despite having only 2.3M parameters and 6.7G floating-point operations (FLOPs). The key performance metrics used are the number of model parameters, FLOPs, average precision (AP), and latency. The analysis confirms the effectiveness of YOLOv10 as a real-time object detection model across various applications. Future research directions include testing on diverse datasets, further model optimization, and expanding application scenarios, with the aim of further enhancing YOLOv10's versatility and efficiency.
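
As a hedged illustration of how the reported comparison metrics can be collected, the sketch below counts learnable parameters and times the forward pass of an arbitrary PyTorch model; it is not the paper's benchmark code, and the stand-in model is a placeholder for actual YOLOv10 weights.

```python
# A minimal sketch (not the paper's benchmark code) showing how two of the
# reported metrics -- parameter count and per-image latency -- can be collected
# for any PyTorch detection model. The model below is a placeholder.
import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Total number of learnable parameters (reported in the paper as e.g. 2.3M)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def average_latency_ms(model: torch.nn.Module, input_size=(1, 3, 640, 640),
                       warmup: int = 10, runs: int = 100) -> float:
    """Mean forward-pass latency in milliseconds on the model's current device."""
    device = next(model.parameters()).device
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):              # warm up caches / CUDA kernels
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000.0

# Stand-in model; a real comparison would load YOLOv10-N weights instead.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU()).eval()
print(f"params: {count_parameters(model):,}, latency: {average_latency_ms(model):.2f} ms")
```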

Research on Insurance Claim Prediction Using Ensemble Learning-Based Dynamic Weighted Allocation Model (앙상블 러닝 기반 동적 가중치 할당 모델을 통한 보험금 예측 인공지능 연구)

  • Jong-Seok Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.221-228
    • /
    • 2024
  • Predicting insurance claims is a key task for insurance companies in managing risk and maintaining financial stability. Accurate insurance claim predictions enable insurers to set appropriate premiums, reduce unexpected losses, and improve the quality of customer service. This study aims to enhance the performance of insurance claim prediction models by applying ensemble learning techniques. The predictive performance of Random Forest, Gradient Boosting Machine (GBM), XGBoost, Stacking, and the proposed Dynamic Weighted Ensemble (DWE) model was compared and analyzed. Model performance was evaluated using Mean Absolute Error (MAE), Mean Squared Error (MSE), and the Coefficient of Determination (R2). Experimental results showed that the DWE model outperformed the others on these evaluation metrics, achieving optimal predictive performance by combining the prediction results of Random Forest, XGBoost, LR, and LightGBM. This study demonstrates that ensemble learning techniques are effective in improving the accuracy of insurance claim predictions and suggests the potential of AI-based predictive models in the insurance industry.
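
The following sketch illustrates one plausible reading of a dynamically weighted ensemble: base regressors are weighted by their inverse validation MAE and their predictions are combined as a weighted average. It is not the authors' exact DWE formulation, and it uses scikit-learn models and synthetic data in place of the paper's insurance dataset and XGBoost/LightGBM learners.

```python
# A minimal sketch of a dynamically weighted ensemble in the spirit described
# above; an illustrative reading, not the authors' exact DWE model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

base_models = {
    "rf": RandomForestRegressor(n_estimators=200, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
    "lr": LinearRegression(),
}

# Fit base models and derive weights from validation error (lower MAE -> higher weight).
weights = {}
for name, model in base_models.items():
    model.fit(X_tr, y_tr)
    weights[name] = 1.0 / (mean_absolute_error(y_val, model.predict(X_val)) + 1e-9)
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

# Weighted combination of base predictions on the test set, evaluated with MAE/MSE/R2.
ensemble_pred = sum(weights[name] * model.predict(X_te) for name, model in base_models.items())
print("MAE:", mean_absolute_error(y_te, ensemble_pred),
      "MSE:", mean_squared_error(y_te, ensemble_pred),
      "R2:", r2_score(y_te, ensemble_pred))
```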

Development of the Information Delivery System for the Home Nursing Service (가정간호사업 운용을 위한 정보전달체계 개발 I (가정간호 데이터베이스 구축과 뇌졸중 환자의 가정간호 전산개발))

  • Park, J.H;Kim, M.J;Hong, K.J;Han, K.J;Park, S.A;Yung, S.N;Lee, I.S;Joh, H.;Bang, K.S
    • Journal of Home Health Care Nursing
    • /
    • v.4
    • /
    • pp.5-22
    • /
    • 1997
  • The purpose of this study was to develop an information delivery system for the home nursing service (HNS) and to demonstrate and evaluate its efficiency. The research was conducted from September 1996 to August 31, 1997. In the first stage, an assessment tool was developed through a literature review for patients with cerebrovascular disease, the group with the first priority for HNS among patients with various health problems at home. Next, after the home care nurse identified patient nursing problems with the assessment tool, the patient classification system developed by Park (1988), consisting of 128 nursing activities under 6 categories, was used to identify the home care nurse's activities for patients with CVA at home. The research team held several workshops with five clinical nurse experts to refine it, and finally derived 110 nursing activities under 11 categories for patients with CVA. In the second stage, algorithms were developed to connect the 110 nursing activities with the patient nursing problems identified by the assessment tool. The computerization of the algorithms proceeded as follows. The algorithms were realized as a computer program using software engineering techniques. Development followed the prototyping method, beginning with requirement analysis of the software specifications, and the basic qualities of usability, compatibility, adaptability, and maintainability were taken into consideration. Particular emphasis was given to efficient construction of the database. To enhance database efficiency and establish structural cohesion, each data field is categorized with a weight of relevance to the particular disease; this approach permits easy adaptation when additional diseases are covered in the future. In parallel, expandability and maintainability were stressed throughout program development, which led to a modular design. Since the number of supported diseases grows as the project progresses, and since the diseases are interrelated and coupled with each other, expandability and maintainability must be given high priority. Furthermore, because the system is to be integrated with other medical systems in the future, these properties are essential. The prototype developed in this project is to be evaluated through system testing. Although there are various evaluation metrics such as cohesion, coupling, and adaptability, their direct measurement is very difficult, and analytical, quantitative evaluation is therefore nearly impossible. Instead, an experimental evaluation is to be applied through test runs by various users. This system testing will provide analysis from the users' point of view, and the detailed and additional requirement specifications arising from users' real situations will be fed back into the system model. The degree of freedom of the input and output will also be improved, and hardware limitations will be investigated. After refinement, the prototype system will serve as a design template for developing a more extensive system: relevant modules will be developed for the various diseases and integrated through a macroscopic design process focusing on inter-modularity, generality of the database, and compatibility with other systems.
The home care evaluation system comprises three main modules: (1) general information on a patient, (2) general health status of a patient, and (3) cerebrovascular disease patient. The general health status module has five sub-modules: physical measurement, vitality, nursing, pharmaceutical description, and emotional/cognitive ability. The CVA patient module is divided into ten sub-modules, including subjective sense, consciousness, memory, and language pattern. The typical sub-modules are described in Appendix 3.
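
As a hedged sketch of the relevance-weighted field categorization described above, the snippet below tags each assessment field with per-disease relevance weights so that new diseases can be added without restructuring the schema; the field names, diseases, and weights are hypothetical.

```python
# A minimal sketch of relevance-weighted data fields, as the abstract describes;
# field names, diseases, and weights here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AssessmentField:
    name: str
    category: str                                   # e.g. a "general health status" sub-module
    relevance: dict = field(default_factory=dict)   # disease -> relevance weight in [0, 1]

fields = [
    AssessmentField("consciousness level", "CVA patient", {"CVA": 1.0}),
    AssessmentField("blood pressure", "physical measurement", {"CVA": 0.8, "diabetes": 0.6}),
    AssessmentField("medication schedule", "pharmaceutical description", {"CVA": 0.5, "diabetes": 0.9}),
]

def fields_for_disease(fields, disease, threshold=0.5):
    """Select the fields most relevant to a disease; supporting a new disease
    only requires adding weights, not restructuring the schema."""
    return [f.name for f in fields if f.relevance.get(disease, 0.0) >= threshold]

print(fields_for_disease(fields, "CVA"))
```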

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist human analysts. In this situation, the demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking suffers from many conditions that degrade performance, such as re-appearance after an object leaves the recording location, and occlusion. Accordingly, action and emotion detection models built on top of detection and tracking also have difficulty extracting data for each object. In addition, deep learning pipelines composed of multiple models suffer from performance degradation due to bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition as the emotion recognition service. The proposed model uses single-linkage hierarchical clustering for Re-ID together with several processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model based on simple distance metrics, near real-time processing performance, and prevents tracking failures caused by object departure and re-appearance, occlusion, and similar conditions. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in every frame, and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects that the tracker failed to follow. Through this process, the same object can be re-tracked when it re-appears after leaving the scene or after occlusion, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding box queue and a feature queue method that reduce RAM requirements while maximizing GPU throughput, together with an IoF (Intersection over Face) algorithm that links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that a two-stage re-identification model can achieve real-time performance through these processing techniques, even in the high-cost setting of simultaneous action and facial emotion detection, without the accuracy loss incurred when simple metrics are used to reach real-time speed. The practical implication is that industrial fields which require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model. 
The proposed model, with its high re-tracking accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset, which is used by many international conferences. We will also investigate the cases that the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan additional research applying this model to datasets from various fields related to intelligent video analysis.
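
The sketch below illustrates the clustering step described above in a minimal form: appearance feature vectors from past and new detections are grouped by single-linkage hierarchical clustering, and new detections that land in a cluster containing a past identity are re-assigned to it. The random vectors stand in for Torchreid embeddings, and the distance threshold is an assumed tuning parameter, not the authors' setting.

```python
# A minimal sketch (not the authors' implementation) of re-identification via
# single-linkage hierarchical clustering over appearance embeddings.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
past_features = rng.normal(size=(8, 128))   # placeholder embeddings of previously tracked objects
new_features = rng.normal(size=(3, 128))    # placeholder embeddings of newly detected objects

features = np.vstack([past_features, new_features])
# Single-linkage clustering on cosine distance between appearance embeddings.
Z = linkage(features, method="single", metric="cosine")
labels = fcluster(Z, t=0.4, criterion="distance")   # the distance threshold is an assumption

past_labels, new_labels = labels[: len(past_features)], labels[len(past_features):]
for i, lab in enumerate(new_labels):
    matches = np.where(past_labels == lab)[0]
    if len(matches) > 0:
        print(f"new detection {i} re-identified as past object {matches[0]}")
    else:
        print(f"new detection {i} starts a new identity")
```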

A Design-phase Quality Model for Ubiquitous Service Ontology (유비쿼터스 서비스 온톨로지를 위한 설계 품질 모델)

  • Lee, Mee-Yeon;Park, Seung-Soo;Lee, Jung-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.6
    • /
    • pp.430-445
    • /
    • 2010
  • Effective service description and modeling methodologies are essential for dynamic service composition to provide autonomous services to users in ubiquitous computing environments. In our previous research, we proposed the 'u-Service' as an abstract and structured concept for the operations of devices in ubiquitous environments, established a mechanism to structure u-Services as an ontology, and defined a description specification to represent u-Service attributes. However, that work did not provide sufficient methods or standards for analyzing and evaluating the effectiveness of a u-Service ontology at design time. Since existing quality models for software products or computing systems cannot account for the characteristics of ubiquitous services, they are not suitable for ubiquitous service ontologies. Therefore, in this paper, we propose a quality evaluation model for designing and modeling a good ubiquitous service ontology, based on our u-Service ontology building process. We extract modeling goals and evaluation indicators according to the characteristics of ubiquitous service ontologies, and establish quality metrics to quantify each quality sub-characteristic. Experimental results of applying the proposed quality evaluation model to the u-Service ontologies constructed in our previous work show that the design of a ubiquitous service ontology can be analyzed from various angles and that recommendations for improvement can be indicated.
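
A generic, hedged sketch of how quality sub-characteristics might be quantified and aggregated into a characteristic score is shown below; the indicator names, values, and weights are hypothetical and do not reproduce the paper's actual metrics.

```python
# A minimal, generic sketch of rolling quantified sub-characteristic indicators
# into a quality characteristic score; all names and numbers are placeholders.
def characteristic_score(indicators, weights):
    """Weighted average of normalized indicator values (each assumed in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(indicators[name] * weights[name] for name in indicators) / total_weight

# Hypothetical sub-characteristic indicators for one ontology design quality characteristic.
indicators = {"concept coverage": 0.8, "structural consistency": 0.9, "redundancy avoidance": 0.6}
weights = {"concept coverage": 0.5, "structural consistency": 0.3, "redundancy avoidance": 0.2}
print(f"design quality score: {characteristic_score(indicators, weights):.2f}")
```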

A Study of Smart Healthcare Services Software Quality Satisfaction Rating System based on QoS(Quality of Service) Measurement Model (QoS(Quality of Service) 측정 모델을 참조한 스마트헬스케어서비스 소프트웨어 품질만족도 평가체계)

  • Noh, Si-Choon;Song, Eun-Jee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.1
    • /
    • pp.149-154
    • /
    • 2014
  • Quality is a value that can be measured by observing the quantitative or qualitative characteristics of a service, and QoS describes the minimum requirements that service traffic passing through a network must predictably satisfy. In the course of smart medical information system development, certain functional requirements must be met to satisfy quality objectives. The functional domains of a smart healthcare information system consist of the patient module, the smart sensing and communication domain, the RFID tag readers and behavior domain, the homecare station domain, and the clinical station. This study develops an evaluation methodology for u-health service quality satisfaction in each of these domains. In this paper, QoS metrics are separated from the quality and functional requirements of medical information: the quality parameters consist of six items, and the functional and quality requirements consist of five categories with 20 detailed items. A two-factor evaluation method based on a Korean smart health information quality assessment matrix is proposed as the quality evaluation methodology. The overall framework of this paper organizes the specific quality criteria of the medical information system and models the quality evaluation process for the smart environment.
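
The snippet below sketches one way a two-factor evaluation matrix of this shape could be computed, scoring six quality parameters against 20 detailed requirement items and averaging along each factor; the labels and scores are placeholders, not the paper's assessment items.

```python
# A minimal sketch of a two-factor evaluation matrix: quality parameters (rows)
# versus detailed requirement items (columns), with placeholder scores.
import numpy as np

quality_params = ["reliability", "responsiveness", "security",
                  "availability", "accuracy", "usability"]       # six quality parameters (placeholders)
requirement_items = [f"req-{i + 1:02d}" for i in range(20)]      # 20 detailed requirement items

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(len(quality_params), len(requirement_items)))  # 1-5 satisfaction scores

# Factor 1: satisfaction per quality parameter (averaged over requirement items).
per_parameter = scores.mean(axis=1)
# Factor 2: satisfaction per requirement item (averaged over quality parameters).
per_requirement = scores.mean(axis=0)

for name, value in zip(quality_params, per_parameter):
    print(f"{name}: {value:.2f}")
print("overall satisfaction:", round(scores.mean(), 2))
```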

A Spreadsheet Application that Enables to Flexibly Change Mappings in Requirement Traceability Matrix (요구사항 추적성 매트릭스에서 유연한 맵핑 변경을 가능하게 하는 스프레드시트 애플리케이션)

  • Jeong, Serin;Lee, Seonah
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.325-334
    • /
    • 2018
  • Requirement traceability should be continuously maintained during software development and evolution. In practice, however, it is usually updated only in the quality assurance phase. This gap between "is" and "should" exists because developers must invest considerable effort to update requirement traceability while obtaining only marginal benefit from the updated traceability. To close this gap, we propose a spreadsheet application that enables developers to flexibly change mappings in a requirement traceability matrix. In this way, developers can reduce the effort of updating the requirement traceability matrix while still obtaining the common form of a requirement traceability matrix on a spreadsheet. The proposed application maintains the mappings between two artifacts on each sheet so that, whenever an artifact item changes, developers can instantly insert the relevant mapping changes. Then, when developers want the common form of a requirement traceability matrix, the application computes the mappings among several artifacts and creates the matrix. The application also checks for traceability errors and calculates metrics so that developers can understand the completeness of the matrix. To assess the applicability of the proposed approach, we conducted a case study, which shows that the application can be applied to a real project and easily incorporates mapping changes.
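
As a hedged illustration of how per-sheet mappings can be composed into an end-to-end traceability matrix with simple completeness checks, consider the sketch below; the artifact names and mappings are hypothetical, and the composition rule (a boolean matrix product) is an assumption about how such a matrix can be derived, not the application's documented algorithm.

```python
# A minimal sketch: compose requirements->design and design->test mappings into
# a requirements->test matrix and report unmapped requirements as gaps.
import numpy as np

requirements = ["R1", "R2", "R3"]
design_items = ["D1", "D2"]
test_cases = ["T1", "T2", "T3"]

# Sheet 1: requirements x design items; Sheet 2: design items x test cases.
req_to_design = np.array([[1, 0],
                          [0, 1],
                          [0, 0]], dtype=bool)   # R3 is not mapped to any design item
design_to_test = np.array([[1, 1, 0],
                           [0, 0, 1]], dtype=bool)

# Compose the two sheets into a requirements x test cases matrix.
req_to_test = (req_to_design.astype(int) @ design_to_test.astype(int)) > 0

# Simple error check and completeness metric.
unmapped = [r for r, row in zip(requirements, req_to_test) if not row.any()]
coverage = 1 - len(unmapped) / len(requirements)
print(req_to_test.astype(int))
print(f"coverage: {coverage:.0%}, unmapped requirements: {unmapped}")
```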

The Complexity of Object-Oriented Systems by Analyzing the Class Diagram of UML (UML 클래스 다이어그램 분석에 의한 객체지향 시스템의 복잡도 연구)

  • Chung, Hong;Kim, Tae-Sik
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.6
    • /
    • pp.780-787
    • /
    • 2005
  • Many studies have proposed and validated complexity metrics for object-oriented systems. Most of them measure only partial aspects of a system, for example, the coupling between objects, the complexity of inheritance structures, or the cohesion of methods. Software practitioners, however, want to measure the complexity of the overall system, not just parts of it. We studied the complexity of the overall structure of object-oriented systems by analyzing the UML class diagram. A class diagram is composed of classes and their relations; the three kinds of relations, association, generalization, and aggregation, are what make the structure of an object-oriented system difficult to understand. We propose a heuristic metric that measures the complexity of an object-oriented system by combining these three kinds of relations. To analyze how the structural complexity of an object-oriented system affects its maintainability, we measured the degree of understandability, the reverse-engineering time needed to draw a class diagram from the source code, and the number of errors in the resulting diagram. The results of this experiment show that the proposed metric has a considerable relationship with the complexity of object-oriented systems. The metric should help software developers evaluate the complexity of object-oriented system structures during design and redesign them for future maintainability.
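
A minimal sketch of a heuristic complexity metric of this kind, a weighted sum over the three relation types counted in a class diagram, is given below; the weights are illustrative assumptions, not the authors' calibrated values.

```python
# A minimal sketch of a relation-weighted class-diagram complexity metric;
# the weights are illustrative assumptions, not the paper's calibrated values.
def class_diagram_complexity(n_association, n_generalization, n_aggregation,
                             w_assoc=1.0, w_gen=2.0, w_aggr=1.5):
    """Weighted sum of the three relation counts taken from a UML class diagram."""
    return w_assoc * n_association + w_gen * n_generalization + w_aggr * n_aggregation

# Example: a diagram with 12 associations, 5 generalizations, and 3 aggregations.
print(class_diagram_complexity(12, 5, 3))   # 12*1.0 + 5*2.0 + 3*1.5 = 26.5
```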

Methods to Enhance Service Scalability Using Service Replication and Migration (서비스 복제 및 이주를 이용한 서비스 확장성 향상 기법)

  • Kim, Ji-Won;Lee, Jae-Yoo;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.7
    • /
    • pp.503-517
    • /
    • 2010
  • Service-oriented computing, an effective paradigm for developing service applications from reusable services, has become popular. In service-oriented computing, the service consumer has no responsibility for managing services and simply invokes the services that providers offer. Service providers, on the other hand, must manage all resources and data so that consumers can use the service anytime and anywhere. However, it is hard for service providers to manage service quality because of the unspecified number of service consumers. Therefore, service scalability, providing services with the higher quality of service specified in a service level agreement, becomes a potential problem in service-oriented computing. There has been much research on scalability in the network, database, and distributed computing areas, but research on the definition of service scalability and on metrics for measuring it is still immature in the service engineering area. In this paper, we construct a service network that connects multiple service nodes and integrates all resources to manage them, and we present a framework for managing service scalability using service migration and replication mechanisms. In Section 3, we present the structure of the scalability management framework and its basic functionality. In Section 4, we propose the scalability enhancement mechanism needed to realize the framework's functionality. In Section 5, we design and implement the framework using the proposed mechanism. In Section 6, we demonstrate the results of a case study that dynamically manages services in a multi-node environment by applying our framework. Through the case study, we show the applicability of our scalability management framework and mechanism.
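
The sketch below illustrates, under stated assumptions, the kind of decision logic a scalability manager might apply: when a node's load exceeds a threshold, one of its services is replicated onto the least-loaded node. It is not the framework's actual mechanism; node names, loads, and the threshold are hypothetical.

```python
# A minimal sketch of threshold-based replication across service nodes;
# not the framework's actual mechanism, all values are placeholders.
from dataclasses import dataclass, field

@dataclass
class ServiceNode:
    name: str
    load: float                                 # e.g. CPU utilization in [0, 1]
    services: list = field(default_factory=list)

REPLICATE_THRESHOLD = 0.8                       # assumed load threshold

def rebalance(nodes):
    """Replicate one service from each overloaded node onto the least-loaded node."""
    target = min(nodes, key=lambda n: n.load)
    for node in nodes:
        if node.load > REPLICATE_THRESHOLD and node is not target and node.services:
            service = node.services[0]
            target.services.append(service)     # replication: the service now runs on both nodes
            print(f"replicated {service} from {node.name} to {target.name}")

nodes = [ServiceNode("node-a", 0.92, ["order-service"]),
         ServiceNode("node-b", 0.35, ["catalog-service"])]
rebalance(nodes)
```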

A Method for Measuring and Evaluating for Block-based Programming Code (블록기반 프로그래밍 코드의 수준 및 취약수준 측정방안)

  • Sohn, Wonsung
    • Journal of The Korean Association of Information Education
    • /
    • v.20 no.3
    • /
    • pp.293-302
    • /
    • 2016
  • Software education has recently attracted strong interest in public schools and is also considered a high-priority issue in introductory programming courses for college freshmen. Block-based programming tools are widely used by beginners and offer several advantages over text-based programming language tools. However, measuring the quality of script-based program code in detail requires a laborious manual process. As a result, previous research on evaluating block-based script code has focused on very simple methods that normalize the number of blocks related to each programming concept. In such cases, it is difficult to measure the structural vulnerability of the script code and the implicit programming concepts that are not directly exposed. This research proposes a framework for measuring and evaluating the quality of scripts written with block-based programming tools, together with a method for finding vulnerable parts of the script code. In this framework, quality metrics are constructed to structure the implicit programming concepts, and a quality measure and a script vulnerability model are developed to improve the programming level. Consequently, the proposed methods make it possible to check the programming level and to predict a heuristic target level.
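
As a hedged sketch of measuring a block-based script against programming-concept metrics, the snippet below maps blocks to concepts and flags concepts with no supporting blocks as weak points; the block vocabulary and concept mapping are simplified assumptions, not the paper's metric model.

```python
# A minimal sketch: profile a block-based script by programming concept and
# flag concepts with no supporting blocks; the mapping below is hypothetical.
from collections import Counter

CONCEPT_OF_BLOCK = {
    "repeat": "loop", "forever": "loop",
    "if": "conditional", "if_else": "conditional",
    "set_variable": "variable", "change_variable": "variable",
    "broadcast": "event", "when_received": "event",
}

def assess_script(blocks):
    """Count how many blocks in a script support each programming concept."""
    counts = Counter(CONCEPT_OF_BLOCK[b] for b in blocks if b in CONCEPT_OF_BLOCK)
    return {concept: counts.get(concept, 0) for concept in set(CONCEPT_OF_BLOCK.values())}

script = ["when_received", "set_variable", "repeat", "change_variable"]
profile = assess_script(script)
weak_concepts = [c for c, n in profile.items() if n == 0]   # concepts with no supporting blocks
print("concept profile:", profile)
print("vulnerable (missing) concepts:", weak_concepts)
```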