• Title/Summary/Keyword: software metrics

Search results: 319

A Practical Quality Model for Evaluation of Mobile Services Based on Mobile Internet Device (모바일 인터넷 장비에 기반한 모바일 서비스 평가를 위한 실용적인 품질모델)

  • Oh, Sang-Hun;La, Hyun-Jung;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.5
    • /
    • pp.341-353
    • /
    • 2010
  • A Mobile Internet Device (MID) allows users to flexibly use various forms of wireless Internet such as Wi-Fi, GSM, CDMA, and 3G. Over such networks, MID users can utilize application services. MID usage is expected to grow due to the benefits of portability, Internet accessibility, and other conveniences. However, MIDs have resource constraints such as limited CPU power, small memory size, limited battery life, and small screen size. Consequently, they cannot hold large, complex applications or process large amounts of data in memory. An effective solution to these limitations is to develop cloud services for the required application functionality, deploy them on the server side, and let MID users access the services through the Internet. A major concern in running cloud services for MIDs is the potential for low Quality of Service (QoS) due to the characteristics of MIDs. Even measuring the QoS of such services is more technically challenging than conventional quality measurement. In this paper, we first identify the characteristics of MIDs and of cloud services for MIDs. Based on these observations, we derive a number of quality attributes and their metrics for measuring the QoS of mobile services. A case study applying the proposed quality model is presented to show its effectiveness and applicability.

Analysis of Intensity Attenuation Characteristics Using Physics-based Earthquake Ground-motion Simulation with Site Effect in the Southern Korean Peninsula (한반도 남부에서 부지효과를 고려한 물리적 지진동 모델링 기반 진도 감쇠 특성 분석 연구)

  • An, So Hyeon;Kyung, Jai Bok;Song, Seok Goo;Cho, Hyung-Ik
    • Journal of the Korean earth science society
    • /
    • v.41 no.3
    • /
    • pp.238-247
    • /
    • 2020
  • This study simulated strong ground-motion waveforms in the southern Korean Peninsula based on the physics-based earthquake modeling of the Southern California Earthquake Center (SCEC) BroadBand Platform (BBP). Characteristics of intensity attenuation were investigated for M 6.0-7.0 events, incorporating site effects. The SCEC BBP is software that generates broadband (0-10 Hz) ground-motion waveforms for earthquake scenarios. Among the five modeling methods available in the v16.5 platform, we used the Song Model. Approximately 50 earthquake scenarios each were simulated for M 6.0, 6.5, and 7.0 events. Representative metrics such as peak ground acceleration (PGA) and peak ground velocity (PGV) were obtained from the synthetic waveforms simulated before and after consideration of site effects (VS30). They were then empirically converted to distributions of instrumental intensity. The intensity that incorporates site effects is amplified in low-VS30 rather than high-VS30 zones.
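As an illustrative aside (not code from this study), PGA and PGV can be extracted from a synthetic acceleration record as the peak absolute acceleration and the peak absolute velocity, the latter obtained here by trapezoidal integration. The function name and units are hypothetical:

```python
def peak_ground_metrics(accel, dt):
    """Return (PGA, PGV) from an acceleration time series sampled at step dt.

    PGA is the peak absolute acceleration. PGV is the peak absolute
    velocity, computed by trapezoidal integration of the acceleration.
    """
    pga = max(abs(a) for a in accel)
    velocities, v = [0.0], 0.0
    for i in range(1, len(accel)):
        # trapezoidal rule: v[i] = v[i-1] + (a[i-1] + a[i]) / 2 * dt
        v += 0.5 * (accel[i - 1] + accel[i]) * dt
        velocities.append(v)
    pgv = max(abs(v) for v in velocities)
    return pga, pgv

# Toy record: a short acceleration pulse sampled every 0.01 s
pga, pgv = peak_ground_metrics([0.0, 1.0, -0.5, 0.0], dt=0.01)
```

In practice the waveforms would come from the BBP simulations, and the same peaks would be read off before and after applying the site-effect correction.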

Analysis on Power Consumption Characteristics of SHA-3 Candidates and Low-Power Architecture (SHA-3 해쉬함수 소비전력 특성 분석 및 저전력 구조 기법)

  • Kim, Sung-Ho;Cho, Sung-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.1
    • /
    • pp.115-125
    • /
    • 2011
  • Cryptographic hash functions, also called one-way functions, ensure the integrity of communication data and commands by detecting or blocking forgery. Hash functions can also be used with other security protocols for signatures, authentication, and key distribution. SHA-1 was widely used until it was shown to be cryptographically broken by Wang et al. in 2005. For this reason, NIST launched the SHA-3 competition in November 2007 to develop a new secure hash function by 2012. Many SHA-3 hash functions were proposed and are currently in the review process. To choose the new SHA-3 hash function from among the proposals, there have been many efforts to analyze the cryptographic security and the hardware/software characteristics of each one. However, there is little research on the SHA-3 candidates from the standpoint of power consumption, which is a crucial metric for a hardware module. In this paper, we analyze the power consumption characteristics of the SHA-3 hash functions when they are implemented as ASIC hardware modules. We also propose a power-efficient hardware architecture for Luffa, a strong candidate for the new SHA-3 hash function. Our proposed low-power architecture for Luffa achieves 10% less power consumption than the previous Luffa hardware architecture.

A Systematic Approach Of Construction Management Based On Last Planner System And Its Implementation In The Construction Industry

  • Hussain, SM Abdul Mannan;Sekhar, Dr.T.Seshadri;Fatima, Asra
    • Journal of Construction Engineering and Project Management
    • /
    • v.5 no.2
    • /
    • pp.11-15
    • /
    • 2015
  • The Last Planner System (LPS) has been implemented on construction projects to increase workflow reliability, a precondition for project performance against productivity and progress targets. The LPS encompasses four tiers of planning processes: master scheduling, phase scheduling, lookahead planning, and commitment/weekly work planning. This research highlights deficiencies in the current implementation of the LPS, including poor lookahead planning, which results in poor linkage between weekly work plans and the master schedule. This poor linkage undermines the ability of the weekly work planning process to select for execution those tasks that are critical to project success. As a result, percent plan complete (PPC) becomes a weak indicator of project progress. The purpose of this research is to improve lookahead planning (the bridge between weekly work planning and master scheduling), improve PPC, and improve the selection of tasks that are critical to project success by strengthening the link between Should, Can, Will, and Did (components of the LPS), thereby rendering PPC a better indicator of project progress. The research employs the case study method to describe deficiencies in the current implementation of the LPS and to suggest guidelines for better application of the LPS in general and lookahead planning in particular. It then introduces an analytical simulation model to analyze the lookahead planning process. This is done by examining the impact on PPC of increasing two lookahead planning performance metrics: tasks anticipated (TA) and tasks made ready (TMR). Finally, the research investigates the importance of the lookahead planning functions: identification and removal of constraints, task breakdown, and operations design. The research findings confirm the positive impact of improving lookahead planning (i.e., TA and TMR) on PPC.
It also recognizes the need to perform lookahead planning differently for three types of work involving different levels of uncertainty: stable work, medium-uncertainty work, and highly emergent work. The research confirms the LPS rules for practice, and specifically the need to plan in greater detail as the time to perform the work approaches. It highlights the role of the LPS as a production system that incorporates deliberate planning (predetermined and optimized) and situated planning (flexible and adaptive). Finally, the research presents recommendations for production planning improvements in three areas: process-related (suggesting guidelines for practice), technical (highlighting issues with current software programs and advocating the inclusion of collaborative planning capability), and organizational (suggesting transitional steps when applying the LPS).
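A minimal sketch of the three LPS metrics discussed above, assuming PPC is the share of weekly-plan tasks completed, TA the share of weekly-plan tasks that appeared in the lookahead, and TMR the share of lookahead tasks whose constraints were removed. These are plausible formalizations for illustration, not the paper's own definitions:

```python
def ppc(completed, planned):
    """Percent plan complete: share of weekly-plan tasks completed."""
    return 100.0 * completed / planned

def tasks_anticipated(weekly_tasks, lookahead_tasks):
    """TA: percentage of weekly-plan tasks that appeared in the lookahead."""
    weekly, lookahead = set(weekly_tasks), set(lookahead_tasks)
    return 100.0 * len(weekly & lookahead) / len(weekly)

def tasks_made_ready(ready_tasks, lookahead_tasks):
    """TMR: percentage of lookahead tasks made ready (constraints removed)."""
    ready, lookahead = set(ready_tasks), set(lookahead_tasks)
    return 100.0 * len(ready & lookahead) / len(lookahead)

# Toy week: 8 of 10 committed tasks done; lookahead covered half the plan
rate = ppc(completed=8, planned=10)          # 80.0
ta = tasks_anticipated(["a", "b", "c", "d"], ["a", "b", "e"])  # 50.0
```

The research's finding is that raising TA and TMR (better lookahead planning) pulls PPC up, making it a more faithful progress indicator.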

Detecting Errors in POS-Tagged Corpus on XGBoost and Cross Validation (XGBoost와 교차검증을 이용한 품사부착말뭉치에서의 오류 탐지)

  • Choi, Min-Seok;Kim, Chang-Hyun;Park, Ho-Min;Cheon, Min-Ah;Yoon, Ho;Namgoong, Young;Kim, Jae-Kyun;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.7
    • /
    • pp.221-228
    • /
    • 2020
  • A part-of-speech (POS) tagged corpus is a collection of electronic text in which each word is annotated with a tag for its corresponding POS; such corpora are widely used as training data for natural language processing. The training data are generally assumed to contain no errors, but in reality they include various types of errors, which degrade the performance of systems trained on them. To alleviate this problem, we propose a novel method for detecting errors in an existing POS-tagged corpus using an XGBoost classifier and cross-validation. We first train the classifier of a POS tagger on the POS-tagged corpus, which contains some errors, and then detect errors in the corpus using cross-validation; however, the classifier cannot detect errors directly because there is no training data for detecting POS-tagging errors. We therefore detect errors by comparing the outputs (POS probabilities) of the classifier while adjusting hyperparameters. The hyperparameters are estimated on a small-scale error-tagged corpus, in which text sampled from the POS-tagged corpus is marked up with POS errors by experts. We use recall and precision, which are widely used in information retrieval, as evaluation metrics. We show that the proposed method is valid by comparing the distributions of the sample (the error-tagged corpus) and the population (the POS-tagged corpus), because not all detected errors can be checked manually. In the near future, we will apply the proposed method to a dependency-tree-tagged corpus and a semantic-role-tagged corpus.
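For reference, the recall and precision used as evaluation metrics can be computed from the set of detected error positions and a gold set of true errors. This is a generic sketch, not the paper's code:

```python
def precision_recall(detected, true_errors):
    """Precision and recall of detected error positions against a gold set.

    Precision: fraction of detections that are real errors.
    Recall: fraction of real errors that were detected.
    """
    detected, true_errors = set(detected), set(true_errors)
    tp = len(detected & true_errors)  # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(true_errors) if true_errors else 0.0
    return precision, recall

# Toy example: detector flags token positions 1-4; experts marked 2, 3, 5
p, r = precision_recall([1, 2, 3, 4], [2, 3, 5])  # p = 0.5, r = 2/3
```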

Malicious Traffic Classification Using Mitre ATT&CK and Machine Learning Based on UNSW-NB15 Dataset (마이터 어택과 머신러닝을 이용한 UNSW-NB15 데이터셋 기반 유해 트래픽 분류)

  • Yoon, Dong Hyun;Koo, Ja Hwan;Won, Dong Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.99-110
    • /
    • 2023
  • This study proposed a classification of malicious network traffic using a cyber threat framework (Mitre ATT&CK) and machine learning to solve the real-time traffic detection problems faced by current security monitoring systems. We applied a network traffic dataset called UNSW-NB15 to the Mitre ATT&CK framework to transform the labels and generated the final dataset through rare-class processing. After training several boosting-based ensemble models on the generated final dataset, we demonstrated how these ensemble models classify network traffic using various performance metrics. Based on the F1 score, we showed that XGBoost with no rare-class processing performs best in the multi-class traffic environment. We recognize that machine learning ensemble models built through Mitre ATT&CK label conversion and oversampling differ from existing studies, but they have limitations due to (1) the inability to match existing dataset labels and Mitre ATT&CK labels perfectly and (2) the presence of excessively sparse classes. Nevertheless, CatBoost with B-SMOTE achieved a classification accuracy of 0.9526, which is expected to enable automatic detection of normal/abnormal network traffic.
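As a hedged illustration of the F1-based comparison (not the study's code), a macro-averaged F1 score over traffic classes can be computed from per-class true-positive, false-positive, and false-negative counts:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_class_counts):
    """Macro-averaged F1 over classes, each given as (tp, fp, fn).

    Macro averaging weights every class equally, which is why it is a
    common choice when rare traffic classes matter.
    """
    scores = []
    for tp, fp, fn in per_class_counts:
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        scores.append(f1_score(p, r))
    return sum(scores) / len(scores)

# Toy example: one well-detected class, one sparse and poorly detected class
score = macro_f1([(8, 2, 2), (1, 1, 3)])
```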

Correct Closure of the Left Atrial Appendage Reduces Stagnant Blood Flow and the Risk of Thrombus Formation: A Proof-of-Concept Experimental Study Using 4D Flow Magnetic Resonance Imaging

  • Min Jae Cha;Don-Gwan An;Minsoo Kang;Hyue Mee Kim;Sang-Wook Kim;Iksung Cho;Joonhwa Hong;Hyewon Choi;Jee-Hyun Cho;Seung Yong Shin;Simon Song
    • Korean Journal of Radiology
    • /
    • v.24 no.7
    • /
    • pp.647-659
    • /
    • 2023
  • Objective: The study was conducted to investigate the effect of correct occlusion of the left atrial appendage (LAA) on intracardiac blood flow and thrombus formation in patients with atrial fibrillation (AF) using four-dimensional (4D) flow magnetic resonance imaging (MRI) and three-dimensional (3D)-printed phantoms. Materials and Methods: Three life-sized 3D-printed left atrium (LA) phantoms, including a pre-occlusion (i.e., before the occlusion procedure) model and correctly and incorrectly occluded post-procedural models, were constructed based on cardiac computed tomography images from an 86-year-old male with long-standing persistent AF. A custom-made closed-loop flow circuit was set up, and pulsatile simulated pulmonary venous flow was delivered by a pump. 4D flow MRI was performed using a 3T scanner, and the images were analyzed using MATLAB-based software (R2020b; MathWorks). Flow metrics associated with blood stasis and thrombogenicity, such as the volume of stasis defined by the velocity threshold ($\left|\vec{V}\right|$ < 3 cm/s), surface-and-time-averaged wall shear stress (WSS), and endothelial cell activation potential (ECAP), were analyzed and compared among the three LA phantom models. Results: Different spatial distributions, orientations, and magnitudes of LA flow were directly visualized within the three LA phantoms using 4D flow MRI. The time-averaged volume and its ratio to the corresponding entire volume of LA flow stasis were consistently reduced in the correctly occluded model (70.82 mL and 39.0%, respectively), followed by the incorrectly occluded (73.17 mL and 39.0%, respectively) and pre-occlusion (79.11 mL and 39.7%, respectively) models. The surface-and-time-averaged WSS and ECAP were also lowest in the correctly occluded model (0.048 Pa and 4.004 Pa⁻¹, respectively), followed by the incorrectly occluded (0.059 Pa and 4.792 Pa⁻¹, respectively) and pre-occlusion (0.072 Pa and 5.861 Pa⁻¹, respectively) models.
Conclusion: These findings suggest that a correctly occluded LAA leads to the greatest reduction in LA flow stasis and thrombogenicity, presenting a tentative procedural goal to maximize clinical benefits in patients with AF.
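A minimal sketch of the stasis metric described above: the volume of stasis is the total volume of voxels whose speed falls below the 3 cm/s threshold, and its ratio is that count over all voxels. This is illustrative only; the names and the uniform-voxel assumption are mine, not the study's MATLAB implementation:

```python
def stasis_metrics(speeds_cm_s, voxel_volume_ml, threshold_cm_s=3.0):
    """Return (stasis volume in mL, stasis ratio) for a velocity field.

    speeds_cm_s: per-voxel speed magnitudes |V| in cm/s.
    voxel_volume_ml: assumed uniform volume of one voxel, in mL.
    A voxel counts as stagnant when its speed is below the threshold.
    """
    stagnant = sum(1 for s in speeds_cm_s if s < threshold_cm_s)
    volume = stagnant * voxel_volume_ml
    ratio = stagnant / len(speeds_cm_s)
    return volume, ratio

# Toy field of four voxels, 0.5 mL each: two are below 3 cm/s
volume, ratio = stasis_metrics([1.0, 2.0, 4.0, 5.0], voxel_volume_ml=0.5)
```

In the study this quantity was further time-averaged over the pulsatile cycle before comparing the three phantom models.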

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology
    • /
    • v.24 no.8
    • /
    • pp.807-820
    • /
    • 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and variability in quantifying interstitial lung disease (ILD) using a deep learning-based automated software. Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard- or low-radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (Group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on original and converted CT images using a deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation significantly increased after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54, P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT. 
Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification on CT images obtained with different scan parameters and from different manufacturers.
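For reference, the Dice similarity coefficient (DSC) used here measures overlap between a predicted segmentation and a manual one. A generic sketch over flattened binary masks, not the Aview software's implementation:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two same-length binary masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    Two empty masks are treated as perfect agreement.
    """
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# Toy masks: each marks 2 pixels, sharing exactly 1 -> DSC = 0.5
score = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

The pixel-wise recall and precision reported alongside DSC follow the same true-positive counting, normalized by the manual and predicted mask sizes respectively.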

A Study on the Development of an Assessment Index for Selecting Start-ups on Balanced Scorecard (균형성과표(BSC) 기반 창업기업 선정평가지표 개발)

  • Jung, kyung Hee;Choi, Dae Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.13 no.6
    • /
    • pp.49-62
    • /
    • 2018
  • The purpose of this study is to develop an assessment index for selecting promising start-ups, thereby enhancing the efficiency of start-up support programs. Three major research steps were conducted to develop the assessment model. First, the study theoretically redefined the assessment index from the perspective of the Balanced Scorecard (BSC) through a literature review. Second, major assessment indices were derived using the Delphi technique with experts in start-up areas. Third, weights were derived by applying the AHP technique to calculate the importance of each index. The results of this study are summarized as follows. First, the study applied the BSC view to the assessment model for selecting start-ups, building on the prior literature. Second, the final major questions were derived by collecting opinions through a structured survey of leading start-up experts in the relevant areas and eliciting the most representative questions. Third, when the weights of the selected assessment indices were applied, the commercialization viewpoint had the highest priority, followed by the market, technology development, and organizational capability viewpoints. In the middle tier, the ability to make products (commercialization viewpoint), market competitiveness (market viewpoint), product differentiation capability (technology development viewpoint), and the ability of the entrepreneur (organizational capability viewpoint) were important. Overall, the most important items were, in order, the capabilities of entrepreneurs, market competitiveness, productization capability, and product differentiation. Among the small items, comparative excellence over competing products had the highest priority, followed by marketability, entrepreneurial capacity, ability to raise capital, and desire and passion for entrepreneurship.
The results of this study present a conceptual alternative to the preceding studies on developing selection assessment indices. They also provide meaningful implications as an attempt to develop more sophisticated indicators, overcoming the limitations of empirical research that covered only some of the evaluation metrics.
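As an illustrative sketch of the AHP weighting step (not the study's actual computation), priority weights can be derived from a pairwise comparison matrix using the common row geometric-mean approximation:

```python
from math import prod

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix.

    Uses the row geometric-mean method: take the geometric mean of each
    row, then normalize the means so the weights sum to 1. The matrix is
    assumed reciprocal (pairwise[i][j] == 1 / pairwise[j][i]).
    """
    n = len(pairwise)
    geo_means = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Toy comparison of two criteria where the first is judged 3x as important
weights = ahp_weights([[1.0, 3.0],
                       [1.0 / 3.0, 1.0]])  # -> [0.75, 0.25]
```

With a full matrix over the four BSC viewpoints, the same normalization would yield the priority ordering the study reports.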