• Title/Summary/Keyword: Class Structure (클래스 구조)

Search Results: 488

Scenario-Driven Verification Method for Completeness and Consistency Checking of UML Object-Oriented Analysis Model (UML 객체지향 분석모델의 완전성 및 일관성 진단을 위한 시나리오기반 검증기법)

  • Jo, Jin-Hyeong;Bae, Du-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.3
    • /
    • pp.211-223
    • /
    • 2001
  • The purpose of the scenario-based verification method proposed in this paper is to diagnose the completeness and consistency of object-oriented analysis models written in UML. The overall verification procedure is carried out step by step, using a cross-referencing process between the Use Case scenarios produced during Use Case modeling for requirements analysis and the object behavior scenarios derived from the UML analysis model by reverse engineering, together with a scenario information tree traversal process. For this procedure, the object-oriented analysis models written in UML are first converted into formal Use Case specifications using a formal specification language. Next, a scenario information tree representing the static structure of the objects within each Use Case is built from the formal Use Case specification, and the individual scenario flows are laid out on this tree according to the message sequences, i.e., the dynamic object behavior information contained in the formal Use Case specification. Finally, completeness and consistency checking is performed, centered on scenario information tree traversal and scenario information table lookup. That is, completeness of the analysis model with respect to the user requirements is checked by examining whether the object behavior scenarios generated by traversing the scenario information tree of the Use Case under verification match the Use Case scenarios derived during requirements analysis. Consistency of the analysis model is then checked by building a scenario information table from the scenario-related information collected during traversal and confirming whether the class-related information produced during analysis is covered by the scenarios. To demonstrate the usefulness of the proposed method, it was applied to a concrete case: an analysis model written in UML for a university course registration system. (A minimal illustrative sketch of the scenario-tree cross-check is given after this entry.)

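As a rough, purely illustrative companion to the abstract above (not the authors' tooling or notation), the sketch below builds a tiny scenario information tree, derives object-behavior message sequences by traversing it, and cross-checks them against Use Case scenarios for completeness and against class information for consistency; all class, message, and function names are hypothetical.

```python
# Hedged sketch: cross-checking Use Case scenarios against object-behavior
# scenarios derived by traversing a scenario information tree (names are illustrative).

class ScenarioNode:
    def __init__(self, message, children=None):
        self.message = message          # e.g. "Student.requestEnrollment"
        self.children = children or []  # alternative continuations of the flow

def traverse(node, prefix=()):
    """Enumerate every message sequence (object-behavior scenario) in the tree."""
    path = prefix + (node.message,)
    if not node.children:
        yield path
    for child in node.children:
        yield from traverse(child, path)

def check_completeness(tree_root, use_case_scenarios):
    """A Use Case scenario is covered if some traversal path reproduces it."""
    derived = set(traverse(tree_root))
    return {uc: uc in derived for uc in map(tuple, use_case_scenarios)}

def check_consistency(tree_root, class_names):
    """Each class in the analysis model should appear in at least one scenario."""
    mentioned = {msg.split(".")[0] for path in traverse(tree_root) for msg in path}
    return {cls: cls in mentioned for cls in class_names}

# Toy course-registration example with hypothetical messages.
root = ScenarioNode("Student.requestEnrollment",
                    [ScenarioNode("Course.checkCapacity",
                                  [ScenarioNode("Registrar.confirm")])])
print(check_completeness(root, [["Student.requestEnrollment",
                                 "Course.checkCapacity",
                                 "Registrar.confirm"]]))
print(check_consistency(root, ["Student", "Course", "Registrar", "Billing"]))
```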

The Study for Enhancing Resilience to Debris Flow at the Vulnerable Areas (토석류 재해발생 시 레질리언스 강화를 위한 연구)

  • Kim, Sungduk;Lee, Hojin;Chang, Hyungjoon;Dho, Hyonseung
    • Journal of the Korean GEO-environmental Society
    • /
    • v.22 no.8
    • /
    • pp.5-12
    • /
    • 2021
  • Climate change caused by global warming increases the frequency of super typhoons and causes various types of sediment disasters, such as debris flows, in mountainous areas. This study evaluates the behavior of debris flow according to the multiplier value of the precipitation characteristics and the quantity of debris flow according to the typhoon category. For the analysis of the debris flow, a time-marching finite difference method was applied. The larger the typhoon category, the higher the peak flow discharge of the debris flow and the faster its arrival time. When the precipitation characteristic multiplier is large, the fluctuation amplitude is high and the bandwidth is wide. When the slope angle was steeper, water discharge increased by 2~2.5 times or more, and the fluctuation of the debris flow discharge increased. All debris flow velocities fell into the "Very rapid" class, and the distribution of the erosion or sedimentation velocity showed that the magnitude of erosion increased from the beginning, with large-scale erosion occurring and flowing downstream. The results of this study will provide information for predicting debris flow disasters, designing structural countermeasures, and establishing measures for reinforcing resilience in vulnerable areas.
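
The abstract mentions a finite difference method marched over time; the fragment below is only a generic sketch of that kind of explicit time stepping under assumed, simplified governing equations, and does not reproduce the paper's model, parameters, or data.

```python
import numpy as np

# Purely schematic explicit finite-difference time stepping for a 1-D
# continuity equation dh/dt + dq/dx = r (flow depth h, discharge q, inflow r).
# The governing equations, the closure relation for q, and all parameter
# values below are placeholders, not the values used in the paper.

nx, nt = 100, 500            # grid points, time steps
dx, dt = 10.0, 0.1           # spatial step [m], time step [s]
slope_factor = 2.0           # stand-in for the effect of a steeper slope angle
rainfall = 1e-4              # stand-in for the precipitation multiplier [m/s]

h = np.zeros(nx)             # flow depth along the channel
for _ in range(nt):
    q = slope_factor * h**1.5                 # hypothetical depth-discharge law
    dqdx = np.diff(q, prepend=q[0]) / dx      # upwind spatial derivative
    h = np.maximum(h + dt * (rainfall - dqdx), 0.0)

q = slope_factor * h**1.5
print(f"flow discharge proxy at the downstream end: {q[-1]:.4e}")
```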

A study on the 3-step classification algorithm for the diagnosis and classification of refrigeration system failures and their types (냉동시스템 고장 진단 및 고장유형 분석을 위한 3단계 분류 알고리즘에 관한 연구)

  • Lee, Kangbae;Park, Sungho;Lee, Hui-Won;Lee, Seung-Jae;Lee, Seung-hyun
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.8
    • /
    • pp.31-37
    • /
    • 2021
  • As buildings grow larger with industrial development and urbanization, the need to purify indoor air and maintain a comfortable indoor environment is also increasing. With the development of monitoring technology for refrigeration systems, it has become possible to manage the amount of electricity consumed in buildings; refrigeration systems in particular account for about 40% of power consumption in commercial buildings. The purpose of this study was therefore to understand the structure of the refrigeration system, collect and analyze data generated during its operation, and quickly detect and classify failure situations of various types and severities, in order to develop a refrigeration system failure diagnosis algorithm. In particular, to improve the classification accuracy for failure types that are difficult to distinguish, a three-step diagnosis and classification algorithm was developed and proposed. After numerous experiments and a hyper-parameter optimization process, models based on SVM and LGBM were presented as the classifiers suitable for each stage. In this study, the characteristics affecting failure were preserved as much as possible, and all failure types, including the refrigerant-related failures that had been difficult to handle in previous studies, were classified with excellent results.
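
As a hedged illustration of how a staged SVM/LGBM cascade like the one described might be wired together (the paper's actual stage definitions, features, and labels are not given here, so the class names, label encodings, and stage assignments below are assumptions):

```python
# Hedged sketch of a three-stage fault diagnosis cascade; X is a numeric
# feature matrix and y_* are integer-coded labels (all placeholders).
import numpy as np
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

class ThreeStepDiagnosis:
    def __init__(self):
        self.stage1 = SVC()               # stage 1: normal (0) vs. faulty (1)
        self.stage2 = SVC()               # stage 2: coarse fault group
        self.stage3 = LGBMClassifier()    # stage 3: fine-grained fault type

    def fit(self, X, y_fault, y_group, y_type):
        self.stage1.fit(X, y_fault)
        faulty = y_fault == 1
        self.stage2.fit(X[faulty], y_group[faulty])
        self.stage3.fit(X[faulty], y_type[faulty])
        return self

    def predict(self, X):
        fault = self.stage1.predict(X) == 1
        group = np.full(len(X), -1)       # -1 means "normal / not applicable"
        ftype = np.full(len(X), -1)
        if fault.any():
            # refine only the samples flagged as faulty in stage 1
            group[fault] = self.stage2.predict(X[fault])
            ftype[fault] = self.stage3.predict(X[fault])
        return fault.astype(int), group, ftype
```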

A Code Clustering Technique for Unifying Method Full Path of Reusable Cloned Code Sets of a Product Family (제품군의 재사용 가능한 클론 코드의 메소드 경로 통일을 위한 코드 클러스터링 방법)

  • Kim, Taeyoung;Lee, Jihyun;Kim, Eunmi
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.1
    • /
    • pp.1-18
    • /
    • 2023
  • Similar software is often developed with the Clone-And-Own (CAO) approach, which copies and modifies existing artifacts. The CAO approach is considered a bad practice because it makes maintenance difficult as the number of cloned products increases. Software product line engineering is a methodology that can solve this issue by developing a product family through systematic reuse. Migrating product families that have been developed with the CAO approach to product line engineering begins with finding clones, integrating them, and building them into reusable assets. However, cloning occurs at various levels, from directories down to code lines, and the cloned structures may have been changed, which makes it difficult to build a product line code base simply by finding clones. Successful migration thus requires unifying the source code's file path, class name, and method signature. This paper proposes a clustering method that identifies sets of similar code scattered across product variants whose method full paths differ and therefore need path unification. To show the effectiveness of the proposed method, we conducted an experiment using the Apo Games product line, which has evolved with the CAO approach. As a result, the average precision of clustering performed without preprocessing was 0.91 and the number of identified common clusters was 0, whereas our method achieved 0.98 and 15, respectively.
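
A minimal sketch of the underlying idea, clustering method bodies by content similarity while ignoring their differing full paths, might look like the following; the similarity measure, threshold, and names are assumptions, not the paper's algorithm.

```python
# Hedged sketch: grouping near-identical methods from different product variants
# even when their full paths (directory/class/method) differ.
from itertools import combinations

def tokens(source: str) -> set:
    """Very rough token set of a method body (ignores its path and name)."""
    return set(source.replace("(", " ").replace(")", " ").split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_clones(methods: dict, threshold: float = 0.8):
    """methods maps a full path like 'variantA/game/Board.move' to its source."""
    parent = {p: p for p in methods}
    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for p1, p2 in combinations(methods, 2):
        if jaccard(tokens(methods[p1]), tokens(methods[p2])) >= threshold:
            parent[find(p1)] = find(p2)   # union: same clone cluster
    clusters = {}
    for p in methods:
        clusters.setdefault(find(p), []).append(p)
    # each cluster's members are candidates for unification onto one full path
    return [sorted(v) for v in clusters.values() if len(v) > 1]
```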

Detection Fastener Defect using Semi Supervised Learning and Transfer Learning (준지도 학습과 전이 학습을 이용한 선로 체결 장치 결함 검출)

  • Sangmin Lee;Seokmin Han
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.91-98
    • /
    • 2023
  • Recently, with the development of artificial intelligence, a wide range of industries are being automated and optimized. In the domestic rail industry, there is also research on using supervised learning to detect rail defects. However, the track contains structures other than rails; the fastener is the device that binds the rail to those structures, and periodic inspection is required to prevent safety accidents. In this paper, we present a method of reducing labeling cost by combining semi-supervised learning with a transfer model trained on rail fastener data. We use ResNet50 pretrained on ImageNet as the backbone network. We first randomly sample training data from the unlabeled pool, label it, and train the model. After predicting the remaining unlabeled data with the trained model, we add, for each class, a predetermined number of samples with the highest predicted probability to the training data. Furthermore, we conducted experiments to investigate the influence of the number of initially labeled samples. In our experiments, the model reaches 92% accuracy, a difference of around 5% compared to fully supervised learning. The proposed method is thus expected to improve classifier performance with relatively few labels and without additional labeling effort.
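
A hedged sketch of the pseudo-labeling loop described above is shown below, using a torchvision ResNet50 backbone; the dataset handling, class count, per-class selection size, and training details are placeholders rather than the authors' code.

```python
# Hedged sketch of confidence-based pseudo-labeling with a pretrained backbone.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

NUM_CLASSES, PER_CLASS = 2, 50        # e.g. defective / normal; additions per round

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)      # ImageNet backbone
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

def pseudo_label_round(model, unlabeled_images):
    """Pick the most confident unlabeled samples per class as new training data."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_images), dim=1)      # (N, NUM_CLASSES)
    conf, pred = probs.max(dim=1)
    selected = []
    for c in range(NUM_CLASSES):
        idx = (pred == c).nonzero(as_tuple=True)[0]
        top = idx[conf[idx].argsort(descending=True)][:PER_CLASS]
        selected.extend((i.item(), c) for i in top)
    return selected   # (index into unlabeled set, pseudo-label) pairs

# Outline: train on the small labeled set, call pseudo_label_round on the
# unlabeled pool, move the selected samples into the labeled set, and repeat.
```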

Design Information Management System Core Development Using Industry Foundation Classes (IFC를 이용한 설계정보관리시스템 핵심부 구축)

  • Lee Keun-hyung;Chin Sang-yoon;Kim Jae-jun
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.2 s.2
    • /
    • pp.98-107
    • /
    • 2000
  • Increased use of computers in AEC (Architecture, Engineering and Construction) has expanded the amount of information gained from CAD (Computer Aided Design), PMIS (Project Management Information System), structural analysis programs, and scheduling programs, while also making it more complex. The productivity of the AEC industry depends largely on good management and efficient reuse of this information. Accordingly, this trend has spurred much research and development on ITC (Information Technology in Construction) and CIC (Computer Integrated Construction). As part of this effort, many researchers have studied IFC (Industry Foundation Classes) since its development by the IAI (International Alliance for Interoperability) for product-based information sharing. However, despite some valuable outputs, this research is still at a preliminary stage and deals mainly with conceptual ideas and trial implementations. Research that unveils the process of IFC application development, the core of the design information management system, and its application plan still needs to be done. Thus, the purpose of this paper is to determine the technologies needed for a design information management system using IFC and to present the key roles, the process of IFC application development, and its application plan. The system integrates architectural and structural information into the product model and groups product items at various levels and from various aspects. To build the process model, we defined two activities at the initial level, 'Product Modeling' and 'Application Development'. We then decomposed the Application Development activity into five activities: 'IFC Schema Compile', 'Class Compile', 'Make Project Database Schema', 'Development of Product Frameworker', and 'Make Project Database'. These activities are carried out with a C++ compiler, CAD, ObjectStore, ST-Developer, and ST-ObjectStore. Finally, we proposed an application process with six stages: '3D Modeling', 'Creation of Product Information', 'Creation and Update of Database', 'Reformation of Model's Structure with Multiple Hierarchies', 'Integration of Drawings and Specifications', and 'Creation of Quantity Information'. The IFCs, including other classes to be newly developed and updated for construction, civil/structural, and facility management, will be used by experts through internet distribution technologies such as CORBA and DCOM. (A minimal sketch of loading and grouping IFC product data follows this entry.)

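The paper's toolchain (C++ compiler, CAD, ObjectStore, ST-Developer, ST-ObjectStore) dates from 2000; as a rough modern analogue only, the sketch below uses the open-source ifcopenshell library, which is not part of the paper, to load an IFC model and group product items by entity type, loosely echoing the idea of organizing the product model at multiple levels.

```python
# Hedged sketch using ifcopenshell (not the paper's toolchain) to load an IFC
# model and group product items by their IFC entity type.
import ifcopenshell

model = ifcopenshell.open("building.ifc")     # hypothetical IFC file path

groups = {}
for product in model.by_type("IfcProduct"):   # walls, slabs, columns, ...
    groups.setdefault(product.is_a(), []).append(product.GlobalId)

for entity_type, ids in sorted(groups.items()):
    print(f"{entity_type}: {len(ids)} instances")
```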

Deep Learning-based Fracture Mode Determination in Composite Laminates (복합 적층판의 딥러닝 기반 파괴 모드 결정)

  • Muhammad Muzammil Azad;Atta Ur Rehman Shah;M.N. Prabhakar;Heung Soo Kim
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.4
    • /
    • pp.225-232
    • /
    • 2024
  • This study focuses on the determination of the fracture mode in composite laminates using deep learning. With the increasing use of laminated composites in numerous engineering applications, ensuring their integrity and performance is of paramount importance. However, owing to the complex nature of these materials, the identification of fracture modes is often a tedious and time-consuming task that requires critical domain knowledge. To alleviate these issues, this study aims to utilize modern artificial intelligence technology to automate the fractographic analysis of laminated composites. To accomplish this goal, scanning electron microscopy (SEM) images of fractured tensile test specimens are obtained from laminated composites to showcase various fracture modes. These SEM images are then categorized by fracture mode, including fiber breakage, fiber pull-out, mixed-mode fracture, matrix brittle fracture, and matrix ductile fracture. Next, the collective data for all classes are divided into train, test, and validation datasets. Two state-of-the-art deep learning-based pre-trained models, namely DenseNet and GoogleNet, are trained to learn the discriminative features of each fracture mode. The DenseNet model shows training and testing accuracies of 94.01% and 75.49%, respectively, whereas those of the GoogleNet model are 84.55% and 54.48%, respectively. The trained deep learning models are then validated on unseen validation datasets. This validation demonstrates that the DenseNet model, owing to its deeper architecture, can extract high-quality features, resulting in 84.44% validation accuracy, 36.84% higher than that of the GoogleNet model. Hence, these results affirm that the DenseNet model is effective in performing fractographic analyses of laminated composites by predicting fracture modes with high precision.
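
As a hedged sketch of the transfer-learning setup described above (the paper's exact DenseNet variant, preprocessing, and hyper-parameters are not stated here, so DenseNet-121 and every value below are assumptions):

```python
# Hedged sketch of transfer learning for the five fracture-mode classes.
import torch
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

FRACTURE_MODES = ["fiber_breakage", "fiber_pullout", "mixed_mode",
                  "matrix_brittle", "matrix_ductile"]

model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, len(FRACTURE_MODES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of SEM images and fracture-mode labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```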

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.253-266
    • /
    • 2018
  • This paper proposes a sequence tagging methodology to improve the performance of NER (Named Entity Recognition) in a QA system. In order to retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language) so that the computer can interpret it; this involves identifying the class or data names contained in the database. The existing approach of looking up the query's words in the database to recognize entities cannot disambiguate homophones or multi-word phrases because it does not consider the context of the user's query. When there are multiple matches, all of them are returned, so the query admits many interpretations and the computational cost grows. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address the neural model's weakness in identifying untrained words by using an ontology knowledge based feature. Experiments were conducted on an ontology knowledge base in the music domain, and the performance was evaluated. To evaluate the proposed Bidirectional LSTM-CRF accurately, we experimented with replacing words included in the training queries with untrained words, testing whether words that exist in the database but were not seen during training could still be identified correctly. As a result, the model could recognize entities in context and identify untrained words without re-training, and the overall entity recognition performance was confirmed to improve.
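
A minimal sketch of a Bidirectional LSTM-CRF tagger augmented with an ontology-derived feature is given below; the third-party pytorch-crf package, the feature encoding, and all dimensions are assumptions rather than the authors' implementation.

```python
# Hedged sketch: BiLSTM-CRF sequence tagger with an ontology feature
# concatenated to the word embedding (vocabulary, tag set, and feature
# construction are placeholders).
import torch
import torch.nn as nn
from torchcrf import CRF   # third-party "pytorch-crf" package, not named in the paper

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, num_onto_types,
                 emb_dim=100, onto_dim=20, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # ontology knowledge based feature: e.g. the entity type a token maps
        # to in the ontology (or "none"), embedded and concatenated
        self.onto_emb = nn.Embedding(num_onto_types, onto_dim)
        self.lstm = nn.LSTM(emb_dim + onto_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def emissions(self, words, onto_types):
        x = torch.cat([self.word_emb(words), self.onto_emb(onto_types)], dim=-1)
        out, _ = self.lstm(x)
        return self.fc(out)

    def loss(self, words, onto_types, tags, mask):
        return -self.crf(self.emissions(words, onto_types), tags, mask=mask)

    def predict(self, words, onto_types, mask):
        return self.crf.decode(self.emissions(words, onto_types), mask=mask)
```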