• Title/Summary/Keyword: Software Graph

Search Results: 310 (processing time: 0.03 seconds)

Automatic Clustering on Trained Self-organizing Feature Maps via Graph Cuts (그래프 컷을 이용한 학습된 자기 조직화 맵의 자동 군집화)

  • Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.35 no.9 / pp.572-587 / 2008
  • The Self-Organizing Feature Map (SOFM), one of the unsupervised neural networks, is a very powerful tool for data clustering and visualization of high-dimensional data sets. Although the SOFM has been applied to many engineering problems, similar weights on the trained SOFM must still be clustered into one class as a post-processing step, which in many cases is performed manually. Traditional clustering algorithms such as k-means, however, do not yield satisfactory results on the trained SOFM, especially when clusters have arbitrary shapes. This paper proposes automatic clustering on the trained SOFM that can deal with arbitrary cluster shapes and is globally optimized by graph cuts. When graph cuts are used, the graph must have two additional vertices, called terminals, and the weights between the terminals and the vertices of the graph are generally set from data obtained manually from users. The proposed method sets these weights automatically by mode-seeking on a distance matrix. Experimental results on texture segmentation demonstrate the effectiveness of the proposed method: it improves precision rates over previous traditional clustering algorithms because it handles arbitrary cluster shapes through graph-theoretic clustering.
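The terminal-seeding idea described above, picking cluster modes from a distance matrix of trained SOFM weight vectors, can be sketched roughly as follows. The neighbourhood radius and the greedy mode selection are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def find_modes(weights, radius, n_modes=2):
    """Mode-seeking sketch on the pairwise distance matrix of trained
    SOFM weight vectors: nodes with the most neighbours within `radius`
    are taken as modes (candidate graph-cut terminal seeds)."""
    d = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    density = (d < radius).sum(axis=1)        # neighbour counts per node
    modes = []
    for i in np.argsort(-density):            # densest first,
        if all(d[i, m] >= radius for m in modes):  # keep modes far apart
            modes.append(i)
        if len(modes) == n_modes:
            break
    return modes

# two well-separated blobs standing in for trained SOFM weights
w = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
modes = find_modes(w, radius=0.5)
```

With one mode found in each blob, the two seeds could then serve as the source and sink terminals of the cut.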

Input/Output Relationship Based Adaptive Combinatorial Testing for a Software Component-based Robot System (소프트웨어 컴포넌트 기반 로봇 시스템을 위한 입출력 연관관계 기반 적응형 조합 테스팅 기법)

  • Kang, Jeong Seok;Park, Hong Seong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.699-708 / 2015
  • In testing a software component-based robot system, generating test cases is a time-consuming and difficult task that requires combining test data. This paper proposes an adaptive combinatorial testing method that is based on the input/output relationships among components and automatically generates test cases for the system. The proposed algorithm first builds an input/output relationship graph to analyze the input/output relationships of the system, and then generates a reduced set of test cases according to the analyzed type of relationship. To validate the proposed algorithm, comparisons are given in terms of time complexity and the number of test cases.
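The reduction idea can be sketched as follows: parameters connected through an input/output relationship are combined exhaustively, while unrelated parameters are merely cycled through existing cases. The parameter names and the specific reduction rule are illustrative assumptions, not the paper's algorithm:

```python
from itertools import product

def reduced_tests(params, related):
    """Combine related parameters exhaustively; cycle independent ones
    through the resulting cases, keeping the suite far smaller than
    the full Cartesian product."""
    rel = [p for p in params if p in related]
    ind = [p for p in params if p not in related]
    cases = [dict(zip(rel, combo))
             for combo in product(*(params[p] for p in rel))]
    for i, case in enumerate(cases):
        for p in ind:
            vals = params[p]
            case[p] = vals[i % len(vals)]  # cycle independent values
    return cases

params = {"speed": [1, 2], "mode": ["a", "b"], "log": [True, False]}
cases = reduced_tests(params, related={"speed", "mode"})
# 4 cases instead of the 8 required by the full Cartesian product
```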

The translation database design for being written in the Natural Language using the Requirement Diagram (Requirement Diagram 를 자연어로 작성하기 위한 Translation Database Design)

  • Lee, Hye-Ryun;Choi, Kyung-Hee;Jung, Ki-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2007.11a / pp.325-327 / 2007
  • Software testing is one of the most important parts of software development, accounting for as much as one third of the process. Testing can be performed properly, and accurate results obtained, only when the requirements are written correctly. Although requirement writing is therefore considered critical, requirements written manually by engineers have many problems. This paper introduces a way of expressing requirements as a graph and proposes a database design that allows the graphed representation to be rendered back into natural language. The resulting design patterns are then used to describe requirements in natural language. This unifies the way requirements are written, facilitates communication among engineers, and provides an important foundation for software testing.

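The pattern-database idea, mapping requirement-diagram elements back to uniform natural-language sentences, can be sketched as below. The SysML-style relationship names and templates are hypothetical; the paper does not specify its pattern table:

```python
# Hypothetical pattern table: each requirement-diagram edge type maps
# to a natural-language template, so a diagram can be rendered back
# into uniformly worded requirement sentences.
PATTERNS = {
    "derive":  "{src} is derived from {dst}.",
    "satisfy": "{src} shall satisfy {dst}.",
    "verify":  "{src} is verified by {dst}.",
}

def to_natural_language(edges):
    """Render (source, relation, target) diagram edges as sentences."""
    return [PATTERNS[kind].format(src=s, dst=d) for s, kind, d in edges]

sentences = to_natural_language([
    ("REQ-2", "derive", "REQ-1"),
    ("REQ-2", "verify", "TC-7"),
])
```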

A Prediction Method using Markov chain for Step Size Control in FMI based Co-simulation (FMI기반 co-simulation에서 step size control을 위한 Markov chain을 사용한 예측 방법)

  • Hong, Seokjoon;Lim, Ducsun;Kim, Wontae;Joe, Inwhee
    • Journal of IKEEE / v.23 no.4 / pp.1430-1439 / 2019
  • In Functional Mock-up Interface (FMI)-based co-simulation, a bisection algorithm can be used to find the zero-crossing point as a way to improve the accuracy of the simulation results. In this paper, the proposed master algorithm (MA) analyzes the graph of repeated intervals and predicts the next interval by applying a Markov chain to the step size. We propose an algorithm that minimizes rollback by storing the step sizes, which change according to the graph type, in an array and applying them to the next prediction interval whenever a rollback occurs during simulation. Simulation results show that the proposed algorithm reduces simulation time by more than 20% compared to the existing algorithm.
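The Markov-chain prediction step can be sketched as follows, assuming step sizes are quantised into a few discrete states; the state encoding is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

def predict_next(history, n_states):
    """Fit a first-order Markov chain over quantised step-size states
    and return the most probable next state given the last observed
    state; fall back to the last state if it was never left before."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(history, history[1:]):
        T[a, b] += 1                      # count observed transitions
    row = T[history[-1]]
    return int(row.argmax()) if row.sum() else history[-1]

# states 0..2 encode small / medium / large step sizes
hist = [0, 1, 2, 1, 2, 1, 2, 1]
next_state = predict_next(hist, 3)
```

The master algorithm could then choose the next macro step size from the predicted state instead of always bisecting, saving rollbacks when the prediction holds.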

Dependent Quantization for Scalable Video Coding

  • Pranantha, Danu;Kim, Mun-Churl;Hahm, Sang-Jin;Lee, Keun-Sik;Park, Keun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.127-132 / 2006
  • Quantization in video coding plays an important role in controlling the bit-rate of compressed video bit-streams. It has been used as an important control means to adjust the amount of bit-streams to the allowed bandwidth of delivery networks and storage. Due to the dependent nature of video coding, dependent quantization has been proposed and applied to MPEG-2 video coding to better maintain the quality of reconstructed frames under given target bit-rate constraints. Scalable Video Coding (SVC), currently being standardized, exhibits a highly dependent coding nature not only between frames but also between lower and higher scalability layers, where dependent quantization can be effectively applied. In this paper, we therefore propose a dependent quantization scheme for SVC and compare its performance, in visual quality and bit-rate, with the current JSVM reference software for SVC. The proposed technique exploits the frame dependencies within each GOP of the SVC scalability layers to formulate dependent quantization. We utilize Lagrange optimization, which is widely accepted in R-D (rate-distortion) based optimization, and construct a trellis graph to find the optimal-cost path in the trellis by minimizing the R-D cost. The optimal-cost path in the trellis graph is the optimal set of quantization parameters (QPs) for the frames within a GOP. To reduce complexity, we employ a pruning procedure that uses the monotonicity property in the trellis optimization, and we cut the frame dependency at one GOP to decrease the dependency depth. The optimal Lagrange multiplier used for SVC is the same as in H.264/AVC, which is also used in the mode prediction of the JSVM reference software. The experimental results show that dependent quantization outperforms the current JSVM reference software encoder, which simply takes a linearly increasing QP across temporal scalability layers. The gain of dependent quantization reaches up to a 1.25 dB increase in PSNR and 20% bit savings for the enhancement layer of SVC.

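The trellis search over per-frame QPs can be sketched as a Viterbi-style dynamic program. The toy cost function below is a stand-in for real rate-distortion measurements, and the QP candidates are illustrative:

```python
# Viterbi-style trellis sketch: choose one QP per frame in a GOP so the
# total Lagrangian cost J = D + lambda * R is minimised, where a frame's
# cost depends on the previous frame's QP (the coding dependency).
def best_qp_path(n_frames, qps, cost):
    # best[q] = (minimum cost of any path ending at QP q, that path)
    best = {q: (cost(None, q, 0), [q]) for q in qps}
    for f in range(1, n_frames):
        best = {
            q: min(
                ((c + cost(p, q, f), path + [q])
                 for p, (c, path) in best.items()),
                key=lambda t: t[0],
            )
            for q in qps
        }
    return min(best.values(), key=lambda t: t[0])

# toy R-D stand-in: prefer QP 30, penalise large QP jumps between frames
toy = lambda p, q, f: abs(q - 30) + (0 if p is None else abs(q - p))
total, path = best_qp_path(4, [28, 30, 32], toy)
```

The pruning mentioned in the abstract would discard dominated trellis states early instead of carrying every (cost, path) pair forward.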

A Extraction of Multiple Object Candidate Groups for Selecting Optimal Objects (최적합 객체 선정을 위한 다중 객체군 추출)

  • Park, Seong-Ok;No, Gyeong-Ju;Lee, Mun-Geun
    • Journal of KIISE:Software and Applications / v.26 no.12 / pp.1468-1481 / 1999
  • This paper presents an object extraction process, which is the first phase of a methodology to transform procedural software into object-oriented software. The process consists of five steps: preliminary, basic clustering & inclusion, refinement, decision, and integration. In the preliminary step, an FTV (Function, Type, Variable) graph for object extraction is created, divided, and clustered. In the clustering & inclusion step, multiple graphs for static object candidate groups are generated. In the refinement step, each graph is refined to determine dynamic object candidate groups. In the decision step, the best candidate group is determined based on the highest similarity to the class group modeled from domain engineering. In the final step, the best group is integrated with the domain model. The paper presents a new clustering method based on static clustering steps, possible object candidate grouping cases based on the abstraction concept, a new refinement algorithm, a similarity algorithm for multiple n objects and m classes, etc. This process provides reengineering experts with a comprehensive and integrated environment in which to select the best or optimal object candidates.
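The decision step's similarity measurement between candidate groups and a domain-modeled class can be sketched with a simple set-overlap score. Jaccard similarity and the member names are illustrative assumptions; the paper's actual similarity algorithm is not reproduced here:

```python
def jaccard(a, b):
    """Overlap between a candidate object's member set (functions,
    types, variables) and a domain-model class's member set."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def best_candidate(candidates, domain_class):
    """Rank multiple object candidate groups against one modelled
    class so a domain expert can pick the best-fitting candidate."""
    return max(candidates,
               key=lambda name: jaccard(candidates[name], domain_class))

candidates = {
    "obj_A": {"open", "close", "fd"},
    "obj_B": {"open", "read", "write", "fd", "mode"},
}
chosen = best_candidate(candidates, {"open", "read", "write", "fd"})
```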

Automatic Extraction of Metadata Information for Library Collections

  • Yang, Gi-Chul;Park, Jeong-Ran
    • International Journal of Advanced Culture Technology / v.6 no.2 / pp.117-122 / 2018
  • As evidenced by rapidly growing digital repositories and web resources, automatic metadata generation is becoming ever more critical, especially considering the costly and complex operation of manual metadata creation. Automatic metadata generation also lends itself to consistent metadata application. In this sense, metadata quality and interoperability can be enhanced by a mechanism for automatic metadata generation. In this article, an automatic metadata extraction mechanism called ExMETA is introduced to alleviate issues of inconsistent metadata application and semantic interoperability across ever-growing digital collections. The conceptual graph, one of the formal languages for representing the meanings of natural-language sentences, is utilized in ExMETA as a mediation mechanism that enhances metadata quality by resolving the semantic ambiguities caused by isolating a metadata element and its definition from the relevant context. Automatic metadata generation using ExMETA can thus be a good way of enhancing metadata quality and semantic interoperability.
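The mediation idea can be illustrated with a conceptual graph reduced to concept-relation-concept triples; this simplified representation and the element-resolution rule are hypothetical, not ExMETA's actual mechanism:

```python
# Sketch: a conceptual graph kept as (concept, relation, concept)
# triples, so a metadata element such as "creator" stays attached to
# the context that disambiguates it rather than sitting in isolation.
graph = [
    ("Person: J. Park", "agent_of", "Act: create"),
    ("Act: create", "object", "Work: report.pdf"),
]

def metadata_value(graph, element):
    """Resolve a Dublin-Core-like element against the graph instead of
    reading an isolated, context-free field."""
    if element == "creator":
        for src, rel, dst in graph:
            if rel == "agent_of" and dst == "Act: create":
                return src.split(": ", 1)[1]
    return None

creator = metadata_value(graph, "creator")
```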

A Study on the Development of Teaching Material using GSP in Mathematics Education -Focused on the graph of function of Middle School- (GSP를 활용한 수학과 교육자료 개발 연구 -중학교 함수의 그래프를 중심으로-)

  • 신영섭
    • Journal of the Korean School Mathematics Society / v.2 no.1 / pp.93-104 / 1999
  • The subject of this study was the graphs of direct and inverse proportion, linear functions, and quadratic functions in the first, second, and third grades of the current middle school mathematics curriculum. GSP materials were developed to simplify the principles, traits, and characteristics of these graphs and make them easier to understand. The overall aim of the materials is to improve the effectiveness of teaching and learning through enhanced student practice. Additionally, use of GSP will aid the development of more effective materials. The expected effects of the GSP materials are as follows. 1. The step-by-step approach of the GSP materials, through computer interaction, will enhance students' motivation and interest in mathematics. 2. By presenting the subject matter simply and in a variety of ways, difficult concepts can be understood without complex mathematical calculation. 3. The GSP program is different from CAI and other software programs; it should be used only after learning how to input and output data.


Handling Semantic Ambiguity for Metadata Generation

  • Yang, Gi-Chul;Park, Jeong-Ran
    • International Journal of Internet, Broadcasting and Communication / v.10 no.2 / pp.1-6 / 2018
  • The following research questions are examined in this paper: What hinders quality metadata generation and metadata interoperability? What kind of semantic representation technique can be utilized to enhance metadata quality and semantic interoperability? This paper suggests a way of handling semantic ambiguity in metadata generation. The conceptual graph is utilized to resolve the semantic ambiguities caused by isolating a metadata element and its definition from the relevant context. The mechanism introduced in this paper has the potential to alleviate issues of inconsistent metadata application and interoperability across digital collections.

REVIEW OF VARIOUS DYNAMIC MODELING METHODS AND DEVELOPMENT OF AN INTUITIVE MODELING METHOD FOR DYNAMIC SYSTEMS

  • Shin, Seung-Ki;Seong, Poong-Hyun
    • Nuclear Engineering and Technology / v.40 no.5 / pp.375-386 / 2008
  • Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between the components of a system. Various techniques, such as dynamic fault trees, dynamic Bayesian networks, and dynamic reliability block diagrams, have been proposed for modeling dynamic systems by improving the conventional modeling methods. In this paper, we briefly review these methods and introduce dynamic nodes into the existing reliability graph with general gates (RGGG) as an intuitive way to model dynamic systems. For quantitative analysis, we use a discrete-time method to convert an RGGG into an equivalent Bayesian network and develop a software tool to generate the probability tables.
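The quantitative side of such an analysis can be illustrated with a brute-force counterpart of the Bayesian-network conversion: enumerate every component-state configuration of a small reliability graph and sum the probability of the working ones. The structure function and probabilities below are a toy example, not the paper's RGGG tool:

```python
from itertools import product

def system_reliability(p, works):
    """Exact reliability by enumerating component states and summing
    the probability of every configuration in which the structure
    function `works` reports the system as up."""
    names = list(p)
    total = 0.0
    for states in product([0, 1], repeat=len(names)):
        s = dict(zip(names, states))
        prob = 1.0
        for n in names:
            prob *= p[n] if s[n] else 1 - p[n]  # independent components
        if works(s):
            total += prob
    return total

# series system A-B followed by a redundant pair: A AND B AND (C OR D)
p = {"A": 0.9, "B": 0.9, "C": 0.8, "D": 0.8}
works = lambda s: s["A"] and s["B"] and (s["C"] or s["D"])
r = system_reliability(p, works)
```

Enumeration is exponential in the number of components, which is why the paper's discrete-time conversion to a Bayesian network matters for realistic systems.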