• Title/Abstract/Keywords: paper templates

Search Results: 223

Design and Implementation of a Cloud-based Linux Software Practice Platform (클라우드 기반 리눅스 SW 실습 플랫폼의 설계 및 구현 )

  • Hyokyung Bahn;Kyungwoon Cho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.2 / pp.67-71 / 2023
  • Recently, software labs are increasingly managed by assigning each student a virtual PC in the cloud instead of a physical PC. In this paper, we design and implement a Linux-based software practice platform that allows students to efficiently build their environments in the cloud. In our platform, instructors can create and control virtual machine templates for all students at once, and students practice on their own machines as administrators. Instructors can also troubleshoot each machine and restore its state. Meanwhile, the biggest obstacle to implementing this approach is the difficulty of instantly predicting the cost of cloud services. To cope with this, we propose a model that estimates the cost of the cloud resources used. Using daemons in each user's virtual machine, we estimate resource usage and cost on the fly. Despite its very low overhead, the model's predictions are very close to the actual resource usage measured by cloud service providers. To further validate the model, we used the proposed platform in a Linux practice lecture for a semester and confirmed that it is very accurate.
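
The cost-estimation idea lends itself to a short illustration. The sketch below is not the paper's daemon; it is a minimal Python example, assuming the `psutil` package and made-up unit prices, of how a per-VM agent could sample resource usage and accumulate an estimated cost.

```python
# Minimal sketch (not the paper's implementation): a per-VM daemon that samples
# resource usage and converts it into an estimated cost.  The unit prices below
# are placeholders, not real cloud pricing.
import psutil

PRICE_PER_CPU_SECOND = 0.000012      # hypothetical unit prices
PRICE_PER_GB_RAM_SECOND = 0.000004

def estimate_cost(duration_s: int, interval_s: int = 5) -> float:
    """Sample CPU and memory every `interval_s` seconds and accumulate cost."""
    total = 0.0
    for _ in range(duration_s // interval_s):
        # cpu_percent(interval=...) blocks for the sampling interval
        cpu_fraction = psutil.cpu_percent(interval=interval_s) / 100.0
        ram_gb = psutil.virtual_memory().used / 2**30
        total += cpu_fraction * interval_s * PRICE_PER_CPU_SECOND
        total += ram_gb * interval_s * PRICE_PER_GB_RAM_SECOND
    return total

if __name__ == "__main__":
    print(f"estimated cost for 1 minute: ${estimate_cost(60):.6f}")
```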

Underwater Transient Signal Classification Using Eigen Decomposition Based on Wigner-Ville Distribution Function (위그너-빌 분포 함수 기반의 고유치 분해를 이용한 수중 천이 신호 식별)

  • Bae, Keun-Sung;Hwang, Chan-Sik;Lee, Hyeong-Uk;Lim, Tae-Gyun
    • The Journal of the Acoustical Society of Korea / v.26 no.3 / pp.123-128 / 2007
  • This paper presents new classification algorithms for underwater transient signals. In general, ambient noise has small spectral deviation and energy variation, while a transient signal shows large fluctuations. Hence, to detect the transient signal, we use the spectral deviation and power variation. To classify the detected transient signal, the feature parameters are obtained by Wigner-Ville distribution based eigenvalue decomposition. The correlation is then calculated between the feature vector of the detected signal and all feature vectors of the reference templates on a frame-by-frame basis, and the detected transient signal is classified by the frame mapping rate against the class database.
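
As a rough illustration of the detection stage only (not the paper's implementation), the Python sketch below flags frames whose short-time power and spectral deviation both exceed thresholds derived from the leading frames; the frame sizes, thresholds, and noise-reference assumption are invented for illustration.

```python
# Illustrative sketch: frames whose short-time power and spectral deviation
# exceed noise-derived thresholds are flagged as transient candidates.
import numpy as np

def detect_transients(x, frame_len=512, hop=256, k=3.0):
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
    power = np.array([np.mean(f**2) for f in frames])
    spec_dev = np.array([np.std(np.abs(np.fft.rfft(f))) for f in frames])
    # use the first frames as a noise reference (assumption)
    p_thr = power[:10].mean() + k * power[:10].std()
    s_thr = spec_dev[:10].mean() + k * spec_dev[:10].std()
    return np.where((power > p_thr) & (spec_dev > s_thr))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = 0.01 * rng.standard_normal(16000)
    signal[8000:8400] += np.hanning(400)          # injected synthetic transient
    print("transient frames:", detect_transients(signal))
```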

An Elicitation Approach of Measurement Indicator based on Product Line Context (Product Line의 컨텍스트 기반 측정 지표 도출 방법)

  • Hwang Sun-Myung;Kim Jin-Sam
    • The KIPS Transactions: Part D / v.13D no.4 s.107 / pp.583-592 / 2006
  • Software development based on product lines has proved to be a promising technology that can drastically reduce cycle time and guarantee quality by strategically reusing the quality core assets that belong to an organization. However, measuring within a product line differs from measuring within a single software project in that we have to consider the aspects of both the core assets and the projects that utilize those assets. Moreover, the performance aspects of the overall product line need to be considered within a product line context. Therefore, a systematic approach to measuring the performance of product lines is essential to obtain consistent, repeatable and effective measures within a product line. This paper presents a context-based measurement elicitation approach for product lines that reflects the performance characteristics of product lines and the diversity of their application. The approach includes both detailed procedures and the work products resulting from implementation of the procedures, along with their templates. To show the utility of the approach, this paper presents the elicited measurements, especially for technical management practices among the product line practices, and illustrates a real application case that adopted this approach. The systematic approach enables management attributes, i.e., measurements, to be identified when we construct product lines or develop software products based on them. The measurements are effective in that they are derived in consideration of the application context and the interests of stakeholders.
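
Purely as an illustration of what an elicited measurement indicator record might look like, the hypothetical Python sketch below captures a few context fields (scope, stakeholder, collection procedure); the field names are assumptions, not the paper's templates or work products.

```python
# Hypothetical data structure for an elicited measurement indicator; the field
# names are illustrative assumptions, not the work products defined in the paper.
from dataclasses import dataclass

@dataclass
class MeasurementIndicator:
    name: str                  # e.g. "core-asset reuse ratio"
    scope: str                 # "core asset" or "product project"
    stakeholder: str           # who is interested in the measure
    collection_procedure: str  # how and when the data is collected
    unit: str                  # unit of measurement

if __name__ == "__main__":
    indicator = MeasurementIndicator(
        name="core-asset reuse ratio",
        scope="product project",
        stakeholder="product line manager",
        collection_procedure="count reused vs. newly written components per release",
        unit="percent",
    )
    print(indicator)
```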

A Methodology for Translation of Operating System Calls in Legacy Real-time Software to Ada (Legacy 실시간 소프트웨어의 운영체제 호출을 Ada로 번역하기 위한 방법론)

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2874-2890 / 1997
  • This paper describes a methodology for translating concurrent software expressed as operating system (OS) calls into Ada. Concurrency is expressed in some legacy software by OS calls that perform concurrent process/task control. Examples considered in this paper are calls from C programs to Unix and calls from CMS-2 programs to the Executive Service Routines of ATES or SDEX-20. Other software re/reverse engineering research has focused on translating the OS calls in legacy software into calls to another OS. In that approach, understanding the software requires knowledge of the underlying OS, which is usually very complicated and informally documented. The research in this paper has focused on translating the OS calls in legacy software into equivalent protocols using Ada facilities. In translation to Ada, these calls are represented by equivalent Ada code that follows the scheme of a message-based, kernel-oriented architecture. To facilitate translation, the methodology utilizes templates placed in a library for data structures, tasks, procedures, and messages. This is a new approach to modeling an OS in Ada in software re/reverse engineering. There is no need for knowledge of the underlying OS to understand the software in this approach, since the dependency on the OS in the legacy software is removed. The result is portable and interoperable across Ada run-time environments, and the approach can handle the OS calls in different legacy software systems.
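
To make the template idea concrete, here is a minimal, hypothetical sketch (in Python rather than the paper's Ada setting) of looking up a legacy OS call in a template library and expanding it into target protocol code; the call names, template bodies, and parameters are all assumptions.

```python
# Hypothetical sketch of template-based translation: legacy OS calls are looked
# up in a library of code templates and expanded into target-language protocol
# code (the paper targets Ada; plain strings stand in for its templates).
from string import Template

TEMPLATES = {
    # Unix message-queue send mapped onto a message-based kernel protocol
    "msgsnd": Template("Kernel.Send(Queue => $queue, Msg => $msg);"),
    # Unix fork mapped onto declaration of a task object
    "fork":   Template("$task_name : Worker_Task;"),
}

def translate(call_name: str, **params) -> str:
    """Expand the template registered for `call_name` with actual parameters."""
    return TEMPLATES[call_name].substitute(**params)

if __name__ == "__main__":
    print(translate("msgsnd", queue="Cmd_Queue", msg="Start_Msg"))
    print(translate("fork", task_name="Sensor_Reader"))
```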


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems such as learning and problem solving related to human intelligence. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. Knowledge-based systems are a sub-domain of artificial intelligence, and they aim to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into RDF triple structures. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
In order to train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
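
As an illustration of the final step (converting classified, BIO-tagged sentences into triples), the Python sketch below walks a tagged token sequence and emits (subject, relation, value) triples; the tag names and the example sentence are invented, and the actual taggers in the paper are CRF or Bi-LSTM-CRF models.

```python
# Illustrative sketch of turning BIO-tagged tokens of a classified sentence
# into (subject, relation, value) triples.  Tags and example data are invented.
def bio_to_triples(subject, tokens, tags):
    triples, value, relation = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):                     # a new value span begins
            if value:
                triples.append((subject, relation, " ".join(value)))
            relation, value = tag[2:], [tok]
        elif tag.startswith("I-") and value:         # continue the open span
            value.append(tok)
        else:                                        # "O" tag closes any span
            if value:
                triples.append((subject, relation, " ".join(value)))
            value, relation = [], None
    if value:
        triples.append((subject, relation, " ".join(value)))
    return triples

if __name__ == "__main__":
    tokens = ["Einstein", "was", "born", "in", "Ulm", ",", "Germany", "in", "1879"]
    tags = ["O", "O", "O", "O", "B-birthPlace", "I-birthPlace", "I-birthPlace",
            "O", "B-birthYear"]
    print(bio_to_triples("Albert_Einstein", tokens, tags))
```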

CompGenX: Component Code Generation System based on GenVoca and XML (CompGenX: GenVoca와 XML 기반의 컴포넌트 코드 생성 시스템)

  • Choi Seung-Hoon
    • Journal of Internet Computing and Services / v.4 no.3 / pp.57-67 / 2003
  • Software product lines aim to attain the rapid development of quality applications by concretizing the general components populated in software assets and assembling them according to predefined architectures. To support the construction of software product lines, this paper proposes a component code generation technique based on the GenVoca architecture and XML/XSLT technologies. In addition, CompGenX (Component Generator using XML), a component code generation system, is proposed on the basis of this technique. By providing reconfigurability of components at code generation time, CompGenX allows reusers to create component source code appropriate to their purpose. In this system, the component development process is divided into two tasks: the component family construction task and the component reuse task. For component family construction, CompGenX provides a feature modeling tool for domain analysis and a domain architecture definition tool. It also provides tools for building the component configuration knowledge specification and the code templates. For the component reuse task, it offers a component family search tool, a component customizing tool and a component code generator. The component code generation technique and system presented in this paper should be applicable as a basic technology for building component-based software product lines.
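
The generation idea can be illustrated with a small, hypothetical Python sketch: an XML configuration selects the enabled features of a component, and the generator expands a code template accordingly. The element names, attributes, and template bodies are assumptions, and CompGenX itself relies on XSLT rather than this hand-rolled substitution.

```python
# Hypothetical sketch of feature-driven component code generation from an XML
# configuration.  Element names and templates are invented for illustration.
import xml.etree.ElementTree as ET
from string import Template

CONFIG_XML = """
<component name="Cart">
  <feature name="persistence" enabled="true"/>
  <feature name="logging" enabled="false"/>
</component>
"""

CLASS_TEMPLATE = Template("class ${name}:\n${methods}")

FEATURE_METHODS = {
    "persistence": "    def save(self):\n        pass\n",
    "logging":     "    def log(self, msg):\n        pass\n",
}

def generate(config_xml: str) -> str:
    root = ET.fromstring(config_xml)
    methods = "".join(
        FEATURE_METHODS[f.get("name")]
        for f in root.findall("feature")
        if f.get("enabled") == "true"
    ) or "    pass\n"
    return CLASS_TEMPLATE.substitute(name=root.get("name"), methods=methods)

if __name__ == "__main__":
    print(generate(CONFIG_XML))
```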


Analyzing Korean Math Word Problem Data Classification Difficulty Level Using the KoEPT Model (KoEPT 기반 한국어 수학 문장제 문제 데이터 분류 난도 분석)

  • Rhim, Sangkyu;Ki, Kyung Seo;Kim, Bugeun;Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.315-324 / 2022
  • In this paper, we propose KoEPT, a Transformer-based generative model for automatically solving math word problems. A math word problem is written in human language and describes an everyday situation in mathematical form. Solving it requires an artificial intelligence model to understand the logic implied in the problem, so the task is studied around the world as a way to improve the language understanding ability of artificial intelligence. In the case of Korean, studies so far have mainly attempted to solve problems by classifying them into templates, but these techniques are difficult to apply to datasets with high classification difficulty. To address this, this paper uses the KoEPT model, which employs 'expression' tokens and pointer networks. To measure its performance, the classification difficulty scores of IL, CC, and ALG514, which are existing Korean math word problem datasets, were measured, and the performance of KoEPT was then evaluated using 5-fold cross-validation. On the Korean datasets used for evaluation, KoEPT obtained state-of-the-art (SOTA) performance: 99.1% on CC, which is comparable to the existing SOTA performance, and 89.3% and 80.5% on IL and ALG514, respectively. In addition, the evaluation showed that KoEPT achieves relatively better performance on datasets with high classification difficulty. Through an ablation study, we found that the use of the 'expression' tokens and pointer networks contributed to KoEPT being less affected by classification difficulty while still obtaining good performance.
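
The evaluation protocol mentioned above (5-fold cross-validation) can be sketched as follows; the scoring function is a placeholder standing in for training and testing the actual KoEPT model, and the numbers it returns are meaningless.

```python
# Sketch of the 5-fold cross-validation protocol.  The train/score function is
# a placeholder for the real solver; its dummy scores are not real results.
import numpy as np
from sklearn.model_selection import KFold

def train_and_score(train_idx, test_idx, data):
    """Placeholder: train a solver on `train_idx`, return accuracy on `test_idx`."""
    return np.random.uniform(0.8, 1.0)    # dummy score

def cross_validate(data, n_splits=5, seed=42):
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = [train_and_score(tr, te, data) for tr, te in kf.split(data)]
    return float(np.mean(scores)), float(np.std(scores))

if __name__ == "__main__":
    dummy_problems = np.arange(500)        # stand-in for a word-problem dataset
    mean, std = cross_validate(dummy_problems)
    print(f"5-fold accuracy: {mean:.3f} ± {std:.3f}")
```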

Controlled Production of Monodisperse Polycaprolactone Microparticles using Microfluidic Device (미세유체장치를 이용한 생분해성 Polycaprolactone의 단분산성 미세입자 생성제어)

  • Jeong, Heon-Ho
    • Clean Technology / v.25 no.4 / pp.283-288 / 2019
  • Monodisperse microparticles have been particularly enabling for various applications in the encapsulation and delivery of pharmaceutical agents. Microfluidic devices are attractive candidates for producing highly uniform droplets that serve as templates to form monodisperse microparticles. Microfluidic devices with micro-scale channels allow precise control of the balance between surface tension and viscous forces in two-phase flows, and one of their essential abilities is to generate highly monodisperse droplets. In this paper, a microfluidic approach for preparing monodisperse polycaprolactone (PCL) microparticles is presented. Microfluidic devices with a flow-focusing generator are manufactured by soft lithography using polydimethylsiloxane (PDMS). The crucial factors in droplet generation are the controllability of the size and the monodispersity of the microdroplets. For this, the volumetric flow rates of the dispersed oil phase and the continuous water phase are optimized to generate monodisperse droplets, and the optimal flow condition in the dripping regime that produces uniform droplets is found. Furthermore, the droplets containing PCL polymer are solidified by solvent evaporation after collection from the device to form the microparticles. The particle size can be controlled by tuning the flow rate and the size of the microchannel. The monodispersity of the PCL particles, measured by the coefficient of variation (CV), is below 5%.
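
The monodispersity metric quoted above is simple to compute; the sketch below evaluates the coefficient of variation of a set of hypothetical diameter measurements and applies the 5% criterion.

```python
# Worked example of the monodispersity metric: the coefficient of variation (CV)
# of measured particle diameters, with CV < 5% taken as monodisperse.
# The diameter values below are made-up illustrative data.
import numpy as np

def coefficient_of_variation(diameters_um):
    d = np.asarray(diameters_um, dtype=float)
    return d.std(ddof=1) / d.mean() * 100.0        # percent

if __name__ == "__main__":
    diameters = [49.2, 50.1, 50.8, 49.7, 50.4, 49.9]   # hypothetical values (µm)
    cv = coefficient_of_variation(diameters)
    print(f"CV = {cv:.2f}% -> {'monodisperse' if cv < 5 else 'polydisperse'}")
```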

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System (음성인식을 위한 혼돈시스템 특성기반의 종단탐색 기법)

  • Zang, Xian;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.5 / pp.8-14 / 2009
  • In the field of speech recognition, pinpointing the endpoints of a speech utterance even in the presence of background noise is of great importance. The noise present during recording introduces disturbances that complicate matters, since what we want are the stationary parameters corresponding to each speech section. One major cause of error in the automatic recognition of isolated words is inaccurate detection of the beginning and end boundaries of the test and reference templates, hence the need for an effective method of removing the unnecessary regions of a speech signal. The conventional methods for speech endpoint detection are based on two linear time-domain measurements: the short-time energy and the short-time zero-crossing rate. They perform well for clean speech, but their precision is not guaranteed when noise is present, since the high energy and zero-crossing rate of the noise are mistaken for part of the uttered speech. This paper proposes a novel approach that finds a clear threshold between noise and speech based on Lyapunov Exponents (LEs). The proposed method adopts nonlinear features to analyze the chaotic characteristics of the speech signal instead of depending on the unreliable energy measure. Its advantage over the conventional methods lies in the fact that it detects the endpoints from the nonlinearity of the speech signal, which we believe is an important characteristic neglected by the conventional methods. The proposed method extracts the features based only on the time-domain waveform of the speech signal, illustrating its low complexity. Simulations showed the effective performance of the proposed method in a noisy environment, with an average speaker-independent recognition rate of up to 92.85%.
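
For reference, the conventional baseline described above (short-time energy plus zero-crossing rate with fixed thresholds) can be sketched as below; the thresholds, frame sizes, and the simplified decision rule are assumptions, and the paper's own method replaces these features with Lyapunov exponents.

```python
# Sketch of the conventional endpoint-detection baseline: short-time energy and
# zero-crossing rate with fixed thresholds and a simplified decision rule.
import numpy as np

def endpoints(x, frame_len=256, hop=128, e_thr=0.01, z_thr=0.3):
    starts = range(0, len(x) - frame_len, hop)
    energy = np.array([np.mean(x[i:i + frame_len]**2) for i in starts])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(x[i:i + frame_len])))) / 2
                    for i in starts])
    # simplified rule: high energy and non-noise-like zero-crossing rate
    speech = (energy > e_thr) & (zcr < z_thr)
    idx = np.where(speech)[0]
    if idx.size == 0:
        return None
    return idx[0] * hop, idx[-1] * hop + frame_len

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 8000)                       # ~8 kHz toy signal
    sig = 0.005 * rng.standard_normal(8000)
    sig[2000:6000] += 0.5 * np.sin(2 * np.pi * 220 * t[2000:6000])  # "speech"
    print("detected endpoints (samples):", endpoints(sig))
```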

A design and implementation of VHDL-to-C mapping in the VHDL compiler back-end (VHDL 컴파일러 후반부의 VHDL-to-C 사상에 관한 설계 및 구현)

  • 공진흥;고형일
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.12 / pp.1-12 / 1998
  • In this paper, the design and implementation of VHDL-to-C mapping in the VHDL compiler back-end is described. The analyzed data in an intermediate format (IF), produced by the compiler front-end, is transformed into a C-code model of VHDL semantics by the VHDL-to-C mapper. The C-code model of VHDL semantics is based on a functional template comprising declaration, elaboration, initialization and execution parts. The mapping is carried out by utilizing C mapping templates of 129 types, classified by mapping unit and functional semantics, together with iterative algorithms combined with terminal information to produce C code. To generate the C program, the C code is output to the functional template either directly or by combining the higher-level mapping result with intermediate mapping code held in a data queue. Experiments show that the VHDL-to-C mapper could completely handle the analyzed VHDL programs from the compiler front-end, covering about 96% of the major VHDL syntactic programs in the Validation Suite. As for performance, the code size of the VHDL-to-C output is smaller than that of an interpreter and worse than that of a direct-code compiler, whose generated code increases more rapidly with the size of the VHDL design, and the VHDL-to-C timing overhead needs to be improved by an optimized implementation of the mapping mechanism.
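
As a rough illustration of template-driven mapping (not the actual 129 C mapping templates), the Python sketch below lets each mapped unit contribute C fragments to the four parts of a fixed functional template (declaration, elaboration, initialization, execution); the fragment contents and helper names are invented.

```python
# Hypothetical sketch: each mapped unit contributes C fragments to the parts of
# a fixed functional template.  The fragments and helper names are placeholders.
C_FUNCTIONAL_TEMPLATE = """\
/* declarations */
{declaration}
void elaborate(void) {{
{elaboration}}}
void initialize(void) {{
{initialization}}}
void execute(void) {{
{execution}}}
"""

def map_signal(name):
    """Return the C fragments a (hypothetical) signal unit contributes."""
    return {
        "declaration":    f"static int {name};\n",
        "elaboration":    f"    register_signal(\"{name}\", &{name});\n",
        "initialization": f"    {name} = 0;\n",
        "execution":      f"    update_signal(&{name});\n",
    }

def generate_c(units):
    parts = {k: "" for k in ("declaration", "elaboration",
                             "initialization", "execution")}
    for unit in units:
        for key, fragment in unit.items():
            parts[key] += fragment
    return C_FUNCTIONAL_TEMPLATE.format(**parts)

if __name__ == "__main__":
    print(generate_c([map_signal("clk"), map_signal("reset")]))
```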
