• Title/Summary/Keyword: Template modeling

A Shipyard Layout Design System by Simulation (시뮬레이션 기반 조선소 레이아웃 설계 시스템 개발)

  • Song, Young-Joo;Lee, Dong-Kun;Woo, Jong-Hun;Shin, Jong-Gye
    • Journal of the Society of Naval Architects of Korea / v.45 no.4 / pp.441-454 / 2008
  • Shipyard design and equipment layout, which are directly linked to the productivity of ship production, are important issues that serve as reference data for production planning in later series production of ships. Until now, shipyard design has largely relied on experienced shipbuilding engineers, resulting in sporadic, poorly organized processes, and the associated trial and error and economic losses have been pointed out as unavoidable problems. Based on the simulation-based shipyard layout design framework and methodology proposed in previous research, this paper extracts a checklist of the major elements and input/output data needed to refine the shipyard design process and carries out an initial architecture for software that integrates the relevant processes and design tools. In this course, user requirements and design data for each step are organized in the proposed layout design template form. In addition, simulation is performed on the basis of process planning and scheduling data for the ship product, the shipbuilding processes, and the work-stage facilities that constitute the shipyard, and the design items are verified and optimized against the layout and equipment list that yields the best process planning and scheduling results. All contents of this paper follow the simulation-based shipyard layout design methodology, and the initial architecture process is based on object-oriented development methodology and systems engineering methods.
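
The abstract does not include the simulation model itself, so the following is only a loose illustration of the comparison idea: a crude Python estimate of total processing hours for a candidate layout from work-stage hours and travel times, ignoring queuing and parallelism. All numbers and function names are hypothetical, not taken from the paper's templates.

```python
import random

def simulate_layout(stage_hours, travel_hours, n_blocks=100, seed=0):
    """Crude serial estimate of total hours to push n_blocks through the work stages."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_blocks):
        for work, move in zip(stage_hours, travel_hours):
            # work time varies a little around its nominal value; travel time is fixed
            total += random.gauss(work, 0.1 * work) + move
    return total

# Two candidate layouts differing only in transport distances (hypothetical numbers).
layout_a = simulate_layout(stage_hours=[8, 12, 6], travel_hours=[1.0, 0.5, 0.5])
layout_b = simulate_layout(stage_hours=[8, 12, 6], travel_hours=[0.5, 0.3, 0.2])
print(f"layout A: {layout_a:.0f} h, layout B: {layout_b:.0f} h")
```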

A Study on Development of Collaborative Document Authoring System based on DOM (DOM에 기반한 공동 문서 저작 시스템 구현에 관한 연구)

  • Yu, Seong-Ju;Kim, Cha-Jong;Shin, Hyun-Sub
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.12 / pp.2601-2608 / 2010
  • Most collaborative document authoring systems that work on plain text documents make it difficult to merge and reuse documents, and they rarely provide a repository for saving and keeping them. Although Web-based access makes such systems easy to reach, it also leaves them vulnerable in terms of security. In this paper, we design and implement a collaborative authoring system for XML documents to address these problems. The system manipulates the object model of documents through the DOM and uses RMI to transmit and receive Java objects without dealing with socket communication directly. Security is improved through an authentication process. By providing templates and editing functions such as annotation and visualization of the document structure, the system makes collaborative document authoring easier than before.
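
The system described above is implemented in Java with RMI; the short Python sketch below only illustrates DOM-style manipulation of a shared XML document (attaching an annotation element to a section). The element and attribute names are hypothetical, not the paper's schema.

```python
from xml.dom.minidom import parseString

# Minimal sketch of DOM-based editing of a shared XML document.
doc = parseString("<document><section id='1'>Draft text</section></document>")

def add_annotation(dom, section_id, author, text):
    """Attach an annotation element to the section with the given id."""
    for sec in dom.getElementsByTagName("section"):
        if sec.getAttribute("id") == section_id:
            note = dom.createElement("annotation")
            note.setAttribute("author", author)
            note.appendChild(dom.createTextNode(text))
            sec.appendChild(note)
    return dom

add_annotation(doc, "1", "reviewer1", "Please clarify this paragraph.")
print(doc.toprettyxml(indent="  "))
```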

Design and Application of Data Interchange Formats (DIFs) for Improving Interoperability in SBA (SBA 상호운용성 향상을 위한 데이터교환서식 설계 및 활용에 관한 연구)

  • Kim, Hwang Ho;Kim, Moon Kyung;Choi, Jin Young;Wang, Gi-Nam
    • Journal of Information Technology and Architecture / v.9 no.3 / pp.275-285 / 2012
  • DIFs (Data Interchange Formats) are needed to enhance the interoperability of physically distributed organizations in the SBA (Simulation Based Acquisition) process. A DIF acts as a template for DPDs (Distributed Product Descriptions) and, by allowing direct access to DPDs that hold various information and M&S (Modeling & Simulation) resources, makes it possible to use that information without a separate data format conversion step. This characteristic is essential for interoperability in ICE (Integrated Collaborative Environment) based SBA. This paper proposes a DIF framework and the outputs of each phase of the acquisition process for configuration data related to design and manufacturing in SBA: a Conceptual Data Model, a Logical Data Model, a Physical Data Model, and a physical DIF based on XML. Finally, we propose the DIF model architecture and demonstrate an example DIF implementation based on it.
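
The paper's actual DIF schema is not reproduced in the abstract; the snippet below is a hypothetical sketch of what a small XML-based physical DIF record could look like, built with Python's standard ElementTree API. Tag names and the model reference URI are illustrative only.

```python
import xml.etree.ElementTree as ET

# Hypothetical physical DIF record: one configuration item pointing at a DPD model.
dif = ET.Element("DIF", phase="design")
item = ET.SubElement(dif, "ConfigurationItem", id="CI-001")
ET.SubElement(item, "Name").text = "Hull Block A12"
ET.SubElement(item, "ModelReference").text = "dpd://shipyard/models/A12.step"
ET.SubElement(item, "Owner").text = "Design Dept."

print(ET.tostring(dif, encoding="unicode"))
```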

Development of Parametric Design Tool for Offshore Plant Cable Tray Using PML (프로그램 매크로언어를 이용한 해양 플랜트 케이블 트레이의 파라메트릭 설계 도구 개발)

  • Kim, Hyun-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.632-637 / 2019
  • In the production design of an offshore plant, cable tray design arranges the 3D model optimally so that cables can be installed without interfering with structural members and various outfitting equipment; it is performed with PDMS (Plant Design Management System), a 3D CAD system for offshore plant layout. This study describes the development of PML (Programmable Macro Language) routines for PDMS that support offshore plant cable tray design and examines their efficiency compared to the existing method. The cable tray design PML developed in this paper enables fully parametric design using an electrical outfitting template library, allowing rapid response to frequent modifications caused by design changes and reducing the fatigue of repetitive work by encoding accumulated design experience. When the developed system was applied to an offshore plant structure module, it improved work efficiency by more than 50% compared to the existing method.
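
PML source from the paper is not available in the abstract, so the sketch below expresses the parametric idea in Python rather than in actual PML: a few template parameters (width, segment length, elevation, all hypothetical values) drive the generation of straight tray segments along a route.

```python
from dataclasses import dataclass

@dataclass
class TrayParams:
    width: float = 300.0        # tray width in mm (assumed value)
    segment_len: float = 3000.0 # maximum straight segment length in mm
    elevation: float = 4500.0   # tray elevation in mm

def generate_segments(start_x, end_x, params: TrayParams):
    """Return (x, z, length, width) tuples for straight segments along the x axis."""
    segments, x = [], start_x
    while x < end_x:
        length = min(params.segment_len, end_x - x)
        segments.append((x, params.elevation, length, params.width))
        x += length
    return segments

# Changing one parameter regenerates the whole run, which is the point of
# parametric design: a design change does not require re-placing each segment.
for seg in generate_segments(0.0, 10000.0, TrayParams()):
    print(seg)
```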

Improved Side Channel Analysis Using Power Consumption Table (소비 전력 테이블 생성을 통한 부채널 분석의 성능 향상)

  • Ko, Gayeong;Jin, Sunghyun;Kim, Hanbit;Kim, HeeSeok;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.4 / pp.961-970 / 2017
  • Differential power analysis computes an intermediate value related to sensitive information and substitutes it into a power model to obtain hypothesized power consumption; by comparing this hypothesized consumption with the measured consumption, the secret value can be recovered. The Hamming weight and Hamming distance models are the most commonly used power consumption models, obtained through modeling techniques. If the assumed power model differs from the actual power consumption of the target device, the performance of side channel analysis declines. In this paper, we propose a method that records the measured power consumption and exploits it directly as the power consumption model. The proposed method uses the power consumption at the times when information available during encryption (plaintext, ciphertext, etc.) is processed. It does not require a template built in advance and, because it uses consumption measured on the actual device, it accurately reflects that device's power model. Simulations and experiments show that the proposed method improves side channel analysis over existing power modeling methods.
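
As a rough illustration of the table-based idea (not the authors' exact procedure), the sketch below builds an empirical consumption table from measured traces indexed by a known byte, then uses it in place of a Hamming-weight model when scoring key guesses by correlation. Array names, the point-of-interest index, and the assumption that SBOX is a length-256 NumPy array are all hypothetical.

```python
import numpy as np

def build_power_table(traces, known, poi):
    """Average measured consumption at sample 'poi' for each value of the known byte."""
    table = np.zeros(256)
    for v in range(256):
        idx = np.where(known == v)[0]
        if idx.size:
            table[v] = traces[idx, poi].mean()
    return table

def correlate(hypothesis, traces):
    """Pearson correlation of a hypothesis vector with every trace sample."""
    h = hypothesis - hypothesis.mean()
    t = traces - traces.mean(axis=0)
    return (h @ t) / (np.linalg.norm(h) * np.linalg.norm(t, axis=0))

def rank_key_guesses(traces, plaintext_byte, table, SBOX):
    """Score each key-byte guess with the table-based power model."""
    scores = []
    for guess in range(256):
        inter = SBOX[np.bitwise_xor(plaintext_byte, guess)]
        scores.append(np.max(np.abs(correlate(table[inter], traces))))
    return np.argsort(scores)[::-1]  # most likely key byte first
```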

Development of a Korean Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼 (ECHOS) 개발)

  • Kwon Oh-Wook;Kwon Sukbong;Jang Gyucheol;Yun Sungrack;Kim Yong-Rae;Jang Kwang-Dong;Kim Hoi-Rin;Yoo Changdong;Kim Bong-Wan;Lee Yong-Ju
    • The Journal of the Acoustical Society of Korea / v.24 no.8 / pp.498-504 / 2005
  • We introduce a Korean speech recognition platform (ECHOS) developed for education and research purposes. ECHOS lowers the entry barrier to speech recognition research and can be used as a reference engine by providing elementary speech recognition modules. It has a simple object-oriented architecture, implemented in C++ with the standard template library. The input to ECHOS is digital speech data sampled at 8 or 16 kHz; its outputs are the 1-best recognition result, N-best recognition results, and a word graph. The recognition engine is composed of MFCC/PLP feature extraction, HMM-based acoustic modeling, n-gram language modeling, and finite state network (FSN)- and lexical tree-based search algorithms. It can handle tasks ranging from isolated word recognition to large vocabulary continuous speech recognition. We compare the performance of ECHOS with the hidden Markov model toolkit (HTK) for validation. On an FSN-based task, ECHOS shows similar word accuracy while the recognition time is doubled because of the object-oriented implementation. For an 8,000-word continuous speech recognition task, using a lexical tree search algorithm different from that of HTK, it increases the word error rate by 40% relatively but halves the recognition time.
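
ECHOS itself is a C++ engine; to illustrate the kind of HMM decoding it performs, here is a minimal log-domain Viterbi sketch in Python over a toy discrete HMM. The transition, emission, and initial probabilities are made-up values, not parameters from ECHOS.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most likely HMM state sequence for a discrete observation sequence (log domain)."""
    n_states, T = log_A.shape[0], len(obs)
    delta = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + log_A[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + log_B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):  # follow back-pointers
        path.append(back[t, path[-1]])
    return path[::-1]

# Toy 2-state HMM over 3 observation symbols (assumed values).
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
log_pi = np.log([0.6, 0.4])
print(viterbi(log_A, log_B, log_pi, [0, 1, 2, 2]))
```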

A Methodology for Translation of Operating System Calls in Legacy Real-time Software to Ada (Legacy 실시간 소프트웨어의 운영체제 호출을 Ada로 번역하기 위한 방법론)

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2874-2890 / 1997
  • This paper describes a methodology for translating concurrent software expressed through operating system (OS) calls into Ada. In some legacy software, concurrency is expressed by OS calls that perform concurrent process/task control; the examples considered here are calls from C programs to Unix and calls from CMS-2 programs to the Executive Service Routines of ATES or SDEX-20. Other software re/reverse engineering research has focused on translating the OS calls in legacy software into calls to another OS; in that approach, understanding the software requires knowledge of the underlying OS, which is usually complicated and informally documented. The research in this paper instead translates the OS calls in legacy software into equivalent protocols built from Ada facilities. In the translation, each call is represented by equivalent Ada code that follows the scheme of a message-based, kernel-oriented architecture. To facilitate translation, the methodology uses templates kept in a library for data structures, tasks, procedures, and messages. This is a new approach to modeling an OS in Ada for software re/reverse engineering: because the dependency on the OS is removed from the legacy software, no knowledge of the underlying OS is needed to understand it, and the result is portable and interoperable across Ada run-time environments. The approach can handle the OS calls of different legacy software systems.
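
The paper's templates are Ada code; the Python sketch below only illustrates the lookup-and-substitute step of template-driven translation, mapping a legacy OS call name to a message-based, kernel-style protocol string. The template text and call names are hypothetical, not taken from the paper's library.

```python
# Hypothetical template library keyed by legacy OS call name.
TEMPLATES = {
    "fork":   "Kernel.Create_Task (Name => {name});",
    "msgsnd": "Kernel.Send_Message (Queue => {queue}, Msg => {msg});",
    "msgrcv": "Kernel.Receive_Message (Queue => {queue}, Msg => {msg});",
}

def translate_call(call_name, **args):
    """Replace a legacy OS call with the equivalent kernel-style protocol text."""
    template = TEMPLATES.get(call_name)
    if template is None:
        raise ValueError(f"no template registered for {call_name!r}")
    return template.format(**args)

print(translate_call("msgsnd", queue="Cmd_Queue", msg="Start_Msg"))
```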

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • This paper describes a depth-based recognition system for continuous human action that uses the motion history image (MHI) and histogram of oriented gradients (HOG) together with a spotter model, and proposes the spotter model, which performs action spotting, to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the modeling step generates sequences from the extracted features. Using these sequences and hidden Markov models, an action model for each defined action and the proposed spotter model are created. During continuous recognition, the spotter model performs action spotting to separate meaningful actions from meaningless motion in the continuous action sequence, and human actions are then recognized by comparing the model probability values for each meaningful action sequence. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
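
As an illustration of the pre-processing stage, the sketch below shows how a motion history image can be accumulated from frame differences before HOG features are computed. The decay constant and difference threshold are assumed values, not the paper's settings.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, diff_thresh=25):
    """Update a motion history image: fresh motion is set to tau, old motion decays by 1."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Usage over a sequence of depth frames (2D arrays):
# mhi = np.zeros(frames[0].shape)
# for prev, cur in zip(frames, frames[1:]):
#     mhi = update_mhi(mhi, prev, cur)
# HOG descriptors would then be computed on the normalized MHI.
```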

Production, Immobilization, and Characterization of Croceibacter atlanticus Lipase Isolated from the Antarctic Ross Sea (남극 로스해에서 분리한 Croceibacter atlanticus균 유래 리파아제의 생산, 고정화, 효소특성 연구)

  • Park, Chae Gyeong;Kim, Hyung Kwoun
    • Microbiology and Biotechnology Letters / v.46 no.3 / pp.234-243 / 2018
  • The Antarctic Ocean contains numerous microorganisms that produce novel biocatalysts with applications in various industries. We screened psychrophilic bacterial strains isolated from the Ross Sea and found that a Croceibacter atlanticus strain (Stock No. 40-F12) showed high lipolytic activity on a tributyrin plate. We isolated the corresponding lipase gene (lipCA) by shotgun cloning and expressed the LipCA enzyme in Escherichia coli cells. Homology modeling of LipCA was carried out using an alpha/beta hydrolase from the Spain Arreo lake metagenome as a template. According to the model, LipCA has an α/β hydrolase fold, a Gly-X-Ser-X-Gly motif, and a lid sequence, indicating that LipCA is a typical lipase. Active LipCA enzyme was purified from the cell-free extract by ammonium sulfate precipitation and gel filtration chromatography, and its enzymatic properties, including optimum temperature and pH, stability, substrate specificity, and organic solvent stability, were determined. LipCA was immobilized by the cross-linked enzyme aggregate (CLEA) method and its properties were compared with those of the free enzyme. After cross-linking, temperature, pH, and organic solvent stability increased considerably, whereas substrate specificity did not change. The LipCA CLEA was recovered by centrifugation and retained approximately 40% of its activity after the fourth recovery. This is the first report of the expression, characterization, and immobilization of a C. atlanticus lipase, which could have potential industrial applications.

Sequential Bayesian Updating Module of Input Parameter Distributions for More Reliable Probabilistic Safety Assessment of HLW Radioactive Repository (고준위 방사성 폐기물 처분장 확률론적 안전성평가 신뢰도 제고를 위한 입력 파라미터 연속 베이지안 업데이팅 모듈 개발)

  • Lee, Youn-Myoung;Cho, Dong-Keun
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.18 no.2 / pp.179-194 / 2020
  • A Bayesian approach was introduced to increase confidence in the prior distributions of input parameters used in the probabilistic safety assessment of a radioactive waste repository. A GoldSim-based module was developed using a Markov chain Monte Carlo algorithm and implemented through GSTSPA (GoldSim Total System Performance Assessment), a GoldSim template for generic and site-specific safety assessment of the repository system. Sequential Bayesian updating of prior distributions is explained comprehensively and used as the basis for a reliable safety assessment of the repository. For several selected parameters associated with nuclide transport in the fractured rock medium, the prior distribution was updated into three sequential posterior distributions using assumed likelihood functions, and the process was demonstrated through a probabilistic safety assessment of a conceptual repository for illustrative purposes. The study shows that even limited observed data can increase confidence in the commonly available but usually uncertain prior distributions of input parameter values. This is particularly relevant for nuclide behavior in and around the repository system, which typically involves a long time span and a wide modeling domain.
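
The GoldSim module itself is not reproduced here; the following is a minimal Python sketch, under assumed Gaussian priors and likelihoods, of how a parameter's prior can be sequentially updated with batches of observations using a random-walk Metropolis sampler. The parameter, data batches, and step size are all hypothetical.

```python
import numpy as np

def metropolis_posterior(log_prior, log_likelihood, start, n_steps=20000, step=0.1):
    """Draw posterior samples with a random-walk Metropolis sampler."""
    samples, x = [], start
    lp = log_prior(x) + log_likelihood(x)
    for _ in range(n_steps):
        prop = x + np.random.normal(0.0, step)
        lp_prop = log_prior(prop) + log_likelihood(prop)
        if np.log(np.random.rand()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples[n_steps // 2:])  # discard burn-in

def update_round(prior_mu, prior_sigma, observations, obs_sigma=0.2):
    """One sequential update: the summarized posterior becomes the next prior."""
    log_prior = lambda x: -0.5 * ((x - prior_mu) / prior_sigma) ** 2
    log_like = lambda x: -0.5 * np.sum(((observations - x) / obs_sigma) ** 2)
    post = metropolis_posterior(log_prior, log_like, start=prior_mu)
    return post.mean(), post.std()

mu, sigma = 0.0, 1.0  # initial prior for a hypothetical transport parameter
for batch in ([0.4, 0.5], [0.45], [0.52, 0.48, 0.5]):  # assumed data batches
    mu, sigma = update_round(mu, sigma, np.array(batch))
    print(f"updated prior: mean={mu:.3f}, sd={sigma:.3f}")
```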