• Title/Summary/Keyword: 비정형 요구사항 (unstructured requirements)

Search results: 47 (processing time: 0.025 seconds)

Requirement Analysis Study for Development of 3D Printing Concrete Nozzle for FCP Manufacturing (FCP 제작용 3D 프린팅 콘크리트 노즐 개발을 위한 요구사항 분석연구)

  • Youn, Jong-Young;Kim, Ji-Hye;Kim, Hye-Kwon;Lee, Donghoon
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2022.04a
    • /
    • pp.65-66
    • /
    • 2022
  • In the construction industry, interest in technologies such as 3D Construction Printing (3DCP) is increasing, and research is being conducted continuously. Free-form (atypical) architecture requires a variety of shapes to be realized, and 3D printing technology is being introduced to address this. Our research team is conducting research on producing Free-form Concrete Panels (FCP), in which the FCP formwork is manufactured automatically without deviation from the designed shape. In this process, a concrete nozzle based on 3D printing technology is developed so that concrete can be extruded precisely into the manufactured formwork, preventing the formwork deformation that can be caused by the concrete load. Therefore, in this study, the requirements for the development of a 3D printing concrete nozzle for FCP manufacturing are analyzed, and the first nozzle was developed based on the analyzed requirements. Such equipment can shorten the construction period and reduce costs in free-form construction and is expected to serve as basic 3D printing equipment.

A Study on the Introduction of Library Services Based on Cloud Computing (클라우드 컴퓨팅 기반의 도서관 서비스 도입방안에 관한 연구)

  • Kim, Yong
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.23 no.3
    • /
    • pp.57-84
    • /
    • 2012
  • With the advent of the Big Data era, driven by the tremendous increase of structured and unstructured data, a library needs an effective method to store, manage, and preserve various information resources. The need for collaboration among libraries is also continuously increasing in the digital environment. As an effective way to handle these changes and challenges in libraries, interest in cloud computing keeps growing. This study aims to propose a method to introduce cloud computing in libraries. To achieve this goal, this study performed a literature review to analyze the problems of existing library systems. This study also proposes considerations, expectations, a service scenario, and a phased strategy for introducing cloud computing in libraries. Based on the results extracted from cases in which libraries have introduced cloud computing-based systems, this study proposes an introduction strategy and specific areas of library work to which it can be applied, considering the features of cloud computing models. The proposed phased strategy and service scenario may reduce the time and effort required in the process of introducing cloud computing and maximize its effect.

A Method of Computer Utilization According to Interaction Modes in Preliminary Structural Design (예비구조설계에서의 상호작용방식에 따른 컴퓨터 활용방안)

  • 정종현
    • Computational Structural Engineering
    • /
    • v.12 no.1
    • /
    • pp.86-94
    • /
    • 1999
  • Preliminary structural design proceeds through the generation and development of structural system alternatives, their analysis and design with various methods and targets, and the comparison and selection among alternatives in consideration of diverse requirements and structural characteristics. Each of these processes is carried out step by step and iteratively, based on the comprehensive thinking and judgment grounded in the structural designer's empirical knowledge. Therefore, preliminary structural design must handle not only structured data and tasks but also unstructured data and tasks. Accordingly, the computer can be used efficiently in preliminary structural design through an interaction mode based on a division of roles between the designer and the computer: the computer directly handles the structured data and tasks, the designer directly handles the unstructured data and tasks, and the two exchange results and issue instructions through interaction. To smoothly support this role-sharing interaction, a three-dimensional viewpoint that can effectively present the geometric shape and material data of each structural system alternative is needed, as is a requirement viewpoint that can manage and present data on the various requirements against which each alternative is reviewed and compared. In addition, a work-process viewpoint and an alternative-development viewpoint are needed as a basis for understanding how the preliminary design has proceeded, in what direction, and with what design intent. To effectively support interaction for controlling the preliminary design process, the structured tasks that the computer can perform need to be listed and presented according to the sequence of preliminary structural design: generation and development of alternatives, analysis and design of alternatives, and selection of an alternative. In this way, the designer decides, according to his or her own judgment, which task to perform next, selects the corresponding structured task from those presented by the computer, and directs its execution, thereby controlling the preliminary design process in line with his or her empirical knowledge and design intent.
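
The role-sharing interaction described above can be illustrated with a minimal sketch: the computer lists the structured (formalized) tasks it can perform at each preliminary-design stage, and the designer selects which task to run next according to his or her own judgment. This is a hypothetical Python illustration of that interaction mode, not an implementation from the paper; all stage and task names are assumptions.

```python
# Minimal sketch (hypothetical) of the role-sharing interaction mode:
# the computer offers structured tasks per design stage, the designer chooses.
from typing import Callable

# Structured tasks the computer could perform, grouped by preliminary-design stage.
STRUCTURED_TASKS: dict[str, list[str]] = {
    "generate/develop alternatives": ["lay out frame grid", "vary member spacing"],
    "analyze/design alternatives": ["run static analysis", "size members to code"],
    "select alternative": ["compare cost", "compare stiffness", "rank by requirements"],
}

def design_session(choose: Callable[[str, list[str]], str]) -> list[str]:
    """Walk the design stages; the designer's judgment (the callback) picks one task per stage."""
    log = []
    for stage, tasks in STRUCTURED_TASKS.items():
        picked = choose(stage, tasks)      # unstructured judgment by the designer
        log.append(f"{stage}: {picked}")   # structured task executed by the computer
    return log

# Example designer policy: always take the first offered task.
print(design_session(lambda stage, tasks: tasks[0]))
```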

A Study on Questionnaire Improvement using Text Mining (텍스트 마이닝 기법을 활용한 설문 문항 개선에 관한 연구)

  • Paek, Yun-Ji;Jung, Chang-Hyun
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.26 no.2
    • /
    • pp.121-128
    • /
    • 2020
  • The Marine Safety Culture Index (MSCI) was developed in 2018 to objectively assess the public's safety culture level and to serve as data for spreading the marine safety culture. The method for calculating the safety culture index should include issues that may affect the safety culture and should consist of appropriate attributes for estimating the current status. In addition, continuous verification and supplementation are required to address social and economic changes. In this study, to determine whether the questionnaire designed by marine experts reflects the people's interests and needs, we analyzed 915 marine safety proposals. Text mining was employed to analyze the unstructured data of the marine safety proposals, and network analysis and topic modeling were subsequently performed. The analysis showed that the marine safety proposals centered on attributes such as education, public relations, safety rules, awareness, skilled workers, and systems. Eighteen questions were modified and supplemented to reflect the marine safety proposals, and the reliability of the revised questions was analyzed. Compared to the previous year, the questionnaire's internal consistency improved, reaching a high value of 0.895. The improved questionnaire, which reflects the requirements of both marine experts and the public, is expected to contribute, together with the derived marine safety culture index, to the establishment of policies for spreading the marine safety culture.
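
The text-mining step described in this abstract, topic modeling over unstructured proposal text, can be sketched roughly as follows. This is a generic, hypothetical example using scikit-learn's LDA implementation, not the authors' actual pipeline or data; the sample proposals and parameter values are invented for illustration.

```python
# Illustrative sketch: topic modeling over unstructured proposal text with
# scikit-learn's LatentDirichletAllocation (sample documents are hypothetical).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

proposals = [
    "expand safety education for small fishing vessels",
    "more public relations campaigns on life jacket rules",
    "stricter enforcement of marine safety rules in harbors",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(proposals)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]   # top-3 words per topic
    print(f"topic {idx}: {top}")
```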

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.1
    • /
    • pp.25-32
    • /
    • 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become more complicated as user requirements for the service diversify, and the amount of data to be processed also increases. There are therefore technical limitations in maintaining rule-based models and in processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. In order to utilize such machine learning based models in a context-aware system, a management process that periodically injects training data is required. A previous study on a machine learning based context-awareness system considered a series of management processes, such as the generation and provision of training data for operating several machine learning models, but the method was limited to the system to which it was applied. In this paper, we propose a training data generation method for machine learning models that extends the machine learning based context-aware system. The proposed method defines a training data generation model that can reflect the requirements of the machine learning models and generates training data for each machine learning model. In the experiment, the training data generation model is defined based on the training data generation schema of the cardiac status analysis model for the elderly in a health status notification service, and training data are generated by applying the defined model in a real software environment. In addition, the accuracy obtained by training the machine learning model on the generated data is compared in order to verify the validity of the generated training data.
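
A minimal sketch of the schema-driven idea described above follows; it is hypothetical and not the paper's system. The schema lists the features a particular model needs plus a labeling rule, and raw context records are projected onto that schema to produce training rows. The field names, labeling rule, and thresholds are assumptions.

```python
# Illustrative sketch: a schema-driven generator that turns raw sensor records
# into labeled training rows for one machine learning model (all names assumed).
import random

# A "training data generating schema": required features and how to label a record.
HEART_SCHEMA = {
    "features": ["heart_rate", "activity_level"],
    "label": lambda rec: "abnormal"
             if rec["heart_rate"] > 120 and rec["activity_level"] == "rest"
             else "normal",
}

def generate_training_data(records, schema):
    """Project raw context records onto the schema's features and attach a label."""
    rows = []
    for rec in records:
        row = {f: rec[f] for f in schema["features"]}
        row["label"] = schema["label"](rec)
        rows.append(row)
    return rows

if __name__ == "__main__":
    raw = [{"heart_rate": random.randint(50, 160),
            "activity_level": random.choice(["rest", "walking", "running"])}
           for _ in range(5)]
    for row in generate_training_data(raw, HEART_SCHEMA):
        print(row)
```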

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue to operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand by distributing the stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL databases are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insertion performance evaluation of MongoDB for various chunk sizes.
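
As a rough illustration of the collector module's routing idea (real-time data to the relational path, bulk unstructured data to MongoDB), the following hedged Python sketch uses pymongo. The connection URI, database and collection names, record fields, and the routing rule are all hypothetical and are not the paper's implementation.

```python
# Illustrative sketch: route collected log records either to MongoDB (schema-free
# bulk storage for later batch analysis) or to a real-time path (stand-in for the
# MySQL module). All names and the routing rule are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical endpoint
log_collection = client["bank_logs"]["raw"]          # schema-free unstructured log store

def handle_realtime(log_record: dict) -> None:
    print("real-time path:", log_record)              # stand-in for the MySQL module

def collect(log_record: dict) -> None:
    """Classify a log record by type and store it accordingly."""
    if log_record.get("type") == "realtime":
        handle_realtime(log_record)
    else:
        log_collection.insert_one(log_record)          # unstructured log -> MongoDB

if __name__ == "__main__":
    collect({"type": "batch", "branch": "A01", "op": "transfer", "ms": 42})
    collect({"type": "realtime", "branch": "A01", "op": "login", "ms": 7})
```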

A Study on the Improvement of Efficient Execution System for Cadastral Resurvey Project (지적재조사사업의 효율적인 추진체계 개선에 관한 연구)

  • Kim, Jae Myeong;Choi, Yun Soo;Yoon, Ha Su;Kim, Young Dan
    • Spatial Information Research
    • /
    • v.21 no.4
    • /
    • pp.47-62
    • /
    • 2013
  • Recently there are growing arguments about the need to improve Cadastral Resurvey Projects more efficiently and rationally, since the projects have started in earnest. This study aims to analyze problems of the projects and find solutions. For this purpose, we surveyed literature related to the demonstration projects and then examined similar domestic urban development projects and international cadastral resurvey projects in order to apply their appropriate tools to present or future projects. Generally, the execution process of Cadastral Resurvey Projects has four steps, and this study arranges the main problems in each step. As a result, the following improvements are suggested. (1) A planning-based approach is more effective than a project-based one. (2) The project boundary should be delineated with rectangular patterns rather than irregular ones for efficient execution. (3) The current baseline of agreement should be reconsidered, because too high a baseline may make the project difficult to progress. (4) Both systematic public relations to promote the projects and a variety of incentives to induce public participation are very important to solve the problem of involvement. (5) Institutional tools for collaborative planning are also desirable to resolve conflicts among stakeholders rationally and effectively.

Formal Verification and Testing of RACE Protocol Using SMV (SMV를 이용한 RACE 프로토콜의 정형 검증 및 테스팅)

  • Nam, Won-Hong;Choe, Jin-Yeong;Han, U-Jong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.3
    • /
    • pp.1-17
    • /
    • 2002
  • In this paper, we present our experience in using the symbolic model checker SMV to analyze a number of properties of the RACE cache coherence protocol, designed by ETRI (Electronics and Telecommunications Research Institute), and to verify that the RACE protocol satisfies important requirements. To do this, we specified the model of the RACE protocol in the input language of SMV and specified the properties as formulas in the temporal logic CTL. We successfully used the symbolic model checker to analyze a number of properties of the RACE protocol and verified that it satisfies liveness and safety, that no abnormal state/input combination ever occurs, and that every possible processor request is executed correctly. In addition, we found some ambiguities in the specification and a starvation case that the protocol designers had not anticipated. Through this verification experience, we demonstrate the advantages of the model checking method, and we propose a new method to automatically generate test cases for use in simulation and testing.
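
For readers unfamiliar with CTL, the kinds of properties mentioned above (safety, liveness, absence of abnormal state/input combinations) typically take shapes like the following. These formulas are generic illustrations only, not the actual RACE properties; the state and signal names are hypothetical.

```latex
% Generic CTL property shapes (illustrative; state/signal names are hypothetical).
% Safety: no reachable state combines an invalid cache state with a write-back input.
\[ AG\,\neg\bigl(\mathit{state} = \mathit{Invalid} \land \mathit{input} = \mathit{WriteBack}\bigr) \]
% Liveness: every pending processor request is eventually granted.
\[ AG\,\bigl(\mathit{request} \rightarrow AF\,\mathit{granted}\bigr) \]
```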

A Technique of Deriving Concrete Object Model for C++ Programming (C++ 프로그래밍을 위한 구체적 객체 모델의 작성법)

  • Kim, Tae-Gyun;Im, Chae-Deok;Song, Yeong-Gi;In, So-Ran
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.3
    • /
    • pp.731-746
    • /
    • 1997
  • The usage of object models for the development of software has been growing due to the prevalence of the object-oriented paradigm. The object models produced as results of requirements analysis and design activities are very beneficial to the implementation phase. It is even possible for source code to be generated automatically if object models are concrete enough. Therefore, system analyzers and designers should make an effort to refine the abstract object model defined at an early stage in order to achieve a more concrete object model. In general, refining an abstract object model into a concrete model depends too much on the designer's informal experience. In this paper, we present the refinement techniques required for making an abstract object model concrete, based on the notation of OMT (Object Modeling Technique). We discuss the definition of the abstraction level of an object model and the transformational rules of refinement. These transformational rules are currently applied to the design of a software tool, named Process Modeler, which is a major component of the software development process modeling system for ICS (Information Communication Service). Finally, we can achieve a concrete object model which can easily be translated into C++ source code.
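
The refinement idea can be illustrated with a small, hypothetical sketch: a transformational rule that turns an abstract one-to-many association into a concrete collection-typed member and emits the corresponding C++ declaration. This is not the Process Modeler tool or the paper's rule set; the class names and the rule are assumptions, written in Python here only for brevity.

```python
# Illustrative sketch: one toy transformational rule that refines an abstract
# OMT-style association into a concrete C++ member declaration (names assumed).
from dataclasses import dataclass

@dataclass
class Association:          # abstract-model element
    source: str
    target: str
    multiplicity: str       # "1" or "many"

def refine(assoc: Association) -> str:
    """Rule: a many-valued association becomes a collection-typed member."""
    if assoc.multiplicity == "many":
        return f"std::vector<{assoc.target}*> {assoc.target.lower()}s_;"
    return f"{assoc.target}* {assoc.target.lower()}_;"

if __name__ == "__main__":
    print(refine(Association("Customer", "Order", "many")))
    # -> std::vector<Order*> orders_;
```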

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • In this research, the proposed Dynamic Virtual Ontology using Tags (DyVOT) supports dynamic search of resources according to user requirements, using tags drawn from social web resources. In general, tags are annotations, series of words attached by social users to information resources such as web pages, images, YouTube videos, and so on. Tags therefore characterize and mirror the information resources, so tags can serve as metadata that match resources. Consequently, semantic relationships between tags can be extracted from the dependencies among tags as representatives of resources. However, this is limited because tags, usually expressed as series of words, include allophonic synonyms and homonyms. Research on folksonomies has therefore applied tags to the classification of words by semantics-based allophonic synonymy, and some research has focused on clustering and/or classifying resources by semantics-based relationships among tags. Nevertheless, such research is also limited because it focuses on semantics-based hyper/hypo relationships or clustering among tags without considering conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. In this research, the proposed DyVOT uses tags and constructs an ontology for effective search. We assume that tags are extracted from user requirements and used to construct multiple sub-ontologies as combinations of some or all of the tags. The proposed DyVOT then constructs an ontology based on hierarchical and associative relationships among tags for effective search of a solution. The ontology is composed of a static ontology and a dynamic ontology. The static ontology defines semantics-based hierarchical hyper/hypo relationships among tags, as in (http://semanticcloud.sandra-siegel.de/), with a tree structure. From the static ontology, the DyVOT extracts multiple sub-ontologies using sub-tags constructed from parts of the tags; each sub-ontology is built from the hierarchy paths that contain its sub-tag. To create the dynamic ontology, the proposed DyVOT must define associative relationships among the multiple sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags linked by the sub-ontologies, and the association is measured by the degree of shared resources allocated to the tags of the sub-ontologies. If the association value is larger than a threshold value, a new associative relationship among the tags is created. These associative relationships are used to merge the sub-ontologies and construct a new hierarchy. To construct the dynamic ontology, a new class is defined that links two or more sub-ontologies, generated by merging tags shown to be highly associative through their shared resources. This class is used to generate a new hierarchy with the extracted sub-ontologies, creating the dynamic ontology: the newly created class belongs to the dynamic ontology and establishes new hyper/hypo hierarchical relationships between the class and the tags linked to the sub-ontologies.
Finally, the DyVOT is built from the newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched against the DyVOT, which narrows the search boundary and shortens the search paths. While a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies with parallel processing. In this light, the DyVOT improves the correctness and agility of search and reduces search effort by shortening the search paths.
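
The merging step described above (measure the association of two tags by their shared resources, then merge their sub-ontologies when the association exceeds a threshold) can be sketched as follows. This is a hypothetical illustration, not the DyVOT implementation; Jaccard similarity is assumed as the association measure, and the tags, resources, and 0.5 threshold are invented.

```python
# Illustrative sketch: tag association via shared-resource overlap, with
# sub-ontology merging above a threshold (all data and the metric are assumed).

def association(resources_a: set[str], resources_b: set[str]) -> float:
    """Degree of shared resources between two tags (Jaccard similarity)."""
    if not resources_a or not resources_b:
        return 0.0
    return len(resources_a & resources_b) / len(resources_a | resources_b)

# Tag -> resources annotated with that tag (hypothetical social-web data).
tag_resources = {
    "cloud":   {"r1", "r2", "r3"},
    "storage": {"r2", "r3", "r4"},
    "cooking": {"r9"},
}

THRESHOLD = 0.5
sub_ontologies = {tag: {tag} for tag in tag_resources}   # trivial one-tag sub-ontologies

for a in tag_resources:
    for b in tag_resources:
        if a < b and association(tag_resources[a], tag_resources[b]) >= THRESHOLD:
            merged = sub_ontologies[a] | sub_ontologies[b]   # new class linking both
            sub_ontologies[a] = sub_ontologies[b] = merged

print(sub_ontologies)   # "cloud" and "storage" end up in one merged sub-ontology
```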