• Title/Summary/Keyword: web-like structure

Experiment of Flexural Behavior of Composite Beam with Steel Fiber Reinforced Ultra High Performance Concrete Deck and Inverted-T Steel Girder (강섬유로 보강된 초고성능 콘크리트 바닥판과 역T형 강거더 합성보의 휨거동 실험)

  • Yoo, Sung-Won; Ahn, Young-Sun; Cha, Yeong-Dal; Joh, Chang-Bin
    • Journal of the Korea Concrete Institute / v.26 no.6 / pp.761-769 / 2014
  • Ultra high performance concrete (UHPC) has been developed to overcome the low strength and brittleness of conventional concrete. Considering that UHPC, owing to its composition and the use of steel fibers, develops a compressive strength of 180 MPa as well as high stiffness, the top flange of the steel girder may be superfluous in a composite beam combining a UHPC slab with a steel girder. In such a composite beam, the steel girder takes the form of an inverted-T shape without a top flange, in which the studs needed for composite action with the UHPC slab are disposed in the web of the girder. This study investigates experimentally and analytically the flexural behavior of this new type of composite beam in order to propose details such as stud spacing and slab thickness for further design recommendations. To that end, eight composite beams with varying stud spacing and slab thickness were fabricated and tested. The test results indicated that a stud spacing ranging from 100 mm up to 2 to 3 times the slab thickness can be recommended. In view of the relative characteristic slip limit of Eurocode-4, the results showed that the composite beams developed ductile behavior. Moreover, except for the members with a thin slab and large stud spacing, most of the specimens exhibited results different from those predicted by AASHTO LRFD and Eurocode-4 because of the high performance developed by the UHPC.
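One way to read the abstract's spacing recommendation is as a lower bound of 100 mm and an upper bound tied to the slab thickness. The sketch below is only an illustration of that rule as stated in the abstract; the function name and the default multiplier of 2 (the conservative end of the paper's 2 to 3 range) are assumptions, not part of the paper.

```python
def recommended_stud_spacing_range(slab_thickness_mm: float,
                                   multiplier: float = 2.0) -> tuple[float, float]:
    """Return (min, max) stud spacing in mm per the abstract's recommendation.

    Lower bound: 100 mm. Upper bound: `multiplier` times the slab thickness,
    where the paper suggests 2 to 3; 2 is used here as a conservative default.
    """
    lower = 100.0
    upper = multiplier * slab_thickness_mm
    if upper < lower:
        raise ValueError("Slab too thin for the recommended range to apply.")
    return lower, upper

# Example: a 75 mm UHPC slab gives a recommended spacing of 100-150 mm.
print(recommended_stud_spacing_range(75.0))
```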

A Study on the Modern Understanding of SimChong-Jeon and its Storytelling Strategy in the Movie <Madam Ppaengdeok> (심청전에 대한 현대적 상상력과 스토리텔링 전략 - 영화 <마담 뺑덕>(2014)을 대상으로 -)

  • Shin, Horim
    • (The) Study of the Eastern Classic / no.66 / pp.303-330 / 2017
  • The purpose of this article is to examine the modern understanding of the SimChong-Jeon narrative and its storytelling strategy in the movie <Madam Ppaengdeok> (2014). The movie unfolds in three stages that follow the temporal flow of the narrative. <Madam Ppaengdeok> shows a web-like structure of desire, especially by focusing on the male character Sim Hakkyu. The relationships among the characters are gradually broken because of this desire. Moreover, the desire pushes Sim Chong, Sim Hakkyu's daughter, into sacrifice. This part resembles the narrative of SimChong-Jeon, which has been transmitted since the 18th and 19th centuries of the Joseon dynasty. However, <Madam Ppaengdeok> also tells a different story, one describing Sim Hakkyu's search for a real relationship filled with love. This difference allows the movie to be read from a 'storytelling' point of view. Every lack or problem in <Madam Ppaengdeok> is closely related to Sim Hakkyu's desire. His narrative differs from the typical story of SimChong-Jeon: the new narrative of Sim Hakkyu is not a Sim Chong-centered story but rather its antithesis. 'The other narrative' in <Madam Ppaengdeok> appears to be a social practice of storytelling intended to break down the preconception of SimChong-Jeon as 'canon'. This is the storytelling strategy of <Madam Ppaengdeok>, and it suggests another way of creating new narratives based on the classical canon.

Ontology Design for the Register of Officials(先生案) of the Joseon Period (조선시대 선생안 온톨로지 설계)

  • Kim, Sa-hyun
    • (The) Study of the Eastern Classic / no.69 / pp.115-146 / 2017
  • This paper presents an ontology design for a digital archive of the seonsaengan (先生案) of the Joseon Period. A seonsaengan is a register of the staff officials of a government office, recording their personal information and their transfers from one office to another, in addition to their dates of birth, family clans, etc. A total of 176 such registers are known to be held by libraries and museums across the country. This paper undertakes the ontology design of the 47 registers preserved at the Jangseogak Archives of the Academy of Korean Studies (AKS), with a focus on their content and structure, including the names of the relevant government offices and the posts held by the officials. The design centers on the officials, the offices they belonged to, and the transfer records kept in the registers. It categorizes the relevant resources into classes according to the attributes common to their individuals, and for each individual it defines semantic predicates that explicitly express that individual's relationships with other individuals. The classes comprise eight categories, including registers, figures, offices, official posts, state examinations, records, and concepts. For the design of relationships and attributes, existing models already in use, such as Dublin Core, the Europeana Data Model, CIDOC-CRM, and the data model for the database of successful candidates in the state examinations, were consulted. Where terms designed in existing data models are reused, the namespace of the relevant data model is retained; relationships were newly defined where necessary. The designed ontology is demonstrated through an exemplary implementation of the Myeongneung seonsaengan (明陵先生案). Consideration was given to the effects expected when the information entered for a single register is expanded to plural registers, along with ways to use it. The ontology was not designed on the basis of a review of all 176 registers; the model needs to be improved as relevant information is obtained. The aim of these efforts is the systematic arrangement of the information contained in the registers, and information arranged in this manner may later be rearranged through databases or archives that exist currently or will be built in the future. It is expected that the information entered through this ontology design will serve as data showing how government offices were operated and what their personnel system was like, along with the politics, economy, society, and culture of the Joseon Period, in linkage with databases already established.
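As a concrete illustration of the class and relationship design described above, the sketch below models a small fragment of such an ontology in RDF using Python's rdflib. The namespace URI, the property names (servedAt, heldPost, recordedIn), and the individual identifiers are hypothetical stand-ins, not the paper's actual vocabulary; where the paper reuses Dublin Core terms, the standard DCTERMS namespace is bound as-is.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, RDFS

# Hypothetical namespace for this sketch; the paper's actual URIs are not given.
SSA = Namespace("http://example.org/seonsaengan/")

g = Graph()
g.bind("ssa", SSA)
g.bind("dcterms", DCTERMS)

# Classes mirroring the categories named in the abstract.
for cls in ("Register", "Figure", "Office", "OfficialPost",
            "StateExamination", "Record", "Concept"):
    g.add((SSA[cls], RDF.type, RDFS.Class))

# Hypothetical properties expressing relationships between individuals.
g.add((SSA.servedAt, RDF.type, RDF.Property))    # Figure -> Office
g.add((SSA.heldPost, RDF.type, RDF.Property))    # Figure -> OfficialPost
g.add((SSA.recordedIn, RDF.type, RDF.Property))  # Figure -> Register

# An example individual from the Myeongneung register mentioned in the abstract.
register = SSA["MyeongneungSeonsaengan"]
g.add((register, RDF.type, SSA.Register))
g.add((register, DCTERMS.title, Literal("明陵先生案")))

official = SSA["official-001"]  # placeholder identifier
g.add((official, RDF.type, SSA.Figure))
g.add((official, SSA.recordedIn, register))

print(g.serialize(format="turtle"))
```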

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of events created when operating computer systems, are utilized in many processes, from system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed to process a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage needs or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can restore itself automatically and continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data. Further, their strict schemas make it hard to expand by distributing stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because it makes it easy to process unstructured log data through its flexible schema, facilitates node expansion when the amount of data is increasing rapidly, and provides an Auto-Sharding function that automatically expands storage.
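The flexible-schema and auto-sharding behavior described above can be sketched with pymongo. The database and collection names, the document fields, and the hashed shard key below are illustrative assumptions; enabling sharding also presupposes a sharded cluster reached through a mongos router, which the abstract does not detail.

```python
from pymongo import MongoClient

# Connect through a mongos router of a sharded cluster (address is illustrative).
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]  # hypothetical database/collection names

# Free schema: heterogeneous log documents can coexist in one collection.
logs.insert_many([
    {"type": "transaction", "branch": "A01", "amount": 150000, "ts": "2013-06-01T09:12:00"},
    {"type": "login", "user": "u4821", "channel": "web", "ts": "2013-06-01T09:12:03"},
    {"type": "error", "service": "transfer", "code": 503, "ts": "2013-06-01T09:12:05"},
])

# Auto-sharding: distribute the collection across shards by a hashed key.
# (Runs only against a sharded cluster; the key choice here is an assumption.)
client.admin.command("enableSharding", "bank_logs")
client.admin.command("shardCollection", "bank_logs.events", key={"_id": "hashed"})
```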
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
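As one way to picture what the log graph generator consumes, the sketch below aggregates stored log documents per unit time with a MongoDB aggregation pipeline via pymongo. The field names and the hourly bucketing are assumptions for illustration; the paper does not specify the module's actual queries.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]  # same hypothetical collection as above

# Count log entries per hour and per log type; this is the shape of data a
# graph generator could plot. Assumes each document stores an ISO "ts" string
# whose first 13 characters identify the hour (e.g. "2013-06-01T09").
pipeline = [
    {"$group": {
        "_id": {"hour": {"$substr": ["$ts", 0, 13]}, "type": "$type"},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id.hour": 1}},
]

for bucket in logs.aggregate(pipeline):
    print(bucket["_id"]["hour"], bucket["_id"]["type"], bucket["count"])
```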