• Title/Summary/Keyword: Design Management System (설계관리 시스템)

Search Results: 7,124

A Study on Slope Reinforcing Effects Using Soil Stabilizer (토사안정제를 이용한 비탈면보강 효과에 관한 연구)

  • Kim, Ki-Hwan;Kim, Yu-Tae;Lee, Seung-Ho
    • Journal of the Korean Geotechnical Society / v.26 no.10 / pp.5-14 / 2010
  • Reinforcing slopes with a soil stabilizer is an environmentally friendly way to ensure slope stability. However, because little research has been done on the reinforcement effect of the stabilizer-soil mixture, the method does not yet reliably guarantee stability, and its application raises difficult technical issues. In this study, the reinforcement effect is investigated for different mixture ratios, and the optimum reinforcement depth is proposed as a function of slope height based on numerical analysis. The results show that the soil strength increases approximately two to three times, and the numerical analysis makes it possible to estimate the optimum reinforcement depth for a given slope height. The use of soil stabilizers is therefore expected to improve slope stability.
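
To picture how a two- to three-fold strength gain translates into slope stability, the sketch below applies the standard infinite-slope factor-of-safety formula in Python. The infinite-slope model and every parameter value are illustrative assumptions for this listing, not the paper's numerical analysis.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, depth, beta_deg):
    """Factor of safety of a dry infinite slope with a planar slip surface.

    c        : soil cohesion (kPa)
    phi_deg  : friction angle (degrees)
    gamma    : unit weight (kN/m^3)
    depth    : depth of the potential slip surface (m)
    beta_deg : slope angle (degrees)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Untreated soil vs. soil whose cohesion is roughly doubled by a stabilizer
# (loosely echoing the 2-3x strength increase reported in the abstract;
# all numbers here are illustrative, not measured values from the paper).
fs_untreated = infinite_slope_fos(c=10.0, phi_deg=25.0, gamma=18.0, depth=2.0, beta_deg=35.0)
fs_treated = infinite_slope_fos(c=20.0, phi_deg=25.0, gamma=18.0, depth=2.0, beta_deg=35.0)
print(f"FS untreated: {fs_untreated:.2f}, FS treated: {fs_treated:.2f}")
```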

User Interface Design through Mental Accounting : A Case Study on Account book Application (멘탈 어카운팅을 활용한 사용자 인터페이스 디자인 : 가계부 어플리케이션 사례연구)

  • Ga, Ye-Rin;Lee, Jooyoup
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.7 / pp.865-874 / 2017
  • According to mental accounting theory, people sort money by its psychological purpose, and this sorting plays a major role as a means of self-control. However, people sometimes make mistakes that violate very simple economic principles, and many consumers experience such mistakes. Consumers therefore keep account books to manage their income and expenses, and these days they use mobile account book applications, which offer a variety of user interfaces depending on platform characteristics. This paper examines user interface designs that can reduce errors caused by mental accounting and support rational economic activity. To this end, we compared and analyzed account book applications ranked at the top of application stores, drawing on prior research on mental accounting. Through this case study, we identified several examples of user interfaces that encourage reasonable consumption. In future work, we will design an optimal account book UI and conduct a usability test based on these findings.

A Study on Architectural Design of Library Building for Preserving Ancient Documents of Koreanology (한국학(韓國學) 고문헌자료(古文獻資料) 전문도서관(專門圖書館) 건축계획(建築計劃)에 관한 연구(硏究))

  • Lee, Keun-Young;Park, Jee-Hoon;Kong, Soon-Ku
    • Journal of the Korean BIBLIA Society for library and Information Science / v.20 no.4 / pp.143-157 / 2009
  • The purpose of this study is to provide basic data for the architectural planning of library buildings that preserve ancient documents, through an analysis of their spatial composition (facility program, area ratio, space zoning, and circulation system). The findings are as follows. First, the facility program of such archives is composed of four functional areas: a collection area, a user area, an administrative/management area, and a service/public area. Second, the case studies showed that more space is allotted to the collection area, including the preservation section, than to the other areas (39-56%). Third, distinctive traits were found in the location of the stack rooms of the specialized libraries, the location of the preservation department, and the presence of a loading and unloading area. Fourth, the spatial organization is related to the circulation routes.

Improvement Strategy for Demolition Industry through an Analysis of Domestic Demolition Technique and Situation (국내해체기술 및 현황분석을 통한 해체산업의 발전방향)

  • Kim, Chang-Hak;Kim, Hyo-Jin;Kang, Leen Seok
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.2D / pp.143-151 / 2010
  • Eco-friendly construction is currently one of the topics of greatest interest both at home and abroad, and one of its most important elements is the recycling and reuse of construction and demolition waste. Because most construction waste is generated during the demolition phase, it is important both to minimize the quantity of demolition waste produced in that phase and to develop a system that properly manages the waste that is generated. In Korea, however, research on this topic has hardly been carried out. Recently the government has recognized its importance, conducting research to improve demolition techniques and preparing research toward legislation on deconstruction. This study therefore examined the application and importance of demolition techniques by analyzing the techniques currently used in the domestic industry, and it carried out a survey to analyze the state of the demolition industry. Based on the survey results and the situation analysis, the study suggests items needed for the development of demolition techniques, demolition design, and the reduction of C&D waste.

Simulation and Experimental Study on the Impact of Light Railway Train Bridge Due to Concrete Rail Prominence (주행면 단차에 의한 경량전철 교량의 충격 시뮬레이션 및 실험)

  • Jeon, Jun-Tai;Song, Jae-Pil
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.1A / pp.45-52 / 2010
  • This study focuses on the dynamic impact on an AGT (Automated Guide-way Transit) bridge caused by a prominence in the concrete running rail. An experiment was carried out on a 30 m P.S.C. bridge on the AGT test line in Kyungsan, with an artificial prominence 10 mm high installed at the mid-span of the concrete rail, and a computer simulation was run for the same prominence. The experimental results show that, with the prominence in place, the bridge acceleration responses increase by about 50% over the speed range of 20-60 km/h, while the bridge displacement responses increase only slightly. These results indicate that a prominence in the concrete rail can induce excessive impact and vibration. Because the simulation reproduced the experiments closely, the program can be used in AGT bridge design and to formulate standards for concrete rail management.
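
The way a small running-surface prominence amplifies dynamic response can be pictured with a single-degree-of-freedom base-excitation model, sketched below in Python. The half-sine bump shape, the integration scheme, and all parameter values are assumptions chosen for illustration; this is not the vehicle-bridge model used in the study.

```python
import math

def peak_acceleration(m, k, zeta, bump_height, bump_length, speed_kmh, dt=1e-4):
    """Peak acceleration of a sprung mass crossing a half-sine bump.

    The bump acts as a base displacement y(t) while the mass passes over it
    at constant speed; the mass-spring-damper responds to (x - y).
    """
    c = 2.0 * zeta * math.sqrt(k * m)          # damping coefficient
    v = speed_kmh / 3.6                        # speed in m/s
    t_bump = bump_length / v                   # time spent on the bump
    x, xdot, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_bump + 1.0:                    # run 1 s past the bump for free vibration
        if t <= t_bump:
            y = bump_height * math.sin(math.pi * t / t_bump)
            ydot = bump_height * (math.pi / t_bump) * math.cos(math.pi * t / t_bump)
        else:
            y, ydot = 0.0, 0.0
        acc = (-c * (xdot - ydot) - k * (x - y)) / m
        peak = max(peak, abs(acc))
        xdot += acc * dt                       # semi-implicit Euler step
        x += xdot * dt
        t += dt
    return peak

# Illustrative parameters only (not the paper's): 10 mm bump, 0.5 m long.
for v in (20, 40, 60):
    a = peak_acceleration(m=5000.0, k=2.0e6, zeta=0.05,
                          bump_height=0.010, bump_length=0.5, speed_kmh=v)
    print(f"{v} km/h -> peak acceleration {a:.2f} m/s^2")
```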

Designing a system to defend against RDDoS attacks based on traffic measurement criteria after sending warning alerts to administrators (관리자에게 경고 알림을 보낸 후 트래픽 측정을 기준으로 RDDoS 공격을 방어하는 시스템 설계)

  • Cha Yeansoo;Kim Wantae
    • Journal of Korea Society of Digital Industry and Information Management / v.20 no.1 / pp.109-118 / 2024
  • Recently, RDDoS attacks that follow threatening emails sent to the security administrators of companies and institutions have become a social issue. According to a report published by the Korea Internet & Security Agency and the Ministry of Science and ICT, survey results indicate that DDoS attacks are increasing; however, the top survey response highlighted the difficulty of countering them because of security staffing and cost constraints. To respond to DDoS attacks, administrators typically detect anomalies through traffic monitoring, use security equipment and software to identify and block attacks, and employ DDoS mitigation services offered by external security firms. However, failures in the early response to DDoS attacks lead to frequent reliance on detection and mitigation measures, and the resulting costs make effective response difficult. In this paper, we propose a system that creates detection rules, periodically collects traffic using mail detection and an IDS, notifies administrators when a rule matches, and, based on a predefined threshold, blocks the traffic with an IPS or hands it off to a DDoS mitigation service. If no mitigation service is available, the system sends an urgent notification to administrators and suggests applying for and using a cyber shelter or a mitigation service. In our implementation, this approach reduced network traffic from 400 Mbps to 100 Mbps, enabling an effective DDoS response. Because modifying detection and blocking rules takes time and money, future research could reduce the cost of DDoS mitigation by using artificial intelligence to create and modify the rules, or by generating rules in new ways.
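
A minimal sketch of the warn-then-block flow described in the abstract might look like the following Python loop. The threshold values and the hook functions (current_traffic_mbps, notify_admin, apply_ips_block, request_mitigation) are hypothetical placeholders, not interfaces defined in the paper.

```python
import time

# Hypothetical thresholds and hooks; none of these names come from the paper.
WARN_MBPS = 200      # notify administrators above this traffic rate
BLOCK_MBPS = 300     # block with the IPS / request mitigation above this rate

def current_traffic_mbps():
    """Placeholder for the IDS / flow-collector query assumed by this sketch."""
    raise NotImplementedError

def notify_admin(message):
    print(f"[ALERT] {message}")          # e.g. mail or messenger webhook

def apply_ips_block(reason):
    print(f"[IPS] blocking rule pushed: {reason}")

def request_mitigation(reason):
    print(f"[MITIGATION] hand-off requested: {reason}")

def monitor_loop(poll_seconds=60, mitigation_available=True):
    """Periodically measure traffic, warn first, then block or hand off."""
    while True:
        mbps = current_traffic_mbps()
        if mbps >= BLOCK_MBPS:
            apply_ips_block(f"traffic {mbps} Mbps >= {BLOCK_MBPS} Mbps")
            if mitigation_available:
                request_mitigation(f"traffic {mbps} Mbps")
            else:
                notify_admin("No mitigation service available: "
                             "consider applying for a cyber shelter.")
        elif mbps >= WARN_MBPS:
            notify_admin(f"traffic {mbps} Mbps exceeded warning threshold {WARN_MBPS} Mbps")
        time.sleep(poll_seconds)
```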

Application of the Modified Bartlett-Lewis Rectangular Pulse Model for Daily Precipitation Simulation in Gamcheon Basin (감천유역의 일 강수량 모의를 위한 MBLRP 모형의 적용)

  • Chung, Yeon-Ji;Kim, Min-ki;Um, Myoung-Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.3 / pp.303-314 / 2024
  • Precipitation data are an integral part of water management planning, particularly for the design of hydroelectric structures and the study of floods and droughts, yet accurate data are difficult to obtain because of space-time constraints. The recent increase in hydrological variability due to climate change has further emphasized the importance of precipitation simulation techniques. In this study, the Modified Bartlett-Lewis Rectangular Pulse (MBLRP) model was therefore used to estimate the parameters needed to simulate daily precipitation. The effect of these parameters on the simulated daily precipitation was analyzed by applying exponential, Gamma, and Weibull distributions and evaluating the suitability of the simulation for each distribution type. The results suggest that, when simulating precipitation with the MBLRP model, parameters should be selected with regional and seasonal characteristics in mind.
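
To make the role of the cell-intensity distribution concrete, the Python sketch below runs a heavily simplified rectangular-pulse rainfall generator with exponential, Gamma, and Weibull cell intensities. The storm structure and all parameter values are assumptions for illustration only; they are not the MBLRP parameterization or the Gamcheon basin data used in the paper.

```python
import random

def simulate_daily_totals(days, storm_rate, cells_per_storm, cell_duration_hr,
                          depth_sampler):
    """Very simplified rectangular-pulse rainfall generator on an hourly grid.

    Storms start as a Bernoulli/Poisson-like process each hour; each storm
    spawns a fixed number of rectangular cells whose intensities are drawn
    from `depth_sampler`. This is only a caricature of the Bartlett-Lewis
    structure, not the MBLRP model itself.
    """
    hours = days * 24
    rain = [0.0] * hours
    for h in range(hours):
        if random.random() < storm_rate:             # a storm starts this hour
            for _ in range(cells_per_storm):
                start = h + random.randint(0, 3)      # small random cell offset
                intensity = depth_sampler()           # mm/hr from chosen distribution
                for t in range(start, min(start + cell_duration_hr, hours)):
                    rain[t] += intensity
    return [sum(rain[d * 24:(d + 1) * 24]) for d in range(days)]

samplers = {
    "exponential": lambda: random.expovariate(1 / 2.0),      # mean 2 mm/hr
    "gamma":       lambda: random.gammavariate(2.0, 1.0),     # mean 2 mm/hr
    "weibull":     lambda: random.weibullvariate(2.26, 2.0),  # mean ~2 mm/hr
}
for name, sampler in samplers.items():
    totals = simulate_daily_totals(365, storm_rate=0.02, cells_per_storm=3,
                                   cell_duration_hr=4, depth_sampler=sampler)
    print(f"{name:11s} mean daily {sum(totals)/len(totals):.2f} mm, "
          f"max daily {max(totals):.1f} mm")
```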

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has come to the fore. Visualization gives people an effective and clear way to understand analysis results, and at the same time it serves as the GUI (Graphical User Interface) that mediates communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data, and implementing such a loosely coupled architecture calls for design patterns such as MVC (Model-View-Controller), which minimizes the coupling between the UI and the data-processing parts. Big data can be divided into structured and unstructured data, and structured data are comparatively easy to visualize. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems have usually been developed per project to overcome the limitations of traditional visualization systems built for structured data. Text data, which make up a large share of unstructured data, are even harder to visualize because the technologies for analyzing them, such as linguistic analysis, text mining, and social network analysis, are complex and not standardized. This makes it difficult to reuse the visualization system of one project in another, which we attribute to a lack of commonality-oriented design that would allow a visualization system to be extended to other systems. In this research, we suggest a common information model for visualizing text data and propose TexVizu, a comprehensive and reusable framework for text visualization. We first survey representative studies in the text visualization area and identify common elements and patterns across various cases. We then review and analyze these elements and patterns from three viewpoints, structural, interactive, and semantic, and design an integrated text data model that represents the elements to be visualized. The structural viewpoint identifies structural elements of text documents such as title, author, and body; the interactive viewpoint identifies the types of relations and interactions between documents such as post, comment, and reply; and the semantic viewpoint identifies semantic elements extracted by linguistic analysis and represented as tags that classify entity types such as person, place, time, and event. We then extract common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each requirement type comprises the required visualization techniques, the data involved, and the goal (what to know). These requirements are the key to designing a framework in which the visualization system remains loosely coupled from the data processing and analysis systems.
    Finally, we designed TexVizu, a common text visualization framework that is reusable and extensible across visualization projects: it collaborates with various Text Data Loaders and Analytical Text Data Visualizers through common interfaces such as ITextDataLoader and IATDProvider, and it comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller. In this framework, the external components are specified only by the interfaces required to collaborate with it. As an experiment, we applied the framework to two text visualization systems: a social opinion mining system and an online news analysis system.
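
A minimal Python sketch of the interface-based structure described above is given below. The interface names ITextDataLoader and IATDProvider come from the abstract, but every method signature, class body, and field is an assumption made for illustration, not the framework's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class AnalyticalTextData:
    """Integrated text data model: structural, interactive and semantic elements."""
    structure: dict = field(default_factory=dict)   # e.g. title, author, body
    relations: list = field(default_factory=list)   # e.g. post/comment/reply links
    tags: list = field(default_factory=list)        # e.g. person, place, time, event

class ITextDataLoader(ABC):
    """External component that loads raw text documents into the framework."""
    @abstractmethod
    def load(self, source: str) -> list: ...

class IATDProvider(ABC):
    """External component that turns raw documents into analytical text data."""
    @abstractmethod
    def analyze(self, documents: list) -> AnalyticalTextData: ...

class TexVizuController:
    """Controller role in the MVC-style framework: wires loader, analyzer, view."""
    def __init__(self, loader: ITextDataLoader, provider: IATDProvider):
        self.loader = loader
        self.provider = provider

    def visualize(self, source: str) -> AnalyticalTextData:
        documents = self.loader.load(source)
        model = self.provider.analyze(documents)
        # A real visualizer (view) would render `model`; here we just return it.
        return model
```

The point of the sketch is the loose coupling the abstract emphasizes: a project-specific loader or analyzer can be swapped in by implementing the two abstract interfaces, without touching the controller or the data model.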

A Dynamic Behavior Evaluation of the Curved Rail according to Lateral Spring Stiffness of Track System (궤도시스템의 횡탄성에 따른 곡선부 레일의 동적거동평가)

  • Kim, Bag-Jin;Choi, Jung-Youl;Chun, Dae-Sung;Eom, Mac;Kang, Yun-Suk;Park, Yong-Gul
    • Proceedings of the KSR Conference / 2007.11a / pp.517-528 / 2007
  • Existing domestic and international research on rail damage factors has focused on track laying, vehicle conditions, running speed, and driving habits, while overlooking the characteristics of the track structure itself (elasticity, maintenance, etc.). Concrete track lacks the lateral spring stiffness, often called ballast lateral resistance, that ballast track provides, and existing studies generally show that concrete track has a rail replacement life cycle about two times shorter than ballast track because of abrasion. A review of domestic concrete track design and operating performance shows that the elasticity of concrete track is lower than that of ballast track, resulting in greater damage to rails and track. Although concrete track generally allows easier adjustment of track elasticity than ballast track, and European practice recommends designing concrete track with vertical elastic stiffness equal to or better than that of ballast track, the lateral spring stiffness of track has received very little review either domestically or internationally. In this research, therefore, the dynamic behavior of commonly used ballast and concrete tracks is studied through on-site measurements of in-service lines and analysis of maintenance performance, in order to infer the elasticity of in-service track, experimentally demonstrate the effect of track lateral spring stiffness on curved rail damage, examine the correlation between track elasticity and rail damage for each track system, and highlight the importance of an appropriate elastic stiffness level for both concrete and ballast track.
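
The general relationship between support stiffness and rail response that motivates the study can be illustrated with the classical Winkler beam-on-elastic-foundation formulas, sketched below in Python for vertical bending under a wheel load. The rail properties and support moduli are generic illustrative values; the paper's lateral-stiffness analysis is not reproduced here.

```python
def rail_response(load_N, k_support, E=2.1e11, I=3.05e-5):
    """Winkler beam-on-elastic-foundation response of a rail under a point load.

    k_support : foundation modulus (N/m of deflection per metre of rail)
    E, I      : rail steel modulus (Pa) and moment of inertia (m^4), UIC60-like.
    Returns (max deflection in mm, max bending moment in kN*m) under the load.
    """
    beta = (k_support / (4.0 * E * I)) ** 0.25   # characteristic length parameter
    y_max = load_N * beta / (2.0 * k_support)    # deflection under the load
    m_max = load_N / (4.0 * beta)                # bending moment under the load
    return y_max * 1e3, m_max / 1e3

# Softer vs. stiffer support (illustrative values only) for a 100 kN wheel load.
for k in (2.0e7, 5.0e7, 1.0e8):
    y, m = rail_response(load_N=100e3, k_support=k)
    print(f"k = {k:.1e} N/m^2 -> deflection {y:.2f} mm, moment {m:.1f} kN*m")
```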

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed to process a massive amount of unstructured log data and to execute the considerable number of functions required to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides a way to process unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to expand nodes when rapidly growing data must be distributed across multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a flexible, schema-free structure, is used in the proposed system. MongoDB was chosen because it makes it easy to process unstructured log data through its flexible schema structure, facilitates node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
    The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module for each analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted according to the user's various analysis conditions, while the aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a log data processing system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through a MongoDB insert performance evaluation over various chunk sizes.
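
As one way to picture the log collector module's routing role, the Python sketch below classifies incoming log entries and sends real-time types to a MySQL placeholder while inserting the rest into MongoDB via the pymongo driver (assumed to be installed). The log types, database and collection names, and the store_in_mysql helper are hypothetical; only the general route-by-type idea comes from the abstract.

```python
from datetime import datetime, timezone
from pymongo import MongoClient   # assumed driver; connection details are illustrative

# Hypothetical routing rule: log types that need real-time graphs go to MySQL,
# everything else is aggregated in MongoDB (names below are not from the paper).
REALTIME_TYPES = {"login_failure", "transaction_error"}

mongo = MongoClient("mongodb://localhost:27017")
log_store = mongo["bank_logs"]["raw_logs"]

def store_in_mysql(entry):
    """Placeholder for the MySQL module that serves real-time analysis data."""
    print(f"-> MySQL (real-time): {entry}")

def collect(log_type, message):
    """Log collector module: classify by type and route to the proper store."""
    entry = {
        "type": log_type,
        "message": message,
        "collected_at": datetime.now(timezone.utc),
    }
    if log_type in REALTIME_TYPES:
        store_in_mysql(entry)
    else:
        log_store.insert_one(entry)        # unstructured logs go to MongoDB

collect("login_failure", "3 failed logins for one account")
collect("batch_job", "nightly settlement batch finished in 42s")
```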