• Title/Summary/Keyword: User Value


How to automatically extract 2D deliverables from BIM?

  • Kim, Yije; Chin, Sangyoon
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.1253-1253 / 2022
  • Although the construction industry is shifting from 2D-based to 3D BIM-based management processes, 2D drawings are still the standard for permits and construction. For this reason, 2D deliverables extracted from 3D BIM are among the essential outputs of BIM projects. However, due to technical and institutional problems in practice, extracting 2D deliverables from BIM requires additional work beyond generating the 3D BIM models, and the low data consistency between 3D BIM models and 2D deliverables is a major drag on productivity. Solving this requires building BIM data that meets the information requirements (IRs) for extracting 2D deliverables, so as to minimize users' workload and maximize the utilization of BIM data; even so, the extra drawing-creation work in the BIM process remains a burden on BIM users. The purpose of this study is therefore to increase the productivity of the BIM process by automating the extraction of 2D deliverables from BIM and securing data consistency between the BIM model and the 2D deliverables. Expert interviews were conducted to analyze the requirements for automating this extraction, and from those requirements the drawing types and drawing-expression elements requiring automated generation in the design development stage were derived. Finally, development methods for the elements requiring automation were classified and analyzed, and a process for automatically extracting BIM-based 2D deliverables through templates and rule-based automation modules was derived. The automation module was developed as an add-on to Revit, a representative BIM authoring tool, with 120 rule-based automation rulesets; combinations of these rulesets were used to automatically generate 2D deliverables from BIM. With this approach, about 80% of drawing-expression elements could be created automatically, simplifying the user's workflow compared with existing practice. The proposed automation process is expected to increase the productivity of extracting 2D deliverables from BIM, thereby increasing the practical value of BIM utilization.
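The paper's Revit add-on is not public, but the template-plus-ruleset idea it describes can be sketched in a few lines. The following is a toy illustration only: the rule names, element categories, and parameter keys are all invented for the example, not taken from the actual module.

```python
# Toy sketch of rule-based 2D-annotation generation from BIM elements.
# All rule names and element fields here are hypothetical illustrations
# of the template + ruleset idea, not the actual Revit add-on.
from dataclasses import dataclass, field

@dataclass
class BimElement:
    category: str                 # e.g. "Wall", "Door"
    params: dict = field(default_factory=dict)

def tag_rule(elem):
    """Attach a tag annotation for door/window elements."""
    if elem.category in ("Door", "Window"):
        return f"TAG:{elem.category}:{elem.params.get('mark', '?')}"
    return None

def dimension_rule(elem):
    """Emit a length dimension for elements that carry a length."""
    if "length_mm" in elem.params:
        return f"DIM:{elem.params['length_mm']}mm"
    return None

def apply_ruleset(elements, rules):
    """Run every rule over every element; keep non-empty annotations."""
    out = []
    for e in elements:
        for r in rules:
            a = r(e)
            if a is not None:
                out.append(a)
    return out

model = [
    BimElement("Wall", {"length_mm": 4200}),
    BimElement("Door", {"mark": "D-01"}),
]
annotations = apply_ruleset(model, [tag_rule, dimension_rule])
print(annotations)  # ['DIM:4200mm', 'TAG:Door:D-01']
```

Combining many such rulesets under a drawing template is, in spirit, how a rule-based module can cover a large share of drawing-expression elements automatically.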


Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun; Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.41-43 / 2022
  • Non-contact biometric information acquisition and analysis technology is attracting attention as a way to prevent and block infectious diseases in the wake of the COVID-19 pandemic. Invasive and contact-based acquisition methods have the advantage of measuring biometric information accurately, but close contact carries a risk of spreading contagious disease. To avoid this, non-contact methods that extract biometric information such as fingerprints, faces, irises, veins, voice, and signatures with automated devices are spreading across industries as data processing speeds and recognition accuracy improve. However, even with improved accuracy, non-contact acquisition is strongly influenced by the surrounding environment of the measured subject, resulting in distorted measurements and poor accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric analysis. The technique presents a model that considers contextual and user information during biometric measurement in order to improve performance; the proposed model analyzes signal information based on feature probability distributions, using context-based signal analysis to maximize the probability of the predicted value.
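One minimal way to read "analyzing signal information based on feature probability distributions" is to score a measured feature under context-specific distributions and pick the maximum-likelihood context. The sketch below does exactly that with invented Gaussian parameters; the paper's model is richer, so treat every number and name here as a placeholder.

```python
# Minimal sketch of context-conditioned signal interpretation: model each
# measurement context by a Gaussian over a feature score, then pick the
# maximum-likelihood context. Parameters are invented for illustration.
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of x under N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-context distributions of a genuine-user match score.
contexts = {
    "bright": (0.80, 0.05),   # (mean, std) under good lighting
    "dim":    (0.60, 0.10),   # lower, noisier scores in dim lighting
}

def most_likely_context(score):
    """Return the context under which the observed score is most probable."""
    return max(contexts, key=lambda c: gaussian_pdf(score, *contexts[c]))

print(most_likely_context(0.78))  # bright
print(most_likely_context(0.55))  # dim
```

A full system would condition on user information as well (per-user distributions), but the maximize-the-likelihood step stays the same.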


A Study on the Introduction of Library Services Based on Blockchain (블록체인 기반의 도서관 서비스 도입 및 활용방안에 관한 연구)

  • Ro, Ji-Yoon; Noh, Younghee
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.33 no.1 / pp.371-401 / 2022
  • Blockchain stores information in a distributed environment where it cannot be forged or altered, which is often noted to be similar to how librarians collect, preserve, and share authoritative information. Accordingly, this study examined blockchain technology as a way to collect and provide reliable information, increase work efficiency inside and outside the library, and strengthen cooperative networks. Based on literature surveys and case studies from other fields, the study proposes various ways to utilize blockchain technology in the library field. It first analyzed blockchain application areas and cases to confirm the feasibility and value of blockchain in libraries, and then proposed twelve utilization methods, divided into operation and service sectors. The operation sector covers digital identity-based user record storage and authentication, transparent management with traceable monitoring, a voting-based personnel and recruitment system, blockchain-governance-based network efficiency, and blockchain-based next-generation device management and information integration. The service sector covers improved efficiency of book purchasing and sharing through the simplification of intermediaries, digital content copyright protection and management, customized services based on customer behavior analysis, blockchain-based online learning platforms, sharing platforms, and P2P-based reliable information-sharing platforms.

The Study on Evaluation of Franchise Corporate Social Responsibility (국내 프랜차이즈 기업의 CSR 단계별 평가 및 제고 방안)

  • Park, Jin Yong; Chae, Danbi; Lim, Jiwon
    • The Korean Journal of Franchise Management / v.5 no.1 / pp.109-141 / 2014
  • Recently, consumer interest in firms that implement social-commitment activities has grown steadily. Consumers' evaluation of a corporation's level of corporate social responsibility (CSR) can affect the overall image of its products or services. These changes require marketers to consider the direct and indirect effects of CSR efforts on market performance. This phenomenon is also found in the franchise industry, where CSR is even more critical than in other industries, since each franchisor must care for its franchisees as well as end users. A franchisor's execution of CSR can increase end-user satisfaction through the consonance of activities provided by franchisees. However, most franchisors remain focused on traditional CSR activities. This study therefore aims to enhance understanding of CSR in franchising and to provide a phase model of CSR development for general firms, including franchises. After diagnosing firms with the proposed model, the study found that many franchisors show a large gap between their current CSR activities and the higher-level CSR policies they should be pursuing. The study calls on franchisors to close this gap by implementing new CSR efforts; if they answer this call, the franchise industry could become a best practice for creating shared value with other stakeholders.

A Study on the Application of the Price Prediction of Construction Materials through the Improvement of Data Refactor Techniques (Data Refactor 기법의 개선을 통한 건설원자재 가격 예측 적용성 연구)

  • Lee, Woo-Yang; Lee, Dong-Eun; Kim, Byung-Soo
    • Korean Journal of Construction Engineering and Management / v.24 no.6 / pp.66-73 / 2023
  • The construction industry suffers losses from failed demand forecasts caused by price fluctuations in construction raw materials, increased user costs due to project cost changes, and the lack of a forecasting system. Accordingly, the accuracy of construction raw-material price forecasting needs to improve. This study aims to predict the prices of construction raw materials and verify applicability through an improvement of the Data Refactor technique. To improve prediction accuracy, the existing Data Refactor approach, which classifies series into low and high frequency and uses ARIMAX, was changed to a frequency-oriented classification using the ARIMA method, producing short-term (3 months ahead), mid-term (6 months ahead), and long-term (12 months ahead) price forecasts for six construction raw materials, including lumber and cement. The analysis showed that predictions based on the improved Data Refactor technique reduced error while capturing greater variability. It is therefore expected that budgets can be managed effectively by predicting construction raw-material prices more accurately with the proposed technique.
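The multi-horizon idea (3-, 6-, and 12-step-ahead forecasts from one fitted model) can be shown without the paper's Data Refactor or ARIMA machinery. The sketch below fits a plain AR(1) by least squares on a synthetic price series and iterates it forward; the series and the AR(1) choice are illustrative stand-ins, not the paper's method.

```python
# Toy multi-horizon forecasting sketch. The paper uses an improved
# Data Refactor classification with ARIMA; here a simple AR(1) model,
# x[t] = a + b * x[t-1], is fit by least squares on synthetic prices
# purely to show the 3/6/12-step-ahead forecasting pattern.

def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def forecast(series, horizon):
    """Iterate the fitted AR(1) forward `horizon` steps."""
    a, b = fit_ar1(series)
    x = series[-1]
    out = []
    for _ in range(horizon):
        x = a + b * x
        out.append(x)
    return out

prices = [100, 102, 101, 104, 106, 105, 108, 110, 109, 112]  # synthetic index
short, mid, long_ = forecast(prices, 3), forecast(prices, 6), forecast(prices, 12)
print(round(short[-1], 1), round(mid[-1], 1), round(long_[-1], 1))
```

A production forecaster would use a proper ARIMA fit (e.g. statsmodels) with order selection per frequency band, but the fit-then-iterate structure is the same.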

A Design Perspective on Instagram Addiction (디자인적 관점에서 바라본 인스타그램 중독)

  • Changhee Han
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.339-345 / 2023
  • Design exists behind technology. Design is intertwined with the needs of daily life and market structures, and while dealing with technology it can become insensitive to its meaning. Unlike other social media platforms, Instagram consists of image-based content. The purpose of this study is to examine the addictive design of Instagram and, further, to discuss the ethical responsibilities designers must bear. A theoretical framework for understanding Instagram design is established through a review of major domestic and international literature. The study surveys the history, structure, and functions of Instagram and identifies designs that promote social media addiction. It introduces the mechanisms by which Instagram promotes user addiction through three design issues: (1) pull-to-refresh, (2) the red color of push notifications, and (3) the profile-photo border in Instagram Stories. These designs stimulate users' social desires and FOMO, forming the structure of compulsive Instagram usage habits. Instagram is an example that forces us to reconsider, alongside the advancement of technology, the ethical role of design and designers and, in today's world, the intrinsic value of what they create, which encompasses our society and life itself.

Generative AI service implementation using LLM application architecture: based on RAG model and LangChain framework (LLM 애플리케이션 아키텍처를 활용한 생성형 AI 서비스 구현: RAG모델과 LangChain 프레임워크 기반)

  • Cheonsu Jeong
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.129-164 / 2023
  • As the use of Large Language Models (LLMs) expands with recent developments in generative AI, it is hard to find, in existing studies, actual application cases or implementation methods for using internal company data. Accordingly, this study presents a method of implementing generative AI services with an LLM application architecture based on the widely used LangChain framework. We review various ways to overcome the LLM's lack of internal or up-to-date information and present concrete solutions: we analyze fine-tuning versus direct use of document information, and examine in detail the main steps of storing and retrieving information with the retrieval-augmented generation (RAG) model. In particular, similar-context recommendation and question-answering (QA) systems are used to store and search information in a vector store under the RAG model. In addition, the concrete operation method, major implementation steps, and cases, including implementation source code and user interface, are presented to deepen understanding of generative AI technology. This work has value in enabling LLMs to be actively utilized in implementing services within companies.
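The store-then-retrieve loop at the heart of RAG can be shown without LangChain or a real LLM. In the framework-free sketch below, the "embedding" is a toy bag-of-words vector, the vector store is a list, and the LLM call is stubbed as a prompt string; the documents are invented examples of internal company data.

```python
# Minimal, framework-free sketch of the RAG storage/retrieval loop.
# The paper uses LangChain with a real LLM and embedding model; here the
# embedding is a toy bag-of-words Counter and the "LLM call" is stubbed.
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # hypothetical internal company documents
    "vacation policy employees receive 15 days paid leave",
    "expense reports must be filed within 30 days",
]
store = [(d, embed(d)) for d in docs]  # the "vector store"

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    qv = embed(query)
    return [d for d, _ in sorted(store, key=lambda p: -cosine(qv, p[1]))[:k]]

def answer(query):
    """Assemble the augmented prompt; a real system sends this to an LLM."""
    context = "\n".join(retrieve(query))
    return f"CONTEXT:\n{context}\nQUESTION: {query}"

print(answer("how many vacation days do employees get"))
```

Swapping in a learned embedding model, a persistent vector database, and an actual LLM call turns this skeleton into the architecture the paper describes.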

A Method for Selecting AI Innovation Projects in the Enterprise: Case Study of HR part (기업의 혁신 프로젝트 선정을 위한 모폴로지-AHP-TOPSIS 모형: HR 분야 사례 연구)

  • Chung Doohee; Lee Jaeyun; Kim Taehee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.5 / pp.159-174 / 2023
  • In this paper, we propose a methodology for effectively selecting and prioritizing new business and innovation projects that use AI technology. AI can upgrade businesses across industries and increase the added value of entire industries, but enterprises face various constraints and difficulties when deciding which AI projects to select and implement. We propose a new methodology for prioritizing AI projects using Morphology, AHP, and TOPSIS, which helps prioritize AI projects by simultaneously considering the technical feasibility of the AI technology and real-world user requirements. We applied the proposed methodology to a real enterprise that wanted to prioritize multiple AI projects in the HR field and evaluated the results, confirming its practical applicability and suggesting how it can support corporate decision-making on AI projects. The significance of the proposed methodology is that it provides a framework for prioritizing multiple candidate AI projects in a reasoned way by considering business and technical factors at the same time.
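Of the three techniques named, the final TOPSIS ranking step is the most mechanical and easy to sketch. Below, three hypothetical HR AI projects are ranked by closeness to the ideal solution; the criteria, weights, and scores are invented, and in the paper's pipeline the weights would come from AHP pairwise comparisons after a Morphology analysis.

```python
# Hedged sketch of the TOPSIS ranking step for AI projects. Criteria,
# weights, and scores are invented; the paper derives weights via AHP.
import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j.
    benefit[j]: True if criterion j is benefit-type, False if cost-type.
    Returns the closeness coefficient of each alternative (higher = better)."""
    ncol = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))]
         for row in matrix]
    # Ideal / anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        dp = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        dm = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(dm / (dp + dm))
    return scores

# Three hypothetical HR AI projects scored on feasibility, impact, cost.
matrix = [[8, 7, 3], [6, 9, 5], [9, 5, 2]]
weights = [0.5, 0.3, 0.2]          # e.g. taken from an AHP comparison
benefit = [True, True, False]      # cost is a cost-type criterion
scores = topsis(matrix, weights, benefit)
print([round(s, 3) for s in scores])
```

The highest closeness coefficient marks the project to pursue first; the Morphology step upstream defines the candidate projects themselves.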


Predicting the splitting tensile strength of manufactured-sand concrete containing stone nano-powder through advanced machine learning techniques

  • Manish Kewalramani; Hanan Samadi; Adil Hussein Mohammed; Arsalan Mahmoodzadeh; Ibrahim Albaijan; Hawkar Hashim Ibrahim; Saleh Alsulamy
    • Advances in Nano Research / v.16 no.4 / pp.375-394 / 2024
  • The extensive utilization of concrete has given rise to environmental concerns, specifically the depletion of river sand. To address this issue, waste deposits can provide manufactured sand (MS) as a substitute. The objective of this study is to apply machine learning to the production of manufactured-sand concrete (MSC) containing stone nano-powder by estimating the splitting tensile strength (STS) from nine inputs: compressive strength of cement (CSC), tensile strength of cement (TSC), curing age (CA), maximum size of the crushed stone (Dmax), stone nano-powder content (SNC), fineness modulus of sand (FMS), water-to-cement ratio (W/C), sand ratio (SR), and slump (S). A total of 310 data points covering these nine influential factors were collected through laboratory tests and divided into training and testing subsets comprising 90% (280 samples) and 10% (30 samples) of the data, respectively. Using this dataset, novel models were developed for evaluating the STS of MSC from the nine input features. The analysis revealed significant correlations of CSC and CA with STS, while sensitivity analysis with an empirical model showed that parameters such as FMS and W/C exert minimal influence on STS. Various loss functions were used to gauge the effectiveness and precision of the methodologies; the devised models showed good accuracy and reliability, with all models achieving an R-squared value above 0.75 and near-zero loss values. To make STS estimation easier for engineering work, a user-friendly graphical interface was also developed for the machine learning models. These models offer a practical alternative to laborious, expensive, and complex laboratory techniques, simplifying the production of mortar specimens.
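The evaluation protocol (a 90/10 split of 310 samples, then an R² check against the 0.75 bar) is simple to reproduce on synthetic data. In the sketch below the dataset and the one-parameter linear "model" are stand-ins; the paper's models and laboratory data are, of course, far more involved.

```python
# Toy illustration of the 90/10 split and R-squared scoring used to
# evaluate the STS models. The data and the linear "model" are synthetic
# stand-ins, not the paper's dataset or learners.
import random

random.seed(0)
# Synthetic dataset: (feature, target) pairs with target ~ 2*x + noise.
data = [(x, 2 * x + random.uniform(-0.5, 0.5)) for x in range(310)]
random.shuffle(data)
train, test = data[:280], data[280:]   # 280 train / 30 test, as in the paper

# "Fit": least-squares slope through the origin on the training set.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

def r_squared(pairs):
    """Coefficient of determination of the fitted slope on `pairs`."""
    ys = [y for _, y in pairs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - slope * x) ** 2 for x, y in pairs)
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

print(round(r_squared(test), 3))
```

Holding out the test split before fitting, as here, is what makes the reported R² an honest estimate of performance on unseen mixes.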

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it hard to expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is introduced because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and measuring query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
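The collector's routing rule (real-time log types to the relational store, bulk log types to the document store) can be sketched in a few lines. Below, the two stores are plain Python lists standing in for MySQL and MongoDB, and the log-type names are hypothetical examples, not the paper's taxonomy.

```python
# Sketch of the log-collector routing the abstract describes: logs that
# need real-time analysis go to the relational store (MySQL stand-in),
# everything else to the document store (MongoDB stand-in). Log type
# names are hypothetical examples.
REALTIME_TYPES = {"auth_failure", "transaction_error"}

mysql_store, mongo_store = [], []   # stand-ins for the two modules

def collect(log):
    """Classify a log record by type and route it to the right store."""
    target = mysql_store if log["type"] in REALTIME_TYPES else mongo_store
    target.append(log)

for log in [
    {"type": "transaction_error", "msg": "timeout"},
    {"type": "page_view", "msg": "/accounts"},
    {"type": "page_view", "msg": "/loans"},
]:
    collect(log)

print(len(mysql_store), len(mongo_store))  # 1 2
```

In the real system the MongoDB side additionally benefits from auto-sharding as volume grows, while the MySQL side keeps latency low for the small real-time subset.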