• Title/Abstract/Keywords: process analytics

Search results: 116

An Application of the Analytical Hierarchy Process (AHP) for Safety Measurement in the Malaysian Construction Industry

  • Samad Doostdar;Zubaidah Ismail
    • International Conference Proceedings
    • /
    • The 5th International Conference on Construction Engineering and Project Management
    • /
    • pp.66-73
    • /
    • 2013
  • The Analytic Hierarchy Process (AHP), introduced by Saaty in 1980, is a well-known Multi-Criteria Decision Making (MCDM) method. AHP is a methodology of hierarchical analysis that supports rational decision making by simplifying a complex problem. Decision making in safety management systems involves multifaceted challenges; AHP improves the understanding of complex decisions by decomposing the problem into a hierarchical structure. By integrating all relevant decision criteria, its pairwise comparisons allow the decision maker to establish the trade-offs among objectives. In recent years, Malaysia's economy and infrastructure development have grown significantly and rapidly. The construction industry continues to play a major role in this development, as many construction activities have been carried out to meet the high demands of the expanding market. However, the industry faces a wide range of challenges, one of which is the frequent occurrence of workplace accidents. An effective safety program can substantially reduce accidents because it helps management develop safer means of operation and create safe working environments for workers. Furthermore, an effective safety program can embed a good safety culture in an organization, as it encourages mutual cooperation between management and workers in operating the program and in decisions that affect their safety and health. This research focuses on developing an Analytic Hierarchy Process (AHP) methodology for construction safety factors and investigates the levels of several effective elements of safety management systems (SMS) in the Malaysian construction industry.
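To make the pairwise comparison mechanics concrete, here is a minimal Python sketch (not taken from the paper) that derives priority weights and a consistency ratio from a hypothetical 3×3 judgment matrix; the criteria and judgment values are invented for illustration.

```python
import numpy as np

# Reciprocal pairwise comparison matrix on Saaty's 1-9 scale for three
# hypothetical safety criteria (e.g., training, equipment, supervision).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights: the principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI, with Saaty's
# random index RI = 0.58 for n = 3; CR < 0.1 is conventionally acceptable.
n = A.shape[0]
CR = ((eigvals.real[k] - n) / (n - 1)) / 0.58

print(weights, CR)
```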


A Multi-channel CNN Based Online Review Helpfulness Prediction Model

  • 이흠철;윤효림;이청용;김재경
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 28, No. 2
    • /
    • pp.171-189
    • /
    • 2022
  • Online reviews play an important role in consumers' purchase decision-making, so it is important to provide consumers with useful and reliable reviews. Previous studies on online review helpfulness prediction estimated helpfulness mainly from the consistency between the review text and the rating. However, because they represented the rating as a scalar, their expressive capacity was limited, or they learned the interaction between the rating and the review text only to a limited extent. To overcome these limitations, this study proposes CNN-RHP (CNN based Review Helpfulness Prediction), a model that effectively learns the interaction between review text and rating information. First, a multi-channel CNN is applied to extract semantic features from the review text. Next, the rating is converted into an independent high-dimensional embedding vector with the same dimensionality as the text features. Finally, the consistency between the review text and the rating is learned through element-wise operations. To evaluate the performance of the proposed CNN-RHP model, we used online consumer reviews collected from Amazon.com. The experimental results confirm that the proposed model outperforms several models proposed in previous studies. These results can provide meaningful implications for offering review helpfulness prediction services to consumers on e-commerce platforms.
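The interaction described above (multi-channel CNN text features combined element-wise with a rating embedding of the same dimensionality) can be sketched roughly as follows in PyTorch; all layer sizes and hyperparameters are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class CNNRHP(nn.Module):
    """Sketch of a multi-channel CNN review-helpfulness predictor."""

    def __init__(self, vocab_size=20000, emb_dim=128, n_filters=64,
                 widths=(3, 4, 5), n_ratings=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One convolution channel per filter width over the token sequence.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        text_dim = n_filters * len(widths)
        # The rating is embedded into the same dimensionality as the text
        # features so the two can interact element-wise.
        self.rating_embed = nn.Embedding(n_ratings, text_dim)
        self.out = nn.Linear(text_dim, 1)

    def forward(self, tokens, rating):
        x = self.embed(tokens).transpose(1, 2)       # (batch, emb, seq)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        text = torch.cat(feats, dim=1)               # (batch, text_dim)
        r = self.rating_embed(rating - 1)            # ratings 1..5 -> ids 0..4
        return self.out(text * r)                    # element-wise interaction

# Toy usage: two reviews of 50 tokens with ratings 5 and 1.
model = CNNRHP()
tokens = torch.randint(0, 20000, (2, 50))
print(model(tokens, torch.tensor([5, 1])).shape)     # torch.Size([2, 1])
```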

A Study on an Adaptive Learning Model for Performance Improvement of Stream Analytics

  • 구진희
    • Journal of Convergence for Information Technology
    • /
    • Vol. 8, No. 1
    • /
    • pp.201-206
    • /
    • 2018
  • As technologies for implementing artificial intelligence have become widespread, machine learning in particular is being used broadly. Machine learning collects large volumes of data, processes them in batches, and provides insights that inform final actions, but the effect of those actions is not immediately incorporated into the learning process. This study proposes an adaptive learning model to improve the performance of real-time stream analytics, a major business issue. Adaptive learning builds an ensemble by adapting to the complexity of the data set, and the algorithm uses the available data to determine the optimal data points to sample. In experiments on six standard data sets, the adaptive learning model outperformed simple machine learning models for classification in both training time and accuracy. In particular, the support vector machine performed best at the back end of all ensembles. The adaptive learning model is expected to be broadly applicable to problems that require adaptively updating inferences as various parameters change over time.
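As a rough illustration of the idea of growing an ensemble only while additional sampled data still improves accuracy, here is a toy scikit-learn sketch; the plateau heuristic, base learners, and data set are assumptions for illustration, not the paper's algorithm.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier([
    ("svm", SVC()),                            # strong member per the abstract
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=2000)),
])

# Grow the training sample only while the held-out score keeps improving:
# a crude stand-in for choosing "optimal data points to sample".
best_score, best_size, size = 0.0, 0, 200
while size <= len(X_tr):
    score = ensemble.fit(X_tr[:size], y_tr[:size]).score(X_te, y_te)
    if score <= best_score + 1e-3:             # plateau: stop adding data
        break
    best_score, best_size = score, size
    size *= 2

print(f"sample size {best_size}, accuracy {best_score:.3f}")
```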

Analysis of the influence of food-related social issues on corporate management performance using a portal search index

  • Yoon, Chaebeen;Hong, Seungjee;Kim, Sounghun
    • Korean Journal of Agricultural Science
    • /
    • Vol. 46, No. 4
    • /
    • pp.955-969
    • /
    • 2019
  • Analyzing online consumer responses is directly related to the management performance of food companies. This study therefore collected and analyzed consumer-generated data from an online portal site about food companies involved in social issues and examined the relationship between those data and management performance. Through this process, we identified consumers' awareness of these companies via big data analysis and related the results to the companies' sales and stock prices through time-series graphs and correlation analysis. The results were as follows. First, the text mining analysis suggests that consumers respond more sensitively to negative issues than to positive ones. Second, sentiment analysis showed that corporate ethics issues (Enterprises 3 and 4) exhibit more persistent sentiment than food safety issues, which suggests that ethical management problems strongly influence consumers' purchasing behavior. Finally, for all negative food issues, word frequencies and sentiment scores showed opposite trends, and the correlation analysis found correlations between word frequency and stock price and between sentiment score and stock price. Studies using big data analytics have recently been conducted in various fields; based on this research, such studies are expected to extend into the agricultural field as well.
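A minimal sketch of the kind of correlation analysis described, using pandas with invented daily series for one hypothetical negative issue:

```python
import pandas as pd

# Hypothetical daily series for one negative food issue: portal mention
# counts, sentiment scores, and closing stock price (all values invented).
df = pd.DataFrame({
    "mentions":  [120, 340, 510, 300, 180, 90],
    "sentiment": [-0.2, -0.6, -0.8, -0.5, -0.3, -0.1],
    "close":     [50200, 49100, 47800, 48300, 49000, 49600],
})

# Pairwise Pearson correlations, mirroring the word-frequency / sentiment /
# stock-price comparison; a negative issue should show opposite trends.
print(df.corr(method="pearson"))
```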

Structuring Risk Factors of Industrial Incidents Using Natural Language Processing

  • 강성식;장성록;이종빈;서용윤
    • Journal of the Korean Society of Safety
    • /
    • Vol. 36, No. 1
    • /
    • pp.56-63
    • /
    • 2021
  • The narrative texts of industrial accident reports help to identify accident risk factors: they relate the accident triggers to the sequence of events and the outcomes of an accident. In particular, a set of related keywords in the context of a narrative can represent how the accident proceeded. Previous studies on text analytics for structuring accident reports have been limited to extracting individual keywords without context. To remedy this shortcoming, we propose a context-based analysis using a Natural Language Processing (NLP) algorithm. This study applies Word2Vec, a neural-network word-embedding algorithm, to extract adjacent keywords. Word2Vec takes adjacent keywords in the narrative texts as inputs for its supervised learning; the keyword weights emerge as vectors representing the degree of neighboring among keywords. Similar keyword weights mean that the keywords are closely arranged within sentences in the narrative text; consequently, a set of keywords with similar weights represents similar accidents. We extracted ten accident processes containing related keywords and used them to understand the risk factors that determine how an accident proceeds. This information helps identify how a checklist for an accident report should be structured.
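A minimal gensim sketch of the Word2Vec step described (skip-gram embeddings over tokenized accident narratives, then a neighbor lookup); the toy sentences are invented:

```python
from gensim.models import Word2Vec

# Tokenized narrative texts from accident reports (toy, invented examples;
# the real input would be the tokenized report corpus).
sentences = [
    ["worker", "fell", "ladder", "unsecured", "fracture"],
    ["worker", "slipped", "wet", "floor", "sprain"],
    ["ladder", "unsecured", "fall", "head", "injury"],
]

# Skip-gram Word2Vec: adjacent keywords supervise the embedding, so words
# co-occurring in similar accident contexts receive similar weight vectors.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Keywords whose vectors are similar point to similar accident processes.
print(model.wv.most_similar("ladder", topn=3))
```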

Exploring the Prediction of Timely Stocking in the Purchasing Process Using Process Mining and Deep Learning

  • 강영식;이현우;김병수
    • Information Systems Review
    • /
    • Vol. 20, No. 4
    • /
    • pp.25-41
    • /
    • 2018
  • Applying predictive analytics to enterprise processes is an effective way to reduce operating costs and increase productivity. Accordingly, the ability to predict the behavior and performance indicators of business processes is regarded as a core corporate competency. Recently, research on process prediction using deep learning in the form of recurrent neural networks has attracted considerable attention. In particular, approaches that use recurrent neural networks to predict the next activity have shown excellent results. However, no research has applied dynamic recurrent neural network deep learning to the prediction of process performance indicators. To fill this gap, this study develops an approach that combines process mining with deep learning based on dynamic recurrent neural networks. Using real data from a large Korean company, the approach was applied to predicting timely stocking, an important performance indicator of the purchasing process. The experimental method and results, as well as the implications and limitations of the study, are presented.
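The abstract does not specify the network architecture, so the following PyTorch sketch only illustrates the general recipe of feeding event-log activity sequences into a recurrent network to predict a binary performance indicator (on-time receipt); the activity vocabulary and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class TimelyStockingLSTM(nn.Module):
    """Sketch: predict whether a purchase case will be stocked on time
    from its activity trace (an event-log sequence from process mining)."""

    def __init__(self, n_activities=30, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_activities, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, traces):                 # traces: (batch, seq) activity ids
        _, (h, _) = self.lstm(self.embed(traces))
        return torch.sigmoid(self.head(h[-1]))  # P(on-time receipt)

# Toy usage: two purchase-order traces encoded as activity ids.
model = TimelyStockingLSTM()
traces = torch.tensor([[1, 4, 7, 2, 9], [1, 4, 5, 5, 9]])
print(model(traces))
```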

Efficient Top-K Query Computation for Encrypted Data in the Cloud

  • 김종욱
    • Journal of Korea Multimedia Society
    • /
    • Vol. 18, No. 8
    • /
    • pp.915-924
    • /
    • 2015
  • With the growing popularity of cloud computing services, users can more easily manage massive amounts of data by outsourcing them to the cloud, and more efficiently analyze large amounts of data by leveraging the IT infrastructure the cloud provides. This, however, raises security concerns for sensitive data. To provide data security, it is essential to encrypt sensitive data before uploading it to cloud computing services. Although encryption helps secure the data, it degrades the performance of massive data analytics because it prevents the use of indexes and mathematical operations on the encrypted data. In this paper, we therefore propose a novel algorithm that can efficiently process large amounts of encrypted data. In particular, we propose a top-k processing algorithm for massive encrypted data in cloud computing environments and verify its performance with experiments on real data.
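The abstract does not disclose the algorithm itself, but the general idea of server-side top-k selection over ciphertexts can be illustrated with a toy order-preserving encoding; the stub below is purely pedagogical, offers no real security, and is not the paper's method.

```python
import heapq

# Toy order-preserving "encryption": a strictly increasing keyed mapping.
# Real OPE schemes are far more involved; this stub only preserves order
# so that an untrusted server can compare ciphertexts.
KEY = 17

def ope_encrypt(score: int) -> int:
    return score * KEY + 3                 # strictly increasing in `score`

def server_top_k(ciphertexts, k):
    # The server ranks by ciphertext alone; order preservation guarantees
    # the result matches the plaintext top-k.
    return heapq.nlargest(k, ciphertexts, key=lambda item: item[1])

records = {"doc1": 83, "doc2": 95, "doc3": 70, "doc4": 91}
uploaded = [(doc_id, ope_encrypt(s)) for doc_id, s in records.items()]
print(server_top_k(uploaded, k=2))         # doc2 and doc4 rank highest
```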

Considerations for generating meaningful HRA data: Lessons learned from HuREX data collection

  • Kim, Yochan
    • Nuclear Engineering and Technology
    • /
    • Vol. 52, No. 8
    • /
    • pp.1697-1705
    • /
    • 2020
  • To enhance the credibility of human reliability analysis, various kinds of data have recently been collected and analyzed. Although it is obvious that the quality of data is critical, the practices and considerations for securing data quality have not been sufficiently discussed. In this work, based on experience from recent human reliability data extraction projects, which produced more than fifty thousand data points, we derive a number of issues to be considered when generating meaningful data. Thirteen considerations are presented, pertaining to four different data extraction activities: preparation, collection, analysis, and application. Although these lessons were acquired from a single kind of data collection framework, it is believed that the results will guide researchers toward the important issues in the data extraction process.

Improving Elasticsearch for Chinese, Japanese, and Korean Text Search through Language Detector

  • Kim, Ki-Ju;Cho, Young-Bok
    • Journal of information and communication convergence engineering
    • /
    • Vol. 18, No. 1
    • /
    • pp.33-38
    • /
    • 2020
  • Elasticsearch is an open-source search and analytics engine that can search petabytes of data in near real time. It is designed as a horizontally scalable, highly available distributed system, and it provides RESTful APIs, making it programming-language agnostic. Full-text search of multilingual text requires language-specific analyzers and field mappings appropriate for indexing and searching that text, and a language detector can be used in conjunction with the analyzers to improve multilingual search. Elasticsearch provides more than 40 language analysis plugins that process text and extract language-specific tokens, as well as language detector plugins that determine the language of a given text. This study investigates three approaches to indexing and searching Chinese, Japanese, and Korean (CJK) text (a single analyzer, multi-fields, and a language detector-based approach) and identifies the advantages of the language detector-based approach over the other two.
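A minimal sketch of the language detector-based approach, assuming the smartcn, kuromoji, and nori analysis plugins are installed on the cluster and using the langdetect Python package as a client-side stand-in for a detector plugin:

```python
from elasticsearch import Elasticsearch   # assumes the 8.x Python client
from langdetect import detect             # stand-in for a detector plugin

es = Elasticsearch("http://localhost:9200")

# One field per language, each with its CJK analysis plugin.
es.indices.create(index="cjk_docs", mappings={"properties": {
    "text_zh": {"type": "text", "analyzer": "smartcn"},
    "text_ja": {"type": "text", "analyzer": "kuromoji"},
    "text_ko": {"type": "text", "analyzer": "nori"},
}})

def index_doc(text: str) -> None:
    """Detect the language and route the text to the field whose analyzer
    matches, instead of indexing into all three fields."""
    lang = detect(text)  # returns e.g. 'ko', 'ja', 'zh-cn'
    field = {"ko": "text_ko", "ja": "text_ja"}.get(lang, "text_zh")
    es.index(index="cjk_docs", document={field: text})

index_doc("엘라스틱서치는 분산형 검색 엔진이다.")
```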

GNI Corpus Version 1.0: Annotated Full-Text Corpus of Genomics & Informatics to Support Biomedical Information Extraction

  • Oh, So-Yeon;Kim, Ji-Hyeon;Kim, Seo-Jin;Nam, Hee-Jo;Park, Hyun-Seok
    • Genomics & Informatics
    • /
    • Vol. 16, No. 3
    • /
    • pp.75-77
    • /
    • 2018
  • Genomics & Informatics (NLM title abbreviation: Genomics Inform) is the official journal of the Korea Genome Organization. A text corpus of this journal annotated with various levels of linguistic information would be a valuable resource, as information extraction requires syntactic, semantic, and higher levels of natural language processing. In this study, we publish our new corpus, GNI Corpus version 1.0, extracted and annotated from the full texts of Genomics & Informatics with an NLTK (Natural Language Toolkit)-based text mining script. This preliminary version of the corpus can be used as a training and testing set for systems serving a variety of future biomedical text mining functions.
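A minimal example of the kind of NLTK-based annotation described (tokenization plus part-of-speech tagging); the sentence is invented in the style of a Genomics & Informatics abstract:

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# An invented sentence in the style of a Genomics & Informatics abstract.
sentence = "BRCA1 mutations were detected in Korean breast cancer patients."

tokens = nltk.word_tokenize(sentence)      # sentence -> word tokens
tagged = nltk.pos_tag(tokens)              # token-level POS annotation
print(tagged)
```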