• Title/Summary/Keyword: dictionary-based model (사전 기반 모델)


Exploring ESG Activities Using Text Analysis of ESG Reports -A Case of Chinese Listed Manufacturing Companies- (ESG 보고서의 텍스트 분석을 이용한 ESG 활동 탐색 -중국 상장 제조 기업을 대상으로-)

  • Wung Chul Jin;Seung Ik Baek;Yu Feng Sun;Xiang Dan Jin
    • Journal of Service Research and Studies / v.14 no.2 / pp.18-36 / 2024
  • As interest in ESG has increased, it is easy to find papers that empirically show that a company's ESG activities have a positive impact on its performance. However, research on which ESG activities companies should actually engage in is relatively scarce. Accordingly, this study systematically classifies companies' ESG activities and seeks to provide insight to companies planning new ESG activities. It analyzes how Chinese manufacturing companies perform ESG activities based on their dynamic capabilities in the global economy and how their activities differ. The data consist of the ESG annual reports of 151 Chinese manufacturing companies listed on the Shanghai and Shenzhen stock exchanges and the ESG indicators of the China Securities Index Company (CSI). The study addresses three research questions. The first is whether ESG activities differ between companies with high ESG scores (TOP-25) and companies with low ESG scores (BOT-25); the second is whether ESG activities changed over a 10-year period (2010-2019), focusing only on companies with high ESG scores. The results show a significant difference in ESG activities between high and low scorers, whereas tracking the year-to-year activities of the TOP-25 companies revealed no change. For the third question, social network analysis was conducted on the E/S/G keywords: using a co-occurrence matrix, the companies' ESG activities were visualized in a four-quadrant graph, and directions for ESG activities were set on this basis.
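The keyword co-occurrence and network step described in this abstract can be illustrated with a small Python sketch. The keyword lists and sentences below are toy placeholders rather than the paper's report data, and networkx centrality stands in for the social network analysis:

    # Minimal sketch: keyword co-occurrence matrix + keyword network (toy data)
    from collections import Counter
    from itertools import combinations

    import networkx as nx

    # Hypothetical tokenized sentences from ESG reports (one keyword list per sentence)
    sentences = [
        ["emission", "reduction", "energy"],
        ["employee", "safety", "training"],
        ["board", "audit", "disclosure"],
        ["emission", "energy", "disclosure"],
    ]

    # Count how often each pair of keywords appears in the same sentence
    cooccurrence = Counter()
    for tokens in sentences:
        for a, b in combinations(sorted(set(tokens)), 2):
            cooccurrence[(a, b)] += 1

    # Build a keyword network for social network analysis (e.g., degree centrality)
    g = nx.Graph()
    for (a, b), weight in cooccurrence.items():
        g.add_edge(a, b, weight=weight)

    centrality = nx.degree_centrality(g)
    print(sorted(centrality.items(), key=lambda kv: -kv[1]))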

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.7 / pp.21-28 / 2017
  • Practical models of FPGA architectures, which include performance- and density-enhancing components such as carry chains, wide function multiplexers, and memory/multiplier blocks, are being adopted by academic FPGA placement tools that used to rely on simple imaginary models. Techniques such as pre-packing and multi-layer density analysis were previously proposed to remedy issues arising from these practical models, and they effectively minimize wire length during initial analytic placement. Since timing, rather than wire length, should be optimized, most previous work takes timing constraints into account; however, the timing-driven techniques are mostly applied not to the initial analytic placement but to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets timing constraints given in the standard SDC format and minimize the detected violations, into an existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at the end points is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated per clock and added to the objective function. Because it can unnecessarily tighten paths that have no violations, a second function that calculates and minimizes the largest negative slack at the end points is also proposed and compared. Since the existing, non-timing-driven legalization is used before timing analysis, any timing improvement is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, and the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
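A hedged formalization of the two per-clock timing terms described in this abstract (the placer's exact weights $\lambda_c$ and any smooth approximation of the max are not stated there; $W$, $\lambda_c$, $a_e$, and $T_c$ are assumed notation):

$$\Phi_{\mathrm{arr}}(\mathbf{x}) \;=\; W(\mathbf{x}) + \sum_{c \in \mathcal{C}} \lambda_c \, \max_{e \in E_c} a_e(\mathbf{x}), \qquad \Phi_{\mathrm{wns}}(\mathbf{x}) \;=\; W(\mathbf{x}) + \sum_{c \in \mathcal{C}} \lambda_c \, \max_{e \in E_c} \big(a_e(\mathbf{x}) - T_c\big)^{+}$$

where $W(\mathbf{x})$ is the wire-length objective over cell positions $\mathbf{x}$, $\mathcal{C}$ the set of clocks, $E_c$ the timing end points of clock $c$, $a_e$ the arrival time at end point $e$, $T_c$ the period of clock $c$, and $(\cdot)^{+}=\max(\cdot,0)$. The first form penalizes the largest arrival time per clock; the second penalizes only the violating (negative-slack) end points.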

Potential Evaluation of the Royal Silver Mines, Mexico (멕시코 로얄 은광산 잠재성 평가)

  • Heo, Cheol-Ho;Kim, Ui-Jun
    • The Korean Earth Science Society: Conference Proceedings (한국지구과학회: 학술대회논문집) / 2010.04a / pp.108-109 / 2010
  • IMPACT Silver Corp. has acquired the Royal Mines (hereafter the Royal mines) of the Zacualpan project. Title to the 124.5 km² property is conditioned on purchasing the mining rights to the operating mines and leasing the operating infrastructure from two private Mexican companies. The project area is located 100 km southwest of Mexico City and 25 km northwest of the Taxco silver mine. The infrastructure is rated excellent, with unpaved road access, ample power and water supply, and a skilled workforce. The acquisition covers mining and processing facilities that were operated under private Mexican ownership without formally defined reserves or resources. IMPACT Silver's main exploration objectives on the property are to evaluate the potential for extending the known mineralized zones and to discover prospective areas for new deposits elsewhere. The Royal mines of the Zacualpan project lie in the northern part of the southeastern Guerrero terrane. The Teloloapan subterrane consists of Late Jurassic to Early Cretaceous volcano-sedimentary sequences, mostly at low-grade greenschist facies. Most of the prospective areas are hosted by intermediate to mafic volcaniclastic rocks of the Lower Villa de Ayala Formation. Polyphase metamorphism occurs throughout the district and controls the associated mineralization in the Zacualpan mining district. The Zacualpan mining district lies within a prospective mineral belt known as the Sierra Madre del Sur, which is dominated by volcanogenic massive sulfide deposits and epithermal vein deposits. Most of the epithermal mineralization formed 320-380 million years ago, during a plate-tectonic regime of active magma generation. Historically, the most important structure has been the Lipton Vein. Current mining grades in the Zacualpan district are reported at about 200-500 g/t silver. Some areas carry high-grade silver mineralization (over 1,000 g/t Ag), which is the main exploration target. Silver mineralization at Zacualpan is well known in the form of silver-enriched intermediate-sulfidation epithermal veins; large, well-known Mexican mines of this type include the Fresnillo, Pachuca, and Taxco mines, which produce gold, zinc, and lead as by-products. The deposits occur as veins, breccias, and disseminations or stockworks. The mineralization consists mainly of pyrite with varying amounts of sphalerite, galena, and silver or gold minerals within quartz and carbonate veins. The vertical extent of economic mineralization averages roughly 300 m; at Fresnillo in central Mexico, the mineralization is known to extend from 100 m to 960 m. Based on long-term observations by mine operators at Zacualpan and the results of IMPACT Silver's recent work, an exploration model for the Zacualpan district has been developed as a guide to the discovery of new deposits. The most economic mineralization in the district follows northwest- and north-south-trending vein structures. These vein structures can often be traced for several kilometers across the district, but economic mineralization forms ore shoots at structurally favorable sites along them. The most favorable structural sites for ore shoots are where the northwest- and north-south-trending veins intersect. The main ore shoots mined over the past 30 years are 2-6 m wide, 30-150 m in horizontal extent, and 230-300 m in vertical extent. The most productive ore shoots develop where north-south-trending secondary veins cross the Lipton vein at the Guadalupe mine. To the southeast, the zone currently producing high-grade silver from Silver Shoot No. 1 at the Compadres mine occurs where the northwest-trending San Agustin vein is cut by the north-trending Cometa Navideno vein. Host rock is another important control on mineralization: all economic mineralization in the district is hosted in intermediate to mafic volcanic rocks, particularly andesite and related host rocks. Where ore shoots pass into shale or schist, the veins split into small veinlets. In the typical epithermal deposits at Zacualpan, ore shoots show a vertical zoning in which silver content increases upward and lead-zinc content increases downward; variations in gold content are harder to predict but quite significant. Soil sampling, detailed mapping, trenching, and drilling, as applied within the Zacualpan exploration model, have proven to be IMPACT Silver's most effective exploration methods on the property. The Royal mines of the Zacualpan project include operating mines with their associated mining rights and infrastructure capable of processing 500 tonnes per day. To date, IMPACT Silver has carried out a four-stage exploration program on two target areas, consisting of detailed mapping, soil and rock sampling, and reconnaissance drilling of 12 holes totaling 1,866 m. A total of 1,953 rock samples, 1,631 soil samples, and 389 drill-core samples were collected and analyzed. This work identified numerous prospective mineralized zones that warrant further exploration. Underground samples from the currently operating workings at the Compadres mine returned 680 g/t silver and 0.3 g/t gold over a 0.9 m ore width on Level 1, and 12,591 g/t silver and 12.07 g/t gold over a 1.67 m ore width on Level 3. From Levels 1 to 3, mining has extended over widths of 2-3 m and strike lengths of 30-40 m. Drilling discovered several stacked high-grade veins. In the Soledad area, 200 m southeast of the Compadres mine, five drill holes intersected the same vein system, and several significant intercepts interpreted as the tops of high-grade ore shoots were found. This early-stage exploration has identified moderate- to high-grade prospective mineralized zones as promising drill targets.


Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon;Hur, Su Jin;Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.11 / pp.281-290 / 2020
  • With the spread of smartphones, services that use personalized data are increasing. Healthcare-related services in particular deal with a variety of data, and data visualization techniques are used to present them effectively. As visualization techniques are adopted, interaction with visualizations naturally becomes important as well. In the PC environment, interaction with data visualizations is performed with a mouse, so various data filtering options can be provided. In a mobile environment, by contrast, the screen is small and it is hard to tell whether an interaction is even available, so only the limited visualizations provided by an app can be offered through button touches. To overcome this limitation of mobile interaction, we aim to enable data visualization interactions through conversation with a chatbot, so that users can inspect their own data through various visualizations. To do this, the user's natural language request must be converted into a database query, and the result data must be retrieved by running the converted query against a database that stores the data periodically. Many studies convert natural language into queries, but converting user requests into queries in the context of a given visualization has not yet been studied. Therefore, this paper focuses on query generation in a setting where the data visualization technique is determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step-count data: filtering by x-axis period was shown as a bar graph, and comparison between two groups as a line graph. To develop a natural language processing model that can serve the requested information through visualization, about 15,800 training examples were collected through a survey of 1,000 people. Algorithm development and performance evaluation yielded about 89% accuracy for the classification model and 99% accuracy for the query generation model.
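A minimal rule-based sketch of the two supported interactions follows. The table name, columns, and regular-expression parsing are illustrative assumptions, not the paper's trained classification and query generation models:

    # Minimal sketch: turn a natural-language request into an SQL query for a
    # pre-determined visualization (bar graph for date filtering, line graph for
    # group comparison). Table/column names are hypothetical.
    import re

    def generate_query(request: str) -> str:
        # Interaction 1: filter the x-axis (date range) -> bar graph of daily steps
        m = re.search(r"from (\d{4}-\d{2}-\d{2}) to (\d{4}-\d{2}-\d{2})", request)
        if m:
            start, end = m.groups()
            return ("SELECT date, steps FROM step_count "
                    f"WHERE date BETWEEN '{start}' AND '{end}' ORDER BY date")
        # Interaction 2: compare two groups -> line graph of average steps per group
        if "compare" in request.lower():
            return ("SELECT date, user_group, AVG(steps) AS avg_steps FROM step_count "
                    "GROUP BY date, user_group ORDER BY date")
        raise ValueError("unsupported request")

    print(generate_query("Show my steps from 2020-01-01 to 2020-01-31"))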

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study how to improve answer extraction in a question-answering (QA) system by using sentence dependency parsing results. A QA system consists of query analysis, which analyzes the user's question, and answer extraction, which extracts appropriate answers from documents, and various studies have been conducted on both. To improve answer extraction performance, the grammatical information of sentences must be accurately reflected. Because Korean has free word order and frequently omits sentence components, dependency parsing is well suited to analyzing Korean syntax. Therefore, in this study, we improve answer extraction performance by adding features generated from dependency parsing to the inputs of the answer extraction model (Bidirectional LSTM-CRF). Since dependency parsing operates on the Eojeol, the basic sentence unit delimited by spaces, the tag information of each Eojeol is obtained as a by-product of parsing; the Eojeol tag feature is this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of that graph: each Eojeol becomes a node, each dependency between Eojeols becomes an edge, and each Eojeol tag becomes a node label. Depending on whether the direction of the dependency relation is considered, either a directed or an undirected graph is generated. To obtain the graph embedding we used Graph2Vec, which learns a graph's embedding from the subgraphs that constitute it. The maximum path length between nodes can be specified when extracting subgraphs: with a maximum path length of 1, the embedding is generated only from direct dependencies between Eojeols, and as the maximum path length grows, indirect dependencies are included as well. In the experiments, the maximum path length was varied from 1 to 3, with and without dependency direction, and answer extraction performance was measured. We compared the model given only basic word features generated without dependency parsing against the model additionally given the Eojeol tag feature and the dependency graph embedding feature. The results show that both features improve answer extraction performance. In particular, the highest performance was obtained when the dependency direction was considered and the graph embedding was generated with a maximum path length of 1 during subgraph extraction in Graph2Vec. From these experiments we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved answer extraction performance by adding features derived from dependency parsing, reflecting the characteristics of Korean, which has free word order and frequent omission of sentence components. Second, we generated the dependency-parsing feature with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. If their effect is also confirmed on other natural language processing models such as sentiment analysis or named entity recognition, the validity of the features can be verified more rigorously.
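The graph construction step described above can be sketched as follows. The toy sentence, dependencies, and Eojeol tags are illustrative placeholders, and the Graph2Vec embedding itself would be learned with an external implementation:

    # Minimal sketch: build a (directed or undirected) dependency graph per sentence,
    # with Eojeols as nodes, dependencies as edges, and Eojeol tags as node labels.
    import networkx as nx

    # Toy parse of one sentence: (dependent Eojeol index, head Eojeol index) pairs
    # and an Eojeol tag per node. Real values would come from a Korean dependency parser.
    eojeol_tags = {0: "NP_SBJ", 1: "NP_OBJ", 2: "VP"}
    dependencies = [(0, 2), (1, 2)]           # both Eojeols depend on the predicate

    def build_graph(directed: bool) -> nx.Graph:
        g = nx.DiGraph() if directed else nx.Graph()
        for node, tag in eojeol_tags.items():
            g.add_node(node, label=tag)       # node label = Eojeol tag
        g.add_edges_from(dependencies)        # edge = dependency relation
        return g

    directed_graph = build_graph(directed=True)
    undirected_graph = build_graph(directed=False)

    # A list of such per-sentence graphs would then be fed to a Graph2Vec
    # implementation (e.g., the karateclub package), with the maximum subgraph
    # path length set between 1 and 3, to obtain one embedding vector per sentence.
    print(directed_graph.nodes(data=True), directed_graph.edges())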

How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.219-239 / 2019
  • As providing customized services to individuals becomes important, research on personalized recommendation systems continues. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are mostly based on quantitative information such as users' ratings, which lowers accuracy. To address this, many studies have attempted to improve recommendation performance by using information beyond the quantitative ratings, a good example being sentiment analysis of customer review text. Nevertheless, existing research has not directly combined sentiment analysis results with quantitative rating scores in the recommendation system. This study therefore aims to reflect the sentiment expressed in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and feeds it directly into the recommendation system. To do so, users' reviews, which are originally qualitative information, had to be quantified. In this study, a sentiment score was calculated with text mining sentiment analysis techniques. The data were movie reviews, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to build the dictionary: positive/negative dictionaries were constructed with Lasso, Ridge, and ElasticNet regression. The accuracy of each dictionary was verified with a confusion matrix: 70% for the Lasso-based dictionary, 79% for the Ridge-based dictionary, and 83% for the ElasticNet (${\alpha}=0.3$) dictionary. The review sentiment score was therefore calculated with the ElasticNet-based dictionary and combined with the rating to create a new rating. We show that collaborative filtering using ratings that reflect the sentiment scores of user reviews outperforms the traditional method that considers only the original ratings. To show this, the proposed approach was evaluated with memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. For each algorithm, the mean absolute error (MAE) and root mean square error (RMSE) were calculated to compare the system using the combined rating with the system using only the original rating. In terms of MAE, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++; in terms of RMSE, 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. These results show that the predictive performance of ratings reflecting the proposed sentiment score is superior to that of the conventional rating; in other words, collaborative filtering that reflects the sentiment score of user reviews is more accurate than conventional collaborative filtering that considers only the quantitative rating.
We then performed a paired t-test to confirm that the proposed model is the better approach, and the result supported this conclusion. To overcome the limitation of previous research that judges a user's sentiment only by the quantitative rating, this study quantified the review text so that the user's opinion is reflected in the recommendation system in a more refined way, improving accuracy. The findings have managerial implications for recommendation system developers who need to consider both quantitative and qualitative information, and the way the combined system is constructed in this paper could be used directly by such developers.
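The core combination step can be sketched as follows. The tiny sentiment dictionary, the blending weight, and the rating scale are illustrative assumptions; the paper builds its dictionary with ElasticNet regression on movie reviews:

    # Minimal sketch: dictionary-based review sentiment score blended with a rating.
    import re

    sentiment_dict = {"great": 0.8, "boring": -0.6, "masterpiece": 1.0, "waste": -0.9}

    def sentiment_score(review: str) -> float:
        tokens = re.findall(r"[a-z]+", review.lower())
        hits = [sentiment_dict[t] for t in tokens if t in sentiment_dict]
        return sum(hits) / len(hits) if hits else 0.0

    def adjusted_rating(rating: float, review: str, alpha: float = 0.5,
                        scale: float = 5.0) -> float:
        # Map the [-1, 1] sentiment score onto the rating scale, then blend it
        # with the raw rating; the blended value feeds the collaborative filter.
        s = (sentiment_score(review) + 1.0) / 2.0 * scale
        return (1 - alpha) * rating + alpha * s

    print(adjusted_rating(4.0, "a great masterpiece, not a waste of time"))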

An Analysis of Soil Pressure Gauge Result from KHC Test Road (시험도로 토압계 계측결과 분석)

  • In Byeong-Eock;Kim Ji-Won;Kim Kyong-Ha;Lee Kwang-Ho
    • International Journal of Highway Engineering / v.8 no.3 s.29 / pp.129-141 / 2006
  • The vertical soil pressure developed in the granular layers of an asphalt pavement system is influenced by various factors, including wheel load magnitude, loading speed, and asphalt pavement temperature. This research observed the distribution of vertical soil pressure in the pavement supporting layers by examining data measured by soil pressure gauges in the KHC Test Road. The existing compaction specification for subbase and subgrade was also evaluated against the measured vertical pressure. Finite element analysis was conducted to verify the measured results, since it can extend the scope of the study without extensive additional field testing. The test data were collected from the A5, A7, A14, and A15 test sections in August, September, and November 2004 and August 2005; these sections and data were selected because they had the best quality. The size of the influence area was evaluated, and the variation in vertical pressure was investigated with respect to load level, load speed, and pavement temperature. Lower speed, higher load level, and higher pavement temperature increased the vertical pressure and reduced the area of influence. The finite element results showed a trend of vertical pressure variation similar to the measured data. The compaction quality specification for subbase and subgrade is higher than the level of vertical pressure measured under truck loading, so it should be investigated further.


A Development of Ontology-Based Law Retrieval System: Focused on Railroad R&D Projects (온톨로지 기반 법령 검색시스템의 개발: 철도·교통 분야 연구개발사업을 중심으로)

  • Won, Min-Jae;Kim, Dong-He;Jung, Hae-Min;Lee, Sang Keun;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies / v.20 no.4 / pp.209-225 / 2015
  • Research and development projects in the railroad domain differ from those in other domains in their close relationship with laws. Cases have been reported in which new technologies from R&D projects could not be commercialized because relevant laws restricted them. This problem arises because researchers do not know exactly which laws can affect the outcome of their R&D projects. To deal with this problem, we propose a model for a law retrieval system that researchers in railroad R&D projects can use to find related legislation. The input to the system is a research plan describing the main contents of the project; the output is a list of laws related to the project, ranked by scores we developed, where a law's rank indicates its priority for review. Using this system, researchers can search for laws that may affect their R&D projects at every stage of the project cycle and obtain a list of laws to consider before the project they participate in ends. As a result, they can adjust the direction of their project by checking the law list, preventing elaborate projects from becoming unusable.
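As a rough illustration of ranking laws against a research plan, the sketch below uses plain TF-IDF cosine similarity as a stand-in scoring function; the paper's actual ontology-based scoring is not detailed in the abstract, and the law names and texts are placeholders:

    # Minimal sketch: rank candidate laws by text similarity to a research plan.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    laws = {
        "Railroad Safety Act": "safety certification of railroad vehicles and facilities",
        "Framework Act on Railroad Industry Development": "development of railroad technology and industry policy",
    }
    research_plan = "development of an onboard train control system and its safety certification"

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([research_plan] + list(laws.values()))
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

    # Higher score = higher priority for the researcher to review
    for name, score in sorted(zip(laws, scores), key=lambda kv: -kv[1]):
        print(f"{score:.3f}  {name}")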

A Basic Study on Prediction Module Development of Collision Risk based on Ship's Operator's Consciousness (선박운항자 의식 기반 충돌 위험도 예측 모듈 개발에 관한 연구)

  • Park, Young-Soo;Park, Sang-Won;Cho, Ik-Soon
    • Journal of Navigation and Port Research / v.39 no.3 / pp.199-207 / 2015
  • In Korean ports, marine traffic is congested because a large number of vessels come in and go out. To improve the safety and efficiency of these vessels, Korea operates a Vessel Traffic Service (VTS) system that monitors its waters 24 hours a day. Despite the efforts of VTS officers, however, collisions keep occurring; risk situations are estimated to occur about once every 20 minutes, and the actual risk may be greater. This study seeks to reduce such accidents by providing a timely safety standard for collision danger, and to that end it developed a module that predicts collision risk in advance. The module helps avoid collision risk by adjusting a ship's speed and course using a risk evaluation model based on ship operators' perceived risk. With this module, ship operators and VTS officers can easily identify risks in complex traffic situations and take appropriate action, such as course and speed changes, against imminent danger. To verify the effectiveness of the module, this paper predicted the risk of each encounter situation and confirmed that the module can identify the change in risk for specific course and speed changes in Busan coastal waters.
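For orientation only, the sketch below screens an encounter with generic DCPA/TCPA geometry so that a candidate course or speed change could be re-evaluated in advance; the thresholds and the whole formulation are a stand-in, not the paper's operator-consciousness-based risk model:

    # Minimal sketch: closest point of approach (DCPA/TCPA) screening for one encounter.
    import math

    def dcpa_tcpa(own, target):
        """own/target: (x [m], y [m], speed [m/s], course [rad], measured from north)."""
        ox, oy, ov, oc = own
        tx, ty, tv, tc = target
        # Relative position and velocity of the target with respect to own ship
        rx, ry = tx - ox, ty - oy
        rvx = tv * math.sin(tc) - ov * math.sin(oc)
        rvy = tv * math.cos(tc) - ov * math.cos(oc)
        rv2 = rvx**2 + rvy**2
        if rv2 == 0:
            return math.hypot(rx, ry), float("inf")
        tcpa = -(rx * rvx + ry * rvy) / rv2          # time to closest point of approach
        dcpa = math.hypot(rx + rvx * tcpa, ry + rvy * tcpa)  # distance at that time
        return dcpa, tcpa

    own = (0.0, 0.0, 6.0, math.radians(0))            # heading north at 6 m/s
    target = (2000.0, 3000.0, 5.0, math.radians(250))
    dcpa, tcpa = dcpa_tcpa(own, target)
    print("risky" if dcpa < 500 and 0 < tcpa < 1200 else "safe", round(dcpa), round(tcpa))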

A Technique to Recommend Appropriate Developers for Reported Bugs Based on Term Similarity and Bug Resolution History (개발자 별 버그 해결 유형을 고려한 자동적 개발자 추천 접근법)

  • Park, Seong Hun;Kim, Jung Il;Lee, Eun Joo
    • KIPS Transactions on Software and Data Engineering / v.3 no.12 / pp.511-522 / 2014
  • During software development, a variety of bugs are reported. Bug tracking systems such as Bugzilla, MantisBT, Trac, and JIRA are used to manage reported bug information in many open source projects. Bug reports in a bug tracking system are triaged to manage bugs and to determine the developer responsible for resolving each report. As software grows larger and bug reports tend to be duplicated, bug triage becomes increasingly complex and difficult. In this paper, we present an approach to assigning bug reports to appropriate developers, which is a main part of the bug triage task. First, the words contained in previously resolved bug reports are grouped by developer. Second, words are selected from the newly reported bug. After these two steps, vectors whose items are the selected words are generated. Third, the TF-IDF (term frequency-inverse document frequency) of each selected word is computed and used as the weight of the corresponding vector item. Finally, developers are recommended based on the similarity between each developer's word vector and the vector of the new bug report. We conducted experiments on the Eclipse JDT and CDT projects to show the applicability of the proposed approach and compared it with an existing machine-learning-based study. The experimental results show that the proposed approach is superior to the existing method.
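The TF-IDF and similarity steps described above can be sketched as follows. The per-developer resolution histories and the new report are toy placeholders rather than Eclipse JDT/CDT data:

    # Minimal sketch: recommend developers for a new bug report by comparing its
    # TF-IDF vector with a TF-IDF vector built from each developer's resolved reports.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Words from bug reports each developer has previously resolved, concatenated per developer
    developer_docs = {
        "alice": "ui rendering swt widget layout crash",
        "bob": "compiler ast parser null pointer refactoring",
        "carol": "debugger breakpoint launch configuration jdt",
    }
    new_report = "null pointer exception in the ast parser during refactoring"

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(list(developer_docs.values()) + [new_report])
    n = len(developer_docs)
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()

    # Recommend developers whose resolution history is most similar to the new report
    for dev, score in sorted(zip(developer_docs, scores), key=lambda kv: -kv[1]):
        print(f"{dev}: {score:.3f}")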