• Title/Summary/Keyword: 수가정보 (fee schedule information)

Search Results: 2,852

A Study on the Necessity of Using ESG to Prevent Accidents in the Chemical Industry (화학산업 사고 예방을 위한 ESG 활용 필요성 연구)

  • Cheolhee Yoon;Leesu Kim;Seungho Jung;Keun-won Lee
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.4
    • /
    • pp.826-833
    • /
    • 2023
  • Purpose: We suggest the need to utilize ESG in the safety field to prevent serious industrial accidents. Method: The Serious Accident Punishment Act, a strong serious-accident prevention system, was examined through a review of previous research, and the main causes of accidents in the domestic chemical industry were derived through comparative analysis of serious-accident data from the United States and Korea. Result: It was determined that voluntary safety management by companies should be induced through ESG management alongside the Serious Accident Punishment Act, which aims to prevent corporate accidents. Statistical analysis of the accident data confirmed that the scale of damage and the number of deaths in domestic accidents were greater than in the United States, which was interpreted as reflecting the larger share of accidents with human causes in Korea. Conclusion: To compensate for the lack of voluntariness in corporate safety management under the Serious Accident Punishment Act and to encourage active safety management, the proportion of 'ESG safety evaluation' must be expanded. By using ESG as an indirect social sanction, companies can be expected to manage safety voluntarily and actively and to expand their investments in the safety field.

Studying the Comparative Analysis of Highway Traffic Accident Severity Using the Random Forest Method. (Random Forest를 활용한 고속도로 교통사고 심각도 비교분석에 관한 연구)

  • Sun-min Lee;Byoung-Jo Yoon;WutYeeLwin
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.156-168
    • /
    • 2024
  • Purpose: Highway traffic accidents show a repeating pattern of increase and decrease, and the fatality rate is highest on highways among all road types, so improvement measures reflecting domestic conditions need to be established. Method: Accident severity was analyzed with Random Forest using data from accidents occurring from 2019 to 2021 on 10 routes with high accident rates among national highways, and the factors influencing accident severity were identified. Result: The analysis, which used the SHAP package to determine the top 10 variables by importance, revealed that the variables with a significant impact on accident severity are: the driver at fault being aged 20 to under 39; the time period being daytime (06:00-18:00); occurrence on weekends (Sat-Sun); the season being summer or winter; violation of traffic regulations (failure to comply with safe driving); the road type being a tunnel; and a geometric structure with a high number of lanes and a high speed limit. In total, 10 independent variables showed a positive correlation with highway traffic accident severity. Conclusion: Because highway accidents arise from the complex interaction of various factors, predicting them is difficult; however, the results of this study can support in-depth analysis of the factors influencing severity, and efficient and rational response measures should be established based on these findings.
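The workflow described above can be sketched as follows. The factor names and synthetic data are illustrative stand-ins, not the study's dataset, and the study ranked variables with the SHAP package; here the model's built-in impurity-based importances are substituted as a simpler proxy.

```python
# Hypothetical sketch: fit a Random Forest on accident records and rank
# candidate factors by importance. All names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Illustrative binary factors loosely named after those in the abstract.
features = ["age_20_39", "daytime", "weekend", "summer_winter",
            "rule_violation", "tunnel", "many_lanes", "high_speed_limit"]
X = rng.integers(0, 2, size=(n, len(features)))
# Synthetic severity label correlated with a few of the factors.
logit = 1.5 * X[:, 4] + 1.0 * X[:, 7] + 0.5 * X[:, 1] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank factors from most to least influential on the fitted model.
ranking = [features[i] for i in np.argsort(rf.feature_importances_)[::-1]]
print(ranking[:3])
```

In the study itself, a SHAP summary over the fitted forest yields the signed per-variable contributions behind the top-10 list quoted in the result.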

Estimate Customer Churn Rate with the Review-Feedback Process: Empirical Study with Text Mining, Econometrics, and Quasi-Experiment Methodologies (리뷰-피드백 프로세스를 통한 고객 이탈률 추정: 텍스트 마이닝, 계량경제학, 준실험설계 방법론을 활용한 실증적 연구)

  • Choi Kim;Jaemin Kim;Gahyung Jeong;Jaehong Park
    • Information Systems Review
    • /
    • v.23 no.3
    • /
    • pp.159-176
    • /
    • 2021
  • Obviating user churn is a prominent strategy to capitalize on online games, eluding the initial investments required for the development of another. Extant literature has examined factors that may induce user churn, mainly from perspectives of motives to play and game as a virtual society. However, such works largely dismiss the service aspects of online games. Dissatisfaction of user needs constitutes a crucial aspect for user churn, especially with online services where users expect a continuous improvement in service quality via software updates. Hence, we examine the relationship between a game's quality management and its user base. With text mining and survival analysis, we identify complaint factors that act as key predictors of user churn. Additionally, we find that enjoyment-related factors are greater threats to user base than usability-related ones. Furthermore, subsequent quasi-experiment shows that improvements in the complaint factors (i.e., via game patches) curb churn and foster user retention. Our results shed light on the responsive role of developers in retaining the user base of online games. Moreover, we provide practical insights for game operators, i.e., to identify and prioritize more perilous complaint factors in planning successive game patches.
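The survival-analysis step can be sketched with a hand-rolled Kaplan-Meier estimator of player retention. The durations and churn flags below are invented for illustration; the study derived its complaint factors from review text mining, which is not reproduced here.

```python
# Minimal Kaplan-Meier sketch: estimate the probability that a player is
# still active after t days, given right-censored observation data.
import numpy as np

def kaplan_meier(durations, churned):
    """Return (event times, survival probabilities) for right-censored data."""
    order = np.argsort(durations)
    d = np.asarray(durations)[order]
    e = np.asarray(churned)[order]
    surv, times, probs = 1.0, [], []
    for t in np.unique(d[e == 1]):           # step only at churn events
        at_risk = np.sum(d >= t)             # players still observed at t
        events = np.sum((d == t) & (e == 1))
        surv *= 1 - events / at_risk
        times.append(int(t))
        probs.append(surv)
    return times, probs

days    = [3, 5, 5, 8, 10, 12, 15, 15, 20, 30]   # invented play durations
churned = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]          # 0 = still active (censored)
t, s = kaplan_meier(days, churned)
print(list(zip(t, [round(p, 3) for p in s])))
```

Comparing such curves between users exposed to enjoyment-related versus usability-related complaints is one way to express the paper's finding that the former are the greater threat to retention.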

Authing Service of Platform: Tradeoff between Information Security and Convenience (플랫폼의 소셜로그인 서비스(Authing Service): 보안과 편의 사이의 적절성)

  • Eun Sol Yoo;Byung Cho Kim
    • Information Systems Review
    • /
    • v.20 no.1
    • /
    • pp.137-158
    • /
    • 2018
  • Online platforms recently expanded their connectivity through an authing service. The growth of authing services enabled consumers to enjoy easy log-in access without exerting extra effort. However, multiple points of access increase the security vulnerability of platform ecosystems. Despite the importance of balancing authing service and security, only a few studies have examined platform connectivity. This study examines the optimal level of authing service of a platform and how authing strategies impact participants in a platform ecosystem. We used a game-theoretic approach to analyze security problems associated with authing services provided by online platforms for consumers and other linked platforms. The main findings are as follows: 1) a decreased expected loss for consumers will increase the number of players who participate in the platform; 2) linked platforms gain strong benefits from consumers involved in an authing service; 3) the main platform will increase its effort level, which includes security cost and checking of linked platforms' security, if the expected loss of the consumers is low. Our study contributes to the literature on the relationship between technology convenience and security risk and provides guidelines on authing strategies to platform managers.

Analysis of the effect of improving human thermal environment by road directions and street tree planting patterns in summer (여름철 도로 방향과 가로수 식재 방식에 의한 인간 열환경 개선효과 분석)

  • Jeonghyeon Moon;Yuri Choi;Eunja Choi;Jueun Yang;Sookuk Park
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.27 no.2
    • /
    • pp.1-18
    • /
    • 2024
  • This study aimed to identify the optimal street tree planting method to improve the summer thermal environment in Seoul, Republic of Korea. The effects of road direction and street tree planting patterns on the urban thermal environment were analyzed using ENVI-met simulations. A total of 68 scenarios were analyzed, based on four road directions and 17 planting patterns. The results showed that tree planting reduced air temperature, mean radiant temperature, and human thermal sensation (PET and UTCI). The most effective planting pattern among all scenarios combined low tree height (6 m), wide crown width (9 m), high leaf area index (3.0), and narrow planting interval (8 m). The largest improvement in the thermal environment occurred on the northern sidewalk of the east-west road. Since this study used computer simulations, differences from real urban spaces should be considered, and further research is needed through field measurements and consideration of more variables.
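The scenario grid described above (4 road directions x 17 planting patterns = 68 runs) can be sketched as a simple enumeration-and-screen loop. The direction labels, pattern codes, and scoring function are invented placeholders; in the study each scenario was a full ENVI-met simulation, not a formula.

```python
# Illustrative enumeration of a 4 x 17 simulation scenario grid and
# selection of the scenario with the largest (toy) PET reduction.
from itertools import product

directions = ["E-W", "N-S", "NE-SW", "NW-SE"]          # hypothetical labels
patterns = [f"P{i:02d}" for i in range(1, 18)]          # 17 pattern codes

scenarios = list(product(directions, patterns))

# Toy stand-in for the simulated PET reduction of one scenario.
def pet_reduction(direction, pattern):
    return len(direction) * 0.1 + int(pattern[1:]) * 0.05

best = max(scenarios, key=lambda s: pet_reduction(*s))
print(len(scenarios), best)
```

The same screen over real simulation outputs is what identifies the single best-performing direction/pattern combination reported in the abstract.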

A Comparison of Chemical Abstracts and CA Condensates in Literature Searching (문헌검색에 있어서 Chemical Abstracts와 CA Condensates의 비교)

  • Robert, B.E.
    • Journal of Information Management
    • /
    • v.9 no.1
    • /
    • pp.21-25
    • /
    • 1976
In March 1975, four and a half years of Chemical Abstracts indexes were compared with the online-searchable CA Condensates. Searching both databases together is the most efficient approach, but as the examples show, searching CA Condensates alone is more practical. CHEMCON and CHEM7071, the online forms of CA Condensates mounted at System Development Corp. (SDC), were compared with the Chemical Abstracts indexes. Most Chemical Abstracts users are familiar with the printed Chemical Abstracts issues and cumulative indexes, but probably less so with CA Condensates. CA Condensates contains bibliographic records in machine-readable form, indexed according to Chemical Abstracts, so it takes the form most familiar to us, like the index at the back of each weekly Chemical Abstracts issue. Although Chemical Abstracts is the database currently in use, this paper defines both the Index and Condensates as databases. When Condensates became commercially available from the Chemical Abstracts Service in the United States, many information centers began SDI services, processing user profiles in batch mode against the weekly magnetic tapes to retrieve and deliver current information. Some centers also collect back tapes and provide retrospective literature-search services, likewise in batch mode. Apart from those who handle the tapes directly, most people are still unfamiliar with Condensates. Retrospective searching is rather expensive, and browsing the literature haphazardly is not practical. Combining two or more concepts or substances against the weekly indexes is difficult and impractical; it is often easier to examine all the citations under a given term and judge their relevance from the abstracts. The availability of Condensates for interactive online searching has changed much of this. It is now possible to retrieve only the documents needed, and any item can be fully indexed. The cumbersome turnaround of batch systems, from hours to days between submitting a search and receiving the results, can now usually be reduced to a few minutes, and unlike batch systems, an inaccurate or insufficient search strategy can be corrected immediately. Interactive online searching has clear advantages over sequential batch searching. As CA Condensates came into frequent use, its true value was debated. Its indexing is certainly less systematic and less thorough than that of the Chemical Abstracts issues or cumulative indexes. Moreover, since the two databases overlap heavily, one must decide whether duplicate searching is worthwhile.
Several papers comparing CA Condensates with other databases have been published, and CA Condensates has generally appeared as the inferior database. Buckley cited an example (concerning the preparation of Terramycin) in which the Chemical Abstracts indexes yielded better documents than CA Condensates. The Search Center at the University of Georgia found CA Condensates less capable than the CA Integrated Subject File. Different forms of CA Condensates have also been compared: Michaels published a comparison of online searches of CA Condensates against manual searches of the weekly Chemical Abstracts issue indexes, and Prewitt compared two commercial online files of CA Condensates. The Amoco Research Center also compared search results from CA Condensates and the Chemical Abstracts indexes, finding cases where Condensates was superior, cases where the indexes were superior, and cases where the two were effectively equivalent. In March 1975, at least four years of CA Condensates and indexes (Vols. 72-79, 1970-1973) were compared; author and general-subject searches were compared using Vol. 80 (Jan-June 1974). CA Condensates is usually inconvenient for searching specific compounds; the example given by Buckley is typical. However, other types of searches (corporate author, patent assignee, personal author, general/specific compounds, and reaction types) illustrate the advantages of CA Condensates for practical searching. In the examples that follow, CHEMCON and CHEM7071 are the online versions of CA Condensates.


The Simulation for the Organization of Fishing Vessel Control System in Fishing Ground (어장에 있어서의 어선관제시스템 구축을 위한 모의실험)

  • 배문기;신형일
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.36 no.3
    • /
    • pp.175-185
    • /
    • 2000
  • This paper describes a basic study on organizing a fishing vessel control system to efficiently control fishing vessels in Korean offshore waters. ARPA images of the fishing process of a purse-seiner fleet operating off Cheju, Korea were digitized with a digital camera and then simulated using VTMS. Furthermore, the application of an FVTMS that can efficiently control fishing vessels in the fishing ground was investigated. The results obtained were as follows: (1) Casting and hauling the net took 16 and 35 minutes, respectively. The length of rope pulled by the scout boat was 200 m, the tactical diameter in casting the net was 340.8 m, and the turning speed was 6 kts. (2) In the simulation, the casting and hauling process drifted to the SW and NE when the current was set to NE at 2 kts and SW at 2 kts, respectively. These results suggest that fishing vessel control can be predicted in advance from information on the fishing ground, the fishery, ship maneuvering, and so on. (3) The control range of the VTMS radar used in the simulation was about 16 miles. Even when converting from the radar of the control vessel to another one, the vector and target data were acquired continuously. The optimum control position could be determined by measuring and analyzing the distance and direction between the control vessel and the fleet of fishing vessels. (4) An FVTMS (fishing vessel traffic management services) model was suggested in which fishing vessels receiving fishing-condition and safe-navigation information can operate safely and efficiently.


A Study on the Usage Behavior of Universities Library Website Before and After COVID-19: Focusing on the Library of C University (COVID-19 전후 대학도서관 홈페이지 이용행태에 관한 연구: C대학교 도서관을 중심으로)

  • Lee, Sun Woo;Chang, Woo Kwon
    • Journal of the Korean Society for information Management
    • /
    • v.38 no.3
    • /
    • pp.141-174
    • /
    • 2021
  • In this study, by examining actual usage data of a university library website before and after the COVID-19 outbreak, user behavior was analyzed and the data from the two periods were compared, in order to suggest ways for university libraries to provide more efficient information services in a pandemic situation. User traffic on the website of University C was collected with Google Analytics and compared for January to December 2018, before the onset of COVID-19, and January to December 2020, after the outbreak of the virus. Web traffic variables were classified into three groups, 'user information', 'path', and 'site behavior', based on metrics such as sessions, users, pageviews, pages per session, session time, and bounce rate. To summarize the results: first, compared with data from January 1 to January 20, before the onset of COVID-19, users, new visitors, and sessions all increased year over year, and sessions per user, pageviews, and pages per session, which had already been trending upward before the outbreak, increased significantly in 2020. Second, as social distancing was raised to the second stage, use of the university library website also changed. In the period when the number of students on campus was lowest, pageviews in 2020 were about 100,000 higher than in 2018, and pages per session reached 10.46, about two pages more than in 2018. The bounce rate, 14.38 in 2018 and 2019, fell to 13.05 in 2020, indicating more active use of the website while social distancing was elevated.
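The metrics behind this analysis are simple ratios over session records. The sketch below computes pages per session and bounce rate from a handful of invented sessions; in the study these figures were read directly from Google Analytics rather than computed from raw logs.

```python
# Illustrative computation of two web-traffic metrics from session records.
# A "bounce" is conventionally a session that viewed exactly one page.
sessions = [
    {"pages": 1}, {"pages": 4}, {"pages": 7}, {"pages": 1},
    {"pages": 12}, {"pages": 3}, {"pages": 9}, {"pages": 5},
]

pages_per_session = sum(s["pages"] for s in sessions) / len(sessions)
bounce_rate = 100 * sum(s["pages"] == 1 for s in sessions) / len(sessions)
print(round(pages_per_session, 2), round(bounce_rate, 2))  # → 5.25 25.0
```

A falling bounce rate alongside rising pages per session, as reported for 2020, is the signature of deeper engagement per visit.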

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, which assigns one label from two classes; multi-class classification, which assigns one label from several classes; and multi-label classification, which assigns multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because instances carry multiple labels. In addition, since the number of labels to predict grows with the number of labels and classes, prediction difficulty increases and performance improvement becomes harder. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels and thus cannot create a latent label space that sufficiently contains the information of the original labels.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.

Assaying Mitochondrial COI Sequences and Their Molecular Studies in Hexapoda, PART I: From 2000 to 2009 (육각강에서 보고된 미토콘드리아 COI 염기서열과 이들을 이용한 분자 연구 논문 분석, 파트 I: 2000년~2009년)

  • Lee, Wonhoon;Park, Jongsun;Akimoto, Shin-Ichi;Kim, Sora;Kim, Yang-Su;Lee, Yerim;Kim, Kwang-Ho;Lee, Si Hyeock;Lee, Yong-Hwan;Lee, Seunghwan
    • Korean journal of applied entomology
    • /
    • v.52 no.4
    • /
    • pp.395-402
    • /
    • 2013
  • Since 2000, a large number of molecular studies in Hexapoda have generated a large amount of mitochondrial sequence data. In this study, to review the mitochondrial COI sequences and molecular studies reported in Hexapoda from 2000 to 2009, 488 molecular studies based on 58,323 COI sequences were categorized by 26 orders and by the position of the COI sequences used (5', 3', and entire regions). The number of studies using each of the three regions varied widely among the 26 orders, but seven orders showed preferred positions: Diptera and Orthoptera had the largest number of studies in the 5' region, while Coleoptera, Phthiraptera, Odonata, Phasmatodea, and Psocoptera had the largest number in the 3' region. Comparison with 84 molecular studies published before 2000 suggests that studies of Coleoptera, Diptera, Phthiraptera, and Phasmatodea from 2000 to 2009 followed classical studies in using the COI sequence positions well established by 1999. This study provides useful information for understanding overall trends in COI sequence usage and in the molecular studies conducted in Hexapoda from 2000 to 2009.
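The categorization step described above reduces to tallying studies by (order, COI region) and taking the most frequent region per order. The records below are a handful of invented examples, not the 488 surveyed studies.

```python
# Illustrative tally: find each order's preferred COI region (5', 3', entire)
# from a list of (order, region) study records.
from collections import Counter, defaultdict

studies = [
    ("Diptera", "5'"), ("Diptera", "5'"), ("Orthoptera", "5'"),
    ("Coleoptera", "3'"), ("Coleoptera", "3'"), ("Phthiraptera", "3'"),
    ("Odonata", "3'"), ("Lepidoptera", "entire"),
]

region_counts = defaultdict(Counter)
for order, region in studies:
    region_counts[order][region] += 1

# Most-used region per order, mirroring the paper's "preferred position".
preferred = {o: c.most_common(1)[0][0] for o, c in region_counts.items()}
print(preferred["Diptera"], preferred["Coleoptera"])
```

Applied to the real 488-study dataset, this tally is what surfaces the 5'-preferring orders (Diptera, Orthoptera) and the 3'-preferring ones (Coleoptera, Phthiraptera, Odonata, Phasmatodea, Psocoptera).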