• Title/Summary/Keyword: Corporate Data Analysis (기업데이터 분석)

Search Results: 2,116

A Study on Metadata-based Data Quality Management in a Container Terminal (컨테이너터미널의 메타데이터 기반 데이터 품질관리 방안에 관한 연구)

  • Kang, Yang-Suk;Choi, Hyung-Rim;Kim, Hyun-Soo;Hong, Soon-Goo;Jung, Jae-Un;Park, Jae-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.2 / pp.321-329 / 2009
  • Due to the massive increase of data that should be managed, problems in data quality management have become an issue. In addition, the lack of integrated management of data causes duplication of data, low-quality services, and missing data. To overcome these problems, this study examines an approach to data quality management. To do this, metadata was defined, its current management status was analyzed from various viewpoints, and metadata management was then applied to a container terminal. For the "A" container terminal, we performed data standardization, reflected major constraints, and developed a pilot metadata repository. The contributions of this study are the improvement of data quality in the container terminal and the practical application of the metadata management method. The limitations of this study are its partial implementation of metadata management within the company; interoperability of metadata management for business-to-business data integration is left for future research.
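
The abstract above gives no implementation details; as a loose illustration of what a metadata-driven quality check might look like, the Python sketch below validates records against column-level metadata (standardized name, expected type, nullability) such as a pilot metadata repository could hold. All names and rules in it are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: validating records against column metadata, in the
# spirit of a metadata repository used for data quality management.
from dataclasses import dataclass

@dataclass
class ColumnMeta:
    name: str        # standardized attribute name
    dtype: type      # expected Python type
    nullable: bool   # whether missing values are allowed

# Illustrative metadata for a hypothetical container-movement table.
METADATA = [
    ColumnMeta("container_no", str, nullable=False),
    ColumnMeta("gross_weight_kg", float, nullable=False),
    ColumnMeta("yard_block", str, nullable=True),
]

def validate(record: dict) -> list:
    """Return the list of data-quality violations found in one record."""
    errors = []
    for col in METADATA:
        value = record.get(col.name)
        if value is None:
            if not col.nullable:
                errors.append(f"{col.name}: missing required value")
        elif not isinstance(value, col.dtype):
            errors.append(f"{col.name}: expected {col.dtype.__name__}")
    return errors

# A record with a non-numeric weight fails the type check.
print(validate({"container_no": "MSKU1234567", "gross_weight_kg": "21t"}))
```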

Cluster Analysis of Climate Data for Applying Weather Marketing (날씨 마케팅 적용을 위한 기후 데이터의 군집 분석)

  • Lee, Yang-Koo;Kim, Won-Tae;Jung, Young-Jin;Kim, Kwang-Deuk;Ryu, Keun-Ho
    • Journal of Korea Spatial Information System Society / v.7 no.3 s.15 / pp.33-44 / 2005
  • Recently, the weather has been affected by environmental pollution, and oil prices have risen because of the scarcity of resources. Weather and energy therefore strongly influence not only enterprises and nations but also individuals' daily lives and economic activities. For these reasons, much research has addressed managing the solar radiation needed to develop solar power as an alternative energy source, and many researchers are also interested in identifying regions according to the changing characteristics of climate data. However, previous work has not shown how to apply cluster analysis, retrieval, and analytical results according to regional characteristics through data mining. In this paper, we design a data model for storing and managing climate data measured in twenty domestic cities. After clustering the domestic climate data with the k-means algorithm, we provide information according to the characteristics of each region, and we suggest how department stores and amusement parks can apply the results to weather marketing. The proposed system is useful for building a database for weather marketing and for providing its elements and analysis information.
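
The abstract names the k-means algorithm but gives no implementation; the following is a minimal scikit-learn sketch under assumed inputs (the three climate features and k=4 are illustrative choices, not values from the paper).

```python
# Minimal k-means sketch with scikit-learn; feature set and k=4 are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical yearly averages for 20 observation sites:
# columns = [mean temperature, precipitation, solar radiation]
X = rng.normal(size=(20, 3))

X_scaled = StandardScaler().fit_transform(X)   # put features on a common scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

for cluster_id in range(4):                    # list the sites in each cluster
    print(cluster_id, np.where(labels == cluster_id)[0])
```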


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers Case' of the global financial crisis, in which everything collapsed in a single moment. The key variables used in corporate default prediction vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found that the importance of predictive variables shifted between Zmijewski's (1984) and Ohlson's (1980) models. However, past studies used static models and mostly did not consider changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated for with a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is then conducted on validation data that includes the financial crisis period (2007~2008). As a result, we obtain a model that shows a pattern similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model trained over these nine years is evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model is useful for robust corporate default prediction on all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are then compared. Corporate data pose the problems of nonlinear variables, multi-collinearity among variables, and a lack of data; the logit model handles nonlinearity, the Lasso regression model addresses multi-collinearity, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and ultimately toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and delivers better predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday lives of their nations and societies, yet deep learning time series research for the financial industry is still scarce. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists who are beginning to combine financial data with deep learning time series algorithms.
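
The paper applies RNN and LSTM models to panels of annual financial ratios; as a minimal sketch only, the Keras snippet below shows the general shape of such a model. The sequence length, number of ratios, and layer sizes are assumptions, and the random arrays stand in for real firm data.

```python
# Minimal LSTM default-prediction sketch with Keras; shapes and layer sizes
# are illustrative assumptions, and the random data stands in for real firms.
import numpy as np
from tensorflow import keras

n_firms, n_years, n_ratios = 500, 7, 20           # hypothetical panel shape
X = np.random.rand(n_firms, n_years, n_ratios)    # yearly financial ratios
y = np.random.randint(0, 2, size=n_firms)         # 1 = default, 0 = non-default

model = keras.Sequential([
    keras.layers.Input(shape=(n_years, n_ratios)),
    keras.layers.LSTM(32),                        # summarizes the time series
    keras.layers.Dense(1, activation="sigmoid"),  # default probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```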

A Study on the Effect of the Name Node and Data Node on the Big Data Processing Performance in a Hadoop Cluster (Hadoop 클러스터에서 네임 노드와 데이터 노드가 빅 데이터처리 성능에 미치는 영향에 관한 연구)

  • Lee, Younghun;Kim, Yongil
    • Smart Media Journal / v.6 no.3 / pp.68-74 / 2017
  • Big data processing handles various types of data, such as files, images, and video, to solve problems and provide useful insights. Various platforms are currently used for big data processing, but many organizations and enterprises use Hadoop because of its simplicity, productivity, scalability, and fault tolerance. Hadoop can build clusters on various hardware platforms and handles big data by dividing roles between a name node (master) and data nodes (slaves). In this paper, we use the fully distributed mode employed by actual institutions and companies as the operation mode, and we build a Hadoop cluster from low-power, low-cost single-board computers for ease of experimentation. The performance of the name node is analyzed by running the same data processing with a single-board computer and with a laptop as the name node. The influence of the number of data nodes is analyzed by doubling the number of data nodes relative to the existing cluster, and the effects of these experiments are examined.
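
The paper does not specify the workload whose runtime is measured; as one plausible stand-in, a Hadoop Streaming word count in Python is the kind of job that is split into map and reduce tasks across the data nodes while the name node tracks HDFS block locations. The two scripts below (ordinarily separate mapper.py and reducer.py files) are a generic sketch, not code from the paper.

```python
# mapper.py -- Hadoop Streaming mapper: emit "word<TAB>1" for each word.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop Streaming reducer: sum the counts per word
# (Hadoop delivers the mapper output sorted by key).
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```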

Financial ESG and Corporate Sustainable Development: the Moderating Effect of Attention (금융업 ESG와 기업의 지속 가능한 발전: 관심도 조절 역할)

  • Dongmei Li
    • Journal of Digital Policy / v.2 no.1 / pp.9-19 / 2023
  • ESG is a kind of corporate data that pays attention to the environment, social responsibility, and corporate governance. This study explores the relationship between ESG and corporate sustainable development through empirical analysis, using a fixed-effects regression on data from China's A-share listed companies from 2015 to 2020. The results show that good ESG performance promotes the sustainable development of enterprises; moreover, the higher the attention paid to a firm, the more strongly good ESG performance promotes its sustainable development. This study enriches related research on ESG and has reference value for promoting the sustainable development of enterprises.
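
The abstract reports a fixed-effects regression with a moderating effect but not the specification; the statsmodels sketch below shows one common way such a model is written, with firm and year fixed effects and an ESG-by-attention interaction. The column names and the input file are hypothetical placeholders, not taken from the paper.

```python
# Fixed-effects regression with a moderating (interaction) term, sketched with
# statsmodels; column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per firm-year: firm, year, sustain (outcome), esg, attention
df = pd.read_csv("panel.csv")

# Firm and year fixed effects via dummies; esg:attention captures moderation.
model = smf.ols("sustain ~ esg * attention + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(model.summary().tables[1])
```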

Implement of Job Processing Using GPU for Hadoop Environment (하둡 환경에서 GPU를 사용한 Job 처리 방법)

  • Hong, Seok-min;Yoo, Yeon-jun;Lee, Hyeop Geon;Kim, Young Woon
    • Annual Conference of KIPS / 2022.11a / pp.77-79 / 2022
  • As IT technology advances, the volume of data worldwide grows every year, and enterprises that use big data platforms want ever faster big data processing. Accordingly, this paper proposes a job processing method that uses GPUs in a Hadoop environment. The proposed method configures separate CPU and GPU clusters, classifies jobs into three sizes, and assigns each job to the appropriate cluster for processing. In future work, an actual implementation and a performance evaluation are needed to verify the proposed method in practice.
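
The extended abstract only outlines the idea; as a hypothetical sketch of the dispatch step, the snippet below classifies jobs into three sizes by input volume and routes them to a CPU or GPU cluster queue. The thresholds, queue names, and routing rule are assumptions for illustration only.

```python
# Hypothetical dispatcher sketch: classify a job into one of three sizes and
# route it to a CPU or GPU cluster queue; all thresholds/names are assumptions.
from enum import Enum

class Size(Enum):
    SMALL = 1
    MEDIUM = 2
    LARGE = 3

def classify(input_bytes: int) -> Size:
    if input_bytes < 1 << 30:       # smaller than 1 GiB
        return Size.SMALL
    if input_bytes < 100 << 30:     # smaller than 100 GiB
        return Size.MEDIUM
    return Size.LARGE

def target_queue(size: Size) -> str:
    # Assumed rule: small jobs stay on the CPU cluster, larger ones go to GPUs.
    return "cpu-cluster" if size is Size.SMALL else "gpu-cluster"

job = {"name": "log-aggregation", "input_bytes": 5 << 30}   # a 5 GiB job
print(target_queue(classify(job["input_bytes"])))           # -> gpu-cluster
```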

Development of Process Model for Business Collaboration of Franchise Industry (프랜차이즈 업종의 기업간 협업 프로세스 표준모델개발)

  • 박승규;문신명;배승호;홍정완;임춘성
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.780-791 / 2003
  • Recently, as various forms of business have emerged, demand has been growing for frameworks that support innovation through the implementation of inter-company information systems and business process reengineering for specific industries and companies. For a company to support the implementation of information systems and strengthen the competitiveness of its operations, standardization of current business processes is essential; however, for inter-company work, the integration and standardization needed to classify and analyze processes and data are still lacking, and the standardization and development of industry-specific core functional processes built on that foundation are likewise insufficient. In particular, in the distribution sector, where inter-company business is common, the need to systematize standard business processes within each industry has grown considerably. In the franchise industry, which has recently become very active, operations are run under the management and control of the head office to raise management efficiency, and products are planned in a unified, standardized way by the head office, reflecting the management characteristics peculiar to franchise businesses. If a franchise is operated without standardized work processes, however, the head office only incurs more personnel and expenses, losing the multi-store advantage that is franchising's greatest strength, and prospective franchisees face considerable confusion arising from process differences and non-standardization, so standard processes that can be approached easily and systematically are strongly needed. Targeting the franchise industry within the distribution sector, this paper analyzes the data, forms, and the functions and processes that handle them in inter-company process collaboration, and develops a standard business process model using modeling guidelines and a meta-model associated with a business process modeling methodology, in order to support the implementation of enterprise information systems, strengthen competitiveness, and provide a systematic way to manage the standardization of inter-company business processes.


Perception Survey about SMEs Employment of University Students in Chungbuk Area: Based on Text-mining (충북지역 대학생의 중소기업 취업에 대한 인식조사: 텍스트마이닝을 기반으로)

  • Choi, Dabin;Choi, Wooseok;Choi, Sanghyun;Lee, Junghwan
    • Korean small business review / v.42 no.4 / pp.235-250 / 2020
  • This study surveyed university students' perceptions of employment at small and medium-sized enterprises (SMEs) in the Chungbuk area in order to prepare improvement measures. In particular, open-ended responses were collected alongside the existing survey items, and perceptions of SMEs and decent work were identified using text mining. The analysis shows positive perceptions of SME jobs, such as varied work experience and low competition rates for positions, but generally many negative perceptions regarding pay, workload, and welfare. However, a co-occurrence network analysis of the responses about decent jobs derived 'Information' as a keyword: college students' negative perception of SMEs is affected by the lack of sufficient information, which needs to be improved first. To solve this problem, the study proposes establishing and operating a platform that can provide information on SME employment and help select the necessary personnel.
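
The abstract mentions co-occurrence network analysis of open-ended responses without implementation details; the sketch below counts word pairs that appear in the same response and loads them into a networkx graph, which is one common way to build such a network. The sample responses are invented for illustration.

```python
# Minimal co-occurrence network sketch: weight an edge by how many responses
# contain both words. The two sample responses are invented.
from collections import Counter
from itertools import combinations
import networkx as nx

responses = [
    "stable pay and reliable information about the company",
    "good welfare and clear information on career growth",
]

pair_counts = Counter()
for text in responses:
    tokens = sorted(set(text.lower().split()))    # unique words per response
    pair_counts.update(combinations(tokens, 2))   # all within-response pairs

G = nx.Graph()
for (w1, w2), weight in pair_counts.items():
    G.add_edge(w1, w2, weight=weight)

# Degree centrality surfaces hub words, analogous to the study's 'Information'.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```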

The Guideline for Re-Structuring of Information System and Case Study (정보시스템 재구축 수행 방안과 적용 사례)

  • Choi, Youn-Lak;Lee, Eun-Sang;Lee, Hyun-Jeong;Chong, Ki-Won
    • Annual Conference of KIPS / 2001.10a / pp.473-476 / 2001
  • Recently, there has been a trend of re-structuring existing information systems into new ones that reflect the diverse requirements of customers and users and changes in the business environment. Through this, companies can seize a competitive advantage and become more competitive. This paper presents a systematic approach to Process Modeling and Data Modeling for information system re-structuring and shows a case in which it was actually applied. The approach consists of a Process Model Analysis phase, which analyzes requirements from the perspective of the overall information system and the deficiencies of the existing system to extract the targets of informatization; a Logical Data Modeling phase, which converts those targets into a conceptual model; and a Physical Data Modeling phase, which maps them to what is actually stored and used on the computer.


Smart Learning Strategies utilizing Convergence of e-Learning and Bigdata (이러닝과 빅데이터의 융합 기반 스마트러닝 전략)

  • Noh, Kyoo-Sung
    • Journal of Digital Convergence / v.13 no.1 / pp.487-493 / 2015
  • This paper derives strategic implications for smart learning as a sophisticated alternative to e-learning through a convergence approach of e-learning and Big Data, based on the practices of developed countries. To this end, it first identifies the status and challenges of e-learning in Korea and then analyzes convergence cases of e-learning and data science at major foreign companies and universities. In addition, the study conducts an awareness survey on Big Data among employees of e-learning companies and, through analysis of the survey data, derives a strategic alternative for Big Data convergence-based smart learning in the industry.