• Title/Summary/Keyword: data source (데이터 출처)

74 search results

A Study on Analysis of Smelting Slags Produced Reproduction Experiment of Iron Smelting Furnace and Interpretation Method for the Slags (고대 제철로 복원실험 제련 슬래그 분석과 해석 방법에 관한 연구)

  • Kim, Su Jin;Kim, Soo Ki
    • Journal of Conservation Science
    • /
    • v.33 no.2
    • /
    • pp.75-83
    • /
    • 2017
  • This study produced smelting slag through the reproduction of an ancient iron manufacturing technique, with the aim of facilitating a comprehensive understanding of the process by analyzing the slag components. The research suggests an interpretation method using the ratio of the subcomponents relative to the main slag components as an alternative to existing methods. We investigated the component sources within the smelting furnace from which the slag is derived by developing an understanding of the tendencies between slags. Based on bivariate graphs and triangular coordinate data analysis, it was found that a slag can be categorized according to its components. The groups were identified as the ore slag group (centered on the ore) and the clay slag group (centered on clay and granite soil). This research determined that it is possible to estimate the components from which a slag derives, depending on which group it belongs to or resembles, as shown in Figures 4~7. It was found that a comprehensive understanding of the ratios between the components was more accurate than a simple analysis of the contents for interpreting ancient iron manufacturing processes. This is based on the fact that a higher ratio of TiO₂ was detected by the component analysis, and an analysis of all the slags showed that the CaO/SiO₂ ratio was lower than 0.4, which corresponds to the reproduction experiment condition in which no flux was used.
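
The interpretation method above relies on ratios of subcomponents to the main slag components, with CaO/SiO₂ below 0.4 matching the no-flux experimental condition. Below is a minimal sketch in Python of how such ratio-based grouping might be computed from oxide composition data; the 0.4 threshold comes from the abstract, while the sample values, the TiO₂ cut-off, and the grouping rule are illustrative assumptions, not the authors' procedure.

```python
import pandas as pd

# Hypothetical oxide compositions (wt%) for a few slag samples; values are illustrative only.
slags = pd.DataFrame(
    {
        "sample": ["S1", "S2", "S3"],
        "SiO2": [32.5, 45.1, 38.0],
        "CaO": [4.2, 20.3, 9.5],
        "TiO2": [6.8, 0.4, 3.1],
        "FeO": [48.0, 25.0, 40.2],
    }
).set_index("sample")

# Ratio of a subcomponent to a main component, as the abstract suggests interpreting.
slags["CaO/SiO2"] = slags["CaO"] / slags["SiO2"]

# CaO/SiO2 < 0.4 is consistent with smelting without flux (per the abstract);
# the TiO2-based split into ore- vs. clay-centered groups is an assumed illustration.
slags["flux_used"] = slags["CaO/SiO2"] >= 0.4
slags["group"] = slags["TiO2"].apply(lambda t: "ore slag group" if t > 2.0 else "clay slag group")

print(slags[["CaO/SiO2", "flux_used", "group"]])
```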

Application of Multi-Criteria Analysis and GIS to the Coastal Assessment (GIS와 다기준분석법(MCA)을 활용한 연안지역 평가방법 연구)

  • 최희정;윤진숙;황철수
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2003.04a
    • /
    • pp.510-516
    • /
    • 2003
  • Determining coastal management policy requires collecting diverse information and having techniques to manage and analyze it systematically. In particular, to characterize areas where various stakeholders and purposes conflict, environmental, social, and economic criteria and the value systems of decision makers must be reflected, and the factors carrying those preferences must be analyzed efficiently. Accordingly, this study examined a method of combining multi-criteria analysis, specifically the AHP model, with a GIS environment that facilitates the processing and analysis of spatial data. In the analysis process, socio-economic and environmental indicators affecting the area were selected, and the data for these indicators were converted through GIS into a form suitable for analysis. In parallel, the importance of each criterion affecting the area was assessed, and weights were estimated with AHP, one of the multi-criteria analysis methods, so that the opinions of decision makers could be reflected. Next, data from various sources were standardized and built as GIS raster data, and a final result layer was generated by applying the weights to the individual layers and combining them with map algebra and overlay analysis. The cell values on the resulting layer, each representing a spatial alternative, were then compared and analyzed. The results can provide coastal and terrestrial information for managing the finite resources of the coast and the diverse uses of its space. Integrating GIS with multi-criteria analysis makes it possible to analyze spatial information from various sources and reveal the current state of the coast. In addition, because the analysis results are explained simply and clearly, they not only provide useful information to policy makers but also help in establishing practical coastal management plans.
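
As a rough illustration of the workflow described above (AHP weights combined with raster overlay in GIS), the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector, checks consistency, and applies the weights to standardized raster layers with map algebra. The criteria, matrix values, and layer data are invented stand-ins, not the study's actual inputs, and any GIS-specific preprocessing is omitted.

```python
import numpy as np

# Hypothetical AHP pairwise comparison matrix for three criteria
# (environmental, social, economic); the judgments are illustrative only.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# The principal eigenvector of the comparison matrix gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(np.real(eigvals))
w = np.real(eigvecs[:, idx])
w = w / w.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.real(eigvals[idx])
cr = ((lam_max - len(A)) / (len(A) - 1)) / 0.58

# Standardized raster layers (0-1), one per criterion; random stand-ins here.
rng = np.random.default_rng(0)
layers = rng.random((3, 50, 50))

# Weighted overlay (map algebra): a suitability score for every cell.
suitability = np.tensordot(w, layers, axes=1)
print("weights:", np.round(w, 3), "CR:", round(cr, 3), "max cell score:", round(float(suitability.max()), 3))
```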

A Study on the Considerations for Constructing RDA Application Profiles (RDA 응용 프로파일 구축시 고려사항에 관한 연구)

  • Lee, Mihwa
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.30 no.4
    • /
    • pp.29-50
    • /
    • 2019
  • This study suggests considerations for application profiles of the 2019 revised RDA, which was revised to reflect the LRM and linked data, based on literature reviews and case studies. First, additional elements were recommended as contents of the application profiles: inverse element, broader element, narrower element, domain, range, alternate label name, mapping to MARC, mapping to BIBFRAME, and RDA description examples as new elements, in addition to element name, element ID, element URL, description method, vocabulary encoding scheme, data provenance element, data provenance value, and notes, which had already been suggested by previous research. Second, representations of the RDA rules in the form of flow charts and application profiles were suggested, based on an analysis of the rules, so that rules in which every element has four types of description method and many conditions and options can be structured within RDA application profiles. Third, a mapping from RDA to BIBFRAME was suggested within the RDA application profiles, because RDA and BIBFRAME are related as content standard and encoding format, and mapping between them is needed to build BIBFRAME editors that use RDA as the content standard. This study will contribute to finding methods for constructing RDA application profiles and BIBFRAME application profiles with RDA as the content standard.
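
To make the element set above concrete, the sketch below lays out a single application-profile entry as a plain Python dictionary whose keys follow the elements listed in the abstract; every value is a placeholder chosen for illustration, not an entry from an actual profile or registry.

```python
# One RDA application-profile entry; only the keys follow the abstract's element set,
# and all values below are placeholders for illustration.
profile_entry = {
    # elements already suggested by previous research
    "element_name": "title proper",
    "element_id": "P30156",                                        # placeholder identifier
    "element_url": "http://rdaregistry.info/Elements/m/P30156",    # placeholder URL
    "description_method": "structured description",
    "vocabulary_encoding_scheme": None,
    "data_provenance_element": "source consulted",
    "data_provenance_value": "title page",
    "notes": "record in the language of the resource",
    # additional elements recommended in this study
    "inverse_element": None,
    "broader_element": "title",
    "narrower_element": None,
    "domain": "Manifestation",
    "range": "Literal",
    "alternate_label_name": "title",
    "mapping_to_MARC": "245 $a",
    "mapping_to_BIBFRAME": "bf:title / bf:mainTitle",
    "rda_description_example": "Journal of conservation science",
}

for key, value in profile_entry.items():
    print(f"{key}: {value}")
```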

Study on Developing the Information System for ESG Disclosure Management (ESG 정보공시 관리를 위한 정보시스템 개발에 관한 연구)

  • Kim, Seung-wook
    • Journal of Venture Innovation
    • /
    • v.7 no.1
    • /
    • pp.77-90
    • /
    • 2024
  • While discussions on ESG are actively taking place in Europe and elsewhere, the number of countries pushing for mandatory ESG disclosure of non-financial information by listed companies is rapidly increasing. As companies respond to mandatory global ESG disclosure, however, problems are emerging, such as the stringent requirements of global ESG disclosure standards, the complexity of data management, and a lack of understanding and preparation regarding the ESG regime itself. Disclosure also requires a reasoned analysis of how the business opportunities and risk factors arising from climate change affect the company financially, so producing results that meet the disclosure standards is expected to be quite difficult. Performing tasks such as ESG management activities and information disclosure requires data of various types and from various sources, and management through an information system is necessary to measure these data transparently, collect them without error, and manage them without omission. Therefore, in this study we designed an integrated ESG data management model that brings together the various related indicators and data, so that a company's ESG activities can be conveyed transparently and efficiently to stakeholders through ESG disclosure, and we developed a framework for implementing an information system to manage them. These results can help companies that face practical difficulties with ESG disclosure to manage it efficiently. In addition, presenting an integrated data management model based on an analysis of the ESG disclosure work process, and developing an information system to support ESG disclosure, are academically meaningful for future ESG research.
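
As a very small sketch of what an integrated ESG data model might look like in code (not the system developed in the study), the example below defines an indicator record that carries its value, unit, source system, and disclosure standard, plus an omission check before disclosure. The field names, codes, and sample standards are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ESGIndicator:
    """One ESG indicator record in a hypothetical integrated data model."""
    code: str                 # internal indicator code
    name: str                 # e.g. "Scope 1 GHG emissions"
    value: Optional[float]    # None until the figure has been collected
    unit: str
    source_system: str        # system the raw data comes from (ERP, HR system, ...)
    standard: str             # disclosure standard the indicator maps to (illustrative)
    period: str

def missing_indicators(indicators: List[ESGIndicator]) -> List[str]:
    """Return codes of indicators not yet collected (omission check before disclosure)."""
    return [i.code for i in indicators if i.value is None]

indicators = [
    ESGIndicator("E-01", "Scope 1 GHG emissions", 1250.0, "tCO2eq", "energy meter DB", "GRI 305-1", "FY2023"),
    ESGIndicator("S-03", "Employee turnover rate", None, "%", "HR system", "GRI 401-1", "FY2023"),
]
print("not yet collected:", missing_indicators(indicators))
```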

Evaluation of Classification Algorithm Performance of Sentiment Analysis Using Entropy Score (엔트로피 점수를 이용한 감성분석 분류알고리즘의 수행도 평가)

  • Park, Man-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.9
    • /
    • pp.1153-1158
    • /
    • 2018
  • Among the many available information sources, online customer evaluations and social media information are critical for businesses because they influence customers' decision making. Surveys aimed at identifying the variety of customers' needs and complaints are limited by the time and money they require. Customer review data from online shopping malls provide an ideal data source for analyzing customer sentiment about products. In this study, we collected product review data for Samsung and Apple smartphones from Amazon and applied five classification algorithms that previous studies have used as representative sentiment analysis techniques: support vector machines, bagging, random forest, classification and regression trees, and maximum entropy. We proposed an entropy score that can comprehensively evaluate the performance of a classification algorithm. When the five algorithms were evaluated with this entropy score, the SVM algorithm ranked highest.
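
The abstract does not spell out how the proposed entropy score is defined, so the sketch below only illustrates the general idea under an assumed formulation: per-algorithm performance measures are normalized column-wise and combined with entropy-based weights, as in conventional entropy weighting. The metric values are invented, and the actual score in the paper may be computed differently.

```python
import numpy as np

# Rows: algorithms, columns: performance measures (e.g., accuracy, precision, recall).
# The numbers are invented for illustration and are not the paper's results.
algorithms = ["SVM", "bagging", "random forest", "CART", "maximum entropy"]
scores = np.array([
    [0.86, 0.84, 0.85],
    [0.80, 0.78, 0.79],
    [0.83, 0.82, 0.81],
    [0.76, 0.74, 0.77],
    [0.81, 0.80, 0.80],
])

# Assumed entropy-weighting scheme: measures that discriminate more strongly
# between algorithms receive larger weights.
p = scores / scores.sum(axis=0)                                   # normalize each measure
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(algorithms))  # entropy per measure
weights = (1 - entropy) / (1 - entropy).sum()

composite = scores @ weights                                      # composite score per algorithm
for name, score in sorted(zip(algorithms, composite), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```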

QSPR model for the boiling point of diverse organic compounds with applicability domain (다양한 유기화합물의 비등점 예측을 위한 QSPR 모델 및 이의 적용구역)

  • Shin, Seong Eun;Cha, Ji Young;Kim, Kwang-Yon;No, Kyoung Tai
    • Analytical Science and Technology
    • /
    • v.28 no.4
    • /
    • pp.270-277
    • /
    • 2015
  • The boiling point (BP) is one of the most fundamental physicochemical properties of organic compounds for characterizing and identifying the thermal behavior of target compounds. Previously developed QSPR equations, however, still have limitations for specific classes of compounds, such as high-energy molecules, mainly because of the lack of experimental data and their narrow coverage. In this study, a large, solid BP dataset of 5,923 organic compounds was secured after dedicated pre-filtering of experimental data from different sources; it consists not only of common organic molecules but also of some specially used molecules, and this dataset was used to build the new BP prediction model. Various machine learning methods were applied to the newly collected data based on a meaningful 2D descriptor set. Combined validation showed acceptable validity and robustness of our models, and consensus approaches across the individual models were also evaluated. The applicability domain of the BP prediction model was defined based on the descriptors of the training set.
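
A minimal sketch of the overall workflow described above: fit a descriptor-based regression model for boiling point and define a simple descriptor-range applicability domain from the training set. It uses random stand-in descriptors and a random forest from scikit-learn rather than the authors' curated dataset, 2D descriptor set, or consensus scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in data: 500 compounds x 20 "2D descriptors" and boiling points in kelvin.
# In the study these would come from the curated experimental dataset instead.
X = rng.normal(size=(500, 20))
y = 350 + 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(scale=10, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out set:", round(model.score(X_test, y_test), 3))

# Range-based applicability domain: a query compound is "in domain" only if
# every descriptor falls within the min/max observed in the training set.
lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def in_domain(x: np.ndarray) -> bool:
    return bool(np.all((x >= lo) & (x <= hi)))

query = X_test[0]
print("predicted BP:", round(float(model.predict(query.reshape(1, -1))[0]), 1), "in domain:", in_domain(query))
```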

A Study on the Effects of Search Language on Web Searching Behavior: Focused on the Differences of Web Searching Pattern (검색 언어가 웹 정보검색행위에 미치는 영향에 관한 연구 - 웹 정보검색행위의 양상 차이를 중심으로 -)

  • Byun, Jeayeon
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.52 no.3
    • /
    • pp.289-334
    • /
    • 2018
  • Even though information in many languages other than English is increasing quickly, English still plays the role of the lingua franca and accounts for the largest proportion of content on the web. It is therefore necessary to investigate the key features of, and differences between, information searching behavior using the mother tongue as the search language and information searching behavior using English as the search language, for users who are not native speakers of English, so that they can acquire more diverse and abundant information. This study conducted a web searching experiment using the concurrent think-aloud method to examine the information searching behavior and cognitive processes of twenty-four undergraduate students at a private university in South Korea during Korean-language and English-language searches. Based on the qualitative data, a frequency analysis of web search patterns by search language was applied. As a result, information searching behavior in Korean searches was active, aggressive, and independent, while in English searches it was passive, submissive, and dependent. In Korean searches, the main features were query formulation by extracting and combining terms from various sources such as the users, the tasks, and the system; adjustment of the search range at diverse levels; smooth filtering when selecting items on search engine results pages; exploration and comparison of many items; and browsing of the overall contents of web pages. In English searches, by contrast, the main features were query formulation using terms extracted mainly from the task; adjustment of the search range at a limited level; item selection relying on the relevance between items, such as categories or links; repeated exploration of the same item; browsing of only partial contents of web pages; and frequent use of language support tools such as dictionaries or translators.

A Study on Development of Digital Compilation Management System for Local Culture Contents: Focusing on the Case of The Encyclopedia of Korean Local Culture (향토문화 콘텐츠를 위한 디지털 편찬 관리시스템 개발에 관한 연구: "한국향토문화전자대전"의 사례를 중심으로)

  • Kim, Su-Young
    • Journal of the Korean Society for Information Management
    • /
    • v.26 no.3
    • /
    • pp.213-237
    • /
    • 2009
  • Local culture is the cultural heritage that has come down from generation to generation in the natural environment of a region; it includes history, tradition, natural features, art, and historic relics. The Academy of Korean Studies has compiled "The Encyclopedia of Korean Local Culture" using such local culture contents. Local culture contents have the features of documentary materials, such as authenticating the source and managing a hierarchical structure. Thus, to deal with local culture contents, a "circular knowledge information management system" is sought, one that helps basic, fragmentary, and high-level information circulate within the system to create new knowledge information. A user of this circular knowledge information management system can not only collect data directly in it, but also fetch data from other databases, and processing the collected data helps to create new knowledge information. However, it is very difficult to sustain the original, meaningful hierarchy contained in the various kinds of local culture contents when building a new database, and this kind of work needs many rounds of correction over a long period of time. Therefore, a system in which compilation, correction, and service can be carried out simultaneously is needed. In this study, focusing on the case of "The Encyclopedia of Korean Local Culture", I propose an XML-based digital compilation management system that can express hierarchical information and sustain the semantic features of local culture contents containing many ancient documents, and I introduce the expanded functions developed to manage contents in the system.
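
To illustrate how hierarchical local culture content can be expressed and traversed in XML, the following is a minimal sketch; the element names, attributes, and sample article are invented for illustration and are not the actual schema of the compilation management system.

```python
import xml.etree.ElementTree as ET

# Hypothetical hierarchical article for an encyclopedia of local culture;
# the tag names and content are invented, not the system's real schema.
xml_source = """
<article id="GC00100001">
  <metadata>
    <title>Local folk festival</title>
    <region>Seongnam</region>
    <source authenticated="true">local gazetteer, 1899</source>
  </metadata>
  <body>
    <section title="Origin">
      <paragraph>Basic, fragmentary information collected in a field survey.</paragraph>
      <section title="Related ancient documents">
        <paragraph>High-level information derived from ancient documents.</paragraph>
      </section>
    </section>
  </body>
</article>
"""

root = ET.fromstring(xml_source)

def walk(element, depth=0):
    """Print the element hierarchy, preserving the semantic structure of the content."""
    label = element.get("title") or element.get("id") or (element.text or "").strip()
    print("  " * depth + f"{element.tag}: {label}")
    for child in element:
        walk(child, depth + 1)

walk(root)
```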

Analysis of Highway Traffic Indices Using Internet Search Data (검색 트래픽 정보를 활용한 고속도로 교통지표 분석 연구)

  • Ryu, Ingon;Lee, Jaeyoung;Park, Gyeong Chul;Choi, Keechoo;Hwang, Jun-Mun
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.1
    • /
    • pp.14-28
    • /
    • 2015
  • Numerous studies have been conducted using internet search data since the mid-2000s; for example, Google Inc. developed a service that predicts influenza patterns from internet search data. The main objective of this study is to test the hypothesis that highway traffic indices follow patterns similar to internet search patterns. To this end, models predicting the number of vehicles entering the expressway and the space-mean speed were developed, and their goodness-of-fit was assessed. The results revealed several findings. First, Google search traffic was a good predictor in the TCS entering-traffic-volume model at sites with frequent commute trips, and it had a negative correlation with the TCS entering traffic volume. Second, Naver search traffic was useful in the TCS entering-traffic-volume model at sites with numerous recreational trips, and it was positively correlated with the TCS entering traffic volume. Third, the VDS speed had a negative relationship with the search traffic in the time series diagrams. Lastly, the transfer function noise time series model showed a better goodness-of-fit than the other time series models. "Big Data" from internet search can be expected to find wide application in the transportation field if the sources of search traffic, time differences, and aggregation units are explored in follow-up studies.
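
As a rough sketch of the kind of model the abstract compares, the code below fits a regression with ARMA errors on a lagged exogenous search-traffic series using statsmodels' SARIMAX, which approximates a transfer function noise model. The data are simulated stand-ins, and the lag structure and model orders are assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 200

# Simulated stand-ins: a weekly-cyclic search-traffic index and expressway entering volume.
search_traffic = pd.Series(50 + 10 * np.sin(np.arange(n) * 2 * np.pi / 7) + rng.normal(0, 2, n))
exog = search_traffic.shift(1).bfill()            # assume a one-day lead of search traffic
entering_volume = 20000 + 150 * exog + rng.normal(0, 500, n)

# Regression with ARMA(1,1) errors and the lagged search traffic as the input series:
# a simple stand-in for a transfer function noise model.
model = SARIMAX(entering_volume, exog=exog, order=(1, 0, 1)).fit(disp=False)
print(model.summary().tables[1])
```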

A Genetic Algorithm for Materialized View Selection in Data Warehouses (데이터웨어하우스에서 유전자 알고리즘을 이용한 구체화된 뷰 선택 기법)

  • Lee, Min-Soo
    • The KIPS Transactions: Part D
    • /
    • v.11D no.2
    • /
    • pp.325-338
    • /
    • 2004
  • A data warehouse stores information that is collected from multiple, heterogeneous information sources for the purpose of complex querying and analysis. Information in the warehouse is typically stored in the form of materialized views, which represent pre-computed portions of frequently asked queries. One of the most important tasks in designing a warehouse is the selection of the materialized views to be maintained in it. The goal is to select a set of views so that the total query response time over all queries is minimized while only a limited amount of time is available for maintaining the views (the maintenance-cost view selection problem). In this paper, we propose an efficient solution to the maintenance-cost view selection problem using a genetic algorithm that computes a near-optimal set of views. Specifically, we explore the problem in the context of OR view graphs. We show that our approach represents a dramatic improvement in time complexity over existing search-based approaches that use heuristics. Our analysis shows that the algorithm consistently yields a solution whose query cost is only about 10% above the optimal query cost, while exhibiting an impressive performance of only a linear increase in execution time. We have implemented a prototype version of the algorithm and used it to evaluate our approach.
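
The sketch below shows the general shape of a genetic algorithm for maintenance-cost-constrained view selection: bitstring chromosomes encode which views are materialized, fitness is the total query cost, and candidates exceeding the maintenance-time budget are penalized. The cost figures, GA parameters, and penalty scheme are illustrative assumptions; the paper's encoding for OR view graphs and its cost model are more involved.

```python
import random

random.seed(0)

N_VIEWS = 8
# Illustrative per-view numbers: query-cost saving if materialized, and maintenance cost.
SAVING = [40, 25, 60, 10, 35, 50, 15, 30]
MAINTENANCE = [12, 8, 20, 3, 10, 18, 5, 9]
BASE_QUERY_COST = 300          # total query cost with no materialized views
MAINTENANCE_BUDGET = 40        # maintenance-time constraint

def fitness(chromosome):
    """Total query cost of a selection; infeasible selections get a heavy penalty."""
    saving = sum(s for s, bit in zip(SAVING, chromosome) if bit)
    maintenance = sum(m for m, bit in zip(MAINTENANCE, chromosome) if bit)
    penalty = 1000 * max(0, maintenance - MAINTENANCE_BUDGET)
    return BASE_QUERY_COST - saving + penalty

def crossover(a, b):
    point = random.randrange(1, N_VIEWS)           # single-point crossover
    return a[:point] + b[point:]

def mutate(c, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in c]

population = [[random.randint(0, 1) for _ in range(N_VIEWS)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness)
    parents = population[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=fitness)
print("materialized views:", [i for i, bit in enumerate(best) if bit], "query cost:", fitness(best))
```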