• Title/Summary/Keyword: Open Account

Search Result 330, Processing Time 0.038 seconds

Occupational Demands and Educational Needs in Korean Librarianship (한국적 도서관학교육과정 연구)

  • Choi Sung Jin;Yoon Byong Tae;Koo Bon Young
    • Journal of the Korean Society for Library and Information Science / v.12 / pp.269-327 / 1985
  • This study was undertaken to meet more fully the demands for improved training of library personnel, occasioned by the rapidly changing roles and functions of libraries as they adapt to the vast social, economic and technological changes currently in progress in Korean society. The specific purpose of this research is to develop a standard curriculum at the bachelor's level that will properly equip the professional personnel in Korean libraries for the changes confronting them. This study started with the premise that, to establish a sound base for curriculum development, it was first necessary to determine what concepts, knowledge, and techniques professional library personnel require to perform their jobs at an optimal level of efficiency. Explicitly, it was felt that for the development of useful curricula and courses at the bachelor's level, a prime source of knowledge should be the functional behaviours that are necessary in the job situation. To determine specifically what these terminal performance behaviours should be, so that the learning experience provided could be rooted in reality, the decision was reached to use a systems approach to curriculum development, which attempts to break the mold of traditional concepts and to approach instruction from an open, innovative, and product-oriented perspective. This study was designed to: (1) identify what knowledge and techniques professional library personnel require to perform the job activities in which they are actually engaged, (2) evaluate the educational needs of the knowledge and techniques that the professional librarian respondents indicated, and (3) categorise the knowledge and techniques into teaching subjects and present those subjects in order of their educational importance.
The main data-gathering instrument for the study, a questionnaire containing 254 items, was sent to a randomly selected sample of library school graduates working in libraries and related institutions in Korea. Eighty-three librarians completed and returned the questionnaire. After analysing the returned questionnaires, the following conclusions were reached: (A) To develop a rational curriculum rooted in the real situation of Korean libraries, compulsory subjects should be properly chosen from those ranked highest in importance by the respondents. The character and educational policies of, and other teaching subjects offered by, the institution to which a given library school belongs should also be taken into account in determining compulsory subjects. (B) It is traditionally assumed that education in librarianship should be more concerned with the theoretical foundations on which any solution can be developed than with the particular professional needs and techniques used in existing library environments. However, the respondents gave the former a surprisingly low rating, so this traditional assumption must be reviewed. (C) It is universally accepted in developing library school curricula that compulsory subjects cover the areas of knowledge all students need to learn, while optional subjects cover areas needed only by some. Since librarianship provides no such clear line of demarcation, a realistic approach may be to designate subjects in the areas rated high by the respondents as compulsory and those in the areas rated low as optional. (D) Optional subjects ranked considerably higher in importance by the respondents should be given more credits than others, and those ranked lower might be given fewer credits, offered infrequently, or combined.
(E) A standard list of compulsory and optional subjects with weekly teaching hours for a Korean library school is presented in the fourth chapter of this report.


A Case Study on the UK Park and Green Space Policies for Inclusive Urban Regeneration (영국의 포용적 도시재생을 위한 공원녹지 정책 사례 연구)

  • Kim, Jung-Hwa;Kim, Yong-Gook
    • Journal of the Korean Institute of Landscape Architecture / v.47 no.5 / pp.78-90 / 2019
  • The purpose of this study is to explore the direction of developing park and green space policies for inclusive urban planning and regeneration. By reviewing the status, budget, and laws pertaining to urban parks in Korea, as well as assessing the inclusivity of urban parks, this study revealed the following problems and limitations in Korea. First, the urban park system, which relies on indicators such as park area per capita and green space ratio, is focused only on quantitative expansion. Second, the distribution of urban parks is unequal; the higher the number of vulnerable residents, the lower the quality of urban parks and green spaces. This study then focused on the UK central government, along with five local governments: London, Edinburgh, Cardiff, Belfast, and Liverpool. Through an analysis of the contexts and contents of UK park and green space policies that reduce socioeconomic inequalities while increasing inclusiveness, this study discovered the following. The background for policy establishment was provided by the government's awareness that tackling socioeconomic inequalities is necessary for an inclusive society, by the shift in urban regeneration policies from physical redevelopment to neighborhood renewal, and by survey research on the correlations among parks and green spaces, inequality, health, and well-being. As a result, the creation of an inclusive society has been reflected in the stated goals of the UK's national plan and in the strategies for park and green space supply and qualitative improvement. Deprived areas and vulnerable groups have been included in many local governments' park and green space policies. Tools for analyzing deficiencies in parks and methods for the qualitative evaluation of parks were also developed.
In addition, for the sustainability of each project, various funding programs have been set up, such as fund-raising and fund-matching schemes. Various ways of supporting partnerships have been arranged, such as establishing collaborative bodies for government organizations and allowing for the participation of private organizations. The study results suggest five policy schemes: conducting research on inequality and inclusiveness in parks and green spaces, developing strategies for improving the quality of park services, identifying tools for analyzing policy areas, developing park project models for urban regeneration, and building partnerships and support systems.

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1509-1521 / 2020
  • Experiments to validate the surface reflectance products of the Korea Multi-Purpose Satellite (KOMPSAT-3A) were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometric reflectance measurements. The atmospheric reflectance and surface reflectance products were generated using an extension of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets, from 2016, 2017, and 2018, were processed with two versions of the sensor model (ver. 1.4, released in 2017, and ver. 1.5, released in 2019), whose gain and offset parameters are applied in the absolute atmospheric correction. The results showed that the reflectance products from ver. 1.4 matched the RadCalNet BTCN data relatively well compared to those from ver. 1.5. In addition, reflectance products obtained from Landsat-8 images with the USGS LaSRC algorithm and from Sentinel-2B images with the SNAP Sen2Cor program were used to quantitatively verify the differences from those of KOMPSAT-3A. Based on the RadCalNet BTCN data, the differences in the surface reflectance of the KOMPSAT-3A image were highly consistent: -0.031 to 0.034 for the B band, -0.001 to 0.055 for the G band, -0.072 to 0.037 for the R band, and -0.060 to 0.022 for the NIR band. The surface reflectance of KOMPSAT-3A also indicated an accuracy level sufficient for further applications, compared to those of the Landsat-8 and Sentinel-2B images. These results are meaningful in confirming the applicability of Analysis Ready Data (ARD) surface reflectance products to high-resolution satellites.
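The absolute atmospheric correction described above first converts digital numbers to radiance with the sensor model's gain and offset, then to reflectance. A minimal sketch of that conversion, with hypothetical calibration values (not KOMPSAT-3A's actual ver. 1.4 or 1.5 coefficients):

```python
import math

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Convert a digital number to top-of-atmosphere reflectance.

    Standard absolute calibration: radiance L = gain*DN + offset,
    then rho = pi * L * d^2 / (ESUN * cos(solar zenith)).
    """
    radiance = gain * dn + offset                  # W/(m^2 sr um)
    zenith = math.radians(90.0 - sun_elev_deg)     # solar zenith angle
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(zenith))

# Hypothetical DN, gain/offset, and solar geometry, for illustration only
rho = toa_reflectance(dn=812, gain=0.02, offset=0.0,
                      esun=1858.0, sun_elev_deg=45.0)
print(round(rho, 4))
```

Swapping in a different sensor-model version simply means substituting its gain and offset per band, which is why the two model versions yield different reflectance products from the same imagery.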

The Case Study on Industry-Leading Marketing of Woori Investment and Securities (우리투자증권의 시장선도 마케팅 사례연구)

  • Choi, Eun-Jung;Lee, Sung-Ho;Lee, Sanghyun;Lee, Doo-Hee
    • Asia Marketing Journal / v.13 no.4 / pp.227-251 / 2012
  • This study analyzed Woori Investment and Securities' industry-leading marketing from both a brand management and a marketing decision-making perspective. By executing a marketing strategy different from its competitors', Woori Investment and Securities recognized recent changes in the asset management and investment markets as an open opportunity and responded quickly to them. First, the company launched the octo brand as a multi-account product two years before its competitors offered their own products. In particular, it created a differentiated brand image using the blue octopus character, which became familiar to the general financial community and was consistently employed as part of an integrated marketing communications strategy. Second, it executed a brand expansion strategy by sub-branding octo across a variety of new financial products, responding to rapid changes in the domestic financial and asset management markets. Through this strategic evolution, the octo brand became a successful wealth management brand and representative of Woori Investment & Securities. Third, the company channeled market research, demand and trend analysis, and customer needs acquired through various customer contact channels into a marketing perspective. Thus, marketing has participated in the product development stage, a rarity in the finance industry, giving Woori Investment and Securities a leading marketing system. The heart of its successful product creation lies in collaboration across the customer bases of the finance companies in the Woori Financial Group. The present study suggests a corresponding strategy for the octo brand, which is expected to enter the maturity stage of its product life cycle. In addition, this study found a need to modify the current positioning strategy in order to preserve sustainability in the increasingly competitive asset management market. It also suggests the need for an offensive strategy to counter the company with the leading market share, and to address the issue of cannibalization within the Woori Financial Group.


A Study on the Needs Analysis of University-Regional Collaborative Startup Co-Space Composition (대학-지역 연계 협업적 창업공간(Co-Space) 구성 요구도 분석)

  • Kim, In-Sook;Yang, Ji-Hee;Lee, Sang-Seub
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.159-172 / 2023
  • The purpose of this study is to explore a plan for configuring a collaborative start-up space (Co-Space) linked between universities and their regions, through a demand analysis of the composition of such spaces. To this end, a survey was conducted, and the collected data were analyzed using the t-test and the Locus for Focus model. In addition, FGIs were conducted with entrepreneurs, and the direction of the composition of the university-region Co-Space was derived from various aspects. The results of this study are as follows. First, the analysis of the necessity of a university-region Co-Space showed that both the need, as recognized by local residents, to open up start-up spaces and the need to build start-up spaces in the region were high. In addition, men recognized the need to build a space for start-ups in the community more highly than women did. Second, the analysis of demands for a university-region Co-Space showed that the difference between its current importance and its future necessity was statistically significant. Third, the analysis of the composition of start-up spaces created through cooperation between universities and regions showed differing demands regarding openness versus closedness and regarding the size of the start-up space. The implications of the study are as follows. First, Co-Spaces need to be constructed in conjunction with universities in accordance with the demands of start-up companies in the region, by stage of development. Second, it is necessary to organize a customized Co-Space that takes into account the size and operation of the start-up space. Third, it is necessary to establish an experience-based open space for local residents in the remaining space of the university. Fourth, it is necessary to establish a Co-Space that enables an organic network among local communities, start-up investment companies, start-up support institutions, and start-up companies. This study is significant in that it proposes a regional start-up ecosystem and a cooperative start-up space structure for strengthening start-up sustainability through cooperation between universities and local communities. The results are expected to serve as useful basic data for Co-Space construction to build regional start-up ecosystems, in a climate that emphasizes the importance of start-up space as a major factor affecting start-up companies.
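The demand analysis above compares each item's current importance with its future necessity and classifies items into priority quadrants, in the style of the Locus for Focus model. A toy sketch of one common form of that classification, splitting at the means; all item names and scores here are hypothetical, not the survey's actual measures:

```python
def locus_for_focus(items):
    """Classify items into quadrants: importance (first value) vs. the
    discrepancy between future necessity and current level (second value),
    each split at its mean. 'HH' marks the high-priority quadrant."""
    mean_imp = sum(i for i, _ in items.values()) / len(items)
    mean_dis = sum(d for _, d in items.values()) / len(items)
    quads = {}
    for name, (imp, dis) in items.items():
        if dis >= mean_dis:
            quads[name] = "HH" if imp >= mean_imp else "LH"
        else:
            quads[name] = "HL" if imp >= mean_imp else "LL"
    return quads

# Hypothetical survey items: (importance, necessity - current level)
items = {"open lounge": (4.5, 1.2), "meeting rooms": (3.0, 0.4),
         "maker space": (4.1, 0.9), "storage": (2.4, 0.1)}
quads_out = locus_for_focus(items)
print(quads_out)
```

Items landing in the "HH" quadrant (high importance, high discrepancy) are the ones a needs analysis would prioritize for the Co-Space.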


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application of dropout. The F1 score, rather than overall accuracy, was used to evaluate how well the models classify the class of interest. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but adjacency carries little meaning for business data fields because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout.
Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well not only in fields where its effectiveness has been proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
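As noted, the F1 score rather than overall accuracy was used so that performance on the class of interest is what counts. A self-contained sketch of that metric on toy, imbalanced labels (not the Portuguese bank data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 for the class of interest: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy labels: with 8 of 10 negatives, a model predicting all zeros would
# score 80% accuracy yet find no positives at all; F1 exposes that.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # precision 0.5, recall 0.5 -> F1 0.5
```

On imbalanced targets like telemarketing responses, this is why the study's model comparison rests on F1 rather than accuracy.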

An Epidemiological Study on the Industrial Injuries among Metal Products Manufacturing Workers in Young-Dung-Po, Seoul (일부 금속 및 기계제품 제조업체 근로자들의 산업재해($1980{\sim}1981$)에 관한 조사)

  • Lee, Jung-Hee
    • Journal of Preventive Medicine and Public Health / v.15 no.1 / pp.187-196 / 1982
  • The following are the results of a study of industrial accidents that occurred at 12 factories manufacturing metal products during the two-year period from January 1980 to December 1981 in the Yong-Dung-Po area of Seoul. 1. The incidence rate of industrial injuries was 45.7 per 1,000 workers in the sample group, and the rate for males (54.0) was three times that for females (17.5). 2. Among age groups, the highest rate (83.5) was observed in the group aged 19 and under, and the lowest in the group in their 40s. 3. Workers with shorter work experience had higher injury rates; in particular, workers with less than one year of experience showed the highest proportion of injuries, at 48.1%. 4. By working time, incidence peaked 3 and 7 hours after the beginning of work, at 6.0 and 6.1 per 1,000 workers, respectively. 5. By day of the week, the highest incidence was observed on Monday, at 8.4 per 1,000 workers, accounting for 18.3% of injuries. 6. By month of the year, the highest incidence was observed in July, at 5.4 per 1,000 workers, and the next highest in March, at 4.8; these account for 11.8% and 10.5% of total occurrences, respectively. 7. Among causes of injury, accidents caused by power-driven machinery showed the highest rate (37.5%), followed by handling without machinery (17.2%), falling objects (14.2%), and striking against objects (10.2%). 8. By part of the body affected, most injuries (84.3%) occurred on the upper and lower extremities, with rates of 58.8% for the former and 25.5% for the latter. Fingers were most frequently injured, at 40.3%. Comparing the sides of the extremities affected, 55.0% of injuries were on the right side and 45.0% on the left. 9. By nature of injury, laceration and open wound were the most common (34.0%), followed by fracture and dislocation (31.9%) and sprain (8.1%). 10. Treatment lasted less than one month in 68.9% of the injured cases, of which 14.5% recovered within 2 weeks and 54.4% required more than 2 weeks; the duration of treatment tended to be longer in larger industries. 11. The ratio of insured to uninsured accidents was 1 to 4.7.
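The rates above are incidence per 1,000 workers, a standard occupational-health measure. For reference, the calculation is simply (the counts here are hypothetical, chosen only to reproduce the reported overall rate of 45.7):

```python
def incidence_rate_per_1000(cases, workers):
    """Industrial-injury incidence rate per 1,000 workers."""
    return 1000 * cases / workers

# Hypothetical counts reproducing the study's overall rate of 45.7
print(round(incidence_rate_per_1000(cases=457, workers=10000), 1))
```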


A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young;Yoo, Young-Sang;Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society / v.10 no.1 / pp.73-97 / 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information- and knowledge-based competitiveness. As such, future technology strategies should be pursued through open R&D organizations. The development of future technologies should not be based simply on forecasts, but should take customer needs into account in advance and reflect them in the future technologies or services. This research selects as segmentation variables customers' attitudes toward accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on the demand for future telecommunication services, and uses them to segment the future telecom service market. The research thus seeks to segment the market from the technology R&D stage and to employ the results in formulating technology development strategies. Based on customer attitudes toward accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was used to further segment the two groups on the basis of their customer value orientation. A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The sample was divided into two sub-groups according to level of acceptance of new technology: one demonstrating a high level of technology acceptance (39.4%) and another with a comparatively lower level (60.6%).
These two sub-groups were each further divided into 5 smaller sub-groups (10 in total) through a second round of segmentation. The ten sub-groups were then analyzed in detail, including general demographic characteristics; usage patterns for existing telecom services such as mobile service, broadband internet, and wireless internet; and ownership of, or intention to purchase, computing and information devices. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segments were positioned in a way that facilitates the market entry, diffusion, and transferability of future telecommunication services.
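The hierarchical scheme above can be sketched as a two-stage split: first by technology-acceptance level, then by value orientation within each group. A toy illustration; the respondent tuples, orientation labels, and cutoff are all hypothetical, not the survey's actual measures:

```python
from collections import defaultdict

# Hypothetical respondents: (tech_acceptance_score, value_orientation)
respondents = [
    (4.2, "convenience"), (1.8, "economy"), (3.9, "relationship"),
    (2.1, "convenience"), (4.5, "economy"), (1.2, "relationship"),
]

def two_stage_segment(data, cutoff=3.0):
    """Stage 1: split by technology acceptance; stage 2: by value orientation."""
    segments = defaultdict(list)
    for score, orientation in data:
        stage1 = "high-acceptance" if score >= cutoff else "low-acceptance"
        segments[(stage1, orientation)].append(score)
    return segments

segs = two_stage_segment(respondents)
print(sorted(segs))
```

Each (acceptance level, orientation) key is one segment, mirroring how the study's two rounds of segmentation produce its ten sub-groups.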


A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In the franchise business, exclusive sales territory protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises social and political conflicts. When franchisees are not familiar with the related laws and regulations, franchisors have ample opportunity to exploit this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales area restriction under which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territory is a critical problem in the franchise business, there is little rigorous research on its causes, results, and evaluation, or on future directions based on empirical data. This paper addresses the problem not only in terms of logical and nomological validity but through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is often criticized for measurement error. Existing theories about exclusive sales territories can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territories from both the franchisor's and the franchisee's points of view. The output of an exclusive sales territory can be positive for franchisors but negative for franchisees; it can also be positive in terms of sales but negative in terms of profit. Therefore, variables and viewpoints must be set properly. The second concerns the motives or reasons why exclusive sales territories are protected.
The reasons can be classified into four groups: industry characteristics, franchise system characteristics, capability to maintain an exclusive sales territory, and strategic decisions. Within these four groups there are more specific variables and theories. Based on these theories, we develop nine hypotheses, which are briefly shown with the results in the last table below. To validate the hypotheses, data were collected from the Fair Trade Commission's homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 franchisors have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables were dichotomized by mean or median split if not inherently dichotomous, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions in testing the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions, such as industry type, economic conditions, company history, and various strategic purposes; it is almost impossible to find samples satisfying all of them, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables, but not on complex real data. The chi-square test is applied by grouping the samples into four cells with two criteria: whether they protect exclusive sales territories, and whether they satisfy the conditions of each hypothesis.
The test then examines whether the proportion of franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not. In fact, the chi-square test is equivalent to Poisson regression, which allows more flexible application. As a result, only three hypotheses were accepted. When attitude toward risk is high, so that the royalty fee is determined according to sales performance, exclusive territory protection yields poor results, as expected. When the franchisor protects exclusive territories in order to recruit franchisees easily, protection yields better results, and when protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance. High efficiency is achieved because exclusive territories prohibit free riding by franchisees who would exploit others' marketing efforts, encourage proper investment, and distribute franchisees evenly across regions. The other hypotheses were not supported by the significance tests. Exclusive sales territories should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by government agencies such as the FTC can be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
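The 2x2 chi-square test described above can be computed directly from the four cell counts. A sketch with hypothetical counts: only the margins (627 franchisors protecting territories out of 1,896) come from the abstract, while the split across a hypothesis condition is invented for illustration:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table.

    Rows: satisfies the hypothesis condition (yes/no).
    Cols: protects an exclusive sales territory (yes/no).
    """
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical 2x2 split; margins match the abstract (627 protect, 1,896 total)
stat = chi_square_2x2(a=200, b=150, c=427, d=1119)
print(round(stat, 2))  # compare against 3.84, the df=1 critical value at alpha=0.05
```

A statistic above 3.84 rejects independence at the 5% level with one degree of freedom, which is the decision rule behind accepting or rejecting each of the nine hypotheses.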


Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology comprises the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries, and classify the proper documents. 2) Determine whether each sentence is suitable for extracting information, and derive a confidence score. 3) Based on the predicate feature, extract the information from suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology proved to extract information effectively from various document types compared to the baseline model, whereas previous research performs poorly when extracting information from document types different from the training data.
In addition, this study prevents unnecessary extraction attempts on documents that do not include the answer, through a step that predicts the suitability of documents and sentences for extraction before the extraction step. It thus provides a method by which precision can be maintained even in a real web environment. The information extraction problem for knowledge base expansion has the characteristic that, because it targets unstructured documents on the real web, there is no guarantee that a document includes the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents in which there is none. The policy of predicting the suitability of document- and sentence-level extraction is meaningful in that it contributes to maintaining extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can be performed improperly when the morphological analysis is wrong; an improved morphological analyzer is needed to enhance extraction performance. Second, entity ambiguity: the information extraction system cannot distinguish entities that share a name but have different referents. If several people with the same name appear in the news, the system may not extract information about the intended one.
Future research needs to take measures to disambiguate persons with the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and developed an evaluation data set of 2,800 documents (400 questions x 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, though this is a costly activity that must be done manually. It is also necessary to develop a Korean benchmark data set for information extraction over multi-source web documents, to build an environment in which results can be evaluated more objectively.
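The suitability-prediction policy described above can be illustrated as a filter-then-extract loop. In this sketch the paper's bi-directional LSTM-CRF tagger and learned suitability model are replaced by trivial keyword heuristics, and the documents, query, and scoring are all invented for illustration:

```python
def extract_answers(docs, query_predicate, keyword, threshold=0.5):
    """Score each sentence's suitability for extraction first, and only
    attempt extraction above a threshold. This is what avoids answer
    attempts on documents that contain no answer, protecting precision."""
    def suitability(sentence):
        # Toy stand-in for the learned confidence model: does the
        # sentence mention the query predicate at all?
        return 1.0 if query_predicate in sentence else 0.0

    results = []
    for doc in docs:
        for sentence in doc.split("."):
            # Extraction (here a bare keyword match standing in for the
            # sequence tagger) runs only on sentences that pass the filter.
            if suitability(sentence) >= threshold and keyword in sentence:
                results.append(sentence.strip())
    return results

docs = ["The capital of France is Paris. It is large.",
        "Unrelated text with no answer."]
print(extract_answers(docs, query_predicate="capital", keyword="Paris"))
```

The second document yields nothing because no sentence passes the suitability gate, which is the behavior the paper relies on to keep precision high on real web documents.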