• Title/Summary/Keyword: University Information Systems


Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Most recent technologies have developed either through the advancement of a single technology or through interaction with other technologies, and many exhibit convergence arising from the interaction of two or more techniques. Efforts to respond to technological change in advance by forecasting the promising convergence technologies of the near future are steadily increasing, and many researchers are attempting such forecasts. Because a convergence technology inherits characteristics from several constituent technologies, forecasting promising convergence technologies is far more difficult than forecasting ordinary technologies with high growth potential. Nevertheless, attempts using big data analysis and social network analysis have produced some results: studies that discover new convergence technologies and analyze their trends are actively conducted, and information about new convergence technologies is now more abundant than in the past. Existing methods of analyzing convergence technology, however, have several limitations. First, most studies analyze data through predefined technology classifications. Recent technologies tend to be convergent and therefore draw on multiple fields, so a new convergence technology may not belong to any predefined class; the existing approach thus fails to reflect the dynamic nature of the convergence phenomenon.
Second, most existing methods forecast promising convergence technologies using general-purpose indicators and so fail to exploit what is specific to the convergence phenomenon. A new convergence technology depends heavily on the existing technologies from which it originates; depending on how those technologies change, it may grow into an independent field or disappear rapidly. Traditional indicators judge the growth potential of a convergence technology without reflecting this principle of convergence: that new technologies emerge from two or more mature technologies, and that technologies which have matured in turn affect the creation of further technologies. Third, previous studies provide no objective way to evaluate the accuracy of models that forecast promising convergence technologies. Because the field is complex, forecasting itself has been studied relatively little, and no established evaluation method exists. To activate this line of research, it is important to establish a method for objectively verifying and evaluating the accuracy of each proposed model. To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling we derive a new technology classification from the textual content itself, reflecting the dynamic state of the actual technology market rather than a fixed classification standard.
We then identify the influence relationships between technologies through each document's topic correspondence weights and structure them into a network. We also devise a centrality indicator, potential growth centrality (PGC), that forecasts the future growth of a technology from its centrality information, reflecting the convergence characteristics of each technology through technology maturity and the interdependence between technologies. In addition, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. We evaluate the performance and practical applicability of the proposed method on 13,477 patent documents. The forecast model based on the proposed centrality indicator achieves up to about 2.88 times the accuracy of forecast models based on currently used network indicators.
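The abstract does not give the PGC formula itself, but the underlying idea of scoring a technology by combining its own maturity with the maturity of the technologies it depends on can be sketched as follows. Everything here is illustrative: the technology names, the maturity values, and the weighting scheme are invented, not the paper's actual indicator.

```python
# Hypothetical sketch of a maturity-weighted dependency score, in the
# spirit of the paper's PGC indicator (the real formula is not given in
# the abstract; this only illustrates combining maturity and dependence).

def potential_growth_score(deps, maturity, alpha=0.5):
    """deps[t]     = list of technologies t depends on (its 'origins').
    maturity[t]    = a maturity score in [0, 1].
    Score = alpha * own maturity + (1 - alpha) * mean maturity of origins."""
    scores = {}
    for tech, origins in deps.items():
        origin_m = (sum(maturity[o] for o in origins) / len(origins)
                    if origins else 0.0)
        scores[tech] = alpha * maturity[tech] + (1 - alpha) * origin_m
    return scores

deps = {"A": [], "B": [], "C": ["A", "B"]}   # C converges A and B
maturity = {"A": 0.9, "B": 0.7, "C": 0.2}
scores = potential_growth_score(deps, maturity)
# C is young but builds on mature origins, so it scores above its own maturity
```

The point of the sketch is that a convergence technology inherits potential from its mature origins, which a purely local indicator would miss.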

A Study on the Revitalization of Tourism Industry through Big Data Analysis (한국관광 실태조사 빅 데이터 분석을 통한 관광산업 활성화 방안 연구)

  • Lee, Jungmi;Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.149-169 / 2018
  • Based on its open public-data policy and the "Government 3.0" initiative, Korea is accumulating large amounts of data in public institutions, particularly in the tourism field. Academic discussion that makes use of these tourism data is still limited, however: data on restaurants, hotels, and online tourism information are only partly open, the use of SNS big data in tourism remains restricted, and overall utilization of tourism big data analysis is therefore still low. In this paper we analyze the factors influencing foreign tourists' satisfaction with Korea, applying data-mining techniques and R programming to numerical survey data. We sought ways to revitalize the tourism industry by analyzing about 36,000 records from the "Survey on the actual situation of foreign tourists" (2013 to 2015) conducted by the Korea Culture & Tourism Research Institute, examining which factors most strongly influence the 'Satisfaction', 'Revisit intention', and 'Recommendation' variables and how large their practical effects are. We first integrated the 2013-2015 survey data stored in the tourist information system, eliminated variables inconsistent with the research purpose, and modified some variables to improve the accuracy of the analysis. We then analyzed the factors affecting the dependent variables using data-mining methods, decision trees (C5.0, CART, CHAID, QUEST), artificial neural networks, and logistic regression, in IBM SPSS Modeler 16.0, deriving the seven variables with the greatest effect on each dependent variable.
The seven variables most influencing 'overall satisfaction' were sightseeing spot attraction, food satisfaction, accommodation satisfaction, traffic satisfaction, guide service satisfaction, number of places visited, and country of origin, with food satisfaction and sightseeing spot attraction having the greatest influence. The seven variables most influencing 'revisit intention' were country, travel motivation, activity, food satisfaction, best activity, guide service satisfaction, and sightseeing spot attraction; the most influential were food satisfaction and Korean Wave travel motivation. The seven variables most influencing 'recommendation intention' were country, sightseeing spot attraction, number of places visited, food satisfaction, activity, tour guide service satisfaction, and cost; the most influential were country, sightseeing spot attraction, and food satisfaction. To examine each independent variable's influence more deeply, we used R to estimate effect sizes. Food satisfaction and sightseeing spot attraction had larger effects on overall satisfaction than the other influential variables. For revisit intention, the Korean Wave travel motive had a higher β value than the other variables, suggesting a need for policies that strengthen Korean Wave-related tourist attractions so as to lead to substantial revisits. For recommendation, as for satisfaction, sightseeing spot attraction and food satisfaction had higher β values than the other variables.
This analysis shows that 'food satisfaction' and 'sightseeing spot attraction' are the factors common to all three dependent variables ('Overall satisfaction', 'Revisit intention', and 'Recommendation') and that they significantly affect satisfaction with travel in Korea. The purpose of this study is to examine, through big data analysis, how to attract more foreign tourists to Korea. The results are expected to serve as basic data for analyzing tourism data, for establishing effective tourism policy, and for formulating activation plans that contribute to the future development of Korean tourism.
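As a hedged sketch of the decision-tree idea (not the SPSS Modeler models the authors actually ran), the snippet below ranks two candidate survey variables by the Gini impurity reduction of a binary split, the criterion CART uses. The variable names and the tiny data set are invented for illustration only.

```python
# Illustrative CART-style split ranking on invented survey records.
# Each record: (food_ok, sights_ok, satisfied) as 0/1 flags.
data = [
    (1, 1, 1), (1, 0, 1), (1, 1, 1), (0, 0, 0),
    (0, 1, 0), (1, 1, 1), (0, 0, 0), (1, 0, 0),
]

def gini(rows):
    """Gini impurity of the 'satisfied' label (last field) in rows."""
    if not rows:
        return 0.0
    p = sum(r[-1] for r in rows) / len(rows)
    return 1.0 - p * p - (1.0 - p) ** 2

def gain(rows, col):
    """Impurity reduction from splitting rows on column col."""
    left = [r for r in rows if r[col] == 1]
    right = [r for r in rows if r[col] == 0]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
    return gini(rows) - weighted

gains = {"food": gain(data, 0), "sights": gain(data, 1)}
best = max(gains, key=gains.get)   # variable the tree would split on first
```

On this toy data the food flag separates satisfied from unsatisfied tourists better, so a CART tree would split on it first; the real study ranks seven variables per dependent variable the same way, via the fitted models.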

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.101-123 / 2013
  • With recent advances in science and information technology, socio-economic activity is shifting from an industrial economy to a knowledge economy, and companies must create new value through continuous innovation, development of core competencies and technologies, and technological convergence. Identifying major trends in technology research and predicting integrated and promising technologies from interdisciplinary knowledge are therefore essential for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business, offering a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is the quantitative analysis of the characteristics of a body of literature. Traditional bibliometrics, focused on quantitative indices such as citation frequency, cannot capture the relationship between trends in technology management and the technologies themselves; network-focused bibliometrics, based mainly on co-citation and co-word analysis, has been used to overcome this limitation. In this study we perform a keyword network analysis, a form of social network analysis, to analyze recent research trends in MOT: we collected keywords from papers published in international MOT-related journals between 2002 and 2011, constructed a keyword network, and analyzed it. Over the past 40 years, social network studies have sought to understand social interactions through network structures represented by connection patterns.
Social network analysis has thus been used to explain the structures and behaviors of social formations such as teams, organizations, and industries, and it typically operates on matrix-form data. In our context the matrix relates rows (papers) to columns (keywords) with binary entries: each cell is 1 if the paper includes the keyword and 0 otherwise. Although published papers have no direct relations to one another, relations can be derived from this paper-keyword matrix, for example by connecting papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the ten years, and about 75% appeared only once. Second, the keyword network in MOT is both a small-world network and a scale-free network in which a small number of keywords tend toward a monopoly. Third, the gap between rich nodes (with more edges) and poor nodes (with fewer edges) widens over time. Fourth, most newly entering keywords become poor nodes within about two to three years. Finally, the keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME."
We hope these results will help MOT researchers identify major trends in technology research, serve as useful reference information when seeking consilience with other fields of study, and guide the selection of new research topics.
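The paper-keyword construction above can be sketched in a few lines: from papers' keyword sets, build the co-occurrence edges and rank keywords by degree. The paper IDs and keywords below are invented stand-ins for the study's 2002-2011 corpus.

```python
# Sketch: keyword co-occurrence network from papers' keyword sets,
# then degree centrality (illustrative data, not the study's corpus).
from itertools import combinations

papers = {
    "p1": {"innovation", "patent", "r&d"},
    "p2": {"innovation", "technology transfer"},
    "p3": {"patent", "innovation"},
}

# Two keywords are linked if at least one paper lists both of them.
edges = set()
for kws in papers.values():
    for a, b in combinations(sorted(kws), 2):
        edges.add((a, b))

# Degree: number of distinct keywords each keyword co-occurs with.
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

top = max(degree, key=degree.get)   # the most central keyword
```

Equivalently, this is the one-mode projection of the binary paper-keyword matrix onto its keyword columns; the study computes betweenness and closeness centrality on the same network.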

A Retrospective Study of 94 Hypercalcemic Dogs (2002-2004) (94 마리 고칼슘혈증 개들에 대한 회고연구(2002-2004))

  • Cho, Tae-Hyung;Kang, Byeong-Teck;Park, Chul;Jung, Dong-In;Yoo, Jong-Hyun;Kim, Ju-Won;Kim, Ha-Jung;Lim, Chae-Young;Lee, So-Young;Kim, Jung-Hyun;Woo, Eung-Je;Park, Hee-Myung
    • Journal of Veterinary Clinics / v.24 no.4 / pp.479-485 / 2007
  • A retrospective study of 94 hypercalcemic dogs was performed to identify the most common causes of hypercalcemia among dogs referred to the Veterinary Teaching Hospital of Konkuk University from 2002 to 2004. During the study period, hypercalcemia was found in 94 dogs of 19 breeds, which formed the case group; the control group comprised 94 dogs of 18 breeds without hypercalcemia admitted during the same period. Apart from age distribution, there were no significant differences in general signalment between the case and control groups. Shih Tzu (17.02%) and Yorkshire Terrier (26.60%) were the most common breeds in the case and control groups, respectively. The diseases most commonly associated with hypercalcemia were chronic renal failure (18.09%), acute renal failure (14.89%), and renal calculi (6.38%). Malignant neoplasia (lymphoma, hemangiosarcoma, chronic lymphocytic leukemia, mammary gland tumor, and multiple myeloma) and endocrinopathies (hyperadrenocorticism, hyperthyroidism, hypoadrenocorticism, and hypothyroidism) accounted for 8.5% and 6.4%, respectively. This is the first retrospective study of hypercalcemic dogs in South Korea.

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, extracts the subjective content embedded in text documents. Sentiment analysis methods are now widely used in many fields: data-driven surveys analyze the subjectivity of text posted by users, and market research quantifies a product's reputation by analyzing users' review posts. The basic method of sentiment analysis relies on a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative polarity. The meaning of many sentiment words differs across domains; the word 'sad', for example, is negative in most fields but not in movies. Accurate sentiment analysis therefore requires a sentiment dictionary built for the given domain, but building one is time-consuming, and without a general-purpose sentiment lexicon as a starting point many sentiment vocabularies are missed. Several studies have therefore constructed domain-specific sentiment lexicons from the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer serviced, and SentiWordNet works poorly because of the language mismatch introduced when Korean words are converted into English, so the use of these general-purpose lexicons as seed data for domain-specific lexicons is restricted. In this article we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary is a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', built so that a sentiment dictionary for a target domain can be constructed quickly. Its sentiment vocabularies are derived from the glosses in the Standard Korean Language Dictionary (SKLD) by the following procedure: first, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM); second, the model automatically classifies each gloss as positive or negative; third, positive words and phrases are extracted from glosses classified as positive, and negative words and phrases from glosses classified as negative. Our experiments show that the average accuracy of the proposed sentiment classification model reaches 89.45%. The dictionary is further extended with external sources including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and with sentiment information for frequently used coined words and emoticons from the Web. KNU-KSL contains a total of 14,843 sentiment vocabularies, each a 1-gram, 2-gram, phrase, or sentence pattern, and unlike existing sentiment dictionaries it is composed of words unaffected by particular domains. The recent trend in sentiment analysis is to use deep learning without sentiment dictionaries, and the perceived importance of developing them has gradually declined. However, a recent study shows that words from a sentiment dictionary can serve as features of deep learning models, yielding higher sentiment-analysis accuracy (Teng, Z., 2016). A sentiment dictionary is thus useful not only for lexicon-based sentiment analysis but also as a source of features that improve deep learning models.
The proposed dictionary can serve as seed data for constructing the sentiment lexicon of a particular domain and as a source of features for deep learning models; it also makes it possible to build large training sets for deep learning models automatically and quickly.
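Since KNU-KSL mixes 1-grams, 2-grams, and longer patterns, scoring a text with it requires matching longer entries before shorter ones. The sketch below illustrates that longest-match-first idea with an invented English mini-lexicon; the real lexicon's entry format and polarity scale may differ.

```python
# Hedged sketch of lexicon-based scoring with mixed n-gram entries,
# in the spirit of a dictionary like KNU-KSL (entries and polarity
# values here are invented; the actual lexicon format may differ).
lexicon = {
    ("not", "bad"): 1,      # 2-gram entries must win over 1-grams
    ("bad",): -2,
    ("great",): 2,
    ("thank", "you"): 2,
}

def score(tokens):
    """Sum polarities, preferring the longest lexicon match at each position."""
    total, i = 0, 0
    while i < len(tokens):
        for n in (2, 1):                       # longest match first
            entry = tuple(tokens[i:i + n])
            if entry in lexicon:
                total += lexicon[entry]
                i += n
                break
        else:
            i += 1                             # no match: skip token
    return total
```

For example, `score("not bad at all".split())` matches 'not bad' as one positive unit instead of hitting the negative 'bad' entry, which is exactly why multi-word entries matter in such a lexicon.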

The Effect of PL Leadership and Characteristics of Project on Project Participants' Satisfaction and Performance (PL 리더십 성향과 프로젝트 특성요인이 프로젝트 참여 만족 및 성과에 미치는 영향)

  • Yang, Hee-Dong;Kim, Myung-Jin;Kang, So-Ra
    • Asia Pacific Journal of Information Systems / v.20 no.4 / pp.53-79 / 2010
  • The study originated from the recognition that project participants' satisfaction must be improved in order to raise project performance and make progress toward a successful project, since dissatisfaction operates as a risk factor for the project. We selected one large-scale sample project and measured project characteristics, participants' satisfaction, and project performance across all project participants. We analyzed correlations between the individual level (team members) and the group level (development team), and examined the effect of the leadership style of the PL (project leader), a sub-project manager within the complicated hierarchical organization of a large-scale project, on each participant's satisfaction, as well as the effect of project uncertainty in the organizational and technological environment on participants' satisfaction and project performance. By showing significant between-group dispersion in participants' satisfaction, we verified that the development team (group) affects team members' (individual-level) satisfaction, implying that satisfaction should be improved through team-level approaches. The study also characterized the PL's ideal leadership under the strict methodology and hierarchical control of a large-scale project. The hypothesis tests produced the following results. First, the development team affects the satisfaction an individual derives from participating in a project, suggesting that satisfaction with project participation should be improved at the team level.
In addition, the project management style and leadership orientation of the sub-project manager, by whom the team is most affected, proved to have a direct influence on satisfaction with project participation and on project performance. Second, both the performance-oriented and the relationship-oriented leadership of the development team's PL had a significant effect on team members' satisfaction with project participation: when team members perceive that the PL shows both styles, their satisfaction increases accordingly. Third, when the PL exerts relationship-oriented and performance-oriented leadership, the uncertainty of the organizational environment significantly affects satisfaction: the higher the organizational uncertainty, the lower the satisfaction with project participation, and relationship-oriented leadership has a more positive effect on satisfaction than performance-oriented leadership. Fourth, under the same leadership conditions, the uncertainty of the technological environment also significantly affects satisfaction: the higher the technological uncertainty, the lower the satisfaction, and performance-oriented leadership has a more positive effect than relationship-oriented leadership.
Based on these results on project-environment uncertainty, the research offers the following implications for handling multiple concurrent projects. First, satisfaction with participation in multiple concurrent projects needs to be enhanced at the team (group) level. Second, project team managers, particularly middle managers, should maintain both a performance-oriented (task) and a relationship-oriented (human) attitude and exert a consolidated leadership in order to improve team members' satisfaction with project participation and their performance.
Third, since the uncertainty of the technological and organizational environment, among the project's characteristic factors, leaves room for methodological improvement through effort despite its complications, the risks arising from these uncertainties should be continuously prevented and controlled in order to enhance satisfaction with project participation and project performance. Fourth, performance (task)-oriented leadership is required when the technological environment is uncertain, while relationship (human)-oriented leadership is required when the organizational environment is uncertain. This research has the following limitations. First, it measured the project characteristics, all participants' satisfaction, and their performance within a single large-scale sample project, so the results cannot be generalized to other projects. Second, because this case study measured the project's characteristic factors and performance through a survey, the values reflect the perceptions of project team members and the data may lack objectivity. Third, although the research targeted all project participants, some development teams did not provide sufficient data; questionnaires were collected from only some of the 23 development teams, causing significant deviation in response rates among teams. Follow-up research should therefore compare multiple projects, centering on the project's characteristic factors and satisfaction levels.

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • A KTX rolling stock is a system consisting of many machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. When a rolling stock fails, the maintainer's knowledge and experience determine how quickly and how well the problem is solved, and hence how available the vehicle is. Although problem solving is generally based on fault manuals, experienced and skilled professionals diagnose and act quickly by applying personal know-how. Because this knowledge is tacit, it is difficult to pass on fully to a successor, and earlier studies developed case-based rolling stock expert systems to make it data-driven. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract the meaning of text and search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for newly occurring failures, using the know-how of rolling stock maintenance experts as worked examples of problem solving. We built a case base from the rolling stock failure data generated from 2015 to 2017, and separately constructed an integrated dictionary from the case base covering the essential terminology and failure codes specific to the railway rolling stock sector. Given the deployed case base, a new failure is matched against past cases and the three most similar failure cases are retrieved, with their actual corrective actions proposed as a diagnostic guide.
To overcome the limitations of the keyword-matching case retrieval used in previous case-based rolling stock expert system studies, we applied various dimensionality reduction techniques that account for the semantic relationships among failure descriptions, and verified their usefulness through experiments. Similar cases were retrieved with three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, each used to extract the characteristics of a failure before measuring the cosine distance between vectors. Precision, recall, and F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, analysis of variance confirmed that the performance differences among the five algorithms were statistically significant, including a baseline that randomly selects failure cases with identical failure codes and an algorithm that applies cosine similarity directly to word vectors. We also derived practically optimal settings by examining how performance varies with the number of reduced dimensions. The analysis showed that direct word-level cosine similarity outperformed NMF and LSA, and that the Doc2Vec-based algorithm performed best; moreover, within an appropriate range, performance improved as the number of retained dimensions grew.
This study confirms the usefulness of effective feature extraction and unstructured-data conversion when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using text data are still scarce in environments like this one, with many specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that complements keyword-based case search by applying text mining techniques to extract failure characteristics and retrieve similar cases. We expect it to provide implications as a basic study for developing diagnostic systems that can be used immediately in the field.
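The retrieval step common to all the compared algorithms, vectorize the failure descriptions, then rank cases by cosine similarity to the new failure, can be sketched with plain term-frequency vectors (the word-level baseline; NMF, LSA, or Doc2Vec would replace the vectorization step). The case texts below are invented English stand-ins for the study's Korean failure descriptions.

```python
# Hedged sketch of top-k case retrieval by cosine similarity over
# term-frequency vectors (the study's word-level baseline; the case
# texts are invented stand-ins for real failure descriptions).
import math
from collections import Counter

cases = {
    "c1": "door sensor fault door not closing",
    "c2": "brake pressure low brake warning",
    "c3": "door motor fault door stuck",
    "c4": "air conditioning compressor failure",
}

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(query, k=3):
    """Return the k case IDs most similar to the query text."""
    q = Counter(query.split())
    vecs = {cid: Counter(text.split()) for cid, text in cases.items()}
    return sorted(vecs, key=lambda cid: cosine(q, vecs[cid]), reverse=True)[:k]

ranked = top_k("door fault not closing")
```

In the proposed system the corrective actions attached to the top three retrieved cases would then be shown to the maintainer as the diagnostic guide.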

A Contemplation on Measures to Advance Logistics Centers (물류센터 선진화를 위한 발전 방안에 대한 소고)

  • Sun, Il-Suck;Lee, Won-Dong
    • Journal of Distribution Science
    • /
    • v.9 no.1
    • /
    • pp.17-27
    • /
    • 2011
  • As the world becomes more globalized, business competition becomes fiercer, while consumers' needs for less expensive, high-quality products increase. Businesses strive to secure a competitive edge in costs and services, and the logistics industry, which stores and transports goods and was once regarded as a pure expense, is beginning to be considered a third cash cow, a source of new income. Logistics centers are central to the storage, loading and unloading of deliveries, packaging operations, and the dispensing of information about goods. As hubs for various deliveries, they also serve as core infrastructure that smoothly coordinates manufacturing and selling through varied information and operation systems. Logistics centers are increasingly becoming centers of business supply activities, growing beyond their previous role of primarily storing goods. They are no longer just facilities; they have become logistics strongholds that encompass functions from demand forecasting to the regulation of supply, manufacturing, and sales by realizing SCM, taking into account marketability and the operation of services and products. However, despite these changes in logistics operations, some centers have been unable to shed their past roles as warehouses. The continuous development of logistics centers will require various measures, including a revision of current supporting policies, effective management plans, and systematic standards for founding, managing, and controlling logistics centers. To this end, this research explored previous studies on the use and effectiveness of logistics centers. From a theoretical perspective, an evaluation of the overall introduction, purposes, and transitions in the use of logistics centers identified issues to ponder and suggested measures to promote and further advance logistics centers. 
First, a fact-finding survey to support demand forecasting and standardization is needed. After logistics newspapers predicted that supply would exceed demand after 2012, causing rents to fall, the business environment for logistics centers faltered. However, because fact-finding surveys of actual demand for domestic logistics centers are scarce, it is hard to predict what the future holds for the industry. Accordingly, the first priority should be to grasp the current market situation through accurate domestic and international fact-finding surveys. Based on those, management and evaluation indicators should be developed to build a foundation for the consistent advancement of logistics centers. Second, many policies for logistics centers should be revised or developed. Above all, a guideline for fair trade between shippers and commercial logistics centers should be enacted. Since no such standards exist, unfair trading according to market practice has become rampant, disrupting market order, and the logistics industry now confronts difficulties of its own. Therefore, the industry should gather the unfair trade cases that currently plague logistics centers, and fair trade guidelines should be established and implemented. In addition, restrictive employment regulations for foreign workers should be eased, and logistics centers should be charged industrial rates for electricity. Third, various measures should be taken to improve the management environment. Above all, we need to find ways to activate value-added logistics. Because the traditional purpose of logistics centers was the storage and loading/unloading of goods, their profitability had a ceiling, and the need arose to find new angles for creating value-added services. Logistics centers have been perceived as support for a company's storage, manufacturing, and sales needs, not as creators of profit. 
The center's role in a company's economics has been to lower costs. However, as the logistics management environment has deteriorated, developing profit creation as a new function alongside storage has become a desirable goal, and to achieve it, value-added logistics should be promoted. Logistics centers can also be improved through cost estimation. They have made some strides in facility development but have fallen behind in other respects, particularly in management. Lax management has been rampant because the industry has not developed a concept of cost estimation. The centers have since worked toward unification, standardization, and informatization while pursuing cost reductions through systems for effective management, but it has been hard to produce profits. Thus, there is an urgent need to estimate costs by determining a basic cost range for each division of work at logistics centers. This undertaking can be the first step toward improving the ineffective aspects of their operation. Ongoing research and constant effort have raised effectiveness in the manufacturing industry, but studies on resource management in logistics centers are hardly sufficient. Thus, a plan to calculate the optimal level of resources necessary to operate a logistics center should be developed and implemented in management practice, for example by standardizing hours of operation. If logistics centers, shippers, related trade groups, academics, and other experts launched a committee to work with the government and maintained an ongoing relationship, the resulting checks and cooperation among members would help produce coherent development plans for logistics centers. If the government continues its efforts to provide financial support, nurture professional workers, and maintain safety management, we can anticipate the continuous advancement of logistics centers.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, advances in IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing. This means that big data analysis will become more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who commission it. However, rising interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and analysis skills are spreading; as a result, big data analysis is increasingly expected to be performed by those who need it themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for research, topic modeling is one of the most widely applied and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents. 
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents, and it creates a scalability problem: processing time grows sharply with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on large document sets with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear; local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such an approach must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from them needs to be measured. Because of these difficulties, this approach has been studied far less than other lines of topic modeling research. In this paper, we propose a topic modeling approach that solves both problems. 
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that it can produce results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
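The local-to-global topic mapping described above can be illustrated with a small sketch. Here each topic is represented as a term-probability vector, and each local topic is assigned to the most similar global (RGS) topic by cosine similarity; the matrices are illustrative stand-ins for real topic model output, and cosine similarity is one plausible matching criterion, not necessarily the exact measure used in the paper.

```python
# Minimal sketch of mapping local topics onto global (RGS) topics.
# Rows are topics, columns are terms; each row is a term-probability vector.
import numpy as np

# Hypothetical global topics derived from the reduced global set (RGS).
global_topics = np.array([
    [0.6, 0.3, 0.1, 0.0],   # e.g. a "politics"-like topic
    [0.0, 0.1, 0.4, 0.5],   # e.g. a "sports"-like topic
])

# Hypothetical local topics derived from one sub-cluster of documents.
local_topics = np.array([
    [0.1, 0.1, 0.3, 0.5],
    [0.5, 0.4, 0.1, 0.0],
])

def map_local_to_global(local, global_):
    """Assign each local topic to its most cosine-similar global topic."""
    ln = local / np.linalg.norm(local, axis=1, keepdims=True)
    gn = global_ / np.linalg.norm(global_, axis=1, keepdims=True)
    return (ln @ gn.T).argmax(axis=1)

mapping = map_local_to_global(local_topics, global_topics)
print(mapping)  # index i gives the global topic matched to local topic i
```

With this mapping in hand, documents clustered under a local topic inherit the matched global topic label, which is what allows the global and local assignments to be compared for accuracy.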

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.51-69
    • /
    • 2020
  • It is now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, and Facebook have become larger, dominant players in their industries. Coupled with the rise of these leading firms, we have also observed a large number of young firms becoming acquisition targets in their early post-IPO stages. This has resulted in a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms have been promulgated to foster competition by increasing new entries. Given this industry trend, a number of studies have reported increased concentration in most developed countries over recent decades. However, it is less well understood what caused this increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with firms' status changes in their early post-IPO stages. To this end, we focus on the case in which firms are acquired shortly after going public. With the transition to digital-based economies, it is imperative for incumbent firms to adapt and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent may respond better to changes in customer tastes and preferences by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping industry concentration, we identify each firm's status in its early post-IPO stages over the sample period spanning 1990 to 2016 as: i) delisted, ii) standalone, or iii) acquired. 
According to our analysis, firms that conducted IPOs after 2000 were acquired by incumbents more quickly than those that went public in earlier periods. We also show a greater acquisition rate for IPO firms in the ICT sector than for their counterparts in other sectors. Our results from multinomial logit models suggest that many IPO firms are acquired early in their post-IPO lives despite financial soundness. Specifically, IPO firms are more likely to be acquired than delisted due to financial distress in their early post-IPO stages when they are more profitable, more mature, or less leveraged. IPO firms with venture capital backing also become acquisition targets more frequently. As more firms are acquired shortly after their IPOs, our results show increased concentration. While providing only limited evidence of the impact of large incumbent firms on changes in industry concentration overall, our results show that the large-firm effect on concentration is pronounced in the ICT sector. This result plausibly captures the current trend of a few tech giants such as Alphabet, Apple, and Facebook continuing to increase their market share. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firm acquisitions is larger when ICT firms are the targets. Our study makes new contributions. To the best of our knowledge, it is one of only a few studies that link a firm's post-IPO status to associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was market power or proprietary software. In contrast to earlier studies, we uncover the mechanism by which industries have become concentrated by emphasizing M&As involving young IPO firms. 
Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is the acquirer. This lets us infer the underlying reasons why industries have become more concentrated, in favor of large firms, in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation for why industries have become concentrated.
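To make the notion of industry concentration concrete, the sketch below computes the Herfindahl-Hirschman Index (HHI), a standard concentration measure, before and after a hypothetical acquisition of a young IPO firm by a large incumbent. The market shares are invented for illustration, and the HHI is a common proxy for concentration, not necessarily the exact measure used in this study.

```python
# Illustrative HHI computation: an acquisition of a small IPO firm by the
# largest incumbent raises measured concentration. All shares are made up.
def hhi(shares):
    """HHI on market shares given as fractions (0-1), scaled to 0-10000."""
    return sum((100 * s) ** 2 for s in shares)

# Before: a large incumbent (40%), three rivals, and a young IPO firm (5%).
before = [0.40, 0.25, 0.20, 0.10, 0.05]
# After: the incumbent acquires the IPO firm (40% + 5% = 45%).
after = [0.45, 0.25, 0.20, 0.10]

print(hhi(before), hhi(after))  # concentration rises after the acquisition
```

Because the HHI squares each share, folding a small firm's share into the largest player's raises the index by more than the same share moving to a small rival, which is consistent with the paper's finding that large-acquirer deals magnify the concentration impact.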