• Title/Summary/Keyword: frequency constraints

Study on Disaster Response Strategies Using Multi-Sensors Satellite Imagery (다종 위성영상을 활용한 재난대응 방안 연구)

  • Jongsoo Park;Dalgeun Lee;Junwoo Lee;Eunji Cheon;Hagyu Jeong
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.755-770 / 2023
  • Due to recent severe climate change, abnormal weather phenomena, and other factors, the frequency and magnitude of natural disasters are increasing. The need for disaster management using artificial satellites is growing, especially during large-scale disasters, because of time and economic constraints. In this study, we summarize the current status of next-generation medium-sized satellites and microsatellites in operation and under development, as well as trends in satellite imagery analysis techniques driven by the advancement of the space industry and the large volume of available satellite imagery. Furthermore, focusing on recent major disasters such as floods, landslides, droughts, and wildfires, we confirm how satellite imagery can be employed for damage analysis, thereby establishing its potential for disaster management. Through this study, we present satellite development and operational statuses and recent trends in satellite imagery analysis technology, and we propose disaster response strategies that utilize various types of satellite imagery. During the stages of disaster progression, the utilization of satellite imagery is more prominent in the response and recovery stages than in the prevention and preparedness stages. In the future, as diverse imagery becomes available, we plan to research the fusion of cutting-edge technologies such as artificial intelligence and deep learning and their applicability to effective disaster management.

Reliability-Based Design Optimization for a Vertical-Type Breakwater with an Emphasis on Sliding, Overturn, and Collapse Failure (직립식 방파제 신뢰성 기반 최적 설계: 활동, 전도, 지반 훼손으로 인한 붕괴 파괴를 중심으로)

  • Yong Jun Cho
    • Journal of Korean Society of Coastal and Ocean Engineers / v.36 no.2 / pp.50-60 / 2024
  • To promote the application of reliability-based design within the Korean coastal engineering community, the author conducted reliability analyses and optimized the design of a vertical-type breakwater, considering multiple limit states in the seas off Pusan and Gunsan, two representative ports in Korea. In this process, rather than relying on design waves of a specific return period, the author intentionally avoided such constraints. Instead, the author characterized the uncertainties associated with wave force, lift force, and overturning moment, key factors significantly influencing the integrity of a vertical-type breakwater. This characterization was achieved by employing a probabilistic model derived from the frequency analysis of long-term in-situ wave data. The limit states of the vertical-type breakwater encompassed sliding, overturning, and collapse failure, with the close interrelation between wave force, lift force, and moment described using the Nataf joint probability distribution. Simulation results indicate, as expected, that considering only sliding failure underestimates the failure probability. Furthermore, it was shown that a consistent failure probability of vertical-type breakwaters cannot be secured using design waves of a specific return period. In contrast, breakwaters optimally designed to meet a reliability index requirement of β = 3.5 to 4 achieve a consistent failure probability across all sea areas.
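The reliability indices quoted in the abstract above map to failure probabilities through the standard relation P_f = Φ(-β), where Φ is the standard normal CDF; a minimal sketch of this textbook relation (not code from the paper):

```python
import math

def failure_probability(beta: float) -> float:
    """Failure probability P_f = Phi(-beta) for reliability index beta,
    with the standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# beta = 3.5 and 4 correspond to P_f of roughly 2.3e-4 and 3.2e-5,
# the order of magnitude targeted by the optimized designs.
for beta in (3.5, 4.0):
    print(f"beta = {beta}: P_f = {failure_probability(beta):.2e}")
```

This is why "β = 3.5 to 4" pins down a narrow band of failure probability regardless of sea area, whereas a fixed-return-period design wave does not.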

The Actual Conditions and Improvement of the Eco-Forest Master Plan, South Korea (우리나라 생태숲조성 기본계획 실태 및 개선방향)

  • Heo, Jae-Yong;Kim, Do-Gyun;Jeong, Jeong-Chae;Lee, Jeong
    • Korean Journal of Environment and Ecology / v.24 no.3 / pp.235-248 / 2010
  • This study investigated the actual conditions of eco-forest master plans in South Korea and suggested their problems and directions for improvement. The survey and analysis of limiting factors or constraints in the construction plans of eco-forests in Korea revealed highly frequent problems involving site feasibility, topographic aspect, and existing vegetation. The survey on the status of land use indicated that the average ratio of private estate was 29.7%, so it was estimated that a great amount of investment in the purchase of eco-forest sites would be required. The survey on major introduced facilities showed a high frequency of introduction of infrastructure, building facilities, recreational facilities, convenience facilities, and information facilities, and a low frequency of introduction of plant culture systems, ecological facilities, structural symbols and sculptures, and the like. There was just one eco-forest park where more than 500 species of plants grew, and the investigation indicated that the diversity of plant species in 11 eco-forest parks was lower than the standards for construction of an eco-forest. Analysis of the project costs revealed that investment in facilities was higher than planting costs and that a large amount of investment was made in the initial stage of the project. There was no planned budget for cultivating and maintaining the plants and vegetation after construction of the eco-forests. The basic concepts for the construction of eco-forests were established according to the guidelines presented by the Korea Forest Service; however, the detailed work of the project was planned with a user-oriented approach, and construction was being planned along directions that would lead to the development of a plant garden similar to an arboretum or botanical garden.
Therefore, it is required that the architects who design eco-forests, as well as the public officers concerned, firmly establish the concept of the eco-forest; that, through close analysis of development conditions, candidate sites fitting the purpose of constructing an eco-forest be selected; and that a substantive management plan be established upon completion of construction.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies utilizing item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights by utilizing the weight of each item in large databases, and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and transaction IDs.
In particular, traditional algorithms conduct a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 on the basis of two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination process using the information of the transactions that contain both itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the operations for calculating the frequency of a new itemset, while WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) in terms of runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset, WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because it requires on average many more computations than the others.
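As a rough illustration of the transactional-weight idea described above, the sketch below takes a transaction's weight to be the average of its item weights (one common definition; the exact formulas in WIS and the WIT-FWIs family may differ) and computes weighted support over a made-up database:

```python
# Toy item weights and transaction database (all values are illustrative).
item_weights = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [
    {"a", "b", "c"},
    {"a", "d"},
    {"b", "c", "d"},
]

def transaction_weight(t):
    """A transaction's weight: here, the average of its item weights, so a
    transaction containing many high-weight items gets a high weight."""
    return sum(item_weights[i] for i in t) / len(t)

tw = [transaction_weight(t) for t in transactions]
total = sum(tw)

def weighted_support(itemset):
    """Weighted support of an itemset: total weight of the transactions
    containing it, normalized by the weight of the whole database."""
    itemset = set(itemset)
    return sum(w for t, w in zip(transactions, tw) if itemset <= t) / total

print(weighted_support({"a"}))       # 'a' occurs in two high-weight transactions
print(weighted_support({"b", "c"}))
```

An itemset is then reported as weighted-frequent when this support exceeds a user threshold; the WIT-tree's contribution is computing these values in a single database scan.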

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from among the overflowing contents is becoming more important as information continues to proliferate. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. These processes give this study three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, accounting for about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports of the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks is trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of entities, necessary to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; representatively, the especially poor performance of the model for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with related stocks.
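The per-stock score functions described above follow the neural tensor network form of Socher et al., score = u^T tanh(e1^T W e2 + V[e1;e2] + b). A minimal, untrained sketch of one such function; the dimensions, random parameters, and one-hot entity indices are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 2           # one-hot entity dimension, number of tensor slices

# One illustrative NTN-style score function (random parameters here;
# in the study one such function is trained per stock).
W = rng.normal(size=(k, d, d))   # bilinear tensor slices
V = rng.normal(size=(k, 2 * d))  # standard feed-forward layer
b = rng.normal(size=k)
u = rng.normal(size=k)

def ntn_score(e1, e2):
    """NTN score u^T tanh(e1^T W e2 + V [e1; e2] + b)."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# A new entity (one-hot) is scored against a stock-side entity; the stock
# whose function yields the highest score is predicted as related.
e_new, e_stock = np.zeros(d), np.zeros(d)
e_new[7], e_stock[4] = 1.0, 1.0
print(ntn_score(e_new, e_stock))
```

In the study's setup, one scores a new entity with every stock's trained function and takes the argmax; the hit ratio then measures how often the argmax matches the report's actual stock.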

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.213-226 / 2011
  • In a global supply chain, the scheduling problems of large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce this inherent computational complexity: a problem is decomposed into small sub-problems that are handled by small, independent scheduling systems and then integrated into the initial problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted a two-layered hierarchical architecture, in which individual scheduling systems composed of a high-level dock scheduler, DAS-ERECT, and low-level assembly plant schedulers, DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7, search for the best schedules under their own constraints. Moreover, the steep growth of communication technology and logistics has enabled the introduction of distributed multi-nation production plants, in which different parts are produced by designated plants. Vertical and lateral coordination among the decomposed scheduling systems is therefore necessary. No standard coordination mechanism for multiple scheduling systems exists, even though various scheduling systems have been developed in scheduling research. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model. Prior research in the agent field has paid ample attention to agent-based coordination but has not developed the scheduling domain, while previous research on agent-based scheduling has focused on internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain. The purpose of this study was to design a standard coordination mechanism.
To do so, we first define an individual scheduling agent responsible for its own plant and a meta-level coordination agent involved with each individual scheduling agent. We then suggest variables and values describing the individual scheduling agents and the meta-level coordination agent, represented in Backus-Naur Form. Second, we suggest scheduling agent communication protocols for each scheduling agent topology, classified by system architecture, the existence or nonexistence of a coordinator, and the direction of coordination. If there is a coordinating agent, an individual scheduling agent can communicate with another individual agent indirectly through the coordinator; otherwise, it must communicate with the other agent directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination. A scheduling agent communication language is devised for communication among agents independent of domain. We adopt three message layers: the ACL layer, the scheduling coordination layer, and the industry-specific layer. The ACL layer is a domain-independent outer language layer; the scheduling coordination layer contains the terms necessary for scheduling coordination; and the industry-specific layer expresses the industry specification. Third, to improve the efficiency of communication among scheduling agents and avoid possible infinite loops, we suggest a look-ahead load balancing model that monitors participating agents and analyzes their status. To build the look-ahead load balancing model, the status of participating agents must be monitored, and above all, the amount of shared information must be considered.
If complete information is collected, the cost of updating and maintaining shared information increases even though the frequency of communication decreases. Therefore, the level of detail and updating period of shared information should be decided contingently. By means of this standard coordination mechanism, coordination processes of multiple scheduling systems can easily be modeled into the supply chain. Finally, we apply this mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data is used to empirically examine this mechanism. The results of this study show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
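The three-layer message structure described above (ACL layer, scheduling coordination layer, industry-specific layer) can be sketched as a simple data type; the field names and example values are illustrative assumptions, not the paper's actual BNF:

```python
from dataclasses import dataclass, field

@dataclass
class CoordinationMessage:
    # ACL layer: domain-independent speech-act envelope
    performative: str          # e.g. "request", "inform"
    sender: str
    receiver: str
    # Scheduling-coordination layer: terms shared by all scheduling agents
    action: str                # e.g. "reschedule", "report-load"
    job_id: str
    due_date: str
    # Industry-specific layer: shipbuilding-specific content
    content: dict = field(default_factory=dict)

# A coordinator-mediated request, as in the topology with a coordinating
# agent: the meta-level agent relays between individual scheduling agents.
msg = CoordinationMessage(
    performative="request",
    sender="meta-coordinator",
    receiver="dock-scheduler",
    action="reschedule",
    job_id="HULL-0042",
    due_date="2011-09-30",
    content={"block": "B-17", "dock": "Dock-1"},
)
print(msg.performative, msg.receiver, msg.action)
```

Separating the envelope from the scheduling terms and the industry payload mirrors the paper's design goal: the outer layers stay reusable across domains while only the innermost layer is shipbuilding-specific.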

Prosodic Phrasing and Focus in Korean

  • Baek, Judy Yoo-Kyung
    • Proceedings of the KSPS conference / 1996.10a / pp.246-246 / 1996
  • Purpose: Some properties of prosodic phrasing, and some acoustic and phonological effects of contrastive focus on the tonal pattern of Seoul Korean, are explored based on a brief experiment analyzing the fundamental frequency (F0) contour of the author's speech. Data Base and Analysis Procedures: The examples were chosen to contain mostly nasal and liquid consonants, since it is difficult to track the formants in stops and fricatives during their consonantal intervals, and stops may yield an unwanted increase in the F0 value due to their burst into the following vowel. All examples were recorded three times, and the spectrum of the most stable repetition was generated, from which the F0 contour of each sentence was obtained; peaks with a value higher than 250 Hz were interpreted as a high tone (H). The result is then discussed within the prosodic hierarchy framework of Selkirk (1986) and compared with the tonal pattern of the Northern Kyungsang dialect of Korean reported in Kenstowicz & Sohn (1996). Prosodic Phrasing: In N.K. Korean, H never appears on both the object and the verb in a neutral sentence, which indicates that the object and the verb form a single Phonological Phrase (φ), given that there is only one pitch peak per φ. However, Seoul Korean shows that the object and the verb each have an H of their own, indicating that they are not contained in one φ. This violates the Optimality-Theoretic constraint Wrap-XP (enclose a lexical head and its arguments in one φ), while N.K. Korean obeys the constraint by grouping a VP in a single φ. This asymmetry can be resolved through a constraint that favors the separate phrasing of each lexical category and is ranked higher than Wrap-XP in Seoul Korean but vice versa in N.K. Korean: Align-x^lex (align the left edge of a lexical category with that of a φ).
(1) nuna-ka manll-ll mEk-nIn-ta ('sister-NOM garlic-ACC eat-PRES-DECL') a. (LLH) (LLH) (HLL) ----Seoul Korean b. (LLH) (LLL LHL) ----N.K. Korean Focus and Phrasing: Two major effects of contrastive focus on phonological phrasing are found in Seoul Korean: (a) the peak of an Intonational Phrase (IP) falls on the focused element; and (b) focus has the effect of deleting all following prosodic structure. A focused element always attracts the peak of the IP, showing an increase of approximately 30 Hz compared with the peak of a non-focused IP. When a subject is focused, no H appears on either the object or the verb, and a focused object is never followed by a verb with H. The post-focus deletion of prosodic boundaries is forced through the interaction of Stress-Focus (if F is a focus and DF is its semantic domain, the highest prominence in DF will be within F) and Rightmost-IP (the peak of an IP projects from the rightmost φ). First, Stress-Focus requires the peak of the IP to fall on the focused element. Then, to avoid violating Rightmost-IP, all boundaries after the focused element must delete, minimizing the number of φ's intervening from the right edge of the IP. (2) (omitted) Conclusion: In general, there seems to be no direct alignment constraint between the syntactically focused element and the edge of the φ determined in phonology; all the alignment effects come from the single requirement that the peak of the IP projects from the rightmost φ, as proposed in Truckenbrodt (1995).
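The ranking argument above (Align-x^lex versus Wrap-XP) can be sketched as a toy Optimality-Theoretic tableau evaluator; the candidate names and violation counts are illustrative, not drawn from the paper:

```python
# Two candidate phrasings of [Object Verb], with toy violation counts:
# phrasing object and verb separately violates Wrap-XP once; wrapping the
# whole VP in one phrase violates Align-x_lex once (verb's left edge).
candidates = {
    "(Obj)(Verb)": {"Wrap-XP": 1, "Align-x_lex": 0},
    "(Obj Verb)":  {"Wrap-XP": 0, "Align-x_lex": 1},
}

def winner(ranking):
    """Optimal candidate under a constraint ranking: compare violation
    vectors lexicographically, highest-ranked constraint first."""
    return min(candidates, key=lambda c: [candidates[c][con] for con in ranking])

# Seoul Korean: Align-x_lex >> Wrap-XP -> object and verb phrase separately
print(winner(["Align-x_lex", "Wrap-XP"]))   # (Obj)(Verb)
# N.K. Korean: Wrap-XP >> Align-x_lex -> VP grouped in a single phrase
print(winner(["Wrap-XP", "Align-x_lex"]))   # (Obj Verb)
```

Reversing the ranking of the same two constraints flips the optimal phrasing, which is exactly how the abstract derives the Seoul vs. North Kyungsang asymmetry.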

A Study on the Planning of Nationwide Indexing Services for Korea (전국색인지간행협동체제 편성방안에 관한 연구)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science / v.12 / pp.39-86 / 1985
  • The main purpose of the present study is to survey the major indexing bulletins of national nature in Korea, to define such problem areas as lacunae, duplicates, and limitations in coverage in the indexing services currently available in Korea, and to make some suggestions for action to improve the existing indexing services in the light of general principles and the tradition and constraints unique to Korea. The major findings and conclusions reached in this study are summarised as follows: (A) A new indexing bulletin of general nature covering the entire field needs to be created in each of the following fields, which lack an established indexing service for the outcome of research and development activities in Korea: (1) Philosophy (2) Religion (3) Pure sciences (4) Art (5) Language (6) Literature (7) History. (B) A new specialised indexing bulletin needs to be created in each of the following fields, where indexing services are heavily utilised but no, or only partial, indexing service is available: (1) Social sciences (a) Statistics (b) Sociology (c) Folklore (d) Military science (2) Pure sciences (a) Mathematics (b) Physics (c) Chemistry (d) Astronomy (e) Geology (f) Mineralogy (g) Life sciences (h) Botany (i) Zoology (3) Applied sciences (a) Medicine (b) Agriculture (c) Civil engineering (d) Architectural engineering (e) Mechanical engineering (f) Electrical engineering (g) Chemical engineering (h) Domestic science. (C) Publication of the indexing bulletins suggested in A and B above may ideally be carried out by a qualified and dependable learned society established in the respective field and designated by the Minister of Education, and should be financially supported from the public fund under the provisions of Art. 27 of the Scientific Research Promotion Act of 1979.
(D) The coverage and contents of the four indexing bulletins in the field of banking and finance published by the Library of the Bank of Korea are similar and considerably duplicated. It is therefore suggested that the four indexing bulletins be combined into one to form a more comprehensive and efficient bibliographical tool in the field, and that it be further developed into a general guide to the literature produced in the entire field of economics in Korea by gradually expanding its subject coverage. (E) For reasons similar to those stated in D, the Index to the Articles on North Korea and the Catalogue of Theses on North Korea, both published by the Ministry of Unification Library, are suggested to be made into one. The Index to the Articles of the Selected North Korean Journals and the Index to the Articles of the North Korean Journals in Microfilm Housed in the Ministry of Unification Library, both published by the same Library, are also suggested to be combined into one. (F) The contents of the Catalogue of the Reports Submitted by Government Officials Who Have Travelled Abroad, published by the National Archives, are included in the Index to the Information Materials Related to Government Administration, also published by the National Archives. The publication of the former is hardly justified. (G) The contents of the Index to Legal Literature published by the Seoul National University Libraries and those of the Law Section of the Index to Scholastic Works published by the National Central Library are nearly identical. One of the two indexes should cease to be published. (H) Though five indexes are published in the field of political science and four in the field of public administration, their subject coverage is limited, so these indexes are of little use to many other researchers in the two fields. A comprehensive index covering all the specialised areas in each field needs to be developed from one or all of the existing indexes.
(I) It is suggested that the Catalogue of the Scholastic Works on Curricula published by the National Central Library expand its subject coverage to become a more usable and effective index for all researchers in the field of education. (J) The bimonthly Index to Periodical Articles and the specialised index-by-subject series published by the National Assembly Library, and the Index to Scholastic Works published by the National Central Library, are expected to increase their coverage and frequency of publication so as to be used more effectively and efficiently by all users in all fields until the indexing bulletins suggested in this study are fully available in Korea.

Personalized Exhibition Booth Recommendation Methodology Using Sequential Association Rule (순차 연관 규칙을 이용한 개인화된 전시 부스 추천 방법)

  • Moon, Hyun-Sil;Jung, Min-Kyu;Kim, Jae-Kyeong;Kim, Hyea-Kyeong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.195-211 / 2010
  • An exhibition is defined as market events for specific duration to present exhibitors' main product range to either business or private visitors, and it also plays a key role as effective marketing channels. Especially, as the effect of the opinions of the visitors after the exhibition impacts directly on sales or the image of companies, exhibition organizers must consider various needs of visitors. To meet needs of visitors, ubiquitous technologies have been applied in some exhibitions. However, despite of the development of the ubiquitous technologies, their services cannot always reflect visitors' preferences as they only generate information when visitors request. As a result, they have reached their limit to meet needs of visitors, which consequently might lead them to loss of marketing opportunity. Recommendation systems can be the right type to overcome these limitations. They can recommend the booths to coincide with visitors' preferences, so that they help visitors who are in difficulty for choices in exhibition environment. One of the most successful and widely used technologies for building recommender systems is called Collaborative Filtering. Traditional recommender systems, however, only use neighbors' evaluations or behaviors for a personalized prediction. Therefore, they can not reflect visitors' dynamic preference, and also lack of accuracy in exhibition environment. Although there is much useful information to infer visitors' preference in ubiquitous environment (e.g., visitors' current location, booth visit path, and so on), they use only limited information for recommendation. In this study, we propose a booth recommendation methodology using Sequential Association Rule which considers the sequence of visiting. Recent studies of Sequential Association Rule use the constraints to improve the performance. 
However, since traditional Sequential Association Rule considers the whole rules to recommendation, they have a scalability problem when they are adapted to a large exhibition scale. To solve this problem, our methodology composes the confidence database before recommendation process. To compose the confidence database, we first search preceding rules which have the frequency above threshold. Next, we compute the confidences of each preceding rules to each booth which is not contained in preceding rules. Therefore, the confidence database has two kinds of information which are preceding rules and their confidence to each booth. In recommendation process, we just generate preceding rules of the target visitors based on the records of the visits, and recommend booths according to the confidence database. Throughout these steps, we expect reduction of time spent on recommendation process. To evaluate proposed methodology, we use real booth visit records which are collected by RFID technology in IT exhibition. Booth visit records also contain the visit sequence of each visitor. We compare the performance of proposed methodology with traditional Collaborative Filtering system. As a result, our proposed methodology generally shows higher performance than traditional Collaborative Filtering. We can also see some features of it in experimental results. First, it shows the highest performance at one booth recommendation. It detects preceding rules with some portions of visitors. Therefore, if there is a visitor who moved with very a different pattern compared to the whole visitors, it cannot give a correct recommendation for him/her even though we increase the number of recommendation. Trained by the whole visitors, it cannot correctly give recommendation to visitors who have a unique path. Second, the performance of general recommendation systems increase as time expands. 
Our methodology, however, achieves high performance with limited information, such as one or two time periods. It can therefore make recommendations even when little is known of the target visitor's booth visit record, and it uses only a small amount of information in the recommendation process; we expect it can deliver real-time recommendations in an exhibition environment. Overall, the proposed methodology outperforms traditional Collaborative Filtering systems, and we expect it can be applied in booth recommendation systems to satisfy visitors at exhibitions.
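The confidence-database construction and recommendation steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the maximum rule length, the support threshold, and scoring candidate booths by summed confidence over the visitor's own preceding rules are all assumptions.

```python
from collections import Counter, defaultdict

def build_confidence_db(visit_sequences, min_support=2, max_rule_len=2):
    """Offline step: count preceding rules (contiguous visit subsequences),
    keep those at or above the support threshold, and store each kept rule's
    confidence toward every booth that follows it and is not in the rule."""
    rule_counts = Counter()
    rule_next = defaultdict(Counter)
    for seq in visit_sequences:
        for i in range(len(seq)):
            for l in range(1, max_rule_len + 1):
                if i + l > len(seq):
                    break
                rule = tuple(seq[i:i + l])
                rule_counts[rule] += 1
                for nxt in seq[i + l:]:       # booths visited after the rule
                    if nxt not in rule:
                        rule_next[rule][nxt] += 1
    return {
        rule: {booth: c / cnt for booth, c in rule_next[rule].items()}
        for rule, cnt in rule_counts.items() if cnt >= min_support
    }

def recommend(db, visited, top_n=1, max_rule_len=2):
    """Online step: generate the target visitor's preceding rules from the
    visit record and rank unvisited booths by accumulated confidence."""
    scores = Counter()
    for i in range(len(visited)):
        for l in range(1, max_rule_len + 1):
            if i + l > len(visited):
                break
            rule = tuple(visited[i:i + l])
            for booth, conf in db.get(rule, {}).items():
                if booth not in visited:
                    scores[booth] += conf
    return [booth for booth, _ in scores.most_common(top_n)]
```

Because the expensive counting happens once, offline, the online step only looks up the target visitor's own rules, which is the source of the expected reduction in recommendation time.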

The Effect of Patent Citation Relationship on Business Performance : A Social Network Analysis Perspective (특허 인용 관계가 기업 성과에 미치는 영향 : 소셜네트워크분석 관점)

  • Park, Jun Hyung;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.127-139
    • /
    • 2013
  • With the recent advent of the knowledge-based society, interest in intellectual property has increased. Firms have tried to produce productive outcomes through continuous innovative activity; ICT firms leading the high-tech industry, in particular, have tried to manage intellectual property more systematically. Firms' interest in patents has grown as a means of managing innovative activity and knowledge assets. A patent carries not only simple information but also important value as information about technology, management, and rights. Moreover, because a patent details technology development activity, it is regarded as valuable data. Patents, which reflect technology diffusion and research outcomes, are closely interrelated with business performance, since they are considered a significant indicator of a firm's level of innovation. As patent information representing firms' intellectual capital accumulates, quantitative analysis has become possible: patent data for a given industry are standardized and easily obtained, the flow of knowledge can be traced through citations, and analysis is possible at levels ranging from the individual patent to the nation. Patent information is used to analyze technical status and its effect on performance; a patent with a high citation frequency is taken to have high technological value. Analyzing patent information includes both citation index analysis, which uses citation counts, and network analysis, which uses citation relationships. Network analysis can reveal flows of knowledge and technological change and can indicate future research directions. Studies using patent citation analysis vary both academically and practically.
In citation index research, studies have analyzed influential, highly cited patents; in network analysis research, studies have traced the flow of technology within particular industries. Social network analysis is applied not only in sociology but also in management consulting and corporate knowledge management, and research on how a company's network position affects business performance has been conducted from various angles. Social network analysis can be presented visually, and network indicators are available through quantitative analysis; it is used to analyze outcomes in terms of network position, focusing largely on centrality and structural holes. Centrality indicates that actors in central positions among other actors can exert stronger influence in exchange relationships; degree centrality, betweenness centrality, and closeness centrality are used for centrality analysis. Structural holes refer to empty places in the social structure and are measured by efficiency and constraint. This study analyzes firms' networks in terms of patents and how network characteristics influence business performance. For this purpose, seventy-four ICT companies listed in the S&P 500 were chosen as the sample. UCINET6 was used to compute network structural characteristics such as out-degree centrality, betweenness centrality, and efficiency, and regression analysis was then conducted to determine how these network characteristics relate to business performance. Each network index was found to have a significant impact on net income, i.e., business performance; however, efficiency was negatively associated with it: as efficiency increases, net income decreases.
Furthermore, in the multiple regression with all three network indexes, only betweenness centrality was statistically significant. Patent citation network analysis shows the flows of knowledge between firms and can be expected to contribute to corporate management strategy by revealing firms' structural positions in the network.
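The study computes its indices with UCINET6; a rough pure-Python sketch of two of them, out-degree centrality and betweenness centrality (via Brandes' algorithm), together with a closed-form one-variable OLS regression against a performance measure, might look like the following. The graph and any income figures fed to `ols` are hypothetical, and this is an illustration of the indices, not the study's actual pipeline.

```python
from collections import deque

def outdegree_centrality(adj):
    """Out-degree centrality: outgoing citation ties divided by (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def betweenness_centrality(adj):
    """Betweenness centrality of a directed, unweighted citation graph,
    computed with Brandes' shortest-path-counting algorithm."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, pred = [], {w: [] for w in adj}
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        q = deque([s])
        while q:                      # BFS from s, counting shortest paths
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                  # accumulate dependencies in reverse order
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def ols(x, y):
    """Simple linear regression in closed form; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx
```

For a citation chain A→B→C, firm B lies on the only shortest path from A to C, so its betweenness is 1 while A's and C's are 0; regressing net income on such an index is the shape of the study's single-variable test.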