• Title/Summary/Keyword: Systems engineering


An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.113-129
    • /
    • 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value grounded in an understanding of the place. Many fields, such as marketing, architecture, and urban construction, contribute to creating an impressive brand image. A place brand that is well recognized by both Koreans and foreigners generates significant economic effects. Research on building strategic and detailed place brand images has been conducted, most notably by Anholt, who surveyed two million people in 50 countries. Such survey-based investigation, however, demands considerable workforce effort and expense, so more affordable, objective, and efficient methods are needed. The purpose of this paper is to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constituting the place brand image from related web documents and uses them to measure the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image index ranking covering 50 cities around the world. Four methods were applied in the test. First, the RANDOM method ranks the cities in the experiment artificially. The HUMAN method constructs a questionnaire and selects nine volunteers who are well acquainted with both brand management and the cities to be evaluated; they rank the cities, and their rankings are compared with Anholt's results. The TM method applies the proposed approach to evaluate the cities on all evaluation criteria. TM-LEARN, an extension of TM, first selects significant evaluation items within every criterion and then evaluates the cities on the selected items only. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, the proposed method was more accurate than the evaluation that targeted ordinary people. Second, because the process is automated, it requires far less time and cost than traditional surveys. Third, the methodology is timely because the evaluation can be repeated at any time. Fourth, unlike Anholt's method, which evaluates only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, it has relatively high objectivity because it is based on open data. As a result, the text-mining approach to city image evaluation shows validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers in the public and private sectors with clear guidelines for brand management. Public-sector actors such as local officials could use it to formulate strategies and enhance the image of their places efficiently.
Rather than conducting heavy questionnaires, local officials could quickly monitor the current place image first and then decide to run a formal place image survey only when the results are out of the ordinary, whether they indicate an opportunity or a threat to the place. Moreover, by combining the proposed method with morphological analysis, the extraction of meaningful facets of the place brand from text, sentiment analysis, and other techniques, marketing strategists or civil engineering professionals may obtain deeper and richer insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
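As an illustration of the RMSE comparison described in this abstract, the following minimal Python sketch computes the RMSE between a ranking produced by a text-mining method and a reference survey ranking. The city ranks shown are hypothetical examples, not the paper's data.

```python
import math

def rmse(predicted_ranks, reference_ranks):
    """Root-mean-square error between two rank vectors of equal length."""
    assert len(predicted_ranks) == len(reference_ranks)
    squared_errors = [(p - r) ** 2 for p, r in zip(predicted_ranks, reference_ranks)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical example: ranks produced by a text-mining method (TM) versus
# a reference survey ranking (an Anholt-style index) for five cities.
reference = [1, 2, 3, 4, 5]          # survey-based ranks
tm_method = [2, 1, 3, 5, 4]          # ranks estimated from web documents
print(rmse(tm_method, reference))    # lower RMSE = closer to the reference ranking
```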

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.227-240
    • /
    • 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on the concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze technological information in detail, while the latter is unable to identify the relationships between technologies. To overcome these limitations, this study blends the two approaches and suggests a keyword-network-based analysis methodology. We collected the significant technology information in each patent related to Light Emitting Diodes (LED) through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties; the value ranges between 0 and 1, with higher values indicating denser networks and lower values indicating sparser ones. In real-world networks, density varies with network size; increasing the size of a network generally decreases its density. The clustering coefficient is a network-level measure of the tendency of nodes to cluster in densely interconnected modules; it reflects the small-world property, in which a network can be highly clustered and yet have a small average distance between nodes despite its large number of nodes. The low density of the patent keyword network therefore means that its nodes are only sparsely connected overall, while the high clustering coefficient shows that nodes are closely connected to one another within local clusters. Second, the cumulative degree distribution of the patent keyword network, like that of other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is more likely to attract new links as the network evolves. Unlike a normal distribution, a power-law distribution has no representative scale: one cannot pick a representative or average value because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of the emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution thus implies that preferential attachment explains the origin of the heavy-tailed distributions observed across the growing patent keyword network.
Third, we found that, among the keywords flowing into a particular field, the vast majority of keywords with new links join existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the methodology suggested in this study enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
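As a rough illustration of the network measures discussed above (density, clustering coefficient, and degree distribution), the following Python sketch computes them for a tiny, invented keyword co-occurrence graph using the networkx library. The keyword edges are hypothetical and are not drawn from the paper's LED patent data.

```python
import networkx as nx
from collections import Counter

# Hypothetical keyword co-occurrence edges; real data would come from
# text-mining LED-related patents as described in the abstract.
edges = [("LED", "phosphor"), ("LED", "substrate"), ("phosphor", "wavelength"),
         ("substrate", "sapphire"), ("LED", "wavelength"), ("sapphire", "epitaxy")]

G = nx.Graph(edges)

density = nx.density(G)                # ties present / all possible ties
clustering = nx.average_clustering(G)  # tendency of nodes to form tight triangles

# Degree histogram; plotting its cumulative form on log-log axes would show
# whether the distribution is consistent with the power-law pattern above.
degree_counts = Counter(dict(G.degree()).values())
print(f"density={density:.3f}, clustering={clustering:.3f}")
print("degree histogram:", dict(degree_counts))
```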

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the worldwide proliferation of web users have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of documents needed to perform their businesses, and they also selectively gather, store, and manage web documents published on the web. Along with this growth in the documents that companies must manage, the need for solutions that locate documents accurately among a huge number of information sources has increased, and the market for search engine solutions has expanded accordingly. The most important function a search engine provides is locating the right documents within a huge information source. The major metric for evaluating search accuracy is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the results returned as answers are actually correct, whereas recall is a measure of completeness, that is, what percentage of the true answers are retrieved. The two measures are weighted differently depending on the domain: for exhaustive searches, such as patent documents and research papers, it is better to increase recall, whereas when the amount of information is small it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method can locate all matching web documents quickly even when many search words are entered, but it has a fundamental limitation: it does not consider the user's search intention, so it retrieves irrelevant results along with relevant ones, and additional time and effort are needed to sort the relevant results out of everything returned. In other words, keyword search can increase recall, but it is difficult to locate the web documents a user actually wants because the method provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functionality of existing search engine solutions. The method enables a search engine to provide better results by inferring the user's search intention. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment with retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets users interact with the system effectively.
In future research, we will validate the performance of our prototype system by comparing it with other search engine solutions and will extend the applied domain to other information search settings such as portals.
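To make the precision/recall discussion concrete, the Python sketch below computes both measures for a retrieved result set and illustrates, with an invented mini-ontology, how synonym expansion of a query keyword could be applied before retrieval. The document IDs, keywords, and synonym table are hypothetical and not taken from the paper's system.

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved documents that are relevant;
    recall = fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = retrieved & relevant
    precision = len(true_positives) / len(retrieved) if retrieved else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical ontology-driven query expansion: synonyms of the user's
# keywords are added before retrieval, which typically raises recall.
ontology_synonyms = {"patent": ["intellectual property", "IP filing"]}
query = ["patent"]
expanded_query = query + [s for kw in query for s in ontology_synonyms.get(kw, [])]
print("expanded query:", expanded_query)

# Toy evaluation with invented document IDs.
print(precision_recall(retrieved=["d1", "d2", "d3"], relevant=["d2", "d3", "d4"]))
```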

Extracting Beginning Boundaries for Efficient Management of Movie Storytelling Contents (스토리텔링 콘텐츠의 효과적인 관리를 위한 영화 스토리 발단부의 자동 경계 추출)

  • Park, Seung-Bo;You, Eun-Soon;Jung, Jason J.
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.279-292
    • /
    • 2011
  • Movie is a representative medium for transmitting stories to audiences. A story is essentially described by the characters in the movie, and unlike simpler videos, movies deploy narrative structures to explain the various conflicts and collaborations between characters. These narrative structures consist of three main acts: beginning, middle, and ending. The beginning act includes 1) the introduction of the main characters and backgrounds and 2) the implication of conflicts and clues for later incidents. In the middle act, events are developed by both internal and external factors and the story's dramatic tension heightens. Finally, in the ending act, the events that have developed are resolved, and the topic of the story and the writer's message are conveyed. When story information is extracted from a movie, it must be weighted differently according to the narrative structure: the same piece of information influences story deployment differently depending on whether it appears in the beginning, middle, or ending act. The beginning act exposes audiences to the information that sets up the story, such as the establishment of characters and the depiction of backgrounds, so many kinds of information need to be extracted from it in order to summarize a movie or retrieve character information. This paper therefore proposes a novel method for extracting the beginning boundary: it detects the boundary scene between the beginning and middle acts using an accumulation graph of characters. The beginning act consists of the scenes that introduce the important characters, imply the conflict relationships among them, and suggest clues for resolving troubles. First, the scene after which no new important character appears must be detected in order to identify where the introduction of important characters is complete. The important characters are the major and minor characters, who lead the story's progression; extras, meaning characters who appear in only a few scenes, are excluded from the accumulation graph. Second, the inflection point is detected in the accumulation graph of characters without extras. This is the point where the increasing line turns horizontal: when the slope of the line stays at zero over many scenes, the starting point of that zero-slope segment is the inflection point. Third, several additional scenes are allowed for further story set-up, such as the implication of conflicts and the suggestion of clues, because the story reaches the scene between the beginning and middle acts only after these additional scenes have elapsed following the introduction of the important characters. The ratio of additional scenes to total scenes was determined by experiment as 7.67%; adding this ratio to the inflection point of the graph gives the story inflection point where the beginning act changes to the middle act. Our proposed method consists of these three steps. We selected 10 movies of various genres for experiment and evaluation, and by measuring the accuracy of boundary detection we show that the proposed method is efficient.
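Below is a minimal Python sketch of the three-step boundary detection described in this abstract, under assumptions the abstract does not spell out (how extras are thresholded and how the inflection point is located are simplified here); the 7.67% additional-scene ratio is the value reported in the paper, and the character names in the example are hypothetical.

```python
def beginning_boundary(characters_per_scene, extra_threshold=3, add_ratio=0.0767):
    """Sketch of the boundary detection between beginning and middle acts.
    characters_per_scene: for each scene, the set of characters appearing in it.
    extra_threshold: characters seen in fewer scenes than this count as extras.
    add_ratio: extra fraction of total scenes appended after the inflection point
               (the paper reports roughly 7.67%)."""
    # Count scene appearances so extras can be filtered out.
    appearances = {}
    for scene in characters_per_scene:
        for c in scene:
            appearances[c] = appearances.get(c, 0) + 1
    important = {c for c, n in appearances.items() if n >= extra_threshold}

    # Build the accumulation curve of distinct important characters.
    seen, curve = set(), []
    for scene in characters_per_scene:
        seen |= set(scene) & important
        curve.append(len(seen))

    # Inflection point: last scene at which a new important character appears,
    # i.e. the start of the final zero-slope segment of the curve.
    inflection = max(i for i, v in enumerate(curve) if i == 0 or v > curve[i - 1])

    return min(len(curve) - 1, inflection + round(add_ratio * len(curve)))

# Toy usage with invented scenes and characters.
scenes = [{"Hero"}, {"Hero", "Mentor"}, {"Villain"}, {"Hero"}, {"Extra1"},
          {"Hero", "Villain"}, {"Mentor"}, {"Hero"}, {"Villain"}, {"Hero"}]
print(beginning_boundary(scenes, extra_threshold=2))  # index of the boundary scene
```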

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of smart devices, users can now communicate in real time, and existing communication styles have changed. As the Internet has shifted who produces information, data have grown massive, giving rise to the very large information sets called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Because text data exists in many places such as newspapers, books, and the web, it is diverse and plentiful, which makes it suitable for understanding social reality. In recent years there have been increasing attempts to analyze text from the web, such as SNS posts and blogs, where the public can communicate freely; this is recognized as a useful way to grasp public opinion immediately and can be used for research on political, social, and cultural issues. Text mining has received much attention as a way to investigate the public's view of candidates and to predict voting rates instead of relying on polls, because many people question the credibility of surveys and tend to refuse to answer, or not to reveal their real intentions, when asked to respond to a poll. This study collected comments from the largest Internet portal site in Korea and examined the 19th Korean presidential election of 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a period that includes the polling black-out just prior to election day. We analyzed word frequencies, associated emotional words, topic-level emotions, and candidate voting rates. Frequency analysis identified the words that were the most important issues each day; in particular, after each presidential debate, the candidate who had become an issue appeared at the top of the frequency ranking. The analysis of associated emotional words identified the issues most relevant to each candidate, and the topic emotion analysis identified each candidate's topics and the public's emotions about them. Finally, we estimated the voting rate by combining the volume of comments with sentiment scores. In this way, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues surrounding presidential candidates and for predicting voting rates. In particular, this study produced daily issues and a quantitative sentiment index, predicted the voting rate for each candidate, and exactly matched the ranking of the top five candidates. Each candidate can use such results to grasp public opinion objectively and reflect it in election strategy: positive issues can be used more actively, negative issues can be corrected, and candidates should be aware that a moral problem can severely damage their reputation. Voters can look objectively at the issues and public opinion about each candidate and make more informed decisions when voting; consulting results like these before voting lets them see public opinion in the big data and choose a candidate from a more objective perspective.
If candidates run campaigns informed by big data analysis, the public will become more active on the web, recognizing that their wants are being reflected, and will be able to express their political views in various places on the web. This can contribute to political participation by the people.
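The abstract states that voting rates were estimated by combining comment volume with sentiment scores but does not give the exact formula; the Python sketch below shows one simple way such a combination could work. The weighting scheme, candidate labels, counts, and sentiment values are all hypothetical.

```python
def estimate_vote_shares(comment_counts, sentiment_scores):
    """Toy sketch: weight each candidate's comment volume by an average
    sentiment score (here assumed to lie in [0, 1]) and normalize the
    weights to obtain vote shares. The paper's actual combination rule
    is not reproduced here."""
    weights = {c: comment_counts[c] * sentiment_scores[c] for c in comment_counts}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

# Hypothetical numbers for three candidates.
counts = {"A": 90_000, "B": 70_000, "C": 40_000}
sentiment = {"A": 0.55, "B": 0.60, "C": 0.45}
print(estimate_vote_shares(counts, sentiment))
```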

Network Planning on the Open Spaces in Geumho-dong, Seoul (서울 금호동 오픈스페이스 네트워크 계획)

  • Kang, Yon-Ju;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.5
    • /
    • pp.51-62
    • /
    • 2012
  • Geumho-dong, Seoul, a redeveloped residential area, is located in the foothills of Mt. Eungbong. The undulating terrain, the construction of a large apartment complex, and the partial implementation of the redevelopment project have caused severe physical and social disconnections in this area. In order to restore the functioning of the disconnected community, this study focuses on regenerating the open spaces as everyday places and on forming a network system among them. The various types of open space are classified into point- or area-type 'bases' and linear 'paths' in order to analyze the state of the network. More than half of the open spaces have a connecting distance of 500 m or more, and many areas are not even included in the service area of any open space. An analysis of connectivity and integration values using the axial map was carried out to identify weak linkages and to choose the sections where additional bases are required. In addition, a field investigation was conducted to diagnose problems and improve the quality of the bases and paths. A network plan for the open spaces in Geumho-dong is then established that secures both the quality and the quantity of bases and paths. The plan adds a major base in the central area and six secondary bases elsewhere, and proposes ways to improve the environment of underdeveloped secondary bases. In the neighborhood parks of the Mt. Daehyun area, a major path is added, and the environment of the paths is improved in certain sections. As a result of the network plan, the connecting distances between bases are reduced significantly, the connectivity and integration values of the area increase, and the service areas of the open spaces cover the whole area properly. Although this study has limitations, such as the need for legal and institutional support and the difficulty of the quantitative indexing process, its significance lies in suggesting a more reasonable and practical plan for the overall network system by defining the complex types of open space simply and clearly and by examining their organic relationships quantitatively and qualitatively.
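As a rough sketch of the axial-map measures mentioned above, the following Python example computes connectivity (degree) and mean depth, the quantity from which space-syntax integration is derived, for a tiny invented axial graph. It is not the plan's actual data or analysis software; the node names are placeholders.

```python
import networkx as nx

# Hypothetical axial map: nodes are axial lines (streets/paths), edges mark
# intersections between them; real data would come from the Geumho-dong map.
axial = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

for line in axial.nodes:
    connectivity = axial.degree(line)                     # lines directly crossed
    depths = nx.single_source_shortest_path_length(axial, line)
    mean_depth = sum(depths.values()) / (len(axial) - 1)  # average steps to all others
    # In space syntax, integration is inversely related to mean depth:
    # shallower lines are better integrated into the network.
    print(line, connectivity, round(mean_depth, 2))
```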

A study on the implementation of Medical Telemetry systems using wireless public data network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.278-283
    • /
    • 2000
  • As information and communication technology has developed, we can easily check blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests at home. Routine health checks become possible by connecting home medical instruments to a wireless public data network. This service relieves the inconvenience of visiting the hospital every time and saves the individual's time and cost. In each house, biological data detected from the human body are transmitted to a distant hospital over the wireless public data network. The medical information transmission system also uses a short-range wireless network to carry the acquired biosignals from the personal device to the main center system in the hospital. The remote telemetry system is implemented using a wireless medium access protocol that adapts the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) scheme standardized in IEEE 802.11. Among the home-care telemetry functions that could measure blood pressure, pulse, ECG, and SpO2, this study implements the ECG measurement part: the ECG function is built into a mobile device, and a 900 MHz band wireless public data interface is added. The elderly, patients, and indeed anyone at home can then acquire, keep, and record ECG data. This is essential for managing people diagnosed with heart disease or more complicated cardiac conditions and for continuously observing patients with latent heart disease. To implement the wireless-network-based medical information transmission system, the ECG data among the biosignals are transmitted using a wireless network modem and the NCL (Native Control Language) protocol to access the wireless network, and the system connects to the wired host computer through the network's SCR (Standard Context Routing) protocol. The host computer checks the recorded personal information and the received ECG data and then sends the corresponding assessment back to the mobile device. The study proposes this medical transmission system model based on the wireless public data network.
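The abstract says the medium access layer adapts the IEEE 802.11 CSMA/CA scheme. The Python sketch below illustrates only the general carrier-sense-plus-random-backoff idea behind CSMA/CA; the channel_busy and transmit callbacks, slot timing, and retry limits are hypothetical and do not reproduce the paper's protocol implementation.

```python
import random
import time

def csma_ca_send(channel_busy, transmit, max_retries=5, slot_time=0.001):
    """Minimal sketch of CSMA/CA-style transmission with random backoff.
    channel_busy(): returns True while the medium is sensed busy.
    transmit(): attempts to send one frame and returns True on success (ACK)."""
    for attempt in range(max_retries):
        while channel_busy():                  # carrier sense: wait for an idle medium
            time.sleep(slot_time)
        backoff_slots = random.randint(0, (2 ** attempt) * 8 - 1)  # growing window
        time.sleep(backoff_slots * slot_time)  # random backoff to avoid collisions
        if not channel_busy() and transmit():  # send only if the medium is still idle
            return True
    return False                               # give up after max_retries attempts
```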


The research for the yachting development of Korean Marina operation plans (요트 발전을 위한 한국형 마리나 운영방안에 관한 연구)

  • Jeong Jong-Seok;Hugh Ihl
    • Journal of Navigation and Port Research
    • /
    • v.28 no.10 s.96
    • /
    • pp.899-908
    • /
    • 2004
  • The rise of income and the introduction of the five-day work week have given Korean people more opportunities to enjoy leisure time, and many Koreans have become interested in oceanic sports such as yachting and in marine leisure equipment. With the popularization and development of such equipment, the scope of marine activities has been expanding in Korea, just as in the advanced maritime countries. However, current conditions for these sports in Korea are not advanced and are in some respects worse than in less developed countries. In order to develop Korea's underdeveloped marina resources, we need to customize the marina models of advanced nations to the specific needs and circumstances of Korea. We have therefore carried out a comparative analysis of how Australia, New Zealand, Singapore, Japan, and Malaysia operate their marinas, reaching the following conclusions. First, in marina operations, in order to protect personal property rights and preserve the environment, membership and non-membership schemes and profit and non-profit schemes must be operated separately, yet without regulating the dress code for entering or leaving the club house. Second, in order to generate greater added value, new sporting events should be hosted each year; there is also a need for the active use of volunteers, the promotion of yacht tourism, the simplification of CIQ procedures for foreign yachts, and the provision of language services. Third, a permanent yacht school should be established, with classes taught by qualified instructors; beginner, intermediate, and advanced classes should be managed separately, with special emphasis on a dinghy yacht program for children. Fourth, arrival at and departure from the moorings must be regulated autonomously, and there must be systematic measures for the marina to compensate, in part, for loss of and damage to equipment under its security and surveillance after usage fees have been paid. Fifth, marine safety personnel must be organized from civilian organizations in accordance with Korea's current circumstances so that they can be used actively in benchmarking, rescue operations, and searches at sea in times of maritime disaster.

Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2575-2583
    • /
    • 1998
  • The production of highly reliable software systems and the evaluation of their performance have become important concerns in the software industry. Software evaluation has mainly been carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs during a fixed time interval in the software testing phase. Theoretical software reliability models are sometimes unsuitable for practical testing phases in which a software error at a certain testing stage arises from imperfect debugging, abnormal software correction, and so on. Such a software testing stage needs to be treated as an outlying stage, and we can assume that software reliability does not improve in this outlying stage because of the nuisance factor. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, based on a modification of the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under the assumed prior information and the squared error loss function. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model according to the values of the nuisance parameter, using accuracy, bias, trend, and noise metrics as the quantitative evaluation criteria through computer simulation.
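For readers unfamiliar with the Jelinski-Moranda structure that the paper modifies, the Python sketch below writes its log-likelihood and simulates inter-failure times with one artificially inflated (outlying) stage. The Bayesian estimation under squared error loss described in the abstract is not reproduced here, and all parameter values are hypothetical.

```python
import math
import random

def jm_loglikelihood(times, N, phi):
    """Log-likelihood of inter-failure times under the Jelinski-Moranda model:
    the i-th inter-failure time is exponential with rate phi * (N - i + 1),
    where N is the initial fault count and phi the per-fault failure rate."""
    ll = 0.0
    for i, t in enumerate(times, start=1):
        rate = phi * (N - i + 1)
        ll += math.log(rate) - rate * t
    return ll

def simulate_jm(N=20, phi=0.05, seed=1):
    """Simulate inter-failure times, then inflate one stage to mimic an
    outlying testing stage (e.g., imperfect debugging) as discussed above."""
    random.seed(seed)
    times = [random.expovariate(phi * (N - i + 1)) for i in range(1, N + 1)]
    times[9] *= 5.0   # hypothetical outlying testing stage
    return times

times = simulate_jm()
print(jm_loglikelihood(times, N=20, phi=0.05))
```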


The Impact of the Reclamation and Utilization of Idle Hillside Lands on Future Food Production in Korea (식량(食糧)의 안정적(安定的) 공급(供給)을 위한 산지개발이용의 필요성(必要性)과 전망(展望))

  • Park, Johng-Moon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.11 no.4
    • /
    • pp.213-233
    • /
    • 1979
  • It is generally agreed that the country's population will grow to 52 million by the year 2000 and that, due to the active growth of industry, urbanization, and road construction, a sizable portion of existing arable land will be diverted to non-agricultural uses in the near future. From 1966 to 1977, the average annual conversion of arable land to other uses was estimated at 12,909 ha. If this trend persists, approximately 181,000 ha of arable land will be converted to other uses between 1978 and 1991, when the 6th Five-Year Economic Development Plan terminates. On the other hand, it is certain that the growing population (39 million in 1981, 45 million in 1991, 52 million in 2001) and the changes in diet accompanying higher living standards will bring a phenomenal increase in demand not only for staple foods but also for livestock products such as meat, milk, and eggs, as well as vegetables and fruits. These future demands for various foods naturally imply a need to expand arable land at the same time. It is predicted that, unless efforts beyond the present scale are made to expand arable land, the national food self-sufficiency level will drop from 79% in 1977 to 62% in 1991. There are several ways to meet the increased future food demand, including greater land use intensity, higher unit-area yields, minimizing the conversion of arable land to other uses, and expanding arable land through the reclamation of idle hillside lands and tidal lands. Among these, expanding arable land by reclaiming idle hillside lands and tidal lands is the more positive measure for increasing future food production. The reclamation of hillside land demands particular attention because it requires more advanced agronomic and engineering technologies, larger-scale funding, and integrated socioeconomic consideration. In agronomic terms, techniques for the early improvement of the chemical and physical properties of soils, proper soil conservation measures, and rational cropping systems are of particular importance. As for financial support to encourage farming on hillside land, bold funding inputs are essential for the construction of roads and the installation of irrigation and drainage facilities and soil conservation mechanisms, which will ensure stable farming with reasonable incomes on the newly reclaimed land.
