Title/Summary/Keyword: Engineering information

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology / v.22 no.2 / pp.168-178 / 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for the EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for the EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN were both accurate and stable. This method is promising for automatically evaluating aortic morphology and alleviating the workload of radiologists in the near future.
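
The Dice coefficient reported above is a standard overlap metric for segmentation masks. As a minimal illustration only (the authors' network and post-processing are not reproduced here), it can be computed over binary 3D masks as follows:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks.

    Dice = 2 * |P ∩ T| / (|P| + |T|); 1.0 means perfect overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two overlapping 3D masks (axes: z, y, x).
p = np.zeros((4, 4, 4), dtype=bool); p[1:3, 1:3, 1:3] = True
t = np.zeros((4, 4, 4), dtype=bool); t[1:3, 1:4, 1:3] = True
print(round(dice_coefficient(p, t), 3))  # 0.8
```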

Added Value of Chemical Exchange-Dependent Saturation Transfer MRI for the Diagnosis of Dementia

  • Jang-Hoon Oh;Bo Guem Choi;Hak Young Rhee;Jin San Lee;Kyung Mi Lee;Soonchan Park;Ah Rang Cho;Chang-Woo Ryu;Key Chung Park;Eui Jong Kim;Geon-Ho Jahng
    • Korean Journal of Radiology / v.22 no.5 / pp.770-781 / 2021
  • Objective: Chemical exchange-dependent saturation transfer (CEST) MRI is sensitive for detecting solid-like proteins and may detect changes in the levels of mobile proteins and peptides in tissues. The objective of this study was to evaluate the characteristics of chemical exchange proton pools using the CEST MRI technique in patients with dementia. Materials and Methods: Our institutional review board approved this cross-sectional prospective study, and informed consent was obtained from all participants. This study included 41 subjects (19 with dementia and 22 without dementia). Complete CEST data of the brain were obtained using a three-dimensional gradient and spin-echo sequence to map CEST indices, such as amide, amine, hydroxyl, and magnetization transfer ratio asymmetry (MTRasym) values, using six-pool Lorentzian fitting. Statistical analyses of the CEST indices were performed to evaluate group comparisons, their correlations with gray matter volume (GMV) and Mini-Mental State Examination (MMSE) scores, and receiver operating characteristic (ROC) curves. Results: Amine signals (0.029 for non-dementia, 0.046 for dementia, p = 0.011 at the hippocampus) and MTRasym values at 3 ppm (0.748 for non-dementia, 1.138 for dementia, p = 0.022 at the hippocampus) and 3.5 ppm (0.463 for non-dementia, 0.875 for dementia, p = 0.029 at the hippocampus) were significantly higher in the dementia group than in the non-dementia group. Most CEST indices were not significantly correlated with GMV; however, except for amide, most indices were significantly correlated with MMSE scores. The classification power of most CEST indices was lower than that of GMV, but adding one of the CEST indices to GMV improved the classification between the subject groups. The largest improvement was seen for the MTRasym values at 2 ppm in the anterior cingulate (area under the ROC curve = 0.981), with a sensitivity of 100% and a specificity of 90.91%. Conclusion: CEST MRI can potentially image alterations in the Alzheimer's disease brain noninvasively, without injected isotopes, for monitoring different disease states, and may provide a new imaging biomarker in the future.
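
MTRasym has a standard definition as the asymmetry of the normalized Z-spectrum around the water resonance. A minimal sketch of that calculation is below; the toy Z-spectrum, offsets, and amplitudes are hypothetical, and the six-pool Lorentzian fitting used in the study is not reproduced:

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, delta_ppm):
    """Magnetization transfer ratio asymmetry at a given offset.

    MTRasym(Δω) = Z(-Δω) - Z(+Δω), where Z = S/S0 is the
    normalized Z-spectrum, interpolated from the sampled points.
    """
    z_neg = np.interp(-delta_ppm, offsets_ppm, z_spectrum)
    z_pos = np.interp(delta_ppm, offsets_ppm, z_spectrum)
    return z_neg - z_pos

# Toy Z-spectrum from -5 to +5 ppm: direct water saturation at 0 ppm
# plus a small CEST dip at +3.5 ppm (amide protons).
offsets = np.linspace(-5, 5, 101)
z = 1 - 0.8 * np.exp(-offsets**2 / 0.5) - 0.05 * np.exp(-(offsets - 3.5)**2 / 0.2)
print(round(mtr_asym(offsets, z, 3.5), 4))  # positive: CEST effect at +3.5 ppm
```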

Management of Visitors in the Seonunsan Provincial Park through an Analysis on Visitors' Travel Motivations (탐방객 방문 동기 분석을 통한 선운산도립공원 관리 방안)

  • Sung, Chan Yong;Kim, Dong Pil;Cho, Woo
    • Korean Journal of Environment and Ecology / v.30 no.6 / pp.1047-1056 / 2016
  • This study aims to provide managerial implications for provincial parks through an analysis of visitors' characteristics and motivational factors. The information was collected by surveying 290 visitors. The survey questionnaire consisted of questions regarding visitors' socioeconomic characteristics, the characteristics of their travel behavior, their motivations for visiting the park, and the degree of satisfaction derived from the visit. Results show that most respondents did not collect any information on the park prior to their visit. Most visitors also do not visit other tourist sites nearby and are not aware of the Gochang UNESCO biosphere reserve, which indicates that Gochang-gun, which is responsible for park management, needs to make greater efforts to promote the park. A factor analysis of visitors' motivations extracted three factors for visiting the Seonunsan Provincial Park: 'to hike,' 'to experience and observe nature' (i.e., nature learning field trips and camping), and 'to build and nurture bonds with family and friends.' To examine the effect these motivational factors had on visitors' satisfaction, we conducted a multiple regression analysis with the three extracted factors and the respondents' socioeconomic characteristics as independent variables, and the degree to which respondents would recommend visiting the park as the dependent variable. The results show that, of the three factors, only the 'hiking' factor significantly affected the degree of recommendation. This suggests that the Seonunsan Provincial Park satisfied only hikers and failed to meet the demand for nature experience and observation. It is therefore suggested that the park managers develop new experience-based tourism programs, such as guided tours conducted by professional eco-interpreters.
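
The two-step analysis described above (exploratory factor analysis of the motivation items, then multiple regression on satisfaction) follows a standard pattern. A sketch with scikit-learn is below; the data, item counts, and variable names are hypothetical stand-ins for the survey:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical survey matrix: 290 respondents x 9 motivation items (Likert 1-5).
items = rng.integers(1, 6, size=(290, 9)).astype(float)
recommendation = rng.integers(1, 6, size=290).astype(float)  # dependent variable

# Step 1: extract three latent motivation factors
# (e.g., hiking, nature experience, bonding).
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(items)          # 290 x 3 factor scores
print("loadings shape:", fa.components_.shape)   # 3 x 9 item loadings

# Step 2: regress recommendation on the factor scores (socioeconomic
# covariates would be appended as extra columns in a full analysis).
reg = LinearRegression().fit(factor_scores, recommendation)
print("coefficients:", np.round(reg.coef_, 3))
```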

Validation of ENVI-met Model with In Situ Measurements Considering Spatial Characteristics of Land Use Types (토지이용 유형별 공간특성을 고려한 ENVI-met 모델의 현장측정자료 기반의 검증)

  • Song, Bong-Geun;Park, Kyung-Hun;Jung, Sung-Gwan
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.2 / pp.156-172 / 2014
  • This research measures and compares on-site net radiation energy, air temperature, wind speed, and surface temperature across various spatial characteristics, with a focus on land use types in urban areas of Changwon, Gyeongsangnam-do, to analyze the accuracy of the ENVI-met model, a microclimate analysis program. The on-site mobile measurements were performed over three days: two days during the daytime and one day during the nighttime. The ENVI-met simulations were run for the same time periods as the on-site measurements. The results indicated that the ENVI-met model showed higher net radiation than the on-site measurements by approximately 300 W/m² during the daytime, whereas the on-site measurements showed higher net radiation energy by approximately 200 W/m² during the nighttime. Air temperature was approximately 2-6°C higher in the on-site measurements during both the daytime and nighttime. The on-site measurements also showed surface temperatures approximately 7-13°C higher than the ENVI-met results. For wind speed, there was a significant difference between the ENVI-met model and the on-site measurements. As for the correlation between the two, air temperature showed a significantly high correlation, whereas the correlations for net radiation energy, surface temperature, and wind speed were very low. These results appear to be affected by over- or underestimation of solar and terrestrial radiation, the climatic conditions of the surrounding areas, and the characteristics of land cover. Hence, these factors should be considered when applying these findings in urban and environmental planning for improving the urban microclimate.
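
The model-versus-measurement comparison described above reduces to agreement statistics on paired series. A minimal sketch is below; the numbers are placeholders, not the Changwon measurements:

```python
import numpy as np
from scipy import stats

# Placeholder paired series: ENVI-met output vs. on-site measurement
# (e.g., hourly air temperature in °C at one land use type).
modeled  = np.array([24.1, 25.3, 26.8, 28.0, 27.5, 25.9])
measured = np.array([26.9, 27.8, 29.5, 31.2, 30.6, 28.8])  # ~2-6°C warmer

r, p = stats.pearsonr(modeled, measured)
rmse = np.sqrt(np.mean((modeled - measured) ** 2))
bias = np.mean(modeled - measured)  # negative: model underestimates
print(f"r = {r:.3f} (p = {p:.3g}), RMSE = {rmse:.2f} °C, bias = {bias:.2f} °C")
```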

Analysis of Anxiety EEG per Driving Speed on Roads with Different Design Speeds (상이한 설계속도 도로에서의 주행속도별 불안뇌파 분석)

  • Lim, Joon Beom;Lee, Soo Beom;Joo, Sung Kab;Shin, Joon Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.5 / pp.2049-2056 / 2013
  • With advances in information and communication technology, the information age has arrived and human desires are increasing. In transportation, this raises the need to design and build superhighways to improve mobility. This study was conducted to analyze whether drivers can drive at the design speed on a superhighway with a design speed exceeding 120 km/h. We tested whether the running speed at which a driver begins to feel anxious increases as road alignment and design standards improve with higher design speeds. In the experiment, 30 subjects wore EEG (electroencephalogram) recording devices, and we compared the power of β waves, which are generated when drivers feel anxious, while driving on roads with different design speeds and while driving virtually in a simulator. Kangbyeonbukro (90 km/h), Jayuro (100 km/h), the Joongang Expressway (110 km/h), and the Seohaean Expressway (120 km/h) were selected as experimental sections. The power of anxiety EEGs was compared while drivers drove on Kangbyeonbukro and Jayuro at 80-130 km/h, on the Joongang Expressway at 100-150 km/h, and on the Seohaean Expressway at 110-180 km/h, and during simulator driving at the same 110-180 km/h range. The speed at which anxiety EEGs increased was statistically verified with a paired t-test. As a result, the speed at which anxiety EEGs increased during simulator driving was nearly 30 km/h higher than during actual driving on the expressways, and anxiety EEGs increased at the same speed on the roads with design speeds of 90 km/h and 100 km/h, indicating small differences in road alignment and standards between them. However, the running speed at which drivers began to feel anxious was higher on the roads with design speeds of 110 km/h and 120 km/h. This implies that drivers can drive at higher speeds as road alignment and design standards improve.
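
The paired t-test mentioned above compares, per subject, the speed at which anxiety EEG power rises on the real road versus in the simulator. A sketch with SciPy, using hypothetical speeds:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject speeds (km/h) at which β-wave (anxiety EEG)
# power first rose, on the real expressway vs. in the simulator.
real_road = np.array([118, 122, 125, 119, 121, 124, 120, 123])
simulator = np.array([148, 151, 155, 150, 149, 154, 152, 153])  # ~30 km/h higher

t_stat, p_value = stats.ttest_rel(real_road, simulator)
print(f"mean difference = {np.mean(simulator - real_road):.1f} km/h, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```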

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screening process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screening process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance), development (specification, conceptualization, formalization, implementation, and maintenance), and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected for its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform-independent characteristics. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology.
Second, METHONTOLOGY does not include a pre-development stage, a "feasibility study," which would help developers ensure that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, and determine whether the project is likely to succeed. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the constructed ontology in actual use under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition throughout the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavyweight methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. This study also offers insights for researchers who want to design a more advanced ontology development methodology.
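
The study built GSO in OWL-DL with Protégé-OWL. Purely as an illustration of declaring OWL classes and properties programmatically, here is a sketch using the owlready2 Python library (a different tool from the one the study used); the class and property names below are hypothetical, not taken from GSO:

```python
# pip install owlready2
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/gso-sketch.owl")

with onto:
    class Student(Thing): pass
    class Course(Thing): pass
    class RequiredCourse(Course): pass       # subclass axiom

    class hasCompleted(ObjectProperty):      # Student --hasCompleted--> Course
        domain = [Student]
        range = [Course]

# A DL reasoner (e.g., via owlready2's sync_reasoner()) could then classify
# whether a student individual satisfies requirements expressed as axioms.
onto.save(file="gso_sketch.owl")
```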

DISEASE DIAGNOSED AND DESCRIBED BY NIRS

  • Tsenkova, Roumiana N.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1031-1031 / 2001
  • The mammary gland is made up of remarkably sensitive tissue, which is capable of producing a large volume of secretion, milk, under normal or healthy conditions. When bacteria enter the gland and establish an infection (mastitis), inflammation is initiated, accompanied by an influx of white cells from the blood stream, altered secretory function, and changes in the volume and composition of secretion. Cell numbers in milk are closely associated with inflammation and udder health. These somatic cell counts (SCC) are accepted as the international standard measurement of milk quality in dairy and for mastitis diagnosis. NIR spectra of unhomogenized composite milk samples from 14 cows (healthy and mastitic), taken 7 days after parturition and during the next 30 days of lactation, were measured. Different multivariate analysis techniques were used to diagnose the disease at a very early stage and to determine how the spectral properties of milk vary with its composition and animal health. A PLS model for prediction of somatic cell count (SCC) from NIR milk spectra was built. The best accuracy of determination for the 1100-2500 nm range was found using smoothed absorbance data and 10 PLS factors. The standard error of prediction for an independent validation set was 0.382, with a correlation coefficient of 0.854 and a coefficient of variation of 7.63%. SCC determination from NIR milk spectra was found to be indirect, based on the related changes in milk composition. From the spectral changes, we learned that when mastitis occurred, the most significant factors that simultaneously influenced the milk spectra were alteration of milk proteins and changes in the ionic concentration of milk. This was consistent with the results we obtained when we further applied two-dimensional correlation spectroscopy (2DCOS). Two-dimensional correlation analysis of NIR milk spectra was done to assess the changes in milk composition that occur when somatic cell count (SCC) levels vary. The synchronous correlation map revealed that when SCC increases, protein levels increase while water and lactose levels decrease. Results from the analysis of the asynchronous plot indicated that changes in water and fat absorptions occur before those of other milk components. In addition, the technique was used to assess the changes in milk during a period when SCC levels do not vary appreciably. Results indicated that milk components are in equilibrium, and no appreciable change in a given component was seen with respect to another. This was found in both healthy and mastitic animals. However, milk components were found to vary with SCC content regardless of the range considered. This important finding demonstrates that 2D correlation analysis may be used to track even subtle changes in milk composition in individual cows. To find the right SCC threshold for mastitis diagnosis at the cow level, classification of milk samples was performed using soft independent modeling of class analogy (SIMCA) and different spectral data pretreatments. Two SCC levels, 200,000 cells/mL and 300,000 cells/mL, were set up and compared as thresholds to discriminate between healthy and mastitic cows. The best detection accuracy was found with 200,000 cells/mL as the mastitis threshold and smoothed absorbance data: 98% of the milk samples in the calibration set and 87% of the samples in the independent test set were correctly classified.
When the spectral information was studied, it was found that successful mastitis diagnosis was based on revealing the spectral changes related to the corresponding changes in milk composition. NIRS combined with different ways of spectral data mining can provide a faster and nondestructive alternative to current methods for mastitis diagnosis and new insight into disease understanding at the molecular level.
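
The PLS model described above (absorbance spectra in, SCC out, 10 latent factors) can be sketched with scikit-learn; the spectra and target below are synthetic placeholders, not the milk data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Placeholder data: 140 milk spectra x 700 wavelength points (1100-2500 nm),
# predicting a somatic-cell-count-related target.
X = rng.normal(size=(140, 700))
y = 5.0 + 0.3 * X[:, 100] + rng.normal(scale=0.2, size=140)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=10)          # 10 PLS factors, as in the study
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

sep = np.sqrt(np.mean((y_te - y_hat) ** 2))   # standard error of prediction
r = np.corrcoef(y_te, y_hat)[0, 1]
print(f"SEP = {sep:.3f}, r = {r:.3f}")
```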

Passport Recognition and Face Verification Using an Enhanced Fuzzy ART-Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.17-31 / 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and discriminate forged passports, to improve the efficiency and systematic control of immigration management. Adjusting the slant is very important for character recognition and face verification, since slanted passport images can introduce various unwanted effects into the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected. The angle adjustment can be conducted using the slant of the straight horizontal line that connects the centers of thickness between the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The string of codes is transformed into binary format by applying a repeated binarization method to the area of the extracted passport code strings. The string codes are restored by applying a CDM mask to the binary string area, and individual codes are extracted with the 8-neighborhood contour tracking algorithm. In the proposed RBF network, an enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is applied to the middle layer using a fuzzy logic connection operator. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and the feature vectors of facial images from a database constructed with the PCA algorithm. In tests using forged passports and passports with slanted images, the proposed method proved effective in recognizing passport codes and verifying facial images.
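
Several preprocessing steps named above (Sobel operator, binarization, horizontal smearing, locating the longest code string) are standard image operations. A rough OpenCV sketch is below; the file name is hypothetical, and the CDM mask, 8-neighborhood contour tracking, and fuzzy ART network are not reproduced:

```python
# pip install opencv-python
import cv2
import numpy as np

img = cv2.imread("passport_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
assert img is not None, "sample image not found"

# Sobel gradients highlight character edges in the code lines.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))

# Binarize, then "smear" horizontally by dilating with a wide kernel so
# characters in a code line merge into one connected string region.
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
smeared = cv2.dilate(binary, kernel)

# Connected components stand in for contour tracking to locate code strings;
# the widest region is a candidate for the longest string used in slant estimation.
n, labels, cc_stats, _ = cv2.connectedComponentsWithStats(smeared)
widest = max(range(1, n), key=lambda i: cc_stats[i, cv2.CC_STAT_WIDTH])
print("widest string region (x, y, w, h):", cc_stats[widest, :4])
```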

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.111-125 / 2011
  • Clustering is a process of grouping similar or relevant documents into a cluster and assigning a meaningful concept to the cluster. By this process, clustering facilitates fast and accurate search for relevant documents by narrowing the search down to the collection of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps with several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level in the concept hierarchy, and also could not validate the semantic hierarchical relationship between a complex concept and each of the simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level in the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice, in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out complex concept detection. This system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the resulting lattice. The preprocessing phase represents the documents as x-y coordinate values in a two-dimensional space based on the weights of the terms appearing in the documents. First, the documents are refined by applying stopword removal and stemming to extract index terms. Then, each index term is assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated using the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then, the distance between any two clusters is measured, and the closest clusters are grouped into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, a feature selection method is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, i.e., whether they have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting the important and representative terms in it. To correctly select key features, a method is needed to determine how much each term contributes to the class of the document. Among several methods achieving this goal, this paper adopted the χ² (chi-square) statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection.
The results of the performance evaluation showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
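
The preprocessing and validation steps described above (TF-IDF weighting, Euclidean-distance agglomeration, χ² feature scoring) map onto standard library calls. The sketch below uses conventional non-overlapping agglomerative clustering; the HOC lattice modification itself is the paper's contribution and is not reproduced:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_selection import chi2

docs = ["stocks fall on rate fears", "central bank raises rates",
        "team wins league title", "coach praises team defense"]

# TF-IDF weighting with stopword removal (stemming omitted for brevity).
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Conventional (non-overlapping) agglomerative clustering for comparison;
# HOC modifies this step to allow overlapping clusters, yielding a lattice.
# Note: `metric=` is `affinity=` in scikit-learn < 1.2.
labels = AgglomerativeClustering(n_clusters=2, metric="euclidean",
                                 linkage="average").fit_predict(X.toarray())
print("cluster labels:", labels)

# χ² scores measure each term's dependency on the cluster (class) labels,
# as used for feature selection in the validation phase.
scores, p_values = chi2(X, labels)
terms = vec.get_feature_names_out()
top = scores.argsort()[::-1][:5]
print("top χ² terms:", [terms[i] for i in top])
```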

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.113-129 / 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value based on an understanding of the place branding concept and methodology. Many fields, such as marketing, architecture, and city planning, exert an influence in creating an impressive brand image. A place brand that is well recognized by both Koreans and foreigners creates significant economic effects. There has been research on creating a strategic and detailed place brand image; the representative work was carried out by Anholt, who surveyed two million people from 50 different countries. However, such investigations, including survey research, require a great deal of effort and significant expense. As a result, more affordable, objective, and efficient research methods are needed. The purpose of this paper is to find a way to measure the intensity of a brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constituting the place brand image from related web documents, and in this way measures the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's image index ranking of 50 cities around the world. Four methods are compared in the test. First, the RANDOM method ranks the cities in the experiment artificially. The HUMAN method uses a questionnaire: nine volunteers who are well acquainted with brand management and with the cities under evaluation are asked to rank the cities, and their rankings are compared with Anholt's results. The TM method applies the proposed method to evaluate the cities with all evaluation criteria. TM-LEARN, an extension of TM, selects significant evaluation items from the items in every criterion and then evaluates the cities with the selected criteria only. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, compared to the evaluation method that targets ordinary people, this method proved more accurate. Second, compared to the traditional survey method, the time and cost are much lower because automated means were used. Third, the proposed methodology is timely because it can be re-evaluated at any time. Fourth, unlike Anholt's method, which evaluated only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, the proposed methodology has relatively high objectivity because the research was conducted on open source data. As a result, our text mining approach to city image evaluation shows validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers with clear guidelines for brand management in the public and private sectors. In the public sector, local officials could use the proposed method to formulate strategies and enhance the image of their places in an efficient manner.
Rather than conducting heavy questionnaires, local officials could quickly monitor the current place image and proceed to a formal place image survey only when the results from the proposed method are out of the ordinary, whether they indicate an opportunity or a threat to the place. Moreover, by combining the proposed method with morphological analysis, sentiment analysis, and the extraction of meaningful facets of the place brand from text, marketing strategy planners and civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
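
The evaluation above compares each method's city ranking against Anholt's index using RMSE. A minimal sketch with hypothetical rankings:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two rank (or score) vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical ranks of five cities under Anholt's index and three methods.
anholt   = [1, 2, 3, 4, 5]
random_m = [4, 1, 5, 2, 3]   # artificial ranking baseline ("RANDOM")
human    = [1, 3, 2, 4, 5]   # volunteer ranking ("HUMAN")
tm       = [2, 1, 3, 4, 5]   # proposed text-mining ranking ("TM")

for name, ranking in [("RANDOM", random_m), ("HUMAN", human), ("TM", tm)]:
    print(f"{name:6s} RMSE vs. Anholt = {rmse(anholt, ranking):.3f}")
```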