• Title/Summary/Keyword: 단순보 (simple beam)


Seasonal Variation in Species Composition and Abundance of Shallow Water Fishes at Taean Beaches, in the Yellow Sea of Korea (태안 해빈 천해 어류 종조성의 계절 변화)

  • Noh, Hyung-Soo; Youk, Kwan-Su; Hwang, Hak-Bin; Lee, Tae-Won
    • The Sea: Journal of the Korean Society of Oceanography / v.14 no.3 / pp.145-154 / 2009
  • Seasonal variation in species composition and abundance of shallow-water fish at the Hakampo and Yeonpo beaches in Taean, on the western coast of Korea, was determined from monthly samples collected with a beach seine from January to December 2007. A total of 30 species, 964 individuals and 10,564.1 g of fish were collected from Hakampo beach, and 46 species, 4,447 individuals and 28,622.4 g of fish from Yeonpo beach. Juveniles of coastal fish such as Chelon haematochelius, Paralichthys olivaceus, Repomucenus lunatus, Sebastes schlegelii and Takifugu niphobles predominated in abundance, and juveniles of pelagic migrants such as Konosirus punctatus, Sardinella zunasi and Engraulis japonicus were collected in large numbers between summer and autumn. The fish collected were mainly small-sized species and juveniles. C. haematochelius and the migrant fish were young of the year, and commercially important fish such as S. schlegelii, P. olivaceus, Pleuronectes yokohamae and Hexagrammos otakii were one- to two-year-old juveniles. They appear to use the shallow water as a nursery ground until they move out to deeper water. The number of species and the abundance were lower at the fine-sand Hakampo beach than at the muddy-sand Yeonpo beach, where some Zostera marina was also found. At Yeonpo beach, adults of Gymnogobius mororanus, which prefer muddy shallow water, and Syngnathus schlegeli, which lives in the seagrass, were also collected in large numbers in spring, in addition to the resident fish and pelagic migrants of the warm months. Resident species were more abundant at the Taean beaches than at beaches in the southern part of the west coast of Korea, where juveniles of pelagic migrants were more abundant.

Improvement of Oxygen Isotope Analysis in Seawater samples with Stable Isotope Mass Spectrometer (질량분석기를 이용한 해수 중 산소안정동위원소 분석법의 개선)

  • Park, Mi-Kyung; Kang, Dong-Jin; Kim, Kyung-Ryul
    • The Sea: Journal of the Korean Society of Oceanography / v.13 no.4 / pp.348-353 / 2008
  • Although the oxygen isotope ratio has advantages as a water mass tracer, it has not been used actively in water mass studies because of analytical difficulties. The most popular method for analyzing the oxygen isotope ratio in water samples is the equilibration method, in which water is brought into isotopic equilibrium with CO₂ at constant temperature. The precision of oxygen isotope analysis using a commercial automatic H₂O/CO₂ equilibrator is ±0.1‰, which is not sufficient for open-ocean studies. The object of this study is to improve the analytical precision enough for open-ocean application by modifying the instrument. When the sample gas is transferred by a pressure difference, fractionation, that is, preferential transport of the lighter isotope, can occur because of the long transport path between the equilibrator and the mass spectrometer; the long distance and large volume of this sample-gas pathway are the biggest source of error during the analysis. Therefore, a liquid nitrogen trap and a high-vacuum system were introduced into the system. The precisions of 14 repeated analyses of the same seawater sample were ±0.081‰ with the built-in system and ±0.021‰ with the system modified in this study.
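
As a point of reference for the precision figures quoted above, the sketch below shows how the 1-sigma precision of repeated analyses of a single sample can be computed as a sample standard deviation; the δ18O values used here are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: express the precision of repeated delta-18O analyses as a
# 1-sigma standard deviation. The values below are hypothetical placeholders.
import statistics

# Hypothetical delta-18O values (in permil) from 14 runs of one seawater sample
delta_18o_runs = [-0.12, -0.10, -0.14, -0.11, -0.13, -0.12, -0.10,
                  -0.13, -0.12, -0.11, -0.14, -0.12, -0.13, -0.11]

precision = statistics.stdev(delta_18o_runs)  # sample standard deviation (1 sigma)
print(f"precision = ±{precision:.3f} ‰ (n = {len(delta_18o_runs)})")
```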

A Study on the Excavated Sab(a funeral fan) from Lime-filled Tomb and Lime-layered Tomb during the Joseon Dynasty (조선시대 회격·회곽묘 출토 삽(翣)에 대한 고찰)

  • Yi, Seung Hae; An, Bo Yeon
    • Korean Journal of Heritage: History & Science / v.41 no.2 / pp.43-59 / 2008
  • Sap (翣, a funeral fan) is a funeral ceremonial object associated with Confucian ceremonial custom; it was crafted by making a wooden frame, attaching a white cloth or thick paper onto it, drawing pictures on it, and making a holder for a handle. According to the Liji (Records of Rites), Sap has been used since the Zhou Dynasty, and these Chinese examples are not greatly different from the Korean Sap described in the Joseon Wangjo Sillok (Annals of the Joseon Dynasty), Gukjo Oryeui (the Five Rites of the State), and Sarye Pyeollam (Handbook on Four Rituals). This study examined Sap excavated from lime-filled and lime-layered tombs of Joseon-period aristocrats, together with the historical records, to characterize them by form, manufacturing method, and period of use. The number and designs of Sap varied according to the deceased's social status: aristocrats mainly used one pair of 亞-shaped Bulsap and a pair of Hwasap with a cloud design. A Sap was wrapped twice with Chojuji or Jeojuji paper and a third time with Yeonchangji paper; it was then covered with white ramie, hemp, cotton, silk satin, etc. Bobul (an axe-shaped and 亞-shaped design) was drawn on both sides of the Sap, and a rising current of cloud was drawn around the periphery, mainly in red or scarlet pigments. The Sap excavated from aristocrats' lime-filled and lime-layered tombs are of the type separated from the handle: their long handles were burnt during the funeral carriage procession, and the remaining Sap were later erected on both sides of the coffin. The manufacturing process can be inferred by examining the excavated relics, which are classified by the number of points into three-point and two-point types. Among the three-point type (Type I), some were woven like a basket from a whole wooden plate or from bamboo cut into flat strips. The three-point Sap was concentrated comparatively in the first half of Joseon and was manufactured by various methods despite its rather uniform overall shape. The two-point Sap, in contrast, was manufactured by a relatively standardized method: the body was made in the form of a rectangle or an inverted trapezoid, the upper parts with the two hanging points were attached, and the top surface was made either curved (Type II) or straight (Type III), differentiating it from the three-point type. This method is simpler than that of the three-point type but not greatly different from it; in particular, the method of crafting the top surface as a straight line is still used today. Of the 30 Sap examples examined, those whose production years are known from the burial dates inscribed on the tombstones indicate that Type I was concentrated in the first half of the 16th century, Type II spanned from the second half of the 16th century to the second half of the 17th century, and Type III spanned from the first half of the 17th century to the first half of the 18th century. The shape of Sap is thus deemed to have changed from Type I to Type II and then from Type II to Type III; in the 17th century, a time of transition, Types II and III coexisted. Of the three types, Types II and III are similar in that both have two points, so a noteworthy time of transition is thought to have been the middle of the 16th century. Type I, compared with Types II and III, is thought to have required more effort and skill to produce, and as time passed the shape and manufacturing methods of Sap are presumed to have been further simplified according to the principle of economy. The simplification of funeral ceremonies is presumed to have accelerated after the Imjinwaeran (Japanese invasion of Joseon, 1592-1598), given that, as shown in the Annals of King Seonjo, state funerals were suspended several times. In the case of Sap, simplification began in the second half of the 16th century, and by the 18th century, rather than crafting a separate Sap, the Sap was drawn directly on the coffin cover and the coffin. Even in this simplified form, however, the regulations on the use of Sap specified in the Liji were observed, so the ceremony was simplified in a rational way.

Evaluation of CH4 Flux for Continuous Observation from Intertidal Flat Sediments in the Eoeun-ri, Taean-gun on the Mid-western Coast of Korea (서해안 태안 어은리 갯벌의 연속관측 메탄(CH4) 플럭스 특성 평가)

  • Lee, Jun-Ho; Rho, Kyoung Chan; Woo, Han Jun; Kang, Jeongwon; Jeong, Kap-Sik; Jang, Seok
    • Economic and Environmental Geology / v.48 no.2 / pp.147-160 / 2015
  • On 31 August and 1 September 2014, emissions of CH₄, CO₂, and O₂ were measured six times using the closed chamber method on exposed tidal flat sediments, at the same position relative to the low point of the tidal cycle, at Eoeun-ri, Taean-gun, on the mid-western coast of Korea. The CH₄ concentrations in the air samples collected in the chamber were measured within 6 hours of collection by gas chromatography with an EG analyzer (model GS-23), and the other gases were measured in real time with a multi-gas monitor. The gas emission fluxes (source (+) and sink (-)) were calculated by simple linear regression of the change in concentration over time. To examine the influence of environmental conditions, surrounding parameters (water content, temperature, total organic carbon, mean grain size of the sediments, and the temperature inside the chamber) were also measured at the study site. On the first day, across three measurements over 5 hours 20 minutes, the observed CO₂ flux was -137.00 to -81.73 mg/m²/hr, and the O₂ flux measured simultaneously was -0.03 to 0.00 mg/m²/hr. On the second day, with the same number of measurements, the CO₂ flux was -20.43 to -2.11 mg/m²/hr and the O₂ flux -0.18 to -0.14 mg/m²/hr. On both days the CH₄ flux before low tide was negative: -0.02 mg/m²/hr on the first day (Pearson correlation coefficient from SPSS of -0.555; n=5, p=0.332, a pronounced negative linear relationship) and -0.15 mg/m²/hr on the second day (-0.915; n=5, p=0.030, a strong negative linear relationship). After low tide, the emitted flux on the two days ranged from a minimum of +0.00 mg/m²/hr (+0.713; n=5, p=0.176, a linear relationship that can almost be ignored) to a maximum of +0.03 mg/m²/hr (+0.194; n=5, p=0.754, a weak positive linear relationship). However, the absolute values of the CH₄ fluxes differed between measurement times. These results suggest that CH₄ fluxes, even for the same time of day and the same area, are influenced by the tidal cycle, the characteristics of the surface sediments, and surrounding parameters such as the physicochemical conditions of the sediments, all of which need to be considered when interpreting their correlation with gas emissions.
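
To illustrate the flux calculation described above (a simple linear regression of chamber gas concentration against time), here is a minimal sketch; the concentration series and chamber dimensions are hypothetical, and the volume-to-area scaling is an assumption for illustration rather than the authors' exact procedure.

```python
# Minimal sketch of a closed-chamber flux estimate: the flux is the slope of a
# linear regression of gas concentration against time, scaled by the chamber
# volume-to-area ratio. All numbers here are hypothetical, not study data.
import numpy as np
from scipy import stats

time_hr = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # elapsed time (hours)
ch4_mg_m3 = np.array([1.30, 1.29, 1.27, 1.26, 1.24])  # CH4 concentration in chamber (mg/m^3)

chamber_volume_m3 = 0.03  # hypothetical chamber volume
chamber_area_m2 = 0.12    # hypothetical footprint area

result = stats.linregress(time_hr, ch4_mg_m3)
flux = result.slope * chamber_volume_m3 / chamber_area_m2  # mg/m^2/hr; (+) source, (-) sink

print(f"CH4 flux = {flux:+.3f} mg/m^2/hr, Pearson r = {result.rvalue:.3f}, p = {result.pvalue:.3f}")
```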

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by the expectations of traders, studies have attempted to predict stock price movements by analyzing various sources of text data. Research has been conducted not only on the relationship between text data and stock price fluctuations, but also on trading stocks based on news articles and social media responses. Studies that predict stock price movements have also applied classification algorithms, constructing a term-document matrix in the same way as other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix. Based on word frequency, words that appear too rarely or carry too little importance are removed; words can also be selected by measuring how much each contributes to classifying a document correctly. The basic idea of constructing a term-document matrix has been to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we instead analyze the documents for each individual stock and select the words that are irrelevant for all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The underlying idea is that stock movements are less related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and we excluded, from the selected words, those that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news as training data and applied the remaining one month of articles to the model to predict the stock price movement of the next day. We used SVM, Boosting, and Random Forest to build the models and predict the movements of stock prices. The stock market was open on 80 days in total during the four months (2016/02/01 to 2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The word-based algorithm proposed in this study showed better classification performance than the word selection method based on sparsity. This study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract: it removes not only the words that appear in both the rising and falling cases but also the words that appear in common in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. The limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment was conducted only for the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to show investment performance, because stock price fluctuation and rate of return can differ. Therefore, further research is needed using more stocks and predicting returns through trading simulations.
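
To make the neutral-word idea above concrete, the following is a minimal sketch, under assumed details (a hypothetical neutral-word list, a two-word window, toy documents), of keeping only the words around neutral terms and counting them as term-document matrix rows; it is an illustration, not the authors' implementation.

```python
# Minimal sketch: build term-document rows from the words that appear within a
# small window around pre-chosen category-neutral words. The neutral-word list,
# window size, and toy documents are hypothetical, not from the paper.
from collections import Counter

neutral_words = {"today", "announced"}  # hypothetical category-neutral terms
window = 2                              # words kept on each side of a neutral word

def context_terms(doc, neutral=neutral_words, win=window):
    tokens = doc.lower().split()
    kept = []
    for i, tok in enumerate(tokens):
        if tok in neutral:
            kept.extend(tokens[max(0, i - win):i] + tokens[i + 1:i + 1 + win])
    return Counter(kept)

docs = [
    "Company A today announced record quarterly profit",
    "Company A today announced a product recall",
]
term_doc_matrix = [context_terms(d) for d in docs]  # one Counter (row) per document
for row in term_doc_matrix:
    print(dict(row))
```

Rows like these could then be vectorized and passed to a classifier such as SVM or Random Forest, in line with the modeling step described in the abstract.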

MR T2 Map Technique: How to Assess Changes in Cartilage of Patients with Osteoarthritis of the Knee (MR T2 Map 기법을 이용한 슬관절염 환자의 연골 변화 평가)

  • Cho, Jae-Hwan; Park, Cheol-Soo; Lee, Sun-Yeob; Kim, Bo-Hui
    • Progress in Medical Physics / v.20 no.4 / pp.298-307 / 2009
  • Using the MR T2 map technique, this study aims, first, to measure the difference in cartilage T2 values between healthy people and patients with osteoarthritis and, second, to assess the form and damage of the knee-joint cartilage, thereby considering the utility of the T2 map technique. Thirty healthy people were selected based on their clinical history and current status, and thirty patients with osteoarthritis of the knee, screened by simple X-ray from November 2007 to December 2008, were selected. T2 spin echo (SE) images of the knee-joint cartilage were acquired using a T2 SE sequence, one of the multi-echo methods (TR: 1,000 ms; TE values: 6.5, 13, 19.5, 26, 32.5, 40, 45.5, and 52 ms). From these images, the change in signal intensity (SI) for each section of the knee-joint cartilage was measured, and average T2 values were derived using Origin 7.0 Professional (Northampton, MA 01060, USA). With these T2 values, an independent-samples t-test was performed in SPSS for Windows version 12.0 for quantitative analysis and to test the statistical significance of the differences between the healthy group and the patient group. Examining the T2 values of the anterior and lateral articular cartilage in the sagittal and coronal planes: in the sagittal plane, the average T2 of the femoral cartilage in the patient group with osteoarthritis of the knee (42.22±2.91) was higher than in the healthy group (36.26±5.01), and the average T2 of the tibial cartilage in the patient group (43.83±1.43) was higher than in the healthy group (36.45±3.15). In the coronal plane, the average T2 of the medial femoral cartilage in the patient group (45.65±7.10) was higher than in the healthy group (36.49±8.41), as was the average T2 of the anterior tibial cartilage (44.46±3.44 for the patient group vs. 37.61±1.97 for the healthy group). For the lateral femoral cartilage in the coronal plane, the patient group showed a higher T2 (43.41±4.99) than the healthy group (37.64±4.02), and the tendency was similar in the lateral tibial cartilage (43.78±8.08 for the patient group vs. 36.62±7.81 for the healthy group). Along with the morphological MR imaging techniques previously used, the T2 map technique appears helpful for the early diagnosis of patients with cartilage problems, in particular osteoarthritis of the knee, by quantitatively analyzing the structural and functional changes of the cartilage.
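
To illustrate how a T2 value can be obtained from the multi-echo acquisition described above, the following is a minimal sketch of a mono-exponential (log-linear) fit using the TE values quoted in the abstract; the signal intensities are hypothetical, and the study itself derived its average T2 values with Origin.

```python
# Minimal sketch: estimate T2 from multi-echo signal intensities assuming
# S(TE) = S0 * exp(-TE / T2). The SI values below are hypothetical, not study data.
import numpy as np

te_ms = np.array([6.5, 13, 19.5, 26, 32.5, 40, 45.5, 52])    # echo times (ms)
signal = np.array([920, 780, 640, 545, 470, 390, 350, 300])  # hypothetical SI values

# Log-linearize: ln S = ln S0 - TE / T2, then fit a straight line.
slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
t2_ms = -1.0 / slope

print(f"estimated T2 ≈ {t2_ms:.1f} ms")
```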


¹H MR Spectroscopy of the Normal Human Brains: Comparison between Signa and Echospeed 1.5 T System (정상 뇌의 수소 자기공명분광 소견: 1.5 T Signa와 Echospeed 자기공명영상기기에서의 비교)

  • Kang Young Hye; Lee Yoon Mi; Park Sun Won; Suh Chang Hae; Lim Myung Kwan
    • Investigative Magnetic Resonance Imaging / v.8 no.2 / pp.79-85 / 2004
  • Purpose: To evaluate the usefulness and reproducibility of ¹H MRS on different 1.5 T MR machines with different coils by comparing the SNR, scan time, and spectral patterns in different brain regions of normal volunteers. Materials and Methods: Localized ¹H MR spectroscopy (¹H MRS) was performed in a total of 10 normal volunteers (age 20-45 years) with spectral parameters adjusted by the auto-prescan routine (PROBE package). In each volunteer, MRS was performed three times: on a conventional system (Signa Horizon) with a 1-channel coil and on an upgraded system (Echospeed plus with EXCITE) with both 1-channel and 8-channel coils. Using these three machine-coil combinations, the SNRs of the spectra in both a phantom and the volunteers and the (pre)scan times were compared. Two regions of the brain (basal ganglia and deep white matter) were examined, and relative metabolite ratios (NAA/Cr, Cho/Cr, and mI/Cr) were measured in all volunteers. For all spectra, a STEAM localization sequence with three-pulse CHESS H₂O suppression was used, with the following acquisition parameters: TR = 3.0/2.0 s, TE = 30 ms, TM = 13.7 ms, SW = 2500 Hz, SI = 2048 points, AVG = 64/128, and NEX = 2/8 (Signa/Echospeed). Results: The SNR was more than about 30% higher on the Echospeed machine, and the prescan and scan times were almost the same across machines and coils. Reliable spectra were obtained on both MRS systems, and there were no significant differences in spectral patterns or relative metabolite ratios in the two brain regions (p>0.05). Conclusion: Both the conventional and the new MRI system are highly reliable and reproducible for ¹H MR spectroscopic examination of the human brain, and there are no significant differences between the two MRI systems for ¹H MRS applications.


A Study on Growth Type of Comic strips Heroes through Journey of Life (삶의 여정을 통한 만화 히어로 성장유형 연구)

  • Kim, MiRim
    • Cartoon and Animation Studies / s.29 / pp.173-207 / 2012
  • The four-phase plot of introduction, development, turn, and conclusion in the long-story structure tends to be patterned and schematized. The behavior of characters is in line with the origins of human beings, and the plot of comic strips basically has four phases; this is, however, not a simple arrangement but a complex one, developed by organizing patterns of human power, behavior, and emotions. Drawing on the results of a survey of college students studying comic strips, this study categorizes, in an integrated way and through supporting evidence extracted from comic strips, the four characters of Carol Pearson's archetypal system, the four phases of the hero's journey by Joseph Campbell, and the four phases of the plot based on Aristotle's theory, which frames the structure of comic strips. The categorization is performed by simplifying and systemizing a character's life cycle, a factor of the story structure in complex comic strips. This study aims to identify what comic strip writers express by using metaphor in the complicated long-story structure of comic strips. It reveals that the structure of introduction, development, turn, and conclusion based on Aristotle's plot theory is a metaphor of human life and fate, and that the phases of development in the archetypal system of Carol Pearson, a researcher influenced by Jung's theory, are likewise a metaphor of human life and fate. The theories of Joseph Campbell, who was also influenced by Jung, are a metaphor of human life and fate as well, in that they project the complex emotions of joy, anger, sorrow, and pleasure onto the archetype of heroes and use the metaphor of the hero's journey. Lastly, the theories are introduced through Christopher Vogler's 'guide to screenwriters' approach. This metaphor is the objective and goal of this study. The comic strips selected for this study have long, complex stories in which characters leave their homes, go through adventures and difficulties, meet the world in another way, experience tension, competition, war, and hardship, and return home with compensation. They grow mentally and psychologically through their journeys and finally become heroes, expressing the meaning of our introspection in a narrative through the plots and images of comic strips. This appears complex, but the basic structure of long comic strips has four phases of plot. The life of an extraordinary character who travels on adventures and grows over a long comic strip can be divided into four phases symbolizing childhood, adolescence, adulthood, and senescence, and it is a psychological growth process. The archetypes of the character can be divided into four phases, through which the growth process can be explained, and the hero's journey symbolized by the character can also be divided into four phases. Through these theories, the complex arrangement of the four-phase plot in comic strips corresponds to the growth process of introduction, development, turn, and conclusion through the stages of life. At the same time, this study found that characters becoming heroes are a metaphor of introspection and that the characters' growth and lives correspond to the four phases of life across long comic strips. The long stories written by comic strip writers show, through the logical construction of the plot, characters going on journeys and changing their lives through hardship and difficulty; their growth processes are presented in archetypal images, and they reach introspection as heroes. Readers share time and space through the images of the comic strips and, moved by the stories, realize that they have had emotionally the same experiences as the characters.

The Effect of Retailer-Self Image Congruence on Retailer Equity and Repatronage Intention (자아이미지 일치성이 소매점자산과 고객의 재이용의도에 미치는 영향)

  • Han, Sang-Lin; Hong, Sung-Tai; Lee, Seong-Ho
    • Journal of Distribution Research / v.17 no.2 / pp.29-62 / 2012
  • As the distribution environment changes rapidly and competition in distribution channels intensifies, the importance of retailer image and retailer equity is increasing as a distinct competitive advantage. Consumers are not purely functionally oriented; their behavior is significantly affected by symbols, such as retailer image, that identify the retailer in the marketplace. That is, consumers do not choose products or retailers only for their material utility but consume the symbolic meaning of those products or retailers as expressed in their self-image. The concept of self-image congruence has been used by marketers and researchers to better understand how consumers identify themselves with the brands they buy and the retailers they patronize. Although self-image congruity theory has been tested across many product categories, it has not been tested extensively in retailing. Therefore, this study investigates the impact of congruence between retailer image and the consumer's self-image on retailer equity, namely retailer awareness, retailer association, perceived retailer quality, and retailer loyalty. The purpose of this study is to find out whether retailer-self image congruence can be a new antecedent of retailer equity. In addition, this study examines how the four retailer-equity constructs (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) affect customers' repatronage intention. Data were gathered by survey and analyzed by structural equation modeling; the sample size was 254. The reliability of all seven dimensions was estimated with Cronbach's alpha, composite reliability values, and average variance extracted (AVE) values. The convergent and discriminant validity of the measurement model were examined by exploratory and confirmatory factor analysis. For each pair of constructs, the square root of the AVE exceeded their correlation, supporting the discriminant validity of the constructs. Hypotheses were tested using AMOS 18.0. As expected, the image-congruence hypotheses were supported: the greater the degree of congruence between retailer image and self-image, the more favorable were consumers' retailer evaluations. Both types of retailer-self image congruence (actual self-image congruence and ideal self-image congruence) affected customer-based retailer equity. This result means that retailer-self image congruence is an important cue for customers when they evaluate retailer equity; in other words, consumers are more likely to prefer products and retail stores whose images are similar to their own self-image. In particular, the effect of ideal self-image congruence on retailer equity was consistently larger than that of actual self-image congruence, which suggests that consumers prefer or search for stores whose images are compatible with their perception of their ideal self. In addition, this study revealed that customers' evaluations of customer-based retailer equity affected repatronage intention: all four dimensions (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) had a positive effect on repatronage intention. That is, management and investment that improve the image congruence between the retailer and consumers' selves lead to positive customer evaluations of retailer equity, and this positive customer-based retailer equity in turn enhances repatronage intention. In conclusion, retailer image management is an important part of successful retailer performance management, and retailer-self image congruence is an important antecedent of retailer equity. It is therefore important to develop and maintain a retailer image similar to consumers' self-image. Given the pressure to provide increased image congruence, it is not surprising that retailers have made significant investments in enhancing the fit between retailer image and consumers' self-image. Enhancing such self-image congruence may also allow marketers to target customers who are influenced by image appeals in advertising.
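
As a point of reference for the scale-reliability step mentioned above, here is a minimal sketch of Cronbach's alpha for a single multi-item construct; the three-item "retailer loyalty" scale and the responses are hypothetical, not the study's survey data.

```python
# Minimal sketch: Cronbach's alpha for one multi-item scale,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses for a 3-item "retailer loyalty" scale
responses = [[6, 5, 6], [4, 4, 5], [7, 6, 6], [3, 4, 3], [5, 5, 6], [6, 6, 7]]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```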


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and many analysts are interested in text because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business, for example in news summary services and privacy policy summary services. In academia, much research has been done on both the extraction approach, which selectively provides the main elements of the document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic text summarization itself. Most existing studies on the quality evaluation of summarization manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text using various techniques, and the quality of the automatic summary is measured by comparing it with the reference document, which is regarded as an ideal summary. Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes a great deal of time and cost, and the evaluation result may differ depending on the subjectivity of the summarizer. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more the frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, it is unreasonable to say that a "good summary" based only on frequency is always a "good summary" in this essential sense. To overcome the limitations of these previous studies of summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary. In this paper, we propose a method for the automatic quality evaluation of text summarization based on the concepts of succinctness and completeness. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews were summarized for each hotel, and the quality of the summaries was evaluated according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the sentence-similarity threshold.
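
To make the completeness/succinctness evaluation above concrete, the sketch below uses a simple Jaccard token overlap as the sentence similarity; the similarity measure, the threshold, the scoring details, and the toy sentences are assumptions chosen for illustration, not the paper's exact definitions.

```python
# Minimal sketch: completeness = share of source sentences covered by a similar
# summary sentence; succinctness = share of summary sentence pairs that are not
# near-duplicates; F = harmonic mean of the two. Jaccard overlap stands in for
# the paper's sentence-similarity measure.
from itertools import combinations

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def completeness(source_sents, summary_sents, thr=0.25):
    covered = sum(any(jaccard(src, s) >= thr for s in summary_sents) for src in source_sents)
    return covered / len(source_sents)

def succinctness(summary_sents, thr=0.25):
    pairs = list(combinations(summary_sents, 2))
    if not pairs:
        return 1.0
    duplicated = sum(jaccard(a, b) >= thr for a, b in pairs)
    return 1 - duplicated / len(pairs)

def f_score(c, s):
    return 2 * c * s / (c + s) if c + s else 0.0

source = ["the room was clean", "staff were friendly", "breakfast was poor"]
summary = ["clean room and friendly staff", "breakfast was poor"]
c, s = completeness(source, summary), succinctness(summary)
print(f"completeness={c:.2f}, succinctness={s:.2f}, F={f_score(c, s):.2f}")
```

Varying the threshold trades completeness against succinctness, which is the trade-off the proposed F-score is meant to balance.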