
The Effects of Storage of Human Saliva on DNA Isolation and Stability (인체타액의 보관이 DNA 분리와 안정도에 미치는 영향)

  • Kim, Yong-Woo; Kim, Young-Ku
    • Journal of Oral Medicine and Pain / v.31 no.1 / pp.1-16 / 2006
  • The most important progress in the diagnostic sciences has been the increased sensitivity and specificity of diagnostic procedures, owing to the development of micromethodologies and the growing availability of immunological and molecular biological reagents. These technological advances have led to consideration of saliva as a diagnostic specimen for an array of analytes and as a DNA source. The purpose of the present study was to compare DNA from saliva with that from blood and buccal swabs in order to evaluate the diagnostic and forensic applicability of saliva, to investigate changes in salivary genomic DNA according to the storage temperature and period of the samples, and to evaluate the integrity of DNA from saliva stored under various conditions by PCR analysis. Peripheral venous blood, unstimulated whole saliva, stimulated whole saliva, and buccal swabs were obtained from 10 healthy subjects (mean age: 29.9 ± 9.8 years), and genomic DNA was extracted using a commercial kit. To study the effects of storage conditions on salivary genomic DNA, stimulated whole saliva was obtained from 20 healthy subjects (mean age: 32.3 ± 6.6 years). Aliquots of fresh saliva were stored at room temperature, 4°C, -20°C, and -70°C; lyophilized and dried-out saliva samples were stored at room temperature. After 1, 3, and 5 months, the same analyses were performed to investigate changes in the genomic DNA of the saliva samples; for the aliquots stored at room temperature and the dried-out samples, results at 2 weeks were also included. The integrity of DNA from saliva stored under the various conditions was evaluated by PCR amplification of a 989-bp β-globin gene fragment. The results were as follows: 1. The concentration of genomic DNA extracted from saliva was lower than that from blood (p<0.05), but there were no significant differences among the various types of saliva samples. The purity of genomic DNA extracted from stimulated whole saliva and lyophilized saliva was significantly higher than that from blood (p<0.05), while the purity of genomic DNA extracted from buccal swabs was lower than that from the various saliva samples (p<0.05). 2. The concentration of genomic DNA from saliva stored at room temperature decreased gradually after 1 month and significantly at 3 and 5 months (p<0.05 and p<0.01, respectively). The purity of DNA from saliva stored for 3 and 5 months differed significantly from that of fresh saliva and of saliva stored for 1 month (p<0.05). 3. For saliva stored at 4°C and -20°C, there was no significant change in genomic DNA concentration within 3 months, but the concentration decreased significantly at 5 months (p<0.05). 4. There were no significant differences in the concentration of genomic DNA from saliva stored at -70°C or from lyophilized saliva according to storage period, although the concentration tended to decrease at 5 months. 5. The concentration of genomic DNA extracted immediately from saliva dried on a Petri dish was 60% of that of fresh saliva, and the DNA concentration of dried-out saliva stored at room temperature decreased rapidly within 2 weeks (p<0.05). 6. PCR amplification of the β-globin gene was successful in all lyophilized saliva samples stored for 5 months. At 1 month, the β-globin gene was successfully amplified in all saliva samples stored at -20°C and -70°C and in some samples stored at 4°C, but it failed to amplify in saliva stored at room temperature and in dried-out saliva.

Antioxidant and Antibacterial Activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) Extracts Obtained by various Extract Conditions (한국 제천 감초(Glycyrrhiza uralensis Fisher)의 추출 조건별 추출물의 항산화 및 항균 활성 평가)

  • Ha, Ji Hoon; Jeong, Yoon Ju; Seong, Joon Seob; Kim, Kyoung Mi; Kim, A Young; Fu, Min Min; Suh, Ji Young; Lee, Nan Hee; Park, Jino; Park, Soo Nam
    • Journal of the Society of Cosmetic Scientists of Korea / v.41 no.4 / pp.361-373 / 2015
  • This study was carried out to evaluate the antioxidant and antibacterial activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) extracts obtained under various extraction conditions (85% ethanol, heating temperatures and times), and to establish the optimal extraction conditions of G. uralensis for application as a cosmetic ingredient. The extracts obtained under the different conditions were either concentrated and powdered (sample-1) or kept as crude extract solutions without concentration (sample-2). Antioxidant effects were determined by free radical scavenging activity (FSC50), ROS scavenging activity (OSC50), and cellular protective effects. Antibacterial activity was determined by the minimum inhibitory concentration (MIC) against human skin flora. The DPPH free radical scavenging activity of sample-1 (100 μg/mL) was 10% higher in the group extracted for 6 h than in the group extracted for 12 h, whereas sample-2 showed no significant differences. At the same temperature, the extraction yield at 12 h was 2.6 times that at 6 h, but the total flavonoid content was only 1.1 times higher, indicating that total flavonoid content hardly increased with increasing extraction time. Free radical scavenging activity, ROS scavenging activity, and cellular protective effects depended not on the extraction yield but on the total flavonoid content of the extract. The antibacterial activity of sample-1 against three skin flora (S. aureus, B. subtilis, P. acnes) under the different extraction conditions was evaluated at the same concentration, and the groups extracted at 25 and 40°C showed activity 16 times higher than that of methyl paraben (2,500 μg/mL). In conclusion, the 85% ethanol extract of G. uralensis extracted at 40°C for 6 h showed the highest antioxidant and antibacterial activity. These results indicate that, for product manufacturing, the extraction conditions should be optimized through a comprehensive evaluation of the extraction yield under various conditions, the yield of active components, concentration-dependent activity tests, and the activity of the 100% extract.

Korea's Street Processions and Traditional Performing Arts (한국의 가두행렬(街頭行列)과 전통연희)

  • Jeon, KyungWook
    • (The) Research of the performance art and culture / no.18 / pp.513-557 / 2009
  • The procession depicted in Goguryeo's ancient tomb murals consists of guards, honor guards, a music band, and performing artists. Since this coincides with the royal processions of the Goryeo and Joseon Dynasties, its possible influence on them can be examined. The performing arts appearing in such street processions were mostly sanakbaekhui. During the Goryeo Dynasty, the king visited Bongeunsa Temple when the lotus lantern festival was celebrated. At such times, lantern mountains and lantern trees were installed on the left and right sides of the road travelled by the king. The procession was quite large in scale and was accompanied by colorful music and performances. In the narye ceremony of the Goryeo Dynasty, as in China, street processions and performing arts took place. The jisinbarbgi performed by a peasant band in early January is a custom of narye. A new character appears in the royal narye during the first half of the Joseon period, so the ways in which the narye transformed with the changing times can be examined. In the Joseon Dynasty's procession of a king returning to the palace, the royal bands in front of and behind the king's carriage played marching music, and the street procession, led by a sanbung, headed toward the palace. Various performances also took place during this time. The samilyuga and munhuiyeon were festivals of the yangban class (nobility). Those who passed the state examination hired musicians and performers and paraded around Seoul for three days to celebrate the auspicious outcome for their family and to show off their family's power. In Joseon's dongje and eupchijeui ceremonies, street processions were carried out with a shrine deity image or symbolic flag at the head. The dongje of a Korean village, combined with jisinbarbgi, incorporated a procession headed by flags symbolizing the guardian deity of the village, and this went from house to house. The procession of suyeongyaru had the publicity impact of a mask play performance and, by creating a sense of unity among the participants, heightened the celebratory atmosphere. At the core of the bukcheonggun toseongri gwanweonnori was a street procession imitating the travels of high government officials. The toseong gwanweonnori has the folk-religious function of praying for the safety of the villagers and an abundance of grain, the entertainment function of providing fun and joy through street processions and various performances, and the social function of creating unity and harmony among the residents. In all the aforementioned events, the street procession played a large role in creating a celebratory atmosphere, and the performance of traditional performing arts in the middle of the procession or after it enabled the participants to feel united. The participants in the street processions felt cultural pride and self-confidence through the various events, and they had the opportunity to show off and proudly display their abilities.

Excavation of Kim Jeong-gi and Korean Archeology (창산 김정기의 유적조사와 한국고고학)

  • Lee, Ju-heun
    • Korean Journal of Heritage: History & Science / v.50 no.4 / pp.4-19 / 2017
  • Kim Jeong-gi (pen-name: Changsan, Mar. 31, 1930 - Aug. 26, 2015) made a major breakthrough in the history of cultural property excavation in Korea. In 1959, he began to develop an interest in cultural heritage after starting work as an employee of the National Museum of Korea. For about thirty years, until he retired from the National Research Institute of Cultural Heritage in 1987, he devoted his life to the excavation of Korea's historical relics and artifacts and compiled countless data about them. He continued striving to identify the unique value and meaning of Korea's cultural heritage in universities and excavation organizations until he passed away in 2015. Changsan spearheaded all of Korea's monumental archeological excavations and research. He is widely known at home and abroad as a scholar of Korean archeology, particularly in the early years of its existence as an academic discipline. As such, he had a considerable influence on the development of Korean archeology. Although his multiple activities and roles are meaningful in terms of the country's archaeological history, there are nevertheless limits to his contributions. The Deoksugung Palace period (1955-1972), when the National Museum of Korea was situated in Deoksugung Palace, is considered a time of great significance for Korean archeology, as relics with diverse characteristics were researched during this period. Changsan actively participated in archeological surveys of prehistoric shell mounds and dwellings, conducted surveys of historical relics, measured many historical sites, and took charge of photographing and drawing such relics. He put to good use all the excavation techniques that he had learned in Japan, while his countrywide archaeological surveys are highly regarded in terms of academic history as well. What particularly sets his perspectives apart in archaeological terms is the fact that he raised the possibility of underwater tombs in ancient times and coined the term "Haemi Culture" as part of a theory of local culture aimed at furthering understanding of Bronze Age cultures in Korea. His contributions were simply remarkable. In 1969, the National Research Institute of Cultural Heritage (NRICH) was founded and Changsan was appointed as its head. Despite the many difficulties he faced in running the institute with limited financial and human resources, he gave everything he had to research and field studies of the brilliant cultural heritage that Korea has preserved for so long. Changsan succeeded in restoring Bulguksa Temple, and followed this up with the successful excavation of the Cheonmachong Tomb and the Hwangnamdaechong Tomb in Gyeongju. He then explored the Hwangnyongsa Temple site, Bunhwangsa Temple, and the Mireuksa Temple site in order to systematically evaluate the Buddhist culture and structures of the Three Kingdoms Period. We can safely say that the large excavation projects that he organized and carried out at that time not only laid the foundations for Korean archeology but also made significant contributions to studies in related fields. Above all, in terms of the developmental process of Korean archeology, the achievements he generated with his exceptional passion during this period are almost too numerous to mention, but they include his systematization of various excavation methods, cultivation of archaeologists, popularization of archeological excavations, formalization of survey records, and promotion of data disclosure. On the other hand, although this "Excavation King" devoted himself to excavations, kept precise records, and paid keen attention to every detail, he failed to overcome the limitations of his era in the process of defining the nature of cultural remains and interpreting historical sites and structures. Despite his many roles in Korean archeology, the fact that he left behind a controversy over the identity of the occupant of the Hwangnamdaechong Tomb remains a sore spot in his otherwise distinguished reputation.

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated service that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. It is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns estimated from past price data, and corner solutions in which the portfolio is allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can then produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with the implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the return, standard deviation, Stochastic %K, and price parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input to the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and the probability results are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent views, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model combined with the implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the Black-Litterman portfolio over the three-year period was 6.4%, the highest value, and its maximum drawdown was -20.8%, the lowest value. Its Sharpe ratio, which measures the return-to-risk ratio, was also the highest at 0.17. Overall, the suggested view model shows the possibility of replacing subjective analysts' views with an objective view model when practitioners apply Robo-Advisor asset allocation algorithms in real trading.
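
    As a point of reference for the view-blending step described in this abstract, the sketch below combines implied equilibrium returns with one external view using the standard Black-Litterman posterior formula in NumPy. It is a minimal illustration only: the covariance matrix, market weights, the single view, and the values of tau and delta are assumptions for demonstration, not the paper's KOSPI 200 sector data or its SVM-generated views.

    ```python
    import numpy as np

    # Toy inputs for illustration (not the paper's KOSPI 200 sector data).
    Sigma = np.array([[0.040, 0.012],
                      [0.012, 0.030]])        # asset covariance matrix
    w_mkt = np.array([0.6, 0.4])              # market-cap weights
    delta, tau = 2.5, 0.05                    # risk aversion and scaling (assumed values)

    # Reverse optimization: implied equilibrium returns Pi = delta * Sigma * w_mkt
    Pi = delta * Sigma @ w_mkt

    # One external view, e.g. produced by a forecasting stage:
    # "asset 1 outperforms asset 2 by 2%", with uncertainty encoded in Omega (illustrative).
    P = np.array([[1.0, -1.0]])               # pick matrix
    Q = np.array([0.02])                      # view return
    Omega = np.array([[0.001]])               # view uncertainty

    # Black-Litterman posterior expected returns.
    A = np.linalg.inv(tau * Sigma)
    M = np.linalg.inv(A + P.T @ np.linalg.inv(Omega) @ P)
    mu_bl = M @ (A @ Pi + P.T @ np.linalg.inv(Omega) @ Q)

    # Unconstrained mean-variance weights implied by the blended returns.
    w_bl = np.linalg.inv(delta * Sigma) @ mu_bl
    print(Pi, mu_bl, w_bl / w_bl.sum())
    ```

    With no views (P, Q empty), the posterior collapses back to Pi and the weights back to the market portfolio, which is the neutrality property the abstract mentions.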

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.171-196 / 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. In order to meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web becomes more and more important, both academics and businesses are studying consumer emotions and satisfaction on the web. However, some consumer characteristics are not well considered. Demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements. Because the human brain is influenced by all five senses, both design factors and emotional responses are important in the web environment. Therefore, in this study, we examine the relationship between web style and design on the one hand and consumer emotion and satisfaction on the other. Previous studies have considered the effects of web layout, structure, and color on emotions. In contrast to these earlier studies, we excluded such web components and analyzed the relationship between consumer satisfaction and the emotional indexes of web style only. To perform this analysis, we collected consumer surveys presenting 40 web-style themes to 204 consumers, each of whom evaluated four themes. The emotional adjectives evaluated by consumers comprised 18 contrast pairs, and higher-order emotional indexes were extracted through factor analysis. The emotional indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' Hypotheses were established based on the assumption that the emotional indexes have different effects on consumer satisfaction. After the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; hypothesis 4 was rejected because its effect on consumer satisfaction was negative rather than positive. This means that emotional indexes such as 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction; in other words, consumers prefer emotions that are soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction, meaning that consumers prefer designs that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across various domains. It is important to consider organizational behavior and decision making according to regulatory focus tendency, and it affects not only political, cultural, and ethical judgments and behavior but also broad psychological problems. Emotional responses also differ by regulatory focus: promotion focus responds more strongly to positive emotions, whereas prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion, and this emotional response depends on whether the consumer expects benefit or harm. Therefore, it is necessary to confirm how consumers' emotional responses to web style differ according to regulatory focus, one of their psychological characteristics. In the moderated multiple regression (MMR) analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although hypothesis 5.4 was supported in the direction opposite to the one hypothesized. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using these results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory focus groups (promotion, grey, and prevention) and suggest a web-style recommendation method for each group. If this study is developed further, we expect that existing regulatory focus theory can be extended not only to the motivational aspect but also to emotional and behavioral responses according to regulatory focus tendency. Moreover, we believe it is possible to recommend web styles according to regulatory focus and the emotions that consumers most prefer.
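
    For readers unfamiliar with the MMR step mentioned in this abstract, the snippet below fits a moderated multiple regression with statsmodels, where the interaction between an emotional index and a regulatory-focus indicator tests the moderation effect. The column names, toy values, and the binary promotion/prevention coding are assumptions for illustration; they are not the paper's survey data or exact model specification.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey layout: each row is one consumer's evaluation of one web-style theme.
    df = pd.DataFrame({
        "satisfaction": [5.2, 3.1, 4.4, 2.8, 4.9, 3.6],
        "softness":     [4.1, 2.0, 3.5, 1.9, 4.4, 2.7],   # emotional index score (assumed)
        "promotion":    [1, 0, 1, 0, 1, 0],               # 1 = promotion focus, 0 = prevention
    })

    # Moderated multiple regression (MMR): the interaction term tests whether regulatory
    # focus moderates the effect of the emotional index on satisfaction.
    model = smf.ols("satisfaction ~ softness * promotion", data=df).fit()
    print(model.summary())   # inspect the softness:promotion coefficient and its p-value
    ```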

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises,' became bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on only a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in corporate default prediction vary over time; comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined shifts in the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm that reflects dynamic change. Against the background of the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to that of the training data and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000-2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model trained over the nine years is evaluated and compared using the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three variable bundles is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and ultimately toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, deep learning algorithms are much faster than regression analysis for corporate default prediction modeling and are more effective in terms of predictive power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
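
    As an illustration of the kind of deep learning time series classifier described above, the sketch below defines a minimal LSTM model over yearly sequences of financial ratios using Keras. The array shapes, random placeholder data, and hyperparameters are assumptions for demonstration only; they do not reproduce the paper's variable bundles, sample, or tuned parameters.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Illustrative shapes only: 1,000 firms, 7 yearly observations, 12 financial ratios.
    n_firms, n_years, n_ratios = 1000, 7, 12
    X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")   # placeholder ratios
    y = np.random.randint(0, 2, size=(n_firms, 1))                     # 1 = default, 0 = non-default

    # A minimal LSTM classifier over yearly financial-ratio sequences.
    model = keras.Sequential([
        layers.Input(shape=(n_years, n_ratios)),
        layers.LSTM(32),                                # sequence representation over time
        layers.Dense(1, activation="sigmoid"),          # default probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
    model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64, verbose=0)
    ```

    In a setup like the paper's, the training, validation, and test splits would follow the 2000-2006 / 2007-2008 / 2009 windows rather than a random split.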

Thinking in Terms of East-West Contacts through Spreading Process of Sarmathia-Pattened Scabbard on Tillya-Tepe Site in Afghanistan (아프가니스탄 틸랴 테페의 사르마티아(Sarmathia)식 검집 패용 방식의 전개 과정으로 본 동서교섭)

  • Lee, Song Ran
    • Korean Journal of Heritage: History & Science / v.45 no.4 / pp.54-73 / 2012
  • In this article, we examined, albeit in a modest measure, the patterns of activity of the Sarmathians, with a focus on the regions to which the Sarmathian sheaths spread. One of the main weapons that mounted nomads such as the Scythians, the Sarmathians, and the Alans used in war was the spear. Though complementary, the sword was the most convenient and appropriate weapon when fighting at close quarters after falling from the horse to the ground. Sarmathian swords continued the tradition of the akinakes used by the Scythians and the Persians, but they showed some advances in the ease with which a sword could be drawn from its sheath and in the way the sheath was worn on the body. It turns out that the Sarmathian sheaths, which were designed so that a sword could be drawn easily by attaching the sheath to the thigh through four bumps, spread extensively from Pazyryk in the Altai to South Siberia, Bactria, Parthia, and Rome. The most noteworthy of all the Sarmathian sheaths are the ones excavated from the fourth tomb at Tillya-tepe, Afghanistan, in the region of Bactria. The owner of the fourth tomb of Tillya-tepe, whose region was under the control of the Kushan Dynasty at that time, was buried wearing Sarmathian swords and is regarded as a powerful figure in Bactria, which was also under Kushan governance. The fact that the owner of the tomb wore two swords suggests that there had been active exchange between Bactria and Sarmathia. The reason the Sarmathians could play an important role in the exchange between East and West seems to have had something to do with their role in supplying Chinese goods to the Silk Road. That is why we are interested in how Han Dynasty copper mirrors, decorative beads such as melon-shaped beads, crystal beads, and gold-ring articulated beads, and artifacts of silk-producing South China came to be excavated along the northern steppe route where the Sarmathians were active. Our study has established that the eye beads discovered in a Sarmathian tomb estimated to have been built around the 1st century B.C. were reprocessed in China and then imported back into Sarmathia. We should note the Huns as an intermediary between the Sarmathians and South China, which were far apart from each other; indeed, gold-ring articulated beads, which were spread mainly across South China, have been discovered in Hun remains. Between the 2nd century B.C. and the 2nd century A.D., the main period of the Sarmathians, the traffic route connecting the steppe route and South China is considered to have been the southwest Silk Road, which started from Yunnan, passed through Myanmar, Pakistan, and Afghanistan, and then entered eastern India. This southwest Silk Road is presumed to have been used by nomadic tribes who wanted goods from South China before the oasis route was activated by the Han Dynasty's policy of managing the countries bordering western China.

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil; Sung, David; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights and demonstrate the value of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in other types of media. However, there are some limitations to the improvements in service quality that can be made based on opinions posted on social media platforms. Users on social media platforms express their opinions as text, images, and so on, so the raw data sets from these reviews are unstructured. Moreover, these data sets are too large for humans to extract new information and hidden knowledge from them. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach that directly yields insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining to extract the topics contained in the reviews and decision tree modeling to explain the relationship between topics and ratings. Topic mining refers to a method for finding, in a collection of documents, groups of words that represent a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most widely used. However, LDA alone is not enough to find insights that can improve service quality, because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study aims to present an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain in terms of service quality; for example, one hotel's positive reviews highlight its good location and room condition compared with the other hotels. In contrast, from negative reviews we also find what they should modify in their services; for example, one hotel should improve room conditions related to soundproofing. These results show that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies on improving service quality relied on surveys or interviews of customers; however, these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a type of big data analysis, so it is a more useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach easily obtains service quality information for other hotels or services in the tourism industry because it needs only open online reviews and ratings as input data. Furthermore, the performance of our approach will improve if other structured and unstructured data sources are added.
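
    To make the two-step pipeline described above concrete, here is a minimal sketch with scikit-learn: LDA topic proportions are extracted from a bag-of-words representation of the reviews, and a CART-style decision tree then relates those proportions to ratings. The toy reviews, the binary ratings, and the choice of two topics are illustrative assumptions, not the paper's Hong Kong hotel data or tuned settings.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy reviews and simplified ratings for illustration only.
    reviews = [
        "great location near the station, friendly staff",
        "room was noisy, poor soundproofing and thin walls",
        "clean room and comfortable bed, good breakfast",
        "long wait at check in, staff were unhelpful",
    ]
    ratings = [1, 0, 1, 0]   # 1 = positive, 0 = negative

    # Step 1: topic mining with LDA over a bag-of-words representation.
    vec = CountVectorizer(stop_words="english")
    X_counts = vec.fit_transform(reviews)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    X_topics = lda.fit_transform(X_counts)       # per-review topic proportions

    # Step 2: a CART-style decision tree relating topic proportions to ratings.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_topics, ratings)
    print(export_text(tree, feature_names=[f"topic_{i}" for i in range(2)]))
    ```

    The printed tree shows which topic proportions split positive from negative reviews, which is the kind of topic-to-rating relationship the abstract visualizes.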

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae; Park, Eunbi; Han, Kiwoong; Lee, Junghyun; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotional images, many researchers propose CNN-based architectures tailored to them each year. Studies on the relationship between color and human emotion have also been conducted and have shown that different emotions are induced by different colors. Among studies using deep learning, some have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods improve accuracy by modifying the result value based on statistics of the picture's colors. To perform the test, the two-color combinations most prevalent across all training data are found, the two-color combination most prevalent in each test image is found, and the result values are corrected according to the distribution of these color combinations. This method weights the result value obtained after the model classifies an image's emotion by means of an expression based on logarithmic and exponential functions. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when creating a model that classifies an image's sentiment. Sixteen colors, each carrying its own connotations, were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in an image are identified. The RGB coordinate values of these colors are then compared with the RGB coordinate values of the 16 reference colors; that is, each is converted to the closest reference color. If combinations of three or more colors were selected, too many combinations would occur and the distribution would become scattered, so each combination would have less influence on the result value. Therefore, to solve this problem, two-color combinations were found and used to weight the model. Before training, the most prevalent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During testing, the two-color combination most prevalent in each test image is found; we then check how that color combination was distributed in the training data and correct the result. We devised several equations to weight the result value from the model based on the extracted colors as described above. The data set was randomly split 80:20, and the model was verified using the 20% portion as a test set. The remaining 80% of the data was split into five folds to perform 5-fold cross-validation, so the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training was performed for up to 20 epochs, and if the validation loss did not decrease for five epochs, training was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted using color properties was used together with the CNN than when only the CNN architecture was used.
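
    The color step described above (cluster an image's pixels, snap the cluster centers to 16 reference colors, and keep the two most prevalent) can be sketched as follows with scikit-learn's KMeans. The reference RGB values, the Euclidean nearest-color rule, and the helper function name are assumptions for illustration; they are not the paper's exact color table or weighting equations.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Assumed reference RGB values for the 16 named colors (illustrative, not the paper's table).
    REFERENCE = {
        "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
        "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
        "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
        "magenta": (255, 0, 255), "brown": (165, 42, 42), "gray": (128, 128, 128),
        "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
        "black": (0, 0, 0),
    }

    def dominant_color_pair(image_rgb: np.ndarray, n_clusters: int = 7) -> tuple:
        """Return the two most frequent reference colors in an (H, W, 3) RGB image."""
        pixels = image_rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        counts = np.bincount(km.labels_, minlength=n_clusters)   # pixels per cluster
        names = list(REFERENCE)
        ref = np.array(list(REFERENCE.values()), dtype=float)
        votes = {}
        for center, count in zip(km.cluster_centers_, counts):
            # Snap each cluster center to the nearest reference color (Euclidean distance).
            nearest = names[int(np.argmin(np.linalg.norm(ref - center, axis=1)))]
            votes[nearest] = votes.get(nearest, 0) + int(count)
        return tuple(sorted(votes, key=votes.get, reverse=True)[:2])

    # Example with a random image; in practice the combination counts per class would be
    # stored in a dictionary and used to reweight the CNN's output, as the abstract describes.
    print(dominant_color_pair(np.random.randint(0, 256, size=(64, 64, 3))))
    ```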