• Title/Summary/Keyword: Research Tool

Search Results: 8,861

A Study on Tile from the Early Period of the Three Kingdoms Period Excavated in Bonghwang-dong (김해 봉황동 유적 일대 출토 삼국시대 초기 기와 검토)

  • YUN Sunkyung;KIM Jiyeon
    • Korean Journal of Heritage: History & Science / v.56 no.4 / pp.40-52 / 2023
  • The basic purpose of tiles as a building material is waterproofing and damp-proofing, and their use was restricted to important buildings as a symbol of authority. This is especially true of the Three Kingdoms period, although unearthed examples are rare, and most are found at sites in the Silla and Baekje regions. Tiles showing the oldest manufacturing technique identified in the Gaya region to date were excavated from the Buwon-dong site, and tiles from the early Three Kingdoms period were recently excavated from the Gimhae Bonghwang-dong site, which is presumed to be the location of the royal palace of Geumgwan Gaya. These are important materials showing what tiles looked like in the early days of Gimhae, the ancient capital of Geumgwan Gaya. The tiles excavated from the Bonghwang-dong site are reddish-yellow because a small amount of sand was mixed into the clay and the tiles were fired at a low temperature. The tiles are thin; no traces of fabric were identified, but traces of clay bands were. Tapping-tool marks and traces of an anvil used in pottery production are clearly observed on the inside and outside, indicating that the tiles were made in the same way as earthenware. If this is connected to the genealogy of the potters who made Gaya earthenware, it is presumed that tiles and earthenware were produced together, as at the Songrim-ri site in Bulo-dong, Incheon, and the Songgok-dong and Mulcheon-ri sites in Gyeongju. To date, tiles excavated in the Gimhae area have been identified only at places believed to be the Geumgwan Gaya royal fortress (royal palace) in the Gimhae Basin. Considering the records and the geographical setting, the Bonghwang-dong site is the only candidate for the fortress site, and the excavated tiles clearly support this. Considering that only a small number of tiles were excavated for this period, it is presumed that tiles served more as a prestige good with symbolic meaning than as a roofing material, and that their use was strictly restricted and controlled.

Gap-Filling of Sentinel-2 NDVI Using Sentinel-1 Radar Vegetation Indices and AutoML (Sentinel-1 레이더 식생지수와 AutoML을 이용한 Sentinel-2 NDVI 결측화소 복원)

  • Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Yungyo Im;Youngmin Seo;Myoungsoo Won;Junghwa Chun;Kyungmin Kim;Keunchang Jang;Joongbin Lim;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1341-1352 / 2023
  • The normalized difference vegetation index (NDVI) derived from satellite images is a crucial tool for monitoring forests and agriculture over broad areas because periodic acquisition of the data is ensured. However, optical sensor-based vegetation indices (VI) are unavailable in areas covered by clouds. This paper presents a synthetic aperture radar (SAR) based approach to retrieving the optical NDVI using machine learning; a SAR system can observe the land surface day and night in all weather conditions. Radar vegetation indices (RVI) from the Sentinel-1 vertical-vertical (VV) and vertical-horizontal (VH) polarizations, surface elevation, and air temperature are used as input features for an automated machine learning (AutoML) model that gap-fills the Sentinel-2 NDVI. The mean bias error (MBE) was 7.214E-05, and the correlation coefficient (CC) was 0.878, demonstrating the feasibility of the proposed method. This approach can be applied to constructing gap-free nationwide NDVI from Sentinel-1 and Sentinel-2 images for environmental monitoring and resource management.
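The quantities in the abstract above can be sketched in a few lines. The snippet below is an illustrative stand-in, not the study's pipeline: it uses one common dual-polarization RVI formulation, 4·VH/(VV+VH), fully synthetic data, and an ordinary least-squares fit in place of the AutoML model.

```python
import numpy as np

def radar_vegetation_index(vv, vh):
    """Dual-polarization RVI from Sentinel-1 backscatter (linear power units)."""
    return 4.0 * vh / (vv + vh)

# Synthetic stand-in data (the real study uses Sentinel-1/2 pixel samples)
rng = np.random.default_rng(0)
n = 1000
vv = rng.uniform(0.01, 0.5, n)       # VV backscatter
vh = rng.uniform(0.005, 0.2, n)      # VH backscatter
elev = rng.uniform(0.0, 1500.0, n)   # surface elevation (m)
temp = rng.uniform(-5.0, 30.0, n)    # air temperature (deg C)
rvi = radar_vegetation_index(vv, vh)
ndvi = np.clip(0.5 * rvi + 0.005 * temp + rng.normal(0.0, 0.02, n), -1.0, 1.0)

# Stand-in for the AutoML regressor: ordinary least squares on the same features
X = np.column_stack([rvi, vv, vh, elev, temp, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)

# "Gap-fill" a cloud-masked pixel from its radar-derived features
ndvi_hat = X[:1] @ coef
```

The point of the sketch is only the data flow: radar features that are unaffected by clouds predict the optical index at masked pixels.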

Synthetic Data Generation with Unity 3D and Unreal Engine for Construction Hazard Scenarios: A Comparative Analysis

  • Aqsa Sabir;Rahat Hussain;Akeem Pedro;Mehrtash Soltani;Dongmin Lee;Chansik Park;Jae-Ho Pyeon
    • International conference on construction engineering and project management / 2024.07a / pp.1286-1288 / 2024
  • The construction industry, known for its inherent risks and multiple hazards, necessitates effective solutions for hazard identification and mitigation [1]. To address this need, machine learning models specializing in object detection have become increasingly important, as this technological approach plays a crucial role in augmenting worker safety by proactively recognizing potential dangers on construction sites [2], [3]. However, the challenge in training these models lies in obtaining accurately labeled datasets, as conventional methods require labor-intensive labeling or costly measurements [4]. To circumvent these challenges, synthetic data generation (SDG) has emerged as a key method for creating realistic and diverse training scenarios [5], [6]. The paper reviews the evolution of synthetic data generation tools, highlighting the shift from earlier solutions like Synthpop and Data Synthesizer to advanced game engines [7]. Among the various gaming platforms, Unity 3D and Unreal Engine stand out due to their advanced capabilities in replicating realistic construction hazard environments [8], [9]. Comparing Unity 3D and Unreal Engine is crucial for evaluating their effectiveness in SDG and for helping developers select the appropriate platform for their needs. To this end, this paper conducts a comparative analysis of both engines, assessing their ability to create high-fidelity interactive environments. To thoroughly evaluate the suitability of these engines for generating synthetic data in construction site simulations, the analysis focuses on graphical realism, developer-friendliness, and user interaction capabilities; these aspects are essential for replicating realistic construction sites with both high visual fidelity and ease of use for developers. Firstly, graphical realism is crucial for training ML models to recognize the nuanced nature of construction environments. In this aspect, Unreal Engine stands out with its superior graphics quality, whereas Unity 3D is typically considered to have less graphical prowess [10]. Secondly, developer-friendliness is vital for those generating synthetic data. Research indicates that Unity 3D is praised for its user-friendly interface and its use of C# scripting, which is widely used in educational settings, making it a popular choice for those new to game development or synthetic data generation. Unreal Engine, by contrast, while offering powerful capabilities in terms of realistic graphics, is often viewed as more complex due to its use of C++ scripting and the Blueprint system. Although Blueprint is a visual scripting tool that does not require traditional coding, it can be intricate and may present a steeper learning curve, especially for those without prior experience in game development [11]. Lastly, regarding user interaction capabilities, Unity 3D is known for its intuitive interface and versatility, particularly in VR/AR development for various skill levels, whereas Unreal Engine, with its advanced graphics and Blueprint scripting, is better suited to creating high-end, immersive experiences [12]. Based on current insights, this comparative analysis underscores the user-friendly interface and adaptability of Unity 3D, which features a built-in perception package that facilitates automatic labeling for SDG [13]. This functionality enhances accessibility and simplifies the SDG process. Conversely, Unreal Engine is distinguished by its advanced graphics and realistic rendering capabilities and offers plugins such as EasySynth (which does not provide automatic labeling) and NDDS for SDG [14], [15]. The development complexity of Unreal Engine presents challenges for novice users, whereas the more approachable Unity 3D platform is advantageous for beginners. This research provides an in-depth review of the latest advancements in SDG, shedding light on potential future research and development directions. The study concludes that integrating such game engines into ML model training markedly enhances hazard recognition and decision-making among construction professionals, thereby significantly advancing data acquisition for machine learning in construction safety monitoring.

A Study on ChoSonT'ongPaeJiIn (조선통폐지인(朝鮮通幣之印) 연구)

  • Moon, Sangleun
    • Korean Journal of Heritage: History & Science / v.52 no.2 / pp.220-239 / 2019
  • According to the National Currency (國幣) article in GyeongGukDaeJeon (經國大典), the ChoSonT'ongPaeJiIn (朝鮮通幣之印) was a seal imprinted on both ends of a piece of hemp fabric (布) so that the fabric could circulate as a fabric currency (布幣). The issued fabric currency was used for trade or as a pecuniary means of having one's crime exempted or replacing one's labor duty, and the seal was imprinted on hemp fabric to collect a one-twentieth tax. The ChoSonT'ongPaeJiIn is thus one of the historical currencies and seal materials of the early Chosun dynasty, and because its imprint was a means of collecting taxes, it is also a source for taxation research. Despite its value, however, it has not been actively researched. The investigator therefore conducted comprehensive research on it based on related content in JeonRokTongGo (典錄通考), Dae'JeonHu-Sok'Rok (大典後續錄), JeongHeonSwaeRok (貞軒?錄), and other geography books (地理志), as well as materials mentioned in previous studies. The investigator demonstrated that the ChoSonT'ongPaeJiIn was established based on the concept of circulating Choson fabric notes (朝鮮布貨) sealed on ChongOseungp'o (正五升布), proposed in entreaty documents submitted in 1401, and that fabric currency bearing its imprint was used for trade, as a pecuniary or taxation means of having one's crime exempted or replacing one's labor, and as a tool of revenue from ships. The use of the ChoSonT'ongPaeJiIn continued even after fabric currencies were banned in March 1516 under the 1515 policy on the "use of Joehwa (paper notes)". It was still used as an official seal on local official documents in 1598. During the reign of King Yeongjo (英祖), it was used in making military service (軍布) hemp fabric, and some records from 1779 indicate that it was used as a means of taxation for international trade. It is estimated from records in JeongHeonSwaeRok (貞軒?錄) that approximately 330 ChoSonT'ongPaeJiIn were in circulation. Although an imprint of the ChoSonT'ongPaeJiIn appeared in An Inquiry on Choson Currency (朝鮮貨幣考), published in 1940, no fabric currency bearing its imprint and no genuine example of the seal had been found until one was recently discovered among the artifacts of Wongaksa Temple. The seal imprint was also found on historical manuscripts produced at Jikjisa Temple in 1775. The investigator compared the seal imprints found on the Jikjisa Temple manuscripts, attached to TapJwaJongJeonGji (塔左從政志), and published in An Inquiry on Choson Currency with the ChoSonT'ongPaeJiIn housed at Wongaksa Temple, and found that they are the same shape as the Wongaksa Temple seal. In addition, their overall form matches the one depicted in Daerokji (大麓誌) and LiJaeNanGo (?齋亂藁). These findings demonstrate that the ChoSonT'ongPaeJiIn at Wongaksa Temple is a seal made in the 15th century and is therefore an important artifact in the study of Choson's currency history, taxation, and seals. Future research should examine its various aspects.

The Effects of the Computer Aided Innovation Capabilities on the R&D Capabilities: Focusing on the SMEs of Korea (Computer Aided Innovation 역량이 연구개발역량에 미치는 효과: 국내 중소기업을 대상으로)

  • Shim, Jae Eok;Byeon, Moo Jang;Moon, Hyo Gon;Oh, Jay In
    • Asia pacific journal of information systems / v.23 no.3 / pp.25-53 / 2013
  • This study empirically analyzes the effect of Computer Aided Innovation (CAI) on improving R&D capabilities. A survey was distributed by e-mail and Google Docs to the CTOs of 235 SMEs, and 142 surveys were returned (response rate 60.4%). Responses from 119 companies (83.8%), the effective sample after excluding non-responses, insincere responses, estimated values, etc., were used for the statistical analysis. In terms of sample traits, companies with less than 50 billion KRW in sales make up 76.5% of the surveyed companies, and companies with fewer than 300 employees make up 83.2%. By business type, companies that work with large companies as partners ('partners of large companies' hereafter) account for 68.1%, and SMEs operating their own independent business ('independent SMEs') account for 31.9%. IT system adoption by business type (partners of large companies versus independent SMEs) is as follows: ERP 18.5% versus 34.5%, QMS 11.8% versus 9.2%, PLM (Product Life-cycle Management) 6.7% versus 2.5%, and 3D CAD 47.1% versus 21%. IT system adoption and utilization among independent SMEs appear very weak compared with partners of large companies. The independent variables of this study are the CAI capability factors IT infrastructure and IT utilization; the dependent variables are the R&D capability factors organizational capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability. The highest average value among the variables was 4.24, for organizational capability; the lowest was 3.01, for the IT infrastructure item asking whether users can easily access and use data and information from other areas during new product development. The generally poor IT infrastructure environment of SMEs seems to be reflected in CAI itself.
To review the validity of the measurement, a factor analysis was conducted. Seven factors with eigenvalues over 1.0 were extracted from the dependent and independent variables, together explaining 71.167% of the total variance. Based on the factor analysis of the measured variables, the reliability of each item was checked with Cronbach's alpha; every factor reached at least 0.611, indicating acceptable reliability. Next, a correlation analysis was conducted. The R&D capability factors arranged as dependent variables (organizational capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability) showed significant correlations at the 99% confidence level with both independent variables, IT infrastructure and IT utilization. In addition, the correlation coefficient between each pair of factors is less than 0.8, supporting discriminant validity; the pair with the highest coefficient, 0.628, was IT utilization and technology-accumulating capability. Under the hypothesis of a linear relation between the independent and dependent variables, regression models were estimated to identify the impact of the CAI capability factors on R&D capability. The explanatory power of IT infrastructure for organizational capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability is 10.3%, 7%, 11.9%, 30.9%, and 10.5%, respectively.
IT utilization likewise shows generally low explanatory power: 12.4%, 5.9%, 11.1%, 38.9%, and 13.4% for organizational capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability, respectively. However, both independent variables show relatively high explanatory power for technology-accumulating capability. The regression equations composed of the independent and dependent variables are all significant (P<0.005), and the fit of the regression models appears good. In the regression analysis, all ten hypothesized coefficients were significant (P<0.01). In the linear regression between the two independent variables and R&D capability, IT infrastructure and IT utilization, the CAI capability factors, are positively correlated with the dependent R&D capability factors: organizational capability, process capability, human resources capability, technology-accumulating capability, and internal/external collaboration capability. They were thus identified as significant factors affecting R&D capability. However, when moderating variables are considered, a large gap appears compared with the full sample. First, for partners of large companies, IT infrastructure as a CAI capability shows positive correlations with organizational capability, process capability, human resources capability, and technology-accumulating capability among the R&D capabilities, but the relation with collaboration capability was insignificant. IT utilization, the other CAI capability factor, shows positive relations with organizational capability, process capability, human resources capability, and internal/external collaboration capability, as in the full sample. Next, analyzing independent SMEs as a moderating variable produced very different results from the full sample or the partners of large companies: for IT infrastructure, all factors except technology-accumulating capability were rejected, and for IT utilization, all factors except technology-accumulating capability and collaboration capability were rejected. From these moderating analyses, the following conclusions were drawn. First, for large companies and their partners, IT infrastructure and IT utilization positively affect the improvement of R&D capabilities, presumably because most large companies encourage innovation by requiring their partners to build IT infrastructure and IT utilization above a certain level. Second, across all companies, IT infrastructure and IT utilization as CAI capabilities positively affect at least technology-accumulating capability among the R&D capability factors. The explanatory power for most factors is low, around 10%, but for technology-accumulating capability it is rather high, around 25.6% to 38.4%, indicating that CAI capability contributes strongly to technology accumulation. Companies should not treat IT infrastructure and IT utilization as mere product-development tools for the R&D department; rather, they should use them as a management innovation strategy that drives company-wide innovation centered on new product development. This suggests a way to improve technology-accumulating capability in R&D and the dynamic capability needed to acquire sustainable competitive advantage.
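The reliability check mentioned in the abstract (Cronbach's alpha of at least 0.611 for every factor) uses a standard formula that is easy to reproduce. The sketch below applies it to hypothetical 5-point Likert responses; only the formula itself is standard, the data are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' sums
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical responses: 5 respondents, one 4-item scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(scores)
```

With strongly correlated items as above, alpha comes out around 0.94, well above the 0.611 floor the study reports.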

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing contents is becoming more important as information continues to proliferate. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because it constantly generates new information and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, it becomes more difficult for people to produce labeled text data as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other works, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study therefore has three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it shows that performance evaluation becomes possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is computed with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we check its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: representatively, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
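The scoring scheme described above (one score function per stock, one-hot entity vectors, argmax over stocks) can be sketched at the shape level. The snippet below uses the standard neural tensor network scoring form u·tanh(eᵀWe + Ve + b) with random, untrained weights; the dimensions and the per-stock adaptation are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

class NTNScorer:
    """One NTN-style score function for one stock (random, untrained weights)."""
    def __init__(self, n_entities, k=4):
        self.W = rng.normal(size=(k, n_entities, n_entities))  # bilinear tensor slices
        self.V = rng.normal(size=(k, n_entities))              # linear term
        self.b = rng.normal(size=k)                            # bias
        self.u = rng.normal(size=k)                            # output projection

    def score(self, e):
        # e: one-hot entity vector; e^T W e computed slice-by-slice via einsum
        bilinear = np.einsum('i,kij,j->k', e, self.W, e)
        return self.u @ np.tanh(bilinear + self.V @ e + self.b)

n_entities, n_stocks = 100, 30            # top-100 entities, 30 stocks (as in the study)
scorers = [NTNScorer(n_entities) for _ in range(n_stocks)]

e = np.zeros(n_entities)
e[7] = 1.0                                # one-hot vector for some new entity
scores = [s.score(e) for s in scorers]    # score under every stock's function
predicted_stock = int(np.argmax(scores))  # stock with the highest score wins
```

In the actual study the weights would be learned from the training reports; the argmax step is what the hit-ratio evaluation then measures.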

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science / v.19 no.1 / pp.37-46 / 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005, an increase of 22 percent. From 2001 to 2006, retail e-sales grew at an average annual rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academics and practitioners, a great deal of uncertainty still exists about the impact of introducing an Internet channel on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing outcomes before and after its entry. The model consists of a monopolist manufacturer selling its product through a channel system that includes one independent physical store before the entry of an Internet store. Adding the Internet store to this channel system results in a mixed channel comprising two different types of stores. The new Internet store can be launched by the independent physical store, as with Bestbuy; in this case, the physical retailer coordinates the two types of stores to maximize their joint profits. Alternatively, the Internet store can be introduced by an independent Internet retailer such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering, so the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1.
It is assumed that the manufacturer acts as a Stackelberg leader, maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as is typical in this type of study. To explore the research questions above, this study develops a game-theoretic model with three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This enables the model to demonstrate that the effect of adding an Internet store differs from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store versus an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates all three characteristics. The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer, who coordinates both types of stores to maximize their joint profits, retail prices increase due to a combination of coordinated retail pricing and wider market coverage.
The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partly offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailer could be better off managing the two channels separately rather than coordinating them, unless it has foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to increase the wholesale price and extract more of the total channel profit, while the retailers cannot raise retail prices accordingly, leaving them with lower channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by the retail competition. The model employed in this study is not designed to capture all the characteristics of the Internet channel; it can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The model is nonetheless named "Internet" for two reasons: first, the most representative example of a store that is not geographically constrained is the Internet; second, catalog sales usually target markets using pre-specified mailing lists, so the model used here is closer to the Internet than to catalog sales.
However, mathematically and theoretically distinguishing the core differences among stores that are not geographically constrained would be a desirable direction for future research. The model is simplified by a set of assumptions to maintain mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, various marketing variables are used, so a more realistic model could incorporate variables such as service levels or operating costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results follow from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption in game-theoretic studies of this kind, analyzing the model under different game rules could deepen understanding and generalize the findings beyond this assumption.
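The Stackelberg structure assumed in the model (the leader sets the wholesale price anticipating the follower's best response) can be illustrated with a minimal toy version: one manufacturer, one retailer, linear demand q = 1 − p, and zero marginal cost. These functional forms are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def retailer_best_response(w):
    """Follower: choose p maximizing (p - w) * (1 - p); closed form p = (1 + w) / 2."""
    return (1.0 + w) / 2.0

def manufacturer_profit(w):
    """Leader's profit, anticipating the follower's optimal retail price."""
    p = retailer_best_response(w)
    q = 1.0 - p            # linear demand at the follower's price
    return w * q           # zero marginal cost

# Leader optimizes the wholesale price over a grid
grid = np.linspace(0.0, 1.0, 10001)
w_star = grid[np.argmax([manufacturer_profit(w) for w in grid])]
p_star = retailer_best_response(w_star)
```

Under these assumptions the grid search recovers the textbook double-marginalization outcome w* = 1/2 and p* = 3/4: each party marks up in turn, which is the same vertical distortion the coordinated-versus-competing comparisons in the paper revolve around.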

  • PDF

Sesquiterpenoids Bioconversion Analysis by Wood Rot Fungi

  • Lee, Su-Yeon;Ryu, Sun-Hwa;Choi, In-Gyu;Kim, Myungkil
    • 한국균학회소식:학술대회논문집
    • /
    • 2016.05a
    • /
    • pp.19-20
    • /
    • 2016
  • Sesquiterpenoids are defined as C15 compounds derived from farnesyl pyrophosphate (FPP), and their complex structures are found in the tissues of many diverse plants (Degenhardt et al. 2009). FPP's long chain length and additional double bond enable its conversion into a huge range of mono-, di-, and tri-cyclic structures. A number of cyclic sesquiterpenes with alcohol, aldehyde, and ketone derivatives have key biological and medicinal properties (Fraga 1999). Fungi, such as the wood-rotting Polyporus brumalis, are excellent sources of pharmaceutically interesting natural products such as sesquiterpenoids. In this study, we investigated the biosynthesis of P. brumalis sesquiterpenoids on modified medium. Fungal suspensions of 11 white rot species were inoculated in modified medium containing C6H12O6, C4H12N2O6, KH2PO4, MgSO4, and CaCl2 for 20 days. Cultivation was stopped by solvent extraction via separation of the mycelium. The metabolites were identified as propionic acid (1), mevalonic acid lactone (2), β-eudesmane (3), and β-eudesmol (4), respectively (Figure 1). The main peaks of β-eudesmane and β-eudesmol, which were indicative of sesquiterpene structures, were consistently detected at 5, 7, 12, and 15 days. These results demonstrated the existence of terpene metabolism in the mycelium of P. brumalis. Polyporus spp. are known to generate flavor components such as methyl 2,4-dihydroxy-3,6-dimethyl benzoate; 2-hydroxy-4-methoxy-6-methyl benzoic acid; 3-hydroxy-5-methyl phenol; and 3-methoxy-2,5-dimethyl phenol in submerged cultures (Hoffmann and Esser 1978). Drimanes of sesquiterpenes were reported as metabolites from P. arcularius and shown to exhibit antimicrobial activity against Gram-positive bacteria such as Staphylococcus aureus (Fleck et al. 1996). The main metabolites of P. 
brumalis, β-eudesmol and β-eudesmane, were categorized as eudesmane-type sesquiterpene structures. The eudesmane skeleton can be biosynthesized from FPP-derived IPP, and approximately 1,000 such structures have been identified in plants as essential oils. The biosynthesis of eudesmol by P. brumalis may thus be an important tool for the production of useful natural compounds, given the potent bioactivity identified in plants. Essential oils comprising eudesmane-type sesquiterpenoids have been previously and extensively researched (Wu et al. 2006). β-Eudesmol is a well-known and important eudesmane alcohol with an anticholinergic effect in the vascular endothelium (Tsuneki et al. 2005). Additionally, recent studies demonstrated that β-eudesmol acts as a channel blocker for nicotinic acetylcholine receptors at the neuromuscular junction, and that it can inhibit angiogenesis in vitro and in vivo by blocking the mitogen-activated protein kinase (MAPK) signaling pathway (Seo et al. 2011). Nutrient variation experiments were conducted to determine an optimum condition for the biosynthesis of sesquiterpenes by P. brumalis. Genes encoding terpene synthases, which are crucial to the terpene synthesis pathway, generally respond to environmental factors such as pH, temperature, and available nutrients (Hoffmeister and Keller 2007, Yu and Keller 2005). Calvo et al. described the effect of the major nutrients carbon and nitrogen on the synthesis of secondary metabolites (Calvo et al. 2002). P. brumalis did not synthesize sesquiterpenes under all growth conditions. Differences in the metabolites observed when P. brumalis was grown in PDB versus modified medium highlighted the potential effect of inorganic sources such as C4H12N2O6, KH2PO4, MgSO4, and CaCl2 on sesquiterpene synthesis. β-Eudesmol was apparent throughout cultivation except when P. brumalis was grown on MgSO4-free medium. 
These results demonstrated that MgSO4 can specifically control the biosynthesis of β-eudesmol. Magnesium has been reported as a cofactor that binds to sesquiterpene synthase (Agger et al. 2008). Specifically, Mg2+ ions bind to two conserved metal-binding motifs. These metal ions complex with the substrate pyrophosphate, thereby promoting the ionization of the leaving group of FPP and resulting in the generation of a highly reactive allylic cation. The effect of the magnesium source on sesquiterpene biosynthesis was also identified via analysis of the concentration of total carbohydrates. Our current study offers further insight that fungal sesquiterpene biosynthesis can be controlled by nutrients. To profile the metabolites of P. brumalis, the cultures were extracted based on the growth curve. Although metabolites were produced during mycelial growth, it was difficult to detect significant changes in metabolite production, especially for those present at low concentrations. These compounds may be of interest for understanding their synthetic mechanisms in P. brumalis. The synthesis of terpene compounds began during the growth phase at day 9; sesquiterpene synthesis occurred after growth was complete. At day 9, drimenol, farnesol, and mevalonic acid lactone were identified. Mevalonic acid lactone is the precursor of the mevalonate pathway and, in particular, a precursor for a number of biologically important lipids, including cholesterol (Buckley et al. 2002). Farnesol is the precursor of sesquiterpenoids. Drimenol compounds, bicyclic sesquiterpene alcohols, can be synthesized from trans,trans-farnesol via cyclization and rearrangement (Polovinka et al. 1994); they have also been identified as secondary metabolites in the basidiomycete Lentinus lepideus. After 12 days, in the growth phase, β-elemene, caryophyllene, δ-cadinene, and eudesmane were detected together with β-eudesmol. 
The data showed the synthesis of sesquiterpene hydrocarbons with bicyclic structures. These compounds can be synthesized from FPP by cyclization. Cyclic terpenoids are synthesized through the formation of a carbon skeleton from linear precursors by terpene cyclase, followed by chemical modification such as oxidation, reduction, and methylation. Sesquiterpene cyclase is a key branch-point enzyme that catalyzes the complex intramolecular cyclization of the linear prenyl diphosphate into cyclic hydrocarbons (Toyomasu et al. 2007). After 20 days, in the stationary phase, the oxygenated structures eudesmol, elemol, and caryophyllene oxide were detected; thus, oxygenated sesquiterpenes were identified after growth. Based on these results, we showed that terpene metabolism in wood-rotting fungi occurs in the stationary phase and that such metabolism can be controlled by magnesium supplementation in the growth medium. In conclusion, we identified P. brumalis as a wood-rotting fungus that can produce sesquiterpenes. To mechanistically understand eudesmane-type sesquiterpene biosynthesis in P. brumalis, further research into the genes regulating the dynamics of such biosynthesis is warranted.

  • PDF

Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.49-65
    • /
    • 2014
  • Word of Mouth (WOM) is a behavior consumers use to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between the potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average degree centrality of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average betweenness centrality of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average closeness centrality of reviewers in online product communities is positively related to the total revenue of movies. 
To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis are the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, as well as community data from IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo is collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities (degree, betweenness, and closeness centrality) for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis results show that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the average betweenness centrality of the initial community networks impacts final movie revenue, whereas the average degree centrality and average closeness centrality do not influence final movie performance. Based on the regression results, hypotheses 1, 2, and 4 are accepted, and hypotheses 3 and 5 are rejected. This study tries to link the network structure of e-WOM in online product communities with the product's performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance. 
The results show that the betweenness centralities of the reviewer community are critical for the prediction of movie performance. However, degree centralities and closeness centralities do not influence movie performance. As future research topics, similar analyses are required for other product categories such as electronic goods and online content to generalize the study results.
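The centrality-averaging step of the pipeline above can be sketched in pure Python. The study itself used NodeXL, and the reply edges below are hypothetical; betweenness centrality, the measure the study found predictive, would additionally require e.g. Brandes' algorithm and is omitted here for brevity.

```python
# A pure-Python sketch of the per-movie centrality features (the study used
# NodeXL). The (responder, reviewer) reply pairs below are hypothetical.
from collections import deque

replies = [("u1", "u2"), ("u3", "u2"), ("u2", "u4"), ("u4", "u1")]

# Build an undirected adjacency list from the reply relationships.
adj = {}
for a, b in replies:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
n = len(adj)

def shortest_dists(src):
    """Breadth-first search: hop distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Degree centrality: fraction of the other reviewers a reviewer is linked to.
degree = {v: len(adj[v]) / (n - 1) for v in adj}
# Closeness centrality: (n - 1) / sum of shortest-path distances.
closeness = {v: (n - 1) / sum(shortest_dists(v).values()) for v in adj}

# Per-movie features, as in the paper: average centrality over reviewers.
avg_degree = sum(degree.values()) / n
avg_closeness = sum(closeness.values()) / n
```

Averaging the node-level centralities into a single number per movie is what lets the network structure enter the regression alongside WOM volume and valence.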

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • Nowadays, in the information society, the importance of knowledge services that use information to create value is growing day by day. In addition, with the development of IT technology, it has become easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have actively used culture and the arts to manage their corporate image, closely linking these activities to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so performing cultural activities has become a common tool of differentiation among firms. Many firms have used the customer's experience as a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the necessity of personalized service, which provides a new experience for people based on personal profile information containing the characteristics of the individual, is emerging rapidly. Personalized service using a customer's individual profile information such as language, symbols, behavior, and emotions is therefore very important today. Through this, we can judge the interaction between people and content and maximize customer experience and satisfaction. There are various related works that provide customer-centered service; in particular, emotion recognition research has been emerging recently. Existing research on emotion recognition has mostly used bio-signals, and most studies focus on voice and face, which show great emotional changes. However, there are several difficulties in predicting people's emotions caused by limitations of equipment and service environments. Therefore, in this paper, we develop an emotion prediction model based on a vision-based interface to overcome these existing limitations. Emotion recognition research based on people's gestures and postures has been carried out by several researchers. 
This paper developed a model that recognizes people's emotional states through body gesture and posture using the difference image method, and we found an optimal validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and we showed suitably stimulating movies to collect participants' body gestures and postures as their emotions changed. We then extracted body movements using the difference image method and refined the participant data to build the proposed model with a neural network. The proposed model for emotion prediction used three time-frame sets (20 frames, 30 frames, and 40 frames), and we adopted the model with the best performance compared with the others. Before building the three kinds of models, the entire set of 97 data points was divided into three subsets: learning, test, and validation. The proposed model for emotion prediction was constructed using an artificial neural network. In this paper, we used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron neural network with one hidden layer and four output nodes. Based on the test data set, learning was stopped when it reached 50,000 iterations after reaching the minimum error, in order to explore the stopping point of learning. We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed a prediction accuracy of 100% for sadness and 96% for joy in the 20-frame set model, and 88% for surprise and 98% for disgust in the 30-frame set model. The findings of our research are expected to be useful in providing effective algorithms for personalized service in various industries such as advertisement, exhibition, and performance.
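The difference image step at the front of this pipeline can be sketched directly. This is a minimal illustration under stated assumptions: the tiny 3x3 grayscale "frames", the threshold value, and the motion-ratio feature are all hypothetical stand-ins for the study's real video data and feature set.

```python
# A minimal sketch of the difference-image step used to extract body movement:
# subtract consecutive frames and threshold the absolute intensity change.
# The 3x3 "frames", threshold, and motion-ratio feature are illustrative
# assumptions, not the study's actual data or feature definitions.

def difference_image(prev, curr, threshold=30):
    """Binary motion mask: 1 where a pixel's intensity changed more than threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

def motion_ratio(mask):
    """Fraction of changed pixels: a simple per-frame movement feature of the
    kind that could be aggregated over 20/30/40-frame windows for the network."""
    pixels = [px for row in mask for px in row]
    return sum(pixels) / len(pixels)

prev = [[10, 10, 10],
        [10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],
        [10, 90, 10],
        [10, 10, 10]]

mask = difference_image(prev, curr)
ratio = motion_ratio(mask)  # 2 of 9 pixels register motion
```

Aggregating such per-frame motion features over a fixed window (20, 30, or 40 frames, as the abstract describes) yields the kind of input vector a small back-propagation network with four output nodes could map to the four emotion classes.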