• Title/Summary/Keyword: Programming


The Brand Personality Effect: Communicating Brand Personality on Twitter and its Influence on Online Community Engagement (브랜드 개성 효과: 트위터 상의 브랜드 개성 전달이 온라인 커뮤니티 참여에 미치는 영향)

  • Cruz, Ruth Angelie B.;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.67-101
    • /
    • 2014
  • The use of new technology greatly shapes the marketing strategies companies use to engage their consumers. Among these new technologies, social media is used to reach an organization's audience online. One of the most popular social media channels to date is the microblogging platform Twitter. With 500 million tweets sent daily on average, the microblogging platform is a rich source of data for researchers and a lucrative marketing medium for companies. Nonetheless, one of the challenges for companies in developing an effective Twitter campaign is the limited theoretical and empirical evidence on proper organizational usage of Twitter, despite its potential advantages for a firm's external communications. The current study aims to provide empirical evidence on how firms can use Twitter effectively in their marketing communications, drawing on the association between brand personality and brand engagement that several branding researchers propose. The study extends Aaker's earlier empirical work on brand personality by applying the Brand Personality Scale to explore whether Twitter brand communities convey distinctive brand personalities online, and how this influences the communities' level or intensity of consumer engagement and sentiment quality. Moreover, the moderating effect of the product-involvement construct on consumer engagement is also measured. By collecting data for a period of eight weeks from 23 accounts of Twitter-verified business-to-consumer (B2C) brands using the publicly available Twitter application programming interface (API), we analyze the validity of the paper's hypotheses using computerized content analysis and opinion mining. The study is the first to compare Twitter marketing across organizations using the brand personality concept.
It demonstrates a potential basis for Twitter strategies and discusses the benefits of these strategies, thus providing a framework of analysis for Twitter practice and strategic direction for companies developing their use of Twitter to communicate with their followers. This study has four specific research objectives. The first is to examine the applicability of the brand personality dimensions used in marketing research to online brand communities on Twitter. The second is to establish a connection between the congruence of offline and online brand personalities and the building of a successful social media brand community. Third, we test the moderating effect of product involvement on the effect of brand personality on brand community engagement. Lastly, we investigate the sentiment quality of consumer messages to the firms that succeed in communicating their brands' personalities on Twitter.
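As a rough illustration of the opinion-mining step, the sketch below scores tweet texts against small positive and negative word lists. The word lists and example tweets are invented, and the study's actual computerized content analysis is more sophisticated; this shows only the basic lexicon-scoring idea.

```python
# Minimal lexicon-based sentiment scoring sketch (illustrative only).
# Word lists and tweets below are hypothetical, not from the study.

POSITIVE = {"love", "great", "awesome", "happy", "best"}
NEGATIVE = {"hate", "bad", "terrible", "worst", "angry"}

def sentiment_score(tweet: str) -> int:
    """Positive result = net-positive wording, negative = net-negative."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "love this brand, best customer service",
    "terrible product, worst experience ever",
]
scores = [sentiment_score(t) for t in tweets]
print(scores)  # [2, -2]
```

A real pipeline would also handle negation, emoji, and stemming, but the aggregate of such per-message scores is what yields a community's sentiment quality.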

The Cyber world of the Matrix as a typical type of 'Simulacre' (시뮬라크르의 전형(典型)으로서 매트릭스(Matrix)의 가상 세계)

  • 이종한
    • Archives of design research
    • /
    • v.17 no.1
    • /
    • pp.339-346
    • /
    • 2004
  • The Matrix, directed by Larry and Andy Wachowski, dealt with the cyber world in a relatively precise way. After the movie was released, it attracted a devoted fan base and was adapted into various forms of cultural products: it was not only remade into parodies in other movies and TV programs, but the clothes and miscellaneous items of the movie were also reborn as a unique cultural trend. The cause of this popularity is the fresh storyline as well as the sophisticated visual effects and good-looking actors. The agony of the protagonist resonated with people outside the movie who yearn for an ideal world. He is confused by the fact that the circumstances he believed to be the real world are not entirely true, torn between the sensory, physical truth and the spiritual truth, and driven by a will for freedom that would uncover the truth and save other people from the fictitious world. Consequently, the movie won the sympathy of many audiences by suggesting a situation in which there is no firm belief in reality, the difference between the real and the cyber world is meaningless, and the faked images of high technology are overturned. This thesis studies the present condition, in which real images are excessively duplicated and consumed, in relation to Jean Baudrillard's theory of the 'hyperreal'. When real objects are replaced by technical programming, as in the Matrix world, an image-violence occurs in which true nature is slaughtered by images. In a world where reproductions are more actual than the reality they pretend to be, only signs are produced and consumed. That is to say, the totally programmed images have no referents and no aims, and therefore must be produced through a 'deterrence strategy', such as a faked crisis. That is the stage of 'simulation' that artificially reincarnates the real.
Based upon Baudrillard's theory of the 'simulacre', this study researches today's postmodern situation, in which the boundary between the real world and the faked copy is vague and vanishing, through an analysis of the cyber world of the movie The Matrix.


Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.53-77
    • /
    • 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their full potential and end up becoming obsolete. According to the 2012 National Assembly Inspection of the Administration, about 73% of the patents held by universities and publicly funded research institutions failed to lead to the creation of social value and remain latent. One main cause of this problem is that patent creators such as individual researchers, universities, and research institutions lack the ability to commercialize their patents into viable businesses with the enterprises that need them; on the enterprise side, it is likewise hard to find appropriate patents through keyword searches alone. This paper proposes a patent recommendation system that can identify and recommend intellectual property rights suited to a user's fields of interest, from a rapidly accumulating pool of patent assets, in an easier and more efficient manner. The proposed system extracts core content and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from the tag information created by users in order to find the best patents to recommend. In the early stage, when no tag information has yet accumulated, recommendation relies on content characteristics identified through an analysis of the keywords contained in patent attributes such as 'Title of Invention' and 'Claims'. To do this, the system extracts only the nouns from patents and assigns each noun a weight according to its importance across all patents using a TF-IDF analysis. It then finds patents whose weights are similar to those of the patents a user prefers. In this paper, this similarity is called the 'Domain Similarity'.
Next, the system extracts each patent's technology-sector characteristics from the patent document by analyzing its international technology classification codes (International Patent Classification, IPC). Every patent has at least one IPC code, and each user can attach one or more tags to the patents they like, so each user has a set of IPC codes drawn from their tagged patents. The system uses this IPC set to analyze each user's technology preference and find well-fitted patents, calculating a 'Technology Similarity' between the user's IPC set and the IPC codes of all other patents. Later, when tag information from multiple users has accumulated, the system expands the recommendations by considering other users' social tags on the patents tagged by the user in question. The similarity between the tag information of a user's preferred patents and that of other patents is called the 'Social Similarity' in this paper. Lastly, a 'Total Similarity' is calculated by adding these three different similarities, and the patents with the highest 'Total Similarity' are recommended to each user. The proposed system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office. Since this original dataset does not include tag information, we created virtual tag information and used it to construct a semi-virtual dataset. The proposed recommendation algorithm was implemented in Java, and a prototype graphical user interface was also designed for this study. As the proposed system has no dependent variables and uses virtual data, it cannot be verified by statistical methods.
The study therefore uses a scenario test to verify the operational feasibility and recommendation effectiveness of the system. The results of this study are expected to improve the chances of matching promising patents with the most suitable businesses. Users' experiential knowledge can thus be accumulated, managed, and utilized alongside the existing patent system, which currently manages only standardized patent information.
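The 'Domain Similarity' step described above, TF-IDF weighting of nouns followed by a similarity comparison, can be sketched roughly as follows. Cosine similarity is assumed as the comparison (the abstract does not name the exact measure), and the toy patent documents are invented.

```python
import math
from collections import Counter

# Illustrative sketch: TF-IDF weights over nouns extracted from patent
# titles/claims, compared by cosine similarity. Documents are hypothetical.
docs = {
    "P1": ["battery", "electrode", "lithium", "battery"],
    "P2": ["battery", "charger", "circuit"],
    "P3": ["gene", "sequence", "protein"],
}

def tfidf(docs):
    n = len(docs)
    df = Counter(w for words in docs.values() for w in set(words))  # document frequency
    vecs = {}
    for pid, words in docs.items():
        tf = Counter(words)
        vecs[pid] = {w: tf[w] * math.log(n / df[w]) for w in tf}    # tf * idf
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf(docs)
# P2 shares "battery" with P1, so it should rank above the unrelated P3:
print(cosine(vecs["P1"], vecs["P2"]) > cosine(vecs["P1"], vecs["P3"]))  # True
```

The Technology and Social Similarities can be computed the same way over IPC-code sets and tag sets, then summed into the Total Similarity.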

Study of Web Services Interoperability for Multiple Applications (다중 Application을 위한 Web Services 상호 운용성에 관한 연구)

  • 유윤식;송종철;최일선;임산송;정회경
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.217-220
    • /
    • 2004
  • As utilization of the web increases rapidly, there is demand for a model that systematically supports interaction between web-based applications, and for solutions that can effectively integrate new distributed platforms with existing environments; Web Services appeared as the answer. Today, many software and hardware companies are adopting Web Services in their markets and attempting to build applications by combining components from various Web Services providers. However, for Web Services to work completely, interoperability is required, along with standardization work that avoids dependence on any particular vendor's platform, application, service, or programming language. WS-I (the Web Services Interoperability Organization) has established Basic Profile 1.0, based on XML, UDDI, WSDL, and SOAP, for Web Services interoperability, and has developed usage-scenario profiles to apply Web Services in practice. In this paper, to verify Web Services interoperability between two heterogeneous applications, we design and implement a Book Information Web Service consisting of a Web Services client on the J2SE platform and a Web Services server on the .NET platform, and analyze and verify the service through application of the WS-I Basic Profile.
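As a rough illustration of the kind of platform-neutral message that makes such interoperability possible, the sketch below builds and re-parses a minimal SOAP 1.1 envelope. The operation and element names (GetBookInfo, isbn) and the ISBN value are hypothetical, not taken from the paper; the point is that a J2SE client and a .NET server exchange the same literal XML.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(isbn: str) -> str:
    """Build a minimal SOAP 1.1 envelope for a hypothetical GetBookInfo call."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    request = ET.SubElement(body, "GetBookInfo")  # hypothetical operation name
    ET.SubElement(request, "isbn").text = isbn
    return ET.tostring(env, encoding="unicode")

message = build_envelope("1234567890")
# Any conforming platform parses the same bytes back to the same tree:
parsed = ET.fromstring(message)
print(parsed.find(f"{{{SOAP_NS}}}Body/GetBookInfo/isbn").text)
```

WS-I Basic Profile conformance then adds constraints on top of this (e.g. literal encoding, HTTP binding details) so that both stacks interpret the envelope identically.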


Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, the computing field has seen many changes and continued attempts to develop new technologies. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware (processors and system architecture) toward programming environments and application usage. The high-performance computing (HPC) field in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT industry is well developed and the country is considered one of the leading countries in the world, but not in supercomputing. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance per dollar, per area, and per watt. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture consisting of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed for user-friendliness and ease of use, based on integrated system-management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA system was first installed in December 2011, with a theoretical peak performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes.
The system will be upgraded to 100 TeraFLOPS in January 2013.
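From the installation figures quoted above (30.3 TeraFLOPS measured against a 50 TeraFLOPS theoretical peak on 32 nodes), the achieved efficiency can be computed directly:

```python
# Efficiency of the initial MAHA installation, from the figures in the text.
theoretical_tflops = 50.0
measured_tflops = 30.3
num_nodes = 32

efficiency = measured_tflops / theoretical_tflops
print(f"{efficiency:.1%} of peak, "
      f"{measured_tflops / num_nodes:.2f} TFLOPS per node")  # 60.6% of peak, 0.95 TFLOPS per node
```

Sixty percent of peak is a plausible LINPACK-style figure for a heterogeneous accelerator cluster, where sustained performance typically falls well short of theoretical peak.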

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on ratings and on channel-promotion effectiveness is very high, and its cultural and business impact has also been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore very important from the strategic perspective of the media industry. Previous studies have tried to predict audience ratings and drama success by various methods, but most made simple predictions using intuitive factors such as the main actor and the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one that actual broadcasting companies can use. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels during the 10 years from 2003 to 2012. From these data, we selected the 45 most highly and least highly ranked TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data-mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, whose sum we term similarity. Using this similarity measure, we predicted the success of dramas from viewers' initial viewing-time pattern distributions over episodes 1 to 5. To confirm that the model is not unduly sensitive to the choice of measure, various distance measures were applied and the model was checked for robustness. Once the model was established, a grid search allowed us to build a more predictive model.
Furthermore, when a new drama is broadcast, we classify viewers who have watched more than 70% of the total airtime as "passionate viewers". We then compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the potential of a blockbuster TV mini-series can be determined. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: with the initial viewing-time pattern analysis, our model classified blockbuster dramas correctly with 75.47% accuracy. This paper shows a high prediction rate while suggesting an audience-measurement method different from existing ones. Broadcasters currently rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever owing to rising production costs, the long-term recession, and aggressive investment by general programming channels and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and advertising is executed on the basis of audience ratings. The drama market carries uncertainty, since demand is difficult to forecast owing to the nature of the product, while dramas contribute heavily to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the distribution of initial viewing time can offer practical help in establishing a response strategy (scheduling, marketing, story changes, etc.) for the companies involved. We also found that audience behavior is crucial to a program's success, and we use viewing time as a measure of how enthusiastically a program is watched.
We can predict a program's success by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be applied to other platforms, and can be used for marketing programs such as highlights, script previews, behind-the-scenes content, characters, games, and other marketing projects.
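The core of the prediction step, comparing a new drama's early viewing-time pattern with reference patterns from known hits and flops, can be sketched as follows. The four-step vectors are invented (the paper uses 11 steps and both Euclidean and correlation measures).

```python
import math

# Distance-based classification sketch: a new drama's early viewing-time
# distribution is compared against hypothetical hit and flop patterns.
hit_pattern = [0.05, 0.10, 0.25, 0.60]   # invented: viewing concentrates late/heavy
flop_pattern = [0.40, 0.30, 0.20, 0.10]  # invented: viewing tails off early

def euclidean(a, b):
    """Euclidean viewing-time distance between two pattern vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

new_drama = [0.08, 0.12, 0.30, 0.50]
label = ("hit" if euclidean(new_drama, hit_pattern) < euclidean(new_drama, flop_pattern)
         else "flop")
print(label)  # hit
```

In the paper, the summed Euclidean and correlation distances (the similarity) play the role of the single distance here, and the reference patterns come from the ranked training dramas rather than being hand-written.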

A study on the feasibility evaluation technique of urban utility tunnel by using quantitative indexes evaluation and benefit·cost analysis (정량적 지표평가와 비용·편익 분석을 활용한 도심지 공동구의 타당성 평가기법 연구)

  • Lee, Seong-Won;Chung, Jee-Seung;Na, Gwi-Tae;Bang, Myung-Seok;Lee, Joung-Bae
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.1
    • /
    • pp.61-77
    • /
    • 2019
  • If a new utility tunnel is planned for existing high-density urban areas in Korea, a rational decision-making process is needed, such as determining the optimum design capacity with a feasibility evaluation system based on quantitative evaluation indexes together with an economic evaluation. A previous study presented the importance weights of individual higher-level indexes (3 items) and sub-indexes (16 items) through an analytic hierarchy process (AHP) for the quantitative evaluation index items, considering the characteristics of each urban type. In addition, an economic evaluation method was proposed considering 10 benefit items and 8 cost items, adding 3 new items (the effects on traffic accidents, noise reduction, and socio-economic losses) to the existing items, for a benefit-cost analysis suited to urban utility tunnels. This study presents a quantitative feasibility evaluation method using the importance weights of the 16 sub-index items across the road management, public facilities, and urban environment sectors. The results of the quantitative feasibility and economic evaluations were then compared and analyzed for 123 main road sections of Seoul, and a comprehensive evaluation method was proposed by combining the two sets of results. A design-capacity optimization program, to be developed by programming the logic of the quantitative feasibility and economic evaluation system presented in this study, will be utilized in the planning and design phases of urban utility tunnels and will ultimately contribute to their vitalization.
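Combining AHP importance weights with sector scores amounts to a weighted sum. The sketch below uses three invented sector weights and invented 0-100 scores, not the study's actual 16 sub-index values, purely to show the aggregation.

```python
# Weighted-sum feasibility score sketch. Weights and scores are invented
# placeholders; the study uses AHP-derived weights for 16 sub-indexes.
weights = {"road_management": 0.40, "public_facilities": 0.35, "urban_environment": 0.25}
scores = {"road_management": 80, "public_facilities": 65, "urban_environment": 70}

feasibility = sum(weights[k] * scores[k] for k in weights)  # AHP weights sum to 1
print(round(feasibility, 2))  # 72.25
```

A section's quantitative feasibility score computed this way can then be set against its benefit-cost ratio for the comprehensive evaluation the study proposes.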

Construction of Database System on Amylose and Protein Contents Distribution in Rice Germplasm Based on NIRS Data (벼 유전자원의 아밀로스 및 단백질 성분 함량 분포에 관한 자원정보 구축)

  • Oh, Sejong;Choi, Yu Mi;Lee, Myung Chul;Lee, Sukyeung;Yoon, Hyemyeong;Rauf, Muhammad;Chae, Byungsoo
    • Korean Journal of Plant Resources
    • /
    • v.32 no.2
    • /
    • pp.124-143
    • /
    • 2019
  • This study was carried out to build a database system for the amylose and protein contents of rice germplasm based on NIRS (near-infrared reflectance spectroscopy) analysis data. The average amylose content of waxy types was 8.7% in landraces, varieties, and weedy types, and 10.3% in breeding lines. In common rice, the average amylose content was 22.3% for landraces, 22.7% for varieties, 23.6% for weedy types, and 24.2% for breeding lines. Waxy-type resources comprised 5% of the total germplasm collection, whereas low-, intermediate-, and high-amylose resources accounted for 5.5%, 20.5%, and 69.0% of the collection, respectively. The average protein content was 8.2% for landraces, 8.0% for varieties, and 7.9% for weedy types and breeding lines. The average Variability Index Value was 0.62 for amylose in waxy rice, 0.80 for amylose in common rice, and 0.51 for protein content. The accession ratio in arbitrary ranges was 0.45 for landraces with amylose contents of 6.4-8.7% and 0.26 for protein contents of 7.3-8.2%; for varieties, 0.32 for amylose of 20.1-22.7% and 0.51 for protein of 6.1-8.3%; for weedy types, 0.67 for amylose of 6.6-9.7% and 0.33 for protein of 7.0-7.9%; and for breeding lines, 0.47 for amylose of 10.0-12.0% and 0.26 for protein of 7.0-7.9%. These results should be helpful in building a database programming system for germplasm management.

Application of Automated Microscopy Equipment for Rock Analog Material Experiments: Static Grain Growth and Simple Shear Deformation Experiments Using Norcamphor (유사물질 실험을 위한 자동화 현미경 실험 기기의 적용과 노캠퍼를 이용한 입자 성장 및 단순 전단 변형 실험의 예)

  • Ha, Changsu;Kim, Sungshil
    • Economic and Environmental Geology
    • /
    • v.54 no.2
    • /
    • pp.233-245
    • /
    • 2021
  • Many studies of microstructures in rocks have used experimental methods with various kinds of equipment, alongside studies of natural rocks, to observe the development of microstructures and understand their mechanisms. Grain boundary migration in mineral aggregates, one of the main recrystallization mechanisms, can cause grain growth or grain-size changes during metamorphism or deformation. This study suggests an improved approach to rock-analog experiments, using reformed equipment that allows sequential observation of grain boundary migration; it can be more efficient than existing techniques and supports appropriate microstructural analysis. The reformed equipment enables optical manipulation by mounting rotatable polarizing plates on a stereoscopic microscope together with a deformation rig for analog-material experiments. It automatically controls the temperature and strain rate of the deformation rig via microcontrollers and programming, and takes digital photomicrographs at constant time intervals during the experiment to record any microstructural changes. Composite images synthesized from photographs taken at different polarizer rotations reveal grain boundaries more accurately. As the rock analog material we used norcamphor (C7H10O), which has a birefringence similar to that of quartz. Static grain-growth and simple-shear deformation experiments were performed with norcamphor to verify the effectiveness of the equipment. The static grain-growth experiments showed typical grain-growth behavior: the number of grains decreases and the average grain size increases over time, and the growth curves differed clearly among the three temperature conditions.
The simple-shear deformation experiment under medium temperature and a low strain rate showed no significant change in average grain size, but the grains became increasingly elongated, at about 53° to the direction perpendicular to the shear direction, as the shear strain increased over time. These microstructures are interpreted as indicating that plastic deformation and internal recovery within the grains are balanced under the given experimental conditions. These experiments with the reformed equipment demonstrate the ability to observe microstructural changes sequentially, as desired, throughout the entire course of an analog-material experiment.
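The reported static behavior (fewer, larger grains over time, faster at higher temperature) is qualitatively consistent with the classic normal grain-growth law d² = d₀² + kt. The sketch below evaluates that law with invented parameters, purely for illustration; the study does not claim its curves follow this exact form.

```python
import math

# Normal grain-growth law sketch: mean grain size d satisfies d^2 = d0^2 + k*t.
# d0, k, and the sample times are invented for illustration; a higher k
# would correspond to a higher-temperature experiment.
def mean_grain_size(d0_um: float, k: float, t_s: float) -> float:
    """Mean grain size (um) after time t_s, from initial size d0_um."""
    return math.sqrt(d0_um ** 2 + k * t_s)

d0, k = 10.0, 0.5  # initial size (um) and growth-rate constant (um^2/s)
sizes = [mean_grain_size(d0, k, t) for t in (0, 200, 800)]
print([round(s, 1) for s in sizes])  # [10.0, 14.1, 22.4]
```

Fitting time-lapse size measurements from the automated photomicrographs to such a curve is one way the sequential observations could be quantified.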

Performance of Investment Strategy using Investor-specific Transaction Information and Machine Learning (투자자별 거래정보와 머신러닝을 활용한 투자전략의 성과)

  • Kim, Kyung Mock;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.65-82
    • /
    • 2021
  • Stock market investors are generally divided into foreign investors, institutional investors, and individual investors. Compared with individual investors, professional investor groups such as foreign investors have an advantage in information and financial power, and as a result foreign investors are known to show good investment performance among market participants. The purpose of this study is to propose an investment strategy that combines investor-specific transaction information and machine learning, and to analyze the portfolio performance of the proposed model using actual stock prices and investor-specific transaction data. The Korea Exchange provides securities firms with daily information on the purchase and sale volumes of each investor type. We developed a data collection program in C# using an API provided by Daishin Securities' Cybos Plus, and collected daily opening prices, closing prices, and investor-specific net purchase data from January 2, 2007 to July 31, 2017 for 151 of the KOSPI 200 stocks. The self-organizing map is an artificial neural network that performs clustering by unsupervised learning, introduced by Teuvo Kohonen in the early 1980s. Neurons within the map layer compete with one another, and all connections are feed-forward, from the input layer to the map layer, without recurrence. The map can be extended to multiple layers, although a single layer is commonly used. Linear activation functions are used for the artificial neurons, and the learning rule follows the instar rule, as in general competitive learning. The backpropagation model, in contrast, is an artificial neural network that performs classification by supervised learning. We grouped and transformed the investor-specific transaction-volume data with the self-organizing map and used the result to train backpropagation models.
Based on predictions for the verification data, the portfolios were rebalanced monthly. For performance analysis, a passive portfolio was designated, and the KOSPI 200 and KOSPI index returns were obtained as proxies for market returns. Performance was analyzed using the equally weighted portfolio return, compound interest rate, annual return, maximum drawdown (MDD), standard deviation, and Sharpe ratio. The buy-and-hold return of the top 10 stocks by market capitalization was designated as a benchmark; buy-and-hold is the best strategy under the efficient market hypothesis. The prediction rate of the backpropagation model on the training data was significantly high at 96.61%, and the prediction rate on the verification data, at 57.1%, was also relatively high. The quality of the self-organizing map grouping can be judged from the backpropagation results: had the grouping been poor, the backpropagation model would have learned poorly. In this respect, the machine learning in this study is judged to have learned better than in previous studies. Our portfolio doubled the benchmark return and outperformed the market returns of the KOSPI and KOSPI 200 indexes. Compared with the benchmark, the portfolio's risk indicators, MDD and standard deviation, also showed better results, and its Sharpe ratio was higher than those of the benchmark and the stock market indexes. Through this, we present a direction for portfolio-composition programs using machine learning and investor-specific transaction information, and show that they can be used to develop programs for real stock investment. The reported return is the result of monthly portfolio composition and rebalancing of assets to equal proportions.
Better outcomes are expected if the monthly portfolio is maintained by continuously rebalancing into the suggested stocks rather than fully selling and re-buying them; the strategy therefore appears applicable to real transactions.
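The competitive, unsupervised learning behind the self-organizing map described above can be sketched in miniature as winner-take-all updates on two-dimensional inputs. The two-feature data points (think of scaled foreign and institutional net purchases) are invented; the study used real KOSPI transaction data, a full SOM, and a backpropagation classifier on top.

```python
import random

# Minimal winner-take-all competitive-learning sketch of a SOM's core idea.
# Data points and map size are invented toy values.
random.seed(0)
data = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1)]  # two clear clusters
nodes = [[random.random(), random.random()] for _ in range(2)]  # map neurons

def winner(x):
    """Index of the neuron closest to input x (the competition step)."""
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)          # decaying learning rate
    for x in data:
        w = nodes[winner(x)]             # only the winning neuron learns
        for j in range(len(w)):
            w[j] += lr * (x[j] - w[j])   # instar-style update toward the input

labels = [winner(x) for x in data]
print(labels[0] == labels[1] and labels[2] == labels[3] and labels[0] != labels[2])  # True
```

A full SOM adds a neighborhood function so that neurons near the winner also move, preserving topology; the resulting cluster labels are what the study feeds into the supervised backpropagation stage.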