• Title/Summary/Keyword: Communication Method

Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.227-252 / 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages. Because communication via messenger applications is not face-to-face, it is difficult for senders to communicate their emotions to message recipients. Emoticons have long been used as symbols that indicate the moods of speakers. At present, however, emoticon use is evolving into a means of conveying the psychological states of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies like KakaoTalk, Line, and Apple have begun conducting emoticon business, and that sales of related content are expected to increase steadily, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons themselves and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application that commands more than 90% of the domestic market share in South Korea, merely groups emoticons into broad categories such as most popular or most recent. Consumers therefore face the inconvenience of constantly scrolling to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers perceive and wish to express. Such quantification enables us to analyze the characteristics and emotions felt by consumers who used similar emoticons, which, in turn, facilitates emoticon recommendations for consumers. One way to quantify emoticon use is metadata-ization, a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we considered fine-grained sub-expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in Korean, as established by previous emotion studies focusing on emoticon characteristics. We therefore collected the sub-expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes. We identified 14 features for the emoticon-technical indexes and selected 36 emotional adjectives. The 36 emotional adjectives consisted of contrasting pairs, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked them to evaluate five different emoticon sets each. After data acquisition, we conducted a factor analysis of the emoticon-emotional factors and extracted four factors that we named "Comic", "Softness", "Modernity", and "Transparency".
We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes did not directly affect consumer attitudes but had a mediating effect on consumer attitudes through the emoticon-emotional factors. The results of the analysis revealed the mechanism consumers use to evaluate emoticons: the emoticon-technical indexes affected the emoticon-emotional factors, and the emoticon-emotional factors affected consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, creating a recommendation method that calculates the Euclidean distance between the factor scores of emoticons. In an attempt to validate the accuracy of the emoticon recommendation system, we compared the emotional patterns of selected emoticons with those of the recommended emoticons; the emotional patterns corresponded in principle. We verified the emoticon recommendation system by testing prediction accuracy; the predictions were 81.02% accurate in the first trial, 76.64% in the second, and 81.63% in the third. This study developed a methodology that can be applied in various fields, both academically and practically. We expect that the novel emoticon recommendation system we designed will increase emoticon sales for companies conducting business in this domain and make consumer experiences more convenient. In addition, this study serves as an important first step in the development of an intelligent emoticon recommendation system. The emotional factors proposed in this study could be collected in an emotional library that could serve as an emotion index for evaluation when new emoticons are released. Moreover, by combining the accumulated emotional library with company sales data, sales information, and consumer data, companies could develop hybrid recommendation systems that would bolster convenience for consumers and serve as intellectual assets that companies could strategically deploy.
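The distance-based recommendation step lends itself to a compact illustration. The sketch below recommends emoticon sets by Euclidean distance over the four extracted factors; the set names and factor scores are hypothetical placeholders, not the study's data.

```python
import math

# Hypothetical factor-score profiles (Comic, Softness, Modernity, Transparency),
# e.g. mean survey ratings on a 7-point scale; not the authors' actual dataset.
emoticon_profiles = {
    "set_A": (5.1, 3.2, 4.0, 2.8),
    "set_B": (2.4, 5.6, 3.1, 4.2),
    "set_C": (4.9, 3.0, 4.3, 2.5),
}

def euclidean(p, q):
    """Euclidean distance between two four-factor score vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def recommend(liked_set, profiles, k=1):
    """Recommend the k sets whose emotional pattern is closest to a liked set."""
    target = profiles[liked_set]
    others = [name for name in profiles if name != liked_set]
    return sorted(others, key=lambda n: euclidean(target, profiles[n]))[:k]

print(recommend("set_A", emoticon_profiles))  # -> ['set_C'], the closest pattern
```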

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, detecting market timing means determining when to buy and sell to earn an excess return from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. Some researchers, on the other hand, have proposed rough set analysis as a proper tool for market timing because, by using a control function, it does not generate a trading signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds candidate cuts by naïve scaling of the data, then selects the optimal discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock market timing models that use rough set analysis with the various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria of liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample. Expert's knowledge-based discretization also produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
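As an illustration of the first of the four methods, the following sketch implements equal frequency scaling; the randomly generated indicator series and variable names are assumptions for demonstration, not the study's data.

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points that split `values` into n_intervals equal-frequency bins."""
    quantiles = np.linspace(0, 100, n_intervals + 1)[1:-1]  # interior quantiles
    return np.percentile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the categorical index of its interval."""
    return np.digitize(values, cuts)

# Example: discretize a hypothetical technical indicator into 4 categories
# over 660 trading days, matching the sample size reported above.
indicator = np.random.default_rng(0).normal(size=660)
cuts = equal_frequency_cuts(indicator, 4)
categories = discretize(indicator, cuts)
print(cuts)                       # three cut points defining four intervals
print(np.bincount(categories))    # roughly 165 samples per interval
```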

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting a deep learning open source framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, research developers' use of deep learning frameworks should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the framework, and the supply of GPU resources. At the proliferation stage of the deep learning framework, fourth, a company builds a deep learning framework platform that improves the developers' research efficiency and effectiveness, for example, by automatically optimizing the hardware (GPU) environment. Fifth, a deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework is proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, enterprise use of the deep learning framework, and enterprise-wide diffusion of the framework. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these pre-consideration steps are clear, the next two steps (enterprise use and enterprise-wide diffusion) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean; Rho, Kyung-Taeg; Park, Myong-Soon
    • The KIPS Transactions:PartC / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, the deployment of IP multicast has been limited by problems with traffic control in multicast routing, multicast address allocation on the global Internet, reliable multicast transport techniques, etc. Recently, with the increase in multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as a potential parent, but the node's degree limit can prevent this, so the generated tree has low efficiency. To reduce joining latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. In this method, a node forwards the joining request message to its own children, so in ordinary times there is no overhead to maintain the tree; when a joining request arrives, the increased number of search messages reduces the joining latency. Searching more nodes also helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switches while varying the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
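The two-level descent can be sketched as follows. The Node class, the coordinate-based distance, and the example tree are hypothetical stand-ins for overlay members and measured network distance; the sketch only illustrates the search mechanism described above, not the full protocol.

```python
class Node:
    """Hypothetical overlay member with a 1-D position standing in for latency."""
    def __init__(self, name, pos, degree_limit=2):
        self.name, self.pos = name, pos
        self.degree_limit = degree_limit
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def find_parent(root, member_pos):
    """Descend two levels per query toward the nearest node with spare degree."""
    def dist(n):
        return abs(n.pos - member_pos)

    current = root
    while True:
        # One query to `current` also returns its children and grandchildren,
        # so two tree levels are searched per round-trip.
        candidates = current.children + [g for c in current.children
                                         for g in c.children]
        feasible = [n for n in candidates if len(n.children) < n.degree_limit]
        if not feasible:
            return current
        best = min(feasible, key=dist)
        # Stop if current itself has spare degree and is at least as close
        # as anything found two levels below it.
        if len(current.children) < current.degree_limit and dist(current) <= dist(best):
            return current
        current = best

# Tiny example tree: root is already at its degree limit, so the join
# request descends toward the nearest feasible node.
root = Node("root", 0, degree_limit=2)
a = root.add(Node("a", 10))
b = root.add(Node("b", 50))
a1 = a.add(Node("a1", 12))
print(find_parent(root, member_pos=13).name)  # -> 'a1'
```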

A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun; Lee, Kuk-Hie; Gu, Bon-Jae; Kim, Min-Seog
    • Information Systems Review / v.14 no.1 / pp.85-103 / 2012
  • ISMP is a method for clearly specifying the user requirements in the RFP (Request for Proposal) of IS development projects. Unlike conventional methods of RFP preparation, which describe the user requirements of target systems rather superficially, ISMP systematically identifies the business needs and the status of information technology, analyzes the user requirements in detail, and defines the specific functions of the target systems in detail. By increasing the clarity of the RFP, the scale and complexity of the related work can be calculated accurately, responding companies can prepare clear proposals, and the fairness of proposal evaluation can be improved as well. Above all, the chronic problems of this field, i.e., misunderstandings and conflicts between users and developers, excessive burden on developers, etc., can be resolved. This case study analyzes the execution process, accomplishments, problems, and success factors of two pilot projects that introduced ISMP for the first time. The ISMP execution procedures at the actual sites were verified, and the way user needs are described in the RFP was examined. Satisfaction with the ISMP-based RFPs was found to be high compared to conventional RFPs. Although some problems occurred, such as difficulties in preparing the RFP and increased workload due to the lack of understanding of and experience with ISMP, overall there were positive effects: a well-established scope of the target systems, improved information sharing and cooperation between users and developers, seamless communication between issuing customer corporations and IT service companies, and a reduction of changes in user requirements. As a result of action-research-style in-depth interviews with the persons in charge of the actual work, the following ISMP success factors were derived: prior consensus on the need for ISMP, acquisition of execution resources through the support of the CEO and CIO, and selection of an appropriate specification level for the user requirements. The results of this study will provide useful field information to corporations considering the adoption of ISMP and to IT service firms, and will suggest meaningful future research directions to researchers in the field of IT service competitive advantage.

Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon; Hur, Su Jin; Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.11 / pp.281-290 / 2020
  • With the spread of smartphones, services that use personalized data are increasing. In particular, healthcare-related services deal with a variety of data, and data visualization techniques are used to show this data effectively. As data visualization techniques are used, interaction with visualizations is naturally emphasized as well. In the PC environment, interaction with a data visualization is performed with a mouse, so various kinds of data filtering are provided. In a mobile environment, by contrast, the screen is small and it is hard to tell whether interaction is possible, so only the limited visualizations provided by the app are available through button touches. To overcome this limitation of interaction in the mobile environment, we enable data visualization interactions through conversations with a chatbot so that users can examine their individual data through various visualizations. To do this, the user's natural language question must be converted into a database query, and the result data must be retrieved by running the converted query against a database that stores data periodically. Many current studies convert natural language into queries, but converting user questions into queries in the context of a visualization has not yet been studied. Therefore, in this paper, we focus on query generation in a situation where the data visualization technique has been determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step-count data; filtering by x-axis period was shown as a bar graph, and the comparison between two groups was shown as a line graph. To develop a natural language processing model that can receive the requested information through visualization, about 15,800 training examples were collected through a survey of 1,000 people. As a result of algorithm development and performance evaluation, about 89% accuracy was obtained for the classification model and about 99% accuracy for the query generation model.
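A minimal sketch of this two-stage pipeline appears below. The keyword-based classifier merely stands in for the paper's trained models (reported at ~89%/99% accuracy), and the step_counts table and its column names are assumptions for illustration.

```python
def classify(utterance):
    """Toy stand-in for the trained intent classification model."""
    return "compare_groups" if "compare" in utterance.lower() else "filter_period"

def generate_query(intent, params):
    """Produce the parameterized SQL behind the requested visualization."""
    if intent == "filter_period":  # rendered as a bar graph in the test scenario
        return ("SELECT day, steps FROM step_counts WHERE day BETWEEN ? AND ?",
                (params["start"], params["end"]))
    # comparison between two groups, rendered as a line graph in the scenario
    return ("SELECT day, user_group, AVG(steps) FROM step_counts "
            "WHERE user_group IN (?, ?) GROUP BY day, user_group",
            (params["group_a"], params["group_b"]))

intent = classify("show my steps for the first week of November")
sql, args = generate_query(intent, {"start": "2020-11-01", "end": "2020-11-07"})
print(intent, "->", sql, args)
```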

A Study on Outplacement Countermeasure and Retention Level Examination Analysis about Outplacement Competency of Special Security Government Official (특정직 경호공무원의 전직역량에 대한 보유수준 분석 및 전직지원방안 연구)

  • Kim, Beom-Seok
    • Korean Security Journal / no.33 / pp.51-80 / 2012
  • This study summarizes the main contents of Beomseok Kim's doctoral dissertation. Its purpose is to present an analysis of the retention level of the outplacement competencies of special security government officials, together with outplacement support measures, based on a questionnaire survey. To achieve this purpose effectively, a retention-level questionnaire covering four groups of outplacement competency and twenty-two subcategories was administered to six hundred special security government officials relevant to outplacement, aged over forty and of administrative grade five, with outplacement experience as outplacement successes, outplacement failures, or outplacement expectants. The questionnaire items comprise four competency groups and twenty-two subcategories: the knowledge competency group with four subcategories (expert knowledge, outplacement knowledge, self-comprehension, and organization comprehension); the skill competency group with nine subcategories (job skill, job performance skill, problem-solving skill, reforming skill, communication skill, organization management skill, crisis management skill, career development skill, and human network application skill); the attitude-emotion competency group with seven subcategories (positive attitude, active attitude, responsibility, professionalism, devoting-sacrificing attitude, affinity, and self-control); and the value-ethics competency group with two subcategories (ethical consciousness and morality). The respondents regard the twenty-two outplacement competencies highly, and they consider themselves well qualified in the subcategories scored above 4.0, such as expert knowledge, active attitude, responsibility, ethics, and morality, while the other subcategories, scored below average, still need to be improved. Thus, the following suggestions are made for successful outplacement. First, individual effort is essential to strengthen capabilities on the basis of accurate self-evaluation; for this, awareness and self-concept need to be redefined so that officials can face reality by readjusting their career goals to a realistic level. Second, an active career development plan to improve shortcomings in outplacement competency is required. Third, it is necessary to establish outplacement training infrastructure, such as online/offline training systems and learning facilities, to reinforce user-oriented outplacement training as a regular course before, during, and after retirement.

A Time Series Analysis of Urban Park Behavior Using Big Data (빅데이터를 활용한 도시공원 이용행태 특성의 시계열 분석)

  • Woo, Kyung-Sook; Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.48 no.1 / pp.35-45 / 2020
  • This study focused on the park as a space that supports the behavior of urban citizens in modern society. Modern city parks do not play a single specific role; they are used by many people, so their function and meaning may change depending on users' behavior. In addition, online data may now influence which parks people choose to visit and how they use them. This study therefore analyzed the change of behavior in Yeouido Park, Yeouido Hangang Park, and Yangjae Citizens' Forest from 2000 to 2018 using a time series analysis. The analysis employed Big Data techniques such as text mining and social network analysis. The summary of the study is as follows. The usage behavior of Yeouido Park changed over time from "Ride" (Dynamic Behavior) in the first period (I), to "Take" (Information Communication Service Behavior) in the second period (II), "See" (Communicative Behavior) in the third period (III), and "Eat" (Energy Source Behavior) in the fourth period (IV). In the case of Yangjae Citizens' Forest, the usage behavior was "Walk" (Dynamic Behavior) in the first, second, and third periods (I, II, III) and "Play" (Dynamic Behavior) in the fourth period (IV). Looking at the factors affecting behavior, Yeouido Park had various factors related to sports, leisure, culture, art, and spare time compared to Yangjae Citizens' Forest, whereas the factors affecting the main usage behavior of Yangjae Citizens' Forest were various natural resource elements. Second, the behavior in the target areas was found to concentrate on certain main behaviors over time, which played a role in selecting or limiting future behaviors. These results indicate that the spaces and facilities of the target areas were not utilized evenly: diverse behaviors did not occur, and a certain main behavior dominated in each area. This study is significant in that it analyzes the usage of urban parks using Big Data techniques and finds that urban parks are being transformed into play spaces where consumption occurs, beyond their traditional role of rest and walking. The behavior occurring in modern urban parks is changing in both quantity and content. Therefore, through discussions based on the behavior data collected through Big Data, we can better understand how citizens use city parks. The study also found that static behaviors in both parks had a great impact on other behaviors.
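As a rough illustration of the text mining step, the sketch below tallies behavior keywords per period and reports the dominant one. The sample posts and the keyword list are hypothetical; the study's actual pipeline (Korean-language text mining plus social network analysis over 2000-2018 documents) is far richer.

```python
from collections import Counter

# Hypothetical behavior vocabulary standing in for the study's Korean keywords.
BEHAVIOR_KEYWORDS = {"ride", "walk", "see", "eat", "take", "play"}

def dominant_behavior(posts):
    """Return the most frequent behavior keyword (and its count) in a period's posts."""
    counts = Counter(word for post in posts
                     for word in post.lower().split()
                     if word in BEHAVIOR_KEYWORDS)
    return counts.most_common(1)[0] if counts else None

# Toy per-period corpora; real input would be large online document collections.
period_posts = {
    "period I  (2000-2004)": ["ride a bike along the river", "nice place to ride"],
    "period IV (2015-2018)": ["eat lunch on the lawn", "good food to eat here"],
}
for period, posts in period_posts.items():
    print(period, "->", dominant_behavior(posts))
```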

Developing English Proficiency by Using English Animation (영어애니메이션을 활용한 영어 의사소통 능력 향상에 관한 연구)

  • Jung, Jae-Hee
    • Cartoon and Animation Studies / s.37 / pp.107-142 / 2014
  • The purpose of this study is to examine the effects of teaching English with animation on college students' communicative competence and motivation. To achieve this purpose, the study presents an effective integrative teaching model for developing students' communicative competence. The study created an animation-based English teaching model using the animation Frozen and applied it to lectures. Using animation in the classroom was a creative English teaching technique involving authentic activities such as English drama, an English guide contest, and various communicative activities. A case study on the use of animation in English classes was examined, and the language teaching syllabus was provided. To investigate the motivation and proficiency of the learners, the writer chose 79 students who took the lecture. The study found that the students' motivation and proficiency in English improved significantly. The results of the experiment are as follows. First, using animation in the English class had a meaningful influence on students' intrinsic motivation to learn English. Second, using animation in the English class was effective for developing students' English proficiency. Third, appropriate materials should be selected and applied to real classroom activities. In conclusion, one disadvantage of classroom learning is the lack of communication and authentic interaction found in real life, so an integrative teaching methodology that combines English content with English animation content is an effective method to improve students' intrinsic motivation in the age of the global village.

Long-term Result after Repair of Sinus Valsalva Aneurysm Rupture (발살바동류 및 파열의 수술 후 장기 성적)

  • Lim, Sang-Hyun; Chang, Byung-Chul; Joo, Hyun-Chul; Kang, Meyun-Shick; Hong, You-Sun
    • Journal of Chest Surgery / v.38 no.10 s.255 / pp.693-698 / 2005
  • Background: Sinus valsalva aneurysm (SVA) is a rare disease that is frequently accompanied by ventricular septal defect and aortic valve regurgitation. Several surgical modalities have been applied to treat SVA, but there has been no report on long-term results after surgical repair in Korea. We reviewed our 28 years of experience and analyzed the long-term results after treatment of sinus valsalva aneurysm with or without rupture. Material and Method: Between March 1974 and February 2002, 81 patients were operated on under the diagnosis of sinus valsalva aneurysm or sinus valsalva aneurysm rupture. We retrospectively reviewed the patients' records. The mean age of the patients was 29.2±11.5 years, and 49 were male. Accompanying diseases were as follows: VSD in 50, PDA in 2, Behcet's disease in 2, TOF in 1, RVOTO in 1, and AAE in 1. Seventy-seven patients (95%) had sinus valsalva rupture, and 14 patients had accompanying subacute bacterial endocarditis. The degree of aortic valve regurgitation was as follows: grade I in 8, II in 10, III in 9, and IV in 4. The most common rupture site was the right coronary sinus (66 patients, 81%), and the most common communication site was the right ventricle (53 patients). In the repair of sinus valsalva rupture, a patch was used in 37 patients and direct suture in 38 patients. Result: There was one surgical death (1.2%). Follow-up was completed in 78 patients (97.5%), and the mean follow-up period was 123.3±80.9 months (range 3 to 330 months). During the follow-up period, 3 patients died (3.8%): one of heart failure, another of arrhythmia, and the other of an unknown cause. Two patients developed complete atrioventricular block during the follow-up period, and there were no other operation-related events or complications. Kaplan-Meier survival analysis revealed 92.5±3.5% survival at 15 and 27 years, which seems satisfactory. Conclusion: Long-term surgical results and survival after repair of sinus valsalva aneurysm with or without rupture are satisfactory.
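For readers who want to reproduce this kind of estimate, the sketch below computes a Kaplan-Meier survival curve with the third-party lifelines library. The follow-up durations and event flags are hypothetical placeholders, not the patient records from the study.

```python
from lifelines import KaplanMeierFitter

# Follow-up time in months; event=1 for death, 0 for censored at last contact.
# These eight records are illustrative only.
durations = [330, 180, 123, 96, 240, 60, 300, 150]
events    = [0,   1,   0,   1,  0,   1,  0,   0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="post-repair survival")

print(kmf.survival_function_)  # estimated S(t) at each observed event time
print(kmf.predict(180))        # survival probability at 15 years (180 months)
```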