• Title/Abstract/Keywords: Google matrix


A Study on the Service Design for Online Service Company to Enhance User Experience

  • Lee, Ji-Hyun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.1
    • /
    • pp.101-107
    • /
    • 2012
  • Objective: The aim of this study is to investigate service design cases by online service companies and to suggest a framework for understanding them. Background: Recently, exploratory service design cases by online service companies such as Google, Apple, and NHN have emerged. Service design and online service experience design have been booming among user experience professionals, but these two areas are not clearly defined. It is therefore worth studying the definition and key factors of service design and online service experience design, as well as investigating service design cases by online service companies. Method: Because of the diversity of service design cases by online service companies, this study reviewed online resources and literature on the top five online service companies in the USA and Korea. Furthermore, this study used an expert interview with a practitioner who worked on service design at NHN. To understand the attributes of the cases, this study developed a classification scheme with three types of service design: service design as an extension of online service, space design, and event service design. Finally, this study suggested a new framework for service design cases. Results: This study investigated service design cases by online service companies and suggests key issues and frameworks for uncovering service design for online services. Conclusion: Service design cases for online services were analyzed with a 2×2 matrix (extension, enrichment × product, service) to explain their characteristics and attributes. NHN's Knowledge-iN bookshelf at NHN Library1 is a unique form of service design as a tool for enriching the online experience. Application: The results of this study may help to understand service design cases and to plan new service design for online service companies with a structured framework.

A Study on the Application of SNS Big Data to the Industry in the Fourth Industrial Revolution (제4차 산업혁명에서 SNS 빅데이터의 외식산업 활용 방안에 대한 연구)

  • Han, Soon-lim;Kim, Tae-ho;Lee, Jong-ho;Kim, Hak-Seon
    • Culinary science and hospitality research
    • /
    • v.23 no.7
    • /
    • pp.1-10
    • /
    • 2017
  • This study proposed an SNS big data analysis method for the food service industry in the 4th industrial revolution. The study analyzed keywords of the fourth industrial revolution using Google Trends, based on data posted on SNS from January 1, 2016 to September 5, 2017 (1 year and 8 months) and collected with "Social Metrics". Through these social insights, words related to cooking were analyzed and visualized in terms of attributes, products, hobbies, and leisure. As a result of the analysis, keywords such as cooking, entrepreneurship, franchise, restaurant, job search, Twitter, family, friends, menu, reaction, and video were found. As a theoretical implication, this study proposed how to utilize big data produced from various online materials for research on the restaurant business, how to interpret atypical data as meaningful data, and the basic direction of field application. To support the positioning of restaurant companies' customers in the future, this study suggests more detailed and in-depth consumer sentiment as a basic resource for developing marketing data through menu development and changes in customers' perceptions. In addition, this study provides marketing implications for the foodservice industry and shows how to use big data in the cooking industry in preparation for the fourth industrial revolution.
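The abstract above describes extracting high-frequency keywords and their within-post co-occurrence from SNS texts before visualization. As a rough illustration of that kind of preprocessing only, here is a minimal Python sketch; the post texts and keyword list are invented placeholders, not the study's Social Metrics data.

```python
# Minimal sketch: keyword frequency and co-occurrence counting over SNS posts.
# The posts and keyword list below are hypothetical toy inputs.
from collections import Counter
from itertools import combinations

posts = [
    "cooking entrepreneurship franchise restaurant",
    "restaurant menu video reaction friends",
    "cooking menu family friends",
]
keywords = {"cooking", "entrepreneurship", "franchise", "restaurant",
            "menu", "video", "reaction", "family", "friends"}

freq = Counter()
cooc = Counter()
for post in posts:
    tokens = sorted({t for t in post.lower().split() if t in keywords})
    freq.update(tokens)                       # keyword appearance frequency
    cooc.update(combinations(tokens, 2))      # within-post co-occurrence pairs

print(freq.most_common(5))
print(cooc.most_common(5))
```

The resulting frequency and co-occurrence counts are the kind of tabular input that word-cloud or network visualization tools typically consume.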

Combination of Brain Cancer with Hybrid K-NN Algorithm using Statistical of Cerebrospinal Fluid (CSF) Surgery

  • Saeed, Soobia;Abdullah, Afnizanfaizal;Jhanjhi, NZ
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.2
    • /
    • pp.120-130
    • /
    • 2021
  • Spinal cord or CSF surgery is a very complex process that requires continuous pre- and post-surgery evaluation to better diagnose the disease. To automatically detect suspected tumor areas and symptoms of CSF leakage while a tumor develops inside the brain, we propose a new method based on computer software that generates statistical results from data gathered during surgeries and operations. Statistical computation and data collection were performed through Google sources for the UK National Cancer Database. The purpose of this study is to address the problems of accurately handling missing values in the hybrid KNN method and of finding tumor distance in brain cancer or CSF images. This research aims to create a framework that can classify damaged cancer or tumor areas using high-dimensional image segmentation and the Laplace transformation method. The high-dimensional image segmentation method is implemented with software modelling techniques that measure the width, percentage, and size of cells within the brain, and the efficiency of the hybrid KNN algorithm and the Laplace transformation is enhanced by converting missing values into non-zero values using the Frobenius matrix. Our proposed algorithm takes the longest values of KNN (K = 1-100), which is demonstrated in a 4-dimensional modulation method that monitors the lighting field and can be used in the field of light emission. Conclusion: This approach dramatically improves the efficiency of the hybrid KNN method and the detection of the tumor region using the 4-D segmentation method. The simulation results verified that the proposed method achieves 92% sensitivity, 60% specificity, and 70.50% accuracy.
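The abstract above centers on a hybrid KNN classifier that must cope with missing values. As a point of reference only, the following is a minimal sketch of plain K-nearest-neighbour classification with mean imputation of missing entries; the toy feature vectors are invented, and this is not the authors' hybrid KNN / Laplace-transform / Frobenius-matrix pipeline.

```python
# Minimal sketch: plain KNN classification with mean imputation for NaN values.
# Synthetic 2-D toy features only; illustrative, not the paper's method.
import numpy as np

def impute_mean(X):
    X = X.copy()
    col_mean = np.nanmean(X, axis=0)          # per-column mean ignoring NaN
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]
    return X

def knn_predict(X_train, y_train, X_test, k=3):
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]          # labels of k nearest points
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

X_train = impute_mean(np.array([[1.0, 2.0], [1.2, np.nan], [8.0, 9.0], [7.5, 8.5]]))
y_train = np.array([0, 0, 1, 1])
X_test = impute_mean(np.array([[1.1, 2.1], [np.nan, 8.8]]))
print(knn_predict(X_train, y_train, X_test, k=3))
```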

Artificial Intelligence and College Mathematics Education (인공지능(Artificial Intelligence)과 대학수학교육)

  • Lee, Sang-Gu;Lee, Jae Hwa;Ham, Yoonmee
    • Communications of Mathematical Education
    • /
    • v.34 no.1
    • /
    • pp.1-15
    • /
    • 2020
  • Today's healthcare, intelligent robots, smart home systems, and car sharing are already being transformed by cutting-edge information and communication technologies such as Artificial Intelligence (AI), the Internet of Things, the Internet of Intelligent Things, and big data, and these technologies are deeply affecting our lives. In factories, robots have been working for humans for several decades (FA, OA); AI doctors are working in hospitals (Dr. Watson); and AI speakers (Giga Genie) and AI assistants (Siri, Bixby, Google Assistant) are steadily improving their natural language processing. Now, in order to understand AI, knowledge of mathematics has become essential rather than optional. Mathematicians have therefore been given a role in explaining the mathematics behind AI that makes these things possible. The authors wrote a textbook, 'Basic Mathematics for Artificial Intelligence', by arranging the mathematical concepts and tools needed to understand AI and machine learning into one or two semesters, and organized lectures for undergraduate and graduate students of various majors to explore careers in artificial intelligence. In this paper, we share our experience of conducting this class; the full contents are available at http://matrix.skku.ac.kr/math4ai/.

Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence
    • /
    • v.21 no.1
    • /
    • pp.187-192
    • /
    • 2023
  • In this study, trends in ICT education were investigated by analyzing the frequency of appearance of keywords related to machine learning and by applying the convergence of iterated correlations (CONCOR) technique. A total of 304 papers published from 2018 to the present in registered journals were retrieved from Google Scholar using "ICT education" as the keyword, and 60 papers pertaining to ICT education were selected through a systematic literature review. Keywords were then extracted from the titles and abstracts of the papers. For word frequency and indicator data, 49 keywords with high appearance frequency were extracted by analyzing term frequency, via the term frequency-inverse document frequency (TF-IDF) technique from natural language processing, together with co-occurrence frequency. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters composed of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" were identified as main keywords. Second, an N-GRAM network graph with "education" as the keyword showed that "curriculum" and "utilization" exhibited the highest level of correlation. Third, a cluster analysis with "education" as the keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that the practical research needed for ICT education can be conducted by analyzing and identifying ICT education trends.
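The keyword extraction step above relies on standard TF-IDF weighting. A minimal sketch of that weighting using scikit-learn follows; the three abstract strings are invented placeholders, not the 60-paper corpus analyzed in the study.

```python
# Minimal sketch: TF-IDF keyword extraction over paper abstracts.
# The abstract texts are hypothetical toy documents.
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "ICT education curriculum improvement for programming students",
    "analysis of ICT utilization in education and information curriculum",
    "programming education research on student improvement and utilization",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(abstracts)        # documents x terms weight matrix
terms = vectorizer.get_feature_names_out()

# Rank terms by their summed TF-IDF weight across the corpus.
scores = tfidf.sum(axis=0).A1
top = sorted(zip(terms, scores), key=lambda p: p[1], reverse=True)[:5]
print(top)
```

In a pipeline like the one described, the top-weighted terms would then feed the co-occurrence network and centrality analysis.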

Professional Baseball Viewing Culture Survey According to Corona 19 using Social Network Big Data (소셜네트워크 빅데이터를 활용한 코로나 19에 따른 프로야구 관람문화조사)

  • Kim, Gi-Tak
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.6
    • /
    • pp.139-150
    • /
    • 2020
  • The data processing of this study focused on Textom and social media words in three areas: 'Corona 19 and professional baseball', 'Corona 19 unrelated to professional baseball', and 'Corona 19 and professional sports'. The data were collected and refined in a web environment and then processed in batch, and the Ucinet6 program was used to visualize them. Specifically, data were collected from the web using Naver, Daum, and Google channels, and the extracted words were condensed into 30 words through expert meetings and used in the final study. The 30 extracted words were visualized through a matrix, and a CONCOR analysis was performed to identify clusters of similar and common words. As a result of the analysis, the clusters related to Corona 19 and professional baseball consisted of one central cluster and five peripheral clusters, and content related to the opening of professional baseball in the wake of Corona 19 was mainly searched. The clusters related to Corona 19 but unrelated to professional baseball consisted of one central cluster and five peripheral clusters, and keywords on the position of professional baseball games in relation to Corona 19 were mainly searched. The clusters related to Corona 19 and professional sports consisted of one central cluster and five peripheral clusters, and keywords related to the start of professional sports in the aftermath of Corona 19 were mainly searched.
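Both this abstract and the ICT-education study above rely on CONCOR to group words from a co-occurrence matrix. The following is a minimal sketch of the underlying idea, convergence of iterated correlations: repeatedly replace the matrix with the correlation matrix of its rows until the entries approach ±1, then split words into two blocks by sign. The 5-word matrix (with word frequencies placed on the diagonal) is an invented toy, not the 30-word matrix from the study.

```python
# Minimal sketch of the CONCOR idea (convergence of iterated correlations).
# The co-occurrence matrix is a hypothetical toy; diagonal holds word frequencies.
import numpy as np

words = ["corona", "baseball", "opening", "sports", "spectator"]
C = np.array([[6, 4, 3, 1, 1],
              [4, 7, 5, 1, 2],
              [3, 5, 6, 1, 2],
              [1, 1, 1, 5, 4],
              [1, 2, 2, 4, 5]], dtype=float)

M = C.copy()
for _ in range(50):
    M = np.corrcoef(M)                          # correlation between rows, iterated
    if np.all(np.abs(np.abs(M) - 1) < 1e-6):    # entries have converged to +/-1
        break

block = M[0] > 0                                # sign pattern splits words into two blocks
print([w for w, b in zip(words, block) if b])
print([w for w, b in zip(words, block) if not b])
```

Real CONCOR implementations (as in Ucinet6) apply the same splitting recursively to obtain more than two blocks.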

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on the links in a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to a folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to the way humans evaluate, where different items are assigned specific weights, which are then summed to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. In the case that many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
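Since the search keyword for this result list is "Google matrix", and the abstract above contrasts its mutual-interaction approach with PageRank and HITS, a minimal sketch of PageRank power iteration on the Google matrix of a tiny toy graph may help orient the reader. The 4-node adjacency matrix and the damping factor 0.85 are illustrative assumptions, not data or parameters from the paper.

```python
# Minimal sketch: PageRank via power iteration on the Google matrix of a toy graph.
# Adjacency matrix A and damping factor d are illustrative assumptions only.
import numpy as np

A = np.array([[0, 1, 1, 0],    # A[i][j] = 1 if page i links to page j
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = A.shape[0]
out_deg = A.sum(axis=1, keepdims=True)
# Row-stochastic transition matrix; dangling rows (no out-links) get uniform 1/n.
S = np.where(out_deg > 0, A / np.where(out_deg == 0, 1, out_deg), 1.0 / n)

d = 0.85
G = d * S + (1 - d) / n * np.ones((n, n))   # the Google matrix

r = np.full(n, 1.0 / n)                     # start from the uniform distribution
for _ in range(100):
    r_new = r @ G                           # one power-iteration step
    if np.allclose(r_new, r, atol=1e-12):
        break
    r = r_new

print(np.round(r, 4))                       # stationary PageRank vector
```

HITS, by contrast, alternates hub and authority updates (h = A a, a = Aᵀ h), which is presumably the repeated matrix multiplication the abstract refers to when comparing calculation costs.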