• Title/Summary/Keyword: GoogleNet


A Semantic Network Analysis of Big Data regarding Food Exhibition at Convention Center (전시컨벤션센터 식품박람회와 관련된 빅데이터의 의미연결망 분석)

  • Kim, Hak-Seon
    • Culinary science and hospitality research
    • /
    • v.23 no.3
    • /
    • pp.257-270
    • /
    • 2017
  • The purpose of this study was to visualize the semantic network of big data related to food exhibitions at convention centers. To this end, the study collected data containing the keywords 'coex food exhibition' and 'bexco food exhibition' from web pages and news on Google over one year, from January 1 to December 31, 2016. Data were collected using TEXTOM, a data collecting and processing program. From those data, degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality were analyzed using NetDraw along with UCINET 6. The results showed that the web visibility of hospitality and destinations was high. Web visibility was also high for convention center programs, such as festival, exhibition, k-pop, and event, and for hospitality-related words, such as tourists, service, hotel, cruise, cuisine, and travel. Convergence of iterated correlations revealed 4 clusters, named "Coex", "Bexco", "Nations", and "Hospitality". It is expected that this diagnosis of food exhibitions at convention centers, drawn from web information reflecting changes in the domestic environment, will serve as baseline data for establishing convention marketing strategies.
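The degree centrality measure used above can be sketched in plain Python on a toy keyword co-occurrence network (the keywords and edges below are illustrative only, not the paper's actual data; UCINET's normalized form divides each node's neighbor count by n-1):

```python
# Degree centrality on a toy keyword co-occurrence network.
# Keywords and edges are illustrative, not taken from the paper's data.
cooccurrences = [
    ("festival", "exhibition"), ("exhibition", "event"),
    ("hotel", "service"), ("service", "tourists"),
    ("exhibition", "tourists"),
]

# Build an adjacency set for each keyword.
adjacency = {}
for a, b in cooccurrences:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# Normalized degree centrality: neighbor count / (n - 1).
n = len(adjacency)
degree_centrality = {k: len(v) / (n - 1) for k, v in adjacency.items()}
```

Here 'exhibition' co-occurs with three of the five other keywords, so its centrality is 0.6; closeness, betweenness, and eigenvector centrality follow the same network but weight paths rather than direct neighbors.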

Sketch Recognition Using LSTM with Attention Mechanism and Minimum Cost Flow Algorithm

  • Nguyen-Xuan, Bac;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.15 no.4
    • /
    • pp.8-15
    • /
    • 2019
  • This paper presents a solution to the 'Quick, Draw! Doodle Recognition Challenge' hosted by Google. Doodles are drawings composed of concrete representational forms or abstract lines creatively expressed by individuals. In this challenge, a doodle is presented as a sequence of sketches. At the sketch level, to learn the pattern of strokes representing a doodle, we propose a sequential model stacking multiple convolution layers and Long Short-Term Memory (LSTM) cells with an attention mechanism [15]. At the image level, we use multiple models pre-trained on ImageNet to recognize the doodle. Finally, an ensemble and a post-processing method using the minimum cost flow algorithm are introduced to combine the multiple models and achieve better results. In this challenge, our solution garnered 11th place among 1,316 teams. Our performance was 0.95037 MAP@3, only 0.4% lower than the winner's, which demonstrates that our method is very competitive. The source code for this competition is published at: https://github.com/ngxbac/Kaggle-QuickDraw.
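The MAP@3 metric reported above is straightforward to compute: for each doodle, a submission lists up to three ranked guesses, and credit is 1/rank of the first correct one. A minimal sketch (the doodle labels below are invented for illustration):

```python
def map_at_3(predictions, labels):
    """Mean Average Precision at 3: each prediction is a ranked list of
    up to 3 class guesses; credit is 1/rank of the first correct guess."""
    total = 0.0
    for guesses, label in zip(predictions, labels):
        for rank, guess in enumerate(guesses[:3], start=1):
            if guess == label:
                total += 1.0 / rank
                break
    return total / len(labels)

# Two doodles: first correct at rank 1, second correct at rank 2.
score = map_at_3([["cat", "dog", "car"], ["sun", "moon", "star"]],
                 ["cat", "moon"])
```

With one rank-1 hit and one rank-2 hit, the score is (1 + 0.5) / 2 = 0.75.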

AI Processor Technology Trends (인공지능 프로세서 기술 동향)

  • Kwon, Youngsu
    • Electronics and Telecommunications Trends
    • /
    • v.33 no.5
    • /
    • pp.121-134
    • /
    • 2018
  • The Von Neumann based architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends have shown significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently emerged into the spotlight thanks to remarkable milestones such as AlexNet-CNN and Alpha-Go, whose neural-network based deep learning methods have achieved a ground-breaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breath-taking pace as evidenced by the rate of new neural networks being announced. However, traditional Von Neumann based architectures have proven to be inadequate in terms of computation power, and inherently inefficient in their processing of vastly parallel computations, which is a characteristic of deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of super low-power AI processor chips. In this article, we present the current trends in computation platform, parallel processing, AI processor, and super-threaded AI processor research being conducted at ETRI.

Cody Recommendation System Using Deep Learning and User Preferences

  • Kwak, Naejoung;Kim, Doyun;Kim, Minho;Kim, Jongseo;Myung, Sangha;Yoon, Youngbin;Choi, Jihye
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.321-326
    • /
    • 2019
  • As AI technology has recently been introduced into various fields, it is also being applied to fashion. This paper proposes a system for recommending cody (coordinated outfit) images suitable for clothes selected by a user. The proposed system consists of a user app, a cody recommendation module, and a server that interconnects the modules and manages the database. The cody recommendation system classifies clothing images into 80 categories composed of feature combinations, selects multiple representative reference images for each category, and selects 3 full-body cody images for each representative reference image. The cody images for each representative reference image were determined by analyzing user preferences collected with a Google survey app. The proposed algorithm classifies the clothing image selected by the user into a category, finds the most similar image among that category's reference images, and transmits the linked cody images to the user's app. The proposed system uses the ResNet-50 model to categorize the input image and measures similarity using ORB and HOG features to select a reference image within the category. We tested the proposed algorithm in an Android app, and the results show that the recommendation system runs well.
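The reference-image selection step above reduces to a nearest-neighbor search over feature vectors. A minimal sketch of that matching logic, using cosine similarity over small hand-made vectors that merely stand in for real ORB/HOG descriptors (the vectors and reference names are hypothetical):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar_reference(query, references):
    """Return the key of the reference vector most similar to the query."""
    return max(references, key=lambda k: cosine_similarity(query, references[k]))

# Hypothetical 4-dim vectors standing in for ORB/HOG descriptors.
refs = {"ref_a": [1.0, 0.0, 0.2, 0.1], "ref_b": [0.0, 1.0, 0.9, 0.0]}
best = most_similar_reference([0.9, 0.1, 0.3, 0.1], refs)
```

In the actual system the descriptors would come from ORB keypoint matching and HOG histograms rather than toy vectors, but the selection rule is the same: pick the reference with the highest similarity score.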

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3668-3684
    • /
    • 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and fuses them at the output end. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video to extract their features, fuses them, and then determines the category of the video action. Google's Xception model and transfer learning are adopted in this paper, with the Xception model pre-trained on ImageNet used as the initial weights. This greatly mitigates the model underfitting caused by the insufficient size of video behavior datasets and can effectively reduce the influence of various disturbing factors in the video, while also improving accuracy and reducing training time. Furthermore, to make up for the shortage of data, the Kinetics-400 dataset was used for pre-training, which greatly improved the accuracy of the model. In this applied research, the expected goal was largely achieved, and the design of the original two-stream model was improved.
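The output-end fusion described above is typically a weighted average of the two streams' per-class softmax scores, followed by an argmax. A minimal sketch (the class probabilities and equal stream weights are illustrative assumptions, not values from the paper):

```python
def late_fusion(spatial_probs, temporal_probs, w_spatial=0.5):
    """Fuse per-class softmax outputs of the spatial and temporal streams
    by weighted averaging, then pick the top-scoring class index."""
    fused = [w_spatial * s + (1.0 - w_spatial) * t
             for s, t in zip(spatial_probs, temporal_probs)]
    return max(range(len(fused)), key=fused.__getitem__), fused

# Hypothetical softmax outputs over 3 action classes.
cls, fused = late_fusion([0.6, 0.3, 0.1], [0.2, 0.6, 0.2])
```

Here the spatial stream favors class 0 while the temporal stream favors class 1; after averaging, class 1 wins, illustrating how motion evidence can override a single-frame appearance cue.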

Research trend analysis of Korean new graduate nurses using topic modeling (토픽모델링을 활용한 신규간호사 관련 국내 연구동향 분석)

  • Park, Seungmi;Lee, Jung Lim
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.27 no.3
    • /
    • pp.240-250
    • /
    • 2021
  • Purpose: The aim of this study is to analyze the research trends of articles on newly graduated Korean nurses over the past 10 years in order to explore strategies for clinical adaptation. Methods: Topics concerning new graduate nurses were extracted from 110 articles published in Korean journals between January 2010 and July 2020. Abstracts were retrieved from 4 databases (DBpia, RISS, KISS, and Google Scholar). Keywords were extracted from the abstracts and cleaned using semantic morphemes. Network analysis and topic modeling were performed using the NetMiner program. Results: The core keywords included 'education', 'training', 'program', 'skill', 'care', 'performance', and 'satisfaction'. In recent articles on new graduate nurses, three major topics were extracted by the Latent Dirichlet Allocation (LDA) technique: 'turnover', 'adaptation', and 'education'. Conclusion: Previous articles focused on exploring the factors related to the adaptation and turnover intentions of new graduate nurses. Further research should examine interventions at the individual, task, and organizational levels to improve the retention of new graduate nurses.

Differentiation among stability regimes of alumina-water nanofluids using smart classifiers

  • Daryayehsalameh, Bahador;Ayari, Mohamed Arselene;Tounsi, Abdelouahed;Khandakar, Amith;Vaferi, Behzad
    • Advances in nano research
    • /
    • v.12 no.5
    • /
    • pp.489-499
    • /
    • 2022
  • Nanofluids have recently triggered substantial scientific interest as cooling media. However, their stability is challenging for successful engagement in industrial applications. Different factors, including temperature, nanoparticle and base fluid characteristics, pH, ultrasonic power and frequency, agitation time, and surfactant type and concentration, determine the nanofluid stability regime. Indeed, it is often too complicated, and even impossible, to accurately find the conditions resulting in a stabilized nanofluid. Furthermore, there are no empirical, semi-empirical, or even intelligent scenarios for anticipating the stability of nanofluids. Therefore, this study introduces a straightforward and reliable intelligent classifier for discriminating among the stability regimes of alumina-water nanofluids based on zeta potential margins. To this end, various intelligent classifiers (i.e., deep learning and multilayer perceptron neural networks, decision tree, GoogleNet, and multi-output least squares support vector regression) were designed, and their classification accuracy was compared. This comparison confirmed that the multilayer perceptron neural network (MLPNN) with the softmax activation function, trained by the Bayesian regularization algorithm, is the best classifier for the considered task. This intelligent classifier accurately detects the stability regimes of more than 90% of 345 different nanofluid samples, achieving an overall classification accuracy of 90.1% and a misclassification rate of 9.9%. This research is the first attempt to anticipate the stability of water-alumina nanofluids from easily measured independent variables.
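The target labels the classifiers learn are derived from zeta potential margins. A minimal rule-based sketch of such a labeling step, using the common colloid-chemistry rule of thumb that |zeta| ≥ 30 mV indicates good electrostatic stability (these thresholds are an assumption for illustration, not necessarily the margins used in the paper):

```python
def stability_regime(zeta_mv):
    """Label a nanofluid's stability regime from its zeta potential (mV).
    The +/-30 mV and +/-20 mV margins are a common rule of thumb,
    not necessarily the exact margins used in the paper."""
    if abs(zeta_mv) >= 30.0:
        return "stable"
    if abs(zeta_mv) >= 20.0:
        return "moderately stable"
    return "unstable"

# Label three hypothetical samples.
labels = [stability_regime(z) for z in (-45.0, 25.0, 5.0)]
```

In the study itself, the MLPNN learns to predict such regime labels from the preparation variables (temperature, pH, sonication settings, surfactant, and so on) rather than from the zeta potential directly.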

Big Data Analysis of the Women Who Score Goal Sports Entertainment Program: Focusing on Text Mining and Semantic Network Analysis

  • Hyun-Myung, Kim;Kyung-Won, Byun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.222-230
    • /
    • 2023
  • The purpose of this study is to provide basic data on sports entertainment programs by collecting unstructured data generated on Naver and Google about the SBS entertainment program 'Women Who Score Goal' ('Kick a Goal'), which began regular broadcast in June 2021, and analyzing public perceptions through data mining, semantic matrix, and CONCOR analysis. Data collection was conducted using Textom, yielding 27,911 items accumulated over the 16 months from June 16, 2021 to October 15, 2022. From the collected data, 80 key keywords related to 'Kick a Goal' were derived through simple frequency and TF-IDF analysis. Semantic network analysis was then conducted to analyze the relationships among these top 80 keywords. Centrality was derived with the UCINET 6.0 program, and its NetDraw module was used to characterize the network and visualize the connections between keywords. CONCOR analysis was conducted to derive clusters of words with similar characteristics based on the semantic network. The analysis yielded a 'Program' cluster related to the broadcast content of 'Kick a Goal', a 'Soccer' cluster covering the show's sporting event, an 'Everyday Life' cluster about the cast's training and daily life beyond the game scenes, and a 'Broadcast Manipulation' cluster reflecting viewers' disappointment over manipulation of the game content.
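The TF-IDF keyword-ranking step above weights a term by its frequency within a document and penalizes terms that appear across many documents. A minimal sketch of the classic formulation, tf x log(N/df), on toy keyword lists (the documents below are invented for illustration, not the collected posts):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Classic TF-IDF: term frequency times log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

# Toy keyword lists standing in for the collected posts.
docs = [["goal", "soccer", "cast"], ["goal", "training"], ["soccer", "goal"]]
scores = tf_idf(docs)
```

Note that 'goal' appears in every document, so its IDF, and hence its TF-IDF score, is zero; this is why TF-IDF surfaces distinctive keywords that raw frequency counts would bury under ubiquitous terms.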

A Study on the necessity of Open Source Software Intermediaries in the Software Distribution Channel (소프트웨어 유통에 있어 공개소프트웨어 중개자의필요성에 대한 연구)

  • Lee, Seung-Chang;Suh, Eung-Kyo;Ahn, Sung-Hyuck;Park, Hoon-Sung
    • Journal of Distribution Science
    • /
    • v.11 no.2
    • /
    • pp.45-55
    • /
    • 2013
  • Purpose - The development and implementation of OSS (Open Source Software) led to a dramatic change in corporate IT infrastructure, from system servers to smart phones, because the performance, reliability, and security functions of OSS are comparable to those of commercial software. Today, OSS has become an indispensable tool for coping with the competitive business environment and the constantly evolving IT environment. However, the use of OSS remains insufficient in small and medium-sized companies and software houses. This study examines the need for OSS intermediaries in the software distribution channel. It has been expected that the role of the OSS intermediary would shrink as the distribution process improves; the purpose of this research is to show, on the contrary, that OSS intermediaries increase the efficiency of the software distribution market. Research design, Data, and Methodology - This study analyzes data gathered online to determine the extent of the intermediaries' impact on the OSS market. Data was collected by building a custom search robot (web crawler). The collection period lasted 9 days, during which a total of 233,021 data points were gathered from sourceforge.net and Apple's App Store, two of the most popular software intermediaries in the world. The collected data was analyzed using Google's Motion Chart. Results - The study found that, beginning in 2006, the production of OSS on Sourceforge.net increased rapidly across the board, but in the second half of 2009 it dropped sharply. Many events could explain this change; however, we found one that fits the effect well: during the same period, the monthly production of software in the App Store was increasing quickly, a trend contrasting with Sourceforge's. Our follow-up analysis suggests that appropriate intermediaries like the App Store can enlarge the OSS market.
The increase was driven by the appearance of B2C software intermediaries like the App Store. The results imply that OSS intermediaries can accelerate OSS distribution, and that development of a better online market is critical for corporate users. Conclusion - In this study, we analyzed 233,021 data points from the online software marketplace Sourceforge.net. The analysis indicates that OSS intermediaries are needed for the vitality of the software distribution market. It is also critical that OSS intermediaries satisfy certain qualifications to play a key role as market makers. This study has several interesting implications. First, the OSS intermediary should make an effort to create a complementary relationship between OSS and proprietary software. Second, the OSS intermediary must possess a business model that shares the benefits with all participants (developers, intermediary, and users). Third, the intermediary should provide OSS of a quality comparable to that of highly complex proprietary software. Taken together, these findings support the conclusion that open source software intermediaries are essential in the software distribution channel.


Pain Assessment in Nonverbal Older Adults with Dementia (언어적 의사소통이 어려운 치매환자에서의 통증 사정)

  • Kim, Hyun Sook;Yu, Su Jeong
    • Journal of Hospice and Palliative Care
    • /
    • v.16 no.3
    • /
    • pp.145-154
    • /
    • 2013
  • This study was performed to evaluate existing pain assessment methods, including tools developed for use with nonverbal older adults with dementia, and to offer recommendations to clinicians based on the evaluations. Computerized searches of literature published after the year 2000 were performed using the databases Google Scholar, RISS, KoreaMed, Medline, ScienceDirect, and CINAHL. The search keywords were 'pain', 'pain assessment', and 'cognitive impairment/dementia'. Pain assessment for non-communicative dementia patients who are unable to self-report their pain is often performed with tools that rely on observation of behavioral indicators, or alternatively on surrogate reporting. While several tools in English and only one in Korean have been suggested for pain assessment based on observed behavioral indicators, none are in common use. In this review, we selectively evaluated the tools known to show relatively high validity and reliability for nonverbal older adults with dementia, namely CNPI, DOLOPLUS 2, PACSLAC, PAINAD, and DS-DAT. It is hoped that the present review of selected tools for assessing pain in this vulnerable population, together with the general recommendations given, will be useful for clinicians in their palliative care practice. Future studies should focus on further validating, in Korean, the tools used to observe nonverbal patients' behavioral indicators of pain.