• Title/Summary/Keyword: science, artificial intelligence

Search Results: 1,441

The Effect of Health and Environmental Message Framing on Consumer Attitude and WoM: Focused on Vegan Product (건강과 환경 메시지 프레이밍에 따른 소비자 태도와 구전에 미치는 영향: 비건 제품을 중심으로)

  • Park, Seoyoung;Lim, Boram
    • Journal of Service Research and Studies / v.13 no.3 / pp.127-146 / 2023
  • Recently, digital advertising has shifted toward delivering messages through short ads of less than 15 seconds, and on social media, ads must convey their message within 5 seconds before consumers skip them. Although advertisements have become shorter, advances in artificial intelligence algorithms and big data analysis have made it possible to deliver personalized messages tailored to consumers' interests. In this changing landscape, delivering tailored messages through short, efficient ads is increasingly important. In this study, we examined the effects of message framing as part of effective message delivery. Specifically, we compared the effects of two framings, "health" and "environment," for vegan products. Growing consumer interest in health and the environment has elevated interest in vegan products, and the vegan market is expanding rapidly. Consumers purchase vegan products not only for personal health benefits but also out of ethical responsibility toward the environment, which can be considered ethical consumption. Previous research has not established differences in effect between health and environment message framings, and has been limited to vegan food products. This study investigates these differences using a dish soap product category. By identifying which advertising message, health or environment, is more effective in promoting vegan products, this study provides insights for companies to refine their message framing strategies.

A Study on the Intention to Use of the AI-related Educational Content Recommendation System in the University Library: Focusing on the Perceptions of University Students and Librarians (대학도서관 인공지능 관련 교육콘텐츠 추천 시스템 사용의도에 관한 연구 - 대학생과 사서의 인식을 중심으로 -)

  • Kim, Seonghun;Park, Sion;Park, Jiwon;Oh, Youjin
    • Journal of Korean Library and Information Science Society / v.53 no.1 / pp.231-263 / 2022
  • The ability to understand and use artificial intelligence (AI)-incorporated technology has become a required basic skillset in today's information age, and members of the university community have increasingly become aware of the need for AI education. Amid these shifting societal demands, both domestic and international university libraries have recognized users' need for educational content centered on AI, but a user-centered service that provides personalized recommendations of digital AI educational content is not yet available. As demand for AI education among university students grows, it is critical that university libraries acquire a clear understanding of user intention toward an AI educational content recommender system and the factors contributing to its success. This study aimed to ascertain the factors affecting acceptance of such a system, using the Extended Technology Acceptance Model with added variables (innovativeness, self-efficacy, social influence, system quality, and task-technology fit) in addition to perceived usefulness, perceived ease of use, and intention to use. Quantitative research was conducted via online surveys of university students, and qualitative research was conducted through written interviews with university librarians. Results show that all groups, regardless of gender, year, or major, intend to use the AI-related Educational Content Recommendation System, with task-technology fit being the dominant variable affecting use intention. University librarians also agreed on the necessity of the recommendation system and cited budget and content quality issues as realistic constraints on such a system.

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk;Hweerang Park;Taisuk Suh;Youngho Cho
    • Journal of Internet Computing and Services / v.24 no.6 / pp.119-125 / 2023
  • The Ukraine-Russia war has prompted a reassessment of the military importance of drones, and North Korea demonstrated the threat in practice through a drone provocation against South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, heightening the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and deployed various drone defense systems. However, efforts to enhance capabilities are disproportionately focused on strike systems, making it difficult to counter swarm drone attacks effectively. In particular, air force bases adjacent to urban areas face significant limitations in using traditional air defense weapons due to the risk of civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models to enhance the survivability of friendly aircraft against AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experiments using synthetic images and precision-reduced models confirmed that the proposed method reduced the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.

Machine- and Deep Learning Modelling Trends for Predicting Harmful Cyanobacterial Cells and Associated Metabolites Concentration in Inland Freshwaters: Comparison of Algorithms, Input Variables, and Learning Data Number (담수 유해남조 세포수·대사물질 농도 예측을 위한 머신러닝과 딥러닝 모델링 연구동향: 알고리즘, 입력변수 및 학습 데이터 수 비교)

  • Yongeun Park;Jin Hwi Kim;Hankyu Lee;Seohyun Byeon;Soon-Jin Hwang;Jae-Ki Shin
    • Korean Journal of Ecology and Environment / v.56 no.3 / pp.268-279 / 2023
  • Artificial intelligence approaches such as machine and deep learning are now widely used to predict variations in water quality in various freshwater bodies. In particular, many researchers have tried to predict the occurrence of cyanobacterial blooms in inland waters, which pose a threat to human health and aquatic ecosystems. The objectives of this study were to: 1) review studies applying machine learning models to predict the occurrence of cyanobacterial blooms and their metabolites, and 2) suggest directions for future studies on predicting cyanobacteria with machine learning models, including deep learning. A systematic literature search and review were conducted using SCOPUS, Elsevier's abstract and citation database. The key results showed that deep learning models were usually used to predict cyanobacterial cell counts, while machine learning models focused on predicting cyanobacterial metabolites such as concentrations of microcystin, geosmin, and 2-methylisoborneol (2-MIB) in reservoirs. There was a distinct difference in the input variables used to predict cyanobacterial cells versus metabolites. Applying deep learning models to constructed big data may be encouraged to build accurate models for predicting cyanobacterial metabolites.

Analysis of Users' Sentiments and Needs for ChatGPT through Social Media on Reddit (Reddit 소셜미디어를 활용한 ChatGPT에 대한 사용자의 감정 및 요구 분석)

  • Hye-In Na;Byeong-Hee Lee
    • Journal of Internet Computing and Services / v.25 no.2 / pp.79-92 / 2024
  • ChatGPT, as a representative chatbot built on generative artificial intelligence technology, has proven valuable not only in scientific and technological domains but also across sectors such as society, economy, industry, and culture. This study conducts an exploratory analysis of user sentiments and needs regarding ChatGPT by examining global social media discourse on Reddit. We collected 10,796 comments on Reddit from December 2022 to August 2023 and employed keyword analysis, sentiment analysis, and need-mining-based topic modeling to derive insights. The analysis reveals several key findings. The most frequently mentioned term in ChatGPT-related comments is "time," indicating users' emphasis on prompt responses, time efficiency, and enhanced productivity. Users express trust in and anticipation of ChatGPT, yet simultaneously articulate concerns and frustrations about its societal impact, including fear and anger. In addition, the topic modeling analysis identifies 14 topics, shedding light on potential user needs. Notably, users exhibit keen interest in educational applications of ChatGPT and its societal implications. Moreover, our investigation uncovers various user-driven topics related to ChatGPT, encompassing language models, jobs, information retrieval, healthcare applications, services, gaming, regulations, energy, and ethical concerns. In conclusion, this analysis provides insight into user perspectives and emphasizes the importance of understanding and addressing user needs. The identified application directions offer guidance for enhancing existing products and services or planning new service platforms.

Dark-Blood Computed Tomography Angiography Combined With Deep Learning Reconstruction for Cervical Artery Wall Imaging in Takayasu Arteritis

  • Tong Su;Zhe Zhang;Yu Chen;Yun Wang;Yumei Li;Min Xu;Jian Wang;Jing Li;Xinping Tian;Zhengyu Jin
    • Korean Journal of Radiology / v.25 no.4 / pp.384-394 / 2024
  • Objective: To evaluate the image quality of a novel dark-blood computed tomography angiography (CTA) technique combined with deep learning reconstruction (DLR), compared to delayed-phase CTA images with hybrid iterative reconstruction (HIR), for visualizing the cervical artery wall in patients with Takayasu arteritis (TAK). Materials and Methods: This prospective study consecutively recruited 53 patients with TAK (mean age: 33.8 ± 10.2 years; 49 females) between January and July 2022 who underwent head-neck CTA scans. The arterial- and delayed-phase images were reconstructed using HIR and DLR. Subtracted images of the arterial phase from the delayed phase were then added to the original delayed phase using a denoising filter to generate the final dark-blood images. Qualitative image quality scores and quantitative parameters were obtained and compared among three groups of images: Delayed-HIR, Dark-blood-HIR, and Dark-blood-DLR. Results: Compared to Delayed-HIR, Dark-blood-HIR images demonstrated higher qualitative scores for vascular wall visualization and diagnostic confidence index (all P < 0.001). These qualitative scores further improved after applying DLR (Dark-blood-DLR compared to Dark-blood-HIR, all P < 0.001). Dark-blood-DLR also showed higher scores for overall image noise than Dark-blood-HIR (P < 0.001). In the quantitative analysis, the contrast-to-noise ratio (CNR) values between the vessel wall and lumen for the bilateral common carotid arteries and brachiocephalic trunk were significantly higher on Dark-blood-HIR images than on Delayed-HIR images (all P < 0.05). The CNR values were significantly higher for Dark-blood-DLR than for Dark-blood-HIR in all cervical arteries (all P < 0.001). Conclusion: Compared with Delayed-HIR CTA, the dark-blood method combined with DLR improved CTA image quality and enhanced visualization of the cervical artery wall in patients with TAK.
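The contrast-to-noise ratio compared throughout the abstract above follows the standard definition (signal difference between two regions, normalized by noise). A minimal generic sketch, not the authors' code, and the study's exact ROI and noise definitions may differ:

```python
import numpy as np

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two regions of interest:
    |mean(A) - mean(B)| divided by the standard deviation of a
    background (noise) region."""
    a, b, bg = map(np.asarray, (roi_a, roi_b, background))
    return abs(a.mean() - b.mean()) / bg.std()
```

For instance, wall pixels averaging 10 HU above lumen pixels averaging 4 HU, with background noise SD of 1, give a CNR of 6.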

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology / v.24 no.6 / pp.541-552 / 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for detecting pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, we used a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained on chest radiographs to exploit knowledge shared between modalities, then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs. The model was trained on data from both supine and erect abdominal radiographs. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated diagnostic performance for pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model.
Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both supine and erect positions.
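The AUC figures reported above can be understood through the rank-based (Mann-Whitney) formulation of the metric: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal generic sketch, not the paper's evaluation code:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A model that ranks every positive above every negative scores 1.0; random scoring tends toward 0.5.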

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry / v.54 no.1 / pp.81-91 / 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
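The Dice similarity coefficient used above to score canal segmentation is defined as twice the overlap between predicted and ground-truth masks, divided by the sum of their sizes. An illustrative sketch on binary masks, not the study's implementation:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). Two empty masks count as a
    perfect match."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0
```

A prediction covering one of two ground-truth pixels with no false positives scores 2·1/(1+2) ≈ 0.667.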

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision-making. Market information such as market size, market growth rate, and market share is essential for setting business strategies. There has been continuous demand across fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high practical potential, since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by government institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the model needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product group clustering could also be replaced with other unsupervised machine learning algorithms. Our group is working on subsequent studies, and we expect they will further improve the performance of the basic model proposed here.
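The bottom-up pipeline described above (embed product names, group names whose cosine similarity to a seed exceeds a threshold, sum their sales) can be sketched as follows. The embeddings, product names, and sales figures are hypothetical stand-ins for trained Word2Vec vectors and the microdata used in the study:

```python
import numpy as np

# Hypothetical embeddings standing in for trained Word2Vec vectors.
vectors = {
    "dish soap":         np.array([0.9, 0.1, 0.0]),
    "kitchen detergent": np.array([0.8, 0.2, 0.1]),
    "shampoo":           np.array([0.1, 0.9, 0.2]),
}
# Hypothetical per-product sales figures.
sales = {"dish soap": 120.0, "kitchen detergent": 80.0, "shampoo": 200.0}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def market_size(seed, threshold=0.9):
    """Group products similar to the seed product by cosine
    similarity, then sum their sales as the group's market size."""
    group = [name for name, v in vectors.items()
             if cosine(vectors[seed], v) >= threshold]
    return group, sum(sales[n] for n in group)
```

Here "dish soap" and "kitchen detergent" fall in one group (similarity ≈ 0.98), while "shampoo" does not, so the estimated market size of the dish-soap group is 120 + 80 = 200. Raising or lowering the threshold coarsens or refines the product category, as the abstract notes.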

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its important functions are "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Since the introduction of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. All the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. As for execution speed, there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes up to 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of examples and references matters as well.
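The chain-rule mechanism described in the abstract above can be made concrete with a toy reverse-mode automatic differentiation sketch. This is illustrative only, not how any of the three frameworks is actually implemented; a real framework would, among other things, topologically sort the graph rather than recurse:

```python
class Node:
    """A node in a computational graph. Each operation records its
    inputs together with the local partial derivative along that
    edge; backward() propagates gradients edge by edge using the
    chain rule (reverse-mode automatic differentiation)."""
    def __init__(self, value):
        self.value, self.parents, self.grad = value, (), 0.0

    def __add__(self, other):
        out = Node(self.value + other.value)
        out.parents = ((self, 1.0), (other, 1.0))  # d(a+b)/da = d(a+b)/db = 1
        return out

    def __mul__(self, other):
        out = Node(self.value * other.value)
        out.parents = ((self, other.value), (other, self.value))  # d(ab)/da = b
        return out

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Node(3.0), Node(4.0)
z = x * y + x              # z = x*y + x, so dz/dx = y + 1, dz/dy = x
z.backward()
```

With x = 3 and y = 4 this yields z.value == 15.0, x.grad == 5.0, and y.grad == 3.0, matching the analytic derivatives; frameworks like Tensorflow, CNTK, and Theano automate exactly this bookkeeping over much larger graphs.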