• Title/Summary/Keyword: Memory systems


Digital Nudge in an Online Review Environment: How Uploading Pictures First Affects the Quality of Reviews (온라인 리뷰 환경에서의 디지털 넛지: 사진을 먼저 업로드 하는 행동이 리뷰의 품질에 미치는 영향 )

  • Jaemin Lee;Taeyoung Kim;HoGeun Lee
    • Information Systems Review
    • /
    • v.25 no.1
    • /
    • pp.1-26
    • /
    • 2023
  • Consumers tend to trust information provided by other consumers more than information provided by sellers. Inducing consumers to write high-quality reviews is therefore an important task for companies, yet such reviews are not easy to elicit. Building on prior research on review writing and memory recall, we developed a digital nudge that helps consumers naturally write high-quality reviews. Specifically, we designed an experiment to verify the effect of uploading a photo first during the online review process on the quality of the resulting review. We recruited subjects and divided them into a group that uploads photos first and a group that does not, and each subject was asked to write both a positive and a negative review. The results confirmed that uploading a photo first increases review length. In addition, when users who upload photos first are extremely dissatisfied with the product, the two-sidedness of the review content increases.

An efficient interconnection network topology in dual-link CC-NUMA systems (이중 연결 구조 CC-NUMA 시스템의 효율적인 상호 연결망 구성 기법)

  • Suh, Hyo-Joong
    • The KIPS Transactions:PartA
    • /
    • v.11A no.1
    • /
    • pp.49-56
    • /
    • 2004
  • The performance of multiprocessor systems is limited by several factors: processor speed, memory delay, and interconnection network bandwidth and latency. With the evolution of semiconductor technology, off-the-shelf microprocessor speeds have passed the GHz mark, and processors can be scaled up into multiprocessor systems by connecting them through interconnection networks. In this situation, system performance is bound by the latency and bandwidth of the interconnection network. SCI, Myrinet, and Gigabit Ethernet are widely adopted as high-speed interconnection links for high-performance cluster systems. Interconnection network performance can be improved by extending bandwidth and minimizing latency. Raising the operating clock speed is a simple way to improve both, but the physical link distance makes a high clock frequency difficult to attain; hence, system performance and scalability suffer from the limitations of the interconnection network. Duplicating the links of the interconnection network is one solution to this bottleneck in scalable systems, and the dual-ring SCI link structure is one example of such an improvement. In this paper, I propose a network topology and a transaction path algorithm that optimize latency and efficiency under duplicated links. Simulation results show that the proposed structure achieves 1.05 to 1.11 times better latency and 1.42 to 2.1 times faster execution compared to dual-ring systems.
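For a rough sense of how duplicated links can shorten paths, the sketch below compares the average hop count of a plain ring against a hypothetical topology whose duplicated links are rewired as chordal shortcuts; the node count, chord stride, and the topology itself are illustrative assumptions, not the structure proposed in the paper.

```python
# Hypothetical illustration: compare average hop count of a dual-ring
# versus a topology whose second link set adds chordal shortcuts.
# The topology, node count, and chord stride are assumptions for illustration;
# they are not the structure proposed in the paper.
import networkx as nx

def dual_ring(n):
    """Two superimposed rings (modeled as one ring: duplication adds bandwidth, not shortcuts)."""
    return nx.cycle_graph(n)

def ring_with_chords(n, stride):
    """Ring whose duplicated links are rewired as chords of the given stride."""
    g = nx.cycle_graph(n)
    for i in range(n):
        g.add_edge(i, (i + stride) % n)
    return g

def avg_hops(g):
    """Average shortest-path length as a proxy for network latency."""
    return nx.average_shortest_path_length(g)

if __name__ == "__main__":
    n = 16
    print("dual ring   :", round(avg_hops(dual_ring(n)), 3))
    print("ring+chords :", round(avg_hops(ring_with_chords(n, stride=4)), 3))
```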

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one possible solution to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms which combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we still know little about how and why CF works. Furthermore, the relative performance of CF algorithms is known to be domain and data dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain provides customers with poor-quality recommendations that quickly annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology measures such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measurement process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
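As a minimal sketch of the prediction pipeline described above (not the authors' implementation), the code below computes several of the named network measures with networkx and feeds them into a small neural network regressor; the synthetic networks, the placeholder F1 targets, and the regressor settings are assumptions, and Krackhardt's efficiency is omitted for brevity.

```python
# Minimal sketch of the pipeline described above (not the authors' code):
# build a customer network, compute SNA measures, and train a small neural
# network to predict CF accuracy (F1). Data shapes, thresholds, and the
# regressor settings are assumptions for illustration.
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def sna_features(g: nx.Graph) -> list[float]:
    """Network-level measures used as ANN inputs in the study
    (density, inclusiveness, clustering coefficient, degree centralization)."""
    n = g.number_of_nodes()
    density = nx.density(g)
    inclusiveness = sum(1 for _, d in g.degree() if d > 0) / n
    clustering = nx.average_clustering(g)
    degrees = [d for _, d in g.degree()]
    max_possible = (n - 1) * (n - 2)
    centralization = sum(max(degrees) - d for d in degrees) / max_possible if max_possible else 0.0
    return [density, inclusiveness, clustering, centralization]

# Hypothetical training data: one synthetic network per sample, paired with
# the F1-measure a CF recommender achieved on that data (placeholder values).
rng = np.random.default_rng(0)
samples, f1_scores = [], []
for _ in range(50):
    g = nx.erdos_renyi_graph(n=100, p=rng.uniform(0.02, 0.2), seed=int(rng.integers(1_000_000)))
    samples.append(sna_features(g))
    f1_scores.append(rng.uniform(0.3, 0.8))          # placeholder target values

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(samples, f1_scores)
print("predicted F1 for a new network:", model.predict([sna_features(nx.erdos_renyi_graph(100, 0.1))]))
```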

The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia pacific journal of information systems
    • /
    • v.19 no.1
    • /
    • pp.33-62
    • /
    • 2009
  • Internet computing is a disruptive IT innovation. The Semantic Web can be considered an IT innovation because it has the potential to reduce information overload and enable semantic integration through capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage, and there is criticism that the positivist demand for theoretical rigor can sacrifice relevance to practice. Rapid advances in technology require studies relevant to practice. In particular, a quantitative approach to the factors affecting adoption of the Semantic Web is realistically impossible because the Semantic Web is in its infancy. At this early stage of its introduction, however, it is necessary to give practitioners and researchers a model and some guidelines for the adoption and diffusion of the technology innovation. Thus, the purpose of this study is to present a model of adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption, using a qualitative research method including multiple case studies and in-depth interviews. The researcher conducted face-to-face interviews with 15 people and two further interviews by telephone and e-mail to collect data until the categories were saturated. Nine interviews, including the two telephone interviews, came from nine user organizations adopting the technology innovation, and the others came from three supply organizations. Semi-structured interviews were used to collect data. The interviews were recorded on a digital voice recorder and subsequently transcribed verbatim; 196 pages of transcripts were obtained from about 12 hours of interviews. Triangulation of evidence was achieved by examining each organization's website and various documents, such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, or sentences. Data analysis then used the procedure of open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting the information. QSR NVivo version 8.0 was used to categorize sentences containing similar concepts. The 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named. Five factors affecting adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect the adoption of the Semantic Web from the perspective of technology push. Third, absorptive capacity plays an important role in adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which creates a gap between users' expectation levels and perceived benefits, has a negative impact on the adoption of the Semantic Web. Finally, the factor comprising critical mass of ontology, budget, and
visible effects is identified as a determinant affecting routinization and infusion. The researcher suggests a model of adoption and diffusion of the Semantic Web, representing the relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer guidelines to practitioners and a research model for further studies. Proposition 1: Demand pull has an influence on the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push has an influence on the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and lower technology maturity have a negative impact on its adoption. Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, lower technology maturity, complexity, and lower trialability have a negative impact on its adoption. Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competency in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces. Proposition 5: The greater the gap of expectation between users and suppliers, the later the Semantic Web is adopted. Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

An Analysis of the Roles of Experience in Information System Continuance (정보시스템의 지속적 사용에서 경험의 역할에 대한 분석)

  • Lee, Woong-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.4
    • /
    • pp.45-62
    • /
    • 2011
  • The notion of information systems (IS) continuance has recently emerged as one of the most important research issues in the field of IS. A great deal of research has been conducted thus far on the basis of theories adapted from various disciplines, including consumer behavior and social psychology, in addition to theories regarding information technology (IT) acceptance. This body of knowledge provides a robust research framework that can already account for the determinants of IS continuance; however, it also points to other, thus-far-unelucidated determinant factors such as habit, which were not included in traditional IT acceptance frameworks, and re-emphasizes the importance of emotion-related constructs such as satisfaction in addition to conscious intention with rational beliefs such as usefulness. Experience should also be considered one of the most important factors determining the characteristics of IS continuance and the features distinct from those determining IS acceptance, because more experienced users have more opportunities for IS use, which allows them more frequent use than less experienced or non-experienced users. Interestingly, experience has dual features that may influence IS use in contradictory ways. On one hand, attitudes predicated on direct experience have been shown to predict behavior better than attitudes formed from indirect experience or without experience; as more information becomes available, direct experience may render IS use a more salient behavior and may also make IS use more accessible via memory. Therefore, experience may intensify the relationship between IS use and conscious intention with evaluations. On the other hand, experience may culminate in the formation of habits: greater experience implies more frequent performance of the behavior, which may lead to habit formation. Hence, with experience, users' activation of an IS may become more dependent on habit, that is, unconscious automatic use without deliberation regarding the IS, and less dependent on conscious intentions. Furthermore, experience can provide the basic information necessary for satisfaction with the use of a specific IS, thus spurring the formation of both conscious intentions and unconscious habits. Whereas IT adoption is a one-time decision, IS continuance may be a series of users' decisions and evaluations based on satisfaction with IS use. Moreover, habits cannot be formed without satisfaction, even when a behavior is carried out repeatedly. Thus, experience also plays a critical role in satisfaction, as satisfaction is the consequence of direct experience of actual behaviors. In particular, emotional experiences such as enjoyment can become as influential on IS use as utilitarian experiences such as usefulness; this is especially true in light of the modern increase in membership-based hedonic systems, including online games, web-based social network services (SNS), blogs, and portals, all of which attempt to provide users with self-fulfilling value. Therefore, to understand more clearly the role of experience in IS continuance, analysis must be conducted under a research framework that includes intentions, habits, and satisfaction, as experience may not only have duration-based moderating effects on the relationships of both intention and habit with the activation of IS use, but may also have content-based positive effects on satisfaction.
This is consistent with the basic assumptions regarding the determining factors in IS continuance suggested by Ortiz de Guinea and Markus: consciousness, emotion, and habit. The principal objective of this study was to explore and assess the effects of experience in IS continuance, with special consideration given to conscious intentions and unconscious habits, as well as satisfaction. In service of this goal, along with a review of the relevant literature regarding the effects of experience and habit on continuous IS use, this study suggests a research model that represents the roles of experience: its moderating role in the relationships of IS continuance with both conscious intention and unconscious habit, and its antecedent role in the development of satisfaction. For the validation of this research model, Korean university student users of 'Cyworld', one of the most influential social network services in South Korea, were surveyed, and the data were analyzed via partial least squares (PLS) analysis. As a result, most hypotheses in our research model were statistically supported, with the exception of one. Although one hypothesis was not supported, the study's findings provide some important implications. First, the role of experience in IS continuance differs from its role in IS acceptance. Second, the use of IS is explained by a dynamic balance between habit and intention. Third, the importance of satisfaction was confirmed from the perspective of IS continuance with experience.

EEG based Cognitive Load Measurement for e-learning Application (이러닝 적용을 위한 뇌파기반 인지부하 측정)

  • Kim, Jun;Song, Ki-Sang
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.2
    • /
    • pp.125-154
    • /
    • 2009
  • This paper describes the possibility of using human physiological data, especially brain-wave activity, to detect cognitive overload, a phenomenon that may occur while a learner uses an e-learning system. If cognitive overload is found to be detectable, providing appropriate feedback to learners may be possible. To illustrate this possibility, cognitive load levels were measured by EEG (electroencephalogram) while participants engaged in cognitive activities, in order to detect cognitive overload. The task given to learners was a computerized listening and recall test designed to measure working memory capacity, with four progressively increasing degrees of difficulty. Eight male, right-handed university students were asked to complete four sets of tests, and each test took from 61 to 198 seconds. A correct-answer ratio was then calculated and the EEG results analyzed. The correct-answer ratios of the listening and recall tests were 84.5%, 90.6%, 62.5%, and 56.3%, respectively, and the degree of difficulty was statistically significant. The data highlighted learner cognitive overload on test levels 3 and 4, the higher-level tests. Second, the SEF-95% value was greater on tests 3 and 4 than on tests 1 and 2, indicating that tests 3 and 4 imposed a greater cognitive load on participants. Third, the relative power of the EEG gamma wave increased rapidly on the 3rd and 4th tests, and signals from channels F3, F4, C4, F7, and F8 showed statistical significance. These five channels surround the brain's Broca area, and a brain mapping analysis found that F8, in the right half of the brain, was activated in proportion to the degree of difficulty. Lastly, cross-relation analysis showed a greater increase in synchronization on tests 3 and 4 than on tests 1 and 2. From these findings, it is possible to measure cognitive load level and cognitive overload via brain activity, which may provide a timely feedback scheme for e-learning systems.
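As a rough illustration of two of the EEG indices mentioned above (relative gamma-band power and the SEF-95% spectral edge frequency), the following sketch computes both from a single channel using Welch's power spectral density estimate; the sampling rate, band edges, and the synthetic signal are assumptions, not the authors' recording setup.

```python
# Minimal sketch (not the authors' pipeline): compute relative gamma-band
# power and the 95% spectral edge frequency (SEF-95%) for one EEG channel.
# Sampling rate, band edges, and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                      # assumed sampling rate (Hz)
GAMMA = (30.0, 45.0)          # assumed gamma band (Hz)

def band_metrics(eeg: np.ndarray, fs: int = FS):
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    total = np.trapz(psd, freqs)
    # relative gamma power: gamma-band power divided by total power
    gmask = (freqs >= GAMMA[0]) & (freqs <= GAMMA[1])
    rel_gamma = np.trapz(psd[gmask], freqs[gmask]) / total
    # SEF-95%: frequency below which 95% of the spectral power lies
    cum = np.cumsum(psd) / np.sum(psd)
    sef95 = freqs[np.searchsorted(cum, 0.95)]
    return rel_gamma, sef95

if __name__ == "__main__":
    t = np.arange(0, 60, 1 / FS)                       # 60 s synthetic segment
    signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
    print(band_metrics(signal))
```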


A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection within one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration thanks to the use of shared memory. Conclusion: The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
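For reference, the sketch below implements the standard ML-EM update that the paper accelerates on the GPU, here in plain CPU-only NumPy; the toy system matrix, image size, and count data are illustrative assumptions rather than the scanner geometry used in the study.

```python
# Minimal CPU-only sketch of the ML-EM iteration (the step the paper runs on
# a GPU). The system matrix A, measured counts y, and sizes are toy
# assumptions; a real system matrix encodes the scanner geometry.
import numpy as np

def ml_em(A: np.ndarray, y: np.ndarray, n_iter: int = 32) -> np.ndarray:
    """ML-EM: lambda <- lambda / sum_i(a_ij) * A^T (y / (A lambda))."""
    n_pixels = A.shape[1]
    lam = np.ones(n_pixels)                     # uniform initial image
    sens = A.sum(axis=0)                        # sensitivity image, sum_i a_ij
    for _ in range(n_iter):
        proj = A @ lam                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)     # compare with measured data
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # backproject and update
    return lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((64, 16))                    # toy system matrix: 64 bins, 16 pixels
    true_img = rng.random(16)
    y = rng.poisson(A @ true_img * 100)         # noisy measured counts
    print(ml_em(A, y)[:5])
```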

Conflicts between the Conservation and Removal of the Modern Historic Landscapes - A Case of the Demolition Controversy of the Japanese General Government Building in Seoul - (근대 역사 경관의 보존과 철거 - 구 조선총독부 철거 논쟁을 사례로 -)

  • Son, Eun-Shin;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.4
    • /
    • pp.21-35
    • /
    • 2018
  • In recent years, there has been a tendency in many cities to reuse 'landscapes of memory', including industrial heritages, modern cultural heritages, and post-industrial parks, as public spaces. Among the various types of landscapes, 'modern historic landscapes', which were formed in the 19th and 20th centuries, are those where the debate between conservation and removal arises most frequently, as the evaluation and recognition of modern history change. This study examines the conflicts between conservation and removal surrounding modern historic landscapes and explores the value judgment criteria and the formation process of those landscapes, as highlighted by the demolition controversy over the old Japanese General Government Building in Seoul, which was dismantled in 1995. First, the study reviews newspaper articles, television news and debate programs from 1980-1999, and articles related to the controversy over the building. It then identifies the following six factors as the main issues of the demolition controversy: symbolic location; discoveries of, and responses to, new historical facts; reaction and intervention of a related country; financial conditions; function and usage of the landscape; and changes in urban, historical, and architectural policies. Based on these issues, this study examines the conflict between symbolic values, which play an important role in the formation of modern historic landscapes and in determining conservation or removal, and functional values, whose utility lies in solving problems and responding to criticisms that arise in the process of forming the modern historic landscape. In particular, it is noted that the most important factor in this decision is symbolic value, although decisions on the conservation or removal of modern historic landscapes have shifted with changing perceptions of modern history. Today, the modern historic landscape is an important site for urban design and still carries historical issues to be agreed upon and addressed. This study has contemporary significance in that it divides the many values of modern historic landscapes into symbolic values and functional values, evaluates them, and reviews the surrounding social context.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced prominent applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field; this means that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers.
This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
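To make the three convolutional ideas above concrete, the following NumPy sketch implements a valid 2D convolution (local receptive fields with shared weights) followed by max pooling; the array sizes and the toy edge-detection kernel are illustrative assumptions.

```python
# Minimal NumPy sketch of the three convolutional-network ideas described
# above: local receptive fields, shared weights, and pooling. Array sizes and
# the example kernel are arbitrary illustrative choices.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution: every output unit looks at a small local
    receptive field, and all units share the same kernel weights."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap: np.ndarray, size: int = 2) -> np.ndarray:
    """Max pooling: summarize each size x size block by its maximum."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

if __name__ == "__main__":
    image = np.random.rand(8, 8)
    edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # toy vertical-edge detector
    feature_map = conv2d(image, edge_kernel)              # shared weights, local fields
    print(max_pool(feature_map).shape)                    # pooled feature map
```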

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.103-128
    • /
    • 2021
  • Recently, investors' interest and the influence of stock-related information dissemination have been considered significant factors that explain stock returns and volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility due to macro-environmental and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price by using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and a deep neural network, LSTM (Long Short-Term Memory), to predict the stock market, and compare stock price prediction performance using keyword search volume across the technology's social acceptance stages. In addition, we analyze the sub-technologies of artificial intelligence to examine the change in search volume for detailed technology keywords at each acceptance stage and the effect of interest in a specific technology on the stock market forecast. To this end, the words 'artificial intelligence', 'deep learning', and 'machine learning' were selected as keywords, and we investigated how often each keyword appeared in online documents each week over the five years from January 1, 2015 to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as its social acceptance increased. In particular, starting from the AlphaGo shock, the search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. The prediction models showed high accuracy, and the acceptance stage yielding the best prediction performance differed for each keyword. In the stock price prediction based on keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, the prediction accuracy of the awareness stage was found to be the highest. The prediction accuracy also differed according to the keywords used in the stock price prediction model at each social acceptance stage. Therefore, when constructing a stock price prediction model using technology keywords, it is necessary to consider the social acceptance of the technology and its sub-technology classification. The results of this study provide the following implications. First, to predict the return on investment for companies based on innovative technology, it is most important to capture the recognition stage, in which public interest rapidly increases, in the social acceptance of the technology.
Second, the fact that keyword search volume and the accuracy of the prediction model vary with the social acceptance of a technology should be considered when developing decision support systems for investment, such as the big data-based robo-advisors recently introduced by the financial sector.
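As a rough sketch of the LSTM side of the forecasting setup described above (not the authors' model), the code below trains a small Keras LSTM to predict next-week returns from a window of weekly keyword search volume and past returns; the window length, layer size, and synthetic data are assumptions.

```python
# Minimal sketch (not the authors' model): forecast next-week stock return
# from weekly keyword search volume and past returns with an LSTM. The
# window length, feature set, layer sizes, and synthetic data are assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 8                                    # assumed look-back of 8 weeks

def make_windows(features: np.ndarray, target: np.ndarray):
    """Slice a (weeks, n_features) series into (samples, WINDOW, n_features)."""
    X, y = [], []
    for t in range(WINDOW, len(target)):
        X.append(features[t - WINDOW:t])
        y.append(target[t])
    return np.array(X), np.array(y)

# Synthetic stand-ins for weekly keyword search volume and stock returns.
rng = np.random.default_rng(0)
weeks = 260                                   # roughly five years of weekly data
search_volume = rng.random(weeks)
returns = 0.3 * np.roll(search_volume, 1) + 0.05 * rng.standard_normal(weeks)
features = np.column_stack([search_volume, returns])

X, y = make_windows(features, returns)
model = Sequential([LSTM(16, input_shape=(WINDOW, features.shape[1])), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
print("next-week return forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```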