• Title/Abstract/Keyword: Memory Information


How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 25, No. 1
    • /
    • pp.219-239
    • /
    • 2019
  • As providing customized services to individuals becomes increasingly important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are based mostly on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information beyond quantitative ratings; good examples are applications of sentiment analysis to customer review text. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments shown in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and reflects it directly in the recommendation system. To do this, we needed to quantify users' reviews, which are originally qualitative information. In this study, sentiment scores were calculated through the sentiment analysis techniques of text mining. The data consisted of movie reviews, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were built using Lasso regression, Ridge regression, and ElasticNet. The accuracy of each dictionary was verified through a confusion matrix: 70% for the Lasso-based dictionary, 79% for the Ridge-based dictionary, and 83% for ElasticNet ($\alpha=0.3$). Therefore, in this study, the sentiment score of each review was calculated with the ElasticNet-based dictionary and combined with the rating to create a new rating. In this paper, we show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that considers only the existing ratings. To demonstrate this, the proposed algorithm was applied to memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. For each algorithm, the mean absolute error (MAE) and root mean square error (RMSE) were calculated to compare a system using the sentiment-combined ratings with one that considers ratings only. On MAE, the combined ratings improved results by 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++; on RMSE, by 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, the prediction performance of ratings that reflect sentiment scores, as proposed in this paper, is superior to that of the conventional rating method. In other words, collaborative filtering that reflects the sentiment scores of user reviews shows superior accuracy compared with conventional collaborative filtering that considers only the quantitative ratings. We then conducted paired t-test validation and concluded that the proposed model is the better approach. To overcome the limitation of previous research that judges user sentiment only by quantitative rating scores, this study numerically evaluated reviews so that users' opinions could be reflected in the recommendation system in a more refined way, improving accuracy. The findings have managerial implications for recommendation system developers, who are expected to consider both quantitative and qualitative information; the way the combined system is constructed in this paper can be used directly by developers.
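
Although the paper does not publish its code, the core step, converting review sentiment into a rating adjustment before collaborative filtering, can be sketched as follows. The dictionary weights, the blending rule, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: blend a review's sentiment score into its star rating
# before collaborative filtering. Weights and blending are assumptions.

# Sentiment dictionary: word -> weight, as a regression-based (e.g., ElasticNet)
# fit of review words against sentiment labels might produce.
sentiment_dict = {"great": 0.8, "boring": -0.7, "masterpiece": 0.9, "dull": -0.5}

def sentiment_score(review_tokens):
    """Average dictionary weight of known words, in [-1, 1]."""
    weights = [sentiment_dict[w] for w in review_tokens if w in sentiment_dict]
    return sum(weights) / len(weights) if weights else 0.0

def adjusted_rating(rating, review_tokens, alpha=0.5, scale=5.0):
    """Blend the observed rating with a rating implied by review sentiment.

    Maps sentiment from [-1, 1] onto the rating scale [1, scale], then takes
    a weighted average; alpha controls how much to trust the raw rating.
    """
    implied = 1.0 + (sentiment_score(review_tokens) + 1.0) / 2.0 * (scale - 1.0)
    return alpha * rating + (1.0 - alpha) * implied

# A 3-star rating with a glowing review is nudged upward:
print(adjusted_rating(3.0, ["a", "masterpiece", "great"]))  # ~3.85
```

The adjusted ratings would then replace the raw ratings in the UBCF/IBCF/SVD pipelines before computing MAE and RMSE.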

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 23, No. 1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core AI technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and more broadly in high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. By contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, the traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate model performance, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. A CNN learns features from values adjacent to one another, but in business data the proximity of fields carries little meaning because fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so the whole record's characteristics are learned at once, and added a hidden layer so decisions can be based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. The experiments produced several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models. This is interesting because the CNN performed well on binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
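
As a rough illustration of the best-performing configuration described above (a Conv1D filter spanning all input fields, dropout at 0.5, and an added hidden layer), a minimal Keras sketch might look like the following. Layer widths, the optimizer, and the training settings are assumptions, not the paper's values.

```python
# Hypothetical Keras sketch: a Conv1D filter sized to span all fields at once,
# dropout at p=0.5, and an extra dense layer before the binary output.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16  # number of input variables (age, occupation, loan status, ...)

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),             # each field as one position
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Dropout(0.5),                          # drop neurons with p=0.5
    layers.Flatten(),
    layers.Dense(16, activation="relu"),          # added hidden layer
    layers.Dense(1, activation="sigmoid"),        # binary target: open account?
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Dummy data standing in for the Portuguese bank telemarketing set.
X = np.random.rand(100, n_fields, 1)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, epochs=2, verbose=0)
```

Because the kernel covers every field, the convolution behaves like a bank of learned whole-record feature detectors rather than a local pattern scanner.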

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 27, No. 3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data are multidimensional time series data, which are difficult to handle because both the characteristics of multidimensional data and those of time series data must be considered. For multidimensional data, correlations between variables should be considered; existing probabilistic, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are preprocessed with the sliding window technique and time series decomposition for autocorrelation analysis, and these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used early on; currently, there are active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or the data contain noise or outliers, and they are restricted to training data free of noise and outliers. An autoencoder built on artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can learn in an unsupervised manner without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the data dimensionality. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve local outlier identification in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the data dimensionality. Conditional inputs usually use categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was checked for the proposed and comparison models. Reconstruction performance differs by variable; the Memory, Disk, and Network modals were reconstructed well, with small loss values, in all three autoencoders. The Process modal showed no significant difference across the three models, and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. Its accuracy also improved, to 87.12%, and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practice, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference. The proposed model's inference speed and simple model management make it easy to apply to practical tasks.
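
A minimal sketch of the CMAE idea as described, per-modal encoders sharing one bottleneck with time concatenated as a condition, might look like the following in Keras. The modal split, layer sizes, and time encoding are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical Keras sketch of a Conditional Multimodal Autoencoder (CMAE):
# per-modal encoders share one bottleneck, and a time condition enters at the
# bottleneck so periodicity is learned without widening the raw input.
from tensorflow import keras
from tensorflow.keras import layers

cpu_in = keras.Input(shape=(8,), name="cpu")     # CPU-related metrics
mem_in = keras.Input(shape=(8,), name="memory")  # memory-related metrics
time_in = keras.Input(shape=(2,), name="time")   # e.g., sin/cos of hour of day

# Modal-specific encoders feed a shared bottleneck (the "multimodal" part).
h_cpu = layers.Dense(4, activation="relu")(cpu_in)
h_mem = layers.Dense(4, activation="relu")(mem_in)
code = layers.Dense(3, activation="relu")(
    layers.Concatenate()([h_cpu, h_mem, time_in]))  # time as the condition

# Modal-specific decoders reconstruct each input from the shared code.
cpu_out = layers.Dense(8, name="cpu_rec")(layers.Dense(4, activation="relu")(code))
mem_out = layers.Dense(8, name="mem_rec")(layers.Dense(4, activation="relu")(code))

cmae = keras.Model([cpu_in, mem_in, time_in], [cpu_out, mem_out])
cmae.compile(optimizer="adam", loss="mse")
# At inference, per-modal reconstruction error vs. a threshold flags anomalies.
```

Sharing the bottleneck forces the modalities to encode their correlations, which is what helps with local outliers spanning several variables.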

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yang;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • Vol. 6, No. 1
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information System) has shifted to real-time mobile GIS for location-based services (LBS). To offer LBS efficiently, there must be a real-time GIS platform that can deal with the dynamic status of moving objects, and a location index that can deal with the characteristics of location data. Location data can use the same data types (e.g., point) as GIS, but the management of location data is very different. Therefore, in this paper, we studied a real-time mobile GIS using the HBR-tree to manage masses of location data efficiently. The real-time mobile GIS developed in this paper consists of the HBR-tree and the real-time GIS platform. The HBR-tree proposed in this paper is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are done within the same hash table in the HBR-tree, so it costs less than other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, location data can be searched quickly. The real-time GIS platform consists of a real-time GIS engine extended from a main-memory database system, a middleware that can transfer spatial and aspatial data to clients and receive location data from clients, and a mobile client that runs on mobile devices. In particular, this paper describes the performance evaluation of the HBR-tree and the real-time GIS engine, conducted through practical tests.
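
The update-locality argument above can be illustrated with a toy spatial hash in which, in the real design, each cell would hold an R-tree over its objects. Here a plain dictionary stands in for the per-cell trees; the cell size and function names are assumptions, not the paper's specification.

```python
# Hypothetical sketch of the HBR-tree idea: a spatial hash partitions space
# into cells; frequent location updates stay inside one hash bucket, so no
# tree rebalancing is needed, while range search fans out over a few cells.
from collections import defaultdict

CELL = 100.0                 # hash cell size in map units (assumed)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

buckets = defaultdict(dict)  # cell -> {object_id: (x, y)}; an R-tree per cell in the real design
positions = {}               # object_id -> current cell

def update_location(obj_id, x, y):
    """Updates usually land in the same bucket; only cell crossings move data."""
    old, new = positions.get(obj_id), cell_of(x, y)
    if old is not None and old != new:
        buckets[old].pop(obj_id, None)
    buckets[new][obj_id] = (x, y)
    positions[obj_id] = new

def range_search(xmin, ymin, xmax, ymax):
    """Visit only the cells overlapping the query window, then filter points."""
    hits = []
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            for oid, (x, y) in buckets[(cx, cy)].items():
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append(oid)
    return hits

update_location("bus-7", 130.0, 250.0)
print(range_search(100, 200, 200, 300))  # ['bus-7']
```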


Effects of stimulus similarity on P300 amplitude in P300-based concealed information test (P300-기반 숨긴정보검사에서 자극유사성이 P300의 진폭에 미치는 영향)

  • Eom, Jin-Sup;Han, Yu-Hwa;Sohn, Jin-Hun;Park, Kwang-Bai
    • Science of Emotion and Sensibility
    • /
    • Vol. 13, No. 3
    • /
    • pp.541-550
    • /
    • 2010
  • The present study examined whether the physical similarity of test stimuli affects P300 amplitude and detection accuracy in the P300-based concealed information test (P300 CIT). Participants pretended to suffer from memory impairment caused by an accident, and each participant's own name was used as the concealed information to be probed by the P300 CIT, in which the participant discriminated between a target and other (probe, irrelevant) stimuli. One group was tested in an easy task condition with low physical similarity among stimuli; the other group was tested in a difficult task condition with high physical similarity among stimuli. Using the base-to-peak P300 amplitude, the interaction of task difficulty and stimulus type was significant at the $\alpha=.1$ level (p=.052): in the easy condition the difference in P300 amplitude between the probe and the irrelevant stimuli was significant, while in the difficult condition it was not. Using the peak-to-peak P300 amplitude, on the other hand, the interaction of task difficulty and stimulus type was not significant, with significant probe-irrelevant differences in both task conditions. The difference in detection accuracy between task conditions was not significant with either measure of P300 amplitude, although the difference was much smaller when peak-to-peak amplitude was used. The results suggest that the efficiency of the P300 CIT would not decrease even when the perceptual similarity among test stimuli is high.
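
The two amplitude measures contrasted here can be sketched numerically as follows. The window boundaries, sampling rate, and epoch layout are common ERP conventions assumed for illustration, not the study's exact parameters.

```python
# Hypothetical sketch of the two P300 amplitude measures: base-to-peak (peak
# relative to a pre-stimulus baseline) and peak-to-peak (P300 peak minus the
# following negative trough). Windows and sampling rate are assumptions.
import numpy as np

fs = 250                    # sampling rate in Hz (assumed)
erp = np.random.randn(500)  # stands in for one averaged epoch, -0.2 s to 1.8 s

baseline = erp[:int(0.2 * fs)].mean()        # mean of 200 ms pre-stimulus
p300_win = erp[int(0.5 * fs):int(0.8 * fs)]  # 300-600 ms post-stimulus
late_win = erp[int(0.8 * fs):int(1.2 * fs)]  # window for the later trough

base_to_peak = p300_win.max() - baseline
peak_to_peak = p300_win.max() - late_win.min()
print(base_to_peak, peak_to_peak)
```

Peak-to-peak ignores slow baseline shifts, which is one plausible reason the probe-irrelevant difference survived the difficult condition with that measure.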


A Spatio-Temporal Clustering Technique for the Moving Object Path Search (이동 객체 경로 탐색을 위한 시공간 클러스터링 기법)

  • Lee, Ki-Young;Kang, Hong-Koo;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • Vol. 7, No. 3
    • /
    • pp.67-81
    • /
    • 2005
  • Recently, with the development of Geographic Information Systems, interest in and research on new application services such as Location-Based Services and Telematics, which provide emergency services, neighbor information search, and route search, have been increasing. In the spatio-temporal databases used for Location-Based Services and Telematics, users' searches usually fix the current time on the time axis and query spatial and aspatial attributes; if the query range on the time axis is extensive, the search operation is difficult to handle efficiently. To solve this problem, the snapshot, a method of summarizing the location data of moving objects, was introduced. However, if the range of data to store is wide, more storage space is required, and snapshots are created even for regions that are rarely searched, so the snapshot method generally wastes storage space and memory. Therefore, this paper suggests the Hash-based Spatio-Temporal Clustering Algorithm (H-STCA), which extends the two-dimensional spatial hash algorithm previously used for spatial clustering into a three-dimensional spatial hash algorithm, to overcome the disadvantages of the snapshot method. This paper also suggests a knowledge extraction algorithm that extracts knowledge for the path search of moving objects from past location data based on H-STCA. Moreover, in the performance evaluation, the snapshot clustering method using H-STCA demonstrated higher performance than the spatio-temporal index methods and the original snapshot method in search time, storage structure construction time, and optimal path search time over huge amounts of moving object data. In particular, the more the number of moving objects increased, the more the performance of the snapshot clustering method using H-STCA improved compared with the existing spatio-temporal index methods and the original snapshot method.
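
The core of H-STCA, extending a 2D spatial hash to a 3D (x, y, time) hash so past locations cluster into spatio-temporal cells, might be sketched as follows. The cell sizes and helper names are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch: a 3D (x, y, time) hash groups past locations into
# spatio-temporal cells, the raw material for path-knowledge extraction.
from collections import defaultdict

XY_CELL, T_CELL = 100.0, 600.0  # 100 map units, 10-minute slices (assumed)

clusters = defaultdict(list)    # (cx, cy, ct) -> [(obj_id, x, y, t), ...]

def add_location(obj_id, x, y, t):
    key = (int(x // XY_CELL), int(y // XY_CELL), int(t // T_CELL))
    clusters[key].append((obj_id, x, y, t))

def popular_cells(k=3):
    """Most-visited spatio-temporal cells: candidate waypoints for path search."""
    return sorted(clusters, key=lambda c: len(clusters[c]), reverse=True)[:k]

add_location("taxi-3", 130.0, 250.0, 1200.0)
add_location("taxi-9", 140.0, 260.0, 1500.0)
print(popular_cells(1))  # [(1, 2, 2)]
```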


Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 16, No. 3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so simple sequential patterns are found easily, but there is a limit to finding the more interesting patterns widely used in real-world applications. One essential research topic for overcoming this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered to obtain more interesting sequential patterns. Recently, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and memory usage should be finitely restricted even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible to produce an up-to-date analysis result that can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, much research has been performed to find various kinds of knowledge embedded in data streams, mainly focusing on efficient mining of frequent itemsets and sequential patterns, which have proven useful in conventional data mining over finite data sets. Mining algorithms have also been proposed to efficiently reflect the changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing a mining method to find them is a valuable research topic for analyzing recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a method for mining weighted sequential patterns over sequence data streams via this approach. A gap-based weight of a sequential pattern can be computed from the gaps between data elements in the pattern without any pre-defined weight information. That is, both the gaps between data elements in each sequential pattern and their generation orders are used to compute the pattern's weight, which helps to obtain more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
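
One plausible instantiation of a gap-based weight, where smaller gaps between pattern elements yield a higher weight without any pre-defined weight table, is sketched below. The decay function and normalization are assumptions; the paper's exact formula is not reproduced here.

```python
# Hypothetical gap-based weight: a pattern whose elements occur close
# together scores higher than one with large gaps between occurrences.
def gap_weight(timestamps):
    """Weight in (0, 1]: average of 1/(1 + gap) over consecutive elements."""
    if len(timestamps) < 2:
        return 1.0
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    return sum(1.0 / (1.0 + g) for g in gaps) / len(gaps)

# A tightly clustered occurrence outweighs a spread-out one:
print(gap_weight([1, 2, 3]))    # 0.5
print(gap_weight([1, 10, 30]))  # ~0.074
```

In a stream setting, such a weight can be computed incrementally from each element's arrival time, so no stored weight information is needed.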

The Influence of Number of Targets on Commonness Knowledge Generation and Brain Activity during the Life Science Commonness Discovery Task Performance (생명과학 공통성 발견 과제 수행에서 대상의 수가 공통성 지식 생성과 뇌 활성에 미치는 영향)

  • Kim, Yong-Seong;Jeong, Jin-Su
    • Journal of Science Education
    • /
    • Vol. 43, No. 1
    • /
    • pp.157-172
    • /
    • 2019
  • The purpose of this study is to analyze the influence of the number of targets on commonness knowledge generation and brain activity during life science commonness discovery task performance. Thirty-five preservice life science teachers participated. The tasks were arranged in a block design for EEG recording, and EEGs were collected while subjects performed the commonness discovery tasks. The sLORETA method and relative power spectrum analysis were used to analyze differences in brain activity and the roles of activated cortical and subcortical regions according to the difficulty of the commonness discovery task. The results were as follows. For the theta band, theta activity significantly decreased in the frontal lobe and increased in the occipital lobe during the difficult task compared with the easy task. For the alpha band, alpha activity significantly decreased in the frontal lobe during the difficult task. Beta activity significantly decreased in the frontal, parietal, and occipital lobes during the difficult task. Finally, for the gamma band, gamma activity decreased in the frontal lobe and increased in the parietal and temporal lobes during the difficult task compared with the easy task. The difficulty of the commonness discovery task was shown to affect the cingulate gyrus, the cuneus, the lingual gyrus, the posterior cingulate, the precuneus, and sub-gyral regions. Therefore, the difficulty of the commonness discovery task can be said to affect the processes of integrating visual information extracted from the images with location information, comparing the attributes of the objects, selecting the necessary information, holding the selected information in visual working memory, and perception.
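
The relative power spectrum analysis mentioned above divides the power in each EEG band by the total power. A minimal sketch under assumed band edges and sampling rate might look like this; the band boundaries follow common conventions rather than the study's reported settings.

```python
# Hypothetical sketch of relative band power: per-band spectral power as a
# fraction of total power over the analyzed range. Parameters are assumptions.
import numpy as np

fs = 256                        # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 10)  # stands in for a 10 s EEG segment

freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}
total = power[(freqs >= 4) & (freqs < 50)].sum()
relative = {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}
print(relative)  # fractions summing to 1 over the four bands
```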

The Effect of Consumers' Value Motives on the Perception of Blog Reviews Credibility: the Moderation Effect of Tie Strength (소비자의 가치 추구 동인이 블로그 리뷰의 신뢰성 지각에 미치는 영향: 유대강도에 따른 조절효과를 중심으로)

  • Chu, Wujin;Roh, Min Jung
    • Asia Marketing Journal
    • /
    • Vol. 13, No. 4
    • /
    • pp.159-189
    • /
    • 2012
  • What attracts consumers to bloggers' reviews? Consumers are attracted both by a blogger's expertise (i.e., knowledge and experience) and by his or her unbiased manner of delivering information. Expertise and trustworthiness are both virtues of information sources, particularly when there is uncertainty in decision-making. Noting this point, we postulate that consumers' motives determine the relative weights they place on expertise and trustworthiness. In addition, our hypotheses assume that tie strength moderates consumers' expectations of bloggers' expertise and trustworthiness: the expectation of expertise is enhanced for the power-blog user group (weak ties), and the expectation of trustworthiness is elevated for the personal-blog user group (strong ties). Finally, we theorize that the effect of credibility on willingness to accept a review is moderated by tie strength: the predictive power of credibility is more prominent for the personal-blog user group than for the power-blog user group. To test these assumptions, we conducted a field survey of blog users, collecting retrospective self-report data. The "gourmet shop" was chosen as the target product category, and the obtained data were analyzed by structural equation modeling. The findings provide empirical support for our theoretical predictions. First, we found that the purposive motive, aimed at satisfying instrumental information needs, increases reliance on bloggers' expertise, whereas the interpersonal-connectivity value of alleviating loneliness elevates reliance on bloggers' trustworthiness. Second, expertise-based credibility is more prominent for power-blog user groups than for personal-blog user groups. While strong ties attract consumers with trustworthiness based on close emotional bonds, weak ties gain consumers' attention with new, non-redundant information (Levin & Cross, 2004). Thus, when the existing knowledge system used in strong ties does not work smoothly for addressing an impending problem, a weak-tie source can be used as a handy reference. We can therefore anticipate that power bloggers secure credibility by virtue of their expertise while personal bloggers trade on their trustworthiness. Our analysis demonstrates that power bloggers appeal more strongly to consumers than personal bloggers do in the area of expertise-based credibility. Finally, the effect of review credibility on willingness to accept a review is higher for the personal-blog user group than for the power-blog user group. The inference that review credibility is a potent predictor of willingness to accept a review is grounded in the analogy that attitude is an effective indicator of purchase intention; however, if memory of established attitudes is blocked, the predictive power of attitude on purchase intention is considerably diminished. Likewise, the effect of credibility on willingness to accept a review can be affected by certain moderators. Inspired by this analogy, we introduced tie strength as a possible moderator and demonstrated that it moderates the effect of credibility on willingness to accept a review. Previously, Levin and Cross (2004) showed that credibility mediates the effect of strong ties on the receipt of knowledge, but this mediation is not observed for weak ties, where a direct path is activated. Thus, the predictive power of credibility on behavioral intention, that is, willingness to accept a review, is expected to be higher for strong ties.


The Magnetic Properties and Quantum Effects of Molecular Nanomagnets (분자 자성체의 자기 특성과 양자역학적 효과)

  • Jang, Zee-Hoon
    • Journal of the Korean Magnetics Society
    • /
    • Vol. 14, No. 2
    • /
    • pp.83-88
    • /
    • 2004
  • The magnetism of molecular nanomagnets, which attracted a lot of academic attention after the discovery of macroscopic quantum tunneling of magnetization, is reviewed. A molecular nanomagnet is a metal-organic material in which magnetic ions are regularly located within an organic skeleton. The interaction between molecules is very small, and the molecules form a macroscopic molecular crystal in which they reside at the lattice points. Molecular nanomagnets show many interesting features, in particular the equivalence of macroscopic magnetic properties and molecular magnetic properties. This paper mainly reviews research on molecular nanomagnets using microscopic tools such as NMR. A new NMR method for observing the quantum tunneling of magnetization, discovered in Mn12-ac, is presented, along with research results on the microscopic aspects of macroscopic quantum tunneling of magnetization obtained with this method. The physical aspects of the level-crossing effect, originally reported with NMR in molecular nanomagnets, are also reviewed together with experimental results. Research on molecular nanomagnets will reveal important information about the limits of miniaturization of magnetic memory units and provide the basic scientific knowledge needed for applications in quantum computation. Moreover, many quantum mechanical theories whose validity has not yet been checked can be tested experimentally with these materials.