• Title/Summary/Keyword: higher order accuracy

Search Results: 791

Simple Recovery Mechanism for Branch Misprediction in Global-History-Based Branch Predictors Allowing the Speculative Update of Branch History (분기 히스토리의 모험적 갱신을 허용하는 전역 히스토리 기반 분기예측기에서 분기예측실패를 위한 간단한 복구 메커니즘)

  • Ko, Kwang-Hyun;Cho, Young-Il
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.6
    • /
    • pp.306-313
    • /
    • 2005
  • Conditional branch prediction is an important technique for improving processor performance. Branch mispredictions, however, waste a large number of cycles, inhibit out-of-order execution, and waste power on mis-speculated instructions. Hence, a branch predictor with higher accuracy is necessary for good processor performance. In global-history-based predictors such as gshare and GAg, many mispredictions come from updating the history only at commit. Previous work on this subject has discussed the need for speculative update of the history and for recovery mechanisms after branch mispredictions. In this paper, we present a simple mechanism for recovering the branch history after a misprediction. The proposed mechanism adds an age_counter to the original predictor and doubles the size of the branch history register. The age_counter counts the number of outstanding branches, and this count is used to recover the branch history register. Simulation results on the SimpleScalar 3.0/PISA tool set and the SPECint95 benchmarks show that gshare and GAg with the proposed recovery mechanism improved the average prediction accuracy by 2.14% and 9.21%, respectively, and the average IPC by 8.75% and 18.08%, respectively, over the original predictors.
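The recovery idea can be sketched in a few lines. The following is a hypothetical simplification, not the authors' hardware design: it assumes the mispredicted branch is the oldest outstanding one, so that shifting out `age_counter` speculative bits restores the committed history.

```python
class SpeculativeBHR:
    """Global branch history with speculative update and age_counter recovery.

    Sketch of the paper's idea: the register is kept at twice the predictor's
    index width so that enough committed bits survive a squash.
    """

    def __init__(self, index_bits):
        self.index_bits = index_bits       # bits used to index the pattern table
        self.width = 2 * index_bits        # doubled branch history register
        self.history = 0                   # youngest outcome bit in the LSB
        self.age_counter = 0               # number of outstanding branches

    def index(self):
        # The PHT index uses only the low-order half of the doubled register.
        return self.history & ((1 << self.index_bits) - 1)

    def speculative_update(self, predicted_taken):
        # Shift the predicted outcome in immediately, before the branch resolves.
        self.history = ((self.history << 1) | int(predicted_taken)) & ((1 << self.width) - 1)
        self.age_counter += 1

    def commit(self):
        # A correctly predicted branch retires; its history bit becomes permanent.
        if self.age_counter:
            self.age_counter -= 1

    def recover(self, actual_taken):
        # Misprediction (assumed at the oldest outstanding branch): discard all
        # speculative bits, then insert the now-resolved outcome.
        self.history >>= self.age_counter
        self.history = ((self.history << 1) | int(actual_taken)) & ((1 << self.width) - 1)
        self.age_counter = 0
```

The doubled width is what makes the single right-shift safe: even after squashing up to `index_bits` in-flight branches, the low half of the register still holds committed history.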

A Study on the Development of Plural Gravity Models and their Application Method (복수 중력모형의 구축과 적용방법에 관한 연구)

  • Ryu, Yeong-Geun
    • Journal of Korean Society of Transportation
    • /
    • v.31 no.2
    • /
    • pp.60-68
    • /
    • 2013
  • This study developed plural gravity models and an application method for them in order to increase the accuracy of trip distribution estimation. The developed method first uses the coefficient of determination ($R^2$) to set a target level. A gravity model is then created; if its coefficient of determination meets the target level, model creation is complete and the future trip distribution is estimated. If the coefficient of determination does not reach the target level, the zone pair with the largest standardized residual is removed from the model until the target level is obtained. The removed zone pairs are divided into positive (+) and negative (-) sides, and within each side gravity models are built until the target level is reached. When there are no more zone pairs to remove, the model-building process concludes and the future trip distribution is estimated. The newly developed plural gravity models and application method were applied to 42 zone pairs as a case study. The existing method of using only one gravity model exhibited a coefficient of determination ($R^2$) of 51.3%, whereas the newly developed method produced three gravity models and exhibited coefficients of determination ($R^2$) above 90%. The accuracy of the future trip distribution estimate was also found to be higher than that of the existing method.
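The removal loop described above can be sketched as follows. This is a minimal illustration on a log-linearized gravity model; the OLS fitting step and the variable names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_gravity(log_X, log_T):
    # Ordinary least squares on the log-linearized gravity model:
    # ln T_ij = b0 + b1*ln P_i + b2*ln A_j + b3*ln d_ij
    A = np.column_stack([np.ones(len(log_T)), log_X])
    beta, *_ = np.linalg.lstsq(A, log_T, rcond=None)
    resid = log_T - A @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((log_T - log_T.mean()) ** 2)
    return beta, resid, 1.0 - ss_res / ss_tot

def plural_gravity(log_X, log_T, target_r2=0.9):
    """Drop the zone pair with the largest standardized residual until the
    target R^2 is reached; removed pairs are split by residual sign so that
    separate side models can later be fitted to each group."""
    keep = np.arange(len(log_T))
    pos, neg = [], []                      # removed zone pairs, by residual sign
    while True:
        beta, resid, r2 = fit_gravity(log_X[keep], log_T[keep])
        if r2 >= target_r2 or len(keep) <= log_X.shape[1] + 2:
            return beta, r2, pos, neg
        std_resid = resid / resid.std(ddof=1)
        worst = int(np.argmax(np.abs(std_resid)))
        (pos if std_resid[worst] > 0 else neg).append(int(keep[worst]))
        keep = np.delete(keep, worst)
```

In the full method, `fit_gravity` would then be re-run on the `pos` and `neg` groups to produce the additional side models.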

A Study on the Design of Prediction Model for Safety Evaluation of Partial Discharge (부분 방전의 안전도 평가를 위한 예측 모델 설계)

  • Lee, Su-Il;Ko, Dae-Sik
    • Journal of Platform Technology
    • /
    • v.8 no.3
    • /
    • pp.10-21
    • /
    • 2020
  • Partial discharge occurs frequently in high-voltage power equipment such as switchgear and transformers. It shortens the life of the insulation and can cause insulation breakdown, resulting in large-scale damage such as a power outage. Several types of partial discharge occur inside the equipment and on its surface. In this paper, we design a predictive model that can predict the pattern and probability of occurrence of partial discharge. To analyze the designed model, training data for each type of partial discharge were collected through a UHF sensor, using a simulator that generates partial discharge. The predictive model is based on a convolutional neural network (CNN), and the model was verified through training. To train the model, 5,000 training samples were created; the 3-D raw data from the UHF sensor were pre-processed into 2-D data and used as input to the model. The experiments show that the trained model achieves an accuracy of 0.9972. The accuracy of the proposed model was higher when the data were converted into two-dimensional images and learned in grayscale form.
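The 3-D-to-2-D pre-processing step can be illustrated with a small sketch. The phase-resolved histogram below is an assumption about the data layout, not the authors' exact pipeline.

```python
import numpy as np

def uhf_to_grayscale(phase_deg, amplitude, bins=(64, 64)):
    """Collapse UHF pulse data (phase, amplitude, pulse count) into a 2-D
    phase-resolved histogram scaled to an 8-bit grayscale image."""
    hist, _, _ = np.histogram2d(phase_deg, amplitude, bins=bins,
                                range=[[0.0, 360.0], [0.0, 1.0]])
    if hist.max() > 0:
        hist = hist / hist.max()           # normalize bin counts to [0, 1]
    return (hist * 255).astype(np.uint8)   # grayscale input for the CNN
```

A fixed-size single-channel image such as this can be fed directly to a small CNN classifier, which matches the grayscale-input observation in the abstract.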


Distracted Driver Detection and Characteristic Area Localization by Combining CAM-Based Hierarchical and Horizontal Classification Models (CAM 기반의 계층적 및 수평적 분류 모델을 결합한 운전자 부주의 검출 및 특징 영역 지역화)

  • Go, Sooyeon;Choi, Yeongwoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.439-448
    • /
    • 2021
  • Driver negligence accounts for the largest proportion of the causes of traffic accidents, and research to detect it is continuously being conducted. This paper proposes a method to accurately detect a distracted driver and localize the most characteristic parts of the driver. The proposed method hierarchically constructs a CAM-based CNN base model that classifies 10 classes in order to detect driver distraction, together with four subclass models for detailed classification of classes that have confusing or common feature areas in the base model. The classification results output from each model can be regarded as new features indicating the degree of matching with the CNN feature maps, and classification accuracy is improved by horizontally combining and learning them. In addition, by combining the heat-map results reflecting the classification results of the base and detailed classification models, the characteristic areas of attention in the image are found. The proposed method obtained an accuracy of 95.14% in an experiment using the State Farm data set, 2.94% higher than the 92.2% that was previously the highest accuracy reported on this data set. The experiments also confirmed that more meaningful and accurate attention areas were found than when only the base model was used.
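The CAM step on which the heat-map localization builds can be sketched in plain NumPy, assuming a network that ends in global average pooling followed by one fully connected layer, as in the original CAM formulation:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Minimal class activation map: weight the final conv feature maps by the
    fully connected weights of the target class and sum over channels.

    feature_maps: (C, H, W) activations of the last convolutional layer
    fc_weights:   (num_classes, C) weights of the FC layer after GAP
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()          # normalize to [0, 1] for heat-map display
    return cam
```

Combining the normalized maps of the base and subclass models (for example by pixel-wise averaging) then yields the merged attention regions the paper describes.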

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning with the individual rules, is developed for pattern learning. If at least one context property matches, say R, the pattern is regarded as right. If the pattern is new, it is added as a right pattern, the values of mismatched properties are set to 0, the frequency to 1, and the weight to w(R, 1); otherwise, the frequency of the matched right pattern is increased by 1 and the weight set to w(R, freq). After training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. On the other hand, if at least one context property mismatches, say W, the pattern is regarded as wrong.
If the pattern is new, the result is modified into the correct answer, the pattern is added, and the frequency is set to 1 with weight w(W, 1); otherwise, the matched wrong pattern's frequency is increased by 1 and the weight set to w(W, freq). After training, if the frequency is greater than a threshold level, the wrong pattern is saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matching patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea; 400 context records were collected in 2009. We randomly selected 70% of the records as training data and used the rest as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. In a second round of experiments, we compared a full model to a partial model. The full model uses both right and wrong patterns to reason about the future context, while the partial model reasons only with right patterns, as is generally done in the legacy alignment-prediction method. The full model turned out to be better in terms of accuracy, while the partial model was better in terms of elapsed time.
In a last experiment, we took into consideration potential privacy problems that might arise among the users. To mediate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
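The right/wrong pattern bookkeeping can be reduced to a much-simplified sketch. The rule representation and the majority-vote step below are assumptions, and the paper's weighting scheme w(R, freq) is collapsed into a plain frequency threshold:

```python
from collections import Counter

class PatternPredictor:
    """Simplified right/wrong pattern learner. Each individual rule is
    assumed to be a dict mapping an attribute's current value to its
    predicted next value."""

    def __init__(self, rules, threshold=2):
        self.rules = rules            # {attr: {current_value: predicted_value}}
        self.threshold = threshold    # minimum frequency to trust a pattern
        self.right = Counter()        # right pattern -> frequency
        self.wrong = {}               # wrong pattern -> (corrected answer, frequency)

    def _pattern(self, context):
        # Tuple of each individual rule's prediction for the current context.
        return tuple(self.rules[a].get(v) for a, v in sorted(context.items()))

    def train(self, context, actual):
        p = self._pattern(context)
        if actual in p:               # at least one rule was right
            self.right[p] += 1
        else:                         # every rule missed: remember the correction
            freq = self.wrong.get(p, (actual, 0))[1] + 1
            self.wrong[p] = (actual, freq)

    def predict(self, context, default=None):
        p = self._pattern(context)
        if self.right[p] >= self.threshold:
            # majority vote among the rule outputs of a trusted right pattern
            return Counter(v for v in p if v is not None).most_common(1)[0][0]
        if p in self.wrong and self.wrong[p][1] >= self.threshold:
            return self.wrong[p][0]   # fall back to the learned correction
        return default
```

The ordering in `predict` mirrors the paper's combinatorial rules: right patterns are consulted first, wrong patterns second, and a default (standing in for the most-predictable single property) last.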

The Evaluation of Quantitative Accuracy According to Detection Distance in SPECT/CT Applied to Collimator Detector Response(CDR) Recovery (Collimator Detector Response(CDR) 회복이 적용된 SPECT/CT에서 검출거리에 따른 정량적 정확성 평가)

  • Kim, Ji-Hyeon;Son, Hyeon-Soo;Lee, Juyoung;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.2
    • /
    • pp.55-64
    • /
    • 2017
  • Purpose: Recently, with the spread of SPECT/CT, various image correction methods can be applied quickly and accurately, enabling quantitative accuracy as well as improved image quality. Among them, Collimator Detector Response (CDR) recovery is a correction method aiming at resolution recovery by compensating for the blurring effect caused by the distance between the detector and the object. The purpose of this study is to examine the quantitative change depending on the detection distance in SPECT/CT images with CDR recovery applied. Materials and Methods: To determine the error in acquisition counts as a function of detection distance, we set the detection distance according to orbit type: an X-, Y-axis radius of 30 cm for the circular orbit; X-, Y-axis radii of 21 cm and 10 cm for the non-circular orbits; and non-circular auto (automatic body contouring, ABC; spacing limit 1 cm). We applied two reconstruction methods, Astonish (3D-OSEM with CDR recovery) and OSEM (without CDR recovery), to find the difference in activity recovery depending on the use of CDR recovery. Attenuation correction, scatter correction, and decay correction were applied to all images. For the quantitative evaluation, a calibration scan (cylindrical phantom, $^{99m}TcO_4$ 123.3 MBq, water 9,293 ml) was obtained to calculate the calibration factor (CF). For the phantom scan, a 50 cc syringe was filled with 31 ml of water and $^{99m}TcO_4$ 123.3 MBq, and a phantom image was obtained. We set a VOI (volume of interest) over the entire volume of the syringe in the phantom image to measure the total counts for each condition, and obtained the error of the measured value against the true value derived using the CF, to check the quantitative accuracy of each correction.
Results: The calculated CF was 154.28 (Bq/ml per cps/ml), and the measured values against the true values under each condition were circular 87.5%, non-circular 90.1%, and ABC 91.3% for OSEM, and circular 93.6%, non-circular 93.6%, and ABC 93.9% for Astonish. The closer the detection distance, the higher the accuracy of OSEM, while Astonish showed almost the same values regardless of distance. The error was largest for OSEM circular (-13.5%) and smallest for Astonish ABC (-6.1%). Conclusion: The SPECT/CT images showed that when distance compensation is performed through CDR recovery, distant detection yields almost the same quantitative accuracy as close detection, and accurate correction is possible without being affected by changes in detection distance.
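The count-to-activity conversion behind these error figures can be written out in a few lines. The helper name and the numbers in the usage below are illustrative assumptions, not values from the study:

```python
def percent_error(total_counts, scan_seconds, volume_ml, cf, true_bq_per_ml):
    """Convert a VOI count total to an activity concentration using the
    calibration factor CF (Bq/ml per cps/ml), then return the percent error
    of the measured value against the true value."""
    cps_per_ml = (total_counts / scan_seconds) / volume_ml
    measured_bq_per_ml = cps_per_ml * cf
    return 100.0 * (measured_bq_per_ml - true_bq_per_ml) / true_bq_per_ml
```

A negative return value corresponds to under-recovery of counts, which is the direction of all the errors reported above (e.g. -13.5% for OSEM circular).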


A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon is the significant increase in unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of other information are most clearly expressed in text data such as news, reports, papers, and articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics: both use text documents as input data, and both rely on many natural language processing techniques such as filtering and parsing. Therefore, opinion mining is usually recognized as a sub-concept of text mining, or, in many cases, the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict the positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case; however, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification, and a precise definition of each is needed in order to distinguish between them. In this paper, we found that it is very difficult to clearly distinguish the two methods with respect to the purpose of the analysis and the type of results. We conclude that the most definitive criterion for distinguishing text mining from opinion mining is whether the analysis utilizes any kind of sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining. Next, we compared the main processes used by the two prediction models, and finally we compared their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion-mining-based model, the prediction accuracy for documents with strong certainty was higher than for documents with weak certainty. Above all, opinion mining has the meaningful advantage that it can reduce learning time dramatically, because a sentiment lexicon generated once can be reused in similar application domains; in addition, the classification results can be clearly explained by means of the sentiment lexicon. This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment was limited to a small number of movie reviews. Additionally, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining analysis for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
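The dividing criterion, use of a sentiment lexicon, can be illustrated with a minimal classifier. The tiny lexicon in the usage is invented for illustration:

```python
def lexicon_classify(tokens, lexicon):
    """Score a tokenized document with a sentiment lexicon: sum the polarity
    of every known token and threshold the total at zero."""
    score = sum(lexicon.get(token, 0) for token in tokens)
    return "positive" if score > 0 else "negative"
```

Because the lexicon is built once, the same dictionary can be reused on a new review set with no re-training, which is the learning-time advantage the study reports; a text-mining model, by contrast, must learn term weights from labeled documents in each new setting.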

Determination of dynamic threshold for sea-ice detection through relationship between 11 µm brightness temperature and 11-12 µm brightness temperature difference (11 µm 휘도온도와 11-12 µm 휘도온도차의 상관성 분석을 활용한 해빙탐지 동적임계치 결정)

  • Jin, Donghyun;Lee, Kyeong-Sang;Choi, Sungwon;Seo, Minji;Lee, Darae;Kwon, Chaeyoung;Kim, Honghee;Lee, Eunkyung;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.2
    • /
    • pp.243-248
    • /
    • 2017
  • Sea ice, an important component of the global climate system, is actively detected by satellite because it is distributed over polar and high-latitude regions, and satellite-based sea ice detection methods use reflectance and temperature data. The sea ice detection method of the Moderate Resolution Imaging Spectroradiometer (MODIS), a technique utilizing the Ice Surface Temperature (IST), has been used in many studies. In this study, we propose a simple and effective sea ice detection method using a dynamic threshold technique that requires no IST calculation process. To specify the dynamic threshold, pixels at or below the freezing point (MODIS IST of 273.0 K) were extracted. For the extracted pixels, we analyzed the relationship between the MODIS IST, the MODIS $11{\mu}m$ channel brightness temperature ($T_{11{\mu}m}$), and the brightness temperature difference (BTD: $T_{11{\mu}m}-T_{12{\mu}m}$). The analysis showed a linear relationship among the three values, and the threshold was specified using this relationship. For sea ice detection, a pixel is detected as sea ice under clear sky if its $T_{11{\mu}m}$ is below the specified threshold. To estimate the performance of the proposed method, the accuracy was analyzed against the MODIS sea ice extent product; the validation accuracy was higher than 99% in Producer's Accuracy (PA).
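The dynamic-threshold step can be sketched as follows. The linear-fit form and the function names are assumptions; in the paper, the relation is derived from pixels whose MODIS IST is at or below 273.0 K:

```python
import numpy as np

def fit_dynamic_threshold(t11_frozen, btd_frozen):
    # Linear fit T11 = slope * BTD + intercept over the frozen-pixel sample.
    slope, intercept = np.polyfit(btd_frozen, t11_frozen, 1)
    return slope, intercept

def detect_sea_ice(t11, btd, slope, intercept):
    # A pixel is flagged as sea ice (clear sky assumed) when its T11 is at or
    # below the BTD-dependent dynamic threshold.
    return t11 <= slope * btd + intercept
```

Making the threshold a function of BTD is what lets the method skip the explicit IST retrieval: the per-pixel BTD stands in for the emissivity and atmospheric information that the IST algorithm would otherwise provide.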

An Evaluation for Effectiveness of Information Services by Reference Librarians at College and University Libraries in Korea (대학도서관 정보사서의 정보서비스 효율성 평가)

  • Han Sang Wan
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.13
    • /
    • pp.95-119
    • /
    • 1986
  • The objective of this study is to search for a theoretical and practical answer to the question of what is the most effective and qualitative method of information service for college and university libraries in Korea. Assuming the maximum-service or total-service theory of information services, it appears natural that a subject specialist who is highly knowledgeable in his subject is indispensable for raising the quality of information librarians. The procedure of this research was as follows. No college or university library in Korea employed any full-time subject specialist; this research, however, proceeded on the assumption that subject specialists would already be employed in all college and university libraries once a subject specialist system was established. The minimum qualification of a subject specialist was limited, based on criteria given in the foreign literature, to those who have a master's degree in library science and a bachelor's degree in another subject area, those who have a bachelor's degree in library science and a master's degree in another subject area, or those who have both bachelor's and master's degrees in library science with a minor in another subject field. To test the research hypothesis that the subject specialist will perform his role more efficiently than the generalist in providing information service based on both accuracy and speed, this research, using an obtrusive testing method, analyzed effectiveness by presenting information questions to generalists and subject specialists working as information librarians in college and university libraries. For this study, 20 librarians working at 12 university libraries were tested for performance levels of information services. The results showed an absolute performance rate of 59.75% and an adjusted performance rate of 75.20%.
Compared to Thomas Childers' 1970 study, in which he used the unobtrusive testing method, these results were 5% higher in the absolute performance rate and 11.36% higher in the adjusted performance rate. Comparing the generalist with the subject specialist in efficiency of information service, the absolute performance rate was 57.08% and the adjusted performance rate 73.08% for the generalists, while the absolute rate was 63.75% and the adjusted rate 78.38% for the specialists; the efficiency of the subject specialists was therefore 6.67% higher in the absolute performance rate and 5.30% higher in the adjusted performance rate than that of the generalists. The factor of speediness was excluded from the analysis because of discrepancies between the times recorded by the interviewers and by the interviewees. On the basis of these results, it is desirable to educate subject specialists, employ them as information librarians, and have them function as efficient subject specialists in order to improve the effectiveness of information services, the nucleus of the raison d'etre of college and university libraries.


Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low; it can produce judgment errors above 30%. Therefore, an accurate steel plate fault diagnosis system has been continuously demanded by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify the various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, the reason being that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification because it establishes an individual Mahalanobis space for each class; 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, data on steel plate faults are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and the SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed steel plate fault diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. The proposed S-MTS-based diagnosis system achieves a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field because of the variable optimization process. These results show that the proposed system not only performs well at steel plate fault diagnosis but can also reduce operation and maintenance cost.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
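The core of the classifier, one Mahalanobis space per class with simultaneous distance comparison, can be sketched as follows. This is a bare-bones illustration; the orthogonal-array/SN-ratio variable optimization stage is omitted:

```python
import numpy as np

class SimultaneousMTS:
    """Sketch of the S-MTS idea: build one Mahalanobis space per class from
    its reference samples, then assign a test sample to the class with the
    smallest Mahalanobis distance."""

    def fit(self, X, y):
        self.spaces = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False)
            # pinv tolerates ill-conditioned covariance matrices
            self.spaces[c] = (mean, np.linalg.pinv(cov))
        return self

    def predict(self, X):
        def sq_dist(x, mean, inv_cov):
            d = x - mean
            return float(d @ inv_cov @ d)   # squared Mahalanobis distance
        return np.array([min(self.spaces,
                             key=lambda c: sq_dist(x, *self.spaces[c]))
                         for x in X])
```

In the full method, each class's distance would be computed only over the variables its SN-ratio analysis retained, which is also how the sensor-count reduction arises.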