• Title/Summary/Keyword: Recognition Performance (인식 성능)


Probability-based Pre-fetching Method for Multi-level Abstracted Data in Web GIS (웹 지리정보시스템에서 다단계 추상화 데이터의 확률기반 프리페칭 기법)

  • 황병연;박연원;김유성
    • Spatial Information Research / v.11 no.3 / pp.261-274 / 2003
  • The effective probability-based tile pre-fetching algorithm and the collaborative cache replacement algorithm can reduce the response time for user requests in Web GISs (Geographic Information Systems) by transferring, in advance, tiles that are likely to be used and by determining which tiles should be evicted from a client's limited cache space based on future access probabilities. Web GISs maintain multi-level abstracted data to achieve quick response times for zoom-in and zoom-out queries. However, the previous pre-fetching algorithm operates only on a two-dimensional pre-fetching space and does not consider the expanded pre-fetching space required for multi-level abstracted data. In this thesis, a probability-based pre-fetching algorithm for multi-level abstracted data in Web GISs is proposed. The algorithm expands the previous two-dimensional pre-fetching space into a three-dimensional one so that tiles of upper or lower levels can also be pre-fetched. The effect of the proposed algorithm was evaluated by simulation; the experimental results show that the response time for user requests improved by 1.8% to 21.6% on average. Consequently, in Web GISs with multi-level abstracted data, the proposed pre-fetching algorithm combined with the collaborative cache replacement algorithm can substantially reduce the response time for user requests.
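
The expansion from a two- to a three-dimensional pre-fetching space can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the transition-count probability estimator, the neighbor enumeration, and the probability threshold are all assumptions introduced here.

```python
from collections import defaultdict

class TilePrefetcher:
    """Minimal sketch of probability-based pre-fetching over a
    three-dimensional tile space (zoom level, row, column).
    Transition counts from the access history approximate the
    future access probability of each neighboring tile."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold
        # transitions[current_tile][next_tile] -> observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def record_access(self, prev_tile, cur_tile):
        """Update the transition history after each user request."""
        if prev_tile is not None:
            self.transitions[prev_tile][cur_tile] += 1

    def neighbors_3d(self, tile):
        """Candidate tiles: the 2D neighbors plus the tiles one zoom
        level above and below (the expanded third dimension)."""
        level, row, col = tile
        cands = [(level, row + dr, col + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
        cands += [(level - 1, row // 2, col // 2),   # zoom-out parent
                  (level + 1, row * 2, col * 2)]     # zoom-in child
        return cands

    def tiles_to_prefetch(self, cur_tile):
        """Return neighbors whose estimated access probability
        exceeds the threshold."""
        counts = self.transitions[cur_tile]
        total = sum(counts.values()) or 1
        return [t for t in self.neighbors_3d(cur_tile)
                if counts[t] / total >= self.threshold]
```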


Traffic Forecasting Model Selection of Artificial Neural Network Using Akaike's Information Criterion (AIC(Akaike's Information Criterion)을 이용한 교통량 예측 모형)

  • Kang, Weon-Eui;Baik, Nam-Cheol;Yoon, Hye-Kyung
    • Journal of Korean Society of Transportation / v.22 no.7 s.78 / pp.155-159 / 2004
  • Recently, there have been many attempts to apply artificial neural networks (ANNs), in terms of both network structure and training method, to forecasting traffic volume. ANNs offer a powerful pattern-recognition capability through a flexible non-linear model. However, because of this non-linearity, ANNs with many parameters are prone to overfitting. This research examines the application of a variety of model selection criteria to mitigate the overfitting problem. In particular, it analyzes which selection criterion avoids overfitting and guarantees transferability over time. The results of this study are as follows. First, the model selected in-sample does not guarantee the best out-of-sample performance; that is, as many existing studies have found, the best in-sample model bears little relation to out-of-sample capability. Second, regarding the stability of model selection criteria, AIC3, AICC, and BIC are usable, whereas AIC4 shows large variation compared with the best model. For time-series analysis and forecasting, more quantitative data analysis and further time-series analysis are needed, because model uncertainty can affect the correlation between in-sample and out-of-sample performance.
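
The criteria compared in the paper follow standard definitions. As a reference, here is a minimal sketch that computes AIC, AIC3, AICC, and BIC from a model's residual sum of squares under a Gaussian likelihood approximation; the candidate model sizes and RSS values in the usage example are hypothetical.

```python
import numpy as np

def information_criteria(rss, n, k):
    """Standard information criteria for a model with k parameters
    fitted to n observations, using the Gaussian log-likelihood
    approximation -2*lnL ~ n*ln(RSS/n)."""
    neg2_lnL = n * np.log(rss / n)
    aic  = neg2_lnL + 2 * k
    aic3 = neg2_lnL + 3 * k                     # AIC with penalty factor 3
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    bic  = neg2_lnL + k * np.log(n)
    return {"AIC": aic, "AIC3": aic3, "AICC": aicc, "BIC": bic}

# Hypothetical comparison of candidate ANN sizes on the same data:
n = 500  # number of training observations (assumed)
for k, rss in [(10, 42.0), (25, 31.0), (60, 27.5)]:
    print(k, information_criteria(rss, n, k))
```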

Characteristics of Intrusion MO and Perception of Target Hardening of Burglars (침입절도범 재소자의 수법 특성과 타겟하드닝 관련 인식)

  • Park, Hyeonho;Kim, Kang-Il;Kim, Hyo-gun
    • Korean Security Journal / no.60 / pp.33-61 / 2019
  • It is quite difficult to prove empirically the effectiveness of so-called target hardening, one of the various strategies used to reduce crime. In particular, three to five minutes is often cited as the golden time within which intruders give up or stop, but this figure rests on foreign cases and some indirect Korean research; no previous study had directly identified the average break-in time, or the elapsed time at which burglars abandon an attempt when security hardware resists intrusion. This study, the first of its kind in Korea, surveyed 90 inmates convicted of break-in burglary in August 2018, profiling the criminal experience, education level, age, height, and weight of typical Korean professional burglars, as well as their specific methods, average break-in times, and the criteria for giving up when a target is not breached. According to the analysis, many of the respondents were experienced professional burglars, and their physical characteristics did not differ much from those of ordinary adult men. Residential facilities were the most frequently targeted, followed by commercial and educational facilities. Among residential facilities, single-family housing accounted for the largest portion, followed by multi-family housing, high-rise apartments (more than three stories), and low-rise apartments (one to three stories); the share of high-rise apartments was higher than expected. Since break-in times were not measured during the actual offenses, the reported average break-in times presumably reflect the perceived (psychological) time at targets that were difficult to breach. Regarding the time at which they would abandon a crime, more than half of the respondents said they would give up within four minutes, suggesting that a significant number of intrusion crimes can be prevented if a facility resists intrusion for four minutes, and that most intruders will give up if the security hardware resists for more than five minutes.

Effectiveness Analysis and Application of Phosphorescent Pavement Markings for Improving Visibility (축광노면표시 시인성 개선에 따른 경제성 분석 및 적용방안)

  • Yi, Yongju;Lee, Kyujin;Kim, Sangtae;Choi, Keechoo
    • KSCE Journal of Civil and Environmental Engineering Research / v.37 no.5 / pp.815-825 / 2017
  • The visibility of lane markings is impaired at night and in the rain, threatening traffic safety. Various studies and technologies have recently been developed to improve lane marking visibility, such as extending lane marking life expectancy (up to 1.5 times), improving the productivity of marking equipment, and improving visibility by applying paint mixed with phosphorescent material. A cost-benefit analysis was performed considering the various benefit items that can be expected. It was estimated that about 45% of relevant traffic accidents could be prevented by improving lane marking visibility. The accident reduction benefit and the traffic congestion reduction benefit (the latter from a longer repainting cycle due to enhanced durability) were calculated at 246 billion KRW and 12 billion KRW per year, respectively, and a further 45 billion KRW per year is expected from the improved lane detection performance of autonomous vehicles. Meanwhile, the total additional cost of introducing phosphorescent paint on the 91,195 km of nationwide roads was identified as 1,922 billion KRW per year, so economic feasibility could not be secured for the network as a whole, with a benefit-cost ratio of 0.16. However, when the analysis was restricted to accident hot spots (400 m section windows where pavement-marking-related accidents cause one or more fatalities or two or more injuries per year, or one or more injuries on roads with fewer than two lanes per direction), economic feasibility was secured: a benefit-cost ratio of 3.91 was estimated by comparing the installation cost for the 5,697 identified hot spots with the accident reduction benefit. Some limitations and a future research agenda are also discussed.
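
The network-wide benefit-cost ratio reported above can be reproduced directly from the quoted figures; the following quick check uses only values stated in the abstract.

```python
# Benefit-cost ratio check using the figures reported in the abstract
# (all values in billion KRW per year).
accident_benefit   = 246
congestion_benefit = 12
autonomous_benefit = 45
total_benefit = accident_benefit + congestion_benefit + autonomous_benefit  # 303

network_cost = 1922  # nationwide application, 91,195 km
print(round(total_benefit / network_cost, 2))  # -> 0.16, matching the abstract
```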

Techniques for Acquisition of Moving Object Location in LBS (위치기반 서비스(LBS)를 위한 이동체 위치획득 기법)

  • Min, Gyeong-Uk;Jo, Dae-Su
    • The KIPS Transactions:PartD / v.10D no.6 / pp.885-896 / 2003
  • The types of services using location information are becoming diverse and their domain is expanding as wireless internet technology develops and its applications spread, so LBS (Location-Based Services) are expected to become the killer application among wireless internet services. Location information is basic, high value-added information, and services built on it make conventional GIS (Geographic Information Systems) useful to anybody. Acquiring location information from moving objects is a very important part of LBS, and the interface for acquiring moving-object locations between the moving-object database (MODB) and the telecommunication network is likewise an essential function. When LBS become familiar to everybody, the system load from acquiring the locations of so many subscribers and vehicles can be expected to be heavy; that is, LBS platform performance degrades as the overhead of acquiring moving-object locations between the MODB and the wireless telecommunication network increases. To keep the LBS platform stable, the MODB system should reduce the number of unnecessary moving-object location acquisitions. We study the problems in acquiring the locations of a huge number of moving objects and design several acquisition models that use the past movement pattern of each object to reduce telecommunication overhead. After implementing these models, we evaluate the performance of each.
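
A minimal sketch of the core idea, reducing unnecessary acquisitions by predicting a moving object's position from its past movement, is shown below. The linear dead-reckoning predictor and the tolerance parameter are assumptions for illustration, not the acquisition models designed in the paper.

```python
class PatternBasedAcquirer:
    """Minimal sketch: the server predicts a moving object's position
    from its recent movement pattern and only queries the telecom
    network when the prediction may have drifted past a tolerance,
    reducing the number of location acquisitions."""

    def __init__(self, tolerance_m=200.0):
        self.tolerance_m = tolerance_m
        self.last_fix = None        # (t, x, y) of last network acquisition
        self.velocity = (0.0, 0.0)  # estimated from past fixes

    def update_fix(self, t, x, y):
        """Record a new network-acquired position and refresh the
        velocity estimate from the previous fix."""
        if self.last_fix is not None:
            t0, x0, y0 = self.last_fix
            dt = max(t - t0, 1e-9)
            self.velocity = ((x - x0) / dt, (y - y0) / dt)
        self.last_fix = (t, x, y)

    def predict(self, t):
        """Dead-reckoned position at time t from the last fix."""
        t0, x0, y0 = self.last_fix
        vx, vy = self.velocity
        return (x0 + vx * (t - t0), y0 + vy * (t - t0))

    def needs_acquisition(self, t, speed_error_mps=2.0):
        """Query the network only when the accumulated prediction
        error may exceed the tolerance."""
        t0, _, _ = self.last_fix
        return speed_error_mps * (t - t0) > self.tolerance_m
```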

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases / v.28 no.4 / pp.742-757 / 2001
  • Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem; however, existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of the problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks, and then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests, while allowing graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods: compared with them, it reduces the number of aborts and the average response time and increases the throughput to a great extent. In particular, the number of concurrent transactions can be increased more than 16- to 256-fold. The contribution of this paper is significant in that it formally analyzes the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or system administrators to handle excessive lock requests; in contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Adaptive lock escalation can thus contribute to developing the self-tuning DBMSs that draw much attention these days.
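
For context, a minimal sketch of classic (non-adaptive) lock escalation is shown below: row locks on a table are replaced by a single table lock once a threshold is exceeded, conserving limited lock resources. The threshold, capacity, and data structures are illustrative assumptions; the paper's semi lock escalation, lock blocking, and selective relief mechanisms are not reproduced here.

```python
from collections import defaultdict

class LockManager:
    """Minimal sketch of classic lock escalation: when a transaction
    holds more row locks on a table than a threshold allows, its row
    locks are replaced by one table lock, freeing lock resources."""

    def __init__(self, escalation_threshold=1000, capacity=100_000):
        self.threshold = escalation_threshold
        self.capacity = capacity           # total lock resources
        self.row_locks = defaultdict(set)  # (txn, table) -> set of row ids
        self.table_locks = set()           # (txn, table)

    def total_locks(self):
        return sum(len(r) for r in self.row_locks.values()) + len(self.table_locks)

    def lock_row(self, txn, table, row_id):
        if (txn, table) in self.table_locks:
            return  # already covered by a table lock
        held = self.row_locks[(txn, table)]
        held.add(row_id)
        if len(held) > self.threshold:
            # Escalate: one table lock replaces many row locks.
            self.table_locks.add((txn, table))
            del self.row_locks[(txn, table)]
        if self.total_locks() > self.capacity:
            # Without escalation this is where repeated aborts begin.
            raise MemoryError("lock resources exhausted; abort transaction")
```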


Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE / v.25 no.3 / pp.493-500 / 2021
  • In this paper, we propose a deep learning structure for improving the quality of polygonal containers. The structure consists of convolution layers, bottleneck layers, fully connected layers, and a softmax layer. A convolution layer obtains a feature image by applying several 3x3 convolution filters to the input image or to the feature image of the previous layer. A bottleneck layer selects only the most useful features from the feature image produced by the convolution layers: it reduces the number of channels with a 1x1 convolution followed by ReLU and then performs a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layers then reduces the size of the feature image while retaining its salient features. The fully connected part produces the output through six fully connected layers, and the softmax layer converts the resulting class scores into values between 0 and 1. After training is completed, the recognition stage classifies non-circular glass bottles by acquiring an image with a camera, detecting the position, and classifying the bottle with the trained deep network, in the same way as during training. To evaluate the performance of the proposed structure, an experiment was conducted at an authorized testing institute: the good/defective discrimination accuracy was 99%, comparable to the world's highest level, and the inspection time averaged 1.7 seconds, within the operating time standards of production processes using non-circular machine-vision systems. These results demonstrate the effectiveness of the proposed deep learning structure for improving the quality of polygonal containers.
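
A minimal PyTorch sketch of the described layer sequence (3x3 convolutions, a 1x1-then-3x3 bottleneck, global average pooling, six fully connected layers, softmax) might look as follows; the channel widths, the layer counts beyond those stated, and the two-class output are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PolygonalContainerNet(nn.Module):
    """Sketch of the described structure: 3x3 convolutions, a
    1x1 -> 3x3 bottleneck, global average pooling, six fully
    connected layers, and a softmax output."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            # bottleneck: 1x1 channel reduction, then 3x3 convolution
            nn.Conv2d(64, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        fc = []
        width = 64
        for _ in range(5):            # five hidden FC layers ...
            fc += [nn.Linear(width, width), nn.ReLU()]
        fc += [nn.Linear(width, num_classes)]  # ... plus the output layer: six in total
        self.classifier = nn.Sequential(*fc)

    def forward(self, x):
        x = self.features(x).flatten(1)
        # softmax turns class scores into values between 0 and 1
        return torch.softmax(self.classifier(x), dim=1)
```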

Analysis of Utilization and Maintenance of Major Agricultural machinery (Tractor, Combine Harvester and Rice Transplanter) (핵심 농기계(트랙터, 콤바인 및 이앙기) 이용 및 수리실태 분석)

  • Hong, Sungha;Choi, Kyu-hong
    • Journal of the Korean Society of International Agriculture / v.30 no.4 / pp.292-299 / 2018
  • In a survey in which farmers were asked about their levels of satisfaction with agricultural machines, Japanese products scored higher than local products by factors of 1.2, 1.3, and 1.4 for tractors, combine harvesters, and rice transplanters, respectively. Japanese products received generally high satisfaction ratings for operating performance, operability, frequency of breakdowns, and durability, though not for sales price and after-sales service. Effective countermeasures through quality improvement are therefore necessary for Korean products. Furthermore, a survey of dealers showed that a small set of components and consumables accounted for most breakdowns and repairs of core agricultural machines: four major components of tractors accounted for 85.3% of all breakdowns and repairs, five components of combine harvesters for 89.6%, and three components of rice transplanters for 80.5%. Moreover, a comparison of technological levels between local and imported machines showed that local machines stood at 60-100% for tractors, 70-100% for combine harvesters, and 70-95% for rice transplanters; small and mid-sized tractors, 4-row combine harvesters, and 6-row rice transplanters showed similar levels of technology. The results of the analysis suggest that policy action is urgently needed to establish an agricultural machinery component research center for the development, production, and supply of commonly used components, with the participation of manufacturers of agricultural machines and components, in order to enhance the competitiveness of local manufacturers and revitalize the agricultural machinery market.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark match against Lee Sedol. Many people thought machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew interest as the core technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems: it performs especially well in image recognition, and more broadly on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examine whether deep learning techniques, studied so far mainly for the recognition of high-dimensional data, can also be used for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data is the telemarketing response data of a bank in Portugal; it has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The deep learning techniques were applied in the experiment as follows. The CNN algorithm reads adjacent values and recognizes local features, but business data fields are usually independent, so field proximity carries no meaning; we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer for decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. A sketch of the CNN setup follows this abstract.
For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well not only in the fields where their effectiveness is proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
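
A minimal PyTorch sketch of the CNN configuration described above, with the 1-D convolution filter spanning all fields and dropout of 0.5 on the hidden layer, is given below; the number of fields, filters, and hidden units are assumptions. The F1 score on held-out data can then be computed with, e.g., sklearn.metrics.f1_score.

```python
import torch
import torch.nn as nn

NUM_FIELDS = 16  # assumed number of input fields after preprocessing

class TabularCNN(nn.Module):
    """Sketch of the CNN setup in the abstract: the 1-D convolution
    filter spans all input fields at once (so whole-record features
    are learned despite field independence), followed by a
    dropout-regularized hidden layer."""

    def __init__(self, num_fields=NUM_FIELDS, num_filters=32, hidden=64):
        super().__init__()
        # kernel_size == num_fields: each filter reads the entire record
        self.conv = nn.Conv1d(1, num_filters, kernel_size=num_fields)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(num_filters, hidden), nn.ReLU(),
            nn.Dropout(p=0.5),         # dropout probability used in the study
            nn.Linear(hidden, 1),      # binary output (use with BCEWithLogitsLoss)
        )

    def forward(self, x):              # x: (batch, num_fields)
        return self.head(self.conv(x.unsqueeze(1)))
```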

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • As content continues to overflow, selecting high-quality information that meets users' interests and needs becomes ever more important. Amid this flood of information, efforts are being made to reflect the user's intention in search results better, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because it constantly generates new information and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. From these processes, the study has three significances: first, it presents a practical and simple automatic knowledge extraction method; second, it makes performance evaluation possible through a simple problem definition; finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock and their entities are extracted using KKMA, a named entity recognition tool; for each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power, checking whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. Some limits and points to complement remain, however: notably, the especially poor performance for a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
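
The scoring step can be illustrated with the standard neural tensor network form (Socher et al., 2013). Note that this is a hypothetical sketch: the abstract trains one score function per stock over one-hot entity vectors but does not specify the exact functional form, so the entity-pair signature, the dimensions, and the slice count below are assumptions.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """Sketch of a neural tensor network score function: given two
    entity vectors e1, e2 it returns a scalar relatedness score
        u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b)."""

    def __init__(self, dim=100, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # tensor slices
        self.V = nn.Linear(2 * dim, k)                          # standard layer
        self.u = nn.Linear(k, 1, bias=False)

    def forward(self, e1, e2):
        # bilinear tensor product: one value per slice
        bilinear = torch.einsum('bi,kij,bj->bk', e1, self.W, e2)
        return self.u(torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=1))))

# Prediction rule from the abstract: score a new entity with every
# stock's trained function and pick the stock with the highest score.
# scores = {stock: fn(entity_vec, context_vec) for stock, fn in functions.items()}
# predicted = max(scores, key=scores.get)
```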