• Title/Summary/Keyword: data access pattern

Implementation of Memory controller for Punctuality Guarantee from Memory-Free Inspection Equipment using DDR2 SDRAM (DDR2 SDRAM을 이용한 비메모리 검사장비에서 정시성을 보장하기 위한 메모리 컨트롤러 구현)

  • Jeon, Min-Ho;Shin, Hyun-Jun;Kang, Chul-Gyu;Oh, Chang-Heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.136-139 / 2011
  • Conventional semiconductor test equipment has adopted SRAM modules as the test pattern memory, since SRAM has a simple design and does not require refreshing. However, SRAM takes up more board space as its capacity grows, which makes it difficult to meet the combined requirements of large memory and compact size. If DRAM is adopted in semiconductor inspection equipment instead, it takes up less space and costs less than SRAM; however, DRAM requires memory cell refresh, which is unsuitable for inspection equipment that demands exact timing. Therefore, in this paper we propose an algorithm that guarantees punctuality for non-memory inspection equipment using DDR2 SDRAM, and we implement a memory controller based on this punctuality guarantee algorithm.
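
A minimal Python sketch of the general idea, not the paper's actual controller: refresh commands are issued only when no pattern read is pending, so every read sees the same fixed latency. The scheduling policy and all constants are illustrative assumptions, not real DDR2 timing parameters.

```python
# Hypothetical sketch, not the paper's controller: interleave DDR2
# auto-refresh with test-pattern reads so a read is never stalled
# behind a refresh. Constants are made up, not real DDR2 timings.

REFRESH_INTERVAL = 64   # cycles between required refreshes (made up)
READ_LATENCY = 4        # fixed cycles consumed per pattern read (made up)

def schedule(total_cycles, read_requests):
    """Return (cycle, action) pairs; refresh is issued only when no
    read is pending, so every read sees the same fixed latency."""
    timeline = []
    next_refresh = REFRESH_INTERVAL
    pending = sorted(read_requests)
    cycle = 0
    while cycle < total_cycles:
        if pending and pending[0] <= cycle:
            timeline.append((cycle, "READ"))   # reads take priority
            pending.pop(0)
            cycle += READ_LATENCY
        elif cycle >= next_refresh:
            timeline.append((cycle, "REFRESH"))  # refresh fills idle gaps
            next_refresh += REFRESH_INTERVAL
            cycle += 1
        else:
            cycle += 1
    return timeline

print(schedule(200, [10, 70, 130]))
```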

Cache Sensitive T-tree Index Structure (캐시를 고려한 T-트리 인덱스 구조)

  • Lee Ig-hoon;Kim Hyun Chul;Hur Jae Yung;Lee Sang-goo;Shim JunHo;Chang Juho
    • Journal of KIISE:Databases / v.32 no.1 / pp.12-23 / 2005
  • In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. To reduce memory access latency, cache memory is incorporated in the memory subsystem, but caches can reduce memory latency only when the requested data is found in the cache, which depends mainly on the memory access pattern of the application. Previous research has shown that B+-trees perform much faster than T-trees because B+-trees are more cache conscious, and has also proposed 'Cache Sensitive B+-trees' (CSB+-trees) that are more cache conscious than B+-trees. The goal of this paper is to make T-trees as cache conscious as CSB+-trees. We propose a new index structure called 'Cache Sensitive T-trees' (CST-trees). We implemented CST-trees and compared their performance with that of other index structures.
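
For orientation, a minimal T-tree search sketch in Python (illustrative only, not the paper's CST-tree code): each node bounds a sorted key array and search descends to the node whose range covers the key. The cache sensitivity of CST-trees comes from sizing that key array to cache lines and laying children out contiguously, a memory-layout property that plain Python cannot express.

```python
# Minimal T-tree search sketch (illustrative, not the paper's CST-trees).
import bisect

class TNode:
    def __init__(self, keys, left=None, right=None):
        self.keys = keys      # sorted array; cache-line-sized in a C layout
        self.left = left
        self.right = right

def search(node, key):
    while node is not None:
        if key < node.keys[0]:
            node = node.left        # key is below this node's range
        elif key > node.keys[-1]:
            node = node.right       # key is above this node's range
        else:
            i = bisect.bisect_left(node.keys, key)
            return node.keys[i] == key  # bounding node: search inside it
    return False

root = TNode([40, 50, 60], TNode([10, 20, 30]), TNode([70, 80, 90]))
print(search(root, 20), search(root, 55))  # True False
```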

Interest-based Customer Segmentation Methodology Using Topic Modeling (토픽 분석을 활용한 관심 기반 고객 세분화 방법론)

  • Hyun, Yoonjin;Kim, Namgyu;Cho, Yoonho
    • Journal of Information Technology Applications and Management / v.22 no.1 / pp.77-93 / 2015
  • As the range of customer choice becomes more diverse, the average life span of companies' products and services is becoming shorter. Most companies strive to maximize revenue by understanding customers' needs and providing customized products and services. However, determining each individual customer's needs involves a significant burden of time and cost. Therefore, an alternative method is commonly employed: grouping customers into categories based on certain criteria and establishing a marketing strategy tailored to each group. In this way, customer segmentation and customer clustering are performed using demographic and behavioral information. Demographic information includes sex, age, and income level, while behavioral information is usually identified indirectly through customers' purchase and search histories. However, a company's behavioral information about its customers is limited, because it is usually obtained from the data a customer provides on that company's own website, and the pattern a customer shows on one particular site may not represent that customer's general tendencies. Therefore, in this study, a customer's interests are identified using the customer's access records for external news articles rather than the patterns shown on a particular site, and on this basis we propose a customer segmentation methodology. In addition, by extracting the main issues through a topic analysis of approximately 3,000 Internet news articles, we perform an actual customer segmentation experiment and analyze the applicability of the proposed methodology.
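
A hedged sketch of the pipeline this abstract describes, using scikit-learn as a stand-in (the paper does not specify its tooling): infer topics from news articles with LDA, represent each customer by the mean topic mixture of the articles they accessed, then cluster those profiles. All data below are made up.

```python
# Interest-based segmentation sketch: LDA topics -> customer profiles
# -> clustering. Articles, read logs, and parameters are illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

articles = ["stocks fund market rally", "soccer league final goal",
            "chip memory semiconductor fab", "market bond rates fund"]
reads = {"cust_a": [0, 3], "cust_b": [1], "cust_c": [2]}  # article ids read

X = CountVectorizer().fit_transform(articles)
topics = LatentDirichletAllocation(n_components=2,
                                   random_state=0).fit_transform(X)

# Each customer's profile: mean topic mixture of the articles they read.
profiles = np.array([topics[ids].mean(axis=0) for ids in reads.values()])
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(dict(zip(reads, segments)))
```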

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, recent databases have shifted from static structures to dynamic stream structures. Traditional data mining techniques have served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis, but the ability to analyze real-time data quickly is essential in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on each transaction, instead of on all the data at once. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs; since it extracts frequent patterns as soon as a new transaction arrives, it yields the latest mining results reflecting real-time information, which is why such algorithms are also called online mining approaches. We evaluate and compare the performance of the original algorithm, Lossy counting, and the more recent hMiner. As criteria for the performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, we evaluate their maximum memory usage. Lastly, we show how stably the two algorithms mine databases whose number of items gradually increases. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: hMiner stores candidate frequent patterns in a hash structure and can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage: hMiner must keep all of the information for each candidate frequent pattern in its hash buckets, while the lattice used by Lossy counting can share items that appear in multiple patterns, making its memory usage more efficient. However, hMiner is more efficient than Lossy counting in the scalability evaluation, for the following reasons: as the number of items increases, fewer items are shared, which weakens Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Their data structures therefore need to be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
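
Since the paper builds on Lossy counting, here is a minimal sketch of the classic algorithm in its item-level form (the paper applies the idea to full patterns over a landmark window; the `epsilon` error bound and the toy stream below are illustrative choices):

```python
# Classic Lossy counting, item-level version, for illustration.
# Counts are underestimates by at most epsilon * n.

def lossy_counting(stream, epsilon):
    width = int(1 / epsilon)           # bucket width
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = (n - 1) // width + 1  # current bucket id
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1  # maximum possible undercount
        if n % width == 0:             # prune at bucket boundaries
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return counts

stream = list("ababcabadaabe")
print(lossy_counting(stream, epsilon=0.2))
```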

Hybrid SVM/ANN Algorithm for Efficient Indoor Positioning Determination in WLAN Environment (WLAN 환경에서 효율적인 실내측위 결정을 위한 혼합 SVM/ANN 알고리즘)

  • Kwon, Yong-Man;Lee, Jang-Jae
    • Journal of Integrative Natural Science / v.4 no.3 / pp.238-242 / 2011
  • In pattern matching-based positioning algorithms for WLAN environments, the signal-to-noise ratio (SNR) characteristics of multiple access points (APs) are used to build a database in the training phase; in the estimation phase, the two-dimensional coordinates of the mobile unit (MU) are estimated by comparing newly recorded SNR values against the fingerprints stored in the database. A system that relies on an artificial neural network (ANN) alone can fall into local minima when learning large amounts of nonlinear data, and its classification accuracy drops. To mitigate this risk, this paper proposes a hybrid SVM/ANN algorithm in which an SVM first clusters the SNR data and the ANN then learns selectively within each cluster. This yields better position estimates than using an ANN alone and achieves higher classification accuracy by reducing the nonlinearity of the massive data during training. Experimental results indicate that the proposed SVM/ANN hybrid algorithm generally outperforms the ANN-only approach.
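
A rough scikit-learn sketch of the two-stage idea, under the assumption (not stated in the abstract) that the per-cluster model is a regressor from SNR to coordinates; the fingerprints below are synthetic:

```python
# Two-stage SVM/ANN sketch: SVM assigns a fingerprint to a coarse
# region, then a per-region neural network regresses (x, y).
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
snr = rng.normal(size=(200, 4))                    # SNR to 4 APs (synthetic)
xy = snr[:, :2] + 0.1 * rng.normal(size=(200, 2))  # fake ground-truth coords
region = (snr[:, 0] > 0).astype(int)               # two coarse regions

clf = SVC().fit(snr, region)                       # stage 1: SVM partition
nets = {r: MLPRegressor(max_iter=2000, random_state=0)
           .fit(snr[region == r], xy[region == r]) for r in (0, 1)}

sample = snr[:1]
r = clf.predict(sample)[0]                         # stage 2: regional ANN
print(nets[r].predict(sample))
```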

Test sequence control chip design of logic test using FPGA (FPGA를 이용한 logic tester의 test sequence control chip 설계 및 검증)

  • Kang, Chang-Hun;Choi, In-Kyu;Choi, Chang;Han, Hye-Jin;Park, Jong-Sik
    • Proceedings of the KIEE Conference / 2001.11c / pp.376-379 / 2001
  • In this paper, we design a control chip that controls the internal test sequence of a logic tester. The logic tester has thirteen internal instructions for controlling the test sequence, and these instructions are stored in memory together with the test pattern data. The control chip generates addresses and control signals, such as the memory read and write signals. Before testing, the necessary data, such as the start address and end address, are written to the control chip's internal registers. When the test starts, the control chip fetches and executes the instruction at the start address and generates the addresses and control signals needed to access the tester's internal memory; the whole test sequence is thus controlled through the addresses and control signals applied to that memory. The control chip implements each instruction's execution block separately, so revision is easy if internal instructions are added later. The control chip will be implemented with a Xilinx FPGA.
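
A hypothetical software model of such a sequencer, for illustration only: instructions and pattern data share one memory, and the controller fetches each instruction and drives addresses and strobes. The three-opcode instruction set below is invented and is not the paper's thirteen-instruction set.

```python
# Toy sequencer model: fetch instructions from memory between a start
# and end address and emit address/strobe activity. Opcodes are invented.

def run(memory, start, end):
    pc = start
    while pc <= end:
        op, arg = memory[pc]
        if op == "OUT":      # drive the pattern word at address arg
            print(f"addr={arg:04x} read-strobe")
        elif op == "JMP":    # loop back within the sequence
            pc = arg
            continue
        elif op == "HLT":    # end of the test sequence
            break
        pc += 1

program = {0: ("OUT", 0x100), 1: ("OUT", 0x101), 2: ("HLT", 0)}
run(program, start=0, end=2)
```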

A Study on the Improvement of Bayesian networks in e-Trade (전자무역의 베이지안 네트워크 개선방안에 관한 연구)

  • Jeong, Boon-Do
    • International Commerce and Information Review / v.9 no.3 / pp.305-320 / 2007
  • With the expanded use of B2B (between enterprises), B2G (between enterprises and government), and EDI (Electronic Data Interchange), and with the growing amount of available network information and threats to information protection, it has been judged that security cannot be perfectly assured by technologies such as electronic signature/authorization and access control alone, so Bayesian networks have been developed for the protection of information. This study therefore examines a Bayesian network system, centering on ERP (Enterprise Resource Planning). The Bayesian network system is one method of resolving uncertainty in electronic data interchange, and it is applied here to overcome the uncertainty of abnormal intrusion detection in ERP. Bayesian networks are used to construct profiles for system calls and network data, and abnormal intrusion detection is simulated against them. The host-based abnormal intrusion detection system in electronic trade analyzes system calls, applies Bayesian probability values, and constructs a normal behavior profile to detect abnormal behaviors. This study models the states before and after the delivery of an electronic document through Bayesian probability values and expresses the delivery behavior and events with Bayesian networks. The profiling process using Bayesian networks can thus be applied to abnormal intrusion detection on both hosts and networks. With respect to the transmission and reception of electronic documents, further studies are needed on standards that classify abnormal intrusions of various patterns in ERP and evaluate them by Bayesian probability values, and on the classification of a B2B intrusion pattern genealogy to effectively detect deformed abnormal intrusion patterns.
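
A toy sketch of the profiling idea: learn a "normal behavior" model over system-call sequences and flag low-probability sequences as abnormal. A real Bayesian network would condition each event on its parents in a graph; this first-order (Markov) approximation and the toy data are simplifying assumptions.

```python
# Toy normal-behavior profiling: score system-call sequences by their
# transition probabilities; low scores are flagged as abnormal.
from collections import defaultdict
import math

def train(sequences):
    trans = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            trans[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in trans.items()}

def log_prob(model, seq, floor=1e-6):
    # Unseen transitions get a tiny floor probability.
    return sum(math.log(model.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

normal = [["open", "read", "close"], ["open", "read", "write", "close"]]
model = train(normal)
print(log_prob(model, ["open", "read", "close"]))  # high: normal behavior
print(log_prob(model, ["open", "exec", "exec"]))   # low: flag as abnormal
```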

An Adjustment for a Regional Incongruity in Global land Cover Map: case of Korea

  • Park Youn-Young;Han Kyung-Soo;Yeom Jong-Min;Suh Yong-Cheol
    • Korean Journal of Remote Sensing / v.22 no.3 / pp.199-209 / 2006
  • The Global Land Cover 2000 (GLC 2000) project provides a harmonized land cover database over the whole globe for the year 2000. The classifications were performed at continental or regional scales by the corresponding organizations using data from the VEGETATION sensor onboard the SPOT 4 satellite. Although the global land cover classification for Asia provided by Chiba University showed good accuracy over the Asian area as a whole, some problems were detected in the Korean region. Therefore, the construction of a new land cover database over Korea using a more recent data set is strongly required. The present study focuses on the development of a new, upgraded land cover map at 1 km resolution over Korea using the widely used K-means clustering, an unsupervised classification technique that applies a distance function to land surface pattern classification, together with the principal components transformation. It is based on data sets from the Earth observing system SPOT 4/VEGETATION. The newly classified land cover was compared with GLC 2000 over the Korean peninsula using a confusion matrix to assess how well the classification performed.
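
A minimal sketch of the described pipeline (principal components transform followed by K-means clustering), using scikit-learn and a synthetic stand-in for the multi-band VEGETATION pixels; band count and cluster count are illustrative:

```python
# PCA + K-means land cover sketch over synthetic pixel spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.random((1000, 4))                  # rows: pixels, cols: bands

components = PCA(n_components=2).fit_transform(pixels)   # decorrelate bands
labels = KMeans(n_clusters=5, n_init=10,
                random_state=0).fit_predict(components)  # cluster patterns
print(np.bincount(labels))                      # pixels per land-cover class
```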

The Analysis of the Activity Patterns of Dog with Wearable Sensors Using Machine Learning

  • Hussain, Ali;Ali, Sikandar;Kim, Hee-Cheol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.141-143 / 2021
  • The activity patterns of animal species are difficult to assess, and the behavior of freely moving individuals cannot be evaluated by direct observation, so understanding the activity patterns of animals such as dogs and cats has become a significant challenge. One approach for monitoring these behaviors is the continuous collection of data by human observers. In this study, therefore, we assess the activity patterns of dogs using wearable sensor data from an accelerometer and a gyroscope. A wearable, sensor-based system is suitable for this purpose and can monitor the dogs in real time. The basic purpose of this study was to develop a system that detects activities based on accelerometer and gyroscope signals. We propose a method based on data collected from 10 dogs of nine different breeds, of different sizes and ages, and of both sexes. We applied six state-of-the-art classifiers: random forest (RF), support vector machine (SVM), gradient boosting machine (GBM), XGBoost, k-nearest neighbors (KNN), and a decision tree classifier. The random forest showed a good classification result, achieving an accuracy of 86.73% when detecting the activities.
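
A hedged sketch of such a pipeline: window the inertial signals, compute simple per-window statistics, and train a random forest (the study's best classifier). The windowing scheme, features, and data here are illustrative assumptions; random data will score near chance, unlike the study's 86.73%.

```python
# Windowed features from synthetic accel/gyro streams -> random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
signal = rng.normal(size=(6000, 6))      # 6 axes: accel + gyro (synthetic)
labels = rng.integers(0, 3, size=60)     # one activity label per window

windows = signal.reshape(60, 100, 6)     # 60 windows of 100 samples each
feats = np.hstack([windows.mean(axis=1), windows.std(axis=1)])  # 12 features

Xtr, Xte, ytr, yte = train_test_split(feats, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.2%}")
```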

Effects of Corpus Use on Error Identification in L2 Writing

  • Yoshiho Satake
    • Asia Pacific Journal of Corpus Research / v.4 no.1 / pp.61-71 / 2023
  • This study examines the effects of data-driven learning (DDL), an approach employing corpora for inductive language pattern learning, on error identification in second language (L2) writing. The data consist of error identification instances from fifty-five participants, compared across different reference materials: the Corpus of Contemporary American English (COCA), dictionaries, and no reference materials. There are three significant findings. First, the use of COCA effectively identified collocational and form-related errors through inductive inference drawn from multiple example sentences. Second, dictionaries were beneficial for identifying lexical errors, where meaning information was helpful. Finally, the participants often employed a strategic approach, identifying many simple errors without reference materials; while maximizing error identification, however, this strategy also led to mislabeling correct expressions as errors. The author concludes that the strategic selection of reference materials can significantly enhance the effectiveness of error identification in L2 writing. The use of a corpus offers advantages such as easy access to target phrases and frequency information, features that are especially useful given that most errors were collocational and form-related. The findings suggest that teachers should guide learners to use appropriate reference materials effectively to identify errors based on error types.
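
As a toy illustration of the kind of frequency evidence a corpus query provides when checking a collocation (COCA itself is accessed through its web interface; the three-sentence "corpus" below is made up):

```python
# Compare candidate collocations by trigram frequency in a toy corpus.
from collections import Counter

corpus = ("we made a decision quickly . she made a decision today . "
          "he thought it over and made a decision").split()
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

for phrase in [("made", "a", "decision"), ("did", "a", "decision")]:
    print(" ".join(phrase), trigrams[phrase])  # attested vs. unattested
```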