• Title/Summary/Keyword: 순차적 탐색기법 (sequential search technique)

Analysis of Processes in Reading about 'Science Stories' in 6th Grade Science Textbook Using Eye-tracking (안구운동 추적 기법을 활용한 6학년 과학 교과서의 과학 이야기 읽기 과정 분석)

  • Park, Hyojeong;Shin, Donghoon
    • Journal of The Korean Association For Science Education
    • /
    • v.35 no.3
    • /
    • pp.383-393
    • /
    • 2015
  • This study analyzed how students read the 'Science stories' section of the 6th grade elementary science textbook using eye-movement tracking. Participants read 3 articles in the new experimental science textbooks and solved 9 problems about each article. Based on comprehension and academic achievement results, participants were divided into high, middle, and low groups. The eye-movement characteristics of the high and low groups differed as follows. The number of fixations and the number of regressions were higher in the high group. The average fixation duration and the average regressive fixation duration were longer in the low group. Fixation time on the key sentence of each article was longer in the high group. According to the scan-path analysis and post-interviews, the high group made frequent regressions between sentences, knew where the core of each article was, and paid much attention there. In contrast, the low group mostly read the articles sequentially, and some of its members made abnormally large jumps. The problem-solving approach also differed between groups. In conclusion, reading style is associated with comprehension of the science stories: students who made more regressions, searched more actively for the core content, distributed their attention effectively, and maintained high concentration showed better comprehension results. The words and sentences used in the textbooks are also associated with comprehension of the science stories.

Fixed Size Memory Pool Management Method for Mobile Game Servers (모바일 게임 서버를 위한 고정크기 메모리 풀 관리 방법)

  • Park, Seyoung;Choi, Jongsun;Choi, Jaeyoung;Kim, Eunhoe
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.9
    • /
    • pp.327-336
    • /
    • 2015
  • Mobile game servers usually perform frequent dynamic memory allocations to create the buffers that handle client requests. This degrades server performance because it increases system workload and memory fragmentation. In this paper, we propose a fixed-size memory pool management method. The memory pool for the proposed method has a sequential memory structure based on a circular linked list. It avoids the memory fragmentation problem and saves the time spent searching for memory blocks during allocation and deallocation. We demonstrate the efficiency of the proposed method by comparing its dynamic memory allocation performance with that of the memory pool management method in the Boost open-source library.
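As a rough illustration of the fixed-size pool idea described above (preallocate equally sized blocks once, then recycle them through a circular free list so allocation and release never search or split memory), here is a minimal Python sketch. It is not the authors' implementation; the `MemoryPool` class, block size, and pool size are assumptions made for the example.

```python
class MemoryPool:
    """Minimal fixed-size memory pool sketch.

    All buffers are preallocated with the same size, so allocation and
    deallocation never search or split blocks: they only move an index
    around a circular free list.
    """

    def __init__(self, block_size: int, block_count: int):
        self.block_size = block_size
        # Preallocate every block up front; no further dynamic allocation.
        self.blocks = [bytearray(block_size) for _ in range(block_count)]
        # Free list stored as "next free index" links arranged in a circle.
        self.next_free = [(i + 1) % block_count for i in range(block_count)]
        self.free_head = 0
        self.free_count = block_count

    def allocate(self) -> int:
        """Return the index of a free block in O(1)."""
        if self.free_count == 0:
            raise MemoryError("pool exhausted")
        idx = self.free_head
        self.free_head = self.next_free[idx]
        self.free_count -= 1
        return idx

    def release(self, idx: int) -> None:
        """Return a block to the pool in O(1); contents are not cleared."""
        self.next_free[idx] = self.free_head
        self.free_head = idx
        self.free_count += 1


pool = MemoryPool(block_size=4096, block_count=8)
buf_idx = pool.allocate()
pool.blocks[buf_idx][:5] = b"hello"   # use the buffer for a client request
pool.release(buf_idx)                 # recycle instead of freeing
```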

Multiple Target Position Tracking Algorithm for Linear Array in the Near Field (선배열 센서를 이용한 근거리 다중 표적 위치 추적 알고리즘)

  • Hwang Soo-Bok;Kim Jin-Seok;Kim Hyun-Sik;Park Myung-Ho;Nam Ki-Gon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.5
    • /
    • pp.294-300
    • /
    • 2005
  • Generally, traditional approaches to tracking target positions in the near field are to estimate ranges and bearings by the 2-D MUSIC (MUltiple SIgnal Classification) method, and to associate the 2-D MUSIC estimates made at different time points with the right targets by a JPDA (Joint Probabilistic Data Association) filter. However, the disadvantages of these approaches are that they suffer from the data association problem when tracking multiple targets, and that they require a heavy computational load to estimate a 2-D range/bearing spectrum. When multiple targets are adjacent, the tracking performance degrades seriously because the estimate of each target's position has a large error. In this paper, we propose a new tracking algorithm using position innovations extracted from the sensor output covariance matrix in the near field. The proposed algorithm is demonstrated by computer simulations of tracking multiple closing and crossing targets.

High-resolution range and velocity estimation method based on generalized sinusoidal frequency modulation for high-speed underwater vehicle detection (고속 수중운동체 탐지를 위한 일반화된 사인파 주파수 변조 기반 고해상도 거리 및 속도 추정 기법)

  • Jinuk Park;Geunhwan Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.4
    • /
    • pp.320-328
    • /
    • 2023
  • Underwater active target detection is vital for defense systems, requiring accurate detection and estimation of distance and velocity. Pulses must be transmitted sequentially at each beam angle, but dividing the pulse length leads to range ambiguity, and multi-frequency transmission incurs time-bandwidth product losses when the bandwidth is divided. To overcome these problems, we propose a novel method using Generalized Sinusoidal Frequency Modulation (GSFM) for rapid target detection, which yields low-correlation subpulses without dividing the bandwidth. The proposed method allows rapid updates of the distance and velocity of the target by employing GSFM with a minimized pulse length. To evaluate the method, we simulated an underwater environment with reverberation. In the simulation, a 0.05-s linear frequency modulation pulse produced an average distance estimation error of 50 % and a velocity estimation error of 103 % due to the limited frequency band. In contrast, GSFM accurately and quickly tracked targets with distance and velocity estimation errors of 10 % and 14 %, respectively, even with pulses of the same length. Furthermore, GSFM provided approximate azimuth information by transmitting highly orthogonal subpulses for each azimuth.
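To give a rough sense of why differently parameterized sinusoidal-FM subpulses can have low mutual correlation, the sketch below generates two such pulses and compares their normalized cross-correlation peak with the autocorrelation peak. The phase function is a simplified sinusoidal-FM stand-in rather than the exact GSFM parameterization used in the paper, and the sample rate, carrier, and modulation parameters are illustrative assumptions.

```python
import numpy as np

fs = 48_000.0          # sample rate (Hz), assumed
T = 0.05               # pulse length (s), matching the 0.05 s pulses in the abstract
t = np.arange(0.0, T, 1.0 / fs)

def sfm_pulse(fc: float, alpha: float, beta: float) -> np.ndarray:
    """Sinusoidal-FM pulse: carrier fc with a sinusoidally modulated phase.

    Simplified stand-in for the generalized form discussed in the paper.
    """
    phase = 2.0 * np.pi * fc * t + (alpha / beta) * np.sin(2.0 * np.pi * beta * t)
    return np.exp(1j * phase)

# Two subpulses with different modulation rates (parameter values are illustrative).
p1 = sfm_pulse(fc=6_000.0, alpha=2_000.0, beta=40.0)
p2 = sfm_pulse(fc=6_000.0, alpha=2_000.0, beta=90.0)

auto = np.max(np.abs(np.correlate(p1, p1, mode="full"))) / len(t)
cross = np.max(np.abs(np.correlate(p1, p2, mode="full"))) / len(t)
print(f"normalized autocorrelation peak : {auto:.2f}")
print(f"normalized cross-correlation peak: {cross:.2f}")   # noticeably lower
```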

Technique for Placing Continuous Media on a Disk Array under Fault-Tolerance and Arbitrary-Rate Search (결함허용과 임의 속도 탐색을 고려한 연속 매체 디스크 배치 기법)

  • O, Yu-Yeong;Kim, Seong-Su;Kim, Jae-Hun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1166-1176
    • /
    • 1999
  • End-user operations on continuous media (for example, video data) include arbitrary-rate search, pause, and other operations as well as normal-rate playback. Among these, FF (fast-forward) and FB (fast-backward) are useful for quickly finding a scene of interest, but unlike playback they require non-sequential disk access. When accesses are clustered on a few disks without considering load balance, service quality degrades. In this paper, we propose a new disk placement scheme, called PRRgp (Prime Round Robin with Grouped Parities), which improves reliability by using the disk storage space that is wasted in an existing scheme (PRR: Prime Round Robin), in which continuous media are placed on a disk-array-based storage system so as to distribute disk accesses uniformly. Like PRR, PRRgp balances the load across all disks of the array under arbitrary-rate search, and it additionally improves reliability by storing parity information in the previously wasted disk space. We use combinatorial and Markov models to evaluate the reliability, taking fault occurrence and fault recovery rates into account, and compare and analyze the results. When continuous media are placed with PRR and parity information is simply stored in the wasted space, recovery is impossible once two or more simultaneous faults occur; with PRRgp, two simultaneous faults can be recovered using the stored parity information in about 30 % or more of cases, and when there are two or more parity groups, two or more faults can also be recovered.
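The load-balance property PRR relies on (with a prime number of disks, reading every k-th block during FF/FB still spreads accesses across all disks) can be checked with a short sketch. The disk count, block numbering, and stride values below are illustrative assumptions, and the parity-group layout that PRRgp adds is not reproduced.

```python
from collections import Counter

def prr_disk_of(block: int, num_disks: int) -> int:
    """Round-robin placement: block i of a stream goes to disk i mod num_disks."""
    return block % num_disks

num_disks = 7          # prime, as PRR requires
num_blocks = 7 * 100   # blocks of one continuous-media stream

# Arbitrary-rate search reads every k-th block (k = playback speed-up factor).
for stride in (1, 2, 3, 5, 10):
    accessed = [prr_disk_of(b, num_disks) for b in range(0, num_blocks, stride)]
    load = Counter(accessed)
    print(f"stride {stride:2d}: per-disk load = {sorted(load.values())}")

# Because 7 is prime, every stride that is not a multiple of 7 spreads the
# accesses (nearly) evenly; with a composite disk count some disks would be
# skipped entirely for certain strides.
```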

GB-Index: An Indexing Method for High Dimensional Complex Similarity Queries with Relevance Feedback (GB-색인: 고차원 데이타의 복합 유사 질의 및 적합성 피드백을 위한 색인 기법)

  • Cha Guang-Ho
    • Journal of KIISE:Databases
    • /
    • v.32 no.4
    • /
    • pp.362-371
    • /
    • 2005
  • Similarity indexing and searching are well known to be difficult in high-dimensional applications such as multimedia databases. They become even more difficult when multiple features have to be indexed together. In this paper, we propose a novel indexing method called the GB-index that is designed to efficiently handle complex similarity queries as well as relevance feedback in high-dimensional image databases. In order to provide flexibility in controlling multiple features and query objects, the GB-index treats each dimension independently. The efficiency of the GB-index is realized by specialized bitmap indexing that represents all objects in a database as a set of bitmaps. The main contributions of the GB-index are three-fold: (1) it provides a novel way to index high-dimensional data; (2) it efficiently handles complex similarity queries; and (3) disjunctive queries driven by relevance feedback are treated efficiently. Empirical results demonstrate that the GB-index achieves great speedups over the sequential scan and the VA-file.
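A rough way to see how per-dimension bitmap indexing can serve complex (conjunctive and disjunctive) similarity predicates is sketched below: each dimension is quantized into bins, each bin keeps a bitmap of the objects falling into it, and a query is answered by AND/OR-ing bitmaps. This only illustrates the general bitmap idea, not the GB-index structure itself; the bin count, synthetic data, and query form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((1_000, 8))       # 1,000 objects, 8-dimensional features
num_bins = 16

# One bitmap per (dimension, bin): bit i is set if object i falls in that bin.
bins = np.minimum((data * num_bins).astype(int), num_bins - 1)
bitmaps = np.zeros((data.shape[1], num_bins, data.shape[0]), dtype=bool)
for dim in range(data.shape[1]):
    bitmaps[dim, bins[:, dim], np.arange(data.shape[0])] = True

def range_bitmap(dim: int, lo: float, hi: float) -> np.ndarray:
    """OR together the bitmaps of all bins overlapping [lo, hi] on one dimension."""
    lo_bin = max(int(lo * num_bins), 0)
    hi_bin = min(int(hi * num_bins), num_bins - 1)
    return bitmaps[dim, lo_bin:hi_bin + 1].any(axis=0)

# Conjunctive predicate ("near 0.5 on dim 0 AND near 0.2 on dim 3") combined with
# a disjunctive alternative on dim 5, the kind of query relevance feedback produces.
candidates = (range_bitmap(0, 0.4, 0.6) & range_bitmap(3, 0.1, 0.3)) | range_bitmap(5, 0.9, 1.0)
print(f"{candidates.sum()} candidate objects out of {len(data)}")
```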

Efficient Association Rule Mining based SON Algorithm for a Bigdata Platform (빅데이터 플랫폼을 위한 SON알고리즘 기반의 효과적인 연관 룰 마이닝)

  • Nguyen, Giang-Truong;Nguyen, Van-Quyet;Nguyen, Sinh-Ngoc;Kim, Kyungbaek
    • Journal of Digital Contents Society
    • /
    • v.18 no.8
    • /
    • pp.1593-1601
    • /
    • 2017
  • In a big data platform, association rule mining applications can bring real benefits. For instance, in an agricultural big data platform, an association rule mining application could recommend specific products for farmers to grow, which could increase their income. The key step of association rule mining is frequent itemset mining, which finds sets of products that frequently appear together. Earlier approaches to this problem, e.g. Apriori, are not satisfactory because the huge number of possible itemsets can overload memory. To deal with this, the SON algorithm was proposed, which divides the considered data into many smaller chunks and handles them sequentially. On a single machine, however, the SON algorithm is very time-consuming. In this paper, we present a method to find association rules on our Hadoop-based big data platform by parallelizing the SON algorithm. The entire association rule mining process, including pre-processing, SON-based frequent itemset mining, and association rule finding, is implemented on the Hadoop-based big data platform. Through experiments with a real dataset, we confirm that the proposed method outperforms a brute-force method.
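The two-pass structure of the SON algorithm (find locally frequent itemsets in each chunk with a proportionally lowered threshold, take their union as candidates, then verify the candidates in a second full pass) is easy to sketch. The single-machine Python version below is only an illustration of those two passes; in the paper they are parallelized as Hadoop jobs, and the chunking, itemset length, and support threshold here are assumed for the example.

```python
from itertools import combinations
from collections import Counter

def local_frequent(chunk, support, max_len=2):
    """Itemsets frequent within one chunk (threshold already scaled to the chunk)."""
    counts = Counter()
    for basket in chunk:
        for k in range(1, max_len + 1):
            counts.update(combinations(sorted(basket), k))
    return {iset for iset, c in counts.items() if c >= support}

def son(baskets, support, num_chunks=4, max_len=2):
    chunk_size = (len(baskets) + num_chunks - 1) // num_chunks
    chunks = [baskets[i:i + chunk_size] for i in range(0, len(baskets), chunk_size)]

    # Pass 1: the union of locally frequent itemsets is the candidate set.
    # (Anything frequent overall must be frequent in at least one chunk
    # under the proportionally lowered local threshold.)
    candidates = set()
    for chunk in chunks:
        local_support = max(1, support * len(chunk) // len(baskets))
        candidates |= local_frequent(chunk, local_support, max_len)

    # Pass 2: count only the candidates over the whole data set.
    counts = Counter()
    for basket in baskets:
        items = set(basket)
        for cand in candidates:
            if items.issuperset(cand):
                counts[cand] += 1
    return {iset: c for iset, c in counts.items() if c >= support}

baskets = [["rice", "pepper"], ["rice", "garlic"], ["rice", "pepper", "garlic"],
           ["pepper", "garlic"], ["rice", "pepper"], ["garlic"]] * 10
print(son(baskets, support=20))
```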

Development of Three-Dimensional Trajectory Model for Detecting Source Region of the Radioactive Materials Released into the Atmosphere (대기 누출 방사성물질 선원 위치 추적을 위한 3차원 궤적모델 개발)

  • Suh, Kyung-Suk;Park, Kihyun;Min, Byung-Il;Kim, Sora;Yang, Byung-Mo
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.1
    • /
    • pp.31-39
    • /
    • 2016
  • Background: Comprehensive countermeasures for analyzing nuclear activities are needed as nuclear facilities such as nuclear power and reprocessing plants increase in neighboring countries, including China, Taiwan, North Korea, Japan, and South Korea. South Korea and the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operate monitoring instruments to detect radionuclides released into the air. Estimating the origin of the measured radionuclides is as important as the monitoring analysis itself for investigating and verifying nuclear activities in neighboring countries. Materials and methods: A three-dimensional forward/backward trajectory model was developed to estimate the origin of radionuclides from a covert nuclear activity. The trajectory model is composed of forward and backward modules that track particle positions using a finite difference method. Results and discussion: The three-dimensional trajectory model was validated using data measured during the Chernobyl accident. The calculated results showed good agreement when high-concentration measurements and locations near the release point were used. The trajectory model has some uncertainty depending on the release time, release height, and time interval of the trajectory at each release point. An atmospheric dispersion model called the long-range accident dose assessment system (LADAS), based on the field-of-regard (FOR) technique, was applied to reduce the uncertainties of the trajectory model and to improve the detection technique for estimating the radioisotope emission area. Conclusion: The detection technique developed in this study can estimate the release area and origin of covert nuclear activities from radioisotopes measured at monitoring stations, and it can serve as a critical tool for improving capabilities in the nuclear safety field.
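The trajectory idea reduces to stepping particle positions through a wind field with a finite-difference update, forward in time from a suspected source or backward in time from a measurement site. The sketch below is a simplified two-dimensional illustration with an analytic wind field; the real model uses three-dimensional meteorological data, and the wind function, time step, and start point are assumptions.

```python
import numpy as np

def wind(x, y, t):
    """Illustrative analytic wind field (m/s); a real model interpolates gridded data."""
    u = 5.0 + 2.0 * np.sin(2.0 * np.pi * t / 86_400.0)   # zonal component
    v = 1.0 + 0.5 * np.cos(y / 1.0e5)                    # meridional component
    return u, v

def trajectory(x0, y0, t0, hours, dt=600.0, backward=False):
    """First-order finite-difference (Euler) integration of a particle path."""
    sign = -1.0 if backward else 1.0
    x, y, t = x0, y0, t0
    path = [(x, y)]
    for _ in range(int(hours * 3600 / dt)):
        u, v = wind(x, y, t)
        x += sign * u * dt       # x_{k+1} = x_k +/- u(x_k, t_k) * dt
        y += sign * v * dt
        t += sign * dt
        path.append((x, y))
    return path

# Backward trajectory: start from a monitoring station and run 24 h back in time
# to narrow down where the measured radionuclides could have been released.
back = trajectory(x0=0.0, y0=0.0, t0=86_400.0, hours=24, backward=True)
print(f"estimated upwind position after 24 h: "
      f"{back[-1][0] / 1000:.1f} km, {back[-1][1] / 1000:.1f} km")
```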

Mining Frequent Trajectory Patterns in RFID Data Streams (RFID 데이터 스트림에서 이동궤적 패턴의 탐사)

  • Seo, Sung-Bo;Lee, Yong-Mi;Lee, Jun-Wook;Nam, Kwang-Woo;Ryu, Keun-Ho;Park, Jin-Soo
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.127-136
    • /
    • 2009
  • This paper proposes an on-line algorithm for mining moving trajectory patterns in RFID data streams, considering their time-changing characteristics and the constraint of single-pass data scanning. Since RFID, sensor, and mobile network technologies have developed rapidly, many researchers have recently focused on gathering real-world data in real time and mining useful patterns from them. Previous approaches to sequential patterns or moving trajectory patterns over stream data are extremely time-consuming because of multi-pass database scans and tree traversal, and they also do not consider the time-changing characteristics of stream data. The proposed method preserves the sequential strength of length-2 frequent patterns in a binary relationship table using a time-evolving graph, so that changes in the RFID data stream are reflected exactly from one time point to the next. In addition, to avoid repeated data scans, the proposed algorithm infers candidate length-k moving trajectory patterns in advance at time point t and then extracts the patterns by screening the candidates in a single pass at time point t+1. Experiments show that the proposed method outperforms an Apriori-like method in both time and space complexity, with a candidate-set reduction ratio of about 7 percent.
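The candidate-then-verify step that avoids repeated scans (keep length-2 transition counts, join them Apriori-style into candidate length-3 trajectories at time t, then count only those candidates in one pass over the batch at t+1) can be sketched as follows. This is a simplified illustration, not the authors' time-evolving-graph structure; the example trajectories, support threshold, and restriction to length-3 candidates are assumptions.

```python
from collections import Counter

def count_pairs(trajectories):
    """Length-2 transition counts (a -> b) from the trajectories seen so far."""
    pairs = Counter()
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            pairs[(a, b)] += 1
    return pairs

def candidate_triples(frequent_pairs):
    """Apriori-style join: (a, b) and (b, c) both frequent -> candidate (a, b, c)."""
    return {(a, b, c)
            for (a, b) in frequent_pairs
            for (b2, c) in frequent_pairs
            if b == b2}

def verify(trajectories, candidates):
    """Single pass over the new batch to count the pre-generated candidates."""
    counts = Counter()
    for traj in trajectories:
        windows = set(zip(traj, traj[1:], traj[2:]))
        for cand in candidates:
            if cand in windows:
                counts[cand] += 1
    return counts

batch_t  = [("gate", "dock", "belt"), ("gate", "dock", "truck"), ("gate", "dock", "belt")]
batch_t1 = [("gate", "dock", "belt"), ("gate", "dock", "belt"), ("dock", "belt", "truck")]

min_support = 2
frequent_pairs = {p for p, c in count_pairs(batch_t).items() if c >= min_support}
candidates = candidate_triples(frequent_pairs)   # inferred at time point t
print(verify(batch_t1, candidates))              # confirmed in one pass at t+1
```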

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have addressed predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out various types of campaigns on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing: investment costs rise while the actual success rate remains low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing customer-related data and using them for campaigns. In particular, there have recently been attempts to predict campaign responses using machine learning. Because campaign data contain many different features, selecting appropriate features is very important. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data and used. In addition, when a trained model is generated using too many features, prediction accuracy may degrade due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), SFFS (Sequential Floating Forward Selection), and similar methods are widely used as traditional feature selection techniques. However, when the data contain many features, these methods are limited by poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method by using statistical characteristics of the data processed in the campaign system when searching for the feature subsets that underpin machine learning model performance. Features that strongly influence performance are derived first, features with a negative effect are removed, and the sequential method is then applied, improving search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), the proposed method achieved higher campaign success prediction. In addition, when predicting campaign success, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features.
    These include features such as age, customer rating, and sales, which were already known to be statistically important. In addition, features such as the bundled product name, the average data consumption rate over three months, and wireless data usage over the last three months, which campaign planners rarely used to select campaign targets, were unexpectedly selected as important features for the campaign response. This confirmed that basic attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
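For reference, the greedy baseline the study improves on, sequential forward selection (SFS), adds one feature at a time, keeping whichever addition yields the largest cross-validated gain. The sketch below is a minimal version using scikit-learn; the classifier, scoring, stopping rule, and synthetic data are illustrative choices, and the statistically guided pre-filtering that the paper adds on top of the sequential search is not shown.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, max_features):
    """Greedy SFS: at each step add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    model = LogisticRegression(max_iter=1000)
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(model, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best_score:          # stop when no candidate improves the score
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = score
    return selected, best_score

# Synthetic stand-in for campaign-response data: 20 features, 5 of them informative.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
features, score = sequential_forward_selection(X, y, max_features=8)
print(f"selected features: {features}, CV accuracy: {score:.3f}")
```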