• Title/Summary/Keyword: Sequential Searching Method


Signal Processing for Multiaxial Vibration Fatigue Test on Vehicle Component (자동차 부품에 대한 다축 진동내구 시험용 신호처리 방법)

  • Bae, Chul-Yong;Kim, Chan-Jung;Lee, Dong-Won;Lee, Bong-Hyun;Na, Byung-Chul
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.18 no.3
    • /
    • pp.368-374
    • /
    • 2008
  • The multi-axial simulation table (MAST) is widely used by motor companies as a multi-axial exciter for vibration fatigue testing of a target component, since it provides a vibrational condition close to that of the vehicle test. However, the vibration fatigue performance of the target component can be guaranteed with the MAST system only if the input profile covers the required severity experienced by the target component in the field test. In this paper, a signal processing procedure for multi-axial vibration fatigue testing of a vehicle component is presented, from data acquisition on the target component to the derivation of the input profile. To compare the severity of the vibration condition between the field and the proving ground, an energy principle of equivalent damage is proposed, and the optimal combination of special events on the proving ground is then determined using a sequential searching optimal algorithm. To explain the methodology clearly, the seat and door components of a vehicle are selected as examples.
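The abstract above describes a sequential searching optimal algorithm that accumulates proving-ground events until their combined severity matches the field measurement. As a rough illustration of that idea only, the sketch below implements a greedy sequential selection over per-event energy vectors; the channel energies, event names, and stopping rule are invented for the example and are not taken from the paper.

```python
import numpy as np

def sequential_event_selection(field_energy, event_energies, max_events=10):
    """Greedily pick proving-ground events whose summed per-channel energy
    best approaches the field-measured damage energy.

    field_energy   : (n_channels,) target energy vector from road data
    event_energies : dict {event_name: (n_channels,) energy vector}
    Events may repeat, mirroring repeated laps of a proving-ground event.
    """
    selected, accumulated = [], np.zeros_like(field_energy)
    for _ in range(max_events):
        # pick the event that most reduces the remaining shortfall
        best_name, best_gap = None, np.linalg.norm(field_energy - accumulated)
        for name, energy in event_energies.items():
            gap = np.linalg.norm(field_energy - (accumulated + energy))
            if gap < best_gap:
                best_name, best_gap = name, gap
        if best_name is None:          # no event improves the match any further
            break
        selected.append(best_name)
        accumulated = accumulated + event_energies[best_name]
    return selected, accumulated

# illustrative numbers only
field = np.array([12.0, 8.5, 4.2])            # x, y, z channel energies
events = {"belgian_block": np.array([5.0, 3.0, 1.0]),
          "washboard":     np.array([4.0, 2.5, 2.0]),
          "pothole":       np.array([2.5, 2.8, 1.0])}
print(sequential_event_selection(field, events))
```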

Preliminary Development of a Scale for the Measurement of Information Avoidance

  • Kap-Seon, KIM
    • Journal of Wellbeing Management and Applied Psychology
    • /
    • v.6 no.1
    • /
    • pp.23-31
    • /
    • 2023
  • Purpose: This study is a preliminary study toward developing a comprehensive information avoidance scale that covers various search contexts. Research design, data and methodology: This study is part of an exploratory sequential mixed-methods design for developing an information avoidance scale. Based on the themes derived from the analysis of in-depth interview data collected in the qualitative first stage of the study, 45 preliminary items on information search and avoidance were constructed. The factors related to information searching included information recognition, information-seeking purpose, and information search expectations. Individual, information, time, and system factors were related to information avoidance. Pearson's correlation analysis was performed on the correlations between factor items, and Cronbach's alpha analysis was performed for item reliability. Exploratory factor analysis was applied to examine the construct validity of the 35 information avoidance items. Results: Among the information avoidance items, one of the less relevant information purpose items, two information factor items, and one time factor item were excluded. Conclusions: A secondary survey should be conducted to confirm the validity and reliability of the scale composed of the adjusted items (35), based on the results of the exploratory factor analysis. The strength of this preliminary scale is that it was developed from vivid qualitative data of ordinary people with experience of search and avoidance in various search contexts.
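The reliability analysis mentioned above relies on Cronbach's alpha over the preliminary items. The short sketch below shows how that coefficient is computed from a response matrix; the toy Likert responses are invented purely for illustration and have no connection to the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# toy 5-point Likert responses (6 respondents x 4 items), illustrative only
responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 4, 5, 5],
                      [3, 3, 3, 4],
                      [4, 4, 5, 4],
                      [1, 2, 2, 1]])
print(round(cronbach_alpha(responses), 3))
```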

An Efficient Bitmap Indexing Method for Multimedia Data Reflecting the Characteristics of MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 특성을 반영한 효율적인 멀티미디어 데이타 비트맵 인덱싱 방법)

  • Jeong Jinguk;Nang Jongho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.9-20
    • /
    • 2005
  • Recently, the MPEG-7 standard, a multimedia content description standard, has come into wide use in content-based image/video retrieval systems. However, since the descriptors standardized in MPEG-7 are usually multidimensional and suffer from the so-called 'curse of dimensionality', previously proposed indexing methods (for example, multidimensional indexing methods, dimensionality reduction methods, filtering methods, and so on) cannot effectively index a multimedia database represented in MPEG-7. This paper proposes an efficient multimedia data indexing mechanism reflecting the characteristics of MPEG-7 visual descriptors. In the proposed indexing mechanism, the descriptor is transformed into a histogram of some attributes. By representing the value of each bin as a binary number, the histogram itself, which is the visual descriptor of an object in the multimedia database, can be represented as a bit string. The bit strings of all objects in the multimedia database are collected to form an index file, the bitmap index. By XORing these with the descriptor of the query object, the candidate solutions for a similarity search can be computed easily, and the candidates are then checked against the query object to compute the similarity precisely with an exact metric such as the L1-norm. These indexing and searching mechanisms are efficient because the filtering step is performed by simple bit operations and it reduces the search space dramatically. Experimental results with more than 100,000 real images show that the proposed indexing and searching mechanisms are about 15 times faster than sequential searching while retaining more than 90% accuracy.
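As described above, each descriptor is quantized into a histogram, each bin value is written as a short binary code, and candidate filtering is done by XORing bit strings before an exact L1-norm check. A minimal sketch under those assumptions follows; the 8-bin histograms, 4-bit bin codes, and candidate count are illustrative choices, not the paper's parameters.

```python
import numpy as np

BITS_PER_BIN = 4                       # illustrative quantization width

def to_bitstring(hist):
    """Quantize each histogram bin to BITS_PER_BIN bits and concatenate."""
    levels = (1 << BITS_PER_BIN) - 1
    q = np.clip((hist * levels).round().astype(int), 0, levels)
    return "".join(format(v, f"0{BITS_PER_BIN}b") for v in q)

def hamming(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

def search(query_hist, db_hists, keep=3):
    """Filter by XOR/Hamming distance on the bitmap index,
    then rank the surviving candidates by exact L1 distance."""
    qbits = to_bitstring(query_hist)
    index = [(i, to_bitstring(h)) for i, h in enumerate(db_hists)]
    candidates = sorted(index, key=lambda e: hamming(qbits, e[1]))[:keep]
    exact = [(i, float(np.abs(db_hists[i] - query_hist).sum()))
             for i, _ in candidates]
    return sorted(exact, key=lambda e: e[1])

rng = np.random.default_rng(0)
db = [rng.dirichlet(np.ones(8)) for _ in range(100)]   # toy 8-bin histograms
print(search(db[42], db))
```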

Low Leakage Input Vector Searching Techniques for Logic Circuits at Standby States (대기상태인 논리 회로에서의 누설전류 최소화 입력 탐색 방법)

  • Lee, Sung-Chul;Shin, Hyun-Chul
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.10
    • /
    • pp.53-60
    • /
    • 2009
  • Due to increased integration density and reduced threshold voltages, leakage current reduction has become important in semiconductor IC design for low power consumption. In a combinational logic circuit, the leakage current in the standby state depends on the input values. In this research, we developed a new input vector control method to minimize the leakage power, together with a new efficient algorithm for finding the minimal leakage vector. For a set of benchmark circuits, it reduces the leakage current during standby or idle states by 15.7% compared with the average leakage current and by 6.7% compared with the results of the simulated evolution method. For sequential circuits, the minimal leakage input vector, combined with an idle input signal, also reduces the leakage current by 6.8% compared with the average leakage current and by 3.2% compared with the results of the simulated evolution method.
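To make the problem concrete: for a small combinational block, a minimum-leakage input vector can be found by scoring candidate vectors against a per-gate leakage model. The sketch below only illustrates the idea with a random-restart greedy bit-flipping search, a toy three-NAND circuit, and made-up leakage values; it is not the paper's algorithm, which is a more elaborate heuristic.

```python
import itertools
import random

# illustrative per-gate leakage (arbitrary units) indexed by input pattern;
# a real design would use characterized transistor-level leakage tables
NAND2_LEAK = {(0, 0): 1.0, (0, 1): 2.3, (1, 0): 2.1, (1, 1): 3.4}

def circuit_leakage(bits):
    """Toy 4-input circuit built from three NAND2 gates."""
    a, b, c, d = bits
    g1 = 1 - (a & b)
    g2 = 1 - (c & d)
    return NAND2_LEAK[(a, b)] + NAND2_LEAK[(c, d)] + NAND2_LEAK[(g1, g2)]

def greedy_min_leakage(n_inputs=4, restarts=8, seed=1):
    """Random-restart greedy bit flipping toward a low-leakage input vector."""
    random.seed(seed)
    best_vec, best_leak = None, float("inf")
    for _ in range(restarts):
        vec = [random.randint(0, 1) for _ in range(n_inputs)]
        improved = True
        while improved:                       # flip bits while leakage drops
            improved = False
            for i in range(n_inputs):
                trial = vec[:]
                trial[i] ^= 1
                if circuit_leakage(trial) < circuit_leakage(vec):
                    vec, improved = trial, True
        if circuit_leakage(vec) < best_leak:
            best_vec, best_leak = vec, circuit_leakage(vec)
    return best_vec, best_leak

# exhaustive check is feasible for 4 inputs and confirms the greedy result
exact = min(itertools.product([0, 1], repeat=4), key=circuit_leakage)
print(greedy_min_leakage(), exact, circuit_leakage(exact))
```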

An Unified Spatial Index and Visualization Method for the Trajectory and Grid Queries in Internet of Things

  • Han, Jinju;Na, Chul-Won;Lee, Dahee;Lee, Do-Hoon;On, Byung-Won;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.9
    • /
    • pp.83-95
    • /
    • 2019
  • Recently, a variety of IoT data is collected by attaching geosensors to many vehicles on the road. IoT data inherently carry time and space information and consist of various measurements such as temperature, humidity, fine dust, CO2, etc. Although a given sensor reading can be retrieved using time, latitude, and longitude, which are the keys to the IoT data, advanced search engines that handle high-level user queries over IoT data are still limited. There is also the problem that searching large amounts of IoT data without an index wastes a great deal of time on sequential scans. In this paper, we propose a unified spatial index model that handles both grid and trajectory queries using a cell-based space-filling curve method, together with a visualization method that helps users grasp the results intuitively. The trajectory query aggregates the traffic over the trajectory cells passed by taxis on the road specified by the user. The grid query finds the cells on the specified road and aggregates the fine-dust readings. Based on the generated spatial index, the user interface quickly summarizes trajectory and grid queries for a specific road or for all roads, and a Web-based prototype system is presented in which the results can be analyzed intuitively through road and heat-map visualization.
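The cell-based space-filling-curve index described above can be pictured with a Z-order (Morton) encoding of latitude/longitude cells, after which per-cell aggregation becomes a dictionary lookup. The grid resolution and the choice of Z-order below are assumptions for illustration only; the paper's actual curve and cell scheme may differ.

```python
def interleave16(x):
    """Spread the lower 16 bits of x so one zero bit sits between each bit."""
    x &= 0xFFFF
    x = (x | (x << 8)) & 0x00FF00FF
    x = (x | (x << 4)) & 0x0F0F0F0F
    x = (x | (x << 2)) & 0x33333333
    x = (x | (x << 1)) & 0x55555555
    return x

def cell_id(lat, lon, cells=1 << 10):
    """Map a lat/lon pair to a Z-order cell id on a cells x cells grid."""
    row = int((lat + 90.0) / 180.0 * (cells - 1))
    col = int((lon + 180.0) / 360.0 * (cells - 1))
    return (interleave16(row) << 1) | interleave16(col)

# toy IoT records: (lat, lon, pm10); a grid query = mean fine dust per cell
records = [(35.94, 126.95, 41.0), (35.94, 126.96, 55.0), (35.95, 126.95, 38.0)]
grid = {}
for lat, lon, pm10 in records:
    grid.setdefault(cell_id(lat, lon), []).append(pm10)
for cid, values in grid.items():
    print(f"cell {cid}: mean PM10 = {sum(values) / len(values):.1f}")
```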

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many studies have been conducted in academia on predicting the success of campaigns for customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies carry out campaigns of many types on a scale that cannot be compared with the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From the corporate standpoint, there is also the problem that the effectiveness of the campaign itself is decreasing: the cost of investing in campaigns rises while the actual campaign success rate stays low. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. The ultimate purpose of a campaign system is to increase the success rate of various campaigns by collecting and analyzing customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is very important because of the great variety of features in campaign data. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data and used. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they are limited by poor classification-prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underlie machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features that have a negative effect are removed, and the sequential method is then applied to increase search efficiency and to enable generalized prediction with the improved algorithm. It was confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm. Campaign success prediction was higher than with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE). In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features.
    These include features such as age, customer rating, and sales, which were already known statistically to be important. Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average data consumption rate over three months, and wireless data usage in the last three months, were also selected as important for campaign response. It was confirmed that basic attributes can also be very important features depending on the type of campaign, which makes it possible to analyze and understand the important characteristics of each campaign type.
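The baseline the study starts from is plain sequential forward selection (SFS). A compact sketch of SFS is given below using scikit-learn for the wrapped classifier; the scoring model, synthetic data, and stopping rule are illustrative assumptions, and the paper's improved variant additionally pre-ranks and prunes features using statistical characteristics before the sequential pass.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, max_features=5):
    """Plain SFS: greedily add the feature that most improves CV accuracy."""
    model = LogisticRegression(max_iter=1000)
    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(model, X[:, selected + [f]], y, cv=3).mean(), f)
                  for f in remaining]
        score, feat = max(scores)
        if score <= best_score:        # stop when no candidate helps
            break
        selected.append(feat)
        remaining.remove(feat)
        best_score = score
    return selected, best_score

# synthetic campaign-response-like data, illustrative only
X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                           random_state=0)
print(sequential_forward_selection(X, y))
```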

GB-Index: An Indexing Method for High Dimensional Complex Similarity Queries with Relevance Feedback (GB-색인: 고차원 데이타의 복합 유사 질의 및 적합성 피드백을 위한 색인 기법)

  • Cha Guang-Ho
    • Journal of KIISE:Databases
    • /
    • v.32 no.4
    • /
    • pp.362-371
    • /
    • 2005
  • Similarity indexing and searching are well known to be difficult in high-dimensional applications such as multimedia databases. They become even more difficult when multiple features have to be indexed together. In this paper, we propose a novel indexing method called the GB-index, designed to efficiently handle complex similarity queries as well as relevance feedback in high-dimensional image databases. In order to provide flexibility in controlling multiple features and query objects, the GB-index treats each dimension independently. The efficiency of the GB-index is realized by specialized bitmap indexing that represents all objects in a database as a set of bitmaps. The main contributions of the GB-index are three-fold: (1) it provides a novel way to index high-dimensional data; (2) it efficiently handles complex similarity queries; and (3) disjunctive queries driven by relevance feedback are treated efficiently. Empirical results demonstrate that the GB-index achieves great speedups over the sequential scan and the VA-file.
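Since the abstract treats each dimension independently with bitmaps over all database objects, one way to picture the idea is a per-dimension, per-bucket bitmap where conjunctive and disjunctive query conditions become bitwise AND/OR over candidate sets. The bucket count and query forms below are illustrative assumptions, not the GB-index's actual design.

```python
import numpy as np

N_BUCKETS = 4   # illustrative per-dimension quantization, not the paper's value

def build_bitmaps(data):
    """For every (dimension, bucket) pair, store a bitmap (an integer whose
    i-th bit is set when object i falls into that bucket in that dimension)."""
    d = data.shape[1]
    buckets = np.minimum((data * N_BUCKETS).astype(int), N_BUCKETS - 1)
    bitmaps = {}
    for dim in range(d):
        for b in range(N_BUCKETS):
            mask = 0
            for obj in np.nonzero(buckets[:, dim] == b)[0]:
                mask |= 1 << int(obj)
            bitmaps[(dim, b)] = mask
    return bitmaps, buckets

def query(bitmaps, conditions, mode="and"):
    """conditions: list of (dimension, bucket) pairs. Bitwise AND gives a
    conjunctive filter; OR gives a disjunctive (relevance-feedback style) one."""
    result = bitmaps[conditions[0]]
    for cond in conditions[1:]:
        result = result & bitmaps[cond] if mode == "and" else result | bitmaps[cond]
    return [i for i in range(result.bit_length()) if (result >> i) & 1]

rng = np.random.default_rng(3)
data = rng.random((10, 3))                       # 10 objects, 3 dimensions
bitmaps, buckets = build_bitmaps(data)
# objects falling in the same buckets as object 7 in dimensions 0 and 1
print(query(bitmaps, [(0, int(buckets[7, 0])), (1, int(buckets[7, 1]))]))
# disjunctive query: objects in the lowest or highest bucket of dimension 0
print(query(bitmaps, [(0, 0), (0, N_BUCKETS - 1)], mode="or"))
```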

Combining A* and Genetic Algorithm for Efficient Path Search (효율적인 경로 탐색을 위한 A*와 유전자 알고리즘의 결합)

  • Kim, Kwang Baek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.7
    • /
    • pp.943-948
    • /
    • 2018
  • In this paper, we propose a hybrid approach combining A* and a genetic algorithm for the path search problem. In A*, the cost from the start node to an intermediate node is optimized in principle, while the path from that intermediate node to the goal node is generated and tested based on the accumulated cost, and the next node to test is chosen from a priority queue. In that process, we adopt the genetic algorithm principle: the group of candidate successor nodes of an intermediate node is evaluated by a fitness function, the top two nodes are selected, and crossover or mutation operations are applied to generate the next generation. If the generated nodes are qualified, they are inserted into the priority queue. The proposed method is compared with the original sequential selection and with random selection of the next search path in the A* algorithm, and the results verify the superiority of the proposed method.
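The hybrid described above ranks the successor candidates of each expanded node by a fitness function, keeps the top two, and applies crossover/mutation before inserting qualified nodes into the A* priority queue. The sketch below is only one possible reading of that scheme on an 8-connected grid; the fitness definition, the coordinate-mixing crossover, and the qualification test are illustrative assumptions rather than the paper's exact operators.

```python
import heapq
import random

def heuristic(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))   # Chebyshev, 8-connected

def ga_guided_a_star(grid, start, goal, mutation_rate=0.2, seed=0):
    """A* where successor candidates are ranked by fitness -(g + h); the top
    two survive, a 'crossover' child mixes their coordinates, and a random
    leftover candidate is occasionally 'mutated' in before queue insertion."""
    random.seed(seed)
    rows, cols = len(grid), len(grid[0])
    free = lambda p: 0 <= p[0] < rows and 0 <= p[1] < cols and grid[p[0]][p[1]] == 0
    open_heap = [(heuristic(start, goal), 0, start, [start])]
    seen = {start}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        neighbors = [(node[0] + dr, node[1] + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
        candidates = [p for p in neighbors if free(p) and p not in seen]
        if not candidates:
            continue
        # selection: rank by fitness -(g + h), keep the two fittest
        candidates.sort(key=lambda p: g + 1 + heuristic(p, goal))
        chosen = candidates[:2]
        # crossover: mix coordinates of the two fittest candidates
        if len(chosen) == 2:
            child = (chosen[0][0], chosen[1][1])
            if free(child) and child in neighbors and child not in seen and child not in chosen:
                chosen.append(child)
        # mutation: occasionally admit a random leftover candidate
        rest = [p for p in candidates[2:] if p not in chosen]
        if rest and random.random() < mutation_rate:
            chosen.append(random.choice(rest))
        for p in chosen:                      # qualified nodes enter the queue
            seen.add(p)
            heapq.heappush(open_heap, (g + 1 + heuristic(p, goal), g + 1, p, path + [p]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(ga_guided_a_star(grid, (0, 0), (3, 3)))
```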

Multi-axial Vibration Testing Methodology of Vehicle Component (자동차 부품에 대한 다축 진동내구 시험방법)

  • Kim, Chan-Jung;Bae, Chul-Yong;Lee, Dong-Won;Kwon, Seong-Jin;Lee, Bong-Hyun;Na, Byung-Chul
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.11a
    • /
    • pp.297-302
    • /
    • 2007
  • Vibration testing of a vehicle component can be performed on lab-based simulators instead of in field tests, owing to advances in control algorithms as well as computational processing. Currently, the Multi-Axial Simulation Table (MAST) is recommended as the vibration equipment; it excites a target component in three translational and three rotational directions simultaneously, so the vibrational condition can closely approximate that of a real road test. However, the vibration fatigue performance of the target component is not guaranteed by the MAST system alone, which is only a simulator controlled by its operator. Rather, the reliability of a multi-axial vibration test depends on the quality of the input profile, which should cover the required severity of the vibrating condition on the target component. In this paper, a multi-axial vibration testing methodology for vehicle components is presented, from the acquisition of vehicle acceleration data to the derivation of the MAST input profile using severe-event data from the proving ground. To compare the severity of the vibration condition between the real road test and the proving ground, an energy principle of equivalent damage is proposed to calculate energy matrices from the acceleration data; the optimal combination of special events on the proving ground that is equivalent to the real road test in terms of vibration fatigue is then determined using a sequential searching optimal algorithm. To explain the methodology clearly, the seat and door components of a vehicle are selected as examples.
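The comparison step above rests on reducing each acceleration recording to a damage-related energy measure per channel so that field and proving-ground severities can be matched before the sequential search over events. The sketch below uses a simple mean-square-times-duration energy as a stand-in; the paper's actual energy matrix formulation is not reproduced here, and the sampling rate and signals are invented for illustration.

```python
import numpy as np

def channel_energy(acc, dt):
    """Crude severity measure for one acceleration channel:
    mean-square acceleration integrated over the record duration."""
    acc = np.asarray(acc, dtype=float)
    return float(np.mean(acc ** 2) * len(acc) * dt)

def energy_matrix(records, dt):
    """records: dict {event_name: (n_samples, n_channels) acceleration array}.
    Returns {event_name: per-channel energy vector}."""
    return {name: np.array([channel_energy(a[:, ch], dt)
                            for ch in range(a.shape[1])])
            for name, a in records.items()}

# synthetic 3-channel accelerations, illustrative only
rng = np.random.default_rng(0)
dt = 1.0 / 512.0                       # 512 Hz sampling assumed
records = {"field_route": rng.normal(0.0, 2.0, (4096, 3)),
           "belgian_block": rng.normal(0.0, 1.5, (2048, 3))}
for name, e in energy_matrix(records, dt).items():
    print(name, np.round(e, 2))
```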


Efficient Browsing Method based on Metadata of Video Contents (동영상 컨텐츠의 메타데이타에 기반한 효율적인 브라우징 기법)

  • Chun, Soo-Duck;Shin, Jung-Hoon;Lee, Sang-Jun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.5
    • /
    • pp.513-518
    • /
    • 2010
  • The advancement of information technology, along with the proliferation of communication and multimedia, has increased the demand for digital content. Video data in digital content services such as VOD, NOD, digital libraries, IPTV, and UCC are spreading into ever more application fields. Video data are sequential in nature and carry spatial and temporal information in a 3-D format, which makes searching or browsing inefficient due to long turnaround times. In this paper, we propose ATVC (Authoring Tool for Video Contents) to address this issue. ATVC is a video editing tool that detects key frames using visual rhythm and inserts metadata such as keywords into key frames via XML tagging. Visual rhythm maps 3-D spatial and temporal information to 2-D information; it is fast because pixel information can be obtained without an IDCT, and it can classify editing effects such as cuts, wipes, and dissolves. Since the XML data store key-frame and keyword information via XML tags, they enable efficient browsing.
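Visual rhythm, as used above, samples a fixed line of pixels (commonly the diagonal) from every frame so the whole video collapses into a single 2-D image in which hard cuts appear as abrupt vertical discontinuities. A minimal sketch of that reduction and a simple cut detector follows; the diagonal sampling line, the threshold, and the synthetic frames are illustrative choices, not ATVC's implementation.

```python
import numpy as np

def visual_rhythm(frames):
    """Stack the main-diagonal pixels of each grayscale frame as one column,
    producing a (diag_len, n_frames) visual-rhythm image."""
    return np.stack([np.diagonal(f) for f in frames], axis=1)

def detect_cuts(rhythm, threshold=50.0):
    """Flag frame indices where adjacent columns differ strongly,
    which is how hard cuts show up in the visual rhythm."""
    diffs = np.abs(np.diff(rhythm.astype(float), axis=1)).mean(axis=0)
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# two synthetic 64x64 "shots" with different brightness, illustrative only
rng = np.random.default_rng(1)
shot_a = [rng.integers(0, 50, (64, 64)) for _ in range(20)]
shot_b = [rng.integers(180, 230, (64, 64)) for _ in range(20)]
rhythm = visual_rhythm(shot_a + shot_b)
print(detect_cuts(rhythm))     # expected to report the boundary near frame 20
```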