• Title/Summary/Keyword: Nearest Neighbors Search

A Scalable Index for Content-based Retrieval of Large Scale Multimedia Data (대용량 멀티미디어 데이터의 내용 기반 검색을 위한 고확장 지원 색인 기법)

  • Choi, Hyun-Hwa;Lee, Mi-Young;Lee, Kyu-Chul
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.726-730 / 2009
  • The proliferation of the web and digital photography has drastically increased the volume of multimedia data and created demand for high-quality internet services built on moving pictures such as user-generated content (UGC). Keyword-based search over large-scale image and video collections is expensive and requires considerable manual intervention, so a web search engine may instead provide content-based retrieval of multimedia data to improve search accuracy and customer satisfaction. In this paper, we propose a novel distributed index structure based on signature files of multiple lengths chosen according to the data distribution. In addition, we describe how our scalable index technique can be used to find nearest neighbors in cluster environments.

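A minimal Python sketch of the general signature-file idea behind the abstract above: feature vectors are reduced to short bit signatures, candidates are pruned by Hamming distance on the signatures, and only the survivors are compared exactly. This is an illustrative single-machine baseline, not the paper's distributed multiple-length signature scheme; the data, the random-hyperplane signature function, and all parameters are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signatures(vectors, planes):
    """Sign-of-projection signatures: one bit per random hyperplane."""
    return (vectors @ planes.T > 0).astype(np.uint8)

def hamming(a, b):
    return np.count_nonzero(a != b)

# Toy collection of 64-dimensional feature vectors (stand-ins for image/video descriptors).
data = rng.normal(size=(1000, 64))
planes = rng.normal(size=(32, 64))            # 32-bit signatures
sigs = make_signatures(data, planes)

def nearest_neighbors(query, k=5, candidate_pool=50):
    q_sig = make_signatures(query[None, :], planes)[0]
    # Filter step: keep the items whose signatures are closest in Hamming distance.
    ham = np.array([hamming(q_sig, s) for s in sigs])
    candidates = np.argsort(ham)[:candidate_pool]
    # Refine step: exact Euclidean distances only on the surviving candidates.
    dists = np.linalg.norm(data[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

print(nearest_neighbors(rng.normal(size=64)))
```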

Form-finding of lifting self-forming GFRP elastic gridshells based on machine learning interpretability methods

  • Kookalani, Soheila;Nyunn, Sandy;Xiang, Sheng
    • Structural Engineering and Mechanics / v.84 no.5 / pp.605-618 / 2022
  • Glass fiber reinforced polymer (GFRP) elastic gridshells consist of long continuous GFRP tubes shaped through elastic deformation. In this paper, a method for the form-finding of gridshell structures is presented based on interpretable machine learning (ML) approaches. A comparative study is conducted on several ML algorithms, including support vector regression (SVR), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), AdaBoost, XGBoost, category boosting (CatBoost), and light gradient boosting machine (LightGBM). A numerical example is presented using a standard double-hump gridshell, considering two characteristics of deformation as objective functions. The combination of the grid search approach and k-fold cross-validation (CV) is implemented for fine-tuning the parameters of the ML models. The results of the comparative study indicate that the LightGBM model provides the highest prediction accuracy. Finally, interpretable ML approaches, including Shapley additive explanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), are applied to explain the predictions of the ML model, since it is essential to understand the effect of various values of the input parameters on the objective functions. Based on the results of the interpretability approaches, an optimum gridshell structure is obtained, and new opportunities are identified for the form-finding investigation of GFRP elastic gridshells during lifting construction.
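
A minimal sketch of the tuning-and-interpretation workflow the abstract describes, assuming the scikit-learn, LightGBM, and SHAP packages are available: grid search with k-fold cross-validation selects LightGBM hyperparameters, and SHAP values then attribute predictions to the input parameters. The data, parameter grid, and feature count are placeholders rather than the paper's gridshell dataset.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from lightgbm import LGBMRegressor
import shap

rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 6))                            # stand-in design parameters
y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=500)  # stand-in deformation objective

param_grid = {
    "num_leaves": [15, 31],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [200, 400],
}
search = GridSearchCV(
    LGBMRegressor(random_state=42),
    param_grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=42),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("best params:", search.best_params_)

# SHAP values attribute each prediction to the individual input parameters.
explainer = shap.TreeExplainer(search.best_estimator_)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```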

Accuracy Evaluation of Machine Learning Model for Concrete Aging Prediction due to Thermal Effect and Carbonation (콘크리트 탄산화 및 열효과에 의한 경년열화 예측을 위한 기계학습 모델의 정확성 검토)

  • Kim, Hyun-Su
    • Journal of Korean Association for Spatial Structures / v.23 no.4 / pp.81-88 / 2023
  • Numerous factors contribute to the deterioration of reinforced concrete structures. Elevated temperatures significantly alter the composition of the concrete ingredients, consequently diminishing the concrete's strength properties. With the escalation of global CO2 levels, the carbonation of concrete structures has emerged as a critical challenge, substantially affecting concrete durability research. Assessing and predicting concrete degradation due to thermal effects and carbonation are crucial yet intricate tasks. To address this, multiple prediction models for concrete carbonation and compressive strength under thermal impact have been developed. This study employs seven machine learning algorithms (multiple linear regression, decision trees, random forest, support vector machines, k-nearest neighbors, artificial neural networks, and extreme gradient boosting) to formulate predictive models for concrete carbonation and thermal impact. Two distinct datasets, derived from reported experimental studies, were utilized for training these predictive models. Performance evaluation relied on metrics such as root mean square error, mean square error, mean absolute error, and the coefficient of determination. Hyperparameters were optimized through k-fold cross-validation and grid search techniques. The analytical outcomes demonstrate that neural networks and extreme gradient boosting algorithms outperform the remaining five machine learning approaches, showing outstanding predictive performance for concrete carbonation and thermal effect modeling.
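
A minimal sketch of the model-comparison protocol described above, assuming scikit-learn: several regressors are scored with 5-fold cross-validation on RMSE, MAE, and the coefficient of determination. The synthetic data merely stands in for the reported carbonation and thermal-effect datasets, and only four of the seven algorithms are shown.

```python
import numpy as np
from sklearn.model_selection import cross_validate, KFold
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))   # placeholder inputs (e.g., temperature, exposure time, mix ratios)
y = 30 - 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=300)

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "neural_net": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1),
}
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scoring = {"rmse": "neg_root_mean_squared_error",
           "mae": "neg_mean_absolute_error",
           "r2": "r2"}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
    print(f"{name:14s} RMSE={-scores['test_rmse'].mean():.3f} "
          f"MAE={-scores['test_mae'].mean():.3f} R2={scores['test_r2'].mean():.3f}")
```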

A Dynamic Locality Sensitive Hashing Algorithm for Efficient Security Applications

  • Mohammad Y. Khanafseh;Ola M. Surakhi
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.79-88 / 2024
  • The information retrieval domain deals with the retrieval of unstructured data such as text documents. Searching documents is a main component of the modern information retrieval system. Locality Sensitive Hashing (LSH) is one of the most popular methods used for searching documents in a high-dimensional space. The main benefit of LSH is its theoretical guarantee of query accuracy in a multi-dimensional space. LSH can be further enhanced by refining its steps. In this paper, a new Dynamic Locality Sensitive Hashing (DLSH) algorithm is proposed as an improved version of the LSH algorithm. It relies on the hierarchical selection of LSH parameters (number of bands, number of shingles, and number of permutation lists) based on the similarity achieved by the algorithm, in order to optimize search accuracy and increase its score. The technique was applied to several tampered file structures, and its performance was evaluated. In some circumstances, the matching accuracy of DLSH exceeds 95% when the optimal values are selected for the number of bands, the number of shingles, and the number of permutation lists of the DLSH algorithm. These results make the DLSH algorithm suitable for many critical applications that depend on accurate searching, such as forensics technology.
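
A minimal MinHash-LSH sketch in Python showing the three parameters the DLSH algorithm tunes: shingle size, the number of permutation (hash) lists, and the number of bands. It is a plain LSH baseline for document similarity, not the adaptive, similarity-driven parameter selection proposed in the paper; the documents and parameter values are toy placeholders.

```python
import random
from collections import defaultdict

def shingles(text, k=4):
    """Character k-shingles of a document."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, hash_seeds, prime=2_147_483_647):
    """One minimum hash value per (a, b) seed pair, emulating a permutation list."""
    return [min((a * hash(s) + b) % prime for s in shingle_set) for a, b in hash_seeds]

def lsh_buckets(signatures, n_bands):
    """Split each signature into bands; documents sharing a band hash become candidates."""
    rows = len(next(iter(signatures.values()))) // n_bands
    buckets = defaultdict(set)
    for doc_id, sig in signatures.items():
        for b in range(n_bands):
            buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].add(doc_id)
    return buckets

random.seed(0)
n_perm = 64                                   # number of permutation lists
hash_seeds = [(random.randrange(1, 1 << 31), random.randrange(0, 1 << 31))
              for _ in range(n_perm)]

docs = {"d1": "the quick brown fox jumps over the lazy dog",
        "d2": "the quick brown fox jumped over a lazy dog",
        "d3": "completely unrelated text about database systems"}
sigs = {d: minhash_signature(shingles(t), hash_seeds) for d, t in docs.items()}

# Candidate pairs are documents that collide in at least one band bucket.
candidates = {frozenset(ids) for ids in lsh_buckets(sigs, n_bands=16).values() if len(ids) > 1}
print(candidates)
```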

An Efficient Adaptive Bitmap-based Selective Tuning Scheme for Spatial Queries in Broadcast Environments

  • Song, Doo-Hee;Park, Kwang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.10 / pp.1862-1878 / 2011
  • With the advances in wireless communication technology and the advent of smartphones, research on location-based services (LBSs) is being actively carried out. In particular, several spatial index methods have been proposed to provide efficient LBSs. However, finding an optimal indexing method that balances query performance and index size remains a challenge in wireless environments that have limited channel bandwidths and device resources (computational power, memory, and battery power). Thus, mechanisms that make existing spatial indexing techniques more efficient and highly applicable in resource-limited environments should be studied. Bitmap-based Spatial Indexing (BSI) has been designed to support LBSs, especially in wireless broadcast environments. However, the access latency in BSI is extremely large because of the large size of the bitmap, which may increase the search time. In this paper, we introduce a Selective Bitmap-based Spatial Indexing (SBSI) technique. We then propose an Adaptive Bitmap-based Spatial Indexing (ABSI) to improve the tuning time of the proposed SBSI scheme. The ABSI is applied to the distribution of geographical objects in a grid by using the Hilbert curve (HC). With the information in the ABSI, grid cells that contain no objects (i.e., 0-bit entries in the spatial bitmap index) are not tuned during a search, which improves the tuning time on the client side. We have carried out a performance evaluation and demonstrated that our SBSI and ABSI techniques outperform the existing bitmap-based DSI (BDSI) technique.
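
A minimal sketch of the bitmap-pruning idea behind SBSI/ABSI: a grid bitmap marks which cells contain objects, and a range query only inspects ("tunes") cells whose bit is 1. The actual schemes order cells along a Hilbert curve and target tuning time over a broadcast channel; this sketch uses plain row-major ordering and in-memory data, so it only illustrates why empty cells can be skipped.

```python
from collections import defaultdict

GRID = 8                                      # 8 x 8 grid over the unit square

def cell_of(x, y):
    return min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1)

objects = [(0.12, 0.40), (0.15, 0.42), (0.80, 0.81), (0.83, 0.79), (0.50, 0.05)]

cells = defaultdict(list)
for obj in objects:
    cells[cell_of(*obj)].append(obj)

# Bitmap index: 1 if the cell holds at least one object, 0 otherwise.
bitmap = [[1 if (cx, cy) in cells else 0 for cy in range(GRID)] for cx in range(GRID)]

def range_query(x_lo, x_hi, y_lo, y_hi):
    """Scan only the non-empty cells overlapping the query window."""
    (cx_lo, cy_lo), (cx_hi, cy_hi) = cell_of(x_lo, y_lo), cell_of(x_hi, y_hi)
    tuned, results = 0, []
    for cx in range(cx_lo, cx_hi + 1):
        for cy in range(cy_lo, cy_hi + 1):
            if bitmap[cx][cy]:                # 0-bit cells are skipped entirely
                tuned += 1
                results += [o for o in cells[(cx, cy)]
                            if x_lo <= o[0] <= x_hi and y_lo <= o[1] <= y_hi]
    return results, tuned

print(range_query(0.0, 0.3, 0.3, 0.6))        # finds two objects while tuning only non-empty cells
```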

Intelligent System for the Prediction of Heart Diseases Using Machine Learning Algorithms with a New Mixed Feature Creation (MFC) Technique

  • Rawia Elarabi;Abdelrahman Elsharif Karrar;Murtada El-mukashfi El-taher
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.148-162 / 2023
  • Classification systems can significantly assist the medical sector by allowing for the precise and quick diagnosis of diseases. As a result, both doctors and patients save time. A possible way of identifying risk variables is to use machine learning algorithms. Non-surgical technologies, such as machine learning, are trustworthy and effective in distinguishing healthy patients from heart-disease patients, and they save time and effort. The goal of this study is to create a medical intelligent decision support system based on machine learning for the diagnosis of heart disease. We have used a mixed feature creation (MFC) technique to generate new features from the UCI Cleveland Cardiology dataset. We select the most suitable features using the Least Absolute Shrinkage and Selection Operator (LASSO), Recursive Feature Elimination with Random Forest feature selection (RFE-RF), and the best features of both LASSO and RFE-RF (BLR) techniques. Cross-validation and grid-search methods are used to optimize the parameters of the estimators used by these algorithms. Classifier performance metrics for each classification model, including classification accuracy, specificity, sensitivity, precision, and F1-score, together with execution time and RMSE, are presented independently for comparison. Our proposed work finds the best potential outcome across all available prediction models and improves the system's performance, allowing physicians to diagnose heart patients more accurately.
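
A minimal sketch of the feature-selection pipeline described above, assuming scikit-learn: LASSO and RFE with a random forest each pick a feature subset, their union stands in for the combined BLR subset (the exact combination rule used in the paper is an assumption here), and a grid-searched classifier is evaluated with cross-validation. Synthetic data replaces the UCI Cleveland dataset, and no MFC features are generated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel, RFE
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

# LASSO-based selection: keep features with non-zero coefficients.
lasso = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X, y)
lasso_idx = set(np.where(lasso.get_support())[0])

# RFE-RF: recursively eliminate features ranked by a random forest.
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
          n_features_to_select=8).fit(X, y)
rfe_idx = set(np.where(rfe.get_support())[0])

selected = sorted(lasso_idx | rfe_idx)        # simple union as the combined (BLR-like) subset
print("selected features:", selected)

# Grid-searched classifier evaluated on the selected features only.
search = GridSearchCV(LogisticRegression(max_iter=2000),
                      {"C": [0.1, 1.0, 10.0]}, cv=5, scoring="accuracy")
scores = cross_val_score(search, X[:, selected], y, cv=5, scoring="accuracy")
print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```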

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important sources for target marketing or personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics using online or offline surveys, these approaches are expensive, take a long time, and are likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allows us to see which pages users visited, how long they stayed there, how often they visited, when they usually visited, which sites they prefer, what keywords they used to find the site, whether they made any purchases, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with the demographics, including search keywords; visit frequency and intensity by time, day, and month; the variety of websites visited; text information from the web pages visited; and so on. The demographic attributes to be predicted also vary across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for building prediction models. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job. Based on the results of previous research, 64 clickstream attributes are used to predict these demographic attributes. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step reduces the dimensionality of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression for modeling. The last step evaluates the alternative models in terms of model accuracy and selects the best model. For the experiments, we used clickstream data representing the 5 demographics and 16,962,705 online activities of 5,000 Internet users. IBM SPSS Modeler 17.0 was used for our prediction process, and 5-fold cross-validation was conducted to enhance the reliability of our experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction is best performed using decision-tree-based dimension reduction with a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used effectively to predict their demographics and thereby support digital marketing.
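
A minimal sketch of one branch of the four-step process described above, assuming scikit-learn: PCA-based dimension reduction of clickstream features followed by the three alternative classifiers (SVM, neural network, logistic regression) compared with 5-fold cross-validation. The data and component count are placeholders, not the 64-attribute clickstream profiles or the IBM SPSS Modeler workflow used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 64 clickstream attributes and one binary demographic label (e.g., gender).
X, y = make_classification(n_samples=2000, n_features=64, n_informative=12, random_state=7)

models = {
    "svm": SVC(kernel="rbf"),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=7),
    "logistic": LogisticRegression(max_iter=2000),
}
for name, clf in models.items():
    # Scale, reduce the dimensionality with PCA, then classify.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=16), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name:11s} accuracy = {acc.mean():.3f}")
```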