• Title/Summary/Keyword: neighborhood metrics


Contrast Enhancement using Histogram Equalization with a New Neighborhood Metrics

  • Sengee, Nyamlkhagva;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.737-745 / 2008
  • In this paper, a novel neighborhood metric for the histogram equalization (HE) algorithm for contrast enhancement is presented. We present a refinement of HE using neighborhood metrics within a general framework that orders pixels based on a sequence of sorting functions, using both global and local information to remap the image greylevels. We tested a novel sorting key, using the original image greylevel as the primary key and a novel neighborhood distinction metric as the secondary key, and compared HE using the proposed distinction metric with other HE methods, such as global histogram equalization (GHE), HE using a voting metric, and HE using a contrast difference metric. We found that our method preserves the advantages of the other metrics while reducing their drawbacks and avoiding the undesirable over-enhancement that can occur with local histogram equalization (LHE) and other methods.
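
A minimal sketch of the ordering idea in this abstract, assuming a simple distinction metric that counts how many of a pixel's 8 neighbors differ from it (the paper's exact metric and tie handling may differ): pixels are sorted with the original greylevel as primary key and the metric as secondary key, and their ranks are spread uniformly over the output levels.

```python
import numpy as np

def distinction_metric(img):
    """For each pixel, count how many of its 8 neighbors differ from it
    (borders handled by edge replication). An illustrative metric only."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    count = np.zeros((h, w), dtype=np.int32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
            count += (shifted != img)
    return count

def he_with_neighborhood_metric(img, levels=256):
    """Order pixels by (greylevel, metric), then remap ranks uniformly."""
    flat = img.ravel()
    metric = distinction_metric(img).ravel()
    # np.lexsort sorts by the LAST key first: greylevel primary, metric secondary.
    order = np.lexsort((metric, flat))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)
    out = (ranks * levels // flat.size).astype(img.dtype)
    return out.reshape(img.shape)
```

For a 256-level image this behaves like exact histogram specification to a uniform histogram, with the distinction metric breaking ties among pixels that share a greylevel.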


A Novel Filtered Bi-Histogram Equalization Method

  • Sengee, Nyamlkhagva;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.18 no.6 / pp.691-700 / 2015
  • Here, we present a new framework for histogram equalization in which both local and global contrast are enhanced using neighborhood metrics. While neighborhood information is being checked, filters can simultaneously improve image quality; the filter is chosen according to the image's needs, for example noise removal or smoothing. Our experimental results confirmed that this does not increase the computational cost, because the filtering is performed within our proposed arrangement for building the histogram while the neighborhood metrics are checked. If the two methods, histogram equalization and filtering, are performed sequentially, the first method uses the original image data and the second uses data already altered by the first. With combined histogram equalization and filtering, the original data can be used for both methods. The proposed method is fully automated, and any spatial neighborhood filter type and size can be used. Our experiments confirmed that the proposed method is more effective than other similar techniques reported previously.
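
A hedged sketch of the single-pass arrangement described above: both the neighborhood metric for the equalization ordering and a 3x3 mean filter are accumulated from the original image in one scan, so neither operation sees data altered by the other. How the two outputs are finally combined follows the paper's pipeline, which this sketch does not reproduce.

```python
import numpy as np

def combined_he_and_filter(img, levels=256):
    """One pass over the ORIGINAL image accumulates both a 3x3 mean filter
    and a neighbor-difference metric (illustrative reading of the combined
    arrangement; the paper's exact scheme may differ)."""
    h, w = img.shape
    padded = np.pad(img.astype(np.int64), 1, mode="edge")
    acc = np.zeros((h, w), dtype=np.int64)      # running sum for the mean filter
    metric = np.zeros((h, w), dtype=np.int32)   # neighbors differing from center
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
            acc += shifted
            if dy or dx:
                metric += (shifted != img)
    filtered = (acc // 9).astype(np.uint8)
    # Equalize by ranking (greylevel, metric) of the original data.
    order = np.lexsort((metric.ravel(), img.ravel()))
    ranks = np.empty(img.size, dtype=np.int64)
    ranks[order] = np.arange(img.size)
    equalized = (ranks * levels // img.size).astype(np.uint8).reshape(h, w)
    return equalized, filtered
```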

Contrast Enhancement for Segmentation of Hippocampus on Brain MR Images

  • Sengee, Nyamlkhagva;Sengee, Altansukh;Adiya, Enkhbolor;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.15 no.12 / pp.1409-1416 / 2012
  • Image segmentation results depend on pre-processing steps such as contrast enhancement, edge detection, and smoothing filters. Medical images in particular have low contrast and contain noise, so contrast enhancement and noise removal techniques are required in the pre-processing. In this study, we present an extension based on a novel histogram equalization in which both local and global contrast are enhanced using neighborhood metrics. While neighborhood information is being checked, filters can simultaneously improve image quality. Most importantly, the original image information can be used for global brightness preservation, local contrast enhancement, and image-quality-improving filtering alike. Our experiments confirmed that the proposed method is more effective than other similar techniques reported previously.
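
The abstract stresses global brightness preservation; one standard way to quantify that property (a common measure in the contrast-enhancement literature, not taken from this paper) is the absolute mean brightness error (AMBE):

```python
import numpy as np

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: |mean(input) - mean(output)|.
    Values near 0 indicate the enhancement preserved global brightness."""
    return abs(float(np.mean(original)) - float(np.mean(enhanced)))

# Example: compare a brightness-preserving method against plain GHE.
# img, out_proposed, out_ghe = ...  (uint8 greyscale arrays)
# print(ambe(img, out_proposed), ambe(img, out_ghe))
```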

An Experimental Study of Image Thresholding Based on Refined Histogram using Distinction Neighborhood Metrics

  • Sengee, Nyamlkhagva;Purevsuren, Dalaijargal;Tumurbaatar, Tserennadmid
    • Journal of Multimedia Information System / v.9 no.2 / pp.87-92 / 2022
  • In this study, we aimed to illustrate that a thresholding method gives different results when applied to the original histogram and to the refined histogram. We use the global thresholding method, the well-known image segmentation approach for separating objects and background in an image, and the refined histogram is created with the neighborhood distinction metric. If the original histogram of an image has some large bins that hold most of the density of the whole intensity distribution, this is a problem for global methods such as segmentation and contrast enhancement. We refined the histogram to overcome this big-bin problem: sub-bins are created from the large bins based on the distinction metric. We suggest the refined histogram as a preprocessing step for thresholding in order to reduce the big-bin problem. In our tests, we used Otsu and median-based thresholding techniques, and the experimental results show that their results on the refined histograms are more effective compared with those on the original ones.
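
A minimal sketch of the refined-histogram idea, assuming the same illustrative distinction metric as above (count of differing 8-neighbors, giving up to 9 sub-bins per greylevel); Otsu's search then runs over the refined bins, and the chosen bin maps back to a greylevel.

```python
import numpy as np

def refined_histogram(img):
    """Split each greylevel bin into sub-bins indexed by a neighborhood
    distinction metric, so one large bin becomes up to 9 sub-bins
    (a sketch; the paper's metric and sub-bin layout may differ)."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    metric = np.zeros((h, w), dtype=np.int32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                shifted = padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
                metric += (shifted != img)
    keys = img.astype(np.int64) * 9 + metric   # 256 * 9 refined bins
    return np.bincount(keys.ravel(), minlength=256 * 9)

def otsu_on_histogram(hist):
    """Standard Otsu search over an arbitrary 1-D histogram: return the
    bin index that maximizes the between-class variance."""
    p = hist.astype(np.float64) / hist.sum()
    bins = np.arange(hist.size)
    omega = np.cumsum(p)            # class-0 probability
    mu = np.cumsum(p * bins)        # class-0 cumulative mean
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# A threshold found on the refined histogram maps back to a greylevel:
# t_refined = otsu_on_histogram(refined_histogram(img)) // 9
```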

Determining Absolute Interpolation Weights for Neighborhood-Based Collaborative Filtering

  • Kim, Hyoung-Do
    • Management Science and Financial Engineering / v.16 no.2 / pp.53-65 / 2010
  • Despite the overall success of neighborhood-based CF methods, there are some fundamental questions about neighbor selection and the prediction mechanism, including arbitrary similarity measures, over-fitted interpolation weights, and the lack of trust consideration between neighbors. This paper proposes a simple method for computing absolute interpolation weights based on similarity values. To supplement the method, two schemes are additionally devised for high-quality neighbor selection and for trust metrics based on co-ratings. The former requires that a neighbor's similarity be better than a pre-specified level that is higher than the minimum level; the latter gives higher trust to neighbors that have more co-ratings. Experimental results show that the proposed method outperforms the pure IBCF by about 8%. Furthermore, it can easily be combined with other predictors to achieve better prediction quality.
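
A hypothetical sketch of the prediction step, with similarity-derived weights damped by a co-rating trust factor and a similarity floor for neighbor selection; the names, the trust cap, and the exact weighting here are illustrative assumptions, not the paper's formulas.

```python
def predict_rating(ratings, sims, co_counts, user, item,
                   min_sim=0.2, k=20, trust_cap=50):
    """Neighborhood-based prediction sketch.

    ratings:   dict (user, item) -> rating
    sims:      dict (user, user) -> similarity in [-1, 1]
    co_counts: dict (user, user) -> number of co-rated items
    """
    candidates = []
    for (v, i), r in ratings.items():
        if i != item or v == user:
            continue
        s = sims.get((user, v), 0.0)
        if s <= min_sim:          # high-quality neighbor selection
            continue
        # More co-ratings -> higher trust, saturating at trust_cap.
        trust = min(co_counts.get((user, v), 0), trust_cap) / trust_cap
        candidates.append((s * trust, r))
    candidates.sort(reverse=True)
    top = candidates[:k]
    if not top:
        return None
    wsum = sum(w for w, _ in top)
    return sum(w * r for w, r in top) / wsum
```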

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, because the traditional technique calculates similarities from direct connections and common features among customers, it has difficulty computing similarity for new customers or products. For this reason, hybrid techniques have been designed that also use content-based filtering. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks: similarities are calculated indirectly through the similar customers placed between two customers. This means creating a customer network based on purchase data and computing the similarity between two customers from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of networks can be utilized for this calculation. Different centrality metrics matter because they may affect recommendation performance differently; furthermore, the effect of a given centrality metric may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also across all customers and products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Since classification models fit this binary problem of whether a link forms or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The evaluation data consisted of order records collected from an online shopping mall over four years and two months; the first three years and eight months of data were organized into the social network, and the records of the remaining four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ across algorithms at a meaningful level. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance according to the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and KNN models. As the experimental results reveal, centrality metrics computed over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model. This implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
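
A small sketch of the experimental setup described above, using networkx for the four centrality metrics and a logistic-regression link predictor from scikit-learn; the per-pair feature design (summing endpoint centralities) is an illustrative assumption, not the paper's exact construction.

```python
import networkx as nx
from sklearn.linear_model import LogisticRegression

def centrality_features(G, pairs):
    """Build per-pair features from four node-centrality metrics,
    summed over the two endpoints of each candidate link."""
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G)
    clo = nx.closeness_centrality(G)
    eig = nx.eigenvector_centrality(G, max_iter=1000)
    feats = []
    for u, v in pairs:
        feats.append([deg[u] + deg[v], btw[u] + btw[v],
                      clo[u] + clo[v], eig[u] + eig[v]])
    return feats

# Link prediction as binary classification: label 1 if the customer-item
# edge appears in the evaluation window, 0 otherwise.
# G = nx.Graph(purchase_edges_from_training_window)   # hypothetical input
# X_train = centrality_features(G, train_pairs)
# clf = LogisticRegression().fit(X_train, train_labels)
```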

An Energy Efficient Interference-aware Routing Protocol for Underwater WSNs

  • Khan, Anwar;Javaid, Nadeem;Ali, Ihsan;Anisi, Mohammad Hossein;Rahman, Atiq Ur;Bhatti, Naeem;Zia, Muhammad;Mahmood, Hasan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4844-4864 / 2017
  • Interference-aware routing protocol design for underwater wireless sensor networks (UWSNs) is one of the key strategies for reducing packet loss in the highly hostile underwater environment. The reduced interference leads to efficient utilization of the sensor nodes' limited battery power, which in consequence prolongs the lifetime of the entire network. In this paper, we propose an energy-efficient interference-aware routing (EEIAR) protocol for UWSNs. A sender node selects the best relay node in its neighborhood as the one with the lowest depth and the least number of neighbors. The combination of the two routing metrics ensures that data packets are forwarded along the least-interference paths to reach the final destination. The proposed work is unique in that it does not require full-dimensional localization information for the sensor nodes; instead, the total network depth is segmented to identify source, relay, and neighbor nodes. Simulation results reveal better performance of the scheme than the counterpart DBR and EEDBR techniques in terms of energy efficiency, packet delivery ratio, and end-to-end delay.
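
A minimal sketch of the relay-selection rule as stated in the abstract: among shallower neighbors, pick the one with the lowest depth, breaking ties by the fewest neighbors. EEIAR's full protocol (depth segmentation, holding times, etc.) is not reproduced here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    node_id: int
    depth: float           # meters below the surface
    neighbor_count: int    # nodes within acoustic range

def select_relay(sender: Node, neighbors: List[Node]) -> Optional[Node]:
    """Choose the relay with the lowest depth and, on ties, the least
    number of neighbors; only nodes shallower than the sender qualify."""
    candidates = [n for n in neighbors if n.depth < sender.depth]
    if not candidates:
        return None
    return min(candidates, key=lambda n: (n.depth, n.neighbor_count))
```

Preferring sparse neighborhoods is what makes the choice interference-aware: a relay with fewer neighbors contends with less traffic, so forwarding through it loses fewer packets.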

Parameter search methodology of support vector machines for improving performance (속도 향상을 위한 서포트 벡터 머신의 파라미터 탐색 방법론)

  • Lee, Sung-Bo;Kim, Jae-young;Kim, Cheol-Hong;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.3 / pp.329-337 / 2017
  • This paper proposes a search method that explores the parameters C and σ of support vector machines (SVM) to improve search speed while maintaining accuracy. A traditional grid search requires tremendous computation time because it evaluates all available combinations of C and σ to find the optimal combination that gives the best SVM performance. To address this issue, this paper proposes a deep search method that reduces computation time. In the first stage, it divides the C-σ accuracy surface into four regions, evaluates the median point of each region, and selects the point with the highest accuracy as the start point. In the second stage, the region around the selected start point is re-divided into four regions, and the most accurate point is again assigned as the new search point. In the third stage, the eight points around the search point are explored, the most accurate one becomes the new search point, and the corresponding region is divided into four parts for which accuracy values are calculated. The process continues until the accuracy at the search point is the highest compared with its neighboring points; otherwise, it is repeated from the second stage. Experimental results using normal and defective bearing data show that the proposed deep search algorithm outperforms conventional algorithms in terms of performance and search time.
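
A hedged sketch of the coarse-to-fine idea, searching log2(C) x log2(gamma) space with scikit-learn's SVC and cross-validation (gamma standing in for σ via gamma = 1 / (2σ²)); the quadrant recursion below approximates the staged procedure, while the paper's eight-point neighborhood step and exact stopping rule are simplified away.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cv_accuracy(X, y, C, gamma):
    """Cross-validated accuracy for one (C, gamma) pair."""
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def deep_search(X, y, c_range=(-5, 15), g_range=(-15, 3), depth=6):
    """Coarse-to-fine search: evaluate the median point of each quadrant
    of the current log2-space region, then recurse into the best quadrant.
    Each level costs 4 evaluations instead of a full grid's hundreds."""
    best = (None, -1.0)
    for _ in range(depth):
        c_lo, c_hi = c_range
        g_lo, g_hi = g_range
        c_mid, g_mid = (c_lo + c_hi) / 2, (g_lo + g_hi) / 2
        quads = [((c_lo, c_mid), (g_lo, g_mid)), ((c_mid, c_hi), (g_lo, g_mid)),
                 ((c_lo, c_mid), (g_mid, g_hi)), ((c_mid, c_hi), (g_mid, g_hi))]
        scored = []
        for cr, gr in quads:
            c = 2 ** ((cr[0] + cr[1]) / 2)   # quadrant median in log space
            g = 2 ** ((gr[0] + gr[1]) / 2)
            scored.append((cv_accuracy(X, y, c, g), cr, gr, c, g))
        acc, c_range, g_range, c, g = max(scored)
        if acc > best[1]:
            best = ((c, g), acc)
    return best   # ((C, gamma), accuracy)
```

With depth=6 this costs 24 cross-validated fits, versus the hundreds a comparable exhaustive grid would need, which is the trade-off the abstract's timing results describe.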