• Title/Summary/Keyword: Algorithm partition


Graph-based High-level Motion Segmentation using Normalized Cuts (Normalized Cuts을 이용한 그래프 기반의 하이레벨 모션 분할)

  • Yun, Sung-Ju;Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.35 no.11 / pp.671-680 / 2008
  • Motion capture devices have been used to produce content such as movies and video games. However, because these devices are expensive and inconvenient to use, motions segmented from previously captured data are often recycled and synthesized for new content, and such segmentation has generally been done manually by content producers. Automatic motion segmentation has therefore recently attracted considerable attention. Previous approaches are divided into on-line and off-line methods: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing global characteristics in a feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of frames that repeat within a temporal distance, we consider not only similarities between neighboring frames but also similarities among all frames within that temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and each edge is weighted by the similarity between the frames it connects. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In experiments, the proposed method outperformed a PCA-based on-line method and a GMM-based off-line method, because it segments motions globally from a graph built on both neighboring-frame similarities and similarities among all frames within the temporal distance.
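
A minimal sketch of the approach this abstract describes, not code from the paper: per-frame features (hypothetical pose vectors) are turned into a similarity graph restricted to a temporal window, and a spectral clustering routine, which optimizes a relaxation of the normalized-cuts objective, partitions the frames into segments. The feature choice, `sigma`, and window size are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_motion(frames, n_segments=4, temporal_window=30, sigma=1.0):
    """frames: (n_frames, n_features) array of per-frame pose features (hypothetical)."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    # Pairwise similarity: Gaussian kernel on squared feature distance.
    d2 = np.sum((frames[:, None, :] - frames[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Keep only edges within the temporal distance the abstract refers to.
    idx = np.arange(n)
    w[np.abs(idx[:, None] - idx[None, :]) > temporal_window] = 0.0
    # Spectral clustering solves a relaxation of the normalized-cuts objective.
    labels = SpectralClustering(n_clusters=n_segments,
                                affinity='precomputed').fit_predict(w)
    return labels  # one segment label per frame
```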

On Generating Backbone Based on Energy and Connectivity for WSNs (무선 센서네트워크에서 노드의 에너지와 연결성을 고려한 클러스터 기반의 백본 생성 알고리즘)

  • Shin, In-Young;Kim, Moon-Seong;Choo, Hyun-Seung
    • Journal of Internet Computing and Services / v.10 no.5 / pp.41-47 / 2009
  • Routing through a backbone, which is responsible for performing and managing multipoint communication, reduces the communication overhead and overall energy consumption in wireless sensor networks. However, backbone nodes need extra functionality and therefore consume more energy than the other nodes. This power-consumption imbalance among sensor nodes may cause a network partition and failures in which transmissions from some sensors to the sink node are blocked. Hence, optimal construction of the backbone is one of the pivotal problems in sensor network applications and can drastically affect the network's communication energy dissipation. In this paper, a distributed algorithm is proposed to generate backbone trees through robust multi-hop clusters in wireless sensor networks. The main objective is to form a properly designed backbone through multi-hop clusters by considering the energy level and degree of each node. The improved cluster head selection method ensures that energy is consumed evenly among the nodes in the network, thereby increasing the network lifetime. Comprehensive computer simulations indicate that the proposed scheme gives approximately 10.36% and 24.05% improvements in the residual energy level and the degree of the cluster heads, respectively, and also prolongs the network lifetime.
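
A minimal sketch of head election driven by residual energy and node degree, the selection criterion named in the abstract; the scoring weights, normalization constants, and neighborhood rule are assumptions, not the authors' distributed protocol.

```python
# Each node computes a score from its residual energy and degree; the
# highest-scoring node in a neighborhood declares itself cluster head.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float   # joules remaining
    neighbors: list          # ids of nodes within radio range

def head_score(node, w_energy=0.7, w_degree=0.3, max_energy=1.0, max_degree=20):
    """Combine normalized residual energy and degree (weights are assumptions)."""
    return (w_energy * node.residual_energy / max_energy
            + w_degree * len(node.neighbors) / max_degree)

def elect_heads(nodes):
    """A node becomes head if its score beats every neighbor's score."""
    by_id = {n.node_id: n for n in nodes}
    heads = []
    for n in nodes:
        if all(head_score(n) >= head_score(by_id[m]) for m in n.neighbors):
            heads.append(n.node_id)
    return heads
```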


An Energy-Efficient Clustering Using Division of Cluster in Wireless Sensor Network (무선 센서 네트워크에서 클러스터의 분할을 이용한 에너지 효율적 클러스터링)

  • Kim, Jong-Ki;Kim, Yoeng-Won
    • Journal of Internet Computing and Services / v.9 no.4 / pp.43-50 / 2008
  • Various studies are being conducted to achieve efficient routing and reduce energy consumption in wireless sensor networks, where energy replacement is difficult. Among routing mechanisms, the clustering technique is known to be the most efficient. The clustering technique consists of cluster construction and data transmission. The cluster-construction steps are repeated at regular intervals to equalize energy consumption among the sensor nodes in a cluster. The algorithms for selecting a cluster head node and arranging the cluster member nodes around the chosen head are complex and consume considerable energy. Furthermore, the energy consumed for data transmission is proportional to $d^2$ below the crossover distance and to $d^4$ beyond it. This paper proposes a means of reducing energy consumption by increasing the efficiency of the cluster-construction steps that are regularly repeated in the clustering technique. The proposed approach keeps the number of sensor nodes per cluster roughly constant by partitioning the deployment region into equal parts that account for node density during cluster construction, and reduces energy consumption by selecting head nodes near the center of each cluster. Simulation experiments confirmed that the proposed approach consumes less energy than the LEACH algorithm.
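
A minimal sketch of the partition idea, under assumptions (a uniform grid over the field and Euclidean distances), not the paper's algorithm: the region is divided into equal parts so clusters hold similar numbers of nodes, and the node nearest each part's centroid is chosen as head.

```python
import numpy as np

def cluster_by_partition(positions, grid=(4, 4), field=(100.0, 100.0)):
    """positions: (n, 2) node coordinates in a field of `field` meters (assumed sizes)."""
    gx, gy = grid
    cell_x = np.minimum((positions[:, 0] / field[0] * gx).astype(int), gx - 1)
    cell_y = np.minimum((positions[:, 1] / field[1] * gy).astype(int), gy - 1)
    cluster_id = cell_x * gy + cell_y            # equal-area partition of the field
    heads = {}
    for c in np.unique(cluster_id):
        members = np.where(cluster_id == c)[0]
        centroid = positions[members].mean(axis=0)
        dists = np.linalg.norm(positions[members] - centroid, axis=1)
        heads[int(c)] = int(members[np.argmin(dists)])   # node nearest the center
    return cluster_id, heads
```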


A Study on Improved Image Matching Method using the CUDA Computing (CUDA 연산을 이용한 개선된 영상 매칭 방법에 관한 연구)

  • Cho, Kyeongrae;Park, Byungjoon;Yoon, Taebok
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.4 / pp.2749-2756 / 2015
  • As image data quality increases, image processing becomes more time-consuming, so acceleration of image processing algorithms is required. This study compares the computing speed and performance of a conventional CPU-based (OpenMP) recognition system and a CUDA (Compute Unified Device Architecture)-based one. A character recognition system was implemented that learns standardized images of the English alphabet, each with a fixed size and font, and matches input character data against the learned images within the recognized region; an image matching method for computing the matching score was also implemented. Using OpenMP on the four cores of an Intel i5 2500, the algorithm does not reach the ideal four-fold speedup over the existing CPU implementation because of the delay caused by the data partition and merge operations; the proposed method improves the speedup to about 3.2 times. With GPGPU (General-Purpose GPU) parallel processing on the video card using the CUDA computing technique, a performance gain of about 21 times over the sequential CPU-based processing was confirmed.
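
A minimal sketch of the partition-and-merge pattern whose overhead the abstract cites as the reason the multi-core speedup stays below the core count; the matching metric, chunking, and process pool are assumptions, not the paper's implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def match_chunk(args):
    query, templates = args
    # Sum of squared differences against each template in this chunk.
    return [float(np.sum((query - t) ** 2)) for t in templates]

def best_match(query, templates, workers=4):
    chunks = np.array_split(templates, workers)            # partition the work
    with ProcessPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(match_chunk, [(query, c) for c in chunks])
    scores = [s for part in parts for s in part]            # merge partial results
    return int(np.argmin(scores))                           # index of the best template
```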

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.3 / pp.639-647 / 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm uses K-means clustering for efficient processing of the data, and the optimization of the model is carried out using PSO. As the connection weights of the RBF neural network, four types of polynomials can be used: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network results in a structurally optimized network and offers a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node of the RBF neural network, leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the RBF neural network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is experimented with nonlinear process data (2-dimensional synthetic data and Mackey-Glass time series data) and machine learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given dataset with nonlinearity, as well as the efficient construction and evaluation of the dynamic network model, the partition of the entire dataset distinguishes between two cases: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and better predictive capability than other intelligent models presented previously.
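
A minimal sketch of the two design mechanisms named above, K-means center selection for the Gaussian units plus a polynomial-type readout fit by least squares; PSO is omitted, the readout is simplified to a single global linear term, and `sigma` and `n_centers` are assumptions, so this is not the authors' optimized design.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=8, sigma=1.0):
    # K-means picks the centers of the Gaussian activation functions.
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    # Simplified "linear polynomial" readout: Gaussian activations plus a
    # global linear term and bias, fit by least squares.
    design = np.hstack([phi, X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(design, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, sigma=1.0):
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    design = np.hstack([phi, X, np.ones((len(X), 1))])
    return design @ w
```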

Path-based In-network Join Processing for Event Detection and Filtering in Sensor Networks (센서 네트워크에서 이벤트 검출 및 필터링을 위한 경로기반 네트워크-내 조인 프로세싱 방법)

  • Jeon, Ju-Hyuk;Yoo, Jae-Soo;Kim, Myoung-Ho
    • Journal of KIISE:Databases / v.33 no.6 / pp.620-630 / 2006
  • Event detection is an important application of sensor networks. Join operations can facilitate event detection with a condition table predefined by a user. When join operations are used for event detection, it is desirable, where possible, to perform in-network join processing to reduce communication costs. In this paper, we propose an energy-efficient in-network join algorithm, called PBA. In PBA, each partition of a condition table is stored along the path from each node to the base station, and in-network joins are then performed on the path. Since each node can identify the parts to store by its level, PBA reduces the cost of disseminating a condition table considerably. Moreover, while the existing method does not work well when the ratio of the size of the condition table to the density of the network is relatively large, the proposed method PBA has no such restriction and works efficiently in most cases. Experimental results show that PBA is generally efficient and, in particular, provides a significant cost reduction over the existing method when the condition table is relatively large compared with the density of the network, or when the routing tree of the network is deep.
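
A minimal sketch, under assumptions, of the core idea the abstract describes: a node's level on the path to the base station selects which partition of the condition table it stores and joins against as readings flow toward the sink. The partitioning rule and the join key used here are hypothetical, not the paper's PBA protocol.

```python
def partition_for_level(condition_table, level, path_length):
    """Rows assigned to the node at `level` on a path of `path_length` hops (assumed rule)."""
    return [row for i, row in enumerate(condition_table)
            if i % path_length == level]

def in_network_join(reading, condition_rows, key='region'):
    """Forward the reading only if it joins with some local condition row (hypothetical key)."""
    return [(reading, row) for row in condition_rows
            if reading.get(key) == row.get(key)]
```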

Spherical Pyramid-Technique : An Efficient Indexing Technique for Similarity Search in High-Dimensional Data (구형 피라미드 기법 : 고차원 데이터의 유사성 검색을 위한 효율적인 색인 기법)

  • Lee, Dong-Ho;Jeong, Jin-Wan;Kim, Hyeong-Ju
    • Journal of KIISE:Software and Applications / v.26 no.11 / pp.1270-1281 / 1999
  • The Pyramid-Technique [1] was proposed as a new indexing method for high-dimensional data spaces; it uses a special partitioning strategy that divides the d-dimensional space into 2d pyramids. It is efficient for hypercube range queries, but inefficient for the hypersphere range queries frequently used in similarity search. In this paper, we propose the Spherical Pyramid-Technique, an efficient indexing method for similarity search in high-dimensional space. The Spherical Pyramid-Technique is based on a special partitioning strategy that first divides the d-dimensional data space into 2d spherical pyramids and then cuts each spherical pyramid into several spherical slices. This partitioning provides a transformation of the d-dimensional space into a 1-dimensional space, as the Pyramid-Technique does, so a B+-tree can be used to manage the transformed 1-dimensional data. We also propose an algorithm for processing hypersphere range queries on the space partitioned by this strategy. Finally, various experiments using synthetic and real data show that the Spherical Pyramid-Technique clearly outperforms the Pyramid-Technique in processing hypersphere range queries.
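
For context, a minimal sketch of the original Pyramid-Technique's mapping from a d-dimensional point to a 1-dimensional B+-tree key; the spherical variant proposed in this paper modifies the partitioning, so this shows only the classic formulation, with points assumed normalized to the unit hypercube.

```python
import numpy as np

def pyramid_value(v):
    """Map a point v in [0, 1]^d to its 1-D pyramid value i + height."""
    v = np.asarray(v, dtype=float)
    d = v.size
    dist = np.abs(v - 0.5)           # distance from the center in each dimension
    j_max = int(np.argmax(dist))     # dimension that determines the pyramid
    i = j_max if v[j_max] < 0.5 else j_max + d   # pyramid number in [0, 2d)
    height = dist[j_max]             # height of the point within that pyramid
    return i + height                # key stored in the B+-tree

# Example: a 4-dimensional point.
print(pyramid_value([0.5, 0.6, 0.5, 0.9]))   # -> 7.4 (pyramid 7, height 0.4)
```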

Influence of Self-driving Data Set Partition on Detection Performance Using YOLOv4 Network (YOLOv4 네트워크를 이용한 자동운전 데이터 분할이 검출성능에 미치는 영향)

  • Wang, Xufei;Chen, Le;Li, Qiutan;Son, Jinku;Ding, Xilong;Song, Jeongyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.157-165 / 2020
  • With the development of neural networks and self-driving data sets, partitioning the data set is one way to improve a network model's performance in detecting moving objects. In the Darknet framework, the YOLOv4 (You Only Look Once v4) network model was used for training and testing on the Udacity data set. The Udacity data set was divided into training, validation, and test subsets according to 7 different proportions. The K-means++ algorithm was used to cluster the dimensions of the object boxes for each of the 7 groups. By adjusting the hyperparameters of the YOLOv4 network during training, optimal model parameters were obtained for each of the 7 groups, and these parameters were used to run and compare detection on the 7 corresponding test sets. The experimental results show that YOLOv4 can effectively detect the large, medium, and small moving objects represented by Truck, Car, and Pedestrian in the Udacity data set. When the ratio of the training, validation, and test sets is 7:1.5:1.5, the YOLOv4 model achieves its highest detection performance, with mAP50 of 80.89%, mAP75 of 47.08%, and a detection speed of 10.56 FPS.
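
A minimal sketch of a ratio-based split like the ones the paper evaluates, shown here for the 7:1.5:1.5 proportion; the shuffling and seed are assumptions, not the paper's tooling.

```python
import random

def split_dataset(samples, ratios=(0.7, 0.15, 0.15), seed=0):
    """Split labeled samples into train/validation/test sets at the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```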

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.27 no.1 / pp.104-123 / 2022
  • Style transfer based on neural networks reflects the high-level structural characteristics of images and thus provides very high-quality results, which has recently attracted great attention. This paper deals with the resolution limitation imposed by GPU memory when performing such neural style transfer. Because the receptive field has a fixed size, the gradient computation for style transfer on a partial image can be expected to produce the same result as the gradient computation on the entire image. Based on this idea, each component of the style-transfer loss function is analyzed to obtain the necessary conditions for partitioning and padding, and to identify which of the information required for the gradient calculation depends on the entire input. By structuring this information so that it can be supplied as an auxiliary constant input to partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method performs style transfer by partitioning the input image into pieces of a size the GPU can handle, it avoids the limit on input image resolution imposed by GPU memory. With such super high-resolution support, the proposed method can reproduce the fine-detail style characteristics that can only be appreciated in super high-resolution style transfer.
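
A minimal sketch of partition-based processing under a fixed receptive field, the idea the abstract builds on: each tile is padded by the receptive-field radius and only its interior is kept after processing. The paper's recursive algorithm additionally carries auxiliary constant inputs for the style-loss terms, which this sketch omits; `process`, the tile size, and the padding are assumptions.

```python
import numpy as np

def process_in_tiles(image, process, tile=512, pad=32):
    """image: (H, W, C) array; `process` is any per-tile operator (hypothetical)."""
    h, w, c = image.shape
    out = np.zeros_like(image)
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            th, tw = min(tile, h - y), min(tile, w - x)
            # Tile plus `pad` pixels of receptive-field context on every side.
            block = padded[y:y + th + 2 * pad, x:x + tw + 2 * pad]
            result = process(block)
            # Keep only the interior, which is unaffected by the tile boundary.
            out[y:y + th, x:x + tw] = result[pad:pad + th, pad:pad + tw]
    return out
```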

Development of Sample Survey Design for the Industrial Research and Development Statistics (표본조사에 의한 기업 연구개발활동 통계 작성방안)

  • Cho, Seong-Pyo;Park, Sun-Young;Han, Ki-In;Noh, Min-Sun
    • Journal of Technology Innovation / v.17 no.2 / pp.1-23 / 2009
  • The Survey on Industrial Research and Development (R&D) is the primary source of information on R&D performed by the Korean industrial sector. The results of the survey are used to assess trends in R&D expenditures. Government agencies, corporations, and research organizations use the data to investigate productivity determinants, formulate tax policy, and compare individual company performance with industry averages. The Korea Industrial Technology Association (KOITA) has so far collected the data by complete enumeration, but is now considering a sample survey because the number of industrial R&D institutions has increased dramatically. This study develops a sample survey design for the industrial R&D statistics. Companies are divided into 8 groups according to the amount of R&D expenditures and firm size or type. We collect samples from 24 or 8 sampling strata and compare the results with those of the complete enumeration survey. The estimates from 24 sampling strata are not significantly different from the results of the complete enumeration survey. We propose the following survey design: companies are divided into 11 groups, including companies whose R&D expenditures are unknown; all large companies are included in the survey, while medium and small companies are sampled at rates of 70% and 3%, respectively. Simple random sampling (SRS) is applied to the small-company partition, since their R&D expenditures are uniformly distributed. An independent probability proportionate to size (PPS) sampling procedure may be applied to companies identified as 'not R&D performers'. When respondents do not provide the requested information, estimates for the missing data are made using imputation algorithms. In future work, new key variables should be developed for the survey questionnaires.
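
A minimal sketch of the stratified design described above, with assumed stratum labels and only the large/medium/small sampling rates (100%, 70%, 3%); the study's 11 groups, the PPS step for non-performers, and the imputation of missing data are not modeled here.

```python
import random

RATES = {'large': 1.0, 'medium': 0.7, 'small': 0.03}   # sampling fraction per stratum

def draw_sample(companies, seed=0):
    """companies: list of dicts with a 'size' key in {'large', 'medium', 'small'}."""
    rng = random.Random(seed)
    sample = []
    for size, rate in RATES.items():
        stratum = [c for c in companies if c['size'] == size]
        k = round(len(stratum) * rate)
        # Take the whole stratum for large firms, simple random sampling otherwise.
        sample.extend(stratum if rate == 1.0 else rng.sample(stratum, k))
    return sample
```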
