• Title/Summary/Keyword: Time Weighted Algorithm

Search results: 312 (processing time 0.024 seconds)

Fuzzy Hybrid Control of a Smart TMD for Reduction of Wind Responses in a Tall Building (초고층건물의 풍응답제어를 위한 스마트 TMD의 퍼지 하이브리드제어)

  • Kim, Han-Sang;Kim, Hyun-Su
    • Journal of the Computational Structural Engineering Institute of Korea / v.22 no.2 / pp.135-144 / 2009
  • A fuzzy hybrid control technique with a smart tuned mass damper (STMD) was proposed in this study for the suppression of wind-induced motion of a tall building. To develop an effective control algorithm for the STMD, skyhook and groundhook control algorithms were employed. The skyhook controller can effectively reduce STMD motion, while the groundhook controller shows good performance in reducing building responses. In this study, a fuzzy hybrid controller, which can determine an optimal weighting factor for combining the two controllers in real time, was developed to improve on the conventional hybrid controller using a weighted-sum approach. A 76-story office building was used as an example structure to investigate the performance of the proposed controller. A magnetorheological (MR) damper was used to develop the STMD, and its control performance was evaluated in comparison with passive and active TMDs. The numerical studies show that the control effectiveness of the STMD is significantly superior to that of the conventional TMD. They also show that the fuzzy hybrid controller can effectively adjust the skyhook and groundhook control algorithms and reduce the responses of both the STMD and the building.
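
The weighted-sum blending of the two controllers can be sketched as follows. The damping coefficients and the weighting factor `alpha` (which the paper's fuzzy controller would determine in real time) are illustrative assumptions, not values from the study:

```python
def hybrid_damper_force(alpha, v_damper, v_floor, c_sky=1.0e5, c_ground=1.0e5):
    """Weighted-sum hybrid of skyhook and groundhook control forces.

    alpha blends the two controllers: alpha=1 -> pure skyhook
    (suppresses STMD motion), alpha=0 -> pure groundhook
    (suppresses building motion). The damping coefficients are
    placeholders; the paper tunes a fuzzy controller to pick alpha.
    """
    f_sky = -c_sky * v_damper        # skyhook: damp absolute STMD velocity
    f_ground = -c_ground * v_floor   # groundhook: damp absolute floor velocity
    return alpha * f_sky + (1.0 - alpha) * f_ground

# pure skyhook ignores the floor velocity entirely
print(hybrid_damper_force(1.0, v_damper=0.02, v_floor=0.5))  # -> -2000.0
```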

3D Image Mergence using Weighted Bipartite Matching Method based on Minimum Distance (최소 거리 기반 가중치 이분 분할 매칭 방법을 이용한 3차원 영상 정합)

  • Jang, Taek-Jun;Joo, Ki-See;Jang, Bog-Ju;Kang, Kyeang-Yeong
    • Journal of Advanced Navigation Technology / v.12 no.5 / pp.494-501 / 2008
  • In this paper, to merge the full 3D information of a body occluded from a single viewpoint, a new image-merging algorithm is introduced that uses images of the body captured on a turntable from four directions. Two images represented as polygon meshes are merged using a weighted bipartite matching method, with different weights assigned per coordinate axis based on minimum distance, since the merged images do not exhibit abrupt variation in 3D coordinates and the scan proceeds in a single direction. To obtain the entire 3D information of the body, this step is repeated three times, as four images are captured. The proposed method reduces search time by 200-300% relative to conventional branch-and-bound, dynamic programming, and Hungarian methods, though its matching accuracy is slightly lower.
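
A minimal sketch of minimum-distance matching with per-axis weights is shown below. It uses a simple greedy pass (the paper's algorithm is more involved, but greedy matching illustrates why it beats exhaustive Hungarian-style search on time); `axis_w` values are illustrative, not the paper's weights:

```python
import numpy as np

def greedy_weighted_match(src, dst, axis_w=(1.0, 1.0, 1.0)):
    """Greedily match two 3-D vertex sets by weighted minimum distance.

    axis_w lets different coordinate axes carry different weights
    (the paper weights axes according to the scan direction).
    Returns a list of (src_index, dst_index) pairs.
    """
    w = np.asarray(axis_w, dtype=float)
    free = set(range(len(dst)))          # still-unmatched dst vertices
    pairs = []
    for i, p in enumerate(src):
        # weighted squared distance to every remaining dst vertex
        j = min(free, key=lambda k: float(np.sum(w * (p - dst[k]) ** 2)))
        free.remove(j)
        pairs.append((i, j))
    return pairs

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dst = np.array([[1.1, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(greedy_weighted_match(src, dst))  # each vertex pairs with its nearest
```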


An Adaptive Proximity Route Selection Method in DHT-Based Peer-to-Peer Systems (DHT 기반 피어-투-피어 시스템을 위한 적응적 근접경로 선택기법)

  • Song Ji-Young;Han Sae-Young;Park Sung-Yong
    • The KIPS Transactions:PartA / v.13A no.1 s.98 / pp.11-18 / 2006
  • In the Internet, which spans various networks, it is difficult to reduce real routing time merely by minimizing hop count. We propose an adaptive proximity route selection method for DHT-based peer-to-peer systems, in which each node selects, as the next routing hop, the entry in its routing table with the smallest lookup latency. Using the Q-Routing algorithm and an exponential recency-weighted average, each node estimates the total latency and builds a lookup table. Moreover, without additional overhead, nodes exchange their lookup tables to update their routing tables. Simulations measuring lookup latency and hop-to-hop latency show that our method outperforms the original Chord method as well as CFS's server selection method.
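
The exponential recency-weighted average mentioned above is a one-line update; a small sketch follows. The learning rate `alpha` and the latency values are assumptions for illustration:

```python
def ewma_update(estimate, sample, alpha=0.2):
    """Exponential recency-weighted average used to track a per-entry
    lookup latency estimate: recent samples dominate while old ones
    decay geometrically. alpha is an assumed learning rate."""
    return (1.0 - alpha) * estimate + alpha * sample

est = 100.0                          # initial latency estimate (ms)
for sample in (80.0, 80.0, 80.0):    # the link suddenly gets faster
    est = ewma_update(est, sample)
print(est)                           # estimate drifts from 100 toward 80
```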

Design and Development of the Multiple Kinect Sensor-based Exercise Pose Estimation System (다중 키넥트 센서 기반의 운동 자세 추정 시스템 설계 및 구현)

  • Cho, Yongjoo;Park, Kyoung Shin
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.3 / pp.558-567 / 2017
  • In this research, we developed an efficient real-time human exercise pose estimation system using multiple Kinects. The main objective of this system is to measure and recognize the user's posture (such as a knee curl or lunge) more accurately by deploying Kinects at the front and the sides. It is designed as an extensible, modular method that enables support for additional postures in the future. The system is configured as multiple clients and a Unity3D server. Each client processes Kinect skeleton data and sends it to the server. The server performs the multiple-Kinect calibration process and then applies the pose estimation algorithm, based on a Kinect posture recognition model that uses feature extraction and a weighted average of feature values across the different Kinects. This paper presents the design and implementation of the system and describes how to build and run an interactive Unity3D exergame on top of it.
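
The per-sensor weighted averaging of feature values can be sketched as below. The feature (a joint angle) and the weights (trusting the front sensor more) are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def fuse_feature(values, weights):
    """Weighted average of one pose feature (e.g. a knee angle in
    degrees) measured by several Kinects. The weights would reflect
    each sensor's reliability for the current pose; here they are
    illustrative constants."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * v) / np.sum(w))

# front Kinect trusted twice as much as each side sensor
angle = fuse_feature([90.0, 84.0, 96.0], [2.0, 1.0, 1.0])
print(angle)
```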

Joint Mode Selection and Resource Allocation for Mobile Relay-Aided Device-to-Device Communication

  • Tang, Rui;Zhao, Jihong;Qu, Hua;Zhu, Zhengcang;Zhang, Yanpeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.950-975 / 2016
  • Device-to-Device (D2D) communication underlaying cellular networks is a promising add-on component for future radio communication systems. It provides more access opportunities for local device pairs and enhances system throughput (ST), especially when mobile relays (MR) are further enabled to facilitate D2D links whose direct channel conditions are unfavorable. However, mutual interference is inevitable due to spectral reuse, and selecting a suitable transmission mode to benefit the correlated resource allocation (RA) is another difficult problem. We aim to optimize the ST of the hybrid system via joint consideration of mode selection (MS) and RA, which includes admission control (AC), power control (PC), channel assignment (CA) and relay selection (RS). The original problem is generally NP-hard; therefore, we decompose it into two parts with a hierarchical structure: (i) PC is mode-dependent, but its optimum can be obtained exactly for any given mode, with additional AC design to meet individual quality-of-service requirements. (ii) Building on that optimum, the joint design of MS, CA and RS can be viewed from a graph perspective and transformed into the maximum weighted independent set problem, which we then approximate with a greedy algorithm in polynomial time. Numerical results demonstrate the efficacy of our mechanism and the resulting gain from MR-aided D2D communication.
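
A standard greedy heuristic for the maximum weighted independent set problem is sketched below; the paper's actual greedy rule and conflict-graph construction may differ, and the toy vertices/weights are assumptions:

```python
def greedy_mwis(weights, edges):
    """Greedy approximation of maximum weighted independent set:
    repeatedly take the heaviest remaining vertex and exclude its
    neighbours. weights: dict vertex -> weight; edges: (u, v) pairs
    marking conflicting choices."""
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, banned = set(), set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if v not in banned:
            chosen.add(v)
            banned |= adj[v] | {v}
    return chosen

# vertices = candidate (mode, channel, relay) choices; edges = conflicts
w = {"a": 5.0, "b": 4.0, "c": 3.0}
print(greedy_mwis(w, [("a", "b"), ("b", "c")]))  # -> {'a', 'c'}
```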

Representative Feature Extraction of Objects using VQ and Its Application to Content-based Image Retrieval (VQ를 이용한 영상의 객체 특징 추출과 이를 이용한 내용 기반 영상 검색)

  • Jang, Dong-Sik;Jung, Seh-Hwan;Yoo, Hun-Woo;Sohn, Yong-Jun
    • Journal of KIISE:Computing Practices and Letters / v.7 no.6 / pp.724-732 / 2001
  • In this paper, a new method of extracting the features of the major objects that represent an image using Vector Quantization (VQ) is proposed. The principal image features used in a content-based image retrieval system are color, texture, shape, and the spatial positions of objects. The representative color and texture features are extracted from a given image using a VQ clustering algorithm together with general color and texture feature extraction. Since retrieval is object-based, desirable images can be searched and retrieved regardless of the position, rotation, and size of objects. The experimental results show that feature extraction time is much reduced by using VQ, that the highest retrieval rate is obtained when the color and texture weights are set to 0.5 and 0.5, respectively, and that the proposed method provides up to 90% precision and recall for 'person' query images.
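
The weighted combination of color and texture distances used for ranking can be sketched as follows. The 0.5/0.5 weights are the setting the abstract reports as best; the Euclidean distance and the feature vectors themselves are illustrative assumptions:

```python
import numpy as np

def combined_distance(q_color, q_texture, d_color, d_texture,
                      w_color=0.5, w_texture=0.5):
    """Weighted sum of the color-feature and texture-feature distances
    between a query image (q_*) and a database image (d_*). The 0.5/0.5
    default mirrors the weights the paper found best; the distance
    measure and feature layout are illustrative."""
    dc = np.linalg.norm(np.asarray(q_color, float) - np.asarray(d_color, float))
    dt = np.linalg.norm(np.asarray(q_texture, float) - np.asarray(d_texture, float))
    return w_color * dc + w_texture * dt

d = combined_distance([1.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 2.0])
print(d)  # 0.5 * 1.0 + 0.5 * 2.0 = 1.5
```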


VM Scheduling for Efficient Dynamically Migrated Virtual Machines (VMS-EDMVM) in Cloud Computing Environment

  • Supreeth, S.;Patil, Kirankumari
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1892-1912 / 2022
  • With the massive demand and growth of cloud computing, virtualization plays an important role in providing services to end-users efficiently. However, as services over cloud computing increase, managing and running multiple Virtual Machines (VMs) becomes more challenging because of excessive power consumption. It is thus important to overcome these challenges by adopting an efficient technique to manage and monitor the status of VMs in a cloud environment. Power/energy consumption can be reduced by managing VMs more effectively in the datacenters of the cloud environment, switching VMs between active and inactive states. The resulting reduction in energy consumption also lowers carbon emissions, leading to green cloud computing. The proposed efficient dynamic VM scheduling approach minimizes Service Level Agreement (SLA) violations and manages VM migration by lowering energy consumption effectively while balancing load. In the proposed work, the VM Scheduling for Efficient Dynamically Migrated VM (VMS-EDMVM) approach first detects over-utilized hosts using the Modified Weighted Linear Regression (MWLR) algorithm, together with a dynamic utilization model for under-utilized hosts. A Maximum Power Reduction and Reduced Time (MPRRT) approach is developed for VM selection, followed by a two-phase Best-Fit CPU, BW (BFCB) VM scheduling mechanism, simulated in CloudSim with an adaptive utilization threshold. The proposed work achieved a power consumption of 108.45 kWh, and the total SLA violation was 0.1%. The VM migration count was reduced to 2,202, showing better performance than the other methods discussed in this paper.
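
A sketch in the spirit of the weighted-linear-regression over-utilization check is shown below. The recency weighting scheme, the utilization threshold, and the sample history are assumptions for illustration, not the paper's MWLR formulation:

```python
import numpy as np

def is_overutilized(cpu_history, threshold=0.8):
    """Flag a host as over-utilized by extrapolating its CPU-utilization
    trend with a weighted linear regression, weighting recent samples
    more heavily. The weighting and threshold are illustrative, not the
    paper's exact MWLR parameters."""
    y = np.asarray(cpu_history, dtype=float)
    x = np.arange(len(y), dtype=float)
    w = np.linspace(0.5, 1.0, len(y))          # emphasize recent samples
    slope, intercept = np.polyfit(x, y, 1, w=w)
    predicted_next = slope * len(y) + intercept  # one step ahead
    return bool(predicted_next > threshold)

print(is_overutilized([0.50, 0.60, 0.70, 0.78]))  # rising trend -> True
print(is_overutilized([0.30, 0.30, 0.30, 0.30]))  # flat, low    -> False
```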

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases, and they discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a given transaction is visible in the database analysis, because a transaction's weight is higher if it contains many items with high values. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in transactional-weight-based frequent itemset mining. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms need no additional database scan once the WIT-tree is constructed, since each node of the WIT-tree carries item information such as the item and its transaction IDs.
In particular, traditional algorithms perform a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. The algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them all. WIT-FWIs-MODIFY adds a unique feature that decreases the operations needed to calculate the frequency of a new itemset. WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test evaluates the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, based on the Apriori technique, has the worst efficiency because it requires many more computations than the others on average.
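
The core notions of transaction weight and weighted support can be sketched as below. The mean-of-item-weights definition follows the common WIS-style convention described above; the item weights and toy database are assumptions:

```python
def transaction_weight(transaction, item_weights):
    """Weight of a transaction = mean weight of its items, so a
    transaction holding many high-value items scores high. This is the
    common WIS-style definition; exact formulas vary per algorithm."""
    return sum(item_weights[i] for i in transaction) / len(transaction)

def weighted_support(itemset, db, item_weights):
    """Weighted support of an itemset = sum of the weights of the
    transactions that contain all of its items."""
    s = set(itemset)
    return sum(transaction_weight(t, item_weights)
               for t in db if s <= set(t))

iw = {"a": 0.9, "b": 0.5, "c": 0.1}          # illustrative item weights
db = [("a", "b"), ("a", "b", "c"), ("c",)]   # toy transaction database
ws = weighted_support(("a", "b"), db, iw)
print(ws)  # 0.7 + 0.5 = 1.2
```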

Linear programming models using a Dantzig type risk for portfolio optimization (Dantzig 위험을 사용한 포트폴리오 최적화 선형계획법 모형)

  • Ahn, Dayoung;Park, Seyoung
    • The Korean Journal of Applied Statistics / v.35 no.2 / pp.229-250 / 2022
  • Since the publication of Markowitz's (1952) mean-variance portfolio model, portfolio optimization has been studied in many fields. The classical mean-variance portfolio model is a nonlinear convex problem; by applying Dantzig's linear programming method, it can be converted to a linear form, which effectively reduces computation time. In this paper, we propose a Dantzig perturbation portfolio model that can reduce management and transaction costs by constructing a portfolio from a stable, small (sparse) set of assets. The average return and risk are adjusted to the investor's purpose by a perturbation method in which a fixed portion is invested in the existing benchmark and the rest in the assets selected by the portfolio optimization model. For covariance estimation, we propose a Gaussian kernel weight covariance that applies time-dependent weights reflecting the characteristics of time-series data. The performance of the proposed model is evaluated against the benchmark portfolio on five real data sets. Empirical results show that the proposed portfolios provide higher expected returns or lower risks than the benchmark, and that their asset selection is sparse and stable.
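
A time-weighted covariance estimator in the spirit of the Gaussian kernel weighting described above can be sketched as follows. The kernel form, its placement (peaking at the most recent observation), and the bandwidth are assumptions; the paper's exact weighting may differ:

```python
import numpy as np

def gaussian_kernel_cov(returns, bandwidth=20.0):
    """Covariance estimate with Gaussian kernel weights over time:
    observations near the end of the sample (the present) receive the
    largest weight, so the estimate tracks recent co-movement.
    returns: (T, p) array of asset returns, rows ordered in time."""
    X = np.asarray(returns, dtype=float)
    T = X.shape[0]
    t = np.arange(T)
    w = np.exp(-0.5 * ((t - (T - 1)) / bandwidth) ** 2)  # peak at last obs
    w /= w.sum()                                         # normalize weights
    mu = w @ X                                           # weighted mean
    Xc = X - mu
    return (Xc * w[:, None]).T @ Xc                      # weighted covariance

rng = np.random.default_rng(0)
S = gaussian_kernel_cov(rng.normal(size=(250, 3)))
print(S.shape)  # (3, 3), symmetric
```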

Research on the Development of Distance Metrics for the Clustering of Vessel Trajectories in Korean Coastal Waters (국내 연안 해역 선박 항적 군집화를 위한 항적 간 거리 척도 개발 연구)

  • Seungju Lee;Wonhee Lee;Ji Hong Min;Deuk Jae Cho;Hyunwoo Park
    • Journal of Navigation and Port Research / v.47 no.6 / pp.367-375 / 2023
  • This study developed a new distance metric for vessel trajectories, applicable to marine traffic control services in Korean coastal waters. The proposed metric is a weighted sum of the traditional Hausdorff distance, which measures the similarity between spatiotemporal data, the difference in average Speed Over Ground (SOG), and the difference in the variance of Course Over Ground (COG) between two trajectories. To validate the new metric, a comparative analysis was conducted on actual Automatic Identification System (AIS) trajectory data using an agglomerative clustering algorithm. Data visualizations confirm that trajectory clustering with the new metric reflects geographical distances and the distribution of vessel behavioral characteristics more accurately than conventional metrics such as the Hausdorff distance and the Dynamic Time Warping distance. Quantitatively, by the Davies-Bouldin index, the clustering results were superior or comparable, and the metric proved highly efficient in distance computation.
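
The three-term weighted sum can be sketched as follows; the equal weights `w` and the toy trajectories are illustrative assumptions (the paper tunes the weights for Korean coastal traffic):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sequences."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def trajectory_distance(a, b, sog_a, sog_b, cog_a, cog_b,
                        w=(1.0, 1.0, 1.0)):
    """Weighted sum of (i) the Hausdorff distance between positions,
    (ii) the difference in mean SOG, and (iii) the difference in COG
    variance, following the recipe in the abstract. The weights w are
    placeholders, not the tuned values."""
    terms = (hausdorff(a, b),
             abs(np.mean(sog_a) - np.mean(sog_b)),
             abs(np.var(cog_a) - np.var(cog_b)))
    return float(np.dot(w, terms))

a = np.array([[0.0, 0.0], [1.0, 0.0]])   # toy trajectory, 1 unit north of b
b = np.array([[0.0, 1.0], [1.0, 1.0]])
d = trajectory_distance(a, b, sog_a=[10, 12], sog_b=[10, 12],
                        cog_a=[90, 90], cog_b=[90, 90])
print(d)  # identical speed/course stats, so only the Hausdorff term remains
```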