• Title/Summary/Keyword: tradeoff

Search results: 394

Application of InVEST Offshore Wind Model for Evaluation of Offshore Wind Energy Resources in Jeju Island (제주도 해상풍력 에너지 자원평가를 위한 InVEST Offshore Wind 모형 적용)

  • KIM, Tae-Yun;JANG, Seon-Ju;KIM, Choong-Ki
    • Journal of the Korean Association of Geographic Information Studies, v.20 no.2, pp.47-59, 2017
  • This study assesses offshore wind energy resources around Jeju Island using the InVEST Offshore Wind model. First, the wind power density around the coast of Jeju was calculated using reanalysis data from the Korean Local Analysis and Prediction System (KLAPS). Next, the net present value (NPV) of a 168 MW offshore wind farm scenario was evaluated, taking into account costs (turbine development, submarine cable installation, maintenance), turbine operating efficiency, and a 20-year operation period. High wind resources were found along both the western and eastern coasts of Jeju Island, with high wind power densities of 400 W/m². To evaluate the NPV around Jeju Island visually, a five-grade classification was employed; the results showed that the western sea area has a high NPV, with wind power resources over 400 W/m². The InVEST Offshore Wind model can quickly provide optimal spatial information for various wind farm scenarios, and it can be combined with the results of marine ecosystem service evaluation to design an efficient marine spatial plan around Jeju Island.
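
Under the hood, the NPV step amounts to discounting each year's net cash flow against the up-front capital cost. A minimal sketch of that calculation, assuming illustrative figures for capacity factor, tariff, and discount rate (none of these are the paper's inputs):

```python
# Hypothetical NPV screening for an offshore wind farm scenario, in the
# spirit of the InVEST Offshore Wind valuation step. All figures below
# are illustrative assumptions, not the paper's inputs.

def npv_wind_farm(capacity_mw=168, capacity_factor=0.35, price_per_mwh=100.0,
                  capex_per_mw=3.0e6, opex_per_mw_yr=1.0e5,
                  years=20, discount_rate=0.07):
    annual_energy_mwh = capacity_mw * capacity_factor * 8760  # hours/year
    annual_cash = annual_energy_mwh * price_per_mwh - capacity_mw * opex_per_mw_yr
    capex = capacity_mw * capex_per_mw
    # Discount each year's net cash flow back to present value.
    return -capex + sum(annual_cash / (1 + discount_rate) ** t
                        for t in range(1, years + 1))

print(f"NPV: {npv_wind_farm():,.0f} USD")
```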

Local Stereo Matching Method based on Improved Matching Cost and Disparity Map Adjustment (개선된 정합 비용 및 시차 지도 재생성 기반 지역적 스테레오 정합 기법)

  • Kang, Hyun Ryun;Yun, In Yong;Kim, Joong Kyu
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.5, pp.65-73, 2017
  • In this paper, we propose a stereo matching method that improves image quality in hole and disparity-discontinuity regions. Stereo matching extracts a disparity map by finding corresponding points between a stereo image pair. However, conventional stereo matching methods suffer from a tradeoff between accuracy and precision that depends on the baseline length of the stereo pair. In addition, textureless and occluded regions of the stereo pair give rise to holes and disparity discontinuities. The proposed method extracts an initial disparity map with improved disparity-discontinuity and mismatched regions using a modified AD-Census-Gradient cost and adaptive weighted cost aggregation. We then refine the disparity map to correct mismatched regions while also improving accuracy. Experimental results demonstrate that the proposed method produces high-quality disparity maps, successfully improving mismatched regions and accuracy while maintaining the matching performance of existing methods. On test images with high error ratios, matching performance improves by about 3.22% over recent stereo matching methods.
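
The AD-Census part of such a matching cost can be sketched compactly: blend the absolute intensity difference with the Hamming distance of Census codes via exponentials. The window size and blending constants below are assumptions, and the gradient term and aggregation step are omitted:

```python
import numpy as np

def census_transform(img, w=3):
    """Bit-string Census transform over a w x w window (w odd)."""
    r = w // 2
    h, wd = img.shape
    codes = np.zeros((h, wd), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            # Append one bit: is the neighbor darker than the center?
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def ad_census_cost(left, right, d, lam_ad=10.0, lam_census=30.0):
    """Per-pixel cost for disparity d: exponential blend of absolute
    difference and Census Hamming distance (gradient term omitted)."""
    right_shifted = np.roll(right, d, axis=1)
    ad = np.abs(left.astype(float) - right_shifted.astype(float))
    ham_xor = census_transform(left) ^ census_transform(right_shifted)
    # Popcount of the XOR-ed 64-bit codes, per pixel.
    hamming = np.unpackbits(ham_xor.view(np.uint8).reshape(*ham_xor.shape, 8),
                            axis=-1).sum(axis=-1).astype(float)
    return (1 - np.exp(-ad / lam_ad)) + (1 - np.exp(-hamming / lam_census))
```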

An integrated framework of security tool selection using fuzzy regression and physical programming (퍼지회귀분석과 physical programming을 활용한 정보보호 도구 선정 통합 프레임워크)

  • Nguyen, Hoai-Vu;Kongsuwan, Pauline;Shin, Sang-Mun;Choi, Yong-Sun;Kim, Sang-Kyun
    • Journal of the Korea Society of Computer and Information, v.15 no.11, pp.143-156, 2010
  • Faced with an increase of malicious threats from the Internet as well as local area networks, many companies are considering deploying a security system. To help a decision maker select a suitable security tool, this paper proposes a three-step integrated framework using linear fuzzy regression (LFR) and physical programming (PP). First, based on experts' estimations of security criteria, the analytic hierarchy process (AHP) and quality function deployment (QFD) are employed to specify an intermediate score for each criterion and the relationships among the criteria. Next, the evaluation value of each criterion is computed using LFR. Finally, a goal programming (GP) method is customized to obtain the most appropriate security tool for an organization, considering the tradeoff among the multiple objectives associated with quality, credibility, and cost, using the relative weights calculated by the physical programming weights (PPW) algorithm. A numerical example illustrates the advantages and contributions of this approach, which is expected to help a decision maker select a suitable security tool by exploiting experts' experience, with noise eliminated, together with the accuracy of mathematical optimization methods.
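
The final selection step amounts to weighted goal programming: penalize each candidate's undesirable deviations from the quality, credibility, and cost targets using the relative weights, then pick the minimum-penalty tool. A minimal sketch with made-up scores, targets, and weights (none of these come from the paper):

```python
# Hypothetical weighted goal programming over candidate security tools.
# Scores, targets, and weights are illustrative assumptions.
tools = {
    "tool_A": {"quality": 0.82, "credibility": 0.75, "cost": 0.40},
    "tool_B": {"quality": 0.70, "credibility": 0.90, "cost": 0.55},
}
targets = {"quality": 0.90, "credibility": 0.85, "cost": 0.35}  # cost: lower is better
weights = {"quality": 0.5, "credibility": 0.3, "cost": 0.2}     # e.g., PPW-derived

def gp_penalty(s):
    # Sum weighted undesirable deviations: shortfall for the benefit
    # criteria, overshoot for the cost criterion.
    p = weights["quality"] * max(0.0, targets["quality"] - s["quality"])
    p += weights["credibility"] * max(0.0, targets["credibility"] - s["credibility"])
    p += weights["cost"] * max(0.0, s["cost"] - targets["cost"])
    return p

best = min(tools, key=lambda t: gp_penalty(tools[t]))
print(best, {t: round(gp_penalty(s), 3) for t, s in tools.items()})
```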

Noise Removal using Fuzzy Mask Filter (퍼지 마스크 필터를 이용한 잡음 제거)

  • Lee, Sang-Jun;Yoon, Seok-Hyun;Kim, Kwang-Baek
    • Journal of the Korea Society of Computer and Information, v.15 no.11, pp.41-45, 2010
  • Image processing techniques are fundamental to human vision-based image information processing. Widely studied areas include image transformation, image enhancement, image restoration, and image compression, and a common subgoal across them is enhancing image information for correct information retrieval. As a fundamental task for image recognition and interpretation, image enhancement includes noise filtering. Conventional filtering algorithms may achieve high noise removal rates but usually have difficulty preserving boundary information; as a result, they often require additional image processing algorithms, trading more CPU time and a higher possibility of information loss. In this paper, we propose a fuzzy mask filtering algorithm that achieves a high noise removal rate with fewer of these side effects. The algorithm first determines a threshold using fuzzy logic applied to information from the masks, then decides the output pixel value based on that threshold. In an experiment with random impulse noise and salt-and-pepper noise, the proposed algorithm was more effective at removing noise without information loss.
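
A minimal sketch of the threshold-then-decide idea: a fuzzy degree of "noisiness" computed from the mask decides whether the center pixel is kept or replaced by the mask median. The membership function and cutoff are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def fuzzy_mask_filter(img, cutoff=0.5, spread=40.0):
    """Replace a pixel by its 3x3 mask median when its fuzzy 'noisy'
    degree exceeds the cutoff. Membership shape is an assumed choice."""
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            mask = padded[y:y + 3, x:x + 3]          # 3x3 neighborhood
            med = np.median(mask)
            dev = abs(padded[y + 1, x + 1] - med)    # center vs. median
            noisy = min(1.0, dev / spread)           # fuzzy noise degree
            if noisy > cutoff:
                out[y, x] = med
    return out.astype(img.dtype)
```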

Vertical Integration of Solar business and its Value Analysis: Efficiency or Flexibility (태양광 수직통합화가 사업가치에 미치는 영향: 효율성 및 유연성)

  • Kim, Kyung-Nam;Jeon, Woo-Chan;SonU, Suk-Ho
    • New & Renewable Energy, v.8 no.2, pp.33-43, 2012
  • Why have solar companies preferred vertical integration of the whole value chain? Major solar companies have built strong internal vertical integration across the entire PV value chain. We ask whether such integration increases corporate value and whether market conditions affect the result. To test these questions, we conducted a multivariate analysis of the characteristic factors affecting corporate value, measured by Tobin's Q, based on financial and non-financial data of PV companies listed on the US stock market between 2005 and 2010. We hypothesize that since integration increases overall efficiency but decreases the flexibility to adjust to changing market conditions, the combined effect of the efficiency gain and the flexibility loss ultimately determines the sign of the integration effect on corporate value. We infer that this combined effect is influenced heavily by the business cycle: in a boom (seller's market) the efficiency gain may exceed the flexibility loss, and vice versa in a bust. We test whether the sign of the combined effect changes after 2009, the year the market is known to have shifted from a seller's to a buyer's market, and which factors most influence the sign. We show that 1) integration increases corporate value in general, but after 2009 it significantly decreases value, and 2) ratios such as production/total cost and cash turnover period, chosen as inverse measures of flexibility, negatively affect Tobin's Q, and more strongly after 2009. This shows that flexibility improves corporate value, and more so in a recession (buyer's market). These results imply that a solar company should set its integration strategy considering the tradeoff between efficiency and flexibility and the impact of the business cycle on both factors. A strategy based only on the price competitiveness established in boom times can bring undesirable outcomes. In addition, strategic alliances in some parts of the value chain, as a flexible form of bonding, should be considered as a complement to rigid integration.
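
The core test reduces to regressing Tobin's Q on an integration indicator, inverse-flexibility proxies, and their post-2009 interactions. A minimal OLS sketch on simulated data (variable names and all numbers are illustrative, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
integration = rng.integers(0, 2, n).astype(float)   # vertically integrated?
cash_turnover = rng.normal(60, 15, n)               # inverse flexibility proxy
post2009 = rng.integers(0, 2, n).astype(float)      # observed after 2009?

# Simulated Tobin's Q with a sign flip on integration after 2009,
# mimicking the hypothesized boom-vs-bust pattern (fabricated data).
q = (1.2 + 0.3 * integration - 0.5 * integration * post2009
     - 0.004 * cash_turnover + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), integration, integration * post2009,
                     cash_turnover, post2009])
beta, *_ = np.linalg.lstsq(X, q, rcond=None)
for name, b in zip(["const", "integ", "integ*post09", "cash_turn", "post09"], beta):
    print(f"{name:>13}: {b:+.4f}")
```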

One-key Keyboard: A Very Small QWERTY Keyboard Supporting Text Entry for Wearable Computing (원키 키보드: 웨어러블 컴퓨팅 환경에서 문자입력을 지원하는 초소형 QWERTY 키보드)

  • Lee, Woo-Hun;Sohn, Min-Jung
    • Journal of the HCI Society of Korea, v.1 no.1, pp.21-28, 2006
  • Most commercialized wearable text input devices are wrist-worn keyboards that adopt the minimization strategy of reducing the number of keys. Generally, a drastic key reduction that achieves sufficient wearability increases KSPC (Keystrokes per Character), decreases text entry performance, and requires additional effort to learn a new typing method. We therefore face a wearability-usability tradeoff in designing a good wearable keyboard. To address this problem, we introduce a new keyboard minimization method: reducing key pitch. From a series of empirical studies, we found the potential of a keyboard with a 7 mm key pitch and good wearability and social acceptance in terms of physical form factors, which allowed users to type 15.0 WPM over 3 session trials. However, participants pointed out that the lack of passive haptic feedback during keying and of visual feedback on their input degraded text entry performance. We developed the One-key Keyboard to address this problem. A traditional desktop keyboard has one key per character, but the One-key Keyboard has only one key (70 mm × 35 mm) on which a 10×5 QWERTY key array is printed. The One-key Keyboard detects the position of the fingertip at the moment of the keying event and infers the character entered. We conducted a text entry performance test comprising 5 sessions. Participants typed 18.9 WPM with a 6.7% error rate over all sessions and achieved up to 24.5 WPM. Based on these results, the One-key Keyboard is a promising text input device for wearable computing, balancing wearability, social acceptance, input speed, and learnability.
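
The character-inference step comes down to quantizing the fingertip coordinate on the 70 mm × 35 mm key into one of the 10×5 printed cells. A minimal sketch, where the row contents are assumed from a standard QWERTY layout rather than taken from the paper:

```python
# Map a fingertip (x, y) in millimeters on a 70 x 35 mm key surface to a
# character in an assumed 10 x 5 QWERTY array (row contents hypothetical).
ROWS = ["1234567890",
        "qwertyuiop",
        "asdfghjkl;",
        "zxcvbnm,./",
        "     _    "]  # assumed bottom row; '_' marks a space-bar cell

KEY_W, KEY_H = 70.0, 35.0   # key surface size in mm
COLS, NROWS = 10, 5

def char_at(x_mm, y_mm):
    # Clamp into range, then quantize to a grid cell.
    col = min(COLS - 1, max(0, int(x_mm / (KEY_W / COLS))))
    row = min(NROWS - 1, max(0, int(y_mm / (KEY_H / NROWS))))
    return ROWS[row][col]

print(char_at(3.0, 10.0))   # left edge, second row -> 'q'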

A Model-based Methodology for Application Specific Energy Efficient Data path Design Using FPGAs (FPGA에서 에너지 효율이 높은 데이터 경로 구성을 위한 계층적 설계 방법)

  • Jang Ju-Wook;Lee Mi-Sook;Mohanty Sumit;Choi Seonil;Prasanna Viktor K.
    • The KIPS Transactions:PartA, v.12A no.5 s.95, pp.451-460, 2005
  • We present a methodology for designing energy-efficient data paths using FPGAs. Our methodology integrates domain-specific modeling, coarse-grained performance evaluation, design space exploration, and low-level simulation to understand the tradeoffs among energy, latency, and area. The domain-specific modeling technique defines a high-level model by identifying the components and parameters specific to a domain that affect system-wide energy dissipation; a domain is a family of architectures and corresponding algorithms for a given application kernel. The high-level model also provides functions for estimating energy, latency, and area that facilitate tradeoff analysis. Design space exploration (DSE) analyzes the design space defined by the domain and selects a set of designs. Low-level simulations are used for accurate performance estimation of the designs selected by the DSE and for final design selection. We illustrate our methodology using a family of architectures and algorithms for matrix multiplication; the designs identified demonstrate tradeoffs among energy, latency, and area. To demonstrate the effectiveness of our methodology, we compare our designs with a vendor-specified matrix multiplication kernel, using average power density (E/AT), energy/(area × latency), as the comparison metric. For various problem sizes, designs obtained using our methodology are on average 25% superior with respect to the E/AT metric compared with the state-of-the-art designs by Xilinx. We also discuss the implementation of our methodology using the MILAN framework.
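
The comparison metric is straightforward to compute: E/AT = energy / (area × latency), with lower values indicating a better energy-area-latency balance. A minimal sketch with placeholder design numbers (not the paper's measurements):

```python
# Compare designs by E/AT = energy / (area * latency); lower is better.
# All numbers below are illustrative placeholders.
designs = {
    "our_design":  {"energy_nJ": 120.0, "area_slices": 900, "latency_us": 4.0},
    "vendor_core": {"energy_nJ": 150.0, "area_slices": 850, "latency_us": 5.0},
}

def e_at(d):
    return d["energy_nJ"] / (d["area_slices"] * d["latency_us"])

for name, d in sorted(designs.items(), key=lambda kv: e_at(kv[1])):
    print(f"{name}: E/AT = {e_at(d):.5f}")
```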

WordNet-Based Category Utility Approach for Author Name Disambiguation (저자명 모호성 해결을 위한 개념망 기반 카테고리 유틸리티)

  • Kim, Je-Min;Park, Young-Tack
    • The KIPS Transactions:PartB, v.16B no.3, pp.225-232, 2009
  • Author name disambiguation is essential for improving the performance of document indexing, retrieval, and web search; it resolves the conflict that arises when multiple authors share the same name label. This paper introduces a novel approach that exploits ontologies and a WordNet-based category utility for author name disambiguation. Our method encodes author knowledge as a populated ontology using various properties: titles, abstracts, and co-authors of papers, and authors' affiliations. The author ontology was constructed semi-automatically for the artificial intelligence and semantic web areas using the OWL API and heuristics. Author name disambiguation then determines the correct author among the candidate authors in the populated ontology. Candidate authors are evaluated using the proposed WordNet-based category utility. Category utility is a tradeoff between the intra-class similarity and inter-class dissimilarity of author instances, where instances are described as attribute-value pairs; the WordNet-based variant exploits concept information in WordNet for semantic analysis during disambiguation. In experiments, the WordNet-based category utility increased the number of resolved disambiguations by about 10% compared with the plain category utility, with overall accuracy of around 98%.
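
The underlying quantity is the standard category utility, which rewards partitions whose clusters make attribute values more predictable than their base rates. A minimal sketch of that formula over attribute-value pairs (the paper's WordNet extension, which matches values at the concept level, is not shown):

```python
from collections import Counter

def category_utility(clusters):
    """clusters: list of clusters; each cluster is a list of instances,
    each instance a dict of attribute -> value."""
    instances = [inst for c in clusters for inst in c]
    n, k = len(instances), len(clusters)
    # Baseline: squared probabilities of attribute-value pairs overall.
    base = Counter((a, v) for inst in instances for a, v in inst.items())
    base_term = sum((cnt / n) ** 2 for cnt in base.values())
    cu = 0.0
    for c in clusters:
        # Within-cluster predictability of the same attribute-value pairs.
        within = Counter((a, v) for inst in c for a, v in inst.items())
        within_term = sum((cnt / len(c)) ** 2 for cnt in within.values())
        cu += (len(c) / n) * (within_term - base_term)
    return cu / k

# Toy example: two author clusters described by affiliation and topic
# (hypothetical attribute values, not from the paper).
a = [{"affil": "KAIST", "topic": "ontology"}, {"affil": "KAIST", "topic": "ontology"}]
b = [{"affil": "SNU", "topic": "vision"}, {"affil": "SNU", "topic": "vision"}]
print(category_utility([a, b]))   # well-separated clusters -> positive CU
```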

Characterizing Information Processing in Visual Search According to Probability of Target Prevalence (표적 출현확률에 따른 시각탐색 정보처리 특성)

  • Park, Hyung-Bum;Son, Han-Gyeol;Hyun, Joo-Seok
    • Korean Journal of Cognitive Science, v.26 no.3, pp.357-375, 2015
  • In daily life, the probability of target prevalence in visual search varies from very low to high, yet most laboratory studies of visual search have used a fixed target prevalence of 50%. The present study examined the properties of information processing during visual search when the probability of target prevalence was manipulated from low (20%) through medium (50%) to high (80%). The search items were simple shape stimuli, and search accuracy, signal detection measures, and reaction times (RTs) were analyzed to characterize the effect of target prevalence on information processing strategies in visual search. The analyses showed that miss rates increased and false-alarm rates decreased under low target prevalence, and this pattern reversed under high prevalence. Signal detection measures revealed that target prevalence shifted the response criterion (c) without affecting sensitivity (d'). In addition, RTs for correct rejections in target-absent trials became slower as prevalence increased, whereas RTs for hits in target-present trials were relatively constant regardless of prevalence. The RT delay in target-absent trials indicates that increased target prevalence made the 'quitting threshold' for search termination more conservative. These results support an account in which the target prevalence effect in visual search arises from a shift of decision criteria and consequent changes in search information processing, and they rule out a speed-accuracy tradeoff account.
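
The signal detection measures cited here follow the standard definitions d' = z(H) − z(F) and c = −(z(H) + z(F))/2, where H and F are the hit and false-alarm rates. A minimal sketch with illustrative rates showing how the criterion can shift while sensitivity stays fixed:

```python
from statistics import NormalDist

def d_prime_and_c(hit_rate, fa_rate):
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative rates (not the study's data): misses rise (lower H)
# under low prevalence, false alarms rise under high prevalence.
for label, h, f in [("low prevalence", 0.80, 0.05),
                    ("high prevalence", 0.95, 0.20)]:
    d, c = d_prime_and_c(h, f)
    print(f"{label}: d' = {d:.2f}, c = {c:+.2f}")  # same d', shifted c
```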

State-Aware Re-configuration Model for Multi-Radio Wireless Mesh Networks

  • Zakaria, Omar M.;Hashim, Aisha-Hassan Abdalla;Hassan, Wan Haslina;Khalifa, Othman Omran;Azram, Mohammad;Goudarzi, Shidrokh;Jivanadham, Lalitha Bhavani;Zareei, Mahdi
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.1, pp.146-170, 2017
  • Joint channel assignment and routing is a well-known problem in multi-radio wireless mesh networks, for which an optimal configuration is required to maximize overall throughput and fairness. However, other objectives must also be considered to provide high-quality service when the network is deployed under highly dynamic traffic. In this paper, we propose a re-configuration optimization model that optimizes network throughput while reducing the disruption to mesh clients' traffic caused by the re-configuration process. In this multi-objective optimization model, four objective functions are minimized: maximum link-channel utilization, network average contention, channel re-assignment cost, and re-routing cost. The latter two objectives reduce the re-configuration overhead, i.e., the amount of traffic disrupted by the channel switching and path re-routing that result from applying the new configuration. To adapt to traffic dynamics in the network, which may be caused by factors such as user mobility, we propose a centralized heuristic re-configuration algorithm called State-Aware Joint Routing and Channel Assignment (SA-JRCA) based on our re-configuration model. The algorithm re-assigns channels to radios and re-configures flows' routes to achieve a tradeoff between maximizing network throughput and minimizing re-configuration overhead. The ns-2 simulator is used as the simulation tool, and metrics including channel-link utilization, channel re-assignment cost, re-routing cost, throughput, and delay are evaluated. Simulation results show that SA-JRCA performs well in terms of packet delivery ratio, aggregate throughput, and re-configuration overhead, and that it remains more stable under traffic variation than the compared algorithms, which suffer performance degradation under highly dynamic traffic.
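
One simple way to fold the four objectives into a single re-configuration score is a weighted sum over normalized metrics; the sketch below uses that formulation with illustrative weights and candidates, and is not SA-JRCA itself:

```python
# Score candidate (channel assignment, routing) configurations by a
# weighted sum of the model's four objectives; lower is better.
# Weights and candidate metrics are illustrative placeholders.
WEIGHTS = {"max_util": 0.4, "avg_contention": 0.3,
           "reassign_cost": 0.15, "rerouting_cost": 0.15}

candidates = [
    {"name": "keep_current", "max_util": 0.90, "avg_contention": 0.35,
     "reassign_cost": 0.00, "rerouting_cost": 0.00},
    {"name": "reconfig_1",   "max_util": 0.70, "avg_contention": 0.25,
     "reassign_cost": 0.20, "rerouting_cost": 0.10},
]

def score(c):
    # Weighted sum trades throughput-related gains against the
    # disruption caused by channel switching and re-routing.
    return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)

best = min(candidates, key=score)
print(best["name"], {c["name"]: round(score(c), 3) for c in candidates})
```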