• Title/Summary/Keyword: Benchmark Problem

Increasing Transnational Threats and Terrorism and Establishment of Integrated Border Security Systems: Focused on U.S., Canada and Australia (초국가적 위협 및 테러리즘 증가와 통합국경안보체계 구축: 미국, 캐나다, 호주를 중심으로)

  • Yoon, Taeyoung
    • Convergence Security Journal
    • /
    • v.17 no.4
    • /
    • pp.69-78
    • /
    • 2017
  • Since the September 11, 2001 attacks, transnational crime and terrorism have increased, the importance of border security has been emphasized, and integrated CIQ (customs, immigration, and quarantine) capability has been required. The U.S., Canada, and Australia are consolidating CIQ to strengthen border security, focusing on tighter immigration control of travelers and goods and on airport, port, and land border security. In 2003, the U.S. established Customs and Border Protection (CBP) under the Department of Homeland Security. Canada also established the Canada Border Services Agency (CBSA) under Public Safety Canada in 2003. The Australian Customs and Border Protection Service was integrated into the Department of Immigration and Border Protection (DIBP), and the Australian Border Force was established in 2015. However, Korea operates a separate border management system for each CIQ task, which is unable to respond to complex border threats such as illegal immigration, entry of terrorists, drug smuggling, and gun trafficking at airports, ports, and land borders. To solve this problem, it is possible to consider sequentially integrating the customs and quarantine services, which have high similarities, and, in the mid to long term, to integrate all CIQ tasks by delegating immigration control duties to the Korea Customs Service. There is also a plan to benchmark the single-accountability CIQ agencies of the U.S., Canada, and Australia, adapt them to the Korean situation, and establish a new integrated border security organization.

A study on image segmentation for depth map generation (깊이정보 생성을 위한 영상 분할에 관한 연구)

  • Lim, Jae Sung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.10
    • /
    • pp.707-716
    • /
    • 2017
  • Advances in image display devices necessitate display images suitable for the user's purpose. The display devices should be able to provide object-based image information when a depth map is required. In this paper, we present an algorithm that uses a histogram-based image segmentation method for depth map generation. In the conventional K-means clustering algorithm, the number of centroids is a fixed parameter, so existing K-means algorithms cannot adaptively determine the number of clusters. Furthermore, the K-means algorithm tends to sink into local minima, which causes over-segmentation. In contrast, the proposed algorithm adaptively selects centroids and builds on the histogram-based approach while keeping computational complexity in mind. It is designed to produce object-based results by preventing the algorithm from falling into local minima. Finally, we remove the over-segmented components through a connected-component labeling algorithm. The proposed algorithm yields object-based results and improves on the benchmark method by 0.017 and 0.051 in terms of Probabilistic Rand Index (PRI) and Segmentation Covering (SC), respectively.
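
As a rough illustration of the pipeline described above (histogram-driven centroid selection, pixel assignment, and connected-component cleanup of over-segmented fragments), the following Python sketch uses NumPy and SciPy. The peak-picking rule and the minimum-region threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def histogram_segmentation(gray, min_peak_frac=0.01, min_region=64):
    """Segment a grayscale image by picking histogram peaks as centroids,
    then merging tiny regions via connected-component labeling.
    The peak rule and thresholds are illustrative assumptions."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Pick local maxima of the smoothed histogram as adaptive centroids.
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    peaks = [i for i in range(1, 255)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]
             and smooth[i] >= min_peak_frac * gray.size]
    centroids = np.array(peaks if peaks else [int(gray.mean())])

    # Assign every pixel to its nearest centroid (initial label map).
    labels = np.argmin(np.abs(gray[..., None].astype(float) - centroids), axis=-1)

    # Remove over-segmented fragments: relabel tiny connected components
    # to the label of their most frequent neighboring region.
    cleaned = labels.copy()
    for k in range(len(centroids)):
        comp, n = ndimage.label(labels == k)
        for c in range(1, n + 1):
            mask = comp == c
            if mask.sum() < min_region:
                border = ndimage.binary_dilation(mask) & ~mask
                if border.any():
                    cleaned[mask] = np.bincount(cleaned[border]).argmax()
    return cleaned

if __name__ == "__main__":
    img = (np.random.rand(120, 160) * 255).astype(np.uint8)
    seg = histogram_segmentation(img)
    print("segments used:", len(np.unique(seg)))
```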

A Partial Scan Design by Unifying Structural Analysis and Testabilities (구조분석과 테스트 가능도의 통합에 의한 부분스캔 설계)

  • Park, Jong-Uk;Sin, Sang-Hun;Park, Seong-Ju
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1177-1184
    • /
    • 1999
  • This paper provides a new partial scan design technique that not only reduces the time for selecting scan flip-flops but also improves fault coverage. To simplify the problem of test pattern generation in sequential circuits, full scan and partial scan design techniques have been widely adopted. Partial scan techniques, which aim at minimizing the area overhead while maximizing fault coverage, can be classified into techniques based on structural analysis and techniques based on testabilities. Partial scan by structural analysis does not take much time to select scan flip-flops, but its fault coverage is low. On the other hand, although partial scan by testabilities generally results in high fault coverage, it requires more time to select scan flip-flops than the former method. In this paper, we analyze and unify the strengths of the structural-analysis-based and testability-based techniques. The new partial scan design technique not only reduces the time for selecting scan flip-flops but also improves fault coverage. Experimental results demonstrate a remarkable reduction in scan flip-flop selection time and relatively high fault coverage on most ISCAS89 benchmark circuits.
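
Structural-analysis-based partial scan commonly selects scan flip-flops so that feedback cycles in the circuit's flip-flop dependency graph (S-graph) are broken. The sketch below illustrates only that structural idea with a simple greedy cycle-cutting heuristic on a toy S-graph; the selection rule, the example graph, and the combination with testability measures proposed in the paper are not reproduced here.

```python
import networkx as nx

def select_scan_flipflops(s_graph):
    """Greedy structural-analysis heuristic: repeatedly scan the flip-flop
    with the most connections inside the remaining cyclic part of the
    S-graph until the graph is acyclic. Illustrative only."""
    g = s_graph.copy()
    scanned = []
    while True:
        # Nodes in nontrivial strongly connected components still lie on cycles.
        cyclic = [scc for scc in nx.strongly_connected_components(g)
                  if len(scc) > 1 or any(g.has_edge(v, v) for v in scc)]
        if not cyclic:
            break
        nodes = set().union(*cyclic)
        pick = max(nodes, key=lambda v: g.in_degree(v) + g.out_degree(v))
        scanned.append(pick)
        g.remove_node(pick)   # scanning a flip-flop breaks its feedback paths
    return scanned

if __name__ == "__main__":
    # Toy S-graph: an edge u -> v means flip-flop v depends on flip-flop u.
    g = nx.DiGraph([("ff1", "ff2"), ("ff2", "ff3"), ("ff3", "ff1"),
                    ("ff3", "ff4"), ("ff4", "ff3"), ("ff5", "ff5")])
    print("scan flip-flops:", select_scan_flipflops(g))
```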

Development and Verification of OGSFLAC Simulator for Hydromechanical Coupled Analysis: Single-phase Fluid Flow Analysis (수리-역학적 복합거동 해석을 위한 OGSFLAC 시뮬레이터 개발 및 검증: 단상 유체 거동 해석)

  • Park, Chan-Hee;Kim, Taehyun;Park, Eui-Seob;Jung, Yong-Bok;Bang, Eun-Seok
    • Tunnel and Underground Space
    • /
    • v.29 no.6
    • /
    • pp.468-479
    • /
    • 2019
  • It is essential to comprehend coupled hydro-mechanical behavior in order to utilize the subsurface, given the recent demand for underground space. In this study, we developed a new simulator for numerical simulation as a research tool that can account for various domestic field and subsurface conditions. To develop the new module, we combined OpenGeoSys, a scientific software package that handles fluid mechanics (H), thermodynamics (T), and rock and soil mechanics (M) in the subsurface, with FLAC3D, a commercial software package for geotechnical engineering problems. In this simulator, OpenGeoSys acts as the master and FLAC3D as the slave, coupled sequentially through file exchange. We chose Terzaghi's consolidation problem, a single-phase fluid flow problem under saturated conditions, as the benchmark model to verify the proposed module. The comparison between the analytical solution and the numerical results showed good agreement.
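
Terzaghi's one-dimensional consolidation problem, used as the verification benchmark above, has a standard series solution for the excess pore pressure, which is what numerical results are typically compared against. The snippet below evaluates that textbook solution; the soil parameters are placeholder values, not those used in the paper.

```python
import numpy as np

def terzaghi_excess_pressure(z, t, u0, cv, H, n_terms=100):
    """Excess pore pressure u(z, t) for Terzaghi's 1-D consolidation
    (drainage at z = 0, impervious base at z = H):
        u(z, t) = u0 * sum_m (2 / M) * sin(M z / H) * exp(-M^2 Tv),
    with M = pi (2m + 1) / 2 and time factor Tv = cv * t / H^2."""
    Tv = cv * t / H**2
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2.0
    terms = (2.0 / M) * np.sin(M * z / H) * np.exp(-(M**2) * Tv)
    return u0 * terms.sum()

if __name__ == "__main__":
    # Placeholder parameters: 10 m drainage path, cv = 1e-7 m^2/s, 100 kPa load.
    for day in (1, 10, 100, 1000):
        u = terzaghi_excess_pressure(z=5.0, t=day * 86400.0,
                                     u0=100.0, cv=1e-7, H=10.0)
        print(f"t = {day:5d} d  ->  u(mid-depth) = {u:6.2f} kPa")
```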

DNA Sequence Design using $\varepsilon$ -Multiobjective Evolutionary Algorithm ($\varepsilon$-다중목적함수 진화 알고리즘을 이용한 DNA 서열 디자인)

  • Shin Soo-Yong;Lee In-Hee;Zhang Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.12
    • /
    • pp.1217-1228
    • /
    • 2005
  • Recently, since DNA computing has been widely studied for various applications, DNA sequence design, the most basic and important step for DNA computing, has been highlighted. In previous works, DNA sequence design has been formulated as a multi-objective optimization task and solved by the elitist non-dominated sorting genetic algorithm (NSGA-II). However, NSGA-II requires a lot of computational time. Therefore, in this paper we use an $\varepsilon$-multiobjective evolutionary algorithm ($\varepsilon$-MOEA) to overcome the drawbacks of NSGA-II. To compare the performance of the two algorithms in detail, we apply both to the DTLZ2 benchmark function. $\varepsilon$-MOEA outperformed NSGA-II in both convergence and diversity, by $70\%$ and $73\%$ respectively. In particular, $\varepsilon$-MOEA finds optimal solutions in little computational time. Based on these results, we redesign the DNA sequences generated by previous DNA sequence design tools and the DNA sequences for the 7-travelling salesman problem (TSP). The experimental results show that $\varepsilon$-MOEA outperforms in most cases. Especially for the 7-TSP, $\varepsilon$-MOEA achieves comparable results two times faster, while finding $22\%$ improved diversity and $92\%$ improved convergence in the final solutions when given the same time.
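
The DTLZ2 function used for the algorithm comparison is a standard scalable multi-objective benchmark whose Pareto-optimal front is the positive octant of the unit sphere. Below is a minimal NumPy implementation for the common three-objective case; the choice of k = 10 distance variables follows the usual convention and is an assumption, not necessarily the paper's setup.

```python
import numpy as np

def dtlz2(x, n_obj=3):
    """DTLZ2 test problem: x in [0, 1]^n with n = n_obj - 1 + k.
    Returns the n_obj objective values to be minimized."""
    x = np.asarray(x, dtype=float)
    dist_vars = x[n_obj - 1:]                   # "distance" variables
    g = np.sum((dist_vars - 0.5) ** 2)          # g = 0 on the Pareto front
    theta = x[:n_obj - 1] * np.pi / 2.0
    f = np.full(n_obj, 1.0 + g)
    for i in range(n_obj):
        f[i] *= np.prod(np.cos(theta[:n_obj - 1 - i]))
        if i > 0:
            f[i] *= np.sin(theta[n_obj - 1 - i])
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(3 - 1 + 10)      # 2 position variables + k = 10 distance variables
    print("objectives:", np.round(dtlz2(x), 4))
    # On the Pareto-optimal front (all distance variables = 0.5), sum(f_i^2) = 1.
    x[2:] = 0.5
    print("sum of squares on front:", round(float(np.sum(dtlz2(x) ** 2)), 6))
```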

New Method to Calculate Cost of Capital for Telecommunication Market (통신시장의 투자보수율 산정 개선방안)

  • Kim, Chang-Soo;Chon, Mi-Lim
    • Journal of Digital Convergence
    • /
    • v.10 no.4
    • /
    • pp.181-190
    • /
    • 2012
  • The cost of capital is one of the key factors in accounting regulation policy for the telecommunication market. This paper investigates efficient policy improvements concerning accounting regulation for the telecommunication market, focusing on cost of capital calculation methods and their application. First, the method of estimating the cost of capital should be improved. In estimating the cost of equity capital, it is necessary to use a benchmark method for the equity risk premium, which will reduce analytical errors caused by rapid economic change and inflation. It is also more desirable to use a debt-premium-adding method for the cost of debt capital, and an optimal capital structure method may be a better way to estimate the capital structure. Second, the process of estimating the cost of capital also has to be reformed. The telecommunication industry changes rapidly, and the current process does not reflect these fast environmental changes; therefore, the cost of capital should be calculated every year and for individual companies. There is information asymmetry between the regulator and the regulated companies, because of which the cost of capital calculation process takes a long time and costs a lot. To solve this problem, the regulator should legislate the calculation method and have the regulated companies report the calculation results. Lastly, the major telecommunication companies are all listed now, so it is possible to calculate the cost of capital for each of them separately. Because the telecommunication industry is growing fast, the estimation method and application of the cost of capital must be continuously improved; the process of determining the calculation method must be discussed and the best method chosen.
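
The improvements discussed above (a CAPM-style cost of equity with a benchmarked equity risk premium, a debt-premium-adding cost of debt, and an optimal capital structure) feed into the usual weighted average cost of capital. The sketch below combines them with made-up input values purely for illustration; it does not reproduce the regulated figures for any carrier.

```python
def cost_of_capital(risk_free, beta, benchmark_erp, debt_premium,
                    equity_weight, tax_rate=0.0):
    """Weighted average cost of capital (WACC) sketch.
    Cost of equity via CAPM with a benchmarked equity risk premium;
    cost of debt via the debt-premium-adding method."""
    cost_equity = risk_free + beta * benchmark_erp            # CAPM
    cost_debt = (risk_free + debt_premium) * (1 - tax_rate)   # after-tax debt cost
    debt_weight = 1.0 - equity_weight                         # target capital structure
    return equity_weight * cost_equity + debt_weight * cost_debt

if __name__ == "__main__":
    # All inputs are placeholder values for illustration only.
    wacc = cost_of_capital(risk_free=0.035, beta=0.8, benchmark_erp=0.06,
                           debt_premium=0.015, equity_weight=0.6, tax_rate=0.22)
    print(f"WACC = {wacc:.2%}")
```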

A Comparative Experiment on Dimensional Reduction Methods Applicable for Dissimilarity-Based Classifications (비유사도-기반 분류를 위한 차원 축소방법의 비교 실험)

  • Kim, Sang-Woon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.59-66
    • /
    • 2016
  • This paper presents an empirical evaluation of dimensionality reduction strategies by which dissimilarity-based classifications (DBC) can be implemented efficiently. In DBC, classification is not based on feature measurements of individual objects (a set of attributes), but rather on a suitable dissimilarity measure among the individual objects (pair-wise object comparisons). One problem of DBC is the high dimensionality of the dissimilarity space when a large number of objects are treated. To address this issue, two kinds of solutions have been proposed in the literature: prototype selection (PS)-based methods and dimension reduction (DR)-based methods. In this paper, instead of utilizing the PS-based or DR-based methods, a way of performing DBC in Eigen spaces (ES) is considered and empirically compared. In ES-based DBC, classification is performed as follows: first, a set of principal eigenvectors is extracted from the training data set using principal component analysis; second, an Eigen space is spanned by a subset of the extracted and selected eigenvectors; third, after measuring distances among the projected objects in the Eigen space using $l_p$-norms as the dissimilarity, classification is performed. The experimental results, obtained using the nearest neighbor rule with artificial and real-life benchmark data sets, demonstrate that when the dimensionality of the Eigen spaces is selected appropriately, the performance of ES-based DBC can be improved in terms of classification accuracy compared to the PS-based and DR-based methods.
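
The ES-based DBC procedure summarized above (principal eigenvectors from PCA, projection onto a selected subset, $l_p$-norm dissimilarities, nearest-neighbor classification) can be sketched compactly with NumPy. The number of retained eigenvectors, the norm order, and the synthetic data below are arbitrary illustrative choices.

```python
import numpy as np

def es_dbc_fit(X_train, n_eigvecs):
    """Extract the top principal eigenvectors of the training data (PCA)."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_eigvecs]
    return mean, eigvecs[:, order]

def es_dbc_predict(X_train, y_train, X_test, mean, basis, p=2):
    """Project objects into the Eigen space, measure l_p dissimilarities,
    and classify with the 1-nearest-neighbor rule."""
    P_train = (X_train - mean) @ basis
    P_test = (X_test - mean) @ basis
    preds = []
    for q in P_test:
        d = np.linalg.norm(P_train - q, ord=p, axis=1)  # pair-wise dissimilarities
        preds.append(y_train[np.argmin(d)])
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(2, 1, (50, 20))])
    y = np.array([0] * 50 + [1] * 50)
    X_train, y_train, X_test, y_test = X[::2], y[::2], X[1::2], y[1::2]
    mean, basis = es_dbc_fit(X_train, n_eigvecs=5)
    preds = es_dbc_predict(X_train, y_train, X_test, mean, basis)
    print("test accuracy:", (preds == y_test).mean())
```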

A Study on Improvement of Korean Defense Specification Classification System through the Domestic and Foreign Standard Classification System Research and Analysis (국내외 표준 분류체계 조사·분석을 통한 국방규격 분류체계 개선방안 연구)

  • Yeom, Seul-Ki
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.4
    • /
    • pp.457-465
    • /
    • 2021
  • This study analyzed the current state and problems of the defense standard classification system and proposes a plan for performing standard management tasks efficiently, based on case analysis of domestic and foreign standard classification systems. To continuously address military product quality problems in defense technical data, it is necessary to promptly reflect excellent civilian technology and to benchmark civilian standard systems in order to manage high-quality defense standards. First, to analyze the current state, the domestic civilian KS and ICS codes, the overseas US defense standard system, and the NATO classification system were analyzed. For the Korean military, the state of the defense standard classification system was examined through the National Defense Standards Comprehensive System operated by the Defense Acquisition Program Administration. The Ministry of National Defense's classification of weapon systems and force support systems is the classification system most suitable for the Korean military; it classifies all items into eight weapon system and six force support system standard categories, and more specifically into 12 major categories, 66 categories, and 352 sub-categories. In this study, the establishment of a defense standard management system is proposed so that the classification system for new defense standards can be improved by reflecting the superior technology of the private sector.

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.9
    • /
    • pp.17-25
    • /
    • 2023
  • In this paper, we propose a model that performs human pose estimation with fewer parameters and faster inference through a MobileViT-based architecture. The base model achieves lightweight performance through a structure that combines the features of convolutional neural networks with those of the Vision Transformer. The Transformer, the key mechanism in this study, has become more influential as Transformer-based models have outperformed convolutional neural network-based models in the field of computer vision. Similarly, in the field of human pose estimation, the Vision Transformer-based ViTPose maintains the best performance on all major human pose estimation benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, it is costly for users to train. Accordingly, the base model compensates for the Vision Transformer's weak inductive bias, which would otherwise demand a large amount of computation, by obtaining local representations through a convolutional neural network structure. Finally, the proposed model obtains a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, which are 1/5 and 1/9, respectively, of those of ViTPose.
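
The reported figures (0.694 mAP at 3.28 GFLOPs and 9.72 M parameters) reflect the trade-off between a lightweight backbone and a heatmap-regression keypoint head. The snippet below is only a generic PyTorch illustration of attaching such a head to backbone features and checking its parameter count; it is not the paper's MobileViT-based architecture, and the channel count is a placeholder.

```python
import torch
import torch.nn as nn

class HeatmapHead(nn.Module):
    """Generic keypoint head: upsample backbone features with transposed
    convolutions and predict one heatmap per keypoint (COCO uses 17)."""
    def __init__(self, in_channels, num_keypoints=17, width=128):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_channels, width, 4, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(width, num_keypoints, kernel_size=1)

    def forward(self, feats):
        return self.out(self.deconv(feats))

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

if __name__ == "__main__":
    head = HeatmapHead(in_channels=640)      # placeholder backbone channel count
    feats = torch.randn(1, 640, 8, 6)        # dummy backbone feature map
    heatmaps = head(feats)
    print("heatmap shape:", tuple(heatmaps.shape))
    print("head parameters (M):", round(count_parameters(head) / 1e6, 2))
```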

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for "subject-predicate" separated queries and select the proper documents; 2) determine whether each sentence is suitable for extracting information and derive its confidence; 3) based on the predicate feature, extract the information from the proper sentences and derive the overall confidence of the information extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline, the proposed system is confirmed to show a higher performance index. The contribution of this study is that we developed a sequence tagging model based on a bi-directional LSTM-CRF using the predicate feature of the query, and with it a robust model that maintains high recall performance even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base extension must take into account the heterogeneous characteristics of source-specific document types; the proposed methodology proved to extract information effectively from various document types compared to the baseline model, whereas previous research performs poorly when extracting information from document types different from the training data. In addition, this study prevents unnecessary information extraction attempts on documents that do not contain the answer information, through the step that predicts the suitability of documents and sentences for information extraction before extraction is performed. It is meaningful that we provide a method by which precision can be maintained even in an actual web environment. Information extraction for knowledge base expansion cannot guarantee that a document contains the correct answer, because it targets unstructured documents on the real web; when question answering is performed on the real web, previous machine reading comprehension studies show a low level of precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the suitability of documents and sentences for information extraction is meaningful in that it contributes to maintaining extraction performance even in this setting. The limitations of this study and future research directions are as follows. First, there is a data preprocessing issue: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, so the extraction result can suffer when morphological analysis is not performed properly, and an improved morphological analyzer is needed to enhance performance. Second, there is an entity ambiguity problem: the information extraction system of this study cannot distinguish entities that share the same name but refer to different things, so if several people with the same name appear in the news, the system may not extract information about the intended query; future research needs to take measures to disambiguate persons with the same name. Third, there is an issue with the evaluation query data: we selected 400 of the user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 2,800 documents (400 queries x 7 documents per query: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging for each whether a correct answer is included. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, which is a costly manual activity; future research should do so. It is also necessary to develop a Korean benchmark data set for information extraction over multi-source web documents so that results can be evaluated more objectively.
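
The core extractor described above is a predicate-conditioned bi-directional LSTM-CRF sequence tagger over answer-span tags. The sketch below shows only a plain bi-directional LSTM emission model in PyTorch with a hypothetical BIO tag set; the CRF layer, the predicate feature, and the Korean morphological preprocessing used in the paper are omitted.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Bi-directional LSTM emission model for BIO answer-span tagging.
    A CRF decoding layer (as used in the paper) would sit on top of the
    per-token scores produced here."""
    def __init__(self, vocab_size, tagset_size=3, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.scores = nn.Linear(2 * hidden_dim, tagset_size)  # B / I / O

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.scores(out)          # (batch, seq_len, tagset_size)

if __name__ == "__main__":
    model = BiLSTMTagger(vocab_size=5000)
    tokens = torch.randint(1, 5000, (2, 12))   # dummy morpheme-id sequences
    emissions = model(tokens)
    tags = emissions.argmax(dim=-1)             # greedy decode (CRF omitted)
    print("emission shape:", tuple(emissions.shape))
    print("predicted tags:", tags.tolist()[0])
```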