• Title/Summary/Keyword: 마크 (Mark)

Search Results: 3,080, Processing Time: 0.026 seconds

Dashboard Design for Evidence-based Policymaking of Sejong City Government (세종시 데이터 증거기반 정책수립을 위한 대시보드 디자인에 관한 연구)

  • Park, Jin-A;An, Se-Yun
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.12
    • /
    • pp.173-183
    • /
    • 2019
  • Sejong, Korea's special multifunctional administrative city, was created as a national project to relocate government ministries, with the aim of pursuing more balanced regional economic development and boosting national competitiveness. The second development phase will focus on mitigating the challenges raised by the increasing population and urbanization. Infrastructure, apartments, houses, private buildings, commercial structures, public buildings, and citizens are all producing more and more complex data. To face these challenges, the Sejong city government and its policy makers recognize the opportunity to ensure more enriched lives for citizens through data-driven city management, and are exploring how to use existing data effectively to improve policy services and support a more sustainable economic policy for sustainable city management. As a city government is a complex decision-making system, analysis of the astounding increase in city data is valuable for gaining insight into factors such as traffic flow. To support the requirement specification and management of government policy making, the graphic representation of information and data should provide a different, intuitive approach. Within this context, this paper outlines the design of an interactive, web-based dashboard that provides data visualization for better policy making and risk management.

An Explicit Dynamic Memory Management Scheme in Java Run-Time Environment (자바 실행시간 환경에서 명시적인 동적 메모리 관리 기법)

  • 배수강;이승룡;전태웅
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.58-72
    • /
    • 2003
  • Unlike C or C++ programming environments, where memory is released explicitly with the keywords free or delete, objects generated by the keyword new in Java are automatically managed by the garbage collector inside the Java Virtual Machine (JVM). This frees application programmers from the burden of memory management. The garbage collector, however, inherently has its own run-time execution overhead, which significantly degrades JVM performance. In order to mitigate this burden, we propose a novel dynamic memory management scheme for the Java environment. In the proposed method, application programmers can explicitly manage objects in a simple way, which in consequence reduces the run-time overhead while the garbage collector is running. To accomplish this, a Java application first calls APIs that are implemented in native Java, which in turn call JVM-dependent subroutines, preserving the portability characteristic of Java. In this way, we not only sustain stability in execution environments but also improve the performance of the garbage collector simply by calling the APIs. Our simulation study shows that the proposed scheme improves the execution time of the garbage collector by 10.07 to 52.24 percent for the Mark-and-Sweep algorithm.
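The Mark-and-Sweep collection this scheme accelerates can be sketched in a few lines. The following is an illustrative Python model (class and variable names are invented for the sketch), not the paper's JVM-level implementation:

```python
class Obj:
    """A toy heap object holding references to other objects."""
    def __init__(self, name):
        self.name = name
        self.refs = []
        self.marked = False

def mark(obj):
    """Mark phase: recursively flag everything reachable from a root."""
    if obj.marked:
        return
    obj.marked = True
    for r in obj.refs:
        mark(r)

def sweep(heap):
    """Sweep phase: keep marked objects, reclaim the rest."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False  # reset flags for the next collection cycle
    return live

# roots -> a -> b; c is unreachable garbage
a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)
heap = [a, b, c]
for root in (a,):
    mark(root)
heap = sweep(heap)
print([o.name for o in heap])  # ['a', 'b']
```

An explicit-management API such as the one proposed would let the programmer reclaim objects like `c` directly, so the collector has less to traverse.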

Analysis of the Actual State of Direction Guidance System on Road Traffic Signs in Urban Area -Centering around Suwon City- (도시부 도로안내표지의 지명정보 전달체계 실태분석 - 경기도 수원시를 중심으로 -)

  • Yoon, Hyo-Jin;Park, Mi-So
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.6 no.1 s.20
    • /
    • pp.29-38
    • /
    • 2006
  • There is an increasing need to provide sufficient direction information through road signs as cities and traffic networks expand. Improving the efficiency of direction guidance from road signs requires not only criteria for, but also a systematic approach to, selecting the place names that appear on them. This paper therefore examines the direction information on existing road signs leading to Suwon and investigates whether the current system of road signs provides efficient, systematic, and continuous direction information for road users to easily reach their destinations. Suwon's city hall is set as the final destination, approached from five other cities: Anyang, Osan, Ansan, Yongin, and Seongnam. The paper analyzes the road signs between these five cities and Suwon with respect to direction, advance notice of direction, and direction guidance, to determine whether the signs provide continuity and appear in suitable numbers. It is found that drivers cannot easily find the information they need from the existing road signs, and that the continuity of selected place names appearing systematically across signs is insufficient. In addition, direction guidance on road signs is problematic because the signs do not appear with adequate frequency and their continuity is not effective. Moreover, there is insufficient local direction guidance for immediate destinations with respect to turning left or right or going straight.

A Multi-thresholding Approach Improved with Otsu's Method (Otsu의 방법을 개선한 멀티 스래쉬홀딩 방법)

  • Li Zhe-Xue;Kim Sang-Woon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.5 s.311
    • /
    • pp.29-37
    • /
    • 2006
  • Thresholding is a fundamental approach to segmentation that exploits significant differences in pixel intensity. Otsu's thresholding uses the normalized histogram as a discrete probability density function, and selects the segmentation threshold by a criterion that maximizes the between-class variance of pixel intensity (equivalently, minimizes the within-class variance). However, Otsu's method has the disadvantage of repeatedly searching for optimal thresholds over the entire gray range. In this paper, a simple but fast multi-level thresholding approach is proposed by extending Otsu's method. Rather than invoking Otsu's method over the entire gray range, we advocate first dividing the gray-level range of an image into smaller sub-ranges and obtaining the multi-level thresholds by iteratively invoking this dividing process. Initially, the gray range of the image is divided into two classes with a threshold value selected by invoking Otsu's method over the entire range. The two classes are then divided into four classes by applying Otsu's method to each of the divided sub-ranges. This process is repeated until the required number of thresholds is obtained. Our experimental results for three benchmark images and fifty face images show that the proposed method could be used efficiently for pattern matching and face recognition.
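The single-threshold criterion underlying the method can be sketched as follows. This is a minimal Python version over a histogram (not the authors' code); the paper's multi-level extension would simply re-apply it to each sub-range:

```python
def otsu_threshold(hist):
    """Return t maximizing between-class variance; class 0 = levels <= t.
    hist: list of pixel counts per gray level."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                    # class-0 weight so far
        sum0 += t * hist[t]              # class-0 intensity sum so far
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy histogram over 8 gray levels: peaks at 2 and 6
hist = [0, 5, 20, 5, 0, 5, 20, 5]
print(otsu_threshold(hist))  # 3: splits the two modes
```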

An Iterative Data-Flow Optimal Scheduling Algorithm based on Genetic Algorithm for High-Performance Multiprocessor (고성능 멀티프로세서를 위한 유전 알고리즘 기반의 반복 데이터흐름 최적화 스케줄링 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.6
    • /
    • pp.115-121
    • /
    • 2015
  • In this paper, we propose an iterative data-flow optimal scheduling algorithm based on a genetic algorithm for high-performance multiprocessors. The basic hardware model can be extended to include detailed features of the multiprocessor architecture; this is illustrated by implementing a hardware model that requires routing data transfers over a communication network with limited capacity. The scheduling method consists of three layers. In the top layer, a genetic algorithm takes care of the optimization: it generates different permutations of operations, which are passed on to the middle layer. The global scheduler makes the main scheduling decisions based on a permutation of operations; details of the hardware model are not considered in this layer. These are handled in the bottom layer by the black-box scheduler, which completes the scheduling of an operation and ensures that the detailed hardware model is obeyed. Both schedulers can insert cycles into the schedule to ensure that a valid schedule is always found quickly. To test the performance of the scheduling method, benchmark results for five filters show that it is able to find good-quality schedules in reasonable time.
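A toy Python sketch of the top two layers (a genetic search over operation permutations feeding a greedy global scheduler) may clarify the structure; the bottom layer's detailed hardware model and network routing are omitted, and all names and parameters are illustrative:

```python
import random

def makespan(perm, durations, n_procs=2):
    """Greedy 'global scheduler': place each operation, in permutation
    order, on the processor that becomes free earliest."""
    free = [0] * n_procs
    for op in perm:
        p = free.index(min(free))
        free[p] += durations[op]
    return max(free)

def evolve(durations, pop_size=30, gens=50, seed=0):
    """Top layer: a genetic algorithm over permutations of operations."""
    rng = random.Random(seed)
    n = len(durations)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: makespan(p, durations))
        survivors = pop[: pop_size // 2]      # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return min(makespan(p, durations) for p in pop)

durations = [3, 3, 2, 2, 2]                 # five independent operations
print(makespan(list(range(5)), durations))  # 7: naive order is suboptimal
best = evolve(durations)                    # GA searches for a better permutation
```

The GA only permutes operations; all hardware-specific decisions stay inside the scheduler, mirroring the layered design the abstract describes.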

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.37-47
    • /
    • 2008
  • Exploiting the address generation units typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment is the optimization of the memory layout of program variables to take advantage of these address generation units; it consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method to minimize the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by using minimum-cost node breaking. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, our approach improves the initial solution iteratively by changing the memory layout at each iteration, because the memory layout affects the result of the address pointer assignment algorithm. We applied the proposed approach to about 3,000 access sequences from the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results show an average improvement of 25.9% in the address codes over previous works.
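The cost model that offset assignment optimizes can be illustrated simply: with an address register that auto-increments or auto-decrements by one, only accesses that jump more than one memory slot need an explicit pointer-load instruction. A hypothetical Python sketch (not the paper's algorithm; names are invented):

```python
def address_cost(access_seq, layout):
    """Count explicit pointer loads for a single address register.
    layout: variables in memory order; steps of +/-1 are free
    thanks to auto-increment/decrement addressing modes."""
    pos = {v: i for i, v in enumerate(layout)}
    cost = 1  # initial pointer load
    for prev, cur in zip(access_seq, access_seq[1:]):
        if abs(pos[cur] - pos[prev]) > 1:
            cost += 1
    return cost

seq = ["a", "b", "a", "c", "b"]
print(address_cost(seq, ["a", "b", "c"]))  # 2
print(address_cost(seq, ["b", "c", "a"]))  # 3: worse layout, more loads
```

Changing the layout changes the cost, which is why the proposed method re-optimizes the layout between iterations.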

The Effects of the Statistical Uncertainties in Monte Carlo Photon Dose Calculation for the Radiation Therapy (방사선 치료를 위한 몬테칼로 광자선 선량계산 시 통계적 불확실성 영향 평가)

  • Cheong, Kwang-Ho;Suh, Tae-Suk;Cho, Byung-Chul
    • Journal of Radiation Protection and Research
    • /
    • v.29 no.2
    • /
    • pp.105-115
    • /
    • 2004
  • Monte Carlo simulation requires a great deal of time to obtain a result of acceptable accuracy; therefore the optimal number of histories should be known so as to sacrifice neither time nor accuracy. In this study, we investigated the effects of statistical uncertainty on Monte Carlo photon dose calculation. The BEAMnrc and DOSXYZnrc systems were used for the Monte Carlo dose calculation, and a mediastinum case was simulated. Dose calculation results for various numbers of histories were obtained and analyzed using isodose curve comparison, dose-volume histogram (DVH) comparison, and root-mean-square differences (RMSD) as criteria. Statistical uncertainties were observed most clearly in the isodose curve comparison and RMSD, while DVHs were less sensitive. The acceptable uncertainties ($\bar{\Delta}D$) of Monte Carlo photon dose calculation for radiation therapy were estimated at a total error within 9%, or within 1% for voxels above $D_{max}/2$ or at the maximum dose.
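The trade-off the abstract describes, statistical uncertainty falling roughly as 1/sqrt(N) with the number of histories N, can be sketched with a toy Monte Carlo in Python (the per-history deposit model is invented for illustration; this is not a BEAMnrc/DOSXYZnrc computation):

```python
import math
import random

def dose_uncertainty(n_histories, seed=1):
    """Toy MC: mean 'dose' per history and its relative standard error."""
    rng = random.Random(seed)
    deposits = [rng.expovariate(1.0) for _ in range(n_histories)]
    mean = sum(deposits) / n_histories
    var = sum((d - mean) ** 2 for d in deposits) / (n_histories - 1)
    return mean, math.sqrt(var / n_histories) / mean  # relative error

_, err_small = dose_uncertainty(100)
_, err_large = dose_uncertainty(10000)
print(err_small > err_large)  # 100x more histories, ~10x less uncertainty
```

Halving the uncertainty costs four times the histories, which is why an optimal history count must be chosen rather than simply maximized.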

Generation of Efficient Fuzzy Classification Rules Using Evolutionary Algorithm with Data Partition Evaluation (데이터 분할 평가 진화알고리즘을 이용한 효율적인 퍼지 분류규칙의 생성)

  • Ryu, Joung-Woo;Kim, Sung-Eun;Kim, Myung-Won
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.32-40
    • /
    • 2008
  • Fuzzy rules are very useful and efficient for describing classification rules, especially when the attribute values are continuous and fuzzy in nature. However, it is generally difficult to determine the membership functions needed to generate efficient fuzzy classification rules. In this paper, we propose a method for automatically generating efficient fuzzy classification rules using an evolutionary algorithm. In our method, we generate a set of initial membership functions for the evolutionary algorithm by supervised clustering of the training data set, and evolve this set so as to generate fuzzy classification rules that take into account both classification accuracy and rule comprehensibility. To reduce the time needed to evaluate an individual, we also propose an evolutionary algorithm with data partition evaluation, in which the training data set is partitioned into a number of subsets and individuals are evaluated on a randomly selected subset at a time instead of the whole training data set. We evaluated our algorithm on the UCI learning data sets; the results showed that our method is on average more efficient than the existing algorithms. For the evolutionary algorithm with data partition evaluation, we experimented on the KDD'99 Cup intrusion detection data and confirmed that evaluation time was reduced by about 70%. Compared with the KDD'99 Cup winner, the accuracy was increased by 1.54% while the cost was reduced by 20.8%.
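The data-partition evaluation idea can be sketched in a few lines of Python (a hypothetical helper, not the authors' implementation): instead of scoring every individual on the whole training set, each evaluation uses one randomly chosen partition:

```python
import random

def partition_fitness(evaluate, data, n_parts=5, rng=None):
    """Score an individual on one random partition of the data.
    evaluate: a fitness function taking a data subset."""
    rng = rng or random.Random()
    size = len(data) // n_parts
    start = rng.randrange(n_parts) * size
    return evaluate(data[start:start + size])

# each call touches only 1/5 of the data, cutting evaluation time
data = list(range(100))
subset_len = partition_fitness(len, data, rng=random.Random(0))
print(subset_len)  # 20
```

Because different generations see different subsets, the noise this introduces averages out over the run while each generation evaluates far faster.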

Characteristics of Bearing Capacity under Square Footing on Two-layered Sand (2개층 사질토지반에서 정방형 기초의 지지력 특성)

  • 김병탁;김영수;이종현
    • Journal of the Korean Geotechnical Society
    • /
    • v.17 no.4
    • /
    • pp.289-299
    • /
    • 2001
  • This study investigated the ultimate bearing capacity and settlement of a square footing resting on homogeneous and two-layered sand. Model tests were performed to examine the influence on shallow footing behavior of the footing size, the relative density of the soil, the ratio of upper-layer thickness to footing width (H/B), the inclination of the boundary beneath the upper layer ($\theta$), and the stiffness ratio of the ground. At the same relative density, the bearing capacity factor ($N_{{\gamma}}$) is not constant but is directly related to the footing width, decreasing as the footing width increases. The ultimate bearing capacity predicted by Ueno's method, which accounts for footing size and confining pressure, agrees with the measurements better than the classical bearing capacity formulas, reaching 65% or more of the experimental values. Based on the results for two-layered ground with $\theta$=$0^{\circ}$, the critical upper-layer thickness beyond which the influence of the lower layer on ultimate bearing capacity can be neglected was determined to be twice the footing width. However, for an upper-layer relative density of 73%, this result was valid only for settlement ratios ($\delta$/B) below 0.05. Based on the results for two-layered ground with an inclined boundary, the looser the upper layer and the greater its thickness, the smaller the influence of the boundary inclination on ultimate bearing capacity. The change in ultimate settlement with increasing boundary inclination, relative to the horizontal case ($\theta$=$0^{\circ}$), was about 0.82~1.2 for an upper-layer $D_{r}$ of 73% and 0.9~1.07 for an upper-layer $D_{r}$ of 50%.


Development of CPLD technology mapping algorithm for Sequential Circuit under Time Constraint (시간제약 조건하에서 순차 회로를 위한 CPLD 기술 매핑 알고리즘 개발)

  • Youn, Chung-Mo;Kim, Hi-Seok
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.1
    • /
    • pp.224-234
    • /
    • 2000
  • In this paper, we propose a new CPLD technology mapping algorithm for sequential circuits under time constraints. The algorithm detects the feedbacks of a sequential circuit, separates each feedback variable into an immediate input variable, and represents the combinational part as a DAG. Among the nodes of the DAG, nodes with an outdegree of two or more are not separated but are replicated from the DAG and reconstructed into a fanout-free tree. This construction method consumes less area than the TEMPLA algorithm when implementing circuits, and improves processing time over TMCPLD within the given time constraint. Using the time constraint and the device delay, the number of partitionable levels is determined; the initial cost of each node is set to its number of OR terms, total costs are calculated after merging nodes, and nodes that exceed the number of OR terms of the CLBs composing the CPLD are partitioned and reconstructed as subgraphs. The nodes in the partitioned subgraphs are merged through collapsing, and the collapsed equations are bin-packed to fit the number of OR terms in the CLBs of the given device. In experiments on MCNC logic synthesis benchmark circuits, the proposed technology mapping algorithm reduces the number of CLBs by 15.58% compared with TEMPLA and reduces processing time compared with TMCPLD.

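The final bin-packing step above, fitting collapsed equations into CLBs with a fixed number of OR terms, can be sketched with a first-fit-decreasing heuristic in Python (an illustrative stand-in; the abstract does not specify which packing heuristic is used):

```python
def first_fit_decreasing(term_counts, clb_or_terms):
    """Pack equations (sized by OR-term count) into CLBs of fixed capacity."""
    bins = []
    for t in sorted(term_counts, reverse=True):
        for b in bins:                      # first CLB with enough room
            if sum(b) + t <= clb_or_terms:
                b.append(t)
                break
        else:                               # no CLB fits: open a new one
            bins.append([t])
    return bins

# five equations with 5, 3, 3, 2, 1 OR terms; CLBs hold 8 OR terms each
print(first_fit_decreasing([5, 3, 3, 2, 1], 8))  # [[5, 3], [3, 2, 1]] -> 2 CLBs
```

Each returned bin corresponds to one CLB, so fewer bins directly means fewer CLBs, the quantity the paper's results minimize.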