• Title/Abstract/Keyword: block partitioning


Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.7 / pp.21-28 / 2017
  • Practical models of FPGA architectures, which include performance- and/or density-enhancing components such as carry chains, wide-function multiplexers, and memory/multiplier blocks, are being adopted by academic FPGA placement tools that used to rely on simple, idealized models. Techniques such as pre-packing and multi-layer density analysis were previously proposed to remedy issues related to such practical models, and they effectively minimize wire length during initial analytic placement. Since timing, rather than wire length, is what ultimately must be optimized, most previous work takes timing constraints into account; however, the timing-driven techniques are mostly applied not to the initial analytic placement but to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into an existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at the end points is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Because this function can unnecessarily tighten paths that have no violations, a new function that calculates and minimizes the largest negative slack at the end points is also proposed and compared. Since the existing, non-timing-driven legalization is applied before the timing analysis, any improvement in timing is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, while the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
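
The abstract does not spell out the exact formulation, so the following is only a minimal sketch of how the two timing terms it describes are typically folded into an analytic placer's objective. The symbols are illustrative assumptions rather than the paper's notation: WL(x) is the wirelength objective, E_k the set of timing end points driven by clock k, AT_e(x) the arrival time at end point e, T_k the period of clock k, and λ_k a weighting factor; the log-sum-exp smoothing of the max is a common analytic-placement device, not necessarily the one used here.

```latex
% Per-clock largest-arrival-time term added to the wirelength objective,
% smoothed with log-sum-exp so it stays differentiable (illustrative only):
\min_{x}\; \mathrm{WL}(x) + \sum_{k}\lambda_k \max_{e\in E_k}\mathrm{AT}_e(x)
\;\approx\;
\min_{x}\; \mathrm{WL}(x) + \sum_{k}\frac{\lambda_k}{\beta}
  \log \sum_{e\in E_k} e^{\beta\,\mathrm{AT}_e(x)}

% Worst-negative-slack variant: only end points that violate the period T_k contribute,
% so paths that already meet timing are not tightened unnecessarily:
\min_{x}\; \mathrm{WL}(x) + \sum_{k}\lambda_k \max_{e\in E_k}
  \bigl(\mathrm{AT}_e(x) - T_k\bigr)^{+}
```

The second form matches the abstract's motivation for the worst-negative-slack function: end points whose slack is already positive drop out of the term, so the optimizer does not shorten paths that have no violation.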

Anti-wrinkle and Whitening Effects of Essential Oil from Abies koreana (Abies koreana 유래 정유의 항주름 및 미백 효과)

  • Song, Byeong-Wook;Song, Min-Jeong;Park, Mi-Jin;Choi, Don-Ha;Lee, Sung-Suk;Kim, Myungkil;Hwang, Ki-Chul;Kim, Il-Kwon
    • Journal of Life Science / v.28 no.5 / pp.524-531 / 2018
  • Essential oil from Abies koreana E.H. Wilson has been developed; however, its efficacy has not yet been studied, particularly in terms of skin care research. The aim of this study was to investigate the effects of Abies koreana extracts (AKE) on melanogenesis and wrinkle formation in B16F10 melanoma cells (B16F10) and a human dermal fibroblast cell line (HDF). The essential oil was extracted by hydrodistillation and purified with anhydrous sodium sulfate. Cell viability at a 10⁻⁵-fold dilution was confirmed by cytotoxicity assays. Anti-melanogenic effects on B16F10 were evaluated using a tyrosinase inhibition assay and real-time PCR for the gene expression of tyrosinase and tyrosinase-related proteins 1 and 2 (TRP-1 and TRP-2). AKE reduced tyrosinase activity about 5-fold relative to the α-melanocyte-stimulating hormone (α-MSH)-induced group and by about 30% relative to the arbutin-treated group. The mRNA levels of the three melanin-related factors were each increased. To investigate the anti-wrinkle effects, a procollagen type I C-peptide (PIP) synthesis assay and Western blotting were performed. In the AKE-treated group, PIP was up-regulated, and the expression of collagen type 1 and matrix metalloproteinase (MMP)-1 was improved. Furthermore, AKE showed anti-wrinkle effects by increasing UVB-inhibited collagen type 1 expression and reducing UVB-induced MMP-1 production at 60 mJ/cm² of UVB radiation. Therefore, Abies koreana extract has potential as a safe and effective skin ingredient for whitening and anti-wrinkle applications.

Effects of the Variability of Individual Data on the Group Results; an Acupuncture Study Using fMRI (기능적 자기공명영상을 이용한 침 연구에 있어서 개체 별 다양성이 그룹분석에 미치는 영향 연구)

  • Bae, Seong-In;Jahng, Geon-Ho;Ryu, Chang-Woo;Lim, Sabina
    • Progress in Medical Physics / v.20 no.4 / pp.277-289 / 2009
  • Recently, functional MRI has been used to investigate the neurobiological mechanisms of acupuncture and the specificity of acupoints. In most of the previous studies, the group data tend to be regarded as more important than the individual data. This study was designed to investigate the effect of the variability of individual data on the group results. Functional MRI (fMRI) of the whole brain was performed in fifteen healthy subjects during placebo and acupuncture stimulation at the ST36 acupoint. After a 30-second rest period, the acupuncture needle was inserted and twisted at a rate of 2 Hz for 45 seconds, and then the needle was removed immediately. This process was repeated three times. Individual and group analyses were performed by voxel-based analyses using the SPM2 software. Visual inspection of the activation and deactivation maps from individual sessions showed large variability across the fifteen subjects, which means that the group data reflected the brain activation responses of only a few subjects. We suggest that the individual data should be presented to demonstrate the effect of acupuncture.
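
The paradigm described above (a 30-second rest followed by 45 seconds of 2 Hz needle stimulation, repeated three times) can be made concrete with a small sketch of its boxcar regressor. This is only an illustration: the repetition time (TR) of 3 seconds is an assumption, and in an SPM-style analysis the boxcar would additionally be convolved with a haemodynamic response function before entering each subject's GLM.

```python
import numpy as np

# Block design from the abstract: 30 s rest + 45 s stimulation, repeated 3 times.
TR = 3.0                       # seconds per volume (assumed, not stated in the abstract)
rest_s, stim_s, cycles = 30.0, 45.0, 3

n_vols = int(cycles * (rest_s + stim_s) / TR)
regressor = np.zeros(n_vols)   # 1 during stimulation blocks, 0 during rest

for c in range(cycles):
    start = int((c * (rest_s + stim_s) + rest_s) / TR)
    stop = int((c + 1) * (rest_s + stim_s) / TR)
    regressor[start:stop] = 1.0

print(regressor.astype(int))
```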


Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services / v.14 no.6 / pp.1-8 / 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining, which discovers useful patterns from such databases, is one of the most significant of these techniques. Frequent pattern mining extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all of the items in the database have the same nature. In real-world applications, however, each item can have its own characteristics, so a pattern mining technique that reflects those characteristics is required. In the single-support framework, where the natures of items are not considered, the threshold must be set very low to mine patterns containing rare items, which produces too many patterns containing meaningless items; conversely, if the threshold is set too high, such patterns cannot be mined at all. This dilemma is called the rare item problem. To solve it, early studies proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items together; because they are approximate, however, these methods cannot find all of the frequent patterns, including the rare frequent patterns. Hence, a pattern mining model with multiple minimum supports was proposed. In this model, each item has its own minimum support threshold, called the MIS (Minimum Item Support), which is calculated from the item's frequency in the database. By applying the MIS values, the multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant ones. Candidate patterns are generated during mining; in the single minimum support model, only the single threshold is compared against the frequencies of these candidates, so the characteristics of the items that constitute a candidate are not reflected and the rare item problem arises. To address this, the multiple minimum supports model uses the smallest MIS value among the items in a candidate pattern as that candidate's minimum support threshold, so that the candidate's characteristics are taken into account. To mine frequent patterns, including rare frequent patterns, efficiently under this concept, tree-based algorithms for the multiple minimum supports model sort the items in the tree in descending MIS order, in contrast to the single minimum support model, where items are ordered by descending frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and evaluate its performance against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for the MIS information, and that both compared algorithms scale well.
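
The candidate check described above can be sketched in a few lines. This is a minimal illustration of the multiple-minimum-supports idea on a toy transaction set, not the paper's algorithm; the item names, MIS values, and function names are assumptions made for the example.

```python
from itertools import combinations

# Toy transaction database and per-item MIS table (illustrative values);
# the rare item deliberately gets a lower MIS than the common ones.
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "caviar"},
    {"milk", "caviar"},
    {"bread"},
]
mis = {"bread": 0.5, "milk": 0.5, "caviar": 0.25}

def support(pattern, db):
    """Fraction of transactions that contain every item of the pattern."""
    return sum(pattern <= t for t in db) / len(db)

def is_frequent(pattern, db, mis):
    """Multiple-minimum-supports check: a candidate is frequent if its support
    reaches the smallest MIS among its items (a single-support miner would
    compare against one global threshold instead)."""
    return support(pattern, db) >= min(mis[item] for item in pattern)

# Items sorted in MIS-descending order, as in the tree-based algorithms above.
items = sorted(mis, key=mis.get, reverse=True)
for a, b in combinations(items, 2):
    candidate = {a, b}
    if is_frequent(candidate, transactions, mis):
        print(sorted(candidate), round(support(candidate, transactions), 2))
```

With a single global threshold of 0.5, the {bread, caviar} pattern (support 0.25) would be missed even though the rare item is of interest; taking min(MIS) per candidate keeps it, which is exactly the rare item problem the model addresses.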

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage requirements or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides a means of effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data; moreover, such strict schemas make it difficult to add nodes and redistribute the stored data when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module according to the analysis time and the type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL, and it demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is identified through an evaluation of MongoDB's log data insertion performance for various chunk sizes.
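
The module-level data flow described above can be illustrated with a short routing sketch for the log collector. This is an assumption-laden sketch rather than the paper's implementation: the connection URI, database and collection names, the set of log types handled in real time, and the MySQL hand-off stub are all hypothetical; only the pymongo calls shown (MongoClient, insert_one) are standard API.

```python
from pymongo import MongoClient

# Hypothetical connection and database layout for illustration only.
client = MongoClient("mongodb://localhost:27017")
log_db = client["bank_logs"]

# Hypothetical log types that the MySQL module would serve for real-time analysis;
# everything else is aggregated in MongoDB and later processed in parallel by the
# Hadoop-based analysis module.
REALTIME_TYPES = {"transaction_error", "login_failure"}

def route_to_mysql(entry):
    """Stand-in for the MySQL module used for real-time analysis (not implemented here)."""
    print("to MySQL:", entry)

def collect(entry):
    """Classify a log record by type and distribute it to the appropriate store."""
    log_type = entry.get("type", "unknown")
    if log_type in REALTIME_TYPES:
        route_to_mysql(entry)
    else:
        # One collection per log type; MongoDB's schema-free documents let
        # heterogeneous records coexist without a predefined schema.
        log_db[log_type].insert_one(entry)

collect({"type": "web_access", "branch": "A01", "msg": "page view", "ts": "2013-11-02T09:31:00"})
collect({"type": "transaction_error", "branch": "A01", "code": 504, "ts": "2013-11-02T09:31:05"})
```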