• Title/Summary/Keyword: Performance-approach goal

Search results: 248

Breast Mass Classification using the Fundamental Deep Learning Approach: To build the optimal model applying various methods that influence the performance of CNN

  • Lee, Jin;Choi, Kwang Jong;Kim, Seong Jung;Oh, Ji Eun;Yoon, Woong Bae;Kim, Kwang Gi
    • Journal of Multimedia Information System
    • /
    • v.3 no.3
    • /
    • pp.97-102
    • /
    • 2016
  • Deep learning enables machines to have perception and can potentially outperform humans in the medical field. It can save a great deal of time and reduce human error by detecting certain patterns from medical images without being explicitly programmed for each pattern. The main goal of this paper is to build the optimal model for breast mass classification by applying various methods that influence the performance of a Convolutional Neural Network (CNN). Google's newly developed software library TensorFlow was used to build the CNN, and the mammogram dataset used in this study was obtained from 340 breast cancer cases. The best classification performance we achieved was an accuracy of 0.887, sensitivity of 0.903, and specificity of 0.869 for normal tissue versus malignant mass classification with augmented data, more convolutional filters, and the ADAM optimizer. A limitation of this method, however, was that it only considered malignant masses, which are relatively easier to classify than benign masses. Therefore, further studies are required in order to properly classify any given data for medical use.
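
As a rough illustration of the kind of pipeline the abstract describes (data augmentation, a small CNN, the ADAM optimizer for binary normal-vs-malignant classification), a minimal sketch might look like the following. It is written against the modern tf.keras API rather than the 2016-era TensorFlow used in the paper, and the input shape, filter counts, and augmentation settings are assumptions for illustration, not the authors' configuration.

```python
import tensorflow as tf

# Illustrative binary classifier: normal tissue vs. malignant mass.
# Input size, layer widths, and augmentation settings are assumed values.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),        # grayscale mammogram patch (assumed size)
    augment,
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # "more convolutional filters"
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # malignant vs. normal
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # ADAM optimizer, as in the abstract
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # datasets not shown here
```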

Software Design of Packet Analyzer based on Byte-Filtered Packet Inspection Mechanism for UW-ASN

  • Muminov, Sardorbek;Yun, Nam-Yeol;Park, Soo-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.12
    • /
    • pp.1572-1582
    • /
    • 2011
  • The rapid growth of UnderWater Acoustic Sensor Networks (UW-ASNs) has led researchers to enhance underwater MAC protocols against the limitations of the underwater environment. We propose a customized, robust, real-time packet inspection mechanism that addresses the problem of locating data packet loss and analyzing network performance quality in UW-ASNs, and we describe our experiences using this approach. The goal of this work is to provide a framework for assessing real-time network performance quality. We propose a customized and adaptive mechanism to detect, monitor, and analyze data packets according to the MAC protocol standards in UW-ASNs. The packet-analyzing method and software we propose are easy to implement, maintain, update, and enhance. We take as input a stream of real data packets from a sniffer node in capture mode and perform a full analysis. Our aim was to develop a software and hardware tool with the same capabilities that almost all terrestrial network packet sniffers have. Experimental results confirm that achieving maximum performance requires the most adaptive algorithm. In this paper, we present the proposed packet analyzer, which can be used effectively for implementing underwater MAC protocols.
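
For orientation, byte-filtered inspection of captured frames can be pictured as matching raw packets against byte patterns at fixed offsets. The sketch below is a generic stand-in: the offsets, frame-type codes, and address field layout are hypothetical, not those of the paper's UW-ASN MAC formats.

```python
from dataclasses import dataclass

@dataclass
class ByteFilter:
    """Match a raw packet if the bytes at `offset` equal `pattern`."""
    offset: int
    pattern: bytes

    def matches(self, packet: bytes) -> bool:
        end = self.offset + len(self.pattern)
        return len(packet) >= end and packet[self.offset:end] == self.pattern

def inspect(packets, filters):
    """Keep only packets that satisfy every byte filter."""
    return [p for p in packets if all(f.matches(p) for f in filters)]

# Hypothetical example: keep frames whose first byte marks a DATA frame (0x01)
# and whose destination field (assumed at offset 2) is node 0x0A.
captured = [bytes([0x01, 0x00, 0x0A, 0x05]), bytes([0x02, 0x00, 0x0B, 0x06])]
data_to_node_10 = inspect(captured, [ByteFilter(0, b"\x01"), ByteFilter(2, b"\x0a")])
print(len(data_to_node_10))  # -> 1
```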

Over-Strength, Ductility and Response Modification Factor of Small-Size Reinforced Concrete Moment Frame Buildings (소규모 철근콘크리트 모멘트골조 건축물의 초과강도, 연성도 및 반응수정계수)

  • Kim, Taewan;Chu, Yurim;Park, Hong-Gun;Shin, Yeong Soo
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.145-153
    • /
    • 2016
  • Small-size buildings in Korea are not designed by professional structural engineers, so their seismic performance cannot be estimated accurately because their member sizes and reinforcement may be over- or under-designed. Prescriptive design criteria for small-size buildings exist, but they also yield over-designed structural members since structural analysis is not incorporated, so it is necessary to revise the prescriptive criteria. The goal of this study was to provide information for that revision, namely the seismic performance and capacity of small-size reinforced concrete moment frame buildings. For the study, the state of existing small-size reinforced concrete buildings, such as member size and reinforcement, was identified by investigating their structural drawings. Then the over-strength, ductility, and response modification factor of the small-size reinforced concrete moment frame buildings were estimated by an analytical approach, along with a seismic performance check. The results showed that they possess moderate over-strength and ductility and may use a slightly increased response modification factor.
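
For context, the response modification factor is commonly decomposed into an over-strength term and a ductility-reduction term; one widely used form is shown below. The notation is generic and an assumption for illustration, not necessarily the exact formulation used in the paper.

```latex
% Common decomposition of the response modification factor (ATC-19 style).
R \;=\; R_\mu \,\Omega, \qquad
\Omega = \frac{V_y}{V_d} \ (\text{over-strength}), \qquad
R_\mu = \frac{V_e}{V_y} \ (\text{ductility reduction}),
```

where \(V_d\) is the design base shear, \(V_y\) the base shear at yield obtained from pushover analysis, and \(V_e\) the elastic base-shear demand.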

Goal-driven Optimization Strategy for Energy and Performance-Aware Data Centers for Cloud-Based Wind Farm CMS

  • Elijorde, Frank;Kim, Sungho;Lee, Jaewan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.3
    • /
    • pp.1362-1376
    • /
    • 2016
  • A cloud computing system can be characterized by the provision of resources in the form of services to third parties on a leased, usage-based basis, as well as by the private infrastructures maintained and utilized by individual organizations. To attain the desired reliability and energy efficiency in a cloud data center, trade-offs need to be made between system performance and power consumption. Resolving these conflicting goals is often the major challenge encountered in the design of optimization strategies for cloud data centers. The work presented in this paper is directed towards the development of an Energy-efficient and Performance-aware Cloud System equipped with strategies for dynamically switching the optimization approach. Moreover, a platform is provided for the deployment of a Wind Farm CMS (Condition Monitoring System) that allows ubiquitous access. Owing to the geographically dispersed nature of wind farms, the CMS can take advantage of the cloud's highly scalable architecture to maintain reliable and efficient operation capable of handling multiple simultaneous users and huge amounts of monitoring data. Using the proposed cloud architecture, a Wind Farm CMS is deployed on a virtual platform to monitor and evaluate the aging condition of the turbines' major components in concurrent yet isolated working environments.
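
The "dynamic switching of the optimization approach" can be pictured as a goal-driven rule that chooses between an energy-saving and a performance-first policy from current conditions. The sketch below is only an illustration of that idea; the thresholds, policy names, and inputs are hypothetical, not the paper's strategy.

```python
def choose_policy(cluster_util, sla_violations, util_low=0.3, util_high=0.8):
    """Pick an optimization goal for the next control interval.

    cluster_util: mean CPU utilization of active hosts (0..1)
    sla_violations: SLA violations observed in the last interval
    Thresholds are illustrative, not tuned values from the paper.
    """
    if sla_violations > 0 or cluster_util > util_high:
        return "performance"   # spread VMs / add hosts to protect QoS
    if cluster_util < util_low:
        return "energy"        # consolidate VMs and power down idle hosts
    return "balanced"          # keep the current placement

print(choose_policy(cluster_util=0.22, sla_violations=0))  # -> energy
```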

Identifying Significant Components of Structures for Seismic Performance Using FOSM Method (FOSM 방법을 이용한 내진성능 중요부재 판별법)

  • Lee, Tae-Hyung;Mosalam, Khalid
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.13 no.4
    • /
    • pp.37-45
    • /
    • 2009
  • The identification of significant structural components under seismic loading through a probabilistic approach is of interest to many structural engineers. The First-Order Second-Moment (FOSM) method can be used to achieve this goal by estimating the uncertainty in the seismic demand of a structural system induced by the capacity uncertainties of each structural component. Significant structural components are those to which the seismic demand of the structure is more sensitive than to the others. The developed procedure, demonstrated on a ductile reinforced concrete frame, is shown to be computationally efficient and robust in identifying significant structural components.
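
The FOSM idea can be summarized as a first-order propagation of the component-capacity uncertainties into the variance of the seismic demand. The notation below is generic (uncorrelated capacities assumed) and is not copied from the paper:

```latex
% First-order approximation of the demand variance, with component
% capacities c_i treated as uncorrelated random variables.
\sigma_D^2 \;\approx\; \sum_i
  \left( \frac{\partial D}{\partial c_i} \right)^{\!2} \sigma_{c_i}^2 ,
\qquad
I_i \;=\; \frac{\left( \partial D / \partial c_i \right)^2 \sigma_{c_i}^2}{\sigma_D^2},
```

where the importance measure \(I_i\) ranks component \(i\); the components with the largest \(I_i\) are the "significant" ones.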

Reversible Data Hiding Using a Piecewise Autoregressive Predictor Based on Two-stage Embedding

  • Lee, Byeong Yong;Hwang, Hee Joon;Kim, Hyoung Joong
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.4
    • /
    • pp.974-986
    • /
    • 2016
  • Reversible image watermarking, a type of digital data hiding, can recover the original image and extract the hidden message exactly. A number of reversible algorithms have been proposed to achieve a high embedding capacity and low distortion. While many existing algorithms perform well at small embedding capacities, the main goal of this paper is better performance at a larger embedding capacity with lower distortion. This paper therefore proposes a reversible data hiding algorithm that uses a novel piecewise 2D auto-regression (P2AR) predictor based on a rhombus-embedding scheme. In addition, a minimum description length (MDL) approach is applied to remove outlier pixels from the training set so that the effect of the multiple linear regression can be maximized. The experimental results demonstrate that the performance of the proposed method is superior to that of previous methods.
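
For orientation, the rhombus-embedding scheme that the P2AR predictor builds on predicts a pixel from its four cross-shaped neighbours and hides a bit in the expanded prediction error. The sketch below shows only that baseline predictor with prediction-error expansion, not the paper's piecewise autoregressive refinement, the two-stage embedding, or the MDL outlier removal; overflow handling is also omitted.

```python
import numpy as np

def rhombus_predict(img, i, j):
    """Average of the four cross neighbours (the 'rhombus' context)."""
    return (int(img[i - 1, j]) + int(img[i + 1, j]) +
            int(img[i, j - 1]) + int(img[i, j + 1])) // 4

def embed_bit(pixel, prediction, bit):
    """Prediction-error expansion: watermarked pixel = prediction + 2e + bit."""
    e = int(pixel) - prediction
    return prediction + 2 * e + bit

# Toy 3x3 block; the centre pixel carries one bit.
img = np.array([[100, 102, 101],
                [ 99, 103, 104],
                [101, 100, 102]], dtype=np.int32)
pred = rhombus_predict(img, 1, 1)
print(pred, embed_bit(img[1, 1], pred, 1))  # -> 101 106 (decoder recovers 103 and bit 1)
```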

Performance Analysis of K-set Flash Memory Management (K-집합 플래시 메모리 관리 성능 분석)

  • Park Je-ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.5 no.5
    • /
    • pp.389-394
    • /
    • 2004
  • In this paper, in accordance with the characteristics of flash memory, a memory recycling method is proposed that decreases the required cost while preventing performance degradation. To optimize the cost, the new approach partitions the search space of flash memory segments into K segment groups. In addition, a memory space allocation method is proposed to satisfy the goal of even wearing over the whole memory space. The optimized configuration of the proposed method is determined through experiments. Simulations show that the newly proposed methods outperform existing approaches in terms of cost and performance. Furthermore, the experimental results demonstrate that the memory allocation method greatly affects even wearing.
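
The abstract's two ingredients, partitioning segments into K groups so that reclamation searches only the most promising group, and allocating writes so that erasures spread evenly, can be sketched as follows. The grouping key (valid-page count) and the least-erased allocation heuristic are assumptions for illustration, not the paper's exact policy.

```python
def partition_segments(segments, k):
    """Split segments into K groups by utilization so that victim search
    starts in the group most likely to yield cheap (mostly invalid) segments.
    `segments` maps segment id -> (valid_pages, erase_count)."""
    ordered = sorted(segments, key=lambda s: segments[s][0])  # fewest valid pages first
    size = max(1, -(-len(ordered) // k))                      # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def allocate_segment(free_segments, erase_counts):
    """Wear-leveling allocation: direct the next write to the least-erased free segment."""
    return min(free_segments, key=lambda s: erase_counts[s])

segments = {0: (10, 4), 1: (2, 9), 2: (7, 1), 3: (0, 3)}
print(partition_segments(segments, 2))                   # -> [[3, 1], [2, 0]]
print(allocate_segment([0, 2, 3], {0: 4, 2: 1, 3: 3}))   # -> 2
```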

New execution model for CAPE using multiple threads on multicore clusters

  • Do, Xuan Huyen;Ha, Viet Hai;Tran, Van Long;Renault, Eric
    • ETRI Journal
    • /
    • v.43 no.5
    • /
    • pp.825-834
    • /
    • 2021
  • Owing to its simplicity and user-friendliness, OpenMP has become the standard model for programming on shared-memory architectures. Checkpointing-aided parallel execution (CAPE) is an approach that utilizes the discontinuous incremental checkpointing technique (DICKPT) to translate and execute OpenMP programs on distributed-memory architectures automatically. Currently, CAPE implements the OpenMP execution model by utilizing the DICKPT to distribute parallel jobs and their data to slave machines, and then collects the results after executing these distributed jobs. Although this model has been proven to be effective in terms of performance and compatibility with OpenMP on distributed-memory systems, it cannot fully exploit the capabilities of multicore processors. This paper presents a novel execution model for CAPE that utilizes two levels of parallelism. In the proposed model, we add another level of parallelism in the form of multithreaded processes on slave machines with the goal of better exploiting their multicore CPUs. Initial experimental results presented near the end of this paper demonstrate that this model provides significantly enhanced CAPE performance.
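
The two-level structure (one worker process per slave machine, several threads inside each process) can be pictured with standard library primitives. The sketch below only illustrates the shape of that decomposition with an arbitrary sum-of-squares workload; it is not CAPE's DICKPT-based implementation, and CPython threads here illustrate the structure rather than real per-core speedup.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def thread_work(chunk):
    # Second level: threads inside one process split that node's share of the data.
    return sum(x * x for x in chunk)

def process_work(block, threads_per_process=4):
    # First level: one process stands in for one slave machine.
    chunks = [block[i::threads_per_process] for i in range(threads_per_process)]
    with ThreadPoolExecutor(max_workers=threads_per_process) as pool:
        return sum(pool.map(thread_work, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    blocks = [data[i::4] for i in range(4)]            # 4 simulated "slave machines"
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(sum(pool.map(process_work, blocks)))     # matches a serial sum of squares
```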

APPLICATION OF PERFORMANCE-BASED DESIGN IN FIRE PROTECTION ENGINEERING

  • Cha, David S.
    • Proceedings of the Korea Institute of Fire Science and Engineering Conference
    • /
    • 1997.11a
    • /
    • pp.423-438
    • /
    • 1997
  • Today's building and fire prevention codes are mostly prescriptive. Prescriptive codes are based on major fires in earlier years that created a need for specific building provisions. These codes provide a minimum level of safety. As the general and engineering uses of computers have increased over the years, so has their use in fire protection engineering. This has allowed fire protection engineers to develop alternative approaches to solving today's fire protection problems and to evaluating whether a specific fire safety goal is met. A performance-based approach to building and fire codes involves the following: 1) identify specific goals, such as safely getting out of the building in 10 minutes; 2) obtain conceptual approval from the authorities; 3) define the performance level; 4) develop design solutions and identify tools, such as fire tests, models, or methods, to demonstrate that a design will meet the fire protection objective; 5) test the solutions; 6) present the test method and results to the authorities. Some people in the fire protection community consider this to be nothing more than an intellectual exercise, others view it as a way to reduce expenses on large projects, and still others view it as a way to refine the design process so that fire protection systems better protect against fire hazards. This paper focuses on the application of these tools, specifically computer fire models, to actual cases such as the design of a smoke control system, heat transfer analysis, and the egress of building occupants during potential fires.
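
A quantitative goal of the kind named in step 1 ("safely getting out of the building in 10 minutes") is typically checked by comparing the required egress time against the time available before conditions become untenable. The sketch below is a generic RSET-versus-ASET comparison with made-up inputs and an assumed safety factor, not a calculation from the paper.

```python
def meets_egress_goal(pre_movement_min, travel_min, aset_min, safety_factor=1.5):
    """Performance check: required safe egress time (RSET), with a safety factor,
    must not exceed the available safe egress time (ASET) predicted by a fire model.
    All inputs are in minutes; the values and the factor are illustrative."""
    rset = pre_movement_min + travel_min
    return rset * safety_factor <= aset_min

# Hypothetical office floor: 3 min pre-movement, 4 min travel, 12 min until
# untenable smoke conditions predicted by a fire model.
print(meets_egress_goal(3.0, 4.0, 12.0))  # -> True (10.5 min <= 12 min)
```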

Optimizing Performance and Energy Efficiency in Cloud Data Centers Through SLA-Aware Consolidation of Virtualized Resources (클라우드 데이터 센터에서 가상화된 자원의 SLA-Aware 조정을 통한 성능 및 에너지 효율의 최적화)

  • Elijorde, Frank I.;Lee, Jaewan
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.1-10
    • /
    • 2014
  • The cloud computing paradigm introduced pay-per-use models in which IT services can be created and scaled on demand. However, service providers are still concerned about the constraints imposed by their physical infrastructures. In order to keep the required QoS and achieve the goal of upholding the SLA, virtualized resources must be efficiently consolidated to maximize system throughput while keeping energy consumption at a minimum. Using an artificial neural network (ANN), we propose a predictive SLA-aware approach for consolidating virtualized resources in a cloud environment. To maintain QoS and establish an optimal trade-off between performance and energy efficiency, the server's utilization threshold dynamically adapts to the physical machine's resource consumption. Furthermore, resource-intensive VMs are prevented from being under-provisioned by assigning them to hosts that are both capable and reputable. To verify the performance of our proposed approach, we compare it with non-optimized conventional approaches as well as with other previously proposed techniques in a heterogeneous cloud environment setup.
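
As a hedged sketch of the dynamic utilization threshold idea: a predictor forecasts host utilization and hosts expected to breach the threshold are flagged for VM migration. The moving-average forecast below is a trivial stand-in for the paper's ANN predictor, and the window, base threshold, and volatility adjustment are hypothetical.

```python
from collections import deque

class HostMonitor:
    """Tracks recent CPU utilization of one host and flags likely SLA risk.
    The moving-average forecast is a stand-in for an ANN predictor."""

    def __init__(self, window=5, base_threshold=0.8):
        self.history = deque(maxlen=window)
        self.base_threshold = base_threshold

    def observe(self, util):
        self.history.append(util)

    def predicted_util(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

    def threshold(self):
        # Dynamic threshold: tighten it as utilization becomes more volatile.
        if len(self.history) < 2:
            return self.base_threshold
        mean = self.predicted_util()
        var = sum((u - mean) ** 2 for u in self.history) / len(self.history)
        return max(0.5, self.base_threshold - var ** 0.5)

    def needs_migration(self):
        return self.predicted_util() > self.threshold()

m = HostMonitor()
for u in (0.62, 0.71, 0.85, 0.90, 0.93):
    m.observe(u)
print(round(m.predicted_util(), 2), round(m.threshold(), 2), m.needs_migration())
```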