• Title/Summary/Keyword: traditional metrics

Search Results: 81

Enhanced Privacy Preservation of Cloud Data by using ElGamal Elliptic Curve (EGEC) Homomorphic Encryption Scheme

  • Vedaraj, M.;Ezhumalai, P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4522-4536 / 2020
  • Cloud computing is one of the fastest-emerging technologies in the IT industry, allowing data to be stored in and retrieved from remote servers. Its most frequent problems are security and privacy preservation of data: to improve security, secret information must be protected from various forms of illegal access. Numerous traditional cryptographic algorithms have been used to preserve the privacy of cloud data, but privacy protection remains weak because of their limited security. This article therefore proposes an ElGamal Elliptic Curve (EGEC) homomorphic encryption scheme for safeguarding the confidentiality of data stored in a cloud. Data owners encipher their input data with the proposed EGEC encryption scheme, and homomorphic operations are computed directly on the encrypted data. Whenever a user sends a data access request to the cloud data storage, the Cloud Service Provider (CSP) validates the user's access policy and returns the encrypted data, which EGEC decryption then restores to the original input. The proposed scheme is evaluated using performance metrics such as execution time, encryption time, decryption time, memory usage, encryption throughput, and decryption throughput, and its efficacy is demonstrated through a comparative study against conventional approaches.
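The additive homomorphism that makes EC-ElGamal attractive here can be shown in a few lines. The following is a minimal, illustrative sketch on a toy textbook curve (y² = x³ + 2x + 2 mod 17, a common teaching example), not the authors' implementation; the curve, parameters, and helper names are assumptions for demonstration, and a real system would use a standardized curve with constant-time arithmetic.

```python
# Toy additively homomorphic EC-ElGamal: Enc(m) = (k*G, m*G + k*Q), Q = d*G.
# Illustrative only; parameters are far too small for real security.
import random

P, A = 17, 2                # curve y^2 = x^3 + 2x + 2 over GF(17)
G = (5, 1)                  # generator of a subgroup of order 19
ORDER = 19

def ec_add(p1, p2):
    """Point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

def neg(pt):
    return None if pt is None else (pt[0], (-pt[1]) % P)

d = random.randrange(1, ORDER)      # secret key
Q = ec_mul(d, G)                    # public key

def encrypt(m):
    k = random.randrange(1, ORDER)
    return ec_mul(k, G), ec_add(ec_mul(m, G), ec_mul(k, Q))

def decrypt(ct):
    c1, c2 = ct
    M = ec_add(c2, neg(ec_mul(d, c1)))   # recovers m*G
    for m in range(ORDER):               # brute-force discrete log (toy sizes only)
        if ec_mul(m, G) == M:
            return m

# Homomorphic property: adding ciphertexts component-wise adds the plaintexts.
ct1, ct2 = encrypt(3), encrypt(4)
ct_sum = (ec_add(ct1[0], ct2[0]), ec_add(ct1[1], ct2[1]))
assert decrypt(ct_sum) == 7
```

Because the plaintext is encoded as m*G, decryption ends with a small discrete logarithm; this is why additively homomorphic EC-ElGamal is practical only when message values stay within a modest range.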

Deep Interpretable Learning for a Rapid Response System (긴급대응 시스템을 위한 심층 해석 가능 학습)

  • Nguyen, Trong-Nghia;Vo, Thanh-Hung;Kho, Bo-Gun;Lee, Guee-Sang;Yang, Hyung-Jeong;Kim, Soo-Hyung
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.805-807 / 2021
  • In-hospital cardiac arrest is a significant problem for medical systems. Although the traditional early warning systems have been widely applied, they still contain many drawbacks, such as the high false warning rate and low sensitivity. This paper proposed a strategy that involves a deep learning approach based on a novel interpretable deep tabular data learning architecture, named TabNet, for the Rapid Response System. This study has been processed and validated on a dataset collected from two hospitals of Chonnam National University, Korea, in over 10 years. The learning metrics used for the experiment are the area under the receiver operating characteristic curve score (AUROC) and the area under the precision-recall curve score (AUPRC). The experiment on a large real-time dataset shows that our method improves compared to other machine learning-based approaches.
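For readers unfamiliar with the two metrics named in the abstract, here is a hedged sketch of how AUROC and AUPRC are typically computed with scikit-learn; the label and score arrays below are synthetic stand-ins, not the hospital dataset.

```python
# Computing AUROC and AUPRC on synthetic risk predictions (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                 # 0 = no event, 1 = cardiac arrest
y_score = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)  # noisy risk scores

auroc = roc_auc_score(y_true, y_score)
auprc = average_precision_score(y_true, y_score)       # common estimator of AUPRC
print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}")
```

AUPRC is generally the more informative of the two when events such as cardiac arrest are rare, since it is insensitive to the large number of true negatives.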

DISTANCE MEASUREMENT IN THE AEC/FM INDUSTRY: AN OVERVIEW OF TECHNOLOGIES

  • Jasmine Hines;Abbas Rashidi;Ioannis Brilakis
    • International conference on construction engineering and project management / 2013.01a / pp.616-623 / 2013
  • One of the oldest and most common engineering problems is measuring the dimensions of objects and the distances between locations. In AEC/FM, related uses range from large-scale applications such as measuring distances between cities to small-scale applications such as measuring the depth of a crack or the width of a welded joint. Within the last few years, advances in applying new technologies have prompted the development of new measuring devices such as ultrasound and laser-based measurers. Because of wide variation in type, associated cost, and level of accuracy, selecting an optimal measuring technology is challenging for construction engineers and facility managers. To tackle this issue, we present an overview of various measuring technologies adopted by experts in the area of AEC/FM. As the next step, to evaluate the performance of these technologies, we select one indoor and one outdoor case and measure several dimensions using six categories of technologies: tapes, total stations, laser measurers, ultrasound devices, laser scanners, and image-based technologies. We then evaluate the results against metrics such as accuracy, ease of use, operation time, and associated cost; compare these results; and recommend optimal technologies for specific applications. The results also revealed that in most applications, computer vision-based technologies outperform traditional devices in terms of ease of use, associated costs, and accuracy.


Deep Learning Framework with Convolutional Sequential Semantic Embedding for Mining High-Utility Itemsets and Top-N Recommendations

  • Siva S;Shilpa Chaudhari
    • Journal of information and communication convergence engineering / v.22 no.1 / pp.44-55 / 2024
  • High-utility itemset mining (HUIM) is a dominant technology that enables enterprises to make real-time decisions in areas including supply chain management, customer segmentation, and business analytics. However, classical support-value-driven Apriori solutions are limited and unable to meet real-time enterprise demands, especially for large amounts of input data. This study introduces a new model for top-N high-utility itemset mining in real-time enterprise applications. Unlike traditional Apriori-based solutions, the proposed convolutional sequential embedding metrics-driven, cosine-similarity-based multilayer perceptron learning model leverages global and contextual features, including semantic attributes, for enhanced top-N recommendations over sequential transactions. MATLAB-based simulations of the model on diverse datasets demonstrated a precision of 0.5632, a mean absolute error (MAE) of 0.7610, a hit rate (HR)@K of 0.5720, and a normalized discounted cumulative gain (NDCG)@K of 0.4268; the average MAE across datasets and latent dimensions was 0.608. Additionally, the model achieved cumulative accuracy of 97.94% and precision of 97.04%, surpassing existing state-of-the-art models. These results affirm the robustness and effectiveness of the proposed model in real-time enterprise scenarios.
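The ranking-and-evaluation loop the abstract describes can be sketched concisely. The following is an illustrative toy, not the paper's model: item vectors are random stand-ins for its convolutional sequential semantic embeddings, and the ground-truth set is hypothetical; only the cosine-similarity ranking and the HR@K / NDCG@K formulas are standard.

```python
# Cosine-similarity top-N ranking plus HR@K and NDCG@K scoring (toy data).
import numpy as np

rng = np.random.default_rng(1)
item_emb = rng.normal(size=(500, 32))   # 500 items, 32-dim embeddings (stand-ins)
user_vec = rng.normal(size=32)          # aggregated user/session embedding

# Rank items by cosine similarity to the user vector.
sims = item_emb @ user_vec / (np.linalg.norm(item_emb, axis=1) * np.linalg.norm(user_vec))
top_n = np.argsort(-sims)[:10]

def hr_at_k(ranked, relevant, k=10):
    """Hit rate: 1 if any relevant item appears in the top k."""
    return float(any(i in relevant for i in ranked[:k]))

def ndcg_at_k(ranked, relevant, k=10):
    """Binary-relevance NDCG@K."""
    dcg = sum(1.0 / np.log2(r + 2) for r, i in enumerate(ranked[:k]) if i in relevant)
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(len(relevant), k)))
    return dcg / idcg if idcg else 0.0

relevant_items = {int(top_n[3]), 42}    # hypothetical ground truth
print(hr_at_k(top_n, relevant_items), ndcg_at_k(top_n, relevant_items))
```

NDCG@K rewards placing relevant items near the top of the list, which is why it complements the rank-agnostic hit rate in the paper's evaluation.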

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies / s.37 / pp.221-245 / 2014
  • With the success of the world's first fully 3D computer-animated feature film, "Toy Story" (1995), industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations were produced for TV, and high-quality 3D computer animation games became common. Active technological development has followed to reduce the large production time and cost of 3D animation, in line with expanding industrial demand in this field; compared with the traditional approach of hand-drawn animation, 3D computer animation can be produced far more efficiently. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation were conducted with the aim of improving the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, is relatively less sophisticated but provides applications for rapid real-time motion recognition. The results of the comparative analysis presented in this paper are intended as baseline data for selecting the motion capture or keyframe animation method that produces facial expression animation most efficiently, given the production time and cost, the degree of sophistication, and the media in use.

Backpack- and UAV-based Laser Scanning Application for Estimating Overstory and Understory Biomass of Forest Stands (임분 상하층의 바이오매스 조사를 위한 백팩형 라이다와 드론 라이다의 적용성 평가)

  • Heejae Lee;Seunguk Kim;Hyeyeong Choe
    • Journal of Korean Society of Forest Science / v.112 no.3 / pp.363-373 / 2023
  • Forest biomass surveys are regularly conducted to assess and manage forests as carbon sinks. LiDAR (Light Detection and Ranging), a remote sensing technology, has attracted considerable attention, as it allows for objective acquisition of forest structure information with minimal labor. In this study, we propose a method for estimating overstory and understory biomass in forest stands using backpack laser scanning (BPLS) and unmanned aerial vehicle laser scanning (UAV-LS), and assessed its accuracy. For overstory biomass, we analyzed the accuracy of BPLS and UAV-LS in estimating diameter at breast height (DBH) and tree height. For understory biomass, we developed a multiple regression model for estimating understory biomass using the best combination of vertical structure metrics extracted from the BPLS data. The results indicated that BPLS provided accurate estimations of DBH (R² = 0.92), but underestimated tree height (R² = 0.63, bias = -5.56 m), whereas UAV-LS showed strong performance in estimating tree height (R² = 0.91). For understory biomass, metrics representing the mean height of the points and the point density of the fourth layer were selected to develop the model. The cross-validation result of the understory biomass estimation model showed a coefficient of determination of 0.68. The study findings suggest that the proposed overstory and understory biomass survey methods using BPLS and UAV-LS can effectively replace traditional biomass survey methods.
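The modeling step described for understory biomass is a standard cross-validated multiple regression on LiDAR-derived predictors. Below is a minimal sketch under assumed synthetic data: the predictor names mirror the metrics the abstract mentions (mean point height, point density of the fourth layer), but all values and coefficients are fabricated for illustration.

```python
# Multiple linear regression of understory biomass on vertical-structure
# metrics, evaluated with 5-fold cross-validation (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_plots = 60
mean_height = rng.uniform(0.2, 2.0, n_plots)     # mean height of points (m), assumed
layer4_density = rng.uniform(0.0, 0.3, n_plots)  # point density of 4th layer, assumed
X = np.column_stack([mean_height, layer4_density])
y = 1.5 * mean_height + 8.0 * layer4_density + rng.normal(0, 0.3, n_plots)  # fake biomass

model = LinearRegression()
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2_scores.mean())
```

Reporting the cross-validated coefficient of determination, as the paper does (R² = 0.68), guards against the optimism of an in-sample fit on a small number of plots.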

Research on the Development of Distance Metrics for the Clustering of Vessel Trajectories in Korean Coastal Waters (국내 연안 해역 선박 항적 군집화를 위한 항적 간 거리 척도 개발 연구)

  • Seungju Lee;Wonhee Lee;Ji Hong Min;Deuk Jae Cho;Hyunwoo Park
    • Journal of Navigation and Port Research / v.47 no.6 / pp.367-375 / 2023
  • This study developed a new distance metric for vessel trajectories, applicable to marine traffic control services in Korean coastal waters. The proposed metric is a weighted sum of the traditional Hausdorff distance, which measures the similarity between spatiotemporal data, and the differences in average Speed Over Ground (SOG) and in the variance of Course Over Ground (COG) between two trajectories. To validate the effectiveness of the new metric, a comparative analysis was conducted on actual Automatic Identification System (AIS) trajectory data in conjunction with an agglomerative clustering algorithm. Data visualizations confirmed that trajectory clustering with the new metric reflects geographical distances and the distribution of vessel behavioral characteristics more accurately than conventional metrics such as the Hausdorff distance and the Dynamic Time Warping distance. Quantitatively, based on the Davies-Bouldin index, the clustering results were superior or comparable, and the metric demonstrated exceptional computational efficiency in distance calculation.
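The general form of such a metric is easy to reconstruct. The sketch below is a hedged reading of the abstract, not the paper's exact formulation: the weights w1-w3, the data layout, and the synthetic trajectories are assumptions, while the symmetric Hausdorff distance comes from SciPy.

```python
# Weighted combination of Hausdorff distance with differences in mean SOG
# and COG variance between two vessel trajectories (illustrative form).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def trajectory_distance(traj_a, traj_b, w1=1.0, w2=0.5, w3=0.5):
    """traj_*: dict with 'xy' (N,2 positions), 'sog' (knots), 'cog' (degrees)."""
    # Symmetric Hausdorff distance over the position sequences.
    h = max(directed_hausdorff(traj_a["xy"], traj_b["xy"])[0],
            directed_hausdorff(traj_b["xy"], traj_a["xy"])[0])
    d_sog = abs(traj_a["sog"].mean() - traj_b["sog"].mean())   # mean-speed difference
    d_cog = abs(traj_a["cog"].var() - traj_b["cog"].var())     # course-variance difference
    return w1 * h + w2 * d_sog + w3 * d_cog

rng = np.random.default_rng(3)
a = {"xy": rng.random((50, 2)), "sog": rng.uniform(8, 12, 50), "cog": rng.uniform(40, 60, 50)}
b = {"xy": rng.random((50, 2)) + 0.1, "sog": rng.uniform(9, 13, 50), "cog": rng.uniform(45, 65, 50)}
print(trajectory_distance(a, b))
```

A pairwise matrix of such distances can be fed directly to an agglomerative clustering implementation that accepts precomputed distances, matching the evaluation setup the abstract describes.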

Performance Enhancement and Evaluation of AES Cryptography using OpenCL on Embedded GPGPU (OpenCL을 이용한 임베디드 GPGPU환경에서의 AES 암호화 성능 개선과 평가)

  • Lee, Minhak;Kang, Woochul
    • KIISE Transactions on Computing Practices / v.22 no.7 / pp.303-309 / 2016
  • Recently, an increasing number of embedded processors, such as the ARM Mali, have begun to support GPGPU programming frameworks such as OpenCL. Thus, GPGPU technologies that have been used in PC and server environments are beginning to be applied to embedded systems. However, many embedded systems have architectural characteristics that differ from traditional PCs, and low power consumption and real-time performance are also important performance metrics in these systems. In this paper, we implement a parallel AES cryptographic algorithm for a modern embedded GPU using OpenCL, a standard parallel computing framework, and compare its performance against various baselines. Experimental results show that the parallel GPU AES implementation can reduce the response time to about 1/150 and the energy consumption to approximately 1/290 of the OpenMP implementation when 1000 KB of input data is applied. Furthermore, an additional 100% performance improvement of the parallel AES algorithm was achieved by exploiting the characteristics of embedded GPUs, such as eliminating data copies between GPU and host memory. Our results also demonstrate that larger input sizes yield higher performance improvements.
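The throughput metric the paper reports is straightforward to reproduce on any platform. Below is a simple CPU-side baseline of the kind such GPU results are compared against, not the paper's OpenCL implementation; it assumes the pycryptodome package and measures bytes encrypted per second for a ~1000 KB input, as in the experiment.

```python
# CPU baseline: AES-CTR encryption throughput for a ~1000 KB buffer.
# Illustrative only; the paper's measurements use OpenCL on an embedded GPU.
import os, time
from Crypto.Cipher import AES   # pycryptodome

key = os.urandom(16)
data = os.urandom(1000 * 1024)                  # ~1000 KB input

cipher = AES.new(key, AES.MODE_CTR, nonce=os.urandom(8))
start = time.perf_counter()
ct = cipher.encrypt(data)
elapsed = time.perf_counter() - start
print(f"encryption throughput: {len(data) / elapsed / 1e6:.1f} MB/s")
```

On a GPU, CTR mode is attractive precisely because each block's keystream can be computed independently, so the work parallelizes across the device's compute units with no inter-block dependencies.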

CGRA Compilation Boost up for Acceleration of Graphics (영상처리 가속을 위한 CGRA compilation 속도 향상)

  • Kim, Wonsub;Choi, Yoonseo;Kim, Jaehyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.166-168 / 2014
  • Coarse-grained reconfigurable architectures (CGRAs) offer potentially high compute throughput with energy efficiency. A CGRA consists of an array of functional units (FUs) that communicate with each other through an interconnect network containing transmission nodes and register files. To achieve high performance from software mapped onto CGRAs, modulo scheduling of loops is generally employed. One of the key challenges in modulo scheduling for CGRAs is explicitly handling the routing of operands from a source operation to a destination operation through the various routing resources. Existing modulo schedulers for CGRAs are slow because finding a valid routing is generally a search over a large space, even with the guidance of well-defined cost metrics. Applications in traditional embedded multimedia domains are regarded as relatively tolerant of slow compile times in exchange for high-quality solutions, but many rapidly growing application domains, such as 3D graphics, require fast compilation, and CGRAs have been kept out of these domains mainly because of their long compile times. We attack this problem by utilizing patternized routes, for which the resources and time slots needed for a successful routing can be estimated in advance when a source operation is placed. By conservatively reserving predefined resources at predefined time slots, future routings originating from the source operation are guaranteed. Experiments on a real-world 3D graphics benchmark suite show that our scheduler improves compile time by up to 6,000 times while achieving on average 70% of the throughput of the state-of-the-art CGRA modulo scheduler, the edge-centric modulo scheduler (EMS).
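The core idea, reserving a predefined pattern of (routing resource, time slot) pairs when an operation is placed, can be illustrated with a toy reservation table. This is an invented sketch, not the authors' scheduler: the resource names and route pattern are made up, and a real modulo scheduler would also fold slots modulo the initiation interval.

```python
# Toy reservation table for patternized routes: placing an operation at t0
# conservatively reserves the (resource, slot) pairs its route will need,
# guaranteeing that the future routing cannot be stolen by later placements.
class ReservationTable:
    def __init__(self):
        self.taken = set()                          # {(resource, time_slot)}

    def can_reserve(self, pattern, t0):
        return all((res, t0 + dt) not in self.taken for res, dt in pattern)

    def reserve(self, pattern, t0):
        if not self.can_reserve(pattern, t0):
            return False
        self.taken.update((res, t0 + dt) for res, dt in pattern)
        return True

# A "patternized route": through transmission node N1 one cycle after
# placement, then register file RF0 the following cycle (hypothetical).
ROUTE_PATTERN = [("N1", 1), ("RF0", 2)]

table = ReservationTable()
assert table.reserve(ROUTE_PATTERN, t0=0)       # placement at cycle 0 succeeds
assert not table.reserve(ROUTE_PATTERN, t0=0)   # conflicting placement rejected
assert table.reserve(ROUTE_PATTERN, t0=1)       # shifted placement still fits
```

Checking a fixed pattern is a constant-time set lookup, which is why this approach avoids the large routing search that makes conventional CGRA modulo schedulers slow.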


Combining Multiple Sources of Evidence to Enhance Web Search Performance

  • Yang, Kiduk
    • Journal of Korean Library and Information Science Society / v.45 no.3 / pp.5-36 / 2014
  • The Web is rich with sources of information that go beyond the contents of documents, such as hyperlinks and manually classified directories of Web documents like Yahoo. Past fusion IR studies have repeatedly shown that combining multiple sources of evidence (i.e., fusion) can improve retrieval performance; this research extends them by investigating the effects of combining three distinct retrieval approaches for Web IR: a text-based approach that leverages document texts, a link-based approach that leverages hyperlinks, and a classification-based approach that leverages Yahoo categories. Retrieval results of the text-, link-, and classification-based methods were combined using variations of the linear combination formula to produce fusion results, which were compared with the individual retrieval results using traditional retrieval evaluation metrics. The fusion results were also examined to ascertain the significance of overlap (i.e., the number of systems that retrieve a document) in fusion. The analysis suggests that the solution spaces of the text-, link-, and classification-based retrieval methods are diverse enough for fusion to be beneficial, while revealing important characteristics of the fusion environment, such as the effects of system parameters and the relationship between overlap, document ranking, and relevance.
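The linear combination formula mentioned here has a simple general shape: normalize each system's scores, then sum them with per-system weights. The sketch below shows that shape on invented scores and weights; it is one common variant, not necessarily the exact formulation the paper evaluates.

```python
# Weighted linear-combination fusion of three retrieval runs (toy scores).
def minmax(scores):
    """Min-max normalize one system's raw scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in scores.items()}

def fuse(runs, weights):
    """runs: {system: {doc_id: raw_score}}; returns fused {doc_id: score}."""
    fused = {}
    for system, scores in runs.items():
        for doc, s in minmax(scores).items():
            fused[doc] = fused.get(doc, 0.0) + weights[system] * s
    return fused

runs = {
    "text":  {"d1": 12.0, "d2": 7.5, "d3": 3.1},
    "link":  {"d1": 0.8,  "d3": 0.6},
    "class": {"d2": 2.2,  "d3": 1.9},
}
weights = {"text": 0.6, "link": 0.2, "class": 0.2}   # hypothetical weights
ranked = sorted(fuse(runs, weights).items(), key=lambda kv: -kv[1])
print(ranked)
```

Note how d3, retrieved by all three systems, accumulates weight from every run: this is the overlap effect the study examines, where documents retrieved by more systems tend to rise in the fused ranking.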