• Title/Summary/Keyword: Big data analysis tool


High-performance computing for SARS-CoV-2 RNAs clustering: a data science-based genomics approach

  • Oujja, Anas;Abid, Mohamed Riduan;Boumhidi, Jaouad;Bourhnane, Safae;Mourhir, Asmaa;Merchant, Fatima;Benhaddou, Driss
    • Genomics & Informatics
    • /
    • v.19 no.4
    • /
    • pp.49.1-49.11
    • /
    • 2021
  • Nowadays, genomic data constitutes one of the fastest-growing datasets in the world. By 2025, it is expected to become the fourth largest source of Big Data, thus mandating an adequate high-performance computing (HPC) platform for processing. With the latest unprecedented and unpredictable mutations in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the research community is in crucial need of ICT tools to process SARS-CoV-2 RNA data, e.g., by classifying it (i.e., clustering) and thus assisting in tracking virus mutations and predicting future ones. In this paper, we present an HPC-based SARS-CoV-2 RNA clustering tool. We adopt a data science approach, from data collection, through analysis, to visualization. In the analysis step, we show how our clustering approach leverages HPC and the longest common subsequence (LCS) algorithm. The approach uses the Hadoop MapReduce programming paradigm and adapts the LCS algorithm to efficiently compute the length of the LCS for each pair of SARS-CoV-2 RNA sequences, which are extracted from the U.S. National Center for Biotechnology Information (NCBI) Virus repository. The computed LCS lengths are used to measure the dissimilarities between RNA sequences in order to identify existing clusters. In addition, we present a comparative study of the LCS algorithm's performance under variable workloads and different numbers of Hadoop worker nodes.
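The pairwise dissimilarity step described in this abstract can be sketched in a few lines: compute the LCS length by dynamic programming, then normalize it into a dissimilarity score. This is a minimal single-machine sketch, not the paper's Hadoop MapReduce implementation, and the short sequences below are invented, not real NCBI records.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def dissimilarity(a: str, b: str) -> float:
    """1 - LCS/max length: 0.0 for identical sequences, approaching 1.0 for unrelated ones."""
    return 1.0 - lcs_length(a, b) / max(len(a), len(b))

seq1 = "AUGGCUACGUUAGC"  # toy stand-ins for SARS-CoV-2 RNA fragments
seq2 = "AUGGCAACGUUGGC"
print(dissimilarity(seq1, seq1))  # 0.0
```

In the paper's setting, this pairwise score would be computed for every sequence pair across Hadoop worker nodes, and the resulting dissimilarity matrix fed to a clustering step.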

Development of an intelligent skin condition diagnosis information system based on social media

  • Kim, Hyung-Hoon;Ohk, Seung-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.241-251
    • /
    • 2022
  • Diagnosing and managing customers' skin conditions is an essential function in the cosmetics and beauty industry. As the social media environment spreads and becomes general across all fields of society, questions and answers about diverse and delicate concerns regarding skin condition diagnosis and management are actively exchanged in social media communities. However, since social media information is highly diverse, unstructured big data, an intelligent skin condition diagnosis system that combines appropriate analysis of skin condition information with artificial intelligence technology is necessary. In this paper, we developed the skin condition diagnosis system SCDIS, which intelligently diagnoses and manages customers' skin conditions by processing text-analysis information from social media into training data. In SCDIS, an artificial neural network model, AnnTFIDF, which automatically diagnoses skin condition types using artificial neural network technology, a deep learning method, was built and used. The performance of AnnTFIDF was analyzed using test sample data, and the accuracy of its skin condition type predictions reached a high level of approximately 95%. These experimental and performance analysis results show that SCDIS can be evaluated as an intelligent tool usable in the skin condition analysis and diagnosis management process in the cosmetics and beauty industry. This study can also serve as basic research addressing the emerging demand for customized cosmetics manufacturing and consumer-oriented beauty industry technology.
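The model name AnnTFIDF indicates TF-IDF features feeding a neural network. The feature step can be sketched in plain Python; the two-post "corpus" below is invented for illustration and is not from the study, and the neural network itself is omitted.

```python
import math

def tfidf(corpus):
    """Term frequency * inverse document frequency for each tokenized post."""
    n_docs = len(corpus)
    df = {}  # document frequency per term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in corpus:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            idf = math.log(n_docs / df[term]) + 1.0  # +1 keeps terms in every doc nonzero
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

# Hypothetical tokenized social media posts about skin concerns.
posts = [["dry", "skin", "itchy", "skin"], ["oily", "skin", "acne"]]
vectors = tfidf(posts)
```

Each resulting sparse vector would then be densified over the full vocabulary and used as the input layer of the classification network.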

Digital Forensics Investigation of Redis Database (Redis 데이터베이스에 대한 디지털 포렌식 조사 기법 연구)

  • Choi, Jae Mun;Jeong, Doo Won;Yoon, Jong Seong;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.5 no.5
    • /
    • pp.117-126
    • /
    • 2016
  • Recently, the increasing utilization of big data and social network services has raised demand for NoSQL databases that overcome the limitations of existing relational databases. Forensic examination of relational databases has been steadily researched in digital forensics; in contrast, forensic examination of NoSQL databases has rarely been studied. In this paper, we introduce Redis, a key-value store NoSQL database, research the collection and analysis of its forensic artifacts, and propose a method for recovering deleted data. We also developed a recovery tool with which our recovery algorithm is verified.
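One of the simplest artifact checks an examiner can make on a seized Redis host: an RDB dump file begins with the ASCII magic `REDIS` followed by a four-digit version number. The sketch below validates only that header on a synthetic file; the paper's actual artifact collection and deleted-data recovery go far beyond this.

```python
import os
import tempfile

def rdb_version(path):
    """Return the RDB version number if the file carries a valid header, else None."""
    with open(path, "rb") as f:
        magic = f.read(9)
    if magic[:5] == b"REDIS" and magic[5:9].isdigit():
        return int(magic[5:9])
    return None

# Synthetic stand-in for a seized dump.rdb (header bytes only, not a full dump).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"REDIS0009")
version = rdb_version(path)
os.remove(path)
```

A full parser would continue past the header into the opcode/value stream, which is where deleted-data recovery becomes possible.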

A Trend Analysis of Agricultural and Food Marketing Studies Using Text-mining Technique (텍스트마이닝 기법을 이용한 국내 농식품유통 연구동향 분석)

  • Yoo, Li-Na;Hwang, Su-Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.10
    • /
    • pp.215-226
    • /
    • 2017
  • This study analyzed trends in agricultural and food marketing studies from 1984 to 2015 using text-mining techniques. Text-mining is a part of big data analysis and an effective tool for objectively processing large amounts of information through categorization and trend analysis. In the present study, frequency analysis, topic analysis, and association-rule analysis were conducted. The titles of agricultural and food marketing studies in four journals and reports served as the basis for the analysis. The results showed that the 1,126 theses related to agricultural and food marketing could be categorized into six subjects. There were significant changes in research trends before and after the 2000s: while research before the 2000s focused on farm- and wholesale-level marketing, research after the 2000s mainly covered consumption, (processed) food, exports, and imports. Local food and school meals are new subjects that are increasingly being studied. Issues regarding agricultural supply and demand were the only subjects investigated in policy research studies, and interest in them waned after the 2000s. A number of studies after the 2010s analyzed consumption, primarily consumption trends and consumer behavior.
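The association-rule step mentioned in the abstract reduces to support and confidence counts over keyword "transactions" (one set of title keywords per thesis). The keyword sets below are invented for illustration, not the study's corpus.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent) estimated from co-occurrence counts."""
    joint = support(transactions, set(antecedent) | set(consequent))
    return joint / support(transactions, antecedent)

# Hypothetical title-keyword sets, one per thesis.
titles = [
    {"local food", "consumer"},
    {"school meals", "local food"},
    {"wholesale", "farm"},
    {"consumer", "consumption trend"},
    {"local food", "consumer", "school meals"},
]
conf = confidence(titles, {"local food"}, {"consumer"})
```

Rules whose support and confidence exceed chosen thresholds (e.g., "local food" implies "consumer") are the ones reported as associations between research subjects.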

A Review Study of the Success Factors Based the Information Systems Success Model (정보시스템 성공모델 기반 성공요인에 관한 문헌적 고찰)

  • Nam, Soo-Tai;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.123-125
    • /
    • 2016
  • Big data analysis refers to the ability to store, manage, and analyze data collected beyond the capacity of existing database management tools, and to the technology for extracting value from large amounts of structured or unstructured data and analyzing the results. Meta-analysis refers to a statistical literature-synthesis method based on the quantitative results of many known empirical studies. We conducted a meta-analysis and review of success factors based on the information systems success model. This study focused on a total of 14 research papers, published in Korean academic journals between 2000 and 2016, that established causal relationships between success factors based on the information systems success model. Based on these findings, several theoretical and practical implications are suggested and discussed with respect to differences from previous research.
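The statistical synthesis behind a meta-analysis can be illustrated with a common fixed-effect approach (not necessarily the exact method these authors used): effect sizes are pooled by inverse-variance weighting. The effect sizes and variances below are invented.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weighted mean and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

effects = [0.30, 0.45, 0.25]    # hypothetical path coefficients from 3 studies
variances = [0.01, 0.02, 0.01]  # hypothetical sampling variances
est, var = pooled_effect(effects, variances)
```

More precise studies (smaller variance) pull the pooled estimate toward their own effect size; the pooled variance shrinks as studies accumulate.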


Analysis of news bigdata on 'Gather Town' using the Bigkinds system

  • Choi, Sui
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.53-61
    • /
    • 2022
  • Recent years have drawn great attention to Generation MZ and the Metaverse, owing to the Fourth Industrial Revolution and the development of a digital environment that blurs the boundary between reality and virtual reality. Generation MZ approaches information very differently from previous generations and uses distinctive communication methods; in terms of learning, they have different motivations, types, and skills, and build relationships differently. Meanwhile, the Metaverse is drawing attention as a teaching method that fits the traits of Generation MZ. Thus, the current research investigated how to increase the use of the Metaverse in educational technology. Specifically, this research examined the antecedents of the popularity of Gather Town, a Metaverse platform. Big data from news articles were collected and analyzed using the Bigkinds system provided by the Korea Press Foundation. The analysis revealed, first, a rapidly increasing trend in media exposure of Gather Town since July 2021, suggesting greater utilization of Gather Town in education after the COVID-19 pandemic. Second, word association analysis and word cloud analysis showed high weights on education-related words such as 'remote', 'university', and 'freshman', while words like 'Metaverse', 'Metaverse platform', 'COVID-19', and 'avatar' were also emphasized. Third, network analysis extracted 'COVID-19', 'avatar', 'university student', 'career', and 'YouTube' as keywords. The findings also suggest the potential value of Gather Town as an educational tool during the COVID-19 pandemic. Therefore, this research will contribute to the application and utilization of Gather Town in the field of education.
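The word association and network analyses described above both rest on co-occurrence counts: how often two keywords appear in the same article. This sketch builds a weighted co-occurrence "network" as an edge dictionary over invented headline keyword sets, not actual Bigkinds output.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(articles):
    """Edge (word pair, sorted) -> number of articles containing both words."""
    edges = Counter()
    for keywords in articles:
        for pair in combinations(sorted(keywords), 2):
            edges[pair] += 1
    return edges

# Hypothetical per-article keyword sets.
articles = [
    {"Gather Town", "remote", "university"},
    {"Gather Town", "Metaverse", "avatar"},
    {"Gather Town", "university", "freshman"},
]
edges = cooccurrence_edges(articles)
```

Edge weights feed directly into network analysis (e.g., degree or centrality of a keyword node), while per-word totals drive the word cloud.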

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian review of World Histories
    • /
    • v.4 no.1
    • /
    • pp.3-34
    • /
    • 2016
  • This introduction is both a statement of a research problem and an account of the first research results toward its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have prevented 'big' results from emerging so far. First, historical data are seen by computer scientists as unstructured; that is, historical records cannot be easily decomposed into unambiguous fields, unlike population (birth and death) records and taxation data. Second, machine-learning tools developed for structured data cannot be applied as-is to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network, obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods, key events, and hence key actors and key locations through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, trade-conflict-diplomacy relationships). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history, and also allows them to test these hypotheses against other actions or additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (i.e., the heritage of the world), will become something that can be inherited by machine-learning algorithms and used in smart cities to highlight and explain present ties and illustrate potential future scenarios.
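The time-integrated network described above, with governments as nodes and dated actions (war, trade, treaty) as links, can be sketched with plain dictionaries. The records below are invented stand-ins, not entries from the Venice dataset.

```python
from collections import defaultdict

def build_network(records):
    """records of (year, actor_a, actor_b, action) -> edge dict keyed by actor pair."""
    edges = defaultdict(list)
    for year, a, b, action in records:
        edges[frozenset((a, b))].append((year, action))
    return edges

def actions_per_period(records, span=50):
    """Count actions per fixed-width period to surface candidate key periods."""
    counts = defaultdict(int)
    for year, *_ in records:
        counts[year - year % span] += 1
    return dict(counts)

# Hypothetical dated actions in the style of the case study.
records = [
    (1205, "Venice", "Byzantium", "treaty"),
    (1260, "Venice", "Genoa", "war"),
    (1265, "Venice", "Genoa", "trade"),
    (1381, "Venice", "Genoa", "treaty"),
]
network = build_network(records)
periods = actions_per_period(records)
```

Periods with unusually dense action counts, and the edges active within them, are the starting points for the time-resolved identification of key events and key actors.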

A novel on Data Prediction Process using Deep Learning based on R (R기반의 딥 러닝을 이용한 데이터 예측 프로세스에 관한 연구)

  • Jung, Se-hoon;Kim, Jong-chan;Park, Hong-joon;So, Won-ho;Sim, Chun-bo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.421-422
    • /
    • 2015
  • Deep learning, a deep neural network technology that demonstrates enhanced neural network analysis performance, has been in the spotlight in recent years. The present study proposed a process to test the error rates of certain variables and predict big data using R, an analysis and visualization tool, by applying the RBM (Restricted Boltzmann Machine) algorithm to deep learning. The weight of each dependent variable was also applied after the classification of dependent variables. The investigators tested input data with the RBM algorithm and designed a process to detect error rates with R.
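The RBM step above can be illustrated with one round of contrastive divergence (CD-1) on a tiny machine. This is a hand-rolled Python sketch rather than the paper's R pipeline; the layer sizes, learning rate, and input vector are invented, and biases are omitted for brevity.

```python
import math
import random

random.seed(0)
N_V, N_H, LR = 4, 3, 0.1  # visible units, hidden units, learning rate
W = [[random.uniform(-0.1, 0.1) for _ in range(N_H)] for _ in range(N_V)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v):
    """P(h_j = 1 | v) for each hidden unit."""
    return [sigmoid(sum(v[i] * W[i][j] for i in range(N_V))) for j in range(N_H)]

def visible_probs(h):
    """P(v_i = 1 | h) for each visible unit."""
    return [sigmoid(sum(h[j] * W[i][j] for j in range(N_H))) for i in range(N_V)]

def cd1_step(v0):
    """One CD-1 weight update; returns squared reconstruction error."""
    h0 = hidden_probs(v0)
    v1 = visible_probs(h0)   # one-step reconstruction (probabilities, no sampling)
    h1 = hidden_probs(v1)
    for i in range(N_V):
        for j in range(N_H):
            W[i][j] += LR * (v0[i] * h0[j] - v1[i] * h1[j])
    return sum((a - b) ** 2 for a, b in zip(v0, v1))

err = cd1_step([1, 0, 1, 0])
```

The reconstruction error returned here is the kind of quantity the proposed process monitors as an "error rate" while iterating over the input data.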


A Plan for Establishing IOT-based Building Maintenance Platform (S-LCC): Focusing a Concept Model on the Function Configuration and Practical Use of Measurement Data (IOT 기반 건축물 유지관리 플랫폼 구축(S-LCC) 방안 : 기능구성과 계측 데이터 활용을 위한 개념 모델을 중심으로)

  • Park, Tae-Keun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.2
    • /
    • pp.611-618
    • /
    • 2020
  • The reliability of LCC analysis results is determined by accurate analytical procedures and energy data from which uncertainty has been removed. Until now, systems that can automatically measure these energy data and build databases from them have not been commercialized. This paper therefore proposes a concept model of an S-LCC platform that can automatically collect and analyze the electric energy consumption data of equipment systems using the IoT, a core tool of the Fourth Industrial Revolution, and operate the equipment systems efficiently using the analyzed results. The proposed concept model was developed by converging existing BLCS and IoT technology and comprises five modules: a Facility Control Module, LCC Analysis Module, Energy Consumption Control Module, Efficiency Analysis Module, and Maintenance Standard Re-establishment Module. Using the LCC analysis results deduced from this system, the deterioration condition of an equipment system can be identified in real time. The results can be used as baseline data to re-establish standards for the maintenance factor, replacement frequency, and lifetime of existing equipment, and to establish new maintenance standards for new equipment. If the S-LCC platform is established, it would increase the reliability of LCC analysis, reduce the labor required for entering data while improving accuracy, and turn disregarded data into big data with high potential.
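The five-module composition described above can be sketched as a pipeline of callables passing a shared state dictionary. The module names follow the paper; their bodies (tariff, thresholds) are invented stubs purely to show the data flow.

```python
def facility_control(readings):          # Facility Control Module: ingest IoT readings
    return {"readings": readings}

def lcc_analysis(state):                 # LCC Analysis Module (invented tariff of 0.1/kWh)
    state["lcc_cost"] = sum(state["readings"]) * 0.1
    return state

def energy_consumption_control(state):   # Energy Consumption Control Module
    state["over_budget"] = state["lcc_cost"] > 10.0
    return state

def efficiency_analysis(state):          # Efficiency Analysis Module (invented score)
    state["efficiency"] = 1.0 / (1.0 + state["lcc_cost"])
    return state

def maintenance_standard(state):         # Maintenance Standard Re-establishment Module
    state["replace"] = state["efficiency"] < 0.5
    return state

# Hypothetical hourly energy readings (kWh) flowing through the five modules.
state = facility_control([3.0, 4.0, 5.0])
for module in (lcc_analysis, energy_consumption_control,
               efficiency_analysis, maintenance_standard):
    state = module(state)
```

In the concept model, each module would be a service over the measurement database rather than a function, but the direction of data flow is the same.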

A Study on the New Partial Discharge Pattern Analysis System used by PA Map (Pulse Analysis Map) (PA Map(Pulse Analysis Map)을 이용한 새로운 부분방전 패턴인식에 관한 연구)

  • Kim, Ji-Hong;Kim, Jeung-Tae;Kim, Jin-Gi;Koo, Ja-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.6
    • /
    • pp.1092-1098
    • /
    • 2007
  • For the past decade, the detection of high-frequency partial discharge (HFPD) has been proposed as an effective method for diagnosing power components in service in power grids. As a tool for HFPD detection, a metal foil sensor based on embedded technology has been commercialized, mainly for power cables, owing to its advantages. Recently, for on-site noise discrimination, several pulse analysis (PA) methods have been reported, and related software, such as neural networks and fuzzy logic, has been proposed to separate partial discharge (PD) signals from noise, since their wave shapes are completely different from each other. On the other hand, the relevant fundamental investigation has not yet been clearly made, while it is reported that the effectiveness of current PA-based methods depends on the type of sensor. Moreover, regarding the identification of critical defects that can be introduced into power cables, direct identification of the nature of defects from PD signals through a metal foil coupler has not yet been realized. As attempts to solve these shortcomings, different types of software have been proposed and employed without any convincing probability of identification. In this regard, our novel algorithm 'PA Map', based on pulse analysis, is suggested to directly identify defects inside the power cable from the HFPD signals output by the HFCT and metal foil sensors. This method makes it possible to discriminate the noise and then to analyze the data related to the PD signals. For this purpose, an HFPD detection and pulse analysis (PA) system was developed, and the effect of noise discrimination was investigated using artificial defects on a real-scale mockup. Through this work, our system proved capable of separating small void discharges from very large noises, such as large air corona and ground floating discharges, on-site, as well as identifying the concerned defects.