Title/Summary/Keyword: Fast Visualization

Search Results: 137

A Design and Development of Big Data Indexing and Search System using Lucene (루씬을 이용한 빅데이터 인덱싱 및 검색시스템의 설계 및 구현)

  • Kim, DongMin; Choi, JinWoo; Woo, ChongWoo
    • Journal of Internet Computing and Services, v.15 no.6, pp.107-115, 2014
  • Recently, increased use of the internet has generated large and diverse types of data, driven by the growth of social media, convergence among industries, and the spread of smart devices. Previous data processing techniques struggle to manage and analyze such data, since its volume is huge and its forms vary and evolve rapidly; in other words, a new approach is needed to solve these problems. Among the many approaches being studied on this issue, we describe an effective design and implementation of an indexing engine for a big data platform. Our goal is to build a system that can effectively manage huge data sets exceeding the range of previous data processing and reduce data analysis time. We used large SNMP log data for an experiment and reduced data analysis time through a fast indexing and searching approach. We also expect our approach to help users analyze data through visualization of the analyzed results.
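The indexing-and-search idea this abstract describes can be illustrated with a minimal inverted index in Python. This is a conceptual sketch of Lucene-style indexing, not Lucene's actual Java API; the class and the SNMP-style log lines are illustrative.

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal Lucene-style inverted index: term -> set of document IDs."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> doc IDs containing it
        self.docs = {}                    # doc ID -> original text

    def add_document(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """AND semantics: return IDs of docs containing every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

# Example: index two SNMP-style log lines and query them
idx = InvertedIndex()
idx.add_document(1, "snmp trap linkDown interface eth0")
idx.add_document(2, "snmp trap linkUp interface eth0")
print(idx.search("snmp linkdown"))  # {1}
```

A real engine such as Lucene adds tokenization/analysis, ranking, and on-disk segment storage on top of this basic postings structure, which is what makes searching large log collections fast compared with scanning raw text.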

Characteristic and Accuracy Analysis of Digital Elevation Data for 3D Spatial Modeling (3차원 공간 모델링을 위한 수치고도자료의 특징 및 정확도 분석)

  • Lee, Keun-Wang; Park, Joon-Kyu
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.11, pp.744-749, 2018
  • Informatization and visualization technology for real space is a key technology for the construction of geospatial information. Three-dimensional (3D) modeling is a method of constructing geospatial information from data measured by various methods. The 3D laser scanner has mainly been used to acquire digital elevation data. Meanwhile, the unmanned aerial vehicle (UAV), which has attracted attention as a promising technology of the fourth industrial revolution, has been evaluated as a way to obtain geospatial information quickly, and various studies are being carried out. However, there is a lack of evaluation of the quantitative work efficiency and data accuracy of data construction technologies for 3D geospatial modeling. In this study, various analyses were carried out on the characteristics, work processes, and accuracy of point cloud data acquired by a 3D laser scanner and a UAV. The 3D laser scanner and UAV were used to generate digital elevation data of the study area, and their characteristics were analyzed. Evaluation of the accuracy confirmed that digital elevation data from the 3D laser scanner and UAV are accurate to within a 10 cm maximum, suggesting they can be used for spatial information construction. In the future, collecting 3D elevation data with a 3D laser scanner and UAV is expected to serve as an efficient geospatial information construction method.
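The accuracy evaluation described here (comparing digital elevation data against reference measurements to confirm accuracy within 10 cm) is typically an RMSE computation over checkpoints. A minimal sketch, with hypothetical elevation values standing in for real scanner/UAV and GNSS survey data:

```python
import math

def rmse(measured, reference):
    """Root-mean-square error between measured and reference elevations (m)."""
    assert len(measured) == len(reference)
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(measured)
    )

# Hypothetical checkpoint elevations (metres): DEM values vs. GNSS survey
scanner_dem = [52.31, 48.77, 50.12, 49.95]
gnss_survey = [52.28, 48.80, 50.05, 50.01]

error_m = rmse(scanner_dem, gnss_survey)
print(f"RMSE: {error_m * 100:.1f} cm")  # → RMSE: 5.1 cm, within the 10 cm bound
```

In practice such checkpoints are surveyed ground control points, and separate RMSE values would be reported for the scanner-derived and UAV-derived elevation data.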

Study on the Growth Factors for Rapidly Cultivating Mycobacterium spp. (마이코박테리움을 신속하게 배양할 수 있는 성장 인자에 관한 연구)

  • Ha, Sung-Il; Park, Kang-Gyun; Suk, Hyun-Soo; Shin, Jeong-Seob; Shin, Dong-Pil; Kwon, Min-O; Park, Yeon-Joon
    • Korean Journal of Clinical Laboratory Science, v.51 no.2, pp.177-184, 2019
  • Mycobacteria grow slowly; solid media must therefore be incubated for up to eight weeks and liquid media for up to six weeks. The purpose of this study was to find growth factors that allow Mycobacterium to grow rapidly and to help develop a solid medium for rapid identification. Three Mycobacterium growth factors were evaluated with 10 mycobacterial species by adding activated charcoal, defibrinated sheep blood, and L-ascorbic acid to Difco™ Mycobacteria 7H11 agar (Becton, Dickinson and Company, Sparks, MD, USA). The time to detection and the distinguishability of colonies were compared with those of the current method. For the rapidly growing mycobacteria, the difference in detection time between the new media and the conventional media confirmed that the new media were faster. M. kansasii and M. intracellulare grew faster in 7H11 C than in 7H11 medium. MTB grew faster in 7H11 C than in the other media. This study confirmed that the two growth factors affect both fast-growing and slow-growing mycobacteria. 7H11 C showed better distinguishability than the conventional media for all 10 mycobacteria due to the color contrast. In particular, when MTB was grown, the colonies were larger than on other media, so visualization was easy.

An Examination of Core Competencies for Data Librarians (데이터사서의 핵심 역량 분석 연구)

  • Park, Hyoungjoo
    • Journal of the Korean BIBLIA Society for Library and Information Science, v.33 no.1, pp.301-319, 2022
  • In recent decades, research has become more data-intensive in the fast-paced information environment. Researchers face new challenges in managing their research data due to the increasing volume of data-driven research and the policies of major funding agencies. Information professionals have begun to offer various data support services such as training, instruction, data curation, data management planning, and data visualization. However, the emerging field of data librarianship, including its specific roles and competencies, has not been clearly established even though librarians are taking on new roles in data services. Therefore, there is a need to identify a set of competencies for data librarians in this growing field. The purpose of this study is to consider the varying core competencies of data librarians. This exploratory study examines 95 online recruiting advertisements for data librarians posted between 2017 and 2021. The study finds that core competencies for data librarians include skills in technology, communication and interpersonal relationships, training/consulting, service, library management, metadata knowledge, and knowledge of data curation. Specific core technology skills include knowledge of statistical software and computer programming. This study contributes to an understanding of the core competencies of data librarians, helping future information professionals prepare as data librarians and helping instructors develop and revise curricula and course materials.

Receptor binding motif surrounding sites in the Spike 1 protein of infectious bronchitis virus have high susceptibility to mutation related to selective pressure

  • Seung-Min Hong; Seung-Ji Kim; Se-Hee An; Jiye Kim; Eun-Jin Ha; Howon Kim; Hyuk-Joon Kwon; Kang-Seuk Choi
    • Journal of Veterinary Science, v.24 no.4, pp.51.1-51.17, 2023
  • Background: To date, various genotypes of infectious bronchitis virus (IBV) have co-circulated, and in Korea the GI-15 and GI-19 lineages have prevailed. The spike protein, particularly the S1 subunit, is responsible for receptor binding, contains hypervariable regions, and drives the emergence of novel variants. Objective: This study aims to investigate the putative major amino acid substitutions of the variants in GI-19. Methods: S1 sequence data of IBV isolated from 1986 to 2021 in Korea (n = 188) were analyzed. Sequence alignments were carried out using the Multiple Alignment using Fast Fourier Transform (MAFFT) tool in Geneious Prime. The phylogenetic tree was generated using MEGA-11 (ver. 11.0.10), and Bayesian analysis was performed with BEAST v1.10.4. Selective pressure was analyzed via the online server Datamonkey. Putative critical amino acids were highlighted and visualized using PyMOL software (version 2.3). Results: Most isolates (93.5%) belonged to the GI-19 lineage in Korea, and the GI-19 lineage was further divided into seven subgroups: KM91-like (Clades A and B), K40/09-like, and QX-like (I-IV). Positive selection was identified at nine and six residues in S1 for KM91-like and QX-like IBVs, respectively. In addition, several positively selected sites of the S1 N-terminal domain showed mutations at common locations even when new clades were generated. They were all located on the lateral surface of the quaternary structure of the S1 subunits, in close proximity to the receptor-binding motif (RBM), the putative RBM motif, and the neutralizing antigenic sites in S1. Conclusions: Our results suggest that RBM-surrounding sites in the S1 subunit of IBV are highly susceptible to mutation under selective pressure during evolution.

Investigation of thermal hydraulic behavior of the High Temperature Test Facility's lower plenum via large eddy simulation

  • Hyeongi Moon; Sujong Yoon; Mauricio Tano-Retamale; Aaron Epiney; Minseop Song; Jae-Ho Jeong
    • Nuclear Engineering and Technology, v.55 no.10, pp.3874-3897, 2023
  • A high-fidelity computational fluid dynamics (CFD) analysis was performed using the Large Eddy Simulation (LES) model for the lower plenum of the High-Temperature Test Facility (HTTF), a ¼-scale test facility of the modular high-temperature gas-cooled reactor (MHTGR) managed by Oregon State University. In most next-generation nuclear reactors, thermal stress due to thermal striping is one of the risks to be carefully considered. This is also true for HTGRs, especially since the exhaust helium gas temperature is high. To evaluate these risks and performance, organizations in the United States led by the OECD NEA are conducting a thermal hydraulic code benchmark for HTGRs, and the test facility used for this benchmark is the HTTF. The HTTF can perform experiments in both normal and accident situations and provide high-quality experimental data. However, it is difficult to provide sufficient data for benchmarking through experiments, and CFD results based on Reynolds-averaged Navier-Stokes models cannot reliably capture the thermal hydraulic behavior without verification. To solve this problem, a high-fidelity 3-D CFD analysis was performed using the LES model for the HTTF. It was also verified that the LES model can properly simulate the jet mixing phenomenon via a unit cell test that provides experimental information. The CFD analysis showed that the lower the sub-grid scale model dependency, the closer the result is to the actual behavior. For the unit cell test CFD analysis and the HTTF CFD analysis, the volume-averaged sub-grid scale model dependency was calculated to be 13.0% and 9.16%, respectively. From the HTTF analysis, quantitative data on the fluid inside the HTTF lower plenum are provided in this paper. Qualitatively, the temperature was highest at the center of the lower plenum, while the temperature fluctuation was highest near the edge of the lower plenum wall.
The power spectral density (PSD) of temperature was analyzed via the fast Fourier transform (FFT) for specific points at the center and side of the lower plenum. The FFT results did not reveal frequency-dominant temperature fluctuations in the center part. It was confirmed that the temperature PSD at the top increased from the center toward the wake. The vortex was visualized using the well-known scalar Q-criterion; the closer to the outlet duct, the greater the influence of the mainstream, so the inflow jet vortex was dissipated and mixed at the top of the lower plenum. Additionally, FFT analysis was performed on the support structure near the corner of the lower plenum, where temperature fluctuations are large, and it was confirmed that the temperature fluctuation of the flow did not have a significant effect near the corner wall. The vortices generated from the lower plenum to the outlet duct were also identified in this paper. The quantitative and qualitative results presented here should serve as reference data for the benchmark.
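The PSD-via-FFT analysis of temperature fluctuations described above can be sketched in a few lines of NumPy. The signal here is synthetic (a mean temperature with one dominant fluctuation frequency plus noise), not HTTF data, and the sampling parameters are arbitrary:

```python
import numpy as np

# Synthetic temperature signal: mean 900 K, a 5 Hz fluctuation, plus noise
fs = 100.0                      # sampling frequency [Hz]
t = np.arange(0, 10, 1 / fs)    # 10 s of samples
rng = np.random.default_rng(0)
temp = 900 + 2.0 * np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.standard_normal(t.size)

# One-sided power spectral density of the fluctuating component
fluct = temp - temp.mean()                      # remove the mean temperature
spectrum = np.fft.rfft(fluct)                   # FFT of the real signal
freqs = np.fft.rfftfreq(fluct.size, d=1 / fs)   # frequency axis [Hz]
psd = (np.abs(spectrum) ** 2) / (fs * fluct.size)
psd[1:-1] *= 2                                  # fold negative frequencies in

peak_freq = freqs[np.argmax(psd)]
print(f"dominant fluctuation at {peak_freq:.1f} Hz")
```

A dominant spectral peak like this would indicate a frequency-locked temperature fluctuation at the probed point; the flat, peak-free spectra reported for the plenum center correspond to broadband turbulent mixing instead.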

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon; Kim, Dongsung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.1-19, 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze AI's development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on GitHub, and confirmed the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics and technical fields of the collected projects. The analysis showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing has remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten; in their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
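The degree-centrality measure used above to rank topics can be sketched on a toy topic co-occurrence network in pure Python. The projects and topic tags below are hypothetical stand-ins for GitHub project metadata; real analyses would use a graph library, but the computation is the same:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical projects, each tagged with GitHub-style topics
projects = [
    {"machine-learning", "deep-learning", "tensorflow"},
    {"machine-learning", "computer-vision"},
    {"deep-learning", "tensorflow", "keras"},
    {"machine-learning", "nlp"},
]

# Build a co-occurrence graph: topics tagged on the same project are linked
neighbors = defaultdict(set)
for topics in projects:
    for a, b in combinations(sorted(topics), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

# Degree centrality of a node: degree / (n - 1), n = number of topics
n = len(neighbors)
centrality = {topic: len(adj) / (n - 1) for topic, adj in neighbors.items()}

for topic, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{topic:20s} {c:.2f}")
```

On this toy graph "machine-learning" co-occurs with the most other topics and therefore ranks highest, mirroring how the study identifies the topics most central to the AI OSS ecosystem.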