• Title/Summary/Keyword: Toolkit

Study on Management of Water Pipes in Buildings using Augmented Reality (증강현실을 이용한 건물의 수도관 관리 방안 연구)

  • Sang-Hyun Park
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1229-1238
    • /
    • 2023
  • Digital twin is a technology that creates a virtual space replicating the real world and manages the real world efficiently by integrating the real and virtual spaces. The digital twin concept for water facilities is to manage real-world water pipes effectively by implementing them in a virtual space and augmenting them onto the interior space of the building. In the proposed method, the Unity 3D game engine is used to implement the digital twin application for the interior of a building, and the AR Foundation toolkit, based on ARCore, is used as the augmented reality technology for the digital twin implementation. In digital twin applications, it is essential to match the real and virtual worlds; in the proposed method, 2D image markers are used for this matching. A Unity shader program is also applied to make the augmented objects visually realistic. The implementation results show that the proposed method is simple but accurate in placing water pipes in real space, and visually effective in representing water pipes on the wall.
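  The abstract describes the marker-based matching only at a high level; formally, this kind of registration reduces to composing rigid transforms. As a minimal formulation in notation of our own (not the paper's): if the AR framework reports the detected image marker's pose in the world frame as a 4×4 homogeneous transform $T^{world}_{marker}$, and the virtual pipe model is authored relative to the marker frame as $T^{marker}_{pipe}$, the augmented pipes are placed at

  $$T^{world}_{pipe} = T^{world}_{marker}\,T^{marker}_{pipe},$$

  so every virtual pipe inherits the real-world position and orientation of the physical marker it was registered against.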

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.12
    • /
    • pp.3383-3397
    • /
    • 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of scene graph generation (SGG) is often compromised when working with biased data in real-world situations. While many existing systems handle both feature extraction and classification in a single stage of learning, some employ class-balancing strategies such as re-weighting, data resampling, and transfer learning from head to tail. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a distribution alignment technique that effectively balances the class distribution once the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our distribution alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, our model achieved significant improvements over previous state-of-the-art methods. Compared to the TDE model, it improved mR@100 by 70.5% for PredCls, 84.0% for SGCls, and 97.6% for SGDet.
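  The abstract does not spell out the calibration function or the alignment rule. As a rough, hedged illustration of the general idea of frequency-aware rescoring of predicate scores, the sketch below uses plain logit adjustment; the function name, the tau parameter, and the toy counts are ours, not the paper's.

```python
import torch

def adjusted_predicate_scores(logits: torch.Tensor,
                              class_counts: torch.Tensor,
                              tau: float = 1.0) -> torch.Tensor:
    """Shift raw predicate logits by a frequency-dependent offset so that
    rare (tail) predicates are not drowned out by frequent (head) ones."""
    prior = class_counts / class_counts.sum()   # empirical class prior
    return logits - tau * torch.log(prior)      # boost tail, damp head

# Toy usage: three predicate classes with a heavily skewed training distribution.
logits = torch.tensor([2.0, 1.5, 1.4])          # raw classifier scores
counts = torch.tensor([9000.0, 900.0, 100.0])   # training-set predicate counts
print(adjusted_predicate_scores(logits, counts))
```

  After the adjustment, the rarest class here receives the highest score, which is the qualitative effect any such distribution-aware calibration aims for.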

Implementation of a Hydrogen Leakage Simulator with HyRAM+ (HyRAM+를 이용한 수소 누출 시뮬레이터 구현)

  • Sung-Ho Hwang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.551-557
    • /
    • 2024
  • Hydrogen is a renewable energy source with various desirable characteristics, being clean, carbon-free, and energy-dense, and it is internationally recognized as a "future energy". With the rapid development of the hydrogen energy industry, more hydrogen infrastructure is needed to meet the demand for hydrogen. However, hydrogen infrastructure accidents have been occurring frequently, hindering the development of the hydrogen industry. HyRAM+, developed by Sandia National Laboratories, is a software toolkit that integrates data and methods related to hydrogen safety assessments for various storage applications, including hydrogen refueling stations. The physics mode of HyRAM+ simulates the consequences of hydrogen leaks from hydrogen refueling station components, graphing gas plume dispersion, jet flame temperature and trajectory, and radiative heat flux. In this paper, hydrogen leakage data were extracted for a hydrogen refueling station in Samcheok, Gangwon-do, using the HyRAM+ software. A hydrogen leakage simulator was then developed using the data extracted from HyRAM+, and it was implemented as a dashboard that displays the data generated by the simulator using a database and Grafana.
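  The paper states only that the simulator's output is stored in a database and visualized with Grafana. As a minimal sketch of that storage layer, the snippet below writes simulated leak samples into a SQLite table that a Grafana data source can query; the schema, column names, and values are hypothetical assumptions of ours, not the paper's design.

```python
import sqlite3
import time

# Hypothetical schema; the paper only says simulator output goes to a database.
conn = sqlite3.connect("hydrogen_leak.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS leak_results (
        ts          REAL,   -- Unix timestamp of the simulated sample
        component   TEXT,   -- refueling-station component (valve, hose, ...)
        leak_diam_m REAL,   -- leak orifice diameter [m]
        mass_flow   REAL,   -- leaked mass flow rate [kg/s]
        heat_flux   REAL    -- radiative heat flux at a reference point [kW/m^2]
    )
""")

def record_sample(component, leak_diam_m, mass_flow, heat_flux):
    """Append one simulated leak sample for the dashboard to pick up."""
    conn.execute(
        "INSERT INTO leak_results VALUES (?, ?, ?, ?, ?)",
        (time.time(), component, leak_diam_m, mass_flow, heat_flux),
    )
    conn.commit()

# Grafana then reads this table (e.g. via a SQL data source) and plots the
# time series on dashboard panels. The numbers below are illustrative only.
record_sample("dispenser_hose", 0.003, 0.012, 1.8)
```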

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPU". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and the ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. As for execution speed, there is no meaningful difference among the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for a user learning deep learning models, the availability of examples and references matters as well.
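  As a concrete example of the chain-rule mechanism the abstract describes, here is a minimal sketch of automatic differentiation in TensorFlow. It uses the modern eager GradientTape API rather than the TF 1.x graph API that was current when the paper was written; the toy model and values are ours.

```python
import tensorflow as tf

# Tiny computational graph: y = w*x + b, loss = (y - target)^2.
w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
target = tf.constant(10.0)

with tf.GradientTape() as tape:
    y = w * x + b                # one node of the graph
    loss = (y - target) ** 2     # another node

# Reverse-mode automatic differentiation applies the chain rule edge by edge.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())    # dloss/dw = 2*(y-target)*x = -21, dloss/db = -7
```

  The framework records each operation as an edge of the graph, stores the local partial derivative, and multiplies along paths from the loss back to each variable, exactly as described above.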

Monte Carlo Simulation of the Carbon Beam Nozzle for the Biomedical Research Facility in RAON (한국형 중이온 가속기 RAON의 의생물 연구시설 탄소 빔 노즐에 대한 Monte Carlo 시뮬레이션)

  • Bae, Jae-Beom;Cho, Byung-Cheol;Kwak, Jung-Won;Park, Woo-Yoon;Lim, Young-Kyung;Chung, Hyun-Tai
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.12-17
    • /
    • 2015
  • The purpose of this Monte Carlo simulation study was to provide an optimized nozzle design satisfying the beam conditions for biomedical research at the Korean heavy-ion accelerator, RAON. The nozzle was required to produce a ¹²C beam satisfying three conditions: the maximum field size, the dose uniformity, and the beam contamination. We employed the GEANT4 toolkit for the Monte Carlo simulation used to optimize the nozzle design. The beams for biomedical research were required to have a maximum field size of more than 15×15 cm², a dose uniformity of less than 3%, and a level of beam contamination, due to radiation scattered from the collimation systems, of less than 5% of the total dose. For the field size, we optimized the tilting angle of the circularly rotating beam, controlled by a pair of dipole magnets at the most upstream point of the user beam line unit, and the thickness of the scatter plate located downstream of the dipole magnets. The beam scanning angle and the thickness of the scatter plate were successfully optimized to 0.5° and 0.05 cm via this Monte Carlo simulation analysis. For the dose uniformity and the beam contamination, we introduced a new beam configuration technique combining scanning and static beams. With the combination of a central static beam and a circularly rotating beam tilted 0.5° to the beam axis, a dose uniformity of 1.1% was established in the 15×15 cm² maximum field. The beam contamination was determined by the ratio of the absorbed doses delivered by ¹²C ions and by other particles; it was kept below 2.5% of the total dose in the region from 5 cm to 17 cm water-equivalent depth in the combined beam configuration. Based on these results, we established an optimized nozzle design satisfying the beam conditions required for biomedical research.
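  To make the combined static-plus-rotating-beam idea concrete, here is a toy 2D numerical sketch (plain NumPy, not Geant4): a central Gaussian spot is superimposed with a ring of spots swept out by the rotating beam, and the flatness of the summed profile is evaluated. The spot size, ring radius, weights, and evaluation window are illustrative choices of ours, not the paper's optimized parameters.

```python
import numpy as np

# Toy lateral dose model: static central beam + circularly rotating beam,
# the latter approximated as a ring of Gaussian spots.
grid = np.linspace(-10, 10, 401)                    # lateral position [cm]
X, Y = np.meshgrid(grid, grid)
sigma = 3.0                                         # Gaussian spot width [cm]

def spot(cx, cy):
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))

ring_r = 4.0                                        # radius swept by the tilted beam [cm]
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
ring = sum(spot(ring_r * np.cos(a), ring_r * np.sin(a)) for a in angles) / len(angles)
dose = spot(0.0, 0.0) + 2.0 * ring                  # static + weighted rotating beam

# Flatness over the central 5x5 cm window, analogous to a dose-uniformity figure.
c = dose[150:251, 150:251]
print(f"uniformity: {(c.max() - c.min()) / (c.max() + c.min()):.3f}")
```

  Sweeping the ring radius and the relative weight of the two components in such a model mimics, in miniature, the optimization the study performed with full particle transport.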

Genetic Traceability of Black Pig Meats Using Microsatellite Markers

  • Oh, Jae-Don;Song, Ki-Duk;Seo, Joo-Hee;Kim, Duk-Kyung;Kim, Sung-Hoon;Seo, Kang-Seok;Lim, Hyun-Tae;Lee, Jae-Bong;Park, Hwa-Chun;Ryu, Youn-Chul;Kang, Min-Soo;Cho, Seoae;Kim, Eui-Soo;Choe, Ho-Sung;Kong, Hong-Sik;Lee, Hak-Kyo
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.27 no.7
    • /
    • pp.926-931
    • /
    • 2014
  • Pork from the Jeju black pig (population J) and Berkshire (population B) has a unique market share in Korea because of its high meat quality. Due to the high demand for this pork, traceability of the pork to its origin is becoming an important part of consumer demand. To examine the feasibility of such a system, we aim to provide basic genetic information on the two black pig populations and assess the possibility of genetically distinguishing between the two breeds. Muscle samples were collected from slaughterhouses in Jeju Island and Namwon, Chonbuk province, Korea, for populations J and B, respectively. In total, 800 Jeju black pigs and 351 Berkshires were genotyped at thirteen microsatellite (MS) markers. Analyses of the genetic diversity of the two populations were carried out with the programs MS toolkit and FSTAT. The population structure of the two breeds was determined by a Bayesian clustering method implemented in STRUCTURE and by a phylogenetic analysis in Phylip. Population J exhibited a higher mean number of alleles, expected heterozygosity, observed heterozygosity, and polymorphism information content than population B. The F_IS values of population J and population B were 0.03 and -0.005, respectively, indicating that little or no inbreeding has occurred. In addition, the genetic structure analysis revealed the possibility of gene flow from population B to population J. The expected probability of identity of the 13 MS markers was 9.87×10⁻¹⁴ in population J, 3.17×10⁻⁹ in population B, and 1.03×10⁻¹² in the two populations combined. The results of this study are useful for distinguishing between the two black pig breeds and can serve as a foundation for the further development of DNA markers.
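  For reference, the expected probability of identity quoted above is conventionally computed per locus from allele frequencies as PI = 2(Σp_i²)² − Σp_i⁴ and multiplied across independent loci. A minimal sketch of that arithmetic, with toy frequencies of ours rather than the study's data:

```python
import numpy as np

def probability_of_identity(locus_allele_freqs):
    """Multiply the per-locus probability of identity,
    PI = 2*(sum p_i^2)^2 - sum p_i^4, across independent loci."""
    pi_total = 1.0
    for freqs in locus_allele_freqs:
        p = np.asarray(freqs)
        pi_total *= 2 * (p ** 2).sum() ** 2 - (p ** 4).sum()
    return pi_total

# Toy example: two hypothetical microsatellite loci with three and four alleles.
print(probability_of_identity([[0.5, 0.3, 0.2], [0.4, 0.3, 0.2, 0.1]]))
```

  The product shrinks rapidly as loci are added, which is why a 13-marker panel can reach identity probabilities on the order of 10⁻¹⁴.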

Genomic Polymorphism Analysis using Microsatellite Markers in Gyeongju Donggyeong Dogs

  • Kim, Seung-Chang;Kim, Lee-Kyung;Choi, Seog-Kyu;Park, Chang-Min;Park, Sun-Ae;Cho, Yong-Min;Lim, Dajeong;Chai, Han-Ha;Lee, Seung-Hwan;Lee, Ji-Woong;Sun, Sang-Soo;Choi, Bong-Hwan
    • Reproductive and Developmental Biology
    • /
    • v.36 no.4
    • /
    • pp.243-248
    • /
    • 2012
  • This study was conducted to find useful markers for genetic polymorphism analysis of the Gyeongju Donggyeong dog using microsatellite (MS) markers. Twenty-three MS markers were used to analyze the genetic features of DNA from 100 Gyeongju Donggyeong dogs in the Gyeongju area. Multiplex PCR was performed with three primer sets of 9, 10, and 4 markers, grouped according to their analysis conditions. Heterozygosity, polymorphism information content (PIC), allele frequency, and the number of alleles at each locus were calculated using the Microsatellite Toolkit software and the Cervus 3.0 program. In total, 148 alleles were genotyped, with an average of 6.43 alleles detected per marker. FH3381 had the most alleles (15) and FH2834 the fewest (2). Expected heterozygosity ranged widely, from 0.282 to 0.876, with an average of 0.6496; observed heterozygosity ranged even more widely, from 0.200 to 0.950, with an average of 0.6404. PIC ranged from 0.262 to 0.859, with an average of 0.606. In particular, FH2998 showed the highest observed heterozygosity (0.950), and FH3381 showed the highest expected heterozygosity (0.876) and PIC (0.859). These markers are considered useful for studying the genetic traits of the Gyeongju Donggyeong dog.
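  The heterozygosity and PIC statistics reported above follow standard definitions (He = 1 − Σp_i²; PIC as in Botstein et al., 1980). A minimal sketch of the arithmetic, with a toy allele-frequency vector of ours rather than the study's genotype data:

```python
import numpy as np

def expected_heterozygosity(freqs):
    """Gene diversity at one locus: He = 1 - sum(p_i^2)."""
    p = np.asarray(freqs)
    return 1.0 - (p ** 2).sum()

def pic(freqs):
    """Polymorphism information content (Botstein et al., 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    p = np.asarray(freqs)
    cross = sum(2 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1.0 - (p ** 2).sum() - cross

# Toy allele-frequency vector for one hypothetical marker.
p = [0.4, 0.3, 0.2, 0.1]
print(expected_heterozygosity(p), pic(p))
```

  PIC is always slightly below He for the same frequencies, which matches the pattern of the averages reported in the abstract (0.606 vs. 0.6496).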

Android Based Mobile Combination Login Application (안드로이드 기반 모바일 통합로그인 애플리케이션)

  • Lim, Jung-Gun;Choi, Chang-Suk;Park, Tae-Eun;Ki, Hyo-Sun;An, Beongku
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.151-156
    • /
    • 2013
  • Android, which was created by Google and the Open Handset Alliance, is an open-source software toolkit for mobile phones. Within a few years it will be running on millions of mobile phones and other mobile devices and will become a main platform for application developers. In this paper, an integrated login application based on Google's Android platform is developed. The main features of the Android-based mobile combination login application are as follows. First, the application provides more convenient login functionality than a general web browser, acting as a web-browser-style mobile application, along with security features and faster screen (view) rendering achieved by reducing the amount of data transferred. Second, the application is useful for managing IDs and passwords, and it can easily manage information for multiple IDs, such as messages, mail, and profiles. The performance evaluation of the developed application shows that it can log in to many kinds of portal sites simultaneously and can maintain login state continuously. Currently, we are working to develop technologies that can attach multiple accounts to one ID and display all their information on one screen.

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo;KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.23 no.3
    • /
    • pp.236-251
    • /
    • 2020
  • As remote sensing technologies evolve and more satellites are placed in orbit, the demand for using satellite data for disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and thus no feasible framework for establishing such a system has been presented. This research identifies the constraints on establishing satellite-based near real-time disaster monitoring systems by devising and testing a new conceptual framework for disaster monitoring, and then presents a feasible disaster monitoring system that relies mainly on acquirable satellite data. Implementing near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper the timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analysis. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables obtaining, storing and analysing satellite data, and by developing a toolkit with which the satellite sensors required for monitoring specific types of disaster, and their orbits, can be analysed. It is anticipated that the findings of this research can serve as a meaningful reference for any country trying to establish a satellite-based near real-time disaster monitoring system.
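  The paper stops at proposing such a sensor-analysis toolkit. As a loose sketch of what matching disaster types to suitable sensors and orbits could look like in code, the snippet below filters a hand-rolled catalog by revisit time and all-weather capability; the entries, approximate revisit times, and field names are illustrative assumptions of ours, not the paper's design.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    satellite: str
    sensor_type: str        # "SAR" (all-weather) or "optical"
    revisit_days: float     # approximate constellation revisit time
    all_weather: bool

# Toy catalog; entries and revisit times are approximate and illustrative.
CATALOG = [
    Sensor("Sentinel-1", "SAR", 6, True),
    Sensor("Sentinel-2", "optical", 5, False),
    Sensor("KOMPSAT-5", "SAR", 28, True),
]

def candidates(needs_all_weather: bool, max_revisit_days: float):
    """Return sensors whose revisit time and weather capability fit the disaster."""
    return [s for s in CATALOG
            if s.revisit_days <= max_revisit_days
            and (s.all_weather or not needs_all_weather)]

# Flood monitoring under cloud cover calls for an all-weather (SAR) sensor.
print(candidates(needs_all_weather=True, max_revisit_days=10))
```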

Precision Validation of Electromagnetic Physics in Geant4 Simulation for Proton Therapy (양성자 치료 전산모사를 위한 Geant4 전자기 물리 모델 정확성 검증)

  • Park, So-Hyun;Rah, Jeong-Eun;Shin, Jung-Wook;Park, Sung-Yong;Yoon, Sei-Chul;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.225-234
    • /
    • 2009
  • Geant4 (GEometry ANd Tracking) provides various packages specialized in modeling electromagnetic interactions. The validation of Geant4 physics models is a significant issue for applications of Geant4-based simulation in medical physics. The purpose of this study is to evaluate the accuracy of the Geant4 electromagnetic physics for proton therapy. The validation covered both the continuous slowing down approximation (CSDA) range and the stopping power. In each test, the reliability of the electromagnetic models was evaluated in a selected group of materials, such as water, bone, adipose tissue, and various atomic elements. The results of the Geant4 simulation were compared with National Institute of Standards and Technology (NIST) reference data. For water, bone, and adipose tissue, the average percent differences in the CSDA range were 1.0%, 1.4%, and 1.4%, respectively, and the average percent differences in the stopping power were 0.7%, 1.0%, and 1.3%, respectively. The data were analyzed with the Kolmogorov-Smirnov goodness-of-fit test. All the results from the electromagnetic models showed good agreement with the reference data, with all corresponding p-values higher than the significance level α = 0.05.
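  The validation arithmetic described above is straightforward to reproduce in outline: compare simulated values against the NIST reference via mean percent difference, then run a Kolmogorov-Smirnov test. A minimal sketch follows; the arrays are placeholders of ours, standing in for the study's Geant4 output and NIST (PSTAR) data.

```python
import numpy as np
from scipy import stats

# Placeholder arrays; the real study compared CSDA ranges and stopping
# powers in water, bone, and adipose tissue against NIST data.
nist   = np.array([45.7, 26.3, 18.7, 12.5, 9.6])   # reference stopping power
geant4 = np.array([45.2, 26.6, 18.5, 12.6, 9.5])   # simulated stopping power

percent_diff = np.abs(geant4 - nist) / nist * 100
print(f"mean percent difference: {percent_diff.mean():.2f}%")

# Two-sample Kolmogorov-Smirnov test; p > 0.05 means no significant
# difference between the simulated and reference distributions is detected.
stat, p_value = stats.ks_2samp(geant4, nist)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```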
