• Title/Summary/Keyword: Open Source Platform

A Comparative Analysis of Domestic and Foreign Docker Container-Based Research Trends (국내·외 도커 컨테이너 기반 연구 동향 비교 분석)

  • Bae, Sun-Young
    • The Journal of the Korea Contents Association / v.22 no.10 / pp.742-753 / 2022
  • Cloud computing, rapidly growing as a core technology of the 4th industrial revolution, has become central to changes in global IT trends, and Docker, a container-based open-source platform, is the mainstream virtualization technology for cloud computing. This paper therefore compares and analyzes Docker container-based research trends, focusing on studies published from March 2013 to July 2022. First, the number of papers published per year increased steadily in both domestic and foreign research. Second, the most frequent keywords were Docker, Docker Containers, and Containers in domestic research, and Cloud Computing, Containers, and Edge Computing in foreign research. Third, regarding the frequency of publishing venues used to estimate research trends, the Korean Next Generation Computer Society and the Korean Computer Accounting Society each published two papers, the highest among domestic outlets; among foreign venues, IEEE Communications Surveys & Tutorials, IEEE Access, and Computer published the most, in that order. Fourth, regarding research methods, domestic research included 78 experiments (26.3%) and 32 surveys (10.8%), while foreign research included 128 experiments (43.1%) and 59 surveys (19.9%). Among implementation studies, domestic research focused on systems (25, 8.4%), algorithms (24, 8.1%), and performance measurement and improvement (16, 5.4%), in that order, while foreign research focused on algorithms (37, 12.5%), performance measurement and improvement (17, 9.1%), followed by frameworks (26, 8.8%). These results are expected to serve as baseline data for guiding the direction of Docker container-based cloud computing research in terms of research methods, topics, fields, and technology development.

Real-time Steel Surface Defect Detection Application Based on the YOLOv4 Model and Transfer Learning (Yolov4와 전이학습을 기반으로한 실시간 철강 표면 결함 검출 연구)

  • Bok-Kyeong Kim;Jun-Hee Bae;NGUYEN VIET HOAN;Yong-Eun Lee;Young Seok Ock
    • The Journal of Bigdata / v.7 no.2 / pp.31-41 / 2022
  • Steel is one of the most fundamental materials in the mechanical industry, yet product quality is greatly affected by surface defects in the steel. Researchers therefore pay attention to surface defect detectors, and deep learning methods are the current trend in object detection. Limitations and room for improvement remain; for example, related works focus on developing models but do not consider real-time application with practical implications for industrial settings. In this paper, a real-time steel surface defect detection application based on YOLOv4 is proposed. First, because this work aims to deploy the model in a real-time application, we reviewed related work in the field, focusing in particular on one-stage detectors and the YOLO algorithm, one of the best-known algorithms for real-time object detection. Second, using pre-trained YOLOv4-Darknet models and transfer learning, we trained and tested on NEU-DET, an open-source hot-rolled steel defect dataset. Our application targets four typical types of steel surface defects: patches, pitted surface, inclusion, and scratches. Third, we evaluated the real-time performance of the trained YOLOv4 model for deployment, achieving an accuracy of 87.1% mAP@0.5 and over 60 fps with GPU processing.
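
As a rough illustration of the deployment step described above, the sketch below loads a Darknet-format YOLOv4 model through OpenCV's DNN module and runs it on a live camera stream. It is a minimal sketch, not the authors' code: the file names (yolov4-neu.cfg, yolov4-neu.weights), the 416x416 input size, and the 0.5/0.4 thresholds are assumptions, and the CUDA backend lines require an OpenCV build with CUDA support.

```python
# Minimal sketch: real-time defect detection with a Darknet-format YOLOv4 model
# loaded through OpenCV's DNN module. File names and thresholds are hypothetical.
import cv2

CLASSES = ["patches", "pitted_surface", "inclusion", "scratches"]  # NEU-DET defect types used in the paper

net = cv2.dnn.readNetFromDarknet("yolov4-neu.cfg", "yolov4-neu.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # GPU processing, as in the reported >60 fps setup;
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)    # omit these two lines without a CUDA-enabled build

model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # camera stream standing in for the inspection line
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{CLASSES[int(cid)]}: {float(score):.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("defects", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```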

A novel method for determining dose distribution on panoramic reconstruction computed tomography images from radiotherapy computed tomography

  • Hiroyuki Okamoto;Madoka Sakuramachi;Wakako Yatsuoka;Takao Ueno;Kouji Katsura;Naoya Murakami;Satoshi Nakamura;Kotaro Iijima;Takahito Chiba;Hiroki Nakayama;Yasunori Shuto;Yuki Takano;Yuta Kobayashi;Hironori Kishida;Yuka Urago;Masato Nishitani;Shuka Nishina;Koushin Arai;Hiroshi Igaki
    • Imaging Science in Dentistry / v.54 no.2 / pp.129-137 / 2024
  • Purpose: Patients with head and neck cancer (HNC) who undergo dental procedures during radiotherapy (RT) face an increased risk of developing osteoradionecrosis (ORN). Accordingly, new tools must be developed to extract critical information regarding the dose delivered to the teeth and mandible. This article proposes a novel approach for visualizing 3-dimensional planned dose distributions on panoramic reconstruction computed tomography (pCT) images. Materials and Methods: Four patients with HNC who underwent volumetric modulated arc therapy were included. One patient experienced ORN and required the extraction of teeth after RT. In the study approach, the dental arch curve (DAC) was defined using an open-source platform. Subsequently, pCT images and dose distributions were generated based on the new coordinate system. All teeth and mandibles were delineated on both the original CT and pCT images. To evaluate the consistency of dose metrics, the Mann-Whitney U test and Student t-test were employed. Results: A total of 61 teeth and 4 mandibles were evaluated. The correlation coefficient between the 2 methods was 0.999, and no statistically significant difference was observed (P>0.05). This method facilitated a straightforward and intuitive understanding of the delivered dose. In 1 patient, ORN corresponded to the region of the root and the gum receiving a high dosage (approximately 70 Gy). Conclusion: The proposed method particularly benefits dentists involved in the management of patients with HNC. It enables the visualization of a 3-dimensional dose distribution in the teeth and mandible on pCT, enhancing the understanding of the dose delivered during RT.
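
As a schematic illustration of the coordinate transformation described in the Methods (sampling the 3D planned-dose grid along a dental arch curve to build a panoramic dose image), the sketch below uses NumPy and SciPy interpolation. It is not the authors' implementation nor the open-source platform they used; the array names, grid spacings, and control points are placeholders.

```python
# Schematic sketch of resampling a 3D planned-dose grid onto a panoramic
# (dental arch curve) coordinate system. Array names and spacings are hypothetical.
import numpy as np
from scipy.interpolate import RegularGridInterpolator, splev, splprep

# dose_grid: planned dose in Gy on a regular CT grid (z, y, x); axes in mm
dose_grid = np.random.rand(120, 256, 256) * 70.0            # placeholder volume
z_ax = np.arange(120) * 2.0
y_ax = np.arange(256) * 1.0
x_ax = np.arange(256) * 1.0
dose_interp = RegularGridInterpolator((z_ax, y_ax, x_ax), dose_grid)

# arch_points: manually placed (x, y) control points tracing the dental arch curve (DAC)
arch_points = np.array([[60, 80], [90, 60], [128, 55], [166, 60], [196, 80]], float)
tck, _ = splprep(arch_points.T, s=0)                         # smooth curve through the control points
u = np.linspace(0, 1, 400)                                   # 400 columns of the panoramic image
x_c, y_c = splev(u, tck)

# Sample the dose along the curve for every axial slice -> panoramic dose image
pano_dose = np.empty((len(z_ax), len(u)))
for i, z in enumerate(z_ax):
    pts = np.column_stack([np.full_like(x_c, z), y_c, x_c])  # (z, y, x) sample positions
    pano_dose[i, :] = dose_interp(pts)

print("panoramic dose image shape:", pano_dose.shape)        # rows = slices, cols = arch positions
```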

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of data volume, data input and output speed (velocity), and variety of data types. If one can discover the trend of an issue in SNS Big Data, this information can serve as an important new source for creating value, because it covers society as a whole. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) show the importance of a topic through a treemap based on the scoring system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. Such analysis also requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL databases, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from a single node to thousands of machines. Furthermore, we use MongoDB, a NoSQL, open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its primary goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly; therefore, TITS uses the d3.js library as its visualization tool. This library is designed to create Data-Driven Documents that bind the Document Object Model (DOM) to arbitrary data, making data interaction easy and well suited to managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI) is designed with these libraries and allows issues on Twitter to be detected in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The experiments used nearly 150 million tweets collected in Korea during March 2013.
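
As a rough sketch of the pipeline TITS describes (tokenize tweets, fit a topic model, store daily topic keyword sets in MongoDB), the example below uses gensim's LDA and pymongo. It is an illustration only, not the system's code: the collection names, topic count, and sample tweets are assumptions, the Korean stop-word removal and noun extraction steps are replaced by plain whitespace tokenization, and the MongoDB calls assume a local instance is running.

```python
# Minimal sketch of a TITS-style pipeline: tokenize tweets, fit an LDA topic
# model, and store daily topic keyword sets in MongoDB. Names are hypothetical.
from gensim import corpora
from gensim.models import LdaModel
from pymongo import MongoClient

tweets = [
    "big data analysis on hadoop cluster",
    "new smartphone release draws long lines",
    "hadoop and nosql power real time analytics",
    "election debate trends on social media",
]

# In the paper, Korean tweets go through stop-word removal and noun extraction;
# plain whitespace tokenization stands in for that step here.
docs = [t.lower().split() for t in tweets]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=1)

# Store each day's topic keyword sets in a document-oriented store (MongoDB),
# analogous to the daily ranking view TITS provides. Requires a local MongoDB.
client = MongoClient("mongodb://localhost:27017/")
collection = client["tits"]["daily_topics"]
for topic_id in range(lda.num_topics):
    keywords = [w for w, _ in lda.show_topic(topic_id, topn=5)]
    collection.insert_one({"date": "2013-03-01", "topic": topic_id, "keywords": keywords})

print([lda.show_topic(t, topn=5) for t in range(lda.num_topics)])
```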

Functional recovery after transplantation of mouse bone marrow-derived mesenchymal stem cells for hypoxic-ischemic brain injury in immature rats (저산소 허혈 뇌 손상을 유발시킨 미성숙 흰쥐에서 마우스 골수 기원 중간엽 줄기 세포 이식 후 기능 회복)

  • Choi, Wooksun;Shin, Hye Kyung;Eun, So-Hee;Kang, Hoon Chul;Park, Sung Won;Yoo, Kee Hwan;Hong, Young Sook;Lee, Joo Won;Eun, Baik-Lin
    • Clinical and Experimental Pediatrics / v.52 no.7 / pp.824-831 / 2009
  • Purpose: We aimed to investigate the efficacy of, and functional recovery after, intracerebral transplantation of different doses of mouse mesenchymal stem cells (mMSCs) in the immature rat brain with hypoxic-ischemic encephalopathy (HIE). Methods: Seven-day-old Sprague-Dawley rats that had undergone a unilateral HI operation were given stereotaxic intracerebral injections of either vehicle or mMSCs and were then tested for locomotor activity at 2, 4, 6, and 8 weeks after the stem cell injection. In the 8th week, the Morris water maze test was performed over one week to evaluate learning and memory dysfunction. Results: In the open field test, no differences in total distance per total duration (F=0.412, P=0.745) were observed among the 4 study groups. In the invisible-platform Morris water maze test, significant differences in escape latency (F=380.319, P<0.01) were observed among the 4 groups. The escape latency in the control group differed significantly from that in the high-dose mMSC and/or sham groups on training days 2-5 (Scheffe's test, P<0.05), and the difference became more prominent over time (F=6.034, P<0.01). In the spatial probe trial and the visible-platform Morris water maze test, no significant improvement was observed in the transplanted rats. Conclusion: Although the rats that received a high dose of mMSCs showed significant recovery only in the learning-related behavioral test, our data support that mMSCs may be a valuable source for improving outcomes in HIE. Further study is necessary to identify the optimal dose for maximal efficacy in HIE treatment.

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology / v.54 no.2 / pp.177-186 / 2021
  • For earthquake disaster management and mitigation on the Korean Peninsula, active fault investigations have been conducted for the past 5 years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological survey, and geophysical exploration, and exposes subsurface active faults through trench surveys. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the sites are restored to their previous condition, so the geological data describing fault trench sites remain only as qualitative data in research articles and reports. To overcome the limitations imposed by the temporal nature of such geological studies, we used a terrestrial LiDAR to produce 3D point clouds of fault trench sites and restored them in digital space. Terrestrial LiDAR scanning was conducted at two trench sites near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information obtained by combining photogrammetry with the LiDAR system. The scanned data were merged into 3D point clouds with an average geometric error of 0.003 m, accurate enough to restore the details of the surveyed trench sites. However, further post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied with scan position, and the colors of the trench surfaces were captured differently depending on the light exposure available at the time. Such point clouds are also very large and can be visualized with only a limited set of software tools, which limits data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, for visualizing the point clouds of the trench sites. As a result, we found that terrestrial LiDAR data can be a practical way to increase the reproducibility of geological field studies and to make them easily accessible to researchers and students in the Earth Sciences.
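
As an outline of the workflow in the abstract (merging terrestrial LiDAR scans into one point cloud and converting it for web visualization in Potree), the sketch below uses Open3D and the PotreeConverter command-line tool. It is a plausible sketch only, not the authors' processing chain; the file names, voxel size, and ICP threshold are assumptions, and PotreeConverter must be installed and on the PATH.

```python
# Sketch: merge two terrestrial LiDAR scans with a simple ICP refinement (Open3D),
# write the merged cloud, then convert it for web visualization with PotreeConverter.
# File names and numeric parameters below are hypothetical.
import subprocess
import open3d as o3d

scan_a = o3d.io.read_point_cloud("trench_scan_position1.ply")
scan_b = o3d.io.read_point_cloud("trench_scan_position2.ply")

# Downsample to keep the registration tractable for large clouds.
a_ds = scan_a.voxel_down_sample(voxel_size=0.01)
b_ds = scan_b.voxel_down_sample(voxel_size=0.01)

# Point-to-point ICP refinement; assumes the scans are already roughly aligned
# (e.g. by targets or the scanner software), as is typical for trench surveys.
reg = o3d.pipelines.registration.registration_icp(
    b_ds, a_ds, max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the refined transform to the full-resolution scan and merge.
scan_b.transform(reg.transformation)
merged = scan_a + scan_b
o3d.io.write_point_cloud("trench_merged.ply", merged)

# Convert the merged cloud into Potree's octree format for browser-based streaming.
subprocess.run(["PotreeConverter", "trench_merged.ply", "-o", "potree_output"], check=True)
```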