• Title/Summary/Keyword: real-time database


Joint Reasoning of Real-time Visual Risk Zone Identification and Numeric Checking for Construction Safety Management

  • Ali, Ahmed Khairadeen;Khan, Numan;Lee, Do Yeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2020.12a
    • /
    • pp.313-322
    • /
    • 2020
  • Recognizing risk hazards is a vital step in effectively preventing accidents on a construction site. Advances in computer vision systems and the availability of large visual databases of construction sites make it possible to act quickly on the human errors and disaster situations that may occur during management supervision. It is therefore necessary to analyze the risk factors that must be managed on a construction site and to review appropriate, effective technical methods for each risk factor. This research analyzes Occupational Safety and Health Administration (OSHA) rules related to risk zone identification that can be adopted by image recognition technology and classifies their risk factors by the applicable technical method. To that end, the research develops a pattern-oriented classification of OSHA rules that supports large-scale safety hazard recognition. It uses joint reasoning over risk zone identification and numeric input, utilizing a stereo camera integrated with an image detection algorithm (YOLOv3) and the Pyramid Stereo Matching Network (PSMNet). The resulting system identifies risk zones and raises an alarm if a target object enters such a zone. It also determines numerical information about a target, recognizing its length, spacing, and angle. Applying joint image detection logic may improve both the speed and the accuracy of hazard detection, since more than one factor is merged to prevent accidents on the job site (a minimal sketch of the joint check follows this entry).

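To make the joint check concrete, the following minimal sketch pairs a 2D detection with a stereo depth estimate and tests it against a predefined risk zone. The paper names YOLOv3 and PSMNet; here the detection and depth values are stand-in inputs rather than model outputs, and all coordinates are illustrative.

```python
# Sketch of the joint-reasoning step: a detector proposes a 2D box, a
# stereo depth estimate lifts it toward 3D, and a containment test raises
# an alarm when a worker enters a predefined risk zone. The Detection
# values stand in for YOLOv3/PSMNet outputs; all numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple          # (x_min, y_min, x_max, y_max) in pixels
    depth_m: float      # median stereo depth inside the box, in meters

@dataclass
class RiskZone:
    box: tuple          # (x_min, y_min, x_max, y_max) in pixels
    depth_range: tuple  # (near_m, far_m)

def overlaps(a: tuple, b: tuple) -> bool:
    """True if two pixel boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def in_risk_zone(det: Detection, zone: RiskZone) -> bool:
    near, far = zone.depth_range
    return overlaps(det.box, zone.box) and near <= det.depth_m <= far

zone = RiskZone(box=(100, 50, 400, 300), depth_range=(2.0, 6.0))
worker = Detection(label="person", box=(150, 80, 220, 280), depth_m=3.4)

if worker.label == "person" and in_risk_zone(worker, zone):
    print("ALARM: worker entered risk zone")
```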

Information-providing Application Based on Web Crawling (웹 크롤링을 통한 개인 맞춤형 정보제공 애플리케이션)

  • Ju-Hyeon Kim;Jeong-Eun Choi;U-Gyeong Shin;Min-Jun Piao;Tae-Kook Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.1
    • /
    • pp.21-27
    • /
    • 2024
  • This paper presents the implementation of a personalized application that provides real-time information using filtering and web crawling technologies. The implemented application uses the Jsoup library to crawl web pages for user-set keywords, and the crawled data are stored in a MySQL database. The stored data are presented to the user through an application implemented in Flutter, and mobile push notifications are provided using Firebase Cloud Messaging (FCM). Through these methods, users can quickly and efficiently obtain the information they want. Furthermore, this approach is expected to be applicable to the Internet of Things (IoT), where big data is generated, allowing users to receive only the information they need (a minimal crawl-and-store sketch follows this entry).
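
A minimal sketch of the crawl-filter-store loop described above. The paper's implementation uses the Jsoup library (Java) with MySQL; requests/BeautifulSoup and SQLite are used below as stand-ins, and the URL, selector, and keywords are illustrative.

```python
# Crawl a page, keep only items matching user-set keywords, and persist
# them. requests/BeautifulSoup/SQLite substitute for the paper's
# Jsoup/MySQL stack; the URL and keywords are placeholders.

import sqlite3
import requests
from bs4 import BeautifulSoup

def crawl_matching(url: str, keywords: list[str]) -> list[str]:
    """Return link texts on the page that contain any user-set keyword."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    titles = [a.get_text(strip=True) for a in soup.select("a")]
    return [t for t in titles if any(k in t for k in keywords)]

conn = sqlite3.connect("articles.db")
conn.execute("CREATE TABLE IF NOT EXISTS article (title TEXT UNIQUE)")

for title in crawl_matching("https://example.com/news", ["IoT", "database"]):
    # INSERT OR IGNORE keeps re-crawled items from duplicating rows.
    conn.execute("INSERT OR IGNORE INTO article VALUES (?)", (title,))
conn.commit()
```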

Study on the Failure Diagnosis of Robot Joints Using Machine Learning (기계학습을 이용한 로봇 관절부 고장진단에 대한 연구)

  • Mi Jin Kim;Kyo Mun Ku;Jae Hong Shim;Hyo Young Kim;Kihyun Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.4
    • /
    • pp.113-118
    • /
    • 2023
  • Maintenance of semiconductor equipment processes is crucial for the continuous growth of the semiconductor market. The process must always be kept in optimal condition to ensure a smooth supply of numerous parts, and the status of the robots that play a central role in the process must be monitored. Just as the body's many senses indicate a person's condition, a robot's numerous sensors indicate its condition, and, as with human joints, degradation appears first in the joints that drive the robot. Therefore, a normal-state test bed and an abnormal-state test bed using an aged reducer were constructed to simulate the joint, the driving part of the robot. Various sensors, including vibration, torque, encoder, and temperature sensors, were attached to diagnose the robot's failures accurately, and the test bed was built as an integrated system that collects and controls data simultaneously in real time. After configuring the user screen and building a database from the collected data, the characteristic values of normal and abnormal data were analyzed, and machine learning was performed using the KNN (K-Nearest Neighbors) algorithm. This approach yielded 94% accuracy in failure diagnosis, underscoring the reliability of both the test bed and the data it produced (a minimal classification sketch follows this entry).

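A minimal sketch of the KNN diagnosis step, using scikit-learn. The feature values below are synthetic placeholders; the paper's actual features come from the vibration, torque, encoder, and temperature sensors on the joint test beds.

```python
# Train a KNN classifier to separate normal joints from joints with an
# aged reducer. Feature vectors are synthetic stand-ins for the test-bed
# sensor features (vibration RMS, torque ripple, temperature).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

normal = rng.normal([0.2, 1.0, 40.0], 0.05, size=(200, 3))
faulty = rng.normal([0.5, 1.6, 55.0], 0.05, size=(200, 3))
X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)   # 0 = normal, 1 = aged reducer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"diagnosis accuracy: {clf.score(X_te, y_te):.2f}")
```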

Chlorophyll contents and expression profiles of photosynthesis-related genes in water-stressed banana plantlets

  • Sri Nanan Widiyanto;Syahril Sulaiman;Simon Duve;Erly Marwani;Husna Nugrahapraja;Diky Setya Diningrat
    • Journal of Plant Biotechnology
    • /
    • v.50
    • /
    • pp.127-136
    • /
    • 2023
  • Water scarcity decreases the rate of photosynthesis and, consequently, the yield of banana plants (Musa spp). In this study, transcriptome analysis was performed to identify photosynthesis-related genes in banana plants and determine their expression profiles under water stress conditions. Banana plantlets were cultured in vitro on Murashige and Skoog agar medium with and without 10% polyethylene glycol, marked BP10 and BK, respectively. Chlorophyll contents in the plant shoots were determined spectrophotometrically. Two cDNA libraries generated from BK and BP10 plantlets, respectively, were used as the reference for the transcriptome data. Gene ontology (GO) enrichment analysis was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) and visualized using Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway prediction. Morphological observations indicated that water deficiency caused chlorosis and reduced the shoot chlorophyll content of banana plantlets. GO enrichment identified 52 photosynthesis-related genes that were affected by water stress. KEGG visualization revealed the pathways related to the 52 photosynthesis-related genes and their allocation across four GO terms: four, 12, 15, and 21 genes were related to chlorophyll biosynthesis, the Calvin cycle, the photosynthetic electron transfer chain, and the light-harvesting complex, respectively. Differentially expressed gene (DEG) analysis using DESeq revealed that 45 genes were down-regulated, whereas seven genes were up-regulated. Four of the down-regulated genes were responsible for chlorophyll biosynthesis and appeared to cause the decrease in the banana leaf chlorophyll content. Among the annotated DEGs, MaPNDO, MaPSAL, and MaFEDA were selected and validated using quantitative real-time PCR (a toy DEG-call sketch follows this entry).
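
DESeq itself is an R/Bioconductor package, so the fragment below only illustrates, in Python, how a DEG call of the kind described above splits genes into up- and down-regulated sets. The fold-change and p-value numbers are invented for illustration; only the gene names appear in the abstract.

```python
# Toy DEG call: classify genes as up- or down-regulated from per-gene
# statistics (log2 fold change, adjusted p-value). All numbers are
# invented placeholders, not the study's DESeq output.

genes = {
    # gene: (log2 fold change BP10 vs BK, adjusted p-value) -- illustrative
    "MaPNDO": (-2.1, 0.001),
    "MaPSAL": (-1.8, 0.004),
    "MaFEDA": (1.5, 0.010),
}

up = {g for g, (lfc, p) in genes.items() if lfc >= 1 and p < 0.05}
down = {g for g, (lfc, p) in genes.items() if lfc <= -1 and p < 0.05}
print("up-regulated:", up)
print("down-regulated:", down)
```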

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult both to expand storage flexibly enough for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict, complex schemas that are inappropriate for unstructured log data, and they cannot expand across nodes when rapidly growing data must be distributed over multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it supports flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and per type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insert performance evaluation over various chunk sizes (a minimal storage sketch follows this entry).
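
A minimal sketch of the schema-free storage the MongoDB module relies on, using pymongo. The connection URI, collection, and field names are assumptions; the point is that differently shaped log records coexist in one collection and can be aggregated per type, as the log graph generator module requires.

```python
# Store heterogeneous bank log records in one MongoDB collection with no
# fixed schema, then aggregate counts per log type. URI, database,
# collection, and field names are illustrative assumptions.

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
logs = client["bank"]["logs"]

# Two differently shaped records coexist in one collection.
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "transfer",
                 "branch": "A-103", "amount": 50000})
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "login_error",
                 "terminal": "T-07", "message": "timeout"})

# Per-type counts: the kind of aggregate a log graph generator would plot.
pipeline = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]
for row in logs.aggregate(pipeline):
    print(row)
```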

Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.4
    • /
    • pp.35-47
    • /
    • 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentration of pollutants. A characteristic of most of these strategies is the need for accurate data on both the quantity and the spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled from passive data, drawn from either surveys or transportation models, and by their very nature they tend to be out of date by the time they are compiled. Current trends are toward integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for calculating motor vehicle emissions from real-time traffic data was developed and applied to estimate CO emissions in a test area in Seoul. Traffic data, required on a street-by-street basis, were obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering traffic volume, composition, average velocity, and link length (a per-link sketch follows this entry). The result was compared with that of an emission calculation based on VKT (Vehicle Kilometers Travelled) per vehicle category.

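A minimal sketch of the per-link calculation described above: loop-detector volume, a speed-dependent CO emission factor, and link length yield the CO mass for a link. The factor table below is a made-up placeholder, not the study's actual emission factors.

```python
# Per-link CO emission: volume (veh) x speed-dependent factor (g/veh/km)
# x link length (km). The factor steps below are invented placeholders.

def co_factor_g_per_km(speed_kmh: float) -> float:
    """Hypothetical speed-dependent CO emission factor (g/veh/km)."""
    if speed_kmh < 20:
        return 35.0
    if speed_kmh < 60:
        return 15.0
    return 8.0

def link_co_grams(volume_veh: int, speed_kmh: float,
                  length_km: float) -> float:
    return volume_veh * co_factor_g_per_km(speed_kmh) * length_km

# One detector interval: 420 vehicles at 27 km/h over a 0.8 km link.
print(f"CO: {link_co_grams(420, 27.0, 0.8):.0f} g")
```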

The Study on the Development of Analysis and Management System for Traffic Accident Spatial DB (교통사고 공간 DB관리 및 분석 시스템 개발에 관한 연구)

  • Yu Ji Yeon;Jeon Jae Yong;Jeon Hyeong Seob;Cho Gi Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.23 no.4
    • /
    • pp.345-352
    • /
    • 2005
  • Traffic accident management and analysis have not kept pace with today's information age, and the two tasks are still performed separately. The National Police Agency, which manages traffic accidents, does not apply up-to-date techniques and still works by hand, and research on analyzing accident causes in relation to geographic and compositional factors remains weak. Therefore, to establish and enforce traffic safety policy effectively and to evaluate traffic accident data, a systematic and scientific analysis of the causes and characteristics of traffic accident occurrences must first be carried out. This research builds a traffic accident database on a GIS basis. Instead of the existing text-form data collection, it uses a PDA to convert accident data into a standard traffic accident data form in real time, so the data can be stored and managed directly in the field (a sketch of this standardization step follows this entry). The work relates the characteristics of spatial data to accident causes and develops the system together with geographic analysis of the accident data.
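
A minimal sketch of the standardization step described above: a raw record captured on the PDA is mapped onto a fixed accident-record form with coordinates so it can be stored and analyzed spatially. All field names are hypothetical.

```python
# Map a raw PDA entry onto a standard accident-record form with
# coordinates. Field names and values are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class AccidentRecord:
    accident_id: str
    lon: float
    lat: float
    accident_type: str
    occurred_at: str   # ISO 8601 timestamp

def from_pda(raw: dict) -> AccidentRecord:
    """Convert a raw PDA entry into the standard accident-record form."""
    return AccidentRecord(
        accident_id=raw["id"],
        lon=float(raw["gps"][0]),
        lat=float(raw["gps"][1]),
        accident_type=raw["type"],
        occurred_at=raw["time"],
    )

rec = from_pda({"id": "2005-0412", "gps": (126.98, 37.57),
                "type": "rear-end", "time": "2005-04-12T08:31:00"})
print(rec)
```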

A Single Index Approach for Subsequence Matching that Supports Normalization Transform in Time-Series Databases (시계열 데이터베이스에서 단일 색인을 사용한 정규화 변환 지원 서브시퀀스 매칭)

  • Moon Yang-Sae;Kim Jin-Ho;Loh Woong-Kee
    • The KIPS Transactions:PartD
    • /
    • v.13D no.4 s.107
    • /
    • pp.513-524
    • /
    • 2006
  • Normalization transform is very useful for finding the overall trend of time-series data, since it enables finding sequences with similar fluctuation patterns. The previous subsequence matching method with normalization transform, however, incurs index overhead both in storage space and in update maintenance, since it must build multiple indexes to support query sequences of arbitrary length. To solve this problem, we propose a single-index approach to normalization-transformed subsequence matching that supports query sequences of arbitrary length. For the single-index approach, we first introduce the notion of the inclusion-normalization transform by generalizing the original definition of normalization transform: the inclusion-normalization transform normalizes a window using the mean and the standard deviation of a subsequence that includes the window (a minimal sketch follows this entry). Next, we formally prove the correctness of the proposed method, which uses the inclusion-normalization transform for normalization-transformed subsequence matching. We then propose subsequence matching and index building algorithms that implement the proposed method. Experimental results on real stock data show that our method improves performance by up to 2.5~2.8 times over the previous method. Our approach has the additional advantage of generalizing to many other transforms besides normalization transform; we therefore believe this work will be widely used in transform-based subsequence matching methods.
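
A minimal sketch of the inclusion-normalization transform as defined above: the window is normalized with the mean and standard deviation of an enclosing subsequence rather than with its own statistics. Function and variable names are illustrative.

```python
# Inclusion-normalization: normalize a window using the mean and standard
# deviation of a subsequence that contains it, per the definition above.

import numpy as np

def inclusion_normalize(window: np.ndarray,
                        subsequence: np.ndarray) -> np.ndarray:
    """Normalize `window` with the statistics of the enclosing subsequence."""
    mu = subsequence.mean()
    sigma = subsequence.std()
    return (window - mu) / sigma

seq = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0])
window = seq[1:4]   # a window contained in the subsequence
print(inclusion_normalize(window, seq))
```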

Optimization of Post-Processing for Subsequence Matching in Time-Series Databases (시계열 데이터베이스에서 서브시퀀스 매칭을 위한 후처리 과정의 최적화)

  • Kim, Sang-Uk
    • The KIPS Transactions:PartD
    • /
    • v.9D no.4
    • /
    • pp.555-560
    • /
    • 2002
  • Subsequence matching, which consists of an index-searching step and a post-processing step, is an operation that finds, in a time-series database, the subsequences whose changing patterns are similar to that of a given query sequence. This paper discusses the optimization of post-processing for subsequence matching. The common problem in the post-processing of previous methods is that each candidate subsequence is compared with the query sequence, to discard false alarms, at the moment it appears during index searching. As a result, a sequence containing candidate subsequences is accessed from disk multiple times, and a candidate subsequence may be compared with the query sequence multiple times. These redundancies seriously degrade the performance of subsequence matching. In this paper, we propose a new optimal method that resolves this problem. The proposed method stores all the candidate subsequences returned by the index search in a binary search tree and performs post-processing in a batch fashion after the index search finishes (a sketch follows this entry). In this way, the redundancies mentioned above are completely eliminated. To verify the performance improvement of the proposed method, we performed extensive experiments using a real-life stock data set. The results reveal that the proposed method achieves a 55- to 156-fold speedup over previous methods.
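
A minimal sketch of the batched post-processing idea: candidates from the index search are deduplicated and sorted before verification, so each data sequence is read once and each candidate is compared with the query once. The paper uses a binary search tree; `sorted(set(...))` plays that role here, and `read_subsequence` and the epsilon test are stand-ins.

```python
# Batched post-processing: collect all candidates first, deduplicate and
# order them, then verify each against the query exactly once.

import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def batch_post_process(candidates, query, read_subsequence, eps):
    """candidates: iterable of (sequence_id, offset) pairs from index search."""
    results = []
    # Deduplicate, then sort so offsets within one sequence are visited
    # together (the paper's binary search tree has the same effect).
    for seq_id, offset in sorted(set(candidates)):
        sub = read_subsequence(seq_id, offset, len(query))
        if euclidean(sub, query) <= eps:
            results.append((seq_id, offset))
    return results

# Toy usage with one in-memory "database" sequence.
db = {0: np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0])}
reader = lambda sid, off, n: db[sid][off:off + n]
query = np.array([3.0, 4.0, 5.0])
print(batch_post_process([(0, 2), (0, 2), (0, 4)], query, reader, eps=1.0))
```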

A Self-Service Business Intelligence System for Recommending New Crops (재배 작물 추천을 위한 셀프서비스 비즈니스 인텔리전스 시스템)

  • Kim, Sam-Keun;Kim, Kwang-Chae;Kim, Hyeon-Woo;Jeong, Woo-Jin;Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.527-535
    • /
    • 2021
  • Traditional business intelligence (BI) systems have been widely used as tools for better, timely decision-making. On the other hand, building a data warehouse (DW) for the efficient analysis of rapidly growing data is time-consuming and complex. In particular, the ETL (Extract, Transform, and Load) process required to build a data warehouse has become much more complex as BI platforms move to cloud environments. Various BI solutions based on NoSQL databases, such as MongoDB, have been proposed to overcome these ETL issues, and decision-makers want easy access to data without the help of IT departments or BI experts. Recently, self-service BI (SSBI) has emerged as a way to solve these BI issues. This paper proposes a self-service BI system for farming data that uses the MongoDB cloud as its DW to support the selection of new crops by returning farmers. The proposed system includes functions that provide insights to decision-makers: data visualization using MongoDB Charts, reporting for advanced data search, and monitoring for real-time data analysis. Decision-makers can access data directly in various ways and analyze it in a self-service manner using these functions (a minimal aggregation sketch follows this entry).
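
A minimal sketch of the kind of self-service aggregate the system exposes, using pymongo. The database, collection, and field names are assumptions; the paper's visualization runs on MongoDB Charts over the MongoDB cloud.

```python
# Aggregate farming records per crop (average yield and farm count), the
# kind of summary a self-service BI view would chart. Connection URI,
# database, collection, and field names are illustrative assumptions.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
farms = client["farming"]["records"]

pipeline = [
    {"$group": {"_id": "$crop",
                "avg_yield": {"$avg": "$yield_kg_per_10a"},
                "farms": {"$sum": 1}}},
    {"$sort": {"avg_yield": -1}},
]
for row in farms.aggregate(pipeline):
    print(row)
```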