• Title/Summary/Keyword: service engineering


Research on Bridge Maintenance Methods Using BIM Model and Augmented Reality (BIM 모델과 증강현실을 활용한 교량 유지관리방안 연구)

  • Choi, Woonggyu;Pa Pa Win Aung;Sanyukta Arvikar;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.1 / pp.1-9 / 2024
  • The number of bridges, a major class of civil structures, has increased from 584 in the 1970s to 38,405. As this number grows, the number of bridges in service for more than 30 years is expected to reach 21,737 (71%) by 2030, and maintaining these facilities with minimal personnel raises the risk of fatal accidents. Accordingly, the importance of bridge safety inspection and maintenance is increasing, as is the need for decision-making support for supervisors who manage multiple bridges. Currently, bridge safety inspection and maintenance consist of writing damage, condition, location, and specifications on an exterior survey map by hand, or recording them by taking pictures with a camera. However, mis-recorded damage or defects, supervisor mistakes, and typos can reduce the reliability of the overall safety inspection and diagnosis. To improve this, this study visualizes damage data recorded in a BIM model in an AR environment and proposes a maintenance plan that supports supervisors' maintenance decision-making so that bridges can be managed with a small number of people.

Automation of Sampling for Public Survey Performance Assessment (공공측량 성과심사 표본추출 자동화 가능성 분석)

  • Choi, Hyun;Jin, Cheol;Lee, Jung Il;Kim, Gi Hong
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.1 / pp.95-100 / 2024
  • The public survey performance review conducted by the Spatial Information Quality Management Institute is performed at a screening rate set by regulation, and the examiner judges the overall quality of the submitted results from the extracted sample. The regulations of the Ministry of Land, Infrastructure and Transport, however, specify that the evaluation trustee select the sample by random extraction. In this study, we analyzed the details of actual sites by securing real performance review data, examined considerations arising from various field conditions, and studied ways to apply a sampling algorithm to the public survey performance review; a detailed analysis of the sampling criteria used by individual reviewers is therefore necessary. A relative comparison was made feasible by comparing data from real performance reviews with the output of a Python automation program. This program is expected to serve as a foundation for automating public survey performance review sampling in the future.
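The random-extraction step described above can be sketched with Python's standard library; `draw_samples`, the 10% rate, and the item names below are illustrative assumptions, not the institute's actual program:

```python
import random

def draw_samples(performance_items, screening_rate, seed=None):
    """Randomly extract a screening-rate fraction of submitted results."""
    rng = random.Random(seed)          # seeded for a reproducible audit trail
    n_samples = max(1, round(len(performance_items) * screening_rate))
    return rng.sample(performance_items, n_samples)

# 200 hypothetical survey result sheets, reviewed at a 10% screening rate.
items = [f"sheet-{i:03d}" for i in range(1, 201)]
picked = draw_samples(items, screening_rate=0.1, seed=42)
print(len(picked))  # 20
```

Seeding the generator keeps the extraction random with respect to the submitter yet reproducible for the reviewer, which is what a regulation-compliant automation program would need.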

A Study on the Extraction of Psychological Distance Embedded in Company's SNS Messages Using Machine Learning (머신 러닝을 활용한 회사 SNS 메시지에 내포된 심리적 거리 추출 연구)

  • Seongwon Lee;Jin Hyuk Kim
    • Information Systems Review / v.21 no.1 / pp.23-38 / 2019
  • Social network services (SNSs) are important marketing channels, so many companies actively exploit them by posting SNS messages with content and style appropriate for their customers. In this paper, we focused on the psychological distance embedded in SNS messages and developed a method to measure it by combining traditional content analysis, natural language processing (NLP), and machine learning. Through content analysis by human coding, the psychological distance was extracted from each SNS message, and these coding results were used as input data for NLP and machine learning. With NLP, word embedding was executed and a bag-of-words representation was created. A Support Vector Machine (SVM), one of the machine learning techniques, was trained and tested to predict the psychological distance in SNS messages. Initially, the sensitivity and precision of the SVM predictions were significantly low because of the extreme skewness of the dataset. We improved the performance of the SVM by balancing the class ratio with an upsampling technique and by using only data coded with the same value in the first content analysis. All performance indices then exceeded 70%, which showed that psychological distance can be measured well.
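The upsampling fix the abstract describes can be sketched without any ML library; the message strings, labels, and `upsample_minority` helper below are hypothetical, and a classifier such as an SVM would then be trained on the balanced output:

```python
import random
from collections import Counter

def upsample_minority(samples, labels, seed=0):
    """Balance a skewed dataset by resampling minority-class items
    (with replacement) until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

X = ["msg%d" % i for i in range(10)]
y = ["near"] * 8 + ["far"] * 2          # extreme skew, as in the abstract
Xb, yb = upsample_minority(X, y)
print(Counter(yb))  # both classes now have 8 samples
```

With the classes balanced, a classifier no longer maximizes accuracy by always predicting the majority label, which is why sensitivity and precision on the minority class recover.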

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings such as device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The application therefore has to analyze only the region of interest and extract only the valuable character types. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of feature vectors; and the third is a bi-directional long short-term memory network that converts this sequential information into character strings by a time-series mapping from feature vectors to characters. In this research, the strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA TESLA V100 GPU. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into a FIFO (First In First Out) input queue. The slave process consists of the three deep neural networks that conduct character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When a request from the master process arrives, the slave process converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
22,985 images were used for training and validation and 4,135 images for testing. For each training epoch, the 22,985 images were randomly split 8:2 into training and validation sets. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images; noise means images with noise signals; reflex means images with light reflection on the gasometer region; scale means images with small object sizes due to long-distance capturing; and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
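The master-slave queueing flow described above can be sketched with Python's standard `queue` and `threading` modules; the recognition step is stubbed out here, since the paper's actual networks run on a GPU, and all names below are illustrative:

```python
import queue
import threading

input_q = queue.Queue()    # FIFO request queue filled by the master
output_q = queue.Queue()   # recognition results read back by the master

def slave_worker():
    """Stand-in for the GPU slave: polls the input queue, 'recognizes'
    the image, and pushes device ID / usage strings to the output queue.
    The recognition itself is a placeholder for the CNN+CRNN+BiLSTM stack."""
    while True:
        request = input_q.get()
        if request is None:              # sentinel: shut the worker down
            break
        image_id, fake_device_id = request
        output_q.put((image_id, {"device_id": fake_device_id, "usage": "0042"}))
        input_q.task_done()

worker = threading.Thread(target=slave_worker, daemon=True)
worker.start()

# Master side: enqueue two reading requests and collect the results.
input_q.put(("img-1", "123456789012"))
input_q.put(("img-2", "210987654321"))
input_q.join()                           # wait until the slave drains the queue
input_q.put(None)

results = dict(output_q.get() for _ in range(2))
print(results["img-1"]["device_id"])  # 123456789012
```

Decoupling the CPU-bound master from the GPU-bound slave through two queues is what lets the system absorb bursts within its ~700,000 requests per day while keeping the GPU busy.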

Comparing Prediction Uncertainty Analysis Techniques of SWAT Simulated Streamflow Applied to Chungju Dam Watershed (충주댐 유역의 유출량에 대한 SWAT 모형의 예측 불확실성 분석 기법 비교)

  • Joh, Hyung-Kyung;Park, Jong-Yoon;Jang, Cheol-Hee;Kim, Seong-Joon
    • Journal of Korea Water Resources Association / v.45 no.9 / pp.861-874 / 2012
  • To establish the applicability of the Soil and Water Assessment Tool (SWAT) model, it is important that the model pass through careful calibration and uncertainty analysis. In recent years, many researchers have proposed various uncertainty analysis techniques for the SWAT model. To determine the differences and similarities of typical techniques, we applied three uncertainty analysis procedures included in the SWAT Calibration and Uncertainty Program (SWAT-CUP) to the Chungju Dam watershed (6,581.1 $km^2$) of South Korea: Sequential Uncertainty FItting algorithm ver. 2 (SUFI2), Generalized Likelihood Uncertainty Estimation (GLUE), and Parameter Solution (ParaSol). As a result, there was no significant difference in the objective function values between the SUFI2 and GLUE algorithms. However, the ParaSol algorithm showed the worst objective function values, and considerable divergence was also shown among the 95PPU bands. The p-factor and r-factor differed by 0.02 to 0.79 and 0.03 to 0.52 for streamflow, respectively. In general, the ParaSol algorithm showed the lowest p-factor and r-factor, while the SUFI2 algorithm showed the highest. Therefore, for automatic calibration and uncertainty analysis of the SWAT model, we suggest calibration methods that consider the p-factor and r-factor. The p-factor is the percentage of observations covered by the 95PPU (95 Percent Prediction Uncertainty) band, and the r-factor is the average thickness of the 95PPU band.
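Under the definitions at the end of the abstract, the two factors can be computed as below; normalizing the band width by the standard deviation of the observations is SWAT-CUP's usual convention and is an assumption here, as the abstract only says "average thickness", and the streamflow values are hypothetical:

```python
def p_factor(observed, lower, upper):
    """Share of observations covered by the 95PPU band."""
    inside = sum(l <= o <= u for o, l, u in zip(observed, lower, upper))
    return inside / len(observed)

def r_factor(observed, lower, upper):
    """Average 95PPU band width divided by the standard deviation of
    the observations (SWAT-CUP's normalization)."""
    n = len(observed)
    mean = sum(observed) / n
    std = (sum((o - mean) ** 2 for o in observed) / n) ** 0.5
    avg_width = sum(u - l for l, u in zip(lower, upper)) / n
    return avg_width / std

# Four hypothetical daily streamflow values with their 95PPU bounds.
obs = [2.0, 3.0, 5.0, 4.0]
lo_band = [1.5, 2.5, 5.5, 3.0]
up_band = [2.5, 3.5, 6.0, 5.0]
print(p_factor(obs, lo_band, up_band))  # 0.75
```

A good calibration pushes the p-factor toward 1 (most observations inside the band) while keeping the r-factor small (a narrow band), which is the trade-off the comparison in the abstract is evaluating.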

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, recent databases have changed from static structures to dynamic stream structures. Previous data mining techniques have served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis, but areas of recent interest such as sensor networks, robotics, and artificial intelligence require the capability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of a database or on individual transactions instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, it always provides the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. To compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms mine databases that feature gradually increasing numbers of items.
With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and therefore has to search multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep the complete information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces the stored information by using the lattice method: since the lattice can share items concurrently included in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner is more efficient than Lossy counting in the scalability evaluation, for the following reasons. As the number of items increases, fewer items are shared, so Lossy counting's memory efficiency weakens; furthermore, as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory. Hence, their data structures need to be made more efficient before they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
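A minimal sketch of the Lossy counting side of this comparison, assuming the classic single-item bucket-boundary formulation (the paper mines patterns, not just items, and its lattice storage is not modeled here):

```python
def lossy_count(stream, epsilon):
    """Classic Lossy counting: prune at every bucket boundary of width
    1/epsilon, so counts are underestimated by at most epsilon * N and
    genuinely frequent items are never dropped."""
    width = int(1 / epsilon)
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = (n - 1) // width + 1          # current bucket id
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1          # max possible missed count
        if n % width == 0:                     # bucket boundary: prune
            for it in [i for i in counts if counts[i] + deltas[i] <= bucket]:
                del counts[it], deltas[it]
    return counts

result = lossy_count(list("ababcabdabeab"), epsilon=0.2)
print(result["a"], result["b"])  # 5 5
```

The periodic pruning is exactly the memory-saving behavior the evaluation credits to Lossy counting, and the per-boundary (rather than per-transaction) processing is why it lags hMiner in latency.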

A Study on Detection Methodology for Influential Areas in Social Network using Spatial Statistical Analysis Methods (공간통계분석기법을 이용한 소셜 네트워크 유력지역 탐색기법 연구)

  • Lee, Young Min;Park, Woo Jin;Yu, Ki Yun
    • Journal of Korean Society for Geospatial Information Science / v.22 no.4 / pp.21-30 / 2014
  • Lately, new influentials have secured large numbers of followers on social networks owing to the vitalization of various social media. There has been considerable research on influential people in social networks, but it has made limited use of the location information available from Location-Based Social Network Services (LBSNS). Therefore, the purpose of this study is to propose a spatial detection methodology, using spatial statistical analysis methods, for influentials who comment on diverse social and cultural issues in an LBSNS, together with a plan for its application. Twitter was used to collect the analysis data: 168,040 Twitter messages were collected in Seoul over a month-long period, with 'politics,' 'economy,' and 'IT' set as categories and hot-issue keywords assigned to each category. From these, an exposure index for searching for influentials with respect to hot-issue keywords was derived, and the exposure index for each administrative unit of Seoul was calculated through a spatial join operation. Moreover, an influential index that accounts for the spatial dependence of the exposure index was derived to extract the influential areas in the top 5% of the index and to analyze their spatial distribution characteristics and spatial correlation. The experimental results demonstrated that the spatial correlation coefficient was relatively high, at more than 0.3, within the same category, and the correlation between the politics and economy categories was also more than 0.3. On the other hand, the correlation between the politics and IT categories was very low at 0.18, and that between the economy and IT categories was also very weak at 0.15. This study is significant in that it materializes influentials from a spatial information perspective, and its results can be usefully applied in the field of gCRM in the future.
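The exposure-index aggregation and a spatially lagged influential index might be sketched as follows; the tweets, district names, adjacency list, and 0.5 weight are all illustrative assumptions, not the paper's data or its exact index formula:

```python
from collections import Counter

# Hypothetical geotagged messages: (district, text).
tweets = [
    ("Gangnam", "election debate tonight"),
    ("Gangnam", "election results live"),
    ("Jongno", "election rally downtown"),
    ("Mapo", "new IT startup meetup"),
]

def exposure_index(messages, keyword):
    """Keyword hits per administrative unit (a stand-in for the
    spatial-join aggregation over Seoul's districts)."""
    index = Counter()
    for district, text in messages:
        if keyword in text:
            index[district] += 1
    return index

# Illustrative adjacency; the study would use actual district geometry.
neighbors = {"Gangnam": ["Jongno"], "Jongno": ["Gangnam", "Mapo"], "Mapo": ["Jongno"]}

def influential_index(exposure, adjacency, weight=0.5):
    """Own exposure plus a weighted mean of neighbors' exposure,
    a simple stand-in for spatial dependence."""
    return {d: exposure[d] + weight * sum(exposure[a] for a in adj) / len(adj)
            for d, adj in adjacency.items()}

exposure = exposure_index(tweets, "election")
influence = influential_index(exposure, neighbors)
print(influence["Gangnam"])  # 2.5
```

Folding neighboring districts' exposure into each district's score is what makes the index spatial: a district surrounded by high-exposure areas ranks above an isolated one with the same raw count.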

A Study on Replay Experiments and Thermal Analysis for Autoignition Phenomenon of Shredded Waste Tires (폐타이어 분쇄물의 자연발화현상에 대한 재연실험 및 열분석에 관한 연구)

  • Koh, Jae Sun;Jang, Man Joon
    • Fire Science and Engineering / v.26 no.6 / pp.99-108 / 2012
  • These days, spontaneous ignition caused by oxidation heat frequently occurs where waste tires are processed and stored. To examine this phenomenon, we conducted fire tests on shredded waste tires, closely investigated the components of fire residues collected at processing and storage sites, and analyzed the ignition onset temperature, weight loss, and heat of reaction. The fire tests used waste tire fragments in the range of 2.5 mm to 15 mm, in which heat accumulates easily, and thermal analyses were performed by DSC, TGA, DTA, DTG, and GC/MS to establish the scientific plausibility of spontaneous ignition. In the tests, after 48 hours of storage, a rapid temperature increase ($178^{\circ}C$), graphitization, and smoking were observed. The DTA and DTG analyses showed that the minimum weight loss occurred at $166.15^{\circ}C$, and DSC and TGA analysis of waste tire sample 1 (unburnt) revealed that thermal decomposition started at about $180^{\circ}C$. The ignition onset temperature was therefore considered to be $160^{\circ}C$ to $180^{\circ}C$. The material lost 10% of its initial weight at $305^{\circ}C$ and 50% at $416.12^{\circ}C$. GC/MS and DSC analyses of oxidation and self-reactivity detected substantial amounts of oxidized components such as 1,3-cyclopentadiene, but in the thermal analysis of reference materials and tire fragments the heat values were below the reference value, so no self-reactivity was found.
Therefore, to prevent spontaneous ignition by the oxidation heat of waste tires, it is necessary to convert the conventional process into a cryogenic process, which accumulates little or no heat during fragmentation. The current storage method, in which broken and fragmented materials are stored in large burlap bags (500 kg), should also be changed to storage in small burlap bags to prevent heat accumulation.

Physicochemical Characteristics and Antioxidant activities of Sikhye Made with Pigmented Rice (유색미로 제조한 식혜의 이화학적 특성 및 항산화 활성에 관한 연구)

  • Yang, Ji-won;Kim, Young Eon;Lee, Kyung Hee
    • Journal of the East Asian Society of Dietary Life / v.25 no.5 / pp.830-841 / 2015
  • This study compared the physicochemical characteristics, proximate composition, taste compounds, and antioxidant properties of Sikhye prepared with pigmented rice. The proximate composition differed significantly with the type of pigmented rice except for crude fat content, and pH and color also differed significantly with the type of pigmented rice. The highest sugar content, $15.07^{\circ}Brix$, was found in red and black rice Sikhye. Reducing sugar and free sugar contents were highest in milled rice and brown rice Sikhye, respectively. Among the pigmented rice Sikhye, titratable acidity and total acidity were highest for black rice Sikhye, and free sugar content was highest for green rice Sikhye. Analysis of relative antioxidant properties indicated that black rice Sikhye had the highest total polyphenol, flavonoid, and anthocyanin contents, the highest DPPH radical scavenging ability, and the highest reducing power and ferric reducing ability of plasma (FRAP) scores. Principal component analysis suggested that black rice Sikhye was strongly associated with antioxidant properties, while brown and red rice Sikhye were most strongly associated with sweetness and a distinctive flavor.

Implementation of An Automatic Authentication System Based on Patient's Situations and Its Performance Evaluation (환자상황 기반의 자동인증시스템 구축 및 성능평가)

  • Ham, Gyu-Sung;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.21 no.4 / pp.25-34 / 2020
  • In current medical information systems, an environment is constructed in which biometric data, generated by IoT devices or medical equipment connected to a patient, can be stored on a medical information server and monitored at the same time. In addition, medical staff can easily access a patient's biometric data, medical information, and personal information after simple ID/PW authentication on a mobile terminal. This way of accessing medical information needs improvement from the standpoint of protecting patients' personal information, while still providing fast authentication for first aid. In this paper, we implemented an automatic authentication system based on the patient's situation and evaluated its performance. The patient's situation was graded into normal and emergency, and was determined in real time from the patient's biometric data arriving from the ward. If the patient is in an emergency situation, an emergency message including an emergency code is sent to the mobile terminals of the medical staff, who then attempt automatic authentication to access the patient's higher-level medical information. Automatic authentication combines user authentication (ID/PW and the emergency code) with mobile terminal authentication (the staff member's role, working hours, and work location). After user authentication, mobile terminal authentication proceeds automatically without additional intervention by the medical staff. Once all authentications are complete, medical staff are authorized according to their roles and the patient's situation, and can access the patient's graded medical information and personal information through the mobile terminal.
We protected the patient's medical information by limiting staff access according to the patient's situation, and provided automatic authentication without additional intervention in emergency situations. We then carried out a performance evaluation to verify the implemented automatic authentication system.
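The two-stage check might be sketched as follows, assuming hypothetical staff records and emergency codes; the field names and the `automatic_auth` flow are illustrative, not the paper's implementation:

```python
from datetime import time

# Illustrative records; the schema and values are assumptions.
STAFF = {"doctor01": {"pw": "pw123", "role": "doctor",
                      "shift": (time(8), time(18)), "ward": "W3"}}
EMERGENCY_CODES = {"E-7341"}

def user_auth(user_id, pw, emergency_code):
    """Stage 1: ID/PW plus the emergency code from the alert message."""
    staff = STAFF.get(user_id)
    return (staff is not None and staff["pw"] == pw
            and emergency_code in EMERGENCY_CODES)

def terminal_auth(user_id, now, location):
    """Stage 2, run automatically with no staff interaction: the account
    holds a clinical role, within working hours, at the work location."""
    staff = STAFF[user_id]
    start, end = staff["shift"]
    return start <= now <= end and location == staff["ward"]

def automatic_auth(user_id, pw, code, now, location):
    return user_auth(user_id, pw, code) and terminal_auth(user_id, now, location)

print(automatic_auth("doctor01", "pw123", "E-7341", time(10, 30), "W3"))  # True
```

Keeping the second stage free of user interaction is what preserves speed in an emergency while still denying access from the wrong place or outside working hours.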