• Title/Summary/Keyword: Service Engineering


Marine Algal Flora and Community Structure in Kijang on the Southern East Coast of Korea (부산시 기장군 연안의 해조상 및 군집 특성)

  • Choi, Chang-Geun;Chowdhury, M.T.H.;Choi, In-Young;Hong, Yong-Ki
    • The Sea: Journal of the Korean Society of Oceanography, v.15 no.3, pp.133-139, 2010
  • Marine algal flora and community structure were investigated at four sites in the Kijang area on the southern east coast of Korea in August 2006 and August 2009. A total of 54 seaweed species, comprising 6 green, 10 brown, and 38 red algae, were collected and identified; of these, 35 species were found throughout the survey period. Mean biomass in wet weight ranged from $616.0\;g\;m^{-2}$ to $1,462.4\;g\;m^{-2}$ in 2006 and from $354.8\;g\;m^{-2}$ to $965.6\;g\;m^{-2}$ in 2009. Maximum biomass was recorded at the Mundong site, and minimum biomass at the Seoam (2006) and Dongbaek (2009) sites. The flora investigated (2006, 2009) could be classified into six functional groups: coarsely branched form (58.7%, 58.1%), thick leathery form (10.9%, 11.6%), filamentous form (13.0%, 9.3%), crustose form (6.5%, 9.3%), sheet form (6.5%, 7.0%), and jointed calcareous form (4.3%, 4.7%). The R/P, C/P and (R+C)/P values characterizing the flora were 4.00, 0.75 and 4.75 in 2006, and 5.17, 1.00 and 6.17 in 2009, respectively. The number of marine algal species and the biomass in the Kijang area were thus similar to previously reported data, suggesting that no change in seaweed diversity occurred in the Kijang coastal area before and after the anthropogenic construction between 2006 and 2009.
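The floristic ratios quoted above follow directly from per-division species counts. A minimal sketch (the counts below are hypothetical values chosen only to reproduce the reported 2009 ratios, not figures taken from the survey):

```python
def flora_indices(reds, greens, browns):
    """Floristic ratios used to characterize a marine algal community.

    R/P: red-to-brown species ratio; C/P: green-to-brown ratio;
    (R+C)/P: combined ratio. Higher values generally indicate
    warmer-water flora.
    """
    rp = reds / browns
    cp = greens / browns
    rcp = (reds + greens) / browns
    return rp, cp, rcp

# Hypothetical per-year counts for illustration only
rp, cp, rcp = flora_indices(reds=31, greens=6, browns=6)
print(round(rp, 2), round(cp, 2), round(rcp, 2))
```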

Research on Bridge Maintenance Methods Using BIM Model and Augmented Reality (BIM 모델과 증강현실을 활용한 교량 유지관리방안 연구)

  • Choi, Woonggyu;Pa Pa Win Aung;Sanyukta Arvikar;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research, v.44 no.1, pp.1-9, 2024
  • The number of bridges in Korea has increased from 584 in the 1970s to 38,405 today. As this stock ages, the number of bridges with a service life of more than 30 years will reach 21,737 (71%) by 2030, and manpower-dependent maintenance of these facilities can lead to fatal accidents. Accordingly, the importance of bridge safety inspection and maintenance is increasing, as is the need for decision-making support for supervisors who manage multiple bridges. Currently, bridge safety inspection and maintenance record damage, condition, location, and specifications either by hand on an exterior survey map or by photographing them with a camera. However, notation errors for damage or defects, supervisor mistakes, and typos may reduce the reliability of the overall safety inspection and diagnosis. To improve this, this study visualizes damage data recorded in a BIM model in an AR environment and proposes a bridge maintenance approach that supports supervisors' decision-making so that bridges can be maintained with a small workforce.

Automation of Sampling for Public Survey Performance Assessment (공공측량 성과심사 표본추출 자동화 가능성 분석)

  • Choi, Hyun;Jin, Cheol;Lee, Jung Il;Kim, Gi Hong
    • KSCE Journal of Civil and Environmental Engineering Research, v.44 no.1, pp.95-100, 2024
  • The public survey performance review conducted by the Spatial Information Quality Management Institute is carried out at the screening rate prescribed by regulation, and the examiner judges the overall quality of the submitted results from the extracted sample. Under the regulations of the Ministry of Land, Infrastructure and Transport, the review agency must select this sample by random extraction. In this study, we analyzed the details of actual sites through secured performance review data, examined considerations arising from various field conditions, and studied ways to apply a sampling algorithm to the public survey performance review; a detailed analysis of the sampling criteria used by reviewers is therefore necessary. A relative comparison was made feasible by comparing data from actual performance evaluations with the output of a Python automation program. This automation program is expected to serve as a foundation for automating sample extraction in future public survey performance reviews.
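The random-extraction rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not the Institute's actual program; the 10% screening rate and the sheet names are assumptions:

```python
import math
import random

def draw_review_sample(items, rate, seed=None):
    """Randomly draw a screening sample at the given rate (e.g. 0.1 = 10%).

    The sample size is rounded up so at least one item is always
    reviewed when the submission list is non-empty; seeding makes the
    draw reproducible for audit purposes.
    """
    rng = random.Random(seed)
    k = max(1, math.ceil(len(items) * rate)) if items else 0
    return rng.sample(items, k)

# e.g., 10% screening of 57 submitted map sheets (hypothetical)
sheets = [f"sheet-{i:03d}" for i in range(1, 58)]
sample = draw_review_sample(sheets, 0.10, seed=42)
print(len(sample))  # 6
```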

A Study on the Analysis of Reasons for Job Change and Countermeasures among Professionals in the Ship Management Industry (선박관리산업 전문인력 이직 원인 분석 및 대책 연구)

  • Tae-Ryong Park;Do-Yeon Ha;Yul-Seong Kim
    • Journal of Navigation and Port Research, v.48 no.3, pp.146-154, 2024
  • The ship management industry in South Korea has been growing steadily, leading the government to implement policies to support its development in response to changing environmental conditions. These policies aim to improve the competitiveness of South Korea's ship management industry by recognizing the importance of skilled professionals in determining its success. Plans and policies have been put in place to cultivate these professionals, but ship management companies are currently facing a serious shortage of manpower. To enhance the industry's competitiveness, it is essential to attract and retain competent ship management professionals. Therefore, this study investigates the reasons for turnover among these professionals. The research results identified four factors contributing to turnover: Work Environment, Economic Compensation and Welfare Benefits, Self-Development, and Promotion and Career Advancement. Subsequent multiple regression analysis based on these factors revealed the need to strengthen economic rewards and benefits in order to reduce turnover rates among ship management professionals. This study provides foundational data for the development of stable human resource management policies for the future of the ship management industry.

Performance of Passive UHF RFID System in Impulsive Noise Channel Based on Statistical Modeling (통계적 모델링 기반의 임펄스 잡음 채널에서 수동형 UHF RFID 시스템의 성능)

  • Jae-sung Roh
    • Journal of Advanced Navigation Technology, v.27 no.6, pp.835-840, 2023
  • RFID (Radio Frequency Identification) systems are attracting attention as a key component of Internet of Things (IoT) technology due to the cost and energy efficiency of their application services. To use RFID in IoT applications, the system must not only perform simple recognition between reader and tag but also store and manage various information over long periods, and reading and writing tag information requires techniques that remain robust and reliable over poor wireless channels. In particular, since tags in a UHF (Ultra High Frequency) RFID system communicate passively in crowded environments, improving the recognition rate and transmission speed of individual tags is essential. In this paper, Middleton's Class A impulsive noise model was selected to analyze the performance of the RFID system in an impulsive noise environment, and FM0 encoding and Miller encoding were applied to the tag to analyze the system's error rate performance. The analysis of the RFID system over Middleton's Class A impulsive noise channel shows that the larger the Gaussian-to-impulsive noise power ratio and the impulsive noise index, the more closely the channel's characteristics resemble those of a Gaussian noise channel.
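Middleton's Class A model, as used above, mixes background Gaussian noise with Poisson-arriving impulses. A minimal, dependency-free sketch of drawing one sample (parameter names and the unit total noise power are assumptions for illustration):

```python
import math
import random

def class_a_sample(A, Gamma, sigma2=1.0, rng=random):
    """Draw one Middleton Class A noise sample.

    A: impulsive index (mean impulses per observation); Gamma:
    Gaussian-to-impulsive noise power ratio. The number of active
    impulses m is Poisson(A); given m, the sample is zero-mean Gaussian
    with variance sigma2 * (m/A + Gamma) / (1 + Gamma).
    """
    # Poisson draw via Knuth's inversion method (no external libraries)
    L, k, p = math.exp(-A), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k += 1
    var = sigma2 * (k / A + Gamma) / (1 + Gamma)
    return rng.gauss(0.0, math.sqrt(var))

rng = random.Random(7)
samples = [class_a_sample(A=0.1, Gamma=0.01, rng=rng) for _ in range(10000)]
```

Increasing `Gamma` or `A` flattens the conditional variances toward a single value, which is the sense in which the channel approaches Gaussian behavior in the abstract's conclusion.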

A Study on the Extraction of Psychological Distance Embedded in Company's SNS Messages Using Machine Learning (머신 러닝을 활용한 회사 SNS 메시지에 내포된 심리적 거리 추출 연구)

  • Seongwon Lee;Jin Hyuk Kim
    • Information Systems Review, v.21 no.1, pp.23-38, 2019
  • Social network services (SNSs) are important marketing channels, so many companies actively exploit them by posting SNS messages with content and style appropriate for their customers. In this paper, we focused on the psychological distance embedded in SNS messages and developed a method to measure it by combining traditional content analysis, natural language processing (NLP), and machine learning. Through traditional content analysis by human coding, the psychological distance was extracted from each SNS message, and these coding results were used as input data for the NLP and machine learning steps. With NLP, word embedding was performed and a bag-of-words representation was created. A Support Vector Machine (SVM), one of the machine learning techniques, was trained and tested to predict the psychological distance in SNS messages. Initially, the sensitivity and precision of the SVM predictions were significantly low because of the extreme skewness of the dataset. We improved the SVM's performance by balancing the class ratio with an upsampling technique and by using only data coded with the same value in the first content analysis. All performance indices then exceeded 70%, showing that psychological distance can be measured well.
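The upsampling remedy mentioned above (duplicating minority-class rows until classes balance) can be sketched without any ML library. The 9:1 skew and the label names below are illustrative assumptions, not the paper's actual data:

```python
import random

def upsample_minority(samples, labels, seed=0):
    """Resample minority classes with replacement up to the majority count.

    A common remedy when skewed labels drive a classifier's sensitivity
    and precision down: the majority class is left as-is and minority
    rows are duplicated at random until every class has equal size.
    """
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(rows) for rows in by_label.values())
    out_x, out_y = [], []
    for y, rows in by_label.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for x in rows + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

X = [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]
y = ["near"] * 9 + ["far"] * 1  # illustrative 9:1 skew
Xb, yb = upsample_minority(X, y)
print(yb.count("near"), yb.count("far"))  # 9 9
```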

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.1-25, 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and conventional optical character recognition extracts all of them; some applications, however, need to ignore characters that are not of interest and focus only on specific types. Automatic gasometer reading is one such application: it only needs to extract the device ID and gas usage amount from gasometer images to bill users, while strings such as device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The system therefore analyzes only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for this selective optical character recognition, and built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bi-directional long short-term memory network that converts this sequential information into character strings via time-series mapping from feature vectors to characters. The strings of interest are the device ID (12 Arabic numerals) and the gas usage amount (4-5 Arabic numerals). All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process comprises the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when a request arrives, it converts the queued image into the device ID string, the gas usage amount string, and their position information, pushes this information to the output queue, and returns to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
Of these, 22,985 images were used for training and validation and 4,135 for testing. For each training epoch, the 22,985 images were randomly split 8:2 into training and validation sets. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise signals), reflex (images with light reflection on the gasometer), scale (images with small object size due to long-distance capture), and slant (images that are not horizontally flat). Final character string recognition accuracies for the device ID and gas usage amount on normal data were 0.960 and 0.864, respectively.
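The master-slave FIFO queueing described above can be sketched with Python's standard library. The recognizer stub, queue wiring, and result fields are illustrative assumptions standing in for the three-network GPU pipeline:

```python
import queue
import threading

def recognize(image):
    """Hypothetical stand-in for the three-network recognition step."""
    return {"device_id": "000000000001", "usage": "0874"}

def slave(in_q, out_q):
    """Worker: poll the input queue, run recognition, push the result."""
    while True:
        image = in_q.get()      # blocks until a reading request arrives
        if image is None:       # sentinel value: shut down cleanly
            break
        out_q.put(recognize(image))
        in_q.task_done()

in_q, out_q = queue.Queue(), queue.Queue()   # FIFO by construction
worker = threading.Thread(target=slave, args=(in_q, out_q))
worker.start()

in_q.put(b"<gasometer image bytes>")         # master pushes a request
result = out_q.get()                         # master collects the result
in_q.put(None)
worker.join()
print(result["usage"])
```

In the real system the slave side would run on the GPU host and multiple slaves could drain the same queue, which is what makes the structure scale to hundreds of thousands of requests per day.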

Comparing Prediction Uncertainty Analysis Techniques of SWAT Simulated Streamflow Applied to Chungju Dam Watershed (충주댐 유역의 유출량에 대한 SWAT 모형의 예측 불확실성 분석 기법 비교)

  • Joh, Hyung-Kyung;Park, Jong-Yoon;Jang, Cheol-Hee;Kim, Seong-Joon
    • Journal of Korea Water Resources Association, v.45 no.9, pp.861-874, 2012
  • To ensure the applicability of the Soil and Water Assessment Tool (SWAT) model, careful calibration and uncertainty analysis are essential, and in recent years many researchers have proposed uncertainty analysis techniques for the model. To determine the differences and similarities among typical techniques, we applied three uncertainty analysis procedures included in the SWAT Calibration and Uncertainty Program (SWAT-CUP) to the Chungju Dam watershed (6,581.1 $km^2$) of South Korea: Sequential Uncertainty Fitting algorithm ver. 2 (SUFI2), Generalized Likelihood Uncertainty Estimation (GLUE), and Parameter Solution (ParaSol). There was no significant difference in objective function values between the SUFI2 and GLUE algorithms, whereas ParaSol showed the worst objective function values, and considerable divergence was also shown among the 95PPU bands. The p-factor and r-factor for streamflow differed among the methods by 0.02 to 0.79 and 0.03 to 0.52, respectively. In general, ParaSol showed the lowest p-factor and r-factor, and SUFI2 the highest. Therefore, for automatic calibration and uncertainty analysis of the SWAT model, we suggest calibration methods that consider the p-factor and r-factor. The p-factor is the percentage of observations covered by the 95PPU (95 Percent Prediction Uncertainty) band, and the r-factor is the average thickness of the 95PPU band.
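The two band statistics can be computed directly from an observed series and the 95PPU bounds. A minimal sketch (toy numbers; the r-factor here is normalized by the standard deviation of the observations, following the usual SWAT-CUP convention):

```python
def p_factor(obs, lower, upper):
    """Fraction of observations falling inside the 95PPU band."""
    inside = sum(1 for o, lo, hi in zip(obs, lower, upper) if lo <= o <= hi)
    return inside / len(obs)

def r_factor(obs, lower, upper):
    """Average 95PPU band thickness relative to the std of observations."""
    mean = sum(obs) / len(obs)
    std = (sum((o - mean) ** 2 for o in obs) / len(obs)) ** 0.5
    avg_width = sum(hi - lo for lo, hi in zip(lower, upper)) / len(obs)
    return avg_width / std

# Toy streamflow series and 95PPU bounds, for illustration only
obs   = [10.0, 12.0,  9.0, 15.0, 11.0]
lower = [ 8.0, 10.0,  9.5, 13.0,  9.0]
upper = [11.0, 13.0, 11.0, 16.0, 12.0]
print(p_factor(obs, lower, upper))  # 0.8
```

A p-factor near 1 with a small r-factor is the desirable combination: most observations are bracketed by a band that is still narrow relative to the natural variability.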

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services, v.15 no.3, pp.101-107, 2014
  • With the development of online services, database forms have shifted from static structures to dynamic stream structures. Previous data mining techniques have served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis, but areas of recent interest such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on individual transactions instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs; since it extracts frequent patterns as soon as a transaction is entered, the latest mining results reflect real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction; to compare the efficiency of their storage structures, we also evaluate maximum memory usage; and lastly, we show how stably the two algorithms perform on databases featuring a gradually increasing number of items.
With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Because hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in maximum memory usage: hMiner must keep the full information for each candidate pattern in its hash buckets, while Lossy counting reduces this information via the lattice, whose storage can share items that appear in multiple patterns, making its memory usage more efficient. However, hMiner shows better scalability, for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory; their data structures need to be made more efficient before they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
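Lossy counting, the baseline algorithm compared above, is easiest to see for single items: counts are pruned at each bucket boundary, so any item whose true frequency exceeds epsilon * N survives with its count underestimated by at most epsilon * N. A minimal sketch (toy stream; the full algorithm generalizes to itemset patterns):

```python
import math

def lossy_count(stream, epsilon):
    """Lossy counting over a stream of single items.

    Maintains (count, delta) per item, pruning entries with
    count + delta <= current bucket at every bucket boundary.
    """
    width = math.ceil(1 / epsilon)
    counts, deltas = {}, {}
    n = 0
    for item in stream:
        n += 1
        bucket = math.ceil(n / width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:  # bucket boundary: prune low-count entries
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return counts

# "a" appears in half of all transactions; b..f are infrequent noise
stream = ["a", "b", "a", "c", "a", "d", "a", "e", "a", "f"] * 10
freq = lossy_count(stream, epsilon=0.1)
print(freq)  # {'a': 50}
```

The dictionary-based storage here is closer in spirit to hMiner's hash access; the lattice structure the abstract describes trades that direct access for shared storage across patterns.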

A Study on Detection Methodology for Influential Areas in Social Network using Spatial Statistical Analysis Methods (공간통계분석기법을 이용한 소셜 네트워크 유력지역 탐색기법 연구)

  • Lee, Young Min;Park, Woo Jin;Yu, Ki Yun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.21-30
    • /
    • 2014
  • Lately, new influentials have attracted large numbers of followers on social networks owing to the vitalization of various social media. There has been considerable research on these influential people in social networks, but it has limitations regarding the location information of Location-Based Social Network Services (LBSNS). Therefore, the purpose of this study is to propose a spatial detection methodology, using spatial statistical analysis methods, for influentials who comment on diverse social and cultural issues in LBSNS, along with a plan for applying it. Twitter was used to collect the analysis data: 168,040 Twitter messages were collected in Seoul over a one-month period. In addition, 'politics,' 'economy,' and 'IT' were set as categories, each with its own hot-issue keywords. From this, an exposure index for detecting influentials with respect to the hot-issue keywords was derived, and the exposure index per administrative unit of Seoul was calculated through a spatial join operation. Moreover, an influential index that accounts for the spatial dependence of the exposure index was derived to extract the areas in the top 5% of the influential index and to analyze their spatial distribution characteristics and spatial correlation. The experimental results demonstrated that the spatial correlation coefficient was relatively high (above 0.3) within the same category, and the correlation between the politics and economy categories also exceeded 0.3. On the other hand, the correlation between the politics and IT categories was very low at 0.18, and between the economy and IT categories very weak at 0.15. This study is significant in that it characterizes influentials from a spatial information perspective, and its results can be usefully applied in the gCRM field in the future.
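The cross-category comparisons above reduce to correlating two per-district index vectors. A minimal sketch using a plain Pearson correlation (the district values are hypothetical, and this simple coefficient stands in for the study's spatially weighted statistic):

```python
def pearson(xs, ys):
    """Pearson correlation between two per-district exposure indices."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical exposure indices over five districts (not the study's data)
politics = [0.9, 0.4, 0.7, 0.2, 0.6]
economy  = [0.8, 0.5, 0.6, 0.3, 0.5]
print(round(pearson(politics, economy), 2))
```

A spatially aware variant would additionally weight district pairs by adjacency, which is what lets the influential index capture the spatial dependence the abstract describes.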