• Title/Summary/Keyword: Real-time analysis system


A Study on the Method of Energy Evaluation in Water Supply Networks (상수관망의 에너지 평가기법에 관한 연구)

  • Kim, Seong-Won;Kim, Dohwan;Choi, Doo Yong;Kim, Juhwan
    • Journal of Korea Water Resources Association
    • /
    • v.46 no.7
    • /
    • pp.745-754
    • /
    • 2013
  • The systematic analysis and evaluation of the energy required to produce and supply drinking water have attracted considerable interest, given the need to overcome electricity shortages and control greenhouse gas emissions. Based on a review of existing research, this study develops a practical method for evaluating energy in water supply networks that can be applied to real systems. A model based on the proposed method combines hydraulic analysis results obtained with the EPANET2 software and a mathematical energy model on the MATLAB platform. The proposed performance indicators can evaluate both the inherent efficiency of water supply facilities and their operational efficiency, depending on the pipeline layout, pipe condition, and leakage level. The developed model is validated by applying it to virtual and real water supply systems. Optimal energy management with the proposed method is expected to support the management of electric power demand at peak supply times and the planning of energy-efficient water supply systems.
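As a rough illustration of the kind of energy bookkeeping such a model performs, the hydraulic power of a pump is P = ρ·g·Q·H. This is a minimal sketch under standard hydraulics, not the authors' EPANET2/MATLAB model; all names and values are illustrative.

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydraulic_energy_kwh(flow_m3s, head_m, hours):
    """Energy delivered when flow Q (m^3/s) is lifted through head H (m)."""
    power_w = RHO * G * flow_m3s * head_m   # hydraulic power in watts
    return power_w * hours / 1000.0         # W*h -> kWh

def pump_input_energy_kwh(flow_m3s, head_m, hours, efficiency):
    """Electrical input energy, accounting for pump/motor efficiency."""
    return hydraulic_energy_kwh(flow_m3s, head_m, hours) / efficiency

# Example: 0.05 m^3/s at 40 m head for 24 h, 70% wire-to-water efficiency
delivered = hydraulic_energy_kwh(0.05, 40.0, 24.0)       # 470.88 kWh
consumed = pump_input_energy_kwh(0.05, 40.0, 24.0, 0.70)
```

Comparing delivered hydraulic energy with electrical input energy is the basic step behind the kind of facility-efficiency indicators the abstract describes.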

A Study on an Automatic Classification Model for Facet-Based Multidimensional Analysis of Civil Complaints (패싯 기반 민원 다차원 분석을 위한 자동 분류 모델)

  • Na Rang Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.1
    • /
    • pp.135-144
    • /
    • 2024
  • In this study, we propose an automatic classification model for quantitative multidimensional analysis based on facet theory to understand public opinions and demands on major issues through big data analysis. Civil complaints, as a form of public feedback, are generated by various individuals on multiple topics repeatedly and continuously in real-time, which can be challenging for officials to read and analyze efficiently. Specifically, our research introduces a new classification framework that utilizes facet theory and political analysis models to analyze the characteristics of citizen complaints and apply them to the policy-making process. Furthermore, to reduce administrative tasks related to complaint analysis and processing and to facilitate citizen policy participation, we employ deep learning to automatically extract and classify attributes based on the facet analysis framework. The results of this study are expected to provide important insights into understanding and analyzing the characteristics of big data related to citizen complaints, which can pave the way for future research in various fields beyond the public sector, such as education, industry, and healthcare, for quantifying unstructured data and utilizing multidimensional analysis. In practical terms, improving the processing system for large-scale electronic complaints and automation through deep learning can enhance the efficiency and responsiveness of complaint handling, and this approach can also be applied to text data processing in other fields.

Numerical Analysis of Flow Characteristics inside the Inner Part of a Fluid Control Valve System (유동해석을 통한 유체제어벨브 시스템의 내부 유동 특성 분석)

  • Son, Chang-Woo;Seo, Tae-Il;Kim, Kwang-Hee;Lee, Sun-Ryong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.6
    • /
    • pp.160-166
    • /
    • 2018
  • The worldwide semiconductor market has been growing for a long time. Semiconductor manufacturing lines need to handle several types of toxic gases, which must be controlled accurately in real time. A toxic gas control system consists of many kinds of parts, e.g., fittings, valves, tubes, filters, and regulators. These parts must be manufactured precisely and be corrosion resistant, because they control high-pressure gases for long periods without any leakage. For this, surface machining and hardening technologies for the metal block and metal gasket need to be studied. Such a study depends on various factors, such as geometric shapes, part materials, surface hardening methods, and gas pressures. This paper presents a series of simulation processes examining the differences between inlet and outlet pressures, considering several different fluid velocities, tube diameters, and V-angles. This study will be very helpful for determining the important design factors as well as for precisely manufacturing these parts. The EP (electrolytic polishing) process was used to obtain cleaner surfaces, and hardness tests were carried out after the EP process.

GIS Based Real-Time Transit Information Integration and Its Transit Planning Implications

  • Hwang, Da-Hae;Kim, Dong-Young;Choi, Yun-Soo;Cho, Seong-Kil
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.2 s.40
    • /
    • pp.87-93
    • /
    • 2007
  • Over the years, Advanced Public Transportation Systems (APTS) have been implemented to manage and operate public transportation. With the expansion of mass spatio-temporal data, such as comprehensive spatial information on each individual passenger and public transportation vehicle, it has become necessary to consolidate and analyze these multiple data sets from various sources. This paper demonstrates how GIS is utilized to consolidate massive transit-related spatio-temporal information, and presents effective applications that improve the transit planning process and support transit-related decision-making. A GIS-based system is used to combine multiple agents' data in the areas of transit operation and individual transit ride and transfer management. Owing to the unique comprehensiveness and level of detail of the data provided by the Seoul transit system, this GIS-based information consolidation is the first in its class. Based on the integrated database, this paper describes effective and efficient GIS-based analyses deployed in a transportation system planning process. The data integration systems and analytic models developed in this paper can be transferred to and applied by any municipal government, provided that the appropriate data are available.


The Establishment of an Activity-Based EVM - PMIS Integration Model (액티비티 기반의 EVM - PMIS 통합모델 구축)

  • Na, Kwang-Tae;Kang, Byeung-Hee
    • Journal of the Korea Institute of Building Construction
    • /
    • v.10 no.1
    • /
    • pp.199-212
    • /
    • 2010
  • To establish an infrastructure for technology and information in the domestic construction industry, several construction regulations pertaining to construction information have been institutionalized. However, there are major problems with the domestic information classification system, earned value management (EVM), and the project management information system (PMIS). In particular, the current PMIS is a builder-oriented system, and because EVM is not applied to PMIS, the reporting, analysis, and forecasting functions for owners are lacking. Moreover, owners cannot confirm construction schedule and cost information in real time due to differences between the EVM and PMIS operation systems. The purpose of this study is to provide a framework capable of operating PMIS efficiently in an e-business environment, by proposing how to establish a work breakdown structure (WBS) and an EVM - PMIS integration model, so that PMIS may provide EVM functions and stakeholders may share all information. At the core of EVM - PMIS integration is the idea that EVM and PMIS share the same activity-based operation system. The principle of the integration is data integration, in which the information fields of an activity are connected with the fields of a relational database table consisting of sub-modules for the schedule and cost management functions of PMIS, using a relational database management system. Therefore, the planned value (PV), earned value (EV), actual cost (AC), schedule variance (SV), schedule performance index (SPI), cost variance (CV), and cost performance index (CPI) of an activity are connected with the fields of the relational database table for the schedule and cost sub-modules of PMIS.
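The EVM indicators enumerated in this abstract follow standard definitions; a minimal sketch of the arithmetic (not the paper's PMIS implementation), with EV denoting earned value:

```python
# Standard earned value management (EVM) indicator definitions.
def schedule_variance(ev, pv):
    return ev - pv   # SV < 0: behind schedule

def cost_variance(ev, ac):
    return ev - ac   # CV < 0: over budget

def spi(ev, pv):
    return ev / pv   # SPI < 1: behind schedule

def cpi(ev, ac):
    return ev / ac   # CPI < 1: over budget

# Example activity: planned value 100, earned value 90, actual cost 120
assert schedule_variance(90, 100) == -10
assert cost_variance(90, 120) == -30
```

In the integration the abstract describes, each of these quantities would occupy a field of the activity's row in the PMIS relational database.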

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has the two-sided feature of producing data while using it, and the data so produced creates further value. Owing to the massive scale of data, future information systems need to process more data in terms of quantity than existing systems do; in terms of quality, they also need the ability to extract only the necessary data from large amounts of data. In a small-scale information system, a person can accurately understand the system and obtain the necessary information, but in complex systems that are difficult to understand accurately, it becomes increasingly difficult to acquire the desired information. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be solved by building a semantic web, which enables diverse information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and currently most work is done through information systems. As existing systems contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connections with other systems, offers a wide range of usable databases, and has the advantage of more precise and faster search through relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. However, because it takes pre-specified information from the existing logistics system and displays it as web pages, little can be confirmed beyond the few items specified in advance, extending it with additional functions when necessary is time-consuming, and it is organized by category without a search function. It therefore has the disadvantage of being easy to use only for those who, as with the existing system, already know it well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics support contract management and a component dictionary were further identified and included in the ontology. To confirm whether the constructed ontology can support decisions, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts.
In particular, in contrast to past ontology studies that built static ontology databases, this study constructs time-series data whose values change over time, such as the daily state of each aircraft, as an ontology, and confirms that utilization rates can be calculated from the constructed ontology based on various criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indices used in performance-based logistics contracts are easy to calculate through the ontology's reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated, including from MTBF data of selected fault-prone items based on actual part consumption records, and from these the mission reliability and system reliability are derived. To confirm the usability of the constructed ontology-based logistics situation management system, the proposed system was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and found to be more useful and convenient than the existing system.
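The component reliability figures mentioned (failure rate, MTBF, mission and system reliability) are conventionally computed with the exponential constant-failure-rate model. A minimal sketch under that assumption, not the paper's ontology-based reasoning:

```python
import math

def failure_rate(mtbf_hours):
    """lambda = 1 / MTBF."""
    return 1.0 / mtbf_hours

def reliability(t_hours, mtbf_hours):
    """R(t) = exp(-lambda * t): probability a component survives t hours."""
    return math.exp(-failure_rate(mtbf_hours) * t_hours)

def series_system_reliability(t_hours, mtbf_list):
    """A series system survives only if every component survives."""
    r = 1.0
    for mtbf in mtbf_list:
        r *= reliability(t_hours, mtbf)
    return r
```

Feeding per-component MTBF values derived from part consumption records into such functions is the usual route from raw logistics data to mission-level reliability.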

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.121-145
    • /
    • 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem in which the amount of computation required to find solutions increases exponentially with the problem size. For many years, there have been many studies on university timetabling, motivated by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, a course timetable for liberal arts is scheduled by the office of academic affairs, and a course timetable for major subjects is scheduled by each department of a university. We found several problems in the analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots as the previous year, which means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning.
In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study illustrates the approach with the case of Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria: 'the number of lesson hours' by the lecturer, 'prohibition of lecture allocation at specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The number of lesson hours' by the lecturer includes three criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one measuring the similarity between the subjects of the current semester and those of previous timetables, and one evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution-degree index, based on information entropy, indicates how subjects are distributed in the timetable. To show the study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases across all departments was estimated at 41.72%, which means that a timetable template generated from the most similar case will be helpful. Sensitivity analysis shows that the distribution degree increases if we set 'the number of subjects in the same day-hour' to more than 90%.
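An entropy-based distribution index of the kind described can be sketched as normalized Shannon entropy over day-hour slot counts: 0 when all subjects share one slot, 1 when they are spread evenly. This is an illustrative reading of the abstract, not the authors' exact formula.

```python
import math

def distribution_degree(slot_counts):
    """Normalized entropy in [0, 1] of subject counts per day-hour slot."""
    total = sum(slot_counts)
    probs = [c / total for c in slot_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(slot_counts))   # even spread over all slots
    return entropy / max_entropy if max_entropy > 0 else 1.0

# All subjects in one of four slots -> 0.0; one subject per slot -> 1.0
```

A timetable heuristic can then prefer assignments that raise this index, countering the slot-concentration problem noted above.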

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although there have been cases of evaluating the value of specific companies or projects, centered on developed countries in North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have become increasingly active only recently. There are several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, referred to as the 'STAR-Value system,' which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and has recently been spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values anticipated future economic income at present, and the relief-from-royalty method, which calculates the present value of royalties, taking the royalty rate as the contribution of the subject technology to the business value created. We examine how these models and the related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. 
Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value is highly reliable and objective. Furthermore, if information on the potential market size of the target technology and the market share of the commercialization subject is drawn from data-driven sources, or if estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond explaining the various valuation models and their primary variables as presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-based market share prediction module, and so on. The development of the intelligent web-based STAR-Value system is significant in that it widely spreads a web-based system that can validate and apply the theory of technology valuation in practice, and it is expected to be utilized in various fields of technology commercialization.
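The two income-approach methods named above reduce to standard present-value arithmetic. A minimal sketch; the function names are illustrative and not part of the STAR-Value system's API:

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of yearly cash flows, first flow discounted one full year."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, discount_rate):
    """Present value of the royalties avoided by owning the technology."""
    return dcf_value([r * royalty_rate for r in revenues], discount_rate)

# Example: 100 per year for 3 years at a 10% discount rate -> about 248.69
value = dcf_value([100.0, 100.0, 100.0], 0.10)
```

In a system like the one described, inputs such as the discount rate (WACC) and the forecast horizon (technology life) would come from the supporting databases rather than be entered by hand.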

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose target social media; each target medium requires a different way to gain access, such as open APIs, search tools, DB-to-DB interfaces, or purchasing content. The second phase is pre-processing to generate useful materials for meaningful analysis.
If garbage data are not removed, social media analysis will not provide meaningful and useful business insights; to clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, and reputation.
In that phase, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume in a category-time matrix, showing density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
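The lexicon-based sentiment scoring at the core of such a pipeline can be sketched in a few lines. This is a toy stand-in: the paper itself works in R (KoNLP, ggplot2, plyr) with a domain-specific Korean lexicon, and the words below are illustrative only.

```python
# Toy sentiment lexicons (illustrative stand-ins for a domain lexicon).
POSITIVE = {"delicious", "great", "tasty"}
NEGATIVE = {"salty", "bad", "expensive"}

def sentiment_score(tokens):
    """(#positive - #negative) / #tokens, in [-1, 1]; 0.0 for empty input."""
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)
```

Aggregating such per-document scores by category and time period yields exactly the kind of matrix that the heat maps and valence tree maps above visualize.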

Reproducibility Evaluation of the Deep Inspiration Breath-Hold (DIBH) Technique by Respiration Data and Heart Position Analysis during Radiation Therapy for Left Breast Cancer Patients (좌측 유방암 환자의 방사선치료 중 환자의 호흡과 심장 위치 분석을 통한 Deep inspiration breath-hold(DIBH) 기법의 재현성 평가)

  • Jo, Jae Young;Bae, Sun Myung;Yoon, In Ha;Lee, Ho Yeon;Kang, Tae Young;Baek, Geum Mun;Bae, Jae Beom
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.297-303
    • /
    • 2014
  • Purpose: The purpose of this study is the reproducibility evaluation of the deep inspiration breath-hold (DIBH) technique through respiration data and heart position analysis in radiation therapy for left breast cancer patients. Materials and Methods: Free-breathing (FB) computed tomography (CT) images and DIBH CT images of three left breast cancer patients were used to evaluate heart volume and dose in the treatment planning system (Eclipse version 10.0, Varian, USA). The signal of the RPM (Real-time Position Management) Respiratory Gating System (version 1.7.5, Varian, USA) was used to evaluate the respiration stability of DIBH during breast radiation therapy. The images for measuring heart position were acquired with the electronic portal imaging device (EPID) in cine acquisition mode, and the distance of the heart at three measuring points (A, B, C) on each image was measured with Offline Review (ARIA 10, Varian, USA). Results: Significant differences were found between the FB and DIBH plans for mean heart dose (6.82 vs. 1.91 Gy), heart V30 (68.57 vs. 8.26 cm³), and V20 (76.43 vs. 11.34 cm³). The standard deviation of the DIBH signal of each patient was ±0.07 cm, ±0.04 cm, and ±0.13 cm, respectively. The maximum and minimum heart distances on EPID images were measured as 0.32 cm and 0.00 cm. Conclusion: Consequently, the DIBH technique in radiation therapy for left breast cancer patients is very useful for establishing the treatment plan and reducing the heart dose. In addition, the cine acquisition mode of the EPID is useful for the reproducibility evaluation of DIBH.