• Title/Summary/Keyword: 성능향상 (performance improvement)


Effect of Flywheel Weight on the Vibration of Diesel Engine (플라이휠 중량(重量)이 디젤 기관(機關)의 진동(振動)에 미치는 영향(影響))

  • Myung, Byung Soo;Kim, Sung Rai
    • Korean Journal of Agricultural Science / v.20 no.2 / pp.167-180 / 1993
  • Most small diesel engines at the 6.0 kW and 7.5 kW levels are widely used with a flywheel of the same size and weight. This study was conducted to obtain basic data on how flywheel weight affects the engine performance of the power tiller, with flywheel weight treated as the major factor. Fuel consumption ratio, motoring loss, torque, vibration, and mechanical efficiency of the engine were measured and analyzed at four flywheel weights: 32.2, 29.4, 26.2, and 24.2 kgf. The results were as follows: 1. The flywheel weights calculated from the JSME design program and from the ASME and SAE design criteria were 23.7 kgf and 24.5 kgf, respectively; the 32.2 kgf flywheel of the 7.5 kW engine could therefore be reduced by about 8 kgf. 2. The rated outputs of the 6.0 kW and 7.5 kW engines were actually 7.43 kW and 7.85 kW, respectively. When the flywheel weight was reduced from 32.2 kgf to 24.2 kgf, output increased from 7.43 kW to 7.70 kW in the 6.0 kW engine and from 7.85 kW to 8.25 kW in the 7.5 kW engine. 3. With the same weight reduction, the fuel consumption ratio decreased from 300.8 to 296.8 g/kW-hr in the 6.0 kW engine and from 313.6 to 312.8 g/kW-hr in the 7.5 kW engine. 4. Mechanical efficiency increased from 76.1% to 76.8% in the 6.0 kW engine and from 76.7% to 77.0% in the 7.5 kW engine. 5. Vibration decreased along the X- and Z-axes in the 6.0 kW engine, but increased slightly along the Y-axis in the 6.0 kW engine and along all axes in the 7.5 kW engine. 6. Motoring loss decreased from 2.33 kW to 1.75 kW in the 6.0 kW engine and from 2.46 kW to 1.84 kW in the 7.5 kW engine.
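
Result 1 cites flywheel-weight calculations from JSME and ASME/SAE design procedures. These presumably rest on the standard speed-fluctuation sizing relation; the sketch below is a generic reconstruction, and ΔE (energy fluctuation per cycle), C_s (coefficient of speed fluctuation), ω (mean angular speed), and k (radius of gyration) are assumed symbols, not values from the paper.

```latex
% Generic flywheel sizing (illustrative, not the paper's exact procedure):
% the flywheel must absorb the energy fluctuation \Delta E per cycle while
% keeping the speed fluctuation within the design coefficient C_s.
\Delta E = I\,\omega^{2} C_{s}, \qquad I = m k^{2}
\quad\Longrightarrow\quad
m = \frac{\Delta E}{k^{2}\,\omega^{2} C_{s}}
```

Evaluating this balance with each code's recommended C_s would yield target masses on the order of the 23.7 and 24.5 kgf figures quoted above.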


Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.51-56 / 2009
  • Purpose: Recently, most hospitals have introduced PACS, and use of the system continues to expand. A small-scale PACS, called a MicroPACS, can already be built from open-source programs. The aim of this study is to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. The study covers how to set up a MicroPACS with open-source programs and an assessment of its storage capability, stability, compatibility, and performance in operations such as "retrieve" and "query". Materials and Methods: 1. We first searched for open-source software meeting the following criteria: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data-storage capability, we compared the time spent backing up data with the open-source software against optical discs (CDs and DVD-RAMs), and likewise compared retrieval times. (2) To estimate work efficiency, we measured the time spent finding data on CDs, on DVD-RAMs, and in the MicroPACS; 7 technologists participated. 3. To evaluate the stability of the software, we examined whether any data loss occurred while the system was maintained for a year, and for comparison counted the errors found in 500 randomly selected CDs. Result: 1. Among 11 open-source packages we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2. (1) Comparison of back-up and retrieval times (min) showed the following: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) on the GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) on the GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) on the SIEMENS scanner. (2) The time (sec) needed to find data was: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over the year, with 12,741 PET/CT studies stored in 1.81 TB; in contrast, 14 of the 500 CDs (2.8%) contained errors. Conclusions: We found that a MicroPACS can be set up with open-source software and that its performance is excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data-storage device as long as its operators continue to develop and systematize it.
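
For context, a minimal sketch of querying a Conquest DICOM Server from Python using pynetdicom (a library not mentioned in the paper); the host, port, and AE titles below are Conquest's common defaults and are assumptions, not values from the study.

```python
# Hedged sketch: a DICOM C-FIND (study-level "query") against a Conquest server.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="MICROPACS_CLIENT")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# Conquest's default listening port is 5678; "CONQUESTSRV1" is its default AE title.
assoc = ae.associate("127.0.0.1", 5678, ae_title="CONQUESTSRV1")
if assoc.is_established:
    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = ""                    # empty string acts as a wildcard
    query.StudyDate = "20090101-20091231"   # date-range matching
    query.ModalitiesInStudy = "PT"          # PET studies

    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01):   # pending = a match
            print(identifier.PatientID, identifier.StudyDate)
    assoc.release()
```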


Study on 3D Printer Suitable for Character Merchandise Production Training (캐릭터 상품 제작 교육에 적합한 3D프린터 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies / s.41 / pp.455-486 / 2015
  • 3D printing technology, first patented in 1986, initially attracted little attention outside a handful of companies owing to low awareness. Today, however, with the original patents expiring after 20 years, the price of 3D printers has fallen to a level individuals can afford, and the technology is drawing attention from industry and the general public alike, helped by the routine exchange of 3D data online and improvements in computer performance. Because 3D printing works from digital data, which can be transmitted, revised, and supplemented, and produces parts without molds, it can change manufacturing processes fundamentally, and the character-merchandise sector stands to gain the same benefits. 3D printers are becoming a necessity in the production of figure merchandise, which is at the forefront of the recently popular kidult culture. Given the expected demand from the related industries, and the lower prices brought by patent expiration and technology sharing, introducing courses that cultivate people able to use 3D printers is essential for expanding employment opportunities and enabling further creative work. However, the information available when introducing 3D printers into school education is limited. The press and information media mention only general facts, such as the growth of the industry or the promising future value of 3D printers, and academic research likewise remains at an introductory level: analyzing market-size data, surveying industrial applications, or introducing the printing technologies. This lack of information causes problems at the education site. Adopting equipment without concrete comparisons of strengths and weaknesses means the technology can be used only after trial and error, incurring time and opportunity costs, and if an expensive piece of equipment proves unsuited to school education, the losses are significant. This research targeted general users without a technical background rather than specialists. Instead of merely introducing 3D printer technologies, as previous work has done, it compares their strengths and weaknesses and analyzes the problems and precautions in actual use, in order to explain what features a 3D printer should have when used in education for developing figure merchandise as optional cultural-content coursework in cartoon-related departments, and to provide information of practical help for future education using 3D printers. In the main body, the technologies are classified from a new perspective: support (buttress) method, material type, two-dimensional printing method, and three-dimensional printing method. This classification was chosen to make practical problems in use easy to compare across technologies. In conclusion, the most suitable 3D printer was found to be an FDM-method printer, which is comparatively cheap and has low repair, maintenance, and material costs, although its output quality is somewhat lacking; it is additionally recommended to select a vendor that provides good technical support.

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, since analysts must be able to process both image and text data, image captioning has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have sought to improve its performance in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, identifying its constituent objects and their relationships, whereas domain experts focus on the specific elements needed to interpret the image based on their expertise. The meaningful parts of an image thus differ with the viewer's perspective even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the field's expertise is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning with expertise data can introduce another problem: simultaneous learning from captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of image / expertise captions were created and used for the expertise-transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated by training on general data alone contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. We expect that applying the proposed methodology to expertise transplantation in various fields will stimulate research on alleviating the shortage of expertise data and on improving image-captioning performance.
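
A minimal sketch of the transplant step, under stated assumptions: a generic CNN+LSTM captioner is pre-trained on general captions (the checkpoint name below is hypothetical), after which the visual encoder is frozen and only the language-side parameters are fine-tuned on the small expertise-caption set. The architecture and the freezing choice are illustrative, not the paper's exact design; the proposed Character-Independent Transfer-learning would repeat such a fine-tuning run independently per characteristic.

```python
# Hedged sketch: pre-train on general data, then fine-tune only the decoder.
import torch
import torch.nn as nn
from torchvision import models

class Captioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop fc head
        self.img_proj = nn.Linear(512, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)            # (B, 512)
        feats = self.img_proj(feats).unsqueeze(1)          # (B, 1, E)
        seq = torch.cat([feats, self.embed(captions)], 1)  # prepend image token
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                            # logits over vocab

model = Captioner(vocab_size=10000)
# model.load_state_dict(torch.load("mscoco_pretrained.pt"))  # hypothetical checkpoint

# "Transplant": freeze the general-purpose encoder, adapt the language side
# on the small expertise-caption set (about 300 pairs in the paper).
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```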

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular are irregular because of interdependence, which makes their causes hard to determine. Previous studies of failure prediction in data centers treated each server as a single, isolated state, without assuming interaction between devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring inside the server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur in isolation: one server's failure may cause failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that one server does not affect the others, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device were sorted in chronological order, and when a failure occurred in a specific device, any failure occurring in other equipment within 5 minutes of that time was defined as simultaneous. After constructing sequences of devices that failed together, the 5 devices that most frequently failed simultaneously within those sequences were selected, and their joint failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to account for the fact that each server contributes differently to a complex failure; this method increases prediction accuracy by weighting each server according to its impact on the failure. The study proceeded from defining failure types and selecting analysis targets to two experiments. In the first experiment, the same collected data was treated once as a single-server state and once as a multi-server state, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another, and it confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold to each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that predicts failures of servers in data centers. The results are expected to help prevent failures in advance.
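
A minimal sketch of the modeling idea, assuming per-server LSTM encodings with a softmax attention over servers; the layer sizes, attention form, and input layout are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch: encode each server's metric sequence with an LSTM, then
# attend over servers so that servers with greater impact get higher weight.
import torch
import torch.nn as nn

class ServerAttentionPredictor(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)    # scores each server's summary state
        self.head = nn.Linear(hidden, 1)   # complex-failure probability

    def forward(self, x):                  # x: (batch, servers, time, features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)        # (batch, servers, hidden)
        w = torch.softmax(self.att(h), dim=1)   # attention weight per server
        ctx = (w * h).sum(dim=1)           # weighted multi-server context
        return torch.sigmoid(self.head(ctx)), w.squeeze(-1)

model = ServerAttentionPredictor(n_features=8)
prob, weights = model(torch.randn(4, 5, 60, 8))  # 5 servers, 60 time steps
# A per-server decision threshold (tuned separately, as in the second
# experiment) would then be applied to each server's predicted probability.
```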

The Evaluation of Proficiency Test between Radioimmunoassay and Chemiluminescence Immunoassay (방사면역측정법과 화학발광면역측정법간의 숙련도 비교평가)

  • Noh, Gyeong-Woon;Kim, Tae-Hoon;Kim, Ji-Young;Kim, Hyun-Joo;Lee, Ho-Young;Choi, Joon-Young;Lee, Byoeng-Il;Choe, Jae-Gol;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.116-124 / 2011
  • Purpose: To establish an accurate external quality assurance (EQA) test, cross-institution and cross-modality tests were performed using WHO-certified reference material (CRM) and the same pooled patient serum. Materials and Methods: Accuracy and precision were evaluated using CRM and pooled patient serum for AFP, CEA, PSA, CA 125, CA 19-9, T3, T4, Tg, and TSH; recovery tests and coefficients of variation were used as the respective measures. RIA tests were performed in 5 major RIA laboratories and EIA (CLIA) tests in 5 major EIA laboratories, with identical samples of CRM and pooled serum delivered to each laboratory. Results: In 2009, the mean precision over all tumor markers was 14.8±4.2% for RIA and 19.2±6.9% for EIA (CLIA). In 2010, the mean precision over the 5 tumor markers plus T3, T4, Tg, and TSH was 13.8±6.1% for RIA and 15.5±7.7% for EIA (CLIA); there was no significant difference between RIA and EIA. In RIA, the coefficients of variation (CV) of AFP, CEA, PSA, CA 125, T3, T4, and TSH were within 20%. The CV of CA 19-9 exceeded 20%, but did not differ significantly from EIA (CLIA) (p=0.345). In the recovery test using CRM, AFP, PSA, T4, and TSH showed 92~103% recovery in RIA. In the recovery test using commercial material, CEA, CA 125, and CA 19-9 showed somewhat lower recovery than with CRM, but there was no significant difference between RIA and EIA (CLIA). Conclusion: By evaluating the precision and accuracy of each test, the EQA program can measure the quality of each test and the performance of each laboratory more accurately.
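
The two statistics behind these results are simple to state; a minimal sketch follows, with hypothetical sample values rather than the paper's raw data.

```python
# Hedged sketch of the evaluation statistics used in the abstract:
# coefficient of variation (precision) and recovery rate (accuracy).
import statistics

def coefficient_of_variation(values):
    """CV (%) = standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def recovery(measured, certified):
    """Recovery (%) = measured concentration / certified CRM value * 100."""
    return measured / certified * 100

afp_runs = [9.8, 10.4, 10.1, 9.9, 10.2]   # hypothetical repeated AFP results
print(f"CV: {coefficient_of_variation(afp_runs):.1f}%")      # ~2.4%
print(f"Recovery: {recovery(10.1, 10.0):.0f}%")              # ~101%
```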


Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy; bankruptcy prediction is therefore an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later, many studies began utilizing artificial-intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification combines multiple classifiers to obtain more accurate predictions than individual models, and ensemble learning is known to be very useful for improving the generalization ability of a classifier. The base classifiers in an ensemble must be as accurate and diverse as possible to enhance the generalization ability of the ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers: each ensemble member is trained on a randomly chosen feature subspace, and the members' predictions are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust to variations in the dataset but very sensitive to changes in the feature space, which makes KNN a good base classifier for the random subspace method, and the KNN random subspace ensemble has been shown to be very effective at improving on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble, yet few studies have focused on optimizing them. This study proposes a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to optimize the ensemble and improve its prediction accuracy (a sketch of the idea follows below). The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable; of these, 24 financial ratios were then selected by logistic-regression backward feature selection. The complete dataset was separated into training and validation parts, and the training dataset was further divided into a portion for training the model and a portion for avoiding overfitting: prediction accuracy on the latter determined the fitness value. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. The classification accuracy of the proposed model was compared with that of the other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed the other models, including the single KNN model and the plain random subspace ensemble.
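
A minimal sketch of the simultaneous optimization described above, assuming a simple encoding in which each base classifier carries its own k value and boolean feature mask, a majority-vote aggregator, and a bare-bones mutation-only GA; all sizes, rates, and the toy data are illustrative, not the paper's settings.

```python
# Hedged sketch: a GA searches both the k value and the feature subset of
# each KNN base classifier in a random-subspace-style ensemble.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_BASE, N_FEAT, K_CHOICES = 5, 24, [1, 3, 5, 7, 9]

def random_genome():
    # One (k index, boolean feature mask) pair per base classifier.
    return [(rng.integers(len(K_CHOICES)), rng.random(N_FEAT) < 0.5)
            for _ in range(N_BASE)]

def fitness(genome, X_tr, y_tr, X_val, y_val):
    votes = []
    for k_idx, mask in genome:
        if not mask.any():
            return 0.0                       # penalize empty feature subsets
        clf = KNeighborsClassifier(n_neighbors=K_CHOICES[k_idx])
        clf.fit(X_tr[:, mask], y_tr)
        votes.append(clf.predict(X_val[:, mask]))
    majority = (np.mean(votes, axis=0) >= 0.5).astype(int)
    return (majority == y_val).mean()        # holdout accuracy as fitness

def mutate(genome, rate=0.1):
    out = []
    for k_idx, mask in genome:
        if rng.random() < rate:              # occasionally change k
            k_idx = rng.integers(len(K_CHOICES))
        flip = rng.random(N_FEAT) < rate     # flip a few feature bits
        out.append((k_idx, np.where(flip, ~mask, mask)))
    return out

# Toy data standing in for the 24 selected financial ratios.
X = rng.standard_normal((400, N_FEAT))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

pop = [random_genome() for _ in range(20)]
for gen in range(10):                        # simple elitist GA loop
    pop.sort(key=lambda g: fitness(g, X_tr, y_tr, X_val, y_val), reverse=True)
    pop = pop[:10] + [mutate(g) for g in pop[:10]]
best = pop[0]
```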

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.19-38 / 2017
  • The collaborative filtering (CF) algorithm has been popularly used for implementing recommender systems, and many prior studies have sought to improve its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF with additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. The proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it amplifies the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust network, and attenuates the similarity when the neighbor has higher in-degree centrality in the distrust network. The algorithm considers four types of user relationship in total, direct trust, indirect trust, direct distrust, and indirect distrust, and uses four adjusting coefficients that control the level of amplification or attenuation for the in-degree centrality values derived from the direct and indirect trust and distrust networks. Genetic algorithms (GA) were adopted to determine the optimal adjusting coefficients, and accordingly we named the proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set called the 'Extended Epinions dataset', provided by trustlet.org, which contains user responses (rating scores and reviews) after purchasing specific items (e.g. cars, movies, music, books) as well as trust / distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA); UCINET 6 was used to calculate the in-degree centrality of the trust / distrust networks, and Palisade Software's Evolver, a commercial genetic-algorithm package, was used for optimization. To examine the effectiveness of the proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and ignores trust / distrust relationships entirely. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust / distrust relationships and does not use GA optimization. The performance of the proposed algorithm and the comparison models was evaluated by average MAE (mean absolute error). The experiments showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively, which implies that distrust relationships between users are more important than trust relationships in recommender systems. In terms of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust / distrust information, outperformed conventional CF (avg. MAE = 0.112638) and also showed better accuracy than SNACF (avg. MAE = 0.112209). Paired-samples t-tests confirmed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust / distrust relationships can be important information for improving the performance of recommendation algorithms; distrust information in particular had the greater impact, implying that more attention should be paid to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
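
A minimal sketch of the similarity adjustment described above. The multiplicative amplification/attenuation form, the default coefficients, and the normalization of centralities are assumptions made for illustration; the paper's exact equations and its four separate coefficients are not reproduced here.

```python
# Hedged sketch: adjust a CF similarity by a neighbor's in-degree centrality
# in the trust network (amplify) and the distrust network (attenuate).
def adjusted_similarity(pearson_sim, trust_in_deg, distrust_in_deg,
                        a_trust=1.0, a_distrust=1.0):
    """pearson_sim: base CF similarity between target user and neighbor.
    trust_in_deg / distrust_in_deg: neighbor's in-degree centrality,
    assumed normalized to [0, 1]. Coefficients a_* are the adjusting
    coefficients that the paper tunes with a GA."""
    amplified = pearson_sim * (1 + a_trust * trust_in_deg)
    return amplified * (1 - min(1.0, a_distrust * distrust_in_deg))

# Example: a neighbor who is widely trusted (0.3) and rarely distrusted (0.05)
print(adjusted_similarity(0.62, trust_in_deg=0.3, distrust_in_deg=0.05))
```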

A Study of the Effect of the Permeability and Selectivity on the Performance of Membrane System Design (분리막 투과도와 분리도 인자의 시스템 설계 효과 연구)

  • Shin, Mi-Soo;Jang, Dongsoon;Lee, Yongguk
    • Journal of Korean Society of Environmental Engineers / v.38 no.12 / pp.656-661 / 2016
  • Manufacturing membrane materials with both high selectivity and high permeability is desirable but practically impossible, since permeability and selectivity are usually inversely related. From the viewpoint of reducing the cost of CO2 capture, module performance is even more important than the performance of the membrane material itself; it is governed by the permeance of the membrane (P, reflected in the stage cut) and the selectivity (S). As a typical example, when a mixture of 13% CO2 and 87% N2 is fed into a module with a 10% stage cut and a selectivity of 5, the 10 parts of permeate contain 4.28 parts CO2 and 5.72 parts N2. In this case the CO2 concentration in the permeate is 42.8%, and the CO2 recovery of this first separation is 4.28/13 = 32.9%. When permeance and selectivity are both doubled, however, from 10% to 20% and from 5 to 10 respectively, the CO2 concentration in the permeate becomes 64.5% and the recovery is 12.9/13 = 99.2%; since nearly all of the CO2 is then separated, this may be regarded as the ideal condition. For a given feed concentration, the CO2 concentration in the separated gas decreases if the permeance exceeds the threshold value for complete recovery at a given selectivity; conversely, for a given permeance, increasing the selectivity beyond the threshold value does not improve the process further. For a given initial feed concentration, if permeance or selectivity is larger than that required for complete separation of CO2, the process becomes less efficient. From these considerations, we can see that there exists an optimum design for a given set of conditions.
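
The first worked case above can be reproduced with a naive single-pass mixing rule, in which each gas permeates in proportion to its feed fraction times its relative permeability (S for CO2, 1 for N2). This is an illustrative simplification, not the paper's model; the doubled case (99.2% recovery) involves feed-side depletion that this simple rule does not capture.

```python
# Hedged sketch of the abstract's first worked example.
def single_pass(feed_co2=0.13, stage_cut=0.10, selectivity=5.0):
    w_co2 = selectivity * feed_co2          # relative CO2 flux
    w_n2 = 1.0 * (1.0 - feed_co2)           # relative N2 flux
    x_co2 = w_co2 / (w_co2 + w_n2)          # CO2 fraction in permeate
    recovery = x_co2 * stage_cut / feed_co2 # permeated CO2 / CO2 in feed
    return x_co2, recovery

x, r = single_pass()
print(f"permeate CO2: {x:.1%}, recovery: {r:.1%}")  # ~42.8%, ~32.9%
```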

A Study on the Application of RTLS Technology for the Automation of Spray-Applied Fire Resistive Covering Work (뿜칠내화피복 작업 자동화시스템을 위한 RTLS 기술 적용에 관한 연구)

  • Kim, Kyoon-Tai
    • Journal of the Korea Institute of Building Construction / v.9 no.5 / pp.79-86 / 2009
  • In a steel structure, spray-applied fire resistive materials are crucial in preventing structural strength from being weakened in the event of a fire. The quality control of such materials, however, is difficult for manual workers, who are frequently in short supply, and these skilled workers are also very likely to be exposed to environmental hazards. Such problems, specifically the difficulty of quality control and the dangerous nature of the work itself, can be solved to some degree by introducing automated equipment. It is, however, very difficult to automate the work process, from operating the equipment to selecting its location, since construction-site environments are not yet structured to accommodate automation. This is a fundamental study on the feasibility of automating spray-applied fire resistive coating work. In this study, the linkability of cutting-edge RTLS (real-time location system) technology to an automation system is reviewed, and a scenario and system composition for the automation of spray-applied fire resistive coating work are presented. The system suggested in this study is still at a conceptual stage, and many restrictions remain to be resolved. Nevertheless, automation is expected to be effective in preventing fire from spreading, because it maintains the thickness of the fire-resistive coating at the specified level and secures the integrity of the coating with the steel structure, thereby preserving a certain level of strength at high temperature and enhancing fire-resistive performance. It is also expected that future research in this area, in connection with cutting-edge monitoring technologies such as the ubiquitous sensor network (USN) and building information modeling (BIM), will contribute to raising the level of construction automation in Korea, reducing costs through the systematic and efficient management of construction resources, shortening construction periods, and implementing more precise construction.