• Title/Summary/Keyword: 시스템효율 (system efficiency)

Search results: 26,295

A Study on the Characteristics of Oil-water Separation in Non-point Source Control Facility by Coalescence Mechanism of Spiral Buoyant Media (나선형 부유 고분자 여재의 Coalescence 특성을 이용한 비점오염원 저감시설의 유수분리특성 연구)

  • Kang, Sung-Won;Kim, Seog-Ku;Kim, Young-Im;Yun, Sang-Leen;Kim, Soo-Hae;Kim, Mee-Kyung
    • Journal of Korean Society of Environmental Engineers / v.29 no.8 / pp.950-955 / 2007
  • A non-point source control system that had been designed only for oil-water separation in oil refineries and garages was upgraded in this research to remove runoff pollutants in impervious urban areas. Pollutants, including oil from driveways and bridges, are eliminated through two pathways in the system. One is the coalescence mechanism: oil droplets in the runoff come into contact with one another on the surface of the spiral buoyant media and form larger coalesced droplets that rise to the oil layer. The other is precipitation, in which solids in the runoff settle by gravity within the system. In this research, the coalescing characteristics of oil-water separation were investigated through image analyses, and the efficiencies of the non-point source control system were evaluated using driveway dust and waste engine oil. SEM photographs and BET analysis showed that the media, made of high-density, high-molecular-weight polyethylene, had an irregular helical shape and a smooth surface. The directly measured surface area and density of the media were 1,428 $mm^2$ and 45.3 $kg/m^3$, respectively. Image analysis of oil-droplet photographs taken under a microscope clearly showed that coalescence was the main pathway for removing oil from the runoff. Finally, the removal efficiencies of the non-point source control system filled with the media were $86.6\sim95.2%$ for suspended solids, $87.3\sim95.4%$ for $COD_{Cr}$, and $71.8\sim94.8%$ for n-Hexane extractable materials.
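
For context on how removal efficiencies like the ranges reported above are conventionally computed (general background, not code from the paper), here is a minimal sketch assuming hypothetical influent and effluent concentrations:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal from influent and effluent concentrations (e.g., mg/L)."""
    return (c_in - c_out) / c_in * 100.0

# hypothetical suspended-solids concentrations before and after the facility
print(f"{removal_efficiency(150.0, 12.0):.1f}% removed")  # -> 92.0% removed
```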

Evaluation on Usefulness of Abdomen and Chest Motion Control Device (ABCHES) for the Tumor with a Large Respiratory Motion in Radiotherapy (호흡으로 인한 움직임이 큰 종양의 방사선치료 시 Abdomen and Chest Motion Control Device (ABCHES)의 유용성 평가)

  • Cho, Yoon-Jin;Jeon, Mi-Jin;Shin, Dong-Bong;Kim, Jong-Dae;Kim, Sei-Joon;Ha, Jin-Sook;Im, Jung-Ho;Lee, Ik-Jae
    • The Journal of Korean Society for Radiation Therapy / v.24 no.2 / pp.85-93 / 2012
  • Purpose: It is essential to minimize respiratory-induced motion of the involved organs during Tomotherapy for tumors located in the chest and abdominal regions. However, the application of breathing control systems to Tomotherapy is limited. This study aimed to investigate the applicability of the ABCHES system and its efficacy as a means of breathing control in Tomotherapy treatment. Materials and Methods: Five subjects were treated with a Hi-Art Tomotherapy system for lung, liver, gallbladder and pancreatic tumors. All patients were trained in two breathing methods using an ABCHES: free breathing and shallow breathing. Once the patients could carry out the breathing control, 4D-CT scans were performed and a total of 10 4D tomographic images were acquired. A radiology resident manually delineated the tumor region and surrounding normal organs on the CT images at the inhalation phase, the exhalation phase and the 40% phase (mid-inhalation), as well as on the average CT image. These CT images were then exported to the Tomotherapy planning station, and the exported data were analyzed to quantify the characteristics of the dose-volume histograms and the motion of the tumors. Organ motion under free breathing and shallow breathing was examined in six directions, and radiation exposure to the surrounding organs was also measured and compared. Results: Organ motion was counted in the six directions for displacements of more than 5 mm. A total of 12 such motions occurred during free breathing, whereas this decreased to 2 during shallow breathing with the ABCHES. Based on the quantitative analysis of the dose-volume histograms, shallow breathing showed lower values than free breathing in every measure: the treatment volume, the radiation dose to the tumor and to two surrounding normal organs (mean doses), and the volume of healthy tissue exposed to radiation were all lower in the shallow breathing state. Conclusion: This study suggests that the use of ABCHES is effective for Tomotherapy treatment, as it makes shallow breathing easy for patients; respiratory-induced tumor motion is minimized, and radiation exposure to surrounding normal tissues is reduced as a result.


A Study on the Dynamic Behavior Characteristics of a Small Fishing Crane (소형 어로 크레인의 동적 거동 특성에 관한 연구)

  • 이원섭;이대재
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.37 no.3 / pp.163-173 / 2001
  • The dynamic behavior characteristics of a small fishing crane for inshore and coastal fishing vessels were experimentally analyzed in order to improve fishing operations and to considerably reduce the manual work of fishermen. The small fishing crane was designed to be controlled electro-hydraulically by means of proportional valves and solenoid valves, and to allow the speed of each operation to be controlled. The dynamic behavior characteristics were investigated by measuring changes in parameters such as oil pressure, load swing angle, load tension, and the lifting and swing angles of the crane arm when a test load was applied to the arm extended sideways. The results obtained are summarized as follows: 1. The designed small fishing crane can be controlled proportionally by means of the proportional valves and rapidly by operating the solenoid valves. The capacity, turning angle and maximum reach of the crane were 2 T-M, $180^\circ$ and 3.7 m, respectively. 2. The vertical change of the crane arm with the extension of the lifting cylinder was $1.2^\circ$/cm, and the swing speed of the crane arm due to the extension of the swing cylinder under on/off operation of the solenoid valves was $15^\circ$/sec, with a swing period of 1.4 sec and an angle fluctuation of $\pm11.0^{\circ}$. 3. When the horizontal and vertical positions of the lifted load were changed simultaneously by on/off operation of the solenoid valves, the swing and lifting speeds of the crane arm were $4.46^\circ$/sec and $6.4^\circ$/sec, respectively. 4. The movements of the designed crane were noticeably smoother when controlled with the proportional valves than with the solenoid valves.


Estimation of Evaporation Rate of Swine Slurry Using the Natural Evaporation System(NES) in summer (여름철 자연증발시스템(NES)의 돈슬러리 증발효율 평가)

  • Kim, K.Y.;Choi, H.L.;Kim, J.G.
    • Journal of Animal Science and Technology / v.44 no.4 / pp.459-474 / 2002
  • The purpose of this study was to establish the optimal operating conditions of the natural evaporation system (NES), which was used to reduce swine slurry. In particular, the main aim was to estimate the effect of climate condition (clear vs. rainy) and spray type (batch vs. flow) on the evaporation rate of swine slurry in the NES during summer. The experiment was performed from June to August, generally regarded as summer in Korea, using the batch spray type in 2000 and the flow spray type in 2001. For the batch and flow types, the average evaporation rate was 2.71 and 3.59 l/ton·$m^2$·day on clear days and 0.62 and 0.66 l/ton·$m^2$·day on rainy days, respectively. Based on the evaporation rates calculated by climate condition and spray type, the average reduction rate for the total input (1 t/day) was 15.99% on clear days and 3.19% on rainy days, and the evaporation rate of the flow type was approximately 5% higher than that of the batch type. Therefore, it was concluded that supplementary equipment, such as a fan, should be operated on rainy days, and that the flow spray type rather than the batch type is recommended to increase the evaporation rate in the NES.
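
As a unit-bookkeeping illustration only (not taken from the paper), the sketch below converts an evaporation rate expressed in l/ton·$m^2$·day into a daily volume-reduction percentage for a given input load; the evaporation surface area and the 1,000 l/ton conversion are assumptions.

```python
def daily_reduction_percent(evap_rate_l_per_ton_m2_day, input_ton_per_day,
                            surface_area_m2, litres_per_ton=1000.0):
    """Fraction of the daily slurry input removed by evaporation (illustrative)."""
    evaporated_l = evap_rate_l_per_ton_m2_day * input_ton_per_day * surface_area_m2
    input_l = input_ton_per_day * litres_per_ton
    return evaporated_l / input_l * 100.0

# hypothetical 50 m^2 evaporation surface, clear-day flow-type rate from the abstract
print(f"{daily_reduction_percent(3.59, 1.0, 50.0):.1f}%")  # ~18% of a 1 t/day input
```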

Review on Usefulness of EPID (Electronic Portal Imaging Device) (EPID (Electronic Portal Imaging Device)의 유용성에 관한 고찰)

  • Lee, Choong Won;Park, Do Keun;Choi, A Hyun;Ahn, Jong Ho;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy / v.25 no.1 / pp.57-67 / 2013
  • Purpose: The film that was formerly used to verify patient set-up and to perform dosimetry during radiation therapy is increasingly being replaced by EPID-equipped devices. Accordingly, this article evaluated the accuracy of position verification and the usefulness of dosimetry when using an electronic portal imaging device. Materials and Methods: Fifty publications retrieved by searching the Korean Society of Radiotherapeutic Technology, the Korean Society for Radiation Oncology and PubMed with the index terms "EPID", "Portal dosimetry", "Portal image", "Dose verification", "Quality control", "Cine mode", "Quality assurance" and "In vivo dosimetry" were analyzed, and the usefulness of EPID was assessed by classifying them into the history of EPID and dosimetry, set-up verification, and the characteristics of EPID. Results: EPID has developed from the first generation (liquid-filled ionization chamber), through the second generation (camera-based fluoroscopy), to the third generation (amorphous-silicon EPID). Imaging modes can be divided into EPID mode, Cine mode and Integrated mode. When evaluating absolute dose accuracy, EPID showed errors within 1% and EDR2 film within 3%, confirming that EPID measures errors more accurately than film. In gamma analysis comparing the reference dose plane calculated by the treatment planning system with the planes measured by EDR2 film and EPID, both film and EPID showed fewer than 2% of pixels with gamma values exceeding 1 ($\gamma$ > 1) at the 3%/3 mm and 2%/2 mm criteria, respectively. Comparing the workload for full-course IMRT QA, EDR2 film required approximately 110 minutes and EPID approximately 55 minutes. Conclusion: EPID can easily replace the conventional, complicated and cumbersome film and ionization chamber formerly used for dosimetry and set-up verification, and it proved to be a highly efficient and accurate dosimetry device for quality assurance of IMRT (intensity modulated radiation therapy). Since cine-mode imaging with EPID allows tumors to be located in real time without additional dose in the lung and liver, which move with the diaphragm, and in rectal cancer patients with unstable positioning, it may help to implement optimal radiotherapy for patients.
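
As an illustration of the gamma analysis mentioned in the abstract (not code from the review or from any cited system), the sketch below computes a simplified global 2D gamma index for two dose planes under a 3%/3 mm criterion; the dose arrays, pixel spacing and function names are hypothetical.

```python
import numpy as np

def gamma_index(ref, meas, spacing_mm=1.0, dose_crit=0.03, dist_crit_mm=3.0):
    """Simplified global 2D gamma index (brute force, no interpolation).

    ref, meas : 2D dose arrays on the same grid (hypothetical data).
    dose_crit : dose-difference criterion as a fraction of the max reference dose.
    dist_crit_mm : distance-to-agreement criterion in mm.
    """
    dmax = ref.max()
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    gamma = np.zeros_like(ref, dtype=float)
    for i in range(ny):
        for j in range(nx):
            # squared distance (mm^2) from evaluation point (i, j) to every reference point
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * spacing_mm ** 2
            # squared dose difference, normalized to the global dose criterion
            dose2 = ((ref - meas[i, j]) / (dose_crit * dmax)) ** 2
            gamma[i, j] = np.sqrt((dist2 / dist_crit_mm ** 2 + dose2).min())
    return gamma

# hypothetical planned and measured dose planes; report fraction of pixels with gamma > 1
ref = np.random.rand(40, 40) * 2.0
meas = ref + np.random.normal(0, 0.02, ref.shape)
g = gamma_index(ref, meas, spacing_mm=1.0)
print("pixels with gamma > 1: %.2f%%" % (100 * (g > 1).mean()))
```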


Comparison of Three Different Helmet Bolus Device for Total Scalp Irradiation (Total Scalp의 방사선 치료 시 Helmet Bolus 제작방법에 관한 연구)

  • Song, Yong-Min;Kim, Jong-Sik;Hong, Chae-Seon;Ju, Sang-Gyu;Park, Ju-Young;Park, Su-Yeon
    • The Journal of Korean Society for Radiation Therapy / v.24 no.1 / pp.31-37 / 2012
  • Purpose: This study evaluated the usefulness of helmet bolus devices made of Bolx-II, paraffin wax and solid thermoplastic material for total scalp irradiation. Materials and Methods: Using a Rando phantom, we applied Bolx-II (Action Products, USA), paraffin wax (Densply, USA) and solid thermoplastic material (Med-Tec, USA) over the whole scalp to make each helmet bolus device. Computed tomography (GE, Ultra Light Speed16) images were acquired at 5 mm slice thickness. We then established the optimal treatment plan and analyzed the density variation of each bolus (Philips, Pinnacle). To evaluate the dose distribution, the dose homogeneity index (DHI, $D_{90}/D_{10}$) and conformity index (CI, $V_{95}/TV$) of the Clinical Target Volume (CTV) were obtained from the dose-volume histogram (DVH), together with $V_{20}$ and $V_{30}$ of normal brain tissue. We assessed the efficiency of the production process by measuring the total time taken to produce each device, and thermoluminescent dosimeters (TLD) were used to verify the accuracy. Results: The densities of Bolx-II, paraffin wax and solid thermoplastic material were $0.952{\pm}0.13g/cm^3$, $0.842{\pm}0.17g/cm^3$ and $0.908{\pm}0.24g/cm^3$, respectively. The DHI and CI of the helmet bolus devices made of Bolx-II, paraffin wax and solid thermoplastic material were 0.89, 0.85, 0.77 and 0.86, 0.78, 0.74, respectively, with Bolx-II giving the best results. $V_{20}$ and $V_{30}$ of brain tissue were 11.50%, 10.80%, 10.07% and 7.62%, 7.40%, 7.31%, respectively. Production took 30, 120 and 90 minutes, respectively. The measured TLD results were within ${\pm}7%$ of the planned values. Conclusion: Applying a Bolx-II helmet bolus during total scalp irradiation not only improves the homogeneity and conformity of the Clinical Target Volume but also requires little production time and a simple production method. Thus, the Bolx-II helmet bolus is considered useful for clinical use.
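
As a quick illustration of the two indices used above (not taken from the paper), the sketch below computes DHI = $D_{90}/D_{10}$ and CI = $V_{95}/TV$ from hypothetical per-voxel CTV doses; the array names and sample values are assumptions.

```python
import numpy as np

def dhi_and_ci(doses_gy, prescription_gy, target_volume_cc):
    """Dose homogeneity index (D90/D10) and conformity index (V95/TV)
    computed from per-voxel doses sampled inside the CTV (hypothetical input)."""
    # D90 / D10: dose received by at least 90% / 10% of the volume
    d90 = np.percentile(doses_gy, 10)   # 90% of voxels get at least this dose
    d10 = np.percentile(doses_gy, 90)   # 10% of voxels get at least this dose
    dhi = d90 / d10

    # V95: volume receiving at least 95% of the prescription dose
    voxel_volume = target_volume_cc / len(doses_gy)
    v95 = (doses_gy >= 0.95 * prescription_gy).sum() * voxel_volume
    ci = v95 / target_volume_cc
    return dhi, ci

# hypothetical example: 10,000 voxel doses around a 30 Gy prescription
doses = np.random.normal(30.0, 1.0, 10_000)
print(dhi_and_ci(doses, prescription_gy=30.0, target_volume_cc=500.0))
```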


Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE:Computer Systems and Theory / v.33 no.4 / pp.242-255 / 2006
  • In recent years, the popularization of mobile devices such as smart phones, PDAs and MP3 players has rapidly increased the need for power management technology, which is one of the most essential factors in mobile devices. The hard disk, despite its low price, offers large capacity and high speed, and it can now be made small enough for mobile devices; however, it consumes too much power to be embedded in them without care. Motivated by this, in this paper we propose and evaluate methods for minimizing power consumption while playing back multimedia data from the disk in real time. The strict power-consumption limits of mobile devices have a large impact on both hardware and software design. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. Because of this, the disk drive must remain in the active state for the entire playback duration, which is a great burden from a power-management point of view. The legacy power-management function of a mobile disk drive degrades the quality of multimedia playback because of excessive I/O requests issued while the disk is in the standby state. Therefore, in this paper we analyze the power-consumption profile of the disk drive in detail and develop an algorithm that plays multimedia data effectively while using less power. The algorithm calculates the number of data blocks to be read and the durations of the active and standby states, and from these it performs optimal scheduling that ensures continuous playback of the data blocks stored on the mobile disk drive. We implemented the algorithm in publicly available MPEG player software, which saves up to 60% of power consumption compared with a disk drive kept in the active state at all times, and 38% compared with a disk drive controlled by its native power-management method.
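
As a rough illustration of the kind of calculation the abstract describes (not the authors' actual algorithm), the sketch below estimates how many blocks to prefetch per burst and how long the disk can remain in standby while a playback buffer drains; every parameter value is a hypothetical assumption and the model deliberately ignores second-order effects.

```python
def burst_schedule(bitrate_bps, block_size_bytes, buffer_bytes,
                   read_rate_bps, spinup_s, active_w, standby_w):
    """Coarse estimate of per-cycle prefetch size, standby time and average power
    for buffered multimedia playback (illustrative, hypothetical parameters)."""
    # Blocks that fit in the playback buffer per burst
    blocks_per_burst = buffer_bytes // block_size_bytes
    burst_bytes = blocks_per_burst * block_size_bytes

    # Time to refill the buffer (active, including spin-up) and to drain it at playback rate
    t_active = burst_bytes * 8 / read_rate_bps + spinup_s
    t_drain = burst_bytes * 8 / bitrate_bps
    # The disk may sleep only while the remaining buffered data covers playback
    t_standby = max(t_drain - t_active, 0.0)

    avg_power = (active_w * t_active + standby_w * t_standby) / (t_active + t_standby)
    return blocks_per_burst, t_standby, avg_power

# hypothetical numbers: 1.5 Mbps MPEG stream, 4 KiB blocks, 8 MiB buffer,
# 100 Mbps sustained read rate, 1.5 s spin-up, 2.0 W active, 0.2 W standby
print(burst_schedule(1.5e6, 4096, 8 * 2**20, 100e6, 1.5, 2.0, 0.2))
```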

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.109-130 / 2011
  • One of the major problems in data mining is the sheer size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases; transactions on the internet, mobile devices and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size, while others are lost as soon as they are created because they are never saved. How to use such large data sets, and how to use streaming data efficiently, are challenging questions in data mining. Stream data is a data set that is accumulated continuously into data storage from a data source, and in many cases its size grows increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money and time, and these characteristics make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of rule sets over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it requires much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model, so a prompt response to user requests is possible at any time; this makes real-time decision making possible, which is the greatest advantage of this method. Based on the theory of ensemble approaches, combining many different models can produce a better prediction model, and the consolidated rule set covers the entire data set while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that its characteristics vary over time: stock market indexes fluctuate whenever an event influences the market, so the variance of each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with the method suggested here. The first approach induces a rule set from the most recent data set to predict new data; the second induces a rule set, every time a prediction is needed, from all the data accumulated since the beginning. We found that neither is as good in performance as the accumulated rule set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model only with the more important rule sets, and the second uses all the rule sets by assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. One limitation is that its application here is bounded to stock market data; a more dynamic real-time stream data set is desirable for applying this method. Another issue is that, as the number of rules increases over time, special rules such as redundant or conflicting rules must be managed efficiently.
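
As a toy sketch of the accumulation idea described above (not the authors' implementation), the code below mines a trivial rule set from each incoming stream chunk and merges it into a master rule set that is always ready for prediction; the rule representation, support bookkeeping and stock-pattern labels are simplified assumptions.

```python
from collections import defaultdict

def mine_rules(chunk):
    """Toy rule miner: map each observed feature pattern to its majority label.
    `chunk` is a list of (features_tuple, label) pairs (hypothetical format)."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, label in chunk:
        counts[features][label] += 1
    return {f: max(labels, key=labels.get) for f, labels in counts.items()}

class MasterRuleSet:
    """Accumulates rules mined from successive stream chunks and serves
    real-time predictions from whatever has been learned so far."""
    def __init__(self):
        self.rules = {}                   # pattern -> predicted label
        self.support = defaultdict(int)   # (pattern, label) -> chunks that voted for it

    def accumulate(self, chunk):
        for pattern, label in mine_rules(chunk).items():
            # adopt the new label only if its accumulated support is at least
            # as strong as the support of the currently stored rule
            if self.support[(pattern, label)] + 1 >= self.support.get(
                    (pattern, self.rules.get(pattern)), 0):
                self.rules[pattern] = label
            self.support[(pattern, label)] += 1

    def predict(self, features, default="hold"):
        return self.rules.get(features, default)

# hypothetical usage with a two-feature stock pattern and buy/sell labels
master = MasterRuleSet()
master.accumulate([(("up", "high_vol"), "buy"), (("down", "low_vol"), "sell")])
master.accumulate([(("up", "high_vol"), "buy")])
print(master.predict(("up", "high_vol")))   # -> 'buy'
```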

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To enhance competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in management decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as people, parts and other facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast therefore appears critical to manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In this research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (i.e., Self-Organizing Maps; SOM). We believe this is one of the first attempts to apply unsupervised artificial neural network based machine learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM in a new forecasting domain, service demand forecasting; (2) we combined CBR and SOM to overcome the limitations of traditional statistical forecasting methods; and (3) we developed a service forecasting tool based on the proposed approach using an unsupervised artificial neural network and case-based reasoning. In this research, we conducted an empirical study on a real digital TV manufacturer (Company A) and empirically evaluated the proposed approach and tool using real sales and service-related data from this manufacturer. In our experiments, we explore the performance of the proposed service forecasting framework compared with two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecasting 144 times; each time, input data were randomly sampled for each framework. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test with the 144 MAPE measurements for the three service forecasting approaches: the F-ratio is 67.25 and the p-value is 0.000, meaning the difference between the MAPE of the three approaches is significant at the 0.000 level. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE are significantly different from which others.
In terms of MAPE, Tukey's HSD post hoc test grouped the three different service forecasting approaches into three different subsets in the following order: our proposed approach > traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed the traditional CBR based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides some research background information such as summary of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on Case-based Reasoning and Self-Organizing Maps, while the empirical evaluation results are summarized in Section 4. Conclusion and future research directions are finally discussed in Section 5.
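
To make the CBR-plus-SOM combination concrete (an illustrative sketch, not the authors' tool), the code below clusters historical cases with a heavily simplified SOM (best-matching-unit updates only, no neighbourhood function) and forecasts demand for a new case from the nearest past cases in the matching cluster; the feature layout and demand values are hypothetical.

```python
import numpy as np

class TinySOM:
    """Very simplified 1-D self-organizing map used to cluster historical cases
    (BMU-only competitive updates; a real SOM would also update neighbours)."""
    def __init__(self, n_units=4, n_features=3, lr=0.3, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_units, n_features))
        self.lr, self.epochs = lr, epochs

    def fit(self, X):
        for _ in range(self.epochs):
            for x in X:
                bmu = np.argmin(((self.w - x) ** 2).sum(axis=1))
                self.w[bmu] += self.lr * (x - self.w[bmu])  # pull the BMU toward the case
        return self

    def assign(self, x):
        return int(np.argmin(((self.w - x) ** 2).sum(axis=1)))

def cbr_forecast(X, demand, x_new, som, k=3):
    """Forecast demand for x_new as the mean demand of the k most similar past
    cases in the same SOM cluster (falls back to plain CBR if the cluster is empty)."""
    cluster = som.assign(x_new)
    idx = [i for i, x in enumerate(X) if som.assign(x) == cluster] or range(len(X))
    idx = sorted(idx, key=lambda i: np.linalg.norm(X[i] - x_new))[:k]
    return float(np.mean([demand[i] for i in idx]))

# hypothetical cases: [units sold, product age, season index] -> service demand
X = np.array([[100, 1, 0], [120, 2, 1], [90, 1, 0], [200, 3, 1]], dtype=float)
demand = np.array([10, 15, 9, 25], dtype=float)
som = TinySOM().fit(X)
print(cbr_forecast(X, demand, np.array([110.0, 1.0, 0.0]), som))
```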

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has attracted many researchers interested in managing such huge volumes of information. It also calls for the ability to classify relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have been based on proposing a new algorithm or modifying an existing one, and such research can be said to have reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on finding a way to modify how the data are used. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build a classifier on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary of each document, so if the viewpoints of the training data and target data differ, the features may also differ between the two. In this study, we attempt to improve classification accuracy by artificially injecting noise into the process of constructing the document classifier, thereby strengthening its robustness. Data coming from various kinds of sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms because they are not designed to recognize different types of data representation at once and to combine them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in this study. However, unlabeled data may degrade the performance of the document classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter and blogs.
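
As a highly simplified sketch of the multi-view, confidence-based selection idea behind RSESLA (not the RSESLA implementation itself), the code below trains two classifiers on different feature views of the same labeled documents, adds unlabeled documents only when both views agree with sufficient confidence, and retrains on the expanded set; the documents, feature views and thresholds are all hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# hypothetical labeled documents (e.g., news) and unlabeled ones (e.g., tweets/blogs)
labeled_docs = ["stocks rally on earnings", "team wins championship game",
                "central bank raises rates", "player scores late goal"]
labels = np.array([0, 1, 0, 1])               # 0 = finance, 1 = sports
unlabeled_docs = ["markets fall after rate hike", "coach praises striker"]

# Two "views": different feature representations of the same documents
views = [TfidfVectorizer(), CountVectorizer(ngram_range=(1, 2))]
models = [MultinomialNB(), MultinomialNB()]

def fit_views(docs, y):
    """Fit each view's vectorizer and classifier on the given training documents."""
    for vec, clf in zip(views, models):
        clf.fit(vec.fit_transform(docs), y)

def confident_agreement(docs, threshold=0.6):
    """Return (index, label) pairs where both views agree with high confidence."""
    probs = [clf.predict_proba(vec.transform(docs)) for vec, clf in zip(views, models)]
    picked = []
    for i in range(len(docs)):
        l0, l1 = probs[0][i].argmax(), probs[1][i].argmax()
        if l0 == l1 and min(probs[0][i][l0], probs[1][i][l1]) >= threshold:
            picked.append((i, int(l0)))
    return picked

fit_views(labeled_docs, labels)
selected = confident_agreement(unlabeled_docs)
# retrain on the expanded training set using only confidently agreed documents
aug_docs = labeled_docs + [unlabeled_docs[i] for i, _ in selected]
aug_labels = np.concatenate([labels, [l for _, l in selected]])
fit_views(aug_docs, aug_labels)
print(selected)
```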