• Title/Summary/Keyword: 시스템 최적 (system optimization)


Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many academic studies have examined predicting the success of customer campaigns, and prediction models applying various techniques are still being developed. Recently, as campaign channels have expanded with the rapid growth of online business, companies run campaigns of many kinds at a scale incomparable to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows, and from the corporate standpoint the effectiveness of campaigns is declining: investment costs keep rising while actual success rates stay low. Accordingly, various studies aim to improve campaign effectiveness in practice. A campaign system collects and analyzes customer-related data and applies it to campaigns, with the ultimate purpose of raising the success rate of the various campaigns. In particular, recent work applies machine learning to predict campaign responses. Because campaign data contains many features, selecting the appropriate ones is very important: if all input data were used to classify a large data set, learning time would grow as the number of classification classes expands, so a minimal input data set must be extracted from the whole. Moreover, training a model on too many features can degrade prediction accuracy through overfitting or correlations between features. To improve accuracy, a feature selection technique that removes features close to noise should therefore be applied; feature selection is a necessary step in analyzing high-dimensional data sets. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are the widely used traditional techniques, but when the feature set is large they yield poor classification performance and long learning times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method when searching for the feature subsets that underpin machine learning model performance, by exploiting the statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied, which increases search efficiency and enables generalized prediction. The proposed model showed better search and prediction performance than traditional greedy algorithms: its campaign success prediction exceeded those obtained with the original data set, a greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm aids analysis and interpretation of the prediction results by reporting the importance of the derived features. Features already known statistically to be important, such as age, customer rating, and sales, were selected, but so were features that campaign planners had rarely used to select targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which were unexpectedly important for campaign response. This confirmed that base attributes can also be very important depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
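As a rough illustration of the two-stage idea in this abstract (statistical pre-filtering followed by a sequential search), the sketch below uses scikit-learn stand-ins on synthetic data. The paper's actual statistical criteria, model, and campaign data are not reproduced here, and scikit-learn's SequentialFeatureSelector implements plain SFS rather than the floating SFFS variant.

```python
# Sketch: statistically pre-rank features, drop the weakest,
# then run a sequential search over the survivors.
# Data, cutoff, and model are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)

# Stage 1: statistical pre-filtering -- keep features whose mutual
# information with the campaign response exceeds an assumed cutoff.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.where(mi > np.median(mi))[0]
X_filtered = X[:, keep]

# Stage 2: sequential (SFS-style) search over the reduced candidate set.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5, direction="forward", cv=5)
selector.fit(X_filtered, y)
selected = keep[selector.get_support()]
print("selected feature indices:", selected)
```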

The Evaluation of Forest-road Network Considering Optimum Forest-road Arrangement and Yarding Function (최적임도배치(最適林道配置) 및 집재기능(集材機能)을 고려(考慮)한 임도배치망(林道配置網) 평가(評價))

  • Park, Sang Jun; Bae, Sang Tae
    • Current Research on Agriculture and Life Sciences / v.19 / pp.45-54 / 2001
  • This study was carried out to provide fundamental data for prospective forest-road projects and forest-road network arrangement by appraising an existing forest-road network in terms of density, extension distance, maximum yarding distance, yarding area, and the position of forest-road lines, based on two theories: the "theory of optimal forest-road density," which minimizes the combined cost of yarding and road construction, and the "theory of optimal forest-road arrangement," which maximizes the investment effect. The results are as follows. 1. At each site, the density and extension distance of the existing forest-road were lower than those of the calculated forest-road, so additional forest-roads should be constructed. 2. In the site-by-site arrangement of the forest-road network, the calculated arrangement outperformed the existing one for forestry and yarding function, so the network arrangement should be designed to maximize the investment effect. 3. In mean maximum yarding distance and mean yardable area by horizontal and inclined distance, the existing networks differed from the calculated network, and the calculated network, which maximizes the investment effect, is the more effective. Hence, prospective forest-road projects should construct networks that maximize the yardable area by taking the yarding function into account.
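For readers unfamiliar with the "optimal forest-road density" theory the abstract builds on, the toy sketch below illustrates the underlying trade-off: road construction cost per hectare rises with density while yarding cost falls, and the optimum minimizes their sum. All cost figures are invented for illustration and are not from the study.

```python
# Toy illustration of the optimal forest-road density trade-off.
# All coefficients below are assumed, not taken from the paper.
import numpy as np

density = np.linspace(5, 60, 500)      # road density, m/ha

road_cost = 2.0 * density              # construction cost per ha rises with density
yard_cost = 2500.0 / density           # mean yarding distance ~ 1/density,
                                       # so yarding cost per ha falls

total = road_cost + yard_cost
opt = density[np.argmin(total)]
print(f"cost-minimizing density ~ {opt:.1f} m/ha")
```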


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since the first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms deliver asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but quite intuitive strategy in which assets are allocated to minimize portfolio risk while maximizing expected return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance portfolio is very sensitive to expected returns calculated from past price data, and corner solutions allocated to only a few assets are common. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which the well-known Markowitz mean-variance algorithm turns into an optimal portfolio. If the investor has no views on his asset classes, the Black-Litterman model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastic %K, and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which become the inputs of the intelligent views model. The price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model, combined with implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance portfolio, the risk parity portfolio, and the market portfolio. Its total return over the three-year period was 6.4%, the highest value; its maximum drawdown of -20.8% was the lowest; and its Sharpe ratio, which measures return per unit of risk, was the highest at 0.17. Overall, our view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
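The core computation the abstract describes is the Black-Litterman blending of implied equilibrium returns with a views matrix. The sketch below shows that step numerically with made-up market inputs and a single hypothetical view standing in for the SVM view model's output; it illustrates the standard Black-Litterman formulas, not the paper's implementation.

```python
# Black-Litterman combination step, with all market inputs assumed.
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00, 0.00],
                  [0.01, 0.05, 0.01, 0.00],
                  [0.00, 0.01, 0.06, 0.02],
                  [0.00, 0.00, 0.02, 0.07]])   # assumed asset covariance
w_mkt = np.array([0.4, 0.3, 0.2, 0.1])         # assumed market-cap weights
delta, tau = 2.5, 0.05                         # risk aversion and scaling (assumed)

# Reverse optimization: implied equilibrium excess returns.
pi = delta * Sigma @ w_mkt

# One hypothetical view, standing in for the SVM view model's output:
# asset 0 outperforms asset 1 by 2%.
P = np.array([[1.0, -1.0, 0.0, 0.0]])          # views matrix
Q = np.array([0.02])                           # expected view returns
Omega = P @ (tau * Sigma) @ P.T                # view uncertainty (common heuristic)

# Black-Litterman posterior expected returns.
inv_tS = np.linalg.inv(tau * Sigma)
inv_Om = np.linalg.inv(Omega)
mu_bl = np.linalg.inv(inv_tS + P.T @ inv_Om @ P) @ (inv_tS @ pi + P.T @ inv_Om @ Q)

# The posterior feeds the usual mean-variance step; unconstrained solution shown.
w_bl = np.linalg.inv(delta * Sigma) @ mu_bl
print("posterior returns:", np.round(mu_bl, 4))
print("BL weights (unnormalized):", np.round(w_bl, 3))
```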

Development of Yóukè Mining System with Yóukè's Travel Demand and Insight Based on Web Search Traffic Information (웹검색 트래픽 정보를 활용한 유커 인바운드 여행 수요 예측 모형 및 유커마이닝 시스템 개발)

  • Choi, Youji; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.155-175 / 2017
  • As social data come into the spotlight, mainstream web search engines provide data indicating how many people searched for a specific keyword: web search traffic data. Web search traffic aggregates the crowd searching for a specific keyword, and in various areas it can serve as a useful variable representing the attention of ordinary users to specific interests. Many studies use web search traffic data to nowcast or forecast social phenomena such as epidemics, consumer patterns, product life cycles, and financial investment models, and such data have also begun to be applied to predicting inbound tourism. Proper demand prediction is needed because tourism is a high value-added industry that increases employment and foreign exchange. Among inbound tourists, Chinese tourists (Youke) are growing continuously; Youke have been the largest inbound group for Korean tourism for many years, and tourism profit per Youke is the highest as well. Research into proper demand prediction for Youke is therefore important in both the public and private sectors, since accurate tourism demand prediction supports efficient decision making with limited resources. This study suggests an improved model that reflects the latest issues in society through the attention of groups of individuals. A trip abroad is generally a high-involvement activity, so potential tourists search deeply for information about their trip, and web search traffic captures their attention during trip preparation in an instantaneous and dynamic way. We therefore attempted to select keywords that potential Chinese tourists would likely search for on the internet. Baidu, the biggest Chinese web search engine with over 80% market share, provides users with access to web search traffic data. Qualitative interviews with potential tourists helped us understand pre-trip information search behavior and identify the keywords for this study. The selected keywords were categorized into three levels by how directly they relate to "Korean Tourism"; this classification helps identify which keywords explain Youke inbound demand, from closely to distantly related. Web search traffic data for each keyword were gathered by a web crawler developed to crawl the Baidu Index. Using the automatically gathered variables, a linear model was designed by multiple regression analysis, which suits operational application in decision and policy making because the effects of the variables are easy to explain. After composing the regression models, we compared a model composed of traditional variables with a model that adds the web search traffic variables, using significance tests and R-squared, and composed the final model accordingly. The final regression model improves explanatory power and offers real-time immediacy and convenience over the traditional model. Furthermore, this study demonstrates an intuitively visualized system for general use: the Youke Mining solution, which embeds the final regression model, provides several functions for tourism decision making, and rests on a data science algorithm and a well-designed simple interface. In the end, this research has three significant implications. Theoretically, the Youke Mining system and the model in this research are the first step in Youke inbound prediction using an interactive and instant variable: web search traffic, which represents tourists' attention while they prepare their trips. Practically, because Baidu holds more than 80% of the web search engine market, Baidu data can represent the attention of potential tourists preparing their own tours in real time. Finally, politically, the Chinese tourist demand prediction model based on web search traffic can be used in tourism decision making for efficient resource management and for optimizing opportunities for successful policy.
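The model-comparison step described above, a baseline regression on traditional variables versus the same regression augmented with search traffic, can be sketched as follows. The column names and data are hypothetical stand-ins for the paper's Baidu Index variables.

```python
# Compare a baseline OLS model against one augmented with a web search
# traffic variable, judged by R-squared. Data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                           # monthly observations (assumed)
df = pd.DataFrame({
    "exchange_rate": rng.normal(170, 5, n),      # traditional variables
    "relative_price": rng.normal(1.0, 0.1, n),
    "search_korea_tour": rng.normal(50, 10, n),  # Baidu traffic index (hypothetical)
})
df["youke_arrivals"] = (0.5 * df["exchange_rate"]
                        + 20 * df["search_korea_tour"]
                        + rng.normal(0, 50, n))

base = sm.OLS(df["youke_arrivals"],
              sm.add_constant(df[["exchange_rate", "relative_price"]])).fit()
full = sm.OLS(df["youke_arrivals"],
              sm.add_constant(df[["exchange_rate", "relative_price",
                                  "search_korea_tour"]])).fit()
print(f"baseline R2 = {base.rsquared:.3f}, "
      f"with search traffic R2 = {full.rsquared:.3f}")
```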

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in return volatility; his Autoregressive Conditional Heteroscedasticity (ARCH) model was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. In forecasting KOSPI 200 return volatility, the MSE metric favored the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We then suggested an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing position. The IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; the simulation results are nonetheless meaningful, since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now use. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for SVR; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for SVR. The linear kernel yields higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give better information to stock market investors.
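A minimal sketch of the SVR-based GARCH idea, under stated assumptions: the unobservable conditional variance is proxied by a rolling variance, and SVR learns the GARCH(1,1)-style mapping from the lagged squared return and lagged variance to the next variance. Simulated fat-tailed returns stand in for the KOSPI 200 data, and the kernel and hyperparameters are illustrative, not the paper's.

```python
# SVR as a substitute for MLE in fitting a GARCH(1,1)-style recursion:
# sigma_t^2 = f(r_{t-1}^2, sigma_{t-1}^2), with a rolling-variance proxy.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1500) * 0.01      # fat-tailed returns (simulated)

window = 20                                     # rolling-variance proxy window (assumed)
proxy = np.array([r[i - window:i].var() for i in range(window, len(r))])
r2 = r[window:] ** 2

# Features mirror the GARCH(1,1) recursion inputs.
X = np.column_stack([r2[:-1], proxy[:-1]])
y = proxy[1:]

split = 1187 - window                           # train/test split as in the paper
svr = SVR(kernel="rbf", C=1.0, epsilon=1e-6).fit(X[:split], y[:split])
pred = svr.predict(X[split:])
mse = np.mean((pred - y[split:]) ** 2)
print(f"out-of-sample MSE of variance forecast: {mse:.2e}")
```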

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki; Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades from 2G to 5G, mainly focusing on speed to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver those services, reduced latency and high reliability are critical for real-time services on top of high data speeds. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X), such as traffic control, the reduction of delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors; such constraints are difficult to overcome under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in an emergency during autonomous driving; in such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with conventional centralized structures struggle to meet the desired delay level, studies on the optimal size of SDNs for information processing are needed: SDNs should be separated at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In such SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN's data processing time are highly correlated with the overall delay. Of these, RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in radius and vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay.
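A back-of-the-envelope sketch of the dwell-time constraint behind this simulation: how long a vehicle remains inside a small cell for the assumed radii (50~250 m) and speeds (30~200 km/h). The SDN's information change cycle and processing time must fit inside this window; per the abstract, the RTD itself is negligible (<1 ms).

```python
# Maximum cell dwell time for the paper's assumed radii and speeds.
# The chord through the cell center (the diameter) is the longest path,
# so this is an upper bound on how long the cell serves the vehicle.
for radius_m in (50, 150, 250):
    for speed_kmh in (30, 100, 200):
        speed_ms = speed_kmh / 3.6            # km/h -> m/s
        dwell_s = (2 * radius_m) / speed_ms   # worst-case (diameter) crossing
        print(f"radius {radius_m:3d} m, speed {speed_kmh:3d} km/h "
              f"-> max dwell {dwell_s:6.2f} s")
```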

Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • Park, Sang-Hyun; Won, Jung-Im; Yoon, Jee-Hee; Kim, Sang-Wook
    • Journal of KIISE: Databases / v.31 no.5 / pp.468-478 / 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find the temporal causal relationships among network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that enables such queries to be answered efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, which is proven to be efficient in both storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of event type Ei in W, where n is the number of event types that can occur in the system of interest. The 'curse of dimensionality' may arise when n is large, so we use dimension selection or event-type grouping to avoid it. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
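The windowing step the abstract describes, building for each sliding window W the n-dimensional vector whose i-th entry is the offset from the window's first event to the first occurrence of event type Ei, might look like the sketch below. The event names follow the example query; how the paper encodes event types absent from a window is not specified, so None is used here.

```python
# Build the per-window offset vector fed to the multi-dimensional index.
from typing import Optional

EVENT_TYPES = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]  # n = 3

def window_vector(events, start_idx, width):
    """events: list of (timestamp, event_type), sorted by timestamp."""
    t0, _ = events[start_idx]
    vec: list[Optional[float]] = [None] * len(EVENT_TYPES)
    for t, etype in events[start_idx:]:
        if t - t0 > width:                 # event falls outside window W
            break
        if etype not in EVENT_TYPES:       # ignore types outside the indexed set
            continue
        i = EVENT_TYPES.index(etype)
        if vec[i] is None:                 # keep only the first occurrence
            vec[i] = t - t0
    return vec

seq = [(0, "CiscoDCDLinkUp"), (15, "MLMStatusUP"), (42, "TCPConnectionClose")]
print(window_vector(seq, start_idx=0, width=60))   # [0, 15, 42]
```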

The Evaluation of the dose calculation algorithm(AAA)'s Accuracy in Case of a Radiation Therapy on Inhomogeneous tissues using FFF beam (FFF빔을 사용한 불균질부 방사선치료 시 선량계산 알고리즘(AAA)의 정확성 평가)

  • Kim, In Woo; Chae, Seung Hoon; Kim, Min Jung; Kim, Bo Gyoum; Kim, Chan Yong; Park, So Yeon; Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.321-327 / 2014
  • Purpose: To verify the accuracy of Eclipse's dose calculation algorithm (AAA: analytic anisotropic algorithm) for radiation treatment on inhomogeneous tissues using an FFF beam, by comparing the dose distribution in the TPS with the actual distribution. Materials and Methods: After acquiring CT images by tumor location and size using solid water phantoms, cork, and a chest tumor phantom made of paraffin, we established a 6 MV photon treatment plan for chest SABR with our radiation treatment planning system, Eclipse's AAA. Following the completed plan, we used our TrueBeam STx (Varian Medical Systems, Palo Alto, CA) to irradiate the chest tumor phantom with EBT2 films inserted, and compared the dose values of the treatment plan with those measured in the phantom's inhomogeneous tissue. Results: The difference between the TPS and measured dose at the medial target was 1.28~2.7%; at the sides of the target including inhomogeneous tissues, it was 2.02~7.40% anterior, 4.46~14.84% posterior, 0.98~7.12% right, 1.36~4.08% left, 2.38~4.98% superior, and 0.94~3.54% inferior. Conclusion: This study identified the possibility of dose calculation errors caused by the FFF beam's characteristics and tissue inhomogeneity when performing SBRT on inhomogeneous tissues. SBRT, an increasingly popular therapy, demands high accuracy because it delivers a high dose in a small number of fractions. Ideal treatment should therefore be possible if planning errors are minimized through further study of the characteristics of inhomogeneous organs and of FFF beams.

Evaluation of superficial dose for Postmastectomy using several treatment techniques (유방전절제술을 시행한 환자에서 치료기법에 따른 피부선량 평가)

  • Song, Yong Min; Choi, Ji Min; Kim, Jin Man; Kwon, Dong Yeol; Kim, Jong Sik; Cho, Hyun Sang; Song, Ki Won
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.225-232 / 2014
  • Purpose: The purpose of this study was to evaluate the surface and superficial dose for patients requiring postmastectomy radiation therapy (PMRT) with different treatment techniques. Materials and Methods: Computed tomography images were acquired of a phantom (I'mRT, IBA) consisting of tissue-equivalent material, and a hypothetical chest wall and lung were outlined and modified. Five treatment techniques (wedged tangential, WT; 4-field IMRT; 7-field IMRT; TOMO DIRECT; TOMO HELICAL) were evaluated using only a 6 MV photon beam. GafChromic EBT3 film was used for surface and superficial dose measurements: surface dose profiles around the phantom were obtained for each technique, and for superficial dose, films were inserted inside the phantom and the superficial region was analyzed at depths of 1~6 mm. Results: TOMO DIRECT showed the highest surface dose at 47~70% of the prescribed dose, while 7-field IMRT showed the lowest at 35~46%. For WT, 4-field IMRT, and 7-field IMRT, superficial doses exceeded 60%, 70%, and 80% of the prescribed dose at 1 mm, 2 mm, and 5 mm depth, respectively; for TOMO DIRECT and TOMO HELICAL, over 75%, 80%, and 90% of the prescribed dose was measured at those depths. Surface and superficial dose ranges were uniform across the chest wall for 7-field IMRT and TOMO HELICAL. In contrast, because of the dose enhancement effect of oblique incidence, the dose gradually increased toward the oblique tangential angles for WT and TOMO DIRECT. Conclusion: For PMRT, TOMO DIRECT and TOMO HELICAL deliver higher surface and superficial doses than linear accelerator-based techniques, showing adequate dose (over 75% of the prescribed dose) at 1 mm depth in the skin region.

Anaerobic Organic Wastewater Treatment and Energy Regeneration by Utilizing E-PFR System (E-PER 반응기를 이용한 유기성 폐기물의 혐기성 처리와 재생에너지 생산에 관한 연구)

  • Kim, Burmshik; Choi, Hong-Bok; Lee, Jae-Ki; Park, Joo Hyung; Ji, Duk Gi; Choi, Eun-Ju
    • Journal of the Korea Organic Resources Recycling Association / v.16 no.2 / pp.57-65 / 2008
  • Wastewater containing strong organic matter is very difficult to treat in a general sewage treatment plant, but it is well suited to generating biomass energy (biogas; methane) through anaerobic digestion. The EcoDays Plug Flow Reactor (E-PFR), already proven as an excellent aerobic wastewater treatment reactor, was adapted for the anaerobic digestion of food wastewater. This research was performed to improve the efficiency of biogas production and to optimize the anaerobic wastewater treatment system. Food wastewater from the N food waste treatment plant was used for pilot-scale experiments. The results indicated that applying the E-PFR to anaerobic digestion increased both the treatment efficiency and the volume of biogas produced. The structural characteristics of the E-PFR account for this high efficiency: diaphragms divide the reactor into vertical hydraulic stages, with inversely protruding fluid transfer tubes on each diaphragm, creating gas hold-up space at the top of each stage. The E-PFR also maintains a relatively high MLSS concentration in the lower stages through the vertical up-flow of wastewater. This hydraulic flow provides high buffering capacity against shock loads, resulting in a stable pH (7.0~8.0), relatively higher treatment efficiency, and larger biogas generation. In addition, the relatively long solid retention time (SRT) in the reactor increases organic matter degradation and biogas production efficiency. These characteristics can be regarded as ideal anaerobic treatment conditions. Based on these results, anaerobic treatment plant design factors can be assessed assuming a methane content of 70%, improved biogas yield, and stable treatment efficiency; for example, internal circulation of the generated biogas and better mixing via an improved fluid transfer tube structure can further raise the biogas yield. These results can be used to build an improved renewable energy system.
