• Title/Summary/Keyword: Gas generate


A Study for Activation Measure of Climate Change Mitigation Movement - A Case Study of Green Start Movement - (기후변화 완화 활동 활성화 방안에 관한 연구 - 그린스타트 운동을 중심으로 -)

  • Cho, Sung Heum;Lee, Sang Hoon;Moon, Tae Hoon;Choi, Bong Seok;Park, Na Hyun;Jeon, Eui Chan
    • Journal of Climate Change Research / v.5 no.2 / pp.95-107 / 2014
  • The 'Green Start Movement' is a practical green-living movement intended to efficiently reduce the greenhouse gases originating from non-industrial sectors such as households, commerce, and transportation, in pursuit of the 'materialization of a low carbon society through green growth (Low Carbon, Green Korea)'. When the new government took office after the Lee Myeongbak Administration, which had presented 'Low Carbon, Green Growth' as a national vision, it became necessary both to set a direction for the green-living movement so that it could respond to climate change persistently and stably, and to evaluate the performance of the Green Start Movement over the preceding five years. A questionnaire survey was administered to 265 persons, including public servants, members of environmental and non-environmental NGOs, participants in the Green Start Movement, and professionals. Many survey responses indicated that awareness of the Green Start Movement is increasing and that the movement has had a positive impact on individual and group behavior in terms of green living. The results also show, however, that environmental NGOs do not cooperate sufficiently to create a 'green living' effect on a national scale. Action needs to be taken at the community level to foster a culture of environmental responsibility. The national administration office of the Green Start Movement Network should play the leading role between the government and environmental NGOs. The Green Start National Network should have greater autonomy, and the governance of the network needs to be restructured so that it works effectively. The Green Start Movement should also identify specific local characteristics in order to support activities that reduce greenhouse gas emissions. Best practices can then be shared to reduce emissions by a substantial amount.

Carbon Dioxide-based Plastic Pyrolysis for Hydrogen Production Process: Sustainable Recycling of Waste Fishing Nets (이산화탄소 기반 플라스틱 열분해 수소 생산 공정: 지속가능한 폐어망 재활용)

  • Yurim Kim;Seulgi Lee;Sungyup Jung;Jaewon Lee;Hyungtae Cho
    • Korean Chemical Engineering Research / v.62 no.1 / pp.36-43 / 2024
  • Fishing net waste (FNW) constitutes over half of all marine plastic waste and is a major contributor to the degradation of marine ecosystems. Current treatment options for FNW include incineration, landfilling, and mechanical recycling, but these methods often yield low-value products and pollutant emissions. Importantly, FNW, being composed of plastic polymers, can be converted into valuable resources such as syngas and pyrolysis oil through pyrolysis. This study therefore presents a process for generating high-purity hydrogen (H2) by catalytically pyrolyzing FNW in a CO2 environment. The proposed process comprises three stages: first, the pretreated FNW undergoes Ni/SiO2-catalyzed pyrolysis under CO2 to produce syngas and pyrolysis oil; second, the pyrolysis oil is incinerated and repurposed as an energy source for the pyrolysis reaction; third, the syngas is converted into high-purity H2 via the water-gas shift (WGS) reaction and pressure swing adsorption (PSA). The results of the proposed process are compared with those of conventional pyrolysis under N2. Simulation results show that pyrolyzing 500 kg/h of FNW produced 2.933 kmol/h of high-purity H2 under N2 conditions and 3.605 kmol/h under CO2 conditions. Pyrolysis under CO2 improved CO production, which in turn increased the H2 output. In addition, CO2 emissions were reduced by 89.8% relative to N2 conditions, owing to the capture and utilization of the CO2 released during the process. The proposed CO2-based process can therefore recycle FNW efficiently while generating eco-friendly hydrogen.
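As a quick back-of-the-envelope check, the quoted figures imply roughly a 23% gain in molar H2 yield under CO2. A minimal Python sketch using only the numbers from the abstract (the molar mass of H2 is the only added constant):

```python
M_H2 = 2.016    # kg/kmol, molar mass of hydrogen
feed = 500.0    # kg/h of pretreated fishing-net waste (from the abstract)

h2_n2, h2_co2 = 2.933, 3.605   # kmol/h of high-purity H2 (N2 vs. CO2 runs)

print(f"yield gain under CO2: {(h2_co2 - h2_n2) / h2_n2:.1%}")   # ~22.9%
for label, kmol in (("N2 ", h2_n2), ("CO2", h2_co2)):
    kg = kmol * M_H2
    print(f"{label}: {kg:.2f} kg/h H2 = {kg / feed:.2%} of the FNW feed")
```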

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, market timing means determining when to buy and sell in order to profit from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a suitable tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because the method accepts only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values lying within an interval are transformed into the same value. There are four common discretization methods in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first derives categorical values by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there has been little research on how the choice of discretization method affects trading performance. In this study, we compare stock market timing models that use rough set analysis with these various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. Popular technical indicators serve as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable on the validation sample; the latter also produced robust performance on both samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
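The description of equal frequency scaling maps directly onto quantile cuts. Below is a minimal, hypothetical Python sketch of that first discretization method; the 660-sample size mirrors the study's trading-day count, but the data and the choice of four intervals are illustrative only:

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Cut points chosen so that roughly the same number of samples
    falls into each interval (equal frequency scaling)."""
    probs = np.linspace(0.0, 1.0, n_intervals + 1)[1:-1]
    return np.quantile(values, probs)

def discretize(values, cuts):
    """Map each numeric value to its interval index, turning the
    variable into the categorical form rough set analysis requires."""
    return np.digitize(values, cuts)

# Illustrative data: 660 values standing in for a technical indicator
# observed over the study's 660 trading days.
rng = np.random.default_rng(0)
indicator = rng.normal(size=660)
cuts = equal_frequency_cuts(indicator, 4)
labels = discretize(indicator, cuts)
print(cuts)                  # three cut points -> four intervals
print(np.bincount(labels))   # roughly 165 samples per interval
```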

Dosimetry of the Low Fluence Fast Neutron Beams for Boron Neutron Capture Therapy (붕소-중성자 포획치료를 위한 미세 속중성자 선량 특성 연구)

  • Lee, Dong-Han;Ji, Young-Hoon;Lee, Dong-Hoon;Park, Hyun-Joo;Lee, Suk;Lee, Kyung-Hoo;Suh, So-Heigh;Kim, Mi-Sook;Cho, Chul-Koo;Yoo, Seong-Yul;Yu, Hyung-Jun;Gwak, Ho-Shin;Rhee, Chang-Hun
    • Radiation Oncology Journal / v.19 no.1 / pp.66-73 / 2001
  • Purpose: For research on Boron Neutron Capture Therapy (BNCT), fast neutrons generated from the MC-50 cyclotron at the Korea Cancer Center Hospital, with a maximum energy of 34.4 MeV, were moderated by 70 cm of paraffin, and the dose characteristics were then investigated. Using these results, we hope to establish a protocol for dose measurement of epithermal neutrons, to provide a basis for the dose characteristics of epithermal neutrons emitted from nuclear reactors, and to assess the feasibility of accelerator-based BNCT. Method and Materials: To measure the absorbed dose and dose distribution of the fast neutron beams, we used a Unidos 10005 electrometer (PTW, Germany) with IC-17 (Far West, USA), IC-18, and EIC-1 ion chambers made of A-150 plastic, and an IC-17M ion chamber made of magnesium for the gamma dose. These chambers were flushed with tissue-equivalent gas and argon gas at a flow rate of 5 cc per minute. Using the Monte Carlo N-Particle (MCNP) transport code for mixed neutron, photon, and electron fields, two-dimensional dose and energy fluence distributions were calculated, and these results were compared with the measurements. Results: The absorbed dose of the fast neutron beams was $6.47\times10^{-3}$ cGy per 1 MU at 4 cm depth in the water phantom, which is assumed to be the effective depth for BNCT. The gamma contamination intermingled with the fast neutron beams was $65.2{\pm}0.9\%$ at the same depth. In the depth-dose distribution in water, the neutron dose decreased linearly and the gamma dose decreased exponentially with depth. The beam-quality factor $D_{20}/D_{10}$ of the total dose was 0.718. Conclusion: Through direct measurement with two ion chambers of different wall materials, and through computed isodose distributions from MCNP simulation, we characterized the dose of low-fluence fast neutron beams. If a power supply and target material capable of high voltage and current are developed, and the gamma contamination is reduced by lead or bismuth shielding, accelerator-based BNCT may become feasible.
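The conclusion's two-chamber approach is the classic paired-chamber dose separation: a tissue-equivalent (A-150) chamber responds to both neutron and gamma dose, while the magnesium chamber responds mostly to gamma. A minimal sketch of that generic technique follows; the sensitivity coefficients and readings are hypothetical placeholders, not the paper's calibration data:

```python
import numpy as np

# Generic paired-chamber dose separation. The TE (A-150) chamber responds
# to both neutron and gamma dose; the Mg chamber responds mostly to gamma.
# With known relative sensitivities, the two readings give a 2x2 linear
# system for (D_n, D_g). All coefficients and readings below are
# hypothetical placeholders, not the paper's calibration data.
k_te_n, k_te_g = 1.00, 1.00   # TE chamber response per unit neutron / gamma dose
k_mg_n, k_mg_g = 0.05, 1.00   # Mg chamber: weak neutron, full gamma response

def separate_doses(reading_te, reading_mg):
    """Solve [[k_te_n, k_te_g], [k_mg_n, k_mg_g]] @ [D_n, D_g] = readings."""
    A = np.array([[k_te_n, k_te_g],
                  [k_mg_n, k_mg_g]])
    return np.linalg.solve(A, np.array([reading_te, reading_mg]))

# Illustrative readings in cGy/MU at 4 cm depth.
d_n, d_g = separate_doses(6.47e-3, 4.30e-3)
print(f"neutron: {d_n:.2e}  gamma: {d_g:.2e} cGy/MU "
      f"(gamma fraction {d_g / (d_n + d_g):.1%})")
```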


Seismic Data Processing and Inversion for Characterization of CO2 Storage Prospect in Ulleung Basin, East Sea (동해 울릉분지 CO2 저장소 특성 분석을 위한 탄성파 자료처리 및 역산)

  • Lee, Ho Yong;Kim, Min Jun;Park, Myong-Ho
    • Economic and Environmental Geology / v.48 no.1 / pp.25-39 / 2015
  • $CO_2$ geological storage plays an important role in reducing greenhouse gas emissions, but research toward CCS demonstration is still lacking. To achieve the goal of CCS, storing $CO_2$ safely and permanently in underground geological formations, it is essential to understand their characteristics, such as total storage capacity and stability, and to establish an injection strategy. We perform impedance inversion on the seismic data acquired from the Ulleung Basin in 2012. To review the possibility of $CO_2$ storage, we also construct porosity models and extract attributes of the prospects from the seismic data. To improve data quality, amplitude-preserving processing methods are applied: SWD (Shallow Water Demultiple), SRME (Surface Related Multiple Elimination), and Radon Demultiple. Three well logs are also analysed; the log correlations of the wells are 0.648, 0.574, and 0.342, respectively. All wells are used in building the low-frequency model to generate a more robust initial model. Simultaneous pre-stack inversion is performed on all of the 2D profiles, and inverted P-impedance, S-impedance, and Vp/Vs ratio are generated from the inversion process. With the porosity profiles derived from the seismic inversion, the porous and non-porous zones can be identified for the purpose of the $CO_2$ sequestration initiative. More detailed characterization of the geological storage and simulation of $CO_2$ migration will be essential for CCS demonstration.
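Impedance inversion of this kind rests on the convolutional forward model: an impedance profile is converted to a reflectivity series and convolved with a wavelet to predict the seismic trace, and inversion updates the impedance until the synthetic matches the data. A minimal sketch of that forward model follows; the synthetic P-impedance log and the Ricker wavelet parameters are illustrative assumptions, not the paper's data:

```python
import numpy as np

def reflectivity(z):
    """Normal-incidence reflection coefficients from an impedance log."""
    z = np.asarray(z, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(f=25.0, dt=0.002, length=0.128):
    """Ricker wavelet with peak frequency f (Hz), sampled every dt seconds."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Illustrative synthetic P-impedance log (random walk around 4.0e6 units).
rng = np.random.default_rng(1)
impedance = 4.0e6 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 200)))

r = reflectivity(impedance)                    # reflectivity series
trace = np.convolve(r, ricker(), mode="same")  # predicted seismic trace
print(trace.shape, trace[:5])
```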

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that literature, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, whereas NN and SVM ensembles have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can degrade when the multiple classifiers of an ensemble are highly correlated with one another, resulting in a multicollinearity problem; these works have also proposed differentiated learning strategies to cope with the degradation. Hansen and Salamon (1990) argued that containing diverse classifiers is necessary and sufficient for the performance enhancement of an ensemble. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms but does not show remarkable improvement on stable ones. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners therefore guarantee some diversity among the classifiers. By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation produces the multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factors (VIF) empirically shows that the performance degradation of the ensemble is due to multicollinearity, and it proposes that ensemble optimization is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for the coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles through a choice of classifiers that considers their correlations: classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in future work.
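The chromosome encoding and VIF constraint described above can be sketched as follows. The paper itself used Microsoft Excel with the Evolver package; this toy numpy re-implementation only illustrates the idea, and the fitness function (majority-vote accuracy with a hard VIF cutoff), population size, and genetic operators are our assumptions:

```python
import numpy as np

def vif_max(preds):
    """Largest variance inflation factor among the selected classifiers'
    0/1 predictions (columns of `preds`)."""
    preds = preds.astype(float)
    vifs = []
    for j in range(preds.shape[1]):
        y, others = preds[:, j], np.delete(preds, j, axis=1)
        X = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ beta).var() / max(y.var(), 1e-9)
        vifs.append(1.0 / max(1.0 - r2, 1e-9))
    return max(vifs)

def fitness(bits, preds, target, vif_limit=10.0):
    """Majority-vote accuracy of the selected sub-ensemble, zeroed out
    when the VIF constraint (our stand-in for the paper's) is violated."""
    if bits.sum() < 2:
        return 0.0
    sel = preds[:, bits.astype(bool)]
    acc = ((sel.mean(axis=1) > 0.5).astype(int) == target).mean()
    return acc if vif_max(sel) <= vif_limit else 0.0

def ga_select(preds, target, pop=30, gens=50, seed=0):
    """Simple generational GA over binary chromosomes; each bit switches
    one candidate classifier in or out of the sub-ensemble."""
    rng = np.random.default_rng(seed)
    n = preds.shape[1]
    population = rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        scores = np.array([fitness(c, preds, target) for c in population])
        parents = population[np.argsort(scores)][-pop // 2:]   # keep the best half
        cuts = rng.integers(1, n, pop // 2)                    # one-point crossover
        children = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                             for i, c in enumerate(cuts)])
        flip = rng.random(children.shape) < 1.0 / n            # bit-flip mutation
        population = np.vstack([parents, np.where(flip, 1 - children, children)])
    scores = np.array([fitness(c, preds, target) for c in population])
    return population[scores.argmax()]

# Toy demo: 12 correlated 0/1 classifiers on 200 validation samples.
rng = np.random.default_rng(1)
target = rng.integers(0, 2, 200)
preds = ((target[:, None] + rng.random((200, 12))) >
         rng.uniform(0.8, 1.3, 12)).astype(int)
print("selected classifiers:", np.flatnonzero(ga_select(preds, target)))
```

Here `preds` holds the 0/1 predictions of each candidate classifier on a validation set; chromosomes whose selected classifiers are nearly collinear are zeroed out by the VIF test, which mirrors how the paper removes the classifiers causing multicollinearity.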

Development Strategy for New Climate Change Scenarios based on RCP (온실가스 시나리오 RCP에 대한 새로운 기후변화 시나리오 개발 전략)

  • Baek, Hee-Jeong;Cho, ChunHo;Kwon, Won-Tae;Kim, Seong-Kyoun;Cho, Joo-Young;Kim, Yeongsin
    • Journal of Climate Change Research / v.2 no.1 / pp.55-68 / 2011
  • The Intergovernmental Panel on Climate Change (IPCC) has identified the causes of climate change and come up with measures to address it at the global level. A key component of this work is developing and assessing future climate change scenarios. The IPCC Expert Meeting in September 2007, in which 130 researchers and users took part, identified a new greenhouse gas concentration scenario, the "Representative Concentration Pathway (RCP)", and established the framework and development schedules for the Climate Modeling (CM), Integrated Assessment Modeling (IAM), and Impact Adaptation Vulnerability (IAV) communities for the fifth IPCC Assessment Report. At the IPCC Expert Meeting in September 2008, the CM community agreed on a new set of coordinated climate model experiments, phase five of the Coupled Model Intercomparison Project (CMIP5), which consists of more than 30 standardized experiment protocols for short-term and long-term time scales, in order to enhance understanding of climate change for the IPCC AR5, to develop climate change scenarios, and to address major issues raised in the IPCC AR4. Since early 2009, fourteen countries, including Korea, have been carrying out CMIP5-related projects. With the increasing interest in climate change, the COordinated Regional Downscaling EXperiment (CORDEX) was launched in 2009 to generate regional- and local-level information on climate change. The National Institute of Meteorological Research (NIMR) under the Korea Meteorological Administration (KMA) contributed to the IPCC AR4 by developing climate change scenarios based on the IPCC SRES using ECHO-G, and has embarked on crafting national climate change scenarios as well as RCP-based global ones by engaging in international projects such as CMIP5 and CORDEX. NIMR/KMA will contribute to the IPCC AR5 and will develop national climate change scenarios reflecting geographical factors, local climate characteristics, and user needs, providing them to the national IAV and IAM communities to assess future regional climate impacts and take action.

A Study on the Trend and Utilization of Stone Waste (석재폐기물 현황 및 활용 연구)

  • Chea, Kwang-Seok;Lee, Young Geun;Koo, Namin;Yang, Hee Moon
    • Korean Journal of Mineralogy and Petrology / v.35 no.3 / pp.333-344 / 2022
  • The quarrying and use of natural building stones such as granite and marble are rapidly expanding in developing countries. A huge amount of waste is generated during the processing, cutting, and sizing of these stones to make them usable. This waste is disposed of in the open environment, and its toxic nature negatively affects the environment and human health. The growth trend in the world stone industry was confirmed by the output for 2019, which increased more than one percent to reach a new peak of some 155 million tons, excluding quarry discards. Per-capita stone use rose to 268 square meters per thousand persons (m²/1,000 inhabitants), from 266 the previous year and 177 in 2001. However, the world's gross quarrying production was about 316 million tons (100%) in 2019, and about 53% of that amount is regarded as quarrying waste. At the stone-processing stage, world production reached 91.15 million tons (29%), which means that 63.35 million tons of stone-processing scraps were produced. On a global level, therefore, if the quantity of material extracted in the quarry is 100%, the total share of waste is about 71%. This raises substantial environmental, economic, and social problems. There are essentially three ways of dealing with inorganic waste: reuse, recycling, or disposal in landfills. Reuse and recycling are the preferred waste management methods, as they support environmental sustainability and can generate important economic returns. Although there are many possible applications for stone waste, they can be grouped into three general categories: fillers for binders, ceramic formulations, and environmental applications. The use of residual sludge for substrate production seems highly promising: the substrate can be used for quarry rehabilitation and in the rehabilitation of industrial sites. This new product (artificial soil) could be added to the list of materials used, alongside topsoil, for civil works, railway embankments, and roundabouts, and stone sludge waste could be used to neutralize acidic soil and increase yields. For stone waste there are also several examples of studies on the recovery of mineral residues, including the extraction of metallic elements and mineral components, the production of construction raw materials, power generation, building materials, and gas and water treatment.
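The headline shares follow directly from the quoted tonnages; a short Python check using only the abstract's own figures:

```python
extracted = 316.0   # Mt quarried worldwide in 2019 (taken as 100%)
product   = 91.15   # Mt reaching finished stone products

print(f"product share: {product / extracted:.1%}")          # ~28.8%, the quoted 29%
print(f"total waste share: {1 - product / extracted:.1%}")  # ~71.2%, "about 71%"
print(f"quarry discards at 53%: {0.53 * extracted:.1f} Mt")
```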