• Title/Summary/Keyword: Power System Analysis


Comparisons of 1-Hour-Averaged Surface Temperatures from High-Resolution Reanalysis Data and Surface Observations (고해상도 재분석자료와 관측소 1시간 평균 지상 온도 비교)

  • Song, Hyunggyu;Youn, Daeok
    • Journal of the Korean earth science society, v.41 no.2, pp.95-110, 2020
  • Comparisons between surface temperatures from the high-resolution ECMWF ReAnalysis 5 (ERA5) and Automated Synoptic Observing System (ASOS) observations were performed to investigate the reliability of the new reanalysis data over South Korea. As ERA5 has only recently been produced and released to the public, it is expected to be widely used in various research fields. The analysis period is limited to 1999-2018 because regularly recorded hourly data have been available for 61 ASOS stations since 1999. The topographic settings of the 61 ASOS locations are classified as inland, coastal, or mountain based on Digital Elevation Model (DEM) data. The spatial distributions of time-averaged temperatures over the whole period were similar for ASOS and ERA5, without significant differences in their values. Scatter plots between ASOS and ERA5 for three periods (year-round, summer, and winter) confirmed the seasonal variability, which is also visible in the time series of monthly error probability density functions (PDFs). The statistical indices NMB, RMSE, R, and IOA were adopted to quantify the temperature differences; none of the indices showed significant differences, with R and IOA both close to 0.99. In particular, daily mean temperature differences based on 1-hour-averaged temperatures had smaller errors than classical daily mean temperature differences, showing a higher correlation between the two data sets. To check whether the complex topography inside a single ERA5 grid cell is related to the temperature differences, the kurtosis and skewness of the 90-m DEM PDFs within each ERA5 grid cell were compared to the one-year-period amplitude of the power spectrum of the monthly temperature error PDF time series at each station, showing positive correlations. These results identify the topographic effect as one of the most likely drivers of the differences between ASOS and ERA5.
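The four indices named in the abstract have standard textbook definitions; as a reference, here is a minimal NumPy sketch (the function names and the Willmott form of the IOA are assumptions, since the abstract does not give the formulas used):

```python
import numpy as np

def nmb(pred, obs):
    # Normalized Mean Bias: total bias relative to the observed total
    return np.sum(pred - obs) / np.sum(obs)

def rmse(pred, obs):
    # Root Mean Square Error
    return np.sqrt(np.mean((pred - obs) ** 2))

def pearson_r(pred, obs):
    # Pearson correlation coefficient R
    return np.corrcoef(pred, obs)[0, 1]

def ioa(pred, obs):
    # Willmott's Index of Agreement; 1.0 means perfect agreement
    om = obs.mean()
    denom = np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom
```

With identical inputs IOA and R evaluate to 1 and NMB and RMSE to 0, which is why values of R and IOA near 0.99 indicate close agreement between the two data sets.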

A Study on the Validity of Rural Type Low Carbon Green Village Through Case Analysis (사례분석을 통한 농촌형 저탄소 녹색마을 타당성 검토)

  • Do, In-Hwan;Hwang, Eun-Jin;Hong, Soo-Youl;Phae, Chae-Gun
    • Journal of Korean Society of Environmental Engineers, v.33 no.12, pp.913-921, 2011
  • This study examined the overall feasibility of a low-carbon green village formed in a rural area by analyzing its environmental and economic feasibility and its energy self-reliance. The biomass input of the village was set at 28 ton/day of livestock manure and 2 ton/day of pruned fruit-tree branches, for a total of 30 ton/day. The facility consisted of a biogasification plant with a combined heat and power generator using the wet biomass (livestock manure), plus a composting facility and a wood boiler using the dry biomass (pruned branches). Operating the system produces 540,540 kWh/yr of electricity and 1,762 Gcal/yr of heat, giving the region an electricity and heat self-reliance rate of 100%. The economic analysis found an annual loss of about 140 million won, with a facility installation cost of 5.04 billion won, an operating cost of 485.09 million won, and revenue of 337.12 million won; over 15 years this amounts to a loss of about 2.2 billion won. The environmental analysis, however, estimated a crude-oil replacement effect of about 178 million won and a greenhouse-gas reduction effect of about 92 million won, for a total environmental benefit of about 270 million won, which turns the balance into a yearly surplus of about 130 million won. Considering its environmental and economic feasibility and energy self-reliance together, the project appears feasible overall, provided it receives support from the central or local government.
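The cash-flow arithmetic reported above can be checked directly. A sketch, with all figures in million KRW per year as given in the abstract (the small gaps versus the stated ~140 million won loss and ~130 million won surplus presumably come from rounding or cost items not listed in the abstract):

```python
# Annual figures from the abstract, in million KRW (a reconstruction, not the paper's model)
operation_cost = 485.09
revenue = 337.12
annual_balance = revenue - operation_cost        # about -148; abstract rounds to a ~140 loss

crude_replacement = 178.0                        # crude-oil replacement effect
ghg_reduction = 92.0                             # greenhouse-gas reduction effect
env_benefit = crude_replacement + ghg_reduction  # 270 total environmental benefit

net_with_environment = annual_balance + env_benefit  # about +122; abstract: ~130 surplus
```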

EEG based Cognitive Load Measurement for e-learning Application (이러닝 적용을 위한 뇌파기반 인지부하 측정)

  • Kim, Jun;Song, Ki-Sang
    • Korean Journal of Cognitive Science, v.20 no.2, pp.125-154, 2009
  • This paper examines whether human physiological data, especially brain-wave activity, can detect cognitive overload, a phenomenon that may occur while a learner uses an e-learning system. If cognitive overload proves detectable, appropriate feedback could be provided to learners. To test this, cognitive load levels were measured by EEG (electroencephalogram) while participants engaged in cognitive activities. The task was a computerized listening-and-recall test designed to measure working memory capacity, with four progressively increasing degrees of difficulty. Eight male, right-handed university students answered the four sets of tests, each taking 61 to 198 seconds. Correct-answer ratios were then calculated and the EEG results analyzed. First, the correct-answer ratios of the listening-and-recall tests were 84.5%, 90.6%, 62.5%, and 56.3%, respectively, and the effect of difficulty was statistically significant; the data indicate learner cognitive overload at test levels 3 and 4, the higher-level tests. Second, the SEF-95% value was greater on tests 3 and 4 than on tests 1 and 2, indicating that tests 3 and 4 imposed greater cognitive load on participants. Third, the relative power of the EEG gamma band increased rapidly on the 3rd and 4th tests, and the signals from channels F3, F4, C4, F7, and F8 showed statistical significance. These five channels surround the brain's Broca area, and brain-mapping analysis showed that F8, in the right hemisphere, was activated in proportion to the degree of difficulty. Lastly, cross-correlation analysis showed a greater increase in synchronization at tests 3 and 4 than at tests 1 and 2. These findings suggest that cognitive load level and cognitive overload can be measured from brain activity, which may provide a timely feedback scheme for e-learning systems.
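Relative band power, one of the measures used in this study, is commonly computed from a power spectrum. A minimal sketch under the assumption of a plain FFT periodogram (the paper's exact processing chain is not described in the abstract):

```python
import numpy as np

def relative_band_power(signal, fs, band, total=(0.5, 50.0)):
    # Share of spectral power in `band` (Hz) relative to the `total` range,
    # computed from a plain FFT periodogram of one EEG channel.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    def band_power(lo, hi):
        return psd[(freqs >= lo) & (freqs < hi)].sum()
    return band_power(*band) / band_power(*total)
```

For example, the relative gamma power of a channel `x` sampled at 256 Hz would be `relative_band_power(x, 256, (30.0, 50.0))`; a rise in this share across test levels is the kind of effect the study reports.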


An Economic Factor Analysis of Air Pollutants Emission Using Index Decomposition Methods (대기오염 배출량 변화의 경제적 요인 분해)

  • Park, Dae Moon;Kim, Ki Heung
    • Environmental and Resource Economics Review, v.14 no.1, pp.167-199, 2005
  • The following policy implications can be drawn from this study. 1) The Air Pollution Emission Amount Report, published by the Ministry of Environment since 1991, classifies industries into only four sectors (heating, manufacturing, transportation, and power generation). The usability of the report is currently very low, and extra effort should be made to refine the statistics and improve the industrial classification. 2) The major polluting industries are s7, s17, and s20; the current air pollution control policy for these sectors is found to be inefficient compared to other sectors, which should be noted when implementing future air pollution policy. 3) s10 and s17 are found to be major polluting industrial sectors whose pollution reduction effects are also significant. 4) The emission-coefficient effect (Δf) has the biggest impact on the reduction of emissions, and the economic-growth effect (Δy) has the biggest impact on the increase of emissions; the production-technology effect (ΔD) and the final-demand-structure effect (Δu) are insignificant for the change in emission volume. 5) Further studies on emission estimation techniques for each industrial sector, along with economic analysis, are required to support effective enforcement of the total volume control system for air pollutants, differentiated management of pollution-causing sectors, and the integration of environment and economy. 6) Korea's economic growth in the 1990s was not pollution-driven in terms of Barry Commoner's hypothesis, even though the overall industrial structure and demand structure were not environmentally friendly. This indicates that environmental policies for improving air quality depend mainly on government initiatives, and that systematic national-level consideration of industrial structure and the development of green technologies have not been fully incorporated.
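The decomposition logic can be illustrated in miniature. The paper uses a four-factor input-output decomposition (Δf, Δy, ΔD, Δu); the sketch below shows only a simplified two-factor additive LMDI split of emissions E = f·y into an emission-coefficient effect and an economic-growth effect, standing in for (not reproducing) the paper's actual method:

```python
import math

def log_mean(a, b):
    # Logarithmic mean, the weight used in additive LMDI-I
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_two_factor(f0, y0, f1, y1):
    # Decompose the change in E = f * y into a Δf and a Δy effect.
    e0, e1 = f0 * y0, f1 * y1
    w = log_mean(e1, e0)
    effect_f = w * math.log(f1 / f0)   # emission-coefficient effect (Δf)
    effect_y = w * math.log(y1 / y0)   # economic-growth effect (Δy)
    return effect_f, effect_y
```

The two effects sum exactly to E1 − E0, so the decomposition is complete with no residual; that additivity is what lets each factor's contribution be ranked, as in implication 4 above.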


Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems, v.28 no.4, pp.157-177, 2022
  • Investors trade stocks while keeping a close watch on the order information submitted by domestic and foreign investors in real time through the Limit Order Book, the so-called price current provided by securities firms. Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalance, which appears when investors' buy and sell orders are concentrated on one side during intraday trading, is significant as a predictor of future stock price movement. Using classification algorithms, this study improved the accuracy with which order imbalance information predicts the short-term price trend, that is, the up or down movement of the day's closing price. Day-trading strategies using the price trends predicted by the classification algorithms are proposed, and their trading performance is analyzed empirically. Five-minute KOSPI200 Index Futures data were analyzed for 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, the order imbalance information observed in the early morning has significant forecasting power for the price trend from the early morning to market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using order imbalance information, at 54.1%. Fourth, order imbalance information measured early in the day had higher prediction accuracy than that measured later in the day. Fifth, the day-trading strategies using the classification algorithms' predicted price trends outperformed the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the strategies using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms beat the benchmark strategy on the Sharpe Ratio, which evaluates both profitability and risk. This study differs academically from existing studies in that it documents the economic value of the total buy and sell order volume information within the Limit Order Book. The empirical results are also valuable to market participants from a trading perspective. Future studies should improve trading-strategy performance with more accurate price predictions by extending the approach to deep learning models, which are being actively studied for stock price prediction.
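The core predictor can be stated compactly. A sketch using a common normalized definition of order imbalance (the abstract does not give the paper's exact formula, so both this definition and the toy threshold signal are assumptions):

```python
def order_imbalance(buy_volume, sell_volume):
    # Normalized imbalance in [-1, 1]: +1 means all buys, -1 means all sells
    total = buy_volume + sell_volume
    return 0.0 if total == 0 else (buy_volume - sell_volume) / total

def direction_signal(oi, threshold=0.1):
    # Toy day-trading signal: long when buys dominate, short when sells dominate
    if oi > threshold:
        return "up"
    if oi < -threshold:
        return "down"
    return "neutral"
```

In the study, such an imbalance measure (observed early in the session) is the feature, and the classification algorithms learn the mapping from it to the closing-price direction instead of a fixed threshold.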

Analysis of Effect on Pesticide Drift Reduction of Prevention Plants Using Spray Drift Tunnel (비산 챔버를 활용한 차단 식물의 비산 저감 효과 분석)

  • Jinseon Park;Se-Yeon Lee;Lak-Yeong Choi;Se-woon Hong
    • Journal of Bio-Environment Control, v.32 no.2, pp.106-114, 2023
  • With rising concerns about pesticide spray drift from aerial application, this study evaluated the aerodynamic properties and spray-drift collection efficiency of crops according to their leaf area index (LAI), using a spray-drift tunnel experiment, with the aim of preventing unwanted pesticide contamination. The collection efficiency of the plant with 'Low' LAI was 16.13% at a wind speed of 1 m·s-1; as the wind speed increased to 2 m·s-1, the collection efficiency at the same LAI level rose 1.80-fold to 29.06%. For the 'Medium' LAI level, the collection efficiency was 24.42% and 43.06% at wind speeds of 1 m·s-1 and 2 m·s-1, respectively. For the 'High' LAI level, it likewise increased 1.24-fold as the wind speed increased. These measurements indicate that the collection of spray droplets by leaves increases with LAI and wind speed, implying that dense foliage is more advantageous for preventing the drift of airborne spray droplets. The aerodynamic properties also tended to increase with LAI, and regression analysis with quadratic and power-law equations showed high explanatory power (0.96-0.99).
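The wind-speed ratios stated above can be verified from the reported percentages. A small sketch (defining efficiency as captured over incoming spray mass is an assumption; the numeric values are taken from the abstract):

```python
def collection_efficiency(captured, incoming):
    # Percent of incoming spray mass deposited on the plant (assumed definition)
    return 100.0 * captured / incoming

# 'Low' LAI collection efficiencies reported in the abstract, at 1 and 2 m/s
eff_1ms, eff_2ms = 16.13, 29.06
speedup = eff_2ms / eff_1ms   # close to 1.80, matching the stated factor
```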

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming ever more important. Amid this flood of information, efforts are being made to reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. The study thus has three significances: first, it presents a practical and simple automatic knowledge extraction method; second, it demonstrates the possibility of performance evaluation through a simple problem definition; finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all training-set reports are classified by stock and their entities are extracted with a named entity recognition tool, the KKMA. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is computed with every score function, and the stock whose function gives the highest score is predicted as the item related to that entity. To evaluate the presented model, prediction power is confirmed, and the soundness of the score functions assessed, by calculating the hit ratio over all testing-set reports. The model shows a hit accuracy of 69.3% on the testing set of 2,526 reports, which is meaningfully high despite some constraints on the research. Looking at prediction performance per stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) show performance far below average, possibly due to interference from other similar items and the generation of new knowledge. This paper thus proposes a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention: graph data are generated using only the named entity recognition tool and fed to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain, however; notably, the model's especially poor performance on a few stocks indicates the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
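The scoring step described above (one trained score function per stock, applied to a new entity vector) follows the standard Neural Tensor Network form. A NumPy sketch with illustrative dimensions (the actual layer sizes, nonlinearity, and training details are not given in the abstract):

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    # Standard NTN score: u^T tanh(e1^T W[k] e2 + V [e1; e2] + b),
    # where W is a (k, d, d) tensor of bilinear slices.
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)   # one bilinear form per slice
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

At prediction time, a new one-hot entity vector would be scored against every stock's trained parameter set and the arg-max stock returned as the related item, mirroring the procedure in the abstract.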

The Analysis of Quantitative EEG to the Left Cranial Cervical Ganglion Block in Beagle Dogs (비글견에서 좌측앞쪽목신경절 차단에 대한 정량적 뇌파 분석)

  • Park, Woo-Dae;Bae, Chun-Sik;Kim, Se-Eun;Lee, Soo-Han;Lee, Jung-Sun;Chang, Wha-Seok;Chung, Dai-Jung;Lee, Jae-Hoon;Kim, Hwi-Yool
    • Journal of Veterinary Clinics, v.24 no.4, pp.514-521, 2007
  • A sympathetic nerve block improves blood flow in the innervated regions, and for this reason sympathetic nerve blocks have been performed in neural and cerebral disorders. However, cerebral blood flow regulation by the cranial cervical ganglion block has not been well defined in dogs, nor has the correlation between changes in cerebral circulation and changes in the electroencephalogram. We therefore investigated the hypothesis that changes in the EEG could reflect changes in cerebral blood flow following a cranial cervical ganglion block in dogs. Twenty-five beagle dogs were divided into three groups: group I (LCCGB, n=10) underwent a left-sided cranial cervical ganglion block using 1% lidocaine; group II (L, n=10) received 1% lidocaine injected into the right or left digastricus muscle; and group III (N/S CCGB, n=5, serving as control) underwent a left-sided cranial cervical ganglion block using saline. No statistical difference was found between the control group and the LCCGB group in the 95% spectral edge frequency (SEF) or the median frequency (MF). In the relative band power, the δ frequency decreased during 5-25 min while the α frequency increased over the same period (p<0.05), but the θ and β frequencies showed no significant changes compared with the control group during that time. These results suggest that the left cranial cervical ganglion block does not induce a change in cerebral blood flow and that its effect is insignificant.

0.1 MW Test Bed CO2 Capture Studies with New Absorbent (KoSol-5) (신 흡수제(KoSol-5)를 적용한 0.1 MW급 Test Bed CO2 포집 성능시험)

  • Lee, Junghyun;Kim, Beom-Ju;Shin, Su Hyun;Kwak, No-Sang;Lee, Dong Woog;Lee, Ji Hyun;Shim, Jae-Goo
    • Applied Chemistry for Engineering, v.27 no.4, pp.391-396, 2016
  • The absorption efficiency of an amine CO2 absorbent (KoSol-5) developed by the KEPCO Research Institute was evaluated using a 0.1 MW test bed. The performance of post-combustion technology capturing two tons of CO2 per day from a slipstream of the flue gas of a 500 MW coal-fired power station was confirmed for the first time in Korea. The absorbent regeneration energy was also analyzed to provide reliable data on the performance of the KoSol-5 absorbent, and the energy reduction achieved by improving the absorption tower's inter-cooling system was tested. Overall, the CO2 removal rate met the technical guideline (90% removal) suggested by the IEA-GHG. The regeneration energy of KoSol-5 was about 3.05 GJ/ton CO2, roughly a 25% reduction compared with the commercial absorbent MEA (monoethanolamine). Based on these experiments, the KoSol-5 absorbent showed high efficiency for CO2 capture, and its application to commercial-scale CO2 capture plants is expected to reduce CO2 capture costs dramatically.

Simulation of Drying Grain with Solar-Heated Air (태양에너지를 이용한 곡물건조시스템의 시뮬레이션에 관한 연구)

  • 금동혁;김용운
    • Journal of Biosystems Engineering, v.4 no.2, pp.65-83, 1979
  • Low-temperature drying systems have been extensively used for drying cereal grains such as shelled corn and wheat. Since the 1973 energy crisis, much research has been conducted on applying solar energy as supplemental heat in natural-air drying systems. However, little work on rough rice drying has been done in this area, and very little in Korea. In designing a solar drying system, quality loss, airflow requirements, temperature rise of the drying air, fan power, and energy requirements should be thoroughly studied. The factors affecting solar drying systems are airflow rate, initial moisture content, the amount of heat added to the drying air, the fan operation method, and the weather conditions. The major objectives of this study were to analyze the effects of these performance factors and to determine design parameters such as airflow requirements, optimum bed depth, optimum temperature rise of the drying air, fan operation method, and collector size. Three-hourly observations from four years of weather data for the Chuncheon area were used to simulate rough rice drying. The results can be summarized as follows. 1. Statistical analysis indicated that the experimental and predicted values of the temperature rise of the air passing through the collector agreed well. 2. Equilibrium moisture content was affected little by airflow rate but mainly by the amount of heat added to the drying air; it ranged from 12.2 to 13.2 percent wet basis for continuous fan operation and from 10.4 to 11.7 percent wet basis for intermittent fan operation, over a 1.6 to 5.9 degrees Centigrade average temperature rise of the drying air. 3. The average moisture content when the top layer was dried to 15 percent wet basis ranged from 13.1 to 13.9 percent wet basis for continuous fan operation and from 11.9 to 13.4 percent wet basis for intermittent fan operation, over a 1.6 to 5.9 degrees Centigrade average temperature rise and 18 to 24 percent wet basis initial moisture content. Grain was overdried with intermittent fan operation at every level of temperature rise, so continuous fan operation is usually more effective when overdrying is considered. 4. For continuous fan operation, the average temperature rise of the drying air should be limited to 2.2 to 3.3 degrees Centigrade, considering the safe storage moisture level of 13.5 to 14 percent wet basis. 5. Required drying time decreased by 40 to 50 percent each time the airflow rate was doubled, but only by about 3.9 to 4.3 percent for each one degree Centigrade of average temperature rise, regardless of the fan operation method; the average temperature rise of the drying air therefore had little effect on required drying time. 6. Required drying time increased by about 18 to 30 percent for each 2 percent increase in initial moisture content, regardless of the fan operation method, in the range of 18 to 24 percent moisture. 7. Intermittent fan operation reduced required drying time by about 36 to 42 percent compared with continuous fan operation. 8. Dry-matter loss decreased by 34 to 46 percent each time the airflow rate was doubled, but only by about 2 to 3 percent for each one degree Centigrade of average temperature rise, regardless of the fan operation method; the average temperature rise therefore also had little effect on dry-matter loss. 9. Dry-matter loss increased by about 50 to 78 percent for each 2 percent increase in initial moisture content, in the range of 18 to 24 percent moisture. 10. Intermittent fan operation showed about 40 to 50 percent more dry-matter loss than continuous fan operation, and the increase was larger at high initial moisture and high average temperature rise. 11. Year-to-year weather conditions had little effect on required drying time and dry-matter loss. 12. Equations for estimating the time required to dry the top layer to 16 and 15 percent wet basis, and for estimating dry-matter loss, were derived as functions of the performance factors by the least-squares method. 13. Minimum airflow rates based on 0.5 percent dry-matter loss were estimated; minimum airflow rates for intermittent fan operation were approximately 1.5 to 1.8 times those for continuous operation, with little difference from year to year. 14. Required fan horsepower and energy for intermittent fan operation were 3.7 and 1.5 times those for continuous operation, respectively. 15. Continuous fan operation may be more effective than intermittent operation considering overdrying, fan horsepower requirements, and energy use. 16. A method for estimating the required collection area of a flat-plate solar collector from the average temperature rise and airflow rate was presented.
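Result 5 implies a simple scaling rule for drying time versus airflow rate. A sketch (using the 45% midpoint of the reported 40-50% reduction per doubling as an assumed parameter, not a value fitted in the paper):

```python
import math

def required_drying_time(t_ref_hours, airflow_ratio, cut_per_doubling=0.45):
    # Each doubling of the airflow rate multiplies the required drying
    # time by (1 - cut_per_doubling); non-integer ratios interpolate.
    doublings = math.log2(airflow_ratio)
    return t_ref_hours * (1.0 - cut_per_doubling) ** doublings
```

Doubling the airflow from a 100-hour baseline then predicts about 55 hours, consistent with the 40-50 percent reduction reported per doubling.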
