• Title/Summary/Keyword: power systems


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data-imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary companies. Because the model was trained only on corporate information that is also available for unlisted companies, default risks can be derived appropriately for unlisted companies without stock price information. This makes it possible to provide stable default risk assessment to companies whose risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been studied actively in recent years, most studies make predictions with a single model, which raises model-bias issues. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods, including verification of their adequacy, to be prepared in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduces the bias of individual models by using a stacking ensemble technique that combines various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction models. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the individual models' forecasts, pairs were constructed between the stacking ensemble model and each individual model.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two models' forecasts in each pair differed in a statistically significant way. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble technique proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving on the limitations of existing machine learning-based models.
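For illustration, the following is a minimal sketch of a stacking ensemble of the kind described above, using scikit-learn and SciPy. The feature matrix X, the Merton-model risk target y, the choice of sub-models (the CNN is omitted), and all hyperparameters are placeholder assumptions, not the authors' settings.

```python
# Hedged sketch of a stacking ensemble for continuous default-risk prediction.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data shaped like the paper's feature count (160 columns);
# in practice X would hold financial-statement variables and y the Merton-model risk.
rng = np.random.default_rng(0)
X = rng.random((1000, 160))
y = rng.random(1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    final_estimator=LinearRegression(),
    cv=7,  # sub-model forecasts produced from a 7-way split, echoing the paper's setup
)
stack.fit(X_train, y_train)

# Single-model baseline and a paired Wilcoxon comparison of absolute errors,
# in the spirit of the paper's pairwise significance tests.
rf_only = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
err_stack = np.abs(stack.predict(X_test) - y_test)
err_rf = np.abs(rf_only.predict(X_test) - y_test)
print(wilcoxon(err_stack, err_rf))
```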

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond stakeholders such as managers, employees, creditors, and investors of the bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid sudden, total collapses such as the 'Lehman Brothers case' of the global financial crisis. The key variables behind corporate defaults vary over time. Comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined the shifting importance of predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias with a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that remains consistent as time changes, we first train a deep learning time series model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the training data and excellent predictive power. After that, each bankruptcy prediction model is rebuilt by combining the training and validation data (2000~2008) and applying the optimal parameters found during validation. Finally, each corporate default prediction model trained over the nine years is evaluated and compared on the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model is useful for robust corporate default prediction across the three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data involve the limitations of nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and also shows stronger predictive power. In the course of the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative reference material for non-specialists who begin research combining financial data with deep learning time series algorithms.
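As a rough illustration of the deep learning time series approach, the sketch below trains a small LSTM on yearly financial-ratio sequences using TensorFlow/Keras. The array shapes, the random placeholder data, and the network size are assumptions for illustration only, not the authors' configuration.

```python
# Minimal LSTM sketch for default prediction from yearly financial-ratio sequences.
# All data and hyperparameters below are placeholders, not the paper's settings.
import numpy as np
import tensorflow as tf

n_firms, n_years, n_ratios = 500, 7, 20                # hypothetical: 7 years of ratios per firm
X = np.random.rand(n_firms, n_years, n_ratios)         # placeholder sequences (e.g., Lasso-selected ratios)
y = np.random.randint(0, 2, n_firms)                   # 1 = default, 0 = non-default

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_ratios)),
    tf.keras.layers.LSTM(32),                          # time series layer capturing dynamic change
    tf.keras.layers.Dense(1, activation="sigmoid"),    # default probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
```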

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indicators. However, pattern analysis is difficult, and it has been computerized less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice because whether the patterns found are suitable for trading is a separate question. When such studies find a meaningful pattern, they locate a point that matches it and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be large gaps from reality. Existing research tries to find patterns with stock price prediction power; this study instead proposes defining the patterns first and trading when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some patterns have been reported to have price predictability, there have been no reports of their performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be implemented easily by the system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a real trading situation because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results, which is interpreted to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which separates the test period from the application period, so that the system could respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks at 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful and the high-volatility stock portfolio was the second best. This suggests that some price volatility is needed for patterns to take shape, but that higher volatility is not always better.
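To make the turning-point step concrete, the following sketch implements only the swing wave rule described above: a bar is a peak if its high exceeds the n highs on each side, and a valley if its low is below the n lows on each side. The window size n and the synthetic price series are assumptions for illustration, not the study's data or parameters.

```python
# Hedged sketch of the swing-wave turning-point rule; n and the prices are placeholders.
import numpy as np

def swing_turning_points(high, low, n=5):
    """Return indices of peaks and valleys under the swing-wave rule."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        if high[i] > max(high[i - n:i]) and high[i] > max(high[i + 1:i + n + 1]):
            peaks.append(i)      # higher than the n highs on both sides
        if low[i] < min(low[i - n:i]) and low[i] < min(low[i + 1:i + n + 1]):
            valleys.append(i)    # lower than the n lows on both sides
    return peaks, valleys

# Synthetic example; five consecutive turning points could then be matched
# against the reclassified M&W pattern groups.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100
peaks, valleys = swing_turning_points(prices + 0.5, prices - 0.5, n=5)
```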

Why A Multimedia Approach to English Education?

  • Keem, Sung-uk
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.176-178
    • /
    • 1997
  • To make a long story short, I made up my mind to experiment with a multimedia approach to my classroom presentations two years ago because my ways of giving instruction bored the pants off me as well as my students. My favorite ways used to be what are sometimes referred to as classical or traditional ones, heavily dependent on three elements: the teacher's mouth, books, and chalk. Some call it the 'MBC method'. To top it off, I tried audio-visuals such as tape recorders, cassette players, VTRs, pictures, and you name it, that could help improve my teaching method. And yet I had been unhappy with the results of this trial-and-error approach. I was determined to look for a better way that would ensure my satisfaction in the first place. What really turned me on was a multimedia CD-ROM title, ELLIS (English Language Learning Instructional Systems), developed by Dr. Frank Otto. This is an integrated system for learning English based on advanced computer technology. Inspired by the utility and potential of such a multimedia system for regular classroom or lab instruction, I designed a simple but practical multimedia language learning laboratory in 1994, for the first time in Korea (perhaps for the first time in the world). It was high time that the conventional type of language laboratory (audio-passive) at Hahnnam be replaced because of wear and tear. Prior to this development, in 1991, I had set up the first CALL (Computer Assisted Language Learning) laboratory, equipped with 35 personal computers (286), where students were encouraged to practise English typing and word processing and to study English grammar, vocabulary, and composition. The first multimedia language learning laboratory was composed of 1) a multimedia personal computer (486DX2 then, now 586), 2) VGA multipliers that enable simultaneous viewing of the screen under the instructor's control, 3) an amplifier, 4) loudspeakers, 5) student monitors, 6) student tables seating three students (a monitor for every two students is more realistic, though), 7) student chairs, 8) an instructor table, and 9) cables. It was augmented later with an Internet hookup. The beauty of this type of multimedia language learning laboratory is the economy of furnishing and maintaining it. There is no need to darken the facilities, which is a must when an LCD/beam projector is used in the laboratory. It is headset-free; headsets proved to exasperate students when worn for more than twenty minutes. In the previous semester I taught three different subjects: Freshman English Lab, English Phonetics, and Listening Comprehension Intermediate. I used CD-ROM titles like ELLIS, Master Pronunciation, English Triple Play Plus, English Arcade, Living Books, Q-Steps, English Discoveries, and Compton's Encyclopedia. On the other hand, I managed to put all my teaching materials into PowerPoint, where letter, photo, graphic, animation, audio, and video files are stored in order as slides. It takes time to prepare my teaching materials in PowerPoint, but it is a wonderful tool for presentations, and it is worth trying as long as I can entertain my students in such a way. Once everything is put into the computer, I feel relaxed and a bit excited watching my students enjoy my presentations. It appears to be great fun for students because they have never experienced this type of instruction. This is how I freed myself from having to manipulate a cassette tape player and a VTR and write on the board.
The student monitors in front of them seem to help them concentrate on what they see, combined with what they hear. All I have to do is simply click a mouse to give presentations and explanations when necessary. I use a remote mouse, which frees me from sitting at the instructor table; instead, I can walk around the room and enjoy freer interaction with students. Using this instrument, I can also have my students participate in the presentation. In particular, I invite my students to manipulate the computer using the remote mouse from their own seats rather than from the instructor's seat. Every student appears to be fascinated with my multimedia approach to English teaching because of its unique nature as a new teaching tool as we face the 21st century. They all agree that the multimedia way is an interesting and fascinating way of learning that satisfies their needs. Above all, it helps lighten their drudgery in the classroom. They feel other subjects taught by other teachers should be treated in the same fashion. A multimedia approach to education is impossible without the advent of hi-tech computers whose multiple functions are integrated into a unified system, i.e., a personal computer. If you have computer-phobia, make quick friends with it; the sooner, the better. It can be a wonderful assistant to you. It is the Internet that I pay close attention to in conjunction with the multimedia approach to English education. Via e-mail, I encourage my students to write to me in English. I encourage them to enjoy chatting with people all over the world. I also encourage them to visit sites that offer study courses in English conversation, vocabulary, idiomatic expressions, reading, and writing. I help them search for any subject they want via the World Wide Web. Some day in the near future it will be the hub of learning for everybody. It will eventually free students from books, teachers, libraries, classrooms, and boredom. I will keep exploring better ways to give satisfying instruction to my students, who deserve my entertainment.


Approach to improve construction management using Information Technology (IT) (정보기술(IT) 기반을 통한 시공관리 선진화 방안)

  • Lee Woo-Bang;Moon Jin-Yeong;Moon Byeong-Suk
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.115-122
    • /
    • 2002
  • There are various points that should be improved, in terms of fairness, in our contract practices for proposing construction projects, in project management, and in stakeholders' ways of thinking and culture. We consider that revision of construction-related provisions and systems is required, but even more, an overall change in business management is needed through the implementation of an integrated construction information management system that enables the owner, which drives the project, and the contractor to share construction information. To manage construction-related information in an integrated manner, design information should flow smoothly into purchasing information, and changes are required to move toward a process-oriented work system. Finally, information created by the various construction organizations should be delivered in an aligned and standardized manner as well. Domestic nuclear power plant construction has received technology transfers from the U.S., France, Canada, and the UK, which enabled technological self-reliance, and it has recently even reached the stage of exporting the technology to others. However, continuous effort is required to improve internal business efficiency and to respond to external environmental changes such as electricity market deregulation. Recently, as a growing number of CEOs have set out to advance IT and improve business efficiency, the number of enterprises introducing Enterprise Resource Planning (ERP) is increasing. ERP is an innovative tool that changes the way work is performed, from an organization- and department-oriented approach to a process-oriented one, in order to optimize resources such as human and material resources throughout the enterprise by performing BPR, thereby maximizing the overall business efficiency of the enterprise; this covers not only construction management but also business management. KHNP has performed large-scale construction projects such as nuclear power plant construction for the past 30 years and has led the domestic industry in large-scale project management and quality management through its independent capabilities in overall construction planning, purchasing, construction, and start-up management. To maintain its leading position in improving construction management technology, based on its accumulated project management experience and technology, KHNP included construction in its ERP project with the aim of innovating the construction business. In this paper we discuss the characteristics of the nuclear construction business, the project management system, the information system infrastructure, the information sharing system among construction-related entities, and implementation practices for the information system, and we consider how to resolve the practices that should be improved.


Construction and Tests of the Vacuum Pumping System for KSTAR Current Feeder System (KSTAR 전류전송계통 진공배기계 구축 및 시운전)

  • Woo, I.S.;Song, N.H.;Lee, Y.J.;Kwag, S.W.;Bang, E.N.;Lee, K.S.;Kim, J.S.;Jang, Y.B.;Park, H.T.;Hong, Jae-Sik;Park, Y.M.;Kim, Y.S.;Choi, C.H.
    • Journal of the Korean Vacuum Society
    • /
    • v.16 no.6
    • /
    • pp.483-488
    • /
    • 2007
  • The current feeder system (CFS) for the Korea Superconducting Tokamak Advanced Research (KSTAR) project interconnects the magnet power supply (MPS) and the superconducting (SC) magnets through normal bus-bars in the room-temperature (300 K) environment and the SC bus-line in the low-temperature (4.5 K) environment. It is divided into two systems: the toroidal field system, which operates at 35 kA DC, and the poloidal field system, in which pulsed currents of 20$\sim$26 kA are applied during a 350 s transient. Aside from the vacuum system of the main cryostat, an independent vacuum system was constructed for the CFS, in which the roughing system consists of a rotary pump and a mechanical booster pump, and the high-vacuum system consists of four cryo-pumps with one dry pump as a backing pump. A self-interlock with its control system and a supervisory interlock with its control system were also established for operational reliability. The entire CFS was fully tested, including the reliability of the local/supervisory controls and interlocks, helium gas leakage, vacuum pressure, and so on.

Alternative Concept to Enhance the Disposal Efficiency for CANDU Spent Fuel Disposal System (CANDU 사용후핵연료 처분시스템 효율향상 개념 도출)

  • Lee, Jong-Youl;Cho, Dong-Geun;Kook, Dong-Hak;Lee, Min-Soo;Choi, Heui-Joo
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.9 no.3
    • /
    • pp.169-179
    • /
    • 2011
  • There are two types of nuclear reactors in Korea: the PWR type and the CANDU type. Safe management of the spent fuels from these reactors is a very important factor in maintaining a sustainable energy supply from nuclear power plants. In Korea, a reference disposal system for these spent fuels has been developed through a study on the direct disposal of PWR and CANDU spent fuel. Recently, research on the demonstration and efficiency analysis of the disposal system has been performed to make it safer and more economical. PWR spent fuel, which contains a large amount of reusable material, can be considered for recycling, and a study on the disposal of the HLW from this recycling process is being performed. CANDU spent fuel is considered for direct disposal in a deep geological formation, since it has little reusable material. In this study, based on the Korean Reference spent fuel disposal System (KRS), which was designed to dispose of both PWR and CANDU spent fuel, more effective CANDU spent fuel disposal systems were developed. To do this, the disposal canister for CANDU spent fuel was modified to hold the 60-bundle storage basket used in the nuclear power plants. With these modified canister concepts, disposal concepts were developed to meet the thermal requirement that the temperature of the buffer material not exceed $100^{\circ}C$. These concepts were reviewed and analyzed in terms of disposal-efficiency factors, namely thermal effectiveness, U-density, disposal area, excavation volume, material volume, etc., and the most effective concept was proposed. The results of this study will be used in the development of disposal systems for various wastes, including the HLW from the PWR spent fuel recycling process.
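As a small, hedged illustration of how the disposal-efficiency factors above might be compared, the sketch below computes U-density (uranium mass per unit disposal area) for a reference layout and a modified 60-bundle-canister layout. Every numerical value is a placeholder assumption, not a figure from the paper.

```python
# Hedged sketch: comparing U-density for alternative canister/layout concepts.
# All numbers are placeholder assumptions, not values from the KRS study.
def u_density(bundles_per_canister, u_per_bundle_kg, tunnel_spacing_m, canister_pitch_m):
    """Uranium mass emplaced per square metre of disposal area (kgU/m^2)."""
    area_per_canister = tunnel_spacing_m * canister_pitch_m
    return bundles_per_canister * u_per_bundle_kg / area_per_canister

reference = u_density(bundles_per_canister=36, u_per_bundle_kg=19.0,
                      tunnel_spacing_m=40, canister_pitch_m=6)
alternative = u_density(bundles_per_canister=60, u_per_bundle_kg=19.0,
                        tunnel_spacing_m=40, canister_pitch_m=7)
print(f"reference: {reference:.2f} kgU/m2, alternative: {alternative:.2f} kgU/m2")
```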

Evaluation of Image Qualities for a Digital X-ray Imaging System Based on Gd$_2$O$_2$S(Tb) Scintillator and Photosensor Array by Using a Monte Carlo Imaging Simulation Code (몬테카를로 영상모의실험 코드를 이용한 Gd$_2$O$_2$S(Tb) 섬광체 및 광센서 어레이 기반 디지털 X-선 영상시스템의 화질평가)

  • Jung, Man-Hee;Jung, In-Bum;Park, Ju-Hee;Oh, Ji-Eun;Cho, Hyo-Sung;Han, Bong-Soo;Kim, Sin;Lee, Bong-Soo;Kim, Ho-Kyung
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.4
    • /
    • pp.253-259
    • /
    • 2004
  • In this study, we developed a Monte Carlo imaging simulation code written in the Visual C++ programming language for design optimization of a digital X-ray imaging system. As the digital X-ray imaging system, we considered a Gd$_2$O$_2$S(Tb) scintillator and a photosensor array, and included a 2D parallel grid to simulate general test conditions. The interactions between the X-ray beams and the system structure, the behavior of the light generated in the scintillator, and its collection in the photosensor array were simulated using the Monte Carlo method. The scintillator thickness and the photosensor array pitch were assumed to be 66 $\mu\textrm{m}$ and 48 $\mu\textrm{m}$, respectively, and the pixel format was set to 256 x 256. Using the code, we obtained X-ray images under various simulation conditions and evaluated their image quality through calculations of SNR (signal-to-noise ratio), MTF (modulation transfer function), NPS (noise power spectrum), and DQE (detective quantum efficiency). The image simulation code developed in this study can be applied effectively to a variety of digital X-ray imaging systems for design optimization over various design parameters.
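As a sketch of how two of the quoted metrics can be computed, the code below estimates a one-dimensional NPS from a simulated flat-field image and combines it with a placeholder MTF in the standard expression DQE(f) = S^2 * MTF^2(f) / (q * NPS(f)). The flat-field statistics, the MTF shape, and the fluence q are assumptions, not outputs of the authors' code.

```python
# Hedged sketch: 1-D NPS from a flat-field image and the standard DQE formula.
# The flat field, MTF shape, and photon fluence q are placeholders.
import numpy as np

pixel_pitch = 48e-4                                        # cm (48 um photosensor pitch, as in the paper)
flat = np.random.poisson(1000, (256, 256)).astype(float)   # placeholder 256 x 256 flat-field image
q = 2.5e5                                                  # hypothetical incident photons per cm^2

noise = flat - flat.mean()                                 # zero-mean noise image
freq = np.fft.rfftfreq(noise.shape[1], d=pixel_pitch)      # spatial frequency, cycles/cm
nps = np.mean(np.abs(np.fft.rfft(noise, axis=1))**2, axis=0) * pixel_pitch / noise.shape[1]

mtf = np.exp(-(freq / freq.max())**2)                      # placeholder presampled MTF

# DQE(f) = S^2 * MTF^2(f) / (q * NPS(f)), zero-frequency bin excluded
dqe = flat.mean()**2 * mtf[1:]**2 / (q * nps[1:])
```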

Evaluation of $^{14}C$ Behavior Characteristic in Reactor Coolant from Korean PWR NPP's (국내 경수로형 원자로 냉각재 중의 $^{14}C$ 거동 특성 평가)

  • Kang, Duk-Won;Yang, Yang-Hee;Park, Kyong-Rok
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.7 no.1
    • /
    • pp.1-7
    • /
    • 2009
  • This study focused on determining the chemical composition of $^{14}C$ (both organic and inorganic $^{14}C$ contents) in reactor coolant from three different PWR reactor types. The purpose was to evaluate the characteristics of $^{14}C$ that can serve as a basis for reliable estimation of environmental releases at domestic PWR sites. $^{14}C$ is the most important nuclide in the inventory, since it is one of the main dose contributors in future release scenarios. The reasons for this are its high mobility in the environment, its biological availability, and its long half-life (5,730 yr). More recent studies, in which the organic $^{14}C$ species believed to form in the coolant under reducing conditions were investigated in more detail, show that the organic compounds are not limited to hydrocarbons and CO; possible organic compounds formed include formaldehyde, formic acid, acetic acid, etc. Under oxidizing conditions, oxidized carbon forms appear, mainly carbon dioxide and bicarbonate. Measurements of organic and inorganic $^{14}C$ in various water systems were also performed. The $^{14}C$ inventory in the reactor water was found to be 3.1 GBq/kg in the PWR, of which less than 10% was in inorganic form. Generally, the $^{14}C$ activity in the water was divided equally between the gas and water phases. Although organic $^{14}C$ compounds are the dominant species during reactor operation, when $^{14}C$ is released from the plant stack its chemical forms show a different composition owing to operating conditions such as temperature, pH, volume control tank venting, and shutdown chemistry.
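Purely to illustrate the long half-life quoted above, the short sketch below applies the standard decay law A(t) = A0 * exp(-ln2 * t / T_half) to the reported 3.1 GBq/kg coolant inventory. This is an arithmetic illustration only, not a release or dose calculation from the paper.

```python
# Hedged illustration of the 14C decay law using the figures quoted in the abstract.
import math

half_life_yr = 5730.0          # 14C half-life from the text
a0_gbq_per_kg = 3.1            # reported reactor-water inventory

def activity(t_years, a0=a0_gbq_per_kg):
    """Remaining specific activity after t_years of decay."""
    return a0 * math.exp(-math.log(2) * t_years / half_life_yr)

for t in (100, 1000, 10000):
    print(f"after {t:>6} years: {activity(t):.2f} GBq/kg")
```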


A Survey on the EMF Levels of Subways and Electric Appliances in Korea (국내 전철 및 가전제품을 대상으로 한 전자장 수준 실태조사)

  • Jang, Seong Ki;Cho, Yong Sung;Lee, Seok Jo;Yoo, Seong Wha;Jung, Kyung Mi;Lim, Jun Ho
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.15 no.1
    • /
    • pp.71-81
    • /
    • 2005
  • The purposes of this study were to collect, analyze, and describe the MF exposure levels in subways in Korea and to measure and evaluate the MF levels generated by electric appliances used in ordinary homes. The target subway lines were Seoul Metropolitan Lines 1 to 8, the Bundang Line, the Incheon Line, the Daegu Line, the Gwangju Line, and Busan Lines 1 and 2. Measurements were taken at each station on those lines; all train types (pantograph-equipped, motor-equipped, and common) and platform types (facing and isolating) were investigated at distances of 80, 200, and 400 cm from the train on the 19 targeted subway lines, using three magnetic field measuring devices (EMDEX II, Enertech Co.), during the survey from January to October 2004. In addition, the levels of the 60 Hz magnetic fields generated by 14 types of home electric appliances, such as electric blankets, hair dryers, and electric razors, were measured in 10 ordinary homes using five EMDEX II meters with a sampling interval of 1.5 seconds, at the surface and at distances of 30, 50, 100, and 300 cm from the target appliances. The survey results for all the subway lines examined were as follows: Seoul Metropolitan Line 4, using an AC (alternating current) power source, showed the highest mean value of $2.85{\mu}T$, followed by Seoul Metropolitan Line 1 between Seoul and Incheon using AC ($2.78{\mu}T$), Seoul Metropolitan Line 1 between Seoul and Uijongbu using AC ($2.73{\mu}T$), the Bundang Line using AC ($1.79{\mu}T$), Seoul Metropolitan Line 1 connected from Yongsan using AC ($1.67{\mu}T$), Seoul Metropolitan Line 1 between Seoul and Suwon using AC ($0.79{\mu}T$), and so on. In general, the intensity of the magnetic field in the subway systems in Korea was significantly higher with AC ($2.14{\pm}0.91{\mu}T$) than with DC ($0.29{\pm}0.44{\mu}T$) power sources. Among the home electric appliances examined, microwave ovens showed the highest mean value of $7.69{\mu}T$, followed by hair dryers ($6.47{\mu}T$), vacuum cleaners ($5.27{\mu}T$), televisions ($2.26{\mu}T$), electric blankets ($1.38{\mu}T$), personal computers ($0.81{\mu}T$), and so on. Two appliances exceeded $0.2{\mu}T$ at a distance of 30 cm: electric razors ($1.58{\pm}2.13{\mu}T$) and vacuum cleaners ($0.48{\pm}0.44{\mu}T$). On the whole, the change in MF levels with increasing distance from the appliances tended to be smaller than in results surveyed in the UK and the USA. This study is therefore expected to provide meaningful data for future studies on magnetic field exposure assessment and for the establishment of guidelines for subways and electric appliances in Korea. More detailed and larger-scale exposure assessment studies should be performed continuously to obtain varied and useful information for health risk assessment of MFs in Korea.
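A minimal sketch of the kind of AC-versus-DC summary reported above, assuming the measurements have been collected into a table with hypothetical column names; the AC means and the DC mean are the values quoted in the abstract, used here only as example rows.

```python
# Hedged sketch: summarizing mean magnetic-flux density (uT) by power-supply type.
# Column names are hypothetical; values are the means quoted in the abstract.
import pandas as pd

df = pd.DataFrame({
    "line":  ["Line 4", "Line 1 (Seoul-Incheon)", "Line 1 (Seoul-Uijongbu)", "Bundang", "DC lines (overall mean)"],
    "power": ["AC", "AC", "AC", "AC", "DC"],
    "mf_uT": [2.85, 2.78, 2.73, 1.79, 0.29],
})
print(df.groupby("power")["mf_uT"].agg(["mean", "std"]))
```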