• Title/Summary/Keyword: Generation Power


Analysis of Adolescent Awareness of Radiation: Marking the First Anniversary of the Fukushima Nuclear Accident (청소년의 방사선 인식도 분석: 일본 후쿠시마 원전사고 1주년 계기)

  • Park, Bang-Ju
    • Journal of Radiation Protection and Research
    • /
    • v.37 no.2
    • /
    • pp.75-83
    • /
    • 2012
  • Marking the first anniversary of the Fukushima nuclear accident of March 11, 2011, the level of adolescent awareness and understanding of radiation was surveyed, and the results were compared with those of adults who answered the same questionnaire at around the same time. The study was designed as a qualitative survey with frequency analysis. Respondents were limited to third-year middle school students, 15 years of age, who represent the future generation. The questionnaire was distributed directly to the students, and 2,217 responses were analyzed. It comprised 40 questions; the reliability of its scales, measured by Cronbach's coefficient, was 0.494 for 'self-awareness of radiation', 0.843 for 'risk of radiation', 0.748 for 'benefit of radiation', 0.692 for 'radiological safety control', and 0.819 for 'information sources of radiation' (no value is given for 'impacts of the Fukushima accident'). The survey showed that the students' knowledge of radiation was not very high: 67.4 points (adults: 69.5) on a 100-point converted scale. The impact of the Fukushima nuclear accident was also found to be smaller on adolescents than on adults, as the rates of 'agree' or 'strongly agree' answers to the following questions demonstrate: 27.0% (adults: 38.9%) for 'my attitude toward nuclear power changed because of the Fukushima accident', 65.7% (adults: 86.6%) for 'the damage from the Fukushima accident was immeasurably large', and 65.0% (adults: 86.3%) for 'the Fukushima accident contributed to raising awareness of nuclear power plant safety'. The adolescents answered 'average' at a higher rate than adults on most questions, which can be construed as their awareness of radiation not yet being firmly formed. This was the first study to survey the level of adolescent radiation awareness after the Fukushima accident and compare it with adult survey results, and it is expected to contribute greatly to the government's future radiation policy.
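For reference, the scale reliabilities quoted above follow the standard Cronbach's alpha formula, computed from the item variances and the variance of the total score. The sketch below is a minimal illustration of that formula, not the authors' analysis code; the 5-point response matrix is hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 5-point responses from 6 students to a 4-item scale
responses = np.array([[4, 5, 4, 4],
                      [2, 3, 3, 2],
                      [5, 5, 4, 5],
                      [3, 3, 2, 3],
                      [4, 4, 4, 5],
                      [1, 2, 2, 1]])
print(round(cronbach_alpha(responses), 3))
```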

Rheological Characteristics of Hydrogen Fermented Food Waste and Review on the Agitation Intensity (음식물류폐기물 수소 발효액의 유변학적 특성과 교반강도 고찰)

  • Kim, Min-Gyun;Lee, Mo-Kwon;Im, Seong-Won;Shin, Sang-Ryong;Kim, Dong-Hoon
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.25 no.4
    • /
    • pp.41-50
    • /
    • 2017
  • The design of a proper agitation system, which is affected by viscosity, impeller type, and power consumption, is a prerequisite in biological waste treatment and energy generation plants. In the present work, hydrogen fermentation of food waste was conducted at various operational pHs (4.5~6.5) and substrate concentrations (10~50 g Carbo. COD/L), and the viscosity of the fermented broth was analyzed. The H2 yield varied significantly, from 0.51 to 1.77 mol H2/mol hexose added, depending on the pH value, with the highest performance achieved at pH 5.5. The viscosity gradually dropped as the shear rate increased, indicating a shear-thinning property. With the disintegration of carbohydrate, the viscosity dropped after fermentation, but it did not change with the operational pH. At a given pH level, the H2 yield was not affected much by substrate concentration, ranging from 1.40 to 1.86 mol H2/mol hexose added at 10~50 g Carbo. COD/L. The zero-shear and infinite-shear viscosities of the fermented broth increased with substrate concentration, from 10.4 to 346.2 mPa·s and from 1.7 to 5.3 mPa·s, respectively. There was little difference in the viscosity of the fermented broth at 10 and 20 g Carbo. COD/L. When the agitation intensity was designed from these experimental results, it was found that the agitation intensity can be reduced during hydrogen fermentation: the initial and final agitation intensities at 30 g Carbo. COD/L were 26.0 and 10.0 rpm, respectively. As fermentation proceeded, the viscosity gradually decreased, indicating that the power consumption for agitating food waste can be reduced.
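Zero-shear and infinite-shear viscosities of a shear-thinning broth are typically obtained by fitting a model such as the Cross model, eta(gamma) = eta_inf + (eta_0 - eta_inf) / (1 + (lambda*gamma)^n), to the measured flow curve. The following is a minimal sketch of such a fit with hypothetical data; the paper does not state which rheological model was used, so the Cross model here is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_model(shear_rate, eta0, eta_inf, lam, n):
    """Cross model: viscosity falls from eta0 to eta_inf as shear rate grows."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** n)

# hypothetical flow-curve data (shear rate in 1/s, viscosity in mPa·s)
gamma = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])
eta = np.array([340, 310, 240, 150, 70, 30, 12, 6])

popt, _ = curve_fit(cross_model, gamma, eta, p0=[350, 5, 1.0, 0.8])
eta0, eta_inf, lam, n = popt
print(f"zero-shear ~ {eta0:.1f} mPa·s, infinite-shear ~ {eta_inf:.1f} mPa·s")
```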

Optimization Process Models of Gas Combined Cycle CHP Using Renewable Energy Hybrid System in Industrial Complex (산업단지 내 CHP Hybrid System 최적화 모델에 관한 연구)

  • Oh, Kwang Min;Kim, Lae Hyun
    • Journal of Energy Engineering
    • /
    • v.28 no.3
    • /
    • pp.65-79
    • /
    • 2019
  • This study attempted to estimate the optimal facility capacity for combining renewable energy sources with gas CHP in industrial complexes. In particular, we reviewed the industrial complexes subject to energy use planning from 2013 to 2016. The Sejong industrial complex, which has an annual fuel usage of 38 thousand TOE and a high heat density of 92.6 Gcal/km²·h, was selected for the study, although complexes under regional designation were excluded. We analyzed the optimal operation model of a CHP hybrid system linking fuel cells and photovoltaic power generation using HOMER Pro, an economic analysis program for renewable energy hybrid systems. In addition, to improve the reliability of the analysis, we examined not only total heat demand but also the heat demand patterns of the sectors dominating thermal energy use, the main energy supplied by CHP, and added the economic benefits to compare relative benefits. The total indirect heat demand of the Sejong industrial complex under construction was 378,282 Gcal per year, of which the paper industry accounted for 77.7%, or 293,754 Gcal per year. For the indirect heat demand of the entire complex, a single CHP has an optimal capacity of 30,000 kW; in this case, CHP supplies 275,707 Gcal, or 72.8%, of heat production, while the peak-load boiler (PLB) supplies 103,240 Gcal, or 27.2%. In the CHP, fuel cell, and photovoltaic combination, the optimal capacities are 30,000 kW, 5,000 kW, and 1,980 kW, respectively; CHP then supplies 275,940 Gcal (72.8%), the fuel cell 12,390 Gcal (3.3%), and the PLB 90,620 Gcal (23.9%). The CHP capacity was not reduced, because reducing it proved uneconomical, requiring excessive PLB operation to cover the resulting shortfall in heat production. For the indirect heat demand of the paper industry alone, the dominant sector, the optimal capacities of the CHP, fuel cell, and photovoltaic combination are 25,000 kW, 5,000 kW, and 2,000 kW, with heat production of CHP 225,053 Gcal (76.5%), fuel cell 11,215 Gcal (3.8%), and PLB 58,012 Gcal (19.7%). The economic analysis under the current electricity and gas markets confirms that a return on investment is not possible; however, we confirmed that the CHP hybrid system combining CHP, fuel cells, and solar power can improve operating conditions by about KRW 9.3 billion annually compared with a single CHP system.
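The CHP and PLB heat shares above follow from a simple dispatch rule: CHP covers demand up to its capacity, and the peak-load boiler covers the remainder. A minimal sketch of that accounting with hypothetical hourly demand is shown below; the actual capacity optimization in the paper was done in HOMER Pro, which this sketch does not reproduce.

```python
# minimal CHP-vs-PLB heat dispatch sketch (hypothetical hourly demand in Gcal)
import random

random.seed(1)
demand = [random.uniform(20, 80) for _ in range(8760)]    # one year, hourly
chp_capacity = 45.0                                       # Gcal/h, hypothetical

chp_heat = sum(min(d, chp_capacity) for d in demand)      # CHP covers base load
plb_heat = sum(max(d - chp_capacity, 0) for d in demand)  # PLB covers peaks

total = chp_heat + plb_heat
print(f"CHP share: {chp_heat / total:.1%}, PLB share: {plb_heat / total:.1%}")
```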

NOx Reduction Characteristics of Ship Power Generator Engine SCR Catalysts according to Cell Density Difference (선박 발전기관용 SCR 촉매의 셀 밀도차에 따른 NOx 저감 특성)

  • Kyung-Sun Lim;Myeong-Hwan Im
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.7
    • /
    • pp.1209-1215
    • /
    • 2022
  • Selective catalytic reduction (SCR) is known as a very efficient method of reducing nitrogen oxides (NOx); the catalyst reduces NOx to nitrogen (N2) and water vapor (H2O). The catalyst, one of the factors determining the performance of the NOx reduction method, is known to become more efficient as its cell density increases. In this study, the NOx reduction characteristics under various engine loads were investigated. A 100 CPSI (60-cell) catalyst was studied in a laboratory-scale device that simulates the exhaust gas conditions of the power generation engine installed on the training ship SEGERO. The effect of the 100 CPSI (60-cell) density was compared with that of a 25.8 CPSI (30-cell) density for which the SCR manufacturer already had NOx reduction data. The experimental catalysts were of the honeycomb type; the V2O5-WO3-TiO2 composition and materials were retained, with only the cell density changed. As a result, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 88.5%, and its IMO specific NOx emission was 0.99 g/kWh, satisfying the IMO Tier III NOx emission requirement. The NOx concentration reduction rate of the 25.8 CPSI (30-cell) catalyst was 78%, with an IMO specific NOx emission of 2.00 g/kWh. Comparing the two catalysts, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 10.5 percentage points higher, and its IMO specific NOx emission was about half that of the 25.8 CPSI (30-cell) catalyst. Therefore, an efficient NOx reduction effect can be expected by increasing the cell density of the catalyst; in other words, the reduced catalyst volume can be expected to lower production costs and allow more efficient arrangement of the engine room and cargo space.
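The figures above combine two simple quantities: the concentration reduction rate across the catalyst, and the IMO specific emission, i.e., NOx mass flow normalized by engine power. A minimal sketch of that arithmetic with hypothetical single-load-point values follows; the IMO NOx Technical Code weighting over test-cycle load points is omitted.

```python
def nox_reduction_rate(inlet_ppm: float, outlet_ppm: float) -> float:
    """Concentration reduction across the SCR catalyst, in percent."""
    return (inlet_ppm - outlet_ppm) / inlet_ppm * 100.0

def specific_emission(nox_mass_flow_g_per_h: float, power_kw: float) -> float:
    """Specific NOx emission in g/kWh (single load point, no cycle weighting)."""
    return nox_mass_flow_g_per_h / power_kw

# hypothetical inlet/outlet concentrations and engine operating point
print(f"reduction: {nox_reduction_rate(900.0, 103.5):.1f} %")
print(f"specific emission: {specific_emission(950.0, 960.0):.2f} g/kWh")
```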

Comparison between Solar Radiation Estimates Based on GK-2A and Himawari 8 Satellite and Observed Solar Radiation at Synoptic Weather Stations (천리안 2A호와 히마와리 8호 기반 일사량 추정값과 종관기상관측망 일사량 관측값 간의 비교)

  • Dae Gyoon Kang;Young Sang Joh;Shinwoo Hyun;Kwang Soo Kim
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.28-36
    • /
    • 2023
  • Solar radiation, which is measured at a relatively small number of weather stations, is one of the key inputs to crop models for estimating crop productivity. Solar radiation products derived from GK-2A and Himawari 8 satellite data have become available, which would allow input data for crop models to be prepared, especially for assessing crop productivity under an agrivoltaic system, where crops and power are produced at the same time. The objective of this study was to compare the degree of agreement between the solar radiation products obtained from these satellite data and ground observations. The sub-hourly solar radiation products were collected and summarized into daily values for the period from May to October 2020, during which both satellite products were available. The root mean square error (RMSE) and its normalized form (NRMSE) were determined for the daily sum of solar radiation. The cumulative solar radiation over the study period was also compared, to represent the impact of product errors on crop growth simulations. The product from the Himawari 8 satellite tended to have smaller RMSE and NRMSE values than that from the GK-2A satellite. The Himawari 8 product also had smaller errors at a larger number of weather stations when cumulative solar radiation was compared with the measurements. This suggests that using Himawari 8 products would introduce less uncertainty into crop yield estimation than using GK-2A products, and it merits further studies applying the Himawari 8 satellite to estimating solar power generation as well as crop yield under an agrivoltaic system.
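For reference, the two error metrics above can be computed as follows. This is a minimal sketch with hypothetical daily values; the paper does not state which normalization it used for NRMSE, so normalization by the observed mean is assumed here.

```python
import numpy as np

def rmse(est: np.ndarray, obs: np.ndarray) -> float:
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def nrmse(est: np.ndarray, obs: np.ndarray) -> float:
    """RMSE normalized by the mean of observations (assumed normalization)."""
    return rmse(est, obs) / float(np.mean(obs))

# hypothetical daily solar radiation sums (MJ m-2 day-1)
obs = np.array([18.2, 21.5, 12.7, 25.1, 19.8])
est = np.array([17.0, 22.9, 14.1, 23.6, 21.0])
print(f"RMSE = {rmse(est, obs):.2f}, NRMSE = {nrmse(est, obs):.3f}")
```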

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even the large corporations known as 'chaebol' went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on only a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse such as the 'Lehman Brothers case' of the global financial crisis. The key variables driving corporate default also change over time: Deakin's (1972) study shows that the major factors affecting corporate failure had shifted since the analyses of Beaver (1967, 1968) and Altman (1968), and Grice (2001) likewise found changes in the importance of the predictive variables used in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies used static models, and most do not consider changes that occur over the course of time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated for with a time series algorithm that reflects dynamic change. Against the background of the global financial crisis, which had a significant impact on Korea, this study uses ten years of annual corporate data, from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent as time changes, we first train the deep learning time series models on data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithms is conducted on validation data that includes the financial crisis period (2007~2008); the resulting model shows patterns similar to the training data and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine preceding years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data pose the limitations of nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses multicollinearity, and the deep learning time series algorithm, with its variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, finally, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still at an early stage, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in predictive power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
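As an illustration of the deep learning time series approach described above, the sketch below shows a minimal LSTM default classifier over annual sequences of financial ratios. The layer sizes and tensor shapes are hypothetical assumptions; the abstract does not publish the authors' architecture.

```python
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    """Binary default classifier over a firm's annual financial-ratio sequence."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, n_years, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # logit from the last year's state

model = DefaultLSTM(n_features=14)       # 14 hypothetical financial ratios
x = torch.randn(8, 7, 14)                # 8 firms, 7 annual observations
prob_default = torch.sigmoid(model(x))   # default probabilities in (0, 1)
```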

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for instance for sporting events or emergencies; the second supports e-Health, car reliability, and similar applications; the third concerns VR games with delay sensitivity and real-time techniques. These groups have recently been reaching agreement on the requirements and target levels for such scenarios. Various techniques are being studied to satisfy these requirements, and they are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity, and 5G must support the high reliability and delay sensitivity that V2X requires in the field of traffic control. For these reasons, V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles; it refers to the connected or networked vehicle and can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure, V2I), between vehicles (vehicle-to-vehicle, V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices, V2N), with more to be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, its suitability is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since a centralized controller must communicate with many nodes and provide the processing power. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure, and the architecture of the network that processes the vehicle information becomes a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined an SDN architecture that considers the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the neighboring-vehicle support information delivered to the car was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and vehicle speeds of 30-200 km/h were considered, in order to examine the network architecture that minimizes delay.
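The delay budget in such a simulation is bounded by how long a vehicle stays inside one small cell, roughly the cell diameter divided by the vehicle speed. A minimal sketch of that worst-case dwell-time arithmetic over the stated parameter ranges follows; it is an illustration only, not the paper's system-level simulator.

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case time a vehicle spends crossing one small cell (diameter/speed)."""
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms

# parameter ranges from the study: radius 50-100 m, speed 30-200 km/h
for radius in (50.0, 100.0):
    for speed in (30.0, 200.0):
        print(f"R={radius:.0f} m, v={speed:.0f} km/h -> "
              f"dwell {dwell_time_s(radius, speed):.2f} s")
```

The tightest case (a 50 m cell at 200 km/h) leaves roughly 1.8 s for all emergency message handling, which is why the controller placement and SDN domain size matter.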

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents becomes ever more important as content keeps being generated. In this flood of information, efforts are being made to reflect the user's intention in search results more faithfully, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft likewise focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful and promising, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora for different fields with the same algorithm, and hard to extract triples of good quality. Second, producing labeled text data manually becomes more difficult as the extent and scope of the knowledge grow and its patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of searching for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. The study therefore has three significances: first, a practical and simple automatic knowledge extraction method that can be applied directly; second, the possibility of performance evaluation through a simple problem definition; and finally, increased expressiveness of the knowledge, achieved by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports of the testing set. In the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite several constraints on the research. Looking at the model's prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average, which may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain to be addressed, however; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
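For reference, a neural tensor network scores an entity pair with a bilinear tensor term plus a linear term, g(e1, e2) = u^T tanh(e1^T W[1..k] e2 + V[e1; e2] + b). The sketch below shows that score function with random weights and hypothetical dimensions (100-dimensional one-hot entities, matching the top-100 entity setup above); training one such function per stock, as the paper does, is left out.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4                        # entity dim (top-100 one-hot), tensor slices

W = rng.normal(size=(k, d, d))       # bilinear tensor, one d x d slice per output
V = rng.normal(size=(k, 2 * d))      # linear layer over concatenated entities
b = rng.normal(size=k)               # bias
u = rng.normal(size=k)               # output weights

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# a new entity is scored against each stock's trained function; the
# highest-scoring stock is predicted as the related item
e_new, e_stock = np.eye(d)[7], np.eye(d)[42]   # hypothetical one-hot entities
print(ntn_score(e_new, e_stock))
```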

The Study on the Relationship between Changes of Rumen Microflora and Bloat in Jersey Cow (저지종 젖소의 반추위 내 미생물 균총 변화와 고창증 발병간의 상관관계 연구)

  • Kim, Sang Bum;Oh, Jong Seok;Jeong, Ha Yeon;Jung, Young Hun;Park, Beom Young;Ha, Seung Min;Im, Seok Ki;Lee, Sung Sill;Park, Ji Hoo;Park, Seong Min;Kim, Eun Tae
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.38 no.2
    • /
    • pp.106-111
    • /
    • 2018
  • This study was conducted to investigate the relationship between changes in rumen microflora and bloat in Jersey cows. Jersey cows (control: 42 months old, 558 kg; treatment: 29 months old, 507 kg) were fed according to dairy feeding management at the dairy science division of the National Institute of Animal Science. The change of the microbial population in the rumen associated with this metabolic disease was analyzed using next generation sequencing (NGS) technologies. The abundance of Ruminococcus bromii, Bifidobacterium pseudolongum, Bifidobacterium merycicum, and Butyrivibrio fibrisolvens, known as major starch-fermenting bacteria, was increased more than 36-fold in bloated Jerseys, while the cellulolytic bacterial community, including Fibrobacter succinogenes, Ruminococcus albus, and Ruminococcus flavefaciens, was increased more than 12-fold in non-bloated Jerseys. The proportions of Bacteroidetes and Firmicutes were 33.4% and 39.6% in the rumen of non-bloated Jerseys, versus 24.9% and 55.1% in bloated Jerseys. In conclusion, the change in the rumen microbial community, and in particular the increase in starch-fermenting bacteria, may contribute to the occurrence of bloat in Jersey cows.
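The fold changes and phylum proportions above come from straightforward relative-abundance arithmetic over the NGS read counts. A minimal sketch with hypothetical counts is shown below; the paper's actual bioinformatics pipeline is not described in the abstract and is not reproduced here.

```python
# relative abundance and fold change from hypothetical NGS read counts
bloated = {"Ruminococcus bromii": 3700, "Fibrobacter succinogenes": 90,
           "other": 96210}
non_bloated = {"Ruminococcus bromii": 100, "Fibrobacter succinogenes": 1100,
               "other": 98800}

def rel_abundance(counts: dict) -> dict:
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

ra_b, ra_n = rel_abundance(bloated), rel_abundance(non_bloated)
for taxon in ("Ruminococcus bromii", "Fibrobacter succinogenes"):
    fold = ra_b[taxon] / ra_n[taxon]
    print(f"{taxon}: {fold:.1f}-fold (bloated vs non-bloated)")
```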

A Study on Development of Prototype Test Train Design in G7 Project for High Speed Railway Technology (G7 고속전철기술개발사업에서의 시제차량 통합 디자인 개발)

  • 정경렬;이병종;윤세균
    • Archives of design research
    • /
    • v.16 no.4
    • /
    • pp.185-196
    • /
    • 2003
  • The demand for an environment-friendly transportation system with low energy consumption and low or zero pollution has been increasing since the beginning of the World Trade Organization era. At the same time, the steady growth of high-speed train technology and its market share has sparked fierce competition among technologically advanced countries such as France, Germany, and Japan, which spend extensively on research and development (R&D) to keep the lead in high-speed train technology. These countries lead the race to implement the next-generation transportation system, build intercontinental railway networks, and export the high-speed train as a major industrial commodity. Developing Korea's own high-speed train technology and its core system layouts, including original technology, serves several objectives: it boosts the nation's competitive edge, and it creates an environment-friendly railroad system that can cope with globalization and minimize the social and economic losses caused by growing traffic congestion, delivery costs, environmental pollution, and public discomfort. To this end, the 'G7 Project - Development of High Speed Railway Technology', carried out over the six-year period from 1996 to 2002, focused on designing a domestic train capable of traveling at 350 km/h and led to the actual engineering and production of the prototype high-speed train. This paper summarizes and introduces one achievement of the G7 Project: the design segment within the development of train system engineering technology. The design aspect of the Korean domestic railway program as a whole had admittedly lagged behind the advanced railroad countries, where design was emphasized from the early phases of train development. However, the active participation of expert designers from the early phase of train design in this project opened a new era of domestic train development and a way to meet demand flexibly with newly designed trains. The design concept of the Korean high-speed train is well conceived: a faster, more pleasant, and quieter train that facilitates a new travel culture and is recognized by its passengers as Korean. The Korean high-speed prototype train was thus born, combining an aerodynamic design, the embodiment of an original Korean design in which the forehead of the power car minimizes aerodynamic resistance through a curved car-body profile, with an interior design improved through ergonomics and a vestibule area accommodated through the study of passenger behavior and social culture, based on the general passenger car.
