• Title/Summary/Keyword: Various problem-solving methods

Current Status of Sericulture and Insect Industry to Respond to Human Survival Crisis (인류의 생존 위기 대응을 위한 양잠과 곤충 산업의 현황)

  • A-Young, Kim;Kee-Young, Kim;Hee Jung, Choi;Hyun Woo, Park;Young Ho, Koh
    • Korean journal of applied entomology / v.61 no.4 / pp.605-614 / 2022
  • Two major problems currently threaten human survival on Earth: climate change and the rapid aging of the population in developed countries. Climate change results from the increase in atmospheric greenhouse gas (GHG) concentrations caused by growing fossil fuel use accompanying economic and transportation development. Rapid population aging results from rising life expectancy due to advances in biomedical science and technology and improvements in personal hygiene in developed countries. To avoid irreversible global climate change, it is necessary to transition quickly from the current fossil fuel-based economy to a zero-carbon, renewable energy-based economy that does not emit GHGs. To achieve this goal, the dairy and livestock industry, which generates the most GHGs in the agricultural sector, must shift to low-carbon production methods while consumers' preference for low-carbon diets is simultaneously increased. Although 77% of the arable land currently available globally is used to produce livestock feed, only 37% of the protein and 18% of the calories that humans consume come from dairy and livestock farming. Therefore, edible insects represent a good alternative protein source: they generate less GHG, consume less water and breeding space, and achieve a higher feed conversion rate than livestock. Additionally, by exploiting medicinal insects such as silkworms, which have proven health-enhancing effects, it is possible to develop functional foods that may prevent or delay the onset of currently incurable degenerative diseases that occur more frequently in the elderly. Insects were among the first animals to appear on Earth, and regardless of whether humans survive, they will continue to adapt, evolve, and thrive. Therefore, the industrial use of various edible and medicinal insects, including silkworms, will provide an important foundation for human survival and prosperity on Earth in the near future by helping resolve these two major problems.

Transformation of Adult Mesenchymal Stem Cells into Cardiomyocytes with 5-azacytidine: Isolated from the Adipose Tissues of Rat (성체 백서의 지방조직에서 추출한 중간엽 줄기세포의 5-azacytidine을 이용한 심근세포 분화 유도)

  • Choe Ju-Won;Kim Yong-In;Oh Tae-Yun;Cho Dai-Yoon;Sohn Dong-Suep;Lee Tae-Jin
    • Journal of Chest Surgery / v.39 no.7 s.264 / pp.511-519 / 2006
  • Background: Loss of cardiomyocytes in myocardial infarction leads to regional contractile dysfunction, and necrotic cardiomyocytes in infarcted ventricular tissue are progressively replaced by fibroblasts, forming scar tissue. Although cardiomyoplasty and implantation of ventricular assist devices or artificial hearts have been tried in refractory heart failure, cardiac transplantation has remained the only definitive therapeutic modality because the other strategies are not permanent. Cell transplantation is being tried as an alternative to cardiac transplantation, with bone marrow the most popular donor tissue. However, because bone marrow aspiration is invasive and painful and yields a smaller cell population, adipose tissue is recommended for harvesting mesenchymal stem cells. Material and Method: Adipose tissue was extracted from abdominal subcutaneous and intra-abdominal sites separately, and the cellular components were obtained from each by the same method. These components were treated with various titers of 5-azacytidine to determine the appropriate concentration and to assess the transformation capacity of adipose tissue. Group 1 was abdominal subcutaneous adipose tissue, and Group 2 was intra-abdominal adipose tissue (retroperitoneal adipose tissue and omentum). Cellular components were extracted with collagenase and NH4Cl, cultured in non-induction media (DMEM containing 10% FBS), and induced with 0, 3, 6, or 9 μmol/L 5-azacytidine after the first and second subcultures. After 4 weeks of incubation, cell blocks were made and immunostained with antibodies against CD34, heavy myosin chain, troponin T, and SMA. Result: Immunostaining of the transformed cells for troponin T was positive at 6 and 9 μmol/L 5-azacytidine in Groups 1 and 2, whereas CD34 and heavy myosin chain were negative; SMA was positive at 3 and 6 μmol/L 5-azacytidine in Group 2. Conclusion: These observations confirm that adult mesenchymal stem cells isolated from abdominal subcutaneous and intra-abdominal adipose tissue can be chemically transformed into cardiomyocytes. This can potentially be a source of autologous cells for myocardial repair.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategies used to exploit item importance, such approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a particular transaction can be seen through database analysis, because a transaction's weight is higher if it contains many items with high weights. We analyze the advantages and disadvantages of, and compare the performance of, the best-known algorithms in this field. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan once the WIT-tree has been constructed, since each node of the WIT-tree stores item information such as the item and its transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs itemset combination using the information of the transactions that contain both itemsets. WIT-FWIs-MODIFY adds a technique that reduces the operations needed to calculate the frequency of each new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two tidsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because on average it requires far more computations than the others.
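To make the transaction-weight idea concrete, here is a minimal sketch, not the authors' WIS/WIT-FWIs implementation: it assumes a transaction's weight is the mean weight of its items, defines weighted support accordingly, and grows itemsets Eclat-style by intersecting transaction-ID sets, which mirrors how a WIT-tree node pairs an itemset with its transaction IDs so no second database scan is needed. The toy data, weights, and threshold are invented for illustration.

```python
# Toy database: each transaction is a set of items; each item has a weight.
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"b", "c", "d"},
    {"a", "b", "c", "d"},
]
item_weight = {"a": 0.6, "b": 0.9, "c": 0.3, "d": 0.75}

# Transaction weight: assumed here to be the mean weight of the items it
# contains, so a transaction holding many high-weight items weighs more.
tw = [sum(item_weight[i] for i in t) / len(t) for t in transactions]
total_w = sum(tw)

def weighted_support(tids):
    """Weighted support: summed weight of containing transactions / total weight."""
    return sum(tw[t] for t in tids) / total_w

def mine(prefix, candidates, min_wsup, result):
    """Grow itemsets Eclat-style over tidsets; a WIT-tree node likewise pairs
    an itemset with the IDs of the transactions that contain it."""
    for i, (item, tids) in enumerate(candidates):
        ws = weighted_support(tids)
        if ws < min_wsup:
            continue
        itemset = prefix | {item}
        result[frozenset(itemset)] = ws
        # A length-(N+1) candidate comes from two length-N itemsets by
        # intersecting their tidsets -- no further database scan is needed.
        suffix = [(nxt, tids & nxt_tids) for nxt, nxt_tids in candidates[i + 1:]]
        mine(itemset, suffix, min_wsup, result)

# Tidsets for single items, then recursive mining.
tidsets = {}
for tid, t in enumerate(transactions):
    for item in t:
        tidsets.setdefault(item, set()).add(tid)

result = {}
mine(set(), sorted(tidsets.items()), min_wsup=0.4, result=result)
for itemset, ws in sorted(result.items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), round(ws, 3))
```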

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the Internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, is growing explosively. As e-commerce grows, customers can easily find what they want to buy while comparing various products, because more products are registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many results come out; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered. In this situation, automatic recognition of text in images can be a solution. Because the bulk of product details is presented in catalogs in image format, most product information cannot be retrieved by the current text-based search system. If the information in images can be converted to text, customers can search for products by product details, making shopping more convenient. Various OCR (Optical Character Recognition) programs exist that can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is too small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a well-regarded object-detection model, can be used with its structure redesigned to account for the differences between text and generic objects. However, the SSD model needs a large amount of labeled training data, because deep learning algorithms of this kind are trained by supervised learning. One option is to manually label the location and class of each piece of text in catalogs, but manual collection raises many problems. Some keywords would be missed, because humans make mistakes while labeling training data. Collecting training data at the required scale is also too time-consuming, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is difficult as well. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures, and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded a recognition rate of 81.99% with 20,000 data samples created by the program. Moreover, this research tested the SSD model under different data conditions to analyze which features of the data influence the performance of recognizing text in images. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and the variety of background images all turned out to be related to SSD performance. These findings can guide performance improvements of the SSD model, or of other deep-learning-based text recognizers, through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in the catalogs.
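The paper does not publish its generator, so the following is only a minimal sketch of the idea under stated assumptions: render hypothetical keywords at random positions onto a blank catalog-like canvas with Pillow, and record each keyword's bounding box as the labeled training data an SSD-style detector needs. The keyword list, image size, and file names are invented for illustration.

```python
import json
import random
from PIL import Image, ImageDraw, ImageFont

KEYWORDS = ["cotton", "waterproof", "free shipping", "handmade"]  # assumed labels

def make_sample(out_img, out_ann, size=(512, 512)):
    """Render a catalog-like image with random keywords and save the
    bounding box of every keyword as labeled training data."""
    img = Image.new("RGB", size, color=(255, 255, 255))
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    annotations = []
    for word in random.sample(KEYWORDS, k=random.randint(1, len(KEYWORDS))):
        # Random placement; a fuller generator would also vary fonts, sizes,
        # spacing, and paste product photos as backgrounds.
        x = random.randint(0, size[0] - 120)
        y = random.randint(0, size[1] - 20)
        draw.text((x, y), word, fill=(0, 0, 0), font=font)
        x0, y0, x1, y1 = draw.textbbox((x, y), word, font=font)
        annotations.append({"label": word, "bbox": [x0, y0, x1, y1]})
    img.save(out_img)
    with open(out_ann, "w") as f:
        json.dump(annotations, f)

make_sample("sample_000.png", "sample_000.json")
```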

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE / v.26 no.3 / pp.408-415 / 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature map size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost. We solved the memory increase caused by the non-reduced image size by reducing the number of channels and by efficiently configuring the network to be shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring the shooting angle as well as the 2D image for training means the dataset must contain detailed information, a disadvantage that makes the dataset difficult to construct. In this paper, the reconstruction accuracy of the 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. To evaluate the proposed method objectively, we used the ShapeNet dataset and the same protocol as the comparison papers: the CD value of the proposed method is 5.87, the EMD value is 5.81, and the FLOPs count is 2.9G. The lower the CD and EMD values, the more closely the reconstructed 3D point cloud approaches the original; the lower the number of FLOPs, the less memory the deep learning network requires. Therefore, the CD, EMD, and FLOPs evaluation results show about a 27% improvement in memory and 6.3% in accuracy compared to the methods in other papers, demonstrating the performance objectively.
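For reference, the CD metric quoted above is the Chamfer distance between the reconstructed and ground-truth point clouds. The sketch below shows one common symmetric squared-distance convention in NumPy; papers differ in scaling and in whether distances are squared, so treat the exact formula as an assumption.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p and q, each an
    (N, 3) array: mean nearest-neighbor squared distance in both directions.
    Lower values mean the reconstruction is closer to the original cloud."""
    # Pairwise squared distances, shape (len(p), len(q)).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Toy check: identical clouds give 0; a small shift gives a small value.
rng = np.random.default_rng(0)
cloud = rng.random((1024, 3))
print(chamfer_distance(cloud, cloud))          # 0.0
print(chamfer_distance(cloud, cloud + 0.01))   # small positive value
```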

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, where everything collapses in a single moment. The key variables driving corporate defaults vary over time. Deakin's (1972) study, building on the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows patterns similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on test data (2009) using the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, together with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and eventually toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in prediction power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists starting studies that combine financial data with deep learning time series algorithms.
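As a minimal sketch of the modeling setup described above, not the authors' actual architecture or variable set, the following Keras code feeds a per-firm sequence of annual financial ratios into an LSTM that outputs a default probability. The shapes, layer sizes, and random placeholder data are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical shapes: 7 annual observations per firm (the training window),
# 20 financial ratios per year; the paper selects its actual variable bundles
# via multivariate discriminant analysis, the logit model, and Lasso.
n_firms, n_years, n_features = 1000, 7, 20
X = np.random.rand(n_firms, n_years, n_features).astype("float32")
y = np.random.randint(0, 2, size=(n_firms, 1)).astype("float32")  # 1 = default

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_features)),
    tf.keras.layers.LSTM(32),                        # reads the ratios year by year
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs P(default)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64)
```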

The Applicability of Conditional Generative Model Generating Groundwater Level Fluctuation Corresponding to Precipitation Pattern (조건부 생성모델을 이용한 강수 패턴에 따른 지하수위 생성 및 이의 활용에 관한 연구)

  • Jeong, Jiho;Jeong, Jina;Lee, Byung Sun;Song, Sung-Ho
    • Economic and Environmental Geology / v.54 no.1 / pp.77-89 / 2021
  • In this study, a method is proposed to improve the performance of the hydraulic property estimation model developed by Jeong et al. (2020). In their study, low-dimensional features of annual groundwater level (GWL) fluctuation patterns, extracted with a denoising autoencoder (DAE), were used to develop a regression model for predicting the hydraulic properties of an aquifer. However, the low-dimensional features of the DAE are highly dependent on the precipitation pattern even when the GWL is monitored at the same location, causing uncertainty in the regression model's hydraulic property estimates. To solve this problem, a process for generating GWL fluctuation patterns conditioned on precipitation is proposed, based on a conditional variational autoencoder (CVAE). The CVAE learns a statistical relationship between GWL fluctuation and the precipitation pattern. Actual GWL and precipitation data monitored at a total of 71 stations over 10 years in South Korea were used to validate the effect of the CVAE. As a result, the trained CVAE model generated reasonable GWL fluctuation patterns under various precipitation conditions for all the monitoring locations. Based on the trained CVAE model, low-dimensional features of the GWL fluctuation pattern free of interference from differing precipitation patterns were extracted for all monitoring stations and compared to the features extracted with the DAE. Consequently, the statistical consistency of the features extracted using the CVAE is confirmed to be improved over the DAE. Thus, we conclude that the proposed method may be useful for extracting a more accurate feature of the GWL fluctuation pattern driven solely by the hydraulic characteristics of the aquifer, which should in turn improve the performance of the previously developed regression model.
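To illustrate the conditioning idea, here is a compact Keras sketch of a CVAE, not the authors' implementation: the encoder sees the GWL series together with its precipitation series, and the decoder reconstructs the GWL series from the latent code plus the precipitation condition, so the latent code is pushed to capture what precipitation does not explain. Series length, layer sizes, and the placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

T, latent_dim = 365, 8  # assumed daily series over one year

class CVAE(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([
            layers.Input(shape=(2 * T,)),           # [gwl, precip] concatenated
            layers.Dense(128, activation="relu"),
            layers.Dense(2 * latent_dim),           # mean and log-variance
        ])
        self.decoder = tf.keras.Sequential([
            layers.Input(shape=(latent_dim + T,)),  # [z, precip]
            layers.Dense(128, activation="relu"),
            layers.Dense(T),
        ])

    def call(self, inputs):
        gwl, precip = inputs
        stats = self.encoder(tf.concat([gwl, precip], axis=-1))
        mean, logvar = tf.split(stats, 2, axis=-1)
        # Reparameterization trick, then decode conditioned on precipitation.
        z = mean + tf.exp(0.5 * logvar) * tf.random.normal(tf.shape(mean))
        recon = self.decoder(tf.concat([z, precip], axis=-1))
        kl = -0.5 * tf.reduce_mean(1 + logvar - mean**2 - tf.exp(logvar))
        self.add_loss(kl)  # KL term; reconstruction term comes from compile()
        return recon

model = CVAE()
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for the 71-station, 10-year record.
gwl = np.random.rand(512, T).astype("float32")
precip = np.random.rand(512, T).astype("float32")
model.fit([gwl, precip], gwl, epochs=5, batch_size=32)

# Generate a GWL pattern conditioned on a new precipitation series.
z = np.random.normal(size=(1, latent_dim)).astype("float32")
generated = model.decoder(np.concatenate([z, precip[:1]], axis=-1))
```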

Analysis of the Nature of Science (NOS) in Integrated Science Textbooks of the 2015 Revised Curriculum (2015 개정 교육과정 통합과학 교과서의 과학의 본성(NOS) 분석)

  • Jeon, Young Been;Lee, Young Hee
    • Journal of Science Education / v.44 no.3 / pp.273-288 / 2020
  • This study investigates the presentation of the Nature of Science (NOS) in integrated science textbooks of the 2015 revised curriculum. Five integrated science textbooks published under the 2015 revised curriculum were analyzed with a conceptual framework of four NOS themes (Lee, 2013) grounded in scientific literacy: 1. the nature of scientific knowledge (theme I), 2. the nature of scientific inquiry (theme II), 3. the nature of scientific thinking (theme III), and 4. the nature of interactions among science, technology, and society (theme IV). The reliability of the textbook analysis between two coders, measured by Cohen's kappa, was between 0.83 and 0.96, which means the results of the analysis were consistent and reliable. The findings were as follows. First, theme II, the nature of scientific inquiry, was emphasized overall in the integrated science textbooks of the 2015 revised curriculum, accounting for over 40% of the content in all five publishers' textbooks. Second, while theme II was emphasized regardless of publisher, the other NOS themes were emphasized in different proportions by different publishers; thus, apart from theme II being the most emphasized, the focus among the other three themes varied by publisher. Third, the presentation of the NOS was similar across the topics of the textbooks except for topic 4, Environment and Energy: theme IV, the nature of interactions among science, technology, and society, received substantial emphasis only in that topic. Finally, compared with the textbooks of the previous curriculum, the presentation of the NOS in the 2015 revised curriculum textbooks was more balanced among the four themes, with a focus on scientific inquiry.
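As a side note on the reliability figure, Cohen's kappa corrects the raw agreement rate between two coders for the agreement expected by chance; values above roughly 0.8 are usually read as near-perfect agreement. A minimal sketch with scikit-learn, using invented codings of textbook segments into the four themes:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical coding of 12 textbook segments into the four NOS themes
# by two independent coders ("I" through "IV" stand for themes I to IV).
coder1 = ["I", "II", "II", "III", "IV", "II", "I", "II", "III", "IV", "II", "II"]
coder2 = ["I", "II", "II", "III", "IV", "II", "I", "II", "II",  "IV", "II", "II"]

print(cohen_kappa_score(coder1, coder2))  # chance-corrected agreement
```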

Exploring the Ways to Use Maker Education in School (학교 교육 활용을 위한 메이커 교육 구성 요소 탐색)

  • Kwon, Yoojin;Lee, Youngtae;Lim, Yunjin;Park, Youngsu;Lee, Eunkyung;Park, Seongseog
    • Journal of Korean Home Economics Education Association / v.32 no.4 / pp.19-30 / 2020
  • Maker education grew out of the maker movement, in which makers gathered in makerspaces share their activities and experiences, and the educational value it pursues is based on the constructivist paradigm. The purpose of this study is to present components of maker education for use in school education, focusing on the characteristics and educational values of maker education, and to explore ways to use them. To this end, this study examined the theoretical grounds for re-conceptualizing maker education, derived statements from in-depth interview data from teachers conducting maker education classes, and reviewed their validity with experts. Based on these statements, components for the use of maker education were derived, a direction for maker education in school education was set, and an example framework that could be used in subject classes and creative experiential learning was proposed. The research shows that in maker education, makers cooperate to carry out activities, share ideas with others and try to improve them, and exercise self-direction through learning, tinkering, design thinking, sharing, and reflection. In addition, maker education emphasizes experiential learning that can solve real problems students face, rather than confining students' choices to specific prescribed activities. It emphasizes the learner's course of action rather than the outcome of the activity, tolerates the learner's failure, and stresses the teacher's role as a facilitator who encourages re-challenge. In the future, these components can be used in various ways in individual subjects and school activities by curriculum experts, teaching and learning experts, elementary and middle school teachers, parents, local educators, and others, and this study will contribute to setting future research directions as basic research for school maker education.

Implementation of integrated monitoring system for trace and path prediction of infectious disease (전염병의 경로 추적 및 예측을 위한 통합 정보 시스템 구현)

  • Kim, Eungyeong;Lee, Seok;Byun, Young Tae;Lee, Hyuk-Jae;Lee, Taikjin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.69-76 / 2013
  • The incidence of global infectious and pathogenic diseases such as H1N1 (swine flu) and avian influenza (AI) has recently increased. An infectious disease is a pathogen-caused disease that can be passed from an infected person to a susceptible host. Pathogens of infectious diseases, including bacilli, spirochetes, rickettsiae, viruses, fungi, and parasites, cause various symptoms such as respiratory disease, gastrointestinal disease, liver disease, and acute febrile illness. They can be spread through various means such as food, water, insects, breathing, and contact with other persons. Recently, most countries around the world have used mathematical models to predict and prepare for the spread of infectious diseases. In a modern society, however, infectious diseases spread in a fast and complicated manner because of the rapid development of transportation (both ground and underground), leaving too little time to predict their spread. Therefore, a new system that can prevent the spread of infectious diseases by predicting their pathways needs to be developed. In this study, an integrated monitoring system is developed that can track and predict the pathway of infectious diseases for real-time monitoring and control. The system is implemented based on the conventional mathematical model known as the Susceptible-Infectious-Recovered (SIR) model. The proposed model is distinctive in that it considers both inter- and intra-city modes of transportation, including bus, train, car, and airplane, to express interpersonal contact (i.e., migration flow). Also, real data modified according to the geographical characteristics of Korea are employed to reflect realistic circumstances of possible disease spread in Korea. We can predict where and when vaccination needs to be performed by controlling the parameters of this model. The simulation includes several assumptions and scenarios. Using data from Statistics Korea, five major cities assumed to have the most population migration were chosen: Seoul, Incheon (Incheon International Airport), Gangneung, Pyeongchang, and Wonju. It was assumed that the cities were connected in one network and that the infectious disease spread through the denoted transportation methods only. Daily traffic volume was obtained from the Korean Statistical Information Service (KOSIS), and the population of each city was acquired from Statistics Korea. Moreover, data on H1N1 (swine flu) were provided by the Korea Centers for Disease Control and Prevention, and air transport statistics were obtained from the Aeronautical Information Portal System. These data were adjusted in consideration of current conditions in Korea and several realistic assumptions and scenarios. Three scenarios, all beginning with an occurrence of H1N1 at Incheon International Airport, were simulated: no vaccination in any city, vaccination in Seoul, and vaccination in Pyeongchang. The number of days taken for the number of infected to reach its peak and the proportion of Infectious (I) were compared. Without vaccination, the peak arrived fastest in Seoul at 37 days and slowest in Pyeongchang at 43 days, and the proportion of I was highest in Seoul and lowest in Pyeongchang. With vaccination in Seoul, the peak again arrived fastest in Seoul (37 days) and slowest in Pyeongchang (43 days), while the proportion of I was highest in Gangneung and lowest in Pyeongchang. With vaccination in Pyeongchang, the timing was the same (fastest in Seoul at 37 days, slowest in Pyeongchang at 43 days), and the proportion of I was again highest in Gangneung and lowest in Pyeongchang. These results confirm that, after its first occurrence, H1N1 spreads in proportion to the traffic volume of each city. Because the infection pathway depends on each city's traffic volume, it is possible to devise preventive measures against infectious diseases by tracking and predicting the pathway through traffic volume analysis.
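For reference, the single-city core of the SIR model can be written down and integrated in a few lines. The sketch below uses SciPy with invented parameters; the paper's version couples several such compartments through inter-city traffic volumes, which is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR dynamics: S -> I at rate beta*S*I/N, I -> R at rate gamma*I."""
    s, i, r = y
    n = s + i + r
    new_infections = beta * s * i / n
    return [-new_infections, new_infections - gamma * i, gamma * i]

# Hypothetical single-city parameters; vaccination can be mimicked by moving
# part of S into R at t = 0, which is one way to run scenario comparisons.
N, I0 = 1_000_000, 10
beta, gamma = 0.5, 0.2  # assumed transmission and recovery rates
sol = solve_ivp(sir, (0, 120), [N - I0, I0, 0], args=(beta, gamma),
                t_eval=np.arange(0, 121))

peak_day = sol.t[np.argmax(sol.y[1])]
print(f"infections peak on day {peak_day:.0f}, "
      f"peak proportion I = {sol.y[1].max() / N:.3f}")
```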