• Title/Summary/Keyword: Prediction models


Korean Ocean Forecasting System: Present and Future (한국의 해양예측, 오늘과 내일)

  • Kim, Young Ho;Choi, Byoung-Ju;Lee, Jun-Soo;Byun, Do-Seong;Kang, Kiryong;Kim, Young-Gyu;Cho, Yang-Ki
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.18 no.2 / pp.89-103 / 2013
  • National demands for an ocean forecasting system have increased to support economic activity and national safety, including search and rescue, maritime defense, fisheries, port management, leisure activities and marine transportation. Ocean forecasting is also regarded as a key component in improving weather and climate forecasting. Owing to these demands as well as technological progress, ocean forecasting systems have been established in advanced countries since the late 1990s. The Global Ocean Data Assimilation Experiment (GODAE) contributed significantly to the achievement and worldwide spread of ocean forecasting systems, and its four stages are summarized here. The goals, visions, development histories and research programs of the operational ocean forecasting systems of advanced countries such as the USA, France, the UK, Italy, Norway, Australia, Japan and China were examined and compared. The strategies of the successfully established systems can be summarized as follows: first, national capabilities were concentrated to establish a successful operational ocean forecasting system; second, newly developed technologies were shared with other countries, achieving mutual and cooperative development through international programs; third, each participating organization devoted itself to its own task according to its role. In Korean society, demands on ocean forecasting systems have also grown. The present development status and long-term plans of KMA (Korea Meteorological Administration), KHOA (Korea Hydrographic and Oceanographic Administration), NFRDI (National Fisheries Research & Development Institute) and ADD (Agency for Defense Development) were surveyed. Judging from the history of the systems established in other countries, cooperation among the relevant Korean organizations, possibly in the form of a consortium, is essential to establish an accurate and successful ocean forecasting system. Through such cooperation, we can (1) set up high-quality ocean forecasting models and systems, (2) invest and distribute financial resources efficiently without duplicate investment, and (3) overcome the lack of manpower for development. At the present stage, it is strongly recommended that national resources be concentrated on developing a large-scale operational Korea Ocean Forecasting System which can produce open boundary and initial conditions for local ocean and climate forecasting models. Once the system is established, each organization can adapt it for its own specialized purposes. In addition, Korea can then contribute to the international ocean prediction community.

Development of a TBM Advance Rate Model and Its Field Application Based on Full-Scale Shield TBM Tunneling Tests in 70 MPa of Artificial Rock Mass (70 MPa급 인공암반 내 실대형 쉴드TBM 굴진실험을 통한 굴진율 모델 및 활용방안 제안)

  • Kim, Jungjoo;Kim, Kyoungyul;Ryu, Heehwan;Hwan, Jung Ju;Hong, Sungyun;Jo, Seonah;Bae, Dusan
    • KEPCO Journal on Electric Power and Energy / v.6 no.3 / pp.305-313 / 2020
  • The use of cable tunnels for electric power transmission, as well as their construction in difficult conditions such as subsea terrains and areas of large overburden, has increased. To operate small-diameter shield TBMs (Tunnel Boring Machines) efficiently, estimation of the advance rate and development of a design model are necessary. However, due to the limited scope of geological surveys and face mapping, it is very difficult to match rock mass characteristics with TBM operational data, derive their mutual relationships, and develop an advance rate model. Also, the working mechanism of the previously utilized linear cutting machine differs slightly from the real excavation mechanism, in which a number of disc cutters penetrate the rock mass simultaneously as the cutterhead rotates. Therefore, in order to propose advance rate and machine design models for small-diameter TBMs, an EPB (Earth Pressure Balance) shield TBM with a 3.54 m diameter cutterhead was manufactured and 19 full-scale tunneling tests were performed, each in an 87.5 ㎥ volume of artificial rock mass. The relationships between advance rate and machine data were analyzed by performing the tests in homogeneous rock mass with 70 MPa uniaxial compressive strength while varying TBM operational parameters such as thrust force and cutterhead RPM. Using the recorded penetration depth and torque values in the model development is more accurate and realistic, since they were derived through the real excavation mechanism. The relationships between the normal force on a single disc cutter and penetration depth, as well as between normal force and rolling force, are suggested in this study. The advance rate can be predicted and the TBM designed for rock mass of 70 MPa strength using these relationships. To improve the applicability of the developed model, the FPI (Field Penetration Index) concept was applied, which can overcome the limitation of the 100% RQD (Rock Quality Designation) of the artificial rock mass.
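The quantities named above combine into an advance rate in a straightforward way; the sketch below shows the arithmetic under stated assumptions. `FPI` here is the usual field penetration index (cutter normal force divided by penetration per revolution), and the linear force-penetration coefficients are hypothetical placeholders, not the regression constants fitted in the paper.

```python
# Hedged sketch: advance-rate arithmetic for a small-diameter shield TBM.
# The linear force-penetration relation and its coefficients (a, b) are
# illustrative assumptions; the paper fits its own relations for 70 MPa rock.

def penetration_per_rev(normal_force_kN: float, a: float = 50.0, b: float = 20.0) -> float:
    """Invert an assumed linear relation Fn = a + b*p to get p (mm/rev)."""
    return max((normal_force_kN - a) / b, 0.0)

def fpi(normal_force_kN: float, p_mm_per_rev: float) -> float:
    """Field Penetration Index: normal force per unit penetration (kN/cutter per mm/rev)."""
    return normal_force_kN / p_mm_per_rev

def advance_rate_m_per_hr(p_mm_per_rev: float, rpm: float) -> float:
    """Advance rate = penetration per revolution x revolutions per hour."""
    return p_mm_per_rev * rpm * 60.0 / 1000.0

p = penetration_per_rev(normal_force_kN=250.0)   # hypothetical thrust per cutter
print(fpi(250.0, p), advance_rate_m_per_hr(p, rpm=4.0))
```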

Recent Changes in Bloom Dates of Robinia pseudoacacia and Bloom Date Predictions Using a Process-Based Model in South Korea (최근 12년간 아까시나무 만개일의 변화와 과정기반모형을 활용한 지역별 만개일 예측)

  • Kim, Sukyung;Kim, Tae Kyung;Yoon, Sukhee;Jang, Keunchang;Lim, Hyemin;Lee, Wi Young;Won, Myoungsoo;Lim, Jong-Hwan;Kim, Hyun Seok
    • Journal of Korean Society of Forest Science / v.110 no.3 / pp.322-340 / 2021
  • Due to climate change and the consequent rise in spring temperatures, the flowering time of Robinia pseudoacacia has advanced, and simultaneous blooming across different regions has occurred in South Korea. These changes in flowering time have become a major crisis for the domestic beekeeping industry, and demand for accurate prediction of the flowering time of R. pseudoacacia is increasing. In this study, we developed and compared the performance of four models predicting the flowering time of R. pseudoacacia for the entire country: a Single Model for the whole country (SM), a Modified Single Model (MSM) using correction factors derived from SM, a Group Model (GM) estimating parameters for each region, and a Local Model (LM) estimating parameters for each site. To this end, bloom dates observed at 26 sites across the country over the past 12 years (2006-2017) and daily temperature data were used. As a result, bloom dates in the north-central region, where the spring temperature increase was more than two-fold that of the southern regions, have advanced, and the difference from the southwest region decreased by 0.7098 days per year (p-value=0.0417). Model comparisons showed that MSM and LM performed better than the other models, with 24% and 15% lower RMSE than SM, respectively. Furthermore, validation with 16 additional sites over 4 years revealed that co-kriging of LM performed better than nationwide expansion of MSM (RMSE: p-value=0.0118; bias: p-value=0.0471). This study improves bloom date predictions for R. pseudoacacia and proposes methods for reliable expansion to the entire nation.
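As a concrete illustration of a process-based bloom model of the kind compared above, the sketch below accumulates daily thermal forcing until a critical sum is reached. The base temperature, start date and forcing threshold are hypothetical parameters, not the values estimated for SM/MSM/GM/LM in the paper; fitting these three numbers per site versus once nationwide is what distinguishes LM from SM.

```python
from datetime import date, timedelta

# Hedged sketch of a process-based (thermal-time) bloom-date model:
# accumulate daily forcing above a base temperature from a fixed start date
# and predict bloom on the day the sum first exceeds a critical threshold.
# T_BASE, START_DOY and F_CRIT are illustrative assumptions.

T_BASE = 5.0      # base temperature (deg C), assumed
START_DOY = 32    # accumulation start (Feb 1), assumed
F_CRIT = 280.0    # critical forcing sum (degree-days), assumed

def predict_bloom(year: int, daily_tmean: list[float]) -> date | None:
    """daily_tmean[i] is the mean temperature of day-of-year i+1."""
    forcing = 0.0
    for doy in range(START_DOY, len(daily_tmean) + 1):
        forcing += max(daily_tmean[doy - 1] - T_BASE, 0.0)
        if forcing >= F_CRIT:
            return date(year, 1, 1) + timedelta(days=doy - 1)
    return None  # threshold never reached within the series
```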

A prediction study on the number of emergency patients with asthma according to the concentration of air pollutants (대기오염물질 농도에 따른 천식 응급환자 수 예측 연구)

  • Han Joo Lee;Min Kyu Jee;Cheong Won Kim
    • Journal of Service Research and Studies / v.13 no.1 / pp.63-75 / 2023
  • With the development of industry, interest in air pollutants has increased. Air pollutants affect various fields such as environmental pollution and global warming, and environmental diseases are among the areas they influence. Because of their small molecular size, air pollutants can affect the skin or the respiratory tract, and various studies on air pollutants and environmental diseases have accordingly been conducted. Asthma, one such environmental disease, can be life-threatening if symptoms worsen into asthma attacks, and adult asthma is difficult to cure once it develops. Factors that worsen asthma include particulate matter and air pollution, and the prevalence of asthma is increasing worldwide. In this paper, we study how air pollutants correlate with the number of emergency room admissions of asthma patients and predict the future number of asthma emergency patients using the most highly correlated pollutants. Concentrations of five pollutants were used: sulfur dioxide (SO2), carbon monoxide (CO), ozone (O3), nitrogen dioxide (NO2), and fine dust (PM10); for the environmental disease, data on the number of asthma patients admitted to emergency rooms were used. Both data sets cover a total of 5 years, from January 1, 2013 to December 31, 2017. Predictions were made with two models, Informer and LTSF-Linear, and performance was measured with MAE, MAPE, and RMSE. Results were compared for the cases both including and excluding the past number of emergency patients as an input. This paper identifies the air pollutants that improve model performance in predicting the number of asthma emergency patients using the Informer and LTSF-Linear models.
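Of the two forecasters named above, LTSF-Linear is simple enough to sketch in a few lines: it maps a look-back window directly to the forecast horizon with a single linear layer, applied per channel. The sketch below is a minimal PyTorch rendering under that reading; the window and horizon lengths and the channel count are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class LTSFLinear(nn.Module):
    # Minimal LTSF-Linear: one linear map from a look-back window to the
    # forecast horizon, shared across channels (pollutants, patient counts).
    def __init__(self, seq_len: int, pred_len: int):
        super().__init__()
        self.proj = nn.Linear(seq_len, pred_len)

    def forward(self, x):           # x: (batch, seq_len, channels)
        x = x.transpose(1, 2)       # (batch, channels, seq_len)
        y = self.proj(x)            # (batch, channels, pred_len)
        return y.transpose(1, 2)    # (batch, pred_len, channels)

model = LTSFLinear(seq_len=96, pred_len=24)    # 96-step window -> 24-step horizon (assumed)
y_hat = model(torch.randn(8, 96, 6))           # 6 channels: 5 pollutants + patient count (assumed)
```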

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus only on new cars or only on used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. The method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response, the used car of the same model keeps decreasing its price until it regains the lost market share and the status quo is restored, while the new car settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and thus suggests a less aggressive used car price discount in response to new car rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see the best response for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from typical car dealerships, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study: customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
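For readers who want the mechanics of the two-stage structure above, the sketch below computes nested logit choice shares with car models as nests and {new, used} as alternatives within each nest. The utilities and dissimilarity parameters are made-up numbers for illustration; the paper estimates these from PIN transactions. Note that a dissimilarity parameter above 1 signals the mis-specification discussed in the abstract.

```python
import numpy as np

def nested_logit_shares(v: dict, nests: dict, lam: dict) -> dict:
    """Nested logit choice probabilities.
    v: alternative -> deterministic utility
    nests: nest (car model) -> list of alternatives, e.g. [new, used]
    lam: nest -> dissimilarity parameter (0 < lam <= 1 for consistency)"""
    # inclusive value of each nest
    iv = {n: np.log(sum(np.exp(v[a] / lam[n]) for a in alts))
          for n, alts in nests.items()}
    denom = sum(np.exp(lam[n] * iv[n]) for n in nests)
    shares = {}
    for n, alts in nests.items():
        p_nest = np.exp(lam[n] * iv[n]) / denom          # first stage: model choice
        for a in alts:
            p_within = np.exp(v[a] / lam[n]) / np.exp(iv[n])  # second stage: new vs used
            shares[a] = p_nest * p_within
    return shares

# Illustrative utilities and dissimilarity parameters (not estimated values).
print(nested_logit_shares(
    v={"jetta_new": 1.2, "jetta_used": 0.8, "corolla_new": 1.0, "corolla_used": 0.7},
    nests={"jetta": ["jetta_new", "jetta_used"], "corolla": ["corolla_new", "corolla_used"]},
    lam={"jetta": 0.6, "corolla": 0.6},
))
```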


A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers / v.32 no.1 / pp.33-46 / 2010
  • We investigated and estimated the decomposition characteristics and by-products of N-Nitrosodimethylamine (NDMA) in a UV process using a design of experiment (DOE) based on the Box-Behnken design. The main factors, each at 3 levels, were UV intensity ($X_1$, range: 1.5~4.5 mW/cm²), NDMA concentration ($X_2$, range: 100~300 uM) and pH ($X_3$, range: 3~9), and four responses were set up to estimate the prediction models and optimization conditions: $Y_1$ (% of NDMA removal), $Y_2$ (dimethylamine (DMA) formation, uM), $Y_3$ (dimethylformamide (DMF) formation, uM) and $Y_4$ ($NO_2$-N formation, uM). The fitted prediction models and the optimal points obtained by canonical analysis were: $Y_1$ [% of NDMA removal] $= 117 + 21X_1 - 0.3X_2 - 17.2X_3 + 2.43X_1^2 + 0.001X_2^2 + 3.2X_3^2 - 0.08X_1X_2 - 1.6X_1X_3 - 0.05X_2X_3$ ($R^2$ = 96%, Adjusted $R^2$ = 88%), with a maximum removal of 99.3% at ($X_1$: 4.5 mW/cm², $X_2$: 190 uM, $X_3$: 3.2); $Y_2$ [DMA conc.] $= -101 + 18.5X_1 + 0.4X_2 + 21X_3 - 3.3X_1^2 - 0.01X_2^2 - 1.5X_3^2 - 0.01X_1X_2 + 0.07X_1X_3 - 0.01X_2X_3$ ($R^2$ = 99.4%, Adjusted $R^2$ = 95.7%), with 35.2 uM at ($X_1$: 3 mW/cm², $X_2$: 220 uM, $X_3$: 6.3); $Y_3$ [DMF conc.] $= -6.2 + 0.2X_1 + 0.02X_2 + 2X_3 - 0.26X_1^2 - 0.01X_2^2 - 0.2X_3^2 - 0.004X_1X_2 + 0.1X_1X_3 - 0.02X_2X_3$ ($R^2$ = 98%, Adjusted $R^2$ = 94.4%), with 3.7 uM at ($X_1$: 4.5 mW/cm², $X_2$: 290 uM, $X_3$: 6.2); and $Y_4$ [$NO_2$-N conc.] $= -25 + 12.2X_1 + 0.15X_2 + 7.8X_3 + 1.1X_1^2 + 0.001X_2^2 - 0.34X_3^2 + 0.01X_1X_2 + 0.08X_1X_3 - 3.4X_2X_3$ ($R^2$ = 98.5%, Adjusted $R^2$ = 95.7%), with 74.5 uM at ($X_1$: 4.5 mW/cm², $X_2$: 220 uM, $X_3$: 3.1). This study demonstrates that response surface methodology and the Box-Behnken statistical experimental design can provide statistically reliable results for the decomposition and by-products of NDMA by UV photolysis, and for the determination of optimum conditions. Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
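Each of the response models above is a full quadratic in the three factors, so fitting one is an ordinary least-squares problem. The sketch below builds the 10-term quadratic design matrix and fits it with numpy; the small data arrays are placeholders, not the paper's Box-Behnken runs.

```python
import numpy as np

# Hedged sketch: fit a full quadratic response-surface model
# y = b0 + b1*x1 + ... + b9*x2*x3, the form used for Y1..Y4 above.
# X rows are (X1, X2, X3) settings; y holds the measured response.

def quadratic_design_matrix(X: np.ndarray) -> np.ndarray:
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)),          # intercept
        x1, x2, x3,               # linear terms
        x1**2, x2**2, x3**2,      # squared terms
        x1*x2, x1*x3, x2*x3,      # two-factor interactions
    ])

# Placeholder runs (a 3-factor Box-Behnken design has 12 edge midpoints
# plus center replicates; the values here are illustrative only).
X = np.array([[1.5, 100, 3], [4.5, 100, 6], [1.5, 300, 6],
              [4.5, 300, 9], [3.0, 200, 6], [3.0, 200, 6],
              [1.5, 200, 9], [4.5, 200, 3], [3.0, 100, 9],
              [3.0, 300, 3], [3.0, 200, 6]])
y = np.array([80, 95, 60, 70, 85, 86, 75, 99, 78, 72, 84], dtype=float)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(beta)   # coefficients in the order listed above
```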

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful, because new information is generated constantly, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes harder to have people produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the resulting performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about the 30 individual stocks ranked highest by publication frequency from May 30, 2017 to May 21, 2018 are used. The total number of reports is 5,600; 3,074 reports, which account for about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented models, we measure prediction power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
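The per-stock score function above follows the neural tensor network form of Socher et al. (2013); a minimal PyTorch rendering is sketched below. The dimensions, the tensor slice count, and the pairing of a candidate entity vector with a per-stock reference vector are assumptions made for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    # Neural Tensor Network scoring layer:
    # s(e1, e2) = u^T tanh(e1^T W[1..k] e2 + V [e1; e2] + b)
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # k bilinear slices
        self.V = nn.Linear(2 * dim, k)                          # standard layer (adds bias b)
        self.u = nn.Linear(k, 1, bias=False)                    # final scoring vector

    def forward(self, e1, e2):          # e1, e2: (batch, dim)
        bilinear = torch.einsum('bd,kde,be->bk', e1, self.W, e2)
        hidden = torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=-1)))
        return self.u(hidden)           # (batch, 1) relatedness score

# One-hot entity vectors over the top-100 entities per stock (dim assumed).
ntn = NTNScore(dim=100, k=4)
score = ntn(torch.rand(1, 100), torch.rand(1, 100))
```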

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although there have been cases of evaluating the value of specific companies or projects since the early 2000s, mostly concentrated in the developed countries of North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually been taken up. There are, of course, several online systems that qualitatively evaluate a technology's grade or the patent rating of the technology to be evaluated, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, the representative income-approach method that values expected future economic income at present, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. We examine how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as the technology cycle time (TCT), the sales growth rate and profitability of similar companies or the industry sector, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data-driven sources, or if the estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and primary variables presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a market share prediction module based on similar company selection, and so on.
In addition, the research on the development and intelligence of the web-based STAR-Value system is significant in that it spreads a web-based system in which the theoretical foundations of the technology valuation field can be validated and applied in practice, and it is expected to be utilized in various fields of technology commercialization.
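Of the six valuation methods, the DCF calculation named above is easy to make concrete: discount each projected cash flow attributable to the technology by the WACC over its remaining economic life. The sketch below is a minimal version; the cash flows, discount rate and technology contribution factor are illustrative assumptions, not STAR-Value defaults.

```python
# Hedged sketch of a discounted cash flow (DCF) technology valuation:
# PV = sum_t CF_t * contribution / (1 + wacc)^t over the technology's
# remaining economic life. All numbers below are illustrative.

def dcf_value(cash_flows: list[float], wacc: float, tech_contribution: float) -> float:
    """Present value of the technology's share of projected free cash flows."""
    return sum(cf * tech_contribution / (1.0 + wacc) ** t
               for t, cf in enumerate(cash_flows, start=1))

projected = [120.0, 150.0, 180.0, 190.0, 170.0]   # yearly free cash flows (assumed)
print(dcf_value(projected, wacc=0.12, tech_contribution=0.3))
```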

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. Data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-based smart systems, as they enable customized intelligent systems through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for smart cities and smart transportation, for instance in solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger flow control information system to enhance the convenience of citizens and commuters amid congested public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that utilize image data are limited by reduced object detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues, because they do not require the identification of individuals, and can be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting the data was established at the equally spaced midpoints of a 4×4 grid in the upper part of the ceiling of subway entrances with high passenger traffic, measuring the temperature change for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values per unit of time were calculated; this maximizes the signal of movement within the detection area. In addition, the values were scaled up by a factor of 10 in order to reflect the temperature differences by area more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for the analysis. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, with its superior performance in image classification, and an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We verified the validity of the proposed model through performance comparisons with other artificial intelligence algorithms: a Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM) and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance of the four. By utilizing the proposed devices and models, it is expected that various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, can be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification in other environments remains to be carried out. In the future, the proposed model is expected to become more reliable if experimental data are collected in more varied environments, or if the learning data are extended with measurements from other sensors.
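To make the hybrid concrete, the sketch below encodes each 4×4 preprocessed frame with a small CNN, feeds the per-frame features to an LSTM, and regresses the passing-person count from the last hidden state. Layer sizes, the sequence length and the single-count output are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    # Hedged sketch of a hybrid CNN-LSTM for sequences of 4x4 thermal frames:
    # a small CNN encodes each frame, an LSTM models the sequence, and a
    # linear head predicts the passing-person count.
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=2), nn.ReLU(),   # 4x4 -> (16, 3, 3)
            nn.Flatten(),                                  # 16*3*3 = 144 features
        )
        self.lstm = nn.LSTM(input_size=144, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 4, 4)
        b, t = x.shape[:2]
        f = self.cnn(x.reshape(b * t, 1, 4, 4)).reshape(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])      # count predicted from the last step

model = CNNLSTM()
print(model(torch.rand(2, 20, 4, 4)).shape)   # (2, 1); 20-step window assumed
```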

Optimum Radiotherapy Schedule for Uterine Cervical Cancer based-on the Detailed Information of Dose Fractionation and Radiotherapy Technique (처방선량 및 치료기법별 치료성적 분석 결과에 기반한 자궁경부암 환자의 최적 방사선치료 스케줄)

  • Cho, Jae-Ho;Kim, Hyun-Chang;Suh, Chang-Ok;Lee, Chang-Geol;Keum, Ki-Chang;Cho, Nam-Hoon;Lee, Ik-Jae;Shim, Su-Jung;Suh, Yang-Kwon;Seong, Jinsil;Kim, Gwi-Eon
    • Radiation Oncology Journal / v.23 no.3 / pp.143-156 / 2005
  • Background: The best dose-fractionation regimen for the definitive radiotherapy of cervix cancer remains to be clearly determined, partially owing to the complexity of the affecting factors and the lack of detailed information on external and intracavitary fractionation. To find optimal practice guidelines, our experience with the combination of external beam radiotherapy (EBRT) and high-dose-rate intracavitary brachytherapy (HDR-ICBT) was reviewed, with detailed information on the various treatment parameters obtained from a large cohort of women treated homogeneously at a single institute. Materials and Methods: The subjects were 743 cervical cancer patients (Stage IB 198, IIA 77, IIB 364, IIIA 7, IIIB 89 and IVA 8) treated by radiotherapy alone between 1990 and 1996. A total EBRT dose of 23.4~59.4 Gy (median 45.0) was delivered to the whole pelvis. HDR-ICBT was also performed using various fractionation schemes. A midline block (MLB) was initiated after the delivery of 14.4~43.2 Gy (median 36.0) of EBRT in 495 patients, while in the other 248 patients it could not be used due to slow tumor regression or a huge initial tumor bulk. The point A dose and the actual bladder and rectal doses were individually assessed in all patients. The biologically effective dose (BED) to the tumor ($\alpha/\beta$=10) and to late-responding tissues ($\alpha/\beta$=3) was calculated for both EBRT and HDR-ICBT, and the total BED values at point A and at the actual bladder and rectal reference points were taken as the sum of the EBRT and HDR-ICBT components. In addition to the details of dose-fractionation, other factors that can affect the schedule of definitive radiotherapy (i.e., the overall treatment time and physicians' preference) were also thoroughly analyzed. The association between MD-BED $Gy_3$ and the risk of complications was assessed using serial multiple logistic regression models. The associations between R-BED $Gy_3$ and rectal complications and between V-BED $Gy_3$ and bladder complications were assessed using multiple logistic regression models after adjustment for age, stage, tumor size and treatment duration. Serial Cox proportional hazards regression models were used to estimate the relative risks of recurrence due to MD-BED $Gy_{10}$ and treatment duration. Results: The overall complication rate for RTOG Grade 1~4 toxicities was 33.1%. The 5-year actuarial pelvic control rate for all 743 patients was 83%. The midline cumulative BED dose, the sum of the external midline BED and the HDR-ICBT point A BED, ranged from 62.0 to 121.9 $Gy_{10}$ (median 93.0) for tumors and from 93.6 to 187.3 $Gy_3$ (median 137.6) for late-responding tissues. The median cumulative values of the actual rectal (R-BED $Gy_3$) and bladder point BED (V-BED $Gy_3$) were 118.7 $Gy_3$ (range 48.8~265.2) and 126.1 $Gy_3$ (range 54.9~267.5), respectively. MD-BED $Gy_3$ showed a good correlation with rectal (p=0.003), but not with bladder complications (p=0.095). R-BED $Gy_3$ had a very strong association (p<0.0001) and was more predictive of rectal complications than A-BED $Gy_3$. V-BED $Gy_3$ also showed significance in the prediction of bladder complications in a trend test (p=0.0298). No statistically significant dose-response relationship for pelvic control was observed.
The Sandwich and Continuous techniques, which differ according to when the ICR was inserted during the EBRT course and reflect physicians' preference, showed no differences in local control or complication rates; nor were there differences between the 3 and 5 Gy fraction sizes of HDR-ICBT. Conclusion: The main reason optimal dose-fractionation guidelines are not easily established is the absence of a dose-response relationship for tumor control, resulting from the high dose gradient of HDR-ICBT, individual differences in tumor response to radiation therapy, and the complexity of the affecting factors. Therefore, in our opinion, individualized tailored therapy, along with general guidelines, is necessary in the definitive radiation treatment of cervix cancer. This study also demonstrated the strong predictive value of the actual rectal and bladder reference doses; vaginal gauze packing might therefore be very important. To keep the BED below the threshold at which complications arise, early midline shielding and reductions of the HDR-ICBT total dose and fractional dose should be considered.
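For reference, the BED values quoted above follow the standard linear-quadratic formula, with each cumulative value taken as the sum of the EBRT and HDR-ICBT components as stated in the abstract; a minimal statement of the calculation is:

```latex
% BED of a course of n fractions of dose d under the linear-quadratic model
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\alpha/\beta = 10~\mathrm{Gy}\ (\text{tumor},\ Gy_{10}),\quad
\alpha/\beta = 3~\mathrm{Gy}\ (\text{late-responding tissues},\ Gy_{3})

% Cumulative BED at a reference point sums the two treatment components:
\mathrm{BED}_{\mathrm{total}} = \mathrm{BED}_{\mathrm{EBRT}} + \mathrm{BED}_{\mathrm{HDR\text{-}ICBT}}
```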