• Title/Summary/Keyword: Real Time Prediction


Combined analysis of meteorological and hydrological drought for hydrological drought prediction and early response - Focussing on the 2022-23 drought in the Jeollanam-do - (수문학적 가뭄 예측과 조기대응을 위한 기상-수문학적 가뭄의 연계분석 - 2022~23 전남지역 가뭄을 대상으로)

  • Jeong, Minsu;Hong, Seok-Jae;Kim, Young-Jun;Yoon, Hyeon-Cheol;Lee, Joo-Heon
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.195-207
    • /
    • 2024
  • This study selected major drought events that occurred in the Jeonnam region from 1991 to 2023 and examined the occurrence mechanisms of both meteorological and hydrological drought. Daily drought indices were calculated using rainfall and dam storage as input data, and the propagation characteristics from meteorological to hydrological drought were analyzed. The characteristics of the 2022-23 drought, which recently occurred in the Jeonnam region and caused serious damage, were then evaluated. Compared with historical droughts, the 2022-2023 hydrological drought lasted 334 days, the second longest after 2017-2018, and its severity of -1.76 was evaluated as the most severe. A linked analysis of the SPI (Standardized Precipitation Index) and SRSI (Standardized Reservoir Storage Index) suggests that SPI(6) can be used proactively to respond to hydrological drought. Furthermore, the similarity between SRSI and SPI(12) in long-term drought monitoring confirms the applicability of SPI(12) to hydrological drought monitoring in ungauged basins. The study also confirmed that prolonged dryness during the summer rainy season can develop into a serious hydrological drought. Therefore, for preemptive drought response, it is necessary to use real-time monitoring results from various drought indices and to understand the propagation from meteorological to agricultural to hydrological drought in order to secure a sufficient drought response period.
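
A minimal sketch (not the authors' code) of how a standardized drought index such as SPI(n) can be computed from a monthly precipitation series; an SRSI could be derived analogously from dam storage. The gamma-fitting step, clipping bounds, and variable names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats

def spi(monthly_precip: pd.Series, n: int = 6) -> pd.Series:
    """Simplified SPI(n): n-month accumulation -> gamma fit -> standard normal."""
    accum = monthly_precip.rolling(n).sum().dropna()            # n-month accumulated rainfall
    shape, loc, scale = stats.gamma.fit(accum, floc=0)          # fit a gamma distribution
    cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)   # non-exceedance probability
    z = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))            # map to the standard normal scale
    return pd.Series(z, index=accum.index, name=f"SPI({n})")    # negative values indicate drought
```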

Identification of relevant differential genes to the divergent development of pectoral muscle in ducks by transcriptomic analysis

  • Fan Li;Zongliang He;Yinglin Lu;Jing Zhou;Heng Cao;Xingyu Zhang;Hongjie Ji;Kunpeng Lv;Debing Yu;Minli Yu
    • Animal Bioscience
    • /
    • v.37 no.8
    • /
    • pp.1345-1354
    • /
    • 2024
  • Objective: The objective of this study was to identify candidate genes that play important roles in skeletal muscle development in ducks. Methods: We performed transcriptome sequencing of embryonic pectoral muscles from two specialized hybrid lines of Liancheng white ducks (female) and Cherry Valley ducks (male), Line A (LCA) and Line C (LCC). In addition, target genes of the differentially expressed mRNAs were predicted, and the enriched gene ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathways were further analyzed. Finally, a protein-protein interaction network was built from the target genes to gain insight into their potential functional associations. Results: A total of 1,428 differentially expressed genes (DEGs) were identified by RNA-seq in the pectoral muscle of LCA and LCC ducks (p<0.05), with 762 up-regulated and 666 down-regulated. Meanwhile, 23 GO terms among the down-regulated genes and 75 GO terms among the up-regulated genes were significantly enriched (p<0.05). Furthermore, the top 5 most enriched pathways were ECM-receptor interaction, fatty acid degradation, pyruvate degradation, the PPAR signaling pathway, and glycolysis/gluconeogenesis. Finally, the candidate genes showing the largest expression differences, including integrin b3 (Itgb3), pyruvate kinase M1/2 (Pkm), insulin-like growth factor 1 (Igf1), glucose-6-phosphate isomerase (Gpi), GABA type A receptor-associated protein-like 1 (Gabarapl1), and thyroid hormone receptor beta (Thrb), were selected for verification by quantitative real-time polymerase chain reaction (qRT-PCR). The qRT-PCR results were consistent with the transcriptome sequencing. Conclusion: This study provides information on the molecular mechanisms underlying the developmental differences in skeletal muscle between specialized duck lines.
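
A hypothetical illustration of the DEG-splitting step reported above (significant genes at p < 0.05 divided into up- and down-regulated sets); the file name, column names, and sign convention are assumptions, not part of the study's pipeline.

```python
import pandas as pd

# Columns assumed: gene, log2FoldChange, pvalue (e.g., an exported DESeq2/edgeR table).
res = pd.read_csv("lca_vs_lcc_deg_results.csv")
sig = res[res["pvalue"] < 0.05]                       # DEGs at p < 0.05
up = sig[sig["log2FoldChange"] > 0]                   # up-regulated (assumed sign convention)
down = sig[sig["log2FoldChange"] < 0]                 # down-regulated
print(len(sig), len(up), len(down))                   # abstract reports 1,428 / 762 / 666
```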

COVID-19 Surveillance using Wastewater-based Epidemiology in Ulsan (울산지역 하수기반역학을 이용한 코로나19 감시 연구)

  • Gyeongnam Kim;Jaesun Choi;Yeon-Su Lee;Dae-Kyo Kim;Junyoung Park;Young-Min Kim;Youngsun Choi
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.3
    • /
    • pp.260-265
    • /
    • 2024
  • During the coronavirus disease 2019 (COVID-19) pandemic, wastewater-based epidemiology was used for surveying infectious diseases. In this study, wastewater surveillance was employed to monitor COVID-19 outbreaks. Wastewater influent samples were collected from four sewage treatment plants in Ulsan (Gulhwa, Yongyeon, Nongso, and Bangeojin) between August 2022 and August 2023. The samples were concentrated using the polyethylene glycol-sodium chloride pretreatment method. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA was extracted and detected using real-time polymerase chain reaction. Correlation analysis was performed between SARS-CoV-2 concentrations and COVID-19 cases, and next-generation sequencing was used for COVID-19 variant analysis. A strong correlation was observed between SARS-CoV-2 concentrations and COVID-19 cases (correlation coefficient, r = 0.914). The variant analysis results were similar to the clinical variant genomes of the three epidemic waves during the study period. In conclusion, monitoring COVID-19 by analyzing wastewater facilitates early recognition and prediction of epidemics.
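
A brief, hypothetical sketch of the kind of correlation analysis described above, relating wastewater SARS-CoV-2 concentrations to reported COVID-19 cases; the file and column names are assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("ulsan_wastewater_weekly.csv")    # columns assumed: week, viral_load, cases
r, p = pearsonr(df["viral_load"], df["cases"])     # linear correlation between the two series
print(f"Pearson r = {r:.3f} (p = {p:.3g})")        # the abstract reports r = 0.914
```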

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.6
    • /
    • pp.185-196
    • /
    • 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service) comparable to that of energy-unaware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests remain ON. Previous studies on energy-aware server clusters have focused on further reducing power consumption or on keeping QoS, but they have not adequately considered energy efficiency. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. The method repeats the following procedure for adjusting the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload pattern type in a predetermined way. Second, it searches a learning table to check whether learning has been performed for the classified workload pattern type in the past. If so, it uses the already-stored parameters; otherwise, it performs learning for the classified workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts the server power modes with those parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the number of good responses per unit of power consumed with the proposed method is 99.8%, 107.5%, and 141.8% of that with the existing static method, and 102.0%, 107.0%, and 106.8% of that with the existing prediction method, for the banking, real, and virtual load patterns, respectively.
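
The control loop described above can be summarized in code. The sketch below is an interpretation of the three-step procedure in the abstract, not the authors' implementation; the classification rule, parameter structure, and function names are placeholders.

```python
learning_table = {}   # workload pattern type -> best-known control parameters

def classify_pattern(load, traffic):
    """Step 1: map current load/traffic to a predetermined workload pattern type."""
    return "high" if load > 0.7 else "low"                   # placeholder classification rule

def learn_parameters(pattern):
    """Step 2 (on a miss): search for parameters maximizing responses per unit power."""
    return {"on_servers": 8 if pattern == "high" else 3}     # placeholder learning result

def adjust_server_power_modes(params):
    """Step 3: switch servers ON/OFF according to the chosen parameters."""
    print("keeping", params["on_servers"], "servers ON")

def control_step(load, traffic):
    pattern = classify_pattern(load, traffic)
    if pattern not in learning_table:                        # learn only on first encounter
        learning_table[pattern] = learn_parameters(pattern)
    adjust_server_power_modes(learning_table[pattern])

control_step(load=0.85, traffic="bursty")                    # example invocation
```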

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.125-141
    • /
    • 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it has become more important to handle these attacks appropriately, and there is strong interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, although they perform very well in normal situations, they cannot handle new or unknown attack patterns. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can proactively respond to unknown threats. For a long time, researchers have adopted and tested various artificial intelligence techniques such as artificial neural networks, decision trees, and support vector machines to detect intrusions on the network. However, most studies have applied these techniques individually, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four different binary classification models, logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to each other. As a tool for finding the optimal combining weights, genetic algorithms (GA) are used. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (i.e., misclassification rate) is generated. In the second step, the model explores the optimal classification threshold for determining intrusions, the one that minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, we need to understand its asymmetric error cost scheme. Generally, there are two common types of error in intrusion detection. The first is the False-Positive Error (FPE), in which a wrong judgment may result in unnecessary remediation effort. The second is the False-Negative Error (FNE), which misjudges malicious activity as normal. Compared with FPE, FNE is more serious, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental data were collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with the results from the single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN was run using Neuroshell R4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used. Empirical results showed that our proposed GA-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that the proposed model outperformed all the other comparative models from the total misclassification cost perspective. Consequently, this study is expected to contribute to building cost-effective intelligent intrusion detection systems.
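
A compact, hypothetical sketch of the two-step idea described above: first find combining weights for the four classifiers' predicted probabilities, then pick a classification threshold that minimizes an asymmetric misclassification cost. For brevity, SciPy's differential evolution (an evolutionary optimizer) stands in for the authors' genetic algorithm, and the cost ratio and names are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# probs: (n_samples, 4) predicted intrusion probabilities from LOGIT, DT, ANN, SVM
# y:     (n_samples,)  true labels, 1 = intrusion, 0 = normal

def ensemble(weights, probs):
    w = np.asarray(weights) / np.sum(weights)           # normalize the combining weights
    return probs @ w                                    # weighted-average probability

def error_rate(weights, probs, y):
    return np.mean((ensemble(weights, probs) >= 0.5).astype(int) != y)

def total_cost(thr, weights, probs, y, c_fne=10.0, c_fpe=1.0):
    pred = (ensemble(weights, probs) >= thr).astype(int)
    fne = np.sum((pred == 0) & (y == 1))                # missed intrusions (more costly)
    fpe = np.sum((pred == 1) & (y == 0))                # false alarms
    return c_fne * fne + c_fpe * fpe

def fit(probs, y):
    # Step 1: combining weights that minimize the misclassification rate
    res = differential_evolution(error_rate, [(0.01, 1.0)] * 4, args=(probs, y), seed=0)
    # Step 2: classification threshold that minimizes the asymmetric total cost
    grid = np.linspace(0.01, 0.99, 99)
    best_thr = grid[int(np.argmin([total_cost(t, res.x, probs, y) for t in grid]))]
    return res.x, best_thr
```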

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics and business insight, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, many of these methods are technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB2DB interface, or content purchasing. The second phase is pre-processing to generate useful material for meaningful analysis. If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights; natural language processing techniques should be applied to clean the social media data. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis, and there are also various applications such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results of the analysis and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market share and has kept the No. 1 position in the Korean ramen business for several decades.
We collected a total of 11,869 pieces of content, including blog posts, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation. In this phase, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, and valence tree maps, providing vivid, full-colored examples built with open-source R packages. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, where color density indicates intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the 'big picture' business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
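
A condensed, hypothetical sketch of the collect, qualify, and analyze steps of the four-phase pipeline described above, written in Python rather than with the R packages named in the abstract; the file name, column names, and toy lexicon are assumptions. The resulting daily buzz volume and mean sentiment could then feed visualizations such as the heat maps and valence tree maps mentioned in the abstract.

```python
import re
import pandas as pd

posts = pd.read_csv("ramen_social_posts.csv")                 # columns assumed: date, text
lexicon = {"맛있다": 1, "좋다": 1, "별로": -1, "실망": -1}        # toy domain-specific lexicon

def clean(text: str) -> str:
    """Qualifying phase: strip URLs and punctuation before analysis."""
    return re.sub(r"http\S+|[^\w\s]", " ", str(text))

def score(text: str) -> int:
    """Analyzing phase: simple lexicon-based sentiment score."""
    return sum(weight for word, weight in lexicon.items() if word in text)

posts["sentiment"] = posts["text"].map(clean).map(score)
daily = posts.groupby("date")["sentiment"].agg(buzz="count", tone="mean")
print(daily.head())                                           # buzz volume and tone per day
```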

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of various smart devices, users have become able to communicate in real time, and the existing communication style has changed. As the Internet has shifted who produces information, the amount of data has grown enormously, giving rise to what is called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in many places, such as newspapers, books, and the web, it is diverse and abundant, which makes it suitable for understanding social reality. In recent years, there have been an increasing number of attempts to analyze texts from the web, such as SNS and blogs, where the public can communicate freely. This is recognized as a useful way to grasp public opinion immediately, so it can be used for research on political, social, and cultural issues. Text mining has received much attention as a way to investigate candidates' reputations among the public and to predict voting rates in place of opinion polls, because many people question the credibility of surveys, and people tend to refuse to respond or to conceal their real intentions when asked to take part in a poll. This study collected comments from the largest Internet portal site in Korea and examined the 19th Korean presidential election in 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a period that includes the ban on publishing opinion polls just prior to election day. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Frequency analysis identified the words representing the most important issues each day; in particular, following the presidential debates, the candidate who became an issue appeared at the top of the frequency ranking. The analysis of associated emotional words allowed us to identify the issues most relevant to each candidate, and topic emotion analysis was used to identify each candidate's topics and to express the public's emotions about them. Finally, we estimated the voting rate by combining comment volume and sentiment score. In this way, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking presidential candidates' issues and for predicting the voting rate. In particular, this study produced daily issues and a quantitative sentiment index, and its predicted voting rates precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in their election strategy, making more active use of positive issues and trying to correct negative ones; in particular, candidates should be aware that a moral problem can severely damage their reputation. Voters can objectively examine the issues and public opinion about each candidate and make more informed decisions when voting. If they refer to results such as these before voting, they will be able to see public opinion drawn from big data and vote from a more objective perspective. If candidates run their campaigns with reference to big data analysis, the public will become more active on the web, recognizing that their wishes are being reflected; political views can then be expressed in various online venues, which can contribute to greater political participation.
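
A toy illustration of estimating vote shares by combining comment volume with a sentiment score, in the spirit of the approach described above; the combination rule and all numbers are assumptions, not the authors' formula.

```python
candidates = {
    # candidate: (comment volume, mean sentiment score in [-1, 1]) -- toy values
    "A": (90_000, 0.15),
    "B": (60_000, 0.05),
    "C": (40_000, -0.10),
}

def support(volume: float, sentiment: float) -> float:
    return volume * (1 + sentiment)                     # buzz scaled by tone (assumed rule)

raw = {name: support(v, s) for name, (v, s) in candidates.items()}
total = sum(raw.values())
shares = {name: 100 * x / total for name, x in raw.items()}
print({name: f"{share:.1f}%" for name, share in shares.items()})
```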

Development of a TBM Advance Rate Model and Its Field Application Based on Full-Scale Shield TBM Tunneling Tests in 70 MPa of Artificial Rock Mass (70 MPa급 인공암반 내 실대형 쉴드TBM 굴진실험을 통한 굴진율 모델 및 활용방안 제안)

  • Kim, Jungjoo;Kim, Kyoungyul;Ryu, Heehwan;Hwan, Jung Ju;Hong, Sungyun;Jo, Seonah;Bae, Dusan
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.6 no.3
    • /
    • pp.305-313
    • /
    • 2020
  • The use of cable tunnels for electric power transmission, as well as their construction in difficult conditions such as subsea terrain and areas of large overburden, has increased. To operate small-diameter shield TBMs (Tunnel Boring Machines) efficiently, estimation of the advance rate and development of a machine design model are therefore necessary. However, because of the limited scope of site investigation and face mapping, it is very difficult to match rock mass characteristics to TBM operational data, establish their mutual relationships, and develop an advance rate model. In addition, the working mechanism of the previously used linear cutting machine differs slightly from the real excavation mechanism, in which many disc cutters penetrate the rock mass simultaneously as the cutterhead rotates. To propose advance rate and machine design models for small-diameter TBMs, an EPB (Earth Pressure Balance) shield TBM with a 3.54 m diameter cutterhead was therefore manufactured, and 19 full-scale tunneling tests were performed, each in an 87.5 ㎥ volume of artificial rock mass. The relationships between advance rate and machine data were analyzed by performing the tests in a homogeneous rock mass with 70 MPa uniaxial compressive strength while varying TBM operational parameters such as thrust force and cutterhead RPM. Using the recorded penetration depth and torque values to develop the models is more accurate and realistic, since they were obtained through the real excavation mechanism. Relationships between the normal force on a single disc cutter and penetration depth, and between normal force and rolling force, are suggested in this study; using these relationships, advance rate prediction and TBM design can be performed for rock mass with 70 MPa strength. To improve the applicability of the developed model, the FPI (Field Penetration Index) concept was applied, which can overcome the limitation that the artificial rock mass has 100% RQD (Rock Quality Designation).
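
A hedged sketch of the kind of back-calculation the FPI concept supports. FPI is commonly taken as the average cutter normal force divided by the penetration per cutterhead revolution, so a known FPI lets penetration, and hence advance rate, be estimated from thrust and RPM; the function name and all numbers below are placeholders, not values from the paper.

```python
def advance_rate_mm_per_min(thrust_kN: float, n_cutters: int,
                            fpi_kN_per_mm: float, rpm: float) -> float:
    force_per_cutter = thrust_kN / n_cutters            # kN per disc cutter
    penetration = force_per_cutter / fpi_kN_per_mm      # mm per cutterhead revolution
    return penetration * rpm                            # mm of advance per minute

print(advance_rate_mm_per_min(thrust_kN=6000, n_cutters=24, fpi_kN_per_mm=40, rpm=3))
```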

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning them to predefined categories such as positive and negative. It has been studied in various directions, in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly but also affect business. In marketing, real-world information from customers is gathered from websites rather than surveys; whether posts on a website are positive or negative is reflected in sales as customer response, so firms try to identify this information. However, many reviews on a website are not always good and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon orientation, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, popular machine learning algorithms related to sentiment analysis, such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting, are adopted as comparative models for text classification. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector form, but it does not consider the sequential nature of the data. RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to address this. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand how well, and why, the models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can extract classification features automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can address the long-term dependency problem. Furthermore, when the LSTM is connected to the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
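
A minimal Keras sketch of a CNN-LSTM sentiment classifier of the kind described above (an embedding layer, a convolution and pooling stage feeding an LSTM, then a sigmoid output); the layer sizes, vocabulary size, and sequence length are illustrative assumptions, not the authors' settings.

```python
from tensorflow.keras import layers, models

vocab_size, seq_len = 10_000, 200                      # assumed vocabulary and review length
model = models.Sequential([
    layers.Input(shape=(seq_len,)),                    # integer-encoded review
    layers.Embedding(vocab_size, 128),                 # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),           # local n-gram feature extraction
    layers.MaxPooling1D(2),                            # downsample before the LSTM
    layers.LSTM(64),                                   # sequential / long-range features
    layers.Dense(1, activation="sigmoid"),             # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, batch_size=64)
```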

A Study on Mixed-Mode Survey which Combine the Landline and Mobile Telephone Interviews: The Case of Special Election for the Mayor of Seoul (유.무선전화 병행조사에 대한 연구: 2011년 서울시장 보궐선거 여론조사 사례)

  • Lee, Kyoung-Taeg;Lee, Hwa-Jeong;Hyun, Kyung-Bo
    • Survey Research
    • /
    • v.13 no.1
    • /
    • pp.135-158
    • /
    • 2012
  • Korean telephone surveys have been based on landline telephone directories or the RDD (Random Digit Dialing) method. These days, however, the number of households with no landline, or with a landline not registered in the directory, has increased. Moreover, it is hard to contact young people or office workers, who usually stay away from home during the daytime. Because of these issues, the predictive power of election polls has weakened; in particular, low accessibility to those who are away from home when a poll is conducted results in predictions biased toward conservatism. A solution to this problem is to contact respondents using both landline and mobile phones: landline phones for those at home and mobile phones for those away from home during the daytime (Mixed-Mode Survey, hereafter MMS). To conduct an MMS, 1) we need to obtain sampling frames for the landline and mobile surveys, and 2) we need to decide the proportion of the sample allocated to each. In this paper, we propose a heuristic method for conducting an MMS. The method uses RDD for the landline phone survey and an access panel list for the mobile phone survey. The proportion of sample sizes between landline and mobile phones is determined based on the 'Lifestyle and Time Use Study' conducted by Statistics Korea. As a case study, four election polls were conducted in the run-up to the special election for the mayor of Seoul on October 26, 2011. The initial three polls appropriately captured reactions and responses to the issues raised during the survey period, and the final poll produced a prediction very close to the actual election result.
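
A toy illustration of splitting the total sample between the landline and mobile frames according to the share of people at home versus away during the daytime (the kind of quantity a time-use survey can provide); the proportions below are placeholders, not the figures used in the paper.

```python
def allocate(total_n: int, share_at_home_daytime: float) -> tuple[int, int]:
    landline_n = round(total_n * share_at_home_daytime)   # reached via landline at home
    mobile_n = total_n - landline_n                       # reached via mobile while out
    return landline_n, mobile_n

print(allocate(total_n=1000, share_at_home_daytime=0.42))  # -> (420, 580)
```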
