• Title/Summary/Keyword: real-time prediction

Search Result 1,224, Processing Time 0.033 seconds

Identification and functional prediction of long non-coding RNAs related to oxidative stress in the jejunum of piglets

  • Jinbao Li;Jianmin Zhang;Xinlin Jin;Shiyin Li;Yingbin Du;Yongqing Zeng;Jin Wang;Wei Chen
    • Animal Bioscience
    • /
    • v.37 no.2
    • /
    • pp.193-202
    • /
    • 2024
  • Objective: Oxidative stress (OS) is a pathological process arising from the excessive production of free radicals in the body. It has the potential to alter animal gene expression and cause damage to the jejunum. However, there have been few reports of changes in the expression of long non-coding RNAs (lncRNAs) in the jejunum of piglets under OS. The purpose of this research was to examine how lncRNAs in the piglet jejunum change under OS. Methods: The abdominal cavities of piglets were injected with diquat (DQ) to induce OS. Raw reads were downloaded from the SRA database, and RNA-seq was used to study the expression of lncRNAs in piglets under OS. Additionally, six randomly selected lncRNAs were verified using quantitative real-time polymerase chain reaction (qRT-PCR) to examine the mechanism of oxidative damage. Results: A total of 79 lncRNAs were differentially expressed (DE) in the treatment group compared to the negative control group. The target genes of the DE lncRNAs were enriched in gene ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathways. Chemical carcinogenesis - reactive oxygen species, the FoxO signaling pathway, colorectal cancer, and the AMPK signaling pathway were all linked to OS. Conclusion: Our results demonstrate that DQ-induced OS causes differential expression of lncRNAs, laying the groundwork for future research into the processes involved in the jejunum's response to OS.
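The DE screening step described in the abstract can be illustrated with a minimal fold-change filter. This is a generic sketch, not the paper's actual pipeline (which would also involve read alignment, normalization, and statistical testing); the expression values and cutoff below are hypothetical.

```python
import math

def log2_fold_change(treated: float, control: float, pseudo: float = 1.0) -> float:
    """Log2 fold change with a pseudo-count to avoid division by zero."""
    return math.log2((treated + pseudo) / (control + pseudo))

def is_differentially_expressed(treated: float, control: float,
                                lfc_cutoff: float = 1.0) -> bool:
    """Flag a transcript as DE when |log2 fold change| exceeds the cutoff.
    (Real pipelines also require a significance test; omitted here.)"""
    return abs(log2_fold_change(treated, control)) >= lfc_cutoff

# Hypothetical normalized expression values for two lncRNAs
strong_up = is_differentially_expressed(120.0, 25.0)   # clear up-regulation
unchanged = is_differentially_expressed(30.0, 28.0)    # little change
```

A tool such as DESeq2 or edgeR would replace this filter in practice, adding dispersion estimation and adjusted p-values.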

Combined analysis of meteorological and hydrological drought for hydrological drought prediction and early response - Focusing on the 2022-23 drought in Jeollanam-do - (수문학적 가뭄 예측과 조기대응을 위한 기상-수문학적 가뭄의 연계분석 - 2022~23 전남지역 가뭄을 대상으로)

  • Jeong, Minsu;Hong, Seok-Jae;Kim, Young-Jun;Yoon, Hyeon-Cheol;Lee, Joo-Heon
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.195-207
    • /
    • 2024
  • This study selected major drought events that occurred in the Jeonnam region from 1991 to 2023 and examined both meteorological and hydrological drought occurrence mechanisms. A daily drought index was calculated using rainfall and dam storage as input data, and the propagation characteristics from meteorological drought to hydrological drought were analyzed. The characteristics of the 2022-23 drought, which recently occurred in the Jeonnam region and caused serious damage, were evaluated. Compared to historical droughts, the hydrological drought of 2022-23 lasted 334 days, the second longest after 2017-18, and its severity, at -1.76, was evaluated as the most severe. A linked analysis of the SPI (Standardized Precipitation Index) and the SRSI (Standardized Reservoir Storage Index) suggests that SPI(6) can be used proactively to respond to hydrological drought. Furthermore, by confirming the similarity between SRSI and SPI(12) in long-term drought monitoring, the applicability of SPI(12) to hydrological drought monitoring in ungauged basins was also confirmed. This study confirmed that long-term dryness occurring during the summer rainy season can transition into a serious hydrological drought. Therefore, for a preemptive drought response, it is necessary to use real-time monitoring results from various drought indices and to understand the propagation from meteorological to agricultural to hydrological drought in order to secure a sufficient response period.
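The standardized indices above (SPI, SRSI) share a common idea: express the current accumulated value as a departure from its long-term distribution. Below is a minimal sketch using a plain z-score in place of the gamma-distribution fitting that the operational SPI uses; the rainfall totals are hypothetical.

```python
from statistics import mean, stdev

def standardized_index(series: list[float]) -> list[float]:
    """Simplified standardized index: z-scores of an accumulated series.
    (The operational SPI fits a gamma distribution first; this sketch
    uses a plain z-score purely to illustrate the idea.)"""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

# Hypothetical 6-month accumulated rainfall totals (mm) for six years
rain6 = [820.0, 640.0, 910.0, 450.0, 700.0, 380.0]
spi6 = standardized_index(rain6)
# Values below about -1.0 would indicate moderate-to-severe drought;
# the driest year here falls well below that line.
```

The same transformation applied to reservoir storage instead of rainfall yields an SRSI-style index, which is what allows the two to be compared on a common scale.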

COVID-19 Surveillance using Wastewater-based Epidemiology in Ulsan (울산지역 하수기반역학을 이용한 코로나19 감시 연구)

  • Gyeongnam Kim;Jaesun Choi;Yeon-Su Lee;Dae-Kyo Kim;Junyoung Park;Young-Min Kim;Youngsun Choi
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.3
    • /
    • pp.260-265
    • /
    • 2024
  • During the coronavirus disease 2019 (COVID-19) pandemic, wastewater-based epidemiology was used to survey infectious diseases. In this study, wastewater surveillance was employed to monitor COVID-19 outbreaks. Wastewater influent samples were collected from four sewage treatment plants in Ulsan (Gulhwa, Yongyeon, Nongso, and Bangeojin) between August 2022 and August 2023. The samples were concentrated using the polyethylene glycol-sodium chloride pretreatment method. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA was extracted and detected using real-time polymerase chain reaction. Next-generation sequencing was used for COVID-19 variant analysis, and correlation analysis was performed between SARS-CoV-2 concentrations and COVID-19 cases. A strong correlation was observed between SARS-CoV-2 concentrations and COVID-19 cases (correlation coefficient, r = 0.914). The variant analysis results were similar to the clinical variant genomes of the three epidemics during the study period. In conclusion, monitoring COVID-19 by analyzing wastewater facilitates early recognition and prediction of epidemics.
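The reported r = 0.914 is a Pearson correlation coefficient between viral concentrations and case counts. A self-contained sketch of that computation, using hypothetical weekly values rather than the study's data:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly values: viral RNA copies/mL vs. reported cases
rna = [1200.0, 3400.0, 8900.0, 15000.0, 9800.0, 4100.0]
cases = [150.0, 420.0, 1100.0, 1900.0, 1250.0, 560.0]
r = pearson_r(rna, cases)  # strongly positive for co-moving series
```

In practice one would use `scipy.stats.pearsonr`, which also returns a p-value; the hand-rolled version above only shows what the coefficient measures.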

High-Quality Standard Data-Based Pharmacovigilance System for Privacy and Personalization (프라이버시와 개인화를 위한 고품질 표준 데이터 기반 약물감시 시스템 연구)

  • SeMo Yang;InSeo Song;KangYoon Lee
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.125-131
    • /
    • 2023
  • Globally, drug side effects rank among the top causes of death. To respond effectively to these adverse drug reactions, a shift toward an active, real-time monitoring system, along with the standardization and quality improvement of data, is necessary. Integrating individual institutions' data and utilizing large-scale data to enhance the accuracy of drug side effect predictions is critical. However, data sharing between institutions poses privacy concerns and involves varying data standards. To address this issue, our research adopts a federated learning approach: in compliance with privacy regulations, data is not shared directly; rather, only the results of model training are shared. We employ the Common Data Model (CDM) to standardize different data formats, ensuring accuracy and consistency of data. Additionally, we propose a drug monitoring system that enhances security and scalability management through a cloud-based federated learning environment. This system allows effective monitoring and prediction of drug side effects while protecting the privacy of data shared between hospitals. The goal is to reduce mortality due to drug side effects and to cut medical costs, and we explore various technical approaches and methodologies to achieve this.
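The federated learning idea described above can be sketched with a FedAvg-style aggregation step: each institution trains locally and shares only model parameters, which a coordinator averages weighted by local data size. This is a generic illustration with hypothetical numbers, not the paper's actual system.

```python
def federated_average(client_weights: list[list[float]],
                      client_sizes: list[int]) -> list[float]:
    """FedAvg-style aggregation: average each client's model parameters
    weighted by its local data size. Raw patient records never leave
    the client; only the parameter vectors are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three hypothetical hospitals train locally and share only parameters
hospital_models = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
hospital_sizes = [100, 100, 200]
global_model = federated_average(hospital_models, hospital_sizes)
```

In a real deployment the averaged model would be redistributed to the hospitals for another local training round, with the CDM ensuring each site's features line up.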

In a Time of Change: Reflections on Humanities Research and Methodologies (변화의 시대, 인문학적 변화 연구와 방법에 대한 고찰)

  • Kim Dug-sam
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.49
    • /
    • pp.265-294
    • /
    • 2024
  • This study begins with a question about research methods in the humanities. It is grounded in the humanities, focusing on the changes that have brought both light and darkness to the field and on the discourse regarding research methods that explore those changes. If the role of the humanities, unlike the sciences, is to prevent the proverbial "gray rhino," and if the humanities have a role to play in moderating the uncontrollable development of the sciences, what kind of research methods should the humanities pursue? Furthermore, what kind of research methods should be pursued in the humanities in line with the development of the sciences and the changing environment? This study discusses research methods in the humanities as follows. First, in Section 2, I advocate collaboration between the humanities and scientific methods, utilizing the accumulated assets produced by the humanities while continuously introducing scientific methods. Prediction of change is highly precise and far-reaching in engineering and the natural sciences. However, it is difficult to approach change in these fields in a macro or integrated manner, and because such macro approaches are not precise, they are not welcomed in disciplines that deal with the real world. This is primarily the responsibility of the humanities: where science focuses on precision, the humanities focus on questions of essence. This is because, while the ends of change have varied throughout history, the nature of change has not varied that much. Section 3 then discusses the changing environment, makes proposals for changes to humanistic research methods, reviews and proposes inductive change-research methods, and offers some suggestions for humanistic change research. The data produced by the humanities and accumulated by humankind in the past is abundant and has a wide range of applications.
In the future, we should not only actively accept the results of scientific advances but also actively seek systematic humanistic approaches and utilize them across disciplinary boundaries to find solutions at the intersection of scientific methods and humanistic assets.

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.6
    • /
    • pp.185-196
    • /
    • 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while maintaining QoS (Quality of Service) compared to energy-unaware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests remain ON. Previous studies on energy-aware server clusters have tried to further reduce power consumption or to maintain QoS, but they do not fully consider energy efficiency. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. The method repeats the following procedure. First, according to the current load and traffic pattern, it classifies the current workload into a pattern type in a predetermined way. Second, it searches a learning table to check whether learning has been performed for that workload pattern type in the past; if so, it uses the already-stored parameters, and otherwise it performs learning for the classified pattern type to find the best parameters in terms of energy efficiency and stores them. Third, it adjusts the server power modes with these parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the number of good responses per unit of power consumed in the proposed method is 99.8%, 107.5%, and 141.8% of that in the existing static method, and 102.0%, 107.0%, and 106.8% of that in the existing prediction method, for the banking, real, and virtual load patterns, respectively.
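The learning-table procedure described above (classify the workload pattern, reuse stored parameters if present, otherwise learn and cache them) can be sketched as follows. The classification rule and parameter values are hypothetical placeholders, not the paper's actual scheme:

```python
def classify_pattern(load: float, variance: float) -> str:
    """Toy workload classifier: bucket current load and its variability."""
    level = "high" if load > 0.7 else "low"
    shape = "bursty" if variance > 0.1 else "steady"
    return f"{level}-{shape}"

def get_parameters(table: dict, pattern: str, learn) -> dict:
    """Reuse stored parameters for a known pattern; otherwise run the
    (expensive) learning procedure once and cache its result."""
    if pattern not in table:
        table[pattern] = learn(pattern)  # learning happens only once per pattern
    return table[pattern]

learning_table: dict = {}
# First encounter of this pattern triggers learning; the hypothetical
# "learned" parameter here is just a server count.
params = get_parameters(learning_table, classify_pattern(0.8, 0.05),
                        learn=lambda p: {"num_servers_on": 12})
```

On subsequent intervals with the same pattern, the cached entry is returned directly, which is what lets the cluster react without re-running the learning phase.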

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.125-141
    • /
    • 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it has become more important to handle these malicious attacks and hacks appropriately, and there is substantial interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, although they perform very well under normal conditions, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four binary classification models that may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the optimal integration model, whose prediction error (i.e., erroneous classification rate) is lowest, is generated.
In the second step, it explores the optimal classification threshold for determining intrusions, the one that minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, we need to understand its asymmetric error-cost scheme. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is wrongly judged to be an intrusion, resulting in unnecessary remediation. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more damaging, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with those from the single techniques to confirm the superiority of the proposed model. LOGIT and DT were tested using PASW Statistics v18.0, and ANN was tested using Neuroshell R4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used. Empirical results showed that our proposed GA-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective, and also outperformed them from the total-misclassification-cost perspective. Consequently, we expect that our study may contribute to building cost-effective intelligent intrusion detection systems.
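The two steps can be sketched as a weighted combination of classifier outputs followed by a cost-sensitive threshold evaluation. The weights below stand in for what the GA would search over, and the 10:1 cost ratio is illustrative; the paper's actual cost scheme is not given in the abstract.

```python
def combined_score(probs: list[float], weights: list[float]) -> float:
    """Weighted average of the four classifiers' intrusion probabilities
    (the weights are the quantities a GA would optimize in step one)."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def total_cost(labels: list[int], scores: list[float], threshold: float,
               cost_fpe: float = 1.0, cost_fne: float = 10.0) -> float:
    """Asymmetric misclassification cost: a missed intrusion (FNE)
    is weighted far more heavily than a false alarm (FPE)."""
    cost = 0.0
    for y, s in zip(labels, scores):
        pred = 1 if s >= threshold else 0
        if pred == 1 and y == 0:
            cost += cost_fpe   # false positive: unnecessary remediation
        elif pred == 0 and y == 1:
            cost += cost_fne   # false negative: missed intrusion
    return cost

# Hypothetical outputs from LOGIT, DT, ANN, SVM for one connection
score = combined_score([0.9, 0.8, 0.7, 0.95], [0.3, 0.2, 0.2, 0.3])
# Step two would sweep `threshold` to minimize total_cost on a validation set
cost = total_cost([1, 0, 1], [0.2, 0.9, 0.8], threshold=0.5)
```

Sweeping the threshold against `total_cost` naturally pushes the operating point toward fewer false negatives, which is the point of the asymmetric scheme.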

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics used to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis.
If we do not remove garbage data, results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase where the cleansed social media content set is to be analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trends analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of analysis results. The major focus and purpose of this phase are to explain results of analysis and help users to comprehend its meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of contents including blogs, forum contents and news articles. After collecting social media content data, we generated instant noodle business specific language resources for data manipulation and analysis using natural language processing. In addition, we tried to classify contents in more detail categories such as marketing features, environment, reputation, etc. 
In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, including domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-library software packages from the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, with the density of color indicating intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers who need to quickly understand the "big picture" of a business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
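The sentiment-scoring core of such a pipeline can be sketched with a tiny dictionary-based scorer. The study itself used Korean-language resources and R packages; the English lexicon and posts below are purely illustrative.

```python
def sentiment_score(text: str, lexicon: dict) -> int:
    """Sum the polarity values of lexicon words found in the text:
    a minimal stand-in for dictionary-based sentiment analysis."""
    return sum(v for w, v in lexicon.items() if w in text.lower())

# Hypothetical domain-specific lexicon for instant-noodle reviews
lexicon = {"delicious": 1, "spicy": 1, "bland": -1, "soggy": -1}
posts = ["So delicious and spicy!", "The noodles were soggy and bland."]
scores = [sentiment_score(p, lexicon) for p in posts]
```

Aggregating such scores by category and time period is exactly what feeds the heat maps and valence tree maps described above.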

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of various smart devices, users can now communicate in real time, and existing communication styles have changed. This shift of the information subject to the Internet has made data more massive, giving rise to the very large bodies of information called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in many places, such as newspapers, books, and the Web, the data is very diverse and large, making it suitable for understanding social reality. In recent years, there have been increasing attempts to analyze text from the Web, such as SNS posts and blogs, where the public can communicate freely. Text mining is recognized as a useful method for grasping public opinion immediately, so it can be used for research on political, social, and cultural issues. It has received much attention as a way to investigate the public's view of candidates and to predict voting rates in place of opinion polls, because many people question the credibility of surveys, and people tend to refuse to respond, or to conceal their real intentions, when asked to take part in a poll. This study collected comments from the largest Internet portal site in Korea for research on the 19th Korean presidential election of 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a period that includes the publication ban on opinion polls just prior to election day. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Through frequency analysis, we identified the words representing the most important issues each day. In particular, after each presidential debate, the candidate who became an issue appeared at the top of the frequency analysis.
Through the analysis of associated emotional words, we identified the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and to express the public's emotions about those topics. Finally, we estimated the voting rate by combining comment volume and sentiment score. In doing so, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues around presidential candidates and for predicting voting rates. In particular, this study produced daily issues and a quantitative index of sentiment, and its predicted voting rates precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in election strategy: candidates can use positive issues more actively in their campaigns and try to correct negative ones. In particular, candidates should be aware that they can suffer severe damage to their reputation if they face a moral problem. Voters can objectively examine the issues and public opinion about each candidate and make more informed decisions when voting. If they refer to results like these before voting, they will be able to see public opinion drawn from big data and vote from a more objective perspective. And if candidates campaign with reference to big data analysis, the public, recognizing that their wants are being reflected, will be more active on the Web; political views can then be expressed in various online venues, contributing to political participation by the people.
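The final estimation step, combining comment volume with sentiment score and normalizing to shares, can be sketched as follows. The combination rule and all numbers are hypothetical, since the abstract does not give the exact formula:

```python
def predicted_shares(volumes: list[float], sentiments: list[float]) -> list[float]:
    """Combine per-candidate comment volume with a 0..1 sentiment score,
    then normalize the products into percentage vote shares.
    A hypothetical combination rule for illustration only."""
    raw = [v * s for v, s in zip(volumes, sentiments)]
    total = sum(raw)
    return [100.0 * r / total for r in raw]

volumes = [90000.0, 60000.0, 40000.0, 25000.0, 11000.0]  # comments per candidate
sentiments = [0.62, 0.55, 0.58, 0.50, 0.45]              # mean positivity score
shares = predicted_shares(volumes, sentiments)
# The ordering of `shares` gives the predicted ranking of the candidates
```

Under any monotone combination rule like this one, getting the ranking right (as the study reports for its top five) requires only that the volume-sentiment products order the candidates correctly, not that the absolute percentages be exact.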

Development of a TBM Advance Rate Model and Its Field Application Based on Full-Scale Shield TBM Tunneling Tests in 70 MPa of Artificial Rock Mass (70 MPa급 인공암반 내 실대형 쉴드TBM 굴진실험을 통한 굴진율 모델 및 활용방안 제안)

  • Kim, Jungjoo;Kim, Kyoungyul;Ryu, Heehwan;Hwan, Jung Ju;Hong, Sungyun;Jo, Seonah;Bae, Dusan
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.6 no.3
    • /
    • pp.305-313
    • /
    • 2020
  • The use of cable tunnels for electric power transmission, as well as their construction in difficult conditions such as subsea terrains and areas of large overburden, has increased. To operate small-diameter shield TBMs (Tunnel Boring Machines) efficiently, the estimation of the advance rate and the development of a design model are necessary. However, due to the limited scope of surveys and face mapping, it is very difficult to match rock mass characteristics with TBM operational data, establish their mutual relationships, and develop an advance rate model. Moreover, the working mechanism of the previously used linear cutting machine differs slightly from the real excavation mechanism, in which a number of disc cutters penetrate the rock mass simultaneously as the cutterhead rotates. Therefore, to propose advance rate and machine design models for small-diameter TBMs, an EPB (Earth Pressure Balance) shield TBM with a 3.54 m diameter cutterhead was manufactured, and 19 full-scale tunneling tests were performed, each in an 87.5 ㎥ volume of artificial rock mass. The relationships between advance rate and machine data were analyzed by performing the tests in homogeneous rock mass with 70 MPa uniaxial compressive strength while varying TBM operational parameters such as thrust force and cutterhead RPM. Using the recorded penetration depth and torque values in the development of the models is more accurate and realistic, since they were derived from the real excavation mechanism. The relationships between the normal force on a single disc cutter and penetration depth, as well as between normal force and rolling force, are suggested in this study. Using these relationships, the advance rate can be predicted and the TBM designed for rock mass of 70 MPa strength.
An effort was also made to improve the applicability of the developed model by introducing the FPI (Field Penetration Index) concept, which can overcome the limitation of the 100% RQD (Rock Quality Designation) of the artificial rock mass.
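The quantities involved (penetration per revolution, cutter normal force, FPI, and advance rate) relate through simple definitions that can be sketched as follows; the operating values below are hypothetical, not the paper's measurements:

```python
def field_penetration_index(normal_force_kn: float, penetration_mm: float) -> float:
    """FPI: cutter normal force divided by penetration per cutterhead
    revolution (kN/cutter per mm/rev). Higher FPI means the rock is
    harder to excavate for a given thrust."""
    return normal_force_kn / penetration_mm

def advance_rate_m_per_hr(penetration_mm: float, rpm: float,
                          utilization: float = 1.0) -> float:
    """Advance rate from penetration per revolution and cutterhead RPM,
    scaled by machine utilization (fraction of time spent boring)."""
    return penetration_mm * rpm * 60.0 / 1000.0 * utilization

# Hypothetical operating point in 70 MPa rock
fpi = field_penetration_index(180.0, 6.0)    # kN per cutter per mm/rev
rate = advance_rate_m_per_hr(6.0, 3.0, 0.5)  # meters per hour at 50% utilization
```

Expressing field results as FPI lets an operating point measured in the 100% RQD artificial rock mass be adjusted for the jointing of a real rock mass, which is the adaptation the study proposes.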