• Title/Summary/Keyword: Context prediction

Search Results: 261

On Software Reliability Engineering Process for Weapon Systems (무기체계를 위한 소프트웨어의 신뢰성 공학 프로세스)

  • Kim, Ghi-Back;Lee, Jae-Chon
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.4B / pp.332-345 / 2011
  • As weapon systems evolve into more advanced and complex ones, the role of software in their development is becoming heavily significant. Particularly on the modern battlefield, as represented by network-centric warfare (NCW), the reliability of weapon systems is crucial. In this context, it is inevitable to develop software reliable enough to make weapon systems operate robustly in the combat field. The reliability engineering activities performed to develop software in the domestic area seem to be limited to software reliability estimation for some projects. To ensure that the target reliability of software is maintained throughout the system's development period, a more systematic approach to performing software reliability engineering activities is necessary from the beginning of development. In this paper, we consider software reliability in terms of the development of a weapon system as a whole. Thus, from the systems engineering point of view, we analyze the models and methods related to software reliability and a variety of associated activities. As a result, a process is developed, which can be called the software reliability engineering process for weapon systems (SREP-WS). The developed SREP-WS can be used in the development of a weapon system to meet a target reliability throughout its life-cycle. Based on the SREP-WS, software reliability can also be managed quantitatively.
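The abstract above refers to quantitative software reliability estimation without naming a specific model. One widely used choice in this area is the Goel-Okumoto NHPP software reliability growth model; the sketch below is a minimal illustration, not the process proposed in the paper, and the parameters `a` and `b` would in practice be fitted from observed failure data.

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative failures by time t under the Goel-Okumoto
    NHPP model: m(t) = a * (1 - exp(-b*t)).
    a = total expected failures, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure intensity lambda(t) = a * b * exp(-b*t)."""
    return a * b * math.exp(-b * t)

def reliability(x, t, a, b):
    """Probability of no failure in (t, t + x]:
    R(x|t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(goel_okumoto_mean(t + x, a, b)
                      - goel_okumoto_mean(t, a, b)))
```

Tracking `reliability(x, t, a, b)` over the development period is one way such a process could "manage software reliability quantitatively" against a target value.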

Study on Anomaly Detection Method of Improper Foods using Import Food Big data (수입식품 빅데이터를 이용한 부적합식품 탐지 시스템에 관한 연구)

  • Cho, Sanggoo;Choi, Gyunghyun
    • The Journal of Bigdata / v.3 no.2 / pp.19-33 / 2018
  • Owing to the increase in FTAs, food trade, and versatile consumer preferences, food imports have increased at a tremendous rate every year. While inspection covers about 20% of total food imports, the budget and manpower available for the government's import inspection control are reaching their limit. Sudden imported-food incidents can cause enormous social and economic losses, so a predictive system that forecasts the compliance of food imports and supports preemptive measures would greatly improve the efficiency and effectiveness of import safety management. A huge amount of data has already been accumulated, and processed foods account for 75% of total food imports. Big-data analysis and analytical techniques are used to extract meaningful information from such large amounts of data; unfortunately, not many studies have analyzed imported food and its implications using the big data of food imports. In this context, this study applied a variety of classification algorithms from the field of machine learning and suggested a data preprocessing method based on the generation of new derived variables to improve model accuracy. In addition, the present study compared the performance of the predictive classification algorithms with general base classifiers. Among the various base classifiers, the Gaussian Naïve Bayes prediction model showed the best performance in detecting and predicting the non-conformity of imported food. In the future, the anomaly detection model based on Gaussian Naïve Bayes is expected to be applied in practice. The predictive model will reduce the burden of import food inspection and increase the non-conformity detection rate, which will greatly improve the efficiency of food import safety control and the speed of import customs clearance.
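The abstract names Gaussian Naïve Bayes as the best-performing base classifier. As an illustration of that technique only — the features and data below are hypothetical, not the study's import-declaration variables — here is a from-scratch Gaussian NB:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate a class prior and per-feature Gaussian (mean, variance)
    for each class label from training rows X with labels y."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    model, n = {}, len(y)
    for label, rows in by_class.items():
        prior = len(rows) / n
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9  # variance smoothing
            stats.append((mu, var))
        model[label] = (prior, stats)
    return model

def predict_gaussian_nb(model, x):
    """Return the class maximizing log prior + sum of per-feature
    log Gaussian densities (the naive conditional-independence assumption)."""
    best, best_score = None, float("-inf")
    for label, (prior, stats) in model.items():
        score = math.log(prior)
        for xi, (mu, var) in zip(x, stats):
            score += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = label, score
    return best
```

Labeling "non-conforming" declarations as one class and conforming ones as the other reduces the detection task described above to exactly this two-class prediction.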

Ingroup's Apology For Past Wrongdoing Can Increase Outgroup Dehumanization (과거 잘못에 대한 집단 간 사과의 역설적 효과: 외집단 비인간화를 중심으로)

  • Hyeon Jeong Kim;Sang Hee Park
    • Korean Journal of Culture and Social Issue / v.25 no.1 / pp.79-99 / 2019
  • Apologies are used with increasing frequency to mend damaged relations between groups after intergroup conflict. Past research revealed that members of a perpetrator group may engage in (animalistic) dehumanization of victim group members to cope with the guilt and responsibility associated with the ingroup's past wrongdoing. We hypothesized that an ingroup's apology would relieve perpetrator group members of this moral threat, and would therefore make them perceive more humanness in the victim group members. The study was conducted in the context of South Korea's alleged atrocities against Vietnamese civilians during its military involvement in the Vietnam War. Korean participants read an article on the incidents, with the Korean government's issuance of an official apology manipulated, and reported their thoughts on the incidents and their perceptions of Vietnamese people, including their humanness. Contrary to our prediction, the apology further enhanced dehumanization of Vietnamese people, even while it also decreased dehumanization through heightened feelings of relief. This study documents a seemingly ironic effect of intergroup apology, and calls for a more careful examination of the consequences of apology before recommending it as a viable strategy for alleviating intergroup tensions.

A Development of Flood Mapping Accelerator Based on HEC-softwares (HEC 소프트웨어 기반 홍수범람지도 엑셀러레이터 개발)

  • Kim, JongChun;Hwang, Seokhwan;Jeong, Jongho
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.2 / pp.173-182 / 2024
  • Recently, there has been a trend toward primarily utilizing data-driven models employing artificial intelligence technologies, such as machine learning, for flood prediction. These data-driven models offer the advantage of reusing pre-training results, significantly reducing the required simulation time. However, a considerable amount of flood data is necessary for pre-training data-driven models, while the observed data available for application are often insufficient. As an alternative, validated simulation results from physically-based models are being employed as pre-training data alongside observed data. In this context, we developed a flood mapping accelerator to generate flood maps for pre-training. The proposed accelerator automates the entire process of flood mapping, i.e., estimating flood discharge using HEC-1, calculating water surface levels using HEC-RAS, and simulating channel overflow and generating flood maps using RAS Mapper. With the accelerator, users can easily prepare a database for pre-training data-driven models from hundreds to tens of thousands of rainfall scenarios. It provides various convenient menus through a graphical user interface (GUI), and its practical applicability has been validated across 26 test-beds.
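The automated chain described above (HEC-1 → HEC-RAS → RAS Mapper, once per rainfall scenario) amounts to a batch loop over scenarios. The sketch below is purely illustrative: the three helper functions are trivial placeholders standing in for the external HEC runs, and none of the names or formulas reflect the accelerator's actual API.

```python
# Placeholder stand-ins for the three external HEC steps; in the real
# accelerator each of these would drive HEC-1, HEC-RAS, and RAS Mapper.
def estimate_discharge(rainfall_mm):
    return 0.8 * rainfall_mm          # toy rainfall-runoff relation (HEC-1 step)

def water_surface_level(discharge):
    return discharge ** 0.5           # toy rating-curve relation (HEC-RAS step)

def overflow_depth(level, bank_height=5.0):
    return max(0.0, level - bank_height)  # toy overflow map value (RAS Mapper step)

def run_flood_mapping(rainfall_scenarios):
    """Run the full discharge -> water level -> overflow chain once per
    rainfall scenario, yielding one flood-map result per scenario --
    the batch structure a pre-training database generator needs."""
    results = []
    for rainfall in rainfall_scenarios:
        q = estimate_discharge(rainfall)
        h = water_surface_level(q)
        results.append(overflow_depth(h))
    return results
```

Scaling the scenario list to thousands of synthetic rainfall events is what turns this loop into a pre-training database for the data-driven models discussed above.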

Prediction of Correct Answer Rate and Identification of Significant Factors for CSAT English Test Based on Data Mining Techniques (데이터마이닝 기법을 활용한 대학수학능력시험 영어영역 정답률 예측 및 주요 요인 분석)

  • Park, Hee Jin;Jang, Kyoung Ye;Lee, Youn Ho;Kim, Woo Je;Kang, Pil Sung
    • KIPS Transactions on Software and Data Engineering / v.4 no.11 / pp.509-520 / 2015
  • The College Scholastic Ability Test (CSAT) is the primary test for evaluating the academic achievement of high-school students and is used by most universities in South Korea for admission decisions. Because its level of difficulty is a significant issue for both students and universities, the government makes a huge effort to keep the difficulty level consistent every year. However, the actual levels of difficulty have fluctuated significantly, causing many problems with university admission. In this paper, we build two types of data-driven prediction models to predict the correct answer rate and to identify significant factors for the CSAT English test from accumulated test data, unlike traditional methods that depend on experts' judgments. Initially, we derive candidate question-specific factors that can influence the correct answer rate, such as position, EBS-relation, and readability, from 10 years of annual CSAT practice tests and CSATs. In addition, we derive context-specific factors by employing topic modeling, which identifies the underlying topics in the text. Then, the correct answer rate is predicted by multiple linear regression, and the level of difficulty is predicted by a classification tree. The experimental results show that 90% accuracy can be achieved by the difficulty (difficult/easy) classification model, whereas the error rate for the correct answer rate is below 16%. Points and problem category are found to be critical for predicting the correct answer rate. In addition, the correct answer rate is also influenced by some of the topics discovered by topic modeling. Based on our study, it will be possible to predict the range of the expected correct answer rate at both the question level and the entire test level, which will help CSAT examiners control the level of difficulty.

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.95-110 / 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by the traditional methodologies usually used for analyzing structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining or sentiment analysis refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise be solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published on various media is obviously a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information.
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment value from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences in a document, and the whole document. However, most traditional approaches have common limitations in that they do not consider the flexibility of sentiment polarity, that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. The flexibility of sentiment polarity motivated us to conduct this study. In this paper, we have stated that sentiment polarity should be assigned, not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we presented an intelligent investment decision-support model based on opinion mining that performs the scrapping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary instead of a general purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. 
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
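The dictionary-based classification step described above can be illustrated with a toy domain-specific dictionary. The entries, weights, and aggregation rule below are illustrative only, not the dictionary or classifier built in the study.

```python
# Toy domain-specific sentiment dictionary for stock news; in a real
# dictionary each word's value would be assigned from its meaning in the
# stock-market context, which may differ from its general-purpose polarity.
STOCK_SENTIMENT = {
    "surge": 1.0, "rally": 1.0, "gain": 0.5, "upgrade": 0.5,
    "plunge": -1.0, "slump": -1.0, "loss": -0.5, "downgrade": -0.5,
}

def classify_news(tokens, dictionary=STOCK_SENTIMENT):
    """Sum the sentiment values of dictionary-matched tokens and map the
    total to a polarity label for one article."""
    score = sum(dictionary.get(t.lower(), 0.0) for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def predict_direction(articles):
    """Majority vote over a day's classified articles yields the signal
    for the next day's index direction."""
    votes = [classify_news(a) for a in articles]
    up, down = votes.count("positive"), votes.count("negative")
    return "up" if up > down else "down" if down > up else "flat"
```

Swapping `STOCK_SENTIMENT` for a general-purpose dictionary is exactly the comparison the study's evaluation turns on: the same word can carry a different polarity inside the stock-market domain.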

Review of Policy Direction and Coupled Model Development between Groundwater Recharge Quantity and Climate Change (기후변화 연동 지하수 함양량 산정 모델 개발 및 정책방향 고찰)

  • Lee, Moung-Jin;Lee, Joung-Ho;Jeon, Seong-Woo;Houng, Hyun-Jung
    • Journal of Environmental Policy / v.9 no.2 / pp.157-184 / 2010
  • Global climate change is disrupting the water circulation balance by changing the rates of precipitation, recharge and discharge, and evapotranspiration. The Intergovernmental Panel on Climate Change (IPCC, 2007) ranks "changes in rainfall pattern due to climate system changes and the consequent shortage of available water resources" as a high priority among the effects of future climate change on the human environment. Groundwater, which accounts for a considerable portion of the world's water resources, interacts with climate change directly via surface water such as rivers, lakes, and marshes, and indirectly through recharge. Therefore, in order to quantify the effects of climate change on groundwater resources, it is necessary not only to predict the main variables of climate change but also to accurately predict the rainfall recharge quantity. In this context, the authors selected the A1B climate change scenario from the Special Report on Emissions Scenarios (SRES) distributed by the Korea Meteorological Administration. Using data on temperature, rainfall, soil, and land use, the groundwater recharge rate for the research area was estimated by period and implemented in a geographic information system (GIS). To calculate the groundwater recharge quantity, Visual HELP3 was used as the main recharge model, with the physical properties of weather, temperature, and soil layers as the main input data. General changes to water circulation due to climate change have already been predicted.
To systematically address how the groundwater resource circulation system should be reflected in future groundwater policies, it is urgent to recalculate the groundwater recharge quantity, and the consequent usable quantity, based on predictions of future climate change in Korea, and to reflect the results in policy. The space-time calculation of changes to the groundwater recharge quantity in the study area may serve as a foundation for additional measures for the improved management of domestic groundwater resources.
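Visual HELP3 resolves recharge with layered soil physics and time-stepped weather input; the bookkeeping identity underlying any such estimate is the simple annual water balance, sketched here purely as an illustration (the numbers in any real application come from the climate-scenario data described above).

```python
def annual_recharge_mm(precip_mm, evapotrans_mm, runoff_mm):
    """Simplified annual water balance: recharge = precipitation -
    evapotranspiration - surface runoff, floored at zero. A layered
    model like HELP3 resolves the same balance per soil layer and
    time step rather than as one annual lump."""
    return max(0.0, precip_mm - evapotrans_mm - runoff_mm)

def recharge_rate(precip_mm, evapotrans_mm, runoff_mm):
    """Recharge expressed as a fraction of precipitation, the form in
    which recharge rates are typically mapped per zone in a GIS."""
    if precip_mm <= 0:
        return 0.0
    return annual_recharge_mm(precip_mm, evapotrans_mm, runoff_mm) / precip_mm
```

Re-running this balance per climate-scenario period is, in miniature, the recalculation under future climate change that the policy discussion above calls for.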


DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.3-32 / 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for those crossing, diverging, and stopping maneuvers which are associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level of service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance in terms of a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At the present time, however, performance is only measured in terms of delay per vehicle. This is a parameter which is widely accepted as a meaningful and useful indicator of the efficiency with which an intersection is serving traffic needs. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well-known that the change from permissive to protected left-turn phasing can reduce left-turn accident frequency.
However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to subjectively judge the level of safety benefits, and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous examples of other geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety at signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed in the context of the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of the problems associated with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6).
The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.
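The proposed safety-based level of service maps a conflict-opportunity measure to letter grades, analogous to the HCM's delay bands. The sketch below uses hypothetical thresholds purely to show the shape of such a criterion; the paper derives its actual criteria from the distribution of modeled conflict opportunities, not from these numbers.

```python
# Hypothetical band edges (conflict opportunities per hour -> grade);
# illustrative only, not the criteria developed in the research.
LOS_THRESHOLDS = [(100, "A"), (200, "B"), (400, "C"), (700, "D"), (1100, "E")]

def safety_los(conflict_opportunities_per_hour):
    """Map a modeled conflict-opportunity rate to a letter grade, the
    same A-F banding the HCM applies to delay per vehicle."""
    for limit, grade in LOS_THRESHOLDS:
        if conflict_opportunities_per_hour <= limit:
            return grade
    return "F"
```

Comparing `safety_los` for two candidate signal timing plans alongside their delay-based grades is the delay-versus-safety trade-off assessment the abstract describes.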


Construction of Web-Based Database for Anisakis Research (고래회충 연구를 위한 웹기반 데이터베이스 구축)

  • Lee, Yong-Seok;Baek, Moon-Ki;Jo, Yong-Hun;Kang, Se-Won;Lee, Jae-Bong;Han, Yeon-Soo;Cha, Hee-Jae;Yu, Hak-Sun;Ock, Mee-Sun
    • Journal of Life Science / v.20 no.3 / pp.411-415 / 2010
  • Anisakis simplex is a parasitic nematode with a complex life cycle involving crustaceans, fish, squid, and whales. When people eat under-processed or raw fish, it causes anisakidosis and also plays a critical role in inducing serious allergic reactions in humans. However, no web-based database on A. simplex at the DNA or protein level has been reported so far. In this context, we constructed a web-based database for Anisakis research, proceeding as follows: First, sequences of the order Ascaridida were downloaded and translated into multi-FASTA format, which was stored as the database for stand-alone BLAST. Second, all of the nucleotide and EST sequences were clustered and assembled, and the EST sequences were translated into amino acid sequences for nuclear localization signal prediction. In addition, we added vector, E. coli, and repeat sequences to the database so that potential contamination can be checked. The web-based database offers several advantages. Only data matching nucleotide sequences directly related to the order Ascaridida are found and retrieved in BLAST searches. It is also very convenient for confirming contamination when constructing cDNA or genomic libraries from Anisakis, and BLAST results on Anisakis sequence information can be accessed quickly. Taken together, the web-based database on A. simplex will be valuable for developing species-specific PCR markers and for studying SNPs in future A. simplex-related research.
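The first step above, serializing downloaded Ascaridida sequences into multi-FASTA for the stand-alone BLAST database, can be sketched as plain text formatting. The record IDs and descriptions below are illustrative, not entries from the actual database.

```python
def to_multifasta(records, width=70):
    """Serialize (id, description, sequence) tuples into multi-FASTA
    text: one '>' header line per record, then the sequence wrapped to
    `width` characters per line -- the input format expected when
    building a stand-alone BLAST database (e.g., with makeblastdb)."""
    lines = []
    for seq_id, desc, seq in records:
        lines.append(f">{seq_id} {desc}".rstrip())
        for i in range(0, len(seq), width):
            lines.append(seq[i:i + width])
    return "\n".join(lines) + "\n"
```

Appending vector, E. coli, and repeat sequences as additional records to the same file is what lets the same BLAST search flag potential contamination, as described above.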

An Exploratory Study on Forecasting Sales Take-off Timing for Products in Multiple Markets (해외 복수 시장 진출 기업의 제품 매출 이륙 시점 예측 모형에 관한 연구)

  • Chung, Jaihak;Chung, Hokyung
    • Asia Marketing Journal / v.10 no.2 / pp.1-29 / 2008
  • The objective of our study is to provide an exploratory model for forecasting the sales take-off timing of a product in the context of multinational markets. We evaluated the usefulness of key predictors such as multiple-market information, product attributes, price, and sales for forecasting sales take-off timing by applying the suggested model to monthly sales data for PDP and LCD TVs provided by a Korean electronics manufacturer. The empirical analysis yields some important results for global companies. Firstly, innovation coefficients obtained from sales data of a particular product in other markets provide the most useful information on the sales take-off timing of the product in a target market. However, imitation coefficients obtained from the sales data of the product in the target market and other markets are not useful for predicting its take-off timing. Secondly, price and product attributes significantly influence take-off timing. It is noteworthy that the ratio of the price of the target product to the average price of the market is more important than the price of the target product itself. Lastly, the cumulative sales of the product are still useful for predicting sales take-off timing. Our model outperformed the average model in terms of hit rate.
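The innovation (p) and imitation (q) coefficients discussed above come from the Bass diffusion model. The sketch below shows the cumulative Bass curve and one simple take-off heuristic; the threshold rule and parameter values are illustrative, not the paper's model or estimates.

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative Bass-model adoptions at time t:
    F(t) = m * (1 - e^{-(p+q)t}) / (1 + (q/p) * e^{-(p+q)t}),
    where p is the innovation coefficient, q the imitation coefficient,
    and m the market potential."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def takeoff_month(p, q, m, threshold=0.02, horizon=120):
    """First month whose incremental sales exceed `threshold` of the
    market potential -- one simple heuristic for the take-off point
    (real take-off definitions in the literature vary)."""
    prev = 0.0
    for t in range(1, horizon + 1):
        cum = bass_cumulative(t, p, q, m)
        if cum - prev > threshold * m:
            return t
        prev = cum
    return None
```

Because a small p delays the curve's steep segment, a p estimated from the product's earlier launches in other markets shifts the predicted take-off month, which is the cross-market transfer the first finding above exploits.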
