• Title/Summary/Keyword: unit graph

Search results: 136

A Virtual Battlefield Situation Dataset Generation for Battlefield Analysis based on Artificial Intelligence

  • Cho, Eunji;Jin, Soyeon;Shin, Yukyung;Lee, Woosin
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.33-42 / 2022
  • In existing intelligent command-and-control system studies, the analysis results for a commander's battlefield-situation questions are provided from knowledge-based situation data, and analysis reporters write these results in varied natural-language expressions. Because information and intelligence must be analyzed according to context, battlefield situation analysis based on artificial intelligence is necessary. We propose a virtual dataset generation method based on battlefield simulation scenarios in order to provide the dataset needed for such AI-based battlefield situation analysis. The dataset is generated after identifying battlefield knowledge elements in the scenarios. When a candidate hypothesis is created, a unit hypothesis is automatically created; by combining unit hypotheses, similar identification-hypothesis combinations are generated, and an aggregation hypothesis is generated by grouping candidate hypotheses. An implementation of the dataset-generator software demonstrates that the proposed method can generate the virtual battlefield situation dataset.
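The combination-and-grouping step described in the abstract can be sketched as follows. This is a hypothetical illustration only, since the paper's actual data model is not given here: the unit names, the fields, the combination size, and the grouping key are all assumptions.

```python
from itertools import combinations

# Illustrative unit hypotheses (names and fields are assumptions).
unit_hypotheses = [
    {"unit": "tank-A", "type": "armor"},
    {"unit": "tank-B", "type": "armor"},
    {"unit": "artillery-C", "type": "fires"},
]

def candidate_combinations(units, size=2):
    """Enumerate identification-hypothesis combinations of unit hypotheses."""
    return [list(c) for c in combinations(units, size)]

def aggregate(cands):
    """Group candidate combinations by the set of unit types they contain."""
    groups = {}
    for cand in cands:
        key = frozenset(h["type"] for h in cand)
        groups.setdefault(key, []).append(cand)
    return groups

cands = candidate_combinations(unit_hypotheses)
groups = aggregate(cands)
```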

A Study on the Interpretation of the Synthetic Unit Hydrograph According to the Characteristics of Catchment Area and Runoff Routing (유역 특성과 유출추적에 의한 단위도 해석에 관한 고찰)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers / v.8 no.1 / pp.1088-1096 / 1966
  • The following is a method of synthetic unit-graph derivation based on the routing of a time-area diagram through channel storage, as studied by Clark, Johnstone, and Laurenson. A unit hydrograph (or unit graph) is the hydrograph that would result from unit rainfall excess occurring uniformly, with respect to both time and area, over a catchment in unit time. By thus standardizing rainfall characteristics and ignoring loss, the unit graph represents only the effects of catchment characteristics on the time distribution of runoff from a catchment. The situation often arises where it is desirable to derive a unit graph for the design of dams, large bridges, and flood-mitigation works such as levees, floodways, and other flood-control structures, or for flood forecasting, but the necessary hydrologic records are not available. In such cases, if time and funds permit, it may be desirable to install the necessary rain gauges, pluviometers, and stream-gauging stations and collect the necessary data over a period of years. On the other hand, this procedure may be found either uneconomic or impossible on the grounds of the time required, and it then becomes necessary to synthesize a unit graph from a knowledge of the physical characteristics of the catchment. In preparing the approach to the solution of the problem, we must select a number of catchment characteristics (shape, stream pattern, surface slope, stream slope, etc.) and a number of parameters that define the magnitude and shape of the unit graph (e.g. peak discharge, time to peak, base length, etc.); evaluate the selected catchment characteristics and unit-graph parameters for a number of catchments having adequate rainfall and stream data, and obtain correlations between the two classes of data; and assume that the relationships so derived apply to other, ungauged catchments in the same region, substituting the known physical characteristics of these catchments into the relationships to determine the corresponding unit-graph parameters. The method described in this note, based on the routing of a time-area diagram through channel storage, appears to provide a logical line of research and allows a readier correlation of unit-graph parameters with catchment characteristics. The main disadvantage of this method appears to be the error introduced by routing all elements of rainfall excess through the same amount of storage. Nevertheless, it should be noted that the synthetic unit-graph method is more accurate than the rational method, since it takes account of the shape and topography of the catchment, channel storage, and the temporal variation of rainfall excess, all of which are neglected in the rational method.
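The core routing idea, passing a time-area diagram through a linear channel storage, can be sketched numerically. This is a generic illustration of the Clark-type procedure under an assumed routing-coefficient form and made-up inputs, not the authors' derivation:

```python
def clark_unitgraph(time_area, K, dt=1.0, tail=60):
    """Route time-area inflow ordinates through a linear reservoir.

    time_area: ordinates of the time-area diagram (inflow per step)
    K: storage coefficient of the reservoir, in the same time unit as dt
    """
    c = 2.0 * dt / (2.0 * K + dt)            # routing coefficient
    inflow = list(time_area) + [0.0] * tail  # zero inflow lets the recession drain
    ordinates, prev = [], 0.0
    for q_in in inflow:
        prev = c * q_in + (1.0 - c) * prev   # O_t = c*I_t + (1-c)*O_(t-1)
        ordinates.append(prev)
    return ordinates

# Triangular time-area diagram, storage coefficient K = 3 time steps.
u = clark_unitgraph([1.0, 2.0, 3.0, 2.0, 1.0], K=3.0)
```

Two defining properties of the routed result: volume is preserved (the ordinates sum to the inflow volume once the recession has drained), and the peak is attenuated and delayed relative to the time-area peak.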

Developing Learning Materials of Multimedia for General Science Instruction of High School (고등학교 공통과학 학습을 위한 멀티미디어 자료 구축)

  • Kim, Jae Hyun;Lee, Hee Bok;Kim, Hyun Sub;Kim, Hee Soo;Park, Jeong Wok;Park, Hyun Ju
    • Journal of the Korean Chemical Society / v.44 no.3 / pp.249-257 / 2000
  • This study was designed to develop multimedia learning materials for the general science instruction of high school. The learning material was made of HTML documents for each middle unit according to the general science curriculum, and included a variety of letters, graphs, pictures, drawings, animations, and other moving-image materials. It was composed of five coursewares: Content, Dictionary, Science Story, Image Material, and Questions. The learning material was uploaded to the website of the Science Education Research Institute of Kongju National University (http://science.kongju.ac.kr), and was also provided as a CD-ROM title.

Hardware implementation of Petri net-based controller with matrix-based look-up tables (행렬구조 메모리 참조표를 사용한 페트리네트 제어기의 하드웨어 구현)

  • Chang, Nae-Hyuck;Jeong, Seung-Kweon;Kwon, Wook-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.4 no.2 / pp.194-202 / 1998
  • This paper describes a hardware implementation method for a Petri net-based controller. A flexible and systematic implementation method, based on look-up tables, is suggested, which makes it possible to build high-speed Petri net-based controllers. The suggested method overcomes the inherent speed limit of microprocessors by using matrix-based look-up tables. Based on the matrix framework, this paper suggests various specific data-path structures as well as a basic data-path structure, accompanied by evolution algorithms, for sub-class Petri nets. A new sub-class Petri net, named the Biarced Petri net, resolves the memory-explosion problem that usually comes with matrix-based look-up tables. The suggested matrix-based method based on the Biarced Petri net has efficiency and expandability as good as those of list-based methods. This paper shows the usefulness of the suggested method by evaluating the size of the look-up tables and introducing an architecture for the signal processing unit of a programmable controller. The suggested implementation method is supported by an automatic design-support program.
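The matrix framework the abstract refers to can be illustrated with the standard incidence-matrix view of a Petri net, in which firing an enabled transition t updates the marking by M' = M - Pre[:, t] + Post[:, t]. The toy net below is an assumption for illustration, not the paper's Biarced construction:

```python
# Rows are places, columns are transitions (toy net, assumed for illustration).
PRE  = [[1, 0],   # place p0 is consumed by transition t0
        [0, 1],   # place p1 is consumed by transition t1
        [0, 0]]   # place p2 consumes nothing
POST = [[0, 0],
        [0, 0],
        [1, 1]]   # both transitions produce a token into p2

def enabled(marking, t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= PRE[p][t] for p in range(len(marking)))

def fire(marking, t):
    """Fire transition t: M' = M - Pre[:, t] + Post[:, t]."""
    assert enabled(marking, t), "transition not enabled"
    return [marking[p] - PRE[p][t] + POST[p][t] for p in range(len(marking))]

m = fire([1, 1, 0], 0)   # t0 moves the p0 token into p2
m = fire(m, 1)           # t1 moves the p1 token into p2
```

A look-up-table controller precomputes exactly these enabled/fire results so that no per-cycle matrix arithmetic is needed at run time, which is where the speed gain over a microprocessor loop comes from.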

Object-Oriented Software Regression Testing by Class Node Analysis (클래스 노드 분석에 의한 객체 지향 소프트웨어 회귀 테스팅)

  • Kwon, Young-Hee;Li, Len-Ge;Koo, Yeon-Seol
    • The Transactions of the Korea Information Processing Society / v.6 no.12 / pp.3523-3529 / 1999
  • In this paper, we propose an improved regression testing method that uses the method as the basic unit of change. The testing method consists of three steps. We represent the relationships between classes using UML (Unified Modeling Language) notation, find the nodes of the modified methods and the affected methods by node analysis, and then select changed test cases from the original test cases. The proposed object-oriented regression testing method can reduce the number of test cases, the testing time, and the cost through the reuse of test cases.
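The selection step can be sketched as reachability over a method-level dependency graph: a test case is re-run only if the method it exercises is modified or transitively calls a modified method. The call graph, the method names, and the test map below are illustrative assumptions, not the paper's UML-based node analysis itself:

```python
calls = {               # caller -> callees (assumed toy call graph)
    "A.run":  ["B.calc"],
    "B.calc": ["C.util"],
    "D.show": [],
}

def affected(modified):
    """Affected = modified methods plus every caller that transitively reaches one."""
    hit = set(modified)
    changed = True
    while changed:                     # propagate until a fixed point
        changed = False
        for caller, callees in calls.items():
            if caller not in hit and any(c in hit for c in callees):
                hit.add(caller)
                changed = True
    return hit

tests = {"t1": "A.run", "t2": "D.show"}   # test case -> method it exercises

mod = affected({"C.util"})                # C.util was modified
selected = [t for t, m in tests.items() if m in mod]
```

Here only `t1` is selected, since `D.show` never reaches the modified method; this is the mechanism by which the number of re-run test cases shrinks.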

A Study of Dark Photon at the Electron-Positron Collider Experiments Using KISTI-5 Supercomputer

  • Park, Kihong;Cho, Kihyeon
    • Journal of Astronomy and Space Sciences / v.38 no.1 / pp.55-63 / 2021
  • The universe is well known to consist of dark energy, dark matter, and the standard model (SM) particles. Dark matter dominates the density of matter in the universe, and it is thought to be linked with the dark photon, a hypothetical hidden-sector particle similar to the photon in electromagnetism but proposed as a force carrier. Due to the extremely small cross-section of dark matter, a large amount of data needs to be processed; therefore, we need to optimize the central processing unit (CPU) time. In this work, using MadGraph5 as a simulation toolkit, we examined the CPU time and the cross-section of dark matter at the electron-positron collider, considering three parameters: the center-of-mass energy, the dark photon mass, and the coupling constant. The signal process pertained to a dark photon that couples only to heavy leptons, and we dealt only with the case of the dark photon decaying into two muons. We used the simplified model, which covers dark matter particles and dark photon particles as well as the SM particles. To compare the CPU time of the simulation, one or more cores of the Nurion Knights Landing and Skylake partitions of the KISTI-5 supercomputer and a local Linux machine were used. Our results can help optimize high-energy physics software through high-performance computing and enable users to incorporate parallel processing.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate a way to apply CNNs to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs have strength in interpreting images. Thus, the model proposed in this study adopts a CNN as the binary classifier that predicts stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG' (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 x 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 x 5 x 6 and 5 x 5 x 9) in the convolution layer. In the pooling layer, a 2 x 2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution layer and the hidden layers was set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
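The data-preparation steps of CNN-FG (RGB decomposition of each 40 x 40 graph image, then the 80/20 train/validation split) can be sketched in a few lines. The image contents below are synthetic placeholders and no CNN library is assumed:

```python
def to_rgb_matrices(image):
    """image: H x W grid of (r, g, b) pixels -> three H x W channel matrices."""
    r = [[px[0] for px in row] for row in image]
    g = [[px[1] for px in row] for row in image]
    b = [[px[2] for px in row] for row in image]
    return r, g, b

def train_val_split(images, train_ratio=0.8):
    """Split the image list into training and validation subsets."""
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]

# Ten dummy 40 x 40 images, every pixel a mid-gray (128, 128, 128) placeholder.
imgs = [[[(128, 128, 128)] * 40 for _ in range(40)] for _ in range(10)]
train, val = train_val_split(imgs)
r, g, b = to_rgb_matrices(imgs[0])
```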

A Study on Development of the Instructional Materials for Elementary School Mathematics Based on STEAM Education (융합인재교육을 적용한 초등수학 수업자료 개발 연구)

  • Jung, Yun Hoe;Kim, Sung Joon
    • Journal of the Korean School Mathematics Society / v.16 no.4 / pp.745-770 / 2013
  • In today's knowledge-based society, most knowledge is integrated knowledge that is difficult to classify into single subjects rather than knowledge of one subject. Thus, integrated thinking, in which integrated knowledge is acquired first and then associated with imagination and artistic sensitivity, is required in order for us to have problem-solving capability in daily life. STEAM education (science, technology, engineering, arts, and mathematics) is one educational method for improving this problem-solving capability as well as integrated thinking. This research developed materials for STEAM education that can be applied to the 6th-grade curriculum of elementary school mathematics, applied them in class, and analyzed how they affected students' attitudes toward mathematics. Unit 3 'Prism and Pyramid' was restructured and replaced with classes such as 'Spaghetti Project' and 'Paper Craft'; Unit 4 'Several Solid Figures' was taught as an 'EDUCUBE' class; and Unit 6 'Proportional Graph' was taught as a 'Creating My Own Bracelet' class. After these classes, we found that mathematics classes applying STEAM have a positive effect on the mathematical attitudes of students. Many students said that math is fun and became more interesting after the STEAM-applied math classes, and we found that they have a positive awareness of mathematics.

Evaluation of Approximate Exposure to Low-dose Ionizing Radiation from Medical Images using a Computed Radiography (CR) System (전산화 방사선촬영(CR) 시스템을 이용한 근사적 의료 피폭 선량 평가)

  • Yu, Minsun;Lee, Jaeseung;Im, Inchul
    • Journal of the Korean Society of Radiology / v.6 no.6 / pp.455-464 / 2012
  • This study suggests an evaluation of approximate exposure to low-dose ionizing radiation from medical images using a computed radiography (CR) system in standard X-ray examinations, and an experimental model that can be compared with diagnostic reference levels (DRLs) to suggest optimized conditions for protection against low-dose medical radiation. The entrance surface dose (ESD) was cross-measured with a standard dosimeter and optically stimulated luminescence dosimeters (OSLDs) under experimental conditions varying the tube voltage and current of the X-ray generator. The Hounsfield unit (HU) scale was also measured for each experimental condition in the CR system, and after tabulating the characteristic relationship between ESD and HU scale as a table and graph, approximate radiation doses for the head, neck, thorax, abdomen, and pelvis were derived. For the head, neck, thorax, abdomen, and pelvis, the averages of the ESD were 2.10, 2.01, 1.13, 2.97, and 1.95 mGy, respectively, and the HU scales in the CR images were 3,276 ± 3.72, 3,217 ± 2.93, 2,768 ± 3.13, 3,782 ± 5.19, and 2,318 ± 4.64, respectively. Using the characteristic relationship table and graph, the ESD was estimated at approximately 2.16, 2.06, 1.19, 3.05, and 2.07 mGy, respectively. The average error between the measured and estimated ESD values was smaller than 3%, within the 5% measurement credibility generally accepted in radiology. In its final analysis, this study suggests a new experimental model that can approximately assess the radiation dose of a patient in standard X-ray examinations, and that can be applied to CR examinations, digital radiography, and even film-cassette systems.
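The idea of a characteristic ESD-HU relationship used for the approximate dose estimates can be illustrated with a simple least-squares line fit over the region averages quoted above. The linear form is an assumption made for illustration; the paper's actual characteristic table and graph may encode a different relationship:

```python
# (HU scale, ESD in mGy) region averages quoted in the abstract.
pairs = [(3276, 2.10), (3217, 2.01), (2768, 1.13), (3782, 2.97), (2318, 1.95)]

def fit_line(points):
    """Ordinary least-squares fit of esd = a * hu + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(pairs)
est = a * 3276 + b   # estimated ESD (mGy) at the head's average HU reading
```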

An Algorithm for Spot Addressing in Microarray using Regular Grid Structure Searching (균일 격자 구조 탐색을 이용한 마이크로어레이 반점 주소 결정 알고리즘)

  • 진희정;조환규
    • Journal of KIISE: Computer Systems and Theory / v.31 no.9 / pp.514-526 / 2004
  • Microarray is a new technique for gene expression experiments which has gained biologists' attention in recent years. This technology enables us to obtain hundreds of thousands of expression measurements of genes or genotypes at once using a microarray. Since analyzing patterns of gene expression requires manual work, we want to develop an effective and automated tool to analyze microarray images. However, it is difficult to analyze DNA chip images automatically due to several problems, such as variation in spot position, irregularity of spot shape and size, and sample contamination. In particular, one of the most difficult problems in microarray analysis is block and spot addressing, which is performed manually or semi-automatically in all commercial tools. In this paper we propose a new algorithm to address the positions of spots and blocks using a new concept of regular grid structure searching. In our algorithm, we first construct maximal I-regular sequences from the set of input points. Second, we calculate the rotational angle and unit distance. Finally, we construct an I-regularity graph by allowing pseudo points, and then compute the spot/block addresses using this graph. Experimental results showed that our algorithm is highly robust and reliable. Supplementary information is available at http://jade.cs.pusan.ac.kr/~autogrid.
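Two of the quantities the algorithm estimates, the rotational angle and the unit (inter-spot) distance, can be sketched from nearest-neighbour vectors on a synthetic regular grid. The point set and the estimators below are illustrative assumptions, not the paper's I-regular sequence construction:

```python
import math

# Synthetic ideal 3 x 3 spot grid with spacing 10 and no rotation.
spots = [(c * 10.0, r * 10.0) for r in range(3) for c in range(3)]

def nearest_neighbour_vectors(points):
    """For each point, the vector to its nearest other point."""
    vecs = []
    for p in points:
        q = min((x for x in points if x != p),
                key=lambda x: math.hypot(x[0] - p[0], x[1] - p[1]))
        vecs.append((q[0] - p[0], q[1] - p[1]))
    return vecs

vecs = nearest_neighbour_vectors(spots)
# Unit distance: mean nearest-neighbour spacing.
unit = sum(math.hypot(dx, dy) for dx, dy in vecs) / len(vecs)
# Rotation: nearest-neighbour directions folded into [-45, 45) degrees, averaged,
# so that all four grid axis directions vote for the same angle.
rotation = sum((math.degrees(math.atan2(dy, dx)) + 45.0) % 90.0 - 45.0
               for dx, dy in vecs) / len(vecs)
```

On a real image these estimates would come from detected spot centres, after which pseudo points fill gaps left by missing or contaminated spots before addresses are assigned.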