• Title/Summary/Keyword: CRITERIA

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands I studied the following subjects, primarily in relation to the Mokpo Yong-san project, which NEDECO had studied for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I explain here how to make a unit hydrograph from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph ordinates every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated and measured discharge amounts below 10%. The discharge period of a unit hydrograph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid damage to crops and structures. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction, up to the point where the cross-sectional area has been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible until then. Up to that point the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. Critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (the inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in $m^3$/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (the outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and this must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir then depends upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, owing to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities here are higher than this limit, we must use other construction methods in closing the gap. This can be done with dump-cars from each side or by using a cableway.
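
The half-hourly sluice computation in item 2 is essentially a fixed-point iteration: estimate the inner water level at the end of the interval, compute the discharge from the average head, update the stored volume, and repeat until the estimate is consistent. The sketch below illustrates that loop under simplifying assumptions I have added (a constant reservoir surface area, a discharge coefficient c, and invented numbers); none of the values come from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def route_half_hour(inner0, outer, sluice_area, surface_area, inflow,
                    dt=1800.0, c=1.0, tol=1e-3):
    """One half-hour routing step: guess the inner level at the end of the
    interval, compute the sluice discharge from the average head over the
    interval (v = sqrt(2*g*h)), update the stored volume, and iterate until
    the guessed and computed inner levels agree. A prismatic reservoir with
    a constant water-surface area is assumed for simplicity."""
    guess = inner0
    for _ in range(100):
        head = max(((inner0 - outer) + (guess - outer)) / 2.0, 0.0)
        q_out = c * sluice_area * math.sqrt(2.0 * G * head)      # m^3/s
        new_level = inner0 + (inflow - q_out) * dt / surface_area
        if abs(new_level - guess) < tol:
            break
        guess = new_level
    return guess, q_out

# Illustrative numbers only: 4.0 m inner level, 1.5 m outer (tidal) level,
# 50 m^2 sluice opening, 8 km^2 reservoir surface, 120 m^3/s total inflow.
print(route_half_hour(4.0, 1.5, 50.0, 8.0e6, 120.0))
```

The same head-velocity relation $h=\frac{V^2}{2g}$ drives the closure-gap computation in item 3, with the shrinking cross-section of the gap taking the place of the sluice area.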

Stock-Index Invest Model Using News Big Data Opinion Mining (뉴스와 주가 : 빅데이터 감성분석을 통한 지능형 투자의사결정모형)

  • Kim, Yoo-Sin;Kim, Nam-Gyu;Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.143-156
    • /
    • 2012
  • People easily believe that news and the stock index are closely related. They think that securing news before anyone else can help them forecast stock prices and enjoy great profit, or perhaps capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to come up with investment decisions based on news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market are analyzed, it becomes possible to extract information that can assist investment decisions. The reality, however, is that the world is inundated with a massive wave of news in real time, and news is not patterned text. This study suggests a stock-index investment model based on "News Big Data" opinion mining that systematically collects, categorizes and analyzes the news and creates investment information. To verify the validity of the model, the relationship between the results of news opinion mining and the stock index was analyzed empirically using statistics. The mining steps that convert news into information for investment decision making are as follows. First, news supplied in real time by a news provider is indexed: not only the contents of each article but also information such as the medium, time and news type are collected and classified, and then reworked into variables from which investment decisions can be inferred. Next, the text of each article is separated into morphemes to derive the words whose polarity can be judged, and each word is tagged with positive or negative polarity by comparing it with a sentiment dictionary. Third, the positive/negative polarity of each article is judged using the indexed classification information and a scoring rule, and the final investment decision information is derived according to daily scoring criteria. For this study, the KOSPI index and its fluctuation range were collected for the 63 days on which the Korea Exchange was open during the three months from July to September 2011, and news data were collected by parsing 766 articles of economic news medium M carried in the stock information > news > main news section of the portal site Naver.com. During those three months the index rose on 33 days and fell on 30 days, and the news comprised 197 articles published before the opening of the stock market, 385 during the session, and 184 after the close. Mining the collected news contents and comparing the results with stock prices showed that the positive/negative opinion of news contents had a significant relation with the stock price, and that changes in the index were better explained when news opinion was derived as a positive/negative ratio instead of a simplified positive-or-negative judgment. To check whether news affected, or at least preceded, the fluctuation of stock prices, the change in stock price was compared only with news published before the market opened, and this relation was verified to be statistically significant as well. In addition, because the news contained various types of information, such as social, economic and overseas news, corporate earnings, industry conditions, market outlook and market conditions, the influence on the stock market was expected to differ by news type; each type of news was therefore compared with the fluctuation of stock prices, and the results showed that market-condition, outlook and overseas news were the most useful in explaining the fluctuation. By contrast, news about individual companies was not statistically significant, but its opinion mining value tended to move opposite to the stock price, which may reflect promotional and planned news released to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function from the relation between the positive/negative opinion of news and stock prices. The regression equation using the variables of market conditions, outlook and overseas news before the opening of the market was statistically significant, and the classification accuracy of the logistic regression was 70.0% for rises in the stock price, 78.8% for falls, and 74.6% on average. This study first analyzed the relation between news and stock prices by quantifying the sentiment of unstructured news contents through opinion mining, one of the big data analysis techniques, and then proposed and verified an intelligent investment decision-making model that can systematically carry out opinion mining and derive and support investment information. This shows that news can be used as a variable to predict the stock index for investment, and the model is expected to serve as a real investment support system if it is implemented and verified in the future.
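
As a rough illustration of the three mining steps above, the following sketch reduces them to their simplest form: a tiny hypothetical word-polarity dictionary stands in for the sentiment dictionary, whitespace tokens stand in for Korean morpheme analysis, and the daily score is the positive/negative article ratio that the study found more explanatory than a binary label. Everything named here is illustrative, not the paper's actual implementation.

```python
# Hypothetical sentiment dictionary: word -> polarity.
SENTIMENT = {"surge": +1, "gain": +1, "record": +1,
             "loss": -1, "fall": -1, "slump": -1}

def article_polarity(text):
    """Sum the dictionary polarities of the tokens; >0 positive, <0 negative."""
    return sum(SENTIMENT.get(w, 0) for w in text.lower().split())

def daily_score(articles):
    """Positive/negative ratio over one day's articles (0.5 if none scored)."""
    pos = sum(1 for a in articles if article_polarity(a) > 0)
    neg = sum(1 for a in articles if article_polarity(a) < 0)
    return pos / (pos + neg) if (pos + neg) else 0.5

print(daily_score(["Exports surge to a record gain",
                   "Chip sector slump deepens"]))  # -> 0.5
```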

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell so as to get excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing, because its control function keeps it from generating a trade signal when the pattern of the market is uncertain. Numeric data must be discretized for rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds candidate cuts by naïve scaling of the data and then selects the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
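
Of the four discretization methods listed above, equal frequency scaling is the easiest to make concrete. A minimal sketch, with made-up price data:

```python
def equal_frequency_cuts(values, n_bins):
    """Place n_bins-1 cuts so roughly equal numbers of samples fall per bin."""
    s = sorted(values)
    return [s[len(s) * i // n_bins] for i in range(1, n_bins)]

def discretize(value, cuts):
    """Map a numeric value to the index of the interval it falls in."""
    return sum(value >= c for c in cuts)

prices = [101.2, 99.8, 100.5, 102.3, 98.7, 100.1, 101.9, 99.2, 100.8]
cuts = equal_frequency_cuts(prices, 3)          # two cuts -> three intervals
labels = [discretize(p, cuts) for p in prices]  # categorical labels 0, 1, 2
print(cuts, labels)
```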

Interlaboratory Comparison of Blood Lead Determination in Some Occupational Health Laboratories in Korea (일부 산업보건기관들의 혈중연 분석치 비교)

  • Ahn, Kyu Dong;Lee, Byung Kook
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.5 no.1
    • /
    • pp.8-15
    • /
    • 1995
  • The reliable measurement of metals in biological media of the human body is a critical prerequisite for the proper evaluation of their toxic effects on human health. Recently in Korea, the necessity of quality assurance of measurement in the occupational health and occupational hygiene fields brought about a regulatory quality control program. Lead is often used as a standard metal for the program in both fields. During the last 20 years lead poisoning has been prevalent in Korea and still is one of the main heavy metal poisonings, and the capability to measure blood lead is one of the prerequisites for specialized occupational health institutes in Korea. Furthermore, blood lead is the most important indicator for evaluating the lead burden of human exposure, so reliable and accurate analysis is needed whenever possible. To evaluate the extent of the interlaboratory differences in blood lead measurement among several well-known institutes specialized in occupational health in Korea, the authors prepared 68 blood samples from two storage battery industries, and every sample was divided into 2 ml portions. One set of 68 samples was analyzed by the authors' laboratory (Soonchunhyang University Institute of Industrial Medicine: SIIM); of the other set, 40 samples were analyzed by C University Institute of Industrial Medicine (CIIM) and the remaining 28 samples by a Japanese institute (K Occupational Health Center: KOHC). The authors also prepared bovine test samples, obtained from the Japanese Federation of Occupational Health Organizations (JFOHO), for quality control, and selected two other well-known occupational health laboratories and one laboratory specialized in instrumental analysis. A total of 6 laboratories joined the interlaboratory comparison of blood lead measurement, and the results obtained were as follows: 1. There was no significant difference in average blood lead between SIIM and CIIM in the different groups of blood lead concentration, and the relative standard deviation of the two laboratories was less than 3.0%. There was likewise no significant difference in average blood lead between SIIM and KOHC, with a relative standard deviation of 6.84% at maximum. 2. Taking less than a 15% difference of the mean, or less than a 6 ug/dl difference below 40 ug/dl in whole blood, as the criterion of agreement between two laboratories, the agreement rates were 87.5% (35/40) between SIIM and CIIM and 78.6% (22/28) between SIIM and KOHC. 3. The correlation of blood lead between SIIM and CIIM was 0.975 (p=0.0001) with the regression equation SIIM = 2.19 + 0.9243 CIIM, whereas the correlation between SIIM and KOHC was 0.965 (p=0.0001) with the equation SIIM = 1.91 + 0.9794 KOHC. 4. Taking the reference value as the dependent variable and each of the 6 laboratories' measured values as the independent variable, the determination coefficients ($R^2$) of the simple regression equations for the bovine test samples were very high ($R^2>0.99$), and the regression coefficients (${\beta}$) were between 0.972 and 1.15, indicating fairly good agreement of the measurement results.
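
The agreement criterion in result 2 is compact enough to state in code. A minimal sketch, assuming the pair mean serves as the reference level for the below-40 ug/dl rule (one reading of the criterion) and using hypothetical laboratory pairs rather than the 68 real samples:

```python
def agrees(a, b):
    """Two labs agree if results differ by <6 ug/dl below 40 ug/dl,
    or by <15% of the pair mean otherwise (assumed interpretation)."""
    mean = (a + b) / 2.0
    if mean < 40.0:
        return abs(a - b) < 6.0
    return abs(a - b) / mean < 0.15

pairs = [(32.1, 35.0), (55.4, 49.8), (18.2, 26.0)]  # hypothetical ug/dl pairs
rate = sum(agrees(a, b) for a, b in pairs) / len(pairs)
print(rate)  # fraction of pairs meeting the agreement criterion
```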

Determination of Therapeutic Dose of I-131 for First High Dose Radioiodine Therapy in Patients with Differentiated Thyroid Cancer: Comparison of Usefulness between Pathological Staging, Serum Thyroglobulin Level and Finding of I-123 Whole Body Scan (분화 갑상선암 수술 후 최초 고용량 방사성옥소 치료시 투여용량 결정: 병리적 병기, 혈청 갑상선글로불린치와 I-123 전신 스캔의 유용성 비교)

  • Jeong, Hwan-Jeong;Lim, Seok-Tae;Youn, Hyun-Jo;Sohn, Myung-Hee
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.4
    • /
    • pp.301-306
    • /
    • 2008
  • Purpose: Recently, the number of patients who need total thyroidectomy and high-dose radioiodine therapy (HD-RAI) has increased. The aim of this study was to evaluate whether pathological staging (PS) and serum thyroglobulin (sTG) level could replace the diagnostic I-123 scan in determining the therapeutic dose of HD-RAI in patients with differentiated thyroid cancer. Materials and Methods: Fifty-eight patients (M:F = 13:45, age $44.5{\pm}11.5\;yrs$) who underwent total thyroidectomy and central or regional lymph node dissection for differentiated thyroid cancer were enrolled. The diagnostic I-123 scan and sTG assay were performed in the off state of thyroid hormone. The therapeutic doses of I-131 (TD) were determined by the extent of uptake on the diagnostic I-123 scan as the gold standard. PS was graded by the criteria of the 6th edition of the AJCC cancer staging manual, except for consideration of age. To compare the determination of therapeutic doses, PS and sTG were compared with the results of the I-123 scan. Results: All patients underwent HD-RAI. Among them, five patients (8.6%) were treated with 100 mCi of I-131, forty-three (74.1%) with 150 mCi, six (10.3%) with 180 mCi, three (5.2%) with 200 mCi, and one (1.7%) with 250 mCi. On the assessment of PS, average TDs were $154{\pm}25\;mCi$ in stage I (n=9), $175{\pm}50\;mCi$ in stage II (n=4), $149{\pm}21\;mCi$ in stage III (n=38), and $161{\pm}20\;mCi$ in stage IV (n=7). No statistically significant relation was found between PS and TD (p=0.169). Among the fifty-two patients with available sTG, 25 patients (48.1%) with sTG below 2 ng/mL were treated with $149{\pm}26\;mCi$ of I-131, 9 patients (17.3%) with $2{\leq}\;sTG\;<5\;ng/mL$ with $156{\pm}17\;mCi$, 5 patients (9.6%) with $5{\leq}\;sTG\;<10\;ng/mL$ with $156{\pm}13\;mCi$, 7 patients (13.5%) with $10{\leq}sTG\;<50\;ng/mL$ with $147{\pm}24\;mCi$, and 6 patients (11.5%) with sTG above 50 ng/mL with $175{\pm}42\;mCi$. No statistically significant relation between sTG level and TD (p=0.252) was found, either. Conclusion: PS and sTG could not replace the I-123 scan in determining the TD for the first HD-RAI in patients with differentiated thyroid cancer.

Variation of Leaf Characters in Cultivating and Wild Soybean [Glycine max (L.) Merr.] Germplasm (콩 재배종과 야생종 유전자원의 엽 형질 변이)

  • Jong, Seung-Keun;Kim, Hong-Sig
    • Korean Journal of Breeding Science
    • /
    • v.41 no.1
    • /
    • pp.16-24
    • /
    • 2009
  • Although leaf characters are important in soybean [Glycine max (L.) Merr.] breeding and in the development of cultural methods, very little information has been reported. The objectives of this study were to evaluate and analyze the relationships among leaf characters and to suggest possible classification criteria for cultivated and wild (Glycine soja Sieb. & Zucc.) soybeans. A total of 94 cultivated and 91 wild soybean accessions from the Soybean Germplasm Laboratory of Chungbuk National University were used for this study. The central leaflet of the second leaf from the top of the plant was selected to measure leaf characters. Average leaf length, leaf width, leaf area and leaf shape index (LSI) of cultivated and wild soybeans were 12.3$\pm$1.25 cm and 6.6$\pm$1.35 cm, 6.8$\pm$1.241 cm and 2.9$\pm$0.92 cm, 55.6$\pm$15.75 $cm^2$ and 14.3$\pm$7.83 $cm^2$, and 1.9$\pm$0.38 and 2.4$\pm$0.53, respectively. Based on LSI, three categories of leaf shape, i.e., oval, ovate and lanceolate, were defined as LSI$\leq$2.0, LSI 2.1~3.0 and 3.1$\leq$LSI, respectively. The percentages of oval, ovate and lanceolate leaf types among cultivated and wild soybean accessions were 78.7%, 17.0% and 4.3%, and 40%, 15.4% and 4.4%, respectively. Based on leaf length, three categories for cultivated soybeans, i.e., short ($\leq$11.0 cm), intermediate (11.1~13.0 cm) and long (13.1 cm$\leq$), and four categories for wild soybeans, i.e., short ($\leq$5.0 cm), intermediate (5.1~7.0 cm), long (7.0~9.0 cm) and very long (9.1 cm$\leq$), were defined. Short, intermediate and long leaf types were about 1/3, 1/2 and 1/6, respectively, in cultivated soybeans, and 15.4%, 40.7% and 39.5%, plus 4.4% of the very long leaf type, in wild soybeans. Cultivated and wild soybeans had leaf thickness, leaf area ratio (LAR), leaf angle and petiole length of 0.25$\pm$0.054 mm and 0.14$\pm$0.032 mm, 40.1$\pm$8.22 and 53.7$\pm$12.02, $37.6{\pm}5.89^{\circ}$ and $54.6{\pm}10.77^{\circ}$, and 23.9$\pm$5.89 cm and 5.9$\pm$2.33 cm, respectively. There were highly significant positive correlations between leaf length and leaf width, and negative correlations between LSI and leaf width, in both cultivated and wild soybeans. Although leaf area showed significant correlations with leaf length, leaf width and LSI in cultivated soybeans, wild soybeans showed no significant relationships among these characters. In general, soybeans with oval, ovate and lanceolate leaves differed significantly in leaf width and thickness. Cultivated soybeans with oval leaves had greater leaf area, while wild soybeans with oval or ovate leaves had longer petioles than those with lanceolate leaves.
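
The LSI-based classification above reduces to two thresholds. A tiny sketch, assuming LSI is leaf length divided by leaf width (consistent with the reported averages, though the abstract does not define it explicitly):

```python
def leaf_shape(length_cm, width_cm):
    """Classify a leaflet by leaf shape index, per the thresholds above."""
    lsi = length_cm / width_cm
    if lsi <= 2.0:
        return "oval"
    if lsi <= 3.0:
        return "ovate"
    return "lanceolate"

print(leaf_shape(12.3, 6.8))  # average cultivated leaflet -> "oval"
print(leaf_shape(6.6, 2.9))   # average wild leaflet -> "ovate"
```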

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies
    • /
    • no.22
    • /
    • pp.157-199
    • /
    • 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and had degenerated into a mere "functional organization." The solution to this was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" of the 1940s, the party emphasized the responsibility of the Resources Department (材料科), which extended beyond managing documents to collecting, organizing and providing various kinds of important information data. In the meantime, maintaining security in the composition of documents continued to be emphasized, through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communication between the central political organs and regional offices was emphasized through regular reports on work activities and the situations of the local areas. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the "Document Safekeeping Office (文件保管處)" of the Central Secretariat's Department of Documents. Although the Document Safekeeping Office, also called the "Central Repository (中央文庫)", could no longer accept additional archive transfers beginning in the early 1930s, the Resources Department continued to strengthen its role of safekeeping and providing documents and publication materials throughout the 1940s. In particular, collections of materials for research and study were carried out, and with the recovery of regions which had been under Japanese rule, massive amounts of archive and document materials were collected. After being stipulated by rules in 1931, the archive classification and cataloguing methods became actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of taking "importance" and "confidentiality" as the criteria of management emerged from a relatively early period, but the concept or process of appraisal that differentiated the preservation and discarding of documents was not clear. While implementing a system of secure management and restricted access for confidential information, the critical view on providing use of archive materials was very strong, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records and archives. The results were not always desirable, nor was there any guarantee that such experiences would lead to stable development; the historical conditions in which the Chinese Communist Party found itself probably made that inevitable. 
The most pronounced characteristic of this process is that the party not only pursued efficiency of records and archives management at the functional level but, while strengthening its self-awareness of the political significance of records for the Chinese Communist Party's revolutionary movement, also paid attention to the value of archive materials as actual evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from the existing production environment. This environment is two-sided: it produces data while using it, and the data so produced creates further value. Because of the massive scale of data, future information systems must process more data, in terms of quantity, than existing information systems; in terms of quality, they must be able to extract the desired information from that large volume of data. In a small-scale information system it is possible for a person to understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information system performance can be solved by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As these systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system has a large semantic data network through connections with other systems, has a wide range of databases that can be utilized, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, because the logistics information system, with its large amount of data, had become complicated and difficult to use. It takes pre-specified information from the logistics information system and displays it as web pages, so it is difficult to check anything beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category without a search function; as a result, it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics contract management and a component dictionary were additionally identified and included in the ontology. 
To confirm that the constructed ontology can be used for decision support, meaningful analysis functions were implemented, such as calculating the utilization rate of aircraft and querying performance-based logistics contracts. In particular, in contrast to past ontology studies that built static ontology databases, this study modeled time-series data whose values change over time, such as the daily state of each aircraft, and confirmed that the utilization rate can be calculated through the constructed ontology on various criteria, not only the directly computable one. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the usability of the constructed ontology. Finally, it is possible to calculate the failure rate or reliability of each component, including MTBF data for the selected items, based on actual part consumption records, and from these the mission reliability and the system reliability are calculated. To confirm the usability of the constructed ontology-based logistics situation management system, the proposed system was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, and found to be more useful and convenient than the existing system.
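
The reliability arithmetic mentioned at the end is standard: under a constant failure rate, the failure rate is the reciprocal of MTBF, a component's reliability over a mission is $R(t)=e^{-t/\mathrm{MTBF}}$, and series components multiply for mission reliability. A sketch with illustrative MTBF figures (none from the paper):

```python
import math

def reliability(mtbf_hours, mission_hours):
    """R(t) = exp(-t / MTBF) under a constant-failure-rate assumption."""
    return math.exp(-mission_hours / mtbf_hours)

def mission_reliability(mtbfs, mission_hours):
    """Series system: the mission succeeds only if every component survives."""
    r = 1.0
    for m in mtbfs:
        r *= reliability(m, mission_hours)
    return r

# Three hypothetical components with MTBFs of 400, 1200 and 250 hours
# flying a 3-hour mission.
print(mission_reliability([400.0, 1200.0, 250.0], mission_hours=3.0))
```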

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic concepts confronted in keyword bidding decisions: incentive incompatibility, limited information, myopia and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective that can be used in portfolio decision making. Previous research formulates CTR estimation models using CPC, Rank, Impressions, CVR, etc., individually or collectively, as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in estimating CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make keyword bidding decisions under limited information, and a strategic portfolio approach based on statistical models is necessary. To solve the problems of the classical SSA model, the new SSA model is designed on the basic assumption that Rank is the decision variable. Rank has been proposed in many papers as the best decision variable for predicting CTR. Furthermore, most search engine platforms provide options and algorithms that make it possible to bid with Rank, so sponsors can participate in keyword bidding by Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing an optimal portfolio in keyword bidding. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through goodness-of-fit (GOF) tests, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover, together with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are performed empirically with the objectives of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and the CVR optimization test results confirm significant improvements, showing that the suggested SSA model is valid for constructing the keyword portfolio using the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit. 
To solve this problem, a Markov chain analysis is carried out, and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: Brand keywords are dominant in almost every aspect of CTR, CVR, expected profit, etc. Generic keywords, however, turn out to be the CTKs, with spillover potential that can increase consumer awareness and lead consumers to Brand keywords; Generic keywords therefore deserve focus in keyword bidding. The contributions of the thesis are to propose a novel SSA model with Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of the keywords, to propose statistical modeling and management based on Rank in constructing the keyword portfolio, and to perform empirical tests and propose new strategic guidelines: focusing on the CTKs, and a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected profit models.
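
To make the "Rank as decision variable" framing concrete, the toy sketch below picks one Rank per keyword to maximize expected clicks within a budget, which is the shape of the CTR optimization model above; the per-(keyword, Rank) CTR and CPC figures are invented stand-ins for the statistically estimated ones in the paper.

```python
from itertools import product

# Hypothetical inputs: monthly impressions per keyword, and estimated
# CTR / CPC for each (keyword, rank) pair the sponsor may bid for.
impressions = {"brand": 5000, "generic": 20000}
ctr = {("brand", 1): 0.080, ("brand", 3): 0.050,
       ("generic", 1): 0.020, ("generic", 3): 0.009}
cpc = {("brand", 1): 0.9, ("brand", 3): 0.5,
       ("generic", 1): 0.6, ("generic", 3): 0.3}
BUDGET = 500.0  # cap on expected spend

best = None
for ranks in product([1, 3], repeat=2):          # one rank choice per keyword
    plan = dict(zip(["brand", "generic"], ranks))
    clicks = sum(impressions[k] * ctr[(k, r)] for k, r in plan.items())
    cost = sum(impressions[k] * ctr[(k, r)] * cpc[(k, r)] for k, r in plan.items())
    if cost <= BUDGET and (best is None or clicks > best[0]):
        best = (clicks, plan, cost)
print(best)  # expected clicks, chosen ranks, expected cost
```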

Development of Analytical Method for Detection of Fungicide Validamycin A Residues in Agricultural Products Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살균제 Validamycin A의 시험법 개발)

  • Park, Ji-Su;Do, Jung-Ah;Lee, Han Sol;Park, Shin-min;Cho, Sung Min;Shin, Hye-Sun;Jang, Dong Eun;Cho, Myong-Shik;Jung, Yong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.1
    • /
    • pp.22-29
    • /
    • 2019
  • Validamycin A is an aminoglycoside fungicide, produced by Streptomyces hygroscopicus, that inhibits trehalase. The purpose of this study was to develop a method for detecting validamycin A in agricultural samples in order to establish MRL values for use in Korea. Validamycin A residues were extracted from samples with methanol/water (50/50, v/v) and purified with a hydrophilic-lipophilic balance (HLB) cartridge. The analyte was quantified and confirmed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) in positive ion mode using multiple reaction monitoring (MRM). Matrix-matched calibration curves, prepared in blank extract, were linear over the calibration range (0.005~0.5 ng) with $R^2$ > 0.99. The limits of detection and quantification were 0.005 and 0.01 mg/kg, respectively. For validation, recovery studies were carried out at three concentration levels (LOQ, $LOQ{\times}10$, $LOQ{\times}50$) with five replicates at each level (n = 5). Average recoveries ranged from 72.5 to 118.3%, with relative standard deviations (RSD) of less than 10.3%. All values met the criteria set out in the Codex guidelines (CAC/GL 40-1993, 2003) and the NIFDS (National Institute of Food and Drug Safety) guideline (2016). Therefore, the proposed analytical method is accurate, effective and sensitive for validamycin A determination in agricultural commodities.
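
The recovery and precision check behind the validation can be sketched in a few lines: mean recovery and RSD per spiking level, compared with an acceptance window. The 70~120% / RSD ≤ 20% limits below are illustrative only; the actual Codex CAC/GL 40 acceptance limits vary with concentration, and the replicate values are made up.

```python
import statistics

def validate(recoveries_pct, low=70.0, high=120.0, max_rsd=20.0):
    """Check one spiking level: mean recovery within [low, high]% and
    RSD (stdev/mean * 100) at or below max_rsd. Limits are illustrative."""
    mean = statistics.mean(recoveries_pct)
    rsd = statistics.stdev(recoveries_pct) / mean * 100.0
    return low <= mean <= high and rsd <= max_rsd, mean, rsd

loq_reps = [72.5, 75.1, 78.9, 74.3, 76.0]  # five hypothetical replicates at LOQ
print(validate(loq_reps))  # (passes?, mean recovery %, RSD %)
```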