• Title/Summary/Keyword: 서비스 요소 (Service Elements)


Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services, v.19 no.1, pp.77-86, 2018
  • For the past few years, KISTI has been servicing an online simulation execution platform, called EDISON, allowing users to conduct simulations of various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue arising when conducting those simulations on an online platform stems from the fact that a number of users simultaneously submit simulation requests (or jobs) with the same (or almost unchanging) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are utilized not only for detecting duplicate simulation requests but also for searching existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search and reuse of the stored simulation results. We implemented a prototype of the proposed framework using the search engine in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into desired simulation software and retrieve existing results for the same input parameter values, if any. Therefore, we expect that the proposed framework will contribute to eliminating duplicate resource consumption and significantly reducing execution time for requests identical to previously executed simulations.
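  • Illustrative sketch (not from the paper): the duplicate-detection idea above can be approximated by fingerprinting a request's solver name and input parameters and looking that fingerprint up in an Elasticsearch index of provenance records before running the job. The index name, field names, and client calls below are assumptions for illustration only; the abstract does not describe IceSheet's actual schema.

```python
# A minimal sketch of provenance-based deduplication, NOT the actual IceSheet
# implementation. Index name, field names, and the elasticsearch-py 8.x style
# keyword arguments are assumptions.
import hashlib
import json

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local cluster
PROVENANCE_INDEX = "simulation-provenance"    # hypothetical index name


def fingerprint(solver: str, params: dict) -> str:
    """Canonicalize the solver name and input parameters into a stable hash."""
    canonical = json.dumps({"solver": solver, "params": params}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def find_previous_result(solver: str, params: dict):
    """Return a stored result for an identical past request, or None."""
    resp = es.search(
        index=PROVENANCE_INDEX,
        query={"term": {"input_fingerprint": fingerprint(solver, params)}},
        size=1,
    )
    hits = resp["hits"]["hits"]
    return hits[0]["_source"] if hits else None


def submit_or_reuse(solver: str, params: dict, run_simulation):
    """Reuse a cached result when available; otherwise run and record provenance."""
    previous = find_previous_result(solver, params)
    if previous is not None:
        return previous["output_path"]          # duplicate request: reuse result
    output_path = run_simulation(solver, params)
    es.index(index=PROVENANCE_INDEX, document={
        "solver": solver,
        "params": params,
        "input_fingerprint": fingerprint(solver, params),
        "output_path": output_path,
    })
    return output_path
```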

The Comparison of Image Quality between Computed Radiography(CR) and Direct Digital Radiography(DDR) which Follows the Proper Exposure Conditions in General Photographing under the Digital Radiography(DR) (Digital Radiography 환경하에서 일반촬영시 적정 노출조건에 따른 CR과 DDR의 Image Quality 비교)

  • Kim, Jin-Bae;Kang, Chung-Hwan;Kang, Sung-Jin;Park, Soo-In;Park, Jong-Won;Kim, Yeong-Su;Kim, Seung-Sik
    • Korean Journal of Digital Imaging in Medicine, v.5 no.1, pp.64-77, 2002
  • DR has become an important factor not only in the department of radiology but also in the productivity and work efficiency of the hospital as a whole. The DR environment has more varied parameters than CR, so it can provide a higher quality of medical services. The radiology departments of many hospitals have changed from Film-Screen systems to DR through Full-PACS. This hospital, which uses Full-PACS, conducted this study to determine the proper exposure conditions for CR and DDR and how their image quality is expressed among general photographing systems in the DR environment. The experiment showed that the image quality of DDR is better than that of CR under the same exposure conditions. In the DDR system, the image score obtained using AEC was slightly higher than the score obtained without it. In particular, the AEC function of DDR proved useful for improving image quality in skull and chest examinations. (The AEC function is a tool that detects the ionized current of the X-rays passing through the object, using an ion chamber located in the detector, and terminates the X-ray exposure when the proper density is reached.) Because the proper density can be obtained with this system, images can be taken much more easily without considering the exposure conditions for objects of various thicknesses. The results of this experiment show that the selection of proper exposure conditions plays an important role in obtaining good image quality. More research will be necessary on the DDR system, which has great potential for the future.
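  • Illustrative sketch (not from the paper): the parenthetical description of AEC above is essentially a feedback cutoff, in which the ion-chamber signal is integrated until a preset density is reached and the exposure is then terminated. The toy loop below only illustrates that control logic; all dose values and thresholds are made-up numbers, not measurements from this study.

```python
# Toy illustration of the AEC cutoff described above: integrate the detector
# signal over time and terminate the exposure once a preset threshold is
# reached. All numbers are illustrative, not taken from the paper.

def aec_exposure(dose_rate_mgy_per_s: float,
                 target_dose_mgy: float = 2.5,
                 max_time_s: float = 1.0,
                 dt: float = 0.001) -> float:
    """Return the exposure time (s) at which the AEC cuts off the beam."""
    accumulated = 0.0
    t = 0.0
    while t < max_time_s:
        accumulated += dose_rate_mgy_per_s * dt   # ion-chamber integration
        t += dt
        if accumulated >= target_dose_mgy:        # proper density reached
            break
    return t

# A thicker object attenuates the beam (lower dose rate at the detector),
# so the AEC simply holds the exposure longer to reach the same density.
print(aec_exposure(dose_rate_mgy_per_s=10.0))  # thin object -> shorter exposure
print(aec_exposure(dose_rate_mgy_per_s=4.0))   # thick object -> longer exposure
```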


Dynamic Traffic Assignment Using Genetic Algorithm (유전자 알고리즘을 이용한 동적통행배정에 관한 연구)

  • Park, Kyung-Chul;Park, Chang-Ho;Chon, Kyung-Soo;Rhee, Sung-Mo
    • Journal of Korean Society for Geospatial Information Science, v.8 no.1 s.15, pp.51-63, 2000
  • Dynamic traffic assignment (DTA) has been a topic of substantial research during the past decade. While DTA is gradually maturing, many aspects of DTA still need improvement, especially regarding its formulation and solution algorithms. Recently, with its promise for ITS (Intelligent Transportation Systems) and GIS (Geographic Information System) applications, DTA has received increasing attention. This potential also implies higher requirements for DTA modeling, especially regarding solution efficiency for real-time implementation. However, DTA poses many mathematical difficulties in the search process due to the complexity of its spatial and temporal variables. Although many solution algorithms have been studied, conventional methods cannot find the solution when the objective function or constraints are not convex. In this paper, a genetic algorithm is applied to find the solution of DTA, and the Merchant-Nemhauser model is used as the DTA model because it has a nonconvex constraint set. To handle the nonconvex constraint set, the GENOCOP III system, a kind of genetic algorithm, is used in this study. Results for a sample network are compared with the results of a conventional method.
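  • Illustrative sketch (not from the paper): the genetic-algorithm approach above can be outlined as selection, crossover, and mutation applied to candidate solutions, with a penalty term standing in for constraints that gradient-based methods handle poorly when they are nonconvex. The sketch below is not the Merchant-Nemhauser formulation or GENOCOP III; the objective and constraints are illustrative placeholders.

```python
# A minimal genetic-algorithm sketch for a constrained minimization problem.
# This is NOT the Merchant-Nemhauser DTA model or GENOCOP III; it only shows
# how a GA with a penalty term can search a nonconvex feasible region.
import random

POP_SIZE, GENERATIONS, DIM = 60, 200, 4
LOWER, UPPER = 0.0, 10.0


def objective(x):
    # Illustrative nonconvex objective (stands in for the DTA cost function).
    return sum(xi ** 2 - 3.0 * abs(xi - 4.0) for xi in x)


def penalty(x):
    # Illustrative constraints: total flow must equal 12 (quadratic penalty)
    # plus a nonconvex coupling term between the first two variables.
    g = sum(x) - 12.0
    h = max(0.0, 5.0 - x[0] * x[1])
    return 1e3 * (g * g + h * h)


def fitness(x):
    return objective(x) + penalty(x)


def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b


def crossover(p1, p2):
    # Arithmetic crossover keeps offspring inside the box [LOWER, UPPER].
    w = random.random()
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]


def mutate(x, rate=0.1, scale=0.5):
    return [min(UPPER, max(LOWER, xi + random.gauss(0.0, scale)))
            if random.random() < rate else xi for xi in x]


population = [[random.uniform(LOWER, UPPER) for _ in range(DIM)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = min(population, key=fitness)
print("best solution:", [round(v, 3) for v in best], "cost:", round(fitness(best), 3))
```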


VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.177-192, 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making and forecasting. One of the representative machine learning models is the artificial neural network, a statistical learning algorithm inspired by the neural network structure of biology. In addition, there are other machine learning models such as the decision tree model, the naive Bayes model and the SVM (support vector machine) model. Among these models, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of SVM is to find a reasonable hyperplane that distinguishes different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belongs based on the hyperplane obtained from the given data set. Thus, the more meaningful data is available, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining machine learning with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved to be powerful in describing the non-stationary and chaotic dynamics of stock prices, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound word of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main task is to advise investors based on their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices. VKOSPI is similar to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices. The direction of VKOSPI and option prices show a positive relation regardless of the option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase because the probability that the option will be exercised increases. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises as volatility rises. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified through real option data that accurate forecasts of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI based on machine learning and applying the predictions to actual option trading.
In this study, we predicted daily VKOSPI changes using the SVM model and then entered an intraday option strangle position, which profits as option prices decline, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM's predictions is applicable to real option trading. The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100 entries). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
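  • Illustrative sketch (not from the paper): the forecasting step above can be outlined with a standard SVM classifier that labels each day by whether VKOSPI declines the next day and enters the strangle position only on predicted declines. The file name, feature set, and parameters below are assumptions for illustration; the abstract does not specify the paper's exact inputs.

```python
# A minimal sketch of SVM-based VKOSPI direction forecasting. The data file,
# feature columns, and labelling rule are illustrative assumptions, not the
# paper's actual inputs.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical daily data: VKOSPI levels and KOSPI 200 closes.
df = pd.read_csv("vkospi_daily.csv", parse_dates=["date"]).sort_values("date")
df["vkospi_chg"] = df["vkospi_close"].pct_change()
df["kospi200_ret"] = df["kospi200_close"].pct_change()
df["label"] = (df["vkospi_close"].shift(-1) < df["vkospi_close"]).astype(int)  # 1 = decline
df = df.dropna()

X = df[["vkospi_chg", "kospi200_ret", "vkospi_close"]]
y = df["label"]

# Keep chronological order: train on the earlier part, test on the later part.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("directional accuracy:", model.score(X_test, y_test))

# A strangle position would be entered only on days the model predicts a decline.
enter_position = model.predict(X_test) == 1
```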

Analysis of Sustainable Development Goals(SDGs) and 'Housing' Contents in Middle School Technology·Home Economics Textbooks (중학교 기술·가정 교과서의 '주생활' 단원 내용과 관련된 지속가능발전목표(SDGs) 분석)

  • Choi, Seong-Youn;Lee, Young-Sun;Kim, Eun-Jong;Kim, Seung-Hee;Lee, Ji-Sun;Cho, Jae-Soon
    • Journal of Korean Home Economics Education Association, v.31 no.1, pp.115-136, 2019
  • The purpose of this study is to analyze the contents of the 'housing' unit in middle school Technology-Home Economics textbooks according to the 2015 revised curriculum, based on the targets of the SDGs. All contents of the ten textbooks from five publishers, including texts, photographs/figures/tables, activity tasks, and supplementary materials, were analyzed in terms of SDGs targets. The number of 'housing' content items among the 4 small housing units of Technology-Home Economics books 1 and 2 varied from 64 to 97 by publisher. Besides SDG 4.7, which contains inclusive and general ESDGs, 24 targets of 10 SDGs were found to be related to the 'housing' contents and were grouped into 15 target categories. The number of SDGs target categories related to the housing contents of each small unit, and of all units in total, differed by publisher. Each of the 4 small 'housing' units from all five publishers was related to 6~10 target categories. The contents of the five book 1's were related to a smaller number of target categories than those of the five book 2's, corresponding to 9 and 12 target categories, respectively. Only SDGs target 11.1 (appropriate and safe housing and basic services) was related to all four small units of 'housing' contents across all five publishers, covering 43.8% of the housing contents. In conclusion, the contents of the 'housing' unit were related to a broad range of SDGs targets. Further study could relate the goals of teaching-learning plans to various global SDGs targets according to the contents of 'housing' in order to accomplish ESDGs.

A Review Essay on Legal Mechanisms for Orbital Slot Allocation (정지궤도슬롯의 법적 배분기제에 관한 논고)

  • Jung, Joon-Sik;Hwang, Ho-Won
    • The Korean Journal of Air & Space Law and Policy, v.29 no.1, pp.199-236, 2014
  • This paper analyses, from the perspective of distributive justice, the legal mechanisms for the international allocation of orbital slots, which are co-owned and thus limited natural resources in outer space. The allocative function is delegated to the International Telecommunication Union. The Radio Regulations (RR), among other legal instruments such as the Constitution and Convention by which the ITU and its contracting States abide, dictate how the orbital positions are distributed; thus, the RR are thoroughly reviewed in this essay. The mechanisms fall broadly into two systems: the 'a posteriori system', where the 'first come, first served' principle prevails; and the 'a priori system', designed to foster the utilisation of the slots by those who lack space resources and are, in particular, likely to be marginalised under the former system. The argument proceeds on the premise that the a posteriori system places under-resourced States in unfavourable positions in securing the slots. Against this notion, seven factors have been put forward to assert that the degradation of distributive justice arising from the 'first come, first served' rule, which lays the foundation for the system, could be either mitigated or counterbalanced by the alleged exceptions to the rule. However, the author of this essay argues counterevidence against those factors, thereby demonstrating that the principle still remains an overwhelming doctrine, posing a threat to the pursuit of fair allocation. The elements set forth are as follows: 1) the 'first come, first served' principle only applies to assignments capable of causing harmful interference; 2) the interoperability of the principle with the 'rule of conformity' with all the ITU instruments; 3) the viability of alternative registrations, as an exception to the application of the principle, on the condition of provisional and informational purposes; 4) another reference that matters in deciding priority: the types of services in the TFA; 5) the Rule of Procedure H40 proclaiming a ban on taking advantage of coming first to the Register; 6) the technical factors and equity-oriented norms under international and municipal laws; along with 7) changes in the 'basic characteristics' of registered assignments. The second half of this essay illustrates, by examining the relevant Annexes to the Regulations, that the planned allocation, i.e., the a priori system, bears structural flaws that hinder the fulfillment of the original purpose of the system. The Broadcasting and Fixed Satellite Systems are the reviewed Plans, in which the 'first come, first served' principle re-emerges in the end as a determining factor for granting the 'right to international recognition' to administrations, including those that have no allotted portions in the Plan.

A Study on the Roles and Ideological Development of Welfare Characteristics in Parks (공원복지 역할 및 이념 전개 양상에 관한 연구)

  • Han, So-Young;Cho, Han-Sol;Zoh, Kyung-Jin
    • Journal of the Korean Institute of Landscape Architecture, v.43 no.1, pp.69-81, 2015
  • Under the premise that parks have long been a field in which welfare ideology is put into practice for the benefit of citizens, this study began with a basic question about what substance a park has and how it has worked. Accordingly, this study sought the theoretical background that can explain the role of parks as an instrument of welfare, a topic currently under discussion, and examined how the ideology in the debate on the welfare characteristics of parks is differentiated from that of social welfare. In addition, this study divided the development of parks into the stages defined by Galen Cranz in order to see how the welfare benefits offered by parks have changed through their development, and looked into the roles and types of welfare functions that parks provided to citizens under the social circumstances of each period. Furthermore, the characteristics and development of the ideology underlying the welfare park were examined by function and element as it progressed. The results of this study are as follows. The functions that parks have performed so far can be classified into three categories. First, they have a remedial function: parks have given direct services to 'the socially disadvantaged', such as relief, fostering, and rehabilitation. Second, parks have played a preventive function: they aim to reinforce the functions of individuals, families, groups, and communities. Third, they have exerted a developmental function: they promote social change in ways that contribute to social development. Looking into the roles and functions of parks from the perspective of their beneficiary classes and benefits, the following were discovered. First, the beneficiaries of the welfare characteristics of parks have expanded from the poor to the general public, and the benefits of parks have spread to the public, including the underprivileged, in a real sense. Second, the significance of the welfare characteristics of parks has also changed from literal benefits to caring for basic human rights. Third, the purpose of the welfare characteristics of parks has changed from providing minimal conditions to providing optimal conditions. At the beginning, the ideology of welfare in parks remained ideal, confined to their idealistic characteristics; but as time went on, parks created various social benefits in response to diverse social demands, developing into a field where welfare ideology manifests and is realized in an active manner. Furthermore, it was observed that the parks and welfare of the present day stand at the point of contact between participation and universal well-being. This study reconsidered the meaning and value of parks from the perspective of parks as providers of welfare benefits and examined how the welfare ideology of parks is connected to practice. By doing so, this study identified the various roles, values, and ideologies that parks should bear in the future. Therefore, this study is expected to be a useful reference for future research on this topic.

A Study on Visual Identity of Korean Government (우리나라 행정부의 시각 정체성 연구)

  • Cho, Ju-Eun
    • Archives of design research, v.19 no.2 s.64, pp.261-272, 2006
  • As we cannot imagine our lives without a nation, the government is closely related to almost every part of our daily lives. The role of government is becoming more important in a complex modern society as an essential element of national authority, even though the government plays indirect and secondary roles in its functional performance. Therefore, the government has to be efficient in planning and executing its policies, and it needs to be representative and fair as part of a national authoritative community. In the 21st century, when the symbolic and cultural importance of images is growing, it is crucial for government organizations to have an integrated identity design system that can satisfy both of these requirements. However, the C.I. (Corporate Identity) of each Korean administrative branch has been developed separately and sporadically, which has resulted in a lack of consistency across the government. The shapes and materials of the C.I.s, which follow short-term design trends and popularity, also lack the uniqueness that would distinguish them from those of private corporations. This may suggest that our government lacks systematic administrative capability, since the image of an entity represents its characteristics and reality, and its recognition and evaluation by others become the entity's identity. From this perspective, the purpose of this thesis is to suggest an identity design system for the central administration in Korea that has certain rules and regularity together with a wide variety of possible alterations. To represent this visually, an identity design system with both integrity and variety of possible alterations is created based on traditional Korean culture, especially the concepts of Umyang-ohaeng and Samjae.


Attention to the Internet: The Impact of Active Information Search on Investment Decisions (인터넷 주의효과: 능동적 정보 검색이 투자 결정에 미치는 영향에 관한 연구)

  • Chang, Young Bong;Kwon, YoungOk;Cho, Wooje
    • Journal of Intelligence and Information Systems, v.21 no.3, pp.117-129, 2015
  • As the Internet becomes ubiquitous, a large volume of information is posted on the Internet, growing exponentially every day. Accordingly, it is not unusual for investors in stock markets to gather and compile firm-specific or market-wide information through online searches. Importantly, powerful search tools on the Internet make it easier for investors to acquire value-relevant information for their investment decisions. Our study examines whether the Internet helps investors assess a firm's value better, using firm-level data over a long period spanning January 2004 to December 2013. To this end, we construct weekly search volume for information technology (IT) services firms on the Internet. We limit our focus to IT firms because they often hold intangible assets and are relatively less recognized by the public, which makes them hard to measure. To obtain information on those firms, investors are more likely to consult the Internet and use the information to appraise the firms more accurately, eventually improving their investment decisions. Prior studies have shown that changes in search volumes can reflect various aspects of complex human behavior and forecast near-term values of economic indicators, including automobile sales, unemployment claims, and so on. Moreover, the search volume of firm names or stock ticker symbols has been used as a direct proxy for individual investors' attention in financial markets because, unlike indirect measures such as turnover and extreme returns, it can reveal and quantify the interest of investors in an objective way. Following this line of research, this study aims to gauge whether the information retrieved from the Internet is value-relevant in assessing a firm. We also use search volume for our analysis but, distinguished from prior studies, explore its impact on return comovements with market returns. Given that a firm's returns tend to comove with market returns excessively when investors are less informed about the firm, we empirically test the value of information by examining the association between Internet searches and the extent to which a firm's returns comove. Our results show that Internet searches are negatively associated with return comovements, as expected. When the sample is split by firm size, the impact of Internet searches on return comovements is shown to be greater for large firms than for small ones. Interestingly, we find a greater impact of Internet searches on return comovements for the years 2009 to 2013 than for earlier years, possibly due to more aggressive and informed use of Internet searches in obtaining financial information. We also complement our analyses by examining the association between return volatility and Internet search volumes. If Internet searches capture investors' attention associated with a change in firm-specific fundamentals such as new product releases, stock splits and so on, a firm's return volatility is likely to increase while search results provide value-relevant information to investors. Our results suggest that, in general, an increase in the volume of Internet searches is not positively associated with return volatility. However, we find a positive association between Internet searches and return volatility when the sample is limited to larger firms. A stronger result for larger firms implies that investors still pay less attention to the information obtained from Internet searches for small firms, even though the information is value-relevant in assessing stock values. However, we do not find any systematic differences by time period in the magnitude of the impact of Internet searches on return volatility. Taken together, our results shed new light on the value of information searched from the Internet in assessing stock values. Given the informational role of the Internet in stock markets, we believe the results will guide investors to exploit Internet search tools to become better informed and, as a result, improve their investment decisions.
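  • Illustrative sketch (not from the paper): the comovement test above can be outlined as a two-step procedure, first estimating each firm's market-model R² as the comovement measure and then regressing it on search volume in the cross-section. The file and column names below are assumptions for illustration; the paper's panel construction and control variables are not reproduced.

```python
# A minimal sketch of the comovement test: measure how strongly each firm's
# weekly returns comove with market returns (market-model R^2), then regress
# that comovement on the firm's Internet search volume. Column names and the
# data file are illustrative assumptions, not the paper's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("it_firms_weekly.csv")   # firm, week, firm_ret, mkt_ret, search_volume

rows = []
for firm, g in panel.groupby("firm"):
    # Market-model regression: firm return on market return.
    fit = sm.OLS(g["firm_ret"], sm.add_constant(g["mkt_ret"])).fit()
    rows.append({
        "firm": firm,
        "comovement_r2": fit.rsquared,                  # higher = stronger comovement
        "log_search": np.log1p(g["search_volume"].mean()),
    })

cross_section = pd.DataFrame(rows)

# Cross-sectional test: is search volume negatively associated with comovement?
result = sm.OLS(cross_section["comovement_r2"],
                sm.add_constant(cross_section["log_search"])).fit()
print(result.summary())
```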

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.99-120, 2010
  • Due to customers' widespread and frequent use of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels. Therefore, it is important that a call center has enough agents to offer a high level of customer satisfaction. However, employing too many agents would increase the operational costs of a call center by increasing labor costs. Therefore, predicting and calculating the appropriate size of a call center's human resources is one of the most critical success factors of call center management. For this reason, most call centers are currently establishing a department of WFM (Work Force Management) to estimate the appropriate number of agents and are directing much effort to predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert usually predicts the volume of calls by calculating the average calls of some periods and adjusting the average according to his or her subjective estimation. However, this kind of approach has fundamental limitations in that the result of the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert may predict inbound calls quite differently from another if the two experts have different opinions on selecting influential variables and priorities among the variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized. With a WFMS, a user can predict the volume of calls by calculating the average calls of each day of the week, excluding some eventful days. However, a WFMS requires too much capital during the early stage of system establishment. Moreover, it is hard to reflect new information in the system when some factors affecting the volume of calls have changed. In this paper, we attempt to devise a new model for predicting inbound calls that is not only based on theoretical background but also easily applicable to real-world applications. Our model was mainly developed using the interactive decision tree technique, one of the most popular techniques in data mining. Therefore, we expect that our model can predict inbound calls automatically based on historical data, while utilizing the expert's domain knowledge during the process of tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case of one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and the traditional WFMS is analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most of the experimental situations examined.
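  • Illustrative sketch (not from the paper): the decision-tree idea above can be outlined with a shallow regression tree over calendar features of historical daily call counts, whose printed rules a WFM analyst could review and adjust. The file and column names below are assumptions for illustration; the interactive, expert-guided tree construction described in the paper is not reproduced.

```python
# A minimal sketch of decision-tree-based inbound call forecasting. The data
# file and column names (including the holiday indicator) are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

calls = pd.read_csv("daily_inbound_calls.csv", parse_dates=["date"])
calls["day_of_week"] = calls["date"].dt.dayofweek
calls["month"] = calls["date"].dt.month
calls["is_holiday"] = calls["is_holiday"].astype(int)     # assumed indicator column

X = calls[["day_of_week", "month", "is_holiday"]]
y = calls["inbound_calls"]

# Train on the earlier part of the history, test on the most recent part.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)

# A shallow tree keeps the rules short enough for an expert to review and edit.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=14, random_state=0)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=list(X.columns)))   # human-readable rules
print("holdout MAPE:", (abs(tree.predict(X_test) - y_test) / y_test).mean())
```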