• Title/Summary/Keyword: transactions

Search Results: 45,713

Energy-Efficient Division Protocol for Mobile Sink Groups in Wireless Sensor Network (무선 센서 네트워크에서 이동 싱크 그룹의 분리를 지원하기 위한 라우팅 프로토콜)

  • Jang, Jaeyoung;Lee, Euisin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.1
    • /
    • pp.1-8
    • /
    • 2017
  • Communications for mobile sink groups such as rescue teams or platoons raise a new and challenging mobility-handling issue in wireless sensor networks. Accordingly, many studies have been proposed to support mobile sink groups. On closer inspection, mobile sink groups can divide into multiple small groups according to the properties of their applications; for example, a platoon can split into multiple squads to carry out its mission on the battlefield. However, previous studies cannot efficiently support the division of mobile sink groups because they do not address three challenging issues engendered by it. The first issue is selecting a leader sink for a new small mobile sink group. Efficient data delivery from a source to the small mobile sink groups is the second issue. The third issue is sharing data between the leader sinks of the small mobile sink groups. This paper therefore proposes a routing protocol that efficiently supports the division of mobile sink groups by solving these three issues. For the first issue, the proposed protocol selects as leader of a new small mobile sink group the sink that minimizes the sum of the distance to the previous leader sink and the distances to all of its member sinks. For the second issue, efficient data delivery from a source to the small mobile sink groups, the protocol determines the path that minimizes the data dissemination distance from the source to each small mobile sink group, calculated from the location information of both the source and the leader sinks. For the third issue, the protocol exploits member sinks located between leader sinks, using their location information, to provide efficient data sharing among the leader sinks. Simulation results verify that the proposed protocol is superior to the previous protocols in terms of energy consumption.
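
A minimal sketch of the leader-selection rule for the first issue, assuming Euclidean coordinates and hypothetical sink positions (the abstract does not specify the distance metric or how the candidate set is formed, so both are assumptions here):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_leader(candidates, prev_leader, members):
    """Pick the candidate sink minimizing the sum of its distance to the
    previous leader plus its distances to all member sinks, as the
    protocol's first-issue rule describes."""
    def cost(c):
        return dist(c, prev_leader) + sum(dist(c, m) for m in members)
    return min(candidates, key=cost)

# Hypothetical positions for a group that has just divided.
prev_leader = (50.0, 50.0)
members = [(10.0, 12.0), (14.0, 9.0), (8.0, 15.0)]
print("new leader sink:", select_leader(members, prev_leader, members))
```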

An efficient interconnection network topology in dual-link CC-NUMA systems (이중 연결 구조 CC-NUMA 시스템의 효율적인 상호 연결망 구성 기법)

  • Suh, Hyo-Joong
    • The KIPS Transactions:PartA
    • /
    • v.11A no.1
    • /
    • pp.49-56
    • /
    • 2004
  • The performance of multiprocessor systems is limited by several factors: processor speed, memory delay, and interconnection network bandwidth and latency. With the evolution of semiconductor technology, off-the-shelf microprocessor speeds have moved beyond the GHz range, and processors can be scaled up to multiprocessor systems by connecting them through interconnection networks. In this situation, system performance is bound by the latency and bandwidth of the interconnection network. SCI, Myrinet, and Gigabit Ethernet are widely adopted as high-speed interconnection links for high-performance cluster systems. Interconnection network performance can be improved by extending bandwidth and minimizing latency. Raising the operating clock speed is a simple way to improve both, but the physical link distance makes a high clock frequency difficult to attain, so system performance and scalability suffer from the interconnection network limitation. Duplicating the links of the interconnection network is one solution to this bottleneck in scalable systems; the dual-ring SCI link structure is one example of such an improvement. In this paper, I propose a network topology and a transaction path algorithm that optimize latency and efficiency under duplicated links. Simulation results show that the proposed structure achieves 1.05 to 1.11 times better latency and 1.42 to 2.1 times faster execution than dual-ring systems.
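
For background on the dual-ring baseline the paper compares against, a small sketch of the path choice that duplicated links make possible; the node count and addressing are hypothetical, and the paper's own topology and transaction path algorithm are not detailed in the abstract:

```python
def dual_ring_hops(src, dst, n_nodes):
    """Hop count on a dual ring: with one ring per direction,
    a transaction can always take the shorter way around."""
    clockwise = (dst - src) % n_nodes
    counter = (src - dst) % n_nodes
    return min(clockwise, counter)

# With 8 nodes, the worst case drops from 7 hops on a single
# unidirectional ring to 4 hops when both directions are available.
print(max(dual_ring_hops(0, d, 8) for d in range(1, 8)))  # -> 4
```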

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.243-250
    • /
    • 2004
  • Monitoring is used to see whether a real-time system provides its services on time. Generally, real-time monitoring focuses on investigating the current status of the system. To support stable performance, however, a real-time system needs not only a function to observe the current status of real-time processes but also a function to predict their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it needs a statistical pre-analysis before making a prediction. Third, its transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. The model removes unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through a trend analysis of past execution data. Above all, the model is designed to support dynamic prediction performed during a real-time process's execution. Experimental results show that judgment accuracy exceeds 80% when the training set size is over 10 and, for multi-level prediction, that the prediction error is minimized when the number of executions is larger than the training set size. The proposed model has two limitations: it uses the simplest learning algorithm, and it does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
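
A minimal sketch of a sliding-window execution-time predictor in the spirit of the model above, assuming a moving average adjusted by the recent trend; the paper says only that it uses the simplest learning algorithm, so this concrete scheme, and the training-set size of 10 borrowed from the experiments, are assumptions:

```python
import statistics

def predict_next(history, window=10):
    """Predict the next execution time from a sliding training set:
    the mean of the recent window, shifted by the window's linear
    trend projected to the window midpoint."""
    recent = history[-window:]
    avg = statistics.fmean(recent)
    trend = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    return avg + trend * (len(recent) / 2)

# Hypothetical execution times (ms) of a real-time process.
times = [10.1, 10.3, 10.2, 10.6, 10.4, 10.8, 10.7, 11.0, 10.9, 11.2]
print(round(predict_next(times), 2))
```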

Integrated Management Data Warehouse Development Process of Research Expenses in Enterprise Environment (엔터프라이즈 환경의 연구비 통합관리 데이터 웨어하우스 개발 프로세스)

  • Choi, Seong-Man;Yoo, Cheol-Jung;Chang, Ok-Bae
    • The KIPS Transactions:PartD
    • /
    • v.11D no.1
    • /
    • pp.183-194
    • /
    • 2004
  • The existing management of research expenses has been divided into three parts: budget planning, budget draw-up, and exact settlement of budget. However, this division has caused problems. Under the current circumstances it is necessary to secure research expenses steadily, to operate them efficiently, and to use them transparently. A study of the data warehouse development processes of existing system integration vendors (Inmon, IBM) against this trend shows the following. Inmon's data warehouse development process uses a systematic, gradual approach as a classical development cycle method, which causes overlap and requires feedback to the previous step at each step of the process. IBM's data warehouse development process has a different problem: because functions and data are separated while development is performed, it is difficult to tell which function refers to and modifies which data. This paper suggests an integrated management data warehouse development process for research expenses in the enterprise environment that applies UML at the planning and analysis step, the design step, and the implementation and test step. An information retrieval agent uses the existing budget plan DB, budget draw-up DB, and budget settlement DB to find the information a user wants, collecting and saving it in an integration database; an information integration agent extracts, transports, transforms, and loads the data. The information integration agent reduces a user's effort to access and check each of many information sources, and also screens out data the user may not need. As a result, the proposed development process reflects users' requirements as much as possible and provides the various types of information needed to set research expense management policy. It helps an end user reach desired analysis information quickly and obtain data from a comprehensive rather than a fragmentary viewpoint. Furthermore, by integrating three systems into one, it makes it possible to share data, integrate the system, reduce operating expenses, and simplify the decision-support environment.
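
A minimal sketch of the integration agent's role, merging the three research-expense sources into one per-project view; the row schemas, field names, and the flat in-memory representation are all hypothetical stand-ins for the paper's integration database:

```python
# Hypothetical rows from the three source databases.
budget_plan = [{"project": "P1", "planned": 100_000}]
budget_drawup = [{"project": "P1", "drawn": 80_000}]
budget_settle = [{"project": "P1", "settled": 75_000}]

def integrate(plan, drawup, settle):
    """Merge the three research-expense sources into one per-project
    view, the extract/transform/load role of the integration agent."""
    merged = {}
    for rows, key in ((plan, "planned"), (drawup, "drawn"), (settle, "settled")):
        for row in rows:
            merged.setdefault(row["project"], {})[key] = row[key]
    return merged

warehouse = integrate(budget_plan, budget_drawup, budget_settle)
print(warehouse)  # {'P1': {'planned': 100000, 'drawn': 80000, 'settled': 75000}}
```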

The Influential Factor Analysis in the Technology Valuation of The Agri-Food Industry and the Simulation-Based Valuation Analysis (농식품 산업의 기술평가 영향요인 분석과 시뮬레이션 기반 기술평가 비교)

  • Kim, Sang-gook;Jun, Seung-pyo;Park, Hyun-woo
    • Journal of Technology Innovation
    • /
    • v.24 no.4
    • /
    • pp.277-307
    • /
    • 2016
  • Since 2011, the DCF (Discounted Cash Flow) method has been the primary method for valuing R&D technology assets in the agricultural food industry, and recently technology valuation based on royalty comparisons among technology transfer transactions has been carried out in parallel when evaluating assets such as new seed development technologies. Because the DCF method involves many input variables that must be estimated, sophisticated estimation is required at the time of valuation. In addition, considering more similar trading cases when applying the sales transaction comparison or industry norm method based on technology transfer royalty information is an issue that must be addressed in the same way in the Agri-Food industry. The main input variables used for technology valuation in the Agri-Food industry are the life cycle of the technology asset, financial information related to the industry, the discount rate, and the technology contribution rate. The evaluation organization for the Agri-Food segment has recently been building infrastructure and updating data related to technology valuation on a regular basis. This study identifies the variables with the greatest impact on existing technology valuation results in the Agri-Food industry, and clarifies the difference between existing valuation results and the outcomes obtained when the latest input information is applied in the DCF method. In addition, while presenting a scheme to complement the fragmentary information through which the latest input data influence valuation results, we perform a comparative analysis between existing valuation results and outcomes evaluated after updating the reference data used to set the DCF input estimates. To perform these analyses, we first selected representative past valuation cases in the Agri-Food industry, applied a sensitivity analysis to the input variables of those cases, and then ran a simulation analysis using the key input variables identified by the sensitivity analysis. The results show the need to modernize the data behind the input variables used in valuing technology assets in the Agri-Food sector and to build infrastructure for the key DCF input variables, and are thus expected to provide more useful information about valuation outcomes.
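
A minimal sketch of the DCF calculation with the input variables named above (life cycle, discount rate, technology contribution rate) plus a one-at-a-time sensitivity pass; all numeric inputs are hypothetical, and the paper's actual cash-flow projections and rates are not reproduced here:

```python
def dcf_tech_value(cash_flows, discount_rate, tech_contribution):
    """Value of a technology asset under DCF: present value of the
    projected cash flows over the asset's life cycle, scaled by the
    technology contribution rate."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    return pv * tech_contribution

# Hypothetical inputs: 5-year life cycle, 12% discount rate,
# 30% technology contribution rate (cash flows in million KRW).
flows = [120.0, 150.0, 170.0, 160.0, 140.0]
print(round(dcf_tech_value(flows, 0.12, 0.30), 1))

# One-at-a-time sensitivity over the discount rate, in the spirit
# of the paper's sensitivity analysis.
for rate in (0.10, 0.12, 0.14):
    print(rate, round(dcf_tech_value(flows, rate, 0.30), 1))
```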

Current Status and Management of Alien Turtles in Korea (외래거북의 국내 현황 및 관리방안)

  • Lee, Do-Hun;Kim, Young-Chae;Chang, Min-Ho;Kim, Suhwan;Kim, Dongeon;Kil, Jihyon
    • Journal of Environmental Impact Assessment
    • /
    • v.25 no.5
    • /
    • pp.319-332
    • /
    • 2016
  • Alien turtles belonging to the genus Trachemys have been designated as Invasive Alien Species since 2001, and their import has been banned in Korea. However, the current status of import and distribution of other alien turtles has not been reported. In this study, we aimed to investigate the taxa of alien turtles introduced into Korea, to assess their potential risks to natural ecosystems, and to suggest future management directions for them in Korea. We identified 73 species of alien turtles belonging to 9 families. Since 2008, more than 6,000 kg of turtles have been imported annually and widely distributed through pet shops, traditional markets, and individual transactions. In surveys of natural habitats, we found 8 species belonging to 3 families, including Chrysemys picta, Pseudemys concinna, P. nelsoni, P. peninsularis, P. rubriventris, Mauremys sinensis, Macrochelys temminckii and Trachemys scripta, inhabiting 12 study sites. Considering their designation as invasive alien species in other countries, 13 of the 73 alien turtle species could have serious adverse impacts on ecosystems. For the management of alien turtles, it is necessary to register alien turtles on the import list and to share general information such as import purpose, distribution, and management condition among relevant authorities. Breeders and distributors must be obliged to identify their turtles and keep management records. The government must check the transfer and movement of turtles periodically to prevent their introduction and spread into natural environments. Changes in alien turtle populations in natural habitats should be monitored, and a management plan should be developed to control alien turtles in areas where their impacts are significant.

A Case Study of Artist-centered Art Fair for Popularizing Art Market (미술 대중화를 위한 작가중심형 아트페어 사례 연구)

  • Kim, Sun-Young;Yi, Eni-Shin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.2
    • /
    • pp.279-292
    • /
    • 2018
  • Unlike the global art market, which recovered rapidly from the impacts of the 2008 Global Financial Crisis, the Korean art market has not yet fully recovered. The gallery-oriented distribution system, the weak functioning of the primary art market, and a market structure centered on a small number of collectors make it difficult for young and mid-career artists to enter the market and, as a result, deepen the economic polarization of artists. In addition, the high price of artworks restricts participation by the general public. This study began from the view that public interest and participation in the art market are urgently needed. To this end, we noted that public awareness of art transactions can be a starting point for strengthening the constitution of the fragile art market, focusing on the 'Artist-centered Art Fair' rather than existing art fairs. To examine the contribution of such an art fair to the popularization of the art market, we analyzed the case of the 'Visual Artist Market (VAM)' project of the Korea Arts Management Service. The results show that the 'Artist-centered Art Fair' focuses on providing market entry opportunities to young and mid-career artists rather than on the interests of distributors, and promotes the popularization of the art market by offering low-priced works to the general public. The 'Artist-centered Art Fair' also appears to play a primary role in the public sector in fostering solid groups of artists and establishing healthy distribution networks in the Korean art market. In the long run, however, it is necessary to promote the sustainable development of the 'Artist-centered Art Fair' through indirect support, such as the provision of a publicity platform or consumer finance support, rather than direct support.

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets are huge these days. Streams of data are normally accumulated into data storages or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size; others are lost as soon as they are created because, for many reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time; yet if one uses only recent or partial data to mine information or patterns, valuable and potentially useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of rule sets over time. A rule set is mined from each data set in the stream, and this rule set is accumulated into a master rule set storage, which also serves as a model for real-time decision making. One main advantage of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model: prompt response to user requests is possible at any time, as the rule set is always ready to make decisions. This makes real-time decision making possible, which is the greatest advantage of the method. Based on the theory of ensemble approaches, combining many different models can produce a better prediction model; the consolidated rule set effectively covers the whole data set, while the traditional sampling approach covers only part of it. This study uses stock market data, a heterogeneous data set whose characteristics vary over time. Stock market indexes fluctuate whenever an event influences the index, so the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from the most recent data to predict new data; the second induces a rule set, every time a prediction is needed, from all the data accumulated from the beginning. Neither proved as good as the accumulated rule set method in performance. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model using only the more important rule sets, while the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. The method has the limitation that its application here is confined to stock market data; more dynamic real-time stream data sets are desirable for applying it. Another open issue is that, as the number of rules grows over time, special rules such as redundant or conflicting rules must be managed efficiently.
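
A minimal sketch of the accumulation scheme described above, assuming rules are (condition, label, weight) triples and that prediction is a performance-weighted vote; the rule-mining step itself is omitted, and the example rules are hypothetical:

```python
from collections import defaultdict

class MasterRuleSet:
    """Accumulates rules mined from each stream chunk and predicts by
    performance-weighted voting over the rules whose conditions fire."""
    def __init__(self):
        self.rules = []  # list of (condition, label, weight)

    def accumulate(self, mined_rules):
        """Add the rules mined from the latest chunk to the master set."""
        self.rules.extend(mined_rules)

    def predict(self, record):
        votes = defaultdict(float)
        for condition, label, weight in self.rules:
            if condition(record):
                votes[label] += weight
        return max(votes, key=votes.get) if votes else None

# Hypothetical rules mined from two chunks of stock index data.
master = MasterRuleSet()
master.accumulate([(lambda r: r["momentum"] > 0, "up", 0.8)])
master.accumulate([(lambda r: r["volume"] > 1e6, "up", 0.3),
                   (lambda r: r["momentum"] <= 0, "down", 0.7)])
print(master.predict({"momentum": 1.2, "volume": 2e6}))  # -> 'up'
```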

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song Ki-Won;Lee Kyung-Whan
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.417-428
    • /
    • 2005
  • Today, IT organizations perform projects with a vision tied to marketing and financial profit. The objective of realizing that vision is to improve project-performing ability in terms of QCD, and organizations have made great efforts to achieve it through process improvement. Large companies such as IBM, Ford, and GE have attributed over 80% of their success to business process re-engineering using information technology, rather than to the improvement effect of computerization alone. To achieve the objective it is important to collect, analyze, and manage data on performed projects, but quantitative measurement is difficult because software is invisible and the effects and efficiency gains caused by process change cannot be identified visually; consequently, it is not easy to extract an improvement strategy. This paper measures and analyzes project performance, focusing on organizations' external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an OT (Opportunity Tree) model was designed for optimizing project performance. The design process is as follows. First, metadata are derived from projects and analyzed through a quantitative GQM (Goal-Question-Metric) questionnaire. Then the project performance model is designed with the data obtained from the questionnaire, and the organization's performance score for each area is calculated. The value is revised by integrating the measured scores with the vision weights for each area elicited from all stakeholders (CEO, middle managers, developers, investors, and customers). Through this, routes for improvement are presented and an optimized improvement method is suggested. Existing methods for improving software processes have been highly effective in dividing processes, but somewhat unsatisfactory in their structural capacity to develop and systematically manage strategies by applying those processes to projects. The proposed OT model provides a solution to this problem: it yields an optimal improvement method in line with the organization's goals and can reduce the risks that may occur in the course of process improvement when applied with the proposed methods. In addition, satisfaction with the improvement strategy is raised by obtaining vision-weight input from all stakeholders through the qualitative questionnaire and reflecting it in the calculation. The OT model is also useful for expanding market and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
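
A minimal sketch of the score-revision step, combining per-area QCD scores with vision weights averaged across stakeholder groups; the area names, scores, and weights are hypothetical, and the paper's exact aggregation formula is not given in the abstract:

```python
def weighted_performance(scores, stakeholder_weights):
    """Revise per-area QCD scores: average the vision weights elicited
    from each stakeholder group, normalize them, and take the weighted
    sum of the measured area scores."""
    areas = scores.keys()
    avg_w = {a: sum(w[a] for w in stakeholder_weights) / len(stakeholder_weights)
             for a in areas}
    total = sum(avg_w.values())
    return sum(scores[a] * avg_w[a] / total for a in areas)

# Hypothetical area scores and vision weights for Quality, Delivery,
# Cycle time, and Waste from two stakeholder groups.
scores = {"quality": 72, "delivery": 85, "cycle": 64, "waste": 58}
weights = [
    {"quality": 0.4, "delivery": 0.3, "cycle": 0.2, "waste": 0.1},  # CEO
    {"quality": 0.3, "delivery": 0.2, "cycle": 0.3, "waste": 0.2},  # developers
]
print(round(weighted_performance(scores, weights), 1))
```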

Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2575-2583
    • /
    • 1998
  • The production of highly reliable software systems and their performance evaluation have become important interests in the software industry. Software evaluation has mainly been carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs over a fixed time interval during the software testing phase. Theoretical software reliability models are sometimes unsuitable for the practical testing phase, in which a software error at a certain testing stage arises from imperfect debugging, abnormal software correction, and so on. Such a testing stage needs to be treated as an outlying stage, and we can assume that software reliability does not improve during this outlying stage owing to a nuisance factor. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, based on a modification of the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under assumed prior information and the squared error loss function. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model, across values of the nuisance parameter, using the accuracy, bias, trend, and noise metrics as quantitative evaluation criteria through computer simulation.
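
Since the model modifies Jelinski-Moranda, a minimal sketch of simulating inter-failure times under that model, with one gap inflated to mimic an outlying testing stage; under squared error loss the Bayes estimator is the posterior mean, but the estimation step is omitted here, and all numeric values are hypothetical:

```python
import random

def jm_interfailure_times(n_faults, phi, seed=1):
    """Simulate inter-failure times under the Jelinski-Moranda model:
    after the (i-1)-th fix the hazard is phi * (remaining faults), so
    the i-th inter-failure time is exponential with that rate."""
    rng = random.Random(seed)
    times = []
    for i in range(1, n_faults + 1):
        rate = phi * (n_faults - i + 1)
        times.append(rng.expovariate(rate))
    return times

# An 'outlying' testing stage can be injected by inflating one gap,
# mimicking imperfect debugging at that stage (hypothetical factor).
t = jm_interfailure_times(20, 0.05)
t[9] *= 10.0
print([round(x, 2) for x in t[:12]])
```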
