• Title/Summary/Keyword: 사용자효용 (user utility)

Search Results: 351

Scenario-Driven Verification Method for Completeness and Consistency Checking of UML Object-Oriented Analysis Model (UML 객체지향 분석모델의 완전성 및 일관성 진단을 위한 시나리오기반 검증기법)

  • Jo, Jin-Hyeong;Bae, Du-Hwan
    • Journal of KIISE: Software and Applications, v.28 no.3, pp.211-223, 2001
  • The purpose of the scenario-driven verification method proposed in this paper is to check the completeness and consistency of object-oriented analysis models written in UML. The overall verification procedure is performed step by step by cross-referencing the Use Case scenarios produced during Use Case modeling for requirements analysis against the object-behavior scenarios derived from the UML analysis model by reverse engineering, and by traversing a scenario information tree. For this procedure, the UML object-oriented analysis models are first converted into formal Use Case specifications using a formal specification language. Next, a scenario information tree representing the static structure of the objects in each Use Case is built from the formal specification, and individual scenario flows are laid onto the tree according to the message sequences, i.e., the dynamic object-behavior information contained in the formal Use Case specification. Finally, completeness and consistency checking is carried out by traversing the scenario information tree and consulting a scenario information table. That is, completeness of the analysis model with respect to the user requirements is checked by comparing the object-behavior scenarios generated by tracing the scenario information tree of the Use Case under verification against the Use Case scenarios obtained during requirements analysis. Consistency of the analysis model is then checked by building a scenario information table from the scenario-related information collected during tracing and confirming whether the class-related information produced in the analysis phase is covered by the scenarios. To demonstrate the usefulness of the proposed verification method, it was applied to a case study: an analysis model written in UML for the development of a university course registration system.
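
The tree construction and tracing step described above can be pictured with a small sketch like the one below. It is illustrative only, not the authors' implementation; the message names and the course-registration example are hypothetical.

```python
# Illustrative sketch only: a toy "scenario information tree" built from message
# sequences, traced to enumerate object-behavior scenarios and compared against
# Use Case scenarios for a completeness check. Not the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    message: str                      # message sent between objects
    children: list = field(default_factory=list)

def add_scenario(root: Node, messages: list) -> None:
    """Lay one scenario flow (a message sequence) onto the tree."""
    current = root
    for msg in messages:
        nxt = next((c for c in current.children if c.message == msg), None)
        if nxt is None:
            nxt = Node(msg)
            current.children.append(nxt)
        current = nxt

def trace(root: Node, path=()):
    """Enumerate object-behavior scenarios by traversing the tree."""
    if not root.children:
        yield list(path)
    for child in root.children:
        yield from trace(child, path + (child.message,))

# Completeness check: every Use Case scenario from requirements analysis
# should appear among the traced object-behavior scenarios.
root = Node("register")
add_scenario(root, ["checkPrerequisite", "enroll", "confirm"])
add_scenario(root, ["checkPrerequisite", "reject"])
use_case_scenarios = [["checkPrerequisite", "enroll", "confirm"]]
traced = list(trace(root))
print(all(s in traced for s in use_case_scenarios))  # True -> complete
```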


Causal inference from nonrandomized data: key concepts and recent trends (비실험 자료로부터의 인과 추론: 핵심 개념과 최근 동향)

  • Choi, Young-Geun;Yu, Donghyeon
    • The Korean Journal of Applied Statistics, v.32 no.2, pp.173-185, 2019
  • Causal questions are prevalent in scientific research, for example, how effective a treatment was for preventing an infectious disease, how much a policy increased utility, or which advertisement would give the highest click rate for a given customer. Causal inference theory in statistics interprets those questions as inferring the effect of a given intervention (treatment or policy) in the data generating process. Causal inference has been used in medicine, public health, and economics; in addition, it has recently received attention as a tool for data-driven decision making. Many recent datasets are observational, rather than experimental, which makes causal inference theory more complex. This review introduces key concepts and recent trends of statistical causal inference in observational studies. We first introduce the Neyman-Rubin potential outcome framework to formalize causal questions in terms of average treatment effects, and discuss popular methods for estimating treatment effects such as propensity score approaches and regression approaches. For recent trends, we briefly discuss (1) conditional (heterogeneous) treatment effects and machine learning-based approaches, (2) the curse of dimensionality in the estimation of treatment effects and its remedies, and (3) Pearl's structural causal model for dealing with more complex causal relationships and its connection to the Neyman-Rubin potential outcome model.
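
As a concrete illustration of the propensity score approaches mentioned in this abstract, the following is a minimal sketch (not from the paper) of estimating an average treatment effect by inverse propensity weighting on synthetic data; the data-generating process and all names are made up for demonstration.

```python
# Illustrative sketch: average treatment effect (ATE) via inverse propensity
# weighting on synthetic data. Not code from the reviewed paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                                # confounders
p = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))              # true propensity
t = rng.binomial(1, p)                                     # treatment assignment
y = 2.0 * t + x @ [1.0, 1.0, -1.0] + rng.normal(size=n)    # true ATE = 2.0

# Estimate the propensity score, then reweight outcomes by its inverse.
e = LogisticRegression(max_iter=1000).fit(x, t).predict_proba(x)[:, 1]
ate_ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))
print(f"IPW estimate of ATE: {ate_ipw:.2f}")               # close to 2.0
```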

Security Analysis on the Home Trading System Service and Proposal of the Evaluation Criteria (홈트레이딩 시스템 서비스의 보안 취약점 분석 및 평가기준 제안)

  • Lee, Yun-Young;Choi, Hae-Lahng;Han, Jeong-Hoon;Hong, Su-Min;Lee, Sung-Jin;Shin, Dong-Hwi;Won, Dong-Ho;Kim, Seung-Joo
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.1, pp.115-137, 2008
  • As the stock market grows, the use of HTS (Home Trading System) software for stock trading is increasing. HTS provides many functions such as stock quotation inquiry, investment counseling, and so on. However, while functions for convenience and usefulness continue to be developed and used, security functions for privacy and trading safety remain insufficient. In this paper, we analyze the security of HTS services through key-logging and sniffing and show that a great deal of private information is unintentionally exposed. We also identify vulnerable points of the system and propose evaluation criteria for a secure HTS.

A Study on Innovation Resistance and Adoption Regarding eXtended Reality Devices (확장현실 기기의 혁신저항과 수용에 관한 연구)

  • Jin, Seok
    • The Journal of the Korea Contents Association, v.21 no.5, pp.918-940, 2021
  • In this study, we define the concept of eXtended Reality (XR) devices, examine how they are applied across industries and how they are expected to develop, and empirically analyze how the influencing variables from the extended unified technology acceptance theory (UTAUT2) and innovation resistance affect adoption. The hypotheses are tested using PLS structural equation modeling. The empirical results confirm that innovativeness has a significant effect on the UTAUT2 acceptance variables (performance expectancy, effort expectancy, hedonic motivation, price value) for XR devices, and that these variables in turn affect attitudes toward and acceptance of XR. The pace of change of XR has a significant effect on perceived risk; perceived risk mediates the relationship between the pace of change and innovation resistance and has a significant effect on innovation resistance; and innovation resistance to XR devices has a significant negative effect on acceptance. This study is meaningful in that it deals expansively and comprehensively with personal innovativeness, the UTAUT2 acceptance variables, and the effect of perceived risk as a mediator between the pace of change and innovation resistance. In addition, it suggests that for innovative technologies such as XR to advance to the market-expansion stage, presenting strategies to reduce resistance to the new technology is as important as the value delivered to consumers.

A Resource Management Scheme Based on Live Migrations for Mobility Support in Edge-Based Fog Computing Environments (에지 기반 포그 컴퓨팅 환경에서 이동성 지원을 위한 라이브 마이그레이션 기반 자원 관리 기법)

  • Lim, JongBeom
    • KIPS Transactions on Software and Data Engineering, v.11 no.4, pp.163-168, 2022
  • As cloud computing and the Internet of Things become popular, the number of devices in Internet of Things computing environments is increasing, and various Internet-based applications, such as home automation and healthcare, are in use. Accordingly, existing studies have explored quality of service measures such as downtime and task reliability for Internet of Things applications. To enhance the quality of service of Internet of Things applications, cloud-fog computing (combining cloud computing and edge computing) can be used to offload burdens from the central cloud server to edge servers. However, when devices are mobile, the continuity and quality of service of Internet of Things applications can degrade. In this paper, we propose a resource management scheme based on live migration for mobility support in edge-based fog computing environments. The proposed resource management algorithm uses the mobility direction and pace to predict the expected position of a device and migrates its tasks to the target edge server. The performance results show that the proposed algorithm improves task reliability and reduces service downtime.
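
A minimal sketch of the idea described above, under a simple dead-reckoning assumption: predict the device's expected position from its direction and pace, then choose the nearest edge server as the live-migration target. This is illustrative only; the paper's actual prediction and migration policy may differ, and all identifiers are hypothetical.

```python
# Illustrative sketch only: mobility-based target selection for live migration.
import math

def predict_position(pos, heading_deg, pace, dt):
    """Dead-reckon the expected position after dt seconds."""
    rad = math.radians(heading_deg)
    return (pos[0] + pace * dt * math.cos(rad),
            pos[1] + pace * dt * math.sin(rad))

def select_migration_target(expected_pos, edge_servers):
    """Choose the edge server closest to the expected position."""
    return min(edge_servers, key=lambda s: math.dist(expected_pos, s["pos"]))

edge_servers = [{"id": "edge-A", "pos": (0.0, 0.0)},
                {"id": "edge-B", "pos": (100.0, 50.0)}]
expected = predict_position(pos=(10.0, 5.0), heading_deg=30.0, pace=2.5, dt=60)
print(select_migration_target(expected, edge_servers)["id"])  # -> "edge-B"
```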

Simulating a Tentacle Creature with External Magnetism for Animatronics (외부 자력을 이용한 촉수 생명체 애니매트로닉스 시뮬레이션)

  • Ye Yeong Kim;Do Hee Kim;Ju Ran Kim;Na Hyun Oh;Myung Geol Choi
    • Journal of the Korea Computer Graphics Society, v.29 no.5, pp.1-9, 2023
  • The control technology of animatronics is an interesting topic explored in various fields, including engineering, medicine, and art, with ongoing research efforts. The conventional method for controlling the movement of animatronics is to use electric motors installed inside the body. However, this method is difficult to apply when the body being depicted is too narrow to house such devices. In this study, we propose controlling the movement of a long, thin tentacle creature with external forces instead of mechanical devices installed inside the body. Specifically, the jointed body of the animatronic figure is made of a magnetic metal material so that it can be affected by the force of an externally installed electromagnet. The strength of the electromagnet is controlled by a PID controller to enable real-time control of the position of the animatronic body. In addition, the magnet is made to rotate, and the speed of rotation is varied to create various movements. Through virtual environment simulations, our experiments demonstrate the superiority of the proposed method, showcasing real-time control by users and the creation of animations in various styles.
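
The position-control loop described above can be pictured with a generic PID controller sketch. This is not the authors' code; the one-dimensional toy plant model below is an assumption made purely for demonstration.

```python
# Minimal PID controller sketch (illustrative only): the controller output would
# modulate the electromagnet strength so the tracked body converges to a target.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: drive a 1-D position toward target 1.0 with a crude plant model.
pid = PID(kp=2.0, ki=0.1, kd=0.3)
position, dt = 0.0, 0.01
for _ in range(1000):
    magnet_strength = pid.update(target=1.0, measured=position, dt=dt)
    position += magnet_strength * dt          # stand-in for the magnet's pull
print(round(position, 3))                     # approaches 1.0
```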

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of content is becoming increasingly important. Rather than treating an information request as a simple string, efforts are being made to better reflect the user's intention in search results. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier information is obtained, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as finance, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora for different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing human-labeled text data becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance its effectiveness. This study makes three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be applied in practice. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score is computed with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model shows 69.3% hit accuracy for the testing set of 2,526 reports; this hit ratio is meaningfully high despite some constraints on the research. Looking at prediction performance by stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and fed to the neural tensor network without a domain-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; in particular, the markedly poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with the related stocks.
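
To make the per-stock scoring step concrete, here is an illustrative sketch of neural-tensor-style score functions applied to a one-hot entity vector. The shapes, stock names, and random (untrained) parameters are assumptions for demonstration, not the paper's trained model.

```python
# Illustrative sketch (not the paper's code): one neural-tensor-style score
# function per stock; the stock with the highest score is predicted as related.
import numpy as np

rng = np.random.default_rng(42)
n_entities, k = 100, 4                      # one-hot dimension, tensor slices
stocks = ["SamsungElec", "SKhynix", "NAVER"]

def make_score_fn():
    W = rng.normal(size=(k, n_entities, n_entities))   # bilinear tensor term
    V = rng.normal(size=(k, n_entities))               # linear term
    b = rng.normal(size=k)
    u = rng.normal(size=k)
    def score(e):
        h = np.tanh(np.einsum("i,kij,j->k", e, W, e) + V @ e + b)
        return float(u @ h)
    return score

score_fns = {s: make_score_fn() for s in stocks}       # trained per stock in the paper

entity = np.zeros(n_entities)
entity[17] = 1.0                                       # one-hot encoded entity
predicted = max(stocks, key=lambda s: score_fns[s](entity))
print(predicted)                                       # stock with highest score
```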

Automated-Database Tuning System With Knowledge-based Reasoning Engine (지식 기반 추론 엔진을 이용한 자동화된 데이터베이스 튜닝 시스템)

  • Gang, Seung-Seok;Lee, Dong-Joo;Jeong, Ok-Ran;Lee, Sang-Goo
    • Proceedings of the Korean Information Science Society Conference, 2007.06a, pp.17-18, 2007
  • Database tuning generally refers to the set of activities that make a database application run "faster" [1]. It is costly and time-consuming for a database administrator to grasp all the rules of thumb needed for tuning and to apply them to each situation. Complex services in which different applications are intertwined therefore necessarily require automated database performance management and tuning. To address this, this paper proposes a system that presents automated database tuning principles based on a knowledge domain. Individual database tuning theories are used as knowledge in the knowledge domain; the factors that affect performance are organized into objects and concepts, and tuning principles are derived by a reasoning system, so that a tuning methodology suited to the current situation can be applied easily and quickly. Automated database tuning has been studied academically in several areas, for example Microsoft's AutoAdmin project [2], Oracle's SQL tuning architecture [3], COLT [4], DBA Companion [5], and SQUASH [6]. Classified by functional approach, these optimization techniques can be broadly divided into design tuning, logical structure tuning, sentence tuning, SQL tuning, server tuning, and system/network tuning. Among these, SQL tuning uses existing, numerically determined information, so it is easy to represent with a structured model and to accommodate conditions that change with diverse user requirements; we therefore focused on it in resolving performance problems. Following the processing flow of a database system, the objects, attributes, and relationships that constitute the DBMS are modeled. The database system is structured into three levels, Application / Query / DBMS, and in this paper the objects, attributes, relationships, and rules of thumb used in database tuning are analyzed and converted into knowledge that includes tuning principles. A tuning principle is a kind of golden rule for resolving problems arising in a database system and is expressed as the facts and rules that form the basis of the knowledge domain. A fact expresses the modeled system as a single knowledge object in the knowledge domain, and a rule expresses a tuning principle as knowledge grounded in facts. Rules are further divided into two types: rules predefined through system modeling and rules used to infer tuning principles; most rules act as branches that lead to different solutions depending on the input values. Users can infer tuning principles from the automatically generated facts and rules and apply them to the database system, and can also manually add situation-specific facts and rules through a GUI as needed. To infer tuning principles in the knowledge domain, JESS, a Java-based reasoning engine, is used. JESS is an expert system shell that uses a scripting language [7]; it represents knowledge with declarative rules and performs inference. Its knowledge representation can easily express and accommodate tuning principles, and its small footprint and fast inference make it suitable for application tuning performed in real time. The main role of the knowledge-based module is to generate and store the new knowledge required from the given database system model. To this end, facts and rules are expressed as triples, the basic unit of knowledge representation. A triple consists of three elements, Subject, Property, and Object, and most facts and rules consist either of the basic triple form or of a Condition part and an Action part, each composed of triples. By representing the objects, attributes, and relationships of the database system model in this way, the knowledge can function as the facts and rules of the inference engine. To implement and test the system, a web-based server-client architecture was assumed. The server consists of a Process Controller, Parser, Rule Database, and JESS Reasoning Engine, and the client consists of a Rule Manager Interface and a Result Viewer. The usefulness of the system was judged by comparing database performance measures, such as execution time, before and after applying the tuning principles obtained from the experiments; applying the tuning principles added less than one second of preprocessing overhead while improving processing time by a factor of about 1.5 to 3. The proposed system automatically generates tuning principles and transforms them into knowledge, thereby deriving and providing new tuning principles, and it supports customized tuning by letting users add facts and rules directly along with the factors affecting performance. Future work on automating processes such as tuning of queries themselves and index optimization, on methods for efficiently defining and adding rules, and on methods for effectively constructing the system model could further improve this research.
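
As a rough illustration of how facts and rules expressed as Subject-Property-Object triples can drive rule-based derivation of a tuning principle, the sketch below uses plain Python instead of JESS; the fact and rule contents are hypothetical, not taken from the paper.

```python
# Toy sketch (Python rather than JESS, purely illustrative): facts and rules as
# Subject-Property-Object triples; a rule fires when all its condition triples
# match, asserting its action triple (a tuning principle).
facts = {
    ("queryQ1", "avgExecutionTime", "high"),
    ("queryQ1", "usesIndex", "no"),
}

rules = [
    {   # condition triples -> action triple
        "condition": [("?q", "avgExecutionTime", "high"),
                      ("?q", "usesIndex", "no")],
        "action": ("?q", "tuningPrinciple", "consider adding an index"),
    },
]

def apply_rules(facts, rules):
    """Very small forward-chaining step over SPO triples with one variable ?q."""
    derived = set(facts)
    for rule in rules:
        subjects = {s for (s, p, o) in derived}
        for subj in subjects:
            bound = [(subj if s == "?q" else s, p, o) for (s, p, o) in rule["condition"]]
            if all(t in derived for t in bound):
                s, p, o = rule["action"]
                derived.add((subj if s == "?q" else s, p, o))
    return derived

for triple in sorted(apply_rules(facts, rules) - facts):
    print(triple)   # ('queryQ1', 'tuningPrinciple', 'consider adding an index')
```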


Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems, v.15 no.4, pp.1-22, 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed. Unfortunately, most existing solutions are either too rudimentary to be useful or too domain dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features : (1) It minimizes the requirement of prior knowledge from both service consumers and publishers; (2) It avoids exploiting domain dependent ontologies; and (3) It is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
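
A minimal sketch of the general idea described above: group Web services by the terms extracted from their WSDL identifiers. Note that the paper uses an unsupervised artificial neural network, whereas this sketch substitutes TF-IDF vectors plus k-means for brevity; all service names and terms are made up.

```python
# Illustrative sketch only: cluster Web services by terms in their WSDL
# operation/service names (TF-IDF + k-means substituted for the paper's
# unsupervised neural network).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

wsdl_terms = {
    "WeatherService": "get forecast temperature city weather",
    "StockQuoteService": "get quote price ticker stock",
    "ClimateService": "get humidity temperature region weather",
    "TradeService": "buy sell order price stock",
}

def split_identifier(name):
    """Split camelCase WSDL identifiers into lowercase terms."""
    return " ".join(re.findall(r"[A-Z]?[a-z]+", name)).lower()

docs = [split_identifier(k) + " " + v for k, v in wsdl_terms.items()]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(wsdl_terms, labels):
    print(label, name)   # weather-related and stock-related services separate
```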


The Role of Digital Knowledge Richness in Green Technology Adoption: A Digital Option Theory Perspective (그린기술 채택에의 디지털 지식풍부성의 역할: 디지털 옵션 이론 관점에서)

  • Yoo, Hosun;Lee, Namyeon;Kwon, Ohbyung
    • The Journal of Information Systems, v.24 no.2, pp.23-52, 2015
  • Purpose: This study aims to understand the role of digital knowledge in accepting green technology. This study combined digital option theory with the second version of the Unified Theory of Acceptance and Use of Technology (UTAUT2). Contrary to other studies in which the UTAUT2 is used to explain IT adoption behavior, we look at the relationship between IT and the UTAUT2 from a new angle, incorporating an important aspect of IT, that is, digitized knowledge richness, as a determinant of the UTAUT2. Design/methodology/approach: Grounded in the UTAUT2, a content analysis was conducted to investigate novel constructs dedicated to explaining green technology adoption. In this study, an amended version of the UTAUT2 specific to green technology is offered that better explains the green technology adoption behavior of consumers. Using the items identified by content analysis, we developed a questionnaire with 36 survey items. We measured all the items on a seven-point Likert-type scale. We randomly selected 402 survey respondents from a set of panel data. After a pilot study, we analyzed the main survey data by using PLS 2.0M3 and SPSS 20.0, and employed structural equation modeling to test the hypotheses. Findings: The results suggest that the UTAUT2 is extendable to technologies other than conventional IT. Social influence is more significant than the conventional utilitarian and hedonic constructs used in the UTAUT and UTAUT2 in explaining adoption behavior in the context of green technologies. The hypothesized connection between digitized knowledge richness and adoption intention was supported by the results of studies on the role of IT in the formation of attitudes toward eco-friendly production. The results also indicate that digital knowledge can encourage people to try green technology when they learn that their peers are already using the technology successfully.