• Title/Summary/Keyword: Intelligent Agent


Analysis of Priority in the Robotaxi Design Elements : Focusing on Application of AHP Methodology (로보택시 설계 요소 간 우선순위 분석 : AHP 방법론 적용을 중심으로)

  • Juhye Ha;Yeonbi Jeung;Junho Choi
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.4
    • /
    • pp.179-193
    • /
    • 2023
  • Research on user-friendly experience design is crucial to reduce resistance to and enhance acceptance of robotaxis. This study analyzes the prioritization of design factors in robotaxi systems and provides design guidelines based on user experience. Using the AHP (Analytic Hierarchy Process) technique, users' perceived importance of four primary design factors and sixteen sub-design elements was assessed, and comfort and safety were identified as top priorities. The results showed that the artificial intelligence agent was the most critical design factor, followed by driving guidance information, interior design, and exterior design. These findings offer valuable insights for robotaxi professionals and can assist in informed decision-making and the creation of user-centered design guidelines.
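The abstract describes AHP only at a high level. As a rough illustration of how AHP turns pairwise comparisons into priority weights, the Python sketch below uses the principal-eigenvector method with a consistency check; the comparison values and factor names are hypothetical, not taken from the paper's survey.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for the four primary design factors
# (values are illustrative only, not results from the paper).
factors = ["AI agent", "driving guidance info", "interior design", "exterior design"]
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
])

# The principal eigenvector of the comparison matrix gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency ratio (CR) using Saaty's random index RI(4) = 0.90.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90
for name, weight in zip(factors, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}  (acceptable if < 0.1)")
```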

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has grown into a field of interest not only to companies and professionals but also to individuals and non-experts, and it is used for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but many struggle from the initial stage because of limited data disclosure and collection difficulties. System improvements for big data activation and data disclosure services are being pursued in Korea and abroad, mainly through public open-data services such as the domestic Government 3.0 portal (data.go.kr). Beyond these government efforts, services that share data held by corporations or individuals are also running, but useful data is hard to find because so little is shared. In addition, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp its attributes and basic characteristics. A new system for big data processing and utilization is therefore needed. First, pre-analysis technology is needed to solve the sharing problem. Pre-analysis, a concept proposed in this paper, means providing users with results generated by analyzing the data in advance; it improves usability by letting a data user grasp the properties and characteristics of a dataset at search time, and by sharing the summary data or sample data generated through pre-analysis it avoids the security problems that can arise when raw data is disclosed, enabling sharing between data providers and data users. Second, appropriate preprocessing results must be generated quickly, according to the disclosure level of the raw data and the network status, and delivered to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing data requested by a user, it reduces the data to a size transferable over the current network so that no big traffic occurs. This paper presents various data sizes according to the disclosure level determined through pre-analysis, an approach expected to generate far less traffic than the conventional practice of sharing only raw data across many systems. We describe how to solve the problems that arise when big data is released and used, and how to facilitate sharing and analysis. The proposed client-server model uses Spark for fast analysis and processing of user requests and consists of a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent, required by the data provider, performs the pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data; it also performs fast and efficient preprocessing through distributed processing and continuously monitors network traffic. The Client Agent, placed on the data user side, searches big data through the Data Descriptor produced by the pre-analysis, finds data quickly, and requests the desired data from the server for download according to its disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by a user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on this model, a user who acquires data can analyze it in the desired direction or preprocess it into new data; by publishing the newly processed data through a Server Agent, the data user takes on the role of data provider. Likewise, a data provider can obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user by performing new analysis on the sample data. In this way raw data is processed, processed data is reused, and a natural sharing environment forms in which the roles of data provider and data user are not distinguished: everyone can be both a provider and a user. The proposed client-server model thus solves the big data sharing problem, provides a free sharing environment in which big data can be disclosed securely, and offers an ideal shared service for finding big data easily.
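The abstract says the Server Agent pre-analyzes big data with Spark to produce a Data Descriptor containing Sample Data, Summary Data, and Raw Data information, but gives no schema. The PySpark sketch below is a minimal, assumed version of that pre-analysis step; the function name and descriptor fields are illustrative only.

```python
from pyspark.sql import SparkSession

def build_data_descriptor(path, sample_fraction=0.01):
    """Hypothetical pre-analysis: summarize and sample a dataset so that users
    can judge its usefulness without downloading the raw data."""
    spark = SparkSession.builder.appName("server-agent-pre-analysis").getOrCreate()
    df = spark.read.csv(path, header=True, inferSchema=True)

    descriptor = {
        # Raw Data information: where the data lives and its basic shape.
        "raw_data": {"path": path, "rows": df.count(), "columns": df.columns},
        # Summary Data: per-column statistics, computed in a distributed way.
        "summary_data": [row.asDict() for row in df.describe().collect()],
        # Sample Data: a small random sample that is safer to share than the original.
        "sample_data": [row.asDict()
                        for row in df.sample(fraction=sample_fraction).limit(20).collect()],
    }
    return descriptor
```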

Distributed Coordination of Project Schedule Changes: An Agent-Based Compensatory Negotiation Approach (건설공사 공정변경의 분산조정 : 에이전트기반의 보상협의 방식)

  • Kim Kee-Soo
    • Korean Journal of Construction Engineering and Management
    • /
    • v.4 no.2 s.14
    • /
    • pp.74-81
    • /
    • 2003
  • In the construction industry, projects are becoming increasingly large and complex, involving multiple subcontractors. Traditional centralized coordination techniques used by general contractors become less effective as subcontractors perform most of the work and provide their own resources. When subcontractors cannot provide enough resources, they hinder their own performance as well as that of other subcontractors and, ultimately, the entire project. Thus, construction projects need a new distributed coordination approach in which all of the concerned subcontractors can reschedule a project dynamically. To enable this distributed coordination of project schedule changes, the author developed an agent-based compensatory negotiation methodology that allows intelligent software agents to simulate negotiations on behalf of their human subcontractors. In addition to this theoretical work, I designed and implemented a prototype to demonstrate the effectiveness of the framework. This research thus formalizes the steps that help construction project participants increase the efficiency of their resource use, which in turn enhances the successful completion of whole projects.
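The abstract does not spell out the compensatory negotiation protocol. The sketch below is a toy, assumed version of the idea: a subcontractor requesting a schedule change raises its compensation offer until every affected subcontractor whose delay cost it covers accepts, or its budget is exhausted. All names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SubcontractorAgent:
    """Toy agent that accepts a proposed schedule change if the offered
    compensation covers its estimated delay cost (values are hypothetical)."""
    name: str
    delay_cost: float

    def evaluate(self, offer: float) -> bool:
        return offer >= self.delay_cost

def negotiate(initiator_budget: float, affected: list[SubcontractorAgent], step: float = 100.0):
    """Raise the per-agent compensation offer until all affected agents accept
    or the initiator's budget is exhausted (a simplified compensatory negotiation)."""
    offer = 0.0
    while offer * len(affected) <= initiator_budget:
        if all(agent.evaluate(offer) for agent in affected):
            return {"accepted": True, "offer_per_agent": offer}
        offer += step
    return {"accepted": False, "offer_per_agent": None}

# Example: two affected subcontractors with different delay costs.
agents = [SubcontractorAgent("electrical", 300.0), SubcontractorAgent("plumbing", 500.0)]
print(negotiate(initiator_budget=2000.0, affected=agents))
```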

Development of Forward chaining inference engine SMART-F using Rete Algorithm in the Semantic Web (차세대 웹 환경에서의 Rete Algorithm을 이용한 정방향 추론엔진 SMART - F 개발)

  • Jeong, Kyun-Beom;Hong, June-Seok;Kim, Woo-Ju;Lee, Myung-Jin;Park, Ji-Hyoung;Song, Yong-Uk
    • Journal of Intelligence and Information Systems
    • /
    • v.13 no.3
    • /
    • pp.17-29
    • /
    • 2007
  • An inference engine, which serves as the brain of a software agent on the next-generation Web built on XML-based standards, must understand SWRL (Semantic Web Rule Language), the language for expressing rules in the Semantic Web. In this research, we develop a forward-chaining inference engine, SMART-F (SeMantic web Agent Reasoning Tools-Forward chaining inference engine), that uses SWRL to express rules and OWL to express facts. In traditional inference, the Rete algorithm, which improves the effectiveness of forward rule inference by compiling if-then rules into a network structure, is often used. To apply it to the Semantic Web, we analyze the functions required for SWRL-based forward inference and design a forward inference algorithm that reflects the requirements of the next-generation Semantic Web on top of the Rete algorithm. Then, to secure platform independence and portability in the ubiquitous environment and to overcome performance gaps, we develop a management tool for the fact and rule bases together with the forward inference engine. The engine is compatible with the fact and rule bases of the previously developed SMART-B, which maximizes the practical use of knowledge in the next-generation Web environment.

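Building the full Rete network is beyond a short example, so the sketch below shows only naive forward chaining over single-antecedent if-then rules applied to (subject, predicate, object) facts, i.e. the inference behavior that a Rete-based engine such as SMART-F optimizes. The rule and fact formats are assumptions, not the SWRL/OWL structures the paper uses.

```python
# Naive forward chaining over simple (subject, predicate, object) facts.
# A real Rete engine avoids re-matching every rule against every fact by
# compiling the rules into a shared discrimination network.
facts = {("Socrates", "type", "Human")}
rules = [
    # if (?x type Human) then (?x type Mortal)  -- hypothetical SWRL-like rule
    (("?x", "type", "Human"), ("?x", "type", "Mortal")),
]

def match(pattern, fact, bindings):
    """Try to unify one triple pattern with one fact, extending the bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            for fact in list(facts):
                b = match(condition, fact, {})
                if b is None:
                    continue
                derived = tuple(b.get(t, t) for t in conclusion)
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

print(forward_chain(facts, rules))  # adds ("Socrates", "type", "Mortal")
```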

An Empirical Study for Performance Evaluation of Web Personalization Assistant Systems (웹 기반 개인화 보조시스템 성능 평가를 위한 실험적 연구)

  • Kim, Ki-Bum;Kim, Seon-Ho;Weon, Sung-Hyun
    • The Journal of Society for e-Business Studies
    • /
    • v.9 no.3
    • /
    • pp.155-167
    • /
    • 2004
  • At this time, the two main techniques for building web personalization assistant systems are direct manipulation and software agents. While both are intended to let users complete tasks rapidly, efficiently, and easily, their methodologies differ. The central debate between these web personalization techniques concerns the amount of control that each allows to, or withholds from, the user. Direct manipulation can provide users with comprehensible, predictable, and controllable user interfaces that give them a feeling of accomplishment and responsibility. On the other hand, intelligent software components, the agents, can assist users with artificial intelligence by monitoring or retrieving personal histories and behaviors. In this empirical study, two web personalization assistant systems are evaluated: WebPersonalizer, an agent-based user personalization tool, and AntWorld, a collaborative recommendation tool that provides direct manipulation interfaces. Through this empirical study, we focus on these two paradigms for web personalization assistant systems, direct manipulation and software agents, each of which has its own advantages and disadvantages. We also provide experimental results worth consulting for developers of electronic commerce systems and suggest methodologies for conveniently retrieving the necessary information based on users' personal needs.


Cooperative Multi-Agent Reinforcement Learning-Based Behavior Control of Grid Sortation Systems in Smart Factory (스마트 팩토리에서 그리드 분류 시스템의 협력적 다중 에이전트 강화 학습 기반 행동 제어)

  • Choi, HoBin;Kim, JuBong;Hwang, GyuYoung;Kim, KwiHoon;Hong, YongGeun;Han, YounHee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.8
    • /
    • pp.171-180
    • /
    • 2020
  • A smart factory consists of digital automation solutions throughout the production process, including design, development, manufacturing, and distribution; it is an intelligent factory that installs IoT in its internal facilities and machines to collect and analyze process data in real time so that it can control itself. A smart factory's equipment operates as a physical combination of numerous pieces of hardware, unlike a virtual character driven as a single object in a game; in other words, for a specific common goal, multiple devices must perform individual actions simultaneously. By taking advantage of a smart factory's ability to collect process data in real time, reinforcement learning can be used instead of general machine learning to perform behavior control without pre-collected training data. However, in the real world it is impossible to run tens of millions of learning iterations because of physical wear and time. Thus, this paper uses a simulator to develop a grid sortation system focused on transport facilities, one of the complex environments in the smart factory field, and designs cooperative multi-agent reinforcement learning to demonstrate efficient behavior control.
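The abstract does not name the specific learning algorithm beyond cooperative multi-agent reinforcement learning. As one common baseline for such settings, the sketch below shows independent Q-learning agents that are all trained on a shared team reward from the sortation simulator; the class and parameter names are assumptions.

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """One sorting-cell agent: each agent keeps its own Q-table but is updated
    with a shared team reward (a simple cooperative baseline)."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the local observation.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, team_reward, next_state):
        # Q-learning update driven by the shared team reward.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = team_reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In a training loop, each agent would pick an action from its local observation, the simulator would apply the joint action, and the resulting team reward would be passed to every agent's learn().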

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation; for each method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model that applies patterns of misinterpretation-error causes. The results showed that the strongest performance came from initial consonant extraction when detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not flagged as recognition failures, the study proposed diverse text separation methods and found a novel one that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors arising from the voice recognition stage can be specified, so that, even for interactions not categorized as errors, services can be provided according to the results users actually want.
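The abstract mentions syllable separation and initial-consonant (choseong) extraction. The sketch below shows the standard Unicode decomposition of Hangul syllables into jamo plus a simple jamo-level similarity; the difflib ratio is only a placeholder for the embedding-based similarity the paper actually uses.

```python
from difflib import SequenceMatcher

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_jamo(text):
    """Decompose each Hangul syllable (U+AC00..U+D7A3) into its jamo."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            out.append(CHOSEONG[code // 588])          # 588 = 21 * 28
            out.append(JUNGSEONG[(code % 588) // 28])
            if code % 28:
                out.append(JONGSEONG[code % 28])
        else:
            out.append(ch)
    return "".join(out)

def choseong_only(text):
    """Keep only the initial consonant of each syllable (초성 추출)."""
    return "".join(CHOSEONG[(ord(ch) - 0xAC00) // 588]
                   if 0 <= ord(ch) - 0xAC00 <= 11171 else ch for ch in text)

def similarity(a, b):
    # Placeholder similarity over jamo sequences; the paper compares
    # embedding-based similarities of consecutive utterance pairs.
    return SequenceMatcher(None, to_jamo(a), to_jamo(b)).ratio()

print(choseong_only("인공지능"))                 # -> ㅇㄱㅈㄴ
print(round(similarity("스피커", "스피카"), 2))
```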

Study Service Ontology Design Scheme Using UML and OCL (UML 및 OCL을 이용한 서비스 온톨로지 설계 방안에 관한 연구)

  • Lee Yun-Su;Chung In-Jeoung
    • The KIPS Transactions:PartD
    • /
    • v.12D no.4 s.100
    • /
    • pp.627-636
    • /
    • 2005
  • The Intelligent Web Service has been proposed to automate the discovery, invocation, composition, inter-operation, execution monitoring, and recovery of web services through Semantic Web and agent technology. To accomplish this, an ontology is necessary so that computers can reason over and process the knowledge. However, creating a service ontology for the intelligent web service has two problems: it consumes a great deal of time and cost because it depends on the service developer's heuristics, and it is hard to achieve a complete mapping between the service and the service ontology. Moreover, the markup language used to describe a service ontology is currently hard for service developers to learn in a short time. This paper proposes an efficient way to design and create a service ontology using the MDA methodology. The proposed approach reuses the model created when designing and constructing the web service model in UML under MDA: the platform-independent web service model is converted into a platform-specific model for OWL-S, a service ontology description language, and then converted into an OWL-S service ontology via XMI. This approach has two benefits: service developers can easily construct the service ontology, and both the service and the service ontology can be created from a single model. Moreover, it can reduce time and cost by creating the service ontology automatically from a model and can flexibly handle changes in the external environment, such as a platform change. This paper presents an example to demonstrate the validity of designing the web service model and creating the service ontology, and verifies whether the created service ontology is valid.
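The paper's pipeline (UML/OCL model, XMI export, OWL-S ontology) cannot be reproduced from the abstract alone, so the sketch below is only a toy model-to-ontology transformation: a simplified, assumed UML-like model is mapped to plain OWL classes and properties with rdflib, not to the full OWL-S profile/model/grounding structure the paper targets.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical, simplified "UML model"; the paper works from an XMI export,
# for which this parsed dictionary is only a stand-in.
uml_model = {
    "classes": {
        "BookSellingService": {"attributes": ["title", "price"]},
    }
}

EX = Namespace("http://example.org/service#")

def model_to_owl(model):
    """Toy model-to-ontology transformation: each UML class becomes an OWL
    class and each attribute a datatype property."""
    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)
    for cls, spec in model["classes"].items():
        g.add((EX[cls], RDF.type, OWL.Class))
        g.add((EX[cls], RDFS.label, Literal(cls)))
        for attr in spec["attributes"]:
            g.add((EX[attr], RDF.type, OWL.DatatypeProperty))
            g.add((EX[attr], RDFS.domain, EX[cls]))
    return g

print(model_to_owl(uml_model).serialize(format="turtle"))
```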

Intelligent Broadcasting System and Services for Personalized Semantic Contents Consumption (개인화된 의미 기반 콘텐츠 소비를 위한 지능형 방송 시스템과 서비스)

  • Jin, Sung Ho;Cho, Jun Ho;Ro, Yong Man;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.10 no.3
    • /
    • pp.422-435
    • /
    • 2005
  • Compared with analog broadcasting, digital broadcasting provides the technical foundation to personalize the TV watching environment by offering broadcasting services that adapt to viewers' preferences. However, current digital broadcasting offers only limited services, such as reserved recording, simple program guiding with an electronic program guide (EPG) on a personal video recorder, and primitive data broadcasting by broadcasters. The purpose of this paper is therefore to suggest a new broadcasting environment that gives viewers convenience and a differentiated TV watching experience through enhanced personalized services. To that end, we propose an intelligent broadcasting system that minimizes the viewer's actions, together with enhanced broadcasting services based on understanding the semantics of broadcasting contents. To implement the system, agent technology as well as MPEG-7 and TV-Anytime Forum (TVAF) standards are employed. For content-level services, real-time content filtering and personalized video skimming are designed and implemented. To verify the usefulness of the proposed system, we demonstrate it on a test-bed on which the content-level personalized services are implemented.
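The abstract describes real-time content filtering against viewer preferences without giving the MPEG-7/TV-Anytime details, so the sketch below scores segments by simple keyword overlap between assumed segment metadata and a preference profile; it is a stand-in for the paper's filtering agent, not its actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A content segment with (hypothetical) MPEG-7-style keyword annotations."""
    title: str
    keywords: set[str]

def filter_segments(segments, preferences, threshold=0.5):
    """Keep segments whose keyword overlap with the viewer's preference
    profile exceeds a threshold -- a simplified content-filtering step."""
    selected = []
    for seg in segments:
        overlap = len(seg.keywords & preferences) / max(len(seg.keywords), 1)
        if overlap >= threshold:
            selected.append((seg.title, round(overlap, 2)))
    return selected

segments = [
    Segment("goal highlight", {"soccer", "goal", "replay"}),
    Segment("weather update", {"weather", "forecast"}),
]
print(filter_segments(segments, preferences={"soccer", "goal", "baseball"}))
```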

An Integrated Model of the Intention to Use the Intelligent Personal Assistant (IPA) (지능형 개인비서(IPA)의 사용의도에 관한 통합모형)

  • Chan-Woo Kim;Chang-Kyo Suh
    • Information Systems Review
    • /
    • v.19 no.4
    • /
    • pp.135-156
    • /
    • 2017
  • An intelligent personal assistant (IPA) is a software agent that assists people with basic tasks or services, commonly providing information via natural language. Despite the IPA's versatile capability to answer a user's simple information queries, such as the weather or driving directions, actual usage rates for IPA services have been limited to date. In this research, to evaluate the factors affecting the intention to use an IPA, we develop an empirical model based on the technology acceptance model, innovation diffusion theory, and the IS success model. We then collect 203 questionnaires from actual IPA users, and a structural equation model validates the causal relationships between the constructs of the model. The innovation characteristics of the IPA drawn from innovation diffusion theory, namely relative advantage, compatibility, and observability, all exerted a positive influence on perceived usefulness. Information quality, a quality characteristic of the IPA obtained from DeLone and McLean's IS success model, had a positive effect on perceived usefulness and perceived ease of use. Finally, the perceived intelligence of the IPA positively influenced perceived usefulness and ease of use and was also a major factor increasing the intention to use the IPA. By establishing an integrated model and identifying the factors that may influence the intention to use the IPA, this study provides strategic guidelines to relevant business operators.
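The paper estimates a structural equation model over TAM, innovation-diffusion, and IS-success constructs. As a rough approximation only, the sketch below fits the corresponding regression paths with statsmodels, assuming a survey dataframe with hypothetical column names for averaged construct scores; a full SEM would estimate all paths jointly rather than as separate regressions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed survey data: one row per respondent, columns are averaged construct
# scores (the column names are hypothetical, not taken from the questionnaire).
df = pd.read_csv("ipa_survey.csv")  # e.g. 203 respondents

# Simplified path analysis approximating the structural model.
usefulness = smf.ols(
    "perceived_usefulness ~ relative_advantage + compatibility + observability"
    " + information_quality + perceived_intelligence", data=df).fit()
ease = smf.ols(
    "perceived_ease_of_use ~ information_quality + perceived_intelligence",
    data=df).fit()
intention = smf.ols(
    "intention_to_use ~ perceived_usefulness + perceived_ease_of_use", data=df).fit()

for name, model in [("usefulness", usefulness), ("ease of use", ease), ("intention", intention)]:
    print(name, model.params.round(3).to_dict())
```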