• Title/Summary/Keyword: 사용자 모델링 (User Modeling)


Blind Rhythmic Source Separation (블라인드 방식의 리듬 음원 분리)

  • Kim, Min-Je;Yoo, Ji-Ho;Kang, Kyeong-Ok;Choi, Seung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.8
    • /
    • pp.697-705
    • /
    • 2009
  • An unsupervised (blind) method is proposed for extracting rhythmic sources from commercial polyphonic music limited to a single channel. Commercial music signals are rarely provided with more than two channels, while they often contain multiple instruments including singing voice. Therefore, instead of relying on conventional models of the mixing environment or statistical characteristics, other source-specific characteristics must be introduced to separate or extract sources in such underdetermined environments. In this paper, we concentrate on extracting rhythmic sources from a mixture that also contains harmonic sources. An extension of nonnegative matrix factorization (NMF), called nonnegative matrix partial co-factorization (NMPCF), is used to analyze multiple relationships between the spectral and temporal properties of the given input matrices. Moreover, the temporal repeatability of rhythmic sound sources is exploited as a rhythmic property shared among segments of the input mixture signal. The proposed method shows separation quality that is acceptable, though not superior, compared with prior-knowledge-based drum source separation systems, but it is more widely applicable because it operates blindly, for example when no prior information is available or the target rhythmic source is irregular.
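The NMPCF formulation above builds on standard NMF updates. As a point of reference only, here is a minimal NMF sketch in Python with multiplicative updates on a magnitude spectrogram; the partial co-factorization constraints and the shared-basis coupling across segments described in the paper are not reproduced, and the commented usage lines are hypothetical.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Basic NMF with multiplicative updates (Euclidean loss).

    V : nonnegative matrix (e.g., a magnitude spectrogram, freq x time)
    Returns W (spectral bases) and H (temporal activations) with V ~ W @ H.
    """
    n_rows, n_cols = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n_rows, rank)) + eps
    H = rng.random((rank, n_cols)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update bases
    return W, H

# Hypothetical usage: factor a mixture spectrogram, then reconstruct the
# components whose activations look rhythmic (percussive).
# V = np.abs(stft_of_mixture)            # assumed computed elsewhere
# W, H = nmf(V, rank=10)
# V_k = np.outer(W[:, 0], H[0, :])       # spectrogram of one component
```

In NMPCF, several such factorizations over different segments of the mixture would share a subset of basis vectors, so that the temporally repeated rhythmic part is factored out jointly rather than per segment.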

Brand Platformization and User Sentiment: A Text Mining Analysis of Nike Run Club with Comparative Insights from Adidas Runtastic (텍스트마이닝을 활용한 브랜드 플랫폼 사용자 감성 분석: 나이키 및 아디다스 러닝 앱 리뷰 비교분석을 중심으로)

  • Hanna Park;Yunho Maeng;Hyogun Kym
    • Knowledge Management Research
    • /
    • v.25 no.1
    • /
    • pp.43-66
    • /
    • 2024
  • In an era where digital technology reshapes brand-consumer interactions, this study examines the influence of Nike's Run Club and Adidas' Runtastic apps on loyalty and advocacy. Analyzing 3,715 English reviews from January 2020 to October 2023 through text mining, and conducting a focused sentiment analysis on 155 'recommend' mentions, we explore the nuances of 'hot loyalty'. The findings reveal Nike as a 'companion' with an emphasis on emotional engagement, versus Runtastic's 'tool' focus on reliability. This underscores the varied consumer perceptions across similar platforms, highlighting the necessity for brands to integrate user preferences and address technical flaws to foster loyalty. Demonstrating how customized technology adaptations impact loyalty, this research offers crucial insights for digital brand strategy, suggesting a proactive approach to app development and management for enhancing brand loyalty.
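The abstract does not name the text-mining tools used; purely as an illustration, the sketch below filters hypothetical review texts for 'recommend' mentions and scores them with NLTK's off-the-shelf VADER analyzer, which is one common way to run this kind of review-level sentiment analysis.

```python
# Illustrative sketch only (not the authors' pipeline).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

reviews = [  # made-up examples standing in for app-store reviews
    "I recommend this app to every runner, the guided runs keep me motivated.",
    "Would not recommend, the GPS tracking keeps crashing mid-run.",
]

sia = SentimentIntensityAnalyzer()
recommend_mentions = [r for r in reviews if "recommend" in r.lower()]
for review in recommend_mentions:
    score = sia.polarity_scores(review)["compound"]  # -1 (negative) .. +1 (positive)
    label = "positive" if score >= 0.05 else "negative" if score <= -0.05 else "neutral"
    print(f"{label:8s} {score:+.2f}  {review}")
```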

Location Service Modeling of Distributed GIS for Replication Geospatial Information Object Management (중복 지리정보 객체 관리를 위한 분산 지리정보 시스템의 위치 서비스 모델링)

  • Jeong, Chang-Won;Lee, Won-Jung;Lee, Jae-Wan;Joo, Su-Chong
    • The KIPS Transactions:PartD
    • /
    • v.13D no.7 s.110
    • /
    • pp.985-996
    • /
    • 2006
  • As Internet technologies develop, the geographic information system environment is shifting to web-based services. Because the geospatial information of existing Web-GIS services was developed independently, there is no interoperability to support diverse map formats, and the same geospatial information object is often duplicated across separate GISs for different purposes. Intelligent strategies are therefore needed for optimal replica selection, that is, for identifying replicated geospatial information objects. For the management of replicated objects, OMG, GLOBE, and Grid computing have suggested related frameworks, but these efforts do not go far enough for geospatial information objects. This paper presents a location service model that supports optimal selection among replicas and the management of replicated objects. It consists of three main services. The first is a binding service, which stores the names and properties of objects defined by users for the services they offer and enables clients to search them. The second is a location service, which manages location information as contact records and independently obtains per-system performance information, together with contact addresses, through the Load Sharing Facility. The third is an intelligent selection service, which obtains basic and performance information from the binding and location services and provides both faster access and better performance through rules derived from an intelligent model based on rough sets. To validate the location service model, this research presents the execution processes of the location service with a graphical user interface.
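As a rough illustration of the selection step, the toy sketch below scores replicas using the kind of contact and load information the binding and location services would supply; the rough-set rule model from the paper is replaced by a simple weighted score, and all names, fields, and numbers are hypothetical.

```python
# Illustrative only: toy replica selection over binding/location information.
from dataclasses import dataclass

@dataclass
class Replica:
    name: str             # bound name from the binding service
    contact_address: str  # contact record from the location service
    cpu_load: float       # performance info (e.g., from a load-sharing facility)
    latency_ms: float     # measured network latency to the replica host

def select_replica(replicas):
    """Pick the replica expected to give the fastest access.

    Lower load and lower latency are both better; the weights are arbitrary
    placeholders, not the paper's rough-set rules.
    """
    return min(replicas, key=lambda r: 0.5 * r.cpu_load + 0.5 * (r.latency_ms / 100.0))

replicas = [
    Replica("seoul_map_v1", "host-a:9000", cpu_load=0.80, latency_ms=12.0),
    Replica("seoul_map_v1", "host-b:9000", cpu_load=0.35, latency_ms=40.0),
]
print(select_replica(replicas).contact_address)
```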

A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.3
    • /
    • pp.286-298
    • /
    • 2002
  • The Remote Procedure Call (RPC) has traditionally been used for Inter-Process Communication (IPC) among processes in distributed computing environments. As distributed applications have grown more complex, the Mobile Agent paradigm for IPC has emerged. Because several IPC paradigms now exist, studies that evaluate and compare their performance have recently appeared. However, the performance models used in previous research did not correctly reflect real distributed computing environments, because they did not consider the elements required for providing security services. Since a real distributed environment is open, it is highly vulnerable to a variety of attacks. To execute applications securely in a distributed computing environment, security services that protect applications and information against such attacks must be taken into account. In this paper, we evaluate and compare the performance of the Remote Procedure Call and Mobile Agent IPC paradigms. We examine the security services needed to execute applications securely and propose new performance models that consider those services. We design performance models, which describe an information retrieval system over N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of each. The comparison of the two performance models with security services for secure communication shows that the execution time of the RPC performance model increases sharply because of the many inter-host communications requiring strong cryptographic mechanisms, whereas the execution time of the Mobile Agent model increases gradually because the Mobile Agent paradigm reduces the quantity of communication between hosts.
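A back-of-the-envelope cost model (not the authors' Petri-net models) can convey the qualitative result: per-message encryption makes the RPC cost grow quickly with the number of database services visited, while the mobile agent pays the cryptographic cost once per migration. All parameter values below are hypothetical.

```python
# Toy cost model contrasting the two IPC paradigms under per-message encryption.
def rpc_time(n_db, msg_cost=1.0, crypto_cost=4.0, local_query=0.5):
    # One encrypted request/reply pair per database service visited.
    return n_db * (2 * (msg_cost + crypto_cost) + local_query)

def mobile_agent_time(n_db, migrate_cost=2.0, crypto_cost=4.0, local_query=0.5):
    # The agent migrates once per host (encrypted), then queries locally.
    return n_db * (migrate_cost + crypto_cost + local_query)

for n in (1, 5, 10, 20):
    print(f"N={n:2d}  RPC={rpc_time(n):6.1f}  MobileAgent={mobile_agent_time(n):6.1f}")
```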

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems, and it also refers to the new technologies designed to effectively extract values from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, information on purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and in industry. Studies using web search traffic information can be broadly classified into two fields. The first field consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, etc. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any studies related to brands have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword for the search, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic information shows that the volume of simultaneous searches using certain keywords increases when the relation between them is closer in consumers' minds, so it is possible to derive the relations among keywords by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
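As an illustration of the network-analysis step, the sketch below builds a co-search graph from hypothetical simultaneous-search volumes and ranks nodes by weighted degree; the keyword names and counts are invented, and the Google Trends data collection is not shown.

```python
import networkx as nx

# Hypothetical simultaneous-search volumes between keyword pairs.
co_search_volume = {
    ("iPad", "Galaxy Tab"): 820,      # brand-brand comparison searches
    ("iPad", "battery life"): 310,    # brand-attribute searches
    ("Galaxy Tab", "price"): 270,
}

G = nx.Graph()
for (a, b), volume in co_search_volume.items():
    G.add_edge(a, b, weight=volume)

# Keywords searched together more often are treated as "closer" in consumers'
# minds; weighted degree gives a rough view of how central each node is.
weighted_degree = dict(G.degree(weight="weight"))
print(sorted(weighted_degree.items(), key=lambda kv: -kv[1]))
```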

A rock physics simulator and its application for $CO_2$ sequestration process ($CO_2$ 격리 처리를 위한 암석물리학 모의실험장치와 그 응용)

  • Li, Ruiping;Dodds, Kevin;Siggins, A.F.;Urosevic, Milovan
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.67-72
    • /
    • 2006
  • Injection of $CO_2$ into underground saline formations, due to their large storage capacity, is probably the most promising approach for the reduction of $CO_2$ emissions into the atmosphere. $CO_2$ storage must be carefully planned and monitored to ensure that the $CO_2$ is safely retained in the formation for periods of at least thousands of years. Seismic methods, particularly for offshore reservoirs, are the primary tool for monitoring the injection process and distribution of $CO_2$ in the reservoir over time, provided that reservoir properties are favourable. Seismic methods are equally essential for the characterisation of a potential trap, determining the reservoir properties, and estimating its capacity. Hence, an assessment of the change in seismic response to $CO_2$ storage needs to be carried out at a very early stage. This must be revisited at later stages, to assess potential changes in seismic response arising from changes in fluid properties or mineral composition that may arise from chemical interactions between the host rock and the $CO_2$. Thus, carefully structured modelling of the seismic response changes caused by injection of $CO_2$ into a reservoir over time helps in the design of a long-term monitoring program. For that purpose we have developed a Graphical User Interface (GUI) driven rock physics simulator, designed to model both short and long-term 4D seismic responses to injected $CO_2$. The application incorporates $CO_2$ phase changes, local pressure and temperature changes, chemical reactions and mineral precipitation. By incorporating anisotropic Gassmann equations into the simulator, the seismic response of faults and fractures reactivated by $CO_2$ can also be predicted. We show field examples (potential $CO_2$ sequestration sites offshore and onshore) where we have tested our rock physics simulator. 4D seismic responses are modelled to help design the monitoring program.
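For reference, the standard isotropic Gassmann fluid-substitution relation, which the simulator generalises to the anisotropic case, is shown below in conventional notation (not taken from the paper); the shear modulus is unaffected by the pore fluid.

```latex
% Isotropic Gassmann fluid substitution (standard notation).
% K_sat, K_dry : saturated / dry-rock bulk moduli
% K_0          : mineral (grain) bulk modulus
% K_fl         : pore-fluid bulk modulus,   \phi : porosity
\[
K_\mathrm{sat} = K_\mathrm{dry}
  + \frac{\left(1 - K_\mathrm{dry}/K_0\right)^{2}}
         {\dfrac{\phi}{K_\mathrm{fl}} + \dfrac{1-\phi}{K_0} - \dfrac{K_\mathrm{dry}}{K_0^{2}}},
\qquad
\mu_\mathrm{sat} = \mu_\mathrm{dry}
\]
```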

The Extended Site Assessment Procedure Based on Knowledge of Biodegradability to Evaluate the Applicability of Intrinsic Remediation (자연내재복원기술(Intrinsic Remediation)적용을 위한 오염지역 평가과정 개발)

  • ;Robert M. Cowan
    • Journal of Korea Soil Environment Society
    • /
    • v.2 no.3
    • /
    • pp.3-21
    • /
    • 1997
  • The remediation of contaminated sites using currently available remediation technologies requires long-term treatment and huge costs, and it is uncertain whether such technologies can achieve the remedial goal of reducing contamination to either background or health-based standards. Intrinsic remediation is a remediation technology that relies on the mechanisms of natural attenuation for the containment and elimination of contaminants in subsurface environments. Initial costs for intrinsic remediation may be higher than those of conventional treatment technologies because intrinsic remediation requires the most comprehensive site assessment; the total remediation cost, however, may be the lowest among the presently employed technologies. The applicability of intrinsic remediation at a contaminated site should be thoroughly investigated to achieve the remedial goal of the technology. This paper provides the framework of an extended site assessment procedure, based on knowledge of biodegradability, for evaluating the applicability of intrinsic remediation. The site assessment procedure is composed of five steps: preliminary site screening, assessment of current knowledge of biodegradability, selection of the appropriate approach, analysis of contaminant fate and transport, and planning of the monitoring schedule. In step 1, the following are decided: 1) whether or not to proceed to the detailed assessment, based on rules of thumb concerning the biodegradability of organic compounds, and 2) which protocol document to follow for the detailed site assessment, according to the site characteristics, the contaminants, and the relative distance between the contamination and potential receptors. In step 2, biodegradability databases are searched and evaluated. In step 3, the appropriate biodegradation pathways for the contaminated site are selected. In step 4, the fate and transport of the contaminants at the site are analyzed through modeling. In step 5, the monitoring schedule is planned according to the modeling results. Through this procedure, users can obtain rational and systematic information for applying intrinsic remediation. The collected data and information can also serve as the basis for selecting another remediation technology if the site assessment procedure leads to the conclusion that intrinsic remediation should not be applied at the site.
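Purely as an illustration of the workflow, the sketch below strings the five steps together as placeholder functions; the decision rules, protocol documents, and databases are invented stand-ins, not the paper's criteria.

```python
# Illustrative only: the five-step assessment expressed as a simple pipeline.
def step1_preliminary_screening(site):
    """Rule-of-thumb go/no-go check and choice of a protocol document (placeholders)."""
    go_on = site["contaminant_readily_biodegradable"]
    protocol = ("fuel-hydrocarbon protocol" if site["contaminant"] == "BTEX"
                else "chlorinated-solvent protocol")
    return go_on, protocol

def assess_site(site):
    go_on, protocol = step1_preliminary_screening(site)
    if not go_on:
        return "intrinsic remediation not applicable; consider another technology"
    return [
        f"step 2: search biodegradability databases for {site['contaminant']}",
        "step 3: select the appropriate biodegradation pathway for site conditions",
        "step 4: model contaminant fate and transport",
        "step 5: plan the monitoring schedule from the modelling results",
        f"(follow the {protocol} for the detailed assessment)",
    ]

print(assess_site({"contaminant": "BTEX", "contaminant_readily_biodegradable": True}))
```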


Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions across large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve higher efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various SC processes separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate ATP quotation quite difficult. In addition, many researchers have assumed ATP models with integer time lags: various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases they assume that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, so models developed using integer time lags only approximate real systems, and the differences caused by this approximation frequently result in significant accuracy degradation. To introduce an ATP model with non-integer time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research most directly related to this paper: they propose a modeling framework for systems with non-integer time lags and show how to apply it to a variety of settings, including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and can represent real-world systems more accurately. Previously, to cope with non-integer time lags, one would usually model the system either by rounding lags to the nearest integers or by subdividing the time grid so that the lags become integer multiples of the grid. Each approach has a critical weakness: the first underestimates or overestimates lead times, potentially leading to infeasibilities or excessive work-in-process, while the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management, focusing on a globally networked configuration of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system has a substantial effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers, and for this system we address an ATP scheduling and capacity allocation problem. In this study, we propose a model for the ATP system in SCM using the dynamic production function with non-integer time lags. The model is developed under a framework suited to non-integer lags and is therefore more accurate than the models usually encountered. We developed an intelligent ATP system for this model using a genetic algorithm. We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real-valued variables. The proposed regeneration procedure, which evaluates each infeasible chromosome, makes the solutions converge to the optimum quickly.
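The paper's MIP model and dynamic production function cannot be reconstructed from the abstract, but the evolutionary part can be illustrated. The sketch below runs a plain genetic algorithm that assigns made-up orders to capacity-limited facilities; the data, the fitness penalty, and the operators are simplified placeholders, not the authors' real-valued representation or regeneration procedure.

```python
# Minimal GA sketch: assign orders to facilities while avoiding capacity overload.
import random

ORDERS = [30, 45, 20, 60, 25, 40]   # required quantity per order (hypothetical)
CAPACITY = [100, 90]                # capacity per facility (hypothetical)
N_FAC = len(CAPACITY)

def fitness(chromosome):
    """Higher is better; 0 means a feasible (no-overload) assignment."""
    load = [0] * N_FAC
    for qty, fac in zip(ORDERS, chromosome):
        load[fac] += qty
    overload = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    return -overload

def evolve(pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.randrange(N_FAC) for _ in ORDERS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(ORDERS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < p_mut:         # mutation: reassign one order
                child[random.randrange(len(ORDERS))] = random.randrange(N_FAC)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(evolve())
```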

A Lifelog Management System Based on the Relational Data Model and its Applications (관계 데이터 모델 기반 라이프로그 관리 시스템과 그 응용)

  • Song, In-Chul;Lee, Yu-Won;Kim, Hyeon-Gyu;Kim, Hang-Kyu;Haam, Deok-Min;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.637-648
    • /
    • 2009
  • As the cost of disks decreases, PCs are soon expected to be equipped with disks of 1 TB or more. Assuming that a single person generates 1 GB of data per month, 1 TB is enough to store data for that person's entire lifetime. This has led to growing research on lifelog management, which manages what people see and listen to in everyday life. Although many different lifelog management systems have been proposed, based on the relational data model, on ontologies, or on file systems, each has advantages and disadvantages: those based on the relational data model provide good query processing performance but do not properly support complex queries; those based on ontologies handle more complex queries but with unsatisfactory performance; and those based on file systems support only keyword queries. Moreover, these systems lack support for lifelog group management and do not provide a convenient user interface for modifying and adding tags (metadata) to lifelogs for effective lifelog search. To address these problems, we propose a lifelog management system based on the relational data model. The proposed system models lifelogs with the relational data model and transforms queries on lifelogs into SQL statements, which yields good query processing performance. It also supports a simplified relationship query that finds a lifelog through other lifelogs directly related to it, overcoming the drawback of limited support for complex queries. In addition, the proposed system supports the management of lifelog groups by providing ways to create, edit, search, play, and share them. Finally, it is equipped with a tagging tool that helps the user modify and add tags conveniently through the recommendation of various tags. This paper describes the design and implementation of the proposed system and its various applications.
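As a small illustration of the relational approach (not the authors' actual schema), the sketch below stores lifelogs and tags in two tables and expresses a tag-plus-location search as a single SQL statement using Python's built-in sqlite3 module; the table and column names and the sample data are hypothetical, and the simplified relationship query is not reproduced.

```python
# Illustrative relational lifelog store: a lifelog table plus a tag table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lifelog (id INTEGER PRIMARY KEY, path TEXT, captured_at TEXT, location TEXT);
CREATE TABLE tag     (lifelog_id INTEGER REFERENCES lifelog(id), name TEXT);
""")
conn.execute("INSERT INTO lifelog VALUES (1, '/photos/0412.jpg', '2009-04-12 14:02', 'Daejeon')")
conn.executemany("INSERT INTO tag VALUES (?, ?)", [(1, "conference"), (1, "colleagues")])

# A query on lifelogs is translated into a plain SQL statement.
rows = conn.execute("""
    SELECT l.path, l.captured_at
    FROM lifelog l JOIN tag t ON t.lifelog_id = l.id
    WHERE t.name = ? AND l.location = ?
""", ("conference", "Daejeon")).fetchall()
print(rows)
```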