• Title/Summary/Keyword: Incomplete Data (불완전한 데이터)


Study on the Development Direction of Domestic Proptech Company: Focusing on the Real Estate Platform Information Provision Function (국내 프롭테크 기업의 발전방향에 대한 연구: 부동산 플랫폼 정보제공 기능을 중심으로)

  • Lee, Jungyun;Oh, Kyong Joo;Ahn, Jae Joon
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.55-76
    • /
    • 2021
  • The real estate market is a representative imperfectly competitive market. Real estate information is collected and utilized in a closed environment, and market participants must spend a great deal of time, effort, and money to acquire it. Korea's public real estate data are increasing year by year, but they are scattered across the relevant ministries, making them difficult to search and analyze, and the industry that uses them remains underdeveloped. With the recent 4th industrial revolution, the proptech industry has developed to efficiently provide necessary information to the real estate market. In this study, based on the cases of major companies in the real estate platform field among proptech companies, we examined the types of information provided to users and, conversely, explored ways to utilize the data collected from users. The results of this study are expected to provide theoretical and practical implications for reducing information asymmetry in the real estate market and to contribute to the development of the real estate industry.

A Split Synchronizable Mobile Transaction Processing Model for e-Business Applications in Ubiquitous Computing Environment (편재형 컴퓨팅 환경에서의 e-비즈니스 응용을 위한 분할 동기화 이동 트랜잭션 처리 모델)

  • Choi, Mi-Seon;Kim, Young-Kuk
    • The KIPS Transactions:PartD
    • /
    • v.11D no.4
    • /
    • pp.783-798
    • /
    • 2004
  • An e-business client application in a ubiquitous mobile computing environment may become disconnected from the enterprise server due to broken communication connections caused by the limitations of mobile computing environments (limited battery life of the mobile device, low-bandwidth communication, incomplete wireless communication infrastructure, etc.). It is even possible that a mobile client application intentionally operates in disconnected mode to reduce communication cost and the power consumption of the mobile device. We use “data hoarding” as a means of providing local autonomy, allowing transactions to be processed and committed on the mobile host despite disconnection. The key problem in this approach is the synchronization problem of serializing potentially conflicting updates from disconnected clients on master objects of the server database. In this paper, we present a new transaction synchronization method that splits a transaction into a set of independent component transactions and assigns each component a synchronization priority, taking into consideration the likelihood of use and of conflicts at the server. Synchronization is performed component by component based on synchronization priority. After the preferred component of a mobile transaction succeeds in synchronization with the server, the mobile transaction can pre-commit at the server. A pre-committed transaction's updated value is made visible at the server before the final commit of the transaction. Synchronization of components with low synchronization priority can be delayed in adaptation to wireless bandwidth and computing resources. As a result, the availability of important data updated by the mobile client is increased, and the limited wireless bandwidth and computing resources can be utilized to the fullest.
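
As an illustration of the split-synchronization idea, here is a minimal, hypothetical sketch: a locally committed mobile transaction is split into component transactions, each carrying a synchronization priority; components are synchronized in priority order, and the transaction pre-commits at the server once its preferred component succeeds. All class names, the version-based conflict check, and the deferral policy are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ComponentTxn:
    object_id: str        # master object on the server database
    new_value: Any        # update produced while disconnected
    read_version: int     # version hoarded on the mobile host
    priority: int         # higher = synchronized earlier

@dataclass
class MobileTxn:
    components: List[ComponentTxn] = field(default_factory=list)
    state: str = "local-committed"

def synchronize(txn: MobileTxn, db: Dict[str, Any], versions: Dict[str, int]) -> str:
    """Sync components in priority order; pre-commit after the first success."""
    deferred = []
    for comp in sorted(txn.components, key=lambda c: c.priority, reverse=True):
        # Optimistic check (illustrative): the master object must not have
        # changed since it was hoarded; otherwise defer conflict resolution.
        if versions.get(comp.object_id, 0) != comp.read_version:
            deferred.append(comp)
            continue
        db[comp.object_id] = comp.new_value            # made visible at server
        versions[comp.object_id] = comp.read_version + 1
        if txn.state == "local-committed":
            txn.state = "pre-committed"                # visible before final commit
    txn.state = "committed" if not deferred else txn.state
    return txn.state
```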

The Impact of the Mobile Application on Off-Line Market: Case in Call Taxi and Kakao Taxi (모바일 어플리케이션이 오프라인 시장에 미치는 영향: 콜택시와 카카오택시를 중심으로)

  • Kyeongjin Lee;Jaehong Park
    • Information Systems Review
    • /
    • v.18 no.4
    • /
    • pp.141-154
    • /
    • 2016
  • Mobile applications are growing explosively with the advent of a new technology, the smartphone. A mobile application is a new marketing channel and serves as a start-up platform. This study examines the effect of mobile applications on the off-line market. Despite continuously declining demand for taxi service, paradoxically, the supply of taxi service has increased. The taxi industry can be categorized into general taxi and call taxi. General taxi is hit-or-miss and inefficient because the driver has to search for passengers. Because a call taxi responds to a passenger's request, it is more efficient than a general taxi. However, the current defective passenger-driver matching system and insufficient taxi driver management hinder the development of the call taxi market. Differences in differences (DID) is an econometric methodology that examines whether an event has a meaningful influence. This research uses DID to investigate the effect of the Kakao Taxi application on the call taxi industry. Furthermore, it examines the effect of major companies' reckless diversification, which is considered unethical behavior. Call taxi passenger data from August 2014 to July 2015 were collected, with designated driving service passenger data from the same period serving as the control group.
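
For reference, the DID estimator such a study relies on can be written as follows, with call taxi ridership as the treated outcome (post = after the Kakao Taxi launch) and designated driving service ridership as the control; this is the textbook form, not necessarily the paper's exact specification:

```latex
\[
\hat{\delta}_{DID}
  = \bigl(\bar{Y}_{\text{call taxi}}^{\,post} - \bar{Y}_{\text{call taxi}}^{\,pre}\bigr)
  - \bigl(\bar{Y}_{\text{control}}^{\,post} - \bar{Y}_{\text{control}}^{\,pre}\bigr),
\]
equivalently the coefficient $\delta$ in the regression
\[
Y_{it} = \beta_0 + \beta_1\,\mathrm{Treat}_i + \beta_2\,\mathrm{Post}_t
       + \delta\,(\mathrm{Treat}_i \times \mathrm{Post}_t) + \varepsilon_{it}.
\]
```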

A Case Study on Big Data Analysis of Performing Arts Consumer for Audience Development (관객개발을 위한 공연예술 소비자 빅데이터 분석 사례 고찰)

  • Kim, Sun-Young;Yi, Eui-Shin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.286-299
    • /
    • 2017
  • The Korean performing arts sector has been facing stagnation due to oversupply, the lack of an effective distribution system, and insufficient business models. To overcome these difficulties, it is necessary to improve the efficiency and accuracy of marketing by using more objective market data, and to secure audience development and loyalty. This study takes the view that Big Data could provide more general and accurate statistics and could ultimately promote tailored services for performances. We examine the first case of Big Data analysis conducted by a credit card company, as well as Big Data's characteristics, analytical techniques, and the theoretical background of performing arts consumer analysis. The purpose of this study is to identify the meaning and limitations of this Big Data analysis case on the performing arts and ways to overcome those limitations. The case study identified the following limitations: incompleteness of the credit card data on performance buyers, limited verification of existing theory, low utilization, and limited analysis of consumer propensity and purchase drivers. As solutions, the study suggests making genres and individual performances identifiable, collecting qualitative information such as spectator information that can reveal trends and purchase factors, combining the data with surveys, and identifying purchase motives through mashups with social data. This research is ultimately a starting point for how performing arts consumers should be studied in the Big Data era and what changes should be sought. Based on our results, we expect more concrete qualitative analysis cases for audience development and the continued development of Big Data analysis and processing solutions that accurately represent the performing arts market.

Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers (다중 섬광결정을 이용한 고해상도 PET의 불균일/불완전 데이터 보정기법 연구)

  • Lee, Jae-Sung;Kim, Soo-Mee;Lee, Kwon-Song;Sim, Kwang-Souk;Rhe, June-Tak;Park, Kwang-Suk;Lee, Dong-Soo;Hong, Seong-Jong
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.1
    • /
    • pp.52-60
    • /
    • 2008
  • Purpose: To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. Materials and Methods: A format for raw PET data storage and methods for converting listmode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements from the correction algorithms. Results: The optimal radial sampling interval and number of angular samples in terms of the sampling theorem and the sampling efficiency correction algorithm were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. Conclusion: A conversion method from histogram to sinogram was investigated for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple crystal layer PET data.
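
A minimal sketch of the pipeline the abstract describes, under assumed details (bin sizes, the efficiency map, and the gap-filling rule are my illustrations, not the paper's method): listmode events are rebinned into a sinogram, normalized by per-bin sampling efficiency, and detector-gap bins are filled by radial interpolation.

```python
import numpy as np

def listmode_to_sinogram(events, n_radial=128, n_angles=120, r_max=1.0):
    """events: iterable of (r, phi) line-of-response coordinates."""
    sino = np.zeros((n_angles, n_radial))
    for r, phi in events:
        ir = int((r + r_max) / (2 * r_max) * n_radial)    # radial bin
        ia = int((phi % np.pi) / np.pi * n_angles)        # angular bin
        if 0 <= ir < n_radial:
            sino[ia, ir] += 1
    return sino

def correct_sinogram(sino, efficiency, gap_mask):
    """Divide by sampling efficiency, then interpolate across gap bins."""
    out = np.where(efficiency > 0, sino / np.maximum(efficiency, 1e-6), 0.0)
    for ia in range(out.shape[0]):
        row, gaps = out[ia], gap_mask[ia]
        good = ~gaps
        if gaps.any() and good.any():
            row[gaps] = np.interp(
                np.flatnonzero(gaps), np.flatnonzero(good), row[good])
    return out
```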

Health Risk Management using Feature Extraction and Cluster Analysis considering Time Flow (시간흐름을 고려한 특징 추출과 군집 분석을 이용한 헬스 리스크 관리)

  • Kang, Ji-Soo;Chung, Kyungyong;Jung, Hoill
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.1
    • /
    • pp.99-104
    • /
    • 2021
  • In this paper, we propose health risk management using feature extraction and cluster analysis that consider the flow of time. The proposed method proceeds in three steps. The first is the pre-processing and feature extraction step: it collects the user's lifelog using a wearable device, removes incomplete data, errors, noise, and contradictory data, and processes missing values. Then, for feature extraction, important variables are selected through principal component analysis, and related data are grouped through correlation coefficients and covariance. To analyze the features extracted from the lifelog, dynamic clustering is performed using the K-means algorithm in consideration of the passage of time. New data are clustered through a similarity distance measure based on the increment of the sum of squared errors. The next step extracts information about the clusters while considering the passage of time. Using a health decision-making system built on the feature clusters, risks can then be managed through factors such as physical characteristics, lifestyle habits, disease status, risk of health care events, and predictability. The performance evaluation compares the proposed method with fuzzy and kernel-based clustering using Precision, Recall, and F-measure, and the proposed method performs best. Therefore, through the proposed method, it is possible to accurately predict and appropriately manage a user's potential health risks by using similarity with other patients.
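
A minimal sketch of the clustering step, under assumed details (the PCA dimensionality, k, and the exact assignment rule are my illustrations, not the authors' implementation): lifelog windows are reduced with PCA and clustered with K-means, and each new point is assigned to the cluster whose sum of squared errors (SSE) would grow the least, which for fixed centroids is simply the nearest centroid.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def fit_window(lifelog_window: np.ndarray, n_components: int = 3, k: int = 4):
    """Fit PCA + K-means on one time window of pre-processed lifelog features."""
    pca = PCA(n_components=n_components).fit(lifelog_window)
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    km.fit(pca.transform(lifelog_window))
    return pca, km

def assign_by_sse_increment(pca: PCA, km: KMeans, x: np.ndarray) -> int:
    """Assign a new lifelog sample to the cluster with the smallest SSE increase."""
    z = pca.transform(x.reshape(1, -1))[0]
    sse_increments = ((km.cluster_centers_ - z) ** 2).sum(axis=1)
    return int(np.argmin(sse_increments))

# Usage: refit per time window so clusters track changes in habits over time.
window = np.random.rand(200, 10)          # 200 samples x 10 lifelog features
pca, km = fit_window(window)
print(assign_by_sse_increment(pca, km, np.random.rand(10)))
```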

A Study on the Imperfect Debugging Effect on Release Time of Dedicated Developing Software (불완전디버깅이 주문형 개발소프트웨어의 인도시기에 미치는 영향 연구)

  • Che Gyu Shik
    • Journal of Information Technology Applications and Management
    • /
    • v.11 no.4
    • /
    • pp.87-94
    • /
    • 2004
  • The software reliability growth model (SRGM) has been developed to estimate reliability measures such as the remaining number of faults, the failure rate, and the reliability of software in the development stage. Most such models assume that the faults detected during testing are eventually removed; that is, they study the SRGM under the assumption that detected faults are perfectly removed. Fault removal efficiency, however, is imperfect, and this is widely recognized. It is very difficult to remove a detected fault perfectly, because fault detection is not easy and new errors may be introduced during debugging and correction. Fault removal efficiency may therefore influence the SRGM or the cost of developing software. It is a very useful measure for software under development, helpful for the developer in evaluating debugging efficiency and, moreover, in estimating the additional workload necessary. It is thus important to evaluate the effect of imperfect debugging in terms of the SRGM and cost, which may influence the optimal release time and operational budget. In this paper, I extend the commonly used reliability and cost models to the imperfect debugging case.
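
To make the imperfect-debugging effect concrete, here is a standard textbook extension of the Goel-Okumoto SRGM (an illustration; the models extended in the paper may differ). If $a$ is the initial fault content, $b$ the detection rate, and each detected fault is actually removed with fault removal efficiency $p \le 1$, then detection is proportional to the faults not yet removed:

```latex
\[
\frac{dm(t)}{dt} = b\bigl(a - p\,m(t)\bigr), \quad m(0)=0
\;\Longrightarrow\;
m(t) = \frac{a}{p}\bigl(1 - e^{-pbt}\bigr),
\qquad
\text{remaining faults: } a - p\,m(t) = a\,e^{-pbt}.
\]
```

With $p = 1$ this reduces to the perfect-debugging model $m(t) = a(1 - e^{-bt})$; smaller $p$ slows reliability growth and pushes the optimal release time later.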


Uncertainty Improvement of Incomplete Decision System using Bayesian Conditional Information Entropy (베이지언 정보엔트로피에 의한 불완전 의사결정 시스템의 불확실성 향상)

  • Choi, Gyoo-Seok;Park, In-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.6
    • /
    • pp.47-54
    • /
    • 2014
  • Under the indiscernibility relation of rough set theory, the inevitable superposition and inconsistency of data make attribute reduction very important in information systems. Rough set theory has difficulty with the difference in attribute reduction between consistent and inconsistent information systems. In this paper, we propose a new uncertainty measure and an attribute reduction algorithm that use the Bayesian posterior probability for correlation analysis between condition and decision attributes. We compare the proposed method with conditional information entropy in addressing the uncertainty of inconsistent information systems. As a result, our method is more accurate than conditional information entropy in dealing with uncertainty via the mutual information of the condition and decision attributes of an information system.
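
For illustration only (assumed details, not the paper's algorithm), here is how the baseline quantity, the conditional information entropy H(D|C) of a decision attribute D given condition attributes C, can be computed from a decision table using empirical posterior probabilities within each equivalence class of the indiscernibility relation:

```python
from collections import Counter, defaultdict
import math

def conditional_entropy(rows, cond_idx, dec_idx):
    """rows: list of tuples; cond_idx: indices of condition attributes;
    dec_idx: index of the decision attribute. Returns H(D|C) in bits."""
    n = len(rows)
    by_class = defaultdict(list)
    for row in rows:  # group rows by their condition-attribute values
        by_class[tuple(row[i] for i in cond_idx)].append(row[dec_idx])
    h = 0.0
    for decisions in by_class.values():
        p_c = len(decisions) / n
        counts = Counter(decisions)
        # Posterior P(d | c) within each equivalence class.
        h -= p_c * sum((k / len(decisions)) * math.log2(k / len(decisions))
                       for k in counts.values())
    return h

# An inconsistent table: identical conditions, different decisions -> H(D|C) > 0.
table = [("a", 1, "yes"), ("a", 1, "no"), ("b", 2, "yes"), ("b", 3, "yes")]
print(conditional_entropy(table, cond_idx=(0, 1), dec_idx=2))  # 0.5 bits
```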

Optimal Database Audit Timing for Data Quality Enhancement (자료의 질 향상을 위한 데이타베이스의 최적감사시점)

  • 김기수
    • The Journal of Information Technology and Database
    • /
    • v.3 no.1
    • /
    • pp.25-43
    • /
    • 1996
  • For an information system to be effective, the integrity of the data from which information is derived must first be guaranteed. In particular, as society increasingly depends on computer-based information systems to support its diverse activities, the need to maintain and manage the quality of the data used in information systems at an appropriate level has become ever more pressing. Nevertheless, managers are still not provided with the up-to-date, accurate data they need for effective decision-making and action [Nesbit 1985], and the simplest and most common reason that information systems underperform expectations is that the data entered into them are inaccurate or incomplete [Ballou and Pazer 1989]. Low-quality data cause not only immediate economic losses but also many indirect losses that are difficult to measure economically. Moreover, even in a well-managed system, errors arise in stored data over time from various causes. To maintain data quality at an appropriate level, such errors must be detected and corrected periodically; this task is called a database audit. This paper presents a process for determining, through a general cost model, the optimal database audit timing for periodically improving the quality of data stored in a database, and discusses related issues. The database is assumed to consist of several data groups that differ both in error occurrence rates and in the consequences of errors. The error accumulation process in each data group is modeled as a stochastic rather than a deterministic process, and the model reflects not only the occurrence of errors but also stochastically varying error magnitudes, making it more realistic.
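
As a simplified, deterministic special case of such a cost model (my illustration; the paper models error accumulation and error magnitude stochastically, per data group): suppose errors accrue at rate $\lambda$, each uncorrected error costs $c$ per unit time, and an audit costing $K$ detects and corrects all errors. The long-run average cost per unit time and the optimal audit interval are then:

```latex
\[
C(T) = \frac{1}{T}\Bigl(K + \int_0^T c\,\lambda t\,dt\Bigr)
     = \frac{K}{T} + \frac{c\,\lambda\,T}{2},
\qquad
T^{*} = \arg\min_T C(T) = \sqrt{\frac{2K}{c\,\lambda}} .
\]
```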


Software Reliability Growth Model with the Testing Effort for Large System (대형 시스템 개발을 위한 시험능력을 고려한 소프트웨어 신뢰도 성장 모델)

  • Lee Jae-ki;Lee Jae-jeong;Nam Sang-sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11A
    • /
    • pp.987-994
    • /
    • 2005
  • Most of the proposed SRGMs assume perfect debugging, in which defects are removed as soon as they are detected during system tests. In practice, however, detected defects are corrected only after some fixed delay, or new faults are introduced into the software under an imperfect debugging environment. To address these problems, we discuss a formal software reliability model that considers the testing effort spent on the detection and correction of software defects, and using this model we estimate software reliability under conditions close to practice.
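
For reference, a widely used testing-effort-dependent SRGM takes the following form (illustrative; the paper's exact formulation may differ), where $a$ is the initial fault content, $b$ the detection rate per unit effort, and $W(t)$ the cumulative testing effort, often modeled by a Rayleigh curve:

```latex
\[
m(t) = a\bigl(1 - e^{-b\,W(t)}\bigr),
\qquad
W(t) = \alpha\bigl(1 - e^{-\beta t^{2}/2}\bigr).
\]
```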