• Title/Summary/Keyword: Query Transaction

Search Results: 40 (processing time: 0.025 seconds)

Development of a Stock Auto-Trading System using Condition-Search

  • Gyu-Sang Cho
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.203-210
    • /
    • 2023
  • In this paper, we develop a stock trading system that automatically buys and sells stocks through Kiwoom Securities' HTS. The system is built with Kiwoom Open API+ and the Python programming language. The trading strategy is based on an enhanced server-side query method called Condition-Search. A Condition-Search script is edited in the Kiwoom Hero 4 HTS and stored on the Kiwoom server; because the script can be modified and replaced as needed, the trading strategy is easy to change. The HTS supports up to ten Condition-Search scripts, so various trading methods can be applied. However, Kiwoom Open API+ places restrictions on transactions and on Condition-Search. The restriction on transaction count and frequency is avoided by adjusting the time interval between transactions, and the lack of threading support is overcome with IPC (Inter-Process Communication) across multiple login IDs.
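The abstract's first workaround, spacing transactions in time to stay under the API's count and frequency limits, amounts to a throttle. Below is a minimal, hypothetical Python sketch of that idea; `send_order` and the 0.25-second interval are placeholders, not the actual Kiwoom Open API+ call or its documented limit.

```python
import time

class Throttle:
    """Enforce a minimum interval between successive API transactions."""
    def __init__(self, min_interval_sec: float):
        self.min_interval = min_interval_sec
        self._last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep the gap between calls >= min_interval.
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()

throttle = Throttle(min_interval_sec=0.25)  # assumed limit; tune to broker policy

def send_order(symbol: str, qty: int, side: str) -> None:
    """Hypothetical stand-in for the real Kiwoom Open API+ order call."""
    throttle.wait()
    print(f"order: {side} {qty} x {symbol}")  # the real call would go to the HTS server

for sym in ["005930", "000660"]:  # example KRX ticker codes
    send_order(sym, 10, "BUY")
```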

Issues of IPR Database Construction through Interdisciplinary Research (학제간 연구를 통한 IPR 데이터베이스 구축의 쟁점)

  • Kim, Dong Yong;Park, Young Chul
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.8
    • /
    • pp.59-69
    • /
    • 2017
  • Humanities and social sciences researchers and database experts have teamed up to build a database of IPR materials prepared by the Institute of Pacific Relations (IPR). This paper presents the issues inherent in constructing the database and our solutions for ensuring the quality of the IPR materials. For accessibility, we maintain the database on the Web so that researchers can access it via web browsers; for convenience of construction, we provide an integrated interface in which researchers can perform all tasks; for completeness of the materials entered, we support responsible input and responsible approval, which identify who is accountable for each IPR material; and for immediacy of approval, we support an interactive approval process that keeps pace with researchers' input. We also use database design, query processing, transaction management, and search and sorting techniques to ensure the correctness of the IPR materials entered. In particular, through concurrency control based on existence dependency relationships between records, we ensure consistency between the operating system files and their paths. Future work includes content search, database download and upload, and copyright-related work on the IPR materials.
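The existence-dependency idea, where a file record may only be entered if the record it depends on exists, enforced within one transaction, can be sketched with sqlite3 foreign keys. The table names and data below are illustrative assumptions, not the project's actual schema.

```python
import sqlite3

# Toy schema (an assumption): a directory-path record must exist before a
# file record that depends on it can be entered.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE dir_path (id INTEGER PRIMARY KEY, path TEXT UNIQUE)")
conn.execute("""CREATE TABLE ipr_file (
    id INTEGER PRIMARY KEY,
    dir_id INTEGER NOT NULL REFERENCES dir_path(id),
    name TEXT NOT NULL)""")

with conn:  # one transaction: both rows commit together or not at all
    cur = conn.execute("INSERT INTO dir_path(path) VALUES (?)", ("/ipr/1925",))
    conn.execute("INSERT INTO ipr_file(dir_id, name) VALUES (?, ?)",
                 (cur.lastrowid, "conference_minutes.pdf"))

try:
    with conn:  # violating the existence dependency aborts the transaction
        conn.execute("INSERT INTO ipr_file(dir_id, name) VALUES (?, ?)",
                     (999, "orphan.pdf"))
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # no such directory record: the insert rolls back
```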

XMDR Hub Framework for Business Process Interoperability based on Store-Procedure (저장-프로시저 기반의 비즈니스 프로세스 상호운용을 위한 XMDR Hub 프레임워크)

  • Moon, Seok-Jae;Jung, Gye-Dong;Kang, Seok-Joong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2207-2218
    • /
    • 2008
  • Various kinds of business processes exist within an enterprise. These processes achieve business goals while being operated and controlled through EAI solutions, and legacy systems such as ERP and PDM must cooperate and interoperate extensively. In practice, data is exchanged during business process transactions through stored-procedure-based queries on the legacy systems, and this raises problems of schema conversion, matching, mapping, and other forms of heterogeneity in data interoperation. We propose a business process interoperability framework based on an XMDR Hub that guarantees interoperability between legacy systems using processes composed of stored-procedure-based SQL queries. With this framework, data interoperation between legacy systems is easy to carry out when a business process executes.
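The exchange pattern the abstract describes, business-process steps executed as stored-procedure-based SQL on the legacy systems, reduces on the client side to a procedure call. A hypothetical Python sketch follows; the driver (mysql.connector), the connection details, and the procedure name sync_part_master are all assumptions, since the abstract names none of them.

```python
# Hypothetical names throughout: the XMDR Hub's actual procedures are not
# given in the abstract. This shows the general PEP 249 pattern of invoking
# a stored procedure on a legacy system from Python.
import mysql.connector  # assumed driver; any DB-API module works similarly

conn = mysql.connector.connect(host="legacy-erp", database="erp",
                               user="hub", password="...")
cur = conn.cursor()
# Push one business-process step to the legacy side as a stored-procedure
# call, so schema mapping stays encapsulated in the procedure, not the caller.
cur.callproc("sync_part_master", ("PART-1001",))
conn.commit()
cur.close()
conn.close()
```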

Frequent Items Mining based on Regression Model in Data Streams (스트림 데이터에서 회귀분석에 기반한 빈발항목 예측)

  • Lee, Uk-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.1
    • /
    • pp.147-158
    • /
    • 2009
  • Data in a stream environment is massive, continuous, and unbounded, yet stream processing tasks such as query processing and data analysis must run within a limited capacity of disk or memory. In such an environment, traditional frequent pattern discovery for transaction databases cannot be applied directly, because it is difficult to continuously track whether each item in a continuous stream is frequent. In this paper, we propose a method for predicting frequent items in a continuous data stream using a regression model. By constructing the regression model over the stream, we can use it as a prediction model for items whose status is still indefinite. A variety of experiments show that the proposed method can be used efficiently in a stream data environment.
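As a rough illustration of predicting frequent items with a regression model (the paper's exact estimator is not given in the abstract), the sketch below fits a least-squares line to an item's per-window counts and flags the item when the extrapolated count crosses an assumed support threshold.

```python
# A minimal sketch, not the paper's exact model: fit a linear trend to an
# item's count per time window and predict whether it becomes frequent.
import numpy as np

window_counts = {"item_a": [3, 5, 8, 12, 18],   # toy counts per window
                 "item_b": [9, 8, 8, 7, 7]}
THRESHOLD = 20  # assumed minimum support per window

for item, counts in window_counts.items():
    t = np.arange(len(counts))
    slope, intercept = np.polyfit(t, counts, deg=1)  # least-squares line
    predicted_next = slope * len(counts) + intercept  # extrapolate one window
    print(f"{item}: predicted next-window count = {predicted_next:.1f}",
          "-> frequent" if predicted_next >= THRESHOLD else "-> not frequent")
```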

Application Plan of Column-Family Stores in the Big Data Environment (빅데이터환경에서의 칼럼-패밀리 저장소 활용방안)

  • Park, Sungbum;Lee, Sangwon;Ahn, Hyunsup;Jung, In-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.237-239
    • /
    • 2013
  • In the Big Data environment, data is stored against key values in column-family stores such as Cassandra, HBase, Hypertable, and Amazon SimpleDB. In this paper, referring mainly to Cassandra, we define column-family data stores and their structure. We then examine their characteristics, including consistency, transactions, availability, retrieval (basic and advanced queries) with CQL (Cassandra Query Language), and scalability. Finally, we identify the kinds of applications for which column-family stores are appropriate or inappropriate.
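As a companion to the CQL-based retrieval the abstract mentions, here is a minimal sketch using the DataStax Python driver against an assumed local Cassandra node; the keyspace, table, and data are illustrative, not taken from the paper.

```python
# Assumes a Cassandra node on localhost and the cassandra-driver package.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
session.execute("""CREATE TABLE IF NOT EXISTS demo.events (
    user_id text, ts timestamp, action text,
    PRIMARY KEY (user_id, ts))""")  # partition key + clustering column

session.execute(
    "INSERT INTO demo.events (user_id, ts, action) VALUES (%s, toTimestamp(now()), %s)",
    ("u1", "login"))

# Basic query: all events in one partition, ordered by the clustering column.
for row in session.execute("SELECT ts, action FROM demo.events WHERE user_id = %s", ("u1",)):
    print(row.ts, row.action)
cluster.shutdown()
```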


Temporal Database Management Testbed (시간 지원 데이타 베이스 관리 시험대)

  • Kim, Dong-Ho;Jeon, Geun-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.1-13
    • /
    • 1994
  • The temporal database management testbed supports both valid time and transaction time. In this paper, we discuss the design and implementation of a main-memory testbed for a temporal database management system. The testbed consists of a syntactic analyzer, a semantic analyzer, a code generator, and an interpreter. The syntactic analyzer builds a parse tree from a temporal query; the semantic analyzer then checks it for correctness against the system catalog; the code generator builds an execution tree termed an update network, for which we employ incremental view materialization. After the execution tree is built, the interpreter activates each of its nodes. The indexing structure and concurrency control of the testbed are also discussed.
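The update network with incremental view materialization can be sketched abstractly. The toy Python below is an assumption-laden illustration (the testbed's actual node types are not given here): each node caches its result and recomputes only when marked dirty, and a valid-time filter stands in for a temporal operator.

```python
# Toy update network: nodes cache results (materialized views) and
# re-derive them only when an input has been marked dirty.
class Node:
    def __init__(self, inputs=()):
        self.inputs = list(inputs)
        self.cache = None
        self.dirty = True

    def mark_dirty(self):
        self.dirty = True

    def evaluate(self):
        if self.dirty:  # incremental: skip recomputation when nothing changed
            self.cache = self.compute([n.evaluate() for n in self.inputs])
            self.dirty = False
        return self.cache

class Scan(Node):
    def __init__(self, rows):
        super().__init__()
        self.rows = rows
    def compute(self, _):
        return list(self.rows)

class ValidAtFilter(Node):
    """Keep rows whose [valid_from, valid_to) interval covers a query time."""
    def __init__(self, child, t):
        super().__init__([child])
        self.t = t
    def compute(self, inputs):
        return [r for r in inputs[0] if r["valid_from"] <= self.t < r["valid_to"]]

rows = [{"name": "Kim", "valid_from": 1990, "valid_to": 1995},
        {"name": "Lee", "valid_from": 1993, "valid_to": 1999}]
view = ValidAtFilter(Scan(rows), t=1994)
print(view.evaluate())  # materialized once; reused until an input is marked dirty
```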


Asynchronous Communication Technique for Heavy Data Output Performance Improvement on Multi Tier Online Service Environment (다중 Tier 온라인 서비스 상에서 대량 데이터 출력 성능 향상을 위한 비동기 통신 기법)

  • Sung-Lyong Kim;Jae-Oh Oh;Yoon-Ho Jo;Sang-Keun Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.11a
    • /
    • pp.1195-1198
    • /
    • 2008
  • This paper proposes a technique for delivering the large data volumes of an online service to clients quickly and accurately across multiple tiers. Processing large data quickly in an online service with many tiers is difficult: the main requirements are minimizing inter-tier latency, properly partitioning transactions for communication with the network bandwidth in mind, and improving conversion speed when data moves between heterogeneous systems. Solving these problems alone, however, does not yield a remarkable performance improvement, because data communication driven by partial queries still occurs repeatedly. By the nature of online services, large data is partitioned and transmitted in pieces so that many users' transactions can be handled efficiently, and complying with this scheme inevitably increases the number of repetitions in proportion to the data size. We therefore focused on reducing the repetition count and ran performance tests on large-data processing in an online service. The results show that the more the repetitions are minimized, the better the performance is sustained, with a larger effect than improving any other technical factor.
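The finding that performance hinges on minimizing partial-query round trips suggests overlapping those round trips asynchronously. The sketch below is hypothetical (`fetch_page` is a stand-in for a real tier-to-tier call, and the page size is arbitrary), showing partial queries issued concurrently with asyncio rather than strictly one after another.

```python
# Overlap the repeated partial-query round trips instead of serializing them.
import asyncio

PAGE_SIZE = 500  # arbitrary partition size for the partial queries

async def fetch_page(offset: int) -> list:
    await asyncio.sleep(0.05)  # simulated network + query latency per round trip
    return [f"row{offset + i}" for i in range(PAGE_SIZE)]

async def fetch_all(total_rows: int) -> list:
    offsets = range(0, total_rows, PAGE_SIZE)
    pages = await asyncio.gather(*(fetch_page(o) for o in offsets))  # concurrent round trips
    return [row for page in pages for row in page]

rows = asyncio.run(fetch_all(total_rows=5000))
print(len(rows), "rows fetched with overlapping partial queries")
```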

Technique for Concurrent Processing Graph Structure and Transaction Using Topic Maps and Cassandra (토픽맵과 카산드라를 이용한 그래프 구조와 트랜잭션 동시 처리 기법)

  • Shin, Jae-Hyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.3
    • /
    • pp.159-168
    • /
    • 2012
  • In new IT environments such as SNS, Cloud, and Web 3.0, relationships have become an important factor, and these relationships generate transactions. However, existing relational databases and graph databases do not process both the graph structures representing relationships and the transactions they generate. In this paper, we propose a technique that processes graph structures and transactions concurrently in a scalable complex network system. The proposed technique stores and navigates graph structures and transactions together using the Topic Maps data model. Topic Maps is an ontology language for implementing the Semantic Web (Web 3.0) and has been used to navigate information through associations between information resources. The architecture of the proposed technique is designed and implemented on Cassandra, a column-family NoSQL store, so that Big Data-scale data can be handled with distributed processing. Finally, experiments compared storage and query processing on a typical RDBMS (Oracle) and on the proposed technique, with the same data source and the same queries. They show that relationships can be expressed without joins, making the technique a viable alternative to the RDBMS.
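The join-free navigation claim can be illustrated with a toy model of topics and associations kept as wide rows, the way a column-family store would keep a topic's edges in a single row. This is a plain-Python sketch under that assumption, not the paper's implementation.

```python
# Toy topic-association store: each topic is one "row" whose columns are its
# associations, so navigation is a row lookup rather than a relational join.
from collections import defaultdict

# topic -> {association_type: [target topics]}
topics = defaultdict(lambda: defaultdict(list))

def associate(src: str, assoc_type: str, dst: str) -> None:
    topics[src][assoc_type].append(dst)
    topics[dst][assoc_type + "_of"].append(src)  # reverse edge for navigation

associate("user:kim", "follows", "user:lee")
associate("user:kim", "wrote", "post:42")

# Navigation is a single-row lookup, no join across tables.
print(dict(topics["user:kim"]))   # outgoing associations
print(dict(topics["post:42"]))    # incoming, via the stored reverse edge
```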

A Review of Science of Databases and Analysis of Its Case Studies (데이터베이스의 과학에 대한 고찰 및 연구 사례 분석)

  • Suh, Young-Kyoon;Kim, Jong Wook
    • Journal of KIISE
    • /
    • v.43 no.2
    • /
    • pp.237-245
    • /
    • 2016
  • In this paper we introduce a novel database research area called science of databases (SoDB) and carry out a comprehensive analysis of its case studies. SoDB aims to better understand interesting phenomena observed across multiple database management systems (DBMSes). While mathematical and engineering work in the database field has been dominant, less attention has been given to scientific approaches through which DBMSes can be better understood. Scientific investigations can lead to better engineered designs through deeper understanding of query optimizers and transaction processing. The SoDB research has investigated several interesting phenomena observed across different DBMSes and provided several engineering implications based on our uncovered results. In this paper we introduce a novel scientific, empirical methodology and describe the research infrastructure to enable the methodology. We then review each of a selected group of phenomena studied and present an identified structural causal model associated with each phenomenon. We also conduct a comprehensive analysis on the case studies. Finally, we suggest future directions to expand the SoDB research.

Relational Database SQL Test Auto-scoring System

  • Hur, Tai-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.11
    • /
    • pp.127-133
    • /
    • 2019
  • SQL is the most common language in data processing, so most colleges offer SQL in their curriculum. In this research, an auto-scoring SQL test system is proposed to make SQL education more effective. Automatic scoring is handled by algorithms rather than by an expensive commercial DBMS (Database Management System), and produced satisfactory results. A test question bank was built from 'personnel management' and 'academic management' domains, and users receive a different set of questions each time. Scoring divides questions into two groups: those that do not change tables (SELECT) and those that do (UPDATE, INSERT, DELETE). For a search question, the correct answer and the student's response are both executed and their result tables are compared to evaluate the answer. Since modification, insertion, and deletion actually change the data tables, the data is restored with the ROLLBACK command after each evaluation. The system was implemented and tested 772 times on 88 students of the Computer Information Division of our college. The results show an average scoring time of 0.052 seconds for a ten-question test, which is distinctive considering that a human grader cannot process multiple responses at the same time. In the near future, we plan to extend the system to take question difficulty into account.
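The two scoring paths (compare result sets for SELECT; execute, snapshot, and ROLLBACK for UPDATE/INSERT/DELETE) can be sketched with sqlite3. The schema and queries below are toy assumptions; the paper's question bank and DBMS are not specified here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit; we manage BEGIN/ROLLBACK explicitly
conn.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
    INSERT INTO emp VALUES (1, 'Kim', 'HR'), (2, 'Lee', 'IT');
""")

def score_select(answer_sql: str, response_sql: str) -> bool:
    # SELECT does not change tables: run both and compare result sets.
    return (conn.execute(answer_sql).fetchall()
            == conn.execute(response_sql).fetchall())

def score_update(answer_sql: str, response_sql: str, check_sql: str) -> bool:
    # DML changes the table, so snapshot the table state after each run
    # and ROLLBACK to restore the original data in between.
    states = []
    for sql in (answer_sql, response_sql):
        conn.execute("BEGIN")
        conn.execute(sql)
        states.append(conn.execute(check_sql).fetchall())
        conn.execute("ROLLBACK")
    return states[0] == states[1]

print(score_select("SELECT name FROM emp WHERE dept='IT'",
                   "SELECT name FROM emp WHERE dept = 'IT'"))
print(score_update("UPDATE emp SET dept='OPS' WHERE id=1",
                   "UPDATE emp SET dept = 'OPS' WHERE id = 1",
                   "SELECT * FROM emp ORDER BY id"))
```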