• Title/Summary/Keyword: store methods

Search Results: 476, Processing Time: 0.031 seconds

Transforming XML DTD to SQL Schema based on JDBC (XML DTD의 JDBC 기반 SQL 스키마로의 변환)

  • 이상태;주경수
    • Journal of Internet Computing and Services
    • /
    • v.3 no.1
    • /
    • pp.29-40
    • /
    • 2002
  • The exchange of information using XML, as in B2B electronic commerce, has become common, so an efficient method for storing XML messages in databases is needed. RDBMSs have been extended to ORDBMSs to support multimedia applications (e.g., Oracle8i, 9i, Informix, and SQL Server 2000), SQL2, the relational database standard, has been extended to SQL3 for ORDBs, and Java-based XML platforms such as J2EE are spreading. Efficient JDBC-based connection methods between XML applications and database systems are therefore necessary. In this paper, a methodology for transforming an XML DTD into an SQL3 schema is proposed. First, methods for transforming an XML DTD into an object model expressed as a UML class diagram are proposed; then, methods for mapping the transformed object model onto an SQL3 schema are proposed. This approach to transforming an XML DTD into an SQL3 schema for Java-based ORDBs such as Oracle8i, 9i, Informix, and SQL Server 2000 can be used in database design for building XML applications on an ORDB (an illustrative sketch of the mapping idea follows this entry).

  • PDF
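
A minimal sketch of the general mapping idea (not the paper's actual rules): each DTD element becomes a table, #PCDATA fields become columns, and repeated child elements become one-to-many link tables. The toy DTD, table layout, and column types below are invented for illustration; the paper derives its schema through an intermediate UML class diagram and targets SQL3/ORDB features.

```python
# Hypothetical sketch: map a toy DTD description to DDL statements.
# The element-spec format and the emitted SQL are illustrative only.

dtd = {
    # element: (scalar #PCDATA fields, repeated child elements)
    "order": (["order_date"], ["item"]),   # <!ELEMENT order (order_date, item*)>
    "item":  (["name", "price"], []),      # <!ELEMENT item (name, price)>
}

def dtd_to_sql3(dtd):
    """Emit one table per DTD element, then link tables for repeated children."""
    stmts = []
    for element, (fields, _children) in dtd.items():
        cols = [f"{element}_id INTEGER PRIMARY KEY"]
        cols += [f"{f} VARCHAR(255)" for f in fields]      # #PCDATA -> VARCHAR
        stmts.append(f"CREATE TABLE {element} ({', '.join(cols)});")
    for element, (_fields, children) in dtd.items():       # one-to-many links
        for child in children:
            stmts.append(
                f"CREATE TABLE {element}_{child} ("
                f"{element}_id INTEGER REFERENCES {element}, "
                f"{child}_id INTEGER REFERENCES {child});")
    return stmts

for stmt in dtd_to_sql3(dtd):
    print(stmt)
```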

Rearranged DCT Feature Analysis Based on Corner Patches for CBIR (content-based image retrieval) (CBIR을 위한 코너패치 기반 재배열 DCT특징 분석)

  • Lee, Jimin;Park, Jongan;An, Youngeun;Oh, Sangeon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.12
    • /
    • pp.2270-2277
    • /
    • 2016
  • In modern society, multimedia content is actively created and distributed. An enormous amount of multimedia information is produced daily, far exceeding the volume of past text-based information. As the need to store multimedia information efficiently and to search it easily has grown, various related methods have been actively studied. In particular, image retrieval methods that find a desired image in a video database or among multiple sequential images have attracted attention as a new field of image processing. The image retrieval method implemented in this paper uses the attributes of corner patches, based on the corner points of objects, to provide a new, efficient, and robust image search. After detecting the edges of the object within the image, straight lines are extracted using a Hough transform. Corner patches are formed by defining the intersections of the extracted straight lines as corner points. Feature vectors are constructed from the rearranged patches, and the similarity between images in the database is measured. Finally, for an accurate comparison between the proposed algorithm and existing algorithms, recall and precision, which are widely used in content-based image retrieval, were used for performance evaluation. For the images used in the experiment, the proposed method retrieved images more accurately than conventional image retrieval methods (a rough sketch of the pipeline follows this entry).
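
A rough Python sketch of the pipeline the abstract describes, under several assumptions: edge detection and Hough line extraction (e.g., with OpenCV) are taken as already done, lines are given in (rho, theta) form, and the 8x8 patch size, the energy-based "rearrangement" order, and cosine similarity are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT-II for patch features

def line_intersection(l1, l2):
    """Intersect two Hough lines given as (rho, theta); returns (x, y) or None."""
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < 1e-9:            # (nearly) parallel lines
        return None
    x, y = np.linalg.solve(a, np.array([r1, r2]))
    return int(round(x)), int(round(y))

def corner_patch_features(image, lines, patch=8, keep=16):
    """Feature vector: DCT of patches around line intersections, rearranged
    (sorted) by patch energy so matching is order-invariant. `image` is a
    2-D grayscale array; `lines` is a list of (rho, theta) pairs."""
    h, w = image.shape
    feats = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            pt = line_intersection(lines[i], lines[j])
            if pt is None:
                continue
            x, y = pt
            if patch // 2 <= x < w - patch // 2 and patch // 2 <= y < h - patch // 2:
                block = image[y - patch // 2:y + patch // 2,
                              x - patch // 2:x + patch // 2]
                feats.append(dctn(block.astype(float), norm="ortho").ravel())
    feats.sort(key=lambda v: -np.dot(v, v))     # "rearranged" step: energy order
    feats = feats[:keep] or [np.zeros(patch * patch)]
    return np.concatenate(feats)

def similarity(f1, f2):
    """Cosine similarity between two feature vectors (truncated to equal length)."""
    n = min(len(f1), len(f2))
    f1, f2 = f1[:n], f2[:n]
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))
```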

Real-time IoT Big Data Analysis Platform Requirements (실시간 IoT Big Data 분석 플랫폼 요건)

  • Kang, Sun-Kyoung;Lee, Hyun-Chang;Shin, Seong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.165-166
    • /
    • 2017
  • There is growing demand to receive data in real time from anywhere and to analyze it into meaningful information, and research on platforms for such analysis is actively underway. In this paper, we investigate which factors are important in solving the problems of collecting and analyzing IoT data in real time. How much a platform improves on existing collection and analysis methods can serve as the basis for judging the value of the data. What matters is collecting and storing data from many sensors accurately and quickly in real time, together with analysis methods that can derive value from the stored data. Accordingly, the key requirements for an analysis platform in an IoT environment are processing large volumes of data in real time and managing the data in a centralized way.

  • PDF

Migration Method for Efficient Management of Temporal Data (시간지원 데이터의 효율적인 관리를 위한 이동 방법)

  • Yun, Hong-Won
    • The KIPS Transactions: Part D
    • /
    • v.8D no.6
    • /
    • pp.813-822
    • /
    • 2001
  • In this paper we propose four data migration methods based on a time-segmented storage structure consisting of a past segment, a current segment, and a future segment: the Time Granularity method, the LST-GET (Least valid Start Time-Greatest valid End Time) method, the AST-AET (Average valid Start Time-Average valid End Time) method, and the Min-Overlap method. For each migration method we define the criterion that divides the segments and determines which entity versions are stored in each segment, and we measured query response times for each. When there are no LLTs (Long-Lived Tuples), the average response times of the AST-AET and LST-GET methods are smaller than that of the Time Granularity method. When LLTs exist, the performance of the LST-GET method degrades, while the AST-AET method still yields better query performance than both the Time Granularity and LST-GET methods. The Min-Overlap method achieves nearly the same query performance as the AST-AET method while using storage more efficiently (a sketch of one reading of the AST-AET criterion follows this entry).

  • PDF
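
The abstract does not spell out the AST-AET dividing criterion, so the following Python sketch encodes one plausible reading: versions whose valid interval ends before the average valid start time go to the past segment, versions starting after the average valid end time go to the future segment, and the rest stay current. The Version type and the strict comparisons are assumptions, not the paper's definition.

```python
from dataclasses import dataclass

@dataclass
class Version:
    key: str
    valid_start: float   # valid-time interval [valid_start, valid_end]
    valid_end: float

def ast_aet_segments(versions):
    """Partition entity versions into (past, current, future) segments using
    the average valid start time (AST) and average valid end time (AET).
    Assumes a non-empty version list."""
    ast = sum(v.valid_start for v in versions) / len(versions)
    aet = sum(v.valid_end for v in versions) / len(versions)
    past    = [v for v in versions if v.valid_end < ast]
    future  = [v for v in versions if v.valid_start > aet]
    current = [v for v in versions if v not in past and v not in future]
    return past, current, future

# Tiny demo with invented valid-time intervals.
vs = [Version("a", 0, 2), Version("b", 3, 6), Version("c", 9, 12)]
print(ast_aet_segments(vs))
```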

Anomaly Detection of Hadoop Log Data Using Moving Average and 3-Sigma (이동 평균과 3-시그마를 이용한 하둡 로그 데이터의 이상 탐지)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae;Won, Hee-Sun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.283-288
    • /
    • 2016
  • In recent years, there have been many research efforts on big data, and many companies have developed a variety of related products. Accordingly, we are now able to store and analyze large volumes of log data that were difficult to handle in traditional computing environments. To handle the large volume of log data generated rapidly across multiple servers, in this paper we design a new data storage architecture that supports efficient analysis of big log data through Apache Hive. We then design and implement anomaly detection methods that identify abnormal server status from the log data, based on moving-average and 3-sigma techniques. We also show the effectiveness of the proposed methods by demonstrating that they identify anomalies correctly. These results indicate that our approach is well suited to detecting anomalies in Hadoop log data (a minimal sketch of the moving-average/3-sigma rule follows this entry).
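
A minimal sketch of the moving-average/3-sigma rule, assuming a rolling window over a numeric log metric; the window size, the use of the preceding window's statistics, and the pandas representation are illustrative choices, and the Hive-based storage architecture is not reproduced here.

```python
import pandas as pd

def detect_anomalies(series, window=5, k=3.0):
    """Flag points lying more than k standard deviations from the moving
    average of the *preceding* window (shift(1) keeps the point under test
    out of its own statistics, so a spike cannot mask itself)."""
    s = pd.Series(series)
    ma = s.rolling(window).mean().shift(1)
    sd = s.rolling(window).std().shift(1)
    return s[(s - ma).abs() > k * sd]

# Example: the spike of 50 in an otherwise stable metric is flagged.
values = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 50, 10, 11, 9, 10]
print(detect_anomalies(values))
```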

A Study on the Quality Control for the Circulation Steps including Production, Transportation, Selling about Hamburger & Sandwich in Convenience Store (편의점에서 판매되는 햄버거와 샌드위치의 유통과정중 품질관리에 관한 연구)

  • Kim, Heh-Young;Song, Yong-Hye
    • Journal of the Korean Society of Food Culture
    • /
    • v.11 no.4
    • /
    • pp.465-473
    • /
    • 1996
  • The purpose of this study was to evaluate microbiological hazards in the production, transportation, and selling of hamburgers and sandwiches marketed in convenience stores (CVS), and to identify methods of control. The results are as follows. A survey of the manufacturers' operating environments and of the distribution process showed that 4 to 6 hours are needed to move products from the manufacturer to the CVS, and the mean temperature in the transport vehicles was 10°C, which exceeds the standard of 7°C or below. Hamburger: the critical control points identified were purchasing, cooking, post-preparation, transportation, and holding at the CVS. Microbial analysis of the holding and reheating practices at the CVS showed that microbial counts under cold holding, and after reheating following cold holding, were within the standard value, but under room-temperature holding they exceeded the standard value. Sandwich: the critical control points identified were purchasing, cooking, post-preparation, transportation, and holding at the CVS. Total plate counts under cold holding, and after reheating following cold holding, were within the standard value, but after 24 hours of room-temperature holding they exceeded the standard value; under room-temperature holding, the number of microbes increased with the passage of time. In tests for food-poisoning bacteria, every sample was negative for V. parahaemolyticus, Salmonella, and S. aureus.

  • PDF

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information as 2D images. Such images, however, lack the 3D spatial information about the scene that is very useful for scientists, surveyors, engineers, and even robots. To address this, depth maps are generated for the corresponding image planes. A depth map, or depth image, is a single-image representation that carries three-dimensional information: for each pixel, the z value gives the object's distance from the camera. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done on it. We reviewed the status of depth map estimation across techniques, study areas, and models from papers published over the last 20 years, surveying both traditional depth-mapping techniques and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep-learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. We conclude with ideas for future research in depth map estimation.

3-dimensional Mesh Model Coding Using Predictive Residual Vector Quantization (예측 잉여신호 벡터 양자화를 이용한 3차원 메시 모델 부호화)

  • 최진수;이명호;안치득
    • Journal of Broadcast Engineering
    • /
    • v.2 no.2
    • /
    • pp.136-145
    • /
    • 1997
  • Since a 3D mesh model consists of many vertices and polygons, and each vertex position is represented by three 32-bit floating-point numbers in 3D coordinates, the amount of data needed to represent a model is very large. Thus, to store and/or transmit a 3D model efficiently, 3D model compression is required. In this paper, a 3D model compression method using PRVQ (predictive residual vector quantization) is proposed. Its underlying idea rests on two characteristics: the high correlation between neighboring vertex positions and the vectorial nature of a vertex position. Experimental results show that the proposed method achieves a higher compression ratio than existing methods and has the advantage of being able to transmit the vertex position data progressively (a minimal encoder sketch follows this entry).

  • PDF
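
A minimal PRVQ encoder sketch under simplifying assumptions: the predictor is just the previously reconstructed vertex (the paper exploits neighbor correlation more generally), and the 4-entry residual codebook is invented for the demo rather than trained.

```python
import numpy as np

def prvq_encode(vertices, codebook):
    """Predictive residual VQ sketch: predict each vertex, code the 3-D
    residual by the nearest codebook vector, and feed the *reconstructed*
    vertex back into the predictor (closed-loop prediction), so the decoder
    can reproduce the same sequence from the indices alone."""
    recon = np.zeros_like(vertices)
    pred = np.zeros(3)
    indices = []
    for i, v in enumerate(vertices):
        residual = v - pred
        idx = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        indices.append(idx)
        recon[i] = pred + codebook[idx]   # decoder sees exactly this value
        pred = recon[i]
    return indices, recon

# Tiny demo with an invented 4-entry residual codebook.
codebook = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                     [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0]])
indices, recon = prvq_encode(verts, codebook)
print(indices, recon)
```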

Efficient Service Discovery Scheme based on Clustering for Ubiquitous Computing Environments (유비쿼터스 컴퓨팅 환경에서 클러스터링 기반 효율적인 서비스 디스커버리 기법)

  • Kang, Eun-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.2
    • /
    • pp.123-128
    • /
    • 2009
  • In ubiquitous computing environments, service discovery, that is, searching for an available service, is an important issue. In this paper, we propose an efficient service discovery scheme that combines node-id-based clustering with P2P caching-based information spreading. To find a service quickly, the proposed scheme stores key information in neighbors' local caches and searches for services using that information; it uses no central lookup server and does not rely on flooding. Through simulation, we show that the proposed scheme improves response time and network load compared with other methods (a cache-first lookup sketch follows this entry).

  • PDF
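
A cache-first lookup sketch of the idea in the abstract: nodes spread service keys into neighbors' local caches and answer queries from the cache before asking cluster members, with no central lookup server and no flooding. The Node class, advertisement rule, and cluster handling below are simplified assumptions.

```python
class Node:
    """Minimal sketch of cache-first service discovery (no central server)."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.cache = {}        # service name -> provider node id (P2P cache)
        self.neighbors = []    # cluster members (grouped by node id in the paper)

    def advertise(self, service, provider):
        """Spread key information into neighbors' local caches."""
        self.cache[service] = provider
        for n in self.neighbors:
            n.cache.setdefault(service, provider)

    def discover(self, service):
        """Answer from the local cache; on a miss, ask cluster neighbors."""
        if service in self.cache:
            return self.cache[service]                   # fast path: no hop
        for n in self.neighbors:
            if service in n.cache:
                self.cache[service] = n.cache[service]   # cache the answer
                return self.cache[service]
        return None                                      # not in this cluster

# Usage: b learns of the service from a's advertisement, so a later lookup
# at b is answered locally without flooding.
a, b = Node("a"), Node("b")
a.neighbors, b.neighbors = [b], [a]
a.advertise("printer", provider="a")
print(b.discover("printer"))   # -> "a"
```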

MS-HEMs: An On-line Management System for High-Energy Molecules at ADD and BMDRC in Korea

  • Lee, Sung-Kwang;Cho, Soo-Gyeong;Park, Jae-Sung;Kim, Kwang-Yeon;No, Kyoung-Tae
    • Bulletin of the Korean Chemical Society
    • /
    • v.33 no.3
    • /
    • pp.855-861
    • /
    • 2012
  • A pioneering version of an on-line management system for high-energy molecules (MS-HEMs) was developed by the ADD and BMDRC in Korea. The current system can manage the physicochemical and explosive properties of both virtual and existing HEMs. The on-line MS-HEMs consist of three main routines: management, calculation, and search. The management routine provides a user-friendly interface for storing and managing molecular structures and other properties of new HEMs. The calculation routine automatically computes a number of compositional and topological molecular descriptors when a new HEM is stored in the MS-HEMs. Physical properties, such as the heat of formation and density, can also be calculated using group additivity methods (a minimal sketch of group additivity follows this entry). In addition, the calculation routine for impact sensitivity can be used to assess the safety of new HEMs; the impact sensitivity is estimated in a knowledge-based manner using in-house neural network code. The search routine enables general users to find an exact HEM and its properties by sketching a 2D chemical structure, or to retrieve HEMs and their properties by specifying a range of property values. These on-line MS-HEMs are expected to be a powerful tool for deriving novel, promising HEMs.
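
Group additivity itself is a simple weighted sum, property ≈ Σ (group contribution × group count); the sketch below illustrates only that form. The group names and contribution values are placeholders, not real thermochemical data or the system's actual parameters.

```python
def group_additivity(group_counts, contributions):
    """Estimate a molecular property as the sum of per-group contributions
    weighted by how often each group occurs in the molecule."""
    return sum(contributions[g] * n for g, n in group_counts.items())

# Hypothetical per-group heat-of-formation contributions (kJ/mol).
contributions = {"C-NO2": 10.0, "N-NO2": 15.0, "CH2": -20.0}
molecule = {"C-NO2": 2, "CH2": 3}   # invented group inventory of a molecule
print(group_additivity(molecule, contributions))   # -> -40.0
```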