• Title/Summary/Keyword: Relational performance

The traffic performance evaluation between remote server and mobile for applying to encryption protocol in the Wellness environment (웰니스 환경에서 암호화 프로토콜 적용을 위한 모바일과 원격 서버간 트래픽 성능 평가)

  • Lee, Jae-Pil;Kim, Young-Hyuk;Lee, Jae-Kwang
    • Journal of Digital Convergence
    • /
    • v.11 no.11
    • /
    • pp.415-420
    • /
    • 2013
  • U-WHS refers to a remote health monitoring service that combines fitness with well-being. It is a system that can measure and manage patients' biometric information without limitations of time and space. In this paper, we examine the influence of the encryption module on communication performance when biometric information is transmitted between a smart mobile device and the Hospital Information System (HIS). For the U-WHS model, the client software was developed in Objective-C in the iOS Xcode environment, and the SEED and HIGHT encryption modules were applied. For the HIS, the HTML5 WebSocket API was used for client-server communication, and MySQL, a relational database management system, was applied. In a WiFi communication environment, Wireshark was used to measure the data transfer rate, delay, and loss rate of the biometric information for the evaluation.
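
As a rough illustration of this kind of evaluation setup, the sketch below times one encrypted biometric record over a WebSocket connection. AES-CBC stands in for SEED/HIGHT (neither has a standard Python implementation), and the endpoint URL, payload fields, and server acknowledgment are assumptions, not the paper's code.

```python
# Minimal sketch (not the paper's implementation): timing an encrypted
# biometric payload over WebSocket. AES-CBC stands in for SEED/HIGHT;
# the endpoint and payload format are hypothetical.
import json, os, time

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from websockets.sync.client import connect  # pip install websockets

KEY = os.urandom(16)  # 128-bit session key (SEED and HIGHT also use 128-bit keys)

def encrypt(plaintext: bytes) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

sample = json.dumps({"hr": 72, "spo2": 98, "ts": time.time()}).encode()

with connect("ws://his.example/vitals") as ws:   # hypothetical HIS endpoint
    t0 = time.perf_counter()
    ws.send(encrypt(sample))                     # encrypted biometric record
    ws.recv()                                    # assumes the server sends an ack
    print(f"round trip: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```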

A Dominant Feature based Normalization and Relational Description of Shape Signature for Scale/Rotational Robustness (2차원 형상 변화에 강건한 지배적 특징 기반 형상 시그너쳐의 정규화 및 관계 특징 기술)

  • Song, Ho-Geun;Koo, Ha-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.11
    • /
    • pp.103-111
    • /
    • 2011
  • In this paper, we propose the Geometrical Centroid Contour Distance (GCCD), a shape signature described by a contour sequence. The proposed method uses geometrical relation features instead of absolute angle-based features after the signature is normalized and aligned with the dominant feature of the shape. Experimental results with the MPEG-7 CE-Shape-1 data set reveal that our method has lower time/space complexity and better scale/rotation robustness than other methods, and that its precision is higher than that of conventional descriptors. However, the performance of GCCD is limited for concave and complex-shaped objects.
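
A hedged sketch of the underlying signature idea: a centroid contour distance sequence, scale-normalized and shifted to start at a dominant point. Taking the point farthest from the centroid as the dominant feature is an illustrative assumption; the paper's geometrical relation features are not reproduced here.

```python
# Minimal sketch of a centroid contour distance (CCD) signature with scale
# normalization and alignment to a dominant point. The max-distance start
# point is an assumption, not necessarily the paper's dominant feature.
import numpy as np

def ccd_signature(contour: np.ndarray, n_samples: int = 128) -> np.ndarray:
    """contour: (N, 2) array of ordered boundary points."""
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length so signatures of different shapes compare.
    idx = np.linspace(0, len(dist) - 1, n_samples)
    sig = np.interp(idx, np.arange(len(dist)), dist)
    sig /= sig.max()                      # scale invariance
    start = int(np.argmax(sig))          # dominant point: farthest from centroid
    return np.roll(sig, -start)          # start-point (rotation) invariance

square = np.array([(x, 0) for x in range(10)] + [(9, y) for y in range(10)] +
                  [(x, 9) for x in range(9, -1, -1)] +
                  [(0, y) for y in range(9, -1, -1)])
print(ccd_signature(square)[:5])
```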

A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.521-526
    • /
    • 2016
  • Because online informal text data are massive in volume and unstructured in nature, traditional relational data model technologies face limitations in data storage and analysis. Moreover, real-time analysis of social users' reactions is hard to accomplish with dynamically generated, massive social data. In this paper, to easily capture the semantics of massive informal online documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the mass of words that constitute a document. The input data set for the proposed system is first generated with an N-gram algorithm, building multi-word terms to capture the meaning of sentences precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experimental phase, TB-scale input data are preprocessed and the proposed topic extraction steps are applied. We conclude that the proposed system extracts meaningful topics in a timely manner, as intermediate results come directly from main memory instead of HDD reads.
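
The pipeline the abstract describes (N-gram construction, then a topic model run on Spark) can be sketched with Spark's built-in ML primitives. The corpus path, bigram choice, vocabulary size, and topic count below are illustrative assumptions; the paper's exact preprocessing and model settings are not given here.

```python
# Minimal PySpark sketch: tokenize, build N-grams, vectorize, fit LDA.
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, NGram, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("topic-extraction").getOrCreate()
docs = spark.read.text("hdfs:///corpus/*.txt").withColumnRenamed("value", "text")

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
bigrams = NGram(n=2, inputCol="words", outputCol="ngrams").transform(tokens)
cv_model = CountVectorizer(inputCol="ngrams", outputCol="features",
                           vocabSize=50_000).fit(bigrams)
vectors = cv_model.transform(bigrams)

lda = LDA(k=20, maxIter=50)       # 20 topics; Spark keeps intermediates in memory
lda.fit(vectors).describeTopics(5).show(truncate=False)
```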

Automated Training from Landsat Image for Classification of SPOT-5 and QuickBird Images

  • Kim, Yong-Min;Kim, Yong-Il;Park, Wan-Yong;Eo, Yang-Dam
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.3
    • /
    • pp.317-324
    • /
    • 2010
  • In recent years, many automatic classification approaches have been employed. An automatic classification method can be effective and time-saving and can produce objective results because operator intervention is excluded. This paper proposes a classification method based on automated training for high-resolution multispectral images using ancillary data. Generally, it is problematic to automatically classify high-resolution images using ancillary data because of the scale difference between the high-resolution image and the ancillary data. To overcome this problem, the proposed method utilizes the classification results of a Landsat image as a medium for automatic classification. For the classification of the Landsat image, a maximum likelihood classification is applied, with the attributes of the ancillary data entered as the training data. For the high-resolution image, K-means clustering, an unsupervised classification, is conducted and the result is compared to the classification results of the Landsat image. Subsequently, the training data of the high-resolution image are automatically extracted using regular rules based on a relational matrix that shows the relation between the two results. Finally, the high-resolution image is classified and updated using the extracted training data. The proposed method was applied to QuickBird and SPOT-5 images of non-accessible areas, and the results showed good performance in accuracy assessments. Therefore, we expect that the method can be effectively used to automatically construct thematic maps for non-accessible areas and to update areas that have no attributes in a geographic information system.
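
One hedged reading of the relational-matrix step: cross-tabulate the unsupervised clusters of the high-resolution image against the Landsat classification (assuming co-registered label maps) and treat sufficiently pure clusters as automated training areas. The arrays, class counts, and purity threshold below are invented for illustration.

```python
# Minimal sketch of the relational-matrix step: cross-tabulate K-means
# cluster labels against the Landsat classes, then keep pixels of clusters
# dominated by a single class as automated training samples.
import numpy as np

landsat_cls = np.random.randint(0, 4, size=(100, 100))   # stand-in class map
kmeans_lbl = np.random.randint(0, 10, size=(100, 100))   # stand-in cluster map

n_cls, n_clu = landsat_cls.max() + 1, kmeans_lbl.max() + 1
rel = np.zeros((n_clu, n_cls), dtype=int)                # relational matrix
for c in range(n_clu):
    rel[c] = np.bincount(landsat_cls[kmeans_lbl == c], minlength=n_cls)

purity = rel.max(axis=1) / rel.sum(axis=1)
training = np.full(kmeans_lbl.shape, -1)                 # -1 = not training data
for c in np.where(purity > 0.8)[0]:                      # keep pure clusters only
    training[kmeans_lbl == c] = rel[c].argmax()
print(rel)
```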

A Numerical Study on the Simulation of Power-pack Start-up of a Staged Combustion Cycle Engine (다단연소 사이클 엔진의 파워팩 시동 모사를 위한 해석적 연구)

  • Lee, Sunghun;Jo, Seonghui;Kim, Hongjip;Kim, SeongRyong;Yi, SeungJae
    • Journal of the Korean Society of Propulsion Engineers
    • /
    • v.23 no.3
    • /
    • pp.58-66
    • /
    • 2019
  • In this study, the start-up characteristics of a staged combustion cycle engine were analyzed numerically based on relational equation modeling of all engine components. The start-up characteristics were analyzed extensively over the transient period of the total engine system, from the start-up sequence until the engine reaches steady state. The performance characteristics of the engine components during start-up, such as the RPM of the engine power-pack, the chamber pressure and O/F ratio of the pre-burner, and the mass flow of the propellants, were investigated. Furthermore, the calculated engine data compared satisfactorily with the experimental data, validating the present engine start-up analysis.
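
The abstract does not give the component equations, so the sketch below is only a shape-of-the-computation illustration: a single torque-balance relational equation integrated over the start transient. Every coefficient and the turbine ramp profile are invented.

```python
# Illustrative only: one first-order torque balance integrated in time to
# mimic a power-pack RPM transient. All values are assumptions, not the
# paper's engine model.
import numpy as np
from scipy.integrate import solve_ivp

I = 0.05       # rotor moment of inertia [kg m^2] (assumed)
K_T = 2.0      # turbine torque coefficient (assumed)
K_D = 1.5e-4   # pump load torque coefficient (assumed)

def rpm_dot(t, y):
    omega = y[0]                        # shaft speed [rad/s]
    torque_turbine = K_T * min(t, 1.0)  # pre-burner ramps up over 1 s (assumed)
    torque_load = K_D * omega**2        # pump load grows with speed
    return [(torque_turbine - torque_load) / I]

sol = solve_ivp(rpm_dot, (0.0, 5.0), [0.0], dense_output=True)
t = np.linspace(0, 5, 6)
print(sol.sol(t)[0] * 60 / (2 * np.pi))   # rad/s -> RPM at 1 s intervals
```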

Big Data Model for Analyzing Plant Growth Environment Information and Biometric Information (농작물 생육환경정보와 생체정보 분석을 위한 빅데이터 모델)

  • Lee, JongYeol;Moon, ChangBae;Kim, ByeongMan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.6
    • /
    • pp.15-23
    • /
    • 2020
  • While research activities on climate change in the agricultural field are being actively carried out, smart agriculture using information and communication technology has become a new trend in line with the Fourth Industrial Revolution. Accordingly, research is being conducted to identify and respond to signs of abnormal growth in advance by monitoring the stress of crops under various outdoor environments and soil conditions. There are also attempts to analyze data collected in real time through various sensors using artificial intelligence techniques or big data technologies. In this paper, we propose a big data model that is effective in analyzing the growth environment information and biometric information of crops, using an existing relational database for big data analysis. The performance of the model was measured by the response time to a query according to the amount of data; the results confirmed a maximum response-time reduction of 23.8%.
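
A minimal sketch of the evaluation method only: measuring query response time as the stored volume grows. SQLite stands in for the relational DBMS, and the sensor table schema is hypothetical.

```python
# Minimal sketch: time an aggregate query as the row count grows.
# SQLite and the growth-environment schema are stand-ins.
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE growth (ts REAL, temp REAL, humid REAL, leaf_mv REAL)")

for n_rows in (10_000, 100_000, 1_000_000):
    con.executemany("INSERT INTO growth VALUES (?,?,?,?)",
                    ((i, 20.0, 60.0, 1.2) for i in range(n_rows)))
    t0 = time.perf_counter()
    con.execute("SELECT AVG(temp), AVG(leaf_mv) FROM growth "
                "WHERE humid > 50").fetchone()
    print(f"~{n_rows:>9} new rows: {(time.perf_counter() - t0) * 1e3:.1f} ms")
```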

The effect of information literacy on the communication ability of dental hygienists: mediating effect of job crafting (치과위생사의 정보활용역량이 의사소통능력에 미치는 영향 : 잡크래프팅의 매개효과)

  • Park, Jin-Ah;Kim, Seon-Yeong;Moon, Sang-Eun;Kim, Yun-Jeong;Cho, Hye-Eun;Kang, Hyun-Joo
    • Journal of Korean society of Dental Hygiene
    • /
    • v.22 no.3
    • /
    • pp.217-224
    • /
    • 2022
  • Objectives: This study aimed to establish basic data for the performance of patient-centered care, the perception of core competencies, and the self-development of dental hygienists by verifying the effect of information literacy on the communication ability of dental hygienists and the mediating effect of job crafting. Methods: Data were collected and analyzed through a survey of 222 dental hygienists working at dental hospitals and clinics in the Seoul and Gyeonggi regions. To verify the mediating effect of job crafting on the relationship between information literacy and communication ability, correlation analysis, simple regression analysis, and multiple regression analysis were conducted. Results: In the effect of information literacy on the communication ability of dental hygienists, the cognitive crafting (β=0.209, p<0.001) and relational crafting (β=0.318, p<0.001) dimensions of job crafting showed partial mediating effects. Conclusions: Various educational programs and contents should be developed to increase educational accessibility, so that dental hygienists perceive the importance of information literacy and communication ability, improve their expertise as healthcare personnel performing patient-centered care, and develop their information literacy and job crafting.
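
A hedged sketch of the kind of mediation test the study reports, using the classic regression-steps approach with synthetic data. The variable names, effect sizes, and the use of Baron & Kenny-style steps are illustrative assumptions, not the study's exact procedure.

```python
# Minimal sketch of a mediation test via OLS regression steps.
# Synthetic data stand in for the survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 222
info_lit = rng.normal(size=n)                            # information literacy (X)
crafting = 0.5 * info_lit + rng.normal(size=n)           # job crafting (mediator M)
comm = 0.3 * info_lit + 0.4 * crafting + rng.normal(size=n)  # communication (Y)

step1 = sm.OLS(comm, sm.add_constant(info_lit)).fit()    # X -> Y (total effect)
step2 = sm.OLS(crafting, sm.add_constant(info_lit)).fit()  # X -> M
step3 = sm.OLS(comm, sm.add_constant(
    np.column_stack([info_lit, crafting]))).fit()        # X + M -> Y

print("total effect   :", step1.params[1])
print("direct effect  :", step3.params[1])   # shrinks but stays significant
print("indirect effect:", step2.params[1] * step3.params[2])  # partial mediation
```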

A Study of Cesium Removal Using Prussian Blue-Alginate Beads (프러시안 블루-알지네이트 비드를 이용한 세슘 제거 연구)

  • So-on Park;Su-jung Min;Bum-kyoung Seo;Chang-hyun Roh;Sang-bum Hong
    • Journal of Radiation Industry
    • /
    • v.18 no.1
    • /
    • pp.89-93
    • /
    • 2024
  • Accidents at nuclear facilities and nuclear power plants have led to leaks of large amounts of radioactive substances. Among the various radionuclides released, 137Cs is generated during the fission of uranium; because of its high fission yield (6.09%), strong gamma emission, and relatively long half-life (30 years), rapid and efficient removal methods and adsorbent studies are needed. Accordingly, an adsorbent was prepared using Prussian blue (PB), a material that selectively adsorbs radioactive cesium. Evaluation of the adsorption performance of the prepared adsorbent confirmed a removal efficiency of 82%, with most of the cesium adsorbed rapidly within 10 to 15 minutes. The purpose of this study was to adsorb cesium using Prussian blue-alginate beads and to compare the change in detection efficiency according to the amount of adsorbent added, for quantitative evaluation. Because it is difficult to determine the detection efficiency using a standard source under the same conditions as the measurement sample, the efficiency change of the HPGe detector according to the height of the Prussian blue was calculated through MCNP simulation using certified standard materials (1 L, Marinelli beaker) for radioactivity measurement. We expect to derive a relational equation that can calculate detection efficiency through an efficiency curve according to the volume of Prussian blue, to quantitatively evaluate activity at the same time as radionuclides are adsorbed from actual contaminated water, and to apply this in the operation and dismantling of nuclear facilities in the future.
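
A sketch of the final step only: fitting an efficiency-versus-volume relational equation to a handful of efficiency points. The data points below are invented; in the study they would come from the MCNP simulations, and the chosen functional form is an assumption.

```python
# Illustrative fit of detection efficiency vs. adsorbent volume.
# All data points are invented stand-ins for MCNP results.
import numpy as np

volume_ml = np.array([50, 100, 200, 400, 800])               # PB volume (assumed)
efficiency = np.array([0.062, 0.058, 0.051, 0.043, 0.034])   # HPGe eff. (assumed)

# A 2nd-order polynomial in log(volume) is one plausible functional form.
coeffs = np.polyfit(np.log(volume_ml), efficiency, deg=2)

def eff(v_ml: float) -> float:
    """Relational equation: efficiency as a function of PB volume."""
    return float(np.polyval(coeffs, np.log(v_ml)))

print(f"predicted efficiency at 300 mL: {eff(300):.4f}")
```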

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.529-544
    • /
    • 2001
  • Prefetching is an effective method to minimize the number of round trips between the client and the server in database management systems. In this paper, we propose the new notions of the type-level access pattern and type-level access locality, and we develop an efficient prefetching policy based on them. A type-level access pattern is a sequence of attributes that are referenced in accessing objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of round trips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders-of-magnitude improvements in round trips and several-fold improvements in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach is a practical method that can be implemented in commercial ORDBMSs.
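
A hedged sketch of the core idea: record which attributes of a type an application touches and, once the attribute sequence repeats, fetch all of them for the next object in a single round trip. The class, its learning rule, and the in-process client/server split are illustrative, not the paper's algorithm.

```python
# Minimal sketch of type-level prefetching. fetch_fn simulates one
# client/server round trip: fetch_fn(oid, attrs) -> {attr: value}.
from collections import defaultdict

class TypeLevelPrefetcher:
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn
        self.pattern = defaultdict(list)    # type -> attribute sequence seen
        self.cache = {}                     # (oid, attr) -> value

    def get(self, type_name, oid, attr):
        if (oid, attr) not in self.cache:
            seen = self.pattern[type_name]
            if attr in seen:                # pattern repeats: one round trip
                fetched = self.fetch_fn(oid, seen)
                self.cache.update({(oid, a): v for a, v in fetched.items()})
            else:                           # still learning the pattern
                seen.append(attr)
                self.cache[(oid, attr)] = self.fetch_fn(oid, [attr])[attr]
        return self.cache[(oid, attr)]

db = {1: {"name": "a", "area": 10.0}, 2: {"name": "b", "area": 20.0}}
pf = TypeLevelPrefetcher(lambda oid, attrs: {a: db[oid][a] for a in attrs})
for oid in db:    # same attribute sequence per object -> second object prefetched
    print(pf.get("Region", oid, "name"), pf.get("Region", oid, "area"))
```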

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze it. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas prevent node expansion when rapidly growing data must be distributed across nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB makes it easy to process unstructured log data through its flexible schema structure, facilitates node expansion when the amount of data grows rapidly, and provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert performance evaluation for various chunk sizes.
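
A hedged sketch of the log collector's routing step: records needing real-time graphs go to the MySQL side, while everything else lands in MongoDB, whose schema-free documents absorb unstructured fields. The connection URI, collection names, and type split below are illustrative assumptions.

```python
# Minimal sketch of routing parsed bank logs between stores.
# URI, database/collection names, and REALTIME_TYPES are hypothetical.
from pymongo import MongoClient

REALTIME_TYPES = {"login", "transaction"}      # assumed real-time categories

mongo = MongoClient("mongodb://localhost:27017")
archive = mongo.banklogs.events                # auto-sharded in a real deployment

def route(log: dict, mysql_insert) -> None:
    """Send one parsed log record to the appropriate store."""
    if log.get("type") in REALTIME_TYPES:
        mysql_insert(log)                      # placeholder for the MySQL module
    else:
        archive.insert_one(log)                # free schema: no fixed columns

route({"type": "batch", "host": "web01", "msg": "nightly settlement done"},
      mysql_insert=lambda r: print("to MySQL:", r))
```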