• Title/Summary/Keyword: Retrieval Efficiency

Information and Communication Technologies in the Main Types of Legal Activities

  • Kornev, Arkadiy;Lipen, Sergey;Zenin, Sergey;Tanimov, Oleg;Glazunov, Oleg
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.11
    • /
    • pp.177-183
    • /
    • 2022
  • Thanks to the informatization of society, complex high-tech devices are being introduced in all areas of human life, and the latest technologies are being actively improved in the modern, globalizing world. The article deals with the use of information and communication technologies in legal activities and covers the main types of such activities: law-making, law enforcement, and interpretive activity. Since the flow and accumulation of legal information keep increasing, it is practically impossible to work with legal information using traditional methods alone. The article considers and analyzes the role of information and communication technologies in modern legal activity. It is necessary to reveal the principles, concepts, conditions, and factors of their development, and to produce theoretical and practical recommendations for using such technologies to solve legal tasks. The authors raise the issues of increasing the efficiency of legal activity, as well as of integrating information technologies into practical legal work and using them to collect, store, search for, and issue legal and reference information. Much attention is paid to the use of automated data banks and information retrieval systems in legal practice, which ensure the accumulation, systematization, and effective search of legally important information. The development of such technologies creates comfortable conditions for lawyers in the course of their professional activity. Currently, legal activity cannot exist without telecommunication technologies, legal reference systems, and electronic programs. The authors believe that the latest information technologies have significantly accelerated legal decision-making, streamlined the search for and systematization of evidence, and made it possible to find information on adopted laws and legal acts quickly and efficiently.

A Memory Mapping Technique to Reduce Data Retrieval Cost in the Storage Consisting of Multi Memories (다중 메모리로 구성된 저장장치에서 데이터 탐색 비용을 줄이기 위한 메모리 매핑 기법)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.19-24
    • /
    • 2023
  • With the recent rapid development of memory technology, various types of memory have been developed and are used to improve processing speed in data management systems. In particular, NAND flash memory is used as the main medium for storing data in memory-based storage devices because it is nonvolatile: it retains data even when the power is off. However, since recently studied memory-based storage devices consist of various types of memory, such as MRAM and PRAM as well as NAND flash memory, research on memory management technology is needed to improve data processing performance and media efficiency in a storage system composed of different types of memory. In this paper, we propose a memory mapping technique for efficiently managing data in a storage device composed of various memories. The proposed idea is to manage the different memories through a single mapping table. This method unifies the address scheme of the data and reduces the cost of searching for data stored in different memories for data tiering.
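
The abstract gives no implementation details, but the single-mapping-table idea can be sketched as follows. The names (MemoryType, UnifiedMappingTable) and the O(1) dictionary lookup are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    NAND = "nand_flash"
    MRAM = "mram"
    PRAM = "pram"

@dataclass
class PhysicalLocation:
    memory: MemoryType   # which medium the page lives on
    page: int            # physical page number within that medium

class UnifiedMappingTable:
    """One logical-to-physical table spanning all memory types, so a
    lookup never has to probe per-medium tables."""
    def __init__(self):
        self._table: dict[int, PhysicalLocation] = {}

    def map(self, lpn: int, loc: PhysicalLocation) -> None:
        self._table[lpn] = loc

    def lookup(self, lpn: int) -> PhysicalLocation | None:
        # Single probe regardless of which memory holds the data.
        return self._table.get(lpn)

    def migrate(self, lpn: int, dst: PhysicalLocation) -> None:
        # Data tiering: move hot/cold data between media by rewriting
        # one entry instead of cross-table bookkeeping.
        self._table[lpn] = dst

# Example: register a page on PRAM, then migrate it to NAND flash.
table = UnifiedMappingTable()
table.map(42, PhysicalLocation(MemoryType.PRAM, 7))
table.migrate(42, PhysicalLocation(MemoryType.NAND, 1024))
print(table.lookup(42))
```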

Implementation of total management system for exhibitions and Convention using beacon (Beacon기술을 이용한 MICE시스템 설계 및 구현)

  • Kim, Young-Ick;Kim, Mijung;Kim, Hyu-Chan
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.2
    • /
    • pp.35-44
    • /
    • 2016
  • The MICE industry has recently been emerging as a new growth engine. Most domestic MICE events are carried out at low cost on a small scale. Event organizers want to cut down on printing costs for brochures and other promotional materials, as well as the personnel costs for simple on-site guide work, both of which recur wastefully. Existing mobile webs have the defect that participants cannot easily obtain information from fixed menus and instead have to search for it themselves, wasting a lot of time. Therefore, it is necessary to develop a solution that provides information efficiently at low cost for short-term use during events. In this study, we implemented a total management system for exhibitions and conventions using beacons. An information system for exhibitions and events using beacons can raise management efficiency, and its CMS-based digital brochure function improves information retrieval while reducing costs. Organizers of a small exhibition or convention event can manage the event efficiently by running an online website and operating the site management system by themselves.
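
As a rough illustration of how such a system might dispatch a detected beacon to CMS content, here is a hypothetical sketch; the beacon identifiers, RSSI threshold, and content IDs are invented, not from the paper:

```python
# Hypothetical: resolve a detected beacon to CMS content so a visitor's
# app shows the right digital-brochure page for a booth.
BEACON_CONTENT = {
    # (uuid, major, minor) -> CMS content id; values are illustrative.
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 101): "booth-a-brochure",
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 102): "booth-b-schedule",
}

def content_for_beacon(uuid: str, major: int, minor: int, rssi: int,
                       threshold: int = -70) -> str | None:
    """Return the CMS page for the detected beacon, ignoring weak signals."""
    if rssi < threshold:          # too far away; don't switch content
        return None
    return BEACON_CONTENT.get((uuid.lower(), major, minor))

print(content_for_beacon("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 101, -55))
```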

Machine Learning-Based Atmospheric Correction Based on Radiative Transfer Modeling Using Sentinel-2 MSI Data and Its Validation Focusing on Forest (농림위성을 위한 기계학습을 활용한 복사전달모델기반 대기보정 모사 알고리즘 개발 및 검증: 식생 지역을 위주로)

  • Yoojin Kang;Yejin Kim;Jungho Im;Joongbin Lim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.891-907
    • /
    • 2023
  • Compact Advanced Satellite 500-4 (CAS500-4) is scheduled to be launched to collect high-spatial-resolution data focusing on vegetation applications. To achieve this goal, accurate surface reflectance retrieval through atmospheric correction is crucial. Therefore, a machine learning-based atmospheric correction algorithm was developed to simulate atmospheric correction from a radiative transfer model, using Sentinel-2 data that have spectral characteristics similar to those of CAS500-4. The algorithm was then evaluated, mainly over forest areas. Utilizing atmospheric correction parameters extracted from Sentinel-2 and GEO-KOMPSAT-2A (GK-2A), the atmospheric correction algorithm was developed based on Random Forest and Light Gradient Boosting Machine (LGBM). Between the two machine learning techniques, LGBM performed better when considering both accuracy and efficiency. Except for one station, the results had a correlation coefficient of more than 0.91 and reflected well the temporal variations of the Normalized Difference Vegetation Index (i.e., vegetation phenology). GK-2A provides Aerosol Optical Depth (AOD) and water vapor, which are essential parameters for atmospheric correction, but additional processing will be required in the future to mitigate the problem caused by their many missing values. This study provides a basis for the atmospheric correction of CAS500-4 through a machine learning-based atmospheric correction simulation algorithm.
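
A minimal sketch of the emulation approach the abstract describes: train an LGBM regressor on samples pairing top-of-atmosphere reflectance and atmospheric parameters with radiative-transfer-derived surface reflectance. The synthetic data and feature set below are illustrative assumptions, not the paper's training table:

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
# Illustrative stand-in for a lookup table generated with a radiative
# transfer model: TOA reflectance plus atmospheric/geometric parameters.
X = np.column_stack([
    rng.uniform(0.0, 0.6, n),    # TOA reflectance (one band)
    rng.uniform(0.0, 1.5, n),    # aerosol optical depth (e.g., from GK-2A)
    rng.uniform(0.0, 6.0, n),    # water vapor [g/cm^2]
    rng.uniform(0.0, 60.0, n),   # solar zenith angle [deg]
    rng.uniform(0.0, 30.0, n),   # viewing zenith angle [deg]
])
# Synthetic target standing in for RTM-corrected surface reflectance.
y = X[:, 0] - 0.05 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.005, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", model.score(X_te, y_te))
```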

A Simulation Study on Handshake Location in an AS/RS with Twin Cranes for Mixed-model Production in an Automotive Plant (자동차 공장의 혼류생산을 고려한 AS/RS 내 트윈크레인 Handshake 작업영역 위치 결정에 관한 시뮬레이션 연구)

  • Jeongtae Park;Bosung Kim;Taehoon Lee;Seonghwan Lee;Soondo Hong
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.4
    • /
    • pp.11-18
    • /
    • 2023
  • This study analyzes the effect of the handshake location in an AS/RS with twin cranes serving a mixed-model production line at an automobile plant. A handshake operation has the advantage of preventing route interference between the twin cranes, which operate without crossing into each other's working areas. However, the handshake operation requires additional unloading and loading to move assembly parts beyond the handshake area. Therefore, the choice of handshake location is crucial for improving the efficiency of storage and retrieval operations. Simulation results show that a handshake operation with the optimal handshake location reduces the average response time of storage requests to 87% of that of non-handshake operation.
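
A toy sketch of the trade-off, not the paper's simulation model: crane A serves bays below the handshake bay, crane B serves the rest and pays a fixed handoff penalty, and each crane queues its own jobs. All parameters (bay count, arrival gap, penalty) are invented for illustration:

```python
import random

def avg_response(handshake, n_bays=100, n_jobs=2000, arrival_gap=30.0,
                 transfer=6, seed=7):
    """Toy model: jobs arrive every `arrival_gap` ticks at a random bay;
    crane A serves bays [0, handshake), crane B serves the rest with a
    fixed handoff penalty. Response = queueing wait + travel."""
    rng = random.Random(seed)
    free_at = {"A": 0.0, "B": 0.0}        # when each crane next idles
    total = 0.0
    for k in range(n_jobs):
        arrive = k * arrival_gap
        bay = rng.randrange(n_bays)
        crane = "A" if bay < handshake else "B"
        home = 0 if crane == "A" else handshake
        travel = 2 * abs(bay - home) + (transfer if crane == "B" else 0)
        start = max(arrive, free_at[crane])
        free_at[crane] = start + travel
        total += (start - arrive) + travel
    return total / n_jobs

# Sweep candidate handshake bays; a near-central bay balances the two
# cranes' travel and queueing in this toy model.
best = min(range(10, 100, 5), key=avg_response)
print("best handshake bay:", best)
```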

The Review on the Study of Osteoporosis in Korean Medicine Journals (골다공증의 국내 연구 동향에 대한 고찰 - 한의 학술 논문 검색을 중심으로-)

  • Seo, Min-Su;Kim, Hyun-Chul;Choo, Won-Jung;Jeong, Sang-Yun;Kim, Se-Jeong;Choi, Jeong-Uk;Choi, Yo-Seob;Yoo, Yung-Ki
    • The Journal of Churna Manual Medicine for Spine and Nerves
    • /
    • v.8 no.2
    • /
    • pp.67-78
    • /
    • 2013
  • Objectives : The present study examines the domestic trend of osteoporosis studies in Korea. Method : We reviewed Korean medicine papers published in the last ten years (2003-2012). The Korean literature search used domestic Internet portals: 'Naver specialized information retrieval', 'Korea Traditional Knowledge Portal', 'Korea Medical Information Portal (OASIS)', 'Scientific and Technological Information Integration Services (NDSL)', and 'Academic Research Information Service (RISS)' were the primary destinations of the search. For 2003 through 2012, the search term 'osteoporosis' found 92 papers and the search term 'golwi' found 3 papers; from these, we investigated ongoing research trends in osteoporosis in Korean medicine. Results : 1. We reviewed 95 papers in 15 journals, and the patterns of study were as follows: experimental studies 79 (83%), clinical studies 12 (13%), review studies 3 (3%), and others 1 (1%). 2. The experimental studies (79) were divided into papers on efficacy testing of herbal medications (67) and herbal acupuncture (12). 3. The clinical studies (12) covered follow-up surveys for herbal medication efficacy testing, basic research, case reports, the relation of osteoporosis to age and sex, and perceptions of osteoporosis and Korean medicine treatment. 4. The review studies covered ancient literature on osteoporosis and domestic studies on herbal medication for osteoporosis. Conclusion : Reviewing the domestic trend of osteoporosis studies and examining the strong and weak points of those treatments are essential for future studies. It is anticipated that this review will benefit future in-depth study of treatments for osteoporosis in Korean medicine.

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve, from a gigantic amount of documents, the documents related to a retrieved document. The most important problem for many current search systems is to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In detail, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles. A citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links that researchers do not make, because it indexes only the links researchers create when they cite other articles, and for the same reason it does not scale easily. All these problems orient us toward designing a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. Each document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. From this table, we build a hierarchical graph of the document and then integrate the graphs of the documents. The area each document's graph occupies is computed relative to the integrated graph of all documents, and relations among the documents are marked by comparing these areas. The paper also proposes a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
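
A small illustration of the formal-concept machinery the title refers to: enumerating the formal concepts of a toy sentence-by-term context by brute-force closure. The context and terms are invented for the example, not taken from the paper:

```python
from itertools import combinations

# Toy formal context: sentences (objects) x extracted subject/predicate
# terms (attributes). Terms are illustrative only.
context = {
    "doc1_s1": {"retrieve", "document"},
    "doc1_s2": {"index", "citation"},
    "doc2_s1": {"retrieve", "citation"},
}
attributes = set().union(*context.values())

def common_attrs(objs):
    """Attributes shared by every object in objs (all attrs if objs empty)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (extent, intent) closed under the two maps.
concepts = set()
objs_all = list(context)
for r in range(len(objs_all) + 1):
    for subset in combinations(objs_all, r):
        intent = common_attrs(set(subset))
        extent = common_objs(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

# Ordering concepts by extent size yields the hierarchical structure.
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```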

Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Recently, most hospitals have introduced PACS, and use of the system continues to expand. A small-scale PACS, called MicroPACS, can already be built with open source programs. The aim of this study is to prove the utility of operating a MicroPACS, as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study describes how to set up a MicroPACS with open source programs and assesses its storage capability, stability, compatibility, and performance of operations such as "retrieve" and "query". Materials and Methods: 1. To start with, we searched for open source software meeting the following criteria for establishing the MicroPACS: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not limit storage capacity; (6) it must support DICOM. 2. (1) To evaluate data storage, we compared the time spent backing up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data from each. (2) To estimate work efficiency, we measured the time spent finding data on CDs, DVD-RAMs, and the MicroPACS; 7 technologists participated in this study. 3. To evaluate the stability of the software, we examined whether any data loss occurred while the system was maintained for a year, compared against the number of errors found in randomly selected data on 500 CDs. Result: 1. We chose the Conquest DICOM Server among 11 open source packages; it uses MySQL as its database management system. 2. (1) The comparison of back-up and retrieval times (min) showed the following: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) on GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) on GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) on SIEMENS. (2) The time (sec) needed to find data was as follows: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over a year, with 12,741 PET/CT studies stored in 1.81 TB. Among the 500 CDs, on the other hand, 14 errors (2.8%) were found. Conclusions: We found that a MicroPACS can be set up with open source software and that its performance is excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS would be an effective data storage device as long as its operators develop and systematize it.
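
For readers who want to reproduce a "query" test against a Conquest DICOM Server, here is a hedged sketch using pynetdicom, a library the paper does not mention; the host, port 5678, and AE title CONQUESTSRV1 are Conquest's commonly cited defaults, assumed here rather than taken from the study:

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

# Assumed Conquest defaults; adjust to the actual server configuration.
ae = AE(ae_title="TESTCLIENT")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = ""                  # empty = return this key for matches
query.StudyDate = "20090101-20091231"

assoc = ae.associate("127.0.0.1", 5678, ae_title="CONQUESTSRV1")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01):  # pending = match
            print(identifier.PatientID, identifier.StudyDate)
    assoc.release()
```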

Efficient and Privacy-Preserving Near-Duplicate Detection in Cloud Computing (클라우드 환경에서 검색 효율성 개선과 프라이버시를 보장하는 유사 중복 검출 기법)

  • Hahn, Changhee;Shin, Hyung June;Hur, Junbeom
    • Journal of KIISE
    • /
    • v.44 no.10
    • /
    • pp.1112-1123
    • /
    • 2017
  • As content providers further offload content-centric services to the cloud, data retrieval over the cloud typically returns many redundant items because near-duplication of content is prevalent on the Internet. Simply fetching all data from the cloud severely degrades efficiency in terms of resource utilization and bandwidth, and data may be encrypted by multiple content providers under different keys to preserve privacy. Thus, locating near-duplicate data in a privacy-preserving way depends on the ability to deduplicate redundant search results and return the best matches without decrypting data. To this end, we propose an efficient near-duplicate detection scheme for encrypted data in the cloud. Our scheme has the following benefits. First, a single query is enough to locate near-duplicate data even if they are encrypted under different keys of multiple content providers. Second, storage, computation, and communication costs are reduced compared to existing schemes, while achieving the same level of search accuracy. Third, scalability is significantly improved as a result of a novel and efficient two-round detection that locates near-duplicate candidates over large quantities of data in the cloud. An experimental analysis with real-world data demonstrates the applicability of the proposed scheme to a practical cloud system. Finally, the proposed scheme is on average 70.6% faster than an existing scheme.
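
The paper's detection operates over ciphertexts; as a plaintext illustration of the underlying near-duplicate candidate filtering, here is a MinHash sketch comparison (a standard technique, not the authors' scheme; the shingle size and hash count are arbitrary choices):

```python
import hashlib

def minhash_signature(text: str, num_hashes: int = 64, shingle: int = 4):
    """MinHash over character shingles: near-duplicate texts yield
    signatures that agree in most slots."""
    shingles = {text[i:i + shingle] for i in range(len(text) - shingle + 1)}
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(
                f"{seed}:{s}".encode(), digest_size=8).digest(), "big")
            for s in shingles))
    return sig

def similarity(sig_a, sig_b):
    """Fraction of matching slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature("cloud data retrieval with near-duplicate content")
b = minhash_signature("cloud data retrieval with near duplicate contents")
print(f"estimated similarity: {similarity(a, b):.2f}")  # close to 1.0
```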

Collaboration and Node Migration Method of Multi-Agent Using Metadata of Naming-Agent (네이밍 에이전트의 메타데이터를 이용한 멀티 에이전트의 협력 및 노드 이주 기법)

  • Kim, Kwang-Jong;Lee, Yon-Sik
    • The KIPS Transactions:PartD
    • /
    • v.11D no.1
    • /
    • pp.105-114
    • /
    • 2004
  • In this paper, we propose a collaboration method of diverse agents each others in multi-agent model and describe a node migration algorithm of Mobile-Agent (MA) using by the metadata of Naming-Agent (NA). Collaboration work of multi-agent assures stability of agent system and provides reliability of information retrieval on the distributed environment. NA, an important part of multi-agent, identifies each agents and series the unique name of each agents, and each agent references the specified object using by its name. Also, NA integrates and manages naming service by agents classification such as Client-Push-Agent (CPA), Server-Push-Agent (SPA), and System-Monitoring-Agent (SMA) based on its characteristic. And, NA provides the location list of mobile nodes to specified MA. Therefore, when MA does move through the nodes, it is needed to improve the efficiency of node migration by specified priority according to hit_count, hit_ratio, node processing and network traffic time. Therefore, in this paper, for the integrated naming service, we design Naming Agent and show the structure of metadata which constructed with fields such as hit_count, hit_ratio, total_count of documents, and so on. And, this paper presents the flow of creation and updating of metadata and the method of node migration with hit_count through the collaboration of multi-agent.