• Title/Summary/Keyword: Retrieval Efficiency

Search results: 324

A Study on Adaptability of Returnable Transport Packagings in the Parcel Delivery Service by e-commerce (전자상거래기반 택배물류서비스에서의 재사용 순환물류포장 적용성 연구)

  • Oh, Jae Young;Lim, Mijin;Kim, Kee Back;Kim, Su Hyun;Suh, Sang Uk;Lee, Ga Eun
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.26 no.2
    • /
    • pp.99-103
    • /
    • 2020
  • The volume of parcel deliveries is steadily increasing with the rapid growth of the global online e-commerce market. However, the large amount of packaging material used for these deliveries ultimately ends up as waste, causing environmental problems. In this study, we conducted a pilot test of a returnable parcel delivery packaging and service, one of various ways to reduce the distribution packaging waste generated by e-commerce-based parcel delivery. For this project, we produced 300 returnable, foldable delivery boxes (415 mm × 280 mm × 160 mm) and cooperated with an e-commerce company (CJ ENM) and a logistics company (LogisAll). Only about 50% of the delivered packages were returned, owing to consumers' limited understanding of the returnable packaging system. Finally, we suggest policy strategies to overcome the problems identified in the experiment, such as the retrieval rate and cost of the returnable packaging and its economic efficiency.

A data management system for microbial genome projects

  • Ki-Bong Kim;Hyeweon Nam;Hwajung Seo and Kiejung Park
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.83-85
    • /
    • 2000
  • Many microbial genome sequencing projects have been undertaken at genome centers around the world since the first genome, that of Haemophilus influenzae, was sequenced in 1995. The deluge of microbial genome sequence data demands a new, highly automated data flow system that allows genome researchers to manage and analyze their bulky sequence data from low level to high level. To this end, we developed an automatic data management system for microbial genome projects, consisting mainly of a local database, analysis programs, and a user-friendly interface. We designed and implemented a local database for large-scale sequencing projects, which makes systematic and consistent data management and retrieval possible and is tightly coupled with the analysis programs and a web-based user interface. That is, the results of the analysis programs can be parsed and stored in the local database, and users can retrieve data at any stage of processing through a web-based graphical user interface. Contig assembly, homology search, and ORF prediction, which are essential in genome projects, constitute the analysis programs of our system. All but the contig assembly program are in the public domain. These programs are connected to one another through a number of utility programs. As a result, this system will maximize cost and time efficiency in genome research.


Wave information retrieval algorithm based on iterative refinement (반복적 보정에 의한 파랑정보 추출 기법)

  • Kim, Jin-soo;Lee, Byung-Gil
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.21 no.1
    • /
    • pp.7-15
    • /
    • 2016
  • Ocean wave parameters are important for the safety and efficiency of marine traffic operation and routing. In this paper, using X-band marine radar, we develop an effective algorithm for collecting ocean surface information such as current velocity and wave parameters. Specifically, by exploiting an iterative refinement flow instead of a fixed control scheme, the algorithm is designed so that it not only computes the optimized current velocity efficiently but also introduces a new cost function in an optimized way. Experimental results show that the proposed algorithm is very effective in retrieving wave information compared with conventional algorithms.

A Review of Library Information Service Utilizing Mobile Technology (모바일 기술을 활용한 도서관 정보서비스에 대한 고찰)

  • Kim, Hye-Sun
    • Journal of Information Management
    • /
    • v.33 no.3
    • /
    • pp.105-119
    • /
    • 2002
  • Since the mobile internet technology that emerged in the late 1990s provides anytime, anywhere access to information, mobile technology has been adopted in various fields to increase work efficiency. After reviewing trends in mobile content services, this paper investigates case studies of information services that use mobile technology. Library information services provide retrieval of the library OPAC and verification of circulation status and return dates via the mobile internet. In addition, various notification services using the short message service (SMS) are available. Considering the growing number of mobile users and advances in related technology, more information services utilizing mobile technology should be developed in the near future.

A Methodology for Performance Evaluation of Web Robots (웹 로봇의 성능 평가를 위한 방법론)

  • Kim, Kwang-Hyun;Lee, Joon-Ho
    • The KIPS Transactions:PartD
    • /
    • v.11D no.3
    • /
    • pp.563-570
    • /
    • 2004
  • As the use of the Internet becomes more popular, a huge amount of information is published on the Web, and users can access that information effectively through Web search services. Since Web search services retrieve relevant documents from those collected by Web robots, we need to improve the crawling quality of Web robots. In this paper, we suggest evaluation criteria for Web robots, such as efficiency, continuity, freshness, coverage, silence, uniqueness, and safety, and present various functions for improving the performance of Web robots. We also investigate the functions implemented in conventional Web robots such as those of NAVER, Google, and AltaVista. It is expected that this study will contribute to the development of more effective Web robots.

Clustering Representative Annotations for Image Browsing (이미지 브라우징 처리를 위한 전형적인 의미 주석 결합 방법)

  • Zhou, Tie-Hua;Wang, Ling;Lee, Yang-Koo;Ryu, Keun-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06c
    • /
    • pp.62-65
    • /
    • 2010
  • Image annotations allow users to access a large image database with textual queries. But since the text surrounding Web images is generally noisy, an efficient image annotation and retrieval system, which requires effective image search techniques, is highly desired. Data mining techniques can be adopted to de-noise the search results and extract salient terms or phrases from them. Clustering algorithms make it possible to represent the visual features of images with finite symbols. Annotation-based image search engines can obtain thousands of images for a given query, but their results also contain visual noise. In this paper, we present a new algorithm, Double-Circles, that allows a user to remove noisy results and characterize more precise representative annotations. We demonstrate our approach on images collected from Flickr image search. Experiments conducted on real Web images show the effectiveness and efficiency of the proposed model.


Video-Assisted Thoracic Surgery Lobectomy

  • Kim, Hong Kwan
    • Journal of Chest Surgery
    • /
    • v.54 no.4
    • /
    • pp.239-245
    • /
    • 2021
  • Video-assisted thoracoscopic surgery (VATS) has been established as the surgical approach of choice for lobectomy in patients with early-stage non-small cell lung cancer (NSCLC). Patients with clinical stage I NSCLC with no lymph node metastasis are considered candidates for VATS lobectomy. To rule out the presence of metastasis to lymph nodes or distant organs, patients should undergo meticulous clinical staging. Assessing patients' functional status is required to ensure that there are no medical contraindications, such as impaired pulmonary function or cardiac comorbidities. Although various combinations of the number, size, and location of ports are available, finding the best method of port placement for each surgeon is fundamental to maximize the efficiency of the surgical procedure. When conducting VATS lobectomy, it is always necessary to comply with the following oncological principles: (1) the vessels and bronchus of the target lobe should be individually divided, (2) systematic lymph node dissection is mandatory, and (3) touching the lymph node itself and rupturing the capsule of the lymph node should be minimized. Most surgeons conduct the procedure in the following sequence: (1) dissection along the hilar structure, (2) fissure division, (3) perivascular and peribronchial dissection, (4) individual division of the vessels and bronchus, (5) specimen retrieval, and (6) mediastinal lymph node dissection. Surgeons should obtain experience in enhancing the exposure of the dissection target and facilitating dissection. This review article provides the basic principles of the surgical techniques and practical maneuvers for performing VATS lobectomy easily, safely, and efficiently.

Design and Implementation of Web Crawler utilizing Unstructured data

  • Tanvir, Ahmed Md.;Chung, Mokdong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.3
    • /
    • pp.374-385
    • /
    • 2019
  • A Web crawler is a program commonly used by search engines to find new content on the internet. The use of crawlers has made the Web easier for users. In this paper, we structure unstructured data to collect information from web pages. Our system is able to choose words near a given keyword in more than one document in an unstructured way; neighboring data were collected around the keyword through word2vec. The system's goal is to filter at the data acquisition level for a large taxonomy. The main problem in text taxonomy is how to improve classification accuracy. To improve accuracy, we propose a new weighting method for TF-IDF, modifying the TF algorithm to measure the accuracy of unstructured data. Finally, our system proposes a competent web page crawling algorithm, derived from TF-IDF and the RL Web search algorithm, to enhance the efficiency of searching for relevant information. This paper also researches and examines the workings of crawlers and crawling algorithms in search engines for efficient information retrieval.
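As a point of reference for the weighting scheme this abstract modifies, the standard TF-IDF computation can be sketched as follows (a minimal illustration of the baseline scheme only, not the authors' modified variant; the function name and toy documents are invented):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute standard TF-IDF weights for a list of tokenized documents.

    tf(t, d) = count of t in d / number of tokens in d
    idf(t)   = log(N / number of documents containing t)
    """
    n_docs = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["web", "crawler", "search"],
        ["web", "page", "retrieval"],
        ["crawler", "retrieval", "search"]]
w = tf_idf(docs)
# "page" occurs in only one document, so it outweighs
# terms like "web" that appear in several documents
```

Terms that concentrate in few documents receive higher weights, which is what lets a crawler rank pages by relevance to a query keyword.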

An Automatic Urban Function District Division Method Based on Big Data Analysis of POI

  • Guo, Hao;Liu, Haiqing;Wang, Shengli;Zhang, Yu
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.645-657
    • /
    • 2021
  • Along with the rapid development of the economy, urban areas have expanded rapidly, leading to the formation of different types of urban function districts (UFDs), such as central business, residential, and industrial districts. Recognizing the spatial distribution of these districts is of great significance for managing the evolving role of urban planning and further helps in developing reliable urban planning programs. In this paper, we propose an automatic UFD division method based on big data analysis of point of interest (POI) data. Considering that the distribution of POI data is unbalanced across geographic space, a dichotomy-based data retrieval method was used to improve the efficiency of the data crawling process. Further, a POI spatial feature analysis method based on the mean shift algorithm is proposed, in which data points with similar attributive characteristics are clustered to form the function districts. The proposed method was thoroughly tested in an actual urban case scenario, and the results show its superior performance; the suitability of fit to practical situations reaches 88.4%, demonstrating a reasonable UFD division result.
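The mean-shift step in the spatial feature analysis can be sketched as follows (a toy flat-kernel version; the coordinates, bandwidth, and mode-merging threshold are invented for illustration and are not the paper's parameters):

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Shift each point toward the mean of the points within `bandwidth`.

    Points that converge to (nearly) the same mode form one cluster --
    here, one candidate urban function district.
    """
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            # flat kernel: average all original points within the radius
            dists = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[dists < bandwidth].mean(axis=0)
    # group converged points into modes
    modes, labels = [], []
    for p in shifted:
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels.append(j)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return np.array(modes), labels

# two well-separated groups of synthetic POI coordinates
pois = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                 [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])
modes, labels = mean_shift(pois, bandwidth=1.0)
# two modes are found, one per group of POIs
```

Unlike k-means, mean shift does not require the number of districts in advance; the bandwidth controls how finely the POI space is partitioned.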

Structural live load surveys by deep learning

  • Li, Yang;Chen, Jun
    • Smart Structures and Systems
    • /
    • v.30 no.2
    • /
    • pp.145-157
    • /
    • 2022
  • The design of safe and economical structures depends on reliable live load data from load surveys. Live load surveys are traditionally conducted by randomly selecting rooms and weighing each item on-site, a method that suffers from low efficiency, high cost, and long cycle times. This paper proposes a deep learning-based method, combined with Internet big data, for performing live load surveys. The proposed survey method utilizes multi-source heterogeneous data, such as images, voice, and product identification, to obtain the live load without weighing each item, through object detection, web crawling, and speech recognition. Indoor object and face detection models are first developed by fine-tuning the YOLOv3 algorithm to detect target objects and to count the number of people in a room, respectively. Each detection model is evaluated on an independent testing set. Then, web crawler frameworks with keyword and image retrieval are established to extract the weight information of the detected objects from Internet big data. The live load in a room is derived by combining the weights and numbers of items and people. To verify the feasibility of the proposed survey method, a live load survey was carried out for a meeting room. The results show that, compared with the traditional method of sampling and weighing, the proposed method performs live load surveys efficiently and conveniently, and it represents a new load research paradigm.
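The final load-combination step that the abstract describes, combining the weights and numbers of items and people, can be sketched as follows (every count, unit weight, and the room area below is an invented example, not data from the paper):

```python
# Hypothetical outputs of the pipeline for one meeting room.
detected_items = {"chair": 12, "table": 2, "laptop": 8}        # from object detection
unit_weight_kg = {"chair": 6.5, "table": 30.0, "laptop": 2.1}  # from the web crawler
n_people = 10             # from face detection
person_weight_kg = 60.0   # assumed average body weight
room_area_m2 = 40.0       # assumed room area

# total item load: count of each item times its looked-up unit weight
item_load_kg = sum(n * unit_weight_kg[k] for k, n in detected_items.items())
total_kg = item_load_kg + n_people * person_weight_kg
live_load_kg_per_m2 = total_kg / room_area_m2
```

Dividing the combined weight by the floor area yields the live load intensity that a traditional survey would obtain by weighing every item on-site.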