• Title/Summary/Keyword: Mean Average Precision (MAP)


Prediction of Mean Cutting Force in Ball-end Milling using Z-map and Cutting Parameter (Z-map과 절삭계수를 이용한 볼엔드밀의 평균절삭력 예측)

  • 황인길;김규만;주종남
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.179-184
    • /
    • 1995
  • A new cutting parameter is defined for the spherical part of a ball-end mill cutter. A series of slot cutting experiments was carried out to obtain the cutting parameter. The cutter contact area is expressed as grid positions in the cutting plane using a Z-map. The cutting force in each grid cell is calculated and stored as a force map prior to the average cutting force calculation. The cutting force for an arbitrary cutting area can then be calculated easily by summing the forces of the engaged grid cells in the force map. The model was verified for inclined-surface cutting by a cutting test on a cylindrical part.

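The core computational step above is a lookup-and-sum over a precomputed force map. A minimal numpy sketch of that summation idea follows; the array sizes, the engagement mask, and the force values are illustrative placeholders, not the paper's calibrated cutting model.

```python
import numpy as np

# Hypothetical force map: per-cell cutting force components (Fx, Fy, Fz)
# precomputed for one cutter rotation angle; sizes and values are illustrative.
force_map = np.random.rand(64, 64, 3)        # [grid_y, grid_x, (Fx, Fy, Fz)]
engaged = np.zeros((64, 64), dtype=bool)     # cutter contact area from the Z-map
engaged[20:40, 10:30] = True                 # toy engagement region

# Total force at this rotation angle: sum the contributions of engaged cells only.
total_force = force_map[engaged].sum(axis=0)
print(total_force)

# The mean cutting force over a revolution would average such sums across angles.
```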

3D Multi-floor Precision Mapping and Localization for Indoor Autonomous Robots (실내 자율주행 로봇을 위한 3차원 다층 정밀 지도 구축 및 위치 추정 알고리즘)

  • Kang, Gyuree;Lee, Daegyu;Shim, Hyunchul
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.1
    • /
    • pp.25-31
    • /
    • 2022
  • Moving among multiple floors is one of the most challenging tasks for indoor autonomous robots. Most previous research on indoor mapping and localization has focused on single-floor environments. In this paper, we present an algorithm that creates a multi-floor map from 3D point clouds and performs localization within that map using a LiDAR and an IMU. Our algorithm builds the multi-floor map by constructing single-floor maps with a LOAM-based algorithm and stacking them through global registration that aligns the common sections of each floor's map. Localization in the multi-floor map is performed by adding height information to an NDT (Normal Distributions Transform)-based registration method. The mean error of the multi-floor map was 0.29 m along the x-axis and 0.43 m along the y-axis. In addition, the mean yaw error was 1.00°, and the error rate of the height was 0.063. The real-world localization test, performed on the third floor, showed a mean squared error of 0.116 m and an average differential time of 0.01 s. This work should help indoor autonomous robots operate across multiple floors.
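
As a rough illustration of the map-stacking step, the sketch below applies a per-floor rigid transform, as global registration of the common sections would produce, and concatenates the clouds. The cloud contents, the transforms, and the 3.2 m floor height are placeholders, not values from the paper.

```python
import numpy as np

def apply_transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Toy per-floor clouds and the rigid transforms that global registration of the
# common sections (e.g. stairwells) would produce; values are placeholders.
floor_clouds = [np.random.rand(1000, 3), np.random.rand(1000, 3)]
floor_transforms = [np.eye(4), np.eye(4)]
floor_transforms[1][2, 3] = 3.2   # hypothetical 3.2 m height offset for the 2nd floor

multi_floor_map = np.vstack([
    apply_transform(cloud, T) for cloud, T in zip(floor_clouds, floor_transforms)
])
print(multi_floor_map.shape)
```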

Detection of Traditional Costumes: A Computer Vision Approach

  • Marwa Chacha Andrea;Mi Jin Noh;Choong Kwon Lee
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.125-133
    • /
    • 2023
  • Traditional attire has assumed a pivotal role within the contemporary fashion industry. The objective of this study is to construct a computer vision model tailored to the recognition of traditional costumes originating from five distinct countries, namely India, Korea, Japan, Tanzania, and Vietnam. Leveraging a dataset comprising 1,608 images, we proceeded to train the cutting-edge computer vision model YOLOv8. The model yielded an impressive overall mean average precision (MAP) of 96%. Notably, the Indian sari exhibited a remarkable MAP of 99%, the Tanzanian kitenge 98%, the Japanese kimono 92%, the Korean hanbok 89%, and the Vietnamese ao dai 83%. Furthermore, the model demonstrated a commendable overall box precision score of 94.7% and a recall rate of 84.3%. Within the realm of the fashion industry, this model possesses considerable utility for trend projection and the facilitation of personalized recommendation systems.
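
The training and evaluation workflow described above maps naturally onto the Ultralytics YOLOv8 API. A hedged sketch follows; the dataset file `costumes.yaml` and the hyperparameters are assumptions, not the authors' actual configuration.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 detector and fine-tune it on the costume dataset.
# "costumes.yaml" is a hypothetical dataset definition listing the five classes
# (sari, kitenge, kimono, hanbok, ao dai) and the image/label paths.
model = YOLO("yolov8n.pt")
model.train(data="costumes.yaml", epochs=100, imgsz=640)

# Validation reports precision, recall, and mean average precision per class.
metrics = model.val()
print(metrics.box.map50)   # mAP at IoU 0.5, the figure usually quoted as "MAP"
print(metrics.box.maps)    # per-class mAP values
```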

MFMAP: Learning to Maximize MAP with Matrix Factorization for Implicit Feedback in Recommender System

  • Zhao, Jianli;Fu, Zhengbin;Sun, Qiuxia;Fang, Sheng;Wu, Wenmin;Zhang, Yang;Wang, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.5
    • /
    • pp.2381-2399
    • /
    • 2019
  • Traditional Collaborative Filtering (CF) recommendation algorithms mainly focus on rating prediction with explicit ratings and cannot be applied to top-N recommendation with implicit feedback. To tackle this problem, we propose a new collaborative filtering approach, Maximize MAP with Matrix Factorization (MFMAP). In addition, to address the non-smooth loss function of pairwise learning-to-rank (LTR) algorithms, we propose a smooth MAP measure that can be easily optimized with standard approaches. We perform experiments on three different datasets, and the results show that MFMAP significantly outperforms other recommendation approaches.
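
The idea behind a smooth MAP measure is to replace the hard rank indicator inside Average Precision with a sigmoid of score differences so that the objective becomes differentiable. The toy numpy version below illustrates that general trick for a single user; it is not MFMAP's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_ap(scores, relevant):
    """Differentiable surrogate of Average Precision for one user.

    scores   : predicted scores for all items, shape (n,)
    relevant : boolean mask of the items with positive (implicit) feedback
    """
    scores = np.asarray(scores, dtype=float)
    rel_idx = np.where(relevant)[0]
    ap = 0.0
    for i in rel_idx:
        # Soft rank of item i: 1 + how many items score higher (sigmoid-weighted).
        soft_rank = 1.0 + sigmoid(scores - scores[i]).sum() - 0.5  # drop the self term
        # Soft count of relevant items ranked at or above item i.
        soft_rel = 1.0 + sum(sigmoid(scores[j] - scores[i]) for j in rel_idx if j != i)
        ap += soft_rel / soft_rank
    return ap / max(len(rel_idx), 1)

# Averaging smooth_ap over all users yields the smooth MAP objective to maximize.
print(smooth_ap([2.0, 0.5, 1.5, -1.0], [True, False, True, False]))
```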

Query Expansion based on Word Graph using Term Proximity (질의 어휘와의 근접도를 반영한 단어 그래프 기반 질의 확장)

  • Jang, Kye-Hun;Lee, Kyung-Soon
    • The KIPS Transactions: Part B
    • /
    • v.19B no.1
    • /
    • pp.37-42
    • /
    • 2012
  • Pseudo relevance feedback assumes that frequent words in the top-ranked documents are related to the initial query. The main drawback of this term-frequency approach is that it assumes feature independence and disregards any dependencies that may exist between words in the text. In this paper, we propose query expansion based on a word graph that uses term proximity to supplement the term-frequency method. On the TREC WT10g test collection, experimental results show that the proposed method achieves a 6.4% improvement in MAP (Mean Average Precision) over a language model baseline.
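
Since MAP (Mean Average Precision) is the headline metric here and in the following entry, a minimal reference implementation with binary, TREC-style relevance judgments may be useful; the toy queries at the end are purely illustrative.

```python
def average_precision(ranked_docs, relevant):
    """AP for a single query: mean of precision@k taken at each relevant hit."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over queries; `runs` maps query id -> (ranked doc ids, relevant doc ids)."""
    aps = [average_precision(ranked, set(rel)) for ranked, rel in runs.values()]
    return sum(aps) / len(aps)

# Toy example: two queries with binary relevance judgments.
print(mean_average_precision({
    "q1": (["d3", "d1", "d7"], {"d1", "d9"}),
    "q2": (["d2", "d5"], {"d2"}),
}))
```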

Query Expansion based on Word Graph using Term Proximity (단어 근접도를 반영한 단어 그래프 기반 질의 확장)

  • Jang, Gye-Hun;Jo, Seung-Hyeon;Lee, Kyung-Soon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.754-757
    • /
    • 2010
  • Query expansion is a technique that improves retrieval performance by selecting words related to the query from the initial retrieval results and expanding the query with them. The PageRank algorithm was proposed to measure the relative importance of documents using the link structure between web documents. In this paper, we compute the relative importance not between documents but between the words within a document through a word graph. Relationships among words located close to the query terms are applied to the word graph to compute importance scores and select expansion words. To verify the effectiveness of the proposed method, experiments were conducted on the TREC WT10g web document collection, and MAP (Mean Average Precision) improved by 4.1% over the relevance model.
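
A toy reading of the word-graph idea is sketched below using networkx's PageRank with a personalization vector that favours words co-occurring near the query terms. The window size, weighting, and selection rule are placeholders rather than the paper's exact model.

```python
import networkx as nx

def expansion_terms(tokens, query_terms, window=3, top_k=5):
    """Toy word-graph expansion: connect words that co-occur within `window`
    positions, bias PageRank toward words adjacent to the query terms, and
    return the highest-ranked non-query words."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            g.add_edge(w, tokens[j])
    near_query = {n for q in query_terms if q in g for n in g.neighbors(q)}
    personalization = {w: 2.0 if w in near_query else 1.0 for w in g.nodes}
    scores = nx.pagerank(g, personalization=personalization)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [w for w in ranked if w not in query_terms][:top_k]

doc = "solar power plants convert solar energy into electric power".split()
print(expansion_terms(doc, {"solar"}))
```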

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.4968-4986
    • /
    • 2017
  • In this paper, we address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to produce a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, which builds on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels within an instance-based learning framework that replaces the original pixel basis. A key feature that enhances matching precision is the subsequent incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified Cross Bilateral Filter is used to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images into stereo pairs for 3D visualization, producing anaglyph images that are more realistic and immersive.
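
A much simplified sketch of transferring depth at the superpixel rather than pixel level, using SLIC from scikit-image: each segment takes the median of a single candidate depth map. The real pipeline retrieves and fuses multiple candidates, injects semantic labels, and refines the result with a modified Cross Bilateral Filter, none of which is shown here.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_depth(image, candidate_depth):
    """Assign each SLIC superpixel the median of a candidate depth map inside it."""
    segments = slic(image, n_segments=400, compactness=10)
    depth = np.zeros(segments.shape, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        depth[mask] = np.median(candidate_depth[mask])
    return depth

# Toy usage with random data standing in for an RGB image and a retrieved depth map.
rgb = np.random.rand(120, 160, 3)
candidate = np.random.rand(120, 160)
print(superpixel_depth(rgb, candidate).shape)
```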

Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method that can detect objects and estimate their distances in real time from a monocular camera by applying deep learning. We use the YOLOv2 model, which is widely applied to autonomous vehicles and robots because of its fast image processing speed. We modified the loss function and retrained the model so that YOLOv2 can detect objects and estimate distances at the same time. A term for learning the bounding box values x, y, w, h and the distance value z was added to the YOLOv2 loss function, alongside the existing classification losses. In addition, the distance term was multiplied by a weighting parameter to balance the learning. We trained the model with object locations and classes captured by the camera and distance data measured by a LiDAR, so that it can estimate objects and distances from a monocular camera even when the vehicle is going up or down a hill. To evaluate object detection and distance estimation performance, MAP (Mean Average Precision) and adjusted R-squared were used, and the results were compared with previous studies. In addition, the FPS (frames per second) of the original YOLOv2 model was compared with that of our model to measure speed.
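
The key change to the loss is an extra regression target: the distance z is appended to the box coordinates and down-weighted so it does not dominate training. A toy per-box version of that idea follows; the weighting value and exact functional form are assumptions.

```python
import numpy as np

def box_distance_loss(pred, target, lambda_dist=0.1):
    """Illustrative per-box regression term: squared error on (x, y, w, h) plus a
    weighted squared error on the distance z, mirroring the idea of appending z to
    the YOLOv2 coordinate loss. lambda_dist is a hypothetical balancing weight."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    coord_err = np.sum((pred[:4] - target[:4]) ** 2)   # x, y, w, h
    dist_err = (pred[4] - target[4]) ** 2              # z (distance to the object)
    return coord_err + lambda_dist * dist_err

print(box_distance_loss([0.5, 0.5, 0.2, 0.3, 12.0], [0.52, 0.48, 0.2, 0.28, 11.5]))
```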

Korean-Chinese Person Name Translation for Cross Language Information Retrieval

  • Wang, Yu-Chun;Lee, Yi-Hsun;Lin, Chu-Cheng;Tsai, Richard Tzong-Han;Hsu, Wen-Lian
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.489-497
    • /
    • 2007
  • Named entity translation plays an important role in many applications, such as information retrieval and machine translation. In this paper, we focus on translating person names, the most common type of named entity in Korean-Chinese cross-language information retrieval (KCIR). Unlike other languages, Chinese uses characters (ideographs), which makes person name translation difficult because one syllable may map to several Chinese characters. We propose an effective hybrid person name translation method to improve the performance of KCIR. First, we use Wikipedia as a translation tool based on the inter-language links between the Korean edition and the Chinese or English editions. Second, we adopt the Naver people search engine to find the query name's Chinese or English translation. Third, we extract Korean-English transliteration pairs from Google snippets, and then search for the English-Chinese transliteration in the database of Taiwan's Central News Agency or in Google. The performance of KCIR using our method is over five times better than that of a dictionary-based system. The mean average precision is 0.3490 and the average recall is 0.7534. The method can handle Chinese, Japanese, Korean, and non-CJK person name translation from Korean to Chinese. Hence, it substantially improves the performance of KCIR.

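The first stage of the hybrid pipeline, following inter-language links from the Korean Wikipedia, can be approximated with the public MediaWiki API, as sketched below. The Naver people search and transliteration-mining fallbacks are omitted, and disambiguation handling is left out.

```python
import requests

def wikipedia_langlink(korean_name, target_lang="zh"):
    """Look up a Chinese (or English) title for a Korean name via Wikipedia
    inter-language links; error handling and disambiguation are omitted."""
    resp = requests.get(
        "https://ko.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "langlinks",
            "titles": korean_name,
            "lllang": target_lang,
            "format": "json",
        },
        timeout=10,
    )
    pages = resp.json().get("query", {}).get("pages", {})
    for page in pages.values():
        for link in page.get("langlinks", []):
            return link.get("*")   # translated title, if an inter-language link exists
    return None
```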

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is to recognize the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large number of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), and 11 variables including geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed high mean decrease in accuracy and mean decrease in Gini for the XY dist. and Z dist. variables related to road design, respectively. Thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine learning-based segmentation of MMS point cloud data and will help to reduce the cost and effort associated with HD mapping.
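
As a rough template for the per-point classification setup described above, the sketch below trains a random forest on an 11-feature table and reports a macro F1 score. The features and labels are synthetic placeholders, not the MMS data or the paper's tuned model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical per-point feature table: 11 features (geometry, color, intensity,
# and road-design distances such as XY dist. and Z dist.) and six class labels
# (roads, curbs, sidewalks, medians, lanes, other elements).
X = np.random.rand(5000, 11)
y = np.random.randint(0, 6, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f1_score(y_test, clf.predict(X_test), average="macro"))
print(clf.feature_importances_)   # impurity-based (mean decrease in Gini) importances
```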