• Title/Summary/Keyword: Zero-Shot

Golf driver shaft variability on ball speed, head speed and fly distance (골프 드라이버 샤프트의 가변성이 타구속도, 헤드스피드 및 비거리에 미치는 영향)

  • Jung, Chul; Park, Woo-Yung
    • Journal of the Korean Applied Science and Technology / v.35 no.1 / pp.273-283 / 2018
  • The purpose of this study was to identify the optimal driver according to shaft stiffness, shaft length, and shaft weight, the determining factors of a driver shot. The participants were 10 male professional golfers with a handicap of zero and 10 male amateur golfers with a mean score of 90 (a handicap of about 18). Testing was limited to the number 1 driver; 24 drivers were tested, varying in shaft stiffness, length, weight, total weight, and swing weight. The dependent variables were ball speed, flying distance, and head speed. The findings can be summarized as follows. First, there was a significant difference by shaft stiffness: ball speed, head speed, and flying distance were best with a CPM above 230. Second, there was a significant difference by shaft length: ball speed and head speed were best at 46 inches, while flying distance was best at 45 inches. Third, there was no significant difference by swing weight (SW); by shaft weight, ball speed and flying distance were best with a 65 g shaft, while head speed was fastest with a 50 g shaft. Fourth, all variables differed significantly between the professional and amateur golfers. In conclusion, although individual physical condition varies, the best results were obtained with a driver of CPM above 230, shaft length 46 inches, and shaft weight 65 g.
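The significance claims in this abstract are the kind usually established with a one-way ANOVA across equipment conditions. The abstract does not state the exact test used or report the raw data, so the sketch below is only an illustration of such a group comparison; the ball-speed values are synthetic, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic ball-speed samples (m/s) for three shaft-stiffness (CPM) groups;
# the numbers are invented for illustration and are not the study's data.
cpm_low  = rng.normal(68.0, 2.0, size=10)   # softer shaft
cpm_mid  = rng.normal(69.0, 2.0, size=10)
cpm_high = rng.normal(70.5, 2.0, size=10)   # CPM above 230

# One-way ANOVA: does mean ball speed differ across the stiffness groups?
f_stat, p_value = stats.f_oneway(cpm_low, cpm_mid, cpm_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> significant difference
```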

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong; Namgyu Kim
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.165-184 / 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for a downstream task through a fine-tuning process. However, concerns have been raised that privacy may be violated in the process of using these language models: data privacy may be violated when the data owner provides large amounts of data to the model owner to fine-tune the language model, and conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are exposed, which may violate the privacy of the model. The concept of offsite tuning was recently proposed to fine-tune language models while protecting privacy in such situations, but that study has the limitation of not providing a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class classification fine-tuning on Korean documents. To evaluate the performance of the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms both the zero-shot model and the offsite model in terms of classification accuracy.
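The offsite-tuning setup described above splits a frozen language model into small trainable adapters plus a lossy "emulator" of the private middle layers; the data owner fine-tunes only the adapters and an added classifier against the emulator, and the model owner then plugs the tuned parts back into the full model. The sketch below is a minimal illustration under stated assumptions: the toy layer stack, the split points, the every-other-layer emulator, and all sizes (HIDDEN, N_LAYERS, N_CLASSES) are invented for the example and are not the paper's configuration or the original offsite-tuning implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
HIDDEN, N_LAYERS, N_CLASSES = 64, 12, 5   # illustrative sizes, not the paper's

def block():
    # Stand-in for one transformer block of the language model.
    return nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.GELU())

# --- Model owner: split the frozen backbone --------------------------------
backbone = nn.ModuleList([block() for _ in range(N_LAYERS)])
bottom_adapter = backbone[:2]    # shallow layers, sent to the data owner
middle = backbone[2:-2]          # deep layers, kept private
top_adapter = backbone[-2:]      # shallow layers, sent to the data owner

# Lossy emulator of the middle (here: simple layer drop, keeping every other
# layer) so the true middle weights never leave the model owner. In this toy
# script the emulator shares layer objects with `middle`; in a real setting
# it would be a separately derived (e.g. distilled) copy.
emulator = nn.ModuleList(list(middle)[::2])
for p in emulator.parameters():
    p.requires_grad = False      # the data owner never updates the emulator

# --- Data owner: fine-tune adapters + an added classifier on private data ---
classifier = nn.Linear(HIDDEN, N_CLASSES)   # the additional classification head
trainable = (list(bottom_adapter.parameters())
             + list(top_adapter.parameters())
             + list(classifier.parameters()))
optimizer = torch.optim.Adam(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def forward(middle_stack, x):
    for layer in bottom_adapter: x = layer(x)
    for layer in middle_stack:   x = layer(x)
    for layer in top_adapter:    x = layer(x)
    return classifier(x.mean(dim=1))        # mean-pool tokens, then classify

x = torch.randn(8, 16, HIDDEN)              # toy batch: (batch, tokens, hidden)
y = torch.randint(0, N_CLASSES, (8,))
loss = loss_fn(forward(emulator, x), y)     # train against the emulator only
loss.backward()
optimizer.step()

# --- Plug-in model: tuned adapters + classifier around the real middle ------
with torch.no_grad():
    logits = forward(middle, x)
print(f"train loss {loss.item():.3f}, plug-in logits {tuple(logits.shape)}")
```

In this arrangement the raw documents never leave the data owner and the true middle-layer weights never leave the model owner; only the adapter and classifier weights cross the boundary.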