• Title/Summary/Keyword: Zero-shot

45 search results

Reverse-time migration using the Poynting vector (포인팅 벡터를 이용한 역시간 구조보정)

  • Yoon, Kwang-Jin; Marfurt, Kurt J.
    • Geophysics and Geophysical Exploration / v.9 no.1 / pp.102-107 / 2006
  • Recently, rapid developments in computer hardware have enabled reverse-time migration to be applied to various production imaging problems. As a wave-equation technique using the two-way wave equation, reverse-time migration can handle not only multi-path arrivals but also steep dips and overturned reflections. However, reverse-time migration causes unwanted artefacts, which arise from the two-way character of the hyperbolic wave equation. Zero-lag cross-correlation with diving waves, head waves and back-scattered waves results in spurious artefacts. These strong artefacts share a common feature: at each correlation point the correlating forward and backward wavefields propagate in almost opposite directions, because the ray paths of the forward and backward wavefields are almost identical. In this paper, we present several tactics to avoid artefacts in shot-domain reverse-time migration. Simple muting of a shot gather before migration, or wavefront migration, which performs correlation only within a time window following the first-arrival traveltimes, is useful in suppressing artefacts. Calculating the wave propagation direction from the Poynting vector gives rise to a new imaging condition, which can eliminate strong artefacts and can produce common image gathers in the reflection-angle domain.
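
To make the imaging condition concrete: the Poynting vector of an acoustic wavefield, S = -(∂u/∂t)∇u, points along the local propagation direction, so comparing the vectors of the forward and backward wavefields at each grid point lets the correlation be muted where the two fields travel nearly antiparallel. The following is a minimal NumPy sketch of that idea only; the array layout, angle threshold, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poynting(u_prev, u_curr, u_next, dz, dx, dt):
    """Acoustic Poynting vector S = -(du/dt) * grad(u) for one 2-D snapshot."""
    du_dt = (u_next - u_prev) / (2.0 * dt)      # central-difference time derivative
    du_dz, du_dx = np.gradient(u_curr, dz, dx)  # spatial gradient of the current snapshot
    return -du_dt * du_dz, -du_dt * du_dx       # (S_z, S_x) at every grid point

def poynting_image(src, rec, dz, dx, dt, max_angle_deg=160.0):
    """Zero-lag cross-correlation image that mutes contributions where the forward
    (source) and backward (receiver) wavefields propagate in nearly opposite
    directions -- the configuration that produces the strong RTM artefacts."""
    image = np.zeros(src.shape[1:])
    cos_min = np.cos(np.radians(max_angle_deg))   # cosine of the largest allowed opening angle
    for it in range(1, src.shape[0] - 1):         # src, rec: (nt, nz, nx) wavefield snapshots
        sz, sx = poynting(src[it - 1], src[it], src[it + 1], dz, dx, dt)
        rz, rx = poynting(rec[it - 1], rec[it], rec[it + 1], dz, dx, dt)
        dot = sz * rz + sx * rx
        norm = np.hypot(sz, sx) * np.hypot(rz, rx) + 1e-20
        keep = (dot / norm) > cos_min             # opening angle smaller than max_angle_deg
        image += np.where(keep, src[it] * rec[it], 0.0)
    return image
```

The same per-point opening angle could also be binned to form common image gathers in the reflection-angle domain, as the abstract describes.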

Effective ChatGPT Prompts in Mathematical Problem Solving : Focusing on Quadratic Equations and Quadratic Functions (수학 문제 해결에서 효과적인 ChatGPT의 프롬프트 고찰: 이차방정식과 이차함수를 중심으로)

  • Oh, Se Jun
    • Communications of Mathematical Education / v.37 no.3 / pp.545-567 / 2023
  • This study investigates effective ChatGPT prompts for solving mathematical problems, focusing on the chapters on quadratic equations and quadratic functions. A structured prompt was designed following the sequence 'Role-Rule-Example Solution-Problem-Process'. The study used a setup combining GPT-4, the Wolfram plugin, and Advanced Data Analysis; Wolfram served as the primary tool for calculations to reduce computational errors. With the structured prompt, the accuracy rate on quadratic-equation and quadratic-function problems from nine high school mathematics textbooks was 91%, higher than with zero-shot prompts, confirming the effectiveness of structured prompts in solving mathematical problems. The structured prompts designed in this study can contribute to the development of intelligent information systems for personalized and customized education.
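
For illustration, a minimal sketch of how a 'Role-Rule-Example Solution-Problem-Process' prompt could be assembled is shown below; the wording of each block and the example problems are assumptions made for demonstration, not the prompt published in the study.

```python
# Illustrative assembly of a 'Role-Rule-Example Solution-Problem-Process' prompt.
# Block wording is hypothetical; the study's exact prompt is not reproduced here.

ROLE = "You are a high-school mathematics tutor specialising in quadratic equations and quadratic functions."
RULES = (
    "Rules:\n"
    "1. Delegate every numeric or symbolic computation to Wolfram to avoid calculation errors.\n"
    "2. Show each solution step before stating the final answer.\n"
    "3. Use standard textbook notation."
)
EXAMPLE_SOLUTION = (
    "Example solution:\n"
    "Problem: Solve x^2 - 5x + 6 = 0.\n"
    "Step 1: Factor: (x - 2)(x - 3) = 0.\n"
    "Step 2: Therefore x = 2 or x = 3."
)
PROBLEM = "Problem: Find the vertex of y = 2x^2 - 8x + 3."
PROCESS = "Process: Restate the problem, plan the solution, compute with Wolfram, then verify the result."

structured_prompt = "\n\n".join([ROLE, RULES, EXAMPLE_SOLUTION, PROBLEM, PROCESS])
print(structured_prompt)  # in the study's setting, this text is sent to GPT-4 with the Wolfram plugin
```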

A Comparative Study on Korean Zero-shot Relation Extraction using a Large Language Model (거대 언어 모델을 활용한 한국어 제로샷 관계 추출 비교 연구)

  • Jinsung Kim; Gyeongmin Kim; Kinam Park; Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.648-653 / 2023
  • The relation extraction task infers the appropriate relation between two entities from a given text, and it underpins downstream applications such as knowledge base construction and question answering. As generative large language models have recently achieved outstanding performance across natural language processing by exploiting their internalized knowledge, it is worth actively exploring how to leverage them for relation extraction, a representative information extraction task. In particular, given the importance of relation extraction in low-resource and especially zero-shot settings, which resemble real-world inference environments, many previous studies have demonstrated that effective prompting techniques are worthwhile. This study therefore conducts a comparative study of zero-shot inference for Korean relation extraction by applying diverse prompting techniques to a large language model, providing a foundation for further research on optimal LLM prompting for Korean relation extraction. Specifically, we introduce three prompting techniques to Korean relation extraction, including Chain-of-Thought and Self-Refine, which have shown large gains on other challenging tasks such as commonsense reasoning, and we provide a quantitative and qualitative comparative analysis. The experimental results show that prompting with general task instructions quantitatively yields the best zero-shot performance, outperforming Chain-of-Thought and Self-Refine. However, rather than indicating a limitation of those two methods, this can be interpreted as implying that they need to be optimized for the Korean relation extraction task, and we expect the results to improve through subsequent experimental studies that develop these methodologies.
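
For illustration, the sketch below contrasts a plain task-instruction prompt with a Chain-of-Thought variant for zero-shot relation extraction; the example sentence, entity pair, relation labels, and prompt wording are hypothetical, not the prompts evaluated in the paper.

```python
# Hypothetical comparison of zero-shot prompting styles for Korean relation extraction.
# The sentence, entities, and relation labels below are made up for illustration only.

LABELS = ["per:employee_of", "org:founded_by", "per:place_of_birth", "no_relation"]

def instruction_prompt(sentence: str, head: str, tail: str) -> str:
    """Plain task-instruction prompt (the style that scored best in the study)."""
    return (
        f"Choose the relation between '{head}' and '{tail}' in the sentence below.\n"
        f"Sentence: {sentence}\n"
        f"Options: {', '.join(LABELS)}\n"
        "Answer with exactly one option."
    )

def chain_of_thought_prompt(sentence: str, head: str, tail: str) -> str:
    """Chain-of-Thought variant: ask the model to reason step by step first."""
    return (
        instruction_prompt(sentence, head, tail)
        + "\nFirst explain your reasoning step by step, then give the final option on the last line."
    )

# A Self-Refine variant would additionally feed the model's first answer back to it
# for critique and revision; that loop is omitted from this sketch.

sent = "김철수는 2010년에 서울에서 창업한 한빛소프트웨어에 재직 중이다."
print(instruction_prompt(sent, "김철수", "한빛소프트웨어"))
print(chain_of_thought_prompt(sent, "김철수", "한빛소프트웨어"))
```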

Golf driver shaft variability on ball speed, head speed and fly distance (골프 드라이버 샤프트의 가변성이 타구속도, 헤드스피드 및 비거리에 미치는 영향)

  • Jung, Chul; Park, Woo-Yung
    • Journal of the Korean Applied Science and Technology / v.35 no.1 / pp.273-283 / 2018
  • The purpose of this study is to analyze optimum driver selection according to shaft stiffness, shaft length, and shaft weight, which are determining factors of the driver shot. The subjects were 10 male professional golfers with a handicap of zero and 10 male amateur golfers with a mean score of 90 (a handicap of about 18). The club was limited to the number 1 driver, and 24 drivers were tested that varied in shaft stiffness (CPM), shaft length, shaft weight, total weight, and swing weight. The dependent variables were ball speed, head speed, and flying distance. The findings can be summarized as follows. First, there was a significant difference by shaft stiffness: ball speed, head speed, and flying distance were best when the CPM was above 230. Second, there was a significant difference by shaft length: ball speed and head speed were best at 46 inches, while flying distance was best at 45 inches. Third, there was no significant difference by swing weight; ball speed and flying distance by shaft weight were best with the 65 g shaft, while head speed was fastest with the 50 g shaft. Fourth, all variables differed significantly between professional and amateur golfers. In conclusion, although results will differ with individual physical condition, the best performance was found with a driver of CPM above 230, a shaft length of 46 inches, and a shaft weight of 65 g.

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong; Namgyu Kim
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.165-184 / 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when these language models are used: data privacy may be violated when the data owner provides large amounts of data to the model owner to fine-tune the language model, and conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are revealed, which may violate the privacy of the model. The concept of offsite tuning has recently been proposed to fine-tune language models while protecting privacy in such situations, but that work does not provide a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class fine-tuning on Korean documents. To evaluate the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms the zero-shot model and the offsite model in terms of classification accuracy.
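
For illustration, a minimal PyTorch-style sketch of the offsite-tuning setup with an added classification head is shown below: the data owner trains only shallow adapters and the classifier against a compressed emulator of the frozen middle layers, and the model owner later plugs the returned modules back into the full model. Layer counts, dimensions, and module names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden, n_classes = 768, 5                 # assumed sizes (e.g., five AIHub document fields)

def block():                               # stand-in for one transformer-like layer
    return nn.Sequential(nn.Linear(hidden, hidden), nn.GELU())

# Model owner's full network: shallow adapters around a deep, private middle stack.
front_adapter = nn.Sequential(*[block() for _ in range(2)])    # shared with the data owner, trainable
middle_full   = nn.Sequential(*[block() for _ in range(12)])   # private weights, never leave the owner
rear_adapter  = nn.Sequential(*[block() for _ in range(2)])    # shared with the data owner, trainable

# Emulator: a lossily compressed middle (here simply fewer layers) sent offsite instead of middle_full.
middle_emulator = nn.Sequential(*[block() for _ in range(4)])
for p in middle_emulator.parameters():
    p.requires_grad = False                # kept frozen on the data owner's side

classifier = nn.Linear(hidden, n_classes)  # the additional head used for multi-class fine-tuning

# Data owner: fine-tune adapters + classifier against the emulator on private documents.
offsite_model = nn.Sequential(front_adapter, middle_emulator, rear_adapter, classifier)
optimizer = torch.optim.AdamW((p for p in offsite_model.parameters() if p.requires_grad), lr=1e-4)

x = torch.randn(8, hidden)                 # stand-in for encoded Korean documents
y = torch.randint(0, n_classes, (8,))      # stand-in field labels
loss = F.cross_entropy(offsite_model(x), y)
loss.backward()
optimizer.step()

# Model owner: plug the returned adapters and classifier back around the full middle stack.
plug_in_model = nn.Sequential(front_adapter, middle_full, rear_adapter, classifier)
```

In this arrangement the private data never reaches the model owner and the full middle weights never reach the data owner, which is the privacy trade-off the abstract describes.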