Deep learning-based computer vision, especially techniques for estimating human 3D posture from single-view images, plays a key role in the quantitative analysis of complex dynamic movements. These techniques have great potential to provide objective indicators in sports analysis, where judgments are otherwise prone to subjectivity. This study applies 3D human pose estimation to figure skating to develop an AI algorithm that automatically classifies the six major jumps (Axel, flip, loop, Lutz, Salchow, and toe loop), which demand high skill and artistry but whose judging suffers from subjectivity. The research proceeded as follows. First, we applied RTMW3D, a 3D human pose estimation model, to a public figure skating video dataset to extract the skaters' 3D joint coordinates as time-series data. Next, from the extracted 3D skeleton data, we quantified the key biomechanical features that distinguish each jump, such as the take-off edge, the number of aerial rotations, and the take-off technique. Finally, these feature vectors were fed into a Transformer, a deep learning architecture well suited to time-series classification, to determine the jump type. The model developed in this study achieved 70.3% classification accuracy, suggesting that with further improvement it could provide objective feedback to athletes and coaches and improve training efficiency. Furthermore, the model is expected to serve as an auxiliary tool to increase the accuracy and fairness of judging.
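As a minimal sketch of the feature-quantification step described above, the number of aerial rotations can be estimated from the yaw of the shoulder line in the extracted 3D skeleton sequence. The joint-index values and array layout below are assumptions for illustration (a `(frames, joints, 3)` coordinate array with hypothetical left/right shoulder indices), not the study's actual preprocessing:

```python
import numpy as np

def count_revolutions(joints, l_sho=5, r_sho=6):
    """Estimate full-body revolutions from the yaw of the shoulder line.

    joints : (frames, num_joints, 3) array of 3D joint coordinates.
    l_sho, r_sho : hypothetical indices of the left/right shoulder joints.
    """
    v = joints[:, r_sho] - joints[:, l_sho]      # shoulder vector per frame
    yaw = np.arctan2(v[:, 1], v[:, 0])           # heading in the ground plane
    yaw = np.unwrap(yaw)                         # remove 2*pi wrap-around jumps
    return abs(yaw[-1] - yaw[0]) / (2 * np.pi)   # total number of turns
```

Applied to the frames between take-off and landing, this yields one scalar feature per jump; `np.unwrap` is what lets the cumulative angle exceed a single revolution.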