Acknowledgments
This work was supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-RS-2022-00156287, 50%), and by the IITP under the Artificial Intelligence Convergence Innovation Human Resources Development grant funded by the Korea government (MSIT) (IITP-2023-RS-2023-00256629, 50%).
References
- Girard, A. (2024). History and Evolution of Fashion and Design in Different Regions and Periods in France. International Journal of Fashion and Design, 3(1), 49-59.
- Song, X., Feng, F., Liu, J., Li, Z., Nie, L., & Ma, J. (2017, October). NeuroStylist: Neural compatibility modeling for clothing matching. In Proceedings of the 25th ACM International Conference on Multimedia (pp. 753-761).
- Shirkhani, S., Mokayed, H., Saini, R., & Chai, H. Y. (2023). Study of AI-Driven Fashion Recommender Systems. SN Computer Science, 4(5), 514.
- Lin, Y. L., Tran, S., & Davis, L. S. (2020). Fashion outfit complementary item retrieval. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3311-3319).
- Han, X., Yu, L., Zhu, X., Zhang, L., Song, Y. Z., & Xiang, T. (2022, October). FashionViL: Fashion-focused vision-and-language representation learning. In European Conference on Computer Vision (pp. 634-651). Cham: Springer Nature Switzerland.
- Jing, P., Cui, K., Guan, W., Nie, L., & Su, Y. (2023). Category-aware multimodal attention network for fashion compatibility modeling. IEEE Transactions on Multimedia, 25, 9120-9131.
- Aggarwal, P. (2019). Fashion product images dataset. Retrieved from Kaggle: https://www.kaggle.com/paramaggarwal/fashion-product-images-dataset.
- Reid, M., Savinov, N., Teplyashin, D., Lepikhin, D., Lillicrap, T., Alayrac, J. B., ... & Mustafa, B. (2024). Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.