1. Vaswani, Ashish, et al., "Attention is all you need," Advances in Neural Information Processing Systems 30, 2017.
2. Yue, Zhihan, et al., "TS2Vec: Towards universal representation of time series," Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, No. 8, 2022.
3. Devlin, Jacob, et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
4. Franceschi, Jean-Yves, Aymeric Dieuleveut, and Martin Jaggi, "Unsupervised scalable representation learning for multivariate time series," Advances in Neural Information Processing Systems 32, 2019.
5. Berndt, Donald J., and James Clifford, "Using dynamic time warping to find patterns in time series," KDD Workshop, Vol. 10, No. 16, 1994.
6. Baydogan, Mustafa Gokce, George Runger, and Eugene Tuv, "A bag-of-features framework to classify time series," IEEE Transactions on Pattern Analysis and Machine Intelligence 35.11, pp. 2796-2802, 2013.
7. Lines, Jason, and Anthony Bagnall, "Time series classification with ensembles of elastic distance measures," Data Mining and Knowledge Discovery 29, pp. 565-592, 2015.
8. Hills, Jon, et al., "Classification of time series by shapelet transformation," Data Mining and Knowledge Discovery 28, pp. 851-881, 2014.
9. Bagnall, Anthony, et al., "Time-series classification with COTE: the collective of transformation-based ensembles," IEEE Transactions on Knowledge and Data Engineering 27.9, pp. 2522-2535, 2015.
10. Sak, Haşim, Andrew Senior, and Françoise Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," arXiv preprint arXiv:1402.1128, 2014.
11. Chung, Junyoung, et al., "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
12. Wang, Zhiguang, Weizhong Yan, and Tim Oates, "Time series classification from scratch with deep neural networks: A strong baseline," 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, 2017.
13. Karim, Fazle, et al., "LSTM fully convolutional networks for time series classification," IEEE Access 6, pp. 1662-1669, 2017.
14. Ismail Fawaz, Hassan, et al., "Deep learning for time series classification: a review," Data Mining and Knowledge Discovery 33.4, pp. 917-963, 2019.
15. Karimi-Bidhendi, Saeed, Faramarz Munshi, and Ashfaq Munshi, "Scalable classification of univariate and multivariate time series," 2018 IEEE International Conference on Big Data (Big Data), IEEE, pp. 1598-1605, 2018.
16. Tonekaboni, S., Eytan, D., and Goldenberg, A., "Unsupervised representation learning for time series with temporal neighborhood coding," arXiv preprint arXiv:2106.00750, 2021.
17. Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A., and Eickhoff, C., "A transformer-based framework for multivariate time series representation learning," Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 2114-2124, August 2021.
18. Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C. K., Li, X., and Guan, C., "Time-series representation learning via temporal and contextual contrasting," arXiv preprint arXiv:2106.14112, 2021.
19. Mikolov, T., Chen, K., Corrado, G., and Dean, J., "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
20. Pennington, J., Socher, R., and Manning, C. D., "GloVe: Global vectors for word representation," Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, October 2014.
21. Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., and Okruszek, L., "Detecting formal thought disorder by deep contextualized word representations," Psychiatry Research, 304, 114135, 2021.
22. Bahdanau, D., Cho, K., and Bengio, Y., "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
23. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... and Stoyanov, V., "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
24. Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I., "Improving language understanding by generative pre-training," 2018.
25. Clark, K., Khandelwal, U., Levy, O., and Manning, C. D., "What does BERT look at? An analysis of BERT's attention," arXiv preprint arXiv:1906.04341, 2019.
26. Dau, H. A., Bagnall, A., Kamgar, K., Yeh, C. C. M., Zhu, Y., Gharghabi, S., ... and Keogh, E., "The UCR time series archive," IEEE/CAA Journal of Automatica Sinica, 6(6), pp. 1293-1305, 2019.