[1] JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY.
[2] C. Glackin, G. Chollet, N. Dugan, N. Cannings, J. Wall, S. Tahir and M. Rajarajan, “Privacy preserving encrypted phonetic search of speech data,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 6414-6418, March 5-7, 2017.
[3] H. Wang, L. Zhou, W. Zhang and H. Liu, “Watermarking-based perceptual hashing search over encrypted speech,” in Proc. of Int. Workshop on Digital Watermarking, Springer, Berlin, Heidelberg, pp. 423-434, July 9, 2014.
[4] H. X. Wang and G. Y. Hao, “Encryption speech perceptual hashing algorithm and retrieval scheme based on time and frequency domain change characteristics,” China Patent CN104835499A, Aug. 12, 2015.
[5] H. Zhao and S. He, “A retrieval algorithm for encrypted speech based on perceptual hashing,” in Proc. of 12th Int. Conf. on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 1840-1845, Aug. 13-15, 2016.
[6] S. He and H. Zhao, “A retrieval algorithm of encrypted speech based on syllable-level perceptual hashing,” Computer Science & Information Systems, vol. 14, no. 3, pp. 703-718, 2017.
[7] “A retrieval algorithm of encrypted speech based on short-term cross-correlation and perceptual hashing.”
[8] “Deep CNN based binary hash video representations for face retrieval.”
[9] “Supervised deep hashing for scalable face image retrieval.”
[10] “Global and local semantics-preserving based deep hashing for cross-modal retrieval.”
[11] “Triplet-Based Deep Hashing Network for Cross-Modal Retrieval.”
[12] Y. Cao, M. Long, B. Liu and J. Wang, “Deep Cauchy hashing for Hamming space retrieval,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1229-1237, 2018.
[13] “Deep Self-Taught Hashing for Image Retrieval.”
[14] “Discriminative Deep Quantization Hashing for Face Image Retrieval.”
[15] “A privacy-preserving image retrieval scheme based secure kNN, DNA coding and deep hashing.”
[16] I. Song, J. Chung, T. Kim and Y. Bengio, “Dynamic Frame Skipping for Fast Speech Recognition in Recurrent Neural Network Based Acoustic Models,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 4984-4988, April 15-20, 2018.
[17] M. Fujimoto and H. Kawai, “Comparative Evaluations of Various Factored Deep Convolutional RNN Architectures for Noise Robust Speech Recognition,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 4829-4833, April 15-20, 2018.
[18] “Parsimonious memory unit for recurrent neural networks with application to natural language processing.”
[19] “Putting hands to rest: efficient deep CNN-RNN architecture for chemical named entity recognition with no hand-crafted rules.”
[20] Y. Xu, Q. Kong, W. Wang and M. D. Plumbley, “Large-scale weakly supervised audio classification using gated convolutional neural network,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 121-125, April 15-20, 2018.
[21] J. Sang, S. Park and J. Lee, “Convolutional Recurrent Neural Networks for Urban Sound Classification Using Raw Waveforms,” in Proc. of 26th European Signal Processing Conference (EUSIPCO), pp. 2444-2448, Sept. 3-7, 2018.
[22] R. Pradeep and K. S. Rao, “Incorporation of Manner of Articulation Constraint in LSTM for Speech Recognition,” Circuits, Systems, and Signal Processing, vol. 38, no. 8, pp. 3482-3500, August 2019.
[23] S. Ghorbani, A. E. Bulut and J. H. L. Hansen, “Advancing Multi-Accented LSTM-CTC Speech Recognition Using a Domain Specific Student-Teacher Learning Paradigm,” in Proc. of IEEE Spoken Language Technology Workshop (SLT), pp. 29-35, Dec. 18-21, 2018.
[24] “Long short-term memory with attention and multitask learning for distant speech recognition.”
[25] “Attention-Based Dense LSTM for Speech Emotion Recognition.”
[26] F. Tao and G. Liu, “Advanced LSTM: A study about better time dependency modeling in emotion recognition,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 2906-2910, April 15-20, 2018.
[27] G. Ramet, P. N. Garner, M. Baeriswyl and A. Lazaridis, “Context-Aware Attention Mechanism for Speech Emotion Recognition,” in Proc. of IEEE Spoken Language Technology Workshop (SLT), pp. 126-131, Dec. 18-21, 2018.
[28] C. Etienne, G. Fidanza, A. Petrovskii, L. Devillers and B. Schmauch, “CNN+LSTM architecture for speech emotion recognition with data augmentation,” in Proc. of Workshop on Speech, Music and Mind 2018, pp. 21-25, 2018.
[29] S. Jung, J. Park and S. Lee, “Polyphonic Sound Event Detection Using Convolutional Bidirectional LSTM and Synthetic Data-based Transfer Learning,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 885-889, May 12-17, 2019.
[30] T. Matsuyoshi, T. Komatsu, R. Kondo, T. Yamada and S. Makino, “Weakly labeled learning using BLSTM-CTC for sound event detection,” in Proc. of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 1918-1923, Nov. 12-15, 2018.
[31] J. Liu, Y. Yin, H. Jiang, H. Kan, Z. Zhang, P. Chen, B. Zhu and Z. Wang, “Bowel Sound Detection Based on MFCC Feature and LSTM,” in Proc. of IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 1-4, Oct. 17-19, 2018.
[32] B. Elizalde, S. Zarar and B. Raj, “Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 4095-4099, May 12-17, 2019.
[33] S. B. Davis and P. Mermelstein, “Evaluation of acoustic parameters for monosyllabic word recognition in continuously spoken sentences,” The Journal of the Acoustical Society of America, vol. 64, pp. S180-S181, 1978.
[34] Y. Xu, Q. Kong, W. Wang and M. D. Plumbley, “Large-scale weakly supervised audio classification using gated convolutional neural network,” in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 121-125, April 15-20, 2018.
[35] “Long short-term memory.”
[36] “New dynamics coined in a 4-D quadratic autonomous hyper-chaotic system.”
[37] B. McFee, C. Raffel, D. Liang, D. P. Ellis, M. McVicar, E. Battenberg and O. Nieto, “librosa: Audio and music signal analysis in Python,” in Proc. of the 14th Python in Science Conference, pp. 18-24, 2015.
[38] D. Wang and X. Zhang, “THCHS-30: A free Chinese speech corpus,” arXiv preprint arXiv:1512.01882, 2015.
[39] ITU-T Recommendation P.862, Perceptual Evaluation of Speech Quality (PESQ): An Objective Method for End-to-End Speech Quality Assessment of Narrow-band Telephone Networks and Speech Codecs, ITU-T, Jan. 2002.