Title Page
Abstract (in Korean)
Abstract
Contents
Chapter 1. Introduction 9
Chapter 2. Background 11
2.1. Deep Neural Networks 11
2.1.1. Layers 11
2.1.2. Weights 11
2.1.3. Parameters 12
2.2. Side-channel Attacks 15
2.2.1. Timing side-channel attacks 15
2.2.2. Cache side-channel attacks 15
2.2.3. FLUSH+RELOAD 16
2.2.4. TLB side-channel attacks 17
2.3. Cache Side-channel Attacks on DNNs 17
2.3.1. Cache Telepathy 18
2.3.2. DeepRecon 18
Chapter 3. Motivation 20
3.1. Threat Model 20
3.2. DNN Reconstruction 20
3.2.1. Layer sequence 20
3.2.2. Layer dimension 21
Chapter 4. Mitigation Techniques 24
4.1. Decoy Process 24
4.2. Input Batching 24
4.3. Network Partitioning 25
4.4. Computation Pairing 27
Chapter 5. DNN Obfuscator 32
5.1. Dimension Obfuscation 32
5.2. Less Aggressive Obfuscation 33
5.3. Optimization 33
Chapter 6. Evaluation 39
6.1. Experimental Setup 39
6.2. Experimental Results 39
6.2.1. Latency 39
6.2.2. Memory usage 40
6.2.3. Side-channel vulnerability metrics 40
6.3. Limitations 42
Chapter 7. Related Works 44
Chapter 8. Conclusion 45
Bibliography 46
Curriculum Vitae in Korean 50
Table 2.1. Parameters of fully-connected layer and convolution layer. 12
Table 4.1. Summary of mitigation techniques. Target indicates whether the mitigation technique tries to conceal the information or not. Effective indicates whether the target information is effectively concealed... 30
Table 6.1. Configurations of the generated DNNs. 'M' and 'U' indicate modified and unmodified, respectively. The Caffe framework is modified to apply the ReLU and pooling layer optimization for obfuscated networks. All... 40
Table 6.2. System configurations for the evaluation. The DNN inference process and the side-channel attacker share the LLC. 40
Table 6.3. Cache Side-channel Vulnerability (CSV) metric for three types of VGG-16 networks. Less-obfuscated VGG-16 shows a higher CSV than fully-obfuscated VGG-16, but it is still low compared to... 43
Figure 2.1. (Top) Fully-connected layers. All neurons in one layer are connected to other neurons in adjacent layers. (Middle) Convolution layers. A neuron is connected to only a few local neurons in the... 13
Figure 2.2. (Top) Weights of fully-connected layer. There are corresponding weight values for all connections between adjacent layers. (Bottom) Filter of convolution layer. A filter moves left-to-right... 14
Figure 3.1. Threat model of the reproduced attacks on DNNs. The attacker and the victim use the same machine learning framework and linear algebra libraries. Also, they run on the same processor... 21
Figure 3.2. Trace of the victim's accesses to the layers. Only records below 100 cycles are regarded as loads from the cache hierarchy. The entire layer sequence of VGG-16 is recovered. Dropout layers are omitted for simplicity. 22
Figure 3.3. (Top) Trace of GEMM for the first convolution layer of VGG-16. Victim's accesses to itcopy, oncopy and kernel are presented. (Middle) Detailed view of GEMM trace for the first convolution layer... 23
Figure 4.1. Trace of the layer sequence while a decoy process is running. Accesses to the convolution layer and ReLU layer from the decoy process are traced periodically. These additional traces make it difficult for the attacker... 25
Figure 4.2. (Top) Trace of GEMM for the first convolution layer of AlexNet without batching. There is exactly one trace of GEMM for the convolution layer with batch size 1. (Bottom) Trace of GEMM for the... 26
Figure 4.3. Diagram of network partitioning. (Top) Original architecture of VGG-16 with batched input. (Bottom) Partitioned VGG-16 and partitioned input batches. Two half-batches are forwarded... 27
Figure 4.4. Trace of the victim's accesses to the layers of partitioned VGG-16. Since the input batch is halved, the layer trace is twice as long as the trace for the original network. 28
Figure 4.5. Trace of GEMM for the first convolution layer of partitioned VGG-16. Although the dimension of the convolution layer is halved, the GEMM trace is unchanged. 28
Figure 4.6. Trace of the victim's accesses to the layers of VGG-16 when the ReLU-and-Pool-always scheme is applied. Since every ReLU layer is followed by a pooling layer, it is no longer obvious how to distinguish each... 30
Figure 4.7. (Top) GEMM trace for the first convolution layer of VGG-16 when itcopy-and-oncopy-always scheme is applied. (Middle) Detailed view of GEMM trace for the first convolution layer of... 31
Figure 5.1. (Top) Dimensions of the convolution layers and fully-connected layers of VGG-16. As the data is forwarded, the height and width are reduced and the number of channels is increased. (Bottom)... 35
Figure 5.2. Corresponding filter dimension modification when the dimension of the first convolution layer of VGG-16 is obfuscated. All additional filters are zero-valued so as not to affect the accuracy of the... 36
Figure 5.3. Corresponding filter dimension modification when not only the number of channels of the layer but also the height and width of the layer are obfuscated. Here again, all additional filters, as well... 37
Figure 5.4. Dimension-obfuscated convolution layers and fully-connected layers of VGG-16. Convolution layers reveal only two shapes, (128 x 224 x 224) and (512 x 56 x 56), and fully-connected... 38
Figure 5.5. (Left) ReLU optimization removes unnecessary computations in enlarged ReLU layers. (Right) Pooling optimization introduces a crop layer before pooling to remove unnecessary computations... 38
Figure 6.1. Normalized execution time for five types of VGG-16 inference. The overhead of less-obfuscated VGG-16 is significantly reduced compared to fully-obfuscated VGG-16. ReLU and pooling layer optimization... 41
Figure 6.2. (Top) Memory usage for each layer of VGG-16. Convolution layers occupy the majority of memory among all layers. Fully-connected layers occupy little memory compared to convolution... 42
As deep neural networks, one class of machine learning algorithms, have come to solve many complex problems with strong performance, the demand for them has grown. Because DNN inference is extremely compute-intensive, DNNs are offered to users as Machine Learning as a Service, and the service provider takes on the job of designing and analyzing well-performing DNNs for its users. A well-performing DNN therefore carries high commercial value. For this reason, recent work has used cache side-channel attacks, which are feasible in cloud environments, to extract the architectural information of DNNs. This thesis introduces several mitigation techniques against such attacks and analyzes their effectiveness. It then proposes an obfuscator that hides the dimension of each layer, one piece of a DNN's architectural information. The obfuscator hides the real dimensions from the attacker by making the dimensions of all layers identical. Finally, the obfuscator is optimized without significantly lowering the degree of obfuscation, and the inference latency, memory usage, and side-channel vulnerability metric of the obfuscated DNN are evaluated.
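To make the dimension obfuscation idea concrete, the sketch below is a minimal NumPy illustration, not the modified-Caffe implementation used in the thesis; the input size, channel counts, and the unified target of 64 channels are hypothetical. It shows that padding a convolution layer's filter bank with extra zero-valued filters enlarges the layer's apparent output dimension while the original output channels are computed unchanged, so accuracy is unaffected.

```python
import numpy as np

def conv2d(x, filters):
    """Naive 'valid' convolution. x: (C_in, H, W), filters: (C_out, C_in, K, K)."""
    c_out, _, k, _ = filters.shape
    _, h, w = x.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for o in range(c_out):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * filters[o])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))        # small input: 3 channels, 8x8 (hypothetical)
w = rng.standard_normal((16, 3, 3, 3))    # real layer: 16 output channels, 3x3 filters

# Obfuscation sketch: pad the filter bank with zero-valued filters so the layer
# appears to produce 64 output channels instead of 16.
target_channels = 64                      # hypothetical unified channel count
w_obf = np.zeros((target_channels, 3, 3, 3))
w_obf[:16] = w

y = conv2d(x, w)
y_obf = conv2d(x, w_obf)

# The first 16 output channels are identical and the padded channels are all
# zero, so downstream results are unaffected while the observable GEMM
# dimension (the number of output channels) grows.
assert np.allclose(y, y_obf[:16])
assert np.all(y_obf[16:] == 0)
print("obfuscated shape:", y_obf.shape, "vs real shape:", y.shape)
```

In the full scheme, the next layer's filters would also be padded with matching zero-valued input channels (Figure 5.2), and the height and width of a layer can be enlarged in the same spirit (Figure 5.3), so that all layers expose only a few shapes to an observer of the GEMM calls.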