
Table of Contents

Title Page

Abstract

Contents

국문요약 (Korean Abstract) 14

Chapter 1. Introduction 16

1.1. Intelligent musical fountain authoring system 16

1.2. Affective music navigation system 18

1.3. Affective music icon system 19

1.4. Contributions 20

1.5. References 21

1.6. Organization 24

Chapter 2. Background and Previous Work 25

2.1. Previous Approaches for musical fountain authoring system 25

2.2. Previous Approaches for affective music information retrieval 25

2.3. Previous Approaches for music navigation system 27

2.4. Previous Approaches for music icon system 27

Chapter 3. Intelligent musical fountain authoring system 29

3.1. System overview 29

3.2. Designing a fountain 30

3.3. Automatic generation of scenarios 33

3.3.1. Fountain mechanism 33

3.3.2. Detecting onsets 34

3.3.3. Calculating probabilities of operation from sample scenarios 38

3.3.4. Constructing the Bayesian network 39

3.3.5. Moving nozzles and lights 41

3.3.6. Example-based approach 41

3.3.7. Editing scenarios manually 49

3.4. Visual 3D simulation 50

3.4.1. Particle dynamic engine 50

3.4.2. GPU Acceleration 50

3.4.3. Visualization system 55

3.5. Controlling fountain hardware 56

3.5.1. Message Queuing 57

3.6. High-quality offline rendering 58

3.6.1. NPR Rendering 59

3.7. Interactivity 60

3.7.1. Fountain Intensity 61

3.7.2. Image input 63

3.7.3. Multi-media input 65

3.7.4. Real-time music input 66

3.8. Commercialization and evaluation 66

3.9. Conclusions 69

Chapter 4. Affective music visualization system 70

4.1. Music emotion recognition system 70

4.1.1. Thayer's 2D emotion model 70

4.1.2. Listening Tests 71

4.1.3. Feature extraction 73

4.1.4. Mapping music into the AV plane 74

4.1.5. Analysis 75

4.2. Affective music navigation system 76

4.2.1. Music visualization in the AV plane 76

4.2.2. Real-time music listening 77

4.2.3. Music playlist generation 81

4.2.4. Experimental Results 84

4.2.5. Conclusions 86

4.3. Affective music icon system 87

4.3.1. Harmonograph 87

4.3.2. Mapping arousal 88

4.3.3. Mapping valence 89

4.3.4. An experiment on arousal 91

4.3.5. An experiment on valence 93

4.3.6. Emotion and color 95

4.3.7. Experimental results 95

4.3.8. Conclusions 100

Chapter 5. Conclusions 101

References 102

List of Tables

Table 3.1. Relative degrees and selection probabilities between pump A and the other pumps (B~E). 48

Table 3.2. The speed of particle simulation using the GPU-based method. 55

Table 4.1. The results of the experiments on the two questions. 85

List of Figures

Fig. 3.1. System overview. 30

Fig. 3.2. Fountain design interface: 2D input; 3D simulation; and a window for setting parameters. 31

Fig. 3.3. Parameters used in nozzle design. 32

Fig. 3.4. Mechanics of a fountain: (a) pump-based control; (b) solenoid-based control. 34

Fig. 3.5. Diagrammatic representation of a Fourier transform. 36

Fig. 3.6. Onset detection: (a) original audio signal; (b) spectral flux of the original audio signal; (c) peak picking with w=100; and (d) peak picking with w=150. 37

Fig. 3.7. Two assumptions made in the scenario generation process: that jet shapes and audio volume are closely related at (a) low volume, and at (b) high volume; and that jets that (c) combine less attractively tend to operate together less often than (d) those which combine more... 39

Fig. 3.8. A Bayesian network for generating musical fountain scenarios automatically. This figure only shows the links that affect the operation of control unit 1. 41

Fig. 3.9. Fountain Scenario Analysis Tool. 42

Fig. 3.10. An example scenario in the example-based approach: (a) denotes a scenario element; (b) indicates a scenario segment. 44

Fig. 3.11. An example of automatic scenario generation in the example-based approach. 47

Fig. 3.12. The timeline interface: the user can edit the operation and movement of nozzles and turn lights on and off and change their colors. 49

Fig. 3.13. The structure of GPU processing implemented in the system. 52

Fig. 3.14. A screenshot of the iMFAS system. The blocks on the scenario tracks at the bottom-left of the screen show the intervals during which particular control units are activated. Editing these blocks changes the scenario. The uppermost track on the timeline shows the amplitude of... 56

Fig. 3.15. A screenshot of the demonstration of the system in the real fountain. 57

Fig. 3.16. A frame from a video generated offline. 59

Fig. 3.17. NPR version of the offline video. 60

Fig. 3.18. A prototype of the image-based fountain control system. The top-left screen shows the input image from the camera, the bottom-left screen shows the frame difference between the current and previous images, and the right screen shows the simulation of... 64

Fig. 3.19. A special pattern used to recognize the movement of the object. 65

Fig. 3.20. New configuration system with which the user can configure any fountain system more easily. 67

Fig. 3.21. New interface of iMFAS. 67

Fig. 3.22. Image captured at the Yangsan Fountain, where our system is being tested. 68

Fig. 4.1. Some moods presented in Thayer's 2D emotion model. 71

Fig. 4.2. Divided regions in the AV plane. 72

Fig. 4.3. Modified version of Lang's survey method. 72

Fig. 4.4. Psysound interface executed in Matlab. 73

Fig. 4.5. Plot of the arousal and valence values of the training music. The horizontal axis shows the valence values and the vertical axis the arousal values. 74

Fig. 4.6. Two modes of the interface: (a) gray model and (b) islands of music. 77

Fig. 4.7. An interface for selecting multiple songs. All songs inside the purple ellipse are played, with the volume adjusted according to the distance between the center point and each song's point; the transparency of the title text is adjusted by the same distance. 79

Fig. 4.8. Selection with sphere interface. 82

Fig. 4.9. Selection with line stroke interface. 83

Fig. 4.10. Prototype of the music exploration system for mobile environments. 84

Fig. 4.11. An example of a harmonograph. 87

Fig. 4.12. Images generated by successive reductions in the ratio between f₁ and f₂ in harmonograph equations. 89

Fig. 4.13. Images generated by successive reductions in the ratio between A₁ and A₂ in the harmonograph equations. 90

Fig. 4.14. All images generated by the proposed harmonograph equations. 91

Fig. 4.15. Images assumed to be related to arousal. 92

Fig. 4.16. Examples of the experiment on arousal. 92

Fig. 4.17. Results of the experiment on arousal. 93

Fig. 4.18. Images assumed to be related to valence. 94

Fig. 4.19. Examples of the experiment on valence. 94

Fig. 4.20. Results of the experiment on valence. The range of this figure is the same as that of Fig. 4.17. 95

Fig. 4.21. Interface of the music exploration system (top-left: folder-viewing window; bottom-left: playlist window; right: file-viewing window). 96

Fig. 4.22. Times for generating a playlist using the two sets of icons. 97

Fig. 4.23. Arousal and valence values of songs selected using (a) regular and (b) affective music icons. 98

Fig. 4.24. Relative times for generating playlists with the two types of icons (normalized to the generic icons). 99

Fig. 4.25. Arousal and valence values of songs selected using (a) generic and (b) affective icons, from an unfamiliar music collection. 100