Cover Page
Letter of Submittal
Report Abstract
Summary
SUMMARY
Table of Contents
CONTENTS 31
Chapter 1. Overview of the R&D Project 76
Chapter 2. Algorithm Development by Output Product 78
Section 1. Overview 78
Section 2. Infrared Radiative Transfer Model 87
1. Algorithm configuration information 87
1.1. Algorithm name 87
1.2. Algorithm version 87
2. Algorithm functional specification 87
2.1. Product overview 87
2.2. Algorithm concept 88
2.3. Theoretical background of the algorithm 88
2.4. Detailed algorithm description 91
3. Module development 94
3.1. Development concept 94
3.2. Input/output data 94
3.3. Module structure 95
3.4. Module execution results 99
3.5. Improvements by stage 104
4. Future operation and improvement 105
4.1. Initial operation plan 105
5. Development notes 105
Section 3. Cloud Detection 106
1. Algorithm configuration information 106
1.1. Algorithm name 106
1.2. Algorithm version 106
2. Algorithm functional specification 106
2.1. Product overview 106
2.2. Algorithm concept 107
2.3. Theoretical background of the algorithm 108
2.4. Detailed algorithm description 109
3. Module development 119
3.1. Development concept 119
3.2. Input/output data 120
3.3. Module structure 123
3.4. Module execution results 129
3.5. Validation methods 131
3.6. Improvements by stage 137
4. Future operation and improvement 137
4.1. Initial operation plan 137
4.2. Management and analysis S/W 138
Section 4. Clear-Sky Radiance 139
1. Algorithm configuration information 139
1.1. Algorithm name 139
1.2. Algorithm version 139
2. Algorithm functional specification 139
2.1. Product overview 139
2.2. Algorithm concept and theoretical background 139
2.3. Detailed algorithm description 140
3. Module development 141
3.1. Development concept 141
3.2. Input/output data 142
3.3. Module structure 145
3.4. Module execution results 149
3.5. Improvements by stage 150
4. Future operation and improvement 151
Section 5. Atmospheric Motion Vectors 152
1. Algorithm configuration information 152
1.1. Algorithm name 152
1.2. Algorithm version 152
2. Algorithm functional specification 152
2.1. Product overview 152
2.2. Algorithm concept 152
2.3. Theoretical background of the algorithm 153
2.4. Detailed algorithm description 155
3. Module development 165
3.1. Development concept 165
3.2. Input/output data 165
3.3. Module structure 167
3.4. Module execution results 171
3.5. Validation methods 172
3.6. Improvements by stage 175
4. Future operation and improvement 176
4.1. Initial operation plan 176
Section 6. Sea Surface Temperature 177
1. Algorithm configuration information 177
1.1. Algorithm name 177
1.2. Algorithm version 177
2. Algorithm functional specification 177
2.1. Product overview 177
2.2. Algorithm concept 178
2.3. Theoretical background of the algorithm 178
2.4. Detailed algorithm description 181
3. Module development 186
3.1. Development concept 186
3.2. Input/output data 187
3.3. Module structure 189
3.4. Module execution results 199
3.5. Validation methods 201
3.6. Improvements by stage 202
4. Future operation and improvement 226
4.1. Initial operation plan 226
4.2. Management and analysis S/W 228
5. Development notes 238
5.1. Cloud handling issues in SST computation 238
5.2. Issues in producing the SST composite field 238
5.3. System specifications 238
Section 7. Land Surface Temperature 239
1. Algorithm configuration information 239
1.1. Algorithm name 239
1.2. Algorithm version 239
2. Algorithm functional specification 239
2.1. Product overview 239
2.2. Algorithm concept 240
2.3. Theoretical background of the algorithm 242
2.4. Detailed algorithm description 246
3. Module development 258
3.1. Development concept 258
3.2. Input/output data 259
3.3. Module structure 262
3.4. Module execution results 271
3.5. Validation methods 271
3.6. Improvements by stage 281
4. Future operation and improvement 286
4.1. Initial operation plan 286
4.2. Management and analysis S/W 288
5. Development notes 292
5.1. Brightness temperature difference (BTD) of the MTSAT-1R thermal infrared channels 292
5.2. Development of the land surface temperature retrieval algorithm 293
5.3. Background data production 293
Section 8. Sea Ice/Snow Detection 294
1. Algorithm configuration information 294
1.1. Algorithm name 294
1.2. Algorithm version 294
2. Algorithm functional specification 294
2.1. Product overview 294
2.2. Algorithm concept 295
2.3. Theoretical background of the algorithm 296
2.4. Detailed algorithm description 297
3. Module development 308
3.1. Development concept 308
3.2. Input/output data 309
3.3. Module structure 310
3.4. Module execution results 312
3.5. Validation methods 313
3.6. Improvements by stage 317
4. Future operation and improvement 317
4.1. Initial operation plan 317
4.2. Management and analysis S/W 317
5. Development notes 318
Section 9. Surface Insolation 319
1. Algorithm configuration information 319
1.1. Algorithm name 319
1.2. Algorithm version 319
2. Algorithm functional specification 319
2.1. Product overview 319
2.2. Algorithm concept 320
2.3. Theoretical background of the algorithm 321
2.4. Detailed algorithm description 322
3. Module development 334
3.1. Development concept 334
3.2. Input/output data 335
3.3. Module structure 338
3.4. Module execution results 342
3.5. Validation methods 345
3.6. Improvements by stage 354
4. Future operation and improvement 355
4.1. Initial operation plan 355
4.2. Management and analysis S/W 358
5. Development notes 362
5.1. Issues in improving input data for surface insolation retrieval 362
Section 10. Upper Tropospheric Humidity 363
1. Algorithm configuration information 363
1.1. Algorithm name 363
2. Algorithm functional specification 363
2.1. Product overview 363
2.2. Algorithm concept 363
2.3. Theoretical background of the algorithm 364
2.4. Detailed algorithm description 365
3. Module development 373
3.1. Development concept 373
3.2. Input/output data 373
3.3. Module structure 375
3.4. Module execution results 378
3.5. Validation methods 378
3.6. Improvements by stage 382
4. Future operation and improvement 383
4.1. Initial operation plan 383
4.2. Management and analysis S/W 384
5. Development notes 386
Section 11. Total Precipitable Water 387
1. Algorithm configuration information 387
1.1. Algorithm name 387
1.2. Algorithm version 387
2. Algorithm functional specification 387
2.1. Product overview 387
2.2. Algorithm concept 388
2.3. Theoretical background of the algorithm 390
2.4. Detailed algorithm description 391
3. Module development 395
3.1. Development concept 395
3.2. Input/output data 395
3.3. Module structure 397
3.4. Module execution results 400
3.5. Validation methods 401
3.6. Improvements by stage 404
4. Future operation and improvement 406
4.1. Initial operation plan 406
4.2. Management and analysis S/W 408
5. Development notes 409
Section 12. Cloud Analysis 410
1. Algorithm configuration information 410
1.1. Algorithm name 410
1.2. Algorithm version 410
2. Algorithm functional specification 410
2.1. Product overview 410
2.2. Algorithm concept 410
2.3. Theoretical background of the algorithm 411
2.4. Detailed algorithm description 412
3. Module development 414
3.1. Development concept 414
3.2. Input/output data 415
3.3. Module structure 416
3.4. Module execution results 419
3.5. Validation methods 419
3.6. Improvements by stage 421
4. Future operation and improvement 422
4.1. Initial operation plan 422
4.2. Management and analysis S/W 422
5. Development notes 422
Section 12-1. Cloud Type 424
1. Algorithm configuration information 424
1.1. Algorithm name 424
1.2. Algorithm version 424
2. Algorithm functional specification 424
2.1. Product overview 424
2.2. Algorithm concept 424
2.3. Theoretical background of the algorithm 425
2.4. Detailed algorithm description 427
3. Module development 429
3.1. Development concept 429
3.2. Input/output data 430
3.3. Module structure 434
3.4. Module execution results 437
3.5. Validation methods 438
3.6. Improvements by stage 441
4. Future operation and improvement 442
4.1. Initial operation plan 442
Section 12-2. Cloud Amount 444
1. Algorithm configuration information 444
1.1. Algorithm name 444
1.2. Algorithm version 444
2. Algorithm functional specification 444
2.1. Product overview 444
2.2. Algorithm concept 444
2.3. Theoretical background of the algorithm 445
2.4. Detailed algorithm description 446
3. Module development 447
3.1. Development concept 447
3.2. Input/output data 449
3.3. Module structure 451
3.4. Module execution results 452
3.5. Validation methods 453
3.6. Improvements by stage 456
4. Future operation and improvement 456
4.1. Initial operation plan 456
4.2. Management and analysis S/W 457
Section 12-3. Cloud Phase 459
1. Algorithm configuration information 459
1.1. Algorithm name 459
1.2. Algorithm version 459
2. Algorithm functional specification 459
2.1. Product overview 459
2.2. Algorithm concept 459
2.3. Theoretical background of the algorithm 460
2.4. Detailed algorithm description 460
3. Module development 466
3.1. Development concept 466
3.2. Input/output data 467
3.3. Module structure 470
3.4. Module execution results 472
3.5. Validation methods 473
3.6. Improvements by stage 482
4. Future operation and improvement 482
4.1. Initial operation plan 483
4.2. Management and analysis S/W 483
Section 12-4. Cloud Optical Thickness 486
1. Algorithm configuration information 486
1.1. Algorithm name 486
1.2. Algorithm version 486
2. Algorithm functional specification 486
2.1. Product overview 486
2.2. Algorithm concept 486
2.3. Theoretical background of the algorithm 487
2.4. Detailed algorithm description 488
3. Module development 490
3.1. Development concept 491
3.2. Input/output data 492
3.3. Module structure 495
3.4. Module execution results 497
3.5. Validation methods 497
3.6. Improvements by stage 509
4. Future operation and improvement 510
4.1. Initial operation plan 510
4.2. Management and analysis S/W 511
Section 13. Cloud Top Temperature and Height 512
1. Algorithm configuration information 512
1.1. Algorithm name 512
1.2. Algorithm version 512
2. Algorithm functional specification 512
2.1. Product overview 512
2.2. Algorithm concept 512
2.3. Theoretical background of the algorithm 513
2.4. Detailed algorithm description 513
3. Module development 514
3.1. Development concept 514
3.2. Input/output data 515
3.3. Module structure 519
3.4. Module execution results 522
3.5. Validation methods 522
3.6. Improvements by stage 528
4. Future operation and improvement 528
4.1. Initial operation plan 528
4.2. Management and analysis S/W 529
5. Development notes 530
Section 14. Fog 531
Section 14-1. Fog 1 531
1. Algorithm configuration information 531
1.1. Algorithm name 531
1.2. Algorithm version 531
2. Algorithm functional specification 531
2.1. Product overview 531
2.2. Algorithm concept 532
2.3. Theoretical background of the algorithm 533
2.4. Detailed algorithm description 535
3. Module development 537
3.1. Development concept 537
3.2. Input/output data 538
3.3. Module structure 541
3.4. Module execution results 547
3.5. Validation methods 548
3.6. Improvements by stage 549
4. Future operation and improvement 562
4.1. Initial operation plan 562
4.2. Management and analysis S/W 563
5. Development notes 564
Section 14-2. Fog 2 567
1. Algorithm configuration information 567
1.1. Algorithm name 567
1.2. Algorithm version 567
2. Algorithm functional specification 567
2.1. Product overview 567
2.2. Algorithm concept 567
2.3. Theoretical background of the algorithm 568
2.4. Detailed algorithm description 569
3. Module development 576
3.1. Development concept 576
3.2. Input/output data 577
3.3. Module structure 578
3.4. Module execution results 579
3.5. Validation methods 583
3.6. Improvements by stage 585
4. Future operation and improvement 586
4.1. Initial operation plan 586
4.2. Management and analysis S/W 586
5. Development notes 586
Section 15. Rainfall Intensity 587
1. Algorithm configuration information 587
1.1. Algorithm name 587
1.2. Algorithm version 587
2. Algorithm functional specification 587
2.1. Product overview 587
2.2. Algorithm concept 588
2.3. Theoretical background of the algorithm 588
2.4. Detailed algorithm description 589
3. Module development 595
3.1. Development concept 595
3.2. Input/output data 595
3.3. Module structure 598
3.4. Module execution results 602
3.5. Validation methods 603
3.6. Improvements by stage 606
4. Future operation and improvement 608
4.1. Initial operation plan 608
4.2. Management and analysis S/W 608
5. Development notes 609
Section 16. Outgoing Longwave Radiation 610
1. Algorithm configuration information 610
1.1. Algorithm name 610
1.2. Algorithm version 610
2. Algorithm functional specification 610
2.1. Product overview 610
2.2. Algorithm concept 611
2.3. Theoretical background of the algorithm 612
2.4. Detailed algorithm description 614
3. Module development 616
3.1. Development concept 616
3.2. Input/output data 617
3.3. Module structure 620
3.4. Module execution results 622
3.5. Validation methods 625
3.6. Improvements by stage 627
4. Future operation and improvement 628
4.1. Initial operation plan 628
4.2. Management and analysis S/W 629
5. Development notes 629
Section 17. Aerosol Detection 631
1. Algorithm configuration information 631
1.1. Algorithm name 631
1.2. Algorithm version 631
2. Algorithm functional specification 631
2.1. Product overview 631
2.2. Algorithm concept 632
2.3. Theoretical background of the algorithm 632
2.4. Detailed algorithm description 633
3. Module development 639
3.1. Development concept 639
3.2. Input/output data 639
3.3. Module structure 640
3.4. Module execution results 643
3.5. Validation methods 645
3.6. Improvements by stage 646
4. Future operation and improvement 646
4.1. Initial operation plan 646
4.2. Management and analysis S/W 647
5. Development notes 648
5.1. Limitations of the infrared channels 648
5.2. Issues in deriving accurate background threshold values 649
Section 18. Aerosol Optical Depth 650
1. Algorithm configuration information 650
1.1. Algorithm name 650
1.2. Algorithm version 650
2. Algorithm functional specification 650
2.1. Product overview 650
2.2. Algorithm concept 651
2.3. Theoretical background of the algorithm 651
2.4. Detailed algorithm description 657
3. Module development 667
3.1. Development concept 667
3.2. Input/output data 668
3.3. Module structure 668
3.4. Module execution results 671
3.5. Validation methods 673
3.6. Improvements by stage 674
4. Future operation and improvement 679
4.1. Initial operation plan 679
4.2. Management and analysis S/W 680
5. Development notes 682
Section 19. Calibration 683
Section 19-1. Infrared Channel Radiometric Calibration 683
1. Algorithm configuration information 683
1.1. Algorithm name 683
1.2. Algorithm version 683
2. Algorithm functional specification 683
2.1. Product overview 683
2.2. Algorithm concept 684
2.3. Theoretical background of the algorithm 685
2.4. Detailed algorithm description 687
3. Module development 695
3.1. Development concept 695
3.2. Input/output data 695
3.3. Module structure 697
3.4. Module execution results 700
3.5. Improvements by stage 702
4. Future operation and improvement 705
4.1. Initial operation plan 705
4.2. Management and analysis S/W 706
5. Development notes 708
Section 19-2. Visible Channel Radiometric Calibration Using the Ocean 709
1. Algorithm configuration information 709
1.1. Algorithm name 709
1.2. Algorithm version 709
2. Algorithm functional specification 709
2.1. Product overview 709
2.2. Algorithm concept 709
2.3. Theoretical background of the algorithm 710
2.4. Detailed algorithm description 712
3. Module development 715
3.1. Development concept 715
3.2. Input/output data 716
3.3. Module structure 718
3.4. Module execution results 725
3.5. Improvements by stage 728
4. Future operation and improvement 729
4.1. Initial operation plan 729
5. Development notes 729
Section 19-3. Visible Channel Radiometric Calibration Using Deserts 730
1. Algorithm configuration information 730
1.1. Algorithm name 730
1.2. Algorithm version 730
2. Algorithm functional specification 730
2.1. Product overview 730
2.2. Algorithm concept 730
2.3. Theoretical background of the algorithm 731
2.4. Detailed algorithm description 732
3. Module development 737
3.1. Development concept 737
3.2. Input/output data 737
3.3. Module structure 738
3.4. Module execution results 741
3.5. Improvements by stage 742
4. Future operation and improvement 743
4.1. Initial operation plan 743
5. Development notes 743
Section 19-4. Visible Channel Radiometric Calibration Using Clouds 744
1. Algorithm configuration information 744
1.1. Algorithm name 744
1.2. Algorithm version 744
2. Algorithm functional specification 744
2.1. Product overview 744
2.2. Algorithm concept 744
2.3. Theoretical background of the algorithm 744
2.4. Detailed algorithm description 747
3. Module development 750
3.1. Development concept 750
3.2. Input/output data 751
3.3. Module structure 752
3.4. Module execution results 756
3.5. Improvements by stage 758
4. Future operation and improvement 758
4.1. Initial operation plan 758
5. Development notes 759
Chapter 3. Integrated Software Development and Operating Framework Establishment 760
Section 1. CMDPS Integrated Software 760
1. CMDPS overview 760
2. CMDPS structure and functions 762
2.1. CMDPS structure and interfaces 763
2.2. Functions of the CMDPS integrated software modules 764
3. CMDPS preprocessing structure and functions 766
3.1. Input data preparation 766
3.2. Preprocessing of input data 767
Section 2. Test Operation 776
1. Test operation overview 776
2. Test operation results 777
2.1. First test operation 777
2.2. Improvements during the first test operation 779
2.3. Validation results of the first test operation 779
3. Second test operation 781
3.1. Results of the second test operation 781
3.2. Validation results of the second test operation 782
4. Test operation of the National Meteorological Satellite Center meteorological data processing system 784
5. Third test operation 785
5.1. Results of the third test operation 785
5.2. Improvements from the third test operation 787
5.3. Validation results of the third test operation 789
6. Interim conclusions 795
Section 3. COMS Adaptation and Integration Tests 796
1. Development of the data processing system for COMS 796
1.1. Changes to primary input data 796
1.2. Changes to auxiliary input data 798
1.3. Changes to output data 799
1.4. Changes to the observation schedule 800
1.5. Incorporation and testing of changes before satellite launch 803
2. COMS operational system integration verification test (QO-SVR) 804
2.1. Overview 804
2.2. Results and improvements 805
Section 4. Preparation for Initial Operation 808
1. CMDPS initial operation list 808
2. CMDPS initial operation test schedule 810
Chapter 4. Achievement of Objectives and Contribution to Related Fields 812
Chapter 5. Plan for Utilizing the R&D Results 815
Chapter 6. Overseas Scientific and Technological Information Collected During the R&D 816
Section 1. Radiative Transfer Models 816
Section 2. Data 819
Chapter 7. References 823
Appendix: Major Achievements 847
Appendix I. Evidence of journal publications 847
Appendix II. Patent application and registration evidence [personal information removed] 896
Appendix: Acronyms 912
Attachments 921
Table 2.1.1. The change of the baseline product according to the change of the Meteorological payload. The asterisk refers to the experimental products. 80
Table 2.1.2. Overall organizational structures and responsibilities for the CMDPS development. 84
Table 2.1.3. List of principal algorithm investigators and research themes for the CMDPS project. 85
Table 2.1.4. List of word descriptions for the CMDPS development. 85
Table 2.2.1. Definitions of predictors for mixed gases, water vapor, and ozone. Layer j is the jth layer above level j and θ represents the satellite viewing angle. 90
Table 2.2.2. Definition of profile variables used in predictors defined in Table 2.2.1. 91
Table 2.2.3. RTTOV-7 pressure levels and temperature, water vapor, and ozone profile limits within which the transmittance calculations are valid. 93
Table 2.2.4. Detailed Input and Output data for the IR_RTM calculation. 95
Table 2.2.5. Modules for radiative transfer calculation. 98
Table 2.2.6. Statistics of the MTSAT-1R brightness temperatures from derived expansion coefficients. The results from RTTOV-7 expansion coefficients are used as a reference. 102
Table 2.3.1. The Structure of Basic Input data for the CLD algorithm. 120
Table 2.3.2. The Structure of Basic Output data for the CLD algorithm. 122
Table 2.3.3. Detailed Input and Output data for the CLD algorithm. 122
Table 2.3.4. Quality test result for the CLD algorithm. 123
Table 2.3.5. Contingency table for validation of cloud detection result. 132
Table 2.3.6. Preliminary validation results for the CMDPS cloud detection algorithm. For calculation of validation scores in this table, MODIS cloud detection output is considered as the true value. 136
Table 2.3.7. Validation result for cloud detection during CMDPS pre- and post-processing and interface development program first operation test period (Nov. 1 - 24, 2007). 136
Table 2.3.8. Validation results for cloud detection during CMDPS operation test periods. 137
Table 2.4.1. The Structure of Basic Input data for the CSR algorithm. 142
Table 2.4.2. The Structure of Basic Output data for the CSR algorithm. 144
Table 2.4.3. Detailed Input and Output data for the CSR algorithm. 144
Table 2.4.3. Continued. 145
Table 2.4.4. Quality test result for the CSR algorithm. 145
Table 2.5.1. Input data from AMV algorithm 166
Table 2.5.2. Output data for AMV algorithm 167
Table 2.5.3. Flow chart of module: CMDPS_AMV_Main 168
Table 2.6.1. SST equation formula used for the derivation of regression coefficients. The temperatures T3, T4, T5 are the brightness temperatures of channels 3.75㎛, 10.8㎛, and 12.0㎛, respectively. The notation 'secsaz' stands for secant of satellite zenith angle. 185
Table 2.6.2. The structure of basic input data for the SST algorithm. 187
Table 2.6.3. The structure of basic output data for the SST algorithm. 189
Table 2.6.4. Detailed Input and Output data for the SST algorithm. 189
Table 2.6.5. Module and contents for the SST algorithm. 192
Table 2.6.6. Module and contents for the SST collocation. 198
Table 2.6.7. Module and contents for the SST validation. 198
Table 2.6.8. Information on available oceanic in-situ surface temperatures. 202
Table 2.6.9. Split window Multi-channel SST for day and nighttime data. 205
Table 2.6.10. RMS and bias errors of split-window MCSST retrieved from the RTM simulation. 205
Table 2.6.11. Daytime and nighttime SST coefficients and errors for MTSAT-1R data. 215
Table 2.6.12. RTM-based split window Multi-channel SST coefficients for day and nighttime COMS data. 227
Table 2.6.13. RMS and bias errors of split-window MCSST retrieved from the RTM simulation. 227
Table 2.6.14. Information of collocation database on area, period, data numbers over the East Asian Seas. 229
Table 2.6.15. Information of collocation database on area, period, data numbers for the full disk region. 231
Table 2.6.16. Contents for eliminating cloudy or partly-cloudy pixels of daytime COMS data. 232
Table 2.6.17. Contents for eliminating cloudy or partly-cloudy pixels of nighttime COMS data. 233
Table 2.6.18. Daytime SST Coefficients for MTSAT-1R data over the full disk region. 235
Table 2.6.19. Nighttime SST Coefficients for MTSAT-1R data over the full disk region. 236
Table 2.6.20. SST program names and their usage for the non-operational purpose. 236
Table 2.6.21. SST variable names and their usage for the non-operational purpose. 237
Table 2.7.1. Initial conditions for MODTRAN4 simulations. 248
Table 2.7.2. Vegetation and ground types and respective values of emissivity for the 17 IGBP classes. 250
Table 2.7.3. Legend of land cover map for emissivity estimation of CMDPS. 257
Table 2.7.4. The structure of basic input data for the LST algorithm. 259
Table 2.7.5. Description of QC flag for the LST outputs. 261
Table 2.7.6. Detailed input and output data for the LST algorithm. 262
Table 2.7.7. Validation results using AWS Ta for the 2nd campaign period. 274
Table 2.7.8. Validation results of CMDPS LST using MODIS LST on 29 November, 2007. 277
Table 2.7.9. Validation results using MODIS/Terra LST for the 2nd campaign period. 278
Table 2.7.10. Same as in Table 2.7.9 except for the MODIS/Aqua. 278
Table 2.7.11. Validation results using MODIS/Terra LST. 280
Table 2.7.12. Same as in Table 2.7.11 except for the MODIS/Aqua. 281
Table 2.7.13. Change of initial conditions for MODTRAN4 simulation. 284
Table 2.8.1. Criteria used in the SSI algorithm. 303
Table 2.8.2. INDEX values for SSI result. 308
Table 2.8.3. Detailed Input and Output data for the SSI algorithm. 309
Table 2.8.4. Description of the auxiliary data for SSI validation. 314
Table 2.8.5. The collocation methods for SSI validation. 315
Table 2.9.1. Parameter used in INS process system. 328
Table 2.9.2. Insolation attenuation coefficient diagram with respect to albedo and TBB from Kawamura (1998). 330
Table 2.9.3. Cloud coefficients map from analysis according to angle and reflectance section. 333
Table 2.9.4. The Structure of Basic Input data for the INS algorithm. 335
Table 2.9.5. The Structure of Basic Output data for the INS algorithm. 337
Table 2.9.6. The flag table of Automatic Quality Control for the INS algorithm. 337
Table 2.9.7. Sub programs called by main module program for INS retrieval. 340
Table 2.9.8. Location information and fixed satellite viewing angles of 22 meteorological stations for the study. 350
Table 2.9.9. Content of the INS Quality flag. 353
Table 2.10.1. Detailed input and output data for the UTH algorithm. 374
Table 2.10.2. Specific descriptions of quality check in the UTH calculation module. 375
Table 2.10.3. Specific module descriptions of the UTH calculation program. 377
Table 2.10.4. In-orbit-test (IOT) plan for COMS pre- and post-launch. 384
Table 2.10.5. Specific procedure description of the maintenance program for UTH. 385
Table 2.11.1. Detailed input and output data for the TPW algorithm. 396
Table 2.11.2. Specific descriptions of quality check in the TPW calculation module. 397
Table 2.11.3. Specific module descriptions of the TPW calculation program. 399
Table 2.11.4. The algorithm coefficients of modified SWLR method for MTSAT-1R. 406
Table 2.11.5. In-orbit-test (IOT) plan for COMS pre- and post-launch. 407
Table 2.11.6. Specific procedure descriptions of the maintenance program for TPW. 409
Table 2.12.1. Detailed Input and Output data for the CLA algorithm. 416
Table 2.12.2. Standardized program list. 417
Table 2.12.3. QC flag for CT. 429
Table 2.12.4. Input DATA of CT. 430
Table 2.12.5. The Structure of Basic Output data for the CT algorithm. 432
Table 2.12.6. Detailed Input and Output data for the CT algorithm. 433
Table 2.12.7. Data format and quality test results for the CT algorithm. 433
Table 2.12.8. Main Module for the CT algorithm. 435
Table 2.12.9. Processing steps of CLA_Cloud_Type_Seviri module. 436
Table 2.12.10. Cloud type validation modules. 436
Table 2.12.11. Equivalence between ISCCP types and CT types 438
Table 2.12.12. Validation result of cloud type. 440
Table 2.12.13. Validation results from the cross-comparison between CT and ISCCP CT. 440
Table 2.12.14. IOT plan for Cloud Type module. 443
Table 2.12.15. Cloud top pressure and cloud bottom height corresponding to cloud type. 447
Table 2.12.16. QC flag for cloud amount. 449
Table 2.12.17. The Input data for the CA. 450
Table 2.12.18. The Structure of Basic Output data for the CA algorithm. 451
Table 2.12.19. Main Module for the CA algorithm. 451
Table 2.12.20. Validation results of CF and CA. 454
Table 2.12.21. IOT plan for cloud amount module. 457
Table 2.12.22. The criteria for determining cloud phase. 466
Table 2.12.23. QC flag for CP 466
Table 2.12.24. The Structure of Basic Input data for the CP algorithm. 467
Table 2.12.25. The Structure of Basic Output data for the CP algorithm. 468
Table 2.12.26. Data format and quality test results for the CP algorithm. 469
Table 2.12.27. Detailed Input and Output data for the CP algorithm. 470
Table 2.12.28. Main Module for the CP algorithm. 470
Table 2.12.29. The criteria for determining cloud phase. 471
Table 2.12.30. Cloud phase validation modules. 471
Table 2.12.31. Definitions of terms used in this analysis. 475
Table 2.12.32. Validation result of cloud phase. 477
Table 2.12.33. Comparison of cloud phases from the JAMI/MTSAT-1R and the MODIS algorithm in August 2006. The numbers indicate the percentage of cloud phase (water/ice/mixed/uncertain) over the total cloud fraction. All results are calculated in the FOV of JAMI. 480
Table 2.12.34. Comparison of cloud phase from the MODIS IR trispectral algorithm and from the algorithm for the COMS. The numbers (in parentheses) designate those from the algorithm from which TB6.7 is excluded (included). 482
Table 2.12.35. IOT plan for cloud phase module. 483
Table 2.12.36. The criteria for determining cloud phase. 485
Table 2.12.37. Lookup table for COT&ER algorithm. 488
Table 2.12.38. QC Flag for cloud optical thickness. 492
Table 2.12.39. The Structure of basic input data for the COT algorithm. 493
Table 2.12.40. The Structure of basic output data for the COT algorithm. 495
Table 2.12.41. Detailed Input and Output data for the COT algorithm. 495
Table 2.12.42. Main Module for the COT algorithm. 496
Table 2.12.43. Definitions of terms used in this analysis. 499
Table 2.12.44. Validation results of COT. 502
Table 2.12.45. IOT plan for cloud optical thickness module. 511
Table 2.13.1. QC flag for CTT module. 515
Table 2.13.2. The Structure of Basic Input data for the CTTH algorithm. 516
Table 2.13.3. The Structure of Basic Output data for the CTTH algorithm. 517
Table 2.13.4. Detailed Input and Output data for the CTTH algorithm. 518
Table 2.13.5. Main Module for the CTT algorithm. 520
Table 2.13.6. Validation results of CTP. 523
Table 2.13.7. IOT plan for CTT module. 529
Table 2.14.1. Upper and lower values of the MTSAT-1R T3.7-11 (i.e., difference in brightness temperature between 3.7 ㎛ and 11 ㎛) threshold (Th3.7-11) for seasonal day/night fog detection over the globe. The values of MTSAT-1R R0.65 (i.e., reflectance at 0.65 ㎛) threshold (Th0.65) are... 536
Table 2.14.2. Detailed input and output data for the FOG algorithm. 538
Table 2.14.3. The structure of basic input data for the FOG algorithm. 539
Table 2.14.4. The structure of basic output data for the FOG algorithm. 540
Table 2.14.5. Main module for the FOG algorithm. 541
Table 2.14.6. Variable definition in the LUT of the fog algorithm. 545
Table 2.14.7. Flag definition in the fog algorithm. 546
Table 2.14.8. The fog_ct_flag (i.e., cloud type) defined in the SEVIRI. 546
Table 2.14.9. Contingency table for verification of fog detection. 549
Table 2.14.10. The names of the meteorological stations over the Korean Peninsula used for the fog analysis over the land and sea, respectively. The smaller the station number, the more fog occurrences. The height (H) values above sea level are also given in the table. 552
Table 2.14.11. Contingency tables of three kinds of MTSAT-1R data (daytime R0.68, daytime T3.7-11, nighttime T3.7-11) for twilight/dawn fog detection under the condition without higher clouds above the fog layer over the 52 meteorological stations of the Korean... 553
Table 2.14.12. Contingency tables of three kinds of MTSAT-1R data (Daytime R0.68, Daytime T3.7-11, Nighttime T3.7-11) for fog detection under the condition without higher clouds above the fog layer over 52 meteorological stations of the Korean Peninsula during the period from March 2006 to February 2007, and... 555
Table 2.14.13. Same as in Table 2.14.12 except for the condition with higher clouds above the fog layer. 555
Table 2.14.14. The MODIS input data of the LUT for improving nighttime fog sensing in this study. Seven fog cases during night were available for the period from March 2006 to February 2007 under almost simultaneous observations of MTSAT-1R and MODIS. The input data are solar zenith angle (SZA), viewing... 556
Table 2.14.15. Improved fog detection during night, based on the correction of T3.7-11 (i.e., △ T3.7-11) from the LUT. Seven fog cases were available during the period of March 2006 to February 2007 under almost simultaneous observations of the satellite data of MTSAT-1R and MODIS. The letters of 'A' and 'T' in the... 556
Table 2.14.16. Validation of CMDPS fog products using DPM ver 2.3 and the revised version, respectively, based on GTS observations. 560
Table 2.14.17. Validation of CMDPS fog products using the revised ver 2.3, based on GTS observations. 560
Table 2.14.18. Validation of fog detection on April 29, 08, 5:33 am LST, 8:33 pm LST, 11:33 pm LST and Jun 13, 08, 2:33 am LST. 561
Table 2.14.19. Validation of fog detection on April 29, 2008, 11:33 UTC. Nighttime input information of COT and CRE for the Look-Up Table(LUT) was replaced by daytime one at 06:33 UTC, based on the time sensitivity test for three cases of 04:33, 06:33, and 08:33 UTC. 562
Table 2.14.20. Contingency tables of three kinds of MTSAT-1R data (daytime R0.65, daytime T3.7-11, nighttime T3.7-11) for twilight/dawn fog detection under the condition without higher clouds above the fog layer over the 52 meteorological stations of the Korean Peninsula during... 565
Table 2.14.21. Input data for the FOG2 Algorithm. 570
Table 2.14.22. Criteria used in the FOG2 algorithm. 571
Table 2.14.23. INDEX values for FOG2 result. 576
Table 2.14.24. QC parameters using bit method. 576
Table 2.14.25. Detailed Input and Output data for the SSI algorithm. 577
Table 2.14.26. Description of the auxiliary data for FOG2 validation. 584
Table 2.14.27. The collocation methods for FOG2 validation. 584
Table 2.15.1. The Structure of basic input data for the RI algorithm. 596
Table 2.15.2. The Structure of Basic Output data for the RI algorithm. 597
Table 2.15.3. Detailed Input and Output data for the RI algorithm. 597
Table 2.15.4. Quality flag for the CMDPS rainfall intensity. 598
Table 2.15.5. Binary category contingency table for validation of CMDPS rainfall intensity. 605
Table 2.15.6. Multi Category contingency table for validation of CMDPS rainfall intensity. 605
Table 2.16.1. The three methods for retrieving OLR from narrow-band radiance observations. 612
Table 2.16.2. The Structure of Basic Input data for the OLR algorithm. 617
Table 2.16.3. The Structure of Basic Output data for the OLR algorithm. 618
Table 2.16.4. Detailed Input and Output data for the OLR algorithm. 619
Table 2.16.5. Standardized program lists. 620
Table 2.16.6. Validation results of OLR. 627
Table 2.17.1. The score test used for the evaluation. Here FAR is False Alarm Ratio and POFD is Probability Of False Detection. 645
Table 2.17.2. The results from the score test. 646
Table 2.18.1. The summary of AOD algorithms (Jong Min Yoon et al., 2007). 652
Table 2.18.2. Sources of uncertainty in the derived AOD from satellite observation. 656
Table 2.18.3. The local information about the AERONET sites used (Anmyon, Beijing, and Shirahama). 660
Table 2.18.4. Seasonal and local means of the aerosol properties at 0.5 ㎛. HAOD indicates the Heavy-Aerosol Optical Depth value, and n and k denote the real and imaginary parts of the refractive index. AOD cases are estimated from the total data, and HAOD cases are heavy-AOD (AOD ≥ mean AOD) data. 661
Table 2.18.5. The seasonal mean wind vector from NCEP reanalysis data during the period 2000 to 2007. u is the zonal component of the wind vector and v is the meridional component. 664
Table 2.18.6. Detailed input and output data for AOD algorithm. 668
Table 2.18.7. The validation results of AODs retrieved from MTSAT-1R against the values from MOD04_L2 (MODIS Terra Level 2). Observations are from March and April 2006. 677
Table 2.18.8. The validation results of AODs retrieved from MTSAT-1R against the values from MYD04_L2 (MODIS Aqua Level 2). Observations are from March and April 2006. 677
Table 2.18.9. Validation results of AODs retrieved from MTSAT-1R against those from MODIS. Observations are from March 2006. 678
Table 2.18.10. Validation results of AODs retrieved from MTSAT-1R against those from MODIS when more than 70% of the AODs retrieved from MTSAT-1R or MODIS are larger than 0.5. Observations are from March 2006. 678
Table 2.19.1. Coefficients for transfer functions over the ocean obtained from 4 IR channels of MTSAT-1R and Terra/MODIS. 690
Table 2.19.2. Same as Table 2.19.1 except over the land. 690
Table 2.19.3. Inputs of GEO satellite 696
Table 2.19.4. The specification of Terra/MODIS 4 IR channels. 696
Table 2.19.5. Inputs of LEO (Terra/MODIS). 697
Table 2.19.6. Preparations for COMS IRCAL algorithm during the IOT period. 706
Table 2.19.7. Location of each candidate calibration target. 714
Table 2.19.8. The structure of input and output for the VISCO algorithm. 716
Table 2.19.9. Coefficient for the polynomial of black sky and white sky albedo. 732
Table 2.19.10. Detailed Input and Output data for the VISCD algorithm. t1 is number of observations for COMS and t2 is for MODIS. 738
Table 2.19.11. Detailed Input data for the VISCC algorithm. t1 is number of observations for COMS and t2 is for MODIS. 751
Table 2.19.12. Detailed Output data for the VISCC algorithm. 752
Table 3.2.1. Configuration of the servers for CMDPS integrated software development and test operation 776
Table 3.2.2. Observation times and counts of the MTSAT-1R satellite data used in the test operation 777
Table 3.2.3. Initial and revised resolutions of the full-disk and hemispheric data 778
Table 3.2.4. Data sizes of the test operation input data and products 778
Table 3.2.5. Improvements to product codes during the test operation and times they were applied 779
Table 3.2.6. Validation results by product for the first test operation 780
Table 3.2.7. Execution times of DPM and AMV 781
Table 3.2.8. AMV execution time by channel 782
Table 3.2.9. Validation results by product for the second test operation 783
Table 3.2.10. Transfer schedule and tasks for the CMDPS integrated software at the National Meteorological Satellite Center 784
Table 3.2.11. Execution times of the CMDPS integrated software DPM batch mode in the second and third test operations 785
Table 3.2.12. Comparison of input/output data sizes by CMDPS product 786
Table 3.2.13. CMDPS DPM execution rates in the third test operation 787
Table 3.2.14. CMDPS DPM execution rates in the third test operation 787
Table 3.2.15. DPM and VAM code modifications 789
Table 3.2.16. Accuracy targets by CMDPS product 790
Table 3.2.17. Validation results of the third test operation and of special cases (FOG, AOD, AI) 791
Table 3.3.1. Items to be changed in developing the data processing system for COMS 796
Table 3.3.2. List of auxiliary input data requiring regeneration 799
Table 3.3.3. Content to be modified when changing output data 799
Table 3.3.4. Items included in the map projection information 800
Table 3.3.5. Content to be modified when changing the observation schedule 801
Table 3.3.6. Part of the observation scenario 802
Table 3.3.7. List of changes before satellite launch 803
Table 3.3.8. Test execution information for the CMDPS data processing servers installed at the National Meteorological Satellite Center 804
Table 3.3.9. QO-SVR test schedule and average execution rates 805
Table 3.3.10. Error analysis of main test D2 806
Table 3.3.11. Results of the CMDPS concurrent execution test 806
Table 3.4.1. List of the planned CMDPS IOT tests 809
Table 3.4.2. Schedule for planned CMDPS IOT tests in consideration of related MI, IRCM and INSRM IOT plans. 810
Table 6.1.1. The list of radiative transfer models acquired during the development. 817
Table 6.2.1. Distribution of the number of collocated data between NOAA satellite data and oceanic in-situ measurements for the corresponding months from 1994 to 2003. The months are arbitrarily selected. 821
Fig. 2.1.1. Milestones for the CMDPS development. 83
Fig. 2.1.2. S/W development document according to the milestones. 83
Fig. 2.2.1. Flowchart for the generation and application of expansion coefficients. 92
Fig. 2.2.2. Module structure for COMS channel transmittances. 96
Fig. 2.2.3. Module structure for predictor calculation and expansion coefficient production. 96
Fig. 2.2.4. Module structure of radiative transfer calculation. 98
Fig. 2.2.5. Meteosat-8/SEVIRI response function. Black dots show high spectral response function and red crosses show low spectral response function used in RTTOV-7. 99
Fig. 2.2.6. Meteosat-8/SEVIRI brightness temperature differences between derived expansion coefficients and RTTOV-7 expansion coefficients. Upper panel is the result for high spectral response function and lower panel is for low spectral response function (same as RTTOV-7). 100
Fig. 2.2.7. MTSAT-1R brightness temperature differences between derived expansion coefficients and RTTOV-7 expansion coefficients. Upper panel is the result for derived expansion with maximum and minimum profile limitation and lower panel is the result without the limitation. 101
Fig. 2.2.8. Comparison of MTSAT-1R TBs and COMS TBs calculated from derived expansion coefficients. 103
Fig. 2.2.9. Comparison of Meteosat-8/SEVIRI TBs and COMS TBs calculated from derived expansion coefficients. 103
Fig. 2.2.10. Comparison of COMS response functions with Meteosat-8/SEVIRI and MTSAT-1R response functions. 104
Fig. 2.3.1. Detailed schematic diagram for cloud detection tests and automatic quality control procedure. 112
Fig. 2.3.2. The main constituent modules for CLD Algorithm. 124
Fig. 2.3.3. Examples of cloud detection results. Upper panels are for the 0433 UTC case, middle panels for the 1633 UTC case, and lower panels for the 1933 UTC case on 15 August 2009. 131
Fig. 2.3.4. Comparison of cloud detection results between the CMDPS algorithm using MTSAT-1R (0533 UTC on April 7, 2006) and MODIS (0555 UTC on the same day) (a and b, upper panels). c and d represent the infrared and visible imagery of MTSAT-1R, respectively. 134
Fig. 2.3.5. Same as Fig. 2.3.4, except for 0033 UTC on August 31, 2006. 135
Fig. 2.4.1. Flow chart for the clear sky radiance algorithm. 141
Fig. 2.4.2. The main constituent modules for CSR Algorithm. 146
Fig. 2.4.3. Output examples of the CMDPS CSR algorithm at 0033 UTC on 25 December 2009. The images show the visible channel clear sky reflectance; the SWIR channel solar reflection component; and the SWIR, WV, IR1, and IR2 clear sky brightness temperatures, respectively. 150
Fig. 2.5.1. Schematics for the target and search areas. 154
Fig. 2.5.2. Target optimization. 156
Fig. 2.5.3. Flow chart of height assignment 157
Fig. 2.5.4. Example showing the adjustment applied to the forward calculations of the water vapor channel TBBs when calculated and measured values disagree. 158
Fig. 2.5.5. Measured TBBs within target area partially filled with clouds. The curve represents the forward calculations of TBBs for IR1 and water vapor channels for opaque clouds at different levels in the atmosphere. 160
Fig. 2.5.6. Simulated water vapor channel emissivity of each layer (blue), and cumulative emissivity from cloud top to top of atmosphere (red). 161
Fig. 2.5.7. (a) Three sampled images for derivation of target displacement and (b) schematic diagram of target tracking for wind vector estimation. The target area in the reference image is moved around within the search area (many dotted... 164
Fig. 2.5.8. Sample images of AMVs for each channel (only 25% displayed). 172
Fig. 2.5.9. Flow chart of AMV validation using rawinsonde data. 174
Fig. 2.5.10. Validation results of AMV during January and June 2008. 175
Fig. 2.6.1. Conceptual diagram for SST estimation process. 182
Fig. 2.6.2. The structure of the main SST module. 191
Fig. 2.6.3. The structure of the directories of SST pre-processing procedure. 194
Fig. 2.6.4. Brief flow of SST real-time processing procedure and files to be used. 195
Fig. 2.6.5. The structure of the directories of SST real-time processing procedure. 195
Fig. 2.6.6. The structure of SST validation module. 196
Fig. 2.6.7. The structure of SST collocation module. 197
Fig. 2.6.8. Comparison of buoy SST and SST estimated from MTSAT-1R. 199
Fig. 2.6.9. Latitudinal variation of RMS errors of MTSAT-1R SST to buoy measurements. 199
Fig. 2.6.10. Comparison of GTS buoy SST and SST estimations using (a) daytime and (b) nighttime MTSAT-1R data. 200
Fig. 2.6.11. An example of sea surface temperature estimated from the split window MCSST equation for MTSAT-1R data. 200
Fig. 2.6.12. Distribution of TIGR data in the COMS area, where the colors represent months corresponding to the observation date. 204
Fig. 2.6.13. Response function of MTSAT-1R channels 2, 3, 4, and 5. 204
Fig. 2.6.14. Comparison of TIGR sea surface temperature with MCSST retrieved from MODTRAN brightness temperature for day and night. 205
Fig. 2.6.15. Brief flow diagram of SST coefficient retrievals based on radiative transfer model simulation. 209
Fig. 2.6.16. Percent of contribution of seasonal SST cycle (1 ~4 cycles/year) to total variability of SST at each spatial grid. Low-latitude regions show a small contribution of seasonal cycle of less than 20%. 211
Fig. 2.6.17. SST climatology in February using NASA/JPL Pathfinder SST data. 212
Fig. 2.6.18. SST quality flag distribution: 0 (clear ocean), 1 (land, outside region), 2 (cloud mask over ocean), 3 (masked from low SST compared to SST climatology), 4 (masked from high SST), 5 (abnormally low SST of less than -5℃), and 6 (abnormally high SST of greater than 37℃). 213
Fig. 2.6.19. Location of collocated points between surface drifters and MTSAT-1R data from July 2005 to June 2006. 214
Fig. 2.6.20. Comparison of buoy SST and satellite-derived SST from daytime and nighttime MTSAT-1R data using newly-derived SST equations. 216
Fig. 2.6.21. Sea surface temperature estimated from the split window MCSST equation for MTSAT-1R data (31 October 2005). 216
Fig. 2.6.22. Examples of SST images of AQUA/AMSR-E used for SST composite process. 218
Fig. 2.6.23. An SST composite image based on simple average method. 219
Fig. 2.6.24. SST image from the OI composite technique using an 8×8 window (Lx=180km, Ly=180km). 219
Fig. 2.6.25. An image of SST errors from OI composite technique using 8×8 window (Lx=180km, Ly=180km). 220
Fig. 2.6.26. SST errors as a function of wind speed at low latitude area within 10 degrees from the equator. 221
Fig. 2.6.27. Frequency probability (%) of low wind speed (<6 m/s) for the period 1999~2005. 221
Fig. 2.6.28. Schematic plots of vertical temperature profiles of the layer within a few meters of the sea surface for daytime and nighttime. Oceanic instruments, the satellite-tracked surface drifting buoy and CTD, measure sea surface temperature at different depths. 222
Fig. 2.6.29. Sea surface temperature averaged for (a) daytime ascending passes and (b) nighttime descending passes of AQUA/AMSR-E in August, 2002. (c) is the average map of SST difference between daytime and nighttime SST for the same day and (d) shows the maximum of... 225
Fig. 2.6.30. Brief procedure of matchup database production and SST coefficient retrievals. 228
Fig. 2.6.31. An example of contents of the directories and files used for the COMS-buoy data matchup procedure. 229
Fig. 2.6.32. Brief procedure for the derivation of RTM-based SST coefficients. 229
Fig. 2.6.33. Location of collocation data points between MTSAT-1R data and GTS drifter data at the region of the East Asia. 230
Fig. 2.6.34. Location of collocation data points between MTSAT-1R data and GTS drifter data for the daytime and nighttime MTSAT-1R passes over the full disk region. 231
Fig. 2.6.35. Threshold for the 11-12㎛ test as a function of the 11㎛ BT. Curve a) is for the nadir view within 50km of the sub-satellite track, and curve b) the same for the corresponding forward view. 234
Fig. 2.7.1. Distribution of TIGR data according to the SZA used as an input data for MODTRAN4 simulations. 247
Fig. 2.7.2. Response function of IR1 and IR2 channel of MTSAT-1R. 249
Fig. 2.7.3. Histogram of bias between Est_Temp and Ref_Temp. 251
Fig. 2.7.4. Scatter plots of estimated LST versus reference LST according to the surface lapse rate. 252
Fig. 2.7.5. Distribution of bias according to the SZA for Ts=Ta, ε10.8=0.989 and ε12.0=0.991. 252
Fig. 2.7.6. Flow chart for the LST retrieval process using satellite data. 253
Fig. 2.7.7. Block diagram for the retrieval of emissivity. 254
Fig. 2.7.8. Sample image of NDVI (2 June). 255
Fig. 2.7.9. Sample image of FVC (2 June). 256
Fig. 2.7.10. Spatial distribution of land cover over CMDPS full disk. 257
Fig. 2.7.11. Sample image of IR1 (10.8 ㎛, left) and IR2 (12.0 ㎛, right) emissivity. 258
Fig. 2.7.12. Flow chart for the LST retrieval process. 263
Fig. 2.7.13. The structure of temporal and spatial collocation modules for the validation processes using AWS and MODIS data. 268
Fig. 2.7.14. The structure of statistical process modules. 268
Fig. 2.7.15. Validation of LST retrieved from MTSAT-1R data with one-minute AWS air temperature data over South Korea. 275
Fig. 2.7.16. Spatial distribution of LST retrieved from MTSAT-1R and MODIS, and their differences. A scatter plot between the two LSTs is also shown. 276
Fig. 2.7.17. Sample image of LST derived from MTSAT-1R data on 17 November 2008. 280
Fig. 2.7.18. Directories and files for the matchup database and LST coefficients. 288
Fig. 2.7.19. Directories and files for retrieval of basic products. 290
Fig. 2.8.1. Spectral reflectance for different land surface cover types. Spectral bands of the GOES imager channels 1 and 2 are shown with bars. (Romanov et al., 2000) 295
Fig. 2.8.2. MTSAT-1R satellite images at 04 UTC 23 January 2006. a), b), and c) represent SWIR, IR1, and VIS, respectively. 296
Fig. 2.8.3. The flow chart of SSI algorithm. 298
Fig. 2.8.4. GOES-9 visible image at 0325 UTC 29 January 2005. 299
Fig. 2.8.5. Time series of visible reflectance before (a, b) and after (c, d) solar zenith angle correction on January 29 and 30, 2005. 300
Fig. 2.8.6. NDVI image (a), and snow and sea ice extent before (b) and after (c) NDVI correction on January 4, 2007. 301
Fig. 2.8.7. Sea ice extent with the CMDPS algorithm before (a) and after (b) the application of SST at 0333 UTC 27 January 2007. 302
Fig. 2.8.8. a) Modified visible image, b) SWIR 3.7㎛ - IR1 10.8㎛ image, and the histograms of c) modified VIS 0.65㎛ reflectance and d) IR 3.7㎛ - IR 10.8㎛ for snow, and e) modified VIS 0.65㎛ reflectance and f) SWIR 3.7㎛ - IR1 10.8㎛ for cloud at 0500 UTC 11 January 2007. 304
Fig. 2.8.9. Spectral properties for ice, water and cloud. a), b) and c) represent visible reflectance, SWIR-IR1 and surface temperature, respectively. 305
Fig. 2.8.10. Sea ice detected by reflectance (a), sea surface temperature (b), and their combination (c) at 0333 UTC 27 January 2007. 306
Fig. 2.8.11. Snow cover maps for a) 0533 UTC 7 December 2006 and c) 0233 UTC 9 December 2006, and the maps (b, d) after the correction (DCD2 10.8㎛-6.7㎛). 307
Fig. 2.8.12. The main constituent modules for SSI Algorithm. 310
Fig. 2.8.13. Snow and sea ice detection maps on 2 (a, d), 3 (b, e), and 4 (c, f) February 2009. 313
Fig. 2.8.14. SSI validation methods. 314
Fig. 2.8.15. The validation results for snow (a) and sea ice (b) during 2007-2008 winter seasons. Graphs on the left hand side are the daily variation, and the right hand side are the monthly mean from November 2007 through April 2008. 316
Fig. 2.8.16. Sea ice extent map by CMDPS (a) and IMS (b) on 12 January 2008. Light blue, dark blue and gray represent sea ice, open sea... 316
Fig. 2.9.1. The flowchart of INS algorithm with geostationary satellite. 320
Fig. 2.9.2. Selected reference data from sample data using condition-1 and condition-2. 331
Fig. 2.9.3. Selected reference data from sample data using condition-3. 331
Fig. 2.9.4. The scatterplot of cloud penetration according to visible reflectance and solar zenith angle. 332
Fig. 2.9.5. The contour image of cloud penetration according to visible reflectance and solar zenith angle. 333
Fig. 2.9.6. Scatter plot comparing modelled values with ground measurements using the previous LUT. 334
Fig. 2.9.7. Scatter plot comparing modelled values with ground measurements using the developed LUT. 334
Fig. 2.9.8. Total synopsis of the INS algorithm. 338
Fig. 2.9.9. Main constituent modules for INS algorithm. 339
Fig. 2.9.10. Study area for retrieved insolation over SE Asia. 342
Fig. 2.9.11. Location of observation stations over Korean peninsula. 343
Fig. 2.9.12. (a) Scatter plot comparing modelled and ground-based values under clear-sky conditions; (b) the same under cloudy-sky conditions. 344
Fig. 2.9.13. Retrieved insolation images from MTSAT-1R data on 1 and 7 August 2005. 345
Fig. 2.9.14. The locations of 22 pyranometer sites over the Korean peninsula. 345
Fig. 2.9.15. Scatter plots between modelled INS and pyranometer measurements using on-time values. 347
Fig. 2.9.16. Insolation variation over time from ground-based measurements (green line) and INS from CMDPS (blue line) for cloudy cases. 348
Fig. 2.9.17. Scatter plots for INS (blue dots) and pyranometer (red dots) at 10:00 LTC and 17:00 LTC. 348
Fig. 2.9.18. Scatter plots between modelled INS and pyranometer using adjusted hour angle information. 349
Fig. 2.9.19. Scatter plots between the satellite estimates and the ground measurements under clear-sky conditions. 350
Fig. 2.9.20. Scatter plots between the satellite estimates and the ground measurements under all-sky conditions. 351
Fig. 2.9.21. (a) RMSE, (b) Bias for INS model-pyranometer comparisons for the clear-sky condition (black bars) and all-sky condition (grey bars) according to month; (c) RMSE, (d) Bias for INS from CMDPS-pyranometer comparisons over the clear (black bars) and all-sky condition (gray bars) according to... 352
Fig. 2.9.22. Insolation estimate for COMS scan area. 353
Fig. 2.9.23. Quality flag corresponding to Insolation estimate for COMS scan area. 353
Fig. 2.9.24. Flowchart for determining cloud factor coefficients according to Solar Zenith Angle and Albedo. 359
Fig. 2.9.25. Scatter plot of filtering from condition-1 and condition-2. 361
Fig. 2.9.26. Scatter plot of result from condition-3. 361
Fig. 2.10.1. The schematic diagram of UTH observation from the satellite. 364
Fig. 2.10.2. The latitudinal variation of the normalized base pressure for (a) January, (b) February, … etc. The solid line in each panel is a polynomial fit for the normalized base pressures. 367
Fig. 2.10.3. Scatter plots of retrieved UTH versus radiosonde UTH using different normalized base pressures: (a) for reference P0, (b) for P0+0.1, and (c) for P0-0.1. 367
Fig. 2.10.4. The latitudinal variation of the temperature lapse rate for (a) January, (b) February, … etc. The solid line in each panel is a polynomial fit for the temperature lapse rate. 368
Fig. 2.10.5. The comparison of (left-hand side panels) determined algorithm coefficients and (right-hand side panels) retrieved UTH (a-b) for including the effect of temperature lapse rate in calculation and (c-d) algorithm coefficients. 369
Fig. 2.10.6. Normalized sensitivities of the water vapor channel as a function of the latitude. The sensitivities are calculated at 5˚ latitude intervals. 370
Fig. 2.10.7. Schematic diagram of the sequential procedure for UTH retrieval. 372
Fig. 2.10.8. Flowchart of the UTH retrieval procedure. 372
Fig. 2.10.9. The program structure of UTH modules in CMDPS. 375
Fig. 2.10.10. Specific elements of UTH module components. 377
Fig. 2.10.11. Retrieved UTH from MTSAT-1R at 0033 UTC on November 1, 2008. 378
Fig. 2.10.12. The geographical locations of upper-air stations within the MTSAT-1R FOV used in the validation. 380
Fig. 2.10.13. Comparison of UTH retrieved from MTSAT-1R and those of radiosonde soundings in September 2007. The dash-dot line shows a 1-to-1 correspondence. 381
Fig. 2.10.14. The range of application in sensitivities of the MTSAT-1R WV channel as a function of the latitude. The sensitivities are calculated at 5˚ latitude intervals. Lengthwise dotted lines represent the changed range of application considering... 382
Fig. 2.10.15. Comparison of the retrieved UTHs with those of radiosonde soundings in the summer season for GOES-9 from 2004 to 2005.(a) Before considering the latitudinal and seasonal variation of weighting function, (b) after considering the... 383
Fig. 2.11.1. Spectral response functions of the split-window channels for GOES-9 (solid line) and MTSAT-1R (dash-dot line) with simulated brightness temperature spectrum. 388
Fig. 2.11.2. Weighting functions of the split-window channels for MTSAT-1R at nadir viewing angle. Black and red solid lines represent IR1 and IR2 channel, respectively. 389
Fig. 2.11.3. The theoretical sensitivity of TPW amount, retrieved by SWLR method, with respect to the BTD of Split-Window channels. 390
Fig. 2.11.4. Schematic diagram of the sequential procedure for TPW retrieval. 394
Fig. 2.11.5. Flowchart of the TPW retrieval procedure. 394
Fig. 2.11.6. The program structure of TPW modules in CMDPS. 399
Fig. 2.11.7. Specific elements of TPW module components. 400
Fig. 2.11.8. TPW imagery for MTSAT-1R measured at 0033 UTC on November 1, 2008. 401
Fig. 2.11.9. The geographical locations of upper-air stations within the MTSAT-1R FOV used in the validation. 403
Fig. 2.11.10. Comparison of inferred TPW from MTSAT-1R with the radiosonde sounding from January to November in 2008. The dash-dot line shows a 1-to-1 correspondence. 404
Fig. 2.11.11. Comparison of inferred TPW by (a) SWLR and (b) modified SWLR methods using MTSAT-1R data with those of the radiosonde sounding from January to November in 2008. The dash-dot line denotes a 1-to-1 correspondence. 405
Fig. 2.11.12. The sensitivity of TPW, calculated using the improved algorithm, according to the variation of split-window channel BTD, satellite zenith angle, WV channel brightness temperature, and surface temperature. (a) for satellite... 406
Fig. 2.12.1. Flowchart of the methodology for the retrieval of CLA. 413
Fig. 2.12.2. Schematic diagram for the determination of cloud phase, cloud type (CT), and cloud optical thickness. 414
Fig. 2.12.3. Input and output data of CLA module with respect to time. 415
Fig. 2.12.4. Constituent programs of the CLA algorithm. 417
Fig. 2.12.5. Spectra of brightness temperature observed from a high spectral resolution infrared spectrometer from the high-flying ER-2 aircraft over a domain, 31.1˚-37.4˚N, 95.0˚-95.3˚W, on April 21, 1996, indicating... 426
Fig. 2.12.6. Cloud type classification diagram according to the split window technique (Inoue, 1989). Six cloud types are classified using clear/cloudy and -20℃ channel-4 brightness temperature thresholds, along with clear and 1℃ brightness temperature differences. 427
Fig. 2.12.7. Flow chart of COMS CT algorithm. 427
Fig. 2.12.8. Input and output data of CLA module with respect to time. 430
Fig. 2.12.9. Flow chart of CT modules. 435
Fig. 2.12.10. Imagery of ISCCP cloud type (a) and SEVIRI cloud type (b) from the CT module. 437
Fig. 2.12.11. Images of (a), (b) nadir-view cloud fraction and (c), (d) fractional sky cover (Kassianov et al., 2005). 445
Fig. 2.12.12. Illustration of the parameters. 446
Fig. 2.12.13. Flow chart of CF and CA algorithm. 448
Fig. 2.12.14. Input and output data of CA module with respect to time. 449
Fig. 2.12.15. Flow chart of CA modules. 452
Fig. 2.12.16. Imagery of cloud fraction (a) and cloud amount (b) by cloud amount module. 452
Fig. 2.12.17. Validation results of cloud fraction (a) and sky cover (b) for the SGP site. Larger circles in b represent improved Nhemisph values compared with... 455
Fig. 2.12.18. Monthly mean bias between ground-measured cloud fraction and MODIS-retrieved cloud fraction (algorithm-retrieved sky cover). 456
Fig. 2.12.19. Imaginary refractive indices of ice and water. 460
Fig. 2.12.20. Flowchart of the methodology for the retrieval of CP. 461
Fig. 2.12.21. The results of a RT model simulation for (a) TB10.8 versus BTD8.7-10.8, (b) TB10.8 versus TB6.7 for clouds composed of water droplets (filled circle) and ice crystals (open circle). The numbers indicate cloud optical thickness. re stands for effective particle radius. 462
Fig. 2.12.22. BTD10.8-12.0 distribution with respect to TB10.8 for water (a) and ice cloud (b) that have a variety of effective particle radius (4 or 5, 8, 16, 32) and cloud optical depth (0.1, 0.2, 0.5, 1, 2, 3, 5, 10). 463
Fig. 2.12.23. Scatter plots of MODIS cloud phase product. 463
Fig. 2.12.24. Calculations of 0.6 ㎛ reflectance and BTD10.8-12.0 for a single-layer water cloud, a single-layer ice cloud, and an ice cloud overlapping a water cloud. The clouds are shown as a function of visible optical depth ranging from 1.0 to 20.0. SatZA... 465
Fig. 2.12.25. Flow chart of the CP modules. 471
Fig. 2.12.26. Imagery of cloud phase from CP module. 473
Fig. 2.12.27. JAMI/MTSAT-1R radiance imagery for the five spectral channels centered at 0.725 (VIS), 10.8 (IR1), 12.0 (IR2), 6.75 (IR3), and 3.75 ㎛ (IR4) for 0333 UTC August 7, 2006. Except for the VIS channel, the brighter color corresponds to a relatively low value in W m-2... 478
Fig. 2.12.28. Cloud phase derived by the CLA from the JAMI level-1b calibrated radiances shown in Fig. 2.12.27. Base products (left) are the results of conventional methods or without correction methods, and final products (right) from improved methods or with the correction methods developed... 479
Fig. 2.12.29. Time series of the ratio of ice clouds to the total clouds at nine selected sites; base CP from IR1 and IR2 (a), and final CP from IR1, IR2, and IR3 (b). 481
Fig. 2.12.30. Comparison of ch1 and ch3 radiances for various cloud optical thicknesses and effective radius values (King et al., 1997). 488
Fig. 2.12.31. Sensitivity of SWIR 3.7 ㎛ thermal radiances (Lth3.7) to IR 10.8 ㎛ satellite-received radiances (Lobs10.8) for the clouds with a variety of τc (0 to 64)... 489
Fig. 2.12.32. Simulated radiances in VIS0.65㎛ as a function of cloud optical thickness and surface albedo (Ag). 490
Fig. 2.12.33. The relationships between the radiance at 0.65 ㎛ and 3.75㎛ for values of cloud optical thickness and effective particle radius. 490
Fig. 2.12.34. Flow chart of the COT algorithm. 492
Fig. 2.12.35. Input and output data of COT module with respect to time. 493
Fig. 2.12.36. Flow chart of COT modules. 496
Fig. 2.12.37. COT (a) and ER (b) Imagery from the COT module. 497
Fig. 2.12.38. JAMI/MTSAT-1R radiance imagery for the five spectral channels centered at 0.725 ㎛ (VIS), 10.8 ㎛ (IR1), 12.0 ㎛ (IR2), 6.75㎛ (IR3), and 3.75 ㎛ (IR4) for 0333 UTC August 7, 2006. Except for the VIS channel, the brighter color corresponds... 503
Fig. 2.12.39. Imagery of cloud optical thickness without using the decoupling method (i.e., base products) and using the decoupling method (i.e., final products). 504
Fig. 2.12.40. Relative frequency (in %) of cloud optical thickness without using the decoupling method (i.e., base products), using the decoupling method (i.e., final products), and MODIS data to the total clouds for the corresponding conditions. SH and... 505
Fig. 2.12.41. Same as Fig. 2.12.40 but for cloud effective radius (in ㎛). 506
Fig. 2.12.42. Same as Figure 2.12.41 but for base COT using the VIS and IR4 radiances (a), and final COT corrected using the decoupling method in order to have a reflected component from clouds only in the radiances (b). 508
Fig. 2.12.43. Same as Figure 2.12.41 but for base ER (a) and final ER (b). 508
Fig. 2.12.44. Relative frequency of MTSAT-1R minus MODIS COT/ER for the maximum values. Errors in the retrieved COT/ER (in %) with respect to the corresponding parameters. The solid and dotted lines indicate values from the final... 509
Fig. 2.13.1. Climatological atmospheric absorption used to compute cloud top temperature from IR10.8 ㎛ brightness temperature (SAFNWC/MSG user manual, 2002). 513
Fig. 2.13.2. Flow chart of the CTTP algorithm. 515
Fig. 2.13.3. Input and output data of CTT module with respect to time. 516
Fig. 2.13.4. Flow chart of CTT modules. 520
Fig. 2.13.5. Imagery of cloud top temperature (a) and pressure (b) from CTT module. 522
Fig. 2.13.6. Imagery of cloud top pressure without using the decoupling method (i.e., base products) and using the decoupling method (i.e., final products). 524
Fig. 2.13.7. Relative frequency distribution (in %) of (a) MODIS CTP, (b) base CTP retrieved by the IR1 estimate only, and (c) final CTP corrected by the radiance rationing method for the total clouds. The results are shown for August 2006, daytime, nighttime, Northern... 525
Fig. 2.13.8. The difference between MTSAT-1R (both base and final) CTP and MODIS CTP values (in %) shown in Fig. 2.13.7. The radiance ratios are calculated by using clear-sky radiances obtained within various spatial resolutions: 12 (3×3), 60 (15×15), 100 (25×25), and 220 km (55×55 pixels). 526
Fig. 2.13.9. Time series of the ratio of ice clouds to the total clouds at nine selected sites; (a) base CTP and (b) final CTP. 527
Fig. 2.13.10. Relative frequency of MTSAT-1R minus MODIS CTP (a) for the maximum values. Errors in the retrieved CTP (b) (in %) with respect to the corresponding parameters. The solid and dotted lines indicate values... 528
Fig. 2.14.1. Schematic diagram showing the effect of higher clouds above daytime fog-layer to satellite observations of the difference in brightness temperature between 3.7 ㎛ and 11 ㎛ (T3.7-11) and the reflectance at 0.65 ㎛ (R0.65). The... 533
Fig. 2.14.2. Theoretical values of △T (i.e., T3.7-11) vs. T10.8 for high-level (cirrus) spherical ice crystal clouds as a function of particle size and optical depth. The solid isolines of particle size (re) and dashed isolines of optical depth (τ) are identified. Note: the isoline of re = 3㎛ is also dashed (after Yoo, 1992). 534
Fig. 2.14.3. Flow chart of the algorithm for fog detection. The correction values of △T3.7-11, △R0.68, and △T11 are calculated from the look-up table (LUT) by subtracting the case without higher cloud above the low cloud (or fog) from the case with higher cloud. For twilight fog,... 535
Fig. 2.14.4. Time series for a) surface temperature (Tsfc), b) the MTSAT-1R brightness temperature at 11 ㎛ (T11), and c) the difference between Tsfc and T11 (Tsfc-11) during the period from March 2006 to February 2007 over the Korean peninsula, respectively. Here the value of '1201' in the... 536
Fig. 2.14.5. Flow chart for the CMDPS_FOG_EWHA_Main.F90. 542
Fig. 2.14.6. a) Fog spatial distribution in terms of 10.8 ㎛ brightness temperature on October 16, 2008, 22:33 UTC and b) fog flags on November 17, 2008, 00:33 UTC. 548
Fig. 2.14.7. Verification of fog detection over nine satellite pixels. 549
Fig. 2.14.8. Location of 52 meteorological stations used in this study. The symbols of 'blue circle' and 'red triangle' mean the inland and island (or coastal) stations, respectively. 551
Fig. 2.14.9. Flow chart for the CMDPS_FOG_Main.F90. 554
Fig. 2.14.10. Correction of the difference in brightness temperature (K) between 3.7 ㎛ and 11 ㎛ (i.e., T3.7-11), derived from the LUT, during nighttime fog as a function of cloud height (km). The correction values (△T3.7-11)... 557
Fig. 2.14.11. Fog spatial distributions for twilight period, based on MTSAT-1R observations of brightness temperature at 11 ㎛ (7 K < T11max-11 < 14 K). Twilight fog cases in Ulleungdo, Wonju, Chuncheon on May 20, 2006, 0500 LST were used. 558
Fig. 2.14.12. Fog spatial distributions for twilight period, based on MTSAT-1R observations of brightness temperature at 11 ㎛ (7 K < T11max-11 < 14 K). Twilight fog cases in Jeonju, Yeongwol, Baengnyeongdo, Heuksando on June 28,... 558
Fig. 2.14.13. Fog spatial distributions for twilight period, based on MTSAT-1R observations of brightness temperature at 11 ㎛ (7 K < T11max-11 < 14 K, and T11max-11 < -4 K). Twilight fog cases in Jeonju, Yeongwol, Baengnyeongdo,... 559
Fig. 2.14.14. Procedure of producing fog threshold values. Th3.7-11 stands for the threshold value of T3.7-11 (i.e., difference in brightness temperature between 3.7 ㎛ and 11 ㎛), and Th0.68, the R0.68 (i.e., reflectance at 0.68 ㎛) threshold. The units of R0.68 and T3.7-11 are % and K, respectively. For twilight... 563
Fig. 2.14.15. Selection of the geographical positions of the 52 meteorological stations within a given area of the Korean peninsula, as shown in the program producing fog threshold values. 564
Fig. 2.14.16. The output file from the software program producing fog threshold values. 564
Fig. 2.14.17. RTM simulation results. The value of 3.7㎛-10.8㎛ changes as a function of solar zenith angle and effective radius (Re) in each graph. Different colors from light blue to dark blue represent different Re (2, 4, 8, 16, 32, 64), and each graph (a~f) is the... 569
Fig. 2.14.18. The flowchart of the FOG2 algorithm. 570
Fig. 2.14.19. SWIR-IR1 images for a fog case (near the red box) at 1833 UTC (a) and 2133 UTC (b) on January 8 and at 0033 UTC (c) and 0333 UTC (d) on January 9, 2008. 572
Fig. 2.14.20. Plots of SWIR-IR1 (K) according to the solar zenith angle (degrees) in the red box of Fig. 2.14.19. 573
Fig. 2.14.21. Example images on IR corrections. (b) image is the results of IR1-IR2 correction on (a), and (d) image is the results of IR1-WV correction on (c). 574
Fig. 2.14.22. The main constituent modules for the FOG2 algorithm. 578
Fig. 2.14.23. Fog detection results with IR1 images and the GTS stations that reported fog at the time. (a) 1733 UTC on 6, (b) 2033 UTC on 6, (c) 2333 UTC on 6, (d) 0233 UTC on 7, and (e) 0533 UTC on 7 November 2007. (f) represents MTSAT-1R SWIR image... 581
Fig. 2.14.24. The same as in Fig. 2.14.23, except for (a) 1733 UTC on 11, (b) 2033 UTC on 11, (c) 0033 UTC on 12, and (d) 0233 UTC on 12 June 2008. (e) represents MTSAT-1R SWIR image at 0033 UTC 12 June 2008. 582
Fig. 2.14.24. FOG2 validation methods. 583
Fig. 2.14.25. The validation results from 1 through 4 October 2008. (a) and (b) represent the validation results with GTS data over East Asia and with ground observation data over the Korean peninsula, respectively. 585
Fig. 2.15.1. Flow chart for CMDPS Rainfall Intensity. 590
Fig. 2.15.2. Scan times of the SSM/I satellite for the East Asia region on Jul. 12, 2008. 590
Fig. 2.15.3. Temporal and spatial coincidence between MTSAT-1R brightness temperature and SSM/I rainrate from Jul. 21 to Jul. 25, 2008. 591
Fig. 2.15.4. PDF and CDF of COMS brightness temperature (BTT) and SSM/I rainrate at 0033 UTC Jul. 24, 2008. 593
Fig. 2.15.5. Look-up table between COMS brightness temperature (BTT) and SSM/I rainrate at 0033 UTC Jul. 24, 2008. 593
Fig. 2.15.6. Time series for the dynamic (OLD) and quasi-dynamic (NEW) LUTs between COMS brightness temperature (BTT) and rainrate from Jul. 21 to Jul. 25, 2008. 594
Fig. 2.15.7. The POM modules for RI algorithm. 599
Fig. 2.15.8. Flowchart of DPM module for RI estimation. 601
Fig. 2.15.9. Estimated CMDPS RI and quality flag image at 0033 UTC Nov. 17, 2008. 603
Fig. 2.15.10. Flowchart for the validation of CMDPS RI. 604
Fig. 2.15.11. Validation results of COMS rainfall intensity against AWS and SSM/I data. 607
Fig. 2.16.1. Schematic diagram for narrow-band OLR algorithm. 612
Fig. 2.16.2. Scatter plots of simulated irradiances versus OLR for three channels: 10.8, 12.0, and 6.7 ㎛. 614
Fig. 2.16.3. The flow chart of COMS OLR algorithm. 615
Fig. 2.16.4. The simplified flow chart of COMS OLR algorithm. 615
Fig. 2.16.5. The list of OLR modules. 620
Fig. 2.16.6. The distribution of the retrieved cloud detection at 00 UTC 4 August 2005. 623
Fig. 2.16.7. The distribution of the retrieved (a) OLR12.0, (b) OLR10.8+6.7, and (c) OLR10.8+12.0 at 00 UTC 4 August 2005. 624
Fig. 2.16.8. Scatter plots of (a) OLR12.0, (b) OLR10.8+6.7, and (c) OLR10.8+12.0 vs. CERES OLR data from 100 km × 100 km homogeneous scenes for 1-15 August 2005. The statistics on the comparison between MTSAT-1R and CERES are also shown below the figures. RMS and OLRC... 624
Fig. 2.16.9. The conceptual schematics of OLR algorithm validation. 630
Fig. 2.17.1. Schematic diagram showing how two-channel thermal IR transmission through meteorological cloud and yellow sand differs (Gu et al., 2003). 633
Fig. 2.17.2. (a) Background brightness temperature of 11㎛ (filled circles) and 12㎛ (asterisks); (b) background brightness temperature difference (BTD) between 11 and 12㎛ for Seoul. (c) and (d) are the same as (a) and (b), respectively, except for the East Sea at 131˚E and 37˚N. The line in... 639
Fig. 2.17.3. The modified flowchart of the methodology for Aerosol Detection. 640
Fig. 2.17.4. DPM constituent modules. 642
Fig. 2.17.5. Validation constituent modules. 643
Fig. 2.17.6. a) BTD* image on March 1, 2008 at 0433 UTC, b) BTD image on March 1, 2008 at 0433 UTC, c) OMI AI image on March 1, 2008 at 0412 UTC, d) MODIS RGB image on March 1, 2008. 644
Fig. 2.17.7. a) BTD* image on March 2, 2008 at 0333 UTC, b) BTD image on March 2, 2008 at 0333 UTC, c) OMI AI image on March 1, 2008, d) MODIS RGB image on March 2, 2008. 644
Fig. 2.17.8. 11㎛ - 12㎛ brightness temperature difference at 00 and 05 UTC on 8 April 2006. Green and blue colored areas are at 00 UTC, and blue and yellow colored areas are at 05 UTC. 648
Fig. 2.18.1. AOD retrieval error as a function of surface reflectance error. Each color shows top of atmosphere reflectance. 654
Fig. 2.18.2. The sensitivity of satellite retrieved AOD to the single scattering albedo. 655
Fig. 2.18.3. (a) Surface reflectance error and (b) AOD error as a function of background optical depth error (solar zenith angle=45˚, relative azimuth angle=90˚, satellite zenith angle=45˚, true value of background optical depth=0.5). 657
Fig. 2.18.4. (a) and (b) show the surface reflectance error caused by O3 content error and H2O content error, respectively (solar zenith angle=45˚, relative azimuth angle=90˚, satellite zenith angle=155˚, true values of H2O=1.5 g/cm2 and O3=0.3 cm-atm) (Yoon et al., 2007). 657
Fig. 2.18.5. The flow chart of AOD retrieval algorithm. 658
Fig. 2.18.6. The RGB image from MODIS (left), and the cloud-removed top-of-atmosphere (TOA) reflectance image from the MTSAT-1R measurement (right) at 0300 UTC 6 March 2006. 659
Fig. 2.18.7. The seasonal variation of volume size distribution at (a) Anmyon, (b) Beijing, and (c) Shirahama. Left plots are for the AOD case and right plots for the HAOD case. 662
Fig. 2.18.8. Examples of calculated LUTs with respect to optical properties for different seasons and locations (solar zenith angle=0˚, satellite zenith angle=50˚, relative azimuth angle=50˚). The x-axis and y-axis denote TOA reflectance and AOD, respectively, and the color variation is the AOD variation. Each... 663
Fig. 2.18.9. Calculated seasonal and regional weight value. 666
Fig. 2.18.10. The visibilities from the retrieved AOD in 6S and the empirical formula equation. 667
Fig. 2.18.11. The module structure of AOD algorithm. 669
Fig. 2.18.12. Example results from the CMDPS AOD retrieval algorithm: four AOD types ((a) Anmyon type, (b) Beijing type, (c) Shirahama type and (d) Total type; range: 0~3), (e) visibility (range: 0~30) and (f) QC flag (1 or 2) at... 672
Fig. 2.18.13. (a) MODIS RGB image and (b), (c) retrieved AODs as Anmyon type at 26 March 2006. The difference between results (b) and (c) is caused by the improvement in LUT sensitivity to surface reflectance. 676
Fig. 2.18.14. The seasonal (upper plots) and monthly (lower plots) variation in the proportion of fine-mode particle volume concentration to total particle volume concentration at the (a) Anmyon, (b) Beijing and (c) Shirahama sites; the dashed red line is AOD [0.676 ㎛]. 679
Fig. 2.19.1. Flow chart for COMS Infrared channels' Intercalibration. 685
Fig. 2.19.2. Comparison of 4 IR channels' response function between MTSAT-1R and Terra/MODIS. 687
Fig. 2.19.3. Scatter plot of 4 IR channels' simulated brightness temperatures of MTSAT-1R and Terra/MODIS by RTTOV-7 with inputs of TIGR2000 over ocean at the nadir. 689
Fig. 2.19.4. Comparison of the observed TBIR1 from MTSAT-1R and predicted TBIR1 of MTSAT-1R derived from MODIS band 31 on 5 Aug. 2005. a) and b) are for observation time differences (DTIME) between MTSAT-1R and MODIS within ±5 and ±15 minutes, respectively. 691
Fig. 2.19.5. Comparison of the observed TBBIR1 from MTSAT-1R and predicted TBBIR1 of MTSAT-1R derived from MODIS band 31 from 1 to 5 Aug. 2005. a) and b) are for 0.1˚ and 0.2˚ grids, respectively. 693
Fig. 2.19.6. Comparison of the observed TBBIR1 from MTSAT-1R and predicted TBBIR1 of MTSAT-1R derived from MODIS band 31 on 5 Aug. 2005. a) and b) are for satellite viewing angle differences (DSEZ) between MTSAT-1R and MODIS within 5˚ and 10˚, respectively. 693
Fig. 2.19.7. Calibration targets with similar viewing geometry for GEO and LEO satellites. Adopted from EUMETSAT. 694
Fig. 2.19.8. The program structure of the IRCAL module including subroutines. 698
Fig. 2.19.9. Scatterplots of predicted and measured MTSAT-1R brightness temperatures for (a) IR1, (b) IR2, (c) WV, and (d) SWIR channel during the test period. The test period is from Nov. 13 to Dec. 27, 2006. 700
Fig. 2.19.10. Time series of mean bias (a) and RMSE (b) for MTSAT-1R 4 IR channels during 13 months from Aug., 2005 to Aug., 2006. 701
Fig. 2.19.11. Fast forward model-calculated brightness temperature variation with satellite zenith angle for the standard tropical atmosphere. 703
Fig. 2.19.12. Viewing angle distribution of MTSAT-1R (solid line); calibration targets for MTSAT-1R and Terra/MODIS on 23 Jan. 2008, which satisfy the viewing angle difference conditions, are shown in yellow. 704
Fig. 2.19.13. The spectral response function of WV channel for MTSAT-1R and Terra/MODIS and their weighting function according to the TIGR2000 and mid-latitude summer atmosphere. 704
Fig. 2.19.14. Structure of maintenance programs for producing conversion function of COMS. 706
Fig. 2.19.15. (a) Simulated TOA radiance as a function of AOD at 550 nm. (b) Relative difference from reference AOD value of 0.07. 711
Fig. 2.19.16. Same as in Fig. 2.19.15 except for surface wind speed. Reference wind speed is 5 m/s. 711
Fig. 2.19.17. Same as in Fig. 2.19.15 except for chlorophyll_a concentration. Reference chlorophyll_a concentration is 0.01 g/m3. 711
Fig. 2.19.18. Flowchart for TOA radiance simulation over ocean targets and its applications to vicarious calibration for visible channels. 712
Fig. 2.19.19. 5-year averaged MODIS/Terra monthly mean (a) aerosol optical depth, (b) cloud fraction. And (c) 8-year averaged SeaWiFS chlorophyll_a concentration. 713
Fig. 2.19.20. The color bar denotes the aerosol optical depth over ocean areas where chlorophyll_a and cloud fraction are less than 0.2 g/㎥ and 0.8, respectively. 714
Fig. 2.19.21. Structure of vicarious calibration algorithm using ocean targets. 719
Fig. 2.19.22. (a) 1-day averaged observed radiances and calculated radiances from MODIS over ocean target in 2005, (b) 1-day averaged error and (c) 5-day averaged ratio between calculation and observation during the same time period. 726
Fig. 2.19.23. (a) 1-day averaged observed radiances and calculated radiances from SeaWiFS (VIS band 5, 6 and NIR band 7) over ocean target in 2005, and (b) the 5-day averaged ratios between calculation and observation. 727
Fig. 2.19.24. Reflectance comparison between observation and simulation for MTSAT-1R visible channel during the period from Nov. 15 to Nov. 24, 2007. 728
Fig. 2.19.25. Flow chart of vicarious calibration using desert. 733
Fig. 2.19.26. Brightness test : Time averaged white sky albedo at 3 bands. 734
Fig. 2.19.27. Temporal stability test : Normalized standard deviation with time averaged white sky albedo at 3 bands. 734
Fig. 2.19.28. Spatial uniformity test : Time averaged coefficient of variation at 3 bands. 734
Fig. 2.19.29. Selected calibration target and criterion. 735
Fig. 2.19.30. The flow chart of spectral BRDF parameter retrieval. 736
Fig. 2.19.31. Tree structure of vicarious calibration using desert target. 739
Fig. 2.19.32. The scatter plot of simulated TOA radiance at the MTSAT-1R visible band and observed digital count from November 2007 to June 2008. 742
Fig. 2.19.33. Time series of calibration coefficient (slope), intercept, and R². 742
Fig. 2.19.34. Sensitivity of TOA reflectances to the surface albedo and atmospheric profile. 746
Fig. 2.19.35. Sensitivity of TOA reflectances on the cloud optical thickness and cloud particle effective radius. 746
Fig. 2.19.36. Radiative transfer calculation using MODIS cloud products. 748
Fig. 2.19.37. Flow chart of vicarious calibration using deep clouds. 750
Fig. 2.19.38. Module system of vicarious calibration using deep clouds. 753
Fig. 2.19.39. Simulated and observed MODIS reflectances. 756
Fig. 2.19.40. Simulated and observed SEVIRI reflectances. 757
Fig. 2.19.41. Simulated and observed MTSAT-1R HRIT reflectances. 757
Fig. 3.1.1. Configuration of the CMDPS system. 760
Fig. 3.1.2. Schematic diagram for the functional classification of the CMDPS procedures. 762
Fig. 3.1.3. Interface between the CMDPS pre- and post-processing systems. 763
Fig. 3.1.4. Real-time processing flow of the CMDPS interface. 764
Fig. 3.2.1. Temporal collocation method for CMDPS products and MODIS auxiliary data. 788
Fig. 3.3.1. Execution flow chart of the COMS Level 1B decoding program. 797
Fig. 3.3.2. Display of the 201103212345 full-disk image. 798
Fig. 3.3.3. Functional test results of the AMV satellite data program. 798
Fig. 3.3.4_1. Basic schedule, option 1. 801
Fig. 3.3.4_2. Basic schedule, option 2. 801
Fig. 3.3.5. COMS observation schedule. 802
Fig. 6.2.1. A schematic diagram of the GHRSST-PP implementation framework. 822
I. Title
Development of the Meteorological Data Processing System for the Communication, Ocean and Meteorological Satellite (COMS)
II. Purpose and Necessity of the R&D
○ With the 2010 launch of COMS, Korea's first multi-purpose geostationary satellite, real-time monitoring of detailed meteorological conditions around the Korean Peninsula is secured, so the development of a meteorological data processing system that can process these satellite data quickly and accurately is absolutely necessary.
○ Accordingly, to make multifaceted use of the raw observation data from the meteorological payload, algorithms that stably produce all retrievable meteorological analysis products must be developed, and an optimal meteorological data processing system for their systematic distribution and archiving must be developed under domestic leadership.
○ Build a real-time retrieval system by developing stable integrated software for operational use, through robust coupling of the individually developed S/W modules and development of interfaces with the operational system.
III. Contents and Scope of the R&D
○ Conceptual design of the meteorological data processing system
- Definition of regular analysis product specifications according to the satellite operation plan
- Preparation of detailed specifications and flow charts for the meteorological data processing S/W
- Organization of the development framework for the meteorological data processing system
○ Development and improvement of the meteorological data processing system
- Construction and standardization of prototype processing S/W for each product
- Interface design and integrated software development
- Verification and improvement of analysis product accuracy
○ Establishment of a regular production system for meteorological analysis products
- Securing and improving the functionality of the processing system within the operational system
- Real-time test operation, verification, and improvement
- Establishment of a data processing workflow suited to COMS observation data
○ Technical support for the operation and improvement of the meteorological data processing system (CMDPS)
- Porting and testing of the programs on the CMDPS operational system at the National Meteorological Satellite Center
- Preparation of algorithm technical analysis documents, operation manuals, and code analysis documents for the 16 products, and user training
- Detailing of the initial operation plan
IV. R&D Results
- Defined 16 baseline products considering the characteristics of the COMS meteorological payload
- Prepared detailed S/W specifications and flow charts for each product algorithm
- Organized the overall algorithm flow considering dependencies among the individual processing steps
- Established the stepwise CMDPS development framework, including the selection of the developer for each algorithm
- Standardized the developed prototype software through collection and analysis
- Built the integrated CMDPS software and developed and coupled its interfaces with the operational system
- Continuously evaluated and improved the algorithms through product accuracy analysis
- Established a real-time operating framework through interface coordination and coupling with the ground segment development at the National Meteorological Satellite Center
- Secured operational functionality through real-time test operation and improvement using available satellite data
- Fine-tuned the algorithms and improved the software considering the characteristics of COMS observation data
○ Technical support for the operation and improvement of the meteorological data processing system
- Supported the porting and operation of the CMDPS at the National Meteorological Satellite Center
- Prepared and delivered detailed technical documents such as algorithm theoretical basis documents and operation manuals
- Provided detailed algorithm training for each product and supported training of CMDPS operators at the National Meteorological Satellite Center
- Established and documented the CMDPS initial operation plan in line with the COMS initial operation plan
V. Plans for Utilizing the R&D Results
○ Establish an operational system and use it operationally through continuous verification, improvement, and optimization of the completed CMDPS processing modules
○ Use the 16 baseline products retrieved quickly and accurately in real time by the CMDPS for nowcasting, short-range forecasting, numerical model input, and climate monitoring
○ Use the observation data calibration algorithm for raw data quality monitoring, sensor performance evaluation, and securing product accuracy