
LAB 1-1 Individual Work

OuRan偶然 2022-10-22 21:02:19

 

The Link of Your Class: https://bbs.csdn.net/forums/MUEE308FZU202201
The Link of the Requirement of This Assignment: https://bbs.csdn.net/topics/608734618
The Aim of This Assignment: Self Introduction & Course Planning
MU STU ID and FZU STU ID: MU: 20124058    FZU: 832001227

 


Catalogue

1. Self-introduction

2. Skills & Self-evaluation

3. Amount of Code

4. Expectations


1. Self-introduction

My name is WENQI PENG, and I major in Electronic Engineering (EE) at MIEC. What I like most is building programs with my team; it makes me excited and confident. In college I have learned a lot of new things, which opened the way for me to write programs.

The first language I learned was Python, and of course the first code I wrote was 'print("hello world")'. At that time, using code to solve problems was a really interesting thing for me.

My first program simply added numbers together, which was basic but felt cool to me at the time.

As the courses went deeper, I learned more, such as object-oriented programming, C++, and Arduino, and I combined software and hardware to build an automatic garden irrigation system and an intelligent car.
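
To make the software-plus-hardware idea above a bit more concrete, here is a minimal Arduino-style C++ sketch of the kind of loop an automatic garden irrigation system can use. It is only an illustration: the pin numbers, the moisture threshold, and the assumption that a low sensor reading means dry soil are placeholders, not the actual design of my project.

```cpp
// Minimal sketch of an automatic irrigation loop (illustrative only).
// Assumed wiring: a soil-moisture sensor on analog pin A0 and a pump relay on digital pin 7.

const int MOISTURE_PIN   = A0;   // analog input from the soil-moisture sensor (assumed)
const int PUMP_RELAY_PIN = 7;    // digital output driving the pump relay (assumed)
const int DRY_THRESHOLD  = 500;  // raw ADC reading treated as "too dry" (assumed)

void setup() {
  pinMode(PUMP_RELAY_PIN, OUTPUT);
  digitalWrite(PUMP_RELAY_PIN, LOW);  // keep the pump off at start
  Serial.begin(9600);                 // print readings for debugging
}

void loop() {
  // Read the raw sensor value (0-1023); here a low value is assumed to mean dry soil,
  // but the interpretation depends on the actual sensor used.
  int moisture = analogRead(MOISTURE_PIN);
  Serial.println(moisture);

  if (moisture < DRY_THRESHOLD) {
    digitalWrite(PUMP_RELAY_PIN, HIGH);  // soil looks dry: water for a short burst
    delay(3000);
    digitalWrite(PUMP_RELAY_PIN, LOW);
  }

  delay(60000);  // check again in one minute
}
```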

 

2. Skills & Self-evaluation

1. What professional knowledge I have

  • C
  • C++
  • Python
  • Arduino
  • MATLAB
  • MPLAB X
  • Hardware such as single-chip microcomputers (SCM)

*I used SCM knowledge and C++ to build an intelligent freight car; a short illustrative sketch of this kind of control loop is given below.
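
As a rough idea of what the control loop of such a car can look like, below is a minimal two-sensor line-following sketch, again written as Arduino-style C++ for simplicity. The pin assignments, motor-driver interface, and sensor logic are assumptions made for illustration, not the actual implementation used in the freight car.

```cpp
// Minimal sketch of a two-sensor line-following loop for a small car (illustrative only).
// Assumed wiring: left/right reflectance sensors on pins 2 and 3,
// and a motor driver with one PWM speed pin per motor on pins 5 and 6.

const int LEFT_SENSOR_PIN  = 2;   // HIGH when the left sensor sees the line (assumed)
const int RIGHT_SENSOR_PIN = 3;   // HIGH when the right sensor sees the line (assumed)
const int LEFT_MOTOR_PIN   = 5;   // PWM speed of the left motor (assumed)
const int RIGHT_MOTOR_PIN  = 6;   // PWM speed of the right motor (assumed)

const int CRUISE_SPEED = 180;     // 0-255 PWM duty cycle
const int TURN_SPEED   = 90;

void setup() {
  pinMode(LEFT_SENSOR_PIN, INPUT);
  pinMode(RIGHT_SENSOR_PIN, INPUT);
  pinMode(LEFT_MOTOR_PIN, OUTPUT);
  pinMode(RIGHT_MOTOR_PIN, OUTPUT);
}

void loop() {
  bool leftOnLine  = digitalRead(LEFT_SENSOR_PIN)  == HIGH;
  bool rightOnLine = digitalRead(RIGHT_SENSOR_PIN) == HIGH;

  if (leftOnLine && rightOnLine) {
    // Both sensors on the line: drive straight.
    analogWrite(LEFT_MOTOR_PIN, CRUISE_SPEED);
    analogWrite(RIGHT_MOTOR_PIN, CRUISE_SPEED);
  } else if (leftOnLine) {
    // Line is under the left sensor: slow the left wheel to steer back left.
    analogWrite(LEFT_MOTOR_PIN, TURN_SPEED);
    analogWrite(RIGHT_MOTOR_PIN, CRUISE_SPEED);
  } else if (rightOnLine) {
    // Line is under the right sensor: slow the right wheel to steer back right.
    analogWrite(LEFT_MOTOR_PIN, CRUISE_SPEED);
    analogWrite(RIGHT_MOTOR_PIN, TURN_SPEED);
  } else {
    // Lost the line: stop and wait.
    analogWrite(LEFT_MOTOR_PIN, 0);
    analogWrite(RIGHT_MOTOR_PIN, 0);
  }

  delay(20);  // small control period
}
```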

 

2. What technical directions I am interested in

  • Web development
  • Mobile development

3. What abilities I lack

My ability to integrate knowledge from different areas is still somewhat weak.

 

3. Amount of Code

In fact, as the experiments and coursework went on, I could no longer count exactly how many lines of code I had written.

Over the past five semesters I wrote a program roughly every two days, and I estimate the total has reached about 50,000 lines (assuming roughly 18 weeks per semester, that works out to around 300 programs of 150 to 200 lines each on average). By the end of this course I hope to add another 5,000 lines.

 

4. Expectations

1. What I want to get from this course

In this course, I hope to learn the process of software development and gain the ability to build a piece of software together with a team.

 

2. What role I want to play in this course

During this course, I hope to be a serious and hardworking student, a capable developer, and a contributor who helps drive the team forward and set its direction.

 


     

     

    FASMARM v1.42 This package is an ARM assembler add-on for FASM. FASMARM currently supports the full range of instructions for 32-bit and 64-bit ARM processors and coprocessors up to and including v8. Contents: 1. ARM assembly compatibility 2. UAL and pre-UAL syntaxes 3. IT block handling 4. Alternate encodings 5. Output formats 6. Control directives 7. Data definitions 8. Defining registers lists inside macros 9. Half-precision number formatting 10. Variants supported 11. Further information 12. Version history _______________________________________________________________________________ 1. ARM assembly compatibility There are a few restrictions how the ARM instruction set is implemented. The changes are minor and mostly have a minor impact. For the most part the basic instruction outline is the same. Where possible the original style is used but there are some differences: Not everything matches the ARM ADS assembly style, where possible the original style is used but there are some differences 1) label names cannot begin with a digit 2) CPSIE and CPSID formats are changed, use "iflags_aif" form instead of "aif" (eg. "CPSIE iflags_i" instead of "CPSID i") 3) SRS with writeback must have a separating space after the mode number and before "!" (eg. "SRSDB 16 !" instead of "SRSDB 16!") 4) macro, rept, irp, format, if, virtual etc. are all significant changes from the ARM ADS, so you will need to re-write those sections of existing code Original ARM Syntax | fasmarm Syntax ----------------------+---------------------- cpsie a | cpsie iflags_a | srsdb #29! | srsdb #29 ! ;or, | srsdb 29 ! _______________________________________________________________________________ 2. UAL and pre-UAL syntaxes fasmarm supports the original pre-UAL syntax and the newer UAL syntax. These two syntaxes only affect THUMB encodings. UAL stands for: Universal Assembly Language. pre-UAL syntax is selected wi
    Contents Module Overview 1 Lesson 1: Memory 3 Lesson 2: I/O 73 Lesson 3: CPU 111 Module 3: Troubleshooting Server Performance Module Overview Troubleshooting server performance-based support calls requires product knowledge, good communication skills, and a proven troubleshooting methodology. In this module we will discuss Microsoft® SQL Server™ interaction with the operating system and methodology of troubleshooting server-based problems. At the end of this module, you will be able to:  Define the common terms associated the memory, I/O, and CPU subsystems.  Describe how SQL Server leverages the Microsoft Windows® operating system facilities including memory, I/O, and threading.  Define common SQL Server memory, I/O, and processor terms.  Generate a hypothesis based on performance counters captured by System Monitor.  For each hypothesis generated, identify at least two other non-System Monitor pieces of information that would help to confirm or reject your hypothesis.  Identify at least five counters for each subsystem that are key to understanding the performance of that subsystem.  Identify three common myths associated with the memory, I/O, or CPU subsystems. Lesson 1: Memory What You Will Learn After completing this lesson, you will be able to:  Define common terms used when describing memory.  Give examples of each memory concept and how it applies to SQL Server.  Describe how SQL Server user and manages its memory.  List the primary configuration options that affect memory.  Describe how configuration options affect memory usage.  Describe the effect on the I/O subsystem when memory runs low.  List at least two memory myths and why they are not true. Recommended Reading  SQL Server 7.0 Performance Tuning Technical Reference, Microsoft Press  Windows 2000 Resource Kit companion CD-ROM documentation. Chapter 15: Overview of Performance Monitoring  Inside Microsoft Windows 2000, Third Edition, David A. Solomon and Mark E. Russinovich  Windows 2000 Server Operations Guide, Storage, File Systems, and Printing; Chapters: Evaluating Memory and Cache Usage  Advanced Windows, 4th Edition, Jeffrey Richter, Microsoft Press Related Web Sites  http://ntperformance/ Memory Definitions Memory Definitions Before we look at how SQL Server uses and manages its memory, we need to ensure a full understanding of the more common memory related terms. The following definitions will help you understand how SQL Server interacts with the operating system when allocating and using memory. Virtual Address Space A set of memory addresses that are mapped to physical memory addresses by the system. In a 32-bit operation system, there is normally a linear array of 2^32 addresses representing 4,294,967,269 byte addresses. Physical Memory A series of physical locations, with unique addresses, that can be used to store instructions or data. AWE – Address Windowing Extensions A 32-bit process is normally limited to addressing 2 gigabytes (GB) of memory, or 3 GB if the system was booted using the /3G boot switch even if there is more physical memory available. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the additional physical memory. This allows a process to access any portion of the physical memory by mapping it into the applications window. 
When used in combination with Intel’s Physical Addressing Extensions (PAE) on Windows 2000, an AWE enabled application can support up to 64 GB of memory Reserved Memory Pages in a processes address space are free, reserved or committed. Reserving memory address space is a way to reserve a range of virtual addresses for later use. If you attempt to access a reserved address that has not yet been committed (backed by memory or disk) you will cause an access violation. Committed Memory Committed pages are those pages that when accessed in the end translate to pages in memory. Those pages may however have to be faulted in from a page file or memory mapped file. Backing Store Backing store is the physical representation of a memory address. Page Fault (Soft/Hard) A reference to an invalid page (a page that is not in your working set) is referred to as a page fault. Assuming the page reference does not result in an access violation, a page fault can be either hard or soft. A hard page fault results in a read from disk, either a page file or memory-mapped file. A soft page fault is resolved from one of the modified, standby, free or zero page transition lists. Paging is represented by a number of counters including page faults/sec, page input/sec and page output/sec. Page faults/sec include soft and hard page faults where as the page input/output counters represent hard page faults. Unfortunately, all of these counters include file system cache activity. For more information, see also…Inside Windows 2000,Third Edition, pp. 443-451. Private Bytes Private non-shared committed address space Working Set The subset of processes virtual pages that is resident in physical memory. For more information, see also… Inside Windows 2000,Third Edition, p. 455. System Working Set Like a process, the system has a working set. Five different types of pages represent the system’s working set: system cache; paged pool; pageable code and data in the kernel; page-able code and data in device drivers; and system mapped views. The system working set is represented by the counter Memory: cache bytes. System working set paging activity can be viewed by monitoring the Memory: Cache Faults/sec counter. For more information, see also… Inside Windows 2000,Third Edition, p. 463. System Cache The Windows 2000 cache manager provides data caching for both local and network file system drivers. By caching virtual blocks, the cache manager can reduce disk I/O and provide intelligent read ahead. Represented by Memory:Cache Resident bytes. For more information, see also… Inside Windows 2000,Third Edition, pp. 654-659. Non Paged Pool Range of addresses guaranteed to be resident in physical memory. As such, non-paged pool can be accessed at any time without incurring a page fault. Because device drivers operate at DPC/dispatch level (covered in lesson 2), and page faults are not allowed at this level or above, most device drivers use non-paged pool to assure that they do not incur a page fault. Represented by Memory: Pool Nonpaged Bytes, typically between 3-30 megabytes (MB) in size. Note The pool is, in effect, a common area of memory shared by all processes. One of the most common uses of non-paged pool is the storage of object handles. For more information regarding “maximums,” see also… Inside Windows 2000,Third Edition, pp. 403-404 Paged Pool Range of address that can be paged in and out of physical memory. Typically used by drivers who need memory but do not need to access that memory from DPC/dispatch of above interrupt level. 
Represented by Memory: Pool Paged Bytes and Memory:Pool Paged Resident Bytes. Typically between 10-30MB + size of Registry. For more information regarding “limits,” see also… Inside Windows 2000,Third Edition, pp. 403-404. Stack Each thread has two stacks, one for kernel mode and one for user mode. A stack is an area of memory in which program procedure or function call addresses and parameters are temporarily stored. In Process To run in the same address space. In-process servers are loaded in the client’s address space because they are implemented as DLLs. The main advantage of running in-process is that the system usually does not need to perform a context switch. The disadvantage to running in-process is that DLL has access to the process address space and can potentially cause problems. Out of Process To run outside the calling processes address space. OLEDB providers can run in-process or out of process. When running out of process, they run under the context of DLLHOST.EXE. Memory Leak To reserve or commit memory and unintentionally not release it when it is no longer being used. A process can leak resources such as process memory, pool memory, user and GDI objects, handles, threads, and so on. Memory Concepts (X86 Address Space) Per Process Address Space Every process has its own private virtual address space. For 32-bit processes, that address space is 4 GB, based on a 32-bit pointer. Each process’s virtual address space is split into user and system partitions based on the underlying operating system. The diagram included at the top represents the address partitioning for the 32-bit version of Windows 2000. Typically, the process address space is evenly divided into two 2-GB regions. Each process has access to 2 GB of the 4 GB address space. The upper 2 GB of address space is reserved for the system. The user address space is where application code, global variables, per-thread stacks, and DLL code would reside. The system address space is where the kernel, executive, HAL, boot drivers, page tables, pool, and system cache reside. For specific information regarding address space layout, refer to Inside Microsoft Windows 2000 Third Edition pages 417-428 by Microsoft Press. Access Modes Each virtual memory address is tagged as to what access mode the processor must be running in. System space can only be accessed while in kernel mode, while user space is accessible in user mode. This protects system space from being tampered with by user mode code. Shared System Space Although every process has its own private memory space, kernel mode code and drivers share system space. Windows 2000 does not provide any protection to private memory being use by components running in kernel mode. As such, it is very important to ensure components running in kernel mode are thoroughly tested. 3-GB Address Space 3-GB Address Space Although 2 GB of address space may seem like a large amount of memory, application such as SQL Server could leverage more memory if it were available. The boot.ini option /3GB was created for those cases where systems actually support greater than 2 GB of physical memory and an application can make use of it This capability allows memory intensive applications running on Windows 2000 Advanced Server to use up to 50 percent more virtual memory on Intel-based computers. Application memory tuning provides more of the computer's virtual memory to applications by providing less virtual memory to the operating system. 
Although a system having less than 2 GB of physical memory can be booted using the /3G switch, in most cases this is ill-advised. If you restart with the 3 GB switch, also known as 4-Gig Tuning, the amount of non-paged pool is reduced to 128 MB from 256 MB. For a process to access 3 GB of address space, the executable image must have been linked with the /LARGEADDRESSAWARE flag or modified using Imagecfg.exe. It should be pointed out that SQL Server was linked using the /LAREGEADDRESSAWARE flag and can leverage 3 GB when enabled. Note Even though you can boot Windows 2000 Professional or Windows 2000 Server with the /3GB boot option, users processes are still limited to 2 GB of address space even if the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in the image. The only thing accomplished by using the /3G option on these system is the reduction in the amount of address space available to the system (ISW2K Pg. 418). Important If you use /3GB in conjunction with AWE/PAE you are limited to 16 GB of memory. For more information, see the following Knowledge Base articles: Q171793 Information on Application Use of 4GT RAM Tuning Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT Q247904 How to Configure Paged Pool and System PTE Memory Areas Q274598 W2K Does Not Enable Complete Memory Dumps Between 2 & 4 GB AWE Memory Layout AWE Memory Usually, the operation system is limited to 4 GB of physical memory. However, by leveraging PAE, Windows 2000 Advanced Server can support up to 8 GB of memory, and Data Center 64 GB of memory. However, as stated previously, each 32-bit process normally has access to only 2 GB of address space, or 3 GB if the system was booted with the /3-GB option. To allow processes to allocate more physical memory than can be represented in the 2GB of address space, Microsoft created the Address Windows Extensions (AWE). These extensions allow for the allocation and use of up to the amount of physical memory supported by the operating system. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the physical memory. This allows a process to access any portion of the physical memory by mapping regions of physical memory in and out of the applications window. The allocation and use of AWE memory is accomplished by  Creating a window via VirtualAlloc using the MEM_PHYSICAL option  Allocating the physical pages through AllocateUserPhysicalPages  Mapping the RAM pages to the window using MapUserPhysicalPages Note SQL Server 7.0 supports a feature called extended memory in Windows NT® 4 Enterprise Edition by using a PSE36 driver. Currently there are no PSE drivers for Windows 2000. The preferred method of accessing extended memory is via the Physical Addressing Extensions using AWE. The AWE mapping feature is much more efficient than the older process of coping buffers from extended memory into the process address space. Unfortunately, SQL Server 7.0 cannot leverage PAE/AWE. Because there are currently no PSE36 drivers for Windows 2000 this means SQL Server 7.0 cannot support more than 3GB of memory on Windows 2000. Refer to KB article Q278466. AWE restrictions  The process must have Lock Pages In Memory user rights to use AWE Important It is important that you use Enterprise Manager or DMO to change the service account. Enterprise Manager and DMO will grant all of the privileges and Registry and file permissions needed for SQL Server. 
The Service Control Panel does NOT grant all the rights or permissions needed to run SQL Server.  Pages are not shareable or page-able  Page protection is limited to read/write  The same physical page cannot be mapped into two separate AWE regions, even within the same process.  The use of AWE/PAE in conjunction with /3GB will limit the maximum amount of supported memory to between 12-16 GB of memory.  Task manager does not show the correct amount of memory allocated to AWE-enabled applications. You must use Memory Manager: Total Server Memory. It should, however, be noted that this only shows memory in use by the buffer pool.  Machines that have PAE enabled will not dump user mode memory. If an event occurs in User Mode Memory that causes a blue screen and root cause determination is absolutely necessary, the machine must be booted with the /NOPAE switch, and with /MAXMEM set to a number appropriate for transferring dump files.  With AWE enabled, SQL Server will, by default, allocate almost all memory during startup, leaving 256 MB or less free. This memory is locked and cannot be paged out. Consuming all available memory may prevent other applications or SQL Server instances from starting. Note PAE is not required to leverage AWE. However, if you have more than 4GB of physical memory you will not be able to access it unless you enable PAE. Caution It is highly recommended that you use the “max server memory” option in combination with “awe enabled” to ensure some memory headroom exists for other applications or instances of SQL Server, because AWE memory cannot be shared or paged. For more information, see the following Knowledge Base articles: Q268363 Intel Physical Addressing Extensions (PAE) in Windows 2000 Q241046 Cannot Create a dump File on Computers with over 4 GB RAM Q255600 Windows 2000 utilities do not display physical memory above 4GB Q274750 How to configure SQL Server memory more than 2 GB (Idea) Q266251 Memory dump stalls when PAE option is enabled (Idea) Tip The KB will return more hits if you query on PAE rather than AWE. Virtual Address Space Mapping Virtual Address Space Mapping By default Windows 2000 (on an X86 platform) uses a two-level (three-level when PAE is enabled) page table structure to translate virtual addresses to physical addresses. Each 32-bit address has three components, as shown below. When a process accesses a virtual address the system must first locate the Page Directory for the current process via register CR3 (X86). The first 10 bits of the virtual address act as an index into the Page Directory. The Page Directory Entry then points to the Page Frame Number (PFN) of the appropriate Page Table. The next 10 bits of the virtual address act as an index into the Page Table to locate the appropriate page. If the page is valid, the PTE contains the PFN of the actual page in memory. If the page is not valid, the memory management fault handler locates the page and attempts to make it valid. The final 12 bits act as a byte offset into the page. Note This multi-step process is expensive. This is why systems have translation look aside buffers (TLB) to speed up the process. One of the reasons context switching is so expensive is the translation buffers must be dumped. Thus, the first few lookups are very expensive. Refer to ISW2K pages 439-440. Core System Memory Related Counters Core System Memory Related Counters When evaluating memory performance you are looking at a wide variety of counters. 
The counters listed here are a few of the core counters that give you quick overall view of the state of memory. The two key counters are Available Bytes and Committed Bytes. If Committed Bytes exceeds the amount of physical memory in the system, you can be assured that there is some level of hard page fault activity happening. The goal of a well-tuned system is to have as little hard paging as possible. If Available Bytes is below 5 MB, you should investigate why. If Available Bytes is below 4 MB, the Working Set Manager will start to aggressively trim the working sets of process including the system cache.  Committed Bytes Total memory, including physical and page file currently committed  Commit Limit • Physical memory + page file size • Represents the total amount of memory that can be committed without expanding the page file. (Assuming page file is allowed to grow)  Available Bytes Total physical memory currently available Note Available Bytes is a key indicator of the amount of memory pressure. Windows 2000 will attempt to keep this above approximately 4 MB by aggressively trimming the working sets including system cache. If this value is constantly between 3-4 MB, it is cause for investigation. One counter you might expect would be for total physical memory. Unfortunately, there is no specific counter for total physical memory. There are however many other ways to determine total physical memory. One of the most common is by viewing the Performance tab of Task Manager. Page File Usage The only counters that show current page file space usage are Page File:% Usage and Page File:% Peak Usage. These two counters will give you an indication of the amount of space currently used in the page file. Memory Performance Memory Counters There are a number of counters that you need to investigate when evaluating memory performance. As stated previously, no single counter provides the entire picture. You will need to consider many different counters to begin to understand the true state of memory. Note The counters listed are a subset of the counters you should capture. *Available Bytes In general, it is desirable to see Available Bytes above 5 MB. SQL Servers goal on Intel platforms, running Windows NT, is to assure there is approximately 5+ MB of free memory. After Available Bytes reaches 4 MB, the Working Set Manager will start to aggressively trim the working sets of process and, finally, the system cache. This is not to say that working set trimming does not happen before 4 MB, but it does become more pronounced as the number of available bytes decreases below 4 MB. Page Faults/sec Page Faults/sec represents the total number of hard and soft page faults. This value includes the System Working Set as well. Keep this in mind when evaluating the amount of paging activity in the system. Because this counter includes paging associated with the System Cache, a server acting as a file server may have a much higher value than a dedicated SQL Server may have. The System Working Set is covered in depth on the next slide. Because Page Faults/sec includes soft faults, this counter is not as useful as Pages/sec, which represents hard page faults. Because of the associated I/O, hard page faults tend to be much more expensive. *Pages/sec Pages/sec represent the number of pages written/read from disk because of hard page faults. It is the sum of Memory: Pages Input/sec and Memory: Pages Output/sec. 
Because it is counted in numbers of pages, it can be compared to other counts of pages, such as Memory: Page Faults/sec, without conversion. On a well-tuned system, this value should be consistently low. In and of itself, a high value for this counter does not necessarily indicate a problem. You will need to isolate the paging activity to determine if it is associated with in-paging, out-paging, memory mapped file activity or system cache. Any one of these activities will contribute to this counter. Note Paging in and of itself is not necessarily a bad thing. Paging is only “bad” when a critical process must wait for it’s pages to be in-paged, or when the amount of read/write paging is causing excessive kernel time or disk I/O, thus interfering with normal user mode processing. Tip (Memory: Pages/sec) / (PhysicalDisk: Disk Bytes/sec * 4096) yields the approximate percentage of paging to total disk I/O. Note, this is only relevant on X86 platforms with a 4 KB page size. Page Reads/sec (Hard Page Fault) Page Reads/sec is the number of times the disk was accessed to resolve hard page faults. It includes reads to satisfy faults in the file system cache (usually requested by applications) and in non-cached memory mapped files. This counter counts numbers of read operations, without regard to the numbers of pages retrieved by each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Page Writes/sec (Hard Page Fault) Page Writes/sec is the number of times pages were written to disk to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data, not code. This counter counts write operations, without regard to the number of pages written in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. *Pages Input/sec (Hard Page Fault) Pages Input/sec is the number of pages read from disk to resolve hard page faults. It includes pages retrieved to satisfy faults in the file system cache and in non-cached memory mapped files. This counter counts numbers of pages, and can be compared to other counts of pages, such as Memory:Page Faults/sec, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. This is one of the key counters to monitor for potential performance complaints. Because a process must wait for a read page fault this counter, read page faults have a direct impact on the perceived performance of a process. *Pages Output/sec (Hard Page Fault) Pages Output/sec is the number of pages written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows NT writes more pages back to disk to free up space when physical memory is in short supply. This counter counts numbers of pages, and can be compared to other counts of pages, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Like Pages Input/sec, this is one of the key counters to monitor. 
Processes will generally not notice write page faults unless the disk I/O begins to interfere with normal data operations. Demand Zero Faults/Sec (Soft Page Fault) Demand Zero Faults/sec is the number of page faults that require a zeroed page to satisfy the fault. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security feature of Windows NT. Windows NT maintains a list of zeroed pages to accelerate this process. This counter counts numbers of faults, without regard to the numbers of pages retrieved to satisfy the fault. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Transition Faults/Sec (Soft Page Fault) Transition Faults/sec is the number of page faults resolved by recovering pages that were on the modified page list, on the standby list, or being written to disk at the time of the page fault. The pages were recovered without additional disk activity. Transition faults are counted in numbers of faults, without regard for the number of pages faulted in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. System Working Set System Working Set Like processes, the system page-able code and data are managed by a working set. For the purpose of this course, that working set is referred to as the System Working Set. This is done to differentiate the system cache portion of the working set from the entire working set. There are five different types of pages that make up the System Working Set. They are: system cache; paged pool; page-able code and data in ntoskrnl.exe; page-able code, and data in device drivers and system-mapped views. Unfortunately, some of the counters that appear to represent the system cache actually represent the entire system working set. Where noted system cache actually represents the entire system working set. Note The counters listed are a subset of the counters you should capture. *Memory: Cache Bytes (Represents Total System Working Set) Represents the total size of the System Working Set including: system cache; paged pool; pageable code and data in ntoskrnl.exe; pageable code and data in device drivers; and system-mapped views. Cache Bytes is the sum of the following counters: System Cache Resident Bytes, System Driver Resident Bytes, System Code Resident Bytes, and Pool Paged Resident Bytes. Memory: System Cache Resident Bytes (System Cache) System Cache Resident Bytes is the number of bytes from the file system cache that are resident in physical memory. Windows 2000 Cache Manager works with the memory manager to provide virtual block stream and file data caching. For more information, see also…Inside Windows 2000,Third Edition, pp. 645-650 and p. 656. Memory: Pool Paged Resident Bytes Represents the physical memory consumed by Paged Pool. This counter should NOT be monitored by itself. You must also monitor Memory: Paged Pool. A leak in the pool may not show up in Pool paged Resident Bytes. Memory: System Driver Resident Bytes Represents the physical memory consumed by driver code and data. System Driver Resident Bytes and System Driver Total Bytes do not include code that must remain in physical memory and cannot be written to disk. Memory: System Code Resident Bytes Represents the physical memory consumed by page-able system code. 
System Code Resident Bytes and System Code Total Bytes do not include code that must remain in physical memory and cannot be written to disk. Working Set Performance Counter You can measure the number of page faults in the System Working Set by monitoring the Memory: Cache Faults/sec counter. Contrary to the “Explain” shown in System Monitor, this counter measures the total amount of page faults/sec in the System Working Set, not only the System Cache. You cannot measure the performance of the System Cache using this counter alone. For more information, see also…Inside Windows 2000,Third Edition, p. 656. Note You will find that in general the working set manager will usually trim the working sets of normal processes prior to trimming the system working set. System Cache System Cache The Windows 2000 cache manager provides a write-back cache with lazy writing and intelligent read-ahead. Files are not written to disk immediately but differed until the cache manager calls the memory manager to flush the cache. This helps to reduce the total number of I/Os. Once per second, the lazy writer thread queues one-eighth of the dirty pages in the system cache to be written to disk. If this is not sufficient to meet the needs, the lazy writer will calculate a larger value. If the dirty page threshold is exceeded prior to lazy writer waking, the cache manager will wake the lazy writer. Important It should be pointed out that mapped files or files opened with FILE_FLAG_NO_BUFFERING, do not participate in the System Cache. For more information regarding mapped views, see also…Inside Windows 2000,Third Edition, p. 669. For those applications that would like to leverage system cache but cannot tolerate write delays, the cache manager supports write through operations via the FILE_FLAG_WRITE_THROUGH. On the other hand, an application can disable lazy writing by using the FILE_ATTRIBUTE_TEMPORARY. If this flag is enabled, the lazy writer will not write the pages to disk unless there is a shortage of memory or the file is closed. Important Microsoft SQL Server uses both FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH Tip The file system cache is not represented by a static amount of memory. The system cache can and will grow. It is not unusual to see the system cache consume a large amount of memory. Like other working sets, it is trimmed under pressure but is generally the last thing to be trimmed. System Cache Performance Counters The counters listed are a subset of the counters you should capture. Cache: Data Flushes/sec Data Flushes/sec is the rate at which the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request. More than one page can be transferred on each flush operation. Cache: Data Flush Pages/sec Data Flush Pages/sec is the number of pages the file system cache has flushed to disk as a result of a request to flush or to satisfy a write-through file write request. Cache: Lazy Write Flushes/sec Represents the rate of lazy writes to flush the system cache per second. More than one page can be transferred per second. Cache: Lazy Write Pages/sec Lazy Write Pages/sec is the rate at which the Lazy Writer thread has written to disk. Note When looking at Memory:Cache Faults/sec, you can remove cache write activity by subtracting (Cache: Data Flush Pages/sec + Cache: Lazy Write Pages/sec). 
This will give you a better idea of how much other page faulting activity is associated with the other components of the System Working Set. However, you should note that there is no easy way to remove the page faults associated with file cache read activity. For more information, see the following Knowledge Base articles: Q145952 (NT4) Event ID 26 Appears If Large File Transfer Fails Q163401 (NT4) How to Disable Network Redirector File Caching Q181073 (SQL 6.5) DUMP May Cause Access Violation on Win2000 System Pool System Pool As documented earlier, there are two types of shared pool memory: non-paged pool and paged pool. Like private memory, pool memory is susceptible to a leak. Nonpaged Pool Miscellaneous kernel code and structures, and drivers that need working memory while at or above DPC/dispatch level use non-paged pool. The primary counter for non-paged pool is Memory: Pool Nonpaged Bytes. This counter will usually between 3 and 30 MB. Paged Pool Drivers that do not need to access memory above DPC/Dispatch level are one of the primary users of paged pool, however any process can use paged pool by leveraging the ExAllocatePool calls. Paged pool also contains the Registry and file and printing structures. The primary counters for monitoring paged pool is Memory: Pool Paged Bytes. This counter will usually be between 10-30MB plus the size of the Registry. To determine how much of paged pool is currently resident in physical memory, monitor Memory: Pool Paged Resident Bytes. Note The paged and non-paged pools are two of the components of the System Working Set. If a suspected leak is clearly visible in the overview and not associated with a process, then it is most likely a pool leak. If the leak is not associated with SQL Server handles, OLDEB providers, XPROCS or SP_OA calls then most likely this call should be pushed to the Windows NT group. For more information, see the following Knowledge Base articles: Q265028 (MS) Pool Tags Q258793 (MS) How to Find Memory Leaks by Using Pool Bitmap Analysis Q115280 (MS) Finding Windows NT Kernel Mode Memory Leaks Q177415 (MS) How to Use Poolmon to Troubleshoot Kernel Mode Memory Leaks Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT Q247904 How to Configure Paged Pool and System PTE Memory Areas Tip To isolate pool leaks you will need to isolate all drivers and third-party processes. This should be done by disabling each service or driver one at a time and monitoring the effect. You can also monitor paged and non-paged pool through poolmon. If pool tagging has been enabled via GFLAGS, you may be able to associate the leak to a particular tag. If you suspect a particular tag, you should involve the platform support group. Process Memory Counters Process _Total Limitations Although the rollup of _Total for Process: Private Bytes, Virtual Bytes, Handles and Threads, represent the key resources being used across all processes, they can be misleading when evaluating a memory leak. This is because a leak in one process may be masked by a decrease in another process. Note The counters listed are a subset of the counters you should capture. Tip When analyzing memory leaks, it is often easier to a build either a separate chart or report showing only one or two key counters for all process. The primary counter used for leak analysis is private bytes, but processes can leak handles and threads just as easily. After a suspect process is located, build a separate chart that includes all the counters for that process. 
Individual Process Counters When analyzing individual process for memory leaks you should include the counters listed.  Process: % Processor Time  Process: Working Set (includes shared pages)  Process: Virtual Bytes  Process: Private Bytes  Process: Page Faults/sec  Process: Handle Count  Process: Thread Count  Process: Pool Paged Bytes  Process: Pool Nonpaged Bytes Tip WINLOGON, SVCHOST, services, or SPOOLSV are referred to as HELPER processes. They provide core functionality for many operations and as such are often extended by the addition of third-party DLLs. Tlist –s may help identify what services are running under a particular helper. Helper Processes Helper Processes Winlogon, Services, and Spoolsv and Svchost are examples of what are referred to as HELPER processes. They provide core functionality for many operations and, as such, are often extended by the addition of third-party DLLs. Running every service in its own process can waste system resources. Consequently, some services run in their own processes while others share a process with other services. One problem with sharing a process is that a bug in one service may cause the entire process to fail. The resource kit tool, Tlist when used with the –s qualifier can help you identify what services are running in what processes. WINLOGON Used to support GINAs. SPOOLSV SPOOLSV is responsible for printing. You will need to investigate all added printing functionality. Services Service is responsible for system services. Svchost.exe Svchost.exe is a generic host process name for services that are run from dynamic-link libraries (DLLs). There can be multiple instances of Svchost.exe running at the same time. Each Svchost.exe session can contain a grouping of services, so that separate services can be run depending on how and where Svchost.exe is started. This allows for better control and debugging. The Effect of Memory on Other Components Memory Drives Overall Performance Processor, cache, bus speeds, I/O, all of these resources play a roll in overall perceived performance. Without minimizing the impact of these components, it is important to point out that a shortage of memory can often have a larger perceived impact on performance than a shortage of some other resource. On the other hand, an abundance of memory can often be leveraged to mask bottlenecks. For instance, in certain environments, file system cache can significantly reduce the amount of disk I/O, potentially masking a slow I/O subsystem. Effect on I/O I/O can be driven by a number of memory considerations. Page read/faults will cause a read I/O when a page is not in memory. If the modified page list becomes too long the Modified Page Writer and Mapped Page Writer will need to start flushing pages causing disk writes. However, the one event that can have the greatest impact is running low on available memory. In this case, all of the above events will become more pronounced and have a larger impact on disk activity. Effect on CPU The most effective use of a processor from a process perspective is to spend as much time possible executing user mode code. Kernel mode represents processor time associated with doing work, directly or indirectly, on behalf of a thread. This includes items such as synchronization, scheduling, I/O, memory management, and so on. Although this work is essential, it takes processor cycles and the cost, in cycles, to transition between user and kernel mode is expensive. 
Because all memory management and I/O functions must be done in kernel mode, it follows that the fewer the memory resources the more cycles are going to be spent managing those resources. A direct result of low memory is that the Working Set Manager, Modified Page Writer and Mapped Page Writer will have to use more cycles attempting to free memory. Analyzing Memory Look for Trends and Trend Relationships Troubleshooting performance is about analyzing trends and trend relationships. Establishing that some event happened is not enough. You must establish the effect of the event. For example, you note that paging activity is high at the same time that SQL Server becomes slow. These two individual facts may or may not be related. If the paging is not associated with SQL Servers working set, or the disks SQL is using there may be little or no cause/affect relationship. Look at Physical Memory First The first item to look at is physical memory. You need to know how much physical and page file space the system has to work with. You should then evaluate how much available memory there is. Just because the system has free memory does not mean that there is not any memory pressure. Available Bytes in combination with Pages Input/sec and Pages Output/sec can be a good indicator as to the amount of pressure. The goal in a perfect world is to have as little hard paging activity as possible with available memory greater than 5 MB. This is not to say that paging is bad. On the contrary, paging is a very effective way to manage a limited resource. Again, we are looking for trends that we can use to establish relationships. After evaluating physical memory, you should be able to answer the following questions:  How much physical memory do I have?  What is the commit limit?  Of that physical memory, how much has the operating system committed?  Is the operating system over committing physical memory?  What was the peak commit charge?  How much available physical memory is there?  What is the trend associated with committed and available? Review System Cache and Pool Contribution After you understand the individual process memory usage, you need to evaluate the System Cache and Pool usage. These can and often represent a significant portion of physical memory. Be aware that System Cache can grow significantly on a file server. This is usually normal. One thing to consider is that the file system cache tends to be the last thing trimmed when memory becomes low. If you see abrupt decreases in System Cache Resident Bytes when Available Bytes is below 5 MB you can be assured that the system is experiencing excessive memory pressure. Paged and non-paged pool size is also important to consider. An ever-increasing pool should be an indicator for further research. Non-paged pool growth is usually a driver issue, while paged pool could be driver-related or process-related. If paged pool is steadily growing, you should investigate each process to see if there is a specific process relationship. If not you will have to use tools such as poolmon to investigate further. Review Process Memory Usage After you understand the physical memory limitations and cache and pool contribution you need to determine what components or processes are creating the pressure on memory, if any. Be careful if you opt to chart the _Total Private Byte’s rollup for all processes. This value can be misleading in that it includes shared pages and can therefore exceed the actual amount of memory being used by the processes. 
The _Total rollup can also mask processes that are leaking memory because other processes may be freeing memory thus creating a balance between leaked and freed memory. Identify processes that expand their working set over time for further analysis. Also, review handles and threads because both use resources and potentially can be mismanaged. After evaluating the process resource usage, you should be able to answer the following:  Are any of the processes increasing their private bytes over time?  Are any processes growing their working set over time?  Are any processes increasing the number of threads or handles over time?  Are any processes increasing their use of pool over time?  Is there a direct relationship between the above named resources and total committed memory or available memory?  If there is a relationship, is this normal behavior for the process in question? For example, SQL does not commit ‘min memory’ on startup; these pages are faulted in into the working set as needed. This is not necessarily an indication of a memory leak.  If there is clearly a leak in the overview and is not identifiable in the process counters it is most likely in the pool.  If the leak in pool is not associated with SQL Server handles, then more often than not, it is not a SQL Server issue. There is however the possibility that the leak could be associated with third party XPROCS, SP_OA* calls or OLDB providers. Review Paging Activity and Its Impact on CPU and I/O As stated earlier, paging is not in and of itself a bad thing. When starting a process the system faults in the pages of an executable, as they are needed. This is preferable to loading the entire image at startup. The same can be said for memory mapped files and file system cache. All of these features leverage the ability of the system to fault in pages as needed The greatest impact of paging on a process is when the process must wait for an in-page fault or when page file activity represents a significant portion of the disk activity on the disk the application is actively using. After evaluating page fault activity, you should be able to answer the following questions:  What is the relationship between PageFaults/sec and Page Input/sec + Page Output/Sec?  What is the relationship if any between hard page faults and available memory?  Does paging activity represent a significant portion of processor or I/O resource usage? Don’t Prematurely Jump to Any Conclusions Analyzing memory pressure takes time and patience. An individual counter in and of it self means little. It is only when you start to explore relationships between cause and effect that you can begin to understand the impact of a particular counter. The key thoughts to remember are:  With the exception of a swap (when the entire process’s working set has been swapped out/in), hard page faults to resolve reads, are the most expensive in terms its effect on a processes perceived performance.  In general, page writes associated with page faults do not directly affect a process’s perceived performance, unless that process is waiting on a free page to be made available. Page file activity can become a problem if that activity competes for a significant percentage of the disk throughput in a heavy I/O orientated environment. That assumes of course that the page file resides on the same disk the application is using. 
Lab 3.1 System Memory Lab 3.1 Analyzing System Memory Using System Monitor Exercise 1 – Troubleshooting the Cardinal1.log File Students will evaluate an existing System Monitor log and determine if there is a problem and what the problem is. Students should be able to isolate the issue as a memory problem, locate the offending process, and determine whether or not this is a pool issue. Exercise 2 – Leakyapp Behavior Students will start leaky app and monitor memory, page file and cache counters to better understand the dynamics of these counters. Exercise 3 – Process Swap Due To Minimizing of the Cmd Window Students will start SQL from command line while viewing SQL process performance counters. Students will then minimize the window and note the effect on the working set. Overview What You Will Learn After completing this lab, you will be able to:  Use some of the basic functions within System Monitor.  Troubleshoot one or more common performance scenarios. Before You Begin Prerequisites To complete this lab, you need the following:  Windows 2000  SQL Server 2000  Lab Files Provided  LeakyApp.exe (Resource Kit) Estimated time to complete this lab: 45 minutes Exercise 1 Troubleshooting the Cardinal1.log File In this exercise, you will analyze a log file from an actual system that was having performance problems. Like an actual support engineer, you will not have much information from which to draw conclusions. The customer has sent you this log file and it is up to you to find the cause of the problem. However, unlike the real world, you have an instructor available to give you hints should you become stuck. Goal Review the Cardinal1.log file (this file is from Windows NT 4.0 Performance Monitor, which Windows 2000 can read). Chart the log file and begin to investigate the counters to determine what is causing the performance problems. Your goal should be to isolate the problem to a major area such as pool, virtual address space etc, and begin to isolate the problem to a specific process or thread. This lab requires access to the log file Cardinal1.log located in C:\LABS\M3\LAB1\EX1  To analyze the log file 1. Using the Performance MMC, select the System Monitor snap-in, and click the View Log File Data button (icon looks like a disk). 2. Under Files of type, choose PERFMON Log Files (*.log) 3. Navigate to the folder containing Cardinal1.log file and open it. 4. Begin examining counters to find what might be causing the performance problems. When examining some of these counters, you may notice that some of them go off the top of the chart. It may be necessary to adjust the scale on these. This can be done by right-clicking the rightmost pane and selecting Properties. Select the Data tab. Select the counter that you wish to modify. Under the Scale option, change the scale value, which makes the counter data visible on the chart. You may need to experiment with different scale values before finding the ideal value. Also, it may sometimes be beneficial to adjust the vertical scale for the entire chart. Selecting the Graph tab on the Properties page can do this. In the Vertical scale area, adjust the Maximum and Minimum values to best fit the data on the chart. Lab 3.1, Exercise 1: Results Exercise 2 LeakyApp Behavior In this lab, you will have an opportunity to work with a partner to monitor a live system, which is suffering from a simulated memory leak. Goal During this lab, your goal is to observe the system behavior when memory starts to become a limited resource. 
Specifically you will want to monitor committed memory, available memory, the system working set including the file system cache and each processes working set. At the end of the lab, you should be able to provide an answer to the listed questions.  To monitor a live system with a memory leak 1. Choose one of the two systems as a victim on which to run the leakyapp.exe program. It is recommended that you boot using the \MAXMEM=128 option so that this lab goes a little faster. You and your partner should decide which server will play the role of the problematic server and which server is to be used for monitoring purposes. 2. On the problematic server, start the leakyapp program. 3. On the monitoring system, create a counter that logs all necessary counters need to troubleshoot a memory problem. This should include physicaldisk counters if you think paging is a problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested interval for capturing is five seconds. 4. After the counters have been started, start the leaky application program 5. Click Start Leaking. The button will now change to Stop Leaking, which indicates that the system is now leaking memory. 6. After leakyapp shows the page file is 50 percent full, click Stop leaking. Note that the process has not given back its memory, yet. After approximately one minute, exit. Lab 3.1, Exercise 2: Questions After analyzing the counter logs you should be able to answer the following: 1. Under which system memory counter does the leak show up clearly? Memory:Committed Bytes 2. What process counter looked very similar to the overall system counter that showed the leak? Private Bytes 3. Is the leak in Paged Pool, Non-paged pool, or elsewhere? Elsewhere 4. At what point did Windows 2000 start to aggressively trim the working sets of all user processes? <5 MB Free 5. Was the System Working Set trimmed before or after the working sets of other processes? After 6. What counter showed this? Memory:Cache Bytes 7. At what point was the File System Cache trimmed? After the first pass through all other working sets 8. What was the effect on all the processes working set when the application quit leaking? None 9. What was the effect on all the working sets when the application exited? Nothing, initially; but all grew fairly quickly based on use 10. When the server was running low on memory, which was Windows spending more time doing, paging to disk or in-paging? Paging to disk, initially; however, as other applications began to run, in-paging increased Exercise 3 Minimizing a Command Window In this exercise, you will have an opportunity to observe the behavior of Windows 2000 when a command window is minimized. Goal During this lab, your goal is to observe the behavior of Windows 2000 when a command window becomes minimized. Specifically, you will want to monitor private bytes, virtual bytes, and working set of SQL Server when the command window is minimized. At the end of the lab, you should be able to provide an answer to the listed questions.  To monitor a command window’s working set as the window is minimized 1. Using System Monitor, create a counter list that logs all necessary counters needed to troubleshoot a memory problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested capturing interval is five seconds. 2. After the counters have been started, start a Command Prompt window on the target system. 3. 
In the command window, start SQL Server from the command line. Example: SQL Servr.exe –c –sINSTANCE1 4. After SQL Server has successfully started, Minimize the Command Prompt window. 5. Wait approximately two minutes, and then Restore the window. 6. Wait approximately two minutes, and then stop the counter log. Lab 3.1, Exercise 3: Questions After analyzing the counter logs you should be able to answer the following questions: 1. What was the effect on SQL Servers private bytes, virtual bytes, and working set when the window was minimized? Private Bytes and Virtual Bytes remained the same, while Working Set went to 0 2. What was the effect on SQL Servers private bytes, virtual bytes, and working set when the window was restored? None; the Working Set did not grow until SQL accessed the pages and faulted them back in on an as-needed basis SQL Server Memory Overview SQL Server Memory Overview Now that you have a better understanding of how Windows 2000 manages memory resources, you can take a closer look at how SQL Server 2000 manages its memory. During the course of the lecture and labs you will have the opportunity to monitor SQL Servers use of memory under varying conditions using both System Monitor counters and SQL Server tools. SQL Server Memory Management Goals Because SQL Server has in-depth knowledge about the relationships between data and the pages they reside on, it is in a better position to judge when and what pages should be brought into memory, how many pages should be brought in at a time, and how long they should be resident. SQL Servers primary goals for management of its memory are the following:  Be able to dynamically adjust for varying amounts of available memory.  Be able to respond to outside memory pressure from other applications.  Be able to adjust memory dynamically for internal components. Items Covered  SQL Server Memory Definitions  SQL Server Memory Layout  SQL Server Memory Counters  Memory Configurations Options  Buffer Pool Performance and Counters  Set Aside Memory and Counters  General Troubleshooting Process  Memory Myths and Tips SQL Server Memory Definitions SQL Server Memory Definitions Pool A group of resources, objects, or logical components that can service a resource allocation request Cache The management of a pool or resource, the primary goal of which is to increase performance. Bpool The Bpool (Buffer Pool) is a single static class instance. The Bpool is made up of 8-KB buffers and can be used to handle data pages or external memory requests. There are three basic types or categories of committed memory in the Bpool.  Hashed Data Pages  Committed Buffers on the Free List  Buffers known by their owners (Refer to definition of Stolen) Consumer A consumer is a subsystem that uses the Bpool. A consumer can also be a provider to other consumers. There are five consumers and two advanced consumers who are responsible for the different categories of memory. The following list represents the consumers and a partial list of their categories  Connection – Responsible for PSS and ODS memory allocations  General – Resource structures, parse headers, lock manager objects  Utilities – Recovery, Log Manager  Optimizer – Query Optimization  Query Plan – Query Plan Storage Advanced Consumer Along with the five consumers, there are two advanced consumers. They are  Ccache – Procedure cache. Accepts plans from the Optimizer and Query Plan consumers. 
is responsible for managing that memory, and determines when to release the memory back to the Bpool.
- Log Cache – managed by the LogMgr, which uses the Utility consumer to coordinate memory requests with the Bpool.

Reservation: Requesting the future use of a resource. A reservation is a reasonable guarantee that the resource will be available in the future.

Committed: Producing the physical resource.

Allocation: The act of providing the resource to a consumer.

Stolen: The act of getting a buffer from the Bpool is referred to as stealing a buffer. If the buffer is stolen and hashed for a data page, it is referred to as, and counted as, a hashed buffer, not a stolen buffer. Stolen buffers, on the other hand, are buffers used for things such as the procedure cache and SRV_PROC structures.

Target: Target memory is the amount of memory SQL Server would like to maintain as committed memory. Target memory is based on the min and max server configuration values and the current available memory as reported by the operating system. The actual target calculation is operating system specific.

Memory to Leave (Set Aside): The virtual address space set aside to ensure there is sufficient address space for thread stacks, XPROCs, COM objects, etc.

Hashed Page: A page in the pool that represents a database page.

SQL Server Memory Layout

Virtual Address Space

When SQL Server is started, the minimum of physical RAM or the virtual address space supported by the OS is evaluated. There are many possible combinations of OS versions and memory configurations. For example, you could be running Microsoft Windows 2000 Advanced Server with 2 GB or possibly 4 GB of memory. To avoid page file use, the appropriate memory level is evaluated for each configuration.

Important: Utilities can inject a DLL into the process address space by using HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs. When the USER32.dll library is mapped into the process space, so, too, are the DLLs listed in that registry key. To determine which DLLs are running in the SQL Server address space you can use tlist.exe. You can also use a tool such as Depends from Microsoft or HandleEx from http://www.sysinternals.com.

Memory to Leave

As stated earlier, there are many possible configurations of physical memory and address space. It is possible for physical memory to be greater than the virtual address space. To ensure that some virtual address space is always available for things such as thread stacks and external needs such as XPROCs, SQL Server reserves a small portion of virtual address space prior to determining the size of the buffer pool. This address space is referred to as Memory To Leave. Its size is based on the number of anticipated thread stacks and a default value for external needs referred to as cmbAddressSave. After reserving the buffer pool space, the Memory To Leave reservation is released.

Buffer Pool Space

During startup, SQL Server must determine the maximum size of the buffer pool so that the BUF, BUFHASH, and COMMIT BITMAP structures used to manage the Bpool can be created. It is important to understand that SQL Server does not take 'max memory' or existing memory pressure into consideration. The reserved address space of the buffer pool remains static for the life of the SQL Server process. However, the committed space varies as necessary to provide dynamic scaling. Remember that only the committed memory affects the overall memory usage on the machine. This ensures that the max memory configuration setting can be dynamically changed with minimal changes needed to the Bpool. The reserved space does not need to be adjusted and is maximized for the current machine configuration. Only the committed buffers need to be limited to maintain a specified max server memory (MB) setting.
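To illustrate the reserve-then-commit distinction described above, here is a minimal Python sketch using the Win32 VirtualAlloc API via ctypes. It is an illustration of Windows memory semantics only, not SQL Server's actual buffer pool code; the sizes are arbitrary.

# Minimal sketch (Windows + Python ctypes) of reserving address space and later
# committing part of it -- the same pattern described for the Bpool above.
import ctypes
from ctypes import wintypes

MEM_RESERVE = 0x2000
MEM_COMMIT = 0x1000
PAGE_NOACCESS = 0x01
PAGE_READWRITE = 0x04

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.VirtualAlloc.restype = wintypes.LPVOID
kernel32.VirtualAlloc.argtypes = [wintypes.LPVOID, ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD]

# Reserve 64 MB of address space; no physical memory or page file is charged yet.
region = kernel32.VirtualAlloc(None, 64 * 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS)

# Commit 8 MB inside the reservation; only now does committed memory grow.
committed = kernel32.VirtualAlloc(region, 8 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE)
print(hex(region), hex(committed))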
SQL Server Startup Pseudo Code

The following pseudo code represents the process SQL Server goes through on startup.

Warning: This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.

- Determine the size of cmbAddressSave (-g)
- Determine total physical memory
- Determine available physical memory
- Determine total virtual memory
- Calculate MemToLeave: max worker threads * (stack size = 512 KB) + (cmbAddressSave = 256 MB)
- Reserve MemToLeave and set PAGE_NOACCESS
- Check for AWE, test to see if it makes sense to use it, and log the results:
  - Min(Available Memory, Max Server Memory) > Virtual Memory
  - Supports read scatter
  - SQL Server not started with -f
  - AWE enabled via sp_configure
  - Enterprise Edition
  - Lock Pages In Memory user right enabled
- Calculate the virtual address limit: VA Limit = Min(Physical Memory, Virtual Memory - MemToLeave)
- Calculate the number of physical and virtual buffers that can be supported:
  - AWE present: Physical Buffers = RAM / (PAGESIZE + Physical Overhead); Virtual Buffers = VA Limit / (PAGESIZE + Virtual Overhead)
  - AWE not present: Physical Buffers = Virtual Buffers = VA Limit / (PAGESIZE + Physical Overhead + Virtual Overhead)
- Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS)
- Allocate and commit the buffer management structures
- Reserve the address space required to support the Bpool buffers
- Release the MemToLeave
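To make the arithmetic above concrete, here is a minimal Python sketch of the same calculation (no AWE, no /3GB). The constant values and the zero per-buffer overhead are illustrative assumptions, not SQL Server internals.

MB = 1024 * 1024
PAGE_SIZE = 8 * 1024                 # 8-KB buffer page
STACK_SIZE = 512 * 1024              # per worker thread
CMB_ADDRESS_SAVE = 256 * MB          # default when no -g switch is used
MAX_WORKER_THREADS = 255
MIN_BUFFERS = 1024

def startup_sizes(physical_mem, virtual_mem, per_buffer_overhead=0):
    # MemToLeave = worker thread stacks + set-aside for external consumers
    mem_to_leave = MAX_WORKER_THREADS * STACK_SIZE + CMB_ADDRESS_SAVE
    va_limit = min(physical_mem, virtual_mem - mem_to_leave)
    buffers = va_limit // (PAGE_SIZE + per_buffer_overhead)
    return mem_to_leave, va_limit, max(buffers, MIN_BUFFERS)

# Roughly the 384-MB machine used in the worked example that follows.
print(startup_sizes(384 * MB, 2 * 1024 * MB))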
SQL Server Startup Pseudo Code Example

The following is an example based on the pseudo code above. This example is based on a machine with 384 MB of physical memory, not using AWE or /3GB.

Note: cmbAddressSave was changed between SQL Server 7.0 and SQL Server 2000. For SQL Server 7.0, cmbAddressSave was 128.

Warning: This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.

- Determine the size of cmbAddressSave (no -g, so 256 MB)
- Determine total physical memory (384 MB)
- Determine available physical memory (384 MB)
- Determine total virtual memory (2 GB)
- Calculate MemToLeave: max worker threads * (stack size = 512 KB) + (cmbAddressSave = 256 MB); 255 * 0.5 MB + 256 MB = 384 MB
- Reserve MemToLeave and set PAGE_NOACCESS
- Check for AWE, test to see if it makes sense to use it, and log the results (AWE not enabled)
- Calculate the virtual address limit: VA Limit = Min(Physical Memory, Virtual Memory - MemToLeave); 384 MB = Min(384 MB, 2 GB - 384 MB)
- Calculate the number of physical and virtual buffers that can be supported (AWE not present): approximately 48,664 = 384 MB / (8 KB + overhead)
- Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS); 48,664 = Max(48,664, 1,024)
- Allocate and commit the buffer management structures
- Reserve the address space required to support the Bpool buffers
- Release the MemToLeave

Tip: Trace flag 1604 can be used to view memory allocations on startup. The cmbAddressSave value can be adjusted using the -g XXX startup parameter.

SQL Server Memory Counters

The two primary tools for monitoring and analyzing SQL Server memory usage are System Monitor and DBCC MEMORYSTATUS. For detailed information on DBCC MEMORYSTATUS, refer to Q271624, "Interpreting the Output of the DBCC MEMORYSTATUS Command."

Important: The counters presented here are SQL Server 2000 counters; they are not the same as the counters for SQL Server 7.0. The SQL Server 7.0 counters are listed in the appendix.

Determining Memory Usage for OS and BPOOL

Memory Manager: Total Server Memory (KB) – represents all of SQL Server's usage.
Buffer Manager: Total Pages – represents total Bpool usage.

To determine how much of Total Server Memory (KB) represents MemToLeave space, subtract Buffer Manager: Total Pages. The result can be verified against DBCC MEMORYSTATUS, specifically Dynamic Memory Manager: OS In Use. It should, however, be noted that this value only represents requests that went through the Bpool. Memory reserved outside of the Bpool by components such as COM objects will not show up here, although it will count against SQL Server's private byte count.

Buffer Counts: Target (Buffer Manager: Target Pages) – the size the buffer pool would like to be. If this value is larger than committed, the buffer pool is growing.

Buffer Counts: Committed (Buffer Manager: Total Pages) – the total number of buffers committed in the OS. This is the current size of the buffer pool.

Buffer Counts: Min Free – the number of pages that the buffer pool tries to keep on the free list. If the free list falls below this value, the buffer pool will attempt to populate it by discarding old pages from the data or procedure cache.

Buffer Distribution: Free (Buffer Manager / Buffer Partition: Free Pages) – this value represents the buffers currently not in use. These are available for data or may be requested by other components and mar
Some knowledge about radar!

EFFECTIVENESS OF EXTRACTING WATER SURFACE SLOPES FROM LIDAR DATA WITHIN THE ACTIVE CHANNEL: SANDY RIVER, OREGON, USA

by JOHN THOMAS ENGLISH

A THESIS

Presented to the Department of Geography and the Graduate School of the University of Oregon in partial fulfillment of the requirements for the degree of Master of Science, March 2009

"Effectiveness of Extracting Water Surface Slopes from LiDAR Data within the Active Channel: Sandy River, Oregon, USA," a thesis prepared by John Thomas English in partial fulfillment of the requirements for the Master of Science degree in the Department of Geography. This thesis has been approved and accepted by:

Committee in Charge: W. Andrew Marcus, Chair; Patricia F. McDowell
Accepted by: Dean of the Graduate School

© 2009 John Thomas English

An Abstract of the Thesis of John Thomas English for the degree of Master of Science in the Department of Geography, to be taken March 2009.

Title: EFFECTIVENESS OF EXTRACTING WATER SURFACE SLOPES FROM LIDAR DATA WITHIN THE ACTIVE CHANNEL: SANDY RIVER, OREGON, USA

Approved: W. Andrew Marcus

This paper examines the capability of LiDAR data to accurately map river water surface slopes in three reaches of the Sandy River, Oregon, USA. LiDAR data were compared with field measurements to evaluate accuracies and determine how water surface roughness and point density affect LiDAR measurements. Results show that LiDAR derived water surface slopes were accurate to within 0.0047, 0.0025, and 0.0014 slope, with adjusted R² values of 0.35, 0.47, and 0.76 for horizontal intervals of 5, 10, and 20 m, respectively. Additionally, results show LiDAR provides greater data density where water surfaces are broken. This study provides conclusive evidence supporting use of LiDAR to measure water surface slopes of channels with accuracies similar to field based approaches.

CURRICULUM VITAE

NAME OF AUTHOR: John Thomas English
PLACE OF BIRTH: Eugene, Oregon
DATE OF BIRTH: January 1st, 1980

GRADUATE AND UNDERGRADUATE SCHOOLS ATTENDED:
University of Oregon, Eugene, Oregon
Southern Oregon University, Ashland, Oregon

DEGREES AWARDED:
Master of Science, Geography, March 2009, University of Oregon
Bachelor of Science, Geography, 2001, Southern Oregon University

AREAS OF SPECIAL INTEREST:
Fluvial Geomorphology
Remote Sensing

PROFESSIONAL EXPERIENCE:
LiDAR Database Coordinator, Oregon Department of Geology & Mineral Industries, June 2008 – present
LiDAR & Remote Sensing Specialist, Sky Research Inc., 2003 – 2008

GRANTS, AWARDS AND HONORS:
Gamma Theta Upsilon Geographic Society Member, 2006
Graduate Teaching Fellowship, Social Science Instructional Laboratory, 2006–2007

ACKNOWLEDGMENTS

I wish to express special thanks to Professors W. A. Marcus and Patricia McDowell for their assistance in the preparation of this manuscript. In addition, special thanks are due to Mr. Paul Blanton, who assisted with field data collection for this project. I also thank the members of my family, who have been encouraging and supportive during the entirety of my graduate schooling. I wish to thank my parents Thomas and Nancy English for always being proud of me. Special thanks to my son Finn for always making me smile. Lastly, special thanks to my wife Kathryn for her unwavering support, love, and encouragement.

Dedicated to my mother Bonita Claire English (1950–2004).

TABLE OF CONTENTS

I. INTRODUCTION
II. BACKGROUND
   Water Surface Slope
   LiDAR Measurements of Active Channel Features
III. STUDY AREA
IV. METHODS
   Overview
   LiDAR Data and Image Acquisition
   Field Data Acquisition
   LiDAR Processing
   Calculation of Water Surface Slopes
   Evaluating LiDAR Slope Accuracies and Controls
V. RESULTS
   Comparison of Absolute Elevations from Field and LiDAR Data in Reach 1
   Slope Comparisons
   Surface Roughness Analysis
VI. DISCUSSION
VII. CONCLUSION
APPENDIX: ARCGIS VBA SCRIPT CODE
REFERENCES

LIST OF FIGURES

1. Return Factor vs. LiDAR Scan Angle
2. Angle of Incidence
3. Wave Action Relationship to LiDAR Echo
4. Site Map
5. Annual Hydrograph of Sandy River
6. Oregon GAP Vegetation within Study Area
7. Photo of Himalayan Blackberry on Sandy River
8. Reach 1 Site Area Map with Photo
9. Reach 2 Site Area Map
10. Reach 3 Site Area Map
11. LiDAR Point Filtering Processing Step
12. Field DEM Interpolated using Kriging
13. Reach 1 LiDAR Cross Sections and Sample Point Locations
14. Differences Between LiDAR and Field Based Elevations
15. Regression of LiDAR and Field Cross Section Elevations
16. Comparison of LiDAR and Field Longitudinal Profiles (5, 10, 20 meters)
17. Regression of Field and LiDAR Based Slopes (5, 10, 20 meters)
18. Differences Between LiDAR and Field Based Slopes (5, 10, 20 meters)
19. Relationship of Water Surfaces to LiDAR Point Density
20. Marmot Dam: Orthophotography and Colorized Slope Model
21. LiDAR Point Density versus Interpolation

LIST OF TABLES

1. Reported Accuracies of 2006 and 2007 LiDAR
2. Results of LiDAR and Field Elevation Comparison
3. Results of LiDAR and Field Slope Comparison (5, 10, 20 meters)
4. Results of Reach 1 Slope Comparison
5. Water Surface Roughness Results for Reaches 1, 2, and 3
6. Results of Reach 1 Water Surface Roughness Comparison
7. Subset of Reach 3 Water Surface Roughness Analysis Near Marmot Dam

CHAPTER I
INTRODUCTION

LiDAR (Light Detection and Ranging) has become a common tool for mapping and documenting floodplain environments by supplying individual point elevations and accurate Digital Terrain Models (DTM) (Bowen & Waltermire, 2002; Gilvear et al., 2004; Glenn et al., 2005; Magid et al., 2005; Thoma, 2005; Smith et al., 2006; Gangodagamage et al., 2007). Active channel characteristics that have been extracted using LiDAR include bank profiles, longitudinal profiles (Magid et al., 2005; Cavalli et al., 2007) and transverse profiles of gullies under forest canopies (James et al., 2007). To date, however, no one has tested whether LiDAR returns from water surfaces can be used to measure local water surface slopes within the active channel. Much of the reason that researchers have not attempted to measure water surface slopes with LiDAR is that most LiDAR pulses are absorbed or not returned from the water surface. However, where the angle of incidence is close to nadir (i.e., the LiDAR pulse is fired nearly perpendicular to the water surface plane), light is reflected and provides elevations off the water surface (Figure 1; Maslov et al., 2000). Where LiDAR pulses glance the water surface at angles of incidence greater than 53 degrees, a LiDAR pulse is more often lost to refraction (Figure 2) (Jenkins, 1957). In broken water surface conditions the water surface plane is angled, which produces perpendicular angles of incidence, allowing for a greater chance of return (Maslov et al., 2000). Su et al. (2007) documented this concept by examining LiDAR returns off disturbed surfaces in a controlled lab setting (Figure 3).
LiDAR returns off the water surface potentially provide accurate surface elevations that can be used to calculate surface slopes.

Figure 1. Return Factor vs. LiDAR Scan Angle. Return factor versus sensing angle at different levels of waving d (d = scan angle). The figure shows the relationship of LiDAR scan angle to return from a water surface: the return factor is greatest at low scan angles, near the nadir region of the scan. (Maslov, D. V. et al. (2000). A Shore-based LiDAR for Coastal Seawater Monitoring. Proceedings of EARSeL-SIG Workshop, Figure 1, pg. 47.)

Figure 2. Angle of Incidence. The figure displays the concept of reflection and refraction of light according to the angle of incidence. The intensity of reflected light is greater as the angle of incidence approaches nadir. (Jenkins, F.A., White, H.E., "Fundamentals of Optics," McGraw-Hill, 1957, Chapter 25.)

Figure 3. Wave Action Relationship to LiDAR Echo. LiDAR measurements of wake profiles generated by a propeller at 6000 rpm (a) and 8000 rpm (b). Su's work definitively showed LiDAR's ability to measure water surfaces, and the relationship of wave action to the capability of echo. (From Su (2007), Figure 5, p. 844.)

This study examines whether LiDAR can accurately measure water surface elevations and slopes. In order to address this topic, I assess the vertical accuracy of LiDAR and the effects of water surface roughness on LiDAR within the active channel. Findings shed light on the utility of LiDAR for measuring water surface slopes in different stream environments and on methodological constraints to using LiDAR for this purpose.

CHAPTER II
BACKGROUND

Water Surface Slope

Water surface slope is a significant component of many equations for modeling hydraulics, sediment transport, and fluvial geomorphic processes (Knighton, 1999; Sing & Zang, in press). Traditional methods for measuring water surface slope include both direct and indirect methods. Direct water surface slope measurements typically use a device such as a total station or theodolite in combination with a stadia rod or drop line to measure water surface elevations (Harrelson et al., 1994; Western et al., 1997). Inaccuracies in measurements stem from surface turbulence that makes it difficult to precisely locate the water surface, especially in fast water where flows pile up against the measuring device (Halwas, 2002). Direct survey methods often require a field team to occupy several known points throughout a reach. This is a time consuming process, especially if one wanted to document water surface slope along large portions of a river. This method can be dangerous in deep or fast water.
Indirect methods of water surface slope measurement consist of acquiring approximate water surface elevations using strand lines, water marks, secondary data sources such as contours from topographic maps, or hydraulic modeling to back-calculate the water depth (USACE, 1993; Western et al., 1997). Variable data quality and modeling errors can lead to inaccuracies using these methods. Strand lines and water marks may not necessarily represent the peak flows or the water surface. Contours may be calculated or interpolated from survey points taken outside the channel area. The most commonly used hydraulic models are based on reconstruction of one-dimensional flow within the channel and do not account for channel variability between cross section locations.

LiDAR water surface returns have a great deal of promise for improving measurement of water surfaces in several significant ways. LiDAR measurements eliminate the hazards associated with surveyors being in the water. LiDAR also captures an immense amount of elevation data over a very short period of time, with hundreds of thousands of pulses collected within a few seconds for a single swath. Within this mass of pulses, hundreds or thousands of measurements off the water's surface may be collected depending on the nature of surface roughness, with broken water surfaces increasing the likelihood of measurements (Figure 3). In addition, most terrestrial LiDAR surveys collect data by flying multiple overlapping flight lines, thus increasing the number of returns in off-nadir overlapping areas and the potential for returns from water surfaces. The accuracy of high quality LiDAR measurements is comparable to field techniques. The relative variability of quality LiDAR vertical measurements typically ranges between 0.03 and 0.05 meters (Leica, 2007), where relative variability is the total range of vertical error within an individual scan on a surface of consistent elevation. Lastly, LiDAR has the ability to collect water surface elevations over large stretches of river within a single flight of a few hours.

LiDAR Measurements of Active Channel Features

Recent studies evaluating the utility of LiDAR in the active channel environment have documented the effectiveness of using LiDAR DTMs to extract bank profiles. Magid et al. (2005) examined long term changes of longitudinal profiles along the Colorado River in the Grand Canyon. The study used historical survey data from 1923 and differenced topographic elevations with LiDAR data flown in 2000. LiDAR with three meter spot spacing was used to estimate water surface profiles based on the LiDAR elevations nearest to the known channel. Cavalli et al. (2007) extracted longitudinal profiles of the exposed bed of the Rio Cordon, Italy using 0.5 meter LiDAR DEM cells. This study successfully attributed LiDAR DEM roughness within the channel to in-stream habitats. Bowen and Waltermire (2002) found that LiDAR elevations within the floodplain were less accurate than advertised by vendors and sensor manufacturers. Dense vegetation within the riparian area prevented LiDAR pulses from reaching the ground surface, resulting in accuracies ranging 1–2 meters. Accuracies within unvegetated areas and flat surfaces met vendor specifications (15–20 cm). James et al. (2007) used LiDAR at 3 meter spot spacing to map transverse profiles of gullies under forest canopies.
Results from this study showed that gully morphologies were underestimated by LiDAR data, possibly due to low density point spacing and biased filtering of the bare earth model. Today, point densities of 4–8 points/m² are common and would likely alleviate some of the troubles found in this study. Additional studies have used LiDAR to extract geomorphic data from channel areas. Schumann et al. (2008) compared a variety of remotely sensed elevation models for floodplain mapping. The study used 2 meter LiDAR DEMs as topographic base data for floodplain modeling, and found that modeled flood stages based on the LiDAR DEM were accurate to within 0.35 m. Ruesser and Bierman (2007) used high resolution LiDAR data to calculate erosion fluxes between strath terraces based on elevation. Gangodagamage et al. (2007) used LiDAR to extract river corridor width series, which help to quantify processes involved in valley formation. This study used a fixed water surface elevation and did not attempt to demonstrate the accuracy of LiDAR derived water surfaces.

Green LiDAR has also been used to examine riverine environments. Green LiDAR functions much like terrestrial LiDAR (which uses an infrared laser) except that green LiDAR systems use green light, which has the ability to penetrate the water surface and measure the elevation of the channel bed. Green LiDAR is far less common than terrestrial LiDAR, and the majority of studies have centered on ocean shorelines. Wang and Philpot (2007) assessed attenuation parameters for measuring bathymetry in near shore shallow water, concluding that quality bathymetric models can be achieved through a number of post-processing steps. Hilldale and Raft (2007) assessed the accuracy and precision of bathymetric LiDAR and concluded that although the resulting models were informative, bathymetric LiDAR was less precise than traditional survey methods. In general, it is often difficult to assess the accuracy of bathymetric LiDAR given issues related to access of the channel bed at time of flight.

CHAPTER III
STUDY AREA

The study area is the Sandy River, Oregon, which flows from the western slopes of Mount Hood northwest to the Columbia River (Figure 4). Recent LiDAR data and aerial photography capture the variety of water surface characteristics in the Sandy River, which range from shooting flow to wide pool-riffle formations. The recent removal of the large run-of-river Marmot Dam upstream of the analysis sites has also generated interest in the river's hydraulics and geomorphology.

Figure 4. Site Map. Site area map showing the location of the analysis reaches within the 2006 and 2007 LiDAR coverage areas. The hillshaded area represents the 2006 LiDAR extent. Orthophotography was also collected for the 2006 study, but only along the Sandy River channel within the LiDAR extent.

Floodplain longitudinal slopes along the Sandy River average 0.02 and reach a maximum of 0.04. The Sandy River has closely spaced pool-riffles and rapids in the upper reaches, transitioning to longer sequenced pool-riffle morphology in the middle and lower reaches. The Sandy River bed is dominated by sand.
Cobbles and small boulders are present mostly in areas of riffles and rapids. Much of the channel is incised with steep slopes along the channel boundaries. The flow regime is typical of Pacific Northwest streams, with peak flows in the winter months of November through February and in late spring with snowmelt runoff (Figure 5). Low flows occur between late September and early October. The average peak annual flow at the Sandy River station below Bull Run River (USGS 14142500) is 106 cms. Average annual low flow for the same gauge is 13.9 cms.

Figure 5. Annual Hydrograph of Sandy River. US Geological Survey gaging station annual hydrograph of the Sandy River, Oregon, below Bull Run River (USGS 14142500), showing daily mean discharge and the median daily statistic (59 years). Data from http://waterdata.usgs.gov/or/nwis/annual/

Vegetation is mostly a mixture of Douglas fir and western red hemlock (Figure 6). Other vegetation includes palustrine forest found in the upper portions of the study area, and agricultural lands found in the middle and lower portions. Douglas fir and western red hemlock make up 87% of vegetated areas, palustrine forest 5%, and agricultural lands 5%; the remaining 3% is open water associated with the channel and reservoirs (Oregon GAP Analysis Program, 2002). The city of Troutdale, OR abuts the lower reaches of the Sandy River. Along this stretch of river Himalayan blackberry, an invasive species, dominates the western banks (Figure 7). The presence of Himalayan blackberry is significant because LiDAR has trouble penetrating the dense clusters of vines. When this blackberry is close to the water's edge it is difficult to accurately define the channel boundary.

Figure 9. Reach 2 Site Area Map. Site map of Reach 2. Reach 2 contains 359 cross sections derived from LiDAR and 3,456 sample points. The inset map shows cross section sample locations derived from LiDAR and the smooth/rough water surface delineations used in the analysis.

Reach 3 is located 40.7 km upstream from the mouth of the Sandy and is 2,815 meters in length (Figure 10). The widest portion of this section at approximate bank full is 88 meters. The upstream extent of the channel includes the supercritical flow of Marmot Dam. The channel is incised and relatively straight with a sinuosity of 1.08. Fine sands dominate the channel bed, with some boulders likely present from mass wasting along valley walls. As with Reach 2, Douglas fir dominates bank vegetation.

Figure 10. Reach 3 Site Area Map. Site map of Reach 3. The inset map displays LiDAR water surface points and point density along with the cross section sample locations used to extract point density values. Reach 3 contains 550 cross sections and 3,348 sample points.
Visual examination of this map allows one to see how point density varies within the active channel.

CHAPTER IV
METHODS

Overview

LiDAR data and orthophotography were collected in 2006, and additional LiDAR data were collected over the same area in 2007. Field measurements were obtained five days after the 2007 LiDAR flight in order to compare field measurements of water surface slope to LiDAR-based measurements. Time-of-flight field measurements of water surface elevations were not obtained for the 2006 flight, but the coincident collection of LiDAR data and orthophotos provides a basis for evaluating the variability of LiDAR-based slopes over different channel types as identified from aerial photos. The following sections provide more detail regarding these methods.

LiDAR Data and Image Acquisition

All LiDAR data were collected using a Leica ALS50 Phase II LiDAR system mounted on a Cessna Caravan C208 (see Table 1 for LiDAR acquisition specifications). The 2006 LiDAR data were collected October 22nd and encompassed 13,780 hectares of high resolution (≥4 points/m²) LiDAR data from the mouth of the Sandy River to Marmot Dam. Fifteen centimeter ground resolution orthophotography was collected September 26th, 2006 along the riparian corridor of the Sandy River from its mouth to just above the former site of Marmot Dam (Figure 4). The 2007 LiDAR data were collected on October 8th and covered the same extent as the 2006 flight, but did not include orthophotography. Data included filtered XYZ ASCII point data and LiDAR DEMs as ESRI formatted grids at 0.5 meter cell size. Data were collected at ≥8 points per m², providing a data set with significantly higher point density than the 2006 LiDAR data. The 2006 LiDAR data were collected in one continuous flight. The 2006 orthophotography was collected using an RC30 camera system, and the data were delivered in RGB geoTIFF format. LiDAR data were calibrated by the contractor to correct for IMU position errors (pitch, roll, heading, and mirror scale). Quality control points were collected along roads and other permanent flat features for absolute vertical correction of the data. Horizontal accuracy of LiDAR data is governed by flying height above ground, with horizontal accuracy being equal to 1/3300th of flight altitude in meters (Leica, 2007).

Table 1. Reported Accuracies of 2006 and 2007 LiDAR. Reported accuracies and conditions for the 2006 and 2007 LiDAR data (Watershed Sciences PGE LiDAR Delivery Report, 2006; Watershed Sciences DOGAMI LiDAR Delivery Report, 2007). Relative accuracy is a measure of flight line offsets resulting from sensor calibration.

- Flying height above ground level (AGL), meters: 2006 LiDAR 1100; 2007 LiDAR 1000
- Absolute vertical accuracy, meters: 2006 LiDAR 0.063; 2007 LiDAR 0.034
- Relative accuracy (calibration), meters: 2006 LiDAR 0.058; 2007 LiDAR 0.054
- Horizontal accuracy (1/3300 × AGL), meters: 2006 LiDAR 0.37; 2007 LiDAR 0.33
- Discharge at time of flight (cms): 2006 LiDAR 13.05; 2007 LiDAR 20.8–21.8

LiDAR data collection over the Reach 1 field survey location was obtained in a single flight on October 8, 2007 between 1:30 and 6:00 pm. During the LiDAR flight, ground quality control data were collected along roads and other permanent flat surfaces within the collection area. These data were used to adjust for absolute vertical accuracy.

Field Data Acquisition

A river survey crew was dispatched at the soonest possible date (October 13, 2007) after the 2007 flight to collect ground truth data within Reach 1.
The initial aim was to survey water surface elevations at cross sections of the channel, but the survey was limited to near-shore measurements due to high velocity conditions. We collected 187 measurements of bed elevation and depth one to fifteen meters from the banks along both sides of the channel (Figure 8a) using standard total station longitudinal profile survey methods (Harrelson, 1994). Seventy-six and 98 measurements were collected along the east and west banks, respectively, at intervals of approximately 1 to 2 meters. Thirteen additional measurements were collected along the east bank at approximately ten meter intervals. Depth measurements were added to bed elevations to derive water surface elevations. Discharge ranged between 22.5 and 22.7 cms during the survey of the east bank and remained steady at 22.5 cms during the survey of the west bank (USGS station 14142500).

LiDAR Processing

The goal of LiDAR processing for this project was to classify LiDAR point data within the active channel as water and output this subset for further analysis. The LiDAR imagery was first clipped to the active channel using a boundary digitized from the 2006 high resolution orthophotography. LiDAR point data were then reclassified to remove bars, banks, and overhanging vegetation (Figure 11).

Figure 11. LiDAR Point Filtering Processing Step. LiDAR processing steps. The top image shows the entire LiDAR point cloud clipped to the active channel boundary. The lower image shows the final processed LiDAR points, representing only those points that reflect off the water surface. All bars and overhanging vegetation have been removed as well.

Water points were classified using the ground classification algorithm in Terrascan© (Soininen, 2005) to separate water surface returns from those off of vegetation or other surfaces elevated above the ground. The classification routine uses a proprietary mathematical model to accomplish this task. Once the ground classification was finished, classified points were visually inspected to add or remove false positives and to remove in-channel features such as bar islands. A total of 11,593 of 1,854,219 LiDAR points were classified as water. Points classified as water were output as comma delimited x,y,z ASCII text files (XYZ), then converted to a 0.5 meter linearly interpolated ESRI formatted grid using an ESRI geoprocessing model script.

Calculation of Water Surface Slopes

Water surface slopes were calculated using the dimensionless rise over run slope equation, where the rise is the vertical difference between upstream and downstream water surface elevations and the run is the longitudinal distance between elevation locations. LiDAR data are typically used in grid format, so grid data were used for the calculation of water surface slopes. We used linear interpolation to grid the LiDAR point data, as this is the standard method used by the LiDAR contractor. In order to compare the LiDAR and field data it was also necessary to interpolate the field measurements to create a water surface for the entire stream. The field data-based DEM was created using kriging interpolation within ArcGIS Desktop Spatial Analyst (Figure 12). No quantitative analysis was performed to evaluate the interpolation method of the field-based water surface. The kriging interpolation was chosen because it produced the smoothest water surface based on visual inspection when compared to linear and natural neighbor interpolations, which generated irregular fluctuations that were unrealistic for a water surface. The kriged surface provided a water surface elevation model for comparative analysis with LiDAR.

Figure 12. Field DEM Interpolated using Kriging. Field DEM interpolated from field survey points using the kriging method found in ArcGIS Spatial Analyst. The DEM has been hillshaded to show surface characteristics. The very small differences in water surface elevations generate only slight variations in the hillshading.
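As a rough illustration of the gridding step described above, the following is a minimal Python sketch that interpolates scattered water-surface points onto a 0.5 m grid with linear interpolation. The input file name and column layout are assumptions made for the example, not the project's actual files.

# Minimal sketch: linear interpolation of scattered x,y,z water points to a 0.5 m grid.
import numpy as np
from scipy.interpolate import griddata

pts = np.loadtxt("water_points.xyz", delimiter=",")   # assumed columns: x, y, z
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

cell = 0.5
xi = np.arange(x.min(), x.max(), cell)
yi = np.arange(y.min(), y.max(), cell)
gx, gy = np.meshgrid(xi, yi)

# Cells outside the convex hull of the points are left as NaN.
water_surface = griddata((x, y), z, (gx, gy), method="linear")
print(water_surface.shape)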
To compare LiDAR and field-based water surface slopes, water surface elevations from the LiDAR and field-based DEMs were extracted at the same locations along Reach 1. To accomplish this, 37 cross sections were manually constructed at approximately 5 m spacings (Figure 13). Cross section comparisons were used rather than point-to-point comparisons between streamside field and LiDAR data points because the cross sections provide water surface slopes that are more representative of the entire channel. The 5 m interval spacing was considered sufficient for fine resolution slope extraction. Because cross section center points were used to calculate the longitudinal distance, and because the stream is sinuous, the projection of the cross sections from the center line to the banks led to streamside distances between cross sections that differed from 5 m.

Figure 13. Reach 1 LiDAR Cross Sections and Sample Point Locations. Reach 1 LiDAR-derived cross section sample locations and areas of smooth and rough water surface delineations. 37 cross sections and 444 sample points lie within Reach 1.

Cross sections were extracted using a custom ArcObjects VBA script (Appendix A). This script extracted one-cell nearest neighbor elevations along the transverse cross sections at 5 meter intervals, creating 444 cross section sample locations (Figure 13). Cross section averages were calculated using the field-based and LiDAR-based water surface elevation grids. The average cross sectional elevation values for the field and LiDAR data were then exported to Excel files, merged with the longitudinal distance between cross sections, and used to calculate field survey-based and LiDAR-based slopes between cross sections. Reaches 2 and 3, for which only LiDAR data were available, were sampled using the same cross sectional approach used in Reach 1. The data extracted from these reaches were used to characterize how LiDAR-based elevations, slopes, and point densities interact with varying water surface roughness. Within Reach 2, 359 cross sections were drawn and elevations were sampled every five meters along each cross section, creating 3,456 cross section sample locations (Figure 9). Reach 3 contained 550 cross sections and 3,348 cross section sample locations (Figure 10). Slopes were calculated between each cross section.
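A minimal sketch of the rise-over-run calculation from average cross-section elevations follows; the elevation and distance values are illustrative, not data from the study.

import numpy as np

# Average water-surface elevation per cross section (m) and cumulative downstream
# distance between cross-section centers (m); values are made up for illustration.
elev = np.array([5.70, 5.68, 5.66, 5.63, 5.60])
dist = np.array([0.0, 5.1, 10.3, 15.2, 20.4])

def slopes(elev, dist, step=1):
    """Dimensionless rise/run between cross sections 'step' intervals apart."""
    rise = elev[step:] - elev[:-step]
    run = dist[step:] - dist[:-step]
    return rise / run

print(slopes(elev, dist, step=1))   # roughly 5 m intervals
print(slopes(elev, dist, step=4))   # roughly a 20 m interval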
Evaluating LiDAR Slope Accuracies and Controls

The accuracy of the elevation data is the major control on slope accuracy, so a comparative analysis was performed using field survey and LiDAR elevations. First, field-based and LiDAR slopes were calculated at distance intervals of five, ten, and twenty meters using average cross section elevations to test the sensitivity of the slopes to vertical inaccuracies in the LiDAR data. The field and LiDAR elevations were differenced using the same points used to create the average cross section elevations. Differences were plotted as a histogram and a cumulative frequency plot after transforming them into absolute values. Descriptive statistics were calculated to examine the range, minimum, maximum, and mean offset between the data sets. Finally, LiDAR and field-based values were compared using regression analysis.

This study also examined the effects of water surface roughness on LiDAR elevation measurements, LiDAR point density, and LiDAR derived water surface slopes. Each reach was divided into smooth and rough sections based on visual analysis of the orthophoto data. One-meter resolution slope rasters were created from the LiDAR water surface grids using ArcGIS Spatial Analyst. One-meter resolution point density grids were created from the LiDAR point data (ArcGIS Spatial Analyst). Using the cross section sample points, values for water surface type, elevation, slope, and point density were extracted within each reach. Point sample data were transferred to tabular format, and average values were generated for each cross section. These tables were used to calculate descriptive statistics associated with the water surfaces, such as elevation variance, average slope variance, average point density, and average slope. It is assumed in this study that smooth water surfaces are associated with pools and thus ought to have relatively low slopes. Conversely, rough water surfaces are assumed to be representative of riffles and rapids, and thus ought to have relatively steeper slopes. Reach 1 contains field data, so slopes from LiDAR and field data were compared with respect to water surface conditions as determined from the aerial photos.
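A minimal sketch of the differencing and regression comparison described in the evaluation section above (NumPy/SciPy; the arrays are illustrative stand-ins for the paired cross-section elevations, not study data):

import numpy as np
from scipy.stats import linregress

# Paired average cross-section elevations (m); values are illustrative only.
field = np.array([5.70, 5.68, 5.66, 5.63, 5.60])
lidar = np.array([5.67, 5.66, 5.64, 5.62, 5.57])

diff = np.abs(field - lidar)
print(diff.min(), diff.max(), diff.mean(), diff.std())   # descriptive statistics

fit = linregress(field, lidar)
print(fit.slope, fit.intercept, fit.rvalue ** 2)          # regression comparison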
CHAPTER V
RESULTS

Results of this study encompass three analyses. The elevation analysis describes the statistical difference between LiDAR and field-based water surface elevations for Reach 1. The slope analysis compares LiDAR derived and field-based slopes calculated at 5, 10, and 20 m longitudinal distances; these analyses aim to quantify both slope accuracy and slope sensitivity. Lastly, the water surface analysis examines the relationship between LiDAR measured water surface slopes, point density, and water surface roughness.

Comparison of Absolute Elevations from Field and LiDAR Data in Reach 1

The difference between water surface elevations affects the numerator within the rise over run equation, which in turn affects slope. This elevation analysis quantifies differences between the field and LiDAR data. LiDAR-based cross section elevations were differenced from field-based cross section elevations, and the difference values were examined through statistical analysis.

In terms of absolute elevations relative to sea level, the majority of LiDAR-based water surface elevations were lower than field-based elevations, although the LiDAR elevations were higher in the upper portion of Reach 1. Differences ranged between -0.04 and 0.05 m, with a mean absolute difference between field and LiDAR elevations of 0.02 m (Figure 14 and Table 2). The range of differences is within the expected relative accuracies of LiDAR claimed by the LiDAR provider. Elevations for the field and LiDAR data are significantly correlated, with an R² of 0.94 (Figure 15). The negative offset was expected given that discharge at the time of LiDAR acquisition was lower than discharge at the time of field data acquisition. Discharge during field acquisition ranged between 22.5 and 22.7 cms, while discharge during LiDAR acquisition was between 20.8 and 21.8 cms. The portion of Reach 1 where LiDAR water surface measurements were higher than field measurements may be related to the difference in discharge or a change in bed configuration. Overall results showed that LiDAR data and field-based water surface measurements are comparable.

Figure 14. Differences Between LiDAR and Field Based Elevations. Distribution of elevation differences between cross sections derived from field and LiDAR elevation data. Positive differences indicate that field-based elevations were higher than LiDAR; negative differences indicate LiDAR elevations were higher. Values on the x axis represent the minimum difference within each range; for example, the 0.01 category includes values ranging from 0.01 to 0.0199.

Figure 15. Regression of LiDAR and Field Cross Section Elevations (y = 1.18x − 1.03, R² = 0.94).

Table 2. Results of LiDAR and Field Elevation Comparison. Descriptive and regression statistics for absolute difference |Field − LiDAR| values between cross section elevations. All units in meters. Sample size is 37.

- Mean: 0.028
- Median: 0.030
- Standard deviation: 0.013
- Kurtosis: -0.640
- Skewness: -0.484
- Range of difference: 0.093
- Minimum difference: 0.002
- Absolute maximum difference: 0.047
- Confidence level (95.0%) (m): 0.004

Slope Comparisons

Figure 18. Differences Between LiDAR and Field Based Slopes (5, 10, 20 meters). Histogram charts showing difference values between field and LiDAR derived slopes at a) 5 meter slope distances, b) 10 meter slope distances, and c) 20 meter slope distances.

The mean difference between slopes decreases from 0.0017 to 0.0007 as the slope distance interval is increased. The maximum slope difference and the standard deviation of the offsets decrease from 0.0047 to 0.0015 and from 0.0010 to 0.0005, respectively. Regression analysis of these data shows a significant relationship for all three comparisons, and adjusted R² increased from 0.357 to 0.763 with slope distance interval (Table 3).

Table 3. Results of LiDAR and Field Slope Comparison (5, 10, 20 meters). Descriptive and regression statistics for offsets between field and LiDAR derived slope values (Field minus LiDAR). Slope values are dimensionless rise/run. All data are significant at 0.01.

- Mean: 5 m 0.0017; 10 m 0.0012; 20 m 0.0007
- Standard deviation: 5 m 0.0010; 10 m 0.0007; 20 m 0.0005
- Range of difference: 5 m 0.0080; 10 m 0.0047; 20 m 0.0024
- Minimum difference: 5 m 0.0000; 10 m 0.0000; 20 m 0.0001
- Maximum difference: 5 m 0.0047; 10 m 0.0026; 20 m 0.0015
- Count: 5 m 36; 10 m 16; 20 m 8
- Adjusted R squared: 5 m 0.36; 10 m 0.47; 20 m 0.76

Water surface slope for the entire length of Reach 1 (159.32 m) was compared and yielded a difference of 0.0005.
This difference is smaller (by 0.0002) than the difference between the 20 meter slopes (Table 4). Slope was calculated by differencing the most upstream and downstream cross sections and dividing by the total length of the reach. Differences between the LiDAR and field-based slopes may represent real change due to the five day lag between data sets and the difference in discharge.

Table 4. Results of Reach 1 Slope Comparison. Comparison of slopes calculated using the farthest upstream and downstream cross section elevation values. Slope values are dimensionless (rise over run).

- Field: upper elevation 5.652 m; lower elevation 5.491 m; reach length 159.32 m; slope -0.0010
- LiDAR: upper elevation 5.697 m; lower elevation 5.455 m; reach length 159.32 m; slope -0.0015

Surface Roughness Analysis

Water surface condition was characterized as smooth or rough based on the 2006 aerial photography (Figure 19). Surface roughness was examined to understand its effect on LiDAR data within the active channel, as well as LiDAR's ability to potentially capture differences in water surface turbulence. Table 5 shows statistics in relation to water surface condition for all three reaches.

Figure 19. Relationship of Water Surfaces to LiDAR Point Density. 2006 aerial photos were used to delineate rough and smooth water surfaces. The image on the left shows a transition between a rough water surface (seen as white water) and a smooth water surface (seen as an upstream pool). The image on the right shows LiDAR point density in points per square meter.

In all reaches, point density, variance of elevations, and water surface slopes were significantly higher in rough surface conditions. These results indicate that LiDAR point density is directly related to the roughness of a water surface and that it is capturing the rough water characteristics one would expect in areas where turbulence generates surface waves.

Table 5. Water Surface Roughness Results for Reaches 1, 2, and 3. Water surface statistical output for rough and smooth water surfaces of Reaches 1, 2, and 3. Results within the table represent average values for each reach. Slope values are dimensionless (rise over run) derived from the ESRI generated slope grid. Point density values are in points/m². Elevation variance is in meters.

Rough water:
- Number of sample points: Reach 1 153; Reach 2 1981; Reach 3 1968
- Average slope: Reach 1 -0.013; Reach 2 -0.011; Reach 3 -0.007
- Point density (pts/m²): Reach 1 1.195; Reach 2 1.002; Reach 3 1.217
- Elevation variance (m): Reach 1 0.003; Reach 2 0.018; Reach 3 0.041

Smooth water:
- Number of sample points: Reach 1 290; Reach 2 1474; Reach 3 1378
- Average slope: Reach 1 0.0075; Reach 2 -0.0006; Reach 3 -0.0033
- Point density (pts/m²): Reach 1 0.149; Reach 2 0.550; Reach 3 0.480
- Elevation variance (m): Reach 1 0.001; Reach 2 0.0077; Reach 3 0.024

Within Reach 1, cross section elevations were separated into rough and smooth water conditions and slopes were calculated using the field and LiDAR data sets (Table 6). Again, results showed that rough water surfaces have greater slopes than smooth water surfaces. The smooth water surface of Reach 1 yielded a larger discrepancy between field and LiDAR derived slopes compared to the rough water surface. This is because small differences between LiDAR and field elevations generate larger proportional error in the rise/run equation when the total elevation differences between upstream and downstream are small.
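The rough/smooth summaries in Tables 5 and 6 amount to a group-by over the per-cross-section samples. A minimal pandas sketch of that kind of aggregation follows; the column names and values are made up for illustration and are not the study's data.

import pandas as pd

# Per-cross-section averages; columns and values are illustrative assumptions.
xs = pd.DataFrame({
    "surface":       ["rough", "rough", "smooth", "smooth"],
    "slope":         [-0.013, -0.011, -0.001, -0.003],
    "point_density": [1.20, 1.00, 0.15, 0.55],
    "elev_variance": [0.003, 0.018, 0.001, 0.008],
})

summary = xs.groupby("surface").agg(["mean", "count"])
print(summary)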
Table 6. Results of Reach 1 Water Surface Roughness Comparison. Reach 1 water surface roughness slope analysis. Reach 1 was divided into smooth and rough water surfaces based upon visual characteristics present in the aerial photography. Slopes were calculated for each area and compared with the field data to examine accuracy.

- Field, smooth: reach length 83.11 m; upper elevation 5.652 m; lower elevation 5.642 m; slope -0.0001; slope difference N/A
- LiDAR, smooth: reach length 83.11 m; upper elevation 5.697 m; lower elevation 5.612 m; slope -0.0010; slope difference 0.0009
- Field, rough: reach length 71.73 m; upper elevation 5.635 m; lower elevation 5.491 m; slope -0.0020; slope difference N/A
- LiDAR, rough: reach length 71.73 m; upper elevation 5.592 m; lower elevation 5.455 m; slope -0.0019; slope difference -0.0001

Prior to collection of the 2007 data, Reach 3 contained the former Marmot Dam, which was dismantled on October 19th, 2007 (Figure 20). The areas at and directly below the dam are rough water surfaces. The supercritical flow at the dam yielded a slope of -0.896 (Table 7). The run below the dam contained low slope values of less than -0.002. Both the dam fall and the adjacent run yielded high point densities of greater than 2 points per square meter.

Figure 20. Marmot Dam: Orthophotography and Colorized Slope Model. Marmot Dam at the far upstream portion of Reach 3. The image on the left shows the dam site in the 2006 orthophotography. The image on the right shows the increase in slope associated with the dam. Marmot Dam was removed October 19th, 2007.

Table 7. Subset of Reach 3 Water Surface Roughness Analysis Near Marmot Dam. Subset of the Reach 3 roughness analysis immediately surrounding Marmot Dam. The roughness results fell within expectations, showing increases in slope at the dam fall and high point densities at the dam fall and the immediate downstream run.

- Dam fall: average slope -0.896; point density 2.284; point density variance 1.003
- Dam run: average slope -0.001; point density 2.085; point density variance 5.320

CHAPTER VI
DISCUSSION

The elevation analysis portion of this study shows that LiDAR can provide water surface profiles and slopes that are comparable to field-based data. The differences between LiDAR and field based measurements can be attributed to three potential sources. The first is the relative accuracy of the LiDAR data, which has been reported between 0.05 m and 0.06 m by the vendor. The second source can be associated with the accuracy of the field based measurements, which is similar to the relative accuracy of the LiDAR (0.03–0.05 m). Lastly, the discharge differed between field data collection and LiDAR collection by 0.02 cms. It is possible that much of the 0.05 m difference observed through most of the Reach 1 profile (Figure 16) could be attributed to the difference in discharge and changes in bed configuration, but without further evidence, the degree of difference due to error or real change cannot be identified. Even if one attributes all the difference to error in LiDAR measurements, the overall correspondence of LiDAR and field measurements (Figures 15 and 16) indicates that LiDAR-based surveys are useful for many hydrologic applications.

In the upper portion of the reach, the profiles display LiDAR elevations that are higher than the field data elevations, whereas the reverse is true at the base of the reach. This could be a function of the difference in discharge between datasets, a change in bed configuration, or an artifact of low point density. Low density of points forces greater lengths of interpolation between LiDAR points, leading to a coarse DEM (Figure 21). Overall, the Reach 1 profile analysis indicates that LiDAR was able to match the field-based elevation measurements within ±0.05 m.

Figure 21. LiDAR Point Density versus Interpolation (panels: Rough & Smooth Water Surfaces; Grid Interpolation in Low Point Density).
Side-by-side image showing long lines of interpolation associated with smooth water surfaces (right image). Smooth water surfaces tend to have low LiDAR point density. The image on the right shows a hillshade of the LiDAR DEM. The DEM has been visualized using a 2 standard deviation stretch to highlight the long lines of interpolation.

The comparability of LiDAR and field-based slopes showed a significant trend with increasing downstream distances between cross sections. Adjusted R² values increased from 0.36 to 0.76 and the range of difference between field and LiDAR based slopes decreased from 0.0047 to 0.0014 as the longitudinal distance increased from 5 to 20 m. This suggests that the 0.05 m of expected variation in LiDAR derived water surface elevation has less effect on water surface slope accuracy as the distance between elevation measurement points increases. Likewise, slope accuracies along rivers with low gradients will improve as the longitudinal distance between elevation points increases. Overall, the data have shown that LiDAR can measure water surface slopes with mean differences relative to field measurements of 0.0017, 0.0012, and 0.0007 at horizontal distances of 5, 10, and 20 meters, respectively. Although the discrepancy between field and LiDAR-based slopes is greatest at 5 m intervals, the overall slopes (Figure 17) and longitudinal profiles (Figure 16) even at this distance generally correspond. The use of a 5 m interval water surface slope as a basis for comparison is really a worst case example, as water surface slopes are usually measured over longer, reach scale distances where the discrepancy between LiDAR and field-based measurements is lower. The continuous channel coverage and accuracies derived from LiDAR represent a new level of accuracy and precision in terms of the spatial extent and resolution of water surface slope measurements.

Analysis of surface roughness found that rough water surfaces had significantly higher point densities than smooth water surfaces. Rough water surfaces averaged at least 1 point/m², while smooth water surfaces averaged less than 1 point per 2 m². Longitudinal profiles of Reach 1 indicate the most accurate water surface measurements occur in areas of higher point density (Figure 16). Future applications that attempt to use LiDAR to measure water surface slope ought to sample DEM elevations from high point density areas of the channel.

Water surface analysis also showed trends relating water surface roughness and slope. Rough water surfaces for all three analysis reaches averaged larger slope values than smooth water surfaces. This is because rough water surfaces are commonly associated with steps, riffles, and rapids, all of which have higher slopes than smooth water habitats. Smooth water surfaces are commonly associated with pools or glides, which are areas of lower slope. Future research should examine the potential for using LiDAR to characterize stream habitats based on in-stream point density and slope.

This study is not without its limitations. The field area used to test the accuracy of LiDAR is only representative of a small portion of the Sandy River. Comparisons of field and LiDAR data would be improved by having mid-channel field data. One might also question the use of field based water surface slopes as the control for measuring "accuracy". Water surface slope is difficult to measure for reasons stated earlier in this paper.
This study is not without its limitations. The field area used to test the accuracy of LiDAR is representative of only a small portion of the Sandy River. Comparisons of field and LiDAR data would be improved by having mid-channel field data. One might also question the use of field-based water surface slopes as the control for measuring "accuracy", since water surface slope is difficult to measure for the reasons stated earlier in this paper. One might argue that there is no real way to truly measure the LiDAR accuracy of water surface slope, and that LiDAR and field-based measurements are simply comparable. In this context, LiDAR holds an advantage over field-based measurements given its ability to measure large sections of river in a single day. LiDAR also has a distinct advantage over traditional methods of measurement in that measurements are returned from the water surface and are therefore not subject to errors associated with surface turbulence piling up against the measuring device. LiDAR can also capture long stretches of channel within a few seconds, reducing the influence of changes in discharge.

LiDAR data in general do have limitations. LiDAR data are only as accurate as the instrumentation and vendor capabilities allow. LiDAR must be corrected for calibration and GPS drift to create a reliable data set, and not all LiDAR vendors produce the same level of quality. LiDAR data may be more accurate in some river reaches than in others. The study reaches in this study contained well-defined open channels, which made identifying LiDAR returns off the water surface possible. Both LiDAR data sets were collected at low flows. Flows that are too low, or channels that are too narrow, may limit the ability to extract water surface elevations because of protruding boulders or dense vegetation that hinders accurate measurement. In some cases, vegetation within and adjacent to the channel may interfere with LiDAR's ability to reach the water surface. Researchers should consider flow, channel morphology, and biota when obtaining water surface slopes from LiDAR.

CHAPTER VII
CONCLUSION

This paper examined the ability of LiDAR data to accurately measure water surface slopes. The study has shown that LiDAR data provide sufficiently accurate elevation measurements within the active channel to accurately measure water surface slopes. Measurement of water surface slope with LiDAR gives researchers a tool that is both more efficient and more cost-effective than traditional field-based survey methods. Additionally, the analysis showed that LiDAR point density is significantly higher in rough surface conditions. Water surface elevations should be gathered from high point density areas, as low point density may hinder elevation accuracy. Channel morphology, gradient, flow, and biota should be considered when extracting water surface slopes, as these attributes influence water surface measurement. Further study should examine the accuracy of LiDAR-derived water surface slopes in channel morphologies other than those examined here. Overall, the recognition that LiDAR can accurately measure water surface slopes gives researchers an unprecedented ability to study hydraulic processes over large stretches of river.
APPENDIX
ARCGIS VBA SCRIPT CODE

' Module: Common
Public g_pStrmLayer As ILayer          ' stream centerline layer selected by user (for step 1)
Public g_StreamLength As Double        ' stream centerline length (for step 1)
Public g_InputDistance As Integer 'As Double   ' distance entered by user (for step 1)
Public g_NumSegments As Integer        ' number of sample points entered by user (for step 1)
Public g_pPointLayer As ILayer         ' point layer created from stream centerline (for step 1)
Public g_PntShpFlName As String        ' point layer pathname (for step 1)
Public g_pMouseCursor As IMouseCursor  ' mouse cursor
Public g_LinearConversion As Double    ' linear conversion factor
Public g_pDEMLayer As IRasterLayer     ' DEM layer (for steps 3 and 4)
Public g_DEMConvertUnits As Double     ' DEM vertical units conversion factor (for steps 3 and 4)
Public g_MaxSearchDistance As Double   ' maximum search distance (for step 4)
Public g_NumDirections As Integer      ' number of directions to search in (for step 4)
Public g_SampleDistance As Double      ' sample distance (for step 5)
Public g_SampleNumber As Double        ' total sample points (for step 5)
Public g_VegBeginPoint As Boolean      ' where to start the calculation (for step 5)
Public g_VegCalcMethod As Boolean      ' which method for Vegetation Calculation (for step 5)
Public g_pContribLayer As ILayer       ' contributing point layer (for step 6)
Public g_pReceivLayer As ILayer        ' receiving point layer (for step 6)
Public g_pOutputLayerName As String    ' output shapefile (for step 6)

Function VerifyField(fLayer As ILayer, fldName As String) As Boolean
' verify that topo fields are in the stream centerline point layer
    Dim pFields As IFields
    Dim pField As IField
    Dim pFeatLayer As IFeatureLayer
    Dim pFeatClass As IFeatureClass
    Set pFeatLayer = fLayer
    Set pFeatClass = pFeatLayer.FeatureClass
    Set pFields = pFeatClass.Fields
    For i = 0 To pFields.FieldCount - 1
        Set pField = pFields.Field(i)
        'MsgBox pField.Name
        If pField.Name = fldName Then
            VerifyField = True
            Exit Function
        End If
    Next
    VerifyField = False
End Function

Function CalcPointLatLong(inPnt As IPoint, inLayer As ILayer) As IPoint
    ' in point layer
    Dim pFLayer As IFeatureLayer
    Set pFLayer = inLayer
    ' spatial reference environment
    Dim pInSpatialRef As ISpatialReference
    Dim pOutSpatialRef As ISpatialReference
    Dim pGeoTrans As IGeoTransformation
    Dim pInGeoDataset As IGeoDataset
    Set pInGeoDataset = pFLayer
    Dim pSpatRefFact As ISpatialReferenceFactory
    ' get map units of shapefile spatial reference
    Dim pPCS As IProjectedCoordinateSystem
    Set pPCS = pInGeoDataset.SpatialReference
    ' set spatial reference environment
    Set pSpatRefFact = New SpatialReferenceEnvironment
    Set pInSpatialRef = pInGeoDataset.SpatialReference
    'MsgBox pInSpatialRef.Name
    Set pOutSpatialRef = pSpatRefFact.CreateGeographicCoordinateSystem(esriSRGeoCS_WGS1984)
    Set pGeoTrans = pSpatRefFact.CreateGeoTransformation(esriSRGeoTransformation_NAD1983_To_WGS1984_1)
    Dim pOutGeom As IGeometry2
    Set CalcPointLatLong = New Point
    Set CalcPointLatLong.SpatialReference = pInSpatialRef
    CalcPointLatLong.PutCoords inPnt.X, inPnt.Y
    Set pOutGeom = CalcPointLatLong
    pOutGeom.ProjectEx pOutSpatialRef, esriTransformForward, pGeoTrans, 0, 0, 0
    'MsgBox inPnt.X & " " & inPnt.Y & vbCrLf & CalcPointLatLong.X & " " & CalcPointLatLong.Y
End Function

Sub OpenGxDialog()
    Dim pGxdial As IGxDialog
    Set pGxdial = New GxDialog
    pGxdial.ButtonCaption = "OK"
    pGxdial.Title = "Create Stream Centerline Point Shapefile"
    pGxdial.RememberLocation = True
    Dim pShapeFileObj As IGxObject
    Dim pGxFilter As IGxObjectFilter
    Set pGxFilter = New GxFilterShapefiles   'e.g. shp
    Set pGxdial.ObjectFilter = pGxFilter
    If pGxdial.DoModalSave(ThisDocument.Parent.hWnd) Then
        Dim pLocation As IGxFile
        Dim fn As String
        Set pLocation = pGxdial.FinalLocation
        fn = pGxdial.Name
    End If
    If Not pLocation Is Nothing Then
        g_PntShpFlName = pLocation.Path & "\" & fn
        frm1B.tbxShpFileName.Text = g_PntShpFlName
        frm1B.cmdOK.Enabled = True
    End If
End Sub

Function GetAngle(pPolyline As IPolyline, dAlong As Double) As Double
    Dim pi As Double
    pi = 4 * Atn(1)
    Dim dAngle As Double
    Dim pLine As ILine
    Set pLine = New Line
    pPolyline.QueryTangent esriNoExtension, dAlong, False, 1, pLine
    ' convert from radians to degrees
    dAngle = (180 * pLine.Angle) / pi
    ' adjust angles
    ' ESRI defines 0 degrees as the positive X-axis, increasing counter-clockwise
    ' Ecology references 0 degrees as North, increasing clockwise
    If dAngle ...

Function SplitWorkspaceName(sWholeName As String) As String
On Error GoTo ERH
    Dim pos As Integer
    pos = InStrRev(sWholeName, "\")
    If pos > 0 Then
        SplitWorkspaceName = Mid(sWholeName, 1, pos - 1)
    Else
        Exit Function
    End If
    Exit Function
ERH:
    MsgBox "Workspace Split" & Err.Description
End Function

'Returns a filename given for example C:\temp\dataset returns dataset
Function SplitFileName(sWholeName As String) As String
On Error GoTo ERH
    Dim pos As Integer
    Dim sT, sName As String
    pos = InStrRev(sWholeName, "\")
    If pos > 0 Then
        sT = Mid(sWholeName, 1, pos - 1)
        If pos = Len(sWholeName) Then
            Exit Function
        End If
        sName = Mid(sWholeName, pos + 1, Len(sWholeName) - Len(sT))
        pos = InStr(sName, ".")
        If pos > 0 Then
            SplitFileName = Mid(sName, 1, pos - 1)
        Else
            SplitFileName = sName
        End If
    End If
    Exit Function
ERH:
    MsgBox "Workspace Split:" & Err.Description
End Function

Public Sub BusyMouse(bolBusy As Boolean)
'Subroutine to change mouse cursor
    If g_pMouseCursor Is Nothing Then
        Set g_pMouseCursor = New MouseCursor
    End If
    If bolBusy Then
        g_pMouseCursor.SetCursor 2
    Else
        g_pMouseCursor.SetCursor 0
    End If
End Sub

Function MakeColor(lRGB As Long) As IRgbColor
    Set MakeColor = New RgbColor
    MakeColor.RGB = lRGB
End Function

Function MakeDecoElement(pMarkerSym As IMarkerSymbol, _
    dPos As Double) _
    As ISimpleLineDecorationElement
    Set MakeDecoElement
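The "step 1" globals in the appendix describe placing sample points at a user-chosen spacing along a stream centerline. For readers who do not use ArcGIS VBA, a rough present-day Python sketch of that single idea (an illustration under assumed inputs, not a translation of the script; it relies on the third-party shapely package, and the centerline coordinates are made up) is:

from shapely.geometry import LineString

def centerline_sample_points(centerline, spacing):
    # Place a sample point every `spacing` meters along the centerline,
    # loosely mirroring the step-1 globals (g_InputDistance, g_NumSegments).
    n_points = int(centerline.length // spacing) + 1
    return [centerline.interpolate(i * spacing) for i in range(n_points)]

stream = LineString([(0, 0), (40, 5), (80, 3)])   # hypothetical centerline, meters
for pt in centerline_sample_points(stream, spacing=20.0):
    print(round(pt.x, 1), round(pt.y, 1))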
