You do not have sufficient privileges to access this file

[Account deleted] 2012-02-23 11:46:27
When I run an application under an administrator account,
I get the message "C:\windows\system32\CONFIG.NT You do not have sufficient privileges to access this file".
2 replies
Frederick 2012-02-24
Open Control Panel -- System and Security -- Action Center -- Change User Account Control settings, and set it to Never notify. Then restart.
If that still does not work, try: Run -- netplwiz -- Advanced -- Advanced -- Users -- Administrator -- clear the "Account is disabled" check box. Click OK, then try switching to the Administrator account.
窗外雨潇潇 2012-02-24
Try changing the owner of C:\windows\system32\CONFIG.NT in the file's permission settings.
// $Id: INSTALL.txt,v 1.61.2.4 2008/07/09 19:15:59 goba Exp $ CONTENTS OF THIS FILE --------------------- * Requirements * Optional requirements * Installation * Drupal administration * Customizing your theme(s) * Multisite Configuration * More Information REQUIREMENTS ------------ Drupal requires a web server, PHP 4 (4.3.5 or greater) or PHP 5 (http://www.php.net/) and either MySQL (http://www.mysql.com/) or PostgreSQL (http://www.postgresql.org/). The Apache web server and MySQL database are recommended; other web server and database combinations such as IIS and PostgreSQL have been tested to a lesser extent. When using MySQL, version 4.1.1 or greater is recommended to assure you can safely transfer the database. For more detailed information about Drupal requirements, see "Requirements" (http://drupal.org/requirements) in the Drupal handbook. For detailed information on how to configure a test server environment using a variety of operating systems and web servers, see "Local server setup" (http://drupal.org/node/157602) in the Drupal handbook. OPTIONAL TASKS -------------- - To use XML-based services such as the Blogger API and RSS syndication, you will need PHP's XML extension. This extension is enabled by default. - To use Drupal's "Clean URLs" feature on an Apache web server, you will need the mod_rewrite module and the ability to use local .htaccess files. For Clean URLs support on IIS, see "Using Clean URLs with IIS" (http://drupal.org/node/3854) in the Drupal handbook. - Various Drupal features require that the web server process (for example, httpd) be able to initiate outbound connections. This is usually possible, but some hosting providers or server configurations forbid such connections. The features that depend on this functionality include the integrated "Update status" module (which downloads information about available updates of Drupal core and any installed contributed modules and themes), the ability to log in via OpenID, fetching aggregator feeds, or other network-dependent services. INSTALLATION ------------ 1. DOWNLOAD DRUPAL AND OPTIONALLY A TRANSLATION You can obtain the latest Drupal release from http://drupal.org/. The files are in .tar.gz format and can be extracted using most compression tools. On a typical Unix command line, use: wget http://drupal.org/files/projects/drupal-x.x.tar.gz tar -zxvf drupal-x.x.tar.gz This will create a new directory drupal-x.x/ containing all Drupal files and directories. Move the contents of that directory into a directory within your web server's document root or your public HTML directory: mv drupal-x.x/* drupal-x.x/.htaccess /var/www/html If you would like to have the default English interface translated to a different language, we have good news. You can install and use Drupal in other languages from the start. Check whether a released package of the language desired is available for this Drupal version at http://drupal.org/project/translations and download the package. Extract the contents to the same directory where you extracted Drupal into. 2. CREATE THE CONFIGURATION FILE AND GRANT WRITE PERMISSIONS Drupal comes with a default.settings.php file in the sites/default directory. The installer uses this file as a template to create your settings file using the details you provide through the install process. To avoid problems when upgrading, Drupal is not packaged with an actual settings file. You must create a file named settings.php. 
You may do so by making a copy of default.settings.php (or create an empty file with this name in the same directory). For example, (from the installation directory) make a copy of the default.settings.php file with the command: cp sites/default/default.settings.php sites/default/settings.php Next, give the web server write privileges to the sites/default/settings.php file with the command (from the installation directory): chmod o+w sites/default/settings.php So that the files directory can be created automatically, give the web server write privileges to the sites/default directory with the command (from the installation directory): chmod o+w sites/default 3. CREATE THE DRUPAL DATABASE Drupal requires access to a database in order to be installed. Your database user will need sufficient privileges to run Drupal. Additional information about privileges, and instructions to create a database using the command line are available in INSTALL.mysql.txt (for MySQL) or INSTALL.pgsql.txt (for PostgreSQL). To create a database using PHPMyAdmin or a web-based control panel consult the documentation or ask your webhost service provider. Take note of the username, password, database name and hostname as you create the database. You will enter these items in the install script. 4. RUN THE INSTALL SCRIPT To run the install script point your browser to the base URL of your website (e.g., http://www.example.com). You will be guided through several screens to set up the database, create tables, add the first user account and provide basic web site settings. The install script will attempt to create a files storage directory in the default location at sites/default/files (the location of the files directory may be changed after Drupal is installed). In some cases, you may need to create the directory and modify its permissions manually. Use the following commands (from the installation directory) to create the files directory and grant the web server write privileges to it: mkdir sites/default/files chmod o+w sites/default/files The install script will attempt to write-protect the settings.php file and the sites/default directory after saving your configuration. However, you may need to manually write-protect them using the commands (from the installation directory): chmod a-w sites/default/settings.php chmod a-w sites/default If you make manual changes to the file later, be sure to protect it again after making your modifications. Failure to remove write permissions to that file is a security risk. Although the default location for the settings.php file is at sites/default/settings.php, it may be in another location if you use the multi-site setup, as explained below. 5. CONFIGURE DRUPAL When the install script succeeds, you will be directed to the "Welcome" page, and you will be logged in as the administrator already. Proceed with the initial configuration steps suggested on the "Welcome" page. If the default Drupal theme is not displaying properly and links on the page result in "Page Not Found" errors, try manually setting the $base_url variable in the settings.php file if not already set. It's currently known that servers running FastCGI can run into problems if the $base_url variable is left commented out (see http://bugs.php.net/bug.php?id=19656). 6. REVIEW FILE SYSTEM STORAGE SETTINGS AND FILE PERMISSIONS The files directory created in step 4 is the default file system path used to store all uploaded files, as well as some temporary files created by Drupal. 
After installation, the settings for the file system path may be modified to store uploaded files in a different location. It is not necessary to modify this path, but you may wish to change it if: * your site runs multiple Drupal installations from a single codebase (modify the file system path of each installation to a different directory so that uploads do not overlap between installations); or, * your site runs a number of web server front-ends behind a load balancer or reverse proxy (modify the file system path on each server to point to a shared file repository). To modify the file system path: * Ensure that the new location for the path exists or create it if necessary. To create a new directory named uploads, for example, use the following command from a shell or system prompt (while in the installation directory): mkdir uploads * Ensure that the new location for the path is writable by the web server process. To grant write permissions for a directory named uploads, you may need to use the following command from a shell or system prompt (while in the installation directory): chmod o+w uploads * Access the file system path settings in Drupal by selecting these menu items from the Navigation menu: Administer > Site configuration > File system Enter the path to the new location (e.g.: uploads) at the File System Path prompt. Changing the file system path after files have been uploaded may cause unexpected problems on an existing site. If you modify the file system path on an existing site, remember to copy all files from the original location to the new location. Some administrators suggest making the documentation files, especially CHANGELOG.txt, non-readable so that the exact version of Drupal you are running is slightly more difficult to determine. If you wish to implement this optional security measure, use the following command from a shell or system prompt (while in the installation directory): chmod a-r CHANGELOG.txt Note that the example only affects CHANGELOG.txt. To completely hide all documentation files from public view, repeat this command for each of the Drupal documentation files in the installation directory, substituting the name of each file for CHANGELOG.txt in the example. For more information on setting file permissions, see "Modifying Linux, Unix, and Mac file permissions" (http://drupal.org/node/202483) or "Modifying Windows file permissions" (http://drupal.org/node/202491) in the online handbook. 7. CRON MAINTENANCE TASKS Many Drupal modules have periodic tasks that must be triggered by a cron maintenance task, including search module (to build and update the index used for keyword searching), aggregator module (to retrieve feeds from other sites), ping module (to notify other sites about new or updated content), and system module (to perform routine maintenance and pruning on system tables). To activate these tasks, call the cron page by visiting http://www.example.com/cron.php, which, in turn, executes tasks on behalf of installed modules. Most systems support the crontab utility for scheduling tasks like this. The following example crontab line will activate the cron tasks automatically on the hour: 0 * * * * wget -O - -q -t 1 http://www.example.com/cron.php More information about cron maintenance tasks are available in the help pages and in Drupal's online handbook at http://drupal.org/cron. Example scripts can be found in the scripts/ directory. 
DRUPAL ADMINISTRATION --------------------- A new installation of Drupal defaults to a very basic configuration with only a few active modules and minimal user access rights. Use your administration panel to enable and configure services. For example: General Settings Administer > Site configuration > Site information Enable Modules Administer > Site building > Modules Configure Themes Administer > Site building > Themes Set User Permissions Administer > User management > Permissions For more information on configuration options, read the instructions which accompany the different configuration settings and consult the various help pages available in the administration panel. Community-contributed modules and themes are available at http://drupal.org/. CUSTOMIZING YOUR THEME(S) ------------------------- Now that your installation is running, you will want to customize the look of your site. Several sample themes are included and more can be downloaded from drupal.org. Simple customization of your theme can be done using only CSS. Further changes require understanding the phptemplate engine that is part of Drupal. See http://drupal.org/handbook/customization to find out more. MULTISITE CONFIGURATION ----------------------- A single Drupal installation can host several Drupal-powered sites, each with its own individual configuration. Additional site configurations are created in subdirectories within the 'sites' directory. Each subdirectory must have a 'settings.php' file which specifies the configuration settings. The easiest way to create additional sites is to copy the 'default' directory and modify the 'settings.php' file as appropriate. The new directory name is constructed from the site's URL. The configuration for www.example.com could be in 'sites/example.com/settings.php' (note that 'www.' should be omitted if users can access your site at http://example.com/). Sites do not have to have a different domain. You can also use subdomains and subdirectories for Drupal sites. For example, example.com, sub.example.com, and sub.example.com/site3 can all be defined as independent Drupal sites. The setup for a configuration such as this would look like the following: sites/default/settings.php sites/example.com/settings.php sites/sub.example.com/settings.php sites/sub.example.com.site3/settings.php When searching for a site configuration (for example www.sub.example.com/site3), Drupal will search for configuration files in the following order, using the first configuration it finds: sites/www.sub.example.com.site3/settings.php sites/sub.example.com.site3/settings.php sites/example.com.site3/settings.php sites/www.sub.example.com/settings.php sites/sub.example.com/settings.php sites/example.com/settings.php sites/default/settings.php If you are installing on a non-standard port, the port number is treated as the deepest subdomain. For example: http://www.example.com:8080/ could be loaded from sites/8080.www.example.com/. The port number will be removed according to the pattern above if no port-specific configuration is found, just like a real subdomain. Each site configuration can have its own site-specific modules and themes in addition to those installed in the standard 'modules' and 'themes' directories. To use site-specific modules or themes, simply create a 'modules' or 'themes' directory within the site configuration directory. 
For example, if sub.example.com has a custom theme and a custom module that should not be accessible to other sites, the setup would look like this: sites/sub.example.com/: settings.php themes/custom_theme modules/custom_module NOTE: for more information about multiple virtual hosts or the configuration settings, consult the Drupal handbook at drupal.org. For more information on configuring Drupal's file system path in a multi-site configuration, see step 6 above. MORE INFORMATION ---------------- - For additional documentation, see the online Drupal handbook at http://drupal.org/handbook. - For a list of security announcements, see the "Security announcements" page at http://drupal.org/security (available as an RSS feed). This page also describes how to subscribe to these announcements via e-mail. - For information about the Drupal security process, or to find out how to report a potential security issue to the Drupal security team, see the "Security team" page at http://drupal.org/security-team. - For information about the wide range of available support options, see the "Support" page at http://drupal.org/support.
Contents: Module Overview; Lesson 1: Memory; Lesson 2: I/O; Lesson 3: CPU

Module 3: Troubleshooting Server Performance

Module Overview
Troubleshooting server performance-based support calls requires product knowledge, good communication skills, and a proven troubleshooting methodology. In this module we will discuss Microsoft® SQL Server™ interaction with the operating system and the methodology of troubleshooting server-based problems. At the end of this module, you will be able to:
• Define the common terms associated with the memory, I/O, and CPU subsystems.
• Describe how SQL Server leverages Microsoft Windows® operating system facilities, including memory, I/O, and threading.
• Define common SQL Server memory, I/O, and processor terms.
• Generate a hypothesis based on performance counters captured by System Monitor.
• For each hypothesis generated, identify at least two other non-System Monitor pieces of information that would help to confirm or reject your hypothesis.
• Identify at least five counters for each subsystem that are key to understanding the performance of that subsystem.
• Identify three common myths associated with the memory, I/O, or CPU subsystems.

Lesson 1: Memory

What You Will Learn
After completing this lesson, you will be able to:
• Define common terms used when describing memory.
• Give examples of each memory concept and how it applies to SQL Server.
• Describe how SQL Server uses and manages its memory.
• List the primary configuration options that affect memory.
• Describe how configuration options affect memory usage.
• Describe the effect on the I/O subsystem when memory runs low.
• List at least two memory myths and why they are not true.

Recommended Reading
• SQL Server 7.0 Performance Tuning Technical Reference, Microsoft Press
• Windows 2000 Resource Kit companion CD-ROM documentation, Chapter 15: Overview of Performance Monitoring
• Inside Microsoft Windows 2000, Third Edition, David A. Solomon and Mark E. Russinovich
• Windows 2000 Server Operations Guide, Storage, File Systems, and Printing; Chapters: Evaluating Memory and Cache Usage
• Advanced Windows, 4th Edition, Jeffrey Richter, Microsoft Press

Related Web Sites
• http://ntperformance/

Memory Definitions
Before we look at how SQL Server uses and manages its memory, we need to ensure a full understanding of the more common memory-related terms. The following definitions will help you understand how SQL Server interacts with the operating system when allocating and using memory.

Virtual Address Space
A set of memory addresses that are mapped to physical memory addresses by the system. In a 32-bit operating system, there is normally a linear array of 2^32 addresses representing 4,294,967,296 byte addresses.

Physical Memory
A series of physical locations, with unique addresses, that can be used to store instructions or data.

AWE – Address Windowing Extensions
A 32-bit process is normally limited to addressing 2 gigabytes (GB) of memory, or 3 GB if the system was booted using the /3GB boot switch, even if there is more physical memory available. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the additional physical memory. This allows a process to access any portion of the physical memory by mapping it into the application's window.
When used in combination with Intel’s Physical Addressing Extensions (PAE) on Windows 2000, an AWE enabled application can support up to 64 GB of memory Reserved Memory Pages in a processes address space are free, reserved or committed. Reserving memory address space is a way to reserve a range of virtual addresses for later use. If you attempt to access a reserved address that has not yet been committed (backed by memory or disk) you will cause an access violation. Committed Memory Committed pages are those pages that when accessed in the end translate to pages in memory. Those pages may however have to be faulted in from a page file or memory mapped file. Backing Store Backing store is the physical representation of a memory address. Page Fault (Soft/Hard) A reference to an invalid page (a page that is not in your working set) is referred to as a page fault. Assuming the page reference does not result in an access violation, a page fault can be either hard or soft. A hard page fault results in a read from disk, either a page file or memory-mapped file. A soft page fault is resolved from one of the modified, standby, free or zero page transition lists. Paging is represented by a number of counters including page faults/sec, page input/sec and page output/sec. Page faults/sec include soft and hard page faults where as the page input/output counters represent hard page faults. Unfortunately, all of these counters include file system cache activity. For more information, see also…Inside Windows 2000,Third Edition, pp. 443-451. Private Bytes Private non-shared committed address space Working Set The subset of processes virtual pages that is resident in physical memory. For more information, see also… Inside Windows 2000,Third Edition, p. 455. System Working Set Like a process, the system has a working set. Five different types of pages represent the system’s working set: system cache; paged pool; pageable code and data in the kernel; page-able code and data in device drivers; and system mapped views. The system working set is represented by the counter Memory: cache bytes. System working set paging activity can be viewed by monitoring the Memory: Cache Faults/sec counter. For more information, see also… Inside Windows 2000,Third Edition, p. 463. System Cache The Windows 2000 cache manager provides data caching for both local and network file system drivers. By caching virtual blocks, the cache manager can reduce disk I/O and provide intelligent read ahead. Represented by Memory:Cache Resident bytes. For more information, see also… Inside Windows 2000,Third Edition, pp. 654-659. Non Paged Pool Range of addresses guaranteed to be resident in physical memory. As such, non-paged pool can be accessed at any time without incurring a page fault. Because device drivers operate at DPC/dispatch level (covered in lesson 2), and page faults are not allowed at this level or above, most device drivers use non-paged pool to assure that they do not incur a page fault. Represented by Memory: Pool Nonpaged Bytes, typically between 3-30 megabytes (MB) in size. Note The pool is, in effect, a common area of memory shared by all processes. One of the most common uses of non-paged pool is the storage of object handles. For more information regarding “maximums,” see also… Inside Windows 2000,Third Edition, pp. 403-404 Paged Pool Range of address that can be paged in and out of physical memory. Typically used by drivers who need memory but do not need to access that memory from DPC/dispatch of above interrupt level. 
Represented by Memory: Pool Paged Bytes and Memory:Pool Paged Resident Bytes. Typically between 10-30MB + size of Registry. For more information regarding “limits,” see also… Inside Windows 2000,Third Edition, pp. 403-404. Stack Each thread has two stacks, one for kernel mode and one for user mode. A stack is an area of memory in which program procedure or function call addresses and parameters are temporarily stored. In Process To run in the same address space. In-process servers are loaded in the client’s address space because they are implemented as DLLs. The main advantage of running in-process is that the system usually does not need to perform a context switch. The disadvantage to running in-process is that DLL has access to the process address space and can potentially cause problems. Out of Process To run outside the calling processes address space. OLEDB providers can run in-process or out of process. When running out of process, they run under the context of DLLHOST.EXE. Memory Leak To reserve or commit memory and unintentionally not release it when it is no longer being used. A process can leak resources such as process memory, pool memory, user and GDI objects, handles, threads, and so on. Memory Concepts (X86 Address Space) Per Process Address Space Every process has its own private virtual address space. For 32-bit processes, that address space is 4 GB, based on a 32-bit pointer. Each process’s virtual address space is split into user and system partitions based on the underlying operating system. The diagram included at the top represents the address partitioning for the 32-bit version of Windows 2000. Typically, the process address space is evenly divided into two 2-GB regions. Each process has access to 2 GB of the 4 GB address space. The upper 2 GB of address space is reserved for the system. The user address space is where application code, global variables, per-thread stacks, and DLL code would reside. The system address space is where the kernel, executive, HAL, boot drivers, page tables, pool, and system cache reside. For specific information regarding address space layout, refer to Inside Microsoft Windows 2000 Third Edition pages 417-428 by Microsoft Press. Access Modes Each virtual memory address is tagged as to what access mode the processor must be running in. System space can only be accessed while in kernel mode, while user space is accessible in user mode. This protects system space from being tampered with by user mode code. Shared System Space Although every process has its own private memory space, kernel mode code and drivers share system space. Windows 2000 does not provide any protection to private memory being use by components running in kernel mode. As such, it is very important to ensure components running in kernel mode are thoroughly tested. 3-GB Address Space 3-GB Address Space Although 2 GB of address space may seem like a large amount of memory, application such as SQL Server could leverage more memory if it were available. The boot.ini option /3GB was created for those cases where systems actually support greater than 2 GB of physical memory and an application can make use of it This capability allows memory intensive applications running on Windows 2000 Advanced Server to use up to 50 percent more virtual memory on Intel-based computers. Application memory tuning provides more of the computer's virtual memory to applications by providing less virtual memory to the operating system. 
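The address-space limits described above can be observed directly from a process. The following is a minimal C sketch (an illustration added here, not part of the original module) that prints the user-mode address range and virtual memory totals the process sees; on a default boot the maximum application address is just under 2 GB, and with /3GB plus a large-address-aware image it approaches 3 GB.

    /* Minimal sketch (illustration only, not part of the original module):
       print the user-mode address range and virtual memory totals that this
       process sees. Requires Windows 2000 or later; compile as Win32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        MEMORYSTATUSEX ms;

        GetSystemInfo(&si);
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);

        printf("Minimum application address: %p\n", si.lpMinimumApplicationAddress);
        printf("Maximum application address: %p\n", si.lpMaximumApplicationAddress);
        printf("Total user virtual space:    %I64u bytes\n", ms.ullTotalVirtual);
        printf("Total physical memory:       %I64u bytes\n", ms.ullTotalPhys);
        return 0;
    }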
Although a system having less than 2 GB of physical memory can be booted using the /3GB switch, in most cases this is ill-advised. If you restart with the /3GB switch, also known as 4-Gig Tuning, the amount of non-paged pool is reduced to 128 MB from 256 MB. For a process to access 3 GB of address space, the executable image must have been linked with the /LARGEADDRESSAWARE flag or modified using Imagecfg.exe. It should be pointed out that SQL Server was linked using the /LARGEADDRESSAWARE flag and can leverage 3 GB when enabled.

Note Even though you can boot Windows 2000 Professional or Windows 2000 Server with the /3GB boot option, user processes are still limited to 2 GB of address space even if the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in the image. The only thing accomplished by using the /3GB option on these systems is the reduction in the amount of address space available to the system (ISW2K Pg. 418).

Important If you use /3GB in conjunction with AWE/PAE you are limited to 16 GB of memory.

For more information, see the following Knowledge Base articles:
Q171793 Information on Application Use of 4GT RAM Tuning
Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT
Q247904 How to Configure Paged Pool and System PTE Memory Areas
Q274598 W2K Does Not Enable Complete Memory Dumps Between 2 & 4 GB

AWE Memory Layout
Usually, the operating system is limited to 4 GB of physical memory. However, by leveraging PAE, Windows 2000 Advanced Server can support up to 8 GB of memory, and Windows 2000 Datacenter Server up to 64 GB of memory. However, as stated previously, each 32-bit process normally has access to only 2 GB of address space, or 3 GB if the system was booted with the /3GB option. To allow processes to allocate more physical memory than can be represented in the 2 GB of address space, Microsoft created the Address Windowing Extensions (AWE). These extensions allow for the allocation and use of up to the amount of physical memory supported by the operating system. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the physical memory. This allows a process to access any portion of the physical memory by mapping regions of physical memory in and out of the application's window. The allocation and use of AWE memory is accomplished by:
• Creating a window via VirtualAlloc using the MEM_PHYSICAL option
• Allocating the physical pages through AllocateUserPhysicalPages
• Mapping the RAM pages to the window using MapUserPhysicalPages
(A code sketch of these three calls appears at the end of this section.)

Note SQL Server 7.0 supports a feature called extended memory in Windows NT® 4 Enterprise Edition by using a PSE36 driver. Currently there are no PSE drivers for Windows 2000. The preferred method of accessing extended memory is via the Physical Addressing Extensions using AWE. The AWE mapping feature is much more efficient than the older process of copying buffers from extended memory into the process address space. Unfortunately, SQL Server 7.0 cannot leverage PAE/AWE. Because there are currently no PSE36 drivers for Windows 2000, this means SQL Server 7.0 cannot support more than 3 GB of memory on Windows 2000. Refer to KB article Q278466.

AWE restrictions
• The process must have the Lock Pages In Memory user right to use AWE. Important It is important that you use Enterprise Manager or DMO to change the service account. Enterprise Manager and DMO will grant all of the privileges and Registry and file permissions needed for SQL Server. The Service Control Panel does NOT grant all the rights or permissions needed to run SQL Server.
• Pages are not shareable or pageable.
• Page protection is limited to read/write.
• The same physical page cannot be mapped into two separate AWE regions, even within the same process.
• The use of AWE/PAE in conjunction with /3GB will limit the maximum amount of supported memory to between 12 and 16 GB.
• Task Manager does not show the correct amount of memory allocated to AWE-enabled applications. You must use Memory Manager: Total Server Memory. It should, however, be noted that this only shows memory in use by the buffer pool.
• Machines that have PAE enabled will not dump user mode memory. If an event occurs in user mode memory that causes a blue screen and root cause determination is absolutely necessary, the machine must be booted with the /NOPAE switch, and with /MAXMEM set to a number appropriate for transferring dump files.
• With AWE enabled, SQL Server will, by default, allocate almost all memory during startup, leaving 256 MB or less free. This memory is locked and cannot be paged out. Consuming all available memory may prevent other applications or SQL Server instances from starting.

Note PAE is not required to leverage AWE. However, if you have more than 4 GB of physical memory you will not be able to access it unless you enable PAE.

Caution It is highly recommended that you use the "max server memory" option in combination with "awe enabled" to ensure some memory headroom exists for other applications or instances of SQL Server, because AWE memory cannot be shared or paged.

For more information, see the following Knowledge Base articles:
Q268363 Intel Physical Addressing Extensions (PAE) in Windows 2000
Q241046 Cannot Create a dump File on Computers with over 4 GB RAM
Q255600 Windows 2000 utilities do not display physical memory above 4GB
Q274750 How to configure SQL Server memory more than 2 GB (Idea)
Q266251 Memory dump stalls when PAE option is enabled (Idea)

Tip The KB will return more hits if you query on PAE rather than AWE.
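As a concrete illustration of the three AWE calls listed above, the following minimal C sketch reserves a small AWE window, maps physical pages into it, and releases them. It is an illustration only, not SQL Server's implementation; it assumes the account already holds the Lock Pages in Memory right, the 64-page window size is arbitrary, and error handling is omitted.

    /* Minimal AWE sketch (illustration only, not SQL Server's implementation).
       Assumes the account already holds the Lock Pages in Memory privilege;
       the 64-page window size is arbitrary and error handling is omitted. */
    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        SYSTEM_INFO si;
        ULONG_PTR numPages = 64;
        ULONG_PTR *pfnArray;
        SIZE_T windowBytes;
        void *window;

        GetSystemInfo(&si);
        windowBytes = (SIZE_T)numPages * si.dwPageSize;

        /* 1. Allocate physical pages; the PFN array describes them. */
        pfnArray = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                          numPages * sizeof(ULONG_PTR));
        AllocateUserPhysicalPages(GetCurrentProcess(), &numPages, pfnArray);

        /* 2. Create the window: reserve address space marked MEM_PHYSICAL. */
        window = VirtualAlloc(NULL, windowBytes,
                              MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

        /* 3. Map the physical pages into the window and use the memory. */
        MapUserPhysicalPages(window, numPages, pfnArray);
        memset(window, 0, windowBytes);

        /* Unmap and release everything. */
        MapUserPhysicalPages(window, numPages, NULL);
        FreeUserPhysicalPages(GetCurrentProcess(), &numPages, pfnArray);
        VirtualFree(window, 0, MEM_RELEASE);
        HeapFree(GetProcessHeap(), 0, pfnArray);
        return 0;
    }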
Virtual Address Space Mapping
By default Windows 2000 (on an X86 platform) uses a two-level (three-level when PAE is enabled) page table structure to translate virtual addresses to physical addresses. Each 32-bit address has three components: a page directory index, a page table index, and a byte offset. When a process accesses a virtual address, the system must first locate the Page Directory for the current process via register CR3 (X86). The first 10 bits of the virtual address act as an index into the Page Directory. The Page Directory Entry then points to the Page Frame Number (PFN) of the appropriate Page Table. The next 10 bits of the virtual address act as an index into the Page Table to locate the appropriate page. If the page is valid, the PTE contains the PFN of the actual page in memory. If the page is not valid, the memory management fault handler locates the page and attempts to make it valid. The final 12 bits act as a byte offset into the page.

Note This multi-step process is expensive. This is why systems have translation lookaside buffers (TLBs) to speed up the process. One of the reasons context switching is so expensive is that the translation buffers must be flushed. Thus, the first few lookups are very expensive. Refer to ISW2K pages 439-440.

Core System Memory Related Counters
When evaluating memory performance you are looking at a wide variety of counters.
The counters listed here are a few of the core counters that give you quick overall view of the state of memory. The two key counters are Available Bytes and Committed Bytes. If Committed Bytes exceeds the amount of physical memory in the system, you can be assured that there is some level of hard page fault activity happening. The goal of a well-tuned system is to have as little hard paging as possible. If Available Bytes is below 5 MB, you should investigate why. If Available Bytes is below 4 MB, the Working Set Manager will start to aggressively trim the working sets of process including the system cache.  Committed Bytes Total memory, including physical and page file currently committed  Commit Limit • Physical memory + page file size • Represents the total amount of memory that can be committed without expanding the page file. (Assuming page file is allowed to grow)  Available Bytes Total physical memory currently available Note Available Bytes is a key indicator of the amount of memory pressure. Windows 2000 will attempt to keep this above approximately 4 MB by aggressively trimming the working sets including system cache. If this value is constantly between 3-4 MB, it is cause for investigation. One counter you might expect would be for total physical memory. Unfortunately, there is no specific counter for total physical memory. There are however many other ways to determine total physical memory. One of the most common is by viewing the Performance tab of Task Manager. Page File Usage The only counters that show current page file space usage are Page File:% Usage and Page File:% Peak Usage. These two counters will give you an indication of the amount of space currently used in the page file. Memory Performance Memory Counters There are a number of counters that you need to investigate when evaluating memory performance. As stated previously, no single counter provides the entire picture. You will need to consider many different counters to begin to understand the true state of memory. Note The counters listed are a subset of the counters you should capture. *Available Bytes In general, it is desirable to see Available Bytes above 5 MB. SQL Servers goal on Intel platforms, running Windows NT, is to assure there is approximately 5+ MB of free memory. After Available Bytes reaches 4 MB, the Working Set Manager will start to aggressively trim the working sets of process and, finally, the system cache. This is not to say that working set trimming does not happen before 4 MB, but it does become more pronounced as the number of available bytes decreases below 4 MB. Page Faults/sec Page Faults/sec represents the total number of hard and soft page faults. This value includes the System Working Set as well. Keep this in mind when evaluating the amount of paging activity in the system. Because this counter includes paging associated with the System Cache, a server acting as a file server may have a much higher value than a dedicated SQL Server may have. The System Working Set is covered in depth on the next slide. Because Page Faults/sec includes soft faults, this counter is not as useful as Pages/sec, which represents hard page faults. Because of the associated I/O, hard page faults tend to be much more expensive. *Pages/sec Pages/sec represent the number of pages written/read from disk because of hard page faults. It is the sum of Memory: Pages Input/sec and Memory: Pages Output/sec. 
Because it is counted in numbers of pages, it can be compared to other counts of pages, such as Memory: Page Faults/sec, without conversion. On a well-tuned system, this value should be consistently low. In and of itself, a high value for this counter does not necessarily indicate a problem. You will need to isolate the paging activity to determine whether it is associated with in-paging, out-paging, memory-mapped file activity, or the system cache. Any one of these activities will contribute to this counter.

Note Paging in and of itself is not necessarily a bad thing. Paging is only "bad" when a critical process must wait for its pages to be in-paged, or when the amount of read/write paging is causing excessive kernel time or disk I/O, thus interfering with normal user mode processing.

Tip (Memory: Pages/sec * 4096) / (PhysicalDisk: Disk Bytes/sec) yields the approximate fraction of total disk I/O that is due to paging; for example, 250 pages/sec * 4096 bytes is about 1 MB/sec of paging, which against 10 MB/sec of total disk throughput is roughly 10 percent. Note, this is only relevant on X86 platforms with a 4 KB page size.

Page Reads/sec (Hard Page Fault)
Page Reads/sec is the number of times the disk was accessed to resolve hard page faults. It includes reads to satisfy faults in the file system cache (usually requested by applications) and in non-cached memory mapped files. This counter counts numbers of read operations, without regard to the numbers of pages retrieved by each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

Page Writes/sec (Hard Page Fault)
Page Writes/sec is the number of times pages were written to disk to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data, not code. This counter counts write operations, without regard to the number of pages written in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

*Pages Input/sec (Hard Page Fault)
Pages Input/sec is the number of pages read from disk to resolve hard page faults. It includes pages retrieved to satisfy faults in the file system cache and in non-cached memory mapped files. This counter counts numbers of pages, and can be compared to other counts of pages, such as Memory: Page Faults/sec, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. This is one of the key counters to monitor for potential performance complaints. Because a process must wait for a read page fault to be resolved, read page faults have a direct impact on the perceived performance of a process.

*Pages Output/sec (Hard Page Fault)
Pages Output/sec is the number of pages written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows NT writes more pages back to disk to free up space when physical memory is in short supply. This counter counts numbers of pages, and can be compared to other counts of pages, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Like Pages Input/sec, this is one of the key counters to monitor.
Processes will generally not notice write page faults unless the disk I/O begins to interfere with normal data operations. Demand Zero Faults/Sec (Soft Page Fault) Demand Zero Faults/sec is the number of page faults that require a zeroed page to satisfy the fault. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security feature of Windows NT. Windows NT maintains a list of zeroed pages to accelerate this process. This counter counts numbers of faults, without regard to the numbers of pages retrieved to satisfy the fault. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Transition Faults/Sec (Soft Page Fault) Transition Faults/sec is the number of page faults resolved by recovering pages that were on the modified page list, on the standby list, or being written to disk at the time of the page fault. The pages were recovered without additional disk activity. Transition faults are counted in numbers of faults, without regard for the number of pages faulted in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. System Working Set System Working Set Like processes, the system page-able code and data are managed by a working set. For the purpose of this course, that working set is referred to as the System Working Set. This is done to differentiate the system cache portion of the working set from the entire working set. There are five different types of pages that make up the System Working Set. They are: system cache; paged pool; page-able code and data in ntoskrnl.exe; page-able code, and data in device drivers and system-mapped views. Unfortunately, some of the counters that appear to represent the system cache actually represent the entire system working set. Where noted system cache actually represents the entire system working set. Note The counters listed are a subset of the counters you should capture. *Memory: Cache Bytes (Represents Total System Working Set) Represents the total size of the System Working Set including: system cache; paged pool; pageable code and data in ntoskrnl.exe; pageable code and data in device drivers; and system-mapped views. Cache Bytes is the sum of the following counters: System Cache Resident Bytes, System Driver Resident Bytes, System Code Resident Bytes, and Pool Paged Resident Bytes. Memory: System Cache Resident Bytes (System Cache) System Cache Resident Bytes is the number of bytes from the file system cache that are resident in physical memory. Windows 2000 Cache Manager works with the memory manager to provide virtual block stream and file data caching. For more information, see also…Inside Windows 2000,Third Edition, pp. 645-650 and p. 656. Memory: Pool Paged Resident Bytes Represents the physical memory consumed by Paged Pool. This counter should NOT be monitored by itself. You must also monitor Memory: Paged Pool. A leak in the pool may not show up in Pool paged Resident Bytes. Memory: System Driver Resident Bytes Represents the physical memory consumed by driver code and data. System Driver Resident Bytes and System Driver Total Bytes do not include code that must remain in physical memory and cannot be written to disk. Memory: System Code Resident Bytes Represents the physical memory consumed by page-able system code. 
System Code Resident Bytes and System Code Total Bytes do not include code that must remain in physical memory and cannot be written to disk.

Working Set Performance Counter
You can measure the number of page faults in the System Working Set by monitoring the Memory: Cache Faults/sec counter. Contrary to the "Explain" text shown in System Monitor, this counter measures the total amount of page faults/sec in the System Working Set, not only the System Cache. You cannot measure the performance of the System Cache using this counter alone. For more information, see also Inside Windows 2000, Third Edition, p. 656.

Note You will find that in general the working set manager will usually trim the working sets of normal processes prior to trimming the system working set.

System Cache
The Windows 2000 cache manager provides a write-back cache with lazy writing and intelligent read-ahead. Files are not written to disk immediately but deferred until the cache manager calls the memory manager to flush the cache. This helps to reduce the total number of I/Os. Once per second, the lazy writer thread queues one-eighth of the dirty pages in the system cache to be written to disk. If this is not sufficient to meet the needs, the lazy writer will calculate a larger value. If the dirty page threshold is exceeded before the lazy writer wakes, the cache manager will wake the lazy writer.

Important It should be pointed out that mapped files, or files opened with FILE_FLAG_NO_BUFFERING, do not participate in the System Cache. For more information regarding mapped views, see also Inside Windows 2000, Third Edition, p. 669.

For those applications that would like to leverage the system cache but cannot tolerate write delays, the cache manager supports write-through operations via the FILE_FLAG_WRITE_THROUGH flag. On the other hand, an application can disable lazy writing by using the FILE_ATTRIBUTE_TEMPORARY attribute. If this flag is enabled, the lazy writer will not write the pages to disk unless there is a shortage of memory or the file is closed.

Important Microsoft SQL Server uses both FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH.

Tip The file system cache is not represented by a static amount of memory. The system cache can and will grow. It is not unusual to see the system cache consume a large amount of memory. Like other working sets, it is trimmed under pressure but is generally the last thing to be trimmed.

System Cache Performance Counters
The counters listed are a subset of the counters you should capture.

Cache: Data Flushes/sec
Data Flushes/sec is the rate at which the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request. More than one page can be transferred on each flush operation.

Cache: Data Flush Pages/sec
Data Flush Pages/sec is the number of pages the file system cache has flushed to disk as a result of a request to flush or to satisfy a write-through file write request.

Cache: Lazy Write Flushes/sec
Represents the rate of lazy-write flushes of the system cache per second. More than one page can be transferred on each flush operation.

Cache: Lazy Write Pages/sec
Lazy Write Pages/sec is the rate at which the Lazy Writer thread has written pages to disk.

Note When looking at Memory: Cache Faults/sec, you can remove cache write activity by subtracting (Cache: Data Flush Pages/sec + Cache: Lazy Write Pages/sec). This will give you a better idea of how much other page faulting activity is associated with the other components of the System Working Set. However, you should note that there is no easy way to remove the page faults associated with file cache read activity.
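To make that adjustment concrete, here is a purely hypothetical worked example (the figures are illustrative, not measured): if Memory: Cache Faults/sec averages 400 while Cache: Data Flush Pages/sec averages 120 and Cache: Lazy Write Pages/sec averages 180, then approximately 400 - (120 + 180) = 100 faults/sec remain, and that remainder still mixes cache read faults with faults from the other components of the System Working Set.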
For more information, see the following Knowledge Base articles:
Q145952 (NT4) Event ID 26 Appears If Large File Transfer Fails
Q163401 (NT4) How to Disable Network Redirector File Caching
Q181073 (SQL 6.5) DUMP May Cause Access Violation on Win2000

System Pool
As documented earlier, there are two types of shared pool memory: non-paged pool and paged pool. Like private memory, pool memory is susceptible to a leak.

Nonpaged Pool
Miscellaneous kernel code and structures, and drivers that need working memory while at or above DPC/dispatch level, use non-paged pool. The primary counter for non-paged pool is Memory: Pool Nonpaged Bytes. This counter will usually be between 3 and 30 MB.

Paged Pool
Drivers that do not need to access memory at or above DPC/dispatch level are one of the primary users of paged pool; however, any process can use paged pool by leveraging the ExAllocatePool calls. Paged pool also contains the Registry and file and printing structures. The primary counter for monitoring paged pool is Memory: Pool Paged Bytes. This counter will usually be between 10 and 30 MB plus the size of the Registry. To determine how much of paged pool is currently resident in physical memory, monitor Memory: Pool Paged Resident Bytes.

Note The paged and non-paged pools are two of the components of the System Working Set. If a suspected leak is clearly visible in the overview and not associated with a process, then it is most likely a pool leak. If the leak is not associated with SQL Server handles, OLE DB providers, XPROCs, or SP_OA calls, then most likely this call should be pushed to the Windows NT group.

For more information, see the following Knowledge Base articles:
Q265028 (MS) Pool Tags
Q258793 (MS) How to Find Memory Leaks by Using Pool Bitmap Analysis
Q115280 (MS) Finding Windows NT Kernel Mode Memory Leaks
Q177415 (MS) How to Use Poolmon to Troubleshoot Kernel Mode Memory Leaks
Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT
Q247904 How to Configure Paged Pool and System PTE Memory Areas

Tip To isolate pool leaks you will need to isolate all drivers and third-party processes. This should be done by disabling each service or driver one at a time and monitoring the effect. You can also monitor paged and non-paged pool through poolmon. If pool tagging has been enabled via GFLAGS, you may be able to associate the leak with a particular tag. If you suspect a particular tag, you should involve the platform support group.

Process Memory Counters
Process _Total Limitations
Although the _Total rollup for Process: Private Bytes, Virtual Bytes, Handles, and Threads represents the key resources being used across all processes, it can be misleading when evaluating a memory leak. This is because a leak in one process may be masked by a decrease in another process.

Note The counters listed are a subset of the counters you should capture.

Tip When analyzing memory leaks, it is often easier to build a separate chart or report showing only one or two key counters for all processes. The primary counter used for leak analysis is Private Bytes, but processes can leak handles and threads just as easily. After a suspect process is located, build a separate chart that includes all the counters for that process.
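When System Monitor is not available, the same per-process figures can be sampled programmatically. The following minimal C sketch (an illustration only; the program name and command-line argument are hypothetical) uses the PSAPI call GetProcessMemoryInfo to poll one process's working set, page file usage, and page fault count.

    /* Minimal sketch (illustration only): poll one process's memory usage
       with PSAPI. Hypothetical usage: procmem.exe <pid>
       Link with psapi.lib. Error handling kept to a minimum. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        HANDLE process;
        PROCESS_MEMORY_COUNTERS pmc;
        int i;

        if (argc < 2)
            return 1;

        process = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                              FALSE, atoi(argv[1]));
        if (process == NULL)
            return 1;

        for (i = 0; i < 10; i++) {
            if (GetProcessMemoryInfo(process, &pmc, sizeof(pmc))) {
                printf("WorkingSetSize: %lu  PagefileUsage: %lu  PageFaultCount: %lu\n",
                       (unsigned long)pmc.WorkingSetSize,
                       (unsigned long)pmc.PagefileUsage,
                       (unsigned long)pmc.PageFaultCount);
            }
            Sleep(5000);   /* sample every five seconds */
        }
        CloseHandle(process);
        return 0;
    }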
Individual Process Counters When analyzing individual process for memory leaks you should include the counters listed.  Process: % Processor Time  Process: Working Set (includes shared pages)  Process: Virtual Bytes  Process: Private Bytes  Process: Page Faults/sec  Process: Handle Count  Process: Thread Count  Process: Pool Paged Bytes  Process: Pool Nonpaged Bytes Tip WINLOGON, SVCHOST, services, or SPOOLSV are referred to as HELPER processes. They provide core functionality for many operations and as such are often extended by the addition of third-party DLLs. Tlist –s may help identify what services are running under a particular helper. Helper Processes Helper Processes Winlogon, Services, and Spoolsv and Svchost are examples of what are referred to as HELPER processes. They provide core functionality for many operations and, as such, are often extended by the addition of third-party DLLs. Running every service in its own process can waste system resources. Consequently, some services run in their own processes while others share a process with other services. One problem with sharing a process is that a bug in one service may cause the entire process to fail. The resource kit tool, Tlist when used with the –s qualifier can help you identify what services are running in what processes. WINLOGON Used to support GINAs. SPOOLSV SPOOLSV is responsible for printing. You will need to investigate all added printing functionality. Services Service is responsible for system services. Svchost.exe Svchost.exe is a generic host process name for services that are run from dynamic-link libraries (DLLs). There can be multiple instances of Svchost.exe running at the same time. Each Svchost.exe session can contain a grouping of services, so that separate services can be run depending on how and where Svchost.exe is started. This allows for better control and debugging. The Effect of Memory on Other Components Memory Drives Overall Performance Processor, cache, bus speeds, I/O, all of these resources play a roll in overall perceived performance. Without minimizing the impact of these components, it is important to point out that a shortage of memory can often have a larger perceived impact on performance than a shortage of some other resource. On the other hand, an abundance of memory can often be leveraged to mask bottlenecks. For instance, in certain environments, file system cache can significantly reduce the amount of disk I/O, potentially masking a slow I/O subsystem. Effect on I/O I/O can be driven by a number of memory considerations. Page read/faults will cause a read I/O when a page is not in memory. If the modified page list becomes too long the Modified Page Writer and Mapped Page Writer will need to start flushing pages causing disk writes. However, the one event that can have the greatest impact is running low on available memory. In this case, all of the above events will become more pronounced and have a larger impact on disk activity. Effect on CPU The most effective use of a processor from a process perspective is to spend as much time possible executing user mode code. Kernel mode represents processor time associated with doing work, directly or indirectly, on behalf of a thread. This includes items such as synchronization, scheduling, I/O, memory management, and so on. Although this work is essential, it takes processor cycles and the cost, in cycles, to transition between user and kernel mode is expensive. 
Because all memory management and I/O functions must be done in kernel mode, it follows that the fewer the memory resources the more cycles are going to be spent managing those resources. A direct result of low memory is that the Working Set Manager, Modified Page Writer and Mapped Page Writer will have to use more cycles attempting to free memory. Analyzing Memory Look for Trends and Trend Relationships Troubleshooting performance is about analyzing trends and trend relationships. Establishing that some event happened is not enough. You must establish the effect of the event. For example, you note that paging activity is high at the same time that SQL Server becomes slow. These two individual facts may or may not be related. If the paging is not associated with SQL Servers working set, or the disks SQL is using there may be little or no cause/affect relationship. Look at Physical Memory First The first item to look at is physical memory. You need to know how much physical and page file space the system has to work with. You should then evaluate how much available memory there is. Just because the system has free memory does not mean that there is not any memory pressure. Available Bytes in combination with Pages Input/sec and Pages Output/sec can be a good indicator as to the amount of pressure. The goal in a perfect world is to have as little hard paging activity as possible with available memory greater than 5 MB. This is not to say that paging is bad. On the contrary, paging is a very effective way to manage a limited resource. Again, we are looking for trends that we can use to establish relationships. After evaluating physical memory, you should be able to answer the following questions:  How much physical memory do I have?  What is the commit limit?  Of that physical memory, how much has the operating system committed?  Is the operating system over committing physical memory?  What was the peak commit charge?  How much available physical memory is there?  What is the trend associated with committed and available? Review System Cache and Pool Contribution After you understand the individual process memory usage, you need to evaluate the System Cache and Pool usage. These can and often represent a significant portion of physical memory. Be aware that System Cache can grow significantly on a file server. This is usually normal. One thing to consider is that the file system cache tends to be the last thing trimmed when memory becomes low. If you see abrupt decreases in System Cache Resident Bytes when Available Bytes is below 5 MB you can be assured that the system is experiencing excessive memory pressure. Paged and non-paged pool size is also important to consider. An ever-increasing pool should be an indicator for further research. Non-paged pool growth is usually a driver issue, while paged pool could be driver-related or process-related. If paged pool is steadily growing, you should investigate each process to see if there is a specific process relationship. If not you will have to use tools such as poolmon to investigate further. Review Process Memory Usage After you understand the physical memory limitations and cache and pool contribution you need to determine what components or processes are creating the pressure on memory, if any. Be careful if you opt to chart the _Total Private Byte’s rollup for all processes. This value can be misleading in that it includes shared pages and can therefore exceed the actual amount of memory being used by the processes. 
The _Total rollup can also mask processes that are leaking memory, because other processes may be freeing memory, thus creating a balance between leaked and freed memory. Identify processes that expand their working set over time for further analysis. Also, review handles and threads, because both use resources and can potentially be mismanaged.
After evaluating the process resource usage, you should be able to answer the following:
• Are any of the processes increasing their private bytes over time?
• Are any processes growing their working set over time?
• Are any processes increasing the number of threads or handles over time?
• Are any processes increasing their use of pool over time?
• Is there a direct relationship between the above named resources and total committed memory or available memory?
• If there is a relationship, is this normal behavior for the process in question? For example, SQL Server does not commit ‘min memory’ on startup; these pages are faulted into the working set as needed. This is not necessarily an indication of a memory leak.
• If there is clearly a leak in the overview and it is not identifiable in the process counters, it is most likely in the pool.
• If the leak in pool is not associated with SQL Server handles, then more often than not it is not a SQL Server issue. There is, however, the possibility that the leak could be associated with third-party XPROCS, SP_OA* calls, or OLE DB providers.
Review Paging Activity and Its Impact on CPU and I/O
As stated earlier, paging is not in and of itself a bad thing. When starting a process, the system faults in the pages of the executable as they are needed. This is preferable to loading the entire image at startup. The same can be said for memory mapped files and file system cache. All of these features leverage the ability of the system to fault in pages as needed. The greatest impact of paging on a process is when the process must wait for an in-page fault, or when page file activity represents a significant portion of the disk activity on the disk the application is actively using.
After evaluating page fault activity, you should be able to answer the following questions:
• What is the relationship between Page Faults/sec and Pages Input/sec + Pages Output/sec?
• What is the relationship, if any, between hard page faults and available memory?
• Does paging activity represent a significant portion of processor or I/O resource usage?
Don't Prematurely Jump to Any Conclusions
Analyzing memory pressure takes time and patience. An individual counter in and of itself means little. It is only when you start to explore relationships between cause and effect that you can begin to understand the impact of a particular counter. The key thoughts to remember are:
• With the exception of a swap (when the entire process's working set has been swapped out/in), hard page faults that must be resolved by reads are the most expensive in terms of their effect on a process's perceived performance.
• In general, page writes associated with page faults do not directly affect a process's perceived performance, unless that process is waiting on a free page to be made available. Page file activity can become a problem if that activity competes for a significant percentage of the disk throughput in a heavily I/O oriented environment. That assumes, of course, that the page file resides on the same disk the application is using.
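Tip One practical way to explore these relationships is to export a counter log to CSV (for example with relog -f CSV) and compute the ratios directly. The sketch below is illustrative only and is not part of the course tools: it assumes the third-party pandas package, the column names are placeholders that you must replace with the exact counter headers from your own log, and a hard-fault page count is only an approximation of paging I/O since one disk transfer can move several pages.

import pandas as pd

# Rough sketch: estimate how much of the fault activity is hard faults and how
# paging compares with overall disk activity, from a counter log exported to CSV.
# Column names are placeholders; real Perfmon CSV headers look like
# "\\MACHINE\Memory\Pages Input/sec".
def paging_summary(csv_path):
    df = pd.read_csv(csv_path)
    faults = df["Page Faults/sec"].sum()
    hard = (df["Pages Input/sec"] + df["Pages Output/sec"]).sum()
    transfers = df["Disk Transfers/sec"].sum()

    print("hard-fault share of all faults: %.1f%%" % (100.0 * hard / max(faults, 1)))
    print("paging pages per disk transfer: %.2f" % (hard / max(transfers, 1)))

paging_summary("memory_counters.csv")   # hypothetical CSV export of your counter log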
Lab 3.1: Analyzing System Memory Using System Monitor
Exercise 1 – Troubleshooting the Cardinal1.log File
Students will evaluate an existing System Monitor log and determine if there is a problem and what the problem is. Students should be able to isolate the issue as a memory problem, locate the offending process, and determine whether or not this is a pool issue.
Exercise 2 – LeakyApp Behavior
Students will start LeakyApp and monitor memory, page file and cache counters to better understand the dynamics of these counters.
Exercise 3 – Process Swap Due to Minimizing of the Cmd Window
Students will start SQL Server from the command line while viewing SQL process performance counters. Students will then minimize the window and note the effect on the working set.
Overview
What You Will Learn
After completing this lab, you will be able to:
• Use some of the basic functions within System Monitor.
• Troubleshoot one or more common performance scenarios.
Before You Begin
Prerequisites
To complete this lab, you need the following:
• Windows 2000
• SQL Server 2000
• Lab Files Provided
• LeakyApp.exe (Resource Kit)
Estimated time to complete this lab: 45 minutes
Exercise 1: Troubleshooting the Cardinal1.log File
In this exercise, you will analyze a log file from an actual system that was having performance problems. Like an actual support engineer, you will not have much information from which to draw conclusions. The customer has sent you this log file and it is up to you to find the cause of the problem. However, unlike the real world, you have an instructor available to give you hints should you become stuck.
Goal
Review the Cardinal1.log file (this file is from Windows NT 4.0 Performance Monitor, which Windows 2000 can read). Chart the log file and begin to investigate the counters to determine what is causing the performance problems. Your goal should be to isolate the problem to a major area such as pool or virtual address space, and then begin to isolate the problem to a specific process or thread. This lab requires access to the log file Cardinal1.log located in C:\LABS\M3\LAB1\EX1.
To analyze the log file
1. Using the Performance MMC, select the System Monitor snap-in, and click the View Log File Data button (the icon looks like a disk).
2. Under Files of type, choose PERFMON Log Files (*.log).
3. Navigate to the folder containing the Cardinal1.log file and open it.
4. Begin examining counters to find what might be causing the performance problems.
When examining some of these counters, you may notice that some of them go off the top of the chart. It may be necessary to adjust the scale on these. This can be done by right-clicking the rightmost pane and selecting Properties. Select the Data tab. Select the counter that you wish to modify. Under the Scale option, change the scale value, which makes the counter data visible on the chart. You may need to experiment with different scale values before finding the ideal value. Also, it may sometimes be beneficial to adjust the vertical scale for the entire chart. This can be done by selecting the Graph tab on the Properties page. In the Vertical scale area, adjust the Maximum and Minimum values to best fit the data on the chart.
Lab 3.1, Exercise 1: Results
Exercise 2: LeakyApp Behavior
In this lab, you will have an opportunity to work with a partner to monitor a live system, which is suffering from a simulated memory leak.
Goal
During this lab, your goal is to observe the system behavior when memory starts to become a limited resource.
Specifically, you will want to monitor committed memory, available memory, the system working set (including the file system cache) and each process's working set. At the end of the lab, you should be able to provide answers to the listed questions.
To monitor a live system with a memory leak
1. Choose one of the two systems as a victim on which to run the leakyapp.exe program. It is recommended that you boot using the /MAXMEM=128 option so that this lab goes a little faster. You and your partner should decide which server will play the role of the problematic server and which server is to be used for monitoring purposes.
2. On the problematic server, start the LeakyApp program.
3. On the monitoring system, create a counter log that includes all the counters needed to troubleshoot a memory problem. This should include PhysicalDisk counters if you think paging is a problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested capture interval is five seconds.
4. After the counters have been started, start the leaky application program.
5. Click Start Leaking. The button will now change to Stop Leaking, which indicates that the system is now leaking memory.
6. After LeakyApp shows the page file is 50 percent full, click Stop Leaking. Note that the process has not given back its memory yet. After approximately one minute, exit.
Lab 3.1, Exercise 2: Questions
After analyzing the counter logs you should be able to answer the following:
1. Under which system memory counter does the leak show up clearly? Memory: Committed Bytes
2. What process counter looked very similar to the overall system counter that showed the leak? Private Bytes
3. Is the leak in Paged Pool, Non-paged pool, or elsewhere? Elsewhere
4. At what point did Windows 2000 start to aggressively trim the working sets of all user processes? <5 MB free
5. Was the System Working Set trimmed before or after the working sets of other processes? After
6. What counter showed this? Memory: Cache Bytes
7. At what point was the File System Cache trimmed? After the first pass through all other working sets
8. What was the effect on all the processes' working sets when the application quit leaking? None
9. What was the effect on all the working sets when the application exited? Nothing, initially; but all grew fairly quickly based on use
10. When the server was running low on memory, which was Windows spending more time doing, paging to disk or in-paging? Paging to disk, initially; however, as other applications began to run, in-paging increased
Exercise 3: Minimizing a Command Window
In this exercise, you will have an opportunity to observe the behavior of Windows 2000 when a command window is minimized.
Goal
During this lab, your goal is to observe the behavior of Windows 2000 when a command window becomes minimized. Specifically, you will want to monitor the private bytes, virtual bytes, and working set of SQL Server when the command window is minimized. At the end of the lab, you should be able to provide answers to the listed questions.
To monitor a command window's working set as the window is minimized
1. Using System Monitor, create a counter list that logs all the counters needed to troubleshoot a memory problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested capture interval is five seconds.
2. After the counters have been started, start a Command Prompt window on the target system.
3. In the command window, start SQL Server from the command line. Example: sqlservr.exe -c -sINSTANCE1
4. After SQL Server has successfully started, minimize the Command Prompt window.
5. Wait approximately two minutes, and then restore the window.
6. Wait approximately two minutes, and then stop the counter log.
Lab 3.1, Exercise 3: Questions
After analyzing the counter logs you should be able to answer the following questions:
1. What was the effect on SQL Server's private bytes, virtual bytes, and working set when the window was minimized? Private Bytes and Virtual Bytes remained the same, while the Working Set went to 0
2. What was the effect on SQL Server's private bytes, virtual bytes, and working set when the window was restored? None; the Working Set did not grow until SQL Server accessed the pages and faulted them back in on an as-needed basis
SQL Server Memory Overview
Now that you have a better understanding of how Windows 2000 manages memory resources, you can take a closer look at how SQL Server 2000 manages its memory. During the course of the lecture and labs you will have the opportunity to monitor SQL Server's use of memory under varying conditions using both System Monitor counters and SQL Server tools.
SQL Server Memory Management Goals
Because SQL Server has in-depth knowledge about the relationships between data and the pages they reside on, it is in a better position to judge when and what pages should be brought into memory, how many pages should be brought in at a time, and how long they should be resident. SQL Server's primary goals for management of its memory are the following:
• Be able to dynamically adjust for varying amounts of available memory.
• Be able to respond to outside memory pressure from other applications.
• Be able to adjust memory dynamically for internal components.
Items Covered
• SQL Server Memory Definitions
• SQL Server Memory Layout
• SQL Server Memory Counters
• Memory Configuration Options
• Buffer Pool Performance and Counters
• Set Aside Memory and Counters
• General Troubleshooting Process
• Memory Myths and Tips
SQL Server Memory Definitions
Pool A group of resources, objects, or logical components that can service a resource allocation request.
Cache The management of a pool or resource, the primary goal of which is to increase performance.
Bpool The Bpool (Buffer Pool) is a single static class instance. The Bpool is made up of 8-KB buffers and can be used to handle data pages or external memory requests. There are three basic types or categories of committed memory in the Bpool:
• Hashed Data Pages
• Committed Buffers on the Free List
• Buffers known by their owners (refer to the definition of Stolen)
Consumer A consumer is a subsystem that uses the Bpool. A consumer can also be a provider to other consumers. There are five consumers and two advanced consumers who are responsible for the different categories of memory. The following list represents the consumers and a partial list of their categories:
• Connection – Responsible for PSS and ODS memory allocations
• General – Resource structures, parse headers, lock manager objects
• Utilities – Recovery, Log Manager
• Optimizer – Query optimization
• Query Plan – Query plan storage
Advanced Consumer Along with the five consumers, there are two advanced consumers. They are:
• Ccache – Procedure cache. Accepts plans from the Optimizer and Query Plan consumers.
It is responsible for managing that memory and determines when to release the memory back to the Bpool.
• Log Cache – Managed by the LogMgr, which uses the Utility consumer to coordinate memory requests with the Bpool.
Reservation Requesting the future use of a resource. A reservation is a reasonable guarantee that the resource will be available in the future.
Committed Producing the physical resource.
Allocation The act of providing the resource to a consumer.
Stolen The act of getting a buffer from the Bpool is referred to as stealing a buffer. If the buffer is stolen and hashed for a data page, it is referred to as, and counted as, a hashed buffer, not a stolen buffer. Stolen buffers, on the other hand, are buffers used for things such as procedure cache and SRV_PROC structures.
Target Target memory is the amount of memory SQL Server would like to maintain as committed memory. Target memory is based on the min and max server configuration values and current available memory as reported by the operating system. The actual target calculation is operating system specific.
Memory to Leave (Set Aside) The virtual address space set aside to ensure there is sufficient address space for thread stacks, XPROCS, COM objects, etc.
Hashed Page A page in pool that represents a database page.
SQL Server Memory Layout
Virtual Address Space
When SQL Server is started, the smaller of physical RAM and the virtual address space supported by the OS is evaluated. There are many possible combinations of OS versions and memory configurations. For example, you could be running Microsoft Windows 2000 Advanced Server with 2 GB or possibly 4 GB of memory. To avoid page file use, the appropriate memory level is evaluated for each configuration.
Important Utilities can inject a DLL into the process address space by using HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs. When the USER32.dll library is mapped into the process space, so, too, are the DLLs listed in the Registry key. To determine what DLLs are running in SQL Server's address space you can use tlist.exe. You can also use a tool such as Depends from Microsoft or HandleEx from http://www.sysinternals.com.
Memory to Leave
As stated earlier, there are many possible configurations of physical memory and address space. It is possible for physical memory to be greater than virtual address space. To ensure that some virtual address space is always available for things such as thread stacks and external needs such as XPROCS, SQL Server reserves a small portion of virtual address space prior to determining the size of the buffer pool. This address space is referred to as Memory To Leave. Its size is based on the number of anticipated thread stacks and a default value for external needs referred to as cmbAddressSave. After reserving the buffer pool space, the Memory To Leave reservation is released.
Buffer Pool Space
During startup, SQL Server must determine the maximum size of the buffer pool so that the BUF, BUFHASH and COMMIT BITMAP structures that are used to manage the Bpool can be created. It is important to understand that SQL Server does not take 'max memory' or existing memory pressure into consideration. The reserved address space of the buffer pool remains static for the life of the SQL Server process. However, the committed space varies as necessary to provide dynamic scaling. Remember, only the committed memory affects the overall memory usage on the machine.
This ensures that the max memory configuration setting can be dynamically changed with minimal changes needed to the Bpool. The reserved space does not need to be adjusted and is maximized for the current machine configuration. Only the committed buffers need to be limited to maintain a specified max server memory (MB) setting.
SQL Server Startup Pseudo Code
The following pseudo code represents the process SQL Server goes through on startup.
Warning This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.
• Determine the size of cmbAddressSave (-g)
• Determine Total Physical Memory
• Determine Available Physical Memory
• Determine Total Virtual Memory
• Calculate MemToLeave: maxworkerthreads * (stacksize = 512 KB) + (cmbAddressSave = 256 MB)
• Reserve MemToLeave and set PAGE_NOACCESS
• Check for AWE, test to see if it makes sense to use it, and log the results
  • Min(Available Memory, Max Server Memory) > Virtual Memory
  • Supports Read Scatter
  • SQL Server not started with -f
  • AWE enabled via sp_configure
  • Enterprise Edition
  • Lock Pages In Memory user right enabled
• Calculate Virtual Address Limit: VA Limit = Min(Physical Memory, Virtual Memory – MemToLeave)
• Calculate the number of physical and virtual buffers that can be supported
  AWE Present: Physical Buffers = RAM / (PAGESIZE + Physical Overhead); Virtual Buffers = VA Limit / (PAGESIZE + Virtual Overhead)
  AWE Not Present: Physical Buffers = Virtual Buffers = VA Limit / (PAGESIZE + Physical Overhead + Virtual Overhead)
• Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS)
• Allocate and commit the buffer management structures
• Reserve the address space required to support the Bpool buffers
• Release the MemToLeave
SQL Server Startup Pseudo Code Example
The following is an example based on the pseudo code represented on the previous page. This example is based on a machine with 384 MB of physical memory, not using AWE or /3GB.
Note cmbAddressSave was changed between SQL Server 7.0 and SQL Server 2000. For SQL Server 7.0, cmbAddressSave was 128.
Warning This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.
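As a supplement to the worked example that follows, the same arithmetic can be expressed as a short Python sketch. It is illustrative only and is not SQL Server's actual implementation: it follows the non-AWE branch of the pseudo code, uses the defaults given in the text (255 worker threads, 512-KB stacks, 256-MB cmbAddressSave) and ignores the per-buffer overhead terms.

MB = 1024 * 1024
KB = 1024
PAGE_SIZE = 8 * KB        # 8-KB Bpool buffers
MIN_BUFFERS = 1024

def buffer_pool_plan(physical_mb, virtual_mb=2048,
                     max_worker_threads=255, stack_kb=512,
                     cmb_address_save_mb=256):
    # MemToLeave = worker thread stacks + set-aside for external consumers
    mem_to_leave = max_worker_threads * stack_kb * KB + cmb_address_save_mb * MB
    # VA Limit = Min(Physical Memory, Virtual Memory - MemToLeave)
    va_limit = min(physical_mb * MB, virtual_mb * MB - mem_to_leave)
    # Non-AWE case, ignoring the per-buffer overhead terms
    buffers = max(va_limit // PAGE_SIZE, MIN_BUFFERS)
    return mem_to_leave / MB, va_limit / MB, buffers

# The 384-MB machine used in the example that follows:
print(buffer_pool_plan(384))   # -> (383.5, 384.0, 49152); the text's 48664 figure reflects per-buffer overhead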
• Determine the size of cmbAddressSave (no -g, so 256 MB)
• Determine Total Physical Memory (384 MB)
• Determine Available Physical Memory (384 MB)
• Determine Total Virtual Memory (2 GB)
• Calculate MemToLeave: maxworkerthreads * (stacksize = 512 KB) + (cmbAddressSave = 256 MB); 255 * .5 MB + 256 MB = 384 MB
• Reserve MemToLeave and set PAGE_NOACCESS
• Check for AWE, test to see if it makes sense to use it, and log the results (AWE not enabled)
• Calculate Virtual Address Limit: VA Limit = Min(Physical Memory, Virtual Memory – MemToLeave); 384 MB = Min(384 MB, 2 GB – 384 MB)
• Calculate the number of physical and virtual buffers that can be supported (AWE not present): 48664 (approx) = 384 MB / (8 KB + overhead)
• Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS); 48664 = Max(48664, 1024)
• Allocate and commit the buffer management structures
• Reserve the address space required to support the Bpool buffers
• Release the MemToLeave
Tip Trace Flag 1604 can be used to view memory allocations on startup. The cmbAddressSave can be adjusted using the -g XXX startup parameter.
SQL Server Memory Counters
The two primary tools for monitoring and analyzing SQL Server memory usage are System Monitor and DBCC MEMORYSTATUS. For detailed information on DBCC MEMORYSTATUS, refer to Q271624, Interpreting the Output of the DBCC MEMORYSTATUS Command.
Important The following represents SQL Server 2000 counters. The counters presented are not the same as the counters for SQL Server 7.0. The SQL Server 7.0 counters are listed in the appendix.
Determining Memory Usage for OS and BPOOL
Memory Manager: Total Server Memory (KB) – Represents all of SQL Server's memory usage.
Buffer Manager: Total Pages – Represents total Bpool usage.
To determine how much of Total Server Memory (KB) represents MemToLeave space, subtract Buffer Manager: Total Pages (each page is 8 KB). The result can be verified against DBCC MEMORYSTATUS, specifically Dynamic Memory Manager: OS In Use. It should, however, be noted that this value only represents requests that went through the Bpool. Memory reserved outside of the Bpool by components such as COM objects will not show up here, although it will count against SQL Server's private byte count.
Buffer Counts: Target (Buffer Manager: Target Pages) The size the buffer pool would like to be. If this value is larger than committed, the buffer pool is growing.
Buffer Counts: Committed (Buffer Manager: Total Pages) The total number of buffers committed in the OS. This is the current size of the buffer pool.
Buffer Counts: Min Free This is the number of pages that the buffer pool tries to keep on the free list. If the free list falls below this value, the buffer pool will attempt to populate it by discarding old pages from the data or procedure cache.
Buffer Distribution: Free (Buffer Manager / Buffer Partition: Free Pages) This value represents the buffers currently not in use. These are available for data or may be requested by other components and mar
A project model for the FreeBSD Project
Niklas Saers
Copyright © 2002-2005 Niklas Saers
Table of Contents
Foreword
1 Overview
2 Definitions: 2.1. Activity, 2.2. Process, 2.3. Hat, 2.4. Outcome, 2.5. FreeBSD
3 Organisational structure
4 Methodology model: 4.1. Development model, 4.2. Release branches, 4.3. Model summary
5 Hats: 5.1. General Hats (5.1.1. Contributor, 5.1.2. Committer, 5.1.3. Core Team, 5.1.4. Maintainership), 5.2. Official Hats (5.2.1. Documentation project manager, 5.2.2. CVSup Mirror Site Coordinator, 5.2.3. Internationalisation, 5.2.4. Postmaster, 5.2.5. Quality Assurance, 5.2.6. Release Coordination, 5.2.7. Public Relations & Corporate Liaison, 5.2.8. Security Officer, 5.2.9. Source Repository Manager, 5.2.10. Election Manager, 5.2.11. Web site Management, 5.2.12. Ports Manager, 5.2.13. Standards, 5.2.14. Core Secretary, 5.2.15. GNATS Administrator, 5.2.16. Bugmeister, 5.2.17. Donations Liaison Officer, 5.2.18. Admin), 5.3. Process dependent hats (5.3.1. Report originator, 5.3.2. Bugbuster, 5.3.3. Mentor, 5.3.4. Vendor, 5.3.5. Reviewers, 5.3.6. CVSup Mirror Site Admin)
6 Processes: 6.1. Adding new and removing old committers, 6.2. Adding/Removing an official CVSup Mirror, 6.3. Committing code, 6.4. Core election, 6.5. Development of new features, 6.6. Maintenance, 6.7. Problem reporting, 6.8. Reacting to misbehaviour, 6.9. Release engineering
7 Tools: 7.1. Concurrent Versions System (CVS), 7.2. CVSup, 7.3. GNATS, 7.4. Mailman, 7.5. Perforce, 7.6. Pretty Good Privacy, 7.7. Secure Shell
8 Sub-projects: 8.1. The Ports Subproject, 8.2. The FreeBSD Documentation Project
References
List of Figures
3-1. The FreeBSD Project's structure
3-2. The FreeBSD Project's structure with committers in categories
4-1. Jørgensen's model for change integration
4-2. The FreeBSD release tree
4-3. The overall development model
5-1. Overview of official hats
6-1. Process summary: adding a new committer
6-2. Process summary: removing a committer
6-3. Process summary: adding a CVSup mirror
6-4. Process summary: A committer commits code
6-5. Process summary: A contributor commits code
6-6. Process summary: Core elections
6-7. Jørgensen's model for change integration
6-8. Process summary: problem reporting
6-9. Process summary: release engineering
8-1. Number of ports added between 1996 and 2005
Foreword
Up until now, the FreeBSD project has released a number of documents describing techniques for doing different parts of the work. However, a project model summarising how the project is structured is needed because of the increasing number of project members. [1] This paper will provide such a project model and is donated to the FreeBSD Documentation project, where it can evolve together with the project so that it can at any point in time reflect the way the project works. It is based on [Saers, 2003].
I would like to thank the following people for taking the time to explain things that were unclear to me and for proofreading the document: Andrey A. Chernov, Bruce A. Mah, Dag-Erling Smørgrav, Giorgos Keramidas, Ingvil Hovig, Jesper Holck, John Baldwin, John Polstra, Kirk McKusick, Mark Linimon, Marleen Devos, Niels Jørgensen, Nik Clayton, Poul-Henning Kamp, Simon L. Nielsen.
Chapter 1 Overview
A project model is a means to reduce the communications overhead in a project. As shown by [Brooks, 1995], increasing the number of project participants increases the communication in the project exponentially. FreeBSD has during the past few years increased both its mass of active users and committers, and the communication in the project has risen accordingly.
This project model will serve to reduce this overhead by providing an up-to-date description of the project.
During the Core elections in 2002, Mark Murray stated “I am opposed to a long rule-book, as that satisfies lawyer-tendencies, and is counter to the technocentricity that the project so badly needs.” [FreeBSD, 2002B]. This project model is not meant to be a tool to justify creating impositions for developers, but a tool to facilitate coordination. It is meant as a description of the project, with an overview of how the different processes are executed. It is an introduction to how the FreeBSD project works.
The FreeBSD project model will be described as of July 1st, 2004. It is based on Niels Jørgensen's paper [Jørgensen, 2001], FreeBSD's official documents, discussions on FreeBSD mailing lists and interviews with developers.
After providing definitions of terms used, this document will outline the organisational structure (including role descriptions and communication lines), discuss the methodology model and, after presenting the tools used for process control, present the defined processes. Finally, it will outline major sub-projects of the FreeBSD project.
[FreeBSD, 2002A, Sections 1.2 and 1.3] give the vision and the architectural guidelines for the project. The vision is “To produce the best UNIX-like operating system package possible, with due respect to the original software tools ideology as well as usability, performance and stability.” The architectural guidelines help determine whether a problem that someone wants to be solved is within the scope of the project.
Chapter 2 Definitions
2.1. Activity
An “activity” is an element of work performed during the course of a project [PMI, 2000]. It has an output and leads towards an outcome. Such an output can either be an input to another activity or a part of the process' delivery.
2.2. Process
A “process” is a series of activities that lead towards a particular outcome. A process can consist of one or more sub-processes. An example of a process is software design.
2.3. Hat
A “hat” is synonymous with role. A hat has certain responsibilities in a process and for the process outcome. The hat executes activities. It is well defined what issues the hat should be contacted about by the project members and people outside the project.
2.4. Outcome
An “outcome” is the final output of the process. This is synonymous with deliverable, which is defined by [PMI, 2000] as “any measurable, tangible, verifiable outcome, result or item that must be produced to complete a project or part of a project. Often used more narrowly in reference to an external deliverable, which is a deliverable that is subject to approval by the project sponsor or customer”. Examples of outcomes are a piece of software, a decision made or a report written.
2.5. FreeBSD
When saying “FreeBSD” we will mean the BSD derivative UNIX-like operating system FreeBSD, whereas when saying “the FreeBSD Project” we will mean the project organisation.
Chapter 3 Organisational structure
While no-one takes ownership of FreeBSD, the FreeBSD organisation is divided into core, committers and contributors, and is part of the FreeBSD community that lives around it.
Figure 3-1. The FreeBSD Project's structure
The number of committers has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004, and the number of contributors by going through the list of contributions and problem reports.
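The note above on how these figures were obtained can be made concrete with a small script. The sketch below is only an illustration of the idea, not the tool actually used for the paper: it assumes plain "cvs log" output captured to a file, and that revision lines follow the usual "date: ...;  author: ...;" layout, which you may need to adjust for your repository.

import re
import sys

# Matches revision lines such as:
#   date: 2004/01/01 12:00:00;  author: jdoe;  state: Exp;
REVISION = re.compile(r"^date: (\d{4})[/-](\d{2})[/-](\d{2}).*?;\s+author: ([^;]+);")

def count_committers(log_lines, start=(2004, 1, 1), end=(2004, 12, 31)):
    authors = set()
    for line in log_lines:
        m = REVISION.match(line)
        if m:
            date = tuple(int(x) for x in m.groups()[:3])
            if start <= date <= end:
                authors.add(m.group(4).strip())
    return len(authors)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:      # a file captured from "cvs log"
        print(count_committers(f))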
The main resource in the FreeBSD community is its developers: the committers and contributors. It is with their contributions that the project can move forward. Regular developers are referred to as contributors. As of January 1st, 2003, there are an estimated 5500 contributors on the project.
Committers are developers with the privilege of being able to commit changes. These are usually the most active developers, who are willing to spend their time not only integrating their own code but also integrating code submitted by the developers who do not have this privilege. They are also the developers who elect the core team, and they have access to closed discussions.
The project can be grouped into four distinct separate parts, and most developers will focus their involvement in one part of FreeBSD. The four parts are kernel development, userland development, ports and documentation. When referring to the base system, both kernel and userland are meant. This split changes our triangle to look like this:
Figure 3-2. The FreeBSD Project's structure with committers in categories
The number of committers per area has been determined by going through CVS logs from January 1st, 2004 to December 31st, 2004. Note that many committers work in multiple areas, making the total number higher than the real number of committers. The total number of committers at that time was 269.
Committers fall into three groups: committers who are only concerned with one area of the project (for instance file systems), committers who are involved only with one sub-project, and committers who commit to different parts of the code, including sub-projects. Because some committers work on different parts, the total number in the committers section of the triangle is higher than in the above triangle.
The kernel is the main building block of FreeBSD. While the userland applications are protected against faults in other userland applications, the entire system is vulnerable to errors in the kernel. This, combined with the vast number of dependencies in the kernel and the fact that it is not easy to see all the consequences of a kernel change, demands developers with a relatively full understanding of the kernel. Multiple development efforts in the kernel also require closer coordination than userland applications do.
The core utilities, known as userland, provide the interface that identifies FreeBSD: the user interface, shared libraries and external interfaces to connecting clients. Currently, 162 people are involved in userland development and maintenance, many being maintainers for their own part of the code. Maintainership will be discussed in the Maintainership section.
Documentation is handled by The FreeBSD Documentation Project and includes all documents surrounding the FreeBSD project, including the web pages. During 2004 there were 101 people making commits to the FreeBSD Documentation Project.
Ports is the collection of meta-data that is needed to make software packages build correctly on FreeBSD. An example of a port is the port for the web-browser Mozilla. It contains information about where to fetch the source, what patches to apply and how, and how the package should be installed on the system. This allows automated tools to fetch, build and install the package. As of this writing, there are more than 12600 ports available [2], ranging from web servers to games, programming languages and most of the application types that are in use on modern computers. Ports will be discussed further in the section The Ports Subproject.
Chapter 4 Methodology model
4.1. Development model
There is no defined model for how people write code in FreeBSD. However, Niels Jørgensen has suggested a model of how written code is integrated into the project.
Figure 4-1. Jørgensen's model for change integration
The “development release” is the FreeBSD-CURRENT ("-CURRENT") branch and the “production release” is the FreeBSD-STABLE branch ("-STABLE") [Jørgensen, 2001]. This is a model for one change, and shows that after coding, developers seek community review and try integrating the change with their own systems. After integrating the change into the development release, called FreeBSD-CURRENT, it is tested by many users and developers in the FreeBSD community. After it has gone through enough testing, it is merged into the production release, called FreeBSD-STABLE. Unless each stage is finished successfully, the developer needs to go back, make modifications in the code and restart the process. To integrate a change with either -CURRENT or -STABLE is called making a commit.
Jørgensen found that most FreeBSD developers work individually, meaning that this model is used in parallel by many developers on the different ongoing development efforts. A developer can also be working on multiple changes, so that while he is waiting for review or for people to test one or more of his changes, he may be writing another change. As each commit represents an increment, this is a massively incremental model. The commits are in fact so frequent that during one year [3], 85427 commits were made, making a daily average of 233 commits.
Within the “code” bracket in Jørgensen's figure, each programmer has his own working style and follows his own development models. The bracket could very well have been called “development”, as it includes requirements gathering and analysis, system and detailed design, implementation and verification. However, the only output from these stages is the source code or system documentation.
From a stepwise model's perspective (such as the waterfall model), the other brackets can be seen as further verification and system integration. This system integration is also important to see if a change is accepted by the community. Up until the code is committed, the developer is free to choose how much to communicate about it to the rest of the project. In order for -CURRENT to work as a buffer (so that bright ideas that had some undiscovered drawbacks can be backed out), the minimum time a commit should be in -CURRENT before merging it to -STABLE is 3 days. Such a merge is referred to as an MFC (Merge From Current).
It is important to notice the word “change”. Most commits do not contain radical new features, but are maintenance updates. The only exceptions from this model are security fixes and changes to features that are deprecated in the -CURRENT branch. In these cases, changes can be committed directly to the -STABLE branch.
In addition to many people working on the project, there are many related projects to the FreeBSD Project. These are either projects developing brand new features, sub-projects, or projects whose outcome is incorporated into FreeBSD [4]. These projects fit into the FreeBSD Project just like regular development efforts: they produce code that is integrated with the FreeBSD Project. However, some of them (like Ports and Documentation) have the privilege of being applicable to both branches or of committing directly to both -CURRENT and -STABLE.
There are no standards for how design should be done, nor is design collected in a centralised repository. The main design is that of 4.4BSD. [5] As design is a part of the “Code” bracket in Jørgensen's model, it is up to every developer or sub-project how this should be done. Even if the design were stored in a central repository, the output from the design stages would be of limited use, as the differences in methodologies would make them poorly interoperable, if at all. For the overall design of the project, the project relies on the sub-projects to negotiate fitting interfaces between each other rather than dictating interfacing.
4.2. Release branches
The releases of FreeBSD are best illustrated by a tree with many branches where each major branch represents a major version. Minor versions are represented by branches of the major branches.
In the following release tree, arrows that follow one another in a particular direction represent a branch. Boxes with full lines and diamonds represent official releases. Boxes with dotted lines represent the development branch at that time. Security branches are represented by ovals. Diamonds differ from boxes in that they represent a fork, meaning a place where a branch splits into two branches where one of the branches becomes a sub-branch. For example, at 4.0-RELEASE the 4.0-CURRENT branch split into 4-STABLE and 5.0-CURRENT. At 4.5-RELEASE, the branch forked off a security branch called RELENG_4_5.
Figure 4-2. The FreeBSD release tree
The latest -CURRENT version is always referred to as -CURRENT, while the latest -STABLE release is always referred to as -STABLE. In this figure, -STABLE refers to 4-STABLE while -CURRENT refers to 5.0-CURRENT following 5.0-RELEASE. [FreeBSD, 2002E]
A “major release” is always made from the -CURRENT branch. However, the -CURRENT branch does not need to fork at that point in time, but can focus on stabilising. An example of this is that following 3.0-RELEASE, 3.1-RELEASE was also a continuation of the -CURRENT branch, and -CURRENT did not become a true development branch until this version was released and the 3-STABLE branch was forked. When -CURRENT returns to being a development branch, it can only be followed by a major release. 5-STABLE is predicted to be forked off 5.0-CURRENT at around 5.3-RELEASE. It is not until 5-STABLE is forked that the development branch will be branded 6.0-CURRENT.
A “minor release” is made from the -CURRENT branch following a major release, or from the -STABLE branch.
Following and including 4.3-RELEASE [6], when a minor release has been made, it becomes a “security branch”. This is meant for organisations that do not want to follow the -STABLE branch and the potential new/changed features it offers, but instead require an absolutely stable environment, only updating to implement security updates. [7]
Each update to a security branch is called a “patchlevel”. For every security enhancement that is done, the patchlevel number is increased, making it easy for people tracking the branch to see what security enhancements they have implemented. In cases where there have been especially serious security flaws, an entire new release can be made from a security branch. An example of this is 4.6.2-RELEASE.
4.3. Model summary
To summarise, the development model of FreeBSD can be seen as the following tree:
Figure 4-3. The overall development model
The tree of the FreeBSD development with ongoing development efforts and continuous integration.
The tree symbolises the release versions, with major versions spawning new main branches and minor versions being versions of the main branch. The top branch is the -CURRENT branch where all new development is integrated, and the -STABLE branch is the branch directly below it. Clouds of development efforts hang over the project where developers use the development models they see fit. The product of their work is then integrated into -CURRENT where it undergoes parallel debugging and is finally merged from -CURRENT into -STABLE. Security fixes are merged from -STABLE to the security branches.
Chapter 5 Hats
Many committers have a special area of responsibility. These roles are called hats [Losh, 2002]. These hats can be either project roles, such as public relations officer, or maintainer for a certain area of the code. Because this is a project where people give voluntarily of their spare time, people with assigned hats are not always available. They must therefore appoint a deputy that can perform the hat's role in his or her absence. The other option is to have the role held by a group.
Many of these hats are not formalised. Formalised hats have a charter stating the exact purpose of the hat along with its privileges and responsibilities. The writing of such charters is a new part of the project, and has thus yet to be completed for all hats. These hat descriptions are not such a formalisation; rather, they are a summary of each role, with links to the charter where available and with contact addresses.
5.1. General Hats
5.1.1. Contributor
A Contributor contributes to the FreeBSD project either as a developer, as an author, by sending problem reports, or in other ways contributing to the progress of the project. A contributor has no special privileges in the FreeBSD project. [FreeBSD, 2002F]
5.1.2. Committer
A person who has the required privileges to add his code or documentation to the repository. A committer has made a commit within the past 12 months. [FreeBSD, 2000A] An active committer is a committer who has made an average of one commit per month during that time.
It is worth noting that there are no technical barriers to prevent someone, once having gained commit privileges to the main project or a sub-project, from making commits in parts of that project's source that the committer did not specifically get permission to modify. However, when wanting to make modifications to parts a committer has not been involved in before, he/she should read the logs to see what has happened in this area before, and also read the MAINTAINERS file to see if the maintainer of this part has any special requests on how changes in the code should be made.
5.1.3. Core Team
The core team is elected by the committers from the pool of committers and serves as the board of directors of the FreeBSD project. It promotes active contributors to committers, assigns people to well-defined hats, and is the final arbiter of decisions involving which way the project should be heading. As of July 1st, 2004, core consisted of 9 members. Elections are held every two years.
5.1.4. Maintainership
Maintainership means that the person is responsible for what is allowed to go into that area of the code and has the final say should disagreements over the code occur. This involves proactive work aimed at stimulating contributions and reactive work in reviewing commits.
With the FreeBSD source comes the MAINTAINERS file that contains a one-line summary of how each maintainer would like contributions to be made.
Having this notice and contact information enables developers to focus on the development effort rather than being stuck in a slow correspondence should the maintainer be unavailable for some time. If the maintainer is unavailable for an unreasonably long period of time, and other people do a significant amount of work, maintainership may be switched without the maintainer's approval. This is based on the stance that maintainership should be demonstrated, not declared.
Maintainership of a particular piece of code is a hat that is not held as a group.
5.2. Official Hats
The official hats in the FreeBSD Project are hats that are more or less formalised and are mainly administrative roles. They have the authority and responsibility for their area. The following illustration shows the responsibility lines. After this follows a description of each hat, including who holds it.
Figure 5-1. Overview of official hats
All boxes consist of groups of committers, except for the dotted boxes where the holders are not necessarily committers. The flattened circles are sub-projects and consist of both committers and non-committers of the main project.
5.2.1. Documentation project manager
The FreeBSD Documentation Project architect is responsible for defining and following up documentation goals for the committers in the Documentation project.
Hat held by: The DocEng team. The DocEng Charter.
5.2.2. CVSup Mirror Site Coordinator
The CVSup Mirror Site Coordinator coordinates all the CVSup Mirror Site Admins to ensure that they are distributing current versions of the software, that they have the capacity to update themselves when major updates are in progress, and that it is easy for the general public to find their closest CVSup mirror.
Hat currently held by: John Polstra.
5.2.3. Internationalisation
The Internationalisation hat is responsible for coordinating the localisation efforts of the FreeBSD kernel and userland utilities. The translation effort is coordinated by The FreeBSD Documentation Project. The Internationalisation hat should suggest and promote standards and guidelines for writing and maintaining the software in a fashion that makes it easier to translate.
Hat currently available.
5.2.4. Postmaster
The Postmaster is responsible for mail being correctly delivered to the committers' email addresses. He is also responsible for ensuring that the mailing lists work and should take measures against possible disruptions of mail, such as having troll-, spam- and virus-filters.
Hat currently held by: David Wolfskill.
5.2.5. Quality Assurance
The responsibilities of this role are to manage the quality assurance measures.
Hat currently held by: Robert Watson.
5.2.6. Release Coordination
The responsibilities of the Release Engineering Team are:
• Setting, publishing and following a release schedule for official releases
• Documenting and formalising release engineering procedures
• Creation and maintenance of code branches
• Coordinating with the Ports and Documentation teams to have an updated set of packages and documentation released with the new releases
• Coordinating with the Security team so that pending releases are not affected by recently disclosed vulnerabilities
Further information about the development process is available in the release engineering section.
Hat held by: the Release Engineering team, currently headed by Murray Stokely. The Release Engineering Charter.
5.2.7. Public Relations & Corporate Liaison
The Public Relations & Corporate Liaison's responsibilities are:
• Making press statements when events that are important to the FreeBSD Project take place
• Being the official contact person for corporations that are working closely with the FreeBSD Project
• Taking steps to promote FreeBSD within both the Open Source community and the corporate world
• Handling the “freebsd-advocacy” mailing list
This hat is currently not occupied.
5.2.8. Security Officer
The Security Officer's main responsibility is to coordinate information exchange with others in the security community and in the FreeBSD project. The Security Officer is also responsible for taking action when security problems are reported and for promoting proactive development behaviour when it comes to security. Because of the fear that information about vulnerabilities may leak out to people with malicious intent before a patch is available, only the Security Officer, consisting of an officer, a deputy and two Core team members, receives sensitive information about security issues. However, to create or implement a patch, the Security Officer has the Security Officer Team to help do the work.
Hat held by: the Security Officer <security-officer@FreeBSD.org>, currently headed by Colin Percival. The Security Officer and The Security Officer Team's charter.
5.2.9. Source Repository Manager
The Source Repository Manager is the only one who is allowed to directly modify the repository without using the CVS tool. It is his/her responsibility to ensure that technical problems that arise in the repository are resolved quickly. The Source Repository Manager has the authority to back out commits if this is necessary to resolve a CVS technical problem.
Hat held by: the Source Repository Manager, currently headed by Peter Wemm.
5.2.10. Election Manager
The Election Manager is responsible for the Core election process. The manager is responsible for running and maintaining the election system, and is the final authority should minor unforeseen events happen in the election process. Major unforeseen events have to be discussed with the Core team.
Hat held only during elections.
5.2.11. Web site Management
The Web site Management hat is responsible for coordinating the rollout of updated web pages on mirrors around the world, for the overall structure of the primary web site and for the system it is running upon. The management needs to coordinate the content with The FreeBSD Documentation Project and acts as maintainer for the “www” tree.
Hat held by: the FreeBSD Webmasters.
5.2.12. Ports Manager
The Ports Manager acts as a liaison between The Ports Subproject and the core project, and all requests from the project should go to the ports manager.
Hat held by: the Ports Management Team.
5.2.13. Standards
The Standards hat is responsible for ensuring that FreeBSD complies with the standards it is committed to, keeping up to date on the development of these standards and notifying FreeBSD developers of important changes, which allows them to take a proactive role and decrease the time between a standards update and FreeBSD's compliance.
Hat currently held by: Garrett Wollman.
5.2.14. Core Secretary
The Core Secretary's main responsibility is to write drafts of, and publish, the final Core Reports. The secretary also keeps the core agenda, thus ensuring that no balls are dropped unresolved.
Hat currently held by: Joel Dahl.
5.2.15. GNATS Administrator
The GNATS Administrator is responsible for ensuring that the maintenance database is in working order, that the entries are correctly categorised and that there are no invalid entries.
Hat currently held by: Ceri Davies and Mark Linimon.
5.2.16. Bugmeister
The Bugmeister is the person in charge of the problem report group.
Hat currently held by: Ceri Davies and Mark Linimon.
5.2.17. Donations Liaison Officer
The task of the Donations Liaison Officer is to match developers who have needs with people or organisations willing to make a donation. The Donations Liaison Charter is available here.
Hat held by: the Donations Liaison Office, currently headed by Michael W. Lucas.
5.2.18. Admin
(Also called “FreeBSD Cluster Admin”)
The admin team consists of the people responsible for administrating the computers that the project relies on for its distributed work and communication to be synchronised. It consists mainly of those people who have physical access to the servers.
Hat held by: the Admin team, currently headed by Mark Murray.
5.3. Process dependent hats
5.3.1. Report originator
The person originally responsible for filing a Problem Report.
5.3.2. Bugbuster
A person who will either find the right person to solve the problem, or close the PR if it is a duplicate or otherwise not an interesting one.
5.3.3. Mentor
A mentor is a committer who takes it upon him/her to introduce a new committer to the project, both in terms of ensuring the new committer's setup is valid, that the new committer knows the available tools required in his/her work, and that the new committer knows what is expected of him/her in terms of behaviour.
5.3.4. Vendor
The person(s) or organisation from whom external code comes and to whom patches are sent.
5.3.5. Reviewers
People on the mailing list where the request for review is posted.
5.3.6. CVSup Mirror Site Admin
A CVSup Mirror Site Admin has access to a server that he/she uses to mirror the CVS repository. The admin works with the CVSup Mirror Site Coordinator to ensure the site remains up-to-date and follows the general policy of official mirror sites.
Chapter 6 Processes
The following section will describe the defined project processes. Issues that are not handled by these processes happen on an ad-hoc basis, based on what has been customary to do in similar cases.
6.1. Adding new and removing old committers
The Core team has the responsibility of granting commit privileges to contributors and removing them. This can only be done through a vote on the core mailing list. The ports and documentation sub-projects can give commit privileges to people working on these projects, but have to date not removed such privileges.
Normally a contributor is recommended to core by a committer. For contributors or outsiders to contact core asking to be a committer is not well thought of and is usually rejected. If the area of particular interest for the developer potentially overlaps with other committers' area of maintainership, the opinion of those maintainers is sought. However, it is frequently this committer that recommends the developer. When a contributor is given committer status, he is assigned a mentor. The committer who recommended the new committer will, in the general case, take it upon himself to be the new committer's mentor.
When a contributor is given his commit bit, a PGP-signed email is sent from either the Core Secretary, the Ports Manager or nik@freebsd.org to both admins@freebsd.org, the assigned mentor, the new committer and core, confirming the approval of a new account. The mentor then gathers a password line, SSH 2 public key and PGP key from the new committer and sends them to Admin. When the new account is created, the mentor activates the commit bit and guides the new committer through the rest of the initial process.
Figure 6-1. Process summary: adding a new committer
When a contributor sends a piece of code, the receiving committer may choose to recommend that the contributor be given commit privileges. If he recommends this to core, they will vote on this recommendation. If they vote in favour, a mentor is assigned to the new committer and the new committer has to email his details to the administrators for an account to be created. After this, the new committer is all set to make his first commit. By tradition, this is by adding his name to the committers list.
Recall that a committer is considered to be someone who has committed code during the past 12 months. However, it is not until after 18 months of inactivity have passed that commit privileges are eligible to be revoked. [FreeBSD, 2002H] There are, however, no automatic procedures for doing this. For reactions concerning commit privileges not triggered by time, see section 1.5.8.
Figure 6-2. Process summary: removing a committer
When Core decides to clean up the committers list, they check who has not made a commit for the past 18 months. Committers who have not done so have their commit bits revoked. It is also possible for committers to request that their commit bit be retired if for some reason they are no longer going to be actively committing to the project. In this case, it can also be restored at a later time by core, should the committer ask.
Roles in this process: Core team, Contributor, Committer, Maintainership, Mentor. [FreeBSD, 2000A] [FreeBSD, 2002H] [FreeBSD, 2002I]
6.2. Adding/Removing an official CVSup Mirror
A CVSup mirror is a replica of the official CVSup master that contains all the up-to-date source code for all the branches in the FreeBSD project, ports and documentation.
Adding an official CVSup mirror starts with the potential CVSup Mirror Site Admin installing the “cvsup-mirror” package. Having done this and updated the source code with a mirror site, he now runs a fairly recent unofficial CVSup mirror. Deciding he has a stable environment, the processing power, the network capacity and the storage capacity to run an official mirror, he mails the CVSup Mirror Site Coordinator, who decides whether the mirror should become an official mirror or not. In making this decision, the CVSup Mirror Site Coordinator has to determine whether that geographical area needs another mirror site, whether the mirror administrator has the skills to run it reliably, whether the network bandwidth is adequate and whether the master server has the capacity to serve another mirror. If the CVSup Mirror Site Coordinator decides that the mirror should become an official mirror, he obtains an authentication key from the mirror admin, which he installs so the mirror admin can update the mirror from the master server.
Figure 6-3. Process summary: adding a CVSup mirror
When a CVSup mirror administrator of an unofficial mirror offers to become an official mirror site, the CVSup coordinator decides if another mirror is needed and if there is sufficient capacity to accommodate it.
Figure 6-2. Process summary: removing a committer

When Core decides to clean up the committers list, they check who has not made a commit for the past 18 months. Committers who have not done so have their commit bits revoked. It is also possible for committers to request that their commit bit be retired if for some reason they are no longer going to be actively committing to the project. In this case, the commit bit can also be restored at a later time by core, should the committer ask.

Roles in this process: Core team, Contributor, Committer, Maintainership, Mentor. [FreeBSD, 2000A] [FreeBSD, 2002H] [FreeBSD, 2002I]

6.2. Adding/Removing an official CVSup Mirror

A CVSup mirror is a replica of the official CVSup master that contains all the up-to-date source code for all the branches in the FreeBSD project, ports and documentation.

Adding an official CVSup mirror starts with the potential CVSup Mirror Site Admin installing the “cvsup-mirror” package. Having done this and updated the source code from an existing mirror site, he now runs a fairly up-to-date unofficial CVSup mirror. Deciding that he has a stable environment, the processing power, the network capacity and the storage capacity to run an official mirror, he mails the CVSup Mirror Site Coordinator, who decides whether the mirror should become an official mirror or not. In making this decision, the CVSup Mirror Site Coordinator has to determine whether that geographical area needs another mirror site, whether the mirror administrator has the skills to run it reliably, whether the network bandwidth is adequate and whether the master server has the capacity to serve another mirror. If the CVSup Mirror Site Coordinator decides that the mirror should become an official mirror, he obtains an authentication key from the mirror admin, which he installs so that the mirror admin can update the mirror from the master server.

Figure 6-3. Process summary: adding a CVSup mirror

When the administrator of an unofficial CVSup mirror offers to become an official mirror site, the CVSup Mirror Site Coordinator decides if another mirror is needed and if there is sufficient capacity to accommodate it. If so, an authentication key is requested and the mirror is given access to the main distribution site and added to the list of official mirrors.

Tools used in this process: CVSup, SSH 2.

Hats involved in this process: CVSup Mirror Site Coordinator, CVSup Mirror Site Admin.

6.3. Committing code

The committing of new or modified code is one of the most frequent processes in the FreeBSD project and will usually happen many times a day. Committing of code can only be done by a “committer”. Committers commit either code written by themselves, code submitted to them, or code submitted through a problem report.

When a developer writes non-trivial code, he should seek a code review from the community. This is done by sending mail to the relevant list asking for review. Before submitting the code for review, he should ensure it compiles correctly with the entire tree and that all relevant tests pass. This is called the “pre-commit test”. When contributed code is received, it should be reviewed by the committer and tested in the same way.

When a change is committed to a part of the source that has been contributed by an outside Vendor, the maintainer should ensure that the patch is contributed back to the vendor. This is in line with the open source philosophy and makes it easier to stay in sync with outside projects, as the patches do not have to be reapplied every time a new release is made.

After the code has been available for review and no further changes are necessary, the code is committed into the development branch, -CURRENT. If the change applies to the -STABLE branch or the other branches as well, a “Merge From Current” (“MFC”) countdown is set by the committer. After the number of days the committer chose when setting the MFC has passed, an email will automatically be sent to the committer reminding him to commit it to the -STABLE branch (and possibly to security branches as well). Only security-critical changes should be merged to security branches.

Delaying the commit to -STABLE and other branches allows for “parallel debugging”, where the committed code is tested on a wide range of configurations. This makes changes merged to -STABLE contain fewer faults, thus giving the branch its name.
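The MFC countdown itself is nothing more than date arithmetic. The sketch below is a hypothetical illustration of it, not the project's actual reminder implementation:

    from datetime import date, timedelta

    def mfc_reminder_date(commit_date, mfc_days):
        """Return the date on which the committer should be reminded to merge to -STABLE.

        commit_date: the date the change went into -CURRENT
        mfc_days:    the countdown the committer chose (e.g. an "MFC after: 3 days" note)
        """
        return commit_date + timedelta(days=mfc_days)

    # Example: a change committed on 1 June with a one-week MFC countdown
    print(mfc_reminder_date(date(2005, 6, 1), 7))   # prints 2005-06-08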
Figure 6-4. Process summary: A committer commits code

When a committer has written a piece of code and wants to commit it, he first needs to determine if it is trivial enough to go in without prior review, or if it should first be reviewed by the developer community. If the code is trivial or has been reviewed and the committer is not the maintainer, he should consult the maintainer before proceeding. If the code is contributed by an outside vendor, the maintainer should create a patch that is sent back to the vendor. The code is then committed and subsequently deployed by the users. Should they find problems with the code, this will be reported and the committer can go back to writing a patch. If a vendor is affected, he can choose to implement or ignore the patch.

Figure 6-5. Process summary: A contributor commits code

The difference when a contributor makes a code contribution is that he submits the code through the send-pr program. This report is picked up by the maintainer, who reviews the code and commits it.

Hats included in this process are: Committer, Contributor, Vendor, Reviewer. [FreeBSD, 2001] [Jørgensen, 2001]

6.4. Core election

Core elections are held at least every two years. [8] Nine core members are elected. New elections are held if the number of core members drops below seven. New elections can also be held should at least 1/3 of the active committers demand this.

When an election is to take place, core announces this at least 6 weeks in advance and appoints an election manager to run the elections. Only committers can be elected into core. The candidates need to submit their candidacy at least one week before the election starts, but can refine their statements until the voting starts. They are presented in the candidates list. When writing their election statements, the candidates must answer a few standard questions submitted by the election manager.

During elections, the rule that a committer must have committed during the past 12 months is followed strictly. Only these committers are eligible to vote. When voting, a committer may vote once in support of up to nine nominees. The voting is done over a period of four weeks, with reminders being posted on the “developers” mailing list, which is available to all committers.

The election results are released one week after the election ends, and the new core team takes office one week after the results have been posted. Should there be a voting tie, this will be resolved by the new, unambiguously elected core members. Votes and candidate statements are archived, but the archives are not publicly available.
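To make the counting rule concrete, the sketch below tallies approval-style ballots, elects the nine candidates with the most support, and flags a tie at the last seat, which, per the rules above, is left for the unambiguously elected members to resolve. This is only an illustration, not the election manager's actual tooling:

    from collections import Counter

    def tally(ballots, seats=9):
        """ballots: iterable of sets, each holding the nominees one committer voted for."""
        counts = Counter()
        for ballot in ballots:
            counts.update(set(ballot))              # one vote per committer per nominee
        ranked = counts.most_common()
        if len(ranked) <= seats:
            return [name for name, _ in ranked], []
        cutoff = ranked[seats - 1][1]               # support of the ninth-placed candidate
        clear = [name for name, votes in ranked if votes > cutoff]
        at_cutoff = [name for name, votes in ranked if votes == cutoff]
        if len(clear) + len(at_cutoff) == seats:
            return clear + at_cutoff, []            # exactly nine winners, no tie
        return clear, at_cutoff                     # tie at the last seat(s), resolved by 'clear'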
Figure 6-6. Process summary: Core elections

Core announces the election and selects an election manager. He prepares the elections, and when ready, candidates can announce their candidacies by submitting their statements. The committers then vote. After the vote is over, the election results are announced and the new core team takes office.

Hats in core elections are: Core team, Committer, Election Manager. [FreeBSD, 2000A] [FreeBSD, 2002B] [FreeBSD, 2002G]

6.5. Development of new features

Within the project there are sub-projects that are working on new features. These projects are generally done by one person [Jørgensen, 2001]. Every project is free to organise development as it sees fit. However, when the project is merged to the -CURRENT branch it must follow the project guidelines. When the code has been well tested in the -CURRENT branch and is deemed stable enough and relevant to the -STABLE branch, it is merged to the -STABLE branch.

The requirements of the project are given by developer wishes, requests from the community in the form of direct requests by mail, Problem Reports, commercial funding for the development of features, or contributions by the scientific community. Wishes that fall within the responsibility of a developer are given to that developer, who prioritises his time between these requests and his own wishes. A common way to do this is to maintain a TODO list for the project. Items that do not fall within someone's responsibility are collected on TODO lists unless someone volunteers to take the responsibility. All requests, their distribution and follow-up are handled by the GNATS tool.

Requirements analysis happens in two ways. The requests that come in are discussed on mailing lists, both within the main project and within the sub-project that the request belongs to or that is spawned by the request. Furthermore, individual developers on the sub-project will evaluate the feasibility of the requests and determine the prioritisation between them. Other than archives of the discussions that have taken place, no outcome is created by this phase that is merged into the main project.

As the requests are prioritised by the individual developers on the basis of doing what they find interesting, necessary, or are funded to do, there is no overall strategy or prioritisation of which requests to regard as requirements, nor any follow-up of their correct implementation. However, most developers have some shared vision of what issues are more important, and they can ask for guidelines from the release engineering team.

The verification phase of the project is two-fold. Before committing code to the -CURRENT branch, developers request that their code be reviewed by their peers. This review is for the most part done by functional testing, but code review is also important. When the code is committed to the branch, broader functional testing takes place, which may trigger further code review and debugging should the code not behave as expected. This second form of verification may be regarded as structural verification. Although the sub-projects themselves may write formal tests such as unit tests, these are usually not collected by the main project and are usually removed before the code is committed to the -CURRENT branch. [9]

6.6. Maintenance

It is an advantage to the project to have, for each area of the source, at least one person who knows that area well. Some parts of the code have designated maintainers. Others have de-facto maintainers, and some parts of the system do not have maintainers. The maintainer is usually a person from the sub-project that wrote and integrated the code, or someone who has ported it from the platform it was written for. [10] The maintainer's job is to keep the code in sync with the project it comes from if it is contributed code, to apply patches submitted by the community, and to write fixes for issues that are discovered.

The main bulk of work that is put into the FreeBSD project is maintenance. [Jørgensen, 2001] has made a figure showing the life cycle of changes.

Figure 6-7. Jørgensen's model for change integration

Here “development release” refers to the -CURRENT branch while “production release” refers to the -STABLE branch. The “pre-commit test” is the functional testing done by peer developers when asked to do so, or by trying out the code to determine the status of the sub-project. “Parallel debugging” is the functional testing that can trigger more review, and debugging, once the code is included in the -CURRENT branch.

As of this writing, there are 269 committers in the project. When they commit a change to a branch, that constitutes a new release. It is very common for users in the community to track a particular branch. The immediate existence of a new release makes the changes widely available right away and allows for rapid feedback from the community. This also gives the community the response time they expect on issues that are of importance to them. This makes the community more engaged, and thus allows for more and better feedback, which again spurs more maintenance and ultimately should create a better product.

Before making changes to code in parts of the tree that have a history unknown to the committer, the committer is required to read the commit logs to see why certain features are implemented the way they are, in order not to repeat mistakes that have previously been thought through or resolved.

6.7. Problem reporting

FreeBSD comes with a problem reporting tool called “send-pr” that is a part of the GNATS package. All users and developers are encouraged to use this tool for reporting problems in software they do not maintain. Problems include bug reports, feature requests, features that should be enhanced, and notices of new versions of external software that is included in the project.

Problem reports are sent to an email address, from where they are inserted into the GNATS maintenance database. A Bugbuster classifies the problem and sends it to the correct group or maintainer within the project. After someone has taken responsibility for the report, the report is analysed. This analysis includes verifying the problem and working out a solution for it. Often feedback is required from the report originator, or even from the FreeBSD community. Once a patch for the problem is made, the originator may be asked to try it out. Finally, the working patch is integrated into the project and documented, if applicable. It then goes through the regular maintenance cycle as described in the section on maintenance.

These are the states a problem report can be in: open, analyzed, feedback, patched, suspended and closed. The suspended state is for when further progress is not possible due to lack of information, or for when the task would require so much work that nobody is working on it at the moment.
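The lifecycle these states describe can be summarised in a small sketch. The specific transitions shown are an illustration read out of the prose above, not GNATS' own definition:

    # Typical forward flow of a problem report, as described above.
    # "feedback" and "suspended" are holding states; a report normally returns
    # from them to "analyzed" once information or manpower becomes available.
    PR_TRANSITIONS = {
        "open":      {"analyzed", "closed"},               # classified by a bugbuster, or rejected
        "analyzed":  {"feedback", "patched", "suspended"},
        "feedback":  {"analyzed", "closed"},
        "patched":   {"closed", "analyzed"},               # patch committed, or it did not work out
        "suspended": {"analyzed", "closed"},
        "closed":    set(),
    }

    def can_move(state, new_state):
        """Return True if the sketched lifecycle allows this transition."""
        return new_state in PR_TRANSITIONS.get(state, set())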
Figure 6-8. Process summary: problem reporting

A problem is reported by the report originator. It is then classified by a bugbuster and handed to the correct maintainer. He verifies the problem and discusses it with the originator until he has enough information to create a working patch. This patch is then committed and the problem report is closed.

The roles included in this process are: Report originator, Maintainership, Bugbuster. [FreeBSD, 2002C] [FreeBSD, 2002D]

6.8. Reacting to misbehaviour

[FreeBSD, 2001] has a number of rules that committers should follow. However, it happens that these rules are broken. The following rules exist in order to be able to react to misbehaviour. They specify which actions will result in how long a suspension of the committer's commit privileges:

- Committing during code freezes without the approval of the Release Engineering team: 2 days
- Committing to a security branch without approval: 2 days
- Commit wars: 5 days to all participating parties
- Impolite or inappropriate behaviour: 5 days

[Lehey, 2002]

For the suspensions to be effective, any single core member can impose a suspension before discussing it on the “core” mailing list. Repeat offenders can, with a 2/3 vote by core, receive harsher penalties, including permanent removal of commit privileges. (However, the latter is always viewed as a last resort, due to its inherent tendency to create controversy.) All suspensions are posted to the “developers” mailing list, a list available to committers only.

It is important to note that a committer cannot be suspended for making technical errors. All penalties come from breaking social etiquette.

Hats involved in this process: Core team, Committer.

6.9. Release engineering

The FreeBSD project has a Release Engineering team with a principal release engineer who is responsible for creating releases of FreeBSD that can be brought out to the user community via the net or sold in retail outlets. Since FreeBSD is available on multiple platforms, and releases for the different architectures are made available at the same time, the team has one person in charge of each architecture. There are also roles in the team responsible for coordinating quality assurance efforts, building a package set and keeping the set of documents up to date.
When referring to the release engineer, a representative of the release engineering team is meant.

When a release is coming, the FreeBSD project changes shape somewhat. A release schedule is made containing feature and code freezes, interim releases and the final release. A feature freeze means no new features are allowed to be committed to the branch without the release engineers' explicit consent. A code freeze means no changes to the code (such as bug fixes) are allowed to be committed without the release engineers' explicit consent. This feature and code freeze is known as stabilising. During the release process, the release engineer has the full authority to revert to older versions of code and thus “back out” changes should he find that the changes are not suitable to be included in the release.

There are three different kinds of releases:

- .0 releases are the first release of a major version. These are branched off the -CURRENT branch and have a significantly longer release engineering cycle due to the unstable nature of the -CURRENT branch.
- .X releases are releases of the -STABLE branch. They are scheduled to come out every 4 months.
- .X.Y releases are security releases that follow the .X branch. These come out only when sufficient security fixes have been merged since the last release on that branch. New features are rarely included, and the security team is far more involved in these than in regular releases.

For releases of the -STABLE branch, the release process starts 45 days before the anticipated release date. During the first phase, the first 15 days, the developers merge the changes they have had in -CURRENT that they want to have in the release to the release branch. When this period is over, the code enters a 15-day code freeze in which only bug fixes, documentation updates, security-related fixes and minor device driver changes are allowed. These changes must be approved by the release engineer in advance. At the beginning of the last 15-day period, a release candidate is created for widespread testing. Updates are less likely to be allowed during this period, except for important bug fixes and security updates. In this final period, all releases are considered release candidates.

At the end of the release process, a release is created with the new version number, including binary distributions on web sites and the creation of CD-ROM images. However, the release is not considered “really released” until a PGP-signed message stating exactly that is sent to the mailing list freebsd-announce; anything labelled as a “release” before that may well be in-process and subject to change before the PGP-signed message is sent. [11]

The releases of the -CURRENT branch (that is, all releases that end with “.0”) are very similar, but with twice as long a timeframe. The process starts 8 weeks prior to the release with the announcement of the release timeline. Two weeks into the release process, the feature freeze is initiated and performance tweaks should be kept to a minimum. Four weeks prior to the release, an official beta version is made available. Two weeks prior to release, the code is officially branched into a new version. This version is given release candidate status, and as with the release engineering of -STABLE, the code freeze of the release candidate is hardened. However, development on the main development branch can continue. Other than these differences, the release engineering processes are alike.
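The -STABLE timeline above is simple enough to write down as date arithmetic. The sketch below is only an illustration of the published schedule, not a tool the release engineering team actually uses:

    from datetime import date, timedelta

    def stable_release_schedule(release_date):
        """Phase boundaries for a -STABLE release, counting back 45 days as described above."""
        start = release_date - timedelta(days=45)
        return {
            "merge window opens":      start,                        # merge wanted changes from -CURRENT
            "code freeze begins":      start + timedelta(days=15),   # only approved fixes from here on
            "first release candidate": start + timedelta(days=30),   # widespread testing
            "release":                 release_date,
        }

    # Example: a release targeted for 1 November 2005
    for phase, day in stable_release_schedule(date(2005, 11, 1)).items():
        print(f"{phase:>24}: {day}")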
.0 releases go into their own branch and are aimed mainly at early adopters. The branch then goes through a period of stabilisation, and it is not until the Release Engineering Team decides that the demands for stability have been satisfied that the branch becomes -STABLE and -CURRENT targets the next major version. While for the majority of releases this has happened with the .1 version, this is not a requirement.

Most releases are made when a given date arrives that is deemed a long enough time since the previous release. A target is set for having major releases every 18 months and minor releases every 4 months. The user community has made it very clear that security and stability cannot be sacrificed to self-imposed deadlines and target release dates. To keep slips in the schedule from growing too long with regard to security and stability issues, extra discipline is required when committing changes to -STABLE.

Figure 6-9. Process summary: release engineering

These are the stages in the release engineering process. Multiple release candidates may be created until the release is deemed stable enough to be released. [FreeBSD, 2002E]

Chapter 7. Tools

The major tools supporting the development process are CVS, CVSup, Perforce, GNATS, Mailman and OpenSSH. Except for CVSup, these are externally developed tools. They are commonly used in the open source world.

7.1. Concurrent Versions System (CVS)

Concurrent Versions System, or simply “CVS”, is a system for handling multiple versions of text files and tracking who committed which changes, and why. A project lives within a “repository”, and different versions are considered different “branches”.

7.2. CVSup

CVSup is a software package for distributing and updating collections of files across a network. It consists of a client program, cvsup, and a server program, cvsupd. The package is tailored specifically for distributing CVS repositories, and by taking advantage of CVS' properties, it performs updates much faster than traditional systems.

7.3. GNATS

GNATS is a maintenance database consisting of a set of tools to track bugs at a central site. It supports the bug tracking process for sending and handling bugs, as well as querying and updating the database and editing bug reports. The project uses one of its many client interfaces, “send-pr”, to send “Problem Reports” by email to the project's central GNATS server. The committers also have web and command-line clients available.

7.4. Mailman

Mailman is a program that automates the management of mailing lists. The FreeBSD Project uses it to run 16 general lists, 60 technical lists, 4 limited lists and 5 lists with CVS commit logs. It is also used for many mailing lists set up and used by other people and projects in the FreeBSD community. General lists are lists for the general public, technical lists are mainly for the development of specific areas of interest, and closed lists are for internal communication not intended for the general public. The majority of all the communication in the project goes through these 85 lists [FreeBSD, 2003A, Appendix C].

7.5. Perforce

Perforce is a commercial software configuration management system developed by Perforce Systems that is available on over 50 operating systems. It is a collection of clients built around the Perforce server, which contains the central file repository and tracks the operations done upon it. The clients are used both for accessing the repository and for administering its configuration.
7.6. Pretty Good Privacy

Pretty Good Privacy, better known as PGP, is a cryptosystem using a public key architecture to allow people to digitally sign and/or encrypt information in order to ensure secure communication between two parties. A signature is used when sending information out to many recipients, enabling them to verify that the information has not been tampered with before they received it. In the FreeBSD Project this is the primary means of ensuring that information has been written by the person who claims to have written it, and has not been altered in transit.

7.7. Secure Shell

Secure Shell is a standard for securely logging into a remote system and for executing commands on the remote system. It allows other connections, called tunnels, to be established and protected between the two involved systems. This standard exists in two primary versions, and only version two is used for the FreeBSD Project. The most common implementation of the standard is OpenSSH, which is a part of the project's main distribution. Since its source is updated more often than FreeBSD releases, the latest version is also available in the ports tree.

Chapter 8. Sub-projects

Sub-projects are formed to reduce the amount of communication needed to coordinate the group of developers. When a problem area is sufficiently isolated, most communication stays within the group focusing on the problem, requiring less communication with the other groups than if the group were not isolated.

8.1. The Ports Subproject

A “port” is a set of meta-data and patches that are needed to correctly fetch, compile and install an external piece of software on a FreeBSD system. The number of ports has grown at a tremendous rate, as shown in the following figure.

Figure 8-1. Number of ports added between 1996 and 2005

Figure 8-1 is taken from the FreeBSD web site. It shows the number of ports available to FreeBSD in the period 1995 to 2005. The curve appears to have grown exponentially at first, and then, since the middle of 2001, linearly.

As the external software described by a port is often under continued development, the amount of work required to maintain the ports is already large, and increasing. This has led to the ports part of the FreeBSD project gaining a more empowered structure, and it is more and more becoming a sub-project of the FreeBSD project.

Ports has its own core team with the Ports Manager as its leader, and this team can appoint committers without FreeBSD Core's approval. Unlike in the FreeBSD Project, where a lot of maintenance work is frequently rewarded with a commit bit, the ports sub-project contains many active maintainers that are not committers.

Unlike the main project, the ports tree is not branched. Every release of FreeBSD follows the current ports collection and thus has up-to-date information available on where to find programs and how to build them. This, however, means that a port that depends on system features may need variations depending on which version of FreeBSD it runs on.

With an unbranched ports repository it is not possible to guarantee that any port will run on anything other than -CURRENT and -STABLE, in particular older, minor releases. There is neither the infrastructure nor the volunteer time needed to guarantee this.

For efficiency of communication, teams depending on Ports, such as the release engineering team, have their own ports liaisons.
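To make the definition of a port at the start of this section a little more concrete, the sketch below models a port's pieces (meta-data, dependencies and local patches) and the conceptual steps it drives. The field names, example values and stage list are an illustration only and do not correspond to any real port or to the actual ports infrastructure:

    from dataclasses import dataclass, field

    @dataclass
    class Port:
        """A minimal model of what a port carries: meta-data plus local patches."""
        name: str                                       # e.g. "editors/frobnicator" (invented)
        version: str
        master_sites: list                              # where the original source is fetched from
        build_depends: list = field(default_factory=list)
        run_depends: list = field(default_factory=list)
        patches: list = field(default_factory=list)     # local fixes applied after fetching

        def stages(self):
            """The conceptual pipeline a port drives on a FreeBSD system."""
            return ["fetch", "checksum", "patch", "configure", "build", "install"]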
8.2. The FreeBSD Documentation Project

The FreeBSD Documentation Project was started in January 1995. From an initial group of a project leader, four team leaders and 16 members, the project has grown to a total of 44 committers. The documentation mailing list has just under 300 members, indicating that there is quite a large community around it.

The goal of the Documentation Project is to provide good and useful documentation of the FreeBSD project, thus making it easier for new users to get familiar with the system and detailing advanced features for existing users. The main tasks in the Documentation Project are to work on current projects in the “FreeBSD Documentation Set” and to translate the documentation into other languages.

Like the FreeBSD source, the documentation is split into the same branches. This is done so that there is always an up-to-date version of the documentation for each version of the system. Only documentation errors are corrected in the security branches.

Like the ports sub-project, the Documentation Project can appoint documentation committers without FreeBSD Core's approval. [FreeBSD, 2003B]

The Documentation Project has a primer. This is used both to introduce new project members to the standard tools and syntaxes, and as a reference when working on the project.

References

[Brooks, 1995] Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition), Addison-Wesley, 1975, 1995, ISBN 0201835959.

[Saers, 2003] Niklas Saers, A project model for the FreeBSD Project, Candidatus Scientiarum thesis, 2003.

[Jørgensen, 2001] Niels Jørgensen, Putting it All in the Trunk: Incremental Software Development in the FreeBSD Open Source Project, 2001.

[PMI, 2000] Project Management Institute, PMBOK Guide: A Guide to the Project Management Body of Knowledge, 2000 Edition, Project Management Institute, Pennsylvania, 1996, 2000, ISBN 1-880410-23-0.

[FreeBSD, 2000A] Core Bylaws, 2002.

[FreeBSD, 2002A] FreeBSD Developer's Handbook, 2002.

[FreeBSD, 2002B] Core team election 2002, 2002.

[Losh, 2002] Warner Losh, Working with Hats, 2002.

[FreeBSD, 2002C] Dag-Erling Smørgrav and Hiten Pandya, Problem Report Handling Guidelines, The FreeBSD Documentation Project, 2002.

[FreeBSD, 2002D] Dag-Erling Smørgrav, Writing FreeBSD Problem Reports, The FreeBSD Documentation Project, 2002.

[FreeBSD, 2001] Committers Guide, The FreeBSD Documentation Project, 2001.

[FreeBSD, 2002E] Murray Stokely, FreeBSD Release Engineering, The FreeBSD Documentation Project, 2002.

[FreeBSD, 2003A] FreeBSD Handbook, The FreeBSD Documentation Project.

[FreeBSD, 2002F] Contributors to FreeBSD, The FreeBSD Documentation Project, 2002.

[FreeBSD, 2002G] Core team elections 2002, The FreeBSD Project, 2002.

[FreeBSD, 2002H] Commit Bit Expiration Policy, The FreeBSD Project, 2002/04/06 15:35:30.

[FreeBSD, 2002I] New Account Creation Procedure, The FreeBSD Project, 2002/08/19 17:11:27.

[FreeBSD, 2003B] FreeBSD DocEng Team Charter, The FreeBSD Documentation Project, 2003/03/16 12:17.

[Lehey, 2002] Greg Lehey, Two years in the trenches: The evolution of a software project, 2002.

Notes

[1] This goes hand-in-hand with Brooks' law that “adding another person to a late project will make it later”, since it will increase the communication needs [Brooks, 1995]. A project model is a tool to reduce these communication needs.

[2] Statistics are generated by counting the number of entries in the file fetched by portsdb on April 1st, 2005. portsdb is a part of the port sysutils/portupgrade.

[3] The period from January 1st, 2004 to December 31st, 2004 was examined to find this number.
[4] For instance, the development of the Bluetooth stack started as a sub-project until it was deemed stable enough to be merged into the -CURRENT branch. Now it is a part of the core FreeBSD system.

[5] According to Kirk McKusick, after 20 years of developing UNIX operating systems, the interfaces are for the most part figured out. There is therefore no need for much design. However, new applications of the system and new hardware lead to some implementations being more beneficial than those that used to be preferred. One example is the introduction of web browsing, which made the normal TCP/IP connection a short burst of data rather than a steady stream over a longer period of time.

[6] The first release this actually happened for was 4.5-RELEASE, but security branches were at the same time created for 4.3-RELEASE and 4.4-RELEASE.

[7] There is a terminology overlap with respect to the word “stable”, which leads to some confusion. The -STABLE branch is still a development branch, whose goal is to be useful for most people. If it is never acceptable for a system to get changes that are not announced at the time it is deployed, that system should run a security branch.

[8] The first Core election was held in September 2000.

[9] More and more tests are, however, performed when building the system.
