Error: 'operator =' function is unavailable in 'A'

boycott2 2008-08-28 12:27:07
// testCreate.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

class A
{
public:
    A() : i(2)
    {
        printf("A is create\n");
    }

    ~A()
    {
        printf("A is destroyed\n");
    }

    const int i;
};

A& createA(A &a)
{
    A b;
    return b;       // note: returns a reference to a local object
}

int main(int argc, _TCHAR* argv[])
{
    A a;
    a = createA(a); // error C2582: 'operator =' function is unavailable in 'A'

    return 0;
}

Compilation always fails with: error C2582: 'operator =' function is unavailable in 'A'
Can anyone tell me what causes this?
3 replies
phisherr 2008-08-28
Of course, return a pointer to an object on the heap.
phisherr 2008-08-28
There is no operator= function, and with a const member even the default operator= won't work, so you'd better have createA(A &a) return a pointer instead.
boycott2 2008-08-28
Right, with a const member even the default operator= won't work. Thanks, phisherr.
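To close the loop, here is a minimal sketch of both ways out discussed above; the parameterless heap-returning createA and the class B with a hand-written operator= are illustrative additions, not code from the thread:

#include <cstdio>

// Option 1 (phisherr's suggestion): return a heap pointer instead of
// assigning. The caller owns the object and must delete it.
class A
{
public:
    A() : i(2) { printf("A is created\n"); }
    ~A() { printf("A is destroyed\n"); }
    const int i;
};

A* createA()
{
    return new A;   // no assignment to an existing A is needed
}

// Option 2: provide operator= yourself, leaving the const member alone
// (or simply make i non-const).
class B
{
public:
    B() : i(2) {}
    B& operator=(const B&) { return *this; }   // i deliberately untouched
    const int i;
};

int main()
{
    A *pa = createA();
    delete pa;

    B b1, b2;
    b1 = b2;    // compiles now
    return 0;
}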
Contents
Overview
Lesson 1: Index Concepts
Lesson 2: Concepts – Statistics
Lesson 3: Concepts – Query Optimization
Lesson 4: Information Collection and Analysis
Lesson 5: Formulating and Implementing Resolution

Module 6: Troubleshooting Query Performance

Overview
At the end of this module, you will be able to:
• Describe the different types of indexes and how indexes can be used to improve performance.
• Describe what statistics are used for and how they can help in optimizing query performance.
• Describe how queries are optimized.
• Analyze the information collected from various tools.
• Formulate resolutions to query performance problems.

Lesson 1: Index Concepts
Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in the table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times, so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables.

What You Will Learn
After completing this lesson, you will be able to:
• Understand the structure of SQL Server indexes.
• Describe how SQL Server uses indexes to find rows.
• Describe how fillfactor can impact the performance of data retrieval and insertion.
• Describe the different types of fragmentation that can occur within an index.

Recommended Reading
• Chapter 8: "Indexes", Inside SQL Server 2000 by Kalen Delaney
• Chapter 11: "Batches, Stored Procedures and Functions", Inside SQL Server 2000 by Kalen Delaney

Finding Rows without Indexes
With no indexes, a table must be scanned. SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row keeps track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table's extents, but every row still must be examined.

General Index Structure
All SQL Server indexes are organized as B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree's usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout.

Clustered and Nonclustered Indexes
Both index types have many common features. An index consists of a tree with a root from which navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index.
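To make the structures concrete, here is a minimal sketch of creating both kinds of index; the index names are hypothetical, and the member table and its columns follow the examples used later in this lesson:

-- Clustered index: the data itself becomes the leaf level
CREATE CLUSTERED INDEX ci_member ON member (member_no)

-- Nonclustered composite index: a wider key means fewer index rows
-- per page, and possibly more levels
CREATE NONCLUSTERED INDEX nci_member_name ON member (lastname, firstname)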
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table.

An index can have multiple node levels. An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row.

The leaf level contains all key values. In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending.

The sysindexes table contains all sizing, location, and distribution information. Any information about the size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one.

The Difference between Clustered and Nonclustered Indexes
The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information.

Clustered Indexes
The leaf level of a clustered index is the data. The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to.

Uniqueness is maintained in key values. In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows, as covered in the section on index internals.

Finding Rows in a Clustered Index
The leaf level of a clustered index contains the data. A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can be only one clustered index for a table at a time.
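As a quick illustration of the sysindexes bookkeeping described above, the following sketch against the SQL Server 2000 system table returns the root-page and first-IAM pointers for a table:

SELECT name, indid, root, FirstIAM, dpages
FROM sysindexes
WHERE id = OBJECT_ID('member')

An indid of 0 is the heap row, 1 is the clustered index, and values 2 through 250 are nonclustered indexes.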
Performance Considerations
Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we'll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow more index rows to fit per page in all your indexes.

Note: The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'

Nonclustered Indexes
The leaf level of a nonclustered index contains a bookmark. A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:
• If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can have multiple columns if the clustered index is composite or is defined to be non-unique.
• If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#.

Finding Rows with a Nonclustered Index on a Heap
Nonclustered indexes are very efficient when searching for a single row. After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient.

Note: The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname BETWEEN 'Master' AND 'Rudd'

Finding Rows with a Nonclustered Index on a Clustered Table
The clustered key is used as the bookmark for all nonclustered indexes. If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows.

Note: The query corresponding to the slide is:
SELECT lastname, firstname, phone FROM member WHERE firstname = 'Mike'

Covering Indexes
A covering index provides the fastest data access. A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed.

The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes.
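As a concrete sketch of covering, assuming a hypothetical composite nonclustered index matching the slide queries above:

CREATE NONCLUSTERED INDEX nci_name ON member (lastname, firstname)

-- Covered: both selected columns and the filter column are in the index,
-- so the data pages are never touched
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'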
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particularly valuable when we need ALL the key values of a particular nonclustered index.

Example: selecting an aggregate value of a column with a nonclustered index. Suppose we have a nonclustered index on price; this query is covered:

SELECT avg(price) from titles

Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered:

SELECT title_id, price FROM titles WHERE price between 10 and 20

Performance Considerations
In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a 'mini' clustered index, so you can have most of the benefits of clustering even if there already is another clustered index on the table. The tradeoffs to adding more, wider indexes for covering queries are the added disk space and more overhead for updating those columns that are now part of the index.

Bug
In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079.

Steps to reproduce:
1. Make a copy of the Orders table from Northwind:
USE Northwind
CREATE TABLE [NewOrders] (
    [OrderID] [int] NOT NULL ,
    [CustomerID] [nchar] (5) NULL ,
    [EmployeeID] [int] NULL ,
    [OrderDate] [datetime] NULL ,
    [RequiredDate] [datetime] NULL ,
    [ShippedDate] [datetime] NULL ,
    [ShipVia] [int] NULL ,
    [Freight] [money] NULL ,
    [ShipName] [nvarchar] (40) NULL ,
    [ShipAddress] [nvarchar] (60) NULL ,
    [ShipCity] [nvarchar] (15) NULL ,
    [ShipRegion] [nvarchar] (15) NULL ,
    [ShipPostalCode] [nvarchar] (10) NULL ,
    [ShipCountry] [nvarchar] (15) NULL
)
INSERT into NewOrders SELECT * FROM Orders
2. Build a nonclustered index on OrderDate:
create index dateindex on neworders(orderdate)
3. Test the query by looking at its query plan:
select orderdate from NewOrders
The index is being scanned, as expected.
4. Build an index on OrderID:
create index orderid_index on neworders(orderID)
5. Test the query again by looking at its query plan:
select orderdate from NewOrders
Now the TABLE is being scanned, instead of the original index!

Index Intersection
Multiple indexes can be used on a single table. In versions prior to SQL Server 7.0, only one index could be used per table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets against the cost of using only one index and then scanning the entire result set from that single index.

Fillfactor and Performance
Creating an index with a low fillfactor delays page splits when inserting. DBCC SHOWCONTIG will show you a low value for "Avg. Page Density" when a low fillfactor has been specified.
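A sketch of specifying the fillfactor at index-creation time and then checking the resulting page density (the index name is hypothetical):

CREATE NONCLUSTERED INDEX nci_price ON titles (price) WITH FILLFACTOR = 50
DBCC SHOWCONTIG ('titles', 'nci_price')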
A low fillfactor is good for inserts and updates, because it delays the need to split pages to make room for new rows. It can be bad for scans, because fewer rows fit on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good.

Index Reorganization
DBCC SHOWCONTIG provides lots of information. Here's some sample output from running a basic DBCC SHOWCONTIG on the Order Details table in the Northwind database:

DBCC SHOWCONTIG scanning 'Order Details' table...
Table: 'Order Details' (325576198); index ID: 1, database ID: 6
TABLE level scan performed.
- Pages Scanned................................: 9
- Extents Scanned..............................: 6
- Extent Switches..............................: 5
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 33.33% [2:6]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 16.67%
- Avg. Bytes Free per Page.....................: 673.2
- Avg. Page Density (full).....................: 91.68%

By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:
• Average number of bytes free on each page (Avg. Bytes Free per Page)
• Number of pages accessed (Pages Scanned)
• Number of extents accessed (Extents Scanned)
• Number of times a page had a lower page number than the previous page in the scan (this value for out-of-order pages is not displayed, but is used for additional computations)
• Number of times a page in the scan was on a different extent than the previous page in the scan (Extent Switches)

SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG's output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is 1/4, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is 3/4, or 75 percent. The maximum number of gaps is the number of extents minus 1, so Extent Scan Fragmentation can never be 100 percent.

The value in DBCC SHOWCONTIG's output called Logical Scan Fragmentation is computed by dividing the number of out-of-order pages by the number of pages in the table. This value is meaningless in a heap.

You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values.

SQL Server 2000 allows online defragmentation. You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you.
To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX.

The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable.

SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster.

DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed.

The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running.

Let us look at an example in which an index's leaf level consists of the following pages in the following logical order:

47 22 83 32 12 90 64

The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this:

12 22 83 32 47 90 64

The next physical page is 22, which is also the next logical page, so no work would be necessary.
DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:

12 22 32 83 47 90 64

After the next swap of 47 with 83, the leaf level would look like this:

12 22 32 47 83 90 64

Then, the defragmentation process would swap 64 with 83:

12 22 32 47 64 90 83

and 83 with 90:

12 22 32 47 64 83 90

At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.

If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel (that is, using multiple processors), which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation.
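A consolidated sketch of the rebuild and defragmentation options discussed in this section, reusing the NewOrders table and dateindex from the earlier bug repro:

-- Rebuild every index on the table in one operation (table is locked)
DBCC DBREINDEX ('Northwind..NewOrders')

-- Rebuild a single index in place
CREATE INDEX dateindex ON NewOrders (OrderDate) WITH DROP_EXISTING

-- Defragment the leaf level online, leaving the table available
DBCC INDEXDEFRAG (Northwind, NewOrders, dateindex)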
Indexes on Views and Computed Columns
Building an index gives the data physical existence. Normally, views are only logical, and the rows comprising the view's data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored, and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been built, and the view's data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values.

Common Prerequisites for Indexed Views and Indexes on Computed Columns
In order for SQL Server to create and use these special indexes, you must have the seven SET options correctly specified: ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS must all be ON; NUMERIC_ROUNDABORT must be OFF.

Only deterministic expressions can be used in the definition of indexed views or indexes on computed columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check whether a column or view meets the requirements and is indexable:

SELECT OBJECTPROPERTY (object_id, 'IsIndexable')
SELECT COLUMNPROPERTY (object_id, column_name, 'IsIndexable')

Schema Binding
Schema binding guarantees that the object definition won't change. A view can only be indexed if it has been built with schema binding.

The SQL Server Optimizer Determines If the Indexed View Can Be Used
The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard Edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced.

Examples of Indexed Views
The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how useful indexed views may be created for these two major groups of queries. The considerations are also valid for queries and indexed views using both joins and aggregations.

-- Example:
USE Northwind
-- Identify the 5 products with the overall biggest discount total.
-- This may be expressed, for example, by two different queries:
-- Q1.
select TOP 5 ProductID,
    SUM(UnitPrice*Quantity) - SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- Q2.
select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate
from [order details]
group by ProductID
order by Rebate desc

The following indexed view will be used to execute Q1:

create view Vdiscount1 with schemabinding
as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group by ProductID

create unique clustered index VDiscountInd on Vdiscount1 (ProductID)

However, it will not be used by Q2, because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view:

create view Vdiscount2 with schemabinding
as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    SUM(UnitPrice*Quantity*Discount) SumDiscountPrice2,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group by ProductID

create unique clustered index VDiscountInd on Vdiscount2 (ProductID)

This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries.

The following query asks for the orders with the largest total discount:

-- Q3.
select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate
from dbo.[order details]
group by OrderID

Q3 can use neither of the Vdiscount views, because the column OrderID is not included in the view definition. To address this variation of the discount analysis query, we may create a different indexed view, similar to the query itself. An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 could take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the (OrderID, ProductID) combination is unique in the original order details table, the resulting view would have as many rows as the original table, and we would see no savings in using such a view compared to using the original table. Consider the size of the resulting indexed view.
In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table. Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in an indexed view definition. However, SQL Server may use an indexed view to execute a query containing an AVG aggregate; a query containing STDEV or VARIANCE cannot use an indexed view to pre-compute these values.

The next example shows a query producing the average price for a particular product:

-- Q4.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID
group by ProductName, od.ProductID

This is an example of an indexed view that will be considered by SQL Server to answer Q4:

create view v3 with schemabinding
as
select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price,
    COUNT_BIG(*) Count, SUM(od.Quantity) Units
from dbo.[order details] od
group by od.ProductID
go
create UNIQUE CLUSTERED index iv3 on v3 (ProductID)
go

Observe that the view definition does not contain the table Products. The indexed view does not need to contain all tables used in the query that uses the indexed view.

In addition, the following query (the same as Q4, only with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition:

-- Q5.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID
    and p.ProductName like '%tofu%'
group by ProductName, od.ProductID

The following query cannot use the indexed view, because the added search condition od.UnitPrice>10 contains a column from a table in the view definition, and that column is neither a grouping column nor does the predicate appear in the view definition:

-- Q6.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID
    and od.UnitPrice>10
group by ProductName, od.ProductID

To contrast the Q6 case, the following query will use the indexed view v3, since the added predicate is on the grouping column of the view v3:

-- Q7.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID=p.ProductID
    and od.ProductID in (1,2,13,41)
group by ProductName, od.ProductID

The previous query Q6 will use the following indexed view V4:

create view V4 with schemabinding
as
select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units, COUNT_BIG(*) Count
from dbo.[order details] od, dbo.Products p
where od.ProductID=p.ProductID
    and od.UnitPrice>10
group by ProductName, od.ProductID

create unique clustered index VDiscountInd on V4 (ProductName, ProductID)

The same index on the view V4 will also be used for a query where a join to the table Orders is added, for example:

-- Q8.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID
    and od.UnitPrice>10
group by ProductName, od.ProductID

We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4.
-- Q8a.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID
    and od.UnitPrice>25
group by ProductName, od.ProductID

Q8a cannot use the indexed view because of the WHERE clause mismatch (od.UnitPrice>25 instead of od.UnitPrice>10). Observe that the table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view, because the added predicate may eliminate additional rows participating in the aggregates, as shown in Q8b:

-- Q8b.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
    SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID
    and od.UnitPrice>10
    and o.OrderDate>'01/01/1998'
group by ProductName, od.ProductID

Locking and Indexes
In general, you should let SQL Server control the locking within indexes. The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually.

Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed.

As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks.

Setting vs. Querying
In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function.
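A sketch of setting and then querying an index locking option; the index name is hypothetical, and IsPageLockDisallowed is the corresponding INDEXPROPERTY property in SQL Server 2000:

EXEC sp_indexoption 'member.nci_name', 'AllowPageLocks', FALSE
SELECT INDEXPROPERTY(OBJECT_ID('member'), 'nci_name', 'IsPageLockDisallowed')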
Lesson 2: Concepts – Statistics
Statistics are the most important tool that the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data.

What You Will Learn
After completing this lesson, you will be able to:
• Define terms related to statistics collected by SQL Server.
• Describe how statistics are maintained by SQL Server.
• Discuss the autostats feature of SQL Server.
• Describe how statistics are used in query optimization.

Recommended Reading
• Statistics Used by the Query Optimizer in Microsoft SQL Server 2000: http://msdn.microsoft.com/library/techart/statquery.htm

Definitions

Cardinality
The cardinality means how many unique values exist in the data.

Density
For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density: by definition, each index entry can point to only one row. A unique index has a density value of 1/(number of rows in the table). Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8,345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1,807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful.

Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1,807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page does not need to be reaccessed is in the occasional coincidence in which two adjacent index entries happen to point to the same data page.

In general, you can think of density as the average number of duplicates. We can also talk about 'join density', which applies to the average number of duplicates in the foreign key column. This answers the question: in this one-to-many relationship, how many is 'many'?

Selectivity
In general, selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter; low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all.

Statistics
Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself; the data row merely contains a pointer to the image data. For simplicity's sake, we'll talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes.

During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for the columns in your WHERE clause, assesses the index's usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single-column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all columns helps the optimizer decide how useful the index is for joins.
Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) the number for the first column alone, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column; the lead column density is always needed.

Statistics Maintenance
Statistics information tracks the distribution of key values. SQL Server statistics are basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:
• The time of the last statistics collection
• The number of rows used to produce the histogram and density information
• The average key length
• Densities for other combinations of columns

In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY, of the range.

DBCC SHOW_STATISTICS
The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:
• DISTINCT_RANGE_ROWS: the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself), computed as 1/RANGE_DENSITY.
• AVG_RANGE_ROWS: the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS.

In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server can't use an index to actually locate the values.

Statistics on Columns
Column statistics can be useful for two main purposes:
• When the SQL Server optimizer is determining the optimal join order, it frequently is best to have the smaller input processed first. By 'input' we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, so the resulting input will be very small.
• The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine whether scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the firstname column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query:

select * from member where firstname = 'MPRO'

In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table). The small number of bookmarks will then be followed to retrieve the actual data.
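To inspect the histogram steps (RANGE_HI_KEY, EQ_ROWS, RANGE_ROWS, and the derived values described above) and the density information for an index or for column statistics, a sketch (the statistics names are hypothetical):

DBCC SHOW_STATISTICS ('member', 'nci_name')    -- statistics on an index
DBCC SHOW_STATISTICS ('member', 'firstname')   -- column statistics, if created under that name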
Manually Updating Statistics
You can also manually force statistics to be updated, in one of two ways. You can run the UPDATE STATISTICS command on a table, or on one specific index or set of column statistics, or you can execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database.

You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns; columns of the ntext, text, or image datatypes; columns that already have statistics; and columns that are the first column of an index.

Autostats
By default, SQL Server will update statistics on any index or column as needed. Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:
• UPDATE STATISTICS. In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option re-enables automatic updates.
• sp_autostats. This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated.

However, setting the database option auto update statistics to FALSE overrides any individual table settings; in other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford.

Trace Flags
• Trace flag 205 reports recompiles due to autostats.
• Trace flag 8721 writes information to the errorlog when autostats has been run.
For more information, see Knowledge Base article Q195565, "INF: How SQL Server 7.0 Autostats Work."

Statistics and Performance
The performance penalty of NOT having up-to-date statistics far outweighs the benefit of avoiding automatic updating. Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database.
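A sketch of the manual statistics commands covered above (the statistics name is hypothetical):

UPDATE STATISTICS member WITH FULLSCAN    -- one table; add NORECOMPUTE to disable autostats
EXEC sp_updatestats                       -- all user tables in the current database
CREATE STATISTICS firstname_stats ON member (firstname)
EXEC sp_autostats 'member'                -- report (or change) the autostats setting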
Lesson 3: Concepts – Query Optimization

What You Will Learn
After completing this lesson, you will be able to:
• Describe the phases of query optimization.
• Discuss how SQL Server estimates the selectivity of indexes and columns, and how this estimate is used in query optimization.

Recommended Reading
• Chapter 15: "The Query Processor", Inside SQL Server 2000 by Kalen Delaney
• Chapter 16: "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
• Whitepaper about the SQL Server query processor architecture by Hal Berenson and Kalen Delaney: http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm

Phases of Query Optimization
Query optimization involves several phases.

Trivial Plan Optimization
Optimization itself goes through several steps. The first step is something called trivial plan optimization. The whole idea of trivial plan optimization is that cost-based optimization is a bit expensive to run. The optimizer can try a great many possible variations trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it can avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause: there is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is usable; there is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better.

The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out early in the process and do not pay a lot of the compilation cost. This is a good thing, because the number of potential plans went up astronomically in SQL Server 7.0, as hash joins, merge joins, and index intersections were added to the list of processing techniques.

Simplification and Statistics Loading
If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding and other operations that do not require looking at the cost or analyzing what the indexes are, but that can result in a more efficient query. SQL Server then loads the metadata, including the statistics information on the indexes, and the optimizer goes through a series of phases of cost-based optimization.

Cost-Based Optimization Phases
The cost-based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases, each of which is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules.

In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost-based optimizer. The rules were intentionally organized to try to make that true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of a lack of statistical information or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer's output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5.
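When experimenting with the optimizer's choices in the following sections, it can help to display the chosen plan without executing the query; a sketch:

SET SHOWPLAN_TEXT ON
GO
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'
GO
SET SHOWPLAN_TEXT OFF
GO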
Full Optimization
At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases and still has not found a cheap plan, it examines the cost of the plan that it has so far. If the cost is above the threshold, the optimizer goes into the full optimization phase. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. The full optimization phase assumes that the plan should be run in parallel; if the machine is very busy, the plan will end up running in serial, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or on a single-processor machine), the full optimization phase just uses a brute-force method to find a serial plan.

Selectivity Estimation
Selectivity is one of the most important pieces of information. One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table, and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses.

If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated.

Optimizing Search Arguments
In general, the filters in the WHERE clause determine which indexes will be useful. If an indexed column is referenced in a search argument (SARG), the optimizer will analyze the cost of using that index. A SARG has the form:
• column <operator> value
• value <operator> column
• the operator must be one of =, >, >=, <, <=

The value can be a constant, an operation, or a variable. Some functions will also be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases:

select * from member where firstname < 'AKKG'
select * from member where firstname = substring('HAAKGALSFJA', 2, 5)
select * from member where firstname = 'AA' + 'KG'
declare @name char(4)
set @name = 'AKKG'
select * from member where firstname < @name

Not all functions can be used in SARGs:

select * from charge where charge_amt < 2*2
select * from charge where charge_amt < sqrt(16)

Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it is not going to actually perform the function's calculations.

A filter with a variable is usually a SARG. The issue is whether the optimizer can come up with useful costing information.
A filter with a variable is not a SARG if the variable is of a different datatype and the column must be converted to the variable's datatype. For more information, see Knowledge Base article Q198625. In the following example (from the credit sample database), @id is an int while member_no is a smallint, so the conversion can prevent the filter from being a SARG:

Use credit
go
CREATE TABLE [member2] (
    [member_no] [smallint] NOT NULL ,
    [lastname] [shortstring] NOT NULL ,
    [firstname] [shortstring] NOT NULL ,
    [middleinitial] [letter] NULL ,
    [street] [shortstring] NOT NULL ,
    [city] [shortstring] NOT NULL ,
    [state_prov] [statecode] NOT NULL ,
    [country] [countrycode] NOT NULL ,
    [mail_code] [mailcode] NOT NULL
)
GO
insert into member2
select member_no, lastname, firstname, middleinitial,
    street, city, state_prov, country, mail_code
from member

alter table member2 add constraint pk_member2
primary key clustered (lastname, member_no, firstname, country)

declare @id int
set @id = 47
update member2
set city = city + ' City', state_prov = state_prov + ' State'
where lastname = 'Barr'
    and member_no = @id
    and firstname = 'URQYJBFVRRPWKVW'
    and country = 'USA'

These queries do not have SARGs, and a table scan will be done:

select * from member where substring(lastname, 1, 2) = 'BA'

Some non-SARGs can be converted:

select * from member where lastname like 'ba%'

In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, the substring query above can be rewritten as the LIKE query that follows it, as sketched below.
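A sketch of the rewrite (equivalent result sets assuming a case-insensitive collation, but only the second form is a SARG that can seek on a lastname index):

-- Non-SARG: the function on the column forces a scan
select * from member where substring(lastname, 1, 2) = 'BA'

-- SARG: the optimizer converts the LIKE without a leading wildcard into a range seek
select * from member where lastname like 'BA%'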
The query optimizer:
 Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
 Uses the smaller input as the outer table.
 Requires that a useful index exist on the join predicate for the inner table.
 Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.

Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares input values.
3. Performs a merge algorithm.
• If the input values are equal, the rows are returned.
• If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.

Note Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.

Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires fewer page I/Os.

Evaluates Sorted Values
For the query optimizer to use the merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages the sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which case a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
 SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
 A merge join operation may be one-to-one, one-to-many, or many-to-many. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
 Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
 A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
 Merge join operations can only be performed with an equality operator in the join predicate.
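As with the nested loop case, a hedged way to study this strategy is to force it with the MERGE JOIN query hint and compare the plans (again assuming the member and charge sample tables):

select m.member_no, c.charge_amt
from member m join charge c on m.member_no = c.member_no
OPTION (MERGE JOIN)

If neither input arrives presorted on member_no, the forced plan will show the SORT operators needed to make the merge possible.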
Hash Joins
Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.

Note Duplicate data or ranges of data are not useful for hash joins, because the data is not organized together or in order.

When a Hash Join Is Used
The query optimizer uses a hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input
The query optimizer assigns a build and a probe input. If the query optimizer incorrectly assigns the build and probe inputs (which may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.
Build input consists of the column values from the table with the fewest rows. The build input creates a hash table in memory to store these values. A hash bucket is a storage place in the hash table into which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket whose hash key value matches the hash key value of the row. Hash buckets are stored as a linked list and contain only the columns that are needed for the query. A hash table contains hash buckets; the hash table is created from the build input.
Probe input consists of the column values from the table with the most rows. The probe input is checked against the build input's hash buckets to find matches.

Note The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join
The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive, because the algorithm is very complex. SQL Server:
1. Reads the probe input. Each probe input is processed one row at a time.
2. Performs the hash algorithm against each probe input row and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations
Consider the following facts about the hash joins that the query optimizer uses:
 Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
 Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
 A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department. When it does this, the query optimizer uses only one input for both the build and probe roles.
 If the join inputs are large and of similar size, the performance of a hash join operation is similar to that of a merge join with prior sorting. However, if the sizes of the join inputs differ significantly, a hash join is often much faster.
 Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
• Are not indexed (unless explicitly saved to disk and then indexed).
• Are often not sorted for the next operation in the execution plan.
 The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
 A hash join reduces the need for database denormalization. Denormalization is typically used to achieve better performance by reducing join operations, despite the redundancy it introduces and the risk of inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.
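To illustrate the grouping role of the hash operator mentioned above, here is a hedged sketch (the employee table and its columns are hypothetical, not part of the sample database) that forces hash-based aggregation with the HASH GROUP query hint:

select department, SUM(salary)
from employee          -- hypothetical table
group by department
OPTION (HASH GROUP)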
Subquery Performance
Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.

Note Performance comparisons cannot be made based on I/Os alone. With hashing and merging techniques, the number of reads may be the same for two queries, yet one may take much longer and use more memory resources. Always be sure to monitor statistics time as well.

Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.

-- JOIN solution
BEGIN TRAN
UPDATE OD SET discount = discount + 0.05
FROM [Order Details] AS OD JOIN Products AS P
ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN

-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE productid IN
(SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN

-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE EXISTS(SELECT supplierid FROM Products
WHERE [Order Details].productid = Products.productid
AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied distinct, which may result in a logical operator of SEMI JOIN. Compare the plans of these two queries:

USE credit
select member_no from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m join charge c on m.member_no = c.member_no

The second query uses a HASH MATCH as the final step to remove the duplicates; the first query only had to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast).
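To make that comparison concrete, as the note above recommends, you can capture both I/O and timing statistics around the two statements:

SET STATISTICS IO ON
SET STATISTICS TIME ON

select member_no from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m join charge c on m.member_no = c.member_no

SET STATISTICS TIME OFF
SET STATISTICS IO OFF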
This article gives an overview of the memory management techniques available to Linux™ programmers. Although the focus is on C, the techniques apply to other languages as well. It covers the details of how memory is managed, then shows how to manage memory manually, how to manage it semi-manually using reference counting or memory pools, and how to manage it automatically using garbage collection.

Why memory must be managed

Memory management is one of the most fundamental areas of computer programming. In many scripting languages, you do not have to worry about how memory is managed, but that does not make memory management any less important. Understanding the capabilities and limitations of your memory manager is critical for effective programming. In most systems languages, such as C and C++, you have to do memory management yourself. This article covers the basic concepts of manual, semi-manual, and automatic memory management practice.

Back in the days of assembly language programming on the Apple II, memory management was not a big concern. You were essentially running the whole system. Whatever memory the machine had, you had. You did not even have to work out how much it had, because every machine had the same amount. So if your memory requirements were fixed, you just picked a memory range and used it.

However, even in such a simple computer you had problems, especially if you did not know how much memory each part of your program would need. If space was limited and memory requirements varied, you needed some way to meet these demands:

Determine whether you have enough memory to process the data.
Grab a portion of the available memory.
Return a portion of memory to the pool of available memory so that it can be used by other parts of the program, or by other programs.

Libraries that implement these requirements are called allocators, because they are responsible for allocating and reclaiming memory. The more dynamic a program is, the more memory management matters, and the more your choice of allocator matters. Let us look at the different methods available for memory management, their benefits and drawbacks, and the situations where each works best.

C-style memory allocators

The C programming language provides two functions to satisfy our three requirements:

malloc: This function allocates a given number of bytes and returns a pointer to them. If not enough memory is available, it returns a null pointer.
free: This function takes a pointer to a segment of memory allocated by malloc and releases it for later use by the program or the operating system (in fact, some malloc implementations can only return memory to the program, not to the operating system).

Physical and virtual memory

To understand how memory is allocated within a program, you first need to understand how memory is allocated to the program by the operating system. Every process on the computer thinks it can access all of the physical memory. Obviously, since multiple programs run at the same time, each process cannot own all of it. What the processes use is virtual memory.

As an example, suppose your program accesses memory at address 629. The virtual memory system does not have to store that data at location 629 in RAM. In fact, it may not be in RAM at all -- if physical RAM is full, it may even have been moved out to the hard disk! Because such addresses do not have to reflect where the memory is physically located, they are called virtual memory. The operating system maintains a table of virtual-to-physical address translations so that the hardware can respond correctly to address requests. And if an address is on the hard disk rather than in RAM, the operating system temporarily halts the process, unloads other memory to disk, loads in the requested memory from disk, and restarts the process. This way, each process gets its own address space to use and can access more memory than is physically installed.

On 32-bit x86 systems, each process can access 4 GB of memory. Most people's systems do not have 4 GB of memory, even counting swap, and each individual process certainly uses less than that. So when a process is loaded, it gets an initial memory allocation up to a certain address called the system break. Past that address is unmapped memory -- memory for which no corresponding physical location in RAM or on disk has been assigned. Therefore, if a process outgrows its initial allocation, it has to ask the operating system to "map in" more memory. (Mapping is a mathematical term for a one-to-one correspondence -- memory is mapped when its virtual address has a corresponding physical address to store its contents.)

UNIX-based systems have two basic system calls for mapping in additional memory:

brk: brk() is a very simple system call. Remember the system break, the boundary of the process's mapped memory? brk() simply moves that location forward or backward to add memory to the process or take it away.
mmap: mmap(), or "memory map", is like brk() but much more flexible. First, it can map memory anywhere, not just at the end of the process. Second, it can map virtual addresses not only to physical RAM or swap but also to files and file positions, so that reading and writing the memory reads and writes the data in the file. Here, though, we only care about mmap's ability to add mapped memory to the process. munmap() does the opposite of mmap().

As you can see, either brk() or mmap() can be used to add extra virtual memory to our process. We will use brk() in our example because it is simpler and more common.

Implementing a simple allocator

If you have written many C programs, you have probably used malloc() and free() many times. You may not, however, have taken the time to think about how they are implemented in your operating system. This section shows you the code for a most simplified implementation of malloc and free, to help illustrate what is involved in managing memory.

To try these examples, copy this code listing and paste it into a file named malloc.c. I will explain the listing one piece at a time.

In most operating systems, memory allocation is handled by the following two simple functions:

void *malloc(long numbytes): Allocates numbytes of memory and returns a pointer to the first byte.
void free(void *firstbyte): Given a pointer returned by an earlier malloc, returns the allocated space to the process's "free space".

malloc_init is the function that initializes our memory allocator. It does three things: marks the allocator as initialized, finds the last valid memory address on the system, and sets up the pointer to the beginning of the memory we manage. These three variables are globals:

Listing 1. Our simple allocator's global variables

int has_initialized = 0;
void *managed_memory_start;
void *last_valid_address;

As mentioned, the boundary of the mapped memory -- the last valid address -- is often called the system break or the current break. On many UNIX® systems, you find the current break with the sbrk(0) function. sbrk moves the current break by the number of bytes given in its argument and returns the new break; an argument of 0 simply returns the current break. Here is our malloc initialization code, which finds the current break and initializes our variables:

Listing 2. Allocator initialization function

/* Include the sbrk function */
#include <unistd.h>

void malloc_init()
{
/* grab the last valid address from the OS */
last_valid_address = sbrk(0);
/* we don't have any memory to manage yet, so
*just set the beginning to be last_valid_address */
managed_memory_start = last_valid_address;
/* Okay, we're initialized and ready to go */
has_initialized = 1;
}

Now, to manage memory fully, we need to be able to track what is allocated and what is reclaimed. After a block of memory is passed to free, we need to mark it as unused, and during malloc calls we need to be able to locate unused blocks. Therefore, every block of memory returned by malloc starts with this structure:
Listing 3. Memory control block structure definition

struct mem_control_block {
int is_available;
int size;
};

Now, you might think this would cause problems for programs calling malloc -- how do they know about this structure? The answer is that they do not have to know; before returning the pointer, we move it past this structure, hiding it. That makes the returned pointer point to memory that is not used for any other purpose. From the calling program's point of view, all they get is free, open memory. Then, when the pointer is passed back through free(), we simply back up a few bytes to find the structure again.

Before we discuss allocating memory, we will discuss freeing, because it is simpler. To free memory, the only thing we have to do is take the pointer we are given, back up sizeof(struct mem_control_block) bytes, and mark the block as available. Here is the code:

Listing 4. Deallocation function

void free(void *firstbyte) {
struct mem_control_block *mcb;
/* Backup from the given pointer to find the
* mem_control_block */
mcb = firstbyte - sizeof(struct mem_control_block);
/* Mark the block as being available */
mcb->is_available = 1;
/* That's It! We're done. */
return;
}

As you can see, this allocator frees memory with a very simple mechanism, in constant time. Allocating memory is slightly more difficult. Here is an outline of the algorithm:

Listing 5. Pseudocode for the main allocator

1. If our allocator has not been initialized, initialize it.
2. Add sizeof(struct mem_control_block) to the size requested.
3. Start at managed_memory_start.
4. Are we at last_valid_address?
5. If we are:
A. We didn't find any existing space that was large enough
-- ask the operating system for more and return that.
6. Otherwise:
A. Is the current space available (check is_available
from the mem_control_block)?
B. If it is:
i) Is it large enough (check "size" from the
mem_control_block)?
ii) If so:
a. Mark it as unavailable
b. Move past mem_control_block and return the
pointer
iii) Otherwise:
a. Move forward "size" bytes
b. Go back to step 4
C. Otherwise:
i) Move forward "size" bytes
ii) Go back to step 4

We mostly traverse memory by following chained pointers, looking for an open block. Here is the code:

Listing 6. The main allocator

void *malloc(long numbytes) {
/* Holds where we are looking in memory */
void *current_location;
/* This is the same as current_location, but cast to a
* memory_control_block */
struct mem_control_block *current_location_mcb;
/* This is the memory location we will return. It will
* be set to 0 until we find something suitable */
void *memory_location;
/* Initialize if we haven't already done so */
if(! has_initialized) {
malloc_init();
}
/* The memory we search for has to include the memory
* control block, but the users of malloc don't need
* to know this, so we'll just add it in for them. */
numbytes = numbytes + sizeof(struct mem_control_block);
/* Set memory_location to 0 until we find a suitable
* location */
memory_location = 0;
/* Begin searching at the start of managed memory */
current_location = managed_memory_start;
/* Keep going until we have searched all allocated space */
while(current_location != last_valid_address)
{
/* current_location and current_location_mcb point
* to the same address. However, current_location_mcb
* is of the correct type, so we can use it as a struct.
* current_location is a void pointer so we can use it
* to calculate addresses. */
current_location_mcb =
(struct mem_control_block *)current_location;
if(current_location_mcb->is_available)
{
if(current_location_mcb->size >= numbytes)
{
/* Woohoo! We've found an open,
* appropriately-sized location. */
/* It is no longer available */
current_location_mcb->is_available = 0;
/* We own it */
memory_location = current_location;
/* Leave the loop */
break;
}
}
/* If we made it here, it's because the current memory
* block is not suitable; move to the next one */
current_location = current_location +
current_location_mcb->size;
}
/* If we still don't have a valid location, we'll
* have to ask the operating system for more memory */
if(! memory_location)
{
/* Move the program break numbytes further */
sbrk(numbytes);
/* The new memory will be where the last valid
* address left off */
memory_location = last_valid_address;
/* We'll move the last valid address forward
* numbytes */
last_valid_address = last_valid_address + numbytes;
/* We need to initialize the mem_control_block */
current_location_mcb = memory_location;
current_location_mcb->is_available = 0;
current_location_mcb->size = numbytes;
}
/* Now, no matter what (well, except for error conditions),
* memory_location has the address of the memory, including
* the mem_control_block */
/* Move the pointer past the mem_control_block */
memory_location = memory_location + sizeof(struct mem_control_block);
/* Return the pointer */
return memory_location;
}

That is our memory manager. Now we just have to build it and use it in programs.

Run the following command to build a malloc-compatible allocator (we are actually leaving out realloc() and a few other functions, but malloc() and free() are the main ones):

Listing 7. Compiling the allocator

gcc -shared -fpic malloc.c -o malloc.so

This produces a file named malloc.so, a shared library containing our code.

On UNIX systems, you can now use your allocator in place of the system malloc(), like this:

Listing 8. Replacing the standard malloc

LD_PRELOAD=/path/to/malloc.so
export LD_PRELOAD

The LD_PRELOAD environment variable causes the dynamic linker to load the symbols of the given shared library before any executable, and gives the symbols in that library precedence. So from now on, any application in that session will use our malloc() rather than the system's. A few applications do not use malloc(), but they are the exception. Applications that use other memory management functions such as realloc(), or that make incorrect assumptions about malloc()'s internal behavior, are likely to crash. The ash shell appears to work just fine with our new malloc().

If you want to be sure that your malloc() is actually being used, you can test it by adding a write() call at the function's entry point.

Our memory manager leaves a lot to be desired in many ways, but it is effective at showing what a memory manager needs to do. Some of its shortcomings include:

Because it operates on the system break (a global variable), it cannot coexist with other allocators or with mmap.
When allocating, in the worst case it has to walk through all of process memory, which may include a lot of memory that lives on the hard disk, meaning the operating system has to spend time paging data in and out.
It has no decent out-of-memory handling (malloc simply assumes that allocation succeeds).
It does not implement many other memory functions, such as realloc().
Because sbrk() may hand back more memory than we requested, some memory at the end of the heap is leaked.
The is_available flag uses a full 4-byte word even though it contains only one bit of information.
The allocator is not thread-safe.
The allocator cannot coalesce free space into larger blocks.
The allocator's overly simple fit algorithm can lead to a lot of potential memory fragmentation.
I am sure there are many other problems as well. That is why it is only an example!
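As a quick sanity check of the allocator (this driver is my own sketch, not one of the article's listings; the reuse behavior assumes the simple first-fit search shown above), you can compile it together with malloc.c and watch a freed block get recycled:

/* test_malloc.c -- hypothetical test driver.
 * Build: gcc test_malloc.c malloc.c -o test_malloc
 * (gcc may warn that malloc's signature differs from the built-in.) */
#include <stdio.h>

void *malloc(long numbytes);   /* our allocator from malloc.c */
void free(void *firstbyte);

int main(void)
{
    void *a = malloc(100);   /* grows the heap via sbrk() */
    void *b = malloc(100);   /* grows it again */
    free(a);                 /* marks the first block as available */
    void *c = malloc(50);    /* small enough to reuse a's block */

    /* With the first-fit search above, c should land where a was */
    printf("a=%p b=%p c=%p (expect c == a)\n", a, b, c);
    return 0;
}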
Other malloc implementations

There are many implementations of malloc(), each with its own strengths and weaknesses. Designing an allocator involves many trade-offs, including:

Allocation speed.
Deallocation speed.
Behavior in threaded environments.
Behavior when memory is nearly exhausted.
Cache locality.
Bookkeeping memory overhead.
Behavior in virtual memory environments.
Small versus large objects.
Real-time guarantees.

Every implementation has its own set of advantages and drawbacks. In our simple allocator, allocation is very slow while reclamation is very fast, and because it makes poor use of the virtual memory system, it is best suited to large objects.

Many other allocators are available, including:

Doug Lea Malloc: Doug Lea Malloc is actually a whole family of allocators, including Doug Lea's original allocator, the GNU libc allocator, and ptmalloc. Doug Lea's allocator has a basic structure much like our version, but it adds indexes to make searching faster and can combine multiple unused chunks into one large chunk. It also supports caching, so that recently freed memory can be reused more quickly. ptmalloc is an extended version of Doug Lea Malloc with multithreading support. An article describing Doug Lea's Malloc implementation is listed in the Resources section later in this article.

BSD Malloc: BSD Malloc, the implementation that shipped with 4.2 BSD and is included in FreeBSD, allocates objects from pools of objects of predetermined sizes. It has size classes for object sizes that are powers of two minus a constant, so if you request an object of a given size, it simply allocates from the matching size class. This provides a fast implementation, but it can waste memory. An article describing this implementation is listed in Resources.

Hoard: Hoard was written with the goal of making memory allocation very fast in multithreaded environments. Its design therefore centers on the use of locks, so that processes never have to wait to allocate memory. It can dramatically speed up multithreaded processes that do a lot of allocating and reclaiming. An article describing this implementation is listed in Resources.

These are the best known of the many available allocators. If your program has special allocation needs, you may prefer to write a custom allocator that matches the way your program allocates memory. However, if you are not familiar with allocator design, custom allocators usually create more problems than they solve. For a good introduction to the subject, see section 2.5, "Dynamic Storage Allocation", of Donald Knuth's The Art of Computer Programming Volume 1: Fundamental Algorithms (see the link in Resources). It is a little dated, since it does not take virtual memory environments into account, but most of the algorithms are the basis of the functions given earlier.

In C++, by overloading operator new(), you can implement your own allocator on a per-class or per-template basis. Chapter 4 ("Small Object Allocation") of Andrei Alexandrescu's Modern C++ Design describes a small-object allocator (see the link in Resources).

Drawbacks of malloc()-based memory management

Our memory manager is not the only one with shortcomings; malloc()-based memory managers still have plenty of them, no matter which allocator you use. Managing memory with malloc() can be very frustrating for programs that need to keep long-lived storage. If you have a large number of floating references to memory, it is often hard to know when they will be freed. Memory whose lifetime is limited to the current function is fairly easy to manage, but memory that lives beyond that scope is much harder. Also, many APIs are unclear about whether the calling program or the called function is responsible for managing the memory.

Because of these problems with managing memory, many programs tend to use their own memory management rules, and C++ exception handling makes the task even more problematic. Sometimes it seems that more code is devoted to managing memory allocation and cleanup than to actually doing the computation! Therefore, we will examine other options for memory management.

Semi-automatic memory management strategies

Reference counting

Reference counting is a semi-automated memory management technique, meaning that it requires some programming support, but it does not require you to know exactly when an object is no longer in use. The reference counting machinery handles the memory management task for you.

With reference counting, every shared data structure has a field containing the number of currently active "references" to the structure. When a pointer to a data structure is handed to a piece of the program, that program increments the reference count by 1. Essentially, you are telling the data structure in how many places it is being stored. Then, when the process is done using it, the program decrements the reference count by 1 and checks whether the count has dropped to zero. If it has, the memory is freed.

The benefit is that you do not have to track every path through the program that a given data structure might follow. Each local reference to it simply increments or decrements the count as appropriate. This prevents the data structure from being freed while it is in use. However, when you use a reference-counted data structure, you must remember to run the reference counting functions. Also, built-in functions and third-party libraries will not know about or use your reference counting scheme. Reference counting also has difficulty with data structures that contain reference cycles.

To implement reference counting, you only need two functions -- one that increments the reference count, and one that decrements it and frees the memory when the count reaches zero.

An example set of reference counting functions might look like this:

Listing 9. Basic reference counting functions

/* Structure Definitions*/

/* Base structure that holds a refcount */
struct refcountedstruct
{
int refcount;
};

/* All refcounted structures must mirror struct
* refcountedstruct for their first variables */

/* Refcount maintenance functions */

/* Increase reference count */
void REF(void *data)
{
struct refcountedstruct *rstruct;
rstruct = (struct refcountedstruct *) data;
rstruct->refcount++;
}

/* Decrease reference count */
void UNREF(void *data)
{
struct refcountedstruct *rstruct;
rstruct = (struct refcountedstruct *) data;
rstruct->refcount--;
/* Free the structure if there are no more users */
if(rstruct->refcount == 0)
{
free(rstruct);
}
}

REF and UNREF could be more complicated, depending on what you want to do. For example, you might want to add locking for multithreaded programs, or you might want to extend refcountedstruct so that it also contains a pointer to a function to call just before the memory is freed (like a destructor in object-oriented languages -- this is required if your structures themselves contain pointers).

When using REF and UNREF, you need to obey these rules for pointer assignments:

UNREF the value the left-hand-side pointer points to before the assignment.
REF the value the left-hand-side pointer points to after the assignment.

In functions that are passed reference-counted structures, the function should follow these rules:

REF each pointer at the beginning of the function.
UNREF each pointer at the end of the function.

Here is a working code example that uses reference counting:
Listing 10. Example of using reference counting

/* EXAMPLES OF USAGE */

/* Data type to be refcounted */
struct mydata
{
int refcount; /* same as refcountedstruct */
int datafield1; /* Fields specific to this struct */
int datafield2;
/* other declarations would go here as appropriate */
};

/* Use the functions in code */
void dosomething(struct mydata *data)
{
REF(data);
/* Process data */
/* when we are through */
UNREF(data);
}

struct mydata *globalvar1;

/* Note that in this one, we don't decrease the
* refcount since we are maintaining the reference
* past the end of the function call through the
* global variable */
void storesomething(struct mydata *data)
{
REF(data); /* passed as a parameter */
globalvar1 = data;
REF(data); /* ref because of assignment */
UNREF(data); /* function finished */
}

Because reference counting is so simple, most programmers implement it themselves rather than use a library. They still depend on low-level allocators such as malloc and free to actually allocate and release the memory, however.

Reference counting is used very widely for memory management in higher-level languages such as Perl. In these languages, reference counting is handled automatically by the language, so you do not have to worry about it at all unless you are writing an extension module. Because everything must be reference counted, this has some cost in speed, but it greatly improves the safety and convenience of programming. Here are the benefits of reference counting:

Simple to implement.
Easy to use.
Because the reference is part of the data structure, it has good cache locality.

It has its drawbacks, too:

It requires that you never forget to call the reference counting functions.
It cannot free structures that are part of a cyclic data structure.
It slows down nearly every pointer assignment.
Even though the objects in use are reference counted, you must take other measures when using exception handling (such as try or setjmp()/longjmp()).
It requires extra memory to handle the references.
The reference count occupies the first position in the structure, which on most machines is the fastest position to access.
It is slower and harder to use in multithreaded environments.

C++ can tolerate some of these programmer mistakes through the use of smart pointers, which handle pointer-housekeeping details such as reference counting for you. However, if you have to use any legacy code that cannot deal with smart pointers (such as linkage to a C library), using them is often more difficult and complex than not using them. Smart pointers therefore tend to pay off only in pure C++ projects. If you want to use smart pointers, you really should read the "Smart Pointers" chapter of Alexandrescu's Modern C++ Design.

Memory pools

Memory pools are another semi-automatic method of memory management. Memory pools help programs that run in distinct stages, where each stage has memory that belongs only to that stage. For example, many network server processes allocate a lot of per-connection memory -- memory whose maximum lifetime is that of the current connection. Apache uses pooled memory, dividing its connections into stages, each with its own memory pool. At the end of each stage, all of the pool's memory is freed at once.

With pooled memory management, each allocation specifies the pool from which the memory should come. Each pool has a different lifetime. In Apache, there is a pool whose lifetime is that of the server, another whose lifetime is that of a connection, another whose lifetime is that of a request, and others as well. So if my set of functions produces no data that outlives the connection, I can allocate everything from the connection pool and know that it will be freed automatically when the connection ends. In addition, some implementations allow you to register cleanup functions, which are called just before the pool is cleared, to take care of any remaining tasks before the memory goes away (much like destructors in object-oriented programming).

To use pools in your own programs, you can use either GNU libc's obstack implementation or Apache's Apache Portable Runtime. The advantage of GNU obstacks is that they are included by default in GNU-based Linux distributions. The advantage of the Apache Portable Runtime is that it includes many other utilities for handling every aspect of writing cross-platform server software. For details on GNU obstacks and Apache's pooled memory implementation, see the links to their documentation in the Resources section.

The following hypothetical code listing shows how to use obstacks:
Listing 11. Example code using obstacks

#include <obstack.h>
#include <stdlib.h>

/* Example code listing for using obstacks */

/* Used for obstack macros (xmalloc is a malloc function
that exits if memory is exhausted) */
#define obstack_chunk_alloc xmalloc
#define obstack_chunk_free free

/* Pools */
/* Only permanent allocations should go in this pool */
struct obstack *global_pool;
/* This pool is for per-connection data */
struct obstack *connection_pool;
/* This pool is for per-request data */
struct obstack *request_pool;

void allocation_failed()
{
exit(1);
}

int main()
{
/* Initialize Pools */
global_pool = (struct obstack *)
xmalloc (sizeof (struct obstack));
obstack_init(global_pool);
connection_pool = (struct obstack *)
xmalloc (sizeof (struct obstack));
obstack_init(connection_pool);
request_pool = (struct obstack *)
xmalloc (sizeof (struct obstack));
obstack_init(request_pool);
/* Set the error handling function */
obstack_alloc_failed_handler = &allocation_failed;
/* Server main loop */
while(1)
{
wait_for_connection();
/* We are in a connection */
while(more_requests_available())
{
/* Handle request */
handle_request();
/* Free all of the memory allocated
* in the request pool */
obstack_free(request_pool, NULL);
}
/* We're finished with the connection, time
* to free that pool */
obstack_free(connection_pool, NULL);
}
}

int handle_request()
{
/* Be sure that all object allocations are allocated
* from the request pool */
int bytes_i_need = 400;
void *data1 = obstack_alloc(request_pool, bytes_i_need);
/* Do stuff to process the request */
/* return */
return 0;
}

Basically, after each major stage of the operation, the obstack for that stage is freed. Note, however, that if a procedure needs to allocate memory that lasts longer than the current stage, it can use a longer-lived obstack, such as the connection or global pool. The NULL passed to obstack_free() indicates that the entire contents of the obstack should be freed. Other values can be used, but they are rarely practical.

The benefits of pooled memory allocation include:

Memory management is simple for the application.
Memory allocation and reclamation are faster, because everything is done a pool at a time. Allocation can be done in O(1) time, and releasing a pool takes about as long (it is actually O(n), but divided by a large factor in most cases, which makes it effectively O(1)).
Error-handling pools can be preallocated so that the program can still recover when regular memory is exhausted.
There are very easy-to-use standard implementations.

The drawbacks of pooled memory are:

Memory pools are only useful for programs whose operation can be staged.
Memory pools often do not work well with third-party libraries.
If the structure of the program changes, the pools may have to be modified, which can lead to a redesign of the memory management system.
You must remember which pool to allocate from. If you get this wrong, it is hard to catch.

Garbage collection

Garbage collection is the fully automatic detection and removal of data objects that are no longer in use. A garbage collector typically runs when available memory drops below a certain threshold. Usually, the collector takes as its starting point the set of "base" data known to be available to the program -- stack data, global variables, registers -- and then tries to trace every piece of data connected through them. Everything the collector finds is useful data; everything it does not find is garbage, which can be destroyed and reused. To manage memory effectively, many kinds of garbage collectors need to know the layout of the pointers inside data structures, so they must be part of the language itself to work properly.

Types of collectors

Copying: These collectors divide the memory store into two halves and allow data to live on only one half at a time. Periodically, starting from the "base" elements, they copy the data from one half to the other. The newly occupied half of memory is now active, and everything left in the other half is considered garbage. Also, as the copying happens, every pointer must be updated to point to each memory item's new location. Therefore, to use this method of garbage collection, the collector must be integrated with the programming language.

Mark and sweep: Each piece of data is given a tag. Periodically, all tags are set to 0, and the collector walks the data starting from the "base" elements. Whenever it encounters memory, it marks the tag as 1. At the end, everything not marked 1 is considered garbage and is reused by later allocations.

Incremental: Incremental garbage collectors do not need to traverse the full set of data objects. Traversing all of memory causes problems, both because of the sudden pauses during collection and because of the cache issues associated with touching all current data (everything has to be paged in). Incremental collectors avoid these problems.

Conservative: Conservative garbage collectors do not need to know anything about the layout of your data structures to manage memory. They simply look at all data items and assume that any of them could be a pointer. So if a byte sequence could be a pointer into a block of allocated memory, the collector marks that block as referenced. Sometimes memory that is not really referenced gets kept -- for example, if an integer field happens to contain the address of allocated memory. This happens rarely, however, and it only wastes a little memory. The advantage of conservative collectors is that they can be integrated with any programming language.

Hans Boehm's conservative garbage collector is one of the most popular garbage collectors available, because it is free and both conservative and incremental. It can be built with the --enable-redirect-malloc option, which makes it usable as a drop-in replacement for the system allocator (using malloc/free instead of its own API). In fact, if you do this, you can use the same LD_PRELOAD trick we used for our example allocator to enable garbage collection in almost any program on your system. If you suspect a program is leaking memory, you can use this garbage collector to rein in the process. Many people used this technique in the early days when Mozilla leaked memory heavily. This garbage collector runs under both Windows® and UNIX.

Some advantages of garbage collection:

You never have to worry about double-freeing memory or about object lifetimes.
With some collectors, you can use the same API as for regular allocation.

Its drawbacks include:

With most collectors, you have no say in when memory is freed.
In most cases, garbage collection is slower than other forms of memory management.
Defects caused by garbage collection errors are hard to debug.
If you forget to set unused pointers to null, you will still have memory leaks.
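Before moving on, here is a small sketch of what the Boehm collector's native API looks like in C (assuming the libgc headers and library are installed; link with -lgc):

#include <stdio.h>
#include <string.h>
#include <gc.h>   /* Boehm GC public API */

int main(void)
{
    GC_INIT();   /* initialize the collector */

    for (int i = 0; i < 100000; i++) {
        /* Collector-managed memory: no free() is ever called.
         * Blocks that become unreachable are reclaimed
         * automatically during later allocations. */
        char *buf = GC_MALLOC(64);
        strcpy(buf, "temporary data");
    }

    printf("GC heap size: %lu bytes\n",
           (unsigned long)GC_get_heap_size());
    return 0;
}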
Conclusion

Everything involves trade-offs: performance, ease of use, ease of implementation, threading support, and so on -- and these are only a few. There are many memory management patterns available to meet your project's requirements, and each pattern has numerous implementations, each with its own strengths and weaknesses. For many projects, the default techniques of the programming environment are good enough, but when your project has special needs, it helps to know the available options. The following table compares the memory management strategies covered in this article.

Table 1. Comparison of memory allocation strategies
(each row lists: allocation speed / deallocation speed / cache locality / ease of use / generality / real-time capable / SMP thread-friendly)

Custom allocator: depends on implementation / depends on implementation / depends on implementation / hard / none / depends on implementation / depends on implementation
Simple allocator: fast when memory use is low / very fast / poor / easy / high / no / no
GNU malloc: moderate / fast / moderate / easy / high / no / moderate
Hoard: moderate / moderate / moderate / easy / high / no / yes
Reference counting: N/A / N/A / excellent / moderate / moderate / yes (depending on the malloc implementation) / depends on implementation
Pools: moderate / very fast / excellent / moderate / moderate / yes (depending on the malloc implementation) / depends on implementation
Garbage collection: moderate (slow when collecting) / moderate / poor / moderate / moderate / no / almost never
Incremental garbage collection: moderate / moderate / moderate / moderate / moderate / no / almost never
Incremental conservative garbage collection: moderate / moderate / moderate / easy / high / no / almost never

Resources

You can read the original English version of this article on the worldwide developerWorks site.

Documentation on the Web
The obstacks section of the GNU C Library manual documents the obstack programming interface.
The Apache Portable Runtime documentation describes the interface to its pooled allocator.

Basic allocators
Doug Lea's Malloc is one of the most popular memory allocators.
BSD Malloc is used in most BSD-based systems.
ptmalloc, derived from Doug Lea's malloc, is used in GLIBC.
Hoard is a malloc implementation optimized for multithreaded applications.
GNU Memory-Mapped Malloc (part of GDB) is an mmap()-based malloc implementation.

Pooled allocators
GNU Obstacks (part of GNU libc) is the most widely installed pooled allocator, since it ships with every glibc-based system.
Apache's pooled allocator (in the Apache Portable Runtime) is the most widely used pooled allocator.
Squid has its own pooled allocator.
NetBSD also has its own pooled allocator.
talloc, part of Samba, is a pooled allocator.

Smart pointers and custom allocators
The Loki C++ Library contains implementations of many common C++ patterns, including smart pointers and a custom small-object allocator.

Garbage collectors
The Hans Boehm Conservative Garbage Collector is the most popular open source garbage collector, and it can be used with ordinary C/C++ programs.

Articles on virtual memory in modern operating systems
A New Virtual Memory Implementation for Berkeley UNIX, by Marshall Kirk McKusick and Michael J. Karels, discusses the BSD VM system.
Mel Gorman's Linux VM Documentation discusses the Linux VM system.

Articles on malloc
Malloc in Modern Virtual Memory Environments, by Poul-Henning Kamp, discusses malloc and how it interacts with BSD virtual memory.
Hoard -- a Scalable Memory Allocator for Multithreaded Environments, by Berger, McKinley, Blumofe, and Wilson, discusses the implementation of the Hoard allocator.
Design of a General Purpose Memory Allocator for the 4.3BSD UNIX Kernel, by Marshall Kirk McKusick and Michael J. Karels, discusses kernel-level allocators.
A Memory Allocator, by Doug Lea, gives an overview of designing and implementing an allocator, including design choices and trade-offs.
Memory Management for High-Performance Applications, by Emery D. Berger, discusses custom memory management and how it affects high-performance applications.

Articles on custom allocators
Some Storage Management Techniques for Container Classes, by Doug Lea, describes writing custom allocators for C++ classes.
Composing High-Performance Memory Allocators, by Berger, Zorn, and McKinley, discusses writing custom allocators to speed up specific jobs.
Reconsidering Custom Memory Allocation, by Berger, Zorn, and McKinley, revisits the subject of custom allocation to see whether it is really worth the trouble.

Articles on garbage collection
Uniprocessor Garbage Collection Techniques, by Paul R. Wilson, gives a basic overview of garbage collection.
The Measured Cost of Garbage Collection, by Benjamin Zorn, gives hard data on garbage collection and performance.
Memory Allocation Myths and Half-Truths, by Hans-Juergen Boehm, covers the myths surrounding garbage collection.
Space Efficient Conservative Garbage Collection, by Hans-Juergen Boehm, describes his garbage collector for C/C++.

General references on the Web
The Memory Management Reference has many links to memory management literature and technical articles.
The OOPS Group Papers on memory management and memory hierarchies are an excellent collection of technical papers on the subject.
Memory Management in C++ discusses writing custom allocators for C++.
Programming Alternatives: Memory Management discusses several choices programmers have for memory management.
The Garbage Collection FAQ covers everything you need to know about garbage collection.
Richard Jones's Garbage Collection Bibliography has links to just about any article on garbage collection you could want.

Books
C++ Pointers and Dynamic Memory Management, by Michael Daconta, covers many techniques of memory management.
Memory as a Programming Concept in C and C++, by Frantisek Franek, discusses techniques and tools for effective memory use and the role of memory-related errors in computer programming.
Garbage Collection: Algorithms for Automatic Dynamic Memory Management, by Richard Jones and Rafael Lins, describes the most common garbage collection algorithms in use today.
Section 2.5, "Dynamic Storage Allocation", of Donald Knuth's The Art of Computer Programming Volume 1: Fundamental Algorithms describes techniques for implementing basic allocators.
Section 2.3.5, "Lists and Garbage Collection", of Donald Knuth's The Art of Computer Programming Volume 1: Fundamental Algorithms discusses garbage collection algorithms for lists.
Chapter 4, "Small Object Allocation", of Andrei Alexandrescu's Modern C++ Design describes a high-speed small-object allocator that is much more efficient than the standard C++ allocator.
Chapter 7, "Smart Pointers", of Andrei Alexandrescu's Modern C++ Design describes the implementation of smart pointers in C++.
Chapter 8, "Intermediate Memory Topics", of Jonathan Bartlett's Programming from the Ground Up contains an assembly language version of the simple allocator used in this article.

From developerWorks
Self-managing data buffer memory (developerWorks, January 2004) outlines a pseudo-C implementation of a self-managing, abstract data buffer for managing memory.
A framework for the user defined malloc replacement feature (developerWorks, February 2002) shows how to use an AIX facility to replace the memory subsystem with one of your own design.
Mastering Linux debugging techniques (developerWorks, August 2002) describes debugging methods for four situations: segmentation faults, memory overruns, memory leaks, and hangs.
Handling memory leaks in Java programs (developerWorks, February 2001) explains what causes Java memory leaks and when they need to be considered.
In the developerWorks Linux zone, you can find more resources for Linux developers.
From the developerWorks Speed-start your Linux app zone, you can download free trial versions of IBM middleware products that run on Linux, including WebSphere® Studio Application Developer, WebSphere Application Server, DB2® Universal Database, Tivoli® Access Manager, and Tivoli Directory Server, and find how-to articles and technical support.
Join the developerWorks community through developerWorks blogs.
Discounted Linux books are available in the Developer Bookstore Linux section.

About the author
Jonathan Bartlett is the author of Programming from the Ground Up, a book about Linux assembly language programming. He is the lead developer at New Media Worx, where he develops web, video, kiosk, and desktop applications for clients. You can reach Jonathan at johnnyb@eskimo.com.
