Null pointer error: NULL pointer access (probably)

hahahaha5211 2010-07-01 07:03:22
This is the error that shows up once my code starts running. Could anyone give me a hint as to roughly where the problem might be? Much appreciated!



NULL pointer access (probably)
Defered Exception context
CURRENT PROCESS:
COMM=sh PID=414
TEXT = 0x01980040-0x019b41e0 DATA = 0x019b41e4-0x019c0514
BSS = 0x019c0514-0x019c3e94 USER-STACK = 0x019c8f60

return address: [0x01990ad6]; contents of:
0x01990ab0: 1365 67e7 e140 019b 300f e100 fc04 e300
0x01990ac0: 019b 3228 e140 019b e100 fc08 e300 1356
0x01990ad0: 0c45 180a b9f0 [9228] 0c45 183d e801 0000
0x01990ae0: 3045 05b5 0010 e14a 019c e10a 0dcc b9f0

SEQUENCER STATUS: Not tainted
SEQSTAT: 00000027 IPEND: 0030 SYSCFG: 0006
HWERRCAUSE: 0x0
EXCAUSE : 0x27
RETE: <0x00000000> /* Maybe null pointer? */
RETN: <0x001da000> /* unknown address */
RETX: <0x01990ad6> [ sh + 0x10a96 ]
RETS: <0x01990ad0> [ sh + 0x10a90 ]
PC : <0x01990ad6> [ sh + 0x10a96 ]
DCPLB_FAULT_ADDR: <0x0000002c> /* Maybe null pointer? */
ICPLB_FAULT_ADDR: <0x01990ad6> [ sh + 0x10a96 ]

PROCESSOR STATE:
R0 : 0000002c R1 : 00468fd4 R2 : 0000002c R3 : 00468ff4
R4 : 019bda9c R5 : 00000005 R6 : 00000000 R7 : 019c8e28
P0 : 019c8d68 P1 : 019c0d94 P2 : 00468ff4 P3 : 00000023
P4 : 00000000 P5 : 0000002c FP : 019c8e2c SP : 001d9f24
LB0: 0198017d LT0: 01980170 LC0: 00000000
LB1: 019a8cc5 LT1: 019a8cc4 LC1: 00000000
B0 : 00000000 L0 : 00000000 M0 : 00000000 I0 : 002604ec
B1 : 00000000 L1 : 00000000 M1 : 00000000 I1 : 00260661
B2 : 00000000 L2 : 00000000 M2 : 00000000 I2 : 07615703
B3 : 00000000 L3 : 00000000 M3 : 00000000 I3 : 00000000
A0.w: 00000000 A0.x: 00000000 A1.w: 00000000 A1.x: 00000000
USP : 019c8e10 ASTAT: 02002000
Hardware Trace:
0 Target : <0x0000483c> { _trap_c + 0x0 }
Source : <0xffa007f4> { _exception_to_level5 + 0xb4 }
1 Target : <0xffa00740> { _exception_to_level5 + 0x0 }
Source : <0xffa00698> { _ex_trap_c + 0x5c }
2 Target : <0xffa0063c> { _ex_trap_c + 0x0 }
Source : <0xffa00894> { _trap + 0x28 }
3 Target : <0xffa0086c> { _trap + 0x0 }
Source : <0x01990ad4> [ sh + 0x10a94 ]
4 Target : <0x01990ad0> [ sh + 0x10a90 ]
Source : <0x01993184> [ sh + 0x13144 ]
5 Target : <0x01993178> [ sh + 0x13138 ]
Source : <0x01990acc> [ sh + 0x10a8c ]
6 Target : <0x01990ac2> [ sh + 0x10a82 ]
Source : <0x01990e64> [ sh + 0x10e24 ]
7 Target : <0x01990e52> [ sh + 0x10e12 ]
Source : <0x01990e2c> [ sh + 0x10dec ]
8 Target : <0x01990e20> [ sh + 0x10de0 ]
Source : <0x01990e12> [ sh + 0x10dd2 ]
9 Target : <0x01990df4> [ sh + 0x10db4 ]
Source : <0x01990abe> [ sh + 0x10a7e ]
10 Target : <0x01990ab2> [ sh + 0x10a72 ]
Source : <0x01993184> [ sh + 0x13144 ]
11 Target : <0x01993178> [ sh + 0x13138 ]
Source : <0x01990aae> [ sh + 0x10a6e ]
12 Target : <0x01990a90> [ sh + 0x10a50 ]
Source : <0x019919ae> [ sh + 0x1196e ]
13 Target : <0x019919a4> [ sh + 0x11964 ]
Source : <0x0199195c> [ sh + 0x1191c ]
14 Target : <0x01991954> [ sh + 0x11914 ]
Source : <0x01990d6c> [ sh + 0x10d2c ]
15 Target : <0x01990d64> [ sh + 0x10d24 ]
Source : <0x01990d32> [ sh + 0x10cf2 ]
Stack from 001d9f04:
00000000 ffa007f8 0019e560 0019e560 0019e55c 04000021 00000000 01981864
01990ad6 00000030 00000027 00000000 001da000 01990ad6 01990ad6 01990ad0
0000002c 02002000 019a8cc5 0198017d 019a8cc4 01980170 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 07615703 00260661 002604ec 019c8e10 019c8e2c 0000002c 00000000
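
A note on reading the dump: DCPLB_FAULT_ADDR is the data address that actually faulted, and here it is 0x2c (R0 and P5 also hold 0x2c). That pattern usually means a member at byte offset 0x2c was read through a struct pointer that was NULL, rather than a plain dereference of address 0. The faulting instruction is at sh + 0x10a96 (PC 0x01990ad6 minus the TEXT base 0x01980040), so if an unstripped copy of the busybox binary is still around, the Blackfin toolchain's addr2line or objdump should map that offset to a function and source line (the exact tool prefix, e.g. bfin-uclinux-addr2line, depends on your toolchain). What follows is only a minimal C sketch of the access pattern that produces a fault address like 0x2c; the struct and field names are invented for illustration and are not taken from the busybox source.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical layout: any struct whose member happens to sit at
 * byte offset 0x2c reproduces a fault address of 0x2c when the
 * struct pointer is NULL. */
struct job {
    char name[0x2c];   /* filler so that `state` lands at offset 0x2c */
    int  state;        /* offsetof(struct job, state) == 0x2c */
};

static int get_state(const struct job *j)
{
    /* If j is NULL, this load reads address 0 + 0x2c = 0x2c, which is
     * exactly what shows up in DCPLB_FAULT_ADDR and R0 in the dump. */
    return j->state;
}

int main(void)
{
    printf("offset of state = %#zx\n", offsetof(struct job, state));
    struct job *j = NULL;   /* e.g. a lookup or allocation that failed */
    return get_state(j);    /* deliberately faults: NULL pointer access */
}

On a desktop build this kind of access dies with SIGSEGV at address 0x2c; on the nommu Blackfin target the same access surfaces as the trap shown above.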


7 replies
秦晋随风 2010-07-13
Just use GDB directly.
linkejin 2010-07-07
Try checking your memory usage with memwatch?
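For reference, memwatch is compiled into the program rather than attached at run time. Below is a minimal sketch of the usual setup, assuming the standard memwatch.c/memwatch.h distribution (the MEMWATCH and MW_STDIO defines and the memwatch.log output file are the conventional ones, so check the README of the copy you have). Whether it is practical to build this into busybox for a Blackfin target is a separate question, but the mechanism itself is just this:

/* Build sketch (standard memwatch distribution assumed):
 *   gcc -DMEMWATCH -DMW_STDIO demo.c memwatch.c -o demo
 * Without -DMEMWATCH the memwatch macros compile away and the
 * program behaves as before; with it, a report of leaks and
 * overwrites is written to memwatch.log when the program exits. */
#include <stdlib.h>
#include <string.h>
#include "memwatch.h"       /* include after the system headers */

int main(void)
{
    char *p = malloc(16);   /* allocation is tracked by memwatch */
    strcpy(p, "hello");
    free(p);

    char *leak = malloc(32);   /* deliberate leak so the log has something to show */
    (void)leak;
    return 0;
}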
hahahaha5211 2010-07-06
[Quote=Reply #4 by helldevil:]

http://blog.chinaunix.net/u/12325/showart_1999057.html
Take a look at this
[/Quote]

I don't follow what you mean.
T-Quake 2010-07-04
hahahaha5211 2010-07-02
Could the problem be with my busybox build?
Has anyone here run into something similar before? Any pointers would be much appreciated!
hahahaha5211 2010-07-02
Any advice would be greatly appreciated, this is urgent!
hahahaha5211 2010-07-01
I know the problem is in the shell, but I have no idea where to start or how to pin down where the error actually is. Any guidance would be much appreciated.
Release history (reverse chronological order) This release 1.3.7.1 Release date: December 11, 2006 Known bugs: none Fixes/features added from previous release: a) added support for multiple filters per process in VsDrvr.dll b) updated manual Previous release 1.3.5 Release date: October 11th, 2005 Known bugs: none Fixes/features added from previous release a) added VsSetWavelengthStep and VsGetWavelengthStep functions b) added VsSetWavelengthWavesConfirm() function c) fixed error-handling of VsSetWavelength() In earlier revisions, the error status light was cleared after a VsSetWavelength() call failed, so the user did not see the light turn red to alert that an error had occurred. This has been fixed in 1.35 so the error light remains lit, and an error code is returned. d) added range-check to VsDefinePalette() Previous revisions did not range-check the palette index number, and hard crashes could be produced if out-of-range values were supplied to this routine. Previous release 1.33b Release date: February 9, 2005 Known bugs: none Fixes/features changed from previous release: a) Fixed installer: programmers?guide (vsdrvr.pdf) installed when SDK is selected. Previous release 1.33a Release date: January 10th, 2005 Known bugs: i) SDK programmers?guide is not installed even if SDK is selected. Fixes/features added from previous release a) VsDrvr.dll fixed handling of COMx ports that do not support 460kb The autobaud sequence tries a variety of baud rates, some of which are not supported by RS-232 interfaces (but are supported on USB virtual COM ports). This was not handled properly, so if a call was made to VsOpen when no VariSpec was present, but a later call was made when a filter was present, the latter would fail. b) VsGui added check of which COMx ports are present on computer This program now filters its COMx list and only shows ports which actually exist; it used to show COM1 ?COM8 even if not all of these were present. c) VsGui added automatic filter detection on Configure dialog This checks all ports in turn, and reports the first detected filter. The search order is determined by the order in which the computer lists ports in the Registry. d) VsGui changed to recognize filters as present while initializing In prior revisions, VsGui would not report no filter found if a filter was present but still going through its power-up initialization. Now, a message box is posted to indicate that a filter was found, and the program checks whether initialization is complete, at 1 second intervals. When the filter is done initializing, the VsGui controls become active and report the filter information (serial number, wavelength range, etc). e) VsGui added filter status item to Configure dialog Adjacent the COMx combo box, there is now a text field that indicates filter status as 揘ot found? 揑nitializing? or 揜eady? This field is updated whenever the combo box selection is changed. Previous release 1.32 Release date: July 27th, 2004 Known bugs: COMx port described above as 1.33 fix item a) Fixes/features added from previous release a) VsGui added a sweep feature to enable cycling the filter The wavelength start, stop, and step are adjustable. Cycling can be done a fixed number of times or indefinitely. Previous release 1.30 Release date: June 23rd, 2004 Known bugs: none Fixes/features added from previous release a) New commands VsSetWaveplateAndWaves(), VsGetWaveplateAndWaves(), VsGetWaveplateLimits(), and VsGetWaveplateStages() were added for support of variable retarder models. 
b) New commands VsSetRetries() and VsSetLatencyMs() were added for control of serial port latency and automatic retry in case of error. c) New commands VsSetMode() and VsGetMode() were added for control of the VariSpec filter抯 triggering and sweep modes d) New command VsGetSettleMs() was added to learn optics settling time e) New commands VsIsDiagnostic() and VsIsEngagedInBeam() were added. These are reserved for CRI use and are not supported for use by end users. f) The command syntax and functionality of the VsSendCommand() function was changed - see description of this command for details g) The VsGui program was modified to add sweep function, and the associated files were added to the file manifest. The new functions are assigned higher ordinal numbers than the earlier commands, so the ordinal numbers assigned to routines in the earlier VsDrvr routines are preserved. This means one may use the new VsDrvr.dll file with applications that were developed and linked with the earlier release, without any need to recompile or relink the application. Of course, to use the new functions one must link the application code with the new .lib file containing these functions. Previous release: 1.20 Release date December 3rd, 2003 Known bugs: a) there is a conflict when one uses the implicit palette to set wavelengths, and also defines palette states explicitly using the VsDefinePalette() function. When the explicitly set palette state overwrites a palette state implicitly associated with a certain wavelength, that wavelength will not be accurately set when one issues the VsSetWavelength() command. This is fixed in release 1.30 Fixes/features added from previous release a) fixes bug with implicit palette in September 8 release b) incorporates implicit retry for command send/reply if error in transmission c) recognizes filters with serial numbers > 60000 (normally VariLC numbers) d) supports binary transfer of >127 bytes Previous release 1.11 Release date September 8, 2003 Known bugs a) implicit palette can fail to create palette entry, causing tuning error b) VsSendBinary() fails if 128 chars or more sent (signed char error) Fixes/features added from previous release a) included VsIsPresent() function omitted from function list of 1.10 release Previous release 1.10 Release date: August 28th, 2003 Known bugs: a) VsIsPresent function not included ?generates 搖nresolved external?at link-time Fixes/features added from previous release: b) added command VsEnableImplicitPalette() to code and documentation added command VsConnect() to code and documentation added command VsClose() to code and documentation added local variable to avoid unnecessary querying of diagnostic status documented that command VsConnect() will not be supported in future documented that command VsDisconnect() will not be supported in future documented that command VsIsConnected() will not be supported in future changed to Windows Installer from previous ZIP file added table summary of commands to this manual Previous release 1.00 Release date: November 5th, 2002 Known bugs: a) none Fixes/features added from previous release b) n/a ?initial releaseDescription This package provides a set of functions to control the VariSpec filter, which may be called from C or C++ programs. It incorporates all aspects of the filter communication, including low-level serial routines. With these routines, one can address the filter as a virtual object, with little need for detailed understanding of its behavior. 
This simplifies the programming task for those who want to integrate the VariSpec into larger software packages. File manifest All files are contained in a single installer file which includes the following: vsdrvr.h declaration file vsdrvr.lib library stub file vsdrvr.dll run-time library vsdrvr_r1p30.pdf (this file) release notes and programmer抯 guide {sample program using VsDrvr package} registryAccess.cpp registryAccess.h resource.h stdafx.h VsConfigDlg.cpp VsConfigfDlg.h VsGui.cpp VsGui.h VsGui.mak VsGui.rc VsGuiDlg.cpp VsGuiDlg.h VsSweep.cpp VsSweep.h Development cycle In order to use the DLL, one should take the following steps: a) Add #include 搗sdrvr.h?statements to all files that access the VariSpec software b) Add vsdrvr.lib to the list of modules searched by the linker c) Place a copy of vsdrvr.dll in either the folder that includes the executable code for the program being developed; or, preferably, in the windows system folder. Failures in step a) will lead to compiler errors; in step b) to linker errors; in step c) to a run-time error message that 揳 required .DLL file, vsdrvr.dll, was not found? VariSpec filter configuration The VariSpec filter communicates via ASCII commands sent over an RS-232 interface or USB. The RS232 can operate at 9600 or 19,200 baud, while the USB appears as a virtual COMx device. While it appears to be present at either 9600 baud or 115.2 kbaud , the actual data transmission occurs at 12 MBaud over the USB. Each command is terminated with an end-of-line terminator which can be either a carriage-return or line feed . For RS-232 models, the baud rate and terminator character are selected using DIP switches inside the VariSpec electronics module. Default settings are 9600 baud, and the character (denoted 慭r?in the C language). For USB devices, the terminator is always . For latest information, or to determine how to alter the settings from the factory defaults, consult the VariSpec manual. Timing and latency The VariSpec filter takes a finite time to process commands, which adds an additional delay to that imposed by simple communication delays. In general, the time to process a given command is short except for the following operations: ?filter initialization ?wavelength selection ?palette definition The first of these is quite lengthy (30 seconds or more) because it involves measurements and exercising of the liquid crystal optics. The latter two are much faster but still can take a significant amount of time (up to 300 ms) on the older RS-232 electronics due to the computations involved. On the newer, USB electronics, the latter two functions are completed in less than 5 ms. For this reason, the functions that handle these actions offer the option of waiting until the action is complete before returning (so-called synchronous operation); although they can be called in an asynchronous mode where the function returns as soon as all commands have been sent to the VariSpec, without waiting for them to run to completion. Another option is to use implicit palette tables. If this is enabled, by calling the VsEnableImplicitPalette() function, the driver will define the settings for a given wavelength once, then saves the results within the VariSpec for faster access next time that wavelength is used. Subsequent access times are essentially instantaneous, until either all of the 128 palette states are in use, or the palette is cleared via the VsClearPalette() command. 
The VsIsReady() function can be used to determine whether a filter is done processing all commands. Ideally, one should check VsIsReady() using a timer or the like to wait efficiently, so that the host PC is free to do other tasks while waiting for the VariSpec. The VariSpec always processes each command to completion before starting on the next command, and it has a 256 byte input buffer, so there is no problem issuing several commands at once; they will all be executed, and in the order given. This also indicates another way to coordinate one抯 program with the VariSpec filter: one can issue any of the VsGetxxx() functions, which query the filter. Since these do not return until the filter has responded, one may be sure that there are no pending commands when the VsGetxxx() function completes. The VsDrvr package provides for automatic re-try of commands up to 3 times, in the event that communications are garbled, and will wait up to 2 seconds for completion of serial commands. The number of retries can be set from 0 to 10, and the latency adjusted, if desired. However, there should be no need to do so. The hardware and software have been tested and observed to execute several million commands without a single communications error, so in practice the need for the retry protocol is very slight. Communication speed is not improved by reducing the latency, since commands proceed when all characters are received, and the latency time to time-out is only relevent when there is a communications lapse ?and as noted, these are very unlikely so the performance burden of retries should not be a practical consideration. Multiple Filters and Multiple Processes These routines only permit one VariSpec per process, and one process per VariSpec. So, these routines cannot control multiple filters at once from a single process; nor can several active processes seek to control the same filter at the same time. The VsDrvr package anticipates a future upgrade to enable control of multiple filters per process, so it makes use of an integer handle to identify which VariSpec is being controlled, even though (for now) only a single filter can be active. This handle is checked, and the correct handle must be used in all calls. Program flow and sequence Typical programs should use the following API calls (all applications, upon initiating link to the filter) ?call VsOpen() to establish communications link (required) ?call VsIsPresent() to confirm a filter is actually present ?call VsIsReady() in case filter is still doing power-up sequence er busy> ?call VsGetFilterIdentity() to learn wavelength limits and serial number if needed (if setting wavelengths via implicit palettes; recommended especially with older filters) ?call VsEnableImplicitPalettes() ? (to set wavelengths, either directly or via implicit palettes) ?call VsSetWavelength() and VsGetWavelength() to select and retrieve tuning (if setting wavelengths by means of palettes, and managing palettes explicity) ?call VsDefinePaletteEntry() and VsClearPalette() to define palette entries ?call VsSetPalette() and VsGetPalette() to select and retrieve palette state (all applications, when done with the filter) ?call VsClose() to release the communications link (required) Sample program Source code for a sample program, VsGui, is provided, which illustrates how to control a VariSpec filter using the VsDrvr package. All filter control code lives in the VsGuiDlg.cpp module, specifically in the Connect(), RequestToSetWavelength(), and VsWriteTimerProc() functions. 
The latter two use a system timer to decouple the GUI from the actual filter control, for more responsive feedback to the user. Such an approach is unnecessary if palettes are used, which is preferable when one wishes the best real-time performance. See the VariSpec manual for further information. Auxiliary commands Certain commands are normally only used at the factory when filters are being built and configured, or in specialized configurations. These appear after the normal command set in the listing below. Obsolescent commands The VsConnect(), VsIsConnected(), and VsDisconnect() functions are obsolescent. They are supported in this release, but will not necessarily exist in releases after 1.3x. As they are obsolescent, they are not recommended for new code. These function calls are not documented further in this manual.Summary of commands Normal Commands VsClearError(vsHnd) VsClearPalette(vsHnd) VsClearPendingCommands(vsHnd) VsClose(vsHnd) VsDefinePalette(vsHnd, palEntry, wl) VsEnableImplicitPalette(vsHnd, isEnabled) VsGetError(vsHnd, *pErr) VsGetFilterIdentity(vsHnd, *pVer, *pSerno, *pminWl, *pmaxWl) VsGetMode(vsHnd, int *pMode) VsGetPalette(vsHnd, *ppalEntryNo) VsGetSettleMs(vsHnd, *psettleMs) VsGetTemperature(vsHnd, *pTemperature) VsGetWavelength(vsHnd, *pwl) VsGetWavelengthAndWaves(vsHnd, double *pWl, double *pwaves) VsGetWaveplateLimits(vsHnd, double *pminWaves, double *pmaxWaves) VsGetWaveplateStages(vsHnd, int *pnStages) VsIsPresent(vsHnd) VsIsReady(vsHnd) VsOpen(*pvsHnd, portName, *pErrorCode) VsSetLatencyMs(vsHnd, nLatencyMs) VsSetMode(vsHnd, mode) VsSetPalette(vsHnd, palEntry) VsSetRetries(vsHnd, nRetries) VsSetWavelength(vsHnd, wl, confirm) VsSetWavelengthAndWaves(vsHnd, wl, waveplateVector) Auxiliary commands VsGetAllDrive(vsHnd, *pStages, drive[]) VsGetNstages(vsHnd, *pStages) VsGetPendingReply(vsHnd, reply, nChars, *pQuit, firstMs, subsequentMs) VsGetReply(vsHnd, reply, nChars, waitMs) VsIsDiagnostic(vsHnd) VsIsEngagedInBeam(vsHnd) VsSendBinary(vsHnd, bin[], nChars, clearEcho) VsSendCommand(vsHnd, cmd, sendEolChar) VsSetStageDrive(vsHnd, stage, drive) VsThermistorCounts(vsHnd, *pCounts) Alphabetical list of function calls Syntax Throughout this manual, the following conventions are used: VSDRVR_API Int32 VsOpen( VS_HANDLE *vsHnd, LPCSTR port, Int32 *pErrorCode ) Bold text is used for function names Italics indicate variables whose names (or values) are supplied by the user in their code Name-mangling The declaration file vsdrvr.h includes statements that render the API names accurately in a C++ environment, i.e. free of the name-mangling decoration suffix that is normally added by C++ compilers. Thus the functions can be called freely from either C or C++ programs, using the names exactly as shown in this manual or in the VsDrvr.h file. Call and argument declarations The call protocol type, VSDRVR_API, is declared in vsdrvr.h, as are the types Int32 and VS_HANDLE. Errors All functions return an Int32 status value, which is TRUE if the routine completed successfully and FALSE if there was an error. If there is an error in the VsOpen() function, the error is returned in *pErrorCode. If there is an error in communicating with a filter after a successful VsOpen(), one should use the VsGetError() function to obtain the specific error code involved. This function returns VSD_ERR_NOERROR if there is no error pending. 
Main and auxiliary functions The next section provides a description of the main functions, in alphabetic order; followed by the auxiliary functions, also in alphabetical order. In normal use, one will probably have no need for the auxiliary functions, but this list is provided for completeness. VSDRVR_API Int32 VsClearError( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Purpose: this function clears any pending error on the VariSpec. This resets the error LED on the filter, and sets the pending error to VS_ERR_NOERROR. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsClearPalette( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: clears all elements of the current filter palette and renders the current palette element undefined. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsClearPendingCommands( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: clears all pending commands including any presently in-process Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsClose( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen(). May also be NULL, in which case all VariSpec filters are disconnected. Function: Disconnects the filter. Returns: TRUE if successful, FALSE otherwise Notes: No other functions will work until VsOpen() is called to re-establish communications with the filter. VSDRVR_API Int32 VsDefinePalette( VS_HANDLE vsHnd, Int32 palEntry, double wl) Arguments: vsHnd handle value returned by VsOpen() palEntry palette entry to be defined, in the range [0, 127] wl wavelength associated with this palette entry Function: creates a palette entry for the entry and wavelength specified. This palette entry can then be accessed using VsSetPalette() and VsGetPalette() functions. Returns: TRUE if successful, FALSE otherwise Notes: palettes provide a fast way to define filter settings for wavelengths that are to be repeatedly accessed. The calculations are performed once, at the time the palette element is defined, and the results are saved in a palette table to tune to that wavelength without repeating the underlying calculations. And, one may cycle through the palette table, once defined, by means of TTL a trigger signal to the filter electronics. For more information about using palettes, consult the VariSpec user抯 manual. VSDRVR_API Int32 VsEnableImplicitPalette( VS_HANDLE vsHnd, BOOL imlEnabled) Arguments: vsHnd handle value returned by VsOpen() implEnabled selects whether to use implicit palette definition Function: enables or disables implicit palette generation when wavelengths are defined using the VsSetWavelength function. If enabled, a new palette entry is created whenever a new wavelength is accessed, and the VsSetWavelength function will use this palette entry whenever that wavelength is accessed again, until the palette is cleared. The result is improved tuning speed; however, it means that the palette contents are altered dynamically, which can be a problem if one relies upon the palette contents remaining fixed. Clearing the palette with VsClearPalette() will clear all implicit palette entries as well as explicitly defined palette entries. This is useful if one knows that wavelengths used previously will not be used again, or that a new set of wavelengths is about to be defined and one wishes to make sure there is sufficient room in the palette. 
Returns: TRUE if successful, FALSE otherwise Notes: By default, the implicit palette is enabled for VariSpec filters that have RS-232 interface, and is disabled for newer VariSpec filters that have the USB interface. This is because the newer filters perform the filter tuning calculations fast enough that no performance improvement is obtained by using the implicit palette to set wavelength. For more information about using palettes, consult the VariSpec user抯 manual. VSDRVR_API Int32 VsGetError( VS_HANDLE vsHnd, Int32 *pErr) Arguments: vsHnd handle value returned by VsOpen() pErr pointer to the int that will receive the most recent error code Purpose: this function clears any pending error on the VariSpec. This resets the error LED on the filter, and sets the pending error to VS_ERR_NOERROR. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsGetFilterIdentity( VS_HANDLE vsHnd, Int32 *pVer, Int32 *pSerno, double *pminWl, double *pmaxWl ) Arguments: vsHnd handle value returned by VsOpen() pVer pointer to variable that receives the filter firmware version pSerno pointer to variable that receives the filter serial number pminWl pointer to variable that receives the filter抯 minimum wavelength pmaxWl pointer to variable that receives the filter抯 maximum wavelength Purpose: this function reads the filter抯 information using the VariSpec 慥?command, and puts it to the call variables. Any one of the pointers may be NULL, in which case that piece of information is not returned. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsGetMode( VS_HANDLE vsHnd, Int32 *pMode ) Arguments: vsHnd handle value returned by VsOpen() pMode pointer to variable that receives the filter mode Purpose: this function enables one to read the filter抯 present mode. The mode describes how the filter responds to hardware triggers, and is described in the filter manual. If the pointer *pMode is NULL, no information is returned. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsGetPalette( VS_HANDLE vsHnd, Int32 *ppalEntry ) Arguments: vsHnd handle value returned by VsOpen() ppalEntry pointer to int that receives the 0-based palette entry number. This pointer may not be NULL. Purpose: this function determines what palette entry is currently active and returns it to *ppalEntry. If the present palette entry is undefined, it sets *ppalEntry to ? and returns a successful status code. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsGetSettleMs( VS_HANDLE vsHnd, Int32 *pSettleMs ) Arguments: vsHnd handle value returned by VsOpen() pSettleMs pointer to variable that receives the filter settling time Purpose: this function returns the filter抯 settling time, in milliseconds. This is useful for establishing overall system timing. The settling time is defined as beginning at the moment that the electronics have processed the request to change wavelength, as determined by VsIsReady() or equivalent. At that moment, the new set of drive signals are applied to the optics, and the optics will settle in *psettleMs milliseconds. The settling time is defined as a 95% settling time, meaning the filter has settled to 95% of its ultimate transmission value at the new wavelength being tuned to. 
Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsGetTemperature( VS_HANDLE vsHnd, double *pTemperature ) Arguments: vsHnd handle value returned by VsOpen() pTemperature pointer to double that will receive the filter temperature, in C This pointer may not be NULL Purpose: this function determines the filter temperature using the VariSpec 慪?command, and puts the result to *pTemperature. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsGetWavelength( VS_HANDLE vsHnd, double *pwl ) Arguments: vsHnd handle value returned by VsOpen() pwl pointer to double that will receive the filter wavelength, in nm This pointer may not be NULL Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsGetWavelengthAndWaves( VS_HANDLE vsHnd, double *pwl, double *pwaves ) Arguments: vsHnd handle value returned by VsOpen() pwl pointer to double that will receive the filter wavelength, in nm. This pointer may not be NULL pwaves pointer to double array that will receive one or more waveplate settings. The actual number of settings may be determined by VsGetWaveplateStages(). Purpose: this function determines the current filter wavelength and returns it to *pwl. If the present wavelength is undefined, it sets *pwl to ? and returns a successful status code. If the present wavelength is defined, it also returns the waves of retardance at each of the polarization analysis waveplates in the optics, in the pwaves[] array. Returns: TRUE if successful, FALSE otherwise Notes: See the description of the VsGetWaveplateStages() command for more detail on what stages are considered waveplates. VSDRVR_API Int32 VsGetWaveplateLimits( VS_HANDLE vsHnd, double *pminWaves, double *pmaxWaves ) Arguments: vsHnd handle value returned by VsOpen() pminWaves pointer to double array that will receive the minimum retardances possible at each of the waveplate stages in the filter optics. pmaxWaves pointer to double array that will receive the maximum retardances possible at each of the waveplate stages in the filter optics Purpose: this function determines the range of retardances that are possible at each waveplate stage, in waves, at the present wavelength setting. Note that the retardance range is itself a function of wavelength, so the results will vary as the wavelength is changed. Returns: TRUE if successful, FALSE otherwise Notes: See the description of the VsGetWaveplateStages command for more detail on what stages are considered waveplates. VSDRVR_API Int32 VsGetWaveplateStages( VS_HANDLE vsHnd, Int32 *pnwpStages ) Arguments: vsHnd handle value returned by VsOpen() pnwpStages pointer to Int32 that will receive the number of waveplate stages in the filter optics. This pointer may not be NULL Purpose: this function determines how many polarization analysis stages are present in the optics and returns this number. Note that although all VariSpec filters operate by means of variable retarder element, optical stages that perform wavelength tuning rather than polarization analysis are not treated as waveplate stages. For example, most VariSpec filters do not include any polarization analysis stages and thus report no waveplates. VsGetWaveplateStages will return a value of 2 for conventional PolScope optics. 
In contrast, VsGetNstages() reports the total number of stages in a filter, including stages that perform polarization analysis and stages that perform wavelength tuning. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsIsPresent( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: determines whether a filter is actually present and responding. This is done using the status-check character ??as described in the VariSpec manual. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsIsReady( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: determines whether the filter is done processing all commands, and is ready to receive new commands. Returns: TRUE if successful, FALSE otherwise Notes: this is useful when sending commands such as VsSetWavelength(), VsInitialize(), VsExercise(), and VsDefinePaletteEntry() in asynchronous mode. These commands take a prolonged time, and running them synchronously ties up the processor waiting. Alternatively, one can create a loop that uses CreateWaitableTimer(), SetWaitableTimer(), and WaitForSingleObject() to call VsIsReady() at intervals, checking whether the filter is ready. This approach, though more work for the programmer, leaves most of the processor capacity free for other tasks such as GUI update and the like. VSDRVR_API Int32 VsOpen (VS_HANDLE *pvsHnd, LPCSTR port, Int32 *pErrorCode ) Arguments: pvsHnd pointer to handle. This pointer may not be NULL. port port name, such as 揅OM1? pErrorCode pointer to Int32 to receive an error code if VsOpen() fails Purpose: establishes a connection to the VariSpec using the port specified, and automatically determines the baud rate and end-of-line character for subsequent communications. It also retrieves the filter抯 serial number and wavelength range, to confirm that it is a VariSpec and not some other similar device. However, these are retrieved purely as an integrity check, and the values are not returned to the calling application. See VsGetFilterInfo() to access this information. If the device responds as a VariSpec does when it is not ready (i.e. still initializing), VsOpen() fails and returns the error code VSD_ERR_BUSY. However, one may not be sure that the device is a VariSpec until VsOpen() completes successfully The error codes returned by this function are listed in VsDrvr.h. When VsOpen() runs successfully, *pErrorCode is set to VSD_ERR_NOERROR. The handle associated with this filter is set by VsOpen() to a nonzero handle value if successful, or to NULL if no connection is established. The port may refer to COM1 through COM8. Return: TRUE if successful, FALSE otherwise Notes: Until this function is called, none of the other functions will work. VSDRVR_API Int32 VsSetLatency( VS_HANDLE vsHnd, Int32 latencyMs ) Arguments: vsHnd handle value returned by VsOpen() latencyMs the serial port latency, in ms, in the range [1, 5000] Purpose: this function sets the latency time for USB or RS-232 commands to the value given by latencyMs. Commands that do not conclude in this time are considered to have timed-out. Returns: TRUE if successful, FALSE otherwise Notes: increasing the latency time does not increase the time for commands to complete, nor does it insert any delays in normal processing. It merely defines the window for maximum transmission time, beyond which time an error is reported. 
VSDRVR_API Int32 VsSetPalette( VS_HANDLE vsHnd, Int32 palEntry ) Arguments: vsHnd handle value returned by VsOpen() palEntry the palette entry to be set, in the range [0, 127] Purpose: this function sets the filter to the palette entry specified by palEntry Returns: TRUE if successful, FALSE otherwise Notes: palettes are a good way to control the filter in applications where it will be cycled repeatedly to various, fixed wavelength settings. Palettes calculate the filter settings once, and save the results for rapid access later, rather than calculating them each time, as occurs when one sets the wavelength directly with VsSetWavelength(). See the VariSpec manual for more information on palettes.VSDRVR_API Int32 VsSetRetries( VS_HANDLE vsHnd, Int32 nRetries ) Arguments: vsHnd handle value returned by VsOpen() nRetries the number serial communications retries, in the range [0, 10] Purpose: The VsDrvr software automatically detects errors in communication and re-sends if an error is detected. This function sets the number of times to retry sending any single command, before reporting a communications failure. The default is 3, which should be adequate, and one should rarely need to change this, if ever. The primary purpose of this function is to enable setting the number of retries to zero, to force single-error events to cause detectable errors (as they would normally be fixed automatically via the retry mechanism) Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsSetWavelength( VS_HANDLE vsHnd, double wl, BOOL confirm ) Arguments: vsHnd handle value returned by VsOpen() wl wavelength to tune to, in nm confirm logical flag, indicating whether to confirm actual wavelength value Purpose: this function sets the filter wavelength to the value in wl. If confirm is TRUE, it waits for the filter to complete the command, and then reads back the actual wavelength to confirm it was implemented successfully. Note that the only time there can be a disparity is when the wavelength requested by wl lies outside the legal range for that filter, or if the wavelength is specified to a finer resolution than the filter recognizes (normally, 0.01 nm). Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsGetAllDrive( VS_HANDLE vsHnd, Int32 *pStages, Int32 drive[] ) Arguments: vsHnd handle value returned by VsOpen() pStages pointer to int that will receive the number of stages in the filter drive[] int array to receive the filter drive levels. Purpose: this function reports the number of filter stages in *pStages. If this argument is NULL, it is ignored. The function returns the actual drive level at each stage, in counts, in drive[] , which must not be NULL. Returns: TRUE if successful, FALSE otherwise Notes: The array drive[] must be large enough to receive all the drive levels ?if the exact number of stages is not known, call VsGetNstages() first, or allocate enough array elements (12) to accommodate the largest filter design.VSDRVR_API Int32 VsGetNstages( VS_HANDLE vsHnd, Int32 *pStages ) Arguments: vsHnd handle value returned by VsOpen() pStages pointer to int that will receive the number of stages in the filter Purpose: this function determines the number of optical stages in the filter and returns it in *pStages, which may not be NULL. 
Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsGetPendingReply( VS_HANDLE vsHnd, LPSTR reply, Int32 nChars, Int32 *pQuit, Int32 firstMs, Int32 subsequentMs ) Arguments: vsHnd handle value returned by VsOpen() reply pointer to buffer that is to receive the reply nChars number of characters to receive pQuit pointer to flag to control this function ?see Notes below firstMs maximum time to wait, in ms, for first character of reply subsequentMs maximum time to wait, in ms, for each subsequent character Purpose: this function is used to exploit some of the less-common aspects of the filter, and it is likely that most programs will require its use. It receives a reply from the filter that may not arrive for a long time. The routine waits up to firstMs for the first character to arrive. Subsequent characters must arrive within subsequentMs of one another. Typically, this routine is called with a high value for firstMs and a lower value for subsequentMs. Returns: TRUE if successful, FALSE otherwise Notes: pQuit can be used to cancel this function while it is waiting for the reply, if that is desired, such as to respond to a user cancellation request. To use this feature, pQuit must be non-NULL and *pQuit must be FALSE at the time VsGetPendingReply() is called. VsGetPendingReply() checks this address periodically, and if it discovers that *pQuit is TRUE, it will cancel and return immediately.VSDRVR_API Int32 VsGetReply( VS_HANDLE vsHnd, LPSTR reply, Int32 nChars, Int32 waitMs ) Arguments: vsHnd handle value returned by VsOpen() reply pointer to buffer that will receive the filter reply nChars the number of characters sought waitMs the maximum time, in ms, to wait for the reply Purpose: this function is used to exploit those filter commands that are not directly provided by other functions, and most programmers will not need to use it. If the reply is not received in the time indicated by waitMs, or if less than nChars are received, the function returns with an unsuccessful status code. Returns: TRUE if successful, FALSE otherwise Notes: noneVSDRVR_API Int32 VsIsDiagnostic( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: determines whether the filter is in the diagnostic mode that is used at the factory for setup and calibration. This command is reserved for CRI use only. Returns: TRUE if diagnostic, FALSE otherwise. VSDRVR_API Int32 VsIsEngagedInBeam( VS_HANDLE vsHnd ) Arguments: vsHnd handle value returned by VsOpen() Function: determines whether the filter is engaged in the beam, when configured into certain CRI systems. This function is reserved for CRI use only Returns: TRUE if engaged in the beam, FALSE otherwise VSDRVR_API Int32 VsSendBinary( VS_HANDLE vsHnd, char *bin, Int32 nChars, BOOL clearEcho ) Arguments: vsHnd handle value returned by VsOpen() bin pointer a buffer that contains binary data to be sent to the filter nChars the number of binary characters to be sent clearEcho flag indicating whether to clear echo characters from the queue Purpose: this routine sends binary blocks of data to the filter. This is only necessary when programming calibration data to the filter, and it is not anticipated that this function will be necessary in any normal use. 
Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsSendCommand( VS_HANDLE vsHnd, LPCSTR cmd, BOOL sendEolChar) Arguments: vsHnd handle value returned by VsOpen() cmd pointer to the command to be sent to the filter sendEolChar flag indicating whether to append the end-of-line character or not Purpose: this function sends the command in cmd to the filter, and appends an end-of-line terminator (or not) based on sendEolChar. It automatically retrieves and discards the character echo of this command by the VariSpec. It does not automatically retrieve the reply, if any, from the VariSpec. Returns: TRUE if successful, FALSE otherwise Notes: The parameter sendEolChar should normally be true in all cases, unless one is sending individual character commands such as the ??or 慇?commands described in the VariSpec user抯 manual.VSDRVR_API Int32 VsSetStageDrive( VS_HANDLE vsHnd, Int32 stage, Int32 drive ) Arguments: vsHnd handle value returned by VsOpen() stage stage number whose drive level is to be adjusted drive drive level, in counts, for that stage Purpose: this function provides a way to manually adjust the drive levels at each of the filter抯 optical stages. It is normally used only during manufacture, and is not a function that most software programs will have any reason to use. Returns: TRUE if successful, FALSE otherwise Notes: none VSDRVR_API Int32 VsThermistorCounts( VS_HANDLE vsHnd, Int32 *pCounts ) Arguments: vsHnd handle value returned by VsOpen() pCounts pointer to int that will receive the thermistor signal, in counts Purpose: this function provides a way to determine the signal level, in counts, at the thermistor. It is normally used only during manufacture, and is not a function that most software programs will have any reason to use. Returns: TRUE if successful, FALSE otherwise Notes: none
Table of Contents Header Files The #define Guard Header File Dependencies Inline Functions The -inl.h Files Function Parameter Ordering Names and Order of Includes Scoping Namespaces Nested Classes Nonmember, Static Member, and Global Functions Local Variables Static and Global Variables Classes Doing Work in Constructors Default Constructors Explicit Constructors Copy Constructors Structs vs. Classes Inheritance Multiple Inheritance Interfaces Operator Overloading Access Control Declaration Order Write Short Functions Google-Specific Magic Smart Pointers cpplint Other C++ Features Reference Arguments Function Overloading Default Arguments Variable-Length Arrays and alloca() Friends Exceptions Run-Time Type Information (RTTI) Casting Streams Preincrement and Predecrement Use of const Integer Types 64-bit Portability Preprocessor Macros 0 and NULL sizeof Boost C++0x Naming General Naming Rules File Names Type Names Variable Names Constant Names Function Names Namespace Names Enumerator Names Macro Names Exceptions to Naming Rules Comments Comment Style File Comments Class Comments Function Comments Variable Comments Implementation Comments Punctuation, Spelling and Grammar TODO Comments Deprecation Comments Formatting Line Length Non-ASCII Characters Spaces vs. Tabs Function Declarations and Definitions Function Calls Conditionals Loops and Switch Statements Pointer and Reference Expressions Boolean Expressions Return Values Variable and Array Initialization Preprocessor Directives Class Format Constructor Initializer Lists Namespace Formatting Horizontal Whitespace Vertical Whitespace Exceptions to the Rules Existing Non-conformant Code Windows Code Important Note Displaying Hidden Details in this Guide link ▶This style guide contains many details that are initially hidden from view. They are marked by the triangle icon, which you see here on your left. Click it now. You should see "Hooray" appear below. Hooray! Now you know you can expand points to get more details. Alternatively, there's an "expand all" at the top of this document. Background C++ is the main development language used by many of Google's open-source projects. As every C++ programmer knows, the language has many powerful features, but this power brings with it complexity, which in turn can make code more bug-prone and harder to read and maintain. The goal of this guide is to manage this complexity by describing in detail the dos and don'ts of writing C++ code. These rules exist to keep the code base manageable while still allowing coders to use C++ language features productively. Style, also known as readability, is what we call the conventions that govern our C++ code. The term Style is a bit of a misnomer, since these conventions cover far more than just source file formatting. One way in which we keep the code base manageable is by enforcing consistency. It is very important that any programmer be able to look at another's code and quickly understand it. Maintaining a uniform style and following conventions means that we can more easily use "pattern-matching" to infer what various symbols are and what invariants are true about them. Creating common, required idioms and patterns makes code much easier to understand. In some cases there might be good arguments for changing certain style rules, but we nonetheless keep things as they are in order to preserve consistency. Another issue this guide addresses is that of C++ feature bloat. C++ is a huge language with many advanced features. 
In some cases we constrain, or even ban, use of certain features. We do this to keep code simple and to avoid the various common errors and problems that these features can cause. This guide lists these features and explains why their use is restricted. Open-source projects developed by Google conform to the requirements in this guide. Note that this guide is not a C++ tutorial: we assume that the reader is familiar with the language. Header Files In general, every .cc file should have an associated .h file. There are some common exceptions, such as unittests and small .cc files containing just a main() function. Correct use of header files can make a huge difference to the readability, size and performance of your code. The following rules will guide you through the various pitfalls of using header files. The #define Guard link ▶All header files should have #define guards to prevent multiple inclusion. The format of the symbol name should be ___H_. To guarantee uniqueness, they should be based on the full path in a project's source tree. For example, the file foo/src/bar/baz.h in project foo should have the following guard: #ifndef FOO_BAR_BAZ_H_ #define FOO_BAR_BAZ_H_ ... #endif // FOO_BAR_BAZ_H_ Header File Dependencies link ▶Don't use an #include when a forward declaration would suffice. When you include a header file you introduce a dependency that will cause your code to be recompiled whenever the header file changes. If your header file includes other header files, any change to those files will cause any code that includes your header to be recompiled. Therefore, we prefer to minimize includes, particularly includes of header files in other header files. You can significantly minimize the number of header files you need to include in your own header files by using forward declarations. For example, if your header file uses the File class in ways that do not require access to the declaration of the File class, your header file can just forward declare class File; instead of having to #include "file/base/file.h". How can we use a class Foo in a header file without access to its definition? We can declare data members of type Foo* or Foo&. We can declare (but not define) functions with arguments, and/or return values, of type Foo. (One exception is if an argument Foo or const Foo& has a non-explicit, one-argument constructor, in which case we need the full definition to support automatic type conversion.) We can declare static data members of type Foo. This is because static data members are defined outside the class definition. On the other hand, you must include the header file for Foo if your class subclasses Foo or has a data member of type Foo. Sometimes it makes sense to have pointer (or better, scoped_ptr) members instead of object members. However, this complicates code readability and imposes a performance penalty, so avoid doing this transformation if the only purpose is to minimize includes in header files. Of course, .cc files typically do require the definitions of the classes they use, and usually have to include several header files. Note: If you use a symbol Foo in your source file, you should bring in a definition for Foo yourself, either via an #include or via a forward declaration. Do not depend on the symbol being brought in transitively via headers not directly included. One exception is if Foo is used in myfile.cc, it's ok to #include (or forward-declare) Foo in myfile.h, instead of myfile.cc. 
Inline Functions link ▶Define functions inline only when they are small, say, 10 lines or less. Definition: You can declare functions in a way that allows the compiler to expand them inline rather than calling them through the usual function call mechanism. Pros: Inlining a function can generate more efficient object code, as long as the inlined function is small. Feel free to inline accessors and mutators, and other short, performance-critical functions. Cons: Overuse of inlining can actually make programs slower. Depending on a function's size, inlining it can cause the code size to increase or decrease. Inlining a very small accessor function will usually decrease code size while inlining a very large function can dramatically increase code size. On modern processors smaller code usually runs faster due to better use of the instruction cache. Decision: A decent rule of thumb is to not inline a function if it is more than 10 lines long. Beware of destructors, which are often longer than they appear because of implicit member- and base-destructor calls! Another useful rule of thumb: it's typically not cost effective to inline functions with loops or switch statements (unless, in the common case, the loop or switch statement is never executed). It is important to know that functions are not always inlined even if they are declared as such; for example, virtual and recursive functions are not normally inlined. Usually recursive functions should not be inline. The main reason for making a virtual function inline is to place its definition in the class, either for convenience or to document its behavior, e.g., for accessors and mutators. The -inl.h Files link ▶You may use file names with a -inl.h suffix to define complex inline functions when needed. The definition of an inline function needs to be in a header file, so that the compiler has the definition available for inlining at the call sites. However, implementation code properly belongs in .cc files, and we do not like to have much actual code in .h files unless there is a readability or performance advantage. If an inline function definition is short, with very little, if any, logic in it, you should put the code in your .h file. For example, accessors and mutators should certainly be inside a class definition. More complex inline functions may also be put in a .h file for the convenience of the implementer and callers, though if this makes the .h file too unwieldy you can instead put that code in a separate -inl.h file. This separates the implementation from the class definition, while still allowing the implementation to be included where necessary. Another use of -inl.h files is for definitions of function templates. This can be used to keep your template definitions easy to read. Do not forget that a -inl.h file requires a #define guard just like any other header file. Function Parameter Ordering link ▶When defining a function, parameter order is: inputs, then outputs. Parameters to C/C++ functions are either input to the function, output from the function, or both. Input parameters are usually values or const references, while output and input/output parameters will be non-const pointers. When ordering function parameters, put all input-only parameters before any output parameters. In particular, do not add new parameters to the end of the function just because they are new; place new input-only parameters before the output parameters. This is not a hard-and-fast rule. 
Parameters that are both input and output (often classes/structs) muddy the waters, and, as always, consistency with related functions may require you to bend the rule. Names and Order of Includes link ▶Use standard order for readability and to avoid hidden dependencies: C library, C++ library, other libraries' .h, your project's .h. All of a project's header files should be listed as descentants of the project's source directory without use of UNIX directory shortcuts . (the current directory) or .. (the parent directory). For example, google-awesome-project/src/base/logging.h should be included as #include "base/logging.h" In dir/foo.cc, whose main purpose is to implement or test the stuff in dir2/foo2.h, order your includes as follows: dir2/foo2.h (preferred location — see details below). C system files. C++ system files. Other libraries' .h files. Your project's .h files. The preferred ordering reduces hidden dependencies. We want every header file to be compilable on its own. The easiest way to achieve this is to make sure that every one of them is the first .h file #included in some .cc. dir/foo.cc and dir2/foo2.h are often in the same directory (e.g. base/basictypes_test.cc and base/basictypes.h), but can be in different directories too. Within each section it is nice to order the includes alphabetically. For example, the includes in google-awesome-project/src/foo/internal/fooserver.cc might look like this: #include "foo/public/fooserver.h" // Preferred location. #include #include #include #include #include "base/basictypes.h" #include "base/commandlineflags.h" #include "foo/public/bar.h" Scoping Namespaces link ▶Unnamed namespaces in .cc files are encouraged. With named namespaces, choose the name based on the project, and possibly its path. Do not use a using-directive. Definition: Namespaces subdivide the global scope into distinct, named scopes, and so are useful for preventing name collisions in the global scope. Pros: Namespaces provide a (hierarchical) axis of naming, in addition to the (also hierarchical) name axis provided by classes. For example, if two different projects have a class Foo in the global scope, these symbols may collide at compile time or at runtime. If each project places their code in a namespace, project1::Foo and project2::Foo are now distinct symbols that do not collide. Cons: Namespaces can be confusing, because they provide an additional (hierarchical) axis of naming, in addition to the (also hierarchical) name axis provided by classes. Use of unnamed spaces in header files can easily cause violations of the C++ One Definition Rule (ODR). Decision: Use namespaces according to the policy described below. Unnamed Namespaces Unnamed namespaces are allowed and even encouraged in .cc files, to avoid runtime naming conflicts: namespace { // This is in a .cc file. // The content of a namespace is not indented enum { kUnused, kEOF, kError }; // Commonly used tokens. bool AtEof() { return pos_ == kEOF; } // Uses our namespace's EOF. } // namespace However, file-scope declarations that are associated with a particular class may be declared in that class as types, static data members or static member functions rather than as members of an unnamed namespace. Terminate the unnamed namespace as shown, with a comment // namespace. Do not use unnamed namespaces in .h files. 
Named Namespaces Named namespaces should be used as follows: Namespaces wrap the entire source file after includes, gflags definitions/declarations, and forward declarations of classes from other namespaces: // In the .h file namespace mynamespace { // All declarations are within the namespace scope. // Notice the lack of indentation. class MyClass { public: ... void Foo(); }; } // namespace mynamespace // In the .cc file namespace mynamespace { // Definition of functions is within scope of the namespace. void MyClass::Foo() { ... } } // namespace mynamespace The typical .cc file might have more complex detail, including the need to reference classes in other namespaces. #include "a.h" DEFINE_bool(someflag, false, "dummy flag"); class C; // Forward declaration of class C in the global namespace. namespace a { class A; } // Forward declaration of a::A. namespace b { ...code for b... // Code goes against the left margin. } // namespace b Do not declare anything in namespace std, not even forward declarations of standard library classes. Declaring entities in namespace std is undefined behavior, i.e., not portable. To declare entities from the standard library, include the appropriate header file. You may not use a using-directive to make all names from a namespace available. // Forbidden -- This pollutes the namespace. using namespace foo; You may use a using-declaration anywhere in a .cc file, and in functions, methods or classes in .h files. // OK in .cc files. // Must be in a function, method or class in .h files. using ::foo::bar; Namespace aliases are allowed anywhere in a .cc file, anywhere inside the named namespace that wraps an entire .h file, and in functions and methods. // Shorten access to some commonly used names in .cc files. namespace fbz = ::foo::bar::baz; // Shorten access to some commonly used names (in a .h file). namespace librarian { // The following alias is available to all files including // this header (in namespace librarian): // alias names should therefore be chosen consistently // within a project. namespace pd_s = ::pipeline_diagnostics::sidetable; inline void my_inline_function() { // namespace alias local to a function (or method). namespace fbz = ::foo::bar::baz; ... } } // namespace librarian Note that an alias in a .h file is visible to everyone #including that file, so public headers (those available outside a project) and headers transitively #included by them, should avoid defining aliases, as part of the general goal of keeping public APIs as small as possible. Nested Classes link ▶Although you may use public nested classes when they are part of an interface, consider a namespace to keep declarations out of the global scope. Definition: A class can define another class within it; this is also called a member class. class Foo { private: // Bar is a member class, nested within Foo. class Bar { ... }; }; Pros: This is useful when the nested (or member) class is only used by the enclosing class; making it a member puts it in the enclosing class scope rather than polluting the outer scope with the class name. Nested classes can be forward declared within the enclosing class and then defined in the .cc file to avoid including the nested class definition in the enclosing class declaration, since the nested class definition is usually only relevant to the implementation. Cons: Nested classes can be forward-declared only within the definition of the enclosing class. 
Thus, any header file manipulating a Foo::Bar* pointer will have to include the full class declaration for Foo. Decision: Do not make nested classes public unless they are actually part of the interface, e.g., a class that holds a set of options for some method. Nonmember, Static Member, and Global Functions link ▶Prefer nonmember functions within a namespace or static member functions to global functions; use completely global functions rarely. Pros: Nonmember and static member functions can be useful in some situations. Putting nonmember functions in a namespace avoids polluting the global namespace. Cons: Nonmember and static member functions may make more sense as members of a new class, especially if they access external resources or have significant dependencies. Decision: Sometimes it is useful, or even necessary, to define a function not bound to a class instance. Such a function can be either a static member or a nonmember function. Nonmember functions should not depend on external variables, and should nearly always exist in a namespace. Rather than creating classes only to group static member functions which do not share static data, use namespaces instead. Functions defined in the same compilation unit as production classes may introduce unnecessary coupling and link-time dependencies when directly called from other compilation units; static member functions are particularly susceptible to this. Consider extracting a new class, or placing the functions in a namespace possibly in a separate library. If you must define a nonmember function and it is only needed in its .cc file, use an unnamed namespace or static linkage (eg static int Foo() {...}) to limit its scope. Local Variables link ▶Place a function's variables in the narrowest scope possible, and initialize variables in the declaration. C++ allows you to declare variables anywhere in a function. We encourage you to declare them in as local a scope as possible, and as close to the first use as possible. This makes it easier for the reader to find the declaration and see what type the variable is and what it was initialized to. In particular, initialization should be used instead of declaration and assignment, e.g. int i; i = f(); // Bad -- initialization separate from declaration. int j = g(); // Good -- declaration has initialization. Note that gcc implements for (int i = 0; i < 10; ++i) correctly (the scope of i is only the scope of the for loop), so you can then reuse i in another for loop in the same scope. It also correctly scopes declarations in if and while statements, e.g. while (const char* p = strchr(str, '/')) str = p + 1; There is one caveat: if the variable is an object, its constructor is invoked every time it enters scope and is created, and its destructor is invoked every time it goes out of scope. // Inefficient implementation: for (int i = 0; i < 1000000; ++i) { Foo f; // My ctor and dtor get called 1000000 times each. f.DoSomething(i); } It may be more efficient to declare such a variable used in a loop outside that loop: Foo f; // My ctor and dtor get called once each. for (int i = 0; i < 1000000; ++i) { f.DoSomething(i); } Static and Global Variables link ▶Static or global variables of class type are forbidden: they cause hard-to-find bugs due to indeterminate order of construction and destruction. 
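Looking back at nonmember functions for a moment, here is a minimal sketch of grouping free helpers in a namespace rather than in a class that exists only to hold static methods (string_util and StartsWith are hypothetical names); the static and global variable rule itself is detailed next.

#include <string>

// Nonmember helpers grouped in a namespace instead of a static-method-only class.
namespace string_util {

bool StartsWith(const std::string& s, const std::string& prefix) {
  return s.compare(0, prefix.size(), prefix) == 0;
}

}  // namespace string_util

// Usage: string_util::StartsWith(filename, "/tmp/");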
Objects with static storage duration, including global variables, static variables, static class member variables, and function static variables, must be Plain Old Data (POD): only ints, chars, floats, or pointers, or arrays/structs of POD. The order in which class constructors and initializers for static variables are called is only partially specified in C++ and can even change from build to build, which can cause bugs that are difficult to find. Therefore in addition to banning globals of class type, we do not allow static POD variables to be initialized with the result of a function, unless that function (such as getenv(), or getpid()) does not itself depend on any other globals. Likewise, the order in which destructors are called is defined to be the reverse of the order in which the constructors were called. Since constructor order is indeterminate, so is destructor order. For example, at program-end time a static variable might have been destroyed, but code still running -- perhaps in another thread -- tries to access it and fails. Or the destructor for a static 'string' variable might be run prior to the destructor for another variable that contains a reference to that string. As a result we only allow static variables to contain POD data. This rule completely disallows vector (use C arrays instead), or string (use const char []). If you need a static or global variable of a class type, consider initializing a pointer (which will never be freed), from either your main() function or from pthread_once(). Note that this must be a raw pointer, not a "smart" pointer, since the smart pointer's destructor will have the order-of-destructor issue that we are trying to avoid. Classes Classes are the fundamental unit of code in C++. Naturally, we use them extensively. This section lists the main dos and don'ts you should follow when writing a class. Doing Work in Constructors link ▶In general, constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. Definition: It is possible to perform initialization in the body of the constructor. Pros: Convenience in typing. No need to worry about whether the class has been initialized or not. Cons: The problems with doing work in constructors are: There is no easy way for constructors to signal errors, short of using exceptions (which are forbidden). If the work fails, we now have an object whose initialization code failed, so it may be an indeterminate state. If the work calls virtual functions, these calls will not get dispatched to the subclass implementations. Future modification to your class can quietly introduce this problem even if your class is not currently subclassed, causing much confusion. If someone creates a global variable of this type (which is against the rules, but still), the constructor code will be called before main(), possibly breaking some implicit assumptions in the constructor code. For instance, gflags will not yet have been initialized. Decision: If your object requires non-trivial initialization, consider having an explicit Init() method. In particular, constructors should not call virtual functions, attempt to raise errors, access potentially uninitialized global variables, etc. Default Constructors link ▶You must define a default constructor if your class defines member variables and has no other constructors. Otherwise the compiler will do it for you, badly. 
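A minimal sketch tying the last two rules together, a trivial default constructor plus an explicit Init() for the work that can fail (Connection and its members are hypothetical names):

#include <string>

class Connection {
 public:
  // Trivial default constructor: just put members into a safe, known state.
  Connection() : fd_(-1), connected_(false) {}

  // Non-trivial work happens in an explicit Init() that can report failure.
  bool Init(const std::string& address) {
    address_ = address;
    // ... open the socket, etc.; return false on failure ...
    connected_ = true;
    return connected_;
  }

 private:
  std::string address_;
  int fd_;
  bool connected_;
};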
Definition: The default constructor is called when we new a class object with no arguments. It is always called when calling new[] (for arrays). Pros: Initializing structures by default, to hold "impossible" values, makes debugging much easier. Cons: Extra work for you, the code writer. Decision: If your class defines member variables and has no other constructors you must define a default constructor (one that takes no arguments). It should preferably initialize the object in such a way that its internal state is consistent and valid. The reason for this is that if you have no other constructors and do not define a default constructor, the compiler will generate one for you. This compiler generated constructor may not initialize your object sensibly. If your class inherits from an existing class but you add no new member variables, you are not required to have a default constructor. Explicit Constructors link ▶Use the C++ keyword explicit for constructors with one argument. Definition: Normally, if a constructor takes one argument, it can be used as a conversion. For instance, if you define Foo::Foo(string name) and then pass a string to a function that expects a Foo, the constructor will be called to convert the string into a Foo and will pass the Foo to your function for you. This can be convenient but is also a source of trouble when things get converted and new objects created without you meaning them to. Declaring a constructor explicit prevents it from being invoked implicitly as a conversion. Pros: Avoids undesirable conversions. Cons: None. Decision: We require all single argument constructors to be explicit. Always put explicit in front of one-argument constructors in the class definition: explicit Foo(string name); The exception is copy constructors, which, in the rare cases when we allow them, should probably not be explicit. Classes that are intended to be transparent wrappers around other classes are also exceptions. Such exceptions should be clearly marked with comments. Copy Constructors link ▶Provide a copy constructor and assignment operator only when necessary. Otherwise, disable them with DISALLOW_COPY_AND_ASSIGN. Definition: The copy constructor and assignment operator are used to create copies of objects. The copy constructor is implicitly invoked by the compiler in some situations, e.g. passing objects by value. Pros: Copy constructors make it easy to copy objects. STL containers require that all contents be copyable and assignable. Copy constructors can be more efficient than CopyFrom()-style workarounds because they combine construction with copying, the compiler can elide them in some contexts, and they make it easier to avoid heap allocation. Cons: Implicit copying of objects in C++ is a rich source of bugs and of performance problems. It also reduces readability, as it becomes hard to track which objects are being passed around by value as opposed to by reference, and therefore where changes to an object are reflected. Decision: Few classes need to be copyable. Most should have neither a copy constructor nor an assignment operator. In many situations, a pointer or reference will work just as well as a copied value, with better performance. For example, you can pass function parameters by reference or pointer instead of by value, and you can store pointers rather than objects in an STL container. 
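Stepping back to the explicit rule for a moment, here is a small sketch of the implicit conversion it prevents (File, PrintSize, and Demo are hypothetical names); the copy-constructor discussion continues below.

#include <string>

class File {
 public:
  // "explicit" prevents a lone std::string from silently converting to File.
  explicit File(const std::string& path) : path_(path) {}
  const std::string& path() const { return path_; }

 private:
  std::string path_;
};

void PrintSize(const File& file) { /* ... */ }

void Demo() {
  std::string name = "data.txt";
  // PrintSize(name);      // Error with an explicit constructor -- good.
  PrintSize(File(name));   // The conversion now has to be spelled out.
}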
If your class needs to be copyable, prefer providing a copy method, such as CopyFrom() or Clone(), rather than a copy constructor, because such methods cannot be invoked implicitly. If a copy method is insufficient in your situation (e.g. for performance reasons, or because your class needs to be stored by value in an STL container), provide both a copy constructor and assignment operator. If your class does not need a copy constructor or assignment operator, you must explicitly disable them. To do so, add dummy declarations for the copy constructor and assignment operator in the private: section of your class, but do not provide any corresponding definition (so that any attempt to use them results in a link error). For convenience, a DISALLOW_COPY_AND_ASSIGN macro can be used: // A macro to disallow the copy constructor and operator= functions // This should be used in the private: declarations for a class #define DISALLOW_COPY_AND_ASSIGN(TypeName) \ TypeName(const TypeName&); \ void operator=(const TypeName&) Then, in class Foo: class Foo { public: Foo(int f); ~Foo(); private: DISALLOW_COPY_AND_ASSIGN(Foo); }; Structs vs. Classes link ▶Use a struct only for passive objects that carry data; everything else is a class. The struct and class keywords behave almost identically in C++. We add our own semantic meanings to each keyword, so you should use the appropriate keyword for the data-type you're defining. structs should be used for passive objects that carry data, and may have associated constants, but lack any functionality other than access/setting the data members. The accessing/setting of fields is done by directly accessing the fields rather than through method invocations. Methods should not provide behavior but should only be used to set up the data members, e.g., constructor, destructor, Initialize(), Reset(), Validate(). If more functionality is required, a class is more appropriate. If in doubt, make it a class. For consistency with STL, you can use struct instead of class for functors and traits. Note that member variables in structs and classes have different naming rules. Inheritance link ▶Composition is often more appropriate than inheritance. When using inheritance, make it public. Definition: When a sub-class inherits from a base class, it includes the definitions of all the data and operations that the parent base class defines. In practice, inheritance is used in two major ways in C++: implementation inheritance, in which actual code is inherited by the child, and interface inheritance, in which only method names are inherited. Pros: Implementation inheritance reduces code size by re-using the base class code as it specializes an existing type. Because inheritance is a compile-time declaration, you and the compiler can understand the operation and detect errors. Interface inheritance can be used to programmatically enforce that a class expose a particular API. Again, the compiler can detect errors, in this case, when a class does not define a necessary method of the API. Cons: For implementation inheritance, because the code implementing a sub-class is spread between the base and the sub-class, it can be more difficult to understand an implementation. The sub-class cannot override functions that are not virtual, so the sub-class cannot change implementation. The base class may also define some data members, so that specifies physical layout of the base class. Decision: All inheritance should be public. 
If you want to do private inheritance, you should be including an instance of the base class as a member instead. Do not overuse implementation inheritance. Composition is often more appropriate. Try to restrict use of inheritance to the "is-a" case: Bar subclasses Foo if it can reasonably be said that Bar "is a kind of" Foo. Make your destructor virtual if necessary. If your class has virtual methods, its destructor should be virtual. Limit the use of protected to those member functions that might need to be accessed from subclasses. Note that data members should be private. When redefining an inherited virtual function, explicitly declare it virtual in the declaration of the derived class. Rationale: If virtual is omitted, the reader has to check all ancestors of the class in question to determine if the function is virtual or not. Multiple Inheritance link ▶Only very rarely is multiple implementation inheritance actually useful. We allow multiple inheritance only when at most one of the base classes has an implementation; all other base classes must be pure interface classes tagged with the Interface suffix. Definition: Multiple inheritance allows a sub-class to have more than one base class. We distinguish between base classes that are pure interfaces and those that have an implementation. Pros: Multiple implementation inheritance may let you re-use even more code than single inheritance (see Inheritance). Cons: Only very rarely is multiple implementation inheritance actually useful. When multiple implementation inheritance seems like the solution, you can usually find a different, more explicit, and cleaner solution. Decision: Multiple inheritance is allowed only when all superclasses, with the possible exception of the first one, are pure interfaces. In order to ensure that they remain pure interfaces, they must end with the Interface suffix. Note: There is an exception to this rule on Windows. Interfaces link ▶Classes that satisfy certain conditions are allowed, but not required, to end with an Interface suffix. Definition: A class is a pure interface if it meets the following requirements: It has only public pure virtual ("= 0") methods and static methods (but see below for destructor). It may not have non-static data members. It need not have any constructors defined. If a constructor is provided, it must take no arguments and it must be protected. If it is a subclass, it may only be derived from classes that satisfy these conditions and are tagged with the Interface suffix. An interface class can never be directly instantiated because of the pure virtual method(s) it declares. To make sure all implementations of the interface can be destroyed correctly, they must also declare a virtual destructor (in an exception to the first rule, this should not be pure). See Stroustrup, The C++ Programming Language, 3rd edition, section 12.4 for details. Pros: Tagging a class with the Interface suffix lets others know that they must not add implemented methods or non static data members. This is particularly important in the case of multiple inheritance. Additionally, the interface concept is already well-understood by Java programmers. Cons: The Interface suffix lengthens the class name, which can make it harder to read and understand. Also, the interface property may be considered an implementation detail that shouldn't be exposed to clients. Decision: A class may end with Interface only if it meets the above requirements. 
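For instance, a minimal sketch of a pure interface and one implementation, with the overridden methods re-declared virtual as recommended above (StorageInterface and InMemoryStorage are hypothetical names):

// Only public pure virtual methods, no non-static data members, plus a
// (non-pure) virtual destructor so implementations can be destroyed correctly.
class StorageInterface {
 public:
  virtual ~StorageInterface() {}
  virtual bool Put(const char* key, const char* value) = 0;
  virtual bool Get(const char* key, char* value, int value_size) = 0;
};

class InMemoryStorage : public StorageInterface {
 public:
  virtual ~InMemoryStorage() {}
  // Stub bodies for the sketch; the point is the re-declared virtuals.
  virtual bool Put(const char* key, const char* value) { return true; }
  virtual bool Get(const char* key, char* value, int value_size) { return false; }
};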
We do not require the converse, however: classes that meet the above requirements are not required to end with Interface. Operator Overloading link ▶Do not overload operators except in rare, special circumstances. Definition: A class can define that operators such as + and / operate on the class as if it were a built-in type. Pros: Can make code appear more intuitive because a class will behave in the same way as built-in types (such as int). Overloaded operators are more playful names for functions that are less-colorfully named, such as Equals() or Add(). For some template functions to work correctly, you may need to define operators. Cons: While operator overloading can make code more intuitive, it has several drawbacks: It can fool our intuition into thinking that expensive operations are cheap, built-in operations. It is much harder to find the call sites for overloaded operators. Searching for Equals() is much easier than searching for relevant invocations of ==. Some operators work on pointers too, making it easy to introduce bugs. Foo + 4 may do one thing, while &Foo + 4 does something totally different. The compiler does not complain for either of these, making this very hard to debug. Overloading also has surprising ramifications. For instance, if a class overloads unary operator&, it cannot safely be forward-declared. Decision: In general, do not overload operators. The assignment operator (operator=), in particular, is insidious and should be avoided. You can define functions like Equals() and CopyFrom() if you need them. Likewise, avoid the dangerous unary operator& at all costs, if there's any possibility the class might be forward-declared. However, there may be rare cases where you need to overload an operator to interoperate with templates or "standard" C++ classes (such as operator<<(ostream&, const T&) for logging). These are acceptable if fully justified, but you should try to avoid these whenever possible. In particular, do not overload operator== or operator< just so that your class can be used as a key in an STL container; instead, you should create equality and comparison functor types when declaring the container. Some of the STL algorithms do require you to overload operator==, and you may do so in these cases, provided you document why. See also Copy Constructors and Function Overloading. Access Control link ▶Make data members private, and provide access to them through accessor functions as needed (for technical reasons, we allow data members of a test fixture class to be protected when using Google Test). Typically a variable would be called foo_ and the accessor function foo(). You may also want a mutator function set_foo(). Exception: static const data members (typically called kFoo) need not be private. The definitions of accessors are usually inlined in the header file. See also Inheritance and Function Names. Declaration Order link ▶Use the specified order of declarations within a class: public: before private:, methods before data members (variables), etc. Your class definition should start with its public: section, followed by its protected: section and then its private: section. If any of these sections are empty, omit them. 
Within each section, the declarations generally should be in the following order: Typedefs and Enums Constants (static const data members) Constructors Destructor Methods, including static methods Data Members (except static const data members) Friend declarations should always be in the private section, and the DISALLOW_COPY_AND_ASSIGN macro invocation should be at the end of the private: section. It should be the last thing in the class. See Copy Constructors. Method definitions in the corresponding .cc file should be the same as the declaration order, as much as possible. Do not put large method definitions inline in the class definition. Usually, only trivial or performance-critical, and very short, methods may be defined inline. See Inline Functions for more details. Write Short Functions link ▶Prefer small and focused functions. We recognize that long functions are sometimes appropriate, so no hard limit is placed on functions length. If a function exceeds about 40 lines, think about whether it can be broken up without harming the structure of the program. Even if your long function works perfectly now, someone modifying it in a few months may add new behavior. This could result in bugs that are hard to find. Keeping your functions short and simple makes it easier for other people to read and modify your code. You could find long and complicated functions when working with some code. Do not be intimidated by modifying existing code: if working with such a function proves to be difficult, you find that errors are hard to debug, or you want to use a piece of it in several different contexts, consider breaking up the function into smaller and more manageable pieces. Google-Specific Magic There are various tricks and utilities that we use to make C++ code more robust, and various ways we use C++ that may differ from what you see elsewhere. Smart Pointers link ▶If you actually need pointer semantics, scoped_ptr is great. You should only use std::tr1::shared_ptr under very specific conditions, such as when objects need to be held by STL containers. You should never use auto_ptr. "Smart" pointers are objects that act like pointers but have added semantics. When a scoped_ptr is destroyed, for instance, it deletes the object it's pointing to. shared_ptr is the same way, but implements reference-counting so only the last pointer to an object deletes it. Generally speaking, we prefer that we design code with clear object ownership. The clearest object ownership is obtained by using an object directly as a field or local variable, without using pointers at all. On the other extreme, by their very definition, reference counted pointers are owned by nobody. The problem with this design is that it is easy to create circular references or other strange conditions that cause an object to never be deleted. It is also slow to perform atomic operations every time a value is copied or assigned. Although they are not recommended, reference counted pointers are sometimes the simplest and most elegant way to solve a problem. cpplint link ▶Use cpplint.py to detect style errors. cpplint.py is a tool that reads a source file and identifies many style errors. It is not perfect, and has both false positives and false negatives, but it is still a valuable tool. False positives can be ignored by putting // NOLINT at the end of the line. Some projects have instructions on how to run cpplint.py from their project tools. If the project you are contributing to does not, you can download cpplint.py separately. 
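Returning to Declaration Order and Access Control, here is a minimal sketch of a class laid out in the prescribed order, with private data members behind foo()/set_foo() style accessors (Table and its members are hypothetical names):

#include <string>
#include <vector>

class Table {
 public:
  // Typedefs and enums.
  typedef std::vector<std::string> Row;
  enum Format { kCsv, kTsv };

  // Constants.
  static const int kMaxRows = 1000;

  // Constructors and destructor.
  Table() : format_(kCsv) {}
  ~Table() {}

  // Methods, including short inline accessors and mutators.
  Format format() const { return format_; }
  void set_format(Format format) { format_ = format; }
  void AddRow(const Row& row) { rows_.push_back(row); }

 private:
  // Data members.
  Format format_;
  std::vector<Row> rows_;

  // Copying disabled; this would normally be DISALLOW_COPY_AND_ASSIGN(Table),
  // placed at the end of the private: section.
  Table(const Table&);
  void operator=(const Table&);
};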
Other C++ Features Reference Arguments link ▶All parameters passed by reference must be labeled const. Definition: In C, if a function needs to modify a variable, the parameter must use a pointer, eg int foo(int *pval). In C++, the function can alternatively declare a reference parameter: int foo(int &val). Pros: Defining a parameter as reference avoids ugly code like (*pval)++. Necessary for some applications like copy constructors. Makes it clear, unlike with pointers, that NULL is not a possible value. Cons: References can be confusing, as they have value syntax but pointer semantics. Decision: Within function parameter lists all references must be const: void Foo(const string &in, string *out); In fact it is a very strong convention in Google code that input arguments are values or const references while output arguments are pointers. Input parameters may be const pointers, but we never allow non-const reference parameters. One case when you might want an input parameter to be a const pointer is if you want to emphasize that the argument is not copied, so it must exist for the lifetime of the object; it is usually best to document this in comments as well. STL adapters such as bind2nd and mem_fun do not permit reference parameters, so you must declare functions with pointer parameters in these cases, too. Function Overloading link ▶Use overloaded functions (including constructors) only if a reader looking at a call site can get a good idea of what is happening without having to first figure out exactly which overload is being called. Definition: You may write a function that takes a const string& and overload it with another that takes const char*. class MyClass { public: void Analyze(const string &text); void Analyze(const char *text, size_t textlen); }; Pros: Overloading can make code more intuitive by allowing an identically-named function to take different arguments. It may be necessary for templatized code, and it can be convenient for Visitors. Cons: If a function is overloaded by the argument types alone, a reader may have to understand C++'s complex matching rules in order to tell what's going on. Also many people are confused by the semantics of inheritance if a derived class overrides only some of the variants of a function. Decision: If you want to overload a function, consider qualifying the name with some information about the arguments, e.g., AppendString(), AppendInt() rather than just Append(). Default Arguments link ▶We do not allow default function parameters, except in a few uncommon situations explained below. Pros: Often you have a function that uses lots of default values, but occasionally you want to override the defaults. Default parameters allow an easy way to do this without having to define many functions for the rare exceptions. Cons: People often figure out how to use an API by looking at existing code that uses it. Default parameters are more difficult to maintain because copy-and-paste from previous code may not reveal all the parameters. Copy-and-pasting of code segments can cause major problems when the default arguments are not appropriate for the new code. Decision: Except as described below, we require all arguments to be explicitly specified, to force programmers to consider the API and the values they are passing for each argument rather than silently accepting defaults they may not be aware of. One specific exception is when default arguments are used to simulate variable-length argument lists. 
// Support up to 4 params by using a default empty AlphaNum. string StrCat(const AlphaNum &a, const AlphaNum &b = gEmptyAlphaNum, const AlphaNum &c = gEmptyAlphaNum, const AlphaNum &d = gEmptyAlphaNum); Variable-Length Arrays and alloca() link ▶We do not allow variable-length arrays or alloca(). Pros: Variable-length arrays have natural-looking syntax. Both variable-length arrays and alloca() are very efficient. Cons: Variable-length arrays and alloca are not part of Standard C++. More importantly, they allocate a data-dependent amount of stack space that can trigger difficult-to-find memory overwriting bugs: "It ran fine on my machine, but dies mysteriously in production". Decision: Use a safe allocator instead, such as scoped_ptr/scoped_array. Friends link ▶We allow use of friend classes and functions, within reason. Friends should usually be defined in the same file so that the reader does not have to look in another file to find uses of the private members of a class. A common use of friend is to have a FooBuilder class be a friend of Foo so that it can construct the inner state of Foo correctly, without exposing this state to the world. In some cases it may be useful to make a unittest class a friend of the class it tests. Friends extend, but do not break, the encapsulation boundary of a class. In some cases this is better than making a member public when you want to give only one other class access to it. However, most classes should interact with other classes solely through their public members. Exceptions link ▶We do not use C++ exceptions. Pros: Exceptions allow higher levels of an application to decide how to handle "can't happen" failures in deeply nested functions, without the obscuring and error-prone bookkeeping of error codes. Exceptions are used by most other modern languages. Using them in C++ would make it more consistent with Python, Java, and the C++ that others are familiar with. Some third-party C++ libraries use exceptions, and turning them off internally makes it harder to integrate with those libraries. Exceptions are the only way for a constructor to fail. We can simulate this with a factory function or an Init() method, but these require heap allocation or a new "invalid" state, respectively. Exceptions are really handy in testing frameworks. Cons: When you add a throw statement to an existing function, you must examine all of its transitive callers. Either they must make at least the basic exception safety guarantee, or they must never catch the exception and be happy with the program terminating as a result. For instance, if f() calls g() calls h(), and h throws an exception that f catches, g has to be careful or it may not clean up properly. More generally, exceptions make the control flow of programs difficult to evaluate by looking at code: functions may return in places you don't expect. This causes maintainability and debugging difficulties. You can minimize this cost via some rules on how and where exceptions can be used, but at the cost of more that a developer needs to know and understand. Exception safety requires both RAII and different coding practices. Lots of supporting machinery is needed to make writing correct exception-safe code easy. Further, to avoid requiring readers to understand the entire call graph, exception-safe code must isolate logic that writes to persistent state into a "commit" phase. This will have both benefits and costs (perhaps where you're forced to obfuscate code to isolate the commit). 
Allowing exceptions would force us to always pay those costs even when they're not worth it. Turning on exceptions adds data to each binary produced, increasing compile time (probably slightly) and possibly increasing address space pressure. The availability of exceptions may encourage developers to throw them when they are not appropriate or recover from them when it's not safe to do so. For example, invalid user input should not cause exceptions to be thrown. We would need to make the style guide even longer to document these restrictions! Decision: On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code, the introduction of exceptions has implications on all dependent code. If exceptions can be propagated beyond a new project, it also becomes problematic to integrate the new project into existing exception-free code. Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions. Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project. The conversion process would be slow and error-prone. We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden. Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch. There is an exception to this rule (no pun intended) for Windows code. Run-Time Type Information (RTTI) link ▶We do not use Run Time Type Information (RTTI). Definition: RTTI allows a programmer to query the C++ class of an object at run time. Pros: It is useful in some unittests. For example, it is useful in tests of factory classes where the test has to verify that a newly created object has the expected dynamic type. In rare circumstances, it is useful even outside of tests. Cons: A query of type during run-time typically means a design problem. If you need to know the type of an object at runtime, that is often an indication that you should reconsider the design of your class. Decision: Do not use RTTI, except in unittests. If you find yourself in need of writing code that behaves differently based on the class of an object, consider one of the alternatives to querying the type. Virtual methods are the preferred way of executing different code paths depending on a specific subclass type. This puts the work within the object itself. If the work belongs outside the object and instead in some processing code, consider a double-dispatch solution, such as the Visitor design pattern. This allows a facility outside the object itself to determine the type of class using the built-in type system. If you think you truly cannot use those ideas, you may use RTTI. But think twice about it. :-) Then think twice again. Do not hand-implement an RTTI-like workaround. The arguments against RTTI apply just as much to workarounds like class hierarchies with type tags. Casting link ▶Use C++ casts like static_cast(). Do not use other cast formats like int y = (int)x; or int y = int(x);. 
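Before looking at casts in detail, here is a small sketch of the virtual-dispatch alternative to RTTI described above (Message, ErrorMessage, InfoMessage, and Handle are hypothetical names):

#include <cstdio>

// Instead of querying the dynamic type with dynamic_cast or typeid, put the
// type-specific behavior behind a virtual method.
class Message {
 public:
  virtual ~Message() {}
  virtual void Log() const = 0;
};

class ErrorMessage : public Message {
 public:
  virtual void Log() const { std::printf("error\n"); }
};

class InfoMessage : public Message {
 public:
  virtual void Log() const { std::printf("info\n"); }
};

void Handle(const Message& m) {
  m.Log();  // No RTTI needed; the object itself knows what to do.
}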
Definition: C++ introduced a different cast system from C that distinguishes the types of cast operations. Pros: The problem with C casts is the ambiguity of the operation; sometimes you are doing a conversion (e.g., (int)3.5) and sometimes you are doing a cast (e.g., (int)"hello"); C++ casts avoid this. Additionally C++ casts are more visible when searching for them. Cons: The syntax is nasty. Decision: Do not use C-style casts. Instead, use these C++-style casts. Use static_cast as the equivalent of a C-style cast that does value conversion, or when you need to explicitly up-cast a pointer from a class to its superclass. Use const_cast to remove the const qualifier (see const). Use reinterpret_cast to do unsafe conversions of pointer types to and from integer and other pointer types. Use this only if you know what you are doing and you understand the aliasing issues. Do not use dynamic_cast except in test code. If you need to know type information at runtime in this way outside of a unittest, you probably have a design flaw. Streams link ▶Use streams only for logging. Definition: Streams are a replacement for printf() and scanf(). Pros: With streams, you do not need to know the type of the object you are printing. You do not have problems with format strings not matching the argument list. (Though with gcc, you do not have that problem with printf either.) Streams have automatic constructors and destructors that open and close the relevant files. Cons: Streams make it difficult to do functionality like pread(). Some formatting (particularly the common format string idiom %.*s) is difficult if not impossible to do efficiently using streams without using printf-like hacks. Streams do not support operator reordering (the %1s directive), which is helpful for internationalization. Decision: Do not use streams, except where required by a logging interface. Use printf-like routines instead. There are various pros and cons to using streams, but in this case, as in many other cases, consistency trumps the debate. Do not use streams in your code. Extended Discussion There has been debate on this issue, so this explains the reasoning in greater depth. Recall the Only One Way guiding principle: we want to make sure that whenever we do a certain type of I/O, the code looks the same in all those places. Because of this, we do not want to allow users to decide between using streams or using printf plus Read/Write/etc. Instead, we should settle on one or the other. We made an exception for logging because it is a pretty specialized application, and for historical reasons. Proponents of streams have argued that streams are the obvious choice of the two, but the issue is not actually so clear. For every advantage of streams they point out, there is an equivalent disadvantage. The biggest advantage is that you do not need to know the type of the object to be printing. This is a fair point. But, there is a downside: you can easily use the wrong type, and the compiler will not warn you. It is easy to make this kind of mistake without knowing when using streams. cout << this; // Prints the address cout << *this; // Prints the contents The compiler does not generate an error because << has been overloaded. We discourage overloading for just this reason. Some say printf formatting is ugly and hard to read, but streams are often no better. Consider the following two fragments, both with the same typo. Which is easier to discover? 
cerr << "Error connecting to '" hostname.first << ":" hostname.second << ": " hostname.first, foo->bar()->hostname.second, strerror(errno)); And so on and so forth for any issue you might bring up. (You could argue, "Things would be better with the right wrappers," but if it is true for one scheme, is it not also true for the other? Also, remember the goal is to make the language smaller, not add yet more machinery that someone has to learn.) Either path would yield different advantages and disadvantages, and there is not a clearly superior solution. The simplicity doctrine mandates we settle on one of them though, and the majority decision was on printf + read/write. Preincrement and Predecrement link ▶Use prefix form (++i) of the increment and decrement operators with iterators and other template objects. Definition: When a variable is incremented (++i or i++) or decremented (--i or i--) and the value of the expression is not used, one must decide whether to preincrement (decrement) or postincrement (decrement). Pros: When the return value is ignored, the "pre" form (++i) is never less efficient than the "post" form (i++), and is often more efficient. This is because post-increment (or decrement) requires a copy of i to be made, which is the value of the expression. If i is an iterator or other non-scalar type, copying i could be expensive. Since the two types of increment behave the same when the value is ignored, why not just always pre-increment? Cons: The tradition developed, in C, of using post-increment when the expression value is not used, especially in for loops. Some find post-increment easier to read, since the "subject" (i) precedes the "verb" (++), just like in English. Decision: For simple scalar (non-object) values there is no reason to prefer one form and we allow either. For iterators and other template types, use pre-increment. Use of const link ▶We strongly recommend that you use const whenever it makes sense to do so. Definition: Declared variables and parameters can be preceded by the keyword const to indicate the variables are not changed (e.g., const int foo). Class functions can have the const qualifier to indicate the function does not change the state of the class member variables (e.g., class Foo { int Bar(char c) const; };). Pros: Easier for people to understand how variables are being used. Allows the compiler to do better type checking, and, conceivably, generate better code. Helps people convince themselves of program correctness because they know the functions they call are limited in how they can modify your variables. Helps people know what functions are safe to use without locks in multi-threaded programs. Cons: const is viral: if you pass a const variable to a function, that function must have const in its prototype (or the variable will need a const_cast). This can be a particular problem when calling library functions. Decision: const variables, data members, methods and arguments add a level of compile-time type checking; it is better to detect errors as soon as possible. Therefore we strongly recommend that you use const whenever it makes sense to do so: If a function does not modify an argument passed by reference or by pointer, that argument should be const. Declare methods to be const whenever possible. Accessors should almost always be const. Other methods should be const if they do not modify any data members, do not call any non-const methods, and do not return a non-const pointer or non-const reference to a data member. 
Consider making data members const whenever they do not need to be modified after construction. However, do not go crazy with const. Something like const int * const * const x; is likely overkill, even if it accurately describes how const x is. Focus on what's really useful to know: in this case, const int** x is probably sufficient. The mutable keyword is allowed but is unsafe when used with threads, so thread safety should be carefully considered first.

Where to put the const: Some people favor the form int const *foo to const int* foo. They argue that this is more readable because it's more consistent: it keeps the rule that const always follows the object it's describing. However, this consistency argument doesn't apply in this case, because the "don't go crazy" dictum eliminates most of the uses you'd have to be consistent with. Putting the const first is arguably more readable, since it follows English in putting the "adjective" (const) before the "noun" (int). That said, while we encourage putting const first, we do not require it. But be consistent with the code around you!

Integer Types: Of the built-in C++ integer types, the only one used is int. If a program needs a variable of a different size, use a precise-width integer type from <stdint.h>, such as int16_t. Definition: C++ does not specify the sizes of its integer types. Typically people assume that short is 16 bits, int is 32 bits, long is 32 bits and long long is 64 bits. Pros: Uniformity of declaration. Cons: The sizes of integral types in C++ can vary based on compiler and architecture. Decision: <stdint.h> defines types like int16_t, uint32_t, int64_t, etc. You should always use those in preference to short, unsigned long long and the like, when you need a guarantee on the size of an integer. Of the C integer types, only int should be used. When appropriate, you are welcome to use standard types like size_t and ptrdiff_t. We use int very often, for integers we know are not going to be too big, e.g., loop counters. Use plain old int for such things. You should assume that an int is at least 32 bits, but don't assume that it has more than 32 bits. If you need a 64-bit integer type, use int64_t or uint64_t. For integers we know can be "big", use int64_t. You should not use the unsigned integer types such as uint32_t, unless the quantity you are representing is really a bit pattern rather than a number, or unless you need defined twos-complement overflow. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this.

On Unsigned Integers: Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation. However, in C, the advantages of such documentation are outweighed by the real bugs it can introduce. Consider:

for (unsigned int i = foo.Length()-1; i >= 0; --i) ...

This code will never terminate! Sometimes gcc will notice this bug and warn you, but often it will not. Equally bad bugs can occur when comparing signed and unsigned variables. Basically, C's type-promotion scheme causes unsigned types to behave differently than one might expect. So, document that a variable is non-negative using assertions. Don't use an unsigned type.

64-bit Portability: Code should be 64-bit and 32-bit friendly. Bear in mind problems of printing, comparisons, and structure alignment. printf() specifiers for some types are not cleanly portable between 32-bit and 64-bit systems.
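Before continuing with the printf specifiers, a short sketch restating the integer-type and unsigned-loop advice above (SumBackwards and the variable names are hypothetical):

#include <assert.h>
#include <stdint.h>

int64_t total_bytes_processed = 0;  // A quantity we know can get "big".
int16_t sample_buffer[16];          // Exactly 16 bits per element is required.

// Instead of the non-terminating unsigned loop shown above, use a signed
// index and assert non-negativity where it matters.
int SumBackwards(const int* values, int length) {
  assert(length >= 0);
  int sum = 0;
  for (int i = length - 1; i >= 0; --i) {
    sum += values[i];
  }
  return sum;
}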
C99 defines some portable format specifiers. Unfortunately, MSVC 7.1 does not understand some of these specifiers and the standard is missing a few, so we have to define our own ugly versions in some cases (in the style of the standard include file inttypes.h):

// printf macros for size_t, in the style of inttypes.h
#ifdef _LP64
#define __PRIS_PREFIX "z"
#else
#define __PRIS_PREFIX
#endif

// Use these macros after a % in a printf format string
// to get correct 32/64 bit behavior, like this:
//   size_t size = records.size();
//   printf("%"PRIuS"\n", size);
#define PRIdS __PRIS_PREFIX "d"
#define PRIxS __PRIS_PREFIX "x"
#define PRIuS __PRIS_PREFIX "u"
#define PRIXS __PRIS_PREFIX "X"
#define PRIoS __PRIS_PREFIX "o"

Type                      DO NOT use        DO use                  Notes
void * (or any pointer)   %lx               %p
int64_t                   %qd, %lld         %"PRId64"
uint64_t                  %qu, %llu, %llx   %"PRIu64", %"PRIx64"
size_t                    %u                %"PRIuS", %"PRIxS"      C99 specifies %zu
ptrdiff_t                 %d                %"PRIdS"                C99 specifies %zd

Note that the PRI* macros expand to independent strings which are concatenated by the compiler. Hence if you are using a non-constant format string, you need to insert the value of the macro into the format, rather than the name. It is still possible, as usual, to include length specifiers, etc., after the % when using the PRI* macros. So, e.g. printf("x = %30"PRIuS"\n", x) would expand on 32-bit Linux to printf("x = %30" "u" "\n", x), which the compiler will treat as printf("x = %30u\n", x).

Remember that sizeof(void *) != sizeof(int). Use intptr_t if you want a pointer-sized integer.

You may need to be careful with structure alignments, particularly for structures being stored on disk. Any class/structure with an int64_t/uint64_t member will by default end up being 8-byte aligned on a 64-bit system. If you have such structures being shared on disk between 32-bit and 64-bit code, you will need to ensure that they are packed the same on both architectures. Most compilers offer a way to alter structure alignment. For gcc, you can use __attribute__((packed)). MSVC offers #pragma pack() and __declspec(align()).

Use the LL or ULL suffixes as needed to create 64-bit constants, for example: int64_t my_value = 0x123456789LL; uint64_t my_mask = 3ULL << 48;
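As a small sketch of the alignment advice, assuming gcc's __attribute__((packed)) mentioned above (the struct and constants are hypothetical, not from the guide):

#include <stdint.h>

// A record shared on disk between 32-bit and 64-bit builds. Packing it keeps
// the layout identical on both; MSVC would use #pragma pack() instead.
struct OnDiskRecord {
  uint32_t id;
  uint64_t timestamp_usec;
  uint32_t flags;
} __attribute__((packed));

// 64-bit constants use the LL/ULL suffixes.
const int64_t kMicrosecondsPerDay = 86400000000LL;
const uint64_t kHighBitsMask = 3ULL << 48;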
RaptorDB - The Key Value Store V2, by Mehdi Gholam (CodeProject, 8 Mar 2012, CPOL): an even faster key/value store NoSQL embedded database engine utilizing the new MGIndex data structure with MurMur2 hashing and WAH bitmap indexes for duplicates.
Contents
- Overview
- Lesson 1: Index Concepts
- Lesson 2: Concepts – Statistics
- Lesson 3: Concepts – Query Optimization
- Lesson 4: Information Collection and Analysis
- Lesson 5: Formulating and Implementing Resolution

Module 6: Troubleshooting Query Performance

Overview
At the end of this module, you will be able to:
- Describe the different types of indexes and how indexes can be used to improve performance.
- Describe what statistics are used for and how they can help in optimizing query performance.
- Describe how queries are optimized.
- Analyze the information collected from various tools.
- Formulate resolution to query performance problems.

Lesson 1: Index Concepts
Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in a table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times, so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables.

What You Will Learn
After completing this lesson, you will be able to:
- Understand the structure of SQL Server indexes.
- Describe how SQL Server uses indexes to find rows.
- Describe how fillfactor can impact the performance of data retrieval and insertion.
- Describe the different types of fragmentation that can occur within an index.

Recommended Reading
- Chapter 8: "Indexes", Inside SQL Server 2000 by Kalen Delaney
- Chapter 11: "Batches, Stored Procedures and Functions", Inside SQL Server 2000 by Kalen Delaney

Finding Rows without Indexes
With No Indexes, A Table Must Be Scanned
SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row will keep track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table's extents, but every row still must be examined.

General Index Structure
All SQL Server Indexes Are Organized As B-Trees
Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree's usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout.

Clustered and Nonclustered Indexes
Both Index Types Have Many Common Features
An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index.
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table.

An index can have multiple node levels
An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row.

Leaf Level contains all key values
In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending.

The sysindexes table contains all sizing, location and distribution information
Any information about the size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one.

The Difference between Clustered and Nonclustered Indexes
The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information.

Clustered Indexes
The Leaf Level of a Clustered Index Is the Data
The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to.

Uniqueness Is Maintained In Key Values
In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows; see the section on index internals.

Finding Rows in a Clustered Index
The Leaf Level of a Clustered Index Contains the Data
A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can only be one clustered index for a table at a time.
Performance Considerations
Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we'll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow more index rows to fit per page in all your indexes.
Note The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'

Nonclustered Indexes
The Leaf Level of a Nonclustered Index Contains a Bookmark
A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:
 If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can consist of multiple columns if the clustered index is composite or is defined to be non-unique.
 If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#.

Finding Rows with a NC Index on a Heap
Nonclustered Indexes Are Very Efficient When Searching For A Single Row
After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient.
Note The query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname BETWEEN 'Master' AND 'Rudd'

Finding Rows with a NC Index on a Clustered Table
A Clustered Key Is Used as the Bookmark for All Nonclustered Indexes
If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows.
Note The query corresponding to the slide is:
SELECT lastname, firstname, phone FROM member WHERE firstname = 'Mike'

Covering Indexes
A Covering Index Provides the Fastest Data Access
A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed. The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes.
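As a small sketch of a covered query, assume the credit sample database and an illustrative index name: a nonclustered index on (lastname, firstname) contains every column the slide query touches, so SQL Server can answer it entirely from the index leaf level.

USE credit
CREATE NONCLUSTERED INDEX nci_member_last_first ON member (lastname, firstname)
-- Both selected columns and the WHERE column are in the index, so the data pages are never read:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'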
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particularly valuable when we need ALL the key values of a particular nonclustered index.
Example: Select an aggregate value of a column that has a nonclustered index. Suppose we have a nonclustered index on price; this query is covered:
SELECT avg(price) from titles
Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered:
SELECT title_id, price FROM titles WHERE price between 10 and 20
Performance Considerations
In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a 'mini' clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoff of adding more, wider indexes for covering queries is the added disk space, and more overhead for updating those columns that are now part of the index.
Bug
In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079.
Steps to reproduce
1. Make a copy of the Orders table from Northwind:
USE Northwind
CREATE TABLE [NewOrders] ( [OrderID] [int] NOT NULL , [CustomerID] [nchar] (5) NULL , [EmployeeID] [int] NULL , [OrderDate] [datetime] NULL , [RequiredDate] [datetime] NULL , [ShippedDate] [datetime] NULL , [ShipVia] [int] NULL , [Freight] [money] NULL , [ShipName] [nvarchar] (40) NULL, [ShipAddress] [nvarchar] (60) , [ShipCity] [nvarchar] (15) NULL, [ShipRegion] [nvarchar] (15) NULL, [ShipPostalCode] [nvarchar] (10) NULL, [ShipCountry] [nvarchar] (15) NULL )
INSERT into NewOrders SELECT * FROM Orders
2. Build a nonclustered index on OrderDate:
create index dateindex on neworders(orderdate)
3. Test the query by looking at the query plan:
select orderdate from NewOrders
The index is being scanned, as expected.
4. Build an index on OrderID:
create index orderid_index on neworders(orderID)
5. Test the query again by looking at the query plan:
select orderdate from NewOrders
Now the TABLE is being scanned, instead of the original index!

Index Intersection
Multiple Indexes Can Be Used On A Single Table
In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets against the cost of only using one index and then scanning the entire result set from that single index.

Fillfactor and Performance
Creating an Index with a Low Fillfactor Delays Page Splits when Inserting
DBCC SHOWCONTIG will show you a low value for "Avg. Page Density" when a low fillfactor has been specified.
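As an illustration of the fillfactor discussion (the index name and the value 70 are arbitrary), you can build an index that starts out only partially full and then confirm the low page density:

CREATE NONCLUSTERED INDEX nci_member_lastname ON member (lastname) WITH FILLFACTOR = 70
DBCC SHOWCONTIG ('member', 'nci_member_lastname')   -- expect Avg. Page Density close to 70%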
This is good for inserts and updates, because it will delay the need to split pages to make room for new rows. It can be bad for scans, because fewer rows will be on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good. Index Reorganization DBCC SHOWCONTIG Provides Lots of Information Here’s some sample output from running a basic DBCC SHOWCONTIG on the order details table in the Northwind database: DBCC SHOWCONTIG scanning 'Order Details' table... Table: 'Order Details' (325576198); index ID: 1, database ID:6 TABLE level scan performed. - Pages Scanned................................: 9 - Extents Scanned..............................: 6 - Extent Switches..............................: 5 - Avg. Pages per Extent........................: 1.5 - Scan Density [Best Count:Actual Count].......: 33.33% [2:6] - Logical Scan Fragmentation ..................: 0.00% - Extent Scan Fragmentation ...................: 16.67% - Avg. Bytes Free per Page.....................: 673.2 - Avg. Page Density (full).....................: 91.68% By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:  Average number of bytes free on each page (Avg. Bytes Free per Page)  Number of pages accessed (Pages scanned)  Number of extents accessed (Extents scanned)  Number of times a page had a lower page number than the previous page in the scan (This value for Out of order pages is not displayed, but is used for additional computations.)  Number of times a page in the scan was on a different extent than the previous page in the scan (Extent switches) SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG’s output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is ¼, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is ¾, or 75 percent. The maximum number of gaps is the number of extents - 1, so Extent Scan Fragmentation can never be 100 percent. The value in DBCC SHOWCONTIG’s output called Logical Scan Fragmentation is computed by dividing the number of Out of order pages by the number of pages in the table. This value is meaningless in a heap. You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values. SQL Server 2000 allows online defragmentation You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. 
To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable. SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, but using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster. DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed. The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running. Let us look at an example in which an index's leaf level consists of the following pages in the following logical order: 47 22 83 32 12 90 64 The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this: 12 22 83 32 47 90 64 The next physical page is 22, which is also the next logical page, so no work would be necessary. 
DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:
12 22 32 83 47 90 64
After the next swap of 47 with 83, the leaf level would look like this:
12 22 32 47 83 90 64
Then, the defragmentation process would swap 64 with 83:
12 22 32 47 64 90 83
and 83 with 90:
12 22 32 47 64 83 90
At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.
If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel (that is, using multiple processors), which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation.

Indexes on Views and Computed Columns
Building an Index Gives the Data Physical Existence
Normally, views are only logical and the rows comprising the view's data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been built, and the view's data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values.
Common Prerequisites for Indexed Views and Indexes on Computed Columns
In order for SQL Server to create and use these special indexes, you must have the seven SET options correctly specified:
ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING and ANSI_WARNINGS must all be ON
NUMERIC_ROUNDABORT must be OFF
Only deterministic expressions can be used in the definition of Indexed Views or indexes on Computed Columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check if a column or view meets the requirements and is indexable; the required session settings are sketched below, and the property-function checks follow.
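The SET requirements above translate directly into the following session settings, which must be in effect in the session that creates the index (and, in practice, in sessions that later modify data through the view or computed column); this is only a sketch of the statements themselves.

SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF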
SELECT OBJECTPROPERTY (Object_id, ‘IsIndexable’) SELECT COLUMNPROPERTY (Object_id, column_name , ‘IsIndexable’ ) Schema Binding Guarantees That Object Definition Won’t Change A view can only be indexed if it has been built with schema binding. The SQL Server Optimizer Determines If the Indexed View Can Be Used The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced. Examples of Indexed Views: The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how the useful indexed views may be created for these two major groups of queries. The considerations are valid also for queries and indexed views using both joins and aggregations. -- Example: USE Northwind -- Identify 5 products with overall biggest discount total. -- This may be expressed for example by two different queries: -- Q1. select TOP 5 ProductID, SUM(UnitPrice*Quantity)- SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate from [order details] group by ProductID order by Rebate desc --Q2. select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate from [order details] group by ProductID order by Rebate desc --The following indexed view will be used to execute Q1. create view Vdiscount1 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount1 (ProductID) However, it will not be used by the Q2 because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view create view Vdiscount2 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, SUM(UnitPrice*Quantity*Discount) SumDiscoutPrice2, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount2 (ProductID) This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries. The following query asking for the order with the largest total discount -- Q3. select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate from dbo.[order details] group By OrderID Q3 can use neither of the Vdiscount views because the column OrderID is not included in the view definition. To address this variation of the discount analysis query we may create a different indexed view, similar to the query itself. An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 can take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the OrderID, ProductID combination is unique in the original order details table the resulting view would have as many rows as the original table and we would see no savings in using such view compared to using the original table. Consider the size of the resulting indexed view. 
In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table. Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in the indexed view definition. However, SQL Server may use an indexed view to execute a query containing an AVG aggregate. A query containing STDEV or VARIANCE cannot use an indexed view to pre-compute these values. The next example shows a query producing the average price for a particular product:
-- Q4.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID group by ProductName, od.ProductID
This is an example of an indexed view that will be considered by SQL Server to answer Q4:
create view v3 with schemabinding as select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units from dbo.[order details] od group by od.ProductID
go
create UNIQUE CLUSTERED index iv3 on v3 (ProductID)
go
Observe that the view definition does not contain the table Products. The indexed view does not need to contain all tables used in the query that uses the indexed view. In addition, the following query (the same as Q4 above, only with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition.
-- Q5.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and p.ProductName like '%tofu%' group by ProductName, od.ProductID
The following query cannot use the indexed view, because the added search condition od.UnitPrice>10 contains a column from a table in the view definition, and that column is neither a grouping column nor does the predicate appear in the view definition.
-- Q6.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID
To contrast the Q6 case, the following query will use the indexed view v3, since the added predicate is on the grouping column of the view v3.
-- Q7.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.ProductID in (1,2,13,41) group by ProductName, od.ProductID
-- The previous query Q6 will use the following indexed view V4:
create view V4 with schemabinding as select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count from dbo.[order details] od, dbo.Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID
create unique clustered index VDiscountInd on V4 (ProductName, ProductID)
The same index on the view V4 will also be used for a query where a join to the table Orders is added, for example
-- Q8.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 group by ProductName, od.ProductID
We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4.
-- Q8a.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>25 group by ProductName, od.ProductID
Q8a cannot use the indexed view because of the WHERE clause mismatch. Observe that the table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view, because the added predicate may eliminate additional rows participating in the aggregates, as shown in Q8b.
-- Q8b.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 and o.OrderDate>'01/01/1998' group by ProductName, od.ProductID

Locking and Indexes
In General, You Should Let SQL Server Control the Locking within Indexes
The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed. As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks.
Setting vs. Querying
In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function.

Lesson 2: Concepts – Statistics
Statistics are the most important tool that the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data.
What You Will Learn
After completing this lesson, you will be able to:
 Define terms related to statistics collected by SQL Server.
 Describe how statistics are maintained by SQL Server.
 Discuss the autostats feature of SQL Server.
 Describe how statistics are used in query optimization.
Recommended Reading
 Statistics Used by the Query Optimizer in Microsoft SQL Server 2000 http://msdn.microsoft.com/library/techart/statquery.htm
Definitions
Cardinality
Cardinality means how many unique values exist in the data.
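As a rough illustration (assuming the credit sample database), cardinality can be seen by simply comparing the row count to the number of distinct values in a column; the closer the two numbers are, the higher the cardinality of that column.

select count(*) as total_rows, count(distinct lastname) as distinct_lastnames from member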
Density For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density —by definition, each index entry can point to only one row. A unique index has a density value of 1/number of rows in the table. Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page doesn’t need to be reaccessed is when the occasional coincidence occurs in which two adjacent index entries happen to point to the same data page. In general, you can think of density as the average number of duplicates. We can also talk about the term ‘join density’, which applies to the average number of duplicates in the foreign key column. This would answer the question: in this one-to-many relationship, how many is ‘many’? Selectivity In general selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter, and a low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all. Statistics Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself. The data row merely contains a pointer to the image data. For simplicity’s sake, we’ll talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes. During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a columns in your WHERE clause, assesses the index’s usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all columns helps the optimizer decide how useful the index is for joins. 
Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) the first column, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column. The lead column density is always needed.

Statistics Maintenance
Statistics Information Tracks the Distribution of Key Values
SQL Server statistics are basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:
 The time of the last statistics collection
 The number of rows used to produce the histogram and density information
 The average key length
 Densities for other combinations of columns
In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY, of the range.
DBCC SHOW_STATISTICS
The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:
 DISTINCT_RANGE_ROWS, the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself). This is computed as 1/RANGE_DENSITY.
 AVG_RANGE_ROWS, the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS.
In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server can't use an index to actually locate the values.
Statistics on Columns
Column statistics can be useful for two main purposes:
 When the SQL Server optimizer is determining the optimal join order, it frequently is best to have the smaller input processed first. By 'input' we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, and thus the resulting input will be very small.
 The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine if scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the firstname column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query:
select * from member where firstname = 'MPRO'
In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table).
The small number of bookmarks will then be followed to retrieve the actual data. Manually Updating Statistics You can also manually force statistics to be updated in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or column statistics, or you can also execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database. You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns and columns of the ntext, text, or image datatypes, and columns that already have statistics or are the first column of an index. Autostats By Default SQL Server Will Update Statistics on Any Index or Column as Needed Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:  UPDATE STATISTICS In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates.  sp_autostats This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated. ' However, setting the database option auto update statistics to FALSE overrides any individual table settings. In other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford. Trace Flags Trace flag 205 – reports recompile due to autostats. Trace flag 8721 – writes information to the errorlog when AutoStats has been run. For more information, see the following Knowledge Base article: Q195565 “INF: How SQL Server 7.0 Autostats Work.” Statistics and Performance The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database. Lesson 3: Concepts – Query Optimization What You Will Learn After completing this lesson, you will be able to:  Describe the phases of query optimization.  Discuss how SQL Server estimates the selectivity of indexes and column and how this estimate is used in query optimization. Recommended Reading  Chapter 15: “The Query Processor”, Inside SQL Server 2000 by Kalen Delaney  Chapter 16: “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney  Whitepaper about SQL Server Query Processor Architecture by Hal Berenson and Kalen Delaney http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm Phases of Query Optimization Query Optimization Involves several phases Trivial Plan Optimization Optimization itself goes through several steps. The first step is something called Trivial Plan Optimization. 
The whole idea of trivial plan optimization is that cost based optimization is a bit expensive to run. The optimizer can try a great many possible variations trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it could avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause. There is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is useable. There is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out earlier in the process and do not pay a lot of the compilation cost. This is a good thing, because the number of potential plans in 7.0 went up astronomically as SQL Server added hash joins, merge joins and index intersections to its list of processing techniques.
Simplification and Statistics Loading
If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding, and other operations that do not require looking at the cost or analyzing what indexes are available, but that can result in a more efficient query. SQL Server then loads up the metadata, including the statistics information on the indexes, and then the optimizer goes through a series of phases of cost based optimization.
Cost Based Optimization Phases
The cost based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases. Each phase is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost based optimizer. The rules were intentionally organized to try to make that be true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of lack of statistical information, or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer's output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5.
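To make the trivial plan cases mentioned at the start of this discussion concrete, here is a minimal sketch using a hypothetical table; a single-row INSERT with a VALUES clause has essentially one viable plan, so the cost based phases never need to run for it.

CREATE TABLE dbo.t_trivial (id int NOT NULL PRIMARY KEY, name varchar(30) NULL)
INSERT INTO dbo.t_trivial (id, name) VALUES (1, 'only one possible plan')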
Full Optimization
At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases and still has not found a cheap plan, it examines the cost for the plan that it has so far. If the cost is above the threshold, the optimizer goes into the full optimization phase. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. The full optimization phase assumes that this plan should be run in parallel. If the machine is very busy, the plan will end up running in serial, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or the machine has only a single processor), the full optimization phase just uses a brute force method to find a serial plan.

Selectivity Estimation
Selectivity Is One of The Most Important Pieces of Information
One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table, and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated.

Optimizing Search Arguments
In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful
If an indexed column is referenced in a Search Argument (SARG), the optimizer will analyze the cost of using that index. A SARG has one of the forms:
 column <operator> value
 value <operator> column
The operator must be one of =, >, >=, <, <=. The value can be a constant, an operation, or a variable. Some functions also will be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases:
select * from member where firstname < 'AKKG'
select * from member where firstname = substring('HAAKGALSFJA', 2, 5)
select * from member where firstname = 'AA' + 'KG'
declare @name char(4)
set @name = 'AKKG'
select * from member where firstname < @name
Not all functions can be used in SARGs.
select * from charge where charge_amt < 2*2
select * from charge where charge_amt < sqrt(16)
Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it's not going to actually perform the function's calculations.
A filter with a variable is usually a SARG
The issue is, can the optimizer come up with useful costing information?
A filter with a variable is not a SARG if the variable is of a different datatype, and the column must be converted to the variable’s datatype For more information, see the following Knowledge Base article: Q198625 Enter Title of KB Article Here Use credit go CREATE TABLE [member2] ( [member_no] [smallint] NOT NULL , [lastname] [shortstring] NOT NULL , [firstname] [shortstring] NOT NULL , [middleinitial] [letter] NULL , [street] [shortstring] NOT NULL , [city] [shortstring] NOT NULL , [state_prov] [statecode] NOT NULL , [country] [countrycode] NOT NULL , [mail_code] [mailcode] NOT NULL ) GO insert into member2 select member_no, lastname, firstname, middleinitial, street, city, state_prov, country, mail_code from member alter table member2 add constraint pk_member2 primary key clustered (lastname, member_no, firstname, country) declare @id int set @id = 47 update member2 set city = city + ' City', state_prov = state_prov + ' State' where lastname = 'Barr' and member_no = @id and firstname = 'URQYJBFVRRPWKVW' and country = 'USA' These queries don’t have SARGs, and a table scan will be done: select * from member where substring(lastname, 1,2) = ‘BA’ Some non-SARGs can be converted select * from member where lastname like ‘ba%’ In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, you can rewrite the substring query above and the LIKE query that follows it. Join Order and Types of Joins Join Order and Strategy Is Determined By the Optimizer The execution plan output will display the join order from top to bottom; i.e. the table listed on top is the first one accessed in a join. You can override the optimizer’s join order decision in two ways:  OPTION (FORCE ORDER) applies to one query  SET FORCEPLAN ON applies to entire session, until set OFF If either of these options is used, the join order is determined by the order the tables are listed in the query’s FROM clause, and no optimizer on JOIN ORDER is done. Forcing the JOIN order may force a particular join strategy. For example, in most outer join operations, the outer table is processed first, and a nested loops join is done. However, if you force the inner table to be accessed first, a merge join will need to be done. Compare the query plan for this query with and without the FORCE ORDER hint: select * from titles right join publishers on titles.pub_id = publishers.pub_id -- OPTION (FORCE ORDER) Nested Loop Join A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps. 1. Finds a row from the first table. 2. Uses that row to scan the next table. 3. Uses the result of the previous table to scan the next table. Evaluating Join Combinations The query optimizer automatically evaluates at least four or more possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates. Evaluating every possible join combination is inefficient and costly. Evaluating Cost of Query Performance When the query optimizer performs a nested join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows. 
The query optimizer:
 Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
 Uses the smaller input as the outer table.
 Requires that a useful index exist on the join predicate for the inner table.
 Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.
Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares input values.
3. Performs a merge algorithm.
• If the input values are equal, the rows are returned.
• If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.
Note Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.
Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires less page I/O.
Evaluates Sorted Values
For the query optimizer to use the merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.
Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
 SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
 A merge join operation may be a one-to-one, one-to-many, or many-to-many operation. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
 Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
 A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
 Merge join operations can only be performed with an equality operator in the join predicate.
Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.
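As a small, hedged illustration of hashing at work before looking at hash joins in detail (Northwind; the actual plan depends on data sizes and available indexes), an aggregation like the following is often processed with a Hash Match (Aggregate) operator, which groups rows by hashing on the grouping column:

select od.ProductID, SUM(od.Quantity) TotalUnits
from dbo.[Order Details] od
group by od.ProductID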
Note Duplicate data or ranges of data are not useful for hash joins because the data is not organized together or in order. When a Hash Join Is Used The query optimizer uses a hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful. Assigns a Build and Probe Input The query optimizer assigns a build and probe input. If the query optimizer incorrectly assigns the build and probe input (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal. Build input consists of the column values from a table with the lowest number of rows. Build input creates a hash table in memory to store these values. The hash bucket is a storage place in the hash table in which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and only contain the columns that are needed for the query. A hash table contains hash buckets. The hash table is created from the build input. Probe input consists of the column values from the table with the most rows. Probe input is what the build input checks to find a match in the hash buckets. Note The query optimizer uses column or index statistics to help determine which input is the smaller of the two. Processing a Hash Join The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive because the algorithm is very complex. SQL Server: 1. Reads the probe input. Each probe input is processed one row at a time. 2. Performs the hash algorithm against each probe input and generates a hash key value. 3. Finds the hash bucket that matches the hash key value. 4. Accesses the hash bucket and looks for the matching row. 5. Returns the row if a match is found. Performance Considerations Consider the following facts about the hash joins that the query optimizer uses:  Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.  Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.  A hash operator can remove duplicates and group data, such as SUM (salary) GROUP BY department. The query optimizer uses only one input for both the build and probe roles.  If join inputs are large and are of similar size, the performance of a hash join operation is similar to a merge join with prior sorting. However, if the size of the join inputs is significantly different, the performance of a hash join is often much faster.  Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results: • Are not indexed (unless explicitly saved to disk and then indexed). • Are often not sorted for the next operation in the execution plan.  The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.  A hash join reduces the need for database denormalization. 
Denormalization is typically used to achieve better performance by reducing join operations, in spite of the redundancy and the risk of inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.

Subquery Performance
Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.
Note Performance comparisons cannot just be made based on I/Os. With HASHING and MERGING techniques, the number of reads may be the same for two queries, yet one may take a lot longer and use more memory resources. Also, always be sure to monitor statistics time.
Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.
-- JOIN solution
BEGIN TRAN
UPDATE OD SET discount = discount + 0.05 FROM [Order Details] AS OD JOIN Products AS P ON OD.productid = P.productid WHERE supplierid = 1
ROLLBACK TRAN
-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05 WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN
-- Correlated Subquery Solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05 WHERE EXISTS(SELECT supplierid FROM Products WHERE [Order Details].productid = Products.productid AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied distinct, which may result in a logical operator of SEMI JOIN. Compare the plans of the first two queries:
USE credit
select member_no from member where member_no in (select member_no from charge)
select distinct m.member_no from member m join charge c on m.member_no = c.member_no
The second query uses a HASH MATCH as the final step to remove the duplicates. The first query only had to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast). Another similar looking join is
