ERROR 1062 (23000) at line 1: Duplicate entry '1332883220' for key 'group_key'

恋喵大鲤鱼
Blog Expert (certified)
2017-07-17 11:17:12
I hit this error while running an insert into uinPortrait that uses select uin, sum(addCnt), sum(successCnt) from t group by uin. I'm certain the target table uinPortrait has no primary key or unique index, so could someone tell me what causes this error? I've searched all over Google and Baidu and found nothing. Many thanks!

The CREATE TABLE statement for uinPortrait is as follows:
CREATE TABLE IF NOT EXISTS uinPortrait (
  uin int(10) unsigned NOT NULL DEFAULT 0,
  addCnt int(10) unsigned NOT NULL DEFAULT 0,
  successCnt int(10) unsigned NOT NULL DEFAULT 0
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
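
For reference, the failing statement presumably looked like the following; the question shows only the SELECT part, so the INSERT column list here is an assumption based on the table definition:

insert into uinPortrait (uin, addCnt, successCnt)
select uin, sum(addCnt), sum(successCnt)
from t
group by uin;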
3 replies
恋喵大鲤鱼 2017-07-19
Quoting the original post by K346K346 (quoted in full above):
Your guess was right: it was indeed caused by the temporary table size (tmp_table_size) and the in-memory table size (max_heap_table_size) being too small. After raising these two variables, everything worked. For details, see my blog post: http://blog.csdn.net/k346k346/article/details/75267332
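
For background: the group_key in the error message is the name of the unique index that MySQL places on the GROUP BY columns of its internal temporary table, which is why a duplicate-key error can surface even though uinPortrait itself has no unique index. A minimal sketch of inspecting and raising the two variables follows; the 256 MB figure is an illustrative assumption, not a value from this thread:

-- The effective in-memory temporary table limit is
-- min(tmp_table_size, max_heap_table_size), so both must be raised.
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- Server-wide (requires the SUPER privilege); 256 MB is just an example.
SET GLOBAL tmp_table_size = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;

-- Or for the current session only:
SET SESSION tmp_table_size = 256 * 1024 * 1024;
SET SESSION max_heap_table_size = 256 * 1024 * 1024;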
二月十六 2017-07-18
Does this table have a trigger or something like that on it?
zjcxc 2017-07-18
Can the result be loaded into a temporary table successfully?

create temporary table xx as select uin, sum(addCnt), sum(successCnt) from t group by uin
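
Building on that suggestion, here is a quick way to test whether the internal temporary table is overflowing to disk; CREATE TEMPORARY TABLE ... AS SELECT and the Created_tmp% status counters are standard MySQL, and the table names come from the thread:

-- Snapshot the counters, run the aggregation, then compare: a jump in
-- Created_tmp_disk_tables means an in-memory temporary table overflowed
-- and was converted to an on-disk table.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

create temporary table xx as
select uin, sum(addCnt) AS addCnt, sum(successCnt) AS successCnt
from t
group by uin;

SHOW GLOBAL STATUS LIKE 'Created_tmp%';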
Contents Overview 1 Lesson 1: Index Concepts 3 Lesson 2: Concepts – Statistics 29 Lesson 3: Concepts – Query Optimization 37 Lesson 4: Information Collection and Analysis 61 Lesson 5: Formulating and Implementing Resolution 75 Module 6: Troubleshooting Query Performance Overview At the end of this module, you will be able to:  Describe the different types of indexes and how indexes can be used to improve performance.  Describe what statistics are used for and how they can help in optimizing query performance.  Describe how queries are optimized.  Analyze the information collected from various tools.  Formulate resolution to query performance problems. Lesson 1: Index Concepts Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables. What You Will Learn After completing this lesson, you will be able to:  Understand the structure of SQL Server indexes.  Describe how SQL Server uses indexes to find rows.  Describe how fillfactor can impact the performance of data retrieval and insertion.  Describe the different types of fragmentation that can occur within an index. Recommended Reading  Chapter 8: “Indexes”, Inside SQL Server 2000 by Kalen Delaney  Chapter 11: “Batches, Stored Procedures and Functions”, Inside SQL Server 2000 by Kalen Delaney Finding Rows without Indexes With No Indexes, A Table Must Be Scanned SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row will keep track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table’s extents, but every row still must be examined. General Index Structure All SQL Server Indexes Are Organized As B-Trees Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree’s usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout. Clustered and Nonclustered Indexes Both Index Types Have Many Common Features An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index. 
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table. An index can have multiple node levels An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row. Leaf Level contains all key values In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending. The sysindexes table contains all sizing, location and distribution information Any information about size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one. The Difference between Clustered and Nonclustered Indexes The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information. Clustered Indexes The Leaf Level of a Clustered Index Is the Data The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to. Uniqueness Is Maintained In Key Values In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows the section on indexes internal. . Finding Rows in a Clustered Index The Leaf Level of a Clustered Index Contains the Data A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can only be one clustered index for a table at a time. 
Performance Considerations Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we’ll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow you to have more index fit per page in all your indexes. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname = ‘Ota’ Nonclustered Indexes The Leaf Level of a Nonclustered Index Contains a Bookmark A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:  If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can be multiple column if the clustered index is composite, or is defined to be non-unique.  If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#. Finding Rows with a NC Index on a Heap Nonclustered Indexes Are Very Efficient When Searching For A Single Row After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname BETWEEN ‘Master’ AND ‘Rudd’ Finding Rows with a NC Index on a Clustered Table A Clustered Key Is Used as the Bookmark for All Nonclustered Indexes If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows. Note The query corresponding to the slide is: SELECT lastname, firstname, phone FROM member WHERE firstname = ‘Mike’ Covering Indexes A Covering Index Provides the Fastest Data Access A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed. The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes. 
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particular valuable when we need ALL the key values of a particular nonclustered index. Example: Select an aggregate value of a column with a clustered index. Suppose we have a nonclustered index on price, this query is covered: SELECT avg(price) from titles Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered: SELECT title_id, price FROM titles WHERE price between 10 and 20 Performance Considerations In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a ‘mini’ clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoff to adding more, wider indexes for covering queries are the added disk space, and more overhead for updating those columns that are now part of the index. Bug In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079 Steps to reproduce 1. Make copy of orders table from Northwind: USE Northwind CREATE TABLE [NewOrders] ( [OrderID] [int] NOT NULL , [CustomerID] [nchar] (5) NULL , [EmployeeID] [int] NULL , [OrderDate] [datetime] NULL , [RequiredDate] [datetime] NULL , [ShippedDate] [datetime] NULL , [ShipVia] [int] NULL , [Freight] [money] NULL , [ShipName] [nvarchar] (40) NULL, [ShipAddress] [nvarchar] (60) , [ShipCity] [nvarchar] (15) NULL, [ShipRegion] [nvarchar] (15) NULL, [ShipPostalCode] [nvarchar] (10) NULL, [ShipCountry] [nvarchar] (15) NULL ) INSERT into NewOrders SELECT * FROM Orders 2. Build nc index on OrderDate: create index dateindex on neworders(orderdate) 3. Test Query by looking at query plan: select orderdate from NewOrders The index is being scanned, as expected. 4. Build an index on orderId: create index orderid_index on neworders(orderID) 5. Test Query by looking at query plan: select orderdate from NewOrders Now the TABLE is being scanned, instead of the original index! Index Intersection Multiple Indexes Can Be Used On A Single Table In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets, with the cost of only using one index, and then scanning the entire result set from that single index. Fillfactor and Performance Creating an Index with a Low Fillfactor Delays Page Splits when Inserting DBCC SHOWCONTIG will show you a low value for “Avg. Page Density” when a low fillfactor has been specified. 
Fillfactor and Performance
Creating an Index with a Low Fillfactor Delays Page Splits when Inserting
DBCC SHOWCONTIG will show you a low value for "Avg. Page Density" when a low fillfactor has been specified. This is good for inserts and updates, because it delays the need to split pages to make room for new rows. It can be bad for scans, because fewer rows fit on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good.

Index Reorganization
DBCC SHOWCONTIG Provides Lots of Information
Here is some sample output from running a basic DBCC SHOWCONTIG on the Order Details table in the Northwind database:
DBCC SHOWCONTIG scanning 'Order Details' table...
Table: 'Order Details' (325576198); index ID: 1, database ID: 6
TABLE level scan performed.
- Pages Scanned................................: 9
- Extents Scanned..............................: 6
- Extent Switches..............................: 5
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 33.33% [2:6]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 16.67%
- Avg. Bytes Free per Page.....................: 673.2
- Avg. Page Density (full).....................: 91.68%
By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:
• Average number of bytes free on each page (Avg. Bytes Free per Page)
• Number of pages accessed (Pages Scanned)
• Number of extents accessed (Extents Scanned)
• Number of times a page had a lower page number than the previous page in the scan (this count of out-of-order pages is not displayed, but it is used for additional computations)
• Number of times a page in the scan was on a different extent than the previous page in the scan (Extent Switches)
SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps there are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG's output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is ¼, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is ¾, or 75 percent. The maximum number of gaps is the number of extents minus 1, so Extent Scan Fragmentation can never be 100 percent.
The value in DBCC SHOWCONTIG's output called Logical Scan Fragmentation is computed by dividing the number of out-of-order pages by the number of pages in the table. This value is meaningless in a heap.
You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values.
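For reference, output like the sample above can be produced by running the command against the table by name; the WITH options shown below are part of the SQL Server 2000 syntax and are optional:

USE Northwind
-- Basic form, as used for the sample output above:
DBCC SHOWCONTIG ('Order Details')
-- Scan every index on the table and return rows instead of text,
-- which is easier to filter, compare over time, or archive:
DBCC SHOWCONTIG ('Order Details') WITH ALL_INDEXES, TABLERESULTS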
SQL Server 2000 Allows Online Defragmentation
You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the DROP_EXISTING clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable.
SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages so that their logical order matches their physical order, using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to the sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster.
DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed.
The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles the pages on each file separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running.
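A hedged sketch of the three approaches just described, reusing the NewOrders table and the dateindex index created in the bug-reproduction steps earlier in this module:

USE Northwind
-- Rebuild every index on the table in one operation (table is offline):
DBCC DBREINDEX ('NewOrders')
-- Rebuild a single index in place without dropping it first:
CREATE INDEX dateindex ON NewOrders(OrderDate) WITH DROP_EXISTING
-- Defragment online; the table stays available to other users:
DBCC INDEXDEFRAG (Northwind, NewOrders, dateindex)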
Let us look at an example in which an index's leaf level consists of the following pages, in the following logical order:
47 22 83 32 12 90 64
The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this:
12 22 83 32 47 90 64
The next physical page is 22, which is also the next logical page, so no work would be necessary. DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:
12 22 32 83 47 90 64
After the next swap, of 47 with 83, the leaf level would look like this:
12 22 32 47 83 90 64
Then the defragmentation process would swap 64 with 83:
12 22 32 47 64 90 83
and 83 with 90:
12 22 32 47 64 83 90
At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the disk head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a progress report every 5 minutes with the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.
If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel, that is, using multiple processors, which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only about one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only SQL Server 2000 Enterprise Edition supports parallel index creation.

Indexes on Views and Computed Columns
Building an Index Gives the Data Physical Existence
Normally, views are only logical, and the rows comprising a view's data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored, and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been built, and the view's data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values.

Common Prerequisites for Indexed Views and Indexes on Computed Columns
For SQL Server to create or use these special indexes, you must have the seven SET options correctly specified:
ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS must all be ON
NUMERIC_ROUNDABORT must be OFF
Only deterministic expressions can be used in the definition of indexed views or indexes on computed columns. See the BOL for the list of deterministic functions and expressions.
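A hedged sketch of these prerequisites in practice. The table and index names are hypothetical; the point is that the computed column is deterministic, so it is indexable once the SET options are in place:

-- Required session settings (six ON, one OFF):
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF
GO
-- Hypothetical table; Total is a deterministic computed column
CREATE TABLE OrderLine (
    OrderID   int   NOT NULL,
    Qty       int   NOT NULL,
    UnitPrice money NOT NULL,
    Total AS (Qty * UnitPrice)
)
GO
-- The index materializes Total in its leaf rows
CREATE INDEX idx_orderline_total ON OrderLine (Total)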
Property functions are available to check whether a column or view meets the requirements and is indexable:
SELECT OBJECTPROPERTY (object_id, 'IsIndexable')
SELECT COLUMNPROPERTY (object_id, column_name, 'IsIndexable')

Schema Binding Guarantees That the Object Definition Won't Change
A view can only be indexed if it has been built with schema binding.

The SQL Server Optimizer Determines If the Indexed View Can Be Used
The query must request a subset of the data contained in the view. The ability of the optimizer to use an indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard Edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced.

Examples of Indexed Views:
The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how useful indexed views may be created for these two major groups of queries. The considerations are also valid for queries and indexed views using both joins and aggregations.
-- Example:
USE Northwind
-- Identify the 5 products with the biggest overall discount total.
-- This may be expressed, for example, by two different queries:
-- Q1.
select TOP 5 ProductID,
    SUM(UnitPrice*Quantity) - SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- Q2.
select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- The following indexed view will be used to execute Q1.
create view Vdiscount1 with schemabinding as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group by ProductID
go
create unique clustered index VDiscountInd on Vdiscount1 (ProductID)
However, it will not be used by Q2, because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view:
create view Vdiscount2 with schemabinding as
select SUM(UnitPrice*Quantity) SumPrice,
    SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice,
    SUM(UnitPrice*Quantity*Discount) SumDiscountPrice2,
    COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group by ProductID
go
create unique clustered index VDiscountInd on Vdiscount2 (ProductID)
This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 has the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries.
The following query, asking for the three orders with the largest total discount, can use neither of the Vdiscount views, because the column OrderID is not included in the view definitions:
-- Q3.
select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate
from dbo.[order details]
group by OrderID
To address this variation of the discount analysis query, we may create a different indexed view, similar to the query itself, as sketched below.
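The text does not spell out such a view, so the following is a hedged sketch of what it might look like; the names Vdiscount3 and VDiscount3Ind are illustrative:

create view Vdiscount3 with schemabinding as
select SUM(UnitPrice*Quantity*Discount) SumOrderRebate,
    COUNT_BIG(*) Count, OrderID
from dbo.[order details]
group by OrderID
go
-- The unique clustered index materializes the view's rows:
create unique clustered index VDiscount3Ind on Vdiscount3 (OrderID)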
An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 could take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the (OrderID, ProductID) combination is unique in the original order details table, the resulting view would have as many rows as the original table, and we would see no savings in using such a view compared to using the original table. Always consider the size of the resulting indexed view: in the case of pure aggregation, the indexed view may provide no significant performance gain if its size is close to the size of the original table.
Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in an indexed view definition. However, SQL Server may use an indexed view to execute a query containing an AVG aggregate, because AVG can be derived from a stored SUM and COUNT_BIG. A query containing STDEV or VARIANCE cannot use an indexed view to pre-compute these values.
The next example shows a query producing the average price for a particular product:
-- Q4.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID = p.ProductID
group by ProductName, od.ProductID
This is an example of an indexed view that SQL Server will consider in order to answer Q4:
create view v3 with schemabinding as
select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units
from dbo.[order details] od
group by od.ProductID
go
create UNIQUE CLUSTERED index iv3 on v3 (ProductID)
go
Observe that the view definition does not contain the table Products. The indexed view does not need to contain all the tables used in the query that uses the indexed view.
In addition, the following query (the same as Q4, with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition:
-- Q5.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID = p.ProductID
and p.ProductName like '%tofu%'
group by ProductName, od.ProductID
The following query cannot use the indexed view, because the added search condition od.UnitPrice > 10 contains a column from a table in the view definition, and that column is neither a grouping column nor does the predicate appear in the view definition:
-- Q6.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID = p.ProductID
and od.UnitPrice > 10
group by ProductName, od.ProductID
In contrast to the Q6 case, the following query will use the indexed view v3, since the added predicate is on the grouping column of the view v3:
-- Q7.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from [order details] od, Products p
where od.ProductID = p.ProductID
and od.ProductID in (1,2,13,41)
group by ProductName, od.ProductID
The query Q6 above will instead use the following indexed view V4, which has the od.UnitPrice > 10 predicate built into its definition:
create view V4 with schemabinding as
select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count
from dbo.[order details] od, dbo.Products p
where od.ProductID = p.ProductID
and od.UnitPrice > 10
group by ProductName, od.ProductID
go
create unique clustered index VDiscountInd on V4 (ProductName, ProductID)
The same index on the view V4 will also be used for a query where a join to the table Orders is added, for example:
-- Q8.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID = p.ProductID and o.OrderID = od.OrderID
and od.UnitPrice > 10
group by ProductName, od.ProductID
We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4.
-- Q8a.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID = p.ProductID and o.OrderID = od.OrderID
and od.UnitPrice > 25
group by ProductName, od.ProductID
Q8a cannot use the indexed view because of the WHERE clause mismatch: it filters on od.UnitPrice > 25, while V4 was built with od.UnitPrice > 10. Observe that the table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view, because the added predicate may eliminate additional rows participating in the aggregates, as is shown in Q8b:
-- Q8b.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID = p.ProductID and o.OrderID = od.OrderID
and od.UnitPrice > 10
and o.OrderDate > '01/01/1998'
group by ProductName, od.ProductID

Locking and Indexes
In General, You Should Let SQL Server Control the Locking within Indexes
The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed.
As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on.
SAP databases frequently use sp_indexoption to reduce deadlocks.

Setting vs. Querying
In SQL Server 2000, the procedure sp_indexoption should be used only for setting an index option. To query an option, use the INDEXPROPERTY function.
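A hedged sketch of setting and then querying a locking option; the 'member.pk_member' table.index pattern is a hypothetical name:

-- Disallow row locks within one index:
EXEC sp_indexoption 'member.pk_member', 'AllowRowLocks', FALSE
-- Query the resulting state with INDEXPROPERTY, not sp_indexoption:
SELECT INDEXPROPERTY(OBJECT_ID('member'), 'pk_member', 'IsRowLockDisallowed')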
Lesson 2: Concepts – Statistics
Statistics are the most important tool the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are outside the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data.

What You Will Learn
After completing this lesson, you will be able to:
• Define terms related to statistics collected by SQL Server.
• Describe how statistics are maintained by SQL Server.
• Discuss the autostats feature of SQL Server.
• Describe how statistics are used in query optimization.

Recommended Reading
• Statistics Used by the Query Optimizer in Microsoft SQL Server 2000
http://msdn.microsoft.com/library/techart/statquery.htm

Definitions
Cardinality
The cardinality means how many unique values exist in the data.

Density
For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density: by definition, each index entry can point to only one row. A unique index has a density value of 1/(number of rows in the table). Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page does not need to be reaccessed is the occasional coincidence in which two adjacent index entries happen to point to the same data page.
In general, you can think of density as the average number of duplicates. We can also talk about the term 'join density', which applies to the average number of duplicates in the foreign key column. This answers the question: in this one-to-many relationship, how many is 'many'?

Selectivity
In general, selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter; low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all.

Statistics
Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself; the data row merely contains a pointer to the image data. For simplicity's sake, we will talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes.
During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a column in your WHERE clause, assesses the index's usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single-column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all prefix combinations helps the optimizer decide how useful the index is for joins.
Suppose, for example, that an index is composed of three key columns. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) for the first column alone, so the density value goes down. If you look at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine all three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column; the lead column density is always needed.
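Since the optimizer reads its row and page counts from sysindexes, you can inspect the same inputs directly; a small sketch (the member table is the course sample):

-- dpages = page count, rowcnt = row count;
-- indid 0 is a heap, indid 1 is a clustered index
SELECT name, indid, dpages, rowcnt
FROM sysindexes
WHERE id = OBJECT_ID('member')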
Statistics Maintenance
Statistics Information Tracks the Distribution of Key Values
SQL Server statistics are basically a histogram containing up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:
• The time of the last statistics collection
• The number of rows used to produce the histogram and density information
• The average key length
• Densities for other combinations of columns
In the statblob column, up to 200 sample values are stored; the range of key values between two adjacent sample values is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: EQ_ROWS, the number of rows that have a value equal to the sample value; RANGE_ROWS, the number of other values inside the range (between two adjacent sample values); and the number of distinct values in the range, the RANGE_DENSITY of the range.

DBCC SHOW_STATISTICS
The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:
• DISTINCT_RANGE_ROWS: the number of distinct rows inside the range (not counting the RANGE_HI_KEY value itself), computed as 1/RANGE_DENSITY.
• AVG_RANGE_ROWS: the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS.
In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server can't use an index to actually locate the values.

Statistics on Columns
Column statistics can be useful for two main purposes:
• When the SQL Server optimizer is determining the optimal join order, it is frequently best to have the smaller input processed first. By 'input' we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, and therefore that the resulting input will be very small.
• The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine whether scanning the leaf level to obtain the bookmarks will be an efficient processing strategy.
For example, in the member table in the credit database, the firstname column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query:
select * from member where firstname = 'MPRO'
In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table). The small number of bookmarks will then be followed to retrieve the actual data.
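You can inspect the histogram, density vector, and step values described above with DBCC SHOW_STATISTICS; a hedged sketch, where the index name is an illustrative guess for the (lastname, firstname) index:

-- First argument is the table, second the index or statistics name
DBCC SHOW_STATISTICS ('member', 'nci_lastname_firstname')
-- The output contains the header (rows sampled, average key length),
-- the density vector (one density per key prefix), and the histogram
-- steps: RANGE_HI_KEY, RANGE_ROWS, EQ_ROWS, DISTINCT_RANGE_ROWS,
-- and AVG_RANGE_ROWS.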
Manually Updating Statistics
You can also manually force statistics to be updated, in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or set of column statistics, or you can execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database.
You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns; columns of the ntext, text, or image datatypes; columns that already have statistics; and columns that are the first column of an index.

Autostats
By Default, SQL Server Will Update Statistics on Any Index or Column as Needed
Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:
• UPDATE STATISTICS. In addition to updating the statistics, the WITH NORECOMPUTE option indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option re-enables automatic updates.
• sp_autostats. This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated.
However, setting the database option auto update statistics to FALSE overrides any individual table settings; in other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford.

Trace Flags
Trace flag 205 reports recompiles due to autostats. Trace flag 8721 writes information to the errorlog when autostats has been run. For more information, see Knowledge Base article Q195565, "INF: How SQL Server 7.0 Autostats Work."

Statistics and Performance
The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating
Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database.
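To summarize the manual-statistics commands discussed in this lesson, here is a hedged sketch; the table and statistics names are illustrative:

-- Update statistics for one table, reading every row:
UPDATE STATISTICS member WITH FULLSCAN
-- Create column statistics on an unindexed column:
CREATE STATISTICS firstname_stats ON member (firstname)
-- Update statistics for every user table in the current database:
EXEC sp_updatestats
-- Opt one table out of automatic recomputation:
UPDATE STATISTICS member WITH NORECOMPUTE
-- Check whether a table has automatic updating enabled:
EXEC sp_autostats 'member'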
Lesson 3: Concepts – Query Optimization
What You Will Learn
After completing this lesson, you will be able to:
• Describe the phases of query optimization.
• Discuss how SQL Server estimates the selectivity of indexes and columns, and how these estimates are used in query optimization.

Recommended Reading
• Chapter 15: "The Query Processor", Inside SQL Server 2000 by Kalen Delaney
• Chapter 16: "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
• Whitepaper about the SQL Server query processor architecture by Hal Berenson and Kalen Delaney
http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm

Phases of Query Optimization
Query Optimization Involves Several Phases
Trivial Plan Optimization
Optimization itself goes through several steps. The first step is something called trivial plan optimization. The whole idea of trivial plan optimization is that cost-based optimization is expensive to run: the optimizer can try a great many possible variations in trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it can avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause; there is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is usable; there is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out early in the process and do not pay much of the compilation cost. This is a good thing, because the number of potential plans went up astronomically in SQL Server 7.0 as hash joins, merge joins, and index intersections were added to the list of processing techniques.

Simplification and Statistics Loading
If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding and other operations that do not require looking at the cost or analyzing what indexes are available, but that can result in a more efficient query. SQL Server then loads the metadata, including the statistics information on the indexes, and the optimizer goes through a series of phases of cost-based optimization.

Cost-Based Optimization Phases
The cost-based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases, each of which is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost-based optimizer; the rules were intentionally organized to try to make that be true. Such a plan will probably consist of using a single index and nested loops. However, every once in a while, because of a lack of statistical information or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer's output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5.
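Hedged examples of statements that plausibly qualify for a trivial plan; the member column list is assumed, and the INSERT is illustrative only (the real member table has additional NOT NULL columns):

-- An INSERT ... VALUES has exactly one possible plan:
INSERT INTO member (member_no, lastname, firstname)
VALUES (10001, 'Ota', 'Mike')
-- If a unique covering index on (lastname, firstname) is the only
-- usable index, this SELECT also has only one viable plan:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'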
Full Optimization
At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer has gone through all the preliminary phases and still has not found a cheap plan, it examines the cost of the plan that it has so far. If the cost is above the threshold, the optimizer goes into the full optimization phase. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. In this case, the full optimization phase assumes that the plan should be run in parallel. If the machine is very busy, the plan will end up running serially, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or the machine has only a single processor), the full optimization phase just uses a brute-force method to find a serial plan.
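For reference, the threshold mentioned above can be inspected and changed through sp_configure; a hedged sketch:

-- 'show advanced options' must be enabled to see this option:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Display the current value (the default is 5 cost units):
EXEC sp_configure 'cost threshold for parallelism'
-- Raise it so that only expensive plans are considered for parallelism:
EXEC sp_configure 'cost threshold for parallelism', 20
RECONFIGURE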
Selectivity Estimation
Selectivity Is One of the Most Important Pieces of Information
One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table, and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated.

Optimizing Search Arguments
In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful
If an indexed column is referenced in a search argument (SARG), the optimizer will analyze the cost of using that index. A SARG has one of the forms:
• column operator value
• value operator column
The operator must be one of =, >, >=, <, <=. The value can be a constant, an operation, or a variable. Some functions will also be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases:
select * from member where firstname < 'AKKG'
select * from member where firstname = substring('HAAKGALSFJA', 2, 5)
select * from member where firstname = 'AA' + 'KG'
declare @name char(4)
set @name = 'AKKG'
select * from member where firstname < @name
Not all functions can be used in SARGs:
select * from charge where charge_amt < 2*2
select * from charge where charge_amt < sqrt(16)
Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it is not going to actually perform the function's calculations.
A filter with a variable is usually a SARG; the issue is whether the optimizer can come up with useful costing information. A filter with a variable is not a SARG if the variable is of a different datatype, so that the column must be converted to the variable's datatype. For more information, see Knowledge Base article Q198625. For example:
Use credit
go
CREATE TABLE [member2] (
[member_no] [smallint] NOT NULL ,
[lastname] [shortstring] NOT NULL ,
[firstname] [shortstring] NOT NULL ,
[middleinitial] [letter] NULL ,
[street] [shortstring] NOT NULL ,
[city] [shortstring] NOT NULL ,
[state_prov] [statecode] NOT NULL ,
[country] [countrycode] NOT NULL ,
[mail_code] [mailcode] NOT NULL
)
GO
insert into member2
select member_no, lastname, firstname, middleinitial, street, city, state_prov, country, mail_code
from member
alter table member2 add constraint pk_member2 primary key clustered (lastname, member_no, firstname, country)
declare @id int
set @id = 47
update member2
set city = city + ' City', state_prov = state_prov + ' State'
where lastname = 'Barr'
and member_no = @id
and firstname = 'URQYJBFVRRPWKVW'
and country = 'USA'
Here member_no is a smallint column compared to an int variable, so the filter on member_no is not a SARG.
These queries do not have SARGs, and a table scan will be done:
select * from member where substring(lastname, 1, 2) = 'BA'
Some non-SARGs can be converted:
select * from member where lastname like 'ba%'
In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, you can rewrite the substring query above as the LIKE query that follows it.
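A hedged sketch of fixing the datatype mismatch in the member2 example above, so that the filter is a SARG again:

-- Declaring the variable with the column's own type avoids the implicit
-- conversion of member2.member_no (smallint) and keeps the filter a SARG:
declare @id smallint
set @id = 47
select * from member2 where member_no = @id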
Join Order and Types of Joins
Join Order and Strategy Is Determined by the Optimizer
The execution plan output displays the join order from top to bottom; that is, the table listed on top is the first one accessed in a join. You can override the optimizer's join order decision in two ways:
• OPTION (FORCE ORDER) applies to one query
• SET FORCEPLAN ON applies to the entire session, until set OFF
If either of these options is used, the join order is determined by the order in which the tables are listed in the query's FROM clause, and no join-order optimization is done.
Forcing the join order may force a particular join strategy. For example, in most outer join operations, the outer table is processed first, and a nested loops join is done. However, if you force the inner table to be accessed first, a merge join will need to be done. Compare the query plan for this query with and without the FORCE ORDER hint:
select * from titles right join publishers on titles.pub_id = publishers.pub_id -- OPTION (FORCE ORDER)

Nested Loop Join
A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps:
1. Finds a row from the first table.
2. Uses that row to scan the next table.
3. Uses the result of the previous table to scan the next table.

Evaluating Join Combinations
The query optimizer automatically evaluates at least four or more possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates. Evaluating every possible join combination is inefficient and costly.

Evaluating Cost of Query Performance
When the query optimizer performs a nested loop join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows. The query optimizer:
• Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
• Uses the smaller input as the outer table.
• Requires that a useful index exist on the join predicate for the inner table.
• Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.

Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares the input values.
3. Performs the merge algorithm.
    • If the input values are equal, the rows are returned.
    • If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only the rows that qualify.
Note Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.

Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and fewer page I/Os are required.

Evaluates Sorted Values
For the query optimizer to use a merge join, the inputs must be sorted. The query optimizer obtains sorted inputs in the following order of preference:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages the sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which case a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
• SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
• A merge join operation may be one-to-one, one-to-many, or many-to-many. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
• Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing balanced-tree (B-tree) indexes, merge join is often the fastest join algorithm.
• A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
• Merge join operations can only be performed with an equality operator in the join predicate.
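A hedged way to see these strategies side by side is to force each physical join type with a query hint and compare the resulting plans (member and charge are the course's sample tables):

-- Force each join strategy in turn and compare the estimated plans:
select m.member_no from member m
join charge c on m.member_no = c.member_no OPTION (LOOP JOIN)
select m.member_no from member m
join charge c on m.member_no = c.member_no OPTION (MERGE JOIN)
select m.member_no from member m
join charge c on m.member_no = c.member_no OPTION (HASH JOIN)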
Hash Joins
Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.
Note Duplicate data or ranges of data are not useful for hash joins, because the data is not organized together or in order.

When a Hash Join Is Used
The query optimizer uses the hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input
The query optimizer assigns a build and a probe input. If the query optimizer incorrectly assigns the build and probe inputs (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.
The build input consists of the column values from the table with the lower number of rows, and it is used to create a hash table in memory to store those values. A hash bucket is a storage place in the hash table in which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and contain only the columns that are needed for the query. The hash table contains the hash buckets and is created from the build input.
The probe input consists of the column values from the table with the most rows. The probe input is checked against the build input's hash table to find matches in the hash buckets.
Note The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join
The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive, because the algorithm is very complex. SQL Server:
1. Reads the probe input. Each probe input row is processed one at a time.
2. Performs the hash algorithm against each probe input row and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations
Consider the following facts about the hash joins that the query optimizer uses:
• Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
• Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
• A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department. In that case, the query optimizer uses only one input for both the build and probe roles.
• If the join inputs are large and of similar size, the performance of a hash join operation is similar to that of a merge join with prior sorting. However, if the sizes of the join inputs differ significantly, a hash join is often much faster.
• Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
    • Are not indexed (unless explicitly saved to disk and then indexed).
    • Are often not sorted for the next operation in the execution plan.
• The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
• A hash join reduces the need for database denormalization.
Denormalization is typically used to achieve better performance by reducing join operations, despite the dangers of redundancy, such as inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design instead. Vertical partitioning represents groups of columns from a single table in separate files or indexes.

Subquery Performance
Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.
Note Performance comparisons cannot be made based on I/Os alone. With hashing and merging techniques, the number of reads may be the same for two queries, yet one may take much longer and use more memory resources. Also, always be sure to monitor STATISTICS TIME.
Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.
-- JOIN solution
BEGIN TRAN
UPDATE OD SET discount = discount + 0.05
FROM [Order Details] AS OD
JOIN Products AS P ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN
-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05
WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN
-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details] SET discount = discount + 0.05
WHERE EXISTS (SELECT supplierid FROM Products
    WHERE [Order Details].productid = Products.productid AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied DISTINCT, which may result in a logical operator of SEMI JOIN. Compare the plans of the first two queries:
USE credit
select member_no from member where member_no in (select member_no from charge)
select distinct m.member_no from member m join charge c on m.member_no = c.member_no
The second query uses a HASH MATCH as the final step to remove the duplicates; the first query only has to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast). Another similar looking join is
Introduction
Many organizations use disk image cloning to perform mass rollouts of Windows. This technique involves copying the disks of a fully installed and configured Windows computer onto the disk drives of other computers. These other computers effectively appear to have been through the same install process and are immediately available for use.
While this method saves hours of work and hassle over other rollout approaches, it has the major problem that every cloned system has an identical Computer Security Identifier (SID). This fact compromises security in workgroup environments, and removable media security can also be compromised in networks with multiple identical computer SIDs.
Demand from the Windows community has led PowerQuest, Ghost Software, and Altiris to develop programs that can change a computer's SID after a system has been cloned. However, PowerQuest's SID Changer and Ghost Software's Ghost Walker are only sold as part of each company's high-end product. Further, they both run from a DOS command prompt (Altiris' changer is similar to NewSID).
NewSID is a program we developed that changes a computer's SID. It is free, comes with full source, and is a Win32 program, meaning that it can easily be run on systems that have been previously cloned. NewSID works on Windows NT 4, Windows 2000, Windows XP, and Windows .NET Server. Please read this entire article before you use this program.
Version Information: Version 4.0 introduces support for Windows XP and .NET Server, a wizard-style interface, the ability to specify the SID that you want applied, Registry compaction, and the option to rename a computer (which results in a change of both NetBIOS and DNS names). Version 3.02 corrects a bug where NewSID would not correctly copy default values with invalid value types when renaming a key with an old SID to a new SID. NT actually makes use of such invalid values at certain times in the SAM. The symptom of this bug was error messages reporting access denied.
