★ Why doesn't the filter work on <many-to-one>?

AACM 2006-09-15 11:11:39
Hibernate version: 3.0

Why doesn't the filter work on <many-to-one>?

All the tables in my database have a visibility flag, so queries should only retrieve objects that are not hidden. I'm successfully using Hibernate filters at the class level and at the collection level.

My problem is that I can't see any way to apply the same filter to a nested object that is declared by a many-to-one element in the mapping file.


<hibernate-mapping>
    <class name="User" table="USER">
        ....
        <many-to-one name="group"
                     class="Group"
                     column="GROUP_ID" />
        ....

        <filter name="visiblefilter" />
    </class>

    <class name="Group" table="GROUP">
        ....

        <filter name="visiblefilter" />
    </class>

    <filter-def name="visiblefilter">
        visible = '1'
    </filter-def>

</hibernate-mapping>


Session s = sessionFactory.openSession();
String stmt = "from User u where u.id= '1116'";
s.enableFilter("visiblefilter");

List list = s.createQuery(stmt).list();
User user = (User) list.get(0);
Group group = user.getGroup(); // <-------- here !
......

The filter doesn't work on group.

The generated SQL is
select ....
from GROUP g_
where g_.id = ?

but not

select ....
from GROUP g_
where g_.id = ? and visible = '1'
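
For comparison, a diagnostic sketch (not part of the original post, reusing the session s and the user from the snippet above): fetching the group through an HQL query in the same session should include the filter condition, since filters are applied to HQL queries, which narrows the problem to the many-to-one association path.

// Not from the original post: with "visiblefilter" still enabled on s,
// this query is expected to carry "and visible = '1'" in its SQL.
List groups = s.createQuery("from Group g where g.id = :gid")
               .setParameter("gid", user.getGroup().getId())
               .list();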

Any help is greatly appreciated.
16 replies
wmzsl 2006-09-15
I don't know.
tcmis 2006-09-15
Studying...
AACM 2006-09-15
To: Saro

An explicit query like that does work here, but in real development it isn't really practical, because this situation shows up in a lot of places. Fetching values that way is inconvenient, and it isn't very OO.

This problem is a real nuisance. I seem to remember the filter used to work here; I only noticed in the last couple of days that it doesn't, which is strange.
Saro 2006-09-15
I gave it a try; it really doesn't seem to work.

At first I thought it was a lazy-loading issue, but changing "from User u where u.id= '1116'" to "from User u join fetch u.group where u.id= '1116'" doesn't help either.....

For your requirement, just change the HQL to "from User u join fetch u.group g where u.id= '1116' and g.visible = 1"
and you're done, no? If you need to lazy load the many end, a moment's thought shows it's impossible to filter there.
Or maybe like this?

List list = s.createQuery(stmt).list();
User user = (User) list.get(0);
Group group = (Group) session.load(Group.class, user.getGroup().getId());

It's only a few more characters to type, and either way two SQL statements are sent.
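
A compact version of the join fetch variant described above, assuming the User/Group mapping from the original post and that visible is a mapped property of Group (the '1116' id is the one used in the thread); writing g.visible out in the HQL means the restriction holds whether or not the filter is enabled:

List users = s.createQuery(
        "from User u join fetch u.group g " +
        "where u.id = '1116' and g.visible = '1'")
    .list();
User user = (User) users.get(0);
Group group = user.getGroup();   // already initialized, and known to be visible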
AACM 2006-09-15
To: Saro

<class name="User" table="USER" where="isValid = 1" >

That should indeed still be supported, but I have another requirement: that value needs to be passed in at runtime. Sometimes it is isValid = '1' and sometimes isValid = '0'.

Help me think about it a bit more, mate.
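
If the value has to vary per session, the usual alternative to the static where attribute is a parameterized filter. A hedged sketch along the lines of the original visiblefilter (the parameter name validValue is made up here, and the isValid column comes from AACM's description, not from a verified mapping); note this only addresses passing the value in, it does not make the filter apply to the many-to-one navigation discussed above:

<filter-def name="visiblefilter">
    <filter-param name="validValue" type="string"/>
</filter-def>

<class name="User" table="USER">
    ....
    <filter name="visiblefilter" condition="isValid = :validValue"/>
</class>

Session s = sessionFactory.openSession();
s.enableFilter("visiblefilter").setParameter("validValue", "1");   // or "0"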
AACM 2006-09-15
You are all really kind. I asked on the hibernate.org forum and nobody replied, so I'm genuinely touched. Thanks, everyone! Lesson learned: no more posting in English. Back to the topic.

To: haoxiangni()
I already tried lazy = false; it doesn't work.

To: 村长
Region region = (Region) session.get(Region.class, "EMEA");
region.getOpportunities().size();
Fetched this way, the result is correct.

But going the other way,
opportunity.getRegion(), the filter doesn't seem to take effect.

Please take another look when you all have time.

Saro 2006-09-15
For your situation, wouldn't the following be more convenient? It has been there since Hibernate 2.x.

<class name="User" table="USER" where="isValid = 1" >


javapassion 2006-09-15
PS: OP, please don't post in English next time, heh. I honestly couldn't follow it; I had to run it through Kingsoft translation to get the rough meaning!
javapassion 2006-09-15
Only just saw this, and I have to head into a meeting in a bit, so here is an example of a many-to-one with a filter; OP, have a look at it in the meantime:

<filter-def name="accessLevel">
    <filter-param name="userLevel" type="int"/>
</filter-def>

<class name="Opportunity" ...>

    <many-to-one name="region" column="region_id" class="Region"/>
    <property name="amount" type="Money">
        <column name="amt"/>
        <column name="currency"/>
    </property>
    <property name="accessLevel" type="int" column="access_lvl"/>

    <filter name="accessLevel" condition=":userLevel >= access_lvl"/>
</class>

<class name="Region" ...>

    <set name="opportunities" lazy="true">
        <key column="region_id"/>
        <one-to-many class="Opportunity"/>
        <filter name="accessLevel" condition=":userLevel >= access_lvl"/>
    </set>
</class>
====================================================================
User user = ...;
Session session = ...;
session.enableFilter("accessLevel").setParameter("userLevel", user.getAccessLevel());
...
Region region = (Region) session.get(Region.class, "EMEA");
region.getOpportunities().size();
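
That last pair of lines exercises the collection direction (Region -> opportunities). The direction the original question is about is the opposite one, navigating the many-to-one; a hedged sketch of that test against the same mapping (the Opportunity identifier here is made up):

Opportunity opp = (Opportunity) session.get(Opportunity.class, new Integer(1));   // hypothetical id
Region region2 = opp.getRegion();
// per AACM's report in this thread, the enabled filter's condition does not
// appear in the SQL issued for this many-to-one navigation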
haoxiangni 2006-09-15
Add an attribute on the <many-to-one> tag: lazy="false".
gongzhy 2006-09-15
Just learning here; bump.
AACM 2006-09-15
Thanks for everyone's help.

村长, that doesn't seem to work either.

Let me restate my problem:
1. Every one of my tables has an isValid column that controls whether the record is valid. An invalid record is effectively deleted data.
2. User and Group have an n-to-1 relationship.

I set up the filter in each mapping file so that only isValid = '1' data is shown. The filter works fine everywhere else (<many-to-many> and so on are all OK); it only fails on the n-to-1 (<many-to-one>) relationship.

For example:
User user = (User) session.get(User.class, new Integer(1));
SQL behind the scenes: select .... from USER u_ where u_.id = ? and visible = '1' // correct
But: Group group = user.getGroup();
SQL behind the scenes: select .... from GROUP g_ where g_.id = ? // the filter is not applied

In other words, the "and visible = '1'" part is missing; the filter did not take effect.

Thanks!
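
One workaround sketch in the spirit of Saro's earlier suggestion, not something confirmed in this thread: since the enabled filter is applied to HQL queries but not to this association navigation, re-fetch the target through a query and treat a null result as "hidden". The helper name and the id-based lookup are assumptions for illustration:

// Hypothetical helper, assuming "visiblefilter" is already enabled on the session.
public Group getVisibleGroup(Session session, User user) {
    return (Group) session.createQuery("from Group g where g.id = :gid")
                          .setParameter("gid", user.getGroup().getId())
                          .uniqueResult();   // null if the group is filtered out
}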
javapassion 2006-09-15
PS to realyigo (可爱的小强强):
The Hibernate filter is a feature added in Hibernate 3. It lets you predefine filter criteria on a class or on a collection. A predefined filter condition works much like the where attribute: it defines a restriction that is applied to a class or to the elements of a collection.
javapassion 2006-09-15
My English is not good (I never passed CET-4), so bear with me.
My guess: AACM (大西) is using this Hibernate filter for security filtering. That is, a given user (User) has different permissions when accessing data, i.e. belongs to a different permission group (Group).

A many-to-one is indeed the right thing to use here. The definitions I would suggest are as follows:
Define the filter:
<filter-def name="accessLevel">
    <filter-param name="userLevel" type="int" />
</filter-def>

<class name="User" table="USER">
    ...
    <many-to-one name="group"
                 class="Group"
                 column="GROUP_ID" />
    ...
    <filter name="accessLevel" condition=":userLevel = group"/>
</class>
<class name="Group" table="GROUP">
    ...
    <filter name="accessLevel" condition=":userLevel = XXX (the corresponding column)"/>
    ...
</class>

DAO class:
Group group = ...;
Session session = ...;
session.enableFilter("accessLevel").setParameter("userLevel", group.getXXX()); // get the access level
User user = (User) session.get(User.class, new Integer(1));
user.getXXX(); // get the access level
realyigo 2006-09-15
What version of Hibernate are you using?
I can't find an enableFilter method on Session anywhere in the 2.1 API.
imA 2006-09-15
Sorry.
Contents Overview 1 Lesson 1: Index Concepts 3 Lesson 2: Concepts – Statistics 29 Lesson 3: Concepts – Query Optimization 37 Lesson 4: Information Collection and Analysis 61 Lesson 5: Formulating and Implementing Resolution 75 Module 6: Troubleshooting Query Performance Overview At the end of this module, you will be able to:  Describe the different types of indexes and how indexes can be used to improve performance.  Describe what statistics are used for and how they can help in optimizing query performance.  Describe how queries are optimized.  Analyze the information collected from various tools.  Formulate resolution to query performance problems. Lesson 1: Index Concepts Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables. What You Will Learn After completing this lesson, you will be able to:  Understand the structure of SQL Server indexes.  Describe how SQL Server uses indexes to find rows.  Describe how fillfactor can impact the performance of data retrieval and insertion.  Describe the different types of fragmentation that can occur within an index. Recommended Reading  Chapter 8: “Indexes”, Inside SQL Server 2000 by Kalen Delaney  Chapter 11: “Batches, Stored Procedures and Functions”, Inside SQL Server 2000 by Kalen Delaney Finding Rows without Indexes With No Indexes, A Table Must Be Scanned SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row will keep track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table’s extents, but every row still must be examined. General Index Structure All SQL Server Indexes Are Organized As B-Trees Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree’s usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout. Clustered and Nonclustered Indexes Both Index Types Have Many Common Features An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index. 
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table. An index can have multiple node levels An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row. Leaf Level contains all key values In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending. The sysindexes table contains all sizing, location and distribution information Any information about size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one. The Difference between Clustered and Nonclustered Indexes The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information. Clustered Indexes The Leaf Level of a Clustered Index Is the Data The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to. Uniqueness Is Maintained In Key Values In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows the section on indexes internal. . Finding Rows in a Clustered Index The Leaf Level of a Clustered Index Contains the Data A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can only be one clustered index for a table at a time. 
Performance Considerations Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we’ll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow you to have more index fit per page in all your indexes. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname = ‘Ota’ Nonclustered Indexes The Leaf Level of a Nonclustered Index Contains a Bookmark A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:  If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can be multiple column if the clustered index is composite, or is defined to be non-unique.  If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#. Finding Rows with a NC Index on a Heap Nonclustered Indexes Are Very Efficient When Searching For A Single Row After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient. Note The query corresponding to the slide is: SELECT lastname, firstname FROM member WHERE lastname BETWEEN ‘Master’ AND ‘Rudd’ Finding Rows with a NC Index on a Clustered Table A Clustered Key Is Used as the Bookmark for All Nonclustered Indexes If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows. Note The query corresponding to the slide is: SELECT lastname, firstname, phone FROM member WHERE firstname = ‘Mike’ Covering Indexes A Covering Index Provides the Fastest Data Access A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed. The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes. 
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particular valuable when we need ALL the key values of a particular nonclustered index. Example: Select an aggregate value of a column with a clustered index. Suppose we have a nonclustered index on price, this query is covered: SELECT avg(price) from titles Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered: SELECT title_id, price FROM titles WHERE price between 10 and 20 Performance Considerations In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a ‘mini’ clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoff to adding more, wider indexes for covering queries are the added disk space, and more overhead for updating those columns that are now part of the index. Bug In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079 Steps to reproduce 1. Make copy of orders table from Northwind: USE Northwind CREATE TABLE [NewOrders] ( [OrderID] [int] NOT NULL , [CustomerID] [nchar] (5) NULL , [EmployeeID] [int] NULL , [OrderDate] [datetime] NULL , [RequiredDate] [datetime] NULL , [ShippedDate] [datetime] NULL , [ShipVia] [int] NULL , [Freight] [money] NULL , [ShipName] [nvarchar] (40) NULL, [ShipAddress] [nvarchar] (60) , [ShipCity] [nvarchar] (15) NULL, [ShipRegion] [nvarchar] (15) NULL, [ShipPostalCode] [nvarchar] (10) NULL, [ShipCountry] [nvarchar] (15) NULL ) INSERT into NewOrders SELECT * FROM Orders 2. Build nc index on OrderDate: create index dateindex on neworders(orderdate) 3. Test Query by looking at query plan: select orderdate from NewOrders The index is being scanned, as expected. 4. Build an index on orderId: create index orderid_index on neworders(orderID) 5. Test Query by looking at query plan: select orderdate from NewOrders Now the TABLE is being scanned, instead of the original index! Index Intersection Multiple Indexes Can Be Used On A Single Table In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets, with the cost of only using one index, and then scanning the entire result set from that single index. Fillfactor and Performance Creating an Index with a Low Fillfactor Delays Page Splits when Inserting DBCC SHOWCONTIG will show you a low value for “Avg. Page Density” when a low fillfactor has been specified. 
This is good for inserts and updates, because it will delay the need to split pages to make room for new rows. It can be bad for scans, because fewer rows will be on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good. Index Reorganization DBCC SHOWCONTIG Provides Lots of Information Here’s some sample output from running a basic DBCC SHOWCONTIG on the order details table in the Northwind database: DBCC SHOWCONTIG scanning 'Order Details' table... Table: 'Order Details' (325576198); index ID: 1, database ID:6 TABLE level scan performed. - Pages Scanned................................: 9 - Extents Scanned..............................: 6 - Extent Switches..............................: 5 - Avg. Pages per Extent........................: 1.5 - Scan Density [Best Count:Actual Count].......: 33.33% [2:6] - Logical Scan Fragmentation ..................: 0.00% - Extent Scan Fragmentation ...................: 16.67% - Avg. Bytes Free per Page.....................: 673.2 - Avg. Page Density (full).....................: 91.68% By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:  Average number of bytes free on each page (Avg. Bytes Free per Page)  Number of pages accessed (Pages scanned)  Number of extents accessed (Extents scanned)  Number of times a page had a lower page number than the previous page in the scan (This value for Out of order pages is not displayed, but is used for additional computations.)  Number of times a page in the scan was on a different extent than the previous page in the scan (Extent switches) SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG’s output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is ¼, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is ¾, or 75 percent. The maximum number of gaps is the number of extents - 1, so Extent Scan Fragmentation can never be 100 percent. The value in DBCC SHOWCONTIG’s output called Logical Scan Fragmentation is computed by dividing the number of Out of order pages by the number of pages in the table. This value is meaningless in a heap. You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values. SQL Server 2000 allows online defragmentation You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you. 
To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable. SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, but using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster. DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed. The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running. Let us look at an example in which an index's leaf level consists of the following pages in the following logical order: 47 22 83 32 12 90 64 The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this: 12 22 83 32 47 90 64 The next physical page is 22, which is also the next logical page, so no work would be necessary. 
DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83: 12 22 32 83 47 90 64 After the next swap of 47 with 83, the leaf level would look like this: 12 22 32 47 83 90 64 Then, the defragmentation process would swap 64 with 83: 12 22 32 47 64 90 83 and 83 with 90: 12 22 32 47 64 83 90 At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process. If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel—that is, using multiple processors—which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses, allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation. Indexes on Views and Computed Columns Building an Index Gives the Data Physical Existence Normally, views are only logical and the rows comprising the view’s data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been build, and the view’s data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values. Common Prerequisites for Indexed Views and Indexes on Computed Columns In order for SQL Server to create use these special indexes, you must have the seven SET options correctly specified: ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, ANSI_WARNING must be all ON NUMERIC_ROUNDABORT must be OFF Only deterministic expressions can be used in the definition of Indexed Views or indexes on Computed Columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check if a column or view meets the requirements and is indexable. 
SELECT OBJECTPROPERTY (Object_id, ‘IsIndexable’) SELECT COLUMNPROPERTY (Object_id, column_name , ‘IsIndexable’ ) Schema Binding Guarantees That Object Definition Won’t Change A view can only be indexed if it has been built with schema binding. The SQL Server Optimizer Determines If the Indexed View Can Be Used The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced. Examples of Indexed Views: The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how the useful indexed views may be created for these two major groups of queries. The considerations are valid also for queries and indexed views using both joins and aggregations. -- Example: USE Northwind -- Identify 5 products with overall biggest discount total. -- This may be expressed for example by two different queries: -- Q1. select TOP 5 ProductID, SUM(UnitPrice*Quantity)- SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate from [order details] group by ProductID order by Rebate desc --Q2. select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate from [order details] group by ProductID order by Rebate desc --The following indexed view will be used to execute Q1. create view Vdiscount1 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount1 (ProductID) However, it will not be used by the Q2 because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view create view Vdiscount2 with schemabinding as select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, SUM(UnitPrice*Quantity*Discount) SumDiscoutPrice2, COUNT_BIG(*) Count, ProductID from dbo.[order details] group By ProductID create unique clustered index VDiscountInd on Vdiscount2 (ProductID) This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries. The following query asking for the order with the largest total discount -- Q3. select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate from dbo.[order details] group By OrderID Q3 can use neither of the Vdiscount views because the column OrderID is not included in the view definition. To address this variation of the discount analysis query we may create a different indexed view, similar to the query itself. An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 can take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the OrderID, ProductID combination is unique in the original order details table the resulting view would have as many rows as the original table and we would see no savings in using such view compared to using the original table. Consider the size of the resulting indexed view. 
In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table. Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in the index view definition. However, SQL Server may use an indexed view to execute a query containing AVG aggregate. Query containing STDEV or VARIANCE cannot use indexed view to pre-compute these values. The next example shows a query producing the average price for a particular product -- Q4. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID group by ProductName, od.ProductID This is an example of indexed view that will be considered by the SQL Server to answer the Q4 create view v3 with schemabinding as select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units from dbo.[order details] od group by od.ProductID go create UNIQUE CLUSTERED index iv3 on v3 (ProductID) go Observe that the view definition does not contain the table Products. The indexed view does not need to contain all tables used in the query that uses the indexed view. In addition, the following query (same as above Q4 only with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition. -- Q5. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and p.ProductName like '%tofu%' group by ProductName, od.ProductID The following query cannot use the indexed view because the added search condition od.UnitPrice>10 contains a column from the table in the view definition and the column is neither grouping column nor the predicate appears in the view definition. -- Q6. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID To contrast the Q6 case, the following query will use the indexed view v3 since the added predicate is on the grouping column of the view v3. -- Q7. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.ProductID in (1,2,13,41) group by ProductName, od.ProductID -- The previous query Q6 will use the following indexed view V4: create view V4 with schemabinding as select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count from dbo.[order details] od, dbo.Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID create unique clustered index VDiscountInd on V4 (ProductName, ProductID) The same index on the view V4 will be used also for a query where a join to the table Orders is added, for example -- Q8. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 group by ProductName, od.ProductID We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4. -- Q8a. 
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>25 group by ProductName, od.ProductID 8a cannot use the indexed view because of the where clause mismatch. Observe that table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view because the added predicate may eliminate additional rows participating in the aggregates as it is shown in Q8b. -- Q8b. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 and o.OrderDate>'01/01/1998' group by ProductName, od.ProductID Locking and Indexes In General, You Should Let SQL Server Control the Locking within Indexes The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Since these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, Allow Rowlocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed. As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks Setting vs. Querying In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function. Lesson 2: Concepts – Statistics Statistics are the most important tool that the SQL Server query optimizer has to determine the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format that all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data. What You Will Learn After completing this lesson, you will be able to:  Define terms related to statistics collected by SQL Server.  Describe how statistics are maintained by SQL Server.  Discuss the autostats feature of SQL Server.  Describe how statistics are used in query optimization. Recommended Reading  Statistics Used by the Query Optimizer in Microsoft SQL Server 2000 http://msdn.microsoft.com/library/techart/statquery.htm Definitions Cardinality The cardinality means how many unique values exist in the data. 
Density For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density —by definition, each index entry can point to only one row. A unique index has a density value of 1/number of rows in the table. Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page doesn’t need to be reaccessed is when the occasional coincidence occurs in which two adjacent index entries happen to point to the same data page. In general, you can think of density as the average number of duplicates. We can also talk about the term ‘join density’, which applies to the average number of duplicates in the foreign key column. This would answer the question: in this one-to-many relationship, how many is ‘many’? Selectivity In general selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter, and a low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all. Statistics Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself. The data row merely contains a pointer to the image data. For simplicity’s sake, we’ll talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes. During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a columns in your WHERE clause, assesses the index’s usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all columns helps the optimizer decide how useful the index is for joins. 
Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) the first column, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. Moreover, if you examine three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column. The lead column density is always needed. Statistics Maintenance Statistics Information Tracks the Distribution of Key Values SQL Server statistics is basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:  The time of the last statistics collection  The number of rows used to produce the histogram and density information  The average key length  Densities for other combinations of columns In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY of the range. DBCC SHOW_STATISTICS The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:  DISTINCT_RANGE_ROWS—the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself. This is computed as 1/RANGE_DENSITY.  AVG_RANGE_ROWS—the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS. In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server can’t use an index to actually locate the values. Statistics on Columns Column statistics can be useful for two main purposes  When the SQL Server optimizer is determining the optimal join order, it frequently is best to have the smaller input processed first. By ‘input’ we mean table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will quality, and those the resulting input will be very small.  The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine if scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the first name column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query: select * from member where firstname = 'MPRO' In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table). 
The small number of bookmarks will then be followed to retrieve the actual data. Manually Updating Statistics You can also manually force statistics to be updated in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or column statistics, or you can also execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database. You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns and columns of the ntext, text, or image datatypes, and columns that already have statistics or are the first column of an index. Autostats By Default SQL Server Will Update Statistics on Any Index or Column as Needed Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:  UPDATE STATISTICS In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates.  sp_autostats This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated. ' However, setting the database option auto update statistics to FALSE overrides any individual table settings. In other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates or that the performance overhead is more than you can afford. Trace Flags Trace flag 205 – reports recompile due to autostats. Trace flag 8721 – writes information to the errorlog when AutoStats has been run. For more information, see the following Knowledge Base article: Q195565 “INF: How SQL Server 7.0 Autostats Work.” Statistics and Performance The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database. Lesson 3: Concepts – Query Optimization What You Will Learn After completing this lesson, you will be able to:  Describe the phases of query optimization.  Discuss how SQL Server estimates the selectivity of indexes and column and how this estimate is used in query optimization. Recommended Reading  Chapter 15: “The Query Processor”, Inside SQL Server 2000 by Kalen Delaney  Chapter 16: “Query Tuning”, Inside SQL Server 2000 by Kalen Delaney  Whitepaper about SQL Server Query Processor Architecture by Hal Berenson and Kalen Delaney http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm Phases of Query Optimization Query Optimization Involves several phases Trivial Plan Optimization Optimization itself goes through several steps. The first step is something called Trivial Plan Optimization. 
The whole idea of trivial plan optimization is that cost based optimization is a bit expensive to run. The optimizer can try a great many possible variations trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it could avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause. There is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is useable. There is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out earlier in the process and do not pay a lot of the compilation cost. This is a good thing, because the number of potential plans in 7.0 went up astronomically as SQL Server added hash joins, merge joins and index intersections, to its list of processing techniques. Simplification and Statistics Loading If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding, and other operations that do not require looking at the cost or analyzing what indexes are, but that can result in a more efficient query. SQL Server then loads up the metadata including the statistics information on the indexes, and then the optimizer goes through a series of phases of cost based optimization. Cost Based Optimization Phases The cost based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases. Each phase is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost based optimizer. The rules were intentionally organized to try to make that be true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of lack of statistical information, or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer’s output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5. 
Full Optimization At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases, and still has not found a cheap plan, it examines the cost for the plan that it has so far. If the cost is above the threshold, the optimizer goes into a phase called full optimization. This threshold is configurable, as the configuration option ‘cost threshold for parallelism’. The full optimization phase assumes that this plan should be run this in parallel. If the machine is very busy, the plan will end up running it in serial, but the optimizer has a goal to produce a good parallel. If the cost is below the threshold (or a single processor machine), the full optimization phase just uses a brute force method to find a serial plan. Selectivity Estimation Selectivity Is One of The Most Important Pieces of Information One of the most import things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table, and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated. Optimizing Search Arguments In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful If an indexed column is referenced in a Search Argument (SARG), the optimizer will analyze the cost of using that index. A SARG has the form:  column value  value column  Operator must be one of =, >, >= <, <= The value can be a constant, an operation, or a variable. Some functions also will be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases: select * from member where firstname < 'AKKG' select * from member where firstname = substring('HAAKGALSFJA', 2,5) select * from member where firstname = 'AA' + 'KG' declare @name char(4) set @name = 'AKKG' select * from member where firstname < @name Not all functions can be used in SARGs. select * from charge where charge_amt < 2*2 select * from charge where charge_amt < sqrt(16) Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it’s not going to actually perform the function’s calculations. A filter with a variable is usually a SARG The issue is, can the optimizer come up with useful costing information? 
A filter with a variable is not a SARG if the variable is of a different datatype and the column must be converted to the variable's datatype. For more information, see Knowledge Base article Q198625.

Use credit
go

CREATE TABLE [member2] (
    [member_no] [smallint] NOT NULL ,
    [lastname] [shortstring] NOT NULL ,
    [firstname] [shortstring] NOT NULL ,
    [middleinitial] [letter] NULL ,
    [street] [shortstring] NOT NULL ,
    [city] [shortstring] NOT NULL ,
    [state_prov] [statecode] NOT NULL ,
    [country] [countrycode] NOT NULL ,
    [mail_code] [mailcode] NOT NULL
)
GO

insert into member2
select member_no, lastname, firstname, middleinitial,
       street, city, state_prov, country, mail_code
from member

alter table member2
add constraint pk_member2 primary key clustered
    (lastname, member_no, firstname, country)

declare @id int
set @id = 47
update member2
set city = city + ' City', state_prov = state_prov + ' State'
where lastname = 'Barr'
  and member_no = @id
  and firstname = 'URQYJBFVRRPWKVW'
  and country = 'USA'

This query doesn't have a SARG, and a table scan will be done:

select * from member where substring(lastname, 1, 2) = 'BA'

Some non-SARGs can be converted:

select * from member where lastname like 'ba%'

In some cases you can rewrite your query to turn a non-SARG into a SARG; for example, the substring query above can be rewritten as the LIKE query that follows it.

Join Order and Types of Joins

Join Order and Strategy Is Determined by the Optimizer

The execution plan output displays the join order from top to bottom; that is, the table listed on top is the first one accessed in the join. You can override the optimizer's join order decision in two ways:

- OPTION (FORCE ORDER) applies to one query
- SET FORCEPLAN ON applies to the entire session, until set OFF

If either of these options is used, the join order is taken from the order in which the tables are listed in the query's FROM clause, and no optimization of the join order is done. Forcing the join order may also force a particular join strategy. For example, in most outer join operations the outer table is processed first and a nested loops join is done; however, if you force the inner table to be accessed first, a merge join will need to be done. Compare the query plan for this query with and without the FORCE ORDER hint:

select *
from titles
right join publishers on titles.pub_id = publishers.pub_id
-- OPTION (FORCE ORDER)

Nested Loop Join

A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps:

1. Finds a row from the first table.
2. Uses that row to scan the next table.
3. Uses the result of the previous table to scan the next table.

Evaluating Join Combinations

The query optimizer automatically evaluates at least four possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates; evaluating every possible join combination would be inefficient and costly.

Evaluating Cost of Query Performance

When the query optimizer performs a nested loop join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows. The query optimizer:

- Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
- Uses the smaller input as the outer table.
- Requires that a useful index exist on the join predicate for the inner table.
- Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.
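As a sketch of the criteria above, the following query shape, a small filtered outer input probing a large indexed inner table, is the classic nested loop case. The column names on charge are assumptions made for illustration, and the commented hint is there only so the strategy can be forced when comparing plans.

-- One member (small outer input) joined to that member's charges
-- (inner table assumed to be indexed on member_no).
select m.member_no, c.charge_dt, c.charge_amt
from member m
join charge c on c.member_no = m.member_no
where m.member_no = 9001
-- option (loop join)   -- uncomment to insist on a nested loop join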
Merge Joins

The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:

1. Gets the first input values from each input set.
2. Compares input values.
3. Performs a merge algorithm:
   - If the input values are equal, the rows are returned.
   - If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.

Note: Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated; the remaining values from the other input are not processed.

Requires That Joined Columns Are Sorted

If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires fewer page I/Os.

Evaluates Sorted Values

For the query optimizer to use the merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:

1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which case a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations

Consider the following facts about the query optimizer's use of the merge join:

- SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
- A merge join operation may be a one-to-one, one-to-many, or many-to-many operation. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
- Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
- A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
- Merge join operations can only be performed with an equality operator in the join predicate.
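Before turning to hash joins, here is a quick sketch of requesting a merge join explicitly so its plan can be compared against the optimizer's own choice. Forcing a join strategy this way is for experimentation only; as noted above, the optimizer picks a merge join on its own when both inputs arrive sorted on the join column.

-- Request a merge join between member and charge (credit database).
select m.member_no, c.charge_amt
from member m
inner merge join charge c on m.member_no = c.member_no

-- The same request expressed as a query-level hint.
select m.member_no, c.charge_amt
from member m
join charge c on m.member_no = c.member_no
option (merge join)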
Hash Joins

Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.

Note: Duplicate data or ranges of data are not useful for hash joins because the data is not organized together or in order.

When a Hash Join Is Used

The query optimizer uses a hash join option when it estimates that it is more efficient than processing queries by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input

The query optimizer assigns a build and probe input. If the query optimizer incorrectly assigns the build and probe input (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.

Build input consists of the column values from the table with the lowest number of rows. Build input creates a hash table in memory to store these values. The hash bucket is a storage place in the hash table in which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and only contain the columns that are needed for the query. A hash table contains hash buckets; the hash table is created from the build input.

Probe input consists of the column values from the table with the most rows. Probe input is what the build input checks to find a match in the hash buckets.

Note: The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join

The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive because the algorithm is very complex. SQL Server:

1. Reads the probe input. Each probe input is processed one row at a time.
2. Performs the hash algorithm against each probe input and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations

Consider the following facts about the hash joins that the query optimizer uses:

- Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
- Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
- A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department. In these cases the query optimizer uses only one input for both the build and probe roles.
- If join inputs are large and are of similar size, the performance of a hash join operation is similar to a merge join with prior sorting. However, if the size of the join inputs is significantly different, the performance of a hash join is often much faster.
- Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
  - Are not indexed (unless explicitly saved to disk and then indexed).
  - Are often not sorted for the next operation in the execution plan.
- The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
- A hash join reduces the need for database denormalization.

Denormalization is typically used to achieve better performance by reducing join operations, despite the redundancy it introduces and the risk of problems such as inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.
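To close out the hash join discussion, here is a small sketch along the same lines: a hash join requested explicitly for plan comparison, and a hash aggregate of the kind mentioned above (SUM ... GROUP BY), where a single input serves both the build and probe roles. The hints are for experimentation only.

-- Request a hash join between member and charge (credit database).
select m.member_no, c.charge_amt
from member m
inner hash join charge c on m.member_no = c.member_no

-- A hash operator grouping and aggregating in one pass.
select member_no, sum(charge_amt) as total_charges
from charge
group by member_no
option (hash group)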
Subquery Performance

Joins Are Not Inherently Better Than Subqueries

Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.

Note: Performance comparisons cannot be made based on I/Os alone. With hashing and merging techniques, the number of reads may be the same for two queries, yet one may take much longer and use more memory resources. Always be sure to monitor statistics time as well.

Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.

-- JOIN solution
BEGIN TRAN
UPDATE OD
SET discount = discount + 0.05
FROM [Order Details] AS OD
JOIN Products AS P ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN

-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE productid IN
    (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN

-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE EXISTS
    (SELECT supplierid
     FROM Products
     WHERE [Order Details].productid = Products.productid
       AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten

SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied DISTINCT, which may result in a logical operator of SEMI JOIN. Compare the plans of the first two queries:

USE credit

select member_no
from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m
join charge c on m.member_no = c.member_no

The second query uses a HASH MATCH as the final step to remove the duplicates, whereas the first query only has to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast).
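The Note above recommends watching statistics time as well as I/O when comparing alternatives like these. A minimal way to capture both measures for the two credit queries just shown:

set statistics io on
set statistics time on

select member_no
from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m
join charge c on m.member_no = c.member_no

set statistics io off
set statistics time off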
Setting this value to 1 ; when the session.gc_divisor value is 100 will give you approximately a 1% chance ; the gc will run on any give request. Increasing this value to 1000 will give you ; a 0.1% chance the gc will run on any give request. For high volume production servers, ; this is a more efficient approach. ; Default Value: 100 ; Development Value: 1000 ; Production Value: 1000 ; http://php.net/session.gc-divisor session.gc_divisor = 1000 ; After this number of seconds, stored data will be seen as 'garbage' and ; cleaned up by the garbage collection process. ; http://php.net/session.gc-maxlifetime session.gc_maxlifetime = 1440 ; NOTE: If you are using the subdirectory option for storing session files ; (see session.save_path above), then garbage collection does *not* ; happen automatically. You will need to do your own garbage ; collection through a shell script, cron entry, or some other method. ; For example, the following script would is the equivalent of ; setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes): ; find /path/to/sessions -cmin +24 -type f | xargs rm ; Check HTTP Referer to invalidate externally stored URLs containing ids. ; HTTP_REFERER has to contain this substring for the session to be ; considered as valid. ; http://php.net/session.referer-check session.referer_check = ; How many bytes to read from the file. ; http://php.net/session.entropy-length ;session.entropy_length = 32 ; Specified here to create the session id. ; http://php.net/session.entropy-file ; Defaults to /dev/urandom ; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom ; If neither are found at compile time, the default is no entropy file. ; On windows, setting the entropy_length setting will activate the ; Windows random source (using the CryptoAPI) ;session.entropy_file = /dev/urandom ; Set to {nocache,private,public,} to determine HTTP caching aspects ; or leave this empty to avoid sending anti-caching headers. ; http://php.net/session.cache-limiter session.cache_limiter = nocache ; Document expires after n minutes. ; http://php.net/session.cache-expire session.cache_expire = 180 ; trans sid support is disabled by default. ; Use of trans sid may risk your users security. ; Use this option with caution. ; - User may send URL contains active session ID ; to other person via. email/irc/etc. ; - URL that contains active session ID may be stored ; in publicly accessible computer. ; - User may access your site with the same session ID ; always using URL stored in browser's history or bookmarks. ; http://php.net/session.use-trans-sid session.use_trans_sid = 0 ; Select a hash function for use in generating session ids. ; Possible Values ; 0 (MD5 128 bits) ; 1 (SHA-1 160 bits) ; This option may also be set to the name of any hash function supported by ; the hash extension. A list of available hashes is returned by the hash_algos() ; function. ; http://php.net/session.hash-function session.hash_function = 0 ; Define how many bits are stored in each character when converting ; the binary hash data to something readable. ; Possible values: ; 4 (4 bits: 0-9, a-f) ; 5 (5 bits: 0-9, a-v) ; 6 (6 bits: 0-9, a-z, A-Z, "-", ",") ; Default Value: 4 ; Development Value: 5 ; Production Value: 5 ; http://php.net/session.hash-bits-per-character session.hash_bits_per_character = 5 ; The URL rewriter will look for URLs in a defined set of HTML tags. 
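A small sketch of the garbage-collection arithmetic described above; the percentages in the comments assume the production values shown here (1/1000):

<?php
// Chance that session GC runs on a given request: gc_probability / gc_divisor.
$probability = (int) ini_get('session.gc_probability'); // 1
$divisor     = (int) ini_get('session.gc_divisor');     // 1000
printf("GC chance per request: %.3f%%\n", 100 * $probability / $divisor); // 0.100%

// Session data older than session.gc_maxlifetime (1440 s = 24 minutes) becomes
// eligible for cleanup whenever the collector actually runs.
echo ini_get('session.gc_maxlifetime'), " seconds\n";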
; form/fieldset are special; if you include them here, the rewriter will ; add a hidden field with the info which is otherwise appended ; to URLs. If you want XHTML conformity, remove the form entry. ; Note that all valid entries require a "=", even if no value follows. ; Default Value: "a=href,area=href,frame=src,form=,fieldset=" ; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; http://php.net/url-rewriter.tags url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" ; Enable upload progress tracking in $_SESSION ; Default Value: On ; Development Value: On ; Production Value: On ; http://php.net/session.upload-progress.enabled ;session.upload_progress.enabled = On ; Cleanup the progress information as soon as all POST data has been read ; (i.e. upload completed). ; Default Value: On ; Development Value: On ; Production Value: On ; http://php.net/session.upload-progress.cleanup ;session.upload_progress.cleanup = On ; A prefix used for the upload progress key in $_SESSION ; Default Value: "upload_progress_" ; Development Value: "upload_progress_" ; Production Value: "upload_progress_" ; http://php.net/session.upload-progress.prefix ;session.upload_progress.prefix = "upload_progress_" ; The index name (concatenated with the prefix) in $_SESSION ; containing the upload progress information ; Default Value: "PHP_SESSION_UPLOAD_PROGRESS" ; Development Value: "PHP_SESSION_UPLOAD_PROGRESS" ; Production Value: "PHP_SESSION_UPLOAD_PROGRESS" ; http://php.net/session.upload-progress.name ;session.upload_progress.name = "PHP_SESSION_UPLOAD_PROGRESS" ; How frequently the upload progress should be updated. ; Given either in percentages (per-file), or in bytes ; Default Value: "1%" ; Development Value: "1%" ; Production Value: "1%" ; http://php.net/session.upload-progress.freq ;session.upload_progress.freq = "1%" ; The minimum delay between updates, in seconds ; Default Value: 1 ; Development Value: 1 ; Production Value: 1 ; http://php.net/session.upload-progress.min-freq ;session.upload_progress.min_freq = "1" [MSSQL] ; Allow or prevent persistent links. mssql.allow_persistent = On ; Maximum number of persistent links. -1 means no limit. mssql.max_persistent = -1 ; Maximum number of links (persistent+non persistent). -1 means no limit. mssql.max_links = -1 ; Minimum error severity to display. mssql.min_error_severity = 10 ; Minimum message severity to display. mssql.min_message_severity = 10 ; Compatibility mode with old versions of PHP 3.0. mssql.compatability_mode = Off ; Connect timeout ;mssql.connect_timeout = 5 ; Query timeout ;mssql.timeout = 60 ; Valid range 0 - 2147483647. Default = 4096. ;mssql.textlimit = 4096 ; Valid range 0 - 2147483647. Default = 4096. ;mssql.textsize = 4096 ; Limits the number of records in each batch. 0 = all records in one batch. ;mssql.batchsize = 0 ; Specify how datetime and datetim4 columns are returned ; On => Returns data converted to SQL server settings ; Off => Returns values as YYYY-MM-DD hh:mm:ss ;mssql.datetimeconvert = On ; Use NT authentication when connecting to the server mssql.secure_connection = Off ; Specify max number of processes. -1 = library default ; msdlib defaults to 25 ; FreeTDS defaults to 4096 ;mssql.max_procs = -1 ; Specify client character set. 
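Returning to the session.upload_progress.* directives listed above, a minimal sketch of how a script could read the progress entry while an upload is running; the array keys follow the structure PHP documents for this feature, and the polling request is assumed to POST the tracking field:

<?php
session_start();

// The tracking key is session.upload_progress.prefix plus the value of the
// form field named after session.upload_progress.name.
$field = ini_get('session.upload_progress.name');
if (isset($_POST[$field])) {
    $key = ini_get('session.upload_progress.prefix') . $_POST[$field];
    if (!empty($_SESSION[$key])) {
        $info = $_SESSION[$key]; // contains content_length, bytes_processed, done, files, ...
        printf("%.1f%% uploaded\n", 100 * $info['bytes_processed'] / $info['content_length']);
    }
}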
; If empty or not set the client charset from freetds.conf is used ; This is only used when compiled with FreeTDS ;mssql.charset = "ISO-8859-1" [Assertion] ; Assert(expr); active by default. ; http://php.net/assert.active ;assert.active = On ; Issue a PHP warning for each failed assertion. ; http://php.net/assert.warning ;assert.warning = On ; Don't bail out by default. ; http://php.net/assert.bail ;assert.bail = Off ; User-function to be called if an assertion fails. ; http://php.net/assert.callback ;assert.callback = 0 ; Eval the expression with current error_reporting(). Set to true if you want ; error_reporting(0) around the eval(). ; http://php.net/assert.quiet-eval ;assert.quiet_eval = 0 [COM] ; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs ; http://php.net/com.typelib-file ;com.typelib_file = ; allow Distributed-COM calls ; http://php.net/com.allow-dcom ;com.allow_dcom = true ; autoregister constants of a components typlib on com_load() ; http://php.net/com.autoregister-typelib ;com.autoregister_typelib = true ; register constants casesensitive ; http://php.net/com.autoregister-casesensitive ;com.autoregister_casesensitive = false ; show warnings on duplicate constant registrations ; http://php.net/com.autoregister-verbose ;com.autoregister_verbose = true ; The default character set code-page to use when passing strings to and from COM objects. ; Default: system ANSI code page ;com.code_page= [mbstring] ; language for internal character representation. ; http://php.net/mbstring.language ;mbstring.language = Japanese ; internal/script encoding. ; Some encoding cannot work as internal encoding. ; (e.g. SJIS, BIG5, ISO-2022-*) ; http://php.net/mbstring.internal-encoding ;mbstring.internal_encoding = EUC-JP ; http input encoding. ; http://php.net/mbstring.http-input ;mbstring.http_input = auto ; http output encoding. mb_output_handler must be ; registered as output buffer to function ; http://php.net/mbstring.http-output ;mbstring.http_output = SJIS ; enable automatic encoding translation according to ; mbstring.internal_encoding setting. Input chars are ; converted to internal encoding by setting this to On. ; Note: Do _not_ use automatic encoding translation for ; portable libs/applications. ; http://php.net/mbstring.encoding-translation ;mbstring.encoding_translation = Off ; automatic encoding detection order. ; auto means ; http://php.net/mbstring.detect-order ;mbstring.detect_order = auto ; substitute_character used when character cannot be converted ; one from another ; http://php.net/mbstring.substitute-character ;mbstring.substitute_character = none; ; overload(replace) single byte functions by mbstring functions. ; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(), ; etc. Possible values are 0,1,2,4 or combination of them. ; For example, 7 for overload everything. ; 0: No overload ; 1: Overload mail() function ; 2: Overload str*() functions ; 4: Overload ereg*() functions ; http://php.net/mbstring.func-overload ;mbstring.func_overload = 0 ; enable strict encoding detection. ;mbstring.strict_detection = Off ; This directive specifies the regex pattern of content types for which mb_output_handler() ; is activated. ; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\+xml) ;mbstring.http_output_conv_mimetype= [gd] ; Tell the jpeg decode to ignore warnings and try to create ; a gd image. 
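As a quick illustration of the mbstring directives above (internal encoding and detection order), a sketch that detects and converts an incoming string; the sample encodings are arbitrary choices, not values required by this configuration:

<?php
// Runtime equivalents of mbstring.internal_encoding and mbstring.detect_order.
mb_internal_encoding('UTF-8');
mb_detect_order(array('UTF-8', 'EUC-JP', 'SJIS'));

$raw  = file_get_contents('php://input'); // some externally supplied text
$from = mb_detect_encoding($raw, mb_detect_order(), true);
if ($from !== false) {
    $text = mb_convert_encoding($raw, mb_internal_encoding(), $from);
    echo mb_strlen($text), " characters\n";
}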
The warning will then be displayed as notices ; disabled by default ; http://php.net/gd.jpeg-ignore-warning ;gd.jpeg_ignore_warning = 0 [exif] ; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS. ; With mbstring support this will automatically be converted into the encoding ; given by corresponding encode setting. When empty mbstring.internal_encoding ; is used. For the decode settings you can distinguish between motorola and ; intel byte order. A decode setting cannot be empty. ; http://php.net/exif.encode-unicode ;exif.encode_unicode = ISO-8859-15 ; http://php.net/exif.decode-unicode-motorola ;exif.decode_unicode_motorola = UCS-2BE ; http://php.net/exif.decode-unicode-intel ;exif.decode_unicode_intel = UCS-2LE ; http://php.net/exif.encode-jis ;exif.encode_jis = ; http://php.net/exif.decode-jis-motorola ;exif.decode_jis_motorola = JIS ; http://php.net/exif.decode-jis-intel ;exif.decode_jis_intel = JIS [Tidy] ; The path to a default tidy configuration file to use when using tidy ; http://php.net/tidy.default-config ;tidy.default_config = /usr/local/lib/php/default.tcfg ; Should tidy clean and repair output automatically? ; WARNING: Do not use this option if you are generating non-html content ; such as dynamic images ; http://php.net/tidy.clean-output tidy.clean_output = Off [soap] ; Enables or disables WSDL caching feature. ; http://php.net/soap.wsdl-cache-enabled soap.wsdl_cache_enabled=1 ; Sets the directory name where SOAP extension will put cache files. ; http://php.net/soap.wsdl-cache-dir soap.wsdl_cache_dir="/tmp" ; (time to live) Sets the number of second while cached file will be used ; instead of original one. ; http://php.net/soap.wsdl-cache-ttl soap.wsdl_cache_ttl=86400 ; Sets the size of the cache limit. (Max. number of WSDL files to cache) soap.wsdl_cache_limit = 5 [sysvshm] ; A default size of the shared memory segment ;sysvshm.init_mem = 10000 [ldap] ; Sets the maximum number of open links or -1 for unlimited. ldap.max_links = -1 [mcrypt] ; For more information about mcrypt settings see http://php.net/mcrypt-module-open ; Directory where to load mcrypt algorithms ; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt) ;mcrypt.algorithms_dir= ; Directory where to load mcrypt modes ; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt) ;mcrypt.modes_dir= [dba] ;dba.default_handler= [curl] ; A default value for the CURLOPT_CAINFO option. This is required to be an ; absolute path. ;curl.cainfo = ; Local Variables: ; tab-width: 4 ; End:
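To close out the extension settings, a minimal sketch of how the soap.wsdl_cache_* directives above interact with a SoapClient; the WSDL URL is hypothetical:

<?php
// With soap.wsdl_cache_enabled=1 the parsed WSDL is cached in soap.wsdl_cache_dir
// for soap.wsdl_cache_ttl seconds (86400 = one day) and reused across requests.
$client = new SoapClient('http://example.com/service?wsdl', array(
    'cache_wsdl'         => WSDL_CACHE_DISK, // or WSDL_CACHE_NONE to bypass the cache
    'exceptions'         => true,
    'connection_timeout' => 10,
));
var_dump($client->__getFunctions());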
Keyboard shortcuts A quick reference guide to UltraEdit's default keyboard shortcuts Keymapping and custom hotkeys How to customize key mappings and menu hotkeys Column Markers The benefit of a column marker is that it can help you to format your text/code, or in some cases make complex nested logic easier to read. Quick Open UltraEdit and UEStudio provide multiple methods to quickly open files without using the standard Open File dialog. A favorite method among power users is the Quick Open in the File menu. The benefit of the quick open dialog is that it loads up very... Vertical & Horizontal Split Window This is a convenient feature when you're manually comparing files, when you want to copy/paste between multiple files, or when you simply want to divide up your edit space. Tabbed Child Windows Declutter your edit space by using the tabbed child windows feature Auto-Hide Child Windows When you're deep in your code, the most important thing is editing space. The all new auto-hide child windows allow you to maximize your editing space by hiding the child windows against the edge of the editor. Customizing toolbars Did you know that you can not only change what is on UltraEdit's toolbars, you can also change the icon used, as well as create your own custom toolbars and tools? File tabs Understand how file tabs can be displayed, controlled and configured through the window docking system in UltraEdit/UEStudio. Create user/project tools Execute DOS or Windows commands in UltraEdit or UEStudio Temporary Files UltraEdit and UEStudio use temporary files... but what are temporary files? This power tip provides an explanation as well as some tips to get the most out of temp files. Backup and Restore Settings One of the staples of UltraEdit (and UEStudio) is its highly configurable interface and features. However, what happens when you're moving to a new system and you want to port your settings and customizations over along with UltraEdit? Add a webpage to your toolbar Use UltraEdit's powerful user tools to launch your favorite website from the click of a button on your toolbar Integrate Yahoo!, Google, Wikipedia and more with UltraEdit This tutorial will show you how to access the information you need in your browser by simply highlighting your text in the edit window and clicking your toolbar button How to install UE3 UE3 is the portable version of UltraEdit developed specifically for the U3 smart drive. You will need a U3-compatible USB drive for this power tip Scripting tutorial An introduction to UltraEdit's integrated scripting feature The List Lines Containing String option in Find The list lines option can be a handy tool when searching because it presents all occurrences of the find string in a floating dialog box. You can use the dialog to navigate to each instance by double-clicking on one of the result lines... Scripting Access to the Clipboard How to access the Clipboard using the integrated scripting engine Scripting access to output window How to access the output window using the integrated scripting engine Writing a macro Steps to record and edit powerful macros to quickly and efficiently edit files Using "copied" and "selected" variables for dynamic macros Use copied and selected text in macros to dramatically increase the power and flexibility of UltraEdit macros Run a macro or script from the command line We are often asked if it is possible to run an UltraEdit macro or script on a file from the command line. 
The answer is yes - and it's not only possible, it's extremely simple! Using find/replace UltraEdit and UEStudio give you the ability to perform a find or replace through one or more files. Learn how to use UltraEdit/UEStudio's powerful find and replace. Multiline find and replace Search and replace text spanning several lines Incremental search Incremental search is an inline, progressive search that allows you to find matched text as you type, much like Firefox's search feature Regular expressions Regular Expressions are essentially patterns (rather than specific strings) that are used with Find/Replace operations. This guide can dramatically improve your speed and efficiency for Find/Replace Tagged expressions "Tagging" the find data allows UltraEdit/UEStudio to re-use the data similar to variable during a replace. For example, If ^(h*o^) ^(f*s^) matches "hello folks", ^2 ^1 would replace it with "folks hello". Perl compatible regular expressions An introduction to using Perl-style regular expressions for search/replace Perl regex tutorial: non-greedy regular expressions Have you ever built a complex Perl-style regular expression, only to find that it matches much more data than you anticipated? If you've ever found yourself pulling your hair out trying to build the perfect regular expression to match the least amoun... Remove blank lines A question we often see is "I have a lot of blank lines in my file and I don't want to go through and manually delete them. Is there an easier way to do this?" The answer is: yes! Configure FTP Set up and configure multiple FTP accounts TaskMatch Environments How to use TaskMatch Environments in UltraEdit and UEStudio Configure FTP backup Save a local copy of your files when you transfer them to FTP directories Encrypt and Decrypt Text Files Use UltraEdit to encrypt and decrypt your text files Link to remote directories Sync local directories with remote (FTP/SFTP) directories Compare Modified File Against Source File How to compare the modified file against the source file on disk. Column Based Find and Replace Need to restrict your search/replace to a specific column range? The column based search does just that... Compare Highlighted Text If you need to quickly compare of portions of text, rather than an entire file, then you need UltraEdit/UEStudio's selected text compare! The selected text compare allows you to select portions of text between 2 files and execute a compare on ONLY the se Using the SSH/telnet console A tutorial for UltraEdit/UEStudio's SSH/telent feature Adding a wordfile Adding a wordfile in UltraEdit v15.00 and greater Adding a wordfile (in v14.20 and earlier) Add a language definition to your wordfile for use with UltraEdit and UEStudio's powerful syntax highlighting Syntax highlighting and code folding Explanation of highlighting and folding definitions in the UltraEdit/UEStudio wordfile Create Your Own TaskMatch Environment How to create your own TaskMatch Environments Filtering the Explorer View How to filter the Explorer view in UltraEdit and UEStudio Group Files and Folders with Projects How to group your files and folders using Projects Adding or removing file extensions for syntax highlighting How to configure syntax highlighting to highlight different file types automatically Project Settings Advanced Project Features - Using the UltraEdit/UEStudio project settings dialog Scripting Techniques Scripting techniques for UltraEdit/UEStudio. 
Perl-style regular expressions for function strings Using Perl-Style regexes to identify functions in your syntax-highlighted files and populate the function list Autocorrect keywords in UltraEdit/UEStudio How to enable and disable autocorrect keywords with syntax highlighting Insert Menu Commands UltraEdit includes several special insert functions under the Insert menu. You can use these functions to insert a file into the active file, insert a string into the file at every specified increment, sample colors from anywhere on your screen, and more. Using Bookmarks UltraEdit and UEStudio provide a way for you to mark, access, and preview your favorite lines via bookmarks. We'll look at how to create, edit, and configure bookmarks in the bookmark viewer. Creating Search Favorites UltraEdit includes a Search and Replace Favorites feature that allows you to manage frequently used Find and Replace strings. Create, name, and edit your Search and Replace Favorites... Customizing The HTML Toolbar Commands The purpose of this power tip is to teach you how to customize the existing HTML tags and create your own HTML tags. Combine All Open Files into a Single Destination File Have you ever needed to combine multiple files into a single destination (output) file? You can use a combination of a script and tool to create a single file from multiple files. Sum Column/Selection in Column Mode This power tip demonstrates how to calculate the sum from a column of numeric data. Column mode How to use the features of UltraEdit's powerful column mode Advanced and column-based sort How to sort file data using the advanced sort options and the column sort options Working with CSV files Use UltraEdit's built-in handling for character-separated value files Word wrap and tab settings for different file types UltraEdit and UEStudio allow you to customize the word wrap and tab settings for any type of file. This power tip walks you through the steps to configure these customizations Versioned backup Set UltraEdit/UEStudio to automatically save versioned backups of your files Configure spell checker How to set the highly-configurable options for UltraEdit's integrated spell checker Special functions UltraEdit includes several special functions under the File menu. You can use these functions to insert a file into the current file, delete the active file, send the file through email, or insert a string into the file at every specified increment HTML preview Edit and preview your rendered HTML code in the edit window Custom templates Create templates for frequently used text. You can also assign hotkeys to your templates. Compare files/folders Integrated differences tool - comparing files and folders with UltraCompare Professional File change polling Monitor log files and more using UltraEdit's file change polling feature Vertically split the edit window Splitting the edit window in UltraEdit/UEStudio Large file text editor UltraEdit can be used to edit large text files. 
Learn how to configure UltraEdit to optimize editing large text files Multiple configuration environments of Ultraedit/UEstudio How to set up your separate environments for UltraEdit/UEStudio Java compiler Create a custom user tool to compile Java code, using the command line, from within UltraEdit Configure UltraEdit with javascript lint How to check your JavaScript source code for common mistakes without actually running the script or opening the web page Character properties at your fingertips Access the properties of a character with the click of a button Ctags Set up and configure Ctags for use in UltraEdit Visual SourceSafe integration Create a customized user tool to check out files from Visual SourceSafe Running WebFOCUS from UltraEdit Configure UltraEdit for use with WebFOCUS CSE HTML Validator CSE HTML Validator for Windows is the most powerful, easy to use, user configurable, and all-in-one HTML, XHTML, CSS, link, spelling, and accessibility checker available. This quick tutorial shows you how to use it and set it up in UltraEdit/UEStudio Working with Unicode in UltraEdit/UEStudio In this tutorial, we'll cover some of the basics of Unicode-encoded text and Unicode files, and how to view and manipulate it in UltraEdit. Search and delete lines found UEStudio and UltraEdit provide a way for you to search and delete found lines from your files. This short tutorial provides the steps for searching for and deleting lines by writing a simple script. Parsing XML files and editing XML files Parsing XML can be a time-consuming task, especially when large amounts of data are involved. As of v15.10, UltraEdit provides you with a the XML Window for the purpose of parsing your XML files. The XML window allows you to navigate through the XML... Using Bookmarks UltraEdit and UEStudio provide a way for you to mark, access, and preview your favorite lines via bookmarks. We'll look at how to create, edit, and configure bookmarks in the bookmark viewer. Using the CSS style builder UltraEdit and UEStudio both include a CSS style builder for you to easily configure and insert CSS styles into the active document. This power tip will show you how to use the style builder. SSH/Telnet Session Logging Log the input and output to/from the server in your SSH/Telnet sessions Edit, develop, debug, and run SAS programs This user-submitted power tip describes how to use UltraEdit as a SAS editor, as well as how to run and debug SAS programs from the editor itself Tabs to Spaces - Ignore tabs and spaces in string and comments Ever had to convert the tab characters to spaces, but wanted to leave the tabs in strings and comments untouched? In previous versions, the convert tabs to spaces feature didn't distinguish between tabs as whitespace/formatting vs. tabs in... Setting File Associations in UltraEdit/UEStudio A file association is used by Windows Explorer to determine which application will open the file when it is double-clicked (or opened) in Explorer. In the interest of speed, many UltraEdit/UEStudio users want to associate specific file types with... Windows Explorer Integration We know that many UltraEdit/UEStudio users don't operate solely from within the editor; rather, they are frequently working in Windows Explorer before going to the editor. As such, they want (and need) a quick and easy way to open files from within... Line Change Indicator Ever wanted to see what changes you've made since your last save, or have you ever wanted to know what lines you've changed during an edit session? 
As of UltraEdit v16.00, you can do just that with the line change indicator... Comment and Uncomment Selected Text How many times per day do you comment out a block of code? Do you ever get tired of manually typing your open and close comments? As of v16.00, simply highlight your code, click a button, and move on. It's that easy... Hide, Show, and Delete Found Lines in UltraEdit/UEStudio Over time, many of our users have asked for the ability to hide/show lines based on a Find string... you got it! As of v16.00, you can now hide/show and even delete text based on your search criteria. The following power tip will guide you through... Read Only Status Indicator Have you ever opened a file, tried incessantly to modify it, then realized it was read only? As of v16.00, UltraEdit includes an enhanced read only status indicator. For read only files, the file tab will display a lock icon. Additionally, you can... Regular Expression Builder Regular Expressions are essentially patterns, rather than literal strings, that are used to compare/match text in Find/Replace operations. As an example, the * character in a Perl regular expression matches the preceding character or expression zero or.. XML Manager: In-line editing of XML files The XML Manager allows you to navigate through complex XML data. But, what happens when you want to make a quick edit to your XML tags/data.... UltraEdit v16.00 extends the XML Manager with inline editing, giving you a faster and more elegant method... UltraEdit v16.00 Scripting Enhancements One of UltraEdit's trademark features is the ability to automate tasks through scripting. V16.00 extends the power of scripting further with includes, active document index, and more! Parse Source Code with the Function List The function list displays all the functions in the active file/project. Double clicking on a function name in the list repositions the file to the desired function. Also, as you navigate through a file, the function selected in the list changes to indica Brace Matching Brace matching is an often-used feature; it is indispensable for navigating through your code. Brace matching simply allows you to position your cursor next to an open (or close) brace and highlight the corresponding brace. Code Folding Code folding is indispensable for managing complex/nested code structures. Code folding allows you to collapse (hide) a section of code. The collapsible sections are based on the structure of the file/language Shared FTP accounts Do you use multiple IDM products - UltraEdit, UEStudio, or UltraCompare? Ever get sick of managing your FTP account information in each application? Now you can stop worrying about porting your FTP account settings! Simply configure it once and share you Auto-load macro with project Many UltraEdit/UEStudio users rely heavily on projects - and why not, projects are extremely helpful in managing related files and folders. Projects not only allow you to group/manage your files and folders, but projects also contain other items that... UEStudio power tips Using the classviewer A tour of UEStudio's classviewer which provides a parsed graphical representation of your project CVS/SVN Auto-Detect UEStudio can automatically detect and import your CVS/SVN account settings when you import a folder already under version control. IntelliTips UEStudio offers language intelligence in an exciting feature we call IntelliTips (like Intellisense). 
Imagine a function parameter list tooltip coupled with an intelligent auto complete tooltip for code elements of the current file Quickstart guide: Using UEStudio to develop Java applications A guide for using UEStudio to edit and develop Java applications Create a local PHP MySQL development environment How to set up a development environment for PHP/MySQL on your local machine. A development environment allows you to test your code as you develop your web application before publishing it to the web. Quickstart Guide: Using UEStudio with Borland C/C++ Compiler C/C++ developers can use UEStudio to set up and configure projects with the Borland C/C++ compiler Creating your first application Create, build, and run an application from within UEStudio Configuring VCS with UEStudio A guide for configuring version control support (VCS) in UEStudio 11 and later Configuring VCS with UEStudio (in v10.30 and earlier) A guide for configuring version control support (VCS) in UEStudio CVS Diff How to use the built-in CVS Diff commands with UEStudio and UltraCompare Add a file to version control system A trademark feature of UEStudio is its powerful Version Control System. As you continue in your development, it is likely you will need to add files to the version control repository Compare files/folders A guide for comparing files or folders from UEStudio using the integrated diff tool Quickstart guide: Using the integrated debugger A guide for setting up integrated WinDbg debugging in UEStudio Quickstart guide: Using the integrated PHP debugger A guide for setting up the integrated PHP debugger in UEStudio Using the SSH/telnet console A guide for setting up SSH/telnet in UEStudio Keymapping and custom hotkeys A guide for customizing key mappings, menus and menu hotkeys in UEStudio Configuring SVN and CVS Accounts A cornerstone feature of UEStudio is the version control support. UEStudio supports CVS and SVN as well as multiple connection protocols. Before you can use version control, you must create an account. UEStudio has an auto-detect CVS/SVN feature, but... Group Files and Folders with Projects How to group your files and folders using Projects UltraEdit for Linux power tips FTP through Nautilus Did you know that you can access remote FTP files in UltraEdit for Linux with a variety of server connection protocols? Using Nautilus, the default file manager for the popular GNOME desktop, you can access files via FTP, SFTP, Windows shares, or even... Primary Select Using Linux's primary select feature in UltraEdit for Linux Custom terminal Set up a user tool to interact with the command line and specify a custom terminal for output Custom file browser UltraEdit for Linux allows you to right-click any file or folder in your Project (from the File View) and browse it on the file system. But did you know that you can configure which file browser is launched from UltraEdit? Scripting tutorial An introduction to the integrated scripting feature in UltraEdit for Linux Writing a macro Steps to record and edit powerful macros to quickly and efficiently edit files Vertical and horizontal split window editing This is a convenient feature when you're manually comparing files, when you want to copy/paste between multiple files, or when you simply want to divide up your edit space. Find and Replace A guide to the powerful features and options available under the "Search" menu. Find in Selected Text Find and Replace is a cornerstone feature for UltraEdit, so it is of course integral to UltraEdit for Linux. 
The Linux version offers the same features as in the Windows version, as well as additional features. One specific feature that was improved... Using bookmarks Provides a way for you to mark and quickly access lines of interest in your files via bookmarks. To add a bookmark, make sure the cursor is positioned on the line you'd like to bookmark. Press CTRL + F2.... Adding a wordfile Add a language definition to your wordfile for use with UltraEdit's powerful syntax highlighting Projects In UltraEdit for Linux, projects are a convenient, time-saving feature that allows you to group and manage associated files. Additionally, Projects are integrated throughout the framework of UltraEdit making it easier to perform other actions on your... Search Favorites UltraEdit for Linux includes a Search and Replace Favorites feature that allows you to manage frequently used Find and Replace strings. Create, name, and edit your Search and Replace Favorites... Column mode How to use column and block selection mode in UltraEdit for Linux Templates How to create text editing templates in UltraEdit for Linux Keyboard shortcuts A quick reference guide to UltraEdit's (Linux) default keyboard shortcuts How to use the UltraEdit for Linux tar package This guide shows you how to download and use the tar.gz package of UltraEdit UltraEdit for Linux v1.20: Scripting enhancements One of UEx's trademark features is the ability to automate tasks through scripting; v1.2 extends the power of scripting further with includes. UltraEdit for Linux Command Line Support UltraEdit for Linux has many convenient command line options and flags for calling UEx from a console/terminal as part of a script, or simply for convenience. Advanced file sorting Sort files in UEx with a powerful array of options and settings, including optional sort keys UltraCompare power tips Compare text snippets A tutorial showing you how to compare text snippets without having to save your snippets into a file. Diff your snippets, merge your changes, save the result as a separate file, then clear out the snippets (and their temp files...) Increase your virtual memory Large file comparisons may require your system to use virtual memory. This tutorial shows you how to configure Windows to increase the amount of virtual memory on your system. Compare large files UltraCompare is a very robust file comparison tool which includes support for comparing large files even several GB large. This power tip shows you how to optimize UltraCompare for maximum performance when working with large files. Compare .zip, .rar, and .jar Archives Got Archives? UltraCompare's archive compare feature allows you to compare the contents of .zip files, .rar files, Java .jar files, and even password-protected .zip files. Use the archive compare and examine differences between archives or folders on th Version Control Comparison UltraCompare v6.40 includes major improvements to the command line support that allow greater flexibility when integrating with other applications. If you're using version control in a team development environment, then UltraCompare v6.40 is exactly... Visually inspect HTML code How to use UltraCompare Professional's integrated browser view to visually compare and inspect HTML code Compare directories using FTP/SFTP Configure FTP/SFTP accounts in UltraCompare Professional to backup or sync FTP directories and compare local and remote folders. 
Block and line mode merge Merge differences and save them between 2 or 3 files at the click of a button Sync files and folders with the Folder Synchronization feature Folder Synchronization is a powerful feature in UltraCompare which allows you to sync files between local, remote, network, and even FTP folders. Recursive compare Use recursive compare to evaluate subdirectories' content for differences Find and eliminate duplicate files Unnecessary and unwanted duplicate files can eat up valuable system disk space. This power tip will show you how to quickly and safely eliminate unwanted duplicate files from your system with the powerful Find Duplicates feature in UltraCompare Compare Word documents Compare multiple Microsoft Word documents - Identify and merge differences between Word documents. Command line tips Tips for running UltraCompare from a DOS command prompt Command line quick difference check Run a quick difference check between two files to quickly see if they're the same or different Ignore options Setting ignore options for file/folder comparisons in UltraCompare Ignore/compare column range Set parameters to ignore or compare up to 4 unique columns of data. Filtering files in folder mode Filtering files in UltraCompare while in folder mode Customizing the time/date format for folder comparison Many UltraCompare users in different regions of the world have different standard formats for dates and timestamps. UltraCompare provides the ability to customize the date and timestamp for your folder comparisons Editing files in UltraCompare How to use the integrated text editing capabilites within UltraCompare UltraCompare shell integration Tips for integrating UltraCompare into the right-click context menu in Windows Explorer Export/save text compare output How to export and save diff output from UltraCompare Web Compare If you work with web files, you are probably accustomed to downloading the file via FTP or viewing the source, saving the text, then doing a compare. We're sure you'll agree, this process is clunky and mechanical.... Manually Sync Your Compare Manually sync your compare lines UltraCompare Sessions If you're anything like us, you always have multiple applications running at once. Spawning multiple instances of any application makes it harder to work. So... UC gives you sessions to manage your compare operations! Customizing colors Tutorial on how to change the colors for folder/file compare in UltraCompare Reload previously active sessions When you're doing complex file and folder compare operations, it doesn't take long to open quite a few tabs. What happens when you close UC to move on to another task or to go home for the day- lose the session? Not with Reload active sessions... Session Manager If you've compared the same set of files/folders more than once... You need sessions. Sessions allow you to save compare options for a common set of files or folders which you can quickly recall anytime you open UltraCompare. Not only can you save... Workspace Manager The Workspace Manager is all about convenience, so the Explorer view allows you to drag/drop files and folders for quick and easy compare operations. Simply select the folder (or file) in the Explorer view and drag it to the compare frame. Bookmark Favorite Files/Folders in UltraCompare How to use Favorite in UltraCompare to bookmark your commonly used files/folders. 
FTP in Workspace Manager You can access your accounts through the Explorer tab of the Workspace Manager in UltraCompare Share FTP Accounts with UltraEdit/UEStudio Set up UltraEdit/UEStudio to share FTP accounts with UltraCompare FTP Folder Compare with CRC Have you wanted to do a quick folder compare - between a local directory and remote directory - without downloading the files first? No problem... As of v7.20, UltraCompare now supports an FTP CRC compare method. With the CRC compare feature... Mark and hide files and folders in folder compare Have you ever wanted to hide files/folders that aren't relevant for your immediate compare needs? We have... While UltraCompare offers many compare filters and ignore options, sometimes you just need more control... UltraSentry power tips Web browser cleanup Use UltraSentry to securely clean up history and temporary files associated with web browsers Application Cleaning Support Clean the sensitive data left behind after running your applications Delete browser cookies Protect your privacy and your security by securely deleting malicious or private cookies Download directory cleanup Securely delete your download history with UltraSentry Optimize your browser Using UltraSentry to improve speed, performance, and security of your browser Explorer/Microsoft Office Integration Tips for integrating UltraSentry into the right-click context menu in Windows Explorer or MS Office Stealth mode Tutorial for running UltraSentry in the background or system tray Scheduling a task Tutorial for scheduling UltraSentry to automatically execute a specific cleaning task Run UltraSentry as a system service How to schedule your profiles/cleaning operations and be sure that UltraSentry is running them whether you are logged in or not Using the Wizard UltraSentry's wizard makes secure/privacy cleaning operations quick and easy. This power tip shows you how to use the wizard. Total System Scrub Information on how to use UltraSentry's "Full System Scrub" profile to protect your privacy and secure your sensitive data Custom profiles This power tip describes how to set up your own custom profile so that you can securely clean only areas of the system that you wish to clean Securely delete email How to securely delete email on your system using UltraSentry Advanced features This power tip describes some of the advanced features and functionality of UltraSentry
Chinese title: 阿呆系列 Photoshop CS6 (the "For Dummies" series)
Original title: Photoshop CS6 For Dummies
Author: Peter Bauer
Category: Software
Format: PDF
Edition: Text version
Publisher: John Wiley & Sons, Inc.
ISBN: 978-1-118-22706-0
Release date: June 6, 2012
Region: United States
Language: English

Description:

Adobe Photoshop is one of the most important computer programs of our age. It’s made photo editing a commonplace thing, something for the everyperson. Still, Photoshop can be a scary thing (especially that first purchase price!), comprising a jungle of menus and panels and tools and options and shortcuts as well as a bewildering array of add-ons and plug-ins. And that’s why you’re holding this book in your hands. And why I wrote it. And why John Wiley & Sons published it.

You want to make sense of Photoshop — or, at the very least, be able to work competently and efficiently in the program, accomplishing those tasks that need to get done. You want a reference that discusses how things work and what things do, not in a technogeek or encyclopedic manner, but rather as an experienced friend might explain something to you. Although step-by-step explanations are okay if they show how something works, you don’t need rote recipes that don’t apply to the work you do. You don’t mind discovering tricks, as long as they can be applied to your images and artwork in a productive, meaningful manner. You’re in the right place!

About This Book

This is a For Dummies book, and as such, it was produced with an eye toward you and your needs. From Day One, the goal has been to put into your hands the book that makes Photoshop understandable and useable. You won’t find a technical explanation of every option for every tool in every situation, but rather a concise explanation of those parts of Photoshop you’re most likely to need. If you happen to be a medical researcher working toward a cure for cancer, your Photoshop requirements might be substantially more specific than what you’ll find covered here. But for the overwhelming majority of the people who have access to Adobe Photoshop, this book provides the background needed to get your work done with Photoshop.

As I updated this book, I intentionally tried to strike a balance between the types of images with which you’re most likely to work and those visually stimulating (yet far less common) images of unusual subjects from faraway places. At no point in this book does flavor override foundation. When you need to see a practical example, that’s what I show you. I worked to ensure that each piece of artwork illustrates a technique and does so in a meaningful, nondistracting way for you.

You’ll see that I used mostly Apple computers in producing this book. That’s simply a matter of choice and convenience. You’ll also see (if you look closely) that I shoot mostly with Canon cameras and use Epson printers. That doesn’t mean that you shouldn’t shoot with Nikon, or that you shouldn’t print with HP or Canon. If that’s what you have, if it’s what you’re comfortable with, and if it fulfills your needs, stick with it! You’ll also find that I mention Wacom drawing tablets here and there (and devoted one of the final chapters to the subject). Does that mean you should have one? If you do any work that relies on precise cursor movement (like painting, dodging, burning, path creation and editing, cloning, healing, patching, or lassoing, just to name a few), yes, I do recommend a Wacom Cintiq display or Intuos tablet. Next to more RAM and good color management, it’s the best investment just about any Photoshop user can make. 
One additional note: If you’re brand new to digital imaging and computers, this probably isn’t the best place to start. I do indeed make certain assumptions about your level of computer knowledge (and, to a lesser degree, your knowledge of digital imaging). But if you know your File➪Open from your File➪Close and can find your lens cap with both hands, read Chapter 1, and you’ll have no problem with Photoshop CS6 For Dummies.

This book is from: Photoshop CS6 For Dummies
More books at: http://www.fordummiespdf.com/

Content screenshots: (not included)

Table of Contents

Introduction  1
    About This Book  1
    How This Book Is Organized  2
        Part I: Breezing through Basic Training  2
        Part II: Easy Enhancements for Digital Images  2
        Part III: Creating “Art” in Photoshop  3
        Part IV: Power Photoshop  3
        Part V: The Part of Tens  3
    Conventions Used in This Book  4
    Icons Used in This Book  4
    How to Use This Book  5
    Where to Go from Here  5

Part I: Breezing through Basic Training  7

Chapter 1: Welcome to Photoshop!  9
    Exploring Adobe Photoshop  9
        What Photoshop is designed to do  10
        New features to help you do those jobs  10
        Other things you can do with Photoshop  13
    Viewing Photoshop’s Parts and Processes  15
        Reviewing basic computer operations  15
        Photoshop’s incredible selective Undo  17
        Installing Photoshop: Need to know  19

Chapter 2: Knowing Just Enough about Digital Images  21
    What Exactly Is a Digital Image?  22
    The True Nature of Pixels  22
    How Many Pixels Can Dance on the Head of a Pin?  24
        Resolution revelations  25
        Resolving image resolution  26
    File Formats: Which Do You Need?  35
        Formats for digital photos  35
        Formats for web graphics  37
        Formats for commercial printing  38
        Formats for PowerPoint and Word  39

Chapter 3: Taking the Chef’s Tour of Your Photoshop Kitchen  41
    Food for Thought: How Things Work  42
        Ordering from the menus  42
        Your platter full of panels  43
        The tools of your trade  46
    Get Cookin’ with Customization  47
        Clearing the table: Custom workspaces  48
        Sugar and spice, shortcuts are nice  50
        Spoons can’t chop: Creating tool presets  51
    Season to Taste: The Photoshop Settings  52
        Standing orders: Setting the Preferences  53
        Ensuring consistency: Color Settings  60
    When Good Programs Go Bad: Fixing Photoshop  61

Chapter 4: Getting Images into and out of Photoshop  63
    Bringing Images into Photoshop  64
        Downloading from your digital camera  65
        Scanning prints  67
    Keeping Your Images Organized  71
        Creating a folder structure  71
        Using Adobe Bridge  72
        Renaming image files easily  74
    Printing Your Images  75
        Cropping to a specific aspect ratio  76
        Remembering resolution  78
        Controlling color using File➪Print  79
        Considering color management solutions  81
        Printing alternatives  82
    Sharing Your Images  83
        Creating PDFs and websites  83
        E-mailing your images  84

Part II: Easy Enhancements for Digital Images  85

Chapter 5: Adding Dark Shadows and Sparkling Highlights  87
    Adjusting Tonality to Make Your Images Pop  88
    Histograms Simplified  88
    Using Photoshop’s Auto Corrections  92
    Levels and Curves and You  93
        Level-headed you!  95
        Tonal corrections with the eyedroppers  97
        Adjusting your curves without dieting  98
    Grabbing Even More Control  101
        Using Shadow/Highlight  102
        Changing exposure after the fact  105
        Using Photoshop’s toning tools  105

Chapter 6: Making Color Look Natural  107
    What Is Color in Photoshop?  107
        Color modes, models, and depths  108
        Recording color in your image  114
    Making Color Adjustments in Photoshop  114
        Watching the Histogram and Info panels  116
        Choosing color adjustment commands  116
        Manual corrections in individual channels  131
    The People Factor: Flesh Tone Formulas  132

Chapter 7: The Adobe Camera Raw 7 Plug-In  135
    Understanding the Raw Facts  135
        What’s the big deal about Raw?  137
        Working in Raw  138
    Do You Have What It Takes?  139
    Working in the Camera Raw Plug-In  140
        Tools and preview options  140
        The histogram  146
        The preview area  147
        Workflow Options and presets  148
        The Basic panel  150
        The Detail panel  153
        HSL, grayscale, and split toning  155
        Compensating with Lens Corrections  157
        Adding special effects  158
        Camera profiles, presets, and snapshots  159
        The Camera Raw buttons  159

Chapter 8: Fine-Tuning Your Fixes  161
    What Is a Selection?  162
    Feathering and Anti-Aliasing  164
    Making Your Selections with Tools  166
        Marquee selection tools  166
        Lasso selection tools  169
        The Quick Selection tool  171
        The Magic Wand tool  172
        Refine Edge  172
    Your Selection Commands  174
        The primary selection commands  175
        The Color Range command  176
        Selection modification commands  177
        Transforming the shape of selections  178
        Edit in Quick Mask mode  179
        The mask-related selection commands  180
    Masks: Not Just for Halloween Anymore  181
        Saving and loading selections  181
        Editing an alpha channel  182
        Adding masks to layers and Smart Objects  183
        Masking with vector paths  184
    Adjustment Layers: Controlling Changes  184
        Adding an adjustment layer  185
        Limiting your adjustments  186

Chapter 9: Common Problems and Their Cures  189
    Making People Prettier  190
        Getting the red out . . .
digitally........................................................190 The digital fountain of youth.............................................................191 Dieting digitally...................................................................................192 De-glaring glasses...............................................................................194 Whitening teeth...................................................................................194 Reducing Noise in Your Images..................................................................194 Decreasing digital noise.....................................................................195 Eliminating luminance noise.............................................................195 Fooling Around with Mother Nature..........................................................196 Removing the unwanted from photos..............................................196 Eliminating the lean: Fixing perspective..........................................200 Rotating images precisely..................................................................202 Part III: Creating “Art” in Photoshop.......................... 203 Chapter 10: Combining Images . 205 Compositing Images: 1 + 1 = 1.....................................................................205 Understanding layers.........................................................................206 Why you should use Smart Objects.................................................207 Using the basic blending modes.......................................................208 Opacity, transparency, and layer masks.........................................211 Creating clipping groups....................................................................212 Making composited elements look natural.....................................213Making Complex Selections.........................................................................214 Vanishing Point.............................................................................................216 Creating Panoramas with Photomerge......................................................220 Chapter 11: Precision Edges with Vector Paths 221 Pixels, Paths, and You..................................................................................222 Easy Vectors: Using Shapes........................................................................223 Your basic shape tools.......................................................................224 The Custom Shape tool......................................................................226 More custom shapes — free!.............................................................226 Changing the appearance of a shape...............................................228 Simulating a multicolor shape layer.................................................229 Using Your Pen Tool to Create Paths.........................................................231 Understanding paths..........................................................................231 Clicking and dragging your way down the path of knowledge.....232 A closer look at the Paths panel.......................................................234 Customizing Any Path..................................................................................238 Adding, deleting, and moving anchor points..................................238 Combining paths.................................................................................240 Tweaking type for a custom 
font......................................................241 Chapter 12: Dressing Up Images with Layer Styles 243 What Are Layer Styles?................................................................................243 Using the Styles Panel..................................................................................245 Creating Custom Layer Styles.....................................................................247 Exploring the Layer Style menu........................................................247 Exploring the Layer Style dialog box................................................248 Layer effects basics............................................................................250 Opacity, fill, and advanced blending................................................258 Saving Your Layer Styles.............................................................................261 Adding styles to the Style panel.......................................................261 Preserving your layer styles..............................................................262 Chapter 13: Giving Your Images a Text Message 263 Making a Word Worth a Thousand Pixels.................................................264 A type tool for every season, or reason...........................................266 What are all those options?...............................................................267 Taking control of your text with panels...........................................270 The panel menus — even more options..........................................274 Working with Styles............................................................................276 Putting a picture in your text............................................................277 Creating Paragraphs with Type Containers..............................................278 Selecting alignment or justification..................................................281 Ready, BREAK! Hyphenating your text............................................282Shaping Up Your Language with Warp Text and Type on a Path..........283 Applying the predefined warps.........................................................283 Customizing the course with paths..................................................284 Chapter 14: Painting in Photoshop . 
287 Discovering Photoshop’s Painting Tools..................................................288 Painting with the Brush tool..............................................................290 Adding color with the Pencil tool.....................................................292 Removing color with the Eraser tool...............................................292 Working with Panels and Selecting Colors................................................293 An overview of options......................................................................293 Creating and saving custom brush tips...........................................296 Picking a color.....................................................................................297 Integrating Your iPad into Your Painting Workflow.................................299 Expressing yourself with PS Express...............................................299 Using Adobe Nav.................................................................................300 Getting colorful with Color Lava.......................................................301 Easing your way into Eazel................................................................302 Connecting with Photoshop..............................................................302 Fine Art Painting with Specialty Brush Tips and the Mixer Brush.........303 Exploring erodible brush tips...........................................................303 Introducing airbrush and watercolor tips.......................................304 Mixing things up with the Mixer Brush............................................305 Filling, Stroking, Dumping, and Blending Colors......................................307 Deleting and dumping to add color..................................................307 Using gradients...................................................................................308 Chapter 15: Filters: The Fun Side of Photoshop . 
311 Smart Filters: Your Creative Insurance Policy..........................................311 The Filters You Really Need........................................................................313 Sharpening to focus the eye..............................................................313 Unsharp Mask......................................................................................314 Smart Sharpen.....................................................................................315 Blurring images and selections.........................................................316 The other Blur filters..........................................................................318 Correcting for the vagaries of lenses...............................................320 Cleaning up with Reduce Noise........................................................323 Getting Creative and Artistic.......................................................................324 Photo to painting with the Oil Paint filter........................................325 Working with the Filter Gallery.........................................................326 Push, Pull, and Twist with Liquify..............................................................328 Do I Need Those Other Filters?...................................................................330 Adding drama with Lighting Effects.................................................331 Bending and bubbling........................................................................331 Creating clouds...................................................................................332Part IV: Power Photoshop........................................... 333 Chapter 16: Streamlining Your Work in Photoshop 335 Ready, Set, Action!........................................................................................336 Recording your own Actions.............................................................337 Working with the Batch command...................................................341 Creating contact sheets and presentations.....................................343 Sticking to the Script....................................................................................343 Adding Extensions to Photoshop...............................................................345 Tooling around in Bridge.............................................................................346 Creating Fancy PDF Presentations and Multi-Page PDFs.........................348 Creating a PDF presentation..............................................................348 Collecting thumbnails in a contact sheet........................................351 Saving paper with picture packages.................................................353 Creating Web Galleries................................................................................353 Chapter 17: Working with Video and Animation . 
357 Importing and Enhancing Video Clips.......................................................357 Getting video into Photoshop...........................................................358 Adjusting the length of video and audio clips................................360 Adding adjustment layers and painting on video layers...............361 Transitioning, titling, and adding special effects............................362 Transforming video layers.................................................................365 Rendering and exporting video.........................................................365 Creating Animations in Photoshop............................................................366 Building frame-based animations.....................................................366 Creating frame content......................................................................367 Tweening to create intermediary frames........................................368 Specifying frame rate..........................................................................369 Optimizing and saving your animation............................................370 Part V: The Part of Tens............................................. 371 Chapter 18: Ten (or so) Things to Do with Photoshop CS6 Extended . 373 Understanding Photoshop CS6 Extended.................................................373 Using Smart Object Stack Modes................................................................374 Working with 3D Artwork............................................................................375 Creating 3D Objects......................................................................................376 Importing 3D Objects...................................................................................376 Rendering and Saving 3D Scenes................................................................377 Measuring, Counting, and Analyzing Pixels..............................................377Photoshop CS6 For Dummies Measuring Length, Area, and More............................................................377 Calculating with Vanishing Point................................................................378 Counting Crows or Maybe Avian Flu..........................................................379 Viewing Your DICOM Medical Records.....................................................380 Ignoring MATLAB.........................................................................................381 Chapter 19: Ten Reasons to Love Your Wacom Tablet . 
383 More Natural Movement..............................................................................383 Health and Safety..........................................................................................383 Artistic Control.............................................................................................383 Extended Comfort.........................................................................................384 Programmable Express Keys, Touch Rings, and Touch Strips..............385 The Optimal Tablet......................................................................................385 The Pen’s Switch...........................................................................................385 Setting Preferences.......................................................................................386 The Accessories............................................................................................386 Cintiq for the Ultimate Control...................................................................387 Chapter 20: Ten Things to Know about HDR . 389 Understanding What HDR Is........................................................................389 Capturing for Merge to HDR Pro.................................................................390 Preparing Raw “Exposures” in Camera Raw.............................................391 Working with Merge to HDR Pro.................................................................392 Saving 32-Bit HDR Images............................................................................395 HDR Toning...................................................................................................395 Painting and the Color Picker in 32-Bit......................................................396 Filters and Adjustments in 32-Bit...............................................................396 Selections and Editing in 32-Bit..................................................................397 Printing HDR Images....................................................................................397 Index........................................................................ 399
