Difference between the const and static readonly keywords in C#
If in your C# class you want to use a variable whose value will not change during the lifetime of that class, then you have
two options:
1. Use the "const" keyword, e.g. public const int myVar = 100;
2. Use the "static readonly" keywords, e.g. public static readonly int myVar = int.Parse(ConfigurationManager.AppSettings["value"]);
A const variable has a limitation: its value can ONLY be set at compile time. This means you cannot set its value using any code
logic, such as reading it from a config file or inside an "if" condition. The value of a static readonly field, by contrast,
is set at run time, so the containing class can compute it with any custom logic.
In the static readonly case, the containing class is allowed to assign it only:
* in the variable declaration (through a variable initializer)
* in the static constructor (or in instance constructors, if the field is not static)
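For illustration, a minimal sketch of both declarations (the class name and the "MaxRetries" config key are hypothetical):
using System.Configuration;

public class Settings
{
    // const: the literal is baked into the assembly (and into callers) at compile time.
    public const int DefaultTimeout = 100;

    // static readonly: computed once at run time, when the type is initialized.
    public static readonly int MaxRetries =
        int.Parse(ConfigurationManager.AppSettings["MaxRetries"]);

    static Settings()
    {
        // A static readonly field may also be assigned here, even conditionally:
        if (MaxRetries <= 0)
            MaxRetries = 3;
    }
}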
21 December 2010
What are the basic differences between user controls and custom controls?
User control
Deployment
Designed for single-application scenarios; deployed in source form (.ascx) along with the source code of the application
If the same control needs to be used in more than one application, it introduces redundancy and maintenance problems
Custom control
Designed so that it can be used by more than one application; deployed either in the application's Bin directory or in the global assembly cache
Distributed easily and without problems associated with redundancy and maintenance.
Creation
User control
Creation is similar to the way Web Forms pages are created; well-suited for rapid application development (RAD)
Custom control
Writing involves lots of code because there is no designer support.
Content
User control
A much better choice when you need static content within a fixed layout, for example, when you make headers and footers
Custom Control
More suited for when an application requires dynamic content to be displayed; can be reused across an application,
for example, for a data bound table control with dynamic rows
Design
User Control
Writing doesn't require much application designing because they are authored at design time and mostly contain static data
Custom control
Writing from scratch requires a good understanding of the control's life cycle and the order in which events execute,
which is normally taken care of in user controls
How a user control is processed
When a page with a user control is requested, the following occurs:
The page parser parses the .ascx file specified in the Src attribute in the @ Register directive and generates a class that derives from the System.Web.UI.UserControl class.
The parser then dynamically compiles the class into an assembly.
If you are using Visual Studio, then at design time Visual Studio creates a code-behind file for the user control, and the file is precompiled by the designer itself.
Finally, the class for the user control, which is generated through the process of dynamic code generation and compilation, includes the code for the code behind file (.ascx.cs) as well as the code written inside the .ascx file.
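For reference, the registration in the hosting page looks like this (the prefix, tag name and file name are placeholders):
<%@ Register TagPrefix="uc" TagName="Header" Src="Header.ascx" %>
<uc:Header ID="PageHeader" runat="server" />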
How to choose the base class for your custom control
To write a custom control, you should directly or indirectly derive the new class from the System.Web.UI.Control class or from the System.Web.UI.WebControls.WebControl class:
You should derive from System.Web.UI.Control if you want the control to render nonvisual elements; hidden elements and metadata tags are examples of nonvisual rendering.
You should derive from System.Web.UI.WebControls.WebControl if you want the control to render HTML that generates a visual interface on the client computer.
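A minimal sketch of a visual custom control derived from WebControl (the class and property names are illustrative):
using System.Web.UI;
using System.Web.UI.WebControls;

public class GreetingLabel : WebControl
{
    // WebControl supplies styling, ViewState and the outer tag for us.
    public string UserName { get; set; }

    // Write the control's inner HTML.
    protected override void RenderContents(HtmlTextWriter output)
    {
        output.Write("Hello, " + (UserName ?? "guest"));
    }
}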
14 December 2010
SQL Tips and Guidelines - SQL Server 2008
• As a common practice, every table should have a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases, such as an identity column or some other column where the value is unique. In many cases, the primary key is the ideal column for a clustered index.
• Indexes should be considered for all columns that are frequently used in WHERE, ORDER BY, GROUP BY, TOP and DISTINCT clauses.
• Do not automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
• Do not over-index your OLTP tables; every extra index adds overhead during data modifications.
• Do not add the same index more than once on a table with different names.
• Ensure that all your tables are linked with foreign keys. Foreign keys enhance the performance of queries with joins. Database tables inside each application are naturally related. Islands of tables are rarely needed if your application's business logic is well defined.
• Generally, drop any indexes that are not used by the Query Optimizer. You probably won't want to add an index to a table under the following conditions:
•If the index is not used by the query optimizer. Use the Query Analyzer's "Show Execution Plan" option to see if your queries against a particular table use an index or not.
•If the table is small, most likely indexes will not be used.
•If the column(s) to be indexed are very wide.
•If the column(s) are defined as TEXT, NTEXT or IMAGE data types.
•If the table is rarely queried but inserts and updates are frequent.
• The query optimizer needs up-to-date statistics to make smart query optimization decisions. You will generally want to leave the "Auto Update Statistics" database option on. This helps to ensure that the optimizer statistics are valid, ensuring that queries are properly optimized when they are run.
• Keep the "width" of your indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index.
• If possible, try to create indexes on columns that have integer values instead of characters. Integer values use less overhead than character values.
• If you have two or more tables that are frequently joined together, then the columns used for the joins should have an appropriate index. If the columns used for the joins are not naturally compact, then consider adding surrogate keys to the tables that are compact in order to reduce the size of the keys. This will decrease I/O during the join process, which increases overall performance.
• When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index. This is because, in a unique index, each row is unique and once the needed record is found, SQL Server doesn't have to look any further.
• If a particular query against a table is run infrequently and the addition of an index greatly speeds the performance of the query, but the performance of INSERTS, UPDATES and DELETES is negatively affected by the addition of the index, consider creating the index for the table for the duration of when the query is run and then dropping the index. An example of this is when monthly reports are run at the end of the month on an OLTP application.
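A sketch of that pattern (the table and index names are hypothetical):
-- Build the index just before the month-end reports run...
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);
-- ...run the reporting queries, then drop it so OLTP modifications stay fast:
DROP INDEX IX_Orders_OrderDate ON dbo.Orders;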
• Avoid using FLOAT or REAL data types as primary keys, as they add unnecessary overhead that can hurt performance.
• If you want to boost the performance of a query that includes an AND operator in the WHERE clause, consider the following:
•Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
•If at least one of the search criteria in the WHERE clause is not highly selective, consider adding indexes to all of the columns referenced in the WHERE clause.
•If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
• The Query Optimizer will always perform a table scan or a clustered index scan on a table if the WHERE clause in the query contains an OR operator and if any of the referenced columns in the OR clause are not indexed (or do not have a useful index). Because of this, if you use many queries with OR clauses, you will want to ensure that each referenced column in the WHERE clause has an index.
• If you have a query that uses ORs and it is not making the best use of indexes, consider rewriting it as a UNION and then testing performance. Only through testing can you be sure that one version of your query will be faster than another.
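For example, given a hypothetical tasks table with separate indexes on owner_id and priority:
-- The OR form may force a scan:
SELECT task_id FROM dbo.tasks
WHERE owner_id = 42 OR priority = 1;
-- The UNION form lets each branch seek on its own index
-- (UNION also removes duplicates, matching the OR semantics):
SELECT task_id FROM dbo.tasks WHERE owner_id = 42
UNION
SELECT task_id FROM dbo.tasks WHERE priority = 1;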
• If you use the SOUNDEX function against a table column in a WHERE clause, the Query Optimizer will ignore any available indexes and perform a table scan.
• Queries that include either the DISTINCT or the GROUP BY clauses can be optimized by including appropriate indexes. Any of the following indexing strategies can be used:
•Include a covering, non-clustered index on the columns used in the DISTINCT or GROUP BY clause.
•Include a clustered index on the columns in the GROUP BY clause.
•Include a clustered index on the columns found in the SELECT clause.
•Adding appropriate indexes to queries that include DISTINCT or GROUP BY is most important for those queries that run often.
• Avoid clustered indexes on columns that are already "covered" by non-clustered indexes. A clustered index on a column that is already "covered" is redundant. Use the clustered index for columns that can better make use of it.
• Ideally, a clustered index should be based on a single column (not multiple columns) that is as narrow as possible. This not only reduces the clustered index's physical size, it also reduces the physical size of non-clustered indexes and boosts SQL Server's overall performance.
• When you create a clustered index, try to create it as a unique clustered index, not a non-unique clustered index.
• Use SET NOCOUNT ON at the beginning of each stored procedure you write. This statement should be included in every stored procedure, trigger, etc. that you write.
• Keep Transact-SQL transactions as short as possible within a stored procedure. This helps to reduce the number of locks, helping to speed up the overall performance of your SQL Server application.
• If you are creating a stored procedure to run in a database other than the Master database, don't use the prefix sp_ in its name. This special prefix is reserved for system stored procedures. Although using this prefix will not prevent a user defined stored procedure from working, what it can do is to slow down its execution ever so slightly.
• Before you are done with your stored procedure code, review it for any unused code, parameters or variables that you may have forgotten to remove while you were making changes and remove them. Unused code just adds unnecessary bloat to your stored procedures, although it will not necessarily negatively affect performance of the stored procedure.
• For best performance, all objects that are called within the same stored procedure should be owned by the same object owner or schema, preferably dbo, and should also be referred to in the format of object_owner.object_name or schema_owner.object_name.
• When you need to execute a string of Transact-SQL, you should use the sp_executesql stored procedure instead of the EXECUTE statement.
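A short sketch of the difference (the table and parameter names are illustrative):
-- EXECUTE concatenates the value into the string, so each distinct value
-- compiles a new ad hoc plan (and invites SQL injection):
DECLARE @idText varchar(10) = '1000';
EXECUTE ('SELECT task_name FROM dbo.tasks WHERE task_id = ' + @idText);
-- sp_executesql keeps the statement parameterized, so one plan is reused:
DECLARE @id int = 1000;
EXEC sp_executesql
    N'SELECT task_name FROM dbo.tasks WHERE task_id = @task_id',
    N'@task_id int',
    @task_id = @id;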
• If you use input parameters in your stored procedures, you should validate all of them at the beginning of your stored procedure. This way, if there is a validation problem and the client application needs to be notified of the problem, it happens before any stored procedure processing takes place, preventing wasted effort and boosting performance.
• When calling a stored procedure from your application, it is important that you call it using its qualified name, for example:
exec dbo.myProc
...instead of:
exec myProc
• If you think a stored procedure will return only a single value and not a record set, consider returning the single value as an output parameter.
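For example, a hypothetical procedure returning a count through an output parameter:
CREATE PROCEDURE dbo.GetTaskCount
    @owner_id int,
    @task_count int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @task_count = COUNT(*) FROM dbo.tasks WHERE owner_id = @owner_id;
END
GO
DECLARE @count int;
EXEC dbo.GetTaskCount @owner_id = 42, @task_count = @count OUTPUT;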
• Use stored procedures instead of views. They offer better performance.
• Don't include code, variables or parameters that don't do anything.
• Don't be afraid to make liberal use of in-line and block comments in your Transact-SQL code. They will not affect the performance of your application and they will enhance your productivity when you have to come back to the code and try to modify it.
• If possible, avoid using SQL Server cursors. They generally use a lot of SQL Server resources and reduce the performance and scalability of your applications.
• If you have the choice of using a join or a sub-query to perform the same task within a query, generally the join is faster. This is not always the case, however, and you may want to test the query using both methods to determine which is faster for your particular application.
• If your application requires you to create temporary tables for global or per-connection use, consider the possibility of creating indexes for these temporary tables. While most temporary tables probably won't need -- or even use -- an index, some larger temporary tables can benefit from them. A properly designed index on a temporary table can be as great a benefit as a properly designed index on a standard database table.
• Instead of using temporary tables, consider using a derived table. A derived table is the result of using a SELECT statement in the FROM clause of an existing SELECT statement. By using derived tables instead of temporary tables, you can reduce I/O and often boost your application's performance.
• For better performance, if you need a temporary table in your Transact-SQL code, consider using a table variable instead of creating a conventional temporary table.
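For example (the columns are hypothetical):
-- A table variable instead of CREATE TABLE #recent_tasks:
DECLARE @recent_tasks TABLE (task_id int PRIMARY KEY, task_name varchar(100));
INSERT INTO @recent_tasks (task_id, task_name)
SELECT task_id, task_name FROM dbo.tasks WHERE task_id > 1000;
SELECT task_id, task_name FROM @recent_tasks;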
• Don't repeatedly call the same function to calculate the same result within your Transact-SQL code.
• If you use BULK INSERT to import data into SQL Server, then use the TABLOCK hint along with it. This will prevent SQL Server from running out of locks during very large imports and will also boost performance due to the reduction of lock contention.
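For example (the file path and format options are placeholders):
BULK INSERT dbo.tasks
FROM 'C:\imports\tasks.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');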
• Always specify the narrowest columns you can. The narrower the column, the less amount of data SQL Server has to store and the faster SQL Server is able to read and write data. In addition, if any sorts need to be performed on the column, the narrower the column, the faster the sort will be.
• If you need to store large strings of data and they are less than 8000 characters, use a VARCHAR data type instead of a TEXT data type. TEXT data types have extra overhead that drag down performance.
• Don't use the NVARCHAR or NCHAR data types unless you need to store 16-bit character (Unicode) data. They take up twice as much space as VARCHAR or CHAR data types, increasing server I/O and wasting unnecessary space in your buffer cache.
• If the text data in a column varies greatly in length, use a VARCHAR data type instead of a CHAR data type. The amount of space saved by using VARCHAR over CHAR on variable length columns can greatly reduce the I/O reads that the cache memory uses to hold data, improving overall SQL Server performance.
• If a column's data does not vary widely in length, consider using a fixed-length CHAR field instead of a VARCHAR. While it may take up a little more space to store the data, processing fixed-length columns is faster in SQL Server than processing variable-length columns.
• If you have a column that is designed to hold only numbers, use a numeric data type such as INTEGER instead of a VARCHAR or CHAR data type. Numeric data types generally require less space to hold the same numeric value than does a character data type. This helps to reduce the size of the columns and can boost performance when the columns are searched (WHERE clause), joined to another column or sorted.
• If you use the CONVERT function to convert a value to a variable length data type such as VARCHAR, always specify the length of the variable data type. If you do not, SQL Server assumes a default length of 30. Ideally, you should specify the shortest length to accomplish the required task. This helps to reduce memory use and SQL Server resources.
• Avoid using the new BIGINT data type unless you really need its additional storage capacity. The BIGINT data type uses 8 bytes of memory, versus 4 bytes for the INT data type.
• Don't use the DATETIME data type as a primary key. From a performance perspective, it is more efficient to use a data type that uses less space. For example, the DATETIME data type uses 8 bytes of space, while the INT data type only takes up 4 bytes. The less space used, the smaller the table and index, and the less I/O overhead that is required to access the primary key.
• If you are creating a column that you know will be subject to many sorts, consider making the column integer-based and not character-based. This is because SQL Server can sort integer data much faster than character data.
• Carefully evaluate whether your SELECT query needs the DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary. This is a bad habit that should be stopped.
• When you need to use SELECT INTO option, keep in mind that it can lock system tables, preventing other users from accessing the data they need while the data is being inserted. In order to prevent or minimize the problems caused by locked tables, try to schedule the use of SELECT INTO when your SQL Server is less busy. In addition, try to keep the amount of data inserted to a minimum. In some cases, it may be better to perform several smaller SELECT INTOs instead of performing one large SELECT INTO.
• If you need to verify the existence of a record in a table, don't use SELECT COUNT (*) in your Transact-SQL code to identify it. This is very inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS to determine if the record in question exists, which is much more efficient.
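For example:
-- Wasteful: counts every matching row just to test existence:
IF (SELECT COUNT(*) FROM dbo.tasks WHERE task_id = 1001) > 0
    PRINT 'found';
-- Efficient: EXISTS stops at the first matching row:
IF EXISTS (SELECT 1 FROM dbo.tasks WHERE task_id = 1001)
    PRINT 'found';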
• Some developers -- especially those who have not worked with SQL Server before -- routinely include code similar to this in their WHERE clauses when they make string comparisons:
SELECT column_name FROM table_name
WHERE LOWER(column_name) = 'name'
In other words, these developers are making the assumption that the data in SQL Server is case-sensitive, which it generally is not. If your SQL Server database is not configured to be case sensitive, you don't need to use LOWER or UPPER to force the case of text to be equal for a comparison to be performed. Just leave these functions out of your code. This will speed up the performance of your query, as any use of text functions in a WHERE clause hurts performance.
However, what if your database has been configured to be case-sensitive? Should you then use the LOWER and UPPER functions to ensure that comparisons are properly compared? No. The above example is still poor coding. If you have to deal with ensuring case is consistent for proper comparisons, use the technique described below, along with appropriate indexes on the column in question:
SELECT column_name FROM table_name
WHERE column_name = 'NAME' or column_name = 'name'
This code will run much faster than the first example.
• If you currently have a query that uses NOT IN, which offers poor performance because the SQL Server optimizer has to use a nested table scan to perform this activity, instead try one of the following options, all of which offer better performance (a sketch follows this list):
•Use EXISTS or NOT EXISTS
•Use IN
•Perform a LEFT OUTER JOIN and check for a NULL condition
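For example, both of the following return tasks with no matching owner row (the tables are hypothetical):
-- NOT EXISTS version:
SELECT t.task_id FROM dbo.tasks t
WHERE NOT EXISTS (SELECT 1 FROM dbo.owners o WHERE o.owner_id = t.owner_id);
-- LEFT OUTER JOIN version; unmatched rows surface as NULLs:
SELECT t.task_id FROM dbo.tasks t
LEFT OUTER JOIN dbo.owners o ON o.owner_id = t.owner_id
WHERE o.owner_id IS NULL;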
• When you have a choice of using the IN or the EXISTS clause in your Transact-SQL, you will generally want to use the EXISTS clause, as it is usually more efficient and performs faster.
• If you find that SQL Server uses a TABLE SCAN instead of an INDEX SEEK when you use an IN/OR clause as part of your WHERE clause, even when those columns are covered by an index, consider using an index hint to force the Query Optimizer to use the index.
• If you use LIKE in your WHERE clause, try to use one or more leading characters in the clause, if possible. For example, use:
LIKE 'm%' instead of LIKE '%m'
• If your application needs to retrieve summary data often, but you don't want the overhead of calculating it on the fly every time it is needed, consider using a trigger that updates the values in a summary table after each transaction.
• When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, as it is much more efficient. For example...
SELECT task_id, task_name
FROM tasks
WHERE task_id in (1000, 1001, 1002, 1003, 1004)
...is much less efficient than this:
SELECT task_id, task_name
FROM tasks
WHERE task_id BETWEEN 1000 and 1004
• If possible, try to avoid using the SUBSTRING function in your WHERE clauses. Depending on how it is constructed, using the SUBSTRING function can force a table scan instead of allowing the optimizer to use an index (assuming there is one). If the substring you are searching for does not include the first character of the column you are searching for, then a table scan is performed.
• If possible, you should avoid using the SUBSTRING function and use the LIKE condition instead for better performance. Instead of doing this:
WHERE SUBSTRING(task_name,1,1) = 'b'
Try using this instead:
WHERE task_name LIKE 'b%'
• Avoid using optimizer hints in your WHERE clauses. This is because it is generally very hard to out-guess the Query Optimizer. Optimizer hints are special keywords that you include with your query to force how the Query Optimizer runs. If you decide to include a hint in a query, this forces the Query Optimizer to become static, preventing the Query Optimizer from dynamically adapting to the current environment for the given query. More often than not, this hurts -- not helps -- performance.
• If you have a WHERE clause that includes expressions connected by two or more AND operators, SQL Server will generally evaluate them from left to right in the order they are written. This assumes that no parentheses have been used to change the order of execution. Because of this, you may want to consider the following when using AND:
• Locate the least likely true AND expression first.
• If both parts of an AND expression are equally likely to be false, put the least complex AND expression first.
• You may want to use Query Analyzer or Management Studio to look at the execution plans of your queries to see which ordering is best for your situation.
• Don't use ORDER BY in your SELECT statements unless you really need to, as it adds a lot of extra overhead. For example, perhaps it may be more efficient to sort the data at the client than at the server.
• Whenever SQL Server has to perform a sorting operation, additional resources have to be used to perform this task. Sorting often occurs when any of the following Transact-SQL statements are executed:
•ORDER BY
•GROUP BY
•SELECT DISTINCT
•UNION
• If you have to sort by a particular column often, consider making that column the clustered index key. The data is then already presorted, and SQL Server is smart enough not to re-sort it.
• If your WHERE clause includes an IN operator along with a list of values to be tested in the query, order the list of values so that the most frequently found ones are placed at the start of the list and the less frequently found ones are placed at the end of the list. This can speed up performance because the IN option returns true as soon as any of the values in the list produce a match. The sooner the match is made, the faster the query completes.
• If your application performs many wildcard (LIKE %) text searches on CHAR or VARCHAR columns, consider using SQL Server's full-text search option. The Search Service can significantly speed up wildcard searches of text stored in a database.
• The GROUP BY clause can be used with or without an aggregate function. However, if you want optimum performance, don't use the GROUP BY clause without an aggregate function. This is because you can accomplish the same end result by using the DISTINCT option instead, and it is faster. For example, you could write your query two different ways:
SELECT task_id
FROM tasks
WHERE task_id BETWEEN 10 AND 20
GROUP BY task_id
...or:
SELECT DISTINCT task_id
FROM tasks
WHERE task_id BETWEEN 10 AND 20
• It is important to design applications that keep transactions as short as possible. This reduces locking and increases application concurrency, which helps to boost performance.
• In order to reduce network traffic between the client or middle-tier and SQL Server -- and also to boost your SQL Server-based application's performance -- only the data needed by the client or middle-tier should be returned by SQL Server. In other words, don't return more data (rows or columns) from SQL Server than you need and then whittle it down at the client or middle-tier; this wastes SQL Server resources and network bandwidth.
• To make complex queries easier to analyze, consider breaking them down into their smaller constituent parts. One way to do this is to simply create lists of the key components of the query, such as:
•List all of the columns that are to be returned
•List all of the columns that are used in the WHERE clause
•List all of the columns used in the JOINs (if applicable)
•List all the tables used in JOINs (if applicable)
•Once you have the above information organized into this easy-to-comprehend form, it is much easier to identify those columns that could potentially make use of indexes when the query is executed.
50 New Features of SQL Server 2008
•Transparent Data Encryption
Enable encryption of an entire database, data files, or log files, without the need for application changes. Benefits include: searching encrypted data using both range and fuzzy searches, securing data from unauthorized users, and encrypting data without any required changes to existing applications.
•Extensible Key Management
SQL Server 2005 already provides a comprehensive solution for encryption and key management; SQL Server 2008 builds on it by supporting third-party key management and hardware security module (HSM) products.
•Auditing
Create and manage auditing via DDL, while simplifying compliance by providing more comprehensive data auditing. This enables organizations to answer common questions, such as, "What data was retrieved?"
•Enhanced Database Mirroring
SQL Server 2008 builds on SQL Server 2005 by providing a more reliable platform that has enhanced database mirroring, including automatic page repair, improved performance, and enhanced supportability.
•Automatic Recovery of Data Pages
SQL Server 2008 enables the principal and mirror machines to recover transparently from 823/824 types of data page errors by requesting a fresh copy of the suspect page from the mirroring partner, invisibly to end users and applications.
•Log Stream Compression
Database mirroring requires data transmissions between the participants of the mirroring implementations. With SQL Server 2008, compression of the outgoing log stream between the participants delivers optimal performance and minimizes the network bandwidth used by database mirroring.
•Resource Governor
Provide a consistent and predictable response to end users with the introduction of Resource Governor, which allows organizations to define resource limits and priorities for different workloads, enabling concurrent workloads to deliver consistent performance to their end users.
•Predictable Query Performance
Enable greater query performance stability and predictability by providing functionality to lock down query plans, enabling organizations to promote stable query plans across hardware server replacements, server upgrades, and production deployments.
•Data Compression
Enable data to be stored more effectively, and reduce the storage requirements for your data. Data compression also provides significant performance improvements for large I/O bound workloads, like data warehousing.
•Hot Add CPU
Dynamically scale a database on demand by allowing CPU resources to be added to SQL Server 2008 on supported hardware platforms without forcing any downtime on applications. Note that SQL Server already supports the ability to add memory resources online.
•Policy-Based Management
Policy-Based Management is a policy-based system for managing one or more instances of SQL Server 2008. Use this with SQL Server Management Studio to create policies that manage entities on the server, such as the instance of SQL Server, databases, and other SQL Server objects.
•Streamlined Installation
SQL Server 2008 introduces significant improvements to the service life cycle for SQL Server through the re-engineering of the installation, setup, and configuration architecture. These improvements separate the installation of the physical bits on the hardware from the configuration of the SQL Server software, enabling organizations and software partners to provide recommended installation configurations.
•Performance Data Collection
Performance tuning and troubleshooting are time-consuming tasks for the administrator. To provide actionable performance insights to administrators, SQL Server 2008 includes more extensive performance data collection, a new centralized data repository for storing performance data, and new tools for reporting and monitoring.
•Language Integrated Query (LINQ)
Enable developers to issue queries against data, using a managed programming language, such as C# or VB.NET, instead of SQL statements. Enable seamless, strongly typed, set-oriented queries written in .NET languages to run against ADO.NET (LINQ to SQL), ADO.NET DataSets (LINQ to DataSets), the ADO.NET Entity Framework (LINQ to Entities), and to the Entity Data Service Mapping provider. Use the new LINQ to SQL provider that enables developers to use LINQ directly on SQL Server 2008 tables and columns.
•ADO.NET Data Services
The Object Services layer of ADO.NET enables the materialization, change tracking, and persistence of data as CLR objects. Developers using the ADO.NET framework can program against a database, using CLR objects that are managed by ADO.NET. SQL Server 2008 introduces more efficient, optimized support that improves performance and simplifies development.
•DATE/TIME
SQL Server 2008 introduces new date and time data types:
•DATE—A date-only type
•TIME—A time-only type
•DATETIMEOFFSET—A time-zone-aware datetime type
•DATETIME2—A datetime type with larger fractional seconds and year range than the existing DATETIME type
The new data types enable applications to have separate date and time types while providing large date ranges or user-defined precision for time values.
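For example:
DECLARE @d DATE = '2010-12-14';
DECLARE @t TIME(3) = '23:59:59.123';
DECLARE @dto DATETIMEOFFSET = '2010-12-14 23:59:59 +05:30';
DECLARE @d2 DATETIME2(7) = SYSDATETIME();  -- higher precision than DATETIME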
•HIERARCHY ID
Enable database applications to model tree structures in a more efficient way than currently possible. New system type HierarchyId can store values that represent nodes in a hierarchy tree. This new type will be implemented as a CLR UDT, and will expose several efficient and useful built-in methods for creating and operating on hierarchy nodes with a flexible programming model.
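A small sketch (the table and titles are illustrative):
CREATE TABLE dbo.org (node hierarchyid PRIMARY KEY, title varchar(50));
DECLARE @root hierarchyid = hierarchyid::GetRoot();
INSERT INTO dbo.org VALUES (@root, 'CEO');
-- Insert the first child of the root:
INSERT INTO dbo.org VALUES (@root.GetDescendant(NULL, NULL), 'CTO');
SELECT node.ToString() AS node_path, title FROM dbo.org;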
•FILESTREAM Data
Allow large binary data to be stored directly in an NTFS file system, while preserving an integral part of the database and maintaining transactional consistency. Enable the scale-out of large binary data traditionally managed by the database to be stored outside the database on more cost-effective storage without compromise.
•Integrated Full Text Search
Integrated Full Text Search makes the transition between Text Search and relational data seamless, while enabling users to use the Text Indexes to perform high-speed text searches on large text columns.
•Sparse Columns
With sparse columns, NULL data consumes no physical space, providing a highly efficient way of managing empty data in a database. For example, sparse columns allow object models that typically have numerous null values to be stored in a SQL Server 2008 database without incurring large space costs (see the sketch after this list).
•Large User-Defined Types
SQL Server 2008 eliminates the 8-KB limit for User-Defined Types (UDTs), allowing users to dramatically expand the size of their UDTs.
•Spatial Data Types
Build spatial capabilities into your applications by using the support for spatial data.
o Implement Round Earth solutions with the geography data type. Use latitude and longitude coordinates to define areas on the Earth's surface.
o Implement Flat Earth solutions with the geometry data type. Store polygons, points, and lines that are associated with projected planar surfaces and naturally planar data, such as interior spaces.
•Backup Compression
Keeping disk-based backups online is expensive and time-consuming. With SQL Server 2008 backup compression, less storage is required to keep backups online, and backups run significantly faster since less disk I/O is required.
•Partitioned Table Parallelism
Partitions enable organizations to manage large growing tables more effectively by transparently breaking them into manageable blocks of data. SQL Server 2008 builds on the advances of partitioning in SQL Server 2005 by improving the performance on large partitioned tables.
•Star Join Query Optimizations
SQL Server 2008 provides improved query performance for common data warehouse scenarios. Star Join Query optimizations reduce query response time by recognizing data warehouse join patterns.
•Grouping Sets
Grouping Sets is an extension to the GROUP BY clause that lets users define multiple groupings in the same query. Grouping Sets produces a single result set that is equivalent to a UNION ALL of differently grouped rows, making aggregation querying and reporting easier and faster (see the sketch after this list).
•Change Data Capture
With Change Data Capture, changes are captured and placed in change tables. It captures the complete content of changes, maintains cross-table consistency, and even works across schema changes. This enables organizations to integrate the latest information into the data warehouse.
•MERGE SQL Statement
With the introduction of the MERGE SQL statement, developers can more effectively handle common data warehousing scenarios, like checking whether a row exists and then executing an insert or update (see the sketch after this list).
•SQL Server Integration Services (SSIS) Pipeline Improvements
Data integration packages can now scale more effectively, making use of available resources and managing the largest enterprise-scale workloads. The new pipeline design improves the scalability of the runtime across multiple processors.
•SQL Server Integration Services (SSIS) Persistent Lookups
The need to perform lookups is one of the most common ETL operations. This is especially prevalent in data warehousing, where fact records need to use lookups to transform business keys to their corresponding surrogates. SSIS increases the performance of lookups to support the largest tables.
•Analysis Scale and Performance
SQL Server 2008 drives broader analysis with enhanced analytical capabilities and with more complex computations and aggregations. New cube design tools help users streamline the development of the analysis infrastructure, enabling them to build solutions for optimized performance.
•Block Computations
Block Computations provide a significant improvement in processing performance, enabling users to increase the depth of their hierarchies and the complexity of their computations.
•Writeback
New MOLAP-enabled writeback capabilities in SQL Server 2008 Analysis Services remove the need to query ROLAP partitions. This provides users with enhanced writeback scenarios from within analytical applications without sacrificing traditional OLAP performance.
•Enterprise Reporting Engine
Reports can easily be delivered throughout the organization, both internally and externally, with simplified deployment and configuration. This enables users to easily create and share reports of any size and complexity.
•Internet Report Deployment
Customers and suppliers can effortlessly be reached by deploying reports over the Internet.
•Manage Reporting Infrastructure
Increase supportability and the ability to control server behaviour with memory management, infrastructure consolidation, and easier configuration through a centralized store and API for all configuration settings.
•Report Builder Enhancements
Easily build ad hoc reports and author reports with any structure through Report Designer.
•Forms Authentication Support
Support for Forms authentication enables users to choose between Windows and Forms authentication.
•Report Server Application Embedding
Report Server application embedding enables the URLs in reports and subscriptions to point back to front-end applications.
•Microsoft Office Integration
SQL Server 2008 provides new Word rendering that enables users to consume reports directly from within Microsoft Office Word. In addition, the existing Excel renderer has been greatly enhanced to support features such as nested data regions and sub-reports, as well as merged-cell improvements. This lets users maintain layout fidelity and improves the overall consumption of reports from Microsoft Office applications.
•Predictive Analysis
SQL Server Analysis Services continues to deliver advanced data mining technologies. Better Time Series support extends forecasting capabilities. Enhanced Mining Structures deliver more flexibility to perform focused analysis through filtering as well as to deliver complete information in reports beyond the scope of the mining model. New cross-validation enables confirmation of both accuracy and stability for results that you can trust. Furthermore, the new features delivered with SQL Server 2008 Data Mining Add-ins for Office 2007 empower every user in the organization with even more actionable insight at the desktop.
COALESCE (Transact-SQL)
Returns the first nonnull expression among its arguments.
Return Types
Returns the data type of expression with the highest data type precedence.
If all expressions are nonnullable, the result is typed as nonnullable.
Remarks
If all arguments are NULL, COALESCE returns NULL.
Note: At least one of the null values must be a typed NULL.
COALESCE(expression1,...n) is equivalent to the following CASE expression:
CASE
WHEN (expression1 IS NOT NULL) THEN expression1
WHEN (expression2 IS NOT NULL) THEN expression2
...
ELSE expressionN
END
ISNULL and COALESCE, though similar, can behave differently. An expression involving ISNULL with non-null parameters is considered to be NOT NULL, while an expression involving COALESCE with non-null parameters is considered to be NULL (that is, nullable). In SQL Server, to index expressions involving COALESCE with non-null parameters, the computed column can be persisted using the PERSISTED column attribute, as in the following statement:
-- The COALESCE-based computed column is marked PERSISTED so that it
-- can be indexed (here it serves as the primary key)
CREATE TABLE #CheckSumTest
(
    ID int IDENTITY,
    Num int DEFAULT ( RAND() * 100 ),
    RowCheckSum AS COALESCE( CHECKSUM( ID, Num ), 0 ) PERSISTED PRIMARY KEY
);
SET NOCOUNT { ON | OFF } in DOT NET
Stops the message indicating the number of rows affected by a Transact-SQL statement from being returned
as part of the results.
Syntax
SET NOCOUNT { ON | OFF }
Remarks
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement)
is not returned. When SET NOCOUNT is OFF, the count is returned.
The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.
SET NOCOUNT ON eliminates the sending of DONE_IN_PROC messages to the client for each statement in a
stored procedure. When using the utilities provided with Microsoft® SQL Server™ to execute queries,
SET NOCOUNT ON prevents the message "nn rows affected" from being displayed at the end of Transact-SQL
statements such as SELECT, INSERT, UPDATE, and DELETE.
For stored procedures that contain several statements that do not return much actual data, this can
provide a significant performance boost because network traffic is greatly reduced.
4 December 2010
DELEGATES IN C#
A delegate type represents references to methods with a particular parameter list and return type.
Delegates make it possible to treat methods as entities that can be assigned to variables and passed as parameters.
Delegates are similar to the concept of function pointers found in some other languages, but unlike function pointers, delegates are object-oriented and type-safe.
The following example declares and uses a delegate type named Function.
using System;
delegate double Function(double x);
class Multiplier
{
double factor;
public Multiplier(double factor) {
this.factor = factor;
}
public double Multiply(double x) {
return x * factor;
}
}
class Test
{
static double Square(double x) {
return x * x;
}
static double[] Apply(double[] a, Function f) {
double[] result = new double[a.Length];
for (int i = 0; i < a.Length; i++) result[i] = f(a[i]);
return result;
}
static void Main() {
double[] a = {0.0, 0.5, 1.0};
double[] squares = Apply(a, Square);
double[] sines = Apply(a, Math.Sin);
Multiplier m = new Multiplier(2.0);
double[] doubles = Apply(a, m.Multiply);
}
}
An instance of the Function delegate type can reference any method that takes a double argument and returns
a double value. The Apply method applies a given Function to the elements of a double[], returning a double[]
with the results. In the Main method, Apply is used to apply three different functions to a double[].
A delegate can reference either a static method (such as Square or Math.Sin in the previous example) or an instance method (such as m.Multiply in the previous example). A delegate that references an instance method also references a particular object, and when the instance method is invoked through the delegate, that object becomes this in the invocation. Delegates can also be created using anonymous functions, which are "inline methods" created on the fly. Anonymous functions can see the local variables
of the surrounding methods. Thus, the multiplier example above can be written more easily without using a Multiplier class:
double[] doubles = Apply(a, (double x) => x * 2.0);
An interesting and useful property of a delegate is that it does not know or care about the class
of the method it references; all that matters is that the referenced method has the same parameters and
return type as the delegate.