30 December 2011

Calculate Time Difference Between DateTime Objects

To get the time difference, you need to use a TimeSpan object, with code like this:
TimeSpan ts = DateTime1 - DateTime2;
For example, if you want to calculate the time difference between server time and UTC time:
[ C# ]
protected void Page_Load(object sender, EventArgs e)
{
    // Declare and get DateTime values
    DateTime StartDate = System.DateTime.Now;
    DateTime EndDate = System.DateTime.UtcNow;
 
    // Find time difference between two dates
    TimeSpan TimeDifference = StartDate - EndDate;
 
    // Write difference in hours and minutes
    Response.Write("Time difference between server time and Coordinated Universal Time (UTC) is " + 
        TimeDifference.Hours.ToString() + " hours ");
    if (TimeDifference.Minutes != 0)
        Response.Write(" and " + TimeDifference.Minutes.ToString() + " minutes.");
 
}
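
Note that TimeSpan.Hours returns only the hours component of the difference. If the two dates can be more than a day apart, the Total* properties give the complete difference. A minimal sketch:

[ C# ]
TimeSpan diff = DateTime.Now - DateTime.UtcNow;
Response.Write(diff.Hours.ToString());        // hours component of the difference only
Response.Write(diff.TotalHours.ToString());   // complete difference expressed in hours, e.g. 5.5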

29 December 2011

Web Farm in Asp.net

After developing our ASP.NET web application, we host it on an IIS server. A single standalone server is sufficient to process ASP.NET requests and responses for a small web site, but when the site serves a big organization with millions of daily user hits, we need to host it on multiple servers. This is called a web farm: a single site hosted on multiple IIS servers, all running behind a load balancer.
Fig: General Web Farm Architecture
This is the most common scenario for a web-based production environment. The client hits a virtual IP (VIP), which is the IP address of the load balancer. When the load balancer receives a request, it redirects it to a particular server based on the current server load.
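
As a quick check of which node in the farm served a request (useful when testing the load balancer), you can write the machine name into the response. This is a minimal sketch, not part of the original setup:

[ C# ]
protected void Page_Load(object sender, EventArgs e)
{
    // In a web farm, this value changes depending on which server handled the request
    Response.Write("Served by: " + Environment.MachineName);
}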

Trigger in Sql Server


What is a Trigger

A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as inserting, deleting, or updating data. It is a database object that is bound to a table and is executed automatically. You can't explicitly invoke a trigger; the only way to fire it is to perform the corresponding action on the table it is assigned to.

Types Of Triggers

There are three action query types in SQL: INSERT, UPDATE and DELETE. Accordingly, there are three basic types of triggers, plus hybrids that come from mixing and matching the events and timings that fire them.

Basically, triggers are classified into two main types:

(i) After Triggers (For Triggers)
(ii) Instead Of Triggers

(i) After Triggers

These triggers run after an insert, update or delete on a table. They are not supported for views. A minimal example follows the list below.
AFTER triggers can be further classified into three types:

(a) AFTER INSERT Trigger.
(b) AFTER UPDATE Trigger.
(c) AFTER DELETE Trigger.
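
As a minimal sketch, here is an AFTER INSERT trigger that copies each newly inserted row into an audit table. The Employee and EmployeeAudit tables and their columns are hypothetical, used only for illustration:

CREATE TRIGGER trg_Employee_AfterInsert
ON Employee
AFTER INSERT
AS
BEGIN
    -- The special "inserted" pseudo-table holds the rows added by the triggering statement
    INSERT INTO EmployeeAudit (EmployeeID, AuditDate)
    SELECT EmployeeID, GETDATE()
    FROM inserted
END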


http://www.codeproject.com/KB/database/TriggersSqlServer.aspx

Web Garden in asp.net

Overview of Web Garden
By default, each application pool runs with a single worker process (w3wp.exe). We can assign multiple worker processes to a single application pool; an application pool with multiple worker processes is called a Web Garden. Running several worker processes under the same application pool can sometimes provide better throughput and application response time, and each worker process has its own threads and memory space.
Application Pool Creation
Web Garden (Application pool with multiple worker processes)
As shown in the picture, an IIS server may host multiple application pools, and each application pool has at least one worker process; a Web Garden is a pool that contains multiple worker processes.
There are certain restrictions on using a Web Garden with your web application. If we use the "InProc" session mode, the application will not work correctly, because session state will be handled by different worker processes. To avoid this, we should use an out-of-process session mode: either "State Server" or "SQL Server" session state.
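
One simple way to observe a Web Garden in action is to print the ID of the worker process that handles each request; with several worker processes assigned to the pool, the ID changes between requests. A minimal sketch, not taken from the referenced article:

[ C# ]
protected void Page_Load(object sender, EventArgs e)
{
    // Different requests may be served by different w3wp.exe instances in a Web Garden
    int workerProcessId = System.Diagnostics.Process.GetCurrentProcess().Id;
    Response.Write("Worker process ID: " + workerProcessId);
}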


reference:http://www.codeproject.com/KB/aspnet/ExploringIIS.aspx

28 December 2011

When to Use Generic Collections in C#


The following generic types correspond to existing collection types (a short comparison sketch follows the list):

  • List<T> is the generic class corresponding to ArrayList.
  • Dictionary<TKey, TValue> is the generic class corresponding to Hashtable.
  • Collection<T> is the generic class corresponding to CollectionBase. Collection<T> can be used as a base class, but unlike CollectionBase it is not abstract, making it much easier to use.
  • ReadOnlyCollection<T> is the generic class corresponding to ReadOnlyCollectionBase. ReadOnlyCollection<T> is not abstract, and has a constructor that makes it easy to expose an existing List<T> as a read-only collection.
  • The Queue<T>, Stack<T>, and SortedList<TKey, TValue> generic classes correspond to the respective nongeneric classes with the same names.
  • There are several generic collection types that do not have nongeneric counterparts:
    • LinkedList<T> is a general-purpose linked list that provides O(1) insertion and removal operations.
    • SortedDictionary<TKey, TValue> is a sorted dictionary with O(log n) insertion and retrieval operations, making it a useful alternative to SortedList<TKey, TValue>.
    • KeyedCollection<TKey, TItem> is a hybrid between a list and a dictionary, which provides a way to store objects that contain their own keys.
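
To make the difference concrete, here is a minimal sketch contrasting ArrayList/Hashtable with their generic counterparts; the generic collections are type-safe at compile time and avoid boxing of value types:

[ C# ]
using System;
using System.Collections;
using System.Collections.Generic;

class GenericVsNonGeneric
{
    static void Main()
    {
        // Nongeneric: everything is stored as object, so type mistakes surface only at run time
        ArrayList oldList = new ArrayList();
        oldList.Add(1);
        oldList.Add("two");              // compiles, even though the list is meant to hold ints
        int first = (int)oldList[0];     // cast required; value types are boxed

        // Generic: the element type is part of the collection type
        List<int> newList = new List<int> { 1, 2, 3 };
        int firstTyped = newList[0];     // no cast, no boxing
        // newList.Add("two");           // would be a compile-time error

        // Dictionary<TKey, TValue> is the typed counterpart of Hashtable
        Dictionary<string, int> ages = new Dictionary<string, int>();
        ages["Alice"] = 30;
        Console.WriteLine(first + firstTyped + ages["Alice"]);
    }
}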

Difference Between Primary key and Unique key

1. Both the primary key and a unique key are one or more columns that uniquely identify a row in a table, but they have some important differences.
2. Most importantly, a table can have only a single primary key, while it can have more than one unique key (a sketch illustrating both follows this list). A primary key can be considered a special case of a unique key.
3. Another difference is that primary key columns have an implicit NOT NULL constraint, while unique key columns do not.
4. Therefore, unique key columns may or may not contain NULL values, but primary key columns cannot contain NULL values.
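
A minimal T-SQL sketch of the difference, using a hypothetical Employees table (the table and column names are illustrative only):

CREATE TABLE Employees
(
    EmployeeID int NOT NULL PRIMARY KEY,   -- only one primary key per table; NULL never allowed
    Email      varchar(100) NULL UNIQUE,   -- a unique key; SQL Server allows a single NULL here
    NationalID varchar(20) NULL UNIQUE     -- a table can carry several unique keys
)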


Database Keys

Super Key

A Super key is any combination of fields within a table that uniquely identifies each record within that table.

Candidate Key

A candidate key is a subset of a super key. A candidate key is a single field or the least combination of fields that uniquely identifies each record in the table. The least combination of fields distinguishes a candidate key from a super key. Every table must have at least one candidate key, but it can have several.

Candidate Key

As an example we might have a student_id that uniquely identifies the students in a student table. This would be a candidate key. But in the same table we might have the student’s first name and last name that also, when combined, uniquely identify the student in a student table. These would both be candidate keys.
In order to be eligible as a candidate key, it must pass certain criteria:
  • It must contain unique values
  • It must not contain null values
  • It contains the minimum number of fields to ensure uniqueness
  • It must uniquely identify each record in the table

Once your candidate keys have been identified, you can select one to be your primary key.

Primary Key

A primary key is a candidate key that is most appropriate to be the main reference key for the table. As its name suggests, it is the primary key of reference for the table and is used throughout the database to help establish relationships with other tables. As with any candidate key the primary key must contain unique values, must never be null and uniquely identify each record in the table.
As an example, a student id might be a primary key in a student table, a department code in a table of all departments in an organisation. This module has the code DH3D 35 that is no doubt used in a database somewhere to identify RDBMS as a unit in a table of modules. In the table below we have selected the candidate key student_id to be our most appropriate primary key

Primary Key


Secondary Key or Alternative Key

A table may have one or more choices for the primary key. Collectively these are known as candidate keys, as discussed earlier. One is selected as the primary key; those not selected are known as secondary keys or alternative keys.
For example, in the table showing candidate keys above we identified two candidate keys: studentId and firstName + lastName. The studentId would be the most appropriate primary key, leaving the other candidate key as the secondary or alternative key. It should be noted that for firstName + lastName to be a candidate key, we are assuming you will never have two people with the same first and last name combination. As this assumption is unlikely to hold, we might consider firstName + lastName a suspect candidate key, as it would be restrictive of the data you might enter; it would seem a shame not to allow a John Smith onto a course just because there was already another John Smith.

Simple Key

Any of the keys described before (i.e. primary, secondary or foreign) may comprise one or more fields; for example, if firstName and lastName were our key, this would be a key of two fields, whereas studentId is only one. A simple key consists of a single field that uniquely identifies a record. In addition, the field itself cannot be broken down into other fields. For example, studentId, which uniquely identifies a particular student, is a single field and therefore a simple key; no two students would have the same student number.

Compound Key

A compound key consists of more than one field to uniquely identify a record. A compound key is distinguished from a composite key because each field, which makes up the primary key, is also a simple key in its own right. An example might be a table that represents the modules a student is attending. This table has a studentId and a moduleCode as its primary key. Each of the fields that make up the primary key are simple keys because each represents a unique reference when identifying a student in one instance and a module in the other.
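
As a minimal T-SQL sketch of a compound key, using hypothetical table and column names based on the example above:

CREATE TABLE StudentModules
(
    StudentID  int NOT NULL,          -- on its own, a simple key into the student table
    ModuleCode varchar(10) NOT NULL,  -- on its own, a simple key into the module table
    CONSTRAINT PK_StudentModules PRIMARY KEY (StudentID, ModuleCode)  -- the compound primary key
)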

Composite Key

A composite key consists of more than one field to uniquely identify a record. It differs from a compound key in that one or more of the attributes that make up the key are not simple keys in their own right. Taking the example from the compound key, imagine we identified a student by firstName + lastName. In our table representing students on modules, the primary key would now be firstName + lastName + moduleCode. Because firstName and lastName only form a unique reference to a student when combined, they are not each simple keys; they have to be combined to uniquely identify the student. Therefore the key for this table is a composite key.


23 December 2011

String is Immutable in C# and Java, Use of String builder in C#

In C#, a string is immutable and cannot be altered. When you alter a string, you are actually creating a new string, which uses more memory than necessary, creates more work for the garbage collector, and makes code execution slower. When a string is modified frequently, it begins to be a burden on performance. The seemingly innocent example below creates three string objects.

string msg = "Your total is ";//String object 1 is created
msg += "$500 ";            //String object 2 is created
msg += DateTime.Now;      //String object 3 is created

StringBuilder is a string-like object whose value is a mutable sequence of characters. The value is said to be mutable because it can be modified once it has been created by appending, removing, replacing, or inserting characters. You would modify the above code like this.

StringBuilder sb = new StringBuilder();
sb.Append("Your total is ");
sb.Append("$500 ");
sb.Append(DateTime.Now);

The individual characters in the value of a StringBuilder can be accessed with the Chars property. Index positions start from zero.
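
Continuing the example above, a minimal sketch of reading a character and producing the final string:

char firstChar = sb[0];          // the Chars property is exposed as the indexer in C#; positions start at zero
string message = sb.ToString();  // converts the mutable buffer into an immutable string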



21 December 2011

SQL HAVING Clause

The HAVING clause was added to SQL because the WHERE keyword could not be used with aggregate functions.

For example:

SELECT Customer,SUM(OrderPrice) FROM Orders
GROUP BY Customer
HAVING SUM(OrderPrice)<2000
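
As a minimal sketch of how the two clauses combine, assuming the hypothetical Orders table also has an OrderDate column: WHERE filters individual rows before grouping, while HAVING filters the resulting groups.

SELECT Customer, SUM(OrderPrice) AS Total
FROM Orders
WHERE OrderDate >= '2011-01-01'      -- row-level filter, applied before grouping
GROUP BY Customer
HAVING SUM(OrderPrice) < 2000        -- group-level filter, applied to the aggregates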


19 December 2011

SqlBulkCopy in C#

SqlBulkCopy lets you efficiently bulk load a SQL Server table with data from another source (here, a DataTable named dt).

 using (SqlBulkCopy bulkCopy =
     new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
 {
     bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
     try
     {
         // Write from the source to the destination; the source can be a DataTable,
         // a DataRow array, or an IDataReader. Here dt is a DataTable.
         bulkCopy.WriteToServer(dt);
     }
     catch (Exception ex)
     {
         Console.WriteLine(ex.Message);
     }
 }
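
If the source and destination column names do not match, you can map them explicitly before calling WriteToServer. A minimal sketch with hypothetical column names:

bulkCopy.ColumnMappings.Add("SourceProductID", "ProductID");      // source column -> destination column
bulkCopy.ColumnMappings.Add("SourceProductName", "ProductName");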

Single responsibility principle in OOPS

In object-oriented programming, the single responsibility principle states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.



Consider, for example, a class that both compiles the content of a report and formats it for printing; such a class can change for two different reasons, because the content changes or because the presentation changes. The single responsibility principle says that these two aspects of the problem are really two separate responsibilities, and should therefore be in separate classes or modules. It would be a bad design to couple two things that change for different reasons at different times.
The reason it is important to keep a class focused on a single concern is that it makes the class more robust.
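
A minimal C# sketch of the idea, using hypothetical report classes: compiling the report content and printing it are separate responsibilities, so they live in separate classes.

[ C# ]
using System;
using System.Collections.Generic;

// Responsibility 1: compiling the report content
class ReportCompiler
{
    public string Compile(IEnumerable<string> lines)
    {
        return string.Join(Environment.NewLine, lines);
    }
}

// Responsibility 2: presenting/printing the report
class ReportPrinter
{
    public void Print(string reportBody)
    {
        Console.WriteLine("=== REPORT ===");
        Console.WriteLine(reportBody);
    }
}

class Program
{
    static void Main()
    {
        string body = new ReportCompiler().Compile(new List<string> { "line 1", "line 2" });
        new ReportPrinter().Print(body);
    }
}

A change to the report's content now touches only ReportCompiler, and a change to its presentation touches only ReportPrinter.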

16 December 2011

Varchar vs nvarchar in Sql Server

1. If your database will not be storing multilingual data, you should use the varchar data type instead. The reason is that nvarchar takes twice as much space as varchar.

(Multilingual means expressed in several languages.)

2. nvarchar supports Unicode.

  The following data types support Unicode data:

  nchar

  nvarchar

  ntext

Unicode is a computing-industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems.
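
A minimal T-SQL sketch showing the size difference; DATALENGTH returns the number of bytes a value occupies:

DECLARE @a varchar(50), @b nvarchar(50)
SET @a = 'hello'
SET @b = N'hello'
SELECT DATALENGTH(@a) AS VarcharBytes,    -- 5 bytes: one byte per character
       DATALENGTH(@b) AS NvarcharBytes    -- 10 bytes: two bytes per character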


13 December 2011

Session-State Modes in asp.net


1. InProc mode
2. StateServer mode
3. SQLServer mode
4. Custom mode
5. Off mode

InProc mode, which stores session state in memory on the Web server. This is
the default.

StateServer mode, which stores session state in a separate process called
the ASP.NET state service. This ensures that session state is preserved if
the Web application is restarted and also makes session state available to
multiple Web servers in a Web farm.

SQLServer mode stores session state in a SQL Server database. This ensures
that session state is preserved if the Web application is restarted and also
makes session state available to multiple Web servers in a Web farm.

Custom mode, which enables you to specify a custom storage provider.

Off mode, which disables session state.
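
The mode is selected in web.config. A minimal sketch switching a site to the ASP.NET state service (StateServer mode), assuming the state service is running on the local machine on its default port:

<configuration>
  <system.web>
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=127.0.0.1:42424"
                  timeout="20" />
  </system.web>
</configuration>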


http://msdn.microsoft.com/en-us/library/ms178586.aspx

8 December 2011

Microsoft SQL Server Constraints



Data integrity rules fall into one of three categories: entity, referential, and domain. We want to briefly describe these terms to provide a complete discussion.

Entity Integrity

Entity integrity ensures each row in a table is a uniquely identifiable entity. You can apply entity integrity to a table by specifying a PRIMARY KEY constraint. For example, the ProductID column of the Products table is a primary key for the table.

Referential Integrity

Referential integrity ensures the relationships between tables remain preserved as data is inserted, deleted, and modified. You can apply referential integrity using a FOREIGN KEY constraint. The ProductID column of the Order Details table has a foreign key constraint applied referencing the Products table. The constraint prevents an Order Detail record from using a ProductID that does not exist in the database. Also, you cannot remove a row from the Products table if an order detail references the ProductID of the row.
Entity and referential integrity together form key integrity.

Domain Integrity

Domain integrity ensures the data values inside a database follow defined rules for values, range, and format. A database can enforce these rules using a variety of techniques, including CHECK constraints, UNIQUE constraints, and DEFAULT constraints. These are the constraints we will cover in this article, but be aware there are other options available to enforce domain integrity. Even the selection of the data type for a column enforces domain integrity to some extent. For instance, the selection of datetime for a column data type is more restrictive than a free format varchar field.
The following list gives a sampling of domain integrity constraints.
  • A product name cannot be NULL.
  • A product name must be unique.
  • The date of an order must not be in the future.
  • The product quantity in an order must be greater than zero.

Unique Constraints

As we have already discussed, a unique constraint uses an index to ensure a column (or set of columns) contains no duplicate values. By creating a unique constraint, instead of just a unique index, you are telling the database you really want to enforce a rule, and are not just providing an index for query optimization. The database will not allow someone to drop the index without first dropping the constraint.
From a SQL point of view, there are three methods available to add a unique constraint to a table. The first method is to create the constraint inside of CREATE TABLE as a column constraint. A column constraint applies to only a single column. The following SQL will create a unique constraint on a new table: Products_2.

CREATE TABLE Products_2 
( 
    ProductID int PRIMARY KEY,
    ProductName nvarchar (40) Constraint IX_ProductName UNIQUE
)

This command will actually create two unique indexes. One is the unique, clustered index given by default to the primary key of a table. The second is the unique index using the ProductName column as a key and enforcing our constraint.
A different syntax allows you to create a table constraint. Unlike a column constraint, a table constraint is able to enforce a rule across multiple columns. A table constraint is a separate element in the CREATE TABLE command. We will see an example of using multiple columns when we build a special CHECK constraint later in the article. Notice there is now a comma after the ProductName column definition.

CREATE TABLE Products_2 
( 
    ProductID int PRIMARY KEY,
    ProductName nvarchar (40),
    CONSTRAINT IX_ProductName UNIQUE(ProductName) 
)
The final way to create a constraint via SQL is to add a constraint to an existing table using the ALTER TABLE command, as shown in the following command:

CREATE TABLE Products_2
( 
   ProductID int PRIMARY KEY,
   ProductName nvarchar (40)
)

ALTER TABLE Products_2 
    ADD CONSTRAINT IX_ProductName UNIQUE (ProductName)

If duplicate data values exist in the table when the ALTER TABLE command runs, you can expect an error message similar to the following:

Server: Msg 1505, Level 16, State 1, Line 1
CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 2. 
Most significant primary key is 'Hamburger'.
Server: Msg 1750, Level 16, State 1, Line 1
Could not create constraint. See previous errors.
The statement has been terminated.

Check Constraints

Check constraints contain an expression the database will evaluate when you modify or insert a row. If the expression evaluates to false, the database will not save the row. Building a check constraint is similar to building a WHERE clause. You can use many of the same operators (>, <, <=, >=, <>, =) in addition to BETWEEN, IN, LIKE, and NULL. You can also build expressions around AND and OR operators. You can use check constraints to implement business rules, and to tighten down the values and formats allowed for a particular column.
We can use the same three techniques we learned earlier to create a check constraint using SQL. The first technique places the constraint after the column definition, as shown below. Note the constraint name is optional for unique and check constraints.

CREATE TABLE Products_2
(
    ProductID int PRIMARY KEY,
    UnitPrice money CHECK(UnitPrice > 0 AND UnitPrice < 100)
)

In the above example we are restricting values in the UnitPrice column between 0 and 100. Let’s try to insert a value outside of this range with the following SQL.

INSERT INTO Products_2 VALUES(1, 101)

The database will not save the values and should respond with the following error.

Server: Msg 547, Level 16, State 1, Line 1
INSERT statement conflicted with COLUMN CHECK constraint 'CK__Products___UnitP__2739D489'.
The conflict occurred in database 'Northwind', table 'Products_2', column 'UnitPrice'.
The statement has been terminated.
The following sample creates the constraint as a table constraint, separate from the column definitions.

CREATE TABLE Products_2
(
    ProductID int PRIMARY KEY,
    UnitPrice money,
    CONSTRAINT CK_UnitPrice2 CHECK(UnitPrice > 0 AND UnitPrice < 100)
)
Remember, with a table constraint you can reference multiple columns. The constraint in the following example will ensure we have either a telephone number or a fax number for every customer.

CREATE TABLE Customers_2 
(
    CustomerID int,
    Phone varchar(24),
    Fax varchar(24), 
    CONSTRAINT CK_PhoneOrFax 
           CHECK(Fax IS NOT NULL OR PHONE IS NOT NULL)
)
You can also add check constraints to a table after the table exists using the ALTER TABLE syntax. The following constraint will ensure an employee's date of hire is always in the past by using the system function GETDATE().

CREATE TABLE Employees_2
(
    EmployeeID int,
    HireDate datetime
)

ALTER TABLE Employees_2
  ADD CONSTRAINT CK_HireDate CHECK(hiredate < GETDATE())

Check Constraints and Existing Values

As with UNIQUE constraints, adding a CHECK constraint after a table is populated runs a chance of failure, because the database will check existing data for conformance. This is not optional behavior with a unique constraint, but it is possible to avoid the conformance test when adding a CHECK constraint using WITH NOCHECK syntax in SQL.

CREATE TABLE Employees_2
(
    EmployeeID int,
    Salary money
)

INSERT INTO Employees_2 VALUES(1, -1)

ALTER TABLE Employees_2 WITH NOCHECK 
    ADD CONSTRAINT CK_Salary CHECK(Salary > 0)

Check Constraints and NULL Values

Earlier in this section we mentioned how the database will only stop a data modification when a check constraint returns false. We did not mention, however, that the database allows the modification to take place if the result is logically unknown. A logically unknown expression happens when a NULL value is present in the expression. For example, let's use the following insert statement on the last table created above.

INSERT INTO Employees_2 (EmployeeID, Salary) VALUES(2, NULL)

Even with the constraint on salary (Salary > 0) in place, the INSERT is successful. A NULL value makes the expression logically unknown. A CHECK constraint will only fail an INSERT or UPDATE if the expression in the constraint explicitly returns false. An expression returning true, or a logically unknown expression will let the command succeed.

Restrictions On Check Constraints

Although check constraints are by far the easiest way to enforce domain integrity in a database, they do have some limitations, namely:
  • A check constraint cannot reference a different row in a table.
  • A check constraint cannot reference a column in a different table.

NULL Constraints

Although not a constraint in the strictest definition, the decision to allow NULL values in a column or not is a type of rule enforcement for domain integrity.
Using SQL you can use NULL or NOT NULL on a column definition to explicitly set the nullability of a column. In the following example table, the FirstName column will accept NULL values while LastName always requires a non NULL value. Primary key columns require a NOT NULL setting, and default to this setting if not specified.

CREATE TABLE Employees_2 
(
    EmployeeID int PRIMARY KEY,
    FirstName varchar(50) NULL,
    LastName varchar(50) NOT NULL
)

If you do not explicitly set a column to allow or disallow NULL values, the database uses a number of rules to determine the "nullability" of the column, including the current configuration settings on the server. I recommend you always define a column explicitly as NULL or NOT NULL in your scripts to avoid problems when moving between different server environments.
Given the above table definition, the following two INSERT statements can succeed.

INSERT INTO Employees_2 VALUES(1, 'Geddy', 'Lee')
INSERT INTO Employees_2 VALUES(2, NULL, 'Lifeson')

However, the following INSERT statement should fail with the error shown below.

INSERT INTO Employees_2 VALUES(3, 'Neil', NULL)

Server: Msg 515, Level 16, State 2, Line 1
Cannot insert the value NULL into column 'LastName', table 'Northwind.dbo.Employees_2'; 
column does not allow nulls. INSERT fails.
The statement has been terminated.

You can declare columns in a unique constraint to allow NULL values. However, the constraint checking considers NULL values as equal, so on a single column unique constraint, the database allows only one row to have a NULL value.

Default Constraints

Default constraints apply a value to a column when an INSERT statement does not specify the value for the column. Although default constraints do not enforce a rule like the other constraints we have seen, they do provide proper values to keep domain integrity intact. A default can assign a constant value, the value of a system function, or NULL to a column. You can use a default on any column except IDENTITY columns and columns of type timestamp.
The following example demonstrates how to place the default value inline with the column definition. We also mix in some of the other constraints we have seen in this article to show you how you can put everything together.

CREATE TABLE Orders_2
(
    OrderID int IDENTITY NOT NULL ,
    EmployeeID int NOT NULL ,
    OrderDate datetime NULL DEFAULT(GETDATE()),
    Freight money NULL DEFAULT (0) CHECK(Freight >= 0),
    ShipAddress nvarchar (60) NULL DEFAULT('NO SHIPPING ADDRESS'),
    EnteredBy nvarchar (60) NOT NULL DEFAULT(SUSER_SNAME())
)

We can examine the behavior of the defaults with the following INSERT statement, placing values only in the EmployeeID and Freight fields.

INSERT INTO Orders_2 (EmployeeID, Freight) VALUES(1, NULL)

If we then query the table to see the row we just inserted, we should see the following results.

  OrderID:1
  EmployeeID:1 
  OrderDate:2003-01-02
  Freight: NULL
  ShipAddress: NO SHIPPING ADDRESS
  EnteredBy: sa

Notice the Freight column did not receive the default value of 0. Specifying a NULL value is not the equivalent of leaving the column value unspecified: the database does not use the default, and NULL is placed in the column instead.

Maintaining Constraints

In this section we will examine how to delete an existing constraint. We will also take a look at a special capability to temporarily disable constraints for special processing scenarios.

Dropping Constraints

First, let's remove the check on UnitPrice in the Products table.

ALTER TABLE Products
    DROP CONSTRAINT CK_Products_UnitPrice

If all you need to do is drop a constraint to allow a one time circumvention of the rules enforcement, a better solution is to temporarily disable the constraint, as we explain in the next section.

Disabling Constraints

Special situations often arise in database development where it is convenient to temporarily relax the rules. For example, it is often easier to load initial values into a database one table at a time, without worrying about foreign key constraints and checks until all of the tables have finished loading. After the import is complete, you can turn constraint checking back on and know the database is once again protecting the integrity of the data.
Note: The only constraints you can disable are the FOREIGN KEY constraint, and the CHECK constraint. PRIMARY KEY, UNIQUE, and DEFAULT constraints are always active.
Disabling a constraint using SQL is done through the ALTER TABLE command. The following statements disable the CHECK constraint on the UnitsOnOrder column, and the FOREIGN KEY constraint on the CategoryID column.

ALTER TABLE Products NOCHECK CONSTRAINT CK_UnitsOnOrder 
ALTER TABLE Products NOCHECK CONSTRAINT FK_Products_Categories

If you need to disable all of the constraints on a table, manually navigating through the interface or writing a SQL command for each constraint may prove to be a laborious process. There is an easy alternative using the ALL keyword, as shown below:

ALTER TABLE Products NOCHECK CONSTRAINT ALL

You can re-enable just the CK_UnitsOnOrder constraint again with the following statement:

ALTER TABLE Products CHECK CONSTRAINT CK_UnitsOnOrder

When a disabled constraint is re-enabled, the database does not check to ensure any of the existing data meets the constraints. We will touch on this subject shortly. To turn on all constraints for the Products table, use the following command:

ALTER TABLE Products CHECK CONSTRAINT ALL

Manually Checking Constraints

With the ability to disable and re-enable constraints, and the ability to add constraints to a table using the WITH NOCHECK option, you can certainly run into a condition where the referential or domain integrity of your database is compromised. For example, let’s imagine we ran the following INSERT statement after disabling the CK_UnitsOnOrder constraint:

INSERT INTO Products (ProductName, UnitsOnOrder) VALUES('Scott''s Stuffed Shells', -1)

The above insert statement inserts a -1 into the UnitsOnOrder column, a clear violation of the CHECK constraint in place on the column. When we re-enable the constraint, SQL Server will not complain as the data is not checked. Fortunately, SQL Server provides a Database Console Command you can run from any query tool to check all enabled constraints in a database or table. With CK_UnitsOnOrder re-enabled, we can use the following command to check for constraint violations in the Products table.

dbcc checkconstraints(Products)

To check an entire database, omit the parentheses and parameter from the DBCC command. The above command will give the following output and find the violated constraint in the Products table.

Table     Constraint       Where 
--------- ---------------- -------------------- 
Products  CK_UnitsOnOrder  UnitsOnOrder = '-1'


You can use the information in the DBCC output to track down the offending row.


reference:http://odetocode.com/articles/79.aspx

2 December 2011

Bucket Problem In C# Windows application

Three-Bucket Problem
Here is another classic puzzle called the Three-Bucket Problem. In this scenario, there is an 8-liter bucket filled with water and empty 3-liter and 5-liter buckets. You solve the puzzle by using the three buckets to divide the 8 liters of water into two equal parts of 4 liters.

Expected Result:
Drag a bucket above the bucket you wish to pour into. Continue until you have divided the 8 liters into two buckets with 4 liters in each.
For Reference:
http://learningandtheadolescentmind.org/resources_02_bucket.html





//Bucket Problem
//Three buckets A, B and C hold 8 litres, 5 litres and 3 litres respectively
//Goal: finish with 4 litres of water in A and 4 litres in B, without wasting any water, leaving C empty


using System;
using System.Windows.Forms;


namespace Bucket_Problem
{
    public partial class Form1 : Form
    {
        //Number of attempts by the user
        int count = 0;

        /// <summary>
        /// Constant maximum capacity for each bucket
        /// </summary>
        public const int bucketA = 8, bucketB = 5, bucketC = 3;

        /// <summary>
        /// Initial value for the buckets
        /// </summary>
        public int bucketAvalue = 0, bucketBvalue = 0, bucketCvalue = 0;
        public int tmpA = 0, tmpB = 0, tmpC = 0;
        public Form1()
        {
            InitializeComponent();
        }


    
        private void Form1_Load(object sender, EventArgs e)
        {
            //Fill the Combo Box at Load Event
            FillComboboxatLoad();
           //Initial Value for Bucket
            txtAbucket.Text = "8";
            txtBbucket.Text = "0";
            txtCbucket.Text = "0";


        }
        
        /// <summary>
        /// Fill the Combo Box at Load Event
        /// </summary>
        public void FillComboboxatLoad()
        {
            cmbFrom.Items.Add("Select...");
            cmbFrom.Items.Add("Bucket A");
            cmbFrom.Items.Add("Bucket B");
            cmbFrom.Items.Add("Bucket C");
            cmbFrom.SelectedIndex = 0;
            
            cmbTo.Items.Add("Select...");
            cmbTo.Items.Add("Bucket A");
            cmbTo.Items.Add("Bucket B");
            cmbTo.Items.Add("Bucket C");
            cmbTo.SelectedIndex = 0;
        }
   
        /// <summary>
        /// Used for pouring water from one bucket into another bucket
        /// </summary>
        private void btnGo_Click(object sender, EventArgs e)
        {


            if ((cmbFrom.SelectedItem.ToString().Trim() == "Select...")||(cmbTo.SelectedItem.ToString().Trim()) == "Select...")
            {   


            MessageBox.Show("You must Select the Bucket in Combo Box.", "Name Entry Error",
            MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
            }
            else
            {
             //Increment The Count Value
            count = count + 1;
            bucketAvalue=Convert.ToInt16(txtAbucket.Text);
            bucketBvalue =Convert.ToInt16(txtBbucket.Text);
            bucketCvalue = Convert.ToInt16(txtCbucket.Text);
            try
            {
                
                //A -> B: pour water from Bucket A into Bucket B
                //This condition checks whether A and B were selected as the From and To buckets


                if (cmbFrom.SelectedItem.ToString() == "Bucket A")
                {
                    if (cmbTo.SelectedItem.ToString() == "Bucket B")
                    {
                        if (bucketAvalue + bucketBvalue <= bucketB)
                        {
                            bucketBvalue = bucketAvalue + bucketBvalue;


                            bucketAvalue = 0;
                        }
                        else if (bucketAvalue + bucketBvalue > bucketB)
                        {
                            tmpB = (bucketB - bucketBvalue);
                            bucketAvalue = bucketAvalue - tmpB;
                            if (bucketAvalue < 0)
                                bucketAvalue = bucketAvalue * (-1);
                            bucketBvalue = bucketBvalue + tmpB;




                        }
                        txtAbucket.Text = bucketAvalue.ToString();
                        txtBbucket.Text = bucketBvalue.ToString();


                    }
                
                    //A -> C: pour water from Bucket A into Bucket C
                    //This condition checks whether A and C were selected as the From and To buckets
                    if (cmbTo.SelectedItem.ToString() == "Bucket C")
                    {
                        if (bucketAvalue + bucketCvalue <= bucketC)
                        {
                            bucketCvalue = bucketAvalue + bucketCvalue;


                            bucketAvalue = 0;
                        }
                        else if (bucketAvalue + bucketCvalue > bucketC)
                        {
                            tmpC = (bucketC - bucketCvalue);
                            bucketAvalue = bucketAvalue - tmpC;
                            if (bucketAvalue < 0)
                                bucketAvalue = bucketAvalue * (-1);
                            bucketCvalue = bucketCvalue + tmpC;




                        }
                        txtAbucket.Text = bucketAvalue.ToString();
                        txtCbucket.Text = bucketCvalue.ToString();


                    }




                }
                
                //B -> A: pour water from Bucket B into Bucket A
                //This condition checks whether B and A were selected as the From and To buckets
                if (cmbFrom.SelectedItem.ToString() == "Bucket B")
                {
                    if (cmbTo.SelectedItem.ToString() == "Bucket A")
                    {
                        if (bucketBvalue + bucketAvalue <= bucketA)
                        {
                            bucketAvalue = bucketBvalue + bucketAvalue;


                            bucketBvalue = 0;
                        }
                        else if (bucketAvalue + bucketBvalue > bucketA)
                        {
                            tmpA = (bucketA - bucketAvalue);
                            bucketBvalue = bucketBvalue - tmpA;
                            if (bucketBvalue < 0)
                                bucketBvalue = bucketBvalue * (-1);
                            bucketAvalue = bucketAvalue + tmpA;




                        }
                        txtAbucket.Text = bucketAvalue.ToString();
                        txtBbucket.Text = bucketBvalue.ToString();


                    }
                   
                    //B -> C: pour water from Bucket B into Bucket C
                    //This condition checks whether B and C were selected as the From and To buckets
                    if (cmbTo.SelectedItem.ToString() == "Bucket C")
                    {
                        if (bucketBvalue + bucketCvalue <= bucketC)
                        {
                            bucketCvalue = bucketBvalue + bucketCvalue;


                            bucketBvalue = 0;
                        }
                        else if (bucketBvalue + bucketCvalue > bucketC)
                        {
                            tmpC = (bucketC - bucketCvalue);
                            bucketBvalue = bucketBvalue - tmpC;
                            if (bucketBvalue < 0)
                                bucketBvalue = bucketBvalue * (-1);
                            bucketCvalue = bucketCvalue + tmpC;




                        }
                        txtBbucket.Text = bucketBvalue.ToString();
                        txtCbucket.Text = bucketCvalue.ToString();


                    }
                }


                //C -> A: pour water from Bucket C into Bucket A
                //This condition checks whether C and A were selected as the From and To buckets
            if (cmbFrom.SelectedItem.ToString() == "Bucket C")
            {
                    if (cmbTo.SelectedItem.ToString() == "Bucket A")
                    {
                        if (bucketCvalue + bucketAvalue <= bucketA)
                        {
                            bucketAvalue = bucketCvalue + bucketAvalue;


                            bucketCvalue = 0;
                        }
                        else if (bucketAvalue + bucketCvalue > bucketA)
                        {
                            tmpA = (bucketA - bucketAvalue);
                            bucketCvalue = bucketCvalue - tmpA;
                            if (bucketCvalue < 0)
                                bucketCvalue = bucketCvalue * (-1);
                            bucketAvalue = bucketAvalue + tmpA;




                        }
                        txtAbucket.Text = bucketAvalue.ToString();
                        txtCbucket.Text = bucketCvalue.ToString();


                    }
                    //C -> B: pour water from Bucket C into Bucket B
                    //This condition checks whether C and B were selected as the From and To buckets
                    if (cmbTo.SelectedItem.ToString() == "Bucket B")
                    {
                        if (bucketBvalue + bucketCvalue <= bucketB)
                        {
                            bucketBvalue = bucketBvalue + bucketCvalue;


                            bucketCvalue = 0;
                        }
                        else if (bucketCvalue + bucketBvalue > bucketB)
                        {
                            tmpB = (bucketB - bucketBvalue);
                            bucketCvalue = bucketCvalue - tmpB;
                            if (bucketCvalue < 0)
                                bucketCvalue = bucketCvalue * (-1);
                            bucketBvalue = bucketBvalue + tmpB;
                        }
                        txtBbucket.Text = bucketBvalue.ToString();
                        txtCbucket.Text = bucketCvalue.ToString();
                    }
           
            }
            lbCount.Text = count.ToString();
            if ((txtAbucket.Text == "4") && (txtBbucket.Text == "4"))
            {
                lbCount.Text = "You won! Your count is " + count;
                
            }


            
         }
            
         catch (Exception ex)
         {
            throw;   // rethrow without resetting the stack trace
         }
            }
        }
        private void cmbFrom_SelectedIndexChanged(object sender, EventArgs e)
        {
            //Fill the Combobox when it is Bucket A
            if (cmbFrom.SelectedItem.ToString() == "Bucket A")
            {
             
                cmbTo.Items.Clear();
                cmbTo.Items.Add("Select...");
                cmbTo.Items.Add("Bucket B");
                cmbTo.Items.Add("Bucket C");
               cmbTo.SelectedIndex = 0;
                
            }
            //Fill the Combobox when it is Bucket B
            if (cmbFrom.SelectedItem.ToString() == "Bucket B")
            {
               
                cmbTo.Items.Clear();
                cmbTo.Items.Add("Select...");
                cmbTo.Items.Add("Bucket A");
                cmbTo.Items.Add("Bucket C");
                cmbTo.SelectedIndex = 0;
            }
            //Fill the Combobox when it is Bucket C
            if (cmbFrom.SelectedItem.ToString() == "Bucket C")
            {
             
                cmbTo.Items.Clear();
                cmbTo.Items.Add("Select...");
                cmbTo.Items.Add("Bucket A");
                cmbTo.Items.Add("Bucket B");
                cmbTo.SelectedIndex = 0;
            }


        }


    }
}


Consistency level in Azure cosmos db

Azure Cosmos DB offers five well-defined consistency levels to provide developers with the flexibility...