30 April 2024

OAuth and C# Integration


Authenticate users with Microsoft Accounts
      
Steps
  1. Register your application in the Microsoft identity platform (Azure Active Directory).
  2. Install the required NuGet packages.
    1. Microsoft.Identity.Client
  3. Configure the OAuth settings, including the client ID, tenant ID, and client secret.
  4. Implement authentication.
  5. Handle token acquisition with the AcquireTokenByAuthorizationCode method.
  6. Refresh the token when it expires.
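
The steps above can be sketched with MSAL.NET (the Microsoft.Identity.Client package); the client ID, tenant ID, secret, redirect URI, and authorization code below are placeholder values, not real settings:

```csharp
using Microsoft.Identity.Client;

// Placeholder values - take these from your app registration in Azure AD.
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("your-client-id")
    .WithClientSecret("your-client-secret")
    .WithAuthority("https://login.microsoftonline.com/your-tenant-id")
    .WithRedirectUri("https://localhost/signin-oidc")
    .Build();

// Exchange the authorization code from the sign-in redirect for tokens.
AuthenticationResult result = await app
    .AcquireTokenByAuthorizationCode(new[] { "user.read" }, authorizationCode)
    .ExecuteAsync();

Console.WriteLine(result.AccessToken);
```

MSAL caches tokens and refreshes them silently, so later calls can use AcquireTokenSilent instead of handling refresh tokens by hand.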

Solid Principles - Project example



Single Responsibility Principle

A class should have only one job, i.e., one reason to change. Example: separate Mail and SMS senders.
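
A minimal sketch of that split (the class and method names are illustrative):

```csharp
// Each class has a single responsibility and a single reason to change.
public class MailService
{
    public void SendMail(string to, string body) =>
        Console.WriteLine($"Mail to {to}: {body}");
}

public class SmsService
{
    public void SendSms(string number, string text) =>
        Console.WriteLine($"SMS to {number}: {text}");
}
```

If mail and SMS lived in one class, a change to SMS formatting would force a change (and retest) of the mail path as well.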
Open/Closed Principle
We can extend or override the class's functionality without changing the existing class (Abstract -> Override). Example: maintaining multiple versions side by side.
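
A sketch of the Abstract -> Override approach for version maintenance (hypothetical class names):

```csharp
// Open for extension, closed for modification: new versions override
// behavior instead of editing the original class.
public abstract class ReportGenerator
{
    public abstract string Generate();
}

public class ReportGeneratorV1 : ReportGenerator
{
    public override string Generate() => "v1 report";
}

// Adding v2 does not touch the v1 class at all.
public class ReportGeneratorV2 : ReportGenerator
{
    public override string Generate() => "v2 report";
}
```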

Liskov Substitution Principle

A derived class must be substitutable for its base class: any code written against the base type must keep working when handed a derived type.

A derived type should fulfil the contract of the abstract class or interface it inherits - no empty, missing, or throwing overrides.
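
A small illustrative sketch (the Bird hierarchy is a hypothetical example):

```csharp
public abstract class Bird
{
    public abstract string Move();
}

public class Sparrow : Bird
{
    public override string Move() => "fly";
}

// Substitutable: Penguin still fulfils the Move() contract instead of
// throwing NotImplementedException, so code written against Bird keeps working.
public class Penguin : Bird
{
    public override string Move() => "swim";
}
```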

Interface Segregation Principle
Clients should not be forced to implement interfaces they don't use. Instead of one fat interface, many small interfaces are preferred.
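
For example (hypothetical interfaces), splitting a fat device interface by capability:

```csharp
// Instead of one fat IMachine interface, split by capability.
public interface IPrinter { void Print(string doc); }
public interface IScanner { void Scan(string doc); }

// A simple printer implements only what it supports.
public class BasicPrinter : IPrinter
{
    public void Print(string doc) => Console.WriteLine($"Printing {doc}");
}

// A multifunction device opts into both capabilities.
public class MultiFunctionDevice : IPrinter, IScanner
{
    public void Print(string doc) => Console.WriteLine($"Printing {doc}");
    public void Scan(string doc) => Console.WriteLine($"Scanning {doc}");
}
```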

Dependency Inversion Principle

High-level modules/classes should not depend on low-level modules/classes; both should depend upon abstractions.

Use dependency injection to supply the dependency.
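
A minimal constructor-injection sketch (the interface and class names are illustrative):

```csharp
public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message) => Console.WriteLine($"Email: {message}");
}

// The high-level class depends only on the abstraction, which is
// injected through the constructor.
public class NotificationService
{
    private readonly IMessageSender _sender;

    public NotificationService(IMessageSender sender) => _sender = sender;

    public void Notify(string message) => _sender.Send(message);
}

// Usage: new NotificationService(new EmailSender()).Notify("Order shipped");
// Swapping EmailSender for any other IMessageSender needs no change here.
```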

25 October 2023

Consistency levels in Azure Cosmos DB


Azure Cosmos DB offers five well-defined consistency levels to provide developers with the flexibility to choose the desired trade-off between consistency, availability, and latency for their applications. Here are the available consistency levels in Azure Cosmos DB:

 1. Strong Consistency:

   - Description: Guarantees linearizability, meaning that reads are guaranteed to return the most recent committed version of the data across all replicas. It ensures the highest consistency but might have higher latency and lower availability compared to other levels.

   - Use Case: Critical applications where strong consistency is required, such as financial or legal applications.

 2. Bounded Staleness:

   - Description: Allows you to specify a maximum lag between reads and writes. You can set the maximum staleness in terms of time or number of versions (operations). This level provides a balance between consistency, availability, and latency.

   - Use Case: Scenarios where you can tolerate a slight delay in consistency, such as social media feeds or news applications.

 3. Session Consistency:

   - Description: Guarantees monotonic reads and writes within a single client session. Once a client reads a particular version of the data, it will never see a version of the data that's older than that in subsequent reads. This level provides consistency within a user session.

   - Use Case: User-specific data scenarios where consistency within a user session is important, like in online shopping carts.

 4. Consistent Prefix:

   - Description: Guarantees that reads never see out-of-order writes. Within a single partition key, reads are guaranteed to see writes in the order they were committed. This level provides consistency per partition key.

   - Use Case: Applications that require ordered operations within a partition key, such as time-series data or event logging.

 5. Eventual Consistency:

   - Description: Provides the weakest consistency level. Guarantees that, given enough time and lack of further updates, all replicas will converge to the same data. There is no strict guarantee about the order in which updates are applied to different replicas.

   - Use Case: Scenarios where eventual consistency is acceptable, such as content management systems or applications dealing with non-critical data.


When selecting a consistency level, it's essential to understand the requirements of your application, as well as the implications on latency, availability, and throughput. Each consistency level offers a different balance between these factors, allowing you to tailor the database behavior to match your application's needs.
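
As a sketch with the .NET SDK (Microsoft.Azure.Cosmos), the level can be chosen per client when it is created; the endpoint and key below are placeholders, and the requested level can only be the same as or weaker than the account's default:

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder endpoint and key - replace with your account's values.
var client = new CosmosClient(
    "https://your-account.documents.azure.com:443/",
    "your-account-key",
    new CosmosClientOptions
    {
        // Relax the account default to session consistency for this client.
        ConsistencyLevel = ConsistencyLevel.Session
    });
```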

Reprocess data in the Dead Letter Queue - Service Bus


Reprocessing data from the Dead Letter Queue in Azure Service Bus involves inspecting the failed messages, addressing the issues that caused the failures, and then resubmitting the corrected messages for processing. Here's how you can reprocess data from the Dead Letter Queue:

 1. Inspect Messages:

   - First, you need to identify and understand the reason why messages ended up in the Dead Letter Queue. Inspect the messages to determine the cause of the failure. You can do this by reading the messages' content, properties, and error details.

 2. Correct the Issues:

   - Once you've identified the issues, correct the problems with the messages. This might involve fixing data inconsistencies, updating message formats, or addressing any other issues that caused the messages to fail.

3. Manually Resubmit Messages:

   - After correcting the problematic messages, you can manually resubmit them to the main queue or topic for reprocessing. To do this, you need to create a new message with the corrected content and send it to the appropriate queue or topic.

   Example (using Azure SDK for .NET):

   ```csharp
   // Read the message from the Dead Letter Queue
   var deadLetterMessage = await deadLetterReceiver.ReceiveAsync();

   // Correct the issues with the message
   // ...

   // Create a new message with the corrected content
   var correctedMessage = new Message(Encoding.UTF8.GetBytes("Corrected Message Content"))
   {
       // Set message properties, if necessary
       MessageId = "NewMessageId"
   };

   // Send the corrected message back to the main queue or topic
   await queueClient.SendAsync(correctedMessage);

   // Complete the original message so it is removed from the Dead Letter Queue
   await deadLetterReceiver.CompleteAsync(deadLetterMessage.SystemProperties.LockToken);
   ```

4. Automated Retry Mechanism:

   - Depending on the nature of the issues, you might implement an automated retry mechanism. For transient errors, you can configure Service Bus to automatically retry processing failed messages after a certain delay. You can set policies for automatic retries and define the maximum number of delivery attempts before a message is moved to the Dead Letter Queue.

   Example (using Azure SDK for .NET):

   ```csharp
   // Shorten the operation timeout and retry with exponential back-off
   queueClient.ServiceBusConnection.OperationTimeout = TimeSpan.FromSeconds(30);
   queueClient.RetryPolicy = new RetryExponential(TimeSpan.FromSeconds(1), TimeSpan.FromMinutes(5), 10);
   ```

5. Monitoring and Alerting:

   - Implement monitoring and alerting to be notified when messages are moved to the Dead Letter Queue. This ensures that you're aware of processing failures promptly and can take appropriate actions to resolve them.

By following these steps, you can reprocess data from the Dead Letter Queue, ensuring that failed messages are corrected, resubmitted, and successfully processed in your Azure Service Bus application.

Uses of the Dead Letter Queue in Service Bus


The Dead Letter Queue (DLQ) in Azure Service Bus is a storage area used to hold messages that cannot be delivered to any receiver, whether it's due to message expiration, delivery attempts exceeding a specified limit, or issues with message content. Here are the primary uses of the Dead Letter Queue in Azure Service Bus:

1. Error Handling:

   - Unprocessable Messages: Messages that are malformed, contain incorrect data, or cannot be deserialized properly might be sent to the Dead Letter Queue. This separation allows developers to focus on resolving issues with problematic messages without affecting the main processing flow.

2. Message Expiry:

   - Expired Messages: Messages with a limited time to live (TTL) that have expired before being processed end up in the Dead Letter Queue. This ensures that expired messages do not get lost and can be analyzed for auditing purposes.

3. Delivery Failure:

   - Exceeded Delivery Attempts: If a message delivery attempt exceeds the maximum allowed retries (due to network issues or receiver failures), the message is moved to the Dead Letter Queue. This prevents infinite delivery loops for messages that cannot be successfully processed.

4. Auditing and Analysis:

   - Troubleshooting: Messages in the Dead Letter Queue can be analyzed to understand the reasons for failures. Developers can inspect these messages to identify patterns or issues leading to message failures.

   - Auditing: The Dead Letter Queue acts as an audit trail for problematic messages, allowing administrators to track and monitor issues over time.

5. Retry Mechanism:

   - Manual Retry: Developers can manually inspect messages in the Dead Letter Queue, address the underlying issues, and then resubmit the messages for processing. This enables a manual retry mechanism for failed messages.

6. Compliance and Governance:

   - Compliance Requirements: In certain industries, compliance regulations require organizations to retain failed messages for auditing purposes. Dead Letter Queue ensures compliance with such requirements.

7. Preventing Data Loss:

   - Message Preservation: Messages in the Dead Letter Queue are preserved until manually removed or until a specified retention period expires. This prevents accidental data loss and allows for the recovery of important messages.

8. Notification and Alerting:

   - Alerting System: Integration with monitoring and alerting systems allows administrators to receive notifications when messages are moved to the Dead Letter Queue. This enables prompt response to message processing failures.

In summary, the Dead Letter Queue in Azure Service Bus provides a safety net for messages that cannot be successfully processed, ensuring that they are preserved, analyzed, and potentially retried. It plays a crucial role in maintaining data integrity, aiding in troubleshooting, and meeting compliance requirements within distributed systems.
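
As a sketch using the older Microsoft.Azure.ServiceBus SDK (matching the other examples in these notes; the connection string and queue name are placeholders), the DLQ is addressed as a sub-queue of the main entity:

```csharp
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

// The DLQ lives at "<queue>/$DeadLetterQueue"; the SDK builds the path.
string deadLetterPath = EntityNameHelper.FormatDeadLetterPath("myqueue");

var receiver = new MessageReceiver("your-connection-string", deadLetterPath);

// Messages dead-lettered by the broker carry the reason in user properties.
Message message = await receiver.ReceiveAsync();
Console.WriteLine(message.UserProperties["DeadLetterReason"]);
```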

Difference between Queues and Topics in Service Bus


Azure Service Bus provides two types of messaging entities: Queues and Topics/Subscriptions. While both serve as communication channels between different components of an application, they have distinct characteristics and use cases.

1. Queues:

- Point-to-Point Communication: Queues implement a one-to-one messaging pattern. A message sent to a queue is processed by a single consumer (receiver). It ensures that each message is consumed by only one receiver, making it suitable for point-to-point communication scenarios.

- Load Balancing: Multiple receivers can compete for messages in a queue. However, each message is processed by only one receiver. This enables load balancing among multiple consumers.

- Sequential Processing: Messages in a queue are processed in the order of arrival, ensuring sequential processing if needed.

- Guaranteed Delivery: Queues provide at-least-once delivery, meaning each message is guaranteed to be delivered to a receiver, ensuring reliable message processing.

- Example Use Case: Order processing system where each order needs to be processed by only one receiver to avoid duplication.

2. Topics/Subscriptions:

- Publish/Subscribe Pattern: Topics and Subscriptions implement a publish/subscribe messaging pattern. A message sent to a topic can be received by multiple consumers (subscribers) who have subscriptions to that topic. Subscriptions act as filters, allowing subscribers to receive specific subsets of messages.

- Message Multicasting: Messages sent to a topic are automatically multicasted to all eligible subscriptions. Each subscription can define rules to filter messages based on message properties.

- Multiple Subscribers: Multiple subscribers can receive the same message if they have subscriptions matching the message's properties. This allows for message broadcasting to interested parties.

- Example Use Case: News updates service where different subscribers might be interested in different categories of news (sports, politics, entertainment), and each category is a separate subscription.

 Key Differences:

- Communication Pattern: Queues facilitate point-to-point communication, while Topics/Subscriptions enable publish/subscribe communication patterns.

- Number of Subscribers: Queues have one receiver per message, whereas Topics/Subscriptions can have multiple subscribers receiving the same message if they match the subscription criteria.

- Filtering: Topics/Subscriptions allow message filtering based on properties, enabling more fine-grained control over which messages subscribers receive.

- Message Multicasting: Topics automatically multicast messages to all eligible subscriptions, allowing for efficient message distribution to multiple subscribers.

- Scalability: Topics/Subscriptions are more suitable for scenarios where messages need to be broadcasted to a large number of subscribers with different interests.

Choose between queues and topics/subscriptions based on your application's messaging requirements. If you need point-to-point communication and guaranteed delivery, use queues. If you need publish/subscribe capabilities and message filtering for multiple subscribers, use topics and subscriptions.
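
A publish/subscribe sketch with the Microsoft.Azure.ServiceBus SDK (connection string, topic, and subscription names are placeholders):

```csharp
using System.Text;
using Microsoft.Azure.ServiceBus;

// Publish once to the topic; every matching subscription gets its own copy.
var topicClient = new TopicClient("your-connection-string", "news");
await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Match result"))
{
    // Subscription rules can filter on user properties like this one.
    UserProperties = { ["category"] = "sports" }
});

// Each subscriber listens on its own subscription of the topic.
var subscriptionClient = new SubscriptionClient(
    "your-connection-string", "news", "sports-subscription");

subscriptionClient.RegisterMessageHandler(
    async (message, token) =>
    {
        Console.WriteLine(Encoding.UTF8.GetString(message.Body));
        await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
    },
    new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false });
```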

Duplicate Detection in Service Bus


Duplicate Detection in Azure Service Bus is a feature that helps prevent the storage of duplicate copies of messages within a specific timeframe. When you enable duplicate detection, Service Bus ensures that messages with the same `MessageId` property are either discarded or accepted based on your configuration.

Here are the key points to understand about Duplicate Detection in Azure Service Bus:

 1. MessageId Property:

   - Each message sent to a queue or topic in Azure Service Bus can have a `MessageId` property. This property should be set to a unique value for each message.

2. Duplicate Detection Window:

   - When you enable Duplicate Detection, you specify a **Duplicate Detection Window**. This window defines the time duration during which Service Bus examines the `MessageId` property to identify and eliminate duplicates.

 3. How it Works:

   - When a message is sent with a `MessageId`, Service Bus checks the `MessageId` against the messages in the Duplicate Detection Window.

   - If a message with the same `MessageId` is found within the specified window, the new message is treated as a duplicate and is not enqueued.

 4. Enabling Duplicate Detection:

   - You can enable Duplicate Detection when creating a queue or topic.

   - When creating the queue or topic, you can specify the `DuplicateDetectionHistoryTimeWindow`, which is the duration of the detection window.

   Example (using Azure SDK for .NET):

   ```csharp
   QueueDescription queueDescription = new QueueDescription("MyQueue")
   {
       // Duplicate detection must be enabled when the queue is created
       RequiresDuplicateDetection = true,

       // Set Duplicate Detection Window to 10 minutes
       DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
   };
   ```

  5. Message Expiration and Duplicate Detection:

   - If a message expires before the Duplicate Detection Window, it is removed from the system and won't be considered for duplicate detection even if a duplicate arrives later.

6. Considerations:

   - Message Ordering: If you require message ordering and use duplicate detection, ensure that the `MessageId` values are unique for all messages within the detection window. Otherwise, messages with the same `MessageId` might be considered duplicates and could affect the ordering.

7. Use Cases:

   - Duplicate Detection is useful in scenarios where it's crucial to ensure that a message is processed only once, preventing duplicates from causing unintended actions or data inconsistencies in the receiving application.

Enabling Duplicate Detection helps maintain data integrity and prevents unintended processing of duplicate messages within your Azure Service Bus queues and topics.
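
Assuming a queue created with duplicate detection enabled as above, sending two messages with the same `MessageId` within the window results in only one being enqueued (connection string, queue name, and IDs are placeholders):

```csharp
using System.Text;
using Microsoft.Azure.ServiceBus;

var queueClient = new QueueClient("your-connection-string", "MyQueue");

// Both sends succeed from the client's point of view, but the broker
// discards the second message as a duplicate of "order-123".
await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Create order"))
{
    MessageId = "order-123"
});
await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Create order (retry)"))
{
    MessageId = "order-123"
});
```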

Implementing OAuth validation in a Web API

Implementing OAuth validation in a Web API using C# typically involves several key steps to sec...