18 October 2023

Open/Closed Principle (OCP)

The Open/Closed Principle states that software entities (such as classes, modules, and functions) should be open for extension but closed for modification. In other words, the behavior of a module can be extended without altering its source code.

Let's demonstrate the Open/Closed Principle (OCP) with an addition and subtraction scenario in C#.

First, we define an interface `IOperation` that represents different mathematical operations:

```csharp
// Operation interface
public interface IOperation
{
    int Apply(int x, int y);
}
```

Next, we implement two classes, `Addition` and `Subtraction`, that represent the addition and subtraction operations respectively:

```csharp
// Addition class implementing IOperation interface
public class Addition : IOperation
{
    public int Apply(int x, int y)
    {
        return x + y;
    }
}

// Subtraction class implementing IOperation interface
public class Subtraction : IOperation
{
    public int Apply(int x, int y)
    {
        return x - y;
    }
}
```

Now, let's create a `Calculator` class that can perform any `IOperation` passed to it, so its code never needs to change when new operations are added:

```csharp
// Calculator class adhering to the Open/Closed Principle
public class Calculator
{
    public int PerformOperation(IOperation operation, int x, int y)
    {
        return operation.Apply(x, y);
    }
}
```

In this example, the `Calculator` class is closed for modification because it doesn't need to be changed when new operations (like multiplication, division, etc.) are added. We can extend the functionality of the calculator by creating new classes that implement the `IOperation` interface.

Here's how you might use these classes:

```csharp
using System;

class Program
{
    static void Main()
    {
        Calculator calculator = new Calculator();

        // Perform addition operation
        IOperation addition = new Addition();
        int result1 = calculator.PerformOperation(addition, 10, 5);
        Console.WriteLine("Addition Result: " + result1); // Output: 15

        // Perform subtraction operation
        IOperation subtraction = new Subtraction();
        int result2 = calculator.PerformOperation(subtraction, 10, 5);
        Console.WriteLine("Subtraction Result: " + result2); // Output: 5
    }
}
```

In this way, the `Calculator` class is open for extension. You can add new operations by creating new classes that implement the `IOperation` interface without modifying the existing `Calculator` class, thereby following the Open/Closed Principle.
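For instance, a hypothetical `Multiplication` class is all it would take to extend the calculator. This sketch is not part of the original example, but it follows the same pattern:

```csharp
// Hypothetical new operation: added without touching Calculator or the existing operations
public class Multiplication : IOperation
{
    public int Apply(int x, int y)
    {
        return x * y;
    }
}
```

Existing call sites keep working, e.g. `calculator.PerformOperation(new Multiplication(), 10, 5)` returns 50.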

Single Responsibility Principle (SRP)

A class should have only one reason to change, meaning that a class should only have one job. If a class has more than one reason to change, it has more than one responsibility, and these responsibilities should be separated into different classes.

Let's consider an example in C# to illustrate the Single Responsibility Principle (SRP). Imagine we are building a system to manage employees in a company. We start with an `Employee` class that represents an employee in the company:

```csharp
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }

    public void CalculateSalaryBonus()
    {
        // Calculate salary bonus logic
        // ...
    }

    public void SaveToDatabase()
    {
        // Save employee to the database logic
        // ...
    }
}
```

In this example, the `Employee` class has two responsibilities:

1. **Calculating Salary Bonus**

2. **Saving to Database**

However, this violates the Single Responsibility Principle because the class has more than one reason to change. If the database schema changes or if the way salary bonuses are calculated changes, the `Employee` class would need to be modified for multiple reasons.

To adhere to the SRP, we should separate these responsibilities into different classes. Here's how you can refactor the code:

```csharp
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }
}

public class SalaryCalculator
{
    public decimal CalculateSalaryBonus(Employee employee)
    {
        // Calculate salary bonus logic
        // Placeholder rule for illustration: 10% of salary
        return employee.Salary * 0.10m;
    }
}

public class EmployeeRepository
{
    public void SaveToDatabase(Employee employee)
    {
        // Save employee to the database logic
        // ...
    }
}
```

In this refactored version, the `Employee` class is responsible only for representing the data of an employee. The `SalaryCalculator` class is responsible for calculating salary bonuses, and the `EmployeeRepository` class is responsible for saving employee data to the database. Each class now has a single responsibility, adhering to the Single Responsibility Principle. This separation allows for more flexibility and maintainability in the codebase.
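To see the refactored classes working together, here is a minimal usage sketch (the `Program` class and the sample values are illustrative, not part of the original example):

```csharp
using System;

class Program
{
    static void Main()
    {
        var employee = new Employee { Id = 1, Name = "Alice", Salary = 50000m };

        // Each collaborator owns exactly one responsibility
        var calculator = new SalaryCalculator();
        decimal bonus = calculator.CalculateSalaryBonus(employee);
        Console.WriteLine("Bonus: " + bonus);

        var repository = new EmployeeRepository();
        repository.SaveToDatabase(employee);
    }
}
```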

17 October 2023

SonarQube in Azure DevOps

Integrating SonarQube with Azure DevOps (formerly known as Visual Studio Team Services or VSTS) allows you to perform continuous code analysis and code quality checks as part of your CI/CD pipelines. Here's how you can set up SonarQube in Azure DevOps:

 Prerequisites:

1. SonarQube Server: You need a running instance of SonarQube. You can host SonarQube on your own server or use a cloud-based SonarCloud service.

2. SonarQube Scanner: Install the SonarQube Scanner on your build server or agent machine. The scanner is a command-line tool used to analyze projects and send the results to SonarQube.

Steps to Integrate SonarQube with Azure DevOps:

1. Configure SonarQube Server:

   - Set up your SonarQube server and configure the quality profiles, rules, and other settings according to your project requirements.

2. Configure SonarQube in Azure DevOps:

   - In Azure DevOps, navigate to your project and go to the "Project Settings" > "Service connections" > "New service connection."

   - Select "SonarQube" and provide the SonarQube server URL and authentication details.

3. Add SonarQube Scanner Task to Pipeline:

   - In your Azure DevOps build or release pipeline, add the "SonarQubePrepare" and "SonarQubeAnalyze" tasks before your build tasks.

   - Configure the tasks with the appropriate SonarQube project key, project name, and other required parameters.

   Example YAML configuration:

   ```yaml
   steps:
   - task: SonarQubePrepare@4
     inputs:
       SonarQube: 'SonarQubeServiceConnection' # The name of the SonarQube service connection
       scannerMode: 'MSBuild'
       projectKey: 'YourProjectKey'
       projectName: 'YourProjectName'
       extraProperties: |
         sonar.exclusions=**/*.css, **/*.html  # Exclude specific file types from analysis

   - script: 'MSBuild.exe MySolution.sln'
     displayName: 'Build Solution'

   - task: SonarQubeAnalyze@4
   ```

4. Run Your Pipeline:

   - Queue your Azure DevOps pipeline. SonarQube tasks will run, and the analysis results will be sent to your SonarQube server.

5. View SonarQube Analysis:

   - Visit your SonarQube server's web interface to view the analysis results, including code quality metrics, issues, and other insights.

By integrating SonarQube with Azure DevOps pipelines, you can enforce code quality standards, identify and fix issues early in the development process, and maintain high-quality code in your projects. Remember to customize SonarQube rules and quality profiles to match your team's coding standards and best practices.

.NET Core code optimization

Optimizing .NET Core code involves improving performance, reducing memory usage, and enhancing the overall efficiency of your application. Here are several tips and best practices to optimize your .NET Core applications:

 1. Profiling Your Code:

   - Use profiling tools like Visual Studio Profiler or JetBrains dotTrace to identify performance bottlenecks in your code.

   - In Visual Studio, use the Diagnostic Tools window to check CPU and memory usage, or run the Performance Profiler for deeper analysis.

2. Use Asynchronous Programming:

   - Utilize asynchronous programming with `async` and `await` to perform I/O-bound operations asynchronously, preventing threads from being blocked and improving responsiveness (see the sketch at the end of this section).

 3. Optimize Database Queries:

   - Use efficient database queries and optimize indexes to reduce the number of database operations and improve query performance.

4. Caching:

   - Implement caching mechanisms, such as in-memory caching or distributed caching using Redis, to store and retrieve data, reducing the need for expensive computations or database queries.

 5. Minimize Database Round-Trips:

   - Reduce the number of database round-trips by batching multiple operations into a single call or by using tools like Dapper or Entity Framework Core's `Include` to optimize related data loading.

 6. Optimize LINQ Queries:

   - Be mindful of how LINQ queries are executed. When working with Entity Framework Core, keep queries as `IQueryable` (avoid materializing early with `ToList()` or `AsEnumerable()`) so that filtering and projections are translated to SQL and run on the database side.

7. Memory Management:

   - Avoid unnecessary object creation and ensure proper disposal of resources, especially when working with unmanaged resources. Implement the `IDisposable` pattern.

 8. Optimize Loops and Iterations:

   - Minimize the work done inside loops and iterations. Move invariant computations outside the loop whenever possible.

9. Use Value Types:

   - Prefer value types (structs) over reference types (classes) for small, frequently used objects, as they are allocated on the stack and can improve performance.

10. Profiling and Logging:

   - Use profiling to identify bottlenecks and logging to track application behavior and identify issues.

11. Use DI/IoC Containers Wisely:

   - Avoid overuse of dependency injection and ensure that dependency injection containers are not misused, leading to unnecessary object creation.

12. Optimize Startup and Dependency Injection:

   - Minimize the services registered in the DI container during application startup to improve startup performance.

 13. Bundle and Minify Client-Side Assets:

   - Optimize CSS, JavaScript, and other client-side assets by bundling and minifying them to reduce the amount of data sent over the network.

14. Use Response Caching and CDN:

   - Implement response caching for dynamic content and use Content Delivery Networks (CDNs) for static assets to reduce server load and improve response times.

15. GZip Compression:

   - Enable GZip compression on your web server to compress responses and reduce the amount of data transmitted over the network.

16. Enable JIT Compiler Optimizations:

   - JIT (Just-In-Time) compiler optimizations can significantly improve the performance of your application. .NET Core enables these optimizations by default, but you can ensure they are not disabled in your deployment settings.

17. Use Structs for Small Data Structures:

   - For small, data-oriented structures, use structs instead of classes. Structs are value types and can avoid unnecessary object allocations and garbage collection overhead.

18. Avoid Unnecessary Reflection:

   - Reflection can be slow. Avoid unnecessary use of reflection and consider using alternatives or caching reflection results when possible.

By applying these best practices and profiling your application to identify specific bottlenecks, you can significantly improve the performance and efficiency of your .NET Core applications. Remember that optimization efforts should be based on profiling data to focus on the parts of your codebase that have the most impact on performance.
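As a small illustration of tips 2 and 4, the sketch below combines `async`/`await` with `IMemoryCache`. The cache key, the five-minute expiration, and the `FetchProductNamesFromDatabaseAsync` helper are assumptions made for this example, not prescribed values:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ProductService
{
    private readonly IMemoryCache _cache;

    public ProductService(IMemoryCache cache)
    {
        _cache = cache;
    }

    // Asynchronous, cached lookup: the data source is hit only on a cache miss
    public async Task<IReadOnlyList<string>> GetProductNamesAsync()
    {
        return await _cache.GetOrCreateAsync("product-names", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await FetchProductNamesFromDatabaseAsync();
        });
    }

    // Placeholder for a real data-access call (e.g. EF Core or Dapper)
    private Task<IReadOnlyList<string>> FetchProductNamesFromDatabaseAsync()
    {
        IReadOnlyList<string> names = new[] { "Keyboard", "Mouse" };
        return Task.FromResult(names);
    }
}
```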

const vs readonly in .NET

In C#, both `const` and `readonly` declare values that cannot be reassigned after initialization, but they have different use cases and characteristics.

 1. const:

- Constants declared with the `const` keyword must be initialized with a compile-time constant value at declaration and cannot be modified afterwards.

- `const` values are implicitly `static`, meaning they belong to the type itself rather than to a specific instance of the type.

- They are evaluated at compile-time and are replaced with their actual values in the compiled code.

- `const` members can be used in expressions that require constant values, such as array sizes or case labels in switch statements.

Example:

```csharp
public class ConstantsClass
{
    public const int MaxValue = 100;
    // ...
}
```

2. readonly:

- `readonly` fields are instance members that are initialized either at the time of declaration or in a constructor. They can be modified only within the constructor of the class where they are declared.

- `readonly` fields can have different values for different instances of the same class, unlike `const` members which are shared across all instances of the class.

- They are evaluated at runtime and can have different values for different instances of the class.

- `readonly` members are useful when you want to assign a constant value to an instance member that might vary from one instance to another.

Example:

```csharp
public class ReadOnlyExample
{
    public readonly int InstanceValue;

    public ReadOnlyExample(int value)
    {
        InstanceValue = value; // Initialized in the constructor
    }
}
```

Key Differences:

- `const` members are implicitly `static` and are shared across all instances of the class. `readonly` members are instance-specific and can have different values for different instances.

- `const` values are evaluated at compile-time, while `readonly` values are evaluated at runtime.

- `const` members must be initialized at the time of declaration, while `readonly` members can be initialized in the constructor.

In summary, use `const` when you want a constant value that is the same for all instances of a class, and use `readonly` when you want a constant value that can vary from one instance to another but doesn't change once it's set in the constructor. Choose the appropriate keyword based on the scope and mutability requirements of your constants.
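One way to see the compile-time versus runtime difference: a value that is only known when the program runs can be `readonly` but never `const`. A small sketch (the class and member names are illustrative):

```csharp
using System;

public class AppInfo
{
    // OK: the literal is known at compile time
    public const string ProductName = "MyApp";

    // OK: assigned once at runtime when the type is initialized
    public static readonly DateTime StartedAt = DateTime.Now;

    // Would not compile: DateTime.Now is not a compile-time constant
    // public const DateTime LaunchTime = DateTime.Now;
}
```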

Lazy loading and Eager loading in .NET

**Lazy Loading** and **Eager Loading** are two different strategies for loading related data in object-relational mapping (ORM) frameworks like Entity Framework in .NET. They are techniques used to optimize the performance of database queries by determining when and how related data is loaded from the database.

Lazy Loading:

**Lazy Loading** is a technique where related data is loaded from the database on-demand, as it's accessed by the application. In other words, the data is loaded from the database only when it's actually needed. This can lead to more efficient queries because not all related data is loaded into memory upfront. Lazy loading is often the default behavior in many ORMs.

Pros of Lazy Loading:

- Efficient use of resources: Only loads data when necessary, conserving memory.

- Simplifies data retrieval logic: Developers don't need to explicitly specify what related data to load.

Cons of Lazy Loading:

- Potential for N+1 query problem: If a collection of entities is accessed, and each entity has related data, it can result in multiple database queries (1 query for the main entities and N queries for related data, where N is the number of main entities).

- Performance overhead: The additional queries can impact performance, especially if not managed properly.

Eager Loading:

**Eager Loading**, on the other hand, is a technique where related data is loaded from the database along with the main entities. This means that all the necessary data is loaded into memory upfront, reducing the number of database queries when accessing related data. Eager loading is often achieved through explicit loading or query options.

Pros of Eager Loading:

- Reduced database queries: Loads all necessary data in a single query, minimizing database round-trips.

- Better performance for specific scenarios: Eager loading can be more efficient when you know in advance that certain related data will be needed.

Cons of Eager Loading:

- Potential for loading unnecessary data: If related data is not always needed, loading it eagerly can result in unnecessary data retrieval, impacting performance and memory usage.

Choosing Between Lazy Loading and Eager Loading:

- **Use Lazy Loading**:

  - When you need to minimize the amount of data loaded into memory initially.

  - When you have a large object graph, and loading everything eagerly would be inefficient.

  - When you're dealing with optional or rarely accessed related data.

- **Use Eager Loading**:

  - When you know that specific related data will always be accessed together with the main entities.

  - When you want to minimize the number of database queries for performance reasons, especially for smaller object graphs.

Choosing the right loading strategy depends on the specific use case and the nature of the data and relationships in your application. It's important to consider the trade-offs and design your data access logic accordingly. Many ORM frameworks provide ways to control loading behavior explicitly, allowing developers to choose the most suitable strategy for their applications.
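As a concrete illustration with Entity Framework Core, here is a hedged sketch using a hypothetical `Blog`/`Post` model; `BlogContext`, the connection string, and the `Microsoft.EntityFrameworkCore.Proxies` and `Microsoft.EntityFrameworkCore.SqlServer` packages are assumptions made for the example:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    // Marking the navigation property virtual lets lazy-loading proxies override it
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options
            .UseLazyLoadingProxies()                   // requires Microsoft.EntityFrameworkCore.Proxies
            .UseSqlServer("YourConnectionStringHere"); // placeholder connection string
    }
}

public static class LoadingExamples
{
    // Eager loading: blogs and their posts are fetched up front via Include
    public static List<Blog> GetBlogsWithPostsEagerly(BlogContext context)
    {
        return context.Blogs.Include(b => b.Posts).ToList();
    }

    // Lazy loading: Posts is fetched by a separate query the first time it is accessed
    public static int CountPostsLazily(BlogContext context, int blogId)
    {
        Blog blog = context.Blogs.First(b => b.Id == blogId);
        return blog.Posts.Count; // triggers the lazy load here
    }
}
```

The eager version loads the related posts together with the blogs, while the lazy version defers the `Posts` query until the property is first touched, which is exactly where the N+1 risk described above comes from when iterating many blogs.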

IEnumerator in .NET

`IEnumerator` is an interface provided by the .NET framework, primarily used for iterating through collections of objects. It defines methods for iterating over a collection in a forward-only, read-only manner. The `IEnumerator` interface is part of the System.Collections namespace.

Here's the basic structure of the `IEnumerator` interface:

```csharp
public interface IEnumerator
{
    bool MoveNext();
    void Reset();
    object Current { get; }
}
```

- `MoveNext()`: Advances the enumerator to the next element of the collection. It returns `true` if there are more elements to iterate; otherwise, it returns `false`.

- `Reset()`: Resets the enumerator to its initial position before the first element in the collection.

- `Current`: Gets the current element in the collection. This is an object because `IEnumerator` is not type-specific.

Implementing IEnumerator:

You can implement the `IEnumerator` interface in your custom collection classes to enable them to be iterated using `foreach` loops.

Here's an example of a custom collection that implements the `IEnumerator` interface:

```csharp
using System;
using System.Collections;

public class CustomCollection : IEnumerable, IEnumerator
{
    private object[] _items;
    private int _currentIndex = -1;

    public CustomCollection(object[] items)
    {
        _items = items;
    }

    public IEnumerator GetEnumerator()
    {
        return this;
    }

    public bool MoveNext()
    {
        _currentIndex++;
        return _currentIndex < _items.Length;
    }

    public void Reset()
    {
        _currentIndex = -1;
    }

    public object Current
    {
        get
        {
            try
            {
                return _items[_currentIndex];
            }
            catch (IndexOutOfRangeException)
            {
                throw new InvalidOperationException();
            }
        }
    }
}
```

In this example, `CustomCollection` implements both `IEnumerable` and `IEnumerator`. The `GetEnumerator` method returns the current object as the enumerator, and `MoveNext`, `Reset`, and `Current` methods are implemented to fulfill the `IEnumerator` contract.

Usage of `IEnumerator` allows you to create custom collections that can be iterated in a `foreach` loop, providing a consistent and familiar way to work with your data structures. When implementing `IEnumerator`, be mindful of the iteration state, especially after reaching the end of the collection.
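A quick usage sketch of the collection above (the sample items are arbitrary):

```csharp
using System;

class Demo
{
    static void Main()
    {
        var collection = new CustomCollection(new object[] { 1, "two", 3.0 });

        // foreach works because CustomCollection implements IEnumerable
        foreach (var item in collection)
        {
            Console.WriteLine(item);
        }
    }
}
```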

Implementing OAuth validation in a Web API

Implementing OAuth validation in a Web API using C# typically involves several key steps to sec...