Choose the Work Flow for Entity Framework
07 Apr, 2017


  • nehasaini_qait
  • Business,Dot Net
  • Tags: database, dotnet, entity framework, framework, technology

Two things need to be kept in mind while choosing the workflow.

Things that are outside our control: whether we are working against a new database or an existing database.

Things that are inside our control: whether we create the model using the designer or by writing code.

Model First—

  1. Create the model in the designer.
  2. Generate the database from the model.
  3. The classes that the application will interact with are auto-generated from the model.

Database First—

  1. Reverse engineer the model in the designer.
  2. Classes are auto-generated from the model.

Code First (New Database) —

  1. Define the model in code. The model is made up of the main classes that the application is going to interact with. Optionally, you can provide code for mapping and configuration.
  2. The database is created from the model.
  3. If the model changes, Code First Migrations can be used to evolve the database.

Code First (Existing Database) —

  1. Define classes and mapping in code.
  2. Reverse engineering tools are available.
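
To make the Code First workflow concrete, here is a minimal sketch, assuming Entity Framework 6 (the Blog class and BloggingContext are illustrative names, not from the original post):

using System.Data.Entity;

// A plain entity class; Code First infers the table and key by convention.
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

// The context defines the model; EF can create the database from it,
// and Code First Migrations can evolve it as the classes change.
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}
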
04 Dec, 2015

C#: A Review on Generics

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net,Events

I think a good C# developer needs to get a handle on .NET generics.  Most of the advanced features in C# deal heavily with generics, and having a very good understanding of them will help considerably, especially when dealing with generic delegates.  So here in this post, we will review generics.

Generic type definitions can be methods, classes, structures, and interfaces.

// an example generic class
public class MyGenericClass<T>
{
    public T MyProperty;
}

The placeholders (e.g. <T>) are called generic type parameters, or type parameters.

You specify the actual types to substitute for the type parameters during instantiation.

// instantiate generic class with string type
MyGenericClass<string> c = new MyGenericClass<string>();


When instantiated, a generic type definition becomes a constructed generic type.

You can place limits or constraints on generic type parameters.

// limit type to value types except Nullable
public class MyGenericClass<T> where T : struct {/*...*/}

// limit type to reference types which can include classes,
//  interfaces, delegates, and array types
public class MyGenericClass<T> where T : class {/*...*/}

// limit type to types with public parameterless constructor
// must be specified last in multiple constraints
public class MyGenericClass<T> where T : new() {/*...*/}

// limit type to specified base class or to types derived from it
public class MyGenericClass<T> where T : MyBaseClass {/*...*/}

// limit type to specified interface or to types that implement it
public class MyGenericClass<T> where T : IMyInterface {/*...*/}

// limit type to specified type parameter    

// in a generic member function, it limits its type to the type 
//  parameter of the containing type
public class MyGenericClass<T>
{
    void MyMethod<U>(List<U> items) where U : T {/*...*/}
}

// in a generic class, it enforces an inheritance relationship
//  between the two type parameters
public class MyGenericClass<T, U> where U : T {/*...*/}

// type parameter can have multiple constraints 
//  and constraints can also be generics
//  and constraints can be applied to multiple type parameters
public class MyGenericClass<T, U> 
    where T : MyClass, IMyInterface, System.IComparable<T>, new()
    where U : struct
{
    // ...
}


A method is considered a generic method definition if it has two parameter lists: a list of type parameters enclosed in <> and a list of formal parameters enclosed in ().  Whether the method belongs to a generic or non-generic type does not make the method generic or non-generic; only the presence of both parameter lists makes a method generic, as in the example below.

public class MyClass
{
    // a generic method inside a non-generic class
    T MyGenericMethod<T>(T arg) {/*...*/}
}


A type nested in a generic type is considered by CLR to be generic even if it doesn’t have generic type parameters of its own.  When you instantiate a nested type, you need to specify the type arguments for all enclosing generic types.

// generic type
public class MyGenericType<T, U>
{
    // nested type
    public class MyNestedType
    {
        // ...
    }
}

// ... somewhere in code you instantiate the nested type   
MyGenericType<string, int>.MyNestedType nt = 
    new MyGenericType<string, int>.MyNestedType();


The following are some common generic collection counterparts provided by the .NET framework:

  • Dictionary<TKey, TValue> which is the generic version of Hashtable.  It uses KeyValuePair<TKey, TValue> for enumeration instead of DictionaryEntry (see the sketch after this list).
  • List<T> which is the generic version of ArrayList.
  • Queue<T> and Stack<T> which are the generic versions of the collections with the same names.
  • SortedList<TKey, TValue> which is a hybrid of a dictionary and a list, just like its nongeneric version of the same name.
  • SortedDictionary<TKey, TValue> which is a pure dictionary, and LinkedList<T>.  Both don’t have nongeneric versions.
  • Collection<T> which is a base class for generating custom collection types, ReadOnlyCollection<T> which provides read-only collection from any type implementing IList<T>, and KeyedCollection<TKey, TItem> for storing objects containing their own keys.
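
As a quick illustration of the first counterpart, here is a minimal sketch of Dictionary<TKey, TValue> enumeration (the keys and values are made up):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        Dictionary<string, int> ages = new Dictionary<string, int>();
        ages.Add("Ann", 34);
        ages.Add("Bob", 28);

        // Enumeration yields KeyValuePair<TKey, TValue>, not DictionaryEntry.
        foreach (KeyValuePair<string, int> pair in ages)
        {
            Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
        }
    }
}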

There are also generic interface counterparts for ordering and equality comparisons and for shared collection functionality:

  • System.IComparable<T> and System.IEquatable<T> which define methods for ordering comparisons and equality comparisons.
  • IComparer<T> and IEqualityComparer<T> in the System.Collections.Generic namespace, which offer an alternative for types that do not implement System.IComparable<T> and System.IEquatable<T>.  They are used by methods and constructors of many of the generic collection classes.  An example would be passing a generic IComparer<T> object to the constructor of SortedDictionary<TKey, TValue> to specify a sort order.  The generic classes Comparer<T> and EqualityComparer<T> are their base class implementations.
  • ICollection<T> which provides basic functionality for adding, removing, copying, and enumerating elements in a generic collection type.  It inherits from IEnumerable<T> and the nongeneric IEnumerable.
  • IList<T> which extends ICollection<T> with methods for indexed retrieval.
  • IDictionary<TKey, TValue> which extends ICollection<T> with methods for keyed retrieval.  Generic dictionary types also inherit from nongeneric IDictionary.
  • IEnumerable<T> which provides the generic enumeration support used by foreach.  It inherits from the nongeneric IEnumerable, and its enumerator IEnumerator<T> inherits from the nongeneric IEnumerator, because the MoveNext and Reset methods appear only on the nongeneric interface.  This means a consumer of the nongeneric interface can also consume the generic interface, because the generic interface provides the nongeneric implementation.

You also have generic delegates in .NET framework.  An example is the EventHandler<TEventArgs> which you can use in handling events with custom event arguments.  No need to declare your own delegate type for the event.  If you need to brush up on events and delegates, see my post on raising events and nongeneric delegates.

public event EventHandler<PublishedEventArgs> Published;
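
For context, a minimal raise-and-subscribe sketch (the Publisher class and the PublishedEventArgs payload are made up for illustration):

using System;

public class PublishedEventArgs : EventArgs
{
    public string Title { get; set; }
}

public class Publisher
{
    // EventHandler<TEventArgs> removes the need for a custom delegate type.
    public event EventHandler<PublishedEventArgs> Published;

    public void Publish(string title)
    {
        EventHandler<PublishedEventArgs> handler = this.Published;
        if (handler != null)
        {
            handler(this, new PublishedEventArgs { Title = title });
        }
    }
}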


There are also a bunch of useful generic delegates available for manipulating arrays and lists:

  • Action<T> which allows you to perform an action on an element by passing an Action<T> delegate instance and an array to the generic method Array.ForEach<T>.  You can also pass an Action<T> delegate instance to the nongeneric method List<T>.ForEach (see the sketch after this list).
  • Predicate<T> which allows you to specify a search criteria to Array’s Exists<T>, Find<T>, FindAll<T>, and so on and also to List<T>’s Exists, Find, FindAll, and so on.
  • Comparison<T> which allows you to provide a sort order.
  • Converter<TInput, TOutput> which allows you to convert between two types of arrays or lists.
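
Here is a minimal sketch of the first two delegates in action (the sample array is made up):

using System;

class Program
{
    static void Main()
    {
        int[] numbers = { 3, 8, 15, 4 };

        // Predicate<T> supplies the search criteria to Array.Find<T>.
        int firstEven = Array.Find(numbers, n => n % 2 == 0);
        Console.WriteLine(firstEven); // 8

        // Action<T> performs an action on each element via Array.ForEach<T>.
        Array.ForEach(numbers, n => Console.WriteLine(n * 2));
    }
}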

Ok so that's all we have for generics.  Just a review of the basics that a C# developer needs to know.

03 Dec, 2015

SQL SERVER – Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net,Events

There are two different methods of retrieving the list of Primary Keys and Foreign Keys from database.

Method 1: INFORMATION_SCHEMA

SELECT
DISTINCT
Constraint_Name AS [Constraint],
Table_Schema AS [Schema],
Table_Name AS [TableName]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
GO

Method 2: sys.objects

SELECT OBJECT_NAME(OBJECT_ID) AS NameofConstraint,
SCHEMA_NAME(schema_id) AS SchemaName,
OBJECT_NAME(parent_object_id) AS TableName,
type_desc AS ConstraintType
FROM sys.objects
WHERE type_desc IN ('FOREIGN_KEY_CONSTRAINT','PRIMARY_KEY_CONSTRAINT')
GO
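
If you need the same list from application code, here is a minimal ADO.NET sketch of running the sys.objects query (the connection string is a placeholder):

using System;
using System.Data.SqlClient;

class KeyLister
{
    static void Main()
    {
        // Placeholder connection string; point it at your own server.
        string connectionString = @"Data Source=.;Initial Catalog=Northwind;Integrated Security=True";

        string sql = @"SELECT OBJECT_NAME(object_id) AS NameofConstraint,
                              SCHEMA_NAME(schema_id) AS SchemaName,
                              OBJECT_NAME(parent_object_id) AS TableName,
                              type_desc AS ConstraintType
                       FROM sys.objects
                       WHERE type_desc IN ('FOREIGN_KEY_CONSTRAINT','PRIMARY_KEY_CONSTRAINT')";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} ({1}) on {2}.{3}",
                        reader["NameofConstraint"], reader["ConstraintType"],
                        reader["SchemaName"], reader["TableName"]);
                }
            }
        }
    }
}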

I am often asked about my preferred method of retrieving the list of Primary Keys and Foreign Keys from a database. I have a standard answer: I prefer Method 2, querying sys.objects. The reason is very simple: the sys schema always provides more information, and all the data can be retrieved in our preferred fashion with the preferred filter.

Let us look at the example we have on hand. When INFORMATION_SCHEMA is used, we will not be able to discern between a primary key and a foreign key; we will get both kinds of keys together. In the case of the sys schema, we can query the data in our preferred way and can join sys.objects to other tables to retrieve additional data.

Let us play a small puzzle here. Try to modify both the scripts in such a way that we are able to see the original definition of the key, that is, the CREATE statement for this primary key and foreign key.

If I get an appropriate answer from my readers, I will publish the solution on this blog with due credit.

03 Dec, 2015

Understanding LINQ to SQL Object-Relational Mapping

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net

According to Wikipedia, Object-relational mapping is:

a programming technique for converting data between incompatible type systems in relational databases and object-oriented programming languages.

This is the LINQ to SQL sample code at the beginning of this series:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    var results = from product in database.Products
                  where product.Category.CategoryName == "Beverages"
                  select new
                  {
                      product.ProductName,
                      product.UnitPrice
                  };
    foreach (var item in results)
    {
        Console.WriteLine(
            "{0}: {1}", 
            item.ProductName, 
            item.UnitPrice.ToString(CultureInfo.InvariantCulture));
    }
}

According to this post, the above query expression will be compiled to query methods:

var results = database.Products.Where(product => product.Category.CategoryName == "Beverages")
                               .Select(product => new
                                                      {
                                                          product.ProductName,
                                                          product.UnitPrice
                                                      });

It is querying the ProductName and UnitPrice fields of the Products table in the Northwind database, which belong to the specified CategoryName. To work with SQL Server representations (fields, tables, databases) in C# representations (object models), the mappings between SQL representations and C# representations need to be created. LINQ to SQL provides an Object-relational mapping designer tool to create those objects models automatically.

Create C# models from SQL schema

The easiest way of modeling is to use Visual Studio IDE. This way works with:

  • SQL Server 2000
  • SQL Server 2005
  • SQL Server 2008
  • SQL Server 2008 R2

Take the Northwind database as an example. First, set up a data connection to the Northwind database.

Then, add a “LINQ to SQL Classes” item to the project.

Creating a Northwind.dbml file opens the O/R designer.

Since the above query works with the Products table and the Categories table, just drag the two tables onto the O/R designer.

In the designer, the modeling is done. Please notice that the foreign key between the Categories table and the Products table is recognized, and the corresponding association is created in the designer.

Now the object models are ready to rock. Actually the designer has automatically created the following C# code:

  • Category class: represents each record in the Categories table;
    • CategoryID property (an int): represents the CategoryID field; so do the other properties shown above;
    • Products property (a collection of Product objects): represents the associated many records in the Products table;
  • Product class: represents each record in the Products table;
    • ProductID property (an int): represents the ProductID field; so do the other properties shown above;
    • Category property (a Category object): represents the associated one record in the Categories table;
  • NorthwindDataContext class: represents the Northwind database;
    • Categories property (a collection of Category objects): represents the Categories table;
    • Products property (a collection of Product objects): represents the Products table.

Besides these, databases, tables, fields, and other SQL constructs can all be modeled by this O/R designer:

SQL representation           C# representation                        Sample
Database                     DataContext derived class                NorthwindDataContext
Table, View                  DataContext derived class's property     NorthwindDataContext.Categories
Record                       Entity class                             Category
Field                        Entity class's property                  Category.CategoryName
Foreign key                  Association between entity classes       Category.Products
Stored procedure, function   DataContext derived class's method       NorthwindDataContext.SalesByCategory()

Another way to generate the models is to use the command line tool SqlMetal.exe. Please check MSDN for details of code generation.

And, please notice that the Category entity class is generated from the Categories table. Here the plural name is changed to the singular name, because a Category object is the mapping of one record of the Categories table. This pluralization can be configured in Visual Studio.

Implement the mapping

Now take a look at how the SQL representations are mapped to C# representations.

The Northwind.dbml is nothing but an XML file:

<?xml version="1.0" encoding="utf-8"?>
<!-- [Northwind] database is mapped to NorthwindDataContext class. -->
<Database Name="Northwind" Class="NorthwindDataContext" xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007">
    <!-- Connection string -->
    <Connection Mode="WebSettings" ConnectionString="Data Source=qablog.qaitdevlabs.com;Initial Catalog=Northwind;Integrated Security=True" SettingsObjectName="System.Configuration.ConfigurationManager.ConnectionStrings" SettingsPropertyName="NorthwindConnectionString" Provider="System.Data.SqlClient" />

    <!-- Categories property is a member of NorthwindDataContext class. -->
    <Table Name="dbo.Categories" Member="Categories">
        <!-- [Categories] table is mapped to Category class. -->
        <Type Name="Category">
            <!-- [CategoryID] (SQL Int) field is mapped to CategoryID property (C# int). -->
            <Column Name="CategoryID" Type="System.Int32" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
            <!-- [CategoryName] (SQL NVarChar(15)) field is mapped to CategoryName property (C# string). -->
            <Column Name="CategoryName" Type="System.String" DbType="NVarChar(15) NOT NULL" CanBeNull="false" />
            <!-- Other fields. -->
            <Column Name="Description" Type="System.String" DbType="NText" CanBeNull="true" UpdateCheck="Never" />
            <Column Name="Picture" Type="System.Data.Linq.Binary" DbType="Image" CanBeNull="true" UpdateCheck="Never" />
            <!-- [Categories] is associated with [Products] table via a foreign key.
            So Category class has a Products property to represent the associated many Product objects. -->
            <Association Name="Category_Product" Member="Products" ThisKey="CategoryID" OtherKey="CategoryID" Type="Product" />
        </Type>
    </Table>

    <!-- Products property is a member of NorthwindDataContext class. -->
    <Table Name="dbo.Products" Member="Products">
        <!-- [Products] table is mapped to Product class. -->
        <Type Name="Product">
            <!-- Fields. -->
            <Column Name="ProductID" Type="System.Int32" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
            <Column Name="ProductName" Type="System.String" DbType="NVarChar(40) NOT NULL" CanBeNull="false" />
            <Column Name="SupplierID" Type="System.Int32" DbType="Int" CanBeNull="true" />
            <Column Name="CategoryID" Type="System.Int32" DbType="Int" CanBeNull="true" />
            <Column Name="QuantityPerUnit" Type="System.String" DbType="NVarChar(20)" CanBeNull="true" />
            <Column Name="UnitPrice" Type="System.Decimal" DbType="Money" CanBeNull="true" />
            <Column Name="UnitsInStock" Type="System.Int16" DbType="SmallInt" CanBeNull="true" />
            <Column Name="UnitsOnOrder" Type="System.Int16" DbType="SmallInt" CanBeNull="true" />
            <Column Name="ReorderLevel" Type="System.Int16" DbType="SmallInt" CanBeNull="true" />
            <Column Name="Discontinued" Type="System.Boolean" DbType="Bit NOT NULL" CanBeNull="false" />
            <!-- [Products] is associated with the [Categories] table via a foreign key.
            So Product class has a Category property to represent the associated one Category object. -->
            <Association Name="Category_Product" Member="Category" ThisKey="CategoryID" OtherKey="CategoryID" Type="Category" IsForeignKey="true" />
        </Type>
    </Table>
</Database>

It describes how the SQL constructs are mapped to C# constructs.

A Northwind.dbml.layout file is created along with the dbml. It is also XML, describing how the O/R designer should visualize the object models:

<?xml version="1.0" encoding="utf-8"?>
<ordesignerObjectsDiagram dslVersion="1.0.0.0" absoluteBounds="0, 0, 11, 8.5" name="Northwind">
    <DataContextMoniker Name="/NorthwindDataContext" />
    <nestedChildShapes>
        <!-- Category class -->
        <classShape Id="81d67a31-cd80-4a91-84fa-5d4dfa2e8694" absoluteBounds="0.75, 1.5, 2, 1.5785953776041666">
            <DataClassMoniker Name="/NorthwindDataContext/Category" />
            <nestedChildShapes>
                <!-- Properties -->
                <elementListCompartment Id="a261c751-8ff7-471e-9545-cb385708d390" absoluteBounds="0.765, 1.96, 1.9700000000000002, 1.0185953776041665" name="DataPropertiesCompartment" titleTextColor="Black" itemTextColor="Black" />
            </nestedChildShapes>
        </classShape>

        <!-- Product class -->
        <classShape Id="59f11c67-f9d4-4da9-ad0d-2288402ec016" absoluteBounds="3.5, 1, 2, 2.7324039713541666">
            <DataClassMoniker Name="/NorthwindDataContext/Product" />
            <nestedChildShapes>
                <!-- Properties -->
                <elementListCompartment Id="6c1141a2-f9a9-4660-8730-bed7fa15bc27" absoluteBounds="3.515, 1.46, 1.9700000000000002, 2.1724039713541665" name="DataPropertiesCompartment" titleTextColor="Black" itemTextColor="Black" />
            </nestedChildShapes>
        </classShape>

        <!-- Association arrow -->
        <associationConnector edgePoints="[(2.75 : 2.28929768880208); (3.5 : 2.28929768880208)]" fixedFrom="Algorithm" fixedTo="Algorithm">
            <AssociationMoniker Name="/NorthwindDataContext/Category/Category_Product" />
            <nodes>
                <!-- From Category class -->
                <classShapeMoniker Id="81d67a31-cd80-4a91-84fa-5d4dfa2e8694" />
                <!-- To Product class -->
                <classShapeMoniker Id="59f11c67-f9d4-4da9-ad0d-2288402ec016" />
            </nodes>
        </associationConnector>
    </nestedChildShapes>
</ordesignerObjectsDiagram>

A Northwind.designer.cs is also created, containing the auto-generated C# code.

This is how the NorthwindDataContext looks:

[Database(Name = "Northwind")]
public partial class NorthwindDataContext : DataContext
{
    public Table<Category> Categories
    {
        get
        {
            return this.GetTable<Category>();
        }
    }

    public Table<Product> Products
    {
        get
        {
            return this.GetTable<Product>();
        }
    }
}

And this is the Category class:

[Table(Name = "dbo.Categories")]
public partial class Category : INotifyPropertyChanging, INotifyPropertyChanged
{
    private int _CategoryID;

    private EntitySet<Product> _Products;

    [Column(Storage = "_CategoryID", AutoSync = AutoSync.OnInsert, 
        DbType = "Int NOT NULL IDENTITY", IsPrimaryKey = true, IsDbGenerated = true)]
    public int CategoryID
    {
        get
        {
            return this._CategoryID;
        }
        set
        {
            if ((this._CategoryID != value))
            {
                this.OnCategoryIDChanging(value);
                this.SendPropertyChanging();
                this._CategoryID = value;
                this.SendPropertyChanged("CategoryID");
                this.OnCategoryIDChanged();
            }
        }
    }

    // Other properties.

    [Association(Name = "Category_Product", Storage = "_Products", 
        ThisKey = "CategoryID", OtherKey = "CategoryID")]
    public EntitySet<Product> Products
    {
        get
        {
            return this._Products;
        }
        set
        {
            this._Products.Assign(value);
        }
    }
}

The Product class looks similar.

Customize the mapping

Since the mapping information is simply stored in the XML file and C# code, it can be customized easily in the O/R designer.

After renaming the Category class to CategoryEntity, the XML and C# are updated automatically:

<?xml version="1.0" encoding="utf-8"?>
<Database Name="Northwind" Class="NorthwindDataContext" xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007">
    <Table Name="dbo.Categories" Member="CategoryEntities">
        <Type Name="CategoryEntity">
            <!-- Fields -->
        </Type>
    </Table>
    <Table Name="dbo.Products" Member="Products">
        <Type Name="Product">
            <!-- Fields -->
            <Association Name="Category_Product" Member="CategoryEntity" Storage="_Category" ThisKey="CategoryID" OtherKey="CategoryID" Type="CategoryEntity" IsForeignKey="true" />
        </Type>
    </Table>
</Database>

and

[Database(Name = "Northwind")]
public partial class NorthwindDataContext : DataContext
{
    public Table<CategoryEntity> CategoryEntities { get; }
}

[Table(Name = "dbo.Categories")]
public partial class CategoryEntity : INotifyPropertyChanging, INotifyPropertyChanged
{
}

[Table(Name = "dbo.Products")]
public partial class Product : INotifyPropertyChanging, INotifyPropertyChanged
{
    [Association(Name = "Category_Product", Storage = "_Category",
        ThisKey = "CategoryID", OtherKey = "CategoryID", IsForeignKey = true)]
    public CategoryEntity CategoryEntity { get; set; }
}

Properties, associations, and inheritance can also be customized.

For example, the ProductID property can be renamed to ProductId to comply with the .NET Framework Design Guidelines.

More options are available to customize the data context, entities, and properties.

Please notice this is a one-way mapping, from SQL Server to C#. When the mapping information is changed in the O/R designer, SQL Server is not affected at all.

And, LINQ to SQL is designed to provide a simple O/R mapping, not supporting advanced functionality like multi-table inheritance, etc. According to MSDN:

The single-table mapping strategy is the simplest representation of inheritance and provides good performance characteristics for many different categories of queries.

Please check this link for more details.

Work with the models

The auto-generated models are very easy to work with and extensible.

Partial class

All the generated C# classes are partial classes. For example, it is very easy to add a NorthwindDataContext.cs file and a Category.cs file to the project, and write the extension code there.
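
For instance, a Category.cs extension file might look like this (a minimal sketch; the ToString() override is made up):

// Category.cs - extends the generated partial Category class.
public partial class Category
{
    // Custom members live here, away from the generated code.
    public override string ToString()
    {
        return string.Format("{0}: {1}", this.CategoryID, this.CategoryName);
    }
}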

Partial method

There are also a lot of partial methods in the generated code:

[Database(Name = "Northwind")]
public partial class NorthwindDataContext : DataContext
{
    #region Extensibility Method Definitions

    partial void OnCreated();
    partial void InsertCategory(Category instance);
    partial void UpdateCategory(Category instance);
    partial void DeleteCategory(Category instance);
    partial void InsertProduct(Product instance);
    partial void UpdateProduct(Product instance);
    partial void DeleteProduct(Product instance);

    #endregion
}

For example, the OnCreated() can be implemented in the NorthwindDataContext.cs:

public partial class NorthwindDataContext
{
    // OnCreated will be invoked by constructors.
    partial void OnCreated()
    {
        // The default value is 30 seconds.
        this.CommandTimeout = 40;
    }
}

When the NorthwindDataContext is constructed, the OnCreated() is invoked, and the custom code is executed.

So are the entities:

[Table(Name = "dbo.Categories")]
public partial class Category : INotifyPropertyChanging, INotifyPropertyChanged
{
    #region Extensibility Method Definitions

    partial void OnLoaded();
    partial void OnValidate(ChangeAction action);
    partial void OnCreated();
    partial void OnCategoryIDChanging(int value);
    partial void OnCategoryIDChanged();
    partial void OnCategoryNameChanging(string value);
    partial void OnCategoryNameChanged();
    partial void OnDescriptionChanging(string value);
    partial void OnDescriptionChanged();
    partial void OnPictureChanging(Binary value);
    partial void OnPictureChanged();

    #endregion
}

For example, the OnValidate() is very useful for data validation:

[Table(Name = "dbo.Categories")]
public partial class Category
{
    partial void OnValidate(ChangeAction action)
    {
        switch (action)
        {
            case ChangeAction.Delete:
                // Validates the object when deleted.
                break;
            case ChangeAction.Insert:
                // Validates the object when inserted.
                break;
            case ChangeAction.None:
                // Validates the object when not submitted.
                break;
            case ChangeAction.Update:
                // Validates the object when updated.
                if (string.IsNullOrWhiteSpace(this._CategoryName))
                {
                    throw new ValidationException("CategoryName is invalid.");
                }
                break;
            default:
                break;
        }
    }
}

When the category object (representing a record in Categories table) is updated, the custom code checking the CategoryName will be executed.

And, because each entity class’s Xxx property’s setter invokes the OnXxxChanging() partial method:

[Table(Name = "dbo.Categories")]
public partial class CategoryEntity : INotifyPropertyChanging, INotifyPropertyChanged
{
    [Column(Storage = "_CategoryName", DbType = "NVarChar(15) NOT NULL", CanBeNull = false)]
    public string CategoryName
    {
        get
        {
            return this._CategoryName;
        }
        set
        {
            if ((this._CategoryName != value))
            {
                this.OnCategoryNameChanging(value);
                this.SendPropertyChanging();
                this._CategoryName = value;
                this.SendPropertyChanged("CategoryName");
                this.OnCategoryNameChanged();
            }
        }
    }
}

Validation can be also done in this way:

public partial class CategoryEntity
{
    partial void OnCategoryNameChanging(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
        {
            throw new ArgumentOutOfRangeException("value");
        }
    }
}

INotifyPropertyChanging and INotifyPropertyChanged interfaces

Each auto generated entity class implements INotifyPropertyChanging and INotifyPropertyChanged interfaces:

namespace System.ComponentModel
{
    public interface INotifyPropertyChanging
    {
        event PropertyChangingEventHandler PropertyChanging;
    }

    public interface INotifyPropertyChanged
    {
        event PropertyChangedEventHandler PropertyChanged;
    }
}

For example, in the above auto-generated CategoryName code, after setting the CategoryName, SendPropertyChanged() is invoked, passing the property name “CategoryName” as argument:

[Table(Name = "dbo.Categories")]
public partial class CategoryEntity : INotifyPropertyChanging, INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void SendPropertyChanged(String propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

This is very useful to track changes of the entity object:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    Category category = database.Categories.Single(item => item.CategoryName == "Beverages");
    category.PropertyChanged += (_, e) =>
        {
            Console.Write("Property {0} is changed", e.PropertyName);
        };
        };

    // Work with the category object.
    category.CategoryID = 100;
    // ...
}

And this is used for change tracking by DataContext, which will be explained later.

Programmatically access the mapping information

The mapping information is stored in DataContext.Mapping as a MetaModel object. Here is an example:

public static class DataContextExtensions
{
    public static Type GetEntityType(this DataContext database, string tableName)
    {
        return database.Mapping.GetTables()
                               .Single(table => table.TableName.Equals(
                                   tableName, StringComparison.Ordinal))
                               .RowType
                               .Type;
    }
}

The method queries the mapping information with the table name, and returns the entity type:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    Type categoryType = database.GetEntityType("dbo.Categories");
}

Create SQL schema from C# models

Usually, many people design the SQL database first, then model it with the O/R designer, and write code to work with the C# object models. But this is not required. It is totally Ok to create POCO models first without considering the SQL stuff:

public partial class Category
{
    public int CategoryID { get; set; }

    public string CategoryName { get; set; }

    public EntitySet<Product> Products { get; set; }
}

Now it is already possible to start coding with this kind of model.

Later, there are 2 ways to integrate the C# program with SQL Server database:

  • Generate object models from designed SQL Server database;
  • Decorate the POCO models with mapping attributes, then invoke the CreateDatabase() method of DataContext to create the expected database schema in SQL Server.

For example, the C# models can be polluted with O/R mapping knowledge like this:

[Table(Name = "Categories")]
public class Category
{
    [Column(DbType = "Int NOT NULL IDENTITY", IsPrimaryKey = true)]
    public int CategoryId { get; set; }

    [Column(DbType = "NVarChar(15) NOT NULL")]
    public string CategoryName { get; set; }

    [Association(Name = "Category_Products",
        ThisKey = "CategoryId", OtherKey = "CategoryId")]
    public EntitySet<Product> Products { get; set; }
}

[Table(Name = "Products")]
public class Product
{
    [Column(DbType = "Int NOT NULL IDENTITY", IsPrimaryKey = true)]
    public int ProductId { get; set; }

    [Column(DbType = "NVarChar(40) NOT NULL")]
    public string ProductName { get; set; }

    [Column(DbType = "Int")]
    public int CategoryId { get; set; }

    [Association(Name = "Category_Products", IsForeignKey = true,
        ThisKey = "CategoryId", OtherKey = "CategoryId")]
    public Category Category { get; set; }
}

[Database(Name = "SimpleNorthwind")]
public class SimpleNorthwindDataContext : DataContext
{
    public SimpleNorthwindDataContext(IDbConnection connection)
        : base(connection)
    {
    }

    public Table<Category> Categories { get; set; }

    public Table<Product> Products { get; set; }
}

Now it is ready to create the database schema in SQL Server:

using (SimpleNorthwindDataContext database = new SimpleNorthwindDataContext(new SqlConnection(
    @"Data Source=qablog.qaitdevlabs.com;Initial Catalog=SimpleNorthwind;Integrated Security=True")))
{
    if (database.DatabaseExists())
    {
        database.DeleteDatabase();
    }

    database.CreateDatabase();
}

Isn’t this easy? The SimpleNorthwind database is then generated in SQL Server.

03 Dec, 2015

Implement Lazy Loading in C# Using Lazy Class

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net

Lazy loading is a nice and very important concept in the programming world. Sometimes it helps to improve performance and adapt best practices in application design. Let’s discuss why lazy loading is useful and how it helps to develop a high performance application.

Lazy loading is essential when the cost of object creation is very high and the use of the object is very rare. This is the scenario where it’s worth implementing lazy loading.
The fundamental idea of lazy loading is to load the object or data only when it is needed.

At first we will implement a traditional concept of loading (it’s not lazy loading) and then we will try to understand the problem in this. Then we will implement lazy loading to solve the problem.

Have a look at the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleAPP
{
    public class PersonalLoan
    {
        public string AccountNumber { get; set; }
        public string AccountHolderName { get; set; }
        public Loan LoanDetail { get; set; }

        public PersonalLoan(string accountNumber)
        {
            this.AccountNumber = accountNumber;
            this.AccountHolderName = "Sourav";

            // The Loan object is created eagerly, together with the PersonalLoan.
            this.LoanDetail = new Loan(this.AccountNumber);
        }
    }

    public class Loan
    {
        public string AccountNumber { get; set; }
        public float LoanAmount { get; set; }
        public bool IsLoanApproved { get; set; }

        public Loan(string accountNumber)
        {
            Console.WriteLine("Loan loading started");
            this.AccountNumber = accountNumber;
            this.LoanAmount = 1000;
            this.IsLoanApproved = true;
            Console.WriteLine("Loan loading completed");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            PersonalLoan p = new PersonalLoan("123456");

            Console.ReadLine();
        }
    }
}
This is not lazy loading, since the LoanDetail property is populated at the time of PersonalLoan object creation. If creating the Loan object is very costly, then creating an object of the PersonalLoan class becomes a very time- and resource-intensive operation.

Ok, so we will implement a mechanism to populate the LoanDetail property with a delay; I mean, we will populate the property only if it is needed.

This solves our problem and obviously improves the performance of the application. Have a look at the following example. We can see that the LoanDetail property will not be populated when an object of the PersonalLoan class is created.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleAPP
{
    public class PersonalLoan
    {
        public string AccountNumber { get; set; }
        public string AccountHolderName { get; set; }
        public Loan LoanDetail { get; set; }

        public PersonalLoan(string accountNumber)
        {
            this.AccountNumber = accountNumber;
            this.AccountHolderName = "Sourav";
        }
    }

    public class Loan
    {
        public string AccountNumber { get; set; }
        public float LoanAmount { get; set; }
        public bool IsLoanApproved { get; set; }

        public Loan(string accountNumber)
        {
            Console.WriteLine("Loan loading started");
            this.AccountNumber = accountNumber;
            this.LoanAmount = 1000;
            this.IsLoanApproved = true;
            Console.WriteLine("Loan loading completed");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            PersonalLoan p = new PersonalLoan("123456");

            // LoanDetail starts to load only here, when we actually need it.
            p.LoanDetail = new Loan("123456");

            Console.WriteLine(p.LoanDetail.AccountNumber);
            Console.WriteLine(p.LoanDetail.IsLoanApproved);
            Console.WriteLine(p.LoanDetail.LoanAmount);

            Console.ReadLine();
        }
    }
}
So we see that the LoanDetail property is still null until then. As soon as we load the property in our code, it is populated.
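
The same delayed population can also be folded into the class itself with a lazy getter, so callers don't have to remember to create the Loan. A minimal sketch:

public class PersonalLoan
{
    private Loan loanDetail;

    public string AccountNumber { get; set; }

    // The Loan object is created only on first access.
    public Loan LoanDetail
    {
        get
        {
            if (this.loanDetail == null)
            {
                this.loanDetail = new Loan(this.AccountNumber);
            }
            return this.loanDetail;
        }
    }
}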

Finally

Lazy loading is a nice feature of application development, the developer should implement it wisely to enhance performance and reduce the cost of application execution.

Implement lazy loading using Lazy<T> class

As we know, lazy loading is a nice feature of applications: not only does it improve the performance of the application, but it also helps to manage memory and other resources efficiently. Basically, we can use lazy initialization when a large object is to be created or a resource-intensive task is to be executed, particularly when such creation or execution might not occur at all during the lifetime of the program.

And there are many choices to implement lazy initialization. We can use our own implementation to delay object population or we can use the Lazy<T> class of the .NET library to do it.

To prepare for lazy initialization, you create an instance of Lazy<T>. The type argument of the Lazy<T> object that you create specifies the type of the object that you want to initialize lazily. The constructor that you use to create the Lazy<T> object determines the characteristics of the initialization. Lazy initialization occurs the first time the Lazy<T>.Value property is accessed.

The Lazy<T> class contains two properties by which we can detect the status of the lazy class.

IsValueCreated

This property tells us whether or not the value has been initialized in the lazy class.

Value

It gets the lazy initialized value of the current Lazy<T> instance.

Fine. We will now implement one simple class and we will see how Lazy<T> works with it. Have a look at the following example.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleAPP
{
    public class Test
    {
        private List<string> list = null;

        public Test()
        {
            Console.WriteLine("List Generated:");
            list = new List<string>() {
                "Sourav", "Ram"
            };
        }

        public List<string> Names
        {
            get
            {
                return list;
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Lazy<Test> lazy = new Lazy<Test>();
            Console.WriteLine("Data Loaded : " + lazy.IsValueCreated);

            Test t = lazy.Value;

            foreach (string tmp in t.Names)
            {
                Console.WriteLine(tmp);
            }
            Console.ReadLine();
        }
    }
}
The Test class has been declared, and the variable (lazy) holds a Lazy<Test> instance. We then check whether the value has been created or not. In the output we see the value is "False", so the value is still not populated.

Whenever the line Test t = lazy.Value executes, the value is populated, and this is how the Test class is initialized lazily.

In the next line we access the Names property of the Test class, which returns a list of strings, and we print its contents.

A question may occur to you: is Lazy<T> thread safe?

By default, all public and protected members of the Lazy<T> class are thread safe and may be used concurrently from multiple threads. These thread-safety guarantees may be removed optionally and per instance, using parameters to the type’s constructors.
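
A minimal sketch of that per-instance control, using the constructor overload that takes a System.Threading.LazyThreadSafetyMode:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Thread safe by default (ExecutionAndPublication).
        Lazy<string> safe = new Lazy<string>(() => "created once");

        // Thread-safety guarantees removed for this instance.
        Lazy<string> unguarded = new Lazy<string>(
            () => "no locking", LazyThreadSafetyMode.None);

        Console.WriteLine(safe.Value);
        Console.WriteLine(unguarded.Value);
    }
}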

Thanks for reading, Happy learning. In the next article we will see how to implement lazy loading in Entity Framework.

02 Dec, 2015

Introducing U-SQL – A Language that makes Big Data Processing Easy

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net

Microsoft announced the new Azure Data Lake services for analytics in the cloud that includes a hyper-scale repository, a new analytics service built on YARN that allows data developers and data scientists to analyze all data, and HDInsight, a fully managed Hadoop, Spark, Storm and HBase service. Azure Data Lake Analytics includes U-SQL, a language that unifies the benefits of SQL with the expressive power of your own code. U-SQL’s scalable distributed query capability enables you to efficiently analyze data in the store and across relational stores such as Azure SQL Database. In this blog post I will outline the motivation for U-SQL, some of our inspiration, and design philosophy behind the language, and show you a few examples of the major aspects of the language.

Why U-SQL?

If you analyze the characteristics of Big Data analytics, several requirements arise naturally for an easy to use, yet powerful language:

  • Process any type of data. From analyzing BotNet attack patterns from security logs to extracting features from images and videos for machine learning, the language needs to enable you to work on any data.
  • Use custom code easily to express your complex, often proprietary business algorithms. The example scenarios above may all require custom processing that is often not easily expressed in standard query languages, ranging from user defined functions, to custom input and output formats.
  • Scale efficiently to any size of data without you focusing on scale-out topologies, plumbing code, or limitations of a specific distributed infrastructure.

How do existing Big Data languages stack up to these requirements?

SQL-based languages (such as Hive and others) provide you with a declarative approach that natively does the scaling, parallel execution, and optimizations for you. This makes them easy to use, familiar to a wide range of developers, and powerful for many standard types of analytics and warehousing. However, their extensibility model and support for non-structured data and files are often bolted on and harder to use. For example, even if you just want to quickly explore your data in a file or remote data source, you need to create catalog objects to schematize file data or remote sources before you can query them, which reduces your agility. And although SQL-based languages often have several extensibility points for custom formatters, user-defined functions, and aggregators, they are rather complex to build, integrate, and maintain, with varying degrees of consistency in the programming models.

Programming language-based approaches to process Big Data, for their part, provide an easy way to add your custom code. However, a programmer often has to explicitly code for scale and performance, often down to managing the execution topology and workflow such as the synchronization between the different execution stages or the scale-out architecture. This code can be difficult to write correctly, and optimized for performance. Some frameworks support declarative components such as language integrated queries or embedded SQL support. However, SQL may be integrated as strings and thus lacking tool support, the extensibility integration may be limited or – due to the procedural code that does not guard against side-effects – hard to optimize, and does not provide for reuse.

Taking the issues of both SQL-based and procedural languages into account, we designed U-SQL from the ground-up as an evolution of the declarative SQL language with native extensibility through user code written in C#. This unifies both paradigms, unifies structured, unstructured, and remote data processing, unifies the declarative and custom imperative coding experience, and unifies the experience around extending your language capabilities.

U-SQL is built on the learnings from Microsoft’s internal experience with SCOPE and existing languages such as T-SQL, ANSI SQL, and Hive. For example, we base our SQL and programming language integration and the execution and optimization framework for U-SQL on SCOPE, which currently runs hundred thousands of jobs each day internally. We also align the metadata system (databases, tables, etc.), the SQL syntax, and language semantics with T-SQL and ANSI SQL, the query languages most of our SQL Server customers are familiar with. And we use C# data types and the C# expression language so you can seamlessly write C# predicates and expressions inside SELECT statements and use C# to add your custom logic. Finally, we looked to Hive and other Big Data languages to identify patterns and data processing requirements and integrate them into our framework.

In short, basing U-SQL language on these existing languages and experiences should make it easy for you to get started and powerful enough for the hardest problems.

Show me U-SQL!

Let’s assume that I have downloaded my Twitter history of all my tweets, retweets, and mentions as a CSV file and placed it into my Azure Data Lake Store.

 

In this case I know the schema of the data I want to process, and for starters I want to just count the number of tweets for each of the authors in the tweet “network”:

@t = EXTRACT date string
            , time string
            , author string
            , tweet string
     FROM "/input/MyTwitterHistory.csv"
     USING Extractors.Csv();

@res = SELECT author
            , COUNT(*) AS tweetcount
       FROM @t
       GROUP BY author;

OUTPUT @res TO "/output/MyTwitterAnalysis.csv"
ORDER BY tweetcount DESC
USING Outputters.Csv();

The above U-SQL script shows the three major steps of processing data with U-SQL:

  1. Extract data from your source. Note that you just schematize it in your query with the EXTRACT statement. The datatypes are based on C# datatypes and I use the built-in Extractors library to read and schematize the CSV file.
  2. Transform using SQL and/or custom user defined operators (which we will cover another time). In the example above, it is a familiar SQL expression that does a GROUP BY aggregation.
  3. Output the result either into a file or into a U-SQL table to store it for further processing.

Note that U-SQL’s SQL keywords have to be upper-case to provide syntactic differentiation from C# expressions with the same keywords but different meaning.

Also notice that each of the expressions are assigned to a variable (@t and @res). This allows U-SQL to incrementally transform and combine data step by step expressed as an incremental expression flow using functional lambda composition (similar to what you find in the Pig language). The execution framework, then, composes the expressions together into a single expression. That single expression can then be globally optimized and scaled out in a way that isn’t possible if expressions are being executed line by line. The following picture shows you the graph generated for the next query in this blog post:

 

Going back to our example, I now want to add additional information about the people mentioned in the tweets and extend my aggregation to return how often people in my tweet network are authoring tweets and how often they are being mentioned. Because I can use C# to operate on the data, I can use an inline C# LINQ expression to extract the mentions into an ARRAY. Then I turn the array into a rowset with EXPLODE and apply the EXPLODE to each row’s array with a CROSS APPLY. I union the authors with the mentions, but need to drop the leading @-sign to align it with the author values. This is done with another C# expression where I take the Substring starting at position 1.

@t = EXTRACT date string
            , time string
            , author string
            , tweet string
     FROM "/input/MyTwitterHistory.csv"
     USING Extractors.Csv();

@m = SELECT new SQL.ARRAY<string>(
                tweet.Split(' ').Where(x => x.StartsWith("@"))) AS refs
     FROM @t;

@t = SELECT author, "authored" AS category
     FROM @t
     UNION ALL
     SELECT r.Substring(1) AS r, "mentioned" AS category
     FROM @m CROSS APPLY EXPLODE(refs) AS Refs(r);

@res = SELECT author
            , category
            , COUNT(*) AS tweetcount
       FROM @t
       GROUP BY author, category;

OUTPUT @res TO "/output/MyTwitterAnalysis.csv"
ORDER BY tweetcount DESC
USING Outputters.Csv();

As a next step I can use the Azure Data Lake Tools for Visual Studio to refactor the C# code into C# functions using the tool’s code-behind functionality. When I then submit the script, it automatically deploys the code to the service.

 

I can also deploy and register the code as an assembly in my U-SQL metadata catalog. This allows me and other people to use the code in future scripts. The following script shows how to refer to the functions, assuming the assembly was registered as TweetAnalysis:

REFERENCE ASSEMBLY TweetAnalysis;

@t = EXTRACT date string
            , time string
            , author string
            , tweet string
     FROM "/input/MyTwitterHistory.csv"
     USING Extractors.Csv();

@m = SELECT Tweets.Udfs.get_mentions(tweet) AS refs
     FROM @t;

@t = SELECT author, "authored" AS category
     FROM @t
     UNION ALL
     SELECT Tweets.Udfs.cleanup_mentions(r) AS r, "mentioned" AS category
     FROM @m CROSS APPLY EXPLODE(refs) AS Refs(r);

@res = SELECT author
            , category
            , COUNT(*) AS tweetcount
       FROM @t
       GROUP BY author, category;

OUTPUT @res
TO "/output/MyTwitterAnalysis.csv"
ORDER BY tweetcount DESC
USING Outputters.Csv();

Because I noticed that I need to do a bit more cleanup around the mentions besides just dropping the @ sign, the assembly also contains a cleanup_mentions function that does additional processing beyond dropping the @.
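
As a rough illustration only, the code-behind functions could be shaped like this (a sketch, assuming SQL.ARRAY<T> corresponds to Microsoft.Analytics.Types.Sql.SqlArray<T> in C#, and with made-up trimming logic in cleanup_mentions):

using System.Linq;
using Microsoft.Analytics.Types.Sql;

namespace Tweets
{
    public static class Udfs
    {
        // Extract the @-mentions from a tweet.
        public static SqlArray<string> get_mentions(string tweet)
        {
            return new SqlArray<string>(
                tweet.Split(' ').Where(x => x.StartsWith("@")));
        }

        // Drop the leading @ and trim trailing punctuation.
        public static string cleanup_mentions(string mention)
        {
            return mention.Substring(1).TrimEnd(':', ',', '.', ';');
        }
    }
}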

This is why U-SQL!

I hope you got a glimpse at why we think U-SQL makes it easy to query and process Big Data and that you understand our thinking behind the language. Over the next couple of weeks we will be expanding more on the language design philosophy and provide more sample code and scenarios over at our Big Data topic in the Azure blog. We’ll also dive deeper into many of the additional capabilities such as:

  • Operating over set of files with patterns
  • Using (Partitioned) Tables
  • Federated Queries against Azure SQL DB
  • Encapsulating your U-SQL code with Views, Table-Valued Functions, and Procedures
  • SQL Windowing Functions
  • Programming with C# User-defined Operators (custom extractors, processors)
  • Complex Types (MAP, ARRAY)
  • Using U-SQL in data processing pipelines
  • U-SQL in a lambda architecture for IOT analytics

U-SQL makes Big Data processing easy because it:

  • Unifies declarative queries with the expressiveness of your user code
  • Unifies querying structured and unstructured data
  • Unifies local and remote queries
  • Increases productivity and agility from Day 1 for YOU!

Not Just U-SQL – Azure Data Lake provides Productivity on All Your Data

U-SQL is just one of the ways that we are working to make Azure Data Lake the most productive environment for authoring, debugging and optimizing analytics at any scale. With rich support for authoring and monitoring Hive jobs, a C# based authoring model for building Storm jobs for real time streaming, and supporting every stage of the job lifecycle from development to operationalization, the Azure Data Lake services let you focus more on the questions you want to answer than spending time debugging distributed infrastructure. Our goal is to make big data technology simpler and more accessible to the greatest number of people possible: big data professionals, engineers, data scientists, analysts and application developers.

02 Dec, 2015

Magic Table in SQL Server

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net,Events
Magic Tables

Magic tables are nothing but logical tables maintained by SQL Server internally.

There are two types of magic tables available in SQL Server:

  • Inserted
  • Deleted

We cannot see or access these tables directly, not even their data types. The only way to access them is through trigger operations, either an AFTER trigger or an INSTEAD OF trigger.

Inserting into a table (Inserted table):
Whenever we insert anything into a base table in the database, SQL Server automatically creates a table named INSERTED. The newly inserted record is available in this table, and we can access it via triggers.

Updating a table (Inserted and Deleted tables):
Whenever we run an update operation on a base table, two tables are created instead of one: INSERTED and DELETED. The DELETED table contains the record as it was before the update, and the INSERTED table contains the record as it is after the update. Again, we can access both via trigger functionality.

Deleting (Deleted table):
Whenever we delete from a base table in the database, SQL Server automatically creates a table named DELETED. This table contains the records that were just deleted. Again, we can access these records via triggers.

02 Dec, 2015

Hashtable in C#

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net,Events

Hashtable. This optimizes lookups. It computes a hash of each key you add. It then uses this hash code to look up the element very quickly.
Don’t use this. It is an older .NET Framework type. It is slower than the generic Dictionary type. But if an old program uses Hashtable, it is helpful to know how to use this type.
First example. We create a Hashtable with a constructor. When it is created, the Hashtable has no values. We directly assign values with the indexer, which uses the square brackets.

Next:The example adds three integer keys, with one string value each, to the Hashtable object.

Result:The program displays all the DictionaryEntry objects returned from the enumerator in the foreach-loop.

WriteLine:The WriteLine call contains a format string that displays the key-value pairs with a comma.

Based on: .NET 4.5

C# program that adds entries to Hashtable

using System;
using System.Collections;

class Program
{
    static void Main()
    {
	Hashtable hashtable = new Hashtable();
	hashtable[1] = "One";
	hashtable[2] = "Two";
	hashtable[13] = "Thirteen";

	foreach (DictionaryEntry entry in hashtable)
	{
	    Console.WriteLine("{0}, {1}", entry.Key, entry.Value);
	}
    }
}

Output

13, Thirteen
2, Two
1, One

Foreach. You can loop through the Hashtable by using the DictionaryEntry type in a foreach-loop. You can alternatively get the Keys collection and copy it into an ArrayList.
DictionaryEntry. A DictionaryEntry contains two objects: the key and the value. This is similar to a KeyValuePair from the newer generic Dictionary.
ContainsKey. You will want to call ContainsKey on your Hashtable with the key contents. This method returns true if the key is found, regardless of the value.

Also:Contains works the same way. We see an example of using the indexer with the square brackets.

C# program that uses Contains method

using System;
using System.Collections;

class Program
{
    static Hashtable GetHashtable()
    {
	// Create and return new Hashtable.
	Hashtable hashtable = new Hashtable();
	hashtable.Add("Area", 1000);
	hashtable.Add("Perimeter", 55);
	hashtable.Add("Mortgage", 540);
	return hashtable;
    }

    static void Main()
    {
	Hashtable hashtable = GetHashtable();

	// See if the Hashtable contains this key.
	Console.WriteLine(hashtable.ContainsKey("Perimeter"));

	// Test the Contains method. It works the same way.
	Console.WriteLine(hashtable.Contains("Area"));

	// Get value of Area with indexer.
	int value = (int)hashtable["Area"];

	// Write the value of Area.
	Console.WriteLine(value);
    }
}

Output

True
True
1000

Objects. An indexer is a property that receives an argument inside square brackets. The Hashtable implements indexers. It returns plain objects, so you must cast them.
Multiple types. The example here adds string keys and int keys. Each of the key-value pairs has different types. You can put them all in the same Hashtable.

Warning:This code might throw exceptions. Casting is a delicate operation. It is hard to get right.

Info:If the cast was applied to a different type, the statement could throw an InvalidCastException. We avoid this with “is” or “as.”

C# program that uses multiple types

using System;
using System.Collections;

class Program
{
    static Hashtable GetHashtable()
    {
	Hashtable hashtable = new Hashtable();

	hashtable.Add(300, "Carrot");
	hashtable.Add("Area", 1000);
	return hashtable;
    }

    static void Main()
    {
	Hashtable hashtable = GetHashtable();

	string value1 = (string)hashtable[300];
	Console.WriteLine(value1);

	int value2 = (int)hashtable["Area"];
	Console.WriteLine(value2);
    }
}

Output

Carrot
1000

Cast. You can use the as-operator to attempt to cast an object to a specific reference type. If the cast does not succeed, the result will be null.

Is:You can also use the is-operator. This operator returns true or false based on the result.

As:With Hashtable, you can reduce the number of casts by using the as-operator. Casting the same value twice, once to test and once to use, is flagged by FxCop as a performance issue.

C# program that casts Hashtable values

using System;
using System.Collections;
using System.IO;

class Program
{
    static void Main()
    {
	Hashtable hashtable = new Hashtable();
	hashtable.Add(400, "Blazer");

	// This cast will succeed.
	string value = hashtable[400] as string;
	if (value != null)
	{
	    Console.WriteLine(value);
	}

	// This cast won't succeed, but won't throw.
	StreamReader reader = hashtable[400] as StreamReader;
	if (reader != null)
	{
	    Console.WriteLine("Unexpected");
	}

	// You can get the object and test it.
	object value2 = hashtable[400];
	if (value2 is string)
	{
	    Console.Write("is string: ");
	    Console.WriteLine(value2);
	}
    }
}

Output

Blazer
is string: Blazer

Keys, values. We can loop over keys and values, or store them in an ArrayList. This example shows all the keys, then all the values, and then stores the keys in an ArrayList.

Note:This Hashtable example uses the Keys property. This property returns all the keys.

Keys:The first loop in the program loops over the collection returned by the Keys instance property on the Hashtable instance.

Values:The second loop in the program shows how to enumerate only the values in the Hashtable instance.

Copy:We create a new ArrayList with the copy constructor and pass it the Keys (or Values) property as the argument.

C# program that loops over Keys, Values

using System;
using System.Collections;

class Program
{
    static void Main()
    {
	Hashtable hashtable = new Hashtable();
	hashtable.Add(400, "Blaze");
	hashtable.Add(500, "Fiery");
	hashtable.Add(600, "Fire");
	hashtable.Add(800, "Immolate");

	// Display the keys.
	foreach (int key in hashtable.Keys)
	{
	    Console.WriteLine(key);
	}

	// Display the values.
	foreach (string value in hashtable.Values)
	{
	    Console.WriteLine(value);
	}

	// Put keys in an ArrayList.
	ArrayList arrayList = new ArrayList(hashtable.Keys);
	foreach (int key in arrayList)
	{
	    Console.WriteLine(key);
	}
    }
}

Output

800       (First loop)
600
500
400
Immolate  (Second loop)
Fire
Fiery
Blaze
800       (Third loop)
600
500
400

Keys and values, notes. The Keys and Values public accessors return a collection of the keys and values in the Hashtable at the time they are accessed.

However:If you need to look at all the keys and values in pairs, it is best to enumerate the Hashtable instance itself.
Count, Clear. You can count the elements in a Hashtable with the Count property. The example also shows using the Clear method to erase all the Hashtable contents.

Tip:An alternative to Clear() is to reassign your Hashtable reference to a new Hashtable().

Note:This example shows how to use the Count property. This property returns the number of elements.

First:We add data to the Hashtable. It captures the Count, which is 4. It then uses Clear on the Hashtable, which now has 0 elements.

C# program that uses Count

using System;
using System.Collections;

class Program
{
    static void Main()
    {
	// Add four elements to Hashtable.
	Hashtable hashtable = new Hashtable();
	hashtable.Add(1, "Sandy");
	hashtable.Add(2, "Bruce");
	hashtable.Add(3, "Fourth");
	hashtable.Add(10, "July");

	// Get Count of Hashtable.
	int count = hashtable.Count;
	Console.WriteLine(count);

	// Clear the Hashtable.
	hashtable.Clear();

	// Get Count of Hashtable again.
	Console.WriteLine(hashtable.Count);
    }
}

Output

4
0

Count property, notes. Count returns the number of elements in the Hashtable. This property does not perform lengthy computations or loops.

Note:MSDN states that, for Count, “retrieving the value of this property is an O(1) operation.”

Time:This property is a constant-time accessor. It returns an integer and is a simple accessor with low resource demands.
Benchmark. We test the Hashtable collection against the Dictionary. The benchmark first populates an equivalent version of each collection.

Then:It tests one key that is found and one that is not found. It repeats this 20 million times.

Hashtable used in benchmark: C#

Hashtable hashtable = new Hashtable();
for (int i = 0; i < 10000; i++)
{
    hashtable[i.ToString("00000")] = i;
}

Dictionary used in benchmark: C#

var dictionary = new Dictionary<string, int>();
for (int i = 0; i < 10000; i++)
{
    dictionary.Add(i.ToString("00000"), i);
}

Statements benchmarked: C#

hashtable.ContainsKey("09999")
hashtable.ContainsKey("30000")

dictionary.ContainsKey("09999")
dictionary.ContainsKey("30000")

Benchmark of 20 million lookups

Hashtable result:  966 ms
Dictionary result: 673 ms

Results, benchmark. Hashtable is slower than the Dictionary code: 966 ms versus 673 ms, so the Dictionary completes the same lookups in about 30% less time. This means that when a strongly-typed collection can be used, the Dictionary is faster.
Constructors. The 15 overloaded constructors provide ways to specify capacities. They let you copy existing collections. You can also specify how the hash code is computed.
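
A minimal sketch of three common overloads (the capacity value and the keys here are arbitrary):

C# program that uses Hashtable constructors

using System;
using System.Collections;

class Program
{
    static void Main()
    {
	// Specify an initial capacity to reduce rehashing.
	Hashtable sized = new Hashtable(100);
	sized["key"] = "value";

	// Copy the entries of an existing dictionary.
	Hashtable copy = new Hashtable(sized);
	Console.WriteLine(copy.Count);

	// Supply an IEqualityComparer to control hashing and equality.
	Hashtable ignoreCase = new Hashtable(StringComparer.OrdinalIgnoreCase);
	ignoreCase.Add("Area", 1000);
	Console.WriteLine(ignoreCase.ContainsKey("AREA"));
    }
}

Output

1
True
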
A summary. Hashtable is an older collection that has been superseded by the generic Dictionary. Knowing how to use it remains important when maintaining older programs.

02 Dec, 2015

Unit Test Best Practices and Guidelines

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net,Events
  • no comments

The following are Unit Test best practices and guidelines:

  1. Test one object or class per unit test class.
    Yes, you should refactor your code if you cannot test one class only.
  2. Name your test class after the class it tests.
    If you have a class named SomeClassY, then your test class should be named SomeClassY_Tests.
  3. Perform one test per test function.
    The test class can have as many functions as you need. Perform one test per function. That doesn’t mean one Assert call. Multiple assertions might be needed to perform one test.  Think of it this way. When a test fails, you should know exactly what failed and why just because of which test function failed.
  4. A unit test should run on the Build and Continuous Integration (CI) systems.
    Unit tests are there to help you succeed and prevent you from failing. If they run rarely, they rarely help. They should run every time you check in code and every time your build kicks off. You should be automatically notified if any code you wrote breaks an existing Unit Test.
  5. A unit test should never alter the system in any way.
    Don’t touch files, databases, the registry, the network, etc. A test that does so is a functional test, not a Unit Test. If an object cannot be tested without touching the system, refactor the object to use an interface (and, if needed, a wrapper) for interacting with the system so it can be faked, or mocked with a tool such as RhinoMocks or Moq; see the sketch after this list. This is important because a Unit Test that runs on the Build or CI system and alters that system can hide a bug and allow it to ship in a released product.
  6. Make the test function names self documenting.
    This means if you want to test passing in a bool to FunctionX you might call your test functions something like this:
    FunctionX_True_Test()
    FunctionX_False_Test()
    Think of it this way. When a test fails, you should know exactly what failed and why just because of the function name.
  7. Never assume 100% code coverage means 100% tested.
    For example, 100% coverage of a function that takes a string as a parameter might be 100% tested with one test. However, you may need to test passing in at least five string instances to avoid all types of bugs: expected string, unexpected string, null, empty, white space, and double-byte strings. Similarly a function that takes a bool parameter should be tested with both true and false passed in.
  8. Test in the simplest way possible.
    Don’t elaborate, don’t add extra code. Just make a valid test as small as possible. Warning! That doesn’t mean you can forget the best practices and guidelines above. For example, if the simplest way is to test everything in one function do NOT do it. Follow the best practices and guidelines.
  9. Get training and keep learning about Unit Testing.
    You won’t do it correctly without training and continued learning. It doesn’t matter if you do your own research and train yourself by reading online articles, blog posts, or books. Just get yourself trained and keep learning. There are many test frameworks, mocking frameworks, wrappers (such as System Wrapper), and encapsulation issues, and without training you may end up with Unit Tests that are not maintainable. You will find many opinions about best practices, some matter, some don’t, but you should know each side of the opinions and why those opinions exist whether you agree with them or not (this list included).
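
As a minimal sketch of guideline 5 (every name here, including IClock, Greeter, FakeClock and Greeter_Tests, is hypothetical), the system dependency sits behind an interface so a hand-written fake can stand in for the real clock; the class and function names also follow guidelines 2, 3 and 6:

C# program that fakes a system dependency

using System;

// Production code: the system clock is reached only through an interface.
public interface IClock
{
    DateTime Now { get; }
}

public class Greeter
{
    private readonly IClock _clock;

    public Greeter(IClock clock)
    {
	_clock = clock;
    }

    public string Greet()
    {
	return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}

// Test code: the fake keeps the Unit Test from touching the real system.
public class FakeClock : IClock
{
    public DateTime Now { get; set; }
}

public class Greeter_Tests
{
    public void Greet_Morning_Test()
    {
	var greeter = new Greeter(new FakeClock { Now = new DateTime(2015, 12, 2, 9, 0, 0) });
	if (greeter.Greet() != "Good morning")
	    throw new Exception("Greet_Morning_Test failed");
    }

    public void Greet_Afternoon_Test()
    {
	var greeter = new Greeter(new FakeClock { Now = new DateTime(2015, 12, 2, 15, 0, 0) });
	if (greeter.Greet() != "Good afternoon")
	    throw new Exception("Greet_Afternoon_Test failed");
    }
}

class Program
{
    static void Main()
    {
	var tests = new Greeter_Tests();
	tests.Greet_Morning_Test();
	tests.Greet_Afternoon_Test();
	Console.WriteLine("All tests passed");
    }
}

In a real project the assertions would come from a test framework such as NUnit, MSTest or xUnit, and the fake could be generated with Moq or RhinoMocks instead of written by hand.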

I hope this list helps you.

02 Dec, 2015

Lambda Expression

  • Yogeshwar Singh Chauhan
  • Business,Company,Dot Net
  • no comments

Lambda. In lambda calculus, a function becomes a variable. Behavior is now just another unit of data. In programs we apply the => operator to indicate a lambda.

Syntax. To the left, we have arguments. The result is on the right. Often we pass lambda expressions as arguments, for sorting or for searching. We use them in queries.

An example. Perhaps the most common place to use lambdas is with List. Here we use FindIndex, which receives a Predicate method. We specify this as a lambda expression.

Based on: .NET 4.5

C# program that uses lambda, List

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
	List<int> elements = new List<int>() { 10, 20, 31, 40 };
	// ... Find index of first odd element.
	int oddIndex = elements.FindIndex(x => x % 2 != 0);
	Console.WriteLine(oddIndex);
    }
}

Output

2

Lambda details

x           The argument name.
=>          Separates argument list from result expression.
x % 2 != 0  Returns true if x is not even.

Detailed examples. We take a closer look at lambdas and anonymous functions. The => operator separates the parameters to a method from its statements in the method’s body.

Tip:Lambda expressions use the token => in an expression context. In this context, the token is not a comparison operator.

Goes to:The => operator can be read as “goes to.” It is always used when declaring a lambda expression.

Invoke:With Invoke, a method on Func and Action, we execute the methods in the lambdas.

C# program that uses lambda expressions

using System;

class Program
{
    static void Main()
    {
	//
	// Use implicitly typed lambda expression.
	// ... Assign it to a Func instance.
	//
	Func<int, int> func1 = x => x + 1;
	//
	// Use lambda expression with statement body.
	//
	Func<int, int> func2 = x => { return x + 1; };
	//
	// Use formal parameters with expression body.
	//
	Func<int, int> func3 = (int x) => x + 1;
	//
	// Use parameters with a statement body.
	//
	Func<int, int> func4 = (int x) => { return x + 1; };
	//
	// Use multiple parameters.
	//
	Func<int, int, int> func5 = (x, y) => x * y;
	//
	// Use no parameters in a lambda expression.
	//
	Action func6 = () => Console.WriteLine();
	//
	// Use delegate method expression.
	//
	Func<int, int> func7 = delegate(int x) { return x + 1; };
	//
	// Use delegate expression with no parameter list.
	//
	Func<int> func8 = delegate { return 1 + 1; };
	//
	// Invoke each of the lambda expressions and delegates we created.
	// ... The methods above are executed.
	//
	Console.WriteLine(func1.Invoke(1));
	Console.WriteLine(func2.Invoke(1));
	Console.WriteLine(func3.Invoke(1));
	Console.WriteLine(func4.Invoke(1));
	Console.WriteLine(func5.Invoke(2, 2));
	func6.Invoke();
	Console.WriteLine(func7.Invoke(1));
	Console.WriteLine(func8.Invoke());
    }
}

Output

2
2
2
2
4

2
2

A syntax review. Above we see many usages of lambda expressions. Sorry for the long example. The => operator separates arguments from methods. It does not compare numbers.

Left side:This is the parameter list. It can be empty. The parameter types can sometimes be implicit (inferred from the target delegate type).

Right side:This is a statement list inside curly brackets with a return statement, or an expression.
Func1 through func8. Above, func1 through func8 denote anonymous function instances. The C# compiler often turns different syntax forms into the same code.
Func. The key part of Func is that it returns a value. It can have zero, or many, arguments. But its invariant is a return value, indicated by the TResult parametric type.

Func examples

Func<TResult>              Has one result value, no parameter.
Func<T, TResult>           Has one result value, one parameter.
Func<T1, T2, TResult>      Has one result value, two parameters.
Func<T1, T2, T3, TResult>  ....

Action. This delegate type indicates a function that receives no parameters and returns no value. It matches a void method with no arguments. This guy is a solitary character.
Delegate. The delegate keyword denotes an anonymous function. After this keyword, we use a formal parameter list. We can omit the list if there are no parameters.
Anonymous functions. This term includes both delegate and lambda syntaxes. An anonymous function has no name. Perhaps it is running from the law.

Overloading:Because it has no name, method overloading is not possible for anonymous functions.

Note:Many developers regard lambda expressions as a complete improvement over (and replacement for) the delegate syntax.
Predicate. Here we use this type with an int parameter. With a lambda expression, we specify that the function returns true if the argument is equal to 5.

Invoke:In this program, the Invoke method is used to show that the Predicate works as expected.

C# program that uses Predicate

using System;

class Program
{
    static void Main()
    {
	Predicate<int> predicate = value => value == 5;
	Console.WriteLine(predicate.Invoke(4));
	Console.WriteLine(predicate.Invoke(5));
    }
}

Output

False
True

Comparison. This type is specifically used to compare objects. It is useful when calling the List.Sort or Array.Sort methods. It can be used with any object type.

Performance:Using methods such as List.Sort or Array.Sort (with a Comparison) is often faster than using LINQ to sort on a property.
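
A minimal sketch of sorting with a Comparison specified as a lambda (the list contents are arbitrary):

C# program that sorts with a Comparison lambda

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
	List<string> colors = new List<string>() { "magenta", "red", "green" };
	// The Comparison<string> lambda orders the strings by length.
	colors.Sort((a, b) => a.Length.CompareTo(b.Length));
	foreach (string color in colors)
	{
	    Console.WriteLine(color);
	}
    }
}

Output

red
green
magenta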
Events. Like any other method, event handlers can be specified as lambda expressions. With events, many handlers can be called when a certain thing happens. This can simplify some programs.
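
As a minimal sketch (the Saved event is hypothetical), handlers can be attached as lambda expressions:

C# program that attaches lambda event handlers

using System;

class Program
{
    // Hypothetical event, declared for illustration.
    static event EventHandler Saved;

    static void Main()
    {
	// Attach two handlers written as lambda expressions.
	Saved += (sender, args) => Console.WriteLine("First handler");
	Saved += (sender, args) => Console.WriteLine("Second handler");
	// Raise the event; both handlers run.
	Saved(null, EventArgs.Empty);
    }
}

Output

First handler
Second handler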
Performance. I benchmarked a lambda against an anonymous method, one using the delegate keyword. I used the functions as arguments to the Count() extension.

Result:I found no differences. The lambda expression performed the same as the explicit Func instance.

Thus:Lambda expressions cause no excess performance hit beyond other delegate syntaxes.

Locals used in benchmark: C#

int[] array = { 1 };
Func<int, bool> f = delegate(int x)
{
    return x == 1;
};

Lambda expression tested: C#

int c = array.Count(element => element == 1);

Delegate tested: C#

int c = array.Count(f);

Expression-bodied methods. A method can be specified with lambda expression syntax. We provide a method name, and the method is compiled like a lambda. A “return” statement is implicit.
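
A minimal sketch (requires C# 6 or later; the Square method is a hypothetical example):

C# program that uses an expression-bodied method

using System;

class Program
{
    // The method body is an expression; "return x * x" is implicit.
    static int Square(int x) => x * x;

    static void Main()
    {
	Console.WriteLine(Square(5));
    }
}

Output

25
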
Expressive power. Lambdas advance a language. We can achieve the same thing with regular, non-lambda methods. But they make a language easier to use, more “expressive.”

Higher-order procedures can serve as powerful abstraction mechanisms, vastly increasing the expressive power of our language.

Structure and Interpretation of Computer Programs
Specification. The C# language specification describes anonymous function types. The annotated edition of The C# Programming Language (3rd Edition) covers all syntaxes available.

Tip:We can find more detail on this topic using the precise technical terminology on page 314 of this book.

Boring:This is pretty boring. Proceed at your own risk. Unless you are thinking about making a C# website, it may not be worth the effort.
Some help. Lambdas have unique syntactic rules. We had some help from the C# specification itself. We used lambdas with zero, one or many arguments, and with a return value.
Anonymous functions. These have no names, but we learned lots of their details. With the delegate keyword, we also specify method objects.
