Fault Injection Testing

Any distributed application of reasonable complexity is fraught with numerous possible failure modes and fault domains. To make your application resilient to such failures, and to ensure that your fault-tolerant code works as expected, some degree of fault injection testing is a necessity. In this post I will describe the framework Microsoft provides for fault injection testing and apply it to test the idempotent semantics of the MongoDB command implementation outlined in the previous post. But before we go there, let's first try to answer the million-dollar question.

What is Fault Injection?

Fault Injection is a mechanism for artificially modifying the behavior of an application by simulating faults. It is very similar to "mocking", except that it is specifically designed to exercise your error handling logic. There are essentially two kinds of fault injection tests:

  • Tests which are designed to concentrate on a particular failure scenario
  • Tests which are chaotic and generate random failures to see if anything interesting happens


Microsoft provides a library called TestApi which comprises various testing utilities, including a Fault Injection framework. It's a powerful framework for injecting faults into managed code at runtime. The main artifact of the framework is the FaultRule, which comprises three things:

  • Method – tells us the method into which this fault will be injected when invoked
  • Condition – tells us the circumstance under which this rule is triggered, e.g., if called by X, or on the Nth call
  • Fault – tells us what to do when the fault condition is satisfied, e.g., throw an exception or return a given value

A collection of these fault rules forms a FaultSession or FaultScope (depending on the execution mode), which is applied to the Application Under Test (AUT) to perform fault injection testing. A good description of the framework can be found here. It is also important to note that the framework comes with a set of pre-baked fault conditions and faults (look for the BuiltInConditions and BuiltInFaults classes). But it is also flexible, in the sense that there is nothing stopping you from creating your own conditions and faults (by implementing the ICondition and IFault interfaces, respectively).
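To make the rule model concrete, here is a minimal sketch (in Python, with illustrative names rather than the actual TestApi API) of the Method/Condition/Fault triple: each rule wraps a target method so that its fault is raised whenever the condition holds for the current call.

```python
import functools

class FaultRule:
    """A fault rule: which method, when to trigger, and what fault to raise."""
    def __init__(self, method_name, condition, fault):
        self.method_name = method_name
        self.condition = condition  # callable(call_count) -> bool
        self.fault = fault          # exception instance to raise

def inject_faults(obj, rules):
    """Wrap each targeted method so its rule's fault fires when the condition holds."""
    for rule in rules:
        original = getattr(obj, rule.method_name)
        state = {"calls": 0}

        @functools.wraps(original)
        def wrapper(*args, _original=original, _rule=rule, _state=state, **kwargs):
            _state["calls"] += 1
            if _rule.condition(_state["calls"]):
                raise _rule.fault
            return _original(*args, **kwargs)

        setattr(obj, rule.method_name, wrapper)

# Example: fail the first two calls to Repo.query, then let calls succeed.
class Repo:
    def query(self):
        return "data"

repo = Repo()
inject_faults(repo, [FaultRule("query", lambda n: n <= 2, TimeoutError("transient"))])
```

The condition here counts calls, but any predicate works (random trigger, caller-based, composite), which is exactly the flexibility the ICondition/IFault split gives TestApi.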

This is great stuff, but if you want to use this framework for testing your Mongo command implementation, there is some additional work to be done. So, I have created a framework which builds on top of TestApi and is available on GitHub as part of the Mongo framework outlined in the previous post. It comprises the following main artifacts:

  • TestMongoRepository, a decorator of DefaultMongoRepository (which implements IMongoRepository) that injects faults when a fault condition is met.
  • Helper methods (DynamicOverride and DynamicRestore) to plug in the TestMongoRepository during test execution.
  • IFaultInjectionTestScenario which encapsulates a fault scenario and provides a way to
    • Set up the fault rules which need to be triggered to test a particular scenario.
    • Assert that the expected faults were actually triggered, so that we don't get false positives.
  • IFaultInjectionTest which identifies a fault injection test and is essentially used to capture fault events which get thrown as faults are triggered.
  • FaultInjection, a scope class which binds the IFaultInjectionTestScenario and IFaultInjectionTest together and provides the boilerplate code to set up the rules, run the scenario and assert the faults.
  • Custom conditions, because the built-in conditions are not sufficient (for example: trigger randomly, trigger for the first N calls, composite triggers).
  • Custom Faults to simulate various Mongo transient faults.

In addition to the above, you will also need to create a TestBase class for each of your data access interfaces; the specifics are covered below.


The example scenario is that we need to implement a fault injection test for the ComputeTagsCommand we implemented in the previous post. To refresh your memory, we used the Read-Modify-Write pattern to make the implementation idempotent, and now our goal is to ensure that it works as expected under failure scenarios (especially retries).
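As a refresher, the idea behind the pattern can be sketched generically (Python, with a toy in-memory versioned store standing in for Mongo): read the document and its version, compute the new state deterministically, and write only if the version is unchanged. Because the modification is deterministic, replaying the whole command after a transient fault converges to the same result.

```python
class CasConflict(Exception):
    pass

class Store:
    """Toy versioned document store standing in for a Mongo collection."""
    def __init__(self, doc):
        self.doc = dict(doc)
        self.version = 0

    def read(self):
        return dict(self.doc), self.version

    def write_if_version(self, doc, expected_version):
        if self.version != expected_version:
            raise CasConflict()
        self.doc = dict(doc)
        self.version += 1

def compute_tags(store, max_attempts=5):
    """Idempotent read-modify-write: safe to replay after transient faults."""
    for _ in range(max_attempts):
        doc, version = store.read()                              # Read
        doc["tags"] = sorted(set(doc["title"].lower().split()))  # Modify (deterministic)
        try:
            store.write_if_version(doc, version)                 # Write (compare-and-swap)
            return
        except CasConflict:
            continue  # a concurrent writer won the race; re-read and retry
    raise RuntimeError("exhausted retries")
```

Running `compute_tags` twice leaves the document in the same state as running it once, which is precisely the property the fault injection test below is meant to verify under failures.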

The first step is to create a test base class (PostFaultTestBase) which will be used for all fault injection tests related to IPostDataAccess. The base class will have the common logic to override IMongoRepository to use TestMongoRepository (so that we can inject faults) during test setup and restore it to the original during test teardown. Also, it will capture all the fault events which TestMongoRepository throws when faults are triggered (this will be used to ensure that expected faults were actually thrown). Implementation of this class can be found on GitHub.

Follow these steps to create a fault injection test:

  1. Derive your test class from PostFaultTestBase
    public class PostFaultInjectionTests : PostFaultTestBase
  2. Define your FaultInjectionTestScenario which provides the way to configure the FaultRules (SetupRules) and validate that faults occurred as per the rule specification (AssertFaults)
    class RandomlyFaultQueryAndUpdatesTestScenario : BaseFaultInjectionTestScenario
    {
        private readonly System.Exception expectedFault = MongoTransientFaults.GetRandomTransientFault();

        public override void SetupRules()
        {
            int maxQueryFailures = RANDOM.Next(1, MAX_ATTEMPTS);
            int maxUpdateFailures = MAX_ATTEMPTS - maxQueryFailures;
            Rules.Add(new FaultRule(MongoRepositoryHelper.Method.SINGLE,
                                    CustomConditions.TriggerRandomly(1, maxQueryFailures),
                                    BuiltInFaults.ThrowExceptionFault(expectedFault)));
            Rules.Add(new FaultRule(MongoRepositoryHelper.Method.UPDATE_CAS,
                                    CustomConditions.TriggerRandomly(1, maxUpdateFailures),
                                    BuiltInFaults.ThrowExceptionFault(expectedFault)));
        }

        public override void AssertFaults()
        {
            Assert.IsTrue(Events.Count > 0, "Make sure there are no false positives");
            Assert.IsTrue(Events.All(e => e.IsException));
            Assert.IsTrue(Events.All(e => e.Exception == expectedFault));
        }
    }
  3. Write your tests as normal, with the assertion logic. The only change you need to make is to wrap the method under test inside a FaultInjection scope and provide the FI test scenario.
    public void TestComputeTagsWithFault()
    {
        // Compute tags for posts while faults are being injected
        using (new FaultInjection.FaultInjection(new RandomlyFaultQueryAndUpdatesTestScenario(), this))
        {
            PostAccess.ComputeTags(new[] { post1.Title });
        }
    }

That is it, my friend; it's not as complicated as it sounds. The complete implementation of this test can be found on GitHub.


MongoDB Client Framework for .Net

10gen has an official Mongo C# driver which provides an API for basic Mongo interaction and excellent LINQ support. However, a lot more goes into writing Data Access Layer (DAL) code, which typically includes constructs like Data Access Objects, the Repository pattern, ORM configuration, transient fault handling and so on. This is where Mongo.Framework tries to bridge the gap and provide the patterns that have become industry standard.


Mongo.Framework builds on top of 10gen's official Mongo C# driver and provides standard DAL patterns and constructs that enable rapid application development.

The framework provides the following capabilities:

  • Constructs for Data Access Layer (DAL) abstraction
  • Repository pattern to interface with Mongo driver
  • Standardized pattern to define Object Model, its ORM and versioning scheme
  • Transient fault handling and configurable retry logic using the Enterprise Library
  • Command pattern to provide stored procedure like semantics
  • Constructs to enable ACID semantics using a Compare-And-Swap (CAS) approach
  • Constructs to generate various hashes (MD5, Murmur2)
  • Constructs to generate identifiers similar to Identity column in SQL
  • Fault injection framework to test ACID semantics of commands under faults

The source code is available on GitHub.


This section explains the design of the framework and elaborates upon some of the key constructs and design philosophies.

Data Access

The centerpiece of the Mongo framework is the MongoDataAccess class. This class is responsible for executing Mongo commands and providing transient fault handling via configurable retry logic. It is injected with an IMongoDataAccessAbstractFactory which provides access to various factories (each of which is covered in the sections below). The purpose of this class is to serve as the base class for concrete implementations of Data Access Objects (DAOs), enabling the derived class to leverage all the boilerplate code MongoDataAccess provides.


MongoCommand in turn uses another abstraction layer called MongoRepository to interface with the underlying Mongo cluster. The rationale for this abstraction is explained in the "Mongo Repository" section below. For now, remember that it contains Mongo-specific CRUD operations on entities.

Mongo Command

MongoCommand plays the same role that a Stored Procedure plays in the SQL world. The purpose of this class is to provide the smallest unit of transaction. Generally, for each method in the DAO interface a corresponding command implementation is created. It is important to note that in case of transient faults the whole command is replayed, which leads to interesting dynamics that will be covered in future posts.


BaseMongoCommand implements most of the common command logic and provides utility methods for concrete command implementations. It internally has two abstract template methods (ExecuteInternal and ValidateInternal), which are all that a concrete command needs to implement.
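The shape of this template-method design can be sketched generically (Python, with illustrative names): the base class owns the execution skeleton, and a concrete command supplies only its validation and its actual work.

```python
from abc import ABC, abstractmethod

class BaseCommand(ABC):
    """Execution skeleton: the base class sequences validate-then-execute;
    subclasses fill in the two template hooks."""
    def execute(self):
        self.validate_internal()        # template hook 1: precondition checks
        return self.execute_internal()  # template hook 2: the actual work

    @abstractmethod
    def validate_internal(self): ...

    @abstractmethod
    def execute_internal(self): ...

class CreatePostCommand(BaseCommand):
    """A concrete command: validates input, then performs the unit of work."""
    def __init__(self, post):
        self.post = post

    def validate_internal(self):
        if not self.post.get("title"):
            raise ValueError("post needs a title")

    def execute_internal(self):
        # insert into the store here; return a success code
        return 0
```

Because the skeleton lives in one place, cross-cutting concerns (retry, logging, fault injection hooks) wrap every command uniformly.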

Mongo Repository

IMongoRepository is based on the Repository pattern, which abstracts the CRUD operations specific to Mongo. This additional layer provides the flexibility to use different Mongo driver/connector implementations. The "Default" implementation is based on 10gen's official C# driver. Another important reason for putting this abstraction in place is to allow mocking of the IMongoRepository interface and simulating faults to test the transient fault handling/retry logic.


The repository interface is generic, and its type parameter is the data model object (a.k.a. entity) which represents a document in a Mongo collection. This implies that for each collection within Mongo there will be a corresponding model object in the DAL, and the DAL will access that collection by creating a MongoRepository instance with that model object as the type parameter.

var post = new Post()
{
    Title = "My First Post",
    Body = "This isn't a very long post.",
    Tags = new List<string>(),
    Comments = new List<Comment>
    {
        new Comment() { TimePosted = new DateTime(2013, 1, 3),
                        Email = "jackie@gmail.com",
                        Body = "Why are you wasting your time!" }
    }
};

Transient Fault Handling

Transient faults, as the name implies, are temporary failure scenarios (like network glitches, the MongoDB primary going down, etc.) from which the system can recover within a short period of time. Any seasoned developer knows that one of the most important considerations while writing fault-tolerant code is retry logic to handle transient faults. It is unfortunate that the 10gen driver doesn't provide this out of the box. But don't sweat it: the framework is your savior.


The framework provides transient fault handling by leveraging the Enterprise Library Azure Integration Pack.

  • The Enterprise Library provides a RetryPolicy interface which comprises an ITransientErrorDetectionStrategy (to detect whether a given exception is transient or not) and a RetryStrategy (which specifies the retry logic – fixed interval, incremental or exponential backoff).
  • The framework provides its own implementation for ITransientErrorDetectionStrategy to detect various Mongo driver and DB specific transient fault conditions.
  • The framework currently uses a FixedInterval-based RetryStrategy, but it can easily be changed to incremental or exponential backoff.
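The mechanics the bullets above describe boil down to the following minimal sketch (Python, with illustrative names rather than the Enterprise Library API): a fixed-interval retry loop gated by a transient-error detection strategy.

```python
import time

class FixedIntervalRetry:
    """Retry a callable up to max_attempts, sleeping `interval` seconds between
    tries, but only for exceptions the detection strategy deems transient."""
    def __init__(self, is_transient, max_attempts=3, interval=0.0):
        self.is_transient = is_transient  # the "detection strategy" hook
        self.max_attempts = max_attempts
        self.interval = interval          # the fixed-interval "retry strategy"

    def execute(self, action):
        for attempt in range(1, self.max_attempts + 1):
            try:
                return action()
            except Exception as e:
                if attempt == self.max_attempts or not self.is_transient(e):
                    raise  # permanent fault, or retries exhausted: surface it
                time.sleep(self.interval)

# Example detection strategy: treat timeouts as transient.
def is_transient(e):
    return isinstance(e, TimeoutError)
```

Splitting "is this worth retrying?" from "how long do I wait?" is what lets the framework swap FixedInterval for incremental or exponential backoff without touching the Mongo-specific detection logic.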

ID Generation

Mongo has no concept of an Identity column, so we have to provide our own implementation in order to achieve the same effect. The approach used for id generation is to create an "Identity" collection which contains the last id for each type, as shown below. The FindAndModify method is then used to atomically update the id of a given type.

{ "_id" : "Server", "seq" : 736 }
{ "_id" : "Share", "seq" : 1185 }


The implementation is wrapped into the IIdGenerator and IMongoIdGeneratorFactory interfaces to have the necessary abstractions in place for future enhancements and to enable mocking for testing purposes.
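The underlying pattern is easy to sketch (Python, with a thread-safe in-memory counter standing in for the FindAndModify update on the Identity collection): one counter per type, atomically incremented and returned on each allocation.

```python
import threading

class IdGenerator:
    """Per-type monotonically increasing ids, like the 'Identity' collection:
    each get_next_id atomically increments and returns the counter for a type."""
    def __init__(self):
        self._lock = threading.Lock()
        self._seq = {}  # e.g. {"Server": 736, "Share": 1185}

    def get_next_id(self, type_name):
        # The lock plays the role FindAndModify plays server-side: the
        # read-increment-write is one atomic step, so no id is handed out twice.
        with self._lock:
            self._seq[type_name] = self._seq.get(type_name, 0) + 1
            return self._seq[type_name]
```

The key property is that increment-and-read is a single atomic operation; doing a separate read followed by a write would let two concurrent callers receive the same id.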

Hash Generation

Hash generation provides the ability to generate a hash code (MD5, SHA1, etc.) of a given string. This is required for generating hash fields, which are then used as shard keys for random distribution of documents. The rationale for wrapping this into various interfaces is the same as that for id generation.

The framework currently supports MD5 and Murmur2 hash codes.
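For illustration, deriving a hash field from a string with MD5 looks like this (Python's hashlib here; the framework's own helpers sit behind the abstractions mentioned above):

```python
import hashlib

def md5_hash_field(value: str) -> str:
    """Hex MD5 digest of a string, suitable as a hash field for a shard key:
    it spreads lexicographically close keys uniformly across the hash space."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

# A post title becomes a randomly distributed shard-key value.
shard_key = md5_hash_field("My First Post")
```

Because similar titles produce completely different digests, documents land on shards uniformly instead of clustering by prefix.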

Data Model

Last but not least is the construction of the data model used to serialize and deserialize data to and from MongoDB. A data model is basically a class which is the application-code counterpart of a document in Mongo.

For every Mongo collection you want to access in the DAL you need to create a corresponding data model class. This model will be the type parameter that you pass to the IMongoRepository interface to access and manipulate the collection. It is important to note that the 10gen C# driver takes care of all the ORM for us; the only thing we need to do is correctly set up the class mapping (i.e., which field in the data model maps to which field in the Mongo collection).

Following are the design principles you need to adhere to when creating the data model:

  1. The collection name is specified by setting the MongoCollectionName attribute on the model class.
  2. Derive the class from IVersionedEntity to enable schema versioning (if desired). I will explain in later posts how this can be used for schema evolution.
  3. Declare all the data fields as properties in the "Data Model" region.
  4. Declare all the object mappings in the static constructor of the top-level data model class.
    • Mappings for any dependent/contained classes need to be defined in the top-level data model class only.
    • Use SetIgnoreExtraElements to ensure backward compatibility.
    • Declare all the field names as const variables in the "Field Names" region so that the same names can be used during query construction to ensure consistency.
    • It is important to note that MongoRepository invokes the static constructor of the data model at the appropriate time to register the mapping at runtime.
  5. Declare all the adapter logic to convert the data model to/from VOs or DTOs in the "Adapter interface to convert to/from model" region.

Following is an example data model for illustration:

public class Post
{
    #region Field names

    public const string COLLECTION_NAME = "post";
    public const string FN_TITLE = "title";
    public const string FN_COMMENTS = "comments";
    public const string FN_COMMENT_TIME = "time";

    #endregion

    #region Data model

    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public IList<Comment> Comments { get; set; }

    #endregion

    #region Object mapping

    static Post()
    {
        // Set up the mapping from collection fields to model data members
        BsonClassMap.RegisterClassMap<Post>(cm =>
        {
            cm.SetIgnoreExtraElements(true); // For backward compatibility
            cm.SetIdMember(cm.GetMemberMap(c => c.Id));
            cm.GetMemberMap(c => c.Title).SetElementName(FN_TITLE);
            cm.GetMemberMap(c => c.Comments).SetElementName(FN_COMMENTS);
        });

        BsonClassMap.RegisterClassMap<Comment>(cm =>
        {
            cm.SetIgnoreExtraElements(true); // For backward compatibility
            cm.GetMemberMap(c => c.TimePosted)
              .SetElementName(FN_COMMENT_TIME)
              .SetSerializationOptions(new DateTimeSerializationOptions(DateTimeKind.Utc));
        });
    }

    #endregion
}

public class Comment
{
    public DateTime TimePosted { get; set; }
    public string Email { get; set; }
    public string Body { get; set; }
}


This section will elaborate upon how to actually use the framework with the help of an example. The complete reference implementation of this example is available on GitHub in Mongo.Framework.Example project.

Example Use-Case

Your company is planning to create yet another blogging platform. But this one is going to be different :-). It will change the whole blogging space once and for all.

The product manager has come up with the following amazing ideas:

  • Each blog post will consist of title and body
  • Users will be able to comment on posts
  • Each comment will consist of the time it was posted, the email of the commenter and the actual text

The CTO, after a lot of thinking, has decided to use Mongo as the persistence store and has tasked you with creating the DAL with the following API requirements:

  • Ability to create a new post
  • Ability to get all the posts
  • Ability to get the posts which were commented on by a given person
  • Ability to delete posts

You are super excited and want to show the world how awesome you are by implementing this in less than a day. And you really are awesome, because you decide to use Mongo.Framework.


  1. Once you are done setting up the DAL project (and including Mongo.Framework), the first thing you need to do is define the contract which maps the above requirements into an interface. That is, you need to create the DAO interface.
    public interface IPostDataAccess
    {
       void CreatePost(Post post);
       Post[] GetPosts();
       string[] FindPostWithCommentsBy(string commenter);
       void DeleteAllPost();
    }
  2. Implement the concrete data access class, deriving from MongoDataAccess to get all the goodies.
    class PostDataAccess : MongoDataAccess, IPostDataAccess
    {
      public PostDataAccess(ConnectionInfo connectionInfo, IMongoDataAccessAbstractFactory factories)
        : base(connectionInfo, factories) { }

      public PostDataAccess(ConnectionInfo connectionInfo)
        : this(connectionInfo, new DefaultMongoAbstractFactory(connectionInfo)) { }

      public void CreatePost(Post post)
      {
         throw new NotImplementedException();
      }
    }
  3. Implement the data model following the guidelines described in the Theory section. Conveniently, the example data model illustrated in that section applies to this use case.
  4. For each of the methods in the DAO implement a corresponding command class. Derive the command class from BaseMongoCommand to get all the boilerplate code.
    class CreatePostCommand : BaseMongoCommand
    {
      public Post PostToCreate { get; set; }

      protected override object ExecuteInternal()
      {
        // ... insert the post via the repository ...
        return RETURN_CODE_SUCCESS;
      }
    }
  5. Glue the command to the DAO by invoking the command in the corresponding method.
    class PostDataAccess : MongoDataAccess, IPostDataAccess
    {
       public void CreatePost(Post post)
       {
          var command = new CreatePostCommand() { PostToCreate = post };
          // ... execute the command ...
       }
    }
  6. Add the connection string for the Mongo instance in your app.config file.
         <add name="PostsDB"
              connectionString="server=;database=posts;journal=true;w=majority;" />
  7. That is it, my friend. Now you are ready to use your newly minted DAL code.
    // Instantiate the data access for posts
    var connectionInfo = ConnectionInfo.FromAppConfig("PostsDB");
    var postDataAccess = new PostDataAccess(connectionInfo);

    var post = new Post()
    {
       Title = "My Second Post",
       Body = "This isn't a very long post either.",
       Tags = new List<string>(),
       Comments = new List<Comment>()
    };

    // Insert the post into the DB
    postDataAccess.CreatePost(post);

In future posts, I will explain some of the more advanced scenarios where the framework really shines, like the CAS approach for atomic transaction guarantees and fault injection tests to ensure ACID semantics under transient faults.