Suppose I have these objects in my DAL (ORM entities, etc.):
public class Student
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
    public Parent Parent { get; set; }
}

public class Parent
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}
And I have a ViewModel that looks like this:

public class StudentDetailVM
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
    public string ParentName { get; set; }
    public string ParentPhone { get; set; }
}
In that case I need to flatten the objects. I can do this with a tool like AutoMapper or ValueInjecter, or I can do it manually. The manual route is tedious when there are many such classes to handle, but there appears to be a performance / developer-efficiency tradeoff among the three approaches.
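For the flattening itself, AutoMapper's naming convention does most of the work: a destination member like ParentName is resolved from Parent.Name automatically. A minimal sketch (using the instance-based MapperConfiguration API; treat the version details and the `student` variable as assumptions):

using AutoMapper;

// ParentName/ParentPhone are flattened from Parent.Name/Parent.Phone
// by AutoMapper's convention; no per-member configuration needed.
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Student, StudentDetailVM>());
var mapper = config.CreateMapper();

// student is an existing Student instance from the DAL
StudentDetailVM vm = mapper.Map<StudentDetailVM>(student);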
I'm looking for guidance on when to use AutoMapper vs. ValueInjecter vs. manual mapping. I assume manual mapping is the fastest, but by how much?
Are some scenarios much slower or faster than others (e.g., flattening)?
Would a hybrid approach to mapping objects between layers make sense?
The reason I ask is that a CodePlex project called EmitMapper was created to address performance issues in AutoMapper, and I remember seeing a comment that said AutoMapper may take up to 0.5 ms to map a large class. (reference needed)
I also remember seeing an article claiming that users are more likely to stay on your site if it loads within 70 ms rather than 90 ms or more (I'm looking for this link too). If AutoMapper, on top of network latency, is consuming most of my page-load time, then I see a case for dropping it on my high-volume pages, writing manual mappings there, and sticking with a hybrid approach overall.
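For reference, the manual alternative for a hot path is straightforward. A sketch, given the classes above (ToDetailVM is a hypothetical name I'm using for illustration):

public static class StudentMappings
{
    // Hand-written flattening: plain property assignments,
    // no reflection, so overhead is essentially zero.
    public static StudentDetailVM ToDetailVM(this Student s)
    {
        return new StudentDetailVM
        {
            Name = s.Name,
            Address = s.Address,
            Phone = s.Phone,
            ParentName = s.Parent == null ? null : s.Parent.Name,
            ParentPhone = s.Parent == null ? null : s.Parent.Phone
        };
    }
}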
Bottom line: I would run the tests myself, but I don't know enough about .NET internals to create accurate results that can be used as a reusable guideline.
You don't need to know .NET internals. You just need to know what your performance requirements are and what your typical usage is going to look like. Profile the code under a typical usage scenario with each of the candidate approaches (a rough harness is sketched below), and choose the one that meets your performance requirements and is easiest to maintain (i.e., don't necessarily choose the most performant; there are other criteria).
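For example, a rough Stopwatch harness along these lines is enough to get ballpark numbers; this is a sketch, not a rigorous benchmark, and results will vary by machine and mapper version:

using System;
using System.Diagnostics;

const int Iterations = 100000;

// Warm up first so JIT compilation (and any mapper caches)
// don't skew the measurement.
var warmup = student.ToDetailVM();

var sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
    // Swap in mapper.Map<StudentDetailVM>(student) to compare approaches.
    var vm = student.ToDetailVM();
}
sw.Stop();

Console.WriteLine("{0:F4} ms per map", sw.Elapsed.TotalMilliseconds / Iterations);

Run each candidate under realistic load (real object graphs, concurrent requests) before drawing conclusions; per-call microseconds often disappear next to network latency and database time.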