Thursday, May 6, 2010

Building a better dynamic! .NET 4.0

So, you installed .NET 4.0 and think some of the new features are pretty cool... but you have to wonder what cool new things you can do beyond the "var" equivalent in JavaScript.  (!!!Disclaimer!!! var in .NET is not even remotely the same as var in JavaScript.  Dynamic objects in C# are created using the "dynamic" keyword, NOT the var keyword.  The var keyword is resolved at compile time in C#!  I had to say it... all the other cool kids are.  ;-)  )
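To illustrate that disclaimer, here is a minimal sketch of the difference. The `NoSuchMethod` call is made up purely to show the runtime failure mode:

```csharp
using System;
using Microsoft.CSharp.RuntimeBinder;

class VarVersusDynamic
{
    static void Main()
    {
        // var: the compiler infers "string" at COMPILE time; s is statically typed.
        var s = "hello";
        // s = 42;  // would not compile: cannot convert int to string

        // dynamic: member lookup and type checks are deferred until RUN time.
        dynamic d = "hello";
        d = 42;                    // perfectly legal; the static type is "dynamic"
        Console.WriteLine(d + 1);  // binds to int addition at run time, prints 43

        try
        {
            d.NoSuchMethod();      // compiles fine, fails only when executed
        }
        catch (RuntimeBinderException)
        {
            Console.WriteLine("No such method on int, caught at run time.");
        }
    }
}
```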


Batter up, who wants to build a BadAssDynamic
Now I know everyone hates what I named this and your boss is looking at your computer screen, but to protect an innocent colleague of mine *cough*Tim*cough* and his love for the Power Rangers, I will not use the other cool name I used in a real project.


First up:   How often have you really wanted to access a property at runtime by its string name?  Wouldn't it have been nice if you could have done:
dynamic obj = new CustomObjectTypePerhapsAnEntity();
obj["PropertyName"] = value;


Well, that is what I am here to show you how to do today.  Let's take a look at some code!




using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;


namespace NOC.Enterprise.Common.NetExtensions
{
    public class BadAssDynamic
    {
        protected Dictionary<string, Property> properties;
        protected dynamic obj;


        // Indexer: read or write any public instance property by its string name.
        // Throws KeyNotFoundException if no such property exists.
        public dynamic this[string i]
        {
            get
            {
                return properties[i].Value;
            }
            set
            {
                properties[i].Value = value;
            }
        }


        private BadAssDynamic() { }
        public BadAssDynamic(dynamic d)
        {
            obj = d;
            Type t = d.GetType();


            this.SeedProperties(t);
        }


        public BadAssDynamic(Type t)
        {
            // SingleOrDefault (not Single) so we get null, instead of an exception,
            // when no parameterless constructor exists.
            var constructor = t.GetConstructors()
                .SingleOrDefault(x => x.GetParameters().Length == 0);


            if (constructor == null)
                throw new ArgumentException("BadAssDynamic(..) requires the type to contain a parameterless constructor.");


            obj = constructor.Invoke(new object[] { });


            this.SeedProperties(t);
        }


        private void SeedProperties(Type t)
        {
            properties = new Dictionary<string, Property>();
            foreach (PropertyInfo p in t.GetProperties(BindingFlags.Instance | BindingFlags.Public))
            {
                properties.Add(p.Name, new Property(this, p));
            }
        }


        public dynamic GetActualObject()
        {
            return obj;
        }


        public class Property
        {
            protected BadAssDynamic _dynamic;
            protected PropertyInfo _pi;


            public Property(BadAssDynamic badAssDynamic, PropertyInfo pi)
            {
                _dynamic = badAssDynamic;
                _pi = pi;
            }


            public dynamic Value
            {
                get { return _pi.GetValue(_dynamic.obj, null); }
                set { _pi.SetValue(_dynamic.obj, value, null); }
            }
        }
    }
}






Whoa... lots of code, and yes, this time straight from my IDE.


So what does this do? 


First, the above code was not entirely possible until the addition of "dynamic" in .NET 4.0.  It still uses some old-school reflection despite having the "dynamic" keyword, because Microsoft only gave us a brain-dead object.


The BadAssDynamic has a nested class, "Property", which, due to another brain-dead move by M$, must hold a manual reference to its parent instance.  It is essentially a wrapper around the PropertyInfo object; you could live without it if you truly desire.




        private void SeedProperties(Type t)
        {
            properties = new Dictionary<string, Property>();
            foreach (PropertyInfo p in t.GetProperties(BindingFlags.Instance | BindingFlags.Public))
            {
                properties.Add(p.Name, new Property(this, p));
            }
        }




The above code is the magic that drives this object.  Upon initialization, the BadAssDynamic populates a Dictionary<string, Property> with every public instance property belonging to the object.  The dictionary is then accessed through the indexer:




        public dynamic this[string i]
        {
            get
            {
                return properties[i].Value;
            }
            set
            {
                properties[i].Value = value;
            }
        }

This is the core functionality of the BadAssDynamic.  You might not know of a practical use for this yet... BUT next blog post I'll show you some crazy uses for this object.
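To make the idea concrete, here is a minimal usage sketch. It assumes the BadAssDynamic class above is in scope; the Person class is a made-up example entity:

```csharp
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Wrap an existing instance...
        var wrapper = new BadAssDynamic(new Person { Name = "Tim", Age = 30 });

        // ...and read/write its properties by string name at runtime.
        wrapper["Name"] = "Aqua";
        System.Console.WriteLine(wrapper["Name"]);  // prints "Aqua"
        System.Console.WriteLine(wrapper["Age"]);   // prints 30

        // Or let the wrapper construct the object for you from a Type.
        var wrapper2 = new BadAssDynamic(typeof(Person));
        wrapper2["Age"] = 42;
        Person actual = wrapper2.GetActualObject();
        System.Console.WriteLine(actual.Age);       // prints 42
    }
}
```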

If you are looking for IT solutions for your business, would like to provide feedback, or want to give us a referral for new business opportunities (we provide profit sharing on referrals), you can visit our website at www.NightOwlCoders.com.  We service all of the US for web site design, and offer consulting services throughout Pittsburgh, Butler, Cranberry, and New Castle, PA, covering Lawrence, Butler, Mercer, and Allegheny Counties.

~Aqua~

Wednesday, April 7, 2010

Now what do I do with these detached objects!?

Introduction
So I know all of you who actually read my initial post are thinking, "Great... Now I have detached objects.  I spent an hour making the modifications you told me to, but I still can't do anything with them!"  So take a deep breath and let's cover how to save your detached Linq 2 SQL entities.


First, let's take advantage of those partial L2S Entities
Some of you know what partial classes are; some of you may not.  So let's begin there.


"A partial class, or partial type, is a feature of some object oriented computer programming languages in which the declaration of a class may be split across multiple source-code files, or multiple places within a single file."  You can read more here: http://en.wikipedia.org/wiki/Partial_Classes


.NET requires that the partial declarations share the same class name and namespace.  So let's go back to our User example.


User
int UserID
EntityRef<UserType> _UserType
EntityRef<Country> _Country
EntitySet<Address> Addresses


Your definition for this class likely looks similar to:  

[Table(Name="dbo.USER")]
[DataContract()]
public partial class USER : INotifyPropertyChanging, INotifyPropertyChanged {.......

So my suggestion: create a *.cs file named something along the lines of "EntityExtensions.cs".

I know we already have a class taking advantage of this if you are following from yesterday's post; however, let's reiterate what we have.








public partial class User
{
  public void DetachAll()
  {
     this._Country = default(EntityRef<Country>);
     this._UserType = default(EntityRef<UserType>);
     if(this._Addresses.HasLoadedOrAssignedValues)
        foreach(Address a in this._Addresses)
           a.DetachAll();
  }

  //Backing field renamed so it does not collide with the method below;
  //C# will not compile a field and a method sharing the same name.
  private bool _isMarkedForDeletion;

  public void MarkForDeletion()
  {
    _isMarkedForDeletion = true;
  }

  public bool IsMarkedForDeletion()
  {
    return _isMarkedForDeletion;
  }

  public bool IsNew()
  {
    return this._UserID <= 0;
  }
}

Ok, now let's explain the two latest additions.  IsMarkedForDeletion is a handy way of deleting attached objects in one submit, and IsNew is a handy way of inserting newly attached objects in one submit as well.  The same members should be added to the Address class.
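The contract those two helpers give you can be seen with a quick sketch, assuming the User partial above sits on top of the generated L2S entity (where `_UserID` defaults to 0 for a fresh object):

```csharp
// Fresh object: primary key defaults to 0, so IsNew() reports true.
User u = new User();
System.Console.WriteLine(u.IsNew());               // prints True  (UserID <= 0)
System.Console.WriteLine(u.IsMarkedForDeletion()); // prints False

// Flag it; nothing touches the database until SubmitChanges runs.
u.MarkForDeletion();
System.Console.WriteLine(u.IsMarkedForDeletion()); // prints True
```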

Next we need to know how to save these detached objects as efficiently as possible.  The following code would be in your data access layer.



public USER DeepSave(USER obj)
{
     using(YourDataContextObject data = new YourDataContextObject())
     {
          obj.DetachAll();

          if(obj.IsNew())
          {
               //Check the results; all attached objects are submitted on this single submit.
               data.USERs.InsertOnSubmit(obj);
               data.SubmitChanges();
          }
          else
          {
               data.USERs.Attach(obj, true);

               if(obj.ADDRESSes.HasLoadedOrAssignedValues)
               {
                    data.ADDRESSes.AttachAll(obj.ADDRESSes.Where(x => !x.IsNew() && !x.IsMarkedForDeletion()));
                    data.ADDRESSes.InsertAllOnSubmit(obj.ADDRESSes.Where(x => x.IsNew()));
                    data.ADDRESSes.DeleteAllOnSubmit(obj.ADDRESSes.Where(x => x.IsMarkedForDeletion()));
               }

               try
               {
                    data.SubmitChanges();
               }
               catch(ChangeConflictException ex)
               {
                    //Do something to handle change conflicts due to optimistic concurrency.
               }
          }
     }

     //Now here comes the hack to detach a saved object: you can do this by shoving the object into a
     //serialized state, or do what I do below.  This is a double database hit, HOWEVER it is less painful than
     //some of the other options.
     using(YourDataContextObject data = new YourDataContextObject())
     {
          data.ObjectTrackingEnabled = false;

          if(obj.ADDRESSes.HasLoadedOrAssignedValues)
          {
               DataLoadOptions dlo = new DataLoadOptions();
               dlo.LoadWith<USER>(x => x.ADDRESSes);
               data.LoadOptions = dlo;
          }

          obj = data.USERs.Single(x => x.UserID == obj.UserID);
     }

     //Hand the freshly detached object back to the caller.
     return obj;
}


In Summary
Now you can save and re-save your object and its associated children as many times as you want.  The magic is in detaching the object from its data context; that is what lets optimistic concurrency do its job.  The only thing to watch out for: when a RowVersion field does not match the RowVersion of the existing row, SubmitChanges will throw a ChangeConflictException.  This can be handled any number of ways; most people choose to throw the error back to the UI and tell the user they need to refresh the record.  If you want to get more creative, you can enumerate the change conflicts, tell exactly which fields the conflict occurred on, and do a smart merge yourself.
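For the creative route, the DataContext exposes its conflicts after a failed SubmitChanges.  Here is a minimal sketch using the System.Data.Linq conflict API; `data` stands in for your DataContext from the save method above:

```csharp
using System;
using System.Data.Linq;

// Inside your save method, around the SubmitChanges call:
try
{
    // ContinueOnConflict collects every conflict instead of stopping at the first.
    data.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict conflict in data.ChangeConflicts)
    {
        // Each MemberChangeConflict tells you exactly which property clashed.
        foreach (MemberChangeConflict member in conflict.MemberConflicts)
        {
            Console.WriteLine("{0}: yours='{1}', database='{2}'",
                member.Member.Name, member.CurrentValue, member.DatabaseValue);
        }

        // Pick a resolution strategy; KeepChanges merges your edits over theirs.
        conflict.Resolve(RefreshMode.KeepChanges);
    }

    data.SubmitChanges();  // retry after resolving
}
```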

I hope you have enjoyed part 2 (the final part) of how to handle n-tier and Linq.  Not sure what I'll post next, but it will be worthwhile to check back!


~Aqua~


Tuesday, April 6, 2010

N-Tier, Entities, Linq, and Optimistic Concurrency... OH MY!

Introduction
Everyone knows that when developing an enterprise level software solution it is best to layer your application to keep core logic centralized and abstracted away from your presentation layer, and direct access to the storage device of your choice abstracted into its own layer.

If you are a newcomer to Linq or haven't worked in the n-tier world, you are browsing the net today because you hit the same wall most developers hit: abstracting your data context away from your Linq 2 SQL classes in that glorious n-tier approach.

Microsoft pulled some real bonehead moves that make developing an n-tier application with Linq more than difficult.  However, the benefits of Linq and entities outweigh the pains of your blood, sweat, and tears, because today I have your answer (unless you would rather keep plugging away in the ADO world).


Optimistic Concurrency
First lets start with a term and feature that makes it all possible, optimistic concurrency. This is defined as: "In the field of relational database management systems, optimistic concurrency control (OCC) is a concurrency control method that assumes that multiple transactions can complete without affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. Before committing, each transaction verifies that no other transaction has modified its data. If the check reveals conflicting modifications, the committing transaction rolls back." (You can read more on wikipedia: http://en.wikipedia.org/wiki/Optimistic_concurrency)

Row versioning is accomplished in SQL Server in one of two ways.
  • The first, which I recommend, is the use of a TIMESTAMP column on each of your tables (You will see why shortly). This is an auto-generated field in SQL Server 2008 and auto-updated on row edits. SQL Server 2008 handles all of the heavy lifting here (All you guys and gals stuck in the SQL Server 2005 or prior world are out of luck, see the second approach). If you are going this route skip ahead to the "What do I do with Optimistic Concurrency and Linq 2 SQL Classes?" section.

  • The second approach is the use of a DATETIME column on each of your tables. Most people would name a field of this type "LAST_UPDATED" or something equivalent. SQL Server in this scenario does none of the heavy lifting for you, so I'll explain the constraints in what follows:
First, when you define your table you have to set a DEFAULT value.
CREATE TABLE [YourTableName]
(
  --Your columns
  LAST_UPDATE DATETIME NOT NULL DEFAULT(GETDATE())
)
Second, you have to define a trigger as follows:
CREATE TRIGGER trig_rv[YourTableName]
ON [YourTableName]
  AFTER UPDATE AS
  BEGIN
    UPDATE [YourTableName]
    SET LAST_UPDATE = getdate()
    WHERE
      [YourTableName].[YourPrimaryKey] IN (SELECT [YourPrimaryKey] FROM inserted);
  END  
GO


That's it, now you can catch up to your comrades on SQL Server 2008 in what follows.

What do I do with Optimistic Concurrency and Linq 2 SQL Classes?
Ok, so now you have jumped through the SQL Server hoops, next you need to jump through the Linq to SQL designer hoops (This isn't that painful, and I have a "Find & Replace" solution that accomplishes the same thing).

First, you have to understand a few of those magical [Column(...)] tags you see all through your Linq to SQL classes.
  • UpdateCheck (Needs done for EVERY column)
    This is the magic that makes the Optimistic Concurrency work. It has the following values:
    -Never (Never checks if the field has been changed)
    -Always (Always checks)
    -WhenChanged (Only checks when changed)

    The magical keyword you are looking for here is "UpdateCheck=UpdateCheck.Never", this goes on ALL of your table columns. If you are using auto-generated Linq 2 SQL entities you can do "Find & Replace"
    Find: [Column(S
    Replace:
    [Column(UpdateCheck=UpdateCheck.Never,S
    Finally, Replace All.
  • AutoSync (Only needs updated for the Row Version column)
    I'm not going to go over all of the values for this, but I will outline why and what makes optimistic concurrency work. "AutoSync=AutoSync.Always" This will sync the value of your Row Version column to the database. You only put this on the row version column for each table. This is part 2 of what makes your optimistic concurrency work
  • IsDbGenerated=true (Only needs updated for the Row Version column)
  • IsVersion=true (Only needs updated for the Row Version column)
  • Now I told you that I would tell you a nifty little cheat to do a Find & Replace to bulk update this in your designer.cs file. For starters, I hope you named your row version column the same for every table, if not you will be doing this part manually. (Those of you using Timestamp, double check but these are likely already set for you)
    Find: Copy the entire [Column(...)] tag of any of your row version properties. If you named them all the same, this is a walk in the park.
    Replace: [Column(UpdateCheck = UpdateCheck.Never, Storage="_<<YOURCOLUMNNAME>>", AutoSync=AutoSync.Always, DbType="DateTime NOT NULL", IsDbGenerated=true, IsVersion=true)]
  • Ta-da, you have now enabled Optimistic Concurrency. I'd suggest you keep reading; however, most of you will jump out and go try it. It's not exactly as straightforward as you would think. The following sections explain the how, what, and why of what you have to do.
C#, Linq, and Optimistic Concurrency
There are a few tricks to get those annoying "An attempt to Attach an Entity that is not new..." errors to go away, along with the whole host of other errors you will run into reading the quick-hit notes on everyone else's blogs.

Detaching objects from their associated EntityRef objects
  • Type tables are the main offender here. This is where the magic of partial classes works wonderfully. Say you have the following entities.

    User
    int UserID
    EntityRef<UserType> _UserType
    EntityRef<Country> _Country
    EntitySet<Address> _Addresses
    //Other data, doesn't matter

    Address
    int AddressID
    EntityRef<User> _User
    EntityRef<AddressType> _AddressType
    //Other data, doesn't matter

    You will first start by making a matching partial class in the same namespace as the model.

    public partial class User
    {
        public void DetachAll()
        {
            this._Country = default(EntityRef<Country>);
            this._UserType = default(EntityRef<UserType>);
            if(this._Addresses.HasLoadedOrAssignedValues)
                foreach(Address a in this._Addresses)
                    a.DetachAll();
        }
    }

    public partial class Address
    {
        public void DetachAll()
        {
            this._AddressType = default(EntityRef<AddressType>);
        }
    }
This will allow you to waterfall-detach objects when saving the parent object. You will need to detach EntityRef objects prior to attaching them to any active data context. This is the first stumbling block you will find in your journey to a perfect n-tier Linq implementation. (The only EntityRef you leave attached is the reference to the parent object. In the instance of the Address object, it will have an EntityRef<User> which you want to leave set for singular inserts, updates, etc.)

  • Next, any object you get from the database you must detach from the datacontext. (There might be better ways of doing this, however I have not found any yet. Any time an object is attached to a data context, it must be detached to be updated again.)

    We will take a GetUserByID(int ID) function as an example, which would be a standard method to have in a business layer.

    public User GetUserByID(int ID, bool AttachChildren)
    {
        User u;
        using(YourDataContextObject data = new YourDataContextObject())
        {
            //This line makes the Entity think it is new; remember that annoying error that
            //was making you pull your hair out. This is the solution.
            data.ObjectTrackingEnabled = false;

            if(AttachChildren)
            {
                //This will attach your children objects to the parent; without this
                //they assume lazy loading, which in n-tier is not what we want.
                DataLoadOptions dlo = new DataLoadOptions();
                dlo.LoadWith<User>(x => x.Addresses);
                dlo.LoadWith<Address>(x => x.AddressType);

                data.LoadOptions = dlo;
            }

            //Forgive me for not using standard Linq query syntax; Blogspot isn't exactly easy
            //to write code in. You will grow to love the Linq methods anyway.
            u = data.Users.SingleOrDefault(x => x.UserID == ID);
        }
        return u;
    }

That's all folks!
That wraps up today's post regarding n-tier Linq applications. There could be a LOT more regarding tricks to make life easier in Save methods that handle Insert, Update, and Delete in one method... but that's really outside the scope of today's post.


~Aqua~