Archive for the ‘dotNet’ Category

Gotcha: nHibernate NonUniqueObjectException

A quick note on a gotcha. I was getting the infamous NonUniqueObjectException from NHibernate: “a different object with the same identifier value was already associated with the session.”

Typically, this error occurs when you use objects across NHibernate ISessions. In my case, I was using the same ISession but was still receiving the error when I added two new objects to a parent object’s collections.
After running through the “there must be another session being created” thought over and over, I finally discovered that the ID mapping for the child object did not have a generator assigned. I’m guessing this prevented NHibernate from recognizing that an object with an id of 0 was transient. In my implementation, I added logic to ensure the child object’s identity is always unique and let the database resolve the actual identity for the next session.
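As a sketch of the fix (class, table, and property names here are hypothetical, not from the original project), the child’s ID mapping needed a generator element so NHibernate can recognize an unsaved object:

```xml
<!-- Child.hbm.xml (hypothetical names) -->
<class name="Child" table="Child">
  <!-- Without the generator element, NHibernate could not tell that
       an id of 0 meant the object was transient (unsaved). -->
  <id name="Id" column="ChildId" type="Int32" unsaved-value="0">
    <generator class="native" />
  </id>
  <property name="Name" />
</class>
```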

An interesting red herring that I’m glad to be rid of.


Distributed WPF Application: Part 1

So, in the spirit of a distributed WPF application, let’s focus on the User (shocking).  The capital U in User is intentional. They are the all-knowing (well, they know what they want to do…), the all-powerful (well, they are paying the bills…) and the all-something-else (I needed a third) drivers of the application.

For WPF, the possibilities are a desktop-style application, an XBAP portal, or Silverlight. For this series, I’ll be examining using an XBAP. Although not my favorite choice, it does provide the User a sense of comfort in seeing the application running within the browser (IE). Of course, this comfort is an illusion; an XBAP is in fact very, very different from the web paradigm the User may be so comfortable experiencing. Hopefully more on that later…

A key difference, in my opinion, between an XBAP and a web application is the deployment. XBAP deployment through ClickOnce is more aligned with desktop applications than with web applications. This fact alone will affect so many architectural and technological decisions that it should not be taken lightly.

If the preceding warning still has you wanting to pursue an XBAP application that takes full advantage of the benefits of WPF while running a thin presentation layer that consumes services on application server(s), hopefully this series will benefit you. If not, hopefully it will still provide some insight for your own architecture.

In terms of implementation of the UI, follow the current best practices – enlightening, ain’t I…
At the time of writing, that means MVP, MVVM, or some hybrid that allows for reality.  I’ll refer you to Josh Smith’s article here.

In terms of the model that is used, I would suggest starting with an object that fully meets all requirements of the UI. If the UI needs a combobox, add a Dictionary&lt;int,string&gt; to the object so the UI can bind to it. Use the object the UI binds to, the ViewModel, as a report – but with some extra metadata that will aid you.  For example, a User doesn’t care about a Dictionary&lt;int,string&gt;; they care about the string – however, the int will go a long way later on in resolving that string.
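As a minimal sketch of this idea (the class and property names are mine, purely illustrative), the ViewModel carries both the display strings the User sees and the ids the server will need later:

```csharp
using System.Collections.Generic;

// Hypothetical ViewModel: holds everything one screen needs,
// including the id/string pairs a ComboBox binds to.
public class OrderEditViewModel
{
    // The UI binds the ComboBox to this. The User only ever sees the
    // string values, but the int keys let the service layer resolve
    // the selection without round-tripping display text.
    public Dictionary<int, string> AvailableStatuses { get; set; }

    // The selected key, pushed back to the server on save.
    public int SelectedStatusId { get; set; }

    public string CustomerName { get; set; }
}
```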

All the data necessary to show information to the User, in the clearest way possible, should be available in that ViewModel. In that vein, make sure that the data you are showing to the User is necessary for that screen. Try to focus on the User’s task at hand. A generic editable list of items (aka Excel) may be what the User is used to, but really try to extract what they are trying to use those row edits for.

Unfortunately, this least technical aspect will likely affect the overall implementation of the architecture – and the importance of structuring your screens to match it cannot be overstated.

Distributed WPF Application

I’ve been going over some ideas (not my own, of course) focused on distributed, loosely coupled, message-driven systems; WPF XBAP portals; enterprise architecture; and domain-driven design. I’m hoping to post an outline of the ideas as a means to aggregate them in one place, which will hopefully help my own understanding and others’.

An additional requirement is to be aware of the common symptoms found in poor system design. From Principles, Patterns, and Practices [Martin], the symptoms are:

  • Rigidity: The design is difficult to change.
  • Fragility: The design is easy to break.
  • Immobility: The design is difficult to reuse.
  • Viscosity: It is difficult to do the right thing.
  • Needless complexity: Overdesign.
  • Needless repetition: Mouse abuse. [copy/paste programming]
  • Opacity: Disorganized expression.

In addition, I was looking to follow the SOLID principles of programming.
As detailed in Principles, Patterns, and Practices [Martin],

  • Single-Responsibility Principle (SRP): A class should have only one reason to change.
  • Open/Closed Principle (OCP): Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.
  • Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types.
  • Interface Segregation Principle (ISP): Clients should not be forced to depend on methods they do not use.
  • Dependency-Inversion Principle (DIP): High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend upon details. Details should depend upon abstractions.

See Uncle Bob’s Principles of Object Oriented Programming article for more detail (or his PPP book).
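As a minimal sketch of the Dependency-Inversion Principle (the types here are hypothetical, made up for illustration): the high-level policy depends on an abstraction, and the low-level detail is injected.

```csharp
// Abstraction that both layers depend on.
public interface IMessageSink
{
    void Write(string message);
}

// High-level policy: knows nothing about files, databases, or consoles.
public class AuditService
{
    private readonly IMessageSink _sink;

    // The concrete sink is injected, so AuditService never depends
    // on a low-level detail, only on the IMessageSink abstraction.
    public AuditService(IMessageSink sink)
    {
        _sink = sink;
    }

    public void Record(string action)
    {
        _sink.Write("AUDIT: " + action);
    }
}
```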

Debugging is a code smell

Production can be a real pain in the ass. This gets amplified when the software running in production doesn’t do its job of telling you what’s wrong. Something I noticed during my tenure at a software company is that the product regularly required a developer to be involved in order to diagnose configuration issues.

Usually the developer needed to hop on a WebEx and walk through the problem with the client’s support staff. This was no small feat, as it required programmers, DBAs, and the client’s project managers to be involved – not the kind of situation that instills confidence in your client.

If the problem could not be determined, a copy of the production database (usually 10–50 GB) would need to be scrubbed and transported so that development could debug the situation locally. This seemed ridiculous, and was actually indicative of a larger problem: there was no visibility into what the code was actually doing.

How do you combat such a horrible problem? Slowly.
Unfortunately, there is No Silver Bullet. Buy-in needs to be secured to fund the necessary steps: identify the critical areas of the application that are continually poor performers and address them.

How to address them?
Unit testing comes to mind. Identify areas of your system with high coupling and attack them. Focus on what’s best for the BUSINESS. Create a wish list of what would help the BUSINESS if more transparency were available.
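As a toy sketch of the kind of transparency that helps, here is a critical path instrumented with logging. log4net is shown purely as an illustration, and the class and parameter names are hypothetical:

```csharp
using System;
using log4net;

public class ImportService
{
    // Illustrative only: surface what the code is doing so support
    // staff can diagnose configuration problems without a developer
    // on a WebEx.
    private static readonly ILog Log =
        LogManager.GetLogger(typeof(ImportService));

    public void Import(string connectionName)
    {
        Log.InfoFormat("Import starting; using connection '{0}'",
                       connectionName);
        try
        {
            // ... the actual work ...
        }
        catch (Exception ex)
        {
            // Log enough context that production tells you what
            // went wrong, instead of requiring a local debug session.
            Log.Error("Import failed for connection '"
                      + connectionName + "'", ex);
            throw;
        }
    }
}
```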

All in all, it’s all been said before. I would recommend looking at PPP, Clean Code, and Refactoring.

Command Query Separation(CQS) Presentation

I just recently watched Greg Young’s presentation on Command Query Separation. I have been lurking on the Domain Driven Design Yahoo group for a bit and had seen back-and-forth discussions about CQS, but I hadn’t fully followed the concept until seeing this presentation. It was quite eye-opening.

The presentation is here; I would highly recommend watching it.
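As a minimal sketch of the core idea (my own toy example, not taken from the presentation): commands change state and return nothing, while queries return data and change nothing.

```csharp
public class Account
{
    private decimal _balance;

    // Command: mutates state, returns void.
    public void Deposit(decimal amount)
    {
        _balance += amount;
    }

    // Query: returns data, has no side effects.
    public decimal GetBalance()
    {
        return _balance;
    }
}
```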

nHibernate: Component MetaData

I’ve been adding some auditing events to NHibernate through its new event system. A particular requirement of the auditing implementation is that component types be unwrapped and audited along with the parent object. As a first foray into the NHibernate metadata, I found that calling GetClassMetadata() for a component type returns a null reference, since a component is not an entity.

Instead, if you use the property’s IType for the component, you can cast it down to an NHibernate.Type.ComponentType. This then lets you access the property names and values of the component object.
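A rough sketch of the approach (the entity type and the Audit helper are hypothetical, and the exact signatures should be checked against your NHibernate version):

```csharp
// Get the parent entity's metadata, then unwrap any component-typed
// properties so they can be audited along with the parent.
IClassMetadata meta = sessionFactory.GetClassMetadata(typeof(Order));

for (int i = 0; i < meta.PropertyTypes.Length; i++)
{
    // GetClassMetadata() returns null for components, but the
    // property's IType can be cast down to ComponentType instead.
    var componentType =
        meta.PropertyTypes[i] as NHibernate.Type.ComponentType;
    if (componentType == null)
        continue; // not a component property

    object component = meta.GetPropertyValue(
        entity, meta.PropertyNames[i], EntityMode.Poco);

    // ComponentType exposes the component's own property names/values.
    string[] names = componentType.PropertyNames;
    object[] values =
        componentType.GetPropertyValues(component, EntityMode.Poco);

    for (int j = 0; j < names.Length; j++)
        Audit(names[j], values[j]); // hypothetical auditing call
}
```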

Fun stuff.

MSBuild: Custom Task MetaData

I’ve been working with CruiseControl.Net to connect to our CMSynergy source control for some continuous integration goodness. While configuring CCNet, I’ve had the opportunity to spend some more time with MSBuild – which always makes for interesting times.

I plan on posting more info once this round is over, but I wanted to make a note about custom tasks in MSBuild. The setup is that I am using a custom task to parse some input files based on how the release is being defined, and then output the files that should be included in the final install package. Without going into too much more detail, the initial condition was that if this is a .0 release, all files are returned.
The task XML looked like:
<CustomTask SourceFiles="@(SourceFiles)" Release="$(ReleaseCondition)">
  <Output TaskParameter="PublishFiles" ItemName="OutFiles"/>
</CustomTask>

The interesting bit to note here was that just setting the output property to the input property (PublishFiles = SourceFiles) ended up with the RecursiveDir metadata on PublishFiles being reset once the output was bound to the new item OutFiles. To get around this, I ended up setting a new piece of custom metadata on the PublishFiles items that copies the RecursiveDir value from SourceFiles.
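A sketch of that workaround inside the custom task (the custom metadata name SourceRecursiveDir is mine, purely illustrative):

```csharp
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Inside the custom task's Execute(): copy RecursiveDir into a custom
// metadata entry before handing the items back, since the well-known
// RecursiveDir value does not survive on the output items.
ITaskItem[] outputs = new ITaskItem[SourceFiles.Length];
for (int i = 0; i < SourceFiles.Length; i++)
{
    TaskItem item = new TaskItem(SourceFiles[i]);
    item.SetMetadata("SourceRecursiveDir",
                     SourceFiles[i].GetMetadata("RecursiveDir"));
    outputs[i] = item;
}
PublishFiles = outputs;
```

In the consuming project file, %(OutFiles.SourceRecursiveDir) then stands in wherever %(RecursiveDir) would have been used.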

Gotcha: Connection Property on ADO.Net Transaction

In a recent project, the application utilized a custom Data Access Layer that provided an API for managing connectivity to the database. The API had an abstract definition that business logic would call into in order to define the explicit Unit of Work boundaries being performed. This forced the business logic to create an abstract transaction object to begin, commit, or roll back the unit of work. The concrete implementations had connectors for both MSSQL and Oracle that manipulated the respective ADO.Net libraries.

The transaction class in ADO.Net has a property holding a reference to its connection. When an abstract transaction was started, the concrete implementation would create a new OracleConnection or SqlConnection and then begin a transaction from it. Once the transaction had ended, the connection property would be checked and, if still available, the connection was closed.

At a client site, we started noticing connection leak messages appearing in the application. When checking the open sessions for the user, we found that the connections would quickly reach the default pool maximum of 100. I managed to narrow it down to the concrete implementation of the transactions. Upon further investigation, I found that when an ADO.Net transaction is committed, it loses its reference to the connection; since the connection was not being garbage collected fast enough within a tight loop, the pool limit would be hit. Once this was resolved by holding an explicit reference to the connection, the open connection count went down dramatically and all was right in the world.
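A simplified sketch of the fix (Oracle shown; the variable names are mine): hold the connection reference yourself rather than relying on the transaction’s Connection property after the commit.

```csharp
using System.Data.OracleClient;

// Keep an explicit reference: after Commit(), transaction.Connection
// becomes null, so closing "via the transaction" silently does nothing
// and the connection leaks back toward the pool limit.
OracleConnection connection = new OracleConnection(connectionString);
connection.Open();
OracleTransaction transaction = connection.BeginTransaction();
try
{
    // ... the unit of work ...
    transaction.Commit(); // transaction.Connection is now null
}
finally
{
    // Close the connection we kept, not transaction.Connection.
    connection.Close();
}
```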

Entity Framework – POCO style

The Entity Framework, Microsoft’s current effort to bring object-relational mapping to the non-open-source world, is currently in its version 1 release.  However, one of its shortcomings is the lack of POCO support.

POCO, meaning Plain Old CLR Object, is the idea that your domain objects should not have any knowledge of persistence or infrastructure. Your domain objects should just be worried about what they know best: interaction with your domain logic. Makes sense, right? Unfortunately, the Entity Framework does force some infrastructure into your domain logic by requiring inheritance from some of its base classes.
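To illustrate the difference (the Customer types here are hypothetical): a POCO carries no persistence baggage, whereas an EF v1 entity derives from an infrastructure base class.

```csharp
// POCO: no persistence knowledge at all.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// EF v1 style: the entity must derive from an infrastructure base
// class, pulling change-tracking plumbing into the domain model.
public class CustomerEntity
    : System.Data.Objects.DataClasses.EntityObject
{
    // ... mapped properties plus the inherited EF plumbing ...
}
```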

Anyway, this limitation, with POCO support planned for later releases, has a silver lining: a POCO adapter has been created for the Entity Framework, which can be found here.