Was receiving a null parameter on a service call. After looking through KnownTypes, asynchronous vs. synchronous definitions, etc. for a while, it turned out:
The parameter on the asynchronous OperationContract was named request, whereas the parameter on the synchronous OperationContract was named command.
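For illustration, the mismatch looked roughly like this (service and type names are hypothetical, not the actual contract). WCF matches message parts by parameter name, so when the names disagree between the client and server versions of the contract, the server deserializes null:

```csharp
using System;
using System.ServiceModel;

// Hypothetical client-side contract: note the parameter name differs
// between the synchronous and asynchronous definitions. If the server's
// contract says "command", calls made through the async pair arrive null.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    SubmitResult Submit(SubmitOrder command);

    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSubmit(SubmitOrder request, AsyncCallback callback, object state);
    SubmitResult EndSubmit(IAsyncResult result);
}
```

Renaming the async parameter back to `command` (or vice versa, consistently on both sides) resolves it.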
"If your WCF service is receiving null values instead of the parameters you are expecting, check that the parameter names on your methods on the client version of the contract match those on the server version of the contract. And on the client side, be sure to check both the synchronous and asynchronous definitions of your methods."
Thanks to the above quote from http://blog.functionalfun.net/2009/09/if-your-wcf-service-is-unexpectedly.html
Quick note on a gotcha. Was getting the infamous NonUniqueObjectException from NHibernate: “a different object with the same identifier value was already associated with the session”.
Typically, this error occurs when you are using objects across NHibernate ISessions. In my case, I was using the same ISession but was still receiving the error when I added two new objects to a parent object’s collections.
After running through the “there must be another session being created” thought over and over, I finally discovered that the ID mapping for the child object did not have a generator assigned. I’m guessing this caused NHibernate not to recognize the object as transient when its id equaled 0. In my implementation, I just added logic to always make sure the child object’s identity is unique, and let the database resolve the actual identity for the next session.
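The fix can be sketched in hbm.xml terms (class, table, and column names here are hypothetical): give the id a generator and an unsaved-value so NHibernate can tell that a zero id means a transient, not-yet-saved object:

```xml
<!-- Hypothetical mapping for the child object. Without the <generator>,
     NHibernate has no way to know that Id = 0 means "not yet persisted",
     so two new children with Id = 0 collide in the session. -->
<class name="ChildItem" table="ChildItem">
  <id name="Id" column="ChildItemId" unsaved-value="0">
    <generator class="identity" />
  </id>
  <property name="Name" />
</class>
```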
An interesting red herring that I’m glad to be rid of.
So, in the spirit of a distributed WPF application, let’s focus on the User (shocking). The capital U in User is intentional. They are the all-knowing (well, they know what they want to do…), the all-powerful (well, they are paying the bills…) and the all-something-else (I needed a third) driver of the application.
For WPF, the possibilities are a desktop-style application, an XBAP portal, or Silverlight. For this series, I’ll be examining using an XBAP. Although not my favorite of the choices, it does provide the User a sense of comfort in seeing the application running within the browser (IE). Of course, this comfort is an illusion, and it is in fact very, very different from the web paradigm the User may be so comfortable with. Hopefully more on that later…
A key difference, in my opinion, between an XBAP and a web application is deployment. XBAP deployment through ClickOnce is more aligned with desktop applications than with web applications. This fact alone will affect so many architectural and technological decisions that it should not be taken lightly.
If the preceding warning still has you wanting to pursue an XBAP application that takes full advantage of the benefits of WPF while running a thin presentation that consumes services on application server(s), hopefully this series will benefit you. If not, hopefully this series will still provide some insight for your architecture.
In terms of implementation of the UI, follow the current best practices – enlightening ain’t I…
At the time of writing, this is MVP, MVVM, or some hybrid that allows for reality. I’ll refer you to Josh Smith’s article here.
In terms of the model that is used, I would suggest starting with an object that fully meets all requirements of the UI. If the UI needs a combobox, add a Dictionary<int,string> to it so that the UI can bind to it. Use the object the UI binds to, the ViewModel, as a report – but with some extra metadata that will aid you. For example, a User doesn’t care about a Dictionary<int,string>; they care about the string – however, the int will go a long way later on in resolving that string.
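A minimal sketch of that idea (class and property names are hypothetical): the ViewModel carries the Dictionary for the combobox to bind against, and the selected int key travels along as the metadata that the service layer resolves later:

```csharp
using System.Collections.Generic;

// Hypothetical ViewModel. The UI binds a combobox's ItemsSource to
// StatusOptions (displaying the string values) and its SelectedValue to
// SelectedStatusId. The User only ever sees the strings; the int key is
// the metadata that resolves the choice on the server side.
public class OrderViewModel
{
    public IDictionary<int, string> StatusOptions { get; set; }
        = new Dictionary<int, string>();

    public int SelectedStatusId { get; set; }

    public string CustomerName { get; set; }
}
```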
All the data necessary to show information to the User, in the clearest way possible, should be available in that ViewModel. In that vein, make sure that the data you are showing to the User is necessary for that screen. Try to focus on the User’s task at hand. A generic editable list of items (aka Excel) may be what the User is used to, but really try to extract what they are trying to use those row edits for.
Unfortunately, this least technical aspect will likely affect the overall implementation of the architecture – and the importance of structuring your screens to match this cannot be overstated.
I’ve been going over some ideas (not my own ideas, of course) focused around distributed, loosely coupled, message-driven systems, WPF XBAP portals, enterprise architecture, and domain-driven design. I’m hoping to post the outline of the ideas as a means to aggregate them in one place, which will hopefully help my own and others’ understanding.
An additional requirement was to be aware of the common symptoms found in poor system design. From Principles, Patterns, and Practices [Martin],
The symptoms are:
- Rigidity: The design is difficult to change.
- Fragility: The design is easy to break.
- Immobility: The design is difficult to reuse.
- Viscosity: It is difficult to do the right thing.
- Needless complexity: Overdesign.
- Needless repetition: Mouse abuse. [copy/paste programming]
- Opacity: Disorganized expression.
In addition, I was looking to follow the SOLID principles of programming.
As detailed in Principles, Patterns, and Practices [Martin],
- Single-Responsibility Principle (SRP): A class should have only one reason to change.
- Open/Closed Principle (OCP): Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.
- Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types.
- Interface Segregation Principle (ISP): Clients should not be forced to depend on methods they do not use.
- Dependency-Inversion Principle (DIP): High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend upon details. Details should depend upon abstractions.
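As a quick illustration of that last principle (the types here are hypothetical, not from any particular system): the high-level policy and the low-level detail both depend on the same abstraction, rather than the policy depending on the detail directly:

```csharp
// High-level policy depends on an abstraction it owns...
public interface IMessageSender
{
    void Send(string message);
}

public class OrderNotifier
{
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender)
    {
        _sender = sender;
    }

    public void NotifyShipped(int orderId)
    {
        _sender.Send("Order " + orderId + " shipped");
    }
}

// ...and the low-level detail depends on (implements) that same abstraction.
public class SmtpSender : IMessageSender
{
    public void Send(string message)
    {
        /* SMTP details live here, invisible to OrderNotifier */
    }
}
```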
Production can be a real pain in the ass. This seems to get amplified if the software running in production doesn’t do its job of telling you what’s wrong. Something I noticed during my tenure at a software company I worked for is that the product regularly required a developer to be involved in order to diagnose configuration issues.
Usually the developer needed to hop on a WebEx and walk through the problem with the client’s support staff. This was no small feat, as it required programmers, DBAs, and the client’s project managers all to be involved – not the kind of situation that instills confidence in your client.
If the problem could not be determined, a copy of the production database (usually 10–50 gigs) would need to be scrubbed and transported so that development could debug the situation locally. This seemed ridiculous, and was actually indicative of a larger problem: there was no visibility into what the code was actually doing.
How do you combat such a horrible problem? Slowly.
Unfortunately, there is No Silver Bullet. Buy-in is needed to fund the necessary steps: identify the critical areas of the application that are continually bad performers, and address them.
How to address them?
Unit testing comes to mind. Identify areas of your system with high coupling and attack them. Focus on what’s best for the BUSINESS. Create a wish list of what would help the BUSINESS if more transparency were available.
Just recently watched Greg Young’s presentation on Command Query Separation. I have been lurking on the Domain Driven Design Yahoo group for a bit and had seen back-and-forths about CQS, but hadn’t fully followed the concept until seeing this presentation. It was quite eye opening.
The presentation is here; I would highly recommend watching it.
My company uses a version of the CheckPoint SSL VPN for all its remoting needs. Basically, you open Internet Explorer and browse to the VPN address. The first time this loads up, an ActiveX control is installed that checks for virus-scanning software, and then you log in. Once logged in, it installs a service which does some virtual networking through SSL.
Unfortunately, this seems to barely work with Vista and I haven’t gotten it to work with Windows 7. Instead of the hours it most likely would have taken to try to get it working on Win7, I decided to take an alternate approach, and use the XP mode to connect. After enabling virtualization under the OS Security in my BIOS, XP mode was installed and ready to go. I browsed to the SSL VPN address, installed it, and was able to remote into my work desktop. Hooray.
Next, I wanted to use the more seamless integration of XP mode, so I created a folder in the Start Menu for my virtual user and created an IE shortcut. Once XP hibernated, I was able to launch the XP IE directly from Win7 and VPN in.
The current bug-a-boo is that the VPN connectivity is local from my virtual XP machine, and I want to develop on my Win7 desktop while connecting to TFS at work. I’m thinking I’m going to install a TFS proxy on the XP virtual machine, and connect to that from my home machines. That way I can kick up the SSL VPN seamlessly from Win7, and then route my development activity through the XP Machine using the TFS Proxy. Fun, fun.
Adding some auditing events to NHibernate through its new event system. A particular requirement of the auditing implementation is that component types be unwrapped and audited along with the parent object. As a first foray into the NHibernate metadata, I found that calling GetClassMetadata() for a component type returns a null reference, since the component is not an entity.
Instead, if you use the property’s IType for the component, you can cast it down to an NHibernate.Type.ComponentType. This will then allow you to access the property names and values of a component object.
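A sketch of that approach (the entity and its component property are hypothetical, and the API details are from memory, so verify them against your NHibernate version):

```csharp
using System;
using NHibernate;
using NHibernate.Type;

// Given a hypothetical Customer entity with an Address component property:
// GetClassMetadata works for the entity itself, but would return null for
// the Address component, so we go through the property's IType instead.
var metadata = sessionFactory.GetClassMetadata(typeof(Customer));
IType propertyType = metadata.GetPropertyType("Address");

var componentType = propertyType as ComponentType;
if (componentType != null)
{
    // Unwrap the component's sub-properties for auditing.
    string[] names = componentType.PropertyNames;
    object[] values = componentType.GetPropertyValues(addressValue, EntityMode.Poco);

    for (int i = 0; i < names.Length; i++)
        Console.WriteLine(names[i] + " = " + values[i]);
}
```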
Just a quick post about something neat discovered with MSBuild.
I have a task that is building an ItemGroup containing a whole bunch of paths pointing to different kinds of projects to build. The list contains .NET sln files and VB6 vbproj files. I want to build the sln files with MSBuild and use an MSBuild extension task that wraps the VB6 IDE to build the vbproj files. To do this, I set a condition on each build task that checks the extension, like so:
<MSBuild Condition="@(FilesToBuild->'FilePattern%(Extension)') == 'FilePattern.sln'" .../>
<VB6 Condition="@(FilesToBuild->'FilePattern%(Extension)') == 'FilePattern.vbproj'" .../>
And Voila! When the build script runs, VB6 projects are built using the VB6 task, and dotNet projects are built using the MSBuild task.
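Put together, the target might look something like this (the target, task, and item names are illustrative; the VB6 task stands in for whatever extension task wraps the VB6 IDE in your build):

```xml
<!-- Hypothetical target: batching over FilesToBuild means each task
     only fires for the items whose extension matches its condition. -->
<Target Name="BuildAll">
  <MSBuild Projects="@(FilesToBuild)"
           Condition="@(FilesToBuild->'FilePattern%(Extension)') == 'FilePattern.sln'" />
  <VB6 Projects="@(FilesToBuild)"
       Condition="@(FilesToBuild->'FilePattern%(Extension)') == 'FilePattern.vbproj'" />
</Target>
```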
Been working with CruiseControl.Net to connect to our CMSynergy source control for some continuous integration goodness. While configuring CCNet, I’ve had the opportunity to spend some more time with MSBuild – which always makes for some interesting times.
I plan on posting more info once this round is over, but I wanted to make a note about custom MSBuild tasks. The setup is that I am using a custom task to parse some input files based on how the release is being defined, and then output the files that should be included in the final install package. Without going too much further into detail, the initial condition was that if this is a .0 release, all files are returned.
The task xml looked like:
<CustomTask SourceFiles="@(SourceFiles)" Release="$(ReleaseCondition)">
  <Output TaskParameter="PublishFiles" ItemName="OutFiles"/>
</CustomTask>
The interesting bit to note here was that just setting the output property to the input property (PublishFiles = SourceFiles) ended up with the RecursiveDir metadata on PublishFiles being reset once the output was assigned to the new item OutFiles. To get around this, I ended up setting a new custom metadata value on the PublishFiles items that copies the RecursiveDir value from SourceFiles.
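The workaround inside the custom task can be sketched like this (the class, property, and custom metadata names are hypothetical): since RecursiveDir is well-known metadata that doesn’t survive being assigned into a new item, stash its value in custom metadata before handing the items back:

```csharp
using System.Collections.Generic;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Hypothetical custom task matching the XML above.
public class CustomTask : Task
{
    [Required]
    public ITaskItem[] SourceFiles { get; set; }

    public string Release { get; set; }

    [Output]
    public ITaskItem[] PublishFiles { get; set; }

    public override bool Execute()
    {
        var publish = new List<ITaskItem>();
        foreach (ITaskItem source in SourceFiles)
        {
            var item = new TaskItem(source);
            // Copy RecursiveDir into custom metadata before the assignment
            // to the output item resets the well-known value.
            item.SetMetadata("SourceRecursiveDir", source.GetMetadata("RecursiveDir"));
            publish.Add(item);
        }
        PublishFiles = publish.ToArray();
        return true;
    }
}
```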