Posts Tagged ‘WCF RIA Services’

Mastering LOB Development for Silverlight 5: A Case Study in Action

May 11, 2012

I recently picked up a copy of Mastering LOB Development for Silverlight 5: A Case Study in Action and started going through the book.

If you have never done any development in Silverlight and would like a practical, pragmatic example of how to set up your solutions in Visual Studio, as well as exposure to the different frameworks that are available, then this is an excellent choice! The book is broken up into eleven chapters; it starts by introducing you to the basics of developing in XAML and quickly builds upon this knowledge to lead you into understanding windows and controls.

Chapter 3 is about data binding, but it doesn’t stress the need for separation of concerns enough. I would rather have seen examples done the right way instead of the easy way. There were no advanced examples of attached properties or use of System.Windows.Interactivity. The title of this book implies that we will be covering Silverlight 5, but I don’t see a single mention of debugging data bindings, which is brand new to Silverlight 5, nor is there any hint of ancestor relative binding, implicit data templates, custom markup extensions, etc.

Chapter 4 is on architecture, but it too falls short of being truly comprehensive. There is no sign of Prism 4.0, nor is there any mention of Caliburn.Micro. Both of these frameworks are very powerful and more mature than either of the frameworks mentioned in the book. The MVVM Light Toolkit is a good tool, but MEF is more of a DI/IoC container than a full-fledged framework that can handle messaging, such as the event aggregation the other frameworks provide. Another sad aspect is that there is no mention of compartmentalizing your architecture so that XAPs are only downloaded on demand. Most Silverlight frameworks support this out of the box.

Chapter 5 talks about data access. The chapter focuses on RIA Services, but I find it very problematic that it does not cover any authorization or authentication. It does go into good detail about using RIA Services in conjunction with Entity Framework. However, I have spent way too much time fighting RIA Services, and I know it is better suited to smaller line-of-business applications than to enterprise-level development. The other common issue I have with this chapter is that it doesn’t address the common scenario of “Server Not Found”. This happens when RIA Services has an exception on the server, and it will happen a lot. There are techniques to solve this, and it is important for you as the developer to know them.

Chapter 6 discusses Out of Browser applications and, for the most part, has good coverage of the topic.

Chapter 7 is on testing. This is a good chapter as well, but it is unfortunate that it took until this chapter to get a mention of DI/IoC. Dependency injection and inversion of control are just as important in your architecture and design as they are in your testing projects. I wish they had covered some of the behavior-driven development libraries that are available to you, such as SpecFlow. BDD really complements your development and makes the experience so much better.

Chapter 8 is on Error Control. It covers the basics but still doesn’t address how to deal with exceptions on the server.

Chapter 9 is on Integrating with other Web Applications. I personally don’t see this as an important chapter for a book on line of business applications. I would much rather have a consistent architecture and UI instead of dealing with mashups.

Chapter 10 is on Consuming Web Services. This chapter covers the basics but doesn’t go deep, as learning WCF requires a book of its own. I did like that they cover consuming a Twitter API and processing JSON.

Chapter 11 is on Security. It is here that we finally see authentication and authorization for both RIA Services and WCF. We also look at how to make cross-domain calls.

This book is a great reference for building line-of-business applications. Although I believe it is missing some fundamental topics and is a little weak on its coverage of Silverlight 5, it is still a very good read, and you will walk away armed with good knowledge for building line-of-business applications.

Hope the review helps…

A Case for going ORM-less

December 22, 2011

Problem Statement
I spend a lot of time architecting and designing software infrastructures to make the development of enterprise applications easy. For example, in Silverlight or WPF, I have spent a lot of time trying to make the development cycle for other developers easier so that they can focus primarily on business requirements and less on infrastructure and ceremony. Data access is an area within Microsoft technologies that, I believe, needs to be re-examined as a whole.

There once was a time when we had really nice products that allowed us to build business solutions without really needing to worry about the database. Microsoft Access, Microsoft Visual FoxPro, and dBase come to mind. In those days we didn’t need to worry about the model or the database; we just “developed” our applications. Granted, these were classic 2-tier architectures, but boy, you sure could develop software quickly. I still have clients today that launched their businesses off of products like these.

Unfortunately, the pendulum has swung to the other extreme, and we are being forced to use a new paradigm, the Object-Relational Mapper (ORM), in conjunction with our regular development. Regardless of what technology you use, this has become extremely painful, especially when developing for Silverlight.

If you have worked in Silverlight or any other environment where you are forced to make service calls asynchronously, you quickly realize just what a pain it is to solve the data access problem. Regardless of whether you use WCF RIA Services, WCF Data Services, or roll your own WCF service, you still have a lot of work and maintenance ahead of you. When you think of all the service operations you need to expose, maintaining them becomes a very time-consuming task.

Let’s look at the layers involved in building a Silverlight application and what it takes to save data over the wire. Consider the following diagram:

Because we are dealing with a distributed architecture, I wanted to show you what is required both on the server and on the client.

Server
As you can see we have our database layer. This is just a simple representation of where your data is being stored. For most corporate applications, this means a relational database like that of Microsoft SQL Server.

Since we are focusing on Microsoft technologies, we are going to come face to face with Entity Framework. Entity Framework is a database-agnostic ORM that lets us manage the mapping between objects and the database in several ways.

  • Database First – we build the database model first and then reverse engineer the database to build out the entities.
  • Model First – we build the entity model using the Entity Framework designer. This creates our entities and database as we go.
  • Code First – we create the object model first and provide some hooks to synchronize creating the database.
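To make the Code First option above concrete, here is a minimal sketch (the Customer entity and MyContext names are hypothetical, not from any of my projects), assuming the EntityFramework package that ships Code First support:

```csharp
using System.Data.Entity; // Code First ships in the EntityFramework package

// Hypothetical entity -- the object model comes first.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

// The DbContext is the "hook" that synchronizes creating the database:
// on first use, Code First builds the schema from the object model.
public class MyContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}
```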

I am going to discuss the Database First approach here. This assumes that you already have your database created and ready for you to use. Next you create an Entity Framework model. Creating the EDMX file is pretty easy and you are presented with the designer surface that represents your models.

If we were building a Windows Presentation Foundation (WPF), Windows Forms, ASP.NET Web Forms, or ASP.NET MVC application, we would be okay stopping here, since the code-behind for our views or viewmodels could, in theory, hold a reference to the Entity Framework ObjectContext and manipulate data from there. However, this is not the case for Silverlight or any client that must make service requests over the wire. With Silverlight, salt is added to our wounds since it does not use the same binaries as the rest of WPF and web development, and thus we can’t share a common entity library.

In order to use the Entity Framework, we need something like WCF RIA Services, WCF Data Services, or our own exposed WCF services. We will use the standard WCF RIA Services example: you add a new Domain Service class to your project. This has to be done after you have finished creating your Entity Framework model and compiled your web project at least once.

In order to get access to our data in Silverlight, we need the ability to send and receive requests over the web. WCF RIA Services does this by letting us create a Domain Service on the server and generating a hidden set of files on the client that exposes a proxy defining how we communicate with the service.
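A minimal domain service over a hypothetical Organization entity looks something like this sketch (the context and entity names are illustrative; the wizard-generated class derives from LinqToEntitiesDomainService):

```csharp
using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

// EnableClientAccess is what makes the service visible to the
// Silverlight client and triggers the client-side code generation.
[EnableClientAccess]
public class OrganizationDomainService : LinqToEntitiesDomainService<MyEntities>
{
    // Exposed to the client as a query method on the generated DomainContext.
    public IQueryable<Organization> GetOrganizations()
    {
        return this.ObjectContext.Organizations;
    }
}
```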

Client
Now let’s look at what happens on the right side of the diagram. In the hidden folder that Visual Studio generates, we now have access to a Domain Context that allows us to communicate with the server. We then usually wrap the models that are exposed by the generated code in our own classes. Typically this is done by using ViewModels but we could use Controllers or any other paradigm. Finally, we use data-binding in our XAML to finish the job of getting our data to our end-users.
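On the client, consuming the generated Domain Context typically looks like this sketch (names are illustrative; the query method is generated from the corresponding server-side query):

```csharp
// Client-side (Silverlight): the DomainContext lives in the hidden
// generated folder. A ViewModel would usually own this call.
var context = new OrganizationDomainContext();
context.Load(context.GetOrganizationsQuery(), loadOperation =>
{
    if (!loadOperation.HasError)
    {
        // Expose the results through a ViewModel property so the
        // XAML data-binding can finish the job.
        Organizations = loadOperation.Entities;
    }
}, null);
```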

Done! You may think this is great, but it is a huge problem for any shop that does a lot of agile development and is constantly changing the backend data model. These changes have a ripple effect, forcing us to keep our Entity Framework model in sync with the data model. You then must drop the WCF RIA Services objects that were created and re-create them. Because WCF RIA Services generates code into a hidden folder on the client, it is very hard to modify just one entity and generate the change for only that item. In practice, you have to drop the Domain Service and metadata file and regenerate them.

Analysis
Can you feel my pain yet? Even with Code First, there is no way to avoid the static proxy that is generated on the client to support communicating across the web. Wouldn’t it be nice if we could just get rid of all of these added tools and headaches and go back to how we developed software in the past? If you look at all the different ORM tools on the market, you are either forced to develop your code in a certain way, such as marking your properties “virtual”, or required to use navigation properties, which are meaningless in our object-oriented architecture. We shouldn’t need all of this baggage just to have the ability to persist data back to a database.

SQL Profiling
If you have used NHibernate or Entity Framework, it is almost mandatory to profile the SQL these ORMs are producing. It is unfortunate that the generated SQL is so horribly ugly. This is yet another area where we have gone the wrong direction and are allowing tools to dictate how our code should look and run instead of allowing the developer to define these rules.

How do we proceed?
In the coming posts, I am going to discuss a solution that is really based on what we had before. Going back to basics, if you will. I will be presenting a library that will let you develop code the way you want while still doing data access the way you want.

Although I am not a big fan of ORMs, I would categorize what I will be presenting as a micro-ORM. This approach will be very much convention-over-configuration based, with a fluent application programming interface (API) to let you override the default conventions. You may think that this is basically Fluent NHibernate or Code First all over again, but I will show you how this approach differs in its mechanism for providing rich yet flexible data access. Another aspect I will be presenting is that you will be the full author of the SQL used during data access. We will be using templates, and you can use them out of the box or customize them to meet your coding style and the requirements in your company.
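To give a feel for the direction, a fluent override API along these lines is what I have in mind. This sketch is entirely hypothetical: every name in it (Map, ToTable, WithKey, UsingTemplate, and the Customer entity) is illustrative, not the actual library, which comes in the next posts:

```csharp
// Hypothetical: conventions handle everything by default; the fluent
// API only overrides the pieces that don't match your schema.
var mapping = Map.Entity<Customer>()
    .ToTable("tbl_Customer")            // override the table-name convention
    .WithKey(c => c.CustomerId)         // override the key convention
    .UsingTemplate("CustomSelect.sql"); // author the SQL yourself via a template
```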

Closing
If you have read my posts on the WCF Web API, I am using it as the backbone for my data transport. Without the flexibility and power of the WCF Web API, I would just be presenting another static solution, but instead I am able to provide a rich, generic model that requires very little scaffolding to get access to your data.

In the next post, I will introduce the solution I came up with to solve the issues I find with the current way we write data driven applications, especially in Silverlight.

Introducing the WCF Web API

December 19, 2011

Probably the biggest thing I like about the WCF Web API is that it is REST based. For me, this is very important because I don’t have to create a client proxy to use the service. This mitigates the amount of maintenance that I have to deal with whenever I change the service or make modifications.

Moving over to the WCF Web API has been a natural migration for me. I deal with very large Silverlight applications that typically sit on top of WCF RIA Services. This is very painful because of the amount of ceremony involved in keeping the solutions up-to-date whenever a model changes in the database.

With the WCF Web API, because I don’t have to worry about a client-side proxy, I don’t care about changes to the service. I still must be careful not to change the existing service calls that I make from the client, but this can be controlled. Adding a new column to a table in my database no longer causes the ripple effect throughout my services that it would under WCF RIA Services.

If you haven’t played with the WCF Web API, I strongly encourage you to take a look at it. It has some really awesome features as well as a great test platform to allow you to test your service all in one place.

By exposing my services using the WCF Web API, I can now consume them from any platform and any device and never need worry about a client proxy. This allows me to use the exact same service that I am using for Silverlight in a tablet based device or a mobile phone. This gives me a lot of flexibility by writing my service once and using it everywhere.

Over the next couple of posts, I am going to be going into greater detail concerning building a generic data access service using the WCF Web API.

You can read up on the WCF Web API documentation here.

WCF RIA Services – “The maximum uri length of 2083 was exceeded.”

November 2, 2011

I came across this exception the other day and thought it was weird enough to merit a post. The enterprise system I have designed has quick search and advanced search capabilities. You can see an example of quick search below:

When the user performs a quick search, the results are loaded into the Sidebar as well as into a DataPager for paging the data. Quick search allows a user to enter an ID or a business name. I handle this by creating two FilterDescriptors that operate against the properties I want, as shown below:

SearchFilterDescriptors.Add(new SearchFilterDescriptor("ID", FilterOperator.IsEqualTo, PropertyType.Integer));
SearchFilterDescriptors.Add(new SearchFilterDescriptor("Name", FilterOperator.StartsWith, PropertyType.String));

This works just fine when I know exactly what I want to filter against, but there are scenarios where I don’t know the filter definition ahead of time. This is where the advanced search screen comes into play:

With this screen, a user can select many of the properties that make up an “Account” record and then click the search button. A back-end process finds the matching set of records, and I dynamically create FilterDescriptor objects from it. It was when I used search criteria that brought back a lot of records that I received the following error:

Doing some research, I found this forum post as well as this blog entry discussing what I could do to fix it.

The interesting thing to note is that WCF RIA Services uses GET by default for all Query attributes. This explains why I was getting the exception: I was building a query string longer than the maximum length.

It turns out that all I really needed to do was make a slight modification to my Query on the server as shown below:

[Query(HasSideEffects=true)]
public IQueryable<Organization> ReturnOrganization()
{
    return this.ObjectContext.Organization
        .OrderBy(x => x.OrganizationName);
}

By adding the HasSideEffects=true parameter, WCF RIA Services uses POST instead of the default GET. Now we can have as complex a query as we want. This allowed me to have a dynamic advanced search screen and not worry about how many filters I was creating.

The way I accomplished this dynamically was by creating an instance of a FilterDescriptorCollection:

var filter = new FilterDescriptorCollection();
foreach(var item in e.Value)
{
    var dto = item;
    filter.Add(new FilterDescriptor("ID", FilterOperator.IsEqualTo, dto.Value));
}
_eventAggregator.GetEvent<AdvancedSearchEvent>().Publish(filter);

I use Prism’s EventAggregator to send an event so that the other screen can be notified of the Advanced Search Event. On the main screen, I have this code in place to apply the filter:

public void OnAdvancedSearchEvent(FilterDescriptorCollection filter)
{
    if (filter.Count > 0)
    {
        View.DDS.FilterDescriptors.Clear();
        View.DDS.FilterOperator = FilterDescriptorLogicalOperator.Or;
        foreach(var item in filter)
        {
            var fd = item;
            View.DDS.FilterDescriptors.Add(fd);
        }
        View.DDS.Load();
    }
}

Another approach would be to handle the filter completely on the server and use a boolean parameter to indicate when to use it. Since my filter information was actually coming from the database, I could have accomplished the same user experience and never run into this exception.
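That server-side alternative might look something like the following sketch. The useAdvancedFilter parameter and the SavedSearchResult entity are hypothetical stand-ins for wherever your filter criteria live in the database:

```csharp
[Query(HasSideEffects = true)]
public IQueryable<Organization> ReturnOrganization(bool useAdvancedFilter)
{
    var query = this.ObjectContext.Organization.AsQueryable();
    if (useAdvancedFilter)
    {
        // Hypothetical: read the saved criteria from the database instead
        // of shipping a large FilterDescriptor collection over the wire.
        var ids = this.ObjectContext.SavedSearchResult.Select(s => s.ID);
        query = query.Where(o => ids.Contains(o.ID));
    }
    return query.OrderBy(x => x.OrganizationName);
}
```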

Hope this helps….

Using WCF RIA Services to allow projections that support Lookups – Part II

August 30, 2011

Now that we are fairly comfortable with the idea of using a generic class to act as a property bag, we can now take this implementation a little further and provide some cool advancements.

One area where I have found this to be a useful solution is dealing with my reporting architecture. Here is a screen in my application that uses this advancement. I basically have a dynamic report driver screen. It allows me to evaluate any uploaded report and provide parameters if necessary.

Running the report gives us the following screen:

We use ComponentOne for our reporting because they have a very nice and mature independent report designer that allows our clients to define their reports without the need for Visual Studio. Typically our user base is not IT or technically advanced, and trying to teach them to use Visual Studio just for reports doesn’t make sense. The nice thing about the reports is that, just like Microsoft Reporting Services, the definition of the report is just XML. The end-user creates the report and then uses my application to upload the report definition to the database, where it is stored. This makes for a very beefy record and can really bog down your application when you just want to test your reports. The one catch is that reports can also have parameters. This calls for a data structure that supports one-to-many.

Let’s look quickly at the two tables involved in handling reporting.

As you can see, I have a report table and a report parameter table. If I used the approach from my last post, I wouldn’t be able to accomplish this. We will need to modify the data object to support this type of hierarchy.

public class ResultDTO
{
    [Key]
    public int Value { get; set; }
    public string Name { get; set; }
    public string FriendlyName { get; set; }
    public int ParentValue { get; set; }
};

public class ResultWithCollectionDTO
{
    [Key]
    public int Value { get; set; }
    public string Name { get; set; }
    public string FriendlyName { get; set; }
    public int ScreenId { get; set; }
    [Include]
    [Association("ResultWithCollectionDTO_ResultDTO", "Value", "ParentValue")]
    [Composition]
    public IEnumerable<ResultDTO> Children { get; set; }
};

If we look at the ResultDTO object, only one change has been made. It now has an int property labeled “ParentValue”. Although not required from an object-oriented perspective, this is required from a WCF RIA Services navigation perspective.

Let’s look at the newly introduced class. It has the same properties as the ResultDTO as well as an int property labeled “ScreenId”. This is necessary because the underlying architecture has an identifier for each screen, and depending on what screen you are on, you could have one or more reports associated with it. With this metadata I can automatically show a default report for a screen, or group reports together by ScreenId and use that grouping to hydrate a ComboBox for selecting which report to render.

The last property is what gives us the hierarchy we need to support parent-child relationships. Eager loading the whole object through WCF RIA Services is what I am trying to avoid with this solution: because the report table holds the report XML definition, it is extremely large, and I don’t want to download that information when all I am trying to do is bind to a report generically. This property is just an IEnumerable collection of ResultDTO objects. The following attributes are needed for WCF RIA Services to honor it correctly. The “Include” attribute tells WCF RIA Services to eagerly load this collection of child objects. The “Association” attribute requires a name for the relationship, the parent value, and the associated child value. Finally, the “Composition” attribute indicates that this member represents an association that is part of a compositional hierarchy.

Now let’s turn our attention to how we would write a query to take advantage of this new generic model.

[Query()]
public IQueryable<ResultWithCollectionDTO> ReturnRSReportDTOIncludingParameters()
{
    var resultSet = this.ObjectContext.rs_r_Report
        .Include("rs_rp_ReportParameter")
        .OrderBy(x => x.rs_r_FriendlyName);
    var result = from c in resultSet
                 select new ResultWithCollectionDTO()
                 {
                     Value = c.rs_r_ReportIdent,
                     Name = c.rs_r_ReportName,
                     FriendlyName = c.rs_r_FriendlyName,
                     ScreenId = c.rs_r_ScreenId ?? 0,
                     Children = (from d in c.rs_rp_ReportParameter
                                 select new ResultDTO()
                                 {
                                     ParentValue = c.rs_r_ReportIdent,
                                     Value = d.rs_rp_ReportParameterIdent,
                                     Name = d.rs_rp_ParameterName,
                                     FriendlyName = d.rs_rp_DefaultValue
                                 })
                 };
    return result;
}

As you can see, this query is very similar to the one we used in the previous post, but we need to do some projection shaping so that the data fits into our newly defined models. We first ensure that the data from our ObjectContext eagerly loads the report parameters with the report object. Next, we shape the ResultWithCollectionDTO object and then perform a sub-select for the report parameters. We make sure that we also bring in the report identity as the “ParentValue” property.

You would think that we are done, but I need to do one more thing if the screen I showed you earlier is to work. When we create our models, WCF RIA Services will treat them as read-only unless we provide a little more implementation. This means that if I try to “TwoWay” databind to my parameters, an exception will be thrown. I want to be able to set values for the parameters, but I don’t care about saving them since I am really treating these objects as non-persisted. It is only on the client side that I care about binding and sending these values to my report engine.

In order to get this accomplished, you provide the following in your DomainService:

[Delete]
public void DeleteResultDTO(ResultDTO dto)
{
    // Do nothing....
}
[Insert]
public void InsertResultDTO(ResultDTO dto)
{
    // Do nothing....
}
[Update]
public void UpdateResultDTO(ResultDTO dto)
{
    // Do nothing....
}

[Delete]
public void DeleteResultWithCollectionDTO(ResultWithCollectionDTO dto)
{
    // Do nothing....
}
[Insert]
public void InsertResultWithCollectionDTO(ResultWithCollectionDTO dto)
{
    // Do nothing....
}
[Update]
public void UpdateResultWithCollectionDTO(ResultWithCollectionDTO dto)
{
    // Do nothing....
}

I typically put this code in a partial class so that I won’t lose it when I need to update my DomainService. Now that the Delete, Insert, and Update methods are in place, my databinding on the client side will work just fine.

Again, remember that you will only need to do this once or twice, depending on how you want to implement your generic objects. The nice thing is that you have full access to your complete object, but you also have a “lite” version that you can use for your lookups. Couple this with the built-in data paging and filtering you get for free from WCF RIA Services, and you have a pretty flexible architecture.

Hope this helps…

Using WCF RIA Services to allow projections that support Lookups

August 26, 2011

WCF RIA Services is quite powerful and provides a lot of flexibility. You can pretty much perform all of your persistence operations and server side processing using WCF RIA Services without really needing to create a separate service.

One aspect that seems to be a little more difficult is providing a light version of a table for lookups. By lookups I simply mean the bare amount of data necessary to hydrate a ComboBox or a ListBox. Typically you don’t need to bring back your whole table or object just to provide a selector. In some cases, your tables may even have other data that is really meaningless to the user and exists just for tracking purposes, such as metadata describing who made the last modification and when.

Now that we know what the problem statement is, how can we go about solving it in a manner generic enough to handle almost any ComboBox or ListBox?

Introducing the ResultDTO object.

public class ResultDTO
{
    [Key]
    public int Value { get; set; }
    public string Name { get; set; }
    public string FriendlyName { get; set; }
};

With this simple class, I can now do projections that reshape larger objects into this smaller object and bring back only the information that I need. Typically I find that I have several tables that are very wide, and I only need key information when displaying them in a ComboBox. I don’t want to bring down all the columns per record, since that would be inefficient, but I still want to be able to databind using the key from the underlying table and some label in my ComboBox or ListBox.

Let’s review the class presented above. I first need to define a key so that WCF RIA Services can function properly. The Value property is used as the unique key of the underlying table that I am representing. Next, I have two string properties: Name and FriendlyName. You don’t need both, but I sometimes like the option to bind to one or the other depending on how the text is formatted in the database.

Okay, now let’s look at how you would expose a query via WCF RIA Services that reshapes the table into our new ResultDTO object:

public IQueryable<ResultDTO> ReturnReportDTO()
{
    var result = from c in this.ObjectContext.rs_r_Report
          select new ResultDTO()
          {
              Value = c.rs_r_ReportIdent,
              Name = c.rs_r_ReportName,
              FriendlyName = c.rs_r_FriendlyName
          };
    return result;
}

As you can see, I am simply doing a projection that is creating a new ResultDTO object from the underlying rs_r_Report object. This pattern allows us to basically project any object into our ResultDTO object and only bring back the items that we need.
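Because the projection target is generic, the same query shape works for any other lookup table. For example, a hypothetical status table (the entity and column names below are illustrative, following the same naming style) projects just as easily:

```csharp
public IQueryable<ResultDTO> ReturnStatusDTO()
{
    // Same pattern, different source table: only the three lookup
    // columns come back over the wire.
    return from s in this.ObjectContext.st_s_Status
           select new ResultDTO()
           {
               Value = s.st_s_StatusIdent,
               Name = s.st_s_StatusName,
               FriendlyName = s.st_s_FriendlyName
           };
}
```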

By using this pattern you can now have a full and lite version of your objects so that you can support full data editing and also read-only lookups.

In the next post I will describe how we can take this a step further and create a custom object that has a nested collection and still get WCF RIA Services to allow us to work with it.

Hope this helps!

Getting something better than “Server not found.” from WCF in Silverlight redux

June 6, 2011

I have had a lot of questions lately about my blog post concerning “Server not found.” messages coming from WCF to Silverlight. I decided to create a quick post that provides a sample application with everything wired up and some test buttons.

Here is a screen shot of the application:

Here is a screen shot with an error thrown from the server:

The application just has some buttons at the bottom of the screen that make calls to the service operations. Here is what the code looks like for the WCF service:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using System.ServiceModel.Activation;
using Web.Core;

namespace ServerExceptions.Web
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "BusinessService" in code, svc and config file together.
    [WcfErrorBehavior()]
    [WcfSilverlightFaultBehavior()]
    public class BusinessService : IBusinessService
    {
        public void RaiseException()
        {
            throw new Exception("Can't log in to the database.");
        }

        public string RaiseInvalidOperation()
        {
            throw new InvalidOperationException("You can't call this method!");
        }

        public void RaiseIndexOutOfRange()
        {
            throw new IndexOutOfRangeException();
        }

        public string GetCustomerName()
        {
            return "Matt Duffield";
        }

    };
}

This is a contrived example, but the most important things to note are the attributes above the service definition.

  • WcfErrorBehavior
  • WcfSilverlightFaultBehavior

It is these attributes that give us the ability to capture the error and send it back down the wire to the client. If you comment out the attributes and try any example that throws an error, you will get our favorite message:

Here is what the calling code looks like after you have created a client side reference to the WCF service:

#region Button Click events

private void RaiseException_Click(object sender, RoutedEventArgs e)
{
    BusinessServiceClient client = new BusinessServiceClient();
    client.RaiseExceptionCompleted += (s, ea) =>
    {
        if (ea.Error != null)
        {
            Message = ea.Error.Message;
        }
        else
        {
            Message = "No exceptions.";
        }
    };
    client.RaiseExceptionAsync();
}

private void RaiseInvalidOperation_Click(object sender, RoutedEventArgs e)
{
    BusinessServiceClient client = new BusinessServiceClient();
    client.RaiseInvalidOperationCompleted += (s, ea) =>
    {
        if (ea.Error != null)
        {
            Message = ea.Error.Message;
        }
        else
        {
            Message = "No exceptions.";
        }
    };
    client.RaiseInvalidOperationAsync();
}

private void RaiseIndexOutOfRange_Click(object sender, RoutedEventArgs e)
{
    BusinessServiceClient client = new BusinessServiceClient();
    client.RaiseIndexOutOfRangeCompleted += (s, ea) =>
    {
        if (ea.Error != null)
        {
            Message = ea.Error.Message;
        }
        else
        {
            Message = "No exceptions.";
        }
    };
    client.RaiseIndexOutOfRangeAsync();
}

private void GetCustomerName_Click(object sender, RoutedEventArgs e)
{
    BusinessServiceClient client = new BusinessServiceClient();
    client.GetCustomerNameCompleted += (s, ea) =>
    {
        if (ea.Error != null)
        {
            Message = ea.Error.Message;
        }
        else
        {
            Message = ea.Result;
        }
    };
    client.GetCustomerNameAsync();
}

#endregion

The code is pretty simple, and I am not going into the classes behind the attributes. Please refer to this post if you are interested in them. One thing I will point out is that the code that gives us the ability to send these errors across the wire is in the Core folder. I did this so that you could pull it out and put it in its own assembly. That way you can use it in all of your projects without needing to write this code over and over.
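The four click handlers above all repeat the same completed-handler shape. As a small sketch of how that duplication could be factored out (the helper class and method name here are my own, not part of the sample download):

```csharp
using System;
using System.ComponentModel;

public static class ServiceCallbacks
{
    // Hypothetical helper: turns a completed-event into either the error
    // message (when the call faulted) or a result produced by the caller.
    public static string MessageFor(AsyncCompletedEventArgs ea, Func<string> resultSelector)
    {
        return ea.Error != null ? ea.Error.Message : resultSelector();
    }
}
```

Each handler body would then reduce to something like `Message = ServiceCallbacks.MessageFor(ea, () => "No exceptions.");`.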

On the server, you can have a lot of complex logic going on and use as many third-party libraries as you want. I simply wrap my service methods in a try/catch block and return the friendly message I want displayed in the user interface.
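As a minimal sketch of that wrapping pattern (the service class, method names, and messages here are hypothetical, not from the sample):

```csharp
using System;

public class CustomerService
{
    // Wrap the real work in try/catch so only a friendly message crosses the wire.
    public string GetCustomerName()
    {
        try
        {
            return LoadNameFromDatabase();
        }
        catch (Exception ex)
        {
            // Log the raw exception on the server for diagnostics...
            Console.Error.WriteLine(ex);
            // ...and rethrow with a message that is safe to show in the UI.
            throw new Exception("We could not load the customer right now. Please try again later.");
        }
    }

    private string LoadNameFromDatabase()
    {
        // Simulated server-side failure.
        throw new InvalidOperationException("Login failed for user 'app_user'.");
    }
}
```

With the error-behavior attributes in place, the friendly message is what arrives in `ea.Error.Message` on the client.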

You can download the sample application here.

Hope this helps….

Is your DomainService slow on the first call but not afterwards?

April 7, 2011 1 comment

For the past three years, I have been working on a line of business application targeted at managing and running cities and towns. As you can imagine, this has a very large data model, and it continues to grow. One of the challenges of using Silverlight and Entity Framework over WCF RIA Services is that, as the model grows, the application seems to get slower and slower on the initial data request of a Domain Context. There are some very cool tricks you can use to pre-compile your views so that your initial data request is no slower than any other request.

I am going to show you a couple of approaches that are available to you right now and review the pros and cons of each.

The first approach comes from an MSDN article, How to: Pre-Generate Views to Improve Query Performance, which does a great job of walking you through the steps required to get this accomplished. I am not going to go into those steps here, because I believe this approach involves too much ceremony and configuration to be the best choice. Here are some of my thoughts as to why I don’t like it:

  • I don’t like the requirement of having a pre-build event. As you add more .edmx files, this entry becomes very cumbersome.
  • I don’t like that I need to change the .edmx model’s Metadata Artifact Processing setting from its default to Copy to Output Directory. As more and more .edmx files are added, it is very easy to forget this step.
  • Because your model is copied to the output directory, you need to bring in the .csdl, .ssdl, and .msl files from the output directory and add each of them as an Embedded Resource. Once you do this, you will also need to change your connection string as indicated in the article.

Even though this isn’t my favorite approach, it works and does a great job of pre-compiling your views.

If you have already read the first article, you will notice that it points to another option, How to use T4 template for View Generation. This approach uses T4 templates to produce the pre-compiled views, and the templates are available for download from the link provided. This method has none of the negative points that I mentioned for the first approach. You do have to be sure that when you bring in the T4 template, you name it after your model (e.g. MyModel.Views.tt). You must also be sure that the generated file is included as part of your project. I like this approach much more than the previous one, since it doesn’t involve as much ceremony and configuration to get working. NOTE: If you have updated your Entity Framework model and DomainService, remember to right-click your .Views.tt file and choose “Run Custom Tool” to make sure that your pre-compiled views are up to date.

You are now armed with enough information to go pre-compile your views and make your application run quickly again. I spent quite a bit of time researching and looking for good blog posts on this, and I want to list a couple of them here so that you can understand when and where you will actually see dividends from your efforts, based on the size of your model.

The following links give the name of each post and a description of what it is about:

  • Isolating Performance with Precompiled/Pre-generated Views in the Entity Framework 4 - This post goes into great detail analyzing sample data and providing benchmarks on performance.
  • Solving the “No logical space left to create more user strings” Error and Improving performance of Pre-generated Views in Visual Studio .NET 4 Entity Framework - Discusses potential issues with the pre-compilation process and shows a way to improve performance, along with providing a tool (MakeRes.exe) to get this accomplished.
  • Connection Strings - MSDN article explaining Entity Framework connection strings. I used this when I was working on the first implementation. I also dynamically create my connection strings instead of using the web.config file, so this was useful.
  • EDM Generator (EdmGen.exe) - MSDN article explaining the use of the EdmGen.exe tool.
  • How to: Use EdmGen.exe to Generate the Model and Mapping Files - MSDN article walking you through an actual generation of the .csdl, .ssdl, and .msl files.

I recommend that if you are planning to pre-compile your views, you also consider using the MakeRes.exe tool with the dictionary-based solution instead of the default views that are generated for you.

That is all there is to it. You will be happy to know that you will not have to make any changes to your client code, as all of this happens on the server.

I hope this helps out and thanks for reading…

Getting something better than “Server not found.” from WCF in Silverlight

April 6, 2011 3 comments

When developing applications in Silverlight, it is inevitable that you will need to make a request back to the server for data or processing. The common practice is to use Windows Communication Foundation (WCF). I find that WCF usually handles all of my needs, and it works well when everything is happy and just right. However, if anything is not in its correct place, we can have tons of problems. Worse yet, we get our all-time favorite message back in Silverlight stating:

“Server not found.”

Basically, WCF is telling us to go screw ourselves, but in a much more politically correct way. Well, there are several ways to attack this problem, but I want to show you what I have come to love and use for all my projects involving WCF and Silverlight.

To give us some background, please review the following MSDN reference concerning Creating and Handling Faults in Silverlight. This article explains why we get our beloved message from WCF and what we can do about it. I think this is a must-read for anybody wanting to understand what is going on and how they can go about fixing it themselves. I don’t think this is the best solution, but I do think it is a great reference. The reason for looking for something else is that I would like a solution that is a little easier to use and requires less configuration. I tend to follow this pattern because I have clients that want elegant solutions with as little disruption to their development process as possible.

Let’s move on to the next blog post that I find very instrumental in dealing with exceptions in WCF. Oleg Sych wrote an excellent article, Simplifying WCF: Using Exceptions as Faults, back in 2008 that I believe is still very pertinent today. His solution is very similar to the MSDN article that we already looked at, but I believe it is more comprehensive and provides a good code base to use if you want to take his approach. I like what I read here, but I still wanted something a little less intrusive in terms of ceremony and configuration.

This leads us to our last blog post. Here we find Jeroen Bernsen’s entry WCF ExceptionHandling in Silverlight. Like the other two articles, this post tries to solve the problem of dealing with exceptions thrown in WCF and how to get them back to the client in a friendly and easy way. If you read his post, you will see that you only need to create a few objects to get the solution working, and there is no need to modify your web.config as in the other solutions. This is why I like this solution the best.

I am going to provide the code below but you can also just follow along Jeroen’s post if you like.

The following class tells WCF to send fault messages with a 200 instead of a 500 response code. This change enables Silverlight to read the body of the message.

public class WcfSilverlightFaultBehavior : IDispatchMessageInspector
{
	public void BeforeSendReply(ref Message reply, object correlationState)
	{
		if (reply.IsFault)
		{
			HttpResponseMessageProperty property = new HttpResponseMessageProperty();

			// Here the response code is changed to 200.
			property.StatusCode = System.Net.HttpStatusCode.OK;

			reply.Properties[HttpResponseMessageProperty.Name] = property;
		}
	}

	public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
	{
		// Do nothing to the incoming message.
		return null;
	}
};

Next we have a sealed class that basically creates our attribute for using the previous class.

public sealed class WcfSilverlightFaultBehaviorAttribute : WcfBehaviorAttributeBase
{
	public WcfSilverlightFaultBehaviorAttribute()
		: base(typeof(WcfSilverlightFaultBehavior))
	{
	}
};

As you can see this is our attribute definition. We are deriving from a base class which we will review shortly.

The next class implements the IErrorHandler and allows us to handle WCF exceptions and package them in a way that we can still read on the client side.

public class WcfErrorBehavior : IErrorHandler
{

	void IErrorHandler.ProvideFault(Exception error, MessageVersion version, ref Message fault)
	{
		try
		{
			// Add code here to build faultreason for client based on exception
			FaultReason faultReason = new FaultReason(error.Message);
			ExceptionDetail exceptionDetail = new ExceptionDetail(error);

			// For security reasons you can also decide to not give the ExceptionDetail back
			// to the client or change the message, etc
			FaultException faultException =
				new FaultException(exceptionDetail, faultReason,
					FaultCode.CreateSenderFaultCode(new FaultCode("0")));

			MessageFault messageFault = faultException.CreateMessageFault();
			fault = Message.CreateMessage(version, messageFault, faultException.Action);
		}
		catch
		{
			// Todo log error
		}
	}

	/// <summary>
	/// Handle all WCF exceptions.
	/// </summary>
	bool IErrorHandler.HandleError(Exception ex)
	{
		try
		{
			// Add logging of exception here!
			Debug.WriteLine(ex.ToString());
		}
		catch
		{
			// Todo log error
		}

		// return true means we handled the error.
		return true;
	}

};

IErrorHandler has two methods that we must implement: ProvideFault and HandleError. You can read more on this interface here.

HandleError is a boolean method whose return value indicates whether we have handled the error (true) or whether WCF should perform its normal error processing (false).

ProvideFault is a void method that allows us to package the fault exactly how we want to. This is very nice and gives us all the flexibility we need to customize or change how we wish to package our faults.
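The comment in ProvideFault about withholding ExceptionDetail for security reasons can be made concrete with a small policy method. This is just a sketch (the class name and the includeDetails flag are my own, not part of Jeroen’s code):

```csharp
using System;

public static class FaultSanitizer
{
    // Decide which message is allowed to leave the server: full detail while
    // developing, a generic message in production.
    public static string ReasonFor(Exception error, bool includeDetails)
    {
        return includeDetails
            ? error.Message
            : "An internal error occurred. Please contact support.";
    }
}
```

ProvideFault could then build its FaultReason from `ReasonFor(error, includeDetails)` instead of `error.Message` directly.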

Next we have a sealed class that basically creates our attribute for using the previous class.

public sealed class WcfErrorBehaviorAttribute : WcfBehaviorAttributeBase
{
	public WcfErrorBehaviorAttribute()
		: base(typeof(WcfErrorBehavior))
	{
	}
};

As you can see this is our attribute definition. We are deriving from a base class which we will now review.

public abstract class WcfBehaviorAttributeBase : Attribute, IServiceBehavior
{
	private Type _behaviorType;

	/// <summary>
	/// Constructor.
	/// </summary>
	/// <param name="typeBehavior">An IDispatchMessageInspector, IErrorHandler, or IParameterInspector type.</param>
	public WcfBehaviorAttributeBase(Type typeBehavior)
	{
		_behaviorType = typeBehavior;
	}

	void IServiceBehavior.AddBindingParameters(ServiceDescription serviceDescription,
		System.ServiceModel.ServiceHostBase serviceHostBase,
		System.Collections.ObjectModel.Collection<System.ServiceModel.Description.ServiceEndpoint> endpoints,
		System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
	{
	}

	void IServiceBehavior.ApplyDispatchBehavior(ServiceDescription serviceDescription,
		System.ServiceModel.ServiceHostBase serviceHostBase)
	{
		object behavior;
		try
		{
			behavior = Activator.CreateInstance(_behaviorType);
		}
		catch (MissingMethodException e)
		{
			throw new ArgumentException("The Type specified in the BehaviorAttribute " +
				"constructor must have a public empty constructor.", e);
		}
		catch (InvalidCastException e)
		{
			throw new ArgumentException("The Type specified in the BehaviorAttribute " +
				"constructor must implement IDispatchMessageInspector, IParameterInspector, or IErrorHandler.", e);
		}

		foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
		{
			if (behavior is IParameterInspector)
			{
				foreach (EndpointDispatcher epDisp in channelDispatcher.Endpoints)
				{
					foreach (DispatchOperation op in epDisp.DispatchRuntime.Operations)
						op.ParameterInspectors.Add((IParameterInspector)behavior);
				}
			}
			else if (behavior is IErrorHandler)
			{
				channelDispatcher.ErrorHandlers.Add((IErrorHandler)behavior);
			}
			else if (behavior is IDispatchMessageInspector)
			{
				foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
				{
					endpointDispatcher.DispatchRuntime.MessageInspectors.Add((IDispatchMessageInspector)behavior);
				}
			}
		}

	}

	void IServiceBehavior.Validate(ServiceDescription serviceDescription,
		System.ServiceModel.ServiceHostBase serviceHostBase)
	{
	}

};

As you can see in the code, this class derives from Attribute so that we can use it as an attribute, and it also implements the IServiceBehavior interface. By implementing IServiceBehavior, we no longer need any ceremony in the web.config file. We have a single Type field that keeps the derived classes flexible: as long as they pass in the type of behavior, the base class handles the rest. IServiceBehavior has three methods: AddBindingParameters, ApplyDispatchBehavior, and Validate. You can read more on this interface here.

AddBindingParameters is used to pass custom information for the service to a binding element so that it can support the service correctly.

ApplyDispatchBehavior is used to change run-time property values or insert custom extension objects.

Validate is used to examine the description and validate that the service can execute properly.

The method that we are interested in is the ApplyDispatchBehavior. First we try to create an instance of the underlying type that was passed in the constructor. Next we iterate over all of the ChannelDispatchers. Based on what type of behavior object we have, we determine what needs to happen in the current iteration of the ChannelDispatcher.

All of these classes will live on the server side and will need to be accessible from your WCF service.

Here is what you need to do to mark up your service:

[WcfErrorBehavior()]
[WcfSilverlightFaultBehavior()]
[AspNetCompatibilityRequirements(RequirementsMode=AspNetCompatibilityRequirementsMode.Allowed)]
public class TestService : ITestService
{
	public int DoSomething()
	{
		...
	}
}

Congratulations, you are done! You can now have all of those nasty exceptions from WCF finally show up in your Silverlight client. Here is a sample screenshot of an exception thrown due to a bad username and password for SQL Server before we apply the attributes:

Now with our attributes applied:

This makes all the difference! Hope this helps, and again I want to stress that the credit for this post goes to the articles mentioned previously.

Getting empty string default constraints to work in WCF RIA Services through a Fluent API

January 16, 2011 Leave a comment

This post is based on an excellent post by Nikhil Kothari. You should read his blog post before you continue, as all I am going to do here is show how I was able to port his code to .NET 4.0 and get the Fluent API working.

First of all, let’s take a look at my file structure:

Entity Framework/RIA Services file structure

One of the issues that I have with WCF RIA Services is the whole workflow that is involved when anything changes in the data model:

  1. Exclude my partial Domain Service files.

    Fluent RIA Services - Exclude partial files

  2. Delete my Domain Service files.
  3. Open my Entity Framework (.edmx) file in the visual designer.
  4. Delete any of the model objects that have changed. (I have found that I need to delete them, since Visual Studio doesn’t really serialize the changes from the database correctly.)
  5. From the context menu, click on Update Model from Database…

    Fluent RIA Services - Update Model from Database

  6. Select the model objects that have changed.
  7. Build the project.
  8. Add a Domain Service.
  9. Add the partial keyword “partial” to the DomainService class.

    Fluent RIA Services - partial keyword to DomainService class

  10. Include your custom partial files in the project.

    Fluent RIA Services - Include partial files

  11. Add all of your [Include()] statements back to the .metadata file. (I have another blog post that gets this to work.)
  12. Build your project.

Clearly this is a very labor-intensive workflow, and it is very easy to make a mistake that leaves your application no longer working as before. The other problem I have with WCF RIA Services is that it doesn’t support or enforce default values from the database or check constraints on the client.

The use case that I am trying to demonstrate is a column in a table that is set as non-null but has a default value of empty string. I have a client that uses this as a standard in their database model, and it becomes painful for us to get this to work. As you can see from the above workflow, the DomainService is deleted and recreated, so I have moved a lot of my custom .metadata logic to my partial file. Let’s take a look at what is required to allow an empty string for a Required attribute, using a slight port of Nikhil Kothari’s implementation:

public MemberValidationMetadata Required()
{
	return Required(false);
}

public MemberValidationMetadata Required(bool allowEmptyStrings)
{
	AddMetadata(new RequiredAttribute() { AllowEmptyStrings = allowEmptyStrings });
	return this;
}

This code snippet comes from the MemberValidationMetadata.cs file. It is basically an overload of the default Required() method that provides support for passing in a boolean to determine whether AllowEmptyStrings is true or false.

public class rv_sbt_SiteBillTypeMetadata : MetadataClass
{
	public rv_sbt_SiteBillTypeMetadata()
	{
		this.Validation(p => p.rv_sbt_BillTypeDefinition).Required(true);
		this.Validation(p => p.rv_sbt_BillTypeDescription).Required(true);
		this.Validation(p => p.rv_sbt_BillTypeLongDescription).Required(true);
		this.Validation(p => p.rv_sbt_BillTypeName).Required(true);
		this.Validation(p => p.rv_sbt_CreatedBy).Required(true);
		this.Validation(p => p.rv_sbt_CustomerValue).Required(true);
		this.Validation(p => p.rv_sbt_ModifiedBy).Required(true);

		this.Projection(p => p.rs_sdf_SiteDocumentFormat).Include();
		this.Projection(p => p.rv_al_AccountingLink).Include();
		this.Projection(p => p.rv_obt_OrganizationBillType).Include();
		this.Projection(p => p.rv_sbtd_SiteBillTypeDetail).Include();
		this.Projection(p => p.rv_sis_SiteInstalledSubsystem).Include();
		this.Projection(p => p.rv_urbt_UserReportBillType).Include();
		this.Projection(p => p.rv_xrf_ReturnFrequency).Include();
	}
};

As you can see, I am using the Required overload, and it now allows me to support the empty string default constraint from the database. It is also possible to do this in the .metadata.cs file that is created when you create a new DomainService, but I find that I prefer the Fluent API. The cool thing about this approach is that you can have both the Fluent API and what you get from the DomainService .metadata.cs file. The data model is rather large, so I only add the table objects to the partial file (.metadata.partial.cs) and comment out the same definition in the .metadata.cs file, shown below:

Fluent RIA Services - Comment out existing definition

Data access is still a very painful workflow for Silverlight development. I try to drive as much as possible from database metadata, and this really is a good candidate for doing some code generation with T4. I do wish we had the capability to have check constraints executed on the client without needing to write custom code, but so far this doesn’t seem to work very well. I would also like to see an alternative to being forced to use the Entity Framework designer as the only way to update and modify your model.

Perhaps the ability to code closer to what WebMatrix offers with Razor. That would be an awesome combination for us Silverlight developers.

Once again, thanks for reading….