I’ve recently started playing around with ASP.NET MVC. I was quite impressed by Scott Hanselman’s presentation at Tech Ed 2008 Australia. Some nuggets I gleaned from it were the clean separation of responsibilities between layers, the intrinsic testability of the business logic, the mocking frameworks that can support that testing, and the dependency injection you can use.

After going through the initial installation and educating myself on how ASP.NET MVC works, I started building my first application with some superb guidance from Rob Conery’s MVC Storefront tutorials. The series is simply excellent and I’ve greatly enjoyed the videos I’ve watched so far; they have added a lot to my working knowledge of implementing ASP.NET MVC applications.

One of the main things that leaps right out at you once you finally get to building your views (I was able to build almost my entire domain model before touching the UI) is that while it is possible to use WebForms user controls, the preferred way is to write the HTML yourself with the aid of HTML helper extensions. This is where my weakness in writing HTML and JavaScript became very evident. While it took me quite a while to come to grips with this way of developing, I came to realise that the basics of building web applications never change: you will always need a good knowledge of HTML and JavaScript.
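
To give a flavour of what these helper extensions look like, here is a minimal sketch of a custom one. The Label helper and its signature are my own illustration, not part of the framework:

    using System.Web;
    using System.Web.Mvc;

    public static class LabelExtensions
    {
        // Renders a <label> element tied to a form field by its id.
        public static string Label(this HtmlHelper helper, string target, string text)
        {
            return string.Format("<label for=\"{0}\">{1}</label>",
                HttpUtility.HtmlEncode(target), HttpUtility.HtmlEncode(text));
        }
    }

    // Used in a view as: <%= Html.Label("customerName", "Customer name") %>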

Having said that, I’m totally consumed with discovering what ASP.NET MVC has to offer; I’ve only seen the tip of the iceberg. I’m excited to start using dependency injection and expanding my knowledge of mocking (TypeMock is my mocking framework of choice), and there’s also a lot to learn about JavaScript, JSON and AJAX.

Many thanks to ScottGu, Rob Conery, Phil Haack, Scott Hanselman and any others who have posted material to further people’s knowledge on using ASP.NET MVC.


The ADO.NET Entity Framework ships with the new .NET 3.5 service pack, and it is one of my most eagerly awaited releases. I’ve been following it since the beta and I’m very eager to give it a try. The Visual Studio 2008 service pack, released in conjunction, provides the ADO.NET Entity Designer. As someone who has been looking into LINQ for a fair amount of time, I wonder whether this new O/RM will change the face of application development at the enterprise level, and whether Microsoft can convince the old-timers, indoctrinated in the old-school data-centric approach where dynamic queries were actively discouraged, to convert to the new paradigm of application development.

Languages such as Java have had mature frameworks in place for years that facilitate development with O/RM tools; Hibernate and Spring come to mind. Microsoft, on the other hand, has been lagging behind, continuing with its data adapters, data readers and DataSets. Developers who have experienced application development with O/RM tools are loath to go back to the days of typing “ds dot”.

Whether the ADO.NET Entity Framework addresses the issues with LINQ to SQL remains to be seen; they did the right thing by building on top of the LINQ query language, though. The days of manipulating collections iteratively are over thanks to LINQ, and while LINQ alone did not transform the .NET programming paradigm, I believe it was a step in the right direction. I only hope that .NET programmers are receptive to the Entity Framework instead of staying stuck in their old data-centric ways.
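
A small illustration of the shift LINQ brings: declarative queries instead of hand-rolled loops. The types and data here are made up for the example:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LinqExample
    {
        static void Main()
        {
            var prices = new List<decimal> { 12.5m, 99.0m, 7.25m, 45.0m };

            // The old iterative way: filter and collect by hand.
            var expensiveOld = new List<decimal>();
            foreach (decimal p in prices)
            {
                if (p > 10m)
                    expensiveOld.Add(p);
            }

            // With LINQ: the same intent as one declarative expression.
            var expensive = prices.Where(p => p > 10m).OrderBy(p => p).ToList();

            Console.WriteLine(expensive.Count); // prints 3
        }
    }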


I’ve had the opportunity to explore different approaches to building an n-tier application with LINQ to SQL over the past few months, working out what each approach entails. I’ve tried propagating LINQ entities from the DAL to the UI and back, using the Repository pattern to handle entities, and my current favourite approach: using DTOs.

Propagating the LINQ entities up to the UI layer and back posed several problems:

  • “Fat” objects going to the UI and impacting performance and scalability
  • Handling associations (foreign key references) was difficult, to say the least. I found the process of detaching and reattaching LINQ entities fairly painful, and my final solution was far from satisfactory in terms of elegance and maintainability (see the sketch below).
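
As a rough illustration of that reattach dance, here is a minimal sketch; MyDataContext and Customer are hypothetical types standing in for whatever your DBML generates:

    // Reattaching a detached LINQ to SQL entity to a fresh DataContext.
    public void UpdateCustomer(Customer detachedCustomer)
    {
        using (var db = new MyDataContext())
        {
            // Attach(entity, true) marks the entity as modified, but it only
            // succeeds when the table has a version/timestamp column or has
            // update checks disabled - one source of the pain described above.
            db.Customers.Attach(detachedCustomer, true);
            db.SubmitChanges();
        }
    }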

The use of the Repository pattern likewise had a few problems:

  • The data context is scoped at the level of an HttpRequest, prohibiting multi-screen transactions (see the sketch after this list)
  • Bloated object graph – you naturally bring the actual LINQ entities back to the UI, potentially dragging a large number of referenced objects along with them.
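
For context, here is a minimal sketch of the request-scoped data context pattern I mean, again assuming a hypothetical MyDataContext; a common implementation caches the context in HttpContext.Items so each request gets its own instance:

    using System.Web;

    public static class DataContextFactory
    {
        private const string Key = "__requestDataContext";

        // One DataContext per HttpRequest, created lazily on first use and
        // reused for the rest of the request (it should be disposed of in
        // EndRequest). Because the context dies with the request, a unit of
        // work cannot span multiple screens.
        public static MyDataContext Current
        {
            get
            {
                var db = (MyDataContext)HttpContext.Current.Items[Key];
                if (db == null)
                {
                    db = new MyDataContext();
                    HttpContext.Current.Items[Key] = db;
                }
                return db;
            }
        }
    }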

Using DTOs:

  • Required the construction of lightweight DTOs (yes, hand-coded DTOs). These have the advantage of being easily serializable so they can travel over a web service.
  • Required assemblers to handle hydrating the DTOs and injecting the data back into the database
  • Many DTOs meant many assemblers

While everyone has their individual preferences, mine is the DTO approach. After coding my fifth assembler, though, I decided that enough was enough: using reflection, I created a generic assembler to hydrate DTOs and extract data from them. I was initially concerned about performance, but the reflection turned out to carry only a minimal hit. What I’ve found with DTOs is that while creating these objects might seem like extra overhead, building one takes no more than a few minutes. DTOs let me limit the scope of the data I bring to the UI, which means I bring only what I need and avoid shipping bloated objects around.
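
As a rough illustration, here is a minimal sketch of such a reflection-based assembler. The GenericAssembler name and Map method are mine, not the original code; it simply copies identically named properties between objects:

    using System.Reflection;

    public static class GenericAssembler
    {
        // Copies every identically named, assignable property from the
        // source object onto a new instance of TTarget.
        public static TTarget Map<TTarget>(object source) where TTarget : new()
        {
            var target = new TTarget();
            foreach (PropertyInfo targetProp in typeof(TTarget).GetProperties())
            {
                if (!targetProp.CanWrite)
                    continue;

                PropertyInfo sourceProp = source.GetType().GetProperty(targetProp.Name);
                if (sourceProp != null && sourceProp.CanRead &&
                    targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                {
                    targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                }
            }
            return target;
        }
    }

    // Usage - hydrate a DTO from a LINQ entity, or extract it back
    // (CustomerDto and Customer are hypothetical types):
    //   CustomerDto dto = GenericAssembler.Map<CustomerDto>(linqCustomer);
    //   Customer entity = GenericAssembler.Map<Customer>(dto);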

With the official release of the ADO.NET Entity Framework today in .NET 3.5 SP1, I’m eager to see how the approach to building n-tier applications with O/RMs may change.


I’ve long been told that this book is one of the must-reads for developers. I picked it up a few months ago, and it took me a while to get through it. Having finished about three-quarters of it, I’ve decided it deserves a proper review.

The first thing that drew me to the book was that it was written by Martin Fowler. What struck me in his preface was the emphasis on a set process for refactoring. I consider myself a fairly competent developer, and refactoring was something I did fairly often. Or so I thought. The way I was approaching it was neither systematic nor repeatable.

As I read further into the book, I worked through the examples of how refactoring works, but one basic thing stuck with me: when you refactor, you do it in a systematic and repeatable way. The main emphasis is that you NEVER add new functionality while refactoring existing functionality; you only add new functionality once you have finished refactoring your existing code base.

The impression I’ve got from the book so far is that it is extremely pragmatic and logical. The steps Fowler walks you through when refactoring are basic and easy to follow, and as you progress through the chapters he builds on the concepts he has already laid down, so the learning curve is smooth and gradual. While I’m not finished yet, I’ve found it an easy and quite enjoyable read, and the examples he presents are practical and easy to adapt to real-life situations. Though I’m far from one of the movers and shakers in the IT industry, my two cents is that this book is a definite must-read. The next book I need to finish – after getting sidetracked – is Eric Evans’s Domain-Driven Design. So far I’ve found it slightly harder to follow, as it’s a fairly heavy read.


It’s been a while since I last updated my blog. The reason for that is mainly work commitments, plus the time needed to explore different approaches to incorporating LINQ in an n-tier application.

I’ve tried several different approaches in an ASP.NET environment. The stateless nature of the application forced me to come up with different strategies for handling the data context (or unit of work). I tried instantiating LINQ custom entities, detaching them from the data context and passing them to the UI. That works for perhaps more than half the scenarios, but it does not work well when you need to reference a related entity: because the entity is detached, it has no data context through which to load related entities, and this shortcoming really showed in multi-screen transactions where data comes from several entities.

I mentioned this to some former colleagues of mine (you know who you are) and they described a different way of handling LINQ entities. One worked with NHibernate and the other with LINQ, and the strategies they came up with were remarkably similar: create lightweight data transfer objects, or DTOs, which are hydrated from LINQ entities. These objects contain just enough information to populate the controls on a page, and are later used to encapsulate the data that should be sent back to the database.

At first, this approach didn’t sit well with me. I thought it was not much better than getting a DataSet back from the database, hydrating some custom objects and passing them on. I was quite wrong, though. The strategy worked beautifully from the perspective of using LINQ to interact with the database. The DTOs were simple to write, and where they really excelled was in “flattening” data from related entities – they shone at carrying data that spans several entities!
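
To illustrate the flattening, here is a minimal sketch, assuming hypothetical Order and Customer LINQ entities, a hypothetical MyDataContext, and a hand-written OrderSummaryDto:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A DTO that flattens an order and its related customer into one object.
    public class OrderSummaryDto
    {
        public int OrderId { get; set; }
        public DateTime OrderDate { get; set; }
        public string CustomerName { get; set; } // from the related entity
    }

    public class OrderReader
    {
        // Hydrate the DTOs inside the DAL, while the DataContext is still
        // alive, so the Customer association can be traversed.
        public List<OrderSummaryDto> GetOrderSummaries()
        {
            using (var db = new MyDataContext())
            {
                return (from o in db.Orders
                        select new OrderSummaryDto
                        {
                            OrderId = o.OrderId,
                            OrderDate = o.OrderDate,
                            CustomerName = o.Customer.Name // flattened
                        }).ToList();
            }
        }
    }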

Yes, there is a degree of duplication with this method: you have to write code for property assignment and extraction. But the beauty of it is that the code is very simple and hardly takes any time to write. The more I used this method, the more I grew to like it. It provides simplicity and a fairly clean way of incorporating LINQ. I will post more on this architecture once I get a bit more time.

I’d like to thank Paul and Malcolm for their feedback. It was inspirational and extremely helpful.


As I’ve progressed with the development of a website, I’ve increasingly come across problems adapting LINQ to a Domain Driven Design. Here is what I have so far:

Data Context

Everything revolves around the data context. It is literally the glue that keeps things together: it encapsulates the Unit of Work and its associated object graph. Persisting and exposing the data context then becomes a real issue. What scope should it have, and what is the preferred method of persistence? Here are some of the requirements:

  • Needs to persist across different pages
  • Should not be persisted without clear purpose
  • Should not be fully exposed

The persistence across different pages is fairly clear. One has to be able to carry out a transaction across multiple pages.

The data context also should not be persisted without clear purpose, as it is designed to be used and disposed of as needed. Making it a static variable only introduces more headaches, since the data context is not thread-safe. With a global data context your object graph will soon bloat; it is inevitable once you start accessing lazy-loaded members.

The data context also should not be exposed to the UI layer, for example. Direct access to the data context would render the multi-tiered approach meaningless and would allow business logic to creep into the UI layer.

What works then?

I’ve considered several possible solutions:

  • Create a data context for each transaction (sketched below)
  • Create a data context for each business object
  • Create a global data context
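
As an illustration of the first option, here is a minimal sketch of a per-transaction data context, once more assuming a hypothetical MyDataContext and Customer entity:

    using System.Linq;

    public class CustomerService
    {
        // One DataContext per business transaction: created, used and
        // disposed within the method, so nothing lingers between requests.
        public void RenameCustomer(int customerId, string newName)
        {
            using (var db = new MyDataContext())
            {
                var customer = db.Customers.Single(c => c.CustomerId == customerId);
                customer.Name = newName;
                db.SubmitChanges(); // the unit of work ends here
            }
        }
    }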

I will investigate this further as I have yet to come up with a good solution regarding the scope of the data context and its persistence.


Quite recently I was asked to create a prototype using Visual Studio 2005 Team Edition for Database Professionals (“Data Dude”), and after installing the software I tried creating a new database project. Imagine my surprise when I got this message:

An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

I did what most people would have done then and cursed.

After the cursing was done, I googled the error and, seeing that it was a fairly generic one, had no success. I then refined the search with TFS- and Data Dude-specific terms.

During this time I did what others might have done:

  • Changed the Data Connections database to localhost\instance
  • Changed the Design-time Validation Database to localhost\instance

Of course that failed. I was at my wits’ end when I finally read the error carefully and then looked closely at the label for the database settings in Visual Studio. It struck me that it was asking for an instance, not the database and instance.

After changing the Data Connections and Design-time Validation Database settings to just my instance, everything magically worked.

I’m posting this mainly in the hope that others who encounter the same problem will find this solution. It sure made me feel foolish, but it’s best that my lesson benefit others as well.



