
Layered Architecture for .NET

I have always been an advocate for the Layered Architecture Design Pattern. Since the day I was exposed to it, I have always tried to practice it in my application development.

The Layered Architecture Design Pattern promotes the concept of separation of concerns, where code with similar responsibilities is factored into layers. It is purely a logical design, but it can be combined with physical design patterns such as the N-tier architecture to deliver highly scalable distributed enterprise applications.

Having followed the Microsoft Application Architecture Guide 2nd Edition and its previous edition, I have always tried to materialize the concepts in code with the .NET technologies available at the time. The result of that work can be seen in Layered Architecture Sample for .NET. I actually started testing out the concepts with .NET Remoting, but by the time I published the samples, I had already started learning WCF.

Lately, I realized that many people are starting to adopt the Layered Architecture Design Pattern and I also noticed many newer .NET technologies have emerged. I would like to take this opportunity to provide an article on my thoughts and perhaps an updated version of the Layered Architecture Design Pattern for .NET.



Conceptually, this is how I visualized the Layered Architecture Design Pattern to be, in today's modern world. There are of course more sophisticated visualizations but I purposely kept it simple and near to what most of us are familiar with (and closer to the books).

Data Layer (a.k.a. Data Access Layer or DAL)

The data layer is where we keep the components that handle the insertion (Create), selection (Read), modification (Update) and deletion (Delete) of data - better known as CRUD operations. While it is simplistic to think that data usually comes from a database, in reality, data can come from or go to various other sources as well, e.g. Web Services, flat files, Message Queues, XML files, SharePoint Lists etc.

Therefore, components that deal with database tables are called Data Access Components (DAC) and those that deal with other data sources are called Data Agents (DA), e.g. Service Agents, File Agents, Queue Agents etc.

Data Access technologies that you can use in .NET are ADO.NET, Enterprise Library Data Access Application Block and ADO.NET Entity Framework.
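To make the idea concrete, here is a minimal sketch of what a Data Access Component could look like using plain ADO.NET. The LeaveDAC class, the Leave entity, its properties and the connection string name are assumptions for illustration, not code from the sample.

using System.Configuration;
using System.Data.SqlClient;

// A minimal Data Access Component (DAC) sketch: it only knows how to
// persist Leave records; no business rules live here.
public class LeaveDAC
{
    private readonly string _connectionString =
        ConfigurationManager.ConnectionStrings["default"].ConnectionString;

    public void Create(Leave leave) // Leave is a hypothetical entity
    {
        const string sql =
            "INSERT INTO Leave (Employee, StartDate, EndDate) " +
            "VALUES (@Employee, @StartDate, @EndDate)";

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@Employee", leave.Employee);
            command.Parameters.AddWithValue("@StartDate", leave.StartDate);
            command.Parameters.AddWithValue("@EndDate", leave.EndDate);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}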

Business Layer (a.k.a. Business Logic Layer or BLL)

The business layer is where the heart of our application resides. It contains all the processing logic that makes the application possible. The Business Component (BC) is where you put this processing logic, with each piece coded into an independent business method. Traditionally, we were required to chain up the business methods manually in code to form the business process, but fortunately today, we have workflow technologies.

If you can isolate each business method to function on its own, you can expose it as a Workflow Activity (WFA). These workflow activities can then be used by a Workflow (WF) component to orchestrate the business processes.

Windows Workflow Foundation is the workflow technology that can be used in .NET.
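As a rough sketch, an isolated business method can be wrapped in a WF code activity along these lines. The ApproveLeaveActivity, LeaveComponent and argument names are hypothetical, purely to show the shape of a Workflow Activity:

using System.Activities;

// A minimal Workflow Activity (WFA) sketch that exposes a single business method
// so a Workflow (WF) component can orchestrate it with other activities.
public sealed class ApproveLeaveActivity : CodeActivity
{
    // Input argument supplied by the workflow at runtime.
    public InArgument<long> LeaveId { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        long leaveId = context.GetValue(LeaveId);

        // Delegate to the business component; the activity holds no logic of its own.
        var component = new LeaveComponent(); // hypothetical business component
        component.Approve(leaveId);
    }
}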

Services Layer (a.k.a. Messaging Layer)

The services layer plays the most important role in the architecture to enable the functionality of the system to be exposed to client and external applications. It is also the key to achieving multi-platform and interoperable solutions.

Services components expose the functionality of business components or workflows via Contracts. In the Service-Orientation world, contracts are the interfaces that both service providers and service consumers agree on, and they should be immutable. Contracts are not limited to describing the service and its operations; they can also be used to describe the messages (i.e. Message Contracts) that are to be sent and received.

I use Services (SI) to represent components that expose business components directly and Workflow Services (WFS) to represent services that expose workflow functionality. The reason for this distinction is that Workflow Services are usually long-running and may have special requirements such as correlation.

Services technologies that can be used in .NET are Windows Communication Foundation (WCF), Workflow Services and ASP.NET WEB API.
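For illustration, a WCF contract and its service implementation that expose a business component might look roughly like this; ILeaveService, SubmitLeave, LeaveComponent and the Leave entity are made-up names, not the actual sample code:

using System.ServiceModel;

// The contract (SC): the immutable agreement between provider and consumer.
[ServiceContract]
public interface ILeaveService
{
    [OperationContract]
    long SubmitLeave(Leave leave);
}

// The service implementation (SI): a thin layer that simply delegates to the business layer.
public class LeaveService : ILeaveService
{
    public long SubmitLeave(Leave leave)
    {
        var component = new LeaveComponent(); // hypothetical business component
        return component.Submit(leave);
    }
}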

Presentation Layer (a.k.a. User Interface Layer)

The presentation layer should not need much explanation. It is basically the part of the system that the user interacts with. Your screens, forms, web pages and reports are all User Interfaces (UI), which are part of the presentation layer. User Interfaces can make use of User Process Components or Controllers (UIC) to communicate with the back-end and to navigate or process the UI.

A carefully designed layered application should be able to support any form (or platform) of UI. If you are able to encapsulate all your processing logic behind the service layer, you can have whatever UI (Web, Desktop or Mobile) that you desire to connect to it - even UI-less external systems.

Presentation technologies that can be used in .NET are Windows Presentation Foundation (WPF), Silverlight, ASP.NET Web Forms, ASP.NET MVC, Windows Phone, Windows Store Applications and Windows Forms.

Shared Types

So far we have covered the components in all the layers, but we have not yet discussed how data is passed between them. Traditionally, .NET developers used DataSets and DataTables, but these are heavy-weight objects. Entities are Plain-Old-CLR-Objects (POCO), which means they are just classes with properties that ferry data across your layers. Sometimes, they are also called Data Transfer Objects (DTO).

It is recommended that you do not put any processing logic inside the Entities. If there is any processing logic, it should be placed in the business components. The reason is that when entities are serialized to non-.NET platforms, your processing logic may not carry over.

Some property values in entities can be represented using Enumerations (Enums) for easier readability. For example, it is much better to strongly-type your Status property with an Enum to show meaningful statuses such as Pending, Cancelled or Approved, instead of 0, 1 or 2.
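A minimal sketch of such an entity and its enumeration (the property names are illustrative):

using System;

// Enumeration shared across layers for readability.
public enum LeaveStatuses
{
    Pending = 0,
    Cancelled = 1,
    Approved = 2
}

// A plain entity (POCO/DTO): properties only, no processing logic.
public class Leave
{
    public long LeaveID { get; set; }
    public string Employee { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public LeaveStatuses Status { get; set; }
}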

Frameworks (a.k.a. Cross-cutting Framework)

In every system, there is bound to be code that can be shared across all the layers, e.g. logging, auditing, validation etc. You can treat these components as Framework components that can be shared by any of the layers. Framework components can be from a 3rd party (e.g. Microsoft Enterprise Library) or any custom in-house built components, e.g. string manipulation functions, custom validation functions, extension methods etc.

Conclusion

I hope the explanation in this article can be a useful foundation for adopting the Layered Architecture Design Pattern. If you wish to see code samples on how it can be implemented with .NET technologies, please feel free to visit Layered Architecture Sample for .NET.

Deploying Layered Applications

In my previous post, I briefly explained the function of each logical component in the Layered Architecture Design Pattern. In this post, I'll be covering the deployment architecture (or physical architecture) of layered applications.

To understand the deployment architecture of layered applications, we first need to understand the difference between layering and tiering. Layering is the concept of partitioning code into components that form logical layers, while tiering describes the distribution of code units across physical boundaries. In simpler terms, layering is "how you organize your code into assemblies" (.dll, .exe) and tiering is "where you deploy those assemblies".

A layered application can generally support flexible tiering through the introduction of a service layer that employs a distributed communication technology such as DCOM, Remoting, Sockets, Message Queues, Web Services etc. How many tiers are required will most likely depend on the security, scalability and infrastructure requirements (or constraints) of your application. Be aware that performance degrades with each tier introduced, as the application will need to make cross-process, cross-boundary calls to each tier.

Generally, basic enterprise applications opt for 3-tiers (Web, App and Database) with some sophisticated ones spanning to 4 or more tiers (N-tier). The layer diagram which I presented in my previous post depicts a standard 4-layer-3-tier architecture application and it will be used as the basis for discussion in this post.

Take note that if you find the text in the diagrams too small, please refer to my previous post. I intentionally shrunk the diagrams to illustrate how the components would fit in a deployment, and I expect that you are already familiar with the color-coded boxes.

Single-Tier Architecture

The simplest way to deploy a layered application is to deploy everything to a single server. This is called the single-tier or monolithic approach. This approach is common for desktop and some types of mobile applications. It is also common when there is a server budget constraint for testing new web applications.


If you are certain that your application will not grow (which is rarely the case), you can improve the performance by completely removing the service layer.


It is generally safe to omit the service layer for monolithic client applications, but I recommend keeping it for web applications because the tendency for web applications to grow is higher. To reduce the performance impact of the service layer in a monolithic web application, you can use the netNamedPipeBinding for the WCF/WF services.
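As a rough sketch (the ILeaveService contract, its SubmitLeave operation and the endpoint address are hypothetical), this is how a presentation-layer component on the same machine could call the service over named pipes from code; the same binding can equally be set up in configuration:

using System.ServiceModel;

public static class LeaveServiceClient
{
    // Named pipes bypass the network stack, so the cross-layer call stays cheap
    // when every layer is deployed on the same server.
    public static long Submit(Leave leave)
    {
        var binding = new NetNamedPipeBinding();
        var address = new EndpointAddress("net.pipe://localhost/LeaveService.svc"); // illustrative address

        var factory = new ChannelFactory<ILeaveService>(binding, address); // hypothetical contract
        try
        {
            ILeaveService proxy = factory.CreateChannel();
            return proxy.SubmitLeave(leave);
        }
        finally
        {
            factory.Close();
        }
    }
}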

Two-Tier Architecture

It is usually very rare to find the single-tier approach in an enterprise environment. You may find it on development servers, but most of the time, enterprise applications will employ the two-tier or client-server architecture due to the need for data centralization. (As with the single-tier architecture, you can remove the service layer for performance.)


The client devices or web server will host all the presentation and processing logic while accessing data from a centralized database server or a data service. 

Three-Tier Architecture

Due to scalability and security concerns, typical enterprise web applications usually employ the 3-tier deployment architecture. The web servers are usually placed in a public-facing perimeter network and the application servers are placed behind a firewall within a secured network.


A properly designed service layer will enable the back-end to be accessible by external systems and client devices. Using technologies such as Windows Communication Foundation (WCF) and ASP.NET WEB API, the service layer can provide support for a variety of client platforms, including non-.NET platforms such as iOS, Android, Java etc.

In a highly-scalable and available environment, the web servers are load-balanced into a Web Farm and the application servers are load-balanced into an Application Farm. The database servers are clustered for high-availability.


Deploying the Layered Application

If you are following the project structure illustrated in Layered Architecture Sample for .NET closely, you may be wondering which projects you should be deploying. For the sample Leave application, you only need to publish/deploy the Web project to the web server and the Hosts project to the application server.


You will notice that the application server contains the service, business and data layer assemblies, whereas the web server only contains the presentation assemblies. This keeps business rules safe on the application server and reusable for any type of client application.


Conclusion

I hope you find the information in this post useful. For code samples on layered .NET applications, please drop by Layered Architecture Sample for .NET.

Solving Workflow Management Service Memory Leak

AppFabric for Windows Server (or what used to be called Windows Server AppFabric) is an extension to IIS that provides some useful application server features such as service monitoring, workflow management and caching. It is common to host long-running workflows on AppFabric because of its Auto-Start feature, which allows workflows to continue running in the event of an IISRESET or a server reboot.

But recently, I encountered a problem with the AppFabric Workflow Management Service (WMS) where it constantly leaks memory over a period of time. The symptom is that after a fresh reboot, the memory of WorkflowManagementService.exe, monitored from Task Manager, will slowly increase to ridiculous proportions and eventually take up all of the server's memory.

This will cause any Windows Communication Foundation (WCF) services or Workflow Services hosted on the server to cease accepting new requests. It will also cause any Workflow Services that have been persisted to fail to auto-start.

After some serious troubleshooting with Microsoft, we were lucky to be able to find the root cause. If you encounter similar memory leak problems, you should check for error logs in the Event Viewer. AppFabric logs errors to this location:

Applications and Services Logs -> Microsoft -> Windows -> Application Server-System Services -> Admin

We have discovered a lot of errors (logged for almost every minute) with the following message:

Failed to invoke service management endpoint at 'net.pipe://[server-name]/ServiceManagement.svc' to activate service '/[workflow-service-name].svc'. Exception: 'The message with To 'net.pipe://[server-name]/ServiceManagement.svc' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree.'

There are a few scenarios that will cause the above error.

1. net.pipe has been disabled for the Application in the Web Site that is hosting the Workflow Service. This wasn't the case for me.

2. There are Workflow Services that were deployed on the server with active instances but then they were deleted after testing. Their active workflow instances still exist in AppFabric Persistence Store causing the WMS to think that those instances are still available on the server and therefore will continuously try to activate them.

3. There is more than one AppFabric installation sharing the same AppFabric Persistence Store. Each AppFabric instance registers its Workflow Services in the persistence store, and each WMS will try to activate all of the Workflows, including those that belong to other servers. The activation fails and it will continuously retry.

4. Developers install AppFabric on their development machines but point their local AppFabric to the server's AppFabric Persistence Store. This causes both scenarios 2 and 3 to happen.

To confirm the issue, open the Internet Information Services (IIS) Manager on the affected server and check whether the erroneous Workflow Service mentioned in the event log exists under the Web Site.

If it doesn't exist, then proceed to the AppFabric Persistence Store and query the System.Activities.DurableInstancing.ServiceDeploymentsTable for the Workflow Service. You can filter by the RelativeServicePath column. Take note that your service name is registered as /[workflow-service-name].svc in the database.

If you manage to locate the row, that means you have an orphaned Workflow Service on your application server and that is the cause of the memory leak. To rectify the issue, delete all the rows in the System.Activities.DurableInstancing.InstancesTable that belong to the ServiceDeploymentId of the orphaned Workflow Service and then proceed to delete the row in the ServiceDeploymentsTable itself.

Do this for all orphaned Workflow Services and the Event Viewer should no longer log any errors for the missing Workflow Services. Once that is done, restart WorkflowManagementService.exe from Services.msc and the WMS should resume its normal behaviour without any memory leak.

Do take note that the situation is more complicated when you have 2 or more servers sharing the same AppFabric Persistence Store. You may end up needing to sacrifice the other servers to preserve the most critical one, or take down the workflows on the other servers, re-configure them to use their own Persistence Store and bring up the workflows again.

As a guideline, I would recommend the following when using AppFabric Workflow Management Service:

DO THIS

• Give each AppFabric installation (server or farm) its own dedicated AppFabric Persistence Store.
• When you undeploy a Workflow Service, clean up its persisted instances and its entry in the ServiceDeploymentsTable so no orphaned registrations remain.

DON'T DO THIS

• Don't share a single AppFabric Persistence Store across unrelated AppFabric installations or servers.
• Don't point development machines to the server's AppFabric Persistence Store.

This problem is found on AppFabric 1.1 for Windows Server with cumulative patch 1 to 4 installed.

String Concatenation Performance

Which string concatenation method has the best performance? - I'm sure at some point in time, a developer will ask (or be asked) this question. Here's a spike I did to benchmark the following methods:
  • String.Format
  • StringBuilder
  • + Operator
  • String.Concat
  • String.Join
  • StringWriter
  • .Replace()
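Each method was timed with a simple loop; the sketch below shows the kind of Stopwatch harness involved (the class and field names are illustrative, not the exact spike code):

using System;
using System.Diagnostics;

public class ConcatenationBenchmark
{
    private const int _iterationCount = 100000;

    public static void Main()
    {
        string result = string.Empty;

        // Time one candidate; the same Stopwatch pattern wraps each loop below.
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < _iterationCount; i++)
        {
            result = string.Concat("The ", "Quick ", "Brown ", "Fox ",
                                   "Jumps ", "Over ", "The ", "Lazy ", "Dog");
        }
        stopwatch.Stop();

        Console.WriteLine("{0} ms, last result: {1}", stopwatch.ElapsedMilliseconds, result);
    }
}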

The machine specifications that the tests ran on:


Test #1 - Using fixed string values. 

String.Format
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Format("{0}{1}{2}{3}{4}{5}{6}{7}{8}",
        "The ", "Quick ", "Brown ", "Fox ", "Jumps ", "Over ", "The ",
        "Lazy ", "Dog");
}

StringBuilder
for (int i = 0; i < _iterationCount; i++)
{
    result = builder.Append("The ").Append("Quick ").Append("Brown ")
        .Append("Fox ").Append("Jumps ").Append("Over ")
        .Append("The ").Append("Lazy ").Append("Dog").ToString();
    builder.Clear();
}

+ Operator
for (int i = 0; i < _iterationCount; i++)
{
    result = "The " + "Quick " + "Brown " + "Fox " + "Jumps "
             "Over " + "The " + "Lazy " + "Dog";
}

String.Concat
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Concat("The ", "Quick ", "Brown ", "Fox ",
                           "Jumps ", "Over ", "The ", "Lazy ", "Dog");
}

String.Join
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Join("",
        "The ", "Quick ", "Brown ", "Fox ", "Jumps ", "Over ", "The "
        "Lazy ", "Dog");
}

StringWriter
for (int i = 0; i < _iterationCount; i++)
{
    writer.Write("The ");
    writer.Write("Quick ");
    writer.Write("Brown ");
    writer.Write("Fox ");
    writer.Write("Jumps ");
    writer.Write("Over ");
    writer.Write("The ");
    writer.Write("Lazy ");
    writer.Write("Dog");

    result = writer.ToString();
    writer.GetStringBuilder().Clear();
}

.Replace()
for (int i = 0; i < _iterationCount; i++)
{
    result = "{0}{1}{2}{3}{4}{5}{6}{7}{8}"
        .Replace("{8}", "Dog")
        .Replace("{7}", "Lazy ")
        .Replace("{6}", "The ")
        .Replace("{5}", "Over ")
        .Replace("{4}", "Jumps ")
        .Replace("{3}", "Fox ")
        .Replace("{2}", "Brown ")
        .Replace("{1}", "Quick ")
        .Replace("{0}", "The ");

}

Iteration count is set to 100,000 and here are the results:
  • String.Format = ~150 ms
  • StringBuilder = ~56 ms
  • + Operator = 0 ms
  • String.Concat = ~69 ms
  • String.Join = ~64 ms
  • StringWriter = ~59 ms
  • .Replace() = ~380 ms

Are you surprised to see 0 ms for the + operator? If you use a disassembler, you will notice that the compiler has optimized it to:

result = "The Quick Brown Fox Jumps Over The Lazy Dog"

Therefore, it shows that it is safe to use the + operator to break up long literal strings in our code for readability's sake.

Let's proceed to another test and this time, we will replace all the fixed string values with variables instead.

private string the = "The ";
private string quick = "Quick ";
private string brown = "Brown ";
private string fox = "Fox ";
private string jumps = "Jumps ";
private string over = "Over ";
private string lazy = "Lazy ";
private string dog = "Dog ";


Test #2 - Using string variables. 

String.Format
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Format("{0}{1}{2}{3}{4}{5}{6}{7}{8}",
        the, quick, brown, fox, jumps, over, the, lazy, dog);
}

StringBuilder
for (int i = 0; i < _iterationCount; i++)
{
    result = builder.Append(the).Append(quick).Append(brown)
        .Append(fox).Append(jumps).Append(over)
        .Append(the).Append(lazy).Append(dog).ToString();
    builder.Clear();
}

+ Operator
for (int i = 0; i < _iterationCount; i++)
{
    result = the + quick + brown + fox + jumps + 
             over + the + lazy + dog;
}

String.Concat
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Concat(the, quick, brown, fox, jumps, over, the, 
                           lazy, dog);
}

String.Join
for (int i = 0; i < _iterationCount; i++)
{
    result = string.Join("",
        the, quick, brown, fox, jumps, over, the, lazy, dog);
}

StringWriter
for (int i = 0; i < _iterationCount; i++)
{
    writer.Write(the);
    writer.Write(quick);
    writer.Write(brown);
    writer.Write(fox);
    writer.Write(jumps);
    writer.Write(over);
    writer.Write(the);
    writer.Write(lazy);
    writer.Write(dog);

    result = writer.ToString();
    writer.GetStringBuilder().Clear();
}

.Replace()
for (int i = 0; i < _iterationCount; i++)
{
    result = "{0}{1}{2}{3}{4}{5}{6}{7}{8}"
        .Replace("{8}", dog)
        .Replace("{7}", lazy)
        .Replace("{6}", the)
        .Replace("{5}", over)
        .Replace("{4}", jumps)
        .Replace("{3}", fox)
        .Replace("{2}", brown)
        .Replace("{1}", quick)
        .Replace("{0}", the);

}

The results for 100,000 iterations are:
  • String.Format = ~146 ms
  • StringBuilder = ~54 ms
  • + Operator = ~70 ms
  • String.Concat = ~70 ms
  • String.Join = ~64 ms
  • StringWriter = ~57 ms
  • .Replace() = ~398 ms

From this we can conclude that StringBuilder provides the best performance for string concatenations done inside loops. StringWriter uses a StringBuilder internally, so its performance is comparable; however, take note to dispose of it after use. String.Format is the slowest of the regular concatenation methods, and .Replace() is slower still, even though the replacement is done from the end of the string to improve performance. Therefore, we need to be mindful when using it to substitute keywords in strings with values.
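For the dispose note above, one way to guarantee the StringWriter is cleaned up is to wrap it in a using block (a small sketch; StringWriter lives in System.IO):

string result;

using (var writer = new StringWriter())
{
    writer.Write("The ");
    writer.Write("Quick Brown Fox");

    result = writer.ToString();
} // writer (and its underlying buffer) is disposed here, even if an exception is thrown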

Just in case you get too excited and want to inherit the StringBuilder class and then provide an overloaded + operator for it, you need to know that StringBuilder is a sealed class. Oh! Bummer!

[Updated: 5-Jan-2014] Included machine specifications and .Replace() benchmark.

ASP.NET WEB API 2 Routing Woes

I was trying out ASP.NET WEB API 2 and I fumbled. I was caught off guard by the new Routing behavior. I created 2 controllers:

public class SampleController : ApiController
{
    [HttpGet]
    [Route("api/{controller}/{action}/{id}")]
    public Person GetPersonById(int id)
    {
        return new Person() { FirstName = "Serena", LastName = "Yeoh" };
    }

    [HttpGet]
    [Route("api/{controller}/{action}/{id}/{country}")]
    public Person CheckCountry(int id, string country)
    {
        // ...
    }
    // Other methods not shown for simplicity sake.
}

and

public class DummyController : ApiController
{
    [HttpGet]
    [Route("api/{controller}/{action}/{id}")]
    public Person GetPersonById(int id)
    {
        return new Person() { FirstName = "Serena", LastName = "Yeoh" };
    }

    [HttpGet]
    [Route("api/{controller}/{action}/{id}/{country}")]
    public Person CheckCountry(int id, string country)
    {
        // ...
    }
    // Other methods not shown for simplicity sake.
}

When I tried to access my API from the browser, I was greeted with an HTTP 404 Not Found. Using Fiddler, I managed to capture the actual error message:

{"Message":"No HTTP resource was found that matches the request URI 'http://localhost:2937/Api/Sample/GetPersonById/1'.","MessageDetail":"No route providing a controller name was found to match request URI 'http://localhost:2937/Api/Sample/GetPersonById/1'"}

This error only appears when there are 2 controllers with methods that have the same signatures. If the signatures are different, everything executes fine.

I found that to fix the problem, I will need to hard-code the RoutePrefix on the classes like the following:

[RoutePrefix("api/Dummy")]
publicclassDummyController : ApiController

[RoutePrefix("api/Sample")]
public class SampleController : ApiController

and change the Route template on the methods

[Route("{action}/{id}")]
public Person GetPersonById(int id)

[Route("{action}/{id}/{country}")]
public Person CheckCountry(int id, string country)

While this solution works, I find it not very clean as I have to hard-code the controller's name in the RoutePrefix.

In case you are wondering, here's the Visual Studio generated WebApiConfig.cs file.

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services

        // Web API routes
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

I do know that we can create new MapHttpRoute entries for every different method signature but I find that to be quite insane as well.
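For completeness, this is the kind of extra convention-based route that would be needed for each additional signature (shown for the CheckCountry action above); it works, but the entries multiply quickly:

// One more entry in WebApiConfig.Register for every distinct signature.
config.Routes.MapHttpRoute(
    name: "ActionWithCountryApi",
    routeTemplate: "api/{controller}/{action}/{id}/{country}"
);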

Are there any other solutions?

Layered Applications and Windows Azure

With the arrival of cloud computing, we may wonder whether the Layered Architecture Pattern would still be relevant. In theory, everything should work as-is if we are leveraging on Infrastructure-as-a-Service (IaaS) because that only involves moving our servers to the cloud, but what about Platform-as-a-Service (PaaS)?

Well, I'm glad to learn that Windows Azure provides a variety of deployment options for enterprise layered applications (provided the applications were properly layered on-premises). This is the third article in the series exploring the layered architecture pattern in modern systems. You can check the previous two posts if you have missed them.
    [Note:] If you find the text in the diagrams too small, please refer to my previous post for a larger illustration.

    PaaS - Windows Azure Web Sites and Cloud Services

    For new or enterprise applications that can be migrated completely to use the PaaS model, we can leverage on Windows Azure Cloud Services which offer us the options of deploying Web Roles and Worker Roles. Web applications containing the presentation layer can be deployed to Web Roles and the back-end stack containing the service, business and data layers, can be deployed to either Web or Worker roles.


    If the service layer was developed in WCF or WEB API, the back-end stack can be deployed to a Web Role. The back-end stack does not need to use Worker Roles unless there is a specific requirement for them.

    Web applications can also be deployed to Azure Web Sites if they are simple pages, but I would recommend using Web Roles instead because they are more suited for application environments (e.g. network isolation, setting up start-up tasks, support for virtual networks, multiple deployment environments, etc.).

    As for the database portion, some rethinking is required. The notion of database clusters is somewhat non-existent in a PaaS model; instead, PaaS uses the concept of replicas for sustaining high availability. Also, large databases may need to be sharded (horizontally partitioned) into smaller databases and then queried using Federation.

    IaaS - Windows Azure Virtual Network and Virtual Machines

    For existing layered applications that could not be migrated to the PaaS model, Windows Azure also provides IaaS options through Virtual Network and Virtual Machines. Similar to an on-premise environment; web, application and database servers can be virtualized into Virtual Machines and then configured within a Virtual Network on Windows Azure.


    Notice that we are able to set up database clusters for high-availability in IaaS, but unlike PaaS, the tasks and responsibilities of setting up all the servers (Web, App and DB) are on us.

    Organizations may consider the IaaS model to reduce the risk of migrating applications to the cloud, as it closely resembles the architecture of existing on-premise infrastructures. It is also a good option for quickly provisioning servers for testing prototype solutions and applications. Organizations that want more control over their servers will also find IaaS more to their liking.

    PaaS and IaaS Working Together

    Windows Azure does not limit us to an all-or-nothing option when it comes to deploying applications. Through layering we can leverage one of its benefits, whereby each layer can be developed, migrated and upgraded separately from the others. In this case, we can have scenarios where our web application is migrated to an Azure Web Site or Web Role, while the back-end stack is hosted on Virtual Machines in a Virtual Network.


    The use of Azure Web Site or Web Roles will surface some differences here. With Web Roles, your web application will be able to join the Virtual Network of the VMs. The Web Sites can only call the App server from external endpoints that we need to configure.

    Hybrid On-Premise and Cloud

    In all enterprises, there will be applications that cannot be migrated to the cloud. This can be due to governing policies, readiness or even the need to support legacy systems. In such situations, all hope is not lost, as Windows Azure also provides the ability to connect on-premise applications to the cloud.

    A common method may be to deploy certain servers to Virtual Machines in an Azure Virtual Network and then configure VPN to connect back to the on-premise environment. Windows Azure provides Site-to-Site and Point-to-Site VPN connectivity for this purpose.

    However, if the applications are properly layered, we can actually leverage on Azure Service Bus to expose any on-premise service stack to the cloud.


    Service Bus can be used to expose any on-premise services to other external systems (i.e. partner extranets) that are either hosted on other premises or on the cloud. It can also be used to expose services to mobile applications. You can secure your Service Bus endpoints using Access Control Service.

    Summary

    As we can see, the Layered Architecture Pattern stays relevant despite the emergence of cloud computing. In fact, having a carefully layered design may assist in easing the migration to the cloud. Even if you are not developing applications for the cloud today, I would still strongly encourage you to consider layering your applications.

    Arguably, we can still deploy monolithic web applications (everything on one server) to the cloud and abuse the elastic scaling capabilities of the cloud by throwing in more instances, but that will not provide us with the option of isolation and granular tuning or scaling, e.g. 4 Web server instances serving content and 2 App server instances processing logic.

    You can check-out samples of layered applications developed for Windows Azure in Layered Architecture Sample for Azure.

    New IIS Express Behavior in VS 2013?

    If you have been using Visual Studio 2013 for a while, you will notice that it behaves differently from previous versions of Visual Studio when it comes to debugging web applications. Visual Studio 2013 launches a new instance of IIS Express every time we debug our code and shuts it down at the end of the debugging session.

    While this may seem a nice thing to have, since a clean instance is started every time, that wasn't the main purpose of this behaviour, and obviously, it creates some problems when dealing with solutions that contain multi-tier projects, e.g. a WCF/WF web host where Add or Update Service Reference is required.

    In such an environment, we usually run the WCF web host in Debug mode while we add service references from the other projects. In this case, we are no longer able to perform such a feat. We can, however, work around it by launching an instance of the host without debugging, but after a while, we may encounter situations where this behaviour becomes more of an inconvenience.

    After some searching, I learned that this was actually caused by the new Edit and Continue feature in x64 environments. Disabling this feature returns Visual Studio 2013 to the previous behaviour of not shutting down IIS Express after debugging.

    To disable the feature, simply Right-Click on your Web Host project and select Properties. On the Web tab, un-check the Enable Edit and Continue checkbox.


    After disabling the feature, the IIS Express instance will be retained across the debugging sessions.

    Layered Architecture Sample January 2014

    I have just finished converting and updating all the Layered Architecture Samples to Visual Studio 2013. Not only that, I have also updated some of the technologies (e.g. MVC 5 and EF 6) and fixed some of the bugs. New in this release is also a sample that uses ASP.NET Web Pages 3 to connect to an ASP.NET WEB API back-end.

    Here is the list of samples to date:

    1. LeaveSample-ASPNET-WCF-DAAB - ASP.NET Web Forms Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Enterprise Library DAAB 6.0.
    2. LeaveSample-ASPNET-WCF-EF - ASP.NET Web Forms Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.
    3. LeaveSample-WEBPAGE-API-EF - ASP.NET Web Pages, ASP.NET WEB API, Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.
    4. LeaveSample-MVC-WCF-EF - ASP.NET MVC 5, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.
    5. LeaveSample-WINFORMS-WCF-EF - Windows Forms, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.
    You can download them by directly clicking on their links or visit the download page here.

    WPF/MVVM and ASP.NET MVC/WEB API Samples

    I'm excited to announce that the following samples have been added to the January 2014 release of Layered Architecture Sample for .NET.

    1. LeaveSample-MVC-API-EF - ASP.NET MVC 5, ASP.NET WEB API, Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.
    2. LeaveSample-WPF-MVVM-WCF-EF - Windows Presentation Foundation (WPF), MVVM, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and ADO.NET Entity Framework 6.

    I noticed there were more people focusing on the MVC sample, so I quickly hacked together an MVC with WEB API 2 sample which I think will be useful for those who are trying to explore them.

    I was also curious about the Model-View-ViewModel pattern for developing WPF applications, so I spent the past few days working on a layered WPF sample that applies the MVVM pattern. It was a good learning experience and I never thought the screen design of the Leave Sample would push me to learn so many things about MVVM.

    Here's a screenshot of the Presentation layer of the application. I have worked very hard to ensure that there is no code-behind for the Views' XAML. It was really a tough feat.
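    As a rough illustration of the idea (not the actual sample code), the view binds to properties and commands exposed by a view model, so the XAML needs no code-behind; the class and member names below are made up:

    using System.ComponentModel;
    using System.Windows.Input;

    // A minimal view model: the view binds to Remarks and SubmitCommand in XAML,
    // so no logic has to live in the view's code-behind.
    public class LeaveViewModel : INotifyPropertyChanged
    {
        private string _remarks;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Remarks
        {
            get { return _remarks; }
            set
            {
                _remarks = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Remarks"));
            }
        }

        // Wired to an ICommand implementation (e.g. a RelayCommand-style helper).
        public ICommand SubmitCommand { get; set; }
    }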


    Do check it out and give some comments. *Hugs*

    Data Annotations

    I discovered data annotations while I was learning ASP.NET Web Forms Model Binding sometime back. From a little reading, I got to know that it was first introduced in Silverlight and also available on ASP.NET MVC - so it is really nothing new.

    Basically, it is a library of .NET Attributes from the System.ComponentModel.DataAnnotations namespace that can help make validation tasks simpler. Take for example if we want the Name property of our User entity to be required, we would go all the way to do something like this to validate it in our code:

    if (string.IsNullOrWhiteSpace(user.Name))
    {
        thrownewApplicationException("Name cannot be blank.");
    }

    But with data annotations, all we need to do is just decorate the Name property with a validation attribute, for example the RequiredAttribute.

    [Required]
    public string Name { get; set; }

    Any violation of the validation you applied will produce an error message. The default error messages are rather basic. Fortunately, you can customize them:

    [Required(ErrorMessage="Please enter your name. It is very important.")]
    public string Name { get; set; }

    You can apply multiple attributes to the properties at once.

    [Required]
    [EmailAddress]
    [MinLength(5, ErrorMessage = "Email must be at least 5 characters.")]
    [MaxLength(255, ErrorMessage = "Email must not exceed 255 characters.")]
    public string Email { get; set; }

    If you can't find a validation attribute for something that you need to validate, you can always use the RegularExpressionAttribute. Here's an example:

    [RegularExpression("^60.*$", ErrorMessage="Mobile No. must start with prefix 60.")]
    public string MobileNo { get; set; }

    There are many validation attributes provided out of the box; you can get a list of them here.

    To hook it up to the ASP.NET Web Forms controls, specify the entity as the ItemType of the control
    (i.e. FormView) and also declare a ValidationSummary to display the error messages.

    <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="FormFields" ForeColor="Red" />

    <asp:FormView ID="userForm" ItemType="Sample.Entities.User" runat="server" DefaultMode="Insert" InsertMethod="userForm_InsertItem">

    In the code-behind, call the TryUpdateModel method:

    public void userForm_InsertItem()
    {
        var user = new User();
        TryUpdateModel(user);

        if (ModelState.IsValid)
        {
            // Do your stuff...
        }
    }

    That's all there is to it. You can check the ModelState.IsValid property to determine the next course of action. If there are any validation errors, they will be displayed in the ValidationSummary automatically.


    Now isn't that cool? :)

    Applied Technologies in Layered Architecture

    The Layered Architecture Pattern promotes isolation as one of its benefits, and with layer isolation comes the idea of being able to upgrade and migrate each layer to newer technologies without impacting the business logic. Having gone through a few iterations of experimenting with the pattern using various .NET technologies, the idea has proven to be somewhat accurate, and the pattern has indeed proven to be a versatile architectural style that can evolve over time and sustain technology evolutions.

    I used 'somewhat accurate' in my description due to certain caveats which I will disclose at the end of this post.

    Through the exercise of testing the pattern, it is observed that the layers can easily be 'swapped-in' and 'swapped-out' when their interfaces are being abstracted carefully. You can download the samples from Layered Architecture Sample for .NET to see the implementations.

    The January 2014 release of the samples demonstrates a variety of technologies assembled to form different flavours of layered applications. While the technology in other layers can be different, the business logic does remain intact.

    To provide an easier visualization of what technology can be applied in each layer, I have come up with the following technology map. Take note that this is intended only as a basic guide and you are free to use whatever technologies you deem fit in your environment.

    For the presentation layer, we have a choice of:

    • ASP.NET Web Forms
    • ASP.NET MVC
    • ASP.NET Web Pages
    • Windows Forms
    • Windows Presentation Foundation (WPF)
    • Windows Store Apps
    • Windows Phone Apps
    It is quite common to leverage on ASP.NET for web applications. ASP.NET has many variants today to suit the skill-sets of different developers. Windows Forms are still widely used in enterprises where fast and responsive client-server desktop applications are required. For exciting UI applications, WPF is there to fill the gap and layering can be interestingly integrated to it even with the MVVM pattern.

    Windows Store Apps are new in Windows 8, but both Store Apps and Windows Phone Apps are more suited to be connected to a cloud-based back-end (that can also be layered). Silverlight is being included as a rich-client technology but do take note that it is no longer being developed.

    For the services layer, we have:

    • Windows Communication Foundation (WCF)
    • Windows Workflow Foundation (WF) hosted as a service (Workflow Services)
    • ASP.NET WEB API exposing JSON or XML (POX)
    • Microsoft Message Queue (MSMQ)
    The most common distributed communication technologies today are WEB API and WCF. For most resource-based and web-based services, WEB API is the preferred choice. It is also very suitable for providing back-ends for mobile applications. WCF still exists in large enterprises to facilitate interoperability between legacy and service-oriented systems, and MSMQ is there to provide queue-based solutions.

    Deploying old-style ASMX Web Services for new applications is not recommended, even though you may encounter them in legacy systems. Traditionally, the services layer was fulfilled by RPC technologies such as Distributed Component Object Model (DCOM) and .NET Remoting. DCOM is still supported in the latest versions of Windows, but .NET Remoting has been superseded by WCF. These legacy technologies should not be used in newer applications.

    For the business layer, it is all based on our code and processing logic. At the most basic level, it will just be the .NET programming languages that we use to build our components, C# or VB.NET - F# anyone?

    For the data layer, we have:

    • ADO.NET
    • ADO.NET Entity Framework
    • Enterprise Library Data Access Application Block (DAAB)
    Nothing beats native ADO.NET when it comes to data access performance, but some may prefer a lightweight wrapper over it such as the DAAB. If you are still using LINQ2SQL, I would suggest migrating to ADO.NET Entity Framework. Other 3rd-party Object-Relational Mappers (O/RM) can also be used here.

    Hopefully with the above technology map, you are able to get an idea of the technologies that can be used for building layered applications. Take note that in some scenarios, there can be more than one technology in a layer (e.g. WCF and Message Queue) and in some, the technologies may not be easily compatible.

    The caveat which I mentioned earlier is that while minor technology upgrades can be isolated to a single layer (e.g. replacing the data layer with a newer technology), major technology upgrades (e.g. migrating from WCF to WEB API) may affect more than one layer. Nevertheless, the business logic is still preserved. In conclusion, it is always best to plan any technology upgrades for the layers carefully to minimize the impact.

    This is the 4th part in the series of my Layered Architecture posts, you may also be interested in:




    Data Access Extension Method

    This is a follow-up to my previous post on Exploring Extension Methods. As you can see, it was dated many years back and I'm glad that I finally had the time to look into it now. I took one of my layered application samples and converted it to test the concept.

    Just a brief recap: I use Entities (classes with just properties) in my architecture to represent data and they don't have any methods. To validate or make use of the data for processing logic, I use a Business Component, and to persist the data I use Data Access Components (DAC). So, if I have a Leave object, saving it to the database from the business component will look something like this:

    // Instantiate data access component.
    var leaveDAC = new LeaveDAC();

    // Create leave.
    leaveDAC.Create(leave);

    But it would be nice if I can just do:

    // Create leave. 
    leave.Create();

    The code will look much cleaner and I do not need to worry about instantiating which DACs to persist the Entities. I will only need to focus on the Entities instead. This is where extension methods can come into play. By converting the DAC into a "Data Access Extension" class, the above syntactic experience can be achieved.

    Let's look at the original DAC.

    public partial class LeaveDAC : DataAccessComponent
    {
        public void Create(Leave leave)
        public void UpdateStatus(Leave leave)
        public void SelectById(long leaveID)
        public void Select(int maximumRows, int startRowIndex, 
                           string sortExpression, string employee, 
                           LeaveCategories? category, LeaveStatuses? status)
        public int Count(string employee, LeaveCategories? category, 
                         LeaveStatuses? status)
        public bool IsOverlap()
    }

    After conversion to Extension Methods, the DAC will look like the following:

    public static partial class LeaveDAC
    {
        public static void Create(this Leave leave)
        public static void UpdateStatus(this Leave leave)
        public static void SelectById(this Leave leave, long leaveID)
        public static void Select(this List<Leave> leaves, int maximumRows,
                                  int startRowIndex, string sortExpression,
                                  string employee, LeaveCategories? category, 
                                  LeaveStatuses? status)
        public static int Count(this List<Leave> leaves, string employee, 
                                LeaveCategories? category, LeaveStatuses? status)
        public static bool IsOverlap(this Leave leave)
    }

    Notice that the DAC doesn't inherit from DataAccessComponent anymore? That is actually a limitation and I will discuss it later.

    Here's how a method in my business component looked like originally:

    private void UpdateStatus(Leave leave)
    {
        LeaveStatusLog log = CreateLog(leave);

        // Data access component declarations.
        var leaveDAC = new LeaveDAC();
        var leaveStatusLogDAC = new LeaveStatusLogDAC();

        using (TransactionScope ts =
            new TransactionScope(TransactionScopeOption.Required))
        {
            // Step 1 - Calling UpdateStatus on LeaveDAC.
            leaveDAC.UpdateStatus(leave);

            // Step 2 - Calling Create on LeaveStatusLogDAC.
            leaveStatusLogDAC.Create(log);

            ts.Complete();
        }
    }

    Here's the UpdateStatus method after conversion:

    private void UpdateStatus(Leave leave)
    {
        LeaveStatusLog log = CreateLog(leave);

        using (TransactionScope ts =
            new TransactionScope(TransactionScopeOption.Required))
        {
            // Step 1 - Calling Update status.
            leave.UpdateStatus();

            // Step 2 - Calling Create on log.
            log.Create();

            ts.Complete();
        }
    }

    Pretty clean right? Let's look at another method that does data retrieval. Here's the original version:

    public List<Leave> ListLeavesByEmployee(int maximumRows, int startRowIndex,
        string sortExpression, string employee, LeaveCategories? category,
        LeaveStatuses? status, out int totalRowCount)
    {
        List<Leave> result = default(List<Leave>);

        if (string.IsNullOrWhiteSpace(sortExpression))
            sortExpression = "DateSubmitted DESC";

        // Data access component declarations.
        var leaveDAC = new LeaveDAC();

        // Step 1 - Calling Select on LeaveDAC.
        result = leaveDAC.Select(maximumRows, startRowIndex, sortExpression,
            employee, category, status);

        // Step 2 - Get count.
        totalRowCount = leaveDAC.Count(employee, category, status);

        return result;
    }

    And here's the ListLeavesByEmployee method using data extensions:

    public List<Leave> ListLeavesByEmployee(int maximumRows, int startRowIndex,
        string sortExpression, string employee, LeaveCategories? category,
        LeaveStatuses? status, out int totalRowCount)
    {
        var result = new List<Leave>();

        if (string.IsNullOrWhiteSpace(sortExpression))
            sortExpression = "DateSubmitted DESC";

        // Step 1 - Calling Select.
        result.Select(maximumRows, startRowIndex, sortExpression,
            employee, category, status);

        // Step 2 - Get count.
        totalRowCount = result.Count(employee, category, status);

        return result;
    }

    Up till now, the results have been quite satisfying, but there are some limitations that need to be dealt with. A static class can only inherit from object and not other classes, which leads to the previously highlighted problem that prevents any reusable methods from being encapsulated in the abstract base DataAccessComponent class. It has to be converted into a normal class with its properties and methods also converted to static.

    Original DataAccessComponent base class:

    public abstract class DataAccessComponent
    {
        protected const string CONNECTION_NAME = "default";

        protected T GetDataValue<T>(IDataReader dr, string columnName)
        {
            int i = dr.GetOrdinal(columnName);

            if (!dr.IsDBNull(i))
                return (T)dr.GetValue(i);
            else
                return default(T);
        }
    }

    Converted to function like a utility class:

    public sealed class DataAccessComponent
    {
        public const string CONNECTION_NAME = "default";

        public static T GetDataValue<T>(IDataReader dr, string columnName)
        {
            int i = dr.GetOrdinal(columnName);

            if (!dr.IsDBNull(i))
                return (T)dr.GetValue(i);
            else
                return default(T);
        }
    }

    This makes DAC methods like the following

    private Leave LoadLeave(IDataReader dr)
    {
        // Create a new Leave
        Leave leave = new Leave();

        // Read values.
        leave.LeaveID = base.GetDataValue<long>(dr, "LeaveID");
        leave.CorrelationID = base.GetDataValue<Guid>(dr, "CorrelationID");
        leave.Category = base.GetDataValue<LeaveCategories>(dr, "Category");
        
        // other fields...

        return leave;
    }

    to become a little expanded:

    private static Leave LoadLeave(IDataReader dr, Leave leave)
    {
        // Read values.
        leave.LeaveID = DataAccessComponent.GetDataValue<long>(dr, "LeaveID");
        leave.CorrelationID = DataAccessComponent.GetDataValue<Guid>(dr, "CorrelationID");
        leave.Category = DataAccessComponent.GetDataValue<LeaveCategories>(dr, "Category");

        // other fields...
        return leave;
    }

    Now the limitation of not being able to use inheritance has somewhat made me feel that this feat might not be a good idea. Furthermore, I also discovered that reflecting over the extension methods may be a challenge if I want to apply this in LASG.

    Most people will treat extension methods as 'syntactic sugar'. In this experiment, it does show that other than its original purpose of just purely extending class functionalities, extension methods can also make code look a bit more readable and easier to understand.

    In terms of performance, there doesn't seem to be any impact (or improvement). You can check out Sylvester Lee's post to get some in-depth details on the performance benchmark he did for me in this research.

    At this point in time, I have slight reluctance in using extension methods for data access. What do you think? Will you consider this method?

    Layered Architecture Components

    The Layered Architecture principle states that components in one layer should only know and interact with components that are in the layer directly below it, and that components in each layer, should only serve components that are in the layer directly above it. This means that in a strict-layering practice, layers will communicate in a top-down fashion from Presentation -> Services -> Business -> Data.

    It is often easy to state in theory but spells a lot of confusion for developers, especially beginner practitioners, when it comes to implementation. To help visualize the components' relationships and interactions better, I have developed the following diagram and provided some basic guidelines.


    Legend
    BE = Business Entity            BC = Business Component     SI = Service Implementation
    Enum = Enumerations             WFA = Workflow Activity     MT = Message Type
    DAC = Data Access Component     WFS = Workflow Service      UIC = User Interface Controller
    DA = Data Agent                 SC = Service Contract       UI = User Interface

    When designing component interactions, use the following guidelines:

    Shared

    • An entity may contain other entities, e.g. an Order with a List<OrderItem>.
    • An entity may use one or more enumerations for its properties.
    • All components in any layer can reference entities and enumerations.


    Data

    • A data access component should refer to a Table or View in the database.
    • A data access component may manage more than one related table, e.g. Orders and OrderItems.
    • A data agent should be used to manage access to external services (known as a Service Agent).
    • A data agent should be used to manage access to files (known as a File Agent).


    Business

    • A business component should call more than one data access component. A one-to-one mapping of business component to data access component is an early indication that something is amiss.
    • A business component may call a mix of data access components and data agents, which may also be called by other business components.
    • A business component may have some or all of its methods exposed as workflow activities.
    • A workflow activity should map to one business component method (mapping to more than one is OK but not recommended).


    Services

    • A service may call one or more business components.
    • A workflow service usually contains more than one workflow activity to construct workflows.
    • A workflow service may contain workflow activities that expose methods from different business components.
    • A contract exposes a service (if using WCF).
    • A service may have more than one contract (if using WCF).
    • A contract may use message types to consolidate data into request or response messages; in this case, they can be data contracts or message contracts (see the sketch after this list). Message types can also be used for WEB API.
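    A small sketch of such a message type as a WCF data contract (the SubmitLeaveRequest name and its members are made up; the Leave entity is assumed from the samples):

    using System.Runtime.Serialization;

    // A message type (MT) that consolidates data into a single request message.
    [DataContract]
    public class SubmitLeaveRequest
    {
        [DataMember]
        public Leave Leave { get; set; }

        [DataMember]
        public string RequestedBy { get; set; }
    }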


    Presentation

    • A controller may call one or more services through contracts (if using WCF).
    • A controller may call into a service directly (when WCF is not used).
    • A controller may call a mix of contracts and workflow services (WFS are actually WCF).
    • A controller may be called by more than one user interface.
    • A user interface may call one or more controllers, which may also be called by other user interfaces.


    Framework

    • All framework components can be called by components in one or more layers.

    Take note that these are just the guidelines I try to practice in strict layering. In a relaxed layering practice, the guidelines can be less rigid.

    This is the 5th post in my Layered Architecture series. You can also catch the previous posts:




    More Data Annotations

    Here is more stuff that I discovered we can do with Data Annotations.

    Re-use Validation in Business Components

    You will be pleased to know that the validations you have coded as data annotations can easily be reused in business components; they don't necessarily need to depend on UI components. To activate them, simply use the following code snippet:

    // Activate Data Annotation validation.
    var context = new ValidationContext(leave);
    var validationErrors = new List<ValidationResult>();
    Validator.TryValidateObject(leave, context, validationErrors, true);

    if (validationErrors.Count > 0)
        throw new ApplicationException(validationErrors[0].ErrorMessage);

    In the above code snippet, leave is an instance of the Leave Entity. The call to TryValidateObject will return a list of ValidationResults if there are any validation errors. The first error in the list is thrown as an exception to the calling function.

    Writing Custom Validation Attributes

    Although we can use the RegularExpression validation attribute to satisfy most of the non-standard validation requirements, there will always be scenarios where we will need to build our own custom validation attribute.

    For example, what if we have a requirement to ensure that the StartDate is not greater than the EndDate? It would be impossible to do this with the standard out-of-the-box validation attributes. We can write a custom validation attribute that targets the Entity class instead.

    [AttributeUsage(AttributeTargets.Class)]
    public class DateRangeConstraintAttribute : ValidationAttribute
    {
        public DateRangeConstraintAttribute() : base() { }

        public DateRangeConstraintAttribute(string errorMessage)
            : base(errorMessage) { }

        public override bool IsValid(object value)
        {
            if (value == null || value.GetType() != typeof(Leave))
                return true;

            var leave = value as Leave;

            return !(leave.StartDate > leave.EndDate);
        }

        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            if (value == null || value.GetType() != typeof(Leave))
                return ValidationResult.Success;

            if (!IsValid(value))
            {
                var result = new ValidationResult("Start date cannot be greater than End date.",
                    new List<string>() { "StartDate", "EndDate" });

                return result;
            }

            return base.IsValid(value, validationContext);
        }

    }

    We can then apply the custom validation attribute to our Entity class like this:

    [DataContract]
    [DateRangeConstraint]
    public partial class Leave

    Our validation will be invoked whenever data annotations are evaluated (either automatically in the UI or manually in business components).

    Things To Know...

    There are some other things that I discovered about Data Annotations.

    All validations are executed; the process does not stop at the first error encountered. If your validation needs to be executed step-by-step, you may need to rethink your strategy.

    The order of validation is unpredictable and difficult to control. Most of the time, it depends on which attribute was declared first.

    Hope you find this information useful.

    Asphalt 8: Airborne - Strategy Guide

    I was never good at Racing games. I tried Daytona USA when I was young and I sucked at it. My first official racing game on my XBOX 360 was Project Gotham Racing 3 and I sucked at it as well. So I have never played a single game of Forza, Burnout, Need for Speed or any other racing games since then... until recently, a friend wanted me to check if Asphalt 8 would lag on my Samsung Galaxy Note 3.

    Innocently, I downloaded it and was immediately impressed by the graphics (I'm a sucker for graphics). I gave it a spin and I was instantly hooked. In case you are new, Asphalt 8: Airborne is one of the best FREE racing games, developed and published by Gameloft. It is available on all mobile phone platforms and recently, it also made its way to the Windows 8 store.


    After spending months on it, I think I'm kinda cured of racing game n00bness (my friend said it could be the accelerometer) and I think I'm ready to provide a strategy guide for it. Although the game is free, it comes with in-game purchase options to ease frustrations, and since I'm a cheapskate, this strategy guide is intended for those who are like me... die-die also won't purchase anything. That means this guide will help you play the game without purchasing any add-ons.

    And if you are not confident taking advice from a racing n00b like me, I hope the following screenshots will help boost your confidence. I'm currently at 897/900 stars and I have bought all the 47 cars that originally came with the game. I'm still saving up money for the new cars which they have added.




    Now let's get started!

    Basic Tips

    Be Familiar With the Routes
    It isn't just about speed and skill, but also about taking the shortest route to victory. The tracks are made up of multiple roads with occasional splits. Therefore, choosing the correct path for the shortest route can still net you a victory even if you do not have a fast car and your rivals are unfamiliar with the roads.

    Sometimes, there are alternate routes that may seem longer but offer nitro refills along the way. If you can consistently pick up all of the refills on that route while on your nitro boost, you may end up being faster than taking the shorter route. So be familiar with all the routes!

    Knock'em Down
    Collision with other rival cars is OK in Asphalt. If there are any rivals in front of (or beside) you, hit the nitro and knock them down. Performing Knockdowns will not only slow your rivals down but will also refill your nitro bar.

    One thing about knockdowns is that the camera angle rotates and, if you are not careful, you could get wrecked when the camera angle restores afterwards (because you could not see what's in front of you). It is also more difficult to perform knockdowns in multi-player.

    Hit That Nitro!
    Nitro in this game can be refilled easily so don't be stingy with it. Hit the nitro whenever you are on a straight; if you notice, even the A.I. hits the nitro at the beginning of every race. If you have a short nitro bar, you may want to perform small drifts to fill it up for a longer nitro boost later. You can also cruise close to incoming vehicles to earn a Near Miss, which will fill the nitro bar as well.

    While it may be a habit to always go for the Perfect Nitro, it can also be strategic to manually control the nitro boost, i.e. an L1 nitro when first entering a bend, an L2 nitro when tackling the second bend and finally, an L3 nitro boost when exiting the bend (e.g. in London). You can also use nitro to exit out of a drift (e.g. in Monaco).

    Get Airborne!
    Never give up the chance to perform Flat Spins or Barrel Rolls when you see a ramp. Performing such airborne stunts will quickly help fill your nitro bar and, in some stages, earn you stars. Two or more flat spins or barrel rolls will get your nitro bar fully loaded (depending on your car). The trick to getting more flat spins is to have your car drift onto a ramp at slower-than-max speed (but not too slow), and the trick to getting more barrel rolls is to have your car go off a curved ramp at the highest speed (and at the right angle).



    Take note that you can perform flat spins on curved ramps (drifting on the curve) and barrel rolls on normal ramps too (by hitting the sides); however, performing such intricate stunts may lead to wrecks if you are not used to them. In my experience, the chances of a mishap on a barrel roll are much higher than on a flat spin.

    Take note that getting airborne will slow you down, so if you are in a tight race for position (e.g. multi-player), you may want to tone down the stunts and only do them in strategic spots.

    Advanced Tips

    Unlocking Seasons
    Your objective in the game should be to focus on earning as many stars as possible to unlock new races/challenges and new seasons. The races in the later seasons will earn you more money per win; therefore, you should focus on cars and upgrades that will help you win races in later seasons.

    You don't have to finish off every race/challenge in one season to unlock the next. Unlocking seasons is based on stars. It is good to have as many seasons unlocked as possible. There are 8 seasons in total.

    The Right Car Matters
    Buying the right cars is the key to unlocking more races in this game. On my second play-through, I discovered that buying the right cars could unlock everything up to Season 8 before running out of cash for the top-tier cars.

    The general guideline is to purchase cars that will allow you to net the most stars to unlock new races/challenges. That means you should prioritise your money on cars that can be used to complete as many races as possible for the maximum return on investment (ROI). For example, cars like the Lamborghini Countach 25th Anniversary have very low ROI because they can only be used in one or two races, whereas the Lamborghini Veneno can let you win many Class S category races.

    My recommendation is to purchase these cars in Class D first:
    • Dodge Dart GT (Your 1st free car)
    • Audi R8 e-tron
    • Tesla Model S
    • Scion FR-S

    Then proceed to get the first car in every class:
    • Audi RS3 Sportback (Class C)
    • Citroen Survolt (Class B)
    • Cadillac CTS-V Coupe Race Car (Class A)
    • Lamborghini Veneno (Class S)

    With this inventory you should be able to cover the basic races/challenges in every class. When undecided, always go for the car that can help you race in higher season races because those will net you more money in return.

    Other cars that you may want to consider buying after you have the above:
    • Chevrolet Camaro GS (Class C)
    • Lotus Exige S Roadster (Class C)
    • Nissan GT-R [R35] (Class B) or Ferrari 458 Italia (Class B)
    • Ford Shelby GT500 (Class B)
    • Dodge Viper SRT10 ACR-X (Class A)
    • Pagani Zonda R (Class A)
    • Chevrolet Corvette C7 (Class A)
    • Ferrari FXX Evoluzione (Class S)

    At this point you should be bankrupt *LOL* and you will either be playing multi-player or replaying older races to farm for money. If you can hoard up 325,000 in credits, get the Mercedes-Benz Silver Lightning, as that will help you clear most of the Class S category races and the Mercedes races. But be warned, it is not a very easy car to control.



    Car Upgrades
    Before you start thinking, "Since I don't have money to buy new cars, I'd better upgrade my existing ones" - my advice to you is DON'T. You will soon discover that it is more expensive to upgrade cars than to buy new ones. Most of the Season 8 races will require you to have upgraded cars and some of them may require your cars to be upgraded to the max. So leave upgrading to the final 2 seasons.

    If you must upgrade your cars for a race (like when you have that 'I must win this darn race' feeling), do not upgrade beyond level 1. Anything above level 1 is just too expensive. When upgrading cars, focus on Acceleration, Top Speed and Nitro. There are 5 levels for each upgrade, so that's a lot of money required!



    Push Beyond the Rank Recommendations
    Every race has a car rank recommendation displayed in the top right corner. You do not need to meet or exceed the rank recommendation in order to win first place. In my experience, you can be short by around 50-80 points and still win the race, so don't let that number frighten you.



    Gate Drifts
    I hate them. I'm stuck at 897/900 stars because of the last Gate Drift challenge. My advice to you is to attempt gate drifts as early as possible, without an upgraded car. Yup! The moment you upgrade the Handling on your car, it becomes more difficult to drift. So, complete those gate drifts as soon as you can, before you use the car for other races.

    Farming for Credits
    You can earn money by replaying previous races. The best races are the ones in later seasons, where you will net more money when you come in first place. Alternatively, you can play multi-player to farm for money and improve your multi-player ranking as well.

    Before the second update (as on Windows 8), playing multi-player would not get you much money, so your best bet is to replay previous races. Go for the ones in later seasons and aim to win first place. With the second update (as on the phones), you can earn more money by playing multi-player and it also comes with consecutive win rewards. You only need to finish third place or better for it to count as a win.

    Tournament is the Key to Perks
    When you start to own those kick @$$ cars, try to race in tournaments. You can easily win Credits (a.k.a. money), Nitro Starters, Tuning Kits and Car Upgrades - basically, all the stuff that usually needs to be purchased with real money can be won (including cars).



    The Final Tip
    Here's something I think you should know. The game works much better with an XBOX 360 Controller on Windows 8. (Remember to turn on Auto-Acceleration to save your RT button).

    There you go - Race On!

    Entity Framework vs. EL DAAB Performance

    Lately, I have encountered several questions regarding the performance of Entity Framework (EF). I have been conducting feasibility studies on EF since version 4.0 and have followed it through several versions, but every time I saw the performance results, I was not convinced to use it. That said, I can see performance improvements in every new version. Wanting to satisfy my curiosity and to update my impressions of EF, I thought I would conduct another round of performance testing on it.

    I realized that not only have I built an arsenal of samples, but I have also created an avenue to use those samples for performance testing (all thanks to my Associate, who reminded me by constantly using the samples in demos and test runs). For the showdown, I will pit the ASPNET-WCF-EF sample against the ASPNET-WCF-DAAB sample, since both use exactly the same code for all the layers and differ only in the data access technology (all thanks to the swap-in-swap-out capability of the Layered Architecture).

    The Test Machine 
    • Windows Server 2012 R2 x64
    • Intel Core i7-4800MQ CPU 2.7GHz (Quad Core HT)
    • 16 GB RAM
    • 500 GB Solid-State Hybrid Drive
    • Visual Studio 2013 Ultimate Update 1
    • Microsoft SQL Server 2012 x64

    Preparation

    To make use of the samples for the performance tests, a few changes had to be made to both sets of code.

    1. Disabling IsOverlap Check in Business Component

    The samples contain an IsOverlap business logic check to prevent duplicate records from being inserted into the Leaves table. This would cause the unit tests to fail when a high volume of test data is pushed through. To allow the application to continue running even with overlaps, I commented out the line where the exception is thrown. You can locate the line of code in the Apply method of LeaveComponent.cs.

    // Check for overlapping leaves.
    if (leaveDAC.IsOverlap(leave))
    {
        //throw new ApplicationException("Date range is overlapping with another leave.");

    }

    This allows the check to keep running, but the exception is suppressed.

    2. Disable Enable Edit and Continue

    Enable Edit and Continue needs to be disabled in the LeaveSample.Hosts.Web project to keep it running for the test (see here for details). Alternatively, you can publish it to IIS if you want. For this test, I'm using the default IIS Express.


    With this done, we can start the LeaveSample.Hosts.Web project in debug mode to launch the IIS Express instance and then close the browser that was launched. The host will keep running in the background.

    Take note that this needs to be done for both samples and, at any one time, only one of the samples can be open and running in Visual Studio. This is to avoid any conflicts.

    A Glimpse of the Code

    In case you do not want to download the samples and are wondering what the code looks like, here are some code snippets to give you an idea. I recommend you download the samples and play with them yourself.

    The EF code looks like...

    public Leave Create(Leave leave)
    {
        using (var db = new DbContext(CONNECTION_NAME))
        {
            db.Set<Leave>().Add(leave);
            db.SaveChanges();

            return leave;
        }

    }

    and the DAAB code looks like...

    public Leave Create(Leave leave)
    {
        const string SQL_STATEMENT =
            "INSERT INTO dbo.Leaves ([CorrelationID], [Category], [Employee], [StartDate], [EndDate], [Description], [Duration], [Status], [IsCompleted], [Remarks], [DateSubmitted]) " +
            "VALUES(@CorrelationID, @Category, @Employee, @StartDate, @EndDate, @Description, @Duration, @Status, @IsCompleted, @Remarks, @DateSubmitted); SELECT SCOPE_IDENTITY();";

        // Connect to database.
        Database db = DatabaseFactory.CreateDatabase(CONNECTION_NAME);
        using (DbCommand cmd = db.GetSqlStringCommand(SQL_STATEMENT))
        {
            // Set parameter values.
            db.AddInParameter(cmd, "@CorrelationID", DbType.Guid, leave.CorrelationID);
            db.AddInParameter(cmd, "@Category", DbType.Byte, leave.Category);
            db.AddInParameter(cmd, "@Employee", DbType.AnsiString, leave.Employee);
            db.AddInParameter(cmd, "@StartDate", DbType.DateTime, leave.StartDate);
            db.AddInParameter(cmd, "@EndDate", DbType.DateTime, leave.EndDate);
            db.AddInParameter(cmd, "@Description", DbType.AnsiString, leave.Description);
            db.AddInParameter(cmd, "@Duration", DbType.Byte, leave.Duration);
            db.AddInParameter(cmd, "@Status", DbType.Byte, leave.Status);
            db.AddInParameter(cmd, "@IsCompleted", DbType.Boolean, leave.IsCompleted);
            db.AddInParameter(cmd, "@Remarks", DbType.AnsiString, leave.Remarks);
            db.AddInParameter(cmd, "@DateSubmitted", DbType.DateTime, leave.DateSubmitted);

            // Get the primary key value.
            leave.LeaveID = Convert.ToInt64(db.ExecuteScalar(cmd));
        }

        return leave;

    }

    I know you are already screaming - "Holy Cow!!!"

    Unit Test: Single Run

    I noticed that unit tests in Visual Studio now provide execution times. This is very handy and I would like to take advantage of it. I chose the ApplyThenApproveTest unit test method since it simulates a complete Apply and Approval of a leave transaction, which should give a good mix of INSERT and UPDATE operations against multiple tables, as well as SELECT operations.
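
    For context, the shape of such a test would be roughly as follows. This is a minimal sketch only, assuming the sample's LeaveComponent exposes Apply and Approve methods; the actual ApplyThenApproveTest ships with the samples, and the 1000-transaction run further down presumably repeats the same body in a loop.

    [TestMethod]
    public void ApplyThenApproveTest()
    {
        // Sketch only: exercise one complete leave transaction (INSERTs, UPDATEs and SELECTs).
        var component = new LeaveComponent();

        var leave = BuildTestLeave();       // hypothetical helper that builds a Leave entity
        component.Apply(leave);             // assumed to insert the leave
        component.Approve(leave.LeaveID);   // assumed to update the leave's status
    }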

    Running a single unit test on each sample produces the following results:

    Method using Entity Framework 6.0.2 took 171 ms to complete 1 transaction.

    Method using Enterprise Library 6.0 DAAB took 139 ms to complete 1 transaction.

    Unit Test: Looping in 1000

    That's pretty good for both since they completed in milliseconds. Let's raise the stakes by looping 1000 times and see the results:

    Method using Entity Framework 6.0.2 took 44 secs to complete 1000 transactions.

    Method using Enterprise Library 6.0 DAAB took 30 secs to complete 1000 transactions.

    Instrumented Performance and Diagnostics Profiler

    Let's dig slightly deeper to see the time breakdown. Visual Studio Ultimate comes with an awesome Performance and Diagnostics profiler. I used instrumented profiling on both samples and here are the results:

    Apply method using Entity Framework 6.0.2 took an Average Elapse Time of 1,124.97
    Extra observation: Querying took an Average Elapse Time of 2,937.66

    Apply method using Enterprise Library 6.0 DAAB took an Average Elapse Time of 432.23
    Extra observation: Querying took an Average Elapse Time of 861.90

    From the Call Tree we can observe that the functions using DAAB are considerably faster than those using EF.

    Load Test

    Finally, let's see how the two perform under load. I will use Visual Studio Load Test to load the single run ApplyThenApproveTest unit test method. I will use a constant load of 100 users and run the test for 1 minute.


    Entity Framework 6.0.2 completed with 1692 Total Test runs with an Avg. Test Time of 3.41 sec but it gave 154 errors. The error thrown was 'An error occurred while reading from the store provider's data reader'.


    Enterprise Library 6.0 DAAB completed with 8236 Total Test runs with an Avg. Test Time of 0.70 sec and gave 0 errors.

    I was surprised (and concerned) by this result, as it appears that EF may not be able to perform under a highly stressed load.

    What is Going On Behind The Scene?

    Out of curiosity, I fired-up SQL Profiler to see what is being sent to the SQL Server.

    INSERT statement generated by Entity Framework 6.0.2

    INSERT statement going through Enterprise Library 6.0 DAAB

    SELECT statement with Paging generated by Entity Framework 6.0.2

    Custom Paging SELECT statement going through Enterprise Library 6.0 DAAB

    Summary and Wrap Up

    The performance test results clearly show that Entity Framework is still not as efficient as wrapper libraries such as DAAB, which use native ADO.NET. However, EF does make code a lot easier to read and a lot shorter to write. The load test findings do raise some concerns for high-performance applications that require high concurrency.

    Because of these findings, I would have to skip EF implementation again for this round :( and wait for more improvements to it. My choice of not using EF is bound by the constraints of my environment, which requires applications to process millions of transactions per hour. This is the volume of processing in a telco environment.

    Whether to use EF or not is entirely up to your environment and your choice (as long as you know the limitations of the chosen technology). If your system does not require such crazy high loads, I don't foresee you having any problems using EF.

    I hope this post has given you good enough insights into Entity Framework. If you have ideas on how to fix the EF exception, please feel free to post them in the comments.

    Entity Framework 6 vs EL6 DAAB Performance - Rematched

    After I published the benchmark results for Entity Framework (EF) vs. DAAB, I was asked to verify whether the performance of EF could be improved by using Stored Procedures (SP). Logically speaking, using SPs may help improve performance, but it should improve for both frameworks and not favour one over the other. Meaning, if DAAB is shown to be faster than EF, any tuning on the database side should not be able to make EF faster than DAAB - at best, it can only make EF perform better than its own previous results.

    To make sure no stone is left unturned, I decided to do a rematch of the performance testing using the same methods and machine specifications from the previous test. The only difference is that the code has been converted to use Stored Procedures.

    Here's a code snippet for one of the methods used in the DAAB code that has been converted to use SP:

    public Leave SelectById(long leaveID)
    {
        const string SQL_STATEMENT = "GetLeaveById";

        Leave leave = null;

        // Connect to database.
        Database db = DatabaseFactory.CreateDatabase(CONNECTION_NAME);
        using (DbCommand cmd = db.GetStoredProcCommand(SQL_STATEMENT))
        {
            db.AddInParameter(cmd, "@LeaveID", DbType.Int64, leaveID);

            using (IDataReader dr = db.ExecuteReader(cmd))
            {
                if (dr.Read())
                {
                    // Create a new Leave
                    leave = LoadLeave(dr);
                }
            }
        }

        return leave;
    }

    And here's the equivalent method in EF:

    public Leave SelectById(long leaveID)
    {
        using (var db = new DbContext(CONNECTION_NAME))
        {
            var paramLeaveID = CreateParameter("LeaveID", DbType.Int64, leaveID);

            var result = db.Database.SqlQuery<Leave>("GetLeaveById @LeaveID",
                paramLeaveID).ToList<Leave>();

            Leave leave = result.Count > 0 ? result[0] : null;

            // Return result.
            return leave;
        }
    }
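
    The CreateParameter call above refers to a small helper in the sample that is not reproduced in this post. As an assumption of its shape only (using System.Data.Common and System.Data.SqlClient), it could look something like this:

    // Hypothetical sketch of the CreateParameter helper; the sample's actual code may differ.
    private DbParameter CreateParameter(string name, DbType type, object value)
    {
        return new SqlParameter
        {
            ParameterName = name,          // SqlClient accepts the name with or without the '@' prefix
            DbType = type,
            Value = value ?? DBNull.Value  // guard against null values
        };
    }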

    All the SPs were imported and mapped in the EF Model.


    I was also tipped off by some friends who have experience with EF to try setting the following options to improve performance.

    db.Configuration.LazyLoadingEnabled = false;
    db.Configuration.AutoDetectChangesEnabled = false;

    db.Configuration.ValidateOnSaveEnabled = false;

    However, they don't make much difference here because the entities used are Plain Old CLR Objects (POCO), so change tracking is not available, and they are not linked by navigation properties, so lazy loading won't kick in.

    Now let the rematch begin.

    Unit Test: Single Run

    The ApplyThenApproveTest unit test method was chosen again for the test and the results are:

    Method using Entity Framework 6.0.2 took 168 ms to complete 1 transaction.

    Method using Enterprise Library 6.0 DAAB took 137 ms to complete 1 transaction.


    Looks like the performance of EF has indeed improved a little with SPs, but the performance of DAAB has dropped slightly. Nevertheless, the performance gap between EF and DAAB remains roughly consistent with the previous non-SP test.

    Unit Test: Looping in 1000

    Let's see how both of them fare in loops.

    Method using Entity Framework 6.0.2 took 46 secs to complete 1000 transactions.

    Method using Enterprise Library 6.0 DAAB took 32 secs to complete 1000 transactions.

    Once again the results show that EF performs slower than DAAB, and both seem to be about 2 secs slower than their previous non-SP results.

    Instrumented Performance and Diagnostics Profiler

    Let's dig into the profiler and see some results:

    Create method in the data layer for Entity Framework 6.0.2 took an Avg. Elapse Time of 1,299.91. The EF method DbContext.SaveChanges() took an Avg. Elapse Time of 1,133.98 and DbSet.Add() took 150.57.

    Create method in the data layer for Enterprise Library 6.0 DAAB took an Avg. Elapse Time of 23.06. The DAAB method Database.ExecuteScalar() took an Avg. Elapse Time of 19.71.

    Strangely, it shows that both EF and DAAB take longer when SPs are used (could this be due to the extra processing needed to map to the SPs internally?). From the call tree we can observe that EF is still slower than DAAB.

    Load Test

    Now for the grand finale! Let's see how they perform in load testing.

    Entity Framework 6.0.2 completed with 6945 Total Test runs with an Avg. Test Time of 0.84 sec and gave no errors.

    Enterprise Library 6.0 DAAB completed with 7310 Total Test runs with an Avg. Test Time of 0.79 sec and gave no errors.

    Unlike the previous non-SP test, EF ran more stably this time and produced no errors under high load. Because of that, its performance was also able to almost match the new results for DAAB. This could be due to the more optimized SQL statements used in the SPs, as opposed to the ones auto-generated by EF's engine. The performance of DAAB fell somewhat when using SPs.

    Behind The Scene

    Let's take a look at what's going on behind the scene.

    INSERT operation using Stored Procedure using Entity Framework 6.0.2

    INSERT operation using Stored Procedure using Enterprise Library 6.0 DAAB

    SELECT operation using Stored Procedure using Entity Framework 6.0.2

    SELECT operation using Stored Procedure using Enterprise Library 6.0 DAAB

    From SQL Server Profiler, we can see that both EF and DAAB now use the specified stored procedures to perform their operations. Curiously, EF still uses sp_executesql for the queries, but you can see that it is no longer using the verbose auto-generated SQL statements it used previously.

    Conclusion

    It is common for us to think that using Stored Procedures (SP) may help improve the performance of our applications, but it seems that RDBMSs have come a long way and have evolved tremendously in providing us with the required performance. It should be noted that both native ADO.NET and EF use sp_executesql internally, which gives us protection against SQL-injection attacks (when used with named parameters) and cached execution plans. While sp_executesql can also be tuned further, it suffices for general-purpose usage.

    The results in this rematch may not prove that EF can perform better than DAAB by employing SPs, but they do show that EF is more stable and provides better query performance with properly written SPs. The only challenge I experienced is the rigid nature of SPs, which makes it difficult to accommodate EF's dynamic nature, e.g. dynamic column sorting and parameter filtering, and I would certainly not encourage constructing dynamic SQL inside SPs.

    For now, I will still need to stick with DAAB.

    Disclaimer: The results are based on my own research and this is just a simple benchmark of EF and DAAB. It is not about performance tuning. If you are unhappy with the results, I would urge you to conduct your own tests in your own environment for verification. You never know - some things work differently in different environments.

    Runtime Caching with SqlChangeMonitor

    I was looking for a cache solution similar to the one provided by System.Web.Caching for my services layer, and I was introduced to System.Runtime.Caching by my associate (after he completed the research assignment which I gave him, that is ;p ). New in .NET 4, System.Runtime.Caching provides an easy-to-use cache solution for applications outside of ASP.NET and is just what I needed.

    Some time back I talked about Query Notifications & SqlDependency. Now, I'm gonna do something similar, but I will attempt to wire up a SqlDependency to a SqlChangeMonitor to allow our cached data to be auto-refreshed when there are changes to the database table. This is useful for caching reference data for services that are built on WCF (or Web API).

    I will show it with a very rough code example. Please refactor it to your needs.

    Start by declaring a few global variables.

    private static MemoryCache _cache;
    private SqlChangeMonitor _monitor;
    private SqlDependency _dependency;
    private bool _hasDataChanged;

    The static MemoryCache variable is defined for coding convenience so that we don't have to keep referring to MemoryCache.Default all the time.

    At the constructor of the class, initialize the _cache and call the SqlDependency.Start() method.

    _cache = MemoryCache.Default;

    SqlDependency.Start(CONNECTION_STRING);

    Next, create the main function that loads the data into a CacheItem:

    private CacheItem LoadData(out CacheItemPolicy policy)
    {
        const string SQL_STATEMENT = "SELECT [ID], [Data] FROM dbo.TestData";

        var data = new List<TestData>();

        var db = new SqlDatabase(CONNECTION_STRING);
        using (DbCommand cmd = db.GetSqlStringCommand(SQL_STATEMENT))
        {
            // Initialize SqlDependency
            _dependency = new SqlDependency(cmd as SqlCommand);
            _dependency.OnChange += dependency_OnChange;

            using (IDataReader dr = db.ExecuteReader(cmd))
            {
                while (dr.Read())
                {
                    // Create a new TestData
                    var testData = new TestData();

                    // Read values.
                    testData.ID = GetDataValue<int>(dr, "ID");
                    testData.Data = GetDataValue<string>(dr, "Data");

                    // Add to List.
                    data.Add(testData);
                }
            }
        }

        // Create a new monitor.
        _monitor = new SqlChangeMonitor(_dependency);

        // Create a policy.
        policy = new CacheItemPolicy();
        policy.ChangeMonitors.Add(_monitor);
        policy.UpdateCallback = CacheUpdateCallback;

        // Put results into Cache Item.
        var item = new CacheItem("TestData", data);

        // Reset the data changed flag.
        _hasDataChanged = false;

        return item;
    }


    Few things to note in this code-snippet:

    1. The CONNECTION_STRING is a constant I defined for my connection string and its value is not shown in this example.
    2. I'm using Enterprise Library Data Access Application Block (DAAB) but you can use standard ADO.NET.
    3. I'm loading the data into an Entity called TestData with a custom GetDataValue method that checks for null before assigning the values (a rough sketch of this helper follows below).
    4. I'm using a hard-coded SQL statement but you can change it to a stored procedure if you wish.

    In the example, we pass the command into a new SqlDependency instance after creating it and subscribe to the OnChange event. After loading the data, we create a new SqlChangeMonitor and pass the SqlDependency instance to its constructor. We then create a new CacheItemPolicy and add the SqlChangeMonitor to its ChangeMonitors collection. Next, we register the UpdateCallback and create a new CacheItem to store the data. Finally, we set the _hasDataChanged flag to false before returning the CacheItem.
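
    The GetDataValue helper mentioned in note 3 is not shown in this post. A rough sketch of what such a null-safe reader helper could look like (an assumption on my part, not the sample's actual code) is:

    // Hypothetical null-safe reader helper; the real implementation may differ.
    private static T GetDataValue<T>(IDataReader dr, string columnName)
    {
        int ordinal = dr.GetOrdinal(columnName);

        // Return the default value (null/0) when the column is NULL.
        if (dr.IsDBNull(ordinal))
            return default(T);

        return (T)dr.GetValue(ordinal);
    }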

    The OnChange event doesn't really do much other than setting the _hasDataChanged flag to true when there is a change in the TestData table.

    void dependency_OnChange(object sender, SqlNotificationEventArgs e)
    {
        // DataChange Detection
        _hasDataChanged = true;

    }

    The CacheUpdateCallback method contains a little more code to refresh the cache item in memory.

    private void CacheUpdateCallback(CacheEntryUpdateArguments args)
    {
        // Dispose of monitor
        if (_monitor != null)
            _monitor.Dispose();

        // Disconnect event to prevent recursion.
        _dependency.OnChange -= dependency_OnChange;

        // Refresh the cache if tracking data changes.
        if (_hasDataChanged)
        {
            // Refresh the cache item.
            CacheItemPolicy policy;
            args.UpdatedCacheItem = LoadData(out policy);
            args.UpdatedCacheItemPolicy = policy;
        }

    }

    The _monitor and the OnChange event subscription will be cleaned up in this method and the _hasDataChanged flag is checked to see if the items in cache should be refreshed. Without doing this, the data in the cache will be invalidated and reset to null.

    Finally, the method that we want to expose out to callers to call will be like the following:

    public List<TestData> Select()
    {
        if (_cache["TestData"] == null)
        {
            // Create a policy.
            CacheItemPolicy policy = null;

            // Load data into Cache Item.
            var item = LoadData(out policy);

            // Set Cache Item into cache with the policy.
            _cache.Set(item, policy);
        }

        return _cache["TestData"] as List<TestData>;

    }

    The method first checks the cache for any data and if none is available, it will query the database for it and sets it to the cache. If the data is available, it will be retrieved directly from the cache.

    I hope this post is useful to you. Please make sure that you follow the rules and guidelines of SQL Server Service Broker (e.g. ALTER DATABASE LeaveSample SET ENABLE_BROKER) and Query Notifications to get it working. :)
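
    For completeness, here is a minimal cleanup sketch, assuming the class owning the cache implements IDisposable and uses the same CONNECTION_STRING passed to SqlDependency.Start() in the constructor:

    public void Dispose()
    {
        // Unhook the change notification and dispose the monitor.
        if (_dependency != null)
            _dependency.OnChange -= dependency_OnChange;

        if (_monitor != null)
            _monitor.Dispose();

        // Stop the Query Notification listener started earlier with SqlDependency.Start().
        SqlDependency.Stop(CONNECTION_STRING);
    }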

    LASG: Getting Started

    This walk-through will get you started with the Layered Architecture Solution Guidance (LASG) Visual Studio extension by creating the project structure for a new layered web application.
    1. Launch Visual Studio 2013 with administrator privileges.
    2. Open the File menu, click New and then click Project...
    3. At the New Project dialog, under Installed Templates on the left pane, expand Guidance Packages and click Layered Architecture Solution Guidance.

    4. Leave the Layered Application solution template selected as default.
    5. Enter a Name for the solution, e.g. Tutorial, and click OK.
    6. At the Choose LASG Solution Template dialog, select Layered Web Application.

      Note: The Project Namespace will follow the solution name that was provided earlier. You can also check/uncheck the projects which you want to add to/remove from your solution from the Project templates list.
    7. Click OK, and wait for the projects to unfold.
    8. For Layered Web Applications, the New ASP.NET Project dialog box will be displayed. This is Visual Studio 2013's default prompt for creating web projects. Choose the Empty template and check the Web Forms checkbox under Add folders and core references for.

      Note: You may choose other predefined Visual Studio 2013 web templates if you wish.
    9. Click OK to complete the unfolding process. Once completed, the Layered Web Application solution is now ready for use.

      Tip: All the relevant project references were automatically added between the projects for your convenience.

    Note: At the point of writing, LASG only supports the English version of Visual Studio.

    Get Asset Tag from BIOS

    It was that time of the year when the auditors were visiting again, and I found this handy command to retrieve the Serial Number and Asset Tag from the system BIOS.

    Open a command prompt, and type in:

    wmic SystemEnclosure get SerialNumber, SMBIOSAssetTag

    ... and the values will be shown.
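
    If you prefer to retrieve the same values from code, here is a small C# sketch using the System.Management WMI classes (add a reference to System.Management.dll); wmic's SystemEnclosure alias maps to the Win32_SystemEnclosure class:

    using System;
    using System.Management;

    class AssetTagReader
    {
        static void Main()
        {
            // Query the same WMI class that the wmic command reads from.
            var searcher = new ManagementObjectSearcher(
                "SELECT SerialNumber, SMBIOSAssetTag FROM Win32_SystemEnclosure");

            foreach (ManagementObject enclosure in searcher.Get())
            {
                Console.WriteLine("Serial Number: {0}", enclosure["SerialNumber"]);
                Console.WriteLine("Asset Tag:     {0}", enclosure["SMBIOSAssetTag"]);
            }
        }
    }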

    Note: Somehow the BIOS on my Dell machine only allows up to 10 characters for the tag. If your company uses anything longer, then it won't fit :'( Darn! And I was hoping I wouldn't have to flip my notebook over the next time they want to see it.