I’ve been poking around in a lot of JavaScript over the last year or two and have been refining this layered architecture for setting up applications. The main idea behind it is to cover all the usual bases in a way that also reduces the number of requests and still performs very well. The layers are set up like so:
- Web front end (static HTML)
  - UI views, scripts, and related image files
  - Localized strings as JSON files
- Middle tier (WebAPI/SignalR)
- Back end (database/web services)
So, to explain a little about this layout: the front end is all static. The files may be generated by a compiler or preprocessor, but once published they are static and require no server processing. This lets you put them into a CDN, whether Azure’s or Amazon’s, for massive scale and economy. Keeping the localized strings as static JSON allows them to be packaged up with the front end and served the same way. Depending on the configuration and build process, these files could even be combined and minified further.
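Just to sketch what that looks like on the client, here is roughly how the app could pull the right strings file at startup. The CDN base URL, the `strings.<locale>.json` naming convention, and the `loadStrings` helper are all made up for illustration; they aren’t part of any particular framework.

```javascript
// Minimal sketch: load a localized strings file straight from the CDN.
// The CDN base URL and the strings.<locale>.json naming are assumptions
// for illustration only.
const CDN_BASE = "https://mycdn.example.com/app";

async function loadStrings(locale) {
  const response = await fetch(`${CDN_BASE}/i18n/strings.${locale}.json`);
  if (!response.ok) {
    // Fall back to a default locale if the specific file is missing.
    return fetch(`${CDN_BASE}/i18n/strings.en-US.json`).then(r => r.json());
  }
  return response.json();
}

// Usage: pick the locale from the browser and cache the result for the UI layer.
loadStrings(navigator.language || "en-US").then(strings => {
  window.appStrings = strings;
});
```

Because the strings are just static files on the CDN, they get the same caching and edge distribution as the rest of the front end.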
The middle tier is your standard WebAPI and/or SignalR server-side code layer hosted on an ASP.NET server, usually Azure Web Sites in my case. This tier is basically an API that provides the site with any dynamic actions and information it needs.
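From the client’s point of view, talking to that tier is just plain API calls. Here is a hypothetical example; the `/api/orders` route, the bearer token, and the Azure Web Sites host name are assumptions for illustration, not a prescription.

```javascript
// Sketch of the static front end calling the middle tier.
// The host name, /api/orders route, and token handling are hypothetical.
const API_BASE = "https://myapp.azurewebsites.net";

async function getOrders(token) {
  const response = await fetch(`${API_BASE}/api/orders`, {
    headers: { "Authorization": `Bearer ${token}` }
  });
  if (!response.ok) {
    throw new Error(`API call failed: ${response.status}`);
  }
  return response.json();
}
```

Everything dynamic flows through calls like this, which is what lets the rest of the site stay on the CDN.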
Finally, the back end consists of the database used by the middle tier, usually a SQL server of some kind, plus any external web services it depends on. You could lump the external web services into the middle tier, but I prefer to think of them as something separate for the sake of organization.
There are some interesting issues you run into when implementing this pattern, including how to handle authorization and security trimming. I generally move the navigation page list into a JSON object that can be generated by the API. In this manner you avoid exposing all your pages to the client, even though the templates for those pages may exist in your CDN as public files. All of the security should be enforced on the middle tier so that users cannot perform actions they are not authorized to perform.
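One way to wire up that security-trimmed navigation is sketched below. The `/api/navigation` endpoint and the `{ title, view }` item shape are hypothetical; the point is only that the menu is built strictly from what the API returns for the current user, never from a hard-coded page inventory in the static files.

```javascript
// Sketch of security-trimmed navigation. The /api/navigation endpoint,
// the { title, view } item shape, and the #nav element are assumptions
// for illustration.
async function buildNavigation(token) {
  const response = await fetch("/api/navigation", {
    headers: { "Authorization": `Bearer ${token}` }
  });
  if (!response.ok) {
    throw new Error(`Navigation request failed: ${response.status}`);
  }
  const pages = await response.json(); // e.g. [{ title: "Orders", view: "orders.html" }]

  const nav = document.getElementById("nav");
  nav.innerHTML = "";
  for (const page of pages) {
    // Only pages the API returned ever show up in the menu. The view
    // templates may still sit on the CDN as public files, but the client
    // never links to ones the user is not allowed to see, and the API
    // still enforces authorization on every action.
    const link = document.createElement("a");
    link.textContent = page.title;
    link.href = `#/${page.view}`;
    nav.appendChild(link);
  }
}
```

The CDN templates being public is fine as long as the data and actions behind them are locked down at the middle tier.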
I’ve found that this model is fast and performs well under load when done properly. The trick is keeping it simple while using all the tooling available to generate the published result. If anyone is interested, I could go into more detail about that.