Thursday 31 December 2015

AWS: 0. Deploying a WebAPI Service to EC2 instance

I have been reading many great articles and watching videos about the AWS platform from the community for months. Most of these resources form part of a larger solution and sometimes fail to discuss the full picture.

My plan is to start with a very simple project using ASP.NET MVC WebApi and deploy it to AWS. I know there are pre-baked services such as AWS Elastic Beanstalk where deployment is made simple with templates. That is not the point.

I am going to start with poor man's deployment techniques (e.g. copy/paste) and work my way up the AWS platform.

To get started, I need a Windows virtual machine in AWS that can be accessed publicly. Then I will deploy my application and access it externally. I am going to deploy an ASP.NET WebAPI project that returns a simple response.

Creating a Virtual Machine


The EC2 Console allows us to launch an instance.

EC2 Dashboard where an EC2 instance can be launched

In order to host the Web API, I am going to select a Windows VM. I have selected a Windows 2012 R2 base image which is AWS free tier eligible.

Public IP address for the instance


I have accepted all the defaults from the wizard, which assigns a public IP to the instance. This is very important because without a public IP address, we cannot connect to the instance from the Internet.

Auto assigning a public IP address to the instance

Connecting to the Virtual Machine


In order to connect to the VM, I need two pieces of information: 1) the public IP address and 2) the Administrator password.

We can find the public IP address from the EC2 Dashboard. During the launch phase of the VM, I had to create/use a key pair (.pem file). This is required to decrypt the administrator password.
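For what it's worth, the same decryption can also be done from the command line with the AWS CLI (the instance id and key file path here are placeholders):

```shell
# Retrieve the encrypted admin password and decrypt it with the launch key
aws ec2 get-password-data --instance-id i-0123456789abcdef0 \
    --priv-launch-key my-key-pair.pem
```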

Once decrypted, I can connect to the VM using Remote Desktop Protocol (RDP).

Deploying the package


The WebApi solution is packaged as a WebDeploy package, simply to aid deployment. The package can then be copied over to the VM (copy from the local machine and paste in the VM).

WebDeploy files copied over to the Windows VM on AWS Platform

The WebDeploy deployment package can be installed using the generated "cmd" file. I am using the "-enableRule:DoNotDelete" flag to prevent WebDeploy from deleting existing files at the destination.
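As a sketch (the package name here is a placeholder; the .cmd file is generated next to the package), the install commands look something like this:

```shell
:: Trial run first (/T shows what would change), then deploy for real (/Y).
:: "-enableRule:DoNotDelete" stops WebDeploy removing files already on the server.
MyWebApi.deploy.cmd /T "-enableRule:DoNotDelete"
MyWebApi.deploy.cmd /Y "-enableRule:DoNotDelete"
```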

Installing the WebDeploy package in Windows VM

Although the installation was successful, the service cannot be accessed publicly. The security group of the instance only allows traffic over port 3389 (for RDP). Therefore I need to open the inbound port for the site.


Opening port 88 to the world

The change to the security group is applied immediately. In addition to this step, I need to open the port through the Windows Firewall because the service is hosted on a non-standard port (I chose to host the site over port 88).
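The firewall change can be made through the GUI or from an elevated prompt, along these lines (the rule name is my own):

```shell
:: Allow inbound TCP traffic on the non-standard port 88 through Windows Firewall
netsh advfirewall firewall add rule name="WebApi (HTTP 88)" dir=in action=allow protocol=TCP localport=88
```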

I can now access the service publicly.

Connecting to the service remotely
So there you go..

In this task, I simply started up a Windows VM, copied the WebApi code over and accessed it over the Internet. Except for a few security group configurations, there was nothing new compared to an on-premises deployment.

In the next post I am going to look at how this process can be improved so that we are ready for the "cloud".


Friday 25 December 2015

AWS: No internet access on default VPC

Launching an EC2 instance on the AWS platform is pretty easy and hardly a task. However, things get pretty interesting when they do not work.

After blindly accepting all the defaults for launching an EC2 instance, I could not open an SSH session to the server. The connection was timing out.

Take 1

I was using a Windows OS to connect to Linux, so the Windows firewall was the first point of interest. The connectivity still failed after opening up port 22 on my Windows machine. So this was not the issue.

Take 2

The AWS security group was next. Security groups provide an instance-level firewall that controls traffic to an instance. Generally AWS warns if you do not open up the SSH or RDP port (I did not see this warning). I checked the security group and the SSH port was open for inbound. All traffic was open for outbound. So the security group was not the issue.

Take 2.1 

In pure desperation I launched a different instance in a separate Availability Zone. This new instance allowed SSH connectivity. So could it be an issue with AWS? The next port of call was to check whether the EC2 service was operating normally in the region using the AWS health portal. This was fine, so it was not AWS.

Take 3

Instances connect to the outside world through an Internet Gateway. As the name implies, an Internet Gateway allows a subnet to connect to the internet. I checked the subnet's route table and I could see that outbound traffic to 0.0.0.0/0 was being forwarded to the Internet Gateway. There was nothing wrong with the Internet Gateway.


Take 4

The network traffic flows through the Network Access Control List (NACL), which sits at the subnet level. To my surprise, both the outbound and inbound traffic were set to DENY. At this point I realised that the network traffic was being stopped at the NACL level. The solution was quite simple: I added a 0.0.0.0/0 ALLOW rule for both inbound and outbound before the DENY rule. Remember that NACLs are stateless and rules are evaluated in order. Adding the ALLOW rule before the DENY rule allowed the network traffic to enter and leave the subnet... and most importantly, I had my internet back.
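For reference, the same fix can be scripted with the AWS CLI (the NACL id is a placeholder). Rule number 100 puts the ALLOW ahead of the default * DENY rule:

```shell
# Allow all inbound traffic (rule 100 is evaluated before the * DENY rule)
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 100 --protocol -1 --rule-action allow \
    --cidr-block 0.0.0.0/0 --ingress

# Allow all outbound traffic
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 100 --protocol -1 --rule-action allow \
    --cidr-block 0.0.0.0/0 --egress
```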


  

Monday 23 November 2015

AWS Command line

I have to admit that AWS Command line probably is the best thing about AWS.

I started using the AWS Command line over PowerShell and it is amazingly approachable and user friendly.

If you are playing around in the UI, I seriously suggest using the AWS Command line. More info here.
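For a flavour of what it looks like (the configured profile and region are whatever you supply to `aws configure`):

```shell
# One-time setup: stores the access key, secret key, default region and output format
aws configure

# List EC2 instances in the configured region
aws ec2 describe-instances

# List SQS queues
aws sqs list-queues
```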

The next step is to start using it on the Mac.

Tuesday 16 June 2015

SQL Server Memory consumption (7GB)

My development machine started to run at a snail's pace with just a single instance of Visual Studio and SQL Server 2014 running. I have 8 cores and 16GiB of memory.

The memory consumption was around 15.3GiB and it was hovering around the same figure continuously. 

SQL Server 2014 memory consumption

After hours of research I noticed that the Distributed Transaction Coordinator (DTC) service was stopped and disabled.

Distributed Transaction Coordinator (DTC) in operation

After starting the DTC service and restarting SQL Server, the memory consumption simply dropped to 7 GiB in total.

Always check the DTC service if memory consumption with SQL Server is way over the top.

Monday 8 June 2015

AWS: The specified queue does not exist for this wsdl version (.NET)

One of the demo applications I was working on suddenly started failing with the above error message.

This issue occurred when the AWS .NET SDK was used. Following is a list of pointers that might help you debug the issue. In my case it was the region information that somehow got lost!

  • Log into the AWS Console and see whether the queue is present. This is a trivial check; however, make sure to check the region (e.g. EU Ireland, US West etc.). The region is very important because queues are tied to the region in which you create them.
  • Then ensure you are using the correct fully qualified queue name. The fully qualified queue name (the ARN, or Amazon Resource Name) includes the region, the account number and the queue name.


Fully qualified queue name
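For illustration, with a placeholder account number, the queue URL and ARN look like this:

```
Queue URL: https://sqs.eu-west-1.amazonaws.com/123456789012/MyQueue
Queue ARN: arn:aws:sqs:eu-west-1:123456789012:MyQueue
```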

Once the above checks are confirmed, you will need to ensure the code is using the same details. Normally the above details are specified in the app.config (or web.config) file.

Specifying region using app-settings keys


The region details are specified in the following format:

Specifying the app settings (old style)

Alternatively, the more modern approach is through the custom app setting section.

Specifying region details through custom config section
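As a sketch, the two styles look roughly like this in app.config (the region value is an example; the assembly name in the section declaration depends on the SDK version, AWSSDK or AWSSDK.Core):

```xml
<configuration>
  <configSections>
    <!-- Required for the custom <aws> section (modern style) -->
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>

  <!-- Old style: a plain app-settings key -->
  <appSettings>
    <add key="AWSRegion" value="eu-west-1"/>
  </appSettings>

  <!-- Modern style: the custom config section -->
  <aws region="eu-west-1"/>
</configuration>
```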

Last resort


Once all of the above verification steps have been exhausted, you need to manually specify the region information in the code. See the following code snippet.
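A minimal sketch of that last resort (the region is an example):

```csharp
// Pin the region explicitly on the service config,
// rather than relying on app.config resolution.
var config = new AmazonSQSConfig
{
    RegionEndpoint = RegionEndpoint.EUWest1   // eu-west-1 (Ireland)
};
var client = new AmazonSQSClient(config);
```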



The region endpoint is a property of the "Config" object and this can be set using the "RegionEndpoint" class.

The above code was the solution I used even with the fully qualified queue name set in the app.config. Pretty odd in my book!!





Monday 1 June 2015

Playing around with Amazon SQS (3) - sending messages

The previous posts illustrated the steps that are advisable for setting up the environment for an SQS application.

It is time to write some code!

The AWS toolkit for Visual Studio


The folks at AWS have been very kind to provide an amazing API coupled with documentation and a toolkit for Visual Studio. The first task is to download and install the AWS Toolkit for Visual Studio. This toolkit exposes services similar to those available in the AWS Console.

AWS toolkit for Visual Studio
The highlighted areas in the screen capture are quite important.

The "profile" directly links to the secret key and access key. Profiles allow developers to store access keys securely rather than keeping them in source control (e.g. the app.config file).

Normally AWS services are created in a region. The queue "MyQueue" was created in the Ireland region, hence EU West is used as the region name.

Refer to this link for more information.

Adding NuGet packages


The NuGet packages for the .NET SDK are now streamlined per AWS service. This keeps each AWS assembly small. Here is a great link around the motivation to modularise the SDK.

Adding AWS SQS NuGet package


* Note that the NuGet package is still in preview (hopefully not for too long!)

Sending messages


The "Amazon.SQS.AmazonSQSClient" class is the entry point to SQS. Once an instance is created, the "SendMessage" method or its async counterpart "SendMessageAsync" is used to send messages.

Consider the following sample code.
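A sketch of what the sample boils down to (the queue URL and message body are placeholders):

```csharp
using (var client = new AmazonSQSClient())
{
    var request = new SendMessageRequest
    {
        QueueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/MyQueue",
        MessageBody = "Hello from the producer!"
    };

    // SendMessageAsync is the async counterpart
    var response = client.SendMessage(request);
    // response carries the id AWS assigned to the stored message
}
```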


One of the questions that might arise here is "how do you specify the account to use?". This is specified in the app.config file.
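A sketch of the relevant app settings (the profile name is an example):

```xml
<appSettings>
  <add key="AWSProfileName" value="Producer"/>
  <add key="AWSRegion" value="eu-west-1"/>
</appSettings>
```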



There is no explicit mention of an account but a "profile" is used to access the service. 

The messages appear in the AWS console once the above code is executed.

Messages in AWS console

OK, the messages are in AWS now, so how to receive these? That is the topic of the next post!

Saturday 30 May 2015

Playing around with Amazon SQS (2)

In the previous post, I showed the way to create a queue and looked at some queue meta data. The next task is to create a couple of users with just enough permissions to use the queue.

There are multiple ways to set permissions to a queue. The permissions can be set at the queue level or user level. For the purpose of this post, I have decided to use user level permissions.

"Producer" and "Consumer" user permission

The Producer user requires permission to send messages to the queue. The Consumer user receives messages from the queue. The first step is to create these users in the IAM console.

Creating Producer and Consumer users

By default the "Generate an access key for each user" option is checked. I have unchecked this for the moment. The access key is used by an application to make a connection to AWS.

The access key comprises two parts: the access key ID and the secret key. Consider these as a username and password. Once the keys are generated, AWS allows only a single chance to download them. However, you can create multiple keys for each user.

User groups

Normally at this stage we would set the required permissions on the users. However, AWS recommends creating a user group and setting permissions at the group level. This may be a bit of an overkill in certain situations; however, setting permissions at the group level is a tried, tested and reusable way to share access.

Creating the Writer and Reader user groups

The AWS wizard prompts to associate a "Policy" when creating a user group. This can be skipped for the moment. Once the two user groups are created they are displayed in the user groups table. Notice the "Users" column: there are no users in these groups yet.

Add user to the group

The Producer and Consumer users must now be associated with their respective groups. This is achieved by selecting the user from the users table and selecting "Add users to Group" from the following group property page.

Adding user to a group

The Producer user should be associated with the QueueWriter group and Consumer user should be associated with the QueueReader group.

Creating a new permission ("Policy")

The permission to access a resource is configured through a "Policy". We need to create a simple policy to restrict access to the QueueWriter user group. This is where AWS shines in my opinion.

We need to select "Policies" from the IAM console and select "Create Policy".

Creating a new policy

Policies themselves come in two flavours. Amazon managed policies are "system" policies that bundle commonly used permissions together. There is also the option to create custom policies. This is massively powerful, as we can start off from an Amazon managed policy and customise it based on our requirements.

Creating a custom policy based on an Amazon managed policy
Search for "SQS" and select the "AmazonSQSReadOnlyAccess" policy to edit.

Editing SQS Read-only policy

Yes, this is a JSON fragment that describes the permissions associated with the read-only policy. This is not quite what is needed, so it must be updated next.

To start with, the policy should be renamed to something meaningful. The "Action" list in the policy specifies the allowed permissions. The "sqs:GetQueueAttributes" and "sqs:ListQueues" actions can be removed, with "sqs:SendMessage" as the replacement.

The "Resource" should be set next. In the previous post there was an explicit note about the fully qualified resource name (ARN) of "MyQueue". That information is required to set the resource details. The completed policy looks like below:

Policy to send messages to the specific queue
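Reconstructed from the description (the account number in the ARN is a placeholder), the finished policy is roughly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage"],
      "Resource": "arn:aws:sqs:eu-west-1:123456789012:MyQueue"
    }
  ]
}
```

The receive policy for the QueueReader group is the same shape, with "sqs:ReceiveMessage" in the action list instead.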

The policy is now ready to be attached to the "QueueWriter" group. Return to the "QueueWriter" summary page and use the "Attach policy" to associate the policy with the group.

New policy attached to QueueWriter user group
The next task is to create another policy for the QueueReader user group to allow receiving messages from the queue.

The receive message policy looks like below:

Receive message policy
At this point the QueueReader group is configured with receive permission for MyQueue.

QueueReader group permissions


Creating access keys for users

The access keys for Producer and Consumer users must be created next. This will allow these users to connect to AWS and send or receive messages from MyQueue.

The keys are created by navigating to the "Security credentials" section in the User summary page.
Simply select "Manage access keys".

Creating access keys for each user

Then follow the instructions and download the keys. Remember, these keys are the username/password combination for each user, so keep them safe!

Now we can write some code!! Get ready to use AWS .NET SDK!!


Playing around with Amazon SQS

The Amazon Web Services (AWS) Simple Queue Service (SQS) is a queueing service in the cloud. Any queueing system has a producer, who sends messages to the queue, and a consumer, who reads these messages from the queue. The whole point of having queueing middleware is not to overwhelm the consumer. Queueing middleware is also a choice for occasionally connected components.

In this post I will describe how to go about implementing a very simple application using AWS SQS.

Getting started


Generally developers (including myself) are guilty of simply commencing development with very little information. As our queue is going to reside in the cloud, we really should take a step back and think carefully. One of the key aspects we need to clearly understand is access permissions. We do not want our queue to be accessible to everyone under the sun.

The service that is most talked about in any AWS conference or webcast is the AWS Identity and Access Management (IAM) service. As the name implies, this service is used to create users/identities and assign permissions.

The producer side of the application sends messages to the queue and therefore only requires send permission. At the same time, the consumer reads messages from the queue. We can create two users and assign them the minimum permissions.

Before we create the users, we should ideally create the queue. In AWS a service entity is known as a "resource", so a queue is a resource. The access permissions can then be applied to the new queue resource.

If you do not have an AWS account, it is time to create one. There is now a free tier allowing a good deal of access to most of the services. Login to AWS console at http://aws.amazon.com.

The username and password you used to log in to the AWS Console are known as the "root" credentials.

Create the queue

In the AWS console, navigate to "Services" => "All Services" and you should see "SQS" as a selection.
AWS SQS Service under All Services

Once selected you can follow the wizard to create a queue. Make sure to accept all the defaults and name the queue "MyQueue". Once the queue is created, it appears in the "Queues" table.

Queues table with the newly created queue

If you now select the queue, AWS shows a whole collection of meta data about the queue. This is the most interesting bit.

Queue details

The URL is the public endpoint of the queue. The ARN refers to the Amazon Resource Name, which is the fully qualified queue name that includes the region, account details and the queue name. Keep a note of the ARN as we will use it when setting permissions for the users.

The next step is to create a couple of users with just enough permissions to send/receive messages.



Friday 8 May 2015

My first ("useful") Android app!

I have been playing around with Android Studio with help from Coursera Android courses.

The app is called "DailySelfie" (I bet the name gives it away!).

The video demo is below (this was created as part of the final submission for one of the courses):



In this post I am going to discuss how this is built and what fundamental elements were used.

Intents and Activities

The "Intent" is the eventing system in Android. Intents are used to start an "Activity". From my understanding, an Activity refers to a self-contained user task that occupies one screen.

Intents are pretty powerful stuff. They allow passing data between screens. An Intent can also retrieve data from an Activity and return it back to the application. I used this technique to invoke the built-in camera application and return its status. The status in this case is whether the user cancelled taking the picture or not.

See the following code block:
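The original snippet was lost with the screenshots; reconstructed from the description, it looked roughly like this (the request code constant and method name are my own):

```java
private static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePictureIntent() {
    // Implicit intent: ask Android for any app that can capture an image
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);

    // If several camera apps are installed, let the user pick one
    Intent chooser = Intent.createChooser(takePictureIntent, "Choose an application:");

    // Hand the intent to Android and ask for the result back
    startActivityForResult(chooser, REQUEST_IMAGE_CAPTURE);
}
```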

The code creates a new instance of Intent by passing in a string constant. This string constant tells Android that "I need a way to capture an image". As there is no camera in my application, I am asking Android to find one for me. The above "kind" of Intent is known as an Implicit Intent.

It is possible that there are multiple applications on a device that are capable of taking a picture. Android shows a dialog with all the applications that can be used to take a picture. In such situations we need to ask the user to pick the application they want to use. I am setting the title of this dialog to "Choose an application:". In theory you could put a more customised title based on your application. By the way, Android has some conditions for including an application in this dialog. Check the documentation.

The fun begins when we call "startActivityForResult". This passes the Intent to Android. Android uses the Intent resolution workflow and hopefully brings up the camera application, or a chooser dialog to select the application we would like to use.

Language resources

Some call this application localisation/globalisation. We can easily make "Choose an application" an application resource by moving it to the "res/values/strings.xml" file.
String resources for an application

The strings.xml file looks like below:
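Reconstructed from the description, it holds the chooser title as a named string:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="choose_app">Choose an application:</string>
</resources>
```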



The code is updated to the following:
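That is, the hard-coded title is replaced by a resource lookup (variable names carried over from the earlier sketch):

```java
// The chooser title now comes from res/values/strings.xml
Intent chooser = Intent.createChooser(takePictureIntent,
        getString(R.string.choose_app));
startActivityForResult(chooser, REQUEST_IMAGE_CAPTURE);
```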

The interesting bit here is R.string.choose_app, which maps to the string resource in the strings.xml file. The "R" class is generated by Android and provides programmatic access to resources such as layouts, drawables and views (e.g. buttons).

This is enough information for the moment. In the next post I will talk about some other interesting bits!

Saturday 25 April 2015

Performance Monitor (perfmon): Correlating w3wp# to App pool process


The Windows Performance Monitor (perfmon) is a useful tool to capture run-time performance metrics such as CPU utilisation, .NET GC runs etc.

Problem

IIS web applications execute in multiple "w3wp" processes and it can be difficult to profile a single web application using perfmon. Perfmon simply names each process "w3wp#1", "w3wp#2" and so on, which is not helpful at all!
Multiple w3wp processes in perfmon


The goal of this post is to help identify the relevant "w3wp" process and map it back to the application pool.

Step 1 : Find the Process Id of the application pool process

Navigate to the C:\Windows\System32\inetsrv folder and issue the command in the following screen capture (appcmd list wp).
Finding out Process Id per application pool

The appcmd command returns the names of each application pool together with their process Ids. At this point we know the process Id and application pool that we are interested in.
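The command and its output look along these lines (the "Stub" pool and its process id match the capture; the second entry is illustrative):

```shell
cd %windir%\System32\inetsrv
appcmd list wp

WP "4780" (applicationPool:Stub)
WP "5128" (applicationPool:DefaultAppPool)
```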

Step 2: Add the requisite performance metric to perfmon

Simply add the metric to perfmon. I have added the CPU utilisation and most importantly the process Id. The process Id can be found under "Process" category.
Adding CPU utilisation with process Id

Step 3: Monitor the metric added in Step 2 

The sample metric looks like below:

Captured metric in perfmon
The most interesting piece of information is highlighted in the capture. We can see that the process id "4780" is in fact the "Stub" service as per Step 1.

Concerns

What if there are dozens of web applications in IIS? The steps discussed here are for a simple case and unfortunately do not scale well. However, I have seen some answers on StackOverflow that suggest modifying a few registry keys to provide more meaningful names.

I do see that perfmon is not the best tool for performance monitoring. But I do think it is not too bad once you get used to it. (I do prefer dotTrace!)

Sunday 22 March 2015

Coursera: Programming Mobile Applications for Android Handheld Systems: Part 1 - My take

There is no argument that Coursera is one of the largest online education providers in the world. I recently completed part 1 of the Android course and thought I would share some of my experience.

Not for the faint hearted

Yes, if you are looking for a basic overview of the Android system, then this may not be the best place to start. I would recommend looking at the official documentation on the subject from Google itself. In fact I spent more time reading the official documentation during the course than the course material.

Format

I would say the format of the delivery is pretty good. I really felt like being at University when listening to Dr. Porter (the instructor). His use of language is fantastic and very easy to follow.

The lectures are added to the course home page every week, followed by assignments which are graded through an automatic grader. The assignments come with very clear instructions. Generally the assignments are submitted in the form of a zip file with the requisite files in certain folders. Be very, very careful with names, as the automatic grader will reject the submission if there are any special characters in the folder names.

The final project is peer reviewed. As part of the review process you are asked to review others' work as well. (Here is my screencast that was part of the submission.)

Quality 

I think the material is pretty good. You are expected to invest about 4-5 hours a week on the material. However, I think you need to invest more time if you are really interested in learning the platform. I started the course not simply to "pass" a course, but to learn. Therefore I spent more time reading questions on StackOverflow and the official Google documentation than the lecture material.

Final words

There are alternative courses offered by Udacity too. Once you start on the Android journey, I would recommend looking at other courses simply to enhance and solidify your knowledge.

Score

My score is 4/5.


Tuesday 17 March 2015

Working with CallContext in WCF (with a twist)

In my previous post I discussed a bit about CallContext and how it can be used with WCF. In this post I will attempt to describe an issue I encountered and a possible workaround.

Issue

The CallContext looks pretty convincing when it comes to storing the "state" of a WCF service on a per-request basis. This "state" flows through the logical call context and is very beneficial for async-await methods.

What I did notice is that under load (I mean with just 10 requests per second), the content in the CallContext is shared between requests. Which is pretty bad.

Tool

In order to reproduce this issue, we need to use JMeter (or any load testing framework). JMeter is a tool that is used for load testing a web application. The documentation of JMeter may be a bit sketchy, but it is worth the effort.


Reproducing the issue

We will need to add the following code to a WCF service. (Code is checked in here, and I will extract parts to explain the issue.)
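The essence of the checked-in code (the operation, slot name and logging are simplified here) is a check-then-set against the logical call context:

```csharp
using System.Diagnostics;
using System.Runtime.Remoting.Messaging;

public string GetPaymentDetails(string paymentId)
{
    // A brand new request should find nothing in the CallContext
    var existing = CallContext.LogicalGetData("PaymentDetails");
    if (existing != null)
    {
        Trace.WriteLine("(GetPaymentDetails) CallContext already has: " + existing);
    }

    CallContext.LogicalSetData("PaymentDetails", paymentId);
    // ... rest of the operation ...
    return paymentId;
}
```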


Ideally we should never see the "(GetPaymentDetails) CallContext already has..." message, as this is shown only when there is something already in the CallContext.

Reflecting on the results

Normally, threads in the .NET Framework are pooled. This means the same thread will be used to execute multiple requests. After a request is complete, the thread returns to the thread pool.

So what seems to be happening is that the CallContext state is returned with the thread to the thread pool. When the same thread is reused to process another request, it is quite possible that the state is still preserved in the thread itself.

This could be a bug or expected behaviour of the CallContext.

What if we reset the CallContext at the end of the request? The CallContext reset is through a call to CallContext.Clear. This may work. However, in the scenario where multiple async-await methods are used, we cannot be 100% sure that the context is cleared across all the threads used to process the request.

Normally in .NET 4.5 the CallContext uses "copy-on-write" behaviour (more on this here). So although we reset it in one thread, the reset is not propagated across all the threads used to process a particular request. Therefore resetting it "at the end" of a request is not quite correct.

Workaround

The workaround is to introduce a "MessageInspector". At the beginning of the request we can reset the CallContext with a call to CallContext.Clear. This way we make sure the CallContext is cleared for the request.
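A sketch of such an inspector (the class and slot names are mine; where the post's wrapper exposes a Clear call, the framework's own FreeNamedDataSlot is used here for a known slot):

```csharp
using System.Runtime.Remoting.Messaging;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class CallContextCleanupInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Reset any state left behind by a previous request on this thread,
        // before the operation starts executing
        CallContext.FreeNamedDataSlot("PaymentDetails");
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Nothing to do on the way out; clearing at the end of the
        // request is unreliable with async-await (copy-on-write)
    }
}
```

The inspector is registered on the service's endpoint behaviour, so every request passes through it before reaching the operation.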


Given the shortcomings of CallContext, I am not sure whether there is any other solution that may work consistently.

Friday 6 March 2015

Working with CallContext in WCF

The CallContext class is provided by the .NET Framework to "track" the logical execution of a request. It is sort of a property bag that is carried through the execution path.

Why CallContext is quite useful in WCF?

Prior to .NET 4 (and most specifically async-await), WCF methods were sort of synchronous. I use "synchronous" in a very loose sense to suggest that no explicit threads were created by the developer to process a request.

However, with the introduction of the async-await pattern and the Task Parallel Library (TPL), developing asynchronous code has become quite easy.

The issue really is that WCF was never designed (or at least envisioned) to handle async-await in a graceful manner. Normally WCF uses the OperationContext to store extensions that can be used later in the application. The OperationContext holds state in thread local storage, which will never play happily with async-await. There are dozens of questions on StackOverflow around this matter.

There are a few ways to design a WCF service that leverages the power of async-await.

  • We can discount OperationContext completely and pass any data items around explicitly.
  • We can store the data points in a backing store and read them when required. We may need to pass around a reference, which may be acceptable.
  • We can consider CallContext to store request-specific information and read it whenever required.
The data stored in CallContext is maintained across threads (as by definition CallContext is per logical execution context). In theory CallContext could be considered a replacement for OperationContext. We can continue using async-await without having to worry about the limitations in the framework (in this case WCF).

Example

The following code is a simple wrapper over the CallContext.
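A minimal sketch of such a wrapper (the class and method names are mine), backed by the logical call context so values flow across the threads that async-await uses for the same request:

```csharp
using System.Runtime.Remoting.Messaging;

public static class RequestContext
{
    // Store a value against the logical (not thread-local) call context
    public static void Set<T>(string key, T value)
    {
        CallContext.LogicalSetData(key, value);
    }

    // Read the value back, possibly on a different thread
    public static T Get<T>(string key)
    {
        var value = CallContext.LogicalGetData(key);
        return value == null ? default(T) : (T)value;
    }
}
```

An operation might then call RequestContext.Set("CorrelationId", id) on entry and read it back inside any awaited continuation.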



A word of caution about what you should store.

  • Generally it is advised that you should only consider storing immutable objects in CallContext. There is an excellent post here that explains the pros and cons.


An operation can use the CallContext wrapper in this fashion.

Summary

There is no doubt that CallContext is a very strong candidate for maintaining per-request data in WCF. Here is another post around CallContext from Winterllect. However, there is a twist to CallContext that we need to be very careful about. I will write up my findings in the next post.





Saturday 17 January 2015

(xp) Pair programming

Last week I spent a good few hours pair programming with a colleague from a different team. I think I learned more during those hours than in the whole of last year. This is not to say we do not pair in my current team. We do pair, but there is hardly any cross-team pairing.

Some of the topics we paired on were:
  • Tips and tricks of Castle Windsor.
  • Integrating GitBook with TeamCity CI.
  • Await-Async pattern and issues we encounter when used with WCF.
  • Usage of Interlocked to prevent re-entrancy.  
  • Implementing DDD (why rich models are encouraged).
  • Event based service programming with NServiceBus.

Hopefully I will be able to write a few posts covering the above points in the next few weeks and months.

If you do not pair, or are not encouraged to pair, then try it a few times. The benefits are immeasurable!