Wednesday, 16 August 2017

Route to production - as a technical architect

The role of a developer is fairly clear and well defined. However, the role of a technical architect is somewhat grey. Generally, a solution architect should be able to hand a blueprint to a technical architect and walk away. I understand that this is an extremely naive way of looking at things, but you get the idea.

How things are done

My role as a technical architect was focused on guiding the development team, enforcing governance and adhering to the essence of the solution blueprint. On the face of it, this sounds pretty awesome.

I loved being part of the development team and cutting code. I know a number of architects who are expert Microsoft Visio or Word users, but I prefer a middle path.

I made the choices about which frameworks and patterns should be used for the solution. I looked at each problem in detail, devised solutions and attempted to get the team on board. I failed a number of times because there were disagreements about how things should be done or which framework to use.

It was sometimes easier to do the work myself. I would knock out in a few hours code that would have taken a developer a week or so. In hindsight I think this was a bad decision. I should have tried harder to get the team on board, but I learned my lesson.

The more time I spent with the team, the more I realised that I was neglecting my duties as a technical architect. I needed to put deployment, packaging, monitoring and auditing in place. We used SCOM for some of these, along with other alerting mechanisms. I had to talk to other teams and vendors to ensure the production environment was ready for the solution.

Lonely 

I think leadership of any form forces you into a corner. It does not matter whether you sit with the development team or not; if you are making decisions for a team, you will be isolated.

I committed a lot of code to the solution, but no one was there to help with the rest of my work. There was no one else to talk to the Infrastructure or Networks teams; I had to carve out time to do this. I also had to document the design decisions. This took time, and I and only I did it.

While the developers could shoot off to the pub, I had to spend countless hours writing and updating documentation and liaising with other teams.

Day T - 1

The day before the release to production is always challenging and nerve-racking. At this point I had spent countless hours designing, developing and, most importantly, championing the solution, but nothing prepares you for T - 1.

Day T

This is when the solution becomes available to the public. A great and humbling experience!

Day T + n

This is the day I received the first production incident.

The cycle never stops unfortunately.


Friday, 21 July 2017

Route to production! - as a developer

I have heard many stories and quotes in the past that relate directly to the title of this post. This is my take on the subject.

There is no doubt that every developer (including myself) would like to spend many hours perfecting the art of software engineering. We love our design patterns, SOLID principles, frameworks and runtimes. Any passionate developer can spend hours talking about these subjects. These aspects of software engineering are so important that without them any project can get out of control and possibly fail.

My journey

I was fortunate to be a developer on a very large and complex enterprise solution. I also played the roles of technical architect and solution architect on two separate medium/small solutions. In this post I would like to share my role in getting the solution into production.

As a developer

Being a developer is a fantastic experience. I have read in many articles that we stand on the shoulders of giants, and I think that is absolutely correct. The open source movement and Stack Exchange have brought the developers of the world together.

I loved being a developer on the large and complex solution. It was a payment system with numerous integrations and challenges. The solution may look quite complex at first glance, but it was broken down by context (e.g. refunds, payments). This breakdown allowed any developer to focus on one area of functionality.

The team itself was large and had an amazing breadth of knowledge. Some days I would spend hours going through my colleagues' code, as it was an art!

I was happy and content. I would have long lunch breaks and discuss the latest industry trends (e.g. microservices, functional programming) with my colleagues. Sometimes the debates inevitably got heated, but that was the fun of it!

The solution was owned by a solution architect and a business analyst. I never took too much notice of these guys, as the stories and tasks that were allocated were descriptive enough. Once in a while I would consult them and ask about the intent and the context.

Colleagues outside the team did not like the solution architect. I heard stories of him storming out of meetings and being rude to some. This was very strange, as none of us in the technical team felt any hostility. The solution architect used to tell us that the few hours he spent with the technical team were the best part of his day! We took it with a pinch of sugar and salt!

The solution had multiple components deployed across various systems, but I never had to worry too much about the deployment topology. The code I wrote went through the normal development cycle and ended up in production. I did know the end-to-end system: I could describe each component, how it related to the business function and why it was needed. I was proud of my work, and I received positive feedback from the customers. That was a great feeling! Each day I would take a cursory look at the logs to ensure there was nothing odd.

The deployment pipeline was heavily automated. From the engineering side, the only manual step was selecting the appropriate package to test, stage and push to production. The quality assurance (QA) activity was mixed (automated and manual), as we had to depend on other teams to confirm.

Setting my social life aside, I had the opportunity to look at the next shiny thing! Life was great!

Reflecting on the title of this post, I was not too involved with how and where the solution got deployed. I had queries from other teams, but nothing to pull my hair out over. Most of the queries were about checking whether this or that feature was working, which account to use, what permissions were needed, and so on.

The solution now sits in production serving millions of customers each day without breaking a sweat. Perfect!

What's next

Times change, the way teams are organised changes and, yes, change is everywhere. In the next post I am going to discuss how my role changed and what impact that had on me.

Saturday, 8 April 2017

Useful AWS commands

  • aws iam list-users
    • Gets a list of users in the AWS account. Returns full details of the user object.
  • aws iam list-users --query Users[*].UserName
    • Gets a list of user names in the account (see the sample output after this list).
  • aws iam list-policies 
    • Returns a full list of AWS policies, which includes both AWS-managed and customer-defined policies.
  • aws iam get-account-password-policy
    • Returns the password policy of the account. If none is defined, it returns a "NoSuchEntity" error.
  • aws iam get-account-summary
    • Returns the limits of the account. Useful for finding out what the limits are and for requesting an increase from AWS support.
  • aws iam list-policies --only-attached
    • Returns a list of policies that are attached. Useful to find out what policies are being used.
  • aws iam list-policies --only-attached --scope Local
    • Gets a list of customer-managed policies that are attached (ignoring AWS-managed ones). Useful for seeing how many customer-defined policies are in use.
  • aws iam list-entities-for-policy --policy-arn "ARN"
    • Lists the users, groups, roles and resources that are using the given policy. 
  • aws ec2 describe-regions
    • Returns a list of available regions. For this command to work, a "region" must be set in the CLI; if it is not, "aws configure" can be used to set one.
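To give a feel for the output, here is an illustrative (and heavily abbreviated) result of "aws iam list-users"; the account ID and user name are placeholders, not real values.

{
    "Users": [
        {
            "Path": "/",
            "UserName": "alice",
            "UserId": "AIDAEXAMPLEID1234",
            "Arn": "arn:aws:iam::123456789012:user/alice",
            "CreateDate": "2017-01-15T10:30:00Z"
        }
    ]
}

Adding --query Users[*].UserName to the same command strips the response down to just the names, e.g. ["alice"].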

Sunday, 3 April 2016

AWS labs on qwikLabs platform

Last month (March 2016) @sebsto announced that all the labs on qwikLabs were FREE until the end of the month. This was a great opportunity to learn (by doing) Amazon Web Services (AWS).

I managed to complete 2 Quests and have to say that my experience was pretty great. By following the labs I ended up spending more than 10 hours using the AWS platform for free. In the process I launched multiple EC2 instances and databases, and deployed an application to a Docker container.

Completed Quests over the free period


When I started using AWS, I was not sure which services were eligible for the free tier and ended up getting a bill at the end of the month. Since then I have been careful about what I use and have switched off virtual machines as soon as the work is complete.

Now that the offer period has finished, I can only wish it had been extended (it was not). On the positive side, there are dozens of free introductory labs on offer. I think once you get started on a service you should be able to follow the documentation and learn about it a bit more, so the free labs should be enough for me to start looking at new services without any risk of getting a bill at the end of the month.

My recommendation is to do as many labs as possible and learn about the platform. As many say, there is no substitute for real hands-on experience, and these labs will start you on your journey.

Saturday, 12 March 2016

AWS: 6. Improving CloudFormation template

The previous post covered the AWS CloudFormation template that I developed to provision the environment to deploy the "ApiService" web service. The following diagram illustrates my journey so far.

AWS Services used and their interaction
Although it looks very simple, I covered many AWS services, including the EC2Config service (the service that bootstraps a Windows instance).

In this post I plan to improve the AWS CloudFormation template by implementing the following features.
  1. Further parameterisation 
  2. Creating the requisite Identity and Access Management (IAM) role
  3. Creating the Virtual Private Cloud (VPC) security group

Parameterisation


The new parameter section looks like the following.


I have parameterised the template so that the security key, availability zone, VPC and instance type can be determined at deployment time. I have also updated the template to resolve the Amazon Machine Image (AMI) through the "Mappings" section. The Mappings section follows a "dictionary" pattern, where a value is looked up by key using the intrinsic function "Fn::FindInMap". See the following.
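As a rough sketch (the parameter names and AMI IDs below are placeholders of my own, not the exact values from the template), the two sections are structured along these lines:

"Parameters": {
    "KeyName": {
        "Type": "AWS::EC2::KeyPair::KeyName",
        "Description": "Key pair used to decrypt the Windows administrator password"
    },
    "AvailabilityZone": {
        "Type": "String",
        "Default": "eu-west-1a"
    },
    "VpcId": {
        "Type": "AWS::EC2::VPC::Id",
        "Description": "Existing VPC in which the resources are created"
    },
    "InstanceType": {
        "Type": "String",
        "Default": "t2.micro"
    }
},
"Mappings": {
    "RegionToAmi": {
        "eu-west-1": { "Windows": "ami-11111111" },
        "eu-west-2": { "Windows": "ami-22222222" }
    }
}

The AMI is then resolved inside the EC2 resource with "ImageId": { "Fn::FindInMap": [ "RegionToAmi", { "Ref": "AWS::Region" }, "Windows" ] }.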


New IAM role


The purpose of the IAM role is to allow the Elastic Compute Cloud (EC2) instance access to the Simple Storage Service (S3) bucket in order to download the "ApiService" binaries. In this particular case I am creating a role that has full access to the S3 service. I have to admit that the syntax is not very intuitive.

The first step is to create the role. Thereafter an "instance profile" resource needs to be created. From what I can gather, the instance profile is an envelope that contains the role. This envelope is used to pass the role information to the EC2 instance.
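A minimal sketch of the two resources (the logical names are placeholders of my own, and the inline policy grants the full S3 access mentioned above) looks something like this:

"ApiServiceRole": {
    "Type": "AWS::IAM::Role",
    "Properties": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [ {
                "Effect": "Allow",
                "Principal": { "Service": [ "ec2.amazonaws.com" ] },
                "Action": [ "sts:AssumeRole" ]
            } ]
        },
        "Path": "/",
        "Policies": [ {
            "PolicyName": "S3FullAccess",
            "PolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ]
            }
        } ]
    }
},
"ApiServiceInstanceProfile": {
    "Type": "AWS::IAM::InstanceProfile",
    "Properties": {
        "Path": "/",
        "Roles": [ { "Ref": "ApiServiceRole" } ]
    }
}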


Setting instance profile during EC2 provisioning.
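In the EC2 instance resource the profile is then attached via the "IamInstanceProfile" property; roughly (other properties omitted, and the logical names are the placeholders used above):

"ApiServiceInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "IamInstanceProfile": { "Ref": "ApiServiceInstanceProfile" },
        "InstanceType": { "Ref": "InstanceType" },
        "KeyName": { "Ref": "KeyName" }
    }
}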
The main benefit of the refined AWS CloudFormation template is that it creates the resources instead of using existing ones (e.g. security group and role). This can be very powerful because each stack can be created and rolled back without leaving any residue.

The IAM role and security group are created as part of the template, and the only external resource I depend on is the VPC. The VPC provides a personalised boundary around networking resources on the AWS platform, and it is not something you should treat lightly. Normally there will be network engineers responsible for configuring it, and I doubt you will use an AWS CloudFormation template to provision a VPC (although it is entirely possible; in fact I think we should, to aid repeatability).
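For completeness, a sketch of the security group resource created inside the supplied VPC might look like the following; the service port (88) and RDP port mirror the manual setup from an earlier post, and the wide-open CIDR range is purely illustrative.

"ApiServiceSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
        "GroupDescription": "Inbound access for the ApiService and RDP",
        "VpcId": { "Ref": "VpcId" },
        "SecurityGroupIngress": [
            { "IpProtocol": "tcp", "FromPort": "88", "ToPort": "88", "CidrIp": "0.0.0.0/0" },
            { "IpProtocol": "tcp", "FromPort": "3389", "ToPort": "3389", "CidrIp": "0.0.0.0/0" }
        ]
    }
}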

The updated AWS CloudFormation template is available here.

In the next post I plan to look at monitoring, as it is something most developers leave until last. In my opinion monitoring must be a first-class citizen of any solution design.

Monday, 7 March 2016

AWS: 5. Automating the deployment and provisioning using AWS CloudFormation Service

In the previous post I deployed the "ApiService" to the AWS Platform using the .NET SDK.
That was really powerful and a step towards automating the provisioning/deployment process.

If we take a step back and look at the deployment of an application to production, you do not see C# code being executed to provision the production environment. This raises the question of whether there is another way to deploy our simple application. In this post I am going to attempt to use the AWS CloudFormation service to provision the environment and deploy the application.

AWS CloudFormation service


The AWS CloudFormation service uses a form of Domain Specific Language (DSL) to define an environment. It accepts a text file that defines the environment and provisions it as a "Stack".

The following points describe some of the key benefits of AWS CloudFormation that I see as extremely valuable.

  • Automatic rollback when provisioning fails - my favorite!
  • Developers are fully aware of the production environment. 
  • Any change to the environment is managed through the CloudFormation template. (no random changes)
  • Provisioning is repeatable, so a stack can be recreated in another region.

There are many more benefits to using AWS CloudFormation; refer to the documentation to find out more.

Provisioning and deploying using AWS CloudFormation


The first step is to create a CloudFormation template. The template uses the JavaScript Object Notation (JSON) format. You can use any text editor to create one and I used Visual Studio Code as it has fantastic support for JSON.

A CloudFormation template at a minimum must contain a "Resources" section. This section contains the resources that must be provisioned. The template I developed for the ApiService looks like the following.

I think the above section is pretty clear and can be read without knowing too many details of CloudFormation. It describes an EC2 instance and sets properties such as the Amazon Machine Image (AMI), security group, etc. These properties map directly to the .NET SDK example from the previous post. You can even see the "UserData" (bootstrap script) being used to install the application.

I have used some functions such as "Fn::Base64", and in AWS lingo these are called intrinsic functions. The parameters to the functions are passed using the JavaScript array format ("[]").
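The full template is linked at the end of this post; a stripped-down sketch of the Resources section (the AMI, security group, key pair and bootstrap commands below are placeholders, not the real values) gives a flavour of what it contains:

"Resources": {
    "ApiServiceInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "ami-11111111",
            "InstanceType": "t2.micro",
            "KeyName": "my-key-pair",
            "AvailabilityZone": "eu-west-1a",
            "SecurityGroupIds": [ "sg-11111111" ],
            "UserData": {
                "Fn::Base64": {
                    "Fn::Join": [ "", [
                        "<powershell>\n",
                        "Read-S3Object -BucketName my-apiservice-bucket -KeyPrefix ApiService -Folder C:\\ApiService\n",
                        "C:\\ApiService\\install.ps1\n",
                        "</powershell>"
                    ] ]
                }
            }
        }
    }
}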

Parameterisation


Although it is not necessary, I have parameterised the template so that some of the values are defined at deployment time. There is a special section for parameters which is called "Parameters" (surprise). The parameters section looks like the following:


I have allowed the AMI, availability zone and key name to be defined at deployment time. Normally parameters are used for values that should not be stored in the template, such as passwords.
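A sketch of that section (the parameter names are my own choices, not necessarily those in the real template) might read:

"Parameters": {
    "AmiId": {
        "Type": "String",
        "Description": "The AMI used to launch the instance"
    },
    "AvailabilityZone": {
        "Type": "String",
        "Default": "eu-west-1a"
    },
    "KeyName": {
        "Type": "String",
        "Description": "Key pair used to decrypt the administrator password"
    }
}

The hard-coded values in the Resources section are then replaced with references, e.g. "ImageId": { "Ref": "AmiId" }.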

There is another section called "Outputs", which can be used to display information such as the service endpoint or anything else that is useful once provisioning is complete. In this particular case I am displaying the service endpoint.
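As an illustrative sketch (assuming the EC2 resource is named "ApiServiceInstance" and the service listens on port 88), the Outputs section could look like this:

"Outputs": {
    "ServiceEndpoint": {
        "Description": "Endpoint of the ApiService",
        "Value": {
            "Fn::Join": [ "", [ "http://", { "Fn::GetAtt": [ "ApiServiceInstance", "PublicDnsName" ] }, ":88/" ] ]
        }
    }
}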


Using the template


I used the AWS Console to upload the CloudFormation template. Of course this can be done through the AWS CLI too. The Create Stack option needs to be selected from the AWS CloudFormation landing page.

Creating a New Stack using AWS CloudFormation service

The next step is to upload the CloudFormation template.

Uploading the CloudFormation template

The next screen brings the CloudFormation template to life! The values specified in the parameters section are made available (see the following).


Setting Parameters in the template

At this point CloudFormation starts provisioning the environment.

Provisioning the ApiService environment

The "Events" tab contains a list of activities that is being performed by the AWS CloudFormation service. Once the provisioning and the application is deployed, the "Outputs" tabs is populated with the endpoint to the "ApiService".

"Outputs" tab with service endpoint

The "ApiService" is now fully operational.


Service fully operational


There is no doubt that the AWS CloudFormation service is extremely powerful, and I have simply scratched the surface. In the next post I am going to look at AWS CloudFormation in a bit more detail and try to incorporate a few more best practices.

PS - The full template is available here.






Sunday, 28 February 2016

AWS: 4. Launching an EC2 instance using the .NET SDK

It is great to be able to launch an EC2 instance and install the application so simply, using nothing but an S3 bucket (holding the binaries) and a bootstrap script. Imagine if you could automate this process.

There are multiple ways to interact with the AWS platform; the AWS Command Line Interface (CLI) and the AWS Software Development Kit (SDK) are two of them. They allow developers and system administrators to write scripts to interact with AWS resources. In this post I will be using the AWS .NET SDK to automate the launch of an EC2 instance.

The AWS .NET SDK has undergone a major restructuring during the last year: it is now split by AWS service. The NuGet package for a service is itself split into two components: the "Core" dynamic link library (DLL) contains all the plumbing (signature creation, etc.) and the "Service" DLL contains the supporting classes for that service. The NuGet package I will be using for this post is "AWSSDK.EC2".

I strongly recommend looking at the AWS .NET blog for updates and tutorials. 

The following .NET code launches a new EC2 instance and does exactly what the manual steps did in one of the previous posts.



The key pieces of information that are of interest are the following.

  1. The Amazon Machine Image (AMI) - In this particular case it is the golden image I created.
  2. Key pair - The *.pem key used to decrypt the administrator password.
  3. Security group - The security group that opens inbound port 88 and the Remote Desktop Protocol (RDP) port.
  4. The region where the instance will be launched.
The above values need to be obtained from the AWS Console, except for the region information. Normally the region is specified as an attribute in the AWS SDK configuration. In my case I am using "eu-west-1".

There are plenty of best practices around storing AWS credentials securely so that you reduce the risk of committing them to a public repository like GitHub. I suggest you look at this post.

So far so good. I have managed to automate the EC2 launch process and can scale the application horizontally by executing the above script manually. In the next post I am going to look further into automating the launch process.