I look to NuGet as the package manager for Windows development in the same way that RubyGems is the package manager for Ruby development. This may be immature of me, but it works for my scenarios. In comparing the two systems, I feel that NuGet is lacking a few key features that I sometimes look for.
1. Install a specific version of a package
Currently, from the NuGet console in Visual Studio, a command like the following is run:
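Taking TeamCitySharp (used again below) as the example package:

    Install-Package TeamCitySharp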
This would install the latest package available, which is not always the best solution. If the package owner releases a package that has a bug in it, then we cannot use the package at all. I would like to be able to install a version of the package that I know works:
    Install-Package TeamCitySharp -version:0.2
This would mean I can then update at a time when I am happy with the new package or when a fix has been added.
2. Download a package directly from the NuGet gallery
I may want to download a package without the need to open the NuGet console. A simple download link on the gallery would be a welcome addition to help with this.
3. Global installation of a package
Currently, you would select the project from the list in the console and then call the install command. I would like to be able to install a package to a number of projects at once. For example, I have 20 unit test assemblies; I wouldn't want to have to type the command to install NUnit 20 times. I would like to be able to do something like:
    Install-Package NUnit -include *.UnitTest.dll -exclude *.config
This would utilise the full power of the PowerShell console. This is purely a timesaver and would help when working in a multi-project environment.
I have heard people talking for a while about the TeamCity pricing strategy and thought I'd write a post to gather some opinions on what you think it should be changed to.
TeamCity users are very lucky that we have a free, unrestricted version of the software to use. It comes with 20 build configurations and 3 build agents. Unfortunately, some people still do not find this adequate for their needs. The Enterprise version has unlimited build configurations and 3 build agents and costs approx. £1600. So the question here is:
What do you think the pricing model of TeamCity should be?
If we can get good feedback, we can give this to JetBrains so that they can see what their customers feel they need. This post, or discussion, does not in any way guarantee that the pricing of the tool will change.
I have heard the following suggestions:
- Make the enterprise version come with 5 build agents
- Have a version between Free and Enterprise that allows, say, 70 build configurations
- Give the ability to buy add-on packages, e.g. 20 extra build configurations for $500, etc.
What do you think? Comments required please!
“Until your pretty code is in production, making money, or doing whatever it does, you've just wasted your time” - Chris Read whilst at London CI.
In my opinion, this quote really sums up software development. Process and red tape are increasingly preventing developers from doing what they are paid to do – create software.
I attended a talk by Kendall Miller whilst at DDDScotland entitled "Creating your own software company: A survival guide". I am a huge supporter of continuous delivery and have attended many sessions on it, but this session was extremely useful to me. It was really the first time that the reasoning behind, and importance of, continuous delivery for developers hit home with me. I knew the benefits and drawbacks, but hearing someone actually demonstrate their experience really helped. The most important thing I learned in this session was that you have got to get your software out early; this is particularly important when you are starting up a software company. Shipping software early will not only give you a source of much needed income, but can provide potentially useful feedback that can help shape the project before too much investment, in terms of both development time and money, is wasted. This goes hand in hand with the opening quote of this blog post.
I am a huge fan of clean, SOLID, DRY code and I cannot emphasise enough why developers should write code in this fashion. I do understand that I may be giving a mixed message here. Is it possible to create software in a clean, SOLID and DRY way and in a fast time? Could we be sacrificing quality for delivery? In my opinion, this is simply not the case. There are some fantastic guidelines laid down by Robert 'unclebob' Martin in his book, Clean Code. These guidelines do not take extra time to implement – they are baked into how we write the code. As developers they should be second nature to us. If they aren't, then you should address this as soon as you can.
Fast delivery can, however, affect architecture. Architecture is one of those areas of development that really does demonstrate a lot of ways to achieve a single task: there are many different frameworks, tools, code layouts and so on, and these are another area to be concerned with. You can implement a solution to a problem that is clean and efficient, but that may not prove to be the best architecture. You could continue to work on this solution and refactor it to the nth degree until you come up with an architecture that you like. You may feel that you have used your development time wisely to improve your solution when, in actual fact, what you may have done is waste your development resources on refactoring when other solutions were available to you.
I would argue that release fast, fail fast is a good way to develop. Get the code out there, let some decent A/B testing give feedback on its fitness for purpose and let that shape the architecture. I understand this may be easier for start-up based systems, as their customer base can initially be small, but I do not think it only applies to them. Let's look at Google Chrome or Firefox as examples of this philosophy in practice. Changes to Chrome and Firefox now happen on a regular basis. This means the companies can create a new piece of functionality, release it to users and collect feedback, knowing that the feedback is targeted towards the last small release. They are letting user feedback help drive the system. Firefox takes a lot of grief about how resource hungry it is; releasing smaller code changes more often will help them see whether they are addressing the issue.
Releasing a solution early can also bring problems of its own. There are some companies out there that are of the opinion that if it has shipped and it works, then why spend any more money on code improvements? This falls into the argument about technical debt, which I will not cover in this post.
In summary, I believe software should be delivered in small, continual chunks. This helps create an efficient delivery mechanism that will give us a chance to collect useful feedback in a much more structured and continual way. We can use this feedback to deliver software that is better suited for its purpose and become better at what we are paid to do!
I have recently started looking into the new HTML specification (HTML5) and have noticed that the benefits of this new specification include improved semantics and cleaner mark-up. Semantic HTML is a way of writing HTML that emphasises the meaning of the encoded information over its presentation. Consider the following code snippet:
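Something along these lines (an illustrative sketch; the links themselves are my own):

    <div class="navigation">
        <h2>Quick Links</h2>
        <ul>
            <li><a href="/">Home</a></li>
            <li><a href="/about">About</a></li>
            <li><a href="/contact">Contact</a></li>
        </ul>
    </div>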
This would usually denote a small navigation area on an HTML page. The association with navigation is loose, and only exists because I had the sense to name the class "navigation". Had I named the class "list" then it would have meant nothing to the reader of the mark-up. The <h2> heading relates to the <div>, but that relationship is not actually expressed in the mark-up. HTML5 gives us the ability to clean this mark-up and make it more semantic. The resulting code in HTML5 could be:
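Mirroring the sketch above:

    <nav>
        <h2>Quick Links</h2>
        <ul>
            <li><a href="/">Home</a></li>
            <li><a href="/about">About</a></li>
            <li><a href="/contact">Contact</a></li>
        </ul>
    </nav>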
This code snippet actually lets the reader see that the list of links is a small navigational element of the site. It also relates the heading, Quick Links, to that navigational element. There is another element, <menu>, that I will not talk about here; it could be substituted for my use of <nav>.
When I first started reading about HTML5 and how the semantics and cleanliness of the mark-up have changed, it made me ask the question:
Is HTML5 the excuse to get front end developers to think about Clean Code?
I am a huge advocate of clean code. I expect my methods, classes and variables to have good naming conventions. I also expect the code to be short, concise and descriptive. I am a developer, not a designer. Design and front end development are usually my last port of call, therefore my mark-up and its structure usually suffer. This means that it can be poorly organised, and when I test in different browsers I sometimes have to shoehorn in fixes, which results in my code being messy.
HTML5 may not be a finished specification, but I for one will start to embrace it in my development. If there is an easy way for front end mark-up to be as clean as business logic, then I think we should all lobby the W3C to finalise the specification and get this new standard into place.
I am a huge advocate of automating as much of a system as can be automated. I like to make sure that non-developer members of staff can work effectively, and I try to do as much as possible to minimise the blockers in their way. IIS can be one of those blockers. When it comes to staff creating new sites and application pools, they can sometimes get it wrong. To make sure all members of my team have the exact same setup, I script the set-up.
A CI tool should be able to handle the automated rollout of configuration management to systems. This is just another example of making sure all parts of the environment have the correct set-up. Automating this rollout means a smaller chance of human error. Being able to interact with IIS means a simple script could be run to create the required setup – both locally and on a build environment.
I was able to create a PowerShell script that automates creating a new application pool, creating a new site and then assigning that application pool to the site.
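A sketch of that first version (the site and pool names here are illustrative):

    Import-Module WebAdministration

    # create the application pool and the site
    New-WebAppPool -Name "MySitePool"
    New-Website -Name "MySite" -Port 80 -PhysicalPath "C:\inetpub\MySite"

    # assign the application pool to the site
    Set-ItemProperty "IIS:\Sites\MySite" -Name applicationPool -Value "MySitePool"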
This script could easily be run from either PowerShell or the command line (which invokes PowerShell). It works perfectly well, assuming you only need to run it once, as it doesn't take into account the site or the application pool already existing. It needed a lot of refactoring. The script that was produced can be found below; it is a lot more useful as it takes parameters for the setup.
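A sketch of that refactored version (the parameter names, and the existence checks, are my own interpretation):

    param(
        [string]$AppPoolName,
        [string]$SiteName,
        [string]$PhysicalPath,
        [int]$Port = 80
    )

    Import-Module WebAdministration

    # only create the application pool if it doesn't already exist
    if (-not (Test-Path "IIS:\AppPools\$AppPoolName")) {
        New-WebAppPool -Name $AppPoolName
    }

    # only create the site if it doesn't already exist
    if (-not (Test-Path "IIS:\Sites\$SiteName")) {
        New-Website -Name $SiteName -Port $Port -PhysicalPath $PhysicalPath -ApplicationPool $AppPoolName
    }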
The trouble I had with this script started when running it on different versions of the OS. x64 machines call the x64 version of PowerShell by default (as you would expect). When using x64 PowerShell, the following error was encountered:
This, of course, worked perfectly in the x86 version of PowerShell. Weirdly, though, a different error was encountered in x86 PowerShell. The line
command didn't like working. It threw the following error:
I was faced with the dilemma of either fixing something I know relatively little about (PowerShell scripts) or continuing to use the simple script and manually manipulating IIS after the initial set-up.
Strangely, I chose to ditch both methods and wrote a C# console application to manipulate IIS instead. It took me half the time and was very easy. Andrew Westgarth, IIS MVP, replied to me on Twitter after I said I had done this:
"I could have told you that :-) the .net APIs rock :-)"
It was as simple as referencing Microsoft.Web.Administration and writing code against that API. As it was a console application, it created an executable that I could call easily from TeamCity or via a batch script for local machine automation. I don't have to worry about ExecutionPolicy either, which is a real bonus.
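The core of the application looks something like this (a minimal sketch rather than the exact code; the argument handling and port are illustrative):

    using Microsoft.Web.Administration;

    class Program
    {
        static void Main(string[] args)
        {
            var poolName = args[0];
            var siteName = args[1];
            var physicalPath = args[2];

            using (var serverManager = new ServerManager())
            {
                // create the application pool
                serverManager.ApplicationPools.Add(poolName);

                // create the site and assign the application pool to it
                var site = serverManager.Sites.Add(siteName, physicalPath, 80);
                site.Applications["/"].ApplicationPoolName = poolName;

                serverManager.CommitChanges();
            }
        }
    }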
The source code for the console application is available on my GitHub site. Feel free to take it and change pieces of it to suit you. I have also included both PowerShell scripts as listed above.
When it comes to the step between continuous delivery and continuous deployment (the automated release of every good build), the barrier I hear most often is: "automated deployments work fine until we have database changes".
Can we be confident in saying that database deployments are the only barrier to continuous deployment? Probably not the only barrier, but they are a large part of the puzzle. There will be other factors, e.g. audit regulations regarding sign-off of releases, but I don't think I would be frowned upon for agreeing with this statement.
I know there are many great tools out there that will help us manage SQL changes. Redgate's SQL Compare can take two schemas and compare them to create a change script and a rollback script, and this can be controlled by a CI tool. Other tools do a similar job, but SQL Compare is one I use regularly. SQL Server Management Studio can also be manipulated very easily via MSBuild. So I can assure you it is very easy to automate SQL deployments. But it scares a lot of people, as they are worried they will kill their database. If I am truthful, this is not an uncalled-for fear.
Since there is this fear of damaging the database, some companies who are already doing continuous deployment will actively 'skip' a deployment when SQL changes are involved. This involves using VCS hooks. A typical hook would key off the presence of a commit message token, e.g. #Skip Deployment. This lets the source control system / CI server know to handle the commit without triggering a deployment.
Back in late 2010, I was introduced to the concept of NoSQL databases; essentially, any type of store that does not use a relational model. I have been looking at NoSQL more and more and have recently pondered: how easy would NoSQL based systems be to include in a continuous deployment system?
I thought the answer was that NoSQL was indeed the missing piece of the continuous deployment puzzle. If companies were only ever deploying code, then that's easy, right? Things are never that easy, though. I looked further and then thought: if data is held in a store (of any kind) in a specific structure, then how could we retrieve that data if the structure changed with the next system deployment?
I asked Rob Ashton, a well-known presenter on RavenDB, this question. Rob said that it wouldn't work unless you ran some sort of data migration against the data structures or versioned them in some way. So, in essence, it was a no! I'm not disagreeing with Rob by any means, but I still think NoSQL systems have the scope built in to allow us to handle automated deployments a little more easily. As part of our development task, we could write the migration scripts for the data; we would have to do this anyway if we worked in a SQL world. I'm a coder, not a SQL developer, therefore I would be a lot happier writing code migrations rather than SQL migrations. This means I would also be a lot happier about deployments where only code changes were involved.
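As a sketch of what I mean (an entirely hypothetical example; the version-field approach is just one option):

    // each document carries a schema version; on load, old documents are
    // upgraded in code before the application uses them
    public class CustomerDocument
    {
        public int SchemaVersion { get; set; }
        public string Name { get; set; }        // original field
        public string FullName { get; set; }    // introduced by a later deployment
    }

    public static class CustomerMigrations
    {
        public static void Upgrade(CustomerDocument doc)
        {
            if (doc.SchemaVersion < 2)
            {
                // migrate the old structure to the new one
                doc.FullName = doc.Name;
                doc.SchemaVersion = 2;
            }
        }
    }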
I have not had time to try this approach, as I am not skilled by any means in using NoSQL. So, in essence, this is a theory, but I am confident it has some wheels. What do you think?
After the BUILD conference (September 13 – 16), there was a lot of buzz about the next version of TFS. From the conference, we heard that vNext was going to be a revelation. Since reading about it, and trialling it for myself (with hosted TFS), I feel that the TFS team have fallen short of the mark, for me, on one of the major features they offer (there are more, but I will stick with this one): version control.
The current frustrations with TFS seem to centre around painful version control problems. They include the workspace creating read-only copies of files locally that still need to be 'checked out' before they can be edited. If you delete a file from the file system, then TFS doesn't (currently) see that the file has been deleted. It has also been reported that 'Get-Latest' doesn't realise that some files have changed when they actually have. Offline mode apparently didn't work well when it came to re-syncing. The list could go on.
I thought that the feedback that the version control side of TFS was not fantastic would have been taken on board. A lot of people have started to look for alternative ways to use a 3rd party VCS tool with TFS. I honestly thought that Microsoft would allow 3rd party tool integration into TFS. This hasn't happened.
Improvements were made to the version control side of TFS. These included the idea of 'local workspaces': a copy of the files locally that are not read-only. Changes to the files outside of the TFS environment will be picked up by TFS. Another change was to offline mode: when the system comes back on-line, everything automatically re-syncs.
My initial thought when I heard about 'local workspaces' was that Microsoft had finally hit that DVCS style of system. I tweeted about this and was soon told, by a TFS development manager, that this was very much not the case. You still have to communicate with the server in order to do a 'get', 'commit' etc. This becomes interesting when you work with hosted TFS.
What happens if I am working whilst commuting and do not have network access? You can still work, as you can edit files locally and TFS sees the changes, as the improvements suggest. But the idea of everything automatically syncing when back on-line worries me. It's never that easy – look at merging with VCS systems; it can be a real pain. You cannot stage local commits with TFS local workspaces. It sounds good – a local workspace – when in reality it is only an SVN style of VCS. This can be good, but not when working in offline mode.
What I think Microsoft should have done is move straight towards the local workspace being a DVCS style system. This would be the most useful approach for those using hosted TFS in an offline manner – a lot of developers do actually travel and work whilst doing so. I could then work locally and, when I have a network connection, push my local changes back to the TFS server.
Developer needs in a VCS have changed; the rise of DVCS style systems has made this happen. Implementing a CVCS style of system therefore feels as though TFS is going back to 2000, when SVN was first introduced. It doesn't feel like it is moving in a tangible way towards developer needs. TFS takes care of a lot of the ALM cycle – it has a CI style tool, version control, bug tracking etc. – but as long as I have Git, TeamCity and YouTrack, it is not something I feel I could move towards. It feels very old fashioned in comparison to these tools.
In recent months, there has been a shift in methodologies to move my team from kanban to a flavour of scrum never quite seen before. This was not my choice; the decision was taken by management as pressure from the business descended on the team. The business decided that the team should be "coding" for a full 7 1/2 hours a day and continually checked that this was the case. They stopped pair programming and meetings, as these were seen as eating into coding time and not generally the best use of development time.
They decided that this new "agile" approach would suit the team better. This begs the question: "are businesses losing faith in agile and moving away from it?".
I asked Rob Cooper this question, and Rob gave the following great view:
" I don't think companies are turning their back on agile, I think they (companies that we would consider not doing agile) generally fall in to one of three camps:
1. They simply think they are doing it, but have either not invested time in decent training or misunderstand the process/practices. The "The [Self] Proclaimers".
2. They do not care for all this agile "nonsense" since it's "just a way for developers to get out of doing important work". The "Non-Believers".
3. They simply haven't heard of it, or don't know what it means. The "Lost Souls". "
I think Rob has hit the nail on the head here. I know I have certainly worked in companies that have fallen into category 1. This means that we have fallen into the "managers know best" scenario, which is not always correct. I know for a fact that in this business they are definitely a "Self Proclaimer". They are trying to throw resource at something they see as a problem (there are lots more things involved here, but I won't go into them). This flavour of scrum is certainly not scrum. In fact, it's not agile at all. The management take care of the estimates (yes! I really said that), but this is only recently the case. The developers are not part of the planning process. Therefore estimates are made on a best case scenario rather than accounting for technical debt, code quality, extensibility and architecture.
It is a really interesting time for this shift in the business to happen. Recently, Steve Denning posted "Agility is not enough: beyond the manifesto". The article talks about how working software is not enough and that we should be aiming to delight the customer at all costs. This reflects the attitude of the management I work under. They do not believe that an iterative approach delivers value to them: it's the full product or nothing at all. If it is the full product, then it has to be delivered at top speed and with minimal requirements. Less time means less money. Do businesses believe that an agile team wastes time? In my case, yes. They think that when we are talking about architecture and code quality we are not working hard enough. They do not understand that if time is not given to extensibility and maintainability, delivering software will become very costly due to ridiculously high technical debt.
The writing of this post was triggered by Nathan Gloyn's strong rebuttal of Steve Denning's article. If you have not read Nathan's post, then it is certainly worth reading. If we could bottle the passion and knowledge that Nathan has for agile and sell it to companies, then I would not be in this situation.
I certainly feel that if we, as developers, are not careful, then the world of waterfall methodologies awaits us again. If businesses are not turning their back on agile itself as a process, but continue to run their own flavour of agile, then it begs the final question: "is it really fair to term it agile?".
Agile is seen by some companies as a buzzword. Are companies saying they are agile to be in with a shout of hiring the best developers, who want to work in an agile environment, or of winning contracts by baffling customers with methodologies just to impress them? In my opinion, some companies are doing this. They are soon found out, though, and it does not make a fantastic impression on new employees or customers.
Nathan has given us a few tips to assess whether we are in fact agile:
"It's not agile if":
- estimates are done by management or just the team leader
- managers decide the amount of time a project will take even after asking the team how long it will take
- team members are assigned tasks by a manager
- velocity of the team is adjusted after the team estimate to make the project fit into an expected time frame
- the team are told what work they have to do and when they have to do it by
- the team are a bunch of individuals working separately
- quality is just a word, something to be aimed for rather than something that is embedded in the team
I'd like to thank Nathan and Rob for their contributions to this post. You can read Rob's version of the post on his blog. I'd also like to thank Nicholas Mayne for some thought provoking words on Twitter when I initially started the post. It is always nice to hear that other people are as passionate (if not more so!) about a topic as I am.
One of the biggest pitfalls of working in teams where branches are created to work on features is that we are not continually checking in to the head revision / trunk. This means our work is not constantly being integrated. The feature may be small and we can integrate back within a day or two, but there are a lot of instances where the feature takes longer (2 – 3 months). If this is the case, we need to be continually merging and then re-branching, meaning we go through merge issues more often than required.
What if I was to tell you there was a much better way? I bet you would be a little sceptical at first, but then see the benefits of it. When developing features, we can all work from the trunk by feature switching. Feature switching (FS) is the use of a configuration system, a database or a config file, where we can turn a feature on or off. This will help us with canary releasing (releasing features to just a small group of users) and A/B testing. We really start to see how useful FS is when we look at moving towards a continuous delivery workflow.
We can write 'feature' code on the trunk, enable a switch to turn it off and then push to our head revision. We can deploy our system, as the code we have written will not be parsed or shown (if the changes are front end). The danger is that we can forget to disable the feature. I usually set up my web.config transformations to take care of that issue, as shown below.
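For example, a transform along these lines forces a switch off in the release build (a sketch; it assumes switches live in appSettings, and the key name is illustrative):

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- make sure the unfinished feature is off in the deployed build -->
        <add key="Feature.NewRegistrationForm" value="false"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>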
FS can be temporary, while a small feature is being built. If this is the case, then when the feature is complete and signed off we must remove all instances of the switching code and the settings that go with them. FS can also be long term, e.g. being able to disable registrations for an event on a website. It is important to keep temporary and permanent switches separate for clarity.
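Take a simple registration action as a starting point (a sketch; the controller name is illustrative):

    public class AccountController : Controller
    {
        public ActionResult Register()
        {
            // no switching logic - the one and only view is always returned
            return View();
        }
    }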
This code shows that, on a simple request to the register action, a view will be returned. What if we wanted to start tinkering with the view? What if we wanted to start adding new fields to the registration form but not show them until the feature is finished? I'm sure some would say create a copy of the view and work from that – but there is no way to get to that view, as it is not returned from a request.
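The switched version might look something like this (again a sketch; FeatureManager is the hypothetical configuration-backed class discussed below, and the feature and view names are illustrative):

    public ActionResult Register()
    {
        // 'newing up' the feature manager directly - see the caveat below
        var featureManager = new FeatureManager();

        if (featureManager.IsEnabled("NewRegistrationForm"))
        {
            return View("RegisterNew");   // new version of the view
        }

        return View();                    // old version of the view
    }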
This is the same action method with some further logic added to it. It effectively checks to see if a feature is enabled and, if so, the new version of the view is rendered; otherwise the old version of the view is rendered. This is a very, very simple demonstration of feature switching. I would try not to do it this way – 'newing up' the FeatureManager inside the action method. I'd use Dependency Injection to inject an instance into the constructor for loosely coupled code.
What if I wanted to do this as part of a view and not have to modify the controller code at all? We can look at the following example:
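Something like this, in Razor (a sketch; the feature name and panel mark-up are illustrative):

    @if (Html.FeatureEnabled("NewRegistrationPanel"))
    {
        <div class="new-panel">
            <!-- new panel mark-up -->
        </div>
    }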
This example says that if the feature is enabled, then show the new panel. It uses an HTML helper method to hide the logic of the switch; all I have to do is pass the name of the feature to the helper. The helper code is as follows:
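A minimal sketch of that helper (it simply delegates to the same hypothetical FeatureManager):

    public static class FeatureHtmlHelper
    {
        public static bool FeatureEnabled(this HtmlHelper helper, string featureName)
        {
            // keep the switching logic out of the views
            return new FeatureManager().IsEnabled(featureName);
        }
    }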
Pretty simple and straightforward stuff. Using this type of approach will mean we can all work effectively on the head revision, which will save us time (we shouldn't have problem merges). It will also let the CI system take care of integration and spot integration issues earlier. And it will mean that when we check into the trunk, our application should be in a deliverable state. There are much more complex ways to feature switch; we will cover those in a later post.
I have taken a copy of the ASP.NET MVC MusicStore and added some feature switches to it. This is available on my GitHub repository. Please feel free to take a copy and have a play with the switches. The code there is what is contained in this post.
This is the 3rd post in a series ‘How to get started with CI’
Previously, we talked about choosing the correct infrastructure for your CI system. We will now talk about the CI tools themselves. There are lots of CI tools out there – many more than I know about or have listed here. Some tools are good and some are not to my liking.
If you have used a CI tool before then you may find yourself comfortable with that; if that is the case, then I'd love to hear about your experiences. Alternatively, if you are new to these tools, I have included a few things to consider when choosing one:
1. Environment Compatibility
Don't choose a CI tool that is not compatible with your environment, e.g. if you run Linux based servers, then you would not choose a Windows based tool unless you had the ability to create a Windows VM. Before choosing a tool, make sure to investigate its requirements.
2. Budget
If you have little or no budget, then you will be limited to the tools that are either free or have community versions. Check the license agreement for terms and conditions about their use – it's best to know what is expected before you start using a tool.
3. Support &/or Community
Choose a tool that has a good user base or support forums. If you choose otherwise, then it may be difficult (if not impossible) to get help if things do not go as expected or if there are issues with the software. A more established and well known tool means there is a greater probability of someone having had a similar issue to you, and thus a resolution may be more readily available.
4. Ease of Use & Maintainability
If you choose a tool that is difficult to use, then it will be difficult to adopt into your organisation. For example, CCNet, a hugely powerful tool (and one I started with), is configured mostly via XML files (special editors may be available) and build scripts. There is no easy UI to make things smoother. In my opinion, unless you like creating scripts in Notepad (or your favourite text editor), it may not be the one for you. Remember, you probably won't be at a company forever, so think of the person who has to pick up the system after you leave – unless you have a simple system, it may get neglected or trashed completely.
5. Choose something ‘Cool’
I personally love new shiny tools. In fact, if there is a developer out there who doesn’t then I’d love to speak with them to find out why. I enjoy testing, evaluating and writing about new tools. So when a new CI system appears on my radar, I give it a go. If you find a tool that keeps you interested then you will enjoy using it.
It is very important that your tool is not a hassle to use, configure or maintain. If you have to set aside time for CI maintenance that takes hours out of your week, then the team is less likely to buy into the system. My top tip is to use a scoring matrix: score each of the sections above from 1 – 10 (1 being worst, 10 being best) and then draw up a shortlist of the tools that you like. Once you have a shortlist of, say, 3 tools, trial them. It is very important to trial them; they may look great on paper but not work out how you imagine.
This article talks about the points worth considering when choosing a CI tool. If you have been through this process and feel there are other points worth considering, then please do comment and I will add them to the list. If you feel that a particular CI tool was missed from this article, then please add that as well, with a link to its website.
The next post in this series will talk about how to implement your first project into a CI tool.