Dec 29 2011

Good Bye Code Reviews, Pair Programming is the Silver Bullet….

Category: Continuous Integration, Software Development
Phil @ 12:01 am
I actually wrote this post last March, but never got a chance to publish it. That pretty much sums up my year!!! I don’t think I was ever really happy with the way it flowed or the exact point I wanted to make. However, after working with some different teams this past year, I’m even more convinced that code reviews and standards should exist and be enforced. Good topic for another post!

I might have said this in a previous post, but I believe that many developers use Agile as an excuse to avoid following institutionalized processes. They prefer to make up their own extremely lightweight methodology, doing only the activities they enjoy or believe they have time for. By adopting an Agile process, many claim they can eliminate the need for analysis, design, and even code reviews! Personally, I don’t believe this was the intention of the Agile methodology. However, given the substantial number of quality gates and sign-offs that organizations institute to ensure the elimination of mistakes, it is no wonder that Agile becomes the easiest method to subvert corporate mandates and deliver software. I completely understand the theory and goal of these mandates, but it does seem that by the time they are institutionalized, the overhead of the process is more complicated than the actual business problem to be solved! And before anyone corrects my association of Agile and pair programming, I do realize that pair programming is a tenet of Extreme Programming, not Agile. Unfortunately, many people seem to commingle the Extreme Programming principles with Agile; XP tends to generate negative connotations in some groups, while Agile is completely accepted.

Not so recently, a coworker sent me a question, wondering if I thought that pair programming eliminated the need for code reviews. I have actually heard this claim from several Agile teams in the past, using pair programming as an approach to avoid the code review process. Developers are quick to forgo code reviews for several reasons; I believe the two most prevalent are:
  1. Code reviews are done as an SDLC requirement; not because the development team actually desires to conduct them or finds value in them.
  2. The vast majority of code reviews are completely ineffective; there is no preparation, no structured process, and no follow through on findings.

There is actually a lot of Internet discussion on this subject; I have included a few links at the bottom of this post which I found interesting. None of them changed my personal view: code reviews cannot (and should not) be eliminated from the process. One point I would concede is that pair programming could make the code review process easier, assuming that fewer issues will be discovered during the review.

I believe that code reviews should be a “continuous process” during the entire development cycle, whereas most people consider the code review to be a single process point. Some believe that the review is something that happens at the end of development, after the code is completed and after it has been tested; typically when it is too late in the process to make corrections. I feel that code reviews are a never-ending cycle of four basic activities, all contributing to the overall quality of the code base.

  1. The easiest and cheapest process is automation. There are two basic solutions: the IDE and a Continuous Integration server. Your IDE can be one of the most effective tools. Eclipse can be configured to share coding standards and validations across the development team, eliminating numerous bad practices at their point of origin. Tools such as Checkstyle, PMD, and FindBugs can easily be incorporated into your IDE. These same validations (rules) can also be utilized by your Continuous Integration server, providing the final conformance checks. Automation eliminates almost all of the soft issues from the review process and even promotes better coding practices.
  2. Pair programming is a very valuable technique and I enjoy the collaboration and education that happens during the process. However, I’m not sure that pair programming works on all teams; there needs to be a real sense of openness and cooperation for the pairing to be effective. And therein lies the real issue: if the pair is very cohesive in their approach, you have the problem of “group think”. If everyone is thinking the same way, it will be much harder to discover issues outside of their common perspective. I believe that code reviews can be more effective when conducted by people who are not wedded to the design and/or implementation; an external perspective is always valuable to the process. Another small problem is traceability: without the review, there are no review artifacts or documentation to support the process. Next you have the practicality of the schedule. Assuming that you can eliminate the code review process by requiring pair programming, how do you mandate or assure that it happens? Was all of the code written together by the pair? I find it really hard to believe that 100% of the code will be written completely by the pair. Given all of the demands placed on people (meetings, vacations, kids, etc.), there would not be many shared hours left for the pair to write code. It does not seem practical to mandate that no code will be written outside of the pairing. So do you only review the code that was not written by the pair? Sounds complicated! All we have accomplished is removing a “learning opportunity” from everyone on the team, including the developer who wrote the code!
  3. I’m not sure that “Continuous Review” is an officially documented concept, but it is a technique that I have used successfully over the years. Continuous Review is fairly simple, but it is potentially time-consuming and highly prone to apathy. The process is extremely trivial: before accepting changes from your teammates, simply review the change set in your IDE. It can be quickly accomplished using the Team View found within Eclipse. You can walk through each commit and its associated files before updating your work area. By spending this small amount of time throughout the development process (each day), you can help ensure that everyone is on the same page, even ensuring that the code is in sync with the design. You get a very good sense of the overall health and progress of the project just by watching the commit stream. Unfortunately, not many developers seem to adopt this practice, as it takes a serious commitment and can also cause tension when the team is not as cohesive as it should be (some developers might feel like they are being monitored).
  4. The final and most common piece of the review process is the traditional “code review”. It has been my experience that this type of review is generally the most ineffective. Even though years of published guidelines exist that could actually make the process effective, they are typically tossed aside due to lack of time and management guidance. I have blogged on this subject in the past. These reviews turn out to be more of a code presentation than a review. This will unfortunately be the first time that many of the participants have actually seen the code! No time is allocated for participant preparation. Worse yet, no time is ever allocated for the correction of discovered issues. The reviews are always performed late in the development cycle or during the testing cycle, simply to satisfy an SDLC deliverable. I believe many developers have been conditioned into disrespecting code reviews, simply because they become just another meeting which produces very little value.
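To make the automation described in the first activity concrete, here is a minimal sketch of a shared rule set. The selection of checks is my own illustration, not a recommendation from the post; the module names are standard Checkstyle checks.

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<!-- checkstyle.xml: one rule set, consumed by both Eclipse and the CI server -->
<module name="Checker">
  <!-- File-level checks -->
  <module name="FileTabCharacter"/>
  <module name="TreeWalker">
    <!-- Catch common bad practices at their point of origin -->
    <module name="LineLength">
      <property name="max" value="120"/>
    </module>
    <module name="EqualsHashCode"/>
    <module name="EmptyBlock"/>
    <module name="MagicNumber"/>
  </module>
</module>
```

The same file can be imported into the Eclipse Checkstyle plug-in and referenced by the CI build, so the developers and the server enforce identical rules.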

I might sound like I’m against code reviews; far from it. I am against the traditional code review process that focuses on developer-selected code, typically presented during a meeting. I recently reviewed the Crucible tool from Atlassian. The tool is far from perfect, but it is ideal for managing asynchronous, distributed code reviews. Automated tools provide a variety of methods for selecting the code to be included in the review process, such as querying by user id or change ticket number. The selected changes are then assigned to a collection of developers and the process begins. Each developer is notified via email that they have been assigned a collection of code to review. When convenient, the reviewer logs into the tool and is presented the files to review. Crucible actually tracks your progress through each file, giving feedback to the review coordinator, indicating that progress is underway. The developer is presented the change sets and can add comments directly into the code. Each comment can be categorized as an issue or a suggestion. I think Crucible is a great tool, but it provides absolutely no reporting capabilities. I have talked with Atlassian and they seem to have absolutely no interest in generating meaningful reports. Even the simplest report, which would show all of the issues found during the review, is not possible to produce. I’m a huge Atlassian fan, but this lack of functionality completely boggles my brain. In today’s evidence-driven SDLC world, the documentation from the code review is actually more important than the code review itself!

Obviously, I don’t think code reviews should ever be eliminated from the process. I also believe it is possible to achieve real value from them, with very little overhead. Hopefully someday, I will actually be able to prove it!



Mar 28 2011

Continuous Testing Plug-ins versus Continuous Integration

Category: Continuous Integration, Eclipse, Testing
Phil @ 12:01 am

This is an extremely interesting concept, but I really wonder how it works in the real world. Toy projects are one thing, but real production-size applications are a different story! The principle is very simple: just modify a piece of code in your project and the IDE automatically executes all of the unit tests. Pretty cool concept. But what if your unit test suite runs for several hours? I believe this is an all too common problem. I have seen many projects dedicate time to building their unit tests and, over time, end up with test suites that can run for multiple hours. There is a very negative side effect associated with this scenario: it can change a developer’s attitude towards checking in code; it actually discourages it. If developers know their commits will not be validated for several hours, they tend to check in very infrequently and only when they feel it will not impact other developers. Many factors can contribute to this scenario. The primary factor is that many of these tests are not really unit tests; they typically do not provide the proper isolation (via mocking) and take too long to execute. I’m not going to address those issues directly, but the continuous testing concept does support and encourage the idea of quick test execution.

Wouldn’t it be cool to run all of the unit tests in your IDE before checking in your changes? Obviously, this action is expected of the developer and is completely possible without any special tools. However, it is not automatic and requires extra effort by the developer. Usually, it is easier to let the Continuous Integration server run the tests and fix the fallout later. It seems like a simple problem to solve: create a plug-in that automatically executes all of the unit tests upon saving a file, requiring no additional effort or interaction from the developer. If you are an Eclipse user, you have two good options, JUnit Max and Infinitest. I would really like to try Kent Beck’s JUnit Max, but you need to purchase a subscription to use (and even try) the plug-in. That is a show stopper for me, as there are free alternatives such as Infinitest. I evaluated the Infinitest plug-in last year, after it was open-sourced. I did encounter some problems with the plug-in; my Spring-based projects did not work. I was quickly discouraged and abandoned the tool. Months later, I was still intrigued by the idea, so I gave it a try last week on a different project and achieved success!

Unfortunately, Infinitest is not currently installable from the Eclipse Marketplace. It can be quickly installed the old-fashioned way from the Eclipse update site. There is only one setting you may want to tweak, the Slow Test Warning threshold. If you have unit tests that execute for longer than half a second, Infinitest will add a warning to your Problems View for each one; this could be problematic if you have a large number of long-running unit tests. You should also notice a huge new “Continuous Test” status bar in your Eclipse footer. It will update automatically based on your coding activity. JUnit Max is a little more subtle, but provides the same type of feedback mechanism. That’s it, you are now ready to write code and auto test!

I’m not exactly sure how intelligent these plug-ins are about determining which unit tests to execute based on what code was changed. Hopefully that is the next level of continuous testing evolution. If you modify a single unit test, Infinitest will only run that single test; beyond that, I don’t believe there is much intelligence. If your code modification causes any unit tests to fail, the status bar will turn red. Unfortunately, it does not tell you how many tests failed; it only provides the number of tests which it executed. If you mouse over the status bar, it will display the names of the tests which failed, but you cannot click on them; they are just there for feedback. You will need to look in the Problems View to discover which test broke, as the view will now contain markers for each failed test.

The only issue I have with the plug-in is around user interaction. It does not provide its own View for test results; it simply adds the failures to the existing items in the Problems View. I assumed that the Unit Test View would be updated with the test results, similar to manually running a unit test; however, this was not the case. It appears that both Infinitest and JUnit Max use the same strategy, providing minimal feedback to the developer. Maybe this is a good approach, but I was expecting more visual feedback and confirmation. I associate static problems with the Problems View, such as compiler and code quality errors, not dynamic problems such as test failures. The developer should be able to click on a marker (problem) and be navigated to the exact line of code indicated by that marker, giving them the opportunity to resolve the issue. Here is my problem with this approach: if the developer clicks on the failed JUnit marker, where will they be navigated? The only possible location is the unit test which failed. Generally speaking, this is not where the problem would be found; it would lie in the code that was just changed. Given this issue, I would rather use a different View for continuously executed tests; maybe even better, find a way to reuse the existing Unit Test View.

This does bring up an interesting question: do these plug-ins minimize or eliminate the need for Continuous Integration? I don’t believe so, but they would seem to eliminate the need for some of the more advanced features the vendors are implementing. One such example is TeamCity’s remote build, pre-commit test feature. There are many factors to consider, but if unit tests execute quickly, I do not see the need for this functionality to be provided by the CI server. If the tests execute for multiple hours, we could be adding more risk into the process by not committing changes when they are ready to integrate. Remote build, pre-commit testing sounds valuable on paper, but I am struggling to see the real benefits. I tend to think this functionality overcomplicates the Continuous Integration process. One of the main benefits of Continuous Integration is that it is seamless and automatic. Nothing is required of the development team to reap the CI benefits; simply check in your code early and often. As more education and process overhead are required by the CI process, it is no longer simple nor free. The focus should be on doing what makes sense, at the right time, in the right place. These IDE plug-ins seem to have the correct perspective: pre-check-in on the developer’s desk; simple, automatic, and efficient.

I hope that you will try integrating one of these plug-ins into your personal development process. I believe they will help create a more robust development process, with fewer Continuous Integration failures and ultimately a more efficient, faster-executing test suite. If you have real-world experience with one of these tools, please provide some comments. I am still curious: does this approach work on real projects?



Mar 01 2011

Build Evolution: Ant to Maven – A simple exercise…

I typically end up taking the role of “build guy” and I have no idea why! It is probably a control thing; I like to be able to fix problems and make improvements. I have been an Ant-man for many years, and was proficient in the “Art of Make” before that. I never gave Maven a chance, as it always seemed like it was too complicated for adoption by corporate projects. Ant can be very simple and quick; some might even throw the word dirty in there! Unfortunately, I have seen (hopefully not created!) some of the most horrific Ant build systems imaginable. Many times, it is a developer’s “first time effort” that becomes the project’s permanent build system. Because I had the opportunity to work on many different Ant-based projects, I gained experience by using and evaluating many different approaches. I believe the real problem with Ant is not necessarily Ant itself, but the lack of standards and structure it provides. Ant allows each developer (or project) to invent its own Ant Philosophy. Once you are familiar with the Ant “programming style”, you can script just about anything, any way you want, manipulating files located in any directory structure, located anywhere on the network. Ant is a very powerful and flexible tool. Unfortunately, the more inventive the Ant Philosophy is, the more brittle and complicated the scripts can become. Ant XML also seems to be an excellent “cut and paste” candidate. New team members, with little time or understanding of Ant or the specific philosophy implemented, find duplicating targets the safest method to implement change. Another shortcoming of Ant is dependency management. This problem is easily solved, and I have blogged about it numerous times in the past. I believe that the use of a dependency management tool, such as Ivy, is an evolutionary step taken by “build guys”. Sadly, many developers don’t seem to be interested in dependency management. They don’t like the dependency on the dependency management tool and would rather just manage dependencies within the project. For me, the value of the central repository and structurally documented, versioned dependencies cannot be overstated.

I have probably described many of the same experiences you have encountered with Ant-based projects; they can be even more exaggerated when you start managing multiple projects. One possible evolution path is to define a “standard” directory structure and develop a collection of “shared” Ant scripts that work with the standard directory structure. This allows each individual project’s build XML to be little more than a set of properties. This is exactly where we ended up, utilizing an Ivy-like externalized repository for build scriptlets and Ant’s URL import capability. Hold on, a standard directory structure with reusable build components? Almost sounds like some other tool…
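As a sketch of where this evolution can land (the repository URL and shared script name below are hypothetical), a project’s build file becomes little more than a set of properties plus an import; Ant 1.8+ can import a build fragment from a URL resource:

```xml
<project name="my-webapp" default="dist">
  <!-- Project-specific properties; everything else is shared -->
  <property name="src.dir"  value="src"/>
  <property name="web.dir"  value="WebContent"/>
  <property name="dist.dir" value="dist"/>

  <!-- Pull the shared targets (compile, test, war, ...) from the
       externalized script repository (hypothetical URL) -->
  <import>
    <url url="http://build.example.com/ant/shared-webapp-build.xml"/>
  </import>
</project>
```

Every project that follows the standard directory structure can reuse the same shared targets, so a new project’s build is a handful of property lines.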

The Experiment…
Being an Eclipse guy, I was curious how to combine an Eclipse Web Tools Platform (WTP) project with a Maven project. My ultimate goal was to set up a Jenkins Continuous Integration server that monitored a Maven-based project on GitHub, while developing the code within Eclipse utilizing a Dynamic Web Project facet. Historically, my projects were managed by Hudson using Free-style jobs; they were always Ant-based and extracted from a Subversion repository. I had worked on quite a few web applications using this pattern, so it seemed like a pretty good starting point! I had started a toy project to manage encrypted data a couple of years ago and never put it under version control; it was a perfect candidate for this experiment.

Changing a project’s build tool has a rather large impact and needs to be considered from multiple perspectives. I have approached the problem by breaking it down into five (5) different spaces. This might not be the optimal or preferred approach, but it does show the high-level differences and addresses the major build challenges. Surely, my next project will be done completely differently, but you have to start learning somewhere!

Perspectives and Changes
Dependencies: To end up with an application that compiles in Eclipse, this change and the next have to be done in parallel. The easiest approach is to create a POM and focus on the dependencies section. If you are already using Ivy to manage your dependencies, this will not be that big of a change. Using a public Maven repository like MVN Repository to locate dependencies is very simple and saves a ton of time. The easiest way to create a POM is to let Eclipse make it for you; you will need to install the Maven plug-in first. Once you have your project loaded into Eclipse, simply “Enable Dependency Management” in your project’s properties. You can also add dependencies directly from the plug-in; I’m a little old school and prefer to edit the POM manually, but either approach works fine.
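As an illustration (the artifacts below are era-appropriate examples from the public repository, not the project’s actual dependency list), a dependencies section might look like:

```xml
<dependencies>
  <!-- Compile-scope dependency resolved from the central repository -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>3.0.5.RELEASE</version>
  </dependency>
  <!-- Test-scope dependency: available to unit tests, excluded from the WAR -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

Each dependency is documented by its coordinates (groupId, artifactId, version), which is exactly the structurally versioned record that ad-hoc lib directories never provide.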
Local Development: I never liked the required Maven directory structure; I must have subconsciously adopted the Eclipse project structure as my personal standard. Converting the project’s directory structure was trivial, and I soon realized it was not that big a deal after all; I always separate my source and test files anyway. The main difference is that everything lives under the source root, including the WebContent directory. I’m not sure what the “preferred” process is for loading a Maven project into Eclipse, probably using the Maven Eclipse plug-in, which will create the appropriate Eclipse property files. Since I was doing a conversion, I don’t think this plug-in would be too helpful. Because I’m fairly Eclipse savvy, I simply wired everything up myself. Having the Eclipse Maven plug-in will enable your dependencies to be automatically added to the build path. With the addition of the Eclipse “Deployment Assembly” options dialog, it is extremely easy to manually configure your project. You need to have a Faceted Dynamic Web Module for the “Deployment Assembly” option to be visible, but that should be a fairly simple property change as well. At this point, you should have a fully functional, locally deployable web application.
The Build: Now we can ignore Eclipse and focus on building our WAR file using Maven. Even with a minimalist POM (plus your dependencies section), it is possible to compile the code and create a basic WAR file. Use mvn compile to ensure that your code compiles. Using the Maven package goal, the source code is compiled, the unit tests are compiled and executed, and the WAR file is built. All of this functionality, without writing one line of XML! One of the more time-consuming parts of an Ant-based build is integrating all of the “extras” typically associated with a project, making them available to the continuous integration server. The extras include: unit test execution, code coverage metrics, and quality verification tools such as Checkstyle, PMD, and FindBugs. These tools are all typically easy to set up, but every project implements them slightly differently and never puts the results in a standard place! The general process for adding new behavior (tools) to a build appears to be the same for most tools. You simply add the plug-in to the POM and configure it to fire the appropriate goals at the appropriate time in the Maven lifecycle. Ant does not have this lifecycle concept, but it seems like a very elegant way to add behavior into the build. As an example, I added the Checkstyle tool to the POM. The <executions> section controls what and when the plug-in will be executed; in this case, the check goal will be executed during the compile phase of the build process. Simply executing the compile goal will cause Checkstyle to be invoked as well. This seems like a very clean integration technique, one that does not cause refactoring ripples.
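A sketch of the Checkstyle integration described above (the plug-in version is illustrative of the era; verify against the current release):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>2.6</version>
      <executions>
        <execution>
          <!-- Bind the check goal to the compile phase, so
               "mvn compile" invokes Checkstyle as well -->
          <phase>compile</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

The same pattern (add plug-in, bind goal to phase) applies to PMD, FindBugs, and most other quality tools.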

The Cobertura integration is another very good example. I have been a fan of the Clover code coverage tool for many years. Since losing access to Clover, I needed to look for an open-source alternative, having never tried EMMA or Cobertura. They both seemed like capable tools, but I had more success integrating Cobertura with Jenkins. I’m highlighting this point because doing code coverage with Ant and Clover can sometimes be a little tricky and usually messy. The Cobertura plug-in takes complete responsibility for instrumenting the code, executing the unit tests, and generating the report; it is completely non-intrusive.
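A sketch of the Cobertura wiring (version illustrative); registering the plug-in in the reporting section is essentially all that is required:

```xml
<reporting>
  <plugins>
    <!-- Cobertura instruments the classes, runs the unit tests
         against them, and generates the coverage report -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
    </plugin>
  </plugins>
</reporting>
```

Running mvn site (or mvn cobertura:cobertura directly) produces the coverage report, which the Jenkins Cobertura plug-in can then chart on the job dashboard.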

Continuous Integration: Most development teams also want their project monitored by a Continuous Integration (CI) process. Modern CI tools such as Hudson/Jenkins provide excellent dashboards for reporting a variety of quality metrics. As I previously stated, it is rather time-consuming to develop and test the Ant XML required to generate and publish these metrics; combine that with configuring each CI server job to capture these metrics and you have added a fair amount of overhead. I knew there was support for Maven-based projects within Hudson/Jenkins, but never took the time to understand why it would be beneficial. The main benefit is right there in the description, had I bothered to read it! Configuring a Maven-based job is little more than clicking a few check boxes. There is no need to configure the metrics in Jenkins; using the information provided by Maven, it is basically automatic. This is one interesting aspect of the Hudson-Jenkins fork. Sonatype, the creator of the Nexus Maven Repository manager and the Eclipse Maven plug-in, has chosen the Hudson side of the battle. I wonder what this means for Maven support on the Jenkins side. Obviously, it will not go away, but that might end up being a Hudson advantage in the long run. I still believe that the Jenkins community will quickly outpace the Hudson community.
Deployment: I’m not going into much detail on this point, other than to say that I would like to see the deployment process completely separated from the build process, maybe even done by two different people or organizations. I have seen too many projects combine building and deploying into one system, creating artificial dependencies and time-consuming, unreliable, unmaintainable deployment processes. I have also experienced complete deployment overkill, causing simple deployments to take over thirty (30) minutes, with no real guarantee of success. Hopefully teams (development and operations) will strive for a minimal deployment overhead/process, one that provides just enough security and control to enable continuous, rapid deployments.

I have barely touched upon the real power of Maven and know that some people are going to say my requirements are too simple to be valid. I’m sure there are projects far more complicated than mine, but I do feel that this project is highly representative of the 80-20 rule. If developers can exercise a little self-control, conform, and do what is required versus what would be really cool, simple Maven POMs should satisfy a large number of corporate projects with very little overhead. That is the goal, correct? I can even see using Maven for my toy projects; why would I want to waste time writing and maintaining a bunch of useless build scripts? I never realized that you could ease into Maven; I think that is the real point of this post! And it is not actually a huge investment.

When I was Googling for this topic, I found many interesting articles. I truly believe the biggest reason preventing teams from evolving and adopting new approaches is simply fear: fear of change and of the unknown. I still think Maven is a rather large “pill to swallow”, but I have now experienced enough value to continue investing time in this technology. I took the typical developer approach and dove into Maven with little preparation or understanding… I would not suggest this approach, unless you too have lots of time to waste! If you are unfamiliar with Maven, I hope the following collection of articles will provide a quick overview of the basic concepts. I realize that some of them are a little old, but you should start with the Maven basics, and those really have not changed.

An Introduction to Maven2
Should you move to Maven2
Maven vs Ant: Stop the Battle
How to Migrate from Ant to Maven: Project Structure
Top Ten Reasons to Move to Maven 3
Maven over Ant + Ivy
Automated Deployment with Maven – Going the whole nine yards



Feb 04 2011

Continuous Integration Saga – The move to Jenkins…

Category: Continuous Integration
Phil @ 2:35 pm

If you have ever looked at my blog, you know that I’m a huge advocate of Continuous Integration and have been pushing Hudson for several years now. The end of SUN unfortunately had a significant impact on many open-source projects, including Hudson. Support for Hudson continued to work pretty smoothly from a user perspective until last December. I don’t think the quality of Hudson changed, but the quality of the releases seemed to. The Hudson team had been extremely consistent with their weekly releases; the release notes were always updated, the download links always worked, life was good. As 2010 came to an end, something seemed to change; links pointed to the wrong versions, releases seemed inconsistent, release notes were out of date or lagged days behind. The most obvious indicator was that the community was very quiet… something was happening behind the scenes.

Fast forward to last week, when an important vote was taken by the existing Hudson community. It overwhelmingly decided to break away from ORACLE, essentially forking Hudson into a new product called Jenkins. ORACLE apparently holds the rights to the Hudson name and has already revamped the Hudson website. It will be very interesting to see what happens next. I don’t think I’m a fan of forking. I lived through the remnants of the emacs/XEmacs split; where did that really get us? In the end, I think it is mostly about ego and control; as users, we ultimately end up being split into two camps, typically incompatible, with lots of petty sniping. You can read this recent announcement from ORACLE and the follow-up responses; it is pretty obvious that this was not a friendly breakup!

Question of the Day – Which side are you picking?
I’m going with Jenkins.

  • First, the community was very committed to the split. Considering they have been leading the product for a very long time, I don’t predict a major change in direction.
  • Second is Kohsuke Kawaguchi. This is not to take anything away from the rest of the Hudson developers; they all seem to do an amazing job and I anticipate this will not change. I am just extremely impressed with what KK has produced over the years.
  • Finally, where will ORACLE go with Hudson? Will there be anyone actually using it? I predict that Hudson will quickly fall behind in functionality to Jenkins and be a “good” choice for large companies that are only concerned about “supportability”.

Alternate Question of the Day – Which butler image will change first?
They can’t be the same, right? I think Jenkins should get a new mascot, after all, it is a brand new world! And one small favor, please don’t make it Chuck Norris!

I recently had the opportunity to create a couple of Hudson plug-ins; this was a very educational experience! There is a lot of information available on the process, but unfortunately, I found it more valuable to walk through the code of existing plug-in implementations. I believe that one of the major advantages of the Hudson/Jenkins architecture is the plug-in infrastructure. With just a little bit of vision and support, teams (and organizations) could extend Hudson into so much more than a basic CI server. All things take time…

I already have my Jenkins server up and running and hope to get it integrated with my GitHub account in the near future… I want to finish documenting my experience creating a custom plug-in, which I will hopefully share along with my sample implementation.



Jul 20 2010

Eclipse Project Configurations – Shared or Personal?

Category: Continuous Integration, Eclipse, Software Development | Phil @ 12:01 am

To be honest, this was not a topic I was actually planning to write about. However, because I received a couple of interesting comments on a recent blog entry, Unfriendly Developer Practices, I thought I should clarify my position.

Question of the Day
Should your IDE (Eclipse) configuration files be checked in as part of your project?

One of the comments suggested that there should be no dependency between the build system and the IDE. Another person suggested that the disconnect between the IDE and the build system creates a convenient place for dependency inconsistencies to develop unchecked.

I completely agree with both of those statements and have seen both problems manifest several times. However, there are two simple tools that can make this a non-issue: Continuous Integration and Dependency Management.

  • Continuous Integration is obvious; if that lazy developer forgets to check in a dependency or update the build scripts, the build fails. Not perfect, but the problem is immediately detected, the relevant people are notified, and it can be resolved within minutes.
  • The Silver Bullet for me was the addition of the Ivy Dependency Management tool into our build process. Because the build system and IDE share the same dependency configuration, the project’s dependencies were now managed in a single place. Using the IvyDE plug-in, Eclipse simply worked, with no additional configuration. Using the externalized dependencies and basic Ant targets, the project could create a robust, change-resilient build system. To achieve this level of robustness, I took advantage of the Ivy post-resolve tasks. It was not until I discovered how easily they could isolate the build scripts from the actual dependencies that it all came together. The real beauty of this approach is that no files (dependencies) are directly referenced. The post-resolve tasks create variables which contain all of the appropriate files; the build script can then treat these variables generically, without concern. Nice and clean!
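To make the “single place” concrete, here is a hedged sketch of what such a shared ivy.xml might look like; the module and dependency names are placeholders, but both the Ant build and the IvyDE plug-in would read this same file:

```xml
<!-- Hypothetical ivy.xml, read by both Ant (via the Ivy tasks) and IvyDE. -->
<ivy-module version="2.0">
  <info organisation="beilers.com" module="sample-webapp" />

  <configurations>
    <conf name="runtime" description="libraries required to run the application" />
  </configurations>

  <dependencies>
    <!-- Placeholder dependencies; versions are managed here, and only here. -->
    <dependency org="commons-lang" name="commons-lang" rev="2.5" conf="runtime->default" />
    <dependency org="log4j" name="log4j" rev="1.2.16" conf="runtime->default" />
  </dependencies>
</ivy-module>
```

Upgrading a library becomes a one-line change here, and both Eclipse and the Ant build pick it up automatically.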

It was an unstated, fundamental requirement to have no “direct” dependency on Eclipse, such that we could all revert back to the wonderful world of Emacs tomorrow, should the need arise. I don’t think which IDE a project chooses to use is really that important. I am apparently an Eclipse snob, but all I really care about is having Emacs key-bindings! If a majority of the team works on the same platform, I do believe there is real value in this continuity; that just happens to be Eclipse for me!

I have several other reasons for checking in configuration files:

  • It quickly highlights wrongdoing! If a developer checks in something specific to their environment, they will break everyone on the team. My goal is complete project neutrality: check out on any machine, in any directory, and the project is guaranteed to build and deploy. Additionally, check the project out in Eclipse and it should build with no issues, within minutes.
  • Not all developers actually care about tools. Some developers simply want their environment set up for them. They have no desire to figure out how Eclipse formats code when you save a file or how to configure and run quality checks after each build; implementing business solutions is their primary concern.
  • Enhanced team productivity. If everyone’s world (environment) is the same, it is so much easier to spin up a new developer or help a teammate with a problem. Would you really want each developer to go through the discovery process of setting up the project? In the big picture, isn’t this really just wasted time?
  • Helping to ensure quality coding practices and standards. We also check in the Checkstyle, PMD, and Findbugs configurations and rule sets. Taking advantage of the sharable configuration files, both the Eclipse plug-ins and Ant tasks work from the same rule sets, ensuring complete consistency across the team, no matter where the rules are executed.
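As a sketch of that last point, an Ant target can point the Checkstyle task at the very same rules file the Eclipse Checkstyle plug-in is configured to use; the target name, rules-file path, and classpath reference below are hypothetical:

```xml
<!-- Hypothetical Ant target; config/checkstyle-rules.xml is the same file the
     Eclipse Checkstyle plug-in points at, so both report identical violations. -->
<target name="quality-checks" depends="compile">
  <taskdef resource="checkstyletask.properties" classpathref="classpath.CI" />
  <checkstyle config="config/checkstyle-rules.xml" failOnViolation="true">
    <fileset dir="src" includes="**/*.java" />
  </checkstyle>
</target>
```

Because the rules file is checked in with the project, a rule change made once is enforced everywhere: in the IDE, in the Ant build, and on the CI server.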

I appreciate all of the recent comments; thanks for taking the time to reply. They make me think about and reconsider the decisions I have made, giving me yet another opportunity to learn from my mistakes! As far as Eclipse configuration files are concerned, I strongly believe there is far more project value gained by including them than by requiring each developer to manage their own environment. One final note: I have no issue with developers wanting to manage their own world, more power to them! I would hope that these efforts would make the overall, shared environment a little better; after all, everyone should be contributing to all aspects of the project! The real point is that no one should be required to configure a project, unless they really want or need to.



Jul 11 2010

Unfriendly Developer Practices – Adding unnecessary environmental complexities

A couple of years ago, I began focusing on construction and deployment complexities. I had just joined a new team and began looking for the commonalities between their architectures and environments. Needless to say, they were all different. The most disturbing issue was the amount of effort required for a new developer to start working with an application. If there was any project documentation, it was probably out of date, resulting in hours or even days of trial and error to actually build and deploy the system. In this post, I’m talking about building and deploying J2EE applications. The following list represents the top developer-unfriendly practices I have observed.

Developer Unfriendly Practices
  1. Without first running a setup task in Ant, the project does not build in Eclipse.
  2. The Eclipse .project and .classpath files are not checked into version control system with the source.
  3. The project’s Ant scripts are generic and non-deterministic; they have to be edited for your individual environment before building the project.
  4. The project requires specific libraries or packages to be installed on your machine before it can be built.
  5. The Eclipse project is not configured as a WTP project.
  6. Minimal JUnit tests and no continuous integration.
  7. J2EE container dependencies built into the project.

After experiencing these challenges, I began a quest of environmental simplification. I understand there are numerous reasons why projects end up with these unfriendly characteristics, but leaving them unaddressed was just not in my character! I made it my personal mission to leave each project that I worked on in a little better shape than when I arrived. This is a never-ending activity; I hope that those who come after me will have a similar philosophy and continue my quest. It is amazing how easily and quickly an application can atrophy, eliminating all of the positive changes that had previously been applied. To keep things simple, I came up with the following three project requirements. I try to weave some aspect of them into the development process and architecture of each project that I work on.

Project Principles

  1. Self-Containment (No External Configuration)
  2. Environmental Awareness
  3. Change Resilience

I hope that most of these principles seem like common sense and are nothing new. Much like many of the XP principles, they are not revolutionary, just good reminders of often overlooked practices. I will try to elaborate on each principle in a future post; hopefully, in the near future!



Jun 06 2010

Ivy Dependency Management – Lessons Learned and Ant 1.8 Mapped Resources

Category: Continuous Integration, Software Development | Phil @ 1:59 pm

I have been using Ivy for a couple of years now. Strangely, of all of the developers that I have introduced Ivy to, I think I’m the only one really jazzed by this approach. Some actually question the value of the tool; change is a hard thing! I think it is hard to see the real value of dependency management, especially when you only work on a single application for an extended period of time. After all, once an application is wired up with the right libraries, what is the big deal? I tend to work on multiple projects and take a broader view of development issues. I seem to be the guy who creates the Ant scripts, upgrades the libraries, and manages the dependencies between all of these tools; hence I’m a huge fan of dependency management! I see no better way to document the external dependency requirements of an application.

From a corporate or even organizational perspective, Ivy can provide additional value beyond development; it can support both compliance and configuration management activities. The value of a single, centrally managed, browsable repository of reusable components should not be overlooked. I believe that a significant number of person-hours are lost each year on the simple task of component (open source and commercial) version management. Just think about the amount of time spent by a project to download new component versions (and their dependencies), update build scripts, and manage them in a version control system, creating tags and branches for items that never change! Now, multiply that effort by the number of applications your company manages. I have to imagine this amount of time is more than insignificant. OK, my Ivy sermon is over; on to something that might actually be valuable.

Once you have your repository structured, Ivy works pretty well. Converting an Ant-based application to use Ivy is pretty trivial and always seems to work perfectly. On the other hand, the IvyDE plug-in has always been a little bit of a pain, especially when you are working with a WTP project. After converting the first project to use Ivy, it was basically a cut-and-paste exercise for every other project. Unfortunately, there were a few side effects of using Ivy that I just finally got around to addressing. Not that these were huge issues; more a matter of cleanliness.

Use multiple Ivy configuration files rather than one single file.

We had always used a single Ivy configuration file per project; it contained the dependencies for all activities: compiling, execution, JUnit, Clover, FindBugs, PMD, Checkstyle, etc. This made the Ivy configuration a little difficult to comprehend, as it was not 100% obvious which libraries were actually required to simply build the application. Additionally, this made the Ant build files and file system rather messy; we ended up with hundreds of jar files in the Ivy retrieve directory, not knowing which tool or component depended on them. If the project happened to create a WAR file, we had to “hand pick” the files from the retrieve directory and provide a custom fileset of required files for the WEB-INF/lib directory. This was a continual maintenance issue, as someone would add a new dependency to Ivy and forget to update the custom path object in the Ant XML. To simplify and eliminate this problem, we decided to create two Ivy configuration files: one for the required components (ivy.xml), and one for the continuous integration / build environment (ivy.tools.xml). It was now much easier to visualize and manage the run-time components required to execute the application, independently from the components needed to build and/or test the application.
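As a hedged sketch of the split (the module names and versions below are placeholders), the two files might look like this:

```xml
<!-- ivy.xml: only what the application needs to compile and run. -->
<ivy-module version="2.0">
  <info organisation="beilers.com" module="sample-webapp" />
  <dependencies>
    <dependency org="commons-lang" name="commons-lang" rev="2.5" />
    <dependency org="log4j" name="log4j" rev="1.2.16" />
  </dependencies>
</ivy-module>

<!-- ivy.tools.xml (a second file): build/CI-only tools, kept out of the WAR. -->
<ivy-module version="2.0">
  <info organisation="beilers.com" module="sample-webapp-tools" />
  <dependencies>
    <dependency org="junit" name="junit" rev="4.8.2" />
    <dependency org="checkstyle" name="checkstyle" rev="5.3" />
  </dependencies>
</ivy-module>
```

Anything resolved from ivy.tools.xml can never accidentally leak into WEB-INF/lib, because the WAR contents are built only from the ivy.xml resolution.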

Use the Ivy resolve Ant task rather than the Ivy retrieve task; or better yet, use the Ivy cachepath task.

The simplest Ivy strategy is to dump all the components into a single directory using the ivy:retrieve Ant task. This approach gives direct access to the files and enables the creation of multiple filesets for specific uses. It is the easiest way to convert an existing project to Ivy: just re-point the build scripts to the new location. This approach aggravated some developers on the team, as we could end up with the same jar file in at least three different sub-directories on a machine: a copy in the Ivy cache, another copy in the Ivy retrieve directory, and another copy in the required location (WEB-INF/lib, for example). The IvyDE plug-in uses files directly from the Ivy cache; no extra copies of the files are created. I should have picked up on this approach sooner, as it seems much more elegant. There are several post-resolve tasks that can be used to implement this approach, namely the ivy:cachepath task. This task creates Ant path objects with direct references to the files’ locations within the Ivy cache. If you need a fileset object, you can utilize the ivy:cachefileset task.
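For completeness, ivy:cachefileset is used the same way as ivy:cachepath, producing a fileset id instead of a path id; the setid value below is a placeholder:

```xml
<!-- Resolves ivy.xml and exposes the cached jars as an Ant fileset,
     without copying anything out of the Ivy cache. -->
<ivy:cachefileset setid="fileset.CORE" conf="runtime" file="ivy.xml" />
```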

As you can see from the following example, it is easy to create multiple path objects from within your Ant build process. It is even possible to have extremely fine-grained control; you could create path objects for individual modules, one for each specific tool if you so desired. I have not used this strategy yet, but it does seem interesting.

<ivy:settings file="ivysettings.xml" />
<ivy:cachepath pathid="classpath.CORE" conf="runtime" resolveId="CORE" file="ivy.xml" />
<ivy:cachepath pathid="classpath.CI" conf="runtime" resolveId="CI" file="ivy.tools.xml" />
<ivy:cachepath pathid="classpath.COMMON" organisation="beilers.com" module="common" revision="1.0" inline="true" conf="debug" />

Once you have your path objects created, you can use them just like normal in other tasks; nothing special here. That is part of what makes Ivy so attractive from a build (Ant) perspective: Ivy does not require you to change the way you think about or typically use Ant.

<javac source="1.5" destdir="${build.dir}" debug="${javac.debug.option}" verbose="no">
     <compilerarg value="-Xlint:unchecked" />
     <src path="src" />
     <classpath>
           <path refid="classpath.COMMON" />
           <path refid="classpath.CORE" />
           <path refid="classpath.CI" />
      </classpath>
</javac>

I did run into one issue with the Ant <war> task when using the ivy:cachepath task. When the WAR file was created, the WEB-INF/lib directory did not get flattened; all of the dependent jar files were placed in sub-directories that mirrored the Ivy repository structure. Unfortunately, the <war> task’s <lib> section does not support the “flatten” option. (An Ant bug was actually filed in 2004 for this problem.) Luckily, the post gave me the appropriate Ant way of resolving the situation, using a feature new in Ant 1.8 called mapped resources. The XML seems a little convoluted-looking, but works perfectly. I don’t think I would have ever figured this out without Google!

By combining all of these ideas into the Ant build system, I can take advantage of the Ivy cache and eliminate all extraneous copies of jar files. The real value is provided by the multiple Ivy configuration files, making it completely unnecessary to know which jar files are included in each dependency. This approach makes the Ant build system significantly less error-prone and eliminates future maintenance concerning jar files.

<war destfile="${war.full.path}" webxml="WebContent/WEB-INF/web.xml" manifest="${manifest.path}">
    <fileset dir="WebContent" />
    <classes dir="${build.dir}" />

    <mappedresources>
      <restrict>
        <path refid="classpath.CORE"/>
        <type type="file"/>
      </restrict>
      <chainedmapper>
        <flattenmapper/>
        <globmapper from="*" to="WEB-INF/lib/*"/>
      </chainedmapper>
    </mappedresources>

    <zipfileset dir="src" prefix="WEB-INF/classes">
         <include name="**/resources/**/*.properties" />
         <include name="**/resources/**/*.xml" />
    </zipfileset>
</war>


Apr 20 2010

Making up your own methodology… WetAgile or W-agile

I was reading the Software Development Times last week and found an interesting article about development methodologies. I almost think Agile has turned into a “techno-fad”; a meaningless buzzword, right up there next to architect! Nobody wants to admit they are using the waterfall approach; they would be considered a technical dinosaur; just look at the history highlights below! Unfortunately, many projects seem to choose Agile for the wrong reason; there seems to be some kind of coolness, prestige, or promise of advancement associated with using Agile, rather than a true embrace of the methodology.

Can you believe…

  • The waterfall model was first documented in 1970 by Winston Royce.
  • The spiral model was defined by Barry Boehm in his 1986 article “A Spiral Model of Software Development and Enhancement”.
  • The Extreme Programming model was introduced by Kent Beck in 1999. I recently suggested following Kent here.
  • Interesting fact: Ken Schwaber, one of the SCRUM creators, stated that 75% of organizations using SCRUM do not get the benefits they had hoped for from this approach.

So, everyone claims to be “doing Agile”; I think reality tells a slightly different story. I recently observed a group define the allowable methodology options for new projects; they chose Waterfall, Iterative, and Agile. Waterfall and Agile are pretty definable, but Iterative seemed a little vague to me. After reading this article, I think I actually figured out what it was: WetAgile. Per the author, WetAgile is a term used to describe Waterfall shops that either aspire to be Agile or are in the transition process.
I actually found two versions of the document by the author, Steve Pieczko. The SD Times version seemed to be at a little higher level, while the Enterprise Management Quarterly version has a little more substance.

The real question is why teams implement so few aspects of Agile. The author suggests that we typically implement the pieces of Agile that we most understand, or that we believe the Waterfall teams can easily embrace. It seems like the XP techniques are much easier for the development team to implement, specifically because they control their own space; whereas embracing the complete Agile (SCRUM) management process is a little more involved. I believe the problem is that too many people have to be involved to make it work. Throwing a bunch of people in a room and having a daily stand-up meeting does not exactly imply you are doing “Agile”. I find it disappointing that the most valuable part of SCRUM (my opinion), the creation and management of the backlog, is rarely done. Probably because project management is still very “Waterfall” centric: you start with some requirements and end with a date, nice and simple. Additionally, it is not very easy to build a backlog when requirements, including defects and enhancements, are managed by processes that have nothing to do with the software development life cycle.

Here is the root of the problem: most projects (applications) are managed as a series of individual releases, where each release has a concise beginning and end. Each release is a discrete and isolated project. I don’t think applications are ever viewed as perpetual, ongoing, evolving entities. The subtle difference is that enhancements and defects are usually managed independently and are rarely viewed and prioritized together. The backlog, if it exists, is virtual. The business seems to “divine” their most relevant needs out of thin air, ask for some LOEs, and then define the next release based on what can be implemented by some magical date. That is a far cry from creating, evolving, and managing a complete list of features that ultimately should be built into an application, using an iterative release stream.

I think another significant problem is the perceived cost of creating a release. How great would it be to deliver new functionality to your customer every month? Unfortunately, with today’s audit and traceability requirements, the actual cost of creating a release can outweigh the value. Too many meetings, too many sign-offs, just too much overhead. Not to bash the testing community, but it seems that the effort required to test and certify a release is also cost prohibitive. There is no possible way that testing teams could manage monthly release cycles using their traditional approaches. Should testing actually be the limiting factor in the software development process? What is really driving up the cost? What is burning up all of the time? Something I will have to research! I would hope that testing could be accomplished just like development: as developers attempt to do test-driven development, testers should be following the exact same process. The continuous integration process should be running their suite, same as the developers’ suite. Would this not be sweet? The next question is, do we really need “User Acceptance Testing”? If the users are truly involved with the SDLC process and all testing is automated and quantifiable, is that not good enough? We seem to add multiple levels of testing, choosing an “accumulative testing strategy”, rather than a single, cohesive plan.

I found an interesting post on Agile Coaching, called “Agile or W-agile”. I think the blog is kind of unique: a series of drawings done on note cards and white boards… actually a pretty good way to get a message out!

Extra Credit Reading…
This article got me thinking: what is the next advancement in software development methodologies? It seems like eXtreme Programming and Agile have been around for many, many years, even if not widely adopted. I did a quick Google search and found an interesting post titled “After Agile, What’s Next?” There are some interesting questions and several good links.


Mar 30 2010

Unit Testing Not Required…

I assume that everyone performs some kind of unit testing… The real question is what unit testing means to you. For me, it is some form of repeatable, assertion-based approach that can be re-executed by other developers and a continuous integration process. Unit testing is just one piece in my software confidence puzzle; that sounds like a great topic for my next blog! Unfortunately, some developers think that generating some log messages to inspect or stepping through the code with a debugger accomplishes the very same thing. Depending on the competency of the developer, I believe this can be partially true; however, what about the next developer who needs to change the code?

Next, you need to place some kind of value on your unit test strategy. Is testing an investment, an expense, or a liability? I think that unit testing typically starts as some type of organizational mandate. It quickly turns into an expense and ultimately a liability. I have seen very few projects truly embrace and believe in the value of unit testing. It seems that more often than not, a true believer (or zealot) establishes the process and everything goes along smoothly. After a while, the “pressure of the release date” eliminates the need for creating new unit tests; or even worse, the existing unit tests are no longer maintained. They become brittle, broken, and ignored. As the number of failed unit tests grows, it seems very hard to justify the cost of fixing them. At this point, you have a liability problem; the existing failures get in the way of identifying and managing new failures. Who will really notice the one new failure when there are already fifty other failures? If ignoring unit test failures is allowed by the continuous integration process, the typical developer will have no incentive to fix them. We have now created an environment where the correctness of the unit test only matters at the point when the code is checked in. That statement might not even be true, as there is no guarantee or reason for the developer to ensure that their new unit test executed properly in the continuous integration environment. Chances are that after some period of time, that new test will decay and be added to the ever-growing, ignored list of broken unit tests.

Wow, I’m painting a pretty bleak picture of unit testing! To make things even more discouraging, I recently lost a debate on the merits of unit testing with one of my mentors. I think he is one of the smartest and most practical architects, developers, and people that I have ever worked with, yet he saw little to no value in unit testing. To this day, I still cannot completely follow his logic; surely there are external factors that could minimize the value of unit testing, such as team competency, cohesiveness, maturity, and the general environment, but I don’t believe unit testing can (or should) be completely eliminated.

Sad, but true…
On a previous project, one of my teammates said that he would not create any unit tests, unless the project manager added additional line items to the project plan, specifically for unit testing his code. I was amazed… Why was this not considered a standard development procedure? It is pretty easy to see why one could argue that unit testing is doomed.

Fortunately, I have worked on a few projects where we considered unit testing an investment and an asset. These projects mandated zero unit test failures and even required some level of code coverage. The teams never considered or required the creation of a unit test to be called out on the project plan; it was simply part of their job. More than once, these unit test suites gave us the confidence to make the right change (refactor), rather than applying the Band-Aid Software Methodology typically employed to address problems. Sounds like another good blog topic!

Final thoughts…
Here is another good read, Design to Test; old, but still valid today! There are so many good points in this article that they deserve to be reiterated again and again. If you are familiar with the Spring Framework, they should really resonate with you…

  • Test the interface, not the implementation
  • Composition over inheritance
  • Singleton avoidance

Here is one more blog, “Tips for Testing”, that reinforces the increased ROI of unit testing over time. You should at least skim his tips; he seems to have an interesting perspective…



Sep 09 2009

Hudson Task Scanner Plug-in

Category: Continuous Integration, Software Development | Phil @ 8:21 pm

Ok, that is just about enough Hudson for this month, unless I get fired up about Selenium and try out the Hudson integration! I did find one more useful plug-in this week, the Task Scanner. Because I tend to check in code early and often, my code is typically in an incomplete state; you will tend to see a lot of TODO or FIXME comments in the code. Managing these tasks inside Eclipse is easy enough, but I thought it might be interesting to also monitor them on Hudson’s dashboard.

The plug-in is very easy to configure and allows you to specify thresholds on the number of open tasks. Like most other plug-ins, it generates configurable trend graphs and provides a convenient task browsing screen.
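To make the idea concrete, here is a rough, hypothetical sketch of what the plug-in does conceptually: scan source text for open-task markers and compare the count against a configurable threshold. This is not the plug-in's actual code, just an illustration of the concept.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the Task Scanner concept: count TODO/FIXME
// markers in source text and compare against a threshold.
public class TaskScan {
    private static final Pattern MARKER = Pattern.compile("\\b(TODO|FIXME)\\b");

    static int countTasks(String source) {
        Matcher m = MARKER.matcher(source);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String code = "// TODO: handle nulls\nint x = 1; // FIXME later\n";
        int open = countTasks(code);
        int threshold = 5; // stands in for the plug-in's configurable threshold
        System.out.println(open + " open tasks, " + (open > threshold ? "UNSTABLE" : "OK"));
        // prints "2 open tasks, OK"
    }
}
```

The real plug-in additionally lets you define your own marker words and separate warning/failure thresholds, and it walks the workspace rather than a single string.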

Here is an interesting blog post that shows how you could use the plug-in to determine how messy your SVN merges will be… kind of cool.


