Dec 29 2011

Good Bye Code Reviews, Pair Programming is the Silver Bullet….

Category: Continuous Integration, Software Development | Phil @ 12:01 am
I actually wrote this post last March, but never got a chance to publish it. That pretty much sums up my year!!! I don’t think I was ever really happy with the way it flowed or the exact point I wanted to make. However, after working with some different teams this past year, I’m even more convinced that code reviews and standards should exist and be enforced. Good topic for another post!

I might have said this in a previous post, but I believe that many developers use Agile as an excuse to avoid following institutionalized processes. They prefer to make up their own extremely lightweight methodology, doing only the activities they enjoy or believe they have time for. By adopting an Agile process, many claim they can eliminate the need for analysis, design, and even code reviews! Personally, I don’t believe this was the intention of the Agile methodology. However, given the substantial number of quality gates and sign-offs that organizations institute to ensure the elimination of mistakes, it is no wonder that Agile becomes the easiest way to subvert corporate mandates and still deliver software. I completely understand the theory and goal of these mandates, but it does seem that by the time they are institutionalized, the overhead of the process is more complicated than the actual business problem to be solved! And before anyone corrects my association of Agile and Pair-programming, I do realize that Pair-programming is a tenet of Extreme Programming, not Agile. Unfortunately, many people seem to commingle the Extreme Programming principles with Agile; XP tends to generate negative connotations in some groups, while Agile is completely accepted.

A while back, a coworker sent me a question, wondering if I thought that Pair-programming eliminated the need for code reviews. I have actually heard this claim from several Agile teams in the past, using pair programming as a way to avoid the code review process. There are several reasons that developers are quick to forgo code reviews; I believe the two most prevalent are:
  1. Code reviews are done as an SDLC requirement; not because the development team actually desires to conduct them or finds value in them.
  2. The vast majority of code reviews are completely ineffective; there is no preparation, no structured process, and no follow through on findings.

There is actually a lot of Internet discussion on this subject; I have included a few links at the bottom of this post which I found interesting. None of them changed my personal view, which is that code reviews cannot (should not) be eliminated from the process. One point I would concede is that pair-programming could make the code review process easier, since presumably fewer issues will be discovered during the review.

I believe that code reviews should be a “continuous process” during the entire development cycle, whereas most people consider the code review to be a process point. Some believe that the review is something that happens at the end of development, after the code is completed and after it has been tested; typically when it is too late in the process to make corrections. I feel that code reviews are a never-ending cycle of four basic activities, all contributing to the overall quality of the code base.

  1. The easiest and cheapest process is automation. There are two basic solutions, the IDE and a Continuous Integration server. Your IDE can be one of the most effective tools: Eclipse can be configured to share coding standards and validations across the development team, eliminating numerous bad practices at their point of origin. Tools such as Checkstyle, PMD, and FindBugs can easily be incorporated into your IDE. These same validations (rules) can also be utilized by your Continuous Integration server, providing the final conformance checks; a minimal sketch of this wiring follows this list. Automation eliminates almost all of the soft issues from the review process and even promotes better coding practices.
  2. Pair-programming is a very valuable technique, and I enjoy the collaboration and education that happens during the process. However, I’m not sure that pair-programming works on all teams; there needs to be a real sense of openness and cooperation for the pairing to be effective. And therein lies the real issue: if the pair is very cohesive in their approach, you have the problem of “group think”. If everyone is thinking the same way, it will be much harder to discover issues outside of their common perspective. I believe that code reviews can be more effective when conducted by people who are not wed to the design and/or implementation; an external perspective is always valuable to the process. Another small problem is traceability: without the review, there are no review artifacts or documentation to support the process. Next you have the practicality of the schedule. Assuming that you could eliminate the code review process by requiring pair-programming, how do you mandate or assure that it happens? Was all of the code actually written together by the pair? I find it really hard to believe that 100% of the code will be written by the pair. Given all of the demands placed on people (meetings, vacations, kids, etc.), there would not be many shared hours left for the pair to write code. It does not seem practical to mandate that no code be written outside of the pairing. So do you only review the code that was not written by the pair? Sounds complicated! All we would accomplish is removing a “learning opportunity” from everyone on the team, including the developer who wrote the code!
  3. I’m not sure that “Continuous Review” is an officially documented concept, but it is a technique that I have used successfully over the years. Continuous Review is fairly simple, but it is potentially time-consuming and highly prone to apathy. The process is extremely trivial: before accepting changes from your teammates, simply review the change set in your IDE. It can be quickly accomplished using the Team view within Eclipse; you can walk through each commit and its associated files before updating your work area. By spending this small amount of time throughout the development process (each day), you can help ensure that everyone is on the same page, even ensuring that the code is in sync with the design. You get a very good sense of the overall health and progress of the project just by watching the commit stream. Unfortunately, not many developers seem to adopt this practice, as it takes a serious commitment and can also cause tension when the team is not as cohesive as it should be (some developers might feel like they are being monitored).
  4. The final and most common piece of the review process is the traditional “code review”. It has been my experience that this type of review is generally the most ineffective. Even though years of published guidelines exist that could actually make the process effective, they are typically tossed out due to lack of time and management guidance. I have blogged on this subject in the past. These reviews turn out to be more of a code presentation than a review; unfortunately, this will be the first time that many of the participants have actually seen the code! No time is allocated for participant preparation. Worse yet, no time is ever allocated for the correction of discovered issues. The reviews are always performed late in the development cycle or during the testing cycle, simply to satisfy an SDLC deliverable. I believe many developers have been conditioned into disrespecting code reviews, simply because they become just another meeting which produces very little value.
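
To make the first point concrete, here is a minimal sketch of how the same Checkstyle rules file used by the Eclipse plug-in could be run by an Ant target on the Continuous Integration server. The jar version, directory layout, and rules file name are placeholders, not a specific project’s setup.

<!-- Register the Checkstyle Ant task; jar and rules file locations are illustrative -->
<taskdef resource="checkstyletask.properties" classpath="lib/checkstyle-5.3-all.jar" />

<target name="checkstyle" description="Run the shared Checkstyle rules and fail the build on violations">
  <mkdir dir="build/reports" />
  <checkstyle config="config/checkstyle-rules.xml" failOnViolation="true">
    <fileset dir="src" includes="**/*.java" />
    <!-- XML output can be picked up by the CI server's Checkstyle/violations plug-in -->
    <formatter type="xml" toFile="build/reports/checkstyle-result.xml" />
  </checkstyle>
</target>

The same checkstyle-rules.xml file would be the one the Eclipse Checkstyle plug-in points at, so the IDE and the CI server enforce identical rules.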

I might sound like I’m against code reviews; far from it. I am against the traditional code review process that focuses on developer-selected code, typically presented during a meeting. I recently reviewed the Crucible tool from Atlassian. The tool is far from perfect, but it is ideal for managing asynchronous, distributed code reviews. Automated tools provide a variety of methods for selecting the code to be included in the review process, such as querying by user id or change ticket number. The selected changes are then assigned to a collection of developers and the process begins. Each developer is notified via email that they have been assigned a collection of code to review. When convenient, the reviewer logs into the tool and is presented the files to review. Crucible actually tracks your progress through each file, giving the review coordinator feedback that progress is underway. The developer is presented the change sets and can add comments directly into the code. Each comment can be categorized as an issue or a suggestion. I think Crucible is a great tool, but it provides absolutely no reporting capabilities. I have talked with Atlassian and they seem to have absolutely no interest in generating meaningful reports. Even the simplest report, which would show all of the issues found during the review, is not possible to produce. I’m a huge Atlassian fan, but this lack of functionality completely boggles my brain. In today’s evidence-driven SDLC world, the documentation from the code review is actually more important than the code review itself!

Obviously, I don’t think code reviews should ever be eliminated from the process. I also believe it is possible to achieve real value from them, with very little overhead. Hopefully someday, I will actually be able to prove it!



Mar 29 2011

Tired of reading blogs? Try Listening to some podcasts…

Category: Blogging, Software Development | Phil @ 12:01 am

During my time off, I decided to start listening to some technical podcasts. I assume this makes me a complete geek, but since I increased my running distance to three miles per day, it turned out to be a very good use of my time. I believe podcasts can be an excellent source of information and even entertainment. The best strategy is to listen to different topics, topics that you don’t encounter every day. A person can only read so many blogs! If you are like me, you probably only read blogs related to your personal interests. That seems completely normal, but how do you ever get exposed to different technologies or techniques? Unless you change companies or completely refresh your team every 6 to 12 months, your learning environment can become very stale. Listening to different topics is a great way to re-energize your brain and kick-start your creative thinking; at least it does that for me!

Several months ago, I discovered Scott Hanselman. He happens to work for Microsoft, but don’t hold that against him. He records several podcasts each week or so; I subscribe to two of them and they are both excellent. Scott focuses on a variety of software development issues and technologies. Hanselminutes is more technology oriented, while This Developer’s Life covers a variety of personal issues, from motivation and drive to the Egyptian Revolution. I highly recommend going back and listening to all of the old This Developer’s Life episodes; I can’t tell you how insightful and reflective they are. As an added bonus, you get to hear some rather interesting musical choices! The people that he interviews are so interesting and have some really funny commentary. You will hear stories from different types of developers, both famous and infamous. You will learn how they navigated through their careers, sometimes successfully and other times, not so well. All I can say is they are definitely worth the time! I hope you enjoy them as much as I did.

Interesting Podcast Topics from Hanselminutes…



Mar 15 2011

Upgraded Web Browsing…. Firefox 4 and Chrome 10…

Category: Blogging, Software Development | Phil @ 11:49 am

This might seem a little off topic, but the web browser could be considered one of your primary development tools. Just think how much time you actually spend in a browser: researching, reading, testing, debugging, or simply wasting time! After playing with a few Chrome Extensions, I realized just how much more efficient I could be. I liked the way Chrome integrates information and makes it useful, which is essentially Google’s mission.

I had been a loyal Firefox user for many years. Firefox provided a nice cross-platform experience between Windows and Linux, and was highly extensible. This extensibility was one of the early benefits of Firefox: the ability to add new behavior to the browser through “Add-ons”. Amazingly, I was never a big “Add-on” user; I used a few of them, but did not take advantage of or even explore what they could do for me. I used the Delicious Add-on for my bookmarking needs, but recently moved to the Diigo Add-on. I also use Firefox Sync for browser synchronization, including my tabs, bookmarks, history, etc. For posting to my blog, I’m a huge fan of the ScribeFire Add-on. If you happen to do web page development, you have to try the Firebug Add-on. That is basically the extent of my Add-on usage; I did not ask too much from my browser!

Before jumping ship, I upgraded all of my machines to the Firefox 4 Beta. There were numerous technical improvements, but I was primarily focused on pure usability and how the browser could help me be more efficient. Start-up time was one of my biggest Firefox complaints; the browser seemed to have a tendency to bog down over time. The new version seems to have gone through a pretty dramatic user interface overhaul and addressed multiple performance issues, including start-up.

I was pretty happy with the UI changes, preferring the new, albeit controversial, tab location. The tabs are now located above the navigation tool-bar; there was apparently quite a bit of debate over this little change! I prefer having two control rows at the top of the browser window: one row for tabs and another row for navigation, apps, and widgets. I have seen a lot of content about these “web apps”, but it seems a little like pure marketing to me! The Firefox implementation, App Tabs, appears to be little more than a space-saving shortcut; however, I can see them providing value for highly used web sites.

I had installed Chrome a couple of years ago, but was not too excited by it; I saw no compelling reason to change browsers. Wikipedia has an interesting graph of web browser usage; I was really amazed to see how the Chrome market share has taken off in the last twelve months. Even on my own blog, Chrome accounts for almost 25% of the traffic. I installed the newest version of Chrome last week and was immediately hooked. Unfortunately, I have become a true Google convert. It started with the purchase of my Android phone and there was no looking back. I am not saying that the following activities can or cannot be done in Firefox, I’m simply saying that I like everything better in Chrome!

A simple but extremely cool feature is the “New Tab” behavior. It obviously opens a new tab, but its contents are quite different than you would expect. It is basically divided into three sections: Apps, Most visited, and Recently closed. Under the Apps section, you will see the Web Store icon; does everyone need to have their own app store these days? Anyway, the Web Store is a very well done site that makes searching for and installing new behavior extremely easy, using either applications or extensions.

Applications seem like fancy bookmarks, but from my reading, they can be (are) a lot more sophisticated. I looked at the SlideRocket app and it was genuinely cool… however, you can also run the app in Firefox! The app concept seems analogous to a rich user interface experience, one that performs like a real desktop application, rather than a collection of old-fashioned HTML pages.

My favorite feature of Chrome has to be the Extensions. Extensions add additional behavior to the browser itself. You can see from the picture to the right that I have added quite a few of them! They integrate into the Navigation Bar and look very nice, consuming minimal space while providing significant functionality. They look similar to the icons found on cell phones; many of the extensions have little indicators that track the number of items you need to address. You can find extensions for all of the standard Google tools: Gmail, Reader, Calendar, and even eBay. The Calendar extension is extremely helpful; it will tell you how long until your next appointment, and when you mouse over it, it shows you the event details. The WP Stats extension is another personal favorite; it tells me how many people have looked at my site! Clicking on some icons will navigate you to the corresponding website, much like a shortcut. Other icons have specific behavior, such as showing you detailed web site access statistics or an enhanced view of your search history. My favorite blogging tool, ScribeFire, is also available in Chrome, but the spell checker is not working! I like the placement and interaction of the Chrome extensions much better than the traditional Firefox “Add-on” view, which is typically at the bottom of the browser window; Chrome makes the extensions feel more integrated with the browser and part of the actual user experience.

My final Chrome praise is the synchronization with my Google account. It is pretty cool to watch an extension get automatically installed on my Windows machine simply by installing it on my Linux machine. No restart or refresh required, it just automatically shows up! I did notice one small oddity: I still had to configure the extension on the Windows machine. This seems rather strange, maybe it is a bug… I assumed that Chrome would save each extension’s settings and synchronize them too. Even with this little shortcoming, there is no going back to Firefox for me; I hope you give it a try too!



Mar 03 2011

TDD is too hard…

Category: Software Development, Testing | Phil @ 2:11 pm
Earlier this year, I participated in a meeting to recommend testing best practices. Being fairly passionate about TDD, and working in an environment where unit testing is fundamentally an afterthought, I assumed the group would certainly recommend TDD as a best practice. Amazingly, TDD failed to get the official endorsement. Many of the committee members stated that TDD was too hard and required too much up-front work. Too hard? Really? I was very surprised by their rationale, especially since many of them were seasoned developers. The image to the right has nothing to do with software development, but I love the message! I thought there was a rather subtle correlation to TDD: thinking first can prevent problems. “Think First” is probably a little misleading; surely, everyone claims to think before embarking on a task. For me, the real value behind TDD is the thought process; I like to connect thinking with the design process, not the coding process.
I had the opportunity to participate in an SD Times webinar on Wednesday called “Leaders of Agile: Practical Agile with Test-Driven Development” by Kent Beck and Mark Seemann. I took some notes on points I thought were relevant to truly adopting TDD.

 

Here are the concepts from the webinar, along with my observations:
Concept: Write a little test, write a little code, iterate. Don’t spend hours writing code; TDD tells you where you stand.
Observation: I have worked with a lot of developers who literally spent hours writing code without ever executing it. Once they feel like they are done coding, they start to think about testing. Many times, this testing is done in the same manner as their development, which I will describe as “without a plan”. Some even avoid structured, assertion-based testing frameworks; they simply spin up the container and execute the code. This approach will certainly work, but would you really say it is the most effective approach? You end up with a huge pile of un-validated, un-executed code. Unless you are perfect and never make mistakes, you have no clue how many bugs are buried in the code. Even worse, if you discover a design problem or a more elegant approach, how much code will you have to change or toss away? This is not to say that refactoring or eliminating code is bad. The point is, this effort could have been minimized or even prevented by thinking first. Another benefit of starting from the test perspective is that you always know where you stand in the process; the test works or it does not. If the test does not work, you are probably not done.
Concept: TDD allows you to defer design decisions. Refactoring and retrofitting; working test first, designing as you go.
Observation: If you think about TDD from an iterative perspective, you are basically working from the outside in. You can almost compare it to the Object Oriented principle of encapsulation, hiding or postponing design decisions. Unit tests should be focused on the behavior of the class, not the actual implementation. Because the unit test is focused on behavior, you have the opportunity to defer some of the design/implementation decisions to a later iteration. This later iteration might only be a couple of hours later, but by thinking first about the desired behavior and interaction, you are not required to commit to an actual design or implementation. Obviously, refactoring can be a big part of TDD and is hopefully not considered a deterrent. Refactoring does have a cost; I believe this cost is justified and minimized by the confidence provided by a good unit test suite and the long-term supportability of an elegant design.
Concept: TDD = API design.
Observation: For me, TDD is more about the design process than the actual unit test. TDD forces me to concentrate on how the class will be utilized, enabling me to focus on the requirements of the class and to implement no more functionality than is actually necessary. Implementing the class first, you are basically guessing at the API, only imagining how the class could be used, adding and subtracting methods as you go based on how you feel the class might be invoked. I have to believe that this approach will generally not result in the cleanest API. Focusing on the external interaction and dependencies provides a simpler and more exact view of the actual implementation requirements.
Concept: Tests are first-class citizens.
Observation: This one is very near and dear to my heart. I can’t tell you how many times developers have pushed back on the enforcement of coding standards and quality as it pertains to the unit test suite. Why would anyone want to allow or encourage bad coding practices in the unit test code that are not allowed in the real code? Do developers have a quality switch that can be turned on and off, writing good, clean code over here and sloppy code over there? It should be all about consistency and supportability. Unit tests need to be as understandable and maintainable as the mainline code; they will be maintained as long as the mainline code and, over time, could provide more value (comprehension) than the mainline code.
Concept: Testable = loosely coupled.
Observation: Testability is another topic I find very interesting. I just love the Testability Explorer tool, but was never able to get others excited about this concept. If not designed effectively, unit tests have the potential to become a roadblock to change. Unfortunately, the same point can be made about the actual code; there are certain designs and implementations that can impede future change. The goal is to use patterns and tools that support change, such as dependency injection. Some people think that TDD can encourage coding without thinking; I have the complete opposite view: thinking first, to establish the design. It is my opinion that TDD is just one small tool that helps flush out designs, doing so without significant overhead while simultaneously building a safety net.

 

I was not sure how these “virtual conferences” actually worked, but this one was very well organized and seemed to go off perfectly. I was very impressed with both of the speakers and recommend subscribing to their blogs. They each made numerous points, but I wanted to share a point from each of them that I found encouraging.  I will probably paraphrase them poorly, but hopefully you get the point!

 

Speaker Thoughts…
Kent Beck – “Everything is hard. You need to be eager and willing to learn.” I thought it was refreshing to hear the words, “eager and willing”.  Unfortunately, we are not always eager and willing! You can’t force anyone to learn or change. There has to be a willingness; we need to be open-minded to absorb and process new thoughts. The benefits are not always immediate and may cause some pain, but in the end, learning is what makes us better.
Mark Seemann – “I tend to do a lot of thinking.” I thought this was great! I never hear people say they do a lot of thinking. Many of us tend to jump into the fire, probably because we are not given sufficient time to evaluate the situation. My challenge to you: demand the time, take the time. Think First!


Mar 01 2011

Build Evolution: Ant to Maven – A simple exercise…

I typically end up taking the role of “build guy” and I have no idea why! It is probably a control thing; I like to be able to fix problems and make improvements. I have been an Ant-man for many years, and was proficient in the “Art of Make” before that. I never gave Maven a chance, as it always seemed too complicated for adoption by corporate projects. Ant can be very simple and quick; some might even throw the word dirty in there! Unfortunately, I have seen (hopefully not created!) some of the most horrific Ant build systems imaginable. Many times, it is a developer’s “first time effort” that becomes the project’s permanent build system. Because I had the opportunity to work on many different Ant-based projects, I gained experience by using and evaluating many different approaches. I believe the real problem with Ant is not necessarily Ant itself, but the lack of standards and structure it provides. Ant allows each developer (or project) to invent its own Ant Philosophy. Once you are familiar with the Ant “programming style”, you can script just about anything, any way you want, manipulating files located in any directory structure, anywhere on the network. Ant is a very powerful and flexible tool. Unfortunately, the more inventive the Ant Philosophy is, the more brittle and complicated the scripts can become. Ant XML also seems to be an excellent “cut and paste” candidate. New team members, with little time or understanding of Ant or the specific philosophy implemented, find duplicating targets the safest method to implement change. Another shortcoming of Ant is dependency management. This problem is easily solved, and I have blogged about it numerous times in the past. I believe that the use of a dependency management tool, such as Ivy, is an evolutionary step taken by “build guys”. Sadly, many developers don’t seem to be interested in dependency management. They don’t like the dependency on the dependency management tool and would rather just manage the jars within the project. For me, the value of a central repository and structurally documented, versioned dependencies cannot be overstated.
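
For anyone unfamiliar with Ivy, the dependency descriptor is just a small XML file checked in next to the build script. The module names and revisions below are purely illustrative, not from a specific project.

<!-- ivy.xml: the project's dependencies, documented and versioned in one place -->
<ivy-module version="2.0">
  <info organisation="com.example" module="my-web-app" />
  <dependencies>
    <dependency org="org.springframework" name="spring-core" rev="3.0.5.RELEASE" />
    <dependency org="junit" name="junit" rev="4.8.2" />
  </dependencies>
</ivy-module>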

I have probably described many of the same experiences you have encountered with Ant-based projects; they can be even more exaggerated when you start managing multiple projects. One possible evolution path is to define a “standard” directory structure and develop a collection of “shared” Ant scripts that work with that standard directory structure. This allows each individual project’s build XML to be little more than a set of properties, as the sketch below illustrates. This is exactly where we ended up, utilizing an Ivy-like externalized repository for build scriptlets and Ant’s URL import capability. Hold on, a standard directory structure with reusable build components? Almost sounds like some other tool…
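
Here is a minimal sketch of what such a project-level build file can shrink to. The property names, directory layout, and shared-script location are assumptions for illustration, not the exact files we used.

<!-- build.xml: project-specific properties only; all targets come from the shared scriptlets -->
<project name="my-web-app" default="dist" basedir=".">
  <property name="src.dir" value="src" />
  <property name="web.dir" value="WebContent" />
  <property name="build.dir" value="build" />

  <!-- shared targets (compile, test, dist, checkstyle, ...) maintained outside the project -->
  <import file="shared/ant.common.xml" />
</project>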

The Experiment…
Being an Eclipse guy, I was curious how to combine an Eclipse Web Tools Platform (WTP) project with a Maven project. My ultimate goal was to set up a Jenkins Continuous Integration server that monitored a Maven-based project on GitHub, while developing the code within Eclipse using a Dynamic Web Project facet. Historically, my projects were managed by Hudson using Free-style jobs; they were always Ant-based and extracted from a Subversion repository. I had worked on quite a few web applications using this pattern, so it seemed like a pretty good starting point! I had started a toy project to manage encrypted data a couple of years ago and never put it under version control; it was a perfect candidate for this experiment.

Changing a project’s build tool has a rather large impact and needs to be considered from multiple perspectives. I have approached the problem by breaking it down into five (5) different spaces. This might not be the optimal or preferred approach, but it does show the high-level differences and addresses the major build challenges. Surely, my next project will be done completely differently, but you have to start learning somewhere!

Here are the changes, broken down by perspective:
Dependencies: To end up with an application that compiles in Eclipse, this change and the next have to be done in parallel. The easiest approach is to create a POM and focus on the dependencies section. If you are already using Ivy to manage your dependencies, this will not be that big of a change. Using a public Maven repository like MVN Repository to locate dependencies is very simple and saves a ton of time. The easiest way to create a POM is to let Eclipse make it for you; you will need to install the Maven plug-in first. Once you have your project loaded into Eclipse, simply “Enable Dependency Management” in your project’s properties. You can also add dependencies directly from the plug-in; I’m a little old school and prefer to edit the POM manually, but either approach works fine.
Local Development: I never liked the required Maven directory structure; I must have subconsciously adopted the Eclipse project structure as my personal standard. Converting the project’s directory structure was trivial, and I soon realized it was not that big a deal after all; I always separate my source and test files anyway. The main difference is that everything lives under the source root, including the WebContent directory. I’m not sure what the “preferred” process is for loading a Maven project into Eclipse, probably using the Maven Eclipse plug-in, which will create the appropriate Eclipse property files. Since I was doing a conversion, I don’t think this plug-in would have been too helpful. Because I’m fairly Eclipse savvy, I simply wired everything up myself. Having the Eclipse Maven plug-in will enable your dependencies to be automatically added to the build path. With the addition of the Eclipse “Deployment Assembly” options dialog, it is extremely easy to manually configure your project. You need to have a Faceted Dynamic Web Module for the “Deployment Assembly” option to be visible, but that should be a fairly simple property change as well. At this point, you should have a fully functional, locally deployable web application.
The Build: Now we can ignore Eclipse and focus on building our WAR file using Maven. Even with the minimalist POM shown to the left (plus your dependencies section), it is possible to compile the code and create a basic WAR file. Use mvn compile to ensure that your code compiles. Using the Maven package goal, the source code is compiled, the unit tests are compiled and executed, and the WAR file is built. All of this functionality, without writing one line of XML! One of the more time-consuming parts of an Ant-based build is integrating all of the “extras” typically associated with a project, making them available to the continuous integration server. The extras include: unit test execution, code coverage metrics, and quality verification tools such as Checkstyle, PMD, and FindBugs. These tools are all typically easy to set up, but every project implements them slightly differently and never puts the results in a standard place! The general process for adding new behavior (tools) to a build appears to be the same for most tools. You simply add the plug-in to the POM and configure it to fire the appropriate goals at the appropriate time in the Maven lifecycle. Ant does not have this lifecycle concept, but it seems like a very elegant way to add behavior to the build. In the following example, I added the Checkstyle tool to the POM. The <executions> section controls what and when the plug-in will be executed. In this example, the check goal will be executed during the compile phase of the build process. Simply executing the compile goal will cause Checkstyle to be invoked as well. This seems like a very clean integration technique, one that does not cause refactoring ripples.
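
A minimal sketch of that plug-in section follows; the plug-in version and the rules-file location are placeholders.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <!-- the same rules file shared with the Eclipse Checkstyle plug-in -->
    <configLocation>config/checkstyle-rules.xml</configLocation>
  </configuration>
  <executions>
    <execution>
      <!-- run the check goal whenever the compile phase runs -->
      <phase>compile</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>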

The Cobertura integration is another very good example. I have been a fan of Clover code coverage for many years. Since losing access to Clover, I needed to look for an open-source alternative, and had never tried EMMA or Cobertura. They both seemed like capable tools, but I had more success integrating Cobertura with Jenkins. I’m highlighting this point because doing code coverage with Ant and Clover can sometimes be a little tricky and usually messy. The Cobertura plug-in takes complete responsibility for instrumenting the code, executing the unit tests, and generating the report; it is completely non-intrusive (a sketch follows).
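
Wiring it in can be as small as the snippet below; the version number is illustrative, and the XML report format is what the Jenkins Cobertura plug-in consumes.

<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
      <configuration>
        <formats>
          <!-- xml for the CI server, html for humans -->
          <format>xml</format>
          <format>html</format>
        </formats>
      </configuration>
    </plugin>
  </plugins>
</reporting>

Running mvn cobertura:cobertura instruments the code, runs the tests, and drops the reports under the target directory.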

Continuous Integration: Most development teams also want their project monitored by a Continuous Integration (CI) process. Modern CI tools such as Hudson/Jenkins provide excellent dashboards for reporting a variety of quality metrics. As I previously stated, it is rather time consuming to develop and test the Ant XML required to generate and publish these metrics; combine that with configuring each CI server job to capture them and you have added a fair amount of overhead. I knew there was support for Maven-based projects within Hudson/Jenkins, but never took the time to understand why it would be beneficial. The main benefit is right there in the description, had I bothered to read it! Configuring a Maven-based job is little more than clicking a few check boxes. There is no need to configure the metrics in Jenkins; using the information provided by Maven, it is basically automatic. This is one interesting aspect of the Hudson-Jenkins fork. Sonatype, the creator of the Nexus Maven repository manager and the Eclipse Maven plug-in, has chosen the Hudson side of the battle. I wonder what this means for Maven support on the Jenkins side. Obviously, it will not go away, but it might end up being a Hudson advantage in the long run. I still believe that the Jenkins community will quickly outpace the Hudson community.
Deployment: I’m not going into much detail on this point, other than to say that I would like to see the deployment process completely separated from the build process, maybe even owned by two different people or organizations. I have seen too many projects combine building and deploying into one system, creating artificial dependencies and time-consuming, unreliable, unmaintainable deployment processes. I have also experienced complete deployment overkill, causing simple deployments to take over thirty (30) minutes, with no real guarantee of success. Hopefully teams (development and operations) will strive for minimal deployment overhead, a process that provides just enough security and control to enable continuous, rapid deployments.

I have barely touched upon the real power of Maven and know that some people are going to say my requirements are too simple to be valid. I’m sure there are projects far more complicated than mine, but I do feel that this project is highly representative of the 80-20 rule. If developers can show a little self-control, conform, and do what is required versus what would be really cool, simple Maven POMs should satisfy a large number of corporate projects with very little overhead. That is the goal, correct? I can even see using Maven for my toy projects; why do I want to waste time writing and maintaining a bunch of useless build scripts? I never realized that you could ease into Maven; I think that is the real point of this post! And, it is not actually a huge investment.

When I was Googling for this topic, I found many interesting articles. I truly believe the biggest thing preventing teams from evolving and adopting new approaches is simply fear, fear of change and the unknown. I still think Maven is a rather large “pill to swallow”, but I have now experienced enough value to continue investing time in this technology. I took the typical developer approach and dove into Maven with little preparation or understanding… I would not suggest this approach, unless you too have lots of time to waste! If you are unfamiliar with Maven, I hope the following collection of articles will provide a quick overview of the basic concepts. I realize that some of them are a little old, but you should start with the Maven basics, and those really have not changed.

An Introduction to Maven2
Should you move to Maven2
Maven vs Ant: Stop the Battle
How to Migrate from Ant to Maven: Project Structure
Top Ten Reasons to Move to Maven 3
Maven over Ant + Ivy
Automated Deployment with Maven – Going the whole nine yards



Feb 04 2011

Interview Question – Name Five reasons Java is not Object Oriented?

Category: Java, Software Development | Phil @ 12:53 pm

I thought it would be a really good idea to share interesting interview questions, especially the ones that caught me off guard!

I found the title question very interesting… I’m not sure it was intended as an open-ended question, as I don’t think there is a right or wrong answer. It really depends on your perspective. Are you looking at it from a purely technical perspective or a logical (modeling/solution) one? Even though I might be considered very technical, I prefer to stay out of the really low-level stuff. Sure, it is fun to get down to the bare metal once in a while, but I get more enjoyment out of the thought process. Claiming to be a very Object Oriented person and stumbling on this question really bothered me… I came home and did some Googling, but never really found answers that lived up to my expectations. Next, I sent an email to one of my pals who just loves the nuts and bolts… In his typically witty way, he rattled off: int, float, boolean, double, long, and char. A completely valid answer! He then gave me six other bullets, which I have merged into the table below and tried to expand upon.

Here are the reasons, along with my explanations:
Primitives: The interviewer led me with this one, and it is actually pretty obvious when you think about it! This is really not disputable; primitives are not truly objects. I think for historical reasons, namely performance and ease of use, developers were pushed toward primitives. With modern JVMs and Java 1.5 autoboxing, I feel this flaw is slightly minimized. However, from a purist OO perspective this is a valid reason.
Constructors: The interviewer also talked about the limitation that a constructor can only return an instance of its own class, which has led to the creation of factory patterns and DI frameworks. Having been a Spring Framework fanatic for so many years, I have probably developed a different style of programming which has shielded me from this issue. With some of the more dynamic OO languages, you can apparently return a different type of instance from the constructor. I need to study this some more, but I did find an interesting Ruby thread. Ruby is definitely on my learning list, so I have to stop blogging!
First Class Objects: This reason takes us to pure object orientation. I’m going to gloss over this a little and let you read more on your own, as I did today; try this simple search. One of the more obvious distinctions would probably be the metadata / class model provided by Java.
Statics: This is actually one of my pet peeves. I am no fan of static methods. For me it is simple: statics equal functions, which equal non-object-oriented. I have always bought into the DI propaganda which states that static objects and methods are untestable, and I fundamentally agree with that. However, I did recently discover that there are several mocking frameworks that can actually mock out static classes and methods. This is pretty cool, but unnecessary in my preferred design and implementation style.
No Multiple Inheritance: Coming from a C++ background, this was always a big discussion point. I generally did not recommend multiple inheritance, as it was tricky and usually implemented (and even modeled) inappropriately. I was much more interested in inheriting behavior rather than state; if I remember correctly, this is where it got interesting in C++. Java interfaces do a little bit to address this issue, but there is typically no way to elegantly solve the problem without a little code duplication and/or helper classes.
Strings: My nuts-and-bolts friend gave me this one; I’m not sure where he was going, but obviously the Java String class is its own unique creation. Personally, I was always bummed by the lack of a mutable String class. I might have preferred to see non-mutability implemented via subclasses, or something like the Collections.unmodifiableCollection() method. How about that, another static method reference! Maybe this would have been a little better: stringInstance.getImmutable()!
Limited Operator Overloading: Having done a lot of C++ early in my programming career, I found elegance in operator overloading and saw it as a void in Java. But I do have to agree with the reason it was left out of Java: “it was used incorrectly too many times in C++”. Unfortunately, you can use any tool incorrectly; it might have been nice to let us decide if and how to use some of these features, rather than eliminate them. I did find an interesting thread on StackOverflow, so I will keep my thoughts short!
JNI: This was another bullet from my nuts-and-bolts friend. In my opinion, JNI was more of a Java afterthought, “bolted on” simply to take advantage of existing solutions and provide some level of extensibility. I have not used JNI for many years, and all I can remember is the procedural nature of our solution. I don’t remember the details, other than it was ugly! You could also say that ugly is not object oriented!
Package Level Scope: A late-breaking addition. I personally never liked the “package” level scoping provided by Java. How is this object oriented? You are basically breaking class-level encapsulation, allowing all classes in that package access… This always felt like a little implementation hack to me… not one of my favorites!

My thoughts might not be completely baked, but I need to keep moving, so please let me know if you think I missed the point! Generally speaking, I have been very happy with Java as an implementation language. All languages have their warts, and Java is no different. The vast number of tools and frameworks allows Java applications to be effectively developed, tested, and deployed. I also believe that Java is pretty forgiving, allowing developers of all levels to build solutions, from barely object oriented to insanely obscure. Hopefully, we end up in the middle with simple elegance. I hope to have the opportunity to explore some of these Java flaws in Ruby, so please check back and see what I have learned!



Jul 20 2010

Eclipse Project Configurations – Shared or Personal?

Category: Continuous Integration, Eclipse, Software Development | Phil @ 12:01 am

To be honest, this was not a topic I was actually planning to write about. However, because I received a couple of interesting comments on a recent blog entry, Unfriendly Developer Practices, I thought I should clarify my position.

Question of the Day
Should your IDE (Eclipse) configuration files be checked in as part of your project?

One of the comments suggested that there should be no dependency between the build system and the IDE. Another person suggested that the disconnect between the IDE and the build system creates a convenient place for dependency inconsistencies to develop unchecked.

I completely agree with both of those statements and have seen both problems manifested several times. However, there are two simple tools that can make this a non-issue: Continuous Integration and Dependency Management.

  • Continuous Integration is obvious; if that lazy developer forgets to check in a dependency or update the build scripts, the build fails. Not perfect, but the problem is immediately detected and the relevant people are notified of the situation. The problem can be resolved within minutes.
  • The Silver Bullet for me was the addition of the Ivy Dependency Management tool into our build process. Because the build system and the IDE share the same dependency configuration, the project’s dependencies are now managed in a single place. Using the IvyDE plug-in, Eclipse simply worked, with no additional configuration. Using the externalized dependencies and basic Ant targets, the project could create a robust, change-resilient build system. To achieve this level of robustness, I took advantage of the Ivy post-resolve tasks; a small sketch follows this list. It was not until I discovered how easily they could isolate the build scripts from the actual dependencies that it all came together. The real beauty of this approach is that no files (dependencies) are directly referenced. The post-resolve tasks create variables which contain all of the appropriate files; the build script can then treat these variables generically, without concern. Nice and clean!
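
Here is a minimal sketch of the idea, assuming the Ivy Ant tasks are available to Ant; the target and id names are illustrative.

<project name="ivy-postresolve-example" xmlns:ivy="antlib:org.apache.ivy.ant">
  <target name="resolve">
    <!-- read ivy.xml and pull dependencies into the local cache -->
    <ivy:resolve />
    <!-- post-resolve tasks: expose the resolved jars as a classpath and a fileset,
         so no jar file is ever referenced by name in the build script -->
    <ivy:cachepath pathid="compile.classpath" />
    <ivy:cachefileset setid="runtime.libs" />
  </target>

  <target name="compile" depends="resolve">
    <mkdir dir="build/classes" />
    <javac srcdir="src" destdir="build/classes" classpathref="compile.classpath" />
  </target>
</project>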

It was an unstated, fundamental requirement to have no “direct” dependency on Eclipse, such that we could all revert back to the wonderful world of Emacs tomorrow, should the need arise. I don’t think the IDE a project chooses to use is really that important. I am apparently an Eclipse snob, but all I really care about is having Emacs key-bindings! If a majority of the team works on the same platform, I do believe there is real value in this continuity; that platform just happens to be Eclipse for me!

I have several other reasons for checking in configuration files:

  • It quickly highlights wrongdoing! If a developer checks in something specific to their environment, they will break everyone on the team. My goal is complete project neutrality: check the project out on any machine, in any directory, and it is guaranteed to build and deploy. Additionally, check the project out in Eclipse and it should build with no issues, within minutes.
  • Not all developers actually care about tools. Some developers simply want their environment set up for them. They have no desire to figure out how Eclipse formats code when you save a file or how to configure and run quality checks after each build; implementing business solutions is their primary concern.
  • Enhanced team productivity. If everyone’s world (environment) is the same, it is so much easier to spin up a new developer or help a teammate with a problem. Would you really want each developer to go through the discovery process of setting up the project? In the big picture, isn’t this really just wasted time?
  • Helping to ensure quality coding practices and standards. We also check in the Checkstyle, PMD, and FindBugs configurations and rule sets. Taking advantage of the sharable configuration files, both the Eclipse plug-ins and the Ant tasks work from the same rule sets, ensuring complete consistency across the team, no matter where the rules are executed.

I appreciate all of the recent comments; thanks for taking the time to reply. They make me think about and reconsider the decisions I have made, giving me yet another opportunity to learn from my mistakes! As far as Eclipse configuration files are concerned, I strongly believe there is far more project value gained by including them than by requiring each developer to manage their own environment. One final note: I have no issue with developers wanting to manage their own world, more power to them! I would hope that those efforts would make the overall, shared environment a little better; after all, everyone should be contributing to all aspects of the project! The real point is that no one should be required to configure a project, unless they really want or need to.



Jul 19 2010

Self-destructive Tendencies and Root Cause Analysis

Category: Software Development | Phil @ 12:01 am

One of my favorite blogs, 37Signals, had an interesting post. It quoted Ed Catmull, President of Pixar, on the topic of creative vision and leadership. The full interview can be read here; it is actually rather interesting. What caught my attention was the quote concerning self-destructive tendencies: “if everyone is trying to prevent error, it screws things up. It’s better to fix problems than to prevent them.”

Unfortunately, it seems that in our current economic environment, it is more important to prevent mistakes than to take action to address the underlying problems. This strategy can even be taken to the extreme, putting so many gates into the process that it actually becomes impossible to move forward. It becomes difficult to fix existing issues and nearly impossible to develop new functionality. The number of signatures and the amount of evidence required to prevent errors has become more of an accountability tool than a quality tool.

Writing this post made me think of another technique that we always seem to overlook, the “5 Whys”. If you are like me and have not looked at this lately, it is the perfect time for a refresher! Just click the link!



Jul 11 2010

Unfriendly Developer Practices – Adding unnecessary environmental complexities

A couple of years ago, I began focusing on construction and deployment complexities. I had just joined a new team and began looking for the commonalities between their architectures and environments. Needless to say, they were all different. The most disturbing issue was the amount of effort required for a new developer to start working with an application. If there was any project documentation, it was probably out of date, resulting in hours or even days of trial and error to actually build and deploy the system. In this post, I’m talking about building and deploying J2EE applications. The following list represents the top developer-unfriendly practices I have observed.

Developer Unfriendly Practices
  1. Without first running a setup task in Ant, the project does not build in Eclipse.
  2. The Eclipse .project and .classpath files are not checked into the version control system with the source.
  3. The project’s Ant scripts are generic and non-deterministic; they have to be edited for your individual environment before building the project.
  4. The project requires specific libraries or packages to be installed on your machine before it can be built.
  5. The Eclipse project is not configured as a WTP project.
  6. Minimal JUnit tests and no continuous integration.
  7. J2EE container dependencies built into the project.

After experiencing these challenges, I began a quest of environmental simplification. I understand there are numerous reasons why projects end up with these unfriendly characteristics, but leaving them unaddressed was just not in my character! I made it my personal mission to leave each project I worked on in a little better shape than when I arrived. This is a never-ending activity; I hope that those who come after me will have a similar philosophy and continue the quest. It is amazing how easily and quickly an application can atrophy, eliminating all of the positive changes that had been previously applied. To keep things simple, I came up with the following three project requirements. I try to weave some aspect of them into the development process and architecture of each project that I work on.

Project Principles

  1. Self-Containment, No External Configuration
  2. Environmental Awareness
  3. Change Resilience

I hope that most of these principles seem like common sense. Much like many of the XP principles, they are not new or revolutionary, just good reminders of often-overlooked practices. I will try to elaborate on each principle in a future post, hopefully in the near future!
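
As a small preview of what I mean by "Environmental Awareness", here is a hypothetical sketch (my own illustration, not taken from any of the projects above) of a build that detects which environment it is targeting and loads the matching configuration, rather than requiring anyone to edit the script. The property name and file layout are assumptions.

<!-- Default to "dev" unless the caller overrides with -Denv=qa or -Denv=prod -->
<property name="env" value="dev" />
<property file="config/${env}.properties" />
<echo message="Building with the ${env} configuration" />

Running "ant -Denv=qa deploy" would then pick up config/qa.properties without any changes to the build file.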



Jul 05 2010

Ant 1.8, Reusable XML Scriptlets, and Ivy for Self-Containment

Category: Software DevelopmentPhil @ 8:33 am

I started reading about EasyAnt some time ago on the Ivy developer forums. It was a very interesting concept: storing reusable Ant targets in your Ivy repository and pulling them into projects at build time. Ivy works great for jar files, so why not Ant build scripts? I have a set of Ant scripts in my toolbox that seem to get copied from project to project; it bothers me every time I copy them! We gave EasyAnt a quick look, but it had one show-stopper: it required every developer and builder to have EasyAnt installed (rather than just Ant). This is a little thing, but when you are working with dozens of projects and hundreds of developers, it seemed like too much of a hurdle. I wanted Ant to be the only installation requirement and the build to be "self contained". Just type "ant…" Nice and simple.

I started addressing the problem by making templates out of my scripts and placing them into a Subversion repository. They could easily be included in a project using the Ant <get> task. A little ugly, but it worked well enough. It did require us to land the XML files in the project directory, such that they could be "imported" into the running build. Once again, not a problem if you remember to use SVN ignore on the .ant directory; we might have been able to put them in /tmp, but never tried it.

<!-- Location of the shared Ant templates in Subversion -->
<property name="ant.xml.location" value="http://svn:1234/Ivy/Ant/Common/trunk/main/1.0/xml" />

<!-- Download the shared scripts into a local (svn:ignored) .ant directory -->
<mkdir dir=".ant" />
<get src="${ant.xml.location}/ant.ivy.xml" dest="${basedir}/.ant/ant.ivy.xml" usetimestamp="true" />
<get src="${ant.xml.location}/ant.checkstyle.xml" dest="${basedir}/.ant/ant.checkstyle.xml" usetimestamp="true" />

<!-- Import the downloaded scripts into the running build -->
<import file=".ant/ant.ivy.xml" />
<import file=".ant/ant.checkstyle.xml" />

With Ant 1.8, you can actually accomplish this in a much cleaner way. Look at the <import> task documentation for some quick examples. Our first test was a very simple implementation, but it worked like a champ.

I personally like to break my Ant scripts into multiple files, each with its own purpose. Some of my coworkers hate this approach and would prefer a single XML file. In the long run, the multi-file approach seems easier to support, especially when you factor in multiple maintainers. You would never put all of your Java code in a single class, so why would you put all of your XML in a single file? I will ultimately create a single file to import, which will in turn import all of the individual XML files. This should make the consumers of the templates a little happier!

<!-- Ant 1.8+: import the shared scripts directly from the repository URL, no local copy required -->
<property name="ant.xml.location" value="http://svn:1234/Ivy/Ant/Common/trunk/main/1.0/xml" />
<import>
    <url url="${ant.xml.location}/ant.ivy.xml" />
    <url url="${ant.xml.location}/ant.checkstyle.xml" />
    <url url="${ant.xml.location}/ant.pmd.xml" />
    <url url="${ant.xml.location}/ant.clover.xml" />
    <url url="${ant.xml.location}/ant.findbugs.xml" />
    <url url="${ant.xml.location}/ant.testability.xml" />
</import>
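
The single aggregator file mentioned above might look something like the sketch below; the ant.master.xml name is my own invention for illustration, and it assumes the importing project has already defined ant.xml.location.

<!-- ant.master.xml: single entry point that pulls in the individual template files -->
<project name="ant.master">
    <!-- Expects ant.xml.location to be set by the importing project -->
    <import>
        <url url="${ant.xml.location}/ant.ivy.xml" />
        <url url="${ant.xml.location}/ant.checkstyle.xml" />
        <url url="${ant.xml.location}/ant.pmd.xml" />
        <url url="${ant.xml.location}/ant.clover.xml" />
        <url url="${ant.xml.location}/ant.findbugs.xml" />
        <url url="${ant.xml.location}/ant.testability.xml" />
    </import>
</project>

A consuming project would then need just a single nested <url> pointing at ant.master.xml.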

I think the javaresource approach would be the perfect way to solve my problem. Unfortunately, I could not find any way to combine a remote URL with a classpath attribute. It is a neat little trick to remember; maybe this feature could be added to a future version of Ant.

<!-- Import a build file packaged inside a jar on the classpath -->
<import>
    <javaresource name="common/targets.xml">
        <classpath location="common.jar">
            <!-- What I wanted, but could not make work: pulling the jar from a remote URL -->
            <!-- <url url="http://somewhere/common.jar" /> -->
        </classpath>
    </javaresource>
</import>

One final word about completeness. My personal goal is to make the build system completely self-contained, meaning no external dependencies (outside of Ivy), no custom Ant installations, no antlib requirements, and no manual configuration. Ideally, a developer should be able to check out the project from the version control system and get to work. It should automatically build and execute from Eclipse, and as long as Ant is installed, the developer should just be able to type ant <target> to accomplish whatever is required.
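
Putting the pieces together, a project-level build.xml under this approach might be as small as the sketch below. The ant.master.xml file and the resolve and compile target names are assumptions on my part, standing in for whatever the shared templates actually provide.

<project name="my-app" default="build">
    <!-- The only thing the project declares: where the shared templates live -->
    <property name="ant.xml.location" value="http://svn:1234/Ivy/Ant/Common/trunk/main/1.0/xml" />

    <!-- Pull in everything else (Ivy, Checkstyle, PMD, Clover, FindBugs) from the repository -->
    <import>
        <url url="${ant.xml.location}/ant.master.xml" />
    </import>

    <!-- Project-specific targets extend or depend on the imported ones -->
    <target name="build" depends="resolve, compile" description="Default build for a fresh checkout" />
</project>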

I have been changing all of my templates to make use of the ivy:cachepath and ivy:cachefileset tasks. In prior implementations, I used <ivy:retrieve> to access all of the dependencies; my new goal is to only use files from the Ivy cache. Additionally, I think the Ant templates should do their own Ivy dependency management. Now application teams only need to worry about their own dependencies, not what is required by the environment or build system. The following example demonstrates how to add Clover to the build process and ensure that the Clover license is added to your classpath, all without really knowing where the files are… Pretty cool!

<!-- Resolve the Clover jar and license directly from the Ivy cache (no retrieve) -->
<ivy:cachepath pathid="classpath.CLOVER" conf="runtime" organisation="atlassian" module="clover" revision="3.0.2" inline="true" />
<ivy:cachefileset setid="fileset.CLOVER" conf="license" organisation="atlassian" module="clover" revision="3.0.2" inline="true" />

<!-- Locate the directory containing the license file so it can be placed on the classpath -->
<pathconvert pathsep="${line.separator}-> " property="clover.license.path" refid="fileset.CLOVER" setonempty="false" />
<dirname file="${clover.license.path}" property="clover.license.dir" />

<!-- Define the Clover Ant tasks; the license is found via the classpath -->
<taskdef resource="cloverlib.xml">
    <classpath>
        <path refid="classpath.CLOVER" />
        <path location="${clover.license.dir}" />
    </classpath>
</taskdef>
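
Once the tasks are defined, enabling coverage should be as simple as calling Clover's standard setup task; the database location below is just an example path I chose for illustration.

<!-- Turn on Clover instrumentation for subsequent compiles -->
<clover-setup initstring="${basedir}/target/clover/clover.db" />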

