Nov 22 2009

Self-Documenting Code… Interesting thoughts about comments…

Category: Java, Software Development | Phil @ 2:30 pm

I recently found an interesting web site called Making Good Software. I really liked this guy’s site; he might just save me from writing down a lot of my own thoughts! Unlike my typical post, his are very short and easy to read. I highly recommend checking it out and popping through his other recent posts too. Nothing revolutionary, just great reminders of how we should be thinking as we write new software. A few of his entries in particular caught my attention, especially the one on comments.

The Comments are Evil post really hit home with me… I have always said that code should basically be self-documenting. That statement is usually frowned upon by some developers and managers; some people might think I’m just lazy. Farthest thing from the truth! By enforcing good coding practices with automation like Checkstyle, PMD, and FindBugs, combined with placing a high value on class and variable names, putting logic in the proper place, and keeping methods and classes small, most comments become redundant and provide little value. It was also very interesting how the author tied comments to the DRY (Don’t Repeat Yourself) principle: when your comments simply repeat what the code is already saying, you are repeating yourself!
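A contrived before-and-after sketch of what I mean (the class, method, and threshold names are invented for this example, not taken from any real project): the first version needs a comment to explain itself, while the second says the same thing through its names and structure.

    // Before: the comment just repeats what the code does, and the names hide the intent.
    class Util {
        // return true if the customer gets the discount
        boolean chk(int n, double b) {
            return n > 10 && b == 0;
        }
    }

    // After: small, well-named pieces make the comment redundant.
    public class DiscountPolicy {

        private static final int LOYAL_CUSTOMER_ORDER_THRESHOLD = 10;

        public boolean isEligibleForDiscount(int orderCount, double outstandingBalance) {
            return isLoyalCustomer(orderCount) && outstandingBalance == 0;
        }

        private boolean isLoyalCustomer(int orderCount) {
            return orderCount > LOYAL_CUSTOMER_ORDER_THRESHOLD;
        }
    }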

I also place a huge value on refactoring; I borrowed the image at the right to further highlight the point. There are many articles on the “Red Green Refactor” approach worth reading that help tie this all together. One of my personal goals is always to create an elegant (simple and clean) solution, which never happens on the first pass! Looping through Red-Green-Refactor a few times helps achieve that elegance, and you end up with code that requires fewer comments.
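For anyone who has not seen the cycle in code, here is a minimal sketch of one Red-Green-Refactor loop using JUnit 4; the OrderTotal class and its behavior are invented purely for illustration.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Red: write the failing test first; it documents the expected behavior.
    public class OrderTotalTest {

        @Test
        public void totalIncludesSalesTax() {
            OrderTotal total = new OrderTotal(0.05); // 5% tax rate
            assertEquals(105.0, total.withTax(100.0), 0.001);
        }
    }

    // Green: the simplest implementation that makes the test pass.
    class OrderTotal {

        private final double taxRate;

        OrderTotal(double taxRate) {
            this.taxRate = taxRate;
        }

        double withTax(double subtotal) {
            return subtotal * (1 + taxRate);
        }
    }

    // Refactor: with the test as a safety net, rename, extract, and simplify until
    // the code reads cleanly enough that no explanatory comment is needed.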

Some of the comments on the Making Good Software blog are very interesting as well; many people throw in their two cents, both pro and con, on the value of comments. The author also states that comments cannot be completely eliminated. The code we write is never perfect and probably needs a little extra context to explain the hopefully infrequent imperfections. I think the bottom line is to focus on quality code: eliminate the obvious, low-value comments, and comment the pieces that are truly intricate. If you are writing a lot of comments, you had better go back and look at your design, as something is probably not quite right.



Nov 21 2009

Gnome 3.0 Beta Shell

Category: Blogging, Ubuntu | Phil @ 6:22 am

Half of the fun of playing with Linux is trying out new stuff! I found a blog the other night that talked about Gnome 3.0 as well as installing it on Ubuntu. I don’t think anyone would disagree that the current Gnome desktop seems a little dated; it is not vastly different from a pre-Vista Microsoft experience. Add in Avant Window Manager, Screenlets, and/or Google Widgets, and you get a more modern Apple/Windows experience. I took the plunge and installed the 3.0 beta last week and have not looked back. I really like the interaction and thought the presentation was pretty slick.

It is kind of funny how emotional people get about change (read the comments on that blog). People were going off on the usability and the new look, saying it looked like Windows or Mac, or that this feature was stolen from some other implementation. Who really cares? And why does everyone have to be so negative? I thought it was pretty creative! Everything was very intuitive and easy to use; it might take a little more mouse movement to navigate, but the overall concept works for me.

After using it for a couple of days, I have learned there are multiple ways to navigate around, minimizing the clicks. The “Find” feature is one such shortcut: you can quickly locate an application in the menu system. It still has the good old-fashioned Alt-Tab behavior to quickly switch between applications. The only option that was not obvious to me was the virtual desktops. I’m not a big virtual desktop user any more (too much time on a Windows box during the day!); it did seem a little easier with the 2.x desktop (you just needed to scroll the mouse wheel, I think). Now you can dynamically (and easily) add new desktops through the Activities menu (just click the plus icon). There is probably a nice shortcut to navigate between desktops, but I did not look for it. One other usability note that is not completely obvious: you don’t actually have to click on the word “Activities” to get access to the menu, just push the mouse all the way into the corner and the menu pops up… Nice…

A couple of other interesting things I found on the Digitizor website: installing Chrome on Ubuntu, and GIMP being removed from the next release of Ubuntu. I have also been playing with Chrome this week; it seems fine and fast, although my wifi-based Internet provider has so much latency that it is hard to really test “fast”. I have been a fan of GIMP for quite a while and have only figured out the most simplistic functions to support my blogging; I guess I will have to give F-Spot a try in the future!



Nov 16 2009

Code Presentation or Inspection?

Category: Software Development | Phil @ 8:18 pm

Several months ago, I got back on the code review bandwagon and jotted down some thoughts. As I worked with other teams doing “code reviews”, the lack of structure (from my perspective) got me energized to go back down this road.  We have been playing with the Crucible Code Review Tool for a couple of weeks now, but I’m not completely sold on what I’ve seen; I feel like I’ve missed something, but still think this is the right direction to be heading.

I first want to share a couple of links to vendor papers that I think are worth noting: Effective Code Reviews (Atlassian.com) and Best Practices (SmartBear.com). I found two points I wanted to highlight from these documents, both of which are about “preparation”. The following graphics were pulled from the referenced links.
This first one really drives me crazy, but unfortunately, it appears to be the way most teams perform their “code reviews”. First, the presenter/reader sets up a phone line for all of the participants to dial into, even if they all sit together in the same area! Next, the presenter sets up a “net meeting” and walks the team through the code, adding comments or making small fixes along the way. Maybe I have too limited an attention span, but I don’t really commit to this type of code review, and from my observations, neither do many others. Inevitably, someone comes by with a question, or you get an interesting email or instant message; something draws you away from what is being presented. To make matters worse, you have probably never even seen the code before and may not even understand the design. How much can you really contribute, other than noticing the most obvious issues?

The above graph shows the range of code reviews from a formality perspective. Not that the process has to be this rigid, but I think it is important to discuss the stages and roles of an inspection:

  • An inspection follows a well-defined procedure that includes six stages: planning, overview, individual preparation, inspection meeting, rework, and follow-up. Certain participants have assigned roles: author, moderator, reader, and recorder.

Depending on the team, there is usually only one stage (the meeting) and one role (the reader). I personally feel it is much more valuable to explore and actually read the code in advance, committed to providing feedback. The reviewer needs to actively participate, whatever that means for them: print off the code or browse it online, actually understand it, be engaged!

My second point is about preparation. I generally don’t think most developers are very effective at “thinking on their feet”. It is really not about intelligence, but more about communication. Teams are typically very diverse, both technically and in the way they consume (process) information. It can be very difficult to present code at a level that everyone can understand and process effectively. In my opinion, no preparation is equivalent to GIGO (garbage in, garbage out).

Previously, I thought it was mostly the reviewers’ responsibility to make the process meaningful. The reviewers should (must) prepare before the meeting, just as if they were studying for a test, and feedback must be returned before the “meeting”; otherwise, there is no need to schedule the meeting at all. The purpose of the meeting is to discuss the comments and determine the best solution for the highlighted issues; this approach also helps get everyone on the same page. The amount of preparation done by the presenter/reader can vastly impact the review as well. If the reader simply “wings it”, the presentation will generally be unfocused and undirected.
As illustrated by the graph to the right, the reader’s preparation has a dramatic effect on the code review process. The act of the author reviewing and re-thinking the implementation before submitting the code for review provides another opportunity to discover problems before the inspection actually starts, making the process even more efficient. How many times have you looked at your own code and said, “What was I thinking? This is not good at all…”? So, why not incorporate a “self-review” into the process as well?

I will wrap this up so you don’t get too bored; my next blog will be on our Crucible evaluation. I did find a short blog that had a nice picture of the “Crucible Workflow“. That picture is where I’m striving to be, from a process perspective: collaborative, iterative, and ultimately actionable.



Nov 09 2009

NetBeans Unit Test Creation better than Eclipse? And where should unit tests live?

Category: Eclipse, Java, Testing | Phil @ 5:00 am

I seem to work on a variety of Java applications and find that unit testing is one of the most varied (implementation-wise) pieces of the development process. These applications, created by different development staffs, many of which have evolved over the past few years, are all very unique… which kind of makes me wonder why? That is a topic for another post!

One debate I always seem to encounter is where to actually keep the unit tests. Should we do in-package testing or rely on the public API? My preferences are pretty simple:

  • Unit tests should not be stored with the code, but rather created under a secondary source root (there is a sketch of this layout after the list). On my early projects, I created a sub-package under each package, called test, to manage the relevant files (tests and data). I later realized that I could simplify the Ant build process, and still enable in-package testing, by creating a separate source tree. I have seen projects commingle the actual source code, unit tests, and even the test data files, all within a single source tree. This defaults the testing strategy to in-package testing and discourages (or even prevents) API testing. I find this approach rather messy and unclear, and it also complicates the Ant build: at some point during the build, the unit tests, supporting classes, and data files have to be separated from the actual deployable content.
  • I like to name my secondary source root tdd. I know it is not widely practiced, but my hope is that if the developers see that tdd directory every time they open their IDE, the concept might actually wear off on them! Maybe someday, one developer (hopefully more) will actually be challenged to write their unit tests first. With Eclipse and JUnit 4.x annotations, I seem to always start with my unit test, and sometimes even refactor code from the unit test into the actual class; it’s kind of a hybrid TDD process, but the thought is always there!
  • I also prefer the public API testing strategy versus in-package testing. This line of thinking always takes me back to the Testability Explorer; the concepts behind that metric reinforce the idea of public API testing and are worth a quick read. Add the Spring Framework to the mix, enabling dependency injection, and I see little need for in-package testing. Public API testing is typically accomplished by creating a “test” package as the root of the secondary source tree, with sub-packages reflecting the package hierarchy of the classes to be tested. I truly believe that defaulting to in-package testing allows developers to be very sloppy, even unaware that they are testing a specific internal implementation rather than the externalized behavior presented by the API… a very bad practice in my opinion.
  • If there is a need for in-package testing, the package structure of the classes to be tested can be recreated directly under the secondary source tree, maintaining a clean separation between the source and the test cases. These tests should be considered an exception to the norm rather than common practice. I would hope for a 90-10 or 80-20 ratio, with the majority of the tests falling under the test package root and the in-package tests being the minority.
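To make the layout concrete, here is a minimal sketch; the package and class names (foo.bar.Account) are invented for this example. The production code lives under the primary source root, the tests live under the tdd root, and the default choice is a public API test in a parallel test.* package:

    src/foo/bar/Account.java                  <-- production code (deployable)
    tdd/test/foo/bar/AccountTest.java         <-- public API test (the default)
    tdd/foo/bar/AccountInternalsTest.java     <-- in-package test (the rare exception)

    // tdd/test/foo/bar/AccountTest.java
    package test.foo.bar;

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    import foo.bar.Account;

    // Lives in a different package than Account, so it can only exercise the public API.
    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(25.0);
            assertEquals(25.0, account.getBalance(), 0.001);
        }
    }

With both source roots on the build path, the Ant build can compile and run everything under tdd without any of it leaking into the deployable artifact.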

I am kind of tied to Eclipse as my IDE, but I do play with NetBeans every so often. I think I could switch to NetBeans; my only real requirement is that the IDE must have Emacs key bindings; some habits are just too hard to break! NetBeans looks like a pretty good tool and seems to be very responsive on my Ubuntu box; I especially like the way it manages plug-ins. One interesting thing that NetBeans does (maybe a little better than Eclipse) is manage unit testing. NetBeans will automatically create the secondary source tree, but it seems to default to in-package testing. It is very easy to add the additional test package into the package structure to enable API testing. I also like the way NetBeans separates the Test Libraries from the regular Libraries. I’m not sure how well this would work with a tool like Ivy, but it does make it more obvious which libraries are used for execution versus unit testing. If you do happen to generate a unit test from an existing class, NetBeans will generate more code than Eclipse. I’m not sure how useful this code is, but it does try to create the object under test, invoke the get() methods, and perform assertions on the returned values. It might be more noise than it is worth, and anyway, you should be writing the unit test first!
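For illustration only, here is an approximation of that getter-driven style of test, again using the invented foo.bar.Account class from the sketch above; this is not NetBeans’ actual generated output, just the general shape of it:

    package test.foo.bar;

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    import foo.bar.Account;

    // Creates the object under test, calls a getter, and asserts on the default value.
    public class AccountGeneratedStyleTest {

        @Test
        public void testGetBalance() {
            Account instance = new Account();
            double expResult = 0.0;
            double result = instance.getBalance();
            assertEquals(expResult, result, 0.0);
        }
    }

Scaffolding like this verifies very little real behavior on its own, which is why I would still rather start from a test that describes what the class should do.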

The bottom line is that unit testing should be easy and valuable. If it becomes too hard or complicated, it is probably time to revisit how the unit testing strategy is being implemented.



Nov 08 2009

Eclipse – Ubuntu 9.10 Button Issue…

Category: Eclipse, Testing | Phil @ 5:18 pm
This was making me crazy the other night… I did not spend too much time on it, thinking it was just me. But after upgrading my laptop and encountering the exact same problem, I knew there was more to it. It seemed like only the cancel buttons actually worked; all of the Next and Finish buttons did absolutely nothing, other than beep! I soon realized that I could use the Tab key and press Enter to move to the next screen, but that did not make me very happy.

A quick Google search turned up several people with the same exact problem. It was kind of interesting: the Ubuntu developers did not want to take responsibility for the problem and said it was an Eclipse issue, while the Eclipse community said they never had to do anything special for Eclipse to run on Ubuntu; it was looking like a standoff! I did find the official bug report filed with Launchpad.net, which says the problem is resolved. However, without overriding the following environment variable, Eclipse is not very usable.

export GDK_NATIVE_WINDOWS=1

It also appears that this problem affects many other tools on the new 9.10 release as well. This simple fix will get you back into action without having to wait for the real fix.
