When faced with a task, a new business domain, or a new library/framework/technology in your work environment, there are basically two approaches:

  • Take a book, close yourself in a room for one or more weeks with your head in the book, finish the book, and then start designing or coding; or
  • read the documentation, code a bit, read more, code a bit more, and so on, until you reach a breakthrough level of understanding, which is normally followed by insane refactoring or revision of what you did.

Both approaches have pros and cons. Notably, if you cannot afford to get it wrong the first time, BLUF (nice acronym...) apparently makes more sense, but it has the drawback that your boss could look at you wondering how unproductive you are. It's also the only way to get a panorama of the whole available toolset and make appropriate choices upfront without risking dead ends.

The EL approach instead makes you productive from moment zero: you start producing something even though it's guaranteed not to survive; it's for learning. This has the advantage that you get to think while you learn. In some cases, though, you could reinvent the wheel or discover that you wrote a lot of code for nothing. Your code is very unstable and changes as you learn, and you could reach a point where you are near a breakthrough but the code produced seems to work, so you (or your boss) don't want to invest more time into it.

This is similar to the BigDesignUpFront versus eXtreme Programming discussion, but it operates at another level. Even if you do BDUF, you can still perform EvolutionaryLearning while you write the specs or the design.

What is your method when you face a new (potentially large) learning task and have to accommodate it within the rhythms of your employment?

CW as it's clearly a discussion.

+2  A: 

If the situation is that "you can't afford to get it wrong the first time", then you should seriously consider not using a new technology to develop it. Obviously, sometimes the technology is chosen for you and can't be avoided, but the situation should be avoided where possible. More to the point, even if you go through a book or a class, chances are you will still need to improve on your first design, for the simple reason that one always gets better the more one does something. I am a fan of learning a little and coding a little up front to determine whether the technology is something you'd like to start using. If so, then make a bigger investment to learn as much as you can, so you're more in line with that technology's standards.

Jay
A: 

EL is my suggestion, though generally I'd prefer to do a small experiment rather than a large project when something new is going to be used. Just to give a few examples of this:

CMS - Sitecore was chosen as what we would use to build our websites, so there wasn't a choice there. However, we did end up taking a few attempts before finally getting a workable product out the door. The point here is that while we did get some training for it, the training and the scope of work were just a wee bit mismatched: the training was a two-day quickie overview, while the scope of work was rather broad, given all the different things we needed it to do and the other systems it had to integrate with.

AJAX - My first run at using AJAX was at a company where I got to investigate a few different ways to integrate it into the company's product; I took a couple of weeks trying things out to figure out what worked best and went from there.

I could just as easily point to learning classic ASP or ASP.NET for examples similar to the AJAX case, where the closest thing to training was a few books we had on hand to get through it.

JB King