Carlos Ble

I am a professional software developer; I solve problems.

I also teach and mentor developers to build better software.

Developing software since 2001.

Can I help you?

  • Do you need high quality tailor-made software?
  • Need training on TDD, clean code or refactoring?
  • Do you need a technical consultant?
  • May I pair with you to write better code?

Events

Upcoming training courses:

  1. TDD - [in Spanish] - October 6 & 7
    Gran Canaria
  2. TDD - [in English] - October 20, 21 & 22
    London, UK
  3. TDD - [in Spanish] - October 29, 30 & 31
    Madrid, Spain

Conferences:

  1. I'll be at the Agile Testing Days 2014
  2. I'll be at the London Test Gathering Workshops.

Practising Mob Programming

In the last edition of Socrates UK I met Gianfranco Alongi, who told us about his team's experience with Mob Programming. It was the first time I had heard about it. As the site says, Mob Programming is people working at the same time, in the same space, at the same computer, on the same thing. Gianfranco said it was very useful for them.

At the beginning of November, three of my fellow craftsmen and I joined a team of 12 other developers to work on a new project. We are building a DMS (Dealer Management System) and an IMS (Importer Management System). The project is very exciting, but we don't have a clue about the domain. We are also a large group of people with different programming styles. So we decided to give Mob Programming a try, and after a few weeks the results are very positive.

 

Some lessons we have learned along the way:

  • Like with pair programming, navigators should take notes whilst the driver is at the keyboard, rather than disturbing him constantly.
  • Only when the driver takes his hands off the keyboard are we allowed to discuss.
  • Rotate frequently: it's faster to jump on the keyboard and do it yourself than to try to explain the code you envision.
  • Limit the group size to 5 or 6 people. We find that 4 is a good number; more than that is not productive for us. A bigger group is sometimes useful, though, when we want to define conventions for the whole team.
  • The group should be stable: if new people join the discussion in the middle, we have to explain the code and the decisions we have already made over and over again. It's like the never-ending story.
  • At some point I had to leave a session right in the middle of something. When I came back, a colleague told me that the people who were shy or less proactive during my driving period had started proposing interesting ideas during my absence. The people considered "seasoned" or "experienced" should step away from time to time so that the others feel empowered.

So far the project is just fantastic. People are awesome, learning incredibly fast, very enthusiastic. Everyone wants to learn and contribute. We are all learning a lot and doing our best. I can't think of a better project. We are very lucky.


Implementing BDD – part 1

Scenario review

This is the first of a series of posts on how our team is practising BDD. These are mostly notes to myself and our team, so other readers may not find enough context to understand the points.

After a few weeks we have decided that:

  • Although the source code is written in English, the specifications are written in Spanish. The business is always discussed in Spanish, so that's the language used to determine what we need to do. Hence the Cucumber scenarios are written in Spanish.
  • There is a glossary of business terms to help us document the ubiquitous language. The glossary contains the translations from Spanish to English so that the source code expresses the same concepts.
  • If developers need to change or adjust the scenarios for some reason (for instance, for automation purposes), then business analysts have to validate the changes.
  • Scenarios are stored in one place only, and that place is SpecFlow (Cucumber). This means that business analysts need to have all the tools installed and access to the source control repository (Visual Studio and TortoiseHG).
  • After the specifications workshop, scenarios are written in pairs or in groups of 3 or 4 people. Two people are usually enough. We've realised that writing the scenarios in pairs helps us come up with more questions and more ideas to simplify the problem.
  • The automation of the scenarios should also be done in pairs, because this is yet another opportunity for us to detect inconsistencies or doubts. Well, in general we just pair most of the time.
  • A user story is not ready for implementation until it's got at least 4 scenarios. We started with just one, and the lack of context caused us to introduce wrong models in the code. Actually, when test-driving a small function, we never start out with a single example but with a to-do list. Then we focus on each test, writing the minimum code required, but we do know where we want to go. This is the same, but at a higher level.
  • Given that our business analysts know how to read code, they should read the source code at the end of the iteration. By reading the part of the code inside the hexagon (we use hexagonal architecture), they can ensure the designed model makes sense.
  • Scenarios should not repeat irrelevant details. If one scenario uses real data like a "plate number", then the following scenarios don't need to repeat the whole plate number but just something like "x". We should keep in mind that scenarios complement each other in terms of documentation. They should be independent for execution but not for documentation.
  • Triangulation: we can make a scenario go green quickly. However, triangulating the code that really works for all the cases falling under that scenario may require more code. If the triangulation does not concern business details, we use NUnit to test-drive rather than SpecFlow. Scenarios are used for documentation, not for triangulation, unless the triangulation covers relevant business details.

It was by chance that one of our business analysts came along to a mob programming session and got to see code related to the scenarios that we had defined together. He quickly realised that we were introducing aliens into the domain (wrong models), because we were lacking some other scenarios. The fact that the code was on the wall in a massive font size and really communicated our intent, together with the fact that the reader is very clever, made us discover a useful technique to double-check our assumptions.

 

A method is an abstraction

The name of a method must be at a higher level of abstraction than its body for the method to be worth it. The name should explain what the method does, but not how. If the name communicates exactly the same as its code, it could be a sign that the method does not pay off and should be inlined.

Whenever we extract a method from a bunch of lines of code, we are adding a little abstraction to our code. If the extraction happens too early, we are adding wrong abstractions to the code. Those premature abstractions get in the way of a better design, often hiding duplication and deeper design problems. Lots of small methods do not necessarily mean a better design.

I extract a method only when my tests are green and the responsibility of those lines is really obvious: when I am done implementing the branch of the solution I am test-driving and it's easy to find a name for the new method. I don't create new methods to make a test pass. Methods appear later.

The code below is an example of premature method extraction. Right after the first test passed, the developer created this method:

    private boolean isValidArithmeticExpression(String input) {
        return input != null;
    }

The body of the method is perfectly understandable without the method. There was no duplication of the statement and there were just a few lines in the production code. Definitely too risky, too early, too complex.

An example of a wrong encapsulation:

    public String[] splitBySpace(String expression) {
        return expression.split(" ");
    }

In this example the method's name says exactly what one can read in the body.

The same applies to helper methods in the tests. This is an example of a wrong encapsulation:

    private void conductCalculationAndAssertResult(String equation, int expectedResult) {
        int result = calculate(equation);
        assertEquals(expectedResult, result);
    }
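For contrast, here is a minimal sketch (in JavaScript, with made-up names) of an extraction that does pay off: the name states what the lines achieve, while the body carries the "how" that would otherwise clutter the caller:

    // The name is a higher-level abstraction than the body:
    // it states what (is the session expired?) and hides how (two conditions combined).
    function isExpiredSession(session, nowMs) {
        var elapsedMs = nowMs - session.lastActivityMs;
        return elapsedMs > session.timeoutMs && !session.keepAlive;
    }

    var session = { lastActivityMs: 0, timeoutMs: 1000, keepAlive: false };
    isExpiredSession(session, 5000); // true - the caller reads at a single level of abstraction

Reading the call site, one learns the intent without having to parse the arithmetic, which is exactly what the extracted name is buying.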

Implementing a fake repository

Lately I am using fake objects more than the other common kinds of test doubles: stubs, spies and mocks. This is quite recent; I used to prefer stubs, spies or mocks. I find that fakes make my tests less fragile in the face of design changes when developing outside-in. The problem is that there can be defects in the implementation of the fake objects themselves. A tricky one may appear in the case of in-memory repositories:

    public class InMemoryRepo : JobRepository {
        readonly List<Job> jobs = new List<Job>();

        public void Add(Job job)
        {
            jobs.Add(job);
        }

        public void Update(Job job)
        {
            // TODO
        }

        public List<Job> FindBy(string ownerId)
        {
            return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
        }
    }

This implementation is misleading. If we are test-driving a service, expecting it to update a Job instance, we may forget to invoke the repository's Update method and the test would still pass.

    [TestFixture]
    public class ServiceTest {
        const string irrelevantTitle = "any title";   // the concrete values do not matter
        const string irrelevantOwnerId = "any owner"; // for this test

        [Test]
        public void AssignsOwnerToJob()
        {
            var service = Factory.CreateJobServiceWithFakeRepository();
            var job = service.CreateJob(irrelevantTitle);

            service.Assign(job, irrelevantOwnerId);

            service.JobsOwnedBy(irrelevantOwnerId).Should().HaveCount(1);
        }
    }

    public class Service {
        private JobRepository repo { get; set; }

        public Service(JobRepository repo){
            this.repo = repo;
        }

        public Job CreateJob(string title)
        {
            var job = new Job();
            repo.Add(job);
            /*...*/
            return job;
        }

        public void Assign(Job job, string ownerId){
            /*...*/
            job.Assign(ownerId);
            /* BUG: I am forgetting the call to "repo.Update(job)" right here */
        }

        public List<Job> JobsOwnedBy(string ownerId){
            return repo.FindBy(ownerId);
        }
    }

The problem is that the in-memory repository is using the same object references all the time. When an object is passed in as an argument, what is actually passed is a copy of the reference. However, if the repository were accessing a database, it would most likely return different instances (references). A better implementation of the repository would be this one:

    public class InMemoryRepo : JobRepository {
        readonly List<Job> jobs = new List<Job>();

        public void Add(Job job)
        {
            /* clone: the object added to the repo
               is different from the given one */
            jobs.Add(job.Clone());
        }

        public void Update(Job job)
        {
            /* find the given job,
               remove it from the collection and
               add a clone of the given one */
        }

        public List<Job> FindBy(string ownerId)
        {
            return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
        }
    }

Conclusion: the behaviour of the fake should be the same as the original's when it comes to handling object references.
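The same pitfall can be sketched in a few lines of JavaScript (hypothetical names, just to make the aliasing visible): a naive in-memory store keeps the caller's reference, so a later mutation of the original object is "seen" by the store even though no update method was ever called.

    // A naive in-memory store that keeps the caller's reference
    function createNaiveStore() {
        var items = [];
        return {
            add: function (item) { items.push(item); }, // stores the very same reference
            findByOwner: function (ownerId) {
                return items.filter(function (item) { return item.ownerId === ownerId; });
            }
        };
    }

    var store = createNaiveStore();
    var job = { ownerId: null };
    store.add(job);
    job.ownerId = "owner-1"; // mutate the original object; no call to any update method
    store.findByOwner("owner-1").length; // 1 - the forgotten update goes unnoticed

Cloning on add (and on update), as in the repository above, breaks that aliasing and makes the fake behave like a real persistence mechanism would.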

JavaScript for testers workshop at LTGW2014

Yesterday, Jenny Mulholland (from Softwire) and I ran our workshop on JavaScript for Testers at the London Tester Gathering Workshops 2014, at Skills Matter.

We had a nice group of participants with a wide range of skills and knowledge, and we covered (roughly) multiple aspects of testing JavaScript apps. I had fun and we were busy during the workshop, which is a good sign. If you were in the workshop, please give us some feedback using the comments on this post, apart from the form that Skills Matter will send you; it will be very much appreciated and will help us improve.

These are some of the resources we used:

Big thank you to Jenny for her invaluable help and support before and during the workshop. And to Tony Bruce and Skills Matter for organising these great two days of networking and workshops. It's fantastic to visit London from time to time and meet so many fascinating people.

There is no magic in programming languages. The fact that they are formal languages makes them deterministic. For every behaviour there is a reason. For every error there is a logical cause.

Before googling for an odd error message, try to understand what the interpreter means. This is the fastest way to solve the problem.

I often see developers wondering why a certain test throws an error, reading the production code several times without really understanding what the error means. That is the wrong approach: one must know what the interpreter is saying before searching for the cause. It may be an error in the test itself!

The same error may be expressed in a different manner by different browsers and by other JavaScript interpreters like Node. This fact emphasises the need to understand what they are talking about.

In order to understand JavaScript error messages, one must understand the meaning of "undefined". Undefined is a special value.

    var x; // We are declaring a variable, whose value is undefined

In this case we say that the variable "x" is undefined; "undefined" is also the type of the variable:

    typeof(x); // evaluates to the string "undefined" (as a string!)

The value of a variable is undefined until it's assigned a concrete value:

    var x = 5;            // x is a Number
    var x = function(){}; // x is a Function
    var x = {};           // x is an Object
    var x = true;         // x is a Boolean
    var x = undefined;    // x is undefined

Now let's review errors containing the word "undefined" in the message:

TypeError: undefined is not a function (Google Chrome)

The interpreter says that something that is being used as a function is not actually a function; it's an undefined variable. Why does the interpreter assume it's a function? Because there are parentheses after the name. When there is an open parenthesis right after a name, the interpreter determines that the name is a function, according to the grammar of the language. It's exactly like the question mark at the end of a sentence in English: we assume the sentence must be a question because of the mark at the end. The writer of the code was probably expecting the variable to be a function at that point but, due to some defect, it's not. This is how to reproduce the error:

    var x; // x is undefined.
    x();   // and... undefined is not a function, it's just undefined!

TypeError: Cannot read property 'length' of undefined (Google Chrome)

The error says there is a property named "length". A property is a member of an object. The interpreter assumes that "length" must be a property of an object because there is a dot preceding it. Properties are accessed with a dot:

    var obj = {propA: 123, propB: 456}; // an object with 2 properties
    var val1 = obj.propA; // reading a property (123)
    var val2 = obj.propB; // reading a property (456)

The error message says that the variable on the left of the "length" property is undefined; that is, the "object" is not actually an object, it's the undefined value. This is the way to reproduce the error:

    var obj;    // obj is undefined
    obj.length; // trying to reference a property of an undefined value raises an error

If the property is accessed in order to set a value, the error message is slightly different:

    obj.length = 777; // TypeError: Cannot set property 'length' of undefined

TypeError: Cannot call method 'trigger' of undefined (Google Chrome)

This one is similar to the previous one. "Of undefined" again expresses belonging: it must be related to an object. This time it's a method, not a property. The interpreter assumes it must be a method because there are parentheses after "trigger". The error means that the value of the variable on the left-hand side of the dot is undefined:

    var obj;
    obj.trigger(); // trying to reference a method on an undefined value raises an error

Node.js produces slightly different messages for the same errors:

    var x; // variable x is undefined
    x();   // TypeError: Property 'x' of object #<Object> is not a function

In JavaScript everything happens within a certain environment. When a variable is defined outside of any function, its scope is global. In the case of the browser, the global environment is the "window" object. In the case of Node, it's the "global" object:

    var x = 5;
    window.x; // 5 in the browser, error in Node
    global.x; // 5 in Node, error in the browser

Node makes this fact explicit in the error messages.

TypeError: Property 'x' of object #<Object> is not a function (Node.js)

As there is no dot preceding the "x", we know it's talking about the "global" object. As we are using parentheses after "x", Node assumes it must be a function. Notice that it doesn't reveal the value of "x", which is undefined; it just says that "x" is not a function. So it would raise the same error if "x" was a number.
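A quick sketch of that last point (the exact wording of the message varies across interpreters and versions):

    var x = 5; // x is a Number, not a function
    x();       // TypeError: still complains that x is not a function, without revealing its value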

At the time of this writing, Firefox behaves similarly with regard to the value of the variable:

TypeError: x is not a function (Mozilla Firefox)

    var x;
    x(); // TypeError: x is not a function

ReferenceError: x is not defined

I find this message a bit unfortunate. I would prefer "x is not declared" or "x does not exist", because "not defined" seems to be the same as "undefined", but it's not. This error happens when the variable has not been declared but its value is being accessed:

    var x = 10 + someNameNotDeclaredEver; // ReferenceError


Programmer’s memory

By pairing with a wide variety of other programmers, I've noticed that a programmer's memory has an important impact on productivity. In order to be agile when searching for code blocks within a file, we must remember the structure of the code and be familiar with the code geography. We should have a mental map of the file that we can handle in our heads, so as to remember whether a certain block is located near the top, near the middle or at the bottom of the file. When the file is too big, or the developer doesn't make an effort to build the mental map, he starts using the IDE's search feature to navigate everywhere, even when the code block is present on the current screen. Searching for code blocks with the IDE all the time stops the flow and thus reduces productivity. The focus goes away. Well, this is what I've seen and experienced whilst pairing.

Different people have different memory capacities; the trick is to be aware of your capacity and manage it accordingly. If you can't handle files longer than 300 lines of code, then keep your files below that size. When you find yourself using Ctrl+F to search for a function name often, consider splitting the file into several ones. The same applies to packages, modules, namespaces... try to group artifacts so that you can handle a mental model that contains all you need to work on the task or feature at hand.

Coding requires a lot of concentration and all the attention we can put into improving our own process day after day. Every coding session is an opportunity to learn and improve. Especially when you pair and get immediate feedback from your pair.

I believe this has a relationship with developers' preferred font sizes. The bigger the monitors, the smaller people's fonts seem to get. Sometimes I need a magnifier to be able to read the code, and I always have to ask them to enlarge the font. Smaller fonts mean more lines on every screen, but most people work with horrible mountains of legacy code that wouldn't fit in even 5 screens anyway!

Use your memory and save your eyes!

Do you want to know something that works really, really well for me? I spend the first 10 to 15 minutes of the day reading the code I (or we) wrote the day before. I always find something to change. It's often a rename or a method extraction, something that I can apply quickly and safely, most of the time using the automatic refactoring tool at hand. I review both the production code and the tests, to see if the tests really document the behaviour of the system. The effort is insignificant and the benefit in the long term is huge. This is the low-hanging fruit, a nice way to reduce accidental complication. This exercise lets me learn about myself, my way of thinking and designing, providing me with immediate feedback to improve the code. It's like travelling to the past to prevent problems before they really happen in the future. It also helps me focus on the code. This exercise is way better when done in pairs - let your new pair give you feedback about what your previous pair and you did yesterday.
Apart from the source code, I also refactor the documentation. Documentation?! Yes, I do write documentation and comment my code. I will write another post on this. However, I never comment on what the code does but rather on the "why" and the "why not" - why the code is there, or why I discarded some other implementation.

Refactoring is somehow similar to dish-washing. When done often, it takes very little time and effort.
When you haven't done it in two days, you don't see the right moment to start; you look at the pile and feel lazy. You blame others for using so many cups... do they really have to drink so much tea?! - even when it was you yourself!
If you haven't done it in a week, you just consider buying new dishes and throwing away the entire pile.

Sometimes I decide to spend more than 15 minutes; I'd say that at least once a week I spend about two hours purely refactoring. I call this session "refactoring to infinity". I start out with the easy wins, but soon I search for duplication, coupling and other design problems. I ask myself whether the code is really modelling the business domain properly. And I keep on refactoring until I can't find anything more to change. It doesn't mean the code is perfect; it just means I don't know how to make it better given my current understanding of the domain and my skills. This is why I want to come back later: we tend to learn the specific details of the domain as we go.

Refactoring always pays off, except when you introduce unnecessary complexity. Be careful: do not apply the Open/Closed Principle until new requirements turn out to be easier to implement on code previously opened for extension. That is, consider YAGNI first.


Last week I participated in SocratesUK for the second time. I learned many things, met amazing people and had a lot of fun.

I flew from Tenerife to London the day before and met Sebastian Larsson in the pub, where we had a nice chat on professional careers, talking about the resources and techniques we use to improve our skills.

---------- On the way to Cotswolds -----------

The next day I went to meet Sandro, Gonçalo, Samir and Emanuele - part of the Socrates team. Sandro drove us to the venue. It was a nice journey, full of rock and fun.

I had the luck of sharing a room with Nicole Rauch this time, although we almost didn't see each other; there were so many things going on that nobody stayed in the room for long.

After dinner Codurance sponsored a selection of craft beer in the main room. In that room there were several flipcharts and that helped to host very interesting discussions. Steve Tooke explained the benefits of Specification by Example when we started talking about Cucumber.

--------------- Discussing with beers ---------------

 

This year I came up with the idea of booking a dedicated room for deliberate practice, where one could code and pair all day long. I presented the idea during the proposals the next morning and got the room. The idea was to work on code katas or pet projects together. During the day there were always people in the room, but the truth is that for many people it wasn't clear what to expect from the "workshop room". I had fun pairing with Stefano Cadario and Matt Glover, among others, and facilitating some exercises like the Word Wrap kata. Next time, rather than booking a room, I'll be at the bar with my laptop ready to code. I was unlucky this time and my laptop's screen got broken during the flight.

There were like six or seven tracks going on at the same time, some in the garden or the bar, and fortunately none of them was about Scrum or Kanban!

After dinner Pawel Duda and I started a pair programming session at the bar, and we ended up doing a kind of Mob Programming with Matt, Uku, Sebastian and Jan using tmate - a terminal sharing tool. We worked on an experiment, trying to solve a Sudoku from the outside in, stubbing out smaller functions. It was kind of a brain fuck, but it was good practice and fun. Working on experiments, trying techniques different from those you usually apply, is one way to improve your skills. The problem was quite hard and there were situations where Pawel and I couldn't see a mistake and then Matt or Uku spotted it right away. I believe somebody took pictures of us; please let me know if you find them.

-------------- Sessions in the garden -------------

 

The next day I ran a two-hour workshop on the fundamentals of TDD and Pair Programming, an introduction. We were just a handful of people, which made me realise that most Socrates participants were seasoned practitioners. Naseer Ashraf and Sebastian came along to help with their vision after years of practice. It was really good as we had complementary opinions that made the discussion enriching. It was very nice to see how Matt and Nic Infante realised the value of PP and TDD in terms of empathy and productivity; they found a sustainable pace.

-------------------------- Samir's session ----------------------

Ivan Moore showed us the "refactoring golf" exercise before lunch. I find it useful if we get rid of the "points" and focus just on keeping the tests green.

In the afternoon I attended Samir's session on refactoring. The exercise was very interesting - avoid primitives in the code and then add new features. The repository is here.

Later on, Pawel and I went to the tiny gym beside the reception desk to work out together. Pawel is an amazing guy. Being able to do some exercise raised my energy levels.

Mob programming on Saturday night - Look how much we cared about the football

After dinner a bunch of us went back to Samir's code, this time in front of the projector, running a Mob Programming session. A different person was at the keyboard every 7 minutes, with Pawel controlling the time rigorously. Interestingly enough, everyone at the keyboard tried to understand and learn the IntelliJ shortcuts and the Mac keyboard. Everyone was trying to learn and bring value in a very respectful atmosphere. I was sincerely amazed by the suggestions of my fellow craftsmen.

On Sunday morning I was late to join the group on their walk through the beautiful countryside, so I stayed in the hotel. In the bar, people were pairing again. I started a kata with Sebastian in Java. Then he had to set off, but Steve and I took over using Erlang. It was my first Erlang program and I believe Steve's too. I learned several Vim tricks thanks to Steve, and some Erlang, which is very cool. This is the code. Unfortunately we didn't commit often enough to show all the intermediate baby steps.

The way back to London was also very nice. Mashooq took Oliver and Raph to the airport and then gave Pedro and me a lift. We pretty much crossed London by car; it was my tourist tour, thanks to the infinite patience of Mashooq. It's funny that I should have met Pedro in England after he's been living in Barcelona for so many years. His Spanish is better than mine! I am glad he is part of Codurance.

I liked this year's edition more than the previous one, but I think that had to do with my attitude and the fact that this time I already knew more people and had different expectations. Half the participants were new anyway.

What I like the most about this conference is that I get to meet people from whom I can learn a lot, but at the same time there is no hierarchy; we all treat each other with respect. It's definitely important to run open spaces on software craftsmanship where it's all about developers, practice and learning.

For this reason some friends and I are thinking of organising and hosting Socrates Canaries next winter, when it's cold in Europe but warm in the Canary Islands. I'd like to hear from you: would you participate in Socrates Canaries? Please fill in the form and give us your opinion.

Thank you so much to the organisers and to Rachel Davies, our excellent facilitator. Thanks also to Farncombe's staff, who kindly provided me with a vegan option at every meal - it was delicious. If I can make it next year, I'll be there again!

I am starting to work on a new project for a new customer; I'll join the team for a couple of days or perhaps weeks. They have been spiking and eventually have decided they want to develop the front-end using Marionette.js, a library on top of Backbone.js, together with Require.js. I've done a spike to show them how I'd structure the files, the app and the different kinds of tests.

The app is dead simple. There is a form with two fields, "description" and "url". When the user clicks on the button below the form, a new link is added underneath. So it's just about adding links to a panel.

Versions used in this working spike:

  • Jasmine.js - v2.0
  • Require.js - v2.1.14
  • Marionette.js - v2.0.1

The repo is here:  https://bitbucket.org/carlosble/marionette-testing-sample/overview

Structure:

  • bower_components - dependencies
  • node_modules - dependencies
  • index.html - launches the app
  • uiTests.html - launches Jasmine tests against the DOM, using Require.js
  • actionsTests.html - launches Jasmine tests to test business logic
  • js - this is the production code
    - app.js
    - views.js
    - models.js
    - actions.js
  • spec - these are the tests
    - viewsSpec.js - interact with the DOM
    - actionsSpec.js - no interaction with the DOM, works on the business logic

With this structure the business logic doesn't have to depend on Marionette, although it does depend on Backbone models and collections.
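As an illustration, here is a minimal sketch of that separation (module names, paths and the business rule are made up, not necessarily what is in the repo): the logic lives in a plain Require.js module that only knows about Backbone models and collections, so its spec never touches the DOM.

    // js/actions.js - DOM-free business logic on top of Backbone
    define(['backbone'], function (Backbone) {
        var Link = Backbone.Model.extend({
            defaults: { description: '', url: '' }
        });
        var Links = Backbone.Collection.extend({ model: Link });

        return {
            createLinks: function () { return new Links(); },
            addLink: function (links, description, url) {
                if (!url) { return null; } // made-up rule, just for illustration
                return links.add(new Link({ description: description, url: url }));
            }
        };
    });

    // spec/actionsSpec.js - Jasmine 2 spec, no DOM involved
    define(['actions'], function (actions) {
        describe('adding links', function () {
            it('adds a link with description and url', function () {
                var links = actions.createLinks();
                actions.addLink(links, 'my blog', 'http://www.carlosble.com');
                expect(links.length).toBe(1);
            });
        });
    });

The views in views.js would then render whatever the collection holds, and only viewsSpec.js (run from uiTests.html) needs a DOM.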

Resources that help: