Archive for the ‘Test Driven Development’ Category

Among other qualities, good tests should be easy to read and quick to understand. When a test requires complex data structures to be sent to the SUT, or to be part of a stubbed answer, it takes longer to read. Moreover, those structures tend to evolve along with the production code, forcing many changes in the tests just to keep them up to date. A level of indirection between the test and the production code improves readability and eases maintenance. Builders come to the rescue. I often overload builder methods to support several data structures and then apply the conversions internally.

As an example, this is the setup of one of our tests before the final refactor:

  [Test] public async void
  raise_multiple_choices_event_on_equipment_selection () {
      SetupViewModelWithMockService();
      var selectedEquipment = new Equipment { Code = "PR" };
      var oneEquipment = new Equipment { Code = "PR1" };
      var otherEquipment = new Equipment { Code = "PR2" };
      var equipments = new List<Equipment> { selectedEquipment, oneEquipment, otherEquipment };
      var multipleChoices = new List<CompulsoryCombinationChoice> {
          new CompulsoryCombinationChoice(new List<CompulsoryCombinationItem> {
              new CompulsoryCombinationItem(oneEquipment.Code, CatalogItemTypes.Equipment)
          }),
          new CompulsoryCombinationChoice(new List<CompulsoryCombinationItem> {
              new CompulsoryCombinationItem(otherEquipment.Code, CatalogItemTypes.Equipment)
          })
      };
      ACatalog.MockingDependenciesOf(vm)
          .WithEquipments(equipments)
          .ResolvingCompulsoryCombinationsAs(multipleChoices)
          .Configure();
      /* act ... */
      /* assert ... */
  }

Imagine how ugly it was before the "ACatalog" builder existed. And this is the same test after the builder was overloaded to offer a more convenient API:

  [Test] public async void
  raise_multiple_choices_event_on_equipment_selection() {
      SetupViewModelWithMockService();
      var theEquipment = "PR";
      var equipment1 = "PR1";
      var equipment2 = "PR2";
      ACatalog.MockingDependenciesOf(vm)
          .WithEquipments(theEquipment, equipment1, equipment2)
          .ResolvingCompulsoryCombinationsAs(
              MultipleCompulsoryCombinationChoices(
                  equipment1, equipment2))
          .Configure();
      /* act ... */
      /* assert ... */
  }
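
The overloaded builder method is the piece that hides the conversion. As a rough sketch of the idea (the class name, return type and internals below are my assumptions, not the actual project code), the overload takes plain codes and builds the Equipment structures itself:

  // Hypothetical overload: accepts plain equipment codes and converts them
  // internally into the Equipment objects the original overload expects.
  public CatalogBuilder WithEquipments(params string[] codes)
  {
      var equipments = codes
          .Select(code => new Equipment { Code = code }) // requires System.Linq
          .ToList();
      return WithEquipments(equipments); // delegate to the existing overload
  }

This way the test only talks in terms of codes, and the conversion noise lives in a single place that changes together with the production data structures.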

As I get older my memory doesn't work as it did some years ago. If I want to remember and internalise the important stuff from all the things I study, I just have to blog about it, for two reasons:

  • being able to come back to my blog to reread
  • to fix it better in my mind - forcing my brain to explain what I learn produces a deeper learning experience

I was lucky to take part in a nightly mob programming session with Sandro Mancuso and other fellow craftsmen during Socrates Canaries, which was when I first saw his way of writing acceptance tests. Now, with Sandro's screencasts, we have great material to study and explore. We have to thank Sandro for exposing himself with these screencasts, especially because they are free and unedited. What happens with screencasts (I record screencasts myself) is that after a while, perhaps after a bunch of months, you change your way of coding; you find little tweaks along the long journey of continuous improvement that may make you feel a bit embarrassed when you review your old videos. Especially for this reason I appreciate the effort Sandro is putting into the community.

The main picks I take from the recorded kata are what I would call the "evolving acceptance test" and "deferred dependency injection". When I write a test, I don't want to change it later. In TDD the production code evolves with each test, but my tests should not change even when I refactor the code. This is also true for acceptance tests when it comes to the "act" and "assert" parts, but not necessarily the "arrange". I have learned that I can postpone the decision of where and how my dependencies are going to be wired. In Outside-in TDD #1, Sandro knows there is a Console object and spies on it, but the object is not injected into the Account object yet - by the way, the kata is easier to follow if Account is thought of as AccountService. He defers the decision on how to wire up dependencies. To make my intent more explicit, what I do now is get the service under test from a factory:

  AccountService service = Factory.AccountService();

Now all I have to do is change the factory method once I am done with the lower-level unit tests. I don't have to hold the whole object graph in my head at the time I write that very first test. Nice one.
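
A minimal sketch of what that factory might look like at this early stage (the wiring below is my assumption, not Sandro's code; it changes as the lower-level design settles):

  public static class Factory
  {
      public static AccountService AccountService()
      {
          // For the first acceptance test the service builds its own collaborators.
          // Once the unit tests force a Console or a StatementPrinter into existence,
          // this is the only place that has to change.
          return new AccountService();
      }
  }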

Now, when it comes to the unit tests, my outside-in style used to be very similar to Sandro's, but in the last couple of years it has changed slightly. I try to mock only the boundaries of the system, only those things I don't own. As an example, in Outside-in TDD #2, at about 2:49, he writes a test that spies on the StatementPrinter. That test turns out to look exactly like the production code that makes it pass; it is strongly coupled to the implementation. I would rather use a Console spy to test that properly formatted statements are sent to the console, thus hiding the intermediate collaborator from my test.
But I would do so probably using Sandro's technique of deferred dependency injection - I don't know yet whether I really want to inject the Console into the Account. So the production code would likely end up using the StatementPrinter, but my test wouldn't know about it. To shorten the feedback loop I often fake the implementation (yes, "return true"), which gives me an idea of the complexity of the problem at hand. Then I turn to red and triangulate to get to green with the minimum effort. Eventually I would end up with the StatementPrinter as I refactor.
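
To make that concrete, this is roughly the shape of the test I have in mind. It's a sketch only, with names of my own choosing rather than Sandro's actual code, and it assumes a mockable Console abstraction of our own (not System.Console):

  // Hypothetical boundary test: only the Console (something we don't own) is doubled.
  // The StatementPrinter remains an internal detail that the test never mentions.
  [Test] public void
  prints_a_formatted_statement_after_a_deposit() {
      var console = new Mock<Console>();
      var account = Factory.AccountServiceWith(console.Object); // deferred wiring via the factory
      account.Deposit(1000);
      account.PrintStatement();
      console.Verify(c => c.PrintLine("DATE | AMOUNT | BALANCE"));
  }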

Essentially I like my tests to know only what is strictly needed from the implementation so that I can refactor often without breaking many tests.

What about you? What things do you do differently?

We were part of CukeUp! 2015

I had the pleasure of speaking at CukeUp! this year (2015), the fifth edition of the Cucumber conference, which is more about BDD than about Cucumber itself, although there are always updates on the state of the tool (this year Matt released version 2.0 during his talk!).

Borja and I

It was a great conference: I learned a bunch of things, met friends and new people. My colleague Borja Navarro accepted the challenge of co-presenting with me and that made our session fun, at least for me; I had a lot of fun. We engaged the attendees with a couple of examples together with questions and answers on how to write user stories with acceptance criteria and scenarios. The attendees were very helpful and active. This is the recorded session, although the sound is quite bad.

Thank you Borja, you rock mate!

These are my conference picks:

  • A mature XP team doesn't need people dedicated exclusively to testing and QA. If every developer has testing skills and every pair is responsible for quality, then people just play both roles and there is no hard distinction between testers and developers. There will be people with stronger development skills and people with stronger testing skills, experts in each area. Well, this is my understanding of Rachel's and Aimy's keynote.
  • They don't have a staging environment; they release directly into production, performing some smoke testing live with some "testing" user accounts. I find this technique especially interesting when the new features can be deployed to just a few servers, so that only certain users get to exercise the new code whilst the others keep using the previous version for a while. Exploratory testing and other tests are performed in the development environment. The maturity level of the team has to be very high; it sounds like a nice goal to reach in my current project.
  • I participated in Paul Rayner's useful workshop, especially nice because I got the chance to share a table with Steve Tooke and Gáspár Nagy, experts on the subject. My pick is that any technique that could help with discovery during the analysis stage is welcome. One of the suggested techniques was Event Storming, which I am going to try as soon as I can. Since we are using DDD in our project, Event Storming sounds like a great technique to combine analysis and modelling.
  • From Seb's talk there are many ideas to take away; it's worth reading the slides. I had already listened to his talk at a previous event in London, and I want to read the recommended books. Take a look at this map on how to split user stories.
  • Konstantin Kudryashov, creator of Behat, gave an interesting talk on how to explore and exercise the domain model from Gherkin scenarios; the slides are here. My understanding was that he uses domain objects within the step definitions as if those steps were unit tests. The approach is a bit different from the one we are using, where the step definitions invoke actions on application services rather than objects from the inner hexagon (see the sketch after this list). Although we use application services, we usually stub out or spy on repositories, aiming for fast tests. Another technique he employs is to mark a few scenarios as critical, making them run end to end through the UI so as to make sure all the pieces integrate with each other. It's good to know the techniques that are working for other people and see how they differ from the ones we currently use.
  • The keynote by Dan North was full of insight; I highly recommend watching it. I didn't watch it live, as I prefer to chat with people whilst talks are being recorded - at that time I took the chance and enjoyed listening to Chris Matts in the bar. To me, the most interesting tip is the experiment Dan is running where every team member's board has to have Kaizen tasks, Discovery tasks and Feature tasks. Brilliant, isn't it?
  • The workshop by Chris Matts and Richard Warner was full of insightful ideas. I liked the exercise on "Left Shifting" a company's culture. They also explained their discovery process when it comes to feature analysis. The three keys to discovering new features are "Customer needs and wants", "Target group" and "Outcome". The workshop was recorded; it's worth watching that part of the session, which was more of a talk than a workshop. My understanding is that they are not using user story mapping at all. It was really nice to share a table with Nat Pryce during the workshop; he was very active and fun, and I am glad I finally got to know him in person.
  • Matt Wynne's workshop was one of my favourites. I really liked the password strength exercise to illustrate the difference between acceptance criteria (rules) and examples. See these slides for more details. The other pick is "Example Mapping", a technique where the colours end up making obvious what we know and what we don't know about the context, in order to write good user stories. I also realised that the examples used for exercises work better when they are fun, when we add some humour to them. Writing user stories for a boring domain is not as engaging as writing them for a fun domain with curious situations.
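
For reference, this is roughly the shape our step definitions take, driving an application service and backing it with an in-memory repository for speed. The class and method names below are illustrative, not our real code:

  [Binding]
  public class VehicleAvailabilitySteps
  {
      // The step talks to an application service, not to objects from the inner hexagon.
      readonly VehicleAvailabilityService service =
          new VehicleAvailabilityService(new InMemoryVehicleRepository()); // fast: no real database
      bool available;

      [When(@"the salesperson checks the availability of model ""(.*)""")]
      public void WhenTheSalespersonChecksAvailability(string model)
      {
          available = service.IsAvailable(model);
      }

      [Then(@"the model is reported as available")]
      public void ThenTheModelIsReportedAsAvailable()
      {
          available.Should().BeTrue(); // FluentAssertions
      }
  }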

At the end of the conference there was a panel with the experts, the creators of BDD. As a result Dan North came up with this initiative on explaining BDD by example. I want to contribute to those examples. By the way, Dan said he found John Ferguson's book very good, I have to check it out.

CukeUp brought me more value than I expected; I am glad I was part of it. Thank you, organisers, you did a great job!


.Net 4.5 came out with a really handy built-in asynchronous mechanism: async and await. However, the signature of a method that doesn't return a value looks a bit strange:

  public async Task SomeAction(){...}

It is possible to use async void, but it's not recommended unless we are talking about event handlers. It's also a bit strange that the type specified in the signature is not the one actually returned:

  public async Task<string> SomeQuery(){
      .....
      return someString;
  }

But everything else is great and very handy.

Stubbing method calls can be hard, though. You can get weird exceptions like System.AggregateException when running tests with NUnit. The problem comes up when awaiting a stubbed async method:

  // notice that async tests with NUnit require the method to be async Task rather than void
  [Test] public async Task
  filter_results_from_search(){
      var srv = new Mock<Service>(); // using Moq
      var sut = new Sut(srv.Object);
      await sut.SomeAction(); // Exception happens here
      SomeAssertionOverHere();
  }

  // Production code
  public class Service { // dependency
      public virtual async Task<string> SomeQuery(){ // virtual is mandatory in order to mock it
          return "someResult";
      }
  }
  public class Sut { // system under test
      Service srv;
      public Sut(Service srv){
          this.srv = srv;
      }
      public async Task SomeAction(){
          var result = await srv.SomeQuery(); // NullReferenceException in here
          ....
      }
  }

The problem is that Moq will make the stubbed method return null when invoked, because we are not specifying any return value. The default value for Task<string> is null. We should tell Moq to return a proper Task:

  [Test] public async Task
  filter_results_from_search(){
      var srv = new Mock<Service>();
      srv.Setup(s => s
          .SomeQuery())
          .Returns(Task.Factory.StartNew(
              () => "someQuery")
          );
      var sut = new Sut(srv.Object);
      await sut.SomeAction();
      SomeAssertionOverHere();
  }

The key is to return a new task: Task.Factory.StartNew(lambda)
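
As a side note, .NET 4.5 also provides Task.FromResult, which wraps an already-computed value in a completed task without scheduling any work on the thread pool. A small sketch of the equivalent setup:

  // Equivalent Moq setup using Task.FromResult instead of Task.Factory.StartNew
  srv.Setup(s => s.SomeQuery())
     .Returns(Task.FromResult("someResult"));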

This post is about two of the XP values: Simplicity and Feedback. They are fundamental and frequently forgotten.

When we write software or draw a user interface, we are shaping one specific solution out of the many a problem may have. By choosing one solution we automatically discard the others, because we never go back and write a different solution if the one we have already "works". That is why we have to reflect constantly on the complexity of the solution we have chosen. The best thing is to talk it over with colleagues and discuss it. Often, by explaining the idea, we discover that we can achieve the same thing in a simpler way.

The key to being able to simplify is to keep the business goal in mind (what fundamental problem we are solving). Not the long-term goal of the software, but the goal of the current release. We ship frequent releases in short cycles to get feedback, and each one aims to learn something specific about the business or the users. Each release offers some new feature or a change that helps them in their daily work, without trying to cover too much at once. I like the list of features and changes in a release to fit in my head, to be countable on the fingers of one hand without needing documents. And there should be one predominant goal.

To simplify we have to keep in mind the goal of the current release, which is not the same as that of the release we will ship in three months. Three months from now we will have changed a good part of the code, sometimes through refactoring and sometimes because we realise there are other needs. Life changes in three months, and in less.

It pays to give the solution more than one thought; we should "re-simplify" as much as possible. It takes little time and has a big pay-off in the medium and long term. "OK, it works now; let's give it some thought to see if we can simplify it, to see if we can delete half the code and have it still work."

The code has to end up easy to understand, and its simplicity is more important than the use of patterns. We don't have to apply the MVVM pattern in absolutely every part of the presentation layer; sometimes its complexity is not justified. By the way, remember that a ViewModel is a representation of the view and not a DTO; a ViewModel is a kind of "controller".

Solutions should be as open as possible, to give the user room for manoeuvre and to let us learn their preferred way of working. We have to identify the unnecessary constraints we are introducing, both through the user interface and at the code level, in order to avoid them. An open solution focuses on the problem to solve, on the "what", and allows changes of form easily, making it possible to pivot towards other things.

For example, imagine there are three fundamental actions the user can perform in the application. They don't need to perform them in any particular order; the business goal is that they can perform actions A, B and C whenever it suits them best. So let's not force them to do B first in order to execute action A. That would be introducing an unnecessary constraint.

Now an example of excessive complexity: imagine we want to show the salesperson the availability of a certain vehicle model, that is, whether it is in stock. The business goal is for them to know whether they can sell that model or not. Why would we count how many vehicles of that model are available? They don't care whether there are 24; they only need to know whether there is a unit available for sale or not. To notice this excess of complexity we have to remember the goal of the release. Remember the purpose of the current release.

The less software we write, the better. The line that never fails is the one that isn't written. Every line we write is a commitment that has to be maintained, until someone comes along and deletes it. When we are tight on time to meet the goal of the release (in the software world, that is always), we should cut scope (functionality) and never quality. We have to do everything possible to respect the planned delivery dates, achieving the goal by dropping the features that are less important right now.

Implementing BDD – part 1

Scenario review

This is the first of a series of posts on how our team is practicing BDD. These are mostly notes to myself and our team so other readers may not find enough context to understand the points.

After a few weeks we have decided that:

  • Although source code is written in English, the specifications are written in Spanish. Because the business is always discussed in Spanish, it's the language used to determine what we need to do. So the Cucumber scenarios are written in Spanish.
  • There is a glossary of business terms to help us document the ubiquitous language. The glossary contains the translations from Spanish to English so that the source code expresses the same concepts.
  • If developers need to change or adjust the scenarios for some reason (for instance, for automation purposes), then the business analysts have to validate the changes.
  • Scenarios will be stored in one place only, and that place is SpecFlow (Cucumber). This means that business analysts need to have all the tools installed and access to the source control repository (Visual Studio and TortoiseHG).
  • After the specifications workshop, scenarios are written in pairs or in groups of 3 or 4 people. Two people is usually enough. We've realised that writing the scenarios in pairs helps us come up with more questions and more ideas to simplify the problem.
  • The automation of the scenarios should also be done in pairs, because this is yet another opportunity for us to detect inconsistencies or doubts. Well, in general we just pair most of the time.
  • A user story is not ready for implementation until it's got at least 4 scenarios. We started with just one and the lack of context caused us to introduce wrong models in the code. Actually, when test-driving a small function we never start out with a single example but with a to-do list. Then we focus on each test, writing the minimum code required, but we do know where we want to go. This is the same, only at a higher level.
  • Given that our business analysts know how to read code, they should read the source code at the end of the iteration. By reading the part of the code inside the hexagon (we use hexagonal architecture), they can ensure the designed model makes sense.
  • Scenarios should not repeat irrelevant details. If one scenario uses real data like a "plate number", then the following scenarios don't need to repeat the whole plate number but just something like "x". We should keep in mind that scenarios complement each other in terms of documentation. They should be independent for execution but not for documentation.
  • Triangulation: we can make a scenario go green quickly. However, triangulating the code that really works for all the cases falling under that scenario may require more code. If the triangulation does not concern business details, we test-drive with NUnit rather than SpecFlow. Scenarios are used for documentation, not for triangulation, unless the triangulation covers relevant business details.

It was by chance that one of our business analysts came along to a mob programming session and got to see code related to the scenarios that we had defined together. He quickly realised that we were introducing aliens into the domain (wrong models). It was because we were lacking some other scenarios. The fact that the code was on the wall in a massive font size and really communicated our intent, together with the fact that the reader is very clever, made us discover a useful technique to double-check our assumptions.


Implementing a fake repository

Lately I am using fake objects more than the other common kinds of test doubles: stubs, spies and mocks. This is quite recent; I used to prefer stubs, spies or mocks. I find that fakes make my tests less fragile during outside-in development when the design changes. The problem is that there can be defects in the implementation of the fake objects themselves. A tricky one may appear in the case of in-memory repositories:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();
      public void Add(Job job)
      {
          jobs.Add(job);
      }
      public void Update(Job job)
      {
          // TODO
      }
      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

This implementation is misleading. If we are test-driving a service that is expected to update a Job instance, we may forget to invoke the repository's Update method and the test would still pass.

  [TestFixture]
  public class ServiceTest {
      [Test]
      public void AssignsOwnerToJob()
      {
          var service = Factory.CreateJobServiceWithFakeRepository();
          var job = service.CreateJob(irrelevantTitle);
          service.Assign(job, irrelevantOwnerId);
          service.JobsOwnedBy(irrelevantOwnerId).Should().HaveCount(1);
      }
  }

  public class Service {
      private JobRepository repo {get; set;}

      public Service(JobRepository repo){
          this.repo = repo;
      }
      public Job CreateJob(string title)
      {
          var job = new Job();
          repo.Add(job);
          /*...*/
          return job;
      }
      public void Assign(Job job, string ownerId){
          /*...*/
          job.Assign(ownerId);
          /* BUG: I am forgetting the call to "repo.Update(job)" right here */
      }
      public List<Job> JobsOwnedBy(string ownerId){
          return repo.FindBy(ownerId);
      }
  }

The problem is that the in-memory repository is handling the same object references all the time. When an object is passed in as an argument, what is actually passed is a copy of the reference. However, if the repository were talking to a database it would most likely return different instances (references). A better implementation of the repository would be this one:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();
      public void Add(Job job)
      {
          /* clone: the object added to the repo
             is different to the given one */
          jobs.Add(job.Clone());
      }
      public void Update(Job job)
      {
          /* find the given job,
             remove it from the collection and
             add a clone of the given one */
      }
      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

Conclusion: the behaviour of the fake should be the same as that of the original when it comes to handling object references.

I had a blast at SocratesUK 2014

Last week I participated in SocratesUK for the second time. I learned many things, met amazing people and had a lot of fun.

I flew from Tenerife to London the day before and met Sebastian Larsson in the pub, where we had a nice chat about professional careers, talking about the resources and techniques we use to improve our skills.

On the way to the Cotswolds

The next day I went to meet Sandro,  Gonçalo, Samir and Emanuele - part of the Socrates' team. Sandro drove us to the venue. It was a nice journey, full of rock and fun.

I had the luck of sharing a room with Nicole Rauch this time, although we almost didn't see each other; there was so much going on that we hardly stayed in the room at all.

After dinner Codurance sponsored a selection of craft beer in the main room. In that room there were several flipcharts and that helped to host very interesting discussions. Steve Tooke explained the benefits of Specification by Example when we started talking about Cucumber.

Discussing with beers


This year I came up with the idea of booking a dedicated room for deliberate practice, where one could code and pair all day long. I pitched the idea during the proposals the next morning and got the room. The idea was to work on code katas or pet projects together. During the day there were always people in the room, but the truth is that for many people it wasn't clear what to expect from the "workshop room". I had fun pairing with Stefano Cadario and Matt Glover among others, and facilitating some exercises like the Word Wrap kata. Next time, rather than booking a room, I'll be at the bar with my laptop ready to code. I was unlucky this time and my laptop's screen got broken during the flight.

There were like six or seven tracks going on at the same time, some in the garden or the bar and  fortunately none of them was about Scrum or Kanban!

After dinner Pawel Duda and I started a pair programming session at the bar, and we ended up doing a kind of mob programming with Matt, Uku, Sebastian and Jan using tmate - a terminal sharing tool. We worked on an experiment, trying to solve a Sudoku from the outside in, stubbing out the smaller functions. It was a bit of a brain fuck, but it was good practice and fun. Working on experiments, trying techniques different to those you usually apply, is one way to improve your skills. The problem was quite hard and there were situations where Pawel and I couldn't see a mistake and then Matt or Uku spotted it right away. I believe somebody took pictures of us; please let me know if you find them.

Sessions in the garden


The next day I ran a two-hour workshop on the fundamentals of TDD and Pair Programming, an introduction. We were just a handful of people, which made me realise that most Socrates participants are seasoned practitioners. Naseer Ashraf and Sebastian came along to help with their vision after years of practice. It was really good, as we had complementary opinions that made the discussion enriching. It was very nice to see how Matt and Nic Infante realised the value of pair programming and TDD in terms of empathy and productivity; they found a sustainable pace.

Samir's session

Ivan Moore showed us the "refactoring golf" exercise before lunch. I find it useful if we get rid of the "points" and focus just on keeping the tests green.

In the afternoon I attended Samir's session on refactoring. The exercise was very interesting - remove primitives from the code and then add new features. The repository is here.

Later on, Pawel and I went to the tiny gym beside the reception desk to work out together. Pawel is an amazing guy. Being able to do some exercise raised my energy levels.

Mob programming on Saturday night - look how much we cared about the football

After dinner a bunch of us went back to Samir's code, this time in front of the projector, running a mob programming session. A different person was at the keyboard every 7 minutes, with Pawel controlling the time rigorously. Interestingly enough, everyone at the keyboard made the effort to understand and learn the IntelliJ shortcuts and the Mac keyboard. Everyone was trying to learn and bring value in a very respectful atmosphere. I was sincerely amazed by the suggestions of my fellow craftsmen.

On Sunday morning I was too late to join the group on their walk through the beautiful countryside, so I stayed in the hotel. In the bar, again, people were pairing. I started a kata with Sebastian in Java. Then he had to set off, but Steve and I took over using Erlang. It was my first Erlang program and I believe Steve's too. I learned several Vim tricks thanks to Steve, and some Erlang, which is very cool. This is the code. Unfortunately we didn't commit often enough to show all the intermediate baby steps.

The way back to London was also very nice. Mashooq took Oliver and Raph to the airport and then gave Pedro and me a lift. We pretty much crossed London by car; it was my tourist tour, thanks to the infinite patience of Mashooq. It's funny that I should have met Pedro in England after he's been living in Barcelona for so many years. His Spanish is better than mine! I am glad he is part of Codurance.

I liked this year's edition more than the previous one, but I think that had to do with my attitude and the fact that this time I already knew more people and had different expectations. Half the participants were new anyway.

What I like the most about this conference is that I get to meet people from whom I can learn a lot, but at the same time there is no hierarchy; we all treat each other with respect. It's definitely important to run open spaces on software craftsmanship, where it's all about developers, practice and learning.

For this reason some friends and I are thinking of organising and hosting Socrates Canaries next winter, when it's cold in Europe but warm in the Canary Islands. I'd like to hear from you: would you participate in Socrates Canaries? Please fill in the form and give us your opinion.

Thank you so much to the organisers and to Rachel Davies, our excellent facilitator. Thanks also to Farncombe's staff, who kindly provided me with a vegan option at every meal - it was delicious. If I can make it next year, I'll be there again!

Learn TDD on the beach

I am glad to announce that iSQI and I are organising a TDD training course in Tenerife, open to the public. Rather than running the course in Potsdam (Berlin) we have decided to move it to the island this time, in the south, by the beach. There will be participants coming from the Netherlands, Austria and other European countries. And you, dear reader, are still in time to join us. In this edition there may be different profiles: people new to TDD and experienced test-driven developers who want to become trainers. Some people will learn TDD from scratch and others will learn how to teach it. At the end of the course there will be a certification exam.
It's the first time we have organised it in Tenerife and I am looking forward to it. Apart from the learning experience, it's a perfect occasion to visit the island, bring the family and enjoy the sun and the sea. I may even organise a walk along the shore or through the mountains. I hope this will be the first of several courses on this beautiful island, where the weather is perfect the whole year round.

The week afterwards, I'll run another TDD course, but this time in Spanish and in Gran Canaria, the neighbouring island. The registration form will be published soon. Thanks to SPEGC, the price will be lower, as they are funding part of the training. This training will not include a certification exam.

Two TDD courses, open to the public, in the Canary Islands this summer - this is good news! 😀



Watching Kent Beck test-drive

I've been watching and studying Kent Beck's video series on TDD this week. These screencasts are really good in my opinion; I am glad Kent made them. I would like to see more of them.

According to Amazon, I bought Kent's book on TDD in January 2008. At that time I was working on a kind of experimental project at the University of La Laguna, so I had plenty of time to practice TDD all day long. The software had to work but there was no deadline; it was a kind of grant, perfect for me to learn and practice. And in the end we delivered clean code that worked. Although today, for sure, I wouldn't like that code.

With Kent's book and the help of the TDD mailing list (Yahoo), I embraced the technique early on and have stuck with it since then. So my style is quite "classic", although I believe I use more test doubles than Kent (it looks like he doesn't use doubles at all, but I'd like to see that). However, as I've gained experience and learned from others and from my own mistakes, my style has been evolving over time. Moreover, teaching TDD to lots of people over these years has had a huge positive impact on the way I test-drive my code.

But now, watching Kent code has been a bit surprising to me. Notes to myself about the episodes:

Episode 1:

Kent writes the first test from the outside in, but makes it pass bottom-up, using a single test as a means of constantly validating that the code he is writing works. He moves code from the test to production and back to the test, always moving in safe steps. My surprise is that all the triangulation is performed with a single test, an evolving end-to-end test. I think I would probably have written several tests, leaving the first one red for a while (he does that in an upcoming episode). From this very first episode I can see that Kent uses the tests to maximise feedback, focusing on getting things done. Very pragmatic; I like that.

Episode 2:

Kent uses the "fake it until you make it" trick, returning a hardcoded zero value, but then, rather than writing another test to triangulate, he uses the same one to make the production code more generic. Again, very pragmatic. I only use "fake it until you make it" when I don't have a clear vision of how to make the test pass. That keeps my flow and gives me some inspiration. So in this case I would just write something more than "return 0".
The surprise in this episode is that Kent prefers to leave some duplication in the code in favour of symmetry. Duplication makes me feel uncomfortable, so I would rather remove it. Now I value symmetry a bit more; I get his point.

Episode 3:

Kent removes several tests. In order to split a problem into smaller pieces, he leaves a failing test and moves on to test-drive the small pieces (methods) that are needed. When the big method works and the big test passes, he realises the small methods should be private and changes their visibility. Then the small tests don't even compile and Kent just deletes them. He calls these "scaffolding tests": needed to flow in safe small steps, but not necessary in the end.
To me this is a bit surprising; it's definitely not my style, but I see the benefit - flow and fast feedback again. I rarely delete tests, only when they are proven redundant (and I lose the $10, as Kent says). When I find myself testing a method that should be private, I question my design. It usually tells me that the method probably belongs in a different object, where it would be public. If writing a specification (which is what tests really are, from my point of view) turns out to be difficult, that gives me a hint about my design. However, it's true that sometimes premature design decisions may get you stuck and make you lose the flow.

Episode 4:

I really like the idea of test-driving the same problem from scratch several times to find out which order is best. Choosing the tests appropriately is very important for TDD to be productive. To succeed with triangulation, the order in which we choose the tests is fundamental. This exercise is excellent; I will practice it more often with code katas.

Some conclusions:

I can see that Kent uses the tests to progress at a steady pace in safe small steps, maximising fast feedback loops. I see that the primary focus of his tests is not on documenting the behaviour of the system; Kent just wants to get code working soon. The names of the tests are readable and tell a story, but I like them to be more expressive. Sometimes Kent reflects only the input in the name, whilst I prefer to describe the behaviour. As an example, I would type "it_is_not_possible_to_remove_a_null" whilst he typed "removeNull". I continuously refactor my tests to improve their names, making sure I could copy those names (together with the class name) into a text document and deliver it as human-readable documentation.

I tend to use tests more as specifications than as tests, so I probably spend more time thinking about the tests and ordering them for the sake of an effective flow afterwards. The truth is that sometimes it's hard to approach the problem from the outside in, and the pragmatism demonstrated by Kent helps with that. I'll take advantage of these lessons to keep on evolving my style.


Thank you Kent Beck for your book and your videos!

And thanks to my friend @pasku1 because I believe he told me about these videos.