DTO vs Domain Models

A Data Transfer Object (DTO) is an object intended to carry data, for example between the client and the server or between the UI and the domain layer. It exposes its fields or properties (getters and setters) publicly. Sometimes a DTO can be seen as an anemic model. DTOs are mostly used outside the hexagon, in the delivery mechanism.

On the other hand, domain models should promote good object-oriented design with proper encapsulation. They belong in the hexagon.

However, it's not always clear when to use a DTO versus a domain model, or where and how to convert one into the other. None of the approaches we have tried works in every case; there are always exceptions. But some principles are helping me:

Principle: Do not expose domain models out of the hexagon.
Example: Binding an object to the UI.
Reason: Changes in the domain will impact the delivery mechanism in unexpected ways. When the GUI is defined in XAML, the data binding is declared in that XML. Changing the visibility of a getter from public to private will still compile, but as soon as the user enters data in a bound field it will throw a runtime exception that we can't reproduce with automated tests; it's hard to discover.

Principle: Consider emerging DTOs as early versions of models
Usually objects start with one or two fields and no logic in them. We consider them DTOs at that point, but they are still part of our domain model, and so they are used by services and domain models. We may even have a repository to persist that particular DTO when it looks like an aggregate.

Later on, as we learn about the domain and the objects grow, the DTO might turn into a domain model. Then it makes sense to have two separate objects. The DTO is typically generated by the model:

  var dto = domainModel.ToDto(); // model to dto
  var model = DomainModel.From(dto); // model from dto

Now we make sure the domain model is not accessible to the GUI, and the DTO is no longer present in the repository's method signatures. The repository may obtain the DTO from the domain model internally.
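As a rough illustration, here is a minimal sketch of that conversion, assuming a hypothetical Invoice model (all the names are made up for the example):

  public class InvoiceDto
  {
      public string CustomerName { get; set; }
      public decimal Total { get; set; }
  }

  public class Invoice
  {
      readonly string customerName;
      readonly decimal total;

      Invoice(string customerName, decimal total)
      {
          this.customerName = customerName;
          this.total = total;
      }

      // The model produces its own DTO, so the mapping stays inside the hexagon
      public InvoiceDto ToDto()
      {
          return new InvoiceDto { CustomerName = customerName, Total = total };
      }

      // Only the model knows how to rebuild itself from a DTO
      public static Invoice From(InvoiceDto dto)
      {
          return new Invoice(dto.CustomerName, dto.Total);
      }
  }

This way the GUI binds to InvoiceDto while Invoice keeps its encapsulation.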

We don't know the right answer in some cases, but it's important to remember what didn't work for us. This post is a reminder to myself, like many others.

We were part of CukeUp! 2015

I had the pleasure of speaking at CukeUp! this year (2015), the fifth edition of the Cucumber conference, which is more about BDD than about Cucumber itself, although there are always updates on the state of the tool (this year Matt released version 2.0 during his talk!).

Borja and I

It was a great conference: I learned a bunch of things and met friends and new people. My colleague Borja Navarro accepted the challenge of co-presenting with me, and that made our session fun, at least for me; I had a lot of fun. We engaged the attendees with a couple of examples together with questions and answers on how to write user stories with acceptance criteria and scenarios. The attendees were very helpful and active. This is the recorded session, although the sound is quite bad.

Thank you Borja, you rock mate!

These are my conference picks:

  • A mature XP team doesn't need people exclusively for testing and QA purposes. If every developer has testing skills and every pair is responsible for quality, then people just play the two roles and there is no distinction between testers and developers. There will be people with stronger development skills and people with stronger testing skills, experts in each area. Well, this is my understanding from Rachel's and Aimy's keynote.
  • They don't have a staging environment; they release directly into production, performing some smoke testing live with some "testing" user accounts. I find this technique especially interesting when the new features can be deployed to just a few servers, so that only certain users get to exercise the new code whilst the others keep using the previous version for a while. Exploratory testing and other tests are performed on the development environment. The maturity level of the team has to be very high; it sounds like a nice goal to reach in my current project.
  • I participated in Paul Rayner's useful workshop, especially nice because I got the chance to share a table with Steve Tooke and Gáspár Nagy, experts on the subject. My pick is that any technique that can help with discovery during the analysis stage is welcome. One of the suggested techniques was Event Storming, which I am going to try as soon as I can. Since we are using DDD in our project, Event Storming sounds like a great technique to combine analysis and modeling.
  • Seb's talk had many ideas to take away; it's worth reading the slides. I had already listened to this talk at a previous event in London, and I want to read the recommended books. Take a look at this map on how to split user stories.
  • Konstantin Kudryashov, creator of Behat, gave an interesting talk on how to explore and exercise the domain model from Gherkin scenarios; the slides are here. My understanding was that he uses domain objects within the step definitions as if these steps were unit tests. The approach is a bit different from the one we are using, where the step definitions invoke actions on application services rather than objects from the inner hexagon (see the sketch after this list). Although we use application services, we usually stub out or spy on repositories, aiming for fast tests. Another technique he employs is to mark a few scenarios as critical, making them run end-to-end through the UI so as to make sure all the pieces integrate with each other. It's good to know the techniques that are working for other people and see how they differ from the ones we currently use.
  • The keynote by Dan North was full of insight; I highly recommend watching it. I didn't watch it live, as I prefer to chat with people whilst talks are being recorded - at that time I took the chance and enjoyed listening to Chris Matts in the bar. To me, the most interesting tip is the experiment Dan is running where every team member's board has to have Kaizen tasks, Discovery tasks and Feature tasks. Brilliant, isn't it?
  • The workshop by Chris Matts and Richard Warner was full of insightful ideas. I liked the exercise on "Left Shifting" a company's culture. They also explained their discovery process when it comes to feature analysis. The three keys to discovering new features are "Customer needs and wants", "Target group" and "Outcome". The workshop was recorded; it's worth watching the part of the session that was more of a talk than a workshop. My understanding is that they are not using user story mapping at all. It was really nice to share a table with Nat Pryce during the workshop; he was very active and fun, and I am glad I finally got to know him in person.
  • Matt Wynne's workshop was one of my favorites. I really like the password strength exercise to illustrate the difference between acceptance criteria (rules) and examples. See these slides for more details. The other pick is "Example Mapping", a technique where the colors end up making obvious what we know and don't know about the context in order to write good user stories. I also realised that the examples used for exercises may work better when they are fun, when we add some humor to them. Writing user stories for a boring domain is not as engaging as writing them for a fun domain with curious situations.
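As a side note, here is a minimal sketch of the style mentioned in the Behat point above, translated to our stack (SpecFlow/C#); all the names are hypothetical. Step definitions call an application service, with the repository replaced by an in-memory one for speed:

  [Binding]
  public class InvoiceSteps
  {
      // The application service is wired with an in-memory repository for fast tests
      readonly InvoicingService service = new InvoicingService(new InMemoryInvoiceRepository());
      Invoice invoice;

      [When(@"I issue an invoice for ""(.*)""")]
      public void WhenIIssueAnInvoiceFor(string customer)
      {
          invoice = service.Issue(customer);
      }

      [Then(@"the invoice gets a sequential number")]
      public void ThenTheInvoiceGetsASequentialNumber()
      {
          Assert.That(invoice.Number, Is.GreaterThan(0));
      }
  }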

At the end of the conference there was a panel with the experts, the creators of BDD. As a result, Dan North came up with this initiative on explaining BDD by example. I want to contribute to those examples. By the way, Dan said he found John Ferguson's book very good; I have to check it out.

CukeUp brought me more value than I expected; I am glad I was part of it. Thank you, organisers, you did a great job!

 

.NET 4.5 came out with a really handy built-in asynchronous mechanism: async and await. However, the signature of a void method is a bit strange:

  public async Task SomeAction() { ... }

It is possible to use async void, but it's not recommended unless we are talking about event handlers. It's also a bit strange that the type specified in the signature is not the one returned:

  public async Task<string> SomeQuery() {
      // ...
      return someString;
  }
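For completeness, the accepted exception mentioned above, an event handler, could look like the sketch below (hypothetical names; note that exceptions escaping an async void method can't be observed by any caller, so they must be handled inside):

  public async void SaveButton_Click(object sender, EventArgs e)
  {
      // SaveChangesAsync is a hypothetical method; wrap it in try/catch in real code,
      // because nothing upstream can await this handler to catch its exceptions
      await SaveChangesAsync();
  }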

But everything else is great and very handy.

Stubbing method calls can be hard, though. You can get weird exceptions like System.AggregateException when running tests with NUnit. The problem comes up when awaiting a stubbed async method:

  // notice that async tests with NUnit require the method to be async Task rather than void
  [Test]
  public async Task filter_results_from_search()
  {
      var srv = new Mock<Service>(); // using Moq
      var sut = new Sut(srv.Object);

      await sut.SomeAction(); // the exception happens here

      SomeAssertionOverHere();
  }

  // Production code
  public class Service { // dependency
      // virtual is mandatory in order to mock it
      public virtual async Task<string> SomeQuery() {
          return "someResult";
      }
  }

  public class Sut { // system under test
      readonly Service srv;

      public Sut(Service srv) {
          this.srv = srv;
      }

      public async Task SomeAction() {
          var result = await srv.SomeQuery(); // NullReferenceException in here
          // ...
      }
  }

The problem is that Moq makes the stubbed method return null when invoked, because we are not specifying any return value; the default value for Task<string> is null, and awaiting a null task throws. We should tell Moq to return a proper Task:

  [Test]
  public async Task filter_results_from_search()
  {
      var srv = new Mock<Service>();
      srv.Setup(s => s.SomeQuery())
         .Returns(Task.Factory.StartNew(() => "someResult"));
      var sut = new Sut(srv.Object);

      await sut.SomeAction();

      SomeAssertionOverHere();
  }

The key is to return a new task: Task.Factory.StartNew(lambda)
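For what it's worth, .NET 4.5 also ships Task.FromResult, which wraps a value in an already-completed task without scheduling anything on the thread pool, and recent versions of Moq offer a ReturnsAsync shortcut. Both should be equivalent here, but check them against your versions:

  srv.Setup(s => s.SomeQuery())
     .Returns(Task.FromResult("someResult"));

  // or, with a recent Moq:
  srv.Setup(s => s.SomeQuery())
     .ReturnsAsync("someResult");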
 

This post is about two of the XP values: Simplicity and Feedback. Fundamental, and usually forgotten.

When we write software or draw a user interface, we are shaping one concrete solution out of the many a problem may have. By choosing one solution we automatically discard the others, because we never go back and write a different solution once the one we have already "works". That is why we must constantly reflect on the complexity of the solution we have chosen. The best thing is to talk it over with colleagues, to discuss it. Often, just by explaining the idea, we discover we can achieve the same thing in a simpler way.

The key to simplifying is to keep in mind the business goal (the fundamental problem we are solving). Not the long-term goal of the software, but the goal of the current release. We ship frequent releases in short cycles to get feedback, and each release aims to teach us something concrete about the business or the users. Each release offers some new feature or a change that helps them in their day-to-day work, without trying to cover too much at once. I like the list of features and changes of a release to fit in my head, to be countable on the fingers of one hand without needing documents. And there should be one predominant goal.

To simplify, we must keep in mind the goal of the current release, which is not the same as that of the release we will ship three months from now. Three months from now we will have changed a good part of the code, sometimes through refactoring and sometimes because we realize there are other needs. Life changes in three months, and in less.

It pays to think the solution over more than once; we must "re-simplify" as much as possible. It takes little time and has a huge payoff in the medium and long term. "OK, it works; now let's give it some thought and see if we can simplify it, see if we can delete half of the code and have it still work."

The code must end up easy to understand, and its simplicity is more important than the use of patterns. We don't have to apply the MVVM pattern in absolutely every part of the presentation layer; sometimes its complexity is not justified. By the way, remember that a ViewModel is a representation of the view and not a DTO; a ViewModel is a kind of "controller".

Solutions should be as open as possible, to give the user room to maneuver and to let us learn their preferred way of working. We must identify unnecessary constraints we may be introducing, both in the user interface and at the code level, so we can avoid them. An open solution focuses on the problem to be solved, on the "what", and easily allows changes of form, the ability to pivot toward other things.

For example, imagine there are three fundamental actions the user can perform in the application. They don't need to be performed in any particular order; the business goal is that the user can perform actions A, B and C whenever it suits them best. So let's not force the user to do B first in order to run A. That would be introducing an unnecessary constraint.

Now an example of excessive complexity: imagine we want to show the salesperson the availability of a certain vehicle model, that is, whether it's in stock. The business goal is for them to know whether they can sell that model or not. Why would we count how many vehicles of that model are available? They don't care whether there are 24; they only need to know whether there is a unit available for sale or not. To notice this excess of complexity we have to remember the goal of the release. Remember the purpose of the current release.
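To make it concrete, here is a minimal sketch (hypothetical names) of the open, simple version: expose the yes/no question the business asked for and nothing more.

  using System.Collections.Generic;
  using System.Linq;

  public class Stock
  {
      readonly List<Vehicle> vehicles;

      public Stock(List<Vehicle> vehicles)
      {
          this.vehicles = vehicles;
      }

      // The release goal: can the salesperson sell this model right now?
      public bool IsAvailable(string model)
      {
          return vehicles.Any(vehicle => vehicle.IsModel(model));
      }

      // Excessive for this release: nobody asked for the exact figure
      // public int CountAvailable(string model) { ... }
  }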

The less software we write, the better. The line that never fails is the one that isn't written. Every line we write is a commitment we have to maintain, until someone comes along and deletes it. When we are tight on time to meet the release goal (in the software world, this is always), we must cut scope (functionality) and never quality. We must do everything possible to respect the planned delivery dates, reaching the goal by dropping the features that matter least right now.

DDD: Entity or Value Object?

Depending on the context, a certain concept may be modeled as an entity or as a value; it could be both in different bounded contexts. To me, good heuristics for choosing between entities and values are life cycle and mutability. Values are immutable; they represent constants. A color is a good example (thanks Alfredo): if a color "changes" from red to blue, it's not the red color anymore but a different value. The cost of returning a new instance when the color changes is cheap:

  Color yellow = red.mixWith(green);

When returning a whole new object costs many lines of code and memory, that could be a sign that the concept is better modeled as an entity. If we care about changes, that could be another sign. Recently we had to model a draft invoice. From the point of view of the business, a draft doesn't require a unique number; only the final invoice has to have a unique sequential number (identity). So I thought we could model the draft as a value. After all, if two drafts consist of the same customer and the same lines, the business could not tell the difference between the two. We chose a value object, but then every time a new line went into the draft we had to create a new draft, copying all the existing lines. Too much code for such a simple operation. Finally we changed the design, modeling the draft as an entity.
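For illustration, a minimal sketch (hypothetical names) of the value-object version we abandoned; being immutable, every new line forces a full copy of the draft:

  using System.Collections.Generic;

  public sealed class DraftInvoice
  {
      readonly Customer customer;
      readonly IReadOnlyList<InvoiceLine> lines;

      public DraftInvoice(Customer customer, IReadOnlyList<InvoiceLine> lines)
      {
          this.customer = customer;
          this.lines = lines;
      }

      // Immutability means cloning the whole draft, lines included
      public DraftInvoice AddLine(InvoiceLine line)
      {
          var copy = new List<InvoiceLine>(lines) { line };
          return new DraftInvoice(customer, copy);
      }
  }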

Questioning the design made us speak to the domain experts: "How would you search for a certain draft? If two drafts contain exactly the same data, how do you differentiate them?". And they decided the application should allow just a single draft per customer. That simplified the application, even though we eventually used an entity.

"As the object changes, we may be interested in tracking why, when and by whom changes were made" - this is a good heuristic for thinking of an entity.

"When the entity's identity generation is delayed until it's persisted, we might have to implement the equals and hashCode methods as if it were a value object to avoid bugs." This kind of workaround makes the distinction fuzzy.
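Translated to C#, that workaround could look like this sketch (hypothetical names): equality falls back to a natural attribute until the surrogate identity shows up.

  public class Customer
  {
      public long? Id { get; private set; } // assigned by the persistence layer
      readonly string taxNumber;

      public Customer(string taxNumber)
      {
          this.taxNumber = taxNumber;
      }

      public override bool Equals(object obj)
      {
          var other = obj as Customer;
          if (other == null) return false;
          if (Id.HasValue && other.Id.HasValue) return Id == other.Id;
          return taxNumber == other.taxNumber; // value-object style, while identity is missing
      }

      public override int GetHashCode()
      {
          return taxNumber.GetHashCode(); // stable across the identity assignment
      }
  }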

I currently think that choosing between an entity and a value boils down to good object-oriented, simple design. Value objects are older than DDD; I believe the term was coined by Ward Cunningham. Even if the object is an entity, I try to avoid getters and setters as much as I can. Rather than reading information from object properties, objects may communicate with each other by sending messages.

Things to remember from the two chapters:

  • I like the trick of Layer Supertype on page 187 to hide a surrogate identity from the entity's consumers.
  • I find the idea of “role interfaces” on page 203 quite handy to expose only certain behavior to consumers:
    class Customer : AddOrder, MakePreferred {
    }

    var customer = Repository.Find<AddOrder>(customerId);
    customer.AddOrder(order);
  • Advantages of values: “Value types that measure, quantify or describe things are easier to create, test, use, optimize and maintain”.
  • Limit a value to depend on and understand its own type and the types of its attributes.
  • Pass only values as parameters to value methods.
  • I like how he uses the Java enum to implement the State pattern on page 235 (he calls it Standard Type). Java enums are really powerful.

DDD Paradoxes

I read Eric Evans' book on DDD years ago and fell asleep with it. I didn't really take much value from it. Last year I read Vaughn Vernon's book on IDDD and found it more interesting because of the code examples, but still too abstract. Too much prose, too thick. One problem with both books is that their written English style is quite sophisticated for a non-native English speaker like me. But I believe there is a lot of value in DDD, so I've read it several times, on Kindle and later on paper. My experience with DDD is still a bit frustrating. I find many contradictions in Vernon's book, but there is a lot of valuable advice in it that I want to benefit from. I should be able to get more value from the book.

This is the first of a series of posts to myself with the idea of studying the subject better, trying to summarize the heuristics I think will be most useful for me, and hoping for a return on the time invested. I want to pin down the ideas that I find good.

The following statements are not necessarily in the book literally; this is just what I have in mind, and it could well be my own interpretation.

Throughout the book we are advised not to model the domain based on infrastructure concerns, tools or frameworks. However, there are sentences where I think it reads: depending on the persistence mechanism, you might choose to model a concept as an entity or as a value object. Aggregates are especially contradictory. “Aggregate is synonymous with transactional consistency boundary”. Well, transactionality is purely infrastructural, so what's the point of aggregates? I believe the chapter on aggregates has very little to do with modeling, but it can be really useful if I can extract a list of architecture/infrastructure problems and solutions, recipes. I just need to extract that list from the chapter; I will do so in an upcoming post.

The distinction between an Application Service and a Domain Service is hard for me to see. I understand that an Application Service should not contain domain logic; however, I am not sure anymore what “domain logic” is after reading some examples of Application Services. I have to dig further into it.

Choosing between an Entity and a Value Object is still a hard decision. Apparently Value Objects should be used more often than they are; developers tend to abuse Entities and end up with anemic models. However, my impression is that entities are used far more than values throughout the book. After reading the chapter on Value Objects for the first time, I thought I got it: if two objects could be considered equivalent whenever they had the same field values, they must be value objects. After working with experienced DDD developers on my team and reading the book again, I see I was wrong. I'll focus on the distinction between entities and values in upcoming posts.

This post is going to be updated as I encounter more paradoxes (from my current understanding).

2014: A Great Year

The last hours of December 31st, 2014. For millions of people it's a day of celebration. Roughly 75% of the world's population changes year every first of January following the Gregorian calendar, although only a few of us can throw a party. For many it's the last day of their lives; for others it's the first. According to the UN, more than 30 million people are displaced by armed conflicts, and many of them are in the middle of war, fighting for survival. Many will die of hunger and others of disease. Billions of animals suffer the holocaust of industrial farms and slaughterhouses to satisfy a minority of the planet's humans. And according to various organizations, the planet is at the worst moment in its history.

Without ignoring everything that is happening, there is still room for optimism, hope and gratitude. My wish is to be more optimistic and happier every day, without the need for amnesia, ignorance or indifference. Doing everything in my power for others. Consuming responsibly.

The roadmap consists of living the present moment attentively, grateful for the many good things that happen to me.

2014 has been the best year of my professional career since I started working independently. Everything that could go well has gone well. I've been lucky enough to visit many companies and work with great people. I've had so much work this year that I am not going to list all the places I've been and the people I've worked with, because I would surely leave someone out. I loved watching colleagues like Luis Fraile and Luis Ruiz Pavón at work, and it was a pleasure to discover Carlos Bastos.

At the beginning of 2013 I thought I should focus my efforts on the foreign market, above all the UK and Germany, but this year the national market has given me much more than I expected and I've rarely left the country. Practically every week I received a request for training or consultancy, and I've had several custom product development assignments that I had a blast with. I managed to program as much as I wanted. I was also able to start recording the programming screencasts I wanted to make (and I will make more).

But the most important thing is the alliances that have been forged along the way. Without looking for it, we are closing the year working as teams instead of as a lone professional. I am delighted to close 2014 working with my colleagues Luis Rovirosa, Juan M. Gómez, Imobach Martín, Nestor Bethencourt, Modesto San Juan, Alfredo Casado and Fran Reyes. And with all my colleagues at AIDA (more on that below).

Luis Rovirosa is helping me with consultancy and training at several clients; he is an ally I had wanted to collaborate with for a long time. Juan, Imo and Nestor are developing with me the template editor of the popular MDirector, Antevenio's email marketing tool, a lovely project. Alfredo, Modesto and Fran are with me and the rest of the AIDA colleagues working on an exciting project for Grupo Domingo Alonso. And there are more projects on the way. That's why I am in the process of incorporating an SL (a Spanish limited company), which will be my second one. I am already changing the front page of this website (carlosble.com), and soon there will be more photos and info about everyone.

With such a group of professionals we can handle all the demand that reaches me, and I can delegate with peace of mind. Being able to delegate without worries is priceless.

Closing the year working for Grupo Domingo Alonso is a stroke of luck. It's one of the largest companies in the Canary Islands, although I didn't know it because cars don't catch my attention. A consultancy with them came up, partly through our friend and teacher Jose Juan Hernandez from the ULPG, and I got to know them. I was delighted with the human quality of the team, their values, their energy and the project itself. I had no intention of proposing a collaboration to carry out the project, but a few days after I came back home I realized I wanted to work there, and I sent them a proposal. It's a super challenge: the construction of a hugely complex Dealer Management System. The challenge is so big that I needed to call in reinforcements and, to my surprise, Alfredo, Modesto and Fran left everything behind (well, not their partners!) and moved with me to Gran Canaria to get into the project. We are two months into the project, and I like everything I see more and more. The human capital of AIDA (Aplicaciones Informáticas Domingo Alonso) is incredible. At the Christmas party our colleague Alejandro Artiles put together this funny "Star Wars"-style video explaining what we are living through in these early stages of the project:

As if that weren't enough, in June this year my father was born again. He was admitted to the emergency room, and the doctors said he wouldn't survive. A fulminant colon cancer. Yet he has overcome it completely in a very short time. My father tells the story on his blog. It's been the best news of the year, the best thing that has happened.

This post is, above all, to say a thousand thanks to all of you who are making this great moment possible. Thanks to everyone who counted on me this year. It's been a pleasure, and I am sure there is much more ahead.

Thanks to Dácil for always being there, for being my engine. Thanks to my mother for taking care of all our shelter animals when I can't because of my work. This year wouldn't have been the same without my mother; I don't know how we would have managed.

Wow, it's already 2015! Happy New Year!

Implementing BDD – part 1

Scenario review

This is the first of a series of posts on how our team is practicing BDD. These are mostly notes to myself and our team, so other readers may not find enough context to understand the points.

After a few weeks we have decided that:

  • Although the source code is written in English, the specifications are written in Spanish. Because the business is always discussed in Spanish, that is the language used to determine what we need to do, so the Cucumber scenarios are written in Spanish.
  • There is a glossary of business terms to help us document the ubiquitous language. The glossary contains the translations from Spanish to English so that the source code expresses the same concepts.
  • If developers need to change or adjust the scenarios for some reason (for instance, automation purposes), then business analysts have to validate the changes.
  • Scenarios will be stored in one place only, and that place is SpecFlow (Cucumber). This means that business analysts need to have all the tools installed and access to the source control repository (Visual Studio and TortoiseHG).
  • After the specifications workshop, scenarios are written in pairs or in groups of 3 or 4 people. Two people is usually enough. We've realized that writing the scenarios in pairs helps us come up with more questions and more ideas to simplify the problem.
  • The automation of the scenarios should also be done in pairs, because this is yet another opportunity for us to detect inconsistencies or doubts. Well, in general we just pair most of the time.
  • A user story is not ready for implementation until it has at least four scenarios. We started with just one, and the lack of context caused us to introduce wrong models in the code. Actually, when test-driving a small function we never start out with a single example but with a to-do list; then we focus on each test, writing the minimum code required, while knowing where we want to go. This is the same, but at a higher level.
  • Given that our business analysts know how to read code, they should read the source code at the end of the iteration. By reading the part of the code inside the hexagon (we use hexagonal architecture), they can make sure the designed model makes sense.
  • Scenarios should not repeat irrelevant details. If one scenario uses real data like a "plate number", the following scenarios don't need to repeat the whole plate number, just something like "x". We should keep in mind that scenarios complement each other in terms of documentation: they should be independent for execution but not for documentation.
  • Triangulation: we can make a scenario go green quickly. However, triangulating the code that really works for all the cases falling under that scenario may require more code. If the triangulation does not concern business details, we test-drive with NUnit rather than SpecFlow. Scenarios are used for documentation, not for triangulation, unless the triangulation covers relevant business details.

It was by chance that one of our business analysts came to a mob programming session and got to see code related to the scenarios we had defined together. He quickly realised that we were introducing aliens into the domain (wrong models). It was because we were lacking some other scenarios. The fact that the code was on the wall in a massive font size and really communicated our intent, together with the fact that the reader is very clever, made us discover a useful technique to double-check our assumptions.

 

A method is an abstraction

The name of a method must be at a higher level of abstraction than its body for the method to be worth it. The name should explain what the method does, not how. If the name communicates exactly the same as its code, that could be a sign that the method does not pay off and should be inlined.

Whenever we extract a method from a bunch of lines of code, we are adding a little abstraction to our code. If the extraction happens too early, we are adding wrong abstractions to the code. Those premature abstractions get in the way of a better design, often hiding duplication and deeper design problems. Lots of small methods do not necessarily mean a better design.

I extract a method only when my tests are green and the responsibility of those lines is really obvious: when I am done implementing the branch of the solution I am test-driving and it's easy to find a name for the new method. I don't create new methods to make a test pass. Methods appear later.
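By contrast, here is a hedged example (C#, hypothetical names) of an extraction that pays off: the name sits one level above the body, so the reader learns the intent without decoding the rule.

  // requires System.Linq
  private bool IsShouting(string message)
  {
      return message.Any(char.IsLetter) && message == message.ToUpperInvariant();
  }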

Code below is an example of premature method extraction. Right after the first test passed the developer created this method:

  private boolean isValidArithmeticExpression(String input) {
      return input != null;
  }

The body of the method is perfectly understandable without the method. There was no duplication of the statement and there were just a few lines in the production code. Definitely too risky, too early, too complex.

An example of a wrong encapsulation:

  public String[] splitBySpace(String expression) {
      return expression.split(" ");
  }

In this example the method's name says exactly what one can read in the body.

The same applies to helper methods in the tests. This is an example of a wrong encapsulation:

  private void conductCalculationAndAssertResult(String equation, int expectedResult) {
      int result = calculate(equation);
      assertEquals(expectedResult, result);
  }

Implementing a fake repository

Lately I am using fake objects more than the other common kinds of doubles: stubs, spies and mocks. This is quite recent; I used to prefer stubs, spies or mocks. I find that fakes make my tests less fragile to design changes when doing outside-in development. The problem is that there can be defects in the implementation of the fake objects themselves. A tricky one may appear in the case of in-memory repositories:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();

      public void Add(Job job)
      {
          jobs.Add(job);
      }

      public void Update(Job job)
      {
          // TODO
      }

      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

This implementation is misleading: if we are test-driving a service and expect it to update a Job instance, we may forget to invoke the repository's Update method and the test would still pass.

  [TestFixture]
  public class ServiceTest {
      [Test]
      public void AssignsOwnerToJob()
      {
          var service = Factory.CreateJobServiceWithFakeRepository();
          var job = service.CreateJob(irrelevantTitle);

          service.Assign(job, irrelevantOwnerId);

          service.JobsOwnedBy(irrelevantOwnerId).Should().HaveCount(1);
      }
  }

  public class Service {
      readonly JobRepository repo;

      public Service(JobRepository repo) {
          this.repo = repo;
      }

      public Job CreateJob(string title)
      {
          var job = new Job();
          repo.Add(job);
          /*...*/
          return job;
      }

      public void Assign(Job job, string ownerId) {
          /*...*/
          job.Assign(ownerId);
          /* BUG: we are forgetting the call to "repo.Update(job)" right here */
      }

      public List<Job> JobsOwnedBy(string ownerId) {
          return repo.FindBy(ownerId);
      }
  }

The problem is that the in-memory repository is using the same object references all the time. When an object is passed in as an argument, what is actually passed is a copy of the reference. However, if the repository were accessing a database, it would most likely return different instances (references). A better implementation of the repository is this one:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();

      public void Add(Job job)
      {
          /* clone: the object stored in the repo
             is different from the given one */
          jobs.Add(job.Clone());
      }

      public void Update(Job job)
      {
          /* find the given job,
             remove it from the collection and
             add a clone of the given one */
      }

      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

Conclusion: the behavior of the fake should be the same as the original's when it comes to handling object references.
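The sketch above relies on a Clone method that isn't shown; a minimal version, assuming Job only holds value-type and string fields, could be a memberwise copy (deep-copy any mutable references):

  public class Job
  {
      string ownerId;

      public void Assign(string ownerId)
      {
          this.ownerId = ownerId;
      }

      public bool IsOwnedBy(string ownerId)
      {
          return this.ownerId == ownerId;
      }

      // A shallow copy is enough while all fields are immutable values
      public Job Clone()
      {
          return (Job)MemberwiseClone();
      }
  }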