We solve problems

  • High quality tailor-made software
  • Training courses on TDD, BDD, clean code, and refactoring
  • Consultancy and guidance
  • Help teams reach the next level in quality and productivity

Archive for the ‘Software Development’ Category

Productive Pair Programming

The title of this post is redundant: pair programming is already a productivity technique. That's my understanding of pair programming - not just two people sitting together. Two people may code together for learning or mentoring purposes, but we pair for productivity reasons - productivity in the long term.
Like any other XP practice, this one aims to promote the values:
When I pair I get immediate feedback about my design and ideas. The communication is direct and the conversation provides us with simplicity. Good pairs respect each other and have the courage to split when necessary.

In my experience, it takes quite a lot of time to become a productive pair, because one needs to learn how the other thinks. You need to know when the other person is focused so as not to break her flow. The navigator should never play the role of the IDE; we obviously don't interrupt to say "you missed a bracket in there" when the IDE is already highlighting the mistake. We wait until the driver is not typing to ask questions, propose changes or take turns. Nevertheless, waiting for silence in order to start a discussion is not enough. When I am the driver, I need my pair to realise that sometimes I need silence to think, especially when my flow is good. The fact that I am not typing does not mean I am ready to talk about other levels of abstraction. As the driver, when this happens I ask the navigator for a few seconds of patience and trust.

Flow is, to me, one of the most important principles when it comes to TDD and pair programming. If the flow is interrupted continuously, pairing is frustrating. As a navigator, part of my mission is to discover when the driver is ready to listen to me. Although a healthy pair is talkative, silence is necessary, and the amount of silence depends on the context. If the navigator is a junior (meaning he lacks some knowledge - domain or technical), then as a driver I need more time to demonstrate my points. I need to conquer little milestones with code to later explain the underlying rationale with words. In this case, talking about written code that works feels easier to me.
I've learned recently that sometimes I just need to ask the navigator to be quiet and write down notes that we can discuss some minutes later. Although I used to be open to discussion at any time, I've learned to prioritize flow. If the navigator is a senior, his comments will be practical and direct, and so continuous discussion feels more natural.

The silent moments I am talking about last between 30 seconds and 3 minutes. Being quiet for more than 5 minutes might be a sign that the pair is not working well. So yes, there is a conversation, which is not the same as thinking out loud. My recommendation is that the navigator always keep some paper notes to avoid thinking out loud with half-baked ideas. Discussing half-baked ideas is OK as long as the driver is not typing. If I am typing I can't listen to my pair.

Pairing is also about adding the right amount of pressure to the other person. The driver should engage the navigator to avoid the "back-seat driver" syndrome. Taking turns helps: alternate often when the pair is well-balanced, and consider taking longer turns when there is a junior. But be careful - if the pressure exceeds a certain threshold, part of our intellectual capacity shuts down. We can't think properly under high pressure.

There are different kinds of interruptions depending on the level of abstraction. Low-level comments are easier to handle than high-level ones. For instance, say the driver has stopped typing and is observing the code she's just written; the navigator could say "that method should be private, rather than public". The level of abstraction of that comment is very likely compatible with the driver's current thoughts: she can easily accept the change and still focus on the TO-DO list. However, something like "how would you implement that in Clojure?" might kill the flow. That comment is OK once the driver is open to discussion. Having a TO-DO list, or some kind of little roadmap created at the beginning of the pairing session, is important in order to stay focused on the right level of abstraction.

There is a lot to write on pair programming, this post contains just a few ideas related to my recent experiences. I like this funny list of ways to hate pair programming - the challenge lies in getting to know your pair enough to notice when you are pairing badly.



Polymorphic Equality

The default generation of equality members provided by ReSharper lets you choose between three different implementations when overriding the "Equals" method in a class (Alt+Insert -> Equality Members):

The default choice is "Exactly the same type as 'this'", which IS NOT polymorphic. I mean, subtypes of that class will never be considered equal, regardless of their values:

  1. public override bool Equals(object obj){
  2.     if (ReferenceEquals(null, obj)){
  3.         return false;
  4.     }
  5.     if (ReferenceEquals(this, obj)){
  6.         return true;
  7.     }
  8.     if (obj.GetType() != this.GetType()){
  9.         return false;
  10.     }
  11.     return Equals((Base) obj);
  12. }

On line 8 it compares the runtime types, which in the case of a subtype will be different, thus returning false.
I didn't pay attention to this detail today and for some reason assumed that the comparison was going to work for subtypes. Lesson learned: always pay attention to generated code!

This is the generated code to consider subtypes equal:

  public override bool Equals(object obj){
      if (ReferenceEquals(null, obj)){
          return false;
      }
      if (ReferenceEquals(this, obj)){
          return true;
      }
      var other = obj as Base;
      return other != null && Equals(other);
  }

And this is yet another implementation that works:

  public override bool Equals(object obj){
      if (ReferenceEquals(null, obj)){
          return false;
      }
      if (ReferenceEquals(this, obj)){
          return true;
      }
      if (!(obj is Base)){
          return false;
      }
      return Equals((Base) obj);
  }

The other lesson learned is that overriding the Equals method in the child class, when the base class already overrides it, increases complexity too much. The code is hard to follow and surprising, and it increases the coupling between the child class and its parent.
Avoid overriding the equality members in class hierarchies if you can.
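To make the pitfall concrete, here is a minimal, hypothetical hierarchy (Base and Derived are illustrative names, not from the generated code above) showing how the "as Base" style treats a subtype instance as equal while silently ignoring its extra state:

```csharp
public class Base {
    private readonly int id;

    public Base(int id) { this.id = id; }

    public override bool Equals(object obj) {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        var other = obj as Base;
        return other != null && other.id == id; // subtypes compare as Base
    }

    public override int GetHashCode() { return id; }
}

public class Derived : Base {
    private readonly string extra;

    public Derived(int id, string extra) : base(id) { this.extra = extra; }
    // No Equals override here: 'extra' plays no part in equality
}
```

With this implementation, new Base(1).Equals(new Derived(1, "x")) returns true. And if Derived did override Equals to take extra into account, equality would become asymmetric (the base considers the derived equal, but not the other way around) - exactly the kind of surprise the lesson above warns about.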

What is a ViewModel?

A ViewModel is one of the three parts that comprise MVVM (Model-View-ViewModel). This pattern is a variation of the Presentation Model (M. Fowler). In the Presentation Model, as opposed to Passive View, the "controller" doesn't hold any reference to the view - and it isn't called "controller" but "Presentation Model". It's the View that references the Presentation Model. The Presentation Model communicates changes to the View through events and data binding. The View handles GUI events (user interaction) and may invoke methods on the Presentation Model.

In 2005, John Gossman from Microsoft introduced the MVVM pattern, which has been evolving since then. In this post Josh Smith explains the pattern as it's commonly used today, with an example. According to that post, a ViewModel is an object which wraps a model (CustomerViewModel contains a Customer model inside). However, another ViewModel there is MainWindowViewModel, an abstract representation of the MainWindow. So there you go: the same term is used for two different things. So what is a ViewModel?

In this other example by Jeremy Likness, there is a class called Contact which he says is a Model, a domain model. I bet that to some people Contact is rather a ViewModel, especially given its implementation of INotifyPropertyChanged. This is getting more and more confusing!

In our team, a ViewModel is the class handling all the logic related to the GUI. It acts as an application service, the entry point to the hexagon. I believe that in his book Vaughn Vernon discourages the reader from using this kind of "controller" as an application service, but we haven't encountered any good reason not to use it as such. In fact, I find the definition of application service in that book quite contradictory. We never expose domain models directly to the GUI. We may wrap domain models in other objects which in turn are bound to the GUI (data binding) and implement interfaces like INotifyPropertyChanged, although more often than not our domain models are not even there. These objects are definitely a kind of "view" of the model; I was tempted to call them "ModelView". I understand they may be called ViewModels, it's reasonable - in some articles the ViewModel is more model-ish than view-ish. Nonetheless, to avoid the dilemma of calling these objects ViewModels, we decided not to add the *ViewModel suffix to them. We simply keep them under the *.ViewModel namespace, but they are just objects - a Vehicle, a Customer...
What we definitely know is that they are not domain models. They are anemic models to support a particular view of the application. They should not hold references to other services. The application service (what we call ViewModel) holds the references to these data-bound objects in order to get the information from the user.

It's important to remember the Presentation Model: it does not make calls to the view and it should not have references to any view component. If the GUI logic requires popping up a confirmation dialog, the Presentation Model (ViewModel) should not make a direct call to some kind of "Dialog"; it should rather trigger an event that the View can listen to in order to show the dialog.
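As a sketch of that idea (the names are made up, not from any specific framework), the ViewModel exposes an event and only the View decides how the dialog is presented:

```csharp
using System;

public class OrderViewModel {
    // The View subscribes to this; the ViewModel knows nothing about dialogs
    public event Action<string> ConfirmationRequested;

    public void DeleteOrder() {
        // GUI logic decides a confirmation is needed, but shows nothing itself
        var handler = ConfirmationRequested;
        if (handler != null) {
            handler("Delete this order?");
        }
    }
}

// In the View's code-behind (the only place aware of UI components):
// viewModel.ConfirmationRequested += message => MessageBox.Show(message);
```

The ViewModel stays testable with no UI components involved, and the View remains the only layer coupled to the dialog mechanism.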

What do you think? I appreciate your comments and thoughts 😉





DTO vs Domain Models – Part II

As we learn more about the business and the domain, we are taking the time to improve the design of the software. The first handful of features have been deployed into production; now it's a good time to measure, gather more feedback and redesign. In Part I, we exposed some of the difficulties we were facing when modeling. These are the changes we are making:

  • We are changing the repositories to work with aggregates (entities), that is, rich domain models rather than plain anemic models. The repository performs database queries and then passes the raw results to the aggregate's constructor or to some adapter to build up the model. Thus the repository returns models to the service layer and also receives models.
  • We (the team) are changing our definition of Data Transfer Object. Currently an object is a DTO only if it's used to send and receive data from the GUI (often through data binding) or via HTTP requests. In the case of the GUI, those DTOs may be considered ViewModels. Otherwise they are just anemic models which means they are an opportunity for us to improve the design.
  • Transfer the minimum amount of data for each operation, no more. A single service may work with many different DTOs because each operation requires different data sets. For instance, the OfferService (an application service) has public methods to create and update offers. The OfferDto used to contain all the data required for both operations:
         public void Create(OfferDto offer);
         public void Update(OfferDto offer);
    However only a subset of the data is required for each operation. Now we prefer to exchange a minimal amount of data:
         public void Create(NewOffer offer);
         public void UpdatePrice(OfferId offerId, decimal price);
         public void Update(UpdatedOffer offer);

    This approach helps us realize those objects are used just to transfer data, nothing more.

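A sketch of what those operation-specific objects could look like (only the class names come from the signatures above; the fields are assumptions for illustration):

```csharp
using System;

// Each object carries only the data its operation needs, nothing more.
public class NewOffer {
    public string ProductName { get; private set; }
    public decimal Price { get; private set; }

    public NewOffer(string productName, decimal price) {
        ProductName = productName;
        Price = price;
    }
}

// A tiny identifier object instead of dragging the whole OfferDto around.
public class OfferId {
    public Guid Value { get; private set; }

    public OfferId(Guid value) { Value = value; }
}
```

Because each object is shaped by exactly one operation, an unused field immediately stands out as a design smell.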
Slowly we are evolving from a data-centric approach to a rich domain model.


Today has been a special day. Not because it's my 34th birthday, which I never really celebrate (although I do appreciate all the greetings and warm words from my friends and family!), but because I've had the pleasure of spending the day working with Carlos Peix for the first time. Carlos is a well-known software craftsman from Argentina. It's his first time in Spain and we are lucky that he chose to spend two days working with us as part of his journey. It's an exchange, something a journeyman does to learn and share. Among other things, we have been mob programming today, refactoring code with the aim of coming up with a proper domain model. We have been identifying anemic models and data transfer objects, trying to turn some of them into domain models and trying hard to come up with a good object-oriented design. Things I like from Carlos' suggestions today:

  • Enumerate a list of all the classes we plan to refactor (or classes that are going to be affected) so as to be able to estimate how much time we are going to invest in this task and thus make a better decision on where to stop refactoring.
  • Rather than refactoring every object in depth, apply little changes to all the objects visiting all of them to build a mental map of dependencies and structure. For instance, we decided we wanted to hide setters and create constructors to inject the attributes. Although a single class suffered from other smells, we performed only that change to it and moved to the others until all of them changed. To me it's like exploring the tree breadth-first.
  • Sequence diagrams help understand the complexity of the design. It's funny that this should happen exactly at the same time I am reading Sandi Metz's book on OOP. Carlos hasn't read the book but he recommends the same technique as Sandi Metz. The sequence diagram was an "aha" moment for my colleague Miguel.
  • We commented on the different strategies to expose an object to different bounded contexts, using interfaces and using wrappers. We didn't have much time to dig into it.
  • Repositories retrieve aggregates when possible, that is rich domain models rather than anemic models.

Looking forward to more synergy tomorrow, it's been an exciting day.

And you fellow journeyman, remember that we are open minded and value your visit to our office. If you want to work with us for a few days drop us a line, we look forward to it!

As I am getting older my memory doesn't work as it did some years ago. If I want to remember and internalize the important stuff from all the things I study, I just have to blog about them, for two reasons:

  • being able to come back to my blog to reread
  • record it better in my mind - forcing my brain to explain what I learn produces a deeper learning experience

I was lucky to participate in a nightly mob programming session with Sandro Mancuso and other fellow craftsmen during Socrates Canaries, when I first saw his way of writing acceptance tests. Now with Sandro's screencasts we have great material to study and explore. We have to thank Sandro for exposing himself with these code screencasts, especially because they are free and unedited. What happens with screencasts (I record screencasts myself) is that after a while, perhaps after a bunch of months, you change your way of coding; you find little tweaks along the long journey of continuous improvement that may make you feel a bit embarrassed when you review your old videos. Especially for this reason I appreciate the effort Sandro is putting into the community.

The main picks I take from the recorded kata are what I would call an "evolving acceptance test" and "deferred dependency injection". When I write a test, I don't want to change it later. In TDD the production code evolves with each test, yet my tests should not change even when I refactor the code. This is also true for acceptance tests when it comes to the "act" and "assert" parts, but not necessarily the "arrange". I have learned that I can postpone the decision of where and how my dependencies are going to be connected. In Outside-in TDD #1, Sandro knows there is a Console object and spies on it, but the object is not injected into the Account object yet - by the way, the kata is better understood if the Account is named AccountService. He defers the decision on how to wire up dependencies. To make it more intention-revealing, what I do now is get the service under test from a factory:

  AccountService service = Factory.AccountService();

Now all I have to do is change the factory method whenever I am done with the low-level unit tests. But I don't need to have the object graph in my head at the time I am writing that very first test. Nice one.
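A minimal sketch of that factory (AccountService's collaborators are not known yet; the commented line is a hypothetical future wiring, not from the kata):

```csharp
public class AccountService {
    // Collaborators will be discovered while test-driving the inner design
}

public static class Factory {
    public static AccountService AccountService() {
        // When the low-level unit tests are done, only this line changes, e.g.:
        // return new AccountService(new Console(), new Clock());
        return new AccountService();
    }
}
```

The acceptance test only knows about Factory.AccountService(); how the object graph is assembled remains a private decision that can be deferred until the design has emerged.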

Now when it comes to the unit tests, my outside-in style used to be very similar to Sandro's, but in the last couple of years it has changed slightly. I try to mock only the boundaries of the system - only those things I don't own. As an example, in Outside-in TDD #2, at about 2:49 he creates a test to spy on the StatementPrinter. That test turns out to be exactly like the production code which makes it pass; it is strongly coupled to the implementation. I would rather use a Console spy to test that properly formatted statements are sent to the console, thus hiding the intermediate collaboration step from my test.
But I would do so probably using Sandro's technique of deferred dependency injection - I don't know whether I really want to inject the Console into the Account. So the production code would likely employ the StatementPrinter but my test wouldn't know about it. To shorten the feedback loop I often fake the implementation (yes, "return true"), which gives me an idea of the complexity of the problem at hand. Then I triangulate to get to green with the minimum effort. Eventually I would end up with the StatementPrinter as I refactor.

Essentially I like my tests to know only what is strictly needed from the implementation so that I can refactor often without breaking many tests.

What about you? What things do you do differently?

In order for a native desktop application to be responsive - I mean, not to freeze whilst sending a request over the network or processing a heavy CPU operation - these operations have to run in a separate thread.
.Net 4.0 introduced promises (a well-known concept for JavaScript developers); the Task object is a promise in C#. Later on, C# 5.0 introduced nice syntactic helpers to work with these promises: the async and await keywords.

They are powerful features, comfortable to develop with, but it's important to know how they work behind the scenes. Otherwise you could get into trouble with deadlocks and invalid operation exceptions.
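As a minimal illustration (class and method names are made up), awaiting a Task frees the calling thread instead of blocking it:

```csharp
using System.Threading.Tasks;

public class GreetingClient {
    // Simulates a slow I/O operation; the caller's thread is free while it runs
    public async Task<string> LoadGreetingAsync() {
        await Task.Delay(100); // stand-in for a network request
        return "hello";
    }
}

// In a UI event handler: var greeting = await client.LoadGreetingAsync();
// The window stays responsive during the delay and execution resumes
// after the await, on the captured synchronization context.
```

That captured context is precisely where the deadlocks mentioned above come from, e.g. when someone blocks the UI thread with .Result while the continuation needs that same thread.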

Our teammate Modesto San Juan recommended the book "Async in C# 5.0" by Alex Davies (@alexcode), and I found it great. I've created a tiny sample app to summarize the things I've learned from the book:

Sample app: https://bitbucket.org/carlosble/asyncawait - Look at MainWindow.xaml.cs file.

The app is just a window with a bunch of buttons and a text field. Every button has an event handler in the MainWindow.xaml.cs that exercises each use case.
The method names in that file along with some comments, explain the most remarkable features for our team right now.

Most of the stuff in Alex's book is very well summarized in this presentation by Alex Casquete (@acasquete) & Lluis Franco (@lluisfranco):

Async Best Practices from Fimarge

DTO vs Domain Models

A Data Transfer Object (DTO) is an object intended to carry data, for example between the client and the server or between the UI and the domain layer. It exposes fields or properties (getters and setters) publicly. Sometimes a DTO could be seen as an anemic model. DTOs are mostly used out of the hexagon, in the delivery mechanism.

On the other hand domain models should promote a good object oriented design with proper encapsulation. They belong in the hexagon.

However it's not always clear when to use a DTO and when a domain model, or where and how to convert one into the other. None of the approaches we have tried works for us in all cases; there are always exceptions. But there are some principles that are helping me:

Principle: Do not expose domain models out of the hexagon.
Example: Binding an object to the UI.
Reason: Changes in the domain will impact the delivery mechanism in unsuspected ways. When the GUI is defined with XAML, the data binding is declared in this XML. Changing the visibility of a getter from public to private will still compile, but as soon as the user enters some data in a bound field it will throw a runtime exception that we can't reproduce with automated tests; it's hard to discover.

Principle: Consider emerging DTOs as early versions of models.
Usually objects start with one or two fields and no logic in them. We consider them DTOs at that point, but they are nevertheless part of our domain and so they are used by services and domain models. We may even have a repository to persist that particular DTO when it looks like an aggregate.

Later on, as we learn about the domain and the objects grow, the DTO might turn into a domain model. Then it makes sense to have two separate objects. The DTO is typically generated by the model:

  var dto = domainModel.ToDto();      // model to dto
  var model = DomainModel.From(dto);  // model from dto

Now we make sure the domain model is not accessible from the GUI, and the DTO is no longer present in the repository's method signature. The repository may get the DTO from the domain model internally.
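A sketch of that conversion pair with hypothetical names (Customer is just an example; the real fields depend on the domain):

```csharp
public class CustomerDto {
    public string Name { get; set; } // public setters: it's only a data carrier
}

public class Customer { // the domain model keeps its state encapsulated
    private readonly string name;

    private Customer(string name) { this.name = name; }

    public CustomerDto ToDto() {                   // model to dto
        return new CustomerDto { Name = name };
    }

    public static Customer From(CustomerDto dto) { // model from dto
        return new Customer(dto.Name);
    }
}
```

Keeping the constructor private forces every caller through the From factory, so the model controls how it is built from raw data.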

We don't know what is the right answer in some cases but it's important to remember what didn't work for us. This post is a reminder to myself, like many others.

We were part of CukeUp! 2015

I had the pleasure of speaking at CukeUp! this year (2015), the fifth edition of Cucumber's conference, which is more about BDD than Cucumber itself, although there are always updates on the state of the tool (this year Matt released version 2.0 during his talk!).

Borja and I

It's been a great conference: I've learned a bunch of things, met friends and new people. My colleague Borja Navarro accepted the challenge of co-presenting with me and that made our session fun - at least for me, I had a lot of fun. We engaged the attendees with a couple of examples, together with questions and answers on how to write user stories with acceptance criteria and scenarios. The attendees were very helpful and active. This is the recorded session, although the sound is quite bad.

Thank you Borja, you rock mate!

These are my conference picks:

  • A mature XP team doesn't need people working exclusively on testing and QA. If every developer has testing skills and every pair is responsible for quality, then people just play the two roles and there is no hard distinction between testers and developers. There will be people with stronger development skills and people with stronger testing skills, experts on each area. Well, this is my understanding from Rachel's and Aimy's keynote.
  • They don't have a staging environment; they release directly into production, performing some smoke testing live with "testing" user accounts. I find this technique especially interesting when the new features can be deployed on just a few servers, so that only certain users get to exercise the new code whilst the others keep using the previous version for a while. Exploratory testing and other tests are performed on the development environment. The maturity level of the team has to be very high; it sounds like a nice goal to reach at my current project.
  • I participated in Paul Rayner's useful workshop, especially nice because I got the chance to share a table with Steve Tooke and Gáspár Nagy, experts on the subject. My pick is that any technique that could help with discovery during the analysis stage is welcome. One of the suggested techniques was Event Storming, which I am going to try as soon as I can. Since we are using DDD in our project, Event Storming sounds like a great technique to combine analysis and modeling.
  • From Seb's talk there are many ideas to take away, it's worth reading the slides. I already listened to his talk in a previous event in London and I want to read the recommended books. Take a look at this map on how to split user stories.
  • Konstantin Kudryashov, creator of Behat, gave an interesting talk on how to explore and exercise the domain model from Gherkin scenarios, the slides are here. My understanding was that he uses domain objects within the step definitions as if these steps were unit tests. The approach is a bit different from the one we are using where the step definitions invoke actions on application services rather than objects from the inner hexagon. Although we use application services we usually stub out or spy on repositories aiming for fast tests. Another technique he employs is to mark a few scenarios as critical making them run end-to-end through the UI so as to make sure all pieces integrate with each other. It's good to know the techniques that are working for other people and see how they are different to the ones we currently use.
  • The keynote by Dan North was full of insight, I highly recommend watching it. I didn't watch it live as I prefer to chat with people whilst talks are being recorded - at that time I took the chance and enjoyed listening to Chris Matts in the bar. To me, the most interesting tip is the experiment Dan is conveying where every team member's board has to have Kaizen tasks, Discovery tasks and Feature tasks. Brilliant, isn't it?
  • The workshop by Chris Matts and Richard Warner was full of insightful ideas. I liked the exercise on "Left Shifting" a company's culture. On the other hand, they explained their discovery process when it comes to feature analysis. The three keys to discovering new features are "Customer needs and wants", "Target group" and "Outcome". The workshop was recorded; it's worth watching that part of the session, which was more of a talk than a workshop. My understanding is that they are not using user story mapping at all. It was really nice to share a table with Nat Pryce during the workshop; he was very active and fun, and I am glad I finally got to know him in person.
  • Matt Wynne's workshop was one of my favorite ones. I really like the password strength exercise to illustrate the difference between acceptance criteria (rules) and examples. See these slides for more details. The other pick is "Example mapping" a technique where the colors end up making obvious what we know and don't know about the context in order to write good user stories. I also realised that the examples used for exercises may work better when they are fun, when we add some humor to them. Writing user stories for a boring domain is not as engaging as writing them for some fun domain with curious situations.

At the end of the conference there was a panel with the experts, the creators of BDD. As a result Dan North came up with this initiative on explaining BDD by example. I want to contribute to those examples. By the way, Dan said he found John Ferguson's book very good, I have to check it out.

CukeUp brought me more value than I expected; I am glad I was part of it. Thank you organisers, you did a great job!


.Net 4.5 came out with a really handy built-in asynchronous mechanism: async and await. However the signature of a void method is a bit strange:

  public async Task SomeAction(){...}

It is possible to use async void, but it's not recommended unless we are talking about event handlers. It's also a bit strange that the type specified in the signature is not the one returned:

  public async Task<string> SomeQuery(){
      // ...
      return someString;
  }

But everything else is great and very handy.

Stubbing method calls can be hard though. You can get weird exceptions like System.AggregateException when running tests with NUnit. The problem comes up when awaiting a stubbed async method:

  // Notice that async tests with NUnit require the method to be async Task rather than void
  [Test]
  public async Task filter_results_from_search(){
      var srv = new Mock<Service>(); // using Moq
      var sut = new Sut(srv.Object);

      await sut.SomeAction(); // Exception happens here

      SomeAssertionOverHere();
  }

  // Production code
  public class Service { // dependency
      public virtual async Task<string> SomeQuery(){ // virtual is mandatory in order to mock it
          return "someResult";
      }
  }

  public class Sut { // system under test
      private readonly Service srv;

      public Sut(Service srv){
          this.srv = srv;
      }

      public async Task SomeAction(){
          var result = await srv.SomeQuery(); // NullReferenceException in here
          // ... use the result
      }
  }

The problem is that Moq makes the stubbed method return null when invoked, because we are not specifying any return value and the default value for Task&lt;string&gt; is null. We should tell Moq to return a proper Task:

  [Test]
  public async Task filter_results_from_search(){
      var srv = new Mock<Service>();
      srv.Setup(s => s.SomeQuery())
         .Returns(Task.Factory.StartNew(() => "someQuery"));
      var sut = new Sut(srv.Object);

      await sut.SomeAction();

      SomeAssertionOverHere();
  }

The key is to return a new task: Task.Factory.StartNew(lambda)
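Since .NET 4.5 there is also Task.FromResult, which wraps a value in an already-completed task without scheduling anything on the thread pool, and recent Moq versions ship a ReturnsAsync helper. Either could replace the StartNew call above (srv here is the mock from the test):

```csharp
// Already-completed task, no thread-pool work involved:
srv.Setup(s => s.SomeQuery())
   .Returns(Task.FromResult("someQuery"));

// Or, with Moq's shorthand for async methods:
srv.Setup(s => s.SomeQuery())
   .ReturnsAsync("someQuery");
```

Both express the intent more directly: we are stubbing a value, not starting any real asynchronous work.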