As I get older, my memory doesn't work as well as it did some years ago. If I want to remember and internalize the important stuff from all the things I study, I just have to blog about them, for two reasons:
I was lucky to participate in a nightly mob programming session with Sandro Mancuso and other fellow craftsmen during Socrates Canaries, where I first saw his way of writing acceptance tests. Now, with Sandro's screencasts, we have great material to study and explore. We have to thank Sandro for exposing himself with these code screencasts, especially because they are free and unedited. What happens with screencasts (I record screencasts myself) is that after a while, perhaps after a few months, you change your way of coding; you find little tweaks along the long journey of continuous improvement that may make you feel a bit embarrassed when you review your old videos. Especially for this reason, I appreciate the effort Sandro is putting into the community.
The main pick I take from the recorded kata is what I would call an "evolving acceptance test" and "deferred dependency injection". When I write a test, I don't want to change it later. In TDD the production code evolves with each test, but my tests should not change even when I refactor the code. This is also true for acceptance tests when it comes to the "act" and "assert" parts, but not necessarily the "arrange". I have learned that I can postpone the decision of where and how my dependencies are going to be connected. In Outside-in TDD #1, Sandro knows there is a Console object and spies on it, but the object is not injected into the Account object yet - by the way, the kata is better understood if Account were named AccountService. He defers the decision on how to wire up dependencies. To make it more intention-revealing, what I do now is get the service under test from a factory:
- AccountService service = Factory.AccountService();
Now all I have to do is change the factory method whenever I am done with the low-level unit tests. But I don't have to hold the whole object graph in my head at the time I am writing that very first test. Nice one.
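A minimal Java sketch of the idea, with class and method names of my own (not Sandro's): the acceptance test depends only on the factory method, so the wiring decision lives in one place and can change later without touching the test.

```java
// Hypothetical names for illustration; not taken from the screencasts.
class AccountService {
    private int balance = 0;
    void deposit(int amount) { balance += amount; }
    int balance() { return balance; }
}

class Factory {
    // The acceptance test depends only on this method, so the object
    // graph can be wired up here later without changing the test.
    static AccountService accountService() {
        // First version: no collaborators wired yet. Later this might become
        // something like: new AccountService(statementPrinter, console);
        return new AccountService();
    }
}
```

In the test, the "arrange" step is just `AccountService service = Factory.accountService();` and it stays that way no matter how the internals evolve.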
When it comes to the unit tests, my outside-in style used to be very similar to Sandro's, but in the last couple of years it has changed slightly. I try to mock only the boundaries of the system, only those things I don't own. As an example, in Outside-in TDD #2, at about 2:49, he creates a test to spy on the StatementPrinter. That test turns out to look exactly like the production code that makes it pass; it is strongly coupled to the implementation. I would rather use a Console spy to test that properly formatted statements are sent to the console, thus hiding the intermediate collaboration step from my test.
But I would probably do so using Sandro's technique of deferred dependency injection - I don't know yet whether I really want to inject the Console into the Account. So the production code will likely employ the StatementPrinter, but my test wouldn't know about it. To shorten the feedback loop I often fake the implementation (yes, "return true"), which gives me an idea of the complexity of the problem at hand. Then I go back to red and triangulate to get to green with the minimum effort. Eventually I end up with the StatementPrinter as I refactor.
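A sketch of that style of test in Java, with a hand-rolled Console spy (all names are mine, assumed for illustration): the test asserts only on what reaches the console, so the production code is free to introduce, rename, or remove a StatementPrinter during refactoring without breaking it.

```java
import java.util.ArrayList;
import java.util.List;

interface Console {
    void printLine(String line);
}

// Hand-rolled spy: records everything that gets printed.
class ConsoleSpy implements Console {
    final List<String> lines = new ArrayList<>();
    public void printLine(String line) { lines.add(line); }
}

class Account {
    private final Console console;
    Account(Console console) { this.console = console; }

    // The test doesn't know how this line is produced internally;
    // a StatementPrinter may appear here later as a refactoring.
    void printStatement() {
        console.printLine("date | credit | debit | balance");
    }
}
```

The test arranges a ConsoleSpy, calls printStatement(), and asserts on the recorded lines, never on the intermediate collaborator.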
Essentially I like my tests to know only what is strictly needed from the implementation so that I can refactor often without breaking many tests.
What about you? What things do you do differently?
In order for a native desktop application to be responsive, I mean, not to freeze while sending a request over the network or running a heavy CPU operation, these operations have to run in a separate thread. C# 5.0's async and await keywords are powerful features, comfortable to develop with, but it's important to know how they work behind the scenes. Otherwise you can get into trouble with deadlocks and invalid operation exceptions.
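The topic here is C#, but the core idea can be sketched with Java's CompletableFuture, the closest standard-library analogue to async/await (names in this snippet are mine): the slow work runs on a pool thread so the calling (UI) thread stays free, and blocking on the result from that same thread is exactly how the deadlocks mentioned above show up.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class ResponsiveSketch {
    // Stand-in for a network request or a heavy CPU operation.
    static String slowOperation() {
        try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException e) { }
        return "response";
    }

    static CompletableFuture<String> fetchAsync() {
        // The slow work runs on a pool thread; the calling thread is free
        // to keep the UI responsive. Blocking on the result from the UI
        // thread (join() here, .Result in C#) reintroduces freezes and,
        // in C#'s synchronization-context model, classic deadlocks.
        return CompletableFuture.supplyAsync(ResponsiveSketch::slowOperation);
    }
}
```

In a real desktop app you would attach a continuation (thenAccept, or await in C#) to publish the result back to the UI instead of blocking.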
Our teammate Modesto San Juan recommended the book "Async in C# 5.0" by Alex Davies (@alexcode), and I found it great. I've created a tiny sample app to summarize the things I've learned from the book:
Sample app: https://bitbucket.org/carlosble/asyncawait - Look at MainWindow.xaml.cs file.
The app is just a window with a bunch of buttons and a text field. Every button has an event handler in the MainWindow.xaml.cs that exercises each use case.
The method names in that file, along with some comments, explain the features that are most remarkable for our team right now.
Most of the stuff in Alex's book is very well summarized in this presentation by Alex Casquete (@acasquete) and Lluis Franco (@lluisfranco):
Note: read Part II of this post here.
A Data Transfer Object (DTO) is an object intended to carry data, for example between the client and the server or between the UI and the domain layer. It exposes fields or properties (getters and setters) publicly. Sometimes a DTO could be seen as an anemic model. DTOs are mostly used out of the hexagon, in the delivery mechanism.
On the other hand domain models should promote a good object oriented design with proper encapsulation. They belong in the hexagon.
However, it's not always clear where to use a DTO versus a domain model, and where and how to convert one into the other. None of the approaches we have tried works in all cases; there are always exceptions. But there are some principles that are helping me:
Principle: Domain models may travel out of the hexagon but... do not bind domain models to the GUI directly.
Reason: Data mappers like XAML binding need public setters and getters, and we don't want public setters in our domain models. When the GUI is defined in XAML, the data binding is declared in that XML. Changing a getter's visibility from public to private will still compile, but as soon as the user enters some data in a bound field it throws a runtime exception that we can't reproduce with automated tests; it's hard to discover.
Principle: Apply data transformations out of the core hexagon
Usually objects start out with one or two fields and no logic in them; they are anemic models. At that stage there is no difference from a DTO.
Later on, as we learn about the domain and the objects grow, we need to apply transformations to carry data over to the view or across the network. Our first approach was to populate the DTO from the model:
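Something along these lines, reconstructed from the description with hypothetical names: the domain model itself builds the DTO, which is exactly the coupling discussed next.

```java
// Hypothetical example, not the actual project code.
class UserDto {
    public String name; // plain data carrier with public state
}

class User {
    private final String name;
    User(String name) { this.name = name; }

    // First approach: the domain model populates the DTO itself,
    // coupling the core domain to an export/serialization concern.
    UserDto toDto() {
        UserDto dto = new UserDto();
        dto.name = name;
        return dto;
    }
}
```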
But as many people pointed out in the comments on this post, and as we saw later, it's not a good idea, because it couples the domain models to the way data is exported and sent out of the hexagon. Coupling is unavoidable; what matters is its direction. It's OK for the outside world to know the model, but it's not good to pollute our core domain with external concerns like GUIs or serialization.
Our current approach is to have an "adapter" class that knows both worlds and connects them:
- var dto = adapter.ToDto(model); // model to dto
- var model = adapter.ToModel(dto); // model from dto
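A sketch of such an adapter in Java, with hypothetical names: the adapter lives outside the hexagon and knows both worlds, so neither the domain model nor the DTO depends on the other.

```java
// Hypothetical names for illustration.
class UserDto {
    public String name; // plain data carrier
}

class User {
    private final String name;
    User(String name) { this.name = name; }
    String name() { return name; }
}

// The adapter is the only class that knows both the model and the DTO,
// keeping export concerns out of the core domain.
class UserAdapter {
    UserDto toDto(User model) {
        UserDto dto = new UserDto();
        dto.name = model.name();
        return dto;
    }

    User toModel(UserDto dto) {
        return new User(dto.name);
    }
}
```

When the shape of the DTO changes (a new serialization format, a new GUI field), only the adapter changes; the model keeps its encapsulation.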
Thanks, everyone, for the comments.