Archive for the ‘aida’ Category



I don't really know whether these are the best practices, to be honest, and there is certainly a lot for me to learn, but these are principles and practices that work well for us in the development of a complex native Windows app (Windows 8.1+) using C# and the MVVM pattern.

Files in my example (namespace + classname):

  • Example.Views.App.xaml.cs (main app class)
  • Example.Views.Vehicle.xaml (view)
  • Example.Views.Vehicle.xaml.cs (view's codebehind)
  • Example.ViewModels.Vehicle.cs (view model)
  • Example.Domain.Vehicle.cs (domain model)
  • Example.ViewModels.AppState.cs (in-memory app state)
  • Example.Views.NavigationService.cs (our custom navigator)
  • Example.Views.NavigationParameters.cs (bag of parameters to be sent to the target view)
  • Example.Domain.EventBus.cs (our custom pub-sub implementation, a singleton)

Page navigation is performed by the framework:

    ((Frame) Window.Current.Content).Navigate(typeof(Vehicle), vehicleId);

The first parameter is the type of the target Page and the second is an "object" intended to carry any custom parameter. That parameter is received as an argument of the OnNavigatedTo method in the target page.
The code above is used to navigate from App.xaml.cs (Main page) to Vehicle (Page).

The NavigationService is an indirection level that sends the ViewModel to the View as the context object. It's used pretty much like Frame.Navigate:

    NavigationService.Navigate<Vehicle>(Window.Current, vehicleId);

Implementation (NavigationService.cs):

    public static void Navigate<T>(Window w, object context){
        ((Frame) w.Content).Navigate(typeof(T),
            new NavigationParameters{
                ViewModel = GetViewModel<T>(),
                Context = context ?? GetContext<T>()
            });
    }

    private static object GetViewModel<T>(){
        if (typeof(T) == typeof(Vehicle)){
            return Factory.CreateVehicleViewModel();
        }
        ...
        throw new NotImplementedException("Can't navigate to such page");
    }

This is how the view model is received in Vehicle's codebehind (Vehicle.xaml.cs):

    protected override async void OnNavigatedTo(NavigationEventArgs e){
        var navigationParams = e.Parameter as NavigationParameters;
        var vm = navigationParams.ViewModel as ViewModels.Vehicle;
        vm.SubscribeToEventBus(); // in case vm is a listener
        await vm.Initialize();    // in case of some initialization
        DataContext = vm;         // set the DataContext at the very end
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e){
        if (ViewModel != null){
            ViewModel.UnsubscribeFromEventBus(); // release the reference
        }
    }

Principles applied in the code snippet above:

  • DataContext is set in the last step of the method, not before. The DataContext is set either in the codebehind or in the XAML, but not in both places at the same time. If it is set in the XAML (DataContext="SomeProperty") and also in the codebehind, you can't guarantee which data will finally be set; race conditions could happen.
  • Pages and UI controls in general must not contain state. Avoid any field in the codebehind holding a reference to the view model; this prevents race conditions. We rather create a getter instead:
      protected ViewModels.Vehicle Vehicle {
          get { return DataContext as ViewModels.Vehicle; }
      }
  • Avoid subscribing the codebehind to the EventBus; use the view model as the listener. The life cycle of the pages is controlled by the framework, which is especially important when caching pages via NavigationCacheMode="Required". Giving the EventBus a reference to the page will prevent the garbage collector from cleaning up the Page instance. A minimal sketch of such a bus follows this list.
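To make that last point concrete, here is a minimal sketch of what such a singleton pub-sub bus could look like (illustrative shape, not our actual implementation). Note how the bus holds strong references to its handlers, which is exactly why a cached Page must never be the subscriber:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of a singleton pub-sub EventBus.
    public sealed class EventBus
    {
        public static readonly EventBus Instance = new EventBus();

        // Strong references to handlers: whatever subscribes stays alive.
        private readonly Dictionary<Type, List<Delegate>> handlers =
            new Dictionary<Type, List<Delegate>>();

        private EventBus() { }

        public void Subscribe<TEvent>(Action<TEvent> handler)
        {
            if (!handlers.ContainsKey(typeof(TEvent)))
                handlers[typeof(TEvent)] = new List<Delegate>();
            handlers[typeof(TEvent)].Add(handler);
        }

        public void Unsubscribe<TEvent>(Action<TEvent> handler)
        {
            if (handlers.ContainsKey(typeof(TEvent)))
                handlers[typeof(TEvent)].Remove(handler);
        }

        public void Publish<TEvent>(TEvent message)
        {
            if (!handlers.ContainsKey(typeof(TEvent))) return;
            foreach (var handler in handlers[typeof(TEvent)])
                ((Action<TEvent>)handler)(message);
        }
    }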

Avoid global statics: although there is a single instance of the AppState class (it is a global singleton), we inject it into every view model that requires read or write access rather than having direct static references. The Factory knows the AppState singleton and injects it into the view models. Although two different views may require the same data, we try not to store everything in the AppState; rather, we cache the service methods retrieving the required data and then inject the same service instance into both view models. The amount of data kept in the AppState should be minimal: basically it should contain identifiers that view models understand in order to pull data from the services. Sometimes it contains more data to avoid time-consuming transformations or calculations; that's fine, it's a trade-off.
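A hedged sketch of how that wiring might look. Factory.CreateVehicleViewModel appears earlier in this post, but the constructor signature and the VehicleService collaborator are assumptions for illustration:

    // Illustrative sketch: the Factory is the only place that knows
    // the AppState singleton; view models receive it via constructors.
    public static class Factory
    {
        private static readonly AppState appState = new AppState();

        // A cached service instance shared by the view models that
        // need the same data (VehicleService is an invented name).
        private static readonly VehicleService vehicleService = new VehicleService();

        public static ViewModels.Vehicle CreateVehicleViewModel()
        {
            // Dependencies are injected; the view model never
            // reaches out to a global static itself.
            return new ViewModels.Vehicle(appState, vehicleService);
        }
    }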

Custom controls: we ended up having our own custom pages, inheriting from the Page control to remove duplication from the initialization process. One such inheritor is generic: CachedPage<T>, where T is the type of the ViewModel. However, in XAML you can't define a page inheriting from a generic class. To work around this minor issue we create an intermediate empty class:

    public class CachedVehiclePage : CachedPage<Vehicle>{}

Then in xaml we can set the type of our page to be CachedVehiclePage.
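What CachedPage<T> might look like is sketched below; this is an assumption about the shape of such a base class, not the actual code (the subscription and initialization hooks from the earlier snippet are omitted):

    using Windows.UI.Xaml.Controls;
    using Windows.UI.Xaml.Navigation;

    // Hypothetical sketch of a generic base page centralizing
    // the navigation boilerplate shown earlier.
    public abstract class CachedPage<T> : Page where T : class
    {
        protected CachedPage()
        {
            // Cached pages are the reason this base class exists.
            NavigationCacheMode = NavigationCacheMode.Required;
        }

        // Stateless accessor: the view model lives only in the DataContext.
        protected T ViewModel
        {
            get { return DataContext as T; }
        }

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            var navigationParams = e.Parameter as NavigationParameters;
            DataContext = navigationParams.ViewModel as T;
        }
    }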

Nested user controls: when a Page contains a user control, the DataContext of that user control is the same as the Page's. Neither the codebehind nor the XAML of the user control should overwrite the DataContext. The DataContext should not be set programmatically; it's just inherited from the parent container. Otherwise there could be race conditions and memory leaks.

Data binding: we don't bind domain models directly to the GUI. The main reason is that two-way binding requires public setters. Sometimes we create a bindable object that wraps the domain model, exposing only some properties. But we often create custom bindable objects from the domain model for the specific purposes of the view.
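A hedged sketch of such a wrapper. The Domain.Vehicle members used here (PlateNumber) are invented for illustration:

    using System.ComponentModel;

    // Illustrative sketch: a bindable wrapper exposing only what the
    // view needs, with a public setter just for the editable field.
    public class BindableVehicle : INotifyPropertyChanged
    {
        private readonly Domain.Vehicle vehicle; // assumed domain model
        private string plateNumber;

        public BindableVehicle(Domain.Vehicle vehicle)
        {
            this.vehicle = vehicle;
            plateNumber = vehicle.PlateNumber; // assumed property
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public string PlateNumber
        {
            get { return plateNumber; }
            set
            {
                plateNumber = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("PlateNumber"));
            }
        }
    }

The domain model keeps its invariants and stays free of GUI concerns; only the wrapper needs the public setters that two-way binding demands.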

I'll update this post with more stuff that is working well for us.

Learning with Peter

Last week I was lucky to host my good friend Peter Kofler on his visit to Gran Canaria, where he came to facilitate the Global Day of Code Retreat and also to work together for a couple of days on our biggest project at the moment.

We've been working on the same project for a year now; our team joined the client's team to play several roles, from mentoring to developing features ourselves. Peter's visit was a fantastic learning experience and also a way to bring back principles that wear out as we face recurring issues over time. Even though we started as external consultants, the fact that we've been immersed in the project for a year, sometimes leading the development ourselves, changes our vision and focus. Peter's fresh vision reminded me of myself a year ago, when the gig started, and reinforced my willingness to stick to my principles. Thank you Peter!

Notes that I wrote down during working sessions:

  • Code metrics are exactly what we need now to show everyone the health of the code base. We have enough code already, and we must improve the readability of some tests, reorganise dependencies, clean up namespaces and some other things we know are important. A tool like Sonar or something similar will provide us with metrics to measure the improvements over time. It's definitely the time to visualize code metrics.
  • Dependency diagrams are another type of metric that we should use now: a quick overview of how the various namespaces are related to each other.
  • There are certain areas of the hexagon suffering from the "primitive obsession" code smell. We want more objects within our business domain, and fewer primitives.
  • More information in commit comments. Apart from the refactoring we apply or the names of the tests turning green or red, I'll try to explain briefly what the benefit of every single commit is: the reason to change the code. I've found myself searching for particular design decisions in the commit history and realized that the information in the comments was not enough to find what I wanted. Example: why I moved a method to a class, or extracted a class... The "why" is not in the code, so it's a good idea to add it in the form of comments in the version control system.
  • I want to learn how to work with Visual Studio without the mouse, because it's faster with the keyboard only. This is an old resolution that I want to tackle now.
  • I realised that our style of "mob programming" is different from the original style proposed by Woody Zuill and his team. In the original proposal, the driver is mostly "keyboarding", focusing on typing what the navigators tell him to do. Our approach is different: we have the discussion, and the person who clearly sees how to translate the idea into code jumps on the keyboard and expresses it in code; no one tells the driver what should be written. The driver leads the session while he is at the keyboard. It's sometimes easier to express an idea by writing some code or pseudo-code than by trying to explain how the code is going to look, and I think this has to do with the fact that talking constantly exhausts me. It also has to do with focus: the driver is the person who can implement the discussed idea fastest, who knows the code best. Whenever the driver is stuck, the navigator who knows how to carry on asks for the keyboard and becomes the driver. I find dictating awkward and sometimes irritating. So we are not following their rule: "for an idea to go from your head into the computer it MUST go through someone else's hands". We try not to talk whilst the driver is typing, just like in pair programming, because we want the driver to be able to listen to what the navigators have to say, and I can't type and listen at the same time. Perhaps we are not doing mob programming at all. But it works for us.
  • In order to keep the good energy and motivation in a mob or pair programming session, it's better to give up or take a break than to work without feeling like it. I need to feel that my colleagues feel like working, and thus I must feel like doing so too.
  • I really like the way Peter summarized the Hexagonal Architecture, focusing on the direction of the dependencies. A simple diagram with two boxes joined by an arrow pointing from one to the other was enough for people to understand it. The onion was also a good idea.

Regarding the Global Day of Code Retreat, there are a bunch of things I learned from Peter and from the day itself:

  • If there are no sponsors buying food, I want to buy food myself for me and for all the participants. This year I brought the food in the afternoon, but I should have brought it in the morning, when people were hungry. Next time I'll arrive in the morning with breakfast for everyone. I missed the relaxing moment of the morning break, sharing some fruit and coffee or tea.
  • Whenever I talk very seriously, aiming to stress something important, I come across as rude. I don't realize it until I see people's faces, and by then I feel bad about it. To avoid this situation I'll think twice before sending the message. I must be quiet, talk more slowly, think from a positive angle and use positive words. Words like "no" or "not" or "don't" may be negative. I want to read about Non Violent Communication and practice it.
    As an example, during the retreat, in one iteration's retrospective, I said that the facilitators (Peter and me) were not there to teach. My aim was to encourage people to learn by themselves rather than expecting some kind of master class or demo from our side, but the way I said it was understood as "I am not here to help you" or "I don't care about you", as some people told me afterwards. It had a negative impact on some people's motivation.
  • At the end of the day, after the final retrospective, in the pursuit of feedback and appreciation I spoke in such an unfortunate way that I made people feel indebted to us. Again, I didn't realise it until I saw people's reaction. My aim was to discover what benefits the participants had found during the day, what they took away, so that next time I could maximize it with specific actions or exercises.
    When it's obvious that I've dedicated time and energy to working for others, I must be very careful not to express that explicitly, because it will make people feel indebted, which is not what I want.

Several friends have spoken to me about Non Violent Communication (NVC) over the last 5 years or so; I believe Diego Rojas was the first one. Working with Peter was the trigger point for me to really start digging into the subject. Thank you Peter for the recommendation and for this excellent video by Marshall Rosenberg, the father of NVC:

What is a ViewModel?

A ViewModel is one of the three parts that comprise MVVM (Model View View-Model). This pattern is a variation of the Presentation Model (M. Fowler). In the Presentation Model, as opposed to Passive View, the "controller" doesn't hold any reference to the view; it isn't called "controller" but "Presentation Model". The View is the one which references the Presentation Model. The Presentation Model communicates changes to the View through events and data binding. The View handles GUI events (user interaction) and may invoke methods on the Presentation Model.

In 2005, John Gossman from Microsoft introduced the MVVM pattern, which has been evolving since then. In this post, Josh Smith explains the pattern as it's commonly used today, with an example. According to that post, a ViewModel is an object which wraps a model (CustomerViewModel contains a Customer model inside). However, another ViewModel is the MainWindowViewModel, an abstract representation of the MainWindow. So there you go: the same term is used for two different things. So what is a ViewModel?

In this other example by Jeremy Likness, there is a class called Contact which he says is a Model, a domain model. I bet that to some people Contact is rather a ViewModel, especially given its implementation of INotifyPropertyChanged. This is getting more and more confusing!

In our team, a ViewModel is the class handling all the logic related to the GUI. It acts as an Application Service, the entry point to the hexagon. I believe that in his book Vaughn Vernon discourages the reader from using this kind of "controller" as an application service, but we haven't encountered any good reason not to use it as such. In fact, I find the definition of application service quite contradictory in that book. We never expose domain models directly to the GUI. We may wrap domain models in other objects which in turn are bound to the GUI (data binding) and implement interfaces like INotifyPropertyChanged, although more often than not our domain models are not even there. These objects are definitely a kind of "view" of the model; I was tempted to call them "ModelView". I understand they may be called ViewModels, it's reasonable; in some articles the ViewModel is more model-ish than view-ish. Nonetheless, to avoid the dilemma of calling these objects ViewModels, we decided not to add the *ViewModel suffix to them. We simply keep them under the *.ViewModels namespace, but they are just objects: a Vehicle, a Customer...
What we definitely know is that they are not domain models. They are anemic models to support a particular view of the application. They should not hold references to other services. The application service (what we call the ViewModel) holds the references to these data-bound objects in order to get the information from the user.

It's important to remember the Presentation Model: it does not make calls to the view and it should not hold references to any view component. If the GUI logic requires popping up a confirmation dialog, the Presentation Model (ViewModel) should not make a direct call to some kind of "Dialog"; it should rather trigger an event that the View can listen to in order to show the dialog.
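A minimal sketch of that event-based collaboration; the event name, the method and the message are invented for illustration:

    using System;

    // Illustrative sketch: the view model only raises an event;
    // it knows nothing about dialogs or any other view component.
    public class VehicleViewModel
    {
        public event Action<string> ConfirmationRequested; // hypothetical event

        public void DeleteVehicle()
        {
            if (ConfirmationRequested != null)
                ConfirmationRequested("Delete this vehicle?");
        }
    }

    // In the view's codebehind, the event is translated into an actual dialog:
    //   vm.ConfirmationRequested += async message =>
    //       await new Windows.UI.Popups.MessageDialog(message).ShowAsync();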

What do you think? I appreciate your comments and thoughts 😉

Heuristics, bad smells and principles in the design of our Windows 8 app, which shares its core domain with a rich JavaScript client application. In Part I we exposed some of the difficulties we were facing when modeling. This is how we are working today:

  • Our repositories work with aggregates (entities), that is, rich domain models rather than plain anemic models. The repository performs database queries and then passes the raw results to the aggregate's constructor or to some adapter to build up the model. Thus the repository returns models to the service layer and receives models.
  • Currently an object is a DTO only if it's used to send and receive data through the network (via HTTP requests).
  • Bad smell: only certain fields of a DTO are used in one part of the application and the other fields are used somewhere else. That means we are trying to reuse concepts that are actually different. The solution is to split the object in two. Principle: a data object has to be consistent; all its fields must be in use.
  • The objects we bind to the GUI are not called DTOs anymore. But we are not happy calling them ViewModels either, because we still want to make the distinction between a data object bound to the GUI and a kind of "Controller" that manages GUI logic and depends on collaborators like application services. So we are avoiding prefixes and suffixes for those objects and using their namespaces to distinguish them. For example, if "Vehicle" happens to have the same properties in the model, the communications layer and the GUI, there will be three objects:
    • ProjectName.Model.Vehicle
    • ProjectName.Dto.Vehicle
    • ProjectName.ViewModel.Vehicle
  • The transformations between DTO and Model are performed by an adapter, as described in Part I. But we do not create the three objects from the very beginning when there is just one object with one or two fields. We may use the same object all over the place until it grows and starts acquiring some logic.
  • We usually talk about "bindable objects" to refer to those data objects bound to the GUI. Sometimes the transformation between domain model and bindable object is performed by some ViewModel/Controller; sometimes we delegate the task to an adapter within the ViewModel namespace.
  • Transfer the minimum amount of data for each operation, no more. A single service may work with many different DTOs because each operation requires a different data set. For instance, the OfferService (an application service) has public methods to create and update offers. The OfferDto used to contain all the data required for both operations:
         public void Create(OfferDto offer);
         public void Update(OfferDto offer);
    However, only a subset of the data is required for each operation. Now we prefer to exchange a minimal amount of data (see the sketch after this list):
         public void Create(NewOffer offer);
         public void UpdatePrice(OfferId offerId, decimal price);
         public void Update(UpdatedOffer offer);

    This approach helps us realize those objects are used just to transfer data, nothing more.

  • Keep data structures simple on the client/GUI side: if all I need from some entities is their identities, I don't need whole objects in that view; I can just hold a list of strings. Once the strings enter the hexagon, they'll be turned into domain models.
  • Keep the outer interface of the hexagon simple so as to make it simple to interact with: it's OK to receive primitives as arguments in our application services or actions.
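As referenced above, a hedged sketch of those minimal data objects. Only the OfferService signatures appear in the post; the field lists are assumptions for illustration:

    // Illustrative sketch: each operation receives only the data it needs.
    public class NewOffer
    {
        public string VehicleId { get; set; }
        public decimal Price { get; set; }
    }

    public class UpdatedOffer
    {
        public OfferId Id { get; set; }
        public string Description { get; set; }
    }

    public class OfferId
    {
        public readonly string Value;
        public OfferId(string value) { Value = value; }
    }

    public class OfferService
    {
        public void Create(NewOffer offer) { /* ... */ }
        public void Update(UpdatedOffer offer) { /* ... */ }
        public void UpdatePrice(OfferId offerId, decimal price) { /* ... */ }
    }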

 

This post is about two of the XP values: Simplicity and Feedback. Fundamental and usually forgotten.

When we write software or draw a user interface, we are shaping one particular solution among the many that a problem may have. By choosing one solution we automatically discard the others, because we never go back and write a different solution if the one we have already "works". This is why we must constantly reflect on the complexity of the solution we have chosen. The best thing is to talk it over with colleagues, to discuss it. Often, by explaining the idea, we discover that we can achieve the same thing in a simpler way.

The key to being able to simplify is to keep in mind the business goal (what fundamental problem we are solving). But not the long-term goal of the software; the goal of the current release. We ship frequent releases in short cycles to get feedback, and each one has the goal of learning something specific about the business or the users. Each release offers some new feature or a change that helps them in their daily work, without trying to cover too much at once. I like the list of features and changes of a release to fit in my head, to be countable on the fingers of one hand without needing documents. And I like there to be one predominant goal.

To simplify, we must keep in mind the goal of the current release, which is not the same as the goal of the release we will ship in 3 months. Three months from now we will have changed a good part of the code, sometimes through refactoring and sometimes because we realize there are other needs. Life changes in 3 months, and in less.

It's worth going over the solution more than once; we have to "re-simplify" as much as possible. It takes little time and has a very big payoff in the medium and long term. "OK, it works; now let's give it some thought to see whether we can simplify it, whether we can delete half of the code and have it still work."

The code has to be easy to understand, and its simplicity is more important than the use of patterns. We don't have to apply the MVVM pattern in absolutely every part of the presentation layer; sometimes its complexity is not justified. By the way, let's remember that a ViewModel is a representation of the view and not a DTO; a ViewModel is a kind of "controller".

Solutions should be as open as possible, to give the user room for manoeuvre and to learn their preferred way of working. We have to identify the unnecessary limitations we may be introducing, both through the user interface and at the code level, in order to avoid them. An open solution focuses on the problem to solve, on the "what", and easily allows changes of form, the ability to pivot towards other things.

For example, imagine there are 3 fundamental actions the user can perform in the application. They don't need to be performed in any particular order; the business goal is that the user can perform actions A, B and C whenever it suits them best. So let's not force the user to perform B first in order to execute A. That would be introducing an unnecessary limitation.

Now an example of excessive complexity: imagine we want to show the salesperson the availability of a certain vehicle model, that is, whether it's in stock. The business goal is for them to know whether they can sell that model or not. Why would we count how many vehicles of that model are available? They don't care whether there are 24; they only need to know whether there is a unit available for sale or not. To notice this excess of complexity we have to remember the goal of the release. Remember the purpose of the current release.
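A tiny sketch of the difference, with invented names (StockService and its collaborator are not from the actual project): exposing a boolean answers the actual business question, while a count adds complexity nobody asked for.

    // Illustrative sketch: the view only needs a yes/no answer.
    public class StockService
    {
        private readonly StockRepository stock; // assumed collaborator

        public StockService(StockRepository stock)
        {
            this.stock = stock;
        }

        // Simpler and sufficient for the release goal:
        public bool IsAvailable(string model)
        {
            return stock.CountUnitsOf(model) > 0;
        }

        // Unnecessary for this release: the exact count.
        // public int AvailableUnitsOf(string model) { ... }
    }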

The less software we write, the better. The line that never fails is the one that isn't written. Every line we write is a commitment that has to be maintained, until someone comes along and deletes it. When we are tight on time to meet the release goal (in the software world this is always), we must cut scope (functionality) and never quality. We have to do everything possible to respect the planned delivery dates, achieving the goal by removing the features that are less important right now.

2014: A great year

The last hours of December 31st, 2014. For millions of people it's a day of celebration. Approximately 75% of the population changes year every January 1st following the Gregorian calendar, although only a few of us can throw a party. For many it's the last day of their life; for others it's the first. According to the UN, more than 30 million people are displaced because of armed conflicts, and many of them are in the middle of a war, fighting for survival. Many will die of hunger and others of disease. Billions of animals suffer the holocaust of industrial farms and slaughterhouses to satisfy a minority of the planet's humans. And the planet, according to various organizations, is at the worst moment in its history.

Without ignoring everything that is happening, there is still room for optimism, hope and gratitude. My wish is to become more optimistic and happier every day without needing amnesia, ignorance or indifference. Doing everything in my power for others. Consuming responsibly.

The roadmap consists of living the present moment attentively, grateful for so many good things that happen to me.

2014 has been the best year of my professional career since I started working independently. Everything that could go well has gone well. I've been lucky enough to visit many companies and work with great people. I've had so much work this year that I'm not going to list all the places I've been and the people I've worked with, because I would surely leave someone out. I loved watching colleagues of the trade like Luis Fraile or Luis Ruiz Pavón at work, and it was a pleasure to discover Carlos Bastos.

At the beginning of 2013 I thought I should focus my efforts on foreign markets, above all the UK and Germany, but this year the national market has given me much more than I expected and I've rarely left the country. Practically every week I received some request for training or consultancy, and I've had several custom product development engagements with which I've had a blast. I've managed to program as much as I wanted. I've also been able to start recording the programming screencasts I wanted (and I'll make more).

But the most important thing is the alliances that have been forged along the way. Without looking for it, we are closing the year working as teams instead of as a solitary professional. I'm delighted to close 2014 working with my colleagues Luis Rovirosa, Juan M. Gómez, Imobach Martín, Nestor Bethencourt, Modesto San Juan, Alfredo Casado and Fran Reyes. And with all my colleagues at AIDA (more on that below).

Luis Rovirosa is helping me with consultancy and training at several clients; he is an ally I had long wanted to collaborate with. Juan, Imo and Nestor are developing with me the template editor of the popular MDirector, Antevenio's email marketing tool, a very nice project. Alfredo, Modesto and Fran are with me and the rest of my AIDA colleagues working on an exciting project for Grupo Domingo Alonso. And there are more projects on the way. For this reason I'm in the process of incorporating an SL (a Spanish limited company), which will be my second one. I'm already changing the front page of this site (carlosble.com) and soon there will be more photos and info about everyone.

With such a group of professionals we can meet all the demand that reaches me, and I can delegate with peace of mind. Being able to delegate without worries is priceless.

Closing the year working for Grupo Domingo Alonso is a stroke of luck. It's one of the biggest companies in the Canary Islands, although I didn't know it because cars don't catch my attention. A consultancy engagement with them came up, partly through our friend and mentor Jose Juan Hernandez from the ULPG, and I got to know them. I was delighted with the human quality of the team, their values, their energy and the project itself. I had no intention of proposing a collaboration to carry out the project, but once I got back home and a few days had passed, I realized that I wanted to work there and sent them a proposal. It's a super challenge: the construction of an extremely complex Dealer Management System. The challenge is so big that I needed to call for reinforcements and, to my surprise, Alfredo, Modesto and Fran dropped everything (well, not their partners!) and moved with me to Gran Canaria to get into the project. We are two months into the project and I like everything I learn about it more and more. The human capital of AIDA (Aplicaciones Informáticas Domingo Alonso) is incredible. At the Christmas party our colleague Alejandro Artiles put together this funny "Star Wars"-style video explaining what we are living through in these early stages of the project:

As if that weren't enough, in June this year my father was born again. He was admitted to the emergency room and the doctors said he wouldn't survive: a fulminant colon cancer. However, he has overcome it completely in a very short time. My father tells the story on his blog. It has been the best news of the year, the best thing that has happened.

This post is, above all, to say a thousand thanks to all of you who are making this great moment possible. Thanks to everyone who has counted on me this year. It's been a pleasure and I'm sure there is much more ahead.

Thanks to Dácil for always being there, for being my engine. Thanks to my mother for taking care of all the animals in our shelter when I can't do it because of my work. This year wouldn't have been the same without my mother; I don't know how we would have managed.

Wow, it's 2015 already! Happy new year!

Practising Mob Programming


At the last edition of Socrates UK, I met Gianfranco Alongi, who told us about his team's experience with Mob Programming. It was the first time I had heard about it. As the site says, mob programming is people working at the same time, in the same space, at the same computer, on the same thing. Gianfranco said it was very useful for them.

At the beginning of November, three of my fellow craftsmen and I joined a team of 12 other developers to work on a new project. We are building a DMS (Dealer Management System) and an IMS (Importer Management System). The project is very exciting, but we don't have a clue about the domain. We are also a lot of people with different programming styles. So we decided to give mob programming a try, and after a few weeks the results are very positive.

 

Some lessons we have learned along the way:

  • Like with pair programming, navigators should take notes whilst the driver is at the keyboard, rather than disturbing him constantly.
  • Only when the driver raises his hands off the keyboard are we allowed to discuss.
  • Rotate frequently: it's faster to jump on the keyboard and do it yourself than to try to explain the code you envision.
  • Limit the group size to 5 or 6 people. We find that 4 is a good number; more than that is not productive for us. Sometimes a bigger group is useful though, when we want to define conventions for the whole team.
  • The group should be stable: if new people join the discussion in the middle, we have to explain the code and the decisions we have already made again and again. It's like the never-ending story.
  • At some point I had to leave a session right in the middle of something. When I came back, a colleague told me that the people who were shy or less proactive during my driving period had started proposing interesting ideas during my absence. The people considered "seasoned" or "experienced" should go away from time to time so the others feel empowered.

So far the project is just fantastic. People are awesome, learning incredibly fast, very enthusiastic. Everyone wants to learn and contribute. We are all learning a lot and doing our best. I can't think of a better project. We are very lucky.


Implementing BDD – part 1

Scenario review


This is the first of a series of posts on how our team is practicing BDD. These are mostly notes to myself and our team so other readers may not find enough context to understand the points.


After a few weeks we have decided that:

  • Although source code is written in English, the specifications are written in Spanish. Because the business is always discussed in Spanish, it's the language used to determine what we need to do. So the Cucumber scenarios are written in Spanish.
  • There is a glossary of business terms to help us document the ubiquitous language. The glossary contains the translations from Spanish to English so that the source code expresses the same concepts.
  • If developers need to change or adjust the scenarios for some reason (for instance, automation purposes), then the business analysts have to validate the changes.
  • Scenarios will be stored in one place only, and that place is SpecFlow (Cucumber). This means that business analysts need to have all the tools installed and access to the source control repository (Visual Studio and TortoiseHG).
  • After the specifications workshop, scenarios are written in pairs or groups of 3 or 4 people. Two people are usually enough. We've realized that writing the scenarios in pairs helps us come up with more questions and more ideas to simplify the problem.
  • The automation of the scenarios should also be done in pairs, because this is yet another opportunity for us to detect inconsistencies or doubts. Well, in general we just pair most of the time.
  • A user story is not ready for implementation until it's got at least 4 scenarios. We started with just one, and the lack of context caused us to introduce wrong models in the code. Actually, when test-driving a small function we never start out with a single example but with a to-do list. Then we focus on each test, writing the minimum code required, but we do know where we want to go. This is the same, but at a higher level.
  • Given that our business analysts know how to read code, they should read the source code at the end of the iteration. By reading the part of the code inside the hexagon (we use hexagonal architecture), they can ensure the designed model makes sense.
  • Scenarios should not repeat irrelevant details. If one scenario uses real data like a "plate number", the following scenarios don't need to repeat the whole plate number, just something like "x". We should keep in mind that scenarios complement each other in terms of documentation. They should be independent for execution but not for documentation.
  • Triangulation: we can make a scenario go green quickly. However, triangulating the code that really works for all the cases falling under that scenario may require more code. If the triangulation does not concern business details, we use NUnit to test-drive rather than SpecFlow. Scenarios are used for documentation, not for triangulation, unless the triangulation covers relevant business details.
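To illustrate how Spanish scenarios meet English code, here is a hedged sketch of a SpecFlow step binding. The step texts, regexes and the VehicleRegistry class are invented for illustration; only the SpecFlow/NUnit combination comes from the post:

    using TechTalk.SpecFlow;
    using NUnit.Framework;

    // Illustrative sketch: Spanish step text bound to English code.
    [Binding]
    public class MatriculacionSteps
    {
        private VehicleRegistry registry; // hypothetical domain class
        private bool registered;

        [Given(@"un vehículo con matrícula ""(.*)""")]
        public void DadoUnVehiculoConMatricula(string plateNumber)
        {
            registry = new VehicleRegistry();
            registry.Add(plateNumber);
        }

        [When(@"consulto la matrícula ""(.*)""")]
        public void CuandoConsultoLaMatricula(string plateNumber)
        {
            registered = registry.Contains(plateNumber);
        }

        [Then(@"el vehículo está registrado")]
        public void EntoncesElVehiculoEstaRegistrado()
        {
            Assert.IsTrue(registered);
        }
    }

The glossary mentioned above is what keeps these translations consistent: "matrícula" in the scenarios always maps to "plate number" in the code.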

It was by chance that one of our business analysts came along to a mob programming session and got to see code related to the scenarios that we had defined together. He quickly realised that we were introducing aliens into the domain (wrong models). It was because we were lacking some other scenarios. The fact that the code was on the wall in a massive font size and really communicated our intent, together with the fact that the reader is very clever, made us discover a useful technique to double-check our assumptions.

 

Implementing a fake repository

Lately I have been using fake objects more than the other common kinds of doubles: stubs, spies and mocks. This is quite recent; I used to prefer stubs, spies or mocks. I find that fakes make my tests less fragile in the face of design changes when doing outside-in development. The problem is that there can be defects in the implementation of the fake objects. A tricky one may appear in the case of in-memory repositories:

    public class InMemoryRepo : JobRepository {
        readonly List<Job> jobs = new List<Job>();

        public void Add(Job job)
        {
            jobs.Add(job);
        }

        public void Update(Job job)
        {
            // TODO
        }

        public List<Job> FindBy(string ownerId)
        {
            return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
        }
    }

This implementation is misleading. If we are test-driving a service expecting it to update a Job instance, we may forget to invoke the repository's Update method and it would still work.

    [TestFixture]
    public class ServiceTest {
        [Test]
        public void AssignsOwnerToJob()
        {
            var service = Factory.CreateJobServiceWithFakeRepository();
            var job = service.CreateJob(irrelevantTitle);

            service.Assign(job, irrelevantOwnerId);

            service.JobsOwnedBy(irrelevantOwnerId).Should().HaveCount(1);
        }
    }

    public class Service {
        private JobRepository repo { get; set; }

        public Service(JobRepository repo){
            this.repo = repo;
        }

        public Job CreateJob(string title)
        {
            var job = new Job();
            repo.Add(job);
            /*...*/
            return job;
        }

        public void Assign(Job job, string ownerId){
            /*...*/
            job.Assign(ownerId);
            /* BUG: I'm forgetting the call to "repo.Update(job)" right here */
        }

        public List<Job> JobsOwnedBy(string ownerId){
            return repo.FindBy(ownerId);
        }
    }

The problem is that the in-memory repository is using the same object references all the time. When an object is passed in as an argument, what is actually passed is a copy of the reference. However, if the repository were accessing a database, it would most likely return different instances (references). A better implementation of the repository would be this one:

    public class InMemoryRepo : JobRepository {
        readonly List<Job> jobs = new List<Job>();

        public void Add(Job job)
        {
            /* clone: the object added to the repo
               is different from the given one */
            jobs.Add(job.Clone());
        }

        public void Update(Job job)
        {
            /* find the given job, remove it from the collection
               and add a clone of the given one
               (assuming Job's Equals compares identity) */
            jobs.Remove(job);
            jobs.Add(job.Clone());
        }

        public List<Job> FindBy(string ownerId)
        {
            return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
        }
    }
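For the snippets above to compile, Job needs identity-based equality plus Clone and IsOwnedBy. A minimal sketch under those assumptions (the id field and its Guid-based generation are invented for illustration):

    using System;

    // Minimal sketch of the Job entity assumed by the repository above.
    public class Job
    {
        private readonly string id;
        private string ownerId;

        public Job() : this(Guid.NewGuid().ToString()) { }

        private Job(string id)
        {
            this.id = id;
        }

        public void Assign(string ownerId)
        {
            this.ownerId = ownerId;
        }

        public bool IsOwnedBy(string ownerId)
        {
            return this.ownerId == ownerId;
        }

        public Job Clone()
        {
            // Same identity, different reference: this is what makes
            // the fake behave like a database-backed repository.
            var clone = new Job(id);
            clone.ownerId = ownerId;
            return clone;
        }

        // Identity-based equality so that List.Remove finds the stored clone.
        public override bool Equals(object obj)
        {
            var other = obj as Job;
            return other != null && other.id == id;
        }

        public override int GetHashCode()
        {
            return id.GetHashCode();
        }
    }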

Conclusion: the behavior of the fake should be the same as the original's when it comes to handling object references.