Archive for the ‘Clean code’ Category

Learning with Peter

Last week I was lucky to host my good friend Peter Kofler during his visit to Gran Canaria, where he came to facilitate the Global Day of Code Retreat and also to spend a couple of days working together on our biggest project at the moment.

We've been working on the same project for a year now; our team joined the client's team to play several roles, from mentoring to developing features ourselves. Peter's visit was a fantastic learning experience and also a way to bring back principles that wear out as we face recurring issues over time. Even though we started as external consultants, the fact that we've been immersed in the project for a year, sometimes leading the development ourselves, changes our vision and focus. Peter's fresh perspective reminded me of myself a year ago, when the gig started, and reinforced my willingness to stick to my principles. Thank you Peter!

Notes that I wrote down during working sessions:

  • Code metrics are exactly what we need now to show everyone the health of the code base. We have enough code already and we must improve the readability of some tests, reorganise dependencies, clean up namespaces and a few other things we know are important. A tool like Sonar or something similar will provide us with metrics to measure the improvements over time. It's definitely the time to visualize code metrics.
  • Dependency diagrams are another kind of metric we should start using now: a quick overview of how the various namespaces relate to each other.
  • There are certain areas of the hexagon suffering from the "primitive obsession" code smell. We want more objects within our business domain and fewer primitives.
  • More information in commit comments. Apart from the refactoring we apply or the name of the tests turning green or red, I'll try to briefly explain the benefit of every single commit: the reason to change the code. I've found myself searching the commit history for particular design decisions and realized that the information in the comments was not enough to find what I wanted. Example: why I moved a method to a class, or extracted a class... The "why" is not in the code, so it's a good idea to add it in the form of comments in the version control system.
  • I want to learn how to work with Visual Studio without the mouse, because it's faster with the keyboard only. This is an old resolution that I want to approach now.
  • I realised that our style of "mob programming" is different from the original style proposed by Woody Zuill and his team. In the original proposal, the driver is mostly "keyboarding", focused on typing what the navigators tell him to do. Our approach is different: we have the discussion, and the person who clearly sees how to translate the idea into code jumps on the keyboard and expresses it in code; no one tells the driver what to write. The driver leads the session while he is at the keyboard. It's sometimes easier to express an idea in code or pseudo-code than to explain how the code is going to look, and I think this has to do with the fact that talking constantly exhausts me. It also has to do with focus: the driver is the person who can implement the discussed idea fastest, who knows the code best. Whenever the driver is stuck, the navigator who knows how to carry on asks for the keyboard and becomes the driver. I find dictating awkward and sometimes irritating. So we are not following their rule: "for an idea to go from your head into the computer it MUST go through someone else's hands". We try not to talk whilst the driver is typing, just like in pair programming, because we want the driver to be able to listen to what navigators have to say, and I can't type and listen at the same time. Perhaps we are not doing mob programming at all. But it works for us.
  • In order to keep the energy and motivation up in a mob or pair programming session, it's better to give up or take a break than to keep working without feeling like it. I need to feel that my colleagues feel like working, and so I must feel like doing so too.
  • I really like the way Peter summarized the Hexagonal Architecture, focusing on the direction of the dependencies. A simple diagram with two boxes joined by an arrow pointing from one to the other was enough for people to understand it. The onion diagram was also a good idea.

Regarding the Global Day of Code Retreat, there are a bunch of things I learned from Peter and from the day itself:

  • If there are no sponsors buying food, I want to buy food myself for me and for all the participants. This year I brought the food in the afternoon, but I should have brought it in the morning, when people were hungry. Next time I'll arrive in the morning with breakfast for everyone. I missed the relaxing moment of the morning break, sharing some fruit and coffee or tea.
  • Whenever I talk very seriously aiming to stress something important, I come across as rude. I don't realize it until I see people's faces, and by then I feel bad about it. To avoid this situation I'll think twice before sending the message. I must be calm, talk more slowly, think from a positive angle and use positive words. Words like "no" or "not" or "don't" may sound negative. I want to read about Non Violent Communication and practice it.
    As an example, during one iteration's retrospective at the retreat I said that the facilitators (Peter and me) were not there to teach. My aim was to encourage people to learn by themselves rather than expecting some kind of master class or demo from our side, but the way I said it was understood as "I am not here to help you, I don't care about you", as some people told me afterwards. It had a negative impact on some people's motivation.
  • At the end of the day, after the final retrospective, in the pursuit of feedback and appreciation I spoke in such an unfortunate way that I made people feel indebted to us. Again, I didn't realise it until I saw people's reaction. My aim was to discover what benefits participants had found during the day, what they were taking away, so that next time I could maximize it with specific actions or exercises.
    When it's obvious that I've dedicated time and energy to working for others, I must be very careful not to express that explicitly, because it will make people feel indebted, which is not what I want.

Several friends have spoken to me about Non Violent Communication (NVC) in the last 5 years or so (I believe Diego Rojas was the first one), and working with Peter was the trigger for me to really start digging into the subject. Thank you Peter for the recommendation and for this excellent video by Marshall Rosenberg, the father of NVC:

C#, Java and other languages have the same behaviour when it comes to reference types.

    public class SomeClass {
        public string someField;
    }

    var instance1 = new SomeClass(); // instance1 is a reference to an object in memory
    var instance2 = instance1;       // instance2 is a copy of the reference
    instance2.someField = "changed";
    instance1.someField == instance2.someField; // -> true
    instance2 = new SomeClass();
    instance2.someField = "changed again";
    instance1.someField != instance2.someField; // -> true -> they are different objects
    instance1.someField; // -> "changed" -> nothing changed

The dot symbol after the variable name (instance1 or instance2) accesses the actual
object referenced by that variable. So before the dot, we have a variable referencing an object, and
after the dot we have the actual object.

    instance1.someField;

Means: take the reference instance1, then access the object it points to, then access someField.

Passing objects to functions has exactly the same behaviour: function parameters behave like variables assigned to the original arguments.

    public static void SomeMethod(SomeClass arg1){
        arg1.someField = "changed";
    }

    var instance1 = new SomeClass();
    SomeMethod(instance1);
    instance1.someField; // -> "changed" -> the field has changed

    public static void OtherMethod(SomeClass arg1){
        arg1 = new SomeClass();
    }

    var instance1 = new SomeClass();
    instance1.someField = "changed";
    OtherMethod(instance1);
    instance1.someField; // -> "changed" -> nothing changed

Instances of SomeClass are mutable because the value of someField may be changed.
This mutation may happen by mistake, as a result of uncontrolled access to the object via some copy of its reference, causing unexpected side effects such as defects and memory leaks.
As long as the application code can reach an object (has some reference to it), the garbage collector can't free the memory allocated for that object. A short version of our desktop app architecture serves as an example:

    public static class EventBus{
        private static ISet<Subscriber> subscribers = new HashSet<Subscriber>();

        public static void AddSubscriber(Subscriber subscriber){
            subscribers.Add(subscriber);
        }
        ...
    }

    public class View{
        public ViewModel ViewModel;

        public View(ViewModel viewModel){
            ViewModel = viewModel;
        }

        public void Init(){
            EventBus.AddSubscriber(ViewModel);
        }
    }

The life cycle of the View instance is controlled by the framework, not by us. It may create a new instance every time the view is shown on screen and destroy the instance as it disappears. However, we are adding a reference to the static list of subscribers in the EventBus. As long as the subscribers list is not flushed, the garbage collector won't be able to free the memory for the ViewModel and View instances, even though the view may not even be displayed. Opening that view many times will increase memory consumption every time: that is a memory leak. In this particular case we unsubscribe the instance from the bus before hiding the view:

    public class View{
        public ViewModel ViewModel;

        public View(ViewModel viewModel){
            ViewModel = viewModel;
        }

        public void Init(){
            EventBus.AddSubscriber(ViewModel);
        }

        public void Clear(){
            EventBus.RemoveSubscriber(ViewModel);
        }
    }

In the case of the bus there isn't much we can do to avoid having two references to the same object in two different places; we just have to be aware of this behavior. In C# there is the concept of a WeakReference, but as far as I know we can't use it on WinRT (tablets).

In some other cases though we may avoid side effects:

  • Avoid more than one reference per object, avoid state
  • Keep variables local, avoid instance variables (fields)
  • When the object is a value object, design it to be immutable
  • In the case of collections, use ReadOnlyCollection when they must keep their size
  • If the object can't be designed to be immutable but you need to avoid state changes at all costs, clone the object, returning a deep copy of it
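To illustrate the value-object point with code: the post's examples are C#, but as noted earlier Java behaves the same way with reference types, so here is a minimal Java sketch (the Money class is a hypothetical example, not taken from the project). All fields are set once in the constructor and "modifications" return new instances, so no copy of the reference can observe a surprise mutation:

```java
import java.util.Objects;

// Hypothetical immutable value object: final fields, no setters,
// and operations that return fresh instances instead of mutating state.
final class Money {
    private final long amountInCents;
    private final String currency;

    Money(long amountInCents, String currency) {
        this.amountInCents = amountInCents;
        this.currency = currency;
    }

    long amountInCents() { return amountInCents; }

    Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        // A fresh object is returned; 'this' is left untouched.
        return new Money(amountInCents + other.amountInCents, currency);
    }

    @Override public boolean equals(Object obj) {
        if (!(obj instanceof Money)) return false;
        Money other = (Money) obj;
        return amountInCents == other.amountInCents
            && currency.equals(other.currency);
    }

    @Override public int hashCode() {
        return Objects.hash(amountInCents, currency);
    }
}
```

Because every holder of a reference sees a value that never changes, the uncontrolled-mutation problem described above simply cannot occur.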

We may be tempted to clone objects every time someone asks for a reference to them. However this may not be possible (as with the EventBus), or it may be too expensive and complex. I'd say that cloning is the last alternative, and perhaps the need for it is a design smell. Who is responsible for ensuring that object references are not causing memory leaks? It boils down to the principle of least surprise. We should design interfaces (methods) in such a way that it's obvious how references and state will be managed. If it looks like the consumer (the one asking for the reference) will not be aware of it and will likely make undesired changes to the object, then cloning the object could be a good defensive approach. But I would rather think about how to make the API more expressive, considering context and level of abstraction. I assume that the caller understands how references work.
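If cloning really is the last resort, a defensive copy can look like this minimal Java sketch (the Route class and its API are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of defensive copying: the owner of a mutable
// list hands out a copy, so callers cannot mutate its internal state.
class Route {
    private final List<String> stops = new ArrayList<>();

    void addStop(String stop) {
        stops.add(stop);
    }

    // Copying the list is enough here because String is immutable;
    // mutable elements would need to be copied too (a deep copy).
    List<String> stops() {
        return new ArrayList<>(stops);
    }
}
```

Mutating the returned list has no effect on the Route's internal state, which is exactly the protection cloning buys, at the cost of an extra allocation per call.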

If you, dear reader, provide some code examples I will be able to clarify my point with code. I'll update this post if I come up with some snippets.

EventBus in a Windows 8 app

How do we communicate between the different pages of a Windows 8 app? How do we manage the life cycle of the pages?

Pages are instantiated by the framework when asked to navigate:


    frame.Navigate(typeof(MyPage));

It will be a new instance of MyPage every time unless the pages are cached. To set up the page cache, add the NavigationCacheMode attribute at the beginning of the page's XAML:


    <Page
        x:Name="pageRoot"
        x:Class="MyNamespace.MyPage"
        NavigationCacheMode="Enabled"
        ...

The cache can be cleared as described here, as long as the NavigationCacheMode is "Enabled" and not "Required": "Required" pages can't be deleted from the cache, as explained by Andreas Hammar.

The page's life cycle is controlled by the framework, although we can configure the cache. Therefore my objects (those whose life cycle is controlled by myself) should not reference pages, in order to avoid memory leaks and undesired side effects.

We've decided that Pages and other UserControls are the ones which create and manage the ViewModels' life cycle (we're using the MVVM pattern). The page's codebehind (MyPage.xaml.cs) contains a reference to ViewModel.cs, which is a plain old C# object that contains the GUI logic. MyPage instantiates the ViewModel in its constructor, so there should be no references to ViewModels from other objects.


    public MyPage(){
        InitializeComponent();
        ViewModel = new ViewModel();
        DataContext = ViewModel;
    }

However I can't avoid references to the ViewModel from the EventBus (simple pubsub static class) because it's the mechanism we are using to communicate different parts of the application:


    public void SubscribeToEventBus(){
        EventBus.Subscribe(this); // static method
    }

    public void UnsubscribeEventBus(){
        EventBus.Unsubscribe(this); // static method
    }

To work around this problem, I make sure the ViewModel contains references to the bus only when it's active (visible on screen):


    public override void OnNavigatedTo(){
        ViewModel.SubscribeToEventBus();
        ViewModel.Initialize(); // optional
    }

    public override void OnNavigatedFrom(){
        ViewModel.UnsubscribeEventBus();
    }

When the ViewModel requires some data previously generated in another view and stored in the AppState, it raises an event through the bus saying it's loaded and then the AppState listener replies back through the bus sending the required data.

We haven't found a built-in way to reset the application clearing all its data, so we've implemented a reset method. The reset is an event triggered by a button in the bottom bar and sent through the bus. Our App.xaml.cs object, the singleton that starts the application, handles the event by clearing all data. The sequence is important to avoid memory leaks:


    public void Handle(AppReset appResetArgs){
        EventBus.Clear();
        AppState.Reset();
        InitializeServices();
        EventBus.Subscribe(this);
        ResetXamlCache(); // this must be the last thing to clear
        NavigateToRootPage();
    }

Only the App object subscribes to the AppReset event. Other global events like a connection failure are also handled by the App object.

A final reflection: why do we need an EventBus at all? We could just inject the same AppState instance into every ViewModel to share data among the different screens. Something like a NavigationService could have solved this for us. The following pseudocode illustrates the idea:


    public static void NavigateTo<T>(){
        var frame = (Frame) Window.Current.Content;
        var viewModel = Factory.ViewModelFor<T>();
        frame.Navigate(typeof (T), viewModel);
    }


    protected override void OnNavigatedTo(NavigationEventArgs e){
        ViewModel = e.Parameter as ViewModel;
    }

Then in the Factory we could inject the same AppState into all ViewModels and manage their life cycle. This level of indirection could have changed our architecture. A custom base page could implement "OnNavigatedTo" to avoid the duplication.

Thanks to my friend Juan M. Gomez for the idea of the navigation service.

Using C# Collections

There are many ways to work with collections. We follow the Microsoft Guidelines for Collections plus some ideas that Kent Beck explains in Implementation Patterns. I've created a repository with code examples so that anyone can play with them. All the unit tests in the project are green except for two, which are red on purpose to express my surprise when I wrote them, because they show behavior I didn't expect.

IEnumerable<T> is not dangerous in itself; the surprise may come when the variable is created via a LINQ expression (.Select, .Where, ...). Then the query execution may be deferred, and the behavior of the code can be totally unexpected. For this reason we try to avoid IEnumerable<T> and IQueryable<T> as the return type of functions.
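The deferred-execution surprise can be reproduced in Java as well, since streams are also lazily evaluated (a sketch with hypothetical names, not the C# LINQ case itself): the pipeline only runs when it is consumed, so a change to the source in between becomes visible in the result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class DeferredExecutionDemo {
    static List<Integer> doubledNumbers() {
        List<Integer> source = new ArrayList<>(List.of(1, 2, 3));
        // Declaring the query does NOT execute it yet.
        Stream<Integer> doubled = source.stream().map(n -> n * 2);
        // The source changes before the query is consumed...
        source.add(4);
        // ...and the lazily-bound pipeline sees the extra element
        // when it finally runs here.
        return doubled.collect(Collectors.toList());
    }
}
```

A caller reading only the first two lines would expect three results; the deferred execution yields four. Returning the materialized List instead of the lazy pipeline avoids the surprise, which is exactly the convention described above.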

Try not to expose collections as much as you can, wrapping them in objects you own that expose only the minimum domain features required. But if you have to expose the collection anyway, the convention in our teams is:

  • ReadOnlyCollection<T> is the type returned to avoid modifications to the number of elements in the collection. (Internally it's just a wrapper around IList<T>.)
  • IList<T> is returned when the collection may be modified and the order of the items in the collection is relevant.
  • ICollection<T> is returned when the collection may be modified and we don't care about the order.

Good code expresses the programmer's intent; this is why we choose from the three types above when we have to return collections.
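As a sketch of the first convention, here is the Java analog (the Order class is hypothetical; Collections.unmodifiableList plays the role of C#'s ReadOnlyCollection<T>, and both are just wrappers around the underlying list):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical Order class: the internal list stays private and
// callers receive an unmodifiable view, so the number of elements
// cannot be changed from outside.
class Order {
    private final List<String> lines = new ArrayList<>();

    void addLine(String line) {
        lines.add(line);
    }

    List<String> lines() {
        return Collections.unmodifiableList(lines);
    }
}
```

Any attempt to add or remove elements through the returned view fails at runtime, which makes the owner's intent explicit in the API.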

When it comes to parameters, it's OK to use IEnumerable<T> as a parameter because it's the most generic collection type. Although the caller could send an unresolved LINQ query as an argument hidden in the form of an IEnumerable<T>, the method does not have to be defensive about it (in most cases).

Other built-in collections like List<T> are intended to be used internally, only for implementation. So it's OK for a private method to return List<T>. The type Collection<T> actually implements IList<T> so I can't find a good reason right now to use Collection<T> instead of just List<T>.

Among other qualities, good tests should be easy to read and quick to understand. When a test requires complex data structures to be sent to the SUT, or to be part of a stubbed answer, it takes longer to read. Moreover, those structures tend to evolve along with the production code, forcing too many changes in the tests to keep them in sync. A level of indirection between the test and the production code improves readability and ease of maintenance. Builders come to the rescue. I often overload builder methods to support several data structures and then apply the conversions internally.

As an example, this is the setup of one of our tests before the final refactor:

    [Test] public async void
    raise_multiple_choices_event_on_equipment_selection () {
        SetupViewModelWithMockService();
        var selectedEquipment = new Equipment { Code = "PR" };
        var oneEquipment = new Equipment { Code = "PR1" };
        var otherEquipment = new Equipment { Code = "PR2" };
        var equipments = new List<Equipment>{ selectedEquipment, oneEquipment, otherEquipment };
        var multipleChoices = new List<CompulsoryCombinationChoice> {
            new CompulsoryCombinationChoice(new List<CompulsoryCombinationItem> {
                new CompulsoryCombinationItem(oneEquipment.Code, CatalogItemTypes.Equipment)
            }),
            new CompulsoryCombinationChoice(new List<CompulsoryCombinationItem> {
                new CompulsoryCombinationItem(otherEquipment.Code, CatalogItemTypes.Equipment)
            })
        };
        ACatalog.MockingDependenciesOf(vm)
            .WithEquipments(equipments)
            .ResolvingCompulsoryCombinationsAs(multipleChoices)
            .Configure();
        /* act ... */
        /* assert ... */

Imagine how ugly it was before the "ACatalog" builder. And this is the test after the builder was overloaded to supply a more comfortable API:

    [Test] public async void
    raise_multiple_choices_event_on_equipment_selection() {
        SetupViewModelWithMockService();
        var theEquipment = "PR";
        var equipment1 = "PR1";
        var equipment2 = "PR2";
        ACatalog.MockingDependenciesOf(vm)
            .WithEquipments(theEquipment, equipment1, equipment2)
            .ResolvingCompulsoryCombinationsAs(
                MultipleCompulsoryCombinationChoices(
                    equipment1, equipment2))
            .Configure();
        /* act ... */
        /* assert ... */

Polymorphic Equality

The default generation of equality members provided by Resharper lets you choose between three different implementations when overriding the "Equals" method in a class (Alt+Insert -> Equality Members):

The default choice is "Exactly the same type as 'this'", which IS NOT polymorphic. I mean, subtypes of that class will be considered not equal regardless of their values:

    public override bool Equals(object obj){
        if (ReferenceEquals(null, obj)){
            return false;
        }
        if (ReferenceEquals(this, obj)){
            return true;
        }
        if (obj.GetType() != this.GetType()){
            return false;
        }
        return Equals((Base) obj);
    }

The GetType() comparison is what breaks polymorphism: a subtype instance never has the same type as a base instance, so the method returns false.
I didn't pay attention to this detail today and for some reason assumed that the comparison was going to work for subtypes. Lesson learned: always pay attention to generated code!

This is the generated code to consider subtypes equal:

    public override bool Equals(object obj){
        if (ReferenceEquals(null, obj)){
            return false;
        }
        if (ReferenceEquals(this, obj)){
            return true;
        }
        var other = obj as Base;
        return other != null && Equals(other);
    }

And this is yet another implementation that works:

    public override bool Equals(object obj){
        if (ReferenceEquals(null, obj)){
            return false;
        }
        if (ReferenceEquals(this, obj)){
            return true;
        }
        if (!(obj is Base)){
            return false;
        }
        return Equals((Base) obj);
    }

The other lesson learned is that overriding the Equals method in a child class when the base class already overrides it increases the complexity too much. The code is hard to follow and surprising, and it couples the child class to the parent.
Avoid overriding the equality members in class hierarchies if you can.
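A short Java sketch of why this is surprising (Point and ColorPoint are hypothetical classes, not from the post): as soon as the child class tightens the Equals check, equality stops being symmetric.

```java
// Hypothetical hierarchy showing the danger of overriding equals
// in a subclass: equality stops being symmetric.
class Point {
    final int x;
    Point(int x) { this.x = x; }

    @Override public boolean equals(Object obj) {
        // Polymorphic check: any Point, including subtypes, may be equal.
        if (!(obj instanceof Point)) return false;
        return ((Point) obj).x == x;
    }
    @Override public int hashCode() { return x; }
}

class ColorPoint extends Point {
    final String color;
    ColorPoint(int x, String color) { super(x); this.color = color; }

    @Override public boolean equals(Object obj) {
        // Stricter check in the subtype: only other ColorPoints qualify.
        if (!(obj instanceof ColorPoint)) return false;
        ColorPoint other = (ColorPoint) obj;
        return other.x == x && other.color.equals(color);
    }
    @Override public int hashCode() { return 31 * x + color.hashCode(); }
}
```

A base instance considers the subtype instance equal, but not the other way round, so collections and comparisons behave differently depending on which object asks the question.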

Heuristics, bad smells and principles in the design of our Windows 8 app, which shares its core domain with a rich JavaScript client application. In Part I we exposed some of the difficulties we were facing when modeling. This is how we are working today:

  • Our repositories work with aggregates (entities), that is, rich domain models rather than plain anemic models. The repository performs database queries and then passes the raw results to the aggregate's constructor or to some adapter to build up the model. Thus the repository returns models to the service layer and receives models.
  • Currently an object is a DTO only if it's used to send and receive data through the network (via HTTP requests).
  • Bad smell: the fact that only certain fields of a DTO are used in one part of the application while the other fields are used elsewhere. That means we are trying to reuse concepts that are actually different. The solution is to split the object in two. Principle: a data object has to be consistent; all its fields must be in use.
  • The objects we bind to the GUI are not called DTOs anymore. But we are not happy calling them ViewModels either, because we still want to make the distinction between a data object bound to the GUI, and a kind of "Controller" that manages GUI logic and depends on collaborators like application services. So we are avoiding prefixes and suffixes for those objects and using their namespaces to distinguish them. For example if "Vehicle" happens to have the same properties in the model, the communications layer and the GUI, there will be three objects:
    • ProjectName.Model.Vehicle
    • ProjectName.Dto.Vehicle
    • ProjectName.ViewModel.Vehicle
  • The transformations between DTO and Model are performed by an adapter as described in Part I. But we do not create the three objects from the very beginning when there is just one object with one field or two. We may use the same object all over the place until it grows and starts acquiring some logic.
  • We usually talk about "bindable objects" to refer to those data objects bound to the GUI. Sometimes the transformation between domain model and bindable object is performed by some ViewModel/Controller; sometimes we delegate the task to an adapter within the ViewModel namespace.
  • Transfer the minimum amount of data for each operation, no more. A single service may work with many different DTOs because each operation requires a different data set. For instance, the OfferService (an application service) has public methods to create and update offers. The OfferDto used to contain all the data required for both operations:
         public void Create(OfferDto offer);
         public void Update(OfferDto offer);
    However only a subset of the data is required for each operation. Now we prefer to exchange a minimal amount of data:
         public void Create(NewOffer offer);
         public void UpdatePrice(OfferId offerId, decimal price);
         public void Update(UpdatedOffer offer);

    This approach helps us realize those objects are used just to transfer data, nothing more.

  • Keep data structures simple in the client/GUI side: if all I need of some entities are their identities, I don't need whole objects in that view, I can just hold a list of strings. Once the strings enter the hexagon, they'll be turned into domain models.
  • Keep the outer interface of the hexagon simple so as to make it simple to interact with: it's OK to receive primitives as arguments in our application services or actions.
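The last two points can be sketched like this in Java (VehicleId and RemoveVehicleAction are hypothetical names, and the idea maps directly to C#): the action at the edge of the hexagon accepts a primitive and immediately turns it into a domain object.

```java
// Hypothetical domain concept: a primitive string is promoted to a
// small object as soon as it crosses the hexagon's boundary.
class VehicleId {
    final String value;

    VehicleId(String value) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException("vehicle id required");
        }
        this.value = value;
    }
}

// Hypothetical action at the edge of the hexagon: the outer interface
// stays simple (a plain string), the inside works with domain objects.
class RemoveVehicleAction {
    VehicleId execute(String rawVehicleId) {
        VehicleId id = new VehicleId(rawVehicleId);
        // (domain logic working with VehicleId would go here)
        return id;
    }
}
```

Callers only need to know about strings, while validation and domain meaning live in one place inside the hexagon.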


Today has been a special day. Not because it's my 34th birthday, which I never really celebrate (although I do appreciate all the greetings and warm words from my friends and family!), but because I've had the pleasure of spending the day working with Carlos Peix for the first time. Carlos is a well-known software craftsman from Argentina. It's his first time in Spain and we are lucky that he chose to spend two days working with us as part of his journey. It's an exchange, something a journeyman does to learn and share. Among other things we have been mob programming today, refactoring code aiming to come up with a proper domain model. We have been identifying anemic models and data transfer objects, trying to turn some of them into domain models and trying hard to come up with a good object-oriented design. Things I like from Carlos' suggestions today:

  • Make a list of all the classes we plan to refactor (or classes that are going to be affected) so as to be able to estimate how much time we are going to invest in the task, and thus make a better decision on where to stop refactoring.
  • Rather than refactoring every object in depth, apply little changes to all the objects, visiting each of them to build a mental map of dependencies and structure. For instance, we decided we wanted to hide setters and create constructors to inject the attributes. Although a single class suffered from other smells, we performed only that change to it and moved on to the others until all of them were changed. To me it's like exploring the tree breadth-first.
  • Sequence diagrams help understand the complexity of the design. It's funny that this should happen exactly when I am reading Sandi Metz's book on OOP. Carlos hasn't read the book, but he recommends the same technique as Sandi Metz. The sequence diagram was an "aha" moment for my colleague Miguel.
  • We commented on the different strategies to expose an object to different bounded contexts, using interfaces and using wrappers. We didn't have much time to dig into it.
  • Repositories retrieve aggregates when possible, that is rich domain models rather than anemic models.

Looking forward to more synergy tomorrow, it's been an exciting day.

And you fellow journeyman, remember that we are open minded and value your visit to our office. If you want to work with us for a few days drop us a line, we look forward to it!

A method is an abstraction

The name of a method must have a higher level of abstraction than its body for the method to be worth it. The name should explain what the method does, but not how. If the name communicates exactly the same as its code, it could be a sign that the method does not pay off and should be inlined.

Whenever we extract a method from a bunch of lines of code, we are adding a little abstraction to our code. If the extraction happens too early, we are adding the wrong abstractions to the code. Those premature abstractions get in the way of a better design, often hiding duplication and deeper design problems. Lots of small methods do not necessarily mean a better design.

I extract a method only when my tests are green and the responsibility of those lines is really obvious, when I am done implementing the branch of the solution I am test-driving and it's easy to find a name for the new method. I don't create new methods to make a test pass. Methods appear later.

The code below is an example of premature method extraction. Right after the first test passed, the developer created this method:

    private boolean isValidArithmeticExpression(String input) {
        return input != null;
    }

The body of the method is perfectly understandable without the method. There was no duplication of the statement and there were just a few lines in the production code. Definitely too risky, too early, too complex.

An example of a wrong encapsulation:

    public String[] splitBySpace(String expression){
        return expression.split(" ");
    }

In this example the method's name says exactly what one can read in the body.

The same applies to helper methods in the tests. This is an example of a wrong encapsulation:

    private void conductCalculationAndAssertResult(String equation, int expectedResult) {
        int result = calculate(equation);
        assertEquals(expectedResult, result);
    }

Do you want to know something that works really, really well for me? I spend the first 10 to 15 minutes of the day reading the code I (or we) wrote the day before. I always find something to change. It's often a rename or a method extraction, something I can apply quickly and safely, most of the time using the automatic refactoring tool at hand. I review both the production code and the tests, to see if the tests really document the behaviour of the system. The effort is insignificant and the long-term benefit is huge. This is the low-hanging fruit, a nice way to reduce accidental complication. This exercise lets me learn about myself, my way of thinking and designing, providing me with immediate feedback to improve the code. It's like travelling to the past to prevent problems before they really happen in the future. It also helps me focus on the code. This exercise is even better when done in pairs: let your new pair give you feedback about what your previous pair and you did yesterday.
Apart from the source code, I also refactor the documentation. Documentation?! Yes, I do write documentation and comment my code; I will write another post on this. However, I never comment on what the code does, but rather the "why" and "why not": why the code is there, or why I discarded some other implementation.

Refactoring is somehow similar to dish-washing. When done often, it takes very little time and effort.
When you haven't done it in two days you don't see the right moment to start off, you look at the pile and feel lazy. You blame others for using so many cups... do they really have to drink so much tea?! - even when it was yourself!
If you haven't done it in a week, you just consider buying new dishes and throwing away the entire pile.

Sometimes I decide to spend more than 15 minutes; I'd say that at least once a week I spend about two hours purely refactoring. I call this session "refactoring to infinity". I start out with the easy wins, but soon I search for duplication, coupling and other design problems. I ask myself whether the code is really modelling the business domain properly, and I keep on refactoring until I can't find anything more to change. It doesn't mean the code is perfect; it just means I don't know how to make it better given my current understanding of the domain and my skills. This is why I want to come back later, because we tend to learn the specific details of the domain as we go.

Refactoring always pays off, except when you introduce unnecessary complexity. Be careful: do not employ the Open/Closed Principle until the new requirements turn out to be easier to implement upon code previously opened for extension. That is, consider YAGNI first.